• 0 Posts
  • 19 Comments
Joined 1 year ago
Cake day: July 21st, 2023

  • I’m a manager at a FAANG and have been involved in tech and scientific research for commercial, governmental, and military applications for about 35 years now, and have been through a lot of different careers in the course of things.

    First - and I really don’t want to come off like a dick here - you’re two years in. Some people take off, and others stay at the same level for a decade or more. I am the absolute last person to argue that we live in a meritocracy - it’s a combination of the luck of landing with the right group on the right projects - but there’s also something to be said about tenacity in making yourself heard or moving on. You can’t know a whole lot with two years of experience. When I hire someone, I expect to hold their hand for six months and gradually turn more responsibility over as they develop both their technical and personal/project skills.

    That said, if you really hate it, it’s probably time to move on. If you’re looking to move into a PM style role, make sure that you have an idea of what that all involves, and make sure you know the career path - even if the current offer pays more, PMs in my experience cap out at a lower level for compensation than engineers. Getting a $10k bump might seem like you’re moving up, but a) it doesn’t sound like you’re comparing it to other engineering offers and b) we’re in a down market and I’d be hesitant to advise anyone to make a jump right now if their current position is secure. Historically speaking, I’m expecting demand to start to climb back to high levels in the next 1-2 years.

    Honestly, it just sounds like your job sucks. I have regularly had students, interns, and mentees in my career because that’s important to me. One thing I regularly tell people is that if there’s something that they choose to read about rather than watching Netflix on a Saturday, that’s something they should be considering doing for a living. Obviously that doesn’t cover Harry Potter, but if you’re reading about ants or neural networks or Bayesian models or software design patterns, that’s a pretty good hint as to where you should be steering. If you’d rather work on space systems, or weapons, or games, or robots, or LLMs, or whatever - you can slide over with side and hobby projects. If you’re too depressed to even do that, take the other job. I’d rather hire a person who quit their job to drive for Uber while they worked on their own AI project than someone who was a full stack engineer at a startup that went under.

    Anyway, that’s my advice. Let me know if I can clarify anything.



  • The oldest CRPG I ever played was called “advent,” because the VAX computers could only use six characters for file names, so the people who ported it couldn’t use the actual name “adventure.” It was basically the same as the game Infocom shipped as Zork.

    Apparently the original implementation was on the PDP-10 in 1976. There might have been a couple of other games that predated it by a year or two, but Adventure was the big one in my opinion, because it led (eventually) to the creation of the Infocom text-based game engine and a whole line of games ranging from The Hitchhiker’s Guide to the Galaxy to Leather Goddesses of Phobos.


  • The main part you need to pick up is being able to establish the mental hooks around the ideas that are central to programming. Do you know how you can watch a choreography session and see the dancers just pick up the moves as they’re described/demonstrated? That’s because they’ve learned the language of dance. It’s an entire (physical) vocabulary. It’s the semantics of dance.

    What you need to do is do that with programming. There are a number of getting-started books and videos, but you’ll want to use them to learn the fundamentals not just of a language but of programming itself.

    If you’re talking about using other people’s functions (like in an API), then the function name should give you a clue about what it does. The cool thing about functions is that you don’t have to know how they’re doing their thing, just what they’re doing. If you have the source code, you’ll find you remember more if you use comments to make notes for yourself (it engages more of your brain than just reading).

    If your problem is writing your own code using functions, start out more slowly. Write a program that’s just a giant block of linear code. Once that’s working, look at how to break it down into functions. If you have a block of code that sorts a list, for example, and you’ve had to copy and paste it into three different places, that’s a sign it should be a function.
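    To make that concrete, here’s a minimal sketch of the refactor (the list names and the sorting task are made up for illustration):

    ```python
    # Before the refactor, imagine this sort-a-copy logic pasted verbatim
    # into three different places in one long, linear script.
    # After: the repeated block becomes a single named function.

    def sorted_copy(items):
        """Return a sorted copy of a list, leaving the original untouched."""
        result = list(items)  # copy first so the caller's list isn't modified
        result.sort()
        return result

    scores = [88, 42, 95]
    names = ["carol", "alice", "bob"]

    print(sorted_copy(scores))  # [42, 88, 95]
    print(sorted_copy(names))   # ['alice', 'bob', 'carol']
    ```

    The same goes for any block you catch yourself duplicating: once it has a name, the comment you would have written above each copy becomes the function’s docstring.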

    Use comments liberally as you go. Before you write a block, write a comment saying what it’s supposed to do. You’ll start to see some generalities, which is you learning programming, not just a language.


  • One of my first computer jobs was working in a student computer lab at my undergraduate university. This was back in the mid 90s-ish.

    We had three types of computers - Windows machines running 3.1 or whatever was current then, Macs that would all do a Wild Eep together when they rebooted en masse, and Sun X Window dumb terminals that were basically just (obviously) Unix machines for all intents and purposes. This was back when there were basically like five websites total, and people still hadn’t heard of Mosaic.

    So everyone wanted the Windows and Mac boxes, and only took the xterms when there was nothing else open. I was the primary support person for them, since none of the other staff wanted to learn Unix and I was the only CS major.

    The X boxes suffered from two main learning hurdles. One was that backspace was incorrectly mapped to some escape key sequence, and the other was that pine (I think) would drop you into emacs as its mail editor as soon as you hit that key. 90% of my time was spent telling people how to exit emacs. It was that, putting more paper into the printers, and teaching myself more programming than I was learning in classes.



  • There should be a full write-up from a lawyer - or, better yet, from an organization like the EFF. Because lemmy.world is such a prominent instance, it would probably garner some attention if the people who run it were to approach them.

    People would still have to decide what their own risk tolerances are. Some might think that even if safe harbor applies, getting swatted or doxxed just isn’t worth the risk.

    Others might look at it, weigh their rights under the current laws, and decide it’s important to be part of the project. A solid communication on the specific application of S230 to a host of a federated service would go a long way.

    I worked as a sysadmin for a while in college in the mid-90s, at a time when ISPs were trying to get classified as common carriers. Common carrier status shields phone companies from liability if people use their service to commit crimes. The key provision of common carrier status was that the company exercised no control whatsoever over what went across its wires.

    In order to make the same argument, the systems I helped manage had a policy of no policing. You could remove a newsgroup from usenet, but you couldn’t do any other kind of content-oriented filtering. The argument went that as soon as you start moderating, you’re responsible for moderating it all. True or not, that’s the argument that was made and the policy adopted on multiple university networks and private ISPs. And to be clear, we’re not talking about companies like Facebook or Reddit, which have full control over their content. We’re talking about things like the web in general, such as it was, and usenet.

    Usenet is probably the best example, and I knew some BBS operators who hosted usenet content. The only BBS owners that got arrested (as far as I know) were arrested for being the primary host of illegal material.

    S230 or otherwise, someone should try to get a pro bono write-up from a lawyer (or lawyers) who knows the subject.

    Edit: Looks like the EFF already did a write-up. With the number of concerned people posting on this topic, this link should be in every official reply and as a post in the topic.


  • Hahaha - okay, that makes a lot more sense. I had so completely lost track of the community and have just been getting back into it lately because of the fediverse community, and so when I heard “Linus Tech Tips,” I completely assumed it was Linus Torvalds. I had even talked to him on usenet (iirc) a couple of times, and couldn’t square the reports.

    Thanks for the clarification.


  • It’s very weird for me to have been out of the loop on the linux/foss news fronts for so many years and then seeing this cropping up.

    My first Linux install was Slackware back in (I think) 1994, when I downloaded 20-something high-density 3.5” Slackware floppies over my university’s dial-up, dodging dropped connections and busy signals along the way. Linus at the time was seen as a kind of Woz sort of persona. He was all about the tech and very invested in the FOSS movement. There was the GNU/Linux drama, but that just looks quaint in hindsight.

    There was an episode of (I think) The Other Two in which someone is dating tech executives. The billionaires are all Elon Musk-level weird, but she ends up meeting someone who is only a millionaire, and he’s pretty normal. In the middle of their date, he gets a phone call saying that his company is being acquired and he’s now a billionaire, and he then shaves his head and starts ranting while hanging from a balcony railing.

    That’s what this feels like. Money can do really weird things to people.



  • When I’d set systems up, creating a password for the automatically created root account was one of the first steps after the basics. You could then grant other accounts root privileges, or set up sudo to allow your personal account access, though even sudo just acts as UID 0. If your setup didn’t do that, or if you set your own account up as UID 0, then you can always boot off of another source and mount the internal HD, right?



  • I understand how that might be a justification, but I’m really not sure about it. I can’t imagine using LinkedIn so many times per hour that caching would make any improvement to my experience, and I suspect that static-asset caching in the browser would take care of a chunk of that anyway. Even if there were some headhunters using the phone version 8+ hours per day, the time wasted and the number of customers annoyed by the app harassment would still add up to a net negative. I’ve done these kinds of KPIs.

    It’s about:

    1. Control over data mining at a much higher level than can be done from a browser including location tracking
    2. Monetization of the data via “third party partners”
    3. Making sure the app team is justified as an ongoing cost, because that’s just the way corporate organizations work


  • > For some floating-point heavy code, it could potentially be major, but not disastrous.

    That’s a really interesting point (no pun intended).

    I had run into a few situations where a particular computer architecture (e.g., the Pentium, for a time) had issues with floating-point errors, and I remember thinking about them in largely the same way. It wasn’t until later that I started working in complexity theory, by which time I had completely forgotten about those issues.

    One of the earliest discoveries in what would eventually become chaos and complexity theory was the butterfly effect. Edward Lorenz was doing weather modeling back in the 60s. The calculations were complex enough that the model had to be run over several sessions, starting and stopping with partial results at each stage. Internally, the computer model carried six significant figures for its floating-point data, but the printouts showed only three. When Lorenz re-entered the parameters to continue a run, he used the three-sig-fig values, and he found that the trivial difference led to wildly different results. This is due to the nature of systems that use present states to determine next states and that also have feedback loops and nonlinearities. Like most complexity folks, I’ve learned and told that story many times over the years.

    I’ve never wondered until just now whether anyone working on those kinds of models ran into problems with floating point bugs. I can imagine problematic scenarios, but I don’t know if it ever actually happened or if it would have been detected. That would make for an interesting study.
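    As a toy illustration of the Lorenz story, here’s a sketch that integrates the Lorenz equations twice: once from a starting point carried at full precision, and once from the same point rounded to three significant figures. The parameter values, initial state, and crude forward-Euler step are standard textbook choices for illustration, not anything from Lorenz’s actual model:

    ```python
    def lorenz_step(state, dt=0.005, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
        """One forward-Euler step of the Lorenz equations."""
        x, y, z = state
        dx = sigma * (y - x)
        dy = x * (rho - z) - y
        dz = x * y - beta * z
        return (x + dt * dx, y + dt * dy, z + dt * dz)

    def run(state, steps):
        for _ in range(steps):
            state = lorenz_step(state)
        return state

    full = (1.000123, 1.000123, 1.000123)  # the "internal" six-sig-fig state
    reentered = (1.00, 1.00, 1.00)         # the same state re-entered at three

    a = run(full, 10_000)      # ~50 model time units each
    b = run(reentered, 10_000)
    print(a)
    print(b)  # by now the two runs bear no resemblance to each other
    ```

    The divergence rate is governed by the system’s largest Lyapunov exponent; give it enough integration time and even round-off-level differences blow up to the scale of the whole attractor.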


  • I’ve been out of the builder world for long enough that I didn’t follow the 2018 bug. I’m more from the F00F generation in any case. I also took a VLSI course somewhere in the mid-90s that convinced me to do anything other than design chips. I seem to remember another firmware-based security bug from that era, something I want to say was browser-related, but it wasn’t the CPU, iirc.

    In any case, I get the point you and others are making about evaluating the risks of a security flaw before taking steps that might hurt performance or worrying about it too much.


  • From the description, it sounds like you upload a picture, then show a face to a video camera. It’s not like they’re going through Face ID, which has anti-spoofing hardware and software. If they’re supporting normal webcams, they can’t check for things like 3D markers.

    Based on applications that have rolled out for use cases like police identifying suspects, I would hazard a guess that

    1. It’s not going to work as well as they imply
    2. It’s going to perform comically badly in a multi-ethnic real world scenario with unfortunate headlines following
    3. It will be spoofable.

    I’m betting this will turn out to be a massive waste of resources, but that has never stopped something from being adopted. Several municipalities even had to ban police use of facial recognition, because departments liked being able to identify and catch suspects even when it was likely to be the wrong person. In one case I read about, researchers had to demonstrate that the software the PD was using identified several prominent local politicians as robbery and murder suspects.



  • I’m curious - does this kind of report make people less likely to go with an AMD CPU? The last time I was thinking about building a new PC, AMD had just definitively taken the lead in speed per dollar, and I would have gone with one of the higher-end chips. I’m not sure whether this would have affected my decision, but I’d probably be concerned with performance degradation as well as the security issue. I’d have waited for the patch to buy a system with updated firmware, but I’d still want to see what the impact was, as well as learn more about the exploit and whether there were additional issues.

    I ended up just getting a steam deck and all of my other computers are macs, so it’s hard to put myself back into the builder’s/buyer’s headspace.