
  • Ah, you’ve never worked somewhere where people regularly rebase and force-push to master. Lucky :)

    I have no issue with rebasing on a local branch that no other repository knows about yet. I think that’s great. As soon as the code leaves local, though, things move to at least “exercise caution” territory. If the branch is actively shared (like master, or a release branch if that’s a thing, or a branch where people are collaborating), IMO rebasing is more of a footgun than it’s worth.

    You can mitigate that with good processes and well-informed engineers, but that’s kinda true of all sorts of dubious ideas.


  • You can get in some pretty serious messes, though. Any workflow that involves force-pushing or rebasing has the potential for data loss… Either in a literally destructive way, or in a “Seriously my keys must be somewhere but I have no idea where” kind of way.

    When most people talk about rebase (for example) being reversible, what they’re usually saying is “you can always reverse the operation in the reflog.” Well yes, but the reflog is local, so if Alice messes something up with her rebase-force-push and realizes she destroyed some of Bob’s changes, Alice can’t recover Bob’s changes from her machine-- She needs to collaborate with Bob to recover them.
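
    Roughly, the recovery then has to happen on Bob’s side. A sketch (branch name hypothetical, “master@{1}” standing in for whichever reflog entry still has the lost commits, and assuming Bob’s clone actually fetched them before the force-push):

    ```
    git reflog show master        # Bob’s local record of where master has pointed
    git branch rescue master@{1}  # pin a new branch to the old tip
    git push origin rescue        # share the recovered commits back out
    ```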


  • I gotta say, I was with you for most of this thread, but looking through old commits is definitely something that I do on a regular basis… Like not even just because of problems, but because that’s part of how I figure out what’s going on.

    The whole reason I keep my git history clean and my commit messages thoughtful is so that future-me (or future-someone-else) will have an easier time walking through it later, because that happens all the time.

    I’ll still almost always choose merge instead of rebase, but not because I don’t care about the git history-- quite the opposite, it’s really important to me in a very practical way.


  • Yeah, tbh the “no timezones” approach comes with its own basket of problems that isn’t necessarily better than the “with timezones” basket. The system needed to strike a balance between being useful locally and being intelligible across regions, which was especially challenging before ubiquitous telecommunications.

    Imagine having to rethink the social norms around time every time you travel or meet someone from far away. They say “Oh I work a 9-to-5 office job” and then you need to figure out where they live to understand what that means. Or a doctor writes a book where they recommend that you get to bed by 2:00PM every night, and then you need to figure out how to translate that to a time that makes sense for you.

    We’d invent and use informal timezones anyway, and then we’d be writing JavaScript functions to translate “real” times to “colloquial” times, and that’s pretty close to just storing datetimes in UTC and translating them to a relevant timezone ad hoc, which is what we’re already doing.
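
    A tiny sketch of that “store UTC, translate ad hoc” shape, using GNU date here rather than actual JavaScript (the instant and zones are made up; output format varies by locale):

    ```
    utc="2024-01-15T17:00:00Z"             # one instant, stored once, in UTC

    TZ="America/New_York" date -d "$utc"   # Mon Jan 15 12:00:00 EST 2024
    TZ="Asia/Tokyo"       date -d "$utc"   # Tue Jan 16 02:00:00 JST 2024
    ```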

    That’s what my rational programmer brain says. My emotional programmer brain is exactly this meme.


  • Design, in your head, a cabinet that can be built within 30 minutes with no research or preparation, and then build it. We will be watching over your shoulder, so please coherently describe your process as you figure out what it is.

    You wouldn’t have learned to do this at any previous workshop, so hopefully you’ve specifically practiced making this kind of shitty half-hour cabinet in preparation.


  • That sounds like a good plan in many situations… But how do you handle candidates who say something like “look, there’s heaps of code that I’m proud of and would love to walk you through, but it’s all work I did for past companies, and I don’t have access to it (or the legal right to show it to you)”?

    You might just say “well, the ideal candidate has meaningful projects outside of work” and eliminate the others… But it seems like you’d lose out on many otherwise great candidates that way.


  • Pretty questionable take IMO:

    The truth is, there are typically a bunch of good candidates that apply for a job. There are also not-so-great candidates. As long as a company hires one of the good ones, they don’t really care if they lose all the rest of the good ones. They just need to make sure they don’t hire one of the not-so-great ones.

    That’s actually a pretty bad thing. Like you could say the same thing about rejecting applicants who didn’t go to a certain set of schools, or submit a non-PDF resume, or who claim to have experience with a library/language that you don’t like (I had a colleague who said that he’d reject anyone with significant PHP experience because they probably learned “bad habits”), or any number of arbitrary filters.

    If “good at leetcode” was a decent proxy for “knows how to build and scale accessible web UIs” or whatever, then okay great… But it’s not, as the author admits in the conclusion:

    Coding interviews are far from perfect. They’re a terrible simulation of actual working conditions. They favor individuals who have time to do the prep work (e.g., grind leetcode). They’re subject to myriad biases of the interviewer. But there’s a reason companies still use them: they’re effective in minimizing hiring risk for the company. And to them, that’s the ball game.

    So it’s unclear to me what they mean by “effective.” Are they good at evaluating how good a candidate will be at the job? No. Are they good at identifying talent that hiring teams might otherwise overlook? No. They are good at “minimizing hiring risk” by setting up another arbitrary hoop to jump through.

    Let’s just call a spade a spade and admit that our hiring processes are so bad at evaluating talent that we settle for making candidates “audition” to prove that they can code at all, and then decide based on whatever entrenched biases we’ve decided constitute “culture fit.” Then the title could be “Coding interviews are the most effective tool we have, and that’s kind of a disaster.”

    Thank you for reading my rant. I am available for podcasts and motivational speaking appearances.


  • For me, it’s primarily #5: I want to know which apps are accessing the network and when, and have control over what I allow and what I don’t. I’ve caught daemons from software I hadn’t even noticed was running, and all sorts of random telemetry activity, that way, and it’s helped me sort-of sandbox software that IMO does not need access to the network.

    Not much to say about the other reasons, other than #2 makes more sense in the context of working with other people: If your policy is “this is meant to be an HTTPS-only machine,” then you might want to enforce that at the firewall level to prevent some careless developer from serving the app on port 80 (HTTP), or exposing the database port while they’re throwing spaghetti at the wall wrestling with some bug. That careless developer could be future-you, of course. And once you have a policy you like, it’s easier to copy a firewall config around to multiple machines (which may be running different apps) than to get it consistently right on a server-by-server basis.
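
    As a sketch, that “HTTPS-only machine” policy might look something like this with ufw (tool choice and the ssh port are just examples; run as root):

    ```
    ufw default deny incoming   # nothing gets in unless explicitly allowed
    ufw allow 22/tcp            # keep ssh open for management
    ufw allow 443/tcp           # https: the only app traffic we mean to serve
    ufw enable                  # that stray dev server on :80 is now unreachable
    ```

    Copy that same config to every box and port 80 stays closed no matter what someone accidentally starts.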

    So… Necessary? Not for any reason I can think of. But useful, especially as systems and teams grow.