• 0 Posts
  • 85 Comments
Joined 1 year ago
Cake day: June 11th, 2023

  • Depends on the vendor for the specifics. In general, they don’t protect against an attacker who has gained persistent privileged access to the machine, only against theft.
    Since the key either can’t leave the TPM or is useless without it, the attacker needs to remain undetected on the server for as long as they want to use it, which is difficult for anyone less sophisticated than an advanced persistent threat. (Some TPMs hold one internal key that can never be returned; they’ll generate a new key on request and hand it back encrypted with that internal key, so you get the protection without needing to worry about storage on the chip. There’s a sketch of that pattern at the end of this comment.)

    The Apple system, to its credit, also validates the user and application before the keys can be used. That’s generally good for security, but it means that if you want to share a key between users, you probably won’t be using the secure enclave.

    Most of the trust checks end up being the TPM proving itself to the remote service that’s doing the checking. For example, when you use your phone’s biometrics to log into a website, part of that handshake is the TPM on the phone proving that it was made by a known manufacturer, to a spec the standards body has validated as secure in the way it claims.
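
    To make that wrapped-key pattern concrete, here’s a rough, software-only sketch using the third-party cryptography package. This is not a real TPM API; the class and names are made up, and a real chip keeps its internal key in hardware rather than in process memory:

    ```python
    import os
    from cryptography.hazmat.primitives.keywrap import aes_key_wrap, aes_key_unwrap
    from cryptography.hazmat.primitives.ciphers.aead import AESGCM

    class FakeEnclave:
        """Toy stand-in for a TPM/secure enclave with one non-exportable key."""

        def __init__(self):
            self._internal_key = os.urandom(32)  # never leaves the "chip"

        def create_key(self) -> bytes:
            """Generate a key, but only ever return it in wrapped (encrypted) form."""
            new_key = os.urandom(32)
            return aes_key_wrap(self._internal_key, new_key)

        def encrypt(self, wrapped_key: bytes, data: bytes) -> bytes:
            """Unwrapping happens 'inside the chip'; the plaintext key is never exposed."""
            key = aes_key_unwrap(self._internal_key, wrapped_key)
            nonce = os.urandom(12)
            return nonce + AESGCM(key).encrypt(nonce, data, None)

    enclave = FakeEnclave()
    blob = enclave.create_key()             # safe to store anywhere; useless without the chip
    secret = enclave.encrypt(blob, b"data")
    ```

    An attacker who copies the wrapped blob off the server has nothing usable on its own; they’d have to keep asking the chip to do the work, which is the “remain undetected as long as they want to use it” part.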


  • Package signing is used to make sure you only get packages from sources you trust.
    Every Linux distro does it, and it’s why you get asked to accept a signing key when you add a new package source.

    For a long time, the keys used for signing were just files on disk, and you protected them by protecting the server they were on, but they could technically be stolen and used to sign malicious packages.

    Some advances in chip design and cost reductions later, we now have what is often called a “secure enclave”, a “trusted platform module”, or more generally a provider for a non-exportable key.
    It’s a little chip that holds or manages a cryptographic key in a way that makes it impossible (or exceptionally difficult) to get the signing key off the chip. That makes it nearly impossible to steal the key without physically stealing the server, which is much easier to prevent (put it in a room with doors) and impossible to do without detection, so a forged package becomes vastly less likely. (There’s a small illustration of the signing step at the end of this comment.)

    There are services that provide the infrastructure needed to do this, but they cost money, and it takes time and money to build it into your system in a way that’s reliable and doesn’t lock you to a vendor if you ever need to switch for whatever reason.

    So I believe this is Valve picking up the bill to move Arch’s package infrastructure security up to the top tier.
    It was fine before, but that upgrade is expensive for a volunteer- and donation-based project and cheap for a high-profile company that might legitimately be worried about their use of Arch on physical hardware increasing attacker interest.
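
    For illustration, here’s roughly what the signing and verification step buys you, as a minimal sketch with the third-party cryptography package. It’s not what Arch actually runs (pacman uses GPG keys managed with pacman-key); the point is just that clients refuse anything that wasn’t signed by a key they chose to trust:

    ```python
    from cryptography.hazmat.primitives.asymmetric.ed25519 import Ed25519PrivateKey
    from cryptography.exceptions import InvalidSignature

    # On the build infrastructure. On a plain server this private key is "a file
    # on disk"; with a TPM/HSM the sign() call would happen inside the chip instead.
    signing_key = Ed25519PrivateKey.generate()
    public_key = signing_key.public_key()      # what users accept when adding the repo

    package = b"contents of some package archive"
    signature = signing_key.sign(package)

    # On every client, before installing:
    try:
        public_key.verify(signature, package)
    except InvalidSignature:
        raise SystemExit("refusing to install: not signed by a trusted key")
    ```

    Steal that private key and you can sign whatever you want, which is exactly the problem a non-exportable hardware key solves.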







  • In the sense that they have a manager? Sure. In the sense that there’s one individual dictating the design of the software? I’ve never even been on a team with that dynamic, to say nothing of the entire codebase.

    Modern software teams tend to eschew design by decree.

    What’s the dynamic that you’re thinking is typically what teams use?


  • I’m not sure I’d construe a manual you can find, or a variety of guides, as a negative. :) Most days my usage of git consists of “pull, commit, push, merge” in different orders. You might be overestimating how much effort goes into managing the tool.

    Most of my professional experience has been working on projects made up of multiple teams of 4-6 developers each, anywhere from 5 to 40 teams in total. I’m not entirely sure what you mean about git not mirroring the development patterns of most “real life” projects.
    “Real” projects are frequently developed by groups of people working on the same goal adjacent to other groups working on related but distinct goals.


  • We very clearly work in different professional environments. :)

    In no particular order: administering a git server is similarly trivial. A repository is a folder (easy to back up, easy to repair, easy to host), and setting up a new server is usually a matter of ssh key management; there’s a rough sketch at the end of this comment. You don’t even need to install sqlite or anything beyond the git package. Or, because the tool has wide support, you can install any of a wide selection of tools that manage it for you, or use a free hosting service, or a paid one.

    I’m startled that you would say you can’t think of anyone who would care. My entire professional experience has been that developer stories about bad jobs often include details about using old or esoteric VCS systems, usually met with “ew” or “wtf” comments. It sets the flavor of the story.
    Personally, in a business environment, I would take using anything except git for the org as a red flag. It’s a sign that someone in leadership at the company values doing things unrelated to the core mission “their way” above doing it the easy or “paved path” way.

    The standard tool is indeed not constant. Before git existed, using CVS would have been the better choice, as well as for years afterwards until it had clearly been usurped. Most projects aren’t Linux when it made the switch to git.

    You joke that no one really “knows” git, but… this is literally the first time I’ve ever seen a fossil command. I just searched for “fossil manual” and got analog watches. It’s not even available in any of my systems’ package managers.
    Developer familiarity is a big advantage that I think you’re downplaying in comparison to “there are metadata files in .git”, which I don’t know has ever been relevant to me in any significant way.
    (Also, I thought the different systems all work basically the same? 😛)

    I’d handily agree people should be using the best tool for the job. Familiarity and ease of use are significant factors in what makes a tool better.
    Ability to integrate with other tools is also a major factor. Setting up continuous integration or code review tools with git is trivial with any number of different systems.

    What are any of the tools you’re using doing better than git? The biggest selling point you’ve shared for fossil is that it’s functionally similar to git, and that it has better merging. I can’t find anything related to merge conflicts outside of years old forum posts, and barely anything relating to merges at all, so I’m not entirely certain what makes it “better”.

    If its biggest advantage is that it’s similar enough to git that you can pick it up fast, why wouldn’t I just use git?
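
    As for the server-administration point from earlier, here’s roughly all it takes to stand up a shared repository over ssh. This is a sketch: the hostname and path are made up, and it assumes you already have ssh access to the box:

    ```python
    import subprocess

    def run(*cmd):
        subprocess.run(cmd, check=True)

    # On the server: a bare repository is just a directory.
    run("ssh", "git@myserver", "git init --bare /srv/repos/project.git")

    # On each developer's machine: clone it. Access control is whatever ssh allows,
    # i.e. plain ssh key management.
    run("git", "clone", "git@myserver:/srv/repos/project.git")
    ```

    Everything else (backups, hosting UIs, code review, CI) layers on top of that folder, using any of the tools that already speak git.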


  • Like I said, there are always factors.

    For a company starting from scratch though, the usage base factor becomes vastly more significant.
    Using a tool that radically limits your integration capabilities is a poor choice, to say nothing of most likely needing to onboard every new employee to an entirely new VCS.

    I don’t know that I’ve encountered anyone in recent memory using svn who wasn’t interested in moving, so “developer experience” would be a reason to move.




  • File1, file2, file_3.new, etc. would be bizarrely stupid. A home-rolled solution involving rsync, tar, gzip, cron jobs, or inotify would also be bizarrely stupid.

    https://en.wikipedia.org/wiki/List_of_version-control_software As a more serious answer: anything on that list that’s marked as anything other than “active”. So DCVS, Visual SourceSafe, or BitKeeper, for example. Anything that’s not getting bug fixes or maintenance.

    Anything that doesn’t have significant enough usage to give confidence that bugs or glitches are being caught by common usage would be risky, since you don’t want to be the person to find that edge case.

    There are things other than git that aren’t wrong choices, but I see little compelling reason not to use the most ubiquitous tool.


  • There’s a difference between “can’t code” and “can’t work”.

    A lot of people use git for version control: super good idea, basically anything else is at best unorthodox, at worst bizarrely stupid.
    A lot of people also use GitHub for repository hosting, continuous integration, code review, deployment, packaging, etc. This is more of an opinion thing than a standard-practice thing, and there are plenty of other ways to get the same tools, either all in one package or from a variety of different ones, self-hosted, in the cloud, or some hybrid in between.

    If GitHub goes down, you can make code changes and everything to your heart’s content. But you might not be able to run your full integration testing pipeline, get a code review, or package your software.

    If your local build process pulls packages from GitHub or refreshes a remote repository automatically, an outage can also thoroughly mess that up, but that has nothing to do with git. You could use “ctrl-c/v” backups and still have a build process that tips over when GitHub goes down.
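
    As a sketch of that distinction (assuming you’re inside a checkout whose “origin” remote points at GitHub):

    ```python
    import subprocess

    def git(*args):
        return subprocess.run(["git", *args], capture_output=True, text=True)

    # Everything local keeps working during an outage: history, branches, and
    # commits all live in the .git directory on your machine.
    git("add", "-A")
    git("commit", "-m", "keep working during the outage")

    # Only the steps that talk to the hosting service are affected.
    push = git("push", "origin", "main")
    if push.returncode != 0:
        print("push (and the CI/review that hangs off it) waits until the host is back:")
        print(push.stderr.strip())
    ```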


  • https://daniel.haxx.se/blog/2020/12/17/curl-supports-nasa/

    https://daniel.haxx.se/blog/2023/02/07/closing-the-nasa-loop/

    Their process for validating software doesn’t have a box for “open source”, and basically assumes it’s either purchased or contracted. So someone in risk assessment just gets a list of software libraries and goes down it, checking that they have the required forms.

    As the referenced talk mentions, the people using the software understand that all the testing and everything is entirely on them, and that sending these messages is bothersome and unfair, and they’re working on it. Unfortunately, NASA is also a massive government bureaucracy and so process changes are slow, at best.
    The TLAs don’t generally help NASA, and getting them involved would unfortunately only result in more messages being sent.

    As for contributions, I think that turns into an even worse can of worms, since generally software developed by or for the US government isn’t just open source, but public domain. I think you’d end up with a big mess of licensing horror if you tried to get money or official relationships involved. It’s why sqlite is public domain, since it was developed at the behest of the US government.

    Mostly just context for what you said. NASA isn’t being arrogant, they’re being gigantic. One group does their due diligence in-house while another branch goes down a checklist, sees they don’t have a form, and pops off an email, embarrassing the hell out of the first group.

    The time limit thing is weird, but it’s a common practice in bureaucracies, public or private. You stick a timeline on the request to convey your level of urgency and to establish some manner of timeline for the other person to work with. Read the line again, but extremely literally: “we have a time frame of 5 days for a response” means “our audit timeline guessed that it would take a business week for you to reply, so if you take longer we’re behind schedule”. The threatening version would be “your response is required on or before five business days from the date of this message”.
    The presumption is that the person on the other end is also working through a task queue that they don’t have much personal investment in, and is generally good natured, so you’re telling them “I don’t expect you to jump on this immediately, but wherever you can find a moment to reply this week would keep anyone from bothering me, and me from needing to send another email or trying to find a phone number”



  • Paul Eggert is the primary maintainer of tzdb, and has been for the past 20 years.
    Tzdb is the database that holds all of the information about timezones, timezone changes, leap whatevers, and everything else. It’s present on just about every computer on the planet and plays an important role in making sure all of the things do time correctly (tiny example at the end of this comment).

    If he gets hit by a bus, ICANN is responsible for finding someone else to maintain the list.

    Sqlite is the most widely used database engine, and is primarily developed by a small handful of people.

    ImageMagick is probably the most iconic example. Primarily developed by John Cristy since 1987, it’s used in a hilarious number of places for basic image operations. When a security bug was found in it a bit ago, basically every server needed to be patched because they all do something with images.
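
    Back to tzdb for a second: here’s a tiny example of that database quietly doing its job, via Python’s standard zoneinfo module (which on most platforms reads the system’s copy of tzdb):

    ```python
    from datetime import datetime
    from zoneinfo import ZoneInfo

    ny = ZoneInfo("America/New_York")
    before = datetime(2024, 3, 10, 1, 30, tzinfo=ny)  # EST, UTC-5
    after = datetime(2024, 3, 10, 3, 30, tzinfo=ny)   # EDT, UTC-4; clocks jumped at 2:00

    # Two hours apart on the wall clock, one hour of real elapsed time. The rule
    # that makes this come out right is a line in tzdb that someone has to maintain.
    print(after - before)  # 1:00:00
    ```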


  • I wasn’t mocking your argument, I was agreeing with you and clarifying that my feeling was about who I’m most “irritated” with, not about responsibility or legal culpability.

    My example was for simplicity, not mockery.
    The power going out is the power company’s fault, so I’m most mad at them. The store didn’t have a generator because they trusted the power company, so my cake got ruined. I’m still mad at the store, but less so, because they weren’t the cause of the problem, even though they could have done more to prevent it from impacting me.
    Culpability wise, I can only make demands of the store and hope that enough other people do so that they in turn demand answers from the power company.

    There are actually a fair number of certifications, including ones from government agencies, relating to software development, deployment, and related practices. That so many organizations didn’t have the ones relating to protection from supply chain issues is distressing, to say nothing of it slipping through quality control in the first place.

    Please, if you think we’re in a place in this thread where I’d be mocking you, re-read it with the understanding that I agree with you entirely on the legal and structural issues, and at most just have a different opinion about where the balance of "fuck you"s goes. I think I put more scorn towards the vendor because doing the thing is worse than failing to prevent the thing. Also, I work at a parallel company, so I’m familiar with exactly how much you have to be fucking up for this to happen; I spent the last three days dealing with the more minor controls that prevent it. Everyone has outages because you can’t prevent 100% of errors, but it’s on the vendor to build to the spec of their most sensitive customer and ensure that outages don’t keep a doctor from patient records.


  • Can’t fault you for feeling that way. I definitely don’t think anyone should be exempt from responsibility, I meant blame in the more emotional “ugh, you jerk” sense.

    If someone can’t fulfill their responsibilities because someone they depended on failed them, they’re still responsible for that failure to me, but I’m not blaming them if that makes any sense.

    Power outage or not, the store owes me an ice cream cake and they need to make things even between us, but I’m not upset with them for the power outage.