  • I looked into this previously, and found that there is a major problem for most users in the Terms of Service at https://codeium.com/terms-of-service-individual.

    Their agreement defines “Autocomplete User Content” as the context the client sends to them when you use auto-completion (i.e. the code you have written) - so it is implied that this code counts as “User Content”.

    Then they have terms saying you license all your User Content to them:

    “By Posting User Content to or via the Service, you grant Exafunction a worldwide, non-exclusive, irrevocable, royalty-free, fully paid right and license (with the right to sublicense through multiple tiers) to host, store, reproduce, modify for the purpose of formatting for display and transfer User Content, as authorized in these Terms, in each instance whether now known or hereafter developed. You agree to pay all monies owing to any person or entity resulting from Posting your User Content and from Exafunction’s exercise of the license set forth in this Section.”

    So in other words, let’s say you write a 1000 line piece of software, and release it under the GPL. Then you decide to trial Codeium, and autocomplete a few tiny things, sending your 1000 lines of code as context.

    Then next week, a big corp wants to use your software in their closed-source product, and doesn’t want to comply with the GPL. Exafunction can sell them a licence (“sublicense through multiple tiers”) to allow them to use the software you wrote without complying with the GPL. If it turns out that you used some GPL’d code from another developer in your codebase (as the GPL allows), and that developer sues Exafunction for violating the GPL, you have to pay any money owing.

    I emailed them about this back in December, and they didn’t respond or change their terms - so they are aware that their terms allow this interpretation.


  • A1kmm@lemmy.amxl.com to Linux@lemmy.ml · open letter to the NixOS foundation · 7 months ago

    I wonder if this is social engineering along the same vein as the xz takeover? I see a few structural similarities:

    • A lot of pressure being put on a maintainer, for reasons that are not particularly clear to an external observer.
    • The source is anonymous apart from calling themselves KA - so it can’t be linked to them as a past contributor, and it is not possible to find people who actually know the instigator. In the xz case, a whole lot of anonymous personas showed up to put the maintainer under pressure.
    • A major plank of this seems to be attacking a maintainer for “Avoiding giving away authority”. In the xz attack, the attacker sought to get more access and created astroturfed pressure to achieve that end.
    • It is published on a specially allocated domain with full WHOIS privacy, hosted on GitHub in an org with hidden project owners.

    My advice to those attacked here is to keep up the good work on Nix and NixOS, and don’t give in to what could be social engineering trying to manipulate you into acting against the community’s interests.


  • Most of mine are variations of getting confused about what system / device is which:

    • Had two magnetic HDDs mirrored as my root filesystem in RAID-1. One of the drives started getting SATA errors (couldn’t write), so I powered down and disconnected what I thought was the bad disk - but I had actually pulled the good one. Reboot, lots of errors from fsck on boot up, including lots about inodes getting connected to /lost+found. I should have realised at that point that it was a bad idea to rebuild the good drive from that corrupted one, but I did it anyway, and ended up restoring from my (fortunately very recent!) backup.
    • I once typed sudo pm-suspend on my laptop because I had an important presentation coming up, and wanted to keep my battery charged. I later noticed my laptop was running low on power (so rushed to find power to charge it), and also that I needed a file from home I’d forgotten to grab. Turns out I was actually in an ssh session connected to my home computer, which I’d accidentally suspended! This sort of thing is so common that there is a package in some distros (e.g. Debian) called molly-guard specifically to prevent it - I highly recommend it and now install it as a matter of course (see the note after this list).
    • I also once thought I was sending a command to a local testing VM, while wiping a database directory for re-installation. Turns out, I typed it in the wrong terminal and sent it to a dev prod environment (i.e. actively used by developers as part of their daily workflow), and we had to scramble to restore it from backup, meanwhile no one could deploy anything.
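
    On Debian and derivatives it is a one-line install; molly-guard wraps the shutdown-style commands and, when it detects an SSH session, refuses to act until you type the target machine’s hostname (whether it also covers suspend commands may depend on the version and configuration):

    # Debian/Ubuntu: wraps shutdown/reboot/halt/poweroff; over SSH it prompts
    # for the machine's hostname before letting the command proceed.
    sudo apt install molly-guard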

  • more is a legitimate program (it reads a file and writes it out one page at a time), if it is the real more. It is a memory hog in that (unlike the more advanced pager less) it reads the entire file into memory.

    I did an experiment to see if I could get the real more to show fds similar to the ones you saw. I ran yes "" | head -n10000 >/tmp/test to create a test file, then ran more < /tmp/test 2>/dev/null. Then I ran ls -l /proc/`pidof more`/fd.

    Results:

    lr-x------ 1 andrew andrew 64 Nov  5 14:56 0 -> /tmp/test
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 1 -> /dev/pts/2
    l-wx------ 1 andrew andrew 64 Nov  5 14:56 2 -> /dev/null
    lrwx------ 1 andrew andrew 64 Nov  5 14:56 3 -> 'anon_inode:[signalfd]'
    

    I think this suggests your open files are probably consistent with the real more when stderr is redirected to /dev/null. Most likely, something you (or someone else logged in on a PTY) were running called more to display output that had been written to /tmp/RG3tBlTNF8. Next time, you could find the parent of the more process, or look up what else is attached to the same pts with the fuser command.
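
    For example, something along these lines would show what started it and who else is on that terminal (assuming pidof finds just the one more process, and using pts/2 from the listing above):

    # PID of the parent process that invoked more:
    ps -o ppid= -p "$(pidof more)"
    # ...and that parent's command line:
    ps -o cmd= -p "$(ps -o ppid= -p "$(pidof more)")"
    # Everything else that has the same pseudo-terminal open:
    fuser -v /dev/pts/2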


  • Programming is the most automated career in history. Punch cards, Assembler, Compilers, Linkers, Keyboards, Garbage Collection, Type Checkers, Subroutines and Functions, Classes, Macros, Libraries (of increasingly higher-level abstractions), Build Scripts, CI/CD - those are all automation concepts that do things that theoretically a programmer could have done manually. To build all the software we build now would theoretically be possible without any automation - but it would probably require far more programmers than there are people on earth. However, because better tech leads to people doing more with the same, in practice the number of programmers has grown with time as we’ve just built more complex software.


    Needing a trustworthy notary to participate interactively in the protocol during the TLS request seems like it shuts out a lot of applications.

    I wonder if it could be done with zk-STARKs, with the session transcript and ephemeral keys as secret inputs, and a CA certificate as a public input, to produce a proof of the property without the need for the notary. That would then mean the only roles are TLS server, prover, and verifier, with no interactive dependency between the prover and verifier (i.e. the prover could generate the proof first, and it could then be verified non-interactively at any time later by any number of verifiers).


  • I use Restic, called from cron, with a password file containing a long randomly generated key.

    I back up with Restic to a repository on a different local hard drive (not part of my main RAID array), with --exclude-caches as well as excluding lots of files that can easily be re-generated / re-installed / re-downloaded (so my backups are focused on important data). I make sure to include all important data including /etc (and also back up the output of dpkg --get-selections as part of the backup). I auto-prune my repository to apply a policy on how far back I keep (de-duplicated) Restic snapshots.

    Once the backup completes, my script runs du -s on the backup and emails me if it is unexpectedly too big (e.g. I forgot to exclude some new massive file), otherwise it uses rclone sync to sync the archive from the local disk to Backblaze B2.
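
    Roughly, the nightly cron job looks something like this (the paths, repository location, rclone remote name, retention policy and size threshold are illustrative placeholders, and the repository is assumed to have already been created with restic init):

    #!/bin/sh
    set -e
    export RESTIC_REPOSITORY=/mnt/backup-disk/restic-repo   # repository on the separate local drive
    export RESTIC_PASSWORD_FILE=/root/.restic-password      # long randomly generated key

    dpkg --get-selections > /root/package-selections.txt    # record installed packages

    restic backup --exclude-caches --exclude-file=/root/restic-excludes.txt \
        /etc /home /root /srv /var                           # excludes cover anything easily re-created

    # Apply the retention policy and drop data no longer referenced by any snapshot.
    restic forget --prune --keep-daily 7 --keep-weekly 4 --keep-monthly 6

    # Sanity check: if the repository grew unexpectedly, email a warning instead of syncing.
    size_kb=$(du -s "$RESTIC_REPOSITORY" | cut -f1)
    if [ "$size_kb" -gt 50000000 ]; then
        echo "Restic repository is ${size_kb} KB" | mail -s "Backup size warning" root
    else
        rclone sync "$RESTIC_REPOSITORY" b2:my-backup-bucket/restic   # "b2" is an rclone remote configured for Backblaze B2
    fi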

    I back up my password for B2 (in an encrypted password database) separately, along with the Restic decryption key. The restore procedure is: if the local hard drive is intact, restore with Restic from the last good snapshot in the local repository. If it is also destroyed, rclone sync the repository from Backblaze B2 back to local disk, and then restore from that with Restic.
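
    In the worst case that amounts to something like this (same placeholder paths and remote name as in the sketch above):

    # Pull the repository back down from B2, then restore the latest snapshot from it.
    rclone sync b2:my-backup-bucket/restic /mnt/restore/restic-repo
    restic -r /mnt/restore/restic-repo --password-file /root/.restic-password \
        restore latest --target /mnt/restore/files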

    For Postgres databases I do something different (they aren’t included in my Restic backups, except for config files): I back them up with pgbackrest to Backblaze B2, with archive_mode on and an archive_command that archives WALs to Backblaze. This allows me to do point-in-time recovery (back to any point within my pgbackrest retention policy).
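
    The moving parts on the Postgres side are roughly the following (the stanza name and backup type are placeholders; the Backblaze repository details live in pgbackrest.conf):

    # In postgresql.conf - ship each completed WAL segment to the pgbackrest repository:
    #   archive_mode = on
    #   archive_command = 'pgbackrest --stanza=main archive-push %p'
    # Periodic base backup from cron; the archived WALs on top of it are what make PITR possible:
    pgbackrest --stanza=main --type=full backup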

    For Docker containers, I create them with docker-compose, and keep the docker-compose.yml so I can easily re-create them. I avoid keeping state in volumes, and instead use volume mounts to a location on the host, and back up the contents for important state (or use PostgreSQL for state instead where the service supports it).


  • The proposal doesn’t say what the interface between the browser and the OS / hardware is. They mention (but don’t elaborate on) modified browsers. Google’s track record includes:

    1. Creating the SafetyNet software and the Play Integrity API, which produce ‘attestations’ that the device is running manufacturer-supplied software. These checks can be passed for now (at a lower ‘integrity level’) with software like LineageOS combined with Magisk (Magisk by itself used to be enough, but then Google hired the Magisk developer and soon after that support was dropped) and Universal SafetyNet Fix. However, those work by making the device pretend to be an earlier device that doesn’t have ARM TrustZone configured, and one day the net is going to close - so these attestations actively take control away from users over which OS they can run on their phone if they want to use Google and third-party services (Google Pay, many apps).
    2. Requiring Android apps to be signed, and creating a separate tier of ‘trusted’ Android apps needed to build a browser. For example, to implement WebAuthn with hardware support (as Chrome does) on Android, you need to call com.google.android.gms.fido.fido2.Fido2PrivilegedApiClient, and Google doesn’t even provide a way to apply to get allowlisted for it (Mozilla and Google are, for example, allowed to build software that uses that API, but if you want to run your own modified browser and call that API on hardware you own? Good luck convincing Google to add you to the allowlist).
    3. Locking down extension APIs in Chrome to make it unsuitable for things they don’t like, like Adblocking, as in: https://www.xda-developers.com/google-chrome-manifest-v3-ad-blocker-extension-api/.

    So if Google can make it so you can’t run your own OS, and their OS won’t let you run your own browser (and BTW Microsoft and Apple are on a similar journey), and their browser won’t let you run an adblocker, where does that leave us?

    It creates a ratchet effect where Google, Apple, and Microsoft can compete with each other, and the Internet is usable from their browsers running unmodified systems sold by them or their favoured vendors, but any other option becomes impractical as a daily driver. They can effectively stack things against there ever being a new operating system / distro that competes with them, by making their web properties unusable from anything unattested and promoting that as the standard. This is a massive distortion of the open web from where it is now.

    One fix would be a regulation saying that if hardware has private or secret keys embedded in it, manufacturers must provide the end user with those keys; and that if it has unchangeable public keys embedded and requires software to be signed with the corresponding private key to boot or to access some hardware, manufacturers must provide that private key to end users. If that was the law in a few states big enough that manufacturers won’t just ignore them, it would shut down this sort of scheme.