Hi, I’ve been thinking for a few days about whether I should learn Docker or Podman. I know Podman is more FOSS and I like it more in theory, but maybe it’s better to start with Docker, for which there are a lot more tutorials. On the other hand, maybe it’s better to learn Podman straight away, since I don’t know either of the two yet and wouldn’t have to change habits later. What do you think? For context: I know how containers work in theory, and I think I know Linux fairly well, but I have never actually used Docker or Podman. In other words: if I want to eventually end up with Podman, is it easier to start with Docker and then learn Podman, or to start with Podman right away? Thanks in advance.

  • Nibodhika@lemmy.world · 3 months ago

    Yes, I’m aware of that, having written several systemd units for my own services in the past. But you’re not likely to get any of that by default when you just install from the package manager, which is what’s being discussed here; most people will just use the default systemd unit provided, and in the vast majority of cases it doesn’t provide the same level of isolation the default docker-compose file does.

    We’re talking about ease of setting things up. Anything you can do with Docker you can do without it; it’s just a matter of how easy it is to get good defaults. A similar argument to yours would be that you can also install multiple versions of databases directly on your OS.

    For example, I’m 99% sure the person I replied to has this unit file for the service:

    [Unit]
    Description=Plex Media Server
    After=network.target network-online.target
    
    [Service]
    # In this file, set LANG and LC_ALL to en_US.UTF-8 on non-English systems to avoid mystery crashes.
    EnvironmentFile=/etc/conf.d/plexmediaserver
    ExecStart=/usr/lib/plexmediaserver/Plex\x20Media\x20Server
    SyslogIdentifier=plexmediaserver
    Type=simple
    User=plex
    Group=plex
    Restart=on-failure
    RestartSec=5
    StartLimitInterval=60s
    StartLimitBurst=3
    
    [Install]
    WantedBy=multi-user.target
    

    Some good user isolation, but almost nothing else, and I doubt that someone who argued that installing from the package manager is easier will run systemctl edit on what they just installed to add extra security features.
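    For illustration, a systemctl edit drop-in adding some of those hardening flags could look roughly like this (a sketch, not a vetted Plex configuration; which directives a given service tolerates has to be tested per service):

    [Service]
    # Mount /usr, /boot and /etc read-only for this service
    ProtectSystem=full
    # Hide other users' home directories from the service
    ProtectHome=true
    # Give the service its own private /tmp
    PrivateTmp=true
    # Block privilege escalation, e.g. via setuid binaries
    NoNewPrivileges=true

    systemd merges the drop-in with the packaged unit, so none of this requires editing the file the package manager installed.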

    • Victor@lemmy.world · 3 months ago

      Can confirm, have this file. Can confirm, will not learn unit files because I don’t know enough to know the provided one is not sufficient, because the wiki has no such mention. You are spot on.

      • Nibodhika@lemmy.world · 3 months ago

        Btw, I don’t mean any of that as an insult or anything of the sort; I do the same with the services I install from the package manager, even though I’m aware of those security flags, what they do, and how to add them.

    • TCB13@lemmy.world · edited · 3 months ago

      But you’re not likely to get any of that by default when you just install from the package manager as it’s the discussion here,

      This is changing… Fedora is planning to enable the various systemd service hardening flags by default, and so is Debian.

      We’re talking about ease of setting things up, anything you can do in docker you can do without

      Yes, but at what cost? At the cost of being overly dependent on some cloud service or proprietary solution like Docker Hub or Kubernetes? Remember that the alternative is packages from your Linux repository, which can be easily mirrored, archived offline, and so on.

      • Nibodhika@lemmy.world · 3 months ago

        You’re not forced to use Docker Hub or Kubernetes; in fact I use neither. And if a team chooses to host their images on Docker Hub, that’s their choice. It’s like saying git is bad because Microsoft owns GitHub, or that installing software X from the repos is better than compiling because you’d need to use GitHub to get the code.

        Docker images can also be easily mirrored, archived offline, etc., and they will keep working long after the packages you archived break because the base version of some library got updated.

        • TCB13@lemmy.world · 3 months ago

          Yet people choose those proprietary solutions and platforms because it’s easier. It’s just like Chrome: there are other browsers, yet people go for Chrome.

          It’s significantly harder to archive and run functional offline setups with Docker than it is with an APT repository. It’s more of a hack than something it was designed for.

          • Nibodhika@lemmy.world · 2 points · 3 months ago

            It’s definitely much easier to do that with Docker than with apt packages, and Docker was designed for that. Just do a save/load (https://docs.docker.com/reference/cli/docker/image/save/), and like I mentioned before, this is much more stable than saving some .deb files, which will break the moment one of the dependencies gets updated.

            Most people will use whatever docker-compose file a project shows as the default; if the project hosts its images on Docker Hub, that’s their choice. Plus I don’t understand what the problem is: GitHub is also proprietary, and no one cares that a project is hosted there.
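            For reference, the save/load round trip is just two commands (the image name here is only an example):

            # On the machine with network access, write the image to a tarball
            docker save -o plex-backup.tar linuxserver/plex:latest
            # ...move the tarball to the offline machine, then restore it
            docker load -i plex-backup.tar

            The tarball contains every layer the image needs, so it keeps working regardless of what happens to upstream repositories.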

            • TCB13@lemmy.world · 3 months ago

              It’s definitely much easier to do that on docker than with apt packages,

              What a joke.

              Most people will use whatever docker compose file a project shows as default, if the project hosts the images on dockerhub that’s their choice

              Yes, and they point the market in a direction that affects everyone.

              GitHub is also proprietary and no one cares that a project is hosted there.

              People do care, and that’s why there are public alternatives such as Codeberg and its base project, Gitea.

              • Nibodhika@lemmy.world · 3 months ago

                Got it, no one should use software hosted on GitHub. You’re either a teenager who discovered Linux a couple of years ago or a FOSS fundamentalist; in either case, I’ve had a personal policy of not wasting time on either for over 20 years.

                • TCB13@lemmy.world · 3 months ago

                  I never said people shouldn’t use those platforms. What I’ve said countless times is that while they make newcomers’ lives easier, they pose risks, and the current state of things and its general direction don’t seem very good.