I’m pretty new to self-hosting in general, so I’m sorry if I’m not using correct terminology or if this is a dumb question.
I did a big archival project last year, and ripped all 700 or so DVDs/Blu-rays I own. Ngl, I had originally planned on just having them all in a big media folder and picking out whatever I wanted to watch that way. Fortunately, I discovered Jellyfin, and went with that instead.
So I bought a mini pc to run Ubuntu server on, and I just installed Jellyfin directly there. Eventually I decided to try hosting a few other services (like Home Assistant and BookLore (R.I.P.)), which I did through Docker.
So I’m wondering, should I be running Jellyfin through Docker as well? Are there advantages to running Jellyfin through Docker as opposed to installed directly on the server? Would transitioning my Jellyfin instance to Docker be a complicated process (bearing in mind that I’m new and dumb)?
Thanks for any assistance.
I run it in docker and it’s fine. It’s not because I don’t know how to run it natively - I’m a linux sysadmin - it’s just that very often, docker is easier to do this stuff with. Easier to migrate to other machines, easier to upgrade, easier to install, easier to remove if you want to.
By all means go native if you want to learn. Pros and cons in each method, but for me, docker works just fine for most things.
You should know how to host something without using docker, because well… that’s how you’d make a dockerfile.
But you should not self-host without containerization. The whole idea is that your self-hosted applications aren't polluting your environment. Your system doesn't need all those development libraries and packages. Once you remove an application, you'll often find the environment is permanently polluted, and it's difficult to reset it to its previous state (free of leftover dependencies and random files).
However with docker none of that happens. Your environment is in the same state you left it.
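To make the point above concrete: a Dockerfile is basically those native install steps written down, which is why knowing the manual process still matters. A generic sketch (the package and binary names here are purely illustrative, not a real app):

```dockerfile
# Start from a clean base image instead of your host OS
FROM debian:stable-slim

# The same commands you'd run by hand on the host, but they only
# touch this image's filesystem, never your server's
RUN apt-get update \
 && apt-get install -y --no-install-recommends example-app \
 && rm -rf /var/lib/apt/lists/*

# Run as an unprivileged user, like a well-behaved native service would
USER nobody
CMD ["example-app"]
```

Delete the image, and every dependency it pulled in goes with it.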
Don’t change now if you don’t have any issues, in my opinion. However, if you have the space for a Jellyfin backup, it should be a pretty simple transition. I always deploy with docker compose for all my services; I keep backups of the compose files, and it handles all the networking between the services (VPN, *arr stack, qbt, seer, Jellyfin). When I had to move off my ancient server after it kicked the bucket, it was as simple as copying my compose files, running a single docker deployment per stack, and loading the backups for specific services. I’ve not had any issues with Jellyfin on docker, and I’m using GPU passthrough for hardware-accelerated transcoding.
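For reference, a minimal compose sketch along these lines. The host paths and the `/dev/dri` device line are assumptions (that's the usual route for Intel/AMD iGPU transcoding); adjust both for your own layout and hardware:

```yaml
services:
  jellyfin:
    image: jellyfin/jellyfin        # official image
    restart: unless-stopped
    ports:
      - "8096:8096"                 # web UI
    volumes:
      - ./config:/config            # Jellyfin's database and settings live here
      - ./cache:/cache
      - /srv/media:/media:ro        # assumed media path; read-only is fine for playback
    devices:
      - /dev/dri:/dev/dri           # GPU passthrough for hardware transcoding
```

Because the config directory is a bind mount, backing up or migrating the whole instance is just copying that folder plus the compose file.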
Isolating network services from the rest of your system is a good thing
I just use docker compose for everything. I like how everything pertaining to a service can be contained within a single directory, and there’s minimal file-permission management. Also, lots of services need their own databases, which might conflict on system installs.
The biggest advantage of Docker is that it’s a little bit easier to manage all the dependencies of a service. And often enough the Docker images come from the official vendor and thus should in theory be configured optimally out of the box and give you timely updates.
But if you don’t have any problems with your current install I wouldn’t touch it.
I prefer to run processes directly on the host system if I can. Jellyfin is well behaved, running as its own user and not hogging RAM, and it doesn’t need dependencies that conflict with other apps/services. So I don’t see a need to add a layer of port/volume/stderr mapping.
I also ran HA and AppDaemon just in Python virtual envs. Glad to share Ansible playbooks if you’re interested.
Contrary to the other poster, I prefer Docker over installing directly on the main OS, for one simple reason: uninstall. I tend to install/uninstall stuff frequently. Sure, Jellyfin is great now, but what about next year when something happens and I want to switch to a fork, or Emby, or something else? Uninstalling in Linux is a crapshoot. Not too bad if you’re using a package manager, but oftentimes the things I install aren’t in the package manager. Uninstalling binaries, cleaning up directories, removing users and groups, and removing dependencies is a massive pain. Back before Docker, instead of doing dist upgrades on my Ubuntu server, I’d reinstall from scratch just to clean everything up.
With docker, cleanup is a breeze.
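As a sketch of what that cleanup looks like (assuming a compose-managed Jellyfin living in `~/jellyfin`; the path is just an example):

```shell
# Stop and delete the containers and their network
cd ~/jellyfin && docker compose down

# Delete the image layers too
docker image rm jellyfin/jellyfin

# Everything the service wrote lives in the bind-mounted folders,
# so removing the directory removes the last trace of it
cd ~ && rm -rf ~/jellyfin
```

No stray users, groups, systemd units, or orphaned dependencies left on the host.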
If you already know how to use docker, it’s a no-brainer. It works very well; I don’t recall ever seeing anyone have issues that prompted them to move away from docker back to a standard install. The only hiccup I’ve seen is people forgetting to make directories available to the container, but since you already use docker, that probably won’t happen to you.
Look at DietPi; there’s a ‘normal PC’ version you can run on your mini PC. DietPi is super lightweight and makes installing and running popular self-hosted services extremely easy.
I’m also relatively new to self-hosting and am not using docker. I don’t fully understand it, and my Jellyfin server is working well already, so I haven’t felt a need to rock the boat.
I see so many people using docker that I frequently question if I should be too.




