Containers are a great way to run applications.
Docker is a piece of garbage by a company way too far down the enshittification slide.
I prefer having a convenient pull mechanism that I can trigger from a workstation in the lab network. I maintain the setup with Ansible.
That sounds valid. I was thinking about the fact that most carbon credits are generated from deforestation reduction projects that renew yearly.
Climate was only cool while you could buy fake carbon credits to make up lies about responsible computing at scale. Now we save the world with AI once again. Next year, compute on mars, true planetary scale! Cloud? We’re doing Starsystem now!
There will always be gaps, but describing your machine through Ansible is worth it and can be fun if you’re into that sort of thing.
The first time I set up a freshly installed Debian laptop from my existing Ansible roles was a really enjoyable moment.
Being able to establish a familiar base on a fresh system at will is a far greater power than pure config/data backups.
If I learned one thing from TunnelVision, it’s how blindly people are operating right now. If you open a VPN tunnel, also ensure traffic is actually routed through it, especially if you don’t control the network. Adding a tunnel on top of an insecure network also does not protect your client from other malicious clients on that network. I feel like people have seen one too many VPN snake oil salesmen on social media.
`PathPrefix` no longer being a regex stood out.
You can read this blog post, authored as a series of tweets instead: https://mastodon.social/@pid_eins/112353324518585654
Sharing the network space with another container is the way to go IMHO. I use podman and just run the main application in one container, and then another VPN-enabling container in the same pod, which is essentially what you’re achieving with the `network_mode: container:foo` directive.
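For compose users, the shared-namespace setup might be sketched like this; the image names and the service layout are placeholders for illustration, not a specific recommendation:

```yaml
services:
  vpn:
    image: example/wireguard-client   # placeholder VPN-enabling image
    cap_add:
      - NET_ADMIN                     # required to manage the tunnel interface
  app:
    image: example/qbittorrent        # placeholder application image
    network_mode: "service:vpn"       # compose equivalent of container:foo
    depends_on:
      - vpn
```

With this, `app` has no network stack of its own; all of its traffic enters and leaves through the `vpn` container’s namespace.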
Ideally, exposing ports on the host node is not part of your design, so don’t have any `--port` directives at all. Your host should allow routing to the hosted containers and, thus, their exposed ports. If you run your workloads in a dedicated network, like `10.0.1.0/24`, then those addresses assigned to your containers need to be addressable. Then you just reach all of their exposed ports directly. Ultimately, you then want to control port exposure through services like firewalld, but that can usually be delayed. Just remember that port forwarding is not a security mechanism, it’s a convenience mechanism.
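A rough compose sketch of such a dedicated network; the subnet matches the example above, the names are made up:

```yaml
networks:
  workloads:
    ipam:
      config:
        - subnet: 10.0.1.0/24   # containers get addressable 10.0.1.x IPs

services:
  app:
    image: example/app          # placeholder image, no port publishing
    networks:
      - workloads
```

Other machines on the LAN then need a route to `10.0.1.0/24` via the container host before they can reach the exposed ports directly.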
If you want DLNA, forget about running that workload in a “proper” container. For DLNA, you need the ability to open random UDP ports for communication with consuming devices on the LAN. This will always require host networking.
Your DLNA-enabled workloads, like Plex or Jellyfin, need a host networking container. Your services that require internet privacy, like qBittorrent, need their own dedicated pod, on a dedicated network, with another container that controls their networking plane to redirect communication to the VPN. Ideally, all your manual configuration then ends up with a directive in the WireGuard config like:
PostUp = ip route add 192.168.1.0/24 via 192.168.19.1 dev eth0
WireGuard will likely, by default, route all traffic through the `wg0` device. You then just tell it that the LAN CIDR is reachable through `eth0` directly. This enables your communication path to the VPN-secured container after the VPN is up.
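Put together, the VPN container’s WireGuard config could look roughly like this; the address, keys, and endpoint are placeholders, and only the PostUp line mirrors the example above:

```ini
[Interface]
Address = 10.2.0.2/32             # VPN-assigned address (placeholder)
PrivateKey = <client-private-key>
# Keep the LAN reachable via the container network's gateway on eth0,
# even though AllowedIPs below routes everything through wg0:
PostUp = ip route add 192.168.1.0/24 via 192.168.19.1 dev eth0

[Peer]
PublicKey = <provider-public-key>
Endpoint = vpn.example.com:51820  # placeholder endpoint
AllowedIPs = 0.0.0.0/0            # route all traffic through the tunnel
```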
Replace “AI” with “metaverse” or “Bitcoin”. Same bullshit
Next level: just type what you want and let the AI figure it out. 1 ad per prompt
I do not. As far as I’m aware, this is usually countered through a proper way to follow through on reports. If you host user-generated content, have an abuse contact who will instantly act on reports, delete reported content, and report whatever metadata came along with the upload to the authorities if necessary.
The bookkeeping code for keeping track of unused uploads has a cost attributed to it. I claim that most providers are not willing to pay that cost proactively, and prefer to act on reports.
I can only extrapolate from my own experience though. No idea how the industry at large really handles or reasons about this.
The mistake is using Ubuntu
This is not unique to Lemmy. You can do the same on Slack, Discord, Teams, GitHub, … Finding unused resources isn’t trivial, and you’re usually better off ignoring the noise.
If you upload illegal content somewhere, and then tell the FBI about it, being the only person knowing the URL, let me know how that turns out.
Checking every single image ID against all stored text blobs is not trivial. Most platforms don’t do this. It’s cheaper to just ignore the unused images.
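As a rough illustration of why that check is costly, here is a naive sweep; the IDs and posts are made-up data:

```python
# Naive orphan-image sweep: every stored image ID is searched for in
# every text blob -- O(images x blobs x blob length) string scanning,
# which gets expensive fast at platform scale.
def find_unused_images(image_ids, text_blobs):
    referenced = set()
    for blob in text_blobs:
        for image_id in image_ids:
            if image_id in blob:
                referenced.add(image_id)
    return [i for i in image_ids if i not in referenced]

images = ["img-a1b2", "img-c3d4", "img-e5f6"]
posts = ["check out img-a1b2", "unrelated post", "see img-e5f6 here"]
print(find_unused_images(images, posts))  # -> ['img-c3d4']
```

A real platform would at least need an index from IDs to references to avoid rescanning everything, which is exactly the bookkeeping cost most providers skip.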
I don’t really have any insights into Quora. I know StackExchange hardliners always joked about the site and felt like SE was better. I joined in because I thought it was fun to feel superior, but I don’t even have an account on that site.
SO is a shithole, just like Reddit. All the work is done by volunteers. When it was time to cash out with the platform, they also did several things to fuck with their community. I’ve contributed quite a bit to the trilogy sites, and served as a moderator. I regret every second of it. But at least a few people got rich in the process.
I guess https://geti2p.net/en/
Only freaks have AM/PM in their time system.