I see there is an m.2 slot too with what looks to be a Kingston SSD.
I’m still confused what era this laptop is from. It might be a SATA m.2.
Wayland was subject to “first mover disadvantage” for a long time. Why be the first to switch and have to solve all the problems? Instead be last and everyone else will do the hard work for you.
But without big players moving to it, those issues never get fixed. And users rightly shouldn’t be forced to migrate to a broken system that isn’t ready. People just want a system that works, right?
Eventually someone had to decide it was ‘good enough’ and attempt an industry-wide push away from a hybrid approach that wastes developer time and confuses users.
Can someone please validate my decision to pay $23 a year for this dumb corndog.social domain just so I had something fun for my Lemmy instance.
I’ve had exactly this happen to me. It was my own fault, but it took a bit of work to figure out.
Backups need to be reliable and I just can’t rely on a community of volunteers or the availability of family to help.
So yeah I pay for S3 and/or a VPS. I consider it one of the few things worth it to pay a larger hosting company for.
Look upon what thou has twat and ponder it.
Unironically, PowerShell is great, and learning it has propelled me through the last 12 years of my career as a sysadmin. My biggest complaints with it are generally Windows complaints or due to legacy PowerShell modules.
I intentionally do not host my own git repos mostly because I need them to be available when my environment is having problems.
I make use of local runners for CI/CD though which is nice but git is one of the few things I need to not have to worry about.
My beef is with the machine.
No need to optimize when you can just push people to upgrade their hardware more frequently so you make fat stacks of cash from OEMs.
Do you have any links or guides that you found helpful? A friend wanted to try this out but basically gave up when he realized he’d need an Nvidia GPU.
I’ve been testing Ollama in Docker/WSL with the idea that if I like it I’ll eventually move my GPU into my home server and get an upgrade for my gaming PC. When you run a model it has to load the whole thing into VRAM. I use the 8 GB models, so it takes 20-40 seconds to load the model, and then each response is really fast after that and the GPU hit is pretty small. After, I think, five minutes by default it will unload the model to free up VRAM.
Basically this means that you either need to wait a bit for the model to warm up or you need to extend that timeout so that it stays warm longer. That means that I cannot really use my GPU for anything else while the LLM is loaded.
I haven’t tracked power usage, but besides the VRAM requirements it doesn’t seem too intensive on resources, but maybe I just haven’t done anything complex enough yet.
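For what it’s worth, that unload timeout is tunable. A minimal sketch, assuming a stock Ollama install on the default port; the model name is just an example:

```shell
# Keep models loaded for an hour instead of the default ~5 minutes
# (set on the server side; accepts durations like "1h", or "-1" to keep forever)
export OLLAMA_KEEP_ALIVE=1h

# Or override per request via the API with the keep_alive field
curl http://localhost:11434/api/generate -d '{
  "model": "llama3.1:8b",
  "prompt": "hello",
  "keep_alive": "30m"
}'
```

The trade-off is the one mentioned above: the longer the model stays warm, the longer your VRAM is tied up.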
I’ve been using ZFS now for a few years for all my data drives/pools but I haven’t gotten brave enough to boot from it yet. Snapshotting a system drive would be really handy.
I thought it was just a meme.
I see way more complaints about ‘elitist Arch users’ than I ever do comments from actual elitist Arch users.
DuckDNS is great… but they have had some pretty major outages recently. Not a complaint, I know it’s an extremely valuable free service, but it’s worth mentioning.
Cloudflare has an API for easy dynamic DNS. I use oznu/docker-cloudflare-ddns to manage this; it’s super easy:
docker run \
-e API_KEY=xxxxxxx \
-e ZONE=example.com \
-e SUBDOMAIN=subdomain \
oznu/cloudflare-ddns
Then I just make a CNAME for each of my public facing services to point to ‘subdomain.example.com’ and use a reverse proxy to get incoming traffic to the right service.
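Creating those CNAMEs can also be scripted against the same Cloudflare API. A sketch, not the container’s doing; the zone ID, token, and hostnames are placeholders:

```shell
# Create a CNAME record pointing a service hostname at the dynamic subdomain
curl -X POST "https://api.cloudflare.com/client/v4/zones/$ZONE_ID/dns_records" \
  -H "Authorization: Bearer $CF_API_TOKEN" \
  -H "Content-Type: application/json" \
  --data '{"type":"CNAME","name":"service.example.com","content":"subdomain.example.com","proxied":true}'
```

After that, only the one A record behind ‘subdomain.example.com’ ever needs updating when your IP changes.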
They also specifically warn that it’s not optimized for a VM right now. It’s still not quite ready on bare metal, and even less so in a VM.
Bazzite has finally got me to pay attention to Fedora derivatives again for the first time in like 15 years.
“What does this section of code do?”
Run it and find out, coward.
I’ve not tried GPT4ALL but Ollama combined with Open WebUI is really great for selfhosted LLMs and can run with podman. I’m running Bazzite too and this is what I do.
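Roughly what that setup looks like, as a sketch: this assumes rootless podman with an NVIDIA GPU exposed via CDI, and the image tags, ports, and volume names are just common defaults to adjust to taste:

```shell
# Ollama server (GPU passthrough via CDI; drop the --device flag for CPU-only)
podman run -d --name ollama \
  --device nvidia.com/gpu=all \
  -v ollama:/root/.ollama \
  -p 11434:11434 \
  docker.io/ollama/ollama

# Open WebUI, pointed at the Ollama API on the host
podman run -d --name open-webui \
  -p 3000:8080 \
  -e OLLAMA_BASE_URL=http://host.containers.internal:11434 \
  -v open-webui:/app/backend/data \
  ghcr.io/open-webui/open-webui:main
```

Then the web UI is at http://localhost:3000 and talks to Ollama for you.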