Atomic Fedora variants, like Fedora Kinoite, are probably the most noob friendly. Nearly impossible to break.
If I have to draw diagrams, I use D2 https://d2lang.com/
It’s a very simple code-to-diagram language.
It has plugins for VS Code and Obsidian.
It’s open source and you can run it locally, with the exception of their proprietary layout engine. But I don’t use that one; I just use ELK.
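As a taste of the syntax, a minimal D2 file (node names are made up for illustration) looks like this:

```d2
# A tiny service diagram: two labeled nodes and a labeled edge
web: Web Server
db: Database
web -> db: queries
```

Rendering it with the ELK layout engine is a single CLI call, e.g. `d2 --layout elk diagram.d2 diagram.svg`.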
Matrix and Beeper
The main complaints about Google were LLM maturity, bias, and other factors. The same things will be true for any LLM.
Google uses AI, and everyone hates it.
Some upstart uses AI search, and everyone is like woooowowow???
I’d use an AI project manager, I think. It could nag me via a push message on my phone every day to ask for updates and info about time-frames, which I would provide; then it could do some shit like create Gantt charts and progress tracking.
Would be nice to stay on track with stuff I want to do
Owners: with AI we can finally get rid of everyone
Cheaper too I guess
It’s not that bad. Grafana has a lot of features for this.
They didn’t get kicked out. They just moved to a more expensive solution / pricing structure.
12,000 a month is probably chump change for a casino, and money well spent at that for the features Cloudflare provides.
At my job, a reasonably sized IT customer generates about 100–500k a month.
They usually go hand in hand.
They provide a whole lot more to begin with.
Currently setting this up at work: Ollama with Danswer as a RAG frontend, running on a bunch of Nvidia L4/L40 GPUs on Kubernetes.
It’s pretty plug and play, honestly.
You can easily access raw live output from any source in Kibana if you want to, for observability.
There’s a few.
Very easy if you set it up with Docker.
Best is probably just Ollama, with Danswer as a frontend. Danswer will do all the RAG stuff for you, like managing / uploading documents and so on.
Ollama is becoming the standard self-hosted LLM server, and you can add any models you want / can fit.
https://ollama.com/blog/ollama-is-now-available-as-an-official-docker-image
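Once the Ollama container is up, talking to it is just an HTTP call. A minimal sketch against its `/api/generate` REST endpoint, assuming the default port 11434 and that the model name you pass has already been pulled:

```python
import json
import urllib.request

# Ollama's default local endpoint for one-shot generation.
OLLAMA_URL = "http://localhost:11434/api/generate"

def build_request(model: str, prompt: str) -> urllib.request.Request:
    """Build a non-streaming generate request for a local Ollama server."""
    payload = json.dumps({"model": model, "prompt": prompt, "stream": False})
    return urllib.request.Request(
        OLLAMA_URL,
        data=payload.encode(),
        headers={"Content-Type": "application/json"},
    )

def generate(model: str, prompt: str) -> str:
    """Send the prompt and return the model's full response text."""
    with urllib.request.urlopen(build_request(model, prompt)) as resp:
        return json.loads(resp.read())["response"]

# Example (needs a running Ollama server with the model pulled):
# print(generate("llama3", "Summarize our onboarding doc in one line."))
```

With `stream` set to `False` the server returns one JSON object instead of a stream of chunks, which keeps the client trivial.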
To be able to sell them as supported builds to the enterprise market
A VPS with fail2ban is all you need, really. Oh, and don’t make SSH accounts where the username is the password. That’s what I did once, but the hackers were nice: they closed the hole and then just used it to run an IRC client, because the network and host were so stable.
Found out by accident. Too bad they left their IRC username and password in cleartext. Was a fun week or so messing around with their channels.
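For the fail2ban part, a minimal SSH jail is a few lines of config; the thresholds below are illustrative, not recommendations:

```ini
# /etc/fail2ban/jail.local — minimal sshd jail
[sshd]
enabled  = true
maxretry = 5      # ban after 5 failed logins...
findtime = 10m    # ...within a 10-minute window
bantime  = 1h     # ban the offending IP for one hour
```

Drop it in `jail.local` rather than editing `jail.conf`, so package upgrades don’t overwrite it.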