

Two rack rails bolted together with a power strip and a tray holding my mini PC server. My router is bolted on as well; it acts as a switch for everything while also providing Wi-Fi to my phone and laptop.






I kind of railroaded myself into using Calibre, unfortunately.
I have a very specific file-naming scheme which I originally came up with back when I only used folders to organise my books, in order to group together books that belong to a series where the series is part of a larger universe.
Basically, my folder structure is {World}/{Reading Order}; {Series} #{Series_Index} - {Title} - {Author}
On my Kobo I have the autoshelf plugin installed, which automatically parses this information when I add books, groups them together by world, and fills out the series information.
To properly make use of this system I need Calibre's custom columns and the ability to export the books I want with this specific name format. I have yet to find a program other than Calibre that supports this.
It would probably be smarter for me to reorganise my books at some point, but I really like being able to drop a ton of books onto my reader at once over SFTP. As far as I can tell, all the common alternatives rely on manually downloading the books, sending them directly to the reader, or pulling them from their internal file storage in whatever form the application stores them…
I do like Audiobookshelf for the ability to add a book to multiple series, but the missing mass-export functionality stops me from switching.


I name mine after Greek and Roman gods.
My NAS is named Hestia, the goddess of the hearth and home.
My Docker server is called Poseidon due to Docker's sea iconography. The second iteration of my Docker server, where I tried playing around with Podman, I called Neptune.
I briefly had a Raspberry Pi for experimenting with some stuff, which was called Eileithyia, the goddess of childbirth.
My Proxmox machine, on which pretty much all my other servers run as VMs, is called Atlas, as the Titan holding up my personal network.
I also have a TrueNAS VM which I boringly called truenas…


Quick question: when you say server/agent architecture, does that mean the server manages the backup schedule and pulls the backups from the systems, or does the connected computer initiate the backups?
I'm currently using Synology Active Backup for my server and used to also use it for my desktop. Linux support is not ideal though, and I would like to move to something with similar capabilities that is also not vendor-locked.
My personal use case would be backing up a single server, a desktop, and a laptop.


Good questions. Would like to know that too
I have a bare minimum of documentation as Markdown files, which I take care to keep in an accessible location, i.e. not on my server.
If my server does ever go down, I might really want to access the (admittedly limited) documentation for it.
I read the title and this was literally the first thing that popped in my head
Professionally or hobbywise?
Hobby-wise I'm pretty dead lately because I left all my embedded gear at my parents' when I moved.
Professionally, I'm trying to optimize software on a microcontroller to minimize power consumption for my master's thesis. Currently I'm sitting at an average power draw of 70 µA at 3 V. If all goes well I might get it even lower.


Yeah, that would be the ideal scenario I guess.
It should technically be possible by mapping the compose files into the opt folder via Docker bind mounts, but I think that's an unreasonable way to go about it, since every compose file would need its own mount point.


Proxmox to manage my VMs, SSH for anything on the command line, and Portainer for managing my Docker containers.
One day I'll probably switch to Dockge so my docker-compose files are stored plainly on the hard drive, but for now Portainer works flawlessly.


After reading through some of the comments, here is my opinion.
C would be a good language IF you know your students plan to get into IT, specifically a sector where low-level knowledge is useful. Beyond that, I assume your students probably use Windows, and I personally find it a pain to work with C on Windows outside of full IDEs like the JetBrains tools or Visual Studio. It's also a lot more work until you get results you're happy with. Unless you start with an Arduino, which I find is a pretty nice way to get students interested in embedded stuff.
I don't like JavaScript because I find it a mess, although it is very useful for anything web-related.
Given you said in another comment that this is meant to be a general-purpose skill for your students, I would strongly recommend Python. While I dislike the dynamic type system, it is a very powerful language for getting stuff done. You can quickly get results that feel rewarding, instead of running into hard-to-fix issues that turn your students off of programming in general. It's also very useful outside of IT, as a scripting language for analyzing data in basically any field or for generating nice plots for a document.


I remember building something vaguely related in a university course on AI, before ChatGPT was released and the whole LLM thing had taken off.
The user could enter a couple of movies (so long as they were present in the weird semantic database our professor told us to use), and we calculated a similarity matrix between them and all the other movies in the database, based on their tags and on putting the descriptions through a natural-language-processing pipeline.
The result was the user getting a couple of surprisingly accurate recommendations.
Considering we had to calculate this similarity score for every movie in the database, it was obviously not very efficient, but I wonder how it would scale up against current LLMs, both in terms of accuracy and energy efficiency.
One issue, if you want to call it that, is that our approach was deterministic: enter the same movies, get the same results. I don't think an LLM is as predictable.
I used to use enums for my return codes.
Then I got pissed I had to add my enum definition to every project I worked on.
I now return integers based on errno


Well, guess that’s it for heaven


Good thing I decided against switching to it, even though my main reason is that my weird book-organisation scheme currently isn't feasible with anything but Calibre or manual organisation, as far as I know.
I use a MikroTik Router and while I do love the amount of power it gives me, I very quickly realized that I jumped in at the deep end. Deeper than I can deal with unfortunately.
I did get everything running after a week or so but I absolutely had to fight the router to do so.
Sometimes less is more I guess
That was my exact setup as well until I switched to a different router which supported both custom DNS entries and blocklists, thereby making the pi-hole redundant
Not OP but a lot of people probably use pi-hole which doesn’t support wildcards for some inane reason
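For what it's worth, the wildcard that Pi-hole's UI can't express is a one-liner in the dnsmasq syntax Pi-hole runs on top of; a hypothetical drop-in file (the domain, IP, and filename here are made-up examples):

```ini
# /etc/dnsmasq.d/02-wildcard.conf — drop-in for Pi-hole's underlying dnsmasq.
# Answers lab.example.home AND every subdomain (*.lab.example.home)
# with the given address.
address=/lab.example.home/192.168.1.50
```

So the limitation is in the web interface, not the resolver underneath.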


I typically use EndeavourOS because I enjoy how well documented and organized the Arch wiki is.
I tried switching to Fedora on my laptop recently but actually had some issues with software that was apparently only distributed through the AUR or as an AppImage (which I could have used, I know).
When I also had issues setting up the VPN to my home network again, I caved and restored the disk from a backup I took before attempting the switch. The VPN thing almost definitely wasn't Fedora's fault, since I remember running into the same issue on EndeavourOS, but after my fix from last time didn't work I was out of patience.
My servers run on either Debian or Ubuntu LTS though.
I doubt you've heard of it, honestly. It's an ADuCM355 from Analog Devices; internally it uses a Cortex-M3.