

Sigh, you missed the obvious pun: would you like some cheese with your wine?


Memory leaks are more than possible in Rust. Rust's type system prevents things like free being called on an already-freed resource, but it very much allows never calling free at all, even when nothing references a thing any more. It also makes things like arena allocation a "fun" endeavor compared to other systems languages; not impossible, just trickier. Rust isn't a panacea: you'd need something more like Idris, with its dependent types, to prove during compilation that resources actually get freed at runtime. But a fully dependent type system is very much a bleeding-edge thing.
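A minimal sketch of what that looks like in practice; everything below is safe Rust, and the Node type is just an illustration:

```rust
use std::{cell::RefCell, mem, rc::Rc};

// Illustrative type: a node that can point at another node, so a cycle is possible.
struct Node {
    next: RefCell<Option<Rc<Node>>>,
}

fn main() {
    // Sanctioned, explicit leaks: both of these are safe standard-library functions.
    let leaked: &'static mut Vec<u8> = Box::leak(Box::new(vec![0u8; 1024]));
    leaked[0] = 1;
    mem::forget(String::from("never dropped, never freed"));

    // An Rc reference cycle: each node keeps the other's strong count above zero,
    // so neither destructor runs and the heap allocations are never freed.
    let a = Rc::new(Node { next: RefCell::new(None) });
    let b = Rc::new(Node { next: RefCell::new(Some(Rc::clone(&a))) });
    *a.next.borrow_mut() = Some(Rc::clone(&b));
    // `a` and `b` go out of scope at the end of main, but the cycle still leaks.
}
```

The borrow checker cares about use-after-free and aliasing, not about whether a destructor ever actually runs; leaking is explicitly outside Rust's safety guarantees.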
I use shenanigans, more fitting and descriptive.


The default for cargo is debug builds; why it would surprise anyone that those are slower is beyond me. --release isn't that much extra to type, or to alias. Do people not learn how their tools work any longer? This isn't that far off from C/C++, where you set CFLAGS etc. to fit the final binary's purpose.
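For reference, the two builds and the alias trick look roughly like this (the br alias name is just an example):

```sh
# The default profile is `dev`: unoptimized, debug assertions on, output in target/debug/.
cargo build

# Optimized build; artifacts land in target/release/ instead.
cargo build --release

# Optional alias, e.g. in ~/.cargo/config.toml:
#   [alias]
#   br = "build --release"
cargo br
```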


I’m using shenanigans now, fits the best methinks.


SQLite doesn't need a networked setup at all. What the poster above is asking for is an option for Linkwarden to just use embedded SQLite as its database engine. For the apps I build, I just embed SQLite into the binary: no database server needed. The binary sets up a db file at startup, say in ~/.config/app/db.file, and it's off to the races. If you don't need to access it from multiple contexts, SQLite is hard to beat.
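A rough sketch of that pattern in Rust, assuming the rusqlite crate (its bundled feature compiles SQLite straight into the binary); the path, table, and function names are just placeholders:

```rust
use rusqlite::{params, Connection};
use std::{env, fs, path::PathBuf};

fn open_db() -> rusqlite::Result<Connection> {
    // Placeholder location mirroring ~/.config/app/db.file from the comment above.
    let home = env::var("HOME").unwrap_or_else(|_| ".".into());
    let dir = PathBuf::from(home).join(".config/app");
    fs::create_dir_all(&dir).expect("could not create config dir");

    // Opens the file, creating it on first run; no server, no network, no setup.
    let conn = Connection::open(dir.join("db.file"))?;
    conn.execute_batch(
        "CREATE TABLE IF NOT EXISTS links (id INTEGER PRIMARY KEY, url TEXT NOT NULL);",
    )?;
    Ok(conn)
}

fn main() {
    let conn = open_db().expect("failed to open database");
    conn.execute("INSERT INTO links (url) VALUES (?1)", params!["https://example.com"])
        .expect("insert failed");
}
```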


Stellaris is a trip; don't buy it expecting Mario. The fun is in adapting to random events and to where you start. I'm biased, but my favorite thing to do is to enslave empires that eff me over early game by bioengineering their population into sapient livestock. The game is awesome, but it's deep. Be prepared to lose.


I pronounce gif like zyhfe to annoy both jif and gif pronouncers equally. I also advocate for the initial array index to be .5 to be equally annoying to programmers and mathematicians alike.


As a recently former HPC/supercomputer dork: NFS scales really well. All this talk of encryption etc. is weird; you normally just do that at the link layer if you're worried about security between systems. That, plus v4 to reduce some metadata chattiness, and you're good to go. I've tried scaling Ceph and S3 for latency on 100/200G links; NFS is by far easier to scale than all the rest. For a homelab? NFS and call it a day. All the clustered file systems will make you do a lot more work than just throwing hard into your NFS mount options and letting clients block I/O while you reboot, which for home is probably easiest.
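For the homelab case, that boils down to a client mount entry along these lines (the hostname and export path are placeholders):

```
# Hypothetical /etc/fstab entry: NFSv4.2 with the hard option, so in-flight I/O
# blocks and retries (rather than erroring out) while the server reboots.
nas.example.lan:/export/data  /mnt/data  nfs  hard,nfsvers=4.2,_netdev  0  0
```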
We can replace it with an emu.
It's more like: I've rerouted a few pipes in our test system, and it's now spitting out water known to be contaminated, but it should have some extra sprinkles in it now, so it's fine.
What I'm saying is that it's even worse than not doing any checks: it's willfully ignoring checks that already exist.
That's just RPM. SuSE Linux 1.0 was never built off the same source or installer that Red Hat Linux was.
Do you have a historical example where any SUSE distribution used Red Hat-based source? As I said, openSUSE only used the RPM package manager; it never used any other components of a Red Hat-derived install.
Source: I work there, and I can find zero Red Hat strings in any old source code from that era. The old greybeards took offense at the implication that SUSE was ever based on Red Hat, beyond using RPM, which at the time was about it for packaging.
All they did was start to use RPM instead of tar for packaging.
openSUSE and Fedora have no common history, though? Just because a distro uses RPM doesn't make it a Red Hat derivative.
https://upload.wikimedia.org/wikipedia/commons/1/1b/Linux_Distribution_Timeline.svg
Yeah, this only really applies to Algol-style imperative languages. Dependently typed languages like Idris and array languages like APL rest on dramatically different underlying axioms.


I mean, I've been using native dual stack for over a decade, and I'm most definitely American. A fun anecdote: I was once having issues clicking links from Google, and it turned out IPv4 was busted but v6 worked fine for half a day. And there really isn't any "turning on" IPv6; I get it by default, and that's with the most hated ISP, Comcast. They're actually really good about v6 support; I haven't moved off them because of it. It's literally 10ms faster than v4, likely due to CGNAT.


The USA is ahead of most nations at about 50% adoption, so I'm not sure how you're coming to that conclusion based on the evidence. Outside of maybe Brazil, our IPv6 adoption is better than the rest of the Americas, on both continents, Canada included.


No, don't take shitposts literally. I've been using IPv6 at home in the USA for a decade now, and I've never paid extra for it. Also, why are you assuming this post refers to the US?


It's more likely an admission that they'd have to trampoline every GPL function in the kernel, which isn't really easy to do and would let that kernel module run on any other kernel. Otherwise they'd have to do a shim like NVIDIA, which would mean a whole other level of issues, like saying "we support Linux, but only Ubuntu," which as a non-Ubuntu user would mean to me they do not in fact support Linux. I'd vote with my wallet here, but I already don't own this game; my friends said the user base was terrible years ago. This just means there's no reason to buy any of their games.