IIRC, whenever I check online I find that whatever was configured is dead or no longer the cool choice.
Whatever it is, I barely touch it and it works great. Very happy.
Two Pi-hole servers, one on the VM VLAN, one on the device VLAN, with OPNsense delivering them both via DHCP options. I sometimes update lists, like yearly… at best. They’ve been there over 7 years; calling them robust is correct. The hypervisors are three Proxmox servers in a cluster using Ceph, on 3rd-gen Intel NUCs. Less than 80 W combined with all VMs. Also 8 years old, no failures, but tolerant of one.
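If you want to sanity-check that both resolvers are answering, something like this works; a minimal sketch assuming dnspython, with placeholder addresses standing in for the two VLANs:

```python
# Minimal sketch, assuming dnspython: query each Pi-hole that DHCP
# option 6 hands out. The addresses are placeholders for my VLANs.
import dns.resolver

for server in ("10.0.10.53", "10.0.20.53"):  # assumption: one per VLAN
    r = dns.resolver.Resolver(configure=False)
    r.nameservers = [server]
    r.lifetime = 2.0  # fail fast if a Pi-hole is down
    try:
        r.resolve("example.com", "A")
        print(server, "OK")
    except Exception as exc:
        print(server, "FAILED:", exc)
```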
I think you probably don’t realise that what you hate is standards and certifications. No IT person wants yet another system generating more calls and complexity, but here is ISO, or a cyber insurance policy, or NIST, or the ACSC asking for minimums with checklists, and a cyber review answering them with controls.
Crazy that there’s so little understanding of why it’s there that you just think it’s the “IT guy” wanting those.
SR-IOV already works, though? That’s not needed for this. The motherboard presents the PCI bus to the guest regardless of what’s plugged in. Works fine.
This is for when you want many guests to have shared graphics by partitioning a GPU, so the host retains the card and presents a slice of it to each guest. You need to partition the VRAM equally, though, so it’s generally only useful in VDI, where you might want an RTX A6000-class card split across six guests at 8 GB each: they share the GPU but keep their individual video RAM. The economy of scale can work out in graphics or maybe ML situations. It’s not so useful at home, since you’ll probably have an RTX 3080 with 10-12 GB of VRAM, and you wouldn’t want to split below 8 GB per partition for modern games. Since partitions need to be equally sized, a 10 GB card split in two = 2x5 GB, which would probably be a poor experience: lots of frame stutter as it shuffles things between system RAM and video RAM.
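If you’re curious what equal-sized profiles a card actually offers, the kernel’s mediated-device (mdev) interface exposes them; a minimal sketch, assuming an NVIDIA vGPU-capable host driver and a placeholder PCI address:

```python
# Minimal sketch: list the vGPU profiles a card exposes through the
# kernel's mediated-device (mdev) sysfs interface. Assumes an NVIDIA
# vGPU-capable host driver; the PCI address is a placeholder.
from pathlib import Path

gpu = Path("/sys/bus/pci/devices/0000:01:00.0/mdev_supported_types")
for profile in sorted(gpu.iterdir()):
    name = (profile / "name").read_text().strip()
    avail = (profile / "available_instances").read_text().strip()
    print(f"{profile.name}: {name} ({avail} instances available)")
```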
Hope that helps. Unless this technology unlocks better partitioning, it’s more about opening VDI and machine learning up to a fully open source context like Proxmox, rather than the driver being locked behind Hyper-V, VMware, and Citrix Hypervisor/Xen plus a big yearly license. Maybe it still needs that yearly license.
This is possible now, but on Xen or VMware you need to buy an NVIDIA license to unlock the feature. You can trial it briefly in a lab, but you can’t give four guests 2 GB of VRAM each on your graphics card without NVIDIA’s specialist proprietary driver on both the host and the guest.
For VDI, where you can buy 48 GB RTX A6000 graphics cards, with architects (for example) each getting about 8 GB, you can run six guests concurrently per card. At a few hundred architects, that scales better than buying many $5,000 workstations that struggle with WFH.
For a home user, maybe being able to split a standard RTX 3070 with its 8 GB between your two kids might be OK? Probably not, though.
Right now I have a hacky way, not really supported by NVIDIA, to split a graphics card between two guest VMs, but it’s neither license compatible nor what I’d call “production ready”. I’d like Proxmox to be able to handle this out of the box, because it’s already in the kernel.
I’ve no idea what this means for licensing, though. The yearly license cost just to be allowed to use the driver is stupidly expensive, and the RTX A-series cards are already dumb money.
Either way it’s a good thing, but probably not much news for the average enthusiast.
Pop!_OS
Imo.
I spent like 20 minutes self-hosting it and running it over Tailscale, so traffic is always private… never had an issue. I’ve got over 20 devices accessible on it.
It’s easy to register machines remotely over SSH: just send the installer, run it with the server name plus a key, then set a static password.
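That whole flow is scriptable; a minimal sketch assuming Tailscale’s standard install script and a pre-generated auth key, with the host and key as placeholders:

```python
# Minimal sketch: install Tailscale on a remote box over SSH and register
# it with a pre-generated auth key. Host and key are placeholders.
import subprocess

HOST = "user@remote-box"      # assumption: your SSH target
AUTH_KEY = "tskey-auth-XXXX"  # assumption: key generated in the admin UI

subprocess.run(
    [
        "ssh", HOST,
        "curl -fsSL https://tailscale.com/install.sh | sh && "
        f"sudo tailscale up --auth-key={AUTH_KEY}",
    ],
    check=True,
)
```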
I still think Moonlight is great for gaming, though. You won’t really regret that.
Other than legacy and UEFI, does it have a CSM (Compatibility Support Module) mode? An option to wait for USB initialisation before booting?
Some “boot faster” options reorder boot initialisation so it isn’t holding the system back, which can leave USB devices uninitialised when the boot order is evaluated.
Though I’m really running out of suggestions… I can imagine you’re pretty frustrated. I know my Dell laptop was a pain to get the right settings to boot from USB and to silence the stupid 100 dB beep on boot interruption.
And you’ve probably confirmed that the live boot itself works, I assume.
In the actual BIOS, can you see a boot order with a UEFI entry for Windows (or whatever is on your internal disk), but no other entries?
I suggest a few more things:
Try a different brand of USB stick. Some motherboards don’t support certain USB brands; in fact, a Lenovo server I rebuilt refused to boot off certain sticks.
Some motherboards won’t boot off certain USB ports. Sometimes the additional ports are on another controller and initialise too slowly.
Just try a straight, known-working Ubuntu live USB to remove Ventoy from the equation (see the sketch after this list). Ubuntu ships a Microsoft-signed shim that chains to Canonical’s own signed bootloader. I think that’s how it works; UEFI is a mess.
Try to start isolating all the different factors, and there could be more. It doesn’t necessarily mean anything definitive if it works on another machine.
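For that plain Ubuntu stick, writing the ISO straight to the device is all it takes; a minimal sketch assuming Linux, with placeholder paths. Triple-check the device with lsblk first, because this overwrites it:

```python
# Minimal sketch: stream a stock Ubuntu ISO straight onto a USB stick to
# take Ventoy out of the equation. WARNING: this destroys everything on
# DEV. Both paths are placeholders; verify the device with `lsblk`.
import shutil

ISO = "ubuntu-24.04-desktop-amd64.iso"  # assumption: your downloaded ISO
DEV = "/dev/sdX"                        # assumption: your USB stick

with open(ISO, "rb") as src, open(DEV, "wb") as dst:
    shutil.copyfileobj(src, dst, length=4 * 1024 * 1024)  # 4 MiB chunks
```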
Well good news! Time to let yourself love again!
For me, I want to know how much frame latency there is, since I’m suspicious and want to try things to see the effect, and I just don’t know how to get that information in an OSD like I can with MSI Afterburner.
If someone knows what can do this in Linux, please reply!
Instead I just stopped all competitive and cooperative gaming, which is a bit of a shame. Sometimes I’ll load up Windows to join friends, but usually by the time I’ve updated whatever game, I’ve gotten over it.
Don’t get me wrong, hiccups aside I’m very happy which is why I’m in Linux most of the time. But it’s not always a wonderful world.
Hate to break it to you, but most IT managers don’t care about CrowdStrike: they’re forced to choose some kind of EDR to complete audits. But yes, things like CrowdStrike, Huntress, SentinelOne, even Microsoft Defender, all run on Linux too.
Well, what I really wonder is whether, since the kernel can include it, this will make an install more hardware-agnostic: literally pull my disk out of an NVIDIA gaming machine, plug it into my AMD machine, and get fully working graphics. If so, this is good for me, since I boot my OS from a USB-C NVMe SSD on my work and home machines and laptops for when I’m not working. All three currently have NVIDIA cards and this works OK. I have some games on it to chill and take a break, and my work machine’s core OS (MDM etc.) stays unmodified. I like it that way.
I realise this is not a terribly useful case, but I could see it helping migrations of graphically optimised VMs too, not that I have many. Less work in transitioning gives greater flexibility.
Eating the onion is sure popular today!
How the fuck do you f2 your army on this keyboard when you’ve got dark templars in your mineral line?!
Sorry, to clarify: updates come either as security updates or as feature updates. If I’ve already got a standard operating environment (SOE) with all the features I and my staff need to do our work, I don’t need new features.
I then have to watch CVEs with my CVE trackers to know when software updates are needed; all devices with that software get updated, and the SOE is updated.
I could go on a rant about how Linux has recently made my life harder, as someone’s policy is now that any Linux bug might be a security vulnerability, and therefore I have infinite noise in my CVE feed, which in turn makes deciding how to mitigate security issues hard, but that is beyond this discussion.
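To give a flavour of the filtering involved, here’s a minimal sketch assuming the public NVD 2.0 REST API; the keyword and severity cutoff are illustrative, not my real tracker config:

```python
# Minimal sketch: pull recent CVEs for one product from the public NVD
# 2.0 API and keep only high/critical entries to cut the feed noise.
# The keyword and threshold are illustrative, not a real tracker config.
import requests

resp = requests.get(
    "https://services.nvd.nist.gov/rest/json/cves/2.0",
    params={"keywordSearch": "linux kernel", "resultsPerPage": 50},
    timeout=30,
)
resp.raise_for_status()

for item in resp.json().get("vulnerabilities", []):
    cve = item["cve"]
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    if metrics and metrics[0]["cvssData"]["baseScore"] >= 7.0:
        print(cve["id"], metrics[0]["cvssData"]["baseScore"])
```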
So in short, I’m only talking about applying security fixes when you update, not new software and features. Live patching security vulnerabilities is pretty much free: low effort, low impact, and in my personal opinion absolutely critical. Feature patching, on the other hand, can be disruptive and leaves little to be gained; it should really only be driven by a request for that feature, at which point it would also include an update to the SOE.
Inertia is just a sign of maturity. It’s fine; nothing wrong with it, especially when the new stuff is happening alongside it. In 10 years there may be people asking why you’re using Arch or Nix when whatever new thing is superior, but that will just be proof that Nix can run in production for 10+ years.
It’s solving a real problem in a niche case. Someone called it gimmicky, but it’s actually just a good tool that currently comes from an unknown quantity. Hopefully that gets sorted, or someone else takes up the reins and creates an alternative that works perfectly for all my different ISOs.
For the average home punter, maybe even up to home-lab enthusiast, it’s probably not saving much time. For me it’s on my keyring, and I use it to reload Proxmox hosts, Nutanix hosts, and individual Ubuntu VMs running ROS Noetic, not to mention reimaging test devices. Probably a thrice-weekly thing.
So yeah, cumulatively it’s saving me a lot of time, and it trivialises the process.
If this were a spanner I’d just go Sidchrome or Kincrome instead of my Stanley. But it’s a bit niche, so I don’t know what else allows for such simple multi-ISO boot. Always open to options.