

This won’t work. Your WAN IP isn’t a dynamic public address; it’s on the ISP’s NAT network, and the public IP you present to outside services is shared across many customers. That’s CG-NAT.
I don’t know where you work, but don’t access your tailnet from a work device, and ideally not from their network either.
Speaking of the Roku, you could buy a cheap Raspberry Pi and a USB Ethernet adapter: one port to your network, the other to the Roku. The Pi can advertise a route to the Roku over Tailscale, and the Roku itself probably needs nothing, since everything upstream, including the private Tailscale 100.x.y.z addresses, gets handled by the Raspberry Pi sitting in the middle.
I guess that’d cost like 40-ish dollars, one time.
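If anyone wants to try the Pi-in-the-middle approach, here’s a rough sketch of what the Pi runs. It assumes Tailscale is already installed and that the Roku side of the USB adapter sits on 192.168.2.0/24 (that subnet is just an example, substitute whatever you use). You’d normally type the two commands by hand as root; wrapping them in Python here is only to keep the sketch self-contained.

```python
# Minimal sketch, not a turnkey setup: forward packets and advertise the
# Roku-side subnet into the tailnet. Run as root on the Pi.
import subprocess

ROKU_SUBNET = "192.168.2.0/24"  # example subnet on the USB Ethernet port

# Let the Pi route between the USB NIC and the tailnet.
subprocess.run(["sysctl", "-w", "net.ipv4.ip_forward=1"], check=True)

# Advertise the Roku-side subnet; the route still has to be approved
# in the Tailscale admin console afterwards.
subprocess.run(
    ["tailscale", "up", f"--advertise-routes={ROKU_SUBNET}"],
    check=True,
)
```

Once the advertised route is approved in the admin console, the Roku’s segment should be reachable from the rest of the tailnet.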
Enterprise applications are often developed by the most “quick, ship this feature” kind of developers in the world. Unless the client is paying for the development, a quick look at the database often shows unsalted passwords sitting in a table.
I’ve seen this in construction, medical, recruitment and other industries.
Until cyber security law requires code auditing for handling and maintaining PII, it’s mostly a “you’re fine until you get breached” approach. Even bodies like the ACSC (Australian Cyber Security Centre) have only limited guidelines, practically worthless; at most they suggest MFA for web-facing services. Most cyber security insurers require something, but it’s also practically self-reported, with no proof. So when someone gets breached because everyone’s passwords were left in a table, largely unguarded, the world becomes a worse place and the list of usernames and passwords on haveibeenpwned grows.
Edit: if a client pays, and therefore has the leverage to require things like code audits, security audits, SAML and so on, then it’s a different story. But in the construction industry, say, I’ve seen the same garbage-tier software used at 12 different companies, warts and all. The developer is semi-local to Australia, ignoring the offshore developers…
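For anyone wondering what the alternative to unsalted passwords in a table looks like, here’s a minimal sketch using Python’s standard-library scrypt. It’s purely illustrative; dedicated libraries like bcrypt or argon2 are the more usual choice in real applications.

```python
# Store a per-user random salt plus a slow hash of the password,
# never the password itself and never an unsalted hash.
import hashlib
import hmac
import os

def hash_password(password: str) -> tuple[bytes, bytes]:
    salt = os.urandom(16)                        # unique salt per user
    digest = hashlib.scrypt(password.encode(),
                            salt=salt,
                            n=2**14, r=8, p=1)   # deliberately slow KDF
    return salt, digest                          # store both in the table

def verify_password(password: str, salt: bytes, digest: bytes) -> bool:
    candidate = hashlib.scrypt(password.encode(), salt=salt, n=2**14, r=8, p=1)
    return hmac.compare_digest(candidate, digest)  # constant-time compare
```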
Wow, I read through the blog post, and though I’m not a developer, I’ve compiled and built Linux packages and operating systems in the past, so now I want to fly home and give your script a go myself.
I enjoyed your write up. I can’t comment on programming, but I enjoy a good journey and story.
My final takeaway is your image. I’ll keep it in mind. Interesting!
Make a YouTube video on it and I’ll watch it. I’m not a coder, but benchmarking and debunking is interesting either way it goes. Whether the results come out clear-cut or complex, it’ll be interesting.
I’m far from an expert, sorry, but my experience has been so far so good (literally configured via the wizard in Proxmox, set and forget), even through the loss of a single disk. Performance for VM disks was great.
I can’t see why regular file storage would be any different.
I have 3 disks, one on each host, with Ceph keeping 2 copies (tolerant to 1 disk loss) distributed across them. That’s practically what I think you’re after.
I’m not sure about seeing the file system while the hosts are all offline, but if you’ve got any one system with a valid copy online you should be able to see it. I can. My emphasis, though, is generally on getting the host back online.
I’m not 100% sure what you’re trying to do, but a mix of Ceph as the remote storage plus something like Syncthing on an endpoint to send stuff to it might work? Syncthing might even work on its own, without Ceph.
I also run ZFS on an 8-disk NAS that’s my primary storage, with shares for my Docker containers to send stuff to and the media server to get it off. That’s just TrueNAS Scale, and it handles data similarly. ZFS is also very good, but until Scale came out it wasn’t really possible to do the “add a compute node to expand your storage pool” model, which is how I want my VM hosts to work. Scaling ZFS out that way looks much harder than Ceph.
Not sure if any of that is helpful for your case, but I recommend trying something if you’ve got spare hardware: see how it goes on dummy data, then blow it away and try something else. See how it acts when you take a machine offline. When you know what you want, do a final blow-away and implement it the way you learned works best.
3x Intel NUC 6th gen i5 (2 cores), 32 GB RAM each. Proxmox cluster with Ceph.
I just ignored the limitation and tried a single 32 GB SODIMM once (out of a laptop) and it worked fine, but went back to 2x16 GB DIMMs since the real limit was still the 2 cores of CPU. Lol.
I’ve been running that cluster for 7 or so years now, since I bought them new.
I’d suggest you’re fine running off shit-tier hardware, since three nodes gives redundancy and enough performance. I’ve run entire proof-of-concepts for clients off them: dual domain controllers plus an RDS stack (RD Gateway, broker, session hosts, FSLogix, etc.), back when MS had only just bought that tech. Meanwhile my home *arr stack just chugs along in Docker containers. Even my OPNsense router runs virtualised on them; just get a proper managed switch, bring the internet in on a VLAN, and hand it to the guest VM on a separate virtual NIC.
Point is, it’s still capable today.
It’s solving a real problem in a niche case. Someone called it gimmicky, but it’s actually just a good tool, currently produced by an unknown quantity. Hopefully that gets sorted, or someone else takes up the reins and creates an alternative that works perfectly for all my different ISOs.
For the average home punter, maybe even up to the home-lab enthusiast, it’s probably not saving much time. For me it lives on my keyring and I use it to reload Proxmox hosts, Nutanix hosts, and individual Ubuntu VMs running ROS Noetic, not to mention reimaging test devices. Probably a thrice-weekly thing.
So yeah, cumulatively it’s saving me a lot of time, and it just trivialises the process.
If this were a spanner I’d just go Sidchrome or Kincrome instead of my Stanley. But it’s a bit niche, so I don’t know what else allows such simple multi-ISO boot. Always open to options.
IIRC, whenever I check online I seem to find that whatever I configured is dead or no longer the cool choice.
Whatever it is, I barely touch it and it works great. Very happy.
Two Pi-hole servers, one on the VM VLAN, one on the device VLAN, with OPNsense handing them both out via DHCP options. I sometimes update the lists, like yearly… at best. They’ve been there over 7 years; calling them robust is correct. The hypervisors are 3 Proxmox servers in a cluster using Ceph, Intel NUC 3rd gen, less than 80 W combined with all VMs. Also 8 years old, no failures, but tolerant of one.
I think what you probably don’t realise is that you hate standards and certifications. No IT person wants yet another system generating more calls and complexity, but here comes ISO, or a cyber insurance policy, or NIST, or the ACSC asking for minimums with checklists, and a cyber review answering them with controls.
Crazy that there’s so little understanding of why it’s there that you just think it’s the “IT guy” wanting those things.
SR-IOV works already, though? That’s not needed for this. The motherboard presents the PCI device to the guest regardless of what’s plugged in. Works fine.
This is for when you want many guests to share graphics by partitioning a GPU. The host still retains the card and presents a slice of it to each guest. You need to partition the VRAM equally though, so it’s generally only useful in VDI, where you might split an RTX A6000-class card (48 GB) into six guests with 8 GB each: they share the GPU but keep their individual video RAM. The economy of scale can work out in graphics or maybe ML situations. It’s not so useful at home, since you’ll probably have an RTX 3080 with like 10-12 GB of VRAM, you wouldn’t want to split below 8 GB for modern games, and partitions need to be equally sized. For 10 GB, two partitions = 2x5 GB, which would probably be a poor experience: lots of frame stutter as it shuffles data between system RAM and video RAM.
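If you want to sanity-check the partition maths yourself, here’s a trivial sketch. The card sizes are assumptions for illustration, and real vGPU profiles only come in fixed, equal sizes.

```python
# Back-of-the-envelope maths for equal-size VRAM partitions per card.
def guests_per_card(total_vram_gb: int, vram_per_guest_gb: int) -> int:
    """How many guests fit if every guest gets the same VRAM slice."""
    return total_vram_gb // vram_per_guest_gb

print(guests_per_card(48, 8))   # A6000-class card, 8 GB profiles -> 6 VDI guests
print(guests_per_card(10, 5))   # 10 GB RTX 3080 -> only 2 x 5 GB, rough for gaming
```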
Hope that helps. Unless this technology unlocks better partitioning, it’s more about opening up VDI and machine learning in a fully open-source context like Proxmox, rather than the driver being locked behind Hyper-V, VMware, and Citrix Hypervisor/Xen plus a big yearly license. Maybe it still needs that yearly license.
This is possible now, but on Xen or VMware you need to buy an NVIDIA license to unlock the feature. You can trial it briefly in a lab, but you can’t give 4 guests 2 GB of VRAM each on your graphics card without NVIDIA’s specialist proprietary driver on both the host and the guest.
For VDI, where you can buy 48 GB RTX A6000 graphics cards, with architects (for example) each getting about 8 GB, you can run six guests concurrently per card. At a few hundred architects, that scales better than buying a pile of $5000 workstations that struggle with WFH.
For a home user, maybe being able to split a standard RTX 3070 with, what, like 8 GB between your two kids might be OK? Probably not though.
Right now I have a hacky way, not really supported by NVIDIA, to split a graphics card between two guest VMs, but it’s neither licence-compliant nor what I’d call “production ready”. I’d like Proxmox to handle this out of the box, because the support is already in the kernel.
I’ve no idea what this means for licensing though. The yearly licence cost just to be allowed to use the driver is stupidly expensive, and the RTX A-series cards are already dumb money.
Either way it’s a good thing, but probably not big news for the average enthusiast.
Pop!_OS, imo.
I spent like 20 minutes self-hosting it and running it over Tailscale so traffic is always private… never had an issue. I’ve got over 20 devices accessible on it.
It’s easy to register devices remotely over SSH: just send the installer, run it with the server name plus key, then set a static password.
I still think that for gaming, Moonlight is great though. You won’t really regret that.
Other than legacy and UEFI, does it have a CSM (compatibility support module) mode? An option to enable USB initialisation before boot, e.g. wait for USB initialisation?
Some “boot faster” options reorder or skip bits of boot initialisation so they don’t hold the system back, which can mean USB isn’t ready in time.
Though I’m really running out of suggestions… I can imagine you’re pretty frustrated. I know my Dell laptop was a pain to get the right settings for USB boot, and to silence the stupid 100 dB beep on boot interruption.
And you probably confirmed that the live boot works too, I assume.
In the actual BIOS, can you see a boot order with a UEFI entry for Windows/whatever is on your internal disk, but no other entries?
I suggest a few more things:
Try a different brand of USB stick. Some motherboards don’t play well with certain USB brands; in fact, a Lenovo server I rebuilt refused to boot off certain sticks.
Some motherboards won’t boot off certain USB ports. Sometimes the additional ports are on another controller and initialise too slowly.
Just try a straight, known-working Ubuntu live-boot USB to take Ventoy out of the equation. Ubuntu’s boot chain is properly signed for Secure Boot (a Microsoft-signed shim that loads Canonical’s GRUB). I think that’s how it works; UEFI is a mess.
Try to start isolating all the different factors, and there could be more. It doesn’t necessarily mean anything definitive if it works on another machine.
If DNS resolves, then DNS isn’t what’s blocked. You need to look at your network.
Bypass DNS and connect straight to the IP and port. What happens?
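If you don’t have curl or telnet handy, here’s a quick sketch for testing the IP and port directly; the address and port are placeholders, substitute your server’s.

```python
# Skip DNS entirely and try a plain TCP connection to the IP and port.
import socket

ip, port = "203.0.113.10", 443   # placeholders: use the real server IP and port

try:
    with socket.create_connection((ip, port), timeout=5):
        print("TCP connect OK: the port is reachable, so look at DNS or the client")
except OSError as exc:
    print(f"TCP connect failed: {exc}. Something on the network path is blocking it")
```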