![](https://programming.dev/pictrs/image/170721ad-9010-470f-a4a4-ead95f51f13b.png)
Holy shit that’s brilliant
I know it’s not totally relevant but I once convinced a company to run their log aggregators with 75 servers and 15 disks in raid0 each.
We relied on the app layer to make sure there were at least three copies of the data, and if a node’s array shat the bed, the rest of the cluster would heal and replicate what was lost. Once the DC people swapped the disk, we had automation to rebuild the array and add the host back into the cluster.
It was glorious - 75 servers each splitting the read/write operations 1/75th and then each server splitting that further between 15 disks. Each query had the potential to have ~1100 disks respond in concert, each with a tiny slice of the data you asked for. It was SO fast.
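Not the real system, obviously, but the scatter/gather idea described above (hash each record to a node, then to a disk within the node, let every slice answer independently, merge the results) can be sketched like this — all names and counts here are illustrative:

```python
# Toy sketch of the fan-out layout described above.
# NODES, DISKS_PER_NODE, and the hashing scheme are illustrative only.
NODES = 75           # servers, each one big RAID0 array
DISKS_PER_NODE = 15  # disks striped inside each server

def fan_out(query, records):
    """Shard records across (node, disk) slices, then 'query' each slice.

    In the real cluster each slice would be queried in parallel on its
    own hardware; here we just build the slices and merge the answers.
    """
    shards = {}
    for rec in records:
        node = hash(rec) % NODES                  # pick a server
        disk = hash((rec, "disk")) % DISKS_PER_NODE  # pick a disk on it
        shards.setdefault((node, disk), []).append(rec)
    # every slice answers independently; the coordinator merges results
    return [r for slice_ in shards.values() for r in slice_ if query(r)]

logs = [f"line {i} error" for i in range(10)]
hits = fan_out(lambda r: "error" in r, logs)
```

The point of the sketch is just the two-level split: a query only touches a tiny slice of data per disk, so ~1100 disks can each do a small amount of work in concert.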
Our patch improves data harvesting speed by 13%!
I put my cloud in containers
I’ve asked for help finding API endpoints that do what I want because I’m feeling too lazy to pore over docs, and it’ll just invent endpoints that don’t exist
Blasphemy, that’s not regex that’s just fancy grep
Any roadblocks? Any roadblocks? Squaaa, any roadblocks?
Perhaps more than one thing, who can know?
for X in $(seq -f 'host%02g' 1 9); do echo "$X"; ssh -q "$X" "grep the shit"; done
:)
But yeah fair, I do actually use a big data stack for log monitoring and searching… it’s just way more usable haha
Ahhh I see - thanks!! 🙏
Awesome project! Love the photos and breakdown, very well presented and explained, thanks for sharing! 🙂
Hey, not OP, but I’m also getting ready to convert my Tuya stuff over to Home Assistant - what is this new coordinator you mention? Something within HA? Any extra info you could share would be appreciated!
Personally it’s just a matter of me never really using my webcam and not minding moving a little bit of electrical tape if I need to. It’s such a small inconvenience that I can’t see why not.
Stop at locker 🤣