I’ve been using Syncthing-Fork (on F-Droid) for the extra features it has. I wonder if that developer will be able to continue.
This is correct, and alongside the Rust development they also started work on a web rendering engine written in Rust, called Servo, that was at one point considered a possible future replacement for the Gecko engine. Around the time Rust transitioned to the Rust Foundation, Servo was also pretty much abandoned by Mozilla; it moved to the Linux Foundation in 2020 and then to Linux Foundation Europe in 2023, where it is finally getting some steady development again. There is also some recent progress on building a browser based on Servo, although it will probably be some time before it’s ready for daily use.
I work night shift and use blackout curtains and earplugs to improve my sleep during the day. Rather than cranking the volume on my alarm so it’s loud enough to consistently wake me up, I use Home Assistant to turn on some smart bulbs as my alarm. When I started (and even now if I have to be up extra early), I also set an audible alarm to go off a few minutes after the lights come on, just in case the light doesn’t wake me up. But at this point my brain has gotten used to waking up to the lights, and I usually wake up and turn off the backup alarm before it goes off.
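If anyone wants to borrow the idea without digging into Home Assistant’s automation editor, here’s a rough sketch of the same logic driven through Home Assistant’s REST API. To be clear, this isn’t my actual automation (that lives in Home Assistant’s own automation engine), and the URL, token, and entity IDs are all placeholders:

```python
#!/usr/bin/env python3
"""Rough sketch: fade the bedroom lights up as the alarm, then fire a backup
audible alarm a few minutes later. The URL, token, and entity IDs below are
placeholders, not my real setup."""
import json
import time
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # placeholder Home Assistant address
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder long-lived access token


def call_service(domain: str, service: str, data: dict) -> None:
    """POST to Home Assistant's /api/services/<domain>/<service> endpoint."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/{domain}/{service}",
        data=json.dumps(data).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


# "Alarm": fade the bedroom lights up to full brightness over 5 minutes.
call_service("light", "turn_on",
             {"entity_id": "light.bedroom",
              "brightness_pct": 100,
              "transition": 300})

# Backup audible alarm ~10 minutes later, in case the light alone doesn't do it.
time.sleep(10 * 60)
call_service("media_player", "play_media",
             {"entity_id": "media_player.bedroom_speaker",
              "media_content_id": "media-source://media_source/local/alarm.mp3",
              "media_content_type": "music"})
```

The transition parameter is the nice part: the lights fade up gradually instead of blasting on at full brightness.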
Another useful automation for me involves a buggy Samsung PC monitor that has all sorts of annoying issues: it doesn’t consistently wake from deep sleep, which requires a hard power cycle to correct, and when it is asleep there’s some weird high-pitched whine that beeps in time with the flashing standby light. I use a couple of smart plugs with power monitoring to watch my PC’s power draw and switch the monitor’s power on and off at the wall depending on whether the PC is on.
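The logic is basically just “read the PC plug’s power sensor, flip the monitor’s plug to match.” Here’s a rough Python sketch against the same REST API as above - again, the entity IDs, threshold, and polling interval are made-up placeholders rather than my actual config:

```python
#!/usr/bin/env python3
"""Rough sketch: mirror the monitor's wall power to the PC's power draw.
Entity IDs, threshold, and poll interval are placeholders."""
import json
import time
import urllib.request

HA_URL = "http://homeassistant.local:8123"  # placeholder
TOKEN = "YOUR_LONG_LIVED_ACCESS_TOKEN"      # placeholder


def get_state(entity_id: str) -> str:
    """Read an entity's current state from /api/states/<entity_id>."""
    req = urllib.request.Request(
        f"{HA_URL}/api/states/{entity_id}",
        headers={"Authorization": f"Bearer {TOKEN}"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["state"]


def set_switch(entity_id: str, on: bool) -> None:
    """Turn a switch entity on or off via the switch service."""
    req = urllib.request.Request(
        f"{HA_URL}/api/services/switch/{'turn_on' if on else 'turn_off'}",
        data=json.dumps({"entity_id": entity_id}).encode(),
        headers={"Authorization": f"Bearer {TOKEN}",
                 "Content-Type": "application/json"},
    )
    urllib.request.urlopen(req)


while True:
    # Power (in watts) reported by the smart plug the PC is on.
    state = get_state("sensor.pc_plug_power")
    if state not in ("unknown", "unavailable"):
        # Treat anything above ~40 W as "the PC is actually on", not just standby.
        set_switch("switch.monitor_plug", float(state) > 40)
    time.sleep(30)
```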
Mine is that the “fake session restore” hasn’t landed yet (it will in 6.1), so the half-dozen Konsole and Dolphin windows I tend to leave open between reboots don’t re-open after rebooting under Wayland.
The reason it’s called “fake session restore” is that the Wayland session restore protocol isn’t finalized, so as a temporary workaround Plasma on Wayland will re-launch the applications and leave it up to the individual programs to restore their last state.
Not sure exactly how well this would work for your use case of routing all traffic, but I use autossh and SSH reverse tunneling to forward a few local ports/services from my local machine to my VPS, where I can then proxy those ports with nginx or Apache on the VPS. It might take a bit of extra configuration to go this route, but it’s been reliable for me for years. WireGuard is probably the “newer, right way” to do what I’m doing, but personally I find SSH tunnels a bit simpler to wrap my head around and manage.
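For reference, the tunnel side looks roughly like this. I’ve wrapped the command in a little Python launcher purely so the flags can be commented; the username, hostname, and ports are placeholders, and in practice something like this would run under a systemd unit:

```python
#!/usr/bin/env python3
"""Rough sketch of the autossh reverse-tunnel side. Username, hostname, and
ports are placeholders; this mostly just documents the flags involved."""
import subprocess

cmd = [
    "autossh",
    "-M", "0",                         # no monitoring port; rely on ServerAlive* below
    "-N",                              # don't run a remote command, just forward ports
    "-o", "ServerAliveInterval=30",    # keepalives so a dead tunnel gets noticed...
    "-o", "ServerAliveCountMax=3",     # ...and torn down so autossh can reconnect
    "-o", "ExitOnForwardFailure=yes",  # exit (and restart) if a forward can't be set up
    # Reverse forwards: VPS port -> local service
    "-R", "8080:localhost:80",         # e.g. a local web app
    "-R", "2222:localhost:22",         # e.g. SSH back into the local box
    "user@my-vps.example.com",
]
subprocess.run(cmd, check=False)  # normally supervised by systemd or similar
```

On the VPS side, nginx then just proxies to 127.0.0.1:8080 (or whichever port you picked for the reverse forward) with a normal proxy_pass.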
Technically WireGuard would have a touch less latency, but most of the latency will come from the round-trip distance between you and your VPS, and the difference between the protocols is comparatively negligible.
I figured you were being genuine, but there are usually a few people who point at Microsoft’s “embracing” of Linux as the first step in the “embrace, extend, extinguish” trope and see any involvement by Microsoft as nefarious. The reality is just that Microsoft’s Azure cloud services are a much larger share of their annual revenue than Windows, and Linux is a major part of their cloud offerings.
If you browse the LKML (Linux Kernel Mailing List) for 5 minutes, you’ll probably see a bunch of microsoft.com email addresses, and it’s been that way for years. I understand why it bothers some people, but Linus (and a couple of others) approve everything that actually gets merged, whether it’s from a Microsoft employee, a Red Hat employee, or anyone else. Even if Microsoft wanted to pay employees to submit patches that would hurt the kernel, the chance they’d actually be approved is so low it wouldn’t be worth their time.
Maybe I’ll give it another go soon to see if things have improved for what I need since I last tried. I do have a couple of aging servers that will probably need upgrading soon anyway, and I’m sure the Python scripts I’ve used in the past to help automate server migrations will need updating too.
I think my skepticism, and my desire to have Docker get out of my way, has more to do with already knowing the underlying mechanics, being used to managing services before Docker was a thing, and then Docker coming along and saying “just learn Docker instead.” Which would be fine if it didn’t mean not only a shift away from what I already know, but a separation from it, with extra networking and Docker configuration to fuss with. If I weren’t already used to managing servers pre-Docker, then yeah, I’d totally get it.
I’ll probably make the jump when Plasma 6.1 releases with the “real, fake session restore” functionality; I was hoping that would make it into Plasma 6. I am daily driving Wayland on my laptop now, but I kinda need my programs (or at least file managers and terminal windows) to re-open the way they were between reboots.
Thanks to kscreen-doctor, I’ve been able to port most of the desktop scripts I use for managing my multiple monitors to Wayland, and krdc/krfb have been a decent enough replacement for x11vnc or x2go for remotely accessing the desktop on my home server/NAS (I know, desktops on servers are considered sacrilege, but it’s been useful too many times for me to get rid of it at this point).
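To give an idea of what those scripts look like, here’s roughly the shape of one of them - the output names, modes, and positions are placeholders, since they’ll be whatever `kscreen-doctor -o` reports on your system:

```python
#!/usr/bin/env python3
"""Rough sketch: toggle a "TV mode" on Plasma Wayland with kscreen-doctor.
Output names (DP-1, HDMI-A-1), modes, and positions are placeholders;
list yours with `kscreen-doctor -o`."""
import subprocess
import sys


def kscreen(*args: str) -> None:
    """Apply a set of kscreen-doctor output settings in one invocation."""
    subprocess.run(["kscreen-doctor", *args], check=True)


if len(sys.argv) > 1 and sys.argv[1] == "tv":
    # Switch to the TV: disable the desk monitor, enable the TV output.
    kscreen("output.DP-1.disable",
            "output.HDMI-A-1.enable",
            "output.HDMI-A-1.mode.1920x1080@60",
            "output.HDMI-A-1.position.0,0")
else:
    # Back to the normal desk setup.
    kscreen("output.HDMI-A-1.disable",
            "output.DP-1.enable",
            "output.DP-1.mode.2560x1440@144",
            "output.DP-1.position.0,0")
```

It basically fills the same role xrandr did in my old X11 scripts.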
Where Wayland currently shines for me is VR: SteamVR works better, and more consistently, on Plasma Wayland than on X11 at this point, which is probably more of a Valve thing than a Wayland thing. When I first got my Index, X11 worked fine, but there have been times when SteamVR on Linux being “broken” made the news on Phoronix/Gaming on Linux while still working fine on Plasma Wayland (which seems to be where Valve is doing most of their SteamVR Linux testing as of late).
As an end user, I do wish the Wayland specification process were organized better, because from the outside it seems a lot of the bickering comes down to everyone having different end goals. I think splitting the different styles of window management into their own sub-specs or extensions, and then figuring out what can move into the core after everyone has built what they need, would be better than the current approach of compromising their way through every little decision, which doesn’t always make sense for every use case. Work together when it makes sense, but understand that there are times when it doesn’t, and sometimes you can’t please every stick in the mud and have to do your own thing without them. I do get the appeal of doing things right the first time, even if it takes more time, but it seems like usability is always the thing that gets sacrificed when compromises are made.
That’s a big reason I actively avoid Docker on my servers: I don’t like running a dozen instances of my database software, and considering how much work it would take to go through and configure each container to use an external database, to me it’s just as easy to learn to configure each piece of software yourself and know what’s going on under the hood, rather than relying on a bunch of defaults chosen by whoever made the Docker image.
I hope a good number of my issues have been solved since I last seriously tried to use Docker (which was back when they were literally giving away free t-shirts to get people to try it). But from the times I’ve peeked at it since, it seems to me that Docker gets in the way more often than it solves problems.
I don’t mean to yuck other people’s yum though, so if you like Docker and it works for you, don’t let me stop you from enjoying it. I just can’t justify the overhead for myself (both at the system-resource level and at the personal-time level of inserting an additional layer of configuration between me and my software).
I’ll be interested to see how seamless Mozilla is able to make the transition away from Onerep for users. I’ve been subscribed to Mozilla Monitor for a bit over a month, and the service has been decent enough for what it is. I know a bunch of people dropped their subscriptions when the original story dropped, but I’m probably going to stick it out just to see how it goes.
I already kinda saw the expense of Mozilla Monitor as mostly an investment in Mozilla’s financial independence anyway, while getting something out of it for myself in the process. And while they absolutely should have done better vetting, I’m not exactly sure I trust the alternatives that much more, considering Incogni is owned by Surfshark, which is owned by Nord. Both Incogni and DeleteMe sure buy a lot of YouTube sponsor spots - not that that necessarily means anything, but I find it’s best to be skeptical of companies that go full-on Squarespace with YouTube sponsorships.
I’ve been helping out with bug triage since Plasma 6 released (I’m not a Plasma developer though, so take whatever I say with a healthy helping of salt). There have been a fair number of bugs regarding fonts and fractional scaling, and a bunch of them seem to be upstream Qt bugs. So, it’s not just you or your system. There seem to be workarounds for some cases, but unfortunately I can’t say how long it will take for an out-of-the-box fix.
There is still a desktop overview that allows dragging windows between virtual desktops (Meta+G). Unfortunately, when they removed the old overview, they forgot to fully integrate the new one, so it can’t be activated by screen edges (which is how I used to access the old desktop overview).
I think that’s actually what Discord should be used for. It’s one of the better platforms for voice/video/text chat. It’s mostly when people use Discord for what should be a public forum or wiki that it becomes a problem.
And sure, it’s not a great place for open-source developers to do all their communication, because being able to reference things later is important, and that’s gone if a project lead closes the server. But it’s probably fine for coding sprints and the occasional meeting, as long as someone is taking notes to be documented elsewhere. Discord is arguably better than Zoom for that use case.
Based on the documentation on the GitHub, it looks like it does use Haier’s cloud. That doesn’t make Haier’s actions any less shitty, but I can understand a company not wanting a bunch of users hitting their undocumented API, especially if there’s potential for automations to hit it more frequently than their own app does (not that I have any reason to believe this project was actually being inefficient with API calls).
I have a similar setup. Even for hard drives and slower SSDs on a NAS, 10G has been beneficial. 2.5 gig would probably be sufficient for most of what I do, but even a few years ago, when I bought my used Mellanox SFP+ cards on eBay, it was basically just as cheap to go full 10G (although 2.5 gig Ethernet ports are a bit more common to find built in these days, so depending on your hardware, that might be a cheaper place to start). But even from a network-congestion standpoint, having my own private link to my NAS is really nice.
I’ve dabbled with some monitoring tools in the past, but never really stuck with anything proper for very long. I usually notice issues myself. I self-host a custom new-tab page that I use across all my devices, and between that, the Nextcloud clients, and my Home Assistant reverse proxy on the same VPS, when I do have unexpected downtime I usually notice within a few minutes.
Other than that, I run fail2ban and have my VPS configured to send me a text message/notification whenever someone successfully logs in to a shell via SSH, just in case (one way to set up that kind of hook is sketched below).
Based on the logs over the years, most bots that try to log in use usernames like admin or root. I have root login disabled for SSH, the one account that can be used over SSH has a non-obvious username that would also have to be guessed before an attacker could even try passwords, and fail2ban does a good job of blocking IPs after a few failed attempts.
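I won’t claim this is exactly how mine is wired up, but one common way to get that kind of login notification is a pam_exec session hook in /etc/pam.d/sshd pointing at a small script, something like the sketch below. The notification endpoint is a placeholder - ntfy, Pushover, an email-to-SMS gateway, whatever you prefer:

```python
#!/usr/bin/env python3
"""Rough sketch: notify on successful SSH logins via a pam_exec session hook.
Wire it up with a line like:
    session optional pam_exec.so /usr/local/bin/ssh-login-notify.py
in /etc/pam.d/sshd. The notification URL below is a placeholder."""
import os
import urllib.request

# pam_exec exports these environment variables to session hooks.
if os.environ.get("PAM_TYPE") == "open_session":
    user = os.environ.get("PAM_USER", "?")
    rhost = os.environ.get("PAM_RHOST", "?")
    msg = f"SSH login: {user} from {rhost}"
    # POST the message to a notification service (placeholder endpoint).
    req = urllib.request.Request(
        "https://ntfy.example.com/ssh-logins",
        data=msg.encode(),
    )
    urllib.request.urlopen(req, timeout=10)
```

Marking the PAM line as “optional” means a broken notification script won’t lock you out of SSH.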
If I used containers, I would probably want a way to monitor them, but I personally dislike containers (for myself, I’m not here to “yuck” anyone’s “yum”) and deliberately avoid them.
We could always start a theory that Elon is the Zodiac Killer. He probably doesn’t deserve that level of notoriety, but it would make for an overly drawn out crime documentary. Just bring in some “experts” off of Ancient Aliens, along with the phrase “could it be” so you have some standing when you’re inevitably sued for slander.
As long as they don’t remove the option of having a separate search field beside the address bar, I’ll still be happy. I know it has probably been a decade since Chrome merged the search and address bars (and Firefox followed), but IMO it makes the experience of both searching and typing addresses worse, in exchange for an at-best mildly cleaner UI.