I haven’t had this happen in years, maybe it’s my config? I’m using GPT on a UEFI system (in UEFI mode), with systemd-boot.
I do remember having tons of issues back when I was using grub on an MBR system using legacy bios emulation.
Oh yikes sorry for the hostility, I definitely did mix you up with OP.
Someone has invested: the solution is tiling window managers.
As 217 people have told you in this thread, tiling window managers allow you to keep all your windows full screen if you want.
Sounds like your screen is too close to your face.
Yeah, definitely a matter of workflow and personal preference. Nobody wants to convert anyone else; you asked why people use tiling WMs, and people are answering.
why tile windows at all
I can answer that pretty comfortably. There are two main reasons. The first is that it’s very common to have to look at two things at once. If I’m taking notes while reading something complicated, or writing some complex code while referencing the documentation, or tweaking CSS rules while looking at the page I’m working on, it’s just way too disruptive to constantly have to switch windows.
The second main reason (for me) is that a lot of the time, the content of a single window is too small to make use of the space on your monitor. In those cases, if I have something else I’m working on and it’s also small, I’ll tile them. It might be easy to toggle between windows with a hotkey, but it’s strictly easier to not have to toggle, and just move your eyes over. Peripheral vision means that you don’t entirely lose the context of either window. When you’re ready to switch back to the one you just left, you don’t have to touch anything, and you don’t have to wait for the window to render to visually locate where you left off.
If you’re only actively using one window at a time, that makes sense, but alt+tabbing through a stack of 8 open applications to go back and forth between something you’re working on and something you’re closely referencing sucks. If your primary workflow for a computer involves that, I honestly don’t understand how someone can live without tiling.
I like vimium, but qutebrowser is way faster for me. It’s my go-to for research or reading documentation.
You really hit the nail on the head here. Never having to take your hands off the keyboard, while always having windows take up exactly the right amount of room, is the main reason I hate having to use a non-tiling WM.
And your other point is spot on, too. Any workflow that you use in a standard WM you can also do in a tiling WM, just (imo) more easily. And there are lots of workflows that are agonizing without tiling functionality.
I want to read this book full screen. Hang on, didn’t that other book say something different about this? I want to open it. This is complex, I want to compare side-by-side. Oh, I get it, I should take notes on both of these. Hang on, I need to look at both books while taking notes. Okay I’m done with the second book but I still want to take notes on the first.
Slogging a mouse around to click, drag, click, drag, double click, drag, all while repositioning your hands to type, sucks so bad.
The case is even more clear when you consider that the concept of tiling WMs is just an extension of the game-changing paradigm behind terminal multiplexers and IDE splits.
It’s just better. There’s probably a bit of an adjustment when you’re first adapting to it, especially if you’re really used to a mouse-centric, window-draggy workflow, which is likely the only reason that people think they don’t like them.
Honestly, if you’re using 3 monitors, you’re kind of using a single display split into a minimum of 3 tiles.
Tiling window managers support a workflow with one large monitor that you can split into n tiles whichever way you want without touching your mouse.
I’m not saying it’s objectively better or anything, but once you get past the learning curve, having to manually size all of your windows is a chore. I love having my browser window open full screen, pressing a hotkey, and having a text editor open next to it taking up 1/3rd of the screen, with the browser resized to fit.
Mostly, things are full screen, and I love that my WM launches apps in full screen automatically, unless there’s another window open on the workspace I’m targeting.
And when they’re not in full screen, it’s all handled smoothly without me ever having to take my hands off the keyboard.
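To make that concrete, here’s a toy sketch of the geometry the WM works out for you when that second window opens. This is just illustrative Python, not any real WM’s code, and the monitor resolution and the 2/3 ratio are made-up assumptions:

```python
# Toy model of the split a tiling WM performs when a second window opens
# on a full-screen workspace. Not any real WM's API; the screen size and
# the 2/3 ratio are made-up assumptions for illustration.

SCREEN_W, SCREEN_H = 2560, 1440  # hypothetical monitor

def hsplit(ratio):
    """Split the screen into left/right rectangles (x, y, w, h) at `ratio`."""
    left_w = round(SCREEN_W * ratio)
    left = (0, 0, left_w, SCREEN_H)
    right = (left_w, 0, SCREEN_W - left_w, SCREEN_H)
    return left, right

# Browser was full screen; one hotkey later, the editor takes 1/3 and the
# browser is automatically resized to the remaining 2/3.
browser, editor = hsplit(2 / 3)
print("browser:", browser)  # (0, 0, 1707, 1440)
print("editor:", editor)    # (1707, 0, 853, 1440)
```

A real WM redoes that computation on every open/close/hotkey, which is exactly why you never have to drag a border yourself.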
Do you have a small monitor?
In my opinion, on a >32" 4K or 1440p display, full screen is just way too big for a single window. Which isn’t a problem, because as easy as it is to switch between two windows, it’s even easier not to. Especially for things like having a web browser and dev tools open, switching back and forth every time I tweak a CSS rule would be agonizing.
I’ve used arch for the past 10 years or so as my primary OS, and it only took 7 or 8 of those years to get the OS set up.
/s In all seriousness, I kind of get what you’re saying, but I don’t think that having a bad experience is the goal at all. I think the goal is to provide an OS that lets users decide on exactly what collection of packages they want on their system, and to provide packages that are up to date and unmodified from their upstream.
Setting up your system additively comes with a cost, though. It’s way less convenient than just installing something that someone else has configured.
Personally, I think the one-time upfront cost of setting up arch is less burdensome than dealing with configuration files that have been moved to non-default locations (transmission-daemon on Debian-based distros is one example), packages that are seriously out of date and thus missing new features and bug fixes (neovim), and cleaning up packages if you prefer non-default software and don’t want a ton of clutter.
Definitely valid to prefer a preconfigured system, I just think it’s a misrepresentation to say that the point of arch is to be difficult, or that configuration takes a ton of time for users of arch. Maybe learning to use arch takes longer, but learning to use arch is just learning to use Linux, so it’s hard for me to see that as a bad thing. And it doesn’t take that long to learn, I was more productive in arch after a couple days than I’ve ever been on *buntu, Debian or Mint.
Sounds like user error.
I think the answer to your question about why it’s frustrating for some people and not others has a lot to do with use case.
One use case that easily makes Linux way less frustrating is developing software, especially in low-level languages. If you’re writing and debugging software, reading documentation is something you do every day anyway, which makes Linux a lot easier to deal with. Most of the issues where people break their systems, don’t know how it happened, and can’t figure out how to fix it come from defaulting to copying bash commands from a Wordpress blog from 2007 instead of actually reading the documentation for their system. And if you’re developing software, a lot of the software you’re installing and using is open source, so you benefit tremendously from a package manager that’s baked into the OS.
If your use case is anything like that, Windows in particular is way more frustrating to use IMO.
If instead your use case is a web browser and a collection of proprietary closed-source GUI tools, then most of the benefits you get from Linux are less tangible. You get the benefit of using a free and open source OS, not being tied into something that’s built to spy on you, not supporting companies that use copyrights to limit free access to information and tools, etc. Those benefits are great and super important, and I would still recommend Linux if you’re up to it, but they definitely don’t make computing any easier.
If your use case is anything like the second one, you’re probably used to following online guides without needing to understand how each step works, and you’re probably used to software making it hard for you to break things in a meaningful way. Both of those things directly contribute to making Linux frustrating to use at times.
If you’re in the second category, the best advice is to get used to going to the official webpage for the applications you use and actually reading the docs. When you run into a problem, try to find information about it in the docs. It’s fine to use guides or other resources, but whenever you do, look up the docs for the commands you’re running and actually understand what you’re doing. RTFM is a thing for a reason haha.
Okay, so I kind of lied: when I set up my radarr/sonarr/transmission/etc docker compose setup earlier this year, I did purchase PIA VPN, which is like $60 per year I believe. Didn’t want to have to think about it anymore, and I can afford it now, so whatever.
But still, over 20 years, that’s like a $1200 savings. When all that you’re realistically risking is having to switch ISPs, and that’s so unlikely that I’ve never met anyone who had to do it, I don’t think it’s as big of a deal as people make it out to be.
Having said that, don’t pirate things without a VPN and then blame me when the fuzz comes for ya
Yeah, if they’re going to twist my arm, that’s one thing haha. Surprised that your ISP actually took action.
I’ve always thought it had something to do with absolving themselves of liability.
From what I understand, companies hired by copyright owners send a DMCA request to whichever ISP owns the IP addresses that show up in their honeypots.
The ISP has to act on those requests in some way, so they send a sternly worded letter that basically says “we have been notified that your network was used to download copyrighted material illegally. Piracy is bad, you naughty boy/girl. If this continues, we may have to take action, which could include canceling your service (don’t worry, we won’t, because we want your money)”.
Hypothetically, they could turn your information over to the digital rights company, who could then file charges against you, but there is established judicial precedent in the US that showing activity came from a specific IP address isn’t enough to convict an individual of a crime without more evidence. It could have been anyone in the household, or someone who hacked into the network and used it for piracy.
If we want to get even more hypothetical, they could try to convince a judge to issue a search warrant, seize your device, and look for evidence there, which could be used to convict you. But that is an insane amount of effort to go after one of the hundreds of thousands of people who downloaded ~~an episode of Game of Thrones~~ an Ubuntu ISO.
They do pull out all of those stops going after the original uploader, though. But if that’s you, you’re using way more than a VPN.
Okay, I see this so much, but I have been ~~pirating virtually all of my media~~ downloading Linux ISOs (in the US) without a VPN for… 20 years?
I’ve gotten about a dozen letters from my ISP and I just chuck ’em in a bin.
Could you possibly give me an elevator pitch on what debrid is and why someone would want to use it?
Could you expand on this a little bit for me? I’m interested, never used gentoo; how did it ‘end up’?
Only thing keeping Windows on my disk is Fusion 360, so annoying to have to deal with booting into Windows just to use a single piece of software.