24.04 won’t have Plasma 6, but 24.10 will. In other words, fall 2024.
Or you can use KDE Neon, which is basically Ubuntu LTS, but with the newest Plasma.
In Polish we have ź and ż. For ż we use AltGr + z, and for ź we use AltGr + x. Same for the other non-standard letters. The rest of the keyboard is a regular US layout.
So in Swedish you could use AltGr + a and AltGr + s for different variants of a.
You mentioned you changed firewall rules for that device. Any chance you set an outbound rule instead of an inbound one?
Anyway, what’s the output of ip route?
Ah yes, perfect data format, where markup takes more space than the actual data.
If you know how to use git, you will know how to use docker (provided you know what you want to do). They are completely different programs, yet if you know one, you can grasp the other almost instinctively.
Now, Photoshop and Blender - they are also different programs, but if you know Photoshop, you still need to relearn Blender’s interface completely.
This is why I prefer terminal programs in general. Unless it’s more convenient to use a GUI, e.g. a drag-and-drop file manager, some git tools, etc.
Learn it first.
I almost exclusively use it with my own Dockerfiles, which gives me the same flexibility I would have by just using a VM, with all the benefits of being containerized and reproducible. The exceptions are images of utility stuff, like databases, a reverse proxy (I use Caddy btw), etc.
Without docker, hosting everything was a mess. After a month I would forget the important things I had done, and if I had to do it again, I would basically have to relearn everything I had figured out back then.
If you write a Dockerfile, every configuration step you make is reflected either in a shell command or in files added from the project directory to the image. You can just look at the Dockerfile and see all the changes made to the base Debian image.
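Rough sketch of what I mean (the base image, package names, and paths here are placeholders, not my actual setup):

```dockerfile
# Hypothetical example: a small PHP app image built on plain Debian.
FROM debian:bookworm-slim

# Every system-level change is an explicit shell command...
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-cli \
    && rm -rf /var/lib/apt/lists/*

# ...and every file/config change is an explicit COPY from the project directory.
COPY config/php.ini /etc/php/8.2/cli/php.ini
COPY src/ /var/www/app/

WORKDIR /var/www/app
# PHP's built-in server, fine for a sketch; you'd put a real server in front in production.
CMD ["php", "-S", "0.0.0.0:8080"]
```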
Additionally, with docker-compose you can run multiple containers per project, with proper networking and DNS resolution between containers by their service names. Quite useful if your project sets up a few different services that communicate with each other.
Thanks to that, it’s trivial to host multiple projects that each use, for example, a different PHP version.
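Something like this (service names, images, and versions are made up for illustration) - the app container reaches the database simply as “db” thanks to compose’s service-name DNS, and another project’s compose file could pin completely different versions without conflicts:

```yaml
# docker-compose.yml - illustrative only
services:
  app:
    build: .            # built from the project's own Dockerfile
    ports:
      - "8080:8080"
    environment:
      DB_HOST: db       # resolved by service name via compose's internal DNS
    depends_on:
      - db

  db:
    image: mariadb:10.11
    environment:
      MARIADB_ROOT_PASSWORD: example
    volumes:
      - db-data:/var/lib/mysql

volumes:
  db-data:
```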
And I haven’t even mentioned the best thing about docker yet - if you’re a developer, you can be sure that the app will run exactly the same on your machine and on the server. You can have development versions of images that extend the production image using Dockerfile stages: a dev image with full debug/tooling support for development, and a clean prod image on the server.
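A sketch of that pattern (placeholder packages again - the Xdebug bit is just one example of “debug/tooling support”):

```dockerfile
# Production stage: lean, no debug tooling.
FROM debian:bookworm-slim AS prod
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-cli \
    && rm -rf /var/lib/apt/lists/*
COPY src/ /var/www/app/
WORKDIR /var/www/app
CMD ["php", "-S", "0.0.0.0:8080"]

# Development stage: everything from prod, plus debug tooling.
FROM prod AS dev
RUN apt-get update \
    && apt-get install -y --no-install-recommends php-xdebug \
    && rm -rf /var/lib/apt/lists/*
```

Then you build with `docker build --target dev .` locally and `docker build --target prod .` for the server.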
Then again, cookie auth is vulnerable to CSRF. Pick your poison.
Although CSRF protection is just a minor inconvenience to add, while there is never a guarantee your code is free of XSS vulnerabilities.
Framework has multiple config files, allowing you to customize almost every aspect of it.
Nooo, that’s too many config files, they take up too much space in my project tree.
Framework is a monolith with a single file to configure it.
Nooo, the file is unreadable and developing extensions for it is annoying.
Framework is minimal.
Nooo, it doesn’t have any useful built-in features.
Framework is a complete solution without too many things to configure.
Nooo, it doesn’t allow me to do what I want.
Reminds me of that one episode on House M.D. where he performed an operation on himself in the bathroom.
The fact is there is no evidence for the existence of ~~God~~ the Flying Spaghetti Monster. But also there is no evidence that disproves the existence of ~~God~~ the Flying Spaghetti Monster.
See how that doesn’t make sense?
General rule of thumb: comments say why the code is there, not what it does. The code itself should describe what it does.
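A made-up illustration (the retry counter and the reason are invented for the example):

```typescript
let retries = 0;

// Bad: restates what the code already says.
// Increase retries by one.
retries += 1;

// Good: says why the code is there.
// The upstream API tends to drop the very first request after a cold start,
// so we allow one extra retry before treating it as a real failure.
retries += 1;
```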
Yeah, I don’t get why it spits out whole types instead of only the differences between them. Like “function expects non-null ‘some.param.in.object’ of type ‘string’ in argument ‘someArgument’, which is missing in the passed argument”.
If you’re a beginner:
I almost gave up programming once - I thought I was too stupid.
Then I learned Linux and figured out that starting out in IDEs as a beginner is the worst thing you can do. It doesn’t teach you anything, it just lets you get the job done - the thing you should avoid while learning.
If you can’t build your software with only the CLI, you probably have no idea how the technology you’re programming in works.
If you are intermediate:
Reinventing the wheel is a great way to learn how libraries you’re using actually work.
The language itself is not that bad. The newest releases especially bring really great, well-thought-out DX improvements. What stinks are its legacy parts and how it needs to be run.
My biggest pain is that, for it to actually behave like it should, it requires some sort of actual web server like Apache or nginx.
Also, servers written in PHP are really request handlers - every time a request comes in, the whole app is reinitialized, because it just can’t hold its state in memory. In many apps every request means reinitializing the connection to the database. If you want to keep some state, you have to use a caching mechanism like Redis or memcached.
Also had one time when a Symfony app was crashing because someone forgot to close a class brace, and everything was “working” until some part of the code didn’t like it and just died without any error.
And another time someone put two newlines after the PHP closing tag at the end of a file, confusing the entire interpreter into skipping some lines of code - also without warning, and only in a specific PHP version.
Why do you need a Windows VM for developing GUI apps? The last time I used Visual Studio to make a GUI app I almost gave up programming because of how dependent it was on code generation.
For C# you have AvaloniaUI. For C++ you have countless multi-platform GUI toolkits, same for Rust; Java has its own (multi-platform) toolkits; and finally you can make an Electron/Tauri app.
The way for your desktop to communicate with the hardware.
It used to be X11 - a server-client architecture, which meant your desktop was effectively just a client that told the server what to do. The server was the one doing the drawing.
Wayland is just a protocol defining how programs and the desktop should communicate with each other - without the middleman that was the X11 server. The desktop does the actual drawing here.
Honestly, if you work in a shell a lot, learning vim is a great investment. You’re gonna fly through files editing them faster than with any IDE.
The lockfile contains the exact state of the npm-managed code, making it reproducible exactly the same every time.
For example, without a lockfile: in your package.json you can have version 5.2.x. In your working directory you use 5.2.1, but in the registry 5.2.2 has appeared, matching your criteria. Now let’s say a new bug was introduced in 5.2.2.
Now you have mismatched vendor code that can make your code behave differently on your machine and your coworker’s machine, leaving you hunting for a bug that wasn’t even on your side.
The lockfile prevents that by saving the actual state of the vendor code.
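Roughly like this (“some-lib” and the versions are made up) - package.json declares a range, the lockfile pins what actually got installed:

```jsonc
// package.json (excerpt): any 5.2.x satisfies this range
"dependencies": {
  "some-lib": "~5.2.0"
}

// package-lock.json (excerpt): the exact version that was resolved
"packages": {
  "node_modules/some-lib": {
    "version": "5.2.1"
  }
}
```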
The way I use them: ‘undefined’ is literally undefined (not set), while ‘null’ means no value - explicitly.
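For example, with a hypothetical settings object:

```typescript
// Hypothetical settings object illustrating the convention.
const settings: { theme?: string; avatarUrl: string | null } = {
  avatarUrl: null, // explicitly "no value": the user deliberately has no avatar
};

console.log(settings.theme);     // undefined - nobody ever set this at all
console.log(settings.avatarUrl); // null - it was explicitly set to "no value"
```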
What?
It’s simple and readable. You can literally take somebody who has never coded in their life, show them the YAML file, and they will probably get it. Worked with both my boss and my girlfriend.
In TOML there are too many ways to do the same thing, which I don’t like. Also, unless you know it deeply, you have no idea what the underlying data structure is going to look like.
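For example (a made-up config), these are three ways to spell the same shape of data, and you have to know TOML’s rules to see it:

```toml
# 1. A dotted key
cache.host = "localhost"

# 2. An inline table
queue = { host = "localhost" }

# 3. A table header (which has to come after the bare keys above,
#    or they would silently land inside it - another non-obvious rule)
[database]
host = "localhost"

# All three values end up as a table containing { host = "localhost" }.
```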