

I assume he meant Linux on x86 laptops, where I can confirm battery life is atrocious and support for random display things is also pretty bad. My laptop still won’t do more than 30 Hz over HDMI (works fine with DisplayPort though).
The classic “I had it hard so you should too” mentality. I think it fundamentally comes from an animal desire for fairness - it isn’t fair that these old geezers had to put up with C shooting them in the feet at the drop of a bracket, while the new kids on the block get friendly (usually) compiler errors instead.
But I think if you don’t recognise that instinct in yourself and overcome it then you have failed as a human.
I’ll always remember the pushback to making XFree86 easier to configure (yes, I’m old). Back in the day you had to edit a stupid text file to tell X that your screen could display 1024x768 and your mouse had three buttons. Then some upstarts came along and made it automatically detect all that. The absolute cheek! Our ancestors have been practicing xfree86config since before you were a wee bebe! Etc. etc.
It’s a human condition.
I do also agree with Linus that immediately running to social media to drum up drama is not the correct solution to getting it fixed.
So what is the correct solution?
Yeah the main reason is performance. In some languages if you use a value “linearly” (i.e. there’s only ever one copy) then functional style updates can get transformed to mutable in-place updates under the hood, but usually it’s seen as a performance optimisation, whereas you often want a performance guarantee.
Koka is kind of an exception, but even there they say:
Note. FBIP is still active research. In particular we’d like to add ways to add annotations to ensure reuse is taking place.
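To make the distinction concrete, here’s a minimal Python sketch (Python won’t do this rewrite itself, and Point/nudge are invented names just for illustration):

from dataclasses import dataclass, replace

@dataclass(frozen=True)
class Point:
    x: int
    y: int

def nudge(p: Point) -> Point:
    # Functional style: the old Point is untouched, we return a fresh value.
    return replace(p, x=p.x + 1)

@dataclass
class MutablePoint:
    x: int
    y: int

def nudge_in_place(p: MutablePoint) -> None:
    # In-place style: if the caller held the only reference to p (used it "linearly"),
    # a compiler could rewrite the functional version into this without anyone noticing.
    p.x += 1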
From that point of view it’s quite similar to tail recursion. It’s often viewed as an optional optimisation but often you want it to be guaranteed, so some languages have a keyword like become to do that.
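A hand-made illustration in Python (which never eliminates tail calls, so the rewrite really is on you):

def total(xs: list[int], acc: int = 0) -> int:
    # Tail-recursive style: the recursive call is the very last thing the function does.
    if not xs:
        return acc
    return total(xs[1:], acc + xs[0])

def total_loop(xs: list[int]) -> int:
    # What a guaranteed tail call (or a `become`-style keyword) effectively gives you:
    # the same logic rewritten as a loop, with constant stack usage.
    acc = 0
    for x in xs:
        acc += x
    return acc

print(total_loop(list(range(10_000))))  # fine
# print(total(list(range(10_000))))     # RecursionError - Python makes no guarantee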
Also it’s sometimes easier to write code that uses mutation. It doesn’t always make code icky and hard to debug. I’d say it’s more of a very mild code smell. A code musk, if you like.
Well… the else condition (bar) needs to be covered. I haven’t used branch coverage tools in Python but in any sane language you cover the actual semantics, not the lines. It shouldn’t make any difference if you write your code on one line, or with ternary expressions, or whatever.
If your branch coverage tool can’t handle branches on the same line I would suggest you use a different one! Does it handle if foo or bar?
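For what it’s worth, these are all the same two branches as far as the semantics go (toy functions, made up purely for illustration):

def icon_a(is_floating: bool) -> str:
    if is_floating:
        return "dock"
    return "float"

def icon_b(is_floating: bool) -> str:
    # Same two branches as icon_a, just on one line.
    return "dock" if is_floating else "float"

def is_enabled(foo: bool, bar: bool) -> bool:
    # Short-circuiting `or` is also two branches: bar is only evaluated when foo is falsy.
    return foo or bar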
Will not see the value that gets passed into self.float_button.setIcon
Uhm, yes you will? Just step into the function.
Easily above average code for Python. I’m going to pick on one method:
def _set_float_icon(self, is_floating: bool):
    """ set the float icon depending on the status of the parent dock widget """
    if is_floating:
        self.float_button.setIcon(self.icon_dock)
    else:
        self.float_button.setIcon(self.icon_float)
First, Python does have ternary expressions so you can
    self.float_button.setIcon(self.icon_dock if is_floating else self.icon_float)
Second, what does this code do?
    foo._set_float_icon(True)
Kind of surprising that it sets the icon to icon_dock, right? There are two easy fixes: make the parameter keyword-only (*, is_floating: bool) so you have to name it when you call the method, or rename the method to _update_float_icon() or something. Also use Black or Ruff to auto-format your code (it’s pretty well formatted already but those will still improve it for zero effort).
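A sketch of the method with both fixes and the ternary applied (the new name is just my suggestion from above):

def _update_float_icon(self, *, is_floating: bool) -> None:
    """Show the dock icon while floating, and the float icon while docked."""
    self.float_button.setIcon(self.icon_dock if is_floating else self.icon_float)

# Callers now have to spell out what the bool means:
# self._update_float_icon(is_floating=True)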
He definitely improved it… but there’s still plenty of material for r/linusrants.
He may not directly call people names anymore but he’s still extremely rude and unprofessional. He would have been fired long ago from any company I’ve worked for, and I live in the UK where it’s practically impossible to get fired.
Probably what all the horse people said when cars were invented.
pointing and saying “this is shit. Look at this shit”
Yeah you only get to do that if you’re Linus 😄
His point could be valid, if C was working fine and Rust didn’t fix it. But C isn’t working fine and Rust is the first actual solution we’ve ever had.
He’s just an old man saying we can’t have cars on the road because they’ll scare the horses.
Small nit:
CHERI is even weirder. CHERI pointers store 128-bit capabilities in addition to the 64-bit address we’re used to
The 128-bit capability (actually 129 since there’s a tag bit) includes the address. It’s 64-bit address + 64-bit metadata + 1-bit tag = 129-bit capability.
Before virtual memory was a thing, almost all memory was accessible.
Virtual memory has nothing to do with whether 0 is a valid address. You can have a CPU where it is valid, or one where it isn’t and you’ll get an access fault if you try to access it. You can also have virtual memory where page 0 is mappable, or not.
I think the author knew that, based on the later points so it’s probably just bad wording. Interesting point about wasm too!
But don’t you lose polymorphism?
No. You’ll have to be more specific about what kind of polymorphism you mean (it’s an overloaded term), but you can have type unions, like int | str.
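A tiny sketch of what that looks like (made-up function, checkable with Pyright or similar):

def describe(value: int | str) -> str:
    # The checker makes you handle both arms of the union before you use
    # anything int- or str-specific.
    if isinstance(value, int):
        return f"number: {value + 1}"
    return f"text: {value.upper()}"

print(describe(41), describe("hello"))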
Your points 1-3 are handled by running the code and reading the error messages, if any
Not unless you have ridiculously exhaustive tests, which you definitely don’t. And running tests is still slower than your editor telling you of your mistake immediately.
I probably didn’t explain 4-6 well enough if you haven’t actually ever used static types.
They make it easier to navigate because your IDE now understands your code and you can do things like “find all references” and “go to definition”. With static types you can e.g. ctrl-click on mystruct.myfield and it will go straight to the definition of myfield.
They make the code easier to understand because knowing the types of variables tells you a lot of information about what they are and how to use them. You’ll often see in untyped code people add comments saying what type things are anyway.
Refactoring is easier because your IDE understands your code, so you can do things like renaming variables and moving code and it will update everything it needs to correctly. Refactoring is also one of those areas where it tends to catch a lot of mistakes. E.g. if you change the type of something or the parameters of a function, it’s very easy to miss one place where it was used.
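A toy example of that last point (the function and the change are made up):

def scale(value: float, percent: int) -> float:
    # Suppose this used to take a 0.0-1.0 factor and was changed to take a percentage.
    return value * percent / 100

scale(10.0, 50)    # fine
scale(10.0, 0.5)   # stale call site, flagged immediately by the type checker:
                   # argument of type float is not assignable to parameter of type int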
I don’t think “you need to learn it” really counts as slowing down development. It’s not that hard anyway.
I can understand the appeal for enterprise code but that kind of project seems doomed to go against the Zen of Python anyways, so it’s probably not the best language for that.
It’s probably best not to use Python for anything, but here we are.
I will grant that data science is probably one of the very few areas where you may not want to bother, since I would imagine most of your code is run exactly once. So that might explain why you don’t see it as worthwhile. For code that is long-lived it is very very obviously worth it.
Just in case that’s a genuine question, the reasons people like static types are:
Often people say it slows development down but it’s actually the opposite. Especially for large projects or ones involving multiple people.
The only downside really is that sometimes the types can get more complicated than they’re worth, but in that case you have an escape hatch via the Any type.
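For example (a sketch; json.load is typed as returning Any in the standard stubs, which is exactly this escape hatch in action):

import json
from typing import Any

def load_config(path: str) -> dict[str, Any]:
    # Any is the escape hatch: the checker stops tracking these values,
    # so you trade safety for convenience just for this bit.
    with open(path) as f:
        return json.load(f)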
Pyright is very good. The rest are worthless though.
How powerful do you want it? Python’s type system is actually pretty good already and relatively sound when checked with Pyright (not Mypy though).
It’s not Typescript-level, but it’s better than e.g. Java or C++.
The main problem is Python developers writing code that can’t be statically type checked. E.g. using magically generated method names via __dict__ or whatever (I think lxml does that).
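A made-up example of the kind of thing that defeats the checker:

class Node:
    def __init__(self, **attrs: str) -> None:
        # Attributes invented at runtime: perfectly legal Python,
        # but invisible to a static type checker.
        self.__dict__.update(attrs)

n = Node(tag="div")
print(n.tag)   # works at runtime; Pyright reports "tag" as an unknown attribute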
Pff, it’s not like Linux has perfect WiFi either. I set my WiFi to auto-connect to a VPN, and then deleted the VPN later. That caused WiFi to always fail with no error messages except some incomprehensible deauth message in dmesg! Good luck figuring that out.
This totally might be true, but the fact that he got as far as measuring the same latency on X and Wayland… and then just gave up and is like “well never mind what the measurements say, it’s definitely Wayland”… Hmm.
You gotta do the measurements. It’s probably not even that hard, all you need is a USB mouse emulator (any microcontroller with USB peripheral support can do this and there are tons of examples) and a photodiode.
You don’t even need to worry about display latency if you are just comparing X with Wayland.
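The host side could be as simple as something like this, assuming hypothetical firmware that wiggles the mouse, times the photodiode response on-device, and replies with the elapsed microseconds over serial (the "T" trigger protocol here is invented):

import serial  # pyserial

def measure(port: str = "/dev/ttyACM0", samples: int = 100) -> list[int]:
    """Collect end-to-end move-to-photon latency samples, in microseconds."""
    results: list[int] = []
    with serial.Serial(port, 115200, timeout=2) as dev:
        for _ in range(samples):
            dev.write(b"T\n")               # hypothetical: ask the firmware to move the mouse once
            reply = dev.readline().strip()  # hypothetical: firmware replies "<microseconds>\n"
            if reply:
                results.append(int(reply))
    return results

latencies = sorted(measure())
print(f"median: {latencies[len(latencies) // 2]} µs over {len(latencies)} samples")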
Yeah they probably should have tried that. On the other hand this isn’t the first time, and Linus didn’t do anything then either.