Slide with text: “Rust teams at Google are as productive as ones using Go, and more than twice as productive as teams using C++.”
In small print it says the data is collected over 2022 and 2023.
I was a lot more productive in C++ 15 years ago when the current project was 100% greenfield. Now that the code is 15 years old I'm much less productive, because over the years we have discovered mistakes we made. I suspect I'm still more productive than the average C++ programmer, because 15 years ago modern C++ practices were already known (C++11 was still a couple of years away, though), so we didn't make a lot of the mess that people hate on C++ for.
Which is to say, I want to know how productive those programmers will be in 15 years, when the shine of Rust has worn off and they're looking at years of what seemed like a good design at the time but that current requirements just don't fit.
I suspect a large part of that will depend on how well Rust keeps the feature creep in check. C++ was a bit of a language design magpie. Pretty much every language design idea anyone ever had got pulled into the language, and it turned into a real mess. Many of those features are incompatible with each other as well, so once you use one feature you're no longer able to use one of the competing ones, which has led to partial fragmentation of the ecosystem (interestingly enough, D, which set out to be a "better" C++, ran into a similar but far worse situation). Many of those features have also been found to be problematic in various ways and have fallen out of favor recently, and so are viewed as warts on the language, or failed experiments.
Rust is still young, so there aren't very many competing features, and none that I'm aware of that are considered things to avoid. If it can manage to keep its feature set under control by actively deprecating and removing features that turn out to be problematic, and by being more judicious than C++ was in pulling in new ones, it should be able to avoid the same fate. Time will tell, I suppose.
@orclev @bluGill “D was in a far worse situation than C++”?
I am in awe; and I guess that’s why I’ve never heard of D.
Early in the development of D they had two competing standard libraries that each provided nearly identical functionality but were incompatible with each other. Neither one was obviously the correct choice, and so their library ecosystem split in two, with some projects choosing to use one, while others picked the other one. Of course once a library decided to use one standard they were then locked into it and could only use the other libraries that had made the same choice.
I believe they eventually came to a solution where they merged the two libraries into a new one and deprecated the old ones, but for a while there it was an absolute mess in their ecosystem.
Can confirm, I was super excited about D about 10-15 years ago when all of that had recently been resolved. It’s a really cool language, but it didn’t really get much traction and Rust solves a lot of the problems I have with it, so I use that now.
That said, here are some features I really miss from D:
But at the end of the day, Rust provides more guarantees, enough features, and a fantastic ecosystem. If both had the same ecosystem today, though, I would give D serious consideration.
There are of course macros, but they're kind of a pain to use. Zig's `comptime` functions are really nice and a similar concept. Rust does have `const fn`, but of course those come with limits on them.
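For example, here's a small sketch of what `const fn` buys you and where the limits show up (the function and constants are made up for illustration):

```rust
// A `const fn` can be evaluated at compile time (e.g. to compute an
// array length), but only a restricted subset of Rust is allowed in
// const contexts on stable: no heap allocation, limited trait usage, etc.
const fn table_len(bits: u32) -> usize {
    1usize << bits
}

// Evaluated at compile time, because array lengths must be constants.
static TABLE: [u8; table_len(4)] = [0; table_len(4)];

fn main() {
    println!("{}", TABLE.len());  // 16
    println!("{}", table_len(8)); // also callable at runtime: 256
}
```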
You kind of get that with Rust for free. You get implicit GC for anything stack allocated, and technically heap allocated values are deterministically freed, which you can work out by tracking their ownership. As soon as the owning scope exits it will be freed. If you want more explicit control you can always invoke `std::mem::drop` to force it to be freed immediately, but generally you don't gain much by doing so.
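A minimal sketch of that behavior, with a made-up type just for illustration:

```rust
struct Connection(&'static str);

// `Drop` is Rust's destructor hook: it runs deterministically when the
// owning value goes out of scope (or when you drop it explicitly).
impl Drop for Connection {
    fn drop(&mut self) {
        println!("closing {}", self.0);
    }
}

fn main() {
    let a = Connection("a");
    {
        let _b = Connection("b");
    } // `_b` is freed right here, when its owning scope ends
    std::mem::drop(a); // force `a` to be freed immediately
    println!("a is already closed");
}
```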
Some really great work is being done on that pretty much all the time, but… yeah, I can't reasonably argue that the Rust compiler is fast. Taking full advantage of incremental compilation helps a lot, but if you're doing a clean build, better grab a coffee.
What would be nice is if cargo explored a similar solution to what Arch Linux used, where there’s a repository of pre-compiled libraries for various platforms and configurations that can be used to speed up build times. That of course does come with a whole heap of problems though, probably the biggest of which is that it’s a HUGE security nightmare. Of lesser concern is the fact that they could not realistically do so for every possible combination of features or platforms, so it would likely only apply to crates built with the default features for a small subset of the most popular platforms. I’m also not sure what the tree shaking would end up looking like in a situation like that.
Yup, and Rust’s macros are pretty cool, but in D you can just do:
static if (condition) { ... }
There’s a whole compile-time reflection library as well, so you can take a class and make a super-optimized serialization/deserialization library if you want. It’s super cool, and I built a compile-time JSON library just because I could…
Yup, Rust is awesome.
But in D you can do explicit scope guards:
`scope(exit)` - basically Go's `defer()`
`scope(success)` - only runs when no exceptions are thrown
`scope(failure)` - only runs when there's an exception

I didn't use them much, but they are really cool: you can write explicit cleanup as you go through the logic flow, but defer it until it's needed.
It’s a neat alternative to RAII, which D also supports.
I still need to try out Cranelift, which was posted here recently. Cranelift release mode could mostly solve this for me.
That said, I haven’t touched D in years since moving to Rust, so I obviously find more value in it. But I do miss some of the candy.
Hmm… that is interesting. `scope(exit)` is basically just an inline `std::ops::Drop` trait. I actually think it's a bad thing that you can mix that randomly into your code as you go instead of collecting all of the cleanup actions into a single function. Reasoning about what happens when something gets dropped seems much more straightforward in the Rust case. For instance, it wasn't immediately clear that those statements get evaluated in reverse order from how they're encountered, which is something I assumed but had to check the documentation to verify.
`scope(success)` and `scope(failure)` are far more interesting, as I'm not aware of a direct equivalent in Rust. There's the nightly-only `std::ops::Try` feature that's somewhat close, but not exactly the same. Once again though, I'm not convinced that letting you sprinkle these statements throughout the code is actually a good idea.

Ultimately, while it is interesting, I'm actually happy Rust doesn't have that feature. It seems like somewhat of a nightmare to debug and something ripe to end up as a footgun.
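For reference, the kind of thing that gets you `scope(exit)`-style behavior in Rust is a small guard type whose `Drop` impl runs the cleanup; the `Guard` type below is made up for the example (crates like `scopeguard` offer a polished version):

```rust
// An ad-hoc scope guard: the closure runs when the guard is dropped,
// i.e. when the enclosing scope exits.
struct Guard<F: FnMut()>(F);

impl<F: FnMut()> Drop for Guard<F> {
    fn drop(&mut self) {
        (self.0)();
    }
}

fn main() {
    let _a = Guard(|| println!("cleanup A"));
    let _b = Guard(|| println!("cleanup B"));
    println!("doing work");
    // Locals are dropped in reverse declaration order, so this prints:
    // doing work, cleanup B, cleanup A.
}
```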
It's a stack, just like Go's `defer()`.

Probably because Rust doesn't have exceptions, and I'm pretty sure there are no guarantees with `panic!()`.

Same, but that's because Rust's semantics are different. It's nice to have the option if RAII isn't what you want for some reason (it usually is), but I absolutely won't champion it since it just adds bloat to the language for something that can be solved another way.
I’m still a big fan of D for personal projects, but I fear the widespread adoption ship has sailed at this point, and we won’t see the language grow anymore. It’s truly a beautiful, well-rounded language.
Also just recently a rather prominent contributor forked the entire compiler/language so we’re seeing more fragmentation :/
This happened to Scala with cats vs. zio. I'm sad it wasn't more successful; it's a really, really good language.
Rust had the same issue with tokio vs. async-std. I don't think this was ever resolved explicitly; async-std just silently died over time.
Hmm, sort of, although that situation is a little different and nowhere near as bad. Rust's type system and feature flags mean that most libraries actually supported both tokio and async-std; you just needed to compile them with the appropriate feature flag. Even more worked with both libraries out of the box, because they only needed the minimal functionality that `Future` provided. The only reason it was even an issue is that `Future` didn't provide a few mechanisms that might be necessary depending on what you're doing, e.g. there's no mechanism to fork/join in `Future`; that has to be provided by the implementation.
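For illustration, the pattern looks roughly like this inside a library crate; the `rt-tokio` / `rt-async-std` feature names and the functions are made up, not any particular crate's real API:

```rust
use std::future::Future;

// Runtime-agnostic code only needs `std::future::Future` from std,
// so it works under either executor.
pub async fn call_twice<F, Fut>(mut f: F) -> (Fut::Output, Fut::Output)
where
    F: FnMut() -> Fut,
    Fut: Future,
{
    (f().await, f().await)
}

// Anything that actually needs an executor (spawning, timers, blocking)
// gets gated behind mutually exclusive feature flags, one per runtime.
#[cfg(feature = "rt-tokio")]
pub fn block_on<Fut: Future>(fut: Fut) -> Fut::Output {
    tokio::runtime::Runtime::new().unwrap().block_on(fut)
}

#[cfg(feature = "rt-async-std")]
pub fn block_on<Fut: Future>(fut: Fut) -> Fut::Output {
    async_std::task::block_on(fut)
}
```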
async-std still technically exists; it's just that most of the most popular libraries and frameworks happened to pick tokio as their default (or only) async implementation, so if you're just going by the most downloaded async libraries, tokio ends up overrepresented there. Longer term I expect that chunks of tokio will get pulled in and made part of the std library, like `Future` is, to the point where you'll be able to swap tokio for async-std without needing a feature flag, but that's likely going to need some more design work to do cleanly.

In the case of D, it was literally the case that if you used one of the standard libraries, you couldn't import the other one or your build would fail, and D didn't have feature-flag capabilities like Rust's to let authors paper over the difference. It really did cause a hard split in D's library ecosystem, and the only fix was getting the two teams responsible for the standard libraries to sit down and agree to merge their libraries.
I feel like I work well even without the ~~new C++ features~~ smart pointer stuff, simply because: