What you say describes my experience 10 to 15 years ago, not my experience today. Compare the settings dialog in KDE Plasma to the Windows settings dialog, for instance. Or should I say the myriad of Windows settings dialogs.
What was difficult in your experience?
Huh, odd. I guess it depends quite heavily on the system? Just to check, I cleaned my build folder and am building now: ~700 files that take around 5 minutes to compile. I don’t notice a thing; the CPU (Ryzen 7 7700X) is fully maxed out. I know that I do notice it on my laptop, but there reducing from 16 to 12 or even 14 threads is enough. Having to reduce to 4 is very different from what I experience. Currently on Manjaro; the laptop has Ubuntu.
If you don’t want compilation to take all cores, use one or two fewer cores for the compile. I frequently compile C++ code, and almost always I just let it max out at 100%; I haven’t really been bothered by the lag. When I’m in a Teams meeting, for instance, it can cause noticeable lag, so then I do ninja -j 8
or ninja -j 12
and problem solved.
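For reference, a couple of ways to cap the parallelism, assuming a Ninja/CMake setup (the job counts are just examples for a 16-thread machine):

```
# leave a few threads free for other apps
ninja -j 12

# or via CMake's build driver
cmake --build . --parallel 12
```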
Cross-platform and performant: are there options besides C++ and Rust?
I was very surprised yesterday to find out that Unreal Engine now offers native Linux builds as well as Linux targets. It works flawlessly too. So for all the hate Linux seems to get from them, judging by the occasional blog post, they must have devs working solely on this support.
Turns out you were the hacker all along
Which problems did you experience?
The ccache folder size started becoming huge. And it just didn’t speed up the project builds; I don’t remember the details of why.
This might be the reason ccache only went so far in your projects. Precompiled headers either prevent ccache from working or require additional tweaks to get around them.
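If it helps: a sketch of the tweaks ccache’s documentation suggests for precompiled headers (the max_size value is just an example):

```
# ~/.config/ccache/ccache.conf (or ~/.ccache/ccache.conf on older setups)
# relax hashing so PCHs don't cause permanent cache misses
sloppiness = pch_defines,time_macros
# keep the cache folder from growing unbounded
max_size = 20G
```

With GCC you also need to compile with -fpch-preprocess for ccache to handle the PCH at all.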
Right, that might have been the reason.
To each their own, but with C++ projects the only way to not stumble upon lengthy build times is to work only on trivial projects. Incremental builds help blunt the pain, but that only goes so far.
When I tried it I was working on a 100+ dev C++ project, 3/4M LOC, about as big as they come. Compiling everything from scratch took an hour towards the end. Switching to lld was a huge win, as was going from 12 to 24 compilation threads. The code-base was structured in a way that you didn’t need to build everything to work on a specific part, using dynamically loaded libraries to inject functionality into the main app.
I was a Linux dev there; the PCHs worked, though not as well as on MSVC, where they made a HUGE difference. OTOH lld blows the Microsoft linker out of the water: clean builds were faster on MSVC, incremental ones faster on Linux.
I’ve had mixed results with ccache myself and ended up not using it. Compilation times are much less of a problem for me than they used to be, thanks to the increases in processor power and thread counts, together with PCHs, judiciously forward declaring, and including only what you use.
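A minimal sketch of what I mean by forward declaring (the class names are made up):

```cpp
// widget.h -- forward-declare instead of pulling in the heavy header
#include <memory>

class Renderer;  // forward declaration: no #include "renderer.h" here

class Widget {
public:
    Widget();
    ~Widget();  // defined in widget.cpp, where Renderer is a complete type
    void draw();

private:
    std::unique_ptr<Renderer> renderer_;  // a pointer only needs the declaration
};
```

Every consumer of widget.h now skips parsing renderer.h; only widget.cpp includes it.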
From the times Circle has surfaced in discussions, I seem to remember reading that its not being open source is what’s holding back adoption? Not sure. Anyway, as a C++ dev I’d love to see one of the various approaches to fundamentally improving C++ take widespread hold.
I guess this is Go, and I don’t know what its scoping rules are. In C++ I also suggest putting as much in the if as possible, because it limits the scope of the variables.
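What I mean in C++, using C++17’s if-with-initializer (toy example):

```cpp
#include <iostream>
#include <map>
#include <string>

int main() {
    std::map<std::string, int> counts{{"apples", 3}};

    // `it` only exists inside the if/else, so it can't leak into later code
    if (auto it = counts.find("apples"); it != counts.end()) {
        std::cout << "found: " << it->second << '\n';
    } else {
        std::cout << "not found\n";
    }
    // `it` is out of scope here
}
```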
Such gains from limiting included headers are surprising to me, as it’s the first thing anyone would suggest doing. Clang-tidy hints in Qt Creator show warnings for includes that are not used; for me this works pretty well to keep header-induced build times under control. I wonder: if reducing the number of included headers already yields such significant gains, what other gains can be had, and what LOC count are we talking about? I’ve seen dramatic improvements from using a PCH, for instance, or from isolating boost usage.
I found basic functioning of worktrees to fail with submodules. The worktree doesn’t know about submodules and messes up the links to them again and again. Basic pulling, switching branches, …: all of this frequently fails because the link to the submodule is broken. I ended up creating the submodules as worktrees of a separate checkout of the submodule repo, and recreating these submodule worktrees over and over. I pretty much stopped using worktrees at that point.
Have you tried the global git config that enables recursing over submodules by default?
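For reference, I believe this is the setting meant:

```
# make git commands (checkout, pull, ...) recurse into submodules by default
git config --global submodule.recurse true
```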
Nope, fingers crossed it helps for you ;) Unrelated to worktrees, but: in the end I like submodules in theory, yet found them to be absolutely terrible in practice, and that’s without even factoring in the worktrees. So we went back to a monorepo.
I’m a C++ dev; I have one checkout of the main repo and 3 worktrees. Switching branches can be expensive because of recompiles, so for e.g. quick fixes I’ll use worktree 1, where I typically don’t even compile the code: I just make the fix and push it to the CI system. Worktrees 2 and 3 I keep at older releases, so I can immediately fire up development and one of those releases side by side and compare the results as well as the code.
The cool thing about worktrees instead of multiple checkouts is that you only have one .git folder, so less disk space. But more importantly, local branches (well, everything actually) are shared, so you can create a local branch in the main checkout and later come back to it in a worktree. You also don’t need fetching/… in the worktrees, as they share the same .git folder.
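Roughly this setup (the paths and release branch names are made up):

```
# main checkout lives in ~/src/app; the worktrees share its .git folder
git worktree add -b quickfix ../app-quickfix main   # worktree 1: quick fixes, rarely compiled
git worktree add ../app-rel-1 release-1             # worktree 2: older release
git worktree add ../app-rel-2 release-2             # worktree 3: even older release
git worktree list                                   # overview of all of them
```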
Only thing that I found virtually impossible to work with is worktree + submodules.
To me that sounds like “that machine prototype is inefficient - just skip the prototype next time and build the real thing right away.”
I don’t think you understand my point, which is that developing the prototype takes e.g. 50% more time than it should because of a complete lack of understanding of software development.
Mostly ML or data-processing libraries, I would assume. I’ve read tons of REST server and ORM Python code, for instance; none of that is written in C.
Wrt Rust: no experience with that. I do do a lot of C++; there you quickly reach the end, as typically you’re consuming quite a few libraries, but the complete sources of those aren’t part of what the IDE parses, since keeping all of that in memory would be unworkable.
My point about the jumping in was that you can immediately start reading the sources. Most alternative languages are compiled in some form or other, so all you’ll see is an API, not the implementation.
As a researcher: all the professional software engineers here have no idea about the requirements for code in a research setting.
As someone with extensive experience in both: my first requirement would be readability. A single Python file? Fine with that. A 1k+ line single Python file without functions or any other means of structuring the code: please no.
The nice thing about Python is that your IDE lets you jump into the code of the libraries you’re using; I find that a good way to look at how experienced Python devs write code.
Odd take imo. OP is a programmer, albeit perhaps not a very good one. I did a PhD (computational astrophysics) and have been working as a professional dev for the 10 years since. Imo a good programmer writes code that solves the problem at hand; I don’t see that much of a difference between the problem being scientific or a backend service. It doesn’t mean “write lots of boilerplate-y factories, interfaces and other layers” to me, neither in research nor outside of it.
That being said, there is so much time lost at research institutes because of shoddy programming by researchers, or simply ignorance: not knowing a debugger exists, for instance. OP levelling up their game would almost certainly result in getting to research results faster, plus they may be able to help their peers become better as well.
Wonder how much of this relates to SUSE? How “normie-tolerant” is that? I’ve been printing for years without any issues, for instance, and I have an HP printer that used to hate my Linux OS with a passion.