I think the main barriers are context length (useful context, that is: GPT-4o advertises a "128k context" window, but it's mostly sensitive to the beginning and end of the context and blurry in the middle, which is consistent with other LLMs) and the training data simply not existing. How many large-scale, well-written, well-maintained projects are really out there? Orders of magnitude fewer than there are examples of "how to split a string in bash" or "how to set up validation in Spring Boot". We might "get there", but it'll take a whole lot of well-written projects first, written by real humans, maybe with the help of AI here and there. Unless, that is, we build it with the ability to somehow learn and understand faster than humans do.
People seem to disagree, but I like this. This is AI code used responsibly: you're using it to do more without outsourcing all your work to it, and you're still actively trying to learn as you go. You may not be "good at coding" right now, but with that mindset you'll progress fast.
jcg@halubilo.social to Programmer Humor@lemmy.ml • They're trying to normalize calling vibe coding a "programming paradigm," don't let them.
3 · 7 months ago
Not what I'd have expected. In my company it's mostly higher-ups (suits) pushing the stuff and workers begrudgingly implementing it.
jcg@halubilo.social to Programmer Humor@lemmy.ml • They're trying to normalize calling vibe coding a "programming paradigm," don't let them.
3 · 7 months ago
How high up the corporate ladder are they?
jcg@halubilo.social to Programmer Humor@lemmy.ml • They're trying to normalize calling vibe coding a "programming paradigm," don't let them.
8 · 7 months ago
As a former script kiddie myself, I think it's not much different from how I used to blindly copy and paste code snippets from tutorials. Well, environmental impact aside. Those who have the drive and genuine interest will come to learn things properly. Those who don't should stay tf out of production code; it's already bad enough. Which is why we genuinely shouldn't let "vibe coding" be legitimized.
jcg@halubilo.social to DeGoogle Yourself@lemmy.ml • My journey towards more European, open source, privacy-oriented, and decentralized alternatives
4 · 7 months ago
It's a pretty big jump from ChatGPT/LeChat to hosting your own LLM locally if you want results anywhere close to the commercial offerings. Both even have a free tier that would probably still beat anything you could run locally without a significant hardware investment. Self-hosting is definitely not difficult these days, but getting comparable results is very expensive unless you've already got the hardware.
We declare children as dependents legally, don’t we?
jcg@halubilo.social to Programmer Humor@lemmy.ml • Mom can we have Scratch? We have scratch at home. Scratch at home:
4 · 7 months ago
Huh, I had never considered this solution to FizzBuzz, to be honest. I usually go for string concatenation or (i % 3 == 0 && i % 5 == 0), but yeah, i % 15 == 0 is certainly a clever simplification.
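The two checks mentioned above are equivalent, since a number divisible by both 3 and 5 is divisible by 15. A minimal sketch in Python (the thread doesn't name a language, so Python is an assumption here):

```python
def fizzbuzz(n):
    # i % 15 == 0 replaces the compound test (i % 3 == 0 and i % 5 == 0):
    # multiples of both 3 and 5 are exactly the multiples of 15.
    out = []
    for i in range(1, n + 1):
        if i % 15 == 0:
            out.append("FizzBuzz")
        elif i % 3 == 0:
            out.append("Fizz")
        elif i % 5 == 0:
            out.append("Buzz")
        else:
            out.append(str(i))
    return out
```

Note the order matters: the i % 15 branch has to come first, or multiples of 15 would be caught by the i % 3 branch.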
That’s hilarious, reminds me of this.
Nah I’m an innovator! I’ll just innovate a better chip that’ll never fail and software that has no bugs!
Proceeds to put Linux on a common SoC and load it with shoddy software from a low paid contractor.
At least the source wasn’t a Rick roll
jcg@halubilo.social to Programmer Humor@programming.dev • When 'Pass the Interview' = 'Cancel My Flight'
8 · 9 months ago
dd if=/dev/null of=/dev/eng0
Oops!
Convert the PWD value to use backslashes, too, for extra cursedness.
jcg@halubilo.social to Programmer Humor@lemmy.ml • What's stopping you from writing your Rust like this?
3 · 10 months ago
That was exactly what the .NET family of languages was back in the day. Still is, I guess? You could write in VB, C#, or F#, make use of the same standard library and general principles, and it would all get compiled to the same IL in the end.
Well, not exactly. For example, for a game I was working on I asked an LLM for a mathematical formula to align 3D normals. Then I couldn’t decipher what it wrote so I just asked it to write the code for me to do it. I can understand it in its code form, and it slid into my game’s code just fine.
Yeah, it wasn’t seamless, but that’s the frustrating hype part of LLMs. They very much won’t replace an actual programmer. But for me, working as the sole developer who actually knows how to code but doesn’t know how to do much of the math a game requires? It’s a godsend. And I guess somewhere deep in some forum somebody’s written this exact formula as a code snippet, but I think it actually just converted the formula into code and that’s something quite useful.
I mean, I don’t think you and I disagree on the limits of LLMs here. Obviously that formula it pulled out was something published before, and of course I had to direct it. But it’s these emergent solutions you can draw out of it where I find the most use. But of course, you need to actually know what you’re doing both on the code side and when it comes to “talking” to the LLM, which is why it’s nowhere near useful enough to empower users to code anything with some level of complexity without a developer there to guide it.
You can get decent results from AI coding models, though…
…as long as somebody who actually knows how to program is directing it. If you tell it what inputs/outputs you want, it can write a decent function, even going so far as to comment it along the way. I've gotten O1 to write some basic web apps with Node and HTML/CSS without having to hold its hand much. But we simply don't have the training, resources, or data to get it to work on units larger than that. Ultimately it'd have to learn from large-scale projects, and have the context size to hold, if not the entire project, then significant chunks of it, and that would require some very beefy hardware.
Ah yes the ever elusive “tech debt”
jcg@halubilo.social to Linux@lemmy.ml • Linus responds to Hellwig - "the pull request you objected to DID NOT TOUCH THE DMA LAYER AT ALL... if you as a maintainer feel that you control who or what can use your code, YOU ARE WRONG."
8 · 10 months ago
Thanks for the summary; I did a bit of reading myself. The dynamics at play here are interesting: you've got a long, long-term contributor in Hellwig, who's been a maintainer since before Rust even existed, and then quite a few people championing Rust's introduction into the kernel. I feel like Hellwig's concerns must have more to do with the long-term sustainability of the Rust code, like whether there will be enough Rust developers 10, 20, 30 years down the line. I mean, even if it stays maintained, having multiple languages in a codebase increases complexity and makes it harder to contribute. Then you have Filho resigning from the Rust for Linux project, which in itself kind of calls the project's long-term sustainability into question. It seems like Rust would bring quite a few benefits to the Linux kernel, but the question remains whether it'll still be any good in a few decades. This is juicy stuff!

Compilation is CPU-bound and, depending on the language, mostly single-core per compilation unit (i.e. in LLVM that's roughly per file, and an incremental compile will probably only touch a file or two at a time, so the biggest gains come from higher single-core clock speed, not higher core count). So you want to focus on CPUs with higher clock speeds.
Also, a high-speed disk (NVMe, or at least a regular SSD) gives you performance gains on larger codebases.