• 0 Posts
  • 654 Comments
Joined 2 years ago
Cake day: January 3rd, 2024




  • Oh, nice.

    I’m always looking for another ChangeLog tool.

    That said, I never leave my ChangeLogs up to automation.

    My git logs are open to my users for full details, but my ChangeLogs are how I communicate which changes my users probably need to be aware of.

    So far, this hasn’t lent itself well to automation. But my team is still considering standardizing our commit log messages enough to allow it someday.
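    A minimal sketch of what that standardization could buy: if commit subjects followed a convention like Conventional Commits (`feat:`, `fix:`, etc. — an assumption, not what my team actually uses), a user-facing changelog draft could be generated mechanically. This parses a hardcoded list of subjects instead of calling `git log`, to keep the example self-contained.

```python
import re

# Hypothetical convention: "feat: ...", "fix: ...", "perf: ..." are
# user-visible; everything else (chore, docs, refactor) stays internal.
CONVENTIONAL = re.compile(r"^(?P<type>feat|fix|perf)(\([^)]*\))?!?:\s*(?P<desc>.+)$")

HEADINGS = {"feat": "Added", "fix": "Fixed", "perf": "Performance"}

def draft_changelog(subjects):
    """Group user-facing commit subjects into changelog sections."""
    sections = {}
    for subject in subjects:
        m = CONVENTIONAL.match(subject)
        if not m:
            continue  # chores, docs, etc. stay out of the user-facing log
        sections.setdefault(HEADINGS[m.group("type")], []).append(m.group("desc"))
    lines = []
    for heading, items in sections.items():
        lines.append(f"## {heading}")
        lines.extend(f"- {item}" for item in items)
    return "\n".join(lines)
```

    Even then, a human would still need to edit the draft — the tool can sort entries, but it can’t decide which changes users actually need to care about.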


  • I’m mainly interested in making code reviews a little easier to manage.

    One thing I haven’t seen mentioned yet, here: All future diffs become much easier to read if the team agrees to use a very strict lint tool.

    I know, I know. “Code changes should be small.” I’ve already voiced that to my team, yet here we are.

    I understand from another Lemmy thread that the tradition is to toss the offending team member’s laptop into the nearest large body of water.
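    To illustrate the lint point: a strict, automatic formatter acts as a canonicalizer, so whitespace-only churn disappears from diffs and reviewers see only the real change. The `toy_format` below is a stand-in (my invention, not a real tool) for something like black, gofmt, or rustfmt.

```python
import difflib

def toy_format(source: str) -> str:
    """Toy canonicalizer: strip trailing spaces, normalize tabs to 4 spaces."""
    lines = []
    for line in source.splitlines():
        lines.append(line.rstrip().replace("\t", "    "))
    return "\n".join(lines) + "\n"

def review_diff(old: str, new: str) -> list[str]:
    """Diff the canonical forms, so formatting noise never reaches review."""
    return list(difflib.unified_diff(
        toy_format(old).splitlines(), toy_format(new).splitlines(), lineterm=""
    ))

# One side uses a tab and trailing spaces; the other uses spaces.
before = "def f(x):\n\treturn x + 1   \n"
after  = "def f(x):   \n    return x + 2\n"
# Without formatting, every line differs; with it, only the real change shows.
```

    The win compounds over time: once everything is formatted, no future diff ever contains a reformat-only hunk.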


  • Okay, this is fun, but it’s time for an old programmer to yell at the cloud, a little bit:

    The cost per AI request is not trending toward zero.

    Current ludicrous costs are subsidized by money from gullible investors.

    The whole cost-model house of cards desperately depends on the poorly supported belief that costs will plummet, thanks to some incredible future discovery, very soon.

    We’re watching an endurance test between irrational investors and the stubborn, boring, nearly completely spent tail end of Moore’s law.

    My money is in a mattress waiting to buy a ten pack of discount GPU chips.

    Hallucinating a new unpredictable result every time will never make any sense for work that even slightly matters.

    But this tech is still super fucking cool. I can think of half a dozen novel, valuable ways to apply it for real-world use. Of course, the reason I can think of those is because I’m an actual expert in computers.

    Finally - I keep noticing that the biggest AI apologists I meet tend to be people who aren’t experts in computers, and are tired of their “million dollar” secret idea being ignored by actual computer experts.

    I think it is great that the barrier of entry is going down for building each unique million dollar idea.

    For the ideas that turn out to actually be market viable, I look forward to collaborating with some folks in exchange for hard cash, after the AI runs out of lucky guesses.

    If we can’t make an equitable deal, I look forward to spending a few weeks catching up to their AI start-up proof-of-concept, and then spending 5 years courting their customers to my new solution using hard work and hard earned decades of expert knowledge.

    This cool AI stuff does change things, but it changes things far less than the tech bros hope you will believe.


  • If you can modify the settings file, you sure as hell can put the malware anywhere you want

    True. But in case it amuses you or others reading along: a code settings file still carries its own special risk, as an executable file, in a predictable place, that gets run regularly.

    An executable settings file is particularly nice for the attacker, as it’s a great place to ensure that any injected code gets executed without much effort.

    In particular, if an attacker can force a reboot, they know the settings file will get read reasonably early during the start-up process.

    So a settings file that’s written in code can be useful for an attacker who can write to the disk (like through a poorly secured upload prompt), but doesn’t have full shell access yet.

    They will typically upload a reverse shell, and use a line added to settings to ensure the reverse shell gets executed and starts listening for connections.

    Edit (because it may also amuse anyone reading along): The same attack can be accomplished with a JSON or YAML settings file, but it relies on the JSON or YAML parser having a known critical security flaw. Thankfully most of them don’t, most of the time, if they’re kept up to date.


  • Today I learned the term Vibe Coding. I love it.

    Edit: This article is a treasure.

    The concept of vibe coding elaborates on Karpathy’s claim from 2023 that “the hottest new programming language is English”,

    Claim from 2023?! Lol. I’ve heard (BASIC) that (COBOL) before (Ruby).

    A key part of the definition of vibe coding is that the user accepts code without full understanding.[1] AI researcher Simon Willison said: “If an LLM wrote every line of your code, but you’ve reviewed, tested, and understood it all, that’s not vibe coding in my book—that’s using an LLM as a typing assistant.”[1]

    Did we make it from AI hype to AI dunk in the space of a single Wikipedia article? Lol.


  • research papers that require a strong background in mathematics and cryptography to understand and implement.

    Lol. I guess that makes sense. Outside of school, we hope that all authentication will be implemented only by cryptography experts anyway.

    Could you maybe suggest some resources on this topic?

    Not really, sorry. I’m not aware of anyone creating resources for your situation.

    Or should I choose a simpler project?

    For some context, cryptography isn’t even usually implemented “completely correctly” by experts. That’s part of why we have constant software security patches.

    If I were in your shoes, I guess it would depend on my instructor and advisors.

    If I felt like they had the skills to catch mistakes but no time to help correct them, then I would just choose a simpler project. If they’re cool with awarding a good grade for a functional demo, I might just go for it.

    I guess I would take this one to an advisor and get some feedback on practicality.
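    For what the “simpler project” route can look like in practice: lean on vetted standard-library primitives instead of implementing one from a research paper. This sketch uses Python’s `hashlib.pbkdf2_hmac` for password storage; the iteration count is illustrative only, not a security recommendation — check current guidance before shipping anything.

```python
import hashlib
import hmac
import secrets

def hash_password(password: str, iterations: int = 200_000) -> tuple[bytes, bytes]:
    """Derive a salted PBKDF2-HMAC-SHA256 digest; store salt + digest."""
    salt = secrets.token_bytes(16)
    digest = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    return salt, digest

def verify_password(password: str, salt: bytes, digest: bytes,
                    iterations: int = 200_000) -> bool:
    candidate = hashlib.pbkdf2_hmac("sha256", password.encode(), salt, iterations)
    # Constant-time comparison to avoid timing side channels.
    return hmac.compare_digest(candidate, digest)

salt, digest = hash_password("hunter2")
```

    Even a demo this small shows the project’s real lesson: the hard part of authentication isn’t the math, it’s all the ways the glue code around the math can go wrong.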