prioritize process over people
Huh. I feel like that line is familiar…
Through this work we have come to value:
Individuals and interactions over processes and tools
I haven’t tried it yet, but GrayJay purports to be an aggregator along those lines: https://grayjay.app/
They did issue a fix: “Buy a new CPU please!”
That’s why they don’t mind the reputation hit. If 1 person swears allegiance to Intel as a result but 2 people buy new AMD chips, they’re still ahead. And people will forget eventually. But AMD won’t forget the Q3 2024 sales figures.
With a new package manager named vent
Microsoft sues the Library of Babel
Or maybe an abbreviated hash of the text of their specifications?
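Half-joking, but naming packages by a hash of their spec is basically content addressing. A toy sketch in TypeScript (Node's built-in crypto module; the 12-character truncation is an arbitrary choice):

```typescript
// Content-addressed package naming: derive a short, stable name from
// a hash of the package's specification text.
import { createHash } from "node:crypto";

function specName(specText: string): string {
  return createHash("sha256").update(specText).digest("hex").slice(0, 12);
}

// The same spec text always yields the same name; keeping a longer
// prefix makes collisions less likely.
console.log(specName("left-pad: pads the left side of a string"));
```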
That looks like a pretty good deal. At least on paper. ASUS is having a bit of a consumer care meltdown at the moment, so you may wanna check that situation out before you decide. (Search “gamers nexus asus”)
The venerable master Qc Na was walking with his student, Anton. Hoping to prompt the master into a discussion, Anton said “Master, I have heard that objects are a very good thing - is this true?” Qc Na looked pityingly at his student and replied, “Foolish pupil - objects are merely a poor man’s closures.”
Chastised, Anton took his leave from his master and returned to his cell, intent on studying closures. He carefully read the entire “Lambda: The Ultimate…” series of papers and its cousins, and implemented a small Scheme interpreter with a closure-based object system. He learned much, and looked forward to informing his master of his progress.
On his next walk with Qc Na, Anton attempted to impress his master by saying “Master, I have diligently studied the matter, and now understand that objects are truly a poor man’s closures.” Qc Na responded by hitting Anton with his stick, saying “When will you learn? Closures are a poor man’s object.” At that moment, Anton became enlightened.
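For anyone who hasn't met the koan before, here's its whole point in a few lines of TypeScript (the names are mine, not from any paper in the series): an "object" built out of nothing but a closure.

```typescript
// A counter "object" built from a closure: the captured variable acts
// as private state, the returned record acts as the method table.
function makeCounter(start = 0) {
  let count = start; // visible only to the closures below
  return {
    increment: () => ++count,
    current: () => count,
  };
}

const c = makeCounter(10);
c.increment();
console.log(c.current()); // 11 — and there's no way to touch `count` directly
```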
I think it’s kind of strange.
Between quantification and consciousness, we tend to dismiss consciousness because it can’t be quantified.
Why don’t we dismiss quantification because it can’t explain consciousness?
“Insufficient detail. Please ask a specific question.”
“Read the wiki”
“Nobody here is interested in holding your hand.”
I’m talking about user interactions, not deployments.
In a monolith with a transactional data store, you can have a nice and clean atomic state transition from one complete, valid state to the next in a single request/response.
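Roughly this shape, assuming a SQL store via node-postgres (the orders/inventory tables are made up for illustration):

```typescript
// Monolith happy path: one transaction, one complete state transition,
// one response. The client sees either all of it or none of it.
import { Client } from "pg";

async function placeOrder(client: Client, itemId: string, qty: number) {
  await client.query("BEGIN");
  try {
    await client.query(
      "UPDATE inventory SET stock = stock - $1 WHERE item_id = $2",
      [qty, itemId],
    );
    await client.query(
      "INSERT INTO orders (item_id, qty) VALUES ($1, $2)",
      [itemId, qty],
    );
    await client.query("COMMIT");
  } catch (err) {
    await client.query("ROLLBACK");
    throw err;
  }
}
```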
With a distributed system, you’ll often have scenarios where the component which receives the initial request can’t guarantee the final state of the system by the time it needs to produce a response.
If it did, it would spend most of its effort orchestrating other components. That would couple them together and be no more useful than a monolith, just with new and exciting failure modes. So really the best it can do is tell the client “Here’s a token you can use to check back on the state of this operation later”.
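A rough sketch of that shape (Express assumed; the pipeline kickoff is a stand-in for the real distributed work):

```typescript
// "Here's a token, check back later": accept the request, hand back a
// token immediately, expose a status endpoint for polling.
import express from "express";
import { randomUUID } from "node:crypto";

const app = express();
const operations = new Map<string, { status: "pending" | "done" | "failed" }>();

app.post("/orders", (_req, res) => {
  const token = randomUUID();
  operations.set(token, { status: "pending" });
  startOrderPipeline(token); // hypothetical: kicks off the async work
  res.status(202).json({ token }); // we can't promise a final state yet
});

app.get("/operations/:token", (req, res) => {
  const op = operations.get(req.params.token);
  if (!op) {
    res.sendStatus(404);
    return;
  }
  res.json(op);
});

function startOrderPipeline(token: string) {
  // stand-in for the chain of downstream services
  setTimeout(() => operations.set(token, { status: "done" }), 1000);
}
```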
And because data is often partitioned between different services, you can end up having partially-applied state changes. This leaves the data in an otherwise-invalid state, which must be accounted for – simply because of an implementation detail, not because it’s semantically meaningful to the client.
In operations that have irreversible or non-idempotent external side-effects, this can be especially difficult to manage. You may want to allow the client to resume from immediately before or after the side-effect if there is a failure later on. Or you may want to schedule the side-effect, from the perspective of an earlier component in the chain, so that it happens even if a middle component fails (like the equivalent of a catch or finally block).
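One common way to get the "happens even if a middle component dies" behavior is an outbox-style scheme: durably record the intent first, then let a separate worker perform it. A sketch with entirely made-up names:

```typescript
// Outbox sketch: the earlier component records the side-effect it wants
// (ideally in the same transaction as its own state change) and returns.
// A worker drains the outbox later, so a downstream crash can't lose it.
import { randomUUID } from "node:crypto";

interface OutboxEntry {
  id: string;
  kind: "charge_card";
  payload: { orderId: string; cents: number };
  done: boolean;
}

const outbox: OutboxEntry[] = []; // stand-in for a durable table or queue

function scheduleCharge(orderId: string, cents: number) {
  outbox.push({
    id: randomUUID(),
    kind: "charge_card",
    payload: { orderId, cents },
    done: false,
  });
}

// The remote call must be idempotent: "performed but not yet marked
// done" is a real crash window, and the worker will retry it.
async function drainOutbox(perform: (e: OutboxEntry) => Promise<void>) {
  for (const entry of outbox.filter((e) => !e.done)) {
    await perform(entry);
    entry.done = true;
  }
}
```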
If you try to cut corners by representing these things as special cases where the later components send data back to earlier ones, you end up introducing cycles in the data flow of your microservices. And then you’re in for a world of hurt. It’s better if you can represent it as a finite state machine, from the perspective of some coordinator component that’s not part of the data flow itself. But that’s a ton of work.
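To make that concrete, here's roughly what the coordinator's state machine looks like (the order states are hypothetical):

```typescript
// Explicit states and legal transitions: a partially-applied change
// becomes a named state instead of "invalid data we hope nobody reads".
type OrderState =
  | "received"
  | "payment_pending"
  | "payment_captured" // the irreversible side-effect happens here
  | "fulfilled"
  | "compensating"
  | "failed";

const transitions: Record<OrderState, OrderState[]> = {
  received: ["payment_pending", "failed"],
  payment_pending: ["payment_captured", "failed"],
  payment_captured: ["fulfilled", "compensating"], // past this point we can't just abort
  fulfilled: [],
  compensating: ["failed"],
  failed: [],
};

function advance(current: OrderState, next: OrderState): OrderState {
  if (!transitions[current].includes(next)) {
    throw new Error(`illegal transition: ${current} -> ${next}`);
  }
  return next;
}
```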
That state machine complicates every service that touches it, and just managing the data stores that track all that state gets really messy. And once you add queues and batching and throttling and everything else, along with granular permissions… things can break. And they can break in really horrible ways, like infinitely re-sending the same data to an external service because two components keep tossing an event back and forth to each other.
There are general patterns – like sagas, distributed transactions, and event-sourcing – which can… kind of ease this problem. But they’re fundamentally limited by the CAP Theorem. And there isn’t a universally-accepted clean way to implement them, so you’re pretty much doing it from scratch each time.
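For reference, the from-scratch saga shape usually boils down to something like this (all names invented): run each step, and if one fails, run the compensations for the steps that already succeeded, in reverse.

```typescript
interface SagaStep {
  name: string;
  run: () => Promise<void>;
  compensate: () => Promise<void>; // best-effort undo of `run`
}

async function runSaga(steps: SagaStep[]): Promise<void> {
  const completed: SagaStep[] = [];
  for (const step of steps) {
    try {
      await step.run();
      completed.push(step);
    } catch (err) {
      // Undo in reverse order. Note that compensations can themselves
      // fail — that's part of why this gets messy in practice.
      for (const done of completed.reverse()) {
        await done.compensate();
      }
      throw err;
    }
  }
}
```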
Don’t get me wrong. Sometimes “here’s a token to check back later”, with the interaction modeled as a finite state machine rather than as all-or-nothing, is the right call. Some interactions should work that way. But you should build them that way on purpose, not to work around the downsides of a cool buzzword you decided to play around with.
Microservices can be useful, but yeah, working in a codebase where every little function ends up having to make a CAP Theorem trade-off is exhausting, and it creates sooo many weird UX situations.
I’m sure tooling will mature over time to ease the pain of representing in-flight, rolling-back, undone, etc. states across an entire system, but right now it feels like doing reactive programming without observables.
And also just… not everything needs to scale like whoa. And they can scale in different ways: queue up-front, data replication afterwards, syncing ledgers of CRDTs… Scaling in-flight operations is often the worst option. But it feels familiar, so it’s often the default choice.
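For the CRDT option, here's about the simplest possible example, a grow-only counter. Merging is a pointwise max, so replicas can sync in any order and still converge, with no in-flight coordination:

```typescript
type GCounter = Record<string, number>; // replicaId -> that replica's count

// Each replica only ever increments its own slot.
function increment(c: GCounter, replicaId: string): GCounter {
  return { ...c, [replicaId]: (c[replicaId] ?? 0) + 1 };
}

// Merge commutes and is idempotent: apply it whenever, however often.
function merge(a: GCounter, b: GCounter): GCounter {
  const out: GCounter = { ...a };
  for (const [id, n] of Object.entries(b)) {
    out[id] = Math.max(out[id] ?? 0, n);
  }
  return out;
}

function value(c: GCounter): number {
  return Object.values(c).reduce((sum, n) => sum + n, 0);
}
```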
but it comes at the cost of short term agility
Often long-term agility, as well.
Big teams are faster on straightaways. Small teams go through the corners better. Upgrading from a go-kart to a dragster may just send your project 200mph into a wall. Sometimes a go-kart is really what you need.
Pointer moved to Hollywood, to become a character star. They had a string of interviews, but it ended in nothing.
And if they settle on they/them pronouns, you could have an inverted non-binary tree.
Hah, yeah, a hexagon is a weird case. In my experience, when devs talk about “math in a custom view”, it has always meant simply “I want to render some arbitrary stuff in its own coordinate system.” Sorry for assuming too much. 😉
You should probably use matrices rather than trig for view transformations. (If your platform supports it and has a decent set of matrix helper functions.) It’ll be easier to code and more performant in most cases.
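To make that concrete: compose the transform once, then apply one matrix per point, instead of scattering sin/cos through the drawing code. A minimal 2D affine sketch (same a/b/c/d/e/f layout as Canvas and SVG use):

```typescript
// 2D affine transform as the matrix [a c e; b d f; 0 0 1].
type Mat2D = { a: number; b: number; c: number; d: number; e: number; f: number };

const identity: Mat2D = { a: 1, b: 0, c: 0, d: 1, e: 0, f: 0 };

function multiply(m: Mat2D, n: Mat2D): Mat2D {
  return {
    a: m.a * n.a + m.c * n.b,
    b: m.b * n.a + m.d * n.b,
    c: m.a * n.c + m.c * n.d,
    d: m.b * n.c + m.d * n.d,
    e: m.a * n.e + m.c * n.f + m.e,
    f: m.b * n.e + m.d * n.f + m.f,
  };
}

function rotation(rad: number): Mat2D {
  const cos = Math.cos(rad), sin = Math.sin(rad);
  return { a: cos, b: sin, c: -sin, d: cos, e: 0, f: 0 };
}

function translation(tx: number, ty: number): Mat2D {
  return { ...identity, e: tx, f: ty };
}

function apply(m: Mat2D, x: number, y: number): [number, number] {
  return [m.a * x + m.c * y + m.e, m.b * x + m.d * y + m.f];
}

// e.g. a hexagon vertex, rotated 60° and then moved into view space:
const toView = multiply(translation(100, 100), rotation(Math.PI / 3));
const [vx, vy] = apply(toView, 50, 0);
```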
I agree wholeheartedly, and I think I failed to drive my point all the way home because I was typing on my phone.
I’m not worried that libs like left-pad will disappear. My comment that many devs will copy-paste stuff for “group by key” instead of bringing in e.g. lodash was meant to illustrate that devs often fail to find FOSS implementations even when the problem has an unambiguously correct solution with no transitive dependencies.
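For reference, this is the kind of “unambiguously correct, zero-dependency” case I meant; it’s roughly all lodash’s groupBy does:

```typescript
// Group an array by a key function.
function groupBy<T, K extends string | number>(
  items: T[],
  key: (item: T) => K,
): Record<K, T[]> {
  const out = {} as Record<K, T[]>;
  for (const item of items) {
    (out[key(item)] ??= []).push(item);
  }
  return out;
}

groupBy([6.1, 4.2, 6.3], Math.floor); // { 4: [4.2], 6: [6.1, 6.3] }
```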
Frameworks are, of course, the higher-value part of FOSS. But they also require some buy-in, so it’s hard to knock devs for not using them when they could’ve, because sometimes there are completely valid reasons for going without.
But here’s the connection: Frameworks are made of many individual features, but they have some unifying abstractions that are shared across these features. If you treat every problem the way you treat “group by key”, and just copy-paste the SO answer for “How do I cache the result of a GET?” over and over again, you may end up with a decent approximation of those individual features, but you’ll lack any unifying abstraction.
Doing that manually, you’ll quickly find it to be so painful that you can’t help but find a framework to help you (assuming it’s not too late to stop painting yourself into a corner). With AI helping you do this? You could probably get much, much farther in your hideous hoard of ad-hoc solutions without feeling the pain that makes you seek out a framework.
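If it helps, this is the copy-paste answer I’m picturing, roughly as it shows up in a hundred codebases; now imagine one of these for every feature a framework would have unified:

```typescript
// The ad-hoc GET cache: a module-level map and a TTL.
const cache = new Map<string, { expires: number; body: unknown }>();

async function cachedGet(url: string, ttlMs = 60_000): Promise<unknown> {
  const hit = cache.get(url);
  if (hit && hit.expires > Date.now()) return hit.body;

  const res = await fetch(url);
  const body = await res.json();
  cache.set(url, { expires: Date.now() + ttlMs, body });
  return body;
}
```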
That sounds…
Easier to get almost right than actually learning the subject.
Much, much harder to get completely right than actually learning the subject.
So yes, basically the archetypal use case for LLMs.