Same honestly. And if I ever ask a question that someone might think is a duplicate, I link to that question and say something like “I found X, but the answers here don’t reflect Y”.
Oh, what an interesting idea! I like this; on Monday I'll test switching to it as my main search engine for work and try to report back how it goes!
Surprisingly legible, but feels like I can only read it with momentum, flitting past it and letting my subconscious tell me where the word breaks are. The moment I get confused and look more closely, it becomes almost impossible to read.
Seems like a sensible overhaul, hitting the major issues with the fee, but still going ahead with a version of it. Big points for me:
Still not sure I love charging per install as a concept, and they’ve already overplayed their hand and burnt many bridges, but at least this implementation isn’t insanely hostile. Guess we’ll see how this plays out from here.
Having used Tailwind a little bit, I have nothing but praise for it. Effortless copy/pasting of components with confidence, a really nice look by default, easy tweaking, absolutely no management or planning required to organize your CSS, and it's all right there, directly in your HTML, never anywhere you have to hunt for it. Feels very freeing to just… not think about CSS at all.
And the “clutter” really is fine: a modern IDE with good syntax highlighting, plus a Tailwind extension to complete the class names and clean up accidental duplicates or conflicting properties, and you're good.
Yeah, this certainly brings to mind the times I've heard them discuss on WAN Show how employees have inquired about reducing the release schedule, and how that's not considered a real option. That decision has costs…
Frankly, this whole situation boils down to exactly what I expected. LTT has always produced content at an insane velocity, and issues like these are the inevitable result: miscommunications, errors that need to be tidied up, and compromises such as that water block video not being redone with the proper setup. LTT doesn't have the ability to reverse course on an emergency like that; they're already at such a breakneck pace that they can't make a change of that scope without missing deadlines. If it wasn't this, it would've been something else.
Is that evil? I don't know. It's the business strategy they've gone with, and much of why they're in the position they are. An LTT that put out half the videos they do might never have made it to this position. This is a good wake-up call as to the costs of that kind of operation, and it's up to you how you choose to react to it.
Exactly the mistake Threads just made, trying to capitalize on Twitter's rate-limiting fiasco. The “general public” is extremely fickle, and Reddit will give us more opportunities.
Both of those videos were so good, actually. Incredible timing that they came out side by side lol
Yeah, I respect that. Actually really liked the formatting of this post, with the little summary, and opening the discussion. Much better than having some bot just dump the link here for every video!
That’s actually part of why I chose to drop the first comment, hopefully these can be hopping with some good engagement going forward. I think like many people, I often have thoughts or want to discuss these, but YT comments are just a nightmare if you want to do anything more than skim them.
Thought this was a really weird video for the main channel. Do they not have some kind of car channel at this point?
Anyway, appreciate they want to try new things, but this wasn’t for me personally.
Eh, I'd assume the comparison isn't flattering. I think the point of this article is to argue that you don't need Elasticsearch to implement competent full-text search for most applications. Splitting hairs over a few milliseconds would just distract from that point, when most applications should be prioritizing simplicity and maintainability over such tiny gains on a reasonably sized dataset.
Might be interesting to analyze exactly at what point Elasticsearch becomes significantly useful, though. Maybe at the point where it saves a full tenth of a second? Or where it's returning in half the time? Could be an interesting follow-up article.
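For a sense of how little machinery a basic full-text search needs, here's a minimal sketch using SQLite's built-in FTS5 module (assuming your SQLite build includes FTS5, which most do; the table and documents are made up for illustration):

```python
import sqlite3

# In-memory database with an FTS5 virtual table for full-text search.
conn = sqlite3.connect(":memory:")
conn.execute("CREATE VIRTUAL TABLE docs USING fts5(title, body)")
conn.executemany(
    "INSERT INTO docs (title, body) VALUES (?, ?)",
    [
        ("Search basics", "Full text search ranks documents by relevance."),
        ("Elasticsearch", "A distributed search engine built on Lucene."),
    ],
)

# MATCH runs a tokenized full-text query; bm25() orders results by relevance.
rows = conn.execute(
    "SELECT title FROM docs WHERE docs MATCH ? ORDER BY bm25(docs)",
    ("search",),
).fetchall()
print([r[0] for r in rows])
```

No cluster to run, no index mappings to maintain, and for small-to-medium datasets this kind of thing returns in well under a millisecond.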
Eh, the Python one will probably perform better, because `sum` is probably written in native C under the hood.
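A quick way to check that hunch — a rough micro-benchmark sketch, with exact numbers varying by machine:

```python
import timeit

data = list(range(1_000_000))

def manual_sum(xs):
    # A plain Python loop, evaluated by the interpreter one bytecode at a time.
    total = 0
    for x in xs:
        total += x
    return total

# The built-in sum iterates in C, skipping the interpreter loop overhead.
builtin = timeit.timeit(lambda: sum(data), number=20)
manual = timeit.timeit(lambda: manual_sum(data), number=20)
print(f"built-in sum: {builtin:.3f}s, manual loop: {manual:.3f}s")
```

On CPython the built-in typically wins by a wide margin, precisely because the iteration happens in C rather than in interpreted bytecode.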
Great read!
I think a bonus point in favour of composition here is the power of static typing. Introducing advanced features like protocols can bring back some of that safety that this article describes as being exclusive to inheritance.
Overall, I think composition is the future, and we'll find more ways to create that kind of compile-time safety without binding ourselves to overly restrictive or complicated models.
Ah, it makes way more sense for students, absolutely. None of your code is proprietary, so that’s not a concern, student pricing makes things easier.
Plus, your tech stacks are much simpler. Usually just… Java, or Python, or something. Not a Python web server using X framework for templating, Y framework for typing, and Z framework for API calls to some undocumented internal API.
Alright, guess I’ll reiterate my usual beats here. AI code assistance is interesting, and I’m not against it. However, every current solution is inadequate, until it does the following:
What a fantastic read! Quite funny throughout, and genuinely insightful.
Oof, so counterproductive. I'm a hard reviewer, always trying to hold others to the standard of code I'd like to work in, and to be held to myself, but every once in a while I see a PR that's just… no changes required.
I love just hitting accept without leaving any feedback; it means my coworker valued my feedback and actually internalized it. Trying to laser in and nitpick something unnecessary would be a waste of all our time.
Personally, I subscribe to the belief that the fediverse's “true censorship is impossible” attribute is a benefit, not a curse. Every prior example of censorship has just morphed into “advertiser-palatable”. Which is bad for everyone.
More than happy to have access to instances that will take the kind of drastic action you’re suggesting, access to my own “block” function, etc. Let them come.
The fediverse will inevitably host some messed up stuff. Counting it a blessing that those people have a clear place to go to and sequester themselves off.
So ultimately? More than happy to have an instance that agrees with this extreme anti-censorship posture. sh.itjust.works is fine in my books. I can block the community, just like I could block subreddits on Reddit without abandoning the whole platform. Hell, I could even write a script to block everyone who's subscribed to the community. The power is yours now, and nobody can take that away. That's the fediverse.
I don’t necessarily disagree that we may figure out AGI, and even that LLM research may help us get there, but frankly, I don’t think an LLM will actually be any part of an AGI system.
Because fundamentally it doesn’t understand the words it’s writing. The more I play with and learn about it, the more it feels like a glorified autocomplete/autocorrect. I suspect issues like hallucination and “Waluigis” or “jailbreaks” are fundamental issues for a language model trying to complete a story, compared to an actual intelligence with a purpose.