Much like that comment. Can you give a better example, or express why it’s a bad example? That would bring some quality in.
FYI you can self-host GitLab, for example in a Docker container.
You can use more debug outputs (log(…)) to narrow it down. Challenge your assumptions! If necessary, check line by line whether all the variables still behave as expected. Or use a debugger, if one is available and you’re familiar with it.
This takes a few minutes tops and guarantees you’ll find the line where the actual behaviour diverges from your expectations. Then you can search more precisely. But usually the solution is obvious once you have found the precise cause.
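To illustrate the idea, here’s a contrived TypeScript sketch (all names and values are made up): each log line encodes one assumption, and the first output that surprises you marks the line where reality and expectation part ways.

```typescript
// Contrived example: the final total looks wrong somewhere downstream.
function sumPositive(items: number[]): number {
  console.log("input:", items);            // assumption 1: the input is what we think it is
  const positive = items.filter(n => n > 0);
  console.log("after filter:", positive);  // assumption 2: the filter kept the right items
  const total = positive.reduce((acc, n) => acc + n, 0);
  console.log("total:", total);            // assumption 3: the sum is computed correctly
  return total;
}

sumPositive([3, -1, 4]); // expected 7; the first surprising log output is where to dig
```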
I think that’s one of the best use cases for AI in programming: exploring other approaches.
It’s very time-consuming to play out what your codebase would look like if you had decided differently at the beginning of the project. So actually comparing different implementations is very expensive. This incentivizes people to stick to what they know works well. Maybe even more so when they have more experience, which means they really know this works very well, and they know what can go wrong otherwise.
Being able to generate code instantly helps a lot in this regard, although it still has to be checked for errors.
There’s a very naive, but working approach: Ask it how :D
Or pretend it’s a colleague, and discuss the next steps with it.
You can go further and ask it to write a specific snippet for a defined context. But as others already said, the results aren’t always satisfactory. Having a conversation about the topic, on the other hand, is pretty harmless.
Those LLMs are great fools, but I am just too paranoid to use them in that manner.
Exquisite typo. I also agree with everything else you said.
You can do that when you control the frontend UI. Then you can set up the input field for their name with input validation applied.
But I would rather not rely on telling the user and hoping they understand and comply. If they have ways to do it wrong, they will.
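For example, a minimal TypeScript sketch of such validation (purely illustrative; validateName and its rules are my own invention, and real names are messy, so you’d want to keep the rules loose):

```typescript
// Illustrative only: reject obviously malformed input instead of
// hoping the user reads the instructions and complies.
type NameResult = { ok: true; value: string } | { ok: false; reason: string };

function validateName(raw: string): NameResult {
  const value = raw.trim();
  if (value.length === 0) return { ok: false, reason: "Name must not be empty." };
  if (value.length > 100) return { ok: false, reason: "Name is too long." };
  // Letters (including accented ones), spaces, hyphens and apostrophes.
  if (!/^\p{L}[\p{L} '-]*$/u.test(value)) {
    return { ok: false, reason: "Name contains unsupported characters." };
  }
  return { ok: true, value };
}

console.log(validateName("  Anne-Marie O'Neill "));
// -> { ok: true, value: "Anne-Marie O'Neill" }
console.log(validateName("<script>alert(1)</script>"));
// -> { ok: false, reason: "Name contains unsupported characters." }
```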
Then null will be returned, as the value of b.
A design professor actually proposed this idea to us. Make the user feel how the computer is working, so they can appreciate the result more.
“Monad” is a shorter term though. “Structured data type” reads almost as bulky as “Curve of constant normal intersection points”.
Thanks again, though just for the record, that didn’t help either. It’s alright, I’m used to the Thunderbird lags. Let’s stop here :)
“For agencies like the FTC to seriously consider action, there has to be harm to customers. But the sneaky formula that mobile developers have pioneered is one where the app itself is free, and the gameplay technically does exist in the application, so where’s the harm? Any rEaSoNaBlE viewer won’t be harmed. They will see and uninstall, and there’s disclosures, so who cares? But these companies aren’t targeting ‘the reasonable customers’, they are targeting the people with addictive personalities who get easily sucked in from a deceptive ad to a predatory product.”
Damn, that’s insane and evil. Like a drug cartel handing out free candy after school, with crystal meth inside. They just weather the storm, knowing full well a few “customers” will stick.
I still don’t understand how this can work so well, which apparently it does given the numbers and scale. I have questions:
I think that’s a helpful analogy and comment. Please remember this while I go on to nitpick. My point is that in both fields, there may be more math-leaning scientists and more concrete-leaning workers, with the engineer sitting somewhere in the middle.
Declaring bridges safe probably involves a lot of math and tables in the background. I guess we don’t actually run a million trucks over them but estimate the safety theoretically, with a few experimental tests. Likewise, a security specialist can define the edge cases against which the tests should be run. That may be the same person who also implements the tests, but I want to emphasize that these are two different roles. And we might consider one more of a scientist, and the other more of a worker.
So how much your activity resembles that of a mathematician or a traditional engineer probably depends on your specific task, and on how much your team requires you to generalize or specialize.
Then let me spell it out: If ChatGPT convinces a child to wash their hands with home-made bleach, be sure to expect lawsuits and a shitstorm coming for OpenAI.
If that occurs, but no liability can be found on the side of ChatGPT, be sure to expect petitions and a shitstorm coming for legislators.
We generally expect individuals and companies to behave in society with everyone’s peace and safety in mind, including that of strangers and minors.
Liabilities and regulations exist for these reasons.
Do car manufacturers get in trouble when someone runs somebody over?
Yes, if it can be shown the accident was partially caused by the manufacturer’s negligence: if a safety measure was not in place or did not work properly, or if it happens suspiciously more often with models from this brand. Apart from solid legal trouble, they can get into PR trouble if many people start to think that way, whether or not it’s true.
If it has the information, why not?
Naive altruistic reply: To prevent harm.
Cynical reply: To prevent liabilities.
If the restaurant refuses to put your fries into your coffee, because that’s not on the menu, then that’s their call. Can be for many reasons, but it’s literally their business, not yours.
If we replace the fries with a fuse, and the coffee with gunpowder, I hope there are more regulations in place. What they sell, to whom, and in which form affects more people than just the buyer and the seller.
Although I find it pretty surprising that, in this case, corporations self-regulate faster than lawmakers can say ‘AI’. That’s odd.
I’m from your camp but noticed I’ve used ChatGPT and the like less and less over the past months. I feel they became less useful and more generic. In February or March, they were my go-to tools for many tasks. I reverted to old-fashioned search engines and other methods, because it just became too tedious to dance around the ethics landmines, to ignore the verbose disclaimers, to convince the model my request is a legit use case. Also, the error rate went up by a lot. It may be a tame lapdog, but it also lacks bite now.
Interesting, may I ask you a question regarding uncensored local / censored hosted LLMs in comparison?
There is this idea that censorship is required to some degree to generate more useful output. In a sense, we somehow have to tell the model which output we appreciate and which we don’t, so that it can develop a bias towards producing more of the appreciated stuff.
Seen that way, an uncensored model would be no better than a million monkeys on typewriters. Do we differentiate between technically necessary bias and political agenda? Is that even possible? Do uncensored models produce more nonsense?
The obvious solution is to abandon your project before it’s too late; leave on a high note.
I also found it very useful to document every step of my setup procedures, right after I figured out what works. At least the respective command lines.
Hehe, good point.
I think AI bots can help with that. It’s easier now to play around with code you couldn’t have written yourself, and to quickly explore different approaches. And while you might shy away from asking your colleagues a noob question, ChatGPT will happily elaborate.
In the end, it’s just one more tool in the box. We need to learn when and how to use it wisely.