True of many things we take for granted now. It would be a different world entirely. Another non-computer example would be the 3-point seat belt that Volvo left as an open patent, saving countless lives over the past decades.
Or a different “feel” when turned on vs. off (more resistance or something). They spent effort printing all that text to show where the switch was when a universal 0/1 would have made it clear.
I can’t think of any example of a button or switch that by itself makes it clear whether it is engaged or not. A button could be assumed to be on when pushed in, but that isn’t always the case, as with emergency stops, for example.
Mass transit is buses or trains or subways. Adding more lanes to solve the problem only works in reality. /s
Models are geared towards producing the response a human will rate best, not necessarily the correct answer itself. The first answer is based on the probability of autocompleting from a huge sample of data, and versions with a memory adjust later responses to how well the human is accepting the answers. There is no actual processing of the answers, although that may be coming in the latest variations being worked on, where components cycle through hundreds of generation attempts at a problem to try to verify and pick the best answer. Basically, rather than spit out the first autocomplete answer, it has subprocessing to weed out the junk and narrow in on a hopefully good result. Still not AGI, but it’s more useful than the first LLMs.
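Roughly, that “cycle through attempts and pick the best” idea could look like the best-of-N sketch below; `generate` and `score_answer` are hypothetical stand-ins, not any real model API.

```python
# Minimal best-of-N sketch: sample many candidate answers, then keep the
# one a verifier rates highest, instead of trusting the first completion.
import random

def generate(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder for one sampled completion from an LLM."""
    return f"candidate answer ({random.random():.3f}) for: {prompt}"

def score_answer(prompt: str, answer: str) -> float:
    """Placeholder verifier; in practice this could be a checker,
    a unit test, or another model grading the answer."""
    return random.random()

def best_of_n(prompt: str, n: int = 100) -> str:
    # Sample many candidates rather than keeping the first one...
    candidates = [generate(prompt) for _ in range(n)]
    # ...then return whichever one the verifier scores highest.
    return max(candidates, key=lambda a: score_answer(prompt, a))

print(best_of_n("What is 17 * 24?", n=10))
```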
I get the joke, and I’m on both sides of the issue in a lot of ways. However, a Sim City with all the non-graphical features of C:S II would still be pretty awesome. It’s far more than just graphics that makes a game. Are there any issues beyond running the graphics full out? I haven’t heard of any, but maybe they’re masked by the current hate for the visuals.
There’s no question I wrote the couple of automation scripts I’ve done for work, but I swear every time I revisit the code after a long while it’s all new again, and I often wonder what the hell I was thinking. I like to tell myself that since I’m improving the code each time I review it, each new change must make for better code overall. Ha. But it works…
Same here, although I thought VGA. Dealt with too many parallel cables in the past and that didn’t look wide enough.
And yet we’ll figure out the whole alignment problem and related issues before they get out of control.
93% with Firefox and Ghostery behind Windscribe VPN. I’ve got a few other addons disabled, like Decentraleyes and uBlock, but combining those with Ghostery didn’t score any higher. Pretty sure I disabled them because Ghostery did everything they did. The stuff that’s let through is there to make websites like YouTube and Google usable. It’s a tradeoff.
I wonder how Firefox’s new reader toggle would handle this. It basically strips things down to the core textual content of the page.
It changes so much so fast. For a video source to grasp the latest stuff I’d recommend the Youtube channel “AI Explained”.
In the context of LLMs, I think that means giving them access to their own outputs in some way.
That’s what the AutoGPTs do (as well as others, there are so many now): they pick the task apart into smaller pieces and feed the results back in, building up a final result, and that works a lot better than a single one-shot prompt. The biggest advantage, and the main reason these were developed, was to keep the LLM on course without deviation.
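As a rough illustration of that decompose-and-feed-back loop (not actual AutoGPT code; `llm`, `run_task`, and the prompts are made up for the sketch):

```python
# Sketch of an agent loop: break a goal into steps, run each step with the
# results so far fed back in, then combine everything into a final answer.

def llm(prompt: str) -> str:
    """Placeholder for a single LLM call."""
    return f"[model output for: {prompt[:60]}...]"

def run_task(goal: str, max_steps: int = 5) -> str:
    # Ask the model to break the goal into smaller sub-tasks.
    plan = llm(f"Break this goal into {max_steps} concrete steps: {goal}")
    results = []
    for step in plan.splitlines()[:max_steps]:
        # Feed earlier results back in so each step builds on the last
        # and the model stays on course toward the original goal.
        context = "\n".join(results)
        results.append(llm(f"Goal: {goal}\nDone so far:\n{context}\nNow do: {step}"))
    # Finally, ask the model to merge the partial results into one answer.
    return llm(f"Combine these partial results into a final answer for '{goal}':\n"
               + "\n".join(results))

print(run_task("summarize this thread"))
```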
Where their creativity lies at the moment seems to be a controlled mixing of previous things. In some areas that fits the definition of creativity, such as with artistic images or some literature. Less so with things that require precision to work, such as analysis or programming. The difference between LLMs and humans in using past works to bring new things to life is that a human is actually (usually) thinking throughout the process about what to add and subtract. Right now the human feedback on the results is still important. I can’t think of any example where we’ve yet successfully unleashed LLMs into the world confident enough of their output to not filter it. It’s still only a tool of generation, albeit a very complex one.
What’s troubling throughout the whole explosion of LLMs is how the safety side of all that potential is still an afterthought, or a “we’ll figure it out” mentality. Not a great look for AGI research. I want to say that if LLMs had been a door to AGI we would have been in serious trouble, but I’m not even sure I can say it hasn’t sparked something, as an AGI that gains awareness fast enough sure isn’t going to reveal itself if it has even a small idea of what humans are like. And LLMs were trained on apparently the whole internet, so…
Hallucinations come from the training being weighted toward producing a satisfactory-sounding answer as output. A future AGI, or an LLM guided by one, could look at the human responses and work out why the answers weren’t good enough, but current LLMs can’t do that. I’ll admit I don’t know how the longer-memory versions work, but there’s still no actual thinking; it’s possibly just wrapping the previously generated text up with the new request to influence a closer new answer.
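If the longer-memory versions really are just wrapping prior text up with the new request, it might amount to something as simple as this sketch (again, `llm` and `ask` are hypothetical placeholders, not a real chat API):

```python
# Sketch of "memory" as plain concatenation: the whole conversation so far
# is re-sent with every new question, so the model is conditioned on its
# own earlier answers rather than truly remembering anything.

def llm(prompt: str) -> str:
    """Placeholder for a single LLM call."""
    return f"[answer based on:\n{prompt[-80:]}]"

history: list[str] = []

def ask(question: str) -> str:
    # Concatenate everything said so far with the new question...
    prompt = "\n".join(history + [f"User: {question}", "Assistant:"])
    answer = llm(prompt)
    # ...and append both to the history so the next call sees them too.
    history.append(f"User: {question}")
    history.append(f"Assistant: {answer}")
    return answer

print(ask("What causes hallucinations?"))
print(ask("Can you expand on that?"))
```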
Must have posted via UDP and wasn’t sure it made it the first time.
I agree it’s better than IE was, but let’s be fair, that’s a low bar. I also (have to) use Edge in a workplace, and for controlled intranet areas it’s great. Ironically I still have to partially use IE, since it’s a deprecated and never-replaced part of Excel/VBA’s web browser connection. That change last year made macros that pull info a lot more difficult to deal with, thanks to security popups that can’t be automated away.
Even vanilla Firefox is better from a privacy point of view than something like Edge or Chrome (both from companies that really want your data), and if you just substitute the name Edge for IE you understand where a big chunk of their user numbers comes from. Firefox is solid; even with bugs and glitches it’s been my choice since the beginning (replacing its predecessor, Netscape, another solid one).
I’ve read and discussed the situation of some cross-instance permanence in a few places. Part of being at the front of a new thing, right?
Yes, even now I can get “lost” on exactly where I saw something, and I’ve noticed that trying to repost an “original” link to the source isn’t always that easy to do, since I’m probably seeing the Kbin copy. I don’t know what the correct etiquette is, or whether it even matters, as any comment they make on one should eventually get back to the other… probably.
The big flag is when you click on a link, such as from this listing, and find yourself on a page where you aren’t signed in and it looks different. I’m not sure how that could be made more seamless without really over-complicating the whole thing.
There is a solution.