• 1 Post
  • 40 Comments
Joined 8 months ago
Cake day: August 27, 2025


  • How about you reread the thread instead, see that it’s about accurately reproducing existing stars, and realize that you indeed have a comprehension problem.

    The sub-thread is about the minimum storage to hold a 3D model per star. Starman defined a 2-byte tetrahedron and multiplied. That’s storage math, not astrophysical reproduction.

    Nobody at any point said “accurately reproducing existing stars.”

    Procedural generation is relevant because it’s the canonical example of compressing astronomical-scale data into almost nothing - which is what Braben did in 1984, on the machine I cited, the one you incorrectly “corrected” me about.

    You’ve now moved the goalposts twice: first from Elite to Elite Dangerous, now from “minimal storage per model” to “accurately reproducing existing stars.”

    At some point it’s easier for you to just re-read the thread than to keep inventing new arguments to lose.

    Go away.



  • Elite is from 1984. Per the wiki I cited:

    “…The Elite universe contains eight galaxies, each with 256 planets to explore. Due to the limited capabilities of 8-bit computers, these worlds are procedurally generated. A single seed number is run through a fixed algorithm the appropriate number of times and creates a sequence of numbers determining each planet’s complete composition (position in the galaxy, prices of commodities, and name and local details; text strings are chosen numerically from a lookup table and assembled to produce unique descriptions, such as a planet with “carnivorous arts graduates”). This means that no extra memory is needed to store the characteristics of each planet, yet each is unique and has fixed properties. Each galaxy is also procedurally generated from the first. Braben and Bell at first intended to have 2^48 galaxies, but Acornsoft insisted on a smaller universe to hide the galaxies’ mathematical origins.[36]”
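    The mechanic described in that quote fits in a few lines. Here's a toy illustration (emphatically not Braben and Bell's actual algorithm, whose constants and table are specific to Elite): one seed drives a fixed pseudo-random sequence, names come from a lookup table, and no per-planet data is ever stored.

```python
# Toy sketch of Elite-style procedural generation: every planet is
# derived on demand from a single galaxy seed, so nothing is stored.

SYLLABLES = ["la", "ve", "ti", "en", "qu", "or", "za", "di"]  # lookup table

def next_state(state: int) -> int:
    """One step of a fixed pseudo-random sequence (a simple LCG)."""
    return (state * 1103515245 + 12345) & 0xFFFFFFFF

def planet(seed: int, index: int) -> dict:
    """Derive planet `index` entirely from the galaxy seed."""
    state = seed
    for _ in range(index + 1):
        state = next_state(state)
    # Assemble a name numerically from the lookup table.
    name = "".join(SYLLABLES[(state >> (4 * i)) % len(SYLLABLES)]
                   for i in range(3)).capitalize()
    return {"name": name,
            "position": (state % 256, (state >> 8) % 256),
            "fuel_price": 1 + (state >> 16) % 40}

# Same seed + index always reproduces the same planet, which is why
# "no extra memory is needed" yet each world has fixed properties.
```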

    Elite Dangerous expands on this mechanic, per the cited article.

    "Of course, David Braben and his team didn’t dot their virtual galaxy manually with all those star systems, they used procedural generation. But there’s absolutely more to it, Braben explained when we recently sat down with him in San Francisco.

    “I think it is a distraction when you start describing it as ‘we generated our galaxy procedurally’. It belittles the fact that we actually put a lot of artistic work in it and gathered real data.

    We have a one-to-one scale model of the milky way in our game, with all the 400 billion star systems. What we’ve done is we got real data from 160,000 star systems. That’s every single star in the night sky. About 7,000 are visible to the human eye and a lot more with a telescope. These are all in the game. And all the nebulae and things like that.

    Now, beyond 30 or 40 light-years from Earth, even Hubble can’t resolve the smallest stars. So, the most common star we know about is a Class M Red-star, and beyond those 30 to 40 light-years, Hubble can’t see them. But you CAN see them as a sort of smoke, you just can’t see individual stars.

    And I’m sure in our lifetime, we’ll see further and further with better telescopes. But the point is, we can populate that smoke with stars –with the right sort of mix of stars as well as the density. Because we know how much radiation is coming out of that smoke. And that’s the sort of approach we have taken.

    Using procedural generation to create that smoke, in much the same way an artist uses an air brush or computer. The artist doesn’t mind where the individual dots land; what he’s doing is getting the pattern of the smoke right, or whatever it is he’s drawing with the air brush."





  • ^ exactly that.

    Also, I suspect that’s the reason for Claude famously telling everyone to “go to bed” all the time. That bastich cannot run time and date as a background check reliably…it wings it based on start of conversation. Bitch I type a lot and fast…stop telling me to go to bed at 9pm.

    I expect it will get patched soon.

    An endearing quirk…but it exposes the wiring if you know. Still, doesn’t make the trick any less impressive when it hits.


  • Good question. Short answer: not quite.

    The LLM is the reasoning layer. It reads your input, figures out intent, and outputs structured instructions. They have a method that achieves that (MCP).

    Something else (Home Assistant, n8n, a Python script, whatever you’ve set up) actually executes the actions. The LLM interacts with those things.

    So for the calendar example: your email client triggers on a booking reply, passes the text to the LLM, the LLM extracts the date/time/location and outputs something structured, and then your automation tool creates the calendar event and sets the reminder. Once it’s set up, it looks and feels like one thing, because you interact with it via the LLM (or even better - you vocally tell the LLM. Yes, JARVIS).

    So the LLM never “talks to” Google Calendar directly, it just does the bit that’s hard to do with traditional code, which is reading messy natural language and making sense of it.

    Same for Home Assistant. The LLM parses “turn the lights down a bit, it’s movie time, play something sci-fi” into a device + action + value, and HA does the actual switching.

    The secret sauce that makes this work is MCP (Model Context Protocol) - basically a standardised way for LLMs to talk to tools and services.

    Instead of custom glue code for every integration, you wire up an MCP server once and the model knows how to use it.
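    The pattern looks roughly like this (a toy registry, NOT the real MCP SDK - the actual protocol is JSON-RPC over stdio or HTTP, but the idea of "register a tool once, let the model call anything through one interface" is the same):

```python
# Toy sketch of the tool-registry pattern that MCP standardises.
TOOLS = {}

def tool(name: str, description: str):
    """Register a function so a model can discover and call it."""
    def wrap(fn):
        TOOLS[name] = {"description": description, "fn": fn}
        return fn
    return wrap

@tool("lights.set", "Set a room's light brightness from 0 to 100")
def set_lights(room: str, brightness: int) -> str:
    # In real life this would talk to Home Assistant or similar.
    return f"{room} lights set to {brightness}%"

def call_tool(name: str, **kwargs) -> str:
    """One uniform entry point: the model never needs per-tool glue."""
    return TOOLS[name]["fn"](**kwargs)
```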

    Growing library of them now: filesystems, calendars, browsers, databases, smart home etc.

    Anthropic open-sourced the spec, most major local LLM frontends support it.

    Think of it like hiring a translator who can manage your crew, rather than hiring someone who speaks every language and also has keys to every building and is also a plumber/electrician/contractor/interior designer, if that makes sense.

    TL;DR: once you set up the stack, then the cool automation stuff can happen. Not a big ask, just a bit fiddly, like learning to program your VCR.

    Super surprised Google’s AI doesn’t have the stack / harness inbuilt tho. They could afford to do a lot of the heavy lifting invisibly. I bet they actually do and it’s just … shit. Or a paid extra lol.


  • Some examples

    • Tell Home Assistant to adjust lights/thermostat/locks in plain English based on certain conditions being met
    • Ask Jellyfin/Plex to play something based on a vague description like “something like Interstellar but lighter”
    • Morning briefing that pulls calendar, weather, emails and traffic into a 60-second summary automatically. Or get it to read it to you out loud while you shave.
    • Schedule the robot mower or vacuum based on weather forecast via API
    • Fetch information for you off net at set intervals and update you (email, SMS etc)
    • CCTV uses (classification etc)
    • Batch rename files, sort downloads, resize images - stuff you’d normally write a one-off script for
    • Parse a booking reply email, confirm the time, add it to your calendar, set reminders
    • Tag and name your own pictures based on metadata

    That’s probably just the basics. People have some clever uses for these things. It’s not just “summarize this document”.






  • Yep. Last I looked, they used both GAFAM and their own infra (teclis)? I think the goal is to eventually move solely to their own infra / web indexing.

    Tbh, I dunno how much longer “search” is going to be a unique category. I think we’re probably going to need to move to personal AI fetch tools, as grim as that sounds, that can filter out shit news sources using trusted domains, whitelists/blacklists, blockers, etc. Think ublock, but for search.

    I think that’s how lots of people use ChatGPT tbh; I’m not a fan of that. I’d favour a more local / self hosted AI agent. Something like Perplexica?

    https://github.com/kiranz/perplexica

    Actually, fuck it: maybe I’ll build that myself.

    The surface web is cooked / enshittified almost beyond use and we might need to fight fire with fire.


  • SuspciousCarrot78@lemmy.world to DeGoogle Yourself@lemmy.ml · Aged like milk
    edited 24 days ago

    I hear you - a one-off purchase (or $50-100 for 5 yrs) would be a selling point. Hell, I’d even buy credits like I do for USENET.

    https://stephango.com/quality-software

    Just let me pay in one lump sum. Not a fan of rolling subscriptions; never end up using the whole quota, so unless there’s a rollover it gets wasted.

    Self hosted SearXNG is an option but it’s going to be pulling from Bing, Google etc, so the result quality ceiling is capped by those engines. Kagi is trying to be a better search engine overall and not just a private wrapper around existing ones, IIUC.

    Personally, I find myself not really searching much any more. I sort of know which sites I need and go there directly. Anything low value goes thru ddg-lite or (gasp) my LLM.

    EDIT: Huh…my LLM just told me to sit down

    “Kagi supports PayPal and OpenNode (Bitcoin) as alternative payment methods. Crucially, these do not create a subscription. They top off your account with credits, which then fund your Kagi membership. That’s essentially the lump-sum/credits model you described”

    Well then…I sit corrected.