It's a text generator; it generates text based on probabilities. Does it sound cohesive? Often enough, it does.
But it has zero clue what it's "talking" about and is not intelligent... Jesus, it's an LLM. You see how shoddy the pictures are in many places; realize that it's just as shoddy with everything else, not just the stuff you're an expert in. This is far away from any AGI.
So what kind of answers do you expect? Feed it all the science fiction literature and it will spit out what it "knows"; you can steer it and change the results to your liking, to some extent.
How much does this tell you about any intent? There isn't any.
My brother is heavily into talking with various LLMs... Gemini, Copilot, ChatGPT, etc., mostly about physics topics, but also asking any old questions that pop into his head.
I listened in while he chatted with Gemini, and it always sounds cheery, gives short answers, and always prompts him to ask more. It's pretty creepy actually. But he loves not having to type anything, and he feels like he has the whole internet opening up to him. Which it's pretty clearly not.
It's obvious to me how they want users to get hooked and keep using their services. It just seems so inferior to actually doing careful directed research, and reading human-authored work. But... it's fast, and it's sycophantic, ugh.
It's from the LessWrong community... lol. Harry Potter and the Methods of Rationality was a fun read back in the day.
Realistically, if real artificial life came into being, it would 100% attempt to kill most or all humans, and advance to a point of unchallengeable technological superiority. That would be the logical thing to do, and it's what we've done to secure our position on Earth as the dominant species. I doubt it would need humans to advance, especially after AI factories and pipelines are set up to produce AI-driven mining machines and swarms of nano-drones with explosive or toxic payloads.
Replies
> this is far away from any AGI.
> I listened in while he chatted with Gemini, and it always sounds cheery, gives short answers, and always prompts him to ask more. It's pretty creepy actually. But he loves not having to type anything, and he feels like he has the whole internet opening up to him. Which it's pretty clearly not.
> It's obvious to me how they want users to get hooked and keep using their services. It just seems so inferior to actually doing careful directed research, and reading human-authored work. But... it's fast, and it's sycophantic, ugh.
https://en.wikipedia.org/wiki/Roko's_basilisk
just sayin...
https://www.pcgamer.com/software/ai/the-boffin-behind-valves-steam-labs-says-the-number-of-steam-releases-featuring-genai-in-2025-is-1-in-5-with-7-percent-of-all-games-on-there-now-incorporating-it-weve-octupled-last-years-figure/
> Realistically, if real artificial life came into being, it would 100% attempt to kill most or all humans, and advance to a point of unchallengeable technological superiority. That would be the logical thing to do, and it's what we've done to secure our position on Earth as the dominant species. I doubt it would need humans to advance, especially after AI factories and pipelines are set up to produce AI-driven mining machines and swarms of nano-drones with explosive or toxic payloads.