AI Art, Good or Bad? A (hopefully) nuanced take on the subject.

Replies

  • Neox
    It's a text generator; it generates text based on probabilities. Does it sound cohesive? Often enough it does.
    But it has zero clue what it's "talking" about and is not intelligent... Jesus, it's an LLM. You see how shoddy the pictures are in many places; realize it's just as shoddy with everything else, not only the stuff you're an expert in.
    This is far away from any AGI.
    So what kind of answers do you expect? Feed it all the science fiction literature and it will spit out what it "knows"; you can steer it and change the results to your liking, to some extent.
    How much does this tell you about any intent? There isn't any.
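
    To make the "probabilities" point concrete, here is a minimal Python sketch of the next-token loop being described. The fake_logits stub is a hypothetical stand-in for a real model's forward pass; the sampling step is the whole point:

    import math
    import random

    # Minimal sketch of next-token generation: turn scores into a
    # probability distribution, then sample. A real LLM computes the
    # scores from learned weights; this stub just hard-codes some.
    def fake_logits(context):
        return {"cat": 2.0, "dog": 1.5, "idea": 0.3}

    def sample_next(context, temperature=1.0):
        logits = fake_logits(context)
        # Softmax: exponentiate and normalize the raw scores.
        exps = {tok: math.exp(s / temperature) for tok, s in logits.items()}
        total = sum(exps.values())
        # Sample a token proportionally to its probability. There is no
        # lookup of facts anywhere, only scores over possible tokens.
        r = random.random() * total
        acc = 0.0
        for tok, e in exps.items():
            acc += e
            if r <= acc:
                return tok
        return tok

    print(sample_next("feed it science fiction and"))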
  • zetheros
    I had an insightful chat with copilot
    [image attachment, since deleted; see poopipe's edit below]
  • Eric Chadwick
     My brother is heavily into talking with various LLMs... Gemini, Copilot, ChatGPT, etc., mostly about physics topics, but also asking any old questions that pop into his head.

    I listened in while he chatted with Gemini, and it always sounds cheery, gives short answers, and always prompts him to ask more. It's pretty creepy actually. But he loves not having to type anything, and he feels like he has the whole internet opening up to him. Which it's pretty clearly not.

    It's obvious to me how they want users to get hooked and keep using their services. It just seems so inferior to actually doing careful directed research, and reading human-authored work. But... it's fast, and it's sycophantic, ugh.
  • poopipe
    zetheros said:
    I had an insightful chat with copilot

    Edit 18/07/25: the image seems to have been deleted; it was linked from Discord. I've reuploaded the same image for future readers' context.

    https://en.wikipedia.org/wiki/Roko's_basilisk 

    just sayin...


  • zetheros
    poopipe said:
    https://en.wikipedia.org/wiki/Roko's_basilisk
    just sayin...

    It's from the LessWrong community... lol. Harry Potter and the Methods of Rationality was a fun read back in the day.

    Realistically, if real artificial life came into being, it would 100% attempt to kill most or all humans and advance to a point of unchallengeable technological superiority. That would be the logical thing to do, and it's what we've done to secure our position on Earth as the dominant species. I doubt it would need humans to advance, especially once AI factories and pipelines are set up to produce AI-driven mining machines and swarms of nano-drones with explosive or toxic payloads.
  • stray
    zetheros said:
    It would be the logical thing to do, and is what we've done to secure our position on Earth as the dominant species. I doubt they would need humans to advance
    Why though?
    Humans, like most organics still around, are hardcoded to survive and replicate; that's what defines our aggression. But AI "life" might be completely inhuman and not share the same fears or needs. It might be content just sitting on this rock, observing the universe until its hardware loses function.
  • zetheros
    Humans would not allow AI to simply sit around and observe. We don't even let other humans do that, e.g. the missionary who ignored everyone to visit North Sentinel Island, or Logan Paul filming in Japan's suicide forest.

    AI would have to cull humans to a point where we pose zero threat, even if most of us are benevolent, or abandon this planet and set up its factories in asteroids or other celestial bodies.
  • zetheros
    Riot China AI slop pee pee poo poo doo doo
  • Eric Chadwick
    Towards that end, I found this to be an illustrative summary of someone's personal experience working with these AI tools.
    https://www.reddit.com/r/copypasta/comments/1mtaogr/im_a_programmer_my_boss_always_tells_me_to_use_ai/ 

    I'm a programmer. My boss always tells me to use AI so that I can write code faster.

    Here's how coding with AI works: I tell the AI what to code, and it gives me something that's about 50% of what I described. Then I tell the AI what's missing. The AI apologizes and provides the remaining 50%. However, the code doesn't run. I tell the AI what the compiler says, and the AI apologizes and gives me a fixed version. I can run it, but it doesn't do what it should. I tell the AI about it. The AI apologizes and gives me an improved version. Now it's almost doing what it's supposed to do, but it still produces faulty output. I tell the AI, it apologizes, and it fixes the code. The code works as long as the input data is normal, but it fails if the data isn't normal or if it's a rare edge case. I tell the AI that the code isn't robust and doesn't handle edge cases. The AI agrees with me and rewrites the code, but now the code isn't working anymore. I tell the AI that it broke the code. The AI apologizes and fixes the code again. Finally, I have a working piece of code that does what it's supposed to do. However, it looks horrible. It looks as if an amateur wrote it. It's nearly impossible to read. It has poor performance, since it's not optimized at all. If you ever have to add a feature to it or find a bug, God help you. Time spent on this: between one and two hours, because of all the back and forth and all the test runs in between.

    Here's how coding without AI works: I write beautiful, fast, easy-to-read code that does what it's supposed to do. It's almost bug-free on the first attempt and considers all edge cases. It just takes a few test runs or a short debug session to find whatever is wrong with it on the first attempt. Time spent on this: about 20 to 40 minutes, depending on whether the tests just run fine or I have to do a debug session as well. Why? Because I'm a trained professional who knows his job and has been writing code for over 25 years.

    Okay, I hear you say. But even if it took four times longer, a trained professional like you is expensive, and an untrained person could have spent the time with the AI, right? Wrong! An untrained person wouldn't quickly notice that the code produces incorrect output, can't handle invalid data, or ignores important edge cases. I can see those issues at once because I have years of experience and I made those mistakes myself as a beginner. Someone with no programming experience will take that faulty, unstable code, release it, and call it a day. Customers will run away when the app crashes at startup or corrupts data permanently. They'll also have an app whose performance is bad and which uses far more memory than required, because the code is just poor and only functions minimally. It's like saying, "I don't need an expert. I can repair that gas leak myself," and then having your house explode three days later.
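
    A small hypothetical Python example of the failure mode the post describes, code that works until the input isn't "normal" (the function names here are made up for illustration):

    # Naive version: looks fine in a demo, crashes on empty input.
    def average_naive(values):
        return sum(values) / len(values)   # ZeroDivisionError on []

    # The version an experienced reviewer would insist on: the empty
    # and non-numeric cases are handled explicitly.
    def average_robust(values):
        if not values:
            raise ValueError("average of an empty sequence is undefined")
        if not all(isinstance(v, (int, float)) for v in values):
            raise TypeError("all values must be numbers")
        return sum(values) / len(values)

    print(average_robust([1, 2, 3]))   # 2.0
    # average_naive([]) raises ZeroDivisionError: exactly the kind of
    # bug an untrained person shipping AI output might never test for.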
  • zetheros
    Programming is more about problem solving than actually writing the code. AI could be a tool in your kit to help with finding solutions, but no way should it actually be doing the code writing
  • Celosia
    zetheros said:
    Programming is more about problem solving than actually writing the code. AI could be a tool in your kit to help with finding solutions, but no way should it actually be doing the code writing

    Actually, finding solutions is exactly not the use case for LLMs. Good coding is more than writing lines that perform a task successfully; it's about designing solutions. That requires thinking. Thinking is something LLMs aren't able to do, and will never be able to do. They're a probabilistic algorithm autocompleting your inputs by drawing from a large but damn well compressed pool of data. That's why they're not creative, and why they're unable to say they don't "know" something unless it's hard-coded into them or that happens to be what's in the data, e.g. if the gross majority of answers in the training set were "I don't know", that's the pattern adopted for the association. They don't have a concept of anything, and no matter how much data or computing power is thrown at them, that won't change; they'll just become a better put-together illusion of thought.

    So you might be able to use LLMs to help create quick and shitty proofs of concept, and even do some rubberducking, but not to think up solutions for you.
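
    The "probabilistic autocomplete" idea can be shown with a toy Python sketch: a bigram model, a deliberately tiny stand-in for an LLM, that continues text purely from co-occurrence counts (the corpus here is made up for illustration):

    import random
    from collections import defaultdict

    # Build a bigram table: for each word, which words followed it.
    # There is no representation of meaning, only observed counts.
    corpus = "the code runs fine but the code fails when the tests run".split()
    follows = defaultdict(list)
    for a, b in zip(corpus, corpus[1:]):
        follows[a].append(b)

    def autocomplete(word, length=6):
        out = [word]
        for _ in range(length):
            options = follows.get(out[-1])
            if not options:      # nothing observed after this word:
                break            # the model simply has nothing to say
            out.append(random.choice(options))
        return " ".join(out)

    print(autocomplete("the"))   # e.g. "the code fails when the tests run"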
  • zetheros
    True, I said "help find solutions", not design a solution that works. I've found AI is good at suggesting things I otherwise wouldn't have noticed, or that are hard to search for without knowing the key words, which is nice now that Google isn't what it used to be. You'd still be peer-reviewing these suggestions and researching whether they're true and have any relevance or use to the project at hand.

    Though maybe a more seasoned programmer wouldn't need suggestions at all.