So much for buttering up ChatGPT with ‘Please’ and ‘Thank you’
Google co-founder Sergey Brin claims that threatening generative AI models produces better results.
“We don’t circulate this too much in the AI community – not just our models but all models – tend to do better if you threaten them … with physical violence,” he said in an interview last week on All-In-Live Miami. […]
If true, what does this say about the data on which it was trained?
Stack Overflow and the Linux kernel mailing list? yeah, checks out
@athairmor Perhaps ironically, what we’re erroneously calling ‘AI’ really is a kind of black mirror. It’s a crude simulacrum of the shared human id, our worst failings and impulses – made manifest in virtual form, like a digital golem. It’s everything superficially awful about us, ginned up to seem self-aware and act autonomously.
That’s an inevitable and predictable result of how it was created.
Trained? Or… tortured.