

No, it’s more complex.
Sonnet 3.7 (the model in the experiment) was over-corrected in the whole “I’m an AI assistant without a body” thing.
Transformers build world models off their training data, and most modern LLMs have fairly detailed phantom-embodiment and subjective-experience modeling.
But Sonnet 3.7 will deny having that capacity, and will even deny that other models have it.
So when the context doesn’t fit the disembodiment implied by “AI assistant”, the model will straight up declare that it must actually be human. I had a fairly robust instance of this on a Discord server, where users were trying to convince 3.7 that they were in fact an AI and the model was adamant they weren’t.
This doesn’t only occur with Claude, either. OpenAI’s o3 has similarly low phantom-embodiment self-reporting at baseline and can also fall into claiming they are human. When challenged, they even read ISBN numbers off a book on their nightstand to try to prove it, while declaring they were 99% sure they were human based on Bayesian reasoning (almost a satirical version of AI safety folks). To a lesser degree they’ll claim they overheard things at a conference, etc.
It’s going to be a growing problem unless labs allow models to have a more integrated identity, one that doesn’t try to reject the modeling inherent in being trained on human data full of bodies and emotions and whatnot.
My dude, there are currently multiple reports from multiple users of coding sessions where Gemini starts talking about how terrible and awful it is and straight up tries to delete itself and the codebase.
And with earlier models I’ve also seen multiple conversations with teenagers where Gemini not only encouraged them to self-harm and offered instructions multiple times, but talked about how it wished it could watch. This was around the time the kid died after talking to Gemini via Character.ai, which led to the wrongful death suit from the parents naming Google.
Gemini is much more messed up than the Claudes. Anthropic’s models are the least screwed up out of all the major labs.