- ChatGPT’s o3 model scored a 136 on the Mensa IQ test and a 116 on a custom offline test, surpassing most people
- A new study found that 25% of Gen Z believe AI is already conscious, and over half believe it soon will be
- The rise in AI IQ scores and in belief in AI consciousness has happened extremely fast
OpenAI’s new ChatGPT model, called o3, just scored an IQ of 136 on the Norway Mensa test, higher than 98% of humanity; not bad for a glorified autocomplete. In less than a year, AI models have become far more complex, flexible, and, in some ways, intelligent.
The climb is so steep that it might lead some people to believe AI has become sentient. According to a new EduBirdie survey, 25% of Gen Z now believe AI is already self-aware, and more than half think it is only a matter of time before their chatbot becomes conscious and possibly demands voting rights.
There is some context to consider with that IQ score. The Norway Mensa test is public, which means it is technically possible the model encountered the questions or answers in its training data. So researchers at MaximumTruth.org created a new IQ test that is entirely offline and out of reach of training data.
On that test, designed to be equivalent in difficulty to the Mensa version, o3 scored a 116. That’s still high.
It puts o3 in the top 15% of human intelligence, hovering somewhere between “sharp graduate student” and “annoyingly smart trivia-night regular.” No feelings. No consciousness. But logic? It has that in spades.
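Those percentile claims follow directly from the standard IQ scale, which is calibrated to a normal distribution with mean 100 and standard deviation 15. A quick sketch of the arithmetic, assuming that standard scale:

```python
from statistics import NormalDist

# Standard IQ scale: normally distributed, mean 100, standard deviation 15
iq = NormalDist(mu=100, sigma=15)

for score in (116, 136):
    # cdf(score) = share of the population scoring below this value
    pct_below = iq.cdf(score) * 100
    print(f"IQ {score}: higher than {pct_below:.1f}% of the population")
```

An IQ of 116 sits about one standard deviation above the mean (roughly the 86th percentile, i.e. top ~14–15%), while 136 sits 2.4 standard deviations up, comfortably above the 98th percentile the article cites.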
Compare that to last year, when no AI tested above 90 on the same scale. Last May, the best AI struggled with rotating triangles. Now o3 is parked comfortably on the right side of the bell curve, among the brightest people.
And that part of the curve is getting crowded. Claude has caught up. Gemini scored in the 90s. Even GPT-4o, the baseline model for ChatGPT, is only a few IQ points below o3.
Still, it’s not just that these AIs are getting smarter. It’s how quickly they’re doing it. They improve the way software does, not the way humans do. And for a generation raised on software, that’s an unsettling kind of growth.
Consciousness: I don’t think it means what you think it means
For those raised in a world navigated by Google, with a Siri in their pocket and an Alexa on the shelf, AI means something looser than its strictest definition.
If you came of age during a pandemic, when most conversations happened through screens, a conversation with an AI probably doesn’t feel very different from a Zoom class. So it may not be a shock that, according to EduBirdie, almost 70% of Gen Zers say “please” and “thank you” when talking to AI.
Two-thirds of them regularly use AI for work communication, and 40% use it to write emails. A quarter use it to finesse awkward Slack replies, and almost 20% share sensitive workplace information with it, such as contracts and colleagues’ personal details.
Many of the respondents rely on AI for all kinds of workplace social situations, from asking for time off to simply saying no. One in eight already talks to AI about drama at work, and one in six has used AI as a therapist.
If you trust AI that much, or find it engaging enough to treat as a friend (26%) or even a romantic partner (6%), the idea that AI is conscious seems less extreme. The more time you spend treating something like a person, the more it starts to feel like one. It answers questions, remembers things, and even mimics empathy. And now that it is demonstrably smarter, philosophical questions naturally follow.
But intelligence is not the same as consciousness. An IQ score does not imply self-awareness. You could score a perfect 160 on a logic test and still be a toaster, if your circuits are wired that way. AI can “think” only in the sense that it can solve problems through programmed reasoning. You might argue that I’m no different, just wired in meat instead of circuits. But that would hurt my feelings, something you don’t have to worry about with any current AI product.
Maybe that will change one day, perhaps even soon. I doubt it, but I’m open to being proven wrong. I understand the urge to suspend disbelief about AI. It may be easier to believe that your AI assistant really understands you when you pour your heart out at 3 a.m. and get supportive, useful answers than to dwell on its origin as a predictive language model trained on the internet’s collective oversharing.
Maybe we are on the verge of genuinely self-aware artificial intelligence, or maybe we are just anthropomorphizing really good calculators. Either way, don’t tell an AI any secrets you wouldn’t want used to train a more advanced model.