OpenAI, one of the leaders in artificial intelligence, claims its new o3 system has reached “human-level” scores on an intelligence test designed to measure how well an AI learns new skills from unfamiliar data. If correct, it could mean that artificial general intelligence (AGI), the Holy Grail of AI research, is closer than we think, with implications still hard to imagine.
OpenAI Claims Its New Model Reached Human Level on a Test for ‘General Intelligence.’ What Does That Mean?
However, there are good reasons to suspect OpenAI’s motives. The company was founded by Elon Musk and others supposedly to create a safe form of AI for the benefit of all humanity. It has since pivoted towards a for-profit model, which has gotten it sued by Musk. And its methods have always been suspect: scraping the internet for training content is, at the very least, a massive violation of the intent and purpose of copyright. Now, it turns out, it’s all about the money.
Microsoft, a major investor, supporter, and competitor, has an agreement with OpenAI under which the two would share the loot once OpenAI develops a system that can generate $100 billion a year in profits. No mention of vexing standards, not a word said about safety; only money. That sure simplifies things. It may also make their divorce easier, if such a lofty figure is ever met.
Leaked Documents Show OpenAI Has a Very Clear Definition of ‘AGI’
The high stakes involved, however, may already have deadly ramifications. Much as Boeing whistleblowers have mysteriously been “suicided,” and countless Russian opponents of Putin have died too, a mother is now calling on the FBI to investigate the death of her son, Suchir Balaji. He was one of those who scraped the net to train ChatGPT, but he grew disillusioned when OpenAI changed course, and he became a whistleblower. He spoke to the New York Times about the massive theft of copyrighted materials, and then supposedly committed suicide under suspicious circumstances.