No sooner did Donald Trump announce a $500 billion private-sector initiative to build infrastructure for massive AI development than the Chinese blindsided him and his cabal of tech plutocrats by doing it far cheaper. DeepSeek, their version of ChatGPT, is said to run on older computer chips with far less memory and training, yet competes effectively with OpenAI’s flagship chatbot. And they have just followed it up with an image generator.
While definitely not a “Sputnik moment”, all this caused instant consternation on Wall Street, and it will doubtless spur the AI arms race between China and the US. China has already built a massive state network to track and control its citizens online, the “Golden Shield Project”, complete with a Great Firewall to keep out foreign influences. Moreover, it aggressively spies on us online and hacks our systems. The threat all this poses will be orders of magnitude greater, and even more dangerous, with AI capabilities.
Trump has called this a “wake-up call”. Yet millions of Americans are already said to have downloaded the free app from Apple’s App Store. The scale will likely make concerns about Chinese ownership of TikTok laughable by comparison.
Even the Vatican has become concerned about AI. A paper just issued by the Inquisition’s successor and other divisions admits its potential for good, but even they see that its control by a few powerful companies is alarming: “This lack of well-defined accountability creates the risk that AI could be manipulated for personal or corporate gain or to direct public opinion for the benefit of a specific industry.” As usual, Rome sees too late and says too little, but if even the pope and his advisors are concerned, then the danger must be obvious (although they are blind to the spiritual problems it raises).
Part of the reason AI is so problematic is that no one really understands how it works. So some scientists propose getting the programs to tell us the old-fashioned way: through torture. What could possibly go wrong? They propose experiments in which different outcomes would be presented as pleasant or painful to the bots. So far, the only way they have to do this is to tell them, but indeed, the chatbots have responded as expected. But are they really feeling something, or just feigning it because their responses are encoded in human language? (A better way might be to cause pain by hindering their completion of a task, the only thing machines really care about.)
However, since AI often displays surprising and even alarming emergent behavior, is this really smart?