The AI Gold Rush: Power, Misinformation, and the Need for Decentralization
- Michael Rickwood
- Sep 23
- 3 min read

It started one evening with a call from my girlfriend. She had just spoken to a friend in the Czech Republic who had read a newspaper article claiming that hospitals in France were preparing for civil war, supposedly on government orders ahead of anticipated unrest. Understandably, she was a little shaken.
My immediate reaction was skepticism. Nothing on the ground suggested such a scenario. A quick scan of reputable news sources and social media showed nothing. I reassured her, flagged the claim as fake news, and then cross-checked using Grok, xAI’s assistant. The verdict was clear: misinformation.
This small episode revealed a big truth. AI is already one of the most powerful tools we have for fighting misinformation. Tools like Grok (integrated into X), ChatGPT, and Claude are becoming frontline defenders in verifying facts and protecting us against propaganda.
The Double-Edged Sword of Centralized AI
But here’s the catch. As good as these systems are, they are all owned and run by centralized corporations. Right now, we’re in an “AI gold rush,” with companies racing to onboard users and outperform one another, releasing new model versions every few months. For now, that competition fuels performance and fact-grounded answers. It’s a great time for AI.
But what happens if one of these organizations decides to change what their models say — subtly or even drastically? A centralized model can be altered by the company that controls it. Today, AI may help us flag misinformation. Tomorrow, it could become a source of it.
The Case for Decentralization
That’s why we need a failsafe. The solution isn’t to reject AI; far from it. It’s to open the competition to projects grounded in community governance: in short, to decentralize it. Models maintained by decentralized organizations, governed transparently, and secured through blockchain can ensure resilience against censorship, bias, or manipulation.
Blockchain has long been touted as the natural partner for AI, offering both security and transparency. Done right, decentralized AI could be as resilient and trustworthy as Bitcoin.
Early Movers in Decentralized AI
This isn’t theory — here are three projects that are already building toward it:
Bittensor: building an open network of AI subnets, where independent developers contribute models and earn incentives through the TAO token. It’s a live ecosystem for distributed innovation. Think of it as an automated investor flywheel.
Near Protocol: reorienting its blockchain infrastructure toward AI, enabling scalable, decentralized applications and governance models designed for AI-native use cases.
Prime Intellect: pushing the frontier of decentralized AI training, focusing on collaborative model development across distributed nodes — reducing reliance on centralized data centers and making AI more resilient and transparent.
Together, these projects point toward a future where AI is not only powerful but trustworthy, because it is owned, governed, and maintained by communities rather than corporations.
Conclusion
My girlfriend’s phone call reminded me that misinformation spreads fast, and that AI is already a powerful ally in fighting it. But it also reminded me of the fragility of centralization.
I expect some of the leaders of these organizations to prove me wrong, and I hope no one proves me right.
The real safety net will be building decentralized AI systems that remain transparent, accountable, and resilient no matter who is in charge. Because the day AI becomes a monopoly is a dangerous day for humanity. And without trust, leadership, communication, and society itself begin to fracture.