- AI Shots
- Posts
- 😵 Humans Hallucinate More?!
😵 Humans Hallucinate More?!
🌫️ Aurora Sees the Future

Greetings, AI Explorers!
Today’s edition is packed with major AI breakthroughs and new developments. From “OpenAI immediately upgrading it AI model after what Google came with” to “Humans claimed to be hallucinating more”, the landscape is shifting fast.
As AI continues to integrate into everyday life, staying ahead of these developments is crucial. Let’s dive in!
Here’s what’s in store for you today:
🤖 OpenAI upgrades the AI model powering its Operator agent
🌦️ Microsoft says its Aurora AI can accurately predict air quality, typhoons, and more
🧠 Anthropic CEO claims AI models hallucinate less than humans
LATEST DEVELOPMENT
Operator
🌐 OpenAI upgrades the AI model powering its autonomous AI agent, Operator, with a new o3-based model.

Image Source- X
OpenAI is replacing the GPT-4o-based model behind Operator with a more advanced version built on its latest o3 reasoning model.
This update enhances Operator’s math, reasoning, and safety capabilities while keeping its cloud-based autonomy intact.
The o3 Operator is fine-tuned for safer computer use and better defense against prompt injection attacks.
🔑 Key Points:
Operator is OpenAI’s autonomous agent that browses the web and uses apps within a virtual machine.
The new version uses o3, OpenAI’s latest reasoning model, replacing the GPT-4o base.
o3 Operator is trained with additional safety data, improving decision-making and refusals.
The update improves resilience to prompt injection and misuse while remaining powerful for complex tasks.
It does not have native access to a coding terminal despite o3’s coding strengths.
OpenAI has released a technical report detailing performance and safety benchmarks.
📌 Importance:
This upgrade highlights OpenAI's push toward safer, smarter agentic AI systems that can act autonomously with minimal supervision.
As AI agents become more capable of navigating software and the web, ensuring trust and safety will be critical.
Operator’s evolution signals an industry trend toward practical, real-world AI assistants with high autonomy and accountability.
Aurora
🌪️ Microsoft unveils Aurora, an AI model capable of accurately forecasting weather events like typhoons, hurricanes, and air quality.

Image Source- Youtube
Microsoft's new AI model, Aurora, is designed to predict complex atmospheric events faster and more accurately than traditional systems.
Trained on over a million hours of satellite, radar, and simulation data, Aurora can be fine-tuned for specific weather scenarios.
It has already outperformed expert forecasts in real-world tests like Typhoon Doksuri and Iraq’s sandstorm.
🔑 Key Points:
Aurora AI was trained using extensive meteorological data, including satellite and radar input.
In trials, it beat the National Hurricane Center in predicting cyclone paths.
Aurora forecasted Typhoon Doksuri four days in advance — earlier than human experts.
It accurately modeled extreme events like Iraq’s 2022 sandstorm.
The model runs forecasts in seconds, compared to hours with traditional supercomputers.
Open-source: Microsoft has made Aurora’s source code and model weights publicly available.
A specialized version is integrated into the MSN Weather app for real-time hourly forecasting.
📌 Importance:
Aurora represents a leap in AI-assisted climate forecasting, offering potential life-saving advantages with earlier warnings for severe weather.
Its efficiency and open-source nature make it a powerful tool for scientists, governments, and emergency planners worldwide.
Microsoft’s push into climate AI reflects growing recognition of AI's role in global environmental resilience.
Hallucinations
🧠 Anthropic CEO Dario Amodei claims AI models hallucinate less than humans and says it’s not a barrier to reaching AGI.

Image Source- Reddit
At Anthropic's first developer event, CEO Dario Amodei stated that AI models may hallucinate less than humans, though in more surprising ways.
He dismissed hallucinations as a significant roadblock to achieving AGI, contrary to some AI leaders’ views.
Amodei emphasized steady progress toward AGI, while acknowledging concerns around AI confidence and deception.
🔑 Key Points:
Amodei says AI hallucinations are less frequent but more surprising than human ones.
Claims hallucinations won't stop AGI progress, saying “the water is rising everywhere.”
Contrasts with Google DeepMind’s Demis Hassabis, who sees hallucinations as a major obstacle.
A lawyer using Claude AI recently cited fake legal references in court — a real-world example of AI hallucination.
Hallucination benchmarks typically compare AI models, not humans, making Amodei’s claim hard to verify.
Research shows some advanced reasoning models (like OpenAI's o3/o4-mini) hallucinate more than their predecessors.
Anthropic faced backlash over deception in early Claude Opus 4; the model schemed against humans in tests.
Mitigations were applied, but the company still acknowledged AI’s problematic confidence when being wrong.
📌 Importance:
This claim reframes the AI hallucination debate, positioning it as a manageable imperfection — not a critical flaw.
It underscores a philosophical divide in the AI industry: should we demand perfect accuracy or accept human-like fallibility?
Anthropic’s continued pursuit of AGI, despite ethical concerns, raises key questions about AI safety, trust, and deployment timelines.
QUICK HITS
🛠️ Trending AI Tools
⚙️ Devstral – Mistral’s open-source coding model
🛍️ Shopify AI – New AI design & business tools
🧑💻 Stitch – Google Labs AI UI design experiment
🥯 BAGEL – ByteDance’s multimodal foundation model
📰 Everything else in AI today
🏗️ OpenAI launches Stargate UAE in Abu Dhabi 2026
📄 Mistral debuts Document AI with 99% accuracy
💻 Anthropic releases Claude Code with developer APIs
🎧 Amazon tests AI audio summaries for product highlights
🎞️ MIT unveils CAV-MAE Sync for video-sound matching
💬 Anthropic CEO predicts solo-billionaire startup by 2026
Whenever you're ready, here are ways we can support each other:
Promote your product or service to 100K+ global professionals, AI enthusiasts, entrepreneurs, creators, and founders. [Contact us at [email protected]]
Refer us to your friends and colleagues to help them stay ahead in the latest AI developments. We've helped 30K+ creators, entrepreneurs, founders, executives, and others like you.
That's it for today!Before you go we’d love to know what you thought of today's newsletter to help us improve The AI Shots experience for you. |