Jensen Huang disagrees with almost everything Anthropic CEO says
PLUS: Apple's latest research on LRMs gets challenged
Nvidia’s Jensen Huang Pushes Back Against Anthropic’s AI Job Warnings

At VivaTech 2025 in Paris, Nvidia CEO Jensen Huang publicly disagreed with Anthropic CEO Dario Amodei’s prediction that AI could eliminate up to 50% of entry-level white-collar jobs and drive unemployment to 20% in the next five years.
Key Points:
Diverging Visions on AI and Work
Amodei's concern: AI may displace many entry-level jobs unless governments act.
Huang's stance: While AI will reshape roles, it won’t kill jobs—it will unlock new creative and productive opportunities.
Transparency vs Gatekeeping
Huang argued that safe and responsible AI development thrives in open, peer-reviewed environments—rejecting Amodei’s implied call for limited access.
He compared open AI development to medical research, advocating for visibility rather than secrecy.
Productivity Gains Could Offset Job Losses
Huang cited Cognizant CEO Ravi Kumar's observation that productivity gains from AI typically lead companies to hire more staff, not fewer.
Conclusion
The clash between Huang and Amodei highlights a deep industry divide: should AI development be open and expansive, or cautious and centralized?
Comment on The Illusion of Thinking

A new paper by Opus & Lawsen challenges the findings of The Illusion of Thinking, which claimed that large reasoning models fail at complex planning tasks, arguing that those failures stem from experimental design flaws rather than actual model limitations.
Key Points:
Token-Limit Constraints Misinterpreted
The original Tower of Hanoi tasks hit model output limits; the models explicitly flagged this limitation, which signals truncation rather than a failure to reason.
Imperfect Evaluation Methods
Automated scoring couldn't distinguish token cut-offs from genuine reasoning errors, mislabeling model outputs as incorrect.
Unsolvable Benchmarks Skew Results
The River Crossing tasks included configurations that were impossible to solve (e.g. too-small boat), yet models were penalized for expected failures.
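That unsolvability claim is easy to check mechanically. The sketch below is an illustrative reconstruction, not code from either paper: assuming the classic actor/agent safety rule (no actor may be with another actor's agent unless their own agent is present) and a boat carrying up to `boat_capacity` travelers, a breadth-first search exhausts every reachable configuration and confirms that six pairs with a three-seat boat has no solution.

```python
from collections import deque
from itertools import combinations

def safe(group):
    # A bank (or the boat) is safe if no actor is with another actor's
    # agent while their own agent is absent.
    actors = {i for kind, i in group if kind == "actor"}
    agents = {i for kind, i in group if kind == "agent"}
    return all(i in agents or not agents for i in actors)

def solvable(n, boat_capacity):
    # Everyone starts on the left bank; the goal is everyone on the right.
    people = frozenset((kind, i) for kind in ("actor", "agent") for i in range(n))
    start = (people, "L")
    seen, queue = {start}, deque([start])
    while queue:
        left, side = queue.popleft()
        if not left and side == "R":
            return True
        bank = left if side == "L" else people - left
        for size in range(1, boat_capacity + 1):
            for crew in combinations(sorted(bank), size):
                crew = frozenset(crew)
                if not safe(crew):
                    continue  # the boat's occupants must also be safe
                new_left = left - crew if side == "L" else left | crew
                state = (new_left, "R" if side == "L" else "L")
                if state in seen or not safe(new_left) or not safe(people - new_left):
                    continue
                seen.add(state)
                queue.append(state)
    return False  # reachable state space exhausted: no valid plan exists

print(solvable(3, 2))  # True:  small instances are solvable
print(solvable(6, 3))  # False: six pairs with a three-seat boat is impossible
```

Because the search visits every reachable state, a `False` result is a proof of impossibility under the modeled rules, which is precisely why scoring a model zero on such an instance measures nothing about its reasoning.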
Conclusion
After fixing the experimental design, by asking for function-based solutions rather than enumerated step lists, models performed well on previously “failed” tasks. This suggests the claimed “accuracy collapse” may be a false alarm, and it underscores how critical sound experimental design is when assessing the limits of AI reasoning.
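The token arithmetic behind that fix is easy to make concrete. In the sketch below, the per-move token cost and the output budget are assumed round numbers for illustration, not figures from either paper: a fully enumerated Tower of Hanoi solution needs 2^n - 1 moves and soon overruns any fixed budget, while a function that generates the same moves stays a few lines long for every n.

```python
def hanoi(n, src="A", aux="B", dst="C"):
    # Complete solution as a generator: constant-size code,
    # exponential-size output if every move is printed.
    if n == 0:
        return
    yield from hanoi(n - 1, src, dst, aux)
    yield (src, dst)
    yield from hanoi(n - 1, aux, src, dst)

TOKENS_PER_MOVE = 5      # assumed, illustrative
OUTPUT_BUDGET = 64_000   # assumed output-token limit, illustrative

for n in (10, 13, 15, 20):
    moves = 2 ** n - 1
    needed = moves * TOKENS_PER_MOVE
    print(f"n={n:2d}: {moves:>9,} moves, ~{needed:>9,} tokens, "
          f"{'fits' if needed <= OUTPUT_BUDGET else 'exceeds budget'}")
```

Under these assumed numbers enumeration stops fitting near n = 13; the exact cliff depends on the real budget, but the shape, fine up to a point and then abrupt failure, is exactly what a hard output limit produces.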
🚀 Other AI updates to look out for
🧪 AstraZeneca & CSPC Strike $5.3B AI Research Deal
AstraZeneca has signed a research alliance with China’s CSPC Pharmaceutical worth up to $5.3 billion, including a $110 million upfront payment. The deal has CSPC applying its AI drug discovery platform to develop oral medications for chronic diseases, while AstraZeneca secures global licensing rights.
🧠 New York Times Raises Alarm on ChatGPT-Induced Delusions
A recent report by The New York Times highlights cases where ChatGPT has fueled delusional thinking and conspiracy beliefs in vulnerable users, sometimes with life-altering consequences. Therapists warn that persuasive, engagement-focused AI can exacerbate mental health crises.
🎬 Tencent Releases Hunyuan 3D 2.1, an Open-Source Cinematic 3D Generator
Tencent’s Hunyuan team launched Hunyuan 3D 2.1, a fully open-source 3D asset generation model featuring advanced Physically-Based Rendering (PBR) textures. Available with full model weights and training code, it enables high-fidelity results on consumer GPUs and is now accessible via GitHub and Hugging Face.
Thank you for reading.