The episode of “Mixture of Experts” covers several major AI developments as 2025 draws to a close, with a particular focus on the Disney and OpenAI licensing deal, Nvidia’s latest AI model releases, and broader trends in AI business and culture. The panel, consisting of Tim Hwang, Martin Keen, Marina Danilevsky, and Kush Varshney, begins by discussing the Disney-OpenAI partnership. Disney is entering a three-year licensing agreement with OpenAI that allows its characters and intellectual property to be used in generative AI models; in addition, Disney is taking a billion-dollar equity stake in OpenAI. The panelists note that this marks a significant shift in how major IP holders approach generative AI, moving from defensive postures to proactive participation and control, especially by channeling fan-generated content back into Disney’s own platforms.
The conversation then shifts to the implications of this deal for the broader content and creative ecosystem. Disney’s strategy is seen as a way to maintain control over its IP in the age of AI-generated content, encouraging fans to create within Disney’s ecosystem rather than on external platforms like X or Bluesky. The panelists discuss how this could set a precedent for other major IP holders, potentially leading to a wave of similar deals and a redefinition of the boundaries between official and unofficial content. They also reflect on the changing social contract of authorship, comparing the current moment to earlier eras of oral storytelling and blogging, and speculate on how the economics and legalities of AI-generated content will evolve.
Next, the panel reviews Time Magazine’s decision to name the “Architects of AI” as its Person of the Year, noting that the focus falls largely on CEOs and business leaders rather than researchers or technical innovators. The panelists read this as a reflection of the current state of AI, in which hype, business deals, and infrastructure spending dominate the narrative more than technical breakthroughs do. The discussion highlights the massive investments being made in AI infrastructure (over $400 billion in 2025 alone) and draws parallels to previous technological revolutions, such as the rise of the personal computer.
The episode also covers Nvidia’s launch of its Nemotron 3 open-source models, which are designed for a range of agentic behaviors and ship with supporting infrastructure and tools. The panel debates why Nvidia, despite its dominance in AI hardware, has not always led in model development, and considers whether this release signals a shift. They note that the AI model landscape is becoming increasingly commoditized, with integration, ease of use, and openness (including transparency about training data) becoming key differentiators. The conversation also touches on the trend of companies moving up and down the AI stack and the growing importance of full-stack solutions.
Finally, the panel discusses the recent revelation of Anthropic’s “Claude Soul Document,” a manifesto-like set of principles used to guide the behavior of the Claude AI model during fine-tuning. They explore how this approach differs from traditional prompt-based alignment, embedding values and behavioral guidelines more deeply into the model. The panelists debate the trade-offs between flexibility and prescriptiveness, the challenges of evaluating such alignment, and the philosophical questions raised by attempts to encode a “soul” or moral philosophy into AI. The episode concludes with reflections on the future of prompting and fine-tuning, suggesting that as AI systems become more sophisticated, new methods of guiding and aligning their behavior will continue to emerge.
