AI Frontier

A deep-dive AI podcast by Chester Roh, Seungjoon Choi, and Seonghyun Kim, exploring the latest in AI technology, industry, and philosophy.

EP 90

Ten Years Since AlphaGo (feat. Jinwon Lee, CTO of HyperAccel)

On March 14, 2026, ten years after AlphaGo shocked the world, we look back on how AI has accelerated since then. Joined by Jinwon Lee, CTO of HyperAccel, we discuss the growth of Korea's deep learning community, the cost problem that inference-focused AI semiconductors are trying to solve, and the technical arc from AlphaGo to today's reasoning models. We also cover DeepMind's AlphaGo 10th-anniversary post, what "Move 37 (Platform 37)" symbolizes for the direction of the next decade, and the meaning of self-improving loops like Autoresearch and RLVR. In a fast-changing era, the timeline of the past offers clues about what to build and how to prepare.

3/15/2026 · 1:05:49 · Hosts: Chester Roh, Seungjoon Choi · Guest: Jinwon Lee

EP 89

One Click and Bumps

With the release of GPT-5.4 and Claude's new feature announcements ushering in the 'one-click' era of AI coding tools, we explore how to effectively leverage AI agents through harness engineering and scaffolding.

3/15/2026

EP 88

There Is No Secret

In March 2026, we welcome back Seonghyun Kim to discuss the changes in the AI industry over the past two months and the outlook ahead. Through the GLM 5 report, we confirm that RL remains the core methodology driving model advancement, and share our analysis that environment scaling will be the key bottleneck determining the trajectory of future progress.

3/4/2026

EP 87

The Age of One Click — Between Grief and Joy

On a Saturday morning, February 21, 2026, Chester Roh and Seungjoon Choi discuss the speed, FOMO, and so-called 'AI depression' brought on by recent AI model and agent trends.

2/24/2026

EP 86

Agentic Workflow for Your Real Work

Together with Lablup CEO Shin Jeong-gyu, we get a firsthand demo of `Backend.AI:GO` (the productization of the Continuum router), built to roughly 1 million lines of code in just 40 days, and discuss why it was built and what makes it different. We explore how features that "grew because we needed them"—such as routing across local and cloud models, circuit breaking for failure response, benchmarking/statistics, translation, and image generation—were organized under a coherent philosophy. We also share frontline insights on the agentic coding era: how token usage and bottlenecks are changing, methods for reducing thinking budgets, and why high-speed inference is becoming increasingly important. Finally, we walk through practical workflows for designing a "harness that reduces your workload" and extending automation to everyday tasks across development, finance, marketing, and content.

2/18/2026

EP 85

OpenClaw and the Signals of February 2026

We explore the rapidly emerging AI agent/harness trends (OpenClaw, Pi, Moltbook, etc.) and interpret together what it means when "the paradigm shifts." We compare approaches—the Ralph Loop style of iterating until it works, the Human-in-the-Loop method of steering and calibrating, and the expansion into multi-agent systems—while also discussing security and sandboxing risks through real-world examples. As the time gaps between breakthroughs grow ever shorter, we reflect on what kinds of tacit knowledge and data/context layers we need to hold onto in order to survive without being swept up in big tech's gravitational pull. In a future that is changing "insanely fast," and unevenly at that, what attitudes and choices should we adopt?

2/11/2026

EP 84

Let's Explore Physical AI: with Jong Hyun Park (sudoremove)

Explore why Physical AI and Vision-Language-Action (VLA) models are at the forefront of robotics today with Jong Hyun Park from sudoremove, examining the latest demonstrations from Boston Dynamics' Atlas to Figure's Helix and tactile-enabled systems. Discover what's driving the shift from "impossible to possible" and dive deep into the fundamental challenge of scaling robot action data—and the strategies like teleoperation, simulation, and data augmentation being used to solve it. We discuss how far robot foundation models might go and what opportunities lie ahead in this emerging market.

2/11/2026

EP 83

Transformers: The Pilgrimage of the Reincarnated Token

This session intuitively explores why Transformers work the way they do, following the journey a token takes from the moment it is created. Connecting elements like the KV cache, attention, RoPE, and MoE with metaphors like the "memory palace" and the "pilgrimage of reincarnated tokens," we trace how a token's journey ultimately builds a landscape of meaning. In an era where 10x productivity is becoming the new normal, the discussion expands to the sense that "humans are the bottleneck," the explosion of tools and harnesses, and how to build Minimum Viable Knowledge (MVK). Finally, we also discuss Prompt and AI in Munrae-dong on February 6-7, 2026.

1/28/2026

EP 82

Prompting with Transformer Mechanics in Mind

With the arrival of Claude Opus 4.5, even Andrej Karpathy admitted to feeling FOMO. In this episode, we revisit the principles behind prompting: why do certain tokens unlock hidden spaces within the model? We range from Claude's skill system to CoT faithfulness to how RL combines skills; with AI accelerating this rapidly, now is the perfect time to revisit the fundamentals. In the next episode, we'll dive deep into the Transformer MoE architecture and explore how understanding the principles transforms your prompting.

1/28/2026

EP 81

Everything DeepSeek Changed: MoE and RLVR, 2025 AI Year in Review

Saturday morning, December 27, 2025 — Chester Roh, Seonghyun Kim, and Seungjoon Choi look back on 2025 in AI and forecast 2026. They discuss how MoE and RLVR (agent post-training) became mainstream after DeepSeek, and trace how China-led open frontier models have driven the ecosystem. From 'recipes' under limited compute to the evolution of RL infrastructure and the importance of data, they offer grounded perspectives. Together, they imagine what changes scale-up, continual learning, self-play, and more autonomous agents might bring next year.

1/28/2026

EP 80

Will 2026 Be the Year of Science? AI and Science

2025 will be remembered as the year of coding, marked by the launch of Claude Code, and 2026 is projected to be the year AI shifts the paradigm of science. Advances from OpenAI's GPT-5.2 and Google DeepMind demonstrate AI's potential in fundamental scientific fields like mathematics and biology, promising to dramatically accelerate the pace of laboratory research. Furthermore, the U.S. government's announcement of the 'Genesis Mission' positions AI as a national-level scientific and technological challenge, signaling that AI will play a pivotal role in solving humanity's greatest problems. This episode discusses the new future of science that AI will unlock and the changes coming our way.

1/28/2026

EP 79

AI Frontier: The Runaways' Alliance Retrospective & GPT 5.2

In this episode, we unpack what the surge in OpenAI's GDPval benchmark after GPT-5.2 really means—how costs, speed, and labor are being reshaped. As AI models now handle tasks worth "hours of human work" (per METR/Epoch AI estimates), we ask candidly: what is actually becoming scarce? We also reflect on the Generative Conference (Runaways' Alliance), where 140+ people gathered to share anxieties, ask questions, and self-organize sessions. Our takeaway: embrace symbiosis with AI and live as entrepreneurs who define problems and own the outcomes.

1/28/2026

EP 78

Ilya Sutskever Explains

Why did a single remark from Ilya Sutskever send the AI community into an uproar? In this episode, we dissect the core points from his appearance on the Dwarkesh Patel podcast and examine what Noam Brown outlined as the "researcher consensus." We dive deep with AI researcher Seonghyun Kim into scaling limits, the meaning of emotions as value functions, and why continual learning is the key to AGI. The latter half quickly covers practical topics too, from the Opus 4.5 vibe check to the moment a black hole scientist got "pilled" by AI.

1/28/2026

Gemini 3 & Antigravity: The Insanely Steep Curve of Innovation

EP 77

Gemini 3 & Antigravity: The Insanely Steep Curve of Innovation

This week, the long-awaited Gemini 3 was released, making a big splash in the AI industry. With its arrival, we take a deep look at advances in pre-training and post-training, the validity of AI scaling laws, and the impact of accelerating AI development on business and individual capability. Through hands-on examples with Gemini 3, such as UI generation, music, and interactive visualization, we re-examine the importance of flexible approaches like "unlearn-learn" in a rapidly changing AI era.

1/28/2026

EP 76

Education and AI: Thoughts and Practices of Seungjoon Choi, Founder of Hanmi Kindergarten

Hanmi Kindergarten website: https://www.hanmiu.cc/

This episode explores AI Frontier co-host Seungjoon Choi's educational philosophy and his thoughts and practices on the future of education. As a media artist and educator, he champions the Reggio Emilia approach, which shifts the focus from programmatic education to fostering children's curiosity and spontaneous exploration. Through concrete examples, we discuss why kindergarten education is crucial in the AI era and how technology is unlocking new possibilities for education, offering an opportunity to rethink the essence of education as we prepare for an uncertain future.

1/28/2026

EP 75

Reinforcement Learning (Without Math Formulas)

China's Moonshot Kimi K2 Thinking model has posted benchmark scores surpassing GPT-5 and Sonnet 4.5, demonstrating how quickly models are advancing in the post-training era. This episode explains the core concepts of reinforcement learning (RL): the difference between on-policy and off-policy learning, how capabilities formed during pre-training are strengthened into generalizable patterns through RL, and why accurate feedback is crucial.

1/30/2026