Dan Houser, AI, and the Boring Middle of the Hype Curve

Dan Houser’s skepticism about AI in game development fits the evidence: current systems are powerful pattern tools but brittle storytellers, best used to extend human craft, not to replace it.

The first time I plugged an AI assistant into my code editor, I wasn't bracing for a breakthrough, just a handy little helper, you know? Tidier code, fewer dumb mistakes, maybe a clever way to redo something at 3 AM. What I actually got was... basically a really speedy, super polite autocomplete. It churned out functions that looked legit until you actually ran them, comments that seemed smart until you hit the third line, and every so often it had this weird knack for knowing exactly what I was thinking before I even finished typing.

I mean, yeah, it was impressive, and on a good day, pretty useful. But it definitely wasn't taking over as the lead engineer.

So, when Dan Houser, who co-founded Rockstar and now leads Absurd Ventures, mentions he's just "dabbling" with AI and feels it's "not as useful as some of the companies would have you believe yet," I recognize the shrug. In recent comments he called AI a "hold-all term for all future computing," pointing out that "it’s not going to solve all of the problems," and suggested a lot of the excitement is really just "to sell AI stock, or to convince everyone this is transformative." Video Games Chronicle and others reported him invoking that familiar 80/20 rule: the first 80 percent of a technical challenge is easy, but the last 20 percent, the part where things need to behave like they do in real life, is "very, very hard."

That's basically the core tension right now with AI in games, and really, in all kinds of creative work. The public-facing story (the one you hear in big presentations and investor pitches) is that generative AI is almost ready to write your quests, design your worlds, bring your non-player characters to life, sort out your game economies, and probably even brew your morning coffee. Behind the scenes, it's more like what Houser described: teams trying things out, finding some genuinely useful tools mixed in with a lot that aren't, all while companies and investors keep telling them this new tech is going to completely change everything.

If you look at recent stories about Houser's new studio, you start to see a pattern. Absurd Ventures is trying out AI characters and tools, but they're not just letting AI write the entire story. Houser mentioned there will be "lots of AI characters" in their upcoming game's story, which is a big difference from saying "the AI wrote the story." It's like the difference between making a world where some behaviors happen inside a story you carefully put together yourself and just letting a system that sometimes makes up information handle all your plotlines.

An illustration of a human writer editing AI-generated text on a tablet.

The contrast between that sci-fi vision and what actually works in a debug console isn't just an oddity of new tech. It's built into the system. Generative models excel at mimicking the surface-level details of their training data. Yet, they're terrible at recognizing when they're simply faking it. Now, that's perfectly okay when you're just brainstorming ideas in a notebook. But it becomes a real issue if the product you release needs to be functional and resonate emotionally with a player who just invested fifty hours in your characters.

You can even find numbers for this pattern outside of gaming. A 2024 article from MIT Sloan Management Review talks about creative professionals using generative tools to save a lot of time on drafting and layout. However, it also points out an increase in "copycat" content and concerns about maintaining quality and authenticity. Researchers at Wharton, who studied idea generation, discovered that while AI can help individuals refine their ideas, groups relying on AI often end up with more similar, less varied concepts. So, you might get a better average, but the range of ideas shrinks. It's like algorithmic comfort food: tasty, but a bit bland.

Then the legal issues crop up. The U.S. Copyright Office has stated clearly that works created solely by AI cannot be copyrighted. And in its 2024 report on generative training, it highlighted the obvious-but-awkward truth that most of these models are built on creative work that was never licensed. There's also the possibility of "model collapse": systems that keep training on their own outputs start producing lower-quality, repetitive, even meaningless results. It's the creative version of photocopying a photocopy until all you have left is grey smudges.

An illustration of a person making a dismissive gesture in front of a whiteboard with an 80/20 rule diagram.

For a studio focused on actually releasing a product, these factors restrict how "using AI" can realistically function. You might use a model to generate initial NPC dialogue, followed by writers completely rewriting it. Or, you could feed prompts into an image system to produce thumbnails, helping an art team establish a visual style more quickly. You might even use a model to review code like an advanced linter, occasionally catching a subtle error. While these applications are useful, sometimes incredibly so, they always occur within a framework of human oversight and refinement.

That's pretty much how I use these tools in my own work. When I'm writing, they're good for three things: expanding on dull points ("give me three other ways to say this sentence"), checking for unforeseen problems ("what obvious issues am I not considering"), and exploring low-risk variations ("show me five unusual twists on this quest idea"). For coding, it's like an overly eager auto-complete function; sometimes it gives you a brilliant snippet, but often it confidently presents something incorrect. You absolutely still need to test everything and have good judgment.
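That "confidently presents something incorrect" problem is worth making concrete. Here is a hypothetical sketch (the function and the bug are invented for illustration, not taken from any real assistant transcript) of the pattern: a median function that looks plausible and passes a casual glance, but quietly mishandles even-length lists until a one-line test exposes it.

```python
def median(values):
    # Plausible-looking suggestion: sort and take the middle element.
    # Looks right, but silently ignores the even-length case.
    ordered = sorted(values)
    return ordered[len(ordered) // 2]

def median_reviewed(values):
    # The human-reviewed version: average the two middle elements
    # when the list has an even number of items.
    ordered = sorted(values)
    mid = len(ordered) // 2
    if len(ordered) % 2 == 1:
        return ordered[mid]
    return (ordered[mid - 1] + ordered[mid]) / 2

# Both agree on odd-length input, so a quick spot check passes:
assert median([1, 2, 3]) == median_reviewed([1, 2, 3]) == 2

# A single even-length test catches the difference:
assert median_reviewed([1, 2, 3, 4]) == 2.5
assert median([1, 2, 3, 4]) == 3  # plausible, and wrong
```

The point isn't that the tool is useless. It's that the test, and the judgment to write it, still have to come from you.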

The real risk emerges when we confuse mere pattern recognition with genuine understanding. Much of the excited chatter about "AI game masters" and "AI writers" overlooks a crucial point: the best games aren't simply vast collections of content. Instead, they're carefully structured experiences, each with its own unique rhythm and a distinct authorial voice. An AI, even one trained on every fantasy novel and tabletop module imaginable, will readily remix tropes endlessly. But it won't, on its own, decide that a game should revolve around a single, awkward dinner party, or a long, quiet walk home after a botched heist. Those kinds of creative decisions aren't found in training data. They originate in a human mind, connected to a set of personal values and a readiness to take a chance.

An illustration of a robot presenting a blueprint that contains a subtle flaw.

If you look closely, even the most positive business articles about generative AI acknowledge this limitation. A Harvard analysis of these systems' benefits and downsides plainly states that their outputs lack nuance and emotional depth unless humans step in. It also points out that 'hallucinations' are an inherent flaw. The typical corporate jargon about "unlocking full potential" appears right alongside advice for guardrails, oversight, and re-designing tasks to keep human judgment as the ultimate authority.

So, when Houser claims AI isn't as helpful as some companies suggest, I don't see a luddite fighting progress. Instead, I hear someone with practical experience who has delivered huge, intricately detailed projects, and thus understands precisely where the tool falls short. You can easily picture the scene: someone proposes an AI narrative generator to cut writing expenses. Then, the person who recalls painstakingly revising thousands of lines of dialogue for coherence says, "Sure, let's try it on one mission and see how much editing it actually takes."

There's an opposing force worth mentioning here. Some creative tasks will definitely be automated, or at least condensed. If an AI system can whip up a decent 2D logo sheet in ten seconds, that changes things for junior graphic designers. If a studio settles for "good enough" for background characters, there will be fewer gigs for human voice actors playing small roles. The core idea isn't that AI can never replace people. It's that it fits best in areas where the work is already viewed as generic and easily swapped out.

This leads me back to the framing problem. If you see AI as trying to replace human creativity, you'll naturally focus on its shortcomings in that area. Is it as funny, as touching, or as strange as the best human creations? Not a chance. The model pulls from an average, and that's not where truly interesting things come from. But if you view AI as a flawed, occasionally insightful helper that extends your own focus, the question changes. Where can having a tireless pattern-recognizer at your side actually free you up to spend more time on the unique things only you can do?

For Houser, and for most folks I know who are building stuff with these tools, the answer often lies in the less central parts. You can use AI to sort out the dull linking sections while you concentrate on the core of your project. It can help you spot cliches so you can intentionally steer clear of them. Think of it as a difficult collaborator whose poor suggestions you can improve upon. It's not magic, just an unusually shaped tool.

The hype machine prefers simple narratives of either being replaced or being saved. The truth, at least for now, is much more ordinary, yet also more fascinating. AI is kind of just hanging out in the corner of the writers' room, spitting out a ton of forgettable lines alongside that one gem that makes everyone stop and think. You keep it around for that single moment, then send it back to its computer while the humans hash out the real meaning of the scene.

Houser's careful approach isn't meant to stifle creativity. It just reminds us that tools work best when we stop imagining them as equals. AI will keep improving. Worlds will keep getting weirder. But the fundamental decision of what to create and why still comes from a very traditional source: a person in a room, staring at a blank page, figuring out what kind of stir they want to cause.
