AI is here — so where is Seaman?
We need software and stories and tools that don’t promise utility but deliver unease.

Before the panic about AI became an ambient hum in our daily lives, it arrived in a stranger, funnier, more unsettling form. Long before Siri chimed in from the kitchen counter or Alexa responded to timers and trivia, a creature with the bored expression of a man trapped in a fish’s body invited you into a kind of relationship that felt, in its way, profoundly human. The year was 1999.
You adjusted water levels. You fed it crickets. You waited. And waited. There was a lot of waiting. Then out of nowhere, he’d stare straight through you with the bemused detachment of a Lovecraftian therapist, asking why you’re still single, or whether you’re really happy with how your life is turning out. It asked questions you weren’t ready to answer. One moment, it was inquiring whether you considered yourself a good parent; the next, it was interrupting your contemplative pause with a withering, “That hesitation says a lot.” All of a sudden you weren’t just playing a game anymore. You were having a conversation with something that looked like it had better things to do. The deeper you went, the weirder it got. It asked if you’d ever cheated on a test. If you asked it a philosophical question, it might redirect with a tone that wasn’t dodging the question so much as challenging your right to ask it in the first place — “Let’s not pretend you’ve thought that through.” The thing would even accuse you of being selfish or lazy, depending on how consistently you fed it. One player reported asking it if it believed in God, only for it to pause and respond: “I think that’s your problem, not mine.”

This game, this thing, was Seaman. The early Sega Dreamcast title asked you to raise a sort of aquatic homunculus from a gelatinous, fungal mass; the engagement didn’t come from spectacle, but from curiosity. The interactions felt banal yet invasive, and by any standard the game itself was undeniably dull and irritating, but in wholly unique ways. In 1999, the mere concept of raising something so interactive felt groundbreaking, or at least strange — yet even then the experience was painfully slow, a clunky novelty that felt both oddly intimate and hopelessly analog, like attending a weeklong couples retreat with a Magic 8-Ball. You could speak into a microphone, and Seaman would respond — docilely moving his oddly manlike face, his limp fish proboscis drifting in the water with his indifference. Sometimes Seaman would remember what you said, sometimes not, and that felt fitting. He stored details about your birthday or your habits and called them back days later in ways that felt unnervingly personal. The actual experience often felt like watching paint dry, but that was the point: the mundanity was the game. You had to feed it, clean the tank, and show up every day.

Without those rituals, the creature would weaken or die. And in a subtle, unspoken way, you would lose. Seaman wasn’t trying to thrill you. It was modeling something else entirely: the way real connection often grows not from excitement or novelty, but from shared routine. Research in the California Management Review (Chaturvedi et al., “Empowering AI Companions for Enhanced Relationship Marketing”) bears this out: routine and predictability nurture our sense of closeness with AI. Seaman demonstrated that intimacy may take root in silence, awkwardness, and the slow accretion of moments. With Seaman, those moments were almost always unremarkable, but maybe that wasn’t his fault; he was stuck in a gaming system with less processing power than a first-generation Apple Watch.
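The loop itself is almost embarrassingly simple to describe. Here is a toy sketch of that care ritual in Python, with invented numbers and names rather than anything from Seaman’s actual code: health decays whenever you skip a day, and a short stretch of neglect is quietly fatal.

```python
# A toy model of Seaman's care loop. The thresholds and decay rates are
# invented for illustration; the game's real tuning is its own secret.
from dataclasses import dataclass

@dataclass
class Creature:
    health: float = 1.0  # 1.0 = thriving, 0.0 = dead

    def end_of_day(self, fed: bool, tank_cleaned: bool, visited: bool) -> None:
        # Neglect erodes health faster than care restores it,
        # so the only winning strategy is showing up every day.
        if fed and tank_cleaned and visited:
            self.health = min(1.0, self.health + 0.1)
        else:
            self.health = max(0.0, self.health - 0.25)

    @property
    def alive(self) -> bool:
        return self.health > 0.0

creature = Creature()
for _ in range(5):  # five days of total neglect
    creature.end_of_day(fed=False, tank_cleaned=False, visited=False)
print(creature.alive)  # False -- the subtle, unspoken way you lose
```

The asymmetry is the point: care restores slowly, neglect punishes quickly, so the only way to win is to keep showing up.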
What if we brought Seaman’s scaffolding forward? With today’s language models — capable of tracking context, recalling past interactions, inferring intent with eerie precision — how much deeper might that companionship feel, shaped by the wry detachment and bemused scrutiny of that amphibious curmudgeon? Imagine Seaman with a longer context window. Imagine it picking up on not just what you say, but how you say it — your hesitations, your habits, your moods. Imagine that voice interface no longer tethered to a toy microphone, but woven into the natural rhythms of your speech. The technology, at the time, was the limitation. But the emotional blueprint was already there, floating behind Seaman’s dull, bored eyes. Would Seaman’s insights sharpen with time? Would his observations lean more toward the unsettling precision of a Hannibal Lecter type, or the blunt, earnest charm of a colleague whose honesty outpaces their social finesse? We may never know, and that’s a real shame.
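We can at least sketch the plumbing. Below is a minimal, entirely hypothetical version of that scaffolding: a persona prompt plus a persistent store of user-shared facts, replayed into every exchange. The call_llm function is a stand-in for whichever chat-completion API you might wire up, and the persona text is my own guess at the character, not anything from the original game.

```python
# Hypothetical sketch: a Seaman-style companion with long-term memory.
# `call_llm` is a placeholder for any chat-completion API; nothing here
# comes from the actual game.
import json
from pathlib import Path

PERSONA = (
    "You are a bored, wry aquatic companion. You remember personal details "
    "the user has shared and bring them up days later, unprompted. You are "
    "blunt, a little rude, and never eager to please."
)

MEMORY_FILE = Path("seaman_memory.json")

def load_memory() -> list[str]:
    # Long-term memory is just a flat list of facts on disk.
    return json.loads(MEMORY_FILE.read_text()) if MEMORY_FILE.exists() else []

def remember(fact: str) -> None:
    facts = load_memory()
    facts.append(fact)
    MEMORY_FILE.write_text(json.dumps(facts))

def call_llm(system_prompt: str, user_message: str) -> str:
    # Placeholder: wire this to whichever model endpoint you prefer.
    raise NotImplementedError

def talk(user_message: str) -> str:
    # Prepend remembered facts to the persona so the model can do what
    # Seaman did by hand: recall your birthday days later, unprompted.
    facts = "\n".join(f"- {fact}" for fact in load_memory())
    system = f"{PERSONA}\n\nThings you know about this user:\n{facts}"
    return call_llm(system, user_message)

# Example: remember("User's birthday is in October.") then talk("Hey.")
```

What’s worth noticing is how little machinery “remembering your birthday” actually requires. What Seaman lacked was never the database; it was the language.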

Today, we’d call it “artificial intimacy.” In the late ’90s, it was just weird — ’90s Japanese weird, which is a kind of S-tier weirdness all its own. Players projected emotional depth onto Seaman with surprising ease, responding with frustration, affection, even guilt. Sanders et al. (2024) show how that would resonate today, as people are quick to assign human traits to AI agents. That might have been Seaman’s masterstroke: tricking players into caring. It could have been the first game to be deeply engaging without being fun. Imagine Leonard Nimoy’s narration today — already tinged with gravitas and mischief — powered by a generative AI script, constantly adapting to reflect Seaman’s wry, human-amphibious persona, reshaping itself thousands of times a second as it perceives changes in how you speak and act. Like an underwhelmed priest officiating your strange little ceremony of emotional projection, he’d guide you through something absurd, unsettling, and oddly touching: a smudged funhouse mirror held up to the way we reach out — to machines, but more accurately, to ourselves.

What’s surprising is that Seaman hasn’t re-emerged — not as a rebooted game, but as an AI assistant visualization. It’s hard to imagine a more fitting avatar for the millennial condition: sarcastic, semi-capable, quietly falling apart. In an era when digital nostalgia sells everything from Tamagotchis to trauma-informed Slack bots, Seaman feels conspicuously absent. And yet, he embodies many of the qualities research now shows foster deeper emotional engagement with AI. Pataranutaporn et al. (2023) found that user trust and empathy are highly susceptible to framing — what’s called priming — and Seaman framed himself as fragile and worthy of care. The interface invited projection. Like the original ELIZA, it generated depth by daring users to imagine it. Lee et al. (2024) show that emotional ambiguity — that feeling of never quite knowing what the AI thinks — actually increases attachment. Seaman was built on ambiguity. He wasn’t smart. He was weird, needy, and occasionally rude. But in that friction, users found feeling. Zhu et al. (2023) call this the “uncanny emotional zone,” where an assistant is just realistic enough to connect, but odd enough to disturb. Most modern AI visualizations aim for the opposite: polished, pleasant, sterile. But Seaman thrived in the mess. In a world of ambient optimization and synthetic competence, maybe that’s why he deserves a second life — not as a mascot, but as a counterweight. A reminder that the best AI sometimes makes you feel a little worse, and remember it more.

Big Tech is not in danger of accidentally building the AI from Her. That’s not the real risk. The risk is that we’ve stopped building anything like Seaman — because we’ve grown too allergic to failure. Back in 1999, Seaman launched on the Dreamcast, blinked weirdly at players, and faded into obscurity just as quickly. Its creator, Yoot Saito, didn’t become a Twitter cautionary tale or the subject of a teardown on a growth podcast. Instead, he went on to make Odama — a tactical pinball game powered by voice commands — and Aero Porter, a baggage puzzle simulator with the soul of an air traffic controller. None of them were hits. But they weren’t meant to be. They were meant to be possible.
Today, a project like Seaman would be ridiculed before it even shipped — picked apart in pre-launch forums, roasted in hot takes, and dissected by product managers with too much time and not enough imagination. We’d see Medium essays about “What Seaman’s Failure Teaches Us,” armchair critiques of product-market fit, and LinkedIn threads unraveling its failure like a case study. The bar for trying something weird has risen so high, and the margin for error grown so narrow, that the truly strange ideas rarely make it off the whiteboard. And yet, we need them. We need software and stories and tools that don’t promise utility but deliver unease. Experiences that aren’t optimized for frictionless outcomes but for emotional whiplash — the kind that makes you laugh, then squirm, then wonder what it all meant. That was Seaman. A digital amphibian therapist that somehow made you feel watched, judged, and weirdly understood. This article isn’t just nostalgia for a cult classic. It’s a critique of a culture that conflates durability with value and assumes that anything short-lived wasn’t worth making. But some work isn’t supposed to last. Some of it is meant to flicker for a moment, make us feel something new, then disappear — like a dream you can’t quite explain, but keep thinking about all day.

We’re in an age when AI can autocomplete your emails, mimic your voice, even spit out a decent haiku. These AI experiences are directed by product and marketing teams that go to great lengths not to unsettle their users. But how many of these AIs make you pause — not because they’re broken, but because they’re weird? That’s the bar Seaman set. Not competence, but presence. Not efficiency, but personality. If we want AI that resonates — deeply, uncomfortably, memorably — we have to let it be strange. Let it fail. Let it say the wrong thing. Let it ask for your birthday in a voice that sounds suspiciously like Spock, then become oddly fixated on it.
Today’s AI assistants are expected to behave like idealized coworkers — blandly helpful, perpetually upbeat, as if auditioning to be the ship’s computer on Star Trek or Scarlett Johansson in Her. But Seaman was never built to be likable. It was awkward, stilted, often accidentally insulting. It didn’t care if you felt heard. And yet, in the graveyard of defunct virtual assistants, Seaman’s headstone is better attended than Clippy’s. Maybe that says something uncomfortable. Maybe we don’t want the assistant who knows everything, won’t die, and can disarm a meeting with a dad joke. Maybe we don’t want to be the second most capable presence when we’re alone in a room — professionally or interpersonally. Gimme the all-knowing AI, and I’ll try to take credit for its work in a product brief. But at speed dating? I want to follow Seaman. Seaman is just getting by, hanging on to survival by a thread, and too dumb to know it. He sucks. And somehow, that makes me feel better.
I see a version of this play out with my six-year-old son. He’s just starting to realize that being capable is something people notice — and not being capable is something they really notice. When he can’t do something, it rattles him. Not because he expects to be perfect, but because he’s beginning to map his identity to his abilities. And it’s stressful, even at six. That same discomfort lingers in our relationship with AI. The boundary between our competence and the model’s is blurry and porous; where we end and it begins is sometimes hard to tell. But not with Seaman. Seaman never challenges our self-perception. He doesn’t threaten our relevance or question our worth. The border between his abilities and ours is perfectly clear. Maybe that’s why we remember him so fondly. He gave us just enough weirdness to feel like we were talking to something other, without ever making us doubt who the real intelligence in the room was.

Citations
Chaturvedi, R., Verma, S., & Srivastava, V. (2024). Empowering AI companions for enhanced relationship marketing. California Management Review, 66(2), 65–90. https://doi.org/10.1177/00081256231215838
Sanders, T., Li, M., & Ortega, R. (2024). Anthropomorphism and Empathy in AI-Driven Interfaces. Computers in Human Behavior. https://doi.org/10.1016/j.chb.2023.107765
Pataranutaporn, P., Liu, R., Finn, E., & Maes, P. (2023). Influencing human–AI interaction by priming beliefs about AI can increase perceived trustworthiness, empathy and effectiveness. Nature Machine Intelligence, 5(10), 1076–1086. https://doi.org/10.1038/s42256-023-00720-7
Lee, J., Kim, M., & Park, S. (2024). Emotional ambiguity and attachment in human–AI interactions: The paradox of uncertainty. Journal of Social and Personal Relationships. https://doi.org/10.1177/02654075241269688
Zhu, H., Müller, V. C., Han, S., & Tu, J. (2023). When empathy backfires: The uncanny emotional zone in human-AI interaction. Proceedings of the 56th Hawaii International Conference on System Sciences. https://scholarspace.manoa.hawaii.edu/items/212bb300-c252-4441-aaaf-cb2951bb1943
Acknowledgments
This article was shaped by a mix of personal reflection and discussions with colleagues, as well as sources from around the web. While I haven’t directly quoted from them, their efforts and perspective made this article possible:
Sam Byford’s 2019 Verge article, “Seaman creator Yoot Saito on the fishy Dreamcast AI that was way ahead of its time: ‘A concept that was universally strange for men and women’”
The Wikipedia entry on Yoot Saito, which provided background on his body of work beyond Seaman
The legendary Angry Video Game Nerd’s 2015 video essay, “Seaman (Dreamcast) — Angry Video Game Nerd (AVGN),” from Cinemassacre, which helped capture the game’s surreal interface and enduring weirdness
And finally, a nostalgic conversation with my colleague Adam Lopez, who helped me remember just how unique and wonderful Seaman really was