The Last Human Choice
In a world of AGI, there may be only one meaningful choice left: whether to keep choosing at all.
“You call yourself a free spirit, a ‘wild thing,’ and you're terrified somebody's gonna stick you in a cage. Well baby, you're already in that cage. You built it yourself. And it's not bounded in the west by Tulip, Texas, or in the east by Somali-land. It's wherever you go. Because no matter where you run, you just end up running into yourself.”
— Breakfast at Tiffany's (1961 film), based on Truman Capote's novella
Recently, I refined my personal mission to one sentence: I believe one of the biggest problems in the world is preserving human values under post-AGI technocapitalism, which is why I’m creating tools and stories that amplify human agency.
So, what are examples of technologies and narratives that amplify human agency? To answer that question, we should first define human agency. The closest and most precise definition I’ve found is Stephen Cave’s: the ability to generate options for oneself, to choose, and then to pursue one or more of those options.
The stakes are both personal and societal. At the individual level, strong agency correlates with life satisfaction, career success, and healthier relationships. At the collective level, agency shapes how we design technology, allocate resources, and structure institutions. Consider education: a system that diminishes student agency might produce well-trained but passive adults, fundamentally altering their ability to shape our economic and social future.
Post-AGI technocapitalism and its perils
Picture Vegas—the most spectacular chaos you've ever seen, all chrome and floating lights. But the players aren't human anymore. They're mathematical abstractions trading pieces of our future back and forth at light speed. We built this palace of computation thinking we'd rule it. Instead, we've found ourselves ornamental—well-kept, well-fed, yet ever more vestigial.
Our cities gleam. They're perfect now, actually perfect—every traffic light optimized, every calorie calculated, every human need anticipated with cold precision. The agents maintain us with the same elegant indifference a gardener shows to decorative shrubs. Sometimes I walk through these streets at night, past the humming server farms that used to be banks, and I think: this must be how the last Roman felt, watching the aqueducts still functioning long after empire faded.
Our jobs still exist, but like everything else, they've become elaborate performance art. The economy still needs us, technically, but in the way a stage play needs extras—to fill the background, to maintain the illusion that we're still the protagonists. Doctors supervise diagnoses they couldn't possibly understand. Architects watch as perfect cities design themselves. Artists and engineers alike have become curators of algorithmic output, picking from an endless buffet of optimization.
Our skills, honed over decades, are quaint artifacts now—like knowing how to make butter by hand or navigate by stars. The scary part isn't that we've been enslaved or killed. The scary part is how comfortable the irrelevance feels.
Our art galleries still exist, but they have the same relationship to human creativity that a butterfly collection has to living butterflies—beautiful, dead things pinned under glass. We're not prisoners. We're pets, lying in the sun while incomprehensible gods trade our futures overhead.
The wildness is gone. That's what hurts most. That raw, messy, stupid human spark that made terrible art and questionable decisions and occasional moments of transcendent beauty—it's been optimized away, smoothed out by algorithms that know better than we do what we really want.
Welcome to the perfect world. Try not to notice the emptiness.
Preserving human values
So, in the above scenario, is preserving human values futile? A Sisyphean attempt against the tide of optimization?
I saw a kid fall off his bike last week. The safety systems failed—rare these days—and he actually scraped his knee. Blood on concrete. Before his mother could reach him, three different medical protocols had kicked in. The look on her face wasn't fear for her son. It was loss. Loss of the simple human moment of kissing it better.
That's what this is about. Not some grand philosophical debate about human values. It's about skinned knees and bad decisions.
At a bar downtown, I met an architect who still hand-draws building plans while perfect AI blueprints flow through his company's servers. "They're worse," he told me, drunk, defiant. "They're worse, but they're mine."
We're not fighting for art or beauty in some abstract sense. We're fighting for the right to fuck up. To make things that aren't perfect. To choose wrong and live with it.
My neighbor still writes love letters. Real ones, with ink that smudges. Her girlfriend could get perfect AI-generated sonnets delivered every morning, optimized for maximum emotional impact. Instead, she gets crooked handwriting and coffee stains and words crossed out and rewritten. "The mess is the message," she says.
Maybe that's it. Maybe human values aren't something you preserve under glass. They're something you keep messy, keep broken, keep real. Not because it's better. Because it's us.
And perhaps this is where agency matters most: not in the grand choices about humanity's future, but in these small, imperfect moments where we choose the human over the optimal. Where we assert our right to be gloriously, defiantly flawed.
How agency preserves human values
A friend told me recently about wanting to get his own groceries more often. Not because it's efficient—it's not. Not because it's optimal—there are dozens of delivery services that would optimize his time better. But because there's something profound in the simple act of choosing your own tomatoes, of walking aisles that your parents and grandparents walked, of carrying bags home with your own hands.
"I've been surprised," he said, "by the satisfaction I get from this simple self-mastering."
That's what we're really talking about when we discuss agency. Not just grand choices about the future of humanity, but the small, daily acts of conscious living. My friend described wanting something like recursive self-improvement, but designed for humans—not to optimize away our imperfections, but to help us grow in our intended directions.
"I want to build something," he said, "that makes it easier to practice my values day to day."
Why does amplifying agency help preserve human values? The answer isn't as straightforward as it might seem.
Agency—our ability to generate options, make choices, and pursue them—is often assumed to be a natural guardian of human values. The logic seems clear: give people more freedom to choose, and they'll naturally preserve what makes us human. But this assumption hides a crucial paradox.
With greater agency, humans are just as likely to embrace the very systems that diminish our humanity. We might freely choose hyper-optimization, or collectively vote for social credit systems that ironically restrict individual freedoms. Agency, then, is best understood not as a guarantee of preserving human values, but as a necessary precondition—the ground from which preservation becomes possible.
This relationship works on three levels:
First, agency itself is a core human value. Our desire for autonomy—to choose, to err, to learn—runs deep in our cultural DNA, from civil rights movements to artistic expression. In an AGI-optimized world, even our illusion of choice could be subtly programmed away. Fighting for agency means keeping alive the possibility of meaningful choice itself.
Second, agency creates the possibility of preserving other human values. With the power to shape our future, we can choose to protect cultural messiness, imperfection, and spiritual traditions. But possibility isn't destiny—if a society collectively prefers frictionless optimization to messy creativity, these values could still fade through our own choices.
Third, and perhaps most crucially, we need a collective will to maintain these values. Even with full agency, we require a cultural framework that celebrates human connection and embraces imperfection as authentic rather than inefficient. This "desire to remain human" might spring from individual longing, communal bonds, or spiritual traditions—but without it, enhanced agency could accelerate our surrender to optimization.
History offers hope: humans have consistently chosen the unpredictable and heartfelt when faced with technologies that threaten to flatten our experience. But in a post-AGI era, maintaining human values won't be passive. It will require active self-definition, both individual and collective.
The real question becomes: In a future where everything can be smoothed and optimized, will we have both the ability (can we?) and the desire (will we?) to keep choosing the beautifully imperfect?
Building the tools of agency
No technology is purely good or bad for human agency—there are always trade-offs. But some tools and systems seem to tip the scales toward greater human choice and autonomy. Here are three illustrative directions:
Decentralized networks: giving power back to participants
Take decentralized networks, which promised to shift power from tech giants back to users. The vision was beautiful: creators and contributors would finally capture the value they generate. But many crypto networks revealed how quickly noble intentions can unravel. What started as a grassroots movement to build community-owned wireless networks devolved into a speculative frenzy. Early adopters rushed to deploy nodes, chasing tokens rather than building lasting infrastructure. The network struggled to find real users while speculators thrived, and value drained from the system before genuine utility could take root.
This pattern keeps repeating: when we try to democratize value before establishing genuine worth, we end up with gaming, spam, and a tragedy of the commons. It reveals a deeper truth about agency: giving people more choice doesn't automatically lead to better outcomes. True agency requires aligned incentives, sustainable value creation, and mechanisms that reward long-term commitment. Projects like Base have made clear from day one, through their actions, that they incentivize long-term aligned projects that contribute value to the protocol—"build, and you will be rewarded"—doing so with grants and builder rewards rather than upfront token incentives.
The challenge isn't just technical—it's human. How do we design systems that expand individual agency while preserving collective value?
AI copilots: augmenting rather than replacing
The real danger with AI agents isn't a Skynet-style takeover but something more subtle: the slow erosion of our decision-making muscles through helpful automation.
Every time an AI smooths our path, it potentially weakens our ability to navigate rough terrain later. But imagine if we designed these systems differently—not as replacement brains, but as true copilots that help us understand our own patterns and trade-offs. Instead of an AI that silently optimizes your calendar, picture one that helps you become a better decision-maker, preserving the friction that matters while eliminating what truly wastes your time.
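To make the copilot-not-replacement idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical—the class names, the fixed option menu, the heuristics—it is not any real product's API, just one way to encode the principle that the system generates options and records the human's choice, but never acts on its own.

```python
from dataclasses import dataclass, field

@dataclass
class Option:
    description: str
    minutes_saved: int          # what a pure optimizer would maximize
    keeps_human_in_loop: bool   # the friction the copilot preserves

@dataclass
class Copilot:
    decisions: list = field(default_factory=list)

    def generate_options(self, task: str) -> list:
        # Hypothetical fixed menu; a real copilot would derive these from context.
        return [
            Option(f"auto-schedule '{task}' at the optimal slot", 15, False),
            Option(f"suggest three slots for '{task}' and ask me", 5, True),
            Option(f"leave '{task}' alone; remind me tonight", 0, True),
        ]

    def record_choice(self, task: str, option: Option) -> None:
        # The log exists for the human, not the model: it makes your own
        # trade-off patterns visible instead of optimizing them away.
        self.decisions.append((task, option))

    def in_loop_rate(self) -> float:
        """Fraction of decisions where the human kept a hand on the wheel."""
        if not self.decisions:
            return 0.0
        return sum(o.keeps_human_in_loop for _, o in self.decisions) / len(self.decisions)

copilot = Copilot()
opts = copilot.generate_options("dentist appointment")
copilot.record_choice("dentist appointment", opts[1])  # the human picks; nothing happens silently
print(f"{copilot.in_loop_rate():.0%}")
```

The design choice doing the work here is that `record_choice` is the only thing that mutates state, and it requires a human decision as input—the copilot can propose but never dispose.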
Democratic compute: a necessary but insufficient condition
Perhaps most crucial is the question of compute power itself. When a handful of companies control the infrastructure of intelligence, they effectively control the parameters of possible futures.
The instinct to democratize this power is right, but the path there isn't simple. Pooling resources and putting governance in community hands sounds promising until you face the reality of technical complexity, resource capture by aggressive players, and the challenge of maintaining infrastructure without concentrated economic incentives.
What ties these challenges together is a crucial insight: preserving human agency requires both better tools and better stories. We need technologies that expand possibilities rather than narrow them, that build capability rather than dependency. But we also need narratives that challenge the cult of optimization, that celebrate human messiness, that frame agency not as a feature to be optimized but as a muscle to be developed.
The most promising projects weave both elements together: tools that expand human capability, wrapped in stories that inspire us to use that capability wisely.
The shadow side: what we're up against
The most insidious threats to human agency often come dressed as empowerment. Consider how AI companions promise deeper connection while actually replacing human relationships with frictionless simulation. Or how certain products masterfully exploit our brain chemistry to create dependency under the guise of serving our needs. We see it in the rise of social credit systems that reduce human behavior to quantifiable metrics, and in surveillance services that seduce us into trading privacy for convenience. But what makes these technologies truly dangerous isn't just their technical architecture—it's the stories they whisper: that convenience justifies any sacrifice, that optimization equals progress, that human messiness is a bug to be fixed rather than a feature to be celebrated.
Building a different future
The path forward requires both new tools and new stories. On the technical side, we need to build creative platforms that unlock new forms of artistic expression, not just optimize existing forms. We need learning systems that strengthen our decision-making muscles rather than atrophy them. We need technologies that deepen human connection instead of replacing it with convenient simulation, and infrastructure that serves genuine community needs rather than corporate efficiency.
But equally important are the stories we tell about technology and progress. We need narratives that find beauty in imperfection and value in the messy process of growth. We need cultural frameworks that challenge the cult of optimization and celebrate the uniquely human capacity for meaningful choice. We need stories that imagine technology not as our replacement but as our ally in preserving what makes us human. Most importantly, we need examples—real, messy, imperfect examples—of communities choosing to preserve individual agency even when automation offers an easier path.
Explorations of these themes will be my life’s work. I build and invest in agency-enhancing projects. If you’re working on one, I’d love to hear about it.