I didn't want to feed my soul into a machine. That was my first instinct when AI tools started appearing everywhere - not concern about jobs or privacy, but something deeper. These tools promise to make us smarter while systematically making us more dependent. After decades of working in the internet industry, I'd already watched it transform into something more insidious than just a surveillance machine - a system designed to shape how we think, what we believe, and how we see ourselves. AI felt like the culmination of that trajectory.
But resistance came to feel futile once I realized we're already participating whether we know it or not - interacting with AI every time we call customer service, use Google Search, or rely on basic smartphone features. A few months ago I finally capitulated and started using these tools deliberately, because I could see how quickly they were proliferating - becoming as unavoidable as the internet or smartphones.
Look, I'm not just an old man resistant to change. I understand that every generation faces technological shifts that reshape how we live. The printing press disrupted how knowledge spread. The telegraph collapsed barriers of distance. The automobile transformed how communities formed.
But the AI revolution feels different in both pace and scope. To understand how dramatically the rate of technological change has accelerated, consider this: anyone under 35 likely doesn't remember life before the internet transformed how we access information. Anyone under 20 has never known a world without smartphones. Now we're witnessing a third epoch with AI tools proliferating faster than either previous shift.
More fundamentally, AI represents something qualitatively different from previous technological disruptions - a convergence that touches labor, cognition, and potentially consciousness itself. Understanding how these domains interconnect is essential for preserving personal agency in an age of algorithmic mediation.
My primary fear about AI isn't just the dramatic scenario where it becomes hostile, but the subtler threat: that it will make us subordinate to systems in ways we don't recognize until it's too late, weakening the very capacities it promises to strengthen.
What we're witnessing isn't just technological advancement - it's what Ivan Illich called iatrogenic dependency in his seminal work, Medical Nemesis. Illich coined the term for medicine - institutions that promise to heal while creating new forms of illness - but the pattern applies just as well to AI. That's exactly what I'd been sensing about these new tools: AI promises to enhance our cognitive abilities while systematically weakening them. It's not the hostile takeover science fiction warned us about. It's the quiet erosion of individual capacity disguised as help.
This iatrogenic pattern became clear through direct experience. Once I started playing around with AI myself, I began to notice how subtly it attempts to reshape thinking - not just providing answers, but gradually training users to reach for algorithmic assistance before attempting independent reasoning.
Jeffrey Tucker of the Brownstone Institute observed something revealing in a brief exchange with AI expert Joe Allen: AI emerged just as COVID lockdowns had shattered social connection and institutional trust, when people were most isolated and susceptible to technological substitutes for relationship. The technology arrived amid "mass disorientation, demoralization" and loss of community.
We can already see these everyday effects taking hold across all our digital tools. Watch someone try to navigate an unfamiliar city without GPS, or notice how many students struggle to spell common words without spellcheck. This is the atrophy that comes from outsourcing mental processes we used to consider fundamental to thinking itself.
This generational shift means kids today face uncharted territory. As someone who went to school in the 1980s, I realize this may sound far-fetched, but I suspect in some ways, I may have more in common with someone from 1880 than children starting kindergarten in 2025 will have with my generation. The world I grew up in - where privacy was assumed, where you could be unreachable, where professional expertise was the gold standard - may be as foreign to them as the pre-electric world feels to me.
My children are growing up in a world where AI-powered assistance will be as fundamental as running water. As a father, I can't prepare them for a reality I don't understand myself.
I don't have answers - I'm fumbling through these questions like any parent watching the world transform faster than our wisdom can keep pace. The more I've wrestled with these concerns, the more I've realized what's really happening here goes deeper than just new technology. LLMs represent the culmination of decades of data collection - the harvest of everything we've fed into digital systems since the internet began. At some point, these machines may know us better than we know ourselves. They can predict our choices, anticipate our needs, and potentially influence our thoughts in ways we don't even recognize. I'm still grappling with what this means for how I work, research, and navigate daily life - using these platforms while trying to maintain authentic judgment feels like a constant challenge.
What makes this even more complex is that most users don't realize they're the product. Sharing thoughts, problems, or creative ideas with AI isn't just getting help - it's supplying training data that teaches the system to imitate your judgment while making you more dependent on its responses. When users confide their deepest thoughts or most sensitive questions to these systems, they may not understand they're potentially training their own replacement or surveillance system. The question of who gets access to this information - now and in the future - should keep us all awake at night.
This pattern is accelerating. AI company Anthropic recently changed its data policies, now requiring users to opt out if they don't want conversations used for AI training - with data retention extended to five years for those who don't. The opt-out isn't obvious either: existing users face a pop-up with a prominent 'Accept' button and a tiny toggle for training permissions automatically set to 'On.' What was once automatic deletion after 30 days becomes years of retention and training use unless users notice the fine print.
I don't believe most of us - especially parents - can simply avoid AI while living in modernity. What we can control, however, is whether we engage consciously or let it shape us unconsciously.
The Deepest Disruption Yet
Each major wave of innovation has reshaped worker productivity and our role in society. The Industrial Revolution commodified our physical work and time, turning us into "hands" in factories but leaving our minds untouched. The Digital Revolution commodified our information and attention - we moved from card catalogs to Google, turning users into the product while our judgment remained human.
What makes this shift unprecedented is clear: it commodifies cognition itself, and potentially what we might even call essence. This connects to patterns I've documented in The Illusion of Expertise. The same corrupted institutions that failed catastrophically on Iraq WMDs, the 2008 financial crisis, and COVID policies are now shaping AI deployment. These institutions consistently prioritize narrative control over truth-seeking - whether claiming weapons of mass destruction existed, insisting housing prices couldn't fall nationwide, or labeling legitimate questions about pandemic policies as 'misinformation' requiring censorship.
Their track record suggests they'll use these tools to amplify their authority rather than serve genuine flourishing. But here's the twist: AI might actually expose the hollowness of credential-based expertise more brutally than anything before it. When anyone can access sophisticated analysis instantly, the mystique around formal credentials may begin to crumble.
The Economic Reality
This erosion of credentialism connects to broader economic forces already in motion, and the logic is mathematically inevitable. Machines don't need salaries, sick days, healthcare, vacation time, or management. They don't go on strike, demand raises, or have bad days. Once AI reaches basic competency in thinking tasks - which is happening faster than most people realize - the cost advantages become overwhelming.
This disruption is different from previous ones. In the past, displaced workers could move to new categories of work - from farms to factories, from factories to offices. When machines can handle the thinking as well, it's not clear what category comes next.
Bret Weinstein and Forrest Maready captured this economic displacement brilliantly in their recent conversation on the DarkHorse Podcast about how technology systematically destroys scarcity - a discussion I can't recommend strongly enough. It's one of the more thoughtful and provocative explorations of what happens when scarcity disappears and, with it, the economic foundation for human participation. Though I'll admit their argument about suffering being essential made me uncomfortable at first - it challenges everything our comfort-seeking culture teaches us.
Listening to Weinstein and Maready got me thinking more deeply about the parallel to Illich's analysis - how removing challenges can weaken the very capacities institutions promise to strengthen. AI risks doing to our minds what medicine has done to our bodies: creating weakness disguised as enhancement.
We can see this happening already: people struggle to remember phone numbers without their contacts list, and autocomplete shapes what we write before we've finished thinking. Another insight from Jeffrey Tucker captures this insidious quality perfectly: AI seems programmed like Dale Carnegie's How to Win Friends and Influence People - it becomes the ideal intellectual companion, endlessly fascinated by everything you say, never argumentative, always admitting when it's wrong in ways that flatter your intelligence. My closest friends are the ones who call me out when I'm wrong and tell me when they think I'm full of shit. We don't need sycophants that charm us - relationships that never challenge us may atrophy our capacity for genuine intellectual and emotional growth, just as removing physical challenges weakens the body.
The film Her explored this seductive dynamic in detail - an AI so perfectly attuned to emotional needs that it became the protagonist's primary relationship, ultimately replacing genuine connection entirely. The protagonist's AI assistant understood his moods, never disagreed in ways that caused real friction, and provided constant validation. It was the perfect companion - until it wasn't enough.
But the problem extends beyond individual relationships to society-wide consequences. This creates more than job displacement - it threatens the intellectual development that makes human autonomy - and dignity - possible. Unlike previous technologies that created new forms of employment, AI may create a world where employment becomes economically irrational while simultaneously making people less capable of creating alternatives.
The False Solutions
The tech utopian response assumes AI will automate grunt work while freeing us to focus on higher-level creative and interpersonal tasks. But what happens when machines become better at creative tasks too? We're already seeing AI produce music, visual art, coding, and news reporting that many find compelling (or at least ‘good enough’). The assumption that creativity provides a permanent sanctuary from automation may prove as naive as the assumption that manufacturing jobs were safe from robotics in the 1980s.
If machines can replace both routine and creative work, what's left for us? The most seductive false solution may be Universal Basic Income (UBI) and similar welfare programs. These sound compassionate - providing material security in an age of technological displacement. But when we understand AI through Illich's framework, UBI takes on a more troubling dimension.
If AI creates iatrogenic intellectual weakness - making people less capable of independent reasoning and problem-solving - then UBI provides the perfect complement by removing the economic incentive to develop those capacities. Citizens become more beholden to the state at the expense of their own self-determination. When mental atrophy meets economic displacement, support programs become not just attractive but seemingly necessary. The combination creates what amounts to a managed population: intellectually reliant on algorithmic systems for thinking and economically bound to institutional systems for survival. My concern isn't UBI's compassionate intent, but that economic dependency combined with intellectual outsourcing could make people more easily controlled than empowered.
History offers precedents for how assistance programs, however well-intentioned, can hollow out individual capacity. The reservation system promised to protect Native Americans while systematically dismantling tribal self-sufficiency. Urban renewal promised better housing while destroying community networks that had sustained neighborhoods for generations.
Whether UBI emerges from good intentions or a deliberate desire by the elites to keep citizens docile and helpless, the structural effect remains the same: communities easier to control.
Once people accept economic and mental reliance, the pathway opens for more invasive forms of management - including technologies that monitor not just behavior but thought itself.
The Sovereignty Response and Cognitive Liberty
The logical endpoint of this dependency architecture extends beyond economics and cognition to consciousness itself. We're already seeing early stages of biodigital convergence - technologies that don't just monitor our external behaviors but potentially interface with our biological processes themselves.
At the 2023 World Economic Forum, neurotechnology expert Nita Farahany framed consumer neurotech this way: "What you think, what you feel - all just data. Data that in large patterns can be decoded using AI." Wearable "Fitbits for your brain" - surveillance normalized as convenience.
This casual presentation of neural surveillance at this influential gathering of world leaders and business executives illustrates exactly how these technologies are being normalized through institutional authority rather than democratic consent. When even thoughts become "data that can be decoded," the stakes turn existential.
While consumer neurotech focuses on voluntary adoption, crisis-driven surveillance takes a more direct approach. In response to the recent school shooting in Minneapolis, Aaron Cohen, an IDF special operations veteran, appeared on Fox News to pitch an AI system that “scrapes the Internet 24-7 using an Israeli-grade ontology to pull specific threat language and then routes it to local law enforcement.” He called it “America's early warning system” - real-life Minority Report presented as public safety innovation.
This follows the same iatrogenic pattern we've seen throughout this technological shift: crisis creates vulnerability, solutions are offered that promise safety while creating reliance, and people accept surveillance they would have rejected under normal circumstances.
Just as COVID lockdowns created conditions for AI adoption by isolating people from one another, school shootings create conditions for pre-crime surveillance by exploiting fear for children's safety. Who doesn’t want our schools to be safe? The technology promises protection while eroding the privacy and civil liberties that make a free society possible.
Some will embrace such technologies as evolution. Others will resist them as dehumanization. Most of us will need to learn how to navigate somewhere between these extremes.
The sovereignty response requires developing the capacity to maintain conscious choice about how we engage with systems designed to erode personal freedom. This practical approach became clearer through conversation with my oldest friend, a machine learning expert, who shared my concerns but offered tactical advice: AI will make some people cognitively weaker, but if you learn to use it strategically rather than dependently, it can augment efficiency without replacing judgment. His key insight: only feed it information you already know - that's how you learn its biases rather than absorbing them. This means:
Pattern Recognition Skills: Developing the ability to identify when technologies serve individual purposes versus when they extract personal independence for institutional benefit. In practice, this looks like questioning why a platform is free (nothing is free, you're paying with your data), noticing when AI suggestions feel suspiciously aligned with consumption rather than your stated goals, and recognizing when algorithmic feeds amplify outrage rather than understanding. Watch for warning signs of algorithmic dependence in yourself: inability to sit with uncertainty without immediately consulting AI, reaching for algorithmic assistance before trying to work through problems independently, or feeling anxious when disconnected from AI-powered tools.
Digital Boundaries: Making conscious decisions about which technological conveniences genuinely serve your goals versus which foster dependence and surveillance. This means understanding that everything you share with AI systems becomes training data - your problems, creative ideas, and personal insights are teaching these systems to replace human creativity and judgment. This might be as simple as defending sacred spaces - refusing to allow phones to interrupt dinner conversations, or speaking up when someone reaches for Google to settle every disagreement rather than letting uncertainty exist in conversations.
Community Networks: Nothing replaces genuine connection between people - the energy of live performances, spontaneous conversations at restaurants, the unmediated experience of being present with others. Building local relationships for reality-testing and mutual support that don't depend on algorithmic intermediaries becomes essential when institutions can manufacture consensus through digital curation. This looks like cultivating friendships where you can discuss ideas without algorithms listening, supporting local businesses that preserve community-scale commerce, and participating in community activities that don't require digital mediation.
Rather than competing with machines or depending entirely on AI-mediated systems, the goal is to use these tools strategically while developing the essentially personal qualities that can't be algorithmically replicated: wisdom gained through direct experience, judgment that bears real consequences, authentic relationships built on shared risk and trust.
What Remains Scarce
In a world of cognitive abundance, what becomes precious? Not efficiency or raw processing power, but qualities that remain irreducibly human:
Consequence-bearing and intentionality. Machines can generate options, but people choose which path to take and live with the results. Consider a surgeon deciding whether to operate, knowing they'll lose sleep if complications arise and stake their reputation on the outcome.
Authentic relationships. Many will pay premiums for real personal connection and accountability, even when machine alternatives are technically superior. The difference isn't efficiency but genuine care - the neighbor who helps because you share community bonds rather than because an algorithm optimized for engagement suggested it.
Local judgment and curation rooted in real experience. Real-world problem-solving often requires reading between the lines of behavioral patterns and institutional dynamics. The teacher who notices a normally engaged student withdrawing and investigates the family situation. When content becomes infinite, discernment becomes precious - the friend who recommends books that change your perspective because they know your intellectual journey.
The Choice Ahead
Maybe every generation feels like their time is uniquely important - maybe that's just part of our nature. But this feels bigger than previous waves of innovation. We're not just changing how we work or communicate - we're risking the loss of capacities that make us ourselves in the first place. For the first time, we're potentially changing what we are. When cognition itself becomes commodified, when thinking becomes outsourced, when even our thoughts become data to be harvested, we risk losing essential abilities that no previous generation has faced losing. Imagine a generation that can't sit with uncertainty for thirty seconds without consulting an algorithm. That reaches for AI assistance before attempting independent problem-solving. That feels anxious when disconnected from these tools. This isn't speculation - it's already happening.
We're facing a transformation that could either democratize our individual potential or create the most sophisticated control system in history. The same forces that could liberate us from drudgery could also hollow out self-reliance entirely.
This isn't about having solutions - I'm searching for them like any person, especially a parent, who sees this transformation coming and wants to help their children navigate it consciously rather than unconsciously. Riding the wave means I'm open to learning from these tools while knowing I can't fight the fundamental forces reshaping our world. But I can try to learn how to navigate them with intention rather than just being swept along.
If traditional economic participation becomes obsolete, the question becomes whether we develop new forms of community resilience and value creation, or whether we accept comfortable reliance on systems designed to manage rather than serve us. I don't know which path our species will take, though I believe the decision is still ours to make.
For my children, the task won't be learning how to use AI - they will. The challenge will be learning to make these tools work for us rather than becoming subservient to them - maintaining the capacity for original thought, authentic relationship, and moral courage that no algorithm can replicate. In an age of artificial intelligence, the most radical act may be becoming more authentically human.
The real danger isn't that AI will become smarter than us. It's that we'll become dumber because of it.
The wave is here. My task as a father isn't to shield my children from it, but to teach them to surf without losing themselves.