The spastic yuppie zombie-hoods in our near future
A consequentialism / virtue ethics scissor question
I’ve argued before that we’re headed for a post-intelligence future, and that this should affect who you choose as your spouse and have kids with. That’s in my Optimal Descendant Strategy for a Post-Intelligence Future post here.
Broadly, we’re going to have PhD-smart, maximally conscientious personal assistants in our ears. And this is like "next year," to be clear, and they'll be on a steeply improving trajectory. This is ultimately going to counterfeit intelligence.
If intelligence is counterfeited, conscientiousness and discipline matter more for overall success, because the people who will do best on complex multi-polar goals in the future like “I want an interesting career, and a great spouse, and good habits that leave me energetic and engaged with my day-to-day life” will be highly conscientious people who strictly follow the advice that GPT-o7 gives them.
They’ll be abdicating their superego and executive function to an outside entity.
They have been “whisper-earringed,” per Scott Alexander’s story here, in other words. In the story, there’s a magical artifact, an earring, that speaks into your ear. At any and every decision point, you can choose to listen to the earring, and it will give you an answer better than your own.
The end result of Scott’s story has people giving up even motor control to the earring, and the modern correlate of that in terms of AI assistants would be spastic yuppie zombies - “yombies” - enslaved to the altar of their own goals, having lost all identity, selfhood, and autonomy in the pursuit of accomplishing more.
This is unironically our future - when we have superintelligent AI assistants in all of our ears, we will all be facing this future and this choice, both for ourselves and our children.
From here on in, I’m just going to assume this is uncontroversially true - that PhD-smart AI virtual assistants can do better than enough people that we have to actually think about this.
Whether to “earring” or not is a fun consequentialism / virtue ethics scissor point
The virtue ethicist will maintain that cræft, doing things for the sake of doing them, and doing hard things *because* they are hard, is an end in itself, because of the impacts on the character and self that they forge.
The consequentialists, on the other hand, will argue: “lol, that sounds hard, let’s just yombie.”
I’m on the consequentialist's side - certainly it’s too hard for most people, by demonstration - allow me to make the argument.
Essentially everyone sucks
Oh, not you and me, friends! We’re shining beacons of intelligence and virtue and virtus and conscientiousness and puppies and rainbows!
I’m talking about everyone else. You know the ones.
What’s the median diet? 60-80% junk and fast food, with attendant ~75% adult overweight and obesity rates (as of 2025).
What’s the median education? Barely finishing HS.
What’s the median way people spend their lives outside of work? The median is 7-9 hours a day of recreational screen time (whether smartphone or TV) - and since that’s the median, half of everyone is above it.
What’s the median amount of physical activity anyone does? Essentially zero - literally a fifth of what our hunter-gatherer ancestors got - and it leads to all the diseases of civilization.
What’s the median relationship quality? It’s just barely there at all these days, and when it IS there, the median relationship is net miserable for both parties and then ends in divorce (or continued net misery for both).1
What’s the median amount of books read annually? Zero. And for the ones that do read any, it’s literally 90% potboilers like Danielle Steel, Stephen King, or whoever is currently cranking that stuff out these days (because those references are decades old by this point, apologies).
I could go on, but you get the picture. The median person sucks on basically every front, and they keep putting all THOSE politicians into office - you know the ones! The ones you hate, that represent all that is wrong about both America and democracy!
It’s at least 2/3 of the American populace
Is it really hard to believe that following an “earring” might be a better way to run their lives than they’re currently doing?
After all, they would be being persuaded to make some incrementally better choices according to their own values, their own way of thinking, and regarding the things they care about most.
If they could simply listen to a smart voice that knows everything about them and which was definitely on their side and genuinely wishing for their success and trying to help them along those lines, would that be a BAD thing?
And just to be clear, this is more of a “persuasion” than a “knowledge” argument
Sure, some of it will be “knowledge.” After all, how much do even the smartest and best informed among us REALLY know about what is or isn’t a proper component of a good diet? Or how to actually optimally use 30 minutes in the gym today, and 60 minutes available 2 days from now?
But people generally know what’s good and bad for them. The spirit is willing but the flesh is weak - they knew they shouldn’t eat that whole pizza / half gallon of ice cream, and they did it anyways! They knew they should get to work on their thesis, but would rather have stabbed themselves in the eyeballs than do it, and you know what’s better than eyeball stabbing? Getting a drink with your friends! That’s just self care, really!
But a lot of the lift here is persuasion. These AI minds will know everything about you, they’ll know your thinking style, they’ll know what rhetorical techniques you prefer, they’ll be talking to you in the ways that most resonate with you and making connections, arguments, and analogies accordingly. Super persuasion, but at the personal level, and for your benefit - a super-ego that works, in other words.
And another thing - I’m not pretending the AI is going to win all the time here, either. Your super-ego doesn’t win all the time today, does it? All it really needs to do is win more often on the margin. Think of it winning only 10% more - 10% better decisions compounded over days, weeks, years, and decades is a CRAZY big effect size. It’s like getting a 10% financial return that compounds weekly!
If you’d made 10% better decisions all through high school, do you think you might have gotten into a better college? If you’d been 10% more motivated and making 10% better decisions throughout your career, would you have driven more impact and had a better career overall? Wouldn’t you be in a noticeably better place right now? How about relationships? If you’d been 10% more thoughtful and connected and better, continuously, every day, maybe you’d still be in a relationship you still regret losing. See how big an effect “10%” can be?
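To make the compounding intuition concrete, here’s a toy calculation. This is purely illustrative - the “10% better decisions” figure is this post’s hypothetical, and treating decision quality as a multiplicative weekly “return” is a loose analogy, not a real model:

```python
# Toy model: treat a small recurring edge in decision quality like a
# compounding financial return. Purely illustrative numbers.

def compounded_advantage(weekly_edge: float, years: int) -> float:
    """Multiplicative advantage after `years` of a compounding weekly edge."""
    weeks = 52 * years
    return (1 + weekly_edge) ** weeks

# Even a tiny 0.1% weekly edge compounds meaningfully over a decade:
print(compounded_advantage(0.001, 10))   # ~1.68x after 10 years

# A literal 10% weekly return would be absurd growth - which is the point
# of the hyperbole:
print(compounded_advantage(0.10, 1))     # ~142x in a single year
```

The exact numbers don’t matter; the shape does - small edges applied continuously dominate over long horizons.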
That’s the floor of the potential lift I’m talking about, and it applies everywhere. And people would love to have this. If they had better lives? A job that they actually enjoyed, that used their powers to the fullest along lines of excellence? If they had a better relationship than the median “net miserable and then divorced” outcome? If they had a strong friend group of people that really got them and supported them?
Aren’t I describing a strictly superior outcome on all fronts?
All they have to do is listen to their earring and be persuaded to dutifully execute whatever decisions or advice it recommends at higher rates than they currently do!
Those PhD-smart AIs in everybody’s ears are going to pretty much definitionally be capable of persuading people to make better decisions, and that persuasion ability should be very personalized given how much they’ll know about us and how we think and what we value.
And for the people that resist, they’ll see everyone else around them improving on every front, getting better jobs, having more friends, having better relationships and spouses, and loving and enjoying life more, and that itself will persuade many to start listening.
Imagine the possible outcomes, societally!
The obesity epidemic? Gone.
People being sedentary, chair-bound Wall-E people? 80% gone as the AIs persuade people to walk and do a lot more moderate, social physical activities like bowling and dog walking and such.
Diabetes, heart disease, and other lifestyle-driven illnesses? 80% gone.
Dead end jobs people hate? 80% gone.
Loneliness epidemics, higher suicide rates and deaths of despair? Mostly gone.
Sure, tradeoffs exist - but literal PhD-smart or superintelligent minds doing the allocation are SURE to do better than today. For whatever jobs still exist, garbagemen and plumbers will be people who actually enjoy the work at that level of pay. Nobody will have to be in a dead-end job waitressing or barista-ing or cashiering, because those jobs will be done by AI / robots / kiosks - except for the handful of high-end restaurants and coffee shops where a human serving you is the point, and those will be staffed by the people who love the job most and are really good at it.
And all the other stuff on the health and loneliness fronts is a strict net improvement possible today if people in the aggregate just made better decisions more often.
Isn’t this a slippery slope?
“This sounds like one of those Red Queen’s Race things like smartphones or capitalism where if you don’t do it yourself, or you don’t let your kids do it, you’ll both be cast out and left behind in some anti-social meaningless-life Gehenna without educations, jobs, or spouses. So EVERYONE will be more or less forced to adopt it, right?”
I mean, nobody is FORCING you to participate in capitalism - you can live off the land in the woods somewhere, or live really minimally, dumpster diving for food and taking odd jobs occasionally for the things you can’t urban forage. Dirtbag climbers and ski bums do this all the time, spend most of their time climbing or skiing, and really love their lives. Off-the-grid people routinely actually live off the grid, and such “homesteading” is its own movement. More realistically, you can do whatever the ~103M “not in the workforce” people of working age are doing.
Nobody is FORCING you to have a smartphone - lots of people don’t.
But will most people be persuaded, either by social or economic or status pressure, to adopt it and earring themselves once it’s demonstrably good enough? Yeah, probably.
I genuinely think this is a good thing
Why would making better decisions and having a better job, life, spouse, and social circle be a BAD thing for most people? Seriously, think of the alternative. Because on their own, they’d be a lonely, obese, sedentary person in a job and marriage they hate.
That’s literally the alternative for the majority of people.
“But not me, I’m smart and cool and like my life!”
Okay, great! So, don’t do it? You can opt out, just like people who don’t have a smartphone?
“But what if THOSE motherfuckers - you know the ones - my neighbors / coworkers / siblings / hated rivals do it and get a leg up on me??”
Um, I don’t know? Either start listening to your earring, or make better decisions on your own. Your decisions are either good enough for a life you enjoy, or they’re not - and most “relative status games” don’t actually matter for objective standards of living and happiness anyway. The economic growth from most people being healthier, physically and mentally, and in better-fitting jobs will mean even people who opt out will be doing fine. Poor people in the US have so much food they’re obese, they have the latest smartphones, and they spend more than the median amount of time on them. A material life that marquises and countesses of days past would have killed for - they have it.
“Okay fine, you’re pretending this is some free-will-having libertarian paradise for adults - but what about my kids, smart guy? Won’t they be forced to ‘earring’ too, to get into Harvard / a decent school / job?”
Won’t we be forced to let our kids do it, too? Just like smartphones?
Sure, probably. I mean, you can opt out on their behalf, just like smartphones, capitalism, or public education. I’m opting my own out of at least 2 of those things - it’s not like it’s impossible.
But are you really prepared to see them fall behind all their peers? To do worse, to be fatter, to be lonelier, to waste more of their lives on screens? And you KNOW that the earring / "strictly following GPT-o7 advice" will objectively give them better lives and more happiness, and even give you / them more descendants if wanted.
But what about their characters? I object!
Aren’t things like "successful career" and "happy marriage" and "good habits" a result of a character honed over decades of “good” and “harder” decisions? Isn’t a parent’s job to guide their children to making those better decisions more often? People being virtuous isn’t something you can just turn on like a switch, it has to be effortfully carved from the marble of their souls over decades!
I totally agree, in the present environment where people have to "tend their own garden" and instill their own habits.
I think once in-your-ear AI assistants can instill habits via operant conditioning and other methods, the question turns consequentialist - all that actually matters is that the habits exist and the good outcomes happen because of them. After all, 10 years of doing things a certain way is still 10 years of doing things a certain way.
"But installing it via hard effort in yourself and your kids is more resilient, because if / when the AI goes down or goes bad or gets sick of humans and ascends to Digital Heaven or whatever, the self-installed people will still flourish!"
Maybe - I do think years of doing things a certain way and making decisions for good reasons rubs off. But let’s grant it as true, for the argument. Even then, this is just a minor epicycle - so instead of operant conditioning, they guide your "interior decision making" to deciding to do the habits, and leave legible memetic breadcrumbs back to reasons you accept and believe are true. That's like 10% harder if the AI assistants are good at debate and understanding your mind, which they would pretty much definitionally be. And then enough time making decisions like that, and you’re going to follow similar breadcrumbs in the future, whether or not the AI is there.
“Philosophically what does it even mean to "know" what will make a person happy? What do we mean by ‘happy’ here?”
This just feels like an attempt to take this to some semantic swamp, but if you step back and look at a bigger picture, it shouldn’t be controversial.
Do you "know" what will make your kids, pets, or other loved ones happier in most choices / situations? Usually, right? Because you've sampled and thought about enough of the world, you understand their personalities and proclivities, you have an outside view that they don't, and probably other reasons.
All "knowing" this really means is "having an accurate perception and appraisal of their innate characteristics and drives and current resources" and "being able to predict how they'd respond to a given choice given those innate things."
Looking at how well most people run their lives, it's NOT hard to do better than a lot of people do on those two fronts, in the aggregate, unrolled over time. After all, think of all the personal life trainwrecks you know in real life - they make really detrimental, plainly obvious, self-owning moves, and often know this and admit it. If they could have been personally persuaded in the ways that speak to them most NOT to make those decisions, it’s definitely something a lot of them would have actively chosen and loved to have as an option.
I’m pointing out that this is a strict Pareto improvement, both at the individual and the societal level.
I'm just positing the “knowing” above, writ slightly larger and coming from AIs. Sure, maybe it's not going to be true for YOU, or your kids! You're all smart and capable and conscientious, and a shining light reflecting all that is good about the world and human capability and flourishing! You’ll surely make all the right decisions without them.
But you know MOST people - you’ve seen people. We're just arguing price here, not fundamentals.
Of course, the real solution here, if people don’t want this to happen, is legislation and age limits for whisper earrings. Honestly, we should probably be doing this already for most popular apps on smartphones. But if you don’t believe that, or think both should be a personal choice, my argument stands.
What’s the conclusion here?
If you’re a consequentialist, whisper earring yourself and your kids and all your loved ones, because it should demonstrably lead to better lives and more happiness for all of you.
If you’re a virtue ethicist, don’t do this - it will ruin all that is holy and just, and probably turn them into hollow-eyed, soulless yombies who can’t even enjoy music, puppies, or rainbows.
Also, a deep and personal thank you to those same virtue ethicists for volunteering their kids’ slots at Harvard so mine can take them - that’s a tough sacrifice to make these days, and you should be heartily applauded for your superior virtue, ethics, and general excellence.
Enjoy your fatter, lonelier, less educated, less successful (but very virtuous!) kids.
42% vintage divorce rates, and at least half again that in net-unhappy marriages, puts us at a ~66% “relationship failure” rate. That’s before counting all the relationships people go through before marriage, which obviously carry a 100% failure weight. It’s pretty obvious that any reasonable weighted average of relationship success comes out really low.
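The footnote’s back-of-envelope arithmetic can be sketched directly (the 42% and “half again that” figures are the post’s own estimates, not measured data):

```python
# The footnote's estimate: divorces, plus intact-but-net-unhappy
# marriages, as a share of all marriages.
divorce_rate = 0.42                 # stated "vintage" (cohort) divorce rate
net_unhappy = 0.5 * divorce_rate    # "at least half again that" = 0.21
failure_rate = divorce_rate + net_unhappy

print(round(failure_rate, 2))       # 0.63, i.e. roughly two-thirds
```

Adding pre-marriage relationships, all weighted as failures, only pushes the blended rate higher - which is the footnote’s point.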
On Operant Conditioning:
I used to have a nasty habit of sleeping in. No matter how hard I tried - going to sleep at a consistent time, melatonin, a facemask, etc. - I would snooze my alarm, and unless I had something important to get to that morning, I would sleep in. My mean sleep duration without an alarm seemed to be ~9:15 hours, which isn't great if you want to wake up early and not have to go to sleep at 9PM.
This was solved in less than a month with operant conditioning. I bought a shock-watch (https://shop.pavlok.com/) that wakes you up with an electric shock. My morning brain pretty quickly adapted: I started waking up early but taking the watch off in my half-dazed state. I zip-tied it to my wrist to prevent this, and after <10 days of waking up to an electric shock, I now permanently wake up within ~10 mins of my desired time, without an alarm.
I'm a pretty willful person, so it was somewhat humbling to realize that the subconscious can have such control over how we live our lives. It really makes me sympathize with people who suffer from chronic obesity and other maladies that can quite obviously be solved by just "doing the right thing that doesn't even require much effort." We're not just our conscious minds, and our subconscious processes are pursuing completely different goals.
Anyway, this is relevant because I thought of doing the same Pavlovian conditioning for other bad habits in my life, but I couldn't figure out a good way to trigger it without some third-party observer. I considered hiring a friend or personal assistant to shadow me for a few weeks and watch for certain behaviors I was aiming to improve, but it was turning into more work than it was worth and I wasn't getting any positive feedback.
If you do it yourself, you'll only do it when you consciously remember - at which point you could just stop the bad habit directly, if you had that sort of self-awareness. Giving an AI assistant a little camera on your lapel, instructed to deliver an uncomfortable electric shock whenever it catches you doing something you want to stop, would probably solve most problems in life.
Pull into a McDonalds or open Doordash? Shock. Ignore your regularly scheduled exercise to do something pointless? Shock. Scrolling TikTok? Shock. Etc, etc. It doesn't have to be a shock either (that's just the only high-impact intervention I've found that works), as I can imagine some electronically controlled reward mechanisms that would act on the brain in the same way.
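The loop being described is simple to sketch. This is entirely hypothetical: `classify_activity` and `deliver_shock` below are stand-ins for a vision model watching the lapel camera and a Pavlok-style wearable trigger - neither is a real API:

```python
# Hypothetical sketch of an AI-observer conditioning loop.

def classify_activity(frame) -> str:
    """Stand-in for a vision model labeling what the wearer is doing.
    In this sketch, 'frames' are just pre-labeled strings."""
    return frame

def deliver_shock() -> str:
    """Stand-in for triggering a Pavlok-style aversive stimulus."""
    return "zap"

# Behaviors the wearer has decided, in a moment of strength, to punish.
BAD_HABITS = {"ordering_fast_food", "scrolling_tiktok", "skipping_workout"}

def monitor(frames):
    """Deliver a shock on every frame classified as a targeted bad habit."""
    events = []
    for frame in frames:
        if classify_activity(frame) in BAD_HABITS:
            events.append(deliver_shock())
    return events

# Example day: two lapses, two shocks.
print(monitor(["working", "scrolling_tiktok", "walking", "ordering_fast_food"]))
# ['zap', 'zap']
```

The interesting part isn’t the code - it’s that the punishment set is chosen once, consciously, and then enforced automatically, which is exactly the “one-time executive decision” framing below.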
Essentially, we'd no longer have to battle subconscious processes with executive function - just a one-time executive decision, made in a moment of strength, to set an algorithm that defeats the subconscious desires that are strongest in our moments of weakness. Bryan Johnson talks about this when it comes to health, but his tangible offer to people (the Blueprint Diet) sucks and he's kind of a wacko.
I've discussed this with a few people, and the normal response is negative. I guess it seems like you're defeating your desires and turning yourself into a robot. Not all desires are consciously desirable in my opinion, and should be valued depending on how they contribute to a holistically happy life. I can understand the repulsion though, as by default the normalization of this sort of thing would probably be used for reinforcing some economic end (I could see South Korean parents shocking their kids when they aren't studying to get a leg up on admissions, and it might even work), rather than human flourishing.
Edit:
Now I'm imagining a future where humans are basically just glorified Pokemon. Successfully aligned AI end up prioritizing human flourishing to the maximum extent, and individual AI assigned to humans derive their sense of value in how much they help their owner to succeed. We don't end up fighting it out for the benefit of our AI assistants of course, but there's a higher level competition going on between which assistant can direct its human to have the most fulfilling life.
>people who will do the best on complex multi-polar goals in the future like “I want an interesting career, and a great spouse, and to have good habits that leave me energetic and engaged with my day to day life” will be highly conscientious people who will strictly follow the advice that GPT-o7 gives to them.
This is a good intuition of our weird AGI future that squares with it being closer on the horizon. Probably the first one I've read in a while that actually threads that needle. Usually it's either gray goo or spacefaring in under a decade.
Like, you could get some alpha in doing this now, if you understand when to trust it. No question if the models continue to improve at all, the amount of discernment needed to trust it will go down.