The "ten thousand PhDs" and what it tells us about AI risk and the next trillion-dollar companies
This SWOT analysis ain't looking too good
The Data Science revolution
This is an idea that’s been coalescing and taking shape in my mind for a while, largely because, in a former life, my friends and I led teams like this across multiple companies.
Essentially, around 2010 computers hit a point where we could start crunching Big Data and deploying algorithms that used that data to build better-than-ever models and optimizations. It was the Data Science revolution, and it significantly changed how big companies marketed things, segmented and understood their customers, did analytics, tracked business outcomes and KPIs, and created user interfaces.
It had precursors, of course. Computers were pretty good before then, and certain companies had been generating and storing Big Data well before 2010, but this is roughly when things took off collectively: the more powerful skills and algorithms were honed and deployed at scale, and every company that was big enough began doing it.
Basically, for about a decade, you could pull together a team of PhDs (or people unambiguously smart enough to get a STEM PhD), point them at some data, give them a business outcome or goal, and they could lift things by absurdly massive amounts. Bump conversion by 20-30%, drop costs by 20-50% by targeting things or using resources more intelligently, really dial in which factors actually mattered most for driving various outcomes via modeling, segment customers in much more predictive ways, and so on. It was an arms race of sorts: business is a competitive landscape, and those deltas are too big to ignore.
One outcome of all this is the fundamentally adversarial dynamics we see more and more of in our digital and physical lives now.
The ten thousand PhDs
Basically, every big company assembled ten thousand PhDs (more for FB and Google, fewer for many other large companies or financial firms), and then deployed that coordinated brainpower toward modeling and optimizing various business outcomes.
Another, more telling, way to look at this: ten thousand PhDs in every major company have, for years now, been coordinating and arbitraging against much dumber people by creating and using vastly imbalanced world-models and optimization power.
They're creating and deploying incredibly fine-grained, insightful models of consumer behavior and motivation, models that reach deep into our collective biology and neurology to identify, grasp, and yank on whatever hooks exist for altering people’s behavior at scale. They’re discovering and creating biologically grounded addictive superstimuli, in other words: Skinner boxes writ large.
This is relevant for multiple reasons, which we’ll get into, but I think it also gives us some insight into AI risk, because it’s essentially a preview of what powerful AGI (Artificial General Intelligence) is going to be like. The essential problem is that much better modeling and optimization power is enough to drive many outcomes when deployed against average people, although corporations mainly turn it into money pumps.
There are many examples of this dynamic today:
Every major app is adversarial now, exerting enormous brainpower and bandwidth on increasing engagement and mining your eyeball-time. Phone screen time has gone from about 2 hours a day in 2014 to 4.5 today (7-9 for Zennials), and this powers most trillion-dollar companies now.
Those PhDs have poured their efforts into creating ultra-processed food, which leads people to eat about 500 more calories per day[1] thanks to engineered tastiness and "moreishness," and which can basically entirely explain the obesity epidemic.
Sports gambling has gone from “you can bet on the game with the bookie down the street in that one bar” to mass-advertised apps that literally ban anyone who knows what they’re doing, then milk the remaining rubes into bankruptcy.
Netflix and other streaming services are so successful they literally say their competition is “sleep” and “every other way that people spend free time.”
Finance absorbs the cream of our generation's minds even though "liquidity" and "capital allocation" are fundamentally solved, and the bulk of its effort goes into milking dumber participants in zero-sum games (individual investors, 401k's, pension funds, dumber-than-hedge-fund institutional investors).
And this is bad for many reasons, not just the immediate costs and dynamics for average people. Do you think it’s GOOD that the finest minds in our generation are wasted in the “eyeball and synthetic financial derivative” mines? No! It’s the greatest tragedy of our generation!
Basically, we’re wasting the finest minds of our generation scrabbling for decimal places in zero-sum games.
Consider instead if this generation of talent, this elite tier of genius and capability, had been coordinated toward driving scientific research, inventing things, or founding and growing new companies, instead of coordinating against dumber people to farm them (very successfully) for eyeballs or alpha.
Wouldn’t that be a better world on multiple fronts?
The reason it didn’t happen is that there’s a lot of money in capturing eyeballs and alpha, and this isn’t necessarily going to change, because all the current incentive structures still point that way.
Sure, hopefully we’ll get some nice technological advances from stronger AI - better computers, better medicines and health care, better teaching and learning.
But we’re almost certainly going to get more “ten thousand PhDs” dynamics, too.
AI is going to be pushing this dynamic further than ever
The reason I worry about this dynamic is that AI is going to make things worse along these same lines. AI is mostly “much better modeling and optimization,” which is exactly the ten thousand PhDs dynamic.
And if all these trillion-dollar companies were built by doing exactly that better than ever, it sure argues there are going to be strong incentives and motivations to hit that button ever harder with stronger AI, too.
It also points at a number of upcoming trillion-dollar company opportunities. What are some plausible things AI might enable along these lines?
Sexbots and/or friendships that strictly dominate every category of "good" we get from other humans, without any compromises or friction.
Infinite Jest-style VR heavens that are procedurally generated (and so effectively infinite), and that key off your pupil dilation, heart rate, cheek flushing, and other tiny cues to remain maximally engaging and interesting.
AI can probably design food as tasty and addictive as cocaine, versus the current peak (cheesecake, maybe?), which is perhaps 20% of the way there.
AI assistants that people eventually can't function without, because they make interfacing and coordinating with the world of bureaucracy, phone calls, emails, schedules, and reservations so much easier and more frictionless that they're like a 20-point IQ bump and a thousand-point "friction reduction."
Imagine any time you've done a literal double-take when hearing or seeing something, as in you were more-or-less compelled to suddenly direct your attention to it. Now imagine every company having the ability to tailor any stimulus to that level - advertisements so riveting you're double-take-compelled to watch them, shows that keep that dynamic running for 20-60 minutes at a time, news teasers with that level of surprisal and stickiness, and so on. Your attention will be jerked back and forth with neck-hurting force repeatedly, all day, until memetic / sensorium shields are good enough to prevent it (probably AI assistants curating your sensorium or media stream).
Pretty much every one of those is a trillion-dollar company at the end state.
I really think future minds will look back on this time as "so wait, everyone smart stopped having babies, they decided to create and YOLO much-smarter-than-them unaligned artificial minds, AND they set all the smart people and AI minds onto coordinating together to milk rubes instead of actually solving any of their problems?? Well there's your problem right there!"
What’s the solution?
Well, what’s the solution in the current “ten thousand PhDs” areas of life?
Top decile self-discipline, conscientiousness, and phone / media behaviors?
Whoops. Ouch. Doesn’t sound scalable or broadly applicable.
I think what’s probably going to happen is that these same AI assistants will eventually fully curate our media streams and/or sensoriums.
The biggest thing that worries me about this "AI assistants filtering your memetics" idea is the current "users are the product, and this lets us offer expensive + useful services for free" dynamic that powers most FAANGs and internet companies. It means there will be huge financial incentives for AI-assistant companies to auction people off en masse to high bidders, perpetuating and compounding a lot of the adversarial dynamics we’ve identified.
People with money will be able to just pay monthly for actual filtering and preserve their attention and sanity, of course.
That’s going to drive more caste-like divergence
We’re basically already separating into castes in the developed world.
Assortative mating is stronger than ever, and is particularly selecting for things like educational attainment and conscientiousness / discipline in recent generations.
Personal AI assistant media filtering is going to make our differentiation into castes all the stronger and more prevalent.
Already, the white-collar / Professional Managerial Class (PMC) caste alone gets into all the good schools, gets all the good jobs, owns all the real estate in any city worth living in, and only partners, marries, and has children with other PMC folk.
What’s going to happen when non-PMC kids grow up with media and memetic environments as adversarial as the ones that exist TODAY, much less the ones likely in the near future?
Jonathan Haidt has a great Substack about the rampant society-level problems that today’s dynamics are already causing; Zvi has written about this too, and if you could only read one post about it, read his. Now imagine those problems amplified significantly.
And then imagine that the top ~10-20% has none of those disadvantages, because they grew up in (very expensive) carefully curated media and memetic environments specifically tuned for learning, human flourishing, and maximizing capability and discipline. How much would you pay for that, as a parent?
That’s going to be another one of the next trillion-dollar companies.
And this is the near future I’m talking about, to be clear: I think you could start building that AI-assistant company *today,* and we’re going to get even more adversarial environments within the next 5 years or so, if the current ones aren’t already adversarial enough to be worth paying to avoid.
Better start saving, or honing your media diet skills and praxis today.
[1] KD Hall et al., "Ultra-Processed Diets Cause Excess Calorie Intake and Weight Gain: An Inpatient Randomized Controlled Trial of Ad Libitum Food Intake," Cell Metabolism (2019)
Great post, sits at the intersection of 3-4 of my personal hobby horses. As a tech-savvy early-20s grad student with >1%ile ambitions but <10%ile conscientiousness, I've struggled quite a bit to find ways to force myself to stop wireheading on screens and get work done. The only thing I've found any success with is 100%-impossible-to-bypass self-binding. My phone addiction isn't a problem so long as I set the screen time settings to take away my browser and ability to download new apps, then have my girlfriend set the password. Unfortunately, I've found a way to break and bypass every blocker available for my laptop and desktop, so my distraction continues unabated on the bigger screen.
Short of getting rid of the screens at home and opting to do work only in libraries, it seems like an LLM assistant with the ability to carry out Odyssean controls on my screen time according to its evaluation of pre-set criteria for acceptable levels and definitions of productivity would save me. I hope someone is working on such a defense-first product, or else maybe I'll build it if I...ever get around to it.
Maybe one equilibrium resulting from the world you describe is an elite 1-3% with the genetically endowed self control to resist the super-stimuli, another fairly elite 5-10% who lack the control on their own but possess the ambition/intelligence/wherewithal to outsource their self control to the machines, and everyone else living out Wall-E world.
Here's a positive angle: the prevalence of cheap LLMs will actually balance the playing field! Here's what I mean: right now, only megacorps can afford all of these thousands of PhDs. The only way to win that game is simply not to play. Nobody is really "smart enough" to avoid being manipulated by Facebook and Nestle's junk food (actually I think being traditionally smart makes you more vulnerable); educated people just avoid it entirely, but of course that takes some degree of self-control and knowledge (and in some cases, nontrivial amounts of money).
In a few years, though, we might all have ten thousand PhDs in our pocket. Sure, the megacorps will have them too, and theirs will be better, but it'll probably be like a 5:1 gradient vs the 1,000:1 it is now. That seems pretty great to me! You could just ask your AI agent "buy me ingredients from Instacart that allow me to cook seven meals this week; prioritize time over health and health over flavor" and completely beat all of the manipulation foisted on you by the designs of grocery stores and the design of the Instacart app. That's just orders of magnitude better than the process that most people follow today, which is going to the grocery store planning to buy tomatoes and ending up with some ultraprocessed Nestle crap or getting sucked into buying Twinkies based on some ad in their food ordering app.
Search is a great example. Until quite recently, Google was really the only game in town; "friendly" alternatives like DuckDuckGo just weren't very good. You _could_ opt out of a lot of the manipulative chicanery that Google foisted on you, but you had to know enough to install various browser add-ons, network-level blockers, etc. But now, you can just pay $10/month for Kagi, which gives you better results and no attempts at behavioral manipulation. For just $120/year! Sure, _some_ people can't afford that, but not many. And for a mere $25/month, you get to pipe its results through the best frontier LLMs to get even more utility! And it's only Year 3 of the LLM Era!
You'll be able to tell an agent "review my Facebook messages and summarize me the ones that I've historically cared about, but ignore all ads, ignore all posts designed to elicit outrage, ignore all posts about politics."
One last point: all we really need to do is get productivity to the point at which food, energy, and shelter are essentially free. Once we do that, we've got the Star Trek economy, where 90% of the population can basically not work and have a perfectly decent life, but there are still things (leadership positions, Veblen goods, status goods, etc.) left for the remaining 10% to strive for, and that's enough to keep productivity marching forward.
Now, of course, there's the "violent young men" problem, which becomes a catastrophe for societies (lots of other great posts here about that). But I think we might be able to solve that problem with videogames. Seriously. Tyler Cowen wants to build cheap shanty housing in New Mexico with broadband for these people. Peter Suderman wrote a really compelling article about it all the way back in 2017 (https://reason.com/2017/06/13/young-men-are-playing-video-ga/). It's a bit of a nonchemical version of Wirehead City. I think there's really something to it!
Now this could all go horribly wrong, of course, in all the (totally plausible and perhaps even likely) ways described above. But there really is room for optimism!