Well, you didn’t leave much meat on that DEI bone! You might have lingered a moment longer on the sheer religious-flavored hatred given off by so many of them, a hatred that goes well beyond just white men and ultimately encompasses reality itself, which must be denied. Although Allan Bloom was writing for an older generation, I feel these words hold up well:
It is certain that feminism has brought with it an unrelenting process of consciousness-raising and -changing that begins in what is probably a permanent human inclination and is surely a modern one—the longing for the unlimited, the unconstrained. It ends, as do many modern movements that seek abstract justice, in forgetting nature and using force to refashion human beings to secure that justice.
Perhaps the one avenue you didn’t explore is the relation of DEI to power. How they love to wring an apology out of their victims and then excitedly chatter amongst themselves to see if it met everyone’s standards.
> Perhaps the one avenue you didn’t explore is the relation of DEI to power.
Yeah, I just find this less interesting overall - naturally, if one side is in power they're going to be overbearing a*holes to the ones underneath, whether it's forced kissing of the ring from the Left, or the "religious right" installing v-chips in all the TVs and promulgating "satanic panics" about rock music and Satan-worshipping daycare owners.
And just to add one last detail, the battle over pronouns represented an apotheosis of sorts in the drive for power. Not only am I the master of grammar, the onus is on you to ask me, then bend the knee. Or else. I’m making you an unwilling participant in my delirium. What’s that you say? You’d also like to have a say? Sorry, mate, we’re playing heads I win, tails you lose. And now I expect an apology. Make it good.
The future is yellow.
(The color associated with the earth, the center, prosperity and fortune and the emperor in Chinese culture, of course.)
The West was ahead of East Asia from roughly 1750 to 2025. Before that, the Chinese had a successful empire while the West was mired in its dark ages. Is it so surprising the pendulum would swing back?
There are definitely a lot of plausible paths for that to turn out true, but I think it's actually too early to call.
On the pro-Chinese side:
1. They can centralize all GPUs in the country much better and faster than we can
2. They can easily commission / centralize 10x the power that we can, MUCH faster than we could build it, *and* they can build comparable additional capacity roughly 2-3 times as fast as we can
3. They've got some really sharp folk in China overall, and if they did a big centralized effort, it would basically be a Chinese Manhattan Project casting call
But on the Pro-US side:
1. We have literally 75% of the frontier AI-capable chips now, and theoretically will keep that share in the future, unless somebody high up does something really stupid (very much a possibility)*
2. We have the best and most cracked AI talent in the entire world, and every single one of the frontier AI companies is American, and they are all materially ahead of China (yes, R1 was impressive - it's still nowhere near even the publicly released o3 or 2.5 Pro, and every frontier lab has a one-gen-smarter model internally already)
3. There are "also-rans" with big GPU / TPU reserves (Sutskever's SSI, Grok / X, others) that, if we got serious about prioritization, could be cannibalized so their data centers go to the merged Big 3 that would form under serious race conditions.
4. Even if Taiwan gets nuked, we have capable one-generation-behind plants opening in the US soon, and China has nothing comparable.**
Like overall, we're fighting with exponential advancement curves, right? Curves that also have the potential to go super-exponential if any of the big overhangs gets solved, and if self-improvement starts happening. And the time and chip advantages that the USA has are actually a really big deal in an exponential / super-exponential regime, and they're not necessarily going to go away without some major blunders on our part, AND some major advances / overhang solutions on China's part.
* I used to think that China could step up and use their industrial might to outproduce on local GPUs, because the Ascend 910 / 920s are basically 1/3 of an H200, so if they just produce 3x as many, it's like having that many H200s - but further research has led me to believe they won't be able to do this, particularly if they nuke Taiwan (basically all the packaging and high-speed memory happens in Taiwan and / or requires Korean memory).
** Also, they're not going to be able to progress on more advanced silicon internally either; they're currently 4 generations behind and struggling with yields and quality even at 7nm. They've made some experimental progress on a plasma EUV solution, but it's still really theoretical, and even if it were fully solved, it would only advance them 1-2 gens.
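To make that head-start compounding point concrete, here's a minimal toy model - every number in it (chip share, growth rate, months of lead) is an illustrative assumption I've made up, not a real estimate:

```python
# Toy model of a compounding AI-capability race.
# Every number here (chip share, growth rate, head start) is an illustrative
# assumption, not a real estimate.

def capability(base: float, monthly_growth: float, months: int) -> float:
    """Capability proxy after `months` of compounding improvement."""
    return base * monthly_growth ** months

USA_BASE, PRC_BASE = 0.75, 0.25   # assumed share of frontier-capable compute
GROWTH = 1.10                     # assumed 10% effective capability growth per month
HEAD_START = 6                    # assumed months of lead for the front-runner

for month in range(0, 37, 6):
    usa = capability(USA_BASE, GROWTH, month + HEAD_START)
    prc = capability(PRC_BASE, GROWTH, month)
    print(f"month {month:2d}: lead ratio = {usa / prc:.1f}x")
```

Under identical growth rates the lead ratio never closes; it only shrinks if the trailing side actually grows faster, which is why the "major advances / overhang solutions on China's part" clause is doing the real work.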
Not that this would affect your main point, but I'd say that blank-slatism has always been progressive. The problem is that most anyone who assumes it can't resist writing something on the slate - even Rousseau (Emile) and Skinner (Walden Two), and of course DEI advocates, who create their well-intentioned curricula in the mostly futile hope that one "training" or whatever could root out prejudice and discrimination.
> Not that this would affect your main point, but I'd say that blank-slatism has always been progressive
Yes, and actually this brings up an interesting point - one of the things that irritates me about progressivism is that it stops at an arbitrary threshold, beyond which things are verboten.
But at a high enough level of abstraction, we ARE blank slates. Not today, but with gengineering and better and more informed and nuanced societal / childhood intervention engineering.
If we "space raced" genetics, we actually COULD reach a point of "everyone is born with similar-enough innate potential, and then people's destinations and characters are a result purely of their choices and luck rather than whatever hand they were dealt at their conception." The blank slate could *actually* be true if we put in the effort. But progressives and conservatives alike hate this idea, even though it entirely addresses the problems both are concerned with - fairness and equity on the progressive side, and justice and desert on the conservative side.
Similarly, on societal / childhood interventions. The reason everything is 60-80% heritable is partly due to base biological truth, but also partly due to a lack of imagination and effort.
I'm as "genetics is all that matters" as they come, and even I think there are probably pretty high-effect marginal improvements that could be made on tendencies to violence, conscientiousness, and determination / effort in any given individual via outside interventions. I think we can move the "heritable" needle noticeably down on all of those if we just put actual effort into testing and learning what works at the parenting and societal-intervention levels. For an example, consider the "ten thousand PhDs" and how every company has a team of them reaching into deep-seated neurological hooks to jerk people's behaviors around, so everyone is fat and broke and stares at screens 7-9 hours a day. Imagine that level of effort and intervention, but towards the good.
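For anyone unfamiliar with the statistic, a quick sketch of why effective interventions would "move the heritable needle" - heritability is a population variance share, not a fixed biological constant (stated loosely below; interaction and covariance terms are omitted):

```latex
% Broad-sense heritability as a variance share (standard definition, loosely stated)
H^2 \;=\; \frac{\sigma^2_G}{\sigma^2_P} \;=\; \frac{\sigma^2_G}{\sigma^2_G + \sigma^2_E}
% If new interventions add real, varying environmental effects (a larger
% \sigma^2_E) while \sigma^2_G stays fixed, the measured H^2 falls,
% even though nobody's genome changed.
```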
But once again, this is completely verboten - everyone hates it and would find the very idea "immoral." The government / Google is controlling your children's minds!! Just imagine the nuclear meltdowns on both sides; even if it were actually a private company providing the service, it would be outlawed essentially immediately.
I do think we can actually achieve this second piece when everyone has a PhD-smart AI personal assistant, though, even though it's basically "OpenAI is controlling your minds!" The framing and context around it make it enough of a personal / parental choice that I think we could actually see adoption, especially since the real-world results should be pretty obvious and tangible.
Sadly, the one example I know of where progressives and liberals were actually happy to see interventions work on those "neurological hooks" for social good yielded no demonstrable successes. I'm referring to the "brain-based education" approaches that came into vogue a few decades ago. They weren't necessarily bad curricula, but they turned out to offer no special benefits, because their use of neuroscience was so superficial (and the neuroscientific data doesn't call for unique curricular innovations in the first place). In the end, brain-based education wasn't successful enough to worry anyone. Meanwhile the very kids exposed to these curricula may now be fat and addicted to screens, like you were saying, because educational methods will never reach our neurological hooks as well as the food scientists et cetera can do.
I don't think it's coincidental that what's most needed is hardest to achieve. A lot of brainpower may have gone into making Cheetos, but making a better person through K-12 curriculum is way harder, and there are a bunch of needless obstacles (including those you described).
> I don't think it's coincidental that what's most needed is hardest to achieve. A lot of brainpower may have gone into making Cheetos, but making a better person through K-12 curriculum is way harder, and there are a bunch of needless obstacles (including those you described).
Yes, I think this is why it will take the AI assistants to make progress.
What everyone wants is a short-term intervention that drives lasting change. Spend an hour a day in "conscientiousness class" and bump your conscientiousness percentile up 10 points, or go to a 2-week immersive environment and achieve similar results.
That's obviously a pipe dream - just look at how our *bad* end states are achieved. Obesity is built a mouthful at a time (an incremental 3 mouthfuls per day is enough to make people fat), and screen time was bumped up 5 minutes at a time until people went from 2 hours a day on smartphones in 2014 to 7-9 hours a day among Zennials now.
Similarly, for GOOD outcomes, you need constant, ongoing decisions at the margins. 1-3 better decisions a day add up to that 10-point bump in conscientiousness; a single marginal "ok, I will work out today" decision a couple of times a week materially improves your health and well-being; subscribing to a meal plan your AI assistant cooks up to hit the macros and calories that are ideal for your weight goals makes sure all of the little decisions happen for you; and so on.
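A back-of-envelope version of that margins arithmetic - the ~30 kcal per mouthful and the old ~3,500 kcal-per-pound rule are rough assumptions, and the compounding shape is the point, not the exact numbers:

```python
# Back-of-envelope: small daily margins compound into big end states.
# Assumed numbers: ~30 kcal per extra mouthful, ~3,500 kcal per pound of fat
# (a crude heuristic, not a metabolic model).

KCAL_PER_MOUTHFUL = 30
KCAL_PER_POUND = 3500

def pounds_gained(extra_mouthfuls_per_day: float, days: int) -> float:
    surplus_kcal = extra_mouthfuls_per_day * KCAL_PER_MOUTHFUL * days
    return surplus_kcal / KCAL_PER_POUND

print(pounds_gained(3, 365))    # ~9.4 lbs in a year from 3 extra mouthfuls a day
print(pounds_gained(-3, 365))   # the same margin in reverse takes it back off
```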
Personal AI assistants can be exactly that "angel on your shoulder," making persuasive arguments tuned specifically to the things and goals you care most about, using the rhetorical techniques that most resonate with you, and helping you make those better decisions on the margin.
Agree 100% about AI assistants, but will the people who need them the most use them enough? More convenient access to useful information tends to drive higher standards for convenience. I can already hear someone whining about having to actually look at their wearable every time a meaningful decision is needed, but this isn't the person who's already motivated to seek and work for good outcomes.
> but will the people who need them the most use them enough?
Oh, definitely not - but you know, I still think it helps a lot of people on the margins.
Some people are just lost - they're already obese, fill all their time with screens, etc. Or they're alcoholics / drug addicts, whatever. For whatever reasons, they'll refuse to change. There are some holes so deep, and personalities so enslaved to their own vices and deficits, that they're going to be lost no matter what.
You're never going to be able to reach everyone. But I believe the results will be tangible enough that they will materially help the massive middle - say those from the 20th to 80th percentiles.
Today, the 80th+ percentiles are already fine - they're the PMC or better: not obese, not wasting their lives on screens, with good careers, etc.
So say you're at the 50th percentile now - you're obese and spend 7-9 hours a day on TV and phones. Now if some other people in your social graph start using their whisper earrings and then lose weight, get promoted at work, and you see them playing with their kids in the front yard, all that is going to make you ask what they're doing. See that enough (just once would be enough for some people), and you'll start doing it, too.
Imagine the massive society-wide lift possible from an intervention like that. The only thing I can think of that would be comparable is splicing in the DEC2 "sleep less" SNP society-wide. Absolutely HUGE effect on human well-being, the economy, and much else.
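Just to gesture at the scale of that claim, a back-of-envelope under deliberately generous assumptions (US adults only, a round 2 extra waking hours per person, universal adoption - none of these are real estimates):

```python
# Back-of-envelope for the DEC2 "sleep less" thought experiment.
# Assumptions (not estimates): ~260M US adults, ~2 extra waking hours per day
# for short-sleep-variant carriers, universal adoption.

US_ADULTS = 260_000_000
EXTRA_WAKING_HOURS_PER_DAY = 2
HOURS_PER_PERSON_YEAR = 24 * 365

extra_hours_per_year = US_ADULTS * EXTRA_WAKING_HOURS_PER_DAY * 365
extra_person_years = extra_hours_per_year / HOURS_PER_PERSON_YEAR
print(f"~{extra_person_years / 1e6:.0f} million person-years of waking time per year")
```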
I've actually been slowly convincing myself, over my various posts and comments on this, that it's a big enough thing that it might be worth coming out of retirement for and building a company around.
When you do the IPO, I'll buy a few shares...
There's a sci-fi novel called Operation Terror that conceptualizes the ingroup/outgroup problem in a way I really like.
Essentially, a superweapon is developed that, if deployed, would lead to insane amounts of terror and suffering. The US government isn't able to figure out any countermeasures, so instead it manufactures an alien invasion using the superweapon, so that the world's out-group hatred is directed towards the (imagined) alien enemy rather than against our human ideological enemies.
The writing and plot themselves were alright, but I think the concept is what's interesting. So long as humans are around, we will direct forces of harm against the outgroup, and without an outgroup to unite against, we will find internal ideological enemies to demonize. If true, the grand project of expanding our circles of care (see the discussion on that heatmap meme: https://bigthink.com/strange-maps/moral-circles-heat-map/ ) is doomed to self-destruction.
My thinking is that rather than worrying about expanding our circles of care, we should be more worried about ensuring the inside/outside divide doesn't manifest itself in terrible competitions that lead to insane amounts of suffering. An imagined enemy is probably one way to do this, although that comes with some pretty obvious and major flaws, but my intuition is that there's got to be a way to do it.
Or maybe we engineer/breed out that tendency, so none of this matters. I hope not though! Getting rid of core bits about humanity because they are inconvenient would be a really sad way we neuter ourselves into irrelevance.
> An imagined enemy is probably one way to do this, although that comes with some pretty obvious and major flaws, but my intuition is that there's got to be a way to do it.
Yeah, I really struggle with this one, and I ended up thinking it's the worst of the options.
My primary rebuttal to this is 9/11, which was an archetypal "outside enemy appears and everyone unites against them" moment. It was ideal! Low negative impact overall, from a weak and laughable enemy, and high patriotism and unification resulting. What were the actual end results, though? Basically, Al Qaeda won, full stop, completely pwned us.
1. We installed the TSA (which has wasted 10 times as many American lifetimes as the 9/11 terrorists took, combined with 95%+ failure rates when red-teamed)
2. We enabled spying on our own citizens and everyone in the world in every channel in perpetuity (Patriot Act, NSA PRISM, Five Eyes, etc)
3. We embarked on the ~$8T, 20-year boondoggle that has been our nihilistically pointless and destructive flailings in Iraq and Afghanistan
So, not only do we have to hope for an ineffective external enemy that can't actually destroy us, but we ALSO have to hope we don't end in flagrant stupidity and tragedy and massive waste of money and lives for 20 years straight for literally zero benefit.
And look around today - all that waste, and how united are we today? Zero benefit, huge costs, they 100% won.
> Or maybe we engineer/breed out that tendency, so none of this matters. I hope not though! Getting rid of core bits about humanity because they are inconvenient would be a really sad way we neuter ourselves into irrelevance.
Honestly, as we see AI progressing, I'm becoming more and more a skeptic of gengineering as a way out of any problems. Not that I think it isn't a good idea - I'd still 100% pay healthy sums to get it done for either me or my kids - it's just way too slow to be relevant. The absolute fastest it can influence anything is probably at the 30-50 year time scale, and that's at "absurdly optimistic implementation and adoption rates."
We're going to be facing civilization-scale challenges and changes well before 30-50 years from now, so gengineering isn't really on the field as a playable card.
Yeah. It's the sort of thing that in real life would result in some 1984-style forever war for literally no benefit to anyone, at immense cost in human lives. The concept itself really appeals to me for some reason, though. Maybe it's the rapid unification of a group's aims towards a "worthy" goal? The "Manhattan Project of X" is thrown around whenever there's a new technology people want a lot of investment in, but there hasn't been a Manhattan Project-level effort in the US since Apollo as far as I can tell, and that's sad. Maybe AI qualifies, but we still haven't actually built the AGI that's the goal of this trillion-dollar project, so I'm still in wait-and-see mode.
> We're going to be facing civilization-scale challenges and changes well before 30-50 years from now, so gengineering isn't really on the field as a playable card.
I still don't buy into AI being the end of human history. I give it a double-digit percentage likelihood of happening in decades, but not especially high double digits. I take the same attitude as Scott Alexander, who thinks the chance is way higher than I do:
Q: How does the AI timeline affect your thinking on more kids?
A: I deliberately ignore it. If there's no apocalypse, I would be sad if I lived my life less than fully because I was worried about an apocalypse. And if there is an apocalypse, I think I would still be sad if I'd lived my life less than fully. My kids seem happy to be alive and I think they would accept the deal of a higher risk of dying at a young age in exchange for being able to exist at all.
I would feel differently if I thought not having kids would very significantly help avert the apocalypse, but I don't think that's true.
----
I think what the AI people and AI safety people do is noble, but it's not really something I can possibly influence, or properly estimate the risk of, so it's not especially a major concern of mine beyond abstract thinking. My mean estimate is that AI is/will be as disruptive to the world as the internet, but despite the internet providing a lot of good, and creating a lot of problems, many of the fundamental problems of life and society are basically the same as 30 years ago (or getting worse). I don't think we have to work on/think about a silver bullet for civilizational issues for these things to still be worthwhile.
I def agree on wanting the "Manhattan Project for X," because it would be great to show that we CAN coordinate and exert ourselves to that degree, and I honestly doubt that we can now. The push to AGI isn't really that, to my mind - there's no centralization or extraordinary sense of mission, just ordinary capital and economic incentives.
But I've wanted a Manhattan Project for gengineering, one against aging and death, one for becoming a spacefaring race, and much more, but we're pretty much dropping the ball on every front and turning inward to navel-gaze and use our powers in pointless internal conflict. Elon Musk is singlehandedly doing the spacefaring one, but who knows how that's going to go now that he's off his rocker.
> I still don't buy into AI being the end of human history. I give it a double-digit percentage likelihood of happening in decades, but not especially high double digits. I take the same attitude as Scott Alexander, who thinks the chance is way higher than I do:
100%, I'm in pretty much the same place.
But we are still definitely going to be facing civilizational-scale challenges in the non-extinction scenarios, too. Even if we had another AI winter.
Like it's not priced in at ALL yet - not in money, not in people's mental schemas, not in how people are thinking about the future, but even the current level of AI is going to fundamentally transform our societies.
And sure, you can say, look around you! Everything is Lindy! We've been sitting on chairs for at least ten thousand years, we've been wearing clothes for more than a million! We're social primates who hang out in small groups and jockey for status, and that's sure not going away!
Morgan Housel's Same as Ever (What never changes) is a fairly good book on this front.
So yeah. But I actually think that for us to still be living lives recognizable to our own today, 30-50 years from now, will require extraordinary effort and coordination.
You bring up the internet - we didn't use to stare at a small rectangle in our hands for 7-9 hours a day. We literally devote a third of our days and half of our waking hours to that now. Draw that line out just a little bit farther - you don't see major changes?
Similarly, we have a decent shot at going post-scarcity in that time - freed from the burden of providing for our own material welfare, how will we spend our time? Whatever that is, it's likely going to look pretty different from today, along with all the "paths to success / status" and what people largely aim for when considering a life well lived.
Today the primary status games are education and career - where did you / your kids get into school, what did you study? Then what do you do, where do you work, and how high-status or well-remunerated is that?
But in the future, all that is probably gone. AI teachers will be able to teach anyone who's actually interested in learning better than any post-graduate institution can. I'd bet "getting into Harvard" will still matter, but not for your career - for the social graphs and confreres around you.
> Elon Musk is singlehandedly doing the spacefaring one, but who knows how that's going to go now that he's off his rocker.
Unfortunately not great. It's hard to tell whether the recent slowdown/failures in Starship development were due to practical issues that will be overcome, or to an atrophying management culture. There have been some fairly prominent people leaving SpaceX who say the latter, which is sad.
> Morgan Housel's Same as Ever (What never changes) is a fairly good book on this front.
Interesting. I'm not much of one for self-help books, but I'll take your recommendation. I added it to my list.
> You bring up the internet - we didn't use to stare at a small rectangle in our hands for 7-9 hours a day. We literally devote a third of our days and half of our waking hours to that now. Draw that line out just a little bit farther - you don't see major changes?
I agree that this could be a major change, but I'm a lot less certain, and therefore a lot less motivated to be thinking about "What happens in such a world during/after these major changes." It's a status-quo bias/nothing-ever-happens mindset, but I do like the thinking of "What can be done now that will help me/the world be better later." Part of that is understanding the directions the world could go before they happen, like planning for what to do during the next financial crisis, but beyond that, I'm stuck contending with the world as it is.
The things that are relevant for our current society - education, consumption, influence, etc. - are already things I don't much care for, so the prospect of these things changing isn't especially daunting. Should education, social status, and personal wealth all be turned on their heads, I think I'd still come out equal to where I am now, or ahead - if not in material terms, then in my lived experience.
I suppose that's not super relevant for your post though. The concern isn't that a few odd people are already navigating things pretty well and will probably continue to do so despite changing social conditions, but that society might not navigate the changes well, and when the pillars of a culture crumble, that's liable to spill over into everyone's lives.
I realize the content is info-hazardous to most left-liberals and progressives, but a lot of the moral/political extensions of Christakis' gene-culture/network or group adaptation strategies are essentially Burkeanism (or best managed by that approach). It's the undercurrent or actual text of a third of Jonah Goldberg's monologues or essays. I think a lot of the messaging can be digested by the public, but an elite consensus on this would be a necessary precondition. That hasn't yet arrived.
> but a lot of the moral/political extensions of Christakis' gene-culture/network or group adaptation strategies are essentially Burkeanism
Yes, and you definitely see this in the public discourse, but to your "there's no elite consensus" on this point, isn't it because elites are all post-modern moral relativists, and the Burkeans are all trad religious folk or virtue ethicists?
Like, to follow Burkean paths, you need to unironically embrace religion (or Stoicism if you're a tech bro), and unironically embracing anything just isn't The Way for any pomo relativist, or for the PMC / most college-educated people.
It's certainly not a strong enough cultural Schelling point to guide societal grouping or actions. Still, we'll be under a lot more evolutionary pressure in an AI future, so in theory that's where Burkeans can really come into their own. And for the non-religious, they can adopt tech bro Neo-Victorianism or something, as happens in various Neal Stephenson novels.
I do think there is a contagious hyper self-consciousness, "anxiety of influence," and (post)ironic sensibility among many elites/elite aspirants (I'm sometimes afflicted) that precludes fully embracing any system of value or notion of the good. Hollow men, last men, men with no chest - whatever we want to call it.
However, I don't think this is a terminal condition. The right selection pressure (AI perhaps) may shock us out of this.
Also, I do think of myself as a Burkean to some degree. Or, at least, I have reverence for it.
> However, I don't think this is a terminal condition. The right selection pressure (AI perhaps) may shock us out of this.
Yeah, one thing I worry about on this front is that people don't have long enough planning horizons to really benefit here.
So in theory, I honestly think (and this is probably the Burkean in me) that in full self-reflective contemplation, most people would rather have made the choice to have kids / more kids and get grandkids / more grandkids when they're older.
In practice, many of them don't, or have fewer incremental kids than the ideal, because you need to sacrifice your time, money, and sanity today for some far-off benefit 30+ years from now to reach that state.
In theory, all the post-modern anomie people should be able to observe the world and see "hey, you know, these trad / Burkean people seem to live better and more fulfilled lives, and they're MUCH happier in their sixties and beyond, maybe I should emulate some of what they do."
But in reality, we're all in hyper-filtered bubbles and mostly only think of a week or month or year from now, instead of our overall life path and goals. So all the pomo anomie-ists only see other childless (or 1-child) PMC people, whose only scorecards are "size of house" and "prestige of career" and "which degree you / your kids are getting," and who don't really think about meaning or what life will look like as a grandparent.
Largely, Burkeans are so because they have other Burkeans in their social graph, and are able to look up and see them at different stages of life, and gauge their happiness and telos to guide their own life actions. Interestingly, Burkeans typically have both pomo anomie-ists AND Burkeans in their social graphs, and can compare and contrast; most pomo people don't have that - Burkeans are outside their social graphs entirely and are comfortably outgroup: weird religious people who go to church and shop at Walmart, or whatever.
I think AI is probably just going to increase those filter bubbles, more or less. Sure, I think AI assistants will help people achieve more goals and more ambitious things, but the telos of those things will still originate within themselves and be informed largely by their bubbles and circles, and I don't really see that changing.
Fantastic essay, well done. Can I have my VR heaven now though? Please?
I wouldn't be surprised if it's being worked on already. The big lift was a scalable "mind in the loop" in terms of observing your visible physical correlates like subtle flushing, pupillary dilation, etc., and now we actually have that with multimodal LLMs.
AR and VR glasses have been a thing for a while, so the hardware is basically there (maybe coupled with something like an Oura ring for cardiac variables and breathing rate).
With that package of inputs, a library of relatively short potential paths / choices and outcomes, and just regular gradient descent, you could do a pretty good job today.
It's absolutely something you could start working on today and probably get a pretty impressive package out of, and then, as AI capabilities advance further, integrate the frontier of capabilities as it comes in.
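To be concrete about what that package might look like, here's a minimal sketch of the loop - every device feed, function, and nudge below is a hypothetical stand-in I've invented for illustration, not a real API, and a production version would swap the toy heuristic for an actual multimodal model call and a proper bandit / gradient update:

```python
# Minimal sketch of the "angel on your shoulder" loop. Every device feed,
# function, and nudge below is a hypothetical stand-in invented for
# illustration, not a real API.

from dataclasses import dataclass

@dataclass
class Signals:
    heart_rate: float       # from a ring / watch feed (hypothetical)
    breathing_rate: float
    pupil_dilation: float   # from AR-glasses eye tracking (hypothetical)
    context: str            # rough description of what the user is doing

# The "library of relatively short potential paths / choices" from the thread.
NUDGE_LIBRARY = [
    "prompt a 2-minute breathing break",
    "suggest a 10-minute walk",
    "surface the pre-planned meal instead of delivery",
    "do nothing",
]

def choose_nudge(signals: Signals, goals: list) -> str:
    """Stand-in for a multimodal model call that scores each nudge in
    NUDGE_LIBRARY against the user's goals and current physiological state.
    Here it's a trivial heuristic so the sketch runs on its own."""
    if signals.heart_rate > 100 and "reduce stress" in goals:
        return NUDGE_LIBRARY[0]
    if "scrolling" in signals.context and "sleep earlier" in goals:
        return NUDGE_LIBRARY[1]
    return NUDGE_LIBRARY[-1]

def update_policy(nudge: str, accepted: bool) -> None:
    """Stand-in for the learning step: in a real system, nudges the user
    actually takes would be upweighted (bandit / gradient update)."""
    pass

# One tick of the loop.
sig = Signals(heart_rate=108, breathing_rate=18, pupil_dilation=0.6,
              context="doom-scrolling at 11pm")
print(choose_nudge(sig, goals=["reduce stress", "sleep earlier"]))
update_policy("prompt a 2-minute breathing break", accepted=True)
```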
A friend challenged me to write a "plausible path" story to said infinite VR heavens, it's here for anyone interested (reading time 14 min): https://performativebafflement.substack.com/p/946c2601-6871-479f-a009-daf50f370377
Wow, there is almost too much truth in this one.