23 Comments
John Smithson:

Much of the lore about Johnny von Neumann is folklore, embellished beyond belief. Not that he wasn't a genius -- he was -- but he was human too. (The notion that the several world-class geniuses in his graduating class were all aliens from Mars was a joke. (I think.)) He had his limitations. The von Neumann architecture for computers, for example, is stolen valor. He did not come up with that architecture, but merely formalized and documented it.

Johnny von Neumann was staggeringly brilliant, though, no question about it. He did not win a Nobel prize, but he was of that class. And there is no one I can think of who is even comparable to him. People like him, and like Wernher von Braun, seem to have disappeared from the earth. We just don't see their likes any more.

What made Johnny von Neumann who he was? Genetics seems to have made a contribution, as most of these Hungarians were Jews. Something in the environment? In the schooling they received? Historical and societal influences? I can't think of any other factors, but I can't tie them together into any sort of hypothesis, let alone find any proof to support it.

Strategy Pattern (Don’t Laugh):

From the subtitle, I expected a misapplication of John Von Neumann. Rationalism isn’t the right frame: he is second only to Gödel in being Gödelian, as he was only a day or so behind in their impromptu race to break mathematics. I view him as a reverse-engineer of the very idea of rationalism, taking it apart and putting it back together again; an utter maniac. Thank god he was the father of computer engineering.

I am also surprised that his pro-Germanism wasn’t mentioned, though, because there are anecdotes about him playing extremely loud German music throughout Princeton during and leading up to WW2, annoying Einstein like crazy. This is despite the fact that he moved to the United States ahead of many others because of the growing Nazi repression at Göttingen, soon after Gödel got beaten up by a mob.

(Also Eugene Wigner’s biography is a treat, the man truly loved Wisconsin.)

Sol Hando:

I wonder what it is that differentiates supermen like von Neumann from regular intelligent people. I know plenty of smart people, and I'm at least above average intelligence, but his ease of thinking for something like memory (as it's described) seems orders of magnitude better than what I'm capable of, and at least an order of magnitude better than the smartest person I personally know.

Is it just an abnormally high concentration of IQ genes? Maybe... but the improvement seems to be quite large. Maybe there are a few key factors that correlate with mathematical ability and photographic memory, and if you put those together with an exceptional upbringing (in order to learn ancient Greek by six, I presume you have to be exposed to someone who speaks ancient Greek), it creates the appearance of someone superhuman when the difference isn't really that large.

Performative Bafflement:

Honestly, the thing that infuriates me most about our government isn't tariffs or DEI or the fact that our only choice is always between two 80-year-old shambling multi-millionaire corpses, it's that *we literally have von Neumann's DNA* and could be figuring this out empirically! And we're not.

As to how large that gap was, the fact that multiple Nobel prize winners and all the other smartest men in the world stood in awe of his intellect suggests that he really was a prodigy far out on the bell curve.

But even if he was IQ 200+, the difference between IQ 100 and 200 still seems pretty small on an absolute scale. It definitely suggests it's time to buckle up and hold on for a ride if we get AIs with IQ 200+, though.

Sol Hando:

Do we actually have von Neumann's DNA? As far as I know he's buried in New Jersey. There's quite a bit of rainfall there, plus freeze-thaw cycles, and he's been dead for ~70 years.

I'm sure some DNA could be recovered, but as far as I know there's a pretty consistent linear degradation of DNA outside of low-humidity or frozen conditions. As for using the genetic material of supermen we've already had, my understanding is that you need the complete DNA of a living, or maybe recently deceased, person (when it actually becomes feasible at all).

Maybe we could sequence enough to get an idea of the bigger picture, or of specific genes we think would contribute to superintelligent humans, but cloning is already super complicated, with a high failure rate. I imagine anything but a perfect, recently living genome is going to have exponentially more problems.

Performative Bafflement:

I think between inner-ear-bone DNA and his daughter (who would presumably be very generously compensated), we could pretty much cover it.

We definitely suck at cloning, but that's because we basically never get the reps in. But even just having the knowledge is very valuable - as our GWASes have been getting bigger and bigger we keep finding more rare variants, and our "number of genes that affect X" keeps getting bigger for most complex traits.

Wouldn't you want to bet that JvN had some rare and interesting variants that we'd like to know about?

And obviously if we really WERE sequencing his DNA and seriously trying to figure "human excellence" out, we'd be sequencing pretty much all our Nobel prize winners, Olympians, etc, and would have a much bigger and more comprehensive database of these differences to slot him into.

nic:

Not super related, but to your last point, the thought experiment that reinforced my intuitive skepticism of an "intelligence explosion" scenario basically goes like: if we resurrected Von Neumann and had Sutskever, Karpathy, whoever, tutor him in AI research, would you expect superintelligence before he died? If not, would 100 Von Neumanns make the difference, or even 10,000?

Certainly, a country of Von Neumanns in a datacenter, as they say, would be weird, but a problem really smart humans can't solve might also be a problem the smartest human can't solve.

Performative Bafflement:

> Not super related, but to your last point, the thought experiment that reinforced my intuitive skepticism of an "intelligence explosion" scenario

Wow, we must have really different intuitions on this, because my first thought is literally "jfc, we'd get ASI even faster than Scott thinks."

If you have 10k von Neumanns running at 100x human speed, absolutely you're going to get ASI. He's already notably higher than AGI himself!

There were only a couple hundred scientists working in the Manhattan Project. You don't think picking the smartest-by-far, creating hundreds of Manhattan Project scale endeavors with those guys, then compressing 100 years worth of their collective work into one year would drive some crazy results?

So yeah, my intuition is basically the opposite.

nic:

>jfc, we'd get ASI even faster than Scott thinks

My boring counter would be (and becoming a bit more skeptical has made my AGI takes pretty boring tbh) that, since building the data centers for AGI is ballparked at around 50% of a big tech co's current yearly capex, wouldn't it just be cheaper to do an ASI Manhattan Project?

The more interesting argument is that, presumably, the advantage a ~200 IQ AGI has over a current OpenAI MTS is a) speed, and b) being 99.99...th percentile at everything, not just AI research. So what does that mean?

a) Speed: yeah, maybe that's sufficient on its own, but then I'm pretty sure we're bottlenecked by compute, and also verification of superintelligence sounds pretty hard, unless ASI can do something like prime factorization of big numbers from the jump.

b) 99.99th percentile at everything, not just AI research: if this made an impact it would be weird imo. Like, would you expect Karpathy to get even better at AI research if he became an expert on a bunch of unrelated topics?

Performative Bafflement:

> building the data centers for AGI is ballparked at around 50% of the current yearly capex for a big tech co, then wouldn't it just be cheaper to do an ASI manhattan project?

Maybe I'm misremembering, but wasn't Manhattan Project like 4% of GDP or something? Or is that the Space Race?

Ok - Dr. Google informs me that it was like 0.18% of GDP, which would be like $60B in one year. Sounds like we're already planning to invest more than that privately, so yeah, I think you're right that if we centralized it and did it federally it'd be cheaper overall.

But Space Race level funding would be like $1T annually, and we're wayyyy under that.

But I didn't get your overall point here? That we're not spending MP level funds on it? We are, by probably a factor of 3-5x more, just across 4-5 companies.

> would you expect Karpathy to get even better at AI research if he became an expert on a bunch of unrelated topics?

Unironically yes, because a lot of good ideas are found at intersections of fields and ideas. Think of Richard Feynman, who famously said he just had a handful of mental tools, and for each new problem he saw, he'd mentally try each tool; often enough, that yielded some actual insight or breakthrough.

Imagine that, but with the equivalent of mental tools at 99th percentile across every domain. Yeah, I'd def expect that to drive some progress and value.

Your point on verification is interesting - but don't you think this is very much a "know them by their works" sort of thing? Forget prime factorization, it could demonstrate immense edges in trading on the markets or something. Better algorithms drove hundreds of billions of dollars per year of value and net income across multiple companies during the Data Science revolution between 2010 and 2020 - imagine that, writ larger.

nic:
Apr 15 (edited)

I mean, my point about MP was a common counter to AI 2027, since compared to Situational Awareness last year, it didn't focus on compute scale-up.

Yes, there are MP levels of money being invested, but it doesn't seem like much of it is going to extra man-hours of AI research; instead it's going to datacenters.

So, if the AI 2027 people are focused on automating AI research, couldn't you replicate some of that (a small fraction, because humans can't work 24/7) by dumping money into finding really high human capital people and putting them to work? If they have non-AI backgrounds, so much the better, so long as you can teach them.

Believe it or not, I have 0 insight into OpenAI's recruitment practices, so I could easily be wrong, but I think it's a compelling point.

Your "unironic yes" is good bc it's consistent with what ASI bloomers/doomers seem to think imo. It could get a lot weirder than Feynman, but the future is going to be weird, I get that.

For whatever reason I didn't think about markets but I get what you're saying.

Saul:

There has been some research into commonalities among geniuses. It turns out that one thing many of the greatest minds had in common was that they were home-tutored (which makes sense, given a tailored syllabus). Not definitive, but interesting nonetheless.

Performative Bafflement:

Yeah, I’m a fan of Hoel’s aristocratic tutoring article.

Arbituram:

Von Neumann's view that the Soviets should be pre-emptively incinerated rather takes the shine off, I have to say.

Performative Bafflement:

From my reading of it, it was a straightforward application of the game theory that he invented. He saw it as an atrocity, but he genuinely thought a unipolar world was better and less existentially risky than a two-power world (and looking at the Cuban Missile Crisis, Stanislav Petrov, and the like, it's hard to argue he was wrong; we're just lucky).
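The game-theoretic reasoning above can be sketched as a toy 2x2 normal-form game. To be clear, the payoff numbers below are invented purely for illustration and are not from von Neumann's actual analysis; the point is just that with payoffs shaped this way, "strike" comes out as a dominant strategy:

```python
# Toy 2x2 game sketching the pre-emptive-strike logic. All payoff
# numbers are hypothetical, chosen only to illustrate dominance.
# Each side picks "strike" (pre-emptive attack) or "wait".
# Payoffs are ordered (US, USSR); larger is better.
payoffs = {
    ("strike", "wait"):   (-1, -10),  # unipolar world: atrocity, but no rival
    ("wait",   "strike"): (-10, -1),  # the mirror image
    ("strike", "strike"): (-8, -8),   # mutual devastation
    ("wait",   "wait"):   (-5, -5),   # bipolar standoff: ongoing existential risk
}

def us_best_response(ussr_move):
    """Return the US move that maximizes the US payoff, given the USSR's move."""
    return max(("strike", "wait"), key=lambda m: payoffs[(m, ussr_move)][0])

# With these numbers, "strike" beats "wait" for the US regardless of
# what the USSR does, i.e. it is a dominant strategy.
print(us_best_response("wait"), us_best_response("strike"))  # strike strike
```

Once both sides actually have the bomb, the payoffs change (second-strike retaliation makes "strike" catastrophic for the striker too), which matches his backing off described below.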

As soon as it was known that the Russians had the bomb, he backed off and never advocated the use of nuclear weapons again.

Arbituram:

Indeed, it is. Perhaps, on a first order basis, "he was right and we were lucky" is correct. But, to the best of my knowledge, he never then explored what the second order effect of a truly untrammeled nuclear power would be, one that pre-emptively nukes anyone who tries to develop nukes.

Could any society survive with its soul intact with that power over decades? The USA is powerful, but it has never been more powerful than everyone else put together.

Performative Bafflement:

> But, to the best of my knowledge, he never then explored what the second order effect of a truly untrammeled nuclear power would be, one that pre-emptively nukes anyone who tries to develop nukes.

Yeah, that's a great point, too, and that's probably unexplorable even for a mind like von Neumann's, given how chaotically history unfolds.

I was literally just having a conversation with somebody about how all our political institutions are broken, and how all the happy nonsense about "democracy being the best, except for all the others we've tried" is wrong, because the only model we've actually verified to be Lindy multiple times in history is "an emperor crushing all outsiders with an iron fist while the ones inside the fold are free to trade and innovate." That gets you Assyria, Persia, China, Rome, the Mayans and Aztecs, and many other long-lasting, thriving empires over time. Much longer lasting than any democracy, I should point out.

A unipolar world would certainly have been better for America, most likely, but yeah, I wouldn't have wanted to be any of the countries on the outside given that "crush the outsiders" dynamic.

But then again, you can squint your way to some pretty plausible, pretty good global futures in that scenario now, too. Obviously Russia being a nuclear power today isn't going so well, either right now, or when they demographically and economically collapse a few decades down the line.

But overall, major war would have been down, the Cold War wouldn't have happened, and maybe we'd have avoided the Korea and Vietnam wars. And given Hollywood imperialism, global culture already looks really American anyway, so who's to say that would have been much different, and so on. Sadly, nuking Mao before he got nukes wasn't timed right to prevent the Great Leap Forward; it would have just killed an additional rounding-error percentage toward the end of it.

I think it's impossible to say which world would really be better off - it would probably revolve around something outside of this mental model entirely.

Kit:

You had me at: ‘Let’s go to lunch.’
