12 Comments
Sol Hando:

On Operant Conditioning:

I used to have a nasty habit of sleeping in. No matter how hard I tried (going to sleep at a consistent time, melatonin, a face mask, etc.), I would snooze my alarm, and unless I had something important to get to that morning, I would sleep in. My mean sleep duration without an alarm seemed to be about 9 hours 15 minutes, which isn't great if you want to wake up early and not have to go to sleep at 9 PM.

This was solved in less than a month with operant conditioning. I bought a shock-watch (https://shop.pavlok.com/) that wakes you up with an electric shock. My morning brain pretty quickly adapted: I started waking up early, but I would also take the watch off in my half-dazed state. I zip-tied it to my wrist to prevent this, and after fewer than 10 days of waking up to an electric shock, I now permanently wake up within ~10 minutes of my desired time, without an alarm.

I'm a pretty willful person, so it was somewhat humbling to realize that the subconscious can have such control over how we live our lives. It really makes me sympathize with people who suffer from chronic obesity and other maladies that can quite obviously be solved by just "doing the right thing that doesn't even require much effort." We're not just our conscious minds, and our subconscious processes are pursuing completely different goals.

Anyway, this is relevant because I thought of doing the same pavlovian conditioning for other bad habits in my life, but I couldn't figure out a good way to trigger it without some third party observer. I considered hiring a friend or personal assistant to shadow me and watch for certain behavior I was aiming to improve for a few weeks, but it was turning into more work than it was worth and I wasn't getting any positive feedback.

If you do it yourself, you'll only deliver the shock when you consciously remember, and if you had that sort of self-awareness you could just stop the bad habit directly. An AI assistant with a little camera on your lapel, instructed to deliver an uncomfortable electric shock whenever it catches you doing something you want to stop, would probably solve most problems in life.

Pull into a McDonalds or open Doordash? Shock. Ignore your regularly scheduled exercise to do something pointless? Shock. Scrolling TikTok? Shock. Etc, etc. It doesn't have to be a shock either (that's just the only high-impact intervention I've found that works), as I can imagine some electronically controlled reward mechanisms that would act on the brain in the same way.
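That detect-and-punish loop is simple enough to sketch. Here's a toy Python version, purely hypothetical: `classify_frame`, the behavior labels, and the shock trigger are stand-ins for an assumed vision model and a Pavlok-style device API, neither of which is a real interface.

```python
# Behaviors the user decided, once and consciously, to punish.
PUNISHED_BEHAVIORS = {"fast_food", "doomscrolling", "skipping_exercise"}

def classify_frame(frame):
    """Stand-in for a vision model that labels what the lapel camera sees."""
    return frame.get("label", "neutral")

def conditioning_step(frame, events):
    """One pass of the loop: look, classify, punish if needed."""
    behavior = classify_frame(frame)
    if behavior in PUNISHED_BEHAVIORS:
        events.append(("shock", behavior))  # a real device call would go here
        return True
    return False

# Simulated stretch of the day as a few camera frames.
frames = [{"label": "working"}, {"label": "doomscrolling"}, {"label": "fast_food"}]
events = []
shocks = sum(conditioning_step(f, events) for f in frames)
print(shocks)  # prints 2
```

The point of the sketch is that the hard part isn't the loop, it's the classifier; the "one-time executive decision" is just editing `PUNISHED_BEHAVIORS`.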

Essentially, we'd no longer have to battle subconscious processes with ongoing executive functioning. A one-time executive decision, made in a moment of strength, could set an algorithm to defeat the subconscious desires that are strongest in our moments of weakness. Bryan Johnson talks about this when it comes to health, but his tangible offer to people (the Blueprint Diet) sucks and he's kind of a wacko.

I've discussed this with a few people, and the usual response is negative. I guess it seems like you're defeating your desires and turning yourself into a robot. In my opinion, not all desires are consciously desirable; they should be valued by how they contribute to a holistically happy life. I can understand the repulsion, though: by default, the normalization of this sort of thing would probably be used to reinforce some economic end (I could see South Korean parents shocking their kids when they aren't studying to get a leg up on admissions, and it might even work), rather than human flourishing.

Edit:

Now I'm imagining a future where humans are basically just glorified Pokemon. Successfully aligned AI end up prioritizing human flourishing to the maximum extent, and individual AI assigned to humans derive their sense of value in how much they help their owner to succeed. We don't end up fighting it out for the benefit of our AI assistants of course, but there's a higher level competition going on between which assistant can direct its human to have the most fulfilling life.

Performative Bafflement:

> I've discussed this with a few people, and the normal response is negative. I guess it seems like you're defeating your desires and turning yourself into a robot. Not all desires are consciously desirable in my opinion, and should be valued depending on how they contribute to a holistically happy life.

Ha! I love everything about this, and agree wholeheartedly.

I think the negative response goes away pretty quickly when people see the results possible - after all, empirical results are the last argument of kings (okay, technically that's supposed to be 'cannons,' but generalize a little with me here).

I agree that "Tiger Momming your kids into oblivion" is probably a problem, but that comes down to empirical results and culture / legislation, too. If a teenage suicide rate ~2x the US's hasn't moved South Korean Tiger Momming (and that's in the lowest-birthrate country on earth!), they probably need to start looking at legislative solutions.

> Now I'm imagining a future where humans are basically just glorified Pokemon. Successfully aligned AI end up prioritizing human flourishing to the maximum extent, and individual AI assigned to humans derive their sense of value in how much they help their owner to succeed.

Honestly, it sounds pretty great!

It's like being the dog of really smart, friendly, and attentive owners who are always there. Certainly one of the happier potential AI futures that gets envisioned so far.

And to your point, this is something that should be directly aligned with the business interests of the companies providing the assistants - they make more money with happier and more successful customers, too.

nic:

>people who will do the best on complex multi-polar goals in the future like “I want an interesting career, and a great spouse, and to have good habits that leave me energetic and engaged with my day to day life” will be highly conscientious people who will strictly follow the advice that GPT-o7 gives to them.

This is a good intuition of our weird AGI future that squares with it being closer on the horizon. Probably the first one I've read in a while that actually threads that needle. Usually it's either gray goo or spacefaring in under a decade.

Like, you could get some alpha in doing this now, if you understand when to trust it. No question if the models continue to improve at all, the amount of discernment needed to trust it will go down.

Manon:

"which was definitely on their side and genuinely wishing for their success and trying to help them along those lines" is doing a lot of heavy lifting here. Otherwise, you make a compelling argument.

Performative Bafflement:

To my mind, it's one of the rare occasions when everyone's incentives are aligned. Usually this isn't true: it's typically thousands of PhDs on one side of the app using their collective brainpower to find and yank on your deep-seated neurological hooks, to make you give more eyeball time to them and so more advertising dollars. The attention economy.

But the assistants actually have the opposite incentives! You'll be using them all the time anyways, so it's less of an attention economy, and more of an outcome economy.

Even if you're on a "free tier" personal assistant, the company will want it to help you get more disciplined and level up, because it will literally make you a more valuable customer and consumer overall, to both them and their advertisers.

And on the paid tier, the same dynamic holds - people will be more willing and more able to pay more the more the AI assistants drive positive impacts in their lives.

Mister Kavon:

I agree with your entire premise of immersive personalized AI being good for the plebs. (Certainly not me. After all, I'm subbed to Performative Bafflement.)

However, I would like you to elaborate on your envisioning of a "Post-Scarcity world": what it means to you, and how all-encompassing this world will be. Normal caveats aside, we're not talking about utopia here.

Some aspects of this theory I completely agree with. Hell, some aspects are already here, AI being a good example. Other aspects, not so much; in fact, I think some of the optimistic predictions are woefully naive. UBI is a pillar of most post-scarcity thinkers, and I don't see it happening for AT LEAST another 100 years. A mass reduction in the Gini coefficient is another, and there's no reason to expect any flattening of economic inequality in the foreseeable future. You did address this issue a bit in this post, but there's plenty of ground to cover on this topic.

Anyways, if you wouldn't mind expanding on what this term means to you, or point me in the direction of a post where you've covered this, I'd be most grateful.

Performative Bafflement:

Ah, a commenter of great taste and discernment! :-)

> Anyways, if you wouldn't mind expanding on what this term means to you, or point me in the direction of a post where you've covered this, I'd be most grateful.

I actually haven't written a post on this, but here are my thoughts on why UBI is likely. I agree with you that economic inequality will probably rise precipitously; I think it will essentially be bimodal. The majority of people will be UBI-only, and some small portion (20%?) will have unique rivalrous goods like land in places people want to live, unique artworks, and so on.

At bottom, I think all "post scarcity" means is that the material and entertainment needs of life are entirely covered for everyone. Everyone gets places to live, food, clothes, gadgets, furniture, electricity, and infinite entertainment for free.

But what it means within that chunk of possibilities can vary really wildly - from Marshall Brain-style Manna compounds at one end to Star Trek or Banks' Culture novels at the other, to many destinations in between.

Of course, mass and energy are still finite, and will likely be at a premium for AGIs or ASIs, but I think human-scale mass and energy needs will be so small nobody will care, much like the ~50M cats and ~100M dogs in the US being taken care of by people, eating out of their owners' mass / energy budgets. People will probably be given a certain monthly or annual mass / energy budget denominated in dollars, and it's up to them how to use it.

And this is why rivalrous unique goods are valuable - they're totally unattainable with any reasonable annual UBI payout - they'll only be attainable to former or current capital holders with non-UBI revenue streams.

But why UBI? Let's take the 40,000-foot view: the US has ~30% of the billionaires on the planet (with ~3% of the population!), and roughly 30% of them have signed the Giving Pledge, where they pledge to give away at least half their wealth.

So at a high level, we've got way more billionaires per capita than anyone else, and they've demonstrated enough public-spiritedness / generosity that we could expect UBI if they suddenly had all the income and nobody else did because everyone's jobs were AGI'd away.

We're also a democracy, so you know, people are obviously going to use the political process to make sure people whose jobs are taken away are taken care of.

And of course, we're the locus of the 3-5 most advanced AI companies on earth, too. So if any one of them, or any combination of them, cracks AGI and automates some huge percentage of jobs, that money is going to be coming into US companies and available for taxation.

It's honestly the REST of the world I worry about, outside of China. China's biting our ankles; they'll be fine and can automate jobs and print UBI, too (at a much lower per capita standard, but that's where they are today as well).

But everyone else? Their jobs will be automated away, and it'll be up to them whether they tax and UBI their people. Europe will, sure. But what about more corrupt and authoritarian places, which all have smaller and crappier economies already? Can't the elites just automate all the jobs and keep the great majority of the money? Probably. Sucks to be their people.

As to your 100 year timing, I think this really just depends on the "AGI" and "how many and which jobs are automated" timing, because it will be driven by those things rather than time itself. If it goes really slow, yeah, there'll probably be some rough patches for the first people getting automated. But I think everyone basically expects big chunks of the Professional Managerial Class to be automated first, because all they do is shuffle bits on computers already, in which case that's the social and political elite and UBI would happen really fast.

Arbituram:

I'm confused by your world-model of the near future: we have completely commodified and democratised intelligence, but getting into Harvard still matters for some reason?

Performative Bafflement:

Won’t unique rivalrous goods, like land in major metros and places people want to live, unique artworks, and exclusive degrees, be among the ONLY things that still have value in a post-scarcity future??

A T20 degree is a lot more than “I’m smart.” It’s “I’ve demonstrated I can win the hardest contests from a young age” as well as “I’m connected to high level people.”

And both of those will matter more in a post-scarcity future, too. If we don’t have to work, we’re basically just hunter gatherers again. You eat, you lounge around, you dance, you play status games, you raise kids.

And of course if jobs are still around, it matters too.

Remember when everyone was treating MSFT or GOOG interview questions like brain teasers several years ago? Lots of places already IQ-test, more or less, and it’s not worth much.

Many a wild hair can max an IQ test but then doesn't deliver solid results over time: laziness, inconsistency, failure to see the big picture, failure to map their work back to business value, and more. That's such a prevalent factor, in fact, that Google basically gave up on the brain-teaser-style interviews! They were optimizing for the wrong thing, and it didn't correlate with actual success on the job.

Arbituram:

I mean, maybe? I'll be honest, I have no confidence whatsoever in my post scarcity intuitions, even assuming alignment etc goes well. Maybe the closest thing we have is the Haredim; do they jostle amongst themselves for rivalrous goods?

Part of my confusion here, perhaps, is that I've always been a bit "autistic, but for status instead of social skills"; that part of my brain seems missing, although that's partially a cultural thing from where I grew up. I also find it hard to believe in the hedonic treadmill; it doesn't seem at all true in my personal experience.

In this world it's very hard to see the point of any of this, no? Although I suppose already wealthy people continue to seek additional money and power now for mysterious reasons, so, perhaps that would just continue in a shadow puppet play version of itself.

Performative Bafflement:

> In this world it's very hard to see the point of any of this, no? Although I suppose already wealthy people continue to seek additional money and power now for mysterious reasons

Well, classically we did it in the Environment of Evolutionary Adaptiveness (EEA) because it increased our chances of surviving and producing offspring. Needless to say, with those stakes, it's built in really deep and it's really strong in most people (hence the already wealthy still grinding).

One area where it mattered then and would still matter in the future is mate search: higher-status people get higher-status mates, then and now, and in the past they had more offspring. Sometimes that was via polygyny (which has held for the top 10-20% of men in most hunter-gatherer societies historically), but even without polygyny, status increased offspring via serial monogamy when your wife died in childbirth (~33% chance by age 40) or from infection (~40% chance by age 50).

I agree that in a post-scarcity world with birth control there's no real "objective" reason to spend all your time on this if you're already paired up. But before you're paired, it's still one of the biggest things on the minds of people from their teens to their late twenties today, and in the absence of careers it will no doubt be the biggest thing on average, at least for those still of a mind to pair bond. (That will be shaped by cultural mores, extant memetic superstimuli, and the overall landscape of alternatives to relationships at the time: Netflix and apps have already reduced pairing and sex, so imagine what stimuli 10x stronger can do.)

But honestly, like I mention in the article, if you're not prone to those games and see no point to the hedonic treadmill, you're probably better off in a post-scarcity future! Let everyone ELSE grind pointlessly while you enjoy a great standard of living, free of neuroses, status anxiety, and the grind!

Erik:

Will listening to the advice of an AI in your ear 24/7 move you higher or lower on the tree of life?
