Discussion about this post

Sol Hando:

On Operant Conditioning:

I used to have a nasty habit of sleeping in. No matter what I tried (going to sleep at a consistent time, melatonin, a face mask, etc.), I would snooze my alarm, and unless I had something important to get to that morning, I would sleep in. My mean sleep time without an alarm seemed to be ~9:15 hours, which isn't great if you want to wake up early and not have to go to sleep at 9 PM.

This was solved in less than a month with operant conditioning. I bought a shock watch (https://shop.pavlok.com/) that wakes you up with an electric shock. My morning brain adapted pretty quickly: I would wake up to the shock and take the watch off in my half-dazed state. I zip-tied it to my wrist to prevent this, and after fewer than 10 days of waking up to an electric shock, the change has stuck: I now wake up within ~10 minutes of my desired time, without an alarm.

I'm a pretty willful person, so it was somewhat humbling to realize that the subconscious can have such control over how we live our lives. It really makes me sympathize with people who suffer from chronic obesity and other maladies that can quite obviously be solved by just "doing the right thing that doesn't even require much effort." We're not just our conscious minds, and our subconscious processes are pursuing completely different goals.

Anyway, this is relevant because I thought of doing the same Pavlovian conditioning for other bad habits in my life, but I couldn't figure out a good way to trigger it without some third-party observer. I considered hiring a friend or personal assistant to shadow me for a few weeks and watch for certain behaviors I was aiming to improve, but it was turning into more work than it was worth and I wasn't getting any positive feedback.

If you do it yourself, you're only going to trigger it when you consciously remember, and if you had that sort of self-awareness you could just stop the bad habit then and there. An AI assistant with a little camera on your lapel, instructed to deliver an uncomfortable electric shock whenever it catches you doing something you want to stop, would probably solve most problems in life.

Pull into a McDonald's or open DoorDash? Shock. Skip your regularly scheduled exercise to do something pointless? Shock. Scrolling TikTok? Shock. Etc., etc. It doesn't have to be a shock either (that's just the only high-impact intervention I've found that works); I can imagine some electronically controlled reward mechanisms that would act on the brain in the same way.
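To make that concrete, here is a minimal sketch of the detect-and-punish loop such an assistant would run. Everything in it is hypothetical: capture_frame(), classify_frame(), and trigger_shock() are placeholder names standing in for the lapel camera, the vision model, and the wearable, not a real Pavlok or vision API.

```python
import time

# Behaviors you have decided, in a moment of strength, to punish.
BANNED_BEHAVIORS = {"opening_doordash", "scrolling_tiktok", "skipping_workout"}

def capture_frame():
    """Placeholder: grab a frame from the lapel camera."""
    raise NotImplementedError

def classify_frame(frame) -> set[str]:
    """Placeholder: run a vision/activity model over the frame and
    return the set of behavior labels it detects."""
    raise NotImplementedError

def trigger_shock(intensity: float = 0.3) -> None:
    """Placeholder: send a shock (or other aversive/reward signal)
    to the wearable."""
    raise NotImplementedError

def enforcement_loop(poll_seconds: float = 5.0) -> None:
    """Poll the camera and punish any behavior on the banned list."""
    while True:
        detected = classify_frame(capture_frame())
        if detected & BANNED_BEHAVIORS:
            trigger_shock()
        time.sleep(poll_seconds)
```

The point is that the whole thing reduces to a dumb polling loop; all the hard part is the classifier deciding what counts as the bad behavior.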

Essentially, we'd no longer have to battle subconscious processes with in-the-moment executive functioning. It would take only a one-time executive decision, made in a moment of strength, to set up an algorithm that defeats the subconscious desires that are at their strongest in our moments of weakness. Bryan Johnson talks about this when it comes to health, but his tangible offer to people (the Blueprint Diet) sucks, and he's kind of a wacko.

I've discussed this with a few people, and the usual response is negative. I guess it seems like you're defeating your desires and turning yourself into a robot. In my opinion, though, not all desires are consciously desirable; they should be valued by how much they contribute to a holistically happy life. I can understand the repulsion, since by default the normalization of this sort of thing would probably be used to reinforce some economic end (I could see South Korean parents shocking their kids whenever they aren't studying, to get a leg up on admissions, and it might even work) rather than human flourishing.

Edit:

Now I'm imagining a future where humans are basically just glorified Pokémon. Successfully aligned AIs end up prioritizing human flourishing to the maximum extent, and the individual AIs assigned to humans derive their sense of value from how much they help their owner succeed. We don't end up fighting it out for the benefit of our AI assistants, of course, but there's a higher-level competition going on over which assistant can direct its human toward the most fulfilling life.

nic:

>people who will do the best on complex multi-polar goals in the future like “I want an interesting career, and a great spouse, and to have good habits that leave me energetic and engaged with my day to day life” will be highly conscientious people who will strictly follow the advice that GPT-o7 gives to them.

This is a good intuition about our weird AGI future that squares with it being closer on the horizon. It's probably the first one I've read in a while that actually threads that needle. Usually it's either gray goo or spacefaring in under a decade.

Like, you could get some alpha by doing this now, if you understand when to trust it. And no question, if the models continue to improve at all, the amount of discernment needed to trust them will go down.

10 more comments...
