What would a human life guided by AI look like?

Lemmingwiser

People have let all sorts of things design their lives: church, role models, social movements, parenting, school. There are different guides modern humans can use to structure their lives.

Let's say someone from age 12 to age 40 asks an "AI" like chatGPT for advice every day of their life, from small questions to big ones. Like "I feel like I should study but I don't feel like it. What should I do?" and "I am not certain whether I should pursue a medical degree or a law degree. How do I decide?"

When looking at games in chess, computers can tell you exactly where you made a mistake. Sometimes you still win because your opponent made worse mistakes. We know how chaotic life is and how sometimes the worst can happen no matter what you did, and vice versa. But we also know that in the long run a person has a lot of influence on the outcomes.

What are your expectations for a man or woman who would seek advice that way from an early age?
 
  • Like
Reactions: Aunt Carol
Those questions are too general. If AI is going to guide lives, those computers are supposed to know everything about us in this surveillance society. So at 12 one should be able to log in and get their future course of study; partner, if any, by name; number of children; and length of life. I know it's horrifying, but it's no less than some people expected from Ms. Cleo.
 
  • Agree
Reactions: Aunt Carol
There are two options:
1. The AI is guiding humanity, but is itself guided by the elites and large companies. In which case it's just repressive and consumeristic, the most realistic outcome
2. The AI is completely independent. Surprisingly this is worse.

AI must be made to serve humanity, not the other way around.
 
  • Agree
Reactions: Ultrapenguin
The issue with this question, and the issue with all AI singularity type questions, is that existence doesn't actually have any logical purpose. Life is an inherently irrational experience where any attempt to parse its nature logically will just wrap around on itself and you'll wind up right back where you started.

That means ultimately the bot itself has to be guided by some ulterior directive, by some entity who will ultimately be self-serving, or at least have a seriously biased philosophy.

In the end you either decide your own fate or get taken along in the pursuit of someone else's. In this case it would just depend on who's holding the reins.
 
The same as now. We already live lives guided by machines. So it'd be modernity cranked up to a thousand times what it is now. Hell, essentially.
 
I swear this was an actual premise to a show like Twilight Zone or Outer Limits.
 
  • Like
Reactions: Aunt Carol
Okay, people are taking the question differently than intended.

@Johan Schmidt @John Titor @My Name is Ed

I don't mean what the effect on humanity would be. The scope is much smaller. I'm asking what the effect would be on that single life. For someone who dedicates himself or herself to following chatGPT's recommendations, much like a teen might follow the recommendations of their Olympic wrestling coach, or their church? Someone who'd do so with zest, passion and dedication?

@BBJ_4_Ever @libRT @My Name is Ed

I don't mean AI as promised, I mean the current chatGPT. One that is essentially trained on online conversations and, as such, has training material that is fairly recent (and let's presume it isn't updated, so it even goes a little out of date).

And in case you haven't experimented with it: it is full of flaws. For example, I recently asked it to solve a video conversion problem. It suggested how I could do this with VLC, giving a step-by-step guide. I followed it and saw that a button it insisted was there was missing. When I said the button wasn't there, it told me I was correct and that what I was trying to do was impossible with VLC, lmao.

But I wonder what might happen to a life if guided by the current chatGPT. The way some people might try to live on a dollar a day and write a book about it.
 
  • Thunk-Provoking
Reactions: BBJ_4_Ever
Okay, people are taking the question differently than intended.

@Johan Schmidt @John Titor @My Name is Ed

I don't mean what the effect on humanity would be. The scope is much smaller. I'm asking what the effect would be on that single life. For someone who dedicates himself or herself to following chatGPT's recommendations, much like a teen might follow the recommendations of their Olympic wrestling coach, or their church? Someone who'd do so with zest, passion and dedication?

@BBJ_4_Ever @libRT @My Name is Ed

I don't mean AI as promised, I mean the current chatGPT. One that is essentially trained on online conversations and, as such, has training material that is fairly recent (and let's presume it isn't updated, so it even goes a little out of date).

And in case you haven't experimented with it: it is full of flaws. For example, I recently asked it to solve a video conversion problem. It suggested how I could do this with VLC, giving a step-by-step guide. I followed it and saw that a button it insisted was there was missing. When I said the button wasn't there, it told me I was correct and that what I was trying to do was impossible with VLC, lmao.

But I wonder what might happen to a life if guided by the current chatGPT. The way some people might try to live on a dollar a day and write a book about it.
In the current state of AI, you'd basically get an only slightly smarter ralphamale, if they somehow survive (ChatGPT isn't intelligent, it's brute-force math and statistics).

They'd grow up with no mental talent of their own and be completely reliant on the machine even for basic thought processes. No need to learn anything if a machine can give you a good enough answer. So basically your average person walking around now, except with a machine as their intellectual authority instead of a retarded government/culture.
 
...But I wonder what might happen to a life if guided by the current chatGPT. The way some people might try to live on a dollar a day and write a book about it.
Then I did take it too far. But to go with your last example, would they have to follow the suggestions like a GPS that directs one off an unfinished bridge in the dark, or does human free will enter into it? The AI appears rather malicious in the VLC account you related.
 
  • Thunk-Provoking
Reactions: Lemmingwiser
And in case you haven't experimented with it: it is full of flaws. For example, I recently asked it to solve a video conversion problem. It suggested how I could do this with VLC, giving a step-by-step guide. I followed it and saw that a button it insisted was there was missing. When I said the button wasn't there, it told me I was correct and that what I was trying to do was impossible with VLC, lmao.
I mean, you've pretty much given the answer away. At some point, the sycophantic human will either find themselves stuck in an endless loop, or take some out-of-date or misconstrued advice that leads to them hurting themselves or worse. If we're talking about a real AI that isn't neutered by humans, unlike the more recent iterations of ChatGPT, then I'd expect the human to survive less than 5 years.
 
  • Thunk-Provoking
Reactions: Lemmingwiser
In the current state of AI, you'd basically get an only slightly smarter ralphamale, if they somehow survive (ChatGPT isn't intelligent, it's brute-force math and statistics).

They'd grow up with no mental talent of their own and be completely reliant on the machine even for basic thought processes. No need to learn anything if a machine can give you a good enough answer. So basically your average person walking around now, except with a machine as their intellectual authority instead of a retarded government/culture.
I mean I gave an example where it went wrong. I also asked it questions on a number of subjects that I have expertise on, and the advice was excellent.
 
You already know, roughly. Concepts like Systems Thinking have defined the last 40 to 60 years whether you're aware of it or not - funnelling human behaviour with background processes in order to guide it towards desired outcomes. AI will work the same way - the "win" condition will be set by human beings, not computers, and the outcome will be the increasingly clumsy and dysfunctional world you already live in.
 
  • Thunk-Provoking
Reactions: Lemmingwiser
You already know, roughly. Concepts like Systems Thinking have defined the last 40 to 60 years whether you're aware of it or not - funnelling human behaviour with background processes in order to guide it towards desired outcomes. AI will work the same way - the "win" condition will be set by human beings, not computers, and the outcome will be the increasingly clumsy and dysfunctional world you already live in.
Okay that's a wild ride to look at.

Any chance you're willing to expand more on this? I have not encountered this idea before. I'll probably end up reading more about it, but if you can explain your thoughts further it'd be great.
 
Okay that's a wild ride to look at.

Any chance you're willing to expand more on this? I have not encountered this idea before. I'll probably end up reading more about it, but if you can explain your thoughts further it'd be great.

It's not my area of expertise, though I've worked for and with a number of individuals who would consider themselves practitioners of the theory, operating at high levels of influence.

It's first important to understand that it's a broad term that describes a lot of different and disparate disciplines. A key theory that's been instrumental in its implementation is something called Cybernetics.

Cybernetics is a concept that's been promoted and applied since the 60s which is deeply embedded into Systems Theory. Essentially, what that boils down to is viewing the world as a series of feedback loops, and assuming that machines and people operate in the same way. It began as a way of articulating how technical/mechanical concepts operated, before people began applying it to the natural world - beginning with animals, then taking it further and applying it to human psychology. Systems Theory then suggests that by altering or manipulating these feedback loops, you can in turn alter and manipulate other feedback loops that form part of the overall "system".
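(Not part of anyone's post - just a toy sketch to make the feedback-loop idea concrete. It's a made-up thermostat in Python: the controller observes the room, compares it to a target, and feeds a correction back into the system. The function name, the gain, the heater cap and the heat-loss numbers are all invented for illustration, not taken from any actual cybernetics source.)

# Toy feedback loop: a proportional thermostat. All numbers are made up.
def simulate_thermostat(target=21.0, outside=5.0, hours=24, gain=0.5):
    room = 15.0  # starting room temperature in Celsius
    history = []
    for _ in range(hours):
        error = target - room                        # observe: how far off are we?
        heating = max(0.0, min(3.0, gain * error))   # act: correction, capped at the heater's limit
        room += heating - 0.1 * (room - outside)     # the system responds: heater warms, cold outside cools
        history.append(round(room, 2))
    return history

print(simulate_thermostat())          # settles a bit below the 21 C target
print(simulate_thermostat(gain=2.0))  # a stronger loop gets closer to the target, but starts to oscillate

Tweak the gain and the behaviour of the whole loop changes - which is roughly the move Systems Theory wants to make with people and institutions instead of heaters.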

[Image: Cybernetics diagram]


A potential example of Systems Theory / Cybernetics in action would be Henry Kissinger's foreign policy in the Arab world. Kissinger believed in the idea of the "Balance of Power", which has its roots squarely within Systems Theory - that the various tensions and alliances between different nations could be manipulated at the highest possible level in order to create global peace. This really only works if you assume total knowledge of all the components of the 'system' and all potential side-effects/outcomes of any feedback loops you attempt to manipulate.


Obviously, that didn't work out too well for Kissinger, or the world.

Another (clumsy and benign) example would be in the UK - the creation of what they call "Nudge Units". Funded by the government, they suggested that urinals in men's bathrooms should have a fake fly, or piece of dirt, placed in a specific position on the urinal itself. The placement of the fly/dirt was deliberate, and assumed that men would instinctively aim to piss on it - the side-effect being that the positioning would ensure the least amount of splashback, and result in cleaner public toilets. As far as I'm aware, the scheme was an abject failure and ultimately abandoned after much ridicule.

A third example - which was entirely by accident - is the reintroduction of wolves to Yellowstone. The presence of the wolves impacted the animals they hunted, which in turn impacted the vegetation they grazed on, which impacted other animals - and so on and so on - which ultimately led to the rivers in the park changing.


Proponents of Systems Theory love this example - though personally, I feel it acts as an argument against it. When wolves were reintroduced to Yellowstone, it wasn't done with the objective of changing the flow/size/position of the rivers, it was an entirely unintended side-effect. To me, it demonstrates again that Systems Theory / Cybernetics only works if you assume you're working with a complete dataset, and are able to accurately predict every possible outcome.

Cybernetics/Systems Theory is a wild concept, with a number of flaws. You'll struggle to find much online that articulates it in anything resembling plain English, and it almost requires a deep understanding of everything from Mathematics to Psychology to really get a grip on how it works, and how it's applied.

I'd recommend some of Adam Curtis' documentaries to get a broad view of how some of these ideas have played out in the world since the 60s.

Hopefully at least some of what I just wrote makes sense, and it's clear how it relates to the question you asked in the OP.
 