What would a human life guided by AI look like?

It's not my area of expertise, though I've worked for and with a number of individuals who would consider themselves practitioners of the theory, operating at high levels of influence.

It's first important to understand that it's a broad term describing a lot of different and disparate disciplines. A key theory that's been instrumental in its implementation is something called Cybernetics.

Cybernetics is a concept that's been promoted and applied since the 1940s, and it's deeply embedded in Systems Theory. Essentially, it boils down to viewing the world as a series of feedback loops, and assuming that machines and people operate in the same way. It began as a way of articulating how technical/mechanical systems operated, before people began applying it to the natural world: first to animals, then further still to human psychology. Systems Theory then suggests that by altering or manipulating these feedback loops, you can in turn alter and manipulate the other feedback loops that make up the overall "system".
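To make the feedback-loop idea concrete, here's a minimal sketch (plain Python, all names and numbers are mine) of the textbook cybernetic example, a thermostat: the controller senses a quantity, compares it to a goal, and feeds its correction back into the very thing it's sensing.

```python
# A minimal cybernetic feedback loop: a thermostat.
# Sense -> compare to goal -> act -> the action changes the thing being sensed.

def thermostat_step(temperature, target, heater_power=0.5, leak=0.1):
    """One tick of the loop."""
    error = target - temperature                   # sense and compare
    heating = heater_power if error > 0 else 0.0   # act on the error
    # the action feeds back into the very quantity being measured
    return temperature + heating - leak

temperature = 15.0
for tick in range(30):
    temperature = thermostat_step(temperature, target=20.0)
print(round(temperature, 1))  # hovers around the target: a stable feedback loop
```

The cybernetic move is to claim that everything from a heater to a market to a mind can be described as loops like this one, coupled together.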


A potential example of Systems Theory / Cybernetics in action would be Henry Kissinger's foreign policy in the Arab world. Kissinger believed in the idea of the "Balance of Power", which has its roots squarely within Systems Theory: that the various tensions and alliances between different nations could be manipulated at the highest possible level in order to create global peace. This really only works if you assume total knowledge of all the components of the 'system', and of all potential side-effects/outcomes of any feedback loops you attempt to manipulate.


Obviously, that didn't work out too well for Kissinger, or the world.

Another (clumsy and benign) example comes from the UK: the creation of what are known as "Nudge Units". Funded by the government, they suggested that urinals in men's bathrooms should have a fake fly, or piece of dirt, placed in a specific position on the urinal itself. The placement was deliberate, on the assumption that men would instinctively aim to piss on it, the side-effect being that the positioning would minimise splashback and result in cleaner public toilets. As far as I'm aware, the scheme was an abject failure and was ultimately abandoned after much ridicule.

A third example - which was entirely by accident - is the reintroduction of wolves to Yellowstone. The presence of the wolves impacted the animals they hunted, which in turn impacted the vegetation those animals grazed on, which impacted other animals, and so on, until ultimately the rivers in the park changed.


Proponents of Systems Theory love this example - though personally, I feel it acts as an argument against it. When wolves were reintroduced to Yellowstone, it wasn't done with the objective of changing the flow/size/position of the rivers; that was an entirely unintended side-effect. To me, it demonstrates again that Systems Theory / Cybernetics only works if you assume you're working with a complete dataset and are able to accurately predict every possible outcome.
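For what it's worth, you can watch a Yellowstone-style cascade in a toy model, and the toy actually illustrates my objection: below is a three-node chain (wolves, elk, willow) where every number and update rule is invented by me. The "prediction" only succeeds because I wrote the complete system myself, a luxury nobody has with a real ecosystem.

```python
# Toy trophic cascade: wolves eat elk, elk graze willow.
# Every coefficient here is made up for illustration.

def simulate(wolves, steps=200):
    elk, willow = 300.0, 20.0
    for _ in range(steps):
        elk += 2.0 * (1 - elk / 400) - 0.002 * wolves * elk  # growth minus predation
        willow += 2.0 - 0.02 * elk                           # regrowth minus grazing
        elk, willow = max(elk, 0.0), max(willow, 0.0)
    return round(elk), round(willow)

print(simulate(wolves=0))   # no wolves: elk climb toward capacity, willow grazed to nothing
print(simulate(wolves=20))  # add wolves: elk fall, willow (and everything downstream) recovers
```

Poking one node propagates through the whole chain, exactly as the theory says - but only because the couplings were fully specified in advance.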

Cybernetics/Systems Theory is a wild concept with a number of flaws. You'll struggle to find much online that articulates it in anything resembling plain English, and it almost requires a deep understanding of everything from Mathematics to Psychology to really get a grip on how it works and how it's applied.

I'd recommend some of Adam Curtis's documentaries for a broad view of how some of these ideas have played out in the world since the 60s.

Hopefully at least some of what I just wrote makes sense, and helps show how it relates to the question you asked in the OP.
I think this is precisely the same thinking that propelled Kurt Lewin when he took "feedback", a term from machine/engineering systems, and applied it to human interaction.

He was also the person who coined the term "safe space", which originally meant a place where people could voice their taboo racist ideas without being condemned for them. He kept running into the problem that he wanted to expose racism, but couldn't find anyone openly saying racist things.

You've given me quite a subject to sink my teeth into, and it's fascinating. Thanks.
 
You're confusing a perfect AI, or a machine god, with even a "general AI". An AI is just a pile of matrices, if statements, and data.
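That "pile of matrices and if statements" is nearly literal. Here's a minimal sketch in plain Python (no ML library, weights arbitrary) of one forward pass through a toy network; the ReLU is the "if statement", and everything else is matrix arithmetic over weights learned from data:

```python
# One forward pass of a tiny neural net, written out by hand.

def matvec(matrix, vector):
    return [sum(w * x for w, x in zip(row, vector)) for row in matrix]

def relu(vector):
    return [x if x > 0 else 0.0 for x in vector]  # literally an if statement

# Arbitrary weights: in a real model these come from the training data,
# which is where the trainers' choices and biases live.
W1 = [[0.5, -0.2], [0.1, 0.8]]
W2 = [[1.0, -1.0]]

user_input = [0.3, 0.9]
hidden = relu(matvec(W1, user_input))
output = matvec(W2, hidden)
print(output)  # the "answer" is just arithmetic over someone else's weights
```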

What are the matrices, if statements, and data? What are the seeds? What are its biases? What does it "know"? (I'd really rather find a better word, since I'd be hard-pressed to concede that an AI has any notion of experience or knowledge, but I need some way to express that an AI can identify categories, concepts, 'things', and the like, and operate on that 'ontology'.)

An AI is just a really messy program that's somehow useful despite using up a lot of power to do anything. It's a program that depends on its training inputs far more than on its user inputs. Training inputs are the pile of data and iterations we use to shape this computational blob into saying what we want. We who? That group's biases will leak in, whether it's the innocent bias that comes from being an individual person with your own life experience, or an ideological one. Then there's the user input: who is asking what? What are they even allowed to ask?

Let me put it another way: whatever person or group makes the AI, and whatever they use to train it, determines what the AI says. Thus, if we have a "Leave It to Beaver" AI, it will give "Leave It to Beaver" life advice. Same with Red Forman. Or Captain Picard from TNG. Or Mr. Rogers.

Or the distilled woke gestalt.
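You can see the same-architecture, different-training effect in a toy: below is a bigram next-word model (nothing remotely like a real LLM, and both "training corpora" are invented by me) that gives the identical user prompt two different kinds of "advice" purely because of what it was fed.

```python
import random
from collections import defaultdict

# Toy bigram "language model": same code, same user input,
# different training text -> different output.

def train(corpus):
    model = defaultdict(list)
    words = corpus.split()
    for a, b in zip(words, words[1:]):
        model[a].append(b)  # record which word follows which
    return model

def generate(model, prompt, length=6, seed=0):
    random.seed(seed)
    word, out = prompt, [prompt]
    for _ in range(length):
        if word not in model:
            break
        word = random.choice(model[word])  # sample the next word from training counts
        out.append(word)
    return " ".join(out)

wholesome = "work hard and be kind and be patient and be honest"
gruff = "work harder and stop whining and stop complaining and stop talking"

prompt = "work"  # identical user input to both models
print(generate(train(wholesome), prompt))
print(generate(train(gruff), prompt))
```

Scale the pile of matrices up by a few billion parameters and the principle holds: the Mr. Rogers AI and the Red Forman AI differ in their training pile, not in the user's question.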
 