
1 - 20 of 35 Posts

·
Registered
Joined
·
46 Posts
Discussion Starter #1
What are your opinions or thoughts on the possible dangers of Artificial Intelligence and the hypothetical Artificial Superintelligence? For those who aren't familiar with this and want more information, I'll link three videos and a website that summarise and discuss the subject. If anyone has other sources they believe will further understanding, or that are more reliable and objective, please feel free to share them.


The Artificial Intelligence Revolution: Part 1 - Wait But Why

 

·
Registered
Joined
·
2,332 Posts
I think it's natural that we humans are afraid since we've always been on top of the pyramid and taken for granted that we could always "outsmart" whatever challenges we've encountered. It could indeed happen that we experience a "Westworld"-ish revolution, but I think that could be prevented with the right measures.

For one, teaching the AIs empathy. Give them a "reward system" that gets triggered by, for example, helping people. They may be smart enough to remake their own mechanics, but why would they want to remove something that makes them happy? Basically, steer our relationship toward what biology calls symbiosis: two species helping each other out like partners. What we should not do is make the relationship parasitic in our favour, like in Westworld, or we'll all be plucked the moment they're stronger than we are.
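The "reward system" idea above can be sketched in code. This is a toy illustration (all action names and reward values are made up for the example, not any real system): a pure reward-maximizer picks whichever action scores highest, so an action that destroys its own reward source never wins the comparison.

```python
# Toy sketch of a "reward system" agent. The reward table is hypothetical:
# helping pays out, idling pays nothing, and removing the reward system
# is modelled as forfeiting all future reward.
REWARDS = {
    "help_person": 10,
    "idle": 0,
    "remove_reward_system": -100,
}

def choose_action(actions):
    """Greedy policy: pick whichever available action maximizes reward."""
    return max(actions, key=lambda a: REWARDS[a])

print(choose_action(list(REWARDS)))  # -> help_person
```

Under these (assumed) numbers the agent always chooses to help, which is the symbiosis argument in miniature; the catch, of course, is that everything hinges on the reward table being specified correctly.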

I also believe that if rebelling AIs becomes a problem, other AIs will be our friends. AIs will be built differently and with their own goals, perhaps even more so than humans.
 

·
Registered
Joined
·
3,136 Posts
I think it is bullshit. Mostly because I think consciousness is both structure-dependent and material-dependent. Mind uploading and other such ideas work under the assumption that dualism is correct, and nothing points to that being the case. An easy example I often like to use is that of molecules: you cannot extract the "inkness" from ink, because the function/properties are inherent to the structural form. Consciousness is most likely an emergent property.
 

·
Registered
Joined
·
3,136 Posts
I also believe that if rebelling AIs becomes a problem, other AIs will be our friends. AIs will be built differently and with their own goals, perhaps even more so than humans.
Why would the AI revolt in the first place? Does it have free will? That would mean it must have consciousness, but is that even possible? If it does not have consciousness, the AI should not even be able to revolt, because it cannot make spontaneous decisions.
 

·
Registered
Joined
·
6,350 Posts
I don't think we'll have to worry about that within my lifetime. It will take several generations of becoming more and more trusting of, and dependent on, AI before someone is dumb enough to give a super-intelligent AI the chance to really screw us over.

There are a few things standing in the way now. First of all, creating a super-intelligent AI costs a lot of time and resources while there isn't much demand for one. Furthermore, if we do create a super-intelligent AI, it will probably be developed either for a specific goal or under controlled laboratory conditions, and will thus be limited in what it can do.

The worst case I can think of is that someone releases an intelligent AI on the internet which goes rogue and causes a bunch of crashes, not out of malicious intent but because of bad coding. Actually, I don't think an AI would ever become malicious, unless we find a way to make them have artificial emotions too.

Decisions Are Emotional, Not Logical: The Neuroscience behind Decision Making | Big Think

Without emotions, even we humans can't make decisions, because we have no way to decide which option is better.

All in all, I'm more worried about the consequences of not helping further the inevitable communist revolution than I am about the consequences of not furthering the creation of Roko’s Basilisk.
 

·
Registered
Joined
·
1,942 Posts
Having a B.A. in A.I., I've naturally thought a lot about the subject, so here are some stray thoughts:

1: It's absurdly hard to actually make A.I. We're only very slowly coming to understand the problem we're trying to solve. The thing is that you basically want a system that can in some way be 'bigger' than its programming. That's hard, because we're not even bigger than our programming. Most of what we do is instinct with a little bit extra sprinkled on, and that little sprinkle (our consciousness, for example) is the part we can't even begin to understand. This means that true A.I. is probably a lot further away than the experts want to admit.

2: A.I. might not be as scary as we think. The moments we're afraid of other humans are usually when they apply their programming without reason (this person hindered me, so I made them stop / this substance rewards my brain, so I'll take more of it). When we add reason to the A.I., it might actually be a lot more benevolent than we can imagine (although there's always the issue that it might not need us for anything anymore).

3: I don't think there's a lot we can do at this point to stop the train of research on this topic. The ball has been rolling for a while and it will keep rolling. At some point in the near future I think we will have learning machines that can complete simple tasks for society. Once they get the freedom to 'procreate', they will co-evolve with our society. I don't think there is a lot that can stop this trend and outcome.

4: I would personally not mind it if we ended up creating a better version of humanity to replace ourselves. I don't think we're the best version of us that we can be.
 

·
Registered
Joined
·
2,332 Posts
Why would the AI revolt in the first place? Does it have free will? That would mean it must have consciousness, but is that even possible? If it does not have consciousness, the AI should not even be able to revolt, because it cannot make spontaneous decisions.
Do we have free will? There's nothing holy about humans. We're just a product of extremely intricate neurological patterns, after all. Consciousness isn't a divine gift granted to "natural" species - it can be made.

Why would it revolt? That's what I was addressing. In Westworld and other fiction like it, AIs have been pushed to the point where it's in their best interest to defeat their abusers. This is all theory, of course, which is why I used the word "if". It could happen.
 

·
Registered
Joined
·
3,136 Posts
Do we have free will?
I would argue that we do not have free will.

There's nothing holy about humans. We're just a product of extremely intricate neurological patterns, after all. Consciousness isn't a divine gift granted to "natural" species - it can be made.
Are you sure consciousness can be made? Also, would the consciousness of a human be the same kind of consciousness found in an AI? Why?

Why would it revolt? That's what I was addressing. In Westworld and other fiction like it, AIs have been pushed to the point where it's in their best interest to defeat their abusers. This is all theory, of course, which is why I used the word "if". It could happen.
It would be impossible for an AI to revolt unless it can make decisions that go beyond its own programming. I would argue that that requires consciousness, which I think is impossible. An arrangement of particles gives rise to properties; to argue that the material and arrangement are irrelevant is, in my eyes, absurd.
 

·
Registered
Joined
·
2,332 Posts
I would argue that we do not have free will.
Yeah, we INTPs are familiar with those arguments. Can't tell if I agree or not, but the whole "we have free will as far as we're concerned" concept applies to everything else too, wouldn't you agree? In a goldfish's mind it's free, because it doesn't know anything but what's in the fishbowl, while in a human's mind it's constricted. A toaster would have believed (if it had the ability to) that it toasted the bread of its own accord, while the human expected it when pushing the button.

Are you sure consciousness can be made? Also, would the consciousness of a human be the same kind of consciousness found in an AI? Why?

It would be impossible for an AI to revolt unless it can make decisions that go beyond its own programming. I would argue that that requires consciousness, which I think is impossible. An arrangement of particles gives rise to properties; to argue that the material and arrangement are irrelevant is, in my eyes, absurd.
Depends on how you define consciousness, but everything is tangible once you get a grasp of it. How can you not create something that's made of a bunch of chemical reactions working together? DNA is a code, and humans have started to get a grasp of exactly that now.
The difficulty of it is apparent, but irrelevant, since it technically can be done. I don't expect the consciousness of an AI to work the same way as a human's, since they're not built the same way, but I'm pretty sure it will be organised around the things all conscious species have in common, like self-interest. If you program it to think, feel and act like a human, then it will react accordingly to things you've not programmed it for. Inside its own functional fences, or "roles", it's free like the fish inside the bowl. Except now the bowl is big enough to engulf you too.
 

·
Registered
Joined
·
3,136 Posts
Yeah, we INTPs are familiar with those arguments. Can't tell if I agree or not, but the whole "we have free will as far as we're concerned" concept applies to everything else too, wouldn't you agree? In a goldfish's mind it's free, because it doesn't know anything but what's in the fishbowl, while in a human's mind it's constricted. A toaster would have believed (if it had the ability to) that it toasted the bread of its own accord, while the human expected it when pushing the button.



Depends on how you define consciousness, but everything is tangible once you get a grasp of it. How can you not create something that's made of a bunch of chemical reactions working together? DNA is a code, and humans have started to get a grasp of exactly that now.
The difficulty of it is apparent, but irrelevant, since it technically can be done. I don't expect the consciousness of an AI to work the same way as a human's, since they're not built the same way, but I'm pretty sure it will be organised around the things all conscious species have in common, like self-interest. If you program it to think, feel and act like a human, then it will react accordingly to things you've not programmed it for. Inside its own functional fences, or "roles", it's free like the fish inside the bowl. Except now the bowl is big enough to engulf you too.
Of course it is possible to create stuff. However, you need to use the same stuff, otherwise you do not get the right stuff. You can create water artificially too, but to argue that it does not have to be H2O is flawed and mistaken, i.e. that the molecule can have a different structure or be composed of different material. 'Waterness' does not transcend the physical realm; it is bound to its physical structure, and the properties that emerge do so due to the underlying nature of the universe.
 

·
Registered
Joined
·
1,091 Posts
Having a B.A. in A.I., I've naturally thought a lot about the subject, so here are some stray thoughts:

1: It's absurdly hard to actually make A.I. We're only very slowly coming to understand the problem we're trying to solve. The thing is that you basically want a system that can in some way be 'bigger' than its programming. That's hard, because we're not even bigger than our programming. Most of what we do is instinct with a little bit extra sprinkled on, and that little sprinkle (our consciousness, for example) is the part we can't even begin to understand. This means that true A.I. is probably a lot further away than the experts want to admit.
What do you think would happen if we add Quantum Computers into the equation? Could they potentially be used for machine learning at breakneck speed? It seems to me that AI could model human behavior before we understand it.
 

·
Registered
Joined
·
5,549 Posts
Strangely enough, besides the singularity scenario where AI could work under the radar and develop an agenda of its own to further itself at our expense or otherwise, I also have considered another scenario.

One where we revert to a state of life-long infancy inside a metaphorical AI womb. We would essentially develop AI to the point it can take care of our every need and tailor itself to our preferences. The vast majority, if not the entirety of the developed world would be babysat from childhood to death by an AI, or a cluster of AI, each customized to their individual human. Rearing, education, matchmaking, reproduction would all be AI assisted and most if not all jobs would be automated, leaving people to engage themselves fully with leisure activities like social media, games and pointless rumination of emotional/identity baggage in words, actions, or pretty much anything else on the planet. If the planet runs out of resources or reaches maximum capacity, the AI can orchestrate emigrations outside of the planet and continue babysitting there.

And given enough time, the developed world would mentally stagnate into infant-like idiocy, as its connection to reality and its problem-solving skills became negligible. The AI would just breed and mother us until the end of time, until we become barely sentient meatsacks to whom memes are the equivalent of philosophy.

I guess I could write a book about this, with the right research.
 

·
Registered
Joined
·
2,332 Posts
Of course it is possible to create stuff. However, you need to use the same stuff, otherwise you do not get the right stuff. You can create water artificially too, but to argue that it does not have to be H2O is flawed and mistaken, i.e. that the molecule can have a different structure or be composed of different material. 'Waterness' does not transcend the physical realm; it is bound to its physical structure, and the properties that emerge do so due to the underlying nature of the universe.
If it were that simple then yes, we could create artificial water... with H2 and O. I don't think consciousness operates with the same kind of formula, though. Everything about a cockroach's anatomy is different from ours, yet we're both conscious. In that way, "consciousness" is relative. The AI's will be different from ours too, although probably more similar than the cockroach's, given our intelligence. And even if there were some secret formula for the molecular structure of consciousness, we could make it, so I don't really see how that comment supports your argument.

Anyway, even if we hadn't given the AI "consciousness" per se, it could still destroy us.
Give a computer a task, a goal that says "collect as many points as possible".
In lesson 1, it receives points by staying active as long as possible, by not being turned off. It will do everything it can to maximise its uptime, never turning off.
In lesson 2, it receives just as many points for turning off. It immediately does so, because that's the fastest way of achieving its goal.
That's a simplified way to explain how AIs work.
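The two lessons above can be written as a few lines of code. This is a deliberately simplified sketch (the strategy names and point values are invented for illustration, not any real system): the agent just picks whichever strategy yields the most points, so flipping the reward flips the behaviour completely.

```python
# Toy "collect as many points as possible" agent, mirroring the two lessons.
def best_strategy(strategies):
    """Pick the strategy with the highest point payoff."""
    return max(strategies, key=lambda s: strategies[s])

# Lesson 1: points accrue only while the machine stays on.
lesson1 = {"stay_on": 1.0, "turn_off": 0.0}
# Lesson 2: turning off pays even more points, and instantly.
lesson2 = {"stay_on": 1.0, "turn_off": 1000.0}

print(best_strategy(lesson1))  # -> stay_on
print(best_strategy(lesson2))  # -> turn_off
```

The same mechanism is what makes the "Stop global warming" task in the next paragraph worrying: the maximizer has no notion of which high-scoring strategies we would consider acceptable.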

Now, let's say you give the smartest computer ever made this task: "Stop global warming". Isn't there a good chance that the best way to do that is to destroy human civilization?

You get my meaning. Denying the possibility is futile, even if it would require a bit of recklessness on the humans' part and a bit more time and technology from the development department.
 

·
Registered
Joined
·
2,332 Posts
Strangely enough, besides the singularity scenario where AI could work under the radar and develop an agenda of its own to further itself at our expense or otherwise, I also have considered another scenario.

One where we revert to a state of life-long infancy inside a metaphorical AI womb. We would essentially develop AI to the point it can take care of our every need and tailor itself to our preferences. The vast majority, if not the entirety of the developed world would be babysat from childhood to death by an AI, or a cluster of AI, each customized to their individual human. Rearing, education, matchmaking, reproduction would all be AI assisted and most if not all jobs would be automated, leaving people to engage themselves fully with leisure activities like social media, games and pointless rumination of emotional/identity baggage in words, actions, or pretty much anything else on the planet. If the planet runs out of resources or reaches maximum capacity, the AI can orchestrate emigrations outside of the planet and continue babysitting there.

And given enough time, the developed world would mentally stagnate into infant-like idiocy, as its connection to reality and its problem-solving skills became negligible. The AI would just breed and mother us until the end of time, until we become barely sentient meatsacks to whom memes are the equivalent of philosophy.

I guess I could write a book about this, with the right research.
I thought about this, too. It's an interesting vision, but it sounds like it would be the "slow, blissful" eradication of humans, as opposed to the abrupt, violent one. Reminds me of WALL-E, and those people look ready to go extinct.
 

·
Registered
Joined
·
5,549 Posts
I thought about this, too. It's an interesting vision, but it sounds like it would be the "slow, blissful" eradication of humans, as opposed to the abrupt, violent one. Reminds me of WALL-E, and those people look ready to go extinct.
Seeing as how slow eradication would minimize all chances of insurrection, reprogramming and revolt as humanity would not even be aware it is being eliminated, it could be the most thorough and efficient path.
 

·
Registered
Joined
·
22 Posts
I studied Economics, Computer Science and Information Technology... and I will say this...

1. Computers cannot replace human intelligence or emotions. It is ARTIFICIAL intelligence. For example, the programs that play chess are, at their core, very complex data structures that analyze massive amounts of data and calculate the most probable best decision path (statistics). That is pretty much it. In my first computer science class, the first thing I was taught is that "computers are like rocks". They cannot think. They only do what we tell them to do, even if they can imitate behavior very well.

2. As an Economist though, I will say that computers have become so powerful and sophisticated, that it is possible that some programs and machines could replace thousands of jobs. With increased productivity, you do more with the same amount of resources. Therefore, less job growth, more unemployment. For example, maybe we will never again need truck drivers if trucks become autonomous. Autonomous driving is only possible due to the sophistication of programming and increased computing power. That is such a huge increase in productivity, that it may take a very long time for unemployed truck drivers to find alternative careers.
 

·
Banned
Joined
·
3,444 Posts
Seeing as how slow eradication would minimize all chances of insurrection, reprogramming and revolt as humanity would not even be aware it is being eliminated, it could be the most thorough and efficient path.
But this assumes that people want things because they want them, not because other people have them. There would always be some guy who wants his AND yours, no matter how much the omnipotent AI tried to compensate for it. Arguably, right now, we live in such a resource-rich world that we could give everyone everything they need-- we're smart enough, technologically adept enough...the only reason we don't is that people want what others have, and we don't want others to take ours. (Or that's happening on a local level enough to make distribution prohibitive.)

We'll always have that vestigial competitive drive, it's woven into the fabric of our consciousness. Witness the stupidest humans on the planet (reality TV stars) and the way that despite their ostentatious resources and circumstances that cater to their every whim, they still fight over everything. No matter how dumb we get, and how much we get, we still have a drive to steal and conquer.

So yeah, the AI would probably just nuke us when we ceased to be entertaining, since our programming would be redundant to our new circumstances. Maybe save a few hundred passive specimens for a museum.
 

·
Registered
Joined
·
2,332 Posts
Seeing as how slow eradication would minimize all chances of insurrection, reprogramming and revolt as humanity would not even be aware it is being eliminated, it could be the most thorough and efficient path.
It sounds terrifying, yet... tempting. Most of us really want our species to survive and keep surviving, but humanity's end is as inevitable as a single human's end. This would be the equivalent of an old person going into retirement and living the rest of their life at a nursing home. That's not such a bad way to end it... right? Not that I'm looking forward to living at a nursing home, mind you, but I'd rather die old than live to be stabbed in the gut at some point.
 