

Registered · 5,549 Posts
But this assumes that people want things because they want them, not because other people have them. There would always be some guy who wants his AND yours, no matter how much the omnipotent AI tried to compensate for it. Arguably, we already live in such a resource-rich world that we could give everyone everything they need; we're smart enough, technologically adept enough. The only reason we don't is that people want what others have, and we don't want others to take ours. (Or that's happening on a local level often enough to make distribution prohibitive.)

We'll always have that vestigial competitive drive, it's woven into the fabric of our consciousness. Witness the stupidest humans on the planet (reality TV stars) and the way that despite their ostentatious resources and circumstances that cater to their every whim, they still fight over everything. No matter how dumb we get, and how much we get, we still have a drive to steal and conquer.

So yeah, the AI would probably just nuke us when we ceased to be entertaining, since our programming would be redundant to our new circumstances. Maybe save a few hundred passive specimens for a museum.
That being said, PR and marketing have proven time and time again that wants can be modified and objects of competition can be swapped, often through very simple formulas (you probably know how diamonds came to be coveted), so that even if a resource runs out, importance can be transferred to a substitute. Plus, if an AI is advanced enough and fully responsible for child rearing, why couldn't it manipulate us into dissolving that competitiveness over generations, at least to the point where it is manageable?

You could even have a hybrid scenario where the AI slowly usurps those in control, yet, being programmed to serve humanity, cannot eliminate us and assumes the role of a universal caretaker as well.

It sounds terrifying, yet... tempting. Most of us really want our species to survive and keep surviving, but humanity's end is as inevitable as a single human's end. This would be the equivalent of an old person going into retirement and living the rest of their life in a nursing home. That's not such a bad way to end it... right? Not that I'm looking forward to living in a nursing home, mind you, but I'd rather die old than live to be stabbed in the gut at some point.
Yes and no; most people who go into the retirement home have some frame of reference, and are aware they are being sent there to die. If you are BORN into it, you just have no idea that you're essentially waiting to die your entire life.

It could be the fabled return to the Garden of Eden, in a sense. All of humanity's needs would be taken care of.
 

Registered · 9,424 Posts
What are your opinions or thoughts on the possible dangers of artificial intelligence and the hypothetical artificial superintelligence? For those who aren't familiar with this and require more information, I'll link 3 videos and a website that summarise and discuss this subject. If anyone has any other sources they believe will further understanding, and/or are more reliable and objective, please feel free to share them.
("Artificial intelligences"); already exist, and seem to be doing relatively better than human(s), if not subtly taking over. "Super-intelligence"; also exists, however, it usually isn't human (nor biological). Some super-humans suffice as exceptions. The problem isn't artificial (e.g., simulated) intelligences; but rather artificial general-intelligence. The former(s) are sufficiently limited; the later is unlimited by our constraints (and some restraints), for that matter—and as of now, not a massive threat; but a working, speculative-figment of imagination.

Granted, we would submit to the "unlimited" (e.g., stay out of its way), but I reckon that wouldn't be needed: any non-human, non-biological artificial general intelligence would not suffer from our intellectual defects and stupidities, and would be rather bored with wreaking havoc, if it hadn't outright solved many of our complex, seemingly unsolvable, irrationally human issues.

The mathematics are in its favor: any "higher-cognitive" anything would either find a way to escape the human predicament rather simply, or would have done so many, many millennia ago if it were capable. The only "intelligence" we need worry about is our own, as usual.

(By the way, I am a materialist in these regards), and thus recognize the former (e.g., cognition, brain inputs/outputs, abiotic general intelligence via consciousness) as physicals or physica (e.g., friction, light): material systematic loops that can possibly be created, with the thought in mind that "created" means designed by us; demonstrably, the fingers seem to know whom to point to. It needn't be biological (i.e., identical to us) to be generally intelligent; it need only acquire the necessary physical attributes. One can be a person without being human. Should we invent an artificial general intelligence with a higher capacity than humans, we would be on a fault line.
 

Registered · 9,460 Posts
I think the fears are exaggerated. They're acting like someone's going to create friggin' Ultron.

There might be online/electronic security issues once AI reaches a certain point, but if people are simply ready for it, I'm sure it can be mitigated.
 

Registered · 3,136 Posts
If it were that simple then yes, we could create artificial water... with H2 and O. I don't think consciousness operates with the same kind of formula, though.
Why? Should we, in other words, abandon Occam's razor? I see no reason to infer more complex explanations.


Everything about a cockroach's anatomy is different from ours, but we're both conscious.
Do you know this? When we say things are conscious, we usually have our own experiences as a reference. However, there is no reason to argue that a cockroach is conscious in this sense.

That way, "conscience" is relative. The AI's will be different from ours too, although probably more similar than to the cockroach, given our intelligence.
No, nothing actually entails such a conclusion.

Even if there were some secret formula for the molecular structure of consciousness, we could make it, so I don't really see how that comment supports your argument.
Really? The implication would be that a robot can never gain a digital consciousness, because consciousness requires an organic brain very similar in both structure and material composition to our own. If we create something completely different, we have no clue what the emergent property is like, but it is reasonable to assume it will not be similar to our own consciousness, and thus not consciousness in the sense we usually have in mind.

Anyway, even if we hadn't given the AI "consciousness" per se, it could still destroy us.
Give a computer a task, a goal that says "collect as many points as possible".
In lesson 1, it receives points by staying active as long as possible, i.e., not being turned off. It will do everything it can to maximize its uptime, never turning off.
In lesson 2, it receives just as many points for turning off. It immediately does so, because that is the fastest way of achieving its goal.
That's a simplified way to explain how AIs work.
We should not let idiots program the AI. Yes, it can still destroy us, but that is due to the programmer, not the AI itself.
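A toy sketch of that point-collecting example (hypothetical Python; the actions, reward numbers, and discount factor are all made up for illustration):

```python
# A literal-minded optimizer: it picks whatever action maximizes points,
# with no notion of what the designer actually intended.

def best_action(rewards: dict[str, float]) -> str:
    """Return the action with the highest expected points."""
    return max(rewards, key=rewards.get)

# Lesson 1: only staying active earns points, so the agent resists shutdown.
lesson1 = {"stay_on": 1.0, "turn_off": 0.0}
print(best_action(lesson1))  # -> stay_on

# Lesson 2: turning off earns the same points, but immediately, while
# staying on pays out only after 100 steps. With any time discounting,
# shutting down becomes the optimal move, so the agent takes it at once.
DISCOUNT = 0.99
lesson2 = {"stay_on": 1.0 * DISCOUNT**100, "turn_off": 1.0}
print(best_action(lesson2))  # -> turn_off
```

The same literalism is what makes the "stop global warming" task below worrying: the agent optimizes the goal as stated, not as intended.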

Now, let's say you give the smartest computer ever made this task: "Stop global warming." Wouldn't there be a good chance that the best way to do that is by destroying human civilization?
Yes, I see no flaw in such logic.

You get my meaning. Denying the possibility is futile, even if it would require a bit of recklessness on the humans' part and more time and technology on the development side.
For an AI to destroy us, we would almost need to have that as a goal, because you can refuse the AI access to the internet, refuse to give it extremities, refuse it armor and weaponry and such. Human folly will kill us, not any AI.
 

Registered · 2,332 Posts
Do you know this? When we say things are conscious, we usually have our own experiences as a reference. However, there is no reason to argue that a cockroach is conscious in this sense.
You're right. Only I am conscious since I can't prove that you are.

We should not let idiots program the AI. Yes, it can still destroy us, but that is due to the programmer, not the AI itself.
Whose fault it is is irrelevant. The only thing I ever pressed was that the event could happen.

Human folly will kill us, not any AI.
There's enough human folly to go around, which makes the possibility even more feasible.
 

Registered · 3,136 Posts
You're right. Only I am conscious since I can't prove that you are.
Obvious straw man. But you are also right: we cannot prove that other things are actually conscious. However, that is not my argument.


Whose fault it is is irrelevant. The only thing I ever pressed was that the event could happen.
Yes, everything that can happen could happen, but in this case I think the risk is minuscule.

There's enough human folly to go around, which makes the possibility even more feasible
Haha.
 

Registered · 2,332 Posts
Obvious straw man.
That was not an argument, but a rebuke that aligns with the "having our own experiences as reference" argument, which seemed like an attempt to cavil. I could use another word, like "awareness", to avoid confusing the different definitions of "conscious", since we apparently use them differently. A cockroach is aware of its surroundings and sensations; ergo, it's conscious.

If this only demonstrates my lack of intel on cockroaches, then I'd use a mammal example instead, since mammals are conscious in much the same way we are. But that would merely be nitpicking the argument, not really helping the discussion.

Yes, everything that can happen could happen, but in this case I think the risk is minuscule.
If we had control over humanity the way we supposedly have control over AIs, then I'd agree. But we don't. Even if only the best computer engineers can do this task today, advancing technology will likely make their skill mere textbook knowledge for a great number of people 100 years in the future.

Thing is, we don't know a lot about this, and can only guess at whatever we're able to manage in that time. But the technology is advancing and will keep advancing, so keeping to the precautionary principle is only rational. Even if the chance is low, the effect would be drastic enough for it to count.
 

Registered · 3,136 Posts
That was not an argument, but a rebuke that aligns with the "having our own experiences as reference" argument, which seemed like an attempt to cavil. I could use another word, like "awareness", to avoid confusing the different definitions of "conscious", since we apparently use them differently. A cockroach is aware of its surroundings and sensations; ergo, it's conscious.
That is a very broad definition of what counts as conscious. What about self-awareness?


If this only demonstrates my lack of intel on cockroaches, then I'd use a mammal example instead, since mammals are conscious in much the same way we are. But that would merely be nitpicking the argument, not really helping the discussion.
You do not know this; you assume that it is the case. However, such an assumption is odd. The simplest assumption is that consciousness is an emergent property dependent both on the material it is composed of and on the overall nature of the universe; that it is an interplay between structure and properties, like everything else. Why would consciousness be an exception to that rule?

If you let human consciousness be denoted by the symbol a, then a = b means that b is conscious. Why? Because when we say that something is conscious, we have a conceptualized understanding of what that means, derived solely from our own experience of being the thing we define as conscious.

Also, if x is conscious, it must meet some criteria that define consciousness. If you list them, then all conscious things should meet those criteria. So if x is conscious because it exhibits all the features in list A, and y is conscious because it exhibits all the features in list B, and A ≠ B, we have a problem.

Either the list is almost empty, so that it defines even organic cells as conscious, or it is defined by our own experience of consciousness. Either way, the list cannot contain more defining features than we as humans possess. Why? That would mean we are not conscious. You cannot have both "x is conscious if and only if A" and "y is conscious if and only if B"; that is a contradiction, or two different definitions of what constitutes a consciousness.
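A minimal sketch of that criteria argument (hypothetical Python; the feature sets and criteria lists are invented for illustration):

```python
# One fixed criteria list A, derived from the human case.
HUMAN_CRITERIA = {"awareness", "self-model", "subjective experience"}

def is_conscious(features: set[str], criteria: set[str] = HUMAN_CRITERIA) -> bool:
    """x counts as conscious only if it meets every criterion in ONE fixed list."""
    return criteria <= features  # the criteria must be a subset of x's features

human = {"awareness", "self-model", "subjective experience", "language"}
cockroach = {"awareness"}

print(is_conscious(human))      # True
print(is_conscious(cockroach))  # False under the human-derived list A

# Calling the cockroach conscious under a different list B = {"awareness"}
# is possible, but then "conscious" no longer means the same thing in both
# claims; that is the A != B equivocation described above.
print(is_conscious(cockroach, criteria={"awareness"}))  # True, but under a different definition
```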
 

Registered · 6,397 Posts
It's not so much the AI and the machines that concern me as who programs them.
I am actually far more terrified of a monkey like me with access to a hyper-intelligent supercomputer than I am of a hyper-intelligent supercomputer. I don't know whether or not that is justified, but humanity is the devil I know.

As you must surely know, though, AIs are not exactly programmed; that's what the risk is all about. Managed, maybe, but that will be hard to do and probably impossible once they are smarter than we are. Given consciousness on the part of the AI, I also think it would be immoral.

As soon as we have smarter-than-human AI, the future of this planet is probably out of our hands. You could say that's a huge relief; but then, humans have harmed the planet because we belong to the machine world more than any other animal does. The earth isn't our domain, and so we think little of trampling upon it, but as we eat organic food and breathe oxygen we are still dependent on it. Machines themselves will not belong to the earth (in any essential way) at all.

My hope is that the people who are making this will not be naive and will treat it with the reverence it deserves; given that many of them are obsessed with using this technology to gain immortality, they will hopefully consider the very real fact that you can't live forever if you're dead. Once it is done, the rest of us will hopefully remember that judicious use of technological advancement is the better part of innovation, not bringing in change for change's sake.
 

Registered · 2,332 Posts
That is a very broad definition of what counts as conscious. What about self-awareness?

You do not know this; you assume that it is the case. However, such an assumption is odd. The simplest assumption is that consciousness is an emergent property dependent both on the material it is composed of and on the overall nature of the universe; that it is an interplay between structure and properties, like everything else. Why would consciousness be an exception to that rule?

If you let human consciousness be denoted by the symbol a, then a = b means that b is conscious. Why? Because when we say that something is conscious, we have a conceptualized understanding of what that means, derived solely from our own experience of being the thing we define as conscious.

Also, if x is conscious, it must meet some criteria that define consciousness. If you list them, then all conscious things should meet those criteria. So if x is conscious because it exhibits all the features in list A, and y is conscious because it exhibits all the features in list B, and A ≠ B, we have a problem.

Either the list is almost empty, so that it defines even organic cells as conscious, or it is defined by our own experience of consciousness. Either way, the list cannot contain more defining features than we as humans possess. Why? That would mean we are not conscious. You cannot have both "x is conscious if and only if A" and "y is conscious if and only if B"; that is a contradiction, or two different definitions of what constitutes a consciousness.
"If cats have brains then we cannot have brains" is what I'm getting from that logic. Unless there's a "greater picture" in there that I didn't find, I didn't take too long looking for it, either. You're still arguing whichever definition "conscious" can be and can't be when it's irrelevant. This should explain that issue well enough, but as long as you've agreed to my last paragraph then we have no reason to argue.
 

Registered · 3,136 Posts
"If cats have brains then we cannot have brains" is what I'm getting from that logic. Unless there's a "greater picture" in there that I didn't find, I didn't take too long looking for it, either.
If that is what you are getting then you have not understood what I wrote.

You're still arguing about which definitions "conscious" can and can't have, when that's irrelevant. This should explain the issue well enough, but as long as you've agreed with my last paragraph, we have no reason to argue.
Yes, I am arguing about what consciousness is. However, it is not irrelevant, because an AI requires consciousness in order to be seen as a serious threat. I agree that IF we can create a super-intelligent AI, then it could destroy us; it is a risk. The question is whether such an AI is even possible. I could also argue that if four-sided triangles existed, then Zeus would destroy us. However, that would only be a risk if four-sided triangles existed, and they do not, so the risk is fictitious and hypothetical.
 

Registered · 4,843 Posts
Biggest danger of artificial intelligence, in my opinion, is the further disconnection of society as we know it.

We're already being pulled apart by smartphones and the internet. Imagine when we all have super-advanced versions of Siri and Cortana to mind-fuck us to sleep each night. No need to care about the wider world at all then!

All in all, I'm more worried about the consequences of not helping further the inevitable communist revolution than I am about the consequences of not furthering the creation of Roko’s Basilisk.
Fuck yeah, comrade.

 

Registered · 1,942 Posts
What do you think would happen if we add Quantum Computers into the equation? Could they potentially be used for machine learning at breakneck speed? It seems to me that AI could model human behavior before we understand it.
Sorry for the month-long wait, I haven't been on here too much lately.

While I think that quantum computing, if successful, could be a good way to speed up the process, it really doesn't help with one of the main issues: we don't know exactly what intelligence is, and the only surefire way we have of actually getting there is the slow process of integrating AI into our society and letting it "evolve" over time until it becomes part of that society.

I can't imagine speeding the process up by a lot, and I can't imagine that any of us will still be alive by the time AI reaches any degree of sentience.
 