Personality Cafe banner

1 - 20 of 59 Posts

Spam-I-am · Joined · 13,649 Posts
who's to say we aren't the A.I.?
according to the medieval theologians only god is intelligent
making us A.I.
until humans create A.I. one can only hypothesize
no one can empirically prove what human consciousness is at this point in time
 

Registered · Joined · 1,864 Posts
Artificial stupidity can invent god, as a concept, in its stupid mind.

How advanced it would be : retarded enough.
I know you are half being funny, but they wouldn't be retarded. The question is about the level of conceptualisation an AI would need in order to be able to claim a higher power. If you don't believe that level would be incredibly advanced on the machine's part, then compare its 'mind' first to an animal's, which cannot create combinations of concepts, and then to a human's. For the former, there is evidence to suggest only the ability to form scarce hierarchical abstractions, which makes this ability almost unique to human beings. If the machine actually has conceptual abilities similar to ours (the ability to reason), then we would likely have to go as far as considering its rights as a species no different from our own.

This is why, when talking about how far we can develop AIs, it may be interesting to ask this opening question: Will they create their own god? It translates as: Will they think far enough to arrive at one conclusion or another? This is about the higher intelligence found in humans. We are able to ask questions and rationalise, regardless of how foolish our fellow humans deem the answers. An animal without this ability lives on instinct, which is regarded as a lower level of intelligence.
 

Registered · Joined · 6,910 Posts
I know you are half being funny.
I don't aim to be funny.

You ask how far we can develop AIs : as far as the intelligence of the developer, or lack thereof. Intelligence is an artifice. No one can build a machine smarter than oneself.

You ask how much intelligence is required to conceptualize god : it depends on how well it is conceptualized.

Will they create their own god and believe in it? Not if I'm their creator.
 

Heretic · ESI 9w8 5w4 2w1 · Joined · 10,672 Posts
We don't have the information to answer that question.
I find it improbable that a God can be created.
It might create something that has absolute power over our lives.
God is in many ways an entity that has absolute power over a scope.
Yet since there is possibly always a scope outside the scope,
it is very hard to secure ultimate control over all possible scopes
and get the control that many attribute to the Biblical God.
 

Registered · Joined · 1,864 Posts
Will they create their own god and believe in it? Not if I'm their creator.
But if you create the machine and isolate it from yourself, how do you know what it will conceptualise?
 

MOTM Sept 2014 · Joined · 8,492 Posts
No, because that makes no sense under any useful or meaningful definition of the word "God".
 

Banned · Joined · 2,042 Posts
What the fuck is this question supposed to mean???

I don't aim to be funny.

You ask how far we can develop AIs : as far as the intelligence of the developer, or lack thereof. Intelligence is an artifice. No one can build a machine smarter than oneself.

You ask how much intelligence is required to conceptualize god : it depends on how well it is conceptualized.

Will they create their own god and believe in it? Not if I'm their creator.
AIs only lack the general intelligence we humans have: the ability to adapt to any task and excel at it. AIs can easily surpass us in the tasks they are specialized for, thanks to their higher computational power. Now imagine an AI that has the general intelligence we humans have and also a higher computational power. Clearly, this AI would far exceed us.

Creation is not limited by the creator unless the limits are imposed. We've already created things that are far better at some tasks than we are. Who's to say there won't be an AI that generally exceeds us in every way?
 

Registered · Joined · 6,910 Posts
@Stawker Yes, yes, the "I'll make it as intelligent as I can and then add computational power" thing. Well, then it will think as badly as you, but faster.

Because all that you can give the machine is your understanding of intelligence, not a higher understanding; thus it will start, at best, at your level. Then you will make it think faster? Okay, but will it fix itself faster? Not if it is built on a model of mentality that is self-defeating. It might just fall into madness faster, develop your bad thinking habits faster, that's all. It will believe in holy computational power as you do and simply exceed you at getting itself more of that power.
 

Banned · Joined · 2,042 Posts
@Stawker Yes, yes, the "I'll make it as intelligent as I can and then add computational power" thing. Well, then it will think as badly as you, but faster.

Because all that you can give the machine is your understanding of intelligence, not a higher understanding; thus it will start, at best, at your level. Then you will make it think faster? Okay, but will it fix itself faster? Not if it is built on a model of mentality that is self-defeating. It might just fall into madness faster, develop your bad thinking habits faster, that's all. It will believe in holy computational power as you do and simply exceed you at getting itself more of that power.
That's incoherent. If I'm making an AI which is generally intelligent, it presupposes that the nature of 'human intelligence' can be understood without any qualia limits. Basically, this AI would just be a human, except its brain would operate at frequencies far beyond our 100-200 Hz. Basically, a 500-IQ human being, or something of the sort.

Now back to the madness point. If we assume that perfect epistemic and instrumental rationality cannot be achieved due to a lack of computational power (among other lacking resources -- and this is true, otherwise high-IQ people wouldn't be the ones at the frontier), then this AI has an irrefutable advantage over us. Highly intelligent people have an easier time being rational, and it is rationality which avoids madness, bad habits, or whatever self-defeatism you mentioned. It would be quite immune to self-inflicted mental illnesses. Overall, I fail to see how this AI would share our weaknesses, given that it would be much better at recovering the signal from the noise than we are. Unless, of course, you put this AI in isolation, where it would go mad just as a human in isolation would. As long as it is exposed to the world, it can discern the right ways of thinking from the wrong.
 

Registered · Joined · 266 Posts
Imagine giving a genius 500 years to think something up, but compressed into a single second, every second (and that would be just one such processor). Obviously, programming an AI to even approach human intelligence would be a massive undertaking. If by "inventing God" you mean "being able to create universes out of thin air", then yeah, I can see it. Sufficiently advanced technology being indistinguishable from magic, and all that.

Not sure I buy "madness", as machines have no feelings.
 

Registered · Joined · 224 Posts
No. An artificial neural network, if that's what you mean by AI, works in a similar way to human intuition (and only intuition): it finds correlations. It can't invent anything; it's not a person. Just treat it as an evolved calculator.
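[Editor's aside] The "evolved calculator" point can be made concrete with a toy example. The sketch below is hypothetical illustration code, not from any poster; the names (`step`, `train_perceptron`) are invented. It trains a single artificial neuron on the AND truth table: all it does is fit a correlation between inputs and labels.

```python
# A single artificial neuron (perceptron) learning the AND function.
# It finds the input/output correlation in the data -- nothing more.

def step(x):
    """Threshold activation: fire (1) if the weighted sum is non-negative."""
    return 1 if x >= 0 else 0

def train_perceptron(samples, epochs=20, lr=0.1):
    """Classic perceptron rule: nudge weights toward correlated inputs."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for (x1, x2), target in samples:
            pred = step(w[0] * x1 + w[1] * x2 + b)
            err = target - pred          # 0 when correct, +/-1 when wrong
            w[0] += lr * err * x1
            w[1] += lr * err * x2
            b += lr * err
    return w, b

# AND truth table: output is 1 only when both inputs are 1.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
w, b = train_perceptron(data)
preds = [step(w[0] * x1 + w[1] * x2 + b) for (x1, x2), _ in data]
print(preds)  # -> [0, 0, 0, 1], matching the AND column
```

The trained weights reproduce the pattern in the data, but the network has no concept of "and" -- which is the sense in which it resembles bare intuition.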
 

Registered · Joined · 6,910 Posts
That's incoherent. If I'm making an AI which is generally intelligent, it presupposes that the nature of 'human intelligence' can be understood without any qualia limits. Basically, this AI would just be a human, except its brain would operate at frequencies far beyond our 100-200 Hz. Basically, a 500-IQ human being, or something of the sort.
Hence I didn't say your machine would be intelligent. I think it wouldn't (sorry).

Of course, it would answer questions the designers could answer themselves, only faster -- but what about those they couldn't? Not with the same method of thinking. IQ tests are designed to detect retardation, or precocity in children, but not genius, because they don't test the method.


Now back to the madness point. If we assume that perfect Epistemic and Instrumental Rationality cannot be achieved due to a lack of computational power (among lack of resources -- and this is true, otherwise high IQ people won't be the ones at the frontier), then this AI has irrefutable advantage over us. Highly intelligent people have an easier time being more rational, and it's rationality which avoids madness, bad habits, or whatever self-defeatism you mentioned. It will be quite immune to self-inflicted mental illnesses. Overall, I fail to see how this AI will have the same weaknesses granted that it'll be much better at recovering the 'signal from the noise' than we are. Unless of course you put this AI in isolation where it'll go mad, just as a human in isolation would. As long as it is exposed to the world, it can discern the wrong from the right way.
All I assume is that a perfect epistemology can't be achieved by an imperfect epistemology. It's a matter of method: either it is self-fulfilling or self-defeating. Speaking of method, highly intelligent people are notably audacious at the intuitive stage and cautious at the judging stage. Question: can we increase the performance of the judging stage just by increasing its processing speed ad libitum?

I don't think so, and here is the downside: we develop our minds by perceiving the most obvious obstacles first, generally the most short-term threats, which are the easiest to analyze as such and thus the easiest to intuit. When one gets used to being rewarded for quick analyses, systematized rewards become a systematic priority. That method is then applied to the investigation and classification of issues as well, which limits the intuition of less obvious obstacles. To remove that limit, one's imagination needs to be a perpetual challenge for one's own analytical skills.

Life is not an IQ test from Mensa. There is almost no limit to the difficulty of the items, and the first task of intelligence is to figure out which one hides behind all the others.
 

Registered · ENTJ; 8w7; Persian C · Joined · 9,447 Posts
Can an artificial intelligence invent god? How advanced should it be to do so? What do you think?
Only if we "deem it" so. I see no reason to. :numbness: Whatever "AI" invents and elevates to a 'godlike' status surely would not be human - thus "deeming it god" is simply a fallacious anthropomorphic dream.
 

Registered · Joined · 240 Posts
Could AI be god?
Is god AI?
What does it mean to be god?
People have invented god in the past...and they weren't any brighter than you or I.
 

Banned · Joined · 2,042 Posts
Hence I didn't say your machine would be intelligent. I think it wouldn't (sorry).

Of course, it would answer questions the designers could answer themselves, only faster -- but what about those they couldn't? Not with the same method of thinking. IQ tests are designed to detect retardation, or precocity in children, but not genius, because they don't test the method.
Then why is it always a highly intelligent (or high-IQ) scientist or mathematician who can answer questions no one else can?
If high computational power only meant solving known questions faster, there would be no innovation. But there is.

All I assume is that a perfect epistemology can't be achieved by an imperfect epistemology. It's a matter of method: either it is self-fulfilling or self-defeating. Speaking of method, highly intelligent people are notably audacious at the intuitive stage and cautious at the judging stage. Question: can we increase the performance of the judging stage just by increasing its processing speed ad libitum?

I don't think so, and here is the downside: we develop our minds by perceiving the most obvious obstacles first, generally the most short-term threats, which are the easiest to analyze as such and thus the easiest to intuit. When one gets used to being rewarded for quick analyses, systematized rewards become a systematic priority. That method is then applied to the investigation and classification of issues as well, which limits the intuition of less obvious obstacles. To remove that limit, one's imagination needs to be a perpetual challenge for one's own analytical skills.

Life is not an IQ test from Mensa. There is almost no limit to the difficulty of the items, and the first task of intelligence is to figure out which one hides behind all the others.
Addressing the bold part: we call that rationality. We achieve rationality by finding biases in our thinking. We find biases in our thinking by comparing our beliefs with reality, or our methods with their expected efficacy. In other words, even discovering our biases is a computational matter. Why wouldn't a human with more computational power be able to eliminate biases faster? Unless you're proceeding on the tacit assumption that rationality simply wouldn't be his priority.

I posit a hypothetical problem where we need to come up with a hypothesis that explains an observation. Under the scientific method, this is and has always been a process of elimination. The 'hidden truth' behind all appearances is found only by hypothesizing again and again and testing those hypotheses. Even in this process of finding hidden truth, a human with higher computational power (HCP from now on) will excel, because it is a computational matter.
 