> Artificial stupidity can invent god, as a concept, in its stupid mind. How advanced it would be: retarded enough.

I know you are half being funny, but they wouldn't be retarded. The question is about the level of conceptualisation an AI would need in order to claim a higher power. If you don't believe that level would have to be incredibly advanced on the machine's part, compare its 'mind' first to an animal's, which cannot create combinations of concepts, and then to a human's. For animals there is evidence of only scarce hierarchical abstraction, which makes this ability almost unique to human beings. If the machine actually had conceptual abilities similar to ours (the ability to reason), then we would likely have to go as far as considering its rights as a species no different from the human species.
> I know you are half being funny.

I don't aim to be funny.
> I don't aim to be funny.

AIs lack only the general intelligence we humans have: the ability to adapt to any task and excel at it. In the tasks they are specialized for, AIs can easily surpass us, thanks to their greater computational power. Now imagine an AI that has the general intelligence we humans have and also greater computational power. Clearly, such an AI would far exceed us.
You ask how far we can develop AIs: as far as the intelligence of the developer allows, or the lack thereof. Intelligence is an artifice. No one can build a machine smarter than oneself.
You ask how much intelligence is required to conceptualize god: it depends on how well it is conceptualized.
Will they create their own god and believe in it? Not if I'm their creator.
> That's incoherent. If I'm making an AI which is generally intelligent, it presupposes that the nature of 'human intelligence' can be understood without any qualia limits. Basically, this AI would just be a human except his brain will operate at a frequency of more than 100-200 Hz. Basically, a 500 IQ human being or something of the sort.

@Stawker Yes, yes, the "I'll make it as intelligent as I can and then add computational power" thing. Well, then it will think as badly as you, but faster.
Because all you can give the machine is your understanding of intelligence, not a higher one, so it will start, at best, at your level. Then you will make it think faster? Okay, but will it fix itself faster? Not if it is built on a self-defeating model of mentality. It might just descend into madness faster, develop your bad thinking habits faster, that's all. It will believe in the holy computational power, as you do, and simply exceed you at acquiring more of it.
> That's incoherent. If I'm making an AI which is generally intelligent, it presupposes that the nature of 'human intelligence' can be understood without any qualia limits. Basically, this AI would just be a human except his brain will operate at a frequency of more than 100-200 Hz. Basically, a 500 IQ human being or something of the sort.

Hence why I didn't say your machine would be intelligent. I think it wouldn't (sorry).
> All I assume is that the perfect epistemology can't be achieved by the imperfect epistemology. It's a matter of method. Either it is self-fulfilling or self-defeating. Speaking of method, highly intelligent people are notably audacious at the intuitive stage and cautious at the judging stage. Question: Can we just increase the performance of the judging stage by increasing its processing speed ad lib?

Now back to the madness point. If we assume that perfect Epistemic and Instrumental Rationality cannot be achieved due to a lack of computational power, among other resource limits -- and this is true, otherwise high-IQ people wouldn't be the ones at the frontier -- then this AI has an irrefutable advantage over us. Highly intelligent people have an easier time being rational, and it is rationality that avoids madness, bad habits, or whatever self-defeatism you mentioned. It would be quite immune to self-inflicted mental illnesses. Overall, I fail to see how this AI would have the same weaknesses, granted that it would be much better at recovering the signal from the noise than we are. Unless, of course, you put the AI in isolation, where it would go mad just as an isolated human would. As long as it is exposed to the world, it can discern the wrong way from the right one.
> Can an artificial intelligence invent god? How advanced should it be to do so? What do you think?

Only if we "deem it" so, and I see no reason to. :numbness: Whatever "AI" invents on a 'godlike' status surely would not be human; thus "deeming it god" is simply a fallacious anthropomorphic dream.
> Hence why I didn't say your machine would be intelligent. I think it wouldn't (sorry).

Then why is it always a highly intelligent (or high-IQ) scientist or mathematician who can answer questions no one else can?
Of course, they would answer questions the designers could answer themselves, only faster - but what about the ones the designers couldn't? Not with the same method of thinking. IQ tests are designed to detect retardation, or precocity in children, but not genius, because they don't test the method.
> All I assume is that the perfect epistemology can't be achieved by the imperfect epistemology. It's a matter of method. Either it is self-fulfilling or self-defeating. Speaking of method, highly intelligent people are notably audacious at the intuitive stage and cautious at the judging stage. Question: Can we just increase the performance of the judging stage by increasing its processing speed ad lib?

Addressing the bold part: we call that rationality. We achieve rationality by finding biases in our thinking, and we find biases by comparing our beliefs with reality, or our methods with their expected efficacy. In other words, even discovering our biases is a computational matter. Why wouldn't a human with more computational power be able to eliminate biases faster? Unless you're proceeding on the tacit assumption that rationality simply wouldn't be his priority.
I don't think so, and here is the downside: we develop our minds by perceiving the most obvious obstacles first, generally the most short-term threats, which are the easiest to analyze as such and thus the easiest to intuit. When one gets used to being rewarded for quick analyses, systematized rewards become a systematic priority. That method is then applied to the investigation and classification of issues as well, which limits the intuition of less obvious obstacles. To remove this limit, one's imagination needs to be a perpetual challenge for one's own analytical skills.
Life is not a Mensa IQ test. There is almost no limit to the difficulty of the items, and the first task of intelligence is to figure out which one hides behind all the others.