
81 - 100 of 149 Posts

·
Registered
Joined
·
1,538 Posts
How can we prevent humans from ending up feeling inferior to robots, cobots, computers and machines as these take over our skills, jobs and talents?
You know how robots work in 1s and 0s? A binary system. The human brain uses a senary system, with values from 0 to 6. So we are still very far from making a computer like the human brain.

However, while humans are all-purpose, computers have very specialized tasks; that's why they beat us. You have a computer that's very good at cleaning your house, and not much else. You have a computer that's very good at finding you that song you liked, and not much else. Computers only have to worry about their task-specific purposes, as this is how humans designed them. Humans had to worry about survival above all else.

But even setting that aside, ultimately there's no reason to feel inferior to computers. There are humans and there are computers; why should you feel inferior? Feeling inferior means making a value judgement based on your own subjective standards of worth and comparing yourself to computers, so simply make no such value judgement. The computers may be better than us in some areas, so what? Isn't that the purpose? Isn't that why we made them in the first place?

I hope we don't end up killing ourselves by creating a criminal AI. We already have enough criminal humans among us, but they are limited; imagine a criminal AI with access to the mainframe, which is what Elon Musk was afraid of. And honestly, I'm with him on this one. It's not like humans have been saints on this planet, so it's not out of the question for a rogue AI to judge us accordingly.

As for computers taking over your job: it can sadden you that a computer managed to take over your job, or it can motivate you to push your limits, to create works of art or learn a new skill for a job that a computer can't replicate, to stay above computers. Your choice, but at least now you are conscious of it.
 

·
Premium Member
Joined
·
38,597 Posts
Discussion Starter #82
You know how robots work in 1s and 0s? A binary system. The human brain uses a senary system, with values from 0 to 6. So we are still very far from making a computer like the human brain.

However, while humans are all-purpose, computers have very specialized tasks; that's why they beat us. You have a computer that's very good at cleaning your house, and not much else. You have a computer that's very good at finding you that song you liked, and not much else. Computers only have to worry about their task-specific purposes, as this is how humans designed them. Humans had to worry about survival above all else.

But even setting that aside, ultimately there's no reason to feel inferior to computers. There are humans and there are computers; why should you feel inferior? Feeling inferior means making a value judgement based on your own subjective standards of worth and comparing yourself to computers, so simply make no such value judgement. The computers may be better than us in some areas, so what? Isn't that the purpose? Isn't that why we made them in the first place?

I hope we don't end up killing ourselves by creating a criminal AI. We already have enough criminal humans among us, but they are limited; imagine a criminal AI with access to the mainframe, which is what Elon Musk was afraid of. And honestly, I'm with him on this one. It's not like humans have been saints on this planet, so it's not out of the question for a rogue AI to judge us accordingly.

As for computers taking over your job: it can sadden you that a computer managed to take over your job, or it can motivate you to push your limits, to create works of art or learn a new skill for a job that a computer can't replicate, to stay above computers. Your choice, but at least now you are conscious of it.
Well, that was a long and interesting answer. What are the 6 "binary types" you were talking about?
 

·
Registered
INTJ 5w6 583 sp/sx
Joined
·
681 Posts
@Dezir
You know how robots work in 1s and 0s? A binary system. The human brain uses a senary system, with values from 0 to 6.
You do know that we can encode "6", for example, as "110", effectively emulating any senary value in terms of a binary system? What makes you believe in the significance of this difference, assuming the validity of its premises?
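
To make that concrete, here is a minimal sketch (Python, with names I chose purely for illustration) showing that senary values round-trip losslessly through binary, so a binary machine loses nothing to the base difference:

```python
# Any base-6 (senary) value can be encoded exactly in base 2 and
# recovered, so binary hardware can emulate a senary system.

def senary_to_int(digits: str) -> int:
    """Interpret a string of senary digits (0-5) as an integer."""
    return int(digits, 6)

value = senary_to_int("543210")  # an arbitrary senary number
bits = bin(value)[2:]            # its exact binary encoding
assert int(bits, 2) == value     # nothing was lost in translation
print(f"senary 543210 -> decimal {value} -> binary {bits}")
# And indeed bin(6)[2:] == "110", as mentioned above.
```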
 

·
Host
ENTP 5w6 So/Sx 584 ILE Honorary INTJ ♂
Joined
·
18,421 Posts
How can we prevent humans from ending up feeling inferior to robots, cobots, computers and machines as these take over our skills, jobs and talents?
We can't. At some point, our machines will evolve beyond us, if we imbue them with our abilities. They'll be able to do it far faster than biological evolution. It may very well be that we evolve along with them.

 

·
Premium Member
Joined
·
38,597 Posts
Discussion Starter #85
We can't. At some point, our machines will evolve beyond us, if we imbue them with our abilities. They'll be able to do it far faster than biological evolution. It may very well be that we evolve along with them.

Maybe epigenetics will prevent it... I read an article about some fish species that lived in an extremely polluted sea, and suddenly the whole species changed at once, or something like that, to adapt. There was another one about birds' eggs: something happened to one of the bird babies, and then the other bird babies changed while still inside their eggs, I think, to adapt to that.
 
  • Like
Reactions: tanstaafl28

·
Registered
Joined
·
1,538 Posts
Well, that was a long and interesting answer. What are the 6 "binary types" you were talking about?
Electricity. Impulses in the brain. The computer has only 2: on and off, transistors with or without an electric impulse in them. Whereas humans have different stages of those impulses. When a computer has to make a micro-decision based on a set of instructions and input, it only has 2 possible outputs; if you want to do something more complex, you have to group those micro-decisions together. Modern CPUs contain billions of transistors to be able to do the things a modern computer can do. A human has 7 outputs for the same micro-decision.
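
For what it's worth, even granting the "7 levels per signal" premise, grouping binary outputs covers it quickly; a toy sketch (an illustration of the counting, not a neuroscience claim):

```python
import math

# One 7-level signal needs ceil(log2(7)) = 3 binary signals to emulate,
# since a group of 3 bits has 2**3 = 8 >= 7 distinct combinations.
LEVELS = 7
bits_needed = math.ceil(math.log2(LEVELS))

def encode(level: int) -> tuple[int, ...]:
    """Represent one multi-level output as a group of binary outputs."""
    assert 0 <= level < LEVELS
    return tuple((level >> i) & 1 for i in range(bits_needed))

# Every level gets its own distinct group of binary states.
assert len({encode(v) for v in range(LEVELS)}) == LEVELS
print(bits_needed)  # -> 3
```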
 
  • Like
Reactions: Electra

·
Registered
INTJ, most of the time. I like penguins.
Joined
·
73 Posts
How can we prevent humans from ending up feeling inferior to robots, cobots, computers and machines as these take over our skills, jobs and talents?
I have a few thoughts on it. Either...
1.) Make the mechanical beings unable to do more than what they're programmed to.
- Say, if a robot is manufactured to fulfill the role of a security guard at the world bank, then it should have no qualms about vaporizing criminals in an instant. But because it was programmed entirely for keeping the world bank safe and is completely subservient to its masters (in this case, whoever is in charge of defense and security), it doesn't possess any capacity for critical thought that could motivate it to strive for more than what its programming requires, or to abuse its capabilities. Or...

2.) If you can't beat them, be them, but better. Scientists might have to make it so that advanced technology becomes the next step in evolution. A great way to ensure that the machines don't end up surpassing (or worse, usurping) their masters is to become even better machines than they are.

Robotics isn't one of my current interests and I don't know crap about programming, but those are just my opinions.
 

·
Registered
INTP
Joined
·
8 Posts
People gotta realise that AI systems are only as good as the assumptions underlying the mathematical system they are based upon, and there is proof that these assumptions cannot be verified within that mathematical framework. So AI systems cannot generate new assumptions that might be needed for describing a phenomenon, because they cannot verify them. It seems that only human beings can do that, or that we cannot build systems that can, all of which means that the human mind is not gonna become irrelevant.
 

·
Registered
INTJ 5w6 583 sp/sx
Joined
·
681 Posts
People gotta realise that AI systems are only as good as the assumptions underlying the mathematical system they are based upon, and there is proof that these assumptions cannot be verified within that mathematical framework. So AI systems cannot generate new assumptions that might be needed for describing a phenomenon, because they cannot verify them. It seems that only human beings can do that, or that we cannot build systems that can, all of which means that the human mind is not gonna become irrelevant.
An AI system is not a pure constellation of mathematical structures and theorems; it doesn't have to be consistent and, hence, doesn't necessarily fall under Gödel's theorems. I think you misused them here.
And there is no evidence or argument indicating that there is something extremely special and irreproducible about the human brain or its functionality.
 

·
Registered
Joined
·
325 Posts
If one day we become inferior to AI, it becomes a fact.
How we feel about it no longer matters.
To suggest otherwise amounts to blatant denial.
Rather, we have to rebuild our self-esteem elsewhere: adapt, invent and discover our new strengths.
I don't see any other way.

I'm more worried that AI does not necessarily have to follow what we consider reasonable, acceptable and valuable.
Plus, a recent study suggests we wouldn't be able to control superintelligent machines.

Source: Superintelligence Cannot be Contained: Lessons from Computability Theory
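
For context, that paper's argument is a computability one, in the spirit of the halting problem: any claimed-perfect "will this program misbehave?" checker can be defeated by diagonalization. A runnable toy sketch of that classic argument (all names here are mine, purely illustrative):

```python
# Take ANY concrete candidate checker that claims to predict whether a
# program will misbehave; we can always construct a program it gets wrong.

def make_adversary(is_safe):
    """Build a program that does the opposite of whatever the checker predicts."""
    def adversary():
        if is_safe(adversary):
            return "MISBEHAVE"  # stand-in for harmful behaviour
        return "ok"
    return adversary

def naive_checker(program) -> bool:
    return True  # one concrete guess; the construction beats every checker

adv = make_adversary(naive_checker)
print(naive_checker(adv), adv())  # -> True MISBEHAVE: prediction refuted
```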
 

·
Registered
Joined
·
7,303 Posts
Anyway, it is not possible to conceive a mind that works better than that of the conceiver. At best, AIs can be as smart as the smartest man and work at the same pace towards their own understanding of what they lack to do better. Their need for resources and energy won't be any more convenient than ours. Our technology is nothing without the very rare minerals that we can extract thanks to not needing them to survive in the first place. Even when it comes to surviving in space, it's not a real advantage. It takes the same level of technology to protect very complex life systems as very complex computers. Protecting the latter from radiation is an equal conundrum.

Those who think that an AI can just pirate a space rocket and go weee in space forever, or pirate a factory and make an army of drones that take over the world, should quit watching movies. A very smart AI will have the same life here as a very smart human: same problems, same solutions, and equivalent opportunities.
 

·
Registered
INTJ 5w6 583 sp/sx
Joined
·
681 Posts
Anyway, it is not possible to conceive a mind that works better than that of the conceiver.
Again, why?

There are AIs that can beat anyone at chess, and even at games with far more variables and possibilities, like Go and Dota.
Yet their conceivers didn't train in any of those games and were nowhere close to being champions.
This example shows that you can create something surpassing yourself with respect to some skill, at the very least.

This is not "real intelligence": complex, but still a very narrow subset of the activities that humans are capable of.

But what logical grounds prove that the remaining, similar cognitive processes cannot, in principle, be reproduced to cover enough application domains to be classified, in totality, as "general purpose"?

They won't be gods; they won't be able to "calculate the future" or contradict physics in any other way. Still, there is no reason to assume that they won't surpass the smartest man in any intellectual activity.
 

·
Registered
INTJ 5w6 583 sp/sx
Joined
·
681 Posts
I can see how it can appear impossible when considering this task from a certain perspective:
the one that implies that every aspect of an artificial mind is the result of conscious deliberation by its conceiver.
But this is certainly not the only way a mind can possibly be built; it's not even the most popular one.
Skills, in a sense, "build themselves".
 

·
Registered
INTJ 5w6 583 sp/sx
Joined
·
681 Posts
So far, in my experience, those who made strong assertions about what AIs are in principle incapable of:

1. lacked rather important parts of a computer science background, but somehow decided that this isn't an engineering problem but a purely philosophical one, which could be fully exhausted by their unique, undebatable perspective (which proved to be limited);

2. had different fundamental assumptions of a metaphysical nature at the root of their conclusions;

3. misused their knowledge (happens to the best of us, but the best of us anticipate this);

4. actually had good enough reasons for their conclusions
(usually, in this case, the reasons were related to estimates of the computational capacities of hypothetical machines, drawing on knowledge of materials, CPU architectures, computational science, etc.; these people were careful and knew what they didn't know about the subject, or at least strove to. They rarely made "strong assertions", though.)

The same pattern is mostly applicable to every serious and popular subject, I guess.
 

·
Premium Member
Joined
·
2,699 Posts
Wouldn't it be ironic if AI took control of our planet to save it from us humans, which would essentially save us humans as well?
 

·
Registered
INFP 6w5 629 sp/sx
Joined
·
1,869 Posts
There are AIs that can beat anyone at chess, and even at games with far more variables and possibilities, like Go and Dota.
Yet their conceivers didn't train in any of those games and were nowhere close to being champions.
This example shows that you can create something surpassing yourself with respect to some skill, at the very least.

This is not "real intelligence": complex, but still a very narrow subset of the activities that humans are capable of.

But what logical grounds prove that the remaining, similar cognitive processes cannot, in principle, be reproduced to cover enough application domains to be classified, in totality, as "general purpose"?
(Underline is mine, and I removed parts of your quote even though I agreed with them)

There was at least one thing discussed thus far that would bar AI from being general purpose, and that's creativity. AI currently has a very strong capacity to find patterns and is adept at matching a process to get a specified end result from a sample set. Creativity, though, entails specifying its own end result and its own sample set.

Take, for instance, the general problem: "The city looks boring." A group of artists and city planners could meet and discuss solutions for that problem, and come up with a bunch of murals and sculptures to satisfy the populace. An AI could assess local populations and measure dopamine levels, and if the artists gave the AI a sample set of materials and paintings, I think it could create some good art.

However, "boring" is a general problem. At one point, the residents could get bored with the AI paintings they've gotten and are bored with paintings and sculptures in general. Without creativity, it's hard to imagine how an AI would solve this problem. Can AIs be made to recognize intent? How would we get the AI to realize that the solution is insufficient and that continuing to make a solution based on its given sample data will not give a good result? It can continue to produce "good" paintings, but "good" paintings can no longer satisfy the true end-goal.

Can this problem be overcome? Maybe. I discussed it here:
 

·
Registered
INFJ, SoCom, hands-on, physical intimacy, Energy being, Project Career Temp, Wisdom Growth Temp
Joined
·
3,970 Posts
The problem is not "what do we do when AI can do things as well as us", it's more like:

If a robot can replace 5-15 human workers at a task, is that robot helping to pay the salaries (food, shelter, clothing) of the 5-15 workers it replaced? If not, what can be done about this?

For example, to help pay for those salaries, there could be an income cap, so that the person who owns the robot does not keep the salary of five people all to himself.
 

·
Registered
INTJ 5w6 583 sp/sx
Joined
·
681 Posts
@secondpassing
Yes, like you pointed out yourself, most current methods of making AIs won't comprehensively emulate creativity.

Randomness is what makes experimentation possible for humans, I guess, since if our signals traveled only through the "most trained" pathways, humans would be too conservative and would always make only the best-known decisions.
It is much more complicated than that, of course; I don't claim to know how the brain makes decisions.
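
In machine-learning terms, this is the exploration/exploitation trade-off; a minimal epsilon-greedy sketch (an illustration of the idea, not a model of the brain):

```python
import random

def epsilon_greedy(estimates: list[float], eps: float = 0.1) -> int:
    """Mostly pick the best-known action, but explore with probability eps."""
    if random.random() < eps:
        return random.randrange(len(estimates))                   # explore
    return max(range(len(estimates)), key=estimates.__getitem__)  # exploit

# With eps = 0 the agent only ever makes the best-known decision, so it
# can never discover that an underestimated action is actually better.
print(epsilon_greedy([0.5, 0.2, 0.1], eps=0.1))
```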

I don't see current popular ways as the only gateway to a real AI, though.

In the end, I think it will be a merge of something resembling unsupervised learning with old expert systems.
The latter should have a configurable level of "flexibility" and not define the whole nature of the process. This will allow machines to detach from raw, perfectly consistent formal computations while still accessing them when necessary.
Just like humans can choose to be very precise despite being this "unmathematical", biological, creative mess.
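
A toy sketch of what such a hybrid could look like (everything here is hypothetical; the "flexibility" knob decides how often the learned side may override the hard rules):

```python
import random

def expert_rules(situation: str) -> str | None:
    """A hand-written rule table, standing in for an old expert system."""
    return {"low_fuel": "refuel", "obstacle": "stop"}.get(situation)

def learned_model(situation: str) -> str:
    """Stand-in for a learned (e.g. unsupervised) component."""
    return "improvise"

def decide(situation: str, flexibility: float) -> str:
    rule = expert_rules(situation)
    if rule is None or random.random() < flexibility:
        return learned_model(situation)  # detach from the formal rules
    return rule                          # stay precise when it matters

print(decide("obstacle", flexibility=0.0))       # always "stop"
print(decide("something_new", flexibility=0.0))  # falls through to the model
```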

I think an expert system of sorts may be necessary because humans aren't born with fresh, empty, completely universal brains that can learn anything.
The brain has some preprogrammed, special-purpose circuitry. For instance, the ability to pick up spoken language doesn't come from merely hearing enough of what humans say.

So, general-purpose intelligence and creativity won't be created "consciously", by crafting a perfect, deliberately chosen set of rules, or after achieving a certain level of expertise.
They will emerge merely as a side effect, just like in humans. And such a system will eventually be able to build a comprehensive enough theory of mind to recognize the issues with its paintings that you mentioned.

It may also have some low-level core drivers that functionally resemble the drives of the psyche, the libido, which might prove essential for a plausible reconstruction of consciousness/psyche. All high-level goals, purposes and agendas will then build on top of them.
 

·
Registered
INFP 6w5 629 sp/sx
Joined
·
1,869 Posts
@secondpassing
Yes, like you pointed out yourself, most current methods of making AIs won't comprehensively emulate creativity.

Randomness is what makes experimentation possible for humans, I guess, since if our signals traveled only through the "most trained" pathways, humans would be too conservative and would always make only the best-known decisions.
It is much more complicated than that, of course; I don't claim to know how the brain makes decisions.

I don't see current popular ways as the only gateway to a real AI, though.

In the end, I think it will be a merge of something resembling unsupervised learning with old expert systems.
The latter should have a configurable level of "flexibility" and not define the whole nature of the process. This will allow machines to detach from raw, perfectly consistent formal computations while still accessing them when necessary.
Just like humans can choose to be very precise despite being this "unmathematical", biological, creative mess.

I think an expert system of sorts may be necessary because humans aren't born with fresh, empty, completely universal brains that can learn anything.
The brain has some preprogrammed, special-purpose circuitry. For instance, the ability to pick up spoken language doesn't come from merely hearing enough of what humans say.
After embarrassingly losing to Leela Chess Zero (Lc0, an AI chess engine), the people behind Stockfish augmented their brute-force computation engine with a neural network on the side. Stockfish NNUE currently has a higher rating than Lc0, but it still lost the chess engine championship last year to Lc0. This could suggest that expert systems paired with unsupervised learning might be a good way to improve neural networks, but we will have to wait until the next championship.
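
Roughly, the NNUE idea is to keep the classical search and make the leaf evaluation pluggable, so a small neural network can sit where the handcrafted heuristic used to be. A toy negamax over a made-up pile game (a sketch of the structure, not actual Stockfish code):

```python
# Toy game: a pile of stones, remove 1 or 2 per turn; taking the last wins.

def moves(state: int) -> list[int]:
    return [state - k for k in (1, 2) if state - k >= 0]

def negamax(state: int, depth: int, evaluate) -> float:
    if state == 0:
        return -1.0             # no stones left: the side to move has lost
    if depth == 0:
        return evaluate(state)  # leaf: defer to the evaluation function
    return max(-negamax(s, depth - 1, evaluate) for s in moves(state))

def handcrafted_eval(state: int) -> float:
    # Classical heuristic: piles divisible by 3 are lost for the side to move.
    return -1.0 if state % 3 == 0 else 1.0

# A trained network would slot in here instead, e.g.
#   def nn_eval(state): return net.forward(features(state))
print(negamax(7, depth=4, evaluate=handcrafted_eval))  # -> 1.0 (7 is winning)
```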

Personally, in the long run, I think expert systems hinder the development of AIs. The problem lies in who defines what the expert is; it's a bit of pride, as if the humans doing the programming think they know better than the neural network and haven't admitted that AIs find patterns more efficiently than looking through a set of rules. I got distracted writing this and went to watch Lc0 play Ethereal (another formal engine with an NN augment). Lc0 consistently analyzes with less depth (makes far fewer brute-force calculations), but from the positions it predicts, I can tell Lc0 will probably crush her opponent.

You've mentioned cyborg augmentation as a way to alleviate self-esteem issues that come along with growing AI capabilities. Not a bad solution.

On the flipside, humanizing AIs by augmenting them with humans might also propel them to general intelligence. By avoiding expert systems and including a human, AIs can keep their ability to generalize. It might also solve the self-esteem issues, because when it comes to specific decision-making, [1 human + 1 AI > 1 human + 1 human] is much easier for humans to accept as okay. Besides, some amalgamation like the one that appeared in Serial Experiments Lain or Psycho-Pass sounds kinda cool.

So, general-purpose intelligence and creativity won't be created "consciously", by crafting a perfect, deliberately chosen set of rules, or after achieving a certain level of expertise.
They will emerge merely as a side effect, just like in humans. And such a system will eventually be able to build a comprehensive enough theory of mind to recognize the issues with its paintings that you mentioned.

It may also have some low-level core drivers that functionally resemble the drives of the psyche, the libido, which might prove essential for a plausible reconstruction of consciousness/psyche. All high-level goals, purposes and agendas will then build on top of them.
I'm skeptical. I might reply to this part once I collect my thoughts.
 

·
Premium Member
Joined
·
38,597 Posts
Discussion Starter #100 (Edited)
After embarrassingly losing to Leela Chess Zero (Lc0, an AI chess engine), the people behind Stockfish augmented their brute-force computation engine with a neural network on the side. Stockfish NNUE currently has a higher rating than Lc0, but it still lost the chess engine championship last year to Lc0. This could suggest that expert systems paired with unsupervised learning might be a good way to improve neural networks, but we will have to wait until the next championship.

Personally, in the long run, I think expert systems hinder the development of AIs. The problem lies in who defines what the expert is; it's a bit of pride, as if the humans doing the programming think they know better than the neural network and haven't admitted that AIs find patterns more efficiently than looking through a set of rules. I got distracted writing this and went to watch Lc0 play Ethereal (another formal engine with an NN augment). Lc0 consistently analyzes with less depth (makes far fewer brute-force calculations), but from the positions it predicts, I can tell Lc0 will probably crush her opponent.

You've mentioned cyborg augmentation as a way to alleviate self-esteem issues that come along with growing AI capabilities. Not a bad solution.

On the flipside, humanizing AIs by augmenting them with humans might also propel them to general intelligence. By avoiding expert systems and including a human, AIs can keep their ability to generalize. It might also solve the self-esteem issues, because when it comes to specific decision-making, [1 human + 1 AI > 1 human + 1 human] is much easier for humans to accept as okay. Besides, some amalgamation like the one that appeared in Serial Experiments Lain or Psycho-Pass sounds kinda cool.


I'm skeptical. I might reply to this part once I collect my thoughts.
I just think: where is it gonna end? Maybe children can't use a lot of this augmentation before they are adults; for example, they might be able to use skull caps, but certainly not brain implants like microchips, and those chips had better be organic too. The technology gets better and we have to keep up, but we can't, or only to a lesser degree, and those with the ü63r l337 tech skillz are those in charge of us all. 🙄 In the future, robots will have stolen our jobs, and we will have to be put on some universal low payment, but I guess that would be kinda like socialism/communism, except for those people on top who are wealthier, as opposed to how it is supposed to be in such systems. And we know through history that if there is someone richer on top, that's not gonna last very long, because people want justice. So how are people gonna fight the system with bare hands against robots? Like enslaved animals in a slaughterhouse fighting against the human killing machinery so that humans can get their precious meat and get fatter? Humans probably have a lesser chance than robots in a power war. But the question is whether or not there will be enough bots to fight a lot of humans, etc. Idk.
 