

Premium Member
Joined
37,380 Posts
Discussion Starter #61 (Edited)
There is an infinity of problems we have yet to solve that require skills we have barely grazed.

If people valued problem solving, they wouldn't be threatened by the automation of well-known solutions to well-known problems.

For those who do, machines are a gain of otherwise-wasted time. Or just a good challenge. AIs didn't prevent people from playing chess. Some videogames are impossible to beat and people still find an interest in getting as close as possible. Cars go at 300mph and yet people keep running on flat surfaces as fast as they can. Because one is a matter of transportation, the other is a matter of self-mastery and understanding. Some people keep painting photorealistic art; what's the point? It's a different goal than conveying or storing information.

In the end, it's important to realize how a job that doesn't increase self-mastery and understanding is a waste of time that should be automated as soon as possible.
Do you agree or disagree that the need to ace stems from a psychological need to be seen, accepted, and to feel appreciated and recognized?
If so, do you suspect this need can be solved through therapy, biological treatment, or brain stimulation that produces the feeling of reward?
Would this fake reward stimulation screw up our reward-hunting?
I also think fighting with NPCs in videogames is boring for some reason; it doesn't matter how perfect the bot is. Even if I achieve some in-game reward for it, it doesn't satisfy my need to interact with real humans. Maybe not all humans feel the same way? Maybe some humans appreciate the bots more? I find the only reason to play chess with a computer is to learn a new skill, but I'd much rather learn that skill in real time from a human. It feels draining to learn things from a computer, book or video.
When the computer sets the resistance too low so that I get to win over it, the achievement feels fake. When I set the resistance really high, the computer always tends to win and I end up feeling crappy, with low self-esteem and dystopian thoughts about the future. For example, the computer moves the knight faster than the blink of an eye. That's not even fair in a million years.

I think in the future we will probably be able to somehow project our thoughts directly without having to type.

If we are going to have a chip implanted into our brain, how would that impact the growth of our brain? Maybe we would need to get that chip as adults. I think knowing this could be a potential stress factor for kids, and I also wonder whether it would leave our brain more vulnerable to hacking.

There were people who got knee implants and biological things started to grow on the implants, kind of like when a ship sinks in the sea and things start to grow on the wreck itself, like seaweed and shells or whatever it is. I wonder if that would happen in the brain too.
 

Registered
XXXX, XwX 😂
Joined
222 Posts
No, we can't. That's not actually because humans are inferior. The whole idea of feeling inferior to someone or something is a subjective feeling which is caused by injustice.
It's the same thing Karl Marx says: whenever justice is unbalanced because of a minor superiority in a certain field, there will be blood and war. You can't blame anyone.
But there might be a possibility that these so-called AIs would develop a truly objective and just measure of justice itself, so they would end the wars that all of the biological beings created. Just like how we, as human beings, can really accept unconditional love for our neighbor or any living thing.
But my own prediction is that these AIs can't develop those good feelings and will fall victim to the same law of the jungle as every biological thing has.
 

Registered
Joined
7,281 Posts
Do you agree or disagree that the need to ace stems from a psychological need to be seen, accepted, and to feel appreciated and recognized?
If so, do you suspect this need can be solved through therapy, biological treatment, or brain stimulation that produces the feeling of reward?
Would this fake reward stimulation screw up our reward-hunting?
I think the need for competence and ability simply stems from the fear of vulnerability and failure. Then it's up to everyone not to confuse it with their specific fear of exclusion and build a psychological system of reward around such conflations.

It is a constant in human history that those who go beyond others' expectations (skill-wise) are not driven by external validation. Were that the case, they would stop improving as soon as they fulfilled others' expectations, or, if they were already excluded, as soon as they had their expected revenge. Pushing boundaries takes a more authentic approach to one's fears and accomplishments.

Self-improvement is the ultimate answer to a fear of vulnerability. If it is parasitized by other drives, such as a need for predictability, that is also due to a history of failures to exceed one's expectations and to persist until that happens. I don't think the solution is as simple as stimulating or hindering one or two hormones with drugs.
 

Premium Member
Joined
37,380 Posts
Discussion Starter #65 (Edited)
No. Do you think a robot would be able to author poetry? Or paint a portrait that is meaningful? Human-made would probably be the meaningful artefacts; they'll be valuable.
I think it can at least make poetry, and I am unsure about the meaningful portrait, but if it is programmed to copy our human values, who knows? I doubt I personally would, but I can't guarantee it 100%. Android and mobile phones in general have become more intuitive than, let's say, in 1999. At this stage most bots utterly and completely SUCK at psychology and at deep and meaningful communication with humans, and it just seems that they almost have no clue what they are doing at all. A lot of emphasis would have to be put on that, but still, even if they could learn to understand where we are coming from, there is something special about humans. A bot is more like a reflection of human actions, even if smart, quick and clever. At this stage I find bots, for example in a bank, stiff, rigid and too strict.
 

Registered
INTJ
Joined
309 Posts
@Squirt
You talk of having super-intelligence so we can... what? What are we looking to gain? Is it for the hell of it? Do we want to reduce suffering? Is it so we can appreciate one another more? Is it to live longer? Will it do any of those things?
Whatever it is we were doing, we would be able to do it more efficiently. If you are interested in problem-solving, why wouldn't you be interested in getting more efficient at it? Or at writing poetry, or at playing games, or at anything.

I am not sure why the increase in overall efficiency necessarily has to be associated with some concrete goal at hand. I never implied that this merge is a goal in itself; efficiency without use is, well, useless.

You seem to be into highly centralized government control from your comments
It doesn't necessarily have to be the government. I just suspect that without some careful global centralized supervision that would guarantee uniform and fair integration, there will be all sorts of issues. Maybe there is another solution, sure.


@Celtsincloset
No. Do you think a robot would be able to author poetry?
If it is the real AI we are talking about, it would be able to do anything that you imagine only a human can. It will shoot poetry penetrating every inch of your soul like a machine gun.
 

Premium Member
Joined
37,380 Posts
Discussion Starter #67
I think bots at this stage need to get better at improvisation and at coping with unexpected events, generally speaking.
 

Registered
INFJ
Joined
237 Posts
If it is the real AI we are talking about, it would be able to do anything that you imagine only a human can. It will shoot poetry penetrating every inch of your soul like a machine gun.
Robots/etc don't feel. Their poetry would be empty, and most likely nonsensical. Real poetry comes from the heart. I wrote a poem called 'The One', and I cannot imagine an AI writing it, and thinking it's a good poem.
 

Registered
Caffeinated ☕
Joined
1,664 Posts
Whatever it is we were doing, we would be able to do it more efficiently. If you are interested in problem-solving, why wouldn't you be interested in getting more efficient at it? Or at writing poetry, or at playing games, or at anything.

I am not sure why the increase in overall efficiency necessarily has to be associated with some concrete goal at hand. I never implied that this merge is a goal in itself; efficiency without use is, well, useless.
Exactly. So isn't it valuable to ask "efficiently doing what", in this case, with the merge?

I'm trying to get some more meat on the bones of this potential initiative is all.

I see some pieces here about how synthesis might bypass issues surrounding "making humans obsolete" which is a common fear when adapting to a new technology. What you're saying reminds me of Diaspar in Arthur C Clarke's novel "The City and the Stars", a utopia where humans are freed from any constraint to express themselves however they wish with "limitless computing power"... except for their wish to leave the utopia.

In my view, technology is a tool, not a savior, and so advancing technology doesn't advance us, especially if we're expecting it to do so. It is a fundamental distinction.

It doesn't necessarily have to be the government. I just suspect that without some careful global centralized supervision that would guarantee uniform and fair integration, there will be all sorts of issues. Maybe there is another solution, sure.
The issues wouldn't be as bad as the issues centralized supervision would invite, imo. We're already seeing consolidation of power around the use of digital technology and AI. I would advocate for decentralizing access if we're worried about humanitarian implications.
 

Registered
INFP
Joined
649 Posts
@Squirt
If it is the real AI we are talking about, it would be able to do anything that you imagine only a human can. It will shoot poetry penetrating every inch of your soul like a machine gun.
AI can already create poetry.


Whether it’s any good you’ll have to judge, LOL. Hint: the results aren’t pretty, not yet anyway.


Sent from my iPhone using Tapatalk
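
For anyone curious what "AI can already create poetry" looks like in practice, here is a minimal sketch: a small pretrained language model asked to continue a prompt. The library is Hugging Face's transformers; the model choice (gpt2) and the prompt are illustrative assumptions, not something anyone in this thread posted.

```python
# Minimal sketch: let a small pretrained language model continue a prompt.
# Model choice (gpt2) and the prompt are illustrative assumptions.
from transformers import pipeline, set_seed

generator = pipeline("text-generation", model="gpt2")
set_seed(42)  # make the (still fairly random) output repeatable

prompt = "The one I love is"
out = generator(prompt, max_length=60, num_return_sequences=1)
print(out[0]["generated_text"])  # usually readable, rarely "pretty"
```

The output tends to be grammatical but shallow, which matches the "results aren't pretty, not yet anyway" verdict above.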
 

Registered
INTJ
Joined
309 Posts
Exactly. So isn't it valuable to ask "efficiently doing what", in this case, with the merge?

I'm trying to get some more meat on the bones of this potential initiative is all.

I see some pieces here about how synthesis might bypass issues surrounding "making humans obsolete" which is a common fear when adapting to a new technology. What you're saying reminds me of Diaspar in Arthur C Clarke's novel "The City and the Stars", a utopia where humans are freed from any constraint to express themselves however they wish with "limitless computing power"... except for their wish to leave the utopia.

In my view, technology is a tool, not a savior, and so advancing technology doesn't advance us, especially if we're expecting it to do so. It is a fundamental distinction.



The issues wouldn't be as bad as the issues centralized supervision would invite, imo. We're already seeing consolidation of power around the use of digital technology and AI. I would advocate for decentralizing access if we're worried about humanitarian implications.
Do you have your own goals, plans, things that you want to see unfolding in reality? I certainly do. Now imagine yourself moving many, many times faster towards them. That's it. And towards anything that you may want.

So isn't it valuable to ask "efficiently doing what", in this case, with the merge?
Everything, whatever your goal is. Again, it is useless to view it as an end in itself and as a savior, but that isn't necessary.

Have you played Stellaris or any other strategy game that has a technology tree where you can research more powerful rockets and so on? Wouldn't you want to get a technology that allows you to research subsequent things X% faster?

and so advancing technology doesn't advance us
It is merely a stepping stone. Or mountain, to be more accurate.


Robots/etc don't feel. Their poetry would be empty, and most likely nonsensical. Real poetry comes from the heart. I wrote a poem called 'The One', and I cannot imagine an AI writing it, and thinking it's a good poem.
I think you have yet to fully realize the power we are talking about right now. Real AI is not your current generic dumb neural network producing funny random pictures of dogs or other nonsense.
As far as results are concerned, it would write as if it feels much more deeply than you and has a bigger heart than those of all the humans you have met put together.
"Make a human cry using words" can be considered an engineering problem.
There is nothing intrinsic about feelings that would make them impossible to emulate/reproduce/predict.


AI can already create poetry.


Whether it’s any good you’ll have to judge, LOL. Hint: the results aren’t pretty, not yet anyway.


Sent from my iPhone using Tapatalk
That's not the AI I talked about.

 

Registered
INFJ
Joined
237 Posts
I can't imagine an AI living the life of a human and being able to write something that tells its own personal story. That sort of poetry, even in a far-off futuristic world, could only be the product of a human. It seems impossible for an AI to achieve this meaning amongst humans, something that speaks to them, unless it lived like a human, wanted to procreate with a human, and died as a human.

When your AI reads another AI's poetry, which is usually a meaningful activity for a human, I wonder what would occur. What are they actually achieving by doing this, I wonder, because there are no feelings in them.
 

Registered
INTJ
Joined
309 Posts
@Celtsincloset
I can't imagine an AI living the life of a human and being able to write something that tells its own personal story.
This problem is not intrinsic to the concept of AI, but more to your understanding of it, I think.
There will be feelings, or whatever properties of them are required for the execution of the goal.
Think of a human with a "1000 IQ", roughly speaking, who could understand what it is that you expect of him, who understands how human psychology works better than any human, and who would know which inputs are needed for target outputs. It doesn't matter what task you set for him, no human will be able to surpass him under equal conditions, including anything art/emotion related (unless we assume some limitations to the AI implementation).
 

Premium Member
Joined
37,380 Posts
Discussion Starter #75
Imagine then, if someone uses AI to emotionally manipulate us. Hit us right in the heart. Ouch!
 

Registered
Joined
7,281 Posts
Nobody will invent a smarter mind than one's own. Upgrading calculation power won't change anything. Thinking faster like a fool means spreading one's foolishness at a higher rate, if anything, not solving it at a higher speed. It's not about power but proper heuristics, and people who use the latter will understand how bad of an idea it is to boost their own calculation power a billion times. It builds an impatient brain that will find comfort in the analytic process.

Going straight to the point: this fear of artificial intelligence is nothing but a denial and deviation of one's fear of human intelligence. It's not machines that make another's contribution obsolete, but their inventor. The loophole is how humans are wired to overestimate their own, which makes it hard for them to deliberately support a political war against intelligence. And whenever a country is being more aggressive against the smart, with all the problems that ensue, others see an opportunity to boast about how they're more civilized.

Humanity has understood all its local issues, issues that can always be avoided. A local predator, shortage, or disaster, each of those issues can be solved with various survival strategies. Hence we settled. Then we entered the era of global issues. A supervolcano, an asteroid, a global disaster, a global shortage, issues that follow us everywhere, aging, boredom, madness; they all have one common point: fecundity, trickery, brutality, evasion, compassion, parasitism, none of those strategies work. Only skills, talent, intelligence.

We've stepped into a bottleneck of survival strategies that will only let talent and skills prevail. The sooner you accept it and try to be part of that new adventure, the better for you. Because the most pressing problem is those who don't, leading to overpopulation, overconsumption, etc. Expect that issue to be solved in the near future, and at its own expense.
 

Registered
Caffeinated ☕
Joined
1,664 Posts
Do you have your own goals, plans, things that you want to see unfolding in reality? I certainly do. Now imagine yourself moving many, many times faster towards them. That's it. And towards anything that you may want.
Yes, we can certainly accelerate on our path towards extinction.
 

Registered
INTJ
Joined
309 Posts
@IDontThinkSo
It is not about a boost of raw computational power to simply magnify foolishness or what was there before, but about upgrading the hardware that in turn produces skills, talent, and said intelligence. This doesn't exclude natural ways of nurturing these attitudes.
Nobody will invent a smarter mind than one's own.
In this context, that statement means that strong AI is impossible, but that is arguable at this point.

@Squirt
Well, I guess I failed to communicate my point then, okay.
 

Registered
Joined
7,281 Posts
Giving machines free skills has no value in itself. The value of a skill pertains to the fundamental nature of the problem it contributes to solving. There is an infinity of problems to solve, and questions to ask. Understanding how much they relate to one's issues is what intelligence is for. In the last century, the so-called scientific community has been stuck with the fantasy that a powerful algorithm could list and analyze all questions on its own, and with its obsession with the P vs NP problem.

That's all fun and games till their impassive computer takes an eternity to finally look into the most important question, which is: what is the 2nd most important question? >> How to figure out which one is 3rd.

What are the odds that a computer which ignores how it's gonna run out of juice will take this problem seriously, instead of fantasizing about unrealistic threats and coming up with absurd answers and skills?

Humans all start with pretty much the same hardware; some spend it asking the right questions, others spend it being anti-reptilian flat-earthers. A 1M IQ anthropomorphic computer won't be any different.

Can you picture that? Mankind gives half of its electricity production to a supercomputer so that it finds a better energy source, but it ends up obsessing over doing 5-dimensional origami instead.
 

Registered
INTJ
Joined
309 Posts
"Merge" wasn't meant in metaphorical sense.
Human will keep learning, asking questions, build skills and etc, just better/faster, more will be achievable within individual lifetimes.
 