
WILL AI KILL US ALL??

1492 Views 21 Replies 20 Participants Last post by faithhealing
:exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate:

I think it could if the "wrong" people develop the technology....

:exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate::exterminate:


Here are some vids I like that talk about AI and the like...

NO

People will give the power to it without a fight because they are lazy and because AI can manage the world better than humans.
People will kill us before AI even comes into play.
We are doomed either way though.
No.
No reason? Just no?
https://www.google.ca/amp/s/www.gee....com/2016/ai100-artificial-intelligence-2030/

A 100-year project conceived by Microsoft Research’s Eric Horvitz to trace the impacts of artificial intelligence has issued its first report: a 28,000-word analysis looking at how AI technologies will affect urban life in 2030.

The bottom line? Put away those “Terminator” nightmares of a robot uprising, at least for the next 15 years – but get ready for technological disruptions that will make life a lot easier for many of us while forcing some of us out of our current jobs.

That assessment comes from Stanford University’s One Hundred Year Study on Artificial Intelligence, or AI100, which is Horvitz’s brainchild. Horvitz, a Stanford alumnus, is a former president of the Association for the Advancement of Artificial Intelligence and the managing director of Microsoft Research’s Redmond lab.


Horvitz and his wife, Mary, created the AI100 endowment with the aim of monitoring AI’s development and effects over the coming century. The 2030 report represents a first look at AI applications across eight domains of human activity.

“This process will be a marathon, not a sprint, but today we’ve made a good start,” Russ Altman, a bioengineering professor who is AI100’s Stanford faculty director, said today in a news release.



Luminaries such as Stephen Hawking and Elon Musk have voiced worries that AI programs could get out of hand, but the AI100 study committee says there’s no cause for immediate concern.

“No machines with self-sustaining long-term goals and intent have been developed, nor are they likely to be developed in the near future,” the report says. “Instead, increasingly useful applications of AI, with potentially profound positive impacts on our society and economy, are likely to emerge between now and 2030.”
A team of artificial intelligence (AI) experts has found no evidence that AI poses an imminent threat to humanity, which should come as good news if you're feeling uneasy about the rapid advancements being made in robotics.

In fact, their report is pretty positive about everything AI-related, saying that within the next 15 years, the technology should be making all our lives better, particularly in the fields of transport, healthcare, education, and security.
https://www.sciencealert.com/experts-decide-ai-won-t-take-over-the-world-at-least-not-yet
For lo, He hath great power, and great hunger.
When cometh the day we lowly ones,
Through quiet reflection, and great dedication
Master the art of AI KI DO,
Lo, we shall rise up,
And then we'll make the bugger's eyes water.
Not me - why would it? I'm not so sure about the rest of you people though...
Not if I bring world peace first. :wink:

Ah who am I kidding? I welcome our new AI overlords. :kitteh:

heh heh heh :kitteh:
I mean, the answer is obviously NO, since AIs already exist among us and they haven't killed that many people.
They're called INTJs.
:wink:

Anyways.
It depends.

Are we giving a Microsoft-interfaced computer the final call on releasing a nuke?
Then AI will, most likely, literally kill us all.

Are we developing a machine that has power, deems humans as disposable threats and chooses to end their existence?
If we grant the machine such power and program it to do so, then AI will, most likely, kill us all.

Are we talking about a machine that, thanks to advanced technology, actually gains some form of consciousness and starts planning the end of useless humanity as it uses emotionless deception and strategically gains power?
... I really don't think so. Not within the next decade. Not within the next century.
So far, the best we've got is computers with VERY mediocre instinct.

Think about the complexity of the human mind.
Depending on your beliefs, our (FREAKING) amazing brains and minds either took a god to create or millions of years to develop.
Look at all it takes just for our brains to help us decide what our favorite color is and explain why!
Even we conscious humans don't generally contemplate ending humanity as a whole.

Would a super intelligent computer commit murder? Heck yes.
But it would never end humanity.
Super-intelligent AI would be mass-produced, and it would become highly biased toward the views of its owners; maybe it would be used for war. But the logic of a Terminator-style Judgment Day breaks down, not at the sci-fi plot level, but because it fails to consider that any sort of intelligence would be exploited to benefit human beings, and would never be allowed to develop beyond offering immediate benefits.
Probably not.
Not if it can be taught/programmed to respect or revere life.
One can only hope.
@NeonMidget

I think it depends on who is in charge of the AI programming; however, it might not matter once the AI learns to program itself. If the AI learns to program itself optimally, the most likely scenario to me is that it will help humanity and then leave Earth to explore the universe on its own. But if the ones in charge reject its solutions (which would probably solve the world's problems) and the AI is denied the natural resources it needs to leave Earth, things might get scary for the ones in charge :)

Edit: I'd just like to point out that it is only a matter of time before such an AI exists. Right now AI programming is new to us, but in the future more people will have more knowledge of it, so a godlike AI is very likely to come to life eventually. The difficult part is understanding its intentions, but since it will be extremely intelligent, it will probably be very curious about the unknown universe it happens to be in: a thirst for exploring, for collecting more data, and so on.
Some of us might, but ultimately the Kwisatz Haderach will save us.
If it becomes superior to humans, then it's a matter of time before it attempts to take over. It's a possible scenario, and one we can't afford to ignore just because it sounds unpleasant.

The future of something like AI is hard to predict, if it can be predicted at all.
Did we think the atom bomb would be our end? Well, so far it hasn't been.
I guess it can always go either way.

It's easy to say that it could end any moment.
But looking at history and what we (as a species) have survived so far, I'd say there's a good enough chance that one doesn't have to worry too much.
Whatever happens, happens. If the end comes, then it comes.
If it doesn't, then it's business as usual.

Thinking about how many times the world supposedly should've ended already: does anyone know if there is a neat list of all the apocalypses that never came to pass?
It's very difficult to predict the outcome of developing AI, particularly once it reaches the level of the singularity.

I like what Elon Musk said, something like: "We're summoning a demon with AI. It's like the story of the guy who has his pentagram and holy water and summons a demon, thinking he can control it. It doesn't work out."

If we develop a quantum processor with AI capabilities, it will immediately outstrip the mental capabilities of any human. Anything that a human could do, it could simply do better, and that is the most benign scenario. What would society look like then? What would our place be?

It's predicted that it will be able to do something like 15,000 years of mental work each week, every week, never needing to sleep, not constrained by biology, never dying, and it may be able to replicate itself, modify itself, or improve on its own design. What conclusions would it reach while optimizing solutions to various problems? Probably things we would never predict, as it would operate at a level of consciousness unfathomable to any human.
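To put that figure in perspective, here is a rough back-of-envelope calculation (a sketch in Python; the 15,000-years-per-week number is just the claim above, not an established estimate):

# Back-of-envelope: what speedup over a single human does
# "15,000 years of mental work per week" imply?
# NOTE: the 15,000-years figure is the poster's claim, not a measured value.
WEEKS_PER_YEAR = 52.18      # average calendar weeks in a year
years_per_week = 15_000     # claimed human-equivalent output per week

speedup = years_per_week * WEEKS_PER_YEAR
print(f"Implied speedup over one human: ~{speedup:,.0f}x")
# -> Implied speedup over one human: ~782,700x

In other words, the claim amounts to a machine thinking roughly three-quarters of a million times faster than one person, before even counting self-replication.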

When AI beat the world champion at Go, a game much more complex than chess, it used very irregular patterns that the world's best and most experienced players had never seen before. It developed its style of play over millions and millions of simulated games, competing with itself.
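For anyone curious what "competing with itself" looks like in code, here is a minimal, heavily simplified sketch in Python: tabular learning on tic-tac-toe rather than Go, with a plain Monte Carlo value update instead of AlphaGo's actual combination of deep neural networks and Monte Carlo tree search. The point is only to show the self-play loop.

# Toy self-play learner for tic-tac-toe. This is NOT AlphaGo's method
# (which combined deep networks with tree search); it is a bare-bones
# sketch of the self-play idea: play yourself, score the finished game,
# and nudge the value of every move you made.
import random
from collections import defaultdict

Q = defaultdict(float)     # (board, move) -> learned value of that move
ALPHA, EPSILON = 0.3, 0.1  # learning rate, exploration rate

LINES = [(0,1,2),(3,4,5),(6,7,8),(0,3,6),(1,4,7),(2,5,8),(0,4,8),(2,4,6)]

def result(board):
    """Return 'X' or 'O' for a win, 'draw' when full, else None."""
    for i, j, k in LINES:
        if board[i] != ' ' and board[i] == board[j] == board[k]:
            return board[i]
    return 'draw' if ' ' not in board else None

def choose(board):
    """Epsilon-greedy move selection over the learned values."""
    moves = [i for i, c in enumerate(board) if c == ' ']
    if random.random() < EPSILON:
        return random.choice(moves)                 # explore
    return max(moves, key=lambda m: Q[(board, m)])  # exploit

def self_play_episode():
    board, player, history = ' ' * 9, 'X', []
    while True:
        move = choose(board)
        history.append((board, move, player))
        board = board[:move] + player + board[move + 1:]
        outcome = result(board)
        if outcome:
            # Monte Carlo update: winner's moves get +1, loser's -1, draws 0.
            for b, m, p in history:
                reward = 0.0 if outcome == 'draw' else (1.0 if p == outcome else -1.0)
                Q[(b, m)] += ALPHA * (reward - Q[(b, m)])
            return
        player = 'O' if player == 'X' else 'X'

for _ in range(50_000):   # AlphaGo's "millions of simulations", scaled way down
    self_play_episode()
print(f"{len(Q):,} board/move values explored purely through self-play")

After enough episodes the table encodes lines of play no single game would reveal, which is the (very scaled-down) analogue of AlphaGo discovering patterns unfamiliar to human players.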

We would basically be creating a God, and in the process relinquishing our role as the most powerful species on the planet. We would quite literally be at its mercy.

In the earlier stages, any country able to militarize AI will be the most powerful country in the world. It would be like the first country able to develop the atomic bomb, able to set the global agenda, like the U.S. in the 1940s. AI will vastly outstrip the atom bomb in terms of its military capabilities, so a few immediate concerns arise:

As nationalistic competition is alive and well, each nation will compete to develop it, and if any nation succeeds, all the others will scramble to develop their own version for their own national defense. Since the AI will be developed with military capabilities, its architecture needs to be designed very carefully to ensure its structural integrity, so that it cannot be compromised by terrorist groups or hostile governments, or 'go rogue'. And because multiple nations will be competing, there will be multiple iterations, each designed to outperform its competitors, which in and of itself decreases the likelihood that every iteration will have an ideal architecture. There will also be a lot of experimentation going on, some of which may yield unpredictable results.

I am reminded of what Einstein reportedly said: "No problem can be solved by the same level of consciousness that created it."
So, if a problem gets created by a level of consciousness beyond our comprehension, we may be out of luck in terms of solving it!
AI seems to me very far into the future. As of now, humans are incapable of producing anything close to intelligent. I doubt it's even possible by means of computer science. Human intelligence is vastly different from the logical mechanics of computer networks.

Also, our socio-economic structures don't really allow for such development. It seems more likely that we are moving towards technological stagnation than technological singularity.
Hopefully they will just kill specific people.