

Banned · 111 Posts · Discussion Starter #1
Paper Topic On AI: Suggestions?! <-- Interrobang, to make an "interesting title."

I'm trying to decide on a paper topic; it's supposed to be 8 pages and about Artificial Intelligence somehow, but it needs to be less broad than that.

...Help? You guys are good at...things. And lots of 'em.

*note: I've actually already written a paper on AI perception in the West, but the content strikes me as 10 pages of no-shit-Sherlock... I'm holding onto this as my other option.
*note II: I'm actually interested in AI.
*note III: I'm an Arts/Humanities person e ue;... So I have some limits; of course, if you suggest something I can't do, I'll probably look into it later/anyway.


Wonder if I should have just posted on NT forum base... e 3e;
Well, really, anyone is welcome to help me :p.
 

Banned · 459 Posts
What class, and at what level, is the paper for? I'm not American, so I have to ask: does arts/humanities include philosophy? You could compare the concept of an advanced general intelligence with what different philosophers saw as the ideal ruler. For instance, Plato famously held aristocracy higher than democracy, in the context of Athens. He held that an aristocratic monarch who was raised around virtue and philosophy would care for the interests of his people, and because he would strive at the difficult task of bettering his community, he wouldn't want to deceive or abuse his citizens. There is, of course, more to it than that, but that's the most basic idea.

Sounds like a fancy enough paper topic to me, at least.
 

Banned · 111 Posts · Discussion Starter #3
Fuck, I typed up a lot that just disappeared...
Okay, shortened:
Class is pretty irrelevant, trust me. Let's say it's something akin to a philosophy class...
Level being college sophomore (I'm 20; I don't know how/why that would help, but there you go).

So did you mean something like...."Should humans be creating an AI to rule--with the hypothetical AI being 'more intelligent'/quicker processing and all? Should we be trying to create an AI to be like the philosophers' 'ideal ruler'?"

Please elaborate...?
(Also thank you.)

P.S. Oh ho, it's you! :3 Yes, it's one of those feeling types coming to you for help again.
 

Banned · 459 Posts
"AI in Governance - a fulfilment of Plato's Philosopher king?"

A short discussion on the general merits of an AI in a position of rulership (such as stability, incorruptibility, and superior intellect and understanding of every field of interest, to name a few possibilities), and perhaps interweaving some of the conclusions from your paper on western attitudes towards an artificial general intelligence. A brief discourse on "the perfect leader" in the eyes of classical philosophers like Plato (who is the first and only one to spring to mind; it's been some years since I studied philosophy, mind you), and the rest of the paper almost writes itself, doesn't it?

And you never did reply back again, you minx. The impudence nearly dented the cold, metallic crust of my unfeeling heart.

Anyway, being an insatiably curious ENTP, I think I simply described a paper I'd love to read, so whether it's a good idea or not, I don't know, but it's a twist on the AI debate I haven't tasted before. Piques my interest.
 

Banned · 492 Posts
Ya know how in "I, Robot" the big computer in control was saying that it was the logical choice to destroy humans to save the world? Maybe you can write about which is more important, morality or logic. It's got a lot to do with emotion, something AI might not have.

Maybe talk about emotion, and how we as humans are controlled by it. For us, thinking comes AFTER feeling, but for AI it could be only thinking. Like a big problem-solving calculator.

Or maybe, if you really wanna go for gold, you can create a valuing system and translate human issues into numeric values to be put into a math equation. This would be the main way a computer solves problems.
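
A toy version of that valuing system could look like the Python sketch below; the issue names, weights, and options are all invented for illustration, not any real system:

Code:
# Toy "valuing system": assign numeric values to human issues and
# combine them in one weighted equation the machine can maximize.
# All issue names, weights, and options here are invented examples.

WEIGHTS = {"safety": 0.5, "freedom": 0.3, "prosperity": 0.2}

def utility(option):
    """Weighted sum: reduces a messy human question to one comparable number."""
    return sum(WEIGHTS[issue] * value for issue, value in option["values"].items())

options = [
    {"name": "strict curfew", "values": {"safety": 0.9, "freedom": 0.2, "prosperity": 0.5}},
    {"name": "do nothing",    "values": {"safety": 0.4, "freedom": 1.0, "prosperity": 0.6}},
]

best = max(options, key=utility)
print(best["name"], round(utility(best), 2))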
 

Banned · 111 Posts · Discussion Starter #6
Cool...

"AI in Governance - a fulfilment of Plato's Philosopher king?"

A short discussion on the general merits of an AI in a position of rulership (such as stability, incorruptibility, and superior intellect and understanding of every field of interest, to name a few possibilities), and perhaps interweaving some of the conclusions from your paper on western attitudes towards an artificial general intelligence. A brief discourse on "the perfect leader" in the eyes of classical philosophers like Plato (who is the first and only one to spring to mind; it's been some years since I studied philosophy, mind you), and the rest of the paper almost writes itself, doesn't it?
Yeah, sounds like a paper I'd like to read. Oh ho ho, but now I get to write it maybe... :3c

And you never did reply back again, you minx. The impudence nearly dented the cold, metallic crust of my unfeeling heart.

Anyway, being an insatiably curious ENTP, I think I simply described a paper I'd love to read, so whether it's a good idea or not, I don't know, but it's a twist on the AI debate I haven't tasted before. Piques my interest.

Hahaha, I have never been called a minx before. (I laughed and confused my roommates.) And it was because I got distracted trying to read Mayan script... >_>;; My sincerest apologies! (I will probably take this to your page though, because it's getting irrelevant to the thread. xD)

But as for the paper, that was honestly what I was hoping for--getting a description of something you forum-people might want to read. Also, pretty sure the professor is an xNTP.

Ya know how in "I, Robot" the big computer in control was saying that it was the logical choice to destroy humans to save the world? Maybe you can write about which is more important, morality or logic. It's got a lot to do with emotion, something AI might not have.

Maybe talk about emotion, and how we as humans are controlled by it. For us, thinking comes AFTER feeling, but for AI it could be only thinking. Like a big problem-solving calculator.

Or maybe, if you really wanna go for gold, you can create a valuing system and translate human issues into numeric values to be put into a math equation. This would be the main way a computer solves problems.
Huh, that's interesting... Actually, my ENTP friend in the class is doing something like that for his paper.
 

Registered · 721 Posts
Do AI as portrayed in TV/movies. Start with old stuff from when computers started being big (Star Trek has an episode, "The Ultimate Computer," that's OK), and then you can do I, Robot, The Matrix, and maybe Bicentennial Man or A.I. for a nicer view. It'd steer clear of all the sciencey stuff and take the issue from a different angle.

As a side note, if anyone DOES want to get into the sciencey stuff of AI, I highly recommend Gödel, Escher, Bach. It will change the way you think. And the way you think about thinking. And the way you think about thinking about thinking. Ad infinitum.
 

Banned · 111 Posts · Discussion Starter #8
Do AI as portrayed in TV/movies. Start with old stuff from when computers started being big....... Bicentennial Man or A.I. for a nicer view. It'd steer clear of all the sciencey stuff and take the issue from a different angle.

...... I highly recommend Gödel, Escher, Bach. It will change the way you think. And the way you think about thinking. And the way you think about thinking about thinking. Ad infinitum.
Actually, my first paper--the one I mentioned in the notes--goes pretty in-depth on that. c: Also, all of those things were pretty good.

And Gödel, Escher, Bach? Hmm, I'll look into that; it sounds fascinating from your description.
 

Registered · 1,264 Posts
I'm trying to decide on a paper topic; it's supposed to be 8 pages and about Artificial Intelligence somehow, but it needs to be less broad than that.

...Help? You guys are good at...things. And lots of 'em.

*note: I've actually already written a paper on AI perception in the West, but the content strikes me as 10 pages of no-shit-Sherlock... I'm holding onto this as my other option.
*note II: I'm actually interested in AI.
*note III: I'm an Arts/Humanities person e ue;... So I have some limits; of course, if you suggest something I can't do, I'll probably look into it later/anyway.

Wonder if I should have just posted on NT forum base... e 3e;
Well, really, anyone is welcome to help me :p.
How about mixing this topic with your humanities and philosophy studies and rolling it up into something like "Functional AI's Likely Impact on Theology and Organized Religion"?
 

Banned · 111 Posts · Discussion Starter #11
@Psyphon: Oh, good idea!
@crazyeddie: Oh, looks like pretty interesting stuff. *Taking a look at it now.* Huh, maybe I'll grab his book...
 

Registered · 450 Posts
Charles Stross has some stuff on his blog about why AIs won't be people-like. (People get bored. You don't want an AI assigned to some task getting bored.)
I'm going to derail your thread a tad; I apologize in advance.

AIs will be somewhat people-like until people become more like what we classically think of as "machines". It comes down to motivation and values. Programming "feelings" and motivation is actually pretty simple in concept, and I'm not talking about that stupid shit you see in most movies. As an example, if you were to tell me that I had to push some button for all eternity or my children were going to die... I'd probably be sitting there pushing that damn button, because not doing so would remove my reason for being and all that I care about as a singular "being". Just make sure I'm not a suicidal manic-depressive and I will probably do my task unfailingly. I will admit it's hard for me to even conceive of doing something for an "eternity", but that's more related to the idea of time passage than to my given purpose. But humans and what it means to be human will keep changing and evolving, and the lines between human and "robot" or "machine" will become all fuzzly wuzzly (just some of my thoughts anyhow).

As human beings we ARE machines; we are self-replicating bio-machines. Something I've been wondering about, since I see the integration of "machinery" and "bio-machinery" as pretty much locked in for the future: a lot of our activity is based on the fact that we have biological urges. What happens if we advance to a point where we don't eat, sleep, age, or feel the "passion", and other such things? Where is the motivation to do anything, when the core of everything we do is based on "survival" and "survival of self"? I mean, we could program in other urges and directives, but wouldn't that seem a bit redundant? There are other things I have in mind, but it's entirely too much work to even attempt to convert that crap to words.

Ya know how in "I, Robot" the big computer in control was saying that it was the logical choice to destroy humans to save the world? Maybe you can write about which is more important, morality or logic. It's got a lot to do with emotion, something AI might not have.

Maybe talk about emotion, and how we as humans are controlled by it. For us, thinking comes AFTER feeling, but for AI it could be only thinking. Like a big problem-solving calculator.
First, there is no way to "save the world". Even if we didn't exist, the Earth will eventually poop out anyway. There have been several mass extinctions, and there are periods of massive die-off from which new lifeforms emerge. Otherwise, the planet is basically a little shit ball in space, and any number of rogue objects could slam into it, with or without us.

Secondly, in the movie "I, Robot", her logic was that humans are fucking retarded, want to kill themselves, and do stupid shit that goes against her core directives. Therefore, she wanted to enact rules to preserve/protect more humans and, in essence, take away their "freedoms". She saw the lead designer dude as an obstacle to what became her primary objective, so she made him a prisoner until she achieved it. Old dude didn't like this, so he set up some complex puzzle thing for Spooner to follow, and even went so far as to create a robot with an emotional value system mirroring his own. I actually agreed with her somewhat and understood the logic. I would have had to take that AI tyrant down via less dramatic and nerdy means, though. But that's just the INTJ talking... lol.

Feeling is a form of thinking, or rather "backup information" for thinking/logic, and again is something that could easily be programmed. Feelings are sort of a system of perverted instincts adapted to social living. Both aspects were/are required for our survival as a social species, and both aspects make us stronger, IMO. You could have a system with "negative" or "positive" sliders according to what you want the machine to "feel". For humans it's a shot of dopamine in the reward centers, or the various stress-response pathways and the other antagonizing neurotransmitters and stuff too complex for me to go into.
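
For what it's worth, here's a bare-bones Python sketch of that slider idea; the stimulus names and weights are made up purely for illustration:

Code:
# Toy "feeling" module: one valence slider nudged up or down by
# weighted positive/negative stimuli, loosely imitating reward and
# stress signals. Stimulus names and weights are invented examples.

class Feelings:
    def __init__(self):
        self.valence = 0.0  # -1.0 (distress) .. +1.0 (reward)

    def feel(self, stimulus):
        weights = {"task_done": 0.3, "goal_blocked": -0.4, "praise": 0.2}
        self.valence += weights.get(stimulus, 0.0)
        self.valence = max(-1.0, min(1.0, self.valence))  # clamp to range

agent = Feelings()
agent.feel("task_done")
agent.feel("goal_blocked")
print(agent.valence)  # slightly negative after one setback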

Logic is usually more important as a primary, though morality has a place alongside it. I'd also argue that morals are based on a weird sort of slightly obscure survival logic. I could outline the entire idea, but this is already pretty long.

Also, as a side note, I don't see a purely AI governing system as a good option. It's hard for me to fully articulate why, but I already have a strong urge to destroy it, and I don't exactly understand why, lol. I need to reflect on this.
 

Registered · 3,622 Posts
Also, as a side note, I don't see a purely AI governing system as a good option. It's hard for me to fully articulate why, but I already have a strong urge to destroy it, and I don't exactly understand why, lol. I need to reflect on this.
The movies Terminator and The Matrix come to mind... lol

I liked the concept of assigning values for positive and negative responses: reward centers for the processor. When it heats up, trigger a negative response; then, when it goes back to full power, give it a boost. Now we'll have computers that are addicts themselves. :D
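
Just for fun, a Python sketch of that thermal "addiction" loop; the temperature reading is simulated here (real sensor access varies by platform), and the threshold is made up:

Code:
import random

# Toy "reward center" driven by processor temperature: heat triggers
# a negative response, comfortable running gives a boost, as above.
# read_cpu_temp() is a simulated stand-in, not a real sensor call.

def read_cpu_temp():
    return random.uniform(40.0, 90.0)  # pretend degrees Celsius

def reward_signal(temp_c, threshold=75.0):
    """Negative response when hot, positive boost when running cool."""
    return -1.0 if temp_c > threshold else 0.5

mood = 0.0
for _ in range(5):
    temp = read_cpu_temp()
    mood += reward_signal(temp)
    print(f"temp={temp:.1f}C  mood={mood:+.1f}")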

Defining intelligence and thinking makes the most sense. If a machine can think for itself and desires to live, should it be given any rights? Aren't we just enslaving them at that point? If I go around using people (which happens at times), it is considered unethical. Once a tool advances past the point of being just a tool, shouldn't the way we evaluate it change as well?

And of course there is the superiority issue. What makes man so damn great? Are we so afraid of creating something that could be superior to us that it may prevent us from doing so?

I see this turning into a religious war when AIs use logic that defies traditional ethics. They will be demonized and feared. Ethics are a joke anyway, only hindering progress. I doubt there is a person alive who hasn't caved, cast away, or broken several ethical boundaries they "promised" never to cross. Society as a whole has changed its viewpoint on ethics time and time again; they are trivial boundaries that those without common sense will ignore anyway.
 

Registered · 450 Posts
I'm not too concerned about the movies so much; I think the real issue is more about specialization, and that human beings would ruin the system because they would inject their own core beliefs. This is not to say that sort of governing system wouldn't be highly advantageous, but people are imperfect, and I don't see how a group could be trusted not to mess it up until the more negative aspects of humanity (greed, megalomania, control grabs) are no longer around.

I believe there should be a point when the neural mimicry is so close that they should be recognized as a different form of sentience and given "rights". I'm not really sure how to define that point, and as a species we have never had to deal with this sort of problem. I mean, it's cruel if you have a true "positive" or "negative" feedback system and you put that "being" into a state where it's being perpetually tortured. I think it's bad enough when people do that to other people, or critters. If you have a unit that's running around simulating emotional states but there is no true feedback loop, then it's a bit different. It's no different than a fancy animated information-shuffling calculator.
 

Registered · 3,622 Posts
I'm not too concerned about the movies so much; I think the real issue is more about specialization, and that human beings would ruin the system because they would inject their own core beliefs. This is not to say that sort of governing system wouldn't be highly advantageous, but people are imperfect, and I don't see how a group could be trusted not to mess it up until the more negative aspects of humanity (greed, megalomania, control grabs) are no longer around.

I believe there should be a point when the neural mimicry is so close that they should be recognized as a different form of sentience and given "rights". I'm not really sure how to define that point, and as a species we have never had to deal with this sort of problem. I mean, it's cruel if you have a true "positive" or "negative" feedback system and you put that "being" into a state where it's being perpetually tortured. I think it's bad enough when people do that to other people, or critters. If you have a unit that's running around simulating emotional states but there is no true feedback loop, then it's a bit different. It's no different than a fancy animated information-shuffling calculator.
Have you ever heard of the term "the ghost in the machine"? All throughout programming development, those writing the code often found fragments that shouldn't have existed or were created by the program itself. I found this fascinating, because we expect other advanced lifeforms to either look, think, or behave similarly to how we ourselves behave. Outside of this galaxy there may be lifeforms that are not even carbon-based, and if you go into multidimensional theory, imagining a lifeform that exists within six dimensions would be impossible, as we are limited to perceiving only three.

I don't think time should be considered a dimension by its current definition, but that's another topic completely.

Having a computer in charge of making decisions for a nation would be flawed, but having one capable of performing budgets and audits on companies suspected of embezzlement, tax evasion, or improper wage distribution would be ideal. Except there is a flaw with computers similar to humans: they can be influenced by those who know how to speak the language. As it is now, cash is the language of the Senate. Some day it may be C or C++.
 