

Registered · Joined · 1,313 Posts

Registered · Joined · 1 Post
This... fuck, it's somewhere between genius and insanity. Of course, such crap should be removed from our world, but this advertising is, in my opinion, questionable :) Still, the topic of robots is well presented and clearly shows how AI can be useful to people in the future. My colleagues and I are also designing a small smart robot meant to control a smart home. I was inspired by this idea when I installed an automation system in my house with the help of my friends. Our robot will be able to measure and control the temperature depending on the conditions, darken the windows, refresh and humidify the air in the house, water the soil and lawn in the yard, and a bunch of other functions. It will even be able to turn on a robot vacuum cleaner on a schedule :) I hope we will succeed, as we still have some problems with its development.
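For fun, here's a toy sketch of the kind of rule-and-schedule control loop such a home robot could run; every device call and threshold below is made up for illustration, not a real API.

```python
# Toy sketch of a rule-and-schedule loop for a hypothetical home robot.
# All device functions and thresholds here are invented for illustration.
import datetime

def read_temperature() -> float:
    return 26.5  # stand-in for a real sensor reading

def control_step(now: datetime.datetime) -> list[str]:
    """Decide which actions to trigger on one pass of the loop."""
    actions = []
    if read_temperature() > 25.0:
        actions.append("start cooling")
        actions.append("darken windows")  # cut solar heating
    if now.hour == 7:
        actions.append("water the lawn")
    if now.weekday() == 5 and now.hour == 10:  # Saturday 10:00
        actions.append("run robot vacuum")
    return actions

print(control_step(datetime.datetime(2020, 6, 6, 10)))  # a Saturday morning
```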
 

Premium Member · Joined · 39,528 Posts
[Image] Artificial intelligence is moving at full speed into Norwegian companies and public enterprises. The picture is from a conference on artificial intelligence organized by Telenor and Abelia. (Photo: Gorm Kallestad / NTB scanpix)

Artificial intelligence becomes male-chauvinist racists. What can we do to stop it?
The robot Tay became a right-wing extremist in 24 hours. Is the solution a separate supervisory authority for robots?
Eldrid Borgan, journalist, forskning.no. Friday 29 November 2019, 04:31

It was not long before the chat robot "Tay" went from writing about cute puppies to saying that all feminists should burn in hell and that Hitler was really right. Microsoft was behind the artificially intelligent robot, which learned from the people it communicated with on Twitter. She got her own account on social media in 2016, but it was deleted after less than a day.

It is easy to laugh at "Tay", but the truth is that she is just the tip of the iceberg of artificial intelligence that discriminates against people on the basis of skin color, surname, gender, age and health.

If we are to understand why this happens, we must understand how artificial intelligence is created, says Morten Goodwin, a researcher at the University of Agder.

- In most computer programs, it is people who decide absolutely everything that happens. But an artificial intelligence program learns by itself, from the world around it. That is, from us humans.

This has led to African Americans in Wisconsin, USA, being given more severe punishments than whites, even when convicted of exactly the same crime, according to the American online newspaper ProPublica. The artificial intelligence, which rates how dangerous criminals are, has rated African Americans as "higher risk", and judges have therefore given them more severe punishments.

So what is going on? And what can we do to stop it?
Secret computer program
The company behind the artificial intelligence used by Wisconsin judges has denied discrimination. At the same time, it has not given victims or anyone else access to its computer program, so what actually happens inside it is speculation.

The commercial program has probably been trained on old convictions. So if African Americans were considered more dangerous back when judges of flesh and blood decided for themselves, the artificial intelligence will imitate that. But that does not mean the company made a racist program on purpose. Even if the technologists removed the "race" labels when they trained the artificial intelligence, this information is often hidden in all the other information that exists about previous convicts.

"Artificial intelligence that does not know sensitive information can still behave in a discriminatory manner, because it can find the sensitive information using other information," writes Aria Khademi, a researcher at Penn State University, USA, in an email to forskning.no.

The information in question can be things like which area people live in or how much they earn, which can be enough to sort people by whether they are, for example, white or African American.
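To make the proxy problem concrete, here is a minimal sketch in Python. Everything in it is a synthetic assumption invented for illustration (the groups, the zip codes, the 90% correlation), not data from the Wisconsin case: even though the sensitive attribute is never a model input, a trivial majority-per-zip "model" recovers it.

```python
import random
from collections import Counter

random.seed(0)

# Synthetic people: the sensitive attribute ("group") is never a model input,
# but residential zip code correlates strongly with it.
def make_person():
    group = random.choice(["A", "B"])
    home = ["55401", "55402"] if group == "A" else ["55403", "55404"]
    away = ["55403", "55404"] if group == "A" else ["55401", "55402"]
    zipcode = random.choice(home if random.random() < 0.9 else away)
    return {"group": group, "zip": zipcode}

people = [make_person() for _ in range(1000)]
train, test = people[:800], people[800:]

# "Training": remember the majority group per zip code, a stand-in for the
# statistical association a real classifier would pick up from proxy features.
majority = {}
for z in {p["zip"] for p in train}:
    counts = Counter(p["group"] for p in train if p["zip"] == z)
    majority[z] = counts.most_common(1)[0][0]

# The sensitive attribute is recovered from the proxy alone (about 90% here).
accuracy = sum(majority.get(p["zip"]) == p["group"] for p in test) / len(test)
print(f"group recovered from zip code alone: {accuracy:.0%} accuracy")
```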


[Image] The artificially intelligent Tay quickly gained right-wing extremist views because internet trolls stepped in to feed them to her. (Photo: Wikimedia Commons)


Leaves traces in job applications
We leave such traces of which group in society we belong to all the time, says Goodwin. And artificial intelligence detects them very easily. This also applies when we write job applications. Artificial intelligence sorts people into boxes by gender, age and background, even if we have not stated any of this in the application.

- I, who am 40 years old, write differently than someone who is 20. So even if I have not written my age, that information will come out, Goodwin says.

And it is precisely in the labor market that Goodwin believes we Norwegians will soon notice the onset of discriminatory artificial intelligence. Many companies are developing artificial intelligence that can suggest whom to hire. Such technology can quickly discriminate based on gender, skin color, surname or age, without the bosses necessarily being aware that it is happening.

- It is very important to understand that artificial intelligence can be discriminatory. If not, we will get cases where certain groups of people are discriminated against.

Calls for a "Food Safety Authority" for artificial intelligence

Goodwin therefore calls for a separate supervisory body that can show up unannounced and test whether the computer programs Norwegian companies use are discriminatory. He compares it to the Norwegian Food Safety Authority.

- From time to time, the Norwegian Food Safety Authority enters restaurants and checks that there are no rats by the door and no dog poop in the kitchen. You can do that with artificial intelligence too.

But is such an oversight too late? Too often, artificial intelligence ends up being even more racist or sexist than humans, as with Microsoft's chatbot "Tay", which became a Nazi in the space of a day. This is because artificial intelligence is very good at reinforcing what it considers to be the most important information.
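As a rough illustration of what such unannounced testing could look like, here is a hedged sketch of paired testing: feed the model two applications that are identical except for one group-revealing cue, and flag diverging scores. The `score_application` function is a hypothetical stand-in for a company's hiring model, not any real system.

```python
# Paired-testing sketch for an algorithm audit. `score_application` is a toy
# stand-in model; a real audit would query the company's actual system.

def score_application(text: str) -> float:
    # Toy model that penalizes a phrase correlating with older applicants.
    return 0.8 - 0.3 * ("30 years of experience" in text)

def paired_audit(template: str, variant_a: str, variant_b: str,
                 tolerance: float = 0.05) -> bool:
    """Return True if the model treats two otherwise-identical variants alike."""
    gap = abs(score_application(template.format(variant_a))
              - score_application(template.format(variant_b)))
    return gap <= tolerance

template = "Motivated developer with {} building web services."
ok = paired_audit(template, "5 years of experience", "30 years of experience")
print("passes paired audit" if ok else "flagged: scores differ on matched pair")
```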

Excluded all female candidates
Goodwin cites chess robots as a good example of how good artificial intelligence is at detecting patterns. These are also artificial intelligences, which use information about chess games played by humans to learn to play.

- They are better than Magnus Carlsen at playing chess, even though they have trained on Magnus Carlsen's chess games. Because they find common patterns that work well.

Something similar happened when artificial intelligence was supposed to help Amazon bosses find the best candidates to work for them. The goal was to avoid bad hires. The result was that the robot weeded out all the female job seekers, according to the news agency Reuters.
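A minimal sketch of this failure mode on synthetic data (the hiring history, rates and rules below are invented for illustration, not Amazon's actual system): when one group is rarely hired in the training data, a rule that simply excludes that group scores better on the history than the fair rule, so a naive learner prefers it.

```python
# Sketch of how a learner can treat a minority group as "noise" (synthetic data):
# a one-rule learner trained on historical hires that were mostly men.
import random

random.seed(1)

# History: qualified people of both genders, but past bosses rarely hired women.
history = []
for _ in range(500):
    gender = random.choice(["m", "f"])
    skilled = random.random() < 0.5
    hired = skilled and (gender == "m" or random.random() < 0.1)
    history.append((gender, skilled, hired))

def rule_accuracy(rule):
    return sum(rule(g, s) == h for g, s, h in history) / len(history)

rules = {
    "hire if skilled": lambda g, s: s,
    "hire if male and skilled": lambda g, s: s and g == "m",
}
for name, rule in rules.items():
    print(f"{name}: {rule_accuracy(rule):.0%} training accuracy")
# The gendered rule scores higher on this history, so a naive learner prefers
# it: the few women who were hired look like noise rather than signal.
```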
[Image] Morten Goodwin researches artificial intelligence at the University of Agder. (Photo: Ole Alvik)



[Image] Job seekers queuing to work for Amazon during "Amazon Jobs Day" in Massachusetts, USA, in 2017. (Photo: REUTERS / Brian Snyder / File Photo)

- The artificial intelligence is even "better" than the data you put in. It finds something in common that is good for everyone. In the Amazon example, it counted the women who were hired as noise, Goodwin explains.

The computer program had learned from information about previous hires in the company, and those who were hired had one thing in common: they were usually men. While Amazon executives did actually hire a woman every now and then, the artificial intelligence simply dropped this entire group of applicants.

Angry dark-skinned men

So how can those who create artificial intelligence ensure that it does not discriminate against groups of people?

- We must have more diversity and inclusion at all levels in the technology sector, writes Lauren Rhue in an e-mail to forskning.no. She researches discriminatory technology at the University of Maryland.

In one of her research projects, she has tested technology that recognizes faces and the emotions they express. She found that people with dark skin were more often classified as angry or scared than people with light skin. The study has not yet been published in a scientific journal, but is available on SSRN. Using press photos of American basketball players, she tested how two commercial face-recognition programs interpreted the smiling men.

[Image] Basketball players from the Toronto Raptors look cheerful as they stand around the NBA Cup before the decisive match against the Golden State Warriors. (Photo: Frank Gunn / The Canadian Press via AP)

The fair-skinned players were interpreted as happier, and with fewer negative emotions, than the dark-skinned players. This can have consequences for people: face recognition is spreading and is used, among other things, to identify threatening people in a crowd, according to Rhue.

Difficult to give the machines an objective answer key

Rhue also believes that technology companies need to become more aware of how the "answer key" they feed into artificial intelligence actually came into being. It is computer engineers who have put labels such as "cheerful", "angry" and "scared" on the faces the artificial intelligence has practiced on.

- The people who assign the labels must also be diverse and represent many different groups, Rhue writes.

This does not mean that those who made the programs are racists, she states. But it is known that it is harder to read the emotions in faces that look different from your own. So when (mostly) white men have decided which emotions the faces show, and in addition most of the faces the artificial intelligence has practiced on are white, such biases can occur.

Artificial intelligence can detect discrimination

But maybe the solution to the problem is also technological? Aria Khademi is one of the researchers behind a new artificial intelligence that can detect discrimination. The computer program looks for discrimination by, for example, testing whether the salaries of employees in a company are fair. It can pick out a female employee and check: "Would this woman make more money if she were a man?" In this way, computers can act as moral police over other computers. You can read the study among the articles published at the World Wide Web Conference in 2019.

But could this kind of control be built into all artificial intelligence, so that discrimination is weeded out right away?

- It is a promising next step, which we intend to take. But to do that, we must first develop a tool that can capture discrimination completely reliably. Only then can we incorporate it into our future artificial intelligence systems, writes Khademi.

Needs human intervention

In any case, it is important not to believe that computers make objective decisions, Goodwin believes. And even if the technologists become more aware of diversity when they develop new artificial intelligence, the authorities should carry out follow-up inspections, he believes. His proposal is that this supervisory body could be called the «Algorithm Authority» and placed under the Norwegian Data Protection Authority.

Bjørn Erik Thon, director of the Norwegian Data Protection Authority, believes there is no need to establish a separate audit for this.

- It is part of our mandate to carry out inspections where we check all aspects of a case, including whether someone discriminates on the basis of gender, age, skin color or whatever it may be. We can do this either as random samples, through broader supervision of an entire sector, or in combination with other topics such as the duty to provide information, information security or the right of access, Thon writes in an e-mail to forskning.no.

At the same time, decisions about people's lives are increasingly being made by artificial intelligence here in Norway as well. For example, the Norwegian Directorate of Immigration has begun to use such technology when deciding whether people should get family reunification, according to this article from Apollon, published on forskning.no.

A future of artificial morality?

Some researchers around the world also hope to be able to give computers "artificial morality" in the future. If they succeed, the problem may be solved: the idea is that instead of forcing computers not to discriminate, we could train them to understand what good morals are. At best, computers would be even better than humans at making good moral judgments.

But Khademi is skeptical that artificial intelligence will detect discrimination that we humans have not told it to look for.

- Even people, with our intelligence, cannot detect something we are not aware of or have not been trained to look for. Therefore, we cannot expect an artificial intelligence to detect discrimination it is not aware is happening, writes Khademi.

And part of the challenge is that morality is not an absolute truth.

- Our understanding of ethics and morality is constantly changing. If we hard-code rules, they must be updated all the time, because suddenly #metoo or same-sex marriage comes along, Goodwin says.
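A minimal sketch of the counterfactual test Khademi describes, with a hypothetical stand-in salary model (the field names and the built-in bias are invented for illustration; a real audit would query the actual system): hold everything about a person fixed, flip only the protected attribute, and compare the predictions.

```python
# Counterfactual fairness check, sketched on a toy model. A nonzero gap when
# only the protected attribute changes flags possible discrimination.

def predict_salary(person: dict) -> float:
    # Hypothetical stand-in for a company's salary model (with a hidden bias).
    base = 40_000 + 2_000 * person["years_experience"]
    return base * (1.0 if person["gender"] == "m" else 0.9)

def counterfactual_gap(person: dict) -> float:
    """How much does the prediction change if only the gender field changes?"""
    flipped = dict(person, gender="m" if person["gender"] == "f" else "f")
    return abs(predict_salary(person) - predict_salary(flipped))

employee = {"gender": "f", "years_experience": 10}
print(f"counterfactual salary gap: {counterfactual_gap(employee):,.0f} kr")
```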

 


Registered · INFP 6w5 629 sp/sx · Joined · 1,932 Posts
it is important not to believe that computers make objective decisions, Goodwin believes.
If the public is to discuss this issue, they need to understand how an AI learns. An AI is trained to balance simplicity (generalization, i.e. bias) against precision. One cannot simply program an AI to give out exact answers that match what it was trained on; that would just be an ordinary program. Instead, it finds a general pattern in the data it was given.
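A minimal sketch of that distinction on toy data: a lookup table reproduces its training examples exactly but has no answer for anything new, while even a very simple learner extracts a pattern that covers unseen cases.

```python
# Memorization vs. generalization on toy data (examples of x -> 2x).

train = {1: 2, 2: 4, 3: 6, 4: 8}
unseen = 7

# "Just a program": exact memorization via a lookup table.
lookup = dict(train)
print(lookup.get(unseen, "no answer"))  # fails: 7 was never seen

# A (very) simple learner: estimate the slope of y = a*x from the data.
a = sum(y / x for x, y in train.items()) / len(train)
print(a * unseen)  # generalizes to the unseen input: 14.0
```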

Thus, the goal is not really to make an unbiased recruiting robot; it is to make a hiring robot that tends toward inclusiveness. If we want to stop a robot from hiring only men or only women, only black or only white applicants, the goals we set should balance hiring the best candidates for the job against hiring a diverse group of people.
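One hedged way to encode that balance, sketched on invented candidate data (the scores, groups and the 0.5 penalty weight are all assumptions): rank shortlists by predicted quality minus a penalty for deviating from parity, and let the weight set the trade-off.

```python
# Sketch of a selection objective that trades candidate quality against
# group balance. All candidate data and weights are invented for illustration.
from itertools import combinations

candidates = [
    {"name": "A", "group": "m", "score": 0.90},
    {"name": "B", "group": "m", "score": 0.85},
    {"name": "C", "group": "f", "score": 0.80},
    {"name": "D", "group": "m", "score": 0.75},
    {"name": "E", "group": "f", "score": 0.70},
]

def objective(chosen, lam=0.5):
    quality = sum(c["score"] for c in chosen) / len(chosen)
    share_f = sum(c["group"] == "f" for c in chosen) / len(chosen)
    imbalance = abs(share_f - 0.5)  # distance from gender parity
    return quality - lam * imbalance

best = max(combinations(candidates, 2), key=objective)
print([c["name"] for c in best])  # lam trades raw scores against balance
```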
 

Premium Member · Joined · 39,528 Posts
If the public is to discuss this issue, they need to understand how an AI learns. An AI is trained to balance simplicity (generalization, i.e. bias) against precision. One cannot simply program an AI to give out exact answers that match what it was trained on; that would just be an ordinary program. Instead, it finds a general pattern in the data it was given.

Thus, the goal is not really to make an unbiased recruiting robot; it is to make a hiring robot that tends toward inclusiveness. If we want to stop a robot from hiring only men or only women, only black or only white applicants, the goals we set should balance hiring the best candidates for the job against hiring a diverse group of people.
That would be a cobot
 