Autonomous weapons that attack a target without human intervention, systems that analyze behavior in order to influence how society votes, or driverless vehicles that can be hacked to cause an accident. Artificial intelligence, in addition to being a great opportunity for humanity, brings with it a series of risks. As the British physicist Stephen Hawking said in 2017, "it can be either the best or the worst thing ever to happen to the human race." Experts in the field predict that there will be those who try to put this tool to bad use. But should we stop using the technology because of that?
"We need artificial intelligence to survive as a species," Nuria Oliver, who holds a PhD in Artificial Intelligence from MIT and is a member of the Royal Academy of Engineering, told EL PAÍS. Without it, "we are not going to be able to tackle many of the great challenges we face, such as climate change, the aging of the population, the prevalence of chronic diseases and the scarcity of resources."
More than 8,000 scientists and technology specialists — among them Tesla founder Elon Musk and Apple co-founder Steve Wozniak — signed an open letter published in 2015 warning of the dangers of artificial intelligence. Given its great potential, experts argue that it is important to research how to reap its benefits while avoiding its potential risks. For this reason, the European Commission this year appointed a High-Level Expert Group on Artificial Intelligence to study the ethical, legal and social implications of this tool.
"The danger of artificial intelligence is not the technological singularity brought about by some hypothetical artificial superintelligence. The real problems are already here," says Ramón López de Mántaras. The director of the CSIC's Artificial Intelligence Research Institute maintains that these risks concern privacy, autonomy, excessive confidence in the capabilities of machines, bias in learning algorithms, and the inability of systems to account for and justify their decisions in language people can understand.
Sonia Pacheco, director of the Digital Business World Congress, distinguishes between "unintentional" and "intentional" misuse of artificial intelligence. The former can occur when an algorithm is trained on biased data, conditioned by our own knowledge and prejudices. For example, a recruitment AI that Amazon began developing in 2014, trained on the company's hiring records from the previous 10 years, learned that male candidates were preferable and began to discriminate against women.
Ethical dilemmas
"Algorithmic decisions based on data have the potential to improve our decision-making," says Oliver. But when these decisions affect thousands or millions of people, "important ethical dilemmas emerge": "How can we ensure that such decisions and actions have no adverse consequences for people? Who is responsible for those decisions? What will happen when an algorithm knows each of us better than we know ourselves and can exploit that knowledge to subliminally manipulate our behavior?"
Intentional misuse of artificial intelligence could bring physical, political or security risks, according to Pacheco. For example, an autonomous vehicle could be hacked and crashed, or used as a weapon; fake news can "fill social networks with noise in order to manipulate selected groups of users in a targeted way"; and "malicious systems could replicate our voice to generate false information, or generate unreal images of us thanks to image-generation techniques."
Amazon founder and CEO Jeff Bezos stated last April at the George W. Bush Presidential Center's Forum on Leadership that it is "much more likely that artificial intelligence will help us." But he acknowledged the dangers it can entail: "autonomous weapons are extremely frightening."
The greatest risk, according to Oliver, "is not physical robots, but large-scale software systems that can affect millions of people in very little time." "To minimize the risk of these systems being hacked, it is essential to take measures regarding safety, reliability, reproducibility, prudence and veracity," she says.
[Photo: Amazon founder and CEO Jeff Bezos. Jason Redmond (Reuters)]

The role of technology companies
For Francesca Rossi, director of AI Ethics at IBM, it is important that technology companies commit to developing this tool with the purpose of augmenting human intelligence, not replacing it. She also considers it essential that there be a constructive dialogue involving everyone "from those at the forefront of artificial intelligence research to those who represent the most vulnerable sectors of society." Rossi herself is a member of the European Commission's High-Level Expert Group on Artificial Intelligence, which is made up not only of technology experts "but also philosophers, psychologists, sociologists and economists."
Artificial intelligence is cross-disciplinary and can be applied in many fields: from biology, physics, medicine and chemistry to education, production systems, logistics and transport. All the experts consulted agree that there is no reason to stop using this tool. "It is a matter of using it properly and putting the necessary controls in place to prevent malicious uses," says José María Lucía, the partner in charge of EY Wavespace's Artificial Intelligence and Data Analytics Center. To identify a problem and implement a solution, a few questions need answering: "How are we going to detect that something abnormal is happening? What do we do if it happens?" Lucía explains, for example, that "there have been several cases in the stock market where investment algorithms facing unexpected scenarios have ended up creating chaos."
Some technology companies have already established basic principles for developing artificial intelligence ethically. David Carmona, general manager of Business Development in Artificial Intelligence at Microsoft Corporation, explains that his company relies on six basic principles: "fairness, to ensure that algorithms are free of bias; reliability, so that the technology can be trusted; privacy; transparency about the use of data and how systems work; inclusion of all people; and the accountability of the company behind these processes."
The fact that artificial intelligence exists does not mean it will necessarily be put to bad use, the experts consulted stress. "Just as there are ethical codes agreed at a practically international level in biomedicine and other areas of scientific research, we must lay a foundation: a code of conduct, ethical criteria and standards, and a regulatory structure that prevents malicious use," Pacheco concludes.