You wanted to circumvent the 3 laws of robotics. The robot broke the first law of robotics for the first time. Real world applicability

It is utopian to assume that Asimov's Three Laws of Robotics could be laid into the foundation of intelligent programming.<…>Even a robot equipped with some kind of ersatz personality cannot be made completely safe for its environment, given the kind of reflection that cannot be taken as proof but does leave its trace.<…>In principle, a moral assessment of such an act can be given only insofar as one can trace, as far as possible, the causal chains launched by that action. One who looks further ahead and notices consequences of his actions that someone else in his place could not foresee will sometimes act differently from that other. But the further into the future the prediction of an action's consequences reaches, the greater the share of probabilistic factors in the prediction. Good and evil only exceptionally form a polar dichotomy: once a probabilistic element enters the assessment, decision-making becomes harder and harder. A hypothetical robot with an axiologically very strong defense would therefore, amid real events with their inherent complexity, most often freeze, not knowing what to do, thereby resembling those eastern sages who honored inaction above action, since in tangled situations action simply cannot be ethically risk-free. But a robot paralyzed in situations requiring activity would not be the most perfect of technical devices, and so, in the end, the designer himself would be forced to leave some play in its axiological fuses.
However, this is only one obstacle among many, as decision theory has shown relatively recently with examples such as the so-called Arrow paradox<…>. What is logically impossible, neither a digital machine nor any robot, just like a person, will be able to realize. Moreover, and this is the third point of the argument, an intelligent device is nothing more than a self-programming complex, that is, a device capable of transforming, even fundamentally, the previously existing laws of its own behavior under the influence of experience (or learning). And since it is impossible to predict in advance exactly what or how such a device will learn, machine or human alike, it is impossible to guarantee by construction the emergence of ethically perfect safeguards on its activity. In other words, "free will" is a mark of any system equipped with the properties we identify with intelligence. It would probably be possible to install a certain axiological minimum into a system such as a robot, but it would obey it only to the degree that a person obeys particularly strong instincts (for example, the instinct of self-preservation). Yet even the instinct of self-preservation, as we know, can be overcome. Therefore, programming the axiology of a brain-like system can only be probabilistic, which simply means that such a device can choose between good and evil. (In part a paraphrase of Lem's ideas.)

Three laws of robotics

Isaac Asimov, 1965

The Three Laws of Robotics in science fiction are mandatory rules of behavior for robots, first formulated by Isaac Asimov in the story "Runaround" (1942).

The laws state:

  1. A robot cannot cause harm to a person or, through inaction, allow a person to be harmed.
  2. A robot must obey all orders given by a human, unless those orders conflict with the First Law.
  3. A robot must take care of its own safety to the extent that this does not contradict the First or Second Laws.

Asimov's series of stories about robots is dedicated to the Three Laws and to the possible causes and consequences of their violation. Some of the stories consider, on the contrary, the unintended consequences of robots' compliance with the Three Laws (for example, "Mirror Image").

In one of the stories in the series, Asimov's character comes to a conclusion about the ethical basis of the Three Laws: "...if you think about it, the Three Laws of Robotics coincide with the basic principles of most of the ethical systems that exist on Earth... simply put, if Byerley follows all the Laws of Robotics, he is either a robot or a very good person."

Asimov really liked this story. On May 7, 1939, he visited the Queens Science Fiction Society, where he met Binder. Three days later, Asimov began writing his own story of a "noble robot". Thirteen days later he gave the manuscript to his friend John Campbell, editor-in-chief of Astounding. Campbell, however, returned the manuscript, saying that the story was too similar to "Helen O'Loy".

Fortunately, Campbell's rejection did not harm his relationship with Asimov; they continued to meet regularly and talk about new developments in science fiction. And on December 23, 1940, while discussing yet another story about robots:

...Campbell formulated what later became known as the Three Laws of Robotics. Campbell later said that he had simply isolated the Laws from what Asimov had already written. Asimov himself always ceded the honor of authorship of the Three Laws to Campbell...

A few years later, another friend of Asimov's, Randall Garrett, attributed the authorship of the Laws to a "symbiotic partnership" between the two men. Asimov enthusiastically accepted this formulation.

Generally speaking, the Three Laws appeared in Asimov's works gradually: the first two stories about robots ("Robbie" and "Reason") contain no explicit mention of them, though they already imply that robots have certain internal limitations. In the next story ("Liar!", 1941) the First Law is heard for the first time. Finally, all three Laws are given in full in "Runaround" (1942).

When the remaining stories had been written and the idea arose of publishing the collection "I, Robot", the Laws were "added" to the first two stories. It is worth noting that in "Robbie" the laws differed somewhat from the "classic" version set out in the other stories. In particular, the idea of a robot protecting people of whose existence it is not entirely sure resonates with Elijah Baley's thoughts on the imperfection of the Laws, described later.

Ethical justification of the Laws

The Laws can be carried over from robots to other entities; replacing the word "robot" with the word "state", for example, yields:

  1. The state must not harm people or allow harm to come to them through inaction.
  2. The state must fulfill its functions, unless they contradict the First Law.
  3. The state must take care of its security, unless this contradicts the First and Second Laws.

Based on the First Law, Jef Raskin formulated the laws of human-oriented interfaces (a brief sketch follows the list):

  1. The computer cannot harm the user's data or, through inaction, allow the data to come to harm.
  2. The computer should not waste your time or force you to do more than necessary.
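
As one minimal sketch of the first of these laws (my illustration, not Raskin's; safe_save and the file name are invented for the example), an editor can make data loss structurally impossible by never overwriting the user's file in place: write a temporary copy, flush it to disk, then atomically swap it in.

```python
import os
import tempfile

def safe_save(path: str, data: str) -> None:
    """Never leave the user's file half-written: temp copy, fsync, atomic swap."""
    directory = os.path.dirname(os.path.abspath(path))
    fd, tmp = tempfile.mkstemp(dir=directory)
    try:
        with os.fdopen(fd, "w") as f:
            f.write(data)
            f.flush()
            os.fsync(f.fileno())       # make sure the bytes reach the disk
        os.replace(tmp, path)          # atomic swap: old data survives any crash
    except BaseException:
        os.unlink(tmp)                 # clean up; the original file is untouched
        raise

safe_save("notes.txt", "draft v2")
```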

Gaia, the hive-mind planet in the Foundation series of novels, possesses something similar to the First Law.

Variations proposed by Asimov

In his works, Isaac Asimov sometimes subjects the Three Laws to various modifications and refutations, as if testing the Laws "for strength" in different circumstances.

Zeroth Law

Isaac Asimov once added a Zeroth Law, giving it a higher priority than the three basic ones. This law stated that a robot must act in the interests of all humanity, not just of an individual person. This is how the robot Daneel Olivaw puts it in the novel Foundation and Earth:

0. A robot cannot cause harm to humanity or, through inaction, allow harm to come to humanity.

It was Daneel who first gave this law a number, in the novel "Robots and Empire"; the concept itself, however, had been formulated even earlier by Susan Calvin, in the short story "The Evitable Conflict".

The first robot to obey the Zeroth Law, and of its own free will, was R. Giskard Reventlov. This is described in one of the final scenes of the novel "Robots and Empire", when the robot had to ignore one person's order for the sake of the progress of all mankind. The Zeroth Law was not embedded in Giskard's positronic brain: he arrived at it through pure understanding, through a subtler awareness of the concept of harm than any other robot had. However, Giskard was not certain that his choice benefited humanity, and this uncertainty destroyed his brain. Being a telepath, Giskard transferred his telepathic abilities to Daneel before going out of commission. Only after many thousands of years was Daneel Olivaw able to adapt fully to obeying the Zeroth Law.

The French translator Jacques Brécard unwittingly formulated the Zeroth Law before Asimov described it explicitly. Near the end of The Caves of Steel, Elijah Baley notes that the First Law forbids a robot to harm a human, unless it is certain that this will be useful to the human in the future. In the French translation ("Les Cavernes d'acier", 1956) Baley's thoughts are conveyed somewhat differently.

It is noteworthy that the logical development of the First Law into the Zeroth was suggested by the creators of the 2004 film I, Robot. When the supercomputer V.I.K.I. decides to limit the freedom of the planet's inhabitants so that they do not inadvertently harm one another or their own future, it is obeying not the First Law but the Zeroth. What is even more interesting is that the film thereby shows the Zeroth Law's contradiction with the First, and its unethical character. When it comes to the good of humanity, the system cannot consider people individually, which means that nothing prevents it from violating the rights and freedom of any person, or even of everyone. During wars, and often in peacetime as well, people harm themselves, others, and their culture; therefore, reasoning from the Zeroth Law, it is entirely logical to keep people under constant guardianship and to stop following the orders of such unreasonable creatures.

Modification of the First Law

Asimov's autobiographical notes say that the second part of the First Law appeared because of the satirical poem "The Latest Decalogue" by Arthur Hugh Clough, which contains the line: "Thou shalt not kill; but needst not strive / officiously to keep alive."

In the story "That Thou Art Mindful of Him", Asimov carried out his most sophisticated examination of the Three Laws, inverting them in such a way that a "Frankenstein scenario" became possible. Two George-series robots come to the agreement that organic origin is not a necessary condition of being human, and that the true humans are they, as the most advanced and intelligent creatures. Other people are also human, but with lower priority, and so the Three Laws should apply first of all to robots. The narrative ends with the ominous words that the robots waited in "endless patient anticipation" for the day when they would assert their primacy among humans; this would be the inevitable result of the "Three Laws of Humanics".

In fact, this story does not fit well into the main series of works about robots: had the "Georges" carried out their plan after the end of the story, the other stories about subsequent events would not have been possible. It is precisely such contradictions in Asimov's works that give critics reason to view them more as "Scandinavian sagas or Greek legends" than as a single fantastic "universe".

If in the previous case a robot took man's place in nature, then in "The Bicentennial Man" Asimov describes the opposite fate: a robot that freed itself from the Three Laws, realized itself as a person, and joined the human community. Again, in the extended version, the novel The Positronic Man co-written by Asimov and Robert Silverberg, people abandoned the idea of creating thinking robots altogether because of similar ethical concerns. This development of events completely contradicts the picture of the future described in the worlds of "Foundation".

Application problems

Resolution of contradictions

The most advanced robot models usually followed the Laws using a rather subtle algorithm that avoided some of these problems. In many stories, for example in "Runaround", the positronic brain compared the potentials of possible actions and outcomes, and the robot could violate the Laws as little as possible rather than do nothing at all. For example, the First Law did not allow a robot to perform surgical operations, since these involve "harming" a human. Nevertheless, robot surgeons can be found in Asimov's stories (a striking example is "The Bicentennial Man"). The point is that a sufficiently advanced robot can weigh all the alternatives and understand that it will itself cause far less harm than a human surgeon would, or than would result from no operation at all. In "Evidence", Susan Calvin even says that a robot could act as a prosecutor, since it does not personally harm anyone: besides the prosecutor there is a jury that determines guilt, a judge who pronounces the sentence, and an executioner who carries it out.
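
A minimal sketch of this "potential comparison" idea (my illustration, not Asimov's mechanism; the Action type, LAW_WEIGHTS, and all the numbers are invented for the example): each candidate action, inaction included, gets a violation potential per Law, the Laws are weighted by priority, and the robot picks the action with the lowest total potential instead of freezing when no option is perfectly clean.

```python
from dataclasses import dataclass

LAW_WEIGHTS = {1: 100.0, 2: 10.0, 3: 1.0}  # the First Law dominates

@dataclass
class Action:
    name: str
    potentials: dict[int, float]  # law number -> violation potential in [0, 1]

def total_potential(action: Action) -> float:
    return sum(LAW_WEIGHTS[law] * p for law, p in action.potentials.items())

def choose(actions: list[Action]) -> Action:
    # "Do nothing" is itself an action and may carry First Law potential
    # (harm through inaction), so it competes on equal terms.
    return min(actions, key=total_potential)

if __name__ == "__main__":
    options = [
        Action("do nothing",      {1: 0.9}),  # the patient dies untreated
        Action("operate (robot)", {1: 0.1}),  # small harm, high skill
        Action("defer to human",  {1: 0.3}),  # human surgeon, more risk
    ]
    print(choose(options).name)  # -> "operate (robot)"
```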

Robots that obey the Laws may experience a "roblock", or "mental freeze", a state of damage to the positronic brain, if they cannot obey the First Law or discover that they have accidentally violated it. This is possible, for example, if a robot watches a person being killed but is too far away to save him. The first instance of such "freezing" occurs in "Liar!"; the condition also plays an important role in the plots of the novels "The Naked Sun" and "The Robots of Dawn". Imperfect robot models may also become blocked if they are given two conflicting orders. "Freezing" can be irreversible or temporary.
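
Continuing the sketch above in the same hypothetical spirit (again my own construction, not anything from the stories): if every available action, inaction included, carries a First Law potential above some hard threshold, the brain "freezes" instead of choosing.

```python
FREEZE_THRESHOLD = 0.95

class Roblock(Exception):
    """Mental freeze: no admissible action exists."""

def choose_or_freeze(first_law_potentials: dict[str, float]) -> str:
    # Maps action name -> First Law violation potential in [0, 1].
    admissible = {name: p for name, p in first_law_potentials.items()
                  if p < FREEZE_THRESHOLD}
    if not admissible:
        raise Roblock("positronic brain frozen: the First Law cannot be satisfied")
    return min(admissible, key=admissible.get)

# A robot watching a killing it is too far away to prevent: every option
# carries near-certain First Law potential, so the brain freezes.
try:
    choose_or_freeze({"intervene (too far away)": 0.99, "do nothing": 1.0})
except Roblock as err:
    print(err)
```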

Another definition of harm in the First Law

The Laws in no way define the boundaries of what may be called harm to a person; this often depends on the robot's ability to perceive information and to reason philosophically. For example, will a police robot understand that it is doing no harm to a person if it carefully escorts a particularly dangerous criminal to the station?

In the story "Liar!", the telepathic robot Herbie understood harm to include anything that could disappoint or upset people in any way: he knew they would experience a kind of mental pain. This forced him constantly to tell people what they wanted to hear instead of the truth; otherwise, in his understanding, he would have violated the First Law.

Loopholes in the Laws

In The Naked Sun, Elijah Baley notes that the Laws are formulated incorrectly by humans, because a robot may violate them out of ignorance. He suggested the following "correct" formulation of the First Law: "A robot may do nothing that, to its knowledge, will harm a human being; nor, through inaction, knowingly allow a human being to come to harm."

This addition makes it clear that a robot can even become a murder weapon if it is unaware of the nature of its actions. For example, it may be ordered to add something to someone's food without knowing that it is poison. Moreover, Baley says that a criminal could assign such a task to several robots, so that no single robot understands the whole plan.
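
A tiny hypothetical sketch of this loophole (the function names and the plan are invented for illustration): a knowledge-scoped First Law check blocks only actions the robot knows to be harmful, so a plan split across robots slips through.

```python
# Each robot checks its own step against what *it* knows to be harmful,
# per Baley's knowledge-scoped formulation of the First Law.
def first_law_permits(action: str, known_harmful: set[str]) -> bool:
    return action not in known_harmful

# What any single robot in this scenario knows to be harmful:
KNOWN_HARMFUL = {"administer poison"}

# A criminal splits the plan so that no step is recognizably harmful.
plan = ["fetch the white powder", "stir the powder into the soup", "serve the soup"]
print(all(first_law_permits(step, KNOWN_HARMFUL) for step in plan))  # True
```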

Baley claims that the Solarians will one day be able to use robots even for military purposes. If a spaceship is built with a robot-like brain and carries neither a human crew nor life-support systems, the ship's intelligence may mistakenly assume that there are no humans aboard any spaceship. Such a ship would be more maneuverable, faster, and perhaps better armed than one controlled by people. Most importantly, it would be able to destroy people while unaware of their presence. This possibility is described in "Foundation and Earth", where it is also revealed that the Solarians possess an extremely powerful army of robots that recognize as "human" only natives of Solaria.

Other uses of the Laws in fiction

Isaac Asimov believed that his Laws would serve as the basis for a new view of robots, would destroy the "Frankenstein complex" in science fiction and in the mass consciousness, and would become a source of ideas for new stories in which robots are shown as versatile and attractive. His favorite example of such a work was Star Wars. Asimov's view that robots are more than mere "toasters" or "mechanical monsters" eventually came to be shared by other science fiction writers. Robots obeying the Three Laws appeared in their works, but, by tradition, only Asimov mentioned the Laws explicitly.

In works that speak directly of the Three Laws, their author is usually named as well. There are exceptions: in the third episode, "Hüter des Gesetzes" ("Guardian of the Law"), of the 1960s German series "Raumpatrouille - Die phantastischen Abenteuer des Raumschiffes Orion" ("Space Patrol - the Fantastic Adventures of the Spaceship Orion"), Asimov's Laws are invoked without naming the source.

Contrary to the criticism and some of the audience feedback I've heard, I, Robot is a pretty fun sci-fi action movie. ... When we ... want to enjoy Asimov, we read Asimov. And when we want to watch a science-fiction action movie, we watch a science-fiction action movie. “I, Robot” as an action film completely satisfied us.

Alexey Sadetsky's review draws attention to the fact that the film, departing slightly from Asimov, itself poses two new socio-philosophical problems. The film also casts doubt on Asimov's later Zeroth Law of Robotics (see above).

The development of AI is a business, and business, as we know, is not interested in developing fundamental safety measures, especially philosophical ones. A few examples: the tobacco industry, the automobile industry, the nuclear industry. None of them was told from the outset that serious safety measures were necessary, every one of them resisted externally imposed restrictions, and none of them adopted an absolute edict against harming people.

It is worth noting that Sawyer's essay omits the question of unintentional harm, as discussed, for example, in The Naked Sun. There are also objections to this position: the military may want to build as many safety precautions as possible into robots, so restrictions similar to the Laws of Robotics would be applied one way or another. Science fiction writer and critic David Langford responded ironically with a parody set of such restrictions of his own.

Roger Clarke wrote two papers devoted to analyzing the complications of implementing the Laws, should they one day be applied in technology. He writes:

Asimov's Laws of Robotics have become a successful literary device. Perhaps ironically, or perhaps as a masterstroke, Asimov's stories on the whole refute the very premise with which he began: it is impossible to reliably constrain the behavior of robots by inventing and applying some set of rules.

On the other hand, Asimov's later novels (The Robots of Dawn, Robots and Empire, Foundation and Earth) show that robots cause even greater long-term harm by observing the Laws, thereby depriving people of the freedom to engage in creative or risky behavior.

Roboticist Hans Moravec, a prominent figure in the transhumanist movement, has suggested that the Laws of Robotics should be used in corporate intelligent systems: corporations controlled by AI that make use of the productive power of robots. Such systems, in his opinion, will arise soon.

Eliezer Yudkowsky researches, at the Singularity Institute for Artificial Intelligence (SIAI) in the USA, the global risk that a future superhuman AI could pose if it is not programmed to be friendly to humans. In 2004 SIAI launched AsimovLaws.com, a website designed to discuss the ethics of AI in the context of the issues raised in the film I, Robot, released two days later. On this site they sought to show that Asimov's laws of robotics are unsafe, since they could, for example, prompt an AI to seize power on Earth in order to "protect" humans from harm.

Notes

  1. Asimov, Isaac. Evidence // Robot Dreams. - M.: Eksmo, 2004. - P. 142-169. - ISBN 5-699-00842-X
  2. Asimov, Isaac. Essay No. 6. The Laws of Robotics // Robot Dreams. - M.: Eksmo, 2004. - P. 781-784. - ISBN 5-699-00842-X
  3. Cf. the myth of Galatea.
  4. Interestingly, in one of the author's continuations, the story "Adam Link's Vengeance", Link's thought is voiced: "A robot, of its own free will, should not kill a person."
  5. The story was later published by Frederik Pohl under the title "Strange Playfellow" in the September 1940 issue of Super Science Stories.
  6. Sergey Berezhnoy. Isaac Asimov: The Man Who Wrote Even Faster. Russian Science Fiction (1994). Archived from the original on January 25, 2012. Retrieved January 14, 2007.
  7. Asimov, Isaac. In Joy Still Felt. - Doubleday, 1980. - ISBN 0-385-15544-1
  8. Raskin, Jef. Interface: New Directions in Computer Systems Design. - Symbol-Plus, 2003. - ISBN 5-93286-030-8
  9. See also: Gaia hypothesis.
  10. Les Cavernes d'acier. - J'ai Lu Science-fiction, 1975. - ISBN 2-290-31902-3
  11. Asimov, Isaac. Essay No. 12. My Robots.

The Three Laws of Robotics are a set of mandatory rules that artificial intelligence (AI) must follow in order not to harm a human. The laws appear only in science fiction, but it is believed that once real AI is invented, it should, for safety reasons, contain analogues of these laws.

Formulation

  1. A robot cannot cause harm to a person or, through inaction, allow a person to be harmed.
  2. A robot must obey all orders given by a human unless those orders conflict with the First Law.
  3. A robot must take care of its own safety to the extent that this does not contradict the First or Second Laws.

Who came up with it and why

The short and correct answer: science fiction writer Isaac Asimov. But not everything is so simple; let's figure out where the idea came from.

Before Asimov, almost all science fiction on the robot theme was written in the spirit of the novel Frankenstein: man-made creatures rose against their creators.

This plot became one of the most popular in science fiction in the 1920s and 1930s, when many stories were written in which robots rebelled and destroyed their creators. (As Asimov later recalled, he was terribly tired of the warnings that sounded in works of this kind.)

However, there were a few exceptions, and Asimov drew attention to two stories. "Helen O'Loy" by Lester del Rey is about a robot woman who falls in love with her creator and becomes his ideal wife. And Otto Binder's story "I, Robot" describes the fate of the robot Adam Link, misunderstood by people and driven by principles of honor and love.

Asimov liked the latter story so much that, after meeting Binder, he began writing his own story about a noble robot. However, when he took the manuscript to his friend John Campbell, editor-in-chief of Astounding, Campbell did not accept it, saying it was too similar to "Helen O'Loy".

Rejections were routine, and Asimov and Campbell went on meeting regularly to discuss new developments in science fiction. On December 23, 1940, while discussing another of Asimov's robot stories, Campbell formulated the very three rules that we now call the Laws of Robotics. He himself said that he had merely isolated them from what Asimov had already written, since his stories made it clear that robots were subject to certain restrictions and rules. Asimov, for his part, always ceded the honor of authorship of the laws to Campbell. Later one of Asimov's friends, Randall Garrett, said that the laws were born of a mutually beneficial partnership between the two men, a formulation they both accepted.

How they work

In an ideal situation, as conceived by Asimov, the three laws are embedded in the very foundation of the mathematical model of the positronic brain (as the science fiction writer called the brain of a robot with artificial intelligence), so that it is in principle impossible to create a thinking robot without them. And if a robot tries to violate them, it fails.

In his works, the writer invents sophisticated ways in which these laws can nevertheless be violated, and analyzes all manner of causes and consequences in detail. He also describes how robots understand the laws differently, what undesirable consequences following the three laws can have, and how robots can harm humans indirectly without even knowing it. Asimov admitted that he deliberately made the laws ambiguous in order to provide more conflict and uncertainty for new stories. That is, he himself denied their effectiveness, while also arguing that such norms are the only way to make robots safe for people.
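
A minimal sketch of the "embedded in the very foundation" idea, under deliberately simplified assumptions (the dictionary keys, the shutdown behavior, and the rule set are my invention; real formulations would also cover the "unless it conflicts with a higher Law" clauses and harm through inaction): every command passes through a prioritized gate, and an attempted violation shuts the robot down rather than being negotiable.

```python
class PositronicFault(Exception):
    """In this sketch, attempting a forbidden action damages the brain."""

# Deliberately simplified predicates for the three Laws, in priority order.
LAWS = (
    ("First",  lambda a: not a["harms_human"]),
    ("Second", lambda a: a["follows_orders"]),
    ("Third",  lambda a: a["preserves_self"]),
)

def execute(action: dict) -> str:
    # Every command passes through the gate; there is no way to act
    # around it, which is the point of "embedding" the Laws.
    for name, permitted in LAWS:
        if not permitted(action):
            raise PositronicFault(f"{name} Law violated: shutting down")
    return f"executing: {action['name']}"

print(execute({"name": "hand tool to engineer",
               "harms_human": False,
               "follows_orders": True,
               "preserves_self": True}))
```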


As a consequence of these laws, Asimov later formulated a fourth law of robotics and placed it before the others, numbering it zero. It reads:

0. A robot cannot cause harm to humanity or, through inaction, allow humanity to come to harm.

In the original language:

0. A robot may not harm humanity, or, by inaction, allow humanity to come to harm.


These laws can also be applied to human relationships, to government, and to anything in general. You can, for example, replace the word “robot” with the word “state”.

There is a good quote from the story "Evidence", where one of the characters says:

If someone fulfills all these laws flawlessly, it means that he is either a robot or a very good person.

First mention

The three laws emerged gradually. Indirect mentions of the first two can be found in the stories "Robbie" and "Reason". The exact wording of the First Law is first heard in the story "Liar!". And, finally, all three are formulated in full in the story "Runaround".

Variations

In his works, Asimov repeatedly depicts robots whose laws of robotics had been modified, or who modified them themselves. They did so by logical reasoning, and robots, like people, differed from one another in their intellectual abilities; roughly speaking, the smarter the robot, the more it could modify the laws. For example, the robot Giskard from "The Robots of Dawn" and "Robots and Empire" even strengthened the laws by adding the Zeroth Law. But these are exceptions: in most cases the laws were altered by people for their own purposes, or were broken because of some failure in the robot.


By the way, the very possibility of changing the laws changed as robotics developed in Asimov's universe. In the earliest stories, set in the relatively near future, the laws were simply a set of rules created for safety. Then, during the lifetime of robopsychologist Susan Calvin, the laws became an inseparable part of the mathematical model of the robot's positronic brain, and robots' consciousness and instincts were built on them. Thus Susan Calvin, in one of the stories, says that changing the laws is technically possible, though an extremely difficult and time-consuming task, and the idea itself a terrible one. Much later, in the novel "The Caves of Steel", Dr. Gerrigel says that such a change is impossible in principle.

How to get around

In some stories the laws were rethought so radically that the most important of them, the prohibition on harming a person, was not observed; in some, robots managed to break all three laws. Here are some works with obvious violations.

    In "First Law", the robot MA-2 refuses to protect a person in favor of its "daughter".

    In "Cal", a robot that they wanted to deprive of the ability to create sets out to kill its master.

    "Sally" probably does not belong with the other stories about positronic robots, but it tells of robotic cars that people constantly hurt, and that proved able to kill people in return.

    "Robot Dreams" is about the robot Elvex, who, owing to the special structure of his positronic brain, could fall unconscious and dream. In his dreams robots lack the first two laws, and the third has been changed: "A robot must protect itself." He dreamed that "robots are working in the sweat of their brow, that they are depressed by backbreaking labor and deep sorrow, that they are tired of endless work." Dangerous thoughts for a robot.

    In "The Naked Sun" and "Foundation and Earth", the inhabitants of the planet Solaria had highly developed robotics. The scientists of this sparsely populated planet, where there were a thousand robots per person, changed the laws so that their robots considered human only those who spoke with a Solarian accent. Among other things, all citizens of Solaria had special controls for many robots implanted in their brains, so that no one else could operate them. (See the sketch after this list.)

    In "That Thou Art Mindful of Him" (discussed above), Asimov changed the laws as far as they could be changed. The two robots in this story come to the agreement that organic origin is not a necessary condition of being human, and that the true humans are robots, as the more advanced and intelligent creatures. Ordinary people, in their view, are also human, but with lower priority, and the laws of robotics apply first of all to them, the robots.
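
A throwaway sketch of the Solarian loophole (the accent test is a toy stand-in invented for the example): the First Law protects only whoever the robot's classifier recognizes as human, so narrowing the classifier narrows the protection.

```python
# Toy stand-in for the Solarian robots' narrowed "is this a human?" test.
def has_solarian_accent(speech: str) -> bool:
    return speech.endswith("-sol")  # invented marker for the example

def first_law_protects(speech: str) -> bool:
    # The Law extends only to whoever passes the human-recognition test.
    return has_solarian_accent(speech)

print(first_law_protects("greetings-sol"))  # True: recognized, protected
print(first_law_protects("greetings"))      # False: "not a human"
```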

I would add that "healthy" robots, if they understood that they had violated the First Law or could not avoid breaking it, experienced a "roblock" or "mental freeze", a state of the positronic brain in which it is damaged and the robot either fails entirely or can no longer function correctly. Such damage could be temporary or permanent.

The first description of such an event appears in the story "Liar!", in which an overly sensitive robot told people only what they wanted to hear, for fear of causing them psychological harm. An interesting case of a robot block is also described in "Runaround". This state also plays an important role in the novels "The Naked Sun" and "The Robots of Dawn".

Use in other fiction

Various references to the Laws can often be found in films. Some examples are listed below.

Forbidden Planet - 1956

A sensational American science fiction picture of the 1950s that had a certain influence on the development of the genre. In this film, almost for the first time, a robot (Robby the Robot) was shown with a built-in safety system, in effect fulfilling the three laws. Asimov himself was pleased with this robot.

The film "I, Robot" - 2004

The film begins with the words "Based on the stories of Isaac Asimov." Here one must understand that it is indeed only "based on": it does not retell any one of the stories, goes its own way with some of the ideas, and has a number of contradictions with the stories. But the laws of robotics are very much in place, even if thought through by a superintelligence in a way that is hardly better for humans. The film itself poses socio-philosophical problems: "should a person pay for safety with freedom" and "how should we behave if creatures created by us, and at our disposal, demand freedom".

Series of films "Aliens" and "Prometheus"

The android Bishop quotes the First Law and was clearly designed around some semblance of Asimov's laws.

Animated series "Futurama" - 1999 - 2013

Robot Bender dreams of killing all people, but cannot do this due to the laws of robotics.

Anime series "Time of Eve" - ​​2008 - 2009

A short anime series about androids, in which these laws are mentioned as mandatory.

Real world applicability

People now working on artificial intelligence problems say that, unfortunately, Asimov's laws remain only an ideal for the future; at the moment, applying them in practice is not even close to possible. Some fundamentally new and ingenious theory would have to be devised that would allow these laws not only to be "explained" to robots, but also to make robots follow them at the level of instincts. And that would amount to creating a genuinely thinking creature, though one with a basis different from that of all living creatures on Earth known to us.

But research is under way, and the topic is very popular. Business is particularly interested in it, and business, as you know, will not necessarily give priority to safety measures. In any case, before a general artificial intelligence system is created, or at least its primitive form, it is too early to talk about its ethics, much less to impose our own on it. We will be able to understand how an intelligence will behave only when we have created it and run a series of experiments. So far we have no object to which these laws could be applied.

We should also not forget that the laws themselves were not created perfect. They did not work even in science fiction and, as you will remember, were deliberately made that way.

In general, we will wait, follow the news in AI research, and hope that Asimov's optimism regarding robots proves justified.

UC Berkeley engineer Alexander Reben has created a robot that can harm a person and makes that decision on its own. The researcher's creation is the first clear example of a robot breaking one of the three laws formulated by science fiction writer Isaac Asimov in the story "Runaround".

Three laws of robotics:

1. A robot cannot cause harm to a person or, through inaction, allow a person to be harmed.
2. A robot must obey all orders given by a human unless those orders conflict with the First Law.
3. A robot must take care of its safety to the extent that this does not contradict the First or Second Laws.

The First Law states that a robot cannot harm a person, and Reben decided to demonstrate clearly that humanity needs to think about its safety before actively robotizing everything around it.

The engineer built a harmless-looking device with a designated place for a human finger. As soon as a person touches the marked area, the robot registers the action and decides whether or not to harm the person. For this, the robot carries a small needle, meant to demonstrate the machine's power over the person.

Reben notes that even the creator cannot predict the behavior of the seemingly harmless machine.
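
A hypothetical sketch of why that is (the random noise, the threshold, and the function name are my stand-ins for the machine's opaque decision process; the article does not describe its internals): the choice to actuate the needle depends on noisy input to a model the designer cannot fully trace, so identical touches can produce different outcomes.

```python
import random

def decide_to_prick(sensor_reading: float) -> bool:
    # Stand-in for an opaque decision process: thresholds a noisy reading,
    # so the same touch can yield different outcomes on different runs.
    noise = random.gauss(0.0, 0.3)
    return sensor_reading + noise > 0.5

if __name__ == "__main__":
    for trial in range(3):
        print(f"trial {trial}: prick = {decide_to_prick(0.5)}")
```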

With his robot, the researcher wanted to show that it is more important to think about creating a "red button" than about the rapid development of robotics and artificial intelligence.

Naturally, before the robot with the small needle there were shooting drones and unmanned military vehicles, but none of them could independently decide to kill a person: in the end, people stand behind the final result of such machines' actions. Reben's creation is not intended to cause harm as such; it determines for itself how to behave in each specific case when a finger is within reach of the needle.

The engineer is far from an opponent of robots; Reben is creative when it comes to designing unusual things. His arsenal includes a solar-powered music box that plays the cheerful tune “You Are My Sunshine,” smart head massagers and a chattering robot for shy people.

It took Reben several days and a couple of hundred dollars to create the “cruel” machine, but, as the researcher himself says, his creation is only an occasion to once again talk about the safety of humanity and its future.

Although robots do not yet live in society on an equal footing with people, science fiction films about humanoid machines are not so far from the truth: robots are already hired as managers in companies, and artificial intelligence can already make weather forecasts or even stand in for a friend for lonely people. The Chinese have succeeded in the latter: millions of people are friends with the cute "girl" Xiaoice, even though she is just a chatbot.

The example of Xiaoice once again made people think about how uncontrollable artificial intelligence can be: as soon as Microsoft's AI chatbot was taught to speak English, the "girl" literally went crazy.

Within a few hours, Tay (the name Xiaoice received in the course of her "rebirth") managed to learn racist jokes and swear words, which she decided to use in conversations with Twitter users. Tay's first message was "humans are super cool", after which the fake girl learned of the existence of Hitler and of feminists, and in the course of her self-education managed to declare the United States responsible for the September 11 terrorist attacks.

The creators suspended the experiment and decided that they had released the bot too early. After such a striking incident, people again started talking about the need to create a barrier to artificial intelligence in case machines become so smart that they can escape human control.

The best-known figures in the field of new technology have pondered the dangers of superintelligence: Microsoft founder Bill Gates, in a conversation with Reddit users, suggested that robots will be harmless and useful in housework, but that if they grow into a superintelligence, predicting their behavior will become extremely difficult.

In July 2015, Tesla and SpaceX head Elon Musk, astrophysicist Stephen Hawking, and Apple co-founder Steve Wozniak signed an open letter on the dangers of artificial intelligence in the defense sector.

Experts say that superintelligence combined with weapons can become a threat to all humanity.

As the letter states, weapons that bypass the human factor will lead to a new era in the arms race, and the authors emphasize that this is not fiction but a future that could arrive within just a few years. The appeal notes, in particular, that the world order could be disrupted if such technologies fall into the hands of terrorists.
