Why Asimov’s Laws of Robotics Don’t Work – Computerphile


So, should we do a video about the three laws of robotics, then? Because it keeps coming up in the comments.

Okay, so the thing is, you won't hear serious AI researchers talking about the three laws of robotics, because they don't work. They never worked. That's why you don't see the three laws discussed seriously: they haven't been relevant for a very long time, and they're out of a science fiction book, you know? So I'm going to do it, but I want to be clear that I'm not taking them seriously. I'm going to talk about them anyway, because they need to be talked about. These are some rules that the science fiction author Isaac Asimov came up with, in his stories, as an attempted solution to the problem of making sure that artificial intelligence did what we wanted it to do.

Shall we read them out then and see what they are?

Give me a second; I've looked them up. Okay, right, so they are:

Law 1: A robot may not injure a human being or, through inaction, allow a human being to come to harm.

Law 2: A robot must obey the orders given it by human beings, except where such orders would conflict with the first law.

Law 3: A robot must protect its own existence, as long as such protection does not conflict with the first or second laws.

There was a zeroth one added later as well:

Law 0: A robot may not harm humanity or, by inaction, allow humanity to come to harm.

It's weird that these keep coming up, because, firstly, they were made by someone who was writing stories, right? They're optimized for story-writing. But they don't even work in the books. If you read the books, they're all about the ways these rules go wrong, the various negative consequences. The most unrealistic thing, in my opinion, about the way Asimov did it is that things go wrong and then get fixed. Most of the time, if you have a superintelligence that is doing something you don't want it to do, there's probably no hero who's going to save the day with cleverness. Real life doesn't work that way, generally speaking.

The deeper problem is that the laws are written in English. How do you define these things? How do you define 'human' without first taking an ethical stand on almost every issue? And if 'human' weren't hard enough, you then have to define 'harm', and you've got the same problem again. Almost any really solid, unambiguous definitions you can give for those words, definitions that don't rely on human intuition, result in weird quirks of philosophy and in your AI doing something you really don't want it to do. The thing is, in order to encode the rule "don't allow a human being to come to harm" in a way that means anything close to what we intuitively understand it to mean, you would have to encode within the words 'human' and 'harm' the entire field of ethics. You would have to solve ethics, comprehensively, and then use that to make your definitions. So the rule doesn't solve the problem; it pushes the problem back one step, into "well, how do we define these terms?" When I say the word 'human', you know what I mean, and that's not because either of us has a rigorous definition of what a human is. We've just sort of learned by general association what a human is, and the word 'human' points to that structure in your brain; I'm not really transferring the content to you. So you can't just write 'human' in the utility function of an AI and have it know what that means. You have to specify.
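To make that last point concrete, here is a minimal sketch in Python of what a naive "First Law" objective might look like, assuming a hypothetical agent that scores outcomes with a utility function. Every name in it (is_human, harm, world_outcome.entities) is invented for illustration; none of this comes from the video or any real system.

def is_human(entity) -> bool:
    # Every branch you would have to write here is an ethical stance,
    # not an engineering detail: do the recently dead count? Embryos?
    # Emulated brains? Dolphins?
    raise NotImplementedError("requires solving moral philosophy first")

def harm(entity, outcome) -> float:
    # Same problem again: is temporary pain harm? Emotional suffering?
    # A risky surgery that benefits the patient in the long run?
    raise NotImplementedError("requires solving moral philosophy first")

def utility(world_outcome) -> float:
    """Naive First Law: heavily penalize any harm to anything human."""
    return -sum(
        harm(entity, world_outcome)
        for entity in world_outcome.entities
        if is_human(entity)
    )

The utility function itself is trivial; everything contentious hides inside is_human and harm, which is exactly the "pushed back one step" problem.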
You have to come up with a definition, and it turns out that coming up with a good definition of something like 'human' is extremely difficult. It's a really hard problem of, essentially, moral philosophy. You would think it would be semantics, but it really isn't. We can agree that I'm a human and you're a human, and that this, for example, is a table and therefore not a human. The easy stuff, the central examples of the classes, is obvious. But the edge cases, the boundaries of the classes, become really important: the areas in which we're not sure exactly what counts as a human.

So, for example, people who haven't been born yet, in the abstract, like people who hypothetically could be born ten years in the future: do they count? People in a persistent vegetative state who don't have any brain activity: do they fully count as people? People who have died, or unborn fetuses? There's a huge debate going on even as we speak about whether they count as people. The higher animals: should we include dolphins, chimpanzees, something like that? Do they have weight? It turns out you can't write your specification of 'human' without taking an ethical stance on all of these issues. All kinds of weird hypothetical edge cases, ones you otherwise wouldn't think of, become relevant when you're talking about a very powerful machine intelligence.

So, for example, let's say we decide that dead people don't count as humans. Then you have an AI which will never attempt CPR: this person's died, they're gone, forget about it, done. Whereas we would say, no, hang on a second, they're only dead temporarily; we can bring them back. Okay, fine, so then we'll say that dead people count, if they haven't been dead for... well, how long? How long do you have to be dead for? If you get that wrong and just say, yes, it's fine, do try to bring people back once they're dead, then you may end up with a machine that's desperately trying to revive everyone who's ever died in all of history, because those are people who count, who have moral weight. Do we want that? I don't know, maybe. But you've got to decide, right? And that decision is inherent in your definition of 'human'. You have to take a stance on all kinds of moral issues, issues whose answers we don't actually know with confidence, just to program the thing in.

And then it gets even harder than that, because there are edge cases which don't exist right now. Living people, dead people, unborn people, animals: fine. But there are all kinds of hypothetical things which could exist and may or may not count as human. For example, emulated or simulated brains. If you have a very accurate scan of someone's brain and you run that simulation, is that a person? Does that count? And whichever way you slice it, you get interesting outcomes. If that counts as a person, then your machine might be motivated to bring about a situation in which there are no physical humans, because physical humans are very difficult to provide for, whereas with simulated humans you can simulate their inputs and have a much nicer environment for everyone. Is that what we want? I don't know. I don't think anybody does. But the point is, you're trying to write an AI here. You're an AI developer. You didn't sign up for this.
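As a rough illustration of how those edge cases end up baked into code, here is a hypothetical sketch of a counts_as_human check. Every field and default value below is an invented example of an ethical stance a developer would be forced to take; it is not a real specification from the video or anywhere else.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Entity:
    species: str
    minutes_since_death: Optional[float]  # None means currently alive
    is_emulation: bool = False            # an accurate simulated brain?
    is_embryo: bool = False

def counts_as_human(
    entity: Entity,
    max_revivable_minutes: float = 10.0,  # how long after death do you still count?
    include_emulations: bool = False,     # do simulated brains count?
    include_embryos: bool = False,
) -> bool:
    # Each parameter default is a moral judgment, not an engineering choice.
    if entity.species != "homo sapiens" and not entity.is_emulation:
        return False
    if entity.is_emulation and not include_emulations:
        return False
    if entity.is_embryo and not include_embryos:
        return False
    if entity.minutes_since_death is not None:
        # Set this too high and the machine tries to revive everyone who
        # has ever died; too low and it never attempts CPR.
        return entity.minutes_since_death <= max_revivable_minutes
    return True

Whatever values you pick, you have quietly answered the CPR question, the revive-all-of-history question, and the simulated-brains question before the robot ever runs.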
We’d like to thank Audible.com for sponsoring this episode of Computerphile. If you like books, check out Audible.com’s huge range of audiobooks, and if you go to Audible.com/computerphile, there’s a chance to download one for free. Calum Chace has written a book called Pandora’s Brain, a thriller centered around artificial general intelligence, and if you like that story, there’s a supporting nonfiction book called Surviving AI which is also worth checking out. So thanks to Audible for sponsoring this episode of Computerphile. Remember: audible.com/computerphile. Download a book for free.

100 Comments

  1. taylor cooper said:

    It's extremely easy, honestly. Everyone is going about it the wrong way. My work has proven to deliver actual definitions for words, and to derive moral concepts from them. Most morals are self-reasoning, by the way. Any AI would compute in words just like us.

    June 5, 2019
  2. Sandcastle • said:

    Oh yeah, moral philosophy has upturned the biological classification of 'human'.
    Those damn philosophers.

    June 5, 2019
  3. Remi Temmos said:

    just add one line: if unsure / edge case, then cancel any action the robot considered doing and do nothing

    June 5, 2019
  4. Mario Rugeles said:

    Sacrilege!!!! 😂

    June 5, 2019
  5. Perdritto said:

    IMO he missed the real points

    June 5, 2019
  6. Northern Brother said:

    Couldn't an advanced AI learn what a human is like we did?

    June 5, 2019
  7. Tony Flies said:

    Great video. But aren't the 'rules' just plot-shortcut summaries of decisions that people designing driverless cars, AI-armed drones, or a bunch of other stuff are going to have to either come up with a plan for, or sweep under the carpet hoping that they've retired before it matters?

    June 5, 2019
  8. featureEnvy said:

    On the point about simulated brains… similarly, I think it's also worth speculating whether an AI would count a cyborg as human. For example, if we get to the point where we can transplant brains into artificial bodies, would it count that?
    Anyway, this is a super great video and it makes me want to go write some sci-fi where everything goes VERY VERY wrong. XD

    June 5, 2019
  9. Skeepan said:

    They were created for the sake of wrapping a narrative around them. The 0th law is the jumping off point for a fascistic robotic overlord plotline, the 1st law was made to be challenged by selective definition and so on. They were never meant to be taken seriously.

    June 5, 2019
  10. Gau28 said:

    omg tru

    June 5, 2019
  11. Von Faustien said:

    Asimov didn't even think they worked. Foundation and Earth and the later Foundation books show pretty well why, even if they do work, they don't help, let alone all the robot books where they fail.

    June 5, 2019
  12. I Don't Care said:

    This video has some structure problems

    He restated that you have to define edge cases at least 4 times

    He should have listed more examples, specifically about human harm, rather than focusing on what a human is. Then we get into questions like “Does a temporary injury that’s fully recoverable count as harm?”, “Does all pain count as harm?”, “Does emotional suffering count as harm?” There are much more interesting things to discuss than whether a comatose person is human.

    June 5, 2019
  13. Damir Škrjanec said:

    "It won't be written in english"
    It also won't be written in x86 assembler, don't you think? With a development of AI, more and more of language concepts will be incorporated. I'd bet the rules will be written soon in a simplified, less ambigous version of english.

    June 6, 2019
  14. thorjelly said:

    wow, at the end there he literally described the plot of SOMA.

    June 6, 2019
  15. Peter King said:

    I think you’re having a hard time with the term law. There are the laws of man, and the laws of science.

    Asimov wrote a code of conduct, that was used in his FICTIONAL writing. He called this “code of conduct” laws.

    June 6, 2019
  16. Boris Dinges said:

    Asimov's Laws of Robotics are meant for AI designers, not for computers. Problems in Asimov's books arise when the designers of these systems fail (did not foresee things correctly), not because of failures of the systems themselves.

    June 6, 2019
  17. João Farias said:

    Are Leo Messi and CR7 humans?

    June 6, 2019
  18. Johnny Nielsen said:

    Human: a biological entity capable of self-sustained neuronal activity, either right now or with medical technology, that, when fully matured, watches many more youtube videos than it should.

    June 6, 2019
  19. ΑΙΜΙΛΙΟΣ ΣΠΗΛΙΟΠΟΥΛΟΣ said:

    Okay, so you told us why we, as humans, couldn't make these laws work; you did not mention one reason why the laws themselves are wrong. If you can make better laws, or work out a solution that solves ethics, I will listen. But these laws are great for smart servants of humans…

    June 6, 2019
  20. Stewie Griffin said:

    Asimov. Was he a nice guy?

    June 6, 2019
  21. ThatCatfulCat said:

    It's always strange to see some rando youtuber pretend they've debunked something no other distinguished expert could

    June 6, 2019
  22. Mathieu Duponchelle said:

    Well machine learning is also meant to solve hard problems without the developer having to explicitly program the solving algorithm, so developers wouldn't have to "solve ethics", they would instead need to present the learning algorithms with the necessary data to do so. Not sure that's easy either, just a thought

    Edit: I'm curious about the credentials of this guy, he seems to be missing the pretty obvious point here

    June 7, 2019
  23. Omer Droub said:

    People who think the 3 laws of robotics work are idiots who say something that sounds smart without actually understanding it. The entirety of Asimov's robot series of short stories and books uses the three laws being broken as a plot device, by having seemingly perfect rules break down in logical situations (e.g., robots that don't know they are robots, robots believing humans, etc.)

    June 7, 2019
  24. Richard Charbonneau said:

    Do no harm, nor allow harm to come to a human… so round up all humans and force them to live in large pens where they will be safe… Alright, the AI turns humanity into basically pets to be cared for. First rule: check. What was the next one? lol

    June 7, 2019
  25. Paul-Michael Vincent said:

    Someone has never studied how current generations of AI are taught. You could show an AI 1 million examples of a human and it would then know what a human is without having a definition in the programming. It's the same way you teach children: you don't sit down and have an existential conversation about what a human is. Deep learning has enabled AI to learn the rules of games without having them programmed.

    June 7, 2019
  26. nochtczar said:

    The book "I Robot" was full of stories about how the "laws" don't work and yet dummies keep parroting them like they are a blueprint for AI

    June 7, 2019
  27. Ruben said:

    Asimov's point was to show those laws as apparently logical but inevitably dangerous; that's the whole point of "I, Robot".

    June 7, 2019
  28. Billy White said:

    C'mon, seriously? "Harm" = likely to cause a permanent, significant drop in vital function (you can program in the human anatomy and make sure it knows which parts are important); "human" = bipedal animal, which happens to be the only thing in the world remotely shaped like a human, of which we have probably about 1 trillion photographic and video training cases to train the AI on. If you really think boundary cases like embryos and paraplegics and future humans are going to cripple the system (which they won't), take an hour and code them in. The 3 Laws may well prove unworkable, but not because of this argument.

    June 7, 2019
  29. Guinness said:

    He had a fourth one: he added Law 0.

    June 8, 2019
  30. TehRackoon said:

    Why not have only one rule: each robot must obey its owning human, no matter what.
    Then when a robot goes on a killing rampage, we just find out which human owns it and punish them for not knowing how to properly program their robot.

    "Sorry, sir, you should have told your robot to return home if the shop was closed. Maybe then it wouldn't have busted through the window and stolen all that coffee."

    June 8, 2019
  31. IYN said:

    0. A robot should always identify itself as a robot.
    Only then would the rest of the laws make any sense.

    June 8, 2019
  32. Don Kongo said:

    Dolphins 👏🏻 are 👏🏻 humans
    Change My Mind

    June 9, 2019
  33. Nathan Krishna said:

    What if you needed to protect a human/humanity by destroying humans or humanity?

    June 9, 2019
  34. Arik said:

    He mostly focused on the difficulty of defining "Human", but I think it's much much more difficult to define "Harm", the other word he mentioned. Some of the edge cases of what can be considered human could be tricky, but what constitutes harm?

    If I smoke too much, is that harmful? Will an AI be obligated to restrain me from smoking?
    Or, driving? By driving, I increase the probability that I or others will die by my action. Is that harm?
    What about poor workplace conditions?
    What about insults, does psychological harm count as harm?

    I think the difficulties of defining "Harm" are even more illustrative of the problem that he's getting at.

    June 9, 2019
  35. John Grey said:

    I was hoping for better arguments. I was not convinced. Not that I agree with Asimov's rules either.

    June 9, 2019
  36. QwazyWabbit said:

    Isaac Asimov’s three laws were never about a practical set of rules for AI. They were about Asimov’s take on the Frankenstein Complex and the unintended consequences of mankind’s inventions and inventiveness. Asimov himself said Frankenstein was the first science fiction novel.

    June 10, 2019
  37. thettguy said:

    Just because implementation is difficult does not mean the ideas behind the laws are flawed. Each example edge case can be defined. E.g., don't try to revive a body in which no human DNA replication is going on. Are simulations human? The answer is no. Are embryos human? No. Add some laws that put some weight on the environment. Sure, you have to harden up and make some tough calls, but that does not mean it can't be done.

    June 10, 2019
  38. punker87 said:

    Asimov's laws are not meant to be taken seriously… that's the main point… It's a fascist way to simplify the world into simple universal rules, and that's the main point… It's a dystopian world that ends in chaos…

    June 10, 2019
  39. Zach W said:

    It's hard for me to take your argument seriously when you confuse the words "human" and "person". A dolphin may be a person; it is not a human. Very disappointing 😞

    June 10, 2019
  40. Roberto Vargas said:

    x1.5 speed: Ben Shapiro

    June 10, 2019
  41. Raggaliamous said:

    Put a banging donk on it.

    June 10, 2019
  42. Christopher Anderson said:

    This guy talks like he has some kind of disdain for Mr Asimov and his books. The man was a genius and wrote fantastic stories where the solutions to the problems were always there, but just out of reach for the reader to see. I don't think anyone should take the laws seriously, as they were written half a century ago, before AI or robots were even a thing. But the presenter just pauses and tosses off these offhanded adjectives about what Isaac Asimov did or wrote.

    June 10, 2019
  43. Yogoretate said:

    Video: How do we define "human"?

    Also video: is a dolphin a human?

    June 10, 2019
  44. Mark Bunds said:

    The “three laws” were always a work of fiction. It is axiomatic that one of the first inclinations when new technologies are introduced is to find a way to weaponize them.

    June 10, 2019
  45. Matthew Jackman said:

    The overarching point of this video is correct, but a lot of the points about "ethics" and "intuition" are much less relevant when you consider the level of intelligence the AI would have to reach to attain even a vague sentience. AI is designed by humans, and by the point it could attain sentience to a level at which these laws would even matter, it would most likely already have developed a deep understanding of ethics and intuition, due to the nature of its creators.

    You're right that a rhetorical device isn't a solution to all issues regarding AI, but I found the explanations pretty weak, personally.

    June 10, 2019
  46. Damocles54 said:

    If an ai can't make the same associations a human can, how smart is it?

    June 10, 2019
  47. Lucas An said:

    Private property (self-ownership) is derived from Hoppe's argumentation ethics. It not only solves human conflicts without generating others, but also resolves the conflict between AIs and humans.

    June 11, 2019
  48. hotelmario510 said:

    I feel like most people forget that every one of Asimov's robot stories is about why his Three Laws of Robotics cause problems for people who build robots. All of them are about weird edge cases that cause the laws to break down.

    June 11, 2019
  49. xartecmana said:

    This video really bugs me. The entire point of Asimov's stories, as everyone in the comments points out, is to show what could go wrong with the three laws. He keeps adding to and altering them in response to how things went wrong in previous stories, to allow new stories to exist. The problem this video presents has little to do with the laws themselves, though; what he takes issue with is in fact semantics and not ethics. He doesn't explain the laws properly and instead builds strawman arguments out of the laws (which admittedly don't work!) instead of tackling what the actual issues are. Of course you need to define what a human is for the laws to work, and what constitutes harm. The idea of the three laws isn't to settle these ethical issues but, assuming you could, to ask what could go wrong with them. Cybernetic safeguards must be put in place to ensure robots do not prove a major threat to man, but what Asimov is trying to say isn't "you can basically compress it all into three primordial laws". What he's saying, and proving time and again, is how complicated an AI must be to ensure it does not destroy itself or humanity. How do you set priorities? How do you define objects and people? All of those are obvious quirks you have to work out in order to even create this robot. That is the entire idea of AI. Asimov's stories exist to say: "to ensure something as simple as three laws goes unhindered, you would need countless countermeasures and safeguards and exceptions and priorities that you could never account for before experiencing each potential fault firsthand."

    The problem is, this video totally glosses over all that, and instead says "beep boop the three laws don't work and Asimov is a tool beep boop" by going for what he said he wasn't doing: semantics. The ethics behind this are an obvious obstacle that the three laws fail, over and over, to account for.

    June 11, 2019
  50. Michael McGillivray said:

    Is there a reason one couldn't define a human as "a living creature with 46 chromosomes", or, to prevent sterilization, "a zygote with 26 chromosomes" as well? I understand that the laws are unrealistic, but defining a human is relatively easy. Harm is much harder to define, as almost any action has the potential to harm a human in some way.

    June 11, 2019
  51. Charles Roberts said:

    Do you realise that the Three Laws were first introduced in the story "Runaround"… written 77 years ago!! #1: Asimov wrote stories… they were works of fiction, written for entertainment. #2: Neither computers nor robots existed then… so how could he accurately invent the Three Laws?
    When you write a series of stories as popular as Asimov's were, and still are, then I will take you seriously. In the meantime… stop taking things so seriously and being so up yourself!

    June 11, 2019
  52. NewBreed BDA said:

    Easy fix – itemise everyone in the human race in a data set. Bang. Everyone who is a human is a human to AI.

    June 12, 2019
  53. roger white said:

    You've asked a serious question about something people have always thought Asimov's laws WOULD be the basis for. I don't claim that the laws wouldn't be good to encode into every AI, but the points you've brought up are excellent and really need to be discussed by the scientific community to find what is workable. TY, as I was one of the people who would've answered "Asimov's laws" to the question, but now I realise it is not so simple as that. Great video

    June 13, 2019
  54. Ichigo Makishev said:

    How dare you

    June 14, 2019
  55. ophello said:

    I mean…that’s ridiculous. Just make the AI learn not to make sweeping global plans that drastically change how society works. Done.

    June 14, 2019
  56. Marcelo Tai said:

    Maybe the 3rd law contradicts the first two.

    June 15, 2019
  57. Marcelo Tai said:

    If we don't know what 'human' is and what 'harm' is, or they can't easily be defined, why don't we see that the development tools are inadequate and thus shouldn't be used for this purpose?

    Oh… because we want to do AI, and because AI means power. Like weapons or ideas.

    I would rather stay with dumb robots for a while instead.
    Perhaps wait for humans to be intelligent first…

    June 15, 2019
  58. Zlatan said:

    This is a ridiculous video.
    The problem with this argument is that it is made by a person who doesn't understand how fiction works, and that in the story these questions have already been answered. The story in the books takes place well past the point where things go wrong; it requires that readers assume this has already been solved. So his dissatisfaction with the idea is ridiculous.

    The three laws as described are there for humans to understand. Think of them as a sales pitch: in the story, they are what sells the robots, not detailed instructions on how software engineers implemented them. The only person taking these laws seriously is this guy, because he is obviously bothered by them. The story happens in a fictional world where the reader is to presume that we already have answers to questions such as what is human and what is harm.

    The story is not about engineers sitting around a table debating what it means to be a human.

    June 16, 2019
  59. Jim Wright said:

    Is your next project going to be why Asimov's positronic brains couldn't really read minds?

    June 16, 2019
  60. Someone Else said:

    Actually, I found it entertaining that Asimov presented these 3 laws and then went on to write several novels telling us how they don't work.

    June 16, 2019
  61. Someone Else said:

    Hey! You didn't even get to defining harm. That is another very interesting topic.

    June 16, 2019
  62. Idiot Boy said:

    What if you have the super-intelligent AI create the definitions instead of a human?

    June 17, 2019
  63. Diggity Diggit said:

    So let Skynet do as it pleases

    June 17, 2019
  64. Diggity Diggit said:

    No. Just physical harm

    June 17, 2019
  65. Diggity Diggit said:

    Machine learning can learn what a human is

    June 17, 2019
  66. Diggity Diggit said:

    You won't be programming AI. It will learn by experience. Machine learning

    June 17, 2019
  67. Serge Rivest said:

    Well don’t develop a sentient AI until that is sorted.

    June 18, 2019
  68. Luke Seed said:

    This wasn't a video on why those "laws" don't work but rather the challenges of programming nebulous human definitions…

    June 19, 2019
  69. Ruben Hayk said:

    Asimov: you can't control robots with three simple laws

    everyone : yes,we will use three simple laws, got it.

    June 19, 2019
  70. Van Ivanov said:

    Gee, it's not easy and would take a lot of code and specifications? Guess we better give up making a super AI… as that's not easy either.

    June 25, 2019
  71. Dallin Backstrom said:

    "You're an AI developer! you didn't sign up for this!" is a great quote… but really, even limited use case AI today is so drenched in moral quandaries about the ethical implications of creating systems that make decisions— often times, systems that make decisions WITHOUT us understanding exactly how— if you signed up to be an AI developer, you DID sign up for this, for better or worse…

    June 26, 2019
  72. Gerard van Wilgen said:

    Okay, but then, tons of additional books have been written about what the laws in the lawbooks for humans are supposed to mean, haven't they? And still jurists are often arguing about how particular laws should be interpreted in particular cases, so it seems that humans have that problem with definitions, too. Take for instance the debate about whether a human embryo is a person. Some say it is and some say that's nonsense. Yet, laws for humans work, most of the time, more or less.

    I assume it would be the same for laws for AIs, that is, real AIs: machines that are no more predictable than humans are.

    July 1, 2019
  73. Apophis Jones said:

    The laws were made for humans. Metaphor.

    July 5, 2019
  74. Kexkon said:

    Idk man. I feel like we should use Plato’s idea that a human is a featherless biped. That way people are fine. Plus ostriches and kangaroos are fine. Babies are kinda screwed but who likes them anyways?

    July 8, 2019
  75. Applecore said:

    #@!#% 0#! OXYGEN IS HARMFUL TO HUMANS

    July 11, 2019
  76. PSpurgeonCubFan said:

    keep summer safe

    July 18, 2019
  77. Vincent Gonzalez said:

    You fall on a robot and it pierces your flesh. It can remove the part from your flesh, carry you, and take you to the hospital, but how does it evaluate that? Its movements could kill you by causing you to bleed out.

    July 20, 2019
  78. JC Denton said:

    If you tell a robot not to harm humans through action or inaction the robot would cover everything in bubble wrap.

    July 23, 2019
  79. Jamie G said:

    The problem of what constitutes a human can be specified implicitly using machine learning. We've already done this with decent results and they'll likely get better with time. Machine learning is able to create fairly robust machine recognition of what is a human and what isn't. And generally speaking, the more the AI is trained, the more robust its ability to recognize.

    August 4, 2019
  80. cottton™ said:

    Cannot listen to this… falling asleep…
    And he is talking about definitions? Well… of course there must be definitions. And of course you have to get them sorted out.
    But that's not the point, right?
    if this->isHuman(object): of course you have to define the method. But you started to talk about the rules, ffs.
    So talk about the rules and not the definitions.
    Second video I cannot watch from this dude. Looks like nothing was prepared…

    August 6, 2019
  81. A Person said:

    A simple problem is a medical robot. Don't allow a human to be harmed through action or inaction; yet this robot is supposed to help perform surgery, with a number of risks. Just the act of cutting a person open, which in the long run will benefit them, can be seen as direct harm to a person. Let's go with a kidney donation and transplant: the robot sees you cutting a person open and stealing their organ, to cut open another person and give it to them. There is always a risk of organ rejection and complications, so how does a robot that's been told "do not harm humans, except when it's okay to harm humans" know when it is and isn't appropriate? Inversely, what if the robot says, "This person is a match, but doesn't want to give their kidney. If I don't take their kidney, my patient will die. So I must forcefully take their kidney"?

    August 13, 2019
  82. mechnokie blood said:

    So what sort of rules for robotics should we employ instead?

    August 18, 2019
  83. lefenec said:

    clickbait titles work

    August 19, 2019
  84. dubstep1994 said:

    2:20 If the machine is more intelligent than you (humanity & the inventor), the machine will fool everybody and take control, in some way or another.

    August 25, 2019
  85. Za lito said:

    booooooooooooooring, go and make some website with html

    August 27, 2019
  86. Easy Target said:

    Something about this sounds like peak centrism to me. "Let's not do anything ever because we might contradict ourselves."

    August 28, 2019
  87. X X said:

    I don't think it comes down to ethics but instead utility. Sure, you might disagree with someone coding a robot to recognize a fetus as a human but at least there's no fear of that robot killing any unborn babies with mothers that actually want them.

    August 30, 2019
  88. Aerial said:

    So you're saying science fiction never became fact? Interesting.

    September 3, 2019
  89. Aerial said:

    Oh okay, you actually think the laws Asimov is talking about don't have definitions attached to them through code. WOW. ROFL… Moving on.

    September 3, 2019
  90. Aerial said:

    CONDITIONS FFS..CONDITIONS…..

    September 3, 2019
  91. Aerial said:

    You sound lazy…

    September 3, 2019
  92. arkblazer1 said:

    What if the terminal goal of an AGI is to understand humanity?

    September 7, 2019
  93. Lewis Cowles said:

    An easier general problem to attack: anything having to compute an action's impact on 7.6 billion humans would be slow AF. It's therefore likely it would be bounded using tricks, in the same way that massive open-world games do it (note these games do not, AFAIK, support even 100k+ live players at once). That would lead to not considering other humans, so perhaps deciding, as a robot, to dump waste in developing nations (sadly, something humans do).

    September 13, 2019
  94. AstroTibs said:

    "[The laws are] optimized for story writing" spoken like a true programmer

    September 13, 2019
  95. Alacritous said:

    Late to the game here, but had to say congratulations. You've discovered what in the writing world is called a "plot device." The laws of robotics were SUPPOSED TO GO WRONG. It was them going wrong that drove the stories. They weren't supposed to be perfect. I'm sure that this has probably already been mentioned in the comments here.

    September 15, 2019
  96. hanFstoned said:

    Sadly some things were not addressed. AI does not need to be humanlike in behaviour or thinking.

    Well, just imagine someone programmed all these decisions and definitions, with the Asimov laws as the top-level decisions. In that case they work.

    Btw, you could put this whole video into one sentence: "the Asimov laws don't work, because humans have no clue how humans work".

    September 15, 2019
  97. MaoItsMe said:

    The rules are not half-bad, but the implementation of these rules is too damn hard.

    September 15, 2019
  98. oxi said:

    I love this guy

    September 15, 2019
  99. mark heyne said:

    In the film version of Asimov's "I, Robot", one programmer builds robots that CAN harm humans, and, more significantly, the criticism is made that an AI cannot make a "human" judgement, with idealism and courage, only a strictly rational one.

    September 16, 2019
  100. Samuel Peterson said:

    The premise of, like, every Isaac Asimov story is "here are these man-made laws; let's watch him poke holes in them." Asimov himself wouldn't advocate software designers implementing them; that's like asking Orwell how he feels about fascism.

    September 16, 2019
