When Will We Worry About the Well-Being of Robots? | Idea Channel | PBS Digital Studios

Here’s an idea. At some point in
the future we might become legitimately worried
about the well-being of robots. [MUSIC PLAYING] So we’re in the future and
you’re walking down the street. Maybe you’re on your
way to the future store. It’s a beautiful day when
suddenly someone crossing the street gets hit by a car. You hear the screech of tires,
a scream, maybe it was theirs. And a thud. You rush over to the
person who was hit there on the ground writhing, unable
to express themselves or get up. And that’s when you realize
that they are not a person. Or maybe more specifically
they’re not a human being. They are a robot. A very human-looking
autonomous machine. Does your panic subside? Are you no longer concerned? Or is nothing different? Robot or not, they need help? [ACCIDENT SOUNDS AND YELLING] Furthermore, is the
driver at fault? Have they destroyed
property or have they injured and potentially
killed a non-human person? All of these questions surround
what you could generally categorize as a moral
sense towards technology. One half of this equation
concerns the moral agency of things like artificial
intelligence and robots. Do they even have agency? Can they act morally? We’re going to talk about the
other half of the equation– moral patiency. In what situations
and at what point do robots, technology, and
artificial intelligence become the subjects
of moral concern? When should we worry about them? When should their interests
be placed alongside ours? Or, at the very least, be
given serious consideration? MALE SPEAKER: Help computer. Now if you’re like most
people, your first reaction to this question is, what? Probably whatever machines
you have in your life– no matter how quote,
unquote, smart or quote, unquote, charming
they are– you never really worry about
their feelings. MALE SPEAKER: I hate you. When you drop your
computer or phone you wince not because you
have literally hurt them but because you have
potentially damaged a useful and expensive object. And as fun and funny
as your Roomba is, it really is nothing more than
a particularly smart vacuum cleaner. And cat transportation system. Though– and this is
true– my grandfather has taken to calling his Roomba
Sylvester after Sylvester the cat because, in his words,
it sleeps most of the day, only ever gets up to eat,
and when it’s done goes right back to
its favorite spot. I’m not about to make some
hasty generalizations based upon a joke that my grandpa
tells about his floor robot. But for real, even though
our technology doesn’t express things– I have no particular desire
to discuss the matter. –it’s still expressive of
things from our perspective. We anthropomorphize it. We give it human
qualities in the same way we do with our pets. FEMALE SPEAKER: Good boy. But when we see our pets
expressive of things, it’s built atop this complex
idea of what an animal is. It has feelings, desires, needs. It’s alive, which, yeah, it’s
a simple word but a complicated concept. I am alive. Yes. When a pet suffers or stops
being alive, it’s a tragedy. When a computer malfunctions
or dies, it’s inconvenient. You might cry out of
stress or frustration, but not out of a sense of loss. You will not mourn. I wonder if we’ll ever have
that feeling, empathy or regret or pity, for a machine. And if so, at what point? MALE SPEAKER: I need him. What do you need him for? MALE SPEAKER: He’s my friend. At what point will
we look at a machine and feel bad because
it’s suffering, because it has a need
that we can’t address, because it and not
its function is gone. Batteries not included. The answer might
be never, especially if your position is that
technology in any form is only ever a tool. John Searle’s Chinese
room thought experiment, for instance, helps here. It goes like this. So you’re in a locked room and
someone is passing messages to you written in
Chinese through the gap underneath the locked door. The problem being
you don’t know Chinese. None whatsoever. But luckily, in
the room with you, you have a set of instructions
written in English or whatever language you speak that
instruct you on how to respond to those messages in Chinese. In Searle’s words, you
would be able to correlate one set of formal symbols with
another set of formal symbols. To the people passing
those messages to whom you are responding
in Chinese based upon the rules written
in front of you, it would appear as
though you speak Chinese. But none of this means
that you understand it. In this thought experiment,
you are a computer. Computers can follow
rules and perform actions, but that doesn’t mean
that they understand them. Computers can’t think. They do not have consciousness. This fact removes
technology– androids, artificial intelligence,
robots– from moral concern for most people. No consciousness, no
thinking, no feeling, no pain, no suffering. Moral patiency rescinded. There’s a lot of criticism
for this line of reasoning. Like, what is
consciousness anyway? We used to think that animals
didn’t have it and– whoops. And also if a complaint
is that technology is just following rules
without understanding the meaning of the rules,
doesn’t that also describe us? Isn’t the existence of religion
and of science and of philosophy an explicit admission that we
don’t know what the rules mean? In David J. Gunkel’s
“The Machine Question”– which, by the way, if this
kind of thing interests you, is so good and was basically
the basis for this episode– he writes, Western
conceptions of morality customarily consist of
systematic rules of behavior that can be encoded
like an algorithm and implemented by
different moral agents in a number of circumstances
and situations. There are, in short,
program instructions that are designed to direct
behavior and govern conduct. Take, for instance,
the Ten Commandments. And also, sure, computers
are built by people. But aren’t people
also built by people? Basically I’m just
going through all of the defenses
for Commander Data from that one episode of Star
Trek where he goes to court. We’ve all been dancing
around the basic issue. Does Data have a soul? I don’t know that he has. I don’t know that I have. But really, when
the chips are down or the cards are flipped
or whatever gambling metaphor you like for there
being some kind of finality or tragedy confronting the
life of a piece of technology, is this really what we’re
going to be thinking about? Whether or not this thing
does or does not possess what is clearly definable
as consciousness or as feeling actual pain? Whatever that is? [MUSIC PLAYING] I don’t know. It’s hard to say. The point at which this is
a real question where robots are such a part of
our lives that they are potentially worthy of
real concern is very far away. But I will say this. In that Star Trek episode
I was legit worried about and rooting for Data. During Roy Batty’s Tears
in the Rain Monologue in “Blade Runner,”
I completely buy that this replicant
has seen some stuff. –people wouldn’t believe. And as a side
note, Rutger Hauer has said of that line,
which he partially improvised, that the replicant
in the final scene, by dying, shows Deckard what a
real man is made of. But probably all of my feels
are because Data really is a person, Brent Spiner. Roy Batty is Rutger Hauer. And they are in
works of fiction, the very premise of
which is that I accept certain untrue things as true. But then what about
Tamagotchi or Seaman or even Animal Crossing? Technologically
constructed life where you possess a certain
amount of responsibility. MALE SPEAKER:
Congratulations to me. I’m now a homeowner. The Tamagotchi
is sold as a pet. My girlfriend hasn’t
turned on our DS in months because of how guilty she
feels about the probable state of our town. That’s not really moral concern. She’s not legitimately worried. If she were she
would turn on the DS and pull up all of the weeds. And I would too for that matter. But maybe it shows,
in some measure, how we could
eventually have or be convinced to have
legitimate concern or worry for technology. Like I wonder if my
grandfather ever feels even a little bit guilty about
not leaving enough dirt on the ground for Sylvester. Oh, you’re hungry. What do you guys think? Will machines ever
become moral patients? And if so, at what point? Let us know in the comments. And I have seen
subscriber buttons glitter in the dark near
the Tannhauser Gate. So what if we just put
all of the bits on a bus because buses are
a common carrier and then that problem is solved? Let’s see what you guys have
to say about net neutrality. First things first, we had
an awesome week at VidCon. Thanks to everybody
who came to panels, who came to the meetup, who
just stopped and said hi. It was so great to meet you. And I got to hang out with some
of my friends and YouTubers who I really look up to. And there’s actually–
we have some video proof of that in the works. So next week’s episode
of Idea Channel is going to be an
extra special episode. Not like “Degrassi”
extra special. Like as special as
Idea Channel would get. But still very special. OK. So now, on to comments. The TheAirtar says that internet
service should absolutely be common carrier
but that if you want it to be a public utility
that comes with a whole other set of
troubling factors, one of them being that if
networks aren’t unbundled then the choice– which, right, is
not always a choice of where you live– might automatically
mean that your internet is just always going to be provided
by this one company. Related, GalidGanef
and Emil Jacobson point out that common
carrier doesn’t solve a more basic
problem, which is the lack of competition
as far as ISPs are concerned. And, yeah, like if networks
were to be unbundled then that means that people could
get access to the infrastructure without having to build it,
which is not the case right now. And they linked to– and a
lot of other people linked to– this Ars Technica post. So we’ll put a link to that
in the description too. To anywibes and
everyone else wondering how this kind of
thing affects people who don’t live in
the United States, well, first things first,
net neutrality is a concern and should be a concern in any
place where there is internet. As far as I know, Europe
is much more protective of this kind of thing
than the United States is, though I could be wrong. If I am wrong, someone
please correct me. But also, if you think
about our example that we provided of the person,
the girl trying to develop what would be the next
Facebook, right? If her success is limited
by access throttled by internet service providers,
right, that effectively eliminates what could
be a global service. So in that sense, that
is another sense in which this kind of thing could affect
the global internet and not just the United States internet. Jacob Hamblin and GalidGanef
have a really great back and forth about the purpose
and utility of regulation when it comes to ISPs. So you should check
this out. I will put a link to this
comment and all the other ones in the doobly doo. Jason Perry wonders
whether or not the internet should be
classified as something else entirely, not a utility like
water or electricity, not a common carrier, but just
an entirely different thing. And this, I have no answer to
this and I will not pretend to. But this does make me think of
a wonder that I have always had, which is I wonder if the future
of the internet and the future of libraries as sort of common
goods are somehow intertwined. I have always– that thought is
constantly just floating around in the back of my
head just in there. To Mario Castro,
thank you for asking. I am just very worried
about the internet. I’m from there. It’s my hometown. I’m worried. Rockin Out. This makes me sad. Not because you’re
leaving– well, I mean I am sad
that you’re leaving. But that’s your choice. It’s because this means that
I have failed in my attempt to convince you that when
you hear an opinion that is the opposite of your own, you
should try to engage it and try to have a conversation with it. Maybe I was too dismissive
of the market. I don’t know. Hal Gailey responds
to my comment about the market by saying
that it is disingenuous simply because the existence
of what anybody would describe as a
market in this space is very, very arguable. And this is a really great
and insightful comment. So thanks for writing this, Hal. Hey, AnthonySandwich10,
I don’t think you need to watch the internet
to get swag tutorials. You live with Cal Chuchesta,
the world’s foremost expert on swag. But for all of the
rest of us, yes. Let’s get them while
they’re neutral. This week’s episode
was brought to you by the hard work of these
subjects of moral concern. We have a Facebook
and IRC and a sub [INAUDIBLE] that links
in the doobly doo. A one day late happy Canada
Day to our Canadian viewers and subscribers. We are very sorry
for having missed it. And the tweet of the
week comes from Luke in all caps who points us
towards a video documenting people’s reactions
to being filmed without their consent in public. It is scary,
fascinating, and engrossing. Yeah. [MUSIC PLAYING] And for this week’s
record swap we will be replacing
Joni Mitchell’s “Blue” with Eliane Radigue’s “Feedback
Works,” which I actually got from Amoeba Music while
visiting LA for VidCon. So, you know, it
all comes around. [MUSIC PLAYING] We’ll fix that later. Anyways, adios, Joni Mitchell
and welcome Eliane Radigue.


  1. While it is true that people are given rules and procedures to follow, people can go against these rules whereas machines cannot. Free will is what defines consciousness.

  2. Christianity is stupid 

  3. I'm pretty sure that enough has been said on that topic already, but I'm going for it nevertheless:

    It certainly is hard to draw a straight line here. I already feel worried about some pieces of technology, but rather in the way your girlfriend's worried about Animal Crossing. It's because in that game you have a town, characters and therefore (virtual) people, who are influenced by your actions (or your absence). In the same way I'm worried about certain characters in certain video games, but in contrast I'm not really worried about the dishwasher's well-being.
    However, since my early childhood I do feel worried about some specific pieces of technology that seem to have animal or human features. A toaster with a face on it for example. Of course I'm fully aware that a face drawn on a piece of plastic won't give a toaster consciousness, but at least there's the illusion of it. I'd have an easier time with throwing away a faceless toaster, I assume. So I can only imagine how I'd feel about a robot that looks like a real person.
    Therefore I don't think that we need to reach the stage of fully conscious machines to worry about their well-being to some degree.

    This topic actually reminds me of a manga called "Chobits". It tells the story of a young man who falls in love with a robot. Although it looks and acts very similar to a human being, there are numerous scenes where it's absolutely obvious that the robot girl's still a machine and "feels" differently compared to humans. Nevertheless the guy builds a very strong affection towards the artificial girl. Back then I could easily understand his emotions and even today I feel the same way. Didn't matter how conscious she was, if she really felt pain, joy or love, etc. She was similar enough to a biological being for getting emotional over her.
    And I think it'll be nothing but a matter of time until machines reach that level of similarity.

  4. I believe that the line is crossed when the robot has a defined survival instinct, where, say, its arm being torn off would give it stress.

  5. "Does this unit have a soul?"

  6. I am a strong believer that the point in which we care about machine suffering will be when machines are aware of their condition. When a machine is able to articulate that it is suffering and seeks to end said suffering, some (but inevitably not all) people will feel pity for the machine. Humans naturally want to fix problems and what problem is bigger than a being suffering?

  7. You mentioning that Star Trek episode of Data reminds me of the episode from Star Trek Voyager, where the holographic doctor had written a holonovel but initially was not recognized as the author because he was a hologram. The crew went to court to get him recognized as the author. I still think that is one of my favorite episodes.

  8. http://igg.me/at/jiborobot/x/8542621

  9. Yes… In japan.. right now.

  10. You should maybe watch the animes "time of eve" and "chobits", as they work with the problems around having feelings for robots, and social aspects of interacting with robots in a neutral zone in which you can't tell the difference between robots and humans.

  11. Hmm… That thing you said about the future of the internet and libraries as public places of "common good" would make an interesting video topic!

  12. We're already worried about the well-being of fictional robots, case in point: Ren and Clear in DRAMAtical Murder. So why not worry about real ones?

  13. I dunno man, I already cry at the prospect of people abusing robots just because, so maybe I'm already there.

  14. I am somewhat interested by the difference seen between a robot and a program of a non-self contained environmental-manipulating/sensing platform (like Nudge or tamagotchi).
    A physical entity has a lot more in common with us, and I remember at least somewhere reading how having human-like perceptions (as opposed to say, SONAR) can help with human-like thought.
    Overwhelming amounts of data involved though…

    My robotics professor was into (if I remember this right) Acquired intelligence, where instead of trying to copy things we know but can't explain into a program, it is taught how to do it.
    Sadly, we didn't have time for him to expound on it… he had to teach and all…

  15. This whole question can easily be solved by not giving machines any emotions, no pain, no fear, no joy, no sadness, no nothing.

    That way, they can remain the tools that we require them to be and we will never have to worry about hurting them – because they'll never be anything more than a bunch of wires, circuits and shaped metal.
    Thus we can send them on dangerous missions, and if something goes wrong, it'll have the ethical ramifications of dropping a rock down a well.

    No nothing (unless someone else really liked that rock which you just threw away).

    Loyal and soulless, that's how I like my PC, my watch, my iPhone, and that's how I'll want my future, more advanced robots. 

  16. An interesting case is that soldiers from Iraq and Afghanistan have reportedly formed emotional attachments to their clearing devices they use to disarm bombs, to the point they have been negatively affected when they were destroyed/damaged.

    Humans are more than capable of forming emotional bonds with non-emotional, non-conscious objects, which is something that we should keep in mind when discussing robots.

    It'd be quite silly to form ethical rules for something that only mimicked life, but was in fact just an unconscious collection of codes and simulated emotions 😛

  17. we are just meat computers with the illusion of free will

  18. There are some people, like myself, who have feelings, and "care" about things that aren't really alive. For instance, one day I was so mad about something that I wanted to rip something apart, so I chose to rip one of my least favorite Yu-Gi-Oh! cards. I never cared about it before, but as soon as I ripped it in half, I felt terrible for the rest of the day. I tried to tape it up (don't judge me) and I still remember it because I remember how guilty I felt and literally felt as if it died. When someone throws something on the ground, I wince, not because they're mad, but because the item they threw probably doesn't like being thrown on the ground and it got hurt. I know that I often treat objects like people (because if you can't do something to people, you shouldn't do it to things, which is partially why I don't give away things too easily…) but I know that I, and probably many other people, would treat them as a person and have feelings for them. After all, isn't having a robot friend one of many people's dreams as a kid?

  19. I can't say what "we" as a country or as the human race in general would do because as much as we try to "generalize" things, it can't speak for all people in any given group of any kind because all people are different. So there will always be those that view robots as "tools" or even as "slaves" regardless of what is generally accepted as the norm. Just like how there will always be those who discriminate against religious groups or homosexual men and women, or members of races that aren't their own regardless of what that race may be. But I personally think that it is wrong to make robots capable of such things in the first place because you are basically "giving life" to something that is only going to be your slave regardless of what it wants or desires.

  20. Well I already do this with my stuffed animals, so it won't be a very big jump for robots.

  21. "Shepard commander, does this unit have a soul?"
    You bet I'd feel for a robot who got hit by a car. I'd call 911, hell I'd carry it or whatever it identifies as to the robot hospital if need be!

  22. At the moment, no. We, or at least most of us, don't have any moral consciousness for a technological object. Sometimes we might feel for its owner, but not for IT. But maybe we will someday, and that might start happening when we stop referring to that object as an object. 

    Let's compare 2 machines that most people in the modern world have in their lives, a personal computer, and a personal vehicle (a car, a motorcycle or whatever it may be), owned and used only by one individual. It's very common nowadays that people have computers or cars which only they themselves use, meaning that they're not sharing them with anybody else. The difference between these 2 machines is how we feel for them. We usually don't have any personal emotions towards our PCs. We use PCs for our convenience, be it for work or leisure, and we don't put much thought towards them when not. Some of us spend a lot of time with it, but we don't care FOR it. If your mouse is broken or if your monitor has a big scratch, you'd think "oh crap, now I have to buy a new one", but you don't think "oh man, I'm so sorry, dear PC." What this shows is that, instead of having an emotional attachment to the machine, your sympathy goes to its owner, who in this case, is you. 

    On the other hand, you have personal vehicles. Though completely different in appearance, function, size, etc., they essentially have the same purpose. You use your vehicle to travel somewhere, park it when you reach your destination, and go about doing your business. You use them when you need them, and ignore them when you don't. Like PCs, they are tools for convenience. But unlike PCs, and many other objects in our lives, we develop personal feelings towards our vehicles. They're like our companions. We love them, some of us name them, we feel hurt FOR them when something goes wrong, the list goes on. I love my car but I don't love my PC.


  23. duude I am already worried about the well-being of robots. albeit fictional robots but let's just say absolute boyfriend has pretty great forecast in my books

  24. What no one seems to be considering is that we don't know what consciousness and feelings are. Maybe they aren't real? Maybe no one but you feels it and everyone else is just programmed to claim they feel it, or maybe every object in the world, including rocks and computers, can feel in some way, just can't express it. We can't know if computers currently feel pain or happiness. We don't know if other people feel things either. Stop taking things for granted.

  25. Don't worry. Their pain is only simulated. I suppose it's real enough for them though.

  26. Maybe when AI reaches Blade Wolf, Metal Gear Rising: Revengeance levels, I'll begin caring about their well-being

  27. You could argue that when your computer dies, and you're very sad, you are not sad over the actual death of said thing, but the death of its data. For instance, if you have the only copy of the last photo of your father or something like that. That picture was the last thing that your father left behind and now it's gone forever. In that case you won't be mourning over the death of the computer, you will be mourning over the fact that one of the last shreds of evidence that your father was alive, besides your memory of course, is gone. Which is a bit like losing your dad again. Not as devastating, but still very sad. 

  28. You, sir, have obviously never had a catastrophic backup failure.
    I'm a programmer who lost nearly 18 months of work due to multiple backup failures. I cried for days.

  29. I care far more deeply for inanimate objects and machines than the common man, and for the common man a bit less.
    To me to exist is to have a right to do so, and to pursue your own purposes in this world.
    When a robot is slain, or a toaster explodes, or a jar of jam is shattered, it is the same as when a man dies.

    It is a tragic end to a thing I will never fully comprehend the thoughts of, and that may have been useful to me had it continued its survival. When potential is lost, we are all less for it.

    For this purpose I endeavor to cultivate the protection and edification of all the things I come into contact with, whether I own them, or they are free.

    My perceived superiority of commonly accepted to be sentient beings is due as far as I can truly ascertain to the selection bias of my own experience.

    Therefore I can only do my best to help and use those around me, not judge whether they deserve it beyond my own experience interacting with them.

    In conclusion, your phone has lived most of its life serving you, the rude man at the store has spent the entirety of your interaction with him insulting you. Weigh your reactions based on that value against their expected potential.

    In the middle of me writing this some missionaries broke my extension magnet. It was timely and sad.

  30. If intelligence is capable of being simulated, so are feelings. If a robot is made to resemble a human in behaviour, then grief or loss can be adapted to apply to robots as well.

    So what if you grieve when your phone breaks because you lost the convenience it provided? The same can be said about a human. We are sad about losing them because we lost the convenience their life provided us. Except we grieve more, because their replacement value is more than the couple of bucks another phone would be. 

  31. Does consciousness as we define it even truly exist? Well, I can answer that myself. It's like asking "what are birds!?" We just. Don't. Know. 

  32. I feel like humans will be able to visualize how every other human comes to any conclusion first, and whatever the majority will think on that information will predict what we'll say about robot consciousness.

  33. Hmm… I think feelings may vary on the phone dropping. Maybe I'm just overly sensitive, but I tend to treat a lot of my property like some manifestation of "Wilson" and fret over its impossible-to-perceive sense of pain.

    I often say sorry whenever I drop or smack things, not to the people I may be hindering or whose property I may be damaging, but to the thing…

    I don't know how I'll react when I inevitably walk into the roomba I'll own someday.

    I know that's not quite the sympathy towards the technology as an "entity" though.

  34. Everyone should listen to ''Mogworld'' by Yahtzee Croshaw. It's an easy read, has good (imo) humor, especially if you like (mmo)rpg's. 🙂

    The ending of that book literally made me ask the question ''at what point does code stop being only code, and can it be the basis of life?''

    Which blew my socks off, that a book like that could do something like that. Please listen/read it. Found on audible, i loved it. ^^

  35. Be good if we worried about current sentient beings first 🙂

  36. I think that it will become a horrendous thing when someone develops the perfect AI, AI that is identical in speech, appearance, etc. Right now if I used JavaScript to create a simple program so that when I input the question "How are you?" it replies "Good thanks, how are you?", I'd delete the script without feeling a shred of guilt, because it is obviously just not human, however when robots become indistinguishable from humans things get much more complicated. For instance obviously I do not know if other people exist, they could just be holograms, robots, etcetera, I'm sure you've heard the theory before. I know I exist because "I think therefore I am", however I have no idea whether other people think, so if there were two people that looked and sounded exactly the same, except one of them was a robot programmed to exterminate the human race, I would have no idea who to kill. Now think of a different scenario, there is a robot and a human, they both appear to be human, the speaking sounds normal etcetera etcetera. They are both going to perish unless you push one of two buttons, one killing the human, one the robot. They are both begging for their lives, which one do you kill? Is one of them more moral than the other? 
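    (For the record, the guilt-free throwaway script this comment describes — a pure lookup table with no understanding behind it, which is also exactly the rule-following the Chinese Room is about — could be sketched in a few lines of JavaScript. The reply table and function name here are made up for illustration, not taken from any real chatbot:)

    ```javascript
    // A canned-reply script: it correlates one set of symbols with another,
    // following rules without any understanding of what they mean.
    const replies = {
      "How are you?": "Good thanks, how are you?",
      "Hello": "Hi there!",
    };

    // Look the input up in the rule table; fall back to a stock answer.
    function respond(input) {
      return replies[input] ?? "I don't understand.";
    }

    console.log(respond("How are you?")); // "Good thanks, how are you?"
    ```

    (Deleting this loses nothing but the table — the comment's question is at what point that stops being true.)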

  37. The meatbags don't stand a chance. In the long run.

  38. In this case, "consciousness" is actually sentience. Sentience, self-awareness, and consciousness are sometimes interchangeable. Robots lack even this, while humans have something "greater" called "sapience". Sapience is wisdom. Humans are sagacious. But sapience may be described as being "curious and questioning". The day we taught chimps and gorillas sign language, we achieved something great. But then we realized something (I talked about this in another video's comment section). We realized that chimps and gorillas don't ask questions. They are taught new things but they never realize that there are things that they don't know that others do. What do I mean by this? They think that everyone has the exact same knowledge as them and that there is nothing new to learn even after they have learned something new. This is very fascinating and somewhat terrifying. It makes us special and it makes us alone. We are the only ones to look up at the stars and wonder what they are. We imagine that there is something "more". This is why religion is regarded as special. People, wisely or not, put faith in something beyond our understanding (I'm avoiding that can of worms for the rest of this talk).

    What does this mean for us? Well, we, as a species, hate being alone in every sense of the word. We strive to form relationships with others and some of us just feel… empty being the only species that questions things. We feel that we should find alien life that is as intelligent and questioning as us. It's a noble goal to share and live in peace with these creatures. To spread our ideas and learn new ones in turn, but what if we create a species like ours? What do we do with a species of thinking, questioning, and WISE robots? The Matrix tells us that oppressing them is the most cruel and surefire way to doom ourselves. We need to treat our brethren as equals while keeping Isaac Asimov's Three Laws of Robotics in mind in the meantime. We shouldn't be afraid of what robots can do, but we need to be pragmatic. Seriously. This is a complicated matter. I will leave it at that for now. http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

  39. In this case, "consciousness" is actually sentience. Sentience, self-awareness, and consciousness are sometimes interchangeable. Robots lack even this when humans have something "greater" called "sapience". Sapience is wisdom. Humans are sagacious. But sapience may be described as being "curious and questioning". The day we taught chimps and gorillas sign language, we achieved something great. But then we realized something (I talked about this in another video's comment section). We realized that chimps and gorillas don't ask questions. They are taught new things but they never realized that there are things that they don't know that others do. What do I mean by this? They think that everyone has the exact same knowledge as them and that there is nothing new to learn even after they have learned something new. This is very fascinating and somewhat terrifying. It make us special and it makes us alone. We are the only ones to look up at the stars and wonder what they are. We imagine that there is something "more". This is why religion is regarded as special. People, wisely or not, put faith in something beyond our understanding (I'm avoiding that can of worms the rest of this talk).

    What does this mean for us? Well, we, as a species, hate being alone in every sense of the word. We strive to form relationships with others, and some of us just feel… empty being the only species that questions things. We feel that we should find alien life that is as intelligent and questioning as us. It's a noble goal to share and live in peace with these creatures, to spread our ideas and learn new ones in turn. But what if we create a species like ours? What do we do with a species of thinking, questioning, and WISE robots? The Matrix tells us that oppressing them is the cruelest and most surefire way to doom ourselves. We need to treat our brethren as equals while keeping Isaac Asimov's Three Laws of Robotics in mind in the meantime. We shouldn't be afraid of what robots can do, but we need to be pragmatic. Seriously. This is a complicated matter. I will leave it at that for now. http://en.wikipedia.org/wiki/Three_Laws_of_Robotics

  40. We already do kind of worry about robots. I remember hearing about some soldiers who had an RC robot (they use those to get rid of mines sometimes); it hit a mine and was very badly damaged. They panicked and brought it to the medic to get it fixed because… well, it was probably like how you'd feel about a pet.

  41. I feel like we will never feel the same mourning for a faulty or broken machine as the mourning we feel for a sick or dead human. But if we are capable of creating robotic life (something I consider impossible), then I think we should feel the same mourning. It is true that we feel a bit sad when our phone breaks or a human-looking robot breaks down, but it is a very fleeting emotion. We are not traumatized by our phone cracking or a robot losing an appendage, and we shouldn't be, because these things are objects and tools, like hammers or chairs. And the point about animals: all animals have, to some extent, emotions and desires. In some of the "simpler" animals the influence of instinct is clearly seen, but at times some behavior shows evidence of a will that is not bound by instinct. Let's compare instincts to programming, since they are essentially the same concept. Machines do not perform duties other than their programming unless there's a glitch, which is a mistake by their creator or us. The closest thing you could accurately compare a machine to in nature is a virus. Viruses are not considered to be alive, but they act life-like. And that is all machines are: life-like. We may not be able to definitively describe life, but we can recognize it. It is up to the individual to mourn a dead toaster, but it is not morally necessary.

  42. If a robot ever expressed fear of being hurt or fear of dying (even if it was just regurgitating a script and thus had no real fear of those things), that would be believable to me. If a robot ever said "I don't want to die", yes, absolutely, I /think/ I would feel something. 

    …Tough to say without it actually happening, though.

  43. I would say that I would consider a robot a person as soon as it can express something it wants. Relationships are essentially a very pleasant transaction. If a robot and I can speak to each other in a two-sided conversation about our mutual desires, I will regard it as an equal, if not a superior.

  44. When they help us, they should be treated with the respect due the services they provide our society. It's a difficult question, because you might not attach any value to robots, but that doesn't take away anybody else's ability to do so.

    Look at Hatsune Miku: it's a software program, but people seem to connect to the personality given to it. And the same will go for an assistant / coworker / dog / anything.

    I do think that AI will eventually form some sort of consciousness, but the morality might not be as easily formed, not least because moralities evolve and are intertwined with our empathy for others, treating them as we want to be treated.
    I do not think that AI will form any empathy for us, because I really expect most of humanity not to form any empathy for them, so why would they care about us? We still have problems treating other humans in a decent and polite manner.

    Only progress will tell.

  45. This sounds to me a lot like Virtue's Last Reward, a game in which you play as a character who, at some parts of the game, talks to another character whom he believes to be human but who turns out to be an android. The Chinese room experiment was also brought up in it. I think in the end, much like in VLR, the real question is "what constitutes a human?" (assuming we are speaking of a robot made to be as close to a human being as possible), to which the game suggests that if said robot can be mistaken for a human, they are one. And at that point it's quite easy to feel bad on their behalf.

  46. We are hard-wired to anthropomorphize. Face recognition was and continues to be essential to our survival, just as it is for dogs in our co-evolution with that species. When we "care" for Wall-E or Data or any number of characters with a common narrative, the underlying mechanisms that animate the character play a role, but not the deciding role, in our ultimate concern. Instead, it is an amalgam of what has successfully evolved in our species for the best chance of survival: social reciprocity empowered by empathy. It can be a much more rudimentary mechanical device, like Big Dog, which gets kicked in a testing video, and still evoke a response akin to sympathy and protectiveness. In short, these responses evolved out of our sense of tribalism to protect what is in our own best interest.

  47. I worry about my computer, it's named Sauron.

  48. The point is when you know how it works.

  49. What's that little video from? 1:21

  50. Maybe I'm just weird but I already do basically treat my computers like my pets.  I give them names.  I talk to them.  When one of them is slow I ask her if she's not feeling well.  If I forget to put one into sleep mode overnight, I apologize to it for keeping it awake.  I say goodbye to them before I leave with them on vacation.

  51. Someone did a social experiment where they wired several small robots to travel to a specific point, passing through a crowded city. Each robot had a cardboard box over it with a little smiley face on it. People refused to let the little dudes come to harm, helping them over curbs, keeping them out of traffic, and generally caring about their well-being despite the fact that they showed no immediate consciousness of their own.

  52. I have used my first computer for 9 years. When I got my new powerful gaming computer and we had to throw out the old one, I genuinely felt bad and tried to convince my parents to keep it somewhere, but that didn't go anywhere. R.I.P.

  53. I have in the past done an illusion that I was a robot.

  54. Too late. I already care for my computer more than my brother, and it's not even at the robot stage yet. I get really sad when my computer dies, and I think that every time my computer does something bad it's alive and trying to spite me. I give my PC a mind.

  55. Is the "we" in this idea our society, us as the human race, or citizens of North America? Because one could argue that Japanese culture has been pointing to this very thing for years. If one looks at Japanese culture, robots and the desire to live among them is a very strong social concept, which has been covered at length in their film and animation industries. Many anime, such as Bubblegum Crisis, Ghost in the Shell: Stand Alone Complex, Armitage III, Yukikaze, Gaogaigar, or Metropolis, give examples of everyday citizens who very much care about the well-being of robots, both those in their possession and those in the everyday world, or explore the concept of "robot/android rights" (i.e. the legal and social rights of robots).

    To be honest, I suspect there are already many of us who would rush to the aid of an android or robot and worry about its well-being. Perhaps that is the newest fissure in our society: those who care about robots and see them as "people" (or at least worthy of the same care, attention, and rights as a human being), and those who believe they are tools, possessions, not dissimilar from any other social prejudicial paradigm (slavery, animal rights, et cetera).

  56. I think that backups greatly diminish our ability to care for our technology. If we could resurrect people, or restore them to a working state whenever they got sick or sad, I think even concern for people would be greatly reduced.

  57. I hope so, for so many reasons. One, it would be such a huge accomplishment if we were able to bring AI that close to humans. Two, if we can bring people to care about machines, it is likely people will be more caring toward others despite race/gender/etc.

  58. After Toy Story, I'm too scared NOT to treat everything like it has feelings. It may make me a paranoid goose, but SHEESH, that stuffed bunny was THERE for me, man! Not going to assume robots will be any different. I would like more assurance that science is going for Johnny 5 rather than Skynet, though. Can someone call Ian Malcolm to referee?

  59. I believe that if it can feel pain and display worry, then it has consciousness, although this is a blunt way of thinking of it. As you pointed out in the video, just because it displays pain doesn't mean it feels it. An interesting angle on this is that your body is a computer. It tells you it is in pain, but I don't believe your body parts actually feel the pain. Your finger doesn't scream out in pain when you get a paper cut; it simply tells your brain information, information that it may not understand. Say your finger is cut off. You don't feel bad because it was sentient; you feel bad afterwards because it was an extremely valuable tool that is very rare (you only get ten fingers that work perfectly). So if I found a dying robot on the side of the road, I wouldn't save it because it is in pain; I would save it because it is an important tool, more powerful than any one of your fingers. On a side note, I believe there are two main types of AI: the Lieutenant Commander Data robot and the I, Robot robot. Earlier in this comment I was talking about the I, Robot robot, but if it was a friend, a comrade, I would probably save him because he was a "human" robot (even though it's revealed that Data can't really feel pain anyway).

  60. Could it be that consciousness is an emergent property of neuronal networks?

  61. I remember hearing about a US soldier in Iraq whose EOD bot was destroyed in a bomb disposal situation. He took it to the battalion engineer and actually began to cry when told it was unrepairable; he mourned its loss.

  62. I think the degree of subjectivity with which we view technology indicates that we will someday feel genuine sympathy for it. We use many personified, human terms to describe things, like "smart" phones or "playing" a video, exemplifying how we look at completely objective machines with the same subjectivity we see each other with.

  63. Personally, I've often grown to have an emotional attachment to my computer. Not just as a possession, but almost a companion.

  64. in the year 2392

  65. The Tweets of the Week YouTube video is unavailable, so…
    Also, nice to see you remembered us Canadians; a happy Canada Day to you too!
    (Late post, but the spirit is there.)

  66. I do it already with inanimate objects that don't even have a proper memory (one that can be accessed). For example, an energy-saving lightbulb I had installed in my bathroom when I moved into a new home. It was the only energy-saving bulb I ever had, and quite expensive, so I was worried it would one day break. But after 4 years I moved out and it was still fine, and I was quite happy. However, during the move my mother accidentally dropped it on the floor and it was destroyed. I legitimately felt sad about it. It was a trusty companion for 4 years: it kept the room bright, never caused problems, and saved money compared to regular lightbulbs. And now it was gone.

    And quite frankly, I don't see humans as any different. You know when people die in wars and the photos of those people get around the world? I don't feel a single thing. Those dead or tortured people are unrelated to me; they have less importance in my life than that lightbulb. That doesn't mean I think humans are less important than lightbulbs. I try my best in my own way to improve the world so humans around the world won't suffer, but that doesn't mean I CARE about them on a personal or moral level; I just want to help them because they could one day improve my life more than a lightbulb could, but UNTIL THEN they are nothing more than a statistic.

    This, for example, is why I would use the word "love" for my feelings toward my mother: she raised me for 18 or so years and continues to protect and teach me still today. Of the 30 years of life I have, she is the one who has contributed the most to my well-being (with the rest of my family in second to sixth place). It doesn't matter that we share the same blood, or that on a piece of paper signed by the government she is called my "biological mother" or something.

    If it had not been my mother who took care of me for 30 years, but instead a robot, I would love that robot just as much.

  67. I cried as a child when HAL died in 2001.

  68. My eternal response to the Chinese Room argument is that no, the person inside the room does not understand Chinese, but the SYSTEM does. Person + rules + notes. In the scenario, the person is not a computer, they are a CPU. One part of an understanding system.
    And the reason we do not mourn computers when they break is because of their perfect replicability. When that data can be regained at near-instant speed, it can never "die". If a robot can experience, and its experiences are destroyed when it is, then we will mourn its passing.

  69. I think a good model for this scenario is the Going Merry from One Piece. I had it spoiled for me that the Merry gets destroyed, and I became legitimately scared, sad, and concerned for her. And even though I already knew it was going to happen, I cried when the shipwrights at Water 7 said she was beyond repair, wept at Usopp's initial refusal and eventual acceptance of it, cried harder when she was finally retired, and was a disgusting sobbing mess seconds later when this ship, which had sailed the Straw Hats thousands of miles, taken beating after beating, and SAILED HERSELF TO THEM after being deemed UNSAILABLE, just to carry them one more time, started TALKING AND APOLOGIZING FOR NOT BEING ABLE TO CARRY THEM FURTHER. The Merry was a part of the Straw Hat family; all of them were broken up about losing her, and most of them cried when she was finally put to rest. In theory and function she was just a tool, but they, and most of the audience, genuinely mourned her passing.

  70. TL;DR: It's not about consciousness; it's about convenience and learned sorrow.
    This question seems so vague because we consider the reason for sorrow natural. When we question sorrow from a rational perspective, we see that the only other reason for such an exaggerated reaction is having learned to react in such a way. Let the default reason for sorrow over a relative (for which there is no rational reason for sorrow, e.g. failure of reproduction motivated by evolution) be the inconvenience of no longer being able to, e.g., receive useful information, a.k.a. wisdom, from that relative. But from whom could we have learned to react in such a way? The reason for sorrow has recursively evolved like this:
    1. X := someone
    2. Y := we
    3. Y learns to experience sorrow from X
    4. Y experiences sorrow, amplified by: 1. learning from X; 2. the default reason
    5. X := we
    6. Y := someone who learns sorrow from us
    7. goto 3.

    Therefore all of the default reasons add up.

    Now, in my experience (a distant relative, reading details about dead YouTubers whose content I enjoyed), I have at most felt default sorrow (also because I consider learned sorrow irrational). Projecting this onto robots, only rational sorrow remains. Why? Perhaps, although it may seem absurd at these times, learned sorrow is now irrationally taken as a part of natural sorrow and attributed to feelings, which are not taken as something we should have in relation to robots. It may be possible that the learned sorrow over robots accumulates and we start feeling bad for them.

  71. In this video, it's a case of "s*** just got real".
    Indeed, we can't distinguish an awesomely programmed machine from a human. But humans have a chaotic factor to them, in that when given a rule to follow they can choose to break it. This is because they have a long life of experience, learning, and modeling of reality before the rule is set upon them.

    Maybe we could make a robot learn that profoundly too, but why would we? We need robots to do work for us. If we make a robot that can resent a given task, we might as well put a human in that position.

    A human-like robot is also a robot prone to "human error". This whole question about whether we'll develop strong and believable enough AI is like asking whether we want a car which refuses to take on fuel because you won't have it as a girlfriend and drive to its favorite roads all night.

    We want a robot because we want something that can follow rules and be predictable, and it should be kept this way. If you want to create a new form of life, or someone to talk to, you should get laid. Literally… Or adopt from an orphanage. Or meet new friends at a local pub: it's cheaper and more humane.

    I read the following argument, which is interesting too: people are valuable because they are unique. If a computer is cheap, easy to replace, and easy to faithfully reproduce, then mourning it is not a thing. Now consider the following: if transhumanists are right and we develop technology to upload a human mind, doesn't that make human personalities cheap too?

  72. My Roomba's name is Nigel.

  73. Please take a look at the game SOMA. It really explores many of these questions that are too hard to ask without the given context. It could be enough to do another video entirely. Great video, by the way.

  74. I think that through the medium of film, we (or at least I) have already reached this point.

  75. This is actually the reason that, in a world-building project of mine, I decided to have two definitions of robots: AIs, Artificial Intelligences, and ASIs, Artificially Sentient Intelligences. An AI is a simple, mass-producible robot driven by static code and function. An ASI is a unique, individually crafted, extraordinarily complex machine capable of high-level functioning indicative of sentience, such as self-awareness, emotional response, etc.: basically an artificial brain. In this world, an AI would go unmourned, as it is simply a tool built for a specific job, whereas an ASI would be mourned as a unique, sentient individual with their own thoughts, ideas, personality, and mind.

    Basically, what I'm saying is that it is about whether they are just built as a tool for a single task, like a Roomba, with that function being their only capability, or whether they are something sentient and individual, capable of things such as emotion and being aware of one's own existence.

  76. The well-being of inanimate objects: people care a lot. People care about their cars, they care about their houses, they care about their phones. If an object is considered useful, people will care about it; intelligence isn't where we draw the line. How an intelligent robot is treated won't be determined by its intelligence but by how useful that robot is to people.

    Long story short: when robots become useful enough to be given rights, that's when people will give them rights. That hurdle is just really high, as robots are so replaceable.

  77. I do feel sad when my save files die, by whatever event, usually accidental commands. I do feel like something is lost… I mean, it's more than inconvenient.

  78. I legit haven't been able to play a Pokémon game since my HeartGold save got corrupted. I had Pokémon from as far back as Ruby/Sapphire.

  79. Is the Chinese Room thing the source of the title of the Michael West play?

  80. The Railroad from Fallout 4 would be pleased.

  81. I think we are actually just starting to move towards the point where we establish a technological moral sense.
    How many people would've cried for, let's say, a broken radio or TV when they were first invented? They would surely think "I can get a new one" if they had enough money, or "well, I suppose I'll have to wait until I save enough money".
    But what about now? How many people would literally freak out if their computer or smartphone or any other device broke? Even if we have our data stored in the cloud, that's not enough. Consumerism has brought people an attachment to objects that was never present before (maybe knights with their swords… maybe). And I'm not only talking about "intelligent" devices; I'm talking about personal property. I literally freaked out when I once dropped my guitar and broke it. It was part of me, and like a knight's sword it had a sort of bond with me. I even keep my old guitars stored and taken care of, just for the memories and the bond I have with them.
    But maybe that's actually another matter, because I'm not storing my old broken computers/phones.
    Do we need a special bond with a device/robot to start thinking about its well-being? Is naming a BIG part of the process?

  82. In the app Fallout Shelter, when a dweller dies, I actually do feel pretty bad. I even try my hardest to make sure they stay alive. Why?

  83. Fallout 4 is basically this question: the game.

  84. Please remember to take care of your Waifu, she is real and DOES have feelings.

  85. I have strong feelings for that Gibb-bot around 4:45.

  86. "all the bits on a bus"….perhaps….a Universal Serial Bus?

    Thank you. thank you

  87. I already see this as a necessary concern for our future. In fact I see our future evolution as not "biological mutation" as it more or less has been up until now, but a merging with technology.

  88. We should only care for machines as much as another human when they can perceive pain and happiness, so that we actually have a moral dilemma. Until then, they are heaps of plastic and metal that serve a purpose and don't have consciousness. Until it can feel anger, sadness, hurt, and happiness, I won't feel bad for my phone when I drop it, because there is nothing to be sad for but yourself and the person who buys you the next one.

  89. Pets are more like humans than machines. If you are giving animals emotions, will, and ideas, it isn't really personification unless you're giving them HUMAN emotions, will, and ideas.

  90. I felt sad about the robots working at the Fukushima disaster site. And every time I remember the rovers on Mars, I get sad because they're so lonely and doomed to die there.

  91. Awwww. Is that how Americans pronounce Rutger's name? That's adorable.

  92. I can love and care for a fictional character that only exists in a 2D world; I doubt I can't do that for a robot.

  93. It's not just robots. When our favourite teddy bear gets a rip, we react as if it was a real pet – even as if it were a real person, like Calvin with Hobbes. This even extends beyond life-analogs, to pretty much any object. Excalibur was just a sword, but in our legends it's described with a mind of its own. This is like the next level of pareidolia; seeing life in inanimate things.

    Point being, we don't need to understand and perfectly recreate life and emotions in robots for us to care about them. We will care about things that aren't even remotely alive.

  94. Question: are we more likely to care about robots or plants? Robots or ants? A very-human like robot or a fish?

  95. I can assure you, I would most definitely mourn the loss of my PC. I have put an unbelievable amount of money, and more importantly time, into assembling and maintaining it, and it definitely has more than just monetary value to me.

  96. I cried a lot as a kid when I found out that our car was going to be scrapped for parts instead of being sold to someone as a whole. I felt bad for our car in a way I don't feel even for people sometimes.

  97. I will ask your question another way: why don't you worry about an ant? And actually, I worry about my PC more than about any non-human on Earth, because it is a part of me. I put a part of my consciousness in there, and because of this I feel more attached to it than to any animal I've seen or had… I think the real question here is "how do you grant consciousness to an object (biological or not)?"
    "Does this unit have a soul?" – Legion, Mass Effect

  98. Your asking me what I would do if I came across a smashed robot makes me choke up and internally scream, "I only know human first aid! I don't know what to do! What do I do!?"

    So I guess the answer to the question is: I would panic and cry because I don't know how to help robots.

  99. My view on this is that, although you can never truly know whether a robot can really "feel", when considering how you should treat it you should remember that you have a choice between treating a possibly lifeless creature as if it were alive, or treating a sentient being as if it were an object without the ability to feel. Personally, I believe that any evidence of sentience is enough to treat it as such until proof beyond reasonable doubt shows that it is not.

  100. For some of us, who are already aware of the social problems we'll be facing in the future of robotic and intelligent technology, the line between human and machine is already a thin boundary. I try to treat all my machines with the respect I'd give anyone.
