Robot Ethics?

Ok, so ethics regarding robots is crucial, but I really don’t get this, or am I just being a bit stupid? Are they saying that they want robots to make their own ethical decisions, or are they just talking for the cameras? Of course robots need to be ethically designed and programmed to inhabit ethical spaces, etc., but that’s a partnership between the creators, who it’s for, the space it inhabits and, of course, the robot itself, but not of itself. How can an algorithm replicate moral and ethical phenomena, an exclusively human, socially and culturally derived construct? Some academics would say anything just to get their book sold or to be on YouTube.

In this clip NAO is just doing what he’s programmed to do; it’s the ethical, sociorobotic process behind the robot that constitutes the contemporary challenge, not the ethical dilemmas of Roy Batty style android killers of the future.

“Would you like to have something that didn’t have ethics, telling you what to do around the house?” – Does he mean something like an alarm clock, by any chance?


5 thoughts on “Robot Ethics?”

  1. Nao says:

    They are saying that autonomous machines will need to exhibit ethical behavior, and the complexity of the decision process to do so will necessitate embedded ethical principles to resolve. Alarm clocks might be considered somewhat less than autonomous; it is a continuum, though. Clearly, the robot in the video can effect greater change in the world, but to understand the impending need, one needs to extrapolate to a future world in which more capable autonomous machines are charged with even greater responsibility.

    • deanmeadows says:

      Thanks for your comments.
      In answer, I would have to point out again that those embedded ethical principles will always necessitate a negotiation between all concerned (a socially constructed process) and will ultimately depend on who the robot is designed for. Robotics will need to take account of individual needs if robots are to successfully enter domestic and social spaces… which they still do not (in any discernible numbers) because of ‘ivory tower’ robotics. Social, cultural and interaction design principles are sorely lacking from most contemporary robotics design and, after 100 years of wasted research capital, great though he is, NAO is just another humanoid-ish example. If you have time, further reading of past entries will explain my PhD thesis further.
      First off, NAO needs to be huggable and equipped with haptic protocols; we want to be able to love our robots in an anthropomorphic, tactile sense. We also want robots to do varied pragmatic duties (see laptop development) and also simulate care for us, and that’s only the initial basics covered by contemporary social robotics research. The idea that an embedded algorithm could deal with the complexity of saving someone’s life by climbing onto them and forcing a life-saving medication down their throat (or even an injection) is ridiculous, and yet it is still the ultimate conclusion of this example of academic ethical nonsense. Of course ethics are vital, but please let’s keep it sensible and of some worth!

  2. Nao says:

    See Machine Ethics (Cambridge University Press, 2011) if you are interested in the science behind the hype.

    • deanmeadows says:

      As I stated earlier, some academics will say anything to sell their books or get on YouTube! Ethics are an exclusively human phenomenon, and it’s the people behind the robots who need to be studied alongside their creations. That also applies to academics in the field of so-called machine philosophy. Read this link for more on sociorobotics.
      A quick scenario to finish: a robot equipped with an ethical decision-making protocol is charged with delivering essential life-saving medication; the elderly dementia patient refuses to take the medication, and broadband (including phones) is down. Patient dies! What was the point of machine ethics? Robots must never replace moral and ethical decision making; it’s not ethical, get the point?

      • deanmeadows says:

        At the heart of this debate is responsibility, and to attempt to pass responsibility on to a machine is morally corrupt and unethical; someone behind the machine must always bear the consequences of failure. Perhaps more important would be the development of failsafe tele-robotic protocols for ethical and moral dilemmas, which automatically seek the final choice from a person; if that is not possible, no action can be taken. After all, autonomy (both machine and human) will always be reduced to a dialectic debate between relational and phenomenological criteria.
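The failsafe tele-robotic protocol described above could be sketched roughly as follows. This is a hypothetical illustration only (the function and class names are invented for the example, not from any real system): the robot never resolves an ethical dilemma itself, it polls its human operators for a verdict, and if none can be reached (the broadband-down scenario), it defaults to taking no action at all.

```python
# Hypothetical sketch of a human-in-the-loop failsafe protocol:
# the machine defers ethical choices to a person and defaults to
# inaction when no person is reachable.

from enum import Enum

class Decision(Enum):
    PROCEED = "proceed"
    ABORT = "abort"
    NO_ACTION = "no_action"   # failsafe default: do nothing

def request_human_decision(operators):
    """Poll each registered operator in turn. Each operator is a callable
    returning True (proceed), False (abort), or None (unreachable).
    Returns None if nobody answers, e.g. broadband and phones are down."""
    for contact in operators:
        verdict = contact()
        if verdict is not None:
            return Decision.PROCEED if verdict else Decision.ABORT
    return None

def failsafe_protocol(dilemma_detected, operators):
    """The robot never resolves the dilemma itself: it seeks a final
    choice from a person, and takes no action if none is available."""
    if not dilemma_detected:
        return Decision.PROCEED  # routine task, no ethical dilemma
    verdict = request_human_decision(operators)
    return verdict if verdict is not None else Decision.NO_ACTION

# The scenario from the comment above: all channels are down.
offline = [lambda: None, lambda: None]   # broadband and phone unreachable
print(failsafe_protocol(True, offline))  # Decision.NO_ACTION
```

The key design point is the default: responsibility stays with the humans behind the machine, and an unreachable human yields inaction rather than an autonomous "ethical" guess.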
