TELOSscope: The Telos Press Blog

Ethical Robots?

The cover of the most recent issue of The Economist reads: “Morals and the machine: Teaching robots right from wrong.” A short piece in the magazine states: “As robots become more autonomous, the notion of computer-controlled machines facing ethical decisions is moving out of the realm of science fiction and into the real world. Society needs to find ways to ensure that they are better equipped to make moral judgments.” Moreover, as robots “become smarter and more widespread, autonomous machines are bound to end up making life-or-death decisions in unpredictable situations, thus assuming—or at least appearing to assume—moral agency.”

Indeed, some suggest that society could simply move toward a world free of autonomous (i.e., “self-deciding”) machines altogether. That is, policies could be enacted that prohibit the development of machines sophisticated enough to make their own moral decisions via internal devices; or perhaps future robots that display autonomous “tendencies” will require, if not a halt to their construction, at least the removal of such “tendencies” by whatever means necessary. Yet, for the author, “autonomous robots could do much more good than harm. Robot soldiers would not commit rape, burn down a village in anger or become erratic decision-makers amid the stress of combat.” Thus, “society needs to develop ways of dealing with the ethics of robotics—and get going fast.”

To address these concerns, the article puts forth “three laws for the laws of robotics”:

(1) “laws are needed to determine whether the designer, the programmer, the manufacturer or the operator is at fault if an autonomous drone strike goes wrong or a driverless car has an accident.” This would require “autonomous systems keep[ing] detailed logs so they can explain the reasoning behind their decisions when necessary,” as well as “system design [such as] artificial neural networks, decision-making systems that learn from example rather than obeying predefined rules.”

(2) “where ethical systems are embedded into robots, the judgments they make need to be ones that seem right to most people.” (Indeed, one is compelled to ask: by what cultural standards? And is “most people” not problematic insofar as most people simply won’t be consulted on such ethical concerns?)

(3) “more collaboration is required between engineers, ethicists, lawyers and policymakers, all of whom would draw up very different types of rules if they were left to their own devices.”

While I sympathize with the worries about autonomous robotics (especially military robotics, insofar as it only perpetuates the military-industrial complex), I find the author’s proposed “laws” less than satisfactory. This is not to say that swift action is unnecessary; rather, the author’s solutions do not seem to answer the very concerns the article raises. Although these three laws appear productive on a superficial level, they raise a number of important questions:

(1) If it is the autonomy of machines that is so worrisome, that is, if we simply do not trust the possible moral agency of future machines, how does building more sophisticated technology (e.g., artificial neural networks) actually reduce that autonomy? Does not the constant production of such technology only perpetuate its autonomy? This question troubled a number of technological skeptics, from Heidegger to Ellul.

(2) If humankind is itself so constantly torn between its utilitarian side and its Kantian side, how do we expect to reconcile that tension for robots? Of course, one could say that we can simply upload the Universal Declaration of Human Rights to future robots (to be clear, not an idea I would endorse); but with a number of nation-states failing to recognize it in their own policies, how can one expect it to be recognized for non-human devices? Moreover, will such ethical training require a “robotic pedagogy”?

(3) Will this be a global effort? Will collaboration between engineers, ethicists, lawyers, and policymakers take place outside of a strictly Western context? Moreover, if this is a concern of global society, will UN dialogue be necessary to reconcile competing concerns? In short, will robotics soon be subject to international law?