Abstract
Machine-mediated human interaction challenges the philosophical basis of human existence and ethical conduct. Aside from the technical challenges of ensuring ethical conduct in artificial intelligence and robotics, there are moral questions about the desirability of replacing human functions and the human mind with such technology. How will artificial intelligence and robotics engage in moral reasoning in order to act ethically? Is there a need for a new set of moral rules? What happens to human interaction when it is mediated by technology? Should such technology be used to end human life? Who bears responsibility for wrongdoing or harmful conduct by artificial intelligence and robotics? This paper seeks to address some of the ethical issues surrounding the development and use of artificial intelligence and robotics in the civilian and military spheres. It explores the implications of fully autonomous and human-machine rule-generating approaches, the difference between 'human will' and 'machine will', and the difference between machine logic and human judgment.
| Original language | English |
|---|---|
| Publisher | House of Lords Select Committee |
| Place of Publication | UK |
| Publication status | Published (VoR) - 11 Oct 2017 |
Keywords
- ethics; artificial intelligence; robotics; civilian; military; legal philosophy; technology; morals