Development of an Artificial Moral Agent
For humans and robots to truly communicate with each other, artificial intelligence should be developed with moral factors incorporated.


Robots have been developed since the 1950s to perform simple, high-precision, or dangerous tasks on behalf of human beings. They are now used in a variety of fields, from toys and personal services to professional services and manufacturing. Robots are classified by application field as follows.

- Personal service robots: Provide services to the general public, including housework robots such as cleaning robots, educational robots, recreational robots, personal assistant robots, healthcare robots, social robots, etc.

- Professional service robots: Provide special services required by society, including medical robots, military robots, construction robots, social safety robots, etc.

- Manufacturing robots: Robots applied to industrial production activities, such as manufacturing robots, agricultural robots, fishing robots, etc.

Current social and healthcare robots focus on recognizing the situation around the robot and providing human users with basic services through HRI (human-robot interaction) functions, such as notifications of email, medication doses, and appointment times, photo-taking services, and expert knowledge services such as food recipes retrieved from the internet. In the future, however, robots should be able to treat human beings not only intelligently but also morally, according to their health condition, personality, preferences, and knowledge level. This should be supported by research in the field of robot ethics.

The definition of robot ethics can be divided into three categories [1]:

1. the professional ethics of robotics researchers and engineers

2. the moral code programmed into robots

3. the self-conscious ability of robots to reason ethically

On this website, the second definition, the 'moral code programmed into robots', is studied as robot ethics.

One widely known classic of robot ethics is Isaac Asimov's Three Laws of Robotics, which he introduced in his short story "Runaround," published in 1942 [2].

1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.
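The three laws form a strict priority ordering: a lower-numbered law overrides any conflict with a higher-numbered one. As a purely illustrative sketch (the boolean fields below are hypothetical stand-ins for perception and planning outputs, not part of any real robot API), such an ordering could be checked like this:

```python
# Illustrative sketch: Asimov's Three Laws as a strict priority check.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool           # would this action injure a human?
    inaction_allows_harm: bool  # would NOT intervening let a human come to harm?
    disobeys_order: bool        # does it conflict with an order from a human?
    endangers_self: bool        # does it risk the robot's own existence?

def first_violated_law(a: Action) -> int:
    """Return the number of the first law the action violates, or 0 if none.

    Laws are checked in priority order, so the First Law overrides the
    Second, and the Second overrides the Third.
    """
    if a.harms_human or a.inaction_allows_harm:
        return 1
    if a.disobeys_order:
        return 2
    if a.endangers_self:
        return 3
    return 0

# A harmless, obedient, but self-endangering action violates only the Third Law.
risky = Action(harms_human=False, inaction_allows_harm=False,
               disobeys_order=False, endangers_self=True)
print(first_violated_law(risky))  # 3
```

Note that this sketch sidesteps the hard part, which is computing the predicates themselves; deciding whether an action "through inaction allows a human to come to harm" is exactly the kind of judgment an artificial moral agent must learn to make.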

In this regard, the International Robot Fair held in Fukuoka, Japan issued the following World Robot Declaration in 2004 [3].

1. Next-generation robots will be partners that coexist with human beings.

2. Next-generation robots will assist human beings both physically and psychologically.

3. Next-generation robots will contribute to the realisation of a safe and peaceful society.

In 2010, the EPSRC and AHRC Robotics Retreat announced the Principles of Robotics [4].

1. Robots are multi-use tools. Robots should not be designed solely or primarily to kill or harm humans, except in the interests of national security.

2. Humans, not robots, are responsible agents. Robots should be designed and operated as far as is practicable to comply with existing laws and fundamental rights and freedoms, including privacy.

3. Robots are products. They should be designed using processes which assure their safety and security.

4. Robots are manufactured artefacts. They should not be designed in a deceptive way to exploit vulnerable users; instead their machine nature should be transparent.

5. The person with legal responsibility for a robot should be attributed.

Among general ethical theories, deontology, utilitarianism, virtue ethics, and responsibility ethics can be applied to robot ethics.

Deontology: Kant held that the starting point of moral action is the good will, and that acting from good will follows a universal law and is a moral obligation of human beings. This moral order of the good will is called the categorical imperative, described with three formulations.

- First Formulation: "Act only according to that maxim whereby you can at the same time will that it should become a universal law."

- Second Formulation: "Act in such a way that you treat humanity, whether in your own person or in the person of any other, never merely as a means to an end, but always at the same time as an end."

- Third Formulation: "Thus the third practical principle follows (from the first two) as the ultimate condition of their harmony with practical reason: the idea of the will of every rational being as a universally legislating will."

Utilitarianism: Bentham and Mill located the ethical motives of human behavior in the pursuit of personal interests and pleasures, holding that morality aims at the greatest happiness for the greatest number. Happiness in utilitarianism refers to the absence of suffering, and a person who performs the right action maximizes his or her own happiness while at the same time maximizing the happiness of many others.

Mill argued that since happiness ultimately realizes human dignity, criteria must be established for evaluating actions that can maximize qualitative happiness in order to realize that dignity.
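The utilitarian principle above can be read as a decision rule: among candidate actions, choose the one whose aggregate happiness across all affected people is greatest. The sketch below illustrates this under invented numbers; the action names and utility values are hypothetical, and reducing "qualitative happiness" to a single number per person is itself a strong simplification:

```python
# Illustrative sketch of utilitarian action selection: pick the action
# whose per-person utilities (invented for this example) sum highest.

def greatest_happiness(candidates: dict[str, list[float]]) -> str:
    """Return the candidate action with the largest total utility."""
    return max(candidates, key=lambda action: sum(candidates[action]))

# Hypothetical utilities for three people affected by each candidate action
# of a care robot.
options = {
    "remind_medication": [2.0, 1.0, 0.5],   # total 3.5
    "play_music":        [1.5, 1.5, -1.0],  # total 2.0
}
print(greatest_happiness(options))  # remind_medication
```

Mill's point about qualitative happiness is precisely that such a flat summation is insufficient: higher pleasures would need to be weighted differently from lower ones, which is why explicit criteria for the calculation are required.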

Virtue Ethics: According to Aristotle, happiness is a concrete activity realized through virtue formed in the process of life. Following Aristotle's ideas, virtue ethics focuses on the character and virtue of the actor. Therefore, unlike deontology and utilitarianism, it asks how to become a fundamentally good human being (a role model).

In virtue ethics, a moral agent is a person who develops an excellent character through many life experiences, introspection, and learning, puts virtue into practice, and contemplates everything happening around him or her.

Reference

[1] P. Lin, K. Abney, and G. A. Bekey, Robot Ethics: The Ethical and Social Implications of Robotics, MIT Press, 2014.
[2] I. Asimov, "Runaround," Astounding Science Fiction, 1942.
[3] http://robots.net/article/1113.html
[4] https://www.epsrc.ac.uk/research/ourportfolio/themes/engineering/activities/principlesofrobotics/