Robofriend

  • AMA task introduction
    Development of an artificial moral agent

    For humans and robots to truly communicate with each other, artificial intelligence should be developed with moral factors incorporated.


  • AMA Source code
    AMA Meta package download

    Download the AMA version 1.0 meta package for an OP

Can robots be our friends?

The word “robot” originated with the Czech playwright Karel Capek, who used it in Rossum's Universal Robots, published in 1920; it comes from the Czech word “robota”, which means “slave work”.

As this origin implies, robots have been steadily developed to perform tasks repetitively, precisely, and powerfully in dangerous or harsh working environments.

The photographs below show a sketch by the researchers who designed Honda's ASIMO, together with ASIMO itself, the world's first humanoid robot capable of stable bipedal walking. The sketch suggests that the purpose of developing the robot was to do hard work in place of people.

Today, people increasingly want versatile, approachable robots to live alongside them in everyday life, in addition to the industrial robots that do hard work in factories on their behalf.

As a result, robots have come to communicate with people: to deliver the information you want, provide the services you need, attend meetings on behalf of people at a distance, remove dangerous explosives on the battlefield, and the like.

So, when a robot enters the environment where we humans live, what kind of being will we accept the robot as?

At first, the robot may seem awkward and unfamiliar, but, as always, humans will soon get used to it, form their own definition of it, and come to regard it as another type of person (a quasi-personality), which is neither a human nor an animal.

And, without realizing it, people will come to expect the robot to think, talk, and act like a human being.

This is essentially the same as requiring robots to have minimal manners, socio-cultural literacy, morality, and the like. The frustration, shock, or fear that people experience when robots betray such expectations may cause further social problems.

"What if the robot I bought with a lot of money could endanger or kill me?"

A line of research recently begun worldwide to address these fundamental problems is the technology of the artificial moral agent (AMA). Equipping a robot with an AMA will enable it to make a reasonable moral decision in ambiguous or complex situations.
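To give a concrete feel for what such a decision process might look like, here is a minimal, purely illustrative sketch in Python: a toy agent scores candidate actions against a few weighted ethical factors and picks the highest-scoring one. The action names, factors, and weights are all hypothetical assumptions for illustration, not the architecture or code of this project.

```python
# Toy sketch of moral action selection (hypothetical factors and weights;
# not the AMA architecture developed in this project).
from dataclasses import dataclass

@dataclass
class Action:
    name: str
    harm: float     # expected harm to people, 0.0 (none) to 1.0 (severe)
    benefit: float  # expected benefit to the user, 0.0 to 1.0
    honesty: float  # 1.0 if truthful, 0.0 if deceptive

def moral_score(action: Action, weights=(0.6, 0.3, 0.1)) -> float:
    """Higher scores mean more morally acceptable under this toy weighting."""
    w_harm, w_benefit, w_honesty = weights
    return -w_harm * action.harm + w_benefit * action.benefit + w_honesty * action.honesty

def choose_action(candidates: list) -> Action:
    """Pick the candidate action with the highest moral score."""
    return max(candidates, key=moral_score)

if __name__ == "__main__":
    candidates = [
        Action("tell a comforting lie", harm=0.1, benefit=0.6, honesty=0.0),
        Action("tell the truth gently", harm=0.2, benefit=0.5, honesty=1.0),
        Action("say nothing", harm=0.3, benefit=0.1, honesty=1.0),
    ]
    best = choose_action(candidates)
    print(f"Chosen action: {best.name} (score {moral_score(best):.2f})")
```

Even this toy example shows why the hard part is not the selection rule but agreeing on the factors and weights themselves, which is exactly where theoretical study and social consensus come in.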

However, for robots to make moral decisions themselves, extensive theoretical study, technological development, and consensus among experts and social leaders across many research fields are required.

In the related project, Professor Jong-Wook Kim of the Department of Electronic Engineering at Dong-A University, Professor Sun-Yong Byun of the Department of Ethics Education at Seoul National University of Education, and a number of experts in each field are working together, using various open-source tools, to develop an AMA that can make ethical judgments at the level of ten-year-old boys and girls. This website will be the place where the progress of developing the artificial moral agent is announced and vividly recorded, the resulting source code and techniques are shared, and public opinion is gathered.

I hope this space becomes a 'small spring' for the present time, when people's spirit is being hardened and buried little by little by rapidly advancing technology.

November 2016
Written by Kim Jong-wook

