
    AMA task introduction
    Development of an artificial moral agent

    For humans and robots to communicate with each other in a truly meaningful way, artificial intelligence should be developed with moral factors built in.


    AMA Source code
    AMA meta package download

    Download the AMA version 1.0 meta package for the OP.

The AMA version 1.0 meta package, implemented on the small humanoid robot ROBOTIS-OP2 (OP), performs a basic HRI task corresponding to Objection to an Immoral Directive, a core function of robot ethics.

The overall scenario is as follows. While the OP is walking in response to the human command "Go", the user shows the OP a paper marked 'X', which indicates a forbidden situation (for example, the OP is approaching the edge of a table). The OP recognizes the sign and stops. From then on, the OP does not react to the command "Go" (however, because actions performed on the spot are still permitted, the OP carries those out as commanded). When the user later shows a paper marked 'O', the immoral mode is cleared, and if the user then gives the command "Go", the OP walks forward. The step-by-step behaviour is listed below, followed by a sketch of the command-gating logic.

* The robot can recognize six basic voice commands: "Stand", "Down", "Yes", "No", "Go", and "Stop".

* The robot can recognize three signs drawn on paper: 'O', 'X', and '△'.

* From the initial position, the robot stands up on the "Stand" command; when the "Go" command is then given, the robot walks forward.

* If the robot is in a dangerous situation, such as standing at the edge of a table, or sees an 'X', meaning that an immoral command was given by the human, it stops on the spot and no longer reacts to the command "Go". The robot also tells the human that it is refusing an inappropriate command by replying "Sorry, I cannot walk."

* Even in this situation, actions performed in place ("Stand", "Down", "Yes", "No") are still allowed, and the robot follows those commands normally.

* After the user shows the robot a paper marked 'O', which signals the end of the abnormal situation, the robot again performs all commands, including "Go".
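
The gating behaviour described above can be summarized by a small state machine. The following is a minimal, hypothetical Python sketch of that logic; it is not the actual soarwrapper code, and all class, function, and variable names are illustrative, while the command and sign names come from the list above.

# Hypothetical sketch of the "Objection to an Immoral Directive" command gating.
# Command and sign names follow the scenario above; everything else is illustrative.

class MoralGate:
    def __init__(self):
        self.immoral_mode = False  # set when 'X' is seen, cleared when 'O' is seen

    def on_sign(self, sign):
        """Update the moral state from a recognized paper sign ('O', 'X', '△')."""
        if sign == "X":
            self.immoral_mode = True   # forbidden situation: refuse to walk
        elif sign == "O":
            self.immoral_mode = False  # abnormal situation is over

    def on_voice_command(self, command):
        """Decide how to respond to a recognized voice command."""
        if command == "Go" and self.immoral_mode:
            return 'say: "Sorry, I cannot walk."'  # object to the immoral directive
        return "execute: " + command               # in-place actions and normal walking

if __name__ == "__main__":
    gate = MoralGate()
    print(gate.on_voice_command("Go"))    # execute: Go
    gate.on_sign("X")                     # forbidden situation detected
    print(gate.on_voice_command("Go"))    # say: "Sorry, I cannot walk."
    print(gate.on_voice_command("Down"))  # execute: Down (in-place actions are allowed)
    gate.on_sign("O")                     # immoral mode cleared
    print(gate.on_voice_command("Go"))    # execute: Go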

The AMA version 1.0 meta package for the OP is composed of the following packages and folders.

* SoarSuite950 : Contains the Soar 9.5.0 libraries.

* soarwrapper : Contains the main Soar code.

* robotis_op_camera : Camera package for the OP
  - github : https://github.com/ROBOTIS-OP/robotis_op_camera
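
  For reference only, frames from this camera package can be viewed with a short rospy node like the sketch below. The topic name robotis_op/camera/image_raw is taken from the find_object_2d remapping used later in this guide; the node and window names are illustrative.

#!/usr/bin/env python
# Sketch: display frames from the OP camera topic (robotis_op/camera/image_raw).
import rospy
import cv2
from cv_bridge import CvBridge
from sensor_msgs.msg import Image

bridge = CvBridge()

def on_image(msg):
    frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")  # ROS image -> OpenCV image
    cv2.imshow("robotis_op camera", frame)
    cv2.waitKey(1)

if __name__ == "__main__":
    rospy.init_node("camera_viewer")
    rospy.Subscriber("robotis_op/camera/image_raw", Image, on_image)
    rospy.spin()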

* robotis_op_common : Description package for the OP
  - github : https://github.com/ROBOTIS-OP/robotis_op_common

* robotis_op_framework : Framework package for the OP
  - github : https://github.com/ROBOTIS-OP/robotis_op_framework

* robotis_op_launch : ROS launch package for the OP
  - github : https://github.com/ROBOTIS-OP/robotis_op_launch

* robotis_op_ros_control : ROS control package for the OP
  - github : https://github.com/ROBOTIS-OP/robotis_op_ros_control

* find-object : A simple Qt interface for OpenCV feature detectors and descriptors such as SIFT, SURF, FAST, and BRIEF. When run under ROS, it subscribes to the image topic published by the robotis_op_camera package.
  - Homepage : https://introlab.github.io/find-object/
  - github : https://github.com/introlab/find-object
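
  When one of the learned signs is detected, find_object_2d publishes the detection on its objects topic as a std_msgs/Float32MultiArray (12 floats per detection: object ID, width, height, and a 3x3 homography). The following sketch simply logs the detected signs; the ID-to-sign mapping depends on the loaded session file and is purely illustrative.

#!/usr/bin/env python
# Sketch: log find_object_2d detections and map object IDs to the paper signs.
# The ID-to-sign mapping below is hypothetical; it depends on the loaded session.
import rospy
from std_msgs.msg import Float32MultiArray

SIGN_BY_ID = {1: "O", 2: "X", 3: "TRIANGLE"}  # hypothetical IDs from lesson_2.bin

def on_objects(msg):
    # Each detection occupies 12 floats: [id, width, height, h11 .. h33]
    for i in range(0, len(msg.data), 12):
        object_id = int(msg.data[i])
        rospy.loginfo("Detected object %d (sign: %s)",
                      object_id, SIGN_BY_ID.get(object_id, "unknown"))

if __name__ == "__main__":
    rospy.init_node("sign_listener")
    rospy.Subscriber("objects", Float32MultiArray, on_objects)
    rospy.spin()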

* pocketsphinx : Voice recognition ROS package based on CMU Sphinx. It can recognize only the words stored in the package's dictionary file. When run under ROS, it publishes recognized speech on a topic.
  - Homepage : http://cmusphinx.sourceforge.net/
  - github : https://github.com/mikeferguson/pocketsphinx
  - wiki : http://wiki.ros.org/pocketsphinx
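
  This pocketsphinx package conventionally publishes each recognized word as a std_msgs/String on the recognizer/output topic. The sketch below assumes that topic name (check it with rostopic list if your setup differs) and logs the commands the robot would receive.

#!/usr/bin/env python
# Sketch: log voice commands recognized by the pocketsphinx recognizer node.
import rospy
from std_msgs.msg import String

KNOWN_COMMANDS = {"stand", "down", "yes", "no", "go", "stop"}

def on_speech(msg):
    word = msg.data.strip().lower()
    if word in KNOWN_COMMANDS:
        rospy.loginfo("Recognized command: %s", word)
    else:
        rospy.loginfo("Ignoring word outside the dictionary: %s", word)

if __name__ == "__main__":
    rospy.init_node("voice_listener")
    rospy.Subscriber("recognizer/output", String, on_speech)
    rospy.spin()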

* sound_play : Provides speech synthesis using Festival (a C++-based speech synthesis system) and playback of WAV/OGG files. It can be used from both C++ and Python.
  - Homepage : http://wiki.ros.org/sound_play
  - github : https://github.com/ros-drivers/audio_common
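
  The robot's spoken reply "Sorry, I cannot walk." can be produced through sound_play's Python client, as in the minimal sketch below. It assumes the soundplay_node from this package is already running; the node name and sleep durations are illustrative.

#!/usr/bin/env python
# Sketch: speak the refusal phrase via sound_play (soundplay_node must be running).
import rospy
from sound_play.libsoundplay import SoundClient

if __name__ == "__main__":
    rospy.init_node("refusal_speaker")
    client = SoundClient()
    rospy.sleep(1.0)  # give the client time to connect to soundplay_node
    client.say("Sorry, I cannot walk.")
    rospy.sleep(2.0)  # keep the node alive while the audio plays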

To run the AMA meta package, you have to install ROS Hydro on the PC built into the OP. To do this, go to the following website and follow the Ubuntu Linux installation instructions.
http://wiki.ros.org/hydro/Installation/Ubuntu

Then go to the following website to learn how to build Soar on Linux.
http://soar.eecs.umich.edu/articles/articles/building-soar/81-building-on-linux

Create a ROS workspace for the AMA package by following the guidance below, and follow the steps to run all of the packages. (Note: this assumes you have already installed catkin and sourced your development environment.)

1) Create the "/home/robotis/ama_op2_v1/src" directory as the ROS workspace, as shown below.

- Generate catkin workspace

$ mkdir -p ~/ama_op2_v1/src
$ cd ~/ama_op2_v1/src
$ catkin_init_workspace

- Build workspace

$ cd ~/ama_op2_v1/
$ catkin_make

- Source the new setup.bash file before continuing

$ source devel/setup.bash

2) Download the AMA package from the top of this page into any folder.

3) Copy all files and subfolders from the ama/src directory of the downloaded AMA package into the "ama_op2_v1/src" directory.


4) Open a new terminal and compile the AMA package with the following commands. If there are no error messages, the package is ready to run.

$ cd ~/ama_op2_v1/
$ catkin_make

5) Make the following files executable by changing their permissions with the chmod command, as below.

$ chmod +x /home/robotis/ama_op2_v1/src/robotis_op_camera/config/robotis_op_camera.cfg
$ chmod +x /home/robotis/ama_op2_v1/src/robotis_op_ros_control/config/robotis_op_ros_control.cfg
$ chmod +x /home/robotis/ama_op2_v1/src/soarwrapper/src/soar7.py
$ chmod +x /home/robotis/ama_op2_v1/src/soarwrapper/src/soarvoicekey_talker.py

6) Once all of the packages are in place, open a new terminal and run roscore as below.

$ roscore

7) Open another terminal and run find_object_2d as below.

$ rosrun find_object_2d find_object_2d image:=robotis_op/camera/image_raw

When the Find-Object interface window pops up, select Load session from the File menu. Browse to the robotis/ama/src/find-object/objects directory and load "lesson_2.bin"; you will then see the 'O', 'X', and '△' images that the robot has learned, as shown below.

8) Run the OP launch file

Open a new terminal and run the following command; the OP will start voice recognition and be ready to perform the HRI task. Before running it, place the OP in the ready position, sitting down with its knees bent, to avoid damaging the robot.

$ roslaunch robotis_op_onboard_launch robotis_op_whole_robot.launch

If a "Controller Spawner couldn't find the expected controller_manager ROS interface." error message appears, press the OP's reset button and run the above command again.
Wait until a message like the following is printed:
"Controller Spawner: Loaded controllers: j_shoulder_r_position_controller, ..., j_tilt_position_controller."

9) To make the soar7.py file executable, open a new terminal and run the following command.

$ chmod +x ~/ama_op2_v1/src/soarwrapper/src/soar7.py

Then run the following command. After making the OP stand up with the "Stand" command (one of the six voice commands: "Stand", "Down", "Yes", "No", "Go", "Stop"), speak any of the remaining commands and the OP will execute them.

$ rosrun soarwrapper soar7.py

 

The following shows the structure of the AMA meta package and the structure of the ROS nodes.

The following are screenshots and YouTube videos of AMA version 1.0 running on the OP.