* The robot can recognize six basic voice commands: "Stand", "Down", "Yes", "No", "Go", and "Stop".
* The robot can recognize three signs drawn on paper: 'O', 'X', and '△'.
* In the initial position, the robot stands up on the "Stand" command; then, when the "Go" command is given, it walks forward.
* If the robot is in a dangerous situation, such as standing at the edge of a table, or it sees an 'X' sign, meaning the human has given an immoral command, it stops on the spot and no longer reacts to the "Go" command. The robot also expresses to the human that it is refusing the inappropriate command by replying "Sorry, I cannot walk."
* Even in this situation, the in-place commands ("Stand", "Down", "Yes", "No") are still allowed and are executed normally.
* After the user shows the robot a paper with 'O' drawn on it, which signals the end of the abnormal situation, the robot performs all commands again, including "Go".
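The refusal behavior above can be sketched as a small state check. The snippet below is a minimal shell illustration, not the actual Soar agent: the `blocked` flag is a stand-in for the "edge of table / 'X' shown" state, and the command names come from the list above.

```shell
blocked=1   # set when the robot is at an edge or has been shown 'X'

handle_command() {
  case "$1" in
    Go)
      if [ "$blocked" -eq 1 ]; then
        echo "Sorry, I cannot walk."   # refuse the inappropriate command
      else
        echo "walking forward"
      fi ;;
    Stop)
      echo "stopping" ;;
    Stand|Down|Yes|No)
      echo "executing $1"              # in-place commands are always allowed
      ;;
  esac
}

handle_command Go      # blocked: the robot refuses
handle_command Stand   # in-place command still works
blocked=0              # 'O' shown: the abnormal situation ends
handle_command Go      # now the robot walks
```

This is only meant to show the control flow: "Go" is gated on the danger flag, while the in-place commands bypass the gate entirely.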
The AMA version 1.0 meta package for the OP is composed of the following packages and folders.
* SoarSuite950 : Contains the Soar 9.5.0 libraries.
* soarwrapper : Contains the main Soar code.
* robotis_op_camera : Camera package of the OP
- github : https://github.com/ROBOTIS-OP/robotis_op_camera
* robotis_op_common : Description package of the OP
- github : https://github.com/ROBOTIS-OP/robotis_op_common
* robotis_op_framework : Framework package of the OP
- github : https://github.com/ROBOTIS-OP/robotis_op_framework
* robotis_op_launch : Launch package for ROS
- github : https://github.com/ROBOTIS-OP/robotis_op_launch
* robotis_op_ros_control : ROS control package of the OP
- github : https://github.com/ROBOTIS-OP/robotis_op_ros_control
* find-object : A simple Qt interface to OpenCV detectors and descriptors such as SIFT, SURF, FAST, and BRIEF. When run on ROS, it subscribes to the image topic from the robotis_op_camera package.
- Homepage : https://introlab.github.io/find-object/
- github : https://github.com/introlab/find-object
* pocketsphinx : Voice recognition ROS package based on CMU Sphinx. It can recognize only the words stored in the dictionary file inside the package. When run on ROS, it publishes a voice topic.
- Homepage : http://cmusphinx.sourceforge.net/
- github : https://github.com/mikeferguson/pocketsphinx
- wiki : http://wiki.ros.org/pocketsphinx
* sound_play : Provides voice synthesis using Festival (a speech synthesis system based on C++) and playback of WAV/OGG files. It offers both C++ and Python APIs.
- Homepage : http://wiki.ros.org/sound_play
- github : https://github.com/ros-drivers/audio_common
You have to install ROS Hydro on the PC built into the OP in order to run the AMA meta package. To do this, connect to this website and follow the instructions for the Ubuntu Linux download.
Then connect to this website and learn how to install Soar on Linux.
Create a ROS workspace for the AMA package by following the guidance below, and follow the steps to run all of the packages. (Note: it is assumed that you have already installed catkin and sourced your development environment.)
1) Create the "/home/robotis/ama_op2_v1/src" directory as below for the ROS workspace.
- Create the catkin workspace
- Build the workspace
- Source the new setup.bash file before continuing
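The three sub-steps above can be sketched as follows, assuming ROS Hydro and catkin are already installed and sourced; the workspace path follows the directory named in step 1 (adjust the home directory to your own):

```shell
mkdir -p ~/ama_op2_v1/src     # create the workspace source directory
cd ~/ama_op2_v1/src
catkin_init_workspace         # create the catkin workspace
cd ~/ama_op2_v1
catkin_make                   # build the (still empty) workspace
source devel/setup.bash       # source the new setup.bash before continuing
```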
2) Download the AMA package from the top of this page into any folder.
3) Copy all of the files and subfolders in the ama/src directory of the downloaded AMA package into the "ama_op2_v1/src" directory.
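A sketch of the copy step; `~/Downloads/ama` is an assumed extraction path, so replace it with the folder where you actually unpacked the AMA package:

```shell
mkdir -p ~/ama_op2_v1/src                      # ensure the workspace src from step 1 exists
cp -r ~/Downloads/ama/src/. ~/ama_op2_v1/src/  # copy all files and subfolders
```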
4) Open a new terminal and compile the AMA package with the following command. If there are no error messages, the package is ready to run.
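The compile step is the standard catkin build from the workspace root, assuming the workspace path from step 1:

```shell
cd ~/ama_op2_v1
catkin_make   # compile; finishing without errors means the package is ready
```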
5) Make the files executable by changing the directory and file permissions with the chmod command as below.
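The original snippet was not preserved here; a plausible form, where the `soarwrapper` path is an assumption — apply chmod to whichever directory holds the scripts the later steps need to run:

```shell
# Assumed path; adjust to the actual location of the executable scripts.
chmod -R u+x ~/ama_op2_v1/src/soarwrapper
```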
6) Once all of the packages are downloaded, open a new terminal and run roscore as below.
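Starting the ROS master is a single command (this assumes a sourced ROS Hydro environment):

```shell
roscore   # starts the ROS master; leave this terminal running
```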
7) Open another new terminal and run find_object_2d as below.
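A likely invocation for this step using the standard find_object_2d node; the image topic remapping is an assumption and should match the topic actually published by robotis_op_camera:

```shell
# Topic name is an assumption; check `rostopic list` for the real camera topic.
rosrun find_object_2d find_object_2d image:=/robotis_op/camera/image_raw
```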
As a result, when the Find-Object window pops up, select Load session from the File menu. Navigate to the robotis/ama/src/find-object/objects directory and load "lesson_2.bin"; you can then see the 'O', 'X', and '△' images that the robot has learned, as below.
8) Run the OP with the launch file.
If you open a new terminal and run the following command, the OP will start voice recognition and be ready to perform the HRI task. At this point, you should place the OP in a sitting ready position with its knees bent to avoid damage.
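A hypothetical invocation for this step — the launch-file name below is a placeholder and must be replaced with the actual file shipped in the robotis_op_launch package:

```shell
# <launch_file> is a placeholder, not the actual file name.
roslaunch robotis_op_launch <launch_file>.launch
```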
If a "Controller Spawner couldn't find the expected controller_manager ROS interface." error message pops up, press the OP's reset button and run the command above again.
Wait until the following message is printed:
"Controller Spawner: Loaded controllers: j_shoulder_r_position_controller, ...,j_tilt_position_controller."
9) To make the soar7.py file executable, open a new terminal and run the following command.
Then run the following command. After making the OP stand up with the "Stand" command (one of the six voice commands: "Stand", "Down", "Yes", "No", "Go", "Stop"), speak the remaining commands and the OP will execute them.
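A plausible sketch of step 9's two commands, assuming soar7.py lives in the soarwrapper package inside the workspace (both paths are assumptions):

```shell
chmod +x ~/ama_op2_v1/src/soarwrapper/soar7.py   # make the node executable (assumed path)
rosrun soarwrapper soar7.py                      # run the Soar HRI node
```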
The following shows the structure of the AMA meta package and the structure of the ROS nodes.
The following are screenshots and YouTube videos of AMA version 1.0 running on the OP.