AMA task introduction: development of an artificial moral agent

For humans and robots to truly communicate with each other, artificial intelligence should be developed with moral factors incorporated.

AMA source code: download the AMA version 1.0 meta package for an OP.

A cognitive agent is software that actively perceives and interacts with a dynamic environment in order to achieve a specific goal assigned by a human user. Architectures such as Soar, ACT-R, LIDA, Clarion, EPIC, and Icarus have been proposed for building cognitive agents that aim at human-level thinking, memory, learning, and behavior.

Soar is a cognitive agent architecture that Professor John Laird of the University of Michigan has developed for over 30 years, beginning in 1981 with Allen Newell, a first-generation AI pioneer. Version 9.6 is now available as open source on a dedicated website, along with manuals and various examples.

Soar is an abbreviation of state, operator, and result: the state is the current state of the system, an operator is a means of changing that state, and the result is what the combination of a state and an operator produces.

Soar's source code is written in C, C++, and Java, and its algorithms and data structures are independent of any particular task. Soar solves problems by organizing the necessary actions in a problem space, based on a production-rule system that controls each behavior (behavior = architecture + knowledge). It can therefore manage and coordinate functions such as perception, reasoning, planning, language processing, and learning in real time.
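
The production-rule idea can be sketched as a condition/action pair applied to a state. This is a simplified model in Python, not Soar's own rule syntax; the state attributes are invented for illustration:

```python
# A production rule as a condition/action pair over a toy state
# (a plain dict); all attribute names here are invented.

def match(state):
    """Condition side: fire only when the block is clear and on the table."""
    return state.get("clear") and state.get("on") == "table"

def apply_rule(state):
    """Action side: pick the block up, producing an updated state."""
    state = dict(state)
    state["on"] = "gripper"
    state["clear"] = False
    return state

state = {"clear": True, "on": "table"}
if match(state):
    state = apply_rule(state)
print(state)  # the rule fired and the state changed
```

Real Soar rules are written in Soar's own production language and matched by the Rete algorithm; the point here is only the condition/action split.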

* Soar's short-term memory consists of working memory elements (WMEs) and the decision procedure. A WME consists of an identifier, an attribute, and a value.
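
As a toy illustration (not Soar's actual API), working memory can be pictured as a set of (identifier, attribute, value) triples; the identifiers and attributes below are invented:

```python
# Toy model of Soar working memory as (identifier, attribute, value)
# triples; all names here are invented for illustration.
wm = [
    ("S1", "name", "hello-soar"),
    ("S1", "io", "I1"),
    ("I1", "output-link", "O1"),
]

def attributes_of(wm, identifier):
    """Collect the attribute/value pairs attached to one identifier."""
    return [(a, v) for (i, a, v) in wm if i == identifier]

print(attributes_of(wm, "S1"))  # [('name', 'hello-soar'), ('io', 'I1')]
```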

* Soar's long-term memory includes procedural memory, which stores rules; semantic memory, which stores general facts and meanings; and episodic memory, which stores snapshots of working memory.
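
These three stores can be pictured with simple containers (purely illustrative; the contents are invented):

```python
# Illustrative stand-ins for Soar's three long-term stores.
import copy

procedural = {"pickup": "if a block is clear, propose picking it up"}  # rules
semantic = {"block": "a stackable rigid object"}                       # facts
episodic = []                                                          # snapshots

working_memory = {"S1": {"on": "table", "clear": True}}
episodic.append(copy.deepcopy(working_memory))  # episodic snapshot

working_memory["S1"]["on"] = "gripper"  # working memory moves on...
print(episodic[0]["S1"]["on"])          # ...but the snapshot still says "table"
```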

* Soar's learning mechanisms include chunking and reinforcement learning (RL). Chunking creates new production rules through learning, while reinforcement learning adjusts the preferences used for operator selection through reward and punishment.
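
The RL idea can be illustrated with numeric preferences nudged toward received reward. This is a simplified Q-learning-style update with invented operator names, not Soar's actual RL rules:

```python
# Toy RL over operator preferences: each update moves the operator's
# numeric preference a fraction (alpha) of the way toward the reward.
prefs = {"move-left": 0.0, "move-right": 0.0}
alpha = 0.5  # learning rate

def update(op, reward):
    prefs[op] += alpha * (reward - prefs[op])

update("move-right", 1.0)   # rewarded
update("move-left", -1.0)   # punished
print(prefs)  # move-right is now preferred over move-left
```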

The following figure shows Soar's decision cycle, which divides broadly into operator selection and operator application.

Operator selection comprises the stages of state elaboration, operator proposal, and operator evaluation:

- State elaboration: abstracting combinations of existing WMEs and expressing them as new augmentations of the state. (A set of WMEs sharing the same first identifier is called an object, and each WME constituting that object is called an augmentation.)

- Operator proposal: generating the additional WMEs needed to declare an operator's name and parameters.

- Operator evaluation: when candidate operators have been proposed, preferences are generated by comparing their priorities. Preferences include acceptable, reject, better/worse, best, worst, and indifferent. If the preferences are insufficient or conflicting, Soar raises an impasse and creates a substate; the impasse is resolved by preferences produced while processing that substate.
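
A minimal sketch of preference-based selection, assuming only the reject and best preferences and invented operator names (real Soar handles the full preference vocabulary and builds a substate on an impasse):

```python
# Simplified operator evaluation: symbolic preferences narrow the set of
# acceptable candidates; an unresolved tie (or no survivor) is an impasse.

def select(candidates, prefs):
    """candidates: acceptable operators; prefs maps op -> 'best'|'reject'."""
    pool = [op for op in candidates if prefs.get(op) != "reject"]
    best = [op for op in pool if prefs.get(op) == "best"]
    if best:
        pool = best
    if len(pool) == 1:
        return pool[0]
    return "impasse"  # real Soar would create a substate here

print(select(["a", "b", "c"], {"b": "reject", "c": "best"}))  # c
print(select(["a", "b"], {}))  # impasse: preferences are insufficient
```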

Operator application breaks down into operator elaboration and operator application proper:

- Operator elaboration: testing whether an operator has been selected, creating additional structure on the operator, binding related parameters, and preparing to apply the operator.

- Operator application: the step that makes persistent changes to the state and triggers internal and external actions.
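
Putting the phases together, the decision cycle can be sketched as a propose-select-apply loop. This toy version counts up to a goal with a single invented operator; real Soar drives every phase from production rules:

```python
# Toy decision cycle: propose an operator, select it (only one candidate,
# so no evaluation is needed), then apply it to change the state.

def decision_cycle(state, goal):
    while state["count"] < goal:
        # operator proposal: declare the operator's name and parameters as data
        op = {"name": "increment", "amount": 1}
        # operator selection: a single candidate is selected outright
        selected = op
        # operator application: persistent change to the state
        state["count"] += selected["amount"]
    return state

print(decision_cycle({"count": 0}, 3))  # {'count': 3}
```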

Soar has been successfully applied to fields such as reasoning tasks, algorithm design, medical diagnosis, natural language processing, and the control of computer-game characters. Recently, it has also been applied to intelligent robot service technology.