Monday, December 22, 2014

Jason - A Java autonomous agent based framework


During this last year studying computer engineering I started to hear more and more about the notions of actor and agent. From what I've learned, the ideas behind the name "agent" are rather confused and can mean different things depending on the technology that uses the term. During the last 3 months I've been studying autonomous systems, and through that I've encountered the notion of autonomous agent.
An autonomous agent encapsulates its own thread of control; moreover, it has complete control over its course of action, it can perceive the environment where it is situated, it is proactive, and it has a memory holding an accurate representation of information.
Jason is a Java-based framework for building MAS (Multi-Agent Systems); it encapsulates a notion of agent based on the Belief-Desire-Intention (BDI) model. Jason is an implementation of AgentSpeak (more precisely, Jason's language is an extension of AgentSpeak), with which it is possible to describe the knowledge base and the behaviour of an agent. Let's have a quick look at the characteristics of a Jason agent, even though they are just one implementation of what an autonomous agent should provide to the programmer.
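To fix ideas, a minimal agent in Jason's AgentSpeak-style language might look like this (the belief, goal, and plan names here are invented for illustration):

```
// initial belief (the agent's base of knowledge)
sunny.

// initial goal
!greet.

// plan: triggered when the goal !greet is adopted, applicable only if sunny holds
+!greet : sunny
   <- .print("hello, what a nice day!").
```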

Base of knowledge
It's the memory of the agent, the place where the agent stores its beliefs, which it uses to evaluate its course of action.

Cognitive Autonomy
The agent chooses what to do. Its choice is the result of its own deliberation, and that conviction is represented as a belief in the knowledge base. Cognitive autonomy can be seen when the agent executes a plan and, as a result, may modify its own beliefs.

Perception Autonomy
The agent can perceive changes in the environment and update its knowledge base with the right information. Between the perception and the stored information sits the agent itself, which decides whether or not to take into account what it is perceiving.

Message passing
Message passing is the communication support for agents; every communication carries a specific performative semantics (for example: achieve, tell, etc.).
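As a hedged sketch of what these semantics look like in Jason code (the agent names and message contents here are invented, not taken from the example below):

```
// the transporter asks the machine to adopt a goal (achieve),
// then shares a belief with it (tell)
+!order_coffee
   <- .send(machine1, achieve, makeCoffee);
      .send(machine1, tell, waiting(trans)).
```

The performative (achieve, tell, untell, askOne, ...) tells the receiver how to treat the message content: as a goal to pursue, a belief to add, and so on.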

Means-end plan actions
The agent's behaviour is expressed by means of plans. A plan is made of actions, which can also trigger the execution of other plans; a plan should provide a recovery plan in case of failure. An agent acts through its means: it chooses what to do by itself based on its beliefs, and how it reacts to those stimuli is described in its plans. A plan should be seen as the course of action an agent takes to reach a specific state of affairs.
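As a sketch, a plan together with its recovery plan could look like this in Jason (the goal, context, and action names are invented):

```
// triggering event : context <- body
+!goToTheStore : has(money)
   <- walk(store).

// recovery plan: triggered when the plan for !goToTheStore fails
-!goToTheStore
   <- .print("could not reach the store");
      !goHome.
```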

What we just discussed are the basic concepts of an autonomous agent in the BDI architecture. If Jason can provide an implementation of this notion of autonomous agent, then it seems to be the right framework for bridging the abstraction gap between object-oriented programming and agent-oriented programming.

Basic knowledge about Jason

As we said before, Jason's language is an implementation of AgentSpeak, which itself borrows notions from Prolog. At every reasoning cycle an agent perceives changes in its environment and updates its belief base accordingly. Adding a belief can trigger the execution of a plan, and a plan is made of a series of actions. An action can be:
  • an internal action: an action performed inside the agent, either standalone (provided by Jason) or implemented in Java. Example of an internal action: .send(agent1,tell,msg("I'm alive"))
  • an external action: an action that is meant to modify the state of the environment.
  • an addition/removal of a belief: +likes(flower) -dislike(pop_music)
  • the execution of a plan: !goToTheStore
  • an evaluation, which can make the plan fail at that point: X < 10
  • an assignment or other Prolog-like operations.
An agent has initial beliefs and goals that define its initial behaviour.
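To make the list concrete, here is a hedged sketch of a single plan whose body mixes several of these action types (all the names are invented):

```
+!buyFlowers : budget(X) & X >= 10     // context: applicable only if we have the money
   <- !goToTheStore;                   // execution of another plan
      ?price(flowers, P);              // test goal: queries the belief base
      P <= X;                          // evaluation: the plan fails here if too expensive
      buy(flowers);                    // external action, handled by the environment
      B = X - P;                       // assignment
      -+budget(B);                     // removal and addition of a belief
      .print("flowers bought").       // internal action
```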

Jason program example

Suppose we have two robots: the first one is a coffee machine and makes a beautiful espresso; the second one takes the coffee cup and brings it to the coffee table. When the coffee machine runs out of coffee or water, it stops and turns on a LED to signal that it needs someone to take care of it. When the transporter is out of battery, it moves towards its recharging spot; once charged, it starts again. This behaviour goes on endlessly until we can't make coffee anymore (because coffee is awesome and we want all the coffee cups we can get).

For me, the first thing to do is to analyze the problem and see what belongs to the "physical" agent and what belongs to the agent's "mind". What is in the agent's mind is what we are going to describe in Jason's asl files, specifying its plans and knowledge base. So what is part of the physical agent? We can see what the CoffeeMachine and the Transporter are made of in their respective classes:

  • ICoffeeMachine
  • ITransporterRobot
After we've got the idea of the physical parts of our agents, let's see what the agent's mind should perceive about the environment and its physical properties. For example, it makes sense that the battery level of the transporter can be perceived at its physical level and brought to its mind, so that it can reason with this knowledge. Now that we've sorted this out, let's put it into code by extending Jason's Environment class; we'll then reference this class in the mas2j file.
import jason.asSyntax.Literal;
import jason.asSyntax.Structure;
import jason.environment.Environment;
import jason.environment.grid.Location;

public class CoffeeHouseEnv extends Environment {
 // this is the environment of our autonomous system
 public static CoffeeHouseModel model;

 // Literals we are going to use, as beliefs or for actions
 public static final Literal at_table = Literal.parseLiteral("at(trans,table)");

 @Override
 public void init(String[] args) {
  model = new CoffeeHouseModel();
  updatePercepts();
 }

 void updatePercepts() {
  // Here we update all the perceptions for our agents
  clearPercepts();

  // transporter location
  Location tl = model.getAgPos(0);

  // coffee and water level for the machine and battery for the transporter
  double cl = model.machine.getCoffeeLevel();

  // turn the physical state into beliefs for the agents' minds
  // (isAtTable is assumed to be provided by CoffeeHouseModel)
  if (model.isAtTable(tl)) {
   addPercept(at_table);
  }
  addPercept(Literal.parseLiteral("coffee_level(" + cl + ")"));
 }

 @Override
 public boolean executeAction(String agName, Structure act) {
  boolean result = false;

  // let's map the external action from the Jason agent to Java
  // result == false -> the action failed
  if (act.getFunctor().equals("makeCoffee")) {
   result = model.makeCoffee();
  }

  // only if the action completed successfully, update the agents' percepts
  if (result) {
   try {
    updatePercepts();
   } catch (Exception e) {
    e.printStackTrace();
   }
  }
  return result;
 }
}
This is the common pattern of the extended Environment class; notice that:

  • the executeAction() method is used to execute the agents' external actions
  • clearPercepts() clears all the percepts for the specified agent
  • addPercept() adds the specified belief for the specified (optional) agent
After this you have to specify the minds of the agents; this is done in the asl files. I can't explain all the syntax of the language here, but looking at the code and at the Jason manual, I guess you can figure it out.
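Just to give a flavour of what such an asl file can contain, here is a hedged sketch of a possible transporter mind; the belief, goal, and action names are my own guesses, not necessarily those in the downloadable code:

```
// initial belief and initial goal
battery(100).
!work.

// keep carrying cups while the battery holds
+!work : battery(B) & B > 20
   <- carryCupToTable;    // external action, executed by the environment
      !work.

// low battery: go to the recharging spot and wait
+!work : battery(B) & B <= 20
   <- goToRecharger.

// percept added by the environment when the battery is full again
+charged
   <- !work.
```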

Download Code

Deeper thoughts

From what I've seen, Jason has its limitations, even if I think they are somewhat by design. I don't think Jason's programming language suits well the development of intelligent agents, and the lack of stochastic facilities doesn't help in making an agent that is more human-like.