Belief–desire–intention software model

The belief–desire–intention software model (usually referred to simply, but ambiguously, as BDI) is a software model developed for programming intelligent agents. Superficially characterized by the implementation of an agent's beliefs, desires and intentions, it actually uses these concepts to solve a particular problem in agent programming. In essence, it provides a mechanism for separating the activity of selecting a plan (from a plan library or an external planner application) from the execution of currently active plans. Consequently, BDI agents are able to balance the time spent on deliberating about plans (choosing what to do) and executing those plans (doing it). A third activity, creating the plans in the first place (planning), is not within the scope of the model, and is left to the system designer and programmer.

Overview

In order to achieve this separation, the BDI software model implements the principal aspects of Michael Bratman's theory of human practical reasoning (also referred to as Belief-Desire-Intention, or BDI). That is to say, it implements the notions of belief, desire and (in particular) intention, in a manner inspired by Bratman. For Bratman, intention and desire are both pro-attitudes (mental attitudes concerned with action), but intention is distinguished as a conduct-controlling pro-attitude. He identifies commitment as the distinguishing factor between desire and intention, noting that it leads to (1) temporal persistence in plans and (2) further plans being made on the basis of those to which it is already committed. The BDI software model partially addresses these issues. Temporal persistence, in the sense of explicit reference to time, is not explored. The hierarchical nature of plans is more easily implemented: a plan consists of a number of steps, some of which may invoke other plans. The hierarchical definition of plans itself implies a kind of temporal persistence, since the overarching plan remains in effect while subsidiary plans are being executed.
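The hierarchical structure of plans described above can be illustrated with a minimal sketch, in which a plan is a sequence of steps and a step is either a primitive action or the invocation of a subplan. The class and step names here are illustrative, not drawn from any particular BDI platform:

```python
# Minimal sketch of hierarchical plans: a plan is a sequence of steps,
# and each step is either a primitive action or an invocation of a subplan.
# All names here are illustrative, not taken from any specific BDI system.
from dataclasses import dataclass, field
from typing import Callable, List, Union


@dataclass
class Plan:
    name: str
    steps: List[Union[Callable[[], None], "Plan"]] = field(default_factory=list)

    def execute(self) -> None:
        # The enclosing plan remains "in effect" while subsidiary plans run:
        # executing a subplan is simply another step of the parent plan.
        for step in self.steps:
            if isinstance(step, Plan):
                step.execute()   # invoke a subplan
            else:
                step()           # perform a primitive action


log: List[str] = []
boil = Plan("boil water", [lambda: log.append("boil")])
brew = Plan("make tea", [boil, lambda: log.append("steep")])
brew.execute()
# log is now ["boil", "steep"]: the parent plan persisted across the subplan
```

This is the sense in which the hierarchy implies temporal persistence: the parent plan's execution frame outlives each subsidiary plan it invokes.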

An important aspect of the BDI software model (in terms of its research relevance) is the existence of logical models through which it is possible to define and reason about BDI agents. Research in this area has led, for example, to the axiomatization of some BDI implementations, as well as to formal logical descriptions such as Anand Rao and Michael Georgeff's BDICTL. The latter combines a multiple-modal logic (with modalities representing beliefs, desires and intentions) with the temporal logic CTL*. More recently, Michael Wooldridge has extended BDICTL to define LORA (the Logic Of Rational Agents), by incorporating an action logic. In principle, LORA allows reasoning not only about individual agents, but also about communication and other interaction in a multi-agent system.
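As a schematic illustration of the kind of statement such logics admit, a BDICTL-style language combines modal operators for the three attitudes with CTL* path quantifiers. The following formula is a hedged example only (the exact operator names and axioms vary across presentations), read as "if the agent intends that φ inevitably eventually holds, it believes that φ can eventually hold on some path":

```latex
% Illustrative BDICTL-style formula; operator names vary by presentation.
\mathrm{INTEND}\bigl(\mathsf{A}\,\Diamond\,\varphi\bigr)
\;\rightarrow\;
\mathrm{BEL}\bigl(\mathsf{E}\,\Diamond\,\varphi\bigr)
```

Here A and E are the CTL* "on all paths" and "on some path" quantifiers, and the diamond is the temporal "eventually" operator.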

The BDI software model is closely associated with intelligent agents, but does not, of itself, ensure all the characteristics associated with such agents. For example, it allows agents to have private beliefs, but does not force them to be private. It also has nothing to say about agent communication. Ultimately, the BDI software model is an attempt to solve a problem that has more to do with plans and planning (the choice and execution thereof) than it has to do with the programming of intelligent agents.

BDI agents

A BDI agent is a particular type of bounded rational software agent, imbued with particular mental attitudes, viz. beliefs, desires and intentions (BDI).

Architecture

This section defines the idealized architectural components of a BDI system.

BDI has also been extended with an obligations component, giving rise to the BOID agent architecture,[1] which incorporates the obligations, norms and commitments of agents acting within a social environment.
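One way to picture the BOID idea is a fixed override order among the attitude components when they conflict. The ordering below (beliefs override obligations, which override intentions, which override desires) is a simplified sketch of one of BOID's agent types; in BOID the relative order of the last three components is what distinguishes agent types, and the function names here are illustrative:

```python
# Sketch of BOID-style conflict resolution by attitude priority.
# In BOID, beliefs override the other components; the relative ordering of
# obligations, intentions and desires characterizes the agent type.
# The ordering and all names below are an illustrative simplification.
from typing import Dict, List, Optional

PRIORITY: List[str] = ["belief", "obligation", "intention", "desire"]


def resolve(candidates: Dict[str, str]) -> Optional[str]:
    """Return the proposal of the highest-priority component that has one."""
    for component in PRIORITY:
        if component in candidates:
            return candidates[component]
    return None


# A desire to stay home conflicts with an obligation to attend a meeting:
choice = resolve({"desire": "stay home", "obligation": "attend meeting"})
# choice is "attend meeting": obligations override desires in this ordering
```

A "selfish" agent type would simply place `"desire"` before `"obligation"` in the priority list.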

BDI interpreter

This section defines an idealized BDI interpreter that provides the basis for SRI's PRS lineage of BDI systems:[2]

  1. initialize-state
  2. repeat
    1. options := option-generator(event-queue)
    2. selected-options := deliberate(options)
    3. update-intentions(selected-options)
    4. execute()
    5. get-new-external-events()
    6. drop-unsuccessful-attitudes()
    7. drop-impossible-attitudes()
  3. end repeat
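The loop above can be sketched in executable form. The data structures and helper behaviours below are deliberate placeholders (real PRS-lineage systems are far richer): options are plain strings, deliberation adopts every option, and execution pops one intention per cycle. The sketch is intended only to show the control flow of the interpreter cycle:

```python
# Executable sketch of the idealized BDI interpreter loop above.
# All structures are simplified placeholders meant to show control flow only.
from collections import deque
from typing import Deque, List


class BDIInterpreter:
    def __init__(self, events: List[str]) -> None:
        # 1. initialize-state
        self.event_queue: Deque[str] = deque(events)
        self.intentions: List[str] = []
        self.executed: List[str] = []

    def option_generator(self) -> List[str]:
        # Map each pending event to a candidate plan (placeholder logic).
        return [f"handle({e})" for e in self.event_queue]

    def deliberate(self, options: List[str]) -> List[str]:
        return options  # placeholder: adopt every generated option

    def step(self) -> None:
        options = self.option_generator()        # 2.1 option-generator
        selected = self.deliberate(options)      # 2.2 deliberate
        self.intentions.extend(selected)         # 2.3 update-intentions
        self.event_queue.clear()
        if self.intentions:                      # 2.4 execute
            self.executed.append(self.intentions.pop(0))
        # 2.5 get-new-external-events: none arrive in this sketch
        # 2.6 / 2.7 drop unsuccessful / impossible attitudes: no-ops here

    def run(self, cycles: int) -> None:
        for _ in range(cycles):                  # 2. repeat
            self.step()


agent = BDIInterpreter(["door_open"])
agent.run(2)
# agent.executed is now ["handle(door_open)"]
```

Each call to `step` corresponds to one pass through the numbered cycle; balancing deliberation against execution amounts to deciding how much work `deliberate` may do before `execute` runs.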

This basic algorithm has been extended in many ways, for instance to support planning ahead,[3][4] automated teamwork,[5] and maintenance goals.[6]

Limitations and criticisms

The BDI software model is one example of a reasoning architecture for a single rational agent, and it addresses only one concern within a broader multi-agent system. This section bounds the scope of concerns for the BDI software model, highlighting known limitations of the architecture.

BDI agent implementations

'Pure' BDI

Extensions and hybrid systems

See also

Notes

  1. Broersen, J.; Dastani, M.; Hulstijn, J.; Huang, Z.; van der Torre, L. "The BOID architecture: conflicts between beliefs, obligations, intentions and desires". Proceedings of the Fifth International Conference on Autonomous Agents, pp. 9-16. ACM, New York, NY, USA.
  2. Rao, A. S.; Georgeff, M. P. (1995). "BDI-agents: From Theory to Practice". Proceedings of the First International Conference on Multiagent Systems (ICMAS'95).
  3. de Silva, L.; Sardina, S.; Padgham, L. (2009). "First principles planning in BDI systems". Proceedings of the 8th International Conference on Autonomous Agents and Multiagent Systems.
  4. Meneguzzi, F.; Zorzo, A.; Móra, M.; Luck, M. (2007). "Incorporating Planning into BDI Agents". Scalable Computing: Practice and Experience, vol. 8.
  5. Kaminka, G. A.; Frenkel, I. (2005). "Flexible Teamwork in Behavior-Based Robots". Proceedings of the Twentieth National Conference on Artificial Intelligence (AAAI-05).
  6. Kaminka, G. A.; Yakir, A.; Erusalimchik, D.; Cohen-Nov, N. (2007). "Towards Collaborative Task and Team Maintenance". Proceedings of the Sixth International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS-07).
  7. Phung, Toan; Winikoff, Michael; Padgham, Lin (2005). "Learning Within the BDI Framework: An Empirical Analysis". Knowledge-Based Intelligent Information and Engineering Systems.
  8. Guerra-Hernández, Alejandro; El Fallah-Seghrouchni, Amal; Soldano, Henry (2004). "Learning in BDI Multi-agent Systems". Computational Logic in Multi-Agent Systems.
  9. Rao, A. S.; Georgeff, M. P. (1995). "Formal models and decision procedures for multi-agent systems". Technical Note, AAII. CiteSeerX 10.1.1.52.7924.
  10. Georgeff, Michael; Pell, Barney; Pollack, Martha E.; Tambe, Milind; Wooldridge, Michael (1999). "The Belief-Desire-Intention Model of Agency". Intelligent Agents V: Agent Theories, Architectures, and Languages.
  11. Pokahr, Alexander; Braubach, Lars; Lamersdorf, Winfried (2005). "Jadex: A BDI Reasoning Engine". Multi-Agent Programming.
  12. Sardina, Sebastian; de Silva, Lavindra; Padgham, Lin (2006). "Hierarchical planning in BDI agent programming languages: a formal approach". Proceedings of the Fifth International Joint Conference on Autonomous Agents and Multiagent Systems.
  13. Vikhorev, K.; Alechina, N.; Logan, B. (2011). "Agent programming with priorities and deadlines". Proceedings of the Tenth International Conference on Autonomous Agents and Multiagent Systems (AAMAS 2011), Taipei, Taiwan, May 2011, pp. 397-404.
  14. Vikhorev, K.; Alechina, N.; Logan, B. (2009). "The ARTS Real-Time Agent Architecture". Proceedings of the Second Workshop on Languages, Methodologies and Development Tools for Multi-agent Systems (LADS 2009), Turin, Italy, September 2009. CEUR Workshop Proceedings Vol. 494.
  15. Elmaliach, Y. (2008). "TAO: A JAUS-based High-Level Control System for Single and Multiple Robots". CogniTeam.
  16. Rimassa, G.; Greenwood, D.; Kernland, M. E. (2006). "The Living Systems Technology Suite: An Autonomous Middleware for Autonomic Computing". International Conference on Autonomic and Autonomous Systems (ICAS).
  17. Galitsky, Boris (2012). "Exhaustive simulation of consecutive mental states of human agents". Knowledge-Based Systems. doi:10.1016/j.knosys.2012.11.001.

References

This article is issued from Wikipedia (version of 11/18/2016). The text is available under the Creative Commons Attribution-ShareAlike license; additional terms may apply for the media files.