In the field of artificial intelligence, environments are assumed to have multiple properties and attributes. A rational agent interacts with this task environment to achieve a specific performance measure. When designing an artificial system, it helps to frame and define the task environment comprehensively.

These properties are loosely classified as:

a) Fully Observable/Partially Observable/Completely Unobservable*

b) Single Agent Vs Multi-Agent

c) Competitive Vs Cooperative

d) Deterministic Vs Stochastic

e) Episodic Vs Sequential

f) Static Vs Dynamic

g) Discrete Vs Continuous

h) Known Vs Unknown

Before discussing the observability feature of any environment, it may be helpful to understand the perception-action cycle or the PEAS (Performance, Environment, Actuators, Sensors) model.
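The PEAS model can be made concrete with a small sketch. The example below uses the classic vacuum-cleaner agent often used to illustrate PEAS; the class and field names here are illustrative, not a standard API.

```python
from dataclasses import dataclass

@dataclass
class PEAS:
    """A simple container for a PEAS task-environment description."""
    performance: list  # what counts as success
    environment: list  # where the agent operates
    actuators: list    # how the agent acts
    sensors: list      # how the agent perceives

# Classic vacuum-cleaner agent described in PEAS terms (entries illustrative):
vacuum = PEAS(
    performance=["cleanliness", "energy used"],
    environment=["rooms", "dirt", "obstacles"],
    actuators=["wheels", "suction"],
    sensors=["dirt sensor", "bump sensor"],
)
```

Writing the four lists out this way forces the designer to state, before coding the agent, exactly what the sensors can and cannot report, which is what the observability discussion below depends on.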

Agents interact with the task environment using two primary functions: perception and action. In artificial intelligence, perception is the process of taking in information from the environment and converting it into an internal representation or predictive model. The agent's percept sequence is the complete history of everything it has perceived so far, i.e. a tractable memory of the events the agent has experienced in the past. The agent-environment coupling unfolds over many perception-action cycles, creating cascading and nested frames of information. Marvin Minsky popularized the term “frames” in his well-known paper “A Framework for Representing Knowledge”.

Action is the process of using the agent's embodied dynamics, through its actuators, to change the state of the environment itself. Agents take in information through their sensors and use a variety of actions to change or influence the environment. Agents are assumed to be continuously driving perception-action cycles, taking in information from the environment and storing percept sequences in memory for future use. Both the environment and the agent have a Variety (V) of possible states, which makes the propagation and evolution of the perception-action cycle quite complex.
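The perception-action cycle described above can be sketched as a loop. The `Environment` and `Agent` classes and their method names below are hypothetical, chosen only to show the shape of the cycle: sense, record the percept, act, repeat.

```python
# A minimal perception-action loop (interface names are illustrative).
class Environment:
    def __init__(self):
        self.state = 0
    def percept(self):
        return self.state          # what the agent's sensors report
    def apply(self, action):
        self.state += action       # the action changes the environment

class Agent:
    def __init__(self):
        self.percept_sequence = [] # memory of everything perceived so far
    def act(self, percept):
        self.percept_sequence.append(percept)
        return 1                   # trivial policy: always take the same action

env, agent = Environment(), Agent()
for _ in range(3):                 # three perception-action cycles
    action = agent.act(env.percept())
    env.apply(action)

print(env.state)                   # 3
print(agent.percept_sequence)      # [0, 1, 2]
```

Even this toy loop shows the coupling: each action changes the environment, which changes the next percept, which is appended to the percept sequence.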

Now we can define the task environment specifically in terms of observability.

  • Fully Observable Task Environments – In a fully observable environment, the agent’s sensors are assumed to have complete access to the state of the task environment, enabling it to make optimal decisions. Fully observable environments are convenient and energy-saving because the agent does not need to store any information in memory. A simple example: in the board game “Noughts & Crosses” (also called Tic-Tac-Toe or X’s and O’s), all the relevant features of the game, i.e. the positions of the X’s and O’s, are visible to the agent, and no other information is needed to make an optimal decision toward the performance measure.
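A quick sketch of why full observability matters: in Tic-Tac-Toe the current board alone determines a good move, so the policy below needs no memory of earlier percepts. The rule set is deliberately tiny (win if possible, otherwise take any free square); a real player would also block, but that is beside the point here.

```python
# Tic-tac-toe is fully observable: the board is the complete state,
# so a policy can be a pure function of the board, with no memory.
LINES = [(0,1,2), (3,4,5), (6,7,8),   # rows
         (0,3,6), (1,4,7), (2,5,8),   # columns
         (0,4,8), (2,4,6)]            # diagonals

def choose_move(board):
    """board: 9-element list of 'X', 'O', or '' (the full state)."""
    for a, b, c in LINES:
        cells = [board[a], board[b], board[c]]
        if cells.count('X') == 2 and '' in cells:
            return (a, b, c)[cells.index('')]  # complete a winning line
    return board.index('')                     # otherwise, any free square

board = ['X', 'X', '', 'O', 'O', '', '', '', '']
print(choose_move(board))  # 2 - completes the top row
```

Because the function takes only `board` as input, it illustrates the claim above: with full observability, no percept history is required.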

  • Partially Observable Task Environments – In a partially observable environment, some of the information required for optimal decision making is hidden until it emerges through system-level activity, others’ actions, or the unfolding of the environment, or until the agent acts proactively to make it visible. The agent must make assumptions and predictions to deal with the hidden dimensions of the environment that a rational, optimal decision requires. A simple example is a card game, where another player’s cards are not visible to you until “showtime”. To offset this partial visibility, the agent can rely on memory (past experience) to predict what is likely to become visible in the future. This can be done in two ways: one is to remember the sequences of past games and the associated probabilities with other players; the other is to remember the moves the other players have made in the current game and predict what is likely to happen next.
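The card-game point above can be sketched with a belief state: since the agent cannot see hidden cards, it tracks probabilities over them instead, updating as cards become visible. This is a deliberately simplified, hypothetical sketch (a uniform belief over the unseen deck), not a full opponent model.

```python
# Partially observable card game: the agent maintains a belief over hidden
# cards, narrowing it each time a card is revealed.
deck = [rank + suit for rank in "23456789TJQKA" for suit in "SHDC"]  # 52 cards

def belief(seen):
    """Uniform probability that any particular unseen card fills a hidden slot."""
    unseen = [card for card in deck if card not in seen]
    return {card: 1 / len(unseen) for card in unseen}

seen = {"AS", "KH"}           # e.g. the agent's own hand
b = belief(seen)
print(len(b))                 # 50 cards remain unseen
print(round(b["QD"], 4))      # 0.02 - uniform over those 50
```

Each newly revealed card shrinks `unseen`, so the belief sharpens over the game, which is the memory-based prediction strategy described above in miniature.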

  • Completely Unobservable Environments – In this type of environment, either the agent has no sensors or its sensors are too limited to extract information from the environment. Sometimes the environment may be so complex or bizarre that, in spite of well-functioning sensors, it is very challenging to extract information from it, rendering it unobservable.

*Please note that in AI there is no formal “completely unobservable” classification; complete unobservability is a condition that can arise within both the fully and partially observable categories.

Note: I hope this answer helped. Please note that I am not an AI expert, but I have been studying AI along with intuition and meta-heuristics. Based on some bizarre personal experiences, I have been obsessed with understanding how some agents can use sophisticated intuition to make partially or completely unobservable environments observable in strategy plays, breakthrough inventions, and achieving their goals. My research interests are “Unconscious Search”, “Intuitive Predictive Triangulation” (an element of spidey sense), and “Reducing Inventing Cycle Time (ICT)”.