Logical Agents
Jesus A. Gonzalez
September 29, 2016
Logical Agents
Logical Agents
- Humans don’t act by pure reflex
- Humans use reasoning mechanisms that operate on internal representations of knowledge
- In AI, this approach leads to knowledge-based agents
- Simple problem-solving agents have limited and inflexible knowledge
- They cannot deduce, for example, that two tiles cannot occupy the same space
- To represent a partially observable environment, they must list every possible concrete state: far too many!
Logical Agents
- Represent states with variables
- Work in a domain-independent way
- Allow more efficient algorithms
- Logic serves as the representation for knowledge-based agents
- Knowledge-based agents
- Accept new tasks described as goals
- Can learn new knowledge about environment
- Can adapt to changes in the environment (update new relevant knowledge)
Knowledge-based Agents
- Central component: the knowledge base (KB)
- Set of sentences
- Expressed in a knowledge representation language
- Each sentence represents an assertion about the world
- Axiom: a sentence that is not derived from other sentences
- A way to add new sentences to the KB (TELL)
- A way to query the KB (ASK)
- BACKGROUND KNOWLEDGE
- Base knowledge with which KB starts
Knowledge-based Agents
- INFERENCE
- Deriving new sentences from old ones
- Based only on what has been told to KB
- A KB-agent program (a minimal sketch follows below)
- Tells the KB what it perceives
- Asks the KB which action it should take
- Tells the KB which action was chosen, then executes that action
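Below is a minimal Python sketch of this generic agent loop. The SimpleKB class and the three MAKE-* helper stubs are hypothetical placeholders standing in for a real knowledge base and representation language.

```python
# Hypothetical placeholders: in a real agent the representation details live in
# MAKE-PERCEPT-SENTENCE, MAKE-ACTION-QUERY, and MAKE-ACTION-SENTENCE.
def make_percept_sentence(percept, t):
    return ('Percept', percept, t)

def make_action_query(t):
    return ('ActionAt', t)

def make_action_sentence(action, t):
    return ('Action', action, t)

class SimpleKB:
    """A trivial stand-in KB: it stores sentences and always answers 'Forward'."""
    def __init__(self):
        self.sentences = []
    def tell(self, sentence):
        self.sentences.append(sentence)
    def ask(self, query):
        return 'Forward'

class KBAgent:
    """The generic knowledge-based agent loop."""
    def __init__(self, kb):
        self.kb = kb
        self.t = 0          # time counter

    def step(self, percept):
        # 1. Tell the KB what the agent perceives.
        self.kb.tell(make_percept_sentence(percept, self.t))
        # 2. Ask the KB which action it should take.
        action = self.kb.ask(make_action_query(self.t))
        # 3. Tell the KB which action was chosen, then return it for execution.
        self.kb.tell(make_action_sentence(action, self.t))
        self.t += 1
        return action

agent = KBAgent(SimpleKB())
print(agent.step([None, None, None, None, None]))   # 'Forward'
```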
Knowledge-based Agents
- Details of the KB agent’s representation language are hidden in
- MAKE-PERCEPT-SENTENCE
- MAKE-ACTION-QUERY
- MAKE-ACTION-SENTENCE
- Details of the inference mechanism are hidden in TELL and ASK
Knowledge-based Agents

Knowledge-based Agents
- Knowledge level: a description of the agent in terms of
- What the agent knows
- What its goals are
- Implementation level: a description in terms of
- How the agent’s knowledge of its location is implemented
- How its knowledge is represented
Knowledge-based Agents
- An agent is built by TELLing it what it needs to know
- Declarative approach
- Start with empty KB
- TELL sentences one by one, until agent knows how to operate in its environment
- Procedural approach
- Encodes desired behaviors as code
- Successful agents often combine declarative and procedural elements in their design
- A KB agent could have mechanisms to learn for itself
- Create general knowledge about the environment from percepts
- Learning agent \(\rightarrow\) could be fully autonomous
The Wumpus World
The Wumpus World
- Wumpus world
- A cave with rooms connected by passageways
- Wumpus is a terrible beast that eats agents entering its room (room in cave)
- Wumpus can be shot by agent
- Agent has only one arrow
- Some rooms contain pits, which trap anyone who wanders in except the wumpus (too big to fall in a pit)
- There is a heap of gold in the cave
The Wumpus World

The Wumpus World
- PEAS for the wumpus world
- Performance measure:
- +1000 for climbing out of the cave with the gold,
- -1000 for falling in a pit or being eaten by the wumpus,
- -1 for each action taken,
- -10 for using the arrow
- Game ends when agent dies or climbs out of the cave.
The Wumpus World
- PEAS for the wumpus world
- Environment:
- a 4x4 grid of rooms.
- Agent always starts at square [1,1], facing right
- Location of wumpus and gold chosen randomly, with uniform distribution (not at [1,1])
- Each square (except [1,1]) can be a pit with probability 0.2
The Wumpus World
- PEAS for the wumpus world
- Actuators:
- Agent can move Forward
- TurnLeft by 90 degrees
- TurnRight by 90 degrees
- Agent dies if it enters square containing a pit or a live wumpus
- If agent bumps into a wall, then it doesn’t move
- Grab used to pick up gold (if gold in same square as agent)
- Shoot to fire arrow in straight line (agent has only 1 arrow)
- Climb used to climb out of the cave, only from [1,1]
The Wumpus World
- PEAS for the wumpus world
- Sensors
- 5 sensors
- In the square containing the wumpus and in the directly (not diagonally) adjacent squares: agent perceives a Stench
- In squares directly adjacent to a pit, agent perceives a Breeze
- In square with gold, agent perceives a Glitter
- When agent walks into a wall, perceives a Bump
- When the wumpus is killed, it emits a woeful Scream that can be perceived anywhere in the cave
The Wumpus World
- Percepts are given as a list of five symbols (sketched in code below)
- stench, breeze, glitter, bump, scream
- For example, if there is a stench and a breeze but no glitter, bump, or scream, the percept is:
- [Stench, Breeze, None, None, None]
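A small Python sketch of how such a percept vector could be computed from a concrete world configuration. The helper names (adjacent, percept), the argument layout, and the gold location used in the example are assumptions made for illustration; the wumpus at [1,3] and the pit at [3,1] match the instance discussed later.

```python
def adjacent(square):
    """Squares directly (not diagonally) adjacent to `square` inside the 4x4 grid."""
    x, y = square
    candidates = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return [(a, b) for a, b in candidates if 1 <= a <= 4 and 1 <= b <= 4]

def percept(agent, wumpus, pits, gold, bumped=False, wumpus_killed=False):
    """Return the [Stench, Breeze, Glitter, Bump, Scream] percept list."""
    stench  = 'Stench'  if agent == wumpus or wumpus in adjacent(agent) else None
    breeze  = 'Breeze'  if any(p in adjacent(agent) for p in pits) else None
    glitter = 'Glitter' if agent == gold else None
    bump    = 'Bump'    if bumped else None
    scream  = 'Scream'  if wumpus_killed else None
    return [stench, breeze, glitter, bump, scream]

# Agent at [1,1], wumpus at [1,3], a pit at [3,1], gold (illustratively) at [2,3]:
print(percept((1, 1), (1, 3), {(3, 1)}, (2, 3)))   # [None, None, None, None, None]
```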
The Wumpus World
- Characterizing the wumpus environment (the wumpus doesn’t move!)
- Discrete
- Static
- Single agent
The Wumpus World
- It is sequential
- Rewards come only after many actions are taken
The Wumpus World
- It is partially observable
- Some aspects of a state not directly perceivable
- Agent’s location, wumpus’s state of health, availability of arrow
- Locations of pits and wumpus: unobserved parts of state (immutable)
- Transition model of environment completely known
- Agent can discover locations of pits and wumpus and complete agent’s knowledge of transition model
The Wumpus World
- An agent in the wumpus world
- Challenge: its initial ignorance of the environment
- Overcoming this ignorance requires logical reasoning
- Agent in different instances of the wumpus world
- In most instances, the agent can retrieve the gold safely
- Sometimes the agent must choose between going home empty-handed and risking death to find the gold
- About 21% of instances are utterly unfair
- Gold in pit
- Gold surrounded by pits
The Wumpus World

The Wumpus World

The Wumpus World

The Wumpus World
- Informal knowledge representation language
- Writing symbols in a grid (figures 3 and 4)
- Initial KB
- Rules of the game!
- The agent is in [1,1]
- [1,1] is safe
- First percept: [None, None, None, None, None]
- This means [1,2] and [2,1] are free of dangers
The Wumpus World
- Need inference
- Discover Wumpus is in [1,3]
- Discover a Pit is in [3,1]
- How could we infer this?
- Inference leads to valid conclusions
Logic
- Concepts of logic
- Representation and reasoning
- Syntax
- Rules that specify all sentences that are well formed
- e.g., in arithmetic: “x + y = z” is well formed, while “x=+yz” is not
- Semantics, the meaning of sentences
- Defines the truth of each sentence with respect to each possible world
Logic
- Model: a specific world in which a sentence is being considered
- Possible worlds can be thought of as potentially real environments that the agent might or might not be in
- Models are mathematical abstractions that fix the truth or falsehood of every relevant sentence
- Satisfaction, \(m\) satisfies \(\alpha\) if a sentence \(\alpha\) is true in model \(m\)
- Also \(m\) is a model of \(\alpha\)
Logic
- Logical reasoning
- Involves the relation of logical entailment between sentences
- Idea that a sentence follows logically from another sentence
- Mathematical notation \(\alpha \vDash \beta\)
- Sentence \(\alpha\) entails sentence \(\beta\)
- Entailment, formal definition
- \(\alpha \vDash \beta\) iff, in every model in which \(\alpha\) is true, \(\beta\) is also true
- \(\alpha \vDash \beta\) if and only if \(M(\alpha) \subseteq M(\beta)\).
Logic

Logic
- Logical reasoning: can we conclude the following?
- \(\alpha_1\) = There is no pit in [1,2]
- \(\alpha_2\) = There is no pit in [2,2]
- Entailment
- In every model in which \(KB\) is true, \(\alpha_1\) is also true
- \(KB \vDash \alpha_1\), can conclude there is no pit in [1,2]
- There is no entailment in the second case:
- There are some models in which \(KB\) is true but \(\alpha_2\) is false
- \(KB \nvDash \alpha_2\), can’t conclude there is no pit in [2,2]
Logic
- Logical inference, the process of using entailment to derive conclusions
- Model checking: the process of enumerating all possible models to check that \(\alpha\) is true in every model in which \(KB\) is true, i.e., \(M(KB) \subseteq M(\alpha)\)
- If inference algorithm \(i\) can derive \(\alpha\) from \(KB\), we write:
- \(KB \vdash_i \alpha\)
- “\(\alpha\) is derived from \(KB\) by \(i\)”
- Inference algorithms that derive only entailed sentences are Sound or truth-preserving
- Completeness, an inference algorithm is complete if it can derive any sentence that is entailed
Logic
- If the KB is true in the real world, then any sentence \(\alpha\) derived from the KB by a sound inference procedure is also true in the real world

Propositional Logic
Propositional Logic - Syntax
- Simple but powerful
- Syntax, defines allowable sentences
- Atomic sentence, consists of a single proposition symbol
- Each symbol can be either true or false
- \(P\), \(Q\), \(R\), \(W_{1,3}\), etc.
- Proposition symbols with fixed meanings: \(True\) is always true, and \(False\) is always false
- Complex sentences, constructed from simpler sentences using parentheses and logical connectives
Propositional Logic - Syntax
- Logical connectives, from highest to lowest precedence (one possible encoding is sketched below)
- Negation: \(\neg\), not
- Conjunction: \(\land\), and
- Disjunction: \(\lor\), or
- Implication: \(\Rightarrow\), implies
- Premise or antecedent
- Conclusion or consequent
- Biconditional: \(\iff\), if and only if
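One possible (assumed) way to encode such sentences in Python is as nested tuples whose first element names the connective; the nesting makes the precedence explicit, as this short sketch shows.

```python
# Atoms are plain strings; complex sentences are tuples ('connective', ...).
# Nesting encodes the precedence, from NOT (tightest) down to IFF (loosest).

# "not P and Q" means "(not P) and Q", because NOT binds tighter than AND:
s1 = ('and', ('not', 'P'), 'Q')

# "P or Q implies R" means "(P or Q) => R", because OR binds tighter than IMPLIES:
s2 = ('implies', ('or', 'P', 'Q'), 'R')

# "P iff Q or R" means "P <=> (Q or R)", because IFF has the lowest precedence:
s3 = ('iff', 'P', ('or', 'Q', 'R'))
```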
Propositional Logic - Syntax

Propositional Logic - Semantics
- Rules for determining the truth of a sentence for a particular model
- Given three proposition symbols: P, Q, R
- A possible model: \(m_1 = \{P = false, Q = true, R = false \}\)
- How many models? \(2^3 = 8\)
Propositional Logic - Semantics
- Rules
- For atomic sentences
- \(True\) is true in every model and \(False\) is false in every model
- For complex sentences there are five rules (see the evaluator sketch below)
- \(\neg\) \(P\) is true \(\iff\) \(P\) is false in \(m\)
- \(P \land Q\) is true \(\iff\) both \(P\) and \(Q\) are true in \(m\)
- \(P \lor Q\) is true \(\iff\) either \(P\) or \(Q\) is true in \(m\)
- \(P \Rightarrow Q\) is true unless \(P\) is true and \(Q\) is false in \(m\)
- \(P \iff Q\) is true \(\iff\) \(P\) and \(Q\) are both true or both false in \(m\)
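A small Python sketch of these five rules, using the tuple encoding sketched earlier. The function name pl_true is chosen here for illustration; it evaluates a sentence in a model represented as a dictionary from symbols to truth values.

```python
def pl_true(sentence, model):
    """Truth value of a propositional sentence in a model (dict: symbol -> bool)."""
    if isinstance(sentence, str):              # atomic sentence: look it up
        return model[sentence]
    op, *args = sentence
    vals = [pl_true(a, model) for a in args]
    if op == 'not':
        return not vals[0]
    if op == 'and':
        return vals[0] and vals[1]
    if op == 'or':
        return vals[0] or vals[1]
    if op == 'implies':                        # true unless premise true and conclusion false
        return (not vals[0]) or vals[1]
    if op == 'iff':                            # true when both sides have the same value
        return vals[0] == vals[1]
    raise ValueError(f'unknown connective: {op}')

m1 = {'P': False, 'Q': True, 'R': False}
print(pl_true(('and', ('not', 'P'), 'Q'), m1))   # True:  (not P) and Q
print(pl_true(('implies', 'Q', 'R'), m1))        # False: Q is true but R is false
```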
Propositional Logic - Semantics

Propositional Logic - A Simple KB
- A KB for the wumpus world
- Immutable aspects
- \(P_{x,y}\) is true if there is a pit in \([x,y]\).
- \(W_{x,y}\) is true if there is a wumpus in \([x,y]\), dead or alive.
- \(B_{x,y}\) is true if the agent perceives a breeze in \([x,y]\).
- \(S_{x,y}\) is true if the agent perceives a stench in \([x,y]\).
- Each sentence is labeled \(R_i\)
Propositional Logic - A Simple KB
- A KB for the wumpus world (see the encoding sketch below)
- There is no pit in [1,1]:
- \(R_1\): \(\neg\) \(P_{1,1}\).
- A square is breezy if and only if there is a pit in a neighboring square
- \(R_2\): \(B_{1,1} \iff (P_{1,2} \lor P_{2,1})\).
- \(R_3\): \(B_{2,1} \iff (P_{1,1} \lor P_{2,2} \lor P_{3,1})\).
- Breeze percepts
- \(R_4\): \(\neg\) \(B_{1,1}\).
- \(R_5\): \(B_{2,1}\).
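Under the tuple encoding sketched earlier, these five sentences could be written as follows; atom names such as 'P12' for \(P_{1,2}\) are an illustrative convention.

```python
R1 = ('not', 'P11')                                        # no pit in [1,1]
R2 = ('iff', 'B11', ('or', 'P12', 'P21'))                  # [1,1] breezy iff a pit is in [1,2] or [2,1]
R3 = ('iff', 'B21', ('or', 'P11', ('or', 'P22', 'P31')))   # same rule for [2,1] (binary 'or' nested)
R4 = ('not', 'B11')                                        # no breeze perceived in [1,1]
R5 = 'B21'                                                 # breeze perceived in [2,1]

KB = ('and', R1, ('and', R2, ('and', R3, ('and', R4, R5))))  # the conjunction of R1..R5
```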
Simple Inference Procedure
- Decide if \(KB \vDash \alpha\) for some sentence \(\alpha\)
- Is \(\neg\) \(P_{1,2}\) entailed by \(KB\)?
- Model checking approach
- Enumerate the models, check \(\alpha\) is true in every model in which \(KB\) is true
- Models: assignments of true or false to all proposition symbols
- \(B_{1,1}\), \(B_{2,1}\), \(P_{1,1}\), \(P_{1,2}\), \(P_{2,1}\), \(P_{2,2}\), and \(P_{3,1}\)
- \(2^7 = 128\) possible models, \(KB\) is true only in 3 of them
Simple Inference Procedure

Simple Inference Procedure
- TruthTable-ENTAILS?
- A general algorithm to decide entailment in propositional logic (a brute-force sketch follows below)
- Performs recursive enumeration of finite space of assignments to symbols
- Sound, directly implements definition of entailment
- Complete, works for any \(KB\) and \(\alpha\) and always terminates
- There are finitely many models to examine
- Time complexity \(O(2^n)\)
- Space complexity \(O(n)\), since the enumeration is depth-first
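The sketch below approximates TruthTable-ENTAILS? by brute-force enumeration with itertools.product rather than the recursive depth-first enumeration, so it keeps the \(O(2^n)\) time behavior but not the \(O(n)\) space bound. The evaluator and the R1–R5 encoding from the earlier sketches are repeated so the snippet is self-contained.

```python
from itertools import product

def pl_true(s, m):
    """Truth of sentence s (tuple encoding) in model m (dict: symbol -> bool)."""
    if isinstance(s, str):                       # atomic sentence
        return m[s]
    op, *args = s
    v = [pl_true(a, m) for a in args]
    if op == 'not':
        return not v[0]
    if op == 'and':
        return v[0] and v[1]
    if op == 'or':
        return v[0] or v[1]
    if op == 'implies':
        return (not v[0]) or v[1]
    if op == 'iff':
        return v[0] == v[1]

def tt_entails(kb, alpha, symbols):
    """KB |= alpha  iff  alpha is true in every model that makes KB true."""
    for values in product([True, False], repeat=len(symbols)):   # 2^n models
        model = dict(zip(symbols, values))
        if pl_true(kb, model) and not pl_true(alpha, model):
            return False                          # a model of KB in which alpha fails
    return True

# The simple wumpus KB (R1..R5) over seven symbols -> 2^7 = 128 models.
R1 = ('not', 'P11')
R2 = ('iff', 'B11', ('or', 'P12', 'P21'))
R3 = ('iff', 'B21', ('or', 'P11', ('or', 'P22', 'P31')))
R4 = ('not', 'B11')
R5 = 'B21'
KB = ('and', R1, ('and', R2, ('and', R3, ('and', R4, R5))))
symbols = ['B11', 'B21', 'P11', 'P12', 'P21', 'P22', 'P31']

print(tt_entails(KB, ('not', 'P12'), symbols))   # True:  KB entails "no pit in [1,2]"
print(tt_entails(KB, ('not', 'P22'), symbols))   # False: KB does not entail "no pit in [2,2]"
```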
Simple Inference Procedure

Propositional Theorem Proving
Propositional Theorem Proving
- Entailment can also be done by theorem proving
- Apply rules of inference to sentences in KB
- Construct a proof of desired sentences
- No need to consult models
- Could be more efficient than model checking
Propositional Theorem Proving
- Logical equivalence: two sentences \(\alpha\) and \(\beta\) are logically equivalent if they are true in the same set of models
- \(\alpha \equiv \beta\)
- Example: \(P \land Q\) is equivalent to \(Q \land P\)
- Two sentences \(\alpha\) and \(\beta\) are equivalent only if each of them entails the other
- \(\alpha \equiv \beta \iff \alpha \vDash \beta\) and \(\beta \vDash \alpha\)
Propositional Theorem Proving

Propositional Theorem Proving
- Validity, a sentence is valid if it is true in all models
- \(P \lor \neg P\) is valid
- Valid sentences are also known as tautologies; they are necessarily true
- Deduction theorem
- For any sentences \(\alpha\) and \(\beta\), \(\alpha \vDash \beta\) if and only if the sentence (\(\alpha \Rightarrow \beta\)) is valid.
- We can now decide if \(\alpha \vDash \beta\) by checking that (\(\alpha \Rightarrow \beta\)) is true in every model
- Satisfiability, a sentence is satisfiable if it is true in, or satisfied by, some model.
- Can be checked by enumerating the possible models
- Until we find one that satisfies the sentence
Propositional Theorem Proving
- Validity and satisfiability are connected
- \(\alpha\) is valid iff \(\neg \alpha\) is unsatisfiable
- \(\alpha \vDash \beta\) if and only if the sentence (\(\alpha \land \neg \beta\)) is unsatisfiable
- Prove \(\beta\) from \(\alpha\) by checking unsatisfiability of (\(\alpha \land \neg \beta\))
- This is reductio ad absurdum, also known as proof by refutation or proof by contradiction (see the sketch below)
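A small sketch of these connections by model enumeration, reusing the same tuple encoding. The names is_valid, is_satisfiable, and entails are illustrative; entails implements the refutation test described above.

```python
from itertools import product

def pl_true(s, m):
    """Truth of sentence s (tuple encoding) in model m (dict: symbol -> bool)."""
    if isinstance(s, str):
        return m[s]
    op, *args = s
    v = [pl_true(a, m) for a in args]
    if op == 'not':     return not v[0]
    if op == 'and':     return v[0] and v[1]
    if op == 'or':      return v[0] or v[1]
    if op == 'implies': return (not v[0]) or v[1]
    if op == 'iff':     return v[0] == v[1]

def symbols_in(s):
    """All proposition symbols occurring in a sentence."""
    return {s} if isinstance(s, str) else set().union(*(symbols_in(a) for a in s[1:]))

def models(symbols):
    """Enumerate every assignment of truth values to the given symbols."""
    symbols = sorted(symbols)
    for values in product([True, False], repeat=len(symbols)):
        yield dict(zip(symbols, values))

def is_valid(s):          # true in all models (a tautology)
    return all(pl_true(s, m) for m in models(symbols_in(s)))

def is_satisfiable(s):    # true in at least one model
    return any(pl_true(s, m) for m in models(symbols_in(s)))

def entails(alpha, beta):
    """Refutation test: alpha |= beta  iff  (alpha and not beta) is unsatisfiable."""
    return not is_satisfiable(('and', alpha, ('not', beta)))

print(is_valid(('or', 'P', ('not', 'P'))))   # True: P or not P is valid
print(entails(('and', 'P', 'Q'), 'P'))       # True, via unsatisfiability of (P and Q) and (not P)
```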
Inference and Proofs
- Inference rules can be used to derive proofs
- Chain of conclusions leading to a goal
- Modus Ponens, \(\frac{\alpha \Rightarrow \beta, \alpha}{\beta}\)
- If sentences of the form \(\alpha \Rightarrow \beta\) and \(\alpha\) are given, then \(\beta\) is inferred
- If (WumpusAhead \(\land\) WumpusAlive) \(\Rightarrow\) Shoot and (WumpusAhead \(\land\) WumpusAlive) are given, then Shoot can be inferred
Inference and Proofs
- And-Elimination, \(\frac{\alpha \land \beta}{\alpha}\)
- From a conjunction, any of the conjuncts can be inferred
- From (WumpusAhead \(\land\) WumpusAlive), WumpusAlive can be inferred
Inference and Proofs
- Inference rules and equivalences in the Wumpus World
- There is no pit in [1,1]:
- \(R_1\): \(\neg\) \(P_{1,1}\).
- A square is breezy if and only if there is a pit in a neighboring square
- \(R_2\): \(B_{1,1} \iff (P_{1,2} \lor P_{2,1})\).
- \(R_3\): \(B_{2,1} \iff (P_{1,1} \lor P_{2,2} \lor P_{3,1})\).
- Breeze percepts
- \(R_4\): \(\neg\) \(B_{1,1}\).
- \(R_5\): \(B_{2,1}\).
Inference and Proofs
- Applying biconditional elimination to \(R_2\) (the full derivation chain is sketched in code below):
- \(R_6\): \((B_{1,1} \Rightarrow (P_{1,2} \lor P_{2,1})) \land ((P_{1,2} \lor P_{2,1}) \Rightarrow B_{1,1})\).
- Applying And-Elimination to \(R_6\)
- \(R_7\): \(((P_{1,2} \lor P_{2,1}) \Rightarrow B_{1,1})\).
- Applying logical equivalence for contrapositives to \(R_7\)
- \(R_8\): \((\neg B_{1,1} \Rightarrow \neg (P_{1,2} \lor P_{2,1}))\).
- Applying Modus Ponens using \(R_8\) and \(R_4\):
- \(R_9\): \(\neg ( P_{1,2} \lor P_{2,1})\).
- Applying De Morgan’s rule:
- \(R_{10}\): \(\neg P_{1,2} \land \neg P_{2,1}\).
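The derivation above can be mirrored step by step with small rule-application functions over the tuple encoding. The function names below are illustrative; each one implements exactly the rule named in its docstring.

```python
# Sentences in the tuple encoding: atoms are strings, e.g. 'B11' for B_{1,1}.

def biconditional_elim(s):
    """(a <=> b)  becomes  (a => b) and (b => a)."""
    _, a, b = s
    return ('and', ('implies', a, b), ('implies', b, a))

def and_elim(s, i):
    """From (a and b), keep conjunct i (0 or 1)."""
    return s[1 + i]

def contrapositive(s):
    """(a => b)  becomes  (not b => not a)."""
    _, a, b = s
    return ('implies', ('not', b), ('not', a))

def modus_ponens(implication, fact):
    """From (a => b) and a, conclude b."""
    _, a, b = implication
    assert fact == a, "premise does not match"
    return b

def de_morgan(s):
    """not (a or b)  becomes  (not a) and (not b)."""
    _, (_, a, b) = s
    return ('and', ('not', a), ('not', b))

R2 = ('iff', 'B11', ('or', 'P12', 'P21'))
R4 = ('not', 'B11')

R6 = biconditional_elim(R2)     # (B11 => P12 v P21) and (P12 v P21 => B11)
R7 = and_elim(R6, 1)            # (P12 v P21) => B11
R8 = contrapositive(R7)         # not B11 => not (P12 v P21)
R9 = modus_ponens(R8, R4)       # not (P12 v P21)
R10 = de_morgan(R9)             # (not P12) and (not P21)
print(R10)                      # ('and', ('not', 'P12'), ('not', 'P21'))
```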
Inference and Proofs
- We can use search algorithms to find a sequence of inference steps that constitutes a proof (a rough sketch follows):
- INITIAL STATE: the initial KB
- ACTIONS: all inference rules applied to all sentences that match the top half of the inference rule
- RESULT: add the sentence in bottom half of inference rule
- GOAL: a state containing a sentence that we are trying to prove
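As a rough illustration (an assumption, not the formulation above), the sketch below repeatedly applies two of the earlier rules, And-Elimination and Modus Ponens, to grow the KB until the goal sentence appears, which amounts to a breadth-first exploration of that search space.

```python
def one_step_consequences(kb):
    """Sentences derivable from kb in one application of And-Elimination or Modus Ponens."""
    derived = set()
    for s in kb:
        if isinstance(s, tuple) and s[0] == 'and':             # And-Elimination
            derived.add(s[1])
            derived.add(s[2])
        if isinstance(s, tuple) and s[0] == 'implies' and s[1] in kb:
            derived.add(s[2])                                   # Modus Ponens
    return derived

def prove(kb, goal):
    """Grow the KB level by level until the goal is derived or nothing new appears."""
    kb = set(kb)
    while goal not in kb:
        new = one_step_consequences(kb) - kb
        if not new:
            return False          # no proof found with these two rules
        kb |= new
    return True

initial_kb = {
    ('implies', ('and', 'WumpusAhead', 'WumpusAlive'), 'Shoot'),
    ('and', 'WumpusAhead', 'WumpusAlive'),
}
print(prove(initial_kb, 'Shoot'))   # True
```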
Inference and Proofs
- Monotonicity
- The set of entailed sentences can only increase as information is added to KB
- For any sentences \(\alpha\) and \(\beta\)
- if \(KB \vDash \alpha\) then \(KB \land \beta \vDash \alpha\).
- The conclusion of a rule must follow regardless of what else is in the KB