
UNIT – 3

CONTEXT-FREE GRAMMARS AND LANGUAGES

Objectives

To learn about CFG

To learn about the derivation trees and their ambiguity

To solve problems in CFG

To learn about Pushdown Automata (PDA)

To learn about the Equivalence of PDA's and CFG's

Context Free languages

•The class of context-free languages generalizes the class of regular languages, i.e., every regular language is a context-free language.

•The reverse of this is not true, i.e., every context-free language is not necessarily regular. For example, as we will see, {0^k 1^k | k ≥ 0} is context-free but not regular.

•Many issues and questions we asked for regular languages will be the same for context-free languages:

Machine model – PDA (Push-Down Automata)

Descriptor – CFG (Context-Free Grammar)

Pumping lemma for context-free languages

Closure of context-free languages with respect to various operations

Algorithms and conditions for finiteness or emptiness

•Some analogies don’t hold, e.g., non-determinism in a PDA makes a difference and, in particular, deterministic PDAs define a subset of the context-free languages.

•Informally a Context-Free Language (CFL) is a language generated by a Context-Free Grammar (CFG).

•What is a CFG?

•Informally, a CFG is a set of rules for deriving (or generating) strings (or sentences) in a language.

•Informally a CFG consists of:

•A set of replacement rules, each having a Left-Hand Side (LHS) and a Right-Hand Side (RHS).

•Two types of symbols: variables and terminals.

•LHS of each rule is a single variable (no terminals).

•RHS of each rule is a string of zero or more variables and terminals.

•A string consists of only terminals.

Formally, a Context-Free Grammar (CFG) is a 4-tuple:

G = (V, T, P, S)

V - A finite set of variables or non-terminals

T - A finite set of terminals (V and T do not intersect)

P - A finite set of productions, each of the form A –> α, where A is in V and α is in (V U T)*

Note that α may be ε

S - A starting non-terminal (S is in V)

•Example CFG:

G = ({S}, {0, 1}, P, S)

P:

(1) S –> 0S1

(2) S –> ε

or, written more compactly, S –> 0S1 | ε

Example Derivations:

S => 0S1 (1)

=> 01 (2)

S => ε (2)

S => 0S1 (1)

=> 00S11 (1)

=> 000S111 (1)

=> 000111 (2)
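To make the notion of a leftmost derivation concrete, here is a minimal Python sketch (an added illustration, not part of the original notes; the table productions and the function leftmost_derive are names chosen here) that stores the grammar S –> 0S1 | ε and replays the derivation of 000111 shown above.

    # Sketch only: the grammar S -> 0S1 | epsilon as a production table.
    # "" stands for epsilon; symbols appearing as keys are treated as variables.
    productions = {"S": ["0S1", ""]}

    def leftmost_derive(start, choices):
        """Rewrite the leftmost variable with the chosen production index at each step."""
        sentential = start
        trace = [sentential]
        for c in choices:
            i = next(k for k, sym in enumerate(sentential) if sym in productions)
            sentential = sentential[:i] + productions[sentential[i]][c] + sentential[i + 1:]
            trace.append(sentential if sentential else "ε")   # show ε if the string becomes empty
        return trace

    # Productions (1), (1), (1), (2) in the numbering used above:
    # S => 0S1 => 00S11 => 000S111 => 000111
    print(" => ".join(leftmost_derive("S", [0, 0, 0, 1])))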

•Example CFG:

G = ({A, B, C, S}, {a, b, c}, P, S)

P:

(1) S –> ABC

(2) A –> aA

(3) A –> ε

(4) B –> bB

(5) B –> ε

(6) C –> cC

(7) C –> ε

or, written more compactly: S –> ABC, A –> aA | ε, B –> bB | ε, C –> cC | ε

Example Derivations:

S => ABC (1)

=> BC (3)

=> C (5)

=> ε (7)

S => ABC (1)

=> aABC (2)

=> aaABC (2)

=> aaBC (3)

=> aabBC (4)

=> aabC (5)

=> aabcC (6)

=> aabc (7)

Formal Definitions for CFLs

•Let G = (V, T, P, S) be a CFG.

•Observation: –> forms a relation on V and (V U T)*

•Definition: Let A be in V, B be in (V U T)*, A –> B be in P, and let α and β be in (V U T)*. Then:

αAβ => αBβ

In words, αAβ directly derives αBβ, or rather αBβ follows from αAβ by the application of exactly one production from P.

•Observation: => forms a relation on (V U T)* and (V U T)*.

•Definition: Suppose that α1, α2, …, αm are in (V U T)*, m ≥ 1, and

α1 => α2

α2 => α3

…

αm-1 => αm

Then α1 =>* αm

In words, αm follows from α1 by the application of zero or more productions. Note that: α =>* α.

•Observation: =>* forms a relation on (V U T)* and (V U T)*.

•Definition: Let α be in (V U T)*. Then α is a sentential form if and only if S =>* α.

•Definition: Let G = (V, T, P, S) be a context-free grammar. Then the language generated by G, denoted L(G), is the set:

{w | w is in T* and S=>* w}

•Definition: Let L be a language. Then L is a context-free language if and only if there exists a context-free grammar G such that L = L(G).

•Definition: Let G1 and G2 be context-free grammars. Then G1 and G2 are equivalent if and only if L(G1) = L(G2).

•Theorem: Let L be a regular language. Then L is a context-free language.

•Proof: (by induction)

We will prove that if r is a regular expression then there exists a CFG G such that L(r) = L(G). The proof will be by induction on the number of operators in r.

Basis: Op(r) = 0

Then r is either Ø, ε, or a, for some symbol a in Σ.

For Ø:

Let G = ({S}, {}, P, S) where P = {}

For ε:

Let G = ({S}, {}, P, S) where P = {S –> ε}

For a:

Let G = ({S}, {a}, P, S) where P = {S –> a}

Inductive Hypothesis:

Suppose that, for some k ≥ 0, for any regular expression r with 0 ≤ op(r) ≤ k there exists a CFG G such that L(r) = L(G).

Inductive Step:

Let r be a regular expression with op(r)=k+1. Then r = r1 + r2, r = r1r2 or r = r1*.

Case 1)r = r1 + r2

Since r has k+1 operators, one of which is +, it follows that r1 and r2 have at most k operators. From the inductive hypothesis it follows that there exist CFGs G1 = (V1, T1, P1, S1) and G2 = (V2, T2, P2, S2) such that L(r1) = L(G1) and L(r2) = L(G2). Assume without loss of generality that V1 and V2 have no non-terminals in common, and construct a grammar G = (V, T, P, S) where:

V = V1 U V2 U {S}

T = T1 U T2

P = P1 U P2 U {S –> S1, S –> S2}

Clearly, L(r) = L(G).

Case 2)r = r1r2

Let G1 = (V1, T1, P1, S1) and G2 = (V2, T2, P2, S2) be as in Case 1, and construct a grammar G = (V, T, P, S) where:

V = V1 U V2 U {S}

T = T1 U T2

P = P1 U P2 U {S –> S1S2}

Clearly, L(r) = L(G).

Case 3)r = (r1)*

Let G1 = (V1, T1, P1, S1) be a CFG such that L(r1) = L(G1) and construct a grammar G = (V, T, P, S) where:

V = V1 U {S}

T = T1

P = P1 U {S –> S1S, S –> ε}

Clearly, L(r) = L(G).

•The preceding theorem is constructive, in the sense that it shows how to construct a CFG from a given regular expression.
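This construction can also be expressed as a short program. The Python sketch below is an added illustration, not part of the original notes; the nested-tuple encoding of regular expressions and all function names are assumptions made here. It emits productions following Cases 1-3 of the proof. It numbers variables top-down, so the names differ from those in Example #1 below, but the shape of the resulting grammar is the same.

    # Sketch of the constructive proof: regular expression -> CFG.
    # A regular expression is encoded as a nested tuple, e.g.
    #   ("concat", ("star", ("sym", "a")), ("star", ("sym", "b")))   for a*b*
    import itertools

    _fresh = itertools.count(1)

    def new_var():
        """Create a fresh non-terminal name (S1, S2, ...)."""
        return f"S{next(_fresh)}"

    def regex_to_cfg(r):
        """Return (start_variable, list_of_productions); "" denotes epsilon."""
        s = new_var()
        op = r[0]
        if op == "empty":                      # r = emptyset : no productions at all
            return s, []
        if op == "eps":                        # r = epsilon  : S -> epsilon
            return s, [(s, "")]
        if op == "sym":                        # r = a        : S -> a
            return s, [(s, r[1])]
        if op == "union":                      # Case 1: S -> S1 | S2
            s1, p1 = regex_to_cfg(r[1])
            s2, p2 = regex_to_cfg(r[2])
            return s, p1 + p2 + [(s, s1), (s, s2)]
        if op == "concat":                     # Case 2: S -> S1 S2
            s1, p1 = regex_to_cfg(r[1])
            s2, p2 = regex_to_cfg(r[2])
            return s, p1 + p2 + [(s, s1 + " " + s2)]
        if op == "star":                       # Case 3: S -> S1 S | epsilon
            s1, p1 = regex_to_cfg(r[1])
            return s, p1 + [(s, s1 + " " + s), (s, "")]
        raise ValueError(f"unknown operator {op!r}")

    start, prods = regex_to_cfg(("concat", ("star", ("sym", "a")), ("star", ("sym", "b"))))
    for lhs, rhs in prods:
        print(lhs, "->", rhs if rhs else "ε")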

•Example #1:

r = a*b*

r = r1r2

r1= r3*

r3 = a

r2= r4*

r4 = b

•Example #1: a*b*

r4 = b        S1 –> b

r3 = a        S2 –> a

r2 = r4*      S3 –> S1S3

              S3 –> ε

r1 = r3*      S4 –> S2S4

              S4 –> ε

r = r1r2      S5 –> S4S3

•Example #2:

r = (0+1)*01

r = r1r2

r1= r3*

r3 = (r4+r5)

r4 = 0

r5 = 1

r2= r6r7

r6 = 0

r7 = 1

•Example #2: (0+1)*01

r7 = 1        S1 –> 1

r6 = 0        S2 –> 0

r2 = r6r7     S3 –> S2S1

r5 = 1        S4 –> 1

r4 = 0        S5 –> 0

r3 = (r4+r5)  S6 –> S4, S6 –> S5

r1 = r3*      S7 –> S6S7

              S7 –> ε

r = r1r2      S8 –> S7S3

•Definition: A CFG is a regular grammar if each rule is of the following form:

–A –> a

–A –> aB

–A –> ε

where A and B are in V, and a is in T

•Theorem: A language L is a regular language iff there exists a regular grammar G such that L = L(G).

•Proof: Exercise.

•Observation: The grammar S –> 0S1 | ε is not a regular grammar.

•Observation: A language may have several CFGs, some regular, some not. (The fact that the preceding grammar is not regular does not, in and of itself, prove that {0^n 1^n} is not a regular language.)
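For instance (an added illustration, not from the original notes), the regular language denoted by (0+1)*01 is generated by the regular grammar

S –> 0S | 1S | 0A

A –> 1

Every rule here has one of the three allowed forms, so this grammar is regular, in contrast to S –> 0S1 | ε above.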

Parse Tree

•Definition:Let G = (V, T, P, S) be a CFG. A tree is a derivation (or parse) tree if:

–Every vertex has a label from V U T U {ε}

–The label of the root is S

–If a vertex with label A has children with labels X1, X2,…, Xn, from left to right, then

A –> X1X2…Xn

must be a production in P

–If a vertex has label ε, then that vertex is a leaf and the only child of its parent

•More Generally, a derivation tree can be defined with any non-terminal as the root.
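For example (an added illustration, not from the original notes), in the grammar S –> 0S1 | ε the string 0011 has the parse tree sketched below, with children listed left to right and indented under their parent:

    S
      0
      S
        0
        S
          ε
        1
      1

Reading the leaves from left to right gives 0 0 ε 1 1 = 0011, the derived string.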

•Definition: Let G be a CFG. Then G is said to be ambiguous if there exists an x in L(G) with >1 leftmost derivations. Equivalently, G is said to be ambiguous if there exists an x in L(G) with >1 parse trees, or >1 rightmost derivations.

•Note: Given a CFL L, there may be more than one CFG G with L = L(G), some ambiguous and some not.

•Definition: Let L be a CFL. If every CFG G with L = L(G) is ambiguous, then L is inherently ambiguous.

A context-free grammar G is ambiguous if some string has two or more derivation trees or, equivalently, two or more leftmost derivations (or rightmost derivations).
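As a brief added illustration (not from the original notes), the grammar S –> S+S | S*S | a is ambiguous: the string a+a*a has the two distinct leftmost derivations

S => S+S => a+S => a+S*S => a+a*S => a+a*a

S => S*S => S+S*S => a+S*S => a+a*S => a+a*a

and therefore two different parse trees.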

Applications of Context-Free Grammars

Grammars are used to describe programming languages. Most importantly, there is a mechanical way of turning the description as a Context-Free Grammar (CFG) into a parser, the component of the compiler that discovers the structure of the source program and represents that structure as a tree.

For example, the Document Type Definition (DTD) feature of XML (Extensible Markup Language) is essentially a context-free grammar that describes the allowable tags and the ways in which these tags may be nested. For example, one could describe a sequence of characters that is intended to be interpreted as a phone number by <PHONE> and </PHONE>.

Example-1: Typical programming languages use parentheses and/or brackets in a nested and balanced fashion. That is, we must be able to match some left parenthesis against a right parenthesis that appears immediately to its right, remove both of them, and repeat. If we eventually eliminate all the parentheses, then the string was balanced. Examples of strings with balanced parentheses are (()), ()(), (()()), while )( and (() are not balanced.
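A context-free grammar generating exactly the balanced strings of parentheses (supplied here as an added illustration, not taken from the text) is

S –> SS | (S) | ε

For example, S => SS => (S)S => ()S => ()(S) => ()() derives the balanced string ()().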

Pushdown Automata (PDA)

•Informally:

– A PDA is an NFA-ε with a stack.

–Transitions are modified to accommodate stack operations.

•Questions:

–What is a stack?

–How does a stack help?

•A DFA can “remember” only a finite amount of information, whereas a PDA can “remember” an infinite amount of (certain types of) information.

A PDA has three components: an input tape with a read-only head, a finite control, and a pushdown store (the stack).

The input head is read-only and may only move from left to right, one symbol (or cell) at a time. In each step, the PDA pops the top symbol off the stack; based on this symbol, the input symbol it is currently reading, and its present state, it can push a sequence of symbols onto the stack, move its read-only head one cell (or symbol) to the right, and enter a new state, as defined by the transition rules of the PDA.

PDAs are nondeterministic by default. That is, ε-transitions are also allowed, in which the PDA can pop and push, and change state, without reading the next input symbol or moving its read-only head. Besides this, there may be multiple options for possible next moves.

Explanation of the transition function δ:

If δ(q, a, z) = {(p1, γ1), (p2, γ2), …, (pk, γk)} for some k ≥ 1, this means intuitively that whenever the PDA is in state q reading input symbol a with z on top of the stack, it can nondeterministically, for any i, 1 ≤ i ≤ k,

  • go to state pi,
  • pop z off the stack,
  • push γi onto the stack (where γi = Z1Z2…Zm; the usual convention is that Z1 will be at the top and Zm at the bottom), and
  • move its read head right one cell past the current symbol a.

If a = ε, then δ(q, ε, z) = {(p1, γ1), (p2, γ2), …, (pk, γk)} means intuitively that whenever the PDA is in state q with z on top of the stack, regardless of the current input symbol, it can nondeterministically, for any i, 1 ≤ i ≤ k,

  • go to state pi,
  • pop z off the stack,
  • push γi onto the stack, and
  • leave its read-only head where it is.
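As a concrete (and purely illustrative) way to picture δ, the small Python sketch below stores a transition function as a dictionary. The machine encoded is a hypothetical PDA for {0^n 1^n | n ≥ 1} accepting by final state; none of the names come from the text.

    # Sketch only: a transition function delta for a hypothetical PDA recognizing
    # {0^n 1^n | n >= 1} by final state.  A key (state, input, stack_top) maps to a
    # set of moves (new_state, string_to_push); None in the input position encodes
    # an epsilon-move, and the first symbol of the pushed string ends up on top.
    delta = {
        ("p", "0", "Z"): {("p", "0Z")},   # first 0: push 0, keep Z below it
        ("p", "0", "0"): {("p", "00")},   # further 0's: push another 0
        ("p", "1", "0"): {("q", "")},     # first 1: pop one 0, start matching
        ("q", "1", "0"): {("q", "")},     # each further 1 pops one 0
        ("q", None, "Z"): {("f", "Z")},   # all 0's matched: epsilon-move to final state f
    }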

State transition diagram: A PDA can also be depicted by a state transition diagram. The labels on the arcs indicate both the input and the stack operation. A transition (p, γ) ∈ δ(q, a, z) is depicted by an arc from state q to state p labelled a, z/γ (read a, pop z, push γ).

Final states are indicated by double circles and the start state is indicated by an arrow to it from nowhere.

Configuration or Instantaneous Description (ID) :

A configuration or an instantaneous description (ID) of the PDA at any moment during its computation is an element of Q × Σ* × Γ* describing the current state, the portion of the input remaining to be read (i.e. under and to the right of the read head), and the current stack contents. Only these three elements can affect the computation from that point on and, hence, are parts of the ID.

The start or initial configuration (or ID) on input w is (q0, w, z0). That is, the PDA always starts in its start state q0, with its read head pointing to the leftmost input symbol and the stack containing only the start/initial stack symbol z0.

The "next move relation" one figure describes how the PDA can move from one configuration to another in one step

Formally,


iff
'a' may be or an input symbol.

Let I, J, K be IDs of a PDA. We define we write IK, if ID I can become K after exactly i moves. The relations and define as follows

I K

I J if such that I K and KJ

I J if such that I J.

That is, is the reflexive, transitive closure of . We say that I J if the ID J follows from the ID I in zero or more moves.

( Note : subscript M can be dropped when the particular PDA M is understood. )

Language accepted by a PDA M

There are two alternative definitions of acceptance, as given below.

1. Acceptance by final state:

Consider the PDA M = (Q, Σ, Γ, δ, q0, z0, F). Informally, the PDA M is said to accept its input w by final state if it enters any final state in zero or more moves after reading its entire input, starting in the start configuration on input w.

Formally, we define L(M), the language accepted by final state, to be

{ w ∈ Σ* | (q0, w, z0) ⊢* (p, ε, γ) for some p ∈ F and γ ∈ Γ* }

2. Acceptance by empty stack (or Null stack): The PDA M accepts its input w by empty stack if, starting in the start configuration on input w, it ever empties the stack without pushing anything back on after reading the entire input. Formally, we define N(M), the language accepted by empty stack, to be

{ w ∈ Σ* | (q0, w, z0) ⊢* (p, ε, ε) for some p ∈ Q }

Note that the set of final states F is irrelevant in this case, and we usually let F be the empty set, i.e. F = Ø.
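To make the two acceptance criteria concrete, the short Python sketch below (an added illustration, not from the notes) runs a PDA, stored in the dictionary encoding used in the earlier sketch, over all reachable IDs and tests both conditions. The PDA used in the demonstration is the hypothetical one for {0^n 1^n | n ≥ 1}; all names are choices made here, not notation from the text.

    # Sketch only: simulating a PDA on an input string and testing both acceptance
    # criteria.  An instantaneous description (ID) is a triple
    # (state, remaining_input, stack), with the stack written top-first.
    from collections import deque

    def step(delta, ident):
        """All IDs reachable from `ident` in exactly one move (the |- relation)."""
        state, inp, stack = ident
        nxt = set()
        if stack:
            top, rest = stack[0], stack[1:]
            if inp:                                              # move reading one input symbol
                for p, push in delta.get((state, inp[0], top), ()):
                    nxt.add((p, inp[1:], push + rest))
            for p, push in delta.get((state, None, top), ()):    # epsilon-moves
                nxt.add((p, inp, push + rest))
        return nxt

    def reachable(delta, start_id, limit=10000):
        """All IDs reachable in zero or more moves (the |-* relation), breadth-first."""
        seen, queue = {start_id}, deque([start_id])
        while queue and len(seen) < limit:
            for nid in step(delta, queue.popleft()):
                if nid not in seen:
                    seen.add(nid)
                    queue.append(nid)
        return seen

    def accepts_by_final_state(delta, q0, z0, finals, w):
        return any(q in finals and inp == "" for q, inp, _ in reachable(delta, (q0, w, z0)))

    def accepts_by_empty_stack(delta, q0, z0, w):
        # would be used with a PDA designed to accept by empty stack
        return any(inp == "" and stack == "" for _, inp, stack in reachable(delta, (q0, w, z0)))

    # The hypothetical {0^n 1^n | n >= 1} PDA from the earlier sketch:
    delta = {
        ("p", "0", "Z"): {("p", "0Z")}, ("p", "0", "0"): {("p", "00")},
        ("p", "1", "0"): {("q", "")},   ("q", "1", "0"): {("q", "")},
        ("q", None, "Z"): {("f", "Z")},
    }
    print(accepts_by_final_state(delta, "p", "Z", {"f"}, "0011"))   # True
    print(accepts_by_final_state(delta, "p", "Z", {"f"}, "0010"))   # False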

Example 1: Here is a PDA M that accepts the language { a^n b^n | n ≥ 1 } by final state.

M = ({q1, q2, q3, qf}, {a, b}, {a, z}, δ, q1, z, {qf}), where δ consists of the following transitions:

(1) δ(q1, a, z) = {(q2, az)}

(2) δ(q2, a, a) = {(q2, aa)}

(3) δ(q2, b, a) = {(q3, ε)}

(4) δ(q3, b, a) = {(q3, ε)}

(5) δ(q3, ε, z) = {(qf, z)}

Informally, whenever the PDA M sees an input a in the start state q1 with the start symbol z on the top of the stack, it pushes a onto the stack and changes state to q2 (to remember that it has seen the first a). In state q2, if it sees any more a's, it simply pushes them onto the stack. Note that when M is in state q2, the symbol on the top of the stack can only be a. In state q2, if it sees the first b with a on the top of the stack, then it needs to start comparing the number of a's and b's, since all the a's at the beginning of the input have already been pushed onto the stack. It starts this process by popping the a off the top of the stack and enters state q3 (to remember that the comparison process has begun). In state q3, it expects only b's in the input (if it sees any more a's, the input is not of the form a^n b^n); hence there is no move defined on input a in state q3. In state q3 it pops an a off the top of the stack for every b in the input. When it reads the last b in state q3 (i.e. when the input is exhausted), the last a is popped off the stack and the start symbol z is exposed. This is the only situation in which the PDA M can move (on ε-input) to state qf, which is an accept state.

We can show the computation of the PDA on a given input using the IDs and the next move relation. For example, the following are the computations on two input strings.

i) Let the input be aabb. We start with the start configuration and proceed to the subsequent IDs using the transition function defined above:

(q1, aabb, z) ⊢ (q2, abb, az)   (using transition 1)

⊢ (q2, bb, aaz)   (using transition 2)

⊢ (q3, b, az)   (using transition 3)

⊢ (q3, ε, z)   (using transition 4)

⊢ (qf, ε, z)   (using transition 5)

qf is a final state. Hence, accept. So the string aabb is rightly accepted by M.


ii) Let the input be aabab.

(q1, aabab, z) ⊢ (q2, abab, az)   (using transition 1)

⊢ (q2, bab, aaz)   (using transition 2)

⊢ (q3, ab, az)   (using transition 3)

No further move is defined at this point, since there is no transition for input a in state q3. Hence the PDA gets stuck and the string aabab is not accepted.

Example 2: We give an example of a PDA M that accepts the set of balanced strings of brackets [ ] by empty stack. The PDA M is given below:

M = ({q}, {[, ]}, {z, [}, δ, q, z, Ø), where δ is defined as follows:

(1) δ(q, [, z) = {(q, [z)}

(2) δ(q, [, [) = {(q, [[)}

(3) δ(q, ], [) = {(q, ε)}

(4) δ(q, ε, z) = {(q, ε)}

Informally, whenever it sees a [, it pushes it onto the stack (the first two transitions), and whenever it sees a ] with [ on the top of the stack, it pops the [ off the stack (the third transition). The fourth transition is used when the input is exhausted, in order to pop z off the stack (to empty the stack) and accept. Note that there is only one state and no final state.

The following is a sequence of configurations leading to the acceptance of the string [ [ ] [ ] ] [ ]:

(q, [[][]][], z) ⊢ (q, [][]][], [z)

⊢ (q, ][]][], [[z)

⊢ (q, []][], [z)

⊢ (q, ]][], [[z)

⊢ (q, ][], [z)

⊢ (q, [], z)

⊢ (q, ], [z)

⊢ (q, ε, z)

⊢ (q, ε, ε)

Equivalence of acceptance by final state and empty stack.

It turns out that the two definitions of acceptance of a language by a PDA - acceptance by final state and acceptance by empty stack - are equivalent, in the sense that if a language can be accepted by empty stack by some PDA, it can also be accepted by final state by some other PDA, and vice versa. Hence it doesn't matter which one we use, since each kind of machine can simulate the other. Given any arbitrary PDA M that accepts the language L by final state or empty stack, we can always construct an equivalent PDA M' with a single final state that accepts exactly the same language L. The construction of M' from M and the proof of the equivalence of M and M' are given below.

There are two cases to be considered.

CASE I: PDA M accepts by final state. Let M = (Q, Σ, Γ, δ, q0, z0, F) and let qf be a new state not in Q. Consider the PDA M' = (Q U {qf}, Σ, Γ, δ', q0, z0, {qf}), where δ' contains all the transitions of δ, as well as the following transitions:

δ'(q, ε, z) contains (qf, z) for every q ∈ F and z ∈ Γ.

It is easy to show that M and M' are equivalent, i.e. L(M) = L(M').

Let w ∈ L(M). Then (q0, w, z0) ⊢* (q, ε, γ) in M, for some q ∈ F and γ ∈ Γ*.

Then, in M', (q0, w, z0) ⊢* (q, ε, γ) ⊢ (qf, ε, γ). Thus M' accepts w.

Conversely, let M' accept w, i.e. w ∈ L(M'). Then, in M', (q0, w, z0) ⊢* (q, ε, γ) ⊢ (qf, ε, γ) for some q ∈ F, since M' inherits all other moves except the last one from M. Hence (q0, w, z0) ⊢* (q, ε, γ) in M, for some q ∈ F and γ ∈ Γ*.

Thus M accepts w. Informally, on any input M' simulates all the moves of M and enters its own final state qf whenever M enters any one of its final states in F. Thus M' accepts a string iff M accepts it.

CASE II: PDA M accepts by empty stack.

We will construct M' from M in such a way that M' simulates M and detects when M empties its stack. M' enters its final state when and only when M empties its stack. Thus M' will accept a string iff M accepts it.

Let M' = (Q U {q0', qf}, Σ, Γ U {X}, δ', q0', X, {qf}), where q0' and qf are new states not in Q, X is a new stack symbol not in Γ, and δ' contains all the transitions of δ, as well as the following two transitions:

(1) δ'(q0', ε, X) = {(q0, z0X)}, and

(2) δ'(q, ε, X) = {(qf, ε)} for every q ∈ Q.

Transition 1 causes M' to enter the initial configuration of M, except that M' will have its own bottom-of-stack marker X, which is below the symbols of M's stack. From this point onward M' will simulate every move of M, since all the transitions of M are also in δ'.

If M ever empties its stack, then M', when simulating M, will empty its stack except for the symbol X at the bottom. At this point, M' will enter its final state by using transition rule 2, thereby (correctly) accepting the input. We will prove that M and M' are equivalent.

Let M accept w. Then

(q0, w, z0) ⊢* (q, ε, ε) in M, for some q ∈ Q. But then, in M',

(q0', w, X) ⊢ (q0, w, z0X)   (by transition rule 1)

⊢* (q, ε, X)   (since δ' includes all the moves of M)

⊢ (qf, ε, ε)   (by transition rule 2)

Hence, M' also accepts w.

Conversely, let M' accept w.

Then, in M', (q0', w, X) ⊢ (q0, w, z0X) ⊢* (q, ε, X) ⊢ (qf, ε, ε) for some q ∈ Q.

Every move in the sequence (q0, w, z0X) ⊢* (q, ε, X) was taken from M.

Hence, M, starting with its initial configuration, will eventually empty its stack and accept the input, i.e. (q0, w, z0) ⊢* (q, ε, ε) in M.
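The Case II construction is mechanical enough to be written as a few lines of code. The sketch below is an added illustration that reuses the dictionary encoding of δ from the earlier sketches; the function name and the symbols q0', qf' and X are assumptions made here, not notation from the text.

    # Sketch only: build M' (acceptance by final state) from M (acceptance by
    # empty stack), following Case II.  delta maps (state, input_or_None, stack_top)
    # to a set of (new_state, pushed_string); symbols are one-character strings.
    def empty_stack_to_final_state(delta, states, q0, z0):
        # Assumes the names q0', qf' and X are not already used by M.
        new_q0, new_qf, X = "q0'", "qf'", "X"
        new_delta = {k: set(v) for k, v in delta.items()}
        # Rule 1: enter M's start configuration with X kept underneath M's stack.
        new_delta[(new_q0, None, X)] = {(q0, z0 + X)}
        # Rule 2: whenever M would have an empty stack, only X is left -- accept.
        for q in states:
            new_delta.setdefault((q, None, X), set()).add((new_qf, ""))
        return new_delta, new_q0, X, {new_qf}

Feeding the resulting machine to the accepts_by_final_state sketch given earlier should then accept exactly the strings that the original machine accepts by empty stack.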

Equivalence of PDA’s and CFG’s

Given a language L that is accepted by a certain PDA, there exists a CFG that generates exactly L.

Procedure:

1. Start with any PDA

2. Put the PDA into a standardized form, known as conversion form.

3. The purpose of putting a PDA in conversion form is that, since the PDA now has a standardized form, we can easily convert the pictorial representation of the PDA into a table. This table will be known as a summary table. Number the rows in the summary table.

• The summary table and the pictorial representation of the PDA will contain exactly the same amount of information.

In other words, if you are only given a summary table, you could draw the PDA from it.

• The correspondence between the pictorial representation of the PDA and the summary table is similar to the correspondence between a drawing of a finite automaton and a tabular representation of the FA.

4. Processing and accepting a string on the PDA will correspond to a particular sequence of rows from the summary table. But not every possible sequence of rows from the summary table will correspond to a processing of a string on the PDA. So we will come up with a way of determining if a particular sequence of rows from the summary table corresponds to a valid processing of a string on the PDA.