Chapter 5

Pushdown Automata and Context-Free Languages

5.1 Pushdown Automata

Definition 1.1 A pushdown automaton is a six-tuple (Q, Σ, Γ, δ, q0, F), where Q is a finite set of states, Σ a finite set called the input alphabet, Γ a finite set called the stack alphabet, q0 ∈ Q the start state, F ⊆ Q a set of final states, and δ a transition function from Q × (Σ ∪ {λ}) × (Γ ∪ {λ}) to subsets of Q × (Γ ∪ {λ}).

Example 1.1

The language L = {a^i : i ≥ 0} ∪ {a^i b^i : i ≥ 0} contains the strings consisting solely of a's or of an equal number of a's and b's. The stack of the PDA M that accepts L maintains a record of the number of a's processed until a b is encountered or the input string is completely processed.

Figure 20:

When scanning an a in state q0, there are two transitions that are applicable. A string of the form a^i b^i, i > 0, is accepted by a computation that remains in states q0 and q1. If a transition to state q2 follows the processing of the final a in a string a^i, the stack is emptied and the input is accepted. Reaching q2 in any other manner results in an unsuccessful computation, since no input is processed after q2 is entered.

The λ-transition from q0 allows the machine to enter q2 after the entire input string has been read, since an input symbol is not required to process a λ-transition. The transition

[q2, λ] ∈ δ(q0, λ, λ),

which is applicable whenever the machine is in state q0, introduces nondeterministic computations of M.
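The nondeterministic computations described above can be simulated mechanically. The following is a minimal Python sketch, not from the text: the function name `accepts` and the encoding of configurations and transitions are our own assumptions. A transition maps (state, input symbol or λ, popped symbol or λ) to a set of (new state, pushed symbol or λ), with λ written as the empty string, and acceptance requires a final state, exhausted input, and an empty stack.

```python
def accepts(delta, finals, w, start="q0"):
    """Nondeterministic PDA run by breadth-first search over configurations
    (state, input position, stack).  The stack is a string whose last
    character is the top.  Accept: final state, no input, empty stack."""
    seen = set()
    frontier = {(start, 0, "")}
    while frontier:
        nxt = set()
        for state, pos, stack in frontier:
            if pos == len(w) and stack == "" and state in finals:
                return True
            top = stack[-1] if stack else ""
            reads = [("", 0)] + ([(w[pos], 1)] if pos < len(w) else [])
            for sym, adv in reads:                       # lambda-move or read
                for pop in ({"", top} if top else {""}):  # keep or pop the top
                    for nstate, push in delta.get((state, sym, pop), ()):
                        cfg = (nstate, pos + adv,
                               (stack[:-1] if pop else stack) + push)
                        if cfg not in seen:
                            seen.add(cfg)
                            nxt.add(cfg)
        frontier = nxt
    return False

# Hypothetical encoding of the machine sketched above (states q0, q1, q2).
DELTA = {
    ("q0", "a", ""):  {("q0", "A")},   # record each a on the stack
    ("q0", "b", "A"): {("q1", "")},    # first b: pop one A
    ("q1", "b", "A"): {("q1", "")},    # each later b pops one A
    ("q0", "", ""):   {("q2", "")},    # lambda-move: guess input is a's only
    ("q2", "", "A"):  {("q2", "")},    # empty the stack without reading input
}
FINALS = {"q1", "q2"}
```

The λ-move into q2 is what makes the machine nondeterministic: the search simply explores both the "keep pushing a's" and the "input is finished" branches.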

Example 1.2

The even-length palindromes over {a, b} are accepted by the PDA

Figure 21: PDA

That is, L(M) = {ww^R : w ∈ {a, b}*}. A successful computation remains in state q0 while processing the string w and enters state q1 upon reading the first symbol in w^R.
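The guess-the-midpoint behavior of this PDA can be sketched directly as a recursive nondeterministic search. This is a hypothetical implementation, not from the text; the function name and phase encoding are ours.

```python
def accepts_even_palindrome(w):
    """Phase 0 mimics the pushing state (push each symbol read); a
    lambda-move guesses the midpoint; phase 1 mimics the popping state
    (each symbol read must match, and removes, the stack top).
    Acceptance: input consumed and stack empty."""
    def run(pos, stack, phase):
        if phase == 0:
            if run(pos, stack, 1):                 # guess the midpoint here
                return True
            if pos < len(w):                       # or keep pushing
                return run(pos + 1, stack + w[pos], 0)
            return False
        if pos == len(w):
            return stack == ""
        if stack and stack[-1] == w[pos]:          # top must match the input
            return run(pos + 1, stack[:-1], 1)
        return False
    return run(0, "", 0)
```

Because the stack is popped in reverse order of pushing, the second half must be the mirror image of the first, which is exactly the palindrome condition.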

5.2 Variations on the PDA Theme

Pushdown automata are often defined in a manner that differs slightly from Definition 1.1. In this section we examine several alterations to our definition that preserve the set of accepted languages.

Along with changing the state, a transition in a PDA is accompanied by three actions: popping the stack, pushing a stack element, and processing an input symbol. A PDA is called atomic if each transition causes only one of the three actions to occur. Transitions in an atomic PDA have one of the forms

[qj, λ] ∈ δ(qi, a, λ), [qj, λ] ∈ δ(qi, λ, A), or [qj, B] ∈ δ(qi, λ, λ).

Theorem 2.1 shows that the languages accepted by atomic PDAs are the same as those accepted by PDAs. Moreover, it outlines a method to construct an equivalent atomic PDA from an arbitrary PDA.

Theorem 2.1

Let M be a PDA. Then there is an atomic PDA M′ with L(M′) = L(M).

Proof: To construct M′, the nonatomic transitions of M are replaced by sequences of atomic transitions. Let [qj, B] ∈ δ(qi, a, A) be a transition of M. The atomic equivalent requires two new states, p1 and p2, and the transitions

[p1, λ] ∈ δ(qi, a, λ), [p2, λ] ∈ δ(p1, λ, A), [qj, B] ∈ δ(p2, λ, λ).

In a similar manner, a transition that consists of changing the state and performing two additional actions can be replaced with a sequence of two atomic transitions. Removing all nonatomic transitions produces an equivalent atomic PDA.
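The replacement used in the proof can be made concrete. The sketch below is ours, not the text's: the helper name `atomize` and the transition format `(state, read, pop, new state, push)` (with λ written as the empty string) are assumptions. It splits a transition that reads, pops, and pushes into three atomic steps through two fresh states.

```python
import itertools

_counter = itertools.count()

def atomize(q, a, A, p, B):
    """Replace the single transition 'in state q read a, pop A, push B,
    enter p' with three atomic transitions through new states r1, r2."""
    r1 = f"{q}.{next(_counter)}"
    r2 = f"{q}.{next(_counter)}"
    return [
        (q,  a,  "", r1, ""),   # atomic: read the input symbol only
        (r1, "", A,  r2, ""),   # atomic: pop the stack only
        (r2, "", "", p,  B),    # atomic: push the new stack symbol only
    ]
```

A transition in which one of the three components is already λ would need only two steps, as the proof notes; the three-step version above handles the fully nonatomic case.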

An extended transition is an operation on a PDA that replaces the stack top with a string of symbols, rather than just a single symbol. The transition [qj, BCD] ∈ δ(qi, a, A) replaces the stack top A with the string BCD, with B becoming the new stack top. The apparent generalization does not increase the set of languages accepted by pushdown automata. A PDA containing extended transitions is called an extended PDA. Each extended PDA can be converted into an equivalent PDA in the sense of Definition 1.1.

To construct a PDA M′ from an extended PDA M, extended transitions are converted to sequences of transitions, each of which pushes a single stack element. Achieving the result of an extended transition that pushes n elements requires n − 1 additional states to push the elements in the correct order. The sequence of transitions

[p1, D] ∈ δ(qi, a, A), [p2, C] ∈ δ(p1, λ, λ), [qj, B] ∈ δ(p2, λ, λ)

replaces the stack top A with the string BCD and leaves the machine in state qj. This produces the same result as the single extended transition [qj, BCD] ∈ δ(qi, a, A).
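Under the same hypothetical transition encoding used earlier (`(state, read, pop, new state, push)`, λ as the empty string), the expansion can be sketched as follows; the function `expand` and the intermediate state names are our own. Pushing the string back to front leaves its first symbol on top.

```python
import itertools

_counter = itertools.count()

def expand(q, a, A, p, w):
    """Turn the extended transition 'read a, pop A, push the string w'
    into a chain of transitions that each push a single symbol."""
    if len(w) <= 1:
        return [(q, a, A, p, w)]       # already pushes at most one symbol
    states = [q] + [f"r{next(_counter)}" for _ in range(len(w) - 1)] + [p]
    symbols = w[::-1]                  # push the last symbol of w first
    steps = [(states[0], a, A, states[1], symbols[0])]
    for i in range(1, len(symbols)):
        steps.append((states[i], "", "", states[i + 1], symbols[i]))
    return steps
```

Replaying the chain against a stack confirms that it removes A and leaves the pushed string with its first symbol on top, as the single extended transition would.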

5.3 Pushdown Automata and Context-Free Languages

Theorem 3.1 Let L be a context-free language. Then there is a PDA that accepts L.

Proof: Let G = (V, Σ, P, S) be a grammar in Greibach normal form that generates L. An extended PDA M with start state q0 is defined by

M = ({q0, q1}, Σ, V − {S}, δ, q0, {q1})

with transitions

[q1, w] ∈ δ(q0, a, λ) whenever S → aw ∈ P,
[q1, w] ∈ δ(q1, a, A) whenever A → aw ∈ P and A ≠ S,
[q1, λ] ∈ δ(q0, λ, λ) whenever S → λ ∈ P.

We first show that L(G) ⊆ L(M). Let S ⇒ⁿ uα be a leftmost derivation with u ∈ Σ⁺ and α ∈ (V − {S})*.

We will prove that there is a computation

[q0, u, λ] |-* [q1, λ, α]

in M. The proof is by induction on the length of the derivation and utilizes the correspondence between derivations in G and computations of M. The basis consists of derivations S ⇒ aα of length one. The transition [q1, α] ∈ δ(q0, a, λ) generated by the rule S → aα yields the desired computation. Assume that for every string uα generated by a derivation of length n there is a computation

[q0, u, λ] |-* [q1, λ, α]

in M.

Now let S ⇒ⁿ⁺¹ uα be a derivation of length n + 1 with u = va. This derivation can be written

S ⇒ⁿ vAβ ⇒ vaγβ,

where α = γβ and A → aγ is a rule in P. The inductive hypothesis and the transition [q1, γ] ∈ δ(q1, a, A) combine to produce the computation

[q0, va, λ] |-* [q1, a, Aβ] |- [q1, λ, γβ].

For every string u in L(G) of positive length, the acceptance of u is exhibited by the computation in M corresponding to the derivation S ⇒* u. If λ ∈ L(G), then S → λ is a rule of G and the computation [q0, λ, λ] |- [q1, λ, λ] accepts the null string. The opposite inclusion, L(M) ⊆ L(G), is established by showing that for every computation

[q0, u, λ] |-* [q1, λ, α]

there is a corresponding derivation S ⇒* uα in G.
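The correspondence in the proof, in which the stack holds exactly the variables of the current sentential form, can be sketched in Python. The function `gnf_accepts`, the rule encoding (`variable -> list of (terminal, string of variables)` pairs), and the sample grammar are all our own assumptions, not constructions from the text.

```python
def gnf_accepts(rules, start, w):
    """A Greibach-normal-form rule A -> a B1...Bk acts as the transition
    'read a, pop A, push B1...Bk'; the input is accepted when both the
    input and the stack of variables are empty."""
    def run(pos, stack):
        if pos == len(w):
            return not stack               # input and stack both exhausted
        if not stack:
            return False                   # input remains but no variables
        head, rest = stack[0], stack[1:]   # stack top is the list head
        for terminal, variables in rules.get(head, ()):
            if terminal == w[pos] and run(pos + 1, list(variables) + rest):
                return True
        return False
    return run(0, [start])

# Example grammar (ours) in Greibach normal form for {a^n b^n : n >= 1}:
#   S -> aSB | aB,   B -> b
RULES = {"S": [("a", "SB"), ("a", "B")], "B": [("b", "")]}
```

Each accepted string corresponds step for step to a leftmost derivation in the grammar, which is the content of the inclusion L(G) ⊆ L(M).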

Theorem 3.2 Let M be a PDA. Then there is a context-free grammar G such that L(G) = L(M).

5.4 The Pumping Lemma for Context-Free Languages

Lemma 4.1 Let G = (V, Σ, P, S) be a context-free grammar in Chomsky normal form and S ⇒* w a derivation of w ∈ L(G) with derivation tree T. If the depth of T is n, then length(w) ≤ 2^(n−1).

Corollary 4.1 Let G = (V, Σ, P, S) be a context-free grammar in Chomsky normal form and S ⇒* w a derivation of w ∈ L(G). If length(w) > 2^(n−1), then the derivation tree of w has depth at least n + 1.

Theorem 4.1 (Pumping Lemma for Context-Free Languages)

Let L be a context-free language. There is a number k, depending on L, such that any string z ∈ L with length(z) ≥ k can be written as z = uvwxy where

  i. length(vwx) ≤ k
  ii. length(v) + length(x) > 0
  iii. uvⁱwxⁱy ∈ L, for i ≥ 0.

Proof: Let G = (V, Σ, P, S) be a Chomsky normal form grammar that generates L and let k = 2^card(V). We show that all strings in L with length k or greater can be decomposed to satisfy the conditions of the pumping lemma. Let z be such a string and S ⇒* z a derivation in G. By Corollary 4.1, there is a path of length at least card(V) + 1 in the derivation tree of z. Let p be a path of maximal length from the root to a leaf of the derivation tree. Then p must contain at least card(V) + 2 nodes, all of which are labeled by variables except the leaf node, which is labeled by a terminal symbol. The pigeonhole principle guarantees that some variable A must occur twice among the final card(V) + 1 variables labeling nodes of this path.

The derivation tree can be divided into subtrees where the nodes labeled by the variable A indicated in the diagram are the final two occurrences of A in the path p.

Figure 22: Pumping Lemma for CFL

The derivation of z = uvwxy consists of the subderivations

1. S ⇒* r1Ar2

2. r1 ⇒* u

3. A ⇒* vAx

4. A ⇒* w

5. r2 ⇒* y.

Subderivation 3 may be omitted or be repeated any number of times before applying subderivation 4. The resulting derivations generate the strings uvⁱwxⁱy, i ≥ 0.

We now show that conditions (ii) and (iii) in the pumping lemma are satisfied by this decomposition. The subderivation A ⇒* vAx must begin with a rule of the form A → BC. The second occurrence of the variable A is derived from either B or C. If it is derived from B, the derivation can be written

A ⇒ BC ⇒* vAx1C ⇒* vAx1x2,

where x = x1x2. The string x2 is nonnull since it is obtained by a derivation from a variable in a Chomsky normal form grammar that is not the start symbol of the grammar. It follows that x is also nonnull. If the second occurrence of A is derived from the variable C, a similar argument shows that v must be nonnull.

The subpath of p from the first of these two occurrences of A to the leaf has length at most card(V) + 1. Since this is the longest path in the subtree with root at that occurrence of A, the derivation tree generated by the derivation A ⇒* vAx ⇒* vwx has depth at most card(V) + 1. By Lemma 4.1, the string vwx obtained from this derivation has length 2^card(V) = k or less, establishing condition (i).

Example 4.1

The language L = {aⁱbⁱcⁱ : i ≥ 0} is not context-free.

Proof: Assume L is context-free. By Theorem 4.1, the string z = aᵏbᵏcᵏ, where k is the number specified by the pumping lemma, can be decomposed into substrings uvwxy that satisfy the repetition properties. Consider the possibilities for the substrings v and x. If either of these contains more than one type of terminal symbol, then uv²wx²y contains a b preceding an a or a c preceding a b. In either case, the resulting string is not in L.

By the previous observation, v and x must be substrings of one of aᵏ, bᵏ, or cᵏ. Since at most one of the strings v and x is null, uv²wx²y increases the number of at least one, maybe two, but not all three types of terminal symbols. This implies that uv²wx²y ∉ L. Thus there is no decomposition of z satisfying the conditions of the pumping lemma; consequently, L is not context-free.
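The exhaustive case analysis above can be checked mechanically for a small illustrative value of k; the code and the choice k = 6 are our own (not the actual pumping number), but the check covers every decomposition the lemma permits.

```python
def pumped(z, k, i):
    """All strings u v^i w x^i y over decompositions z = uvwxy with
    len(vwx) <= k and v, x not both null."""
    out = set()
    for s in range(len(z)):                          # vwx starts at index s
        for total in range(1, min(k, len(z) - s) + 1):
            for lv in range(total + 1):
                for lw in range(total - lv + 1):
                    lx = total - lv - lw
                    if lv + lx == 0:
                        continue                     # v and x both null
                    u, v = z[:s], z[s:s + lv]
                    w, x = z[s + lv:s + lv + lw], z[s + lv + lw:s + total]
                    y = z[s + total:]
                    out.add(u + v * i + w + x * i + y)
    return out

def in_language(s):
    """Membership in {a^i b^i c^i : i >= 0}."""
    n = len(s) // 3
    return len(s) % 3 == 0 and s == "a" * n + "b" * n + "c" * n

k = 6                                # illustrative value for the check
z = "a" * k + "b" * k + "c" * k
```

For this z, pumping with i = 0 or i = 2 produces a string outside the language for every admissible decomposition, which is exactly the contradiction derived in the proof.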

5.5 Closure Properties of Context-Free Languages

Theorem 5.1 The set of context-free languages is closed under the operations union, concatenation, and Kleene star.

Proof: Let L1 and L2 be two context-free languages generated by G1 = (V1, Σ1, P1, S1) and G2 = (V2, Σ2, P2, S2), respectively. The sets V1 and V2 of variables are assumed to be disjoint. Since we may rename variables, this assumption imposes no restriction on the grammars.

A context-free grammar G will be constructed from G1 and G2 that establishes the desired closure property.

  1. Union: Define G = (V1 ∪ V2 ∪ {S}, Σ1 ∪ Σ2, P1 ∪ P2 ∪ {S → S1 | S2}, S). A string w is in L(G) if, and only if, there is a derivation S ⇒ Si ⇒* w for i = 1 or 2. Thus w is in L(G1) or L(G2). On the other hand, any derivation Si ⇒* w can be initialized with the rule S → Si to generate w in G.
  2. Concatenation: Define G = (V1 ∪ V2 ∪ {S}, Σ1 ∪ Σ2, P1 ∪ P2 ∪ {S → S1S2}, S). The start symbol S initiates derivations in both G1 and G2. A leftmost derivation of a terminal string in G has the form S ⇒ S1S2 ⇒* uS2 ⇒* uv, where u ∈ L(G1) and v ∈ L(G2). The derivation of u uses only rules from P1 and that of v only rules from P2. Hence L(G) ⊆ L(G1)L(G2). The opposite inclusion is established by observing that every string w in L(G1)L(G2) can be written w = uv with u ∈ L(G1) and v ∈ L(G2). The derivations S1 ⇒* u and S2 ⇒* v along with the rule S → S1S2 of G generate w in G.
  3. Kleene Star: Define G = (V1 ∪ {S}, Σ1, P1 ∪ {S → S1S | λ}, S). The rule S → S1S of G generates any number of copies of S1. Each of these, in turn, initiates the derivation of a string in L(G1). The concatenation of any number of strings from L(G1) yields L(G1)*; hence L(G) = L(G1)*.
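These three constructions operate only on the rule sets and can be sketched directly. The representation is a hypothetical one of our own: a grammar's rules form a dict mapping each variable to a list of right-hand sides, each a list of symbols, with λ as the empty list; variable sets are assumed disjoint, as in the proof.

```python
def union(p1, s1, p2, s2, start="S"):
    """Rules of G1 and G2 plus the new start rule S -> S1 | S2."""
    g = {**p1, **p2}
    g[start] = [[s1], [s2]]
    return g

def concat(p1, s1, p2, s2, start="S"):
    """Rules of G1 and G2 plus the new start rule S -> S1 S2."""
    g = {**p1, **p2}
    g[start] = [[s1, s2]]
    return g

def star(p1, s1, start="S"):
    """Rules of G1 plus the new start rule S -> S1 S | lambda."""
    g = dict(p1)
    g[start] = [[s1, start], []]
    return g
```

Each function merely adds the one new start rule required by the corresponding case of the proof, leaving the original rules untouched.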

Theorem 5.2

The set of context-free languages is not closed under intersection or complementation.

Proof:

  i. Intersection: Let L1 = {aⁱbⁱc^j : i, j ≥ 0} and L2 = {a^j bⁱcⁱ : i, j ≥ 0}. L1 and L2 are both context-free, since they are generated by the grammars G1 and G2 with rules

G1: S1 → S1c | A, A → aAb | λ        G2: S2 → aS2 | B, B → bBc | λ,

respectively. The intersection of L1 and L2 is the set {aⁱbⁱcⁱ : i ≥ 0}, which, by Example 4.1, is not context-free.

  ii. Complementation: Let L1 and L2 be any two context-free languages, and write ~L for the complement of a language L. If the context-free languages are closed under complementation, then, by Theorem 5.1, the language ~L1 ∪ ~L2 is context-free, and so is its complement ~(~L1 ∪ ~L2). By DeMorgan's law, L1 ∩ L2 = ~(~L1 ∪ ~L2). This implies that the context-free languages are closed under intersection, contradicting the result of part (i).

5.6 A Two-Stack Automaton

Finite automata accept regular languages. Pushdown automata accept context-free languages. A natural question is whether adding a second stack increases the set of languages that can be accepted; the example below shows that it does, since the resulting machine accepts a language that is not context-free.

Definition 6.1 A two-stack PDA is a structure (Q, Σ, Γ, δ, q0, F), where Q is a finite set of states, Σ a finite set called the input alphabet, Γ a finite set called the stack alphabet, q0 ∈ Q the start state, F ⊆ Q a set of final states, and δ a transition function from Q × (Σ ∪ {λ}) × (Γ ∪ {λ}) × (Γ ∪ {λ}) to subsets of Q × (Γ ∪ {λ}) × (Γ ∪ {λ}).

Example 6.1

The two-stack PDA M defined below accepts the language {aⁱbⁱcⁱ : i ≥ 0}. The first stack is used to match the a's and b's and the second the b's and c's.

M = ({q0, q1, q2}, {a, b, c}, {A, B}, δ, q0, {q0, q2})

δ(q0, a, λ, λ) = {[q0, A, λ]}
δ(q0, b, A, λ) = {[q1, λ, B]}
δ(q1, b, A, λ) = {[q1, λ, B]}
δ(q1, c, λ, B) = {[q2, λ, λ]}
δ(q2, c, λ, B) = {[q2, λ, λ]}

The computation that accepts aabbcc,

[q0, aabbcc, λ, λ]

|- [q0, abbcc, A, λ]

|- [q0, bbcc, AA, λ]

|- [q1, bcc, A, B]

|- [q1, cc, λ, BB]

|- [q2, c, λ, B]

|- [q2, λ, λ, λ],

illustrates the interplay between the two stacks.
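A deterministic simulation of this kind of two-stack machine makes the interplay concrete. The function name `two_stack_accepts` and the state and stack encodings below are our own sketch, not the text's definition.

```python
def two_stack_accepts(w):
    """States: 0 while reading a's, 1 while reading b's, 2 while reading
    c's.  Stack 1 matches a's against b's; stack 2 matches b's against
    c's.  Accept with both stacks empty, in state 0 (the null string) or
    state 2 (a^i b^i c^i with i > 0)."""
    stack1, stack2, state = [], [], 0
    for ch in w:
        if state == 0 and ch == "a":
            stack1.append("A")             # record an a on the first stack
        elif state in (0, 1) and ch == "b" and stack1:
            stack1.pop()                   # match the b against an a ...
            stack2.append("B")             # ... and record it for the c's
            state = 1
        elif state in (1, 2) and ch == "c" and stack2:
            stack2.pop()                   # match the c against a b
            state = 2
        else:
            return False                   # symbol out of order or unmatched
    return not stack1 and not stack2 and state in (0, 2)
```

The first stack shrinks while the second grows during the b's, which is exactly the handoff visible in the configuration sequence above.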