Parsing Tutorial Notes Study Material with Examples in Handbook Series
What is Parsing
Parsing is a technique to construct a parse (derivation) tree for a string, or to check whether the string has some leftmost or rightmost derivation in the grammar. Parsing may be classified as top-down or bottom-up.
Top-down Parsing
In this parsing, the start symbol is the root, the terminals are the leaves (input symbols), and the other nodes are variables. We start from the root and, replacing the intermediate nodes one by one from left to right, reach the leaves. This approach is also known as recursive-descent parsing or predictive parsing.
An LL parser is a top-down parser for a subset of the context-free grammars. It parses the input from left to right and constructs a leftmost derivation of the sentence. An LL parser is called an LL(k) parser if it constructs a leftmost derivation by looking k symbols ahead.
An LL parser consists of
 An input buffer, holding the input string.
 A stack on which to store the terminals and nonterminals from the grammar yet to be parsed.
 A parsing table which tells it what grammar rule to apply given the symbols on top of its stack and the next input token.
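The three components above can be sketched in a few lines of Python. The grammar, table entries, and function name below are hypothetical, chosen only to illustrate the idea: a table-driven LL(1) recognizer for the balanced-parenthesis grammar S → ( S ) S | ε.

```python
# Minimal table-driven LL(1) parser sketch (hypothetical example grammar):
#   S -> ( S ) S | ε
# The parsing table maps (top-of-stack nonterminal, lookahead) to a production body.
TABLE = {
    ("S", "("): ["(", "S", ")", "S"],  # S -> ( S ) S
    ("S", ")"): [],                    # S -> ε
    ("S", "$"): [],                    # S -> ε
}

def ll1_parse(tokens):
    """Return True iff `tokens` is derivable from S (i.e. balanced parentheses)."""
    stack = ["$", "S"]                 # start symbol above the end marker
    tokens = list(tokens) + ["$"]      # input buffer with end marker
    i = 0
    while stack:
        top = stack.pop()
        look = tokens[i]
        if top == look:                # terminal on top matches lookahead: consume
            i += 1
        elif (top, look) in TABLE:     # nonterminal: expand using the table entry
            stack.extend(reversed(TABLE[(top, look)]))
        else:                          # no table entry and no match: reject
            return False
    return i == len(tokens)
```

With one symbol of lookahead and no backtracking, each step is decided entirely by the stack top and the next token, which is exactly the LL(1) condition.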
Bottom-up Parsing
A bottom-up parser reads the input from left to right and uses a rightmost derivation in reverse order. Another name for a bottom-up parser is shift-reduce parser.
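A shift-reduce parser can be sketched with the toy grammar E → E + n | n (names and grammar here are hypothetical, for illustration only): symbols are shifted onto a stack, and whenever a production's right side (a handle) appears on top of the stack, it is reduced to the left-side nonterminal.

```python
# Hypothetical shift-reduce recognizer for the toy grammar:
#   E -> E + n | n
def shift_reduce(tokens):
    """Return True iff `tokens` is a sentence of the toy grammar."""
    stack, rest = [], list(tokens)
    while True:
        if stack == ["n"]:                   # handle "n" at the bottom: E -> n
            stack = ["E"]
        elif stack[-3:] == ["E", "+", "n"]:  # handle on top: E -> E + n
            stack[-3:] = ["E"]
        elif rest:                           # no handle: shift next input symbol
            stack.append(rest.pop(0))
        else:                                # input consumed: accept iff one E remains
            return stack == ["E"]
```

Reading the reductions performed on "n + n" in reverse (E ⇒ E + n ⇒ n + n) gives exactly the rightmost derivation, as stated above.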
LR Grammar
For a grammar to be LR, it is sufficient that a left-to-right shift-reduce parser be able to recognize handles when they appear on top of the stack (LR parsers are implemented using a stack). A grammar that can be parsed by an LR parser examining up to k input symbols (k lookahead symbols) on each move is called an LR(k) grammar.
Points to be Remembered
 For every regular set L, there exists a CFG G such that L = L(G).
 Every regular language is a CFL.
 Let G1 and G2 be context-free grammars. Then, G1 and G2 are equivalent if and only if L(G1) = L(G2).
 The Intersection of a contextfree language and a regular language is a contextfree language.
 The reverse of a contextfree language is contextfree.
 A DFA can remember only a finite amount of information whereas a PDA can remember an infinite amount of information.
 For every PDA, there is a contextfree grammar and for every contextfree grammar, there is a PDA.
 If L1 is a DCFL and L2 is regular, then L1 ∪ L2 is also a DCFL.
 If L1 is a DCFL and L2 is regular, then L1 ∩ L2 is also a DCFL.
 Every regular language is DCFL.
 The power of non-deterministic pushdown automata and deterministic pushdown automata is not the same. But the power of non-deterministic finite automata and deterministic finite automata is the same.
 An FSM (Finite State Machine) with one stack is more powerful than an FSM without a stack.
 If left recursion or left factoring is present, the grammar is not necessarily ambiguous, but there may be a chance of ambiguity.
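Left recursion mentioned above must be removed before top-down parsing. The standard transformation rewrites A → Aα | β as A → βA', A' → αA' | ε; a minimal sketch (function name and data layout are hypothetical, with productions as lists of symbols):

```python
# Hypothetical sketch of immediate left-recursion elimination:
#   A -> A α | β   becomes   A -> β A',  A' -> α A' | ε
def eliminate_left_recursion(head, productions):
    """`productions` is a list of symbol lists for nonterminal `head`."""
    recursive = [p[1:] for p in productions if p and p[0] == head]  # the α parts
    others = [p for p in productions if not p or p[0] != head]      # the β parts
    if not recursive:
        return {head: productions}     # nothing to do
    new = head + "'"                   # fresh nonterminal A'
    return {
        head: [beta + [new] for beta in others],                 # A  -> β A'
        new: [alpha + [new] for alpha in recursive] + [[]],      # A' -> α A' | ε  ([] is ε)
    }
```

For example, E → E + T | T becomes E → T E', E' → + T E' | ε, which a predictive parser can handle.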
Difference between LL and LR
There is a significant difference between LL and LR grammars. For a grammar to be LR(k), we must be able to recognize the occurrence of the right side of a production, having seen all of what is derived from that right side, with k input symbols of lookahead. The requirement for LL(k) grammars is quite different: there we must be able to recognize the use of a production by looking only at the first k symbols of what its right side derives.
LR(k)
L stands for left-to-right scanning of the input. R stands for rightmost derivation (constructed in reverse). k is the number of lookahead symbols.

LL(k)
L stands for left-to-right scanning of the input. The second L stands for leftmost derivation. k is the number of lookahead symbols.