A token is a consecutive sequence of characters that has a collective meaning; for example, in the text "if A > 5" the tokens are "if", "A", ">", and "5". As the name implies, this facility revolves around tokens. The basic facilities provided are for lexical analysis, which is the breaking up of a stream of text into tokens, and parsing, which is the grouping of tokens into grammatical phrases.
type Example_Token_ID is (If_ID, Then_ID, Else_ID, ID_ID, Num, Relop, Whitespace);

Again, this is a very simple step once you know the list of tokens you need. But of course figuring that out is not always so simple!
package Example_Token is new Opentoken.Token.Enumerated (Example_Token_ID);
package Tokenizer is new Example_Token.Analyzer;
If_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Opentoken.Recognizer.Keyword.Get ("if"));
Then_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Opentoken.Recognizer.Keyword.Get ("then"));
Else_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Opentoken.Recognizer.Keyword.Get ("else"));
ID_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Opentoken.Recognizer.Identifier.Get);
Num_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Opentoken.Recognizer.Real.Get);
Whitesp_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Opentoken.Recognizer.Character_Set.Get
(Opentoken.Recognizer.Character_Set.Standard_Whitespace)
);

The source above creates token recognizers for:
- The keywords "if", "then", and "else". Case does not matter for keywords, so "IF" or "If" would also be recognized.
- An identifier. Any series of characters starting with a letter and containing only letters, numbers, and underscores will be recognized.
- A real (floating or fixed point) literal.
- A series of "whitespace" characters. The Standard_Whitespace set is used, which includes spaces, tabs, and line terminators.
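For instance (illustrative samples, not taken from the example's input file), these recognizers would accept lexemes such as:

-- Keyword    : if, IF, If, then, THEN
-- Identifier : X, Count, My_Var_2
-- Real       : 3.14, 12.5e2
-- Whitespace : any run of spaces, tabs, or line terminators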
A recognizer is a tagged type that is derived from the type OpenToken.Recognizer.Instance.
You should extend the type to provide yourself state information and to
keep track of any settings that your recognizer type may allow. Other routines
and information about this specific type of token may be placed in there
too. In our example the token Relop cannot be recognized by any of the provided token recognizers, so we declare it as follows. Most of the declaration is boilerplate that can be cut and pasted; very little is custom to this recognizer: just the name of the package and the states between the first and last state. Of course, more routines and fields in Instance may be added at your discretion, depending on the needs of your recognizer.
with OpenToken.Recognizer;
package Relop_Example_Token is

type Instance is new Opentoken.Recognizer.Instance with private;
---------------------------------------------------------------------------
-- This function will be called to create an Identifier token. Note that
-- this is a simple recognizer, so Get doesn't need any parameters.
---------------------------------------------------------------------------
function Get return Instance;

private
type State_ID is (First_Char, Equal_or_Greater, Equal, Done);
type Instance is new Opentoken.Recognizer.Instance with record
State : State_ID := First_Char;
end record;

---------------------------------------------------------------------------
-- This procedure will be called when analysis on a new candidate string
-- is started. The Token needs to clear its state (if any).
---------------------------------------------------------------------------
procedure Clear (The_Token : in out Instance);
---------------------------------------------------------------------------
-- This procedure will be called to perform further analysis on a token
-- based on the given next character.
---------------------------------------------------------------------------
procedure Analyze (The_Token : in out Instance;
Next_Char : in Character;
Verdict : out Opentoken.Recognizer.Analysis_Verdict);

end Relop_Example_Token;
To help avoid confusion when naming states, I have found it easiest to stick to a consistent standard, as the state names in this example illustrate.
The result will be one of the enumeration values in OpenToken.Recognizer.Analysis_Verdict. Matches indicates that the string you have been fed so far (since the last Clear call) does fully qualify as a token. So_Far_So_Good indicates that the string in its current state does not match a token, but it could possibly in the future match, depending on the next characters that are fed in. Note that it is quite possible for the verdict to be Matches on one call, and So_Far_So_Good on a later call, depending on the definition of the token. The final verdict, Failed, is different. You return it to indicate that the string is not a legal token of your type, and can never be one no matter how many more characters are fed in. Whenever you return this, you should set the recognizer's state to Done as well.
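To make the verdicts concrete, here is a hand trace (not code from the example) of the Relop recognizer developed below, being fed the characters of "<=3" one at a time:

-- Clear          --  State = First_Char
-- Analyze ('<')  --  Verdict = Matches ("<" is a relop; State = Equal_or_Greater)
-- Analyze ('=')  --  Verdict = Matches ("<=" is also a relop; State = Done)
-- Analyze ('3')  --  Verdict = Failed  (nothing starting with "<=3" can ever be a relop)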
package body Relop_Example_Token is

---------------------------------------------------------------------------
-- This procedure will be called when analysis on a new candidate string
-- is started. The Token needs to clear its state (if any).
---------------------------------------------------------------------------
procedure Clear (The_Token : in out Instance) is
begin
The_Token.State := First_Char;
end Clear;

---------------------------------------------------------------------------
-- This procedure will be called to create a Relop token recognizer
---------------------------------------------------------------------------
function Get return Instance is
begin
return (Report => True,
State => First_Char);
end Get;

---------------------------------------------------------------------------
-- This procedure will be called to perform further analysis on a token
-- based on the given next character.
---------------------------------------------------------------------------
procedure Analyze (The_Token : in out Instance;
Next_Char : in Character;
Verdict : out Opentoken.Recognizer.Analysis_Verdict) is
begin

case The_Token.State is

when First_Char =>

-- If the first char is a <, =, or >, it's a match
case Next_Char is
when '<' =>
Verdict := Opentoken.Recognizer.Matches;
The_Token.State := Equal_Or_Greater;
when '>' =>
Verdict := Opentoken.Recognizer.Matches;
The_Token.State := Equal;
when '=' =>
Verdict := Opentoken.Recognizer.Matches;
The_Token.State := Done;
when others =>
Verdict := Opentoken.Recognizer.Failed;
The_Token.State := Done;
end case;

when Equal_Or_Greater =>

-- If the next char is a > or =, it's a match
case Next_Char is
when '>' | '=' =>
Verdict := Opentoken.Recognizer.Matches;
The_Token.State := Done;
when others =>
Verdict := Opentoken.Recognizer.Failed;
The_Token.State := Done;
end case;

when Equal =>

-- If the next char is a =, it's a match
if Next_Char = '=' then
Verdict := Opentoken.Recognizer.Matches;
The_Token.State := Done;
else
Verdict := Opentoken.Recognizer.Failed;
The_Token.State := Done;
end if;

when Done =>
Verdict := Opentoken.Recognizer.Failed;

end case;
end Analyze;

end Relop_Example_Token;

Now the only thing that remains is to create a token recognizer object of your new recognizer type, just like you did for the predefined recognizer types.
Relop_Recognizer : constant Tokenizer.Recognizable_Token :=
Tokenizer.Get(Relop_Example_Token.Get);
Syntax : constant Tokenizer.Syntax :=
(If_ID => If_Recognizer,
Then_ID => Then_Recognizer,
Else_ID => Else_Recognizer,
ID_ID => ID_Recognizer,
Num => Num_Recognizer,
Relop => Relop_Recognizer,
Whitespace => Whitesp_Recognizer
);
To make things a little easier, we can combine steps 3 and 4 into one step (this version of the syntax also replaces the single Num token with separate Int and Real tokens, which is what the example program actually uses), e.g.:

Syntax : constant Tokenizer.Syntax :=
(If_ID => Tokenizer.Get(Opentoken.Recognizer.Keyword.Get ("if")),
Then_ID => Tokenizer.Get(Opentoken.Recognizer.Keyword.Get ("then")),
Else_ID => Tokenizer.Get(Opentoken.Recognizer.Keyword.Get ("else")),
ID_ID => Tokenizer.Get(Opentoken.Recognizer.Identifier.Get),
Int => Tokenizer.Get(Opentoken.Recognizer.Integer.Get),
Real => Tokenizer.Get(Opentoken.Recognizer.Real.Get),
Relop => Tokenizer.Get(Relop_Example_Token.Get),
Whitespace => Tokenizer.Get(Opentoken.Recognizer.Character_Set.Get
(Opentoken.Recognizer.Character_Set.Standard_Whitespace))
);
Analyzer : Tokenizer.Instance := Tokenizer.Initialize (Syntax);

This creates an analyzer that will read input from Ada.Text_IO.Current_Input and attempt to match it to the given syntax. By default this will be standard input, but that can be redirected to the file of your choice using Ada.Text_IO.Set_Input.
File : aliased Ada.Text_IO.File_Type;
Feeder : aliased OpenToken.Text_Feeder.Text_IO.Instance :=
OpenToken.Text_Feeder.Text_IO.Create (File'Unchecked_Access);
Analyzer : Tokenizer.Instance := Tokenizer.Initialize
(Language_Syntax => Syntax,
Feeder => Feeder'access);

The text feeder is a tagged type in OpenToken.Text_Feeder.Instance'Class. It has a primitive (overridable) routine named Get that fills a string with characters. Whenever the analyzer runs out of characters to process, it will request more from the feeder's Get function. If you do not supply a text feeder, a default one is used which reads input from the current input file.
If you want to change the file the default text feeder reads from, you can directly modify Tokenizer.Input_Feeder, either along with changing Text_IO's current input file...
Ada.Text_IO.Set_Input (File);

...or independently from Text_IO's current input file:
Tokenizer.Input_Feeder := OpenToken.Text_Feeder.Text_IO.Create;
Tokenizer.Input_Feeder := OpenToken.Text_Feeder.Text_IO.Create(File'Unchecked_Access);

If you want to change your analyzer's text feeder during analysis, you can also use the function Set_Text_Feeder:
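As a rough sketch (the parameter profile below is an assumption; check the Tokenizer spec for the actual declaration), such a call might look like:

Tokenizer.Set_Text_Feeder (Analyzer, Feeder'Access);  -- hypothetical profile; see the package spec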
The full source that was used for this tutorial is available in the Examples/ASU_Example_3_6 directory, along with a sample input file. To run it using GNAT, issue the "make" command in that directory. (For other compilers, consult your compiler documentation to see how to build a program.) When the command completes, type "asu_example_3_6" to run it. You should see the following list of tokens recognized:
Found IF_ID
Found ID_ID
Found RELOP
Found ID_ID
Found THEN_ID
Found ELSE_ID
Found RELOP
Found REAL
Found RELOP
Found INT

The text is read in from the file Example.txt in that directory. If you want, you can modify the contents of that file to produce a different list of token recognitions.
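As an illustration (a guess at the flavor of that file, not its actual contents), an input line like

if A <= B then else >= 4.5 < 3

would produce exactly that sequence, since the whitespace token is consumed but not reported.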
The newest method is recursive-descent parsing. This method relies on
enumerated tokens, as well as non-enumerated tokens from the OpenToken.Token
hierarchy.
S' -> S
S -> L = R | R
L -> * R | id
R -> L

For our implementation we will also use two extra tokens to designate the end of the file, and whitespace in the text stream.
type Token_IDs is (Asterix_ID, ID_ID, Equals_ID, EOF_ID, Whitespace_ID, S_ID, L_ID, R_ID, S_Prime_ID);

Again, this is a very simple step once you know the list of tokens you need. But of course figuring that out is not always so simple!
package Master_Token is new OpenToken.Token.Enumerated(Token_IDs);
package Tokenizer is new Master_Token.Analyzer(Whitespace_ID);

Next you will need to use the instantiated token package to create a token list package for use in creating productions. That package is also used to instantiate the OpenToken.Token.Nonterminal package.
package Token_List is new Master_Token.List;
package Nonterminal is new Master_Token.Nonterminal(Token_List);

Next we need access to the packages that allow us to create a Grammar. OpenToken.Production gets instantiated with our root token package, our root nonterminal package, and our token list package. Its child .List is instantiated after that.
package Production is new OpenToken.Production(Master_Token, Token_List, Nonterminal);
package Production_List is new Production.List;

Next we instantiate the root package in the parser hierarchy with the production list package and the analyzer package. And then at last we create our parser package.
package Parser is new Production.Parser(Production_List, Tokenizer);
package LALR_Parser is new Parser.LALR;

That's it for the generic instantiations. That wasn't really so bad, was it? There is one last package visibility chore though. In order to use the infix production operators to define our grammar, we'll probably want to perform a few "use type" clauses:
-- Allow infix operators for building productions
use type Token_List.Instance;
use type Production.Right_Hand_Side;
use type Production.Instance;
use type Production_List.Instance;
Asterix : aliased Master_Token.Class := Master_Token.Get (Asterix_ID);
ID : aliased Master_Token.Class := Master_Token.Get (ID_ID);
Equals : aliased Master_Token.Class := Master_Token.Get (Equals_ID);
EOF : aliased Master_Token.Class := Master_Token.Get (EOF_ID);
S : aliased Nonterminal.Class := Nonterminal.Get (S_ID);
L : aliased Nonterminal.Class := Nonterminal.Get (L_ID);
R : aliased Nonterminal.Class := Nonterminal.Get (R_ID);
S_Prime : aliased Nonterminal.Class := Nonterminal.Get (S_Prime_ID);
Additionally, there's a special version of the Get routine which has no New_Token parameter. This form can be used to create a mapping without first creating a token object. This is useful for unreported tokens like our whitespace token, which do not take part in the grammar. In this case a Master_Token.Instance will be dynamically allocated and assigned into the mapping.
Syntax : constant Tokenizer.Syntax :=
(Asterix_ID => Tokenizer.Get (Recognizer => OpenToken.Recognizer.Keyword.Get ("*"),
New_Token => Asterix),
ID_ID => Tokenizer.Get (Recognizer => OpenToken.Recognizer.Keyword.Get ("id"),
New_Token => ID),
Equals_ID => Tokenizer.Get (Recognizer => OpenToken.Recognizer.Keyword.Get ("="),
New_Token => Equals),
EOF_ID => Tokenizer.Get (Recognizer => OpenToken.Recognizer.End_Of_File.Get,
New_Token => EOF),
Whitespace_ID => Tokenizer.Get
(OpenToken.Recognizer.Character_Set.Get
(OpenToken.Recognizer.Character_Set.Standard_Whitespace))
);
Input_File : aliased Ada.Text_IO.File_Type;
Feeder : aliased OpenToken.Text_Feeder.Text_IO.Instance :=
OpenToken.Text_Feeder.Text_IO.Create (Input_File'Unchecked_Access);

Now to create the analyzer itself, we just make an object of type Tokenizer.Instance and initialize it with our syntax and text feeder. This is trivial.
Analyzer : Tokenizer.Instance := Tokenizer.Initialize (Syntax, Feeder'access);

So far this should have all looked pretty familiar if you have gone through the previous example. Yes, there were a lot more packages to instantiate, and that bit about creating tokens was new. But still most of the steps were familiar, and the result is pretty much the same so far: an analyzer. With the next step this will start to change, so get ready. Here it comes...
A grammar is created using the OpenToken.Production.List.Instance type. It is a series of productions strung together using "and" operators.
So in that case, what is a production? A production is a relation, written with the <= operator, between a nonterminal token on the left hand side and a right hand side consisting of a list of tokens (and perhaps an attribute synthesization routine). Token lists on the right hand side are built using the & operator, and synthesization routines can be tacked on using the + operator. Note that the <= operator in this context is the "is derived from" operator. It has nothing to do with one side being less than or equal to another.
The idea is that the nonterminal on the left-hand side of the first production symbolizes the entire language. The tokens on its right hand side represent tokens that the language may be composed of (or that it may be decomposed into). For each token in that list that is a non-terminal (derived from Nonterminal.Instance), there must be one or more productions describing what tokens it is composed of. Some of those tokens may also be nonterminals, in which case there may need to be more productions describing what tokens they are derived from. Ultimately, all nonterminals must somehow derive from a series of terminals. It is also important that the left-hand token of the first production does not appear in any other production. Depending on the parser used, there may be extra restrictions on the grammar as well. For instance, many parsers cannot handle grammars that are ambiguous (could produce more than one derivation sequence using the parser's derivation method).
Terminals are not described using productions, as they will be generated by the lexical analyzer from series of characters.
Below is the grammar definition for our example language. I have also included the textual definition provided by the dragon book for comparison purposes.
--------------------------------------------------------------------------
-- Define the Grammar. The text in the example in the book looks something
-- like:
--
-- S' -> S
-- S -> L = R | R
-- L -> * R | id
-- R -> L
--
Grammar : constant Production_List.Instance :=
S_Prime <= S & EOF and
S <= L & Equals & R and
S <= R and
L <= Asterix & R + Nonterminal.Synthesize_Self and
L <= ID + Nonterminal.Synthesize_Self and
R <= L;
-- The lalr parser instance.
Test_Parser : LALR_Parser.Instance :=
LALR_Parser.Generate (Grammar => Grammar,
Analyzer => Analyzer
);
Test_File_Name : constant String := "Example.txt";
begin

Ada.Text_IO.Put ("Parsing file " & Test_File_Name & "...");
Ada.Text_IO.Flush;

Ada.Text_IO.Open (File => Input_File,
Name => Test_File_Name,
Mode => Ada.Text_IO.In_File
);

LALR_Parser.Parse (Test_Parser);
If you take a peek at one of the Token.Nonterminal packages, you will notice some primitive operations with the word "synthesize" in them. One of these routines will be dispatched to whenever a token is recognized. Their job is to initialize a nonterminal based on their input parameters, which are typically a list of tokens. Whenever the right hand side of a production is matched, a process called "reduction" occurs. This involves all the tokens matching the right hand side of the production being deleted and a new nonterminal token of the same type as the production's left hand side token being created. The standard Nonterminal token is a tagged record with only one field: the token ID. But you can derive from it your own tokens with extra fields (called attributes in compiler lingo). These attributes can store useful information like numeric values, symbol table pointers, object code, etc.
When a reduction of a production occurs, a synthesization routine of your choosing is called with the list of tokens found on the right hand side. If you didn't specify a routine when you defined the production, the routine OpenToken.Token.Nonterminal.Synthesize_Default is called. Synthesize_Default dispatches to the Default_Synthesize routine for your type. If you don't override Default_Synthesize, it in turn dispatches to the Synthesize_By_Copying routine for your type, passing it the first token on the right hand side list. So by default nonterminals are created as copies of the first token on the right hand side of the production, but you can override that default on several levels, or just explicitly specify a different method. This is important since you will probably have to explicitly override some synthesizations in any grammar. Trying to use the default Synthesize_By_Copying will cause a constraint error if the source token isn't in the target token's class.
To help show how all this works, let's try a more complicated example from the Dragon Book. Example 5.10 defines a simple desk calculator. It has addition and multiplication operators, as well as parentheses for association.
generic
with package Token is new OpenToken.Token(<>);
with package Token_List is new Token.List;
with package Nonterminal is new Token.Nonterminal (Token_List);
with package Integer_Literal is new Token.Integer_Literal;
package Simple_Integer_Token is

type Instance is new Nonterminal.Instance with private;
subtype Class is Instance'Class;
type Handle is access all Class;
Every nonterminal needs a constructor to create an initial token for the grammar.

function Get (ID : in Token.Token_ID;
Value : in Integer := 0
) return Instance'Class;

We also need a routine to retrieve the value of the token (so we can print it on the screen when we are done). Additionally, we'd like to provide custom implementations for the Synthesize_By_Copying routine.
function Value (Subject : in Instance) return Integer;
procedure Synthesize_By_Copying (New_Token : out Instance;
Source : in Token.Instance'Class;
To_ID : in Token.Token_ID);

This token also needs three new synthesization methods: one that synthesizes by adding the values of the first and third tokens, one that synthesizes by multiplying them, and one that synthesizes by copying the value of the second token (for expressions enclosed in parentheses). These are declared as Nonterminal.Synthesize constants.
Add_Integers : constant Nonterminal.Synthesize;
Multiply_Integers : constant Nonterminal.Synthesize;
Synthesize_Second : constant Nonterminal.Synthesize;
In the private section, we make the full declaration of our instance, with the attributes that we want to keep track of. In this case, that would be an integer value for the calculator.
private
type Instance is new Nonterminal.Instance with record
Value : Integer;
end record;

We also declare the actual attribute synthesization routines that the constants refer to. Since they will be pointed to by procedure access objects, their parameter profile must be identical to that of Nonterminal.Synthesize. Note that since they operate on objects of Nonterminal.Class, they will not dispatch. Hang around for a little longer, and I'll show you how we get around that problem.
procedure Synthesize_Add (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Token.Token_ID);
procedure Synthesize_Multiply (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Token.Token_ID);
procedure Synthesize_From_Second_Argument (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Token.Token_ID);

Add_Integers : constant Nonterminal.Synthesize := Synthesize_Add'Access;
Multiply_Integers : constant Nonterminal.Synthesize := Synthesize_Multiply'Access;
Synthesize_Second : constant Nonterminal.Synthesize := Synthesize_From_Second_Argument'Access;
For the package body, we now need to implement all of the routines we declared in the spec. Get and Value are both quite trivial. For Get we just return an instance with all its fields properly initialized. For Value we return the value of the token.

package body Simple_Integer_Token is

function Get (ID : in Token.Token_ID;
Value : in Integer := 0) return Instance'Class is
begin
return Instance'Class(Instance'(Nonterminal.Instance(Nonterminal.Get(ID)) with Value => Value));
end Get;

function Value (Subject : in Instance) return Integer is
begin
return Subject.Value;
end Value;
Next is our overload of the inherited Synthesize_By_Copying routine. We basically do what the default Nonterminal version of the routine did. But we also want to copy the value of an integer terminal. To do that we check the tag of the source token. If the source isn't in either our class or Integer_Literal's class, we raise an Invalid_Synth_Argument with a descriptive error message.

procedure Synthesize_By_Copying (New_Token : out Instance;
Source : in Token.Instance'Class;
To_ID : in Token.Token_ID) is
begin
if Source in Integer_Literal.Class then
New_Token := (Nonterminal.Instance(Nonterminal.Get(To_ID)) with
Value => Integer_Literal.Value(Integer_Literal.Class(Source)));
elsif Source in Class then
New_Token := (Nonterminal.Instance(Nonterminal.Get(To_ID)) with
Value => Instance(Source).Value);
else
Ada.Exceptions.Raise_Exception
(Nonterminal.Invalid_Synth_Argument'Identity,
"Token " & Token.Token_ID'Image(To_ID) & " cannot be synthesized " &
"solely from a " & Token.Token_ID'Image(Token.ID(Source)) & "."
);
end if;
end Synthesize_By_Copying;
Now we implement our new synthesization routines. Note that as classwide routines, they will not dispatch. If we want dispatching behavior, the way to do it is to call a primitive operation on Instance from within one of these routines.

The routines require us to have the handles of the first, second, and third tokens in the source list. This is accomplished using a list iterator from the Token_List package (OpenToken.Token.List). The first token will be pointed to by the Initial_Iterator routine. Subsequent tokens in the list can be reached by using the Next_Token procedure.
If Source is not of type Instance, then the assignment will cause a Constraint_Error. We trap that and raise Invalid_Synth_Argument with a descriptive message for debugging.
procedure Synthesize_Add (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Token.Token_ID) is

Left : Token_List.List_Iterator := Token_List.Initial_Iterator(Source);
Right : Token_List.List_Iterator := Token_List.Initial_Iterator(Source);

begin
-- Move "Right" over to the third item;
Token_List.Next_Token (Right);
Token_List.Next_Token (Right);

New_Token := Class(Instance'(Token.Instance(Token.Get (To_ID)) with
Value => (Value (Class (Token_List.Token_Handle(Left).all)) +
Value (Class (Token_List.Token_Handle(Right).all))
)
)
);
exception
when Constraint_Error =>
Ada.Exceptions.Raise_Exception
(Nonterminal.Invalid_Synth_Argument'Identity,
"Token " & Token.Token_ID'Image(To_ID) & " cannot be synthesized " &
"from a " &
Token.Token_ID'Image (Token.ID (Token_List.Token_Handle(Left).all) ) &
" and a " &
Token.Token_ID'Image (Token.ID (Token_List.Token_Handle(Right).all) ) &
"."
);
end Synthesize_Add;

procedure Synthesize_Multiply (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Token.Token_ID) is

Left : Token_List.List_Iterator := Token_List.Initial_Iterator(Source);
Right : Token_List.List_Iterator := Token_List.Initial_Iterator(Source);

begin
-- Move "Right" over to the third item;
Token_List.Next_Token (Right);
Token_List.Next_Token (Right);

New_Token := Class(Instance'
(Token.Instance(Token.Get (To_ID)) with
Value => (Value (Class(Token_List.Token_Handle(Left).all)) *
Value (Class(Token_List.Token_Handle(Right).all))
)
)
);

exception
when Constraint_Error =>
Ada.Exceptions.Raise_Exception
(Nonterminal.Invalid_Synth_Argument'Identity,
"Token " & Token.Token_ID'Image(To_ID) & " cannot be synthesized " &
"from a " &
Token.Token_ID'Image (Token.ID (Token_List.Token_Handle(Left).all) ) &
" and a " &
Token.Token_ID'Image (Token.ID (Token_List.Token_Handle(Right).all) ) &
"."
);
end Synthesize_Multiply;

procedure Synthesize_From_Second_Argument (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Token.Token_ID) is

Second : Token_List.List_Iterator := Token_List.Initial_Iterator(Source);
begin
-- Move "Second" over to the second item;
Token_List.Next_Token (Second);

New_Token := Class(Instance'(Nonterminal.Instance(Nonterminal.Get(To_ID)) with
Value => Class(Token_List.Token_Handle(Second).all).Value));

exception
when Constraint_Error =>
Ada.Exceptions.Raise_Exception
(Nonterminal.Invalid_Synth_Argument'Identity,
"Token " & Token.Token_ID'Image(To_ID) & " cannot be synthesized " &
"solely from a " &
Token.Token_ID'Image
(Token.ID (Token_List.Token_Handle(Second).all) ) & ".");
end Synthesize_From_Second_Argument;

end Simple_Integer_Token;
L -> E          print (L.val)
E -> E + T E.val := E1.val + T.val
E -> T
T -> T * F T.val := T1.val * F.val
T -> F
F -> ( E ) F.val := E.val
F -> digit

The text on the right shows the synthesization actions that are to occur when a production is reduced. The absence of an action implies that the attributes of the first token on the right are used to create the new token.
We start by declaring our Token_IDs:
type Token_IDs is (Integer_ID, Left_Paren_ID, Right_Paren_ID, Plus_Sign_ID,
Multiply_ID, EOF_ID, Whitespace_ID, L_ID, E_ID, T_ID, F_ID);

We instantiate all our packages as in the previous example, with two new additions. We'll use OpenToken's Integer_Literal token package for our literal numbers, and a Nonterminal Simple_Integer_Token package that we will write ourselves to represent numeric-valued non-terminals.
package Integer_Literal is new Master_Token.Integer_Literal;
package Simple_Integer is new Simple_Integer_Token(Master_Token, Token_List, Nonterminal, Integer_Literal);

For our calculator we want to read lines from the terminal. But each line should be a complete parse in and of itself. We could have done that by making an end-of-line the final token in our top production. But instead we'll use the custom text feeder OpenToken.Text_Feeder.String.Instance. It returns to the analyzer strings we manually feed into it, with end of file tacked on the end. We'll also need a couple of variables for reading the strings from standard input.
Line : String (1..1024);
Line_Length : Natural;
Feeder : aliased OpenToken.Text_Feeder.String.Instance;

The tokens and analyzer are made much like the previous ones were, so we won't dwell on that. We do want to create one last synthesization routine though. This is the routine that gets run when the final (first) production is reduced. Its job is to print the value of that nonterminal to the screen.
procedure Print_Value (New_Token : out Nonterminal.Class;
Source : in Token_List.Instance'Class;
To_ID : in Master_Token.Token_ID);

Now here's where those synthesization routines come in. We define our grammar in much the same way we defined the previous one. Just as in the dragon book's example, productions that don't have a synthesization routine will be synthesized by copying the attributes from the first token on the right. Obviously, if that token is not in the left hand side token's class, a constraint error would normally occur. But we took care of that for the case of Integer_Literals to Simple_Integers by overloading the default Synthesize_By_Copying routine for Simple_Integers. In cases where that is not the behavior we want, as in the case of addition productions, we explicitly specify an appropriate synthesization routine.
Grammar : constant Production_List.Instance :=
L <= E & EOF + Print_Value'Access and
E <= E & Plus & T + Simple_Integer.Add_Integers and
E <= T and
T <= T & Times & F + Simple_Integer.Multiply_Integers and
T <= F and
F <= Left_Paren & E & Right_Paren + Simple_Integer.Synthesize_Second and
F <= Int_Literal;
After doing this and creating our parser (and implementing Print_Value), we are ready to use it. The code below shows how this parser can be used to repeatedly perform calculations on strings entered from the keyboard. It will terminate when a blank line is entered.

loop
Ada.Text_IO.Get_Line(Line, Line_Length);

exit when Line_Length = 0;
OpenToken.Text_Feeder.String.Set
(Feeder => Feeder,
Value => Line (1..Line_Length));

LALR_Parser.Parse (Test_Parser);
end loop;
When this code is run and the user types in

3 * (5 + 7 * 2)

the program responds with
57
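(A quick check of the arithmetic the grammar encodes: the parenthesized subexpression reduces first, with multiplication binding tighter than addition, so 7 * 2 = 14, then 5 + 14 = 19, and finally 3 * 19 = 57.)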
Although there is this division between terminal and non-terminal tokens, there really aren't a lot of separate parts and facilities to a recursive-descent parser. Terminal tokens are parsed by reading them from the analyzer, while non-terminals are parsed by looking for the right combination of other tokens. But parsing is just an activity that is performed on tokens, and everything is a token.
This conceptual simplification should bring you a lot more freedom when building on an existing parser.
The general process you follow in creating a recursive descent parser
is:
S' -> S
S -> L = R | R
L -> * R | id
R -> L

For our implementation we will also use two extra tokens to designate the end of the file, and whitespace in the text stream.
type Token_IDs is (Asterix_ID, ID_ID, Equals_ID, EOF_ID, Whitespace_ID);

Again, this is a very simple step once you know the list of tokens you need. But of course figuring that out is not always so simple!
package Master_Token is new Opentoken.Token.Enumerated (Token_IDs);
package Tokenizer is new Master_Token.Analyzer;
Additionally, there's a special version of the Get routine which has no New_Token parameter. This form can be used to create a mapping without first creating a token object. This saves us some work, at the expense of a little heap storage. A Master_Token.Instance will be dynamically allocated and assigned into the mapping.
Syntax : constant Tokenizer.Syntax :=
(Asterix_ID => Tokenizer.Get (OpenToken.Recognizer.Keyword.Get ("*")),
ID_ID => Tokenizer.Get (OpenToken.Recognizer.Keyword.Get ("id")),
Equals_ID => Tokenizer.Get (OpenToken.Recognizer.Keyword.Get ("=")),
EOF_ID => Tokenizer.Get (OpenToken.Recognizer.End_Of_File.Get),
Whitespace_ID => Tokenizer.Get (OpenToken.Recognizer.Character_Set.Get
(OpenToken.Recognizer.Character_Set.Standard_Whitespace))
);
S_Prime : OpenToken.Token.Handle;
S : constant OpenToken.Token.Handle := new OpenToken.Token.Selection.Instance;
L : constant OpenToken.Token.Handle := new OpenToken.Token.Selection.Instance;
R : constant OpenToken.Token.Handle := new OpenToken.Token.Selection.Instance;

Later I will also explain why S_Prime is not initialized and the others are.
Input_File : aliased Ada.Text_IO.File_Type;
Feeder : aliased OpenToken.Text_Feeder.Text_IO.Instance :=
OpenToken.Text_Feeder.Text_IO.Create (Input_File'Unchecked_Access);

Now to create the analyzer itself, we just make an object of type Tokenizer.Instance and initialize it with our syntax and text feeder. This is trivial.
Analyzer : Tokenizer.Instance := Tokenizer.Initialize (Syntax, Feeder'access);

In this example program, we're going to be reading from a set filename. Let's also define that filename here in a constant, so it's easy to change:
Test_File_Name : constant String := "Example.txt";

So far this should have all looked pretty familiar if you have gone through the first example. That bit about creating tokens was new. But still most of the steps were familiar, and the result is pretty much the same so far: an analyzer. With the next step this will start to change, so get ready. Here it comes...
S' -> S

Now you may think this looks like a pretty silly rule. Well, you're absolutely right, it is a silly rule! It was artificially put into this example in order to make a table parser building algorithm work right. We aren't using a table to parse, so it's really just a waste. The smart thing to do would be to just ditch this rule and start with S. But that wouldn't exactly be playing fair, because then we aren't really doing anything for the production. We could create a token that just looks for an S token when it parses. But that's really a bit of a waste of time.
So what we will do is just make S_Prime point to the same object as S:
S_Prime := S;

Incidentally, this is the reason we didn't make an object for S_Prime, like we did for the others.
S -> L = R | R

Note that since this is the top level token, there is an implied EOF token at the end of the production. So in other words, it is either the sequence of tokens L, =, R, and EOF, or it is a sequence of an R and an EOF token. We could craft a custom token with a parser that recognizes that pattern. But a sequence of tokens and a selection between tokens happen to be very common token types. For that reason there are prebuilt Token.Sequence and Token.Selection tokens. Using those packages (with appropriate "use type" declarations of course), we can define S thusly:
S.all := OpenToken.Token.Selection.Class
(OpenToken.Token.Sequence.New_Instance(L & Equals & R & EOF) or
OpenToken.Token.Sequence.New_Instance(R & EOF));
L -> * R | id

That's either a sequence of * and R, or an id. Using the same two prebuilt tokens, we get:
L.all := OpenToken.Token.Selection.Class
(OpenToken.Token.Sequence.New_Instance(Asterix & R) or ID);
R -> L

Here is another of those silly identity productions again. But let's try another tactic this time and make a separate R token that is a copy of the L token.
R.all := L.all;

Use
To use the parser, we just call the top-level token's Parse routine. All our tokens' built-in parse routines will handle the rest. For our example program we want to first open up the file that the token analyzer's text feeder uses. Then we must call the token analyzer once to load up the first token. Token parser routines assume the currently loaded token is the first one they are to look at.
Ada.Text_IO.Open (File => Input_File,
Name => Test_File_Name,
Mode => Ada.Text_IO.In_File
);

-- Load up the first token
Tokenizer.Find_Next (Analyzer);

OpenToken.Token.Parse
(Match => S_Prime.all,
Analyzer => Analyzer
);

That's it! If you want to perform error handling, you may consider adding something like the following exception handler:
exception
when Error : OpenToken.Token.Parse_Error =>
Ada.Text_IO.Put_Line ("failed at line" & Integer'Image(Tokenizer.Line (Analyzer)) &
", column" & Integer'Image(Tokenizer.Column (Analyzer)) &
" due to parse exception:");
Ada.Text_IO.Put_Line (Ada.Exceptions.Exception_Information (Error));
integer -> (+ | -)? digit+
real -> (+ | -)? (digit | _)* digit . (digit | _)* ( (e | E) (- | +)? (digit)+ )?

This change has been made simply because it matches the definition used for the Integer and Real tokens provided with the OpenToken package. A joint "num" token could have been created to exactly match the num specified in ASU, but we will leave that as an exercise for the reader.
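Reading those definitions, here are a few literals each recognizer would and would not accept (illustrative samples derived from the regular expressions above, not taken from the OpenToken sources):

-- integer : 42, -7, +105            (but not 1.0 or 1e3)
-- real    : 3.14, -0.5, 2.5E+10, 1_000.0
--           (but not .5 -- at least one digit is required before the point)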