This is the Info file bison.info, produced by makeinfo version 4.2 from
bison.texinfo.



   This manual is for GNU Bison (version 1.75, 14 October 2002), the
GNU parser generator.

   Copyright (C) 1988, 1989, 1990, 1991, 1992, 1993, 1995, 1998, 1999,
2000, 2001, 2002 Free Software Foundation, Inc.

     Permission is granted to copy, distribute and/or modify this
     document under the terms of the GNU Free Documentation License,
     Version 1.1 or any later version published by the Free Software
     Foundation; with no Invariant Sections, with the Front-Cover texts
     being "A GNU Manual," and with the Back-Cover Texts as in (a)
     below.  A copy of the license is included in the section entitled
     "GNU Free Documentation License."

     (a) The FSF's Back-Cover Text is: "You have freedom to copy and
     modify this GNU Manual, like GNU software.  Copies published by
     the Free Software Foundation raise funds for GNU development."
   
INFO-DIR-SECTION GNU programming tools
START-INFO-DIR-ENTRY
* bison: (bison).	GNU parser generator (yacc replacement).
END-INFO-DIR-ENTRY


File: bison.info,  Node: Locations Overview,  Next: Bison Parser,  Prev: GLR Parsers,  Up: Concepts

Locations
=========

   Many applications, like interpreters or compilers, have to produce
verbose and useful error messages.  To achieve this, one must be able
to keep track of the "textual position", or "location", of each
syntactic construct.  Bison provides a mechanism for handling these
locations.

   Each token has a semantic value.  In a similar fashion, each token
has an associated location, but the type of locations is the same for
all tokens and groupings.  Moreover, the output parser is equipped with
a default data structure for storing locations (*note Locations::, for
more details).

   Like semantic values, locations can be reached in actions using a
dedicated set of constructs.  In the example above, the location of the
whole grouping is `@$', while the locations of the subexpressions are
`@1' and `@3'.

   When a rule is matched, a default action is used to compute the
semantic value of its left hand side (*note Actions::).  In the same
way, another default action is used for locations.  However, the action
for locations is general enough for most cases, meaning there is
usually no need to describe for each rule how `@$' should be formed.
When building a new location for a given grouping, the default behavior
of the output parser is to take the beginning of the first symbol, and
the end of the last symbol.
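
   As an illustration only (a sketch, not the example referred to
above), an action for a division rule might use these constructs as
follows, assuming the default location structure mentioned earlier:

     exp:      exp '/' exp
                 {
                   if ($3 == 0)
                     {
                       $$ = 0;
                       fprintf (stderr, "%d.%d: division by zero\n",
                                @3.first_line, @3.first_column);
                     }
                   else
                     $$ = $1 / $3;
                 }
     ;

The location `@$' of the whole grouping need not be set here: the
default action described above computes it from `@1' and `@3'.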


File: bison.info,  Node: Bison Parser,  Next: Stages,  Prev: Locations Overview,  Up: Concepts

Bison Output: the Parser File
=============================

   When you run Bison, you give it a Bison grammar file as input.  The
output is a C source file that parses the language described by the
grammar.  This file is called a "Bison parser".  Keep in mind that the
Bison utility and the Bison parser are two distinct programs: the Bison
utility is a program whose output is the Bison parser that becomes part
of your program.

   The job of the Bison parser is to group tokens into groupings
according to the grammar rules--for example, to build identifiers and
operators into expressions.  As it does this, it runs the actions for
the grammar rules it uses.

   The tokens come from a function called the "lexical analyzer" that
you must supply in some fashion (such as by writing it in C).  The Bison
parser calls the lexical analyzer each time it wants a new token.  It
doesn't know what is "inside" the tokens (though their semantic values
may reflect this).  Typically the lexical analyzer makes the tokens by
parsing characters of text, but Bison does not depend on this.  *Note
The Lexical Analyzer Function `yylex': Lexical.

   The Bison parser file is C code which defines a function named
`yyparse' which implements that grammar.  This function does not make a
complete C program: you must supply some additional functions.  One is
the lexical analyzer.  Another is an error-reporting function which the
parser calls to report an error.  In addition, a complete C program must
start with a function called `main'; you have to provide this, and
arrange for it to call `yyparse' or the parser will never run.  *Note
Parser C-Language Interface: Interface.

   Aside from the token type names and the symbols in the actions you
write, all symbols defined in the Bison parser file itself begin with
`yy' or `YY'.  This includes interface functions such as the lexical
analyzer function `yylex', the error reporting function `yyerror' and
the parser function `yyparse' itself.  This also includes numerous
identifiers used for internal purposes.  Therefore, you should avoid
using C identifiers starting with `yy' or `YY' in the Bison grammar
file except for the ones defined in this manual.

   In some cases the Bison parser file includes system headers, and in
those cases your code should respect the identifiers reserved by those
headers.  On some non-GNU hosts, `<alloca.h>', `<stddef.h>', and
`<stdlib.h>' are included as needed to declare memory allocators and
related types.  Other system headers may be included if you define
`YYDEBUG' to a nonzero value (*note Tracing Your Parser: Tracing.).


File: bison.info,  Node: Stages,  Next: Grammar Layout,  Prev: Bison Parser,  Up: Concepts

Stages in Using Bison
=====================

   The actual language-design process using Bison, from grammar
specification to a working compiler or interpreter, has these parts:

  1. Formally specify the grammar in a form recognized by Bison (*note
     Bison Grammar Files: Grammar File.).  For each grammatical rule in
     the language, describe the action that is to be taken when an
     instance of that rule is recognized.  The action is described by a
     sequence of C statements.

  2. Write a lexical analyzer to process input and pass tokens to the
     parser.  The lexical analyzer may be written by hand in C (*note
     The Lexical Analyzer Function `yylex': Lexical.).  It could also
     be produced using Lex, but the use of Lex is not discussed in this
     manual.

  3. Write a controlling function that calls the Bison-produced parser.

  4. Write error-reporting routines.

   To turn this source code as written into a runnable program, you
must follow these steps:

  1. Run Bison on the grammar to produce the parser.

  2. Compile the code output by Bison, as well as any other source
     files.

  3. Link the object files to produce the finished product.
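
   For instance, assuming the grammar is in a file named `foo.y' and
the lexical analyzer and controlling function are in `foo-lex.c' and
`foo-main.c' (file names chosen only for this sketch), the three steps
might look like this:

     bison foo.y                              # produces `foo.tab.c'
     cc -c foo.tab.c foo-lex.c foo-main.c     # compile each source file
     cc -o foo foo.tab.o foo-lex.o foo-main.o # link the object files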


File: bison.info,  Node: Grammar Layout,  Prev: Stages,  Up: Concepts

The Overall Layout of a Bison Grammar
=====================================

   The input file for the Bison utility is a "Bison grammar file".  The
general form of a Bison grammar file is as follows:

     %{
     PROLOGUE
     %}
     
     BISON DECLARATIONS
     
     %%
     GRAMMAR RULES
     %%
     EPILOGUE

The `%%', `%{' and `%}' are punctuation that appears in every Bison
grammar file to separate the sections.

   The prologue may define types and variables used in the actions.
You can also use preprocessor commands to define macros used there, and
use `#include' to include header files that do any of these things.

   The Bison declarations declare the names of the terminal and
nonterminal symbols, and may also describe operator precedence and the
data types of semantic values of various symbols.

   The grammar rules define how to construct each nonterminal symbol
from its parts.

   The epilogue can contain any code you want to use.  Often the
definition of the lexical analyzer `yylex' goes here, plus subroutines
called by the actions in the grammar rules.  In a simple program, all
the rest of the program can go here.


File: bison.info,  Node: Examples,  Next: Grammar File,  Prev: Concepts,  Up: Top

Examples
********

   Now we show and explain three sample programs written using Bison: a
reverse polish notation calculator, an algebraic (infix) notation
calculator, and a multi-function calculator.  All three have been tested
under BSD Unix 4.3; each produces a usable, though limited, interactive
desk-top calculator.

   These examples are simple, but Bison grammars for real programming
languages are written the same way.  You can copy these examples out of
the Info file and into a source file to try them.

* Menu:

* RPN Calc::          Reverse polish notation calculator;
                        a first example with no operator precedence.
* Infix Calc::        Infix (algebraic) notation calculator.
                        Operator precedence is introduced.
* Simple Error Recovery::  Continuing after syntax errors.
* Location Tracking Calc:: Demonstrating the use of @N and @$.
* Multi-function Calc::  Calculator with memory and trig functions.
                           It uses multiple data-types for semantic values.
* Exercises::         Ideas for improving the multi-function calculator.


File: bison.info,  Node: RPN Calc,  Next: Infix Calc,  Up: Examples

Reverse Polish Notation Calculator
==================================

   The first example is that of a simple double-precision "reverse
polish notation" calculator (a calculator using postfix operators).
This example provides a good starting point, since operator precedence
is not an issue.  The second example will illustrate how operator
precedence is handled.

   The source code for this calculator is named `rpcalc.y'.  The `.y'
extension is a convention used for Bison input files.

* Menu:

* Decls: Rpcalc Decls.  Prologue (declarations) for rpcalc.
* Rules: Rpcalc Rules.  Grammar Rules for rpcalc, with explanation.
* Lexer: Rpcalc Lexer.  The lexical analyzer.
* Main: Rpcalc Main.    The controlling function.
* Error: Rpcalc Error.  The error reporting function.
* Gen: Rpcalc Gen.      Running Bison on the grammar file.
* Comp: Rpcalc Compile. Run the C compiler on the output code.


File: bison.info,  Node: Rpcalc Decls,  Next: Rpcalc Rules,  Up: RPN Calc

Declarations for `rpcalc'
-------------------------

   Here are the C and Bison declarations for the reverse polish notation
calculator.  As in C, comments are placed between `/*...*/'.

     /* Reverse polish notation calculator.  */
     
     %{
     #define YYSTYPE double
     #include <math.h>
     %}
     
     %token NUM
     
     %% /* Grammar rules and actions follow.  */

   The declarations section (*note The prologue: Prologue.) contains two
preprocessor directives.

   The `#define' directive defines the macro `YYSTYPE', thus specifying
the C data type for semantic values of both tokens and groupings (*note
Data Types of Semantic Values: Value Type.).  The Bison parser will use
whatever type `YYSTYPE' is defined as; if you don't define it, `int' is
the default.  Because we specify `double', each token and each
expression has an associated value, which is a floating point number.

   The `#include' directive is used to declare the exponentiation
function `pow'.

   The second section, Bison declarations, provides information to Bison
about the token types (*note The Bison Declarations Section: Bison
Declarations.).  Each terminal symbol that is not a single-character
literal must be declared here.  (Single-character literals normally
don't need to be declared.)  In this example, all the arithmetic
operators are designated by single-character literals, so the only
terminal symbol that needs to be declared is `NUM', the token type for
numeric constants.


File: bison.info,  Node: Rpcalc Rules,  Next: Rpcalc Lexer,  Prev: Rpcalc Decls,  Up: RPN Calc

Grammar Rules for `rpcalc'
--------------------------

   Here are the grammar rules for the reverse polish notation
calculator.

     input:    /* empty */
             | input line
     ;
     
     line:     '\n'
             | exp '\n'  { printf ("\t%.10g\n", $1); }
     ;
     
     exp:      NUM             { $$ = $1;         }
             | exp exp '+'     { $$ = $1 + $2;    }
             | exp exp '-'     { $$ = $1 - $2;    }
             | exp exp '*'     { $$ = $1 * $2;    }
             | exp exp '/'     { $$ = $1 / $2;    }
           /* Exponentiation */
             | exp exp '^'     { $$ = pow ($1, $2); }
           /* Unary minus    */
             | exp 'n'         { $$ = -$1;        }
     ;
     %%

   The groupings of the rpcalc "language" defined here are the
expression (given the name `exp'), the line of input (`line'), and the
complete input transcript (`input').  Each of these nonterminal symbols
has several alternate rules, joined by the `|' punctuator which is read
as "or".  The following sections explain what these rules mean.

   The semantics of the language is determined by the actions taken
when a grouping is recognized.  The actions are the C code that appears
inside braces.  *Note Actions::.

   You must specify these actions in C, but Bison provides the means for
passing semantic values between the rules.  In each action, the
pseudo-variable `$$' stands for the semantic value for the grouping
that the rule is going to construct.  Assigning a value to `$$' is the
main job of most actions.  The semantic values of the components of the
rule are referred to as `$1', `$2', and so on.

* Menu:

* Rpcalc Input::
* Rpcalc Line::
* Rpcalc Expr::


File: bison.info,  Node: Rpcalc Input,  Next: Rpcalc Line,  Up: Rpcalc Rules

Explanation of `input'
......................

   Consider the definition of `input':

     input:    /* empty */
             | input line
     ;

   This definition reads as follows: "A complete input is either an
empty string, or a complete input followed by an input line".  Notice
that "complete input" is defined in terms of itself.  This definition
is said to be "left recursive" since `input' always appears as the
leftmost symbol in the sequence.  *Note Recursive Rules: Recursion.

   The first alternative is empty because there are no symbols between
the colon and the first `|'; this means that `input' can match an empty
string of input (no tokens).  We write the rules this way because it is
legitimate to type `Ctrl-d' right after you start the calculator.  It's
conventional to put an empty alternative first and write the comment
`/* empty */' in it.

   The second alternate rule (`input line') handles all nontrivial
input.  It means, "After reading any number of lines, read one more
line if possible."  The left recursion makes this rule into a loop.
Since the first alternative matches empty input, the loop can be
executed zero or more times.

   The parser function `yyparse' continues to process input until a
grammatical error is seen or the lexical analyzer says there are no more
input tokens; we will arrange for the latter to happen at end-of-input.
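
   For comparison only (rpcalc does not use this form), the same
language could also be described with right recursion:

     input:    /* empty */
             | line input
     ;

Bison accepts this as well, but a right recursive rule consumes parser
stack space in proportion to the number of lines read, which is one
reason the left recursive form is preferable (*note Recursive Rules:
Recursion.).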


File: bison.info,  Node: Rpcalc Line,  Next: Rpcalc Expr,  Prev: Rpcalc Input,  Up: Rpcalc Rules

Explanation of `line'
.....................

   Now consider the definition of `line':

     line:     '\n'
             | exp '\n'  { printf ("\t%.10g\n", $1); }
     ;

   The first alternative is a token which is a newline character; this
means that rpcalc accepts a blank line (and ignores it, since there is
no action).  The second alternative is an expression followed by a
newline.  This is the alternative that makes rpcalc useful.  The
semantic value of the `exp' grouping is the value of `$1' because the
`exp' in question is the first symbol in the alternative.  The action
prints this value, which is the result of the computation the user
asked for.

   This action is unusual because it does not assign a value to `$$'.
As a consequence, the semantic value associated with the `line' is
uninitialized (its value will be unpredictable).  This would be a bug if
that value were ever used, but we don't use it: once rpcalc has printed
the value of the user's input line, that value is no longer needed.


File: bison.info,  Node: Rpcalc Expr,  Prev: Rpcalc Line,  Up: Rpcalc Rules

Explanation of `exp'
....................

   The `exp' grouping has several rules, one for each kind of
expression.  The first rule handles the simplest expressions: those
that are just numbers.  The second handles an addition-expression,
which looks like two expressions followed by a plus-sign.  The third
handles subtraction, and so on.

     exp:      NUM
             | exp exp '+'     { $$ = $1 + $2;    }
             | exp exp '-'     { $$ = $1 - $2;    }
             ...
             ;

   We have used `|' to join all the rules for `exp', but we could
equally well have written them separately:

     exp:      NUM ;
     exp:      exp exp '+'     { $$ = $1 + $2;    } ;
     exp:      exp exp '-'     { $$ = $1 - $2;    } ;
             ...

   Most of the rules have actions that compute the value of the
expression in terms of the value of its parts.  For example, in the
rule for addition, `$1' refers to the first component `exp' and `$2'
refers to the second one.  The third component, `'+'', has no meaningful
associated semantic value, but if it had one you could refer to it as
`$3'.  When `yyparse' recognizes a sum expression using this rule, the
sum of the two subexpressions' values is produced as the value of the
entire expression.  *Note Actions::.

   You don't have to give an action for every rule.  When a rule has no
action, Bison by default copies the value of `$1' into `$$'.  This is
what happens in the first rule (the one that uses `NUM').

   The formatting shown here is the recommended convention, but Bison
does not require it.  You can add or change white space as much as you
wish.  For example, this:

     exp   : NUM | exp exp '+' {$$ = $1 + $2; } | ...

means the same thing as this:

     exp:      NUM
             | exp exp '+'    { $$ = $1 + $2; }
             | ...

The latter, however, is much more readable.


File: bison.info,  Node: Rpcalc Lexer,  Next: Rpcalc Main,  Prev: Rpcalc Rules,  Up: RPN Calc

The `rpcalc' Lexical Analyzer
-----------------------------

   The lexical analyzer's job is low-level parsing: converting
characters or sequences of characters into tokens.  The Bison parser
gets its tokens by calling the lexical analyzer.  *Note The Lexical
Analyzer Function `yylex': Lexical.

   Only a simple lexical analyzer is needed for the RPN calculator.
This lexical analyzer skips blanks and tabs, then reads in numbers as
`double' and returns them as `NUM' tokens.  Any other character that
isn't part of a number is a separate token.  Note that the token-code
for such a single-character token is the character itself.

   The return value of the lexical analyzer function is a numeric code
which represents a token type.  The same text used in Bison rules to
stand for this token type is also a C expression for the numeric code
for the type.  This works in two ways.  If the token type is a
character literal, then its numeric code is that of the character; you
can use the same character literal in the lexical analyzer to express
the number.  If the token type is an identifier, that identifier is
defined by Bison as a C macro whose definition is the appropriate
number.  In this example, therefore, `NUM' becomes a macro for `yylex'
to use.

   The semantic value of the token (if it has one) is stored into the
global variable `yylval', which is where the Bison parser will look for
it.  (The C data type of `yylval' is `YYSTYPE', which was defined at
the beginning of the grammar; *note Declarations for `rpcalc': Rpcalc
Decls..)

   A token type code of zero is returned if the end-of-input is
encountered.  (Bison recognizes any nonpositive value as indicating
end-of-input.)

   Here is the code for the lexical analyzer:

     /* The lexical analyzer returns a double floating point
        number on the stack and the token NUM, or the numeric code
        of the character read if not a number.  It skips all blanks
        and tabs, and returns 0 for end-of-input.  */
     
     #include <ctype.h>
     
     int
     yylex (void)
     {
       int c;
     
       /* Skip white space.  */
       while ((c = getchar ()) == ' ' || c == '\t')
         ;
       /* Process numbers.  */
       if (c == '.' || isdigit (c))
         {
           ungetc (c, stdin);
           scanf ("%lf", &yylval);
           return NUM;
         }
       /* Return end-of-input.  */
       if (c == EOF)
         return 0;
       /* Return a single char.  */
       return c;
     }


File: bison.info,  Node: Rpcalc Main,  Next: Rpcalc Error,  Prev: Rpcalc Lexer,  Up: RPN Calc

The Controlling Function
------------------------

   In keeping with the spirit of this example, the controlling function
is kept to the bare minimum.  The only requirement is that it call
`yyparse' to start the process of parsing.

     int
     main (void)
     {
       return yyparse ();
     }


File: bison.info,  Node: Rpcalc Error,  Next: Rpcalc Gen,  Prev: Rpcalc Main,  Up: RPN Calc

The Error Reporting Routine
---------------------------

   When `yyparse' detects a syntax error, it calls the error reporting
function `yyerror' to print an error message (usually but not always
`"parse error"').  It is up to the programmer to supply `yyerror'
(*note Parser C-Language Interface: Interface.), so here is the
definition we will use:

     #include <stdio.h>
     
     void
     yyerror (const char *s)  /* called by yyparse on error */
     {
       printf ("%s\n", s);
     }

   After `yyerror' returns, the Bison parser may recover from the error
and continue parsing if the grammar contains a suitable error rule
(*note Error Recovery::).  Otherwise, `yyparse' returns nonzero.  We
have not written any error rules in this example, so any invalid input
will cause the calculator program to exit.  This is not clean behavior
for a real calculator, but it is adequate for the first example.


File: bison.info,  Node: Rpcalc Gen,  Next: Rpcalc Compile,  Prev: Rpcalc Error,  Up: RPN Calc

Running Bison to Make the Parser
--------------------------------

   Before running Bison to produce a parser, we need to decide how to
arrange all the source code in one or more source files.  For such a
simple example, the easiest thing is to put everything in one file.  The
definitions of `yylex', `yyerror' and `main' go at the end, in the
epilogue of the file (*note The Overall Layout of a Bison Grammar:
Grammar Layout.).

   For a large project, you would probably have several source files,
and use `make' to arrange to recompile them.

   With all the source in a single file, you use the following command
to convert it into a parser file:

     bison FILE_NAME.y

In this example the file was called `rpcalc.y' (for "Reverse Polish
CALCulator").  Bison produces a file named `FILE_NAME.tab.c', removing
the `.y' from the original file name.  The file output by Bison
contains the source code for `yyparse'.  The additional functions in
the input file (`yylex', `yyerror' and `main') are copied verbatim to
the output.
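
   Concretely, for this example the command is simply:

     bison rpcalc.y

which writes the parser, including `yyparse', into `rpcalc.tab.c'.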


File: bison.info,  Node: Rpcalc Compile,  Prev: Rpcalc Gen,  Up: RPN Calc

Compiling the Parser File
-------------------------

   Here is how to compile and run the parser file:

     # List files in current directory.
     $ ls
     rpcalc.tab.c  rpcalc.y
     
     # Compile the Bison parser.
     # `-lm' tells compiler to search math library for `pow'.
     $ cc -o rpcalc rpcalc.tab.c -lm
     
     # List files again.
     $ ls
     rpcalc  rpcalc.tab.c  rpcalc.y

   The file `rpcalc' now contains the executable code.  Here is an
example session using `rpcalc'.

     $ rpcalc
     4 9 +
     13
     3 7 + 3 4 5 *+-
     -13
     3 7 + 3 4 5 * + - n              Note the unary minus, `n'
     13
     5 6 / 4 n +
     -3.166666667
     3 4 ^                            Exponentiation
     81
     ^D                               End-of-file indicator
     $


File: bison.info,  Node: Infix Calc,  Next: Simple Error Recovery,  Prev: RPN Calc,  Up: Examples

Infix Notation Calculator: `calc'
=================================

   We now modify rpcalc to handle infix operators instead of postfix.
Infix notation involves the concept of operator precedence and the need
for parentheses nested to arbitrary depth.  Here is the Bison code for
`calc.y', an infix desk-top calculator.

     /* Infix notation calculator--calc */
     
     %{
     #define YYSTYPE double
     #include <math.h>
     %}
     
     /* BISON Declarations */
     %token NUM
     %left '-' '+'
     %left '*' '/'
     %left NEG     /* negation--unary minus */
     %right '^'    /* exponentiation        */
     
     /* Grammar follows */
     %%
     input:    /* empty string */
             | input line
     ;
     
     line:     '\n'
             | exp '\n'  { printf ("\t%.10g\n", $1); }
     ;
     
     exp:      NUM                { $$ = $1;         }
             | exp '+' exp        { $$ = $1 + $3;    }
             | exp '-' exp        { $$ = $1 - $3;    }
             | exp '*' exp        { $$ = $1 * $3;    }
             | exp '/' exp        { $$ = $1 / $3;    }
             | '-' exp  %prec NEG { $$ = -$2;        }
             | exp '^' exp        { $$ = pow ($1, $3); }
             | '(' exp ')'        { $$ = $2;         }
     ;
     %%

The functions `yylex', `yyerror' and `main' can be the same as before.

   There are two important new features shown in this code.

   In the second section (Bison declarations), `%left' declares token
types and says they are left-associative operators.  The declarations
`%left' and `%right' (right associativity) take the place of `%token'
which is used to declare a token type name without associativity.
(These tokens are single-character literals, which ordinarily don't
need to be declared.  We declare them here to specify the
associativity.)

   Operator precedence is determined by the line ordering of the
declarations; the higher the line number of the declaration (lower on
the page or screen), the higher the precedence.  Hence, exponentiation
has the highest precedence, unary minus (`NEG') is next, followed by
`*' and `/', and so on.  *Note Operator Precedence: Precedence.

   The other important new feature is the `%prec' in the grammar
section for the unary minus operator.  The `%prec' simply instructs
Bison that the rule `| '-' exp' has the same precedence as `NEG'--in
this case the next-to-highest.  *Note Context-Dependent Precedence:
Contextual Precedence.

   Here is a sample run of `calc.y':

     $ calc
     4 + 4.5 - (34/(8*3+-3))
     6.880952381
     -56 + 2
     -54
     3 ^ 2
     9


File: bison.info,  Node: Simple Error Recovery,  Next: Location Tracking Calc,  Prev: Infix Calc,  Up: Examples

Simple Error Recovery
=====================

   Up to this point, this manual has not addressed the issue of "error
recovery"--how to continue parsing after the parser detects a syntax
error.  All we have handled is error reporting with `yyerror'.  Recall
that by default `yyparse' returns after calling `yyerror'.  This means
that an erroneous input line causes the calculator program to exit.
Now we show how to rectify this deficiency.

   The Bison language itself includes the reserved word `error', which
may be included in the grammar rules.  In the example below it has been
added to one of the alternatives for `line':

     line:     '\n'
             | exp '\n'   { printf ("\t%.10g\n", $1); }
             | error '\n' { yyerrok;                  }
     ;

   This addition to the grammar allows for simple error recovery in the
event of a parse error.  If an expression that cannot be evaluated is
read, the error will be recognized by the third rule for `line', and
parsing will continue.  (The `yyerror' function is still called upon to
print its message as well.)  The action executes the statement
`yyerrok', a macro defined automatically by Bison; its meaning is that
error recovery is complete (*note Error Recovery::).  Note the
difference between `yyerrok' and `yyerror'; neither one is a misprint.

   This form of error recovery deals with syntax errors.  There are
other kinds of errors; for example, division by zero, which raises an
exception signal that is normally fatal.  A real calculator program
must handle this signal and use `longjmp' to return to `main' and
resume parsing input lines; it would also have to discard the rest of
the current line of input.  We won't discuss this issue further because
it is not specific to Bison programs.


File: bison.info,  Node: Location Tracking Calc,  Next: Multi-function Calc,  Prev: Simple Error Recovery,  Up: Examples

Location Tracking Calculator: `ltcalc'
======================================

   This example extends the infix notation calculator with location
tracking.  This feature will be used to improve the error messages.  For
the sake of clarity, this example is a simple integer calculator, since
most of the work needed to use locations will be done in the lexical
analyzer.

* Menu:

* Decls: Ltcalc Decls.  Bison and C declarations for ltcalc.
* Rules: Ltcalc Rules.  Grammar rules for ltcalc, with explanations.
* Lexer: Ltcalc Lexer.  The lexical analyzer.


File: bison.info,  Node: Ltcalc Decls,  Next: Ltcalc Rules,  Up: Location Tracking Calc

Declarations for `ltcalc'
-------------------------

   The C and Bison declarations for the location tracking calculator
are nearly the same as the declarations for the infix notation
calculator; the only difference is that `YYSTYPE' is now defined as
`int' rather than `double', since this calculator works on integers.

     /* Location tracking calculator.  */
     
     %{
     #define YYSTYPE int
     #include <math.h>
     %}
     
     /* Bison declarations.  */
     %token NUM
     
     %left '-' '+'
     %left '*' '/'
     %left NEG
     %right '^'
     
     %% /* Grammar follows */

Note there are no declarations specific to locations.  Defining a data
type for storing locations is not needed: we will use the type provided
by default (*note Data Types of Locations: Location Type.), which is a
four member structure with the following integer fields: `first_line',
`first_column', `last_line' and `last_column'.
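
   In other words, the default location type (which the output parser
calls `YYLTYPE') behaves as if it were declared along these lines; the
sketch below is for illustration only, since the parser file already
provides the actual definition:

     typedef struct YYLTYPE
     {
       int first_line;
       int first_column;
       int last_line;
       int last_column;
     } YYLTYPE;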


File: bison.info,  Node: Ltcalc Rules,  Next: Ltcalc Lexer,  Prev: Ltcalc Decls,  Up: Location Tracking Calc

Grammar Rules for `ltcalc'
--------------------------

   Whether or not you handle locations has no effect on the syntax of
your language.  Therefore, the grammar rules for this example are very
close to those of the previous example: we only modify them to take
advantage of the new information.

   Here, we use locations to report divisions by zero and to pinpoint
the offending expressions or subexpressions.

     input   : /* empty */
             | input line
     ;
     
     line    : '\n'
             | exp '\n' { printf ("%d\n", $1); }
     ;
     
     exp     : NUM           { $$ = $1; }
             | exp '+' exp   { $$ = $1 + $3; }
             | exp '-' exp   { $$ = $1 - $3; }
             | exp '*' exp   { $$ = $1 * $3; }
             | exp '/' exp
                 {
                   if ($3)
                     $$ = $1 / $3;
                   else
                     {
                       $$ = 1;
                       fprintf (stderr, "%d.%d-%d.%d: division by zero",
                                @3.first_line, @3.first_column,
                                @3.last_line, @3.last_column);
                     }
                 }
             | '-' exp %prec NEG     { $$ = -$2; }
             | exp '^' exp           { $$ = pow ($1, $3); }
             | '(' exp ')'           { $$ = $2; }

   This code shows how to reach locations inside of semantic actions, by
using the pseudo-variables `@N' for rule components, and the
pseudo-variable `@$' for groupings.

   We don't need to assign a value to `@$': the output parser does it
automatically.  By default, before executing the C code of each action,
`@$' is set to range from the beginning of `@1' to the end of `@N', for
a rule with N components.  This behavior can be redefined (*note
Default Action for Locations: Location Default Action.), and for very
specific rules, `@$' can be computed by hand.
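
   As a sketch of computing `@$' by hand, an action may simply
overwrite its fields; the rule chosen below is only an illustration,
and merely recomputes what the default action would have produced
anyway:

     exp     : exp '+' exp
                 {
                   $$ = $1 + $3;
                   @$.first_line   = @1.first_line;
                   @$.first_column = @1.first_column;
                   @$.last_line    = @3.last_line;
                   @$.last_column  = @3.last_column;
                 }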


File: bison.info,  Node: Ltcalc Lexer,  Prev: Ltcalc Rules,  Up: Location Tracking Calc

The `ltcalc' Lexical Analyzer.
------------------------------

   Until now, we relied on Bison's defaults to enable location
tracking.  The next step is to rewrite the lexical analyzer, and make it
able to feed the parser with the token locations, as it already does for
semantic values.

   To this end, we must take into account every single character of the
input text, to keep the computed locations from being fuzzy or wrong:

     int
     yylex (void)
     {
       int c;
     
       /* Skip white space.  */
       while ((c = getchar ()) == ' ' || c == '\t')
         ++yylloc.last_column;
     
       /* Step.  */
       yylloc.first_line = yylloc.last_line;
       yylloc.first_column = yylloc.last_column;
     
       /* Process numbers.  */
       if (isdigit (c))
         {
           yylval = c - '0';
           ++yylloc.last_column;
           while (isdigit (c = getchar ()))
             {
               ++yylloc.last_column;
               yylval = yylval * 10 + c - '0';
             }
           ungetc (c, stdin);
           return NUM;
         }
     
       /* Return end-of-input.  */
       if (c == EOF)
         return 0;
     
       /* Return a single char, and update location.  */
       if (c == '\n')
         {
           ++yylloc.last_line;
           yylloc.last_column = 0;
         }
       else
         ++yylloc.last_column;
       return c;
     }

   Basically, the lexical analyzer performs the same processing as
before: it skips blanks and tabs, and reads numbers or single-character
tokens.  In addition, it updates `yylloc', the global variable (of type
`YYLTYPE') containing the token's location.

   Now, each time this function returns a token, the parser has its
number as well as its semantic value, and its location in the text.
The last needed change is to initialize `yylloc', for example in the
controlling function:

     int
     main (void)
     {
       yylloc.first_line = yylloc.last_line = 1;
       yylloc.first_column = yylloc.last_column = 0;
       return yyparse ();
     }

   Remember that computing locations is not a matter of syntax.  Every
character must be associated with a location update, whether it is in
valid input, in comments, in literal strings, and so on.


File: bison.info,  Node: Multi-function Calc,  Next: Exercises,  Prev: Location Tracking Calc,  Up: Examples

Multi-Function Calculator: `mfcalc'
===================================

   Now that the basics of Bison have been discussed, it is time to move
on to a more advanced problem.  The above calculators provided only five
functions, `+', `-', `*', `/' and `^'.  It would be nice to have a
calculator that provides other mathematical functions such as `sin',
`cos', etc.

   It is easy to add new operators to the infix calculator as long as
they are only single-character literals.  The lexical analyzer `yylex'
passes back all nonnumber characters as tokens, so new grammar rules
suffice for adding a new operator.  But we want something more
flexible: built-in functions whose syntax has this form:

     FUNCTION_NAME (ARGUMENT)

At the same time, we will add memory to the calculator, by allowing you
to create named variables, store values in them, and use them later.
Here is a sample session with the multi-function calculator:

     $ mfcalc
     pi = 3.141592653589
     3.1415926536
     sin(pi)
     0.0000000000
     alpha = beta1 = 2.3
     2.3000000000
     alpha
     2.3000000000
     ln(alpha)
     0.8329091229
     exp(ln(beta1))
     2.3000000000
     $

   Note that multiple assignment and nested function calls are
permitted.

* Menu:

* Decl: Mfcalc Decl.      Bison declarations for multi-function calculator.
* Rules: Mfcalc Rules.    Grammar rules for the calculator.
* Symtab: Mfcalc Symtab.  Symbol table management subroutines.


File: bison.info,  Node: Mfcalc Decl,  Next: Mfcalc Rules,  Up: Multi-function Calc

Declarations for `mfcalc'
-------------------------

   Here are the C and Bison declarations for the multi-function
calculator.

     %{
     #include <math.h>  /* For math functions, cos(), sin(), etc.  */
     #include "calc.h"  /* Contains definition of `symrec'        */
     %}
     %union {
     double     val;  /* For returning numbers.                   */
     symrec  *tptr;   /* For returning symbol-table pointers      */
     }
     
     %token <val>  NUM        /* Simple double precision number   */
     %token <tptr> VAR FNCT   /* Variable and Function            */
     %type  <val>  exp
     
     %right '='
     %left '-' '+'
     %left '*' '/'
     %left NEG     /* Negation--unary minus */
     %right '^'    /* Exponentiation        */
     
     /* Grammar follows */
     
     %%

   The above grammar introduces only two new features of the Bison
language.  These features allow semantic values to have various data
types (*note More Than One Value Type: Multiple Types.).

   The `%union' declaration specifies the entire list of possible types;
this is instead of defining `YYSTYPE'.  The allowable types are now
double-floats (for `exp' and `NUM') and pointers to entries in the
symbol table.  *Note The Collection of Value Types: Union Decl.

   Since values can now have various types, it is necessary to
associate a type with each grammar symbol whose semantic value is used.
These symbols are `NUM', `VAR', `FNCT', and `exp'.  Their declarations
are augmented with information about their data type (placed between
angle brackets).

   The Bison construct `%type' is used for declaring nonterminal
symbols, just as `%token' is used for declaring token types.  We have
not used `%type' before because nonterminal symbols are normally
declared implicitly by the rules that define them.  But `exp' must be
declared explicitly so we can specify its value type.  *Note
Nonterminal Symbols: Type Decl.


File: bison.info,  Node: Mfcalc Rules,  Next: Mfcalc Symtab,  Prev: Mfcalc Decl,  Up: Multi-function Calc

Grammar Rules for `mfcalc'
--------------------------

   Here are the grammar rules for the multi-function calculator.  Most
of them are copied directly from `calc'; three rules, those which
mention `VAR' or `FNCT', are new.

     input:   /* empty */
             | input line
     ;
     
     line:
               '\n'
             | exp '\n'   { printf ("\t%.10g\n", $1); }
             | error '\n' { yyerrok;                  }
     ;
     
     exp:      NUM                { $$ = $1;                         }
             | VAR                { $$ = $1->value.var;              }
             | VAR '=' exp        { $$ = $3; $1->value.var = $3;     }
             | FNCT '(' exp ')'   { $$ = (*($1->value.fnctptr))($3); }
             | exp '+' exp        { $$ = $1 + $3;                    }
             | exp '-' exp        { $$ = $1 - $3;                    }
             | exp '*' exp        { $$ = $1 * $3;                    }
             | exp '/' exp        { $$ = $1 / $3;                    }
             | '-' exp  %prec NEG { $$ = -$2;                        }
             | exp '^' exp        { $$ = pow ($1, $3);               }
             | '(' exp ')'        { $$ = $2;                         }
     ;
     /* End of grammar */
     %%


File: bison.info,  Node: Mfcalc Symtab,  Prev: Mfcalc Rules,  Up: Multi-function Calc

The `mfcalc' Symbol Table
-------------------------

   The multi-function calculator requires a symbol table to keep track
of the names and meanings of variables and functions.  This doesn't
affect the grammar rules (except for the actions) or the Bison
declarations, but it requires some additional C functions for support.

   The symbol table itself consists of a linked list of records.  Its
definition, which is kept in the header `calc.h', is as follows.  It
provides for either functions or variables to be placed in the table.

     /* Function type.                                    */
     typedef double (*func_t) (double);
     
     /* Data type for links in the chain of symbols.      */
     struct symrec
     {
       char *name;  /* name of symbol                     */
       int type;    /* type of symbol: either VAR or FNCT */
       union
       {
         double var;                  /* value of a VAR   */
         func_t fnctptr;              /* value of a FNCT  */
       } value;
       struct symrec *next;    /* link field              */
     };
     
     typedef struct symrec symrec;
     
     /* The symbol table: a chain of `struct symrec'.     */
     extern symrec *sym_table;
     
     symrec *putsym (const char *, int);
     symrec *getsym (const char *);

   The new version of `main' includes a call to `init_table', a
function that initializes the symbol table.  Here it is, and
`init_table' as well:

     #include <stdio.h>
     
     int
     main (void)
     {
       init_table ();
       return yyparse ();
     }
     
     void
     yyerror (const char *s)  /* Called by yyparse on error */
     {
       printf ("%s\n", s);
     }
     
     struct init
     {
       char *fname;
       double (*fnct)(double);
     };
     
     struct init arith_fncts[] =
     {
       "sin",  sin,
       "cos",  cos,
       "atan", atan,
       "ln",   log,
       "exp",  exp,
       "sqrt", sqrt,
       0, 0
     };
     
     /* The symbol table: a chain of `struct symrec'.  */
     symrec *sym_table = (symrec *) 0;
     
     /* Put arithmetic functions in table.  */
     void
     init_table (void)
     {
       int i;
       symrec *ptr;
       for (i = 0; arith_fncts[i].fname != 0; i++)
         {
           ptr = putsym (arith_fncts[i].fname, FNCT);
           ptr->value.fnctptr = arith_fncts[i].fnct;
         }
     }

   By simply editing the initialization list and adding the necessary
include files, you can add additional functions to the calculator.

   Two important functions allow look-up and installation of symbols in
the symbol table.  The function `putsym' is passed a name and the type
(`VAR' or `FNCT') of the object to be installed.  The object is linked
to the front of the list, and a pointer to the object is returned.  The
function `getsym' is passed the name of the symbol to look up.  If
found, a pointer to that symbol is returned; otherwise zero is returned.

     #include <stdlib.h>  /* For malloc.  */
     #include <string.h>  /* For strlen, strcpy and strcmp.  */
     
     symrec *
     putsym (const char *sym_name, int sym_type)
     {
       symrec *ptr;
       ptr = (symrec *) malloc (sizeof (symrec));
       ptr->name = (char *) malloc (strlen (sym_name) + 1);
       strcpy (ptr->name,sym_name);
       ptr->type = sym_type;
       ptr->value.var = 0; /* Set value to 0 even if fctn.  */
       ptr->next = (struct symrec *)sym_table;
       sym_table = ptr;
       return ptr;
     }
     
     symrec *
     getsym (const char *sym_name)
     {
       symrec *ptr;
       for (ptr = sym_table; ptr != (symrec *) 0;
            ptr = (symrec *)ptr->next)
         if (strcmp (ptr->name,sym_name) == 0)
           return ptr;
       return 0;
     }

   The function `yylex' must now recognize variables, numeric values,
and the single-character arithmetic operators.  Strings of alphanumeric
characters with a leading non-digit are recognized as either variables
or functions depending on what the symbol table says about them.

   The string is passed to `getsym' for look up in the symbol table.  If
the name appears in the table, a pointer to its location and its type
(`VAR' or `FNCT') is returned to `yyparse'.  If it is not already in
the table, then it is installed as a `VAR' using `putsym'.  Again, a
pointer and its type (which must be `VAR') is returned to `yyparse'.

   No change is needed in the handling of numeric values and arithmetic
operators in `yylex'.

     #include <ctype.h>
     
     int
     yylex (void)
     {
       int c;
     
       /* Ignore white space, get first nonwhite character.  */
       while ((c = getchar ()) == ' ' || c == '\t');
     
       if (c == EOF)
         return 0;
     
       /* Char starts a number => parse the number.         */
       if (c == '.' || isdigit (c))
         {
           ungetc (c, stdin);
           scanf ("%lf", &yylval.val);
           return NUM;
         }
     
       /* Char starts an identifier => read the name.       */
       if (isalpha (c))
         {
           symrec *s;
           static char *symbuf = 0;
           static int length = 0;
           int i;
     
           /* Initially make the buffer long enough
              for a 40-character symbol name.  */
           if (length == 0)
             length = 40, symbuf = (char *)malloc (length + 1);
     
           i = 0;
           do
             {
               /* If buffer is full, make it bigger.        */
               if (i == length)
                 {
                   length *= 2;
                   symbuf = (char *)realloc (symbuf, length + 1);
                 }
               /* Add this character to the buffer.         */
               symbuf[i++] = c;
               /* Get another character.                    */
               c = getchar ();
             }
           while (isalnum (c));
     
           ungetc (c, stdin);
           symbuf[i] = '\0';
     
           s = getsym (symbuf);
           if (s == 0)
             s = putsym (symbuf, VAR);
           yylval.tptr = s;
           return s->type;
         }
     
       /* Any other character is a token by itself.        */
       return c;
     }

   This program is both powerful and flexible.  You may easily add new
functions, and it is a simple job to modify this code to install
predefined variables such as `pi' or `e' as well.
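
   For instance, predefining `pi' as a variable only takes a few extra
lines at the end of `init_table' (the value written here is only a
sketch):

     /* Possible addition to `init_table': predefine `pi'.  */
     ptr = putsym ("pi", VAR);
     ptr->value.var = 3.14159265358979;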


File: bison.info,  Node: Exercises,  Prev: Multi-function Calc,  Up: Examples

Exercises
=========

  1. Add some new functions from `math.h' to the initialization list.

  2. Add another array that contains constants and their values.  Then
     modify `init_table' to add these constants to the symbol table.
     It will be easiest to give the constants type `VAR'.

  3. Make the program report an error if the user refers to an
     uninitialized variable in any way except to store a value in it.


File: bison.info,  Node: Grammar File,  Next: Interface,  Prev: Examples,  Up: Top

Bison Grammar Files
*******************

   Bison takes as input a context-free grammar specification and
produces a C-language function that recognizes correct instances of the
grammar.

   The Bison grammar input file conventionally has a name ending in
`.y'.  *Note Invoking Bison: Invocation.

* Menu:

* Grammar Outline::   Overall layout of the grammar file.
* Symbols::           Terminal and nonterminal symbols.
* Rules::             How to write grammar rules.
* Recursion::         Writing recursive rules.
* Semantics::         Semantic values and actions.
* Locations::         Locations and actions.
* Declarations::      All kinds of Bison declarations are described here.
* Multiple Parsers::  Putting more than one Bison parser in one program.


File: bison.info,  Node: Grammar Outline,  Next: Symbols,  Up: Grammar File

Outline of a Bison Grammar
==========================

   A Bison grammar file has four main sections, shown here with the
appropriate delimiters:

     %{
     PROLOGUE
     %}
     
     BISON DECLARATIONS
     
     %%
     GRAMMAR RULES
     %%
     
     EPILOGUE

   Comments enclosed in `/* ... */' may appear in any of the sections.

* Menu:

* Prologue::          Syntax and usage of the prologue.
* Bison Declarations::  Syntax and usage of the Bison declarations section.
* Grammar Rules::     Syntax and usage of the grammar rules section.
* Epilogue::          Syntax and usage of the epilogue.


File: bison.info,  Node: Prologue,  Next: Bison Declarations,  Up: Grammar Outline

The prologue
------------

   The PROLOGUE section contains macro definitions and declarations of
functions and variables that are used in the actions in the grammar
rules.  These are copied to the beginning of the parser file so that
they precede the definition of `yyparse'.  You can use `#include' to
get the declarations from a header file.  If you don't need any C
declarations, you may omit the `%{' and `%}' delimiters that bracket
this section.

   You may have more than one PROLOGUE section, intermixed with the
BISON DECLARATIONS.  This allows you to have C and Bison declarations
that refer to each other.  For example, the `%union' declaration may
use types defined in a header file, and you may wish to prototype
functions that take arguments of type `YYSTYPE'.  This can be done with
two PROLOGUE blocks, one before and one after the `%union' declaration.

     %{
     #include <stdio.h>
     #include "ptypes.h"
     %}
     
     %union {
       long n;
       tree t;  /* `tree' is defined in `ptypes.h'. */
     }
     
     %{
     static void yyprint(FILE *, int, YYSTYPE);
     #define YYPRINT(F, N, L) yyprint(F, N, L)
     %}
     
     ...


File: bison.info,  Node: Bison Declarations,  Next: Grammar Rules,  Prev: Prologue,  Up: Grammar Outline

The Bison Declarations Section
------------------------------

   The BISON DECLARATIONS section contains declarations that define
terminal and nonterminal symbols, specify precedence, and so on.  In
some simple grammars you may not need any declarations.  *Note Bison
Declarations: Declarations.
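
   For example, the entire Bison declarations section of the infix
notation calculator (*note Infix Notation Calculator: Infix Calc.)
consists of:

     %token NUM
     %left '-' '+'
     %left '*' '/'
     %left NEG     /* negation--unary minus */
     %right '^'    /* exponentiation        */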


File: bison.info,  Node: Grammar Rules,  Next: Epilogue,  Prev: Bison Declarations,  Up: Grammar Outline

The Grammar Rules Section
-------------------------

   The "grammar rules" section contains one or more Bison grammar
rules, and nothing else.  *Note Syntax of Grammar Rules: Rules.

   There must always be at least one grammar rule, and the first `%%'
(which precedes the grammar rules) may never be omitted even if it is
the first thing in the file.


File: bison.info,  Node: Epilogue,  Prev: Grammar Rules,  Up: Grammar Outline

The epilogue
------------

   The EPILOGUE is copied verbatim to the end of the parser file, just
as the PROLOGUE is copied to the beginning.  This is the most convenient
place to put anything that you want to have in the parser file but
which need not come before the definition of `yyparse'.  For example,
the definitions of `yylex' and `yyerror' often go here.  *Note Parser
C-Language Interface: Interface.

   If the last section is empty, you may omit the `%%' that separates it
from the grammar rules.

   The Bison parser itself contains many static variables whose names
start with `yy' and many macros whose names start with `YY'.  It is a
good idea to avoid using any such names (except those documented in this
manual) in the epilogue of the grammar file.