Now we show and explain six sample programs written using Zyacc. The first three examples illustrate features of Zyacc which are also present in yacc. These are a reverse polish notation calculator, an algebraic (infix) notation calculator, and a multi-function calculator. The last three illustrate features which are unique to Zyacc: these include a differently sugared implementation of the multi-function calculator, a calculator which evaluates polynomials, and a lazy person's calculator which allows omitting some sets of parentheses.
The code shown in this manual has been extracted automatically from code which has been tested. These examples are simple, but Zyacc grammars for real programming languages are written the same way.
If you have access to the Zyacc distribution, you will find these
examples under the doc
directory.
The first example is that of a simple double-precision reverse polish notation calculator (a calculator using postfix operators). This example provides a good starting point, since operator precedence is not an issue. The second example will illustrate how operator precedence is handled.
The source code for this calculator is named `rpcalc.y'. The `.y' extension is a convention used for Zyacc input files.
rpcalc Declarations
Here are the C and Zyacc declarations for the reverse polish notation calculator. As in C, comments are placed between `/*...*/'.
/* Reverse polish notation calculator. */

%{
#include <math.h>
#include <stdio.h>

#define YYSTYPE double

int yylex(void);
void yyerror(const char *errMsg);
%}

%token NUM_TOK

%% /* Grammar rules and actions follow */
The C declarations section (see section The C Declarations Section) contains two preprocessor directives.
The #define
directive defines the macro YYSTYPE
, thus
specifying the C data type for semantic values of both tokens and groupings
(see section Data Types of Semantic Values). The Zyacc parser will use whatever type
YYSTYPE
is defined as; if you don't define it, int
is the
default. Because we specify double
, each token and each expression
has an associated value, which is a floating point number.
The #include
directive is used to declare the exponentiation
function pow
.
The second section, Zyacc declarations, provides information to Zyacc
about the token types (see section The Zyacc Declarations Section). Each terminal symbol that is not a
single-character literal must be declared here. (Single-character
literals normally don't need to be declared.) In this example, all the
arithmetic operators are designated by single-character literals, so the
only terminal symbol that needs to be declared is NUM_TOK, the
token type for numeric constants.  (Since Zyacc will #define
NUM_TOK as a C macro, adding the `_TOK' suffix prevents the name from
clashing with identifiers used for other purposes.)
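For example, the parser file generated by Zyacc will contain a
definition along the following lines (the exact numeric value is
chosen by Zyacc and may differ):

#define NUM_TOK 257   /* Token code for numeric constants. */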
rpcalc Grammar Rules
Here are the grammar rules for the reverse polish notation calculator.
input : /* empty */
      | input line
      ;

line  : '\n'
      | exp '\n'     { printf ("\t%.10g\n", $1); }
      ;

exp   : NUM_TOK
      | exp exp '+'  { $$= $1 + $2; }
      | exp exp '-'  { $$= $1 - $2; }
      | exp exp '*'  { $$= $1 * $2; }
      | exp exp '/'  { $$= $1 / $2; }
      /* Exponentiation */
      | exp exp '^'  { $$= pow ($1, $2); }
      /* Unary minus */
      | exp 'n'      { $$= -$1; }
      ;
%%
The groupings of the rpcalc "language" defined here are the expression
(given the name exp
), the line of input (line
), and the
complete input transcript (input
). Each of these nonterminal
symbols has several alternate rules, joined by the `|' punctuator
which is read as "or". The following sections explain what these rules
mean.
The semantics of the language is determined by the actions taken when a grouping is recognized. The actions are the C code that appears inside braces. See section Actions.
You must specify these actions in C, but Zyacc provides the means for
passing semantic values between the rules. In each action, the
pseudo-variable $$
stands for the semantic value for the grouping
that the rule is going to construct. Assigning a value to $$
is the
main job of most actions. The semantic values of the components of the
rule are referred to as $1
, $2
, and so on.
input
Consider the definition of input
:
input : /* empty */
      | input line
      ;
This definition reads as follows: "A complete input is either an empty
string, or a complete input followed by an input line". Notice that
"complete input" is defined in terms of itself. This definition is said
to be left recursive since input
appears always as the
leftmost symbol in the sequence. See section Recursive Rules.
The first alternative is empty because there are no symbols between the
colon and the first `|'; this means that input
can match an
empty string of input (no tokens). We write the rules this way because it
is legitimate to type Ctrl-d right after you start the calculator.
It's conventional to put an empty alternative first and write the comment
`/* empty */' in it.
The second alternate rule (input line
) handles all nontrivial input.
It means, "After reading any number of lines, read one more line if
possible." The left recursion makes this rule into a loop. Since the
first alternative matches empty input, the loop can be executed zero or
more times.
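For example, a transcript consisting of three lines corresponds to the
derivation

  input => input line
        => input line line
        => input line line line
        => /* empty */ line line line

where the final step uses the empty first alternative to end the
recursion.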
The parser function yyparse
continues to process input until a
grammatical error is seen or the lexical analyzer says there are no more
input tokens; we will arrange for the latter to happen at end of file.
line
Now consider the definition of line
:
line : '\n'
     | exp '\n'  { printf ("\t%.10g\n", $1); }
     ;
The first alternative is a token which is a newline character; this means
that rpcalc accepts a blank line (and ignores it, since there is no
action). The second alternative is an expression followed by a newline.
This is the alternative that makes rpcalc useful. The semantic value of
the exp
grouping is the value of $1
because the exp
in
question is the first symbol in the alternative. The action prints this
value, which is the result of the computation the user asked for.
This action is unusual because it does not assign a value to $$
. As
a consequence, the semantic value associated with the line
is
uninitialized (its value will be unpredictable). This would be a bug if
that value were ever used, but we don't use it: once rpcalc has printed the
value of the user's input line, that value is no longer needed.
expr
The exp
grouping has several rules, one for each kind of expression.
The first rule handles the simplest expressions: those that are just numbers.
The second handles an addition-expression, which looks like two expressions
followed by a plus-sign. The third handles subtraction, and so on.
exp : NUM_TOK
    | exp exp '+'  { $$ = $1 + $2; }
    | exp exp '-'  { $$ = $1 - $2; }
    ...
    ;
Note that there is no semantic action specified in the case when an
exp
is a NUM_TOK
. That is because if a rule has no
associated semantic action, then Zyacc automatically generates the
implicit action { $$= $1; }
, which is exactly what we need in
this case.
We have used `|' to join all the rules for exp
, but we could
equally well have written them separately:
exp : NUM_TOK ;

exp : exp exp '+'  { $$ = $1 + $2; } ;

exp : exp exp '-'  { $$ = $1 - $2; } ;

...
Most of the rules have actions that compute the value of the expression in
terms of the value of its parts. For example, in the rule for addition,
$1
refers to the first component exp
and $2
refers to
the second one. The third component, '+'
, has no meaningful
associated semantic value, but if it had one you could refer to it as
$3
. When yyparse
recognizes a sum expression using this
rule, the sum of the two subexpressions' values is produced as the value of
the entire expression. See section Actions.
You don't have to give an action for every rule. When a rule has no
action, Zyacc by default copies the value of $1
into $$
.
This is what happens in the first rule (the one that uses NUM_TOK
).
The formatting shown here is the recommended convention, but Zyacc does not require it. You can add or change whitespace as much as you wish. For example, this:
exp : NUM_TOK | exp exp '+' {$$ = $1 + $2; } | ... ;
means the same thing as this:
exp : NUM_TOK
    | exp exp '+'    { $$ = $1 + $2; }
    | ...
    ;
The latter, however, is much more readable.
rpcalc Lexical Analyzer
The lexical analyzer's job is low-level parsing: converting characters or
sequences of characters into tokens. The Zyacc parser gets its tokens by
calling the lexical analyzer. See section The Lexical Analyzer Function yylex
.
Only a simple lexical analyzer is needed for the RPN calculator. This
lexical analyzer skips blanks and tabs, then reads in numbers as
double
and returns them as NUM_TOK
tokens. Any other character
that isn't part of a number is a separate token. Note that the token-code
for such a single-character token is the character itself.
The return value of the lexical analyzer function is a numeric code which
represents a token type. The same text used in Zyacc rules to stand for
this token type is also a C expression for the numeric code for the type.
This works in two ways. If the token type is a character literal, then its
numeric code is the ASCII code for that character; you can use the same
character literal in the lexical analyzer to express the number. If the
token type is an identifier, that identifier is defined by Zyacc as a C
macro whose definition is the appropriate number. In this example,
therefore, NUM_TOK
becomes a macro for yylex
to use.
The semantic value of the token (if it has one) is stored into the global
variable yylval
, which is where the Zyacc parser will look for it.
(The C data type of yylval
is YYSTYPE
, which was defined
at the beginning of the grammar; see section Declarations for rpcalc
.)
A token type code of zero is returned if the end-of-file is encountered. (Zyacc recognizes any nonpositive value as indicating the end of the input.)
Here is the code for the lexical analyzer:
/* Lexical analyzer returns a double floating point
 * number in yylval and the token NUM_TOK, or the ASCII
 * character read if not a number.  Skips all blanks
 * and tabs, returns 0 for EOF.
 */

#include <ctype.h>

int
yylex(void)
{
  int c;

  /* skip white space */
  while ((c = getchar ()) == ' ' || c == '\t')
    ;

  /* process numbers */
  if (c == '.' || isdigit (c)) {
    ungetc(c, stdin);
    scanf("%lf", &yylval);
    return NUM_TOK;
  }

  /* return end-of-file */
  if (c == EOF) return 0;

  /* return single chars */
  return c;
}
In keeping with the spirit of this example, the controlling function is
kept to the bare minimum. The only requirement is that it call
yyparse
to start the process of parsing.
int main()
{
  return yyparse();
}
When yyparse
detects a syntax error, it calls the error reporting
function yyerror
to print an error message (usually but not always
"parse error"
). It is up to the programmer to supply yyerror
(see section Parser C-Language Interface), so here is the definition we will use:
/* Called by yyparse on error */
void
yyerror(const char *s)
{
  printf("%s\n", s);
}
After yyerror
returns, the Zyacc parser may recover from the error
and continue parsing if the grammar contains a suitable error rule
(see section Error Recovery). Otherwise, yyparse
returns nonzero. We
have not written any error rules in this example, so any invalid input will
cause the calculator program to exit. This is not clean behavior for a
real calculator, but it is adequate in the first example.
Before running Zyacc to produce a parser, we need to decide how to arrange
all the source code in one or more source files. For such a simple example,
the easiest thing is to put everything in one file. The definitions of
yylex
, yyerror
and main
go at the end, in the
"additional C code" section of the file (see section The Overall Layout of a Zyacc Grammar).
For a large project, you would probably have several source files, and use
make
to arrange to recompile them.
With all the source in a single file, you use the following command to convert it into a parser file:
zyacc file_name.y
In this example the file was called `rpcalc.y' (for "Reverse Polish
CALCulator"). Zyacc produces a file named `file_name.tab.c',
removing the `.y' from the original file name. The file output by
Zyacc contains the source code for yyparse
. The additional
functions in the input file (yylex
, yyerror
and main
)
are copied verbatim to the output.
Here is how to compile and run the parser file:
# List files in current directory.
% ls
rpcalc.tab.c rpcalc.y
# Compile the Zyacc parser.
# `-lm' tells compiler to search math library for pow.
% cc rpcalc.tab.c -lm -o rpcalc
# List files again.
% ls
rpcalc rpcalc.tab.c rpcalc.y
The file `rpcalc' now contains the executable code. Here is an
example session using rpcalc
.
$ rpcalc
4 9 +
13
3 7 + 3 4 5 *+-
-13
3 7 + 3 4 5 * + - n      Note the unary minus, `n'
13
5 6 / 4 n +
-3.166666667
3 4 ^                    Exponentiation
81
^D                       End-of-file indicator
$
On an error, rpcalc
simply gives up after printing an error
message as shown below:
$ rpcalc
1 2 + +
parse error
$ echo $?
1
$

The echo $? makes the shell (the Bourne shell or Korn shell)
print the exit code of the last command (rpcalc in this case).
The value 1 is the value returned by the return yyparse()
statement in main().
calc
We now modify rpcalc to handle infix operators instead of postfix. Infix notation involves the concept of operator precedence and the need for parentheses nested to arbitrary depth. Here is the Zyacc code for `calc.y', an infix desk-top calculator.
/* Infix notation calculator--calc */

%{
#include <math.h>
#include <stdio.h>

#define YYSTYPE double

int yylex(void);
void yyerror(const char *errMsg);
%}

/* zyacc declarations */
%token NUM_TOK
%left '-' '+'
%left '*' '/'
%left NEG     /* negation--unary minus */
%right '^'    /* exponentiation */

/* Grammar follows */
%%
input : /* empty */
      | input line
      ;

line  : '\n'
      | exp '\n'  { printf ("\t%.10g\n", $1); }
      ;

exp   : NUM_TOK
      | exp '+' exp        { $$= $1 + $3; }
      | exp '-' exp        { $$= $1 - $3; }
      | exp '*' exp        { $$= $1 * $3; }
      | exp '/' exp        { $$= $1 / $3; }
      | '-' exp  %prec NEG { $$= -$2; }
      | exp '^' exp        { $$= pow ($1, $3); }
      | '(' exp ')'        { $$= $2; }
      ;
%%
The functions yylex
, yyerror
and main
can be the same
as before.
There are two important new features shown in this code.
In the second section (Zyacc declarations), %left
declares token
types and says they are left-associative operators. The declarations
%left
and %right
(right associativity) take the place of
%token
which is used to declare a token type name without
associativity. (These tokens are single-character literals, which
ordinarily don't need to be declared. We declare them here to specify
the associativity.)
Operator precedence is determined by the line ordering of the
declarations; the higher the line number of the declaration (lower on
the page or screen), the higher the precedence. Hence, exponentiation
has the highest precedence, unary minus (NEG
) is next, followed
by `*' and `/', and so on. See section Operator Precedence.
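For instance, given the declarations above, a calc session would
evaluate inputs as shown below (the comments on the right are not part
of the input):

2 - 3 - 4              Left associativity: (2 - 3) - 4
-5
2 ^ 3 ^ 2              Right associativity: 2 ^ (3 ^ 2)
512
-2 ^ 2                 Unary minus binds less tightly than `^'
-4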
The other important new feature is the %prec
in the grammar section
for the unary minus operator. The %prec
simply instructs Zyacc that
the rule `| '-' exp' has the same precedence as NEG
---in this
case the next-to-highest. See section Context-Dependent Precedence.
Here is a sample run of `calc.y':
$ calc
4 + 4.5 - (34/(8*3+-3))
6.880952381
-56 + 2
-54
3 ^ 2
9
Up to this point, this manual has not addressed the issue of error
recovery---how to continue parsing after the parser detects a syntax
error. All we have handled is error reporting with yyerror
. Recall
that by default yyparse
returns after calling yyerror
. This
means that an erroneous input line causes the calculator program to exit.
Now we show how to rectify this deficiency.
The Zyacc language itself includes the reserved word error
, which
may be included in the grammar rules. In the example below it has
been added to one of the alternatives for line
:
line : '\n'
     | exp '\n'    { printf ("\t%.10g\n", $1); }
     | error '\n'  { yyerrok; }
     ;
This addition to the grammar allows for simple error recovery in the event
of a parse error. If an expression that cannot be evaluated is read, the
error will be recognized by the third rule for line
, and parsing
will continue. (The yyerror
function is still called upon to print
its message as well.) The action executes the statement yyerrok
, a
macro defined automatically by Zyacc; its meaning is that error recovery is
complete (see section Error Recovery). Note the difference between
yyerrok
and yyerror
; neither one is a misprint.
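With this error rule in place, a session might look like the
following: the erroneous line is reported, but the calculator keeps
accepting further input.

$ calc
(4 + 5
parse error
4 + 5
9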
This form of error recovery deals with syntax errors. There are other
kinds of errors; for example, division by zero, which raises an exception
signal that is normally fatal. A real calculator program must handle this
signal and use longjmp
to return to main
and resume parsing
input lines; it would also have to discard the rest of the current line of
input. We won't discuss this issue further because it is not specific to
Zyacc programs.
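As an illustration only, the following sketch shows one way main could
be arranged to do this; the handler and jump-buffer names here are
made up and are not part of the calculators given above.

#include <setjmp.h>
#include <signal.h>
#include <stdio.h>

int yyparse(void);

static jmp_buf restartPoint;

/* Called on an arithmetic exception such as SIGFPE. */
static void fpeHandler(int sig)
{
  (void)sig;
  longjmp(restartPoint, 1);   /* Transfer control back to main. */
}

int main(void)
{
  signal(SIGFPE, fpeHandler);
  if (setjmp(restartPoint) != 0) {
    /* We get here after an arithmetic error: discard the rest of the
     * offending input line, then fall through and parse again. */
    int c;
    while ((c = getchar()) != '\n' && c != EOF)
      ;
    fprintf(stderr, "arithmetic error\n");
  }
  return yyparse();
}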
mfcalc
Now that the basics of Zyacc have been discussed, it is time to move on to
a more advanced problem. The above calculators provided only five
functions, `+', `-', `*', `/' and `^'. It would
be nice to have a calculator that provides other mathematical functions such
as sin
, cos
, etc.
It is easy to add new operators to the infix calculator as long as they are
only single-character literals. The lexical analyzer yylex
passes
back all non-number characters as tokens, so new grammar rules suffice for
adding a new operator. But we want something more flexible: built-in
functions whose syntax has this form:
function_name (argument)
At the same time, we will add memory to the calculator, by allowing you to create named variables, store values in them, and use them later. Here is a sample session with the multi-function calculator:
$ mfcalc
pi = 3.141592653589
3.141592654
sin(pi)
7.932657935e-13
alpha = beta1 = 2.3
2.3
alpha
2.3
ln(alpha)
0.8329091229
exp(ln(beta1))
2.3
$
Note that multiple assignment and nested function calls are permitted.
The implementation given below has the scanner first intern identifiers into a separate string-space so that identifiers with the same spelling share the same string-space entry. The scanner returns the intern'd representation of the identifier to the parser. The parser uses the intern'd representation to access a symbol table which keeps track of the properties of the identifier (whether it is a variable or function, the value associated with the identifier).
mfcalc Declarations
The previous grammars had only a single semantic type: the double
value associated with NUM_TOK
s and exp
s. However, now we
will have identifiers used for variables and functions: they will need a
semantic type different from double
. The C and Zyacc
declarations given below for the multi-function calculator use a couple
of new Zyacc features to allow multiple semantic types (see section More Than One Value Type).
%{
#include <ctype.h>
#include <math.h>
#include <stdio.h>
#include <stdlib.h>   /* malloc(), realloc(), free() used below. */
#include <string.h>   /* strcmp() used below. */

int yylex(void);
void yyerror(const char *errMsg);

/* Typedef typical unary <math.h> function. */
typedef double (*MathFnP)(double input);

/* Interface to symbol table. */
static double getIDVal(const char *name);
static MathFnP getIDFn(const char *name);
static void setIDVal(const char *name, double val);
static void setIDFn(const char *name, MathFnP fnP);
%}

%union {
  double val;        /* For returning numbers. */
  const char *id;    /* For returning identifiers. */
}

%token <val> NUM_TOK  /* Double precision number */
%token <id>  ID_TOK   /* Identifiers. */
%type  <val> exp

%right '='
%left '-' '+'
%left '*' '/'
%left NEG     /* Negation--unary minus */
%right '^'    /* Exponentiation */

/* Grammar follows */
%%
The %union
declaration specifies the entire list of possible
types; this is instead of defining YYSTYPE
. The allowable types
are now double-floats (for exp
and NUM_TOK
) and char
*
pointers for the names of variables and functions. See section The Collection of Value Types.
Since values can now have various types, it is necessary to associate a
type with each grammar symbol whose semantic value is used. These
symbols are NUM_TOK
, ID_TOK
, and exp
. Their
declarations are augmented with information about their data type
(placed between angle brackets).
The Zyacc construct %type
is used for declaring nonterminal
symbols, just as %token
is used for declaring token types. We
have not used %type
before because nonterminal symbols are
normally declared implicitly by the rules that define them. But
exp
must be declared explicitly so we can specify its value type.
See section Nonterminal Symbols.
The C declarations section above also declares the functions used to
interface to the symbol table. The symbol table is a mapping from
identifiers to either double
values (for identifiers which are
variables) or function pointers (for identifiers which are functions).
The `get' functions are used to get the value associated with an
identifier; the `set' functions are used to change the value.
mfcalc Grammar Rules
The grammar rules for the multi-function calculator are
identical to those for calc
except for three additional rules
for exp
shown below:
exp : ID_TOK              { $$= getIDVal($1); }
    | ID_TOK '=' exp      { setIDVal($1, $3); $$= $3; }
    | ID_TOK '(' exp ')'  { $$= (*(getIDFn($1)))($3); }
The above additional rules correspond to the cases when an expression
is an identifier, an assignment or a function application
respectively.  For an identifier, the action fetches its current value
with getIDVal().  For an assignment, the action stores the value of
the right-hand side expression using setIDVal() and also makes it the
value of the whole expression.  For a function application, the action
looks up the function associated with the identifier using
getIDFn and applies the corresponding function to the value of the
argument expression.
mfcalc Symbol Table

The multi-function calculator requires a symbol table to keep track of the names and meanings of variables and functions. This doesn't affect the grammar rules (except for the actions) or the Zyacc declarations, but it requires some additional C functions for support.
The symbol table itself consists of a linked list of records. It provides for either functions or variables to be placed in the table. Its definition is as follows:
/* Symbol table ADT. */

/* Possible types for symbols. */
typedef enum { VAR_SYM, FN_SYM } SymType;

typedef struct Sym {
  const char *name;   /* Name of symbol. */
  SymType type;       /* Type of symbol. */
  union {
    double var;       /* Value of a VAR_SYM. */
    MathFnP fn;       /* Value of a FN_SYM. */
  } value;
  struct Sym *succ;   /* Link field. */
} Sym;

/* The symbol table: a chain of Sym's. */
static Sym *symTab;
The Sym
type contains the name
of the identifier and a
type
field which classifies the symbol as either a variable
(type == VAR_SYM
) or function (type == FN_SYM
). Depending
on this type
field, the symbol's value
is in the
union
field var
for a variable or in fn
for a
function. The succ
field is used to chain all symbols together
in a LIFO chain.
The heart of the symbol table module is the getSym()
routine
shown below:
/* Search symTab for name.  If doCreate, then create an entry for
 * it if it is not there.  Return pointer to Sym entry.
 */
static Sym *
getSym(const char *name, unsigned doCreate)
{
  Sym *p;
  for (p= symTab; p != NULL && p->name != name; p= p->succ)
    ;
  if (p == NULL && doCreate) {
    p= malloc(sizeof(Sym));
    p->name= name;
    p->succ= symTab;
    symTab= p;
  }
  return p;
}
getSym()
searches the linear chain of Sym
s rooted in
symTab
for an
identifier with a specified name
. If it finds one, it returns a
pointer to the corresponding Sym
; otherwise if doCreate
is
zero it simply returns NULL
; if doCreate
is
non-zero it creates a new entry for name
and links it in at the
head of the symTab chain.  Note that since identifiers have been
interned in the string-space, getSym() can compare names using simple
pointer equality rather than strcmp().
To get the value or function associated with an identifier, the symbol
table interface routines getIDVal()
and getIDFn()
shown
below can be used:
/* Get value associated with name; signal error if not ok. */
static double
getIDVal(const char *name)
{
  const Sym *p= getSym(name, 0);
  double val= 1.0;          /* A default value. */
  if (!p)
    fprintf(stderr, "No value for %s.\n", name);
  else if (p->type != VAR_SYM)
    fprintf(stderr, "%s is not a variable.\n", name);
  else
    val= p->value.var;
  return val;
}

/* Get function associated with name; signal error if not ok. */
static MathFnP
getIDFn(const char *name)
{
  const Sym *p= getSym(name, 0);
  MathFnP fn= sin;          /* A default value. */
  if (!p)
    fprintf(stderr, "No value for %s.\n", name);
  else if (p->type != FN_SYM)
    fprintf(stderr, "%s is not a function.\n", name);
  else
    fn= p->value.fn;
  return fn;
}
The routines take care of printing out an error message if the
name
is not found or is of an inappropriate type.
Changing the value of a symbol is done in a straightforward manner as shown below:
/* Unconditionally set name to a VAR_SYM with value val. */
static void
setIDVal(const char *name, double val)
{
  Sym *p= getSym(name, 1);
  p->type= VAR_SYM;
  p->value.var= val;
}

/* Unconditionally set name to a FN_SYM with fn ptr fnP. */
static void
setIDFn(const char *name, MathFnP fnP)
{
  Sym *p= getSym(name, 1);
  p->type= FN_SYM;
  p->value.fn= fnP;
}
It is necessary to preload the symbol table with the functions which
will be provided by mfcalc
. This is done as shown below:
/* Initial functions. */
struct {
  const char *name;   /* Name of function. */
  MathFnP fn;         /* Corresponding <math.h> function. */
} initFns[]= {
  { "sin", sin }, { "cos", cos }, { "atan", atan },
  { "ln", log },  { "exp", exp }, { "sqrt", sqrt }
};

static void
initSyms(void)
{
  const unsigned n= sizeof(initFns)/sizeof(initFns[0]);
  unsigned i;
  for (i= 0; i < n; i++) {
    setIDFn(getID(initFns[i].name), initFns[i].fn);
  }
}
By simply editing the initialization list and adding the necessary include files, you can add additional functions to the calculator.
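For instance, a version of initFns extended with two more <math.h>
functions might look as follows; only the two marked entries are new.

/* Initial functions, extended with tan and fabs. */
struct {
  const char *name;   /* Name of function. */
  MathFnP fn;         /* Corresponding <math.h> function. */
} initFns[]= {
  { "sin", sin }, { "cos", cos }, { "atan", atan },
  { "ln", log },  { "exp", exp }, { "sqrt", sqrt },
  { "tan", tan },     /* New entry. */
  { "fabs", fabs }    /* New entry. */
};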
The new version of main
includes a call to initSyms
, the
function defined above which initializes the symbol table:
int main()
{
  initSyms();
  return yyparse();
}
mfcalc Scanner
The function yylex
must now recognize numeric values,
single-character arithmetic operators and identifiers. The recognition
of numeric values and single-character arithmetic operators is exactly
as before. In order to recognize identifiers, the following code
fragment is added to yylex()
after the code for recognizing
numbers:
/* Char starts an identifier => read the name. */
if (isalpha(c)) {
  ungetc(c, stdin);
  yylval.id= readID();
  return ID_TOK;
}
After checking if the current character is a letter, the code fragment
calls readID()
after pushing back the current character.
readID()
reads the current character into a dynamically sized
buffer as shown below. After completing the read, readID()
calls
getID()
routine which interfaces to the string-space.
getID()
will return a unique char *
pointer for the
identifier. If the returned char *
pointer is not equal to the
dynamically allocated buffer, then the identifier had been seen
previously and the dynamic buffer is freed.
/* Read alphanumerics from stdin into a buffer.  Check
 * if identical to previous ident: if so return pointer
 * to previous, else return pointer to new buffer.
 * Assumes char after ident is not an EOF.
 */
static const char *
readID(void)
{
  enum { SIZE_INC= 40 };
  unsigned size= SIZE_INC;
  char *buf= malloc(size);
  unsigned i= 0;
  int c;
  const char *ident;
  do {                          /* Accumulate chars from stdin into buf. */
    c= getchar();
    if (i >= size) buf= realloc(buf, size*= 2);
    buf[i++]= c;
  } while (isalnum(c));
  ungetc(c, stdin);
  buf[i - 1]= '\0';             /* Undo extra read. */
  buf= realloc(buf, i);         /* Resize buf to be only as big as needed. */
  ident= getID(buf);            /* Search string-space. */
  if (ident != buf) free(buf);  /* Previously existed. */
  return ident;
}
mfcalc String Space
The string-space is maintained as a linked list of Ident
s as
shown below. The getID()
function does a linear search through
the linked list for its name
argument: if found it returns a
pointer to the previously entered name
, if not found, it adds a
new entry at the head of the list and returns the name
argument.
/* String space ADT to map identifiers into unique pointers. */
typedef struct Ident {
  const char *name;     /* NUL-terminated chars of identifier. */
  struct Ident *succ;   /* Next entry in linear chain. */
} Ident;

/* The string space is a chain of Ident's. */
static Ident *strSpace;

static const char *
getID(const char *name)
{
  Ident *p;
  for (p= strSpace; p != NULL; p= p->succ) {
    if (strcmp(name, p->name) == 0) break;
  }
  if (!p) {
    p= malloc(sizeof(Ident));
    p->name= name;
    p->succ= strSpace;
    strSpace= p;
  }
  return p->name;
}
It is possible to design mfcalc
so that the scanner interns
identifiers directly using the symbol table, avoiding the need for a
separate string space. Though that is adequate for mfcalc
, such
an approach may be unwieldy for more complex applications because of the
reasons outlined below.
A symbol table is a mapping from identifiers to objects like variables and functions; in many applications (like programming languages with separate name-spaces or multiple scopes) this mapping is typically one-to-many. Hence maintaining a symbol table usually requires understanding the context within which an identifier is used. If the scanner is solely responsible for maintaining the symbol table, then it must keep track of the context within which an identifier is used --- this complicates the scanner. If other higher level modules like the parser share the maintenance of the symbol table with the scanner, then the symbol table interface may become complex due to order-dependencies and feedback between modules.
The separate string table avoids some of these complications. Though
the implementation given for mfcalc
required two searches for
each identifier (one within the string space, the other within the
symbol table), it is possible to get by with a single search.
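For example, one possible arrangement (a sketch only; this is not the
mfcalc implementation) is to let each string-space entry cache a
pointer to its symbol-table record, so that once an identifier has
been interned no second linear search is needed:

/* Hypothetical combined entry; assumes the Sym type defined above. */
typedef struct Ident {
  const char *name;     /* NUL-terminated chars of identifier. */
  struct Sym *sym;      /* Cached symbol record; NULL until created. */
  struct Ident *succ;   /* Next entry in linear chain. */
} Ident;

getSym() would then take the interned Ident rather than a bare name,
allocating and attaching the Sym the first time it is asked to create
an entry.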
One disadvantage of the $i
syntax used for specifying the
semantic attributes of grammar symbols is that correctly specifying the
i is often tedious, especially with long rules. More seriously,
if a rule changes then the i in a $i
may also change:
this can make maintaining such grammars error-prone.
Another disadvantage is that the attributes of grammar symbols are
declared in section 1 of the grammar file (using %union
,
%type
and other declarations), rather than near where they are
actually used in section 2. This conflicts with modern software
engineering practice, where entities are declared near or at the point
of first use.
Zyacc permits syntactic sugar which overcomes these deficiencies. It allows named attribute variables to refer to the semantic attributes, thus making it possible to refer to semantic attributes without having to count grammar symbols. It also allows the semantic attributes of a nonterminal to be declared on the left-hand side of rules for that nonterminal.
The syntax is illustrated by repeating the grammar for the mfcalc
using the new syntax:
input : /* empty */
      | input line
      ;

line  : '\n'
      | exp($v) '\n'  { printf ("\t%.10g\n", $v); }
      | error '\n'    { yyerrok; }
      ;

exp(double $v)
      : ID_TOK($id)                  { $v= getIDVal($id); }
      | ID_TOK($id) '=' exp($v)      { setIDVal($id, $v); }
      | ID_TOK($id) '(' exp($v1) ')' { $v= (*(getIDFn($id)))($v1); }
      | NUM_TOK($v)
      | exp($v1) '+' exp($v2)        { $v= $v1 + $v2; }
      | exp($v1) '-' exp($v2)        { $v= $v1 - $v2; }
      | exp($v1) '*' exp($v2)        { $v= $v1 * $v2; }
      | exp($v1) '/' exp($v2)        { $v= $v1 / $v2; }
      | '-' exp($v1)  %prec NEG      { $v= -$v1; }
      | exp($v1) '^' exp($v2)        { $v= pow($v1, $v2); }
      | '(' exp($v1) ')'             { $v= $v1; }
      ;
/* End of grammar */
%%
The semantic attributes of a grammar symbol are written within parentheses following the grammar symbols using a positional notation similar to that of function application in C. The types of the semantic attributes for a nonterminal are declared on the left-hand side of a rule using a syntax similar to that for function prototype declarations in C.
In the above example, exp
has a single semantic attribute which
represents the value of the expression; its value is computed on-the-fly
as an exp
is parsed. This attribute is named $v
and is
declared to be of type double
on the common left-hand side of the
rules for exp
. This declaration is similar to that of a formal
parameter in a function definition.
The attributes of right-hand side symbols detail the flow of semantic
information among the grammatical constructs constituting the rule. For
example in the first rule for exp
repeated below, the terminal
ID_TOK
representing a variable has an attribute referred to as
$id
within that rule. The action for that rule computes the
attribute $v
for the left-hand side exp
in terms of this
$id
using the function getIDVal()
which looks up the value
of the variable using the current symbol table.
exp(double $v)
      : ID_TOK($id)             { $v= getIDVal($id); }
      | ID_TOK($id) '=' exp($v) { setIDVal($id, $v); }
Similarly, in the second rule which describes an assignment expression,
the $v
for the left-hand side exp
is computed via the
exp
on the right-hand side: this is indicated by using the same
name $v
for both occurrences.
The sugared syntax allows the attributes of a nonterminal to be declared when that nonterminal occurs on the left-hand side of a rule. Since this can never happen for a terminal symbol, the attributes of terminal symbols must still be declared in section 1 of the Zyacc file as follows:
%token <val>(double $v)       NUM_TOK  /* Double precision number */
%token <id>(const char *$id)  ID_TOK   /* Identifiers. */
We have enhanced the <val>
type tag previously used for a
NUM_TOK
by a parenthesized list containing declarations for its
semantic attributes, namely a single attribute named $v
of type
double
. Similarly an ID_TOK
has a semantic type with type
tag <id>
with a single const char *
semantic attribute
named $id
.
If the sugared syntax is used for the entire grammar, then there is no
need for a %union
directive. In fact, Zyacc generates one
automatically; for the above grammar, it will generate something similar
to:
typedef union {
  struct { double v; } val;        /* NUM_TOK */
  struct { const char *id; } id;   /* ID_TOK */
} YYSTYPE;
The union
has a separate struct
field for each sugared
%token
declaration and for each nonterminal.
The semantic attributes of a terminal are stored in the field in the
union
which has a name identical to the <type>
tag
used in the %token
declaration for that terminal. The fields in
the struct
are identical to the attributes declared for that
<type>
tag in the token declaration, except that the
`$'s are removed. Hence the above %token
declarations
result in the fields val
and id
in the union, with each of
them being a struct
containing the fields v
and id
respectively.
Since the programmer controls the names used in the fields in the
union
for terminals, the programmer should use the same names
when setting up semantic information for the terminals in the scanner.
For the above grammar, the only change necessary in the scanner is to
change assignments to yylval
to access the appropriate fields in
the union
. Specifically, the code for numbers
and identifiers in yylex()
is changed to:
/* Char starts a number => parse the number. */
if (c == '.' || isdigit (c)) {
  ungetc (c, stdin);
  scanf ("%lf", &yylval.val.v);
  return NUM_TOK;
}

/* Char starts an identifier => read the name. */
if (isalpha(c)) {
  ungetc(c, stdin);
  yylval.id.id= readID();
  return ID_TOK;
}
The semantic attributes for each nonterminal are stored in a separate
struct
field within the union
. The types of the fields
used within the struct
for a nonterminal are identical to the
types declared for the attributes of that nonterminal. However, the
names used for and within this field are implementation defined; that
should not be a problem, as there is no need for the programmer to
access them directly.
Using these named attributes overcomes the disadvantages mentioned
earlier for the numeric $
-variables. However, the resulting
grammars tend to be somewhat more verbose. The new notation does not by
itself add anything to the power of the grammars, which is why we refer
to it as syntactic sugar. However when used in conjunction with the
features mentioned in the next couple of sections, the named attribute
variables do add to the power of the grammars.
Consider enhancing our calculator with an evaluation operator @
,
which evaluates a polynomial. This operator can be used as in x
@ [5, 2, 3]
to denote the polynomial 5*x^2 + 2*x + 3
. The
polynomial coefficients within the square brackets are allowed to be
arbitrary expressions (including nested polynomial evaluations). The
@
operator should associate to the left and have the highest
precedence (greater than that of exponentiation). An example of the use
of this polynomial calculator is shown below:
$ polycalc
3@[4, 5, 1]
52
2 * 3@[4, 5, 1]
104
3@[4, 2@[8, 4, 2, 1], 1]
292
$
It is easy enough to force the @
operator to have the required
precedence by simply adding a %left
declaration after the
declaration for the exponentiation operator as shown below.
%right '^'    /* Exponentiation */
%left  '@'    /* Polynomial application */
Unfortunately, evaluating the polynomial is not that easy.
One solution to this problem is to not evaluate the polynomial as its
coefficients are being parsed, but to merely store its coefficient
values in some data structure which will be the semantic attribute for
the coefficient list. Once the coefficient list has been parsed, the
data structure containing the coefficients and the value of the point at
which evaluation is requested can be passed to some action routine which
evaluates the polynomial at that point. Assuming that the coefficients
have been evaluated into an (n+1
)-element array coeffs[]
,
a scheme like the following is typical for the action routine:
/* Evaluate nth-degree polynomial
 *   coeffs[n]*point^n + coeffs[n-1]*point^(n-1) + ... + coeffs[0]
 */
double
evalPoly(double point, double coeffs[], int n)
{
  double sum= 0;
  int i;
  for (i= n; i >= 0; i--) sum= sum*point + coeffs[i];
  return sum;
}

Unfortunately, this requires auxiliary storage and a non-trivial data
structure to handle nested polynomials.  Fortunately in Zyacc, the
computation represented by the above code can be performed on-the-fly
during parsing using a special type of semantic attributes known as
inherited attributes described below.
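As an aside, the coefficient ordering assumed by evalPoly (coeffs[0]
holds the constant term) can be checked against the earlier polycalc
session with a small hypothetical test harness:

#include <stdio.h>

double evalPoly(double point, double coeffs[], int n);  /* From above. */

int main(void)
{
  double coeffs[]= { 1, 5, 4 };    /* 4*x^2 + 5*x + 1, constant term first. */
  printf("%g\n", evalPoly(3, coeffs, 2));  /* Prints 52, as in the session. */
  return 0;
}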
All the attributes seen in the previous examples are known as
synthesized attributes. An attribute for a construct is said to
be a synthesized attribute if the value of that attribute
directly depends only on the semantic attributes of the constituents of
that construct and not on the semantic attributes of a surrounding
construct. For example, the value of an exp
is independent of the
larger exp
within which it may appear.
A little reflection shows that synthesized attributes are not sufficient
to evaluate polynomials on-the-fly during parsing: the contribution made
by each coefficient to the final value depends not only on the value of
the coefficient but also on the point at which the polynomial is being
evaluated (the value of the expression before the @
operator),
as well as the position of the coefficient in the list of polynomial
coefficients.
Inherited attributes allow us to get around this problem: they allow us
to pass in information about the surrounding context to a grammatical
construct. Inherited attributes are declared using the %in
keyword as shown in the following extract of the polycalc
grammar:
coeffs(%in double $sum, %in double $point, %out double $v)
      : exp($v1)                               { $v= $sum*$point + $v1; }
      | coeffs($sum, $point, $v1) ',' exp($v2) { $v= $v1*$point + $v2; }
      ;

exp(double $v)
      : exp($v1) '@' '[' coeffs(0.0, $v1, $v) ']'
polycalc
is derived from mfcalc
. Besides adding an
operator declaration for @
as described above, these rules
represent the only other addition needed to mfcalc
.
coeffs
is a new nonterminal used to represent a comma-separated
list of polynomial coefficient expressions. It has three semantic
attributes: the first two are inherited attributes $sum
and
$point
and correspond to the variables with the same name used in
the evalPoly
C-function. The last attribute is a synthesized
attribute declared using %out
(the %out
is usually
optional) which is the value of the entire polynomial at point.
The first rule for coeffs
coeffs(%in double $sum, %in double $point, %out double $v)
      : exp($v1)  { $v= $sum*$point + $v1; }

deals with the situation where there is only a single polynomial
coefficient which has not been processed: in that case the value of
the polynomial is the value of the polynomial so far ($sum)
multiplied by the value of the point ($point
) plus the value of
the coefficient ($v1
).
The second rule
coeffs(%in double $sum, %in double $point, %out double $v)
      : coeffs($sum, $point, $v1) ',' exp($v2)  { $v= $v1*$point + $v2; }

deals with the situation when there is more than one remaining
coefficient.  In that case, the remaining coefficients can be
decomposed into all but the last coefficient (described by the first
coeffs
on the right-hand side) and the last coefficient (described by the
','
followed by exp
). If we assume that the value of the
polynomial represented by all but the last coefficient is $v1
,
then the value of the entire polynomial is $v1*$point + $v2
where
$v2
is the value of the last coefficient.
Notice that the computation performed by the above rules is identical to
the computation performed within the for
-loop of the
evalPoly
function. The initialization of sum
to 0
is performed in the rule for exp
exp(double $v)
      : exp($v1) '@' '[' coeffs(0.0, $v1, $v) ']'

which passes in a value of
0.0
for the $sum
inherited
attribute of coeffs
and the value of the point ($v1
) for
the $point
inherited attribute. The synthesized attribute
$v
computed for coeffs
is the value of the left-hand side
exp
.
The previously developed multi-function calculator mfcalc
requires its (single) function arguments to be within parentheses.
Consider enhancing it for lazy people who prefer not to type those
parentheses, as illustrated by the following interaction log:
$ lazycalc
exp ln 7
7
pi = 3.141592653589
3.141592654
sin pi/4
0.7071067812
(sin pi/4)^2 + (cos pi/4)^2
1
$

A first attempt may be to simply declare a right-associative precedence
level for function application between that of binary addition and
multiplication operators as shown below (without any semantic
attributes):
...
%right '='
%left '-' '+'
%right FN      /* Prefix function application. */
%left '*' '/'
%left NEG      /* Negation--unary minus */
%right '^'     /* Exponentiation */
...
%%
...
exp : ID_TOK exp  %prec FN
...

Unfortunately, a little reflection shows that this is not enough.
Given simply the token sequence
ID_TOK - ID_TOK
, a human would not know
whether the sequence represents a function (given by the first
ID_TOK
) applied to a unary minus expression (- ID_TOK
), or
a subtraction of the variable represented by the second ID_TOK
from the variable represented by the first ID_TOK
. If a human
being cannot distinguish between these two meanings purely on the basis
of syntax, then there is no way that Zyacc can do so.
The way a human would distinguish between the above two sequences would
be to consider the semantics for the first ID_TOK
. If the first
ID_TOK
represents a function, then the sequence would correspond
to a function application, else to a subtraction expression. However,
this means that the way a token sequence is parsed depends on the
semantics of the tokens in the sequence.
Generic yacc does not allow any way for the semantics to affect a parse.
However, Zyacc allows arbitrary semantic predicates to affect a parse.
A semantic predicate can be any arbitrary C expression E
written on the right-hand side of a rule as %test(E)
. If
at parse time, the rule in which a semantic predicate occurs is
potentially applicable, then the predicate is evaluated: if the
predicate succeeds (returns non-zero), then the rule in which the
%test
is embedded wins out over other rules. So for our
lazycalc
example, the rule for function application becomes
exp(double $v)
      : ID_TOK($id) %test(isFn($id)) exp($v1)  %prec FN
          { $v= (*(getIDFn($id)))($v1); }
isFn()
is a trivial C function which interfaces to the symbol
table, returning 1 iff its argument is a function.
/* Return nonzero iff name is a function. */
static unsigned
isFn(const char *name)
{
  Sym *p= getSym(name, 0);
  return (p != NULL && p->type == FN_SYM);
}
Semantic predicates allow relatively clean solutions to problems which are otherwise rather painful to solve. They should not be overused. In particular, they should not be used when simpler mechanisms suffice as they make a Zyacc grammar harder to understand -- to understand Zyacc's parsing decisions when semantic predicates are used it is no longer sufficient to merely consider the statically-defined grammar rules, but it is also necessary to consider the semantics defined at parse time.
These example programs are both powerful and flexible. You may easily
add new functions, and it is a simple job to modify this code to install
predefined variables such as pi
or e
as well. The
following exercises suggest several simple enhancements.
1. Add some new functions from <math.h> to mfcalc by extending its
   initialization list.

2. Modify mfcalc to add another array that contains constants and
   their values.  Then modify initSyms to add these constants to the
   symbol table.  It will be easiest to give the constants type
   VAR_SYM (alternatively, you could introduce a new symbol type such
   as CONST_SYM).

3. Can you modify mfcalc so as to allow implicit multiplication: i.e.
   your calculator should allow 2 + 3 x as equivalent to 2 + 3*x?

4. Modify mfcalc to add array variables such that an array is accessed
   using arrayName(index), where arrayName is an ID_TOK representing
   the name of the array and index is an exp representing the value
   used to index the array.  Syntactically, an array access is
   identical to a function application, and your answer should
   concentrate on how your parser can distinguish between the two.