I created a simple parser in PLY that has two rules: if a ':' comes first, a name appears; if a '=' comes first, a number appears. Corresponding code:
from ply import lex, yacc

tokens = ['Name', 'Number']

def t_Number(t):
    r'[0-9]'
    return t

def t_Name(t):
    r'[a-zA-Z0-9]'
    return t

literals = [':', '=']

def t_error(t):
    print("lex error: " + str(t.value[0]))
    t.lexer.skip(1)

lex.lex()
def p_name(p):
    '''
    expression : ':' Name
    '''
    print("name: " + str(list(p)))

def p_number(p):
    '''
    expression : '=' Number
    '''
    print("number: " + str(list(p)))

def p_error(p):
    print("yacc error: " + str(p.value))

yacc.yacc()

yacc.parse("=3")
yacc.parse(":a")
yacc.parse(":3")
My expectation is that if it sees a ':' or '=' it enters the corresponding rule and tries to match the corresponding terminal. Yet in the third example it matches a Number where it should match a Name, and then fails.

AFAIK the grammar should be context-free (which it needs to be in order to be parsed), so is this the case? Also, how would I handle the case where one token is a superset of another token?
PLY tokenises before the grammar is consulted, so the context does not influence the tokenisation. (To be more precise, the parser receives a stream of tokens produced by the lexer. The two processes are interleaved in practice, but they are kept independent of each other.)
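You can see this by running the lexer on its own: the grammar is never consulted, so '3' is tokenised as a Number even though it follows a ':'. A small self-contained sketch with your token rules copied verbatim:

from ply import lex

tokens = ['Name', 'Number']
literals = [':', '=']

def t_Number(t):
    r'[0-9]'
    return t

def t_Name(t):
    r'[a-zA-Z0-9]'
    return t

def t_error(t):
    print("lex error: " + str(t.value[0]))
    t.lexer.skip(1)

lexer = lex.lex()
lexer.input(":3")
for tok in lexer:
    print(tok.type, tok.value)   # ': :' followed by 'Number 3'

By the time the parser sees the second token it is already a Number, so the ':' Name rule can never match.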
You can build context into your lexer, for example with PLY's lexer states as sketched below, but that gets ugly really fast. (Nonetheless, it is a common strategy.)
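For illustration only, here is roughly what that could look like: after a colon the lexer switches into an exclusive state in which every alphanumeric character is lexed as a Name, then drops back to the default state. The COLON and EQUALS token names and the 'name' state are inventions of this sketch, not something your grammar currently uses.

from ply import lex

tokens = ['Name', 'Number', 'COLON', 'EQUALS']

# One exclusive lexer state in addition to the default INITIAL state.
states = (
    ('name', 'exclusive'),
)

def t_COLON(t):
    r':'
    t.lexer.begin('name')       # everything after ':' is lexed in the 'name' state
    return t

def t_EQUALS(t):
    r'='
    return t

def t_Number(t):
    r'[0-9]'
    return t

# This rule only exists in the 'name' state, so digits become Names here.
def t_name_Name(t):
    r'[a-zA-Z0-9]'
    t.lexer.begin('INITIAL')    # drop back to the default state
    return t

def t_error(t):
    print("lex error: " + str(t.value[0]))
    t.lexer.skip(1)

def t_name_error(t):
    t_error(t)

lexer = lex.lex()
lexer.input(":3")
for tok in lexer:
    print(tok.type, tok.value)  # COLON ':' followed by Name '3'

Every new piece of context needs another state and another set of per-state rules, which is why the grammar-side approach below usually stays cleaner.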
Your best bet is to write your lexical rules to produce the most granular result possible, and then write your grammar to accept all the alternatives:
def p_name(p):
    '''
    expression : ':' Name
    expression : ':' Number
    '''
    print("name: " + str(list(p)))

def p_number(p):
    '''
    expression : '=' Number
    '''
    print("number: " + str(list(p)))
That assumes you order your lexical rules with the most specific pattern first, so that a digit is always tokenised as a Number rather than swallowed by the broader Name rule.
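Put together with your lexer (Number defined before Name, so a digit is always tokenised as a Number), the whole thing would look roughly like this:

from ply import lex, yacc

tokens = ['Name', 'Number']
literals = [':', '=']

def t_Number(t):       # most specific rule first: digits always become Number tokens
    r'[0-9]'
    return t

def t_Name(t):         # everything alphanumeric that was not caught above
    r'[a-zA-Z0-9]'
    return t

def t_error(t):
    print("lex error: " + str(t.value[0]))
    t.lexer.skip(1)

lex.lex()

def p_name(p):
    '''
    expression : ':' Name
    expression : ':' Number
    '''
    print("name: " + str(list(p)))

def p_number(p):
    '''
    expression : '=' Number
    '''
    print("number: " + str(list(p)))

def p_error(p):
    print("yacc error: " + str(p.value))

yacc.yacc()

yacc.parse("=3")   # number rule
yacc.parse(":a")   # name rule via the ':' Name alternative
yacc.parse(":3")   # name rule via the ':' Number alternative -- no longer a syntax error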