Discussion forum for David Beazley

PLY: nonassoc attribute doesn't work

Hi there,

The PLY documentation says that if I specify ‘nonassoc’ as the attribute for operators (usually comparison operators) in the precedence table, expressions like

a < b < c

would throw a syntax error. But this doesn’t work. My precedence table looks like this

precedence = (

but still the parser swallows “a < b < c” as a valid expression.

Is there something else to do, to make this a syntax error?

Would have to see the rest of the grammar (or at least the rules associated with <, >) to know more.

I defined the rules as part of the expression rules as shown below. Can I attach a file here (other than an image), to share the whole grammar?

def p_expression(t):
    '''expression : expression OR andexpr
                  | andexpr'''
    if len(t)>2:
        t[0] = "OR(%s , %s)" % (t[1],t[3])
    else:
        t[0] = t[1]

def p_andexpr(t):
    '''andexpr : andexpr AND notexpr
               | notexpr'''
    if len(t)>2:
        t[0] = "AND(%s , %s)" % (t[1],t[3])
    else:
        t[0] = t[1]

def p_notexpr(t):
    '''notexpr : NOT cmpexpr
               | cmpexpr'''
    if len(t)>2:
        t[0] = "NOT(%s)" % t[2]
    else:
        t[0] = t[1]

def p_cmpexpr(t):
    '''cmpexpr : cmpexpr EQUAL      addexpr
               | cmpexpr NOT_EQ     addexpr
               | cmpexpr GREATER    addexpr
               | cmpexpr GREATER_EQ addexpr
               | cmpexpr LESS       addexpr
               | cmpexpr LESS_EQ    addexpr
               | addexpr'''
    if len(t)>2:
        t[0] = "COMPARE(%s, '%s', %s)" % (t[1],t[2],t[3])
    else:
        t[0] = t[1]

def p_addexpr(t):
    '''addexpr : addexpr PLUS  multexpr
               | addexpr MINUS multexpr
               | multexpr'''
    if len(t)>2:
        if t[2]=="+":
            t[0] = "PLUS(%s , %s)" % (t[1],t[3])
        else:
            t[0] = "MINUS(%s , %s)" % (t[1],t[3])
    else:
        t[0] = t[1]

def p_multexpr(t):
    '''multexpr : multexpr TIMES  negateexpr
                | multexpr DIVIDE negateexpr
                | negateexpr'''
    if len(t)>2:
        if t[2]=="*":
            t[0] = "TIMES(%s , %s)" % (t[1],t[3])
        else:
            t[0] = "DIVIDE(%s , %s)" % (t[1],t[3])
    else:
        t[0] = t[1]

def p_negateexpr(t):
    '''negateexpr : MINUS powerexpr %prec UMINUS
                  | powerexpr'''
    if len(t)>2:
        t[0] = "NEGATE(%s)" % t[2]
    else:
        t[0] = t[1]

def p_powerexpr(t):
    '''powerexpr : powerexpr POWER MINUS subexpr %prec UMINUS
                 | powerexpr POWER subexpr
                 | subexpr'''
    if len(t)>4:
        t[0] = "POWER(%s , NEGATE(%s))" % (t[1], t[4])
    elif len(t)>2:
        t[0] = "POWER(%s , %s)" % (t[1], t[3])
    else:
        t[0] = t[1]

def p_subexpr(t):
    '''subexpr : LPAREN expression RPAREN
               | simpleexpr'''
    if len(t)>2:
        t[0] = "( %s )" % t[2]
    else:
        t[0] = t[1]

def p_simpleexpr(t):
    '''simpleexpr : val
                  | FUN_ABS LPAREN expression RPAREN
                  | FUN_ASC LPAREN expression RPAREN
                  | FUN_ATN LPAREN expression RPAREN
                  | FUN_COS LPAREN expression RPAREN
                  | FUN_EXP LPAREN expression RPAREN
                  | FUN_FRE LPAREN val RPAREN
                  | FUN_INT LPAREN expression RPAREN
                  | FUN_LEN LPAREN expression RPAREN
                  | FUN_LOG LPAREN expression RPAREN
                  | FUN_POS LPAREN val RPAREN
                  | FUN_RND LPAREN expression RPAREN
                  | FUN_SGN LPAREN expression RPAREN
                  | FUN_SIN LPAREN expression RPAREN
                  | FUN_SQR LPAREN expression RPAREN
                  | FUN_TAN LPAREN expression RPAREN
                  | FUN_VAL LPAREN expression RPAREN'''
    if len(t)>2:
        t[0] = "%s( %s )" % (t[1].upper(), t[3])
    else:
        t[0] = t[1]

def p_val(t):
    '''val : const
           | var'''
    t[0] = t[1]

def p_var_list(t):
    '''var_list : VAR COMMA var_list
                | VAR'''
    if len(t)>2:
        t[0] = "'%s' , %s" % (t[1],t[3])     # var_list is for setting vars (in READ or NEXT statement)..
    else:                                    # ..therefore, no var() wrapper around them
        t[0] = "'%s'" % t[1]                 # (t[3] is a var_list and already carries its own quotes)

def p_const_list(t):
    '''const_list : const COMMA const_list
                  | const'''
    if len(t)>2:
        t[0] = "%s , %s" % (t[1],t[3])
    else:
        t[0] = t[1]

# NOTE: if we wrap reading a variable into a function, we can later handle it easily in Python
# Note: neither Lex nor Yacc can distinguish the value's type - the respective function has to do that.
#       Lex can't, because both types start the same way; Yacc can't, because it would have to jump
#       to a different rule based on the type, and that doesn't seem to be possible.
def p_var(t):
    '''var : VAR'''
    t[0] = "var('%s')" % t[1]

def p_const(t):
    '''const : INT
             | FLOAT
             | STRING'''
    t[0] = t[1]
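(The comment above suggests wrapping variable reads in a var() function so the type distinction can happen later, at run time. Purely as an illustration of that idea - the variable store and default value here are assumptions, not part of the grammar - such a helper could look like:)

```python
# Sketch of a runtime helper: the parser emits var('A'), and only this
# function decides at run time how the name is looked up.
variables = {}   # assumed runtime variable store

def var(name):
    # BASIC-style default: unset numeric variables read as 0 (an assumption)
    return variables.get(name, 0)
```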

Without digging into it more, the main purpose of the precedence table is to resolve conflicts in the presence of ambiguous grammars. For example, if you have rules like this:

expr : expr PLUS expr
     | expr MINUS expr
     | expr TIMES expr
     | expr DIVIDE expr
     | expr LT expr
     | expr LE expr
As written, the grammar you provide seems to be unambiguous to begin with. As such, it’s possible that PLY isn’t even consulting the precedence table. That would certainly explain why the ‘nonassoc’ rule isn’t being enforced (i.e., it’s never even looked at).
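To illustrate: with a deliberately ambiguous rule like the sketch below, PLY hits shift/reduce conflicts on input such as a < b < c, and it is exactly those conflicts that the precedence table resolves - a ‘nonassoc’ entry turns the chained comparison into a syntax error. This is only a sketch with assumed token names (LESS, GREATER, PLUS, MINUS), not a drop-in replacement for the grammar above:

```python
# Sketch: an ambiguous expression rule that forces PLY to consult the
# precedence table.  Token names are assumptions.
precedence = (
    ('nonassoc', 'LESS', 'GREATER'),   # 'a < b < c' becomes a syntax error
    ('left', 'PLUS', 'MINUS'),
)

def p_expr(t):
    '''expr : expr PLUS expr
            | expr MINUS expr
            | expr LESS expr
            | expr GREATER expr'''
    t[0] = "BINOP(%s, '%s', %s)" % (t[1], t[2], t[3])
```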

You might be able to get more information by looking at the parser.out file created for debugging.

Ah, I see. In this case I would have to ensure directly in the grammar that the expressions on either side of a comparison operator are not themselves comparison expressions, right?
Thanks a lot for your help!!!
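(For example, the comparison rule restructured so that neither operand can itself be a comparison - a sketch using the token names from the grammar above, shown here for just one operator:)

```python
# Sketch: non-associativity enforced directly in the grammar.  Both
# operands are addexpr, so "a < b < c" can no longer be derived.
def p_cmpexpr(t):
    '''cmpexpr : addexpr LESS addexpr
               | addexpr'''
    if len(t) > 2:
        t[0] = "COMPARE(%s, '%s', %s)" % (t[1], t[2], t[3])
    else:
        t[0] = t[1]
```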

This is sort of an interesting case I hadn’t really thought much about. The whole precedence-table thing is a “hack” that most LALR parser generators (yacc, bison, etc.) support in order to resolve ambiguity. However, if the grammar is already unambiguous, then maybe it’s kind of pointless.

(note to self: wonder how hard it would be to generate a warning about useless precedence rules).