I am aware that the use of eval() usually means bad code, but I stumbled upon a weird behavior of the eval() function in nested functions that I could not understand. If we write:
def f(a):
    def g():
        print(eval('a'))
    return g()
then running f(1) yields a NameError, claiming that a is not defined. However, if we define:
def f(a):
    def g():
        b = a + 1
        print(eval('a'))
    return g()
then running f(1) prints 1.
There is something happening with local and global variables that I can't quite understand. Is a only a local variable in g() when it is "used" for something? What is going on here?
In short, since eval performs dynamic evaluation, the interpreter has no way of knowing that it should add a to the local scope of g: the string passed to eval is only parsed at run time, whereas the set of local and closure variables of g is fixed when the function is compiled. For efficiency, the interpreter does not add unneeded variables to the mapping of local variables.
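You can see that decision being made at compile time. Here is a small sketch, assuming CPython, where an inner function's __code__.co_freevars lists the enclosing-scope names it actually closes over (the function names are just illustrative):

def f_noref(a):
    def g():
        print(eval('a'))   # no lexical reference to a
    return g               # return the function itself so we can inspect it

def f_ref(a):
    def g():
        b = a + 1          # lexical reference to a
        print(eval('a'))
    return g

print(f_noref(1).__code__.co_freevars)  # () -> a is not part of g's scope
print(f_ref(1).__code__.co_freevars)    # ('a',) -> a is a closure variable of g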
From the docs for eval:
The expression argument is parsed and evaluated as a Python expression (technically speaking, a condition list) using the globals and locals dictionaries as global and local namespace.
This means that eval(expression) will use globals() as its default global namespace and locals() as its default local namespace if none are provided.
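For example, an illustrative sketch of those two parameters (not code from the question):

x = 10
print(eval('x'))                # no namespaces given: uses globals()/locals(), prints 10
print(eval('x', {'x': 2}))      # explicit globals mapping, prints 2
print(eval('x', {}, {'x': 3}))  # explicit locals mapping, prints 3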
However, in your first example a is in neither:
def f(a):
    print("f's locals:", locals())
    def g():
        print("g's locals:", locals())
        print(eval('a'))
    return g()

f(1)
Indeed, since the interpreter sees no reference to a when it compiles the body of g, it does not add a to g's local variables. For it to work, you would need to declare nonlocal a in g, as sketched after the output below. Running the snippet above gives:
f's locals: {'a': 1}
g's locals: {}
Traceback ...
...
NameError: name 'a' is not defined
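A minimal sketch of that fix: the nonlocal declaration is the only change, and it forces the compiler to treat a as a free variable of g, so a shows up in locals() and eval can find it.

def f(a):
    def g():
        nonlocal a                      # pull a into g's scope
        print("g's locals:", locals())  # {'a': 1}
        print(eval('a'))                # prints 1
    return g()

f(1)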
In your second example, a is in g's local variables because it is used in that scope:
def f(a):
    print("f's locals:", locals())
    def g():
        print("g's locals:", locals())
        b = a + 1
        print("g's locals after b = a + 1:", locals())
        print(eval('a'))
    return g()

f(1)
f's locals: {'a': 1}
g's locals: {'a': 1}
g's locals after b = a + 1: {'a': 1, 'b': 2}
1
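If you would rather not depend on this scoping detail at all, you can also pass the namespace to eval explicitly. A sketch of one possible workaround (the parameter name ns is just illustrative):

def f(a):
    def g(ns):
        print(eval('a', {}, ns))  # a is looked up in the mapping we pass in
    return g({'a': a})

f(1)  # prints 1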