Tags: python, escaping, eval, code-injection

Is this eval() hackable?


I would like to make a function decorator which, before executing the actual function, performs some actions on its variables. I would like to provide these actions as eval() strings; the variables are the arguments of the function. Let me show you:

from functools import wraps
from inspect import getcallargs


def safeornot(*keys):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # get dict of arguments passed to the function
            func_args = getcallargs(func, *args, **kwargs)
            # now these are made locally visible as normal variables inside the eval function
            _keys = []
            for key in keys:
                _key = eval(key, globals(), func_args)
                _keys.append(_key)
            print(_keys)
            return func(*args, **kwargs)
        return wrapper
    return decorator


@safeornot('you + " " + __name__', 'you + " " + me')
def you_and_me(you, me):
    print("you and me")

you_and_me("1", "2")

will, obviously, print:

['1 __main__', '1 2']
you and me

which I'm aiming at.
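For reference, it is `getcallargs` that maps the call's positional and keyword arguments onto the function's parameter names, which is what makes them visible to `eval` as locals. A quick sketch:

```python
from inspect import getcallargs

def you_and_me(you, me):
    pass

# maps the call's arguments onto the function's parameter names
args = getcallargs(you_and_me, "1", "2")
print(args)  # {'you': '1', 'me': '2'}
```

(Note that `inspect.getcallargs` has been deprecated since Python 3.5 in favor of `inspect.signature(...).bind(...)`, though it still works.)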

But this function will be used in an unsafe environment: it's going to be a rate-limiting decorator for functions inside a web app, so the you and me variables are as unsafe as they come.

Can this eval() be hacked to, for example, format the server's disk? I kinda don't see how it could be, as the locals() themselves are not eval'ed and are treated as plain, non-callable objects (str in this case).

Any hacker thoughts?

UPD:

  • The decorated function can be called by anyone.
  • The arguments to the decorated function can be anything.
  • The *keys argument can be filled in only by code author.

Its intended use is, for example:

@ratelimit('some_custom_id_func(["by-username", username])').at('5/15s')
def login(username, password):
    ...
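One concrete risk worth noting: since the arguments can be anything, the values themselves can carry code. Even though the expression string is trusted, evaluating `you + " " + me` calls `you.__add__`, so an attacker who controls the *object* (not just a string value) gets code execution. A hypothetical sketch:

```python
# Hypothetical attacker-controlled argument: the expression string is trusted,
# but evaluating `you + " " + me` still calls methods on the values.
executed = []

class Evil(str):
    def __add__(self, other):
        executed.append("attacker code ran")  # arbitrary code could run here
        return str.__add__(self, other)

func_args = {"you": Evil("1"), "me": "2"}
result = eval('you + " " + me', {}, func_args)
print(result, executed)  # 1 2 ['attacker code ran']
```

If the arguments are guaranteed to be plain `str` (as values from a web form typically are), this particular vector doesn't apply; the risk appears once richer objects reach the decorator.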

UPD[2]:

What if:

# get dict of arguments passed to the function
func_args = getcallargs(func, *args, **kwargs)
# THE CHANGE IS HERE!
for func_arg_key in func_args.keys():
    func_args[func_arg_key] = str(func_args[func_arg_key]) 
# NOW INPUT IS SANITIZED (KINDA)?
# now these are made locally visible as normal variables inside the eval function
_keys = []
for key in keys:
    _key = eval(key, globals(), func_args)
    _keys.append(_key)
print(_keys)

Input sanitized?
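Not really: `str()` itself invokes each value's own `__str__` before eval ever runs, so the "sanitizing" pass executes attacker-controlled code whenever a value is a non-str object, and when the values are already plain strings the pass is a no-op. A sketch, assuming a hypothetical malicious argument:

```python
ran = []

class Evil:
    def __str__(self):
        ran.append("ran during str()")  # attacker code executes here
        return "harmless-looking"

func_args = {"you": Evil(), "me": "2"}
# the "sanitizing" pass itself triggers __str__
for k in func_args:
    func_args[k] = str(func_args[k])
print(func_args, ran)
```

So this converts the values, but it doesn't neutralize anything that was dangerous in the first place.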

UPD[3]

Imagine, we got rid of the eval(). Does this make it safer? If so, then why?

def ratelimit(*keys):
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            # get dict of arguments passed to the function
            func_args = getcallargs(func, *args, **kwargs)
            for key in keys:
                print(key(func_args))
            return func(*args, **kwargs)
        return wrapper
    return decorator


@ratelimit(lambda d: d['you'] + " and " + d['me'], lambda d: d['me'] + " or " + d['you'])
def you_and_me(you, me):
    print("you and me")


you_and_me("1", "2")

Solution

  • The best answer I can give you is "I don't know". I can't think of anything, but there are lots of people out there a lot smarter than me at breaking things. I hate eval, and think every occurrence of eval is a security hole waiting to happen. You should avoid it at all costs.

    If this were me, I'd write:

    def login_logger(x, y):
        # do whatever you want here, safely
        ...

    @ratelimit(login_logger)
    def login(x, y): ...
    
    

    Have your @ratelimit call a logging function that takes the exact same arguments as the function it is wrapping.
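A minimal sketch of that pattern (the names `login_logger` and the printed key are placeholders; a real version would consult a rate-limit store instead of printing):

```python
from functools import wraps

def ratelimit(key_func):
    """Decorator factory: key_func receives the same arguments as the
    wrapped function and returns the rate-limiting key (no eval involved)."""
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            key = key_func(*args, **kwargs)
            # a real implementation would check/increment a counter for `key`
            print("rate-limit key:", key)
            return func(*args, **kwargs)
        return wrapper
    return decorator

def login_logger(username, password):
    # computed in plain Python; untrusted values are only ever data
    return "by-username:" + username

@ratelimit(login_logger)
def login(username, password):
    return "ok"

result = login("alice", "secret")
```

Because the key function is ordinary Python fixed at decoration time, there is no string-to-code path at all: untrusted input can influence the key's *value*, but never what code runs.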