python, security, sandbox, pickle, python-3.3

How to safely return objects from an OS-level Python sandbox?


I need to be able to run untrusted Python scripts. After much research, it seems like a Python-only sandbox is not secure, at least with CPython (which I need to use).

Therefore, we are planning to use OS-level sandboxing as well (SELinux, AppArmor, etc.).

My question is: how do we safely communicate with the sandbox? The code in the sandbox will need to return Python types such as int and str, as well as Numpy arrays. There may be more types in the future.

The obvious way is to use pickle, but it seems possible that some malicious code in the sandbox could get a hold of the output pipe (we were thinking of using 0MQ) and send back something that could result in arbitrary code execution when unpickled outside the sandbox.
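To illustrate the concern: any pickle payload can name a callable for the unpickler to invoke, so a malicious script could send back something like this, which runs an arbitrary command the moment it's unpickled on the host:

    import pickle

    class Evil:
        def __reduce__(self):
            # pickle stores "call os.system('echo pwned')" as the payload
            import os
            return (os.system, ('echo pwned',))

    payload = pickle.dumps(Evil())   # what the sandboxed code could hand back
    pickle.loads(payload)            # on the host, this executes the command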

Are there safer serialization alternatives to pickle that don't have the performance overhead of JSON and the like?

We're using Python 3.3.


Solution

  • It sounds like your only real problem with JSON is the way you're encoding NumPy arrays (and Pandas tables). JSON is not ideal for your use case—not because it's slow at handling NumPy data, but because it's a text-based format, and you have a lot of data that's easier to encode in a non-text-based format.

    So, I'll show you a way around all of your problems with JSON below… but I would suggest using a different format.

    The two major "binary JSON" formats, BJSON and BSON, aim to provide most of the benefits of JSON (simple, safe, dynamic/schemaless, traversable, etc.), while also making it possible to embed binary data directly. (That binary-data support is the part that matters here; their other differences from JSON aren't really important in this case.) I believe the same is true of Smile, but I've never used it.

    This means that, in the same way JSON makes it easy to hook in anything you can reduce to strings, floats, lists, and dicts, BJSON and BSON make it easy to hook in anything you can reduce to strings, floats, lists, dicts, and byte strings. So, when I show how to encode/decode NumPy to strings, the same thing works for byte strings, but without all the extra steps at the end.
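    For instance, with the bson module that ships with PyMongo (an assumption on my part; standalone bson packages have slightly different APIs), bytes values embed and round-trip as-is:

    import bson   # the implementation bundled with PyMongo

    doc = {'kind': 'array', 'raw': b'\x00\x01\x02\xfe\xff'}   # bytes embed directly
    wire = bson.BSON.encode(doc)                              # bytes on the pipe
    assert bson.BSON(wire).decode()['raw'] == doc['raw']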

    The downsides of BJSON and BSON are that they're not human-readable, and don't have nearly as widespread support.


    I have no idea how you're currently encoding your arrays, but from the timings I suspect you're using the tolist method or something similar. That will definitely be slow, and big. And it will even lose information if you're storing anything other than f8 values anywhere (because the only kind of numbers JSON understands are IEEE doubles). The solution is to encode to a string.
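    You can see the dtype loss with a two-line experiment:

    import json
    import numpy as np

    a = np.array([1, 2, 3], dtype=np.int16)
    b = np.array(json.loads(json.dumps(a.tolist())))
    print(b.dtype)   # int64 (the platform default); the original int16 is gone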

    NumPy has a text format, which will be faster, and not lossy, but still probably slower and bigger than you want.
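    (That's presumably np.savetxt and np.loadtxt; note they only handle 1-D and 2-D arrays, and you have to pass the dtype back in yourself:)

    import io
    import numpy as np

    a = np.arange(6, dtype=np.float32).reshape(2, 3)
    buf = io.BytesIO()
    np.savetxt(buf, a)                 # default '%.18e' format: exact, but bulky
    b = np.loadtxt(io.BytesIO(buf.getvalue()), dtype=a.dtype)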

    It also has a binary format, which is great… but doesn't have enough information to recover your original array.
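    (That is, the raw buffer from tostring; it has no header, so you have to carry the dtype and shape separately, which is exactly the extra information the encoder below sends along:)

    import numpy as np

    a = np.arange(6, dtype=np.int16).reshape(2, 3)
    raw = a.tostring()                       # 12 bytes: just the flat buffer
    # Without the dtype and shape there's no way back; with them, there is:
    b = np.fromstring(raw, dtype=np.int16).reshape(2, 3)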

    So, let's look at what pickle uses, which you can see by calling the __reduce__ method on any array: basically, it's the type, the shape, the dtype, some flags that tell NumPy how to interpret the raw data, and then the binary-format raw data. You can actually encode the __reduce__ data yourself; in fact, it might be worth doing so. But let's do something a bit simpler for the sake of exposition, with the understanding that it will only work on ndarray, and won't work on machines with different endianness (or rarer cases like sign-magnitude ints or non-IEEE floats).
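    For reference, here's what that __reduce__ data looks like for a small array on a little-endian machine (the exact output varies by NumPy version):

    >>> import numpy as np
    >>> np.arange(3, dtype=np.int16).__reduce__()
    (<built-in function _reconstruct>,
     (<class 'numpy.ndarray'>, (0,), b'b'),
     (1, (3,), dtype('int16'), False, b'\x00\x00\x01\x00\x02\x00'))

    And here's the simpler version: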

    import json
    import numpy as np

    def numpy_default(obj):
        # json calls this for anything it can't serialize natively
        if isinstance(obj, np.ndarray):
            return {'_npdata': obj.tostring(),
                    '_npdtype': obj.dtype.name,
                    '_npshape': obj.shape}
        raise TypeError(repr(obj) + ' is not JSON serializable')

    def dumps(obj):
        return json.dumps(obj, default=numpy_default)

    def numpy_hook(obj):
        # json calls this for every decoded dict; pass ordinary dicts through
        try:
            data = obj['_npdata']
        except KeyError:
            return obj
        return np.fromstring(data, obj['_npdtype']).reshape(obj['_npshape'])

    def loads(obj):
        return json.loads(obj, object_hook=numpy_hook)
    

    The only problem is that tostring gives you bytes objects, which Python 3's json doesn't know how to deal with.

    This is where you can stop if you're using something like BJSON or BSON. But with JSON, you need strings.

    You can fix that easily, if hackily, by "decoding" the bytes with any encoding that maps every single-byte character, like Latin-1: change obj.tostring() to obj.tostring().decode('latin-1') and data = obj['_npdata'] to data = obj['_npdata'].encode('latin-1'). That wastes a bit of space by UTF-8-encoding the fake Latin-1 strings, but that's not too bad.
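    With those two changes applied, the round trip works end to end (though, as explained next, the encoded form is bigger than it needs to be):

    a = np.arange(12, dtype=np.float32).reshape(3, 4)
    s = dumps({'status': 'ok', 'result': a})   # plain JSON text
    out = loads(s)
    assert np.array_equal(out['result'], a)
    assert out['result'].dtype == a.dtype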

    Unfortunately, Python will encode every non-ASCII character with a Unicode escape sequence. You can turn that off by setting ensure_ascii=False on the dump and strict=False on the load, but it will still encode control characters, mostly as 6-byte sequences. This doubles the size of random data, and it can do much worse; an all-zero array, for example, will be 6x larger!
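    The all-zero case is easy to check:

    import json

    zeros = '\x00' * 1000              # stands in for a Latin-1-decoded zero buffer
    s = json.dumps(zeros, ensure_ascii=False)
    print(len(s))                      # 6002: each NUL becomes the six characters \u0000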

    There used to be a trick to get around this problem, but in 3.3, it doesn't work. The best thing you can do is to fork or monkey-patch the json package so it lets you pass control characters through when given ensure_ascii=False, which you can do like this:

    import re, json

    # Escape only quotes and backslashes; let control characters through raw.
    # (This patches json globally for the whole process.)
    json.encoder.ESCAPE = re.compile(r'["\\]')
    

    This is pretty hacky, but it works.
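    You can check that it took effect, and that strict=False accepts the result on the way back in:

    s = json.dumps('\x00\x01\x02', ensure_ascii=False)
    print(len(s))                      # 5: just the two quotes plus the raw bytes
    assert json.loads(s, strict=False) == '\x00\x01\x02'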


    Anyway, hopefully that's enough to get you started.