This piece of code is from the Python C-API reference:
item = PyLong_FromLong(0L);
if (item == NULL)
    goto error;
Assuming the interpreter is CPython, the memory for the Python 0 object is already allocated, so what could go wrong there? Reading the source code for PyLong_FromLong, I see that for small-integer values it immediately returns get_small_int((sdigit)0L). The get_small_int function is really very simple:
static PyObject *
get_small_int(sdigit ival)
{
    assert(IS_SMALL_INT(ival));
    PyThreadState *tstate = _PyThreadState_GET();
    PyObject *v = (PyObject*)tstate->interp->small_ints[ival + NSMALLNEGINTS];
    Py_INCREF(v);
    return v;
}
The assertion on the first line won't fail because PyLong_FromLong has already verified it. _PyThreadState_GET() is a macro that, according to a comment next to its definition, is unsafe: it does not check for errors and it can return NULL. This might look like a source of failure, but note that tstate->interp is dereferenced unconditionally, so a NULL from the macro would segfault the interpreter rather than surface as a NULL return value from get_small_int. After that, the wanted reference to the Python 0 object is Py_INCREF'd and returned to the original caller of PyLong_FromLong(0L).
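As a quick sanity check (just an illustrative sketch that leans on the CPython implementation detail that small ints are cached, it is not from the docs), one can embed the interpreter and confirm that both calls hand back the same preallocated object:

/* sanity_check.c -- illustrative only; relies on CPython's cache of
 * small ints (currently -5..256) being preallocated and shared. */
#include <Python.h>
#include <assert.h>

int
main(void)
{
    Py_Initialize();

    PyObject *a = PyLong_FromLong(0L);
    PyObject *b = PyLong_FromLong(0L);

    /* Both calls return the cached 0 object, so the pointers compare equal. */
    assert(a != NULL && b != NULL);
    assert(a == b);

    Py_DECREF(a);
    Py_DECREF(b);

    Py_FinalizeEx();
    return 0;
}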
Did I understand correctly that CPython's PyLong_FromLong can't fail for small-integer arguments, or did I miss something? Also, just for completeness: can extension modules written in C be used from other interpreters, or can I assume when writing them that they will only ever be dealing with CPython?
Yes, you are absolutely right: there will never be any error here. The small ints are preallocated, so PyLong_FromLong(0L); never fails.
But why still test whether item is NULL? As I see it, this just follows a pattern:
PyLong_FromLong does some optimization by caching small ints, but logically it still returns a new object in most cases, even though 0L does not. As callers of the function, we should not rely on its internal caching strategy, so leaving the check in place makes the code clearer and safer.
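For illustration, here is a minimal sketch of that pattern (make_zero_list is a hypothetical helper, not taken from the C-API docs), where the check is kept exactly as the documentation shows it, regardless of which values happen to hit the small-int cache:

/* make_zero_list: hypothetical helper for illustration only.
 * It keeps the documented NULL check even though PyLong_FromLong(0L)
 * happens to be served from CPython's small-int cache. */
#include <Python.h>

static PyObject *
make_zero_list(Py_ssize_t n)
{
    PyObject *list = PyList_New(n);
    if (list == NULL)
        return NULL;

    for (Py_ssize_t i = 0; i < n; i++) {
        PyObject *item = PyLong_FromLong(0L);
        if (item == NULL)
            goto error;                  /* follow the API contract, not the cache */
        PyList_SET_ITEM(list, i, item);  /* steals the reference to item */
    }
    return list;

error:
    Py_DECREF(list);
    return NULL;
}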