I am using Jinja2 as the templating engine for an admin dashboard that displays user feedback. I worry that an attacker could submit Python code as their feedback and the Jinja2 template would execute it.
For example, an attacker might submit the following as their feedback:
__import__('subprocess').getoutput('tree')
When the template renders this, e.g.

from jinja2 import Template

feedback = "__import__('subprocess').getoutput('tree')"
Template("{{ feedback }}").render(feedback=feedback)

I worry that the tree command will be run on my server.
How can I sanitise these strings so that an attacker cannot use Python code in their feedback to run commands on my server?
You don't need to sanitise the values at all: Jinja2 only evaluates expressions that appear in the template source itself. Anything passed in through render() is treated as plain data and is never parsed as template code.

So the dangerous pattern is splicing user input directly into the template source:

Template("{{ " + feedback + " }}").render()

(Even then, plain Jinja2 does not expose Python builtins such as __import__ — calling an undefined name raises an UndefinedError rather than running the code — but attacker-controlled template source can still reach dangerous objects through attribute chains such as {{ ''.__class__.__mro__ }}; this is the classic server-side template injection.)

Whereas...

x = "__import__('subprocess').getoutput('tree')"
Template("{{ x }}").render(x=x)

simply renders the literal string and never executes it. So as long as user input only ever reaches a template as a render() variable, never as part of the template source, this class of vulnerability is mitigated.
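You can verify this yourself with a short sketch (the feedback string here is a hypothetical attacker payload):

```python
from jinja2 import Template

# Hypothetical user-submitted feedback containing a Python payload.
feedback = "__import__('subprocess').getoutput('tree')"

# Safe: the value arrives as render() data, so Jinja2 treats it as an
# opaque string and never parses or evaluates it.
safe = Template("Feedback: {{ feedback }}").render(feedback=feedback)
print(safe)  # Feedback: __import__('subprocess').getoutput('tree')

# Unsafe pattern (don't do this): building the template source from user
# input is what enables server-side template injection.
# Template("Feedback: " + feedback).render()
```

The payload comes back as literal text, with no subprocess ever spawned; the only rule to enforce is that user input never becomes part of the string you pass to the Template constructor.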