# Sandboxing Untrusted Python

Python doesn't have a built-in way to run untrusted code safely. Multiple attempts have been made, but none has really succeeded. Why? Because Python is a highly introspective, object-oriented language with a mutable runtime. Core elements of the interpreter can be reached through the object graph, frames, and tracebacks, which makes runtime isolation difficult. This means that even aggressive restrictions can be bypassed:

```python
# Attempt: remove dangerous built-ins
del __builtins__.eval
del __builtins__.__import__

# 1. Bypass via introspection: walk from a literal up to `object`,
#    then back down to every class loaded in the process
().__class__.__bases__[0].__subclasses__()

# 2. Bypass via exceptions and frames: a traceback's frame exposes
#    the real globals, including the original __builtins__
try:
    raise Exception
except Exception as e:
    e.__traceback__.tb_frame.f_globals['__builtins__']
```

> **Note:** Older alternatives like sandbox-2 exist, but they provide isolation near the OS level, not the language level. At that point we might as well use Docker or VMs.

So people concluded it's safer to run Python in a sandbox rather than to sandbox Python itself.

The thing is, Python dominates AI/ML, especially the AI agents space. We're moving from deterministic systems to probabilistic ones, where executing untrusted code is becoming common.

## Why has sandboxing become important now?

2025 was marked by great progress, but it also showed us that isolation for AI agents goes beyond resource control or retry strategies. It has become a security issue.

LLMs have architectural flaws. The most notorious one is prompt injection, which exploits the fact that LLMs can't tell the difference between the system prompt, legitimate user instructions, and malicious ones injected from external sources. For example, people have demonstrated how a hidden instruction on a web page can pass through a coding agent and extract sensitive data from your `.env` file. It's a pretty common pattern. We've found similar flaws across many AI tools in recent months. Take the Model Context Protocol (MCP), for example.
It shows how improper implementation extends the attack surface: the SQLite MC...