TL;DR

We found four issues in Eurostar’s public AI chatbot: a guardrail bypass, unchecked conversation and message IDs, prompt injection leaking system prompts, and HTML injection causing self-XSS. The UI showed guardrails, but server-side enforcement and binding were weak. An attacker could exfiltrate prompts, steer answers, and run script in the chat window. Disclosure was quite painful, despite Eurostar having a vulnerability disclosure programme. During the process, Eurostar even suggested that we were somehow attempting to blackmail them! This was despite our disclosure going unanswered, with no response to our requests for acknowledgement or a remediation timeline. The vulnerabilities were eventually fixed, which is why we have now published. The core lesson is that old web and API weaknesses still apply even when an LLM is in the loop.

Introduction

I first encountered the chatbot as a normal Eurostar customer while planning a trip. When it opened, it clearly told me that “the answers in this chatbot are generated by AI”, which is good disclosure but immediately raised my curiosity about how it worked and what its limits were. Eurostar publishes a vulnerability disclosure programme (VDP), which meant I had permission to take a closer look at the chatbot’s behaviour as long as I stayed within those rules. So this work was done while using the site as a legitimate customer, within the scope of the VDP.

Almost all websites for companies like train operators have a chatbot on them. What we’re used to seeing is a menu-driven bot which attempts to direct you to the available FAQ pages or help articles, trying to minimise interactions which require putting you in front of a human operator on the other end. These sorts of chatbots either don’t understand free-text input or have very limited capabilities. However, some of the chatbots now in use can understand free text, and sometimes even live speech. They still sit on top of familiar menu-driven systems, but...