
Printed messages hack autonomous vehicles

Updated on: 15 February 2026, 11:10 AM IST | Mumbai
A Correspondent

The study warns that self-driving cars, delivery robots, and camera-guided drones that rely on large vision-language models could misread text on signs, posters, or objects as commands, overriding their intended instructions


Researchers at the University of California, Santa Cruz, have shown that simple, misleading text placed in the physical environment can hijack the behaviour of AI-enabled robots without hacking their software. The study warns that self-driving cars, delivery robots, and camera-guided drones that rely on large vision-language models could misread text on signs, posters, or objects as commands, overriding their intended instructions. Led by Alvaro Cardenas and Cihang Xie, the team built an attack pipeline called CHAI (Command Hijacking against embodied AI) that uses generative AI to craft both the wording and the appearance of "visual prompts", optimising factors such as colour, size and placement.




