
A new study from the University of Waterloo has revealed major privacy weaknesses in collaborative robots, prompting calls for stronger defenses.
In recent years, the use of robotics has become widespread in public and private spheres. Hospitals are employing robots as surgical assistants due to their high precision and dexterity, and various manufacturing firms are increasingly using robots, especially for hazardous tasks. Not only can robots build high-quality products at a consistent and fast rate, but they can also improve workplace safety.
Despite their popularity, collaborative robots can be exploited in malicious attacks. If a hacker observes command patterns during a procedure, they could infer sensitive patient information, such as an illness or medication schedule, even when the commands themselves are encrypted.
“Picture a robot talking to its controller. You can’t understand the conversation, but you can notice when the robot is talking or not,” explains lead author Cheng Tang, a third-year undergraduate engineering student. “By analyzing how often it talks, how long the conversation lasts, and how long the breaks are in between the conversations, you can infer what type of commands are being sent.”
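The kind of timing analysis Tang describes can be sketched in a few lines. The following is an illustrative toy example, not the authors' code: given only the timestamps of encrypted packets, it splits a capture into "talk" bursts separated by silences and extracts the features Tang mentions (how often the robot talks, how long each conversation lasts, and how long the breaks are). The gap threshold and the two example traces are made-up values.

```python
def burst_features(timestamps, gap_threshold=0.05):
    """Split packet timestamps (seconds) into bursts separated by silences
    longer than gap_threshold, and return (burst count, mean burst
    duration, mean silence duration). Packet contents are never read."""
    bursts = [[timestamps[0]]]
    for t in timestamps[1:]:
        if t - bursts[-1][-1] > gap_threshold:
            bursts.append([t])      # silence exceeded: a new burst begins
        else:
            bursts[-1].append(t)    # still inside the current burst
    durations = [b[-1] - b[0] for b in bursts]
    silences = [nxt[0] - cur[-1] for cur, nxt in zip(bursts, bursts[1:])]
    mean = lambda xs: sum(xs) / len(xs) if xs else 0.0
    return len(bursts), mean(durations), mean(silences)

# Two hypothetical encrypted traces with different timing signatures:
short_cmd = [0.00, 0.01, 0.02, 0.30, 0.31]        # two brief bursts
long_cmd = [0.00, 0.01, 0.02, 0.03, 0.04, 0.05]   # one sustained burst

print(burst_features(short_cmd))  # (2, 0.015, 0.28)
print(burst_features(long_cmd))   # (1, 0.05, 0.0)
```

Even this crude feature vector separates the two traces, which is exactly why timing alone can betray which command was sent.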
“In the robotics community, there’s an increased interest in controlling robots remotely by sending commands over a network. The robot could be anywhere, like a hospital, factory, or another country. Many don’t realize that once these robots are hooked into the network, they are exposed to security risks,” adds Dr. Yue Hu, a professor in the Department of Mechanical and Mechatronics Engineering.
These privacy concerns prompted Hu to reach out to her former co-op student, Cheng, and Drs. Diogo Barradas and Urs Hengartner, computer science researchers and fellow members of the University of Waterloo’s Cybersecurity and Privacy Institute (CPI), to explore ways to address the problem collaboratively. CPI unites all six of Waterloo’s faculties and industry partners to secure critical Canadian infrastructure.
Previous research has focused on privacy concerns in teleoperation robotics, where humans can control robots in real-time, like using joysticks or virtual reality interfaces. This study focused on script-based robots, where robots perform pre-programmed commands. This unique interface allows robots to complete tasks with minimal human intervention.
The team investigated techniques that could identify a robot's actions by analyzing its network traffic. They developed a classification technique based on signal processing, the same family of methods used in products like noise-canceling headphones to analyze and transform signals for information extraction or quality improvement.
The researchers conducted their experiment by instructing a Kinova Gen3 robotic arm to perform four actions and collecting 200 network traces, recordings of the data exchanged between the robot and its controller that reveal a system's traffic patterns and flow.
Ultimately, the researchers discovered that robot commands create distinctive traffic sub-patterns that can be detected with common signal processing techniques, particularly signal correlation and convolution. Notably, their technique identified the Kinova robot's actions 97% of the time, even though the traffic was encrypted.
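To give a feel for how correlation can pick out a command's sub-pattern, here is a minimal sketch (our own illustration, not the paper's method): a known command's packets-per-interval "signature" is slid across a longer capture, and the offset with the highest correlation score marks where that command likely ran. All the numbers are invented.

```python
def cross_correlate(trace, template):
    """Slide template over trace and return the raw correlation score
    (dot product) at each offset."""
    n, m = len(trace), len(template)
    return [sum(trace[i + j] * template[j] for j in range(m))
            for i in range(n - m + 1)]

template = [1, 4, 4, 1]                # packet-rate signature of one command
trace = [0, 1, 0, 1, 4, 4, 1, 0, 2]    # longer capture containing it

scores = cross_correlate(trace, template)
best = max(range(len(scores)), key=scores.__getitem__)
print(best)  # prints 3: the template aligns with trace[3:7] == [1, 4, 4, 1]
```

A real fingerprinting attack would normalize the scores and compare templates for every known command, but the core idea, matching timing shapes rather than decrypting payloads, is the same.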
These results suggest that robots could easily leak private information, from industrial secrets to confidential patient data, and that the robotics community needs to build stronger security defenses.
However, certain design choices could prevent this leakage by making a system's network traffic more uniform. The researchers' proposals include altering the timing behavior of the system's application programming interface (API) and employing a smart traffic-shaping algorithm at run time.
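One simple form of traffic shaping is constant-rate padding; the specific scheme below is our illustrative assumption, not the paper's proposal. Real packets are queued and the link emits exactly one fixed-size unit per tick, inserting dummy padding units when the queue is empty, so an eavesdropper sees identical timing no matter which command is running.

```python
def shape(real_packet_ticks, total_ticks):
    """Emit one unit per tick: 'real' if a queued packet is available,
    otherwise 'dummy' padding. Arrival ticks may repeat (bursts)."""
    arrivals = sorted(real_packet_ticks)
    out, backlog, i = [], 0, 0
    for tick in range(total_ticks):
        # Move every packet that has arrived by this tick into the queue.
        while i < len(arrivals) and arrivals[i] <= tick:
            backlog += 1
            i += 1
        if backlog > 0:
            out.append("real")
            backlog -= 1
        else:
            out.append("dummy")
    return out

# Bursty input: three packets at tick 0 and one at tick 5.
print(shape([0, 0, 0, 5], total_ticks=8))
# ['real', 'real', 'real', 'dummy', 'dummy', 'real', 'dummy', 'dummy']
```

The observable output rate is constant, which hides the burst structure; the trade-offs are added latency (the burst is smeared over several ticks) and wasted bandwidth on dummy traffic.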
This earned the team the Best Research Paper Award at the 20th International Conference on Availability, Reliability and Security (ARES).
The research, On the Feasibility of Fingerprinting Collaborative Robot Network Traffic, was published in the proceedings of ARES 2025 and as part of the book series, Lecture Notes in Computer Science.
More information:
Cheng Tang et al, On the Feasibility of Fingerprinting Collaborative Robot Network Traffic, Lecture Notes in Computer Science (2025). DOI: 10.1007/978-3-032-00624-0_5
Citation: Robots are prone to privacy leaks despite encryption (2025, September 22), retrieved 23 September 2025 from https://techxplore.com/news/2025-09-robots-prone-privacy-leaks-encryption.html