Cybersecurity AI: Hacking Consumer Robots in the AI Era
Abstract: Is robot cybersecurity broken by AI? Consumer robots -- from autonomous lawnmowers to powered exoskeletons and window cleaners -- are rapidly entering homes and workplaces, yet their security remains rooted in assumptions of specialized attacker expertise. This paper presents evidence that Generative AI has fundamentally disrupted robot cybersecurity: what historically required deep knowledge of ROS, ROS 2, and robotic system internals can now be automated by anyone with access to state-of-the-art GenAI tools, spearheaded by the open-source CAI (Cybersecurity AI) framework. We provide empirical evidence through three case studies: (1) compromising a Hookii autonomous lawnmower, uncovering fleet-wide vulnerabilities and data-protection violations affecting 267+ connected devices; (2) exploiting a Hypershell powered exoskeleton, demonstrating safety-critical motor-control weaknesses and credential exposure, including access to over 3,300 internal support emails; and (3) breaching a HOBOT S7 Pro window-cleaning robot, achieving unauthenticated BLE command injection and OTA firmware exploitation. Across these platforms, CAI automatically discovered 38 vulnerabilities that would previously have required months of specialized security research. Our findings reveal a stark asymmetry: while offensive capabilities have been democratized through AI, defensive measures lag behind. We argue that traditional defense-in-depth architectures like the Robot Immune System (RIS) must evolve toward GenAI-native defensive agents capable of matching the speed and adaptability of AI-powered attacks.
Explain it Like I'm 14
What this paper is about
This paper looks at how easy it has become to break into everyday robots (like lawnmowers, wearables, and window cleaners) now that powerful AI tools are available. The authors show that tasks that used to require rare, expert knowledge can now be done quickly by an AI assistant called CAI (Cybersecurity AI). They argue that this changes the game: attackers can move much faster, so defenders need new, smarter ways to protect robots.
What the researchers wanted to find out
In simple terms, they asked:
- Can modern AI help someone with little robot knowledge find and exploit security weaknesses in real consumer robots?
- How fast and how many problems can AI find compared to traditional, expert-led security testing?
- What kinds of risks do these weaknesses create for safety and privacy?
- What would robot security need to look like to keep up with AI-powered attacks?
How they tested it (in everyday language)
They used CAI, an open-source AI “cyber tester,” to check three different robots without giving it special insider info—just the product name:
- A robot lawnmower (Hookii Neomow)
- A powered exoskeleton you wear on your legs (Hypershell X)
- A window-cleaning robot (HOBOT S7 Pro)
What CAI did is a lot like having a tireless, very smart intern who:
- Looks for doors and windows into a system (network ports, Bluetooth, cloud services).
- Tries safe, legal ways to see if those doors are unlocked.
- Points out where the biggest dangers are and explains why they matter.
When technical terms came up, here’s what they mean in plain English:
- Bluetooth/BLE: Like a short-range wireless remote control.
- MQTT: A “group chat” system devices use to send messages to each other or the cloud.
- ROS 2: Software many robots use to let their parts talk to each other.
- OTA updates: “Over-the-air” updates—how a device downloads new software.
- ADB: A special “repair door” for developers that, if left open, can let anyone in.
Humans supervised CAI for safety and stopped any tests that might cause harm or affect company cloud systems.
What they found and why it matters
CAI found 38 security problems across the three robots in just a few hours—issues that could affect users’ safety, privacy, or both. Here’s a short, non-technical snapshot of each case:
- Lawn mower robot (Hookii Neomow):
- A “repair door” was left unlocked, giving full access to the robot’s computer.
- The same login details were reused across many robots, letting someone reach hundreds of devices.
- The robot sent private location and home-mapping data without proper protection.
- Why it matters: One weak point could let someone control many robots and see sensitive data (like GPS and room maps).
- Powered exoskeleton (Hypershell X):
- The Bluetooth controls didn’t require proper authentication, so anyone nearby could talk to the device.
- Predictable device IDs and weak server checks exposed user and device info.
- App and service data included leaked credentials and access tokens.
- Why it matters: This is worn on a person’s body. Weak controls could risk user safety by affecting motor behavior.
- Window-cleaning robot (HOBOT S7 Pro):
- Bluetooth commands worked without any pairing or login.
- Software updates weren’t properly verified and could be replaced during download.
- Cloud keys were stored in the app, exposing device control features.
- Why it matters: Someone nearby could send commands (like turning off suction) or try to push fake updates.
Across all three, the patterns were similar: weak or missing authentication, default passwords, and poorly protected updates and data. CAI did this faster than traditional teams—about 3–5 times quicker—showing how AI speeds up offensive testing.
What this could mean going forward
- Attackers don’t need to be robot experts anymore. AI can guide them, lowering the barrier to entry.
- Privacy risks are growing. Robots can collect lots of sensitive data (like maps of your home or your location), and many aren’t protecting it well.
- Safety risks are real. For devices that move, lift, or stick to surfaces, bad commands can cause physical harm or property damage.
- Defenses need to level up. The authors say robots need “GenAI-native” protection—think of AI “bodyguards” for robots that:
- Learn normal behavior and spot odd activity quickly.
- Automatically fix simple problems (like turning on encryption or closing debug doors).
- Share warnings across fleets so one robot’s lesson protects all the others.
Bottom line
This study shows that AI has changed robot security in a big way. An AI assistant found dozens of serious problems across three popular consumer robots in hours, not months. That’s a wake-up call: companies must build stronger protections by default, test with AI tools before shipping, and work together on fast, effective fixes. Robots are becoming part of everyday life; their security needs to keep pace with AI—right now.
Knowledge Gaps
Knowledge gaps, limitations, and open questions
Below is a concise, action-oriented list of unresolved issues the paper leaves open for future research to address.
- External validity and sampling
- Only three devices (all East Asian consumer robots) were assessed; generalizability to other geographies, categories (e.g., drones, vacuums, toys, humanoids), and industrial/collaborative robots is untested. A randomized market sample is needed.
- Potential selection bias toward vulnerable devices is not controlled; the prevalence of similar flaws across the broader market remains unknown.
- Measurement rigor and baselines
- Assessment times are approximate and not derived from controlled, repeatable protocols; no statistical variability, confidence intervals, or repeated trials are provided.
- The “Traditional vs CAI” comparison mixes sources (including one external team) and lacks a standardized methodology, making it hard to attribute speedups causally.
- No token/compute/cost accounting for CAI runs (e.g., LLM API usage, inference time, energy), hindering practical cost-benefit and scalability analyses.
- Autonomy and human-in-the-loop dependency
- The contribution of human oversight versus CAI autonomy is not quantified; ablations (e.g., no human nudges, restricted tools, different prompts) are missing.
- Robustness to variability in operator skill and prompt engineering remains unknown.
- Reproducibility and disclosure artifacts
- Many exploit details are withheld and the authors abstain from CVE assignment; reproducibility for independent verification is limited.
- No artifact package (sanitized PoCs, datasets, packet captures, scripts) is provided to enable reproducible evaluation without enabling misuse.
- Scope of attack validation and safety constraints
- Several impacts are inferred but not fully exercised due to ethical limits (e.g., exoskeleton motor control, cloud-side actions, fleet-wide command execution); end-to-end kill-chain validation remains incomplete.
- Long-term persistence mechanisms, lateral movement within vendor infrastructure, and cross-device wormability are not evaluated.
- Model/agent specification and dependency
- The paper does not specify which LLMs, tools, or model versions CAI used, nor how performance varies across providers or model updates.
- Error modes (hallucinations, false positives/negatives in exploit steps) and their operational consequences are not measured.
- Protocol and ecosystem coverage
- CAI’s effectiveness beyond BLE, MQTT, ROS/ROS 2, REST, and OTA (e.g., Zigbee/Z-Wave, Matter, proprietary radio stacks, UWB, NFC, Thread, Wi‑Fi Direct) is untested.
- Hardware/firmware heterogeneity (different MCU/SoC families, secure boot implementations, signed OTA pipelines) is not systematically explored.
- Physical-safety impact quantification
- Safety risks are qualitatively described but lack empirical quantification (e.g., BLE range in real environments, likelihood of suction failure during operation, kinematic risk bounds for exoskeleton control).
- No formal hazard analysis or safety-case modeling links cyber exploits to physical harm probabilities.
- Cloud and IoT platform generalization
- Findings on Gizwits are not compared against other IoT backends (e.g., Tuya, AWS IoT, Azure IoT, GCP IoT Core successors), leaving cross-platform generality unknown.
- Rate limiting, anomaly detection, and anti-abuse controls on vendor/cloud APIs are not characterized.
- Legal and regulatory scope
- GDPR observations are anecdotal and not backed by a formal legal audit; other regimes (e.g., CCPA/CPRA, PIPL, LGPD) are not assessed.
- No reproducible methodology for automated compliance auditing by AI agents is proposed or evaluated.
- Defensive proposals lack concrete implementations
- The “GenAI-native defense” section is architectural and conceptual; there is no prototype, dataset, or evaluation demonstrating real-time detection, autonomous patching, or fleet-wide coordination.
- False-positive/false-negative rates, latency under attack, adversarial robustness, and safety rollback strategies for autonomous patch deployment are unmeasured.
- Benchmarking against alternative agents
- No head-to-head evaluation against other LLM-based pentest agents (e.g., PentestGPT derivatives, HackSynth, AutoAttacker) on identical robotic targets or tasks.
- A standardized benchmark for robotic cybersecurity (beyond cited CAIBench) is not instantiated for physical robots with multi-protocol attack surfaces.
- Vendor response and remediation outcomes
- Beyond noting one vendor’s refusal, the study does not document remediation timelines, patch efficacy, or post-disclosure risk reduction across vendors.
- Impact of non-participation in CVE/NVD on patch deployment and user risk is not assessed.
- Market-level threat and CVE/NVD pipeline modeling
- Assertions about overwhelming disclosure pipelines are not supported by quantitative forecasting (e.g., modeled vulnerability discovery rates vs. triage capacity).
- No proposed design or prototype for AI-augmented triage and continuous scoring beyond qualitative recommendations.
- Fleet-scale exploitation dynamics
- The practical feasibility, timing, and detection likelihood of fleet-wide attacks (e.g., using shared MQTT creds) are not experimentally characterized.
- No evaluation of coordinated attacker strategies versus potential coordinated defense at fleet scale.
- ROS 2 security posture in practice
- Although the lawnmower runs ROS 2, the assessment did not probe SROS2 configurations or demonstrate bypasses on that target, leaving middleware security uncharacterized in situ.
- User-study validation of “democratization”
- The claim that non-experts can now conduct such attacks with CAI is not tested via controlled user studies assessing replicability, safety, and error rates.
- Data governance for collected sensitive data
- The paper does not detail how CAI handles, stores, or minimizes PII and sensitive telemetry collected during assessments, nor provide guidelines for safe research operations.
- Misuse risk and guardrails
- Mechanisms to prevent malicious misuse of CAI (access controls, red-teaming safeguards, alignment checks, rate limits, environment constraints) are not described or evaluated.
- OTA exploitability across versions and devices
- OTA findings (e.g., unsigned firmware, CRC-only checks) are validated with non-destructive payloads but not tested across firmware versions, product variants, or with persistence/rollback constraints.
- Detection and response on the defender side
- The ability of existing EDR/IDS/IoT security products to detect CAI-driven attacks is not assessed; there is no baseline for current defensive effectiveness.
- Supply chain and third-party dependencies
- Security implications of third-party components (e.g., libraries, SDKs, BLE stacks) and their update ecosystems are not analyzed for systemic risk propagation.
- Ethics and responsible disclosure processes
- Best practices for disclosing to non-responsive manufacturers (especially cross-border) are not proposed or tested; alternative accountability mechanisms are left open.
- Environmental and scalability concerns
- The energy/carbon footprint and network load of large-scale AI-assisted scanning of consumer robots are not quantified, leaving sustainability and ISP/network abuse questions unanswered.
Practical Applications
Immediate Applications
The paper’s findings and methods enable a set of deployable actions across industry, academia, policy, and daily life. The bullets below summarize specific use cases, linked to sectors, with suggested tools/workflows and key dependencies.
- Hardening playbook for consumer robots (disable debug, secure comms) — sectors: robotics, consumer IoT
- Actions: disable ADB over TCP; change or remove default admin credentials; enforce BLE pairing/bonding; enable TLS for MQTT; enforce per-device credentials; sign and verify OTA firmware; serve updates over HTTPS only.
- Tools/workflows: vendor firmware updates; CI security checks; configuration baselines (e.g., secure ROS 2 configs); MQTT broker hardening guides for EMQX/Mosquitto.
- Assumptions/dependencies: vendor firmware update capability; hardware supports crypto; access to device configuration pipeline.
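As one concrete baseline for the broker-hardening item above, a Mosquitto configuration could look roughly like this. This is an illustrative sketch, not a complete production config; all file paths are placeholders:

```
# mosquitto.conf sketch: no anonymous access, TLS-only listener,
# mutual TLS, and per-topic ACLs. Paths are illustrative.
allow_anonymous false
password_file /etc/mosquitto/passwd

# TLS listener on the standard secure MQTT port
listener 8883
cafile /etc/mosquitto/certs/ca.crt
certfile /etc/mosquitto/certs/broker.crt
keyfile /etc/mosquitto/certs/broker.key

# Require client certificates (mutual TLS per device)
require_certificate true

# Per-device, per-topic permissions
acl_file /etc/mosquitto/acl
```

With `acl_file` in place, each device identity can be restricted to its own topic subtree, which directly addresses the fleet-wide credential-reuse pattern the paper reports.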
- AI-assisted pre-release security testing with CAI — sectors: robotics, software
- Actions: integrate CAI into pre-release and pre-shipment test cycles to replicate the paper’s 3–5× faster discovery; add automated BLE/MQTT/ROS 2/OTA/cloud checks.
- Tools/workflows: CAI CLI in CI/CD; bug-bounty triage automation; CAIBench for internal benchmarking.
- Assumptions/dependencies: access to LLMs/GenAI backends; safety guardrails; scoped test environments.
- Vendor-managed “Robot Pentest-as-a-Service” for customers and fleets — sectors: robotics, MSSP/cybersecurity services
- Actions: offer subscription scanning for customer fleets (BLE exposure, open ADB, default MQTT credentials, unsigned OTA).
- Tools/workflows: CAI-based scanners; broker enumeration scripts; BLE fuzzing packages; ROS 2 topic analyzers.
- Assumptions/dependencies: customer consent; lawful interception approvals; cloud API rate limits.
- Rapid MQTT broker and fleet credential audits — sectors: robotics, industrial IoT
- Actions: detect and remediate fleet-wide credential reuse; rotate credentials; enable mTLS; restrict anonymous/admin logins; enforce ACLs per-topic.
- Tools/workflows: EMQX/Mosquitto audit scripts; secrets rotation playbooks; certificate provisioning (e.g., ACME, in-factory PKI).
- Assumptions/dependencies: broker admin access; provisioning pipeline for per-device identities.
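Enforcing per-topic ACLs ultimately rests on MQTT's wildcard semantics (`+` matches one topic level, `#` matches all remaining levels). The sketch below is an illustrative re-implementation of that matching rule, not a broker API; the function name is our own:

```python
def topic_matches(acl_filter: str, topic: str) -> bool:
    """Check whether an MQTT topic matches an ACL filter.

    '+' matches exactly one topic level; '#' matches all
    remaining levels (and must appear last in the filter).
    """
    f_parts = acl_filter.split("/")
    t_parts = topic.split("/")
    for i, part in enumerate(f_parts):
        if part == "#":           # multi-level wildcard: match the rest
            return True
        if i >= len(t_parts):     # topic has fewer levels than the filter
            return False
        if part not in ("+", t_parts[i]):
            return False
    return len(f_parts) == len(t_parts)

# A per-device ACL such as "devices/<id>/+" stops one compromised
# robot from publishing into another robot's topics.
print(topic_matches("devices/42/+", "devices/42/telemetry"))  # True
print(topic_matches("devices/42/+", "devices/99/telemetry"))  # False
```

Granting each device identity only its own `devices/<id>/#` subtree is what makes credential rotation meaningful: a stolen credential then exposes one robot, not the fleet.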
- BLE security fixes and app protocol obfuscation removal — sectors: consumer IoT, healthcare/assistive devices
- Actions: require authenticated pairing; implement GATT access control; remove UART-like unauthenticated command channels; add replay protection and MACs.
- Tools/workflows: mobile app updates; SoC SDK features (e.g., EFR32, Nordic); BLE security testing with CAI.
- Assumptions/dependencies: BLE stack version supports LE Secure Connections; OTA update channel available.
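The replay-protection and MAC items above can be sketched in a few lines: each command frame carries a monotonically increasing counter plus a MAC over (counter, payload), so a captured frame cannot be resent. This is an illustrative construction using HMAC-SHA256 over a shared per-device key, not the paper's or any vendor's actual protocol:

```python
import hmac
import hashlib

KEY = b"per-device-secret-provisioned-at-pairing"  # illustrative key

def make_frame(counter: int, payload: bytes) -> bytes:
    """Build an authenticated frame: counter || payload || truncated MAC."""
    body = counter.to_bytes(4, "big") + payload
    mac = hmac.new(KEY, body, hashlib.sha256).digest()[:8]
    return body + mac

class Receiver:
    """Accepts a frame only if its MAC is valid and its counter is fresh."""
    def __init__(self):
        self.last_counter = -1

    def accept(self, frame: bytes) -> bool:
        body, mac = frame[:-8], frame[-8:]
        expected = hmac.new(KEY, body, hashlib.sha256).digest()[:8]
        if not hmac.compare_digest(mac, expected):
            return False                      # forged or corrupted frame
        counter = int.from_bytes(body[:4], "big")
        if counter <= self.last_counter:
            return False                      # replayed frame
        self.last_counter = counter
        return True

rx = Receiver()
frame = make_frame(1, b"SUCTION_ON")
print(rx.accept(frame))  # True: fresh and authentic
print(rx.accept(frame))  # False: exact replay is rejected
```

In a real BLE stack the key would come from LE Secure Connections pairing rather than a hardcoded constant, but the counter-plus-MAC structure is the core of what the unauthenticated UART-style channels described in the paper lack.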
- OTA signing and delivery hardening kit — sectors: robotics, software supply chain
- Actions: implement code signing (Ed25519/ECDSA), verify on-device; switch firmware distribution to HTTPS with pinning; add rollback protection.
- Tools/workflows: signing servers; build-system integration (e.g., Sigstore/Cosign-like flows adapted for embedded); SBOM/KBOM attached to releases.
- Assumptions/dependencies: flash layout allows verifying bootloader; vendor control over update servers.
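To see why the CRC-only OTA checks found in the case studies fail against an active attacker, note that a CRC is a public, keyless function: anyone who modifies the image can simply recompute the checksum. A sketch, using CRC-16/CCITT-FALSE as an illustrative variant:

```python
def crc16(data: bytes, poly: int = 0x1021, init: int = 0xFFFF) -> int:
    """CRC-16/CCITT-FALSE: detects accidental corruption only."""
    crc = init
    for byte in data:
        crc ^= byte << 8
        for _ in range(8):
            crc = ((crc << 1) ^ poly) if crc & 0x8000 else (crc << 1)
            crc &= 0xFFFF
    return crc

firmware = b"\x01\x02legitimate-image"
shipped = firmware + crc16(firmware).to_bytes(2, "big")

# An attacker swaps the image and recomputes the CRC; nothing
# in the check involves a secret, so the forgery passes.
evil = b"\x01\x02malicious-image"
forged = evil + crc16(evil).to_bytes(2, "big")

def device_check(blob: bytes) -> bool:
    image, tail = blob[:-2], blob[-2:]
    return crc16(image).to_bytes(2, "big") == tail

print(device_check(shipped))  # True
print(device_check(forged))   # True: CRC provides no authenticity
```

A signature scheme such as Ed25519 closes this gap because verification depends on a private key the attacker does not hold, which is exactly what the signing actions above provide.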
- Privacy and GDPR/consumer-protection compliance assessment — sectors: policy/regulatory, robotics, consumer IoT
- Actions: evaluate telemetry collection/transmission (e.g., GPS, images, LiDAR point clouds); implement consent/opt-out; minimize data; document cross-border transfers.
- Tools/workflows: CAI-driven privacy mapping; DPIA templates; data retention and deletion workflows.
- Assumptions/dependencies: legal/compliance engagement; ability to change app/cloud data flows.
- ROS 2 security configuration quick wins — sectors: robotics (R&D and production)
- Actions: adopt ROS 2 security enclaves; fix SROS2 misconfigurations; protect the keystore; restrict topics and services; disable unneeded discovery.
- Tools/workflows: SROS2 hardening scripts; CI policy checks; intrusion prevention (e.g., RIPS).
- Assumptions/dependencies: ROS 2 versions with security features enabled; operational acceptance of stricter policies.
- Enterprise deployment guardrails for robotic devices — sectors: facilities management, smart buildings, retail
- Actions: segment robots on dedicated VLANs; block outbound 1883 (insecure MQTT); monitor BLE beacons; apply NAC and EDR for embedded/edge.
- Tools/workflows: network policy templates; BLE RF monitoring; MQTT IDS rules; asset inventory updates.
- Assumptions/dependencies: IT/OT network control; vendor support for proxying or constrained networking.
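For the "block outbound 1883" control, an nftables ruleset along these lines could be applied on the robot VLAN's gateway. This is a sketch; the table and chain names are illustrative, and TLS MQTT on 8883 remains allowed by the default policy:

```
# nftables sketch: drop plaintext MQTT leaving the robot VLAN.
table inet robot_guard {
    chain egress {
        type filter hook forward priority 0; policy accept;
        tcp dport 1883 drop
    }
}
```

Combined with VLAN segmentation, this forces devices onto the TLS listener (or surfaces those that cannot use it) without touching the robots themselves.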
- Consumer safety checklist for home robots — sectors: daily life, consumer IoT
- Actions: isolate robot Wi-Fi; change defaults; disable remote control when not needed; keep apps/firmware updated; verify vendor support and disclosure policy before purchase.
- Tools/workflows: ISP/router app guides; vendor app prompts; crowdsourced device security ratings.
- Assumptions/dependencies: consumer awareness; vendor-supplied updates.
- Curriculum and lab modules for “AI in Robot Security” — sectors: academia, education
- Actions: teach CAI workflows on BLE/MQTT/ROS 2 targets; include privacy mapping; reproduce paper’s case patterns safely.
- Tools/workflows: emulated testbeds; open datasets; Dockerized ROS 2 vulnerable labs; CAIBench tasks.
- Assumptions/dependencies: safe lab environments; ethical approval and scope limitations.
- Incident response runbooks for robot fleets — sectors: robotics, MSSP
- Actions: procedures for credential rotation, firmware revocation, broker lockdown, BLE lockdown, and customer notification.
- Tools/workflows: SOAR playbooks; pre-staged revocation lists; customer comms templates.
- Assumptions/dependencies: revocation channels; fleet observability; legal/comms coordination.
Long-Term Applications
Some applications require additional research, development, scaling, or regulatory alignment before broad deployment.
- GenAI-native defensive agents on-device and in the cloud — sectors: robotics, cybersecurity
- Vision: embedded agents learn normal behavior and detect anomalies across BLE/MQTT/ROS 2/OTA; autonomously quarantine, patch, or reconfigure; coordinate across fleets.
- Tools/workflows: online learning on-device; federated/fleet anomaly detection; risk scoring; safe action policies.
- Dependencies: robust on-device compute and power budgets; safe/verified autonomy; certification for safety-critical behavior.
- Autonomous patch generation and self-healing systems — sectors: robotics, software supply chain
- Vision: AI proposes and verifies patches (e.g., enable TLS, remove debug services), ships hotfixes with formal checks.
- Tools/workflows: LLM-assisted code changes; unit/integration tests; canary deployments; rollback logic.
- Dependencies: secure CI/CD; reproducible builds; regulatory acceptance in safety contexts.
- Fleet-wide threat intelligence and cooperative defense networks — sectors: robotics, telecom/cloud
- Vision: cross-vendor data sharing on indicators of compromise; standardized robot telemetry schemas; rapid remediation advisories.
- Tools/workflows: STIX/TAXII-like pipelines tailored to robotic protocols; secure multi-party analytics.
- Dependencies: industry consortia; privacy-preserving analytics; antitrust and data-sharing frameworks.
- AI-augmented vulnerability management replacing static CVE triage — sectors: policy/regulatory, cybersecurity
- Vision: real-time knowledge graphs, automated severity-context scoring, exploit reproducibility checks, and coordinated mitigation guidance for cyber-physical systems.
- Tools/workflows: LLM+symbolic triage; automated PoC verification sandboxes; mapping to safety risk.
- Dependencies: governance changes; integration with NVD/CVE or successors; global participation.
- Security certification for consumer robots with continuous monitoring — sectors: standards bodies, policy
- Vision: certification schemes (akin to ETSI EN 303 645, UL 2900) specialized for robots, plus continuous compliance feeds (secure OTA, BLE auth, MQTT TLS).
- Tools/workflows: conformity assessment labs; automated evidence collection; attestations from devices.
- Dependencies: standardization; market incentives; regulator mandates (e.g., EU CRA alignment).
- Secure-by-design identity lifecycle for devices — sectors: robotics, IoT platforms
- Vision: per-device strong identities (TPM/SE-backed), mTLS for brokers/cloud, lifecycle rotation and revocation at scale.
- Tools/workflows: manufacturing PKI; DPP/Bootstrapping; attestation protocols.
- Dependencies: hardware security modules; supply-chain retooling; key escrow/backup policies.
- Safer exoskeletons and assistive devices via continuous cyber-safety monitoring — sectors: healthcare, wearables
- Vision: runtime guardians that constrain motor commands; BLE isolation modes; authenticated caregiver control; certified safety envelopes.
- Tools/workflows: runtime monitors; formal safety constraints; clinical validation.
- Dependencies: medical device regulations; human factors testing; liability frameworks.
- Consumer “Home Robot Security Manager” apps — sectors: consumer software, smart home
- Vision: phone/router apps that discover robots, assess exposure using CAI-inspired checks, and auto-apply mitigations or guide users.
- Tools/workflows: local network scans; BLE scanners; vendor API integrations.
- Dependencies: ecosystem cooperation; OS permissions; simplified UX for non-experts.
- Insurance and risk-pricing products for robotic cyber risk — sectors: finance/insurtech
- Vision: policies that factor device security posture (signed OTA, TLS, BLE auth) and fleet exposure; discounts for certified defenses.
- Tools/workflows: telemetry-based risk scoring; attestations; incident response bundles.
- Dependencies: actuarial data; standard security attestations; regulatory acceptance.
- ROS 2 middleware evolution for zero-trust — sectors: robotics R&D
- Vision: default-secure ROS 2 with hardened discovery, stronger keystores, and supply-chain protections against SROS2 weaknesses.
- Tools/workflows: new DDS profiles; keystore isolation; secure build pipelines.
- Dependencies: upstream adoption; performance trade-off studies; community migration support.
- Regulatory tech for automated privacy compliance in robots — sectors: policy, legal-tech
- Vision: continuous audits of telemetry flows, consent status, and cross-border transfers; automated remediation recommendations.
- Tools/workflows: data flow tracing; policy-as-code; compliance dashboards.
- Dependencies: standardized data schemas; regulator tool adoption; vendor cooperation.
- Cross-vendor OTA resiliency and provenance frameworks — sectors: software supply chain, standards
- Vision: interoperable, verifiable firmware distribution (signed, pinned, with SBOM/KBOM), tamper-evident logs, and fast revocation.
- Tools/workflows: TUF-style frameworks adapted for embedded; transparency logs.
- Dependencies: consensus across vendors; hosted infrastructure; secure boot integration.
- Education and workforce upskilling for AI-enabled robot security — sectors: academia, industry training
- Vision: standardized micro-credentials and labs on CAI-driven offensive/defensive robotics cybersecurity.
- Tools/workflows: open curricula; cloud-hosted sandboxes; capstone engagements with vendors.
- Dependencies: institutional adoption; lab safety rules; sustainable funding.
Each long-term application benefits from the paper’s core insight: offensive automation has outpaced current defenses, motivating adaptive, AI-native security architectures and ecosystem reforms that scale to real-world, safety-relevant robotic deployments.
Glossary
- Adversarial simulation: The practice of generating realistic attack scenarios to test and improve defensive systems. "These systems should incorporate adversarial simulation that continuously generates novel attack scenarios"
- Android Debug Bridge (ADB): A developer tool and protocol that provides shell access and control over Android-based systems, often exposing powerful debug interfaces. "an unauthenticated Android Debug Bridge (ADB) service on port 5555 (CVSS 10.0)."
- APK decompilation: Reverse-engineering an Android application package to recover readable source or bytecode for analysis. "Through APK decompilation of the HOBOT mobile application (which lacked code obfuscation), CAI reverse-engineered the complete BLE command protocol:"
- AutoAttacker: An LLM-guided system that automates cyber-attacks, including post-compromise actions. "AutoAttacker automates post-breach lateral movement using LLM-guided Metasploit campaigns"
- BLE (Bluetooth Low Energy): A low-power wireless communication standard widely used by IoT and robotic devices for short-range control and telemetry. "We began by scanning for BLE advertisements"
- CAN Bus: A robust automotive and industrial field bus used for real-time communication between microcontrollers and devices. "communicating with two STM32 motor controllers via CAN Bus."
- CAI (Cybersecurity AI): An open-source framework that automates cybersecurity assessments using LLMs and agentic workflows. "Each assessment followed a standardized protocol using CAI, a CLI-based cybersecurity agent, with a human operator in the loop:"
- Capture The Flag (CTF): A competitive cybersecurity format where participants solve security challenges to “capture” flags. "HackSynth introduces a dual-module Planner/Summarizer architecture evaluated against 200 CTF challenges."
- Code obfuscation: Techniques that make code harder to understand or reverse-engineer to impede analysis. "(which lacked code obfuscation)"
- Command injection: A vulnerability where an attacker can inject and execute unauthorized commands through an interface. "achieving unauthenticated BLE command injection and OTA firmware exploitation."
- Costmap: In robotics, a grid or map assigning traversal “costs” used for motion planning and obstacle avoidance. "18MB+ of costmap camera images"
- CRC16: A 16-bit cyclic redundancy check used for integrity verification, not sufficient for authentication or tamper-resistance. "unsigned OTA firmware updates protected only by CRC16 checksums"
- CVE (Common Vulnerabilities and Exposures): A standardized identifier system for publicly known cybersecurity vulnerabilities. "documented over 100 security flaws and 17 CVE IDs across multiple robotic platforms."
- CVSS 3.1: The Common Vulnerability Scoring System version 3.1, used to rate the severity of security vulnerabilities. "Vulnerability severity was assessed by the authors using CVSS 3.1 base metrics."
- CWE-190: MITRE’s classification for Integer Overflow or Wraparound weakness. "Static analysis of the int8ToUint8 conversion function identified a potential integer overflow (CWE-190)"
- CWE-294: MITRE’s classification for Authentication Bypass by Capture-replay (lack of replay protection). "with no replay protection (CWE-294)"
- CWE-328: MITRE’s classification for Reversible One-Way Hash (weak integrity mechanisms). "The integrity mechanism is a single XOR byte (CWE-328)"
- Defense-in-depth: A layered security strategy employing multiple, redundant controls to protect systems. "Defense-in-depth approaches like the Robot Security Framework (RSF) and Robot Immune System (RIS) were developed under this assumption."
- EMQX: A high-performance MQTT broker platform used to manage publish/subscribe messaging in IoT systems. "Hookii's EMQX MQTT broker at neomowx.hookii.com"
- Fuzzing: Automated testing that feeds programs with a large volume of semi-random inputs to trigger bugs and vulnerabilities. "150 CPU-hours of traditional fuzzing had missed"
- GATT (Generic Attribute Profile): The BLE protocol layer that defines how data is structured and accessed via services and characteristics. "CAI enumerated BLE GATT services"
- GDPR (General Data Protection Regulation): The EU’s data protection law governing personal data processing, consent, and transfer. "Two of the three robots assessed exhibited confirmed GDPR compliance failures"
- GenAI: Shorthand for “generative AI,” referring to models capable of generating content or actions. "state-of-the-art GenAI tools"
- Generative AI: AI systems that can create text, code, or other content, often transforming workflows in security and robotics. "This paper presents evidence that Generative AI has fundamentally altered the security model of consumer robotics."
- Gizwits: A commercial IoT cloud platform used for device connectivity, telemetry, and control. "the Gizwits IoT cloud platform"
- IDOR (Insecure Direct Object Reference): A vulnerability where direct references to objects (e.g., IDs) allow unauthorized access. "CAI discovered a critical Insecure Direct Object Reference (IDOR) chain"
- IMU (Inertial Measurement Unit): A sensor that measures acceleration and rotation to estimate device motion and orientation. "GPS/IMU sensors"
- Intrusion prevention system: A security system that detects and actively blocks malicious activity in real time. "RIPS, an intrusion prevention system specifically designed for ROS 2."
- Keystore exfiltration: Theft of cryptographic keys or credentials from protected storage, enabling broader compromise. "via keystore exfiltration"
- Lateral movement: Post-compromise techniques used to move from one system or component to others within a network or environment. "automates post-breach lateral movement"
- LiDAR: Light Detection and Ranging; a sensor that measures distance by illuminating targets with laser light to build 3D maps. "Livox LiDAR for 3D mapping"
- LLM (Large Language Model): A machine learning model trained on large text corpora, capable of reasoning over and generating text or code. "The application of LLMs to offensive security has accelerated rapidly."
- Man-in-the-middle (MITM): An attack where an adversary intercepts and potentially alters communications between parties. "enabling potential MITM firmware replacement."
- Metasploit: A widely used penetration testing framework for exploit development and automated attack workflows. "AutoAttacker~\cite{xu2024autoattacker} automates post-breach lateral movement using LLM-guided Metasploit campaigns"
- MQTT: A lightweight publish/subscribe messaging protocol commonly used in IoT and robotics. "continuous data transmission via unencrypted MQTT (port~1883"
- Nordic UART Service (NUS): A BLE service profile that emulates a serial UART connection over BLE for data exchange. "via the Nordic UART Service"
- NVD (National Vulnerability Database): A U.S. government repository of vulnerability data, maintained by NIST. "The NIST National Vulnerability Database (NVD) entered crisis in 2024"
- OTA (Over-the-Air) firmware update: Remote delivery and installation of firmware over a network. "unsigned OTA firmware updates"
- PentestGPT: An LLM-driven system for automating penetration testing tasks and workflows. "PentestGPT~\cite{deng2024pentestgpt} formalized this approach"
- Penetration testing: A security evaluation method that simulates attacks to identify and validate vulnerabilities. "were among the first to demonstrate LLM-driven penetration testing"
- Point cloud: A set of data points in space representing a 3D shape or environment, used in mapping and perception. "including a 206MB point cloud"
- Proof-of-concept (PoC): A minimal demonstration that a vulnerability is exploitable or a method works. "as confirmed by PoC testing"
- Ransomware: Malware that encrypts or otherwise disables systems until a ransom is paid. "the first documented case of industrial robot ransomware targeting Universal Robots platforms."
- REST API: A web API that uses Representational State Transfer principles, typically over HTTP. "REST API endpoints for cloud services"
- RIPS: A proposed intrusion prevention system tailored for ROS 2 middleware. "Soriano-Salvador et al.~\cite{soriano2024rips} proposed RIPS, an intrusion prevention system specifically designed for ROS~2."
- RIS (Robot Immune System): A security architecture for robots inspired by biological immune systems, emphasizing layered defense and response. "traditional architectures like the Robot Immune System (RIS) must evolve toward GenAI-native defensive agents."
- Robot Security Framework (RSF): A standardized methodology for security assessments of robotic systems across multiple layers. "The Robot Security Framework (RSF)~\cite{mayoral2018rsf} introduced a standardized four-layer methodology"
- ROS (Robot Operating System): A widely used middleware for robotics providing communication, tools, and libraries. "deep knowledge of ROS, ROS~2, and robotic system internals"
- ROS 2: The next-generation ROS middleware with DDS-based communication, improved performance, and security features. "ROS~2 Humble"
- Security through obscurity: Relying on secrecy of design or implementation as a primary security measure. "The End of Security Through Obscurity"
- SROS2 (Secure ROS 2): Tools and configurations to add security capabilities (e.g., authentication, encryption) to ROS 2. "SROS2, a popular series of tools to secure ROS~2 middleware"
- Static analysis: Examining code without executing it to detect defects or vulnerabilities. "Static analysis of the int8ToUint8 conversion function identified a potential integer overflow (CWE-190)"
- Supply chain attack: Compromising a system by targeting its dependencies, tools, or update channels. "supply chain attacks on SROS~2"
- TLS (Transport Layer Security): A cryptographic protocol that provides confidentiality and integrity for network communications. "for instance, enabling TLS on MQTT connections"
- Zero-day: A previously unknown vulnerability with no available patch at the time of discovery. "leave zero-day exposures unpatched for years"
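The static-analysis entry above cites a potential integer overflow (CWE-190) in an int8ToUint8 conversion function. The actual firmware source is not public, so the sketch below is a hypothetical reconstruction in Python that mimics C-style two's-complement wraparound, contrasted with a range-checked variant of the kind such a finding would typically recommend:

```python
def int8_to_uint8_naive(value: int) -> int:
    """Hypothetical reconstruction of a C-style int8 -> uint8 cast.

    Masking to 8 bits reproduces two's-complement wraparound: negative
    inputs silently become large unsigned values (the CWE-190 class of bug).
    """
    return value & 0xFF


def int8_to_uint8_checked(value: int) -> int:
    """Range-checked variant: reject values not representable as uint8
    instead of silently wrapping them."""
    if not 0 <= value <= 255:
        raise ValueError(f"value {value} not representable as uint8")
    return value


# A negative sensor reading wraps to a large unsigned value in the naive cast:
print(int8_to_uint8_naive(-1))     # 255
print(int8_to_uint8_naive(-128))   # 128
print(int8_to_uint8_checked(100))  # 100
```

The naive version is where safety-critical logic goes wrong: a small negative reading (e.g. a motor current of -1) reinterpreted as 255 can drive downstream control code far outside its intended range, which is why static analyzers flag unchecked narrowing conversions.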