daVinci-Dev: Surgical Robotics & LLM Agentics

Updated 18 February 2026
  • daVinci-Dev is a comprehensive suite that merges repurposed da Vinci surgical platforms with agent-native machine learning, enabling both surgical robotics research and advanced code reasoning.
  • It features open-source hardware and a real-time software stack that support precise teleoperation and autonomous surgical tasks, achieving sub-millimeter accuracy via sophisticated calibration techniques.
  • Its agent-native training paradigm uses extensive code trajectory data to boost model performance, setting new benchmarks in both robotics control and software engineering problem solving.

daVinci-Dev refers to a suite of platforms, methodologies, and modifications derived from the da Vinci surgical system, encompassing the open-source research kit (dVRK), advanced data-driven model training for software engineering agents, and instrument-level innovations for intraoperative tissue quantification. It serves as both a physical robotic infrastructure for surgical research and a conceptual framework for agent-native machine learning in code reasoning domains. daVinci-Dev has enabled technical advances across surgical robotics, autonomous manipulation, multi-modal sensing, and large-scale software engineering with agentic LLMs.

1. Open-Source Hardware and Software Platform

The da Vinci Research Kit (dVRK) repurposes retired clinical da Vinci systems, exposing full access to kinematics, dynamics, vision, and control for research and development in surgical robotics (D'Ettorre et al., 2021). The dVRK consists of three core subsystems:

  • Patient-Side Manipulators (PSMs) and Endoscopic Camera Manipulator (ECM): Multi-DOF serial chains with kinematics suitable for constrained surgical tasks, featuring optical encoders, harmonic drives, and 8 mm tool compatibility.
  • Surgeon-Side Console (MTMs): Twin master arms and stereo visualization system for bilateral teleoperation, supporting haptic feedback and ergonomic control.
  • Controller Electronics: Real-time Linux-based computation, custom servo amplifiers (IEEE-1394 FireWire communication), frame grabbers, and safety interlocks.

The real-time software stack comprises the CISST/SAW libraries for task management and data exchange, real-time joint servoing (up to 1 kHz), vision, and force/torque feedback. Integration occurs through both C++/Python APIs and ROS middleware via the CRTK standard.

This configuration permits the development and benchmarking of advanced control schemes (e.g., computed torque, impedance/admittance control, virtual fixtures), high-bandwidth sensing, and multi-modal data fusion for both teleoperated and autonomous robotic interventions.
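As an illustration of the control schemes named above, a Cartesian impedance law reduces to a virtual spring-damper between the current and desired tool pose. The sketch below uses illustrative gains and poses, not actual dVRK defaults:

```python
import numpy as np

def impedance_force(x, x_dot, x_des, xd_des, K, D):
    """Cartesian impedance law: commanded force from a virtual
    spring-damper between current and desired tool pose:
    F = K (x_des - x) + D (xd_des - x_dot)."""
    return K @ (x_des - x) + D @ (xd_des - x_dot)

# Illustrative translational gains (N/m and N*s/m), not dVRK defaults.
K = np.diag([500.0, 500.0, 200.0])
D = np.diag([20.0, 20.0, 10.0])

f = impedance_force(
    x=np.array([0.10, 0.00, 0.05]),      # current tool-tip position (m)
    x_dot=np.zeros(3),                   # current tip velocity (m/s)
    x_des=np.array([0.10, 0.01, 0.05]),  # desired position: 1 cm offset in y
    xd_des=np.zeros(3),                  # desired velocity
    K=K, D=D,
)
print(f)  # restoring force acts along +y only
```

Admittance control inverts the same relation, mapping measured forces to commanded motion; both forms are commonly benchmarked on the dVRK.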

2. Technical Constraints and Calibration Methodologies

First-generation dVRK systems present several technical and calibration challenges (Cui et al., 2022):

  • Kinematic Chain Nonlinearities: Cable-driven actuation and gearbox wear induce up to 1.02 mm end-effector positional error. Forward kinematics employ modified DH transforms; errors are resolved through periodic optical/vision-based recalibration.
  • Encoder Drift and Potentiometer Error: Dual sensor (encoder, potentiometer) states require per-joint slope/offset calibration, susceptible to time-dependent drift (0.5° over two weeks), affecting tool-tip accuracy.
  • Mechanical Compliance: Backlash and shaft elasticity necessitate online monitoring; compliance is modeled as lateral deflection, \(\delta = K^{-1} F_{\text{ext}}\).
  • Optical System Limitations: Stereo endoscopes provide 25 Hz interlaced video with variable intrinsics across focus settings, requiring custom per-focus calibration maps and periodic checkerboard routines.
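The modified DH forward kinematics mentioned in the first item amount to chaining one homogeneous transform per joint. A minimal sketch using the modified (Craig) DH convention, with illustrative parameters rather than actual PSM values:

```python
import numpy as np

def mod_dh(alpha, a, d, theta):
    """Single-link homogeneous transform from modified (Craig) DH
    parameters: twist alpha, link length a, offset d, joint angle theta."""
    ca, sa = np.cos(alpha), np.sin(alpha)
    ct, st = np.cos(theta), np.sin(theta)
    return np.array([
        [ct,      -st,      0.0,  a],
        [st * ca,  ct * ca, -sa, -sa * d],
        [st * sa,  ct * sa,  ca,  ca * d],
        [0.0,      0.0,     0.0,  1.0],
    ])

# Chain per-joint transforms to obtain the end-effector pose.
# Link parameters below are illustrative, not actual PSM values.
links = [(0.0, 0.0, 0.0, np.pi / 4),      # revolute joint, 45 deg
         (np.pi / 2, 0.0, 0.15, 0.0)]     # 90 deg twist, 0.15 m offset
T = np.eye(4)
for alpha, a, d, theta in links:
    T = T @ mod_dh(alpha, a, d, theta)
print(T[:3, 3])  # tool-tip position in the base frame
```

Calibration then becomes the problem of correcting the nominal (alpha, a, d, theta) parameters so the chained transform matches externally measured poses.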

Critical calibration routines span preoperative multi-point joint characterization, camera-lens calibration (Zhang’s method), hand-eye registration (AX=XB solution), and runtime residual drift monitoring with automatic recalibration triggers.

| Calibration Aspect | Typical Error Range | Recurrence / Comments |
|---|---|---|
| Joint-angle estimation | 1–3.4 mm (RMS) | Reset on power-up; re-calibrate weekly |
| Hand-eye calibration | 0.5–2.0 mm | Optical tracking / marker-based |
| Camera focus calibration | ~mm (per-focus lookup) | Updated dynamically intraoperatively |
| Teleoperation latency | 50–200 ms | Mitigated via AR predictive displays |

By rigorously maintaining these calibrations, researchers achieve sub-millimeter pose fidelity critical for telemanipulation, autonomous control, and vision-guided analytics.

3. Research Applications: Autonomous Manipulation and Imaging

daVinci-Dev underpins a broad research agenda in surgical robotics (D'Ettorre et al., 2021, Zhang et al., 2016):

  • Teleoperation Enhancements: Motion scaling, predictive visual overlays, latency compensation.
  • Autonomous Subtasks: Needle driving, scanning, debridement, and precise instrument trajectory planning. For example, autonomous endomicroscopy scanning employs position-based visual servoing to maintain probe-to-tissue orientation, achieving ≈0.21 mm camera-based visual error and ≈60 μm 2D mosaicing repeatability (Zhang et al., 2016).
  • Multi-scale Imaging and Data Fusion: Real-time mosaicing of endomicroscopic images with 3D stereo reconstructions, enabling cellular-scale overlays on the surgical field.
  • Haptic Guidance and Shared Control: Virtual fixtures implemented as projections onto admissible subspaces, with geometric and force constraints guiding operator input.
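In the simplest case, a virtual fixture of the kind described above projects the operator's velocity command onto the fixture's admissible subspace, suppressing motion components orthogonal to it. A minimal numpy sketch (the fixture geometry is illustrative):

```python
import numpy as np

def constrain_to_fixture(v_cmd, basis):
    """Project an operator velocity command onto the admissible subspace
    spanned by the fixture's basis vectors (columns of `basis`).
    Components orthogonal to the fixture are suppressed."""
    P = basis @ np.linalg.pinv(basis)   # projection matrix onto span(basis)
    return P @ v_cmd

# Guidance fixture constraining motion to a line along x (illustrative).
line = np.array([[1.0], [0.0], [0.0]])
v = np.array([0.02, 0.01, -0.005])      # raw hand-motion command (m/s)
print(constrain_to_fixture(v, line))    # only the x component survives
```

Force constraints follow the same pattern: the rejected component (I − P) v can be rendered back to the operator as a resistive haptic cue.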

The system enables real-time integration of force/torque sensing, advanced vision pipelines, and external sensors (EM trackers, ultrasound, etc.), supporting cutting-edge computer vision (e.g., deep learning for skill recognition, tool segmentation) and reinforcement learning for manipulation.

4. Instrument-Level Innovations: Optical Coherence Elastography

Recent instrument-level modifications demonstrate daVinci-Dev's extensibility for hybrid sensing (Neidhardt et al., 2024). Piezoelectric actuators integrated at the instrument's proximal shaft generate shear-wave excitation suitable for optical coherence elastography (OCE). Coupled with high-speed OCT imaging (1.5 MHz A-scan rate), phase-sensitive detection yields quantitative elasticity estimates directly at the surgical site.

Elasticity is estimated using the relation \(E = 3\rho c^2\), where \(E\) is Young's modulus, \(\rho\) the tissue density, and \(c\) the shear-wave velocity (assuming \(\nu \approx 0.5\)).
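As a worked example of this relation (the tissue parameters are illustrative soft-tissue values, not measurements from the paper):

```python
def youngs_modulus_kpa(density_kg_m3, shear_wave_speed_m_s):
    """E = 3 * rho * c**2, valid for nearly incompressible soft tissue
    (Poisson's ratio ~ 0.5). Returns Young's modulus in kPa."""
    return 3.0 * density_kg_m3 * shear_wave_speed_m_s ** 2 / 1e3

# Illustrative values: rho ~ 1000 kg/m^3, c ~ 2 m/s.
print(youngs_modulus_kpa(1000.0, 2.0))  # 12.0 kPa
```

Because \(E\) scales with \(c^2\), small errors in the estimated shear-wave velocity are amplified, which is why the signal-processing stage discussed next matters.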

Deep learning-based (spatio-temporal DenseNet) signal processing achieves a mean absolute error (MAE) of \(6.29 \pm 4.99\,\text{kPa}\), outperforming conventional FFT-based methods (MAE \(19.27 \pm 23.50\,\text{kPa}\)). Ex vivo tissue discrimination (liver, heart, stomach) is realized via this OCE modality, with all control and high-voltage electronics located externally to preserve full articulation and sterility.

5. Agent-Native Mid-Training for Software Engineering

The term daVinci-Dev also denotes a methodology for LLM development targeting multi-step, feedback-driven agentic workflows in software engineering (Zeng et al., 26 Jan 2026). This framework extends beyond static file-level code generation, supporting full problem navigation, context accumulation, code editing, test execution, and iterative revision—mirroring real-world developer behavior.

Agent-native mid-training (MT) leverages two data sources:

  • Contextually-native trajectories: Reconstructed from merged GitHub PRs, comprising problem statements, pre-edit file states, and patch sequences (68.6B tokens).
  • Environmentally-native trajectories: Rollouts collected in real executable Docker environments, capturing the action-observation loop (tool calls, test outputs, errors) across passing and failing upgrade attempts (3.1B tokens, upsampled ×3).
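A minimal sketch of how such a two-source mixture with upsampling might be assembled; the function name and list-of-trajectories structure are hypothetical, not taken from the paper:

```python
def build_mt_mixture(contextual, environmental, upsample=3):
    """Assemble a mid-training corpus from the two trajectory sources,
    repeating the smaller environment-rollout set `upsample` times
    (mirroring the x3 upsampling described above)."""
    return contextual + environmental * upsample

# Toy corpora: each element stands in for one tokenized trajectory.
mix = build_mt_mixture(["pr_traj_1", "pr_traj_2"], ["env_traj_1"])
print(mix)
```

Upsampling the environment rollouts compensates for their much smaller raw token count relative to the PR-derived trajectories.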

The MT objective is standard causal next-token cross-entropy, \(\mathcal{L}(\theta) = -\sum_{i=1}^{N}\sum_{t=1}^{T_i}\log P_\theta(x_{i,t} \mid x_{i,<t})\), with no explicit RL reward structure. Models are subsequently post-trained (SFT) on issue-fix tasks with an agentic scaffold (SWE-Agent, 128K context, deterministic rollouts).
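The objective can be made concrete with a small numpy sketch of per-sequence next-token cross-entropy (a toy vocabulary, not the actual training code):

```python
import numpy as np

def causal_ce_loss(logits, tokens):
    """Per-sequence next-token cross-entropy:
    -sum_t log P_theta(x_t | x_<t), where logits[t] scores the token
    at position t + 1."""
    log_z = np.log(np.exp(logits).sum(axis=-1))   # log partition per step
    log_p = logits - log_z[:, None]               # log-softmax over vocab
    targets = tokens[1:]                          # shift: predict the next token
    return -log_p[np.arange(len(targets)), targets].sum()

# Sanity check on a toy vocabulary of 4 tokens: uniform logits
# give exactly T * log(V) nats of loss.
loss = causal_ce_loss(np.zeros((3, 4)), np.array([0, 1, 2, 3]))
print(loss)  # 3 * log(4) ~ 4.159
```

The same loss is applied uniformly to both trajectory sources; what differs between mid-training and SFT is the data, not the objective.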

6. Empirical Results and Benchmarking

daVinci-Dev’s agentic MT methodology achieves state-of-the-art resolution rates on SWE-Bench Verified:

| Model | MT Data | SFT Data | SWE-Bench Pass@1 |
|---|---|---|---|
| Qwen2.5-32B | (none) | Python PRs D₂ | 53.0% |
| daVinci-Dev-32B | D₁∪D₂∪E (73.1B) | Python PRs D₂ | 56.1% |
| Qwen2.5-72B | (none) | Python PRs D₂ | 56.6% |
| Kimi-Dev-72B | Agentless MT | Agentless RL+D₂ | ≈56.2% |
| daVinci-Dev-72B | D₁∪D₂∪E (73.1B) | Python PRs D₂ | 58.5% |

Ablation studies show that contextually-native PR data is critical for generalization, while environmentally-native rollouts provide additional improvements in synergy with SFT. The combination results in highly token-efficient agentic capabilities, outperforming prior open recipes, including those using expensive RL post-training, despite consuming less than half the MT tokens.

Generalization to code reasoning and scientific benchmarks (HumanEval, GPQA, SciBench) is robust: MT-mixed models outperform non-agentic baselines by significant margins, with HumanEval resolution rates increasing 12–23% over base models at the respective parameter scales.

7. Future Directions and Significance

daVinci-Dev’s dual identity—as an open surgical robotics stack and as a paradigm for LLM agentic training—defines a broad technical and scientific agenda:

  • Surgical Robotics: Expansion to high-fidelity simulation environments, modular plug-and-play sensor/actuator hardware, haptic feedback maps, and regulatory/clinical pathways for advanced autonomy.
  • LLM Agentics: Scaling agent-native MT pipelines to more languages and domains (leveraging 3×10⁸+ available PRs), fully automated trajectory collection, and evaluation on agentic reasoning benchmarks (AgencyBench, InnovatorBench). Adaptive MT curricula and hard-pattern up-sampling are flagged for further study.

In both contexts, daVinci-Dev embodies an open-access, high-fidelity, extensible foundation for translational research—supporting safe, high-performance, multi-modal, and interactive automation in both the operating room and the software engineering domain (Zeng et al., 26 Jan 2026, Neidhardt et al., 2024, D'Ettorre et al., 2021, Cui et al., 2022, Zhang et al., 2016).
