April 17, 2026
Free online URDF validator: no ROS install, instant 9-check validation (also supports xacro)

Hi ROS community,

I built a free online URDF validator because I kept running into the same
problem: testing URDF files required a full ROS install just to catch basic
structural errors.

Live tool (no signup): RoboInfra Dashboard

What it checks (9 structural checks):

  • Root element must be <robot>
  • At least one <link> exists
  • No duplicate link/joint names
  • All joint parent/child refs valid
  • Valid joint types (revolute, continuous, prismatic, fixed, floating, planar)
  • revolute/prismatic joints include <limit>
  • Exactly one root link (no cycles, no orphans)

Also supports .xacro files (server-side preprocessing via the official
xacro Python package; no ROS install needed on your side).
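
For illustration, a few of these structural checks can be written with nothing but the Python standard library. This is a sketch of the idea, not the validator's actual implementation:

```python
# Sketch of some URDF structural checks (illustrative, not the tool's code).
import xml.etree.ElementTree as ET

def check_urdf(urdf_text):
    """Return a list of human-readable issues (empty list = all checks pass)."""
    issues = []
    root = ET.fromstring(urdf_text)
    if root.tag != "robot":
        issues.append("root element must be <robot>")
    links = [l.get("name") for l in root.findall("link")]
    if not links:
        issues.append("at least one <link> is required")
    if len(links) != len(set(links)):
        issues.append("duplicate link names")
    children = set()
    for j in root.findall("joint"):
        for tag in ("parent", "child"):
            el = j.find(tag)
            name = el.get("link") if el is not None else None
            if name not in links:
                issues.append(f"joint {j.get('name')!r}: invalid {tag} ref")
            elif tag == "child":
                children.add(name)
    # Exactly one link must never appear as a child: the kinematic root.
    roots = [l for l in links if l not in children]
    if len(roots) != 1:
        issues.append("expected exactly one root link")
    return issues
```

A well-formed two-link model returns an empty list; duplicate names or dangling joint references show up as issue strings.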

Why I’m sharing:
I built this as a solo developer and want feedback from actual ROS users.
Is the validation useful? What other checks would help? Does xacro support
cover your real-world files?

Other things available (optional, paid plans):

  • Python SDK: pip install roboinfra-sdk
  • GitHub Action for PR validation: uses: roboinfra/validate-urdf-action@v1
  • Kinematic analysis (DOF, end effectors, chain depth)
  • 3D model conversion (STL/OBJ/FBX/GLB/DAE)

Free tier: 50 validations/month, available on signup.
Happy to answer any questions.

1 post - 1 participant

Read full topic

by Robotic on April 17, 2026 07:08 PM

Tesseract & ROS-I Developer Monthly Meeting Revisit

ROS-I Developer Meeting

This was the second-quarter ROS-I Developer Meeting Americas, led by Matt Robinson, focusing on recent GitHub repository updates and documentation improvements. Matt presented new documentation pages for the Scan and Plan Workshop and Noether repositories, showcasing enhanced architecture diagrams and status information.

Michael discussed updates to Python bindings for Tesseract using NanoBind, noting improvements over the previous Swig implementation and plans for code reorganization. The team also discussed upcoming events including a July training session and Automate 2026 exhibition in Chicago, where they will host an open source meetup and ROS Industrial Consortium gathering.

Matt shared updates on OSRA's technical strategy development and concerns about ROS 2 release processes affecting industrial users, particularly regarding RMW and version compatibility issues. The conversation ended with Michael announcing plans to update all repositories to support Ubuntu 20.04 LTS, including necessary changes for the Qt5 to Qt6 transition.

Tesseract Monthly Check-In

The meeting focused on discussing OMPL 2.0's new VAMP (Vector-Accelerated Motion Planning) integration and Tesseract's 1.0 release updates.

The team explored how VAMP's SIMD acceleration and parallel collision checking capabilities could be integrated into Tesseract, with Levi and Michael explaining that VAMP uses fine-grained parallelism to process thousands of states simultaneously rather than checking single states sequentially.

Roelof provided an update on the Coal continuous collision checking implementation, reporting significant performance improvements of 20-30% and noting that the implementation now matches Bullet's approach using convex hulls.

The team also discussed ongoing work on replacing string-based data structures with hash-based ones to improve performance, and Levi mentioned plans to implement schema validation tools for easier YAML file management in Tesseract.

Information on ROS-I Developer Meetings may be found here: https://rosindustrial.org/developers-meeting

Info on the Tesseract Monthly Check-In may be found here: https://rosindustrial.org/tesseract-robotics

by Matthew Robinson on April 17, 2026 06:58 PM

How to use fastdds_monitor on ROS2 Humble

I tried following some online tutorials (3. Example of usage - 4.0.0, https://www.youtube.com/watch?v=OYibnUnMIlc, …) but cannot get any statistics. I can see my topics, just not statistics. I made sure to set FASTDDS_STATISTICS and even tried rebuilding my workspace with --cmake-args -DFASTDDS_STATISTICS=ON, but I’m quite sure that did nothing. I then ran the AppImage (~/Apps/eProsima_Fast-DDS-Monitor-v3.2.0-Linux.AppImage), but no luck.
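
One thing worth checking: the statistics DataWriters must be enabled in the environment of the application being monitored, not only in the monitor. A sketch of launching a node with them enabled (the topic-kind names follow the Fast DDS documentation and may vary by version):

```python
# Sketch: enable Fast DDS statistics DataWriters for a monitored ROS 2
# process. FASTDDS_STATISTICS must be set in the *monitored* app's
# environment; the kind names below come from the Fast DDS docs and
# may differ across versions.
import os
import subprocess

STATISTICS_KINDS = [
    "HISTORY_LATENCY_TOPIC",
    "NETWORK_LATENCY_TOPIC",
    "PUBLICATION_THROUGHPUT_TOPIC",
    "SUBSCRIPTION_THROUGHPUT_TOPIC",
]

def statistics_env(kinds):
    """Return a copy of the current environment with FASTDDS_STATISTICS set."""
    env = dict(os.environ)
    env["FASTDDS_STATISTICS"] = ";".join(kinds)
    return env

# Example launch (uncomment to run a talker with statistics enabled):
# subprocess.run(["ros2", "run", "demo_nodes_cpp", "talker"],
#                env=statistics_env(STATISTICS_KINDS))
```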

1 post - 1 participant

Read full topic

by PeterMitrano on April 17, 2026 01:20 PM

April 16, 2026
Baxter robot's RSDK GUI not booting

Hi everyone,

I’m currently working with a Baxter robot system and ran into an issue after recovering access to the internal PC. I’d really appreciate any guidance from those who have dealt with similar setups.


Background

  • Platform: Baxter robot

  • Internal PC: Dell OptiPlex 7010

  • OS: Baxter RSDK system (Ubuntu-based), but also has a Gentoo layer

  • ROS: Indigo

The robot had been unused for several years. Initially:

  • BIOS was locked (password protected)

  • Could not access GRUB or boot from USB

  • SSH password was unknown

I managed to:

  • Reset BIOS password (via PSWD jumper)

  • Boot from a Live USB

  • Reset the ruser password via chroot

  • Successfully SSH into the robot


Current Status

  • SSH into Baxter works (ruser login OK)

  • Network connection is working (can ping and communicate)

  • System boots into a Gentoo console login

  • I can log into the Gentoo environment

  • I cannot access or see the RSDK (Ubuntu-based) GUI environment

  • ROS tools are accessible after sourcing environment (in some contexts)


Problem

The RSDK GUI does not start automatically on boot.

Instead of the normal Baxter interface, the system:

  • Boots into a Gentoo console

  • Requires manual login

  • Does not launch the Baxter runtime or GUI

  • Does not appear to transition into the RSDK (Ubuntu) environment


What I’ve tried

  • Logged in via SSH and locally

  • Verified system access through Gentoo console

  • Sourced ROS:

    source /opt/ros/indigo/setup.bash
    
  • Tried enabling the robot manually:

    rosrun baxter_tools enable_robot.py -e
    
  • Attempted:

    rostopic list
    

However:

  • It seems the Baxter runtime is not being launched

  • The system may not be switching from Gentoo → RSDK layer

  • Startup scripts/services may be broken or missing


Questions

  1. What is responsible for transitioning from the Gentoo layer into the RSDK (Ubuntu) environment?

  2. What service or script launches the Baxter GUI on boot?

  3. Is there a manual way to trigger the RSDK environment from the Gentoo console?

  4. Could this be a broken startup script, service, boot configuration, or a corrupted drive?

  5. Is there a known way to restore the original Baxter startup behaviour without reinstalling the system?

  6. If there is no way to restore it, is there an image of the system available? I tried checking in with Cothinks Robotics (the company that took over the license and manufacturing from Rethink Robotics), but got no response.


Additional Notes

  • I would prefer not to wipe the system, since the original Baxter image is difficult to obtain

  • Hardware appears to be functioning correctly

  • This seems like a boot/runtime configuration issue rather than a hardware failure


Goal

Restore normal behaviour where:

  • Baxter boots into the RSDK GUI

  • The robot runtime starts automatically

  • No manual login or intervention is required


Any help or pointers (especially from others maintaining older Baxter systems) would be greatly appreciated.

Thanks in advance.

1 post - 1 participant

Read full topic

by MinhBao19 on April 16, 2026 06:36 PM

Writing ROS2 nodes using modern python tooling with ros-z

Hi,

I recently found ZettaScaleLabs/ros-z, a work-in-progress Rust reimplementation of ROS2 by some of the people behind Zenoh. This project is still young and does not seem to have been discussed here yet; however, they have already developed a very interesting feature: ros-z provides Python bindings with no dependency on ROS.

Concretely, this means it is possible to create ROS2 nodes from a pyproject.toml-based python project. AFAIK, this is not possible with the standard ROS tooling.

I think many people (including myself) avoid using ROS in Python projects (and Python in ROS projects) because modern Python tooling is not supported. Could ros-z be Python’s big comeback in ROS? What do you think?

1 post - 1 participant

Read full topic

by vrichard on April 16, 2026 08:12 AM

April 15, 2026
Real-Time Face Tracking in ROS 2 & OpenCV

Hi everyone,

I recently developed a zero-latency face tracking node using ROS 2 and OpenCV, designed as a foundation for responsive human-machine interaction, and was encouraged to share it with the community here!

The Challenge: Middleware Overhead

During development, I encountered severe frame-rate drops (sub-1 FPS). This was due to the heavy network serialization overhead of translating image matrices across standard ROS middleware.

The Solution: Edge Processing & Optimization

To solve this, I completely re-architected the pipeline:

Bypassing Drivers: By bypassing the standard camera drivers and processing the hardware stream directly at the edge, I eliminated the latency loop entirely.

Algorithm Optimization: The optimized system utilizes Haar cascades paired with dynamic contrast adjustment (CLAHE).

Result: Smooth, real-time bounding box tracking executed entirely on local hardware.
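
The detection stage described above can be sketched with standard OpenCV calls (createCLAHE, CascadeClassifier). This is an illustration, not the repository's code; cv2 is imported lazily so the pure geometry helper stays testable without OpenCV installed:

```python
# Sketch of a CLAHE + Haar-cascade face detection stage (illustrative).

def largest_face(boxes):
    """Pick the largest detection (x, y, w, h) by area, or None if empty."""
    return max(boxes, key=lambda b: b[2] * b[3], default=None)

def detect_face(frame_bgr):
    import cv2  # lazy import: only needed when actually detecting
    gray = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2GRAY)
    # Dynamic contrast adjustment (CLAHE) before detection.
    clahe = cv2.createCLAHE(clipLimit=2.0, tileGridSize=(8, 8))
    gray = clahe.apply(gray)
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    boxes = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
    return largest_face([tuple(b) for b in boxes])
```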

GitHub repository link: GitHub - abinaabey2006/ros2-opencv-face-tracker: A zero-latency, real-time face tracking node for ROS 2 using OpenCV and Haar Cascade

LinkedIn post link: #ros2 #computervision #opencv #roboticsengineering #python | Abina Abey

1 post - 1 participant

Read full topic

by abinaabey2006 on April 15, 2026 11:19 PM

Session Postponed to 2026-04-20 | Cloud Robotics Working Group

We planned a session for the 13th - more information here:

The session will instead run on Mon, Apr 20, 2026, 4:00 PM - 5:00 PM UTC. The meeting link is here, and you can sign up to our calendar or our Google Group for meeting notifications, or keep an eye on the Cloud Robotics Hub.

1 post - 1 participant

Read full topic

by mikelikesrobots on April 15, 2026 03:06 PM

April 14, 2026
ROSCon Global 2026 Talk Proposals Due April 26th

Quick reminder, presentation proposals for ROSCon Global 2026 in Toronto are due by Sun, Apr 26, 2026 12:00 AM UTC. Please submit your proposals via HotCRP.

Additional details are available on the ROSCon Global 2026 website.

2 posts - 1 participant

Read full topic

by Katherine_Scott on April 14, 2026 04:38 PM

ROS Jazzy driver for Lego Mindstorms Robot Inventor

Hello everyone,

I have ported a ROS driver for the Lego Mindstorms Robot Inventor to ROS Jazzy. Here is the link: GitHub - pem120/lego_ri_ros: ROS packages for Lego Mindstorms Robot Inventor

1 post - 1 participant

Read full topic

by pem120 on April 14, 2026 03:58 AM

April 13, 2026
What's new in Transitive 2.0: ClickHouse DB storage, Grafana visualizations, Alerting

Transitive 2.0 is here!

We are thrilled to announce a new major version of Transitive, the open-source framework for full-stack robotics. Version 2.0 adds significant new integrations and features: storage of historic and time-series data in ClickHouse, visualization in Grafana, and custom alerting via Alertmanager. Some of our capabilities, like the free Health Monitoring capability, already use these features, providing significant added value to robotics companies with growing fleets.

Fleet Operation at Scale

Until now Transitive has been very much focused on transactional features needed for the operation of robot fleets. This includes our most popular capabilities: WebRTC Video streaming, Remote Teleop, and ROS Tool. These capabilities are particularly empowering to robotics companies that have not yet deployed more than 50 robots. Transitive’s open-source MQTTSync data protocol, its realization of full-stack packages, and the built-in fine-grained authentication and authorization features provided a solid foundation for us to build such transactional capabilities efficiently and reliably.

But as fleets grow, so do the challenges of monitoring and operating them. Companies need tools that go beyond one operator working on one robot at a time and provide both longitudinal and historical views of the fleet. Similarly, passive monitoring and alerting need to gradually replace active monitoring by (remote) operators. Supporting robotics companies in this second chapter of growth was our goal in this new major release, while staying true to our philosophy of embeddability, ease of use, and fine-grained, namespaced access control.

Read more about the added features and how to try them out here:

1 post - 1 participant

Read full topic

by chfritz on April 13, 2026 07:42 PM

RobotCAD 10.5.0 adapted to FreeCAD 1.1 AppImage

Let me introduce the RobotCAD adaptation to the FreeCAD 1.1 AppImage.
Enjoy the new FreeCAD 1.1 functionality.

RobotCAD is a FreeCAD workbench to generate robot description packages for ROS2 (URDF) with launchers for Gazebo and RViz. It includes controllers based on ros2_controllers, sensors based on Gazebo, an integrated model library, and a lot of other tools. In other words: CAD → ROS2.

How to run RobotCAD - fast install and run with FreeCAD 1.1 AppImage

I have not posted release info for a long time; there are a lot of bug fixes and new functionality in the previous versions.

1 post - 1 participant

Read full topic

by fenixionsoul on April 13, 2026 07:40 PM

Fast Lossless Image Compression: interested?

Hi,

so, I couldn’t shut up about this on my social media, so some of you might already be sick and tired of me, but I am sharing it here hoping to understand whether the robotics community may benefit from this work.

By pure chance, I started exploring the topic of Lossless Image Compression, in particular in terms of speed, thinking about real-time streaming and recording.

I got very interesting results that I think may benefit some use cases in robotics.

Before moving forward releasing the code or more details about the algorithm (that is very much still work in progress) I wanted to:

  • share the binaries with the community, to allow people with a healthy dose of skepticism to replicate the results on their own computers.

  • understand what the actual use cases are for fast, but still better-than-PNG, lossless compression.

These are my results: 3 codecs with 3 different tradeoffs (Griffin being the most balanced one across the 3 dimensions).

I would love to hear the feedback of the community :grin:

LINK: GitHub - AurynRobotics/dvid3-codec

Also, if you think you have a practical application for this, please DM me to discuss, either here or by contacting me at dfaconti@aurynrobotics.com

Davide

4 posts - 2 participants

Read full topic

by facontidavide on April 13, 2026 10:32 AM

🚀 New "ROS Adopters" page is live - ADD YOUR PROJECT

Hi everyone :waving_hand:

We are excited to announce a new ROS Adopters page on the official ROS documentation site! This is a community-maintained, self-reported directory that showcases organizations and projects using ROS in any capacity - whether it’s a commercial product, a research platform, an educational tool, or anything in between.

:link: Browse the current adopters here: ROS 2 Adopters — ROS 2 Documentation: Rolling documentation

The page supports filtering by domain (e.g., Aerial/Drone, Manufacturing, Research, Consumer Robot, etc.) and by country, and includes a search function to help you find projects that interest you.

:thinking: Why add your project?

  • :globe_showing_europe_africa: Visibility - Let the world know your project runs on ROS.
  • :light_bulb: Inspire others - Seeing real-world deployments motivates new adopters and contributors.
  • :flexed_biceps: Strengthen the ecosystem - A healthy adopter list demonstrates the breadth and maturity of ROS to potential users, sponsors, and decision-makers.

:memo: How to add your project

We’ve made it as easy as possible. There’s an interactive form right on the documentation site:

:link: Add Your Project — ROS 2 Documentation: Rolling documentation

That’s it :white_check_mark: No special tooling required - you can do it entirely from your browser.

:robot: What counts as an “adopter”?

Anything that uses ROS :rocket: Commercial products, open-source projects, university research labs, hobby builds - if ROS is part of your stack, we’d love to see it listed. The directory is self-reported and accepted with minimal scrutiny, so don’t be shy :blush:

:open_book: Background / History

This feature was proposed in ros2/ros2_documentation#6248 and implemented in PR #6309.

Please consider adding your project, share this post with your colleagues, and let’s build a comprehensive picture of what the ROS ecosystem looks like in 2026! :tada:

Looking forward to seeing your PRs! :folded_hands:
Ping fujitatomoya@github once your PR is up! I am happy to review PRs!

Cheers,
Tomoya

4 posts - 4 participants

Read full topic

by tomoyafujita on April 13, 2026 01:09 AM

April 09, 2026
International Conference on Humanoid Robotics, Innovation & Leadership

======================================================================

                       **CALL FOR PAPERS**
                         **HRFEST 2026**

International Conference on Humanoid Robotics, Innovation & Leadership

Date: November 05 - 07, 2026
Location: Universidad Nacional del Callao (UNAC) - Callao, Peru (Hybrid Event)
Website: https://hrfest.org

CONFERENCE HIGHLIGHTS & WHY SUBMIT

* High-Impact Indexing: All accepted and presented papers will be
submitted to the IEEE Xplore digital library, which is typically
indexed by Scopus and Ei Compendex.
* Hybrid Format: Offering both in-person and virtual presentation
options to accommodate global researchers and industry professionals.
* Global Networking: Hosted alongside the IEEE RAS Regional
Manufacturing Workshop, connecting LATAM researchers with global
industry leaders.

ABOUT THE CONFERENCE

The HRFEST 2026: International Conference on Humanoid Robotics, Innovation
& Leadership is the premier Latin American forum that bridges the gap
between advanced robotics research and industrial leadership. Hosted by
the Universidad Nacional del Callao (UNAC) as the official academic and
not-for-profit sponsor, with NFM Robotics acting as an industrial patron
and logistical facilitator, this conference gathers top researchers,
industry leaders, and innovators.

HRFEST 2026 is technically co-sponsored by IEEE. Accepted and presented
papers will be submitted for inclusion into the IEEE Xplore digital
library, subject to meeting IEEE Xplore’s scope and quality requirements.

TECHNICAL TRACKS & TOPICS OF INTEREST

We invite researchers, academics, and professionals to submit original,
unpublished technical papers. Topics of interest include, but are not
limited to:

* Track 1: Robotics & Adv. Manufacturing

  • Humanoid Robotics, Bipedalism & Legged Locomotion
  • Control Systems, Kinematics & Dynamics
  • Mechatronics, Soft Robotics & Smart Materials
  • Industrial Automation, Cobots & Swarm Robotics

* Track 2: AI & Data Science

  • Machine Learning & Deep Learning
  • Generative AI & LLMs
  • Computer Vision, Pattern Recognition & NLP
  • Ethical AI & Explainable AI (XAI)

* Track 3: Engineering Management

  • Tech, Innovation & R&D Management
  • Industry 4.0 & Digital Transformation
  • Agile Project Management
  • Tech Entrepreneurship & Startups

* Track 4: Applied Technologies

  • Internet of Things (IoT) & Smart Cities
  • Biomedical Eng. & Healthcare Systems
  • Financial Engineering & FinTech
  • Renewable Energy Systems

SUBMISSION GUIDELINES

* Review Process: HRFEST 2026 enforces a strict Double-Blind Peer Review.
* Submission Portal: All manuscripts must be submitted electronically
via EasyChair at: https://easychair.org/conferences/?conf=hrfest2026
* Format & Length: All manuscripts must follow the standard double-column
IEEE Conference template and should not exceed six (6) pages in PDF format.
* Originality: Submissions must be original work not currently under
review by any other conference or journal.
* Camera-Ready Submissions: Final versions of accepted papers must be
validated using IEEE PDF eXpress (Conference ID: 71784X). The PDF
eXpress validation site will open on September 15, 2026.

IMPORTANT DEADLINES

* Full Paper Submission Deadline: July 05, 2026
* Notification of Acceptance: September 15, 2026
* Final Camera-Ready Submission: October 15, 2026

For more information regarding submissions, registration, and the
IEEE RAS Regional Manufacturing Workshop, please visit our official
website: https://hrfest.org

We look forward to seeing you in Callao!

1 post - 1 participant

Read full topic

by RoboticsLab on April 09, 2026 11:13 PM

[Virtual Event] The Messy Reality of Field Autonomy: ROS 2 Architectures, Behavior Trees & Sim-to-Real

Hi everyone,

If you have ever lost a week of field data because of a typo in a custom ROS message, or watched a perfectly tuned simulation model immediately fail on physical hardware, this session is for you.

On May 1st, the Canadian Physical AI Institute (CPAI) is hosting a highly technical, virtual deep-dive into the architectural evolution of robotic autonomy and the gritty realities of physical deployment.

We are moving past the theoretical benchmarks to talk about what actually breaks in the wild and how to architect your software to handle it.

Here is what we are covering:

Part 1: Driving into the (Un)Known: Navigation for Field Robots

Alec Krawciw (PhD candidate, UofT Autonomous Space Robotics Lab & Vanier Scholar) will cover the logistical and systemic realities of field deployment, including:

  • Pre-Field Data Strategies: Why post-processing tools must be built before testing, and how simple data-logging errors (like ROS message naming typos) can ruin a deployment.

  • System Failure is Inevitable: The critical difference between fault prevention and fault recovery, and why strict deterministic approaches shatter off-road.

  • Maximizing Field Time: Practical workflows to reduce on-site engineering workload.

Part 2: Beyond Hard-Coded Control: Embodied AI & ROS 2 Architecture

Behnam Moradi (Senior Software Engineer in Robotic Autonomy) will break down the shift from classical state machines to modern autonomy stacks:

  • From Loops to Graphs: Making the architectural leap from linear execution loops to the distributed graph of nodes required in ROS 2 (“What data is available now?”).

  • Behavior Trees & Goal-Seeking: Moving beyond massive if-else chains to priority-driven agents that respect constraints and dynamically replan.

  • The True Role of Simulation: Why tools like PX4 and AirSim aren’t for testing if your software works, but for validating if your simulation was accurate in the first place.

Event Details

  • Date: Friday, May 1

  • Time: 6:00 PM - 8:00 PM EDT

  • Location: Google Meet

  • Host: Diana Gomez Galeano (former Director, McGill Robotics)

Whether you are migrating a stack to ROS 2, building out your first Behavior Trees, or gearing up for summer field trials, we would love to have you join the conversation. We will have dedicated time for Q&A to help troubleshoot your specific architecture roadblocks.

Registration & Tickets: We have 10 complimentary tickets for ROS community to join us

Looking forward to seeing some of you there!

Cheers,

Saeed Sarfarazi
Canadian Physical AI Institute (CPAI)

1 post - 1 participant

Read full topic

by Saeed on April 09, 2026 12:02 AM

April 08, 2026
FusionCore demo: GPS outlier rejection in a ROS 2 filter built to replace robot_localization

Quick demo of outlier rejection working in simulation.

I built a spike injector that publishes a fake GPS fix 500 meters from the robot’s actual position into a live running FusionCore filter. The Mahalanobis distance hit 60,505 against a rejection threshold of 16. All three spikes dropped instantly. Position didn’t move.
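
For context, this is standard chi-square gating on the innovation: compute d² = rᵀS⁻¹r and drop the measurement when d² exceeds the threshold (16 is close to the 99.9% chi-square bound for 3 degrees of freedom). A generic NumPy sketch of the idea, not FusionCore's code:

```python
# Generic Mahalanobis gate for measurement outlier rejection (sketch).
import numpy as np

def mahalanobis_sq(innovation, S):
    """Squared Mahalanobis distance r^T S^-1 r of the innovation r
    given innovation covariance S."""
    r = np.asarray(innovation, dtype=float)
    return float(r @ np.linalg.solve(np.asarray(S, dtype=float), r))

def gate(innovation, S, threshold=16.0):
    """Accept the measurement only if d^2 is within the chi-square gate."""
    return mahalanobis_sq(innovation, S) <= threshold
```

A 500 m spike against a metre-scale covariance yields a huge d² and is rejected instantly, while nominal fixes pass untouched.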

The video is 30 seconds: robot driving in Gazebo, FusionCore GCS dashboard showing the Mahalanobis waveform, rejection log, and spike counter updating in real time.

GitHub

For anyone who missed the original announcement: FusionCore is a ROS 2 Jazzy sensor fusion package replacing deprecated robot_localization. IMU, wheel encoders, and GPS fused via UKF at 100Hz. Apache 2.0.

GitHub: https://github.com/manankharwar/fusioncore

1 post - 1 participant

Read full topic

by manankharwar on April 08, 2026 04:23 PM

Delaying Lyrical RMW and Feature Freezes

Hi all,

In today’s ROS PMC meeting we decided to delay the RMW freeze and feature freeze by one week each. The purpose of the delay is to give more time to upgrade and stabilize all Tier 1 RMW implementations. The ROS Lyrical release date has not changed.

The new timelines are:

  • New RMW freeze: Tue, Apr 14, 2026 6:59 AM UTC
  • New feature freeze: Tue, Apr 21, 2026 6:59 AM UTC
  • New branch from Rolling: Wed, Apr 22, 2026 6:59 AM UTC

Updates here: Delay Lyrical RMW Freeze; Feature Freeze; Branch by sloretz · Pull Request #6350 · ros2/ros2_documentation · GitHub

1 post - 1 participant

Read full topic

by sloretz on April 08, 2026 12:37 AM

April 06, 2026
Multi-Robot Fleet Management System using ROS2, Nav2, and Gazebo

I am developing a multi-robot fleet management system in a simulated warehouse environment using ROS2 (Humble) and Gazebo. The system is designed to study scalable coordination and task allocation across multiple autonomous mobile robots operating in a structured environment.

The architecture follows a distributed approach where each robot is implemented as an independent agent node responsible for navigation, execution, and state reporting. A centralized fleet manager node handles global task allocation and coordination. Communication is implemented using ROS2 topics, services, and action interfaces to enable asynchronous and real-time interaction between components.

Navigation is implemented using the Nav2 stack, integrating localization, global and local path planning, and obstacle avoidance. LiDAR-based perception is used for environmental awareness and safe navigation within the simulated warehouse.

The system supports dynamic task allocation, where robots receive pick-and-deliver tasks, compute feasible paths, and execute them while continuously publishing execution status. A typical workflow involves a robot navigating to a shelf location, performing a simulated pickup, and delivering the item to a designated drop-off point.

This project focuses on understanding distributed robotic system design, inter-node communication, and multi-robot coordination challenges such as scalability and synchronization. Future work includes implementing conflict resolution strategies, fleet-level optimization, and extending the system toward real-world deployment.
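
As a toy illustration of the centralized allocation step (the names and the nearest-robot heuristic are mine, not the project's algorithm):

```python
# Toy greedy allocator illustrating a centralized fleet-manager role:
# assign each task to the nearest idle robot (Manhattan distance on
# grid coordinates; names are hypothetical).
def allocate(robots, tasks):
    """robots: {name: (x, y)}, tasks: [(x, y), ...] -> {name: task}."""
    idle = dict(robots)
    assignment = {}
    for task in tasks:
        if not idle:
            break  # more tasks than idle robots: leave the rest queued
        name = min(idle, key=lambda n: abs(idle[n][0] - task[0])
                                        + abs(idle[n][1] - task[1]))
        assignment[name] = task
        del idle[name]
    return assignment
```

A real fleet manager would fold in battery state, task deadlines, and conflict resolution, but the interface shape is similar.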

4 posts - 4 participants

Read full topic

by Arjun_R on April 06, 2026 11:54 PM

Ros2_medkit + VDA 5050: bridging SOVD diagnostics with fleet management

Hey everyone,

Quick update on ros2_medkit. We’ve been exploring how medkit’s diagnostic data can serve VDA 5050 fleet integrations, and put together a working demo.

Context: VDA 5050 error reporting is intentionally minimal (errorType, errorLevel, errorDescription). That’s fine for fleet routing decisions, but when an engineer needs to debug a fault, there’s a gap. We wanted to see if medkit’s SOVD layer could fill it without breaking either standard.

What we did:

The new SOVD Service Interface plugin exposes medkit’s entity tree, faults, and capabilities via ROS 2 services (ListEntities, GetEntityFaults, GetCapabilities). This means any ROS 2 node can query diagnostic data (not just SOVD/REST clients).

We built a VDA 5050 agent as a separate process that:

  • Handles MQTT communication with a fleet manager (orders, state, instant actions)
  • Drives Nav2 for navigation
  • Queries medkit’s services to report faults as VDA 5050 errors

medkit stays completely unaware of VDA 5050. The agent is just another ROS 2 service consumer (same interface a BT.CPP node or PlotJuggler plugin would use).
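
The agent's fault-to-error translation could look roughly like the sketch below; the errorType/errorLevel/errorDescription fields come from VDA 5050, while the input severity names are assumptions, not medkit's actual schema:

```python
# Hypothetical mapping from a diagnostic fault record to the minimal
# VDA 5050 error object. Field names errorType/errorLevel/errorDescription
# follow the VDA 5050 standard; the input dict keys are illustrative.
def to_vda5050_error(fault):
    severe = fault.get("severity") in ("ERROR", "CRITICAL")
    return {
        "errorType": fault.get("code", "unknown"),
        "errorLevel": "FATAL" if severe else "WARNING",
        "errorDescription": fault.get("message", ""),
    }
```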


Demo video:

  • ROSMASTER M3 Pro (Jetson Orin Nano),
  • mission dispatched from VDA 5050 Visualizer,
  • LiDAR fault injected mid-navigation,
  • fault propagated to fleet manager + full SOVD snapshot (freeze frames, extended data records, rosbag) in medkit’s web UI.

The service interface plugin is useful beyond VDA 5050 - anything that consumes ROS 2 services can now pull diagnostic data from medkit. Curious if anyone sees other use cases.

repo: GitHub - selfpatch/ros2_medkit: ros2_medkit - diagnostics gateway for ROS 2 robots. Faults, live data, operations, scripts, locking, triggers, and OTA updates via REST API. No SSH, no custom tooling.

1 post - 1 participant

Read full topic

by Michal_Faferek on April 06, 2026 02:42 PM

How to find code “someone already wrote that”? (WaypointFollow Metrics, “Rotate Normal To Wall”)

I came to ROS many years ago thinking “someone has probably already coded every basic robotics challenge”. Indeed, I found lots to use, but still find myself writing basic nodes because I don’t know how to search the “ROS mine” for a particular basic node I need.

For example: I’m trying to improve the robustness, and reliability of navigation of my TurtleBot4 robot in my home environment. Nav2 has a million parameters, and I have managed to get a param set for 10 waypoints around my home that succeed most tests. Two desirable waypoints cause a lot of recoveries and occasional goal failures.

I need a test node that collects recovery metrics and goal success/failure/skipped status during a 10-stop waypoint-following run, to compare robustness and reliability across parameter changes and waypoint tweaks. Other metrics like navigation time, distance travelled, and delta x, y, heading between goal and result would be nice to have.

Surely someone has written a Nav2 test node I can use to optimize my Nav2 parameter set?
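
The aggregation core of such a node is small enough to sketch: a plain Python class you could feed from Nav2 action results inside an rclpy node (class and field names are illustrative, not an existing package):

```python
# Sketch of a waypoint-run metrics aggregator (illustrative, not an
# existing Nav2 package). Feed it from action results and recovery
# log events inside an rclpy node, then dump summary() per param set.
class WaypointRunMetrics:
    def __init__(self):
        self.results = []      # "succeeded" / "failed" / "skipped"
        self.recoveries = 0
        self.nav_time_s = 0.0

    def record_goal(self, status, nav_time_s=0.0, recoveries=0):
        self.results.append(status)
        self.nav_time_s += nav_time_s
        self.recoveries += recoveries

    def summary(self):
        n = len(self.results) or 1  # avoid division by zero
        return {
            "goals": len(self.results),
            "success_rate": self.results.count("succeeded") / n,
            "recoveries": self.recoveries,
            "total_nav_time_s": round(self.nav_time_s, 2),
        }
```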

P.s. “Rotate normal to closest wall by /scan” is another basic challenge I would guess was written years ago.

1 post - 1 participant

Read full topic

by RobotDreams on April 06, 2026 01:54 PM

April 05, 2026
Introduction: QERRA-v2 — Hybrid Quantum-Ethical Safety Layer for Humanoid Robots
Hello everyone,

My name is Marussa Metocharaki (@marunigno).
 I’m the solo founder of **QERRA-v2** — a hybrid quantum-classical ethical decision engine for safer humanoid robots and high-stakes AI systems.

The project combines quantum-inspired exploration (I successfully ran a real 8-qubit W-state on IBM quantum hardware) with classical ethical vectors (SEMEV-12), toxicity detection, and a safety kernel. I already have a live public API with a working /analyze endpoint.

Right now the project is still in an early experimental stage — the classical safety layer works well, while the hybrid quantum part is a small prototype that I am actively improving.

I’m building this completely alone under significant personal constraints, and I would love to connect with people in the robotics community who care about ethical and safety layers for humanoid robots.

I just published the full Whitepaper and the code is open-source (AGPL-3.0).

Would be very grateful for any feedback, ideas, or potential collaboration.

GitHub: https://github.com/marunigno-ship-it/QERRA-v2
Whitepaper: https://github.com/marunigno-ship-it/QERRA-v2/blob/main/WHITEPAPER.md

Thank you and looking forward to learning from this community!

1 post - 1 participant

Read full topic

by marunigno-ship-it on April 05, 2026 11:24 PM

April 03, 2026
Rapid deployment of an OpenClaw and GraspGen grasping system

OpenClawPi: AgileX Robotics Skill Set Library


OpenClawPi is a modular skill set repository focused on the rapid integration and reuse of core robot functions. Covering key scenarios such as robotic arm control, grasping, visual perception, and voice interaction, it provides out-of-the-box skill components for secondary robot development and application deployment.

From Zero to AI Robot Grasping: OpenClaw + GraspGen Full Setup Guide (Step-by-Step)

I. Quick Start

OpenClaw Deployment

Visit the OpenClaw official website: https://openclaw.ai/

Execute the one-click installation command:

curl -fsSL https://openclaw.ai/install.sh | bash

Next, configure OpenClaw:

  1. Select ‘YES’

  2. Select ‘QuickStart’

  3. Select ‘Update values’


  4. Select your provider (recommended: free options like Qwen, OpenRouter, or Ollama)

  5. Select the model you wish to use from that provider.

  6. Select a default model.

  7. Select the APP you will connect to OpenClaw.

  8. Select a web search provider.

  9. Select skills (not required for now).

  10. Check all Hook options.

  11. Select ‘restart’.

  12. Select ‘Web UI’.

1. Clone the Project

git clone https://github.com/vanstrong12138/OpenClawPi.git

2. Prompt the Agent to Learn Skills

Using the vision skill as an example:

User: Please learn vl_vision_skill

Skill Modules Overview

  • agx-arm-codegen: Robotic arm code generation tool; automatically generates trajectory planning and joint control code. Supports custom path templates. (Core dependency: pyAgxArm)
  • grab_skill: Robot grasping skill, including gripper control, target pose calibration, and grasping strategies (single-point/adaptive). (Core dependency: pyAgxArm)
  • vl_vision_skill: Visual perception skill, supporting object detection, visual positioning, and image segmentation. (Core dependencies: SAM3, Qwen3-VL)
  • voice_skill: Voice interaction skill, supporting voice command recognition, voice feedback, and custom command set configuration. (Core dependency: cosyvoice)

II. GraspGen - Pose Generation and Grasping

This article demonstrates the identification, segmentation, pose generation, and grasping of arbitrary objects using SAM3 and pose generation tools.

Repositories

Hardware Requirements

  • x86 Desktop Platform
  • NVIDIA GPU with at least 16GB VRAM
  • Intel RealSense Camera

Project Deployment Environment

  • OS: Ubuntu 24.04
  • Middleware: ROS Jazzy
  • GPU: RTX 5090
  • NVIDIA Driver: Version 570.195.03
  • CUDA: Version 12.8
  1. Install NVIDIA Graphics Driver
sudo apt update
sudo apt upgrade
sudo add-apt-repository ppa:graphics-drivers/ppa
sudo apt update
sudo apt install nvidia-driver-570
# Restart
reboot
  2. Install CUDA Toolkit 12.8
wget https://developer.download.nvidia.com/compute/cuda/12.8.1/local_installers/cuda_12.8.1_570.124.06_linux.run
sudo sh cuda_12.8.1_570.124.06_linux.run
  • During installation, uncheck the first option (“driver”) since the driver was installed in the previous step.
  3. Add Environment Variables
echo 'export PATH=/usr/local/cuda-12.8/bin:$PATH' >> ~/.bashrc
echo 'export LD_LIBRARY_PATH=/usr/local/cuda-12.8/lib64:$LD_LIBRARY_PATH' >> ~/.bashrc
source ~/.bashrc
  4. Verify Installation
    Execute nvcc -V to check CUDA information.
nvcc -V
  5. Install cuDNN
  • Download the cuDNN tar file from the NVIDIA Official Website. After extracting, copy the files.

  • Execute the following commands to copy cuDNN to the CUDA directory:

sudo cp cuda/include/cudnn*.h /usr/local/cuda/include
sudo cp cuda/lib/libcudnn* /usr/local/cuda/lib64
sudo chmod a+r /usr/local/cuda/include/cudnn*.h /usr/local/cuda/lib64/libcudnn*
  6. Install TensorRT
    Download the TensorRT tar file from the NVIDIA Official Website.
  • Extract and move TensorRT to the /usr/local directory:
# Extract (creates TensorRT-10.16.0.72/ in the current directory)
tar -xvf TensorRT-10.16.0.72.Linux.x86_64-gnu.cuda-12.9.tar.gz

# Move the extracted directory to /usr/local
sudo mv TensorRT-10.16.0.72/ /usr/local/
  • Test TensorRT Installation:
# Enter MNIST sample directory
cd /usr/local/TensorRT-10.16.0.72/samples/sampleOnnxMNIST

# Compile
make

# Run the executable found in bin
cd /usr/local/TensorRT-10.16.0.72/bin
./sample_onnx_mnist

SAM3 Deployment

  • Python: 3.12 or higher
  • PyTorch: 2.7 or higher
  • CUDA: Compatible GPU with CUDA 12.6 or higher
  1. Create Conda Virtual Environment
conda create -n sam3 python=3.12
conda activate sam3
  1. Install PyTorch and Dependencies
# For 50-series GPUs, CUDA 12.8 and Torch 2.8 are recommended
# Downgrade numpy to <1.23 if necessary
pip install torch==2.8.0 torchvision==0.23.0 torchaudio==2.8.0 --index-url https://download.pytorch.org/whl/cu128

cd sam3
pip install -e .
  1. Model Download
    1. Submit the form to gain download access on HuggingFace: https://huggingface.co/facebook/sam3
    2. Or search via local mirror sites.

Robotic Arm Driver Deployment

The project outputs target_pose (end-effector pose), which can be manually adapted for different robotic arms.
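Adapting the output mostly means converting the pose representation your arm driver expects. As a hypothetical sketch (the function names and the tuple layout are assumptions for illustration, not the project's actual interface), converting an (x, y, z, roll, pitch, yaw) target_pose into a position-plus-quaternion structure could look like:

```python
import math


def rpy_to_quaternion(roll, pitch, yaw):
    """Convert roll/pitch/yaw (radians, ZYX convention) to (x, y, z, w)."""
    cr, sr = math.cos(roll / 2), math.sin(roll / 2)
    cp, sp = math.cos(pitch / 2), math.sin(pitch / 2)
    cy, sy = math.cos(yaw / 2), math.sin(yaw / 2)
    return (
        sr * cp * cy - cr * sp * sy,  # x
        cr * sp * cy + sr * cp * sy,  # y
        cr * cp * sy - sr * sp * cy,  # z
        cr * cp * cy + sr * sp * sy,  # w
    )


def adapt_target_pose(target_pose, frame_id="arm_base"):
    """Map an (x, y, z, roll, pitch, yaw) tuple to a PoseStamped-like dict
    for a driver that expects position plus quaternion orientation."""
    x, y, z, roll, pitch, yaw = target_pose
    qx, qy, qz, qw = rpy_to_quaternion(roll, pitch, yaw)
    return {
        "frame_id": frame_id,
        "position": {"x": x, "y": y, "z": z},
        "orientation": {"x": qx, "y": qy, "z": qz, "w": qw},
    }
```

From there, each arm's adapter only has to translate this dict into the driver's own command call.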

  1. Example: PiPER Robotic Arm
pip install python-can

git clone https://github.com/agilexrobotics/pyAgxArm.git

cd pyAgxArm
pip install .

Cloning

Clone this project to your local machine:

cd YOUR_PATH
git clone -b ros2_jazzy_version https://github.com/AgilexRobotics/GraspGen.git

Running the Project

  1. Grasping Node
python YOUR_PATH/sam3/realsense-sam.py --prompt "Target Object Name in English"
  2. Grasping Task Execution Controls
A = Zero-force mode (Master arm) | D = Normal mode + Record pose | S = Return to home
X = Replay pose | Q = Open gripper | E = Close gripper | P = Pointcloud/Grasp
T = Change prompt | G = Issue grasp command | Esc = Exit
  3. Automatic Grasping Task
python YOUR_PATH/sam3/realsense-sam.py --prompt "Target Object Name" --auto

1 post - 1 participant

Read full topic

by Agilex_Robotics on April 03, 2026 10:38 AM

April 02, 2026
Interactive GUI toolkit for robotics visualization - Python & C++, runs on desktop and web

Hi everyone,

I’d like to share Dear ImGui Bundle, an open-source framework for building interactive GUI applications in Python and C++. It wraps Dear ImGui with 23 integrated libraries (plotting, image inspection, node editors, 3D gizmos, etc.) and runs on desktop, mobile, and web.

I’m a solo developer and have been working hard on this for 4 years. I am new here, but I thought it might be useful for robotics developers.

It provides:

Real-time visualization

  • ImPlot and ImPlot3D for sensor data, trajectories, live plots at 60fps (or even 120fps)
  • ImmVision for camera feed inspection with zoom, pan, pixel values, and colormaps
  • All GPU-accelerated (OpenGL/Metal/Vulkan)

Interactive parameter tuning

  • Immediate mode means your UI code is just a few lines of Python or C++
  • Sliders, knobs, toggles, color pickers - all update in real time
  • No callbacks, no widget trees, no framework boilerplate

Cross-platform deployment

  • Same code runs on Linux, macOS, Windows
  • Python apps can run in the browser via Pyodide (useful for sharing dashboards without requiring install)
  • C++ apps compile to WebAssembly via Emscripten

Example: live camera + Laplacian filter with colormaps in 54 lines

import cv2
import numpy as np
from imgui_bundle import imgui, immvision, immapp


class AppState:
    def __init__(self):
        self.cap = cv2.VideoCapture(0)
        self.image = None
        self.filtered = None
        self.blur_sigma = 2.0
        # ImmVision params
        # For the camera image
        self.params_image = immvision.ImageParams()
        self.params_image.image_display_size = (400, 0)
        self.params_image.zoom_key = "cam"
        # For the filtered image (synced zoom via zoom_key)
        self.params_filter = immvision.ImageParams()
        self.params_filter.image_display_size = (400, 0)
        self.params_filter.zoom_key = "cam"
        self.params_filter.show_options_panel = True


def gui(s: AppState):
    # grab
    has_image, frame = s.cap.read()
    if has_image:
        s.image = cv2.resize(frame, (640, 480))
        gray = cv2.cvtColor(s.image, cv2.COLOR_BGR2GRAY)
        gray_f = gray.astype(np.float64) / 255.0
        blurred = cv2.GaussianBlur(gray_f, (0, 0), s.blur_sigma)
        s.filtered = cv2.Laplacian(blurred, cv2.CV_64F, ksize=5)

    # Refresh images only if needed
    s.params_image.refresh_image = has_image
    s.params_filter.refresh_image = has_image

    if s.image is not None:
        immvision.image("Camera", s.image, s.params_image)
        imgui.same_line()
        immvision.image("Filtered", s.filtered, s.params_filter)

    # Controls
    _, s.blur_sigma = imgui.slider_float("Blur", s.blur_sigma, 0.5, 10.0)


state = AppState()
immvision.use_bgr_color_order()
immapp.run(lambda: gui(state), window_size=(1200, 550), window_title="Camera Filter", fps_idle=0)

The filtered image is float64 - click “Options” to try different colormaps (Heat, Jet, Viridis…). Both views are zoom-linked: pan one, the other follows.

Try it:

Install: pip install imgui-bundle

Adoption:
The framework is used in several research projects, including CVPR 2024 papers (4K4D), Newton Physics, and moderngl. The Python bindings are auto-generated with litgen, so they stay in sync with upstream Dear ImGui.

Happy to answer any questions or discuss how it could fit into ROS workflows.

Best,
Pascal

4 posts - 2 participants

Read full topic

by pthom on April 02, 2026 06:10 PM

On message standardization (and a call for participation)

Hi folks!

I presume at least some of you are aware of the OSRA efforts towards better supporting Physical AI applications. Some of those efforts revolve around messaging and interfaces, and in that context, a few gaps in standard sensing messages have been identified. In a way, this is orthogonal to Physical AI, yet still we may as well seize the opportunity to improve the state of things.

To that end, the Standardized Interfaces & Messages Working Group will be hosting public sessions to discuss, review, and craft proposals to address those gaps, either through implementation or through recommendation where the community has already organically developed a solution. Academic researchers and industry practitioners are more than welcome to join. If you design or manufacture sensor hardware, even better.

Our friends at Ouster already took the lead and posted a proposal for a new 3D LiDAR message, so our focus during the first couple of sessions will likely be on LiDAR technology. Tactile sensing is a close second. We’ve heard complaints about the IMU message structure too. Feel free to propose more (and challenge others too).

We’ll meet on Mondays, biweekly, starting Mon, Apr 6, 2026 3:00 PM UTC. Fill this form to join the meetings. Hope to see you there!

1 post - 1 participant

Read full topic

by hidmic on April 02, 2026 03:17 PM

April 01, 2026
Announcing MoveIt Pro 9 with ROS 2 Jazzy Support

Hi ROS Community!

It’s been a while, but we’re excited to announce MoveIt Pro 9.0, the latest major release of PickNik’s manipulation developer platform built on ROS 2. MoveIt Pro includes comprehensive support for AI model training & execution, Behavior Trees, MuJoCo simulation, and all the classic capabilities you expect like motion planning, collision avoidance, inverse kinematics, and real-time control.

This release adds support for ROS 2 Jazzy LTS (while still supporting ROS Humble), along with significant improvements to teleoperation, motion planning, developer tooling, and robot application workflows. MoveIt Pro now includes new joint-space and Cartesian-space motion planners that outperform previous implementations, improving cycle time, robustness, and industry-required reliability. See the full benchmarking comparison for details.

MoveIt Pro is developed by the team behind MoveIt 2, and our goal is to make it easier for robotics teams to build and deploy real-world manipulation systems using ROS. Many organizations in manufacturing, aerospace, logistics, agriculture, industrial cleaning, and research use MoveIt Pro to accelerate development without needing to build large amounts of infrastructure from scratch.

What’s new

Improved real-time control and teleoperation with Joint Jog

MoveIt Pro now includes a new “Joint Jog” teleoperation mode for controlling robots directly from the web UI. This replaces the previous MoveIt Servo based teleoperation implementation and introduces continuous collision checking, configurable safety factors, and optional link padding for safer manual control during debugging or demonstrations.

Scan-and-plan workflows

New scan-and-plan capabilities allow robots to scan surfaces with a sensor and automatically generate tool paths for tasks like spraying, sanding, washing, or grinding. These workflows make it easier to build surface-processing applications.
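As a rough illustration of what "generate tool paths" means here (a generic sketch, not MoveIt Pro code; the function name and grid layout are assumptions), a common pattern is a serpentine raster pass over a scanned height map, keeping the tool at a fixed standoff above the surface:

```python
def raster_toolpath(heightmap, cell_size, standoff):
    """Generate a serpentine (boustrophedon) tool path over a scanned surface.

    heightmap: 2D list of surface heights sampled on a regular grid (rows x cols).
    cell_size: grid spacing in metres.
    standoff:  tool offset above the surface in metres.
    Returns a list of (x, y, z) waypoints; rows alternate direction so the
    tool sweeps back and forth instead of rapid-traversing between passes.
    """
    path = []
    for row_idx, row in enumerate(heightmap):
        cols = range(len(row))
        if row_idx % 2 == 1:  # reverse every other row
            cols = reversed(list(cols))
        for col_idx in cols:
            path.append((
                col_idx * cell_size,      # x
                row_idx * cell_size,      # y
                row[col_idx] + standoff,  # z: surface height plus standoff
            ))
    return path
```

A real scan-and-plan pipeline adds tool orientation (typically aligned with the surface normal) and collision-free approach/retreat motions on top of a raster like this.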

New Python APIs for MoveIt Pro Core

New low-level Python APIs expose the core planners, solvers, and controllers directly, enabling developers to build custom applications outside of the Behavior Tree framework. These APIs provide fine-grained control over motion planning and kinematics, including advanced features like customizable nullspace optimization and path constraints.

Improved motion planning APIs

Several updates improve flexibility for motion generation, including improved path inverse kinematics, orientation tracking as a nullspace cost, customizable nullspace behavior, and tunable path deviation tolerances.
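The nullspace features refer to a standard redundancy-resolution idea: a secondary objective is projected into the Jacobian's nullspace so it cannot disturb the end-effector task. A generic NumPy sketch of that idea (not MoveIt Pro's actual API):

```python
import numpy as np


def velocity_with_nullspace(J, x_dot, grad_h):
    """Resolve joint velocities for a redundant arm:

        q_dot = J+ x_dot + (I - J+ J) grad_h

    The pseudoinverse term tracks the task velocity x_dot; the secondary
    objective gradient grad_h is filtered through the nullspace projector
    (I - J+ J) so it produces only internal motion.
    """
    J_pinv = np.linalg.pinv(J)
    nullspace = np.eye(J.shape[1]) - J_pinv @ J
    return J_pinv @ x_dot + nullspace @ grad_h


# 3-joint planar arm tracking a 2D end-effector velocity: one redundant DOF.
J = np.array([[1.0, 0.5, 0.2],
              [0.0, 1.0, 0.7]])
x_dot = np.array([0.1, 0.0])
grad_h = np.array([0.0, 0.0, 1.0])  # e.g. "prefer moving joint 3"
q_dot = velocity_with_nullspace(J, x_dot, grad_h)
# J @ q_dot still equals x_dot: the nullspace term never disturbs the task.
```

"Orientation tracking as a nullspace cost" fits this template: the orientation objective supplies grad_h while the primary task keeps full priority.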

Developer productivity improvements

The MoveIt Pro UI and Behavior Tree tooling received a number of improvements to make debugging and application development faster, including a redesigned UI layout, improved editing workflows, Behavior Tree editor enhancements such as search and node snapping, and better debugging tools including TF visualization and alert history.

Expanded Library of Reusable Manipulation Skills

MoveIt Pro also includes a large library of reusable robot capabilities implemented as thread-safe Behavior Tree nodes, allowing developers to compose complex manipulation applications from modular building blocks instead of writing large amounts of robotics infrastructure from scratch. See our Behaviors Hub to explore the 200+ available Behaviors.

Built for the ROS ecosystem

MoveIt Pro integrates with the broader ROS ecosystem, including standard ROS drivers and packages. PickNik has been deeply involved in the MoveIt project since its early development, and we continue investing heavily in open-source robotics such as developing many ROS drivers for major vendors.

Learn more

Full release notes:
https://docs.picknik.ai/release-notes/

We’d love feedback from the ROS community, and we’re excited to see what developers build with these new capabilities. Contact us to learn more.

4 posts - 3 participants

Read full topic

by davetcoleman on April 01, 2026 04:45 PM


Powered by the awesome: Planet