RobotPerf Benchmarks

The benchmarking suite to evaluate robotics computing performance

RobotPerf provides an open reference benchmarking suite used to evaluate robotics computing performance fairly, with ROS 2 as its common baseline, so that robotic architects can make informed decisions about the hardware and software components of their robotic systems.

RobotPerf benchmarks | Join RobotPerf project
RobotPerf Benchmark

The performance benchmarking suite in robotics

The myriad combinations of robot hardware and robotics software make assessing robotic-system performance challenging, especially in an architecture-neutral, representative, and reproducible manner. RobotPerf addresses this challenge by delivering a reference performance benchmarking suite to evaluate robotics computing performance across CPUs, GPUs, FPGAs and other compute accelerators. The benchmarks are designed to be representative of the performance of a robotic system and to be reproducible across different robotic systems. To that end, RobotPerf builds on top of ROS 2, the de facto standard for robot application development.


Backed by a consortium of robotics leaders from industry, academia and research labs, RobotPerf is organized as an open project whose mission is to build open, fair and useful robotics benchmarks that are technology-agnostic and vendor-neutral, and that provide unbiased evaluations of robotics computing performance for hardware, software, and services.


Benchmarking assists in performance evaluation. Roboticists can use performance data to develop more efficient robotic systems and to choose the appropriate hardware for each robotic application. It can also aid in understanding the trade-offs between algorithms implementing the same skill.


RobotPerf aligns with robotics standards so that you don’t spend time reinventing the wheel and re-developing what already works for most. In particular, benchmarks are conducted using the Robot Operating System 2 (ROS 2) as their common baseline. RobotPerf also aligns with standardization initiatives within the ROS ecosystem related to computing performance and benchmarking, such as REP 2008 (ROS 2 Hardware Acceleration Architecture and Conventions) and REP 2014 (Benchmarking performance in ROS 2).


RobotPerf benchmarks aim to cover the complete robotics pipeline, including perception, localization, control, manipulation and navigation.

New categories may appear over time.







Robot behaviors take the form of computational graphs, with data flowing between computation Nodes, across physical networks (communication buses), and while mapping to underlying sensors and actuators. The popular choice for building these computational graphs today is the Robot Operating System (ROS), a framework for robot application development. ROS enables you to build computational graphs and create robot behaviors by providing libraries, a communication infrastructure, drivers and tools to put it all together. Most companies building real robots today use ROS or similar event-driven software frameworks. ROS is thereby the common language in robotics, with several hundred companies and thousands of developers using it every day. ROS 2 was redesigned from the ground up to address some of the challenges in ROS and solves many of the problems in building reliable robotics systems.
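As a minimal, framework-agnostic sketch (plain Python, deliberately not the actual ROS 2 API), a computational graph can be modeled as nodes that exchange timestamped messages over publish/subscribe connections; stamping a message at the source is also the basis for the kind of end-to-end latency measurement RobotPerf benchmarks perform. The pipeline names (`camera`, `filter`, `detector`) are hypothetical:

```python
import time

class Node:
    """A computation node: receives a message, processes it, forwards the result."""
    def __init__(self, name, work):
        self.name = name
        self.work = work          # the node's computation (a callable)
        self.subscribers = []     # downstream nodes in the graph

    def connect(self, node):
        self.subscribers.append(node)

    def publish(self, msg):
        for node in self.subscribers:
            node.on_message(msg)

    def on_message(self, msg):
        self.publish(self.work(msg))

# Hypothetical perception-style pipeline: camera -> filter -> detector
results = []
camera = Node("camera", lambda m: m)
flt = Node("filter", lambda m: {**m, "filtered": True})
detector = Node("detector", lambda m: results.append(m) or m)
camera.connect(flt)
flt.connect(detector)

# Stamp the message at the source; end-to-end latency is read at the sink.
camera.on_message({"stamp": time.perf_counter(), "data": [0.1, 0.2]})
latency = time.perf_counter() - results[0]["stamp"]
print(f"end-to-end latency: {latency * 1e6:.1f} µs")
```

In a real ROS 2 graph the same roles are played by `rclpy`/`rclcpp` nodes communicating over DDS topics, and the latency being measured includes serialization, transport and scheduling overheads, which is precisely what makes hardware-level benchmarking meaningful.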

ROS 2 is a modern, popular framework for robot application development that most silicon vendor solutions support. Its variety of modular packages, each implementing a different robotics capability, simplifies performance benchmarking in robotics.

Which companies are using ROS? | More about ROS

Meeting monthly at the ROS 2
Hardware Acceleration Working Group

The RobotPerf project is led by members of the ROS 2 Hardware Acceleration Working Group (HAWG), which drives the creation, maintenance and testing of hardware acceleration for ROS 2 and Gazebo, and which is interested in the performance of ROS 2 and Gazebo on different hardware platforms.


Powered by leading players in
industry, academia and research labs


Building upon
past experiences

RobotPerf is being driven by members of the same group who previously created MLPerf™ and other popular industry benchmarks. Join us to contribute and democratize the benchmarking of robotics software.

Join RobotPerf | Get early access to our paper

Need help to run the RobotPerf benchmarks?

GitHub issues | Get additional support