Peer Robotics Implements Global Localization for Industrial Mobile Robots Using NVIDIA Isaac ROS

Peer Robotics
Jan 8, 2024


Implementing global localization on Peer Robotics Autonomous Mobile Robots (AMRs) with NVIDIA Isaac ROS improves ease of use and enables robust navigation in dynamic environments.

Peer Robotics is a collaborative mobile robotics startup building material handling solutions for manufacturing industries. Our key focus is on making robots that can learn from humans in real time, allowing people on the shop floor to integrate and deploy these robots easily. Our robots can understand human haptics, allowing operators to easily grab the robot and move it like a trolley to teach it how to perform any operation.

Our built-in intuitive web app also lets users access the robot from any device on the same network and automate missions and other repetitive operations with ease. Users don’t have to worry about robot operations every time they run the system. But one area still requires human input: setting up the Initial Pose. This touch point is a major challenge, because whenever a user restarts the system, they have to provide the Initial Pose before the robot can operate. Failing to do so renders the platform unusable.

Why is setting up the Initial Pose more complex than it seems?

Existing AMRs rely on Simultaneous Localization and Mapping (SLAM)-based algorithms for autonomous navigation in indoor environments. These algorithms build a map of the environment and determine the robot’s location within it. A key challenge, however, is that they require an initial estimate of the robot’s current location (an Initial Pose Guess) every time they are started.

This poses a major challenge for users, who must provide this input for the robot to navigate effectively. Without an accurate initial pose guess, the robot may navigate poorly or fail its tasks outright, leading to inefficient operation and user frustration.

Manually entering initial pose guesses becomes even more painful as the number of robots in a facility grows. Users not only have to identify the correct initial pose guess for each robot but also ensure they are setting it on the right robot. The video below shows how a user has to navigate different parameters to correctly provide the initial pose for one robot among several.

To scale AMR deployments and improve their ease of use, it’s essential to eliminate the need for manual input of initial pose guesses. Automating the process of identifying the robot’s location greatly improves the usability of the platform: users no longer have to spend time and effort providing this input. It also improves the robot’s overall performance, since accurate, automated localization lets the robot navigate more efficiently and complete tasks more reliably.

Current approaches: fixing the Initial Pose at the cost of flexibility

To tackle this challenge, we and several other companies currently provide a couple of solutions to end users.

The most common is a solution that lets users provide an initial pose guess to the robot via a web interface, as shown in the video above. Rather than being used directly, the user-provided pose is fed to the AMCL (Adaptive Monte Carlo Localization) module, which refines the robot’s position by particle filtering. But this approach doesn’t eliminate human input, and running AMCL to correct the initial pose is computationally expensive, taking a few seconds. Even then, the user still needs to verify that the correction succeeded.
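As a rough illustration, the handoff can be as simple as publishing the web-supplied pose on /initialpose, the standard topic AMCL listens to for initial pose estimates. The sketch below assumes a ROS 2 stack with rclpy; the node and method names are ours, and covariance handling is omitted for brevity.

```python
import rclpy
from rclpy.node import Node
from geometry_msgs.msg import PoseWithCovarianceStamped


class InitialPosePublisher(Node):
    """Relays a user-provided pose to AMCL via the /initialpose topic."""

    def __init__(self):
        super().__init__('initial_pose_publisher')
        self.pub = self.create_publisher(
            PoseWithCovarianceStamped, '/initialpose', 10)

    def send(self, x, y, qz, qw):
        # Pose is expressed in the map frame; yaw is encoded as a quaternion.
        msg = PoseWithCovarianceStamped()
        msg.header.frame_id = 'map'
        msg.header.stamp = self.get_clock().now().to_msg()
        msg.pose.pose.position.x = x
        msg.pose.pose.position.y = y
        msg.pose.pose.orientation.z = qz
        msg.pose.pose.orientation.w = qw
        self.pub.publish(msg)


def main():
    rclpy.init()
    node = InitialPosePublisher()
    node.send(1.0, 2.0, 0.0, 1.0)  # example pose taken from the web UI
    rclpy.shutdown()


if __name__ == '__main__':
    main()
```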

Another approach we have implemented is setting up a Home Position. The user allocates a specific position on the map as the home position, which is then hard-coded as the initial pose estimate: whenever the robot is started, the system automatically uses the home position as the initial pose. The significant limitation is that it relies on the user to consistently place the robot at the home position before starting it. If the robot is shut down anywhere else, it has to be brought back to the home position before it can be restarted, and this suboptimal workflow adds real operational cost.
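For illustration only: in a Nav2-based stack, a hard-coded home pose can be expressed as AMCL parameters in a ROS 2 launch file, so the robot always assumes it starts from home. Our stack may wire this differently; the pose values below are placeholders.

```python
from launch import LaunchDescription
from launch_ros.actions import Node


def generate_launch_description():
    # Nav2's AMCL can pin the startup pose via parameters; using the
    # home position as that fixed pose implements the approach above.
    return LaunchDescription([
        Node(
            package='nav2_amcl',
            executable='amcl',
            name='amcl',
            parameters=[{
                'set_initial_pose': True,
                'initial_pose.x': 0.0,    # home position, map frame
                'initial_pose.y': 0.0,
                'initial_pose.yaw': 0.0,
            }],
        ),
    ])
```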

Global Localization Module and GPU parallelization

The Global Localization Module (GLM) from NVIDIA Isaac ROS addresses this challenge by implementing a grid search algorithm and using the GPU to parallelize the search. It takes a 2D map and scan data as input and provides a localization result as output. One of its main advantages is the ability to perform the search quickly across maps of different sizes.
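Conceptually, the search scores every candidate pose by how well the lidar scan agrees with the map. Below is a minimal CPU-only sketch of that idea in Python with NumPy; it is our illustration of a brute-force grid search, not the Isaac ROS implementation, which evaluates the candidates in parallel on the GPU.

```python
import numpy as np


def grid_search_localize(occ, resolution, scan_ranges, scan_angles,
                         xs, ys, thetas):
    """Brute-force scan-to-map matching over a pose grid.

    occ: 2D bool array, True where a map cell is occupied
    resolution: meters per cell
    scan_ranges, scan_angles: 1D arrays describing the 2D lidar scan
    xs, ys, thetas: candidate pose values (the search grid)
    """
    best_score, best_pose = -1, None
    h, w = occ.shape
    for theta in thetas:
        # Scan endpoints in the robot frame, rotated by the candidate heading.
        ex = scan_ranges * np.cos(scan_angles + theta)
        ey = scan_ranges * np.sin(scan_angles + theta)
        for x in xs:
            for y in ys:
                ix = ((x + ex) / resolution).astype(int)
                iy = ((y + ey) / resolution).astype(int)
                valid = (ix >= 0) & (ix < w) & (iy >= 0) & (iy < h)
                # Score: how many endpoints land on occupied map cells.
                score = occ[iy[valid], ix[valid]].sum()
                if score > best_score:
                    best_score, best_pose = score, (x, y, theta)
    return best_pose, best_score
```

Even this naive version makes the GPU fit obvious: every (x, y, theta) candidate is scored independently, so the loops map directly onto parallel threads.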

By integrating the output of the global localization package into the existing localization module of the AMR, the need for manual input of initial pose guesses can be completely eliminated. This leads to faster and more accurate localization, reducing the chances of errors and saving valuable human time and effort.

To accommodate manufacturing plants of different sizes, we operate in a serverless manner for single-robot deployments: we remove the need for IT involvement and perform all computation at the edge. Modules such as Mapping, Navigation, and Obstacle Avoidance run directly on the robot using its built-in compute. This approach, however, limits the computational resources available for any new module. At Peer Robotics we constantly work on optimizing our compute budget, which makes the GLM a great fit, since its workload parallelizes onto the GPU.

We have implemented a service-based system to run the GLM, so the module is executed only when the robot is started or when the user changes the map. This keeps it from consuming resources during regular operations while still eliminating the need for the user to input an initial pose.
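A minimal version of that service-triggered pattern might look like the sketch below, assuming std_srvs/Trigger as the service type and hypothetical topic names; the actual Isaac ROS interfaces and our internal wiring may differ.

```python
import rclpy
from rclpy.node import Node
from std_srvs.srv import Trigger
from geometry_msgs.msg import PoseWithCovarianceStamped


class GlobalLocalizationTrigger(Node):
    """On a Trigger call, requests global localization once and relays the
    result to /initialpose so the regular localizer takes over."""

    def __init__(self):
        super().__init__('global_localization_trigger')
        self.srv = self.create_service(
            Trigger, 'trigger_global_localization', self.on_trigger)
        # Assumed output topic of the global localization module.
        self.sub = self.create_subscription(
            PoseWithCovarianceStamped, 'localization_result',
            self.on_result, 1)
        self.pub = self.create_publisher(
            PoseWithCovarianceStamped, '/initialpose', 1)

    def on_trigger(self, request, response):
        # This is where the GLM would be kicked off, e.g. by forwarding
        # the latest scan to it. Omitted here.
        response.success = True
        response.message = 'global localization requested'
        return response

    def on_result(self, msg):
        # Feed the GLM estimate to the existing localizer as the initial pose.
        self.pub.publish(msg)


def main():
    rclpy.init()
    rclpy.spin(GlobalLocalizationTrigger())


if __name__ == '__main__':
    main()
```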

This video shows the GLM integrated with our stack in action. Currently, the user can trigger the module by pressing a button on the dashboard. However, we are working on automating the process to eliminate any human involvement.

The results of the integration are highly promising, with some key highlights as follows:

  1. The GLM can locate the robot within 2–3 seconds on maps ranging from 100,000–200,000 sq ft. The process is fast enough that users barely notice it running.
  2. The module runs entirely on the GPU (an RTX 2060 in our case) for only a few seconds, so the compute for other essential services is never compromised.

In summary, eliminating the need for manual input of initial pose guesses in SLAM-based systems significantly improves the usability and performance of AMRs: it simplifies the user experience, saves time and resources, and enables more efficient navigation in dynamic environments. Implementing a GLM offers a promising solution to this challenge.
