Map-making Robots: A Review of the Occupancy Grid Map Algorithm

Rasmus Arnling Bååth

Introduction

Maps are extremely useful artifacts. A map helps us relate to places we have never been and shows us the way if we decide to go there. For an autonomous robot a map is even more useful, as it can, if detailed enough, serve as the robot's internal representation of the world. The field of robotic mapping is quite young and first started to receive attention in the early 1980s. Since then much effort has gone into constructing robust robotic mapping algorithms, but the challenge is great, as the way a human intuitively builds a map is not directly applicable to a robot. Whereas a human possesses superior vision and can locate herself by identifying landmarks, a robot most often only has sensors that approximate the distance to the closest obstacles. The conditions of robotic mapping more closely resemble those of a 15th-century ship charting unknown waters. Like the ship, the robot only knows the approximate distance to the closest obstacles; it can happen that all obstacles are so far away that the robot senses nothing at all, and it is often difficult for the robot to keep track of its position and heading. Unlike the ship, a robot using a faulty map will bump into walls in a disgraceful manner, while the ship, on the other hand, might discover America.

A long-standing goal of AI and robotics research has been to construct truly autonomous robots, capable of reasoning about and interacting with their environment. It is hard to see how this could be realized without general robust mapping algorithms.

The Mapping Problem

What is a map? And what does it mean to build one?

From a robot's point of view a map is a data structure representing an area of the world. By means of this data structure the robot should be able to localize itself in, and navigate around, this area. Any data structure that allows the robot to do this qualifies as a map. Many different map representations have been proposed, but most can be labeled either metric or topological. A metric map encodes the distances between all points of the map; what we normally think of as a map, e.g. a world map, is a metric map. A topological map encodes only the connections between a set of features, e.g. the connections between rooms in a building; the map of the London Underground is a real-world example of a topological map. A map representation that is both metric and topological is called a hybrid map.

To build a map is to take a number of sensor readings from a robot, a sensor reading being a discrete-time sample from a robot's sensors, and integrate them into a map. This is not as straightforward as it might sound, and there are many reasons why robotic mapping is a hard problem.

The Approach

This thesis describes an implementation of a map-building algorithm for a robot setup at Lund University Cognitive Science (LUCS). One goal of the implementation was that it should be usable in the research at LUCS. The main characteristics of the robot setup are that the environment is static and that the pose is given, so the setup does not present all of the difficulties mentioned above. The given pose is not free from noise, but there will never be a problem with cumulative position errors. Even though the problem is eased, it is still far from trivial and thus interesting in its own right.

Given these preconditions the occupancy grid map algorithm, first described by Elfes and Moravec (1985), was chosen. Its basic idea is simple: represent the part of the world you want to map with a grid, and when you observe an obstacle, mark the cells covered by that obstacle as occupied.

The occupancy grid map algorithm was implemented and a number of experiments were conducted to investigate how it would perform given different types of sensor noise.

The Occupancy Grid Map Algorithm

One naive way of handling noisy sensors would be to simply count the number of times a sensor reports an obstacle at a particular place: each time a position p is scanned, increment a counter c_p, and each time an obstacle is detected at p, increment a counter o_p. The estimated probability that p is occupied is then o_p/c_p. One problem with this approach is that, as time passes, the counters grow ever larger and will eventually overflow.
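As a sketch, this counting scheme could look as follows in C++ (the names and structure are illustrative, not from the thesis):

#include <cstdint>
#include <vector>

// Naive occupancy estimation by counting: one pair of counters per cell.
// The counters grow without bound, which is exactly the weakness the
// occupancy grid map algorithm avoids.
struct CountingGrid {
    int width, height;
    std::vector<uint32_t> scans;   // c_p: times cell p was scanned
    std::vector<uint32_t> hits;    // o_p: times an obstacle was seen at p

    CountingGrid(int w, int h)
        : width(w), height(h), scans(w * h, 0), hits(w * h, 0) {}

    void update(int x, int y, bool obstacle_detected) {
        scans[y * width + x] += 1;
        if (obstacle_detected) hits[y * width + x] += 1;
    }

    // Estimated probability that cell (x, y) is occupied.
    double occupancy(int x, int y) const {
        uint32_t c = scans[y * width + x];
        return c == 0 ? 0.5 : double(hits[y * width + x]) / c;  // 0.5 = unknown
    }
};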

The occupancy grid map algorithm is an algorithm for building metric maps that solves this problem. It was developed in the mid 80s by Elfes and Moravec and is a recursive Bayesian estimation algorithm. Here recursive means that, in order to integrate the nth sensor reading into the map, no history of previous sensor readings is necessary. This is a useful property, which implies that sensor readings can be integrated online and that the space and time complexity is constant with respect to the number of sensor readings. The algorithm is Bayesian because the central update equation is based on Bayes' theorem:

P(A|B) = P(B|A)P(A) / P(B)

which answers the question "what is the probability of A given B", if we know the probabilities P(B|A), P(A) and P(B).
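The full update equation follows from this theorem (the derivation is given in the thesis but omitted here). Written in the odds-ratio form that matches the C++ update loop shown later in this post, with m the event that a given cell is occupied, P(m) the prior, and s_t and p_t the sensor reading and pose at time t, it is:

P(m | s_1..t, p_1..t) = 1 / ( 1 + [(1 − P(m | s_t, p_t)) / P(m | s_t, p_t)] · [P(m) / (1 − P(m))] · [(1 − P(m | s_1..t−1, p_1..t−1)) / P(m | s_1..t−1, p_1..t−1)] )

Each factor is an odds ratio: the evidence from the current sensor reading (via the inverse sensor model described below), the prior, and the map built so far.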

The map data structure is a grid, in 2D or 3D, that represents a region of space. This thesis treats the 2D case, so the region is a rectangle. The value of each cell of the grid is the estimated probability that the corresponding area of space is occupied. The region corresponding to a cell is always considered either completely occupied or completely empty. Definitions of when a region counts as occupied vary, but often a region is considered occupied if any part of it is occupied.
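Concretely, the grid can be represented as a simple matrix of probabilities. The later code sketches in this post assume this minimal representation (illustrative, not the thesis code):

#include <vector>

// A 2D occupancy grid: each cell holds the estimated probability that
// the corresponding rectangle of space is occupied.
using Grid = std::vector<std::vector<double>>;

// Create a grid covering a rows x cols region, with every cell set to
// the prior probability (i.e. "unknown") before any sensor reading.
Grid make_grid(int rows, int cols, double prior)
{
    return Grid(rows, std::vector<double>(cols, prior));
}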

The algorithm consists of two separate parts: the update equation and a sensor model. The update equation is the basis of the algorithm and does not have to change for different robot setups. The sensor model on the other hand depends on the robot setup and each robot setup requires a customized sensor model.

The computational complexity of the algorithm depends on the implementation of the sensor model. Apart from that, each update loop has time complexity O(n'm'), where n' and m' are the number of columns and rows of the grid affected by the current sensor reading. The space complexity is O(nm), where n and m are the total number of columns and rows of the grid.

The original algorithm is limited in several ways. It requires that the robot's pose is given, and the robot thus cannot rely solely on its odometry. It presumes a static environment, or requires sensor readings where dynamic obstacles have been filtered out. Finally, the area to be mapped has to be specified in advance. These might sound like severe limitations, but in many robot setups one can assume a static environment and a way to deduce the robot's pose. The original algorithm has also been successfully extended to deal with, e.g., unknown robot poses.

The Inverse Sensor Model

A sensor model is a procedure for calculating the probability P(s_t | m, p_t), that is, the probability of a sensor reading given a map and a pose. A procedure for calculating the reverse, P(m | s_t, p_t), the probability of the map given only one sensor reading and pose, is accordingly called an inverse sensor model. An inverse sensor model can be thought of as a function ism(s_t, p_t) that returns a grid, the same size as the map grid, where the probabilities P(m | s_t, p_t) are imprinted. There is no single correct way to construct ism(s_t, p_t) for a given sensor; different approaches have different advantages. An example of how the output of an inverse sensor model can look is given in the figure below.

[Figure: example output of an inverse sensor model]

The picture to the left shows what the robot senses. The picture to the right shows the resulting occupancy probabilities: white denotes occupied space, black denotes free space and gray denotes unknown space. Notice how the black strokes fade with the distance to the robot. This reflects that the probability that the sensor detects an obstacle decreases with the distance to the obstacle. An inverse sensor model can be built by hand or learned.
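As an illustration, a hand-built inverse sensor model for a single generic range-and-bearing measurement could be sketched as follows. This is not the thesis implementation (the setup here uses a camera, described later); all names and parameters are illustrative. Cells in front of the measured range are marked likely free, the cell at the range likely occupied, and cells beyond are left at the prior, since the beam says nothing about what lies behind an obstacle:

#include <cmath>
#include <vector>

using Grid = std::vector<std::vector<double>>;

// Inverse sensor model for one range measurement along one bearing.
Grid inverse_sensor_model(int cols, int rows, double cell_size,
                          double rx, double ry,   // robot position (m)
                          double bearing,         // beam angle (rad)
                          double range,           // measured range (m)
                          double prior, double free_prob, double occ_prob)
{
    Grid g(rows, std::vector<double>(cols, prior));
    // Step along the beam half a cell at a time.
    for (double d = 0.0; d <= range; d += cell_size / 2.0) {
        int x = int((rx + d * std::cos(bearing)) / cell_size);
        int y = int((ry + d * std::sin(bearing)) / cell_size);
        if (x < 0 || x >= cols || y < 0 || y >= rows) break;
        // The last cell on the beam is where the obstacle was detected.
        g[y][x] = (range - d < cell_size / 2.0) ? occ_prob : free_prob;
    }
    return g;
}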

Using a Map

When a robot has constructed a map it should be able to use it for navigation. In the case of an occupancy grid map this is fortunately relatively straightforward. An occupancy grid map g can be converted into an easy-to-navigate map in two steps. First g is thresholded so that every cell of g has either 0.0 or 1.0 as its value. Then all occupied cells, that is cells with a value of 1.0, are expanded by the radius of the robot. The free cells of g can now be considered a graph embedded in the plane, where each vertex represents a position the robot can occupy without risk of hitting an obstacle and edges connect adjacent free cells. Many different algorithms can then be used to find a path between two vertices; the most popular path-finding algorithm is probably A*.
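A minimal sketch of these two steps, assuming the occupancy probabilities are held in a Grid as above and the robot radius is given in cells (names illustrative):

#include <vector>

using Grid = std::vector<std::vector<double>>;

// Step 1: threshold the occupancy probabilities to hard 0.0 / 1.0 values.
// Step 2: dilate every occupied cell by the robot's radius, so that the
// remaining free cells are positions the robot's center can safely occupy.
Grid make_navigable(const Grid& g, double threshold, int radius_cells)
{
    int rows = g.size(), cols = g[0].size();
    Grid nav(rows, std::vector<double>(cols, 0.0));
    for (int y = 0; y < rows; ++y)
        for (int x = 0; x < cols; ++x)
            if (g[y][x] >= threshold)                    // occupied cell:
                for (int dy = -radius_cells; dy <= radius_cells; ++dy)
                    for (int dx = -radius_cells; dx <= radius_cells; ++dx) {
                        int nx = x + dx, ny = y + dy;
                        if (nx >= 0 && nx < cols && ny >= 0 && ny < rows &&
                            dx * dx + dy * dy <= radius_cells * radius_cells)
                            nav[ny][nx] = 1.0;           // mark as blocked
                    }
    return nav;
}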

Things are more difficult in the case of an unfinished occupancy grid map, where the values of the cells lie in the range (0.0, 1.0). If the goal of the robot lies inside an area that has not been visited, there is no way to construct a guaranteed-safe path. But by means of a value iteration algorithm, also known as a wavefront algorithm, one can construct the probably safest path.

The Wavefront Algorithm

Let g be the unfinished occupancy grid and let g_g be the goal cell of the robot. The probability that the robot can safely pass through one cell g_x,y is the probability that g_x,y is empty. Thus the probability that the robot can safely travel a path p is the probability

P(p is safe) = ∏ P(g_x,y is empty)

where the product is taken over { g_x,y | g_x,y ∈ p }, the set of all cells that p traverses. The wavefront algorithm constructs a grid w where each cell w_x,y holds the probability that g_g can be reached safely from w_x,y. To find a path from any cell to g_g, repeatedly move to the neighboring cell with the highest probability of reaching g_g until g_g is reached.

In pseudo-code a procedure to construct w is:

w_g ← 1.0, all other cells of w ← 0.0

repeat until w converges:
  for each cell w_x,y:
    w_max ← the highest value among the cells adjacent to w_x,y
    if w_x,y < w_max · P(g_x,y is empty) − c then
      w_x,y ← w_max · P(g_x,y is empty) − c

Here P(g_x,y is empty) is simply 1 − g_x,y, and c is a constant that denotes the cost of moving. If c is set to 0.0 the wavefront algorithm finds the safest path; the more c is increased, the more the algorithm favors short paths over safe paths.
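A runnable sketch of this value iteration, assuming a 4-connected neighborhood and the Grid representation from earlier (names illustrative):

#include <vector>

using Grid = std::vector<std::vector<double>>;  // occupancy probabilities

// Wavefront / value iteration: w[y][x] estimates the probability of safely
// reaching the goal cell from (x, y). Iterate until no cell improves.
Grid wavefront(const Grid& g, int goal_x, int goal_y, double c)
{
    int rows = g.size(), cols = g[0].size();
    Grid w(rows, std::vector<double>(cols, 0.0));
    w[goal_y][goal_x] = 1.0;

    const int dx[] = {1, -1, 0, 0}, dy[] = {0, 0, 1, -1};
    bool changed = true;
    while (changed) {
        changed = false;
        for (int y = 0; y < rows; ++y)
            for (int x = 0; x < cols; ++x) {
                if (x == goal_x && y == goal_y) continue;
                double w_max = 0.0;  // best value among adjacent cells
                for (int k = 0; k < 4; ++k) {
                    int nx = x + dx[k], ny = y + dy[k];
                    if (nx >= 0 && nx < cols && ny >= 0 && ny < rows &&
                        w[ny][nx] > w_max)
                        w_max = w[ny][nx];
                }
                // Passing through (x, y) succeeds only if the cell is
                // empty; c is the cost of making a move.
                double v = w_max * (1.0 - g[y][x]) - c;
                if (v > w[y][x]) { w[y][x] = v; changed = true; }
            }
    }
    return w;
}

To extract a path from any cell, repeatedly step to the neighbor with the highest w value until the goal is reached, as described above.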

Implementation

This section describes how the occupancy grid algorithm was implemented on a robot setup at Lund University Cognitive Science. The setup is currently used in ongoing research on robot attention, and one purpose of the implementation was that it should be usable in this context. In order to understand the design choices made, a description of the robot setup will first be given; then the implementation will be described.

The Robot Setup

The robot used is the e-puck, a small, muffin-sized robot developed by École Polytechnique Fédérale de Lausanne (www.e-puck.org). It is a differential wheeled robot boasting eight infrared proximity sensors, a camera, an accelerometer and Bluetooth connectivity.

[Figure: the e-puck robot]

The e-puck also has very precise stepper motors controlling its wheels. One problem is that, no matter how precise the e-puck's odometry is, it cannot alone be used to determine the robot's pose. Another problem is the e-puck's proximity sensors: they have very limited range, roughly 10 cm, and are sensitive to light conditions.

In order to remedy these problems a video camera has been placed in the ceiling of the room where the robot experiments take place. The robots' movements are restricted to a 2x2 m "sandbox", and objects in this area have been color coded: the robots wear bright red plastic cups, the floor (the free space) is dark gray, and obstacles are white. Images from the camera are processed to extract the poses of the robots and an image where only the obstacles are visible. Given this image and a robot's pose, a circle sector is cut out of the image, its center being the robot's position and its direction the robot's heading. By using this as the robot's sensor reading, the robot can be treated as if it had a high-resolution proximity sensor. The robots are controlled over a Bluetooth link.

The Implementation

Ikaros The whole system is implemented using Ikaros, a multi-purpose framework developed at LUCS. Ikaros is written in C++ and is intended for, among other things, brain modeling and robot control. The central concept in Ikaros is the module; a system built in Ikaros is a collection of connected modules. An Ikaros module is, simply put, a collection of inputs and an algorithm that works on them, with the results ending up in a number of outputs. A module's inputs and outputs are declared in an Ikaros control file using an XML-based language, while the algorithm is implemented in C++.

A module's outputs can be connected to other modules' inputs, and to build a working system in Ikaros you specify these connections in a control file. In the control file you can also pass arguments to the modules. Data transmitted between modules can only be in one format: arrays and matrices of floats. An Ikaros system works in discrete time steps, so-called "ticks"; each tick, every module reads its inputs and produces its outputs.

Ikaros comes with a number of modules, both simple utility modules and more advanced ones, such as several image feature extraction modules. Ikaros also includes a web interface that can display outputs in different ways. For a detailed introduction to Ikaros, see Balkenius et al.

Overview of the System The core of the map-drawing system consists of five modules: Camera, Tracker, CameraSensor, SensorModel and OccupancyGridMap. Further modules could be added to the system, e.g. a path-planning module and a robot-controller module.

[Figure: overview of the system's modules]

Camera and Tracker The Camera and Tracker modules were already available and will only be described briefly.

The Camera module is basically a network camera interface, used to fetch images from the camera mounted in the ceiling. It outputs three matrices, RED, GREEN and BLUE, each the size of the image, containing the corresponding color intensities of the image.

These matrices are fed into the Tracker module, which extracts the poses of the robots and the positions of obstacles in the image. It outputs an array POSITION with the positions of the robots, an array HEADING with their headings, and a matrix OBSTACLES with the obstacles extracted from the picture.

CameraSensor The CameraSensor module simulates a high-resolution proximity sensor. It requires a matrix in the form of Tracker's OBSTACLES matrix and an array with the pose of a robot as inputs. More specifically, we want to simulate a top-mounted stereo camera, because in the robot attention research at LUCS it is useful if the robots have a "head" they can turn to direct attention. The CameraSensor module takes arguments specifying the range of the camera and the breadth of the view. Given the pose of the robot, a square is cut out of the matrix, rotated, and projected onto another matrix representing the SENSOR READING of the robot. At this point the SENSOR READING shows everything in the cut-out square, even obstacles behind walls; some simple ray casting solves this. Rays are shot from the center of the robot to the edge lying on the opposite side of the SENSOR READING matrix, so that the cells touched by the rays form a circle sector. If a ray hits an obstacle the ray stops, and all cells not touched by any ray obtain the value 0.5, indicating that they are not part of the sensor reading. CameraSensor then outputs SENSOR READING.
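A sketch of this ray-casting step, under the assumption that SENSOR READING is a square matrix with the robot at its center, obstacles marked 1.0 and free space 0.0 (all names and details are illustrative, not the thesis implementation):

#include <cmath>
#include <vector>

using Grid = std::vector<std::vector<double>>;

// Hide everything the simulated camera cannot see: shoot rays over the
// field of view from the robot at the center of the reading; cells not
// reached by any ray are set to 0.5 ("outside the sensor reading").
Grid raycast(const Grid& reading, double heading, double fov_rad)
{
    int n = reading.size();                  // assume a square matrix
    double cx = n / 2.0, cy = n / 2.0;       // robot at the center
    Grid out(n, std::vector<double>(n, 0.5));

    int n_rays = 4 * n;                      // dense enough to touch all cells
    for (int r = 0; r < n_rays; ++r) {
        double a = heading - fov_rad / 2.0 + fov_rad * r / (n_rays - 1);
        for (double d = 0.0; d < n / 2.0; d += 0.5) {   // step half a cell
            int x = int(cx + d * std::cos(a));
            int y = int(cy + d * std::sin(a));
            if (x < 0 || x >= n || y < 0 || y >= n) break;
            out[y][x] = reading[y][x];       // cell is visible
            if (reading[y][x] >= 1.0) break; // ray stops at an obstacle
        }
    }
    return out;
}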

CameraSensorModel The CameraSensorModel is an inverse sensor model tailored to the output of CameraSensor. It has two outputs, both required by OccupancyGridMap: AFFECTED GRID REGION and OCC PROB GRID. OCC PROB GRID is a matrix, the same size as the final occupancy grid, that contains the probabilities P(m | s_t, p_t). AFFECTED GRID REGION is an array of length four defining a box bounding the area of the occupancy grid affected by the OCC PROB GRID. The rationale is that OccupancyGridMap should not have to update the whole occupancy grid when only a small area of it is affected by the current SENSOR READING.

The SENSOR READING from CameraSensor is already in the format of an occupancy grid, so transforming it into the OCC PROB GRID format that the OccupancyGridMap module requires is fairly straightforward. First OCC PROB GRID is initialized with P(m), the prior probability, given as an argument to CameraSensorModel. Then the SENSOR READING is rotated and translated, according to the robot's pose, so that it covers the corresponding area of the OCC PROB GRID, and is imprinted onto it. The values of SENSOR READING (1.0, 0.5 and 0.0) should not be used directly, as they do not correspond to the right probabilities. Instead 0.5 is substituted by the prior probability, while 1.0 and 0.0 are substituted by two values, occ_prob and free_prob, given as arguments to CameraSensorModel. The values of free_prob and occ_prob should reflect the probability that the information in SENSOR READING is correct. As the Camera and Tracker modules are quite exact, good values seem to be free_prob = 0.05 and occ_prob = 0.95. The performance of the occupancy grid algorithm depends heavily on these values, and they have to be adjusted according to the reliability of SENSOR READING.
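The substitution itself amounts to a per-cell mapping; a minimal sketch (illustrative names, not the thesis code):

// Map the three SENSOR READING values onto calibrated probabilities:
// 1.0 -> occ_prob, 0.0 -> free_prob, 0.5 -> the prior ("no information").
double reading_to_prob(double v, double prior,
                       double free_prob, double occ_prob)
{
    if (v >= 0.75) return occ_prob;   // obstacle seen here
    if (v <= 0.25) return free_prob;  // seen to be free
    return prior;                     // outside the sensor reading
}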

OccupancyGridMap The OccupancyGridMap module takes two inputs, in the formats of OCC PROB GRID and AFFECTED GRID REGION. OccupancyGridMap also holds the state of the occupancy grid constructed so far, MAP GRID, and the prior probability, prior_prob, given as an argument. MAP GRID is initialized by giving each cell the value of prior_prob.

The purpose of OccupancyGridMap is to update MAP GRID using the update equation. Here follows the update equation taken directly from the code:

// Bayesian update, in odds-ratio form, of each cell inside the
// bounding box given by AFFECTED GRID REGION.
for(int i = affected_grid_region[2]; i <= affected_grid_region[3]; i++)
{
  for(int j = affected_grid_region[0]; j <= affected_grid_region[1]; j++)
  {
    // P(m | s_t, p_t) for this cell, from the inverse sensor model.
    float occ_prob = occ_prob_grid[i][j];
    // Combine the new evidence with the prior and the old cell value.
    map_grid[i][j] = 1.0 / (
      1.0 + (1.0 - occ_prob) / occ_prob *
      prior_prob / (1.0 - prior_prob) *
      (1.0 - map_grid[i][j]) / map_grid[i][j]);
  }
}

An example of how a MAP GRID can look is given in the figure below.

[Figure: an example MAP GRID]

Experiment Setup

The implementation of the occupancy grid algorithm works very well on the robot setup at LUCS. This is no big surprise, as the conditions are ideal: there is practically no sensor noise and no pose uncertainty. In order to investigate how the implementation would handle different conditions, a number of experiments were conducted in which noise was added to the sensor readings. How the implementation reacts to noise depends heavily on the two parameters of CameraSensorModel, free_prob and occ_prob. Thus for each experiment, except experiment 2, three different pairs of free_prob and occ_prob were used to illustrate this. The following values were used (using the notation [free_prob, occ_prob]): [0.01, 0.99], [0.2, 0.8] and [0.45, 0.55]. These values will be referred to as the sensor weights, as they reflect to what degree the occupancy grid map algorithm is persuaded by new sensor readings. All experiments used prior_prob = 0.5. The parameters free_prob and occ_prob might seem specific to the CameraSensorModel, but any sensor model will have parameters that govern to what degree the sensor readings should be trusted.

The experiments were set up in the following way: a robot was placed in the middle of the 2x2 m "sandbox" and a number of obstacles were placed around it.

[Figure: MAP GRID of the experiment setup]

The "camera" of CameraSensor was given a range of sqrt(m) and a breadth of 32 degrees. The robot does not move but each tick the heading of the robot is randomized, in this way the robot will eventually have "seen" the whole "sandbox" visible from the center. Four different experiments were then conducted:

  1. The ideal case: no noise was added. This provides a baseline to compare the other experiments against.
  2. Gaussian white noise was added to the OCC PROB GRID of the CameraSensorModel.
  3. Salt-and-pepper noise was added to 40% of the part of the OCC PROB GRID that represents the current sensor reading.
  4. Gaussian white noise was added to the robot's position given as input to the CameraSensorModel. The noise had a variance of 0.001.

The ideal map was constructed by running experiment 1 with free_prob = 0.45 and occ_prob = 0.55 for 2000 steps. The probabilities of this map were then rounded to the closest of the values 1.0, prior_prob and 0.0. Given this ideal map, the maximum possible score is 640.

To show that the map-drawing implementation can be used in practice, an Ikaros system was set up to control an e-puck robot. The goal of the e-puck was to find another e-puck wandering randomly in a maze. The e-puck was not given a path to the other e-puck, only its position. In order to find a path to the other e-puck, the wavefront algorithm described above was used. The e-puck would begin with an empty map, which it built up gradually as it tried different paths to the other e-puck. Eventually the map would be complete enough for the e-puck to find a safe path to the other e-puck.

Results

Here the results of the experiments are presented. Generally the implementation performed well in all four experiments, but it became obvious that the choice of sensor weights is important. Each experiment was run for 1 000 ticks. As all of the experiments contain a randomized component, a single run might not produce a representative result; to avoid this, each experiment was run ten times and the scores were averaged per tick. When interpreting the results one should know that a score above 500 corresponds to a reasonably good map. Rather than looking for the sensor weights that eventually give the best score, one should look for the sensor weights that converge quickly to a reasonable score; most often a robot has more use for a good enough map now than for a perfect map in five minutes. Because of this, the charts only display up to tick 500, even though the maps continue to converge after that.

Experiment 1 The algorithm performs well for both [0.01, 0.99] and [0.2, 0.8]. Even though [0.45, 0.55] eventually surpasses them both, it converges too slowly to be practically useful.

Experiment 2 The outcome of this experiment shows the strength of the probabilistic approach to robotic mapping. The algorithm handles the noisy sensor readings well, and the map converges nearly as fast as [0.2, 0.8] did in experiment 1.

Experiment 3 While [0.2, 0.8] converges nicely despite the salt-and-pepper noise, [0.45, 0.55] converges steadily but too slowly. [0.01, 0.99] is the most sensitive to noise; it converges slowly and never produces a reliable map.

Experiment 4 In this last experiment the score measure is a bit misleading, as all three choices of sensor weights actually produce acceptable maps. What happens in the case of [0.01, 0.99] is that the edges of the obstacles get slightly displaced, which the score measure penalizes. Even though [0.01, 0.99] scores the same in experiment 3 and experiment 4, the map from experiment 3 is practically unusable, while the map from experiment 4 is OK.

Discussion

This report has described an implementation of the occupancy grid map algorithm. The algorithm was implemented to be used with the e-puck robot, using the Ikaros framework. A derivation of the update equation, the basis of the algorithm, was given, as well as a measure for comparing maps. The implementation worked well; this was no surprise, as the sensors and the pose tracking system produce very exact information. To investigate how noise affects the performance of the algorithm, a number of experiments were conducted: Gaussian white noise was applied to the sensors and to the pose tracking system, and so-called salt-and-pepper noise was applied to the sensors only. To show that the implementation is usable in practice, a system was constructed in which an e-puck draws an occupancy grid map and then uses this map to find a path to another e-puck wandering randomly.

Experiment 1 shows that the algorithm works well given ideal preconditions. This is no surprise, but it is important to note how the tuning of the sensor weights impacts performance: when the sensor weights are set so that the algorithm puts little trust in the sensors, the map converges steadily but unnecessarily slowly.

Experiments 2 and 3 show the strength of the algorithm: its capability to handle independent noise. The sensor readings in both experiments are very noisy, indeed it is often hard for the human eye to separate true obstacles from noise. The algorithm manages this well, given that the sensor weights are set so that the algorithm does not put too much trust in the sensors.

Experiment 4 shows that the algorithm can produce an acceptable map when the position is noisy. The tuning of the sensor weights does not have as large an impact as could be expected. This is due to the fact that the score measure does not reward correctly identified obstacles that are off by a small distance. One problem with positional noise is that it leads to sensor noise that is not statistically independent; if the positional noise is too large, the algorithm will not be able to handle it no matter how the sensor weights are tuned.

The implementation of the e-puck control system worked well in simulation: the two robots steadily moved towards each other, drawing the map and avoiding obstacles as they went along. When trying this with the real robots there were some problems. The Tracker module sometimes confused one robot with the other, and there were some problems communicating with two robots over one Bluetooth connection. Nevertheless, the occupancy grid map algorithm, in combination with the wavefront path planner, always produced a correct path, even if the robot had trouble following it.

The big limitation of the occupancy grid map algorithm is that it does not handle unknown poses. There are ways around this, and it would be interesting to study them further. The algorithm could also be tuned in different ways. For example, when the robot is not moving, non-independent noise may get imprinted in the map; this might be remedied by a scheme where sensor readings are blocked while the robot is immobile. It could also be interesting to experiment with variable sensor weights: for example, if the robot has spent a lot of time in an area, the sensor weights might be adjusted so that the robot puts more trust in its internal map and less trust in its sensors.

In spite of its limitations the occupancy grid map algorithm is, as this thesis has shown, a robust and versatile algorithm. When in need of a robotic mapping algorithm, one should have good reasons not to consider using it.

I would like to thank Christian Balkenius for his help and support. I would also like to thank my supervisor at the Department of Computer Science, Jacek Malec, for accepting to supervise this project on such short notice. And thank you Birger Johansson, for showing me the ropes and keeping my Coca-Cola Zeros cool.
