In robotics, the path-planning task consists of finding a sequence of actions that moves an agent from an initial state (position and orientation) to a final state (position and orientation). In path planning, each transition between states represents an action the agent can take, each associated with a cost. A path is said to be optimal if the sum of its transition costs is minimal across all possible paths from an initial state \(q_\mathrm{init}\) to a goal (final) state \(q_\mathrm{goal}\). A planning algorithm is said to be complete if it always finds a path in a finite amount of time when such a path exists, and optimal if it always finds an optimal path. The proposed modifications can be applied to any of these algorithms (A*, D* and its evolutions, such as D* Lite and E*, ARA* and AD*) to achieve a faster solution. This claim is based on the fact that the differences between these algorithms lie in the optimization process, always aiming at a shorter processing time and lower use of resources, such as computational memory. Therefore, in the following subtopics, an overview of grid-based algorithms will be presented.
Furthermore, the cell decomposition algorithms such as D* (and its evolutions, such as D* Lite and E*), ARA* and AD* are based on A* and were developed to reduce computational cost, processing time, or memory expenditure. The modifications proposed here act on the configuration space and not on the algorithm core itself. Therefore, with respect to the configuration space, all the previous algorithms from the A* family should give an equal or similar solution to the A* algorithm. When applying our modification to any algorithm from the A* family, the final solution would improve, as will be demonstrated with A* in this paper.
Finally, our approach bases the modifications on the cell decomposition method: the modifications are made not to the A* algorithm itself but to the configuration space, over which an A* algorithm is then run to find the best path. The advantage is that, in cell decomposition, the configuration space has no local minima, unlike potential functions, while in the VFH and other similar approaches local minima can become a problem when avoiding narrow areas. The only exception is when another robot trying to block the robot's path is faster than the robot; however, this case would create local minima in any approach.
Grid-based algorithms
A* algorithm
A* is a traditional graph search algorithm that was developed to calculate least-cost paths on a weighted graph. This algorithm uses a heuristic function,
$$\begin{aligned} \textit{F}(n)= \textit{g}(n)+ \textit{h}(n) \end{aligned}$$
(1)
which estimates the lowest cost of going from the initial state to the goal state, while going through node \(n\). This sets the order in which nodes are sought in order to find the best path as soon as possible. This function is the sum of two other functions:

1.
\(g(n)\)\(=\) Cost from the initial node to node \(n\);

2.
\(h(n)\)\(=\) A heuristic function to estimate the cost of the path from node \(n\) to the target node.
There are two lists in this algorithm: the O-list and the C-list. The open list, known as the O-list, contains the nodes that are candidates for exploration. The closed list, known as the C-list, contains the nodes that have already been explored. The nodes on the C-list were previously on the O-list, but as they are explored they are moved to the C-list. The nodes on these lists store the parent node, which is the node used to optimally reach them, i.e., the node that lies on the shortest path from the initial node to the current node. If the heuristic function is admissible, then the path cost of \(q_\mathrm{goal}\) is guaranteed to be optimal.
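As a concrete illustration, the bookkeeping described above can be sketched in a few lines of Python. The 4-connected grid, unit move cost, and Manhattan-distance heuristic are illustrative assumptions, not the paper's implementation:

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 4-connected grid.
    grid[y][x] == 1 marks an obstacle cell; each move costs 1."""
    def h(n):  # admissible heuristic: Manhattan distance to the goal
        return abs(n[0] - goal[0]) + abs(n[1] - goal[1])

    open_list = [(h(start), 0, start)]          # O-list entries: (F, g, node)
    parent, g_cost = {start: None}, {start: 0}  # parent links give the path
    closed = set()                              # C-list: already expanded

    while open_list:
        _, g, node = heapq.heappop(open_list)
        if node == goal:                        # reconstruct via parent links
            path = []
            while node is not None:
                path.append(node)
                node = parent[node]
            return path[::-1]
        if node in closed:
            continue
        closed.add(node)
        x, y = node
        for nx, ny in ((x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)):
            if 0 <= ny < len(grid) and 0 <= nx < len(grid[0]) and not grid[ny][nx]:
                ng = g + 1
                if ng < g_cost.get((nx, ny), float('inf')):
                    g_cost[(nx, ny)] = ng
                    parent[(nx, ny)] = node
                    heapq.heappush(open_list, (ng + h((nx, ny)), ng, (nx, ny)))
    return None  # no path exists
```

Since the Manhattan heuristic never overestimates the true cost on a 4-connected grid, the returned path is optimal, matching the admissibility guarantee stated above.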
To use the A* algorithm in the calculation of the robot's path, it is necessary to divide the environment map into cells, as stated in the approximate cell decomposition method. To broaden the set of applications of this algorithm, this division could be achieved with a GPS, omnidirectional cameras, global cameras (for outdoor applications where the grid moves with the robot) or previous measurements of the environment, allowing the robotic system to apply this algorithm to almost any kind of situation. Here, each cell represents a node. Each node can be connected to other nodes, and moving from one node to another has an associated cost (Fig. 2). In this case, the cost is the metric distance between the cell centers. The A* algorithm can calculate the path that minimizes the cost of moving from the initial cell to the target cell. In Fig. 3, the black cells represent the obstacles, the yellow cell represents the initial position (node) and the blue cell represents the destination point (node).
Finally, the robot is represented by the initial node and occupies only a single node, located at its geometric center. The destination node is the goal state \(q_\mathrm{goal}\). All other moving objects are considered to be obstacles. As the robot is represented by a single cell, the obstacles have to be enlarged so as to represent both the obstacle and the robot's body. These obstacles are represented by circles whose radius equals the sum of the obstacle's radius and the robot's radius. This representation is depicted in Fig. 4.
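This inflation of the obstacles by the robot's radius can be sketched as follows; the tuple-based obstacle representation is an illustrative assumption:

```python
def inflate_obstacles(obstacles, robot_radius):
    """Grow each circular obstacle by the robot's radius so the robot
    itself can be planned as a single point (cell).
    obstacles: list of (x, y, radius) tuples."""
    return [(x, y, r + robot_radius) for x, y, r in obstacles]

def cell_blocked(cx, cy, inflated):
    """A cell center is blocked if it falls inside any inflated circle."""
    return any((cx - x) ** 2 + (cy - y) ** 2 <= r ** 2 for x, y, r in inflated)
```

After inflation, collision checking reduces to testing whether a cell center lies inside any of the enlarged circles.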
In robotics, it is often important for the agent to keep planning new paths when new information on the environment is received by the sensors. The A* algorithm continuously plans the path from scratch after new information is received. However, it is very computationally expensive to keep planning a path from scratch every time the graph changes. Instead, it may be far more efficient to take the previous solution and repair it.
D* algorithm
The Focused Dynamic A* (also called D*) and D* Lite have been used for path planning in a large number of robotic systems, including indoor and outdoor platforms. D* and D* Lite are extensions of A*; nevertheless, D* Lite is much simpler and slightly more efficient than D* in some navigation tasks. D* Lite initially proceeds exactly as A*, creating an optimal solution path from the initial state to the goal state. The difference is that, when replanning is necessary, the previously planned path is used instead of planning a path from scratch. This saves computational time and can be up to two orders of magnitude more efficient than planning a path from scratch using A* [11].
Generally, D* is very effective for replanning in the context of mobile-robot navigation. In such scenarios, the changes to the graph occur close to the robot, which means that their effects are usually limited. However, if the areas of the graph being changed are not close to the position of the robot, D* can be less efficient than A*. This is due to the fact that D* processes every state in the environment twice. The worst-case scenario is when changes are made to the graph in the vicinity of the goal, which happens frequently in a highly complex environment. If the planning problem has changed sufficiently since the generation of the previous result (a common characteristic of a highly dynamic environment, as in this study case), this result may be a burden rather than a useful starting point. In this case, which is common in real experiments containing uncertainties, A* is much more efficient than D* [11]. Finally, there are some variations of the D* algorithm, such as E*, which makes the path smoother but still suffers from the drawbacks of D* when highly dynamic and complex environments [6] are considered.
ARA* algorithm
In some cases, the reaction of the agent must be quick, and therefore the replanning problem is complex, even in static environments. In such cases, computing optimal paths as described above can be infeasible due to the sheer number of states that need to be processed in order to obtain such paths. Anytime algorithms construct an initial, highly suboptimal solution very quickly and then improve the quality of the solution while time permits. One of the most common is the Anytime Repairing A* (ARA*), which limits the processing performed during each search by considering only those states whose costs from the previous search may not be valid given a new \(k\) value (the current heuristic parameter of optimality). This improves the efficiency of each search in two ways: each state is expanded at most once until a solution is reached, and only the states from the previous search that were inconsistent are reconsidered [11].
However, because ARA* is an anytime algorithm, it is only applicable in static planning domains. If too many changes are made to the planning graph (the main characteristic of a highly dynamic environment with moving uncertainties), ARA* is unable to reuse its previous search results and must therefore plan the path from scratch again, which makes A* far more applicable. As a result, ARA* is not appropriate for dynamic planning problems [11]. Therefore, another class of algorithms was created to fix this problem, the Anytime Dynamic A* (also called AD*).
AD* algorithms
Algorithms that replan the path iteratively (A* and D*) have concentrated on finding a single, usually optimal, solution, and anytime algorithms (ARA*) have concentrated on static environments. However, some of the most interesting real-world problems are those that are both highly dynamic (requiring replanning) and highly complex (requiring anytime approaches). The authors in [22] developed the Anytime Dynamic A* (AD*), an algorithm that combines the continuous planning capability of D* Lite with the anytime performance of ARA*. Unfortunately, as the authors put it in [11], AD* suffers from the drawbacks of both anytime and replanning algorithms. As with replanning algorithms, AD* can be much more computationally expensive than planning from scratch. The larger the change in the environment, the more time-consuming replanning with AD* becomes. This becomes a problem in an environment with many movable uncertainties (moving obstacles). In such cases, A* will also be less time-consuming than AD*.
Note here that the following experiments and simulations are highly complex (an issue for replanning algorithms), highly dynamic (an issue for anytime repairing algorithms), and full of moving uncertainties, sometimes faster than the robot itself, which makes AD* computationally expensive. Note also that all these newer algorithms only give specific solutions, always with drawbacks, when all problems are considered at the same time, something that is often seen in real-world situations such as daily airport patrols. To solve these problems, a set of novel modifications applicable to the A* family of algorithms is proposed.
The modifications
As mentioned before, it is known that most environments are highly dynamic, highly complex and contain obstacles moving randomly. The situation studied is common in the real world: considering the dynamic constraints of the robot, the task is to find the fastest solution between the initial state \(q_\mathrm{init}\) and the goal state \(q_\mathrm{goal}\), avoiding as many collisions as possible. Therefore, one concept that must be highlighted is that, in most cases, the shortest (optimal) path is not the best solution and can even lead to undesired collisions. In other words, the best solution is not the shortest path (the optimal one), but the fastest path (usually a suboptimal one). That is because the velocity of the robot is not constant (the robot has limited acceleration) and the robot controller has difficulty following trajectories with abrupt changes in direction.
The first thing to take into consideration when analyzing the proposed modifications is that the previous approaches should be set aside and a different angle of analysis pursued. The first point of analysis is that the built cell map contains the locations of the obstacles in the workspace at fixed positions, known at the instant the information is captured. This representation ignores the velocities of the obstacles. In dynamic environments this can be a big mistake, for it does not allow the robot to avoid obstacles sooner than expected, thus leading to unwanted collisions.
One way to avoid this situation in a cell-based map is to calculate the possible point of collision given the current velocity of the robot and the current velocities of the moving obstacles. In each trajectory calculation, the modified algorithm assumes that the obstacles keep the constant velocities measured at the time t of data acquisition, and that the robot has a maximum speed and a maximum acceleration. Using this information, the position used for new obstacles in the path planning calculations is no longer their current position, but the possible collision point, as seen in Fig. 5.
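The collision-point prediction can be sketched as a simple time-stepped search: assuming the obstacle keeps its measured velocity and the robot can travel at its maximum speed, the earliest reachable meeting point is taken as the obstacle's planning position. The time horizon and step size are illustrative assumptions:

```python
import math

def collision_point(robot_pos, v_max, obs_pos, obs_vel, horizon=5.0, dt=0.05):
    """Estimate the earliest point where a robot moving at its maximum
    speed can meet an obstacle that keeps its current velocity.
    Returns the predicted obstacle position at that time, or None if the
    obstacle cannot be reached within the horizon."""
    for step in range(int(horizon / dt) + 1):
        t = step * dt
        ox = obs_pos[0] + obs_vel[0] * t   # obstacle extrapolated to time t
        oy = obs_pos[1] + obs_vel[1] * t
        reach = v_max * t                  # farthest the robot can travel
        if math.hypot(ox - robot_pos[0], oy - robot_pos[1]) <= reach:
            return (ox, oy)
    return None
```

When the obstacle recedes faster than the robot's maximum speed, no meeting point exists within the horizon and the obstacle can be treated at its measured position instead.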
In this case, while the trajectory must be fully planned, only the first steps are taken into account before new information arrives and a new calculation is performed. There are some proposed techniques presented in this paper that can be applied to any A* family algorithm and that are useful when trying to approximate the inherent environment dynamics in a static map. They are called:

1.
Obstacle Distance

2.
Obstacle Slack

3.
Obstacle Direction

4.
Processing Time

5.
Target Orientation
Finally, it is important to remember that A* family algorithms either find a solution or determine that none exists. Although A* and weighted A* (A* with k) can give a suboptimal solution, there is no mathematical guarantee that there will be no local minima due to the high nonlinearity of the system. Our guarantee is empirical, given by the hundreds of hours of use of this modified algorithm in the Small Size Robot Soccer League of RoboCup since 2005 with the 5DPO team from the University of Porto.
Obstacle distance
This change causes an obstacle to lose relative importance as its distance from the robot increases and the possible collision point moves further away from the robot. This can be seen in Figs. 6 and 7. A distant obstacle rarely affects the immediate trajectory points. This can speed up the calculation, because fewer obstacles lead to fewer visited cells and less time to find a solution.
Note that in Fig. 7:

1.
min\(=\) Starting distance for decreasing the obstacle’s importance

2.
max\(=\) Distance for total loss of obstacle’s importance

3.
radius\(=\) The obstacle's effective radius; as the obstacle moves away from the robot, its importance decreases, and this is expressed by shrinking the obstacle's radius.
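The min/max/radius relation above can be sketched as a distance-dependent attenuation of the obstacle's radius. A linear fade between min and max is assumed here for illustration; the paper's Fig. 7 defines the actual curve:

```python
def effective_radius(radius, distance, d_min, d_max):
    """Shrink an obstacle's effective radius as the distance to its
    possible collision point grows.  Linear fade assumed between d_min
    (full importance) and d_max (total loss of importance)."""
    if distance <= d_min:
        return radius          # full importance: original radius kept
    if distance >= d_max:
        return 0.0             # obstacle ignored entirely
    fade = (d_max - distance) / (d_max - d_min)
    return radius * fade
```

An obstacle whose effective radius drops to zero simply stops marking cells as blocked, which is what reduces the number of visited cells.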
The improvement made by this change is visible in the comparison shown in Fig. 8 for an environment with a static obstacle. The image on the top shows the trajectory without the modification, while the image below includes the modification.
Obstacle representation (slack)
This modification changes the way an obstacle is represented in the cells. A security area is created around the obstacle by setting the cost of those cells above that of the free cells, while still allowing the robot to choose a path through them if the algorithm finds it optimal (Fig. 9).
This does not make the obstacle bigger (nor does it expand) but creates a security zone that should be avoided if doing so does not impact the optimal solution. There are cases where an optimal solution can be found through that zone instead of choosing a longer path. The equation for calculating this cost is given below.
$$\begin{aligned} \textit{C}(n_1,n_c)= \textit{C}(n_1,n_2)\cdot \textit{Cs} \end{aligned}$$
(2)
where

1.
C\((n_1,n_c) =\) Cost for going from node 1 to node c;

2.
C\((n_1,n_2) =\) Cost for going from node 1 to node 2;

3.
Cs = Cost inside the slack zone.
The Cs can be set by the graph in Fig. 10.
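Eq. 2 amounts to a simple cost multiplier applied inside the slack zone. A minimal sketch follows; the default value Cs = 1.5 is an illustrative assumption, since the paper derives Cs from the graph in Fig. 10:

```python
def transition_cost(base_cost, in_slack_zone, cs=1.5):
    """Transition cost with the slack modification (Eq. 2): inside the
    security zone the base cost C(n1, n2) is multiplied by Cs > 1, so
    slack cells stay traversable but are only chosen when the detour
    around them would cost more."""
    return base_cost * cs if in_slack_zone else base_cost
```

Because Cs is a multiplier rather than an infinite cost, the planner can still cut through the slack zone when the alternative path is sufficiently longer.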
The improvement caused by this change is visible in the comparison shown in Fig. 11 for an environment with a static obstacle. The image on the left shows the trajectory without the modification, while the image on the right includes the modification.
Obstacle direction
A moving obstacle can obstruct the robot for a longer period of time if the avoidance path ends up moving the robot parallel to the obstacle's movement. This change creates a certain dynamic awareness in an otherwise static map. It creates an additional zone in which the cost to travel is increased. This zone is created along the projected direction of the moving obstacle, and its size depends on the magnitude of the obstacle's speed (Figs. 12, 13).
where:

1.
\(\mathbf a =\) Magnitude of the direction zone;

2.
\(\mathbf Ce =\) Cost inside the direction zone.
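Membership in the direction zone can be sketched by projecting each cell onto the obstacle's velocity direction; the zone length grows with speed. The width and gain parameters are illustrative assumptions, not values from the paper:

```python
import math

def in_direction_zone(cell, obs_pos, obs_vel, width=1.0, gain=1.0):
    """Check whether a cell falls inside the elevated-cost zone projected
    ahead of a moving obstacle.  The zone length a = gain * |v| grows
    with the obstacle's speed; cells inside it would be charged Ce."""
    speed = math.hypot(*obs_vel)
    if speed == 0.0:
        return False                       # a static obstacle casts no zone
    a = gain * speed                       # magnitude of the direction zone
    dx, dy = cell[0] - obs_pos[0], cell[1] - obs_pos[1]
    # decompose the cell offset along and across the (unit) velocity vector
    along = (dx * obs_vel[0] + dy * obs_vel[1]) / speed
    across = abs(dx * obs_vel[1] - dy * obs_vel[0]) / speed
    return 0.0 <= along <= a and across <= width
```

Cells flagged by this test receive the extra cost Ce, steering the planner away from crossing in front of fast obstacles while leaving the area behind them untouched.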
The improvement caused by this change is visible in the comparison shown in Fig. 14 for an environment with a static obstacle. The image on the top shows the trajectory without the modification, while the image below includes the modification.
Finally, it is important to mention that, despite the fact that in real applications the obstacles usually have a non-constant velocity, our algorithm was optimized for fast execution. In each control loop the algorithm is recalculated, and the unpredictability of the obstacle detection is smoothed out. Usually, the errors in the estimates of an obstacle's position and velocity decrease abruptly as the obstacle approaches the robot, and therefore for the important obstacles (the ones near the robot) the uncertainty is not high.
Processing time
This change introduces a modified heuristic for the A* search algorithm that reduces the computing time and finds the optimal search effort level, considering the computing time and the optimal path costs. This is the only modification that is not new: it is well known as weighted A*, is usually used in anytime algorithms, and was first mentioned by [26]. To make this adjustment, it is necessary to set the correct heuristic parameter k. Using Eq. 3 with \(\mathbf k = 1\), there is a guarantee that the final solution is optimal. Using a higher value for k, the search space is reduced and the solution found can be suboptimal. When performing path planning with the original A* method with different k values, it is noted that as k increases, the region of possible paths decreases. As a result, the computing time can be controlled, possibly paying the price of a suboptimal path whose length is extended. In fact, as k increases, the processing time decreases while the path length increases. Assuming the cost to be a weighted sum of both variables, an optimized k can be found; however, it will depend on the path type and obstacles.
$$\begin{aligned} h(x,y)= \mathbf k \sqrt{(x-x_t)^2+(y-y_t)^2} \end{aligned}$$
(3)
Therefore, it is desirable to find a compromise between the cost in computing time and the quality of the result (set by the heuristic parameter k). As a result of simulations, the average total cost in computing time can be seen in Fig. 15. The minimum cost is obtained for \(\mathbf k = 1.2\), resulting in an acceptable and much faster suboptimal path. Finally, it is important to notice that this modification can be made in any A*-family algorithm if a suitable k value is found when studying its processing time.
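Eq. 3 can be written directly as a weighted Euclidean heuristic. The default k = 1.2 reflects the paper's simulated optimum for the combined computing-time/path-length cost:

```python
import math

def weighted_heuristic(x, y, xt, yt, k=1.2):
    """Weighted A* heuristic (Eq. 3): k = 1 keeps the search optimal;
    k > 1 inflates the estimate toward the target (xt, yt), shrinking
    the explored region at the price of a possibly longer path."""
    return k * math.sqrt((x - xt) ** 2 + (y - yt) ** 2)
```

With k > 1 the heuristic is no longer admissible, which is exactly the trade-off described above: the search focuses more aggressively toward the goal but loses the optimality guarantee.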
Target orientation
This change sets the required orientation of the robot as it approaches the target. Without it, the robot would reach the target destination from an arbitrary direction. There are cases where the approach direction is mandatory; for these, a restriction like the one depicted in Fig. 16 (left) is used.
Sometimes there is a preferred direction, but that restriction is not strict. It can be violated if the gain in the arrival time is significant. To achieve this, a softer version of the extra obstacle is used, as depicted in Fig. 16 (right).
To calculate the approach cost, Fig. 17 can be used, where:

1.
Cd\(=\) Approach direction cost;

2.
amp\(=\) Amplitude of the approach direction;

3.
dc\(=\) Center of the amplitude;

4.
dg\(=\) Distance over which the cost decreases when approaching the amplitude of the goal direction.
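One way these quantities can combine is sketched below: cells within dg of the goal whose bearing from the goal deviates from the preferred direction dc by more than amp/2 receive the penalty Cd. This is only a plausible reading of the parameters; the exact shape of the cost is defined by the paper's Fig. 17:

```python
import math

def approach_cost(cell, goal, dc, amp, dg, cd):
    """Extra cost applied around the goal to favour a preferred approach
    direction.  dc: preferred bearing (rad) from which the robot should
    arrive; amp: angular amplitude around dc that is free of penalty;
    dg: radius of the restricted region; cd: penalty outside the cone."""
    dx, dy = cell[0] - goal[0], cell[1] - goal[1]
    dist = math.hypot(dx, dy)
    if dist == 0.0 or dist > dg:
        return 0.0                      # outside the restricted region
    bearing = math.atan2(dy, dx)
    # smallest signed angular difference between bearing and dc
    dev = abs(math.atan2(math.sin(bearing - dc), math.cos(bearing - dc)))
    return cd if dev > amp / 2.0 else 0.0
```

The soft variant of Fig. 16 (right) corresponds to a finite cd, which the planner may pay when the gain in arrival time justifies violating the preferred direction.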