Path Planning – Wavefront algorithm

REPORT 4

Introduction:

One of the main achievements for a functioning robot is to perform interesting tasks on its own. Some robots have been programmed to fold napkins; others help assemble various car parts; and recently, NASA launched its newest rover, Curiosity, to Mars on an exciting exploration mission to determine whether there could be life there. The world of robotics seems to get bigger and more interesting each day, and this is a result of the different components and fields of study that come together to make robotics what it is today. One of these fields is motion planning, and in this report we discuss one of the various motion planning algorithms – the wavefront algorithm – and relate it to a particular task that was given to us in class.

Task 4: Application of Wavefront Algorithm:

In the task that was given, our goal was to program a robot to find its way from a starting point to a goal area, given a map. In this map, obstacles are placed at various points, and the robot must avoid crossing any cell that contains an obstacle. In other words, the robot must find an optimal path from the starting point to the goal point without moving into obstacle cells. The map is represented as a 2D array, and the starting point (x and y coordinates) has already been defined for us. Below is a simple file describing the map.

 

#define X_SIZE 5
#define Y_SIZE 4
#define MAP_AREA 20

#define FREE 0
#define OBST 1
#define GOAL 2

int world_map[Y_SIZE][X_SIZE] = {{0, 0, 0, 0, 0},
                                 {0, 0, 1, 1, 0},
                                 {0, 0, 1, 0, 0},
                                 {0, 1, 0, 0, 0}};

int start_x = 1;
int start_y = 2;
int goal_x = 4;
int goal_y = 3;

 

The world_map[] array represents the area that the robot can traverse, and below it are the declarations of the coordinates of the start and goal positions in the map.

Implementation and Observation:

Path Following:

We divided the task into two major models. The first was to have the robot follow a planned path (path following) in a given map. Before we could complete this model, we created methods for specific sub-tasks, among them spinLeft(), spinRight(), moveForward() and moveBackward(). As their names suggest, these methods perform exactly the action they are named after. We also implemented two further methods, orientation() and direction(). The orientation method keeps track of which way the robot is facing in a particular cell at a given time: the robot could be facing north, south, east or west depending on the direction it is heading. This leads to the direction method, which helps the robot move into the next cell after processing all of its neighbouring cells. We used four direction values, labeled 1, 2, 3 and 4, and at every orientation the robot must determine which direction it is heading with respect to the cell it must move to.
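As an illustration, the sketch below keeps a heading variable that is updated on every 90-degree turn. The numbering of the four directions and the heading update shown inside spinRight() and spinLeft() are our own illustration of the idea, not the code from our actual methods.

// Hypothetical sketch of orientation tracking, with the four compass
// directions numbered clockwise starting from north.
#define NORTH 1
#define EAST  2
#define SOUTH 3
#define WEST  4

int heading = NORTH;                 // updated every time the robot turns

void spinRight()
{
    // ... motor commands for a 90-degree clockwise turn ...
    heading = (heading % 4) + 1;                    // N -> E -> S -> W -> N
}

void spinLeft()
{
    // ... motor commands for a 90-degree counter-clockwise turn ...
    heading = (heading == NORTH) ? WEST : heading - 1;
}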

Wavefront algorithm:

The next model was the application of the wavefront algorithm. This time the robot was not given a path to follow but was given a map, a starting point, a goal point and obstacle cells. The robot's job was to apply the wavefront algorithm to the map, extract a path from the resulting values and follow that extracted path.

Implementing:

We were required to repeatedly process cells, and this was done by implementing a queue to store the cells. We first added the goal cell to the queue. Then, as long as the queue is not empty and the start cell has not yet been processed, we take the next cell from the queue and set the value of each of its unprocessed free neighbours to the value of the current cell plus one (1), adding those neighbours to the queue in turn.
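A minimal sketch of this population step in C is shown below, assuming the world_map[Y_SIZE][X_SIZE] array and the start/goal coordinates from the listing above; the queue is a simple fixed-size array of cell coordinates, and the helper names are ours for illustration rather than the ones in our actual program.

// Hypothetical wavefront population sketch over a 4-connected grid.
int queueX[MAP_AREA];
int queueY[MAP_AREA];
int qHead = 0;
int qTail = 0;

void populateMap()
{
    int dx[4] = {1, -1, 0, 0};
    int dy[4] = {0, 0, 1, -1};
    int i;

    world_map[goal_y][goal_x] = GOAL;                  // goal cell gets the value 2
    queueX[qTail] = goal_x; queueY[qTail] = goal_y; qTail++;

    // Keep expanding while there are cells left and the start is unprocessed.
    while (qHead < qTail && world_map[start_y][start_x] == FREE)
    {
        int cx = queueX[qHead];
        int cy = queueY[qHead];
        qHead++;                                       // dequeue the current cell

        for (i = 0; i < 4; i++)
        {
            int nx = cx + dx[i];
            int ny = cy + dy[i];
            if (nx >= 0 && nx < X_SIZE && ny >= 0 && ny < Y_SIZE &&
                world_map[ny][nx] == FREE)
            {
                world_map[ny][nx] = world_map[cy][cx] + 1;   // one more than the current cell
                queueX[qTail] = nx; queueY[qTail] = ny; qTail++;
            }
        }
    }
}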

Observation:

We faced a series of challenges when implementing the wavefront algorithm. One major challenge was that whenever the robot made a right or left turn, it could not determine its direction as defined earlier in the report. Another was how to extract the path after the map had been populated. We also experienced oscillation between two cells: when the robot was about to make a left or right turn, it kept going back and forth between its current cell and the previous cell.

Conclusion:

This task gave us real experience in motion planning and helped us understand the various principles that come together, in wavefront planning, to complete a task like this.

 

Keith and Frank

BUILDING A SIMPLE MACHINE FROM LOCAL MATERIALS

Edem Adzo Diaba

Frank Anamuah-Koufie

REPORT ON TASK 1

Introduction

The objective of the task was to design a system of machines capable of moving a small ball (6 cm diameter) across a distance of one metre into a goal area. The goal area has dimensions 25 cm x 25 cm x 10 cm. The mechanism is designed to be touched once and be “automated” for the rest of its course.

Design

The following constraints affected the design of the machine:

  1. The machine must be composed of locally acquired materials only.
  2. Materials purchased in building the machine should not cost more than GHC20.00.
  3. The machine must be composed of a minimum of four simple machines.
  4. The machine may only be touched once, the purpose of which is to get it started.

The constraints listed above naturally limited the options we had in designing a solution. However, they also paved the way for creativity and new thoughts about possible uses of materials that were considered waste or would never have been considered in the building of mechanical systems.

Concept

We pictured the machine in terms of the stages involved in accomplishing the task. These were:

  1. Picking up the ball
  2. Moving the ball across the distance
  3. Dropping the ball in the goal area

In picking up the ball, we considered several options, including having an arm attach a metallic tape to the ball and pick it up with an attached magnet. We also came up with a “dustpan and broom” mechanism that would “sweep” the ball from the floor onto a “launch pad”.

We considered three possibilities for moving the ball across the distance. The first option was to “fly” the ball, using a catapult or lever mechanism, from the machine directly into the box. The second involved rolling the ball on the ground from an inclined plane straight into the box, but this was based on the misconceived idea that the goal area would have a front-facing opening rather than the upward-facing opening it had. Our final consideration was to have the ball drop from a height and then bounce into the box in a calculated number of steps.

One option for dropping the ball required actively dropping it into the goal area by turning off an electromagnet mounted on the arm carrying the ball across. The other options were simply passive results of the “moving process” and as such would not require a deliberate attempt to drop the ball.

Each of the three steps was tested independently to ascertain its efficacy before all were tested together to see how the whole system might work.

Implementation

We chose to use an iterative process in building the machine, cycling through designing, building and testing. We began with two days of brainstorming and sketching our initial ideas on paper. At this stage, we considered no physical limitations and allowed all sorts of ideas to flow.

The next step, which involved building models, was more realistic. At this stage, we focused more on the physical challenges that the machine may face. We considered weight limitations, material limitations and the role certain forces such as gravity and friction would play on the functioning of our machine.

Our first model involved three simple machines: a pulley would release a calculated weight onto an inclined plane. The weight would then slide down the plane and gather enough momentum to catapult the ball from one end of a simple lever into the goal area.

Our tests revealed a number of challenges and provided some new insights into building the machine. We realized that using a stone as our weight was difficult because we could not control the weight and it was also difficult to get the ball rolling on the inclined plane. Another challenge we encountered was getting the pulley to support two heavy stones. Theoretically, we thought the two would balance themselves but in reality, they put a lot of strain on the pulley and almost split the wooden support.

After several iterations, we finally came up with a design composed of three simple machines, which did not meet all the basic requirements of the challenge. Instead of one human intervention point, the final machine had two. The final design was composed of two inclined planes and a lever, which we put together using pieces of wood. We used a sock filled with sand as the weight in place of a stone because it was easier to adjust the weight. The mechanism works as follows:

  1. At one end of the machine, a rubber band holds the weight in place and a wedge holds the ball at another end.
  2. The rubber band is released and at that instant, the wedge is turned.
  3. The ball and weight move down the two planes with the ball arriving on the “launchpad” first. This is because the ball is lighter and moves on a shorter inclined plane.
  4. The weight, which is a sock loaded with sand, is much heavier and on a longer plane, so it arrives later at the other end of the lever.
  5. The weight lands on one end of the lever and shoots the ball into the box.

Materials Used

  1. Plywood – for inclined planes and lever
  2. Sand – Weight
  3. Sock – Container for weight
  4. Rubber band – Trigger
  5. Nails and kebab sticks – for nailing components together.

Conclusion

This task has opened our minds to new ways of thinking about some of the local materials scattered around us. It has also shown us how some ideas may look very simple on paper but can be gruesomely difficult to build in the real world. It is indeed a perfect introductory work to a class in robotics as it has made us appreciate the importance of robots in our activities.

 

 

Wavefront Planning-Task 4

Task: Path Planning

Group Members:

Flynn Darko

Selase Attah Kwamuar

Lecturer: Dr. Ayorkor Korsah

Task 4 Report

Introduction & Objective

The task required us to implement the wavefront planning algorithm to enable our robot to plan its way through a specified map from a start point to a goal area.

Design Concept

The task required us to implement the wavefront algorithm in an enclosed map that was divided into a grid with “x” and “y” coordinates and a boundary. The figure below shows the whole map divided into grid cells. Each numbered cell has an x and y coordinate and contains the number 0 or 1, which specifies a free space or an obstacle respectively.

Figure 1: Our World Map

As can be seen above, figure 1 gives a summary of the task our robot is to perform. The robot is supposed to start at the “start cell” and follow along the red line until it gets to the goal cell or area. The program is explained further in the implementation section of the report.

The concept we decided to follow is as follows:

  • Get the specified map
  • Populate the map from the goal area
  • Calculate and find the shortest path
  • Using the shortest path, navigate to the goal area

Implementation in RobotC Language

Before going ahead to write the program implementing the wavefront algorithm, the pseudocode that follows explains the structure of our program:

Cell Population

Set value of goal cell to 2

Enqueue goal cell

         While (start cell has not been processed)

                currentcell = Dequeue

                Expand currentcell

                for each free neighbor of currentcell, set neighbor’s value to currentcell value + 1

                Enqueue neighbor

Path Extraction

Start at the start cell

While not at goal cell

                Move to the neighbor cell with the smallest value which is not an obstacle

Navigate to goal cell
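As a rough illustration of the extraction step, the sketch below starts at the start cell and repeatedly moves to the lowest-valued populated neighbour until it reaches the goal. It assumes map constants and variables like those in the planning_map.h listing near the top of this page (world_map, X_SIZE, Y_SIZE, GOAL, start_x, start_y), and the path arrays are our own names for illustration.

// Hypothetical path extraction sketch. Assumes the map has already been
// populated with wavefront values (goal cell = 2, obstacles = 1, unreached = 0).
int pathX[MAP_AREA];
int pathY[MAP_AREA];

int extractPath()
{
    int dx[4] = {1, -1, 0, 0};
    int dy[4] = {0, 0, 1, -1};
    int cx = start_x;
    int cy = start_y;
    int len = 0;
    int i;

    pathX[len] = cx; pathY[len] = cy; len++;

    while (world_map[cy][cx] != GOAL)
    {
        int bestX = cx;
        int bestY = cy;
        for (i = 0; i < 4; i++)
        {
            int nx = cx + dx[i];
            int ny = cy + dy[i];
            // Only consider populated cells (value >= 2); this skips both
            // obstacles (1) and cells the wave never reached (0).
            if (nx >= 0 && nx < X_SIZE && ny >= 0 && ny < Y_SIZE &&
                world_map[ny][nx] >= GOAL &&
                world_map[ny][nx] < world_map[bestY][bestX])
            {
                bestX = nx;
                bestY = ny;
            }
        }
        cx = bestX; cy = bestY;
        pathX[len] = cx; pathY[len] = cy; len++;
    }
    return len;                                  // number of cells on the path
}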

 

Implementation Stages

Movement through the map (tiles)

To do this we had to calculate the distance between the center points of two cells and write turnLeft() and turnRight() functions for our robot.

Writing the movement functions was the first part of the task we implemented. After observing how the encoders work, and using the distance calculated between tile centers, we implemented forward, left and right movement of our robot. The respective functions used in our program were moveForward(), ninty_turnLeft() and ninty_turnRight(). We also had to make sure that the robot turned 90 degrees for both left and right turns. We were able to do this by adjusting the magnitude of the power assigned to both wheels for the respective turns.
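A minimal RobotC-style sketch of what such a moveForward() could look like is shown below; the motor ports and the 920 encoder degrees per tile are illustrative assumptions rather than the exact values from our program.

// Hypothetical sketch: drive forward one tile using the motor encoders.
// Assumes the driving motors are on ports B and C and that one tile
// corresponds to roughly 920 encoder degrees of wheel rotation.
void moveForward()
{
    nMotorEncoder[motorB] = 0;              // reset the encoder count
    motor[motorB] = 40;                     // both wheels forward
    motor[motorC] = 40;

    while (nMotorEncoder[motorB] < 920)
    {
        // keep driving until one tile length has been covered
    }

    motor[motorB] = 0;                      // stop at the center of the next cell
    motor[motorC] = 0;
}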

Displaying the map on NXT screen

To implement this we used a printWaveMap( ) method to display the map specified in the planning_map.h source file. Using the nxtDisplayString( ) method, we printed the map values in their respective cells, marking free cells with 0, obstacles with 1 and the goal cell with 2, as can be seen in figure 2.

Figure 2
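A rough sketch of such a display routine is shown below. It assumes the map is stored in a world_map[Y_SIZE][X_SIZE] array as in the planning_map.h listing near the top of this page, and it uses the positioned variant nxtDisplayStringAt( ) with illustrative pixel spacing, which may differ from how our printWaveMap( ) actually lays out the screen.

// Hypothetical sketch of a map display routine for the NXT screen.
void printWaveMap()
{
    int row;
    int col;
    for (row = 0; row < Y_SIZE; row++)
    {
        for (col = 0; col < X_SIZE; col++)
        {
            // place each cell value on the 100 x 64 pixel screen,
            // 12 pixels apart, with the top map row at the top of the screen
            nxtDisplayStringAt(col * 12, 63 - row * 12, "%d", world_map[row][col]);
        }
    }
}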

Populate the cells from the goal area to start area

After implementing the display of the given map on the robot's screen, we went ahead and populated the cells within the map starting from the goal cell; that is, the wavefront plan. We used two functions, addQueue( ) and removeQueue( ), to queue all the cells to be processed until we reached the start area. These functions take the cell coordinates as parameters and are used to fill in the values of the neighbouring cells as each cell is processed. From figure 3, it can be seen that from the goal area (value 2) we populate the neighbouring cells, which are enqueued in turn, until we reach 12, which is the start area.

 

Figure 3
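A minimal sketch of an array-backed queue of this kind is shown below; the fixed capacity of MAP_AREA cells and the exact behaviour of addQueue( ) and removeQueue( ) are assumptions for illustration, since the report does not show the function bodies.

// Hypothetical fixed-size queue of cell coordinates for the wavefront plan.
// Each cell is enqueued at most once, so MAP_AREA entries are enough.
int queueX[MAP_AREA];
int queueY[MAP_AREA];
int head = 0;          // index of the next cell to remove
int tail = 0;          // index of the next free slot

int currentX;          // filled in by removeQueue()
int currentY;

void addQueue(int x, int y)
{
    queueX[tail] = x;
    queueY[tail] = y;
    tail++;
}

bool removeQueue()
{
    if (head == tail)
        return false;          // nothing left to process
    currentX = queueX[head];
    currentY = queueY[head];
    head++;
    return true;
}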

Planning and Navigating through planned path to goal

Finally, we made use of the movement functions and the wavefront plan to find the shortest path and navigate to the goal area. The algorithm we wrote for this part is as follows:

  1. Given the start cell of robot
  2. Get value of cell
  3. Compare value of that cell to its neighboring cells and find the least cell value
  4. Repeat steps 1, 2 and 3 until it gets to the goal cell, whose value is 2
  5. Save the shortest path in an array

After finding the shortest path to the goal we encountered the challenge of factoring in the orientation of the robot. Relative to the bearing of the world map we were using, the orientation for the robot changed after a turn.

We therefore had to use a variable called heading, which was assigned values representing the current bearing of the robot. In our program this enabled the robot to know where the minimum-value cell was relative to itself and which direction to turn in order to get to that neighbouring cell.

Figure 4

Challenges

One major challenge we faced was how to factor in the orientation of the robot. Even after getting an understanding of the concept, we still had a lot of errors when implementing it in our program. We had to write about sixteen conditions covering all possible positions of the minimum cell the robot was supposed to move to, combined with the current orientation of the robot. We were finally able to get our program running by writing a condition for each of the four possible orientations of the robot: facing north, east, south and west. Each condition included the action the robot was to take depending on where the minimum-value cell was relative to the robot.
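One compact way to express those four cases is sketched below; it encodes both the robot's heading and the direction of the target cell as 0 to 3 (north, east, south, west), so the difference between them tells the robot how to turn. The encoding and the faceDirection() name are ours for illustration; only heading, ninty_turnLeft() and ninty_turnRight() come from our actual program.

// Hypothetical turn-decision sketch: 0 = north, 1 = east, 2 = south, 3 = west.
// 'heading' is the robot's current bearing; 'target' is the direction of the
// neighbouring cell with the minimum wavefront value.
int heading = 0;

void faceDirection(int target)
{
    int diff = (target - heading + 4) % 4;   // clockwise quarter turns needed

    if (diff == 1)
    {
        ninty_turnRight();                   // target is 90 degrees to the right
    }
    else if (diff == 3)
    {
        ninty_turnLeft();                    // target is 90 degrees to the left
    }
    else if (diff == 2)
    {
        ninty_turnRight();                   // target is behind: two right turns
        ninty_turnRight();
    }
    // diff == 0: already facing the target cell

    heading = target;                        // remember the new bearing
}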

During our testing, we also realized that our robot did not exactly turn 90 degrees for both left and right turns. We therefore had to adjust the assigned motor powers as well as the wait times.

Conclusion

After debugging our program following each unit task, we realized that our approach cost us a lot of time on the whole task. We should have first implemented the movement functions and made sure our turns worked before writing the program for the wavefront plan. This would have saved us some time, since we had to go back and correct the movement functions.

In summary, this task was challenging in terms of determining the orientation of the robot upon every turn, but it taught us the value of breaking a problem or a solution algorithm down to the very minute detail, since this really helps when writing the program.

Treasure Hunt Task

Task: Treasure Hunt

Group Members:

Flynn Darko

Daniel Botchway

Lecturer: Dr. Ayorkor Korsah

 

Task 3 Report

Introduction & Objective

Our main goal for this task was to write a program that would enable the robot to find red and blue colored bottles and place them in their respective color zones (holding stations) with the help of sensors attached to the robot.

Design Concept

First, we had to rebuild our robot from task 2 to enable it to grab and transport the bottles to the collection points. The treasures are shown in figure 1.

Bottles wrapped in colored cards (treasures)

 

  • For the robot and its sensors

For our robot we decided on the “Tribot” model. The sensors we decided to use were two sonar sensors and two color sensors. One sonar sensor was placed at the top of the robot and the other on its side. The concept we wanted to implement with these two sensors was to use the top sonar sensor for front obstacle (wall) detection and the side sensor for side obstacle (wall) detection and avoidance.

Also, one color sensor faced downwards while the other was placed at the front of the robot. The front color sensor would detect and record the color of a bottle, and the downward-facing color sensor would then use that data to find the collection zone of the same color as the detected bottle.

  • For our solution concept, the process goes as follows:
  1. Do random exploration in the environment to find colored bottles while avoiding obstacles.
  2. On finding a bottle, save the color of the bottle, grab it and move along the wall of the environment.
  3. When the appropriate color zone is detected, drop the bottle.
  4. Continue with the random exploration.

Implementation

Our first step in the implementation stage was building our robot. The robot had to have arms to enable it to grab the bottles, and the grabbing arm needed a motor to control its opening and closing. The color sensors were placed in front of the robot between the arms: one was positioned horizontally to detect the color of objects in front of the robot, and the other was positioned vertically, with its sensing area pointing down, to detect the color of the landing zones.

The next step was programming the robot to use its sonar sensor inputs to do wall-following. We programmed this using Bang-Bang and P control mechanisms. Though the wall-following program worked along a straight wall, it failed in the treasure hunt environment because of the acute angles of the boundaries. Given that situation, we had to implement a wall-avoidance procedure to enable the robot to travel through the environment without bumping into the walls. A solution that used an extra sonar sensor to prevent head-on collisions was implemented, and that necessitated the use of two sonar sensors on the Tribot. The sonar sensors needed to be positioned at a particular angle for both of them to work together based on the code and logic implemented. A number of tests helped determine the appropriate positions for the sonar sensors and appropriate sonar thresholds.
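For reference, here is a minimal RobotC-style sketch of proportional (P) wall-following with a side-mounted sonar sensor. The sensor port, target distance, gain and base power are illustrative assumptions, not the values from our actual program, and the sign of the correction depends on which side of the robot the wall is.

// Hypothetical P-control wall follower using a side sonar sensor.
#pragma config(Sensor, S1, sideSonar, sensorSONAR)   // assumed port and name

task main()
{
    int desired = 25;                        // target distance from the wall, in cm
    float kP = 2.0;                          // proportional gain, tuned by testing

    while (true)
    {
        int error = SensorValue[sideSonar] - desired;   // positive: too far from the wall
        int correction = (int)(kP * error);

        motor[motorB] = 40 + correction;     // steer back toward the set distance
        motor[motorC] = 40 - correction;

        wait1Msec(20);
    }
}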

With regard to the programming algorithm, this is the approach:

-> reset claw ()

-> start exploration task ()

-> start avoidance task ()

    -> avoid front collision ()

    -> avoid side collision ()

    -> avoid going outside the assigned environment ()

-> while search isn’t ended {

    -> if treasure is detected {

        -> start grab and deposit ()

    -> }

-> }
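The outline above maps naturally onto RobotC tasks. Below is a minimal, hypothetical sketch of that structure; the task names follow the outline, but the bodies are placeholders rather than our actual exploration and avoidance code.

// Hypothetical sketch of the task structure in RobotC.
task avoidance()
{
    while (true)
    {
        // read the front and side sonar sensors here and steer away from walls
        wait1Msec(50);
    }
}

task main()
{
    // resetClaw();               // open the claw to its starting position
    StartTask(avoidance);         // run wall/obstacle avoidance in parallel

    while (true)                  // exploration loop
    {
        // drive around; when the front color sensor detects a treasure,
        // grab it and deposit it at the matching color zone
        wait1Msec(50);
    }
}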

View of our Treasure hunter robot

Challenges

One major challenge encountered was with the alternative wall-avoidance procedure. This solution did a great job of avoiding obstacles, but the robot kept avoiding the treasures as well and followed the same route close to the walls of the environment. There was no randomness in the exploration, and only a small section of the total surface area of the environment was being covered. This problem was fixed by programming the robot to turn after every front obstacle detection and avoidance. This introduced some randomness and the robot covered more surface area; however, it did not fix the issue of the robot avoiding the treasures themselves.

The other challenge was with the coverage area of the color sensors, which made detecting a treasure quite difficult. A color sensor needs to be very close to the object of interest in order to accurately determine its color. The sensors also have a narrow field of view, so the object of interest needs to be directly in front of the sensor for a successful detection. This made the robot pass by and knock down treasures that were in its path but not directly in front of the color sensor. This problem has not been fixed, and chance is really the deciding factor in the detection and recognition of a treasure.

The approach used in exploring and searching for treasures was a random one, and this came with the problem of time inefficiency. As the number of treasures in the environment decreases, the time taken to detect a treasure increases. The exploration part of the program relied on obstacle avoidance until the robot happened upon a path that allowed the front color sensor to detect a treasure (a bottle). Fixing this problem would demand a major change in the concept of the solution and the structural design of the robot.

Conclusion

Looking at the design concept and its implementation, we found iterative testing to be key in making our robot more efficient and reliable. The placement of the sonar sensors was very important, since placing them in the wrong position would give unreliable results due to the noisiness of some sensors, such as the sonar sensors.

The whole task taught us a few lessons that are very important in building and programming a robot to perform tasks. One challenge we faced was testing and making sure the robot was reliable in hunting for the treasures (bottles), that is, that it could perform the same task over and over again without encountering any problems.

Another challenge was breaking down the whole treasure hunt task into small problem units and testing them to make sure each “sub-task”, such as wall-following, was working well before moving on to, say, treasure (bottle) detection. Overall, the task taught us the essence of first breaking down the whole task into smaller tasks and solving them unit by unit.

 

Relevance to Problem Solving in Ghana

This treasure hunt concept can be applied in accident areas where victims cannot be located or reached to offer them help. For example, during the recent collapse of a Melcom shop branch in the country, health and security personnel were not able to locate victims due to their inability to enter the rubble.

Robots can be equipped with the proper sensors, such as a camera, to help detect where victims are located in an accident. This would not only hasten the work of security and health personnel but save lives as well.

 

Task 3: Treasure hunt

FRANK ANAMUAH-KOUFIE

ABDELRAHMAN BARAKAT

16TH OCTOBER, 2012

Our Experience

The task was interesting because of its seeming complexity and the thought required to accomplish it. Initially it sounded very abstract and almost impossible to program perfectly, but as we reasoned through it logically, we were able to break it down into very basic tasks. These included object detection using the sonar, wall following, collision prevention, and grabbing and dropping the bottle. We began by designing small modules to accomplish specific tasks and testing them individually before gradually putting them together into a complete program. We modified the program continually and even redesigned the robot a number of times as we researched and found new insight.

Lessons learnt

The main lessons we learnt from this task were how to use the compass sensor, sensor fusion (in our case, using the sonar with the color sensor for object detection) and ways of reducing sensor noise. We also gained a better understanding of how to manage separate tasks using the signals from the sensors.

Challenges faced

The first challenge we faced was getting all the required sensors to work together. On several occasions, different sensors caused the robot to perform conflicting actions, producing very awkward results. Also, we were not able to make good use of the multiplexer due to its complexity, even though it would have been helpful at some point; this caused a change in our design. In the end, we stuck to using four sensors and managed to have them coordinate to produce results similar to what would have been possible with a few more sensors.

Another challenge worth mentioning was the difficulty in planning a reliable motion path for the robot in order to accomplish its task. For the greater part of the task, we stuck with using random motion which was unreliable and greatly increased the time it took to accomplish the task.

Problem Solving with sensors

  1. Fixing Potholes:

Many roads in Ghana are covered with potholes and other forms of damage, which often cause deadly accidents, especially in the rainy season. An important aspect of solving this problem is data collection. Data collected about the current condition of a road will help in deciding between repairs and a complete reconstruction, as well as providing useful information to road users on avoiding certain sections of road.

Measuring every hole and crack on the road manually would cost a lot of money and time. With the help of sensors and image recognition software, we could create a system which recognizes the damage on a road and estimates the cost of fixing it. Sensors which could be used include a camera to capture images to be analysed and an accelerometer to collect data about how bumpy the road is. These sensors may be the simple accelerometers and cameras available on phones or could come as part of more advanced equipment. Over time, as more roads are traversed, a very reliable database could be established with enough information to support more efficient repair of damaged roads.

  2. Flood Prevention:

Many communities in Ghana suffer from floods every year. Floods are usually caused by the overflow of rivers during the rainy season and often cause massive loss of property and sometimes loss of lives. Because of the lack of alarm mechanisms, affected communities don’t normally have enough time to prepare to leave or move their properties.

A mechanism which could help reduce the loss of life and property could be built using sensors that measure changes in the water level of rivers. The level of water in a river could be measured using a level sensor; options include ultrasonic level sensors, capacitive sensors and mechanical diaphragm sensors. With these sensors being constantly monitored, meteorological agencies and disaster prevention organisations would have enough time to warn communities and also be well prepared to deal with floods that could not be prevented.

 

Task 4: Path planning: Wavefront Algorithm

6/11/12

Edem Diaba

Abdelrahman Barakat

Task Objectives:

  1. To implement Wavefront algorithm to generate a path.
  2. Use Wavefront algorithm to plan a path through a given workspace.
  3. To execute a Wavefront plan.

Task description

In this task, we use the NXT Mindstorms robot kit to implement a path planning algorithm called the wavefront algorithm. The workspace for this task is a 5 by 4 square grid with obstacles identified by numbers. The RobotC program which we load onto the NXT brick will be able to determine the path to take using a map preloaded in its memory; the program finds its way by applying the wavefront algorithm to that map.

What is the Wavefront Algorithm?

It is a cell-decomposition path planning method. This means that it is a way to plan the best path across a workspace which has been divided into equal polygons (e.g. a grid of squares). The squares occupied by obstacles are marked and therefore considered to be obstacles themselves.

Figure 1 Initial map

Figure 2 Wavefront planning

Every obstacle is marked by a specific number (e.g. 1), while the goal and the start point are given two other numbers (e.g. 2 and 0 respectively). At this point, all the other boxes are still empty. To find the best path using this method, we mark all the other empty squares with numbers greater than that of the goal (>2). This is done by systematically labelling the surrounding squares with increasing numbers, starting from a particular point (the goal), as seen in figure 2. Notice that the numbers increase as they approach the start point.

To find the path to be followed, we follow the numbers as they decrease towards the goal, taking one step at a time until the goal is reached.

 

Our approach

For this task, we did not use any additional sensors because the map of the workspace was already provided to the robot. We used the tiles on the ground as our grid.

Implementation in RobotC

The program is divided into several sections. First, there is the map, which is written in a file of its own and consists of a two-dimensional array and two points marking the start and goal. The array is already filled with values: 0 to indicate the absence of an obstacle and 1 to indicate its presence. The map file is included at the beginning of the main program (#include “map.h”). Secondly, there are functions, separated from the main task, that perform each action on its own. The actions are moving forward, turning clockwise, turning counter-clockwise and stopping; we therefore used 4-point connectivity to navigate the workspace. Functions written to perform the basic thinking tasks included getNeighbours(), validateNeighbours(), findMin(), extract(), enqueue() and dequeue(). Thirdly, there are functions which combine the basic thinking and the basic moves to decide which actions to take. Finally, the main task calls the relevant functions: mapping the area, extracting the path and then traversing towards the goal.
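As an illustration of the 4-point connectivity used here, a possible shape for getNeighbours() is sketched below. The report does not show the actual function body, so the global result arrays and the boundary check against the 5 by 4 grid are our own assumptions.

// Hypothetical 4-point connectivity helper: collects the coordinates of the
// cells directly above, below, left and right of (x, y) that lie on the map.
int neighbourX[4];
int neighbourY[4];

int getNeighbours(int x, int y)
{
    int dx[4] = {0, 0, -1, 1};
    int dy[4] = {-1, 1, 0, 0};
    int count = 0;
    int i;

    for (i = 0; i < 4; i++)
    {
        int nx = x + dx[i];
        int ny = y + dy[i];
        if (nx >= 0 && nx < 5 && ny >= 0 && ny < 4)   // stay inside the 5 by 4 grid
        {
            neighbourX[count] = nx;
            neighbourY[count] = ny;
            count++;
        }
    }
    return count;                                     // number of valid neighbours
}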

Planning the workspace involved loading the map and using a queue structure to assign numbers to the empty cells, ordering them in increasing order towards the start point. In order not to exceed the boundaries of the workspace, the squares outside the grid are given very high values so that they are never picked for the next move. The outcome is a plan of the entire area stored in an array.

Thinking (extraction) uses the array generated in the planning stage to decide which path to take. Starting from the start point, the function generates a series of commands which follow the decreasing numbers towards the goal. These decisions are stored for the next function to use.

Acting (traversing) uses the points generated by the extraction function to move from one cell to the other according to the plan.

Figure 3 Final path followed

Observation

The program was successful in planning, thinking and acting, and it performed the task successfully even when the map was changed.

Task 4 – Wavefront Planning

Emmanuel Nkansah

Daniel Botchway

Introduction

The task is to have a robot plan and navigate through a known map by implementing the wavefront planning algorithm. The robot will be given a map representing a cell decomposition of the workspace. The map will be represented by a 2-dimensional array. Furthermore, the coordinates of the start and goal cells will be provided. The robot will have to repeatedly calculate the values of neighboring cells (starting from the goal cell, which is given a value of 2) and determine the next cell to traverse by examining neighboring cells to see which one has the lowest non-obstacle value. Combining these tasks, the robot should be able to plan and navigate its way to the goal cell.

Concept

Preparation for Wave front Planning:

Driving (Forward)

We started by trying to determine how much wheel rotation it would take the robot to move from the center of one cell or tile to the next. We measured the distance with a tape measure and concluded that the distance between the two centers was 45 cm. With that as our basis, we calculated the number of encoder degrees required to cover that distance and, after a series of adjustments, arrived at 920 degrees, which is equivalent to about 2.5 rotations.

Turning 90 degrees

We then tried to get our robot to accurately turn 45 degrees and 90 degrees. To do this, we first employed a mathematical calculation from an online source. The calculation was based on the circle the wheels trace as the robot spins in place. The idea was to determine the arc length of a 90-degree sector (with the radius being half the distance between the tires) and use that to compute the number of encoder degrees required for an accurate turn. The value we obtained was not too far off, and after a few adjustments we arrived at 207 degrees.
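As a sanity check on both figures, the short program below recomputes them from the wheel geometry; the 5.6 cm wheel diameter (the standard NXT kit wheel) and the roughly 6.5 cm from the robot's centre to each wheel are assumptions for illustration, not measurements from the report.

// Hypothetical check of the calibration figures above.
#include <stdio.h>

int main(void)
{
    const double PI = 3.14159265358979;
    double circumference = PI * 5.6;              /* ~17.6 cm of travel per wheel rotation */

    /* Driving: 45 cm between cell centers. */
    double drive_degrees = 45.0 / circumference * 360.0;
    printf("drive target: %.0f encoder degrees\n", drive_degrees);   /* ~921, close to 920 */

    /* Turning 90 degrees in place: each wheel traces a quarter circle
       whose radius is half the distance between the tires. */
    double arc = (PI / 2.0) * 6.5;
    double turn_degrees = arc / circumference * 360.0;
    printf("turn target: %.0f encoder degrees\n", turn_degrees);     /* ~209, close to 207 */
    return 0;
}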

Finding bearing of destination cell

Subsequently, we worked towards getting the robot to move in the direction of the goal cell or desired direction. We started by initializing integer variables named north, south, east and west (in the four-point connectivity version), plus northeast, northwest, southeast and southwest (in the eight-point connectivity version). We used these variables to make the robot aware of the direction it was facing, by assigning a value to a variable called current bearing each time the robot executed a turn. For instance, if the robot was facing north and turned 90 degrees to the right, then current bearing was assigned east, telling the robot that it was now facing east. This allowed us to create a method that ensured the robot moved in the desired direction.

Moving using given points

After creating a method to alter and determine the direction in which the robot was facing, we tried combining driving one cell, turning 90 degrees and finding the bearing into a method that would make the robot move from one point to another given a start cell and a destination or goal cell. This was done successfully.

Extracting a path from a map

We then gave the robot a completed wavefront-planning map from which to extract and follow a path, in order to test our implementation thus far. The method we implemented involved starting from a given start cell and identifying, among its neighbors, the cell with the lowest value; that cell is then marked as the next cell to be traversed. Figure 1 shows a completed map. During the implementation of this phase, we had difficulty reading values from an array of points created to store the neighbors and the next cell to be processed. To solve this problem, we defined a structure called vector, which has an x, a y and a value. This helped us keep track of cell values without having to access the cells themselves.

Figure 1 – Completed Map
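A sketch of what the vector structure described above could look like is shown below; the field names follow the description, but the exact definition in our program may differ.

// Hypothetical definition of the 'vector' structure used to track cells.
typedef struct
{
    int x;        // column of the cell
    int y;        // row of the cell
    int value;    // wavefront value stored at that cell
} vector;

// Example use: remembering a neighbor together with its value.
// vector next;
// next.x = 3;  next.y = 2;  next.value = 5;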

Performing the wave front algorithm

Afterwards, we wrote a method for performing the wavefront planning. The method uses a queue data structure to keep track of the cells awaiting processing. The following is a step-by-step breakdown of our method:

  1. Enqueue the goal
  2. Get value at goal
  3. Dequeue
  4. While the queue is not empty
  5. Find the free neighbors
  6. For each free neighbor, the value of neighbor becomes the value of Dequeued cell + 1
  7. Enqueue the neighbor
  8. Go to step 4

Subsequently, the robot extracts the path to follow by applying the methodology discussed before this section. The path is stored node by node in an array named EXTRACTED_path[ ] and the points are traversed consecutively.

Result

We put together the methods we had written. To ensure our methodology was robust, we tested it using one major scenario: we modified the map to include obstacles in more places than usual, so that our method should display “no path found” on the screen of the NXT brick. It worked. Figure 2 shows a map with obstacles cutting off the path to the goal area, which is marked with the value 2.

Figure 2 – Modified World Map

Conclusion

As outlined in the paragraphs above, task 4 involved implementing and executing the wavefront algorithm. Although the task was not the easiest one, we were able to complete it in time and extended it by adding a version that implements 8-point connectivity.