MULTI-ROBOT COORDINATION (FINAL PROJECT)

FRANK ANAMUAH-KOUFIE

ROBOTICS

DR. AYORKOR KORSAH

21ST DECEMBER, 2012

FINAL PROJECT REPORT


INTRODUCTION

The objective of my final project was to probe further into the subject of robot communication and coordination. I wanted to gain a better understanding of how multiple robots are able to share information and collaborate on tasks in real time. Initially, I planned to design a system of robots with different capabilities that could determine how to collaborate on and perform given tasks. However, I settled on implementing a system of similarly designed robots that collaborate on a specific task. For my final demonstration, I designed two robots that would simultaneously carry a box along a path. One robot would act as the master and send driving instructions to the second robot. The second robot, the slave, would simply follow the master's instructions.

APPROACH

After carefully thinking through the project, I divided it into three sub-tasks: the robot design, the communication mechanism, and carrying the payload.


ROBOT DESIGN

I began by researching NXT robot designs that could perform the task of carrying the box. After considering several options, I decided to build forklift robots so that they could carry the payload off the ground. I did not readily find a design that had the forklift on the side, so I designed one myself. The robot employed a two-wheel differential drive system with a motor to power each wheel and a castor wheel at the back. It also had a third motor that powered the forklift. The forklift design left one side of the robot heavier than the other, which caused some irregular driving behaviour. However, I fixed this by determining the ratio of imbalance and configuring the nSyncedMotors setting accordingly. I carried out a number of tests to ensure that the forklift would be strong enough to carry the box.


BLUETOOTH COMMUNICATION

The next objective was to create a reliable communication mechanism between the robots. I tested the Bluetooth functions available for the NXT and realised that many of them were not applicable for my purposes. I ran into several difficulties at this stage of the project and had to spend a considerable amount of time resolving them. The major challenges were maintaining the connection between the robots, which was dropped on several occasions, and delays in message sending. I overcame these challenges by retrying some operations to increase their chance of success and by increasing the time scheduled for some Bluetooth-related processes. By running repeated tests, I determined the average time activities such as Bluetooth search and message sending took, and used that as the wait time for those processes. The result was greater reliability and a stronger communication framework between the two robots.
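The retry-and-wait strategy described above can be sketched as follows. This is a host-testable C simulation, not actual RobotC: `flaky_send`, `attempts_needed` and the message value are stand-ins, and the real code would insert a measured `wait1Msec` delay between attempts.

```c
#include <stdbool.h>

int attempts_needed;   /* simulated: the channel succeeds on this attempt */
int attempts_made;

/* Stand-in for an NXT Bluetooth send that sometimes fails. */
bool flaky_send(int msg) {
    (void)msg;
    attempts_made++;
    return attempts_made >= attempts_needed;
}

/* Retry the operation up to max_tries times; in the real RobotC code,
   a wait of the empirically measured average operation time would sit
   between attempts. */
bool send_with_retry(int msg, int max_tries) {
    for (int t = 0; t < max_tries; t++)
        if (flaky_send(msg))
            return true;     /* success: stop retrying */
    return false;            /* connection considered lost */
}
```

Repeating the operation a bounded number of times trades a little latency for a much higher overall success rate, which matches the reliability gain reported above.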

CARRYING THE BOX

The challenge in accomplishing this task was to develop an algorithm that would calculate the motor power and duration required by each of the four motors. This drew on principles of inverse kinematics: I knew the desired robot configuration and had to determine the configuration of the individual wheels. I carried out a number of tests to determine the motor requirements for each robot individually and recorded the results. I used this data to determine the relation between the motor powers and durations of the two robots. I took this approach in developing the synchronized movement algorithm because working with angles did not produce the desired results, since the motor encoders report their measurements in degrees of rotation rather than as angular values. I tested the function several times and fine-tuned the time requirements to achieve near-perfect synchronisation.
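The degrees-to-distance reasoning behind the timing calculations can be sketched like this. It is a hedged C sketch: the 5.6 cm wheel diameter and the speed parameter are illustrative assumptions, not the values measured in the project.

```c
#include <math.h>

#define PI 3.14159265358979
/* Assumed wheel diameter in cm; standard NXT wheels are about 5.6 cm. */
#define WHEEL_DIAMETER_CM 5.6

/* Encoder degrees -> distance travelled in cm. */
double degrees_to_cm(double degrees) {
    return (degrees / 360.0) * PI * WHEEL_DIAMETER_CM;
}

/* Duration in ms needed to cover dist_cm at an empirically measured
   speed (cm/s) for a given motor power setting. */
double duration_ms(double dist_cm, double speed_cm_per_s) {
    return 1000.0 * dist_cm / speed_cm_per_s;
}
```

With measured speeds for each power level, the two robots' durations can be matched so they cover the same distance in the same time.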

RESULT

Putting everything together for the first time gave me heartbreak, as the entire system broke down. Connection and message sending began to fail, and as a result the robots were unable to even lift the box together. Much of this was because it was the first time I had put everything together and I had not taken a number of factors into consideration: I had not provided any fall-backs in case of communication delays, and I had not accounted for natural discords in motor coordination. However, after a good number of tests and some redesign, I managed to get the system to function correctly as planned.

CONCLUSION

In doing this project I have learnt a lot about robot coordination and communication. I have seen first-hand the problems that face this field of research, chief among which are communication failures. However, I managed a successful implementation, which shows the immense opportunities that exist in the real world for such systems. It is possible to design systems in which planet-exploration robots or rescue robots share resources and join forces to carry objects or accident victims, so that no single robot is overloaded.

 

Exploration Robot (Robotics Final Project)

Daniel Botchway

Final Project Report Document

Introduction to Robotics

Dr. Ayorkor Korsah

21st December, 2012

Introduction

The overall concept of my final robotics project was to design and build an exploration robot that would scan its environment and build a map that accurately represents it. In the project proposal submitted, the focus was on motion planning and mapping using the Histogramic In-Motion Mapping (HIMM) technique. However, I could not implement this algorithm directly; instead, I developed a solution based on my understanding of the HIMM technique and of the CMU mapping technique from which HIMM was developed.

The motivation behind this project was the interest sparked by reading about robot mapping and motion planning. Thinking deeply about the HIMM and SLAM (Simultaneous Localization And Mapping) techniques was thrilling, especially compared to how easily humans get a sense of their environment and simply know where they are and move around flawlessly. Unlike the excitement from reading, actually executing this project was awesomely tedious.

Design Concept

The initial designs of the project, drawn up in a few brainstorming sessions before the project proposal, got outrageously out of scope once the concept of autonomous localization was included. That idea, just like an implementation of the actual HIMM technique, required the whole project to accommodate a level of randomness and unpredictability that would make it difficult to complete in the limited time frame.

A simpler design concept was developed to eliminate the element of randomness and unpredictability. The design was:

  1. To move in a calculated manner of one cell at a time using 8 point connectivity on an occupancy grid map.
  2. To scan the four main sides of the robot.
  3. To use a mathematical model to convert the sensor output into information for the map.
  4. To visit every cell in the grid to generate the map representation.
  5. To send the map data from the robot to a remote site for visualization.

Implementation

Movement

Building on the designs used in the Wave Front planning task, the fundamental code for moving along a path of cells was already available (i.e. the Movement module). This module had functions for moving one cell forward, turning 45 and 90 degrees right and left, determining the bearing of valid neighboring cells, and moving appropriately to them.

Architecture

Also, the same robot architecture was used with slight modifications to accommodate the four sonar sensors. The sensors were placed in front, behind, on the left and on the right sides of the robot as illustrated in Figure 1. The positioning of the sensors gave a fairly complete view of the area around the robot though not a complete 360 degree panoramic view.


Figure 1

Coverage

The next critical, and ultimately the most challenging, task was to develop a coverage algorithm that would allow the robot to visit every cell in a given grid. I spent a lot of time searching for ideas and existing algorithms that ensured every cell in a grid would be covered in the most efficient manner. Most of my search results pulled up graph concepts such as the Travelling Salesman Problem and other graph coverage algorithms. However, finding existing implementations of these algorithms that suited my project proved futile. These algorithms did not move systematically from one cell to the next using only valid neighboring cells; instead, they computed a path using Bresenham's line generation algorithm from the current cell to the next optimal cell in the coverage sequence. Also, the graph coverage algorithms did not accommodate the presence of dynamic obstacles in the environment.

After several attempts at designing, testing and redesigning a model that would allow the robot to efficiently cover every cell in a given grid while accommodating the presence or absence of obstacles, a solution was finally developed (i.e. the Path Module). The fundamental theory of this Path Module is:

  1. Start at a given known location.
  2. Find all the valid neighboring cells of the current location, i.e. cells that have not been visited, that have no obstacles, and that are not outside the defined area of the map.
  3. Move to the first valid neighboring cell.
  4. Store all the valid neighboring cells that were not chosen in an unvisited-cell list.
  5. Remove the current cell from the unvisited-cell list.
  6. If no valid neighbors are found and the unvisited-cell list is not empty, backtrack to the most recent unvisited cell using a Wave Front planning algorithm that re-plans after every move to a destination cell.
  7. When all the unvisited cells have been visited, terminate the process.
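The theory above can be sketched as a host-side C simulation. This is a sketch under stated assumptions: the grid size and obstacle layout are illustrative, and the jump in the backtracking branch is a brute-force stand-in for the Wave Front re-plan, not the module's actual path computation.

```c
#include <stdbool.h>

#define N 5                       /* grid is N x N */
int  obstacle[N][N];              /* 1 = blocked */
bool visited[N][N];

/* 8-point connectivity offsets, matching the Movement module's moves. */
int DX[8] = {-1,-1,-1, 0, 0, 1, 1, 1};
int DY[8] = {-1, 0, 1,-1, 1,-1, 0, 1};

bool valid(int x, int y) {
    return x >= 0 && x < N && y >= 0 && y < N &&
           !obstacle[x][y] && !visited[x][y];
}

/* Count the free cells covered starting from (sx, sy). */
int cover(int sx, int sy) {
    int covered = 0, x = sx, y = sy;
    visited[x][y] = true; covered++;
    for (;;) {
        bool moved = false;
        for (int d = 0; d < 8; d++) {          /* first valid neighbor */
            int nx = x + DX[d], ny = y + DY[d];
            if (valid(nx, ny)) { x = nx; y = ny; moved = true; break; }
        }
        if (!moved) {
            /* "Backtrack": jump to any unvisited free cell adjacent to
               the visited region; a stand-in for the Wave Front path. */
            bool found = false;
            for (int i = 0; i < N && !found; i++)
                for (int j = 0; j < N && !found; j++)
                    if (valid(i, j))
                        for (int d = 0; d < 8; d++) {
                            int pi = i + DX[d], pj = j + DY[d];
                            if (pi >= 0 && pi < N && pj >= 0 && pj < N &&
                                visited[pi][pj]) {
                                x = i; y = j; found = true; break;
                            }
                        }
            if (!found) return covered;        /* everything reachable done */
        }
        visited[x][y] = true; covered++;
    }
}
```

Because every iteration marks one new cell visited, the loop terminates after at most N×N steps, avoiding the infinite back-and-forth described below.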

This module was developed from scratch by experimenting with ideas as they came up and fixing the faults that testing revealed. The major fault encountered when developing the coverage algorithm was determining when to stop after every cell was covered, without moving back and forth between two cells infinitely. Prior to using the Wave Front backtracking idea, the algorithm ran over 100 times in order to cover just a 5×5 grid map. This was because the backtracking idea used then was to keep track of all the covered cells and undo each move (by going back along the path covered) until an unvisited neighboring cell was found. Because that earlier algorithm retraced the path already covered, it failed whenever an obstacle got in the way of the old path. The Wave Front plan-and-re-plan backtracking concept catered for new obstacles that blocked already covered paths and provided options no matter the situation.

Map building

The next sub-task was to translate the sensor readings, via a mathematical function, into data used for building the map. The approach used in the HIMM algorithm was understandable in theory but difficult to implement. Here too, I experimented with ideas, and with the failures they brought up, until a Scan and Map Building Module was developed. The theory underpinning this module was as follows:

  1. Take sonar readings from the four sensors (an average of 10 readings was always taken to make up for the noise encountered with the sensors).
  2. Convert the readings into grid cell range values by dividing them by the length of one grid cell. This meant that every obstacle was known not just by its distance in centimeters but also by the number of cells away. For example, an obstacle detected 100 cm away has a grid cell range value of 100/45 ≈ 2.22, which is about 2 cells away.
  3. With the grid cell range value and the current bearing of the robot (i.e. North, South, East or West), the appropriate four immediate neighboring cells (because only four sensors were mounted) of the current location were updated with the grid cell range values. As illustrated in Figures 2 and 3, the four neighbors of the current location are updated with values that represent the number of cells between an obstacle and the current location cell. When the robot moves to the next cell, the same procedure is repeated, as shown in Figures 4 and 5 and Figures 6 and 7.
  4. The current cell is given a value of 7 to indicate that it is certainly free. A cell with a value of 7 cannot be re-updated except when the grid cell value of that cell is determined to be 1 (i.e. certainly occupied) during the building process.
  5. Continuously updating the four neighboring cells of each grid cell builds a map that adequately represents the environment. This technique also captures changes to the obstacles present or absent in the environment without erratically changing certainly-free cells.
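The conversion and update rules above can be sketched as follows. This is a minimal C sketch: the 45 cm cell length comes from the report's worked example, while the function names are hypothetical.

```c
/* Grid cell length in cm, matching the report's example (100/45 ≈ 2.22). */
#define CELL_CM 45

#define FREE     7   /* certainly free */
#define OCCUPIED 1   /* certainly occupied */

/* Convert a sonar distance in cm to a whole-cell range value. */
int cm_to_cells(int cm) {
    return cm / CELL_CM;        /* integer division: 100 cm -> 2 cells */
}

/* Update rule for a neighboring cell: a certainly-free cell (7) may
   only be overwritten when the new evidence says certainly occupied (1). */
int update_cell(int old_value, int new_value) {
    if (old_value == FREE && new_value != OCCUPIED)
        return old_value;
    return new_value;
}
```

Protecting the value 7 this way is what keeps certainly-free cells from changing erratically as new readings arrive.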

Figures 2–7

Communication

The next sub-task of the project was to work on communicating with a remote Bluetooth server. I downloaded source code that implemented a Java Bluetooth server from http://www.miniware.net/mobile/articles/viewarticle.php?id=22. After studying the RobotC API and experimenting with the ideas used by other RobotC NXT Bluetooth communication projects online, a communication module was developed. This module did the following:

  1. Turn on the Bluetooth functionality of the robot.
  2. Set the robot Bluetooth visibility feature to discoverable.
  3. Search for and pair with my laptop on port 2.
  4. Activate the communication stream to write raw bytes of data.
  5. Shutdown the Bluetooth server after completion.

After establishing communication with the Bluetooth server, the robot sends initial data that is used to initialize certain parameters on the server side. The robot then sends frequent updates about its current position in the map, its bearing, and the latest version of the map being developed.

On the server side, the data packets are identified and represented approximately in text on the console display. A color-coded interface of the grid also displays the information being sent from the robot for better visualization. This visual component was developed from Java applet code used by Programming II students in one of their assignments. As illustrated in Figure 8, cells with the value 0 are unexplored and colored gray, cells with the value 7 are certainly free and colored green, cells with the value 1 are certainly occupied and colored red, the current cell location is colored yellow, and any other value between 2 and 6 is colored blue and is in the known unvisited-cell list.

Figure 8

Discussion

It is critical to point out that I did not fully understand the implementation of the HIMM technique I studied. The algorithms I developed mostly reflect my own understanding of the concepts presented in the readings and research I had done. After several test runs of the whole project, I realized that the robot constantly lost its sense of location whenever the emergency obstacle avoidance module was triggered.

The robot used the generated map to avoid obstacles, and the Wave Front planning algorithm allowed it to capture changing obstacles very well. However, if the robot detects an obstacle within a 10 cm range in its path that was not detected before it started moving, an emergency avoidance module is triggered to take it back to the cell it was moving from. Triggering this module repeatedly causes the robot to lose track of its current location in the map. The cause of this malfunction is still not known and thus has not been fixed.

One other major challenge was finding existing implementations of the various concepts described in the papers I read. Sample code or pseudocode showing how the techniques presented in the research papers could be implemented was not available to serve as a guide for my work. This is the main reason I moved towards implementing my own understanding of the concepts they presented.

Conclusion

In conclusion, I would say this was a very exciting challenge for me, and I really came to appreciate the technical and mathematical skills needed to develop a fully functioning robotics project. Unlike pure programming, where debugging only occurs in the code, in robotics projects bugs can arise from mechanical faults when building the robot with the Lego kit, electronic faults in the sensors or NXT console, logical errors in the concept of the algorithm being implemented, or syntax errors in the very limited RobotC language.

Beyond the satisfaction of getting the robot to work, I was really excited that I designed and implemented my own algorithms that work quite well, and that I could to a large extent predict every move of the robot. This ability to predict gave me a feeling of control and helped me really appreciate how complicated yet beautiful the human mind is. I believe this was a successful job, and I am excited about taking on the next challenge of mapping the whole Ashesi University campus in my final-year capstone project using the Kinect sensor and the Roomba robotic kit.

Final Project: Planning in Dynamic Environment

INTRODUCTION

The objective of my final project was to program an NXT Mindstorms robot to move in a dynamic environment: an environment where the positions of obstacles are unknown. The algorithm I decided to use to achieve my goal is the D* Lite algorithm, a rapid re-planning algorithm developed by Sven Koenig and Maxim Likhachev. The main tasks in the project were as follows:

  • Implementing the D* Lite algorithm
  • Detecting obstacles
  • Moving from a start to goal location given a planned path

APPROACH

The D* Lite algorithm is implemented using a priority queue, so the first step in this project was to implement one. I implemented the priority queue by transposing into RobotC an implementation done in Java by Mark Allen Weiss. With the priority queue in place, the next step was to start working on the D* Lite algorithm, the main part of the project.

The D* Lite algorithm is made up of five main modules: CalculateKeys, UpdateVertex, Initialize, ComputeShortestPath and Main.

State: In my project, a state describes a cell in the world. It consists of the x and y positions of the cell in the world map, the g value, the rhs value and the keys.

Initialize(): this module initializes the search problem before the shortest path can be computed.

CalculateKeys(): calculates the two keys of a state. The first key determines the position of a state in the priority queue. The second key is used to determine the position of a state on the priority queue when the first keys of two states are the same.

UpdateVertex(): this module updates the rhs-values and keys of the vertices potentially affected by the changed edge costs, as well as their positions in the priority queue.

Main(): the module is the main part of the project and puts together all the other modules.

D* LITE ALGORITHM

The D* Lite algorithm focuses on efficiently identifying the cells that need to be processed. Consider a robot with a goal location in an unknown terrain. In moving from cell to cell to arrive at the goal, the robot uses eight-point connectivity and moves to the neighboring cell with the least cost. The robot calculates the shortest path from its current cell to the goal cell under the assumption that cells with unknown blockage status are traversable. The planned path is then followed to the goal cell as long as no untraversable cell is observed; if a cell is untraversable, the shortest path is re-computed from the robot's current location to the goal. The goal distances of cells (the distance from each cell to the goal) are important because, once computed, a shortest path can easily be determined from the robot's current cell by greedily decreasing the goal distance. The goal distances change as untraversable cells are detected, but the number of cells whose goal distances change is small, and some of them are irrelevant to the calculation of the shortest path. Therefore, to calculate the shortest path efficiently, one must efficiently identify the relevant cells; this is the challenge the D* Lite algorithm solves.
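The cells D* Lite processes are ordered by two-part keys. Below is a minimal C sketch of the key calculation from Koenig and Likhachev's formulation; the Chebyshev heuristic is my assumption, chosen to suit 8-point connectivity with unit move costs.

```c
#include <stdlib.h>

typedef struct { double k1, k2; } Key;

double min2(double a, double b) { return a < b ? a : b; }

/* Chebyshev distance: an admissible heuristic for 8-point connectivity
   with unit move costs. */
double heuristic(int x1, int y1, int x2, int y2) {
    int dx = abs(x1 - x2), dy = abs(y1 - y2);
    return dx > dy ? dx : dy;
}

/* CalculateKey as in Koenig & Likhachev:
   k1 = min(g, rhs) + h(s_start, s) + km;   k2 = min(g, rhs). */
Key calculate_key(double g, double rhs, int x, int y,
                  int start_x, int start_y, double km) {
    Key k;
    k.k2 = min2(g, rhs);
    k.k1 = k.k2 + heuristic(start_x, start_y, x, y) + km;
    return k;
}

/* Lexicographic ordering of keys, used by the priority queue:
   the second key breaks ties on the first. */
int key_less(Key a, Key b) {
    return a.k1 < b.k1 || (a.k1 == b.k1 && a.k2 < b.k2);
}
```

The first key plays the role of an A*-style f-value and the second of a g-value, which is exactly the two-level tie-breaking described for the priority queue above.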

 

RESULTS

Implementing the D* Lite algorithm, the core of the project, was challenging, but to complete the project successfully I implemented each of the modules and ensured that each was working correctly. In piecing together the entire algorithm I encountered various issues I had not considered while implementing the modules. For example, when collecting the neighbors of each cell to be processed, I had to ensure they were within the world map and not outside its boundaries. I first implemented the algorithm to work in a static environment and made sure that path extraction and movement were working perfectly. The next step was to integrate obstacle detection into the project in order to achieve its objective. For this I used code written by Daniel Botchway (Computer Science, Class of 2013), my colleague in the robotics class. These methods detected obstacles with the sonar sensor and updated the world map, in which one (1) represented an obstacle and zero (0) a free cell. Upon including these methods and implementing the re-planning section of the D* Lite algorithm, the algorithm worked only if the obstacles were not in close proximity. I therefore had to demo with fewer obstacles than planned and make sure that obstacles were not next to each other. Looking back, I think the problem with my implementation was how I was trying to avoid obstacles: in deciding which direction to turn, I never checked the world map for obstacles. I believe this omission, in addition to my imperfect implementation of the re-planning section of D* Lite, resulted in the robot failing to move and the program closing unexpectedly.

APPENDIX

Figure 1 below shows an example of the world map used in implementing the D* Lite algorithm. The robot moves one cell at a time, checking the next cell for obstacles. Figure 2 shows the D* Lite algorithm as written by Sven Koenig and Maxim Likhachev in the reference paper for this project, "D* Lite".


Figure 1: example of world map


Figure 2: D* Lite algorithm as written by Sven Koenig and Maxim Likhachev


Figure 3: Ultrasonic sensor used for obstacle detection

 

Final Project: Building a Sorter Robot

Task: Building a Sorter Robot

Flynn Darko

Lecturer: Dr. Ayorkor Korsah

 

Final Robotics Project

 

Introduction & Objective

The objective of this project was to build and program a robot that would sort a number of maize grains painted in different colors. When the color of the painted grain is detected, the sorter robot places it in the assigned container placed in front of it. The robot is supposed to do this until all the grains are sorted into the respective containers.

 

Design Concept

Structure

My first conception of the sorter robot's structure was influenced by a model I saw online, which looked like the one shown in figure 1. I therefore went ahead and built the model using the building instructions that came with it. However, I had stated in my proposal that I would make use of colored maize grains, and looking at the model I had chosen, the grains could not be transported along the inclined structure it provided. The model would work well for colored balls, which would roll down the inclined Lego brick. Though I had presented it as my first milestone, I decided to work on a new model that would be more efficient at transporting the grains during the sorting process. Using a conveyor belt was my next design concept.

figure 1: First sorter robot model


As shown in figure 2, I saw another model which I knew would be the best robot structure model to use. Having no building instructions, I had to build it myself using the various parts from the NXT Mindstorm Lego kit. This took a great deal of the time I had to finish the whole project.

 


figure 2


Program Algorithm

During the building process of the sorter robot, I had the following algorithm concept in mind, which I wanted to implement:

  1. Detect and display grain color on NXT screen
  2. Upon detection, stop the upper conveyor belt and turn the lower conveyor belt (the upper and lower conveyors are shown as Uconv and Lconv in figure 3) to the respective bin collector
  3. Start upper conveyor again till the grain drops in the collector.
  4. Reset lower conveyor to original position
  5. Repeat steps 1 to 4 until all the grains placed on the sorter robot are sorted into their respective bins
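Steps 2 to 4 of the algorithm above can be sketched as a host-testable simulation. The bin angles, color set and function names here are hypothetical, and the motor and sensor calls are elided.

```c
typedef enum { RED, GREEN, BLUE, NUM_COLORS } Color;

/* Hypothetical lower-conveyor (Lconv) angles aiming at each bin. */
int bin_angle[NUM_COLORS] = { -45, 0, 45 };

int lconv_angle = 0;          /* current lower-conveyor position */
int sorted_count[NUM_COLORS]; /* grains dropped into each bin */

/* Steps 2-4 for one detected grain. */
void sort_grain(Color c) {
    lconv_angle = bin_angle[c];  /* step 2: turn Lconv to the bin */
    /* step 3: restart Uconv until the grain drops (elided) */
    sorted_count[c]++;
    lconv_angle = 0;             /* step 4: reset Lconv */
}
```

Running this once per detected grain, until no grains remain, realizes step 5 of the loop.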

The next stage was implementation.


figure 3

Implementation

The figures above (figures 2 and 3) show the model of the robot I built. I used a color sensor to detect the color of the Lego brick (the colored maize grains were not detected well by the color sensor, so I used Lego bricks instead). To get a conveyor belt, I attached a motor to a wheel with a conveyor belt wrapped around it. This is shown in figure 4.


figure 4


Challenges

As already mentioned, getting the structure of the sorter robot right was the one major challenge I faced. Though I had the program algorithm written down, I could not implement it without building the sorter robot, so I was only able to implement the sorting program very close to the demonstration due date. During the demonstration, I still had some modifications to make to the rotation of the lower conveyor (Lconv) so that it moved towards the right bin throughout the sorting process.

Conclusion

This final project proved to be very challenging since I had to build the robot from scratch without any building instructions. However, I found it very fulfilling when I was finally able to get it done and implement my program.

Relevance to Problem Solving in Ghana

In Ghana we could have robots that can sort waste to hasten the process of recycling. For example in homes, such robots can be programmed to sort waste materials into labeled bins which would make the reuse of waste possible and much easier.

Final Project – Remote Controlled Treasure Hunt Game

Introduction

My project was to implement a remote controlled treasure hunt game. The goal of the game was to collect the largest number of treasures given a fixed period of time by grabbing and dropping the treasures on the colored paper that matched the color of the treasure. The core of the implementation was to realize a duplex Bluetooth communication link between the controller NXT brick and the robot NXT brick.

Design Concept

The project required the use of four NXT bricks: two to serve as master bricks, or remote controls, and the other two as slave bricks, or drones, for the hunt. The master bricks read bytes from an NXT numeric pad and transmitted the values read via Bluetooth. I obtained the code for reading from the numeric pad from the sample code in the RobotC IDE and converted the values read into function calls, implemented using a switch-case. Each remote for the treasure hunt comprised an NXT brick and an NXT numeric pad connected via a cable. During the early stages of my project, I realized that the structure of a Tribot (the regular blueprint for building robots for tasks that require grabbing) was inefficient in its grabbing and holding mechanisms. To solve this problem, I constructed a castor bot. The castor bot's grabbing mechanism featured a motor turned horizontally, which gave direct control over the claws, as opposed to the vertically inclined third motor of the Tribot, which required gears. The gears of the Tribot would sometimes slip or rub against each other and render the claw-moving mechanism useless. Figure 1 shows the first remote control and robot constructed as part of the testing process.


Figure 1. Controller 1 and Drone 1
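The switch-case dispatch on the master brick might look like the following C sketch. The key-to-command mapping and names here are hypothetical, since the actual values come from the RobotC sample pad-reading code.

```c
typedef enum { STOP, FORWARD, BACKWARD, LEFT, RIGHT } Command;

/* Translate a numeric-pad value into a drone command, mirroring the
   switch-case dispatch on the master brick; the mapping is illustrative. */
Command decode_key(int key) {
    switch (key) {
        case 2:  return FORWARD;
        case 8:  return BACKWARD;
        case 4:  return LEFT;
        case 6:  return RIGHT;
        default: return STOP;   /* unmapped keys stop the drone */
    }
}
```

Centralizing the mapping in one function keeps the Bluetooth payload a single byte while the drone interprets it consistently.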

To allow the robots to sense the color of the treasure items and of the dropping zones, each robot was fitted with two color sensors: one attached horizontally to detect the color of imminent treasures, and the other attached vertically to detect the color of dropping zones. Figure 2 shows a horizontally attached color sensor.


Figure 2. Horizontal Color Sensor

Implementation

From the paragraphs above, the components required for this project were two NXT bricks for remote controls, two NXT bricks for the drones used in gathering, and four color sensors (two per drone) to detect treasures. The treasures consisted of bottles wrapped in colored paper. During the implementation and testing phase, I noticed a flaw that Dr. Korsah also drew my attention to: the drones moved at a power of approximately 30 and would thus lead to a very slow game. To counter this, I implemented a method and initialized two attributes in the code that ran on the drone brick: gear and power. The gear attribute was an integer that stored values 1 to 4 depending on the motor power and the rotation achieved. The power attribute was an integer that stored values 30 to 100. To achieve acceleration, I read the encoder values of one of the motors during a call to the accelerate function and, depending on the value, gear and power would either increase or decrease. The result was a drone that sped up after a meter of movement. Furthermore, to make the game more interactive and interesting, I ensured that all readings made on the drone robot, including speed and colors from the color sensors, were sent back to the controller. This allowed a player to move and pick objects without having to pay too much attention to the color of the object being picked. To determine who wins at the end of the given time period, a counting algorithm was implemented on the drone and the results sent back to the master for viewing. The counting algorithm was built on the idea that if the color of a treasure item matched that of the dropping zone, the corresponding color count should increase by one. This made the game fun and meant a player did not have to manually count the number of items he or she collected at the end of the game.
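The gear and power attributes described above can be sketched as follows. The thresholds and the degrees-per-meter figure are illustrative assumptions, not the measured values from the project.

```c
/* Illustrative thresholds: accumulated encoder degrees decide the gear.
   One meter of travel is roughly 2000 encoder degrees on a standard
   NXT wheel (an assumption, not a measured value). */
int gear_for_encoder(long degrees) {
    if (degrees < 500)  return 1;
    if (degrees < 1000) return 2;
    if (degrees < 2000) return 3;
    return 4;
}

/* Map gear 1-4 onto the report's power range 30-100. */
int power_for_gear(int gear) {
    static const int power[] = { 0, 30, 53, 76, 100 };
    return power[gear];
}
```

Calling these from the accelerate function each time the encoder is read yields the described behavior: the drone steps up through the gears as it covers distance.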

Discussion

The section above highlighted one of the flaws I noticed during the test stages. The Bluetooth messaging algorithm presented another flaw I did not see coming: I could only send messages in one direction, from the master to the slave (controller to drone). After a day or two of further experimenting with the code, I had a breakthrough and was able to send the gear value from the drone to the controller and have it displayed. This gave me a sense of fulfillment. Prior to that, however, building the castor bot proved more tiring than I expected, mainly because it required pieces that our kit did not possess, which meant hunting for the pieces used in the online blueprint I found. Difficult!

Conclusion

Working on this project was fun and educative. The remote-controlled robots reminded me of my younger days and how much I desired to own one; I now know I could have built my own. The educational aspect of the project was inspired by the possibilities of expanding it. I thought about tele-operated robots, a topic I presented on a week or two before beginning work on my project. I thought about the impact tele-operated tractors could make on farms in Berekuso, and what tele-operated medical robots could do for remote medicine in Africa. Endless. I'm glad I worked on this project.

Task 5 – Follow the Leader

by Abdelrahman Barakat

Follow the Leader

• Introduction


As autonomy grows in relation to vehicular control and navigation, fully autonomous driving is quickly becoming a reality. In the past few years, we have seen several significant steps being made; some of the most famous are the Google car and the Volvo highway train. This project was inspired by Volvo's model, in which several follower vehicles are connected wirelessly to a leading vehicle driven by a professional driver, which controls part of the journey. The purpose of this mode of driving is to allow the drivers of the follower cars to take a break during long journeys. Vehicles which have the system installed can choose to join or leave the convoy at any point in the journey.

• Background

Autonomous driving is one of the most anticipated futuristic technologies and could dominate the roads in the near future. Realizing the dream of an efficient autonomous driving system involves sensing, thinking, and acting. In the Volvo model, sensing is done internally within the leading truck and is translated into wireless signals, which the follower vehicles decode and analyze so they can be safely used for control. An advantage of internal sensing is that it can be more predictable, since the components being sensed within the truck do not change.

The analysis is done by an algorithm which performs all the calculations and computes the reaction to be executed by the follower cars.

[Image: Follow the Leader]

• My Approach

To create a system designed to copy the moves of a leading vehicle, there are several possible approaches. One is to communicate wirelessly between the vehicles; another is to use sensors such as ultrasonic sensors and cameras to track the exterior motion of the leader. The way I looked at the problem was to divide it into two main sections: direction and speed. As the leader vehicle changes its direction or speed, the follower should react similarly. I used external sensors: a camera and an ultrasonic sensor. The camera was used to track the change in direction, while the ultrasonic sensor was set to keep a constant distance from the back of the leader.

The camera I used (NXTCam) depended on a technique which identifies “blobs” of a particular color and their position in the image frame. I used this information to move the front wheels right and left depending on the relative position of a brightly colored orange object at the back of the leader vehicle. The ultrasonic sensor was attached to the front wheels such that it turned towards the object, to avoid losing track of the leader. The distance from the back of the leader truck was measured and the speed adjusted depending on how close the follower was: the closer the follower, the slower the speed. All the analysis was done by the follower robot. The leader robot was a remote-controlled robot with a trailer attached to it. The Lego construction was designed to look similar to normal vehicles, as seen in the images.
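The two control rules just described can be sketched as simple proportional functions. This is an illustrative C sketch under assumptions of mine (a 176-pixel-wide camera frame, distances in centimetres, made-up gains and limits); the actual project used different values and ran as ROBOTC on the NXT.

```c
#include <assert.h>

/* Steering proportional to the blob's offset from the image centre.
 * Assumes a 176-px-wide frame (hypothetical); positive = steer right. */
static int steerFromBlob(int blobX) {
    const int centre = 88;           /* half of the assumed 176-px frame */
    return (blobX - centre) / 4;     /* crude proportional gain */
}

/* The closer the follower, the slower it drives; stop when too close.
 * Distance band 15..60 cm is illustrative, not the project's tuning. */
static int speedFromDistance(int distanceCm) {
    const int minDist = 15, maxDist = 60;
    if (distanceCm <= minDist) return 0;    /* too close: stop */
    if (distanceCm >= maxDist) return 100;  /* far behind: full speed */
    return (distanceCm - minDist) * 100 / (maxDist - minDist);
}
```

Each control cycle the follower would read the blob position and ultrasonic distance, then apply the two outputs to the steering motor and drive motors respectively.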

• Results

After several weeks of hard work, I finally completed the project. I was able to adjust the steering of the robot to make it always follow the leader. The robot could also keep a safe distance from the leading robot and avoid colliding with it.

Some of the problems I faced included dealing with variation in room illumination during different times of the day and understanding the syntax used to program the camera.

• In Conclusion

An autonomous driving system like this one could be implemented in real life, but with several constraints, including operation during daylight only. A better alternative could be wireless communication, perhaps a Bluetooth connection between the NXT bricks. However, using the camera and the ultrasonic sensor makes the system closer to how human beings operate cars: by vision. Rebuilding this project with a better camera and connection is definitely worth trying.

Tic-tac-toe

FINAL PROJECT REPORT

INTRODUCTION:

This semester has been one of the greatest for me as a student in this institution. Even though it had some rough times, it still counts as one of the greatest. Eight of us, including myself, took the robotics class and, to be honest, we were thrilled with the various activities we were engaged in. For each task, we were required to share our experiences in the form of reports. This is our final report, which covers our final project, also part of the requirements of this course. Each student worked on a unique project and presented it on Tuesday, December 12, 2012. At the end of the day, each student, in my view, felt fulfilled, and the purpose of this report is to give you the breakdown of my project.

BACKGROUND:

My project, dubbed Tic-tac-toe, is a very common game that most people play. It is a two-player game played on a 3×3 grid. Each player chooses either ‘X’ or ‘O’ as a character and wins by lining three of them up vertically, horizontally or diagonally. Furthermore, the game is referred to as a zero-sum game: when we add up the wins and losses after a couple of games, the sum is zero. Tic-tac-toe is not the only zero-sum game; chess, checkers and others are zero-sum games as well. The question is, what is unique about this project? The rules and mode of play are the same; the major difference is that this time you are not playing against a fellow human but against a robot, an NXT robot, that is smart and unbeatable. You may ask what makes this robot smart and unbeatable, and the answer is quite simple: the robot uses two major methods, which I refer to as its “brain power”, and which I will discuss shortly.

APPROACH:

As said earlier, my project is for a robot to play the tic-tac-toe game with a human, and it performs this task using two algorithms: minimax, the main brain power the robot uses to play the game; and the wavefront algorithm for path planning, that is, to be able to move through the 3×3 grid once the robot has decided where to play.

In detail, the wavefront algorithm is used in path planning to find the shortest, optimal way from a starting point to a goal while avoiding obstacles in various positions on a given map. Obstacle cells are labelled with the number 1, as in the figure below. In my project, cells that had been played by either the robot or the human were treated as obstacle cells after every move, so that the robot would not move through those occupied cells.
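As a sketch of the labelling process just described, the following C code propagates a wavefront outward from the goal over a small grid using breadth-first search, marking the goal with 2 and each successive wave with one more (1 marks obstacles). The 4×4 layout and values are illustrative, not the project's actual map.

```c
#include <assert.h>

#define ROWS 4
#define COLS 4

/* Breadth-first wavefront propagation: on return, every reachable free
 * cell holds 2 + (shortest number of steps to the goal). */
static void wavefront(int grid[ROWS][COLS], int goalR, int goalC) {
    int queue[ROWS * COLS][2];
    int head = 0, tail = 0;
    grid[goalR][goalC] = 2;
    queue[tail][0] = goalR; queue[tail][1] = goalC; tail++;
    const int dr[] = {1, -1, 0, 0}, dc[] = {0, 0, 1, -1};
    while (head < tail) {
        int r = queue[head][0], c = queue[head][1]; head++;
        for (int k = 0; k < 4; k++) {
            int nr = r + dr[k], nc = c + dc[k];
            if (nr < 0 || nr >= ROWS || nc < 0 || nc >= COLS) continue;
            if (grid[nr][nc] != 0) continue;  /* obstacle or already labelled */
            grid[nr][nc] = grid[r][c] + 1;
            queue[tail][0] = nr; queue[tail][1] = nc; tail++;
        }
    }
}
```

After the labels are filled in, the robot can reach the goal from any cell by repeatedly stepping to a neighbour with a smaller label.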

[Figure: wavefront grid]

The whole purpose of this algorithm is to show the human that the robot has identified the various moves made by both players and will not tamper with those cells. Below is a representation of the game board; in this case, floor tiles were used to play the game.

[Figure: game grid]
The shaded region was where the game actually took place (that is, the 3×3 grid), and the unshaded region was the path the robot used to get to its goal whenever it was its turn to play. At the lower left corner of the grid you will find a cell labelled S: that was the robot's starting point, where it waited for its turn after every move.

The second method was the minimax algorithm, which has been proven to be one of the best algorithms for such games. It is quite slow because of its O(b^x) complexity (branching factor b, search depth x), but unbeatable. It is a very interesting algorithm, and this report gives a brief description of how it works.

[Figure: minimax game tree]

The figure above is a simple representation of the minimax algorithm in a typical game, using a tree. The tree has rows labelled “MAX” and “MIN”. In a “MAX” row, the algorithm checks all child nodes, finds the one with the highest heuristic value (the number on the node), and returns that value to the parent node. In a “MIN” row, the algorithm does the same check but returns the least heuristic value to the parent. For instance, take the children 7, 4 and 5 in the bottom row: the least among these is 4, so the MIN node above them receives 4. This is done recursively until the value reaches the root, alternating between MAX and MIN rows.
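The bottom-up evaluation just described can be written as a short recursive function. Below is an illustrative C sketch over a fixed-depth tree with branching factor b, using the textbook example leaf values 3, 12, 8, 2, 4, 6, 14, 5, 2 rather than the values in the figure; the names and depth are my own, not the project's ROBOTC code.

```c
#include <assert.h>

/* Recursive minimax over a complete tree of fixed depth.
 * Leaves are stored level-by-level in an array; the children of node
 * `index` on the next level are at indices index*b + 0 .. index*b + b-1.
 * At a maximizing level the best child value is the largest; at a
 * minimizing level it is the smallest. */
static int minimax(const int *leaves, int index, int depth, int b, int maximizing) {
    if (depth == 0)
        return leaves[index];          /* leaf: return its heuristic value */
    int best = maximizing ? -1000 : 1000;
    for (int k = 0; k < b; k++) {
        int v = minimax(leaves, index * b + k, depth - 1, b, !maximizing);
        if (maximizing ? (v > best) : (v < best))
            best = v;
    }
    return best;
}
```

With the example leaves above (MAX root, MIN children, b = 3), each MIN node takes the minimum of its three leaves and the root takes the maximum of those minimums.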

Basically, these were the main components for my project. Breaking it further, my project was divided into 3 milestones:

  1. Implement the minimax algorithm.
  2. Implement the wavefront algorithm.
  3. Combine the two algorithms mentioned above into one program.

Each milestone was allotted at least one week before the final day of presentation, so work had to commence as soon as the project and its milestones were clear.

RESULTS:

The project was successful within the time given us; however, milestone 3 could not be accomplished, although I had the two separate programs for milestones 1 and 2 ready. Milestone 3 could not be done because I had to translate the minimax algorithm from the C language into ROBOTC, the robot's language. Some of the C syntax was not the same as the ROBOTC syntax, and it really was a struggle, but it did not stop me from giving my final presentation. I devised a workaround for the presentation, which was to run both programs together but on different platforms. It was successful: the robot could play a game with a human, only slowly. A typical game took 4 to 5 minutes to finish, whereas it could have been shorter if milestone 3 had been completed.

CONCLUSION:

“There are no secrets to success. It is the result of preparation, hard work, and learning from failure” – Colin Powell. Even though my project did not see full completion, it was hard work that brought me to the point of presenting a pretty good piece of work. It has been an honour to be part of this journey.