Task 3 by Kwame Afram and Lady-Asaph Lamptey

Maze Task
This task requires us to write software that enables our robot to get through a maze from a specified start location to a goal location. The start location is where the robot begins, and the goal location is the brightest part of the maze. The task requires us to use at least two types of sensors to guide the robot through the maze. Our robot should be able to determine when it has reached the goal; once there, it should stop on its own and give some kind of audible indication that it is done.

Idea for navigating maze
After carefully studying the maze, we realised that our robot, “Bumpy”, could get through it successfully with four main movement strategies: line following, turning right, reversing and going straight. This is not to say that our robot had no need to turn left, but that anywhere a left turn would have been necessary, there was a dark trail (or strip) that it could follow.
The line-following algorithm we employed works for both straight and curved lines in any direction. This made it possible for us to accurately turn left whenever we really needed to, while sticking to conventional right turns in the maze, since only two of the nine possible turns were left turns.

A picture of the maze that got us to appreciate right-hand drive countries

Building the robot to best fit the Maze task
Given the general layout of the maze, the objects to sense within the maze and our little right-hand-drive observation, this is how we decided to rebuild our robot to suit the task.
Sensors used:
Two touch sensors
Only one was connected to a port on the NXT brick. However, both touch sensors were attached to a modified bumper we built, so that the connected touch sensor would be triggered even when only the unconnected touch sensor was hit.
Two light sensors
One for ambient light and the other for active light

Picture of “Bumpy” the Maze Wizard.
*The sonar sensor in the picture was not used in the Maze Task

Functions available to the Robot:
drive_forward()      – to move the robot forward
drive_backward()   – to move the robot backward
turn_left()                – to turn the robot in the left direction
turn_right()              – to turn the robot in the right direction
stop()                      – to stop the robot from moving
celebrate()              – to stop the robot and make a sound after task completion
bumper()                 – to check if the touch sensors have been triggered, and reverse and turn right if they were
line_track()              – to follow the trail of dark lines within the maze
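
For concreteness, here is a minimal RobotC-style sketch of how some of these routines could look. The port assignments, power levels and timings below are illustrative assumptions, not the exact values from our program.

// Assumed setup (illustrative): drive motors on ports B and C,
// bumper touch sensor on S1. Power levels and timings are example values.
#pragma config(Sensor, S1, touchSensor, sensorTouch)

void drive_forward()  { motor[motorB] = 50;  motor[motorC] = 50;  }
void drive_backward() { motor[motorB] = -50; motor[motorC] = -50; }
void turn_right()     { motor[motorB] = 50;  motor[motorC] = 0;   }
void turn_left()      { motor[motorB] = 0;   motor[motorC] = 50;  }
void stop()           { motor[motorB] = 0;   motor[motorC] = 0;   }

void bumper()
{
  if (SensorValue[touchSensor] == 1)   // bumper pressed?
  {
    drive_backward();
    wait1Msec(700);                    // back off briefly (assumed duration)
    turn_right();
    wait1Msec(1000);                   // roughly a quarter turn (assumed duration)
  }
}

void celebrate()
{
  stop();                              // cut power to both motors
  PlayTone(880, 50);                   // audible "done" signal (880 Hz, ~0.5 s)
}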

Implementation strategy:
When the robot is started, it first goes straight [using the drive_forward() function] until it either bumps into a wall of the maze or senses the dark trail line. If the robot bumps into a wall, the bumper() function is invoked, causing the robot to reverse [using the drive_backward() function] and then turn right [using the turn_right() function]. If the robot senses the dark trail line instead, it follows the line [by invoking the line_track() function]. If, while tracking the line, the robot bumps into a wall at any point, it again invokes the bumper() function, causing it to reverse and then turn right. Our program iterates this logical sequence with the ambient light sensor reading a value greater than 36 as its “operational condition”, the main condition under which the program runs. If the ambient light sensor detects a value less than 36, the program stops the robot and plays a sound via the celebrate() function. The celebrate() function uses the stop() function to stop the robot's motors.
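
A condensed sketch of this control loop in RobotC is given below. The sensor port names and the line threshold are assumptions for illustration; the cutoff of 36 is the calibration value mentioned above.

// Assumed sensor setup (illustrative ports):
#pragma config(Sensor, S1, touchSensor,  sensorTouch)
#pragma config(Sensor, S2, ambientLight, sensorLightInactive)
#pragma config(Sensor, S3, activeLight,  sensorLightActive)

task main()
{
  while (SensorValue[ambientLight] > 36)        // the "operational condition"
  {
    if (SensorValue[touchSensor] == 1)
      bumper();                                 // reverse, then turn right
    else if (SensorValue[activeLight] < 40)     // 40 = example line threshold
      line_track();                             // follow the dark trail
    else
      drive_forward();                          // default: go straight
  }
  celebrate();                                  // dark goal area reached
}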

Challenges Faced
Calibration for the different areas within the maze turned out to be a bit of an issue, as we had difficulty detecting the light bulb placed at the final destination. As it happened, the light bulb blew just before the demo, so the final destination was no longer distinguished by light but rather by darkness. This meant we had to tweak the program code repeatedly until our calibration and dark-spot logic were right.

Bumpy bumping into the wall of the maze               Bumpy tracking a line in the maze

Wall Following Task

Idea for wall following
We decided to implement bang-bang control for wall following. To implement wall following, the robot used three main movement strategies: turning left, turning right and going straight.
The wall-following algorithm we used was: move the robot to the left if it is less than 6 cm away from the wall, move it to the right if it is more than 9 cm away from the wall, and move it straight if it is between 6 cm and 9 cm away.

Functions available to the Robot:

drive_forward()      – to move the robot forward
turn_left()               – to turn the robot in the left direction
turn_right()             – to turn the robot in the right direction
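
A minimal sketch of this bang-bang rule, using the routines listed above and assuming the sonar sensor is configured on port S4 and reports distance in centimetres:

#pragma config(Sensor, S4, sonar, sensorSONAR)   // assumed port for the sonar

task main()
{
  while (true)
  {
    int distance = SensorValue[sonar];   // distance to the wall in cm

    if (distance < 6)
      turn_left();        // closer than 6 cm
    else if (distance > 9)
      turn_right();       // further than 9 cm
    else
      drive_forward();    // within the 6-9 cm band
  }
}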

Bumpy trying his hand at the Wall Following Task

Challenges Faced
One challenge we faced was where to place the sonar sensor on the robot. If the sonar sensor's emitter and receiver are not well aligned, the readings from the sensor will be wrong. Also, if the sonar sensor and the robot were not aligned correctly, the reading from the sensor would not be a true representation of the robot's distance from the wall, especially when the robot was negotiating a bend. We solved both problems by repositioning the sonar sensor very close to the left tyre and aligning it so that its emitter and receiver could send and receive correct values.

Ghana’s problems and solution using sensors
In the northern region of Ghana, lack of electricity is a major problem, so using alternative sources of electricity is a solution that has been widely discussed by students, government and citizens. Implementing a system that alternates between hydro and solar energy could help solve the electricity problem, and automating the switch between the different sources would make the system easier to use and more effective. The sensors that would be used are laser and light sensors. The laser sensors would determine the intensity of fog, if any, and the light sensors would read the light intensity; the two would be combined to ensure that the light-intensity readings are correct. If the light intensity is low due to harmattan or darkness, the power source is switched to hydro energy, and if the light intensity is high, the power source is switched to solar energy.

Ashesi University locks its library, labs and lecture halls at 10pm, which prevents students from using these facilities at night. The main reason is that the school cannot keep track of the people who use those facilities after 10pm, so there is no accountability. To solve this problem, Ashesi could implement a system that requires all students, faculty and staff to scan their thumbs before they can enter or exit the library, labs or lecture halls after 10pm. The sensor the system would use is a thumbprint scanner.

Task #3 – Maze navigation

Name: Samuel Kwadwo Obeng

Course: Robotics

Class: 2012

Date: March 12, 2012

Task 3: Maze navigation

Objectives:

a)      Write a library of movement routines

b)      Implement a wall following algorithm

c)      Combine a) and b) to implement maze navigation

Part 1:       Movement routines

In this task, RobotC was used to create the following routines: drive_forward(), reverse(), stop(), turn_left(), and turn_right(). A motor power of 50 was assigned to both motors B and C. In the drive_forward() routine, a while loop was created so that the motors continue to move at a power of 50 while the given condition is true. In the stop() routine, both motors were assigned a power of 0. For the robot to drive backward (the reverse() function), motors B and C were each assigned a power of -50. For the robot to turn right, motor C was given a power of 0 while motor B was given a power of 50, with a wait time of 1350 milliseconds for the robot to complete a 90-degree turn. Similarly, for the robot to turn left, motor B was given a power of 0 while motor C was given a power of 50, again with a wait time of 1350 milliseconds to negotiate a 90-degree left turn.
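
A sketch of these routines in RobotC, using the powers and the 1350-millisecond turn delay described above (the while-loop condition used in the actual drive_forward() routine is omitted here for brevity):

void drive_forward()   // both motors at power 50
{
  motor[motorB] = 50;
  motor[motorC] = 50;
}

void reverse()         // both motors at power -50
{
  motor[motorB] = -50;
  motor[motorC] = -50;
}

void stop()            // cut power to both motors
{
  motor[motorB] = 0;
  motor[motorC] = 0;
}

void turn_right()      // motor B drives, motor C holds
{
  motor[motorB] = 50;
  motor[motorC] = 0;
  wait1Msec(1350);     // time for roughly a 90-degree turn
}

void turn_left()       // motor C drives, motor B holds
{
  motor[motorB] = 0;
  motor[motorC] = 50;
  wait1Msec(1350);     // time for roughly a 90-degree turn
}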

Part two: maze navigation

To navigate the maze, three sensors were attached to the robot. The touch sensor was used to detect collisions with obstacles; this enabled the robot to think and act, invoking the next appropriate routine. The ultrasound sensor was used to measure the distance between the robot and the nearest obstacle on the robot's right side. The light sensor was used to measure the light intensity in the goal area.

Wall following

The touch sensor was used as the robot's bumper: when the robot hits a wall or any obstacle, the sensor value reads one. This value is combined with the sonar value to let the robot decide whether to turn right or left. The sonar sensor measured the distance between the robot and the nearest obstacle on the right side of the robot, such that if the distance was less than 40 centimeters and the bumper value was 1, the robot would turn right; otherwise, if the bumper value was 1 and the sonar value was greater than 40, the robot would turn left. When the robot reaches a dark area (the goal area) and the light sensor reads 15 or less, the robot is at its goal, so it stops.
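
Putting these readings together, the decision logic described above might look roughly like the following in RobotC. The sensor port names are assumptions; the 40-centimeter and 15 thresholds are the values given above.

// Assumed sensor setup (illustrative ports):
#pragma config(Sensor, S1, bumperTouch, sensorTouch)
#pragma config(Sensor, S2, lightSensor, sensorLightInactive)
#pragma config(Sensor, S4, sonar,       sensorSONAR)

task main()
{
  while (SensorValue[lightSensor] > 15)     // not yet in the dark goal area
  {
    drive_forward();

    if (SensorValue[bumperTouch] == 1)      // bumped into a wall or obstacle
    {
      if (SensorValue[sonar] < 40)          // nearest obstacle on the right < 40 cm
        turn_right();
      else                                  // right side reads more than 40 cm
        turn_left();
    }
  }
  stop();                                   // goal area reached
}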

Fig1: Image of the maze showing the start and goal

Fig2. Image of the robot starting its navigation

Fig3: The robot uses its sensors to decide the next possible route.

Fig 4: The robot is reaching its goal area

In this task, PID control was used to correct the movement of the robot. PID control works such that if the measured speed is not equal to the desired speed, the PID algorithm uses the difference in speed (the error) to adjust the motor power so as to bring the actual speed closer to the desired speed. To implement it, PID control was enabled on both motors by setting their nMotorPIDSpeedControl property to mtrSpeedReg. To debug the implementation, I navigated to the NXT Device Control Display and noted the changes in motor speed as the robot was suspended and then run on a block.
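
Enabling this built-in speed regulation in RobotC takes only a couple of lines, sketched here with an example power value:

// Enable the firmware's PID speed regulation on both drive motors.
nMotorPIDSpeedControl[motorB] = mtrSpeedReg;
nMotorPIDSpeedControl[motorC] = mtrSpeedReg;

// With regulation on, the assigned power acts as a speed set-point and the
// firmware adjusts the actual power to hold it (here, an example power of 50).
motor[motorB] = 50;
motor[motorC] = 50;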

Discussions:

In this task, the robot performed its required task by navigating from the start area to the goal area. It did well, reaching the goal area on the first attempt. However, in subsequent attempts a number of problems cropped up: the touch sensors stopped working well, and fluctuations in the weather caused the light-sensor calibration to change from time to time. The malfunctioning of the touch sensor forced me to redesign the bumper. The ultrasound sensor, however, functioned perfectly. After much effort recalibrating the bumper and light-sensor values, the robot achieved its goal on further trials.

In this task, I learnt about sensors and their calibration. I was able to include the sensor libraries in the code and use them to perform the required task, and to navigate to the NXT sensor devices screen to name the sensors. I learned many ways to use the NXT sensors, such as wall following, line tracking, and detecting collisions with obstacles. I also noticed how small changes in the environment could cause the robot to deviate from its task or possibly cause its algorithm to malfunction. I realised as well that the algorithms I devised for this task were built on the algorithms I implemented in some of my previous tasks, which made it possible to complete this task in a very short time.

Two problems relevant to Ghana whose solution could involve sensing as an important component are defense and natural disasters. A couple of years ago, a tanker of oil went missing from Ghana's territorial waters, and other items are smuggled into neighboring countries from time to time. The nation could protect its coastal borders with radar sensors so that, where human monitoring is impossible, these sensors could detect and record activity in such areas. Annual floods have become almost traditional in the city of Accra. With the help of remote sensing, we could obtain primary data about objects on the earth's surface, including water levels observed from space, and the information obtained from the sensor readings could be used to communicate warnings to citizens. This would help provide accurate data for positioning infrastructure, aid national development, and reduce casualties resulting from natural disasters such as flooding.

Task #3 Navigating a Maze

Shim Kporku & Ariel Taylor

Robotics and Artificial Intelligence

Write-Up for Task 3.

The objective of this task was to have the robot traverse a maze to a particular point, guided only by the sensors attached to it. There are also a few obstacles within the maze that the robot must navigate around without assistance. Our group chose to use wall following and line following to guide the robot's movement to the goal point. The following sensors were used in this task: two sonar sensors and a light sensor. When the robot reached the target, it made a sound to indicate that it had arrived at the proper destination.

The strategy for solving the maze was split into 3 parts:

-Enter the maze

-Wall follow

-When in space find a wall.

Enter the maze

The entire code was placed inside a

while (true) { ... }

loop. The first few lines of code, which move the robot in a straight line, were therefore the state the robot reverted to when it was not doing anything else, i.e. its default state. When it encountered an obstacle, it reacted by wall following or turning. As such, when the robot entered the maze, it turned right and continued into the maze by wall following.

 

Wall following

Wall following was implemented with two ultrasound sensors mounted on the robot at 90 degrees to each other. One was placed at the front to detect oncoming obstacles; the second was positioned at the side to implement wall following. There were three possible states for wall following: at a specific distance from the wall, greater than that distance, and less than that distance.

 

Fig 1.1 Robot with 2 ultrasound sensors and a light sensor.

 

As a result, when the robot was at that specific distance from the wall, it moved in a straight line. If it was further away, it reacted by turning gently to the left (towards the wall), and if it was closer, it reacted by turning to the right (away from the wall). This gave the robot a slightly S-shaped movement as it travelled.
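
A rough RobotC-style sketch of this behaviour is given below. The port names, the motor-to-wheel mapping, and the distance band are illustrative assumptions, since the exact set-point is not recorded here; only the three-state logic itself follows the description above.

// Assumed setup (illustrative): front sonar on S1, side sonar on S2,
// motor B driving the right wheel and motor C the left wheel.
#pragma config(Sensor, S1, frontSonar, sensorSONAR)
#pragma config(Sensor, S2, sideSonar,  sensorSONAR)

task main()
{
  while (true)
  {
    int side   = SensorValue[sideSonar];   // distance to the wall, in cm
    int target = 20;                       // example set-point
    int band   = 3;                        // example tolerance

    if (side > target + band)        // drifted away: turn gently left, towards the wall
    {
      motor[motorB] = 50;
      motor[motorC] = 30;
    }
    else if (side < target - band)   // too close: turn gently right, away from the wall
    {
      motor[motorB] = 30;
      motor[motorC] = 50;
    }
    else                             // within the band: straight ahead (default state)
    {
      motor[motorB] = 50;
      motor[motorC] = 50;
    }
  }
}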

 

 

When in space find a wall

We encountered a problem when the robot could no longer follow a wall: it simply continued to move forward until it ran into another wall section of the maze. A solution was devised but not used, because it did not work all the time. Essentially, if both ultrasound sensors failed to find a wall (checked with nested if statements), the robot was to turn 90 degrees and move forward a bit, repeating this until it encountered a wall. However, the robot tended to move in circles without finding a wall. This was essentially our biggest challenge.
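
For reference, the devised (but never reliably working) logic was roughly as follows, using the sensor names from the sketch above; the "no wall in range" cutoff and the timings are placeholders.

// If neither sonar sees a wall within range (nested if statements, as in the
// original idea), spin roughly 90 degrees, creep forward, and check again.
if (SensorValue[frontSonar] > 100)
{
  if (SensorValue[sideSonar] > 100)
  {
    motor[motorB] = 50;      // spin in place (direction chosen arbitrarily here)
    motor[motorC] = -50;
    wait1Msec(900);          // placeholder timing for ~90 degrees

    motor[motorB] = 50;      // then move forward a bit
    motor[motorC] = 50;
    wait1Msec(1000);         // placeholder timing
  }
}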

Fig 1.2 Robot lost in space.

Two problems relevant to Ghana, the solution of which could involve sensing as an important component

Problem 1

One problem relevant to Ghana is traffic congestion. If cameras were installed to detect the most heavily congested areas, traffic patterns could be shifted to reduce the number of cars queuing at each green light. For the traffic monitor, a camera would be used for visual detection of the traffic situation. The camera would broadcast to a road service station and show the peak hours in traffic-prone areas. From there, the controllers could be adjusted so that the traffic lights change the flow of cars through major intersections.

 

Problem 2

Most forms of public transport, such as buses, minibuses or taxis, have a maximum weight limit or a maximum number of people allowed on board. However, when trying to reach remote areas, making multiple trips with only the legal number of people on board is not feasible, especially given the distance and time it would take. As a result, these vehicles tend to be overloaded to transport as many people and goods as possible to their destinations. In general, the main problem with an overloaded vehicle is travelling uphill. A gyroscope sensor, in addition to a weight sensor, would give the driver the feedback needed to determine whether the vehicle can move uphill with that weight. The gyroscope measures the angular velocity at which an object is travelling; depending on the values shown, this sensor can gauge how well the load can be carried based on its weight and the road. Applied to the transportation sector, this tool could improve road safety to some extent.

 

Task #3 – Navigating in a maze

Objectives

1. Write a library of movement routines for your robot

2. Implement wall-following in your robot

3. Navigate maze using at least 2 sensors

Writing a library of movement routines

This task was fairly easy. We chose to write routines for:

driving forward, driving backward, spinning left and spinning right about the robot's center, turning right and left, and stopping. For the turns, we chose to have one function, turn(), which takes the two motor powers as arguments, instead of two separate functions. The robot would turn left or right depending on the motor powers provided.
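
A brief sketch of such a combined routine; the parameter names and the motor-to-wheel mapping are our own illustration rather than the exact code.

// One routine instead of separate turn_left()/turn_right(): the two motor
// powers decide the direction and sharpness of the turn.
void turn(float leftPower, float rightPower)
{
  motor[motorB] = leftPower;    // assumed left wheel
  motor[motorC] = rightPower;   // assumed right wheel
}

// Example calls: turn(50, 10) gives a gentle right turn,
// turn(-30, 30) spins left about the robot's center.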
