1. Write a library of movement routines for your robot
2. Implement wall-following in your robot
3. Navigate maze using at least 2 sensors
Writing a library of movement routines
This task was fairly straightforward. We chose to write routines for: driving forward, driving backward, spinning left and spinning right about the robot's center, turning left and right, and stopping. For the turns, we chose to have a single function, turn(float leftSpeed, float rightSpeed), instead of two separate functions; the robot turns left or right depending on the power supplied to each motor.
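As a sketch, the library's interface can be expressed as follows. This is plain C rather than RobotC, and the MotorCmd struct stands in for whatever low-level motor call the kit provides (in RobotC, assignments to motor[...]); the function names and speed convention here are illustrative, not our exact code.

```c
#include <assert.h>

/* A motor command: power for the left and right wheels.
   In RobotC this would be applied via motor[...] assignments;
   here it is returned as a value so the logic is easy to test. */
typedef struct { float left, right; } MotorCmd;

MotorCmd forward(float speed)   { return (MotorCmd){  speed,  speed }; }
MotorCmd backward(float speed)  { return (MotorCmd){ -speed, -speed }; }
/* Spinning about the center: wheels run at equal and opposite power. */
MotorCmd spinLeft(float speed)  { return (MotorCmd){ -speed,  speed }; }
MotorCmd spinRight(float speed) { return (MotorCmd){  speed, -speed }; }
MotorCmd stopRobot(void)        { return (MotorCmd){ 0.0f, 0.0f }; }

/* One turn() instead of separate turnLeft()/turnRight(): the relative
   power of the two motors determines the direction of the arc. */
MotorCmd turn(float leftSpeed, float rightSpeed) {
    return (MotorCmd){ leftSpeed, rightSpeed };
}
```

With this shape, turn(30, 60) arcs left (the right wheel outruns the left) and turn(60, 30) arcs right, which is why a single function suffices.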
To implement wall following, we used bang-bang control. With this type of control loop, we picked a minimum and a maximum desired distance from the wall. If the robot was closer to the wall than the minimum distance, it was programmed to turn away from the wall; if it drifted beyond the maximum distance, it was programmed to turn towards the wall. With this particular type of closed-loop control, the sharpness of the turn does not depend on how far the robot is from the wall, so the robot oscillates towards and away from the wall with fairly large displacements. For this task, however, the method sufficed. To make it more accurate, we added further checks for when the robot was much too close to the wall (the minimum distance minus an offset) or much too far from it (the maximum distance plus an offset).
The turns for those extreme conditions were much sharper than the norm. We also discovered a peculiar problem: when first programmed, the robot could not go around outer corners. This is due to the nature of the sonar sensor, whose emitter is placed some distance away from its receiver (Figure 1). When the robot reaches an outer corner, the sensor's readings jump beyond the allowable limit. We therefore added an extra check for that condition and programmed the robot to steer around the corner.
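The decision logic above can be sketched as a single function. The thresholds (MIN_DIST, MAX_DIST, the extra OFFSET, and SONAR_MAX as the out-of-range reading seen at an outer corner) are hypothetical values in centimetres, not the exact constants we used:

```c
#include <assert.h>

typedef enum { GO_STRAIGHT, TURN_TOWARD, TURN_AWAY,
               SHARP_TOWARD, SHARP_AWAY, ROUND_CORNER } Action;

/* Hypothetical thresholds (cm); tune for your robot and maze. */
#define MIN_DIST   15.0f
#define MAX_DIST   25.0f
#define OFFSET      5.0f
#define SONAR_MAX 100.0f   /* reading jumps past this at an outer corner */

/* Bang-bang wall following with an extra "way out of band" check:
   the turn is picked from the band the reading falls in, never from
   how far inside that band it is. */
Action wallFollowStep(float dist) {
    if (dist >= SONAR_MAX)        return ROUND_CORNER; /* lost the wall  */
    if (dist < MIN_DIST - OFFSET) return SHARP_AWAY;   /* far too close  */
    if (dist > MAX_DIST + OFFSET) return SHARP_TOWARD; /* far too far    */
    if (dist < MIN_DIST)          return TURN_AWAY;
    if (dist > MAX_DIST)          return TURN_TOWARD;
    return GO_STRAIGHT;                                /* inside the band */
}
```

Calling this once per loop iteration and mapping each Action to a motor command reproduces the behaviour described: gentle corrections inside the offsets, sharp ones outside them, and a dedicated cornering manoeuvre when the sonar saturates.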
Navigate maze using at least 2 sensors
After observing the maze, we realized there were a number of markers we could use to guide our robot to its destination, namely the black tape on the floor and the walls of the maze. Our robot also had to be able to detect the lighting condition of its environment so that it could stop at the designated area. We decided to use four sensors for this task: two sonar sensors and two light sensors (Figures 2 & 3). One light sensor would face the floor and be used to detect the black tape; the other would face upwards to measure the ambient light in the environment. The sonar sensors would be used for wall detection: one would face forward while the other would be mounted on the left side of the robot.
Plan for navigating through maze
To get the robot through the maze, we divided the maze into three sections (Figure 4). To get through SECTION A, the robot had to move forward from its starting position, sense a wall in front of it, rotate 90 degrees, move forward and detect the tape at the end of the section. To get through SECTION B, it had to follow the tape it had detected, keeping it towards its left side, detect a wall in front of it and rotate 90 degrees to the right. To get through SECTION C, it had to detect the wall to its left and follow it until it detected a wall in front of it, then rotate. This set of movement patterns would get the robot to its destination.

During implementation, we decided to write functions that would enable the robot to "think". A followLine() function kept the robot following a line until it met a wall in front of it, and then rotated. A rotate() function turned the robot clockwise or anticlockwise: if the previous rotation was clockwise, the next would be anticlockwise, unless there was an object or wall on that side of the robot. A wallFollow() function was called whenever there was a wall to the left of the robot and none in front of it. All these conditions were checked in a top-level function, freeRoam(), which kept the robot moving until black tape was detected on the floor or an obstacle appeared in front of the robot, and then called the appropriate function; for instance, if tape was detected, followLine() was called. freeRoam() kept running until the terminating condition was met, in this case the brightness or darkness of the destination area.
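The dispatch logic of freeRoam() can be sketched as a pure decision function. The Sensors struct and its flags (and the priority order among them) are our reconstruction of the behaviour described, not the exact code we ran:

```c
#include <assert.h>

typedef enum { FREE_ROAM_DRIVE, FOLLOW_LINE, WALL_FOLLOW,
               ROTATE, STOP } Behavior;

/* Hypothetical snapshot of the four sensors, reduced to flags:
   the thresholds that produce each flag live elsewhere. */
typedef struct {
    int onTape;        /* floor light sensor sees black tape        */
    int wallAhead;     /* front sonar reads below its threshold     */
    int wallLeft;      /* left sonar reads below its threshold      */
    int atDestination; /* ambient sensor sees the goal light level  */
} Sensors;

/* Pick the behaviour for this loop iteration. The terminating
   condition is checked first, then tape, then walls. */
Behavior freeRoamStep(Sensors s) {
    if (s.atDestination) return STOP;        /* goal reached          */
    if (s.onTape)        return FOLLOW_LINE; /* follow tape to a wall */
    if (s.wallAhead)     return ROTATE;      /* alternate direction   */
    if (s.wallLeft)      return WALL_FOLLOW; /* wall left, none ahead */
    return FREE_ROAM_DRIVE;                  /* keep moving forward   */
}
```

Running this in a loop and acting on the returned Behavior gives the same structure as the report's freeRoam(): it drives until tape or an obstacle appears, hands control to the appropriate routine, and stops only at the destination's light condition.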
This task allowed us to fully understand the complexities that come with using sensors to drive a robot. From this experience, we learnt that sensor readings are unstable, with frequent spikes and noise. Thus, when taking readings, one should average a number of samples rather than rely on a single one; this noticeably improves accuracy. Overall, this was a much more challenging task than the previous ones, but we found it fulfilling when our robot achieved its goal.
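The averaging described above is simple to express; this is a generic sketch rather than our exact implementation:

```c
#include <assert.h>

/* Average several raw sensor readings to damp spikes and noise.
   A single spiked sample then shifts the result only by
   (spike - typical) / n instead of dominating it. */
float averagedReading(const int *samples, int n) {
    long sum = 0;
    for (int i = 0; i < n; i++)
        sum += samples[i];
    return (float)sum / (float)n;
}
```

For example, five readings of roughly 11 with one spike to 50 average to 19 instead of jumping to 50, so a threshold comparison is far less likely to misfire on a single bad sample.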
From this task, we have also learnt about the different feedback control methods (bang-bang control vs. proportional, proportional-derivative and proportional-integral-derivative control). Although PID control is the most accurate, we chose bang-bang control because it was sufficient for our needs. We have also learned that sensor readings differ between environments; as a result, one must calibrate the sensors in the environment where they will be used, automatically if possible. Another lesson is to comment our code as we program rather than after we are done programming. This makes it easier to modify the code correctly when the need arises (for example, switching from brightness to darkness as the condition for the robot to stop).
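One way to do such a calibration, sketched here as an assumption rather than our actual routine, is to sample the extremes in the target environment at startup (e.g. the raw reading over the white floor and over the black tape) and then map every reading onto a fixed 0–100 scale:

```c
#include <assert.h>

/* Map a raw light reading onto 0..100 using the minimum and maximum
   values sampled in the current environment, clamping out-of-range
   readings. Thresholds written against the 0..100 scale then work
   unchanged under different lighting. */
float calibrate(int raw, int envMin, int envMax) {
    if (envMax == envMin) return 0.0f;   /* degenerate calibration */
    float pct = 100.0f * (float)(raw - envMin) / (float)(envMax - envMin);
    if (pct < 0.0f)   pct = 0.0f;
    if (pct > 100.0f) pct = 100.0f;
    return pct;
}
```

With this in place, "on the tape" can be defined as, say, a calibrated value below 30 regardless of whether the room is brightly or dimly lit.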
We also had challenges with the robot's battery: the robot behaved differently at different battery levels. Because the robotics kit does not charge while connected via USB, we had to stop and charge whenever the level was low. We found that the behaviour of a single program can change with the level of charge (low battery vs. fully charged), especially when sensors are involved.
Another observation we made was that the RobotC language is quite different from the NXT Mindstorms programming language. The latter simplifies coding so much that moving to RobotC takes some re-orientation to get comfortable; a great deal is taken for granted or hidden in the NXT programming language.
Problems in Ghana and their solutions using sensors
Problem#1: We identified the sanitation problem, more specifically waste disposal.
Solution: We would need a touch/weight sensor and a location sensor. In Accra, to prevent littering and improper waste disposal on the streets, the AMA has mounted bins at vantage points. This has not solved the problem, because even when the bins are full they are not emptied on time, so people force rubbish into them or simply leave their trash around the bin. A bin that can sense when it is full (using a touch/weight sensor) and move to a predefined location (using a location sensor) to be emptied might solve the waste-disposal issue on our streets.
Problem#2: We also identified the illegal logging of trees in our forests.
Solution: We would need sound sensors and location sensors. The sound sensors would be programmed to recognize the frequency of the noise made by chainsaws, so that illegal logging in a forest would be detected. The corresponding location readings would then be sent to a monitoring post to alert it of the activity, and police personnel could be dispatched to that location to arrest the offenders.