Autonomous Navigation Using Ar.Drone

Chapter 1

Introduction

1.1 Motivation

Autonomous navigation using miniature aerial robots is becoming an interesting topic in the fields of computer vision and image processing. The availability of low-cost, robust Miniature Aerial Vehicles (MAVs) like the AR.Drone, equipped with a high-definition camera, has created new interest in exploring this field. Path planning and obstacle avoidance are two important aspects of autonomous navigation. Flying a miniature robot autonomously in an indoor environment has numerous real-life applications, including indoor surveillance, real-time object tracking, disaster estimation, human-computer interaction and augmented-reality games.

In simple terms, autonomous navigation means the ability of a robot to navigate an unknown environment without taking commands from humans. The robot must keep moving through a given environment, looking for exit points and achieving specific goals along the way. Navigating simple indoor environments like corridors can be achieved by applying simple image processing and computer vision algorithms. An easy-to-implement domain transformation algorithm like the Hough Transform, combined with machine learning techniques, can be used to tackle these types of problems. Such algorithms take advantage of the high computational power that modern computers possess and support real-time flights with ease. The availability of wireless communication also makes it possible to separate the computational burden from the low-cost on-board processors, keeping the drone lightweight yet backed by high computational power. Moreover, modern computers can perform parallel processing, which lets image retrieval, decision making and instruction transmission run at the same time.

1.2 Related Works

Bills and Saxena (2011) used perspective cues to navigate indoor environments with an AR.Drone. They first classified the indoor environment into one of three categories using machine learning: long narrow spaces such as corridors, areas of non-uniform height such as stairs, and small enclosed places such as offices.[1] They then extracted perspective cues to navigate within those areas. In corridors, the proposed algorithm first finds the vanishing point by performing Canny edge detection and the Hough Transform. To eliminate inconsistencies between the vanishing points of successive frames, a Markov model is used to model the probability of the vanishing point appearing at a certain coordinate; the model is then solved using the Viterbi algorithm. However, when the quadrocopter is in close proximity to an obstacle, sound sensors are used instead to avoid accidents.[1]

One of the most challenging parts of navigating with a single camera is the system's inability to measure depth. Engel, Sturm, & Cremers (2012) solved the problem of navigating through a previously unknown environment. Simultaneous Localization and Mapping (SLAM) and an extended Kalman filter were used in their paper, and a PID controller generates the steering commands for the flying copter. These same three components were used to make the drone fly various figures through a previously unknown environment.[3]

Figure 1: AR.Drone 2.0

Visual sensors are used along with the sound and pressure sensors available on the drone. A single-image obstacle avoidance method was proposed by Saxena et al. (2005): a Markov Random Field approach was taken that enables the drone to take evasive action as certain obstacles (like trees) appear in front of it.[4] Several other papers and projects also inspired our work. In another study, the drone was landed on a moving object (Lenz et al., 2005).[5]

Fig 2: AR.Drone landing on a moving object.

The AR.Drone was also used by Krajnik et al. (2011) as a proof of concept for their vision-based autonomous navigation algorithm; they used the AR.Drone for autonomous indoor surveillance.[6]

1.3 Objective of the project

For our project, we used the perspective cues of a corridor to navigate in indoor corridor environments. As this approach alone was not capable of avoiding collisions with walls, we used laser sensors to detect and avoid walls and other obstacles. Java threads and parallel processing were used to speed up the execution of the algorithm. We used OpenCV 2.4.6 for our computer vision and image processing algorithms, a Dell Vostro laptop with a 2.5 GHz Intel Core i5-3210M dual-core processor, and a wireless network to communicate with the AR.Drone.

Chapter 2

Proposed Method

The AR.Drone provides some important data for control and navigation. It supplies "NAV data", which includes readings from the different sensors fitted to the drone. The other, and most important, data comes in the form of video. Based on the navigation data, the AR.Drone uses its built-in functionality to plan an emergency landing. Each image is provided to the image analyzer, which is where our work applies. Based on the decisions made from the image data, commands are sent to the control unit, which then executes them. A detailed block diagram is given below:


Fig 3a: Block diagram of our decision model.
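To make the block diagram concrete, the following is a minimal sketch of the loop it describes. The DroneClient interface is hypothetical: it stands in for whichever Java client library delivers the drone's video frames and accepts steering commands over the wireless link, so this illustrates the decision model rather than reproducing the project's actual code.

```java
import org.opencv.core.Mat;

// Hypothetical wrapper around the AR.Drone wireless link; stands in for
// whatever client library delivers video frames and accepts commands.
interface DroneClient {
    Mat nextVideoFrame();             // latest frame from the forward camera
    void sendCommand(String command); // e.g. "FORWARD", "YAW_LEFT", "LAND"
    boolean emergencyLanding();       // true once NAV data triggers an emergency landing
}

public class DecisionLoop {
    private final DroneClient drone;

    public DecisionLoop(DroneClient drone) {
        this.drone = drone;
    }

    public void run() {
        // In the project, retrieval, analysis and transmission run on
        // separate Java threads; a single loop is shown here for clarity.
        while (!drone.emergencyLanding()) {
            Mat frame = drone.nextVideoFrame(); // image retrieval
            String command = analyze(frame);    // image analyzer (our work)
            drone.sendCommand(command);         // control unit executes it
        }
    }

    // Placeholder for the pipeline described in Sections 2.1.1-2.1.4.
    private String analyze(Mat frame) {
        return "HOVER";
    }
}
```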

2.1 Path finding

Our objective was to move the AR.Drone autonomously through a corridor. We used perspective cues for finding paths and exit points in a corridor. In simple terms, if we find the edges on both sides of the corridor and stretch them indefinitely in the forward direction, they will eventually intersect at a certain point, called the "vanishing point". This point, or a 10 * 10 window of pixels around it, indicates where our drone should head in order to traverse the entire corridor. It is worth mentioning that the drone does not need to face the exit point at the start; even if it is facing a wall, it should be able to turn and move towards one of the endpoints of the corridor. We first used the Canny edge detector to find all the edges of the corridor. From those edges, we performed the Hough Transform to find only the straight lines in the image. From those straight lines, we eliminated all the horizontal and vertical ones by measuring the angles they make.


Another important part of our algorithm is avoiding collisions with walls. We initially tried to use edges to decide how to avoid walls, but without some knowledge of the depth of the environment it is impossible to tell which edges are close and which are farther ahead. So we moved to a laser sensor, from which we approximated the depth of nearby walls and avoided crashes in most cases. We used the specific intensities of our laser ray and detected them in the images, simply running a brute-force search over the pixels to find where the laser ray appeared, as sketched below.
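A minimal sketch of that brute-force search follows. It assumes the laser shows up as a near-saturated red spot; the channel thresholds are illustrative, since the real values depend on the laser and the camera used.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.core.Point;

public final class LaserFinder {
    /** Scans every pixel of a BGR frame for laser-coloured spots. */
    public static List<Point> find(Mat bgrFrame) {
        List<Point> hits = new ArrayList<Point>();
        for (int y = 0; y < bgrFrame.rows(); y++) {
            for (int x = 0; x < bgrFrame.cols(); x++) {
                double[] bgr = bgrFrame.get(y, x); // {blue, green, red}
                // a near-saturated red pixel with dim blue and green channels
                if (bgr[2] > 240 && bgr[0] < 90 && bgr[1] < 90) {
                    hits.add(new Point(x, y));
                }
            }
        }
        return hits;
    }
}
```

The mean of the returned points gives the laser spot's image position, from which the distance to the nearby wall can be approximated.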

2.1.1 Edges

Edge detection is an important part of image processing and computer vision. In many cases, edges are detected as a preprocessing step before running more complex and detailed algorithms. In simple terms, edges appear in an image where sharp changes of intensity take place. Sometimes it is useful to run an edge detection algorithm to eliminate or establish the possibility of certain objects being present in the image; as a result, meaningful information can be extracted. Edge detection often also reduces the volume of data to process. Despite these advantages, edge detection has issues, and some trade-offs have to be made. Image signals are often smoothed to locate the edges: smoothing gets rid of false edges and increases the chance of correct detection, but on the other hand we lose information that would otherwise have helped us localize the edges more precisely.

2.1.2 Canny Edge Detection

The Canny edge detector is one of the most efficient and commonly used edge detection algorithms. It was developed by John F. Canny in 1986 and uses multiple stages to determine the edges in an image. Besides developing the algorithm, Canny also developed a mathematical model explaining his edge detection technique.[7] The design criteria of Canny edge detection are good detection, good localization and minimal response.


Fig 3b: Canny edge detection applied to an image (source: Wikipedia).

Good detection means the algorithm should discover as many edges in the image as possible; a later stage may cancel out some of them, but at the time of finding edges the maximum number of correct edges should be found. Good localization means an edge should be detected as close as possible to where the actual edge lies. Minimal response means one edge should be detected only once: as the size of the image increases, the number of edges increases, so it is computationally inefficient to compute the same edge twice or more.

The mathematics of the Canny detector can be summarized as follows:

● it finds a function which optimizes a functional;
● the optimal function is expressed as the sum of four exponential terms;
● it can also be estimated by the first derivative of a Gaussian.
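The last bullet can be written out explicitly. With the Gaussian g_sigma, its first derivative is (a standard identity, stated here for reference rather than taken from the original derivation):

$$ g_\sigma(x) = \frac{1}{\sqrt{2\pi}\,\sigma}\, e^{-x^2/2\sigma^2}, \qquad g_\sigma'(x) = -\frac{x}{\sigma^2}\, g_\sigma(x) $$

Convolving the signal with g_sigma' and marking the extrema of the response closely approximates Canny's optimal step-edge detector.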

The Canny edge detector is susceptible to noise present in raw data, so it is common practice to apply convolution or filtering to reduce the amount of noise in an image before detecting the edges. Canny edge detection has a couple of parameters:

● Size of the Gaussian filter kernel:
  ○ directly affects the result of Canny edge detection;
  ○ smaller filters cause less blurring and find sharp edges;
  ○ larger filters cause more blurring and find larger, smoother edges.
● Thresholds:
  ○ A threshold set too high can miss important information; in our case, higher thresholds detected very few vertical or horizontal edges where there were many. On the other hand, a threshold set too low will falsely identify irrelevant information (such as noise) as important; we then found many edges that were unnecessary for navigation. It is difficult to give a generic threshold that works well on all images.

As the first step in finding the edges of the corridor, we run the Canny edge detection algorithm, which proceeds as follows:

1. Filter out any noise. The Gaussian filter is used for this purpose. An example of a 5 * 5 Gaussian kernel that might be used is shown below:

   K = (1/159) * | 2  4  5  4  2 |
                 | 4  9 12  9  4 |
                 | 5 12 15 12  5 |
                 | 4  9 12  9  4 |
                 | 2  4  5  4  2 |

2. Find the intensity gradient of the image. For this, we follow a procedure analogous to Sobel:

   a. Apply a pair of convolution masks in the x and y directions:

      Gx = | -1  0  +1 |        Gy = | -1  -2  -1 |
           | -2  0  +2 |             |  0   0   0 |
           | -1  0  +1 |             | +1  +2  +1 |

   b. Find the gradient strength and direction with:

      G = sqrt(Gx^2 + Gy^2),  theta = arctan(Gy / Gx)

      The direction is rounded to one of four possible angles (0, 45, 90 or 135 degrees).

3. Non-maximum suppression is applied. This removes pixels that are not considered to be part of an edge. Hence, only thin lines (candidate edges) will remain.

4. Hysteresis: the final step. Canny uses two thresholds (upper and lower):

   a. If a pixel gradient is higher than the upper threshold, the pixel is accepted as an edge.
   b. If a pixel gradient value is below the lower threshold, it is rejected.
   c. If the pixel gradient is between the two thresholds, it is accepted only if it is connected to a pixel that is above the upper threshold.

   Canny recommended an upper:lower ratio between 2:1 and 3:1.
Our objective is to find long edges, so that eventually we can locate the long lines directed towards the vanishing point. For our Canny edge detector, we used a 5 * 5 Gaussian filter, and the threshold values were kept between 230 and 255. These values were found mainly by performing numerous experiments and observing the results; a sketch of the step is given below.
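A minimal sketch of this step with the OpenCV 2.4 Java bindings and the parameter values above (not our verbatim project code):

```java
import org.opencv.core.Mat;
import org.opencv.core.Size;
import org.opencv.imgproc.Imgproc;

public final class CorridorEdges {
    /** Smooths a grayscale frame and runs Canny with the thresholds quoted above. */
    public static Mat detect(Mat grayFrame) {
        Mat smoothed = new Mat();
        // 5 * 5 Gaussian kernel; sigma 0 lets OpenCV derive it from the kernel size
        Imgproc.GaussianBlur(grayFrame, smoothed, new Size(5, 5), 0);
        Mat edges = new Mat();
        // lower / upper hysteresis thresholds, found experimentally
        Imgproc.Canny(smoothed, edges, 230, 255);
        return edges;
    }
}
```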

The complexity of finding the Canny edges in an image is O(RC log(RC)), where R is the number of rows and C is the number of columns of the image.

2.1.3 Hough transform

The Hough Transform is a feature extraction technique. It is very useful for finding geometric shapes like straight lines and circles; we used it to find straight lines. The Hough Transform utilizes a voting technique in parameter space to identify features: an accumulator stores the votes, and the local maxima yield the parameters of the target feature. The original Hough Transform was only used to detect straight lines, but more complex geometric shapes can also be detected. An edge detection algorithm like the Canny edge detector is often run as a preprocessing step for this algorithm, identifying the interesting points and setting up the voting procedure. Due to imperfections in the image pixels, some interesting pixels may turn out to be missing from the desired output, and the process of converting a set of edges to a line can be non-trivial.
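To illustrate the voting procedure, here is a from-scratch sketch of the accumulator (for the project itself we relied on OpenCV's implementation). It uses the polar line form r = x*cos(theta) + y*sin(theta) introduced in the algorithm below.

```java
public final class HoughAccumulator {
    /** Every edge pixel casts one vote per theta bin for the line through it. */
    public static int[][] vote(boolean[][] isEdge, int rhoBins, int thetaBins) {
        int rows = isEdge.length, cols = isEdge[0].length;
        double maxRho = Math.hypot(rows, cols); // rho ranges over [-maxRho, maxRho]
        int[][] votes = new int[rhoBins][thetaBins];
        for (int y = 0; y < rows; y++) {
            for (int x = 0; x < cols; x++) {
                if (!isEdge[y][x]) continue;    // only edge pixels vote
                for (int t = 0; t < thetaBins; t++) {
                    double theta = Math.PI * t / thetaBins;
                    double rho = x * Math.cos(theta) + y * Math.sin(theta);
                    int r = (int) Math.round((rho + maxRho) / (2 * maxRho) * (rhoBins - 1));
                    votes[r][t]++;
                }
            }
        }
        return votes; // local maxima correspond to detected straight lines
    }
}
```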

Fig 4: Hough Transform applied to an image for finding straight edges.

The Hough Transform has some limitations as well. The algorithm is only efficient when a high number of votes falls in the right bin. When the number of parameters is large, the average number of votes per bin is very low, and the complexity rises to O(A^(m-2)), where A is the size of the image space and m is the number of parameters. For detecting straight lines, the complexity is O(A * L), where L is the maximum number of lines assumed to intersect at some point of the image. Performance also depends on the quality of the input data: in noisy images, some cleaning needs to be done first. The algorithm for the Hough Transform is written below:

1. A line in the image space can be expressed with two variables. For example:

   a. In the Cartesian coordinate system, with parameters (m, b):  y = m*x + b
   b. In the Polar coordinate system, with parameters (r, theta):  r = x*cos(theta) + y*sin(theta)

   For Hough Transforms, we will express lines in the Polar system. Hence, a line equation can be written as:

      y = (-cos(theta) / sin(theta)) * x + r / sin(theta)

2. In general, for each point (x0, y0), we can define the family of lines that goes through that point as:

      r(theta) = x0 * cos(theta) + y0 * sin(theta)

   meaning that each pair (r(theta), theta) represents a line that passes through (x0, y0).

3. If, for a given point (x0, y0), we plot the family of lines that goes through it in the (theta, r) plane, we get a sinusoid. We consider only points such that r > 0 and 0 < theta < 2*pi. Where the sinusoids of two different points intersect, that pair (r, theta) represents the line through both points; the more sinusoids crossing at one point, the more image points lie on that line, which is exactly what the accumulator counts.

In our Hough Transform, we used 75 bins for the values of rho and about 180 bins for the values of theta. Each straight line found was then checked to eliminate the horizontal and vertical lines. Again, these values were settled on after performing many experiments and measuring the results.
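A sketch of this step with the OpenCV 2.4 Java bindings. Imgproc.HoughLines takes bin resolutions rather than bin counts, so the counts above are converted into step sizes; the accumulator threshold of 80 votes is illustrative.

```java
import java.util.ArrayList;
import java.util.List;
import org.opencv.core.Mat;
import org.opencv.imgproc.Imgproc;

public final class CorridorLines {
    /** Runs the Hough transform on a Canny edge image and drops axis-aligned lines. */
    public static List<double[]> detect(Mat edges) {
        double diagonal = Math.hypot(edges.cols(), edges.rows());
        double rhoStep = diagonal / 75.0;   // 75 bins across the rho range
        double thetaStep = Math.PI / 180.0; // ~180 bins across [0, pi)
        Mat lines = new Mat();
        Imgproc.HoughLines(edges, lines, rhoStep, thetaStep, 80);

        // In the 2.4 Java bindings the result is a 1 x N two-channel Mat of
        // (rho, theta) pairs; newer versions return it as N x 1 instead.
        List<double[]> slanted = new ArrayList<double[]>();
        for (int i = 0; i < lines.cols(); i++) {
            double[] line = lines.get(0, i); // {rho, theta}
            double theta = line[1];
            double offVertical = Math.min(theta, Math.PI - theta);  // theta near 0 or pi: vertical line
            double offHorizontal = Math.abs(theta - Math.PI / 2.0); // theta near pi/2: horizontal line
            // keep only the slanted corridor edges; the 0.3 rad tolerance is illustrative
            if (offVertical > 0.3 && offHorizontal > 0.3) {
                slanted.add(line);
            }
        }
        return slanted;
    }
}
```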

2.1.4 Vanishing Points

Once the straight lines are found by the Hough Transform, a simple brute-force pass over the entire image will find the vanishing point. The vanishing point is simply the area of the image where the greatest number of straight lines intersect. In a corridor, if an exit point exists, it will be around the vanishing point. The algorithm for finding the vanishing point is as follows:

1. Let li be a line in the image, discovered via the Hough transform. Let L be the number of lines found in the image, with i in [0, L). The line li can be represented by the parameters mi, bi as:

   y = mi * x + bi

2. Let (xk, yk) in R^2 be the coordinates of the intersection of two lines; there are K total intersections. If the lines li and lj do not intersect, let (xk, yk) = (infinity, infinity). If they do intersect, we obtain the point by solving:

   yk = mi * xk + bi = mj * xk + bj

3. Let G be the n * n grid that tiles the image plane. G(a, b) counts the number of line intersections falling in the grid element (a, b), where a and b are integers in [0, n):

   G(a, b) = sum over k of 1{ (xk, yk) lies inside grid element (a, b) }

The grid element with the highest count is taken as the vanishing point.
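A self-contained sketch of steps 1-3, taking the slope/intercept pairs (mi, bi) as input; the grid size n and the choice of the cell centre as the returned point are illustrative.

```java
public final class VanishingPointGrid {
    /**
     * m[i], b[i] describe line i as y = m[i]*x + b[i]; n is the grid size,
     * width and height are the image dimensions in pixels.
     * Returns {x, y}: the centre of the grid cell with the most intersections.
     */
    public static int[] locate(double[] m, double[] b, int n, int width, int height) {
        int[][] grid = new int[n][n];
        for (int i = 0; i < m.length; i++) {
            for (int j = i + 1; j < m.length; j++) {
                if (m[i] == m[j]) continue;               // parallel lines never intersect
                double x = (b[j] - b[i]) / (m[i] - m[j]); // solve mi*x + bi = mj*x + bj
                double y = m[i] * x + b[i];
                if (x < 0 || x >= width || y < 0 || y >= height) continue; // outside image
                grid[(int) (y * n / height)][(int) (x * n / width)]++;     // one vote per cell
            }
        }
        int bestRow = 0, bestCol = 0;
        for (int r = 0; r < n; r++) {
            for (int c = 0; c < n; c++) {
                if (grid[r][c] > grid[bestRow][bestCol]) { bestRow = r; bestCol = c; }
            }
        }
        return new int[] { (int) ((bestCol + 0.5) * width / n),
                           (int) ((bestRow + 0.5) * height / n) };
    }
}
```

The drone is then steered so that the centre of the image tracks the returned point.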
