Free Essay

Wearable Obstacle Detection for the Blind That Can Detect Discrete Elevation That Uses Gizduino Microcontroller

In:

Submitted By AlphaIkaros
Words 15527
Pages 63
TITLE PAGE

Wearable Obstacle Detection for the Blind that can Detect Discrete Elevation using Gizduino
Microcontroller

by

Nicole Sam Rey P. Cuaresma
Carissa D. Eustaquio
Glenda T. Ofiana

A Thesis Report Submitted to the School of Electrical, Electronics, and
Computer Engineering in Partial Fulfilment of the Requirements for the
Degree
Bachelor of Science in Computer Engineering

Mapúa Institute of Technology
May 2015

ii

iii

ACKNOWLEDGEMENT
The researchers would like to express their deepest gratitude to all those who gave contribution to the completion of this paper.
To our adviser, Engr. Glenn V. Magwili, for his patience and efforts in stimulating suggestions and encouragement that helped the whole group in order to create a device that can help many people. To our former instructors who have taught us valuable knowledge throughout our academic progress.This research would also not be possible without the belief of the research panel members.
The researchers would also like to thank the unconditional support, patience and understanding of their families throughout the process. To the researchers’ colleagues who had also given support and shared their knowledge during the course of completing the research.
Lastly, to the Almighty God who had bestowed them with wisdom and strength throughout the research and their life. To the Lord who gave blessings and guidance for the success of this research.

iv

TABLE OF CONTENTS
TITLE PAGE

i

APPROVAL SHEET

ii

ACKNOWLEDGEMENT

iii

TABLE OF CONTENTS

iv

LIST OF TABLES

vii

LIST OF FIGURES

vii

ABSTRACT

ix

Chapter 1: INTRODUCTION

1

Chapter 2: REVIEW OF RELATED LITERATURE

4

Existing System for the blind

4

Sensing and Sensors

6

Object Detection and Recognition

7

Microcontroller

8

3-axis Accelerometer

10

Chapter 3: WEARABLE OBSTACLE DETECTION FOR THE BLIND THAT CAN
DETECT DISCRETE ELEVATION USING GIZDUINO MICROCONTROLLER

11

Abstract

11

Introduction

12

Methodology

14

Hardware Development
Algorithm and Mathematical Model
Calibration
Testing
Results and Discussion

15
22
27
30
31

Actual Calibration
Mathematical Model

31
34

v

Conclusion

48

Chapter 4: CONCLUSION

49

Chapter 5: RECOMMENDATION

51

REFERENCES

52

APPENDICES

54

vi

LIST OF TABLES
Table 3.1 Audio Module Filename Assignments

19

Table 3.2 Trials for each Height

29

Table 3.3 Sample Calibration

32

Table 3.4 Percent Difference for Calibration

33

Table 3.5 Actual Results for 170 cm

34

Table 3. 6 Values Gathered for Step Up Elevation

36

Table 3. 7 Values Gathered for Step Down Elevation

38

Table 3. 8 Values Gathered for Inline Elevation

40

Table 3. 9 Values Gathered for Decline Elevation

42

Table 3. 10 Values Gathered for Hole Elevation

44

Table 3. 11 Confusion Matrix Values

46

vii

LIST OF FIGURES
Figure 2. 1 Arduino Module

9

Figure 2. 2 gizDuino Module

9

Figure 2. 3 Single Axis Tilt Measurement

10

Figure 3. 1 System Flowchart

14

Figure 3. 2 Conceptual Framework of the system

15

Figure 3. 3 Orientation of MCU and Ultrasonic sensors Diagram

16

Figure 3. 4 Sample Connection of 3-axis Digital Accelerometer

17

Figure 3. 5 Audio Module Hardware and Pin Assignments

18

Figure 3. 6 MCU with Audio Module

19

Figure 3. 7 Different Signal Patterns for Vibrating Motor

20

Figure 3. 8 Ultrasonic Pins Assignment

20

Figure 3. 9 Schematic Diagram of the Wearable Obstacle Detection Device

21

Figure 3. 10 Ultrasonic Signal Flow

22

Figure 3. 11 Decomposition of components

24

Figure 3. 12 Test Environment

25

Figure 3.13 Sample relationship of each Height

26

Figure 3. 14 Flowchart of the System

27

Figure 3.15 Plain Surface Graph

35

Figure 3. 16 Step Up Graph for Equation Derivation

36

Figure 3.17 Step Up Graph

37

Figure 3. 18 Step Down Graph for Equation Derivation

38

Figure 3.19 Step Down Graph

39

viii

Figure 3. 20 Incline Graph for Equation Derivation

40

Figure 3.21 Incline Slope Graph

41

Figure 3. 22 Decline Graph for Equation Derivation

42

Figure 3.23 Decline Slope Graph

43

Figure 3. 24 Hole Graph for Equation Derivation

44

Figure 3.25 Hole Graph

45

ix

ABSTRACT
Visual Impairment is a condition of loss of sight that needs support because of significant limitations of visual capability. In the past, people with such condition use guide dogs and white canes also presents certain limitations. With the advancements in technology, several electronic travel aids were built to extend the support given to people with visual impairment for improved mobility. In this paper, the researchers aimed to recreate the traditional wearable obstacle detection device for the visual impaired people by providing instructions and guiding them through the easiest path to avoid them obstructions.
Instructions at decision points will be transmitted through voice guidance. The proposed system will also feature an ultrasonic-based obstacle detection system that can also detect elevated obstacles such as stairs with reference from the eye-level of the user down to their feet.
Keywords: Ultrasonic, Visual Impairment, Obstacle Detection, Voice guidance

Chapter 1
INTRODUCTION
People with visual impairment face enormous challenges in terms of their mobility, and, in today's technology, there is a dearth in facility to contend with these challenges. The task of creating a system or device that would enable a visually impaired person to move from one place to another presents challenges especially in dynamic environments that have stairs and obstacles. From the visually-impaired person’s point of view, moving through an unknown environment is always a real challenge unless they use an aid system, usually a walking cane or a guide dog. Since dynamic environments produce sounds and noise while the person is moving, visually-impaired individuals ares able to locate things and, later on, use their sense of touch to verify such obstacles. With the advancement of technology, electronic travel aids (ETAs) are devised to include systems characteristics that not just determine the current location and keep track through the memorized course until the destination is reached but also avoid obstacle.
There are multiple studies regarding navigation systems for the blind, which presented new approaches; although these systems also possessed problems regarding the method or the technology used. One of the existing systems has been designed to recognize obstacles using ultrasonic sensor integrated into a walking cane (Ryu, 2011).
However, it does not eliminate the portability and convenience since the user still needs to hold the cane and propagate the cane around; it simply made the cane wireless. Also, another system includes three infrared sensors system for detecting of ascending and descending stairs and the decision are based on the current sensors readings (Lee, 2010).

1

2

However, it is tested only on three types of terrain and used infrared sensors in distance measurement. The studies made by other researchers regarding obstacle detection systems for visually-impaired persons only included sensing of obstacles and not elevations such as stairs that may also harm the user. The problem to be solved by this study is that the existing systems have limited sensing capabilities. Therefore, there is a need to design and implement a system that can detect not only obstacles but also elevation.
The general objective of this paper is to implement a portable wearable obstacle detection system, a wearable cap, which uses ultrasonic sensors that can detect basic elevation. The specific objectives are to (1) design a wearable cap, which uses Gizduino
Microcontroller integrated with multiple ultrasonic sensors, accelerometer, vibrating motor, and earphones; (2) Calibrate and position the sensors; (3)program the software part using C; and(4) test the performance the system in a static controlled environment.
Once the system is built, it will benefit visually impaired people. They will be confident in their movement and be always aware of their present environment. This system will reduce their effort in moving effectively. Also, it will able to reduce the risk of accidents when traveling alone. Moving with uncertainty causes slower actions; with this device, however, performance and sensing capabilities of the visually impaired will improve without sacrificing the risk of accidents. It will also serve as the basis for future research and will help future researchers to improve the system further for the benefit of the society.
This system will only be used indoor under typical conditions. The sensing capabilities will include all sides that will sense the environment for obstacle detection

3

and will produce outputs in a form of sound and vibration through the earphones and vibrating motor, respectively. There is also a limited feedback when there is a disorientation of cap as well as the movement of the user also affects the raw data; this paper does not include such functions to correct errors based on the user’s movement speed. The researcher will test the system in a controlled environment in which different elevations are readily available such as in schools and establishments. The design of the cap is also limited to users with slim or small body built.

Chapter 2
REVIEW OF RELATED LITERATURE
Visually impaired persons are often dependent upon external assistance such as white cane, trained dogs or special electronic devices. Development of system that improves the traditional way is getting more and more evident each year since improvement of technology has paved way to reduce cost and size of hardware needed to build such systems. Current systems use specialized sensors to detect range but are selected in terms of its accuracy in different kinds of environment.
Existing System for the blind
In the paper of Zuban et al. (2012) entitled, “Digital Radar Aided Cane,” they developed a simple electronic cane that can help blind people in their movement.
According to their paper, the device is just like a white cane that maps the area around in front of the user using three sensors and Windows phone-based mobile phone for outputs which also includes voice commands. It is also connected to a SMS gateway service for request of assistance. It used two combinations of sensor, two infrared and one ultrasonic sensor, whereby the ultrasonic sensors were used in front to detect far obstacle and the infrared sensor was to detect short obstacles that are easily overlooked. It has used a programmable IC in calculating the distance from each sensor which is sent to the mobile phone to process the raw data. Then the mobile phone produces sounds and numerical feedback; it uses C# language in programming the processing and graphical user interface of the software. From this previous study, the current researcher observed that the device had some limitations. One is that the position and handling of the device needs to be positioned correctly in order to produce outputs effectively. Also C# uses more resources

4

5

compared to C++, and it somehow unnecessary to use C# for coding the process since the blind will not be able to see outputs as well as to control the device through the graphical user interface of the mobile phone. However, on the positive side, they have used at least one ultrasonic sensor which, according to them, “has great characteristics as its fast, small, energy efficient and has reliable measuring-range of 280cm which is very impressive.” From another study, the most usually proposed system for the blind is the concept of wearable obstacle detection device. One of the existing studies regarding this kind of device is that of Jameson and Manduchi (2010), both from the Department of Computer
Engineering of the University of California. The researchers aimed at designing and implementing a device, in a form of a pocket-sized device, which would help the user to avoid head-level obstacles, before they hit the user. In addition to this previous work, the researchers also aimed to prevent an overflow of feedback that could annoy or distract the user. Although the previous study is not implemented yet, it proposes the usage of ultrasonic sensor in distance measuring. However, their proposal uses only two sensors, less than the current proposal.
According to the research paper written by Cardin, Thalmann and Vexo (2005) entitled, “Wearable Obstacle Detection System for Visually Impaired People,” a proposed system makes use of stereoscopic sonar architecture since it has more precise spatial recognition; as well as the vibro-tactile feedback to inform the user about the obstacle. Based on the results, the researchers had a hard time on dealing the obstacles at the closest range or when these obstacles are placed out of the sensor’s range. In order to

6

resolve this problem, the proposed solution is to use a set of sensors with narrower range for more precise object detection.
Sensing and Sensors
In the paper Design and Analysis of an Infrared Range Sensor System for Floorstate Estimation by Lee and Lee (2010), the system used infrared sensors to identify discrete states which are even surface, ascending and descending stair. Through the different values obtained from the infrared sensor, the three floor states can be accurately identified, and instead of relying on current sensor values the team referenced the changes as reflected in state transitions. According to Lee and Lee (2010), ultrasonic sensors may have a longer measurement range but infrared sensors provide accurate measurement though unable to detect glass.
Kassim, Jaafar, Azam and Abas (2013) researched “Performances Study of
Distance Measurement Sensor with Different Object Materials and Properties,” which mainly focused on performance comparison between infrared and ultrasonic sensor. The results of the experiment showed that infrared sensor lost its ability to detect obstacle above 80 cm; thus, ultrasonic sensor gives a longer measurement range. Also, the researchers conducted a test using both sensors in different surfaces, the infrared sensor uses infrared beam which cannot reflect on transparent surfaces such as water and glass.
As for the ultrasonic sensor, it uses sound wave to reflect on surfaces; thus, it keep its accuracy with transparent materials. Based on the results, the researchers concluded that the ultrasonic sensor gives a better performance in measuring various object distances with different properties compared to the infrared sensor.

7

In the article “Stair Case Detection and Recognition Using Ultrasonic Signal,”
Bouhamed, Kallel and Masmoudi (2013), the researchers had considered one ultrasonic sensor, attached to an electronic walking cane, to identify staircases and floors; and
Support Vector Machines (SVMs) for easy object detection and classification. The researchers were able to prove that a single ultrasonic sensor can be used to identify and classify objects, but the results of the recognition were not accurate, thus a single ultrasonic sensor is not enough to provide clearer object recognition. Since it only uses one ultrasonic sensor and it is prone to vibrations, the ultrasonic sensor has been represented into more than one frequential domain. First, the ultrasonic signal spectrogram representation shows how the amplitude of the ultrasonic sensor varies in relation with time. It is just simply the raw representation of data from the ultrasonic sensors. Then later on, they have represent the data in time-frequency in which provides good domain for signal representation and classification. From there they were able to derive the distinctive patterns that capture different characteristics of signals. With the limitations of sensing of capability of a single ultrasonic sensor, the researchers for this present study have first preprocessed and performed feature extraction on the raw data in order to minimize errors.
Object Detection and Recognition
Ohtaniand Baba’s (2012) study entitled, “Shape Recognition and Position
Measurement of an Object Using an Ultrasonic Sensor Array”, gave emphasis to an array of ultrasonic sensors used for object detection and image processing. The researchers’ proposed system includes an array of sensors composed of two-dimensional receivers and a transmitter in the center; a signal processing unit that extracts the received information

8

from the sensors; and an identification unit in which is composed of two neural networks for shape recognition. The system can detect depth direction and produce high resolution results, though the width resolution is limited by the arrangement of the sensor array. The researchers had also made use of the sensor’s ultrasonic pressure to identify the object’s position. Based on the results the researchers gained, the prototype system was able to differentiate measured objects, as well as the transparent ones such as PET bottles. On the other hand, this system was only subjected to measured and steady objects and the current orientation of it (Ohtani & Baba, 2012).
Microcontroller
Li et al.’s (2012) design of real-time ranging system, which is based on
ATmega16L, successfully implemented and tested system. It includes temperature compensation, uses another module, to improve the ranging accuracy of the system since the temperature affects the speed of sound; if the temperature changes for every 1°
Celsius, the speed of sound will vary 0.6m/s. The system used ATmega16L, CMOS 8-bit microcontroller, real for the processing of real time detection of the distance and simultaneously display on the LCD screen system. It also produced voice alarm when the distance is less than threshold, and gives feedback to the user for obstruction. When they finished testing the system in measuring distance, they confirmed the effectiveness and reliability of the design. From the actual results, between the 200mm and 1500mm, the maximum absolute error is just +/-6 mm, which is negligible.
Arduino is an open-source single board microcontroller that powers an
ATmega328 microprocessor and has an I/O support. The board runs a boot loader and supports standard programming language such as the C language with its own compiler.

9

The microcontroller also has an advanced RISC (Reduced Instruction Set Computing) architecture, non-volatile program and data memories. The board features 8-bit and 16-bit timers/counters, with eight analog input pins.

Figure 2.1 Arduino Module
Gizduino is another microcontroller board that runs an ATmega328 and
ATmega168 microprocessor. The circuit board features 14 digital input/output pins, with six analog inputs, and a 16 MHz crystal oscillator. The module can be powered via serial cable or by using an external power source. The Gizduino module supports the same software used in programming the Arduino module, C programming language, and runs with its own boot loader.

Figure 2.2gizDuino Module

10

3-axis Accelerometer
3-axis Accelerometer is an electromechanical device useful for tilt-sensing applications, wherein it can measure static acceleration of gravity, as well as dynamic acceleration resulting from motion or shock. According to Dimension Engineering, thru the accelerometers, the angle, which the device has been tilted, can be determined with respect to the earth. A presence or lack of motion can be detected and if the acceleration on any axis exceeds user-set level.
According to Biopac, which published an article entitled, “Using BioNomadix
Tri-axial Accelerometer as a Tilt Sensor Inclinometer”, in order to determine the tilt angle, the device uses measurements of gravity and its trigonometric projection on the axes of the accelerometer.

Figure 2.3 Single Axis Tilt Measurement
Figure 2.3 shows a single axis tilt measurement using a single acceleration measurement. To determine the acceleration along the X-axis, the following formula is used: = 1g ∗ sin
Using only the X-axis of the accelerometer, the following formula will be used to calculate tilt angle: = sin−1

Chapter 3
WEARABLE OBSTACLE DETECTION FOR THE BLIND THAT CAN DETECT
DISCRETE ELEVATION USING GIZDUINO MICROCONTROLLER
Abstract
Visual Impairment is a condition of loss of sight that needs support because of significant limitations of visual capability. In the past, people with such condition use guide dogs and white canes also presents certain limitations. With the advancements in technology, several electronic travel aids were built to extend the support given to people with visual impairment for improved mobility. In this paper, the researchers aimed to recreate the traditional wearable obstacle detection device for the visual impaired people by providing instructions and guiding them through the easiest path to avoid them obstructions.
Instructions at decision points will be transmitted through voice guidance. The proposed system will also feature an ultrasonic-based obstacle detection system that can also detect elevated obstacles such as stairs with reference from the eye-level of the user down to their feet.
Keywords: Ultrasonic, Visual Impairment, Obstacle Detection, Voice guidance

11

12

Introduction
People with visual impairment face enormous challenges in terms of their mobility, and, in today's technology, there is a dearth in facility to contend with these challenges. The task of creating a system or device that would enable a visually impaired person to move from one place to another presents challenges especially in dynamic environments that have stairs and obstacles. From the visually-impaired person’s point of view, moving through an unknown environment is always a real challenge unless they use an aid system, usually a walking cane or a guide dog. Since dynamic environments produce sounds and noise while the person is moving, visually-impaired individuals are able to locate things and, later on, use their sense of touch to verify such obstacles. With the advancement of technology, electronic travel aids (ETAs) are devised to include systems characteristics that not just determine the current location and keep track through the memorized course until the destination is reached but also avoid obstacle.
There are multiple studies regarding navigation systems for the blind, which presented new approaches; although these systems also possessed problems regarding the method or the technology used. One of the existing systems has been designed to recognize obstacles using ultrasonic sensor integrated into a walking cane (Ryu, 2011).
However, it does not eliminate the portability and convenience since the user still needs to hold the cane and propagate the cane around; it simply made the cane wireless. Also, another system includes three infrared sensors system for detecting of ascending and descending stairs and the decision are based on the current sensors readings (Lee, 2010).
However, it is tested only on three types of terrain and used infrared sensors in distance measurement. 13

The studies made by other researchers regarding obstacle detection systems for visually-impaired persons only included sensing of obstacles and not elevations such as stairs that may also harm the user. The problem to be solved by this study is that the existing systems have limited sensing capabilities. Therefore, there is a need to design and implement a system that can detect not only obstacles but also elevation.
The general objective of this paper is to implement a portable wearable obstacle detection system, a wearable cap, which uses ultrasonic sensors that can detect basic elevation. The specific objectives are to (1) design a wearable cap, which uses Gizduino
Microcontroller integrated with multiple ultrasonic sensors, accelerometer, vibrating motor, and earphones; (2) Calibrate and position the sensors; (3)program the software part using C; and(4) test the performance the system in a static controlled environment.
Once the system is built, it will benefit visually impaired people. They will be confident in their movement and be always aware of their present environment. This system will reduce their effort in moving effectively. Also, it will able to reduce the risk of accidents when traveling alone. Moving with uncertainty causes slower actions; with this device, however, performance and sensing capabilities of the visually impaired will improve without sacrificing the risk of accidents. It will also serve as the basis for future research and will help future researchers to improve the system further for the benefit of the society.
This system will only be used indoor under typical conditions. The sensing capabilities will include all sides that will sense the environment for obstacle detection and will produce outputs in a form of sound and vibration through the earphones and vibrating motor, respectively. There is also a limited feedback when there is a

14

disorientation of cap as well as the movement of the user also affects the raw data; this paper does not include such functions to correct errors based on the user’s movement speed. The researcher will test the system in a controlled environment in which different elevations are readily available such as in schools and establishments. The design of the cap is also limited to users with slim or small body built.
Methodology

Figure 3.1 System Flowchart

15

Figure 3.1 shows the step-by-step procedure in building the whole system. Each part will be discussed in the succeeding sections of the methodology. The development of the system first includes the system initial design, in which information gathering and planning is the main point of the process. Hardware construction will be next together with the integration of each component, as well as the initial software part of the system using C language. Then, the system will be calibrated and position the sensors in order to acquire data correctly. It also includes testing each output component if it properly working. If the system is already functional and able to acquire data correctly, the system will then be tested in the actual environment and derived the mathematical model of each terrain types. The data gathered will be analyzed in order to derive conclusion as to the accuracy of the system based on the results gathered.
Hardware Development

Figure 3.2 Conceptual Framework of the system

16

The design consists of ultrasonic sensors, microcontroller, audio module and earphones as shown in Figure 3.2. The system detects obstacles through the ultrasonic sensor, which will generate the time it takes for the ultrasonic sound to travel from the sensor to the object and back. Through the microcontroller, the time generated from the sensor will then be converted into distance from the detected obstacle. The output of the system will be the audio coming from the audio module and vibration with vibrating motor which will vary on certain output conditions. It also includes an accelerometer in order to tell the user to position correctly the cap when disoriented.

Figure 3.3 Orientation of MCU and Ultrasonic sensors Diagram
As shown in Figure 3.3, there are two kinds of ultrasonic sensor used based on functions. Four ultrasonic sensors will be used for obstacle detection and five for the elevation detection - three ultrasonic sensors on each side. The detection location of each is described in Figure 3.3, which shows the sensors for obstacle detection are placed on four directions of the user. While the remaining five sensors are used for the elevation detection, which is placed in front of the user; one at the middle and two other sensors on each side. The first sensor on the middle will be perpendicular to the ground, will serve as

17

the reference sensor, which is also the next step of the user, and the remaining four sensors will be arrange in also perpendicular to the ground with 1 and 3 inch margin for the first and second sensor from each side, respectively.
The microcontrollers, powered by a 9V battery, integrated with the audio module, will be at the backpack in order to maintain the stability of the cap and comfort when using the system. The 3-axis accelerometer will determine if the cap in correct position then produce sound outputs to tell the user to adjust the position of the cap.

Figure 3.4 Sample Connection of 3-axis Digital Accelerometer

18

The main system which consists of the Gizduino microcontroller plus audio module, sets of ultrasonic sensors, and earphones should be connected properly and document it with their respective pins. Having connected it properly and documented will simplify the programming process later on and able to minimize errors that are cause by simply loose wires.

Figure 3.5 Audio Module Hardware and Pin Assignments
The Gizduino microcontroller can be integrated by an audio module (WTV020 SD), which can handle a SD card memory and can load WAV and AD4 audio file. This module will play the audio file selected by the microcontroller, which actually selected serially, into the pins that outputs sound signals. Figure 3.6 shows the connection of the microcontroller and the audio module, but not limited to it since it can be transferred to other equivalent pins. Since the module is stand alone, it has its own processor. It can play audio while the microcontroller is processing the sensing of ultrasonic sensors. Also, from the SPK+ and SPK- pins, the earphones will be connected to produce sounds signals. Since the audio module retrieves its sound data in the SD-card, the sound output will be stored in the SD-card. The sound module can only read and play numbered filenames from 0000.wav to 0511.wav with WAV and AD4 audio file types. The table shows the filename and sound data suggested being store in the SD-card and can be played by microprocessor after the elevation have been determined.

19

Table 3.1 Audio Module Filename Assignments
Filename
Audio Type
Description
0000.wav
WAV audio file
Produces the “Step Up Ahead” sound
0001.wav
WAV audio file
Produces the “Step Down Ahead” sound
0002.wav
WAV audio file
Produces the “Upward Slope Ahead” sound
0003.wav
WAV audio file
Produces the “Downward Slope Ahead” sound
0004.wav
WAV audio file
Produces the “Plain Surface Ahead” sound
0005.wav
WAV audio file
Produces the “Hole Ahead” sound
0006.wav
WAV audio file Produce the “Unable to determine terrain type” sound 0007.wav
WAV audio file
Produces the “Stop” sound
0008.wav
WAV audio file
Produces the “Straight Ahead” sound
0009.wav
WAV audio file
Produces the “Position Head” sound

Figure 3.6 MCU with Audio Module
There will be four vibrating motors that are attached to each position of the ultrasonic sensors used for the obstacle detection. The vibrating motors will be used to generate output signal to indicate an obstacle. The pattern of the output signal will specify the distance of the user from the obstacle. The pattern will depend on the distance

20

range measured by the sensor. As shown in Figure 3.7, when there is no obstacle, there is no output signal, an obstacle distance greater than 0.6m will then be classified as a far obstacle. Mid obstacles ranges to 0.31m to 0.6m while near obstacles ranges up to 0.3m from the user. Every range corresponds with an output pattern as shown in the Figure 3.7.

Figure 3.7 Different Signal Patterns for Vibrating Motor

Figure 3.8 Ultrasonic Pins Assignment
Ultrasonic sensors are not just simply connected to the microcontroller. When there are more ultrasonic sensors the pins management of the microprocessor tends to be harder. Connecting it to the microcontroller should be documented the assigned pins, and make a table out of it, together with the audio module. With this procedure it will be

21

easier to address such pins when programming the software part later on. Figure 3.7a shows the connection of a single ultrasonic sensor into the microcontroller, whose pins use 2 I/O ports, 1 supply port, and 1 ground port. When the sets of ultrasonic sensor populated the I/O ports of the microprocessor, seen in Figure 3.9, it will increase the needed pins but not including the Vcc and Gnd port; since ultrasonic sensor using common Vcc and Gnd. Since all the pins of the accelerometer’s output pins are analog, its pins are directly connected to the analog pins of the microcontroller. The analog inputs are then processed by the microcontroller which converts it, to become the data to be used in programming the whole device. The vibrating motors are also connected with analog pins in order to vary the vibration output.

Figure 3.9 Schematic Diagram of the Wearable Obstacle Detection Device

22

Algorithm and Mathematical Model
The basic idea of ultrasonic sensor operation is that it propagates high frequency sound into space and evaluates the time the echo returns when it encounter a solid object.
By the use of microprocessor it exposes the capability of ultrasonic in distance measurement. Figure 3.10 Ultrasonic Signal Flow
The sensor needs a trigger signal, produce from the microprocessor, in order for it to propagate a high frequency sound. Then, it will respond and produce output signal on the echo pin when it receives back the propagated sound waves. This echo pulse output width corresponds to the time it takes for the ultrasonic sensor to object and back. And the distance can be computed by: =

( ℎ × )
2

(3.1)

where the speed of sound for the ultrasonic sensor is 331.4 m/s at absolute zero temperature. Though the calculation of the distance value can be easily solved, having greater number of ultrasonic sensors working together is another set of problems. One problem

23

poses in multiple sets of ultrasonic sensors is that they have same frequency of sound.
Propagating them at the same time may cause the each sensors to receive incorrect readings, thus, the sensors should not propagated at the same time; sensors should be used one at a time only. The researchers proposed that the data gathering of the sensors will be done sequentially; one after one and only one sensor will activate at a time.
Since the operation is sequential, the drawback will be the processing will be relatively slow compared to the parallel processing. Ideally, the ultrasonic sensor will take around maximum of twice the max range over the speed of sound seconds to record the results; max range is double since the sound will travel forward and back when it deflects an obstruction. =

2 ×

(3.2)

From the specification of the ultrasonic sensor, it can detect objects at a maximum range of 3.5 m, and the speed of sound propagation is at 331.5m/s and using the equation, it take roughly around 20ms for a single sensor or 20nms of n number of sensors operated sequentially excluding the main processing of data gathered.
This paper includes detection of elevation below the eye level by just using distance from the sensor positioned in front of the user. Detection of obstacles is comparing of these sensor values to detect the slope of the terrain. The distance for each ultrasonic sensor at certain position should produce values that is precise that can be compared to each other.

24

Figure 3.11 Decomposition of components
Figure 3.11 shows the system is divided into components and sensors are able to take readings on distances in front of the user. The reference height, which is the middle sensor distance from the sensor to the ground, can be the reference value in determining certain properties of the ground. Comparing the perpendicular distance can able to conclude the properties of the ground. For example, Figure 3.11, if the calculated values for each height, Ha to Hc, are the same it could be concluded that it is a plain surface.
Since the orientation of the cap should be always perpendicular to the ground, at least the reference sensor, there will be a prompt produce to the earphones to inform the user to adjust the position. Using accelerometer will determine the position of the cap. It will give inputs to the microcontroller to produce signals to the audio module to play the file that will inform the user to adjust the cap position.
As for the types of the ground, the researchers will analyze and determine the position and distance of certain type. Then, they will make an association to each variable from deviation and combinations. Figure 3.12 shows the type of terrain that will be analyzed, tested and proposed variation of distances.

25

Figure 3.12 Test Environment

26

The label of each figure is the relationship of each distance that can be able to identify object/ground properties. The example from the figure is demonstrated only to three sensors but the idea of detecting such terrain is basically the same. The specific terrain such as step up and step down combination will follow the standard specification of stairs.
Graphing the indicated terrain will be tested and acquire raw data to better represent and easily derive the relationship of each height. At first the system will determine the reference height that will be saved into the microcontroller. Then, by the comparison of each height readings from the different ultrasonic sensors, the elevation can be detected. The relationship of each height for certain elevation will be extracted from the making a graph of height of each sensor, then deriving the comparison or the mathematical model.
200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0
M

L1

R1

L2

R2

Average

Figure 3.13 Sample relationship of each Height
The main process of the program is described in the flowchart in Figure 3.14. The algorithms for processes were already described and explained in previous parts. The

27

flowchart will be implemented using C, uploaded to the microcontroller, and needed libraries will also be implemented for object-oriented programming.

Figure 3.14 Flowchart of the System
Calibration
This part of the research will calibrate the position of the sensors and test if it, at different height of the user, can still acquire accurate distances at each sensor based on its reference height. Before testing the system, it must be first calibrated in order to have

28

precise data to be analyzed. Precise inputs mean precise and correct outputs, unless incorrect processing of data, since it reduces errors before processing the data.
Each ultrasonic sensor will be tested in different heights and test the gathered data through Grubbs Test; Grubbs’ Test is used to determine a single outlier in a sample set that follows a normal distribution. The ultrasonic sensor cannot be accurate to nearest centimeter but it is very precise in reading values relative to others sensors. The test height will be positioned and adjusted initially by the researcher and at different reference heights reading from the reference sensor. The distance will be determined by the ultrasonic sensors and will be tested into different actual heights. It can assure that after calibrating the system, it will produce accurate inputs and can detect correct.
Each sensor will be calibrated to its position with respect to the reference height of the reference sensor. Each height of the sensor readings is in centimeter, in which the reference height starts in 130 cm up to 178 cm with 2 cm increments. The values of each sensor will come from the average readings, i.e. 10 trials per sensor at certain height; then each reading will be compared to the average reference height reading to compute for the percent difference. Then the percent difference for each sensors value will be determined using the equation below: =

| ℎ− ℎ| ℎ

(3.3)

The height readings of each sensor, including the reference sensor, will be first tested by sampling the distance gathered and to determine if there is an outlier in the sample using Grubbs Test; outliner is a value in which appears to deviate from an observation in a sample.The Grubbs’ Test is defined for the hypothesis:
H0: There are no outliers in the data set.

29

Ha: There is one outlier in the data set.
Grubbs’ test statistic is defined as:
=

̅
| −|

(3.4)

where is the sample mean and is the standard deviation of the sample set.
While is the element of the sample that produces largest deviation from the sample mean. At then, using two-sided test, the null hypothesis is rejected at significance level when:

>

−1


2

/(2),−2
√ −2+ 2

(3.5)

/(2),−2

Where 2
/(2),−2 is the upper critical value of t-distribution with N-2 dF and alpha or significance level of /(2).At alpha = 0.05 the critical value of is approximately
2.29.
Table 3.2 Trials for each Height
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if
G>2.29 ( p< 0.05)

Reference
Height
160.4
160.47
160.82
160.89
160.4
160.82
160.47
160.47
160.89
160.47
160.61
160.89
0.213854
1.309307
No

Sensor
L1
160.37
159.95
160.37
160.37
160.37
160.37
160.37
160.37
160.37
160.37
160.328
160.37
0.132816
0.316228
No

Sensor
R1
160.4
160.47
160.47
160.47
159.63
160.47
160.47
159.56
160.4
160.05
160.239
160.47
0.363056
0.636265
No

Sensor
L2
159.56
159.56
159.63
159.63
159.98
159.63
159.63
159.63
159.21
159.63
159.609
159.98
0.183875
2.017676
No

Sensor
R2
160.05
160.05
160.05
160.05
160.05
160.05
159.98
159.98
159.98
159.98
160.022
160.05
0.036148
0.774597
No

30

In order to compute for the t-value, first computer the sample mean and standard deviation by using the formula:
N

x

x i 1

N

i

;wherei = trials

 x
N

s

i 1

i

x

(3.6)



2

(3.7)

N 1

Next, find the furthest value by computing the maximum absolute difference from the sample:
=

̅
| −|

(3.8)

After computing for the G value, refer to the G-table in order to determine the range value of P where df=N-2. Then if the P value is less than 0.05 then the sensor is not able to acquired precise values and needs calibration of its position. When the G value is greater than the critical value from the G-table, which 2.29 for df = 8, then the value of P is approximately is less than 0.05. P values less than 0.05 will be rejected and will performed another set of trials until sensors is able to produce P values greater than 0.05.
Testing
The system will be tested based on its accuracy on detecting correct terrain. On an existing environment the system will be tested that reproduce any terrain in the Figure
3.10 and using confusion matrix will summarize the results of such testing and will be ready further investigation of its accuracy in detecting correct terrain. The researcher will test the system in several kinds of terrain and also with greater sample size or trials.

31

The overall accuracy of the system can be calculated by summing up all the true positives, actual class that are classified correctly, value over the total sample size. And the precision of the system in detecting certain terrain can also be evaluated by simply dividing each true positive by each subtotal. Such dependent parameters are calculated as follows: =



(3.9)

=

1

; ℎ

(3.10)

Results and Discussion
Actual Calibration
The researchers calibrated the sensors by getting the readings of each sensor at different height. The test height starts from 130 cm to 178 cm, with 2 cm increments.
Table 3.3 shows the reading of the sensors at reference height 160cm which is also the same to other height. There are 10 trials per test height in which the values are verified by applying Grubbs’ Test to determine an outlier on the sample. The mean and standard deviation of each sample set are determine in order to determine the value of G using the formula =

̅
| −|

; where is the value that produces the larger absolute difference

with the sample mean. Since the critical value of G at df=8is 2.29, computed value of G greater than 2.29 means that the value of p is less 0.5 which translates that the value is an outlier. If an outlier is detected, the position will be checked together with the plain surface and perform 10 trials again, perform Grubbs’ Test until no more outliner is detected. 32

Table 3.3 Sample Calibration
Reference
Trials

160
1
2
3
4
5
6
7
8
9
10

Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

M
160.4
160.47
160.82
160.89
160.4
160.82
160.47
160.47
160.89
160.47
160.61
160.89
0.213854
1.309307
No

L1
R1
L2
R2
160.37
160.4
159.56
160.05
159.65
160.47
159.56
160.05
160.37
160.47
159.63
160.05
160.37
160.47
159.63
160.05
160.37
159.63
159.98
160.05
160.37
160.47
159.63
160.05
160.37
160.47
159.63
159.98
160.37
159.56
159.63
159.98
160.37
160.4
159.21
159.98
160.37
160.05
159.63
160.05
160.298 160.239 159.609 160.029
160.37
160.47
159.98
160.05
0.227684 0.363056 0.183875 0.033813
0.316228 0.636265 2.017676 0.621059
No
No
No
No

See appendix for all tables of calibration
Table 3.3 shows the actual results of calibration with 160cm reference height. The mean, largest absolute difference with the mean, standard deviation, and Grubbs’ value are computed. It also determines if the value with largest absolute difference with the mean is an outlier or not. If the value is significant, therefore, the selected value is an outlier and need to perform the test again. If there are no outliner detected the mean of each sensor will be tabulated to compare the percentage difference with the reference sensor. 33

Table 3.4 Percent Difference for Calibration
Reference
Height(cm)

L1

R1

L2

20.64
40.43
60.99
81.07
100.37
120.74
130.58
132.74
134.33
137.40
139.03
141.08
142.57
145.06
146.72
148.95
150.95
152.37
154.57
157.52
159.25
160.61
162.62
164.37
167.11
168.91
170.51
172.67
175.22
176.53
178.85
201.40
201.40
220.74
240.59
260.86
300.57
320.95
340.78
360 - 0

20.83
40.48
61.03
81.31
100.41
120.71
130.57
132.70
134.62
137.31
139.08
141.27
142.54
145.11
146.64
148.79
150.88
152.62
154.32
157.32
158.93
160.30
162.41
164.27
166.62
168.55
169.99
172.15
174.95
176.42
178.36
201.31
201.31
220.70
240.58
260.35
300.32
320.88
340.62
360 - 0

20.83
40.36
60.57
80.67
99.83
120.32
130.14
132.23
133.88
136.88
138.56
140.68
142.34
144.45
146.14
148.19
150.44
151.85
153.65
156.91
158.80
160.24
162.15
163.92
166.46
168.49
169.94
172.15
174.90
176.31
178.21
200.88
200.88
220.23
240.14
260.22
299.65
320.44
340.25
360 - 0

19.90
39.78
59.85
80.34
99.54
119.89
129.64
131.80
133.58
136.60
137.89
140.30
141.78
144.13
146.03
147.90
149.94
151.56
153.68
156.46
158.64
159.61
161.67
163.33
165.85
167.99
169.38
171.84
174.35
175.53
177.73
200.60
200.60
219.80
239.64
259.73
299.68
319.94
339.82
360 - 0

R2

20.19
40.25
60.55
80.65
100.26
120.13
130.08
131.90
134.22
136.87
138.55
140.65
142.27
144.44
146.26
148.20
150.48
152.03
154.15
157.09
158.83
160.03
162.13
163.79
166.40
168.46
170.27
172.62
174.94
176.45
177.60
200.87
200.87
219.90
240.08
260.40
300.15
320.48
339.88
360 - 0
Average

L1

R1

L2

R2

0.94%
0.13%
0.08%
0.30%
0.04%
0.02%
0.01%
0.03%
0.22%
0.06%
0.04%
0.14%
0.02%
0.04%
0.05%
0.10%
0.05%
0.16%
0.16%
0.13%
0.20%
0.19%
0.13%
0.06%
0.30%
0.21%
0.31%
0.31%
0.15%
0.06%
0.28%
0.04%
0.04%
0.02%
0.01%
0.20%
0.08%
0.02%
0.05%
0.00%
0.13%

0.94%
0.19%
0.69%
0.49%
0.54%
0.34%
0.34%
0.38%
0.33%
0.38%
0.34%
0.28%
0.16%
0.42%
0.40%
0.51%
0.34%
0.34%
0.59%
0.39%
0.28%
0.23%
0.29%
0.28%
0.39%
0.25%
0.34%
0.30%
0.19%
0.13%
0.36%
0.26%
0.26%
0.23%
0.19%
0.25%
0.30%
0.16%
0.16%
0.00%
0.33%

3.60%
1.61%
1.86%
0.90%
0.82%
0.70%
0.72%
0.70%
0.56%
0.58%
0.82%
0.56%
0.56%
0.64%
0.47%
0.70%
0.67%
0.53%
0.57%
0.68%
0.38%
0.62%
0.59%
0.63%
0.75%
0.54%
0.66%
0.48%
0.50%
0.57%
0.63%
0.40%
0.40%
0.42%
0.40%
0.44%
0.29%
0.31%
0.28%
0.00%
0.69%

2.18%
0.45%
0.72%
0.51%
0.11%
0.51%
0.39%
0.63%
0.08%
0.39%
0.34%
0.30%
0.22%
0.43%
0.31%
0.50%
0.32%
0.22%
0.27%
0.28%
0.26%
0.36%
0.31%
0.35%
0.43%
0.27%
0.14%
0.03%
0.16%
0.05%
0.70%
0.26%
0.27%
0.38%
0.22%
0.18%
0.14%
0.15%
0.26%
0.00%
0.35%

34

Table 3.4 shows the data gathered from the several test height starting from 0 cm to 300 cm with 20 cm increment, but with more precision between 138 cm to 178 cm with 2 cm increment. These values are the mean of the height readings at certain height in which no outliner is detected. Then the percentage difference is calculated with respect to the middle or reference sensor.
It can be observed that the average percentage difference of the sensors to the reference sensor is less 1 percent. This observation provides that the sensor readings are reliable and precise.
Mathematical Model
The researchers graphed the height comparison for each terrain in order to easily derive the relationship of each height and determine the mathematical model to determine each terrain. Linear Regression will be used to determine the line equation and the slope of the set of value for each elevation type. First, the researchers graphed the height comparison of the plain surface. Then, the researchers performed 10 trials for each terrain and graphed the values into a bar graph to easily visually the relationship of values.
Table 3.5 Actual Results for 170 cm
Trial 1
M
L1
R1
L2
R2
1
170.65 169.71 169.76 169.79 169.69
2
170.6 169.72 170.02 169.72 169.71
3
170.56 169.75 169.91 169.71
169.7
4
170.61 169.68 169.88 169.72 169.72
5
170.64 169.73 169.87 169.85 169.72
6
170.54 169.74 169.78 169.86 169.71
7
170.56
169.7 169.84 169.84 169.72
8
170.56 169.66 169.71 169.78 169.71
9
170.56 169.72 169.76 170.03 169.72
10
170.56 169.73 169.72 169.79 169.72
Average 170.584 169.714 169.825 169.809 169.712

35

Table 3.5 shows the actual results at 170 cm reference height at plain surface.
This actual height will be saved into the microcontroller to be compared later on to detect the other types of elevation. The graph of the plain surface is shown below:
200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0
M

L1

R1

L2

R2

Average

Figure 3.15 Plain Surface Graph
Figure 3.15 shows that at plain surface the distances of each sensor are the same at 170 cm. It is also the same at other height, with each sensor having the same readings.
The equation in order to acquire reference height can be described as: =

∑0

(3.11)

ℎ ℎℎ ℎ ℎ = =

( + 1 + 1 + 2 + 2 )
5

(3.12)

170.584 + 169.714 + 169.825 + 169.809 + 169.712
5
= 169.9288

36

Table 3.6 Values Gathered for Step Up Elevation
Trial 1
M
L1
R1
L2
R2
Ref Height
1 148.02 148.38 150.64 148.23 149.05
170.65
2 148.13 148.32 149.46 148.14 149.29
170.6
3 148.58 148.32 149.46 148.51
149.2
170.56
4 148.24 149.38 149.53 148.12 148.95
170.61
5 148.24 149.29
149.7 148.54 149.29
170.64
6 147.98 149.61 149.96 148.42 149.12
170.54
7 148.28
149.1 150.05 148.08 149.36
170.56
8 148.28 149.18 149.92 148.26 149.27
170.56
9 147.88 149.12 149.88 148.78 149.84
170.56
10 148.36 149.51 149.55 148.25 148.94
170.56
Average 148.199 149.021 149.815 148.333 149.231
170.584
Table 3.6 shows the values of the heights when the system is subjected into a step up elevation. The means will be plotted based on their distance to the actual position: middle sensor is at 0 because it is the start of the sensor, then R1 and L1 at 8cm, lastly,
R2 and L2 at 18 cm. Finally, using linear regression plot in Microsoft Excel, the line equation can be derived easily in a form of slope-intercept form.

Step Up
160
y = 0.0364x + 148.51

140
120
100

80
60
40
20
0
0

2

4

6

8

10

12

14

16

Figure 3.16 Step Up Graph for Equation Derivation

y = 0.0364x + 148.5
ℎ ℎ ℎℎ ,

(3.13)

18

37 ,
ℎ − , ℎ ℎ ℎ

Equation 3.13 serves as the mathematical model for the step-up elevation. When the line equation is a straight horizontal line, the slope is closer to 0, and the value of yintercept is less than the reference height. Figure 3.17 shows the actual values of each trial to easily visualize the comparison of values.

200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0
M

L1

R1

L2

R2

Ref Height

Average

Figure 3.17 Step-Up Graph
Figure 3.17 shows the actual readings of the sensor when the system is tested on a step-up set-up. It shows that all sensors have the same value but have lesser value than the reference height. Through observation the comparison of the values can be described as: ≈ 1 ≈ 1 ≈ 2 ≈ 2 < (3.14)

38

Table 3.7 Values Gathered for Step Down Elevation
Trial 1
M
L1
R1
L2
R2
Ref Height
1
186.12 185.16 185.56 185.16 185.41
170.65
2
186.05 185.14 185.53 185.26 185.15
170.6
3
186.05 184.96 185.55
185.2 185.33
170.56
4
186.13 184.95 185.47 185.24 185.39
170.61
5
186.14 184.99 185.62 185.32 185.14
170.64
6
186.14 184.89 185.51 185.51
185.2
170.54
7
186.14 184.93 185.65 185.24 185.17
170.56
8
186.69 184.92 185.69 185.55 185.13
170.56
9
186.13 185.25
185.8 185.72 185.07
170.56
10
186.14 185.31 185.75 185.67 185.12
170.56
Average 186.173 185.05 185.613 185.387 185.211
170.584
Table 3.7 shows the values of the heights when the system is subjected into a step down elevation. Graphing the values and deriving the line equation with same process done on the step up elevation, the outputs are shown in Figure 3.18 below.

Step Down
200
180

y = -0.0546x + 186.04

160
140

120
100
80
60
40
20
0

0

2

4

6

8

10

12

14

16

Figure 3.18 Step Down Graph for Equation Derivation

y = -0.0546x + 186.04

(3.15)

ℎ ℎ ℎℎ , ,
ℎ − , ℎ ℎ ℎ

18

39

Equation 3.15 serves as the mathematical model for the step down elevation.
When the line equation is a straight horizontal line, which means that slope is closer to 0, and the value of y-intercept is greater than the reference height, it signifies that the elevation can be classified as step down. And the graph below shows the actual values of each trial to easily visualize the comparison of values.

200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0
M

L1

R1

L2

R2

Ref Height

Average

Figure 3.19 Step Down Graph
Figure 3.19 shows the actual readings of the sensor when the system is tested on a step down set-up. The graph, as well as the ratio between the reference heights, shows that all sensors have the same value but have greater value than the reference height.
Through observation the comparison can be described as: ≈ 1 ≈ 1 ≈ 2 ≈ 2 >

(3.16)

40

Table 3.8 Values Gathered for Incline Elevation
Trial 1
M
L1
R1
L2
R2
Ref Height
1 165.59 154.18
158.4 156.75 153.92
170.65
2 166.14 154.29 158.38 156.61
154
170.6
3 166.43 154.24 158.24 156.82 153.56
170.56
4 166.38 154.52 158.22 156.45 154.16
170.61
5 166.25
154.8 158.66 154.08 154.51
170.64
6 166.76 157.15
159.1
152.7 154.05
170.54
7 166.98 158.81
158.1 153.41 154.01
170.56
8 165.89 163.46
158.7 153.13 154.04
170.56
9 165.53 161.51 158.68 154.53 153.96
170.56
10 166.15 159.15 158.31 155.49 153.67
170.56
Average 166.21 157.211 158.479 154.997 153.988
170.584
Table 3.8 shows the values of the heights when the system is subjected into an incline elevation. The equation below show the line equation:

Incline
180
160
140

y = -0.7323x + 165.37

120
100
80
60
40
20
0
0

2

4

6

8

10

12

14

Figure 3.20 Incline Graph for Equation Derivation

y = -0.7323x + 165.37

(3.17)

ℎ ℎ ℎℎ , ,
ℎ − , ℎ ℎ ℎ

16

18

41

The equation above serves as the mathematical model for the incline elevation.
When the line equation has negative slope and the value of y-intercept is less than the reference height, it means that the elevation can be classify as incline. And the graph below shows the actual values of each trial to easily visualize the comparison of values.

200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0
M

L1

R1

L2

R2

Ref Height

Average

Figure 3.21 Incline Slope Graph
Figure 3.21 shows the actual readings of the sensor when the system is tested on an upward slope set-up. The graph, as well as the ratio between the reference height, shows that all sensors have the decreasing value and have lesser value than the reference height. Through observation the comparison can be described as: > > 1 ≈ 1 > 2 ≈ 2

(3.18)

42

Table 3.9 Values Gathered for Decline Elevation
Trial 1
M
L1
R1
L2
R2
Ref Height
1 171.18 169.31 170.81
175.6 176.75
170.65
2 171.01 169.31 170.86 175.41 175.81
170.6
3 170.97 169.29 170.65 175.41 175.84
170.56
4 170.99 169.29
170.8 174.78
177
170.61
5 170.94 169.28 170.61 174.65 177.61
170.64
6 171.11 169.28
170.7 174.85
175.3
170.54
7 171.11 169.27 170.66 175.11 176.05
170.56
8 170.68 169.35 170.49 175.66 176.87
170.56
9 170.78 169.34 170.61 175.57 175.41
170.56
10 170.78 169.29 170.67 175.55 175.99
170.56
Average 170.955 169.301 170.686 175.259 176.263
170.584
Table 3.9 shows the values of the heights when the system is subjected into a step down elevation. The equation below described the line equation through applying the linear regression:

Decline
200
180
160

140

y = 0.3004x + 169.83

120
100
80
60
40
20
0
0

2

4

6

8

10

12

14

Figure 3.22 Decline Graph for Equation Derivation

y = 0.3004x + 169.83

(3.19)

ℎ ℎ ℎℎ , ,
ℎ − ,

16

18

43 ℎ ℎ ℎ

The equation above serves as the mathematical model for the incline elevation.
When the line equation has positive slope and the value of y-intercept is less than the reference height, it means that the elevation can be classified as inclined. Figure 3.23 shows the actual values of each trial to easily visualize the comparison of values.

200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0
M

L1

R1

L2

R2

Ref Height

Average

Figure 3.23 Decline Slope Graph
Figure 3.23 shows the actual readings of the sensor when the system is tested on a downward slope set-up. The graph, as well as the ratio between the reference height, shows that all sensors have the increasing value, from M to R2/L2, and have greater value than the reference height. Through observation the comparison can be described as: 2 ≈ 2 > 1 ≈ 1 > ≈

(3.20)

44

Table 3.10 Values Gathered for Hole Elevation
Trial 1
M
L1
R1
L2
1
0
0
0
2
0
0
0
3
0
0
0
4
0
0
0
5
0
0
0
6
0
0
0
7
0
0
0
8
0
0
0
9
0
0
0
10
0
0
0
Average
0
0
0

R2
0
0
0
0
0
0
0
0
0
0
0

0
0
0
0
0
0
0
0
0
0
0

Ref Height
170.65
170.6
170.56
170.61
170.64
170.54
170.56
170.56
170.56
170.56
170.584

Table 3.10 shows the values of the heights when the system is subjected into a step down elevation. The equation below described how to get each ratio:

Hole
1
0.9
0.8
0.7
0.6
0.5
0.4
0.3
0.2
0.1 y=0 0
0

2

4

6

8

10

12

14

Figure 3.24 Hole Graph for Equation Derivation

y=0

(3.21)

ℎ ℎ ℎℎ , ,
ℎ − , ℎ ℎ ℎ

16

18

45

The equation above serves as the mathematical model for the incline elevation.
When the line equation is equal to zero, it means that the elevation is a hole. And the graph below shows the actual values of each trial to easily visualize the comparison of values. 200

Trial 1

180

Trial 2

160

Trial 3

140

Trial 4

120

Trial 5

100

Trial 6

80

Trial 7

60

Trial 8

40

Trial 9

20

Trial 10

0

M

L1

R1

L2

R2

Ref Height

Average

Figure 3.25 Hole Graph
Figure 3.25 shows the actual readings of the sensor when the system is tested on a hole set-up. It shows that there is no reading in all other sensors; therefore, it has detected beyond the maximum distance that the sensor can detect; the sensor produces a zero value when the distance is beyond the maximum distance. Through observation of the graph, the comparison can be described as: > 2 ≈ 2 ≈ 1 ≈ 1 ≈ ≈ 0

(3.22)

After the derivation of the mathematical model, it was shown that the slope for every elevation changes. For the plane surface, including step up and step down, shows that the value of slope is closer to 0, and the researchers set a range for plain surface when as -0.1 to 0.1 for the slope. Also, y-intercept is also included in the determination of

46

slopes for step and step down. Values that are not within the range of -0.1 to +0.1, elevation are classified as either incline or decline. For the incline and decline can be determine by the slope sign, if the slope is negative then the elevation is inclined, and positive for the decline. Lastly, when the line equation is equal to zero, the elevation will be classified as hole. The slope is programmed and calculated into the microcontroller in order to determine the microcontroller and now ready for testing.
Table 3.11 Confusion Matrix Values

0

0

0

10

0

8

0

0

0

0

8

0

0

9

1

1

0

11

0

2

1

9

1

0

13

0

0

0

0

8

0

8

0

0

0

0

0

10

10

10

10

10

10

10

10

60

Hole

Total 1

0

Sub

Downwar d Slope

0

Plain

Upward
Slope

10

Step Up

Step
Down

Predicted Class

Step Up

Actual Class

Step
Down
Upward
Slope
Downward
Slope

Plain

Hole
Sub Total
2

Table 3.11 shows the tally of the results for each elevation types. For each terrain types 10 trials are performed. The minimum accurate detection for a terrain is 80 percent for step down and plain, 90% percent upward and downward slope and with 100% percent for the hole and step up. The total accuracy of the system is the sum of the true

47

positive over the total trials performs. The calculation of accuracy and precision of each type is shown below: =
=



100

(3.23)

10 + 8 + 9 + 9 + 8 + 10 = 54
60
= 90 %

=

1

(3.24)

10 100% = 100%
10

=

=

8 100% = 100%
8

=

9 100% = 81.82%
11

= =

=

9 100% = 69.23%
13

8 100% = 100%
8

10 100% = 100%
10

48

Conclusion
After calibrating, programming, and testing the system, the performance shows that the system has 90 % accuracy in detecting elevations. Incorrect detection may be attributed to the disorientation of the cap by the movement of the user’s head. Testing first the readings of the ultrasonic sensor and adjusting correctly provided precise data.
These data sets are also validated using the Grubb’s Test, to find an outlier. If outlier existed in the datasets, the position is adjusted until no outlier is detected. Graphing the reading of distances on each elevation types provides easier derivation of the mathematical model, and later on, programmed the comparison into the microcontroller to detect elevation.

Chapter 4
CONCLUSION
The researchers were able to build a wearable obstacle detection cap that can detect elevation through the use of e-Gizmo Gizduino Microcontroller, Accelerometer
Module, Ultrasonic sensors, Vibrating motor and Earphones. The Gizduino microcontroller has controlled nine ultrasonic sensors and processed average the 10 trials of height per reading. The Accelerometer Module integrated into the system produces the orientation of the cap in which processed by the microcontroller to produce the signal when to adjust the position of the cap. The Vibrating Motors serve as a haptic feedback for the obstacle detection from all sides of the user, and the intensity of vibration is relative to how near the obstacle to the user. Lastly, with the Audio Module integrated into the system, the earphones were able to produce the sound to inform the current elevation in-front of the user.
Calibrating of the sensors before programming the software of the prototype ensures that precise and reliable data are being process. The researcher performs calibration for several heights starting from the 130 cm to 178 cm with 2 cm increment and verifies the precision using Grubbs’ Test; Grubb’s Test ensures that all readings, or datasets, that will be process are precise. After using the Grubbs’ Test on several heights with 10 trials each, the result shows no outliner is detected for each reading at certain height. Therefore the system readings, the data gathering function, were precise and reliable, and ready for processing. Also, comparing the average of the readings, the percent difference is less than 1% for each sensor, which is very low and signifies that the system is precise.

49

50

Programming of mathematical model to determine elevation types is performed by graphing the actual values of each elevation type and derived the mathematical model thought observation of graphs. Results show the relationship of each sensor to each other for certain elevation. These mathematical models or comparisons were programmed and integrated into the microcontroller to detect elevation. Also, the program was coded in C language that controls the sensors, processing of the data, and interfacing of the several modules. After testing the prototype for each elevation types, with 10 trials per elevation type, the minimum accurate detection for a terrain is 80 percent for step down and plain,
90% percent upward and downward slope and with 100% percent for the hole and step up. Then, calculating the overall accuracy of the device in detecting different elevation types is 90 %; while the precision for all elevation types are 100% except for the upward and downward slope with 81.82 % and 69.23 %, respectively.

Chapter 5
RECOMMENDATION
The device is limited to wired connections; thus, the researchers suggest the implementation of wireless technology to provide convenience to the user. An improvement for this device is to integrate all the modules in one and mount it on the cap, instead of using a separate bag to contain all of them. Another recommendation is to create a wireless or remote controller in which the switches and/or buttons are engraved with Braille dots for the user to be able to control the calibration of the device. Such recommendation is given since the device was intended to help visually-impaired users to navigate around their environment.
The researchers also recommend the usage of smaller ultrasonic sensors to help design fit a more marketable device. In addition, the ultrasonic sensors can be replaced with other sensors such as cameras to detect more types of terrain and for better object sensing. Another recommendation is to make the cap able to withstand harsh weather conditions, such as rain or being exposed to extreme sunlight. The researchers suggest making the cap wearable and useable for outdoor use since the prototype was intended for static indoor set-up.
It is also recommended to increase the rate of sensor calibration and sensing of objects for the user to be able to move faster in the same environment. In addition, the researchers recommend that the device elevation detecting should be improved so that it could determine small changes in terrain elevation.

51

REFERENCES

Cardin, S., Thalmann, D., & Vexo, F. (2005). Wearable Obstacle Detection System for
Visually Impaired People. VR Workshop on Haptic and Tactile Perception of Deformable
Objects, 50 – 55.

Do-Hoon, K., & Heung-Gyoon, R. (2011). Obstacle Recognition System using Ultrasonic
Sensor and Duplex Radio-Frequency Camera for the Visually Impaired Person. ICACT
2011, 326 – 329.

Zuban, E., Labadi, H., Balogh, I., Kovacs, K., & Covic, Z. (2012). Digital Radar Aided
Cane. 2012 IEEE 10th Jubilee International Symposium on Intelligent Systems and
Informatics, 117-120.

Bharathi, S., Ramesh, A., & Vivek, S. (2012). Effective Navigation for Visually Impaired by Wearable Obstacle Avoidance System. 2012 International Conference on Computing,
Electronics and Electrical Technologies (ICCEET), 956 – 958.

Jameson, B., & Manduchi, R. (2010). Watch your head: A Wearable Collision Warning
System for the Blind. 2010 IEEE Sensors, 1922 – 1927.

52

53

Bouhamed, S., Kallel, I., & Masmoudi, D. (2013). Stair Case Detection and Recognition using Ultrasonic Signal. 2013 36th International Conference on Telecommunications and
Signal Processing (TSP), 672 – 676

Ohtani, K., & Baba, M. (2012). Shape Recognition and Position Measurement of an
Object Using an Ultrasonic Sensor Array. Sensor Array.

Li, Z., Wang, X., Yan, R., Liu, Z., & Liu, G. (2012). Design of the Real-Time Ranging
System based on Microcontroller Atmega16l. 2012 International Conference on
Computer Science and Information Processing (CSIP), 72 – 75

Kassim, A., Jaafar, H., Azam, M., Abas, N., & Yasuno, T. (2013). Performances study of distance measurement sensor with different object materials and properties. 2013 IEEE
3rd International Conference on System Engineering and Technology, 281- 284

Dirain, J.R. et al,(2011). Wearable Obstacle Detection System and Braille Cell Phone for the Blind

Biopac Systems Inc., April 2013. Using BioNomadix Tri-axial Accelerometer as a Tilt
Sensor Inclinometer. Retrieved from http://www.biopac.com

Lee, M., & Lee, S. (2010). Design and analysis of an infrared range sensor system for floor-state estimation. J Mech Sci Technol Journal of Mechanical Science and
Technology, 1043-1050.

54

APPENDICES
APPENDIX A
Operation’s Manual
User’s Manual
1. Setting up the device.
To power up the device, connect it to the powerbank. Before wearing the cap, it is needed to clear the sensors from things that could lead to error in data while calibrating. The wire from the bag is adjustable and is advisable to put the excess in the bag. The earphones should be connected on the device and is passed thru the hole in the bag to make it more accessible and avoid tangle in using it.
2. Calibrating the device.
Before using the device for it purpose, it should be calibrated in a straight surface to get the user’s height which is needed in the computation. The device will say “Calibration Done” if the calibration is completed while
“Position Head” states that the user should adjust the cap or stand straight.
3. Using the device.
The device will state the type of elevation by sending a voice message thru the earphones. The user can adjust their desired volume by adjusting the knob in the device. Whenever the user is not standing straight or the cap was adjusted from the calibration, the device will also prompt “Position
Head” to advise the user to adjust the cap. The device also has vibrating

55

motors that will detect obstacles and its distance from the user will depend on the power of the motor.
Troubleshooting Guides and Procedures
1. The device is not calibrating.
Reconnect the power to reboot the device; this will enable the device to refresh the sensors.
Calibrate in a straight surface with lesser obstacles and make sure that the user’s hands are not blocking the sensors.
The supply can also be a factor; the power bank should be checked if it has still enough battery to power up the device.
2. The audio is not audible.
One must always check if the earphones are plugged in. Rebooting the device can also help where it might be sending thrash values into the audio module that cause not to function or play other audio.

56

APPENDIX B
Pictures of Prototype

57

58

59

APPENDIX C
Data Sheet

60

61

62

63

64

65

66

67

68

69

70

71

72

73

74

75

76

77

78

79

80

81

82

83

84

85

86

87

88

89

90

91

92

93

94

95

96

97

98

99

100

101

102

103

104

105

106

107

108

109

110

111

112

113

114

115

116

117

118

119

120

121

122

123

124

125

126

APPENDIX D
Calibration Results
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

No

Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

20
Height(mean) L1
R1
L2
R2
20.04
21
21
19.84
20.61
20.11
20.58
20.58
19.77
20.19
20.68
20.58
20.58
19.84
20.19
20.11
21
21
20.19
20.26
21.11
21
21
19.84
19.84
21.04
20.58
20.58
19.84
19.91
21.11
21
21
19.77
20.26
21.04
20.58
20.58
19.84
19.84
20.61
21
21
19.84
20.26
20.61
20.58
20.58
20.26
20.61
20.646
20.79
20.79
19.903
20.197
20.04
20.58
20.58
19.77
19.84
0.432902 0.221359 0.221359 0.172887 0.27697
1.399854 0.948683 0.948683 0.769288 1.28895
No

No

No

No

40
Height(mean) L1
R1
L2
R2
40.37
40.61
40.37
39.53
39.95
40.37
40.61
40.37
39.95
40.37
40.37
40.61
40.37
39.53
39.95
40.37
40.68
40.37
39.53
40.37
40.79
40.61
40.3
39.95
40.3
40.37
40.68
40.3
39.95
40.37
40.79
40.11
40.3
39.88
40.3
40.72
40.61
40.37
39.53
40.37
40.37
40.61
40.3
39.95
40.3
40.21
40.26
40.37
39.95
40.37
40.473
40.539
40.342
39.775
40.265
40.21
40.11
40.3
39.53
39.95
0.209446 0.192033 0.036148 0.211936 0.169066
1.255691 2.233991 1.161895 1.156012 1.863177
No

No

No

No

No

127
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

No

Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

60
Height(mean) L1
R1
L2
R2
61.35
61.25
60.58
59.74
60.51
60.93
60.89
60.58
60.16
60.51
61
61.32
60.51
59.74
60.58
61
60.89
60.51
60.16
60.58
61
61.32
60.58
59.67
60.51
60.58
61.32
60.51
60.09
60.58
61.42
60.82
60.58
59.74
60.58
61
60.89
60.58
59.74
60.58
61
61.25
60.58
59.74
60.58
61
60.82
60.58
60.09
60.51
61.028
61.077
60.559
59.887
60.552
61.42
61.32
60.58
60.16
60.58
0.229095 0.229495 0.033813 0.20726 0.036148
1.711082 1.058848 0.621059 1.317187 0.774597
No

No

No

No

80
Height(mean) L1
R1
L2
R2
81.11
81.42
80.68
80.26
80.68
81.11
81.35
80.68
79.84
80.68
81.11
81.42
80.68
80.26
80.68
81.04
81.35
80.68
80.26
80.61
81.11
81.42
80.68
80.19
80.68
81.04
81
80.68
80.68
80.61
81.11
81
80.68
80.68
80.68
81.11
81
80.68
80.26
80.61
81.04
81.35
80.68
80.26
80.61
81.04
81.42
80.68
80.26
80.68
81.082
81.273
80.68
80.295
80.652
81.11
81.42
80.68
80.68
80.68
0.036148 0.190849
1.5E-14 0.240797 0.036148
0.774597 0.770243 0.948683 1.598855 0.774597
No

No

No

No

No

128
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

No

Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

100
Height(mean) L1
R1
L2
R2
100.37
100.68
99.95
99.53
100.37
100.37
100.26
99.88
99.46
100.37
99.3
100.68
99.88
99.46
99.95
100.37
100.68
99.95
99.53
100.3
100.37
100.19
99.95
99.88
100.37
100.37
100.61
99.95
99.46
100.3
100.37
100.68
99.95
99.95
100.3
100.72
100.68
99.95
99.46
99.95
100.72
100.68
99.46
99.53
99.88
100.3
135.05
99.88
99.53
100.37
100.326 104.019
99.88
99.579 100.216
100.72
135.05
99.95
99.95
100.37
0.390931 10.90479 0.151217 0.18089 0.202879
1.007852 2.845631 0.46291 2.050973 0.759072
Yes

No

No

No

120
Height(mean) L1
R1
L2
R2
120.68
120.58
120.19
120.26
120.26
120.61
120.58
120.26
119.84
119.84
121.04
120.58
120.26
119.84
119.84
120.68
120.51
120.26
119.84
119.84
120.61
121
120.19
119.77
119.77
120.68
120.58
120.26
119.84
119.84
120.61
120.93
120.19
119.77
119.77
120.68
120.65
120.19
119.77
119.77
120.68
120.58
120.26
119.84
119.84
121.11
121
120.26
119.26
120.26
120.738 120.699 120.232 119.803 119.903
121.11
121
120.26
120.26
120.26
0.181218 0.19536 0.036148 0.238935 0.190849
2.052775 1.540745 0.774597 1.912653 1.870589
No

No

No

No

No

129
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

No

Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

140
Height(mean) L1
R1
L2
R2
141.11
141.42
140.68
140.26
140.68
141.11
141.35
140.68
139.84
140.68
141.11
141.42
140.68
140.26
140.68
141.04
141.35
140.68
140.26
140.61
141.11
141.42
140.68
140.19
140.68
141.04
141
140.68
140.68
140.61
141.11
141
140.68
140.68
140.68
141.11
141
140.68
140.26
140.61
141.04
141.35
140.68
140.26
140.61
141.04
141.42
140.68
140.26
140.68
141.082 141.273
140.68 140.295 140.652
141.11
141.42
140.68
140.68
140.68
0.036148 0.190849
3E-14 0.240797 0.036148
0.774597 0.770243 0.948683 1.598855 0.774597
No

No

No

No

200
Height(mean) L1
R1
L2
R2
201.32
201.98
200.89
200.54
200.89
201.32
201.14
200.89
200.47
200.82
201.32
201.21
200.89
200.89
200.89
201.32
201.21
200.89
200.89
200.82
201.32
201.05
200.89
200.89
200.89
201.32
201.14
200.82
200.47
200.89
201.39
201.98
200.89
200.47
200.89
201.67
201.14
200.82
200.47
200.89
201.32
201.05
200.89
200.47
200.82
201.7
201.21
200.89
200.47
200.89
201.4 201.311 200.876 200.603 200.869
201.7
201.98
200.89
200.89
200.89
0.15195 0.357412 0.029515 0.199223 0.033813
1.97433 1.871789 0.474342 1.440593 0.621059
No

No

No

No

No

130
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

No

Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

220
Height(mean) L1
R1
L2
R2
220.68
220.58
220.19
220.26
220.26
220.61
220.58
220.26
219.84
219.84
221.04
220.58
220.26
219.84
219.84
220.68
220.51
220.26
219.84
219.84
220.61
221
220.19
219.77
219.77
220.68
220.58
220.26
219.84
219.84
220.61
220.93
220.19
219.77
219.77
220.68
220.65
220.19
219.77
219.77
220.68
220.58
220.26
219.84
219.84
221.11
221
220.26
219.26
220.26
220.738 220.699 220.232 219.803 219.903
221.11
221
220.26
220.26
220.26
0.181218 0.19536 0.036148 0.238935 0.190849
2.052775 1.540745 0.774597 1.912653 1.870589
No

No

No

No

240
Height(mean) L1
R1
L2
R2
240.58
240.4
240.09
239.74
240.16
240.58
240.7
240.09
239.67
240.16
240.58
240.4
240.16
239.67
240.16
240.58
240.47
240.16
239.74
240.09
240.58
240.89
240.09
239.74
239.88
240.58
240.62
240.16
239.32
240.16
240.58
240.47
240.16
239.32
239.67
240.58
240.47
240.16
239.74
240.16
240.65
240.4
240.16
239.74
240.15
240.51
240.89
240.16
239.67
240.16
240.58 240.571 240.139 239.635 240.075
240.65
240.89
240.16
239.74
240.16
0.032998 0.194619 0.033813 0.169066 0.167083
2.12132 1.639098 0.621059 0.621059 0.50873
No

No

No

No

No

131
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

No

Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

260
Height(mean) L1
R1
L2
R2
260.58
260.05
260.16
259.74
260.58
261
260.4
260.51
259.74
260.58
261
260.47
260.16
259.74
260.51
260.93
260.4
260.58
259.74
260.58
261
260.4
260.09
259.74
260.51
260.51
260.05
260.16
259.74
260.58
260.51
260.47
260.09
259.74
176.16
261
260.4
260.09
259.74
176.16
261
260.47
260.16
259.74
176.16
261
260.47
260.09
259.67
176.16
260.853 260.358 260.209 259.733 226.798
261
260.47
260.58
259.74
260.58
0.222463 0.16565 0.18089 0.022136 43.58226
0.660783 0.676123 2.050973 0.316228 0.775132
No

No

No

No

280
Height(mean) L1
R1
L2
R2
281.21
280.68
280.3
279.53
280.37
281.21
281.11
280.72
279.95
280.37
281.21
280.68
280.37
279.95
280.37
280.79
280.26
280.3
279.95
280.37
281.14
280.61
280.3
279.95
280.37
281.21
280.61
280.37
279.95
280.3
281.21
280.68
280.79
279.46
280.37
280.79
280.19
280.37
279.95
280.79
281.14
280.68
280.3
279.88
280.3
281.21
280.68
280.79
279.95
280.37
281.112 280.618 280.461 279.852 280.398
281.21
281.11
280.79
279.95
280.79
0.172098 0.25227 0.213981 0.190134 0.140776
0.569442 1.950294 1.537521 0.515425 2.784573
No

No

No

No

Yes

132
Reference
Trials
1
2
3
4
5
6
7
8
9
10
Mean
Furthest
STDEV
G
Significant if G>2.29 ( p <
0.05)

300
Height(mean) L1
R1
L2
R2
300.58
300.05
299.67
299.74
300.09
301
300.05
299.74
299.74
300.16
300.58
299.98
299.67
299.67
300.16
300.51
300.4
299.74
299.74
300.16
300.51
300.4
299.74
299.74
300.16
300.58
300.47
299.67
299.74
300.09
300.16
300.47
299.74
299.74
300.16
300.58
300.47
299.67
299.67
300.16
300.58
300.4
299.74
299.74
300.16
300.58
300.47
299.16
299.32
300.16
300.566 300.316 299.654 299.684 300.146
301
300.47
299.74
299.74
300.16
0.200178 0.202879 0.177025 0.131166 0.029515
2.168074 0.759072 0.485808 0.426941 0.474342
No

No

No

No

No

133

APPENDIX E
Program Listing
//For Accelerometer
// Pin usage, change assignment if you want to const byte spiclk=17; // connect to ADXL CLK const byte spimiso=16; // connect to ADXL DO const byte spimosi=15; // connect to ADXL DI const byte spics=14; // connect to ADXL CS
// Don't forget, connect ADXL VDD-GND to gizDuino/Arduino +3.3V-GND byte xyz[8]; // raw data storage int x,y,z; // x, y, z accelerometer data byte spiread;
#include
int resetPin = 12; // The pin number of the reset pin. int clockPin = 13; // The pin number of the clock pin. int dataPin = 18; // The pin number of the data pin. int busyPin = 19; // The pin number of the busy pin.
Wtv020sd16p wtv020sd16p(resetPin,clockPin,dataPin,busyPin);
//For the Ultrasonic Sensors
#include
#include
#define SONAR_NUM 5 // Number or sensors.
#define MAX_DISTANCE 300 // Max distance in cm.
#define PING_INTERVAL 40 // Milliseconds between pings. unsigned long pingTimer[SONAR_NUM]; // When each pings. double cm[SONAR_NUM]; // Store ping distances. double meancm[5]={0,0,0,0,0}; uint8_t currentSensor = 0; // Which sensor is active. double ref=-1; int ctr=0;
NewPing sonar[SONAR_NUM] = { // Sensor object array.
NewPing(2, 3, MAX_DISTANCE),
NewPing(4, 5, MAX_DISTANCE),
NewPing(6, 7, MAX_DISTANCE),
NewPing(8, 9, MAX_DISTANCE),
NewPing(10,11, MAX_DISTANCE)
};

134

void setup() {
Serial.begin(115200);
init_adxl();
// initialize ADXL345 pingTimer[0] = millis() + 75; // First ping start in ms. wtv020sd16p.reset(); for (uint8_t i = 1; i < SONAR_NUM; i++) pingTimer[i] = pingTimer[i - 1] + PING_INTERVAL;
}
void loop() { for (uint8_t i = 0; i < SONAR_NUM; i++) { cm[i]=sonar[i].ping()/(double)US_ROUNDTRIP_CM; } oneSensorCycle(); } void echoCheck() { // If ping echo, set distance to array. if (sonar[currentSensor].check_timer()) cm[currentSensor] = sonar[currentSensor].ping_result/(double)US_ROUNDTRIP_CM; } void oneSensorCycle() { // Do something with the results. read_xyz(); Serial.print("x = ");
Serial.print(x);
Serial.print(" y = ");
Serial.print(y);
Serial.print(" z = ");
Serial.println(z);
//if(x>-15 && x-15 && yref-meancm[0] && ifEqual(ref,meancm[0])==2 && ifEqual(ref,meancm[4])==2 && ifEqual(ref,meancm[3])==2 && ifEqual(ref,meancm[2])==2) { wtv020sd16p.asyncPlayVoice(0); delay(3000); wtv020sd16p.stopVoice(); Serial.print("Step Up");

136

} else if(ref>0 && ref*0.3>meancm[0]-ref && ifEqual(ref,meancm[0])==3 && ( ifEqual(meancm[0],meancm[1])==1 || ifEqual(meancm[1],meancm[2])==1 || ifEqual(meancm[1],meancm[3])==1)) { wtv020sd16p.asyncPlayVoice(1); delay(3000); wtv020sd16p.stopVoice(); Serial.print("Step Down");
}
else if(ifEqual(ref,meancm[0])==1 || ifEqual(ref,meancm[1])==1 || ifEqual(ref,meancm[2])==1 || ifEqual(ref,meancm[4])==1 )
{
Serial.println("Plain1"); wtv020sd16p.asyncPlayVoice(8); delay(3000); wtv020sd16p.stopVoice(); } else if(ifEqual(meancm[3],meancm[0])==1 && ifEqual(meancm[3],meancm[1])==1
&& ifEqual(meancm[3],meancm[2])==1 && ifEqual(meancm[3],meancm[4])==1 && ref0 && ref*0.3meancm[0])
// {
//
Serial.print("Incline");
//
wtv020sd16p.asyncPlayVoice(0);
//
delay(3000);
//
wtv020sd16p.stopVoice();
// } else 137

{ wtv020sd16p.asyncPlayVoice(6); delay(4000); wtv020sd16p.stopVoice(); Serial.print("No Reading");
}
Serial.println(); for(int i=0;iy && x-x*.025y) return 2; else if(x0){ pinMode(spiclk,OUTPUT); // SPI CLK =0 if((spidat & 0x80)!=0) pinMode(spimosi,INPUT); // MOSI = 1 if MSB =1 else pinMode(spimosi,OUTPUT); // else MOSI = 0
spidat=spidat

Similar Documents