Gesture Recognition Technology

Submitted By poojavarun
Words 1289
Pages 6
1. Introduction
Processing speeds of computers have increased dramatically, and computers have advanced to the point where they can assist humans in complex tasks. However, the major bottleneck is still the input: users spend the majority of their time entering information, so the total time required to perform any job still depends on the time taken to provide input to the machine.
Automating the input process could therefore come to the rescue. Recognising this potential, a great deal of progress has been made in gesture recognition technology. Gestures can now be used as a form of input, helping to bridge the gap between humans and machines. With the advent of this technology, the need for physical touch is eliminated: machines can interpret gestures and act on them. Using gesture recognition, it becomes possible to point a finger at the computer screen and have the cursor move accordingly, potentially making input devices such as the mouse and keyboard redundant.
This report presents the details of gesture recognition technology, highlights the fields where it can be applied, explains how the technology works, and discusses the major solution providers along with their product offerings. The report concludes by mentioning possible issues that could limit the widespread adoption of the technology.
2. Gesture Recognition Technology
Gesture recognition technology allows users to interact with their devices using simple, natural hand gestures. It utilizes advanced image processing and machine-vision algorithms to track a user's hand movements and convert them into commands. It can detect natural and intuitive gestures such as GRAB (closing of the hand), which supports all mouse functionality, including left-click, double-click, right-click and drag & drop. The algorithms can also detect and track two hands simultaneously, enabling natural gestures such as zoom and rotate. These commands are then used to control the functions and applications of the device. The technology is independent of the underlying processor and camera hardware and produces high-quality gesture recognition using as little as a VGA-resolution camera.
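The pipeline described above, tracking a hand and translating recognised gestures into device commands, can be sketched as a simple dispatch table. This is a minimal illustration only: the gesture labels and the mapping below are hypothetical stand-ins for the output of the image-processing stage, not the API of any real product.

```python
# Minimal sketch of the gesture-to-command stage described above.
# In a real system the gesture labels would come from camera frames
# via image processing; here they are assumed as plain strings.

# Hypothetical mapping from recognised gestures to mouse-style commands.
GESTURE_COMMANDS = {
    "GRAB": "mouse_left_click",
    "GRAB_HOLD": "drag_start",
    "RELEASE": "drag_end",
    "TWO_HAND_SPREAD": "zoom_in",
    "TWO_HAND_PINCH": "zoom_out",
    "TWO_HAND_ROTATE": "rotate",
}

def dispatch(gesture: str) -> str:
    """Translate a recognised gesture label into a device command."""
    # Unrecognised gestures fall through to a harmless no-op.
    return GESTURE_COMMANDS.get(gesture, "no_op")

# A stream of recognised gestures becomes a stream of commands.
events = ["GRAB", "TWO_HAND_SPREAD", "WAVE"]
commands = [dispatch(g) for g in events]
print(commands)  # → ['mouse_left_click', 'zoom_in', 'no_op']
```

Keeping the vision stage and the command stage separate like this is what lets the same recognition engine be integrated at the chipset, operating-system, or application level.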
Moreover, the technology is designed for embedded platforms. It is optimized to run with minimal CPU and power consumption and supports challenging user environments with poor as well as direct lighting conditions. It can be integrated at various levels of the device: at the chipset level, in the operating system, as part of the camera module, or at the application level.
The screenshots below show the various gestures interpreted by machines such as the system monitor, the smart TV, the smartphone and the tablet.
TV and Set-top Boxes:

The technology can be utilized to control TV functions and features, where gesture recognition takes the form of fingertip tracking, among others. This allows for virtual remote-control functions from afar, even in a dynamic multi-person environment, using face detection capabilities.

Portable Computers:
The technology can be used in computers as well. For instance, one may easily control the volume while watching a movie, skip to the next track when playing music, or scroll to the next page of an e-book, all with natural hand gestures. Likewise, business users can flip through presentation slides without shifting their focus from the audience.

In-car Infotainment System:

It also facilitates both in-car and in-flight infotainment systems. Rather than searching for a specific key or pressing a particular point on the touch screen, the user can control the device at ease, without distraction, using hand gestures. While driving, the user can answer a call, switch navigation viewing modes, change radio channels, and more.
Mobile Phones:

Gesture recognition technology in mobile phones presents capabilities that allow users to control features and applications without a single touch. For instance, a simple hand gesture above the device can silence incoming calls during a meeting, scroll between photos in the gallery, or even play mobile games using real hand gestures.
3. Understanding Gestures
The primary goal of gesture recognition research is to create a system that can identify specific human gestures and use them to convey information or control devices. The first step in considering gesture-based interaction with computers is to understand the role of gesture in human-to-human communication.
A gesture is a form of non-verbal communication in which visible bodily actions communicate particular messages, either in place of speech or together and in parallel with words. Gestures include movement of the hands, face, or other parts of the body.
A broad classification of gestures has been laid down by Cadoz (1994):
a. Semiotic: those used to communicate meaningful information.
b. Ergotic: those used to manipulate the physical world and create artifacts.
c. Epistemic: those used to learn from the environment through tactile or haptic exploration.
The report focuses mainly on communicating with the computer, so the emphasis is on ‘Semiotic’ gestures.
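Cadoz's three categories can be captured as a small enumeration, which is convenient when a system must decide which gestures to route to the recognition engine. This is purely illustrative; the example gestures chosen below are our own assumptions, not drawn from Cadoz.

```python
from enum import Enum

class GestureClass(Enum):
    """Cadoz (1994) classification of gestures."""
    SEMIOTIC = "communicate meaningful information"
    ERGOTIC = "manipulate the physical world"
    EPISTEMIC = "learn from the environment by touch"

# Illustrative examples for each category (our own choices).
EXAMPLES = {
    "thumbs-up": GestureClass.SEMIOTIC,
    "turning a doorknob": GestureClass.ERGOTIC,
    "running a finger over fabric": GestureClass.EPISTEMIC,
}

# Only semiotic gestures carry messages a computer should interpret.
semiotic = [g for g, c in EXAMPLES.items() if c is GestureClass.SEMIOTIC]
print(semiotic)  # → ['thumbs-up']
```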
Application domain Classification
Gestures can be categorized into the following application domain classifications:
a. Pre-emptive Gestures
A pre-emptive natural hand gesture occurs when the hand moves towards a specific control (device/appliance) and the detection of the approaching hand is used to pre-empt the operator's intent to operate that control.
An example of such a gesture is the operation of an interior light: as the hand is detected approaching the light switch, the light switches on; if the hand is detected approaching the switch again, it switches off. Thus the hand movement to and from the device being controlled can be used as a pre-emptive gesture.
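The light-switch example amounts to a toggle driven by approach events. A minimal sketch, with the hand-approach detection itself (from a camera or proximity sensor) abstracted into a boolean:

```python
class PreemptiveLightControl:
    """Toggles a light each time an approaching hand is detected.

    Approach detection is assumed to be provided externally
    (e.g. by a vision pipeline) as the boolean passed to update().
    """

    def __init__(self):
        self.light_on = False
        self._hand_near = False

    def update(self, hand_approaching: bool) -> bool:
        # Toggle only on the transition from "no hand" to "hand
        # approaching", so a hand held in place does not make the
        # light flicker on every frame.
        if hand_approaching and not self._hand_near:
            self.light_on = not self.light_on
        self._hand_near = hand_approaching
        return self.light_on

ctrl = PreemptiveLightControl()
ctrl.update(True)   # hand approaches -> light switches on
ctrl.update(False)  # hand withdraws -> light stays on
ctrl.update(True)   # hand approaches again -> light switches off
print(ctrl.light_on)  # → False
```

The edge-triggered check is the important design choice: a pre-emptive gesture reacts to the approach itself, not to the hand's continued presence.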

b. Function Associated Gestures
Function-associated gestures use the natural action of the arm, hand, or another body part to provide a cognitive link to the function being controlled. For example, moving the arm in circles pivoted about the elbow towards a fan could signify the operator's wish to switch the fan on. Such gestures have an action that can be associated with a particular function.

c. Context Sensitive Gestures
Context-sensitive gestures are natural hand gestures used to respond to operator prompts or automatic events. Possible context-sensitive gestures to indicate yes/no or accept/reject are a thumbs-up and a thumbs-down. These could be used to answer or reject an incoming phone call, an incoming voice message or an incoming SMS text message.
d. Global Shortcut Gestures
Global shortcut gestures are natural symbolic gestures that can be used at any time; "natural" here refers to hand gestures typically used in human-to-human communication. Gestures are expected to be chosen so that the user can easily link each gesture to the function being controlled. Possible applications include fairly frequently used controls that would otherwise impose an unwanted visual workload, such as dialling home or dialling work.

e. Natural Dialogue Gestures
Natural dialogue hand gestures use gestures from human-to-human communication to initiate a gesture dialogue with the vehicle. Typically this involves two gestures, although only one gesture is used at any given time. For example, if a person fanned his hand in front of his face, the gesture system could detect this and interpret that he is too hot and would like to cool down.
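The two-step dialogue described above, an opening gesture followed by a reply, can be sketched as a small state machine. This is a hypothetical illustration; the gesture names and actions are assumptions chosen to match the fanning example.

```python
class GestureDialogue:
    """Two-step gesture dialogue: an opening gesture, then a reply.

    Gesture names ("FAN_FACE", "THUMBS_UP", ...) are illustrative
    stand-ins for labels a recognition engine would emit.
    """

    # Opening gesture -> (system prompt, {reply gesture -> action})
    DIALOGUES = {
        "FAN_FACE": ("Lower cabin temperature?", {
            "THUMBS_UP": "ac_cooler",
            "THUMBS_DOWN": "no_change",
        }),
    }

    def __init__(self):
        self._pending = None  # reply options awaiting a second gesture

    def on_gesture(self, gesture: str) -> str:
        if self._pending is not None:
            # Second gesture of the dialogue: resolve it to an action.
            action = self._pending.get(gesture, "ignored")
            self._pending = None
            return action
        if gesture in self.DIALOGUES:
            # Opening gesture: prompt the user and await a reply.
            prompt, replies = self.DIALOGUES[gesture]
            self._pending = replies
            return f"prompt: {prompt}"
        return "ignored"

d = GestureDialogue()
print(d.on_gesture("FAN_FACE"))   # system prompts the user
print(d.on_gesture("THUMBS_UP"))  # → ac_cooler
```

Only one gesture is processed at a time, matching the description above: the pending state simply records which replies are currently meaningful.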

4. Technology Used
5. Applications in Various Fields
6. Issues
