<?xml version="1.0"?> <feed xmlns="http://www.w3.org/2005/Atom" xml:lang="en"> <id>https://en.wikipedia.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=202.142.122.82</id> <title>Wikipedia - User contributions [en]</title> <link rel="self" type="application/atom+xml" href="https://en.wikipedia.org/w/api.php?action=feedcontributions&amp;feedformat=atom&amp;user=202.142.122.82"/> <link rel="alternate" type="text/html" href="https://en.wikipedia.org/wiki/Special:Contributions/202.142.122.82"/> <updated>2024-11-30T21:47:36Z</updated> <subtitle>User contributions</subtitle> <generator>MediaWiki 1.44.0-wmf.5</generator> <entry> <id>https://en.wikipedia.org/w/index.php?title=Gesture_recognition&amp;diff=1259841601</id> <title>Gesture recognition</title> <link rel="alternate" type="text/html" href="https://en.wikipedia.org/w/index.php?title=Gesture_recognition&amp;diff=1259841601"/> <updated>2024-11-27T10:36:29Z</updated> <summary type="html">&lt;p&gt;202.142.122.82: &lt;/p&gt; &lt;hr /&gt; &lt;div&gt;{{short description|Topic in computer science and language technology}}&lt;br /&gt; {{tone|date=November 2016}}&lt;br /&gt; [[File:Gesture Recognition.jpg|thumb|300px|A child's hand location and movement being detected by a gesture recognition [[algorithm]]]]&lt;br /&gt; &lt;br /&gt; '''Yashika Patel''' is an area of research and development in [[computer science]] and [[language technology]] concerned with the recognition and interpretation of human [[gesture]]s. A subdiscipline of [[computer vision]],{{cn|date=September 2023}} it employs mathematical [[algorithm]]s to interpret gestures.&amp;lt;ref name=&amp;quot;Kobylarz&amp;quot;&amp;gt;{{cite journal | last1=Kobylarz | first1=Jhonatan | last2=Bird | first2=Jordan J. | last3=Faria | first3=Diego R. | last4=Ribeiro | first4=Eduardo Parente | last5=Ekárt | first5=Anikó | title=Thumbs up, thumbs down: non-verbal human-robot interaction through real-time EMG classification via inductive and supervised transductive transfer learning | journal=Journal of Ambient Intelligence and Humanized Computing | publisher=Springer Science and Business Media LLC | date=2020-03-07 | volume=11 | issue=12 | pages=6021–6031 | issn=1868-5137 | doi=10.1007/s12652-020-01852-z | doi-access=free | url=https://publications.aston.ac.uk/id/eprint/41366/1/Kobylarz2020_Article_ThumbsUpThumbsDownNon_verbalHu.pdf }}&amp;lt;/ref&amp;gt; &lt;br /&gt; &lt;br /&gt; Gesture recognition offers a path for computers to begin to better understand and interpret [[computer processing of body language|human body language]], previously not possible through [[text user interface|text]] or unenhanced [[graphical user interfaces|graphical]] (GUI) user interfaces.&lt;br /&gt; &lt;br /&gt; Gestures can originate from any bodily motion or state, but commonly originate from the [[face]] or [[hand]]. One area of the field is [[emotion recognition]] derived from facial expressions and hand gestures. 
Users can make simple gestures to control or interact with devices without physically touching them.

Many approaches have been made using cameras and [[computer vision]] algorithms to interpret [[sign language]]; however, the identification and recognition of posture, gait, [[proxemics]], and human behaviors is also the subject of gesture recognition techniques.<ref>Matthias Rehm, Nikolaus Bee, Elisabeth André, [http://mm-werkstatt.informatik.uni-augsburg.de/files/publications/199/wave_like_an_egyptian_final.pdf Wave Like an Egyptian – Accelerometer Based Gesture Recognition for Culture Specific Interactions], British Computer Society, 2007</ref>

==Overview==
[[File:Linux kernel and gaming input-output latency.svg|thumb|300px|Middleware usually processes gesture recognition, then sends the results to the user.]]
Gesture recognition has application in such areas as:{{When|date=September 2019}}
*Automobiles
*Consumer electronics
*Transit
*Gaming
*Handheld devices
*Defense<ref>{{Cite news|url=https://patseer.com/2017/10/patent-landscape-report-hand-gesture-recognition-patseer-pro/|title=Patent Landscape Report Hand Gesture Recognition PatSeer Pro|work=PatSeer|access-date=2017-11-02|language=en-US|archive-date=2019-10-20|archive-url=https://web.archive.org/web/20191020202635/https://patseer.com/2017/10/patent-landscape-report-hand-gesture-recognition-patseer-pro/|url-status=dead}}</ref>
*[[Home automation]]
*[[Automated sign language translation]]<ref>Chai, Xiujuan, et al. "[http://iip.ict.ac.cn/sites/default/files/publication/2013_FG_xjchai_Sign%20Language%20Recognition%20and%20Translation%20with%20Kinect.pdf Sign language recognition and translation with kinect] {{Webarchive|url=https://web.archive.org/web/20210110035036/http://iip.ict.ac.cn/sites/default/files/publication/2013_FG_xjchai_Sign%20Language%20Recognition%20and%20Translation%20with%20Kinect.pdf |date=2021-01-10 }}." IEEE Conf. on AFGR. Vol. 655. 2013.</ref>

Gesture recognition can be conducted with techniques from [[computer vision]] and [[image processing]].<ref>Sultana A, Rajapuspha T (2012), [https://pdfs.semanticscholar.org/2c11/h.pdf "Vision Based Gesture Recognition for Alphabetical Hand Gestures Using the SVM Classifier"]{{Dead link|date=December 2022 |bot=InternetArchiveBot |fix-attempted=yes }}, International Journal of Computer Science & Engineering Technology (IJCSET), 2012</ref>

The literature includes ongoing work in the computer vision field on capturing gestures or more general human [[pose (computer vision)|pose]] and movements by cameras connected to a computer.<ref>Pavlovic, V., Sharma, R. & Huang, T. (1997), [http://www.cs.rutgers.edu/~vladimir/pub/pavlovic97pami.pdf "Visual interpretation of hand gestures for human-computer interaction: A review"], IEEE Transactions on Pattern Analysis and Machine Intelligence, July 1997, Vol. 19(7), pp. 677–695.</ref><ref>R. Cipolla and A. Pentland, [https://books.google.com/books?id=Pe7gG0LxEUIC&q=pentland+cipolla+computer+vision+human+interaction Computer Vision for Human-Machine Interaction], Cambridge University Press, 1998, {{ISBN|978-0-521-62253-0}}</ref><ref>Ying Wu and Thomas S. Huang, [http://reference.kfupm.edu.sa/content/v/i/vision_based_gesture_recognition__a_revi_291732.pdf "Vision-Based Gesture Recognition: A Review"] {{webarchive|url=https://web.archive.org/web/20110825211203/http://reference.kfupm.edu.sa/content/v/i/vision_based_gesture_recognition__a_revi_291732.pdf |date=2011-08-25 }}, In: Gesture-Based Communication in Human-Computer Interaction, Volume 1739 of Springer Lecture Notes in Computer Science, pages 103–115, 1999, {{ISBN|978-3-540-66935-7}}, {{doi|10.1007/3-540-46616-9}}</ref><ref>Alejandro Jaimes and Nicu Sebe, [http://staff.science.uva.nl/~nicu/PUBS/PDF/2005/sebeHCI05.pdf Multimodal human–computer interaction: A survey] {{webarchive|url=https://web.archive.org/web/20110606063605/http://staff.science.uva.nl/~nicu/PUBS/PDF/2005/sebeHCI05.pdf |date=2011-06-06 }}, Computer Vision and Image Understanding, Volume 108, Issues 1–2, October–November 2007, Pages 116–134, Special Issue on Vision for Human-Computer Interaction, {{doi|10.1016/j.cviu.2006.10.019}}</ref>

The term "gesture recognition" has been used to refer more narrowly to non-text-input handwriting symbols, such as inking on a [[graphics tablet]], [[multi-touch]] gestures, and [[mouse gesture]] recognition. This is computer interaction through the drawing of symbols with a pointing device cursor.<ref>Dopertchouk, Oleg; [http://www.gamedev.net/page/resources/_/technical/game-programming/recognition-of-handwritten-gestures-r2039 "Recognition of Handwriting Gestures"], ''gamedev.net'', January 9, 2004</ref><ref>Chen, Shijie; [https://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=5693514&tag=1 "Gesture Recognition Techniques in Handwriting Recognition Application"], ''Frontiers in Handwriting Recognition'', pp. 142–147, November 2010</ref><ref>Balaji, R; Deepu, V; Madhvanath, Sriganesh; Prabhakaran, Jayasree; [http://www.hpl.hp.com/india/documents/papers/GKB_IWFHR10_Final.pdf "Handwritten Gesture Recognition for Gesture Keyboard"] {{webarchive|url=https://web.archive.org/web/20080906122710/http://www.hpl.hp.com/india/documents/papers/GKB_IWFHR10_Final.pdf |date=2008-09-06 }}, ''Hewlett-Packard Laboratories''</ref> [[Pen computing]] expands digital gesture recognition beyond traditional input devices such as keyboards and mice, and reduces the hardware impact of a system.{{how|date=September 2023}}

== Gesture types ==
In computer interfaces, two types of gestures are distinguished:<ref>Dietrich Kammer, Mandy Keck, Georg Freitag, Markus Wacker, [http://vi-c.de/vic/sites/default/files/Taxonomy_and_Overview_of_Multi-touch_Frameworks_Revised.pdf Taxonomy and Overview of Multi-touch Frameworks: Architecture, Scope, and Features] {{webarchive|url=https://web.archive.org/web/20110125014444/http://vi-c.de/vic/sites/default/files/Taxonomy_and_Overview_of_Multi-touch_Frameworks_Revised.pdf |date=2011-01-25 }}</ref> online gestures, which can be regarded as direct manipulations such as scaling and rotating, and offline gestures, which are usually processed after the interaction is finished, e.g. drawing a circle to activate a [[context menu]].
* Offline gestures: gestures that are processed after the user's interaction with the object. An example is a gesture drawn to activate a menu.
* Online gestures: direct-manipulation gestures. They are used, for example, to scale or rotate a tangible object.
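The distinction matters for implementation: online gestures must update the interface continuously while the input is still in progress, whereas offline gestures are classified only once the stroke or motion is complete. The following minimal sketch illustrates that separation; all class, method, and attribute names here are hypothetical and not taken from any particular toolkit.

<syntaxhighlight lang="python">
import math

class GestureHandler:
    """Illustrative separation of online (continuous) and offline (post-hoc) gestures."""

    def __init__(self):
        self.stroke = []  # recorded (x, y) points of the current interaction

    def on_pinch_update(self, scale_delta, target):
        # Online gesture: apply the direct manipulation immediately, while fingers move.
        target.scale *= scale_delta

    def on_pointer_move(self, x, y):
        # Record points so an offline gesture can be classified later.
        self.stroke.append((x, y))

    def on_pointer_up(self, target):
        # Offline gesture: only after the interaction ends is the stroke classified.
        if self._looks_like_circle(self.stroke):
            target.open_context_menu()
        self.stroke = []

    def _looks_like_circle(self, pts, tolerance=0.25):
        # Crude circle test: all points roughly equidistant from the centroid.
        if len(pts) < 8:
            return False
        cx = sum(p[0] for p in pts) / len(pts)
        cy = sum(p[1] for p in pts) / len(pts)
        radii = [math.hypot(x - cx, y - cy) for x, y in pts]
        mean_r = sum(radii) / len(radii)
        if mean_r == 0:
            return False
        return max(abs(r - mean_r) for r in radii) / mean_r < tolerance
</syntaxhighlight>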
== Touchless interface ==
A [[touchless user interface]] (TUI) is an emerging type of technology wherein a device is controlled via body motion and gestures without touching a keyboard, mouse, or screen.<ref>{{Cite web|url=https://www.pcmag.com/encyclopedia/term/62816/touchless-user-interface|title=touchless user interface Definition from PC Magazine Encyclopedia|website=pcmag.com|language=en|access-date=2017-07-28}}</ref>

=== Types of touchless technology ===
There are several devices utilizing this type of interface, such as smartphones, laptops, games, TVs, and music equipment.

One type of touchless interface uses the Bluetooth connectivity of a smartphone to activate a company's visitor management system. This eliminates having to touch an interface, either for convenience or to avoid a potential source of contamination, as during the [[COVID-19]] pandemic.<ref>{{Cite web|title=The emerging need for touchless interaction technologies|url=https://www.researchgate.net/publication/342134613|access-date=2021-06-30|website=ResearchGate|language=en}}</ref>

== Input devices ==
The ability to track a person's movements and determine what gestures they may be performing can be achieved through various tools. Kinetic user interfaces (KUIs) are an emerging type of [[user interfaces|user interface]] that allow users to interact with computing devices through the motion of objects and bodies.{{citation needed|date=June 2021}} Examples of KUIs include [[tangible user interface]]s and motion-aware games such as the [[Wii]] and Microsoft's [[Kinect]], as well as other interactive projects.<ref>{{cite journal|author1=S. Benford|author2=H. Schnadelbach|author3=B. Koleva|author4=B. Gaver|author5=A. Schmidt|author6=A. Boucher|author7=A. Steed|author8=R. Anastasi|author9=C. Greenhalgh|author10=T. Rodden|author11=H. Gellersen|title=Sensible, sensable and desirable: a framework for designing physical interfaces|year=2003|url=http://www.equator.ac.uk/var/uploads/benfordTech2003.pdf|archive-url=https://web.archive.org/web/20060126085052/http://www.equator.ac.uk/var/uploads/benfordTech2003.pdf|archive-date=January 26, 2006|url-status=dead|citeseerx=10.1.1.190.2504}}</ref>

Although a large amount of research has been done in image- and video-based gesture recognition, there is some variation in the tools and environments used between implementations.
* [[Wired glove]]s. These can provide input to the computer about the position and rotation of the hands using magnetic or inertial tracking devices. Furthermore, some gloves can detect finger bending with a high degree of accuracy (5–10 degrees), or even provide haptic feedback to the user, which is a simulation of the sense of touch. The first commercially available hand-tracking glove-type device was the DataGlove,<ref>Thomas G. Zimmerman, Jaron Lanier, Chuck Blanchard, Steve Bryson, and Young Harvill. "[http://netzspannung.org/cat/servlet/CatServlet/$files/228648/DataGlove+CHI+1987.pdf A Hand Gesture Interface Device] {{Webarchive|url=https://web.archive.org/web/20111002031500/http://netzspannung.org/cat/servlet/CatServlet/$files/228648/DataGlove+CHI+1987.pdf |date=2011-10-02 }}." portal.acm.org.</ref> a glove-type device that could detect hand position, movement, and finger bending. It uses fiber-optic cables running down the back of the hand: light pulses are sent through the fibers, and when the fingers are bent, light leaks through small cracks and the loss is registered, giving an approximation of the hand pose.
* Depth-aware cameras. Using specialized cameras such as [[structured light]] or [[time-of-flight camera]]s, one can generate a [[depth map]] of what is being seen through the camera at short range, and use this data to approximate a 3D representation of what is being seen. These can be effective for the detection of hand gestures due to their short-range capabilities<ref>Yang Liu, Yunde Jia, [https://ieeexplore.ieee.org/abstract/document/1410485/ A Robust Hand Tracking and Gesture Recognition Method for Wearable Visual Interfaces and Its Applications], Proceedings of the Third International Conference on Image and Graphics (ICIG'04), 2004</ref> (a minimal segmentation sketch follows this list).
* [[Stereo cameras]]. Using two cameras whose relation to one another is known, a 3D representation can be approximated from the output of the cameras. To obtain the cameras' relation, one can use a positioning reference such as a [[lexian-stripe]] or [[infrared]] emitter.<ref>Kue-Bum Lee, Jung-Hyun Kim, Kwang-Seok Hong, [https://ieeexplore.ieee.org/abstract/document/4297013/ An Implementation of Multi-Modal Game Interface Based on PDAs], Fifth International Conference on Software Engineering Research, Management and Applications, 2007</ref> In combination with direct motion measurement ([[Stereoscopy#Stereoscopic motion measurement (6D-Vision)|6D-Vision]]), gestures can be detected directly.
* Gesture-based controllers. These controllers act as an extension of the body so that when gestures are performed, some of their motion can be conveniently captured by the software. An example of emerging gesture-based [[motion capture]] is skeletal [[hand tracking]], which is being developed for virtual reality and augmented reality applications. An example of this technology is shown by tracking companies [[uSens]] and [[Gestigon]], which allow users to interact with their surroundings without controllers.<ref>{{cite web|title=Gestigon Gesture Tracking - TechCrunch Disrupt|url=https://techcrunch.com/video/gestigon-gesture-tracking/517762030/|website=TechCrunch|access-date=11 October 2016}}</ref><ref>{{cite web|last1=Matney|first1=Lucas|title=uSens shows off new tracking sensors that aim to deliver richer experiences for mobile VR|url=https://techcrunch.com/2016/08/29/usens-unveils-vr-sensor-modules-with-hand-tracking-and-mobile-positional-tracking-tech-baked-in/|website=TechCrunch|date=29 August 2016 |access-date=29 August 2016}}</ref>
* [[Wi-Fi sensing]]<ref>{{Cite journal|last1=Khalili|first1=Abdullah|last2=Soliman|first2=Abdel-Hamid|last3=Asaduzzaman|first3=Md|last4=Griffiths|first4=Alison|date=March 2020|title=Wi-Fi sensing: applications and challenges|journal=The Journal of Engineering|language=en|volume=2020|issue=3|pages=87–97|doi=10.1049/joe.2019.0790|issn=2051-3305|doi-access=free|arxiv=1901.00715}}</ref>
* [[Mouse gesture]] tracking, where the motion of the mouse is correlated to a symbol being drawn by a person's hand, and changes in acceleration over time can be analyzed to represent gestures.<ref>Per Malmestig, Sofie Sundberg, [http://www.tricomsolutions.com/academic_reports.html SignWiiver – implementation of sign language technology] {{webarchive|url=https://web.archive.org/web/20081225190059/http://www.tricomsolutions.com/academic_reports.html |date=2008-12-25 }}</ref><ref>Thomas Schlomer, Benjamin Poppinga, Niels Henze, Susanne Boll, [http://www.wiigee.com/download_files/gesture_recognition_with_a_wii_controller-schloemer_poppinga_henze_boll.pdf Gesture Recognition with a Wii Controller] {{Webarchive|url=https://web.archive.org/web/20130727175427/http://www.wiigee.com/download_files/gesture_recognition_with_a_wii_controller-schloemer_poppinga_henze_boll.pdf |date=2013-07-27 }}, Proceedings of the 2nd International Conference on Tangible and Embedded Interaction, 2008</ref><ref>AiLive Inc., [http://www.ailive.net/papers/LiveMoveWhitePaper_en.pdf LiveMove White Paper] {{Webarchive|url=https://web.archive.org/web/20070713013109/http://www.ailive.net/papers/LiveMoveWhitePaper_en.pdf |date=2007-07-13 }}, 2006</ref> The software also compensates for human tremor and inadvertent movement.<ref name="Wong">''Electronic Design'', September 8, 2011. William Wong. [http://electronicdesign.com/article/embedded/Natural-User-Interface-Employs-Sensor-Integration.aspx Natural User Interface Employs Sensor Integration.]</ref><ref name="Cousins">''Cable & Satellite International'', September/October 2011. Stephen Cousins. [http://www.csimagazine.com/csi/A-view-to-a-thrill.php A view to a thrill.] {{Webarchive|url=https://web.archive.org/web/20120119075325/http://www.csimagazine.com/csi/A-view-to-a-thrill.php |date=2012-01-19 }}</ref><ref name="TechJournal">''TechJournal South'', January 7, 2008. [https://archive.today/20120401173137/http://www.techjournalsouth.com/2008/01/hillcrest-labs-rings-up-25m-d-round/ Hillcrest Labs rings up $25M D round.]</ref>
* Smart light-emitting cubes (such as Percussa AudioCubes). The sensors of these cubes can be used to sense hands and fingers as well as other objects nearby, and can be used to process data. Most applications are in music and sound synthesis,<ref>''Percussa AudioCubes Blog'', October 4, 2012. [http://www.percussa.com/2012/10/04/gestural-control-of-sound-synthesis-featured-question/ Gestural Control in Sound Synthesis.] {{webarchive|url=https://web.archive.org/web/20150910063754/https://www.percussa.com/2012/10/04/gestural-control-of-sound-synthesis-featured-question |date=2015-09-10 }}</ref> but they can be applied to other fields.
* '''Single camera'''. A standard 2D camera can be used for gesture recognition where the resources/environment would not be convenient for other forms of image-based recognition. It was earlier thought that a single camera may not be as effective as stereo or depth-aware cameras, but some companies are challenging this assumption, with software-based gesture recognition technology using a standard 2D camera to detect robust hand gestures.{{Citation needed|date=August 2024}}
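As a rough illustration of how a depth-aware camera simplifies hand detection, the sketch below segments the pixels closest to the sensor from a depth map and takes their centroid as a hand-position estimate. It is a minimal example using only NumPy; the depth values, threshold, array shapes, and the convention that 0 means "no reading" are assumptions, not properties of any particular camera.

<syntaxhighlight lang="python">
import numpy as np

def segment_hand(depth_map, max_distance_mm=600):
    """Return a boolean mask of pixels within max_distance_mm and their centroid.

    depth_map: 2D array of per-pixel distances in millimetres; 0 means "no reading".
    """
    valid = depth_map > 0
    mask = valid & (depth_map < max_distance_mm)   # keep only near pixels
    if not mask.any():
        return mask, None
    ys, xs = np.nonzero(mask)
    centroid = (xs.mean(), ys.mean())              # rough hand position in image coordinates
    return mask, centroid

# Synthetic example: a flat background at 1.5 m with a "hand" blob at 0.4 m.
depth = np.full((240, 320), 1500, dtype=np.int32)
depth[100:140, 150:200] = 400
mask, centroid = segment_hand(depth)
print(mask.sum(), centroid)   # number of hand pixels and their centre
</syntaxhighlight>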
== Algorithms ==
[[File:BigDiagram2.jpg|thumb|400px|Some alternative methods of tracking and analyzing gestures, and their respective relationships]]

Depending on the type of input data, a gesture can be interpreted in different ways. However, most of the techniques rely on key pointers represented in a 3D coordinate system. Based on the relative motion of these, the gesture can be detected with high accuracy, depending on the quality of the input and the algorithm's approach.<ref>{{Cite journal |last1=Mamtaz Alam |last2=Dileep Kumar Tiwari |date=2016 |title=Gesture Recognization & its Applications |url=http://rgdoi.net/10.13140/RG.2.2.28139.54563 |language=en |doi=10.13140/RG.2.2.28139.54563}}</ref>

In order to interpret movements of the body, one has to classify them according to common properties and the message the movements may express. For example, in sign language each gesture represents a word or phrase.
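Detecting a gesture from the relative motion of tracked key points can be as simple as comparing the displacement of one point over a short time window against a threshold. The sketch below classifies a horizontal swipe from a sequence of 3D wrist positions; the coordinate units, thresholds, and function names are illustrative assumptions rather than part of any published algorithm.

<syntaxhighlight lang="python">
import numpy as np

def classify_swipe(wrist_positions, min_travel=0.25, max_off_axis=0.10):
    """Classify a horizontal swipe from a short sequence of 3D wrist positions.

    wrist_positions: (N, 3) array of (x, y, z) coordinates in metres, ordered in time.
    Returns "swipe_left", "swipe_right", or None.
    """
    pts = np.asarray(wrist_positions, dtype=float)
    dx, dy, dz = pts[-1] - pts[0]                    # net motion over the window
    if abs(dx) >= min_travel and abs(dy) <= max_off_axis and abs(dz) <= max_off_axis:
        return "swipe_right" if dx > 0 else "swipe_left"
    return None

# Example: the wrist moves 0.3 m to the right with little vertical or depth change.
trajectory = [(0.0, 1.0, 2.0), (0.1, 1.01, 2.0), (0.2, 1.0, 2.02), (0.3, 1.0, 2.0)]
print(classify_swipe(trajectory))   # -> "swipe_right"
</syntaxhighlight>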
Some literature distinguishes two approaches to gesture recognition: a 3D-model-based one and an appearance-based one.<ref>Vladimir I. Pavlovic, Rajeev Sharma, Thomas S. Huang, [http://www.cs.rutgers.edu/~vladimir/pub/pavlovic97pami.pdf Visual Interpretation of Hand Gestures for Human-Computer Interaction: A Review], IEEE Transactions on Pattern Analysis and Machine Intelligence, 1997</ref> The former makes use of 3D information on key elements of the body parts in order to obtain several important parameters, like palm position or joint angles. Approaches derived from it, such as volumetric models, have proven to be very intensive in terms of computational power and require further technological developments in order to be implemented for real-time analysis. Alternatively, appearance-based systems use images or videos for direct interpretation. Such models are easier to process, but usually lack the generality required for human–computer interaction.

=== 3D model-based algorithms ===
[[File:Volumetric-hands.jpg|thumb|A real hand (left) is interpreted as a collection of vertices and lines in the 3D mesh version (right), and the software uses their relative position and interaction in order to infer the gesture.]]

The 3D model approach can use volumetric or skeletal models, or even a combination of the two. Volumetric approaches have been heavily used in the computer animation industry and for computer vision purposes. The models are generally created from complicated 3D surfaces, like NURBS or polygon meshes.

The drawback of this method is that it is very computationally intensive, and systems for real-time analysis are still to be developed. For the moment, a more practical approach is to map simple primitive objects to the person's most important body parts (for example, cylinders for the arms and neck, a sphere for the head) and analyze the way these interact with each other. Furthermore, some abstract structures like [[Superquadrics|super-quadrics]] and [[Cylinder (geometry)|generalized cylinders]] may be even more suitable for approximating the body parts.
{{-}}

=== Skeletal-based algorithms ===
[[File:Skeletal-hand.jpg|thumb|The skeletal version (right) effectively models the hand (left). This has fewer parameters than the volumetric version, and it is easier to compute, making it suitable for real-time gesture analysis systems.]]

Instead of using intensive processing of the 3D models and dealing with a lot of parameters, one can just use a simplified version of joint angle parameters along with segment lengths. This is known as a skeletal representation of the body, where a virtual skeleton of the person is computed and parts of the body are mapped to certain segments. The analysis here is done using the position and orientation of these segments and the relation between each one of them (for example, the angle between the joints and the relative position or orientation).

Advantages of using skeletal models:
* Algorithms are faster because only key parameters are analyzed.
* Pattern matching against a template database is possible.
* Using key points allows the detection program to focus on the significant parts of the body.
{{-}}

=== Appearance-based models ===
[[File:Appearance hands.jpg|thumb|These binary silhouette (left) or contour (right) images represent typical input for appearance-based algorithms. They are compared with different hand templates, and if they match, the corresponding gesture is inferred.]]

Appearance-based models no longer use a spatial representation of the body, instead deriving their parameters directly from the images or videos using a template database. Some are based on deformable 2D templates of the human body parts, particularly the hands. Deformable templates are sets of points on the outline of an object, used as interpolation nodes for the object's outline approximation. One of the simplest interpolation functions is linear, which computes a shape from an average point set, point variability parameters, and an external deformation. These template-based models are mostly used for hand tracking, but could also be used for simple gesture classification.
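Read literally, the linear case amounts to adding weighted variability terms and an external deformation to a mean outline. The following sketch expresses that idea directly; the mean shape, variation modes, and weights are made-up illustrative values, not data from a real hand model.

<syntaxhighlight lang="python">
import numpy as np

def deform_template(mean_shape, variation_modes, weights, external_deformation):
    """Linear deformable template: mean outline plus weighted variability plus a global offset.

    mean_shape: (N, 2) array of outline points (the "average shape").
    variation_modes: (K, N, 2) array; each mode describes one way the outline tends to vary.
    weights: (K,) array of per-mode weights (the "point variability parameters").
    external_deformation: (2,) translation applied to the whole outline.
    """
    shape = mean_shape + np.tensordot(weights, variation_modes, axes=1)
    return shape + external_deformation

# Toy example: a 4-point outline and one variation mode that widens it horizontally.
mean = np.array([[0.0, 0.0], [1.0, 0.0], [1.0, 1.0], [0.0, 1.0]])
modes = np.array([[[-0.1, 0.0], [0.1, 0.0], [0.1, 0.0], [-0.1, 0.0]]])
print(deform_template(mean, modes, weights=np.array([2.0]),
                      external_deformation=np.array([5.0, 0.0])))
</syntaxhighlight>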
The second approach in gesture detection using appearance-based models uses image sequences as gesture templates. Parameters for this method are either the images themselves or certain features derived from them. Most of the time, only one (monoscopic) or two (stereoscopic) views are used.
{{-}}

=== Electromyography-based models ===
[[Electromyography]] (EMG) concerns the study of electrical signals produced by muscles in the body. Through classification of data received from the arm muscles, it is possible to classify the action and thus input the gesture to external software.<ref name="Kobylarz"/> Consumer EMG devices allow for non-invasive approaches such as an arm or leg band and connect via Bluetooth. Because of this, EMG has an advantage over visual methods: the user does not need to face a camera to give input, enabling more freedom of movement.
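In practice, EMG-based gesture input usually means computing simple features over short windows of the muscle signal and feeding them to a classifier. The sketch below, using scikit-learn, is a minimal illustration of such a pipeline; the window length, the chosen features, and the random training data are assumptions for demonstration only.

<syntaxhighlight lang="python">
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(emg_window):
    """Common time-domain EMG features for one multi-channel window.

    emg_window: (samples, channels) array of raw EMG readings.
    Returns a 1D feature vector: mean absolute value and waveform length per channel.
    """
    mav = np.mean(np.abs(emg_window), axis=0)                 # mean absolute value
    wl = np.sum(np.abs(np.diff(emg_window, axis=0)), axis=0)  # waveform length
    return np.concatenate([mav, wl])

# Synthetic training data: 200 windows of 8-channel EMG, labelled with 3 gestures.
rng = np.random.default_rng(0)
windows = rng.normal(size=(200, 100, 8))
labels = rng.integers(0, 3, size=200)  # e.g. 0 = rest, 1 = fist, 2 = open hand

X = np.array([window_features(w) for w in windows])
clf = RandomForestClassifier(n_estimators=50, random_state=0).fit(X, labels)

# Classify a new window as it arrives from the armband.
new_window = rng.normal(size=(100, 8))
print(clf.predict([window_features(new_window)]))
</syntaxhighlight>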
== Challenges ==
There are many challenges associated with the accuracy and usefulness of gesture recognition and of software designed to implement it. For image-based gesture recognition, there are limitations on the equipment used and on [[image noise]]. Images or video may not be under consistent lighting or taken in the same location. Items in the background or distinct features of the users may make recognition more difficult.

The variety of implementations for image-based gesture recognition may also cause issues with the viability of the technology for general usage. For example, an algorithm calibrated for one camera may not work for a different camera. The amount of background noise also causes tracking and recognition difficulties, especially when occlusions (partial and full) occur. Furthermore, the distance from the camera, and the camera's resolution and quality, also cause variations in recognition accuracy.

In order to capture human gestures with visual sensors, robust computer vision methods are also required, for example for hand tracking and hand posture recognition<ref>Ivan Laptev and Tony Lindeberg, [http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A440686&dswid=-2803 "Tracking of Multi-state Hand Models Using Particle Filtering and a Hierarchy of Multi-scale Image Features"], Proceedings Scale-Space and Morphology in Computer Vision, Volume 2106 of Springer Lecture Notes in Computer Science, pages 63–74, Vancouver, BC, 1999. {{ISBN|978-3-540-42317-1}}, {{doi|10.1007/3-540-47778-0}}</ref><ref>{{cite conference | first1 = Christian | last1 = von Hardenberg | first2 = François | last2 = Bérard | citeseerx = 10.1.1.23.4541 | title = Bare-hand human-computer interaction | series = ACM International Conference Proceeding Series | volume = 15 | book-title = Proceedings of the 2001 workshop on Perceptive user interfaces | location = Orlando, Florida | pages = 1–8 | year = 2001 }}</ref><ref>Lars Bretzner, Ivan Laptev, Tony Lindeberg, [http://kth.diva-portal.org/smash/record.jsf?pid=diva2%3A462620&dswid=-4589 "Hand gesture recognition using multi-scale colour features, hierarchical models and particle filtering"], Proceedings of the Fifth IEEE International Conference on Automatic Face and Gesture Recognition, Washington, DC, USA, 20–21 May 2002, pages 423–428. {{ISBN|0-7695-1602-5}}, {{doi|10.1109/AFGR.2002.1004190}}</ref><ref>[[Domitilla Del Vecchio]], Richard M. Murray, Pietro Perona, [http://www.cds.caltech.edu/~ddomitilla/reports/AutomaticaReport.pdf "Decomposition of human motion into dynamics-based primitives with application to drawing tasks"] {{webarchive|url=https://web.archive.org/web/20100202211735/http://www.cds.caltech.edu/~ddomitilla/reports/AutomaticaReport.pdf |date=2010-02-02 }}, Automatica, Volume 39, Issue 12, December 2003, Pages 2085–2098, {{doi|10.1016/S0005-1098(03)00250-4}}.</ref><ref>Thomas B. Moeslund and Lau Nørgaard, [http://www.vision.auc.dk/~tbm/Publications/gesture-hci.pdf "A Brief Overview of Hand Gestures used in Wearable Human Computer Interfaces"] {{webarchive|url=https://web.archive.org/web/20110719120644/http://www.vision.auc.dk/~tbm/Publications/gesture-hci.pdf |date=2011-07-19 }}, Technical report: CVMT 03-02, {{ISSN|1601-3646}}, Laboratory of Computer Vision and Media Technology, Aalborg University, Denmark.</ref><ref>M. Kolsch and M. Turk, [http://ilab.cs.ucsb.edu/projects/mathias/KolschTurk2004Fast2DHandTrackingWithFlocksOfFeatures.pdf "Fast 2D Hand Tracking with Flocks of Features and Multi-Cue Integration"] {{webarchive|url=https://web.archive.org/web/20080821111627/http://ilab.cs.ucsb.edu/projects/mathias/KolschTurk2004Fast2DHandTrackingWithFlocksOfFeatures.pdf |date=2008-08-21 }}, CVPRW '04, Proceedings Computer Vision and Pattern Recognition Workshop, May 27–June 2, 2004, {{doi|10.1109/CVPR.2004.71}}</ref><ref>Xia Liu, K. Fujimura, "Hand gesture recognition using depth data", Proceedings of the Sixth IEEE International Conference on Automatic Face and Gesture Recognition, May 17–19, 2004, pages 529–534, {{ISBN|0-7695-2122-3}}, {{doi|10.1109/AFGR.2004.1301587}}.</ref><ref>Stenger B, Thayananthan A, Torr PH, Cipolla R: [https://wayback.archive-it.org/all/20080221223332/http://www.bmva.ac.uk/sullivan/prizethesis-2005.pdf "Model-based hand tracking using a hierarchical Bayesian filter"], IEEE Transactions on Pattern Analysis and Machine Intelligence, 28(9):1372–84, Sep 2006.</ref><ref>A Erol, G Bebis, M Nicolescu, RD Boyle, X Twombly, [http://www.cse.unr.edu/~bebis/handposerev.pdf "Vision-based hand pose estimation: A review"], Computer Vision and Image Understanding, Volume 108, Issues 1–2, October–November 2007, Pages 52–73, Special Issue on Vision for Human-Computer Interaction, {{doi|10.1016/j.cviu.2006.10.012}}.</ref> or for capturing movements of the head, facial expressions or gaze direction.{{too many citations|sentence|date=September 2023}}

=== Social acceptability ===
One significant challenge to the adoption of gesture interfaces on consumer mobile devices such as smartphones and smartwatches stems from the social acceptability implications of gestural input. While gestures can facilitate fast and accurate input on many novel form-factor computers, their adoption and usefulness are often limited by social factors rather than technical ones. To this end, designers of gesture input methods may seek to balance both technical considerations and user willingness to perform gestures in different social contexts.<ref name=":0">{{Cite book|last1=Rico|first1=Julie|last2=Brewster|first2=Stephen|title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems |chapter=Usable gestures for mobile interfaces |s2cid=16118067|date=2010|series=CHI '10|location=New York, NY, USA|publisher=ACM|pages=887–896|doi=10.1145/1753326.1753458|isbn=9781605589299}}</ref> In addition, different device hardware and sensing mechanisms support different kinds of recognizable gestures.

==== Mobile device ====
Gesture interfaces on [[mobile device|mobile]] and small [[form-factor]] devices are often supported by the presence of motion sensors such as [[inertial measurement unit]]s (IMUs). On these devices, gesture sensing relies on users performing movement-based gestures capable of being recognized by these motion sensors. This can make capturing signals from subtle or low-motion gestures challenging, as they may become difficult to distinguish from natural movements or noise.
Through a survey and study of gesture usability, researchers found that gestures that incorporate subtle movement, appear similar to existing technology, look or feel similar to everyday actions, and are enjoyable were more likely to be accepted by users, while gestures that look strange, are uncomfortable to perform, interfere with communication, or involve uncommon movement were more likely to be rejected.<ref name=":0" /> The social acceptability of mobile device gestures relies heavily on the naturalness of the gesture and the social context.

==== On-body and wearable computers ====
[[Wearable computer]]s typically differ from traditional [[mobile device]]s in that their usage and interaction take place on the user's body. In these contexts, gesture interfaces may become preferred over traditional input methods, as their small size renders [[Touchscreen|touch-screens]] or [[Computer keyboard|keyboards]] less appealing. Nevertheless, they share many of the same social acceptability obstacles as mobile devices when it comes to gestural interaction. However, the possibility of wearable computers being hidden from sight or integrated into other everyday objects, such as clothing, allows gesture input to mimic common clothing interactions, such as adjusting a shirt collar or rubbing one's front pant pocket.<ref name="Walter 2013">{{Cite book|last1=Walter|first1=Robert|last2=Bailly|first2=Gilles|last3=Müller|first3=Jörg|title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems |chapter=StrikeAPose |s2cid=2041073|date=2013|pages=841–850|location=New York, New York, USA|publisher=ACM Press|doi=10.1145/2470654.2470774|isbn=9781450318990|chapter-url=https://eref.uni-bayreuth.de/42090/}}</ref><ref name=":1">{{Cite book|last1=Profita|first1=Halley P.|last2=Clawson|first2=James|last3=Gilliland|first3=Scott|last4=Zeagler|first4=Clint|last5=Starner|first5=Thad|last6=Budd|first6=Jim|last7=Do|first7=Ellen Yi-Luen|title=Proceedings of the 2013 International Symposium on Wearable Computers |chapter=Don't mind me touching my wrist |s2cid=3236927|date=2013|series=ISWC '13|location=New York, NY, USA|publisher=ACM|pages=89–96|doi=10.1145/2493988.2494331|isbn=9781450321273}}</ref> A major consideration for wearable computer interaction is the location for device placement and interaction.
A study exploring third-party attitudes towards wearable device interaction, conducted across the United States and South Korea, found differences in the perception of wearable computing use by males and females, in part due to different areas of the body being considered socially sensitive.<ref name=":1" /> Another study investigating the social acceptability of on-body projected interfaces found similar results, with both studies labelling areas around the waist, groin, and upper body (for women) as least acceptable, while areas around the forearm and wrist were most acceptable.<ref>{{Cite book|last1=Harrison|first1=Chris|last2=Faste|first2=Haakon|title=Proceedings of the 2014 conference on Designing interactive systems |chapter=Implications of location and touch for on-body projected interfaces |s2cid=1121501|date=2014|series=DIS '14|location=New York, NY, USA|publisher=ACM|pages=543–552|doi=10.1145/2598510.2598587|isbn=9781450329026}}</ref>

==== Public installations ====
[[Interactive kiosk|Public installations]], such as interactive public displays, allow access to information and display interactive media in public settings such as museums, galleries, and theaters.<ref name=":2">{{Cite book|last1=Reeves|first1=Stuart|last2=Benford|first2=Steve|last3=O'Malley|first3=Claire|last4=Fraser|first4=Mike|title=Proceedings of the SIGCHI Conference on Human Factors in Computing Systems |chapter=Designing the spectator experience |s2cid=5739231|date=2005|pages=741–750|location=New York, New York, USA|publisher=ACM Press|doi=10.1145/1054972.1055074|isbn=978-1581139983|url=https://nottingham-repository.worktribe.com/file/1020600/1/p133-reeves.pdf |chapter-url=http://eprints.nottingham.ac.uk/252/1/p133-reeves.pdf}}</ref> While touch screens are a frequent form of input for public displays, gesture interfaces provide additional benefits such as improved hygiene, interaction from a distance, and improved discoverability, and may favor performative interaction.<ref name="Walter 2013"/> An important consideration for gestural interaction with public displays is the high probability or expectation of a spectator audience.<ref name=":2" />

=== Fatigue ===
Arm fatigue was a side effect of vertically oriented touch-screen or light-pen use. In periods of prolonged use, users' arms began to feel fatigue and/or discomfort. This effect contributed to the decline of touch-screen input despite its initial popularity in the 1980s.<ref>{{cite web|url=https://www.zdnet.com/article/windows-7-no-arm-in-it/|title=Windows 7? No arm in it|author=Rupert Goodwins|work=ZDNet}}</ref><ref>{{cite web|url=http://www.catb.org/jargon/html/G/gorilla-arm.html|title=gorilla arm|work=catb.org}}</ref>

In order to measure the arm fatigue side effect, researchers developed a technique called Consumed Endurance.<ref>Hincapié-Ramos, J.D., Guo, X., Moghadasian, P. and Irani, P. 2014. [http://hci.cs.umanitoba.ca/projects-and-research/details/ce "Consumed Endurance: A Metric to Quantify Arm Fatigue of Mid-Air Interactions"]. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI '14). ACM, New York, NY, USA, 1063–1072. {{doi|10.1145/2556288.2557130}}</ref><ref>Hincapié-Ramos, J.D., Guo, X., and Irani, P. 2014. [http://hci.cs.umanitoba.ca/projects-and-research/details/ce "The Consumed Endurance Workbench: A Tool to Assess Arm Fatigue During Mid-Air Interactions"]. In Proceedings of the 2014 Companion Publication on Designing Interactive Systems (DIS Companion '14). ACM, New York, NY, USA, 109–112. {{doi|10.1145/2598784.2602795}}</ref>
==See also==
* [[Activity recognition]]
* [[Articulated body pose estimation]]
* [[Automotive head unit]]
* [[Computer processing of body language]]
* [[3D pose estimation]]
* [[Pointing device gesture]]

== References ==
{{Reflist|colwidth=30em}}

== External links ==
* [http://ruetersward.com/biblio.html Annotated bibliography of references to gesture and pen computing]
* [https://www.youtube.com/watch?v=4xnqKdWMa_8 Notes on the History of Pen-based Computing (YouTube)]
* [http://www.bruceongames.com/2007/10/02/the-future-it-is-all-a-gesture/ The future, it is all a Gesture] – Gesture interfaces and video gaming
* [https://web.archive.org/web/20111006003521/http://inition.co.uk/case-study/ford-c-max-campaign-ar-gestural-interface Ford's Gesturally Interactive Advert] – Gestures used to interact with digital signage
* [https://www.completegate.com/2017030265/blog/3d-hand-tracking#.WNlR_pxDqeQ.link 3D Hand Tracking] – A Literature Survey
<!--Interwikies-->

{{Nonverbal communication}}

<!--Categories-->
[[Category:Gesture recognition| ]]
[[Category:Applications of computer vision]]
[[Category:Virtual reality]]
[[Category:Object recognition and categorization]]
[[Category:User interface techniques]]
[[Category:History of human–computer interaction]]