C-Face: Continuously Reconstructing Facial Expressions by Deep Learning Contours of the Face with Ear-mounted Miniature Cameras

research-article

Authors: Tuochao Chen, Benjamin Steeper, Kinan Alsheikh, Songyun Tao, François Guimbretière, Cheng Zhang

UIST '20: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology

Pages 112 - 125

Published: 20 October 2020

Metrics

  • Total Citations: 57
  • Total Downloads: 910
  • Downloads (last 12 months): 153
  • Downloads (last 6 weeks): 16

Reflects downloads up to 23 Oct 2024.


Abstract


C-Face (Contour-Face) is an ear-mounted wearable sensing technology that uses two miniature cameras to continuously reconstruct facial expressions by deep learning contours of the face. When facial muscles move, the contours of the face change from the point of view of the ear-mounted cameras. These subtle changes are fed into a deep learning model which continuously outputs 42 facial feature points representing the shapes and positions of the mouth, eyes, and eyebrows. To evaluate C-Face, we embedded our technology into headphones and earphones and conducted a user study with nine participants, comparing the output of our system to the feature points output by a state-of-the-art computer vision library (Dlib) from a front-facing camera. We found that the mean error across all 42 feature points was 0.77 mm for earphones and 0.74 mm for headphones. The mean error for the 20 major feature points capturing the most active areas of the face was 1.43 mm for earphones and 1.39 mm for headphones. The ability to continuously reconstruct facial expressions introduces new opportunities in a variety of applications. As a demonstration, we implemented and evaluated C-Face for two applications: facial expression detection (outputting emojis) and silent speech recognition. We further discuss the opportunities and challenges of deploying C-Face in real-world applications.
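
The evaluation described above compares C-Face's 42 predicted feature points against landmarks extracted by Dlib from a front-facing camera. Below is a minimal sketch of how such a comparison could be set up, assuming the 42 points correspond to the eyebrow, eye, and mouth points of Dlib's standard 68-landmark model (10 + 12 + 20 = 42, matching the counts in the abstract) and assuming a pixel-to-millimetre calibration factor that the abstract does not specify:

    import numpy as np
    import dlib

    # Reference landmarks from the front-facing camera, via Dlib's standard
    # 68-point shape predictor (model file distributed at dlib.net).
    detector = dlib.get_frontal_face_detector()
    predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")

    # Assumed 42-point subset of the 68-point layout:
    # eyebrows 17-26 (10 pts), eyes 36-47 (12 pts), mouth 48-67 (20 pts).
    SUBSET = list(range(17, 27)) + list(range(36, 48)) + list(range(48, 68))

    def dlib_landmarks(gray_frame):
        """Return the 42 reference points for one face as a (42, 2) array."""
        face = detector(gray_frame)[0]  # assumes exactly one visible face
        shape = predictor(gray_frame, face)
        pts = np.array([(p.x, p.y) for p in shape.parts()], dtype=float)
        return pts[SUBSET]

    def mean_landmark_error_mm(pred_pts, ref_pts, mm_per_pixel):
        """Mean Euclidean distance over the 42 point pairs, in millimetres."""
        return float(np.linalg.norm(pred_pts - ref_pts, axis=1).mean()) * mm_per_pixel

Averaging mean_landmark_error_mm over all frames and participants would yield summary figures of the kind reported above (e.g., 0.77 mm across all 42 points for earphones); the subset indices, function names, and scale factor here are illustrative assumptions, not the authors' implementation.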

Supplementary Material

  • VTT file ufp7476pv.vtt (0.49 KB)
  • VTT file ufp7476vf.vtt (7.97 KB)
  • VTT file 3379337.3415879.vtt (6.31 KB)
  • SRT file ufp7476pvc.srt: preview video captions (0.51 KB)
  • SRT file ufp7476vfc.srt: video figure captions (7.20 KB)
  • ZIP file ufp7476aux.zip: auxiliary/supplemental material (162.57 MB), containing Full_length_video.mp4 (the roughly three-minute full-length video explaining the system design, evaluation, and applications), Full_length_caption.srt (closed captions for the full-length video), Preview_video.mp4 (the 30-second video preview), and Preview_caption.srt (closed captions for the video preview)
  • MP4 file ufp7476pv.mp4: preview video (87.49 MB)
  • MP4 file ufp7476vf.mp4: video figure (75.58 MB)
  • MP4 file 3379337.3415879.mp4: presentation video (28.19 MB)



Index Terms

  • C-Face: Continuously Reconstructing Facial Expressions by Deep Learning Contours of the Face with Ear-mounted Miniature Cameras
    • Human-centered computing
      • Ubiquitous and mobile computing
        • Ubiquitous and mobile devices


    Information & Contributors

    Information

    Published In


    UIST '20: Proceedings of the 33rd Annual ACM Symposium on User Interface Software and Technology

    October 2020

    1297 pages

    ISBN:9781450375146

    DOI:10.1145/3379337

    • General Chairs: Shamsi Iqbal (Microsoft Research, USA) and Karon MacLean (University of British Columbia, Canada)
    • Program Chairs: Fanny Chevalier (University of Toronto, Canada) and Stefanie Mueller (MIT CSAIL, USA)
    Copyright © 2020 ACM.

    Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from permissions@acm.org.

    Sponsors

    • SIGGRAPH: ACM Special Interest Group on Computer Graphics and Interactive Techniques
    • SIGCHI: ACM Special Interest Group on Computer-Human Interaction

    Publisher

    Association for Computing Machinery

    New York, NY, United States

    Publication History

    Published: 20 October 2020


    Author Tags

    1. computer vision
    2. facial expression reconstruction and tracking
    3. deep learning
    4. ear sensing
    5. emoji recognition
    6. silent speech
    7. wearable computing

    Qualifiers

    • Research-article

    Conference

    UIST '20

    Sponsor:

    • SIGGRAPH
    • SIGCHI

    Acceptance Rates

    Overall Acceptance Rate 561 of 2,567 submissions, 22%



    Cited By

    • Zhang J., Xie X., Peng G., Liu L., Yang H., Guo R., Cao J., and Yang J. (2024). A Real-Time and Privacy-Preserving Facial Expression Recognition System Using an AI-Powered Microcontroller. Electronics 13(14), 2791. https://doi.org/10.3390/electronics13142791. Online publication date: 16-Jul-2024.
    • Mizuho Y., Kawasaki Y., Amesaka T., and Sugiura Y. (2024). EarAuthCam: Personal Identification and Authentication Method Using Ear Images Acquired with a Camera-Equipped Hearable Device. Proceedings of the Augmented Humans International Conference 2024, 119-130. https://dl.acm.org/doi/10.1145/3652920.3653059. Online publication date: 4-Apr-2024.
    • Li K., Zhang R., Chen S., Chen B., Sakashita M., Guimbretiere F., and Zhang C. (2024). EyeEcho: Continuous and Low-power Facial Expression Tracking on Glasses. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-24. https://dl.acm.org/doi/10.1145/3613904.3642613. Online publication date: 11-May-2024.
    • Pandey L. and Arif A. (2024). MELDER: The Design and Evaluation of a Real-time Silent Speech Recognizer for Mobile Devices. Proceedings of the 2024 CHI Conference on Human Factors in Computing Systems, 1-23. https://dl.acm.org/doi/10.1145/3613904.3642348. Online publication date: 11-May-2024.
    • Cai Z., Ma Y., and Lu F. (2024). Robust Dual-Modal Speech Keyword Spotting for XR Headsets. IEEE Transactions on Visualization and Computer Graphics 30(5), 2507-2516. https://dl.acm.org/doi/10.1109/TVCG.2024.3372092. Online publication date: 5-Mar-2024.
    • Yi C., Wei B., Zhu J., Chen C., Wang Y., Chen Z., Huang Y., and Jiang F. (2024). Mordo2: A Personalization Framework for Silent Command Recognition. IEEE Transactions on Neural Systems and Rehabilitation Engineering 32, 133-143. DOI: 10.1109/TNSRE.2023.3342068. Online publication date: 2024.
    • Sun X., Xiong J., Feng C., Li H., Wu Y., Fang D., and Chen X. (2024). EarSSR: Silent Speech Recognition via Earphones. IEEE Transactions on Mobile Computing 23(8), 8493-8507. DOI: 10.1109/TMC.2024.3356719. Online publication date: Aug-2024.
    • Zhang S., Lu T., Zhou H., Liu Y., Liu R., and Gowda M. (2023). I Am an Earphone and I Can Hear My User's Face: Facial Landmark Tracking Using Smart Earphones. ACM Transactions on Internet of Things 5(1), 1-29. https://dl.acm.org/doi/10.1145/3614438. Online publication date: 16-Dec-2023.
    • Yang X., Wang X., Dong G., Yan Z., Srivastava M., Hayashi E., and Zhang Y. (2023). Headar. Proceedings of the ACM on Interactive, Mobile, Wearable and Ubiquitous Technologies 7(3), 1-28. https://dl.acm.org/doi/10.1145/3610900. Online publication date: 27-Sep-2023.
    • Zhang R., Chen H., Agarwal D., Jin R., Li K., Guimbretière F., and Zhang C. (2023). HPSpeech: Silent Speech Interface for Commodity Headphones. Proceedings of the 2023 ACM International Symposium on Wearable Computers, 60-65. https://dl.acm.org/doi/10.1145/3594738.3611365. Online publication date: 8-Oct-2023.


