Chia Nan University of Pharmacy & Science Institutional Repository:Item 310902800/1231


    Please use this permanent URL to cite or link to this item: https://ir.cnu.edu.tw/handle/310902800/1231


    Title: A Visual Communication System for Hear-impaired and Talk-impaired Patients (Chinese title: 聽障者與不能發音者的電腦輔助溝通系統)
    Authors: 黃文楨 (Wen-Chen Huang)
    Contributors: Department of Information Management (資訊管理系)
    Keywords: lip-reading; spatial-temporal image difference; pattern recognition; 3D reconstruction
    Date: 2003
    Upload Time: 2008-06-30 10:33:22 (UTC+8)
    Publisher: Tainan County: Department of Information Management, Chia Nan University of Pharmacy & Science
    Abstract: The most common communication disorders in clinical practice are speech and hearing impairment. As life expectancy lengthens, hearing impairment is becoming one of the most important issues in our society. When medical or surgical treatment has been exhausted, rehabilitation is the last resort for restoring the ability to communicate, and a key component of such restoration is an alternative medium through which the impaired function can be carried out as well as possible. For speech and hearing disorders, communication can usually be augmented by visual stimulation, for example by signs or text. With the development of the World Wide Web, not only can traditional oral-aural communication be replaced by text-based, long-distance communication, but so can the communication of people with speech or hearing impairments. Building on promising data-analysis techniques, we can also construct a visual lip-reading system for these patients, both for post-operative communication and for their later speech rehabilitation.
    The purpose of this research is to build a visual lip-reading system that recognizes sentences from image sequences of a speaker. At the same time, face images taken from different angles with digital cameras are used to build a vivid 3D human head model, and the recognized text drives the head model to talk like the real person. The framework has three parts: a lip-reading recognizer, a 3D head model generator, and talking-face animation, integrated behind a web-based visual communication interface. Experimental results show a recognition rate of about 97 percent over ten sentences for a specific speaker. (An illustrative sketch of the spatial-temporal image-difference idea follows this record.)
    Relation: Project No. CNMI9204
    Appears in Collections: [Department of Information Management] Intramural Projects
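
The keywords and abstract above outline a lip-reading pipeline built on spatial-temporal image differences and simple pattern recognition, but the record itself gives no implementation details. The following Python sketch only illustrates the general idea under assumptions of our own: lip-region frames are differenced over time, the motion energy is pooled over a spatial grid into a fixed-length feature vector, and a sentence is recognized by nearest-neighbour matching against enrolled templates. All names (extract_features, SentenceRecognizer) and parameters (the 8x8 grid) are hypothetical and not taken from the project report.

# Minimal, illustrative sketch of a spatial-temporal image-difference
# lip-reading recognizer; NOT the authors' implementation.
import numpy as np


def extract_features(frames: np.ndarray, grid: int = 8) -> np.ndarray:
    """frames: (T, H, W) grayscale lip-region sequence with T >= 2 frames.

    Returns a fixed-length vector of temporal-difference (motion) energies
    pooled over a grid x grid spatial partition of the lip region.
    """
    # Absolute frame-to-frame differences capture mouth motion over time.
    diffs = np.abs(np.diff(frames.astype(np.float32), axis=0))  # (T-1, H, W)
    _, h, w = diffs.shape
    hs, ws = h // grid, w // grid
    feat = np.zeros(grid * grid, dtype=np.float32)
    for i in range(grid):
        for j in range(grid):
            block = diffs[:, i * hs:(i + 1) * hs, j * ws:(j + 1) * ws]
            feat[i * grid + j] = block.mean()  # average motion energy per cell
    norm = np.linalg.norm(feat)
    return feat / norm if norm > 0 else feat


class SentenceRecognizer:
    """Nearest-neighbour matcher over per-sentence feature templates."""

    def __init__(self) -> None:
        self.templates: dict[str, np.ndarray] = {}

    def enroll(self, sentence: str, frames: np.ndarray) -> None:
        # Store one reference feature vector per sentence (speaker-dependent).
        self.templates[sentence] = extract_features(frames)

    def recognize(self, frames: np.ndarray) -> str:
        # Return the enrolled sentence whose template is closest to the query.
        query = extract_features(frames)
        return min(self.templates,
                   key=lambda s: np.linalg.norm(self.templates[s] - query))

Pooling the frame differences per grid cell keeps the feature length fixed regardless of sequence length, which suits the speaker-dependent, ten-sentence setting reported in the abstract. A real system would also need lip-region detection and temporal alignment (for example dynamic time warping), which are omitted here.
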

    Files in This Item:

    File              Description    Size      Format       Views
    92CNMI9204.pdf                   107 KB    Adobe PDF    986


    All items in CNU IR are protected by the original copyright.

