Lip Reading Using Convolutional Auto Encoders as Feature Extractor

31 May 2018 · Dharin Parekh, Ankitesh Gupta, Shharrnam Chhatpar, Anmol Yash Kumar, Manasi Kulkarni

Visual recognition of speech from lip movement is called lip-reading. Recent work in this nascent field uses various neural networks as feature extractors whose outputs feed a model that captures temporal relationships and performs classification. Although end-to-end sentence-level lip-reading is the current trend, we propose a new model that performs word-level classification and surpasses the established benchmarks on standard datasets. Our model uses convolutional autoencoders as feature extractors, and their features are fed to a Long Short-Term Memory (LSTM) model. We tested the proposed model on BBC's LRW dataset, MIRACL-VC1, and the GRID dataset, achieving a classification accuracy of 98% on MIRACL-VC1 against the previous benchmark of 93.4% (Rekik et al., 2014). On BBC's LRW, the proposed model outperformed the baseline combining convolutional neural networks with an LSTM (Garg et al., 2016). By visualizing the features learned by the models, we show why the proposed model works better than the baseline. The same model can also be extended to end-to-end sentence-level classification.
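To make the two-stage pipeline concrete, below is a minimal PyTorch sketch of the architecture the abstract describes: a convolutional autoencoder pretrained to reconstruct mouth-region frames, whose encoder is then reused as a per-frame feature extractor in front of an LSTM word classifier. The 64x64 grayscale input, the layer sizes, and the 10-word vocabulary are illustrative assumptions, not the paper's exact configuration.

# Sketch of: conv autoencoder -> encoder features -> LSTM -> word logits.
# All dimensions are assumptions for illustration only.
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    def __init__(self):
        super().__init__()
        # Encoder: 1x64x64 grayscale frame -> 32x16x16 feature map
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, stride=2, padding=1),   # 16x32x32
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),  # 32x16x16
            nn.ReLU(),
        )
        # Decoder mirrors the encoder; used only during reconstruction pretraining
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, kernel_size=2, stride=2),    # 16x32x32
            nn.ReLU(),
            nn.ConvTranspose2d(16, 1, kernel_size=2, stride=2),     # 1x64x64
            nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

class LipReader(nn.Module):
    def __init__(self, encoder, feat_dim=32 * 16 * 16, hidden=256, num_words=10):
        super().__init__()
        self.encoder = encoder          # pretrained autoencoder encoder, typically frozen
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.classifier = nn.Linear(hidden, num_words)

    def forward(self, frames):          # frames: (batch, time, 1, 64, 64)
        b, t = frames.shape[:2]
        feats = self.encoder(frames.flatten(0, 1))   # (b*t, 32, 16, 16)
        feats = feats.flatten(1).view(b, t, -1)      # (b, t, feat_dim)
        _, (h, _) = self.lstm(feats)                 # final hidden state
        return self.classifier(h[-1])                # word logits

# Stage 1 would pretrain ConvAutoencoder on single frames with an MSE loss;
# Stage 2 trains the encoder+LSTM classifier with cross-entropy on word labels.
ae = ConvAutoencoder()
model = LipReader(ae.encoder)
logits = model(torch.randn(2, 25, 1, 64, 64))        # 2 clips of 25 frames
print(logits.shape)                                  # torch.Size([2, 10])

Separating reconstruction pretraining from classification is what lets the autoencoder learn lip-region features without word labels; only the lighter LSTM head then needs supervised training.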


Datasets

LRW, MIRACL-VC1, GRID
Results from the Paper

MIRACL-VC1: 98% word-classification accuracy (previous benchmark: 93.4%, Rekik et al., 2014)
LRW: outperforms the CNN+LSTM baseline (Garg et al., 2016)

Methods

Convolutional Autoencoder, LSTM