
Image caption generator | Eye-for-blind

Image-to-text-to-speech generation

Brief description

The goal is to build a deep learning model that explains the contents of an image in the form of speech, by generating a caption with an attention mechanism on the Flickr8K dataset. Such a model is a use case for blind users, who can understand any image with the help of speech. The caption generated by a CNN-RNN model is converted to speech using a text-to-speech library, as sketched below.
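As a minimal sketch of the final text-to-speech step, assuming the gTTS library (the README does not name a specific library, and `caption` here is a hypothetical stand-in for the decoder's output):

```python
# Minimal sketch: convert a generated caption to speech.
# gTTS is an assumption -- the README only says "a text to speech library".
from gtts import gTTS

caption = "a dog is running through the grass"  # hypothetical decoder output
tts = gTTS(text=caption, lang="en")
tts.save("caption.mp3")  # writes an MP3 file that can be played back to the user
```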

The features of an image are extracted by a CNN-based encoder, and an RNN decoder with attention generates the caption word by word, as in the sketch after this paragraph.
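Below is a minimal sketch of this encoder-decoder split, assuming TensorFlow/Keras and pre-extracted InceptionV3 features (64 spatial vectors of 2048 dimensions); the class names and layer sizes are illustrative, not necessarily the repository's exact implementation:

```python
# Sketch of a CNN encoder + attention-based RNN decoder for captioning,
# in the style of "Show, Attend and Tell". Assumes features were already
# extracted by a CNN (e.g. InceptionV3), giving tensors of shape (batch, 64, 2048).
import tensorflow as tf

class CNN_Encoder(tf.keras.Model):
    """Projects pre-extracted CNN features into the decoder's embedding space."""
    def __init__(self, embedding_dim):
        super().__init__()
        self.fc = tf.keras.layers.Dense(embedding_dim)

    def call(self, x):                 # x: (batch, 64, 2048)
        return tf.nn.relu(self.fc(x))  # (batch, 64, embedding_dim)

class BahdanauAttention(tf.keras.Model):
    """Additive attention over the 64 spatial feature vectors."""
    def __init__(self, units):
        super().__init__()
        self.W1 = tf.keras.layers.Dense(units)
        self.W2 = tf.keras.layers.Dense(units)
        self.V = tf.keras.layers.Dense(1)

    def call(self, features, hidden):
        # features: (batch, 64, embedding_dim); hidden: (batch, units)
        hidden_with_time = tf.expand_dims(hidden, 1)
        score = self.V(tf.nn.tanh(self.W1(features) + self.W2(hidden_with_time)))
        attention_weights = tf.nn.softmax(score, axis=1)  # (batch, 64, 1)
        context = tf.reduce_sum(attention_weights * features, axis=1)
        return context, attention_weights

class RNN_Decoder(tf.keras.Model):
    """Predicts the next caption word from the attended image context."""
    def __init__(self, embedding_dim, units, vocab_size):
        super().__init__()
        self.attention = BahdanauAttention(units)
        self.embedding = tf.keras.layers.Embedding(vocab_size, embedding_dim)
        self.gru = tf.keras.layers.GRU(units, return_sequences=True,
                                       return_state=True)
        self.fc1 = tf.keras.layers.Dense(units)
        self.fc2 = tf.keras.layers.Dense(vocab_size)

    def call(self, x, features, hidden):
        # x: (batch, 1) token ids for the previous word
        context, attn = self.attention(features, hidden)
        x = self.embedding(x)  # (batch, 1, embedding_dim)
        x = tf.concat([tf.expand_dims(context, 1), x], axis=-1)
        output, state = self.gru(x)
        x = self.fc1(tf.reshape(output, (-1, output.shape[2])))
        return self.fc2(x), state, attn  # logits over the vocabulary
```

At inference time, the decoder is run one step at a time, feeding each predicted word back in until an end-of-sequence token is produced; the attention weights indicate which image regions informed each word.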

The project is an extended application of the paper "Show, Attend and Tell: Neural Image Caption Generation with Visual Attention".