Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/72777
Title: From an image to a text description of the image
Authors: Thian, Ronald Chuan Yan
Keywords: DRNTU::Engineering::Computer science and engineering
Issue Date: 2017
Abstract: This project presents an implementation of a search function that allows users to search for a particular object of interest using only textual information. The main idea is to train a very deep neural network architecture that generates a useful description for each video frame. Particular emphasis is placed on exploring different types of image captioning models and their differences. The network consists of a Convolutional Neural Network (CNN) that learns features from an image, and a Long Short-Term Memory (LSTM) unit that predicts a sequence of words from the features learnt by the CNN. This project does not implement live captioning of videos; instead, it pre-processes each video into frames and generates an appropriate caption for each frame before the user conducts the textual search. (A minimal sketch of this encoder-decoder setup is given after the record fields below.)
URI: http://hdl.handle.net/10356/72777
Schools: School of Computer Science and Engineering 
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: SCSE Student Reports (FYP/IA/PA/PI)
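
The report's own code is not included in this record, but the abstract describes a standard CNN-encoder / LSTM-decoder captioning pipeline. The sketch below illustrates that setup in PyTorch under stated assumptions: the ResNet-50 backbone, the embedding and hidden sizes, and all class and variable names are illustrative choices, not details taken from the report.

# Hypothetical PyTorch sketch of the CNN + LSTM captioning model described in
# the abstract; backbone, dimensions, and names are assumptions for illustration.
import torch
import torch.nn as nn
import torchvision.models as models

class CaptionModel(nn.Module):
    def __init__(self, vocab_size, embed_dim=256, hidden_dim=512):
        super().__init__()
        # CNN encoder: pretrained ResNet-50 with its classifier head removed,
        # followed by a linear projection into the word-embedding space.
        resnet = models.resnet50(weights=models.ResNet50_Weights.DEFAULT)
        self.cnn = nn.Sequential(*list(resnet.children())[:-1])
        self.project = nn.Linear(resnet.fc.in_features, embed_dim)
        # LSTM decoder: predicts the next word from the image feature and the
        # previously seen words (teacher forcing during training).
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.lstm = nn.LSTM(embed_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, vocab_size)

    def forward(self, frames, captions):
        # frames: (B, 3, H, W) video frames; captions: (B, T) word indices.
        with torch.no_grad():                       # keep the CNN backbone frozen
            feats = self.cnn(frames).flatten(1)     # (B, 2048) image features
        feats = self.project(feats).unsqueeze(1)    # image feature acts as the first "token"
        words = self.embed(captions[:, :-1])        # shift right for next-word prediction
        states, _ = self.lstm(torch.cat([feats, words], dim=1))
        return self.out(states)                     # (B, T, vocab_size) word scores

At inference time, the first word would be predicted from the image feature alone and each generated word fed back into the LSTM until an end-of-sentence token appears; the resulting per-frame captions could then be indexed so the textual search described in the abstract can match a query against them.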

Files in This Item:
File: Ronald_FYP_Report.pdf (Restricted Access)
Size: 3.21 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.