Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/181109
Title: Are vision language models multimodal learners?
Authors: Lee, Gyeonggeon
Keywords: Computer and Information Science
Issue Date: 2024
Source: Lee, G. (2024). Are vision language models multimodal learners?. AI for Education Singapore 2024. Nanyang Technological University.
Conference: AI for Education Singapore 2024
Abstract: Since the release of accessible vision language models (VLMs) such as GPT-4V and Gemini Pro in 2023, scholars have envisaged using these artificial intelligence (AI) models to broadly support instructors and learners. In particular, their ability to process visual and textual data simultaneously and to generate information from both is considered one of the most important features of these user-friendly VLMs. This capability is significant because human cognition benefits from multimodality, which has prompted calls for teaching, learning, and evaluation to be conducted in more diverse, sophisticated, and constructive ways. However, such multimodal educational practices have yet to be realized in everyday classrooms, and the integration of AI promises to facilitate this transformation. In this talk, we will review the hypothesized parallelism between humans and VLMs as multimodal learners and its implications for the potential role of AI models in future education. Additionally, we will discuss the limitations, challenges, and possible remedies for integrating these models effectively into educational settings.
URI: https://hdl.handle.net/10356/181109
URL: https://www.ntu.edu.sg/mae/ai-education-singapore-2024/activities/keynote-invited-talk#Content_C021_Col00
Schools: School of Mechanical and Aerospace Engineering
Organisations: NVIDIA
Rights: © 2024 The Author. Published by Nanyang Technological University. All rights reserved.
Fulltext Permission: none
Fulltext Availability: No Fulltext
Appears in Collections:MAE Conference Papers
