Please use this identifier to cite or link to this item:
Full metadata record
DC Field | Value | Language
dc.contributor.author | Damen, Dima | en_US
dc.contributor.author | Doughty, Hazel | en_US
dc.contributor.author | Farinella, Giovanni Maria | en_US
dc.contributor.author | Furnari, Antonino | en_US
dc.contributor.author | Kazakos, Evangelos | en_US
dc.contributor.author | Ma, Jian | en_US
dc.contributor.author | Moltisanti, Davide | en_US
dc.contributor.author | Munro, Jonathan | en_US
dc.contributor.author | Perrett, Toby | en_US
dc.contributor.author | Price, Will | en_US
dc.contributor.author | Wray, Michael | en_US
dc.identifier.citation | Damen, D., Doughty, H., Farinella, G. M., Furnari, A., Kazakos, E., Ma, J., Moltisanti, D., Munro, J., Perrett, T., Price, W. & Wray, M. (2022). Rescaling egocentric vision: collection, pipeline and challenges for EPIC-KITCHENS-100. International Journal of Computer Vision, 130(1), 33-55. | en_US
dc.description.abstract | This paper introduces the pipeline to extend the largest dataset in egocentric vision, EPIC-KITCHENS. The effort culminates in EPIC-KITCHENS-100, a collection of 100 hours, 20M frames, 90K actions in 700 variable-length videos, capturing long-term unscripted activities in 45 environments, using head-mounted cameras. Compared to its previous version (Damen in Scaling egocentric vision: ECCV, 2018), EPIC-KITCHENS-100 has been annotated using a novel pipeline that allows denser (54% more actions per minute) and more complete annotations of fine-grained actions (+128% more action segments). This collection enables new challenges such as action detection and evaluating the “test of time”—i.e. whether models trained on data collected in 2018 can generalise to new footage collected two years later. The dataset is aligned with 6 challenges: action recognition (full and weak supervision), action detection, action anticipation, cross-modal retrieval (from captions), as well as unsupervised domain adaptation for action recognition. For each challenge, we define the task and provide baselines and evaluation metrics. | en_US
dc.relation.ispartof | International Journal of Computer Vision | en_US
dc.rights | © The Author(s) 2021. This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecomm | en_US
dc.subject | Engineering::Computer science and engineering | en_US
dc.title | Rescaling egocentric vision: collection, pipeline and challenges for EPIC-KITCHENS-100 | en_US
dc.type | Journal Article | en
dc.contributor.school | School of Computer Science and Engineering | en_US
dc.description.version | Published version | en_US
dc.subject.keywords | Video Dataset | en_US
dc.subject.keywords | Annotation Quality | en_US
dc.description.acknowledgement | Research at Bristol is supported by the Engineering and Physical Sciences Research Council (EPSRC) Doctoral Training Program (DTP) and EPSRC Fellowship UMPIRE (EP/T004991/1). Research at Catania is sponsored by Piano della Ricerca 2016-2018 linea di Intervento 2 of DMI, by MISE - PON I&C 2014-2020, ENIGMA project (CUP: B61B19000520008) and by MIUR AIM - Attrazione e Mobilità Internazionale Linea 1 - AIM1893589 - CUP E64118002540007. | en_US
item.fulltext | With Fulltext | -
Appears in Collections:SCSE Journal Articles
Files in This Item:
File | Description | Size | Format
s11263-021-01531-2.pdf | | 4.89 MB | Adobe PDF

Citations: 20 (updated on Jan 30, 2023)
Web of Science™ citations: 20 (updated on Jan 27, 2023)

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.