Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/174338
Title: Multiattribute multitask transformer framework for vision-based structural health monitoring
Authors: Gao, Yuqing; Yang, Jianfei; Qian, Hanjie; Mosalam, Khalid M.
Keywords: Engineering
Issue Date: 2023
Source: Gao, Y., Yang, J., Qian, H. & Mosalam, K. M. (2023). Multiattribute multitask transformer framework for vision-based structural health monitoring. Computer-Aided Civil and Infrastructure Engineering, 38(17), 2358-2377. https://dx.doi.org/10.1111/mice.13067
Journal: Computer-Aided Civil and Infrastructure Engineering
Abstract: Using deep learning (DL) to recognize building and infrastructure damage via images is becoming popular in vision-based structural health monitoring (SHM). However, many previous studies solely work on the existence of damage in the images and directly treat the problem as a single-attribute classification or separately focus on finding the location or area of the damage as a localization or segmentation problem. Abundant information in the images from multiple sources and intertask relationships are not fully exploited. In this study, the vision-based SHM problem is first reformulated into a multiattribute multitask setting, where each image contains multiple labels to describe its characteristics. Subsequently, a general multiattribute multitask detection framework, namely ϕ-NeXt, is proposed, which introduces 10 benchmark tasks including classification, localization, and segmentation tasks. Accordingly, a large-scale data set containing 37,000 pairs of multilabeled images is established. To pursue better performance in all tasks, a novel hierarchical framework, namely multiattribute multitask transformer (MAMT2), is proposed, which integrates multitask transfer learning mechanisms and adopts a transformer-based network as the backbone. Finally, for benchmarking purposes, extensive experiments are conducted on all tasks and the performance of the proposed MAMT2 is compared with several classical DL models. The results demonstrate the superiority of the MAMT2 in all tasks, which reveals a great potential for practical applications and future studies in both structural engineering and computer vision.
URI: https://hdl.handle.net/10356/174338
ISSN: 1093-9687
DOI: 10.1111/mice.13067
Schools: School of Electrical and Electronic Engineering
Rights: © 2023 The Authors. Computer-Aided Civil and Infrastructure Engineering published by Wiley Periodicals LLC on behalf of Editor. This is an open access article under the terms of the Creative Commons Attribution License, which permits use, distribution and reproduction in any medium, provided the original work is properly cited.
Fulltext Permission: open
Fulltext Availability: With Fulltext
Appears in Collections: EEE Journal Articles
Files in This Item:
File | Description | Size | Format
---|---|---|---
Computer aided Civil Eng - 2023 - Gao - Multiattribute multitask transformer framework for vision‐based structural health.pdf | | 8.16 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.