Title: Deep convolutional neural networks for manufactured IC image analysis
Authors: Tan, Weiwei
Keywords: DRNTU::Engineering::Electrical and electronic engineering
Issue Date: 2019
Abstract: Image analysis of manufactured Integrated Circuits (ICs) plays an important role in IC function verification, hardware security assurance, intellectual property protection, etc. Circuit extraction is one of the most common and reliable approaches to manufactured IC image analysis. However, the annotation of delayered IC images, a crucial step in circuit extraction, is becoming infeasible with conventional manual methods due to the increasing complexity of modern VLSI designs. Recent research efforts have therefore been devoted to automating the IC image annotation process using image processing or machine learning techniques. In this final year project, we first developed a deep convolutional neural network based segmentation model (wptnet) for pixel-wise annotation of circuit components in the metal layer of our delayered IC images. The proposed wptnet achieved a mean intersection over union of 88.98% and a mean pixel accuracy of 94.35% on 880 test images from the IC metal layer (image dimensions: 224 × 224 pixels). However, IC chips normally have more than one layer, and images of different IC layers exhibit different image features. Segmentation performance therefore degrades when a model trained on one layer is applied to a different layer: wptnet trained on our source set of IC images achieves only a mean intersection over union of 81.54% and a mean pixel accuracy of 89.54% on a target set of IC images that differs slightly from the source set. Preparing another set of training data to retrain the model for each new layer is time-consuming and resource-demanding. To improve efficiency, we further present wptnetDA, a network that incorporates domain adaptation techniques to segment delayered images from different layers.
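The abstract reports results as mean intersection over union and mean pixel accuracy. As an illustration of how these two metrics are conventionally computed from predicted and ground-truth label maps, here is a minimal NumPy sketch; the function name and per-class averaging choices are assumptions, not code from the project:

```python
import numpy as np

def mean_iou_and_pixel_accuracy(pred, target, num_classes):
    """Per-class IoU and pixel accuracy, averaged over classes present
    in the ground truth (a common convention; the report may differ)."""
    ious, accs = [], []
    for c in range(num_classes):
        pred_c = pred == c
        tgt_c = target == c
        inter = np.logical_and(pred_c, tgt_c).sum()
        union = np.logical_or(pred_c, tgt_c).sum()
        if union > 0:
            ious.append(inter / union)          # IoU = |P ∩ T| / |P ∪ T|
        if tgt_c.sum() > 0:
            accs.append(inter / tgt_c.sum())    # class accuracy = |P ∩ T| / |T|
    return float(np.mean(ious)), float(np.mean(accs))
```

For a 224 × 224 metal-layer annotation with, say, two classes (metal vs. background), `pred` and `target` would each be a 224 × 224 integer label map.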
Specifically, we adopted domain confusion with Maximum Mean Discrepancy (MMD). Our wptnetDA model achieves a mean intersection over union of 88.51% and a mean pixel accuracy of 95.74% on the target set of images without degrading performance on the source set.
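Domain confusion with MMD penalizes the distance between source-domain and target-domain feature distributions during training. As illustration only, a minimal NumPy sketch of the biased squared-MMD estimator with an RBF kernel (the kernel choice, bandwidth, and function name are assumptions, not details from the report):

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased squared MMD between sample sets X and Y (rows = samples)
    under an RBF kernel k(a, b) = exp(-||a - b||^2 / (2 sigma^2))."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    # MMD^2 = E[k(x, x')] + E[k(y, y')] - 2 E[k(x, y)]
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()
```

In a domain-adaptation setup such as wptnetDA's, a term like this would be evaluated on intermediate network features from source-layer and target-layer image batches and added to the segmentation loss, pushing the two feature distributions together.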
Schools: School of Electrical and Electronic Engineering 
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections: EEE Student Reports (FYP/IA/PA/PI)

Files in This Item: Restricted Access, 3.59 MB, Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.