Please use this identifier to cite or link to this item:
https://hdl.handle.net/10356/73745
Title: Implementation of machine learning techniques to denoise and unmix TEM spectroscopic dataset
Authors: Quang, Uy Thinh
Keywords: DRNTU::Engineering
Issue Date: 2018
Abstract: Rapid advancement in Transmission Electron Microscopy (TEM) instrumentation has enabled the acquisition of high-resolution, nanoscale images, allowing materials scientists to perform in-depth analyses of samples with complex designs. Concurrently, however, it has produced highly mixed datasets: each pixel of the imaged sample is a combination of signals from multiple constituent elements and phases. Data separation, or unmixing, of such mixed images is required for tasks such as quantification and identification. This project involves two computational algorithms developed for this purpose: Vertex Component Analysis (VCA) and Bayesian Linear Unmixing (BLU).

The project first focused on implementing these algorithms in HyperSpy, an open-source analytical imaging toolbox written in Python. The code scripts for both techniques were designed independently and incorporated into the existing software so that they could fully exploit the functionality available in HyperSpy. The implementation was confirmed to be operational via verification tests on sample EDX and EELS images, ensuring that the code did not produce spurious unmixing outputs.

The project's second phase studied dataset pre-treatment techniques using heavily noise-corrupted EDX images and compared the unmixing performance of BLU and VCA. The images employed were those of a methylammonium lead iodide (MAPbI3) perovskite film and an In(Zn)P/ZnS core-shell nanocrystal. Permuted combinations of three pre-treatment methods, namely binning, cropping and normalization, were applied to the images. Binning boosted signal by reducing the image resolution, cropping targeted the region of interest in an image to exclude irrelevant signals, and normalization addressed the shot-noise nature of EDX images. It was found that applying all three methods together produced the optimal unmixing outputs for both BLU and VCA. Furthermore, a synthetic dataset was created with HyperSpy to test the Signal-to-Noise Ratio (SNR) dependence of BLU and VCA. Interestingly, BLU had a larger margin of unmixing error than VCA, but under heavy noise corruption BLU performed marginally better. Overall, however, VCA excelled, with lighter resource demands, faster processing times and reasonably accurate unmixing output.
URI: http://hdl.handle.net/10356/73745
Schools: School of Materials Science and Engineering
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
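The full report is restricted, so the exact pre-treatment code is not available here. As an illustration only, the three steps named in the abstract (binning to boost counts, cropping to a region of interest, and a normalization suited to shot noise) can be sketched in plain NumPy on a toy EDX-like data cube. The function names, the 4x4 bin factor, the crop window, and the choice of the Anscombe transform as the shot-noise normalization are assumptions for this sketch, not details taken from the report.

```python
import numpy as np

def bin_spatial(data, factor):
    """Sum non-overlapping factor x factor spatial blocks of an
    (ny, nx, n_channels) spectrum image. Summing blocks boosts the
    per-pixel counts at the cost of spatial resolution."""
    ny, nx, nc = data.shape
    ny2, nx2 = ny // factor, nx // factor
    trimmed = data[:ny2 * factor, :nx2 * factor, :]
    return trimmed.reshape(ny2, factor, nx2, factor, nc).sum(axis=(1, 3))

def anscombe(data):
    """Anscombe transform: approximately stabilises the variance of
    Poisson-distributed (shot-noise limited) counts to 1, a common
    normalization choice before applying unmixing algorithms."""
    return 2.0 * np.sqrt(data + 3.0 / 8.0)

# Toy 64 x 64 spectrum image with 100 energy channels of Poisson counts.
rng = np.random.default_rng(0)
cube = rng.poisson(lam=2.0, size=(64, 64, 100)).astype(float)

roi = cube[8:56, 8:56, :]      # cropping: keep only the region of interest
binned = bin_spatial(roi, 4)   # binning: sum 4x4 pixel blocks
norm = anscombe(binned)        # normalization for shot noise

print(binned.shape)  # -> (12, 12, 100)
```

The treated cube would then be flattened to a (pixels x channels) matrix and passed to an unmixing routine such as the VCA or BLU implementations the project contributed to HyperSpy.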
Appears in Collections: MSE Student Reports (FYP/IA/PA/PI)
Files in This Item:
File | Description | Size | Format
---|---|---|---
FYP Report Quang Uy Thinh.pdf (Restricted Access) | Main article | 3.57 MB | Adobe PDF
Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.