Please use this identifier to cite or link to this item: https://hdl.handle.net/10356/156815
Title: Using CodeBERT model for vulnerability detection
Authors: Zhou, ZhiWei
Keywords: Engineering::Computer science and engineering
Issue Date: 2022
Publisher: Nanyang Technological University
Source: Zhou, Z. (2022). Using CodeBERT model for vulnerability detection. Final Year Project (FYP), Nanyang Technological University, Singapore. https://hdl.handle.net/10356/156815
Abstract: This report presents an experimental study aimed at gaining a deeper understanding of the parameters used in fine-tuning the pre-trained model, while also attempting to match or exceed the accuracy stated in the repository by fine-tuning the model under varied parameter settings. Existing research shows a clear and growing need for models that can detect vulnerabilities in code intelligence tasks with decent accuracy, in order to ultimately increase programmer productivity and reduce the risks of reusing code that is already available on code-sharing platforms. CodeBERT is a BERT-style (Bidirectional Encoder Representations from Transformers) pre-trained model for Natural Language (NL) and Programming Language (PL) that learns general-purpose representations supporting downstream NL-PL applications such as natural language code search and code documentation generation. It is built on a Transformer-based neural architecture and trained with a hybrid objective function that enables the use of both “bimodal” and “unimodal” data. CodeBERT is evaluated by fine-tuning the model’s parameters. Results show that fine-tuning the parameters of CodeBERT achieves state-of-the-art performance on both NL code search and code documentation generation. Furthermore, CodeBERT is evaluated in a zero-shot setting, where the parameters of the pre-trained model are fixed, to find out what kind of knowledge has been learnt. Results show that CodeBERT consistently outperforms previous pre-trained models on NL-PL probing. With the benchmarks of CodeBERT already available in the repository, the purpose of this experimental study is to reach and possibly exceed those benchmarks by studying the parameters to gain a better understanding, then varying each parameter individually, graphing the results, and analysing their effects on the fine-tuning process and, in turn, the final accuracy of the model.
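
As an illustration of the fine-tuning setup the abstract describes, the following is a minimal sketch of fine-tuning CodeBERT for binary vulnerability (defect) detection. It assumes the Hugging Face transformers library and the public microsoft/codebert-base checkpoint; the toy dataset, hyperparameter values, and output directory are illustrative assumptions for the kind of settings the study varies, not the report's actual configuration.

    # Minimal sketch: fine-tuning CodeBERT for binary vulnerability (defect) detection.
    # Assumes Hugging Face `transformers` and the `microsoft/codebert-base` checkpoint;
    # hyperparameter values are placeholders for the settings varied in the study
    # (learning rate, batch size, number of epochs).
    import torch
    from transformers import (
        RobertaTokenizer,
        RobertaForSequenceClassification,
        TrainingArguments,
        Trainer,
    )

    tokenizer = RobertaTokenizer.from_pretrained("microsoft/codebert-base")
    model = RobertaForSequenceClassification.from_pretrained(
        "microsoft/codebert-base", num_labels=2  # 0 = non-vulnerable, 1 = vulnerable
    )

    # Toy examples standing in for a defect-detection corpus: each item is a
    # source-code snippet paired with a binary vulnerability label.
    train_samples = [
        ("strcpy(buf, user_input);", 1),
        ("strncpy(buf, user_input, sizeof(buf) - 1);", 0),
    ]

    class CodeDataset(torch.utils.data.Dataset):
        def __init__(self, samples):
            self.samples = samples

        def __len__(self):
            return len(self.samples)

        def __getitem__(self, idx):
            code, label = self.samples[idx]
            enc = tokenizer(
                code, truncation=True, max_length=256, padding="max_length"
            )
            return {
                "input_ids": torch.tensor(enc["input_ids"]),
                "attention_mask": torch.tensor(enc["attention_mask"]),
                "labels": torch.tensor(label),
            }

    # The settings below are the kind of parameters one would vary singly and
    # plot against validation accuracy to study their effect on fine-tuning.
    args = TrainingArguments(
        output_dir="./codebert-defect",
        learning_rate=2e-5,
        per_device_train_batch_size=16,
        num_train_epochs=5,
    )

    trainer = Trainer(model=model, args=args, train_dataset=CodeDataset(train_samples))
    trainer.train()

In practice the toy list would be replaced by a labelled defect-detection dataset, and each run would change exactly one of the settings above while holding the others fixed, so that the resulting accuracy curves isolate the effect of that parameter.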
URI: https://hdl.handle.net/10356/156815
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP_Final_Report_Zhou_ZhiWei (2).pdf (Restricted Access)
Size: 2.11 MB
Format: Adobe PDF

Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.