Title: Secure algorithm based data aggregation for sensor networks
Authors: Chai, Jeremy Wen Zhang
Keywords: DRNTU::Engineering
Issue Date: 2016
Abstract: This paper attempts to solve the problem of aggregating data from a large number of sensors when an unknown number of them may be reporting false data. These malicious sensors could either be operating independently or be colluding, in an attempt to force the algorithm to report an aggregated value that deviates from the true value. Three algorithms are analysed: Median, Robust Iterative Filtering and MaxTrust. Median is shown to be a poor performer: in a collusive attack where the percentage of malicious sensors is below 50%, the median will always come from the tail values of the non-malicious sensors. Robust Iterative Filtering is then shown not to work because of its incorrect assumption that the sum of the biases is zero. MaxTrust is shown to be the best performer of the three. This paper then makes three improvements to the MaxTrust algorithm to improve its performance; the improved algorithm is called Time-Sensitive MaxTrust. The first improvement introduces the idea of time periods. A sensor that previously reported a false reading is likely to report a false reading again in subsequent time periods, so the algorithm aggregates the current time period's data using an aggregated trust value comprising 90% historical trust and 10% current trust. This improved the RMSE by 13% to 23%. The second improvement replaces all reported precisions with an arbitrary precision. This prevents attackers from attacking the algorithm by reporting a false precision value, and also handles sensors that do not report a precision value at all. Ideally, this arbitrary precision will be near the sensor's normal operating precision.
If this is not possible because the value is unknown, as in the case of crowd-sourced data, the precisions can be replaced with a sufficiently high precision. The last improvement uses the previous time period's trust values as the initial values for the MaxTrust algorithm instead of a fixed pre-determined value, reducing the number of iterations needed to reach the optimal solution. Experimental results show that this improved both the mean and the standard deviation of the number of iterations needed to reach the optimal solution while maintaining the RMSE.
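The 90%/10% time-sensitive trust blend described above can be sketched as follows. This is an illustrative reconstruction based only on the abstract, not the thesis code: the function names and the simple trust-weighted mean used as the aggregation step are assumptions for illustration.

```python
# Illustrative sketch of time-sensitive trust blending (assumed names, not thesis code).
HIST_WEIGHT = 0.9  # weight on historical trust, per the abstract
CURR_WEIGHT = 0.1  # weight on current-period trust

def blended_trust(historical, current):
    """Combine each sensor's historical and current trust values (90% / 10%)."""
    return {s: HIST_WEIGHT * historical[s] + CURR_WEIGHT * current[s]
            for s in historical}

def aggregate(readings, trust):
    """Trust-weighted mean of the sensor readings (an assumed aggregation step)."""
    total = sum(trust[s] for s in readings)
    return sum(trust[s] * readings[s] for s in readings) / total

# Sensor "c" reported falsely in earlier periods, so its historical trust is low;
# the blend keeps its influence small even if its current-period trust recovers.
historical = {"a": 0.95, "b": 0.90, "c": 0.10}
current = {"a": 0.80, "b": 0.85, "c": 0.60}
trust = blended_trust(historical, current)

readings = {"a": 20.1, "b": 19.9, "c": 35.0}  # "c" is still reporting a false value
print(round(aggregate(readings, trust), 2))   # stays close to the honest readings
```

Because the historical component dominates, a sensor cannot quickly regain influence by behaving well for a single period, which is the intuition behind the RMSE improvement the abstract reports.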
Rights: Nanyang Technological University
Fulltext Permission: restricted
Fulltext Availability: With Fulltext
Appears in Collections:SCSE Student Reports (FYP/IA/PA/PI)

Files in This Item:
File: FYP Final Report_Jeremy Chai Wen Zhang.pdf (Restricted Access)
Size: 1.59 MB
Format: Adobe PDF

Page view(s): Updated on Jun 22, 2021


Items in DR-NTU are protected by copyright, with all rights reserved, unless otherwise indicated.