Academic year: 2021

Real-Time Detection of False Data Injection

Attacks in Wind Farm:

A simulation approach with statistical based methods

Master Thesis

Master of Science in Supply Chain Management University of Groningen, Faculty of Economics and Business

Jan 27, 2020


ABSTRACT

Wind energy is rapidly becoming an attractive source of renewable energy to modern society because of its capability of providing electricity in a sustainable way. However, the vulnerability of wind farm infrastructure can lead to its exposure to


TABLE OF CONTENT

1. INTRODUCTION

2. THEORETICAL BACKGROUND

2.1. INTRODUCTION TO FALSE DATA INJECTION ATTACK

2.1.1. State Estimation

2.1.2. Bad Data Detection

2.1.3. False Data Injection Attack Mechanism

2.2. CYBERATTACK SIMULATION MODELS

2.3. CYBERATTACK DETECTION METHODS

3. METHODOLOGY

3.1. SIMULATION TOOLS

3.2. BASIC MODEL DESCRIPTION

3.2.1. Data Source

3.2.2. Wind Farm and Wind Turbines Initialization

3.2.3. Data Modification

3.2.4. Power Output Calculation

3.2.5. Wake Loss Subtraction

3.3. SIMULATING FALSE DATA INJECTION ATTACK

3.3.1. False Data Generation

3.3.2. False Data Mean

3.3.3. False Data Standard Deviation

3.3.4. False Data Injection

3.4. PROPOSED DETECTION METHOD

3.4.1. Employed Statistical Techniques

3.4.2. Proposed Detection Method

4. PERFORMANCE EVALUATIONS

4.1. PERFORMANCE MEASUREMENTS

4.2. EXPERIMENTS

4.2.1. Experiment 1 - Impact of IQR rule parameters

4.2.2. Experiment 2 - Impact of severity level of FDIA

5. CONCLUSIONS

6. LIMITATIONS


1. INTRODUCTION

Cyberattacks on power generation infrastructure are difficult to predict and mitigate. In many cases, the individuals involved in cyberattacks were motivated by economic or ideological reasons. With knowledge of how to access the targeted infrastructures, attackers were able to launch attacks that could be discovered only after they had already had negative effects on the infrastructure (Jarmakiewicz, Parobczak, & Má, 2017).

A few cyber-attack incidents on the electric grid have been recorded in history. For example, during the Russian-Georgian war in 2008, it is widely believed that cyberattacks originating in Russia brought down the Georgian electric grid. One year later, according to the Wall Street Journal, cyber attackers used software programs to disrupt the system of the U.S. electrical grid (Bou-Harb, Fachkha, Pourzandi, Debbabi, & Assi, 2013). These cyber-attack incidents have inevitably triggered fear among power generation companies, governments, and the public, since the targeted infrastructures are built to provide better power quality and improve the efficiency of delivering electricity. With disturbances from cyberattacks on the power control system, sensors, etc., the efficiency of power generation decreases, and there is even a risk of power outages that affect the daily life of local citizens (Bou-Harb et al., 2013).


report (GWEC Global Wind Report, 2018). Although fewer cyber-attack incidents have been reported in this field compared to smart grids and other power systems, as wind becomes an increasing source of renewable energy and contributes more electricity to the power grid, it may start to draw more attention from cyber attackers (Staggs, Ferlemann, & Shenoi, 2017), and the operations of wind farms will inevitably have an impact on the overall performance of the power system (Yan, Liu, & Govindarasu, 2011). For example, in March 2019, a cyberattack incident on a wind farm was reported in the U.S., causing a series of five-minute communications outages over a time period of 12 hours (Sobczak, 2019). Secondly, a wind farm is a vulnerable infrastructure and is exposed to a large variety of external threats due to its large geographic scale, remote location, and the flat logical design of its control networks (Staggs et al., 2017). Wind turbines, the lowest-level electricity generation units in a wind farm infrastructure, are arranged in fiber optic rings for communications. Because of this unique structure of groups of wind turbines, it may be possible for attackers to implement a cyberattack on one single remote wind turbine and potentially affect the operation of multiple wind turbines, or even the whole wind farm (Staggs et al., 2017).

In this thesis, in order to understand the vulnerability of power generation infrastructure, gain insight into the mechanism of cyberattacks, and develop a method that can detect ongoing cyberattacks as quickly as possible, one type of cyberattack, called the false data injection attack (FDIA), will be simulated and analyzed, since it is one of the most threatening attacks to the power system (Anwar & Mahmood, 2014).


system will be changed (Liu, Ning, & Reiter, 2011). By altering the readings of multiple sensors, the attackers can mislead the energy management system at the control center without being detected by the BDD, which can put the security of the power system in danger and cause serious problems (Mohammadpourfard, Sami, & Weng, 2018; Yang, Li, & Li, 2017; Xie, Peng, Yang, Kong, & Zhang, 2019). It has been proven that an FDIA can also be designed by attackers with limited resources, which makes the power system even more vulnerable (Pasqualetti, Dorfler, & Bullo, 2013).

To address this issue, this thesis aims to develop a detection strategy with various decision rules based on statistical methods that can help detect an ongoing FDIA on wind farm power systems as soon as possible, in order to mitigate the risk and increase the energy production rate of a wind farm.

The contributions of this thesis are as follows:

1. Most of the previous research focuses on cyberattacks on general power systems, without considering the increasing penetration of wind power into the power system (Zabetian, Mehrizi-Sani, Zabetian-Hosseini, & Liu, 2018). Therefore, selecting the wind farm as the focus of this research will bridge the research gap.

2. Wake effect, as one of unique characteristics for wind farm, will be taken into consideration when simulating the wind farm and designing the cyber-attack detection method.

3. Use Python and its libraries to build a model with an initialized wind farm, wind turbines, and weather data that can be used to set up the simulation environment.


5. Propose a detection method to filter out suspected power output readings by comparing the differences of the power outputs between normal system and attacked system.

6. Evaluate the effectiveness of the detection method by experimenting with different values of thresholds.

7. Verify the robustness of the detection method to the severity level of attack.


2. THEORETICAL BACKGROUND

2.1. Introduction to False Data Injection Attack

2.1.1. State Estimation

In a power grid network, state estimation is often used to obtain the measurement data, such as different sensor readings from the field, to estimate the state variables of the power system, such as voltage phasor magnitudes and angles for each bus in the power system. In general, the state estimation can be formulated as:

z = Hx + e (1)

Here the equation contains four elements: z represents the state measurement vector, which is collected from sensor readings; x represents the state variables, which need to be determined from the formula; e refers to the measurement noise, which is assumed to follow a Gaussian distribution; and H is a constant Jacobian matrix that denotes the power network topology (Abur & Exposito, 2004).

By using the weighted least squares method, when the measurement noise follows a Gaussian distribution with zero mean, the state variables can be computed as below, where W is a diagonal weighting matrix derived from the noise co-variance (each diagonal entry is the inverse of the corresponding noise variance):

x̂ = (H^T W H)^(-1) H^T W z (2)
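As a sketch, the weighted least squares estimator of equation (2) can be computed directly with NumPy. The Jacobian H, the true state, and the noise variances below are toy values chosen for illustration, not taken from the thesis.

```python
import numpy as np

# Toy weighted least squares state estimation, following equations (1)-(2).
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])              # toy Jacobian: 3 measurements, 2 states
true_x = np.array([1.0, 0.5])           # true state variables (unknown in practice)
noise_var = np.array([0.01, 0.01, 0.04])

rng = np.random.default_rng(0)
# eq. (1): z = Hx + e, with zero-mean Gaussian noise e
z = H @ true_x + rng.normal(0.0, np.sqrt(noise_var))

W = np.diag(1.0 / noise_var)            # diagonal weights: inverse noise variances
# eq. (2): x_hat = (H^T W H)^(-1) H^T W z
x_hat = np.linalg.inv(H.T @ W @ H) @ H.T @ W @ z
print(x_hat)                            # close to true_x when the noise is small
```
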

2.1.2. Bad Data Detection


The bad data detection (BDD) method computes the measurement residual r and compares its norm against a threshold τ; bad data is reported when the norm exceeds the threshold:

r = z − Hx̂ (3)

‖r‖ > τ (4)

2.1.3. False Data Injection Attack Mechanism

In an FDIA against the power grid, with knowledge of the physical structure of the power system (H), the attacker is able to manipulate the state estimation by altering the sensor readings, injecting a nonzero attack vector a such that the state measurement vector z is replaced by a compromised false data measurement vector zbad, where zbad = z + a (Mohammadpourfard et al., 2018).

zbad = z + a (5)

a = Hc (6)

rbad = zbad − Hx̂bad = z + a − H(x̂ + c) = r (7)

The result shows that if the attacker has access to the Jacobian matrix H and selects the attack vector a as described above, the bad data detection method will no longer be effective, because the residual remains the same when the power system is under attack. Although in practice the attacker does not always have complete knowledge of the power grid structure, research shows that, with limited information, the attacker is still able to launch an FDIA by applying a probability distribution to the unknown measurements (Abur & Exposito, 2004).
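The residual invariance of equation (7) can be demonstrated numerically. H, the state, and the attacker's chosen vector c below are toy values; the point is only that an attack vector of the form a = Hc leaves the least squares residual unchanged.

```python
import numpy as np

# Toy demonstration that a = Hc is invisible to residual-based BDD (eqs. 5-7).
H = np.array([[1.0, 0.0],
              [1.0, 1.0],
              [0.0, 2.0]])
rng = np.random.default_rng(1)
z = H @ np.array([1.0, 0.5]) + rng.normal(0.0, 0.1, size=3)

def residual_norm(z, H):
    # least squares estimate x_hat, then residual r = z - H x_hat (eq. 3)
    x_hat, *_ = np.linalg.lstsq(H, z, rcond=None)
    return np.linalg.norm(z - H @ x_hat)

c = np.array([0.3, -0.2])      # attacker's chosen shift of the state estimate
z_bad = z + H @ c              # eqs. (5)-(6): z_bad = z + a with a = Hc
# Both residuals are identical, so a threshold test on ||r|| cannot see the attack:
print(residual_norm(z, H), residual_norm(z_bad, H))
```
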

2.2. Cyberattack Simulation Models

This part reviews previous work discussed in the literature on models for simulating cyberattacks, from the perspectives of mechanisms, procedures and


system concerning impact analysis of cyberattacks on power systems, cyberattack identification, etc. Additionally, Mohammadpourfard et al. (2017) have proposed a statistical unsupervised method that adds several statistical measures, such as mean, variance and skewness, to quantify the features of the power system and its state vectors, in order to distinguish the data and patterns between the normal scenario and the attacked scenario. Furthermore, another method, called mean time-to-compromise (MTTC), has been used to evaluate the frequency of successful cyberattacks against a wind farm system and estimate the time interval between attacks (Zhang, Wang, Xiang, & Ten, 2015). Apart from simulating the probabilities and procedures of a cyber-attack, a game-theoretic study has been conducted to simulate the interaction between attack and defense (Xiang & Wang, 2017). According to Wu et al. (2019), simulating the probabilities and the cyber-physical relationship is the foundation for further reliability analysis. For example, the Monte Carlo simulation (MCS) method has been used to perform reliability analysis of the power system (Zhang, Xiang, & Wang, 2017).

2.3. Cyberattack Detection Methods

Yang et al. (2017) have divided methods to deal with attacks into two main categories: detection-based methods and active protection-based methods. They both have their


This thesis focuses on developing a detection-based method, so that cyberattacks can be detected once the incidents happen. Previous studies have proposed several types of methods to detect FDIA, such as a novel trust-based method (Xie et al., 2019), graph theory, classification algorithms, and statistical threshold testing. Mohammadpourfard et al. (2017) have provided a detailed review of the properties of these methods, as well as a comparison between them. However, almost all of this work is based on the analysis of the power grid, without considering the impact of incorporating renewable energy resources into the existing power system (Mohammadpourfard et al., 2018). To fill the gap, Mohammadpourfard et al. (2018) have proposed an unsupervised anomaly detection algorithm, with which they first analyze the historical data and filter out the suspicious state vectors that deviate from the normal trend, then use three different outlier detection algorithms, FCM, IQR and MAD, to determine the attacked states based on the differences between suspicious state vectors and normal state vectors. FCM (fuzzy c-means) is a clustering method which divides similar data points into different categories and detects outliers if the data points do not belong to any pre-defined clusters; IQR stands for Interquartile Range


3. METHODOLOGY

The methodology section starts by introducing the simulation tools used in this thesis. Next, the basic model is presented to describe the method of setting up the simulation environment. Last, the method of simulating the false data injection attack, as well as the method for detecting it, is described in detail.

3.1. Simulation Tools

In this research, Python will be used due to its extensive and powerful libraries. A few general libraries can be used for mathematical computations and visualization. For example, Numpy allows the users to manipulate multi-dimensional arrays more efficiently; Scipy contains mathematical functions and algorithms; Matplotlib can visualize the simulation results. (Santhanam, 2013) Apart from the libraries mentioned above, a few other libraries specifically designed for wind farm power systems can also be found. In this thesis, Windpowerlib library is selected to build the model and run simulations, as it provides the user with a set of data, functions and classes to initiate a wind farm and calculate the power output of wind turbines, etc.

3.2. Basic Model Description

The basic model sets up a simulation environment by obtaining and modifying the meteorological data; initializing the wind farm and wind turbines and providing the power output of each turbine with and without the wake effect. The objective of basic model is to simulate the FDIA and the proposed detection method and use the


3.2.1. Data Source

Meteorological data was obtained from Open Energy Database via Windpowerlib package in Python. The dataset contains several variables as input parameters to calculate the power output of wind turbines (see Table 3.1). All variables are recorded for every hour. The dataset used for simulation covers the period of 2010, and an example of the dataset can be seen in the Table 3.2.

Variables Hub Height (m)

Wind speed (m/s) 10

Wind speed (m/s) 80

Air temperature (Kelvin) 2

Air temperature (Kelvin) 10

Air pressure (Pa) 0

Surface roughness length (m) 0

Table 3.1 - Meteorological data variables


3.2.2. Wind Farm and Wind Turbines Initialization

To initialize the wind farm, a matrix of size 10 x 10 has been constructed to represent the layout of the wind farm. During the simulation process, the number of turbines and the layout of the wind turbines can be determined by manually adjusting the matrix: for each element of the matrix, a value of 0 indicates no turbine, whereas a value of 1 indicates one turbine at that position.
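Such a layout matrix can be sketched with NumPy as below; the particular 3 x 3 cluster placed here is arbitrary and only for illustration.

```python
import numpy as np

# 10 x 10 layout matrix: 1 marks a turbine position, 0 an empty cell.
layout = np.zeros((10, 10), dtype=int)
layout[2:5, 2:5] = 1                         # an arbitrary 3 x 3 cluster of turbines

n_turbines = int(layout.sum())               # number of turbines in the farm
positions = list(zip(*np.nonzero(layout)))   # (row, col) of each turbine
print(n_turbines, positions[0])
```
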

To initialize the wind turbines, the Windpowerlib package provides a large set of wind turbines with various attributes such as turbine type, hub height, power curve, etc. In order to simplify the simulation process, the entire wind farm contains only one type of turbine: Enercon E-126. According to the Open Energy Database, this type of turbine has a hub height of 135 meters and a maximum capacity of 4.2 MW.

3.2.3. Data Modification

Once the wind turbine type has been determined, wind turbine parameters, together with other meteorological data, will be used to calculate the power output for each turbine. With the same inputs, power outputs of all turbines will be identical to each other, which makes the detection procedure of false data injection attack easy. In order to simulate the detection method in a more realistic environment with certain level of variation, background noises should be added to the meteorological data, so that power outputs of all turbines can be slightly different from each other.

To achieve this, the following steps are used to generate the noise data:

Step 1: Variables selection. After analyzing the meteorological dataset provided by


Step 2: Noise generation. Considering that the values of each variable differ from each other and have their own characteristics, it is sensible to generate different noises depending on the variable, so that all data points after the modification remain within a realistic range. To describe the noise level, the SNR (signal-to-noise ratio) is introduced to measure the ratio of the system power level to the system noise level (Jiang et al., 2017). The SNR is defined as follows, and is often expressed in decibels:

SNR = 10 * log10(System Power Level / System Noise Level) (8)

In this thesis, in order to investigate the robustness of the proposed detection method to changes in its parameters, the system noise level is assumed to be known by the wind farm system operators and not to fluctuate drastically over time, so that a stable environment is provided for running different simulations. Generally, a signal with an SNR value of 20 dB or more is recommended (Jiang et al., 2017), which means that the system noise level should be equal to or less than 1% of the system power level. In order to evaluate the performance of the detection method under the maximum level of noise, SNR = 20 dB will be applied to the background noise for each variable.

It is assumed that the noise level of each input variable follows a normal distribution. The mean of the system noise level for each input variable is estimated as 1% of the average value of that variable's sensor readings during 2010, and the standard deviation is estimated as 10% of this mean.
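Steps 2 and 3 can be sketched as follows. The wind speed series is a made-up stand-in for one column of the 2010 meteorological data, and the 1% / 10% figures follow the assumptions stated above.

```python
import numpy as np

# Per-variable Gaussian background noise at SNR = 20 dB (Steps 2-3).
rng = np.random.default_rng(42)
wind_speed = rng.uniform(3.0, 12.0, size=8760)    # toy hourly readings for one year

snr_db = 20.0
noise_fraction = 10 ** (-snr_db / 10)             # 20 dB -> noise = 1 % of power level
noise_mean = noise_fraction * wind_speed.mean()   # 1 % of the yearly average reading
noise_std = 0.10 * noise_mean                     # std estimated as 10 % of the mean

noise = rng.normal(noise_mean, noise_std, size=wind_speed.size)
noisy_wind_speed = wind_speed + noise             # Step 3: add noise to each data point
```
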

Step 3: Add noise to original meteorological dataset. Once the mean and standard deviation are determined, noise will be randomly generated and added to each data point throughout the year 2010 for each corresponding variable.

3.2.4. Power Output Calculation

Given the modified meteorological data, and turbine type, Windpowerlib package will generate a series of power output readings of each turbine distributed at one-hour intervals with one reading at each time point for the whole year.

3.2.5. Wake Loss Subtraction

Wake effect describes the changes in wind speed caused by the interaction between different wind turbines given a specific wind farm layout (Herp, Poulsen, & Greiner, 2015). In order to make the simulation environment more realistic, a simplified wake model has been built that follows several rules. To better describe the rules, an example of a wind farm layout with size 5 x 5 is illustrated in Figure 3.1 below, where each gray square represents a wind turbine; the wind turbine in area A stands for the central wind turbine; the wind turbines in area B stand for the first layer of wind turbines surrounding the central wind turbine; and the wind turbines in area C indicate the second layer of wind turbines surrounding the central wind turbine.


The rules for the wake loss deduction are as follows:

a. For any turbine in the central position, only the two layers of wind turbines around it are considered to cause a wind power reduction of the central turbine due to the wake effect. Wind turbines located outside this range are not considered to have a wake effect on the central wind turbine.

b. Each wind turbine located in the first layer around the central turbine causes the central turbine to lose 6% of its wind power.

c. Each wind turbine located in the second layer around the central turbine causes the central turbine to lose 3% of its wind power.
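Rules a-c can be sketched for a single turbine as below. The thesis does not state how the per-neighbour losses combine, so simply summing them is an assumption of this sketch.

```python
import numpy as np

# Wake loss for one turbine: 6 % per first-layer neighbour, 3 % per
# second-layer neighbour, summed over all neighbours within two layers.
def wake_loss_fraction(layout, row, col):
    loss = 0.0
    for r in range(row - 2, row + 3):
        for c in range(col - 2, col + 3):
            if (r, c) == (row, col):
                continue
            if 0 <= r < layout.shape[0] and 0 <= c < layout.shape[1] and layout[r, c]:
                ring = max(abs(r - row), abs(c - col))   # 1 = first layer, 2 = second
                loss += 0.06 if ring == 1 else 0.03
    return loss

layout = np.ones((5, 5), dtype=int)   # the fully occupied 5 x 5 example of Figure 3.1
print(wake_loss_fraction(layout, 2, 2))
```

For a fully surrounded central turbine this additive reading gives 8 × 6% + 16 × 3% = 96% loss, which illustrates why only two layers are counted.
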

At this stage, the basic model has set up a simulation environment to implement the attack and the proposed detection method.

3.3. Simulating False Data Injection Attack

3.3.1. False Data Generation

In this part of the model, a non-zero vector c = [c1, c2, …, ci]^T will be generated to represent a series of false data, where i denotes the duration of the attack and is initially set to 20 hours; each element ci of the vector indicates one false data point, which is assumed to follow the normal distribution. The mean and standard deviation of each false data point are defined as follows:

3.3.2. False Data Mean


determined by the ratio of the false data level to the system noise level: the higher the ratio, the more severe the attack, and vice versa. As previously mentioned, in this thesis the noise level is set to 1% of the system power level. In order to differentiate the false data level from the noise level, the initial value of the false data mean will be set higher than the system noise, at 10% of the system power level. The yearly average of the power output before adding the noises is used as the system power level.

3.3.3. False Data Standard Deviation

The coefficient of variation, defined as the ratio of the standard deviation to the mean, will be used to determine the standard deviation of ci. In this thesis, it is assumed that the hacker tries to keep the variation of the false data relatively low to reduce the chance of being detected. Therefore, the initial value for the standard deviation of ci is set to 10% of the mean.

To better illustrate the statistical characteristics of the false data, two parameters are defined: the false data mean factor (fm) describes the ratio of the false data level to the system power level, and the false data standard deviation factor (fs) represents the coefficient of variation.
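Sections 3.3.1-3.3.3 can be sketched as follows; system_power_level is a hypothetical stand-in for the yearly average power output before noise was added.

```python
import numpy as np

# False data vector c for a 20-hour attack: mean = fm * system power level,
# standard deviation = fs * mean, each element normally distributed.
rng = np.random.default_rng(7)
system_power_level = 2.0e6        # W, hypothetical yearly average power output

fm, fs = 0.10, 0.10               # false data mean / standard deviation factors
duration = 20                     # attack duration i, in hours

c_mean = fm * system_power_level  # 10 % of the system power level
c_std = fs * c_mean               # coefficient of variation of 10 %
c = rng.normal(c_mean, c_std, size=duration)   # false data vector c
```
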

3.3.4. False Data Injection


3.4. Proposed Detection Method

The proposed detection method aims to constantly monitor the differences between the measured power output values and the expected power output values of each turbine, and to use statistical techniques to detect outliers and identify them as an attack when certain criteria are met. In this section, these statistical techniques are briefly introduced first; afterwards, the detection method is described in detail.

3.4.1. Employed Statistical Techniques

a. Simple Moving Average (SMA): In statistics, the moving average is a widely used tool to smooth out time-series data by calculating its arithmetic mean over a certain period of time. The time frame used to calculate the average can vary depending on the situation: the shorter the smoothing period, the more sensitive it will be to upcoming changes in the dataset; in contrast, the longer the smoothing period, the less sensitive it becomes to changes (Hayes, 2019). This method can be described by the formula below:

SMA = (x1 + x2 + … + xn) / n (9)

where xi is the data value in time period i, and n is the total number of smoothing periods.
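A minimal sketch of equation (9) over a toy series:

```python
import numpy as np

# Simple moving average with window n: the mean of the n most recent values
# at each time point (eq. 9).
def sma(x, n):
    return np.convolve(x, np.ones(n) / n, mode="valid")

x = np.array([1.0, 2.0, 3.0, 4.0, 5.0, 6.0])
print(sma(x, 3))
```
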


b. Interquartile Range (IQR) rule: Q1 represents the middle point between the lowest value and the median; Q3 represents the middle point between the median and the highest value. The difference between Q3 and Q1 is the IQR value and can be used to measure the variability of the given dataset. To determine the upper and lower limits, this method defines the interval as follows:

[Q1 − k ∗ IQR, Q3 + k ∗ IQR] (10)

where k is usually set to 1.5 or 3 in practice (Mohammadpourfard et al., 2018). In this thesis, k is noted as the IQR interval parameter, and different values of k will be tested to search for a value that makes the detection method perform best under the given experiment settings.
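Equation (10) as a small sketch; the sample values are toy numbers with one injected outlier:

```python
import numpy as np

# IQR rule: values outside [Q1 - k*IQR, Q3 + k*IQR] are flagged as outliers.
def iqr_outliers(x, k=1.5):
    q1, q3 = np.percentile(x, [25, 75])
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr
    return (x < lower) | (x > upper)

x = np.array([1.0, 1.1, 0.9, 1.0, 1.2, 8.0])   # 8.0 is an injected outlier
print(iqr_outliers(x))                          # only the last element is flagged
```
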

3.4.2. Proposed Detection Method

Step 1: Normalization. Due to the wake effect, the power output of each turbine can deviate considerably from the others in the same time period, which makes FDIA detection difficult, because the detection method cannot determine whether a deviation in the power output readings comes from the wake effect or from the attack. Therefore, in order to make the power outputs of the turbines in the same time period more comparable, the wake effect needs to be excluded, lowering the probability of identifying an "attack" when no attack is present. According to the wake effect subtraction rules defined in the previous section, the corresponding wake loss for each turbine is added back, so that the power outputs of all turbines in each time period are normalized and fall onto the same scale, with only slight differences due to the added noises.

Step 2: Rolling check. The proposed detection method will monitor the system and


Step 3: Comparing Actual Values with Expected Values. When false data are injected into the system, the actual power output readings of the attacked turbines will be altered and are expected to differ from the normal readings before the attack occurs. The proposed detection method aims to identify the attack when the deviation between actual and expected power output values is high. In this step, the normalized power output with noises after the attack, obtained from Step 1, is taken as the actual value. To estimate the expected value, it is assumed that the wind farm system operator has access to the sensor readings for all input variables and can use them to obtain the expected power outputs. In this thesis, the expected power output values for all turbines are calculated by taking the average of the power output with added noises of all turbines before the attack occurs. During each hourly check, a selected sample of the actual power output values for each turbine is compared with the sample of expected power output values to obtain the sample of differences: the absolute value of the difference between the actual and expected values for each turbine. The absolute value is suitable for the proposed detection method, since it measures the amount of deviation between the actual and expected power output values.

Step 4: Smoothing the Sample of Differences. Given the system noise level, the


consecutive sample of differences has lower variability, and hence can be more effective when identifying the outliers.

Step 5: Filtering the Outliers. The algorithm for outlier detection is presented in Figure 3.2 below, where a = [a(i−n), a(i−n+1), …, a(i−1)]^T (i ≥ n) is a vector containing the averages of the sample of differences over the last n time periods (noted as the IQR length in the rest of the thesis); i denotes the current time point; and ai indicates the average of the sample of differences at time period i. The proposed detection method includes four different levels of attack alerts, depending on the number of consecutive time periods in which a suspicious ai is observed: at Level 1, an attack alert will be reported when any


Algorithm: Detection of False Data Injection Attack

I. Initialization: Initialize the IQR interval parameter k, the IQR length n and the level of attack alerts.

II. Repeat the procedure for detecting the false data injection attack:
    a. Calculate ai;
    b. Apply the IQR rule;
    c. Identify and classify an attack:
        if ai falls beyond the interval then
            flag i as a suspicious time period, as it causes a noticeable deviation between the actual and expected power output value
            if ai satisfies the criteria of the given level of attack alerts then
                classify i as the time period confirming an attack
                exclude ai and the power output at time period i from the rolling smoothing process, to reduce the possibility of influencing the further detection process;
            update i ← i + 1, and continue with the FDIA detection procedure
        else if ai falls within the interval then
            decide that there is no attack;
            update the vector a by removing a(i−n) and adding ai to the vector;
            update i ← i + 1, and continue with the FDIA detection procedure

III. Until the proposed detection method finishes checking all the data points

IV. Terminate the attack detection process

V. Analyse the results, and distinguish between hit, false alarm, and miss
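The loop above can be sketched as runnable code. It assumes, following step II.c, that the rolling window is frozen during suspicious periods; the values of n, k, the alert level, and the toy series are illustrative choices, not the thesis's experimental settings.

```python
import numpy as np

# Sketch of the detection loop: IQR rule over a rolling window of the last n
# averages of differences; level L confirms an attack after L consecutive
# suspicious periods.
def detect(avg_diffs, n=8, k=5.0, level=2):
    window = list(avg_diffs[:n])           # vector a of the last n averages
    consecutive, attacks = 0, []
    for i in range(n, len(avg_diffs)):
        a_i = avg_diffs[i]
        q1, q3 = np.percentile(window, [25, 75])
        iqr = q3 - q1
        if a_i < q1 - k * iqr or a_i > q3 + k * iqr:
            consecutive += 1               # suspicious period; window stays frozen
            if consecutive >= level:
                attacks.append(i)          # time period confirming an attack
        else:
            consecutive = 0
            window = window[1:] + [a_i]    # slide: drop a_{i-n}, add a_i
    return attacks

normal = [1.0, 1.1, 0.9, 1.0, 1.05, 0.95, 1.0, 1.1]
attacked = normal + [1.0, 1.0, 5.0, 5.2, 5.1]   # deviation injected from index 10
print(detect(np.array(attacked)))
```

With level 2, the first out-of-interval period (index 10) is only flagged as suspicious, and the attack is confirmed from the second consecutive suspicious period onwards.
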


4. PERFORMANCE EVALUATIONS

In this section, we evaluate the performance of the proposed method for detecting the false data injection attack. First, the main performance measurements are introduced; second, two experiments under different settings are carried out to evaluate the effectiveness of the proposed detection method.

4.1. Performance Measurements

A reliable detection method should be able to capture a false data injection attack as soon as possible. Considering all possible scenarios (whether an attack occurred or not, and whether the detection method identifies an attack or not), there are four possible outcomes: 1. True Positive (hit): an attack event occurred, and the detection method identifies it; 2. False Negative (miss): an attack event occurred, but the detection method fails to identify it; 3. False Positive (false alarm): an attack event did not occur, but the detection method identifies an attack; 4. True Negative (correct rejection): an attack event did not occur, and the detection method identifies no attack (Jiang, Qian, & Yi, 2017).

In this thesis, we mainly consider three of these outcomes: hit, miss, and false alarm. Using these measurements, we can then define the probability of detection as the ratio of hits to the sum of hit and miss cases.
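As a worked example of this definition, using the hit and miss counts for n = 8, k = 10 under alert level 1 from Table 4.2:

```python
# Probability of detection = hit / (hit + miss), illustrated with the
# Table 4.2 entries for n = 8, k = 10, alert level 1 (hit = 13.2, miss = 6.8).
hit, miss = 13.2, 6.8
probability_of_detection = hit / (hit + miss)
print(round(probability_of_detection, 2))   # 0.66
```
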


4.2. Experiments

To verify the performance of the proposed detection method, two experiments are designed and simulated. Experiment 1 investigates the impact of the IQR rule on the effectiveness of the detection method by simulating different value combinations of the IQR interval parameter k and the IQR length n. The objective of Experiment 2 is to test the robustness of the proposed detection method to the severity level of the attack, which is measured by different combinations of false data mean and false data standard deviation under different system settings.

In this section, each experiment will be described in detail following the structure below:

a. Initial system setting of the basic model
b. Experiment plan
c. Numerical results
d. Performance analysis of detection method

4.2.1. Experiment 1- Impact of IQR rule parameters

4.2.1.1 Initial System Setting of the Basic Model (see Table 4.1)

System Setting Values

SNR 20 dB

False data mean factor 10%

False data standard deviation factor 10%

Frequency of FDIA attack 1

Duration of FDIA attack (hours) 20
Number of turbines attacked 1

Total number of turbines 10


4.2.1.2. Experiment Plan

To investigate the impact of the IQR rule parameters on the performance of the proposed detection method, this thesis considers different combinations of the two parameters, the IQR length parameter n and the IQR interval parameter k, by setting the value of n to 4, 8 or 12, and the value of k to 5, 10 or 15. For each combination of parameter settings, the simulation model is run 10 times, and the average of each performance measurement is taken as the simulation result.

4.2.1.3. Numerical Results

The simulation results for experiment 1 are shown in the Table 4.2:

IQR length parameter n                       4                        8                       12
IQR interval parameter k              5     10     15        5     10     15        5     10     15
Total number of alerts           8214.9 1347.3  349.7   2722.8  189.7   60.1   1267.5   72.9   29.1
Hit (1)                            15.8   14.7   13.3     14.3   13.2   14.0     13.8   15.0   15.7
False alarm (1)                  8199.1 1332.6  336.4   2708.5  176.5   46.1   1253.7   57.9   13.4
Miss (1)                            4.2    5.3    6.7      5.7    6.8    6.0      6.2    5.0    4.3
Total time to detect attack (1)     7.5   14.3   12.3      9.8    1.0    1.0      8.5    1.1    1.0
Hit (2)                            13.3   13.3   13.1     13.4   13.2   13.5     13.1   13.5   13.9
False alarm (2)                   969.3   70.5    7.5    456.8   33.7    7.2    246.8   12.3    2.3
Miss (2)                            6.7    6.7    6.9      6.6    6.8    6.5      6.9    6.5    6.1
Total time to detect attack (2)    13.0   15.3    9.0     10.8    1.0    2.0     17.0    2.0    1.9
Hit (3)                            11.0   10.1    9.7     11.4   10.9   11.0     11.1   11.5   11.9
False alarm (3)                   268.0    8.1    0.1    118.9    2.8    0.4    114.1    3.5    1.3
Miss (3)                            9.0    9.9   10.3      8.6    9.1    9.0      8.9    8.5    8.1
Total time to detect attack (3)    12.4    9.0   10.2     11.8   10.3   11.4     18.0    2.4    2.9
Hit (4)                             2.9    3.1    2.5      1.8    2.7    3.0      3.1    3.6    3.1
False alarm (4)                    87.6    4.3    0.0     27.0    0.2    0.0     31.2    1.2    0.0
Miss (4)                           17.1   16.9   17.5     18.2   17.3   17.0     16.9   16.4   16.9
Total time to detect attack (4)    13.2   10.0   11.5     14.0   13.8   14.6     19.0    3.0    1.0

(numbers in parentheses denote the alert level)

Table 4.2 – Simulation Results for Experiment 1


4.2.1.4. Performance Analysis of Detection Method

Simulation results presented in Table 4.2 give an overview of the performance of the proposed detection method. To further investigate how different combinations of the two IQR parameters affect the effectiveness of the method, four performance measurements are used in the analysis: total number of alerts, hit, false alarm, and total time to detect an attack.
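The four measurements can be tallied per simulation run from two boolean series: one marking the hours where the method raised an alert, and one marking the hours under attack. This per-hour bookkeeping is an assumption about how the counts in Table 4.2 are produced, consistent with hits and misses summing to the attack duration.

```python
def score_alerts(alerts, attacks):
    """Classify each hourly data point into hit / false alarm / miss.

    `alerts` and `attacks` are equal-length boolean sequences:
    alerts[t] is True when the detection method flags hour t,
    attacks[t] is True when false data was injected at hour t.
    """
    hits = sum(a and b for a, b in zip(alerts, attacks))          # alert during an attack
    false_alarms = sum(a and not b for a, b in zip(alerts, attacks))  # alert, no attack
    misses = sum(b and not a for a, b in zip(alerts, attacks))    # attack, no alert
    return {"hits": hits, "false_alarms": false_alarms,
            "misses": misses, "total_alerts": hits + false_alarms}
```

For a 20-hour attack, `hits + misses` equals 20, matching the Hit/Miss rows of Table 4.2.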

1) Total number of alerts

Total number of alerts indicates the average number of data points identified as suspicious in each simulation run (see Graph 4.1). The graph shows that for each value of the IQR length parameter n, the total number of alerts decreases as the IQR interval parameter k increases from 5 to 15, and it changes more sharply as k increases from 5 to 10 than from 10 to 15. It can also be observed that the proposed detection method performs badly when n is 4, for any value of k, due to a large number of false alarms. This shows that the sample size of the IQR rule can have a high impact on detection performance: the smaller the value of n, the more likely the detection method is to produce false alarms.

Graph 4.1 – Total Number of Alerts – Experiment 1 (x-axis: IQR interval parameter k; y-axis: total number of alerts)
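The IQR rule itself is specified in section 3.4; as a minimal sketch of how the two parameters enter, a reading can be flagged when it falls outside a fence built from the previous n readings and widened by the interval parameter k. The exact quartile method, and the thesis's exclusion of flagged points from the rolling sample, are omitted here.

```python
import statistics

def iqr_suspicious(window, x, k):
    """Flag reading x as suspicious if it falls outside the IQR fence
    built from `window`, the previous n readings (the IQR length).
    This is a sketch of the rule as described, not the thesis's exact code.
    """
    q1, _, q3 = statistics.quantiles(window, n=4)  # quartiles of the sample
    iqr = q3 - q1
    lower, upper = q1 - k * iqr, q3 + k * iqr      # k is the IQR interval parameter
    return x < lower or x > upper
```

A smaller window makes the quartile estimates noisier and the fence less stable, which is consistent with the many false alarms observed at n = 4.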


2) Hit

Total number of hits demonstrates the average number of cases per simulation run in which the detection method successfully identified an attack while the attack was occurring. Graph 4.2 below shows the total number of hits for all combinations of the IQR parameters n and k under the different alert levels. In general, under alert levels 1, 2, and 3, the probability of detection decreases incrementally as the alert level goes up; when the alert level reaches 4, the total number of hits drops drastically to around 3 in every scenario.

When experimenting with the value of k (see the three graphs on the left), it is observed that as the value of k increases, the probability of detection shows a different trend for each given n. For example, when n equals 4 (see the top-left graph), the probability of detection tends to decrease slightly at every alert level except level 4; when n equals 8 (see the middle-left graph), the probability of detection is highest when k equals 5 and lowest when k equals 10; and when n equals 12 (see the bottom-left graph), the probability of detection increases slightly as k increases from 5 to 15 under alert levels 1, 2, and 3. To sum up, the proposed detection method performs best when the combinations of n and k are (4, 5), (8, 5), and (12, 15), as these have the highest detection rates.

When experimenting with the value of n (see, for example, the bottom-right graph), the scenario where n equals 12 results in the highest detection probability.

Graph 4.2 – Total Number of Hits – Experiment 1

3) False alarm

Graph 4.3 describes the number of false alarms for all combinations of the IQR parameters n and k under the different alert levels. It should be noted that in each graph, the primary vertical axis (on the left side) is used for the level 1 alert, and the secondary vertical axis (on the right side) is used for alert levels 2, 3, and 4. According to Graph 4.3, a few general observations can be made. First, more false alarms are reported as the alert level decreases, since a higher-level alert has stricter criteria for confirming a suspicious data point as an attack. Second, when the value of n is fixed, the number of false alarms decreases as k increases. Third, when the value of k is fixed, the number of false alarms decreases as n increases.


4) Total time to detect attack

Total time to detect attack illustrates the average number of hours per simulation run that it takes the proposed detection method to successfully identify an attack that is present. Graph 4.4 below shows the total time to detect an attack for all combinations of the IQR parameters n and k under the different alert levels.

When experimenting with different values of k (see the three graphs on the left), it is observed that the distribution of the total time to detect an attack varies between values of n. When n equals 4 (see the top-left graph), the total time to detect an attack has a low variance as k changes from 5 to 15. However, when n equals 8 (see the middle-left graph) or 12 (see the bottom-left graph), the total time to detect an attack varies more as k changes. For example, when n equals 8, for any given value of k, it takes the detection method a long time to confirm an attack if the alert level is high: an average of 11 hours under a level 3 alert, and an average of 14 hours under a level 4 alert. This makes sense, since a higher alert level requires a longer cycle time for an attack to be confirmed. But when n changes from 8 to 12, the detection time drops to around 3 hours under alert level 3 and around 2 hours under alert level 4. This also indicates that under the strictest setting, where n is 12 and k is 15, the proposed detection method sometimes fails to detect any attack, as the criterion is too strict.
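One plausible reading of the alert levels, consistent with the observation that a higher level needs a longer confirmation cycle, is that a level-L alert requires L consecutive suspicious readings. The escalation rule below is an assumption for illustration, not the thesis's exact definition.

```python
def alert_levels(suspicious):
    """Sketch of a multi-level alert scheme (an assumption): a level-L
    alert is raised after L consecutive suspicious readings, so higher
    levels are stricter and take longer to confirm.
    """
    levels = []
    streak = 0
    for flag in suspicious:
        streak = streak + 1 if flag else 0  # reset the streak on a clean reading
        levels.append(min(streak, 4))       # alert level capped at 4
    return levels
```

Under this rule a single spurious outlier can trigger a level 1 alert, but only a sustained anomaly reaches level 4, which matches the observed trade-off between false alarms and detection time.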

When k has a lower value, more false alarms are generated, and more data points are excluded from the detection process. When the rolling process reaches the time point where false data is injected, the detection method might end up comparing the attacked power output with sample data from several periods earlier. It is possible that the historical sample data resemble the attacked power output readings due to different weather conditions. Therefore, the time to identify and confirm an attack tends to be longer when k has a lower value.


4.2.2. Experiment 2: Impact of severity level of FDIA

4.2.2.1. Initial system setting of the basic model

According to the results obtained from experiment 1, the proposed detection method has an overall higher performance when the IQR length equals 12, since this results in fewer false alarms and shorter detection times. Therefore, in this experiment, we will test the detection method under three different system environments by setting the value of the IQR length parameter n to 12 and changing the value of the IQR interval parameter k from 5 to 10 to 15. An overview of the three system settings is presented in Table 4.3, Table 4.4, and Table 4.5.

System Setting                     Value
SNR                                20 dB
IQR length parameter n             12
IQR interval parameter k           5
Frequency of FDIA attack           1
Duration of FDIA attack (hours)    5
Number of turbines attacked        1
Total number of turbines           3

Table 4.3 – System Setting 1 for Experiment 2

System Setting                     Value
SNR                                20 dB
IQR length parameter n             12
IQR interval parameter k           10
Frequency of FDIA attack           1
Duration of FDIA attack (hours)    5
Number of turbines attacked        1
Total number of turbines           3

Table 4.4 – System Setting 2 for Experiment 2

System Setting                     Value
SNR                                20 dB
IQR length parameter n             12
IQR interval parameter k           15
Frequency of FDIA attack           1
Duration of FDIA attack (hours)    5
Number of turbines attacked        1
Total number of turbines           3

Table 4.5 – System Setting 3 for Experiment 2

4.2.2.2. Experiment plan

This experiment aims to investigate the robustness of the proposed detection method to the severity level of the FDIA attack, which is characterized by the mean and the standard deviation of the injected false data. This thesis sets the value of the false data mean factor fm to 20%, 10%, 5%, 1%, or 0.5%, and the value of the false data standard deviation factor fs to 10%, 50%, or 100%, and experiments with different combinations of the two parameters. For each combination of parameter settings, we run the simulation model 10 times and take the average of each performance measurement as the simulation result.
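How the two factors shape the injected false data (section 3.3) can be sketched as below. The normal distribution, and taking fs relative to the false data mean, are assumptions for illustration; the thesis's exact construction is given in section 3.3.

```python
import random

def inject_false_data(power, fm, fs, rng=random):
    """Add false data to a true power reading (a sketch, assuming the
    injected offset is normally distributed with mean fm * power, the
    false data mean factor, and standard deviation fs times that mean,
    the false data standard deviation factor).
    """
    mean = fm * power
    offset = rng.gauss(mean, fs * mean)
    return power + offset
```

With fm = 10% and fs = 10%, a 100 kW reading would on average be shifted to about 110 kW, with most injected offsets between 7 and 13 kW.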

4.2.2.3. Numerical results

The simulation results for experiment 2 are shown in Table 4.6, Table 4.7, and Table 4.8.

Table 4.6 – Numerical Results for Experiment 2 – System Setting 1

False data mean factor fm                    20%                  10%                   5%                   1%                  0.5%
False data std. deviation factor fs    10%   50%  100%    10%   50%  100%    10%   50%  100%    10%   50%  100%    10%   50%  100%
Total number of alerts               367.8 400.0 405.8  397.3 405.1 411.1  382.3 400.8 405.1  394.9 363.9 376.7  375.7 374.2 369.1
Hit (1)                                0.4   1.8   1.4    0.7   2.9   2.1    1.4   1.0   2.9    0.9   0.8   0.5    1.0   0.0   0.2
False alarm (1)                      367.4 398.2 404.4  396.6 402.2 409.0  380.9 399.8 402.2  394.0 363.1 376.2  374.7 374.2 368.9
Total time to detect attack (1)        4.0   1.8   0.8    0.3   0.5   2.6    0.4   1.3   0.5    1.3   0.4   1.8    3.0   N/A   3.5
Hit (2)                                0.1   0.4   0.2    0.1   0.2   0.5    0.3   0.1   0.2    0.3   0.2   0.1    0.4   0.0   0.1
False alarm (2)                       71.7  78.9  80.4   79.1  77.9  78.4   75.1  78.3  77.9   75.9  71.9  76.9   73.3  75.2  74.9
Total time to detect attack (2)        4.3   2.4   1.6    0.3   1.0   3.4    0.7   1.0   1.0    1.5   0.6   4.5    3.2   N/A   3.8
Hit (3)                                0.1   0.3   0.2    0.1   0.2   0.4    0.4   0.2   0.2    0.2   0.2   0.1    0.2   0.0   0.0
False alarm (3)                       32.1  37.0  38.4   37.0  36.6  35.9   33.3  33.8  36.6   34.8  29.3  32.1   34.2  34.7  32.0
Total time to detect attack (3)        4.5   2.9   1.9    0.5   1.3   3.3    0.8   0.8   1.3    1.4   0.9   4.8    2.5   N/A   N/A
Hit (4)                                0.1   0.3   0.3    0.1   0.3   0.3    0.3   0.4   0.3    0.1   0.1   0.1    0.1   0.0   0.0
False alarm (4)                        8.9  10.9   9.7   10.2   8.4  10.0    8.6   8.5   8.4    8.6   7.0   7.5    8.0   7.3   8.3

(numbers in parentheses denote the alert level)

Table 4.7 – Numerical Results for Experiment 2 – System Setting 2

False data mean factor fm                  20%               10%                5%                1%               0.5%
False data std. deviation factor fs   10%  50% 100%    10%  50% 100%    10%  50% 100%    10%  50% 100%    10%  50% 100%
Total number of alerts               27.3 24.2 23.3   26.4 26.8 26.2   26.3 23.1 26.4   18.2 19.0 22.7   16.5 16.4 15.7
Hit (1)                               0.8  1.8  2.1    1.7  1.2  0.9    1.3  1.8  1.6    0.2  0.2  0.8    0.0  0.0  0.0
False alarm (1)                      26.5 22.4 21.2   24.7 25.6 25.3   25.0 21.3 24.8   18.0 18.8 21.9   16.5 16.4 15.7
Total time to detect attack (1)       1.0  1.0  1.0    1.0  1.1  1.0    1.0  1.3  1.0    1.5  2.5  1.8    N/A  N/A  N/A
Hit (2)                               0.1  0.3  0.5    0.4  0.3  0.1    0.3  0.5  0.5    0.0  0.0  0.3    0.0  0.0  0.0
False alarm (2)                       6.2  5.2  4.5    5.3  5.8  6.0    6.1  5.1  6.3    4.0  4.1  5.5    3.7  3.1  2.9
Total time to detect attack (2)       1.0  1.7  1.6    1.5  1.7  2.0    2.0  2.6  1.6    N/A  N/A  2.0    N/A  N/A  N/A
Hit (3)                               0.3  0.6  0.7    0.5  0.4  0.4    0.4  0.6  0.5    0.0  0.1  0.1    0.0  0.0  0.0
False alarm (3)                       2.4  1.9  2.0    2.0  2.3  2.1    2.2  2.0  2.2    1.1  0.6  1.3    0.7  0.7  0.3
Total time to detect attack (3)       1.3  1.8  2.1    2.2  2.0  1.5    2.5  2.5  2.6    N/A  1.0  3.0    N/A  N/A  N/A
Hit (4)                               0.4  0.7  0.6    0.6  0.4  0.3    0.2  0.1  0.3    0.0  0.0  0.1    0.0  0.0  0.0
False alarm (4)                       1.6  0.8  0.8    1.4  0.9  0.7    1.1  0.2  0.4    0.1  0.0  0.2    0.0  0.0  0.0
Total time to detect attack (4)       2.0  2.4  2.8    2.8  1.8  2.3    3.0  1.0  3.0    N/A  N/A  4.0    N/A  N/A  N/A

(numbers in parentheses denote the alert level)

Table 4.8 – Numerical Results for Experiment 2 – System Setting 3

False data mean factor fm                  20%               10%                5%                1%               0.5%
False data std. deviation factor fs   10%  50% 100%    10%  50% 100%    10%  50% 100%    10%  50% 100%    10%  50% 100%
Total number of alerts                8.3  9.3  8.6    8.3  8.1  8.2    6.9  7.4  9.0    3.1  3.8  4.1    3.1  2.5  2.3
Hit (1)                               3.0  3.3  2.3    2.3  2.8  1.9    2.4  2.5  2.9    0.1  1.1  0.1    0.0  0.0  0.0
False alarm (1)                       5.3  6.0  6.3    6.0  5.3  6.3    4.5  4.9  6.1    3.0  2.7  4.0    3.1  2.5  2.3
Total time to detect attack (1)       1.0  1.2  1.0    1.2  1.1  1.1    1.3  1.2  1.1    4.0  2.5  1.0    N/A  N/A  N/A
Hit (2)                               1.0  1.0  0.6    0.7  1.0  0.5    0.9  0.9  0.8    0.0  0.4  0.1    0.0  0.0  0.0
False alarm (2)                       1.3  1.4  1.6    1.3  1.5  1.5    1.3  1.4  1.5    0.4  0.6  0.9    0.6  0.7  0.3
Total time to detect attack (2)       1.8  2.0  1.7    1.6  1.9  2.0    2.2  2.1  1.9    N/A  3.5  1.0    N/A  N/A  N/A
Hit (3)                               1.0  1.0  0.9    0.8  0.9  0.7    0.6  0.8  0.9    0.0  0.1  0.0    0.0  0.0  0.0
False alarm (3)                       1.0  1.0  0.9    0.7  0.9  0.6    0.6  0.5  0.8    0.1  0.1  0.1    0.1  0.0  0.1
Total time to detect attack (3)       2.8  3.0  2.1    2.1  3.0  2.4    2.5  2.9  2.7    N/A  3.0  N/A    N/A  N/A  N/A
Hit (4)                               0.2  0.5  0.4    0.3  0.1  0.3    0.1  0.0  0.4    0.0  0.0  0.0    0.0  0.0  0.0
False alarm (4)                       0.1  0.4  0.4    0.2  0.0  0.2    0.0  0.0  0.2    0.0  0.0  0.0    0.0  0.0  0.0
Total time to detect attack (4)       4.0  3.8  3.3    3.0  5.0  3.0    4.0  N/A  3.8    N/A  N/A  N/A    N/A  N/A  N/A

(numbers in parentheses denote the alert level)

4.2.2.4. Performance analysis of detection method

Simulation results presented in Table 4.6, Table 4.7, and Table 4.8 give an overview of the performance of the proposed detection method under different attack scenarios for each system setting. To further investigate how different levels of attack affect the effectiveness of the method, four performance measurements are used in the analysis: total number of alerts, hit, false alarm, and total time to detect an attack.

1) False data standard deviation factor

As can be seen from the simulation results, within each system setting, changing the false data standard deviation factor from 10% to 50% and to 100% creates some randomness in the results, but does not affect the performance of the detection method in a significant way. The only performance measurement that seems to be influenced by the change in the false data standard deviation factor is the total time to detect an attack (see Graph 4.5, Graph 4.6, and Graph 4.7).

First, as can be seen from the graphs below, the probability of the proposed detection method failing to detect the attack increases when the severity level of the attack is low (with a false data mean factor smaller than or equal to 1%). It can be concluded that the standard deviation of the false data has limited impact on the performance of the detection method when the mean of the false data is small.

Second, there seems to be a clear pattern in Graph 4.5. It can be noted that under system setting 1, where the IQR interval parameter k is 5, when the false data mean factor equals 20%, it takes less time to detect an attack as the standard deviation of the false data increases; a similar tendency appears when the false data mean factor is reduced to 10% (see the bottom-left graph). This suggests that for a severe attack, when the threshold of the detection method is set to be less strict, the standard deviation of the false data has a higher impact on the detection time.

Graph 4.5 – Performance Measurement for System Setting 1 – Experiment 2


Graph 4.6 – Performance Measurement for System Setting 2 – Experiment 2


Graph 4.7 – Performance Measurement for System Setting 3 – Experiment 2


2) False data mean factor

The rest of the analysis will mainly focus on the impact of the false data mean. It is reasonable to assume that the hacker tries to keep the variation of the false data at a low level, so that it causes a constant disruption to the power system and has less chance of being detected, as it keeps the system stable. Therefore, only the cases where the false data standard deviation factor is 10% will be considered in the following analysis.

1) System setting 1

Graph 4.8 summarizes the four performance measurements of the detection method with the false data mean factor changing from 20% to 0.5%. Under system setting 1, where the value of k is 5, the detection method has a less strict threshold, and the previous experiment has shown that the number of false alarms is relatively high in this case. When experimenting with different false data mean factors, it can be observed that the total number of alerts fluctuates slightly between 360 and 400, and the number of false alarms at each alert level differs only slightly. This indicates that under a less strict threshold setting of the detection method, changing the false data mean factor from 20% to 0.5% has limited impact on performance in terms of the number of false alarms and the total number of alerts.

In general, the total number of hits is also quite resilient to the severity level of the attack. As can be seen from the top-left graph, the average total number of hits fluctuates within a small range as the false data mean factor changes.


Graph 4.8 – Performance Measurements for System Setting 1 – Experiment 2


2) System setting 2 & 3

Graph 4.9 and Graph 4.10 summarize the four performance measurements of the detection method with the false data mean factor changing from 20% to 0.5%, for system settings 2 and 3 respectively. Each performance measurement reacts in a similar way to the change in the severity level of the attack, so the two settings are discussed together.

First, when looking at the total number of hits, it starts to decrease once the false data mean factor is equal to or smaller than 1%. As can be seen from the top-left graph, the proposed detection method failed to detect any attack with a false data mean factor of 0.5%. As previously defined, the ratio of the system noise level to the power level is 1%, which indicates that when the severity level of the attack is below the level of system noise, the attack cannot be easily detected. However, this does not mean the detection method performs badly by failing to detect the attack, since in this case the severity level of the attack is too low to cause any disruption to the wind farm power system. A similar observation can be made for the attack detection time.
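The 1% noise-to-power ratio follows directly from the 20 dB SNR setting, since an SNR in decibels corresponds to a signal-to-noise power ratio of 10^(SNR/10):

```python
def noise_to_signal_ratio(snr_db):
    """Convert an SNR in decibels to a noise-to-signal power ratio."""
    return 10 ** (-snr_db / 10)

# With the experiment setting of SNR = 20 dB, noise power is 1% of signal
# power, so a false data mean factor of 0.5% sits below the noise floor,
# while 5% or more sits well above it.
ratio = noise_to_signal_ratio(20)
```

This is why attacks with a mean factor below 1% blend into the measurement noise and go largely undetected.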

Second, when looking at the total number of false alarms, it decreases slightly once the false data mean factor is equal to or smaller than 1%. As illustrated in experiment 1, the number of false alarms is so highly responsive to changes in the detection method thresholds that the small variation in the number of false alarms in experiment 2 can be interpreted as resilience to the change in the severity level of the attack. It can also be inferred that the variation in the total number of alerts under the given system settings is mainly caused by the change in the total number of hits, rather than by the attack severity level.


Graph 4.9 – Performance Measurements for System Setting 2 – Experiment 2


Graph 4.10 – Performance Measurements for System Setting 3 – Experiment 2


5. CONCLUSIONS

Experiment 1 evaluates the performance of detection method under the different combinations of IQR rule parameters; and experiment 2 shows how well the detection method performs under different severity levels of attack. This section provides a brief overview of the findings.

A few conclusions can be summarized from the performance analysis based on the experiments:

First, when designing a defense strategy, it is important to evaluate the trade-off between the probability of detection and the probability of a false alarm under different thresholds. When the threshold has a low value, the detection method has a better chance of detecting an attack that is present; however, it might cause false alarms to rise sharply, and vice versa. It should be noted that the number of false alarms is highly responsive to changes in the detection method thresholds. In this thesis, the results show that for the given system setting, the proposed detection method performs well when the IQR length parameter equals 12, and the choice of the IQR interval parameter depends on the attack level as well as the chosen alert level.

Second, the alert level serves as another important threshold in the design of a detection method. The analysis results show that the alert level has a great impact on the performance of the detection method. In general, a higher alert level will reduce false alarms, but at the same time it is very likely to increase the time it takes to detect an attack and to lower the probability of detecting an attack.

Last, the value of the false data influences the probability of detection for a given method. Both the mean and the standard deviation of the false data influence the detection performance.

6. LIMITATIONS

This thesis has several limitations within which the findings and conclusions need to be interpreted carefully.

First, in this thesis it is assumed that the hacker can access real-time measurements of power output and alter them, but has no access to the sensor readings of the other input variables used to calculate the power output, such as wind speed, air pressure, etc. Based on this assumption, the expected value of the power output can be estimated from the sensor readings of those variables and used in the detection method. However, in real life it is possible that the hacker is able to manipulate the power generation of a wind farm by attacking various types of sensors, which makes the attack more difficult to detect due to its higher complexity. Therefore, the defense strategy needs to be adapted accordingly to detect such attacks effectively.


