SOLAR PREDICTOR

By

Andrei-Cristian ŞTEFAN

GRADUATION REPORT

Submitted to

Hanze University of Applied Sciences Groningen

in partial fulfilment of the requirements

for the degree of

Fulltime Honours Bachelor Advanced Sensor Applications

ABSTRACT SOLAR PREDICTOR

by

ANDREI-CRISTIAN ŞTEFAN

A detailed solar prediction is necessary in the case of particular cloudiness patterns that cause undesirably large fluctuations in photovoltaic electricity production. Considering the project objectives and expected outcomes, as well as the fact that the motion of the clouds is the key element that influences solar irradiance, the project focused on predicting cloud motion at an intra-hour temporal resolution. This report describes possible solutions for developing a Solar Predictor system in order to find a suitable method for anticipating the times when solar irradiance is at its peak depending on the motion of the clouds. Furthermore, it explains why Camera Detection technology in combination with background reduction and motion tracking algorithms is the most suitable prediction method, and gives indications regarding its development and application in a real-life situation. Testing in a real environment allowed the implemented system to achieve predictions up to one minute in advance, demonstrating that cloud motion and solar irradiance can be predicted at an intra-hour temporal resolution even when fewer resources and significantly less time for data acquisition and validation are allocated.

I hereby certify that this report constitutes my own product, that where the language of others is set forth, quotation marks so indicate, and that appropriate credit is given where I have used the language, ideas, expressions or writings of another.

I declare that the report describes original work that has not previously been presented for the award of any other degree of any institution.

Signed,

First and foremost, I would like to thank my graduation supervisor, Ms. Corina Vogt, Lecturer at Hanze University of Applied Sciences and Mr. Gerard Nanninga from Energy Academy Europe for their permanent support and guidance throughout the whole duration of the graduation project. Furthermore, I would like to thank my mentor, Mr. Bryan Williams, as well as Dr. Felipe Nascimento Martins and Mr. Ronald van Elburg from Hanze University of Applied Sciences for their professional assistance in clarifying my questions in different phases of the project development.

I would also like to express my gratitude to Ms. Lies Oldenhof for giving me the opportunity to do the graduation project at Energy Academy Europe. Special thanks go to Mr. Jip Kosse for his availability to discuss and share ideas with regards to the project development.

Last but not least, I would like to thank my parents for their permanent support and encouragement, not only during the graduation project but for the whole duration of my bachelor studies.

TABLE OF CONTENTS

List of Tables

List of Figures

Chapter
I. RATIONALE

II. SITUATIONAL & THEORETICAL ANALYSIS
2.1. Pyranometer
2.2. Digital Camera
2.3. Thermographic Camera

III. CONCEPTUAL MODEL

IV. RESEARCH DESIGN
4.1. Data Acquisition
4.2. Data Processing
4.3. Data Prediction

V. RESEARCH RESULTS

VI. CONCLUSIONS AND RECOMMENDATIONS

LIST OF DEFINITIONS AND ABBREVIATIONS

REFERENCES CITED

Appendix
A. Bill of Materials
B. Hardware Casing Design
C. Design FMEA
D. Implemented Software

LIST OF TABLES

1. Comparison overview of the methods in relation to the project requirements

LIST OF FIGURES

1. Variations in sunlight captured by solar panels, creating high and low peaks

2. Qualitative Peak Shaving concept for one day

3. Smart PV system for a domestic user

4. Component parts of a pyranometer

5. Positioning of the pyranometer for data acquisition

6. Pyranometer array setup

7. Difference between CCD and CMOS technologies

8. The determination of the Sun-Pixel angle

9. Background reduction algorithm applied to a sky image

10. The Electromagnetic Spectrum

11. Image taken by a thermographic camera. Detection of cloud pixels and creation of Bit Mask

12. Prototype Design

13. Layers for hardware design

14. Solar Predictor flowchart

15. HSV colour space representation

16. Software implementation with background reduction and blob detection

17. Representation of cloud movement from position A to B in the Python coordinate axis system

18. Schematic representation of the prediction range

19. Blob detection

20. R output for Paired t-Test

Chapter 1 – Rationale

Solar energy possesses the highest potential for electricity generation in comparison to the other renewable energy sources (e.g. wind, biomass), as an average intensity of approximately 350 W/m2 reaches the Earth; from this amount, about 70% is available for harvesting [1]–[3]. During the day, the power demand grows as more energy is required by industry and business areas, making the power plants produce more electricity for the grid [4]. During the night less power is demanded, as there is less industrial activity and less energy is required by residences [4]. As a result, the cost of production rises with peak demand. Because the peak load determines the size and complexity of a system, an important issue is finding ways to overcome the peak demands [4]; this can be done through the analysis of Photovoltaic (PV) outputs and by integrating and connecting them into the power grid, lowering in this way the overall cost [5].

Photovoltaic solar systems produce most of their power during summer, less during fall and spring and very little during winter. Furthermore, they do not produce at night and have a lower efficiency on days when the sun is covered by clouds, thus requiring compensating power purchases from the grid [2], [3]. This leads to important challenges for grid-connected PV systems, such as the rapid output variations that occur as clouds pass overhead.

Modern systems for electrical network management, known as Cyber-Physical Systems – including the power grid – are extremely intertwined with each other; therefore, one problem can cause cascading effects for many other connected systems, making it practically impossible to keep defects isolated when they occur [6]–[8]. The Cyber-Physical System for optimal operation of the power system is called the Smart Grid, and it is an important subject of research in universities and industry [6], [7], [9], [10]. Smart Grid technology allows two-way communication between producers and consumers for the exchange of electricity and information, aiming to control the operation of a huge number of sensors and computers interconnected in complex networks, as well as to reroute electricity in order to compensate for defects or power fluctuations [4], [9]–[11].

This variability is a real challenge for grid operators, who must be prepared to compensate when PV output drops and to reduce grid support as PV output recovers after clouds have passed [12]. Generally speaking, the photovoltaic (PV) power output for a full day can be represented as a bell shape [13]. Figure 1 shows the qualitative PV generation variability based on data extracted from the real power output of solar panels during a summer day in the State of Texas, U.S. [13]. However, the contribution of solar energy to the grid during periods of peak demand is significant, since it reduces the demand on the grid through the addition of clean solar energy, thus helping to reduce electricity costs and increase the reliability of the energy network [14]. Without advance information on when the grid will be called upon for power, power plants need to run uninterrupted so that there is no shortage when consumers demand energy. In order to reduce the cost of running power plants during the time when solar power is available, a reliable generator switching system is required [15].

By integrating solar energy into the grid, the process known as peak shaving can occur, in which the amount of energy purchased from energy providers during peak demand hours is reduced (Figure 2). This shifts the overall energy demand from midday to late in the evening [16].

Figure 2: Qualitative Peak Shaving concept for one day (Image source: [17])

There are many ways to implement peak shaving, such as reducing consumption by turning off non-essential equipment during peak hours, or installing solar and battery solutions so that peak demand is covered by peak PV output. Even though different solutions are available, peak shaving methods require a lot of detail, coordination and planning, including engineering support, utility company participation and energy producers [18]–[20]. Peak shaving can be a great option to reduce expenses and, as utility expenses and demand rise, it will become a more common way to reduce energy costs, leading to a lower peak demand and a reduction in cost price [16].

Increasing energy efficiency and accelerating renewable energy production represent top priorities for people and organisations around the world [21]–[24]. In order to achieve this goal, the implementation of Smart Grid systems plays an important role, as they do not necessarily involve the replacement of the existing network but combine hardware and software elements to significantly improve the way the current system operates, while also offering the possibility of further upgrading [25]–[28].

Smart Grids can provide electricity using digital technology and can also integrate renewable energy, giving consumers the possibility to reduce their consumption during peak hours by adapting the amount taken from the network to their personal needs [21], [25]. Therefore, Smart Grid technology can revolutionise the industry by lowering power consumption by up to 30%, which also reduces the need to build new power plants [11].

As fossil fuels harm the environment by polluting not only the air but also the soil, water, vegetation and buildings, renewable energy sources such as solar and wind energy are used more and more nowadays, being environmentally friendly in comparison with conventional energy sources [29]. However, because renewable energy sources are intermittent, Smart Grids are essential due to their flexibility, compatibility with the existing infrastructure, safety and high efficiency [11]. In addition to the management of the grid to prevent power shortages, several technical issues – such as eliminating solar energy fluctuations, linking consumption habits with the ability to collect renewable energy, and setting the optimal price at which the energy gathered by an individual producer is sold to the Smart Grid – need to be resolved to make Smart Grids more appealing and cost-effective [25], [30].

Fluctuations in the energy output occur when clouds pass over the solar panels, making them unable to deliver the energy required for running the desired appliances, so that the household has to rely on the grid to meet the energy demand [31]. Therefore, it is important to know when peak power generation occurs, so that the system can draw power directly from the cells, using less electricity from the grid and reducing the overall cost of energy.

In this respect, PV technology in combination with Smart Grids encourage consumers to reduce their overall power consumption during peak time slots in order to minimise the costs of their electricity bill [30]. Moreover, Smart Grids can be efficiently used by coordinating the appliances used by each household, as well as managing the peak loads [32].

As an example of how a Smart Photovoltaic system can be used to manage domestic appliances depending on the peak solar irradiance, in 2016 S. Rauf et al. [30] published a scientific article in which an electrical load management system was proposed. In the mentioned article, three main electrical loads were identified: Basic Load, Regular Load and Burst Load (Figure 3). The Basic Load is represented by electrical devices that consume a low amount of electricity (e.g. lights, fans), the Regular Load by the appliances that are always on (e.g. refrigerator) and the Burst Load by those appliances with considerable energy consumption used only for a short period of time (e.g. vacuum cleaner, washing machine).

Figure 3: Smart PV system for a domestic user (Image source: [30])

By predicting the peak irradiance, the appliances classified as part of the Burst Load can be scheduled in the periods when the solar panels produce the highest amount of electricity, thus reducing their operation cost [33]–[35]. Furthermore, this can be extended to the Basic Load and the Regular Load by including an energy storage system to compensate for the variations in solar output [30].
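As a rough illustration of this scheduling idea, the sketch below places Burst Load appliances inside a predicted peak-irradiance window. It is a minimal, hypothetical example: the appliance catalogue, the runtimes and the peak window are invented for illustration and are not taken from the cited study.

```python
from datetime import datetime, timedelta

# Hypothetical appliance catalogue: name -> (load class, typical runtime in minutes).
# Basic and Regular loads are listed only to show that they are not rescheduled.
APPLIANCES = {
    "washing machine": ("Burst", 90),
    "vacuum cleaner": ("Burst", 30),
    "refrigerator": ("Regular", None),   # always on
    "lights": ("Basic", None),           # low consumption
}

def schedule_burst_loads(peak_start: datetime, peak_end: datetime):
    """Place Burst Load appliances one after another inside the predicted peak window."""
    schedule = []
    cursor = peak_start
    for name, (load_class, runtime_min) in APPLIANCES.items():
        if load_class != "Burst":
            continue                                  # only Burst Loads are shifted
        end = cursor + timedelta(minutes=runtime_min)
        if end > peak_end:
            break                                     # no room left in the peak window
        schedule.append((name, cursor, end))
        cursor = end
    return schedule

# Example: a predicted two-hour peak-irradiance window around midday (invented values).
start = datetime(2021, 6, 15, 11, 30)
for name, begin, finish in schedule_burst_loads(start, start + timedelta(hours=2)):
    print(f"{name}: {begin:%H:%M} - {finish:%H:%M}")
```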

The Solar Predictor offers the possibility to determine the peak solar irradiance (when solar panels produce the most electricity) before it happens – based on the motion, position and size of the clouds – encouraging consumers to reduce their overall power consumption during peak time intervals, thus minimising electricity costs [36]. Moreover, as the Solar Predictor can be used to schedule the use of appliances in each household, as well as to manage the peak loads, it grants the possibility to facilitate self-production and lower energy consumption.

Because of the limited information regarding the altitude of the clouds, satellite images offer insufficient spatial-temporal resolution for specific sites, making them unsuitable for predictions at time intervals shorter than one hour [37]–[39]. In this respect, in order to accurately schedule energy consumption, the need arose for a ground-based system able to predict intra-hour peak intervals.

A detailed solar prediction is necessary in the case of particular cloudiness patterns – e.g. blue sky combined with scattered clouds – which cause undesirably large fluctuations in photovoltaic electricity production and/or create opportunities for direct consumption of solar-produced electricity.

Based on the client's targeted results, the main aim of the project was to perform a feasibility study, based on a review of the available scientific literature on state-of-the-art solar prediction technology, and to investigate the possibility of creating a proof of concept for the prediction of solar irradiance at a high temporal resolution (5 minutes or less).

Due to the fact that the Solar Predictor is mainly meant for domestic applications, the device is to be placed on the roof of a house or building where solar panels are installed. This means that the field of view can vary from location to location depending on the surrounding objects such as trees or taller buildings. For the purpose of this project, the field of view is considered as approximately 5 km2.

Regarding the accuracy, the goal was to achieve a prediction with an average margin of error of 5 seconds. However, this value depends on the wind speed: the lower the wind speed, the lower the cloud velocity, and the higher the allowed margin of error. This means that if a cloud moves at a slow pace, the margin of error can be higher, as it will take longer for that particular cloud to reach the sun and cover it completely so that the solar production of the PV panels is cut.

One of the main constraints of the project was finding a low-cost solution. In this respect, even though the maximum allocated budget was €1000, the client recommended keeping the expenses as low as possible. Other important constraints were related to finding a reliable and easy-to-implement solution. This means that – besides the installation of the hardware, the starting of the software and the periodical maintenance check-ups – the system should not require any human supervision.

By considering the project objectives and its expected outcomes, as well as the fact that the motion of the clouds is the key element that influences solar irradiance [40], the following research question was derived:

“How can cloud motion and solar irradiance be predicted at an intra-hour temporal resolution?”

This led to several sub-questions that needed to be taken into account as well:

• “What is the state-of-the-art of the solar prediction technology?”

• “What is the most suitable and least resource-intensive method for an accurate prediction of solar irradiance?”

• “How can the system be made as accurate as possible?”

This report describes possible solutions to develop a Solar Predictor system in order to find a suitable method for anticipating the times when solar irradiance is at its peak. Furthermore, this report provides details about the most suitable method based on literature research and project requirements, as well as gives indications regarding its development and application in a real-life situation.

Chapter 2 – Situational & Theoretical analysis

This chapter includes the research on the peak solar irradiance prediction topic, as well as the general description and analysis of the possible solutions based on the project requirements that led to the hypothesis that Camera Detection technology is the best approach for predicting solar irradiance at an intra-hour temporal resolution.

2.1. Pyranometer

Pyranometers are instruments that measure solar irradiance on a flat surface [41]. A pyranometer system is considered the most accurate way to measure solar irradiation, as it outputs readings for the flux density of the solar radiation [42]. Pyranometers offer a hemispherical view of the surroundings and can be used to measure the total solar radiation on a given surface [43]. A pyranometer consists of a flat sensor enclosed in a double hemispherical glass dome with high light transmission, designed to reduce errors (Figure 4).

Figure 4: Component parts of a pyranometer (Image source: [44])

The solar radiation (a broad range of wavelengths) that reaches the photo-sensitive cell is converted into a measurable current by the device. For most pyranometers, no power is required as the current is being generated only under illumination [41].

These devices can be classified into two major categories: Thermopile and Photovoltaic pyranometers. The first uses thermocouples to generate electricity depending on the temperature it reaches when illuminated, while the second is based on light-sensitive semiconductor chips, its working principle being similar to that of solar cells. Of the two, the Photovoltaic pyranometer has a narrower wavelength band than the Thermopile pyranometer and is therefore not suitable for precise measurements, as it provides only approximate readings [43].

A study performed in 2012 by Chow et al. [20] proposed a setup made out of two pyranometers for the prediction of intra-hour irradiance. Both pyranometers were placed in two locations on a horizontal plane facing the sky (Figure 5). Even though their relative distance from each other is not mentioned, the two pyranometers were separated by four solar panels.

Figure 5: Positioning of the pyranometer for data acquisition (Image source: [20])

In order to achieve irradiance predictions, the two pyranometers gathered data on the solar elevation and azimuth angles and on the incident solar radiation on the sensor. Moreover, ambient temperature data was gathered as well. In order to analyse this data set, an Artificial Neural Network (ANN) was created to compensate for the errors of the intra-hour predictions, also making use of historical data [20], [45], [46]. Using the two-pyranometer system, Chow et al. managed to achieve irradiance predictions 20 minutes ahead, with a recorded error of approximately 6.4% [20].

In 2014, J.C. Baltazar et al. [47] suggested that, in order to accurately predict the light intensity without the use of tracking devices, a multi-pyranometer array is required. The proposed design involved a four-pyranometer setup aimed in different directions: one on the horizontal plane and the rest set at azimuth angles of -60°, 0° and 60°, as shown in Figure 6. Furthermore, because pyranometers are static sensors, they were mounted on a solar tracker device in order to allow the correction of the angle [47].

The use of multiple pyranometers allows readings to be taken simultaneously in order to reduce the statistical variance of the measurements. Also, through algorithms used for estimating the irradiance on a tilted plane based on the global and diffused irradiance [48], the overall irradiance errors were reduced to ±10 W/m2 which was considered acceptable [49]–[51].

Another study, performed by V. Srikrishnan et al. [42] in 2015, also used multiple pyranometers to predict the solar irradiance at intervals lower than an hour, using an approach based on neural networks in which each sensor was considered as a node. Furthermore, as two pyranometer systems were tested, one with three and the other with five pyranometers, a comparison between the results obtained by the two systems was also made. For the two systems, the sensors were positioned in different configurations, similar to the setup from Figure 6: the first configuration used five pyranometers where, besides the one on the horizontal plane, the azimuth-angle pyranometers were set 90° apart from each other, each facing a cardinal point; the second configuration used only three pyranometers, with the two azimuth ones facing South and West. Among their findings was the fact that the five-pyranometer system showed an increase in the registered accuracy of approximately 2.5% in comparison with the three-pyranometer system [42]. This study concluded that, for a system with no moving parts, the results show lower errors if more pyranometers are used.

To sum up the findings from the examples above, one pyranometer is not enough to achieve solar irradiance predictions with low errors. In order to use the pyranometer sensor in the Solar Predictor, a setup of at least five pyranometers would be needed. This would ensure a complete view of the surrounding area and, even though such a setup may be able to detect the irradiance, it would not be possible to accurately predict the movement of the clouds that will block the sun rays until the moment when the sun starts to be covered.

The advantages of the pyranometer are that most pyranometers require no power to operate [43] and that it offers precise irradiance measurements (in W/m2) [42]. However, the drawbacks outweigh the advantages, as the hardware cost is high [52], constant monitoring is required in order to ensure accuracy [42], and – as the pyranometer is not designed to track cloud position – an array of pyranometers would be needed in order to reduce errors [42].

2.2. Digital Camera

Digital cameras obtain an image (the frame) when the optical system is exposed to light; the image sensor converts the incoming light into electric signals and the different sections of the image sensor become charged in proportion to the light intensity [53]. The image formed on the two-dimensional image detector array is converted into pixels, a process known as sampling.

There are two types of image sensors widely used in today's digital cameras: Charge Coupled Device (CCD) and Complementary Metal Oxide Semiconductor (CMOS). Even though both work in a similar fashion by converting light into electric signals, the main difference is that CCD technology moves the generated charge from pixel to pixel until it is converted to voltage at the output node, while CMOS technology converts the charge to voltage inside each pixel (Figure 7) [54].

Figure 7: Difference between CCD and CMOS technologies (Image source: [54])

Further comparing the two technologies, it is considered that CCD image sensors display a higher level of noise than CMOS image sensors because of the higher bandwidth used by CCD [54]. However, this difference is not noticeable for applications where high definition imagery is unnecessary.

In 2012, Ghonima et al. [38] developed a ground-based methodology to classify clouds by their optical thickness. Such a method can improve the accuracy of an intra-hour solar irradiance forecasting system because thicker clouds allow less light to pass through than thinner clouds, having a noticeable effect on the solar irradiance that reaches the ground. For their proposed method, a Total Sky Imager device was used, in which a digital camera is pointed down at a spherical mirror reflecting the sky, in order to increase the radiometric resolution of the regions of interest. For the classification, methods such as the red-to-blue ratio [55], the red-blue difference [56] and the normalised blue-red ratio [57], [58] were tested to determine which clouds were optically thin and which were optically thick; the red-to-blue ratio method was regarded as the most suitable. For a clear sky, it was noticed that the red-to-blue ratio has higher values around the area where the sun is located and gradually decreases with the Sun-Pixel Angle, the angle between the camera pixel and the direct solar beam (Figure 8). Furthermore, a neural network was trained with a dataset of 60 images representative of different types of clouds. It was noted that the implemented algorithm performed better when the difference between red-to-blue ratio images and clear sky images was taken, being able to classify the cloud pixels as optically thin or optically thick, thus giving the opportunity to improve the detection accuracy of short-term solar irradiance forecasts.
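As a minimal sketch of the red-to-blue-ratio idea described above – not the implementation of Ghonima et al.; the file name and the threshold values are assumptions chosen only for illustration – a cloud pixel can be flagged whenever its red-to-blue ratio exceeds a clear-sky level:

```python
import cv2
import numpy as np

# Load a sky image (hypothetical file name) and split the BGR colour channels.
img = cv2.imread("sky_sample.jpg")
assert img is not None, "sky_sample.jpg not found"
blue, green, red = cv2.split(img.astype(np.float32))

# Red-to-blue ratio per pixel: clear sky is strongly blue, so the ratio stays low,
# while cloud pixels (white/grey) have a ratio close to or above 1.
rbr = red / (blue + 1e-6)

# Illustrative thresholds (assumed, not taken from the cited paper):
clear_sky = rbr < 0.6          # below -> clear sky
thick_cloud = rbr > 0.9        # above -> optically thick cloud
thin_cloud = ~clear_sky & ~thick_cloud

print("cloud fraction:", float((thin_cloud | thick_cloud).mean()))
```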

Figure 8: The determination of the Sun-Pixel angle (Image source: [38])

A paper published by Chu et al. [59] in 2015 showed how cameras and image processing algorithms can be used to predict solar irradiance ten minutes in advance. The employed method combined a fish-eye digital camera with an artificial neural network and was divided into four parts: cloud identification, cloud indexing, cloud classification and performance assessment.

The first part, the identification method, classifies every image taken of the sky as either clear or cloudy. If the image is categorised as cloudy, a thresholding method created by Li et al. [57] in 2011 is used to further classify the image as either overcast or partly cloudy. The second part, the indexing of clouds, is used to obtain numerical pixel information for the clouds that move towards the sun. The third part, the classification of clouds, employs a multi-layered neural network to detect what influence the clouds will have on the solar light, based on the training data used for the network. The fourth part, the assessment of the results, was conducted by using statistical tests to evaluate the performance of the system, such as mean bias error, root mean square error, forecasting skill and excess kurtosis [60]–[62]. The research paper had positive results, succeeding in creating a real-time forecasting system with an accuracy of 65%, able to predict when clouds will cover the sun for a time interval of ten minutes [59].

Another notable system based on Digital Camera technology was created by R. Chauvin et al. [63] in 2015. The main difference from the other systems presented above is that the approach of R. Chauvin et al. employs a thresholding technique on the sky images based on pixel identification, performed by separating the cloud pixels from the clear sky pixels [57], [64]. This is done by calculating the optimal threshold based on light and colour [63]. Using the threshold, the background is successfully removed (Figure 9), allowing the implemented algorithm to detect the clouds. This type of method has a lot of potential to be implemented in the solar prediction, as it shows how – by using thresholding techniques – the background can be removed so that clouds can be detected. With the detection of clouds, their trajectory as well as their speed can be calculated, making it possible to state predictions regarding the amount of time needed for a cloud to cover the sun, including the time it would take for the cloud to leave the sun's corona.

Figure 9: Background reduction algorithm applied to a sky image (Image source: [63])

The systems presented above showed that using camera vision for intra-hour solar irradiance prediction offers a wide range of possibilities and that various types of algorithms can be implemented. Digital cameras have the advantage of being able to perform detection in real time [57] and of offering flexibility for software implementation [57], [65]. However, as presented above, most camera-based systems are resource-intensive [59] and there is a risk of damaging the image sensor when the camera is pointed directly at the sun [66]. In order to avoid damage, a special optical filter would be needed to reduce the intensity of the light that reaches the image sensor.

2.3. Thermographic Camera

Thermographic cameras, also known as infrared (IR) cameras, are used for the detection of infrared radiation in non-contact temperature measurements. In the electromagnetic spectrum (Figure 10), the IR interval ranges from 0.77 µm to 100 µm [67]. This spectrum – invisible to the human eye – can be detected by thermographic cameras as every physical body with a temperature larger than -273.15°C (absolute zero) radiates heat [68], [69].

Figure 10: The Electromagnetic Spectrum (Image source: [70])

As an object's temperature increases, the kinetic energy of its particles increases as well and more thermal radiation is produced [71]. This type of camera uses electromagnetic radiation to form an image, similar to how a normal camera uses visible light to create an image.

Because of their high spatial and temporal variability, it is a difficult task to accurately determine the radiative effects of clouds on solar irradiance [72]. This is because clouds absorb a significant amount of radiation and also reflect back the radiation emitted from Earth's surface [73].

Scientific literature shows that little research has been done on using thermographic cameras for detecting clouds. This is mainly because of the high cost of a thermographic camera, as well as the fact that clouds can be detected using a “normal” digital camera. Furthermore, even if thermographic cameras could be used to detect clouds [73], their main atmospheric applications are the detection of volcanic plumes and masses emitted during eruptions [74]–[77].

However, one research paper, published in 2008 by S. Smith and R. Toumi [78], used such a ground-based camera to measure cloud cover and brightness temperature and used the data to make irradiance predictions. The camera constantly adapts to the ambient temperature readings in order to detect the areas that are colder or hotter. Using the readings, a temperature threshold value was set and, based on that, a Bit Mask was created containing all the cloud pixels, which were then counted by an algorithm (Figure 11).
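A minimal sketch of this temperature-threshold Bit Mask is shown below; the synthetic brightness-temperature array and the -20 °C threshold are invented for illustration, whereas Smith and Toumi derived their threshold from the camera's ambient-temperature readings:

```python
import numpy as np

# Hypothetical brightness-temperature image in degrees Celsius (stand-in for IR data).
temperature = np.random.uniform(-40.0, 5.0, size=(120, 160))

# Clouds appear warmer than the clear-sky background; pixels above the threshold
# are marked as cloud (1), the rest as background (0).
threshold_c = -20.0                                   # assumed value
bit_mask = (temperature > threshold_c).astype(np.uint8)

cloud_pixels = int(bit_mask.sum())
print(f"cloud pixels: {cloud_pixels}, cloud cover: {cloud_pixels / bit_mask.size:.1%}")
```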

As mentioned above, “normal” digital cameras are preferred over thermographic cameras, mainly because “normal” cameras can be programmed to recognise clouds without the need for thermographic data. Moreover, the cost of a digital camera appropriate for cloud detection is far lower than the cost of an appropriate thermographic camera (tens to hundreds of euros for the digital camera, compared to hundreds to thousands of euros for the thermographic camera). Therefore, the use of a thermographic camera is redundant for the creation of a Solar Predictor, a more viable approach being the use of “normal” digital cameras.

Figure 11: (Left) Image taken by a thermographic camera. (Right) Detection of cloud pixels and creation of the Bit Mask (Image source: [78])

The paper published by S. Smith and R. Toumi [78] shows how a simple background reduction algorithm can be used to detect cloudiness. This type of algorithm can be implemented in the detection software of the Solar Predictor without the use of an Artificial Neural Network. Despite the fact that thermographic readings were used in the mentioned article, the background reduction method can also be implemented for a “normal” digital camera, provided that the creation of the Bit Mask is achieved by other means. For example, a colour filter can be implemented in the software to make the distinction between the sky and the clouds based on a set threshold. However, as the accuracy of the created system is not clearly shown in the article, the Solar Predictor may require several other software methods to be implemented for a high accuracy. Even though a thermographic camera may identify clouds more easily than a normal camera, eliminating the background without the use of a specialised software method [78], the high cost of the hardware as well as the fact that the readings can be influenced by changes in the ambient temperature [79], [80] make it unsuitable for use in the Solar Predictor.

Chapter 3 – Conceptual model

As mentioned in Chapter 1, finding a low-cost, easy-to-implement and reliable solution was the main constraint for this project; the research led to the conclusion that Camera Detection technology is the most appropriate for the mentioned problem. This is because, in comparison to the other technologies mentioned in Chapter 2, it has the advantage of being a relatively simple device that is able to offer enough flexibility for software implementation.

Table 1: Comparison overview of the methods in relation to the project requirements

Pyranometer
Advantages:
- No power required to operate [43]
- Precise solar irradiance measurements [42]
- Real-time detection [42]
Drawbacks:
- High cost of hardware [52]
- Not designed to track cloud position [42]
- An array of pyranometers is necessary (one is not enough) [42], [47]
- Constant monitoring required to ensure accuracy [42]

Digital Camera
Advantages:
- Low cost of hardware
- Flexibility for software implementation [57], [65]
- Real-time detection [57]
Drawbacks:
- Reliability dependent on the software quality [57], [65]
- The amount of necessary processing power depends on the software efficiency [59]
- Risk of damage when pointed directly at the sun [66]

Thermographic Camera
Advantages:
- Background removed without specialised software methods [78]
- Real-time detection [78]
Drawbacks:
- High cost of hardware
- Readings influenced by changes in the ambient temperature [79], [80]

Besides this type of approach, there is little information in literature regarding systems based on other technologies. The only other technology that differs from Camera Detection and that was used for similar purposes was the one based on pyranometers [42], [47]. Even though pyranometers may be able to accurately detect the solar irradiance, it will not be possible to detect the movement of clouds that will block the sun rays until the moment when the sun is covered. This means that pyranometers are not a good choice for solar prediction technologies with respect to the project requirements.

For topics involving solar irradiance prediction and cloud tracking, such as the one that the Solar Predictor intends to cover, most scientific research papers employ systems based on digital cameras in combination with image processing techniques such as blob detection, colour filters, motion tracking and Artificial Neural Networks (ANN) [37], [59], [81]–[83].

Several scientific articles refer to the use of a background reduction algorithm to detect cloudiness and to its ability to predict the peak irradiance 5 minutes in advance [37], [78]. The advantage of implementing this type of algorithm in the detection software is that it can be done without the use of an Artificial Neural Network, thus without the need to create a complex hardware system able to perform the tasks. As an example of the amount of resources that might be needed for an ANN, the system created by Chu et al. in 2015 used 10 computer cores and the training of the neural network took approximately 24 hours [59]. Given that the system created by Chu et al., as well as similar systems, require a large amount of resources both in terms of hardware and software, a less resource-intensive approach is needed in order to comply with the project requirements and budget limitations. In this respect, the use of a background reduction algorithm is the most suitable. Moreover, as mentioned in Section 2.3, the Solar Predictor requires the implementation of other software methods in order to achieve a high prediction accuracy.

In conclusion, taking into account the project constraints and the information from the literature research covering existing weather/solar prediction technology, as well as state-of-the-art technology and programming methods, Camera Detection was chosen for the Solar Predictor project. With regard to the software, cloud detection algorithms such as background reduction and motion tracking can be implemented to predict the movement of clouds.

Furthermore, even though a single camera is considered accurate, it may prove difficult to exploit its full potential, as it may not be possible to point it directly at the sun without causing damage to its image sensor; this is because the lens can act like a magnifying glass and focus the rays of the sun onto the camera's image sensor. Therefore, a special optical filter needs to be placed on top of the camera's lens in order to protect it from damage.

Chapter 4 – Research design

This chapter presents the design approach, the work done and the step-by-step technical process used for the Solar Predictor. In order to answer the research question, the project work was distributed over three main phases: Data Acquisition, Data Processing and Data Prediction. The Data Acquisition section highlights the procedure in which the hardware was created, the programming language used and the reasons why it was chosen. The Data Processing section describes the implemented software architecture, including the filtering process and the detection algorithm. The Data Prediction section explains the reasoning behind the prediction algorithm as well as its gradual implementation within the software.

4.1. Data Acquisition

For the Solar Predictor, a webcam [84] facing the sky was used in a place that is not obstructed by objects such as houses or trees. During the testing, the camera was placed on the roof of Hanze University of Applied Sciences (53°00'16.1"N; 6°34'12.6"E) at a height of approximately 10m with an unobstructed sky field of view of approximately 5 km2.

As one of the requirements for the Solar Predictor was to find a low-cost solution, the chosen materials needed to have a balance between cost and quality, the predominant factor being the quality. This is reflected in the decision to buy an optical light filter of a good quality, instead of settling for an inferior one, even though its cost was higher than the used camera (Appendix A).

Figure 12: Prototype Design

For image acquisition, a 1.3 MP webcam was chosen. In order to enlarge its field of view, a clip-on fish-eye lens was added. To make sure that the webcam's image sensor does not get damaged by direct sunlight, an ND-8 filter was placed on top of the fish-eye lens. The ND (Neutral Density) filter is a type of optical filter appropriate for outdoor applications that reduces the intensity of light reaching the lens without altering the natural colours [85], [86], making it possible for the camera to be pointed at the sun without causing damage to the image sensor (Figure 12).

The external casing of the prototype, also meant to keep the camera fixed in the same position during the testing period, is made out of three acrylic layers, each of 6 by 7 cm and a thickness of 8 cm (Figure 13), cut to fit the webcam’s shape (Appendix B).

Figure 13: Layers for hardware design

Due to the fact that the acrylic sheet used is sturdy, it was chosen also for the base and top layers. The role of the first casing layer is mainly to provide a surface on which the camera can be placed. The second layer has two cut-outs: one the size of the USB part of the webcam's body, which ensures the camera does not change position in the horizontal direction, and the other to make room for the clip-on fish-eye lens. The third layer has only one cut-out, the size of the lens part of the camera. Because of the camera's shape, the third layer prevents the camera from changing position not only horizontally but also vertically. In the corners of each layer, holes with a diameter of 8 mm were cut so that connectors can be used. Plastic nail anchors were chosen as connectors because they keep a tight grip on all three layers without allowing them to fall apart. The nail anchors used are meant for 3 mm screws; however, what matters is their outer diameter, which is 8 mm.

The camera's video feed is imported through a USB connection to a computer, which uses the Python 3.6 programming language in combination with the Open Source Computer Vision 3.4 (OpenCV 3.4) libraries to analyse the data.

Python is a free, open-source, general-purpose programming language that can be run on any major operating system, being used for a wide variety of applications such as automation, web development and data science [87]. Its main advantages over other programming languages include the large amount of support libraries as well as its user-friendliness and implementation speed [88]. The areas of practical application mentioned above allow the implementation of complex techniques, making Python a good fit for this project.

In this project, Python 3.6 creates the link between the various components of the software infrastructure, being responsible for the exchange of information between the camera and the OpenCV 3.4 libraries. The mentioned libraries were designed to create a general infrastructure for practical computer vision applications, containing more than 2500 optimised algorithms that offer a wide variety of possibilities for software implementation such as motion tracking, filters, image segmentation and object recognition [89]. Based on these methods, the identification of the sun and the clouds was achieved by using a combination of colour filters and blob detection algorithms, as described in Section 4.2.
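As a hedged sketch of how such a video feed can be read in Python with OpenCV (the device index 0, the preview window and the quit key are assumptions for illustration; the report's actual implementation is listed in Appendix D):

```python
import cv2

# Open the USB webcam (device index 0 is an assumption) and read frames continuously.
capture = cv2.VideoCapture(0)
if not capture.isOpened():
    raise RuntimeError("Could not open the webcam")

try:
    while True:
        ok, frame = capture.read()          # frame is a BGR image (numpy array)
        if not ok:
            break                           # camera disconnected or no frame available
        cv2.imshow("Solar Predictor - raw feed", frame)
        if cv2.waitKey(1) & 0xFF == ord("q"):
            break                           # press 'q' to stop the preview
finally:
    capture.release()
    cv2.destroyAllWindows()
```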

4.2. Data Processing

The data received from the camera is processed and analysed through the use of Python 3.6 in combination with the OpenCV 3.4 libraries, using a 64-bit Intel® Core™ i3-4010U CPU operating at a frequency of 1.70 GHz (Appendix D).

The video feed is processed so that a pure black and white mask of the real-time video feed is created (Figure 14). This is done by converting the RGB image format to the Hue-Saturation-Value (HSV) colour space – a cone-like space with its apex pointing downward (Figure 15) – with the Hue (H) being the dominant colour observed, the Saturation (S) the amount of white light present in the chosen Hue and the Value (V) the chromatic intensity [90].

Figure 15: HSV colour space representation (Image source: [90])

The reason for this conversion is that the HSV colour space allows for an easier implementation of the thresholds necessary to create the colour filters, mainly because it represents colours in a way that is closer to how the human vision system perceives them [90]. Using this, the blue of the sky is discarded – being considered as background – leaving only the clouds and the sun, which are detected as white of different intensities. In this stage, the distinction between the sun and the clouds is made, with the sun being a white-yellow blob of high intensity and round shape and the clouds being white-grey blobs of amorphous shapes.

With the background removed, the only non-black pixels that remain in the filtered video feed are the ones from the sun and clouds. For this reason, another filter which converts the feed to a Bit Mask with all the remaining non-black pixels set to 1 (or white) and the background to 0 (or black) was implemented. In order to make it easier to distinguish between the white blob of the sun and the white blobs of the clouds, in the initial phase of the software, the sun coordinates and size are extracted and a yellow circle with the same size is drawn on the Bit Mask, at the same coordinates (Figure 16). Because for an intra-hour interval the motion of the sun is assessed as negligible [40], for the implementation of the software, the sun is considered stationary.

Figure 16: Software implementation with background reduction and blob detection

On the Bit Mask, blob detection algorithms are implemented for the different types of blobs in order to find the clouds' location and size based on colour and area. This allows the recognition part of the program to work with a pure black and white image. This part of the software is designed to detect only the clouds that can potentially cover the sun, discarding clouds with an area lower than 1500 pixels; it was determined visually that such clouds are not large enough to cover the sun and therefore do not influence the solar irradiance. For each of the detected blobs, the location and size are recorded so that they can be used in the prediction part of the software.
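A minimal sketch of the background-reduction and blob-detection steps described above is given below, using OpenCV. Only the 1500-pixel minimum area comes from the report; the HSV bounds are illustrative assumptions, connected-component analysis stands in for the report's blob detector, and the synthetic test frame replaces a real camera frame:

```python
import cv2
import numpy as np

def detect_cloud_blobs(frame: np.ndarray, min_area: int = 1500):
    """Return (x, y, area) for each cloud/sun blob large enough to matter."""
    hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)

    # Discard the blue sky (background); keep low-saturation, bright pixels,
    # which is where clouds and the sun appear as white/grey of different intensities.
    lower = np.array([0, 0, 160], dtype=np.uint8)     # assumed bounds
    upper = np.array([179, 80, 255], dtype=np.uint8)
    bit_mask = cv2.inRange(hsv, lower, upper)         # 255 = cloud/sun, 0 = background

    # Connected-component analysis used here as a simple blob detector.
    num, labels, stats, centroids = cv2.connectedComponentsWithStats(bit_mask)
    blobs = []
    for i in range(1, num):                           # label 0 is the background
        area = int(stats[i, cv2.CC_STAT_AREA])
        if area < min_area:
            continue                                  # too small to cover the sun
        cx, cy = centroids[i]
        blobs.append((float(cx), float(cy), area))
    return blobs, bit_mask

# Example with a synthetic frame (a real frame would come from the webcam feed).
frame = np.zeros((480, 640, 3), dtype=np.uint8)
cv2.circle(frame, (500, 400), 40, (255, 255, 255), -1)   # stand-in for a cloud
print(detect_cloud_blobs(frame)[0])
```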

4.3. Data Prediction

In order to create the prediction algorithm, the velocity and the trajectory of each detected cloud had to be calculated. When a cloud is detected, it is recognised as a potential cover for the sun and the following calculations are performed within the algorithm:

Figure 17: Representation of cloud movement from position A to B in the Python coordinate axis system

The distance from cloud position A to cloud position B is calculated according to the Pythagorean theorem:

$$AB^2 = (x_2 - x_1)^2 + (y_2 - y_1)^2 \;\Rightarrow\; AB = \sqrt{(x_2 - x_1)^2 + (y_2 - y_1)^2} \qquad (1)$$

Using the calculated distance (AB) and the time (t) necessary for the cloud to travel from position A to position B, the velocity is calculated using the velocity equation:

$$v = \frac{AB}{t} \qquad (2)$$

As the software uses frames as time unit, the time was converted to seconds based on the frame rate of 30 frames per second used by the camera.

From basic geometry, it is known that the alternate interior angles formed by two parallel lines (situated in the same plane) with a transversal line that intersects both of them are equal (see ∡α in Figure 17). In order to calculate the trajectory, the angle made by AB with the horizontal axis was determined by applying the formula for the tangent of an angle, defined as the ratio between the length of the opposite side and the length of the adjacent side:

$$\tan(\alpha) = \frac{y_2 - y_1}{x_2 - x_1} \;\Rightarrow\; \alpha = \arctan\left(\frac{y_2 - y_1}{x_2 - x_1}\right) \qquad (3)$$

Using the angle at which the cloud is moving, it can be determined whether its trajectory will reach the known position of the sun. If the cloud is on a path that will reach the sun, its velocity and the distance from the current location of the cloud to the sun are calculated; this distance is computed in the same way as in Equation 1.

Figure 18: Schematic representation of the prediction range

Knowing the velocity and the distance, the time it takes for the detected cloud to reach the sun is calculated using Equation 2; this constitutes the prediction. The maximum range for which the prediction is made is (2·dcloud + dsun), as shown in Figure 18. Using the cloud's speed and size, as well as the coordinates and size of the sun, it can be determined how long it will take for the mentioned cloud to exit the sun's corona, thus making it possible to determine the time when the peak solar irradiance will return. It is important to mention that the created Solar Predictor is most suitable for the detection and prediction of Cumulus clouds, because they are a type of cloud normally detached from one another and separated by areas of blue sky, making it possible to detect their boundaries with a Digital Camera system [91].
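Equations 1–3 can be combined into a short prediction routine; the sketch below follows the report's description, but the angular tolerance, the use of (2·dcloud + dsun) as the distance travelled until the cloud has left the corona, and the example coordinates are assumptions made for illustration:

```python
import math

FRAME_RATE = 30.0   # camera frame rate in frames per second, as used in the report

def predict_time_to_sun(cloud_a, cloud_b, frames_elapsed,
                        sun_center, sun_diameter, cloud_diameter,
                        angle_tolerance_deg=10.0):
    """Return (seconds until the cloud reaches the sun, seconds until it has left
    the sun's corona), or None if the cloud is not heading towards the sun."""
    (x1, y1), (x2, y2) = cloud_a, cloud_b
    dx, dy = x2 - x1, y2 - y1

    # Equation 1: displacement between the two recorded cloud positions (pixels).
    ab = math.hypot(dx, dy)
    if ab == 0:
        return None                                   # stationary cloud

    # Equation 2: velocity in pixels per second (frames converted to seconds).
    velocity = ab / (frames_elapsed / FRAME_RATE)

    # Equation 3: heading of the cloud compared with the direction towards the sun.
    heading = math.atan2(dy, dx)
    towards_sun = math.atan2(sun_center[1] - y2, sun_center[0] - x2)
    diff_deg = (math.degrees(heading - towards_sun) + 180.0) % 360.0 - 180.0
    if abs(diff_deg) > angle_tolerance_deg:
        return None                                   # trajectory misses the sun

    # Time until the cloud reaches the sun: this is the prediction itself.
    distance_to_sun = math.hypot(sun_center[0] - x2, sun_center[1] - y2)
    time_to_reach = distance_to_sun / velocity

    # (2*dcloud + dsun) is the report's maximum prediction range (Figure 18);
    # here it is also used, as an assumption, for the span travelled until the
    # cloud has left the sun's corona.
    time_to_exit = time_to_reach + (2 * cloud_diameter + sun_diameter) / velocity
    return time_to_reach, time_to_exit

# Example with invented pixel coordinates: two positions recorded 30 frames apart.
print(predict_time_to_sun((100, 100), (130, 120), 30,
                          sun_center=(400, 300), sun_diameter=50, cloud_diameter=120))
```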

Chapter 5 – Research Results

To determine the efficiency of the developed software used in the Solar Predictor to collect, process and predict the data gathered from the camera, tests were performed in a real environment. In order to do so, the webcam was placed on the roof of a building in such a way that it had a clear view of the whole sky (sun included), making sure that the field of view was not obstructed by trees or buildings.

During the testing phase (Appendix E), the sun was positioned in the lower half of the field of view and it was considered stationary for the duration of each test. The created software was run and, through the implemented background reduction and cloud motion tracking algorithms, the background (the sky) was discarded and only the clouds remained (Figure 19). By checking the direction of each cloud's movement, the predictions were stated.

Figure 19: Blob detection

Due to the fact that – because of the emitted sunlight – the clouds' edges become less clear in the immediate vicinity of the sun, the software cannot detect the exact moment when the cloud reaches the sun. Therefore, the moment when the predicted cloud reached the sun was determined visually. In this way, the time interval from the moment the prediction was stated until the cloud reached the sun was measured using a chronometer and compared to the prediction given by the software.

It is important to note that, because clouds are not objects with a constant shape, some of the samples had to be discarded, as some clouds disappeared before reaching the sun or before exiting the sun's corona, or split into smaller clouds. There were also cases where the general direction and/or the speed of the wind changed, carrying the clouds in another direction and/or at different speeds. As the collection of samples depended entirely on the weather, the availability of clouds and the wind direction, only four samples could be collected.

The tests allowed the implemented system to achieve predictions for time intervals up to one minute in advance. However, as the predictions depend on the weather conditions as well as on the position of the sun, in cases where the clouds move towards the sun from a greater distance the system could achieve predictions for time intervals larger than one minute.

Compared with results from the scientific literature, such as the system created by Chow et al. in 2011 that reached a prediction time interval of maximum 5 minutes [92], a one-minute prediction time interval is lower, but not by much. Nevertheless, it is important to highlight that Chow et al. did not have restrictions with regard to resources, allowing the usage of professional equipment such as a Total Sky Imager, as well as the creation of a more complex and resource-intensive programming technique. Furthermore, for Chow et al. the data gathering period alone lasted seven months, while the entire duration of the Solar Predictor project was four months, including research, design, software development, testing and validation.

In order to determine the variability of the collected data, statistical methods of analysis such as the Mean Bias Error and the Root Mean Square Error were used. Moreover, the Paired t-Test, together with the calculation of the Standard Error of the Statistic, was applied to the four collected samples. The overall systematic error of the model is given by the Mean Bias Error (MBE) as in Equation 4:

$$MBE = \frac{1}{n}\sum_{i=1}^{n}\frac{x_i - x_{true}}{x_{true}} \qquad (4)$$

where n is the sample size, $x_i$ the predicted time and $x_{true}$ the measured time.

As the calculated MBE is -0.5004 (-50.04%), the model has a negative bias, meaning that it tends to underestimate the predictions by an average of 50%.

In order to observe the difference between the predicted time and the measured time, the Root Mean Square Error (RMSE) was calculated using Equation 5.

$$RMSE = \sqrt{\frac{1}{n}\sum_{i=1}^{n}\left(\frac{x_i - x_{true}}{x_{true}}\right)^2} \qquad (5)$$

The RMSE is the absolute measure of fit that determines the accuracy of the prediction [93]. Using Equation 5, the RMSE for the extracted samples is 0.52998 (52.998%), so the collected data fits the predictions at a rate of approximately 53%. In other words, even though the individual samples have different rates of error, the predicted and measured values differ on average by about half in the root-mean-square sense. In order to assess whether there is a statistically significant difference between the predicted time and the measured time for the n samples, it is necessary to check whether the mean values differ between the two data sets. Based on the configuration of the data (two data sets with a number of samples n lower than 30) and due to the fact that the observations were collected in pairs (the predicted time and the measured time), the Paired t-Test was used, as it is the most appropriate way to analyse the differences between the readings [94].

In order to perform the statistical test, the R programming language was used, as it is open-source software for statistical analysis with thousands of available packages for various topics such as statistics, econometrics and bioinformatics, and with a large variety of available documentation [95], [96].

The first step of implementing the Paired t-Test is to define the Null Hypothesis (H0), stating that there is no difference between the means, and the Alternative Hypothesis (Ha), stating that there is a difference between the means.

In the second step, the Paired t-Test statistic is calculated using Equation 6 [94]:

$$t_0 = \frac{\bar{d}}{\sigma / \sqrt{n}} \qquad (6)$$

where $\bar{d}$ is the mean of the differences, n the sample size and σ the standard deviation.

For a 95% confidence level, alpha (the significance level) is 0.05, calculated as 1 - 0.95. If the calculated p-value is lower than 0.05, the Null Hypothesis (H0) is rejected.

Figure 20: R output for Paired t-Test

Since the p-value is 0.05549, greater than 0.05 (or 5%), the Null Hypothesis (H0) cannot be rejected, meaning that no statistically significant difference between the means of the predicted and the measured times could be demonstrated at the 5% significance level.

In this situation it is necessary to compute the estimate of the Standard Error of the Statistic, which indicates the precision of an estimate of the population [97]. It is defined as the standard deviation divided by the square root of the sample size:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{n}} \qquad (7)$$

Due to the fact that two populations with different variances were used, it is necessary to distinguish between the two variances by rewriting Equation 7 as below:

$$\sigma_{\bar{x}_1 - \bar{x}_2} = \frac{\sigma_1 + \sigma_2}{\sqrt{n}} \qquad (8)$$

For the two data sets used – the predicted time and the measured time – the standard deviation of the predicted samples (σ1) is 0.3947573 and the standard deviation of the measured samples (σ2) is 1.804624.

$$\sigma_{\bar{x}_1 - \bar{x}_2} = \frac{0.3947573 + 0.8504901}{\sqrt{4}} = 0.6226237$$

The Standard Error of the Statistic calculated with the formula above is therefore 0.6226237.

As the degree of precision represented by the Standard Error of the Statistic, calculated from the samples extracted from the data sets, is 62.3%, the Solar Predictor has a maximum error of 62.3% for an entire population. For example, if the software predicts that a cloud will reach the sun in 30 s, the cloud may reach it within the 62.3% margin of error, which can be around 50 s. Because the prediction of the time necessary for the cloud to exit the sun's corona is based on the prediction of the time needed for the cloud to reach the sun, the same error also applies to the case when the cloud exits the sun's corona.
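The report carried out the t-test in R (Figure 20); as a hedged sketch, the same measures can be reproduced in Python with scipy. The predicted and measured arrays below are hypothetical placeholders, not the four samples actually collected:

```python
import numpy as np
from scipy import stats

# Hypothetical paired samples in seconds (placeholders, not the report's data).
predicted = np.array([20.0, 35.0, 28.0, 41.0])
measured = np.array([38.0, 55.0, 61.0, 90.0])

rel_err = (predicted - measured) / measured
mbe = rel_err.mean()                       # Equation 4: Mean Bias Error
rmse = np.sqrt((rel_err ** 2).mean())      # Equation 5: Root Mean Square Error

# Paired t-test (Equation 6); the report performed the same test in R.
t_stat, p_value = stats.ttest_rel(predicted, measured)

# Equation 8 as written in the report: combined standard error of the statistic.
n = len(predicted)
std_err = (predicted.std(ddof=1) + measured.std(ddof=1)) / np.sqrt(n)

print(f"MBE = {mbe:.4f}, RMSE = {rmse:.4f}")
print(f"t = {t_stat:.4f}, p = {p_value:.5f}, standard error = {std_err:.4f}")
```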

Even though the predictions were achieved for intervals lower than one minute, they showed the capability of the Solar Predictor to make quasi-continuous predictions.

Chapter 6 – Conclusions and Recommendations

The purpose of this project was to investigate different methods of predicting solar irradiance based on the motion of the clouds, as well as to create a proof of concept for a Solar Predictor. Based on the project requirements and on the available scientific literature, the research revealed that Camera Detection in combination with background reduction and motion tracking algorithms is the most appropriate approach. Given that for an intra-hour interval the sun can be considered motionless, the solar irradiance is mainly influenced by the motion of the clouds [40]. In accordance with the defined research question, the state-of-the-art solar prediction technology available in the scientific literature revealed that Camera Detection exhibited the most advantages for predicting cloud motion and solar irradiance, its main advantage being that the hardware consists of a relatively simple device that allows enough flexibility for software implementation.

Regarding the accuracy of the prediction software, improvements were made by adjusting the cloud and sun recognition filters, as well as by implementing trajectory computation and speed determination techniques available in the OpenCV 3.4 libraries. The accuracy reached within the time allocated for this project proves the capability of the Solar Predictor to make quasi-continuous intra-hour predictions. The most challenging part of the Solar Predictor was the requirement to create a less resource-intensive method for achieving an accurate prediction of solar irradiance. In terms of software, background reduction and motion tracking algorithms were implemented to detect clouds and to determine and predict their movement. The overall size of the created software was 6.62 KB (approximately 300 MB when the installed libraries are included), running on a single CPU, and it achieved results comparable with other prediction systems from the literature, such as the one created by Chu et al. in 2015, in which 10 computer cores were used [59].
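
Purely as an illustration of the kind of processing pipeline described above (background reduction followed by blob detection), a sketch using the OpenCV Python bindings is given below; it is not the project's implementation, and the camera index and all parameter values are assumptions chosen for readability:

```python
import cv2

# Open the webcam stream (device index 0 is an assumption).
capture = cv2.VideoCapture(0)

# Background subtractor: models the mostly static sky and highlights moving clouds.
background_subtractor = cv2.createBackgroundSubtractorMOG2(history=500, detectShadows=False)

# Blob detector: approximates each detected cloud region as a circular blob.
params = cv2.SimpleBlobDetector_Params()
params.filterByColor = True
params.blobColor = 255        # detect bright (white) regions of the foreground mask
params.filterByArea = True
params.minArea = 500          # illustrative value, not the project's setting
detector = cv2.SimpleBlobDetector_create(params)

while True:
    ok, frame = capture.read()
    if not ok:
        break

    # Foreground mask ("Bit Mask"): white pixels correspond to moving cloud regions.
    mask = background_subtractor.apply(frame)

    # Detect blobs (clouds) in the mask; each keypoint has a centre and a size.
    keypoints = detector.detect(mask)
    preview = cv2.drawKeypoints(frame, keypoints, None, (0, 0, 255))

    cv2.imshow("Cloud detection sketch", preview)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

capture.release()
cv2.destroyAllWindows()
```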

An in-depth comparison with similar systems from the related scientific literature revealed that the method implemented in this project yielded satisfying results, taking into account the project duration, its objectives and constraints, as well as the fact that the project was intended for research purposes only. Other comparable systems were developed over a longer period of time using specialised hardware such as Total Sky Imagers and highly resource-intensive programming techniques [92], [98]. The Solar Predictor achieved similar results in a shorter period of time, using a normal webcam, a fish-eye lens and an optical filter in combination with a more compact programming method, at a total hardware cost of €54.81 (Appendix A).

The Solar Predictor project has the potential to be upgraded in different ways, both in terms of software and hardware, depending on the needs of the area of implementation. For example, it can be used in a domestic environment as an aid for the management of household electrical systems connected to solar panels, making it possible to schedule appliances depending on the availability of solar energy. Furthermore, the Solar Predictor can be implemented on a much larger scale, such as an interconnected grid of devices that form a Smart Grid. One way to do this is to use multiple interconnected Solar Predictors in the form of a sensor network whose nodes communicate with each other at all times, so that the accuracy of the solar irradiance prediction can increase.

As mentioned in Chapter 5, the created Solar Predictor system exhibited a maximum error of 62.3%. In order to reduce this error and improve the system’s overall accuracy, a number of improvements can be undertaken.

The first possible improvement would be to increase the time interval over which the software records the displacement of the clouds based on their coordinates (Figure 17). At the moment the software calculates the displacement of a cloud over one second (or 30 frames). By increasing the interval over which the software records the position of the clouds, the clouds' displacement becomes more clearly visible. Even though the precise interval for recording the cloud positions is still to be determined, prediction systems from the scientific literature indicate that cloud displacement should be recorded every 20-30 seconds [63], [92], which for the webcam used means every 600 to 900 frames. As the software approach of the Solar Predictor differs from the ones in the researched scientific literature, the displacement recording rate may differ as well. This means that, in the event of future improvements, the best approach would be to record the displacement and the resulting prediction at various time intervals (e.g. 5 s, 10 s, 15 s) for the same samples and compare them with the real time needed for the cloud to cover and exit the sun. This would determine the most suitable time interval for recording the cloud displacement.
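
A minimal sketch of this recommendation, assuming a 30 fps webcam and hypothetical class and variable names (this is not the project's code), could look as follows:

```python
# Sketch of recording cloud displacement at a configurable interval instead of every second.
# All names and values are illustrative; the webcam is assumed to deliver 30 frames per second.

FPS = 30

class DisplacementRecorder:
    def __init__(self, record_interval_s=10):
        self.record_interval_s = record_interval_s
        self.record_interval_frames = FPS * record_interval_s
        self.frame_count = 0
        self.previous_position = None

    def update(self, cloud_position):
        """Call once per frame with the detected cloud centre (x, y) in pixels.

        Returns the estimated cloud speed in pixels per second each time a full
        recording interval has elapsed, and None otherwise.
        """
        self.frame_count += 1
        if self.frame_count % self.record_interval_frames != 0:
            return None
        speed = None
        if self.previous_position is not None:
            dx = cloud_position[0] - self.previous_position[0]
            dy = cloud_position[1] - self.previous_position[1]
            displacement = (dx ** 2 + dy ** 2) ** 0.5
            speed = displacement / self.record_interval_s
        self.previous_position = cloud_position
        return speed

# One recorder per interval to be compared against the measured times (e.g. 5 s, 10 s, 15 s).
recorders = [DisplacementRecorder(s) for s in (5, 10, 15)]
```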


Another factor that can be further developed is the approximation of the clouds' shape. In the current version of the software, a detected cloud is approximated as a circle, this being an embedded functionality of the OpenCV 3.4 blob detection library. To further improve the shape approximation, a new method can be created so that the detected clouds are approximated as ellipses. In this way, the errors will be reduced for clouds that have elongated shapes. Furthermore, another approach would be to implement an edge detection algorithm on the Bit Mask that would extract or approximate the outer edges of the cloud, offering the possibility of a more accurate cloud detection. However, it is important to note that such algorithms are highly resource-intensive and may require a high-performance CPU [99]–[103].
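
A possible sketch of the suggested ellipse approximation, using contour extraction on the Bit Mask followed by OpenCV's ellipse fitting, is shown below; the input file name and the area threshold are assumptions:

```python
import cv2

# 'mask' is assumed to be the binary Bit Mask produced by the background reduction step.
mask = cv2.imread("bit_mask.png", cv2.IMREAD_GRAYSCALE)  # placeholder input

# Extract the outer contours of the white (cloud) regions.
# OpenCV 3.x returns (image, contours, hierarchy); OpenCV 4.x returns (contours, hierarchy).
found = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
contours = found[0] if len(found) == 2 else found[1]

ellipses = []
for contour in contours:
    # fitEllipse needs at least 5 points; very small regions are skipped as noise.
    if len(contour) >= 5 and cv2.contourArea(contour) > 500:
        ellipses.append(cv2.fitEllipse(contour))  # (centre, axes, rotation angle)

# Draw the fitted ellipses for visual inspection.
preview = cv2.cvtColor(mask, cv2.COLOR_GRAY2BGR)
for ellipse in ellipses:
    cv2.ellipse(preview, ellipse, (0, 0, 255), 2)
cv2.imwrite("ellipse_preview.png", preview)
```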

In order to keep in line with the project requirement to create a low resource-intensive system, both from a hardware and a software perspective, implementing an Artificial Neural Network in the Solar Predictor was considered redundant. However, a relatively new machine learning library, TensorFlow, represents a possible alternative for the creation of a complete ANN [104]. This can be done by making use of a pre-trained Convolutional Neural Network model called Inception, created by Google, which makes it possible to apply what was learned in a previous training session to a new one [105]–[107].

TensorFlow makes it possible to retrain only the last layer of the network, making the process faster and less power-hungry than training a full model on CPUs [108]. Being an open-source library that focuses on Machine Learning and Deep Neural Networks, it presents numerous advantages, such as fast compilation times in comparison to other similar libraries [109], and it provides an Application Programming Interface (API) for Python for building and executing computational graphs [104]. Above all, the main advantage of TensorFlow is that it does not need a large amount of computing power or time; this is because a model can be compiled on a separate device and then loaded and executed on devices that have limited storage space [108].

The use of the TensorFlow libraries mentioned above would allow the addition of a cloud classifier that could recognise different types of clouds and adjust the prediction for each case, depending on their optical thickness. For example, an optically thin cloud that passes in front of the sun still allows solar light to pass through, but at a lower intensity. Depending on the type of cloud, the percentage of light that reaches the PV system can be added to the prediction, thereby providing more information to the end user.
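
One possible way to build such a classifier through transfer learning is sketched below with the tf.keras API and an ImageNet-pretrained InceptionV3 model; the number of cloud classes, the input size and the commented-out training call are assumptions, not part of the project:

```python
import tensorflow as tf

NUM_CLOUD_CLASSES = 4        # hypothetical classes, e.g. clear, thin, thick, overcast
IMAGE_SIZE = (299, 299)      # input size expected by InceptionV3

# Load InceptionV3 pre-trained on ImageNet, without its final classification layer.
base_model = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, input_shape=IMAGE_SIZE + (3,))
base_model.trainable = False  # retrain only the new top layer, as described above

# Add a small classification head for the cloud classes.
model = tf.keras.Sequential([
    base_model,
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(NUM_CLOUD_CLASSES, activation="softmax"),
])

model.compile(optimizer="adam",
              loss="sparse_categorical_crossentropy",
              metrics=["accuracy"])

# 'train_images' and 'train_labels' are placeholders for a labelled cloud image data set.
# model.fit(train_images, train_labels, epochs=5)
```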

The findings from the scientific literature highlighted the fact that the creation of similar systems required a longer time for data acquisition and validation. One relevant example is the forecasting system developed by Chu et al., which achieved predictions for intra-hour time intervals of up to 20 minutes, having allocated six months for data acquisition alone and another six months for software preparation [59], [61]. Testing the created software on a larger number of samples would be another important factor in the further improvement of the Solar Predictor. By allocating a longer time for data acquisition, the statistical tests can output the accuracy and errors based on a larger sample population.

Overall, the Solar Predictor project was very challenging and demonstrated that cloud motion and solar irradiance can be predicted at an intra-hour temporal resolution for time intervals up to one minute, even when lower resources and significantly less time for acquiring data and validation are allocated.


List of definitions and abbreviations

ANN = Artificial Neural Network

API = Application Programming Interface

Azimuth = The angular distance between a celestial object and the observer

CCD = Charge Coupled Device

CMOS = Complementary Metal Oxide Semiconductor

CPU = Central Processing Unit

HSV = Hue, Saturation and Value

Intra-hour = Within an hour; in the context of this report, it refers to temporal resolution

IR = Infrared

KB = Kilobyte

MBE = Mean Bias Error

MP = Megapixel

ND Filter = Neutral Density Filter

Overcast (sky) = Meteorological condition of clouds obscuring at least 95% of the sky

PV = Photovoltaic

RGB = Red, Green and Blue

RMSE = Root Mean Square Error

USB = Universal Serial Bus
