
Wind Speed Analysis of Cox's Bazar Using ANN

Submitted By alvimahmud
Title: Wind Speed Prediction Using Artificial Neural Network (ANN)

Abstract: The crisis of fossil-based fuel around the world has led to research into renewable energy sources. One of the oldest uses of renewable energy was harnessing the wind to generate electrical or mechanical power with windmills. To use it efficiently, the wind speed, which determines the wind power, must be known beforehand. Wind speed is a random variable depending on meteorological variables such as atmospheric pressure, temperature and relative humidity. Methods currently applied to predict wind speed include statistical models, intelligent systems, time series models, fuzzy logic and neural networks. In this report our focus is on using an Artificial Neural Network to predict the wind speed on a daily basis.

Chapter 1

1.1 Introduction

Bangladesh has a 724 km long coastal area where the south-westerly trade wind and the sea breeze make the use of wind as a renewable energy source very visible. However, not much systematic wind study has been made, and adequate information on wind speeds over the country, particularly at the hub heights of wind machines, is not available. A previous study (1986) of the wind monitoring stations of the Bangladesh Meteorological Department (BMD) found wind speed to be low near ground level, at heights of around 10 metres. The Chittagong–Cox's Bazar seacoast and the coastal offshore islands appeared to have better wind speeds. Measurements at 20 m and 30 m heights have since been made by BCAS, GTZ and BCSIR, and the WERM project of LGED carried out measurements at heights of 20 and 30 m at 20 locations all over Bangladesh (Bangladesh Renewable Energy Report, Asian and Pacific Centre for Transfer of Technology of the United Nations Economic and Social Commission for Asia and the Pacific (ESCAP)). The apparent features are that wind speeds exhibit a strong seasonal cycle, lower in the September to February period and higher in summer (March to August). Secondly, wind speeds exhibit a diurnal cycle, generally peaking in the afternoon and weakest at night; the trends are similar in West Bengal, India. (Reference: Bangladesh Centre for Advanced Studies (BCAS), in collaboration with the Local Government and Engineering Department (LGED) and the Energy Technology & Services Unit (ETSU), UK, financially supported by the British Government, 1995-1996, http://www.sdnbd.org/wind.htm) The studies done by BMD and BCAS have pointed out some favourable locations which can be exploited for wind power production, and the government has made plans for establishments in these areas, which were published in the country report.
One of the most eligible locations for using wind power is Cox's Bazar district, which has an area of 2491.86 km². It is located at 21°35′0″N 92°01′0″E.

The major sources meeting the demand for power consumption around the globe have been petroleum, coal and gas. But they cannot be reused, so they are limited and are becoming more costly day by day. Over the years, energy sources have become key figures in socio-economic changes around the world. In addition, the greenhouse effect has caused governments to limit their usage of fossil fuels. It is inevitable that fossil-based fuels will be replaced, so research on renewable energy sources is a topic of great interest in today's world. Wind power is a good source, as the source material, wind, is free for all to use. Windmills use the strong wind that blows in the atmosphere near the surface, resulting from the rotation of the Earth. But to use wind power, the speed of the wind must be known in advance, because it relates directly to the improvement of power transmission scheduling and resource allocation, and hence to the reliability of the power grid. Bus load forecasting is required for planning and operation of the power system and power distribution. Wind speed measurement is also needed to find a suitable location in the desired area, because without strong wind it is pointless to establish a windmill. There are other applications where wind speed prediction is a major requirement, such as satellite launching, target tracking (especially in defence systems) and weather forecasting.

Wind speed is one of the many meteorological parameters that control the weather. The parameters are interdependent, but the real problem is that the system is governed by the fluid equations, which form a nonlinear, but still deterministic, set of equations. Nonlinear equations cannot in general be solved analytically; we need to solve them numerically. Since the system is deterministic, we should in principle be able to predict future values of the wind speed in advance, but the whole weather system is turbulent and chaotic (Sneyers, Raymond (1997). "Climate Chaotic Instability: Statistical Determination and Theoretical Background". Environmetrics 8 (5): 517–532). A turbulent system means that the energy generation (how the wind gains speed) happens on relatively large scales, while exactly how the wind behaves depends on the dissipation of energy, which happens on very small scales. Therefore we need to simulate the largest scales (essentially the entire atmosphere, or at least the entire lower atmosphere) while still resolving enough detail of every event. A chaotic system is a dynamic system that is highly dependent on initial conditions: small differences in initial conditions (such as those due to rounding errors in numerical computation) yield widely diverging outcomes, rendering long-term prediction impossible in general (Stephen H. Kellert, In the Wake of Chaos: Unpredictable Order in Dynamical Systems, University of Chicago Press, 1993, p. 32, ISBN 0-226-42976-8). As mentioned earlier, weather follows the fluid equations, which require numerical solution, and their chaotic nature makes this difficult, as no computer available so far can perform absolutely error-free computation. It is therefore best to predict the wind on a short-term basis.

There are different models to predict wind speed. Time series models are based on historical wind data and statistical methods. The simplest of these is the autoregressive moving average (ARMA) model, which is usually used as the benchmark. This analysis describes the process through a number of equations involving a large amount of information, which is a major drawback of time series models. Fuzzy models are also used to estimate wind speed; a fuzzy model trained with a genetic-algorithm-based learning scheme, applied to an electrical power production plant, was found to be more efficient than conventional ARMA models. Regression techniques are found to be less efficient than Artificial Neural Network (ANN) models. ANN itself has various algorithms; we will be using a multi-layered ANN with the back propagation algorithm (BPN) for computing. The data used in the project were collected from the Bangladesh Meteorological Department for the Cox's Bazar station. Wind resource assessment over Bangladesh has been done independently by RISOE National Laboratory, Denmark, using the KAMM (Karlsruhe Atmospheric Meso-scale) model, which uses upper-atmosphere wind speed data and satellite information. Based on a comparison between the KAMM results (by RISOE) and WAsP results (by RERC), a wind resource map for Bangladesh has been developed.
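Since the ARMA family is named above as the usual benchmark, a minimal sketch may help fix ideas. The snippet below is not the model used in the report; it is a toy AR(1) benchmark (the simplest autoregressive case, with a least-squares coefficient estimate) on hypothetical daily wind speeds:

```python
# Toy AR(1) benchmark for day-ahead wind speed prediction.
# This is an illustrative sketch only; the report's benchmark is the
# more general ARMA model. The sample series is hypothetical.

def fit_ar1(series):
    """Estimate phi in x[t] ~ phi * x[t-1] by least squares."""
    num = sum(series[t] * series[t - 1] for t in range(1, len(series)))
    den = sum(series[t - 1] ** 2 for t in range(1, len(series)))
    return num / den

def predict_next(series, phi):
    """One-step-ahead forecast from the last observed value."""
    return phi * series[-1]

wind = [4.2, 4.8, 5.1, 4.9, 5.6, 6.0, 5.7, 5.2]  # hypothetical speeds (knots)
phi = fit_ar1(wind)
print(round(predict_next(wind, phi), 2))
```

A full ARMA(p, q) model adds further autoregressive lags and a moving-average part over past forecast errors, but the estimate-then-extrapolate structure is the same.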

1.2 Literature overview

As we have mentioned earlier, various methods are currently being used all over the world to predict wind speed on both a short-term and a long-term basis. We will review the work done by different researchers in order to compare the different techniques.

Probably the earliest model was developed by McCarthy [28] for the Central California Wind Resource Area. It was run in the summers of 1985-87 on an HP 41CX programmable calculator, using meteorological observations and local upper air observations. The program was built around a climatological study of the site and had a forecast horizon of 24 hours. It forecast daily average wind speeds with better skill than either persistence or climatology alone. (Ed McCarthy: Wind Speed Forecasting in the Central California Wind Resource Area. Paper presented at the EPRI-DOE-NREL Wind Energy Forecasting Meeting, March 23, 1998, Burlingame, CA)
Papke et al. [Papke, U., A. Petersen, and V. Köhne: Evaluation and Short-Time-Forecast of WEC-Power within the power grid of SCHLESWAG AG. Proceedings of the 1993 ECWEC in Travemünde, 8.-12. March 1993, pp. 770-773, ISBN 0-9521452-0-0] used a data assimilation technique together with three models to get a forecast of about 1 hour for the wind power fed into the Schleswag grid in the German land of Schleswig-Holstein. These three models were a statistical model analyzing the trend of the last three hours, a translatorical model which moved a measured weather situation over the utility's area, and a meteorological model based on very simple pressure difference calculations. No accuracy was given. The translatorical model developed into the Pelwin system [Papke, U., and V. Köhne: Pelwin - ein Windleistungsprognosesystem zur Unterstützung des EVU-Lastverteilers. In 2. Deutsche Windenergie-Konferenz 1994, Tagungsband Teil 1 (DEWEK'94), Wilhelmshaven, Deutschland, 1994. Deutsches Windenergie-Institut GmbH (DEWI)]. On a time scale of one hour, the weather fronts coming over the North Sea to Schleswig-Holstein are predicted in order to anticipate large negative power gradients due to the shutdown of wind turbines.

Among the time series models for wind speed prediction, Fatih O. Hocaoğlu, Mehmet Fidan and Ömer N. Gerek, of Afyon Kocatepe University and Anadolu University respectively, proposed a model using the Mycielski approach (http://ieeexplore.ieee.org/xpl/freeabs_all.jsp?arnumber=5136524), which relies on the Mycielski algorithm. The Mycielski algorithm uses all of the available past samples of the data. It searches for the longest string (or data array) in the "history" that matches the longest suffix string of the original data (which corresponds to the samples at the end of the array). Once the longest such repeating string is found in the past history samples, the sample following the matched string is taken as the prediction value. A quite famous model is the Autoregressive Moving Average (ARMA) model; works based on this model are numerous, e.g. (Jiang, W., Yan, Z., Feng, D.-H. and Hu, Z. (2011), Wind speed forecasting using autoregressive moving average/generalized autoregressive conditional heteroscedasticity model. European Transactions on Electrical Power. doi: 10.1002/etep.596). A derivative of this is the Autoregressive Integrated Moving Average (ARIMA) model, and works have also been done with it (e.g. Wind speed prediction using statistical regression and neural network by Makarand A. Kulkarni; Comparison of Models for Wind Speed Forecasting by J.C. Palomares-Salas). The ANEMOS project carried out in Europe, funded by the European Commission, compared some of the tools used today for wind speed prediction (ANEMOS_D1.1_StateOfTheArt_v1.1). It stated that if we divide the prediction methods into two categories, namely 1. physical and 2. statistical, their performance could be summarized as below:
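The Mycielski idea described above (find the longest repeated suffix, predict the sample that followed its earlier occurrence) can be sketched directly. This is an assumed, simplified form for illustration; the wind speeds are quantized to integers so that exact matching applies:

```python
# Simplified sketch of the Mycielski prediction scheme: search the
# history for the longest earlier occurrence of the series' suffix
# and predict the sample that followed it. Data are hypothetical
# quantized wind speeds.

def mycielski_predict(history):
    """Predict the next sample from the longest repeated suffix."""
    n = len(history)
    for length in range(n - 1, 0, -1):          # try longest suffix first
        suffix = history[n - length:]
        for start in range(0, n - length):      # scan earlier history
            if history[start:start + length] == suffix:
                return history[start + length]  # sample after the match
    return history[-1]  # no repeat found: fall back to persistence

series = [3, 5, 4, 6, 3, 5, 4]
print(mycielski_predict(series))  # suffix [3, 5, 4] recurs at index 0 -> 6
```

Real wind speeds are continuous, so practical versions of the algorithm quantize the data or relax the matching criterion before searching the history.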

(I): Short-term statistical approaches using only Supervisory Control and Data Acquisition (SCADA) measurements as input (horizons: <6 hours).
(II): Physical or statistical approaches. Good performance for >3 hours.
(II)+ (III): Physical approach. Good performance for >3 hours.
(I)+ (II): Statistical approach.
(I)+ (II) + (III): Combined approach.

Recently, the most employed method has been Artificial Neural Networks, which we will use in our project. The reasons to pick ANN over the other methods are that it can provide more accuracy through fault tolerance and adaptive learning, and that it is self-operating. Also, its possibilities have not really been explored much in Bangladesh. We will use the back propagation algorithm with a multilayer perceptron based model. Our primary goal is to predict the wind speed one day ahead using various meteorological parameters as inputs to the ANN.

Chapter 2

2.1 Introduction

In this chapter we will discuss the history and evolution of the Artificial Neural Network. The initial developments in neural networks, the biological neuron, which is essentially what we try to recreate with a neural network, and the current progress of ANNs are presented here. The theories and mathematics of neural networks are covered as well. We will mainly emphasize the algorithm used in the project, but the other algorithms are briefly discussed too.

2.2 History of the Artificial Neural Network

The study of the human brain dates back thousands of years, but it has only been with the dawn of modern electronics that man has begun to try to emulate the human brain and its thinking processes. The modern era of neural network research is credited to the work done by neurophysiologist Warren McCulloch and young mathematical prodigy Walter Pitts in 1943. McCulloch had spent 20 years of his life thinking about the "event" in the nervous system that allowed us to think, feel, etc. It was only when the two joined forces that they wrote a paper on how neurons might work, and they designed and built a primitive artificial neural network using simple electric circuits. They are credited with the McCulloch-Pitts Theory of Formal Neural Networks. (Haykin, 1994, pg: 36) (http://www.helsinki.fi)
The next major development in neural network technology arrived in 1949 with the book "The Organization of Behavior" by Donald Hebb. The book supported and further reinforced McCulloch and Pitts's theory about neurons and how they work. A major point brought forward in the book described how neural pathways are strengthened each time they are used. As we shall see, this is true of neural networks, specifically in training a network. (Haykin, 1994, pg: 37) (http://www.dacs.dtic.mil)
During the 1950s traditional computing began, and as it did, it left research into neural networks in the dark. However, certain individuals continued research into neural networks. In 1954 Marvin Minsky wrote a doctoral thesis, "Theory of Neural-Analog Reinforcement Systems and its Application to the Brain-Model Problem", which was concerned with research into neural networks. He also published a scientific paper entitled "Steps toward Artificial Intelligence", one of the first papers to discuss AI in detail; it contained a large section on what is nowadays known as neural networks. In 1956 the Dartmouth Summer Research Project on Artificial Intelligence began researching AI, in what was to be the primitive beginnings of neural network research. (http://www.dacs.dtic.mil)
Years later, John von Neumann thought of imitating simplistic neuron functions by using telegraph relays or vacuum tubes. This led to the invention of the von Neumann machine. About 15 years after the publication of McCulloch and Pitt's pioneer paper, a new approach to the area of neural network research was introduced. In 1958 Frank Rosenblatt, a neuro-biologist at Cornell University began working on the Perceptron. The perceptron was the first "practical" artificial neural network. It was built using the somewhat primitive and "ancient" hardware of that time. The perceptron is based on research done on a fly's eye. The processing which tells a fly to flee when danger is near is done in the eye. One major downfall of the perceptron was that it had limited capabilities and this was proven by Marvin Minsky and Seymour Papert's book of 1969 entitled, "Perceptrons". (http://www.dacs.dtic.mil) (Masters, 1993, pg: 4-6)

Between 1959 and 1960, Bernard Widrow and Marcian Hoff of Stanford University in the USA developed the ADALINE (ADAptive LINear Elements) and MADALINE (Multiple ADAptive LINear Elements) models. These were the first neural networks that could be applied to real problems. The ADALINE model is used as a filter to remove echoes from telephone lines. The capabilities of these models were again proven limited by Minsky and Papert (1969). (http://www.dacs.dtic.mil) (http://www.geocities.com)

(Haykin, 1994, pg: 38) The period between 1969 and 1981 brought much public attention to neural networks. The capabilities of artificial neural networks were completely blown out of proportion by writers and producers of books and movies. People believed that such neural networks could do anything, resulting in disappointment when they realized that this was not so. Asimov's television series on robots highlighted humanity's fears of robot domination as well as the moral and social implications if machines could do mankind's work. Writers of best-selling novels like "2001: A Space Odyssey" created fictional sinister computers. These factors contributed to large-scale critique of AI and neural networks, and thus funding for research projects came to a near halt. (http://www.dacs.dtic.mil)

An important aspect that did come forward in the 1970's was that of self-organizing maps (SOM's). Self-organizing maps will be discussed later in this project. (Haykin, 1994, pg: 39) In 1982 John Hopfield of Caltech presented a paper to the scientific community in which he stated that the approach to AI should not be to purely imitate the human brain but instead to use its concepts to build machines that could solve dynamic problems. He showed what such networks were capable of and how they would work. It was his articulate, likeable character and his vast knowledge of mathematical analysis that convinced scientists and researchers at the National Academy of Sciences to renew interest into the research of AI and neural networks. His ideas gave birth to a new class of neural networks that over time became known as the Hopfield Model. (http://www.dacs.dtic.mil) (Haykin, 1994, pg: 39)

At about the same time at a conference in Japan about neural networks, Japan announced that they had again begun exploring the possibilities of neural networks. The United States feared that they would be left behind in terms of research and technology and almost immediately began funding for AI and neural network projects. (http://www.dacs.dtic.mil)

1986 saw the first annual Neural Networks for Computing conference, which drew more than 1800 delegates. In 1986 Rumelhart, Hinton and Williams reported on the development of the back-propagation algorithm. The paper discussed how back-propagation learning had emerged as the most popular learning algorithm for the training of multi-layer perceptrons. With the dawn of the 1990s and the technological era, many advances in the research and development of artificial neural networks have been occurring all over the world. Nature itself is living proof that neural networks do in actual fact work. The challenge today lies in finding ways to electronically implement the principles of neural network technology. Electronics companies are working on three types of neuro-chips, namely digital, analog and optical. With the prospect that these chips may be implemented in neural network design, the future of neural network technology looks very promising.

2.3 Fundamental concepts regarding Neural Networks

Artificial neural networks have been developed as generalizations of mathematical models of human cognition or neural biology, based on the assumptions that: information processing occurs at many simple elements called neurons; signals are passed between neurons over connection links; each connection link has an associated weight, which, in a typical neural net, multiplies the signal transmitted; and each neuron applies an activation function (usually nonlinear) to its net input (the sum of weighted input signals) to compute its output signal. This is a simplified model of a biological neuron; in reality the biological neuron is still many times faster and more accurate than the models, because of the sheer number of neurons and interconnections in a biological neural network and their operation in the natural asynchronous mode.

Figure: A biological neuron

2.3.1 Artificial Neurons: As we have mentioned, the artificial neuron is a simplified replica of the biological neuron. Let us inspect the model of an artificial neuron.

Figure: Artificial Neuron

As we can see, the model is divided into three segments, namely 1) the input layer, where the input data for the parameters assigned to the model are given; 2) the middle layer, better known as the hidden layer; and finally 3) the output layer. The inputs all have different weights; the hidden layer sums the weighted inputs, and the weighted sum is passed through a nonlinear filter Ф, known as the activation function. After the weighted sums are passed through the activation functions, we get the output values.
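The single-neuron computation described above (weighted sum followed by a nonlinear activation) can be written in a few lines. The weights, bias and inputs below are illustrative values, not ones taken from the report:

```python
import math

# One artificial neuron: weighted sum of inputs plus a bias term,
# passed through a sigmoid activation function.

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

def neuron_output(inputs, weights, bias):
    weighted_sum = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(weighted_sum)

# Example: three inputs, three weights (hypothetical values).
out = neuron_output([0.5, 0.8, 0.2], [0.4, -0.6, 0.9], bias=0.1)
print(round(out, 4))
```

Because the sigmoid squashes any weighted sum into (0, 1), stacking such neurons in layers gives the network its nonlinear modelling ability.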

2.3.2 Neural network Architecture & training:
Neural networks are classified into three different architectures: single-layer feed-forward, multi-layer feed-forward and recurrent network architectures, each resembling the biological neurons, the difference being the degree of resemblance. A neural network must be trained to make use of its features. Here, training means feeding the neural network a particular set of data so that it becomes accustomed to the environment of the problem it will later be dealing with; it is quite similar to "learning". There are different methods for training a neural network as well, namely supervised, unsupervised and reinforced learning. Supervised and unsupervised learning methods have found expression through rules such as Hebbian learning, gradient descent learning, competitive learning and stochastic learning.

2.3.3 Backpropagation Algorithm:

A single neuron's computational ability is very limited; only certain pattern recognition tasks can be done easily with a single-neuron network. The true potential of the neural network is extracted by designing a layered architecture of neurons, where the neurons are interconnected, with the data moving forward and the errors fed backward for further correction. The larger the network, the greater the computational ability of the ANN, though the converse is not always true. Arranging the neurons in layers or stages is also supported by the fact that the human brain has its neurons in a layered pattern. The back propagation algorithm revived the ANN, as it introduced the training method by virtue of which the network can adapt to the present data and predict future data.

Figure: Multilayer feed forward

Technically speaking, backpropagation calculates the gradient of the error of the network with respect to the network's modifiable weights (Paul J. Werbos (1994). The Roots of Backpropagation: From Ordered Derivatives to Neural Networks and Political Forecasting. New York, NY: John Wiley & Sons, Inc.). Considering a linear activation function, the output of the input layer equals its input. Taking one set of data,

{O}_I = {I}_I ……………..(1)

The hidden neurons are connected to the input neurons through synapses; the ith input neuron is connected to the pth hidden neuron with weight V_ip. So the input to the pth hidden neuron is the weighted sum of the input-layer outputs:

I_Hp = V_1p O_I1 + V_2p O_I2 + … + V_lp O_Il ……(2)
(p = 1, 2, 3, …, m)

Denoting the weight (connectivity) matrix between the input neurons and the hidden neurons as [V]_{l×m}, the input vector to the hidden layer is

{I}_H = [V]^T {O}_I ………(3)

The output of the pth hidden neuron is given by the sigmoidal function

O_Hp = 1 / (1 + exp{-λ(I_Hp - θ_Hp)}) ……………..(4)

so the output vector of the hidden layer is

{O}_H = 1 / (1 + exp{-λ({I}_H - {θ}_H)}) …………………..(5)

Again, the input to the qth output neuron is the weighted sum of the hidden-layer outputs:

I_Oq = W_1q O_H1 + W_2q O_H2 + … + W_mq O_Hm ………….(6)

The final output of the qth output neuron is likewise given by the sigmoidal function.
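The forward pass defined by equations (1)-(6) can be sketched directly in code. The weight matrices, λ and thresholds below are illustrative values, not ones trained in the report:

```python
import math

# Forward pass of the multilayer network in equations (1)-(6):
# input-layer outputs equal the inputs (1), hidden-layer inputs are
# weighted sums through V (2)-(3), hidden outputs apply the sigmoid
# (4)-(5), and output-layer inputs are weighted sums through W (6),
# again passed through the sigmoid. lam and theta default to 1 and 0.

def sigmoid(x, lam=1.0, theta=0.0):
    return 1.0 / (1.0 + math.exp(-lam * (x - theta)))

def forward(inputs, V, W):
    O_I = inputs                                   # equation (1)
    I_H = [sum(V[i][p] * O_I[i] for i in range(len(O_I)))
           for p in range(len(V[0]))]              # equations (2)-(3)
    O_H = [sigmoid(x) for x in I_H]                # equations (4)-(5)
    I_O = [sum(W[p][q] * O_H[p] for p in range(len(O_H)))
           for q in range(len(W[0]))]              # equation (6)
    return [sigmoid(x) for x in I_O]               # final sigmoid output

V = [[0.5, -0.2], [0.3, 0.8]]   # input-to-hidden weights, l x m
W = [[0.7], [-0.4]]             # hidden-to-output weights, m x n
print(forward([1.0, 0.5], V, W))
```

During training, backpropagation would compare this output with the target, propagate the error backward through W and V, and adjust both matrices down the error gradient.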

Chapter 3

Effect of different parameters on the training process

3.1 Introduction

In this chapter, we will discuss the parameters that affect the determination of the wind speed, as well as the parameters of the program used to predict the wind speed and how varying them can affect the prediction. The system model and the data used in the project are also discussed in this section.

3.2 System model

Rh%(d-1) = previous day's relative humidity (in %)
SLP(d-1) = previous day's mean sea level pressure (in millibars)
Tmin(d-1) = previous day's minimum temperature (in degrees Celsius)
Tmax(d-1) = previous day's maximum temperature (in degrees Celsius)
WS(d-1) = previous day's wind speed (in knots)

For these variables, the system model given below has been taken into account to predict the wind speed of the following day:

WS(d) = ƒ(Rh%(d-1), SLP(d-1), Tmax(d-1), Tmin(d-1), WS(d-1))
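The system model above maps each day's five previous-day parameters to that day's wind speed. A sketch of how daily records would be turned into input/target pairs for training (the records and field names below are hypothetical, not the BMD data):

```python
# Build (input, target) training pairs for the day-ahead model:
# the five previous-day parameters form the input vector and the
# next day's wind speed is the target. Records are hypothetical.

def make_training_pairs(records):
    """records: list of dicts with keys 'rh', 'slp', 'tmin', 'tmax', 'ws'."""
    pairs = []
    for d in range(1, len(records)):
        prev = records[d - 1]
        x = [prev['rh'], prev['slp'], prev['tmin'], prev['tmax'], prev['ws']]
        y = records[d]['ws']  # wind speed of the following day
        pairs.append((x, y))
    return pairs

days = [
    {'rh': 78, 'slp': 1009.2, 'tmin': 22.1, 'tmax': 30.4, 'ws': 5.0},
    {'rh': 81, 'slp': 1008.7, 'tmin': 23.0, 'tmax': 31.0, 'ws': 6.2},
    {'rh': 75, 'slp': 1010.1, 'tmin': 21.5, 'tmax': 29.8, 'ws': 4.8},
]
pairs = make_training_pairs(days)
print(len(pairs), pairs[0])
```

Each input vector would typically be normalized before being fed to the ANN, since the parameters have very different ranges (e.g. pressure near 1000 mb versus wind speed of a few knots).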

3.3 Data Acquiring
Below are the graphs of the data for the above-mentioned variables for the training period, which runs from 01-01-2000 to 30-12-2009.

Figure: Graph for relative humidity data
