Oil Price Prediction using Artificial Neural Networks
Author: Siddhant Jain, 2010B3A7506P, Birla Institute of Technology and Science, Pilani

Abstract: Oil is an important commodity for every industrialised nation in the modern economy. Upward and downward trends in oil prices have crucially influenced economies over the years, and a priori knowledge of such trends would be useful to all concerned, be it a firm or a whole country. Through this paper, I intend to use the power of Artificial Neural Networks (ANNs) to develop a model which can be used to predict oil prices. ANNs are widely used for modelling a multitude of financial and economic variables and have proven to be a very powerful tool for handling large volumes of data effectively and analysing them to perform meaningful calculations. MATLAB has been employed as the medium for developing the neural network and for efficiently handling the volume of calculations involved. The following sections deal with the theoretical and practical intricacies of the aforementioned model. The appendix includes snapshots of the generated results and other code snippets.

Artificial Neural Networks: Understanding

To understand any of the ensuing topics and the details discussed therein, it is imperative to understand what we actually mean by neural networks, so I first delve into this topic. In the simplest terms, a neural network can be defined as a computer system modelled on the human brain and nervous system. Wikipedia elaborates on this definition as follows: “An Artificial Neural Network, often just called a neural network, is a mathematical model inspired by biological neural networks… …In most cases a neural network is an adaptive system that changes its structure during a learning phase.” Both of these definitions are correct in their own place and context. In my understanding, the simplest of neural networks boil down to an alternative statistical approach to the least squares regression problem. However, advanced neural networks have the powerful capability of identifying influential patterns in a given set of data, which cannot be done easily given human computational limitations. This paper is concerned with the applications of neural networks in forecasting models, and without deviating any further we dive into the topic from the next section.

Artificial Neural Networks: Developing a Financial Model

Neural networks have become increasingly popular in finance and economics, and this sector has been the second largest sponsor of research in neural networks. The popularity can be attributed to the fact that neural networks tolerate noise and chaotic components better than any other statistical technique used for forecasting. Another obvious advantage of neural networks is their greater portability, in the sense that they can be ‘trained’ to learn new patterns. That said, neural networks can be excessively expensive in terms of computation time, especially if the amount of data is large (which is usually the case). Moreover, there are very few control parameters in a neural network, and for a researcher a lot depends on trial and error. Although rules of thumb exist, more often than not one needs to customise the neural network on a case-by-case basis, given the variable to be modelled and the data used.

Artificial Neural Networks: Forecasting

For time series forecasting, the most commonly used neural network is the backpropagation network. Backpropagation neural networks consist of a collection of inputs and processing units known as neurons. The neurons in each layer are fully connected through connection strengths called weights; weights essentially store the knowledge of a trained network. In addition to the processing neurons there is a biasing neuron connected to each processing unit in the hidden and output layers. The bias neuron has a value of positive one and serves a similar purpose as the intercept in regression models. There can be any number of hidden layers (and, effectively, hidden neurons) for each model, depending on the size and nature of the data set. Of course, there are also input and output neurons: input neurons carry the (assumed to be) independent variables, and output neurons represent the target variable for prediction.

Methodology

To develop the model I have adapted the approach developed by Boyd and Kaastra in their paper on the topic1. The paradigm is shown as follows:

All theoretical and practical details of the term paper will now be explained using this model as the template.
1: Designing a Neural Network for forecasting financial and economic time series, Iebeling Kaastra, Milton Boyd

Variable Selection

Selecting the variables is very important for any neural network, since it is one of the few things that we can control directly. It is of course crucial to have an economic understanding of which indicators are the most likely economic predictors.

For time series prediction, it is accepted practice, and of course logical, to keep the input and the output variable the same: the neural network predicts future values based on present values, which is time series prediction by definition. However, logic also dictates that there can be other factors influencing a given variable. In my adaptation, I have considered this extension and have used one exogenous variable as an additional input. The chosen exogenous variable is the supply of oil. It is basic economic theory that the price of any commodity is decided by its supply and demand, hence the choice. (I did not use demand as an input neuron because, given the nature of the commodity, the demand for oil will not alter much with price in the short run; so much depends on oil that there will always be enough takers regardless of its price.)

Data Collection

It makes sense, especially now, to further detail the choice of variables. To be precise, the model is used to predict the WTI spot index. The WTI spot index is the oil price index for the USA and is one of the three major indices used for quoting international oil prices. This being the case, I have considered the oil supply for the USA. The data for both variables can be obtained from the website of the U.S. Energy Information Administration (www.eia.gov). Although a researcher might be tempted to take daily data for oil prices, given the large sample it would provide for training, data of such high frequency is not advisable: daily series contain various inconsistencies and missing points where the market was closed or halted over weekends or due to unexpected events. So, I have used monthly data for my model. Again, to be precise: my data ranges from Jan 1986 to August 2012 and uses the average price over all days in a given month as the monthly price. The EIA, the agency which maintains the data, is a U.S. government agency and is reputed to be consistent and reliable. Both consistency and reliability of the data are important considerations that should be kept in mind by any researcher designing a neural network model. Values for both variables were stored in Excel sheets for later use by the MATLAB code.
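As a minimal sketch of this step, the two series could be loaded from the Excel sheets as follows; the file names here are hypothetical, not taken from the paper:

% Load the two monthly series from Excel (hypothetical file names)
prices = xlsread('wti_monthly.xlsx');    % average WTI spot price, Jan 1986 - Aug 2012
supply = xlsread('us_oil_supply.xlsx');  % U.S. oil supply for the same months
y = prices(:)';   % target series as a row vector, the layout MATLAB's networks expect
u = supply(:)';   % exogenous input series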

Data Pre-processing

Data pre-processing refers to analysing and transforming the input and output variables to minimise noise, highlight important relationships, detect trends, and flatten the distribution of the variables to assist the neural network in learning the relevant patterns. Naturally, neural network training can be made more efficient by performing certain pre-processing steps on the network inputs and targets. Generally, sigmoid transfer functions are used in the hidden layers of neural networks. These functions become essentially saturated when the net input is greater than three (exp(−3) ≈ 0.05). If this happens at the beginning of the training process, the gradients will be very small, and the network training will be very slow. In the first layer of the network, the net input is a product of the input times the weight, plus the bias. If the input is very large, then the weight must be very small in order to prevent the transfer function from becoming saturated. It is standard practice to normalize the inputs before applying them to the network.

Generally, the normalization step is applied to both the input vectors and the target vectors in the data set. In this way, the network output always falls into a normalized range; the output can then be reverse-transformed back into the units of the original target data when the network is put to use in the field. Apart from this, sampling and filtering of the data are also done. Filtering essentially removes observations from the dataset so that biases which do not really affect the target value are removed. In MATLAB, this pre-processing step is done by default when a neural network is created, though one can of course define one's own functions and methods for pre-processing. The most commonly used pre-processing routines are normalization and removal of unknown values.
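For illustration, a minimal sketch of the normalization step using MATLAB's mapminmax, which the toolbox applies by default as a network processing function; performing it explicitly also keeps the settings needed to reverse the transform later:

% Rescale each series to [-1, 1] and keep the settings for the reverse transform
[yn, ys] = mapminmax(y);   % normalized target series and its settings
[un, us] = mapminmax(u);   % normalized exogenous input series
% ... train the network on un and yn ...
% predictions = mapminmax('reverse', yn_pred, ys);   % back to original price units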

Training, Testing and Validation Sets

When training multilayer networks, the general practice is to first divide the data into the following three subsets:
1. Training set, which is used for computing the gradient and updating the network weights and biases.
2. Validation set, whose error is monitored during the training process. The validation error normally decreases during the initial phase of training, as does the training set error. However, when the network begins to overfit the data, the error on the validation set typically begins to rise. The network weights and biases are saved at the minimum of the validation set error.
3. Test set, on which the model is finally tested.

Additionally, there are four different approaches to dividing the data into the aforementioned categories:
1. Divide the data randomly
2. Divide the data into continuous blocks
3. Divide the data using an index
4. Divide the data using an interleaved selection

For this paper I have divided the data randomly. The logic behind doing so is that in economic modelling there can be data sets which follow a continuous trend; if such a set is used for training, it will give erroneous results on future sets. It therefore becomes important to include data from everywhere, so that the whole range of behaviour is appropriately represented in the neural network. Another parameter to be decided is the proportion of data allotted to each set. In accordance with the theoretical studies, I have used the following proportions: Training: 65%, Testing: 20%, Validation: 15%.
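A minimal sketch of configuring this division in MATLAB, assuming net is the network object created in the architecture step below:

net.divideFcn = 'dividerand';        % divide the data randomly
net.divideParam.trainRatio = 0.65;   % training: 65%
net.divideParam.testRatio  = 0.20;   % testing: 20%
net.divideParam.valRatio   = 0.15;   % validation: 15%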

Neural Network Architecture

In this step we decide on different attributes pertaining to the neural network architecture, namely the number of hidden layers, the number of hidden neurons and the transfer functions. Although these are decided before the network is trained for the first time, they can be (and in most cases are) changed if the results of the training are not satisfactory. I discuss the theoretical aspects of each attribute below, together with the corresponding value used in my model.

Number of Hidden Layers: The hidden layers give the network its ability to generalise. In theory, it has been proved that a neural network with one hidden layer containing a sufficient number of hidden neurons can approximate any continuous function. Increasing the number of hidden layers not only increases the computational time but can also lead to over-fitting, resulting in poor forecasting performance. In most cases one hidden layer, or at most two, is sufficient for predicting a variable. For this term paper, I use two hidden layers.

Number of Hidden Neurons: There is no set formula to decide the number of hidden neurons; it is to an extent based on experimentation with the given data. However, one particular heuristic is well known and goes by the name of the pyramid rule. The pyramid rule states that if there are n input neurons and m output neurons, the total number of hidden neurons should be sqrt(m×n). Such a rule of thumb provides a good starting point for deciding the number of neurons, but certainly should not be accepted as the only option. After going through similar models developed by other researchers, and based on my own experimentation, I settled on 8 hidden neurons for my model.

Transfer Function: Transfer functions are mathematical functions that determine the output of a processing neuron. The most commonly used transfer function is the sigmoid function (which is also used by my code). The sigmoid function, often called the S-shaped function, can be logistic, arctangent, hyperbolic tangent and sometimes even algebraic in nature. An example of a sigmoid function and the corresponding plot are given below:
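A minimal sketch, of my own construction, of the logistic sigmoid and its characteristic S-shaped plot:

% The logistic sigmoid maps any net input into the interval (0, 1)
x = -6:0.1:6;
y = 1 ./ (1 + exp(-x));
plot(x, y); grid on;
xlabel('net input'); ylabel('output');
title('Logistic sigmoid transfer function');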

The following is a representation of the neural network models generated in MATLAB:

The NARX model used for initial training

The NARX (closed-loop) model used for prediction
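A minimal sketch, under assumptions, of how these two models are typically built with the toolbox's narxnet: an open-loop (series-parallel) network for training, then a closed-loop version for prediction. The delay orders shown are illustrative, not taken from the paper:

X = num2cell(un);  T = num2cell(yn);   % series as cell arrays of time steps
net = narxnet(1:2, 1:2, 8);            % input delays, feedback delays, 8 hidden neurons
[Xs, Xi, Ai, Ts] = preparets(net, X, {}, T);   % shift data for the delay lines
net = train(net, Xs, Ts, Xi, Ai);      % trains with Levenberg-Marquardt by default
netc = closeloop(net);                 % closed-loop NARX for multi-step prediction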

Evaluation (Performance) Criteria: To test the validity of the model, we need an evaluation criterion which can decide whether the model is reasonable or not. For this project I used the Mean Squared Error (MSE) over every value in the data. A lower MSE value is always preferred over a higher one.
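A minimal sketch of computing this criterion, reusing the names from the earlier snippets:

Y = net(Xs, Xi, Ai);             % network outputs on the prepared data
mseValue = perform(net, Ts, Y);  % MSE, the network's default performance function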

Training

Training a neural network involves iteratively presenting the network with examples of the correct known answers. The objective of training is to find the set of weights between the neurons that determines the global minimum of the error function. The backpropagation network uses the gradient descent algorithm, which adjusts the weights to move down the steepest slope of the error surface. Behind the scenes, for each successive iteration the computer takes the randomised sets for training, testing and validation and adjusts the weights towards an optimum. The iterations continue till the MSE reaches its minimum value; after that, a few more iterations are done to confirm that this is the global minimum and that the solution is not trapped in a local minimum. The weights for each iteration are stored, and the weights for the iteration with the minimum error are used for the model. I plot the graph of error versus iteration to emphasise this point (see appendix).

Algorithm used for training: In this section I would like to detail the intricacies of the neural network model used and the algorithms used therein. I use the NARX network for my model. The nonlinear autoregressive network with exogenous inputs (NARX) is a recurrent dynamic network, with feedback connections enclosing several layers of the network. A model for the same is given in the MATLAB documentation:

$$y(t) = f\left(y(t-1), y(t-2), \ldots, y(t-n_y),\; u(t-1), u(t-2), \ldots, u(t-n_u)\right)$$

Although it is not strictly necessary to go into the minute theoretical details of this algorithm, I discuss a few key points. For prediction using the NARX algorithm, a series-parallel architecture is followed, in which the network used for prediction is purely feedforward in nature, while backpropagation is used for training. This produces more accurate results and also takes less time. For determination of the weights, the Levenberg–Marquardt algorithm is used. The Levenberg–Marquardt algorithm (LMA), also known as the damped least-squares (DLS) method, provides a numerical solution to the problem of minimizing a function, generally nonlinear, over a space of parameters of the function. These minimization problems arise especially in least squares curve fitting and nonlinear programming. Like other numeric minimization algorithms, the Levenberg–Marquardt algorithm is an iterative procedure. To start a minimization, the user has to provide an initial guess for the parameter vector, β. In cases with only one minimum, an uninformed standard guess like $\beta^T = (1, 1, \ldots, 1)$ will work fine; in cases with multiple minima, the algorithm converges only if the initial guess is already somewhat close to the final solution. In each iteration step, the parameter vector β is replaced by a new estimate, β + δ. To determine δ, the functions are approximated by their linearizations:

where

is the gradient (row-vector in this case) of f with respect to β. At the minimum of the sum of squares, zero. The above first-order approximation of , the gradient of gives with respect to δ will be

. Or in vector notation,
.

.

Taking the derivative with respect to δ and setting the result to zero gives:

where

is the Jacobian matrix whose ith row equals , and where and ,

and

are vectors with

ith component solved for δ.

respectively. This is a set of linear equations which can be

Levenberg's contribution is to replace this equation by a "damped version",

where I is the identity matrix, giving as the increment, δ, to the estimated parameter vector, β. The (non-negative) damping factor, λ, is adjusted at each iteration. If reduction of S is rapid, a smaller value can be used, bringing the algorithm closer to the Gauss–Newton algorithm, whereas if an iteration gives insufficient reduction in the residual, λ can be increased, giving a step closer to the gradient descent direction. Note that the gradient of S with respect to β equals . Therefore, for large values of λ, the step will be taken approximately in the direction of the gradient. If either the length of the calculated step, δ, or the reduction of sum of squares from the latest parameter vector, β + δ, fall below predefined limits, iteration stops and the last parameter vector, β, is considered to be the solution. Levenberg's algorithm has the disadvantage that if the value of damping factor, λ, is large, inverting JTJ + λI is not used at all. Marquardt provided the insight that we can scale each component of the gradient according to the curvature so that there is larger movement along the directions where the gradient is smaller. This avoids slow convergence in the direction of small gradient. Therefore, Marquardt replaced the identity matrix, I, with the diagonal matrix consisting of the diagonal elements of JTJ, resulting in the Levenberg–Marquardt algorithm:
..

All information regarding the Levenberg–Marquardt algorithm is taken from Wikipedia.
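To make the update rule concrete, here is a minimal sketch of a single Levenberg–Marquardt step for a generic least-squares problem; it is my own illustration, not code from the paper, and the names are hypothetical. Here residual is a function handle returning the residual vector y − f(β), and J is the Jacobian of f at the current β:

function [beta, lambda] = lmStep(residual, J, beta, lambda)
    r = residual(beta);                        % current residuals
    A = J' * J + lambda * diag(diag(J' * J)); % Marquardt's damped normal matrix
    delta = A \ (J' * r);                      % solve the damped linear system for the step
    if sum(residual(beta + delta).^2) < sum(r.^2)
        beta = beta + delta;  lambda = lambda / 10;   % step accepted: damp less (more Gauss-Newton)
    else
        lambda = lambda * 10;                         % step rejected: damp more (more gradient descent)
    end
end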

Different selections of data sets, hidden neurons, etc. will lead to a different number of iterations being required. For one such test, 14 iterations were required and the optimum occurred at the 8th iteration. The graph for the same is given in the appendix.

Implementation

The neural network model was then used to predict values, and a plot was made of the predicted values against the given values. One of the most startling features of this step was that no matter how many changes were made to the number of iterations, hidden neurons, hidden layers, etc., it was next to impossible to predict the prices exactly, or even close to exactly. A little research on this topic revealed that it is indeed difficult to derive the exact value of any output variable, especially an economic or financial variable, given the number of factors controlling them. Accepting this as a limitation, the neural network should not be completely abandoned: even though it could not predict the exact values, the trends in the WTI spot prices were predicted correctly. For any policy decision, though exact prices would be helpful, knowing the trend in advance is of utmost importance, and so the model is reasonably useful. A graph of predicted versus actual values is given in the appendix and can be used for further reference.
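A minimal sketch of this prediction step, reusing earlier names; the closed-loop network feeds its own outputs back to produce multi-step-ahead forecasts:

[Xc, Xci, Aci, Tc] = preparets(netc, X, {}, T);  % re-shift the data for the closed-loop net
Yc = netc(Xc, Xci, Aci);                         % multi-step-ahead predictions
plot(cell2mat(Tc)); hold on; plot(cell2mat(Yc)); % actual versus predicted
legend('Actual', 'Predicted');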

Future Enhancements

With this, my neural network model was complete. There is still a lot of scope for future enhancement, a few possibilities being:
1. A larger number of exogenous input variables can be used.
2. Different training algorithms can be compared, and the best one chosen on the basis of that comparison.
3. Right now, only mathematical pre-processing is done. Pre-processing of the data based on its historical background should be added, to remove outliers which spoil the training data.

Conclusion

Neural networks are an efficient medium for modelling financial and economic variables. Oil prices, though an extremely volatile and unpredictable variable, can be predicted to some extent using the power of neural networks. Although the exact values cannot be predicted, trends are readily recognised and depicted by the neural network.

Appendix 1

Number of iterations versus Mean Squared Error

Appendix 2

Regression Model developed using the ANN

Appendix 3

Actual Prices versus predicted prices
