By Dr. Robert E. Brooks, RBAC, Inc. and C.P. Neill, Logistic Solutions, Inc.
Abstract
Gridnet is an embedded-network-based LP system for simultaneously optimizing very large natural gas sourcing and transportation acquisition operations. Using the PS/2-based Wizard i860 co-processor, Gridnet solves generalized network problems with over 18,000 nodes and 200,000 arcs in under five minutes. The paper discusses the need and requirements for this system as well as the technical approach, implementation, and effects of installation at five user sites.
The Setting
Since the mid-1970's, we have been developing computer-based decision support systems involving the natural gas industry. In 1975 in his Ph.D. dissertation, Robert Brooks developed the first math programming model explicitly to represent all of the interstate natural gas pipelines in a comprehensive nationwide model of the supply, demand, and transportation of natural gas [1]. This served as the basis for forecasting models used by the U.S. Department of Energy and Federal Energy Regulatory Commission during the 1970's [2, 3, 4, 5].
In the first such model, gas supply was divided up into about thirty production regions and demand was specified by state. Each pipeline was represented by a network of nodes and arcs, one node for each state served by the pipeline, one arc connecting each pair of states whose border was crossed by the pipeline in each allowable direction. All pipeline interconnects between each pair of pipelines serving a state were aggregated into a single arc in each feasible direction. In later models the level of disaggregation was increased to approximately 150 regions for both supply and demand and greater pipeline detail was added.
These models were used by various agencies to study the ability of the national pipeline system to meet winter peak demand for fuel in the midwest and east, the possible impact and need for new pipeline systems for Alaskan and Rocky Mountain gas, and the impact of gas price deregulation on availability of gas to local industries in the Texas and Louisiana Gulf region.
These studies were all policy oriented. The network was sufficiently defined for their purpose but clearly did not represent the actual physical details of the pipeline network in the U.S. At the same time, the cost of computing "optimal" gas flows with these models using IBM 370 mainframes was hundreds of dollars per run, even though excellent network algorithms were being utilized [6, 7]. Thus it was impractical to continue the trend toward more detailed and accurate representation of the pipeline network. Even if it had been practical, there was another factor which slowed the further progress of this work: the data for such a greatly refined network was not easy to come by.
As this development slowed, the industry itself was evolving, and so was the regulatory apparatus in Washington. New roles were being defined for old players in the industry. The concepts of gas brokering, gas commodity markets, and contract carriage began being discussed and studied. Producers who had previously only been permitted to sell their gas to pipelines could now, under certain conditions, make contracts with end-users or distributors and pay the pipeline for transportation. These buyers also began to search for non-traditional sources of supply who could offer better prices or supply security. Pipelines began to shift their operations from traditional long term sales contracts to spot market gas transportation in order to maintain throughput levels (and revenues) on their systems.
With all of these changes a need arose for a standardized information system which could be used by the various producers, pipelines, and even end-users to help them plan and operate in this new market environment. Charles Neill, in conjunction with Robert Ciliano and their associates, designed and developed a detailed, graphics-oriented pipeline database called OPG, the Official Pipeline Guide, to fill this vacuum.
Running on an IBM AT computer with enhanced graphics display, OPG could be used to find least-cost transportation routes between user-selected origins and destinations. It could also be used to show schematic color graphs of pipeline facilities at various geographic levels from national to state to county. The database system was updated regularly as the physical network changed and the pipeline pricing structures were revised. All of the data came from publicly available sources, generally regulatory filings made by interstate pipelines, with limited references to the intrastate pipeline network. The system's performance and presentation were greatly enhanced by more powerful 386 CPUs and VGA graphics during the late 1980's.
In time two limitations of this approach became obvious. First, the regulatory filings did not comprehensively represent all of the facilities on the pipeline. Attempts to collect additional details directly from the pipelines were generally unsuccessful since the principal purpose of OPG was to help shippers find more cost-effective routes for delivering their product to market. It did not escape the pipelines' notice that the revenues that were being saved came out of their profits. Nonetheless, the system had great utility as an information base.
However, it had limited computational horsepower to help decision-makers in marketing or gas producing companies with analyses of their situations and options. The analysis tools available (least-cost path calculations) were often dreadfully slow. This was not due so much to the quality of the algorithms, but rather to the fact that as one began to make the system more closely fit the reality of the contracts, pricing schemes, and actual operations of the pipelines and producers, most of the traditional assumptions employed by the OR/MS analyst were found not to apply. Thus, standard methods for solving these problems - LP, network methods, shortest path, etc. - could not be directly used. In their place, heuristics were developed to bypass the barriers of non-linear and non-additive pricing structures. These heuristics consisted primarily of intelligent constrained enumeration methods.
As the decade of the 1980's ended, Neill launched an effort to overcome the limitations of OPG and to respond to the information needs of an important new player in the gas industry - the natural gas marketing company. Marketing had never been a big issue in the natural gas producing industry. Because the interstate pipeline industry was regulated by the government as a natural monopoly, the path from producer to consumer had always been very clear and straight-forward - producers sold to pipelines who sold to gas utilities who sold to end users. But during the deregulation-minded administrations of the 1980's, many traditional ways of doing business were made obsolete. In the case of natural gas, producers suddenly found that they had the opportunity (and challenge) to market their gas directly to customers other than pipelines. At the same time they often got the responsibility to arrange for the transportation of that gas to their new customers. Operational problems which had been the focus of pipelines suddenly became problems for the gas marketers too.
As there was no traditional producer industry method for solving these problems, the producer-marketers evolved their own. Typically, the procedure they have followed consists of three steps: 1) find a customer with a need; 2) find a source of gas supply for the customer; 3) find a feasible and economic way of getting gas from the source to the customer. Also, typically, these tasks have been assigned to three different sets of people (or departments) in the marketing company: 1) sales, 2) supply, 3) transportation.
Further the companies have often divided responsibilities in each of these departments geographically. Thus those solutions which were first discovered have become relatively standardized and opportunities for more profitable alternative strategies are less likely to be found.
To assist the fledgling gas marketing companies in overcoming these operational problems, the authors and their associates have collaborated to develop a new decision support tool, designed to address the sourcing and transportation of gas simultaneously and comprehensively, rather than sequentially and locally. This system is called Gridnet.
The Challenge
We noted earlier that the typical mode of operation of the gas marketing companies has been to divide their markets and sources up into different areas, to assign responsibility for these areas to different people, and then to have them operate relatively independently to find transportation capacity among the pipelines to get gas from here to there. At best this approach can lead to a group of locally optimal routings, but there is little or no chance that it could lead to an overall optimal solution for the entire system of the company.
The globally optimal solution (for any single period of time) would be the one which answered the following question:
Given a set of geographically separated customer demands and contract prices, a set of geographically separated gas production capacities, and a set of transportation contracts with gas pipeline companies, how much gas should be produced at each location, how much should be delivered to each customer, and which pipeline contracts and routings should be used to transport that gas in order to maximize the net revenues (revenues minus costs) of the gas marketing company?
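As a minimal illustration, this question can be posed as a small linear program. The numbers below (prices, costs, capacities) are hypothetical, and the network is collapsed to one supply point serving two customers over direct links; the real Gridnet model embeds the same objective in a network of thousands of points.

```python
# A minimal sketch (hypothetical numbers) of the gas marketer's single-period
# net-revenue problem: choose production and deliveries to maximize
# (sales revenue - production cost - transport cost).
from scipy.optimize import linprog

price = [2.50, 2.20]       # $/MMBtu paid by customers C1, C2
prod_cost = 1.00           # $/MMBtu production cost at supply point S
trans_cost = [0.40, 0.25]  # $/MMBtu transport cost S->C1, S->C2
supply_cap = 100.0         # MMBtu/day producible at S
demand_cap = [60.0, 80.0]  # contract ceilings at C1, C2

# Decision variables: x1 = flow S->C1, x2 = flow S->C2.
# Net revenue per unit delivered on each route:
margin = [price[i] - prod_cost - trans_cost[i] for i in range(2)]

# linprog minimizes, so negate the margins to maximize net revenue.
res = linprog(c=[-m for m in margin],
              A_ub=[[1.0, 1.0]], b_ub=[supply_cap],  # total supply limit
              bounds=[(0, demand_cap[0]), (0, demand_cap[1])])

flows = res.x            # optimal deliveries to C1, C2
net_revenue = -res.fun   # maximized net revenue
```

With these numbers the higher-margin customer C1 is filled to its ceiling (60 units at $1.10/unit margin) and the remaining 40 units of supply go to C2 (at $0.95/unit), for a net revenue of $104.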
While simple to state, the task of finding such a solution has heretofore been daunting, given the sheer number of options available to the gas marketing companies and the absence of tools and data adequate to the challenge. The only possible approach has been to try to "divide and conquer".
What is the magnitude of the problem?
A typical large gas marketing company might have several hundred customers and thousands of wells in hundreds of producing areas. There are over 150 pipeline companies to transport gas from production regions to other pipelines and/or to gas consuming areas. The most standardized and complete database describing the network of pipelines is Petroleum Information's PI GRID, which was initially based on OPG after it was acquired by PI in 1988. PI greatly enhanced the quality of the database. The level of cooperation from the pipeline industry was greatly enhanced for several reasons. First, the data that had been somewhat obscure in regulatory filings was now more readily available. Secondly, the need for a universal code to facilitate communications in the industry and to help control high transaction costs created an incentive for pipelines to cooperate with PI. The PI GRID contains more than 50,000 records representing receipt points, delivery points, and interconnects among these pipelines. The database also contains a minor line file which details the linkages between the various points. There are tens of thousands of these links.
Each pipeline has its own rate structure. These rates are stored in the PI GRID rate file. A single record in this file contains over thirty pieces of information used to compute the price for moving gas under various conditions.
Analysis of these rates leads to the conclusion that they are neither linear nor additive one link to the next in the network. Even if one could build a network (transshipment) model relating the sources to the customers through this network, the cost structure would not allow it to be solved using traditional OR/MS methods.
In addition to the problem's "architectural" constraints, there are operational constraints as well. In the last few years, the gas industry has evolved a spot market to handle needs not addressed by its traditional long term contracts. Transportation capacity is now offered on the basis of thirty-day contracts (and less). At the end of each month, the gas marketing companies and pipelines must develop an "operating plan" for the upcoming month. This is currently done in an intensive cramming session during the last week of each month.
Each gas marketer tries to determine its available supplies, finds out how much gas each customer wants, computes economically sound sources and routes for each, and then delivers bids called "nominations" to the pipelines for each link in these routes. Each pipeline takes the various bids from each marketer and determines whether its system can actually handle the implied total load on each link. If there is insufficient capacity on a link, the pipeline uses priority and allocation schemes to cut back the requests so that they can be handled. It then notifies the marketers of those nominations which have been accepted, those which have been declined, and those which have been cut back.
If a marketer does get a bid cut back or denied, it must scramble to find a new routing and/or source to serve its affected customers. Often this is a difficult task and opportunities for sales are lost by not being able to find new routes or acquiring new sources or transport capacity quickly enough.
To help gas marketing companies during these crunch cycles at the end of each month, a decision support tool must be fast, flexible, and easy to use as well as powerful.
The Approach
To have any hope of solving a problem of this size and complexity, we decided we would have to remove the non-linearities. We noticed that there was one place at which the system was mostly linear and additive: the pipeline interconnect. If one moved gas across an interconnect from one pipeline to another, the cost of gas per unit moved could be correctly computed as the cost per unit on the first pipe plus the cost per unit on the second. This observation led to the notion of a model taking the form shown in Figure 1.

In this model, gas produced at each supply point S flows to one or more pipeline "receipt points" R. These points are connected to various pipeline "delivery points" D on the same pipeline. From each delivery point, gas could be delivered to a customer C or to one or more pipelines receipt points R interconnected at the same point. (Note: in the PI GRID database, interconnects are separate from receipt and delivery points, but in this model each interconnect is defined to be two points: a delivery point of the originating pipe and a receipt point of the destination pipe.)
Once this basic structure was selected for the model, the problem was reduced to one of defining the "necessary" receipt and delivery points for each pipeline and pre-computing the transportation cost and losses (as pipeline fuel) between each "point-to-point" pair. Even though this cost and fuel use calculation would be slow, it would be a setup cost which could be spread out over a large number of analysis and/or planning runs. From time to time as new prices were announced, the costs would be recalculated for use in further analyses and plans.
Which receipt and delivery points are "necessary" for a given scenario is an important consideration. The number of possible point-to-point combinations would require more than two gigabytes of memory. A method was required to customize the network for a given run by identifying only the "necessary" receipt and delivery points and "necessary" connections between them. Using this approach, it was discovered that costs could be computed "on the fly" at problem generation time rather than having to pre-compute all prices, store them in a look-up table, and retrieve them at execution time.
When the transport link matrix was first constructed, it was found to have over 150,000 links. Since then it has grown to well over 200,000. The interconnect matrix consisted of over 5,000 links. The links from supply sources to receipt points numbered over 4,000 and from delivery points to customers about the same.
If values were known for the unit cost of gas at each supply point and the unit price for gas delivered to each customer, then maximizing net revenues would be the equivalent of solving the minimum cost flow problem over a network where the cost of the flow to a customer would be the negative of the price. A "generalized" network algorithm would be required because the movement of gas along a point-to-point arc of a pipeline usually results in a fractional "loss" of gas. Gas is burned to run the compressor stations which move it further down the line.
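The effect of compressor fuel losses can be sketched as a tiny generalized network solved as an LP, where each arc carries a gain multiplier less than one. The two-arc chain and all figures below are hypothetical; the point is that node balance equations use gain-scaled inflows, which is what distinguishes a generalized network from a pure one.

```python
# Sketch of a generalized network arc chain S -> A -> C with fuel losses,
# solved as a small LP (hypothetical costs, gains, and demand).
from scipy.optimize import linprog

# Arcs: (tail, head, unit_cost, gain). Gain < 1 models gas burned as
# compressor fuel in transit.
#   S -> A: cost 0.10/unit, 2% loss (gain 0.98)
#   A -> C: cost 0.15/unit, 3% loss (gain 0.97)
demand_at_C = 95.06    # units that must actually arrive at customer C

# Variables: x0 = flow injected on S->A, x1 = flow injected on A->C.
# Node A balance: gain0 * x0 (arrives) = x1 (leaves).
# Node C balance: gain1 * x1 (arrives) = demand.
A_eq = [[0.98, -1.0],      # A: 0.98*x0 - x1 = 0
        [0.0,  0.97]]      # C: 0.97*x1 = demand_at_C
b_eq = [0.0, demand_at_C]
cost = [0.10, 0.15]

res = linprog(cost, A_eq=A_eq, b_eq=b_eq, bounds=[(0, None)] * 2)
x0, x1 = res.x   # injections: 100 at S, 98 forwarded at A
```

To deliver 95.06 units at C, 98 must leave A and 100 must leave S; the "lost" 4.94 units are the pipeline fuel the gain factors account for.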
In the 1970's Brooks used a generalized network code developed by Klingman, Glover, et al [6, 7] to solve efficiently the GASNET2 model for the Electric Power Research Institute.
Unfortunately, such a code cannot be used to solve the operations planning problem of the gas marketing companies. The reason is that there are additional constraints which must be added to the model shown in Figure 1 which cannot be modeled in the strict regimen of a "generalized network". These constraints include transportation agreement ceilings, extraction plant capacities, pipeline settlement agreements, and bottlenecks.
Transportation agreements between pipelines and gas marketing companies typically contain three kinds of constraints:
o a ceiling on gas received at each receipt point
o a ceiling on gas delivered to each delivery point
o a ceiling on the total received in the system
While the first two constraints can be modeled as upper bounds on additional arcs added to the model, maintaining its form as a "generalized network", the latter can only be modeled as a "joint capacity constraint", that is, the sum of a group of flows in the model must be less than the contract ceiling. As a result, the model cannot be solved as a generalized network. (See Figure 2.)
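The distinction can be sketched in a few lines of LP (hypothetical margins and ceilings): the per-point ceilings become simple variable bounds, while the system-wide ceiling couples the flows and must enter as a joint capacity row.

```python
# Sketch of a transportation agreement with two receipt points R1, R2
# (hypothetical numbers). Per-point ceilings are variable bounds; the
# system-wide ceiling is a joint capacity ("side") constraint.
from scipy.optimize import linprog

margin = [1.00, 0.80]        # $/unit net margin earned via R1, R2
r1_cap, r2_cap = 70.0, 70.0  # per-point receipt ceilings
system_cap = 100.0           # total-receipts ceiling across the agreement

res = linprog(c=[-margin[0], -margin[1]],          # maximize margin
              A_ub=[[1.0, 1.0]], b_ub=[system_cap],  # joint capacity row
              bounds=[(0, r1_cap), (0, r2_cap)])     # per-point bounds
x = res.x   # optimal receipts at R1, R2
```

Here neither per-point bound binds alone (70 + 70 = 140), but the joint ceiling forces a trade-off: the higher-margin point R1 is filled to 70 and R2 gets the remaining 30.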

Extraction plants also pose similar difficulties. At these facilities, "wet" gas laden with heavier hydrocarbons such as ethane, propane, butane, pentane, and various natural gasolines is processed to remove these higher value gases and liquids. Gas from various supply points can be processed at these plants, which then deliver the "dry" gas to various pipelines at the tailgate of the plant. The processing capacity of the plant creates the necessity for a joint capacity constraint over a group of flows from supply points to pipeline receipt points. (See Figure 3.)

In the late 70's and early 80's, scarce supplies of gas touched off a bidding war by pipelines for producer gas. Contract prices as high as 3 to 10 times the average were not uncommon. Later, when demand and prices fell, pipelines obligated to buy gas at the higher prices tried unsuccessfully to void the contracts. Some were able to negotiate "settlement agreements" with producers which allowed their obligations to be paid off through discounts in transportation costs on gas moved by the producers' marketing divisions. These discounts applied to all gas moved up to a monthly ceiling.
To model this feature of the industry, we used two ideas: first, each point-to-point link on the pipeline was split into two: an undiscounted link and one to which the discount was applied; second, a joint capacity constraint was defined for the sums of flows on all the discounted links of each pipeline.
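The link-splitting idea can be sketched for a single point-to-point segment (all rates and ceilings hypothetical): the discounted copy of the arc is cheaper but its volume is capped by the settlement ceiling, so the optimizer uses it first and spills the remainder onto the undiscounted copy.

```python
# Sketch of the settlement-discount modeling trick: one physical
# point-to-point move becomes two parallel arcs, with the settlement
# ceiling capping the discounted arc (hypothetical numbers).
from scipy.optimize import linprog

move = 120.0            # units the marketer must move on this segment
full_rate = 0.30        # $/unit on the undiscounted arc
disc_rate = 0.18        # $/unit under the settlement agreement
settlement_cap = 50.0   # monthly ceiling on discounted volume

# x0 = flow on discounted arc, x1 = flow on undiscounted arc.
res = linprog(c=[disc_rate, full_rate],
              A_eq=[[1.0, 1.0]], b_eq=[move],           # move all the gas
              A_ub=[[1.0, 0.0]], b_ub=[settlement_cap], # discount ceiling
              bounds=[(0, None)] * 2)
disc_flow, full_flow = res.x
total_cost = res.fun
```

The optimizer routes 50 units through the discounted arc and 70 through the undiscounted one, for a total cost of $30. In the full model the ceiling applies jointly across all discounted links of a pipeline, which is what makes it a side constraint rather than a simple bound.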
The last non-network constraint involves bottlenecks. These are places such as compressor stations or interconnects where there is a maximum throughput which has been historically inadequate to meet the desired throughput at one or more times during the year. Here again, the only method which will keep the rest of the network intact and yet correctly model this condition is to identify all point-to-point flows which utilize each bottleneck and to set up a joint capacity constraint for each one. (See Figure 4.)

The model then consists of two pieces: a very large generalized network consisting of about 18,000 nodes and 200,000 arcs, and a set of joint capacity constraints numbering in the hundreds or more. A third piece, of course, is the routine for computing the costs and loss factors for all the point-to-point arcs used in each run of the model. (See Figure 5.)

In the early 80's, Brooks had confronted a similar problem while building a model of competition for natural gas supplies among interstate and intrastate pipelines for the State of Texas. This non-linear model could be solved using successive linear approximations where each linear model consisted of a generalized network core and a group of non-network "side constraints". To solve that model, he used a code named EMNET which had been developed by Richard McBride of the University of Southern California [8]. This code used the GENNET generalized network code of Brown and McBride [9] as the core and employed an advanced partitioning method to keep the non-network portion from overwhelming the network. Because of the success achieved in this earlier project, we decided to try EMNET for Gridnet.
The Implementation
The original conceptual design of the Gridnet system imposed a variety of critical interface requirements on the system. The system would be used by a wide variety of business persons without any O/R experience. These would include individuals from sales, supply, transportation, gas control, planning, and accounting. For their benefit, the system had to provide a user-friendly interface. The notion of creating arcs and constraint equations is not only foreign to the typical user but would likely inhibit their use of the system if they were visible. In addition, while Gridnet contained an enormous quantity of data, additional data unique to the marketing company is also required. This includes supply quantities, customer demands, individual point-to-point discounts, and transportation agreement contract terms. Since this data usually exists elsewhere in the company, a relational database design was adopted to facilitate real time utilization of corporate data without the need for reloading information into Gridnet. Because such data may reside on a mainframe, it was obviously critical that linkage to that data be available.
In order to meet these requirements, the system was set up as a client server model utilizing a client PC on a local area network with a file server and a database server that may be PC based or mainframe based. The PC user operates in an OS/2 multi-user environment utilizing Presentation Manager's windowing capabilities (consistent with the corporate strategies of the target market). The system was designed to use the Intelligent Environment's Applications Manager user interface and IBM's Database Manager database system (although many other SQL PC and mainframe systems can be employed and are supported). Because of its SQL base, this system could be integrated with existing SQL databases on other platforms connected on a network, such connections being virtually transparent to the user.
To provide the horsepower required to solve the optimization problem discussed above, we decided to use the Wizard card, an add-on board for the PS/2 under OS/2 based on the Intel i860 processor. Promotional literature quoted maximum throughputs of "64 million single precision floating point operations and 27 VAX million instructions per second (MIPS)". In our initial tests we found that the Wizard was 10 to 20 times faster than a 20 megahertz 386 with 387 co-processor when solving small test problems using the EMNET code. (Subsequently, the optimization routines were ported to operate on an IBM RS6000 RISC workstation as well as a Sun Sparc Station.)
Since the only compiler originally available for the Wizard was for C, and most of the Gridnet software was already written in C, we decided to translate the FORTRAN-based EMNET into C. This facilitated the eventual ports to other Unix-based systems.
A setup module was written to generate the problem in the correct form for EMNET using the Gridnet database. Initial tests of this generator revealed that the input file produced for EMNET would be 12 to 20 megabytes in size. To reduce setup time as well as disk storage requirements, the setup routine and EMNET input routines were modified so that the problem was generated directly in memory from the Gridnet database.
Similarly, the output routines of EMNET were replaced with routines to load the Gridnet database with those results which the user required for both on-screen data display and report writing.
Other routines were added to store the problem definition and the optimal basis in compact binary form on disk where it could be used later for fast restart and "what-if" analysis. The latter included the ability to modify transportation costs, to compute the marginal value of adding capacity to a transportation agreement, and to identify opportunities for spot market purchases and sales of gas at various locations in the network.
As the definition of the network grew more accurate during the development cycle, the model eventually outgrew the eight megabytes of dedicated memory on the Wizard card. To handle this problem, we replaced all of EMNET's double precision input data arrays, which were sized at the number of arcs (200,000), with two-byte pointer arrays pointing into a single "real number pool" array. Interestingly, this real number pool never exceeded 10,000 elements in size. Several megabytes of memory were made available using this approach.
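The pooling scheme described above can be sketched as follows. This is an illustrative reconstruction, not EMNET's actual code: repeated double values in an arc-length array are replaced by small integer indices into a pool of distinct values, and with fewer than 65,536 distinct values each index fits in two bytes instead of eight.

```python
# Sketch of the "real number pool" compression: per-arc double arrays are
# replaced by two-byte indices into a pool of distinct values (an
# illustrative reconstruction with made-up data, not the EMNET source).
from array import array

def pool_compress(values):
    pool = []        # distinct doubles, in order of first appearance
    index_of = {}    # value -> position in pool
    idx = array("H") # 'H' = unsigned 16-bit: the two-byte "pointer"
    for v in values:
        if v not in index_of:
            index_of[v] = len(pool)
            pool.append(v)
        idx.append(index_of[v])
    return pool, idx

def pool_lookup(pool, idx, i):
    """Recover the double for arc i."""
    return pool[idx[i]]

# Many arcs but few distinct cost values, as is typical for
# tariff-based rates: 6000 arcs, only 3 distinct rates.
arc_costs = [0.25, 0.30, 0.25, 0.25, 0.30, 0.18] * 1000
pool, idx = pool_compress(arc_costs)
```

For a 200,000-arc array this cuts 1.6 MB of doubles to 400 KB of indices plus a pool of at most 10,000 doubles, consistent with the several megabytes of savings reported above.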
The system has been tested and used in two different modes. The first, which might be called "planning mode", relaxes the pipeline agreement ceilings and allows all receipt points within a pipeline to connect with all its delivery points. This is used to identify new routings for which new pipeline agreements might be appropriate. This increases the number of arcs in the model to the maximum of about 250,000, but eliminates the majority of joint capacity constraints. If the remainder of the joint constraints are also relaxed, the system reduces to the special case of a generalized network. EMNET (using its core GENNET) solves such a problem in five minutes on the Wizard board.
At the other extreme, in what could be called "operations mode", all constraints are in place and the size of the network reduces to about 30,000 arcs with the same 18,000 nodes. The computation time increases dramatically, however, due to the non-network joint capacity constraints. Computational time can be longer still if numerical instabilities are encountered in the process. The latter is an important area for further analysis and development.
The Effect
The Gridnet system has been installed at five major natural gas marketing companies as of October 1991. Much time and effort has been spent in the process of training the users of this system to understand the differences between the "local, feasible" approach they have been (and still are) using and the new "global, optimal" approach of Gridnet.
The measure of merit in the "local, feasible" approach is called the "netback price". This is the difference between the price the ultimate customer pays and the unit transport cost on each leg of the route from customer back to the source. If the company can produce or acquire gas at a cost less than this netback price, it is considered to be a workable deal.
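The netback calculation is simple arithmetic: walk back from the customer toward the source, subtracting each leg's unit transport cost. The three-leg route and prices below are hypothetical, and fuel losses are ignored for brevity.

```python
# Sketch of the "netback price" rule of the local, feasible approach
# (hypothetical prices and leg costs; fuel losses ignored).
def netback(customer_price, leg_costs):
    """Customer price minus unit transport cost on each leg back to source."""
    price = customer_price
    for cost in leg_costs:
        price -= cost
    return price

# Customer pays $2.50/unit; three pipeline legs back to the source cost
# $0.20, $0.15, and $0.10 per unit.
nb = netback(2.50, [0.20, 0.15, 0.10])
# Any gas acquirable at the source below nb ($2.05) is a "workable deal"
# under this rule.
```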
Gridnet, on the other hand, computes a quantity called the "shadow price" at each point in the network. This "shadow price" represents the "value" of gas at that point in the network - relative to the computed optimal solution. It says that if you had a customer at this point who was willing to buy gas for more than the shadow price, the company could increase its net revenues by selling at that higher price. To do so, it might have to deliver less gas to another customer or produce more gas in a production area not already at capacity or maybe a combination of the two, but it could improve net revenues by doing so. Similarly, if the company could purchase gas at that point for less than the shadow price, it should do so and readjust its production, sales, and routings to accommodate the change.
This can be very valuable information to a gas marketing company. Unfortunately, it is not sufficient: it doesn't tell them how much they should sell or buy at such points. Using Gridnet's spot-market analysis routines, however, one can calculate how much profit can be made from buying and/or selling different amounts of gas at various such spots in the network.
This global concept is, of course, quite different from that of the "netback price". As part of our training, we have demonstrated to our users that the "netbacks" implied by the model are always non-negative and that the global approach can find alternative solutions which are more profitable than the ones produced using the "local, feasible" approach by hand.
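The shadow-price idea above is just the LP dual value of a node's balance (or a resource's availability) constraint, which modern solvers report directly. The sketch below reuses the hypothetical one-supply, two-customer numbers: with the higher-margin customer already at its ceiling, the value of one more unit of supply equals the margin of the next-best customer.

```python
# Sketch of a "shadow price" as an LP dual value (hypothetical numbers).
from scipy.optimize import linprog

margins = [1.10, 0.95]   # $/unit net margin to customers C1, C2
res = linprog(c=[-m for m in margins],            # maximize total margin
              A_ub=[[1.0, 1.0]], b_ub=[100.0],    # supply availability
              bounds=[(0, 60.0), (0, 80.0)])      # customer ceilings

# C1 is filled to its 60-unit ceiling, so an extra unit of supply would
# go to C2: the shadow price of supply equals C2's margin, 0.95.
shadow_price = abs(res.ineqlin.marginals[0])
```

Read the other way around, the same number says that gas purchasable at this point for less than $0.95/unit would improve net revenues, exactly the buy/sell signal described in the text.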
The lack of a unique set of routings along the entire path from source to customer that results from a global approach posed a serious dilemma for the marketing companies, since their transportation nominations to pipelines often required such a path specification. As a result we have developed a post-optimal routine which computes a feasible set of path flows consistent with the optimal arc flow solution generated by EMNET. We have successfully employed two different algorithms for producing these path flows, one using an LP approach, the other a simpler algorithm derived from an approach by Ford and Fulkerson [9]. Both are successful and neither has proven decidedly superior. Identifying criteria for selecting a superior method and then designing the algorithm itself are areas for future development. Having such a routine seems to be a useful step in helping the marketing company's analysts bridge the gap between their previous methods and those of Gridnet.
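The simpler of the two decomposition ideas can be sketched as follows (an illustrative reconstruction, not the production routine): repeatedly trace a source-to-sink path through arcs that still carry positive flow, peel off the bottleneck amount along that path, and repeat until no flow remains. Fuel losses are ignored here to keep the example short.

```python
# Sketch of path decomposition of an optimal arc-flow solution
# (illustrative only; losses ignored, flows assumed balanced).
def decompose(arc_flow, source, sink):
    """Split arc flows into a list of (path, amount) pairs."""
    paths = []
    flow = dict(arc_flow)   # copy so the input is not modified
    while True:
        # Trace one positive-flow path from source toward sink.
        path, node = [source], source
        while node != sink:
            nxt = next((h for (t, h), f in flow.items()
                        if t == node and f > 1e-9), None)
            if nxt is None:
                return paths          # no positive-flow path left: done
            path.append(nxt)
            node = nxt
        # Peel off the bottleneck amount along the traced path.
        amount = min(flow[(path[i], path[i + 1])]
                     for i in range(len(path) - 1))
        for i in range(len(path) - 1):
            flow[(path[i], path[i + 1])] -= amount
        paths.append((path, amount))

# A tiny balanced arc-flow solution on a diamond network S -> {A,B} -> C:
arc_flow = {("S", "A"): 30.0, ("S", "B"): 20.0,
            ("A", "C"): 30.0, ("B", "C"): 20.0}
paths = decompose(arc_flow, "S", "C")
```

Each resulting (path, amount) pair corresponds to one nominable routing, which is what the pipelines' nomination process requires.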
The implementation of Gridnet at a client site typically imposes some management restructuring as well as behavioral changes. First, sales, supply, and transportation become more integrated with the advent of "corporate benefit" (as distinguished from department or individual benefit) as the explicit objective. The "goodness" of a particular deal no longer dominates. Secondly, the characteristics of the ideal individual staff member change as many previously routine tasks are now computerized. Third, management now has the ability to better measure the performance of all three groups to determine their net contribution to the gross margin - and to better allocate resources in a business where margins are always under downward pressure. Fourth, the planning staff now has access to real time data for their own analytical efforts because Gridnet uses the same data whether in analytical or operational mode. Fifth, the tasks of gas control are dramatically simplified as Gridnet generates the information necessary for the hundreds of nominations required by the pipelines in order for a marketer's gas to flow. Sixth, increasing revenues and volumes may not impose the same head count burden as in the past - a major consideration as downsizing is a continuing pattern in this industry.
This ability to produce, buy, sell, and transport more gas more efficiently translates directly to the bottom line. For management, achieving this within a payback period of less than one year more than justifies the effort of effecting change on an organization.
Bibliography
1. Brooks, R.E., Allocation of Natural Gas in Times of Shortage: A Mathematical Programming Model of the Production, Transmission, and Demand for Natural Gas under Federal Power Commission Regulation, Ph.D. Dissertation, Massachusetts Institute of Technology, 1975.
2. Brooks, R.E., "The Federal Energy Administration Natural Gas Transmission Model", (with Chase Econometrics Associates, Inc.), 1976.
3. Brooks, R.E., "Modeling Gas Transportation for the Department of Energy", Robert Brooks and Associates, 1978.
4. Brooks, R.E., "Using Generalized Networks to Forecast Natural Gas Distribution and Allocation during Periods of Shortage", in Mathematical Programming Study 15, North-Holland Publishing Company, 1981.
5. Brooks, R.E., "Natural Gas Network Modeling as Part of the National Energy Transportation Study," Proceedings of the Transportation Research Forum, No. 21, 1980.
6. Glover, F., Hultz, J., Klingman, D., and Stutz, J., "Generalized Networks: A fundamental computer-based planning tool", Management Science 24 (12) (1978) 1209-1220.
7. Glover, F., Klingman, D. and Stutz, J., "Augmented threaded index method for network optimization", INFOR 12 (3) (1974) 293-298.
8. McBride, R., "Solving embedded generalized network problems", European Journal of Operational Research 21 (1985) 82-92.
9. Brown, G. and McBride, R., "Solving generalized networks", presentation at Detroit ORSA/TIMS meeting, April 1982.