
Cluster Computing


Submitted By batti
Practical 1

Practical 1.1 : Basic of Cluster Computing
1. INTRODUCTION
1.1 Background study
Parallel computing has seen many changes since the days of the highly expensive and proprietary supercomputers. Changes and improvements in performance have also been seen in the area of mainframe computing for many environments. But these compute environments may not be the most cost effective and flexible solution for a problem. Over the past decade, cluster technologies have been developed that allow multiple low cost computers to work in a coordinated fashion to process applications. The economics, performance and flexibility of compute clusters make cluster computing an attractive alternative to centralized computing models and the attendant cost, inflexibility, and scalability issues inherent in those models.
Many enterprises are now looking at clusters of high-performance, low cost computers to provide increased application performance, high availability, and ease of scaling within the data center. Interest in and deployment of computer clusters has largely been driven by the increase in the performance of off-the-shelf commodity computers, high-speed, low-latency network switches and the maturity of the software components. Application performance continues to be of significant concern for various entities including governments, military, education, scientific and now enterprise organizations. This document provides a review of cluster computing, the various types of clusters and their associated applications. This document is a high-level informational document; it does not provide details about various cluster implementations and applications.
1.1.1 Cluster Computing
Cluster computing is best characterized as the integration of a number of off-the-shelf commodity computers and resources integrated through hardware, networks, and software to behave as a single computer. Initially, the terms cluster computing and high performance computing were viewed as one and the same. However, the technologies available today have redefined the term cluster computing to extend beyond parallel computing to incorporate load-balancing clusters (for example, web clusters) and high availability clusters. Clusters may also be deployed to address load balancing, parallel processing, systems management, and scalability. Today, clusters are made up of commodity computers usually restricted to a single switch or group of interconnected switches operating at Layer 2 and within a single virtual local-area network (VLAN). Each compute node (computer) may have different characteristics such as single processor or symmetric multiprocessor design, and access to various types of storage devices. The underlying network is a dedicated network made up of high-speed, low-latency switches that may be of a single switch or a hierarchy of multiple switches.
A growing range of possibilities exists for a cluster interconnection technology. Different variables will determine the network hardware for the cluster. Price-per-port, bandwidth, latency, and throughput are key variables. The choice of network technology depends on a number of factors, including price, performance, and compatibility with other cluster hardware and system software as well as communication characteristics of the applications that will use the cluster. Clusters are not commodities in themselves, although they may be based on commodity hardware. A number of decisions need to be made (for example, what type of hardware the nodes run on, which interconnect to use, and which type of switching architecture to build on) before assembling a cluster range. Each decision will affect the others, and some will probably be dictated by the intended use of the cluster. Selecting the right cluster elements involves an understanding of the application and the necessary resources that include, but are not limited to, storage, throughput, latency, and number of nodes.
When considering a cluster implementation, there are some basic questions that can help determine the cluster attributes so that technology options can be evaluated:
* Will the application be primarily processing a single dataset?
* Will the application be passing data around, or will it generate real-time information?
* Is the application 32- or 64-bit?
The answers to these questions will influence the type of CPU, memory architecture, storage, cluster interconnect, and cluster network design. Cluster applications are often CPU-bound so that interconnect and storage bandwidth are not limiting factors, although this is not always the case.
1.1.2 Cluster Benefits
The main benefits of clusters are scalability, availability, and performance. For scalability, a cluster uses the combined processing power of compute nodes to run cluster-enabled applications, such as a parallel database server, at a higher performance than a single machine can provide. Scaling the cluster's processing power is achieved by simply adding additional nodes to the cluster. Availability within the cluster is assured as nodes within the cluster provide backup to each other in the event of a failure. In high-availability clusters, if a node is taken out of service or fails, the load is transferred to another node (or nodes) within the cluster. To the user, this operation is transparent, as the applications and data are also available on the failover nodes. An additional benefit comes with the existence of a single system image and the ease of manageability of the cluster. From the user's perspective, an application resource appears simply as the provider of services and applications; the user does not know or care whether this resource is a single server or a cluster, or which node within the cluster is providing services. These benefits map to the needs of today's enterprise business, education, military and scientific community infrastructures. In summary, clusters provide:
* Scalable capacity for compute, data, and transaction intensive applications, including support of mixed workloads
* Horizontal and vertical scalability without downtime
* Ability to handle unexpected peaks in workload
* Central system management of a single system image
* 24x7 availability
2. TYPES OF CLUSTER
There are several types of clusters, each with specific design goals and functionality. These clusters range from distributed or parallel clusters for computation intensive or data intensive applications that are used for protein, seismic, or nuclear modeling to simple load-balanced clusters.
2.1 High Availability or Failover Clusters
These clusters are designed to provide uninterrupted availability of data or services (typically web services) to the end-user community. The purpose of these clusters is to ensure that a single instance of an application is only ever running on one cluster member at a time; if and when that cluster member is no longer available, the application will fail over to another cluster member. With a high-availability cluster, nodes can be taken out of service for maintenance or repairs. Additionally, if a node fails, the service can be restored without affecting the availability of the services provided by the cluster (see Figure 2.1). While the application will still be available, there will be a performance drop due to the missing node.
High-availability cluster implementations are best for mission-critical applications or databases, mail, file and print, web, or application servers.
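The failover behavior described above can be sketched in a few lines of Python. This is a minimal illustration, not a real cluster manager; the node names and the `FailoverCluster` class are hypothetical.

```python
# Hypothetical sketch of high-availability failover: a single service
# instance runs on one node at a time; when that node fails, the
# service is restarted on the next healthy cluster member.
class FailoverCluster:
    def __init__(self, nodes):
        self.nodes = list(nodes)      # ordered by failover preference
        self.healthy = set(nodes)
        self.active = self.nodes[0]   # node currently running the service

    def node_failed(self, node):
        """Mark a node as down and fail over if it was the active one."""
        self.healthy.discard(node)
        if node == self.active:
            for candidate in self.nodes:
                if candidate in self.healthy:
                    self.active = candidate   # service restarts here
                    return candidate
            raise RuntimeError("no healthy nodes remain")
        return self.active

cluster = FailoverCluster(["node-a", "node-b", "node-c"])
print(cluster.active)                 # node-a
print(cluster.node_failed("node-a"))  # node-b
```

Real high-availability software must also handle heartbeats, shared storage fencing, and split-brain scenarios, which this sketch ignores.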

(Failover Clusters) (Figure- 2.1)
Unlike distributed or parallel processing clusters, high-availability clusters seamlessly and transparently integrate existing standalone, non-cluster-aware applications into a single virtual system, allowing the network to grow effortlessly to meet increased business demands.
Cluster-Aware and Cluster-Unaware Applications
Cluster-aware applications are designed specifically for use in a clustered environment. They know about the existence of other nodes and are able to communicate with them. A clustered database is one example of such an application: instances of the clustered database run on different nodes and have to notify other instances if they need to lock or modify some data. Cluster-unaware applications do not know if they are running in a cluster or on a single node. The existence of a cluster is completely transparent for such applications, and some additional software is usually needed to set up a cluster. A web server is a typical cluster-unaware application: all servers in the cluster have the same content, and the client does not care which server provides the requested content.
2.2 Load Balancing Cluster

(Load balancing Clusters) (Figure- 2.2)
This type of cluster distributes incoming requests for resources or content among multiple nodes running the same programs or having the same content (see Figure 2.2). Every node in the cluster is able to handle requests for the same content or application. If a node fails, requests are redistributed between the remaining available nodes. This type of distribution is typically seen in a web-hosting environment.
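As a rough sketch of this distribution pattern, the Python snippet below rotates requests across nodes and skips any node that has failed. The node names and the `make_dispatcher` helper are invented for the example; production load balancers are considerably more sophisticated (health checks, weighting, session affinity).

```python
import itertools

# Illustrative round-robin dispatcher: every node serves the same
# content, so requests rotate across whichever nodes are still up.
def make_dispatcher(nodes, is_up):
    cycle = itertools.cycle(nodes)
    def dispatch(request):
        # Try at most one full rotation before declaring the cluster down.
        for _ in range(len(nodes)):
            node = next(cycle)
            if is_up(node):
                return node, request
        raise RuntimeError("all nodes down")
    return dispatch

up = {"web1": True, "web2": True, "web3": False}  # web3 has failed
dispatch = make_dispatcher(["web1", "web2", "web3"], lambda n: up[n])
served = [dispatch(f"GET /page{i}")[0] for i in range(4)]
print(served)  # ['web1', 'web2', 'web1', 'web2']
```

Note how traffic is automatically redistributed between the two surviving nodes, matching the failure behavior described above.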
Both the high availability and load-balancing cluster technologies can be combined to increase the reliability, availability, and scalability of application and data resources that are widely deployed for web, mail, news, or FTP services.
2.3 Parallel/Distributed Processing Clusters
Traditionally, parallel processing was performed by multiple processors in a specially designed parallel computer. These are systems in which multiple processors share a single memory and bus interface within a single computer. With the advent of high speed, low-latency switching technology, computers can be interconnected to form a parallel-processing cluster. These types of clusters increase availability, performance, and scalability for applications, particularly computationally or data intensive tasks. A parallel cluster is a system that uses a number of nodes to simultaneously solve a specific computational or data-mining task. Unlike load-balancing or high-availability clusters, which distribute requests/tasks to nodes where a single node processes the entire request, a parallel environment divides the request into multiple sub-tasks that are distributed to multiple nodes within the cluster for processing. Parallel clusters are typically used for CPU-intensive analytical applications, such as mathematical computation, scientific analysis (weather forecasting, seismic analysis, etc.), and financial data analysis. One of the more common classes of cluster is the Beowulf cluster. A Beowulf cluster can be defined as a number of systems whose collective processing capabilities are simultaneously applied to a specific technical, scientific, or business application. Each individual computer is referred to as a "node" and each node communicates with other nodes within a cluster across standard Ethernet technologies (10/100 Mbps, GbE, or 10GbE). Other high-speed interconnects such as Myrinet, InfiniBand, or Quadrics may also be used.
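The divide-and-distribute pattern can be illustrated on a single machine, with Python's `multiprocessing` module standing in for the cluster's message-passing layer (a real Beowulf cluster would typically use MPI or PVM across nodes). The task and helper names below are invented for the example.

```python
from multiprocessing import Pool

# Sketch of the parallel-cluster pattern: the request (sum the squares
# of a large range) is divided into sub-tasks, each handled by a
# separate worker process, and the partial results are merged.
def sum_squares(chunk):
    lo, hi = chunk
    return sum(i * i for i in range(lo, hi))

def parallel_sum_squares(n, workers=4):
    step = n // workers
    chunks = [(w * step, (w + 1) * step if w < workers - 1 else n)
              for w in range(workers)]
    with Pool(workers) as pool:
        partials = pool.map(sum_squares, chunks)  # scatter sub-tasks
    return sum(partials)                          # gather and reduce

if __name__ == "__main__":
    assert parallel_sum_squares(10_000) == sum(i * i for i in range(10_000))
```

The scatter/gather structure is the same one a parallel cluster uses; only the transport differs (shared-memory processes here, a switched network there).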
3. CLUSTER COMPONENTS
The basic building blocks of clusters are broken down into multiple categories: the cluster nodes, cluster operating system, network switching hardware and the node/switch interconnect (see Figure 3). Significant advances have been accomplished over the past five years to improve the performance of both the compute nodes as well as the underlying switching infrastructure.

(Cluster Components)
(Figure- 3)
Application: This layer comprises the various applications running for a particular group. These applications run in parallel, and include the various queries running on different nodes of the cluster. It can be regarded as the input part of the cluster.
Middleware: These are software packages that mediate between the user's applications and the operating system in cluster computing. In other words, middleware is the layer of software between applications and the operating system, providing the various services an application requires to function correctly. Software packages commonly used as middleware include:
OSCAR
Features:
* Image-based installation.
* Supported by Red Hat 9.0 and Mandrake 9.0.
* Processors supported: x86, Itanium (in beta).
* Interconnects: Ethernet, Myrinet.
* Diskless support in development.
* Opteron support in development.
* High-availability support in alpha testing.
SCYLD
Features:
* Commercial distribution.
* Single system image design.
* Processors: x86 and Opteron.
* Interconnects: Ethernet and InfiniBand.
* MPI and PVM.
* Diskful and diskless support.
Rocks
Features:
* Processors: x86, Opteron, Itanium.
* Interconnects: Ethernet and Myrinet.
* Compute node management via Red Hat's kickstart mechanism.
* Diskful only.
* Cluster on CD.
Operating System: Clusters can be supported by various operating systems, including Windows, Linux, etc.
Interconnect: Interconnection between the various nodes of the cluster can be done using 10GbE, Myrinet, etc. In the case of a small cluster, the nodes can be connected with the help of simple switches.
Nodes: The nodes of the cluster are the individual computers that are connected together. These can be based on, for example, 64-bit Intel or AMD processors.
4. CLUSTER OPERATION
4.1 Cluster Nodes
Node technology has migrated from conventional tower cases to single rack-unit multiprocessor systems and blade servers that provide a much higher processor density within a decreased area. Processor speeds and server architectures have increased in performance, and solutions now provide options for either 32-bit or 64-bit processor systems. Additionally, memory performance as well as hard-disk access speeds and storage capacities have also increased. It is interesting to note that even though performance is growing exponentially in some cases, the cost of these technologies has dropped considerably. As shown in Figure 4.1 below, node participation in the cluster falls into one of two responsibilities: the master (or head) node and the compute (or slave) nodes. The master node is the unique server in cluster systems.
It is responsible for running the file system and also serves as the key system for clustering middleware to route processes, duties, and monitor the health and status of each slave node. A compute (or slave) node within a cluster provides the cluster a computing and data storage capability. These nodes are derived from fully operational, standalone computers that are typically marketed as desktop or server systems that, as such, are off-the-shelf commodity systems.

(Cluster Nodes)
(Figure- 4.1)
4.2 Cluster Network
Commodity cluster solutions are viable today due to a number of factors such as high performance commodity servers and the availability of high speed, low-latency network switch technologies that provide the inter-nodal communications. Commodity clusters typically incorporate one or more dedicated switches to support communication between the cluster nodes. The speed and type of node interconnects vary based on the requirements of the application and organization. With today's low cost per port for Gigabit Ethernet switches, the adoption of 10-Gigabit Ethernet, and the standardization of 10/100/1000 network interfaces on node hardware, Ethernet continues to be a leading interconnect technology for many clusters. In addition to Ethernet, alternative interconnect technologies include Myrinet, Quadrics, and InfiniBand, which support bandwidths above 1 Gbps and end-to-end message latencies below 10 microseconds (µs).
4.2.1 Network Characterization
There are two primary characteristics establishing the operational properties of a network: bandwidth and delay. Bandwidth is measured in millions of bits per second (Mbps) or billions of bits per second (Gbps). Peak bandwidth is the maximum amount of data that can be transferred in a single unit of time through a single connection. Bisection bandwidth is the total peak bandwidth that can be passed across a single switch.
Latency is measured in microseconds (µSec) or milliseconds (mSec) and is the time it takes to move a single packet of information in one port and out of another. For parallel clusters, latency is measured as the time it takes for a message to be passed from one processor to another that includes the latency of the interconnecting switch or switches. The actual latencies observed will vary widely even on a single switch depending on characteristics such as packet size, switch architecture (centralized versus distributed), queuing, buffer depths and allocations, and protocol processing at the nodes.
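A common way to measure this is a "ping-pong" test: bounce a small message back and forth many times and halve the average round trip. The sketch below does this over a local socket pair standing in for two cluster nodes, so it measures host-stack latency only, not a real interconnect; cluster benchmarks such as MPI ping-pong follow the same pattern over the network.

```python
import socket
import time

# Rough one-way latency estimate via a ping-pong over a local socket
# pair. In a real cluster the two endpoints would be processes on two
# nodes, and the measurement would include the switch latency.
def pingpong_latency_us(iterations=1000):
    a, b = socket.socketpair()
    start = time.perf_counter()
    for _ in range(iterations):
        a.sendall(b"x")    # one-byte "packet" out one endpoint...
        b.recv(1)
        b.sendall(b"x")    # ...and echoed straight back
        a.recv(1)
    elapsed = time.perf_counter() - start
    a.close()
    b.close()
    # Half the average round-trip time approximates the one-way latency.
    return elapsed / (2 * iterations) * 1e6

print(f"one-way latency ~ {pingpong_latency_us():.1f} us")
```

Because observed latency varies with packet size, queuing, and protocol processing (as noted above), real benchmarks repeat this across a range of message sizes.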
4.2.2 Ethernet, Fast Ethernet, Gigabit Ethernet and 10-Gigabit Ethernet
Ethernet is the most widely used interconnect technology for local area networking (LAN). Ethernet as a technology supports speeds varying from 10 Mbps to 10 Gbps, and it is successfully deployed and operational within many high-performance cluster computing environments.
4.3 Cluster Applications
Parallel applications exhibit a wide range of communication behaviors and impose various requirements on the underlying network. These may be unique to a specific application, or to an application category, depending on the requirements of the computational processes. Some problems require the high bandwidth and low-latency capabilities of today's low-latency, high throughput switches using 10GbE, InfiniBand or Myrinet. Other application classes perform effectively on commodity clusters and will not push the bounds of the bandwidth and resources of these same switches. Many applications and the messaging algorithms used fall in between these two ends of the spectrum. Currently, there are three primary categories of applications that use parallel clusters: compute intensive, data or input/output (I/O) intensive, and transaction intensive. Each of these has its own set of characteristics and associated network requirements; each has a different impact on the network, and each is impacted differently by the architectural characteristics of the underlying network. The following subsections describe each application type.

4.3.1 Compute Intensive Applications
Compute intensive is a term that applies to any computer application that demands a lot of computation cycles (for example, scientific applications such as meteorological prediction). These types of applications are very sensitive to end-to-end message latency, because the processors must wait either for instruction messages or for results data transmitted between nodes. In general, the more time spent idle waiting for an instruction or for results data, the longer it takes to complete the application.
Some compute-intensive applications may also be graphic intensive. Graphic intensive is a term that applies to any application that demands a lot of computational cycles where the end result is the delivery of significant information for the development of graphical output such as ray-tracing applications.
These types of applications are also sensitive to end-to-end message latency. The longer the processors have to wait for instruction messages or the longer it takes to send resulting data, the longer it takes to present the graphical representation of the resulting data.
4.3.2 Data or I/O Intensive Applications
Data intensive is a term that applies to any application that has high demands of attached storage facilities. Performance of many of these applications is impacted by the quality of the I/O mechanisms supported by current cluster architectures, the bandwidth available for network attached storage, and, in some cases, the performance of the underlying network components at both Layer 2 and 3. Data-intensive applications can be found in the area of data mining, image processing, and genome and protein science applications. The movement to parallel I/O systems continues to occur to improve the I/O performance for many of these applications.
4.3.3 Transaction Intensive Applications
Transaction intensive is a term that applies to any application that has a high-level of interactive transactions between an application resource and the cluster resources. Many financial, banking, human resource, and web-based applications fall into this category.
There are three main concerns for cluster applications: message latency, CPU utilization, and throughput. Each of these plays an important part in improving or impeding application performance. This section describes each of these issues and their associated impact on application performance.

4.4 Message Latency
Message latency is defined as the time it takes to send a zero-length message from one processor to another (measured in microseconds). The lower the latency for some application types, the better.
Message latency is made up of the aggregate latency incurred at each element within the cluster network, including within the cluster nodes themselves (see Figure 4.4.1). Although network latency is often the focus, the protocol processing latency of the Message Passing Interface (MPI) and TCP processes within the host itself is typically larger. Throughput of today's cluster nodes is impacted by protocol processing, both for TCP/IP and for MPI. To maintain cluster stability, node synchronization, and data sharing, the cluster uses message passing technologies such as Parallel Virtual Machine (PVM) or MPI. TCP/IP stack processing is a CPU-intensive task that limits performance within high speed networks. As CPU performance has increased and new techniques such as TCP offload engines (TOE) have been introduced, PCs are now able to drive bandwidth levels higher, to the point where traffic levels reach near the theoretical maximum for TCP/IP on Gigabit Ethernet, and near bus speeds for PCI-X based systems when using 10 Gigabit Ethernet. These high-bandwidth capabilities will continue to grow as processor speeds increase and more vendors build network adapters to the PCI Express specification.
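The aggregate nature of message latency can be captured in a toy additive model. All numbers below are illustrative only, chosen to reflect the point that host-side protocol processing typically dominates switch latency.

```python
# Toy additive model of end-to-end message latency: the total is the
# sum of the latencies at each element along the path. Host protocol
# (MPI + TCP/IP) processing usually dwarfs the per-switch latency.
def end_to_end_latency_us(send_host_us, switch_hops, per_switch_us,
                          wire_us, recv_host_us):
    return send_host_us + switch_hops * per_switch_us + wire_us + recv_host_us

# Illustrative figures: 20 us of protocol processing on each host
# versus two switch hops at 1 us each.
total = end_to_end_latency_us(send_host_us=20.0, switch_hops=2,
                              per_switch_us=1.0, wire_us=0.5,
                              recv_host_us=20.0)
print(total)  # 42.5
```

In this (hypothetical) breakdown roughly 94% of the latency is incurred in the hosts, which is why TOE and RDMA, discussed below, target the host stack rather than the switches.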

(Message Latency)
(Figure- 4.4.1)

To address host stack latency, reductions in protocol processing have been addressed somewhat through the implementation of TOE and further developments of combined TOE and Remote Direct Memory Access (RDMA) technologies are occurring that will significantly reduce the protocol processing in the host. See Figure 4.4.2 through Figure 4.4.4 below for examples. (Progression)
(Figure- 4.4.2) (Message path Without TOE and RDMA)
(Figure- 4.4.3) (Message path with TOE and RDMA)
(Figure- 4.4.4)

4.5 CPU Utilization
One important consideration for many enterprises is to use computer resources as efficiently as possible. As an increasing number of enterprises move towards real-time and business-intelligence analysis, using compute resources efficiently is an important metric. However, in many cases compute resources are underutilized. The more CPU cycles committed to application processing, the less time it takes to run the application. Unfortunately, although this is a design goal, it is not attainable, as both the application and the protocols compete for CPU cycles.
While a cluster node processes the application, the CPU is dedicated to the application and protocol processing does not occur. For this to change, the protocol process must interrupt a uniprocessor machine or request a spin lock on a multiprocessor machine.
As the request is granted, CPU cycles are then applied to the protocol process. As more cycles are applied to protocol processing, application processing is suspended. In many environments, the value of the cluster is based on the run-time of the application. The shorter the time to run, the more floating-point operations and/or millions of instructions per-second occur, and, therefore, the lower the cost of running a specific application or job.
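The trade-off between application and protocol processing can be expressed as a simple back-of-the-envelope model. All figures are illustrative; real protocol overhead varies with the NIC, stack, and message sizes.

```python
# Toy model of the CPU-sharing effect described above: cycles spent on
# protocol processing are cycles not spent on the application, so the
# effective run time stretches accordingly.
def run_time_seconds(app_cycles, cpu_hz, protocol_share):
    """protocol_share: fraction of CPU cycles consumed by TCP/IP + MPI."""
    app_share = 1.0 - protocol_share
    return app_cycles / (cpu_hz * app_share)

# Illustrative: a 3-trillion-cycle job on a 3 GHz CPU.
ideal = run_time_seconds(3e12, 3e9, protocol_share=0.0)    # 1000 s
loaded = run_time_seconds(3e12, 3e9, protocol_share=0.25)  # ~1333 s
print(ideal, loaded)
```

Since the value of the cluster is tied to run time, shaving the protocol share (e.g. via TOE/RDMA) directly lowers the cost of running a given job in this model.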

(CPU Utilization)
(Figure- 4.5.1)

5. CLUSTER APPLICATIONS
A few important cluster applications are:
* Google Search Engine
* Petroleum Reservoir Simulation
* Protein Explorer
* Earthquake Simulation
* Image Rendering
Internet search engines enable Internet users to search for information on the Internet by entering specific keywords. A widely used search engine, Google uses cluster computing to meet the huge quantity of worldwide search requests, which peak at thousands of queries per second. A single Google query needs to use at least tens of billions of processing cycles and access a few hundred megabytes of data in order to return satisfactory search results.
Petroleum reservoir simulation facilitates a better understanding of petroleum reservoirs, which is crucial to better reservoir management and more efficient oil and gas production. It is an example of a Grand Challenge Application (GCA), as it demands intensive computations in order to simulate geological and physical models.

Practical 1.2 : Study the example project models of cluster computing

What is a Parallel Sysplex?

The z Systems Parallel Sysplex cluster contains innovative multisystem data sharing technology. It allows direct, concurrent read/write access to shared data from all processing nodes in the configuration without sacrificing performance or data integrity. Each node can concurrently cache shared data in local processor memory through hardware-assisted cluster-wide serialization and coherency controls. As a result, work requests that are associated with a single workload, such as business transactions or database queries, can be dynamically distributed for parallel execution on nodes in the sysplex cluster based on available processor capacity.
Parallel Sysplex technology builds on and extends the strengths of z Systems e-business servers by linking up to 32 servers with near linear scalability to create the industry's most powerful commercial processing clustered system. Every server in a Parallel Sysplex cluster has access to all data resources and every "cloned" application can run on every server. Using the z Systems "Coupling Technology," the Parallel Sysplex technology provides a "shared data" clustering technique that permits multi-system data sharing with high performance read/write integrity. This "shared data" (as opposed to "shared nothing") approach enables workloads to be dynamically balanced across all servers in the Parallel Sysplex cluster. This approach allows critical business applications to take advantage of the aggregate capacity of multiple servers to help ensure maximum system throughput and performance during peak processing periods. In the event of a hardware or software outage, either planned or unplanned, workloads can be dynamically redirected to available servers thus providing near continuous application availability.
Another significant and unique advantage of using Parallel Sysplex technology is the ability to perform hardware and software maintenance and installations in a non-disruptive manner. Through data sharing and dynamic workload management, servers can be dynamically removed from or added to the cluster, allowing installation and maintenance activities to be performed while the remaining systems continue to process work. Furthermore, by adhering to IBM's software and hardware coexistence policy, software and/or hardware upgrades can be introduced one system at a time. This capability allows customers to roll changes through systems at a pace that makes sense for their business. The ability to perform rolling hardware and software maintenance in a non-disruptive manner allows businesses to implement critical business functions and react to rapid growth without affecting customer availability.
Parallel Sysplex technology is an enabling technology, allowing highly reliable, redundant, and robust z Systems technologies to achieve near continuous availability. A properly configured Parallel Sysplex cluster is designed to have no single points of failure, for example:
* Hardware and software components provide for concurrency to facilitate non-disruptive maintenance, like z Systems Capacity Upgrade on Demand, which allows processing or coupling capacity to be added, an engine at a time, without disruption to customer workloads.
* DASD subsystems employ disk mirroring or RAID technologies to help protect against data loss, and exploit technologies to enable point-in-time backup without the need to shut down applications.
* Networking technologies deliver functions like VTAM® Generic Resources, Multi-Node Persistent Sessions, Virtual IP Addressing, and Sysplex Distributor to provide fault tolerant network connections.
* I/O subsystems support multiple I/O paths and dynamic switching to prevent loss of data access and improve throughput.
* z/OS and OS/390 software components allow new software releases to coexist with lower levels of that software component to facilitate rolling maintenance.
* Business applications are "data sharing enabled" and cloned across servers to allow workload balancing and to prevent loss of application availability in the event of an outage.
* Operational and recovery processes are fully automated and transparent to users, and reduce or eliminate the need for human intervention.
In computing, a Parallel Sysplex is a cluster of IBM mainframes acting together as a single system image with z/OS. Used for disaster recovery, Parallel Sysplex combines data sharing and parallel computing to allow a cluster of up to 32 systems to share a workload for high performance and high availability.
In 1990, IBM mainframe computers introduced the concept of a Systems Complex, commonly called a Sysplex, with MVS/ESA SP V4.1. This allows authorized components in up to eight LPARs to communicate and cooperate with each other using the XCF protocol.
Components of a Sysplex include:
* A common time source to synchronize all member systems' clocks. This can involve either a Sysplex Timer (Model 9037) or the Server Time Protocol (STP).
* Global Resource Serialization (GRS), which allows multiple systems to access the same resources concurrently, serializing where necessary to ensure exclusive access.
* Cross System Coupling Facility (XCF), which allows systems to communicate peer-to-peer.
* Couple Data Sets (CDS).
Users of a (base) Sysplex include:
* Console services – allowing one to merge multiple MCS consoles from the different members of the Sysplex, providing a single system image for Operations
* Automatic Restart Manager (ARM) – policy to direct automatic restart of failed jobs or started tasks on the same system, if it is available, or on another LPAR in the Sysplex
* Sysplex Failure Manager (SFM) – policy that specifies automated actions to take when certain failures occur, such as loss of a member of the Sysplex, or when reconfiguring systems
* z/OS Workload Manager (WLM) – policy-based performance management of heterogeneous workloads across one or more z/OS images
* Global Resource Serialization (GRS) communication – allows use of XCF links instead of dedicated channels for GRS, and Dynamic RNLs
* Tivoli OPC – hot standby support for the controller
* RACF – Sysplex-wide RVARY and SETROPTS commands
* PDSE file sharing
* Multisystem VLFNOTE, SDUMP, SLIP, DAE
* Resource Measurement Facility (RMF) – Sysplex-wide reporting
* CICS – uses XCF to provide better performance and response time than using VTAM for transaction routing and function shipping
* zFS – using XCF communication to access data across multiple LPARs

Parallel Sysplex :

Schematic representation of a Parallel Sysplex
The Parallel Sysplex was introduced with the addition of the Coupling Facility (CF) with coupling links for high speed communication, with MVS/ESA V5.1 operating system support, together with the mainframe models in April 1994.
The Coupling Facility (CF) may reside on a dedicated stand-alone server configured with processors that can run Coupling Facility control code (CFCC), as integral processors on the mainframes themselves configured as ICFs (Internal Coupling Facilities), or less common, as normal LPARs. The CF contains Lock, List, and Cache structures to help with serialization, message passing, and buffer consistency between multiple LPARs.
The primary goal of a Parallel Sysplex is to provide data sharing capabilities, allowing multiple database managers direct read and write access to shared data. This can provide benefits such as:

* Removal of single points of failure within the server, LPAR, or subsystems
* Application availability
* Single system image
* Dynamic session balancing
* Dynamic transaction routing
* Scalable capacity
Databases running on the System z server that can take advantage of this include:

* DB2
* IBM Information Management System (IMS)
* VSAM (with Record Level Sharing, RLS)
* IDMS
* AdaPlex
* DataCom
* Oracle
Other components can use the Coupling Facility to help with system management, performance, or reduced hardware requirements. Called "Resource Sharing", uses include:

* Catalog – shared catalogs improve performance by reducing I/O to a catalog data set on disk
* CICS – uses the CF to provide sharing and recovery capabilities for named counters, data tables, or transient data
* DFSMShsm – workload balancing for the data-migration workload
* GRS Star – reduced CPU and response-time cost for data set allocation

* Tape switching – uses the GRS structure to share tape units between z/OS images
* JES2 checkpoint – improved access to a multisystem checkpoint
* Operlog / Logrec – merged multisystem logs for system management
* RACF – shared dataset to simplify security management across the Parallel Sysplex
* WebSphere MQ – shared message queues for availability and flexibility
* WLM – supports the Intelligent Resource Director (IRD), which extends the z/OS Workload Manager to manage CPU and I/O resources across multiple LPARs within the Parallel Sysplex; functions include LPAR CPU management, Dynamic CHPID Management (DCM), I/O priority management, and multi-system enclave management for improved performance
* XCF Star – reduced hardware requirements and simplified management of XCF communication paths
Major components of a Parallel Sysplex include:

* Coupling Facility (CF or ICF) hardware, allowing multiple processors to share, cache, update, and balance data access
* Sysplex Timers or Server Time Protocol to synchronize the clocks of all member systems
* High-speed, high-quality, redundant cabling
* Software (operating system services and, usually, middleware such as DB2)
The Coupling Facility may be either a dedicated external system (a small mainframe, such as a System z9 BC, configured with only coupling facility processors) or integral processors on the mainframes themselves configured as ICFs (Internal Coupling Facilities). A Parallel Sysplex should have at least two CFs and/or ICFs for redundancy, at least one of them external, especially in a production data-sharing environment. Server Time Protocol (STP) began replacing the Sysplex Timer in 2005 for System z mainframe models z990 and newer. A Sysplex Timer is a physically separate piece of hardware from the mainframe, whereas STP is an integral facility within the mainframe's microcode.
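Whether provided by a Sysplex Timer or by STP, a common time source works by estimating each member's clock offset from message exchanges. The textbook two-way time-transfer formula (the same idea NTP uses) illustrates the principle; STP's actual message protocol over coupling links is proprietary and not shown here.

```python
def estimate_offset(t1, t2, t3, t4):
    """Classic two-way time-transfer offset estimate.

    t1: request sent (local clock)    t2: request received (remote clock)
    t3: reply sent (remote clock)     t4: reply received (local clock)

    Returns the estimated offset of the remote clock relative to the
    local one, assuming symmetric network delay in both directions.
    """
    return ((t2 - t1) + (t3 - t4)) / 2.0

# Remote clock runs 5 time units ahead; one-way delay is 2 units.
offset = estimate_offset(t1=100, t2=107, t3=108, t4=105)
# offset == 5.0: the member can now steer its clock by that amount.
```

Each member repeats this exchange against the common source and steers its clock toward the estimate, which is what keeps timestamps consistent across LPARs for logging, serialization, and database recovery.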
