Thought Leadership White Paper


Reengineering IT discovery with analytics and visualization

Contents
Introduction
The inevitable push towards greater efficiency
The need for better IT discovery
Building a more comprehensive snapshot of the data center
Changing the parameters for IT discovery
How ALDM works
Identifying issues that hinder operational efficiency and resilience
Compiling affinity groups automatically
Identifying the best candidates for virtualization
Extending insights with data visualization
The confluence of discovery analytics and human analysis
Conclusion
For more information

Introduction
An intimate knowledge of IT assets and dependencies has always been imperative to mitigating the risk of data center migrations and improving the resiliency of the IT environment. But the IT discovery process can be slow, costly and prone to error. And for all their value in helping organizations determine where and how to plan a migration or improve IT resiliency, traditional asset inventories and dependency maps provide only part of the picture.

With modern IT infrastructures an intricate web of interdependencies, uncovering the total IT environment, including the logical relationships between physical, virtual and cloud elements, has never been more important—or more complex. IBM’s analytics for logical dependency mapping,
ALDM, reengineers the IT discovery process to provide a more complete and accurate view of the IT infrastructure. It is designed to facilitate data center migration planning, but it also provides insights for companies looking to optimize or improve the resiliency of their IT environment. Developed by IBM
Research, ALDM uses analytics and advanced mathematical modeling to simplify IT inventories and dependency mapping, while extending data collection capabilities to deliver new insights about the infrastructure.
ALDM reads server configuration files to reveal dependencies that would otherwise go undetected. It fills information gaps and exposes platform anomalies that cannot be seen with many other discovery tools. But ALDM also delivers actionable insights with greater speed and rich visualization. It accelerates discovery by as much as 30 to 40 percent using automation and the scalable processing power of the cloud. And it replaces obscure, one-dimensional dependency maps with interactive infographics that enable companies to dynamically navigate complex IT environments and gain deeper insights into the configuration and operation of the infrastructure.

Data Center Services

ALDM also replaces much of the labor-intensive manual analysis that has become synonymous with IT discovery.
Affinity groups identify dependent assets that should be migrated together to avoid business application outages.
Resource utilization trending data suggests assets that could be rationalized, virtualized or consolidated to improve operational efficiency. These and other insights are automatically generated and delivered cost-effectively, enabling
IT managers to initiate the discovery process for a single platform or the entire IT infrastructure.

The inevitable push towards greater efficiency
Today’s data center is the veritable nerve center of the enterprise, with the infrastructure powering virtually every aspect of the business. With IT continuously challenged to optimize resource usage, conserve energy and reduce costs, infrastructure efficiency is always top of mind—and with good reason. IBM’s 2012 Data Center Study found that highly efficient data centers are able to shift 50 percent more of the IT budget to new projects.1 These IT organizations are spending less time maintaining the infrastructure and more time innovating.
Rapid advances in technology drive efficiency improvements, but business growth and change make them essential. After years of largely unfettered expansion, most IT infrastructures are highly heterogeneous and fragmented, with multivendor hardware, software and standards. Excessive energy consumption, hardware and software redundancy, and poor utilization are frequently the norm, resulting in higher costs and greater environmental impact than necessary.


For many companies, the quest for greater IT efficiency begins with consolidation and virtualization to increase capacity and availability while creating a smaller footprint. Standardization and automation tend to follow, with cloud computing increasingly in the mix and data center relocation and IT resiliency optimization initiatives often necessary to achieve desired efficiency objectives.
Whatever choices companies make, one thing is clear. They need to understand the existing IT environment before they can make decisions about relocating or optimizing it. Data center initiatives of any magnitude can expose the business to significant risks, and smooth transitions depend on a complete and accurate picture of the infrastructure.

The need for better IT discovery
Because the IT environment changes continually as new equipment and applications are added, even inventory data collected a few weeks ago can quickly become out of date.
While numerous commercially available tools have been developed to automate the IT discovery process, most have been designed to monitor the IT infrastructure continuously.
These tools typically collect data over a longer period than is necessary for many data center migrations and IT resiliency optimization efforts. Further, because they can be hard to configure and manage and expensive to run, they are usually installed on a limited number of production servers, collecting only a subset of the information needed for a typical migration or enterprise-wide upgrade. Similarly, homegrown discovery tools developed to assess the IT environment for a specific business purpose, like an application upgrade, may not provide all of the information needed for more encompassing IT initiatives.


Despite the availability of automated discovery tools, most data center inventories are still conducted manually by interviewing platform owners one at a time. The process is time-consuming and error-prone, relying on assigned IT personnel to correctly document every logical dependency flow between servers, middleware and applications. Needless to say, these inventories are only as good as the information collected. In fact, IBM’s experience with hundreds of client engagements found that manual inventories are usually only 40 to 60 percent accurate.

Building a more comprehensive snapshot of the data center

ALDM determines dependencies by scanning a server’s network connections, but also by examining the server’s configuration files. It not only identifies the middleware that runs on the server, it identifies other machines—virtual as well as physical—that the server is configured to communicate with. So, ALDM identifies dependencies that are observed during the scanning period, but also dependencies that are configured and not observed. Because it doesn’t have to see server dependencies in action to know they exist, it can provide a more complete view of dependencies, and it can do it in a few days.

Without a complete picture of IT assets and their dependencies, an IT manager’s optimization efforts are handicapped. Data center migrations can be particularly difficult because IT managers are unable to factor all of the dependencies into their scheduled equipment moves. As such, they are more likely to overlook or even retire assets that are still required by the operation. Such inadvertent changes can result in costly business disruptions and outages. And the longer it takes to detect them, the more devastating their impact on the business.

Advances in analytics, automation and data visualization have paved the way for a new kind of IT discovery, one that is more compatible with the large, distributed and complex nature of today’s data center environments. IBM’s analytics for logical dependency mapping, ALDM, dramatically improves current IT discovery technologies, reengineering not only the way assets and dependencies are discovered, but also the way analytic insights are delivered to users.

ALDM also provides actionable insights about IT assets that would not otherwise be possible without expending considerable time and effort. Insights like multi-level dependency groupings, which can take weeks of human analysis, are automatically generated. ALDM provides shortcuts to other information as well, including resource utilization statistics, which can be helpful in data center consolidation and server virtualization efforts.


ALDM’s dependency maps provide a panoramic view of an organization’s IT infrastructure. Its data visualization capability brings the maps to life, enabling users to interact with discovered data. Visualization facilitates exploration by allowing users to drill down for additional detail about an asset’s attributes and dependencies using infinitely customizable filters and dynamic rendering (Figure 1). Users can determine, for example, which assets to upgrade, which to retire and which to relocate, but also which should be virtualized to gain greater efficiency.

Figure 1. Dynamic rendering of the IT environment. A visualized infrastructure is displayed in dots (server nodes) and lines (dependencies). Using touch-screen navigation and customizable filters on the left of the display, users can drill down to focus on specific data, like all web servers, servers made by certain vendors or servers with certain middleware.


Changing the parameters for IT discovery
ALDM offers a quicker route to IT discovery insights because its focus is point-in-time discovery. It takes a snapshot of the operation as it exists today, rather than providing a continuous view of the infrastructure over time. In so doing, it avoids the hefty overhead costs and management requirements associated with continuously running discovery tools. Therefore, it is more versatile and can be used more frequently to accomplish a wide range of objectives. In fact, it can be run on one server or any combination of servers to assist with:
• Conducting routine inventories to identify and purge unsupported and redundant software and versions that are driving up management and licensing costs
• Preparing the IT infrastructure for the deployment of a new operating system or application
• Determining the fate of specific assets as part of a data center consolidation or migration initiative
• Grouping dependent assets to facilitate migration planning and equipment moves
• Identifying assets that are the best candidates for virtualization
• Verifying compliance with established IT operating standards.

How ALDM works
ALDM runs directly on a company’s servers. Once the ALDM script is downloaded, it can be copied to select servers and executed using a simple, one-line instruction. Generally, the script is set up to execute every 15 minutes for 5 to 7 days, capturing information from each server on which it is installed.
This time frame is sufficient for most companies; however,
ALDM’s run duration can be modified to support each organization’s own server environment.
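The cadence described above implies a predictable collection volume per server. A minimal sketch of the arithmetic; the function name and defaults are illustrative assumptions, not part of ALDM:

```python
# Sketch of the capture cadence described above: one snapshot every
# 15 minutes over a 5-to-7-day run. Names and defaults are invented
# for illustration, not ALDM's actual interface.

def snapshot_count(days: int, interval_minutes: int = 15) -> int:
    """Number of captures taken over the run duration."""
    return days * 24 * 60 // interval_minutes

print(snapshot_count(5), snapshot_count(7))  # 480 672
```

At the default cadence, a 7-day run yields 672 snapshots per server, which bounds the raw data feeding the analytics described below.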

The ALDM script runs transparently. Its impact on performance is about 2 to 5 percent the first time it is executed and negligible after that. Moreover, it does not read, copy or collect application or user data, and it does not contain any executables, agents or probes that could pose a security risk.

Static and dynamic data collection
ALDM uses three methods to extract static and dynamic data from the scanned servers. First, ALDM analyzes the server log files to identify any historical dependencies. Second, ALDM reads the server configuration files to identify hardware details like model number and serial number, but also to discover all dependencies the server is configured for. It identifies middleware that has been configured to access a database server, for example, or middleware that has been configured to access other middleware on other servers. This ability to read the configuration files allows ALDM to capture server dependencies that may not be observed during the 5 to 7 day scanning period.


Finally, ALDM collects dynamic information about the activity taking place between servers. ALDM records observed dependencies by monitoring incoming and outgoing traffic at each port. It also captures resource utilization and other statistics for each scanned server, helping to complete the dependency picture for the IT infrastructure.
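The collection methods above can be viewed as producing two edge sets, dependencies observed in live traffic and dependencies declared in configuration files, that are merged into one picture. A minimal sketch; all names and data structures are invented for illustration:

```python
# Illustrative merge of the two dependency sources described above:
# edges observed in network traffic during the scan window, and edges
# declared in server configuration files. Edge tuples are (source,
# target); everything here is a simplified stand-in for ALDM's model.

def merge_dependencies(observed: set, configured: set) -> dict:
    """Tag each dependency edge with how it was discovered."""
    edges = {}
    for edge in observed | configured:
        if edge in observed and edge in configured:
            edges[edge] = "observed+configured"
        elif edge in observed:
            edges[edge] = "observed-only"
        else:
            # Configured but never seen in traffic: the class of
            # dependency that purely observational tools miss.
            edges[edge] = "configured-only"
    return edges

deps = merge_dependencies(
    observed={("web01", "app01")},
    configured={("web01", "app01"), ("app01", "db01")},
)
print(deps[("app01", "db01")])  # configured-only
```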

Multilevel dependency grouping
ALDM doesn’t just identify dependencies between assets; it identifies multilevel dependencies between them. It can determine that Server A is connected to Server B, but also that Server B is connected to Server C, and other downstream connections. Understanding multilevel dependencies makes it possible to follow asset dependencies as they traverse the infrastructure and to determine how they actually affect the business. More specifically, by enabling IT architects to map applications to servers, multilevel dependency groups provide them with a more application-centric understanding of the infrastructure. Without such a view, it is easy to miscalculate how applications will be impacted by changes they make to the infrastructure.
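The multilevel traversal described above amounts to following dependency edges transitively. A minimal sketch, assuming a simple adjacency-list graph; the data model is invented, as ALDM's internals are not published:

```python
from collections import deque

# Sketch of multilevel dependency traversal: starting from one server,
# follow dependency edges transitively (A -> B -> C -> ...). The
# adjacency-list graph is an assumed, simplified data model.

def downstream(graph: dict, start: str) -> set:
    """All servers reachable from `start` via dependency edges."""
    seen, queue = set(), deque([start])
    while queue:
        node = queue.popleft()
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return seen

graph = {"A": ["B"], "B": ["C"], "C": []}
print(sorted(downstream(graph, "A")))  # ['B', 'C']
```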
Multilevel dependency grouping is also a very significant benefit in data center relocations because it enables organizations to more easily determine the impact of moving their applications. IT architects can look at a web server application, for example, and figure out every connected infrastructure element (web server, application server, database server, license server, etc.) that needs to be migrated together to prevent disruption. When adding a new application, IT architects can use ALDM’s multilevel dependency insights to

7

determine how the application is going to connect to the web server and what else the web server connects to that could be affected. They can figure out the impact on the infrastructure and make any necessary adjustments before the application is installed.

Cloud-based processing and analytics
Once data is collected from a company’s configuration and server log files, it is submitted to IBM for processing. ALDM’s backend analytic and visualization engines run in the IBM
SmartCloud Enterprise (SCE) cloud, which provides a scalable environment and support for the powerful analytics needed to process a massive volume of data quickly. IBM’s preliminary test cases have found the cloud to reduce processing time by an average of 20 times when compared with traditional processing environments.2 What used to take 3 hours can now be accomplished in less than 10 minutes.


ALDM parses all of the data, extracting only the asset, dependency and utilization information needed. Then
ALDM’s algorithms and mathematical modeling go to work, correlating the captured data and going beyond standard analytic calculations and number crunching to produce a set of meaningful insights about the infrastructure. Its discovery analytics improve infrastructure visibility and facilitate the development of more targeted—and ultimately more effective—IT optimization initiatives.

Identifying issues that hinder operational efficiency and resilience
IT can only address problems it knows exist. One of the major benefits of IT discovery is that it frequently reveals issues that could be impeding the day-to-day operation, reducing the resiliency of the infrastructure or lessening the success of consolidation and relocation initiatives. These include

redundant and obsolete operating systems and middleware and assets with unknown server connections. By providing a more current and complete view of the IT infrastructure,
ALDM makes it easy to spot configuration anomalies and discrepancies with established IT and business standards.
These insights are useful in general assessments of the IT architecture because they identify weaknesses and facilitate the prioritization of improvements.
Tabular inventories containing configuration details and installed middleware make it easy to pick out servers that are running redundant or legacy versions of system software and middleware (Figures 2 and 3). The cost to monitor and maintain multiple versions can be significant, not to mention the increased security risk if these versions are no longer supported by the manufacturer. IT managers can use the ALDM inventories to direct the removal of extraneous versions and avoid paying unnecessary licensing fees.
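The version check described above is essentially a group-and-flag pass over the tabular inventory. A minimal sketch with invented inventory rows:

```python
from collections import defaultdict

# Sketch of flagging redundant middleware from a tabular inventory:
# report any product installed in more than one version on the same
# server. The rows below are invented for illustration.

inventory = [
    ("srv01", "Apache HTTP Server", "2.0.52"),
    ("srv01", "Apache HTTP Server", "2.2.3"),
    ("srv02", "WebLogic Server", "10.3"),
]

def redundant_versions(rows):
    versions = defaultdict(set)
    for server, product, version in rows:
        versions[(server, product)].add(version)
    return {key: sorted(v) for key, v in versions.items() if len(v) > 1}

print(redundant_versions(inventory))
# {('srv01', 'Apache HTTP Server'): ['2.0.52', '2.2.3']}
```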

Figure 2. Detailed middleware inventory for each server. ALDM provides asset-specific middleware detail for each scanned server, enabling IT architects to uncover potentially redundant middleware running on the same machine.


Figure 3. High-level view of the server infrastructure. Taking a higher-level view across the server inventory enables IT architects to scan for old and unsupported systems.

ALDM dependency maps routinely identify dependent servers that are not known to be connected to other servers (Figure 4).
ALDM is more likely to find these “orphan servers” because of its ability to comb through server configuration files and identify configured dependencies, not just observed dependencies. Servers are often configured for dependencies that are not observed during the IT discovery scanning period. Orphan servers are rarely scanned by ALDM; they are discovered because they are connected to a server that is.
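In graph terms, an orphan server is a host that appears as the target of a configured dependency but was never scanned itself. A minimal sketch; all host names are invented:

```python
# Sketch of orphan-server detection as described above: hosts that
# show up as targets in configured dependencies but were never
# scanned themselves. All host names are invented for illustration.

def find_orphans(scanned: set, configured_edges: set) -> set:
    """Hosts referenced by dependencies but absent from the scan set."""
    referenced = {target for _source, target in configured_edges}
    return referenced - scanned

scanned = {"web01", "app01"}
edges = {("web01", "app01"), ("app01", "db-legacy")}
print(find_orphans(scanned, edges))  # {'db-legacy'}
```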

Compiling affinity groups automatically

ALDM inventories and dependency maps can also be used to facilitate new application installations and upgrades. They enable IT architects to get a quick read on the destination server environment and verify that critical middleware and operating system software are already in place to allow for a smooth transition. They also alert IT to configuration issues and dependencies that could impact the application’s performance and availability.

ALDM uses its comprehensive dependency insights to compile dependency groups. These groups identify dependent assets that should be migrated together to avoid application and service outages. The automatic grouping of these assets reduces the painstaking human analysis that is typically required in the assembly of such groups (Figure 5).
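One plausible way to automate this grouping is to treat dependencies as an undirected graph and take its connected components: everything in a component moves together. A simplified sketch, not ALDM's actual algorithm:

```python
from collections import defaultdict, deque

# Sketch of automatic dependency grouping: treat dependency edges as
# an undirected graph and take connected components. This is a
# simplified stand-in for ALDM's grouping analytics.

def dependency_groups(edges):
    adjacency = defaultdict(set)
    for a, b in edges:
        adjacency[a].add(b)
        adjacency[b].add(a)
    seen, groups = set(), []
    for node in adjacency:
        if node in seen:
            continue
        component, queue = set(), deque([node])
        while queue:
            current = queue.popleft()
            if current in component:
                continue
            component.add(current)
            queue.extend(adjacency[current] - component)
        seen |= component
        groups.append(component)
    return groups

edges = [("web", "app"), ("app", "db"), ("lic1", "lic2")]
print([sorted(g) for g in dependency_groups(edges)])
# [['app', 'db', 'web'], ['lic1', 'lic2']]
```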

The relocation and removal of infrastructure assets make data center consolidations and migrations inherently risky, especially when the cost of a single application outage can average as much as a half million dollars. Combined with customers’ decreasing tolerance for downtime, even planned outages can be very damaging to a company’s brand and bottom line.3 ALDM’s analytics change the game by helping to limit the chance of disruption.


Figure 4. Graphical visualization. ALDM shows directional dependencies and discovered middleware while helping to uncover orphan servers. Servers are color-coded to facilitate data center migration planning.

However, IBM architects still review the dependency groups for accuracy and suitability, based on IT policy considerations and other insights like client preferences for how the migration should be organized. Once the groups are approved, they are re-labeled affinity groups. Sometimes multiple dependency groups are combined into a single affinity group, depending on the IT environment and desired migration strategy. Since most IT environments are too big to migrate all at once, affinity groups ease the migration by helping organizations structure their equipment migration in logical segments.


Figure 5. Affinity groups. Once ALDM’s automatically generated dependency groups are verified by IBM architects and additional IT policy considerations are factored in, affinity groups are created. Affinity groups help clients structure the data center migration process so that dependent assets are migrated together. Each group is color-coded, based on various migration planning criteria.

Identifying the best candidates for virtualization
Understanding the resource utilization of individual assets is of paramount importance for IT organizations that have made the decision to virtualize or consolidate. ALDM provides a quick and easy way to collect the utilization and other data needed to help determine the best candidates for server virtualization or consolidation.

Resource utilization data is presented statistically and graphically. Tables show peak and mean utilization for each server, and graphs plot CPU, disk, memory and network utilization over a user-defined sampling period (Figure 6).
System administrators can use this and other ALDM data to determine which devices to virtualize, upgrade and sunset.
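The peak and mean statistics in those tables reduce to simple aggregates over the collected samples. A minimal sketch with invented CPU readings:

```python
# Sketch of the per-server peak/mean summary described above, over a
# series of CPU utilization samples (percent). Values are invented.

def utilization_summary(samples):
    return {
        "peak": max(samples),
        "mean": round(sum(samples) / len(samples), 1),
    }

cpu_samples = [12.0, 35.0, 8.0, 90.0, 15.0]
print(utilization_summary(cpu_samples))  # {'peak': 90.0, 'mean': 32.0}
```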



[Chart: "ATLWSCTX - Utilization to Peak" plots CPU, memory, network and disk utilization as a percentage of peak across a 24-hour period.]
Figure 6. Resource utilization trending. System administrators can use ALDM’s utilization data to identify underutilized servers and determine which servers to consolidate, virtualize and retire.

Extending insights with data visualization
Data visualization transforms ALDM’s infographics into interactive tools that IT architects can use to better understand the data center environment. Instead of relying solely on one-dimensional dependency maps that show assets and dependencies in intricate “spider charts,” data visualization delivers a multidimensional, navigable schema that makes it easier to understand the logical relationships between the assets.
Infinitely customizable filters enable IT architects to boil down large amounts of data into manageable chunks and drill down for server detail, including specific hardware attributes and installed middleware. By filtering out extraneous nodes, they can focus in on the nodes that are relevant to a specific objective (Figure 7). They can move quickly from a high-level, overall view of infrastructure nodes and dependencies to a detailed view of a single node with the simplicity of touch-screen navigation.

ALDM’s data visualization capability operates on the Apple iPad and uses standard iPad navigation techniques. IT architects simply tap the screen and slide their fingers to navigate through the IT infrastructure. For example, tapping on a specific server node (represented as a dot in the visualization schema) highlights and enlarges all of its attributes and dependencies while unrelated elements fade from view (Figure 8).
This ability to shift fluidly from a panoramic view of the infrastructure to detail views of specific assets simplifies IT discovery dramatically. With a clear picture of each asset’s activity, usage level and importance in the context of the overall infrastructure, IT architects can make more informed decisions about the infrastructure. Migration architects, in particular, can retrieve a more precise view of configured dependencies and ensure that dependent applications and hardware will be brought online together, without incident.
In short, IT can execute infrastructure optimization initiatives with greater confidence.
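Conceptually, the filter-driven drill-down reduces the full node set to those matching the selected attributes. A minimal sketch; the node records are invented and are not ALDM's data model:

```python
# Sketch of attribute-based node filtering, as in the drill-down
# described above. The node records are invented for illustration.

nodes = [
    {"host": "db01", "role": "database", "vendor": "IBM"},
    {"host": "web01", "role": "web", "vendor": "HP"},
    {"host": "db02", "role": "database", "vendor": "Dell"},
]

def apply_filters(nodes, **criteria):
    """Keep only nodes whose attributes match every criterion."""
    return [n for n in nodes
            if all(n.get(k) == v for k, v in criteria.items())]

print([n["host"] for n in apply_filters(nodes, role="database")])
# ['db01', 'db02']
```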

The confluence of discovery analytics and human analysis
For all their capability distilling volumes of data into usable insights, discovery analytics are not a substitute for human analysis but rather a supplement to it. People’s knowledge, reasoning and experience must be applied to validate ALDM’s machine-generated insights and to make independent decisions not possible with programmatic instructions and mathematical computation.


Figure 7. Customizable filters. IT architects tap on the filter tiles on the left of the display to drill down to the desired level of server detail. In this example, filters have been used to hide the IT environment’s web, application and infrastructure servers. Only database servers are visible, with each dot representing a single DBMS server node and each line representing a dependency.

Consider ALDM’s resource utilization trending metrics, which identify potential servers for virtualization based on a series of aggregated calculations. Only individuals who understand the potential business ramifications can determine which of those servers should actually be virtualized or consolidated.
Likewise, while ALDM reveals platform inconsistencies across the data center, it cannot tell whether those platforms are still supported and, if so, for how long. So while machine-generated analytics provide meaningful insights and highlight red flags, IBM works with companies to sort through the analytic results and determine whether action should be taken.



Figure 8. Detail view of server attributes. Using standard iPad finger gestures, IT architects can enlarge the view to focus on specific servers (as shown in the center panel), then tap on a desired node to view its host name, IP address, OS, middleware and hardware attributes (shown in the right panel).

IBM analysts review the output and prepare a formal report, then meet with companies to present issues of potential concern. The importance of these face-to-face discussions cannot be overstated. While companies are likely to recognize some of the more obvious problems and inconsistencies, others take a trained eye and years of experience to detect.
Take the case of virtualized development and production servers running on the same physical server. Most companies understand the security risk posed by such a conflict of interest but are often unaware of instances in their own environment.

Spotting the problem involves seeing production applications and compilers running together on the same machine— something that IBM analysts routinely look for and find, but which is otherwise not commonly recognized.
A typical ALDM implementation, including IBM’s insights analysis, resulting report and other deliverables, and summary discussion with clients, runs several weeks from start to finish.


ALDM deliverables summary

ALDM analysis report
• Infrastructure summary and analysis
• Description and criticality of risks, including legacy operating systems and middleware, redundant versions of middleware and orphan servers
• Remediation options for each risk
• Additional insights

Multilevel dependency mapping
Graphic visualization of server-to-server dependency mapping, based on user criteria

Resource utilization trending
Graphs and tables illustrating CPU, memory and disk utilization for each server, normalized for peak and average

Data visualization
Access to visualization capability for each server and the overall infrastructure, plus server-specific pictures in vector image format

ALDM data model
All data collected and parsed by ALDM presented in spreadsheet format


Conclusion
With so much riding on the efficiency and performance of the infrastructure, IT discovery is likely to become a standard element in the health regimen for today’s well-run operations. Whether providing the basis for periodic inventory scrubs, IT optimization or data center migration initiatives, discovery analytics have the potential to simplify the process while enabling a more comprehensive and accurate view of IT assets and dependencies.
ALDM accelerates the IT discovery process and significantly lowers its cost, making it feasible to assess individual servers or the entire operation as often as required. Instead of limiting IT discovery to one-off projects, it can become part of a company’s ongoing maintenance and efficiency program, facilitating the detection of configuration problems and streamlining inventory collection.
With the ability to read and analyze server configuration files, ALDM can see dependency insights that other automated tools and manual inventorying processes cannot. And data visualization enables deeper understanding by bringing greater clarity to the results. For companies looking to drive IT efficiency through consolidation, migration, IT resiliency optimization or other major transformation initiatives, ALDM insights can be the linchpin to a seamless, disruption-free transition.


For more information
To learn how IBM is helping organizations improve IT discovery, please contact your IBM representative or IBM Business Partner, or visit ibm.com/services/aldm

© Copyright IBM Corporation 2013
IBM Global Services
Route 100
Somers, NY 10589
U.S.A.
Produced in the United States of America
January 2013
All Rights Reserved
IBM, the IBM logo and ibm.com are trademarks of International Business Machines Corporation in the United States, other countries or both. If these and other IBM trademarked terms are marked on their first occurrence in this information with a trademark symbol (® or ™), these symbols indicate U.S. registered or common law trademarks owned by IBM at the time this information was published. Such trademarks may also be registered or common law trademarks in other countries. Other product, company or service names may be trademarks or service marks of others. A current list of IBM trademarks is available on the web at "Copyright and trademark information" at ibm.com/legal/copytrade.shtml

This document is current as of the initial date of publication and may be changed by IBM at any time.
Not all offerings are available in every country in which IBM operates.
The performance data discussed herein is presented as derived under specific operating conditions. Actual results may vary. It is the user’s responsibility to evaluate and verify the operation of any other products or programs with IBM products and programs.
THE INFORMATION IN THIS DOCUMENT IS PROVIDED "AS IS" WITHOUT ANY WARRANTY, EXPRESS OR IMPLIED, INCLUDING WITHOUT ANY WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND ANY WARRANTY OR CONDITION OF NON-INFRINGEMENT. IBM products are warranted according to the terms and conditions of the agreements under which they are provided.
1 Data center operational efficiency best practices: Enabling increased new project spending by improving data center efficiency, IBM, April 2012.
