Big Data

Submitted by krishmellempudi
Origins of OOP: Polymorphism of Operations (Operator Overloading)

A characteristic of OO systems in general is that they provide for polymorphism of operations, which is also known as operator overloading. This concept allows the same operator name or symbol to be bound to two or more different implementations of the operator, depending on the type of objects to which the operator is applied.
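A minimal Python sketch of this idea, with an illustrative Vector class (not from the text): the same + symbol is bound to integer addition, string concatenation, or a user-defined implementation depending on the operand types.

```python
class Vector:
    """Illustrative 2D vector; '+' is bound to __add__ for this type."""
    def __init__(self, x, y):
        self.x, self.y = x, y

    def __add__(self, other):
        # Same '+' symbol, but a vector-specific implementation.
        return Vector(self.x + other.x, self.y + other.y)

print(1 + 2)                     # '+' bound to integer addition
print("ab" + "cd")               # '+' bound to string concatenation
v = Vector(1, 2) + Vector(3, 4)  # '+' bound to Vector.__add__
print(v.x, v.y)                  # 4 6
```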
Multiple Inheritance and Selective Inheritance

Multiple inheritance occurs when a certain subtype T is a subtype of two (or more) types and hence inherits the functions (attributes and methods) of both supertypes. For example, we may create a subtype ENGINEERING_MANAGER that is a subtype of both MANAGER and ENGINEER. Selective inheritance occurs when a subtype inherits only some of the functions of a supertype; the other functions are not inherited.
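Python supports multiple inheritance directly, so the ENGINEERING_MANAGER example can be sketched as follows (the method bodies are invented for illustration):

```python
class Engineer:
    def design(self):
        return "designing a component"

class Manager:
    def approve_budget(self):
        return "budget approved"

class EngineeringManager(Engineer, Manager):
    """A subtype of both supertypes; it inherits the functions of each."""
    pass

em = EngineeringManager()
print(em.design())          # inherited from Engineer
print(em.approve_budget())  # inherited from Manager
```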
Characteristics of Object Databases: Object Identity

An ODMS provides a unique identity to each independent object stored in the database. This unique identity is typically implemented via a unique, system-generated object identifier (OID). The main property required of an OID is that it be immutable: the OID value of a particular object should not change, so the ODMS must have some mechanism for generating OIDs and preserving this property. It is also desirable that each OID be used only once; even if an object is removed from the database, its OID should not be assigned to another object. These two properties imply that the OID should not depend on any attribute values of the object, since the value of an attribute may be changed or corrected.
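A toy sketch of these two properties, assuming UUIDs as the system-generated identifiers (a common choice, though the object model does not mandate any particular scheme):

```python
import uuid

class ObjectStore:
    """Toy store illustrating immutable, never-reused OIDs."""
    def __init__(self):
        self._objects = {}

    def insert(self, state):
        oid = uuid.uuid4().hex   # independent of the object's attribute values
        self._objects[oid] = state
        return oid

    def update(self, oid, state):
        self._objects[oid] = state   # the state changes; the OID never does

    def delete(self, oid):
        # A fresh OID is generated on every insert, so a deleted
        # object's OID is never assigned to another object.
        del self._objects[oid]

store = ObjectStore()
oid = store.insert({"name": "Smith", "salary": 30000})
store.update(oid, {"name": "Smith", "salary": 32000})  # same OID, new state
```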
Type Constructors

The three most basic type constructors are atom, struct (or tuple), and collection. The first is the atom constructor. It covers the basic built-in data types of the object model, which are similar to the basic types in many programming languages: integers, strings, floating-point numbers, enumerated types, Booleans, and so on. These are called single-valued or atomic types, since each value of the type is considered an atomic (indivisible) single value. The second is the struct (or tuple) constructor, which can create standard structured types, such as the tuples (record types) of the basic relational model. Collection (or multivalued) type constructors include set(T), list(T), bag(T), array(T), and dictionary(K,T). These allow part of an object or literal value to be a collection of other objects or values when needed. The constructors are also considered to be type generators.
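Python's built-in types map naturally onto the three constructors. The following is only an analogy to make the categories concrete, not an ODMG implementation:

```python
from collections import Counter
from dataclasses import dataclass

# Atom constructor: single-valued, indivisible types.
age: int = 25
name: str = "Smith"

# Struct (tuple) constructor: a record type built from other types.
@dataclass
class Employee:
    name: str
    salary: float

emp = Employee("Smith", 30000.0)

# Collection constructors: set(T), list(T), bag(T), dictionary(K,T).
skills = {"SQL", "Python"}            # set(T): no duplicates
projects = ["ProductX", "ProductY"]   # list(T): ordered
grades = Counter(["A", "A", "B"])     # bag(T): duplicates allowed
phones = {"home": "555-1234"}         # dictionary(K,T): key -> value
```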
Transient and Persistent Objects

Transient objects exist in the executing program and disappear once the program terminates. Persistent objects are stored in the database and persist after program termination; the typical mechanisms for making an object persistent are naming and reachability. An extent is a named persistent object whose value is a persistent collection of objects of the same type that are stored permanently in the database; these objects can be accessed and shared by multiple programs. It is also possible to create a transient collection, which exists temporarily during the execution of a program but is not kept when the program terminates.

Semantics of Relations

Semantics is the study of meaning in language; it can be applied to entire texts or to single words. The semantics of a relation refers to its meaning resulting from the interpretation of attribute values in a tuple. The design guideline that follows is to design a schema that can be explained easily, relation by relation, so that the semantics of the attributes are easy to interpret.

Anomalies

Database anomalies are problems in relations that occur due to redundancy in the relations; they affect the processes of inserting, deleting, and modifying data. An insert anomaly occurs when certain attributes cannot be inserted into the database without the presence of other attributes. For example, we cannot add a new course unless we have at least one student enrolled on the course. A delete anomaly exists when certain attributes are lost because of the deletion of other attributes. For example, if student S30 is the last student to leave a course, all information about the course is lost. An update (modification) anomaly exists when one or more instances of duplicated data are updated, but not all. For example, if Jones moves address, every stored instance of Jones's address must be updated.

Problems with NULLs

NULLs waste storage space and make the meaning of a tuple harder to understand. Avoid placing attributes in a base relation whose values may frequently be NULL; if NULLs are unavoidable, make sure that they apply in exceptional cases only, not to a majority of tuples.

Spurious Tuples

A spurious tuple is, informally, a record that gets created when two tables are joined badly: spurious tuples are generated by joining two relations on attributes that are neither primary keys nor foreign keys of those relations. They can be prevented by decomposing the original relation along its primary key, which forces joins to be on primary/foreign-key attributes.
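A small, self-contained demonstration of how joining on a non-key attribute manufactures spurious tuples; the relations and values are invented for illustration:

```python
# EMP_LOCS(ename, plocation) and PROJ_LOCS(pnumber, plocation) share only
# plocation, which is not a key (or foreign key) of either relation.
emp_locs = [("Smith", "Houston"), ("Wong", "Houston")]
proj_locs = [(1, "Houston"), (2, "Houston")]

joined = [(ename, pnum, loc)
          for ename, loc in emp_locs
          for pnum, ploc in proj_locs if loc == ploc]
for row in joined:
    print(row)
# Produces 4 tuples, including ('Smith', 2, 'Houston') even if Smith
# never worked on project 2: a spurious tuple.
```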
Objects and Literals

Objects and literals are the basic building blocks of the object model. The main difference between the two is that an object has both an object identifier and a state (or current value), whereas a literal has a value (state) but no object identifier; in either case, the value can have a complex structure. An object has five aspects: identifier, name, lifetime, structure, and creation.
1. The object identifier is a unique system-wide identifier. Every object must have an object identifier.
2. Some objects may optionally be given a unique name within a particular ODMS; this name can be used to locate the object, and the system should return the object given that name.
3. The lifetime of an object specifies whether it is a transient or a persistent object. Lifetimes are independent of types; that is, some objects of a particular type may be transient whereas others may be persistent.
4. The structure of an object specifies how the object is constructed by using the type constructors, and in particular whether the object is atomic or not.
5. Object creation refers to the manner in which an object can be created.

There are three types of literals: atomic, structured, and collection. Atomic literals correspond to the values of basic data types and are predefined; the basic data types of the object model include long, short, and unsigned integer numbers, among others. Structured literals correspond roughly to values that are constructed using the tuple constructor. Collection literals specify a literal value that is a collection of objects or values, but the collection itself does not have an Object_id.

Functional Dependency

A functional dependency is a constraint between two sets of attributes of a relation. If R is a relation with attribute sets X and Y, the functional dependency X -> Y specifies that Y is functionally dependent on X: whenever two tuples agree on their X values, they must also agree on their Y values. A functional dependency is a property of the semantics or meaning of the attributes, and it is a property of the relation schema R, not of a particular legal relation state r of R. The relation extensions that satisfy the functional dependency constraints are called legal relation states of R. The main use of functional dependencies is to describe a relation schema R by specifying constraints on its attributes that must hold at all times.
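A sketch of checking whether a given relation state satisfies X -> Y under the definition above (the employee rows and attribute names are invented). Note that a check over one state can only refute a dependency; an FD that happens to hold in a sample state is not thereby proven to hold for the schema.

```python
def satisfies_fd(rows, X, Y):
    """True iff every pair of rows that agrees on X also agrees on Y."""
    seen = {}
    for row in rows:
        x_val = tuple(row[a] for a in X)
        y_val = tuple(row[a] for a in Y)
        if seen.setdefault(x_val, y_val) != y_val:
            return False   # two tuples agree on X but differ on Y
    return True

emp = [
    {"ssn": 1, "ename": "Smith", "dept": "R&D"},
    {"ssn": 2, "ename": "Wong",  "dept": "R&D"},
]
print(satisfies_fd(emp, X=["ssn"], Y=["ename"]))   # True: ssn -> ename holds here
print(satisfies_fd(emp, X=["dept"], Y=["ename"]))  # False: dept -> ename is violated
```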
Primary and Secondary Storage

Primary storage can be operated on directly by the computer's central processing unit (CPU); it includes the computer's main memory and the smaller but faster cache memories. Primary storage usually provides fast access to data but is of limited capacity and is more expensive per byte. Secondary storage devices usually have a larger capacity, cost less, and provide slower access to data than primary storage devices. Data in secondary or tertiary storage cannot be processed directly by the CPU; it must first be copied into primary storage and then processed. Secondary storage includes magnetic disks (hard-disk drives are classified as secondary storage), optical disks (CD-ROMs, DVDs, and similar media), and tapes.

RAID and Data Striping

The main goal of RAID is to even out the widely different rates of performance improvement of disks against those of memory and microprocessors. While RAM capacities have quadrupled every two to three years, disk access times are improving at less than 10 percent per year, and disk transfer rates are improving at roughly 20 percent per year. Disk capacities are indeed improving at more than 50 percent per year, but the speed and access-time improvements are of a much smaller magnitude. A second qualitative disparity exists between the ability of special microprocessors that cater to new applications involving video, audio, image, and spatial data processing and the corresponding lack of fast access to large, shared data sets. The natural solution is a large array of small independent disks acting as a single higher-performance logical disk. A concept called data striping is used, which utilizes parallelism to improve disk performance: data striping distributes data transparently over multiple disks to make them appear as a single large, fast disk.
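A toy sketch of block-level striping: consecutive blocks are distributed round-robin across the disks so that a large read can proceed in parallel. The disk count and block size here are arbitrary.

```python
def stripe(data, n_disks, block_size=4):
    """Distribute consecutive blocks of data round-robin over n_disks."""
    disks = [[] for _ in range(n_disks)]
    blocks = [data[i:i + block_size] for i in range(0, len(data), block_size)]
    for i, block in enumerate(blocks):
        disks[i % n_disks].append(block)   # block i goes to disk i mod n_disks
    return disks

disks = stripe(b"ABCDEFGHIJKLMNOP", n_disks=4)
for d, blocks in enumerate(disks):
    print(f"disk {d}: {blocks}")
# The four disks can now transfer their blocks in parallel, so the array
# behaves like a single larger, faster logical disk.
```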
SQL Privileges

In SQL, the following types of privileges can be granted on each individual relation R:
1. SELECT (retrieval or read) privilege on R. This gives the account the privilege to use the SELECT statement to retrieve tuples from R.
2. Modification privileges on R. These give the account the capability to modify the tuples of R. In SQL this includes three privileges, UPDATE, DELETE, and INSERT, corresponding to the three SQL commands for modifying a table R. Additionally, both the INSERT and UPDATE privileges can specify that only certain attributes of R can be modified by the account.
3. REFERENCES privilege on R. This gives the account the capability to reference (or refer to) the relation R when specifying integrity constraints. This privilege can also be restricted to specific attributes of R.

GRANT is the command used to give users access or privileges on database objects; granting a user edit privileges (SELECT, UPDATE, INSERT, and DELETE) allows the user to both view and modify the contents of a table. The REVOKE command removes user access rights or privileges on database objects. In some cases it is desirable to grant a privilege to a user only temporarily: for example, the owner of a relation may want to grant the SELECT privilege to a user for a specific task and then revoke that privilege once the task is completed. Hence a mechanism for revoking privileges is needed, and SQL includes the REVOKE command for the purpose of cancelling privileges.
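The temporary-privilege cycle might look like the following sketch, which assumes a PostgreSQL database reached through the psycopg2 driver; the database name, table, and account (a3) are all invented for illustration.

```python
import psycopg2  # assumed driver; any DB-API driver against a DBMS with GRANT works similarly

conn = psycopg2.connect("dbname=company user=owner")  # hypothetical connection
cur = conn.cursor()

# Grant SELECT on EMPLOYEE to account a3 for a specific task...
cur.execute("GRANT SELECT ON employee TO a3")
conn.commit()

# ...and cancel the privilege once the task is completed.
cur.execute("REVOKE SELECT ON employee FROM a3")
conn.commit()

cur.close()
conn.close()
```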
Differences between Databases and IR Systems

Databases:
1. Structured data
2. Schema driven
3. Relational (or object, hierarchical, and network) model is predominant
4. Structured query model
5. Rich metadata operations
6. Query returns data
7. Results are based on exact matching (always correct)

IR systems:
1. Unstructured data
2. No fixed schema; various data models (e.g., vector space model)
3. Free-form query models
4. Rich data operations
5. Search request returns list of, or pointers to, documents
6. Results are based on approximate matching and measures of effectiveness (may be imprecise and ranked)
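The contrast in the last items can be made concrete with a toy example: a database query either matches a tuple or it does not, whereas an IR search scores and ranks documents. The scoring below is a deliberately crude term-overlap measure, not a real vector space model.

```python
docs = {
    "d1": "big data storage and analysis",
    "d2": "relational database storage",
    "d3": "cooking recipes",
}

# Database-style exact matching: a row either satisfies the predicate or not.
exact = [d for d, text in docs.items() if "storage" in text.split()]
print(exact)  # ['d1', 'd2']

# IR-style approximate matching: score every document, then rank.
query = {"big", "data", "storage"}
scores = {d: len(query & set(text.split())) / len(query)
          for d, text in docs.items()}
ranked = sorted(scores.items(), key=lambda kv: -kv[1])
print(ranked)  # d1 ranked first (score 1.0), then d2 (~0.33), then d3 (0.0)
```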
