
Analysis of Algorithms and Data Structures

Submitted By sindhu29
1.2. Representing integers with no size restriction
• The most important factor affecting running time is normally the size of the input.
• Built-in integer types have a fixed size, which limits how large a value can be entered and stored.
• A linked list can store an integer without any size restriction by keeping each digit in its own node.
• The stored data can also be reused.

Addition:
A = 456
Represented as 6 → 5 → 4 in a linked list (least significant digit first)
B = 094
Represented as 4 → 9 → 0 in a linked list
The result is C = 550
Represented as 0 → 5 → 5 in a linked list.
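As a sketch, the digit-by-digit addition above can be written in Python. The `Node` class and helper names are illustrative, not from the original text:

```python
class Node:
    def __init__(self, digit, nxt=None):
        self.digit = digit
        self.next = nxt

def from_int(n):
    # Build a list with the least significant digit first: 456 -> 6 -> 5 -> 4
    head = tail = Node(n % 10)
    n //= 10
    while n:
        tail.next = Node(n % 10)
        tail = tail.next
        n //= 10
    return head

def add(a, b):
    # Walk both lists in step, propagating the carry to the next node.
    dummy = tail = Node(0)
    carry = 0
    while a or b or carry:
        s = carry + (a.digit if a else 0) + (b.digit if b else 0)
        carry, digit = divmod(s, 10)
        tail.next = Node(digit)
        tail = tail.next
        a = a.next if a else None
        b = b.next if b else None
    return dummy.next

def to_int(node):
    total, place = 0, 1
    while node:
        total += node.digit * place
        place *= 10
        node = node.next
    return total
```

Storing the least significant digit first means addition simply walks both lists from their heads, carrying overflow forward, exactly as in the 456 + 094 example.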

Multiplication:
A = 123, B = 456
A is represented as 3 → 2 → 1 in a linked list.
B is represented as 6 → 5 → 4 in a linked list.
STEP 1:
123 * 6 = 738, stored as 8 → 3 → 7 in a linked list.
STEP 2:
123 * 50 = 6150, stored as 0 → 5 → 1 → 6 in a linked list.
STEP 3: ADD STEP 1 AND STEP 2
738 + 6150 = 6888, stored as 8 → 8 → 8 → 6 in a linked list.
STEP 4:
123 * 400 = 49200, stored as 0 → 0 → 2 → 9 → 4 in a linked list.
STEP 5: ADD STEP 3 AND STEP 4
6888 + 49200 = 56088, stored as 8 → 8 → 0 → 6 → 5 in a linked list.
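This schoolbook multiplication can be sketched in Python, here using a plain list of digits (least significant first) as a stand-in for the node chain; all names are illustrative:

```python
def multiply(a, b):
    # a, b: digit lists, least significant digit first (123 -> [3, 2, 1]).
    # Accumulate every partial product ad * bd at place i + j, then normalize.
    result = [0] * (len(a) + len(b))
    for j, bd in enumerate(b):
        for i, ad in enumerate(a):
            result[i + j] += ad * bd
    # Normalize: push carries up one place at a time.
    carry = 0
    for k in range(len(result)):
        carry, result[k] = divmod(result[k] + carry, 10)
    while len(result) > 1 and result[-1] == 0:
        result.pop()  # drop leading zeros
    return result
```

Accumulating all partial products first and carrying once at the end gives the same answer as the step-by-step additions above, while keeping the inner loop simple.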

Exponentiation: One method performs repeated multiplication, one multiply per unit of the exponent, taking Θ(n) time. The better method is a Θ(log n) algorithm based on the binary representation of the exponent (repeated squaring).
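The Θ(log n) method squares the base once per bit of the exponent and multiplies into the result whenever that bit is 1. A minimal Python sketch (function name assumed):

```python
def power(x, n):
    # Theta(log n) exponentiation via the binary representation of n:
    # square x for each bit, multiply into result when the bit is set.
    result = 1
    while n > 0:
        if n & 1:
            result *= x
        x *= x
        n >>= 1
    return result
```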

1.5. Representation of one’s and two’s complement
1’s complement:
+4 = 0100
-4 = 1011 (the 1’s complement of +4 gives -4, i.e., every bit is inverted).
So 1’s complement can represent both positive and negative integers.

2’s complement:
Convert the value to its 1’s complement and then add 1.
-4 = (1’s complement of 4) + 1 = 1011 + 1 = 1100.
Two’s complement is used when implementing ADTs that perform arithmetic operations, so it qualifies as a data-structure concern.
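Both complements can be sketched in Python for a fixed bit width (the function names and the 4-bit default are illustrative assumptions):

```python
def ones_complement(value, bits=4):
    # Invert every bit within the given width: 0100 -> 1011.
    mask = (1 << bits) - 1
    return ~value & mask

def twos_complement(value, bits=4):
    # One's complement plus one: 1011 + 1 = 1100.
    mask = (1 << bits) - 1
    return (ones_complement(value, bits) + 1) & mask
```

Adding a number to its two’s complement yields zero modulo 2^bits, which is why this representation lets ordinary binary addition handle signed arithmetic.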

1.9. Does every problem have an algorithm? Not every problem has an algorithm: some problems can be stated precisely yet admit no algorithm that solves them. Even when an algorithm exists, its value must be judged on large inputs, so for a given problem it can be uncertain whether a usable algorithm can be found.

1.12. Implementation of a database
• The operations considered here are insertion, deletion, and retrieval of records in the database.
• Typical attributes for city information are city name, location, zip code, and so on.
• Time constraints and efficiency play a major role in database design.
• Inserting or deleting a small number of records takes seconds, which is acceptable, but range queries and mass deletions take much longer and reduce efficiency.
• A hash table is inappropriate for this implementation because it cannot answer range queries efficiently.
• A B+ tree is a more efficient choice for implementing the database.
• A simple linear index can be used when the data is created once and never changed.
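A minimal sketch of such a static linear index, using Python's standard `bisect` module over a sorted key column (the sample city data and function names are invented for illustration):

```python
import bisect

# A static, sorted (zip code, city) index: built once, then only searched.
cities = sorted([
    ("30301", "Atlanta"),
    ("60601", "Chicago"),
    ("10001", "New York"),
    ("98101", "Seattle"),
])
keys = [zipcode for zipcode, _ in cities]

def lookup(zipcode):
    # Binary search on the sorted key column: O(log n) per lookup.
    i = bisect.bisect_left(keys, zipcode)
    if i < len(keys) and keys[i] == zipcode:
        return cities[i][1]
    return None

def range_query(low, high):
    # Range queries are cheap on a sorted index, unlike a hash table.
    lo = bisect.bisect_left(keys, low)
    hi = bisect.bisect_right(keys, high)
    return [name for _, name in cities[lo:hi]]
```

Because the index never changes after it is built, there is no insertion or deletion cost to pay, which is exactly the situation where a linear index beats tree-based structures.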
