CSC 435
DESIGN AND ANALYSIS OF ALGORITHM
GROUP THREE(3) ASSIGNMENT
THE KOLMOGOROV COMPLEXITY ALGORITHM
Computer Science:

FMS/0704/11
FMS/0707/11
FMS/0720/11
FMS/0721/11
FMS/0728/11

Computing-with-Accounting:

FMS/0818/11
FMS/0643/11
FMS/0749/11
FMS/0722/11
FMS/0729/11
FMS/0741/11
FMS/0829/11
FMS/0784/11
FMS/0812/11
FMS/0652/11
Kolmogorov complexity
In algorithmic information theory (a subfield of computer science and mathematics), the Kolmogorov complexity (also known as descriptive complexity, Kolmogorov–Chaitin complexity, algorithmic entropy, or program-size complexity) of an object, such as a piece of text, is a measure of the computational resources needed to specify the object. It is named after Andrey Kolmogorov, who first published on the subject in 1963.
For example, consider the following two strings of 32 lowercase letters and digits:
abababababababababababababababab
4c1j5b2p0cv4w1x8rx2y39umgw5q85s7
The first string has a short English-language description, namely "ab 16 times", which consists of 11 characters. The second one has no obvious simple description (using the same character set) other than writing down the string itself, which has 32 characters.
More formally, the complexity of a string is the length of the shortest possible description of the string in some fixed universal description language (the sensitivity of complexity relative to the choice of description language is discussed below). It can be shown that the Kolmogorov complexity of any string cannot be more than a few bytes larger than the length of the string itself. Strings, like the abab example above, whose Kolmogorov complexity is small relative to the string's size are not considered to be complex.
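To make this concrete, here is a rough sketch (ours, not part of the formal definition) that uses the length of a Python program's source text as the description length. The short program exploits the first string's regularity; for the second string, nothing obviously shorter than quoting it outright presents itself.

    # Illustrative sketch: Python source length as a stand-in for "description length".
    random_looking = "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"

    desc_regular = 'print("ab" * 16)'                   # a short description of the first string
    desc_literal = 'print("' + random_looking + '")'    # quote the second string outright

    print(len(desc_regular))   # 16 characters
    print(len(desc_literal))   # 41 characters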
The notion of the Kolmogorov complexity can be used to state and prove impossibility results akin to Cantor's diagonal argument, Gödel's incompleteness theorem, and Turing's halting problem.
Definition
The Kolmogorov complexity can be defined for any mathematical object, but for simplicity the scope of this article is restricted to strings. We must first specify a description language for strings. Such a description language can be based on any computer programming language, such as Lisp, Pascal, or Java virtual machine bytecode. If P is a program which outputs a string x, then P is a description of x. The length of the description is just the length of P as a character string, multiplied by the number of bits in a character (e.g. 7 for ASCII).
We could, alternatively, choose an encoding for Turing machines, where an encoding is a function which associates to each Turing Machine M a bitstring <M>. If M is a Turing Machine which, on input w, outputs string x, then the concatenated string <M> w is a description of x. For theoretical analysis, this approach is more suited for constructing detailed formal proofs and is generally preferred in the research literature. In this article, an informal approach is discussed.
Any string s has at least one description, namely the program:
    function GenerateFixedString()
        return s
If a description of s, d(s), is of minimal length (i.e. it uses the fewest bits), it is called a minimal description of s. Thus, the length of d(s) (i.e. the number of bits in the description) is the Kolmogorov complexity of s, written K(s). Symbolically,
K(s) = |d(s)|.
The length of the shortest description will depend on the choice of description language; but the effect of changing languages is bounded (a result called the invariance theorem).
Invariance theorem
Informal treatment
Some description languages are optimal, in the following sense: given any description of an object in some description language, that description can be used in the optimal description language with only a constant overhead. The constant depends only on the languages involved, not on the description or on the object being described.
Here is an example of an optimal description language. A description will have two parts:
* The first part describes another description language.
* The second part is a description of the object in that language.
In more technical terms, the first part of a description is a computer program, with the second part being the input to that computer program which produces the object as output.
The invariance theorem follows: Given any description language L, the optimal description language is at least as efficient as L, with some constant overhead.
Proof: Any description D in L can be converted into a description in the optimal language by first describing L as a computer program P (part 1), and then using the original description D as input to that program (part 2). The total length of this new description D’ is (approximately):
|D’| = |P| + |D|
The length of P is a constant that doesn't depend on D. So, there is at most a constant overhead, regardless of the object described. Therefore, the optimal language is universal up to this additive constant.

A more formal treatment
Theorem: If K1 and K2 are the complexity functions relative to Turing complete description languages L1 and L2, then there is a constant c – which depends only on the languages L1 and L2 chosen – such that
∀s. -c ≤ K1(s) - K2(s) ≤ c.
Proof: By symmetry, it suffices to prove that there is some constant c such that for all strings s
K1(s) ≤ K2(s) + c.
Now, suppose there is a program in the language L1 which acts as an interpreter for L2:
    function InterpretLanguage(string p)
where p is a program in L2. The interpreter is characterized by the following property:
Running InterpretLanguage on input p returns the result of running p.
Thus, if P is a program in L2 which is a minimal description of s, then InterpretLanguage(P) returns the string s. The length of this description of s is the sum of:
1. The length of the program InterpretLanguage, which we can take to be the constant c.
2. The length of P, which by definition is K2(s).
This proves the desired upper bound.
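As a toy concretisation of this proof (the mini-language below is invented purely for illustration), take L1 to be Python and L2 to be a language whose programs have the form "<count> <text>", meaning "output <text> repeated <count> times". The interpreter's source has some fixed length, so any L2 description becomes an L1 description that is longer by at most that fixed constant.

    # A sketch of InterpretLanguage for a toy language L2 whose programs are "<count> <text>".
    def InterpretLanguage(p: str) -> str:
        count, text = p.split(" ", 1)
        return text * int(count)

    p = "16 ab"                    # a short L2 description of "abab...ab" (32 characters)
    print(InterpretLanguage(p))    # the described string

    # A Python (L1) description of the same string is the interpreter's source followed by p;
    # its length exceeds len(p) only by the interpreter's fixed length, independent of p.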
History and context
Algorithmic information theory is the area of computer science that studies Kolmogorov complexity and other complexity measures on strings (or other data structures).
The concept and theory of Kolmogorov Complexity is based on a crucial theorem first discovered by Ray Solomonoff, who published it in 1960, describing it in "A Preliminary Report on a General Theory of Inductive Inference" as part of his invention of algorithmic probability. He gave a more complete description in his 1964 publications, "A Formal Theory of Inductive Inference," Part 1 and Part 2 in Information and Control.
Andrey Kolmogorov later independently published this theorem in Problems Inform. Transmission in 1965. Gregory Chaitin also presents this theorem in J. ACM – Chaitin's paper was submitted October 1966 and revised in December 1968, and cites both Solomonoff's and Kolmogorov's papers.
The theorem says that, among algorithms that decode strings from their descriptions (codes), there exists an optimal one. This algorithm, for all strings, allows codes as short as allowed by any other algorithm up to an additive constant that depends on the algorithms, but not on the strings themselves. Solomonoff used this algorithm, and the code lengths it allows, to define a "universal probability" of a string on which inductive inference of the subsequent digits of the string can be based. Kolmogorov used this theorem to define several functions of strings, including complexity, randomness, and information.
When Kolmogorov became aware of Solomonoff's work, he acknowledged Solomonoff's priority. For several years, Solomonoff's work was better known in the Soviet Union than in the Western World. The general consensus in the scientific community, however, was to associate this type of complexity with Kolmogorov, who was concerned with randomness of a sequence, while Algorithmic Probability became associated with Solomonoff, who focused on prediction using his invention of the universal prior probability distribution. The broader area encompassing descriptional complexity and probability is often called Kolmogorov complexity. The computer scientist Ming Li considers this an example of the Matthew effect: "... to everyone who has more will be given ..."
There are several other variants of Kolmogorov complexity or algorithmic information. The most widely used one is based on self-delimiting programs, and is mainly due to Leonid Levin (1974).
An axiomatic approach to Kolmogorov complexity based on Blum axioms (Blum 1967) was introduced by Mark Burgin in the paper presented for publication by Andrey Kolmogorov (Burgin 1982).
Basic results
In the following discussion, let K(s) be the complexity of the string s.
It is not hard to see that the minimal description of a string cannot be too much larger than the string itself: the program GenerateFixedString above that outputs s is only a fixed amount larger than s.
Theorem: There is a constant c such that
∀s. K(s) ≤ |s| + c.
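A minimal sketch of where such a constant comes from, again using Python source length as an informal measuring stick (an illustrative choice, not the formal machine model): the literal-printing program below works for any string, and its overhead over |s| is a fixed number of characters for strings that need no heavy escaping.

    # For any string s, the program print(<literal for s>) outputs s, so in this
    # description language K(s) <= |s| + c, where c is the quoting/printing overhead.
    def literal_description(s: str) -> str:
        return "print(" + repr(s) + ")"

    for s in ("ab" * 16, "4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"):
        print(len(literal_description(s)) - len(s))   # the overhead, here 9 characters

    # Strings full of quotes or backslashes make repr() longer; formal treatments avoid
    # this by fixing a binary encoding, which is why c is a genuine constant.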
Uncomputability of Kolmogorov complexity
Theorem: There exist strings of arbitrarily large Kolmogorov complexity. Formally: for each n ∈ ℕ, there is a string s with K(s) ≥ n.
Proof: Otherwise, all of the infinitely many possible strings could be generated by the finitely many programs with complexity below n bits.

Theorem: K is not a computable function. In other words, there is no program which takes a string s as input and produces the integer K(s) as output.
The following indirect proof uses a simple Pascal-like language to denote programs; for the sake of proof simplicity, assume its description (i.e. an interpreter) to have a length of 1400000 bits. Assume for contradiction there is a program
    function KolmogorovComplexity(string s)
which takes as input a string s and returns K(s); for the sake of proof simplicity, assume its length to be 7000000000 bits. Now, consider the following program of length 1288 bits:
    function GenerateComplexString()
        for i = 1 to infinity:
            for each string s of length exactly i
                if KolmogorovComplexity(s) >= 8000000000
                    return s
Using KolmogorovComplexity as a subroutine, the program tries every string, starting with the shortest, until it returns a string with Kolmogorov complexity at least 8000000000 bits, i.e. a string that cannot be produced by any program shorter than 8000000000 bits. However, the overall length of the above program that produced s is only 7001401288 bits, which is a contradiction. (If the code of KolmogorovComplexity is shorter, the contradiction remains. If it is longer, the constant used in GenerateComplexString can always be changed appropriately.)
The above proof used a contradiction similar to that of the Berry paradox: "The smallest positive integer that cannot be defined in fewer than twenty English words". It is also possible to show the non-computability of K by reduction from the non-computability of the halting problem H, since K and H are Turing-equivalent.
There is a corollary, humorously called the "full employment theorem" in the programming language community, stating that there is no perfect size-optimizing compiler.
Chain rule for Kolmogorov complexity
Main article: Chain rule for Kolmogorov complexity
The chain rule for Kolmogorov complexity states that
K(X,Y) = K(X) + K(Y|X) + O(log(K(X,Y))).
It states that the shortest program that reproduces X and Y is no more than a logarithmic term larger than a program to reproduce X and a program to reproduce Y given X. Using this statement, one can define an analogue of mutual information for Kolmogorov complexity.
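One common way to write the resulting analogue (often called algorithmic mutual information) is the following; the notation is ours and the identities hold only up to logarithmic additive terms:
I(X : Y) = K(Y) − K(Y|X) = K(X) + K(Y) − K(X,Y) + O(log(K(X,Y))).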

Compression
It is straightforward to compute upper bounds for K(s) – simply compress the string s with some method, implement the corresponding decompressor in the chosen language, concatenate the decompressor to the compressed string, and measure the length of the resulting string.
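As a rough sketch of this recipe in Python (zlib is just one convenient compressor; the size of a real decompressor stub is not measured here, only represented by an assumed constant), the compressed length plus that constant gives an upper bound on K(s):

    import zlib

    def upper_bound_on_K(s: bytes, decompressor_stub_bits: int = 10000) -> int:
        # description = decompressor + compressed data; its total length bounds K(s) from above
        compressed = zlib.compress(s, 9)
        return 8 * len(compressed) + decompressor_stub_bits   # in bits; stub size is a guess

    print(upper_bound_on_K(b"ab" * 16))                              # regular string: smaller bound
    print(upper_bound_on_K(b"4c1j5b2p0cv4w1x8rx2y39umgw5q85s7"))     # random-looking: larger bound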
A string s is compressible by a number c if it has a description whose length does not exceed |s| − c bits. This is equivalent to saying that K(s) ≤ |s| − c. Otherwise, s is incompressible by c. A string incompressible by 1 is said simply to be incompressible. Incompressible strings must exist by the pigeonhole principle, which applies because every compressed string maps to only one uncompressed string: there are 2^n bit strings of length n, but only 2^n − 1 shorter strings, that is, strings of length less than n (i.e. of length 0, 1, ..., n − 1).
For the same reason, most strings are complex in the sense that they cannot be significantly compressed: their K(s) is not much smaller than |s|, the length of s in bits. To make this precise, fix a value of n. There are 2^n bitstrings of length n. The uniform probability distribution on the space of these bitstrings assigns equal weight 2^(-n) to each string of length n.
Theorem: With the uniform probability distribution on the space of bitstrings of length n, the probability that a string is incompressible by c is at least 1 − 2^(-c+1) + 2^(-n).
To prove the theorem, note that the number of descriptions of length not exceeding n − c is given by the geometric series:
1 + 2 + 2^2 + ... + 2^(n-c) = 2^(n-c+1) − 1.
There remain at least
2^n − 2^(n-c+1) + 1
bitstrings of length n that are incompressible by c. To determine the probability, divide by 2^n.
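A quick numerical sanity check of this counting argument (a toy sketch; descriptions are treated simply as bit strings of length at most n − c):

    # At most 2^(n-c+1) - 1 descriptions of length <= n - c exist, so at least
    # 2^n - 2^(n-c+1) + 1 of the 2^n strings of length n are incompressible by c.
    def incompressible_fraction_lower_bound(n: int, c: int) -> float:
        descriptions = 2 ** (n - c + 1) - 1              # 1 + 2 + 4 + ... + 2^(n-c)
        return (2 ** n - descriptions) / 2 ** n          # equals 1 - 2^(-c+1) + 2^(-n)

    for c in (1, 2, 5, 10):
        print(c, incompressible_fraction_lower_bound(32, c))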
Chaitin's incompleteness theorem

[Figure omitted. Caption: Kolmogorov complexity K(s) and two computable lower-bound functions prog1(s), prog2(s). The horizontal axis (logarithmic scale) enumerates all strings s, ordered by length; the vertical axis (linear scale) measures string length in bits. Most strings are incompressible, i.e. their Kolmogorov complexity exceeds their length by a constant amount. 17 compressible strings are shown, appearing as almost vertical slopes. Due to Chaitin's incompleteness theorem, the output of any program computing a lower bound of the Kolmogorov complexity cannot exceed some fixed limit, which is independent of the input string s.]
We know that, in the set of all possible strings, most strings are complex in the sense that they cannot be described in any significantly "compressed" way. However, it turns out that the fact that a specific string is complex cannot be formally proven, if the complexity of the string is above a certain threshold. The precise formalization is as follows. First, fix a particular axiomatic system S for the natural numbers. The axiomatic system has to be powerful enough so that, to certain assertions A about complexity of strings, one can associate a formula FA in S. This association must have the following property:
If FA is provable from the axioms of S, then the corresponding assertion A must be true. This "formalization" can be achieved, either by an artificial encoding such as a Gödel numbering, or by a formalization which more clearly respects the intended interpretation of S.
Theorem: There exists a constant L (which only depends on the particular axiomatic system and the choice of description language) such that there does not exist a string s for which the statement
K(s) ≥ L (as formalized in S) can be proven within the axiomatic system S.
Note that, by the abundance of nearly incompressible strings, the vast majority of those statements must be true.
The proof of this result is modeled on a self-referential construction used in Berry's paradox. The proof is by contradiction. If the theorem were false, then
Assumption (X): For any integer n there exists a string s for which there is a proof in S of the formula "K(s) ≥ n" (which we assume can be formalized in S).
We can find an effective enumeration of all the formal proofs in S by some procedure
    function NthProof(int n)
which takes as input n and outputs some proof. This function enumerates all proofs. Some of these are proofs for formulas we do not care about here, since every possible proof in the language of S is produced for some n. Some of these are complexity formulas of the form K(s) ≥ n, where s and n are constants in the language of S. There is a program
    function NthProofProvesComplexityFormula(int n)
which determines whether the nth proof actually proves a complexity formula K(s) ≥ L. The strings s, and the integer L in turn, are computable by the programs
    function StringNthProof(int n)
    function ComplexityLowerBoundNthProof(int n)
Consider the following program:
    function GenerateProvablyComplexString(int n)
        for i = 1 to infinity:
            if NthProofProvesComplexityFormula(i) and ComplexityLowerBoundNthProof(i) ≥ n
                return StringNthProof(i)
Given an n, this program tries every proof until it finds a string and a proof in the formal system S of the formula K(s) ≥ L for some L ≥ n. The program terminates by our Assumption (X). Now, this program has a length U. There is an integer n0 such that U + log2(n0) + C < n0, where C is the overhead cost of
    function GenerateProvablyParadoxicalString()
        return GenerateProvablyComplexString(n0)
(note that n0 is hard-coded into the above function, and the summand log2(n0) already allows for its encoding). The program GenerateProvablyParadoxicalString outputs a string s for which there exists an L such that K(s) ≥ L can be formally proved in S with L ≥ n0. In particular, K(s) ≥ n0 is true. However, s is also described by a program of length U + log2(n0) + C, so its complexity is less than n0. This contradiction proves Assumption (X) cannot hold.
Similar ideas are used to prove the properties of Chaitin's constant.
Minimum message length
The minimum message length principle of statistical and inductive inference and machine learning was developed by C.S. Wallace and D.M. Boulton in 1968. MML is Bayesian (i.e. it incorporates prior beliefs) and information-theoretic. It has the desirable properties of statistical invariance (i.e. the inference transforms with a re-parametrisation, such as from polar coordinates to Cartesian coordinates), statistical consistency (i.e. even for very hard problems, MML will converge to any underlying model) and efficiency (i.e. the MML model will converge to any true underlying model about as quickly as is possible). C.S. Wallace and D.L. Dowe (1999) showed a formal connection between MML and algorithmic information theory (or Kolmogorov complexity).
Kolmogorov randomness
Kolmogorov randomness – also called algorithmic randomness – defines a string (usually of bits) as being random if and only if it is shorter than any computer program that can produce that string. To make this precise, a universal computer (or universal Turing machine) must be specified, so that "program" means a program for this universal machine. A random string in this sense is "incompressible" in that it is impossible to "compress" the string into a program whose length is shorter than the length of the string itself. A counting argument is used to show that, for any universal computer, there is at least one algorithmically random string of each length. Whether any particular string is random, however, depends on the specific universal computer that is chosen.
This definition can be extended to define a notion of randomness for infinite sequences from a finite alphabet. These algorithmically random sequences can be defined in three equivalent ways. One way uses an effective analogue of measure theory; another uses effective martingales. The third way defines an infinite sequence to be random if the prefix-free Kolmogorov complexity of its initial segments grows quickly enough - there must be a constant c such that the complexity of an initial segment of length n is always at least n−c. This definition, unlike the definition of randomness for a finite string, is not affected by which universal machine is used to define prefix-free Kolmogorov complexity.
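In symbols (the standard Levin–Schnorr formulation, with K denoting prefix-free complexity and ω1...ωn the first n bits of the sequence ω), the third characterisation reads: there exists a constant c such that
K(ω1...ωn) ≥ n − c for all n.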
Relation to entropy
For dynamical systems, entropy rate and algorithmic complexity of the trajectories are related by a theorem of Brudno, that the equality K(x;T) = h(T) holds for almost all x.
It can be shown that for the output of Markov information sources, Kolmogorov complexity is related to the entropy of the information source. More precisely, the Kolmogorov complexity of the output of a Markov information source, normalized by the length of the output, converges almost surely (as the length of the output goes to infinity) to the entropy of the source.
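Written out (a standard formulation; X1X2...Xn denotes the first n symbols emitted by the source and H its entropy rate):
lim (n→∞) K(X1X2...Xn) / n = H, almost surely.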
Conditional versions
The conditional [Kolmogorov] complexity of two strings K(x|y) is, roughly speaking, defined as the Kolmogorov complexity of x given y as an auxiliary input to the procedure.
There is also a length-conditional complexity K(x|l(x)), which is the complexity of x given the length of x as known/input.
