-------------------------------------------------
Data compression
From Wikipedia, the free encyclopedia
In digital signal processing, data compression, source coding,[1] or bit-rate reduction involves encoding information using fewer bits than the original representation.[2] Compression can be either lossy or lossless. Lossless compression reduces bits by identifying and eliminating statistical redundancy. No information is lost in lossless compression. Lossy compression reduces bits by identifying unnecessary information and removing it.[3] The process of reducing the size of a data file is referred to as data compression. In the context of data transmission, it is called source coding (encoding done at the source of the data before it is stored or transmitted) in opposition to channel coding.[4]
Compression is useful because it helps reduce resource usage, such as data storage space or transmission capacity. Because compressed data must be decompressed before it can be used, this extra processing imposes computational or other costs; the situation is far from being a free lunch. Data compression is subject to a space–time complexity trade-off. For instance, a compression scheme for video may require expensive hardware for the video to be decompressed fast enough to be viewed as it is being decompressed, and the option to decompress the video in full before watching it may be inconvenient or require additional storage. The design of data compression schemes involves trade-offs among various factors, including the degree of compression, the amount of distortion introduced (when using lossy data compression), and the computational resources required to compress and decompress the data.[5][6]
-------------------------------------------------
Lossless
Lossless data compression algorithms usually exploit statistical redundancy to represent data more concisely without losing information, so that the process is reversible. Lossless compression is possible because most real-world data have statistical redundancy. For example, an image may have areas of colour that do not change over several pixels; instead of coding "red pixel, red pixel, ..." the data may be encoded as "279 red pixels". This is a basic example of run-length encoding; there are many schemes to reduce file size by eliminating redundancy.
The Lempel–Ziv (LZ) compression methods are among the most popular algorithms for lossless storage.[7] DEFLATE is a variation on LZ optimized for decompression speed and compression ratio, but compression can be slow. DEFLATE is used in PKZIP, Gzip and PNG. LZW (Lempel–Ziv–Welch) is used in GIF images. Also noteworthy is the LZR (Lempel-Ziv–Renau) algorithm, which serves as the basis for the Zip method.[citation needed] LZ methods use a table-based compression model where table entries are substituted for repeated strings of data. For most LZ methods, this table is generated dynamically from earlier data in the input. The table itself is often Huffman encoded (e.g. SHRI, LZX). A current LZ-based coding scheme that performs well is LZX, used in Microsoft's CAB format.
The best modern lossless compressors use probabilistic models, such as prediction by partial matching. The Burrows–Wheeler transform can also be viewed as an indirect form of statistical modelling.[8]
The class of grammar-based codes are gaining popularity because they can compress highly repetitive input extremely effectively, for instance, a biological data collection of the same or closely related species, a huge versioned document collection, internet archival, etc. The basic task of grammar-based codes is constructing a context-free grammar deriving a single string. Sequitur and Re-Pair are practical grammar compression algorithms for which software is publicly available.
In a further refinement of the direct use of probabilistic modelling, statistical estimates can be coupled to an algorithm called arithmetic coding. Arithmetic coding is a more modern coding technique that uses the mathematical calculations of a finite-state machine to produce a string of encoded bits from a series of input data symbols. It can achieve superior compression to other techniques such as the better-known Huffman algorithm. It uses an internal memory state to avoid the need to perform a one-to-one mapping of individual input symbols to distinct representations that use an integer number of bits, and it clears out the internal memory only after encoding the entire string of data symbols. Arithmetic coding applies especially well to adaptive data compression tasks where the statistics vary and are context-dependent, as it can be easily coupled with an adaptive model of the probability distribution of the input data. An early example of the use of arithmetic coding was its use as an optional (but not widely used) feature of the JPEG image coding standard. It has since been applied in various other designs including H.264/MPEG-4 AVC and HEVC for video coding.
-------------------------------------------------
Lossy
Lossy data compression is the converse of lossless data compression. In these schemes, some loss of information is acceptable. Dropping nonessential detail from the data source can save storage space. Lossy data compression schemes are designed by research on how people perceive the data in question. For example, the human eye is more sensitive to subtle variations in luminance than it is to the variations in color. JPEG image compression works in part by rounding off nonessential bits of information.[9] There is a corresponding trade-off between preserving information and reducing size. A number of popular compression formats exploit these perceptual differences, including those used in music files, images, and video.
Lossy image compression can be used in digital cameras, to increase storage capacities with minimal degradation of picture quality. Similarly, DVDs use the lossy MPEG-2 video coding format for video compression.
In lossy audio compression, methods of psychoacoustics are used to remove non-audible (or less audible) components of the audio signal. Compression of human speech is often performed with even more specialized techniques; speech coding, or voice coding, is sometimes distinguished as a separate discipline from audio compression. Different audio and speech compression standards are listed under audio coding formats. Voice compression is used in internet telephony, for example, while audio compression is used for CD ripping and is decoded by audio players.[8]
-------------------------------------------------
Theory
The theoretical background of compression is provided by information theory (which is closely related to algorithmic information theory) for lossless compression and rate–distortion theory for lossy compression. These areas of study were essentially forged by Claude Shannon, who published fundamental papers on the topic in the late 1940s and early 1950s. Coding theory is also related to this. The idea of data compression is also deeply connected with statistical inference.[10]
Machine learning
See also: Machine learning
There is a close connection between machine learning and compression: a system that predicts the posterior probabilities of a sequence given its entire history can be used for optimal data compression (by using arithmetic coding on the output distribution) while an optimal compressor can be used for prediction (by finding the symbol that compresses best, given the previous history). This equivalence has been used as a justification for using data compression as a benchmark for "general intelligence."[11]
Data differencing
Main article: Data differencing
Data compression can be viewed as a special case of data differencing:[12][13] Data differencing consists of producing a difference given a source and a target, with patching producing a target given a source and a difference, while data compression consists of producing a compressed file given a target, and decompression consists of producing a target given only a compressed file. Thus, one can consider data compression as data differencing with empty source data, the compressed file corresponding to a "difference from nothing." This is the same as considering absolute entropy (corresponding to data compression) as a special case of relative entropy (corresponding to data differencing) with no initial data.
When one wishes to emphasize the connection, one may use the term differential compression to refer to data differencing.
-------------------------------------------------
Uses
Audio
See also: Audio codec and Audio coding format
Audio data compression, as distinguished from dynamic range compression, has the potential to reduce the transmission bandwidth and storage requirements of audio data. Audio compression algorithms are implemented in software as audio codecs. Lossy audio compression algorithms provide higher compression at the cost of fidelity and are used in numerous audio applications. These algorithms almost all rely on psychoacoustics to eliminate less audible or meaningful sounds, thereby reducing the space required to store or transmit them.[2]
In both lossy and lossless compression, information redundancy is reduced, using methods such as coding, pattern recognition, and linear prediction to reduce the amount of information used to represent the uncompressed data.
The acceptable trade-off between loss of audio quality and transmission or storage size depends upon the application. For example, one 640 MB compact disc (CD) holds approximately one hour of uncompressed high fidelity music, less than 2 hours of music compressed losslessly, or 7 hours of music compressed in the MP3 format at a medium bit rate. A digital sound recorder can typically store around 200 hours of clearly intelligible speech in 640 MB.[14]
Lossless audio compression produces a representation of digital data that decompresses to an exact digital duplicate of the original audio stream, unlike playback from lossy compression techniques such as Vorbis and MP3. Compression ratios are around 50–60% of original size,[15] which is similar to those for generic lossless data compression. Lossless compression is unable to attain high compression ratios due to the complexity of waveforms and the rapid changes in sound forms. Codecs like FLAC, Shorten and TTA use linear prediction to estimate the spectrum of the signal. Many of these algorithms use convolution with the filter [-1 1] to slightly whiten or flatten the spectrum, thereby allowing traditional lossless compression to work more efficiently. The process is reversed upon decompression.
When audio files are to be processed, either by further compression or for editing, it is desirable to work from an unchanged original (uncompressed or losslessly compressed). Processing of a lossily compressed file for some purpose usually produces a final result inferior to the creation of the same compressed file from an uncompressed original. In addition to sound editing or mixing, lossless audio compression is often used for archival storage, or as master copies.
A number of lossless audio compression formats exist. Shorten was an early lossless format. Newer ones include Free Lossless Audio Codec (FLAC), Apple's Apple Lossless(ALAC), MPEG-4 ALS, Microsoft's Windows Media Audio 9 Lossless (WMA Lossless), Monkey's Audio, TTA, and WavPack. See list of lossless codecs for a complete listing.
Some audio formats feature a combination of a lossy format and a lossless correction; this allows stripping the correction to easily obtain a lossy file. Such formats include MPEG-4 SLS (Scalable to Lossless), WavPack, and OptimFROG DualStream.
Other formats are associated with a distinct system, such as: * Direct Stream Transfer, used in Super Audio CD * Meridian Lossless Packing, used in DVD-Audio, Dolby TrueHD, Blu-ray and HD DVD
Lossy audio compression

Comparison of acoustic spectrograms of a song in an uncompressed format and lossy formats. That the lossy spectrograms are different from the uncompressed one indicates that they are, in fact, lossy, but nothing can be assumed about the effect of the changes on perceived quality.
Lossy audio compression is used in a wide range of applications. In addition to the direct applications (mp3 players or computers), digitally compressed audio streams are used in most video DVDs, digital television, streaming media on the internet, satellite and cable radio, and increasingly in terrestrial radio broadcasts. Lossy compression typically achieves far greater compression than lossless compression (data of 5 percent to 20 percent of the original stream, rather than 50 percent to 60 percent), by discarding less-critical data.[16]
The innovation of lossy audio compression was to use psychoacoustics to recognize that not all data in an audio stream can be perceived by the human auditory system. Most lossy compression reduces perceptual redundancy by first identifying perceptually irrelevant sounds, that is, sounds that are very hard to hear. Typical examples include high frequencies or sounds that occur at the same time as louder sounds. Those sounds are coded with decreased accuracy or not at all.
Due to the nature of lossy algorithms, audio quality suffers when a file is decompressed and recompressed (digital generation loss). This makes lossy compression unsuitable for storing the intermediate results in professional audio engineering applications, such as sound editing and multitrack recording. However, they are very popular with end users (particularly MP3) as a megabyte can store about a minute's worth of music at adequate quality.
Coding methods
To determine what information in an audio signal is perceptually irrelevant, most lossy compression algorithms use transforms such as the modified discrete cosine transform (MDCT) to convert time domain sampled waveforms into a transform domain. Once transformed, typically into the frequency domain, component frequencies can be allocated bits according to how audible they are. Audibility of spectral components is calculated using the absolute threshold of hearing and the principles of simultaneous masking—the phenomenon wherein a signal is masked by another signal separated by frequency—and, in some cases, temporal masking—where a signal is masked by another signal separated by time. Equal-loudness contours may also be used to weight the perceptual importance of components. Models of the human ear-brain combination incorporating such effects are often called psychoacoustic models.[17]
Other types of lossy compressors, such as the linear predictive coding (LPC) used with speech, are source-based coders. These coders use a model of the sound's generator (such as the human vocal tract with LPC) to whiten the audio signal (i.e., flatten its spectrum) before quantization. LPC may be thought of as a basic perceptual coding technique: reconstruction of an audio signal using a linear predictor shapes the coder's quantization noise into the spectrum of the target signal, partially masking it.[16]
Lossy formats are often used for the distribution of streaming audio or interactive applications (such as the coding of speech for digital transmission in cell phone networks). In such applications, the data must be decompressed as the data flows, rather than after the entire data stream has been transmitted. Not all audio codecs can be used for streaming applications, and for such applications a codec designed to stream data effectively will usually be chosen.[16]
Latency results from the methods used to encode and decode the data. Some codecs will analyze a longer segment of the data to optimize efficiency, and then code it in a manner that requires a larger segment of data at one time to decode. (Often codecs create segments called a "frame" to create discrete data segments for encoding and decoding.) The inherent latency of the coding algorithm can be critical; for example, when there is a two-way transmission of data, such as with a telephone conversation, significant delays may seriously degrade the perceived quality.
In contrast to the speed of compression, which is proportional to the number of operations required by the algorithm, here latency refers to the number of samples that must be analysed before a block of audio is processed. In the minimum case, latency is zero samples (e.g., if the coder/decoder simply reduces the number of bits used to quantize the signal). Time domain algorithms such as LPC also often have low latencies, hence their popularity in speech coding for telephony. In algorithms such as MP3, however, a large number of samples have to be analyzed to implement a psychoacoustic model in the frequency domain, and latency is on the order of 23 ms (46 ms for two-way communication).
Speech encoding
Speech encoding is an important category of audio data compression. The perceptual models used to estimate what a human ear can hear are generally somewhat different from those used for music. The range of frequencies needed to convey the sounds of a human voice is normally far narrower than that needed for music, and the sound is normally less complex. As a result, speech can be encoded at high quality using a relatively low bit rate.
If the data to be compressed is analog (such as a voltage that varies with time), quantization is employed to digitize it into numbers (normally integers). This is referred to as analog-to-digital (A/D) conversion. If the integers generated by quantization are 8 bits each, then the entire range of the analog signal is divided into 256 intervals and all the signal values within an interval are quantized to the same number. If 16-bit integers are generated, then the range of the analog signal is divided into 65,536 intervals.
This relation illustrates the compromise between high resolution (a large number of analog intervals) and high compression (small integers generated). This application of quantization is used by several speech compression methods. This is accomplished, in general, by some combination of two approaches: * Only encoding sounds that could be made by a single human voice. * Throwing away more of the data in the signal—keeping just enough to reconstruct an "intelligible" voice rather than the full frequency range of human hearing.
Perhaps the earliest algorithms used in speech encoding (and audio data compression in general) were the A-law algorithm and the µ-law algorithm.
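To make the quantization discussion above concrete, here is a minimal sketch of µ-law companding followed by 8-bit uniform quantization. It uses the textbook continuous µ-law formula rather than the segmented approximation found in actual telephony standards, and the sample values are invented for illustration.

# Sketch of 8-bit mu-law companding followed by uniform quantization.
# This is the textbook continuous formula, not the segmented G.711 tables.
import math

MU = 255

def mu_law_compress(x):            # x in [-1.0, 1.0]
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def quantize_8bit(y):              # y in [-1.0, 1.0] -> integer 0..255
    return min(255, int((y + 1.0) / 2.0 * 256))

def encode_sample(x):
    return quantize_8bit(mu_law_compress(x))

# Quiet samples get finer quantization steps than loud ones, which suits
# speech, where small amplitudes are common and must stay intelligible.
for x in (0.01, 0.02, 0.5, 0.51):
    print(x, encode_sample(x))

With 8 bits there are only 256 intervals in total; the companding step simply spends more of them near zero, which is one way the trade-off between resolution and compression described above is managed in practice.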
History

Solidyne 922: The world's first commercial audio bit compression card for PC, 1990
A literature compendium for a large variety of audio coding systems was published in the IEEE Journal on Selected Areas in Communications (JSAC), February 1988. While there were some papers from before that time, this collection documented an entire variety of finished, working audio coders, nearly all of them using perceptual (i.e. masking) techniques and some kind of frequency analysis and back-end noiseless coding.[18] Several of these papers remarked on the difficulty of obtaining good, clean digital audio for research purposes. Most, if not all, of the authors in the JSAC edition were also active in the MPEG-1 Audio committee.
The world's first commercial broadcast automation audio compression system was developed by Oscar Bonello, an engineering professor at the University of Buenos Aires.[19] In 1983, using the psychoacoustic principle of the masking of critical bands first published in 1967,[20] he started developing a practical application based on the recently developed IBM PC computer, and the broadcast automation system was launched in 1987 under the name Audicom. Twenty years later, almost all the radio stations in the world were using similar technology manufactured by a number of companies.
Video
See also: Video coding format and Video codec
Video compression uses modern coding techniques to reduce redundancy in video data. Most video compression algorithms and codecs combine spatial image compression and temporal motion compensation. Video compression is a practical implementation of source coding in information theory. In practice, most video codecs also use audio compression techniques in parallel to compress the separate, but combined data streams as one package.[21]
The majority of video compression algorithms use lossy compression. Uncompressed video requires a very high data rate. Although lossless video compression codecs perform an average compression factor of over 3, a typical MPEG-4 lossy compression video has a compression factor between 20 and 200.[22] As in all lossy compression, there is a trade-off between video quality, cost of processing the compression and decompression, and system requirements. Highly compressed video may present visible or distracting artifacts.
Some video compression schemes typically operate on square-shaped groups of neighboring pixels, often called macroblocks. These pixel groups or blocks of pixels are compared from one frame to the next, and the video compression codec sends only the differences within those blocks. In areas of video with more motion, the compression must encode more data to keep up with the larger number of pixels that are changing. Commonly during explosions, flames, flocks of animals, and in some panning shots, the high-frequency detail leads to quality decreases or to increases in the variable bitrate.
Encoding theory
Video data may be represented as a series of still image frames. The sequence of frames contains spatial and temporal redundancy that video compression algorithms attempt to eliminate or code in a smaller size. Similarities can be encoded by only storing differences between frames, or by using perceptual features of human vision. For example, small differences in color are more difficult to perceive than are changes in brightness. Compression algorithms can average a color across these similar areas to reduce space, in a manner similar to those used in JPEG image compression.[23] Some of these methods are inherently lossy while others may preserve all relevant information from the original, uncompressed video.
One of the most powerful techniques for compressing video is interframe compression. Interframe compression uses one or more earlier or later frames in a sequence to compress the current frame, while intraframe compression uses only the current frame, effectively being image compression.[24]
The most commonly used method works by comparing each frame in the video with the previous one. If the frame contains areas where nothing has moved, the system simply issues a short command that copies that part of the previous frame, bit-for-bit, into the next one. If sections of the frame move in a simple manner, the compressor emits a (slightly longer) command that tells the decompressor to shift, rotate, lighten, or darken the copy. This longer command still remains much shorter than intraframe compression. Interframe compression works well for programs that will simply be played back by the viewer, but can cause problems if the video sequence needs to be edited.[25]
Because interframe compression copies data from one frame to another, if the original frame is simply cut out (or lost in transmission), the following frames cannot be reconstructed properly. Some video formats, such as DV, compress each frame independently using intraframe compression. Making 'cuts' in intraframe-compressed video is almost as easy as editing uncompressed video: one finds the beginning and ending of each frame, and simply copies bit-for-bit each frame that one wants to keep, and discards the frames one doesn't want. Another difference between intraframe and interframe compression is that, with intraframe systems, each frame uses a similar amount of data. In most interframe systems, certain frames (such as "I frames" in MPEG-2) aren't allowed to copy data from other frames, so they require much more data than other frames nearby.[16]
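To make the interframe idea concrete, the sketch below compares a frame with the previous one block by block and emits either a "copy previous block" token or the raw pixels. This is only the simplest possible form of the technique (no motion search, no transform); the block size and token format are assumptions for illustration.

# Illustrative interframe coding: per-block "copy" vs. "new data".
# Frames are flat lists of pixel values; blocks are fixed-size slices.

BLOCK = 8  # pixels per block (a stand-in for an 8x8 macroblock)

def encode_frame(prev, cur):
    tokens = []
    for i in range(0, len(cur), BLOCK):
        block = cur[i:i + BLOCK]
        if prev is not None and prev[i:i + BLOCK] == block:
            tokens.append(("copy",))        # nothing moved: reuse previous block
        else:
            tokens.append(("data", block))  # something changed: send the pixels
    return tokens

def decode_frame(prev, tokens):
    out = []
    for i, tok in enumerate(tokens):
        if tok[0] == "copy":
            out.extend(prev[i * BLOCK:(i + 1) * BLOCK])
        else:
            out.extend(tok[1])
    return out

frame1 = [10] * 32
frame2 = [10] * 24 + [99] * 8              # only the last block changes
tokens = encode_frame(frame1, frame2)
assert decode_frame(frame1, tokens) == frame2

The sketch also shows why a lost or edited-out reference frame is a problem: every "copy" token is meaningless without the previous frame it points back to.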
It is possible to build a computer-based video editor that spots problems caused when I frames are edited out while other frames need them. This has allowed newer formats like HDV to be used for editing. However, this process demands a lot more computing power than editing intraframe compressed video with the same picture quality.
Today, nearly all commonly used video compression methods (e.g., those in standards approved by the ITU-T or ISO) apply a discrete cosine transform (DCT) for spatial redundancy reduction. The DCT that is widely used in this regard was introduced by N. Ahmed, T. Natarajan and K. R. Rao in 1974.[26] Other methods, such as fractal compression, matching pursuit and the use of a discrete wavelet transform (DWT) have been the subject of some research, but are typically not used in practical products (except for the use of wavelet coding as still-image coders without motion compensation). Interest in fractal compression seems to be waning, due to recent theoretical analysis showing a comparative lack of effectiveness of such methods.[24]
Timeline
The following table is a partial history of international video compression standards.

History of Video Compression Standards
Year | Standard            | Publisher                                 | Popular Implementations
1984 | H.120               | ITU-T                                     |
1988 | H.261               | ITU-T                                     | Videoconferencing, videotelephony
1993 | MPEG-1 Part 2       | ISO, IEC                                  | Video CD
1995 | H.262/MPEG-2 Part 2 | ISO, IEC, ITU-T                           | DVD Video, Blu-ray, Digital Video Broadcasting, SVCD
1996 | H.263               | ITU-T                                     | Videoconferencing, videotelephony, video on mobile phones (3GP)
1999 | MPEG-4 Part 2       | ISO, IEC                                  | Video on the Internet (DivX, Xvid ...)
2003 | H.264/MPEG-4 AVC    | Sony, Panasonic, Samsung, ISO, IEC, ITU-T | Blu-ray, HD DVD, Digital Video Broadcasting, iPod Video, Apple TV, videoconferencing
2009 | VC-2 (Dirac)        | SMPTE                                     | Video on the Internet, HDTV broadcast, UHDTV
2013 | H.265               | ISO, IEC, ITU-T                           |
Genetics
See also: Compression of Genomic Re-Sequencing Data
Genetics compression algorithms are the latest generation of lossless algorithms that compress data (typically sequences of nucleotides) using both conventional compression algorithms and genetic algorithms adapted to the specific datatype. In 2012, a team of scientists from Johns Hopkins University published a genetic compression algorithm that does not use a reference genome for compression. HAPZIPPER was tailored for HapMap data and achieves over 20-fold compression (95% reduction in file size), providing 2- to 4-fold better compression and in much faster time than the leading general-purpose compression utilities. For this, Chanda, Elhaik, and Bader introduced MAF based encoding (MAFE), which reduces the heterogeneity of the dataset by sorting SNPs by their minor allele frequency, thus homogenizing the dataset.[27] Other algorithms in 2009 and 2013 (DNAZip and GenomeZip) have compression ratios of up to 1200-fold—allowing 6 billion basepair diploid human genomes to be stored in 2.5 megabytes (relative to a reference genome or averaged over many genomes).[28][29]
-------------------------------------------------
Outlook and currently unused potential
It is estimated that the total amount of data that is stored on the world's storage devices could be further compressed with existing compression algorithms by a remaining average factor of 4.5:1. It is estimated that the combined technological capacity of the world to store information provides 1,300 exabytes of hardware digits in 2007, but when the corresponding content is optimally compressed, this only represents 295 exabytes of Shannon information.[30]
-------------------------------------------------
See also * Auditory masking * HTTP compression * Kolmogorov complexity * Magic compression algorithm * Minimum description length * Modulo-N code * Range encoding * Sub-band coding * Universal code (data compression) * Vector quantization
-------------------------------------------------
References
1. Wade, Graham (1994). Signal Coding and Processing (2nd ed.). Cambridge University Press. p. 34. ISBN 978-0-521-42336-6. Retrieved 2011-12-22. "The broad objective of source coding is to exploit or remove 'inefficient' redundancy in the PCM source and thereby achieve a reduction in the overall source rate R."
2. Mahdi, O.A.; Mohammed, M.A.; Mohamed, A.J. (November 2012). "Implementing a Novel Approach an Convert Audio Compression to Text Coding via Hybrid Technique" (PDF). International Journal of Computer Science Issues 9 (6, No. 3): 53–59. Retrieved 6 March 2013.
3. Pujar, J.H.; Kadlaskar, L.M. (May 2010). "A New Lossless Method of Image Compression and Decompression Using Huffman Coding Techniques" (PDF). Journal of Theoretical and Applied Information Technology 15 (1): 18–23.
4. Salomon, David (2008). A Concise Introduction to Data Compression. Berlin: Springer. ISBN 9781848000728.
5. Mittal, S.; Vetter, J. (2015). "A Survey of Architectural Approaches for Data Compression in Cache and Main Memory Systems". IEEE Transactions on Parallel and Distributed Systems.
6. Tank, M.K. (2011). "Implementation of Limpel-Ziv algorithm for lossless compression using VHDL". Thinkquest 2010: Proceedings of the First International Conference on Contours of Computing Technology. Berlin: Springer. pp. 275–283.
7. Navqi, Saud; Naqvi, R.; Riaz, R.A.; Siddiqui, F. (April 2011). "Optimized RTL design and implementation of LZW algorithm for high bandwidth applications" (PDF). Electrical Review 2011 (4): 279–285.
8. Mahmud, Salauddin (March 2012). "An Improved Data Compression Method for General Data" (PDF). International Journal of Scientific & Engineering Research 3 (3): 2. Retrieved 6 March 2013.
9. Arcangel, Cory. "On Compression" (PDF). Retrieved 6 March 2013.
10. Marak, Laszlo. "On image compression" (PDF). University of Marne la Vallee. Retrieved 6 March 2013.
11. Mahoney, Matt. "Rationale for a Large Text Compression Benchmark". Florida Institute of Technology. Retrieved 5 March 2013.
12. Korn, D.; et al. "RFC 3284: The VCDIFF Generic Differencing and Compression Data Format". Internet Engineering Task Force. Retrieved 5 March 2013.
13. Korn, D.G.; Vo, K.P. (1995). "Vdelta: Differencing and Compression". In B. Krishnamurthy (ed.), Practical Reusable Unix Software. New York: John Wiley & Sons.
14. The Olympus WS-120 digital speech recorder, according to its manual, can store about 178 hours of speech-quality audio in .WMA format in 500 MB of flash memory.
15. Coalson, Josh. "FLAC Comparison". Retrieved 6 March 2013.
16. Jaiswal, R.C. (2009). Audio-Video Engineering. Pune, Maharashtra: Nirali Prakashan. p. 3.41. ISBN 9788190639675.
17. Faxin Yu; Hao Luo; Zheming Lu (2010). Three-Dimensional Model Analysis and Processing. Berlin: Springer. p. 47. ISBN 9783642126512.
18. "File Compression Possibilities". A Brief Guide to Compress a File in 6 Different Ways.
19. "Summary of some of Solidyne's contributions to Broadcast Engineering". Brief History of Solidyne. Buenos Aires: Solidyne. Retrieved 6 March 2013.
20. Zwicker, Eberhard; et al. (1967; translation published 1999). The Ear As A Communication Receiver. Melville, NY: Acoustical Society of America.
21. "Video Coding". Center for Signal and Information Processing Research. Georgia Institute of Technology. Retrieved 6 March 2013.
22. Graphics & Media Lab Video Group (2007). Lossless Video Codecs Comparison (PDF). Moscow State University.
23. Lane, Tom. "JPEG Image Compression FAQ, Part 1". Internet FAQ Archives. Independent JPEG Group. Retrieved 6 March 2013.
24. Faxin Yu; Hao Luo; Zheming Lu (2010). Three-Dimensional Model Analysis and Processing. Berlin: Springer. p. 47. ISBN 9783642126512.
25. Bhojani, D.R. "4.1 Video Compression" (PDF). Hypothesis. Retrieved 6 March 2013.
26. Ahmed, N.; Natarajan, T.; Rao, K.R. (January 1974). "Discrete Cosine Transform". IEEE Transactions on Computers C-23 (1): 90–93. doi:10.1109/T-C.1974.223784.
27. Chanda, P.; Elhaik, E.; Bader, J.S. (27 July 2012). "HapZipper: sharing HapMap populations just got easier" (PDF). Nucleic Acids Research 40 (20): e159. doi:10.1093/nar/gks709. PMC 3488212. PMID 22844100.
28. Christley, S.; Lu, Y.; Li, C.; Xie, X. (15 January 2009). "Human genomes as email attachments". Bioinformatics 25 (2): 274–275. doi:10.1093/bioinformatics/btn582. PMID 18996942.
29. Pavlichin, D.S.; Weissman, T.; Yona, G. (September 2013). "The human genome contracts again". Bioinformatics 29 (17): 2199–2202. doi:10.1093/bioinformatics/btt362. PMID 23793748.
30. Hilbert, Martin; López, Priscila (1 April 2011). "The World's Technological Capacity to Store, Communicate, and Compute Information". Science 332 (6025): 60–65. Bibcode:2011Sci...332...60H. doi:10.1126/science.1200970. PMID 21310967. Retrieved 6 March 2013.
-------------------------------------------------
External links * Data Compression Basics (Video) * Video compression 4:2:2 10-bit and its benefits * Why does 10-bit save bandwidth (even when content is 8-bit)? * Which compression technology should be used * Wiley - Introduction to Compression Theory * EBU subjective listening tests on low-bitrate audio codecs * Audio Archiving Guide: Music Formats (Guide for helping a user pick out the right codec) * MPEG 1&2 video compression intro (pdf format) at the Wayback Machine (archived September 28, 2007) * hydrogenaudio wiki comparison * Introduction to Data Compression by Guy E Blelloch from CMU * HD Greetings - 1080p Uncompressed source material for compression testing and research * Explanation of lossless signal compression method used by most codecs * Interactive blind listening tests of audio codecs over the internet * TestVid - 2,000+ HD and other uncompressed source video clips for compression testing * Videsignline - Intro to Video Compression * Data Footprint Reduction Technology * What is Run length Coding in video compression.

-------------------------------------------------
INTRODUCTION
Data compression is the process of converting an input data stream (the source stream, or the original raw data) into another data stream that has a smaller size. Data compression is popular for two reasons:
1) People like to accumulate data and hate to throw anything away. No matter how large a storage device may be, sooner or later it is going to overflow. Data compression seems useful because it delays this inevitability.
2) People hate to wait a long time for data transfer. There are many known methods of data compression. They are based on different ideas and are suitable for different types of data. They produce different results, but they are all based on the same basic principle: they compress data by removing the redundancy from the original data in the source file. The idea of compression by reducing redundancy suggests the general law of data compression, which is to "assign short codes to common events and long codes to rare events". Data compression is done by changing the representation of the data from an inefficient form to an efficient one.
The main aim of the field of data compression is, of course, to develop methods for better and better compression. Experience shows that fine-tuning an algorithm to squeeze out the last remaining bits of redundancy from the data gives diminishing returns. Data compression has become so important that some researchers have proposed the "simplicity and power theory". Specifically, it says that data compression may be interpreted as a process of removing unnecessary complexity in information, thus maximizing simplicity while preserving as much as possible of its non-redundant descriptive power.

BASIC TYPES OF DATA COMPRESSION
There are two basic types of data compression.
1. Lossy compression
2. Lossless compression
LOSSY COMPRESSION
In lossy compression some information is lost during processing: the data is separated into important and unimportant parts, and the system then discards the unimportant data.
It provides much higher compression rates, but there will be some loss of information compared to the original source file. The main advantage is that the loss may not be visible to the eye, in which case the compression is called visually lossless. Visually lossless compression is based on knowledge about colour images and human perception.
LOSSLESS COMPRESSION
In this type of compression no information is lost during the compression and the decompression process. Here the reconstructed image is mathematically and visually identical to the original one. It achieves only about a 2:1 compression ratio. This type of compression technique looks for patterns in strings of bits and then expresses them more concisely.

TECHNIQUES OF DATA COMPRESSION
There are three important techniques of data compression.
1) basic technique
2) statistical technique
3) dictionary method
BASIC TECHNIQUES
These are simple techniques that were mainly used in the past. The important basic techniques are run-length encoding and move-to-front encoding.
STATISTICAL TECHNIQUES
They are based on a statistical model of the data. Three important statistical techniques are:
* Shannon-Fano coding
* Huffman coding
* Arithmetic coding
DICTIONARY METHODS
These methods select strings of symbols and encode each string as a token using a dictionary. The important dictionary methods are:
* LZ77 (sliding window)
* LZRW1
BASIC TECHNIQUES
1. RUN LENGTH ENCODING
The basic idea behind this approach to data compression is this: if a data item d occurs n consecutive times in the input stream, replace the n occurrences with the single pair <n, d>. The n consecutive occurrences of a data item are called a run length of n, and this approach is called run-length encoding, or RLE.
RLE IMAGE COMPRESSION
RLE is a natural candidate for compressing graphical data. A digital image consists of small dots called pixels. Each pixel can be either one bit, indicating a black or white dot, or several bits, indicating one of several colours or shades of gray. We assume that these pixels are stored in an array called a bitmap in memory. Pixels are normally arranged in the bitmap in scan lines, so the first bitmap pixel is the dot at the top left corner of the image and the last pixel is the one at the bottom right corner. Compressing an image using RLE is based on the observation that if we select a pixel in the image at random, there is a good chance that its neighbours will have the same colour. The compressor thus scans the bitmap row by row, looking for runs of pixels of the same colour.
Consider the grayscale bitmap row
12, 12, 12, 12, 12, 12, 12, 12, 12, 35, 76, 112, 67, 87, 87, 87, 5, 5, 5, 5, 5, 5, 1, ...
Compressed form:
9, 12, 35, 76, 112, 67, 3, 87, 6, 5, 1, ...
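The example above can be reproduced with a short sketch. The output convention used here (a count-value pair for runs of three or more pixels, literal values otherwise) is an assumption chosen to match the compressed form shown, not a standard format.

# Minimal run-length encoder matching the example above:
# runs of 3 or more pixels become (count, value); shorter runs stay literal.

def rle_encode(pixels, min_run=3):
    out, i = [], 0
    while i < len(pixels):
        j = i
        while j < len(pixels) and pixels[j] == pixels[i]:
            j += 1
        run = j - i
        if run >= min_run:
            out.extend([run, pixels[i]])   # encode the run as a pair
        else:
            out.extend(pixels[i:j])        # too short: copy the pixels as-is
        i = j
    return out

row = [12]*9 + [35, 76, 112, 67] + [87]*3 + [5]*6 + [1]
print(rle_encode(row))   # [9, 12, 35, 76, 112, 67, 3, 87, 6, 5, 1]

Note that in this simple form a decoder cannot always tell counts apart from literal pixel values; practical RLE formats add an escape code or a flag bit for that purpose.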
2. MOVE TO FRONT CODING
The basic idea of this method is to maintain the alphabet A of symbols as a list where frequently occurring symbols are located near the front. A symbol 'a' is encoded as the number of symbols that precede it in this list. Thus if A = ('t', 'h', 'e', 's') and the next symbol in the input stream to be encoded is 'e', it will be encoded as '2', since it is preceded by two symbols. The next step is that after encoding 'e' the alphabet is modified to A = ('e', 't', 'h', 's'). This move-to-front step reflects the hope that once 'e' has been read from the input stream it will be read many more times and will, at least for a while, be a common symbol.
Let A = (t, h, e, s )
After encoding the symbol e, A is modified.
Modified Form:-
A = (e, t, h, s )
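A minimal sketch of the encoder described above; the alphabet and the symbols are the ones from the example:

# Move-to-front encoding: each symbol is coded as its position in the list,
# then moved to the front so recently seen symbols get small codes.

def mtf_encode(alphabet, stream):
    table = list(alphabet)
    codes = []
    for s in stream:
        idx = table.index(s)              # number of symbols preceding s
        codes.append(idx)
        table.insert(0, table.pop(idx))   # move s to the front of the list
    return codes

print(mtf_encode(['t', 'h', 'e', 's'], ['e', 'e', 't']))  # [2, 0, 1]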
ADVANTAGE
This method is locally adaptive, since it adapts itself to the frequencies of symbols in local areas of the input stream. It produces good results if the input stream satisfies this hope, that is, if the local frequency of symbols changes significantly from area to area in the input stream.
STATISTICAL TECHNIQUES
1. SHANNON FANO CODING
Shannon-Fano coding was the first method developed for finding good variable-size codes. We start with a set of n symbols with known probabilities of occurrence. The symbols are first arranged in descending order of probability. The set of symbols is then divided into two subsets whose total probabilities are as close as possible to equal. All symbols of one subset are assigned codes that start with a zero, while the codes of the symbols in the other subset start with a one. Each subset is then recursively divided into two. The second bit of all codes is determined in a similar way. When a subset contains just two symbols, their codes are distinguished by adding one more bit to each. The process continues until no subset can be divided further.
Consider a set of seven symbols, whose probabilities are given below. They are arranged in descending order of probability. The two symbols in the first subset are assigned codes that start with 1, so their final codes are 11 and 10. The second subset is divided, in the second step, into two symbols and three symbols. Step 3 divides the last three symbols into groups of one and two.
Shannon-Fano Example
Symbol   Prob.   Steps      Final code
1        0.25    1 1        11
2        0.20    1 0        10
3        0.15    0 1 1      011
4        0.15    0 1 0      010
5        0.10    0 0 1      001
6        0.10    0 0 0 1    0001
7        0.05    0 0 0 0    0000

The average size of this code is
0.25 x 2 + 0.20 x 2 + 0.15 x 3 + 0.15 x 3 + 0.10 x 3 + 0.10 x 4 + 0.05 x 4 = 2.7 bits/symbol.
This is a good result because the entropy of this distribution is approximately 2.67 bits/symbol.
ADVANTAGE
The advantage of this method is that it is very easy to implement.
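As a rough illustration of that ease of implementation, here is a sketch that splits a probability-sorted symbol list where the two halves' totals are closest and assigns the leading bits recursively. It is a simplified reading of the procedure above (the symbol names are invented), not a reference implementation:

# Shannon-Fano coding sketch: split the (probability-sorted) symbol list into
# two parts of roughly equal total probability, prefix one part with '1' and
# the other with '0', and recurse.

def shannon_fano(symbols, prefix=""):
    # symbols: list of (name, probability), sorted by descending probability
    if len(symbols) == 1:
        return {symbols[0][0]: prefix or "0"}
    total = sum(p for _, p in symbols)
    running, split = 0.0, 1
    for i in range(1, len(symbols)):
        running += symbols[i - 1][1]
        # stop splitting when adding the next symbol would not improve balance
        if abs(2 * running - total) <= abs(2 * (running + symbols[i][1]) - total):
            split = i
            break
        split = i + 1
    codes = {}
    codes.update(shannon_fano(symbols[:split], prefix + "1"))
    codes.update(shannon_fano(symbols[split:], prefix + "0"))
    return codes

probs = [("s1", 0.25), ("s2", 0.20), ("s3", 0.15), ("s4", 0.15),
         ("s5", 0.10), ("s6", 0.10), ("s7", 0.05)]
print(shannon_fano(probs))   # reproduces the codes in the table above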
2. HUFFMAN CODING
A commonly used method for data compression is Huffman coding. The method starts by building a list of all the alphabet symbols in descending order of their probabilities. It then constructs a tree, with a symbol at every leaf, from the bottom up. This is done in steps, where at each step the two symbols with the smallest probabilities are selected, added to the top of the partial tree, deleted from the list, and replaced with an auxiliary symbol representing both of them. When the list is reduced to just one auxiliary symbol the tree is complete. The tree is then traversed to determine the codes of the symbols.
The Huffman method is somewhat similar to the Shannon-Fano method. The main difference between the two is that Shannon-Fano constructs its codes from the top down, while Huffman constructs a code tree from the bottom up.
This is best illustrated by an example. Given five symbols with probabilities as shown in Figure. They are paired in the following order:
1. a4 is combined with a5 and both are replaced by the combined symbol a45, whose probability is 0.2.
2. There are now four symbols left, a1, with probability 0.4, and a2, a3, and a45, with probabilities 0.2 each. We arbitrarily select a3 and a45, combine them, and replace them with the auxiliary symbol a345, whose probability is 0.4.
3. Three symbols are now left, a1, a2, and a345, with probabilities 0.4, 0.2, and 0.4 respectively. We arbitrarily select a2 and a345, combine them and replace them with the auxiliary symbol a2345, whose probability is 0.6.
4. Finally, we combine the two remaining symbols a1, and a2345, and replace them with a12345 with probability 1.
The tree is now complete, lying on its side with the root on the right and the five leaves on the left. To assign the codes, we arbitrarily assign a bit of 1 to the top edge and a bit of 0 to the bottom edge of every pair of edges. This results in the codes 0, 10, 111, 1101, and 1100. The assignment of bits to the edges is arbitrary.
The average size of this code is 0.4 x 1 + 0.2 x 2 + 0.2 x 3 + 0.1 x 4 + 0.1 x 4 = 2.2 bits / symbol, but even more importantly, the Huffman code is not unique.
HUFFMAN CODING EXAMPLE
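Since the figure for this example is not reproduced here, the following sketch builds the tree with a priority queue and prints a set of prefix codes for the five probabilities used above. The exact codes may differ from the ones listed because, as noted, the assignment of bits is arbitrary, but the average length comes out to the same 2.2 bits per symbol:

# Huffman coding sketch: repeatedly merge the two least probable nodes,
# then read the codes off the finished tree.
import heapq
import itertools

def huffman(probabilities):
    counter = itertools.count()          # tie-breaker so heap entries always compare
    heap = [(p, next(counter), sym) for sym, p in probabilities.items()]
    heapq.heapify(heap)
    while len(heap) > 1:
        p1, _, left = heapq.heappop(heap)    # the two smallest probabilities
        p2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (p1 + p2, next(counter), (left, right)))
    _, _, tree = heap[0]

    codes = {}
    def walk(node, code):
        if isinstance(node, tuple):          # internal node: recurse into both children
            walk(node[0], code + "0")
            walk(node[1], code + "1")
        else:
            codes[node] = code or "0"
    walk(tree, "")
    return codes

probs = {"a1": 0.4, "a2": 0.2, "a3": 0.2, "a4": 0.1, "a5": 0.1}
codes = huffman(probs)
print(codes)
print(sum(probs[s] * len(c) for s, c in codes.items()))  # average code length, 2.2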

3. ARITHMETIC CODING
In this method the input stream is read symbol by symbol, and more bits are appended to the code each time a symbol is input and processed. To understand the method it is useful to imagine the resulting code as a number in the range [0, 1), that is, the range of real numbers from 0 to 1, not including 1. The first step is to calculate, or at least to estimate, the frequency of occurrence of each symbol.
The aforesaid techniques, that is, the Huffman and Shannon-Fano techniques, rarely produce the best variable-size code. Arithmetic coding overcomes this problem by assigning one code to the entire input stream instead of assigning codes to the individual symbols.
The main steps of arithmetic coding are
1. start by defining the current interval as [0,1)
2. repeat the following two steps for each symbol 's' in the input stream.
2.1) divide the current interval into subintervals whose sizes are proportional to the probability of the symbol.
2.2) select the sub interval for 's' and define it as the new current interval.
When the entire input stream has been processed in this way the output should be any number that uniquely identifies the current interval.
Consider the symbols a1, a2, a3
Probabilities: P1 = 0.4, P2 = 0.5, P3 = 0.1
Subintervals: a1 -> [0, 0.4), a2 -> [0.4, 0.9), a3 -> [0.9, 1)
To encode: a2 a2 a2 a3

Current interval: [0, 1)
After a2: [0.4, 0.9)      {0 + (1 - 0) x 0.4 = 0.4;  0 + (1 - 0) x 0.9 = 0.9}
After a2: [0.6, 0.85)     {0.4 + (0.9 - 0.4) x 0.4 = 0.6;  0.4 + (0.9 - 0.4) x 0.9 = 0.85}
After a2: [0.7, 0.825)    {0.6 + (0.85 - 0.6) x 0.4 = 0.7;  0.6 + (0.85 - 0.6) x 0.9 = 0.825}
After a3: [0.8125, 0.825) {0.7 + (0.825 - 0.7) x 0.9 = 0.8125;  0.7 + (0.825 - 0.7) x 1 = 0.825}

Any number in the final interval, for example 0.8125, identifies the encoded string.
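The interval narrowing above can be checked with a few lines of code. This sketch tracks only the final interval; a real coder would emit bits incrementally and deal with finite precision. The symbol table is the one from the example.

# Arithmetic coding sketch: narrow the interval [low, high) for each symbol.
# cum[s] holds the cumulative-probability subinterval assigned to symbol s.

cum = {"a1": (0.0, 0.4), "a2": (0.4, 0.9), "a3": (0.9, 1.0)}

def encode_interval(stream):
    low, high = 0.0, 1.0
    for s in stream:
        span = high - low
        lo_s, hi_s = cum[s]
        low, high = low + span * lo_s, low + span * hi_s
    return low, high          # any number in [low, high) identifies the stream

print(encode_interval(["a2", "a2", "a2", "a3"]))  # ~ (0.8125, 0.825), as in the worked example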
DICTIONARY METHODS
Dictionary methods select strings of symbols and encode each string as a token using a dictionary. The dictionary holds all the strings of symbols.
1) LZ77 (SLIDING WINDOW)
The main idea of this method is to use part of the previously seen input stream as the dictionary. The encoder maintains a window to the input stream and shifts the input in that window from right to left as strings of symbols are being encoded. The method is thus based on a "sliding window". The window is divided into two parts: the search buffer, which is the current dictionary, and the look-ahead buffer, containing text yet to be encoded.
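A much-simplified sketch of the sliding-window idea follows; the buffer sizes and the token format (offset, length, next symbol) are assumptions for illustration, not the exact parameters of any published LZ77 variant.

# Simplified LZ77: emit (offset, length, next_char) tokens using a search
# buffer of previously seen text and a look-ahead buffer of text to encode.

def lz77_encode(data, search_size=32, lookahead_size=8):
    i, tokens = 0, []
    while i < len(data):
        start = max(0, i - search_size)
        best_off, best_len = 0, 0
        for off in range(1, i - start + 1):           # candidate match offsets
            length = 0
            while (length < lookahead_size and i + length < len(data) - 1
                   and data[i - off + length] == data[i + length]):
                length += 1
            if length > best_len:
                best_off, best_len = off, length
        next_char = data[i + best_len]
        tokens.append((best_off, best_len, next_char))
        i += best_len + 1
    return tokens

def lz77_decode(tokens):
    out = []
    for off, length, ch in tokens:
        for _ in range(length):
            out.append(out[-off])         # copy from earlier output, possibly overlapping
        out.append(ch)
    return "".join(out)

text = "sir sid eastman easily teases sea sick seals"
assert lz77_decode(lz77_encode(text)) == text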
2. LZRW1 TECHNIQUE
The main idea is to find a match in one step using a hash table. The method uses the entire available memory as a buffer and encodes the input string in blocks. A block is read into the buffer and is completely encoded; then the next block is read and encoded, and so on. The search and look-ahead buffers slide along the input block in memory from left to right, and it is necessary to maintain only one pointer, pointing to the start of the look-ahead buffer. This pointer is initialized to one and is incremented after each phrase is encoded. The leftmost three characters of the look-ahead buffer are hashed into a 12-bit number 'I', which is used to index an array of 2^12 = 4096 pointers. A pointer p is retrieved and is immediately replaced in the array by a pointer to the current position in the look-ahead buffer. If p points outside the search buffer, there is no match; the first character in the look-ahead buffer is output as a literal and the pointer is advanced by one. The same thing is done if p points inside the search buffer but to a string that does not match the one in the look-ahead buffer. If p points to a match of at least three characters, the encoder finds the longest match, outputs a match item, and advances the pointer by the length of the match.

The LZRW1 Encoder
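The hashing step is easiest to see in code. The sketch below keeps only the core idea described above (hash the next three characters into a small table of positions, check whether the remembered position still matches, and emit either a literal or a match); the table size, match limits, hash function, and token format are illustrative assumptions rather than the exact LZRW1 layout.

# LZRW1-style matching sketch: a 4096-entry table maps a hash of the next
# three characters to the position where that hash was last seen.

TABLE_SIZE = 4096   # 2**12 entries, as in the description above

def hash3(s, i):
    return (ord(s[i]) * 40543 ^ ord(s[i + 1]) * 71 ^ ord(s[i + 2])) % TABLE_SIZE

def lzrw1_like_encode(data, max_match=18):
    table = [-1] * TABLE_SIZE
    i, tokens = 0, []
    while i < len(data):
        if i + 3 > len(data):                    # too little left to hash: literal
            tokens.append(("lit", data[i])); i += 1; continue
        h = hash3(data, i)
        p, table[h] = table[h], i                # fetch old position, remember new one
        if p >= 0 and data[p:p + 3] == data[i:i + 3]:
            length = 3
            while (length < max_match and i + length < len(data)
                   and data[p + length] == data[i + length]):
                length += 1
            tokens.append(("match", i - p, length))
            i += length
        else:
            tokens.append(("lit", data[i]))
            i += 1
    return tokens

print(lzrw1_like_encode("abcabcabcabc"))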
JPEG
(JOINT PHOTOGRAPHIC EXPERTS GROUP)
JPEG method is an important method of data compression. It is a sophisticated lossy/lossless compression method for colour or gray scale still images. The main JPEG compression steps are as follows.
1) Colour images are transformed from RGB into a luminance/chrominance space.
2) Colour images are down sampled by creating low resolution pixels from the original ones. The down sampling is not done for the luminance component. The down sampling is done at a ratio of 2:1 both horizontally and vertically.
3) The pixels of each colour component are organized in groups of 8 X 8 pixels called data units.
4) The DCT (discrete cosine transform) is applied to each data unit to create an 8 x 8 map of frequency components. The DCT involves the transcendental cosine function and introduces some loss of information due to the limited precision of computer arithmetic. The transformed data units represent the average pixel value and successively higher-frequency changes within the group.
5) Each of the 64 frequency components in a data unit is divided by a separate number called its quantisation coefficient and rounded to an integer (a sketch of steps 4 and 5 follows this list).
6) The 64 quantized frequency coefficients of each data unit are encoded using a combination of RLE and Huffman coding.
7) The last step adds headers and all the JPEG parameters used and outputs the result.
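Steps 4 and 5 can be sketched for a single 8 x 8 data unit. The uniform quantisation value used below is made up for illustration (real JPEG files carry their own quantisation tables), and the DCT is written straight from its definition rather than with the fast algorithms real encoders use.

# Sketch of JPEG steps 4-5 on one 8x8 data unit: 2-D DCT, then divide each
# of the 64 frequency coefficients by a quantisation value and round.
import math

N = 8

def dct_2d(block):
    def c(k):                      # normalisation factor from the DCT definition
        return math.sqrt(1.0 / N) if k == 0 else math.sqrt(2.0 / N)
    out = [[0.0] * N for _ in range(N)]
    for u in range(N):
        for v in range(N):
            s = 0.0
            for x in range(N):
                for y in range(N):
                    s += (block[x][y]
                          * math.cos((2 * x + 1) * u * math.pi / (2 * N))
                          * math.cos((2 * y + 1) * v * math.pi / (2 * N)))
            out[u][v] = c(u) * c(v) * s
    return out

def quantize_unit(coeffs, q=16):   # a single made-up quantisation coefficient
    return [[round(coeffs[u][v] / q) for v in range(N)] for u in range(N)]

# A smooth data unit: most high-frequency coefficients quantise to zero,
# which is what makes the later RLE/Huffman step so effective.
block = [[128 + x + y for y in range(N)] for x in range(N)]
print(quantize_unit(dct_2d(block)))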
Often, the eye cannot see any image degradation at compression ratios of 10:1 or 20:1. There are two main modes: lossy (also called baseline) and lossless. Most implementations support just the lossy mode. This mode includes progressive and hierarchical coding.
JPEG is a compression method, not a complete standard for image representation. This is why it does not specify image features such as pixel aspect ratio, color space, or interleaving of bitmap rows.
JPEG has been designed as a compression method for continuous-tone images. The main goals of JPEG compression are the following:
* High compression ratios, especially in cases where image quality is judged as very good to excellent.
* The use of many parameters, allowing sophisticated users to experiment and achieve the desired compression/quality trade-off.
* Obtaining good results with any kind of continuous-tone image, regardless of image dimensions, color spaces, pixel aspect ratios, or other image features.
* A sophisticated, but not too complex compression method, allowing software and hardware implementations on many platforms.
There are several modes of operation: (a) Sequential mode: each image component is compressed in a single left-to-right, top-to-bottom scan; (b) Progressive mode: the image is compressed in multiple blocks (known as scans) to be viewed from coarse to fine detail; (c) Lossless mode: important for cases where the user decides that no pixels should be lost; and (d) Hierarchical mode: the image is compressed at multiple resolutions, allowing lower-resolution blocks to be viewed without first having to decompress higher-resolution blocks.
The progressive mode is a JPEG option. In this mode, higher-frequency DCT coefficients are written on the compressed stream in blocks called scans. Each scan read and processed by the decoder results in a sharper image. The idea is to use the first few scans to quickly create a low-quality, blurred preview of the image, then either input the remaining scans or stop the process and reject the image. The tradeoff is that the encoder has to save all the coefficients of all the data units in a memory buffer before they are sent in scans, and also go through all the steps for each scan, slowing down the progressive mode.
In the hierarchical mode, the encoder stores the image several times in the output stream, at several resolutions. However, each high-resolution part uses information from the low-resolution parts of the output stream, so the total amount of information is less than that required to store the different resolutions separately. Each hierarchical part may use the progressive mode.
The hierarchical mode is useful in cases where a high-resolution image needs to be output in low resolution. Older dot-matrix printers may be a good example of a low-resolution output device still in use.
The lossless mode of JPEG calculates a predicted value for each pixel, generates the difference between the pixel and its predicted value and encodes the difference using the same method. The predicted value is calculated using values of pixels above and to the left of the current pixel.

JPEG ENCODER

ADVANTAGES
It is a sophisticated method with a high compression ratio. One of the main advantages of JPEG is the use of many parameters, allowing the user to adjust the amount of data lost over a wide range.

APPLICATION IN IMAGE COMPRESSION
The following approaches illustrate how the aforesaid techniques are applied to image compression. Photographic digital images generate a lot of data, taking up large amounts of storage space, and this is one of the main problems encountered in digital imaging. To rectify this problem, image compression is used, depending on the type of data: text, graphics, photographic, or video. Image compression reduces image data by identifying patterns in the bit strings describing pixel values, then replacing them with a short code.
BASIC PRINCIPLE OF IMAGE COMPRESSION
The idea of losing image information becomes more palatable when we consider how digital images are created. Here are three examples: (1) A real-life image may be scanned from a photograph or a painting and digitized (converted to pixels). (2) An image may be recorded by a video camera that creates pixels and stores them directly in memory. (3) An image may be painted on the screen by means of a paint program. In all these cases, some information is lost when the image is digitized. The fact that the viewer is willing to accept this loss suggests that further loss of information might be tolerable if done properly.
Digitizing an image involves two steps: sampling and quantization. Sampling an image is the process of dividing the two-dimensional original image into small regions: pixels. Quantization is the process of assigning an integer value to each pixel. Notice that digitizing sound involves the same two steps, with the difference that sound is one-dimensional.
Here is a simple process to determine qualitatively the amount of data loss in a compressed image. Given an image A, (1) compress it to B, (2) decompress B to C, and (3) subtract D = C - A. If A was compressed without any loss and decompressed properly, then C should be identical to A and image D should be uniformly white. The more data was lost in the compression, the farther D will be from uniformly white.
The main principles discussed so far were RLE, scalar quantization, statistical methods, and dictionary-based methods. By itself, none is very satisfactory for color or grayscale images.
RLE can be used for (lossless or lossy) compression of an image. This is simple, and it is used by certain parts of JPEG, especially by its lossless mode. In general, however, the other principles used by JPEG produce much better compression than does RLE alone. Facsimile compression uses RLE combined with Huffman coding and gets good results, but only for bi-level images.
Scalar quantization can be used to compress images, but its performance is mediocre. Imagine an image with 8-bit pixels. It can be compressed with scalar quantization by cutting off the four least-significant bits of each pixel. This yields a compression ratio of 0.5, not very impressive, and at the same time reduces the number of colors (or grayscales) from 256 to just 16. Such a reduction not only degrades the overall quality of the reconstructed image, but may also create bands of different colors, a noticeable and annoying artifact.
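The 4-bit cut described above amounts to the following sketch (pixel values assumed to be 8-bit integers):

    # Sketch: scalar quantization of an 8-bit pixel by discarding its four least-significant bits.
    def quantize_pixel(p):
        return p >> 4      # 0..255 becomes 0..15, i.e. 4 bits per pixel instead of 8

    def dequantize_pixel(q):
        return q << 4      # approximate reconstruction; the fine detail is gone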
Statistical methods work best when the symbols being compressed have different probabilities. An input stream in which all symbols have the same probability will not compress, even though it may not necessarily be random. It turns out that in a continuous-tone color or grayscale image, the different colors or shades often have roughly the same probabilities. This is why statistical methods are not a good choice for compressing such images, and why new approaches are needed. Images with color discontinuities, where adjacent pixels have widely different colors, compress better with statistical methods, but it is not easy to predict, just by looking at an image, whether it has enough color discontinuities.
Dictionary-based compression methods also tend to be unsuccessful in dealing with continuous-tone images. Such an image typically contains adjacent pixels with similar colors, but does not contain repeating patterns. Even an image that contains repeated patterns such as vertical lines may lose them when digitized. A vertical line in the original image may become slightly slanted when the image is digitized, so the pixels in a scan row may end up having slightly different colors from those in adjacent rows, resulting in a dictionary with short strings.
Another problem with dictionary compression of images is that such methods scan the image row by row, and may thus miss vertical correlations between pixels. Traditional methods are therefore unsatisfactory for image compression, so we turn to novel approaches. They are all different, but they remove redundancy from an image by using the following principle.
Image compression is based on the fact that neighbouring pixels are highly correlated.
APPROACH 1
This approach is used for bi-level images. A pixel in such an image is represented by one bit. Applying the principle of image compression therefore means that the immediate neighbours of a pixel P tend to be similar to P, so it makes sense to use run-length encoding to compress the image. A compression method for such an image may scan it in raster order, i.e., row by row, and compute the lengths of the runs of black and white pixels. The run lengths are encoded by variable-size codes and written on the compressed stream. An example of such a method is facsimile compression.
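A minimal run-length sketch for one scan row of a bi-level image (0 = white, 1 = black), ignoring the variable-size codes used by real facsimile compression:

    # Sketch: run-length encode one scan row of a bi-level image.
    def run_lengths(row):
        runs = []
        count = 1
        for prev, cur in zip(row, row[1:]):
            if cur == prev:
                count += 1
            else:
                runs.append((prev, count))
                count = 1
        runs.append((row[-1], count))
        return runs

    # Example: run_lengths([1, 1, 1, 0, 0, 1]) -> [(1, 3), (0, 2), (1, 1)]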
Data compression is especially important when images are transmitted over a communication line, because the user is typically waiting at the receiver, eager to see something quickly. Documents transferred between fax machines are sent as bitmaps, so standard data compression methods were needed when fax machines were developed; they were proposed by the International Telecommunications Union (ITU). Although the ITU has no power of enforcement, the standards it recommends are generally accepted and adopted by industry.
The first data compression standards developed by the ITU were T2 and T3. These are now obsolete and have been replaced by T4 and T6. They have typical speeds of 64 kbaud. Both methods can produce compression ratios of 10:1 or better, reducing the transmission time of a typical page to about a minute with the former and a few seconds with the latter.
APPROACH 2
This approach is also for bi-level images. The principle of image compression says that the neighbours of a pixel tend to be similar to the pixel. We can extend this and conclude that if the current pixel has colour C, then pixels of the same colour seen in the past tend to have had the same immediate neighbours. This approach looks at n of the near neighbours of the current pixel and considers them an n-bit number. This number is the context of the pixel. The encoder counts how many times each context has already occurred for pixels of colour C and assigns probabilities to the contexts accordingly. If the current pixel has colour C and its context has probability P, the encoder can apply arithmetic coding to encode the pixel with that probability.
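The counting step of this approach can be sketched as follows; the arithmetic coder itself is omitted, and the 3-pixel context (left, above, above-left) is only an illustrative choice:

    from collections import defaultdict

    # Sketch: count how often each context is followed by a white (0) or black (1) pixel.
    # The counts yield the probabilities that would be fed to an arithmetic coder (not shown).
    def context_counts(image):
        counts = defaultdict(lambda: [1, 1])   # start at 1 to avoid zero probabilities
        for y, row in enumerate(image):
            for x, pixel in enumerate(row):
                left = row[x - 1] if x > 0 else 0
                above = image[y - 1][x] if y > 0 else 0
                above_left = image[y - 1][x - 1] if x > 0 and y > 0 else 0
                context = (left << 2) | (above << 1) | above_left   # 3-bit context number
                counts[context][pixel] += 1
        return counts

    def probability(counts, context, pixel):
        c0, c1 = counts[context]
        return (c1 if pixel else c0) / (c0 + c1)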
Next we turn to grayscale images. A pixel in such an image is represented by n bits and can have one of 2^n values. Applying the principle of image compression to a grayscale image implies that the immediate neighbours of a pixel P tend to be similar to P, but are not necessarily identical, so run-length encoding by itself is not a good way to compress such an image. Instead, two further approaches are used.
Gray codes:-
An image compression method that has been developed specifically for a certain type of image can sometimes be used for other types. Any method for compressing bi-level images, for example, can be used to compress grayscale images by separating the bitplanes and compressing each individually, as if it were a bi-level image. Imagine, for example, an image with 16 grayscale values. Each pixel is defined by four bits, so the image can be separated into four bi-level images. The trouble with this approach is that it violates the general principle of image compression. Imagine two adjacent 4-bit pixels with values 7 = 0111 and 8 = 1000 (in binary). These pixels have close values, but when separated into four bitplanes, the resulting 1-bit pixels are different in every bitplane! This is because the binary representations of the consecutive integers 7 and 8 differ in all four bit positions. In order to apply any bi-level compression method to grayscale images, a binary representation of the integers is needed where consecutive integers have codes differing by one bit only. Such a representation exists and is called the reflected Gray code (RGC). This code is easy to generate with the following recursive construction:
Start with the two 1-bit codes (0, 1). Construct two sets of 2-bit codes by duplicating (0, 1) and appending, either on the left or on the right, first a zero, then a one, to the original set. The result is (00, 01) and (10, 11). We now reverse (reflect) the second set and concatenate the two. The result is the 2-bit RGC (00, 01, 11, 10): a binary code of the integers 0 through 3 where consecutive codes differ by exactly one bit. Applying the rule again produces the two sets (000, 001, 011, 010) and (110, 111, 101, 100), which are concatenated to form the 3-bit RGC. Note that the first and last codes of any RGC also differ by one bit.
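The recursive construction above translates directly into code; a well-known shortcut, g = i XOR (i >> 1), produces the same codes, but the sketch below follows the reflect-and-prefix description:

    # Sketch: build the n-bit reflected Gray code by the reflect-and-prefix construction.
    def reflected_gray_code(n):
        codes = ['0', '1']
        for _ in range(n - 1):
            codes = ['0' + c for c in codes] + ['1' + c for c in reversed(codes)]
        return codes

    # reflected_gray_code(2) -> ['00', '01', '11', '10']
    # reflected_gray_code(3) -> ['000', '001', '011', '010', '110', '111', '101', '100']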
APPROACH 3
Imagine a grayscale image with 4-bit pixels, i.e. 16 shades of gray. If two adjacent pixels have values 0000 and 0001, they are similar. They are also identical in three of the four bi-level images. However, two adjacent pixels with values 0111 and 1000 are also similar in the grayscale image but differ in all four bi-level images. The problem occurs because the binary codes of adjacent integers may differ by several bits: the binary codes of 0 and 1 differ by one bit, those of 1 and 2 differ by two bits, and those of 7 and 8 differ by four bits. The solution is to design special binary codes such that the codes of any two consecutive integers differ by one bit only.
The most-significant bitplanes of an image obey the principle of image compression more than the least-significant ones. When adjacent pixels have values that differ by one unit (such as p and p+1), chances are that the least-significant bits are different and the most-significant ones are identical. Any compression method that compresses bitplanes individually should therefore treat the least-significant bitplanes differently from the most-significant ones, or use RGC instead of the standard binary code to represent pixels.
In this approach we separate the grayscale image into n bi-level images and compress each of them with RLE and prefix codes. The principle of image compression implies that two adjacent pixels that are similar in the grayscale image will be identical in most of the n bi-level images. In this way, any method for compressing bi-level images can also be used to compress grayscale images. Imagine, for example, an image with 16 grayscale values. Each pixel is defined by 4 bits, so the image can be separated into four bi-level images. What is needed is a binary representation of the integers in which consecutive integers have codes differing by one bit only; such a representation is the reflected Gray code (RGC) described above.
APPROACH 4
In this approach we use the context of a pixel to predict its value. The context of a pixel is the values of some of its neighbours. We can examine some neighbours of a pixel P, compute an average value A, and predict that P will have the value A. The principle of image compression tells us that our prediction will be correct in most cases, almost correct in many cases, and completely wrong in a few cases. We can say that the predicted value A represents the redundant information in the pixel P. We now calculate the difference D = P - A and assign variable-size codes to the different values of D, such that small values are assigned short codes and large values long codes. The values of D tend to follow a Laplace distribution, and this distribution can be used to assign a probability to each value of D, so that arithmetic coding can be applied efficiently to encode the D values.

If P can have values 0 through (m-1), then the values of D are in the range [-(m-1), +(m-1)], and the number of codes needed is 2(m-1) + 1 = 2m - 1. The context of a pixel may consist of just one or two of its immediate neighbours; however, better results may be obtained when several neighbouring pixels are included in the context. The average A in such a case should be a weighted average, with nearer neighbours assigned higher weights. Another important consideration is the decoder. In order to decode the image, it must be able to calculate the context of every pixel. This means that the context may contain only pixels that have already been encoded. If the image is scanned in raster order, the context should include only pixels located above the current pixel, or in the same row and to its left.
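A minimal sketch of this scheme, assuming the context is just the left and upper neighbours and the prediction is their (unweighted) average; because only already-decoded pixels take part in the prediction, the decoder can mirror the encoder exactly:

    # Sketch: predictive coding where the decoder can recompute the same context.
    def predict(image, x, y):
        left = image[y][x - 1] if x > 0 else 0
        above = image[y - 1][x] if y > 0 else 0
        return (left + above) // 2

    def encode_differences(image):
        diffs = []
        for y, row in enumerate(image):
            for x, p in enumerate(row):
                diffs.append(p - predict(image, x, y))   # small values dominate (roughly Laplacian)
        return diffs

    def decode_differences(diffs, width, height):
        image = [[0] * width for _ in range(height)]
        it = iter(diffs)
        for y in range(height):
            for x in range(width):
                image[y][x] = predict(image, x, y) + next(it)
        return image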
APPROACH 5
In this method we transform the values of the pixels and encode the transformed values. An image can be compressed by transforming its pixels to a representation where they are decorrelated. Compression is achieved if the new values are smaller than the original ones. The decoder inputs the transformed values from the compressed stream and reconstructs the original data by applying the opposite transform. There are several methods for transforming the pixels; of these, the most important one is the discrete cosine transform (DCT).
The mathematical concept of a transform is a powerful tool in many areas and can also serve as an approach to image compression. An image can be compressed by transforming its pixels (which are correlated) to a representation where they are decorrelated. Compression is achieved if the new values are smaller, on average, than the original ones. Lossy compression can be achieved by quantizing the transformed values. The decoder inputs the transformed values from the compressed stream and reconstructs the (precise or approximate) original data by applying the opposite transform.
The term decorrelated means that the transformed values are independent of one another. As a result, they can be encoded independently, which makes it simpler to construct a statistical model. An image can be compressed if its representation has redundancy. The redundancy in images stems from pixel correlation. If we transform the image to a representation where the pixels are decorrelated, the redundancy has been eliminated and the image can then be compressed effectively.
Orthogonal Transform
Image transforms used in practice should be fast and preferably also simple to implement. This suggests the use of linear transforms. In such a transform, each transformed value c_i is a weighted sum of the data items (the pixels) d_j, where each item is multiplied by a weight (or transform coefficient) w_ij. Thus, c_i = Σ_j d_j w_ij for i, j = 1, 2, ..., n. In matrix notation this is C = W·D, where each row of W is called a basis vector.
The important issue is the determination of the values of the weights w_ij. The guiding principle is that we want the first transformed value c_1 to be large, and the remaining values c_2, c_3, ... to be small. The basic relation c_i = Σ_j d_j w_ij suggests that c_i will be large when each weight w_ij reinforces the corresponding data item d_j. This happens, for example, when the basis vector w_i and the data vector d have similar values and signs. Conversely, c_i will be small if the weights are small or half of them have the opposite sign of the corresponding data items. Thus, when we get a large c_i, we know that the basis vector w_i resembles the data vector d. A small c_i, on the other hand, means that w_i and d have different shapes. We can therefore interpret the basis vectors as tools that extract features from the data vector.
In practice, the weights should be independent of the data items. Otherwise, the weights would have to be included in the compressed stream, for the use of the decoder. This, combined with the fact that our data items are pixel values, which are normally nonnegative, suggests a way to choose the basis vectors. The first vector, the one that produces c1, should consist of positive, perhaps even identical, values. This will reinforce the nonnegative values of the pixels. Each of the other vectors should have half its elements positive and the other half, negative. When multiplied by nonnegative data items, such a vector tends to produce a small value. Recall that we can interpret the basis vectors as tools for extracting features from the data vector. A good choice would therefore be to have basis vectors that are very different from each other, and so can extract different features. This leads to the idea of basis vectors that are mutually orthogonal. If the transform matrix W is orthogonal, the transform itself is called orthogonal. Another observation that helps to select the basis vectors is that they should feature higher and higher frequencies, thus extracting higher-frequency features from the data as we go along, computing more transformed values.
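A tiny numeric sketch of an orthogonal transform, using the 2-point basis vectors (1, 1)/√2 and (1, -1)/√2 (an illustration only, not any particular standard transform):

    import math

    # Sketch: a 2-point orthogonal transform. The first (all-positive) basis vector
    # concentrates the energy of correlated data; the second (half positive, half
    # negative) captures the remaining detail.
    W = [[1 / math.sqrt(2),  1 / math.sqrt(2)],
         [1 / math.sqrt(2), -1 / math.sqrt(2)]]

    def transform(d):
        return [sum(w * x for w, x in zip(row, d)) for row in W]

    # Two similar (correlated) pixel values give a large first coefficient and a
    # small second one: transform([118, 122]) -> [169.70..., -2.82...]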
Discrete Cosine Transform
One of the most popular transform methods is the discrete cosine transform, or simply the DCT. The two-dimensional DCT is the form used in practice, because the pixels of an image are correlated in two dimensions, not just one. The image is broken up into blocks of n x n pixels p_xy (we use n = 8 as an example), and the following equation is used to produce a block of 8 x 8 DCT coefficients G_ij for each block of pixels:

G_ij = (1/√(2n)) C_i C_j Σ_{x=0..n-1} Σ_{y=0..n-1} p_xy cos((2x+1)iπ/2n) cos((2y+1)jπ/2n), for 0 ≤ i, j ≤ n-1.

If lossy compression is required, the coefficients are quantized. The decoder reconstructs a block of (approximate or precise) data values by computing the inverse DCT (IDCT); for n = 8,

p_xy = (1/4) Σ_{i=0..7} Σ_{j=0..7} C_i C_j G_ij cos((2x+1)iπ/16) cos((2y+1)jπ/16),

where C_f = 1/√2 for f = 0, and C_f = 1 for f > 0.
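A direct (unoptimized) sketch of these formulas for an 8 x 8 block; real codecs use fast factorizations, but the result is the same:

    import math

    N = 8
    def C(f):
        return 1 / math.sqrt(2) if f == 0 else 1.0

    # Sketch: forward 2-D DCT of one 8x8 block of pixels, straight from the definition.
    def dct2(block):
        G = [[0.0] * N for _ in range(N)]
        for i in range(N):
            for j in range(N):
                s = 0.0
                for x in range(N):
                    for y in range(N):
                        s += (block[x][y]
                              * math.cos((2 * x + 1) * i * math.pi / (2 * N))
                              * math.cos((2 * y + 1) * j * math.pi / (2 * N)))
                G[i][j] = (2 / N) * C(i) * C(j) * s
        return G

    # Sketch: inverse 2-D DCT (IDCT), reconstructing the pixel block from the coefficients.
    def idct2(G):
        block = [[0.0] * N for _ in range(N)]
        for x in range(N):
            for y in range(N):
                s = 0.0
                for i in range(N):
                    for j in range(N):
                        s += (C(i) * C(j) * G[i][j]
                              * math.cos((2 * x + 1) * i * math.pi / (2 * N))
                              * math.cos((2 * y + 1) * j * math.pi / (2 * N)))
                block[x][y] = (2 / N) * s
        return block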

APPROACH 6
The principle of this approach is to separate a continuous-tone colour image into three grayscale images and compress each of the three separately, using approaches 3, 4 or 5. An important feature of this approach is the use of a luminance-chrominance colour representation. The advantage of this representation is that the eye is sensitive to small changes in luminance but not in chrominance. This allows considerable loss of data in the chrominance components while still making it possible to decode the image without a significant loss of perceived quality.
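One common luminance-chrominance representation is YCbCr, which JPEG uses; the sketch below applies the standard BT.601 conversion coefficients, shown only to illustrate the separation step:

    # Sketch: convert an RGB pixel to the YCbCr luminance-chrominance representation
    # (BT.601 coefficients, as used by JPEG/JFIF). Y carries the detail the eye is most
    # sensitive to; Cb and Cr can be subsampled or quantized more aggressively.
    def rgb_to_ycbcr(r, g, b):
        y  =  0.299    * r + 0.587    * g + 0.114    * b
        cb = -0.168736 * r - 0.331264 * g + 0.5      * b + 128
        cr =  0.5      * r - 0.418688 * g - 0.081312 * b + 128
        return y, cb, cr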

APPROACH 7
This approach is for discrete-tone images. This type of image contains uniform regions, and a region may appear several times in the image. A possible way to compress such an image is to scan it, identify the regions and find the repeating ones. If a region B is identical to an already found region A, then B can be compressed by writing a pointer to A on the compressed stream.
APPROACH 8
In this approach we partition the image into parts and compress it by processing the parts one by one. Suppose that the next unprocessed image part is part 15. We try to match it with parts 1 to 14, which have already been processed. If it matches, or can be expressed as a combination of them, only the few numbers that specify the combination need to be saved, and part 15 can be discarded. If part 15 cannot be expressed as such a combination, it is saved in raw format.

CONCLUSION
Data compression is still a developing field, and research continues on improving both compression ratio and speed. Experience shows that modifying an algorithm to improve its compression by 1% may increase its running time by 10%. The aim of the field, of course, is to develop better and better compression techniques.

REFERENCES
• Data Compression: The Complete Reference, David Salomon, 2nd edition, Springer
• Introduction to Data Compression, Khalid Sayood
• The Data Compression Book, Mark Nelson & Jean-loup Gailly

ACKNOWLEDGEMENT
I express my sincere gratitude to Dr. P.M.S Nambissan, Prof. and Head, Department of Electrical and Electronics Engineering, MES College of Engineering, Kuttippuram, for his cooperation and encouragement.
I would also like to thank my seminar guide, Asst. Prof. Gylson Thomas (Staff in-charge, Department of EEE), for his invaluable advice and wholehearted cooperation, without which this seminar would not have seen the light of day.
Sincere gratitude to all the faculty of the Department of EEE and to my friends for their valuable advice and encouragement.

CONTENTS
• INTRODUCTION
• BASIC TYPES OF DATA COMPRESSION
• TECHNIQUES OF DATA COMPRESSION
• BASIC TECHNIQUES
• STATISTICAL TECHNIQUES
• DICTIONARY METHODS
• JPEG
• APPLICATION IN IMAGE COMPRESSION
• CONCLUSION
• REFERENCES
