ABO (Adaptive Binary Optimization): The New Innovation in Data Compression

ABSTRACT

Data transmission and storage cost money. The more information being dealt with, the more it costs. In spite of this, most digital data are not stored in the most compact form. Rather, they are stored in whatever way makes them easiest to use: ASCII text from word processors, binary code that can be executed on a computer, individual samples from a data acquisition system, and so on. Typically, these easy-to-use encoding methods require data files about twice as large as actually needed to represent the information. Data compression is the general term for the various algorithms and programs developed to address this problem. A compression program converts data from an easy-to-use format to one optimized for compactness, and a decompression program returns the information to its original form. The world over, the rapid growth in information and communication technologies has led to an explosive demand for effective means to compress and store data, and this data is no longer simple text but encompasses a variety of formats, from text and images to moving pictures. Despite continuous research in compression technologies and the emergence of compression standards for audio, video, images and text, the quest for the perfect compression algorithm is still on. Many companies have entered the gun lap in the race to define compression standards, especially for the most complex and demanding format of all, moving pictures. Here we examine the new innovation ABO (Adaptive Binary Optimization).

INTRODUCTION

Compression approaches have been many and diverse. Most of these, especially when it comes to images, revolve around a kind of pointillism: seeing images as a collection of dots, or pixels. Images are stored inside our computers as rows and columns of pixels. In the case of a monochrome image, each pixel is represented in computer memory by a number that gives the pixel's brightness on a scale from black to white. For color images, a pixel typically consists of three or four numbers, encoding the intensities of the component colors. A snapshot-size color image might have a million pixels and fill up a few megabytes of computer memory. The same principle extends to moving images, which are simply sequences of such frames.
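As a rough illustration of the storage figures just mentioned, the short sketch below (plain Python; the dimensions are assumed, not taken from any particular camera) computes the raw size of a snapshot-size image.

```python
# Back-of-the-envelope size of an uncompressed image.
# The dimensions and bit depths below are illustrative assumptions.

width, height = 1200, 900            # roughly a million pixels, snapshot size
pixels = width * height

mono_bytes = pixels * 1              # 8-bit monochrome: one byte per pixel
rgb_bytes = pixels * 3               # 24-bit color: three bytes per pixel

print(f"{pixels:,} pixels")
print(f"monochrome: {mono_bytes / 1e6:.1f} MB")
print(f"RGB color:  {rgb_bytes / 1e6:.1f} MB")   # a few megabytes, as noted above
```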
Data compression algorithms can be classified into lossless and lossy techniques. A lossless technique means that the restored data file is identical to the original. This is absolutely necessary for many types of data, for example executable code, word processing files and tabulated numbers: you cannot afford to misplace even a single bit of this type of information. In comparison, data files that represent images and other acquired signals do not have to be kept in perfect condition for storage or transmission. All real-world measurements inherently contain a certain amount of noise. If the changes made to these signals resemble a small amount of additional noise, no harm is done. Compression techniques that allow this type of degradation are called lossy. This distinction is important because lossy techniques are much more effective at compression than lossless methods; the higher the compression ratio, the more noise is added to the data. Both JPEG and MPEG are lossy techniques, while ABO is visually lossless.
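The lossless/lossy distinction can be made concrete with a minimal sketch. It uses zlib (an LZ-family coder from Python's standard library) for the lossless round trip, and a crude quantization step, chosen purely for illustration, to stand in for a lossy technique; the sample data is arbitrary.

```python
import zlib

data = bytes(range(256)) * 100              # arbitrary sample data

# Lossless: the round trip restores every bit exactly.
packed = zlib.compress(data)
assert zlib.decompress(packed) == data

# Lossy (illustrative only): quantize each byte to a multiple of 16 before
# compressing. The restored data is now only an approximation of the original.
quantized = bytes((b // 16) * 16 for b in data)
lossy_packed = zlib.compress(quantized)
restored = zlib.decompress(lossy_packed)
assert restored != data                     # information has been discarded
print(len(packed), len(lossy_packed))       # the quantized stream compresses further
```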

JPEG

JPEG, an acronym for Joint Photographic Experts Group, is an ISO/CCITT-backed international standards group that defined an image-compression specification (also referred to as JPEG) for still images. The following are mandatory in the JPEG specification: algorithms must operate at or near the state of the art in image-compression rates; algorithms must allow for software-only implementations on a wide variety of computer systems; compression ratios need to be user variable; compressed images must be good to excellent in quality; compression must be generally applicable to all sorts of continuous-tone (photographic-type) images; and interoperability must exist between implementations from different vendors. JPEG achieves image compression by methodically throwing away visually insignificant image information. This information includes the high-frequency components of the image, which are less important to image content than the low-frequency components. When an image is compressed using JPEG, the discarded high-frequency components cannot be retrieved, so baseline sequential JPEG is considered lossy. When a JPEG-compressed image is displayed, much of the high-frequency information is missing.
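A heavily simplified sketch of this idea follows (it is not a JPEG codec): an 8x8 block is transformed with the discrete cosine transform, the coefficients are coarsely quantized so that small high-frequency values round to zero, and the inverse transform then no longer reproduces the block exactly. SciPy's dctn/idctn routines are used; the block contents and the quantization step are assumptions chosen for illustration.

```python
import numpy as np
from scipy.fft import dctn, idctn

# An assumed smooth 8x8 image block (a diagonal gradient).
x = np.arange(8, dtype=float)
block = 50.0 + 10.0 * np.add.outer(x, x)

# Forward DCT: for smooth content, energy concentrates in the
# low-frequency (top-left) coefficients.
coeffs = dctn(block, norm="ortho")

# Coarse uniform quantization: most high-frequency coefficients round to zero
# and are effectively thrown away, which is where the information loss happens.
step = 40.0
quantized = np.round(coeffs / step)
print("non-zero coefficients:", np.count_nonzero(quantized), "out of 64")

# Reconstruction is close to the original block, but not identical (lossy).
restored = idctn(quantized * step, norm="ortho")
print("max pixel error:", np.abs(restored - block).max())
```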

MPEG

MPEG is a compression standard for digital video sequences, such as those used in computer video and digital television networks. In addition, MPEG also provides for the compression of the sound track associated with the video. The name comes from its originating organization, the Moving Pictures Experts Group. The approach used by MPEG can be divided into two types of compression: within-the-frame and between-frame. Within-the-frame compression means that individual frames making up the video sequence are encoded as if they were ordinary still images. This compression is performed using the JPEG standard, with just a few variations. In MPEG terminology, a frame that has been encoded in this way is called an intra-coded or I-picture. Most of the pixels in a video sequence change very little from one frame to the next. Unless the camera is moving, most of the image is composed of a background that remains constant over dozens of frames. MPEG takes advantage of this with a sophisticated form of delta encoding to compress the redundant information between frames. After compressing one of the frames as an I-picture, MPEG encodes successive frames as predictive-coded or P-pictures; that is, only the pixels that have changed since the I-picture are included in the P-picture. While these two compression schemes form the backbone of MPEG, the actual implementation is immensely more sophisticated than described here.
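A toy sketch of the between-frame idea (far simpler than MPEG's real motion-compensated coding): the first frame is kept whole as the "I-picture", and a later frame is stored only as the pixels that changed. The frame sizes and the moving region are assumed values.

```python
import numpy as np

rng = np.random.default_rng(1)
frame1 = rng.integers(0, 256, size=(120, 160), dtype=np.uint8)  # the "I-picture"
frame2 = frame1.copy()
frame2[40:60, 50:90] = 255      # a small bright object appears; background unchanged

# "P-picture": record only the pixels that differ from the reference frame.
changed = frame2 != frame1
positions = np.flatnonzero(changed)
values = frame2[changed]
print("pixels in a full frame:", frame2.size)
print("pixels stored as delta:", values.size)    # only the changed region

# Decoder side: start from the reference frame and patch in the changes.
rebuilt = frame1.copy()
rebuilt.flat[positions] = values
assert np.array_equal(rebuilt, frame2)
```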

How Small is Small?

Where are we today in terms of compression? MPEG4, the latest in moving-picture compression, extends the compression techniques used in the MPEG1 and MPEG2 codecs and combines them within a fully flexible framework.
The original source material is divided into multiple audio and video objects, which are compressed according to their character and then combined into a compound audio/video object (AVO). By dividing the content and compressing each component of the video stream in the most appropriate manner, MPEG4 is able to offer high-quality images at data rates from below 64 Kbps up to around 1 Mbps. Even at locations accessible only with a traditional telephone modem, it is possible to view MPEG4 video data at a reasonable quality and frame rate.
Both of these popular compression standards come with their own sets of limitations. Let us look at some of them:

1. Lossy Techniques:

Both JPEG and MPEG are lossy techniques, which means that there is some data loss while the image is being compressed. Lossy techniques are effective in instances where the degree of compression to be achieved is high and where small trade-offs in image quality are acceptable.
However, this trade-off may not be acceptable in certain applications, such as bank and medical records, where the data need to be identical before and after compression. For audio, video and graphics, lossy techniques are popularly used, as files can be compressed to within 5 percent of their original size.

2. Picture/Voice Quality:

As said before, there are limitations with respect to picture quality too. In JPEG, there is a certain amount of blockiness in the picture as the underlying block-encoding structure becomes visible. Blurring, smearing, edge busyness, error blocks and quantization noise are just some of the distortions that occur when using a lossy technique to compress voice and images.

3. Greater Complexity:

JPEG 2000, MPEG4 and M-JPEG or Motion JPEG (which stores each frame of a moving picture in a JPEG-compressed format) all aim to minimize the error level between a compressed image and its original. Hence, all these are extremely complex, and they exploit the limitations of the human eye. Past studies have shown that while cosine transforms can achieve higher compression ratios with some distortion to the original image, competing techniques such as wavelet and fractal image compression are not devoid of limitations either. It has been proved time and again that higher levels of compression are invariably accompanied by a greater degree of distortion.

Is There a Need for a New Compression Technique?

Rapid advancements in the media and entertainment, medical imaging, defense and consumer sectors have led to the need for more effective compression techniques. Though JPEG performs very well up to compression levels of 20:1, there is a certain blocky effect at higher compression levels.
Though MPEG4 is far more than a video compression standard, as it sets out a common approach to multimedia compression, the encoding process is so complex that compression takes a long time.
Moreover, tailoring the compression technique to fit the needs of a rapidly changing environment has led to the emergence of a number of proprietary standards to compress different forms of data. The application areas include:
• Satellite imagery
• Mini audio and video discs
• Wireless telephony
• Videoconferencing
• Wireless data
• Database design
• CT, MRI scans and mammography
• High definition television and video games
Applications such as wireless telephony and videoconferencing need near real-time compression and transmission, which is next to impossible given the complex coding processes of existing techniques. Ultrasound, mammogram and CT/MRI scans require the decompressed data to reflect the original data in its entirety. While some processes may achieve high compression, they lose some of the information that was present in the original file. What we need today is a technique that not only compresses data in real time, but also achieves the highest accuracy levels. The search for such an algorithm is on, as the quality of compressed digital pictures and video changes with the data rate, picture complexity and, more importantly, the type of encoding algorithm being used.

Adaptive Binary Optimization: "The Small Wonder"

A new innovation from MatrixView Pte. Ltd., called Adaptive Binary Optimization (ABO), is based on a refreshingly new approach to digital content optimization (DCO). ABO is a versatile and pervasive technique, as it allows optimization of any form of digital content, such as:

• Images
• Videos/Moving frames
• Sound
• Text

Definition of ABO

ABO is MatrixView's revolutionary approach to Digital Content Optimization (DCO). Where traditional compression technologies depend on the elimination of high-frequency data (frequency transformation), ABO does not. Instead, ABO works on the correlation found in each bit and pixel of a digital content signal, rearranging and normalizing each individual data stream (e.g. the red, green and blue components of an image separately) for more efficient coding, via the unique deployment of a bit-plane architecture. Without frequency transformations, ABO's complexity is reduced significantly and no floating-point (decimal) values occur. This leads to unprecedented advantages over all other techniques, as illustrated below. ABO's ability to accommodate a wide range of data formats is especially significant if you consider the number of techniques available for each form of data: MP3 is the most popular compression format for audio, JPEG and GIF for still images, and MPEG for moving images, while for text PKZIP has gained wide acceptance. Initial tests with ABO have shown that it is capable of breaching current technology barriers and establishing previously impossible technology frontiers, such as seamless 30-frames-per-second video conferencing over dialup lines or indexing of images. This is due to the inherent advantages of ABO over existing technologies such as JPEG or MPEG implementations.
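MatrixView has not published the ABO algorithm itself, so the following is only a generic illustration of the bit-plane idea the description refers to, not MatrixView's implementation: an 8-bit channel is split into eight binary planes, and each plane, being a plain pattern of 0s and 1s, can be coded (here with naive run-length coding) without any frequency transform or floating-point arithmetic. All data and coding choices below are assumptions.

```python
import numpy as np

def bit_planes(channel: np.ndarray) -> list:
    """Split an 8-bit image channel into 8 binary planes, most significant first."""
    return [(channel >> b) & 1 for b in range(7, -1, -1)]

def run_lengths(plane: np.ndarray) -> list:
    """Naive run-length coding of a flattened binary plane: (value, run) pairs."""
    flat = plane.ravel()
    breaks = np.flatnonzero(np.diff(flat)) + 1
    runs, start = [], 0
    for end in list(breaks) + [flat.size]:
        runs.append((int(flat[start]), int(end - start)))
        start = end
    return runs

# Assumed sample data: a smooth grayscale ramp, standing in for correlated image content.
channel = np.tile(np.arange(256, dtype=np.uint8), (64, 1))

for i, plane in enumerate(bit_planes(channel)):
    runs = run_lengths(plane)
    print(f"bit {7 - i}: {len(runs):6d} runs for {plane.size} bits")
# High-order planes of correlated data form long runs and so code very compactly;
# no DCT/wavelet transform and no floating-point values are involved anywhere.
```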

Advantages of ABO

1. Breakthrough compression ratios

Table 1: Examples of Compression Ratios in Medical Imaging

Table 2: Examples of Compression Ratios in Documents

Preliminary tests have shown that with certain images, such as Web pages with graphics, ABO can attain mathematically lossless compression rates of over 300 times. This path-breaking yet simple approach to compression gives ABO the ability to scale to theoretically limitless compression ratios.

2. Greater Image Quality

Despite the higher compression rates achieved, the images, videos, sound or text optimized with ABO are of much higher quality than is possible with existing schemes. It has been shown that ABO is able to attain mathematically lossless images at much higher compression ratios. The algorithm can also achieve significantly higher compression rates while producing visually lossless images, an approach popularly adopted by both JPEG and MPEG. This allows users to implement the most suitable quality levels at much higher compression ratios and speeds.

3. Higher Encoding/Decoding and Transmission Speeds

As data is compressed prior to transmission, the time taken to send compressed data over a communication channel such as a phone line, satellite or fiber cable is reduced in the same proportion as the compression ratio (a rough calculation follows the list below). The near real-time compression and decompression speeds enabled by ABO not only ensure fewer data packets and quicker transmission and retrieval, but also lower error levels, as errors are significantly reduced when there are fewer packets to transmit. As said before, unlike other existing schemes, which are based on frequency or calculus transforms, ABO works on the correlation found in digital content signals. This in fact enables much faster encoding (compression) and decoding (decompression) speeds. The practical benefits of such compression are as follows:
• For example, in video conferencing, ABO will enable near real-time encoding and decoding and a previously impossible 30 frames per second over dialup lines.
• Retrieval speeds will be faster as ABO checks only 1 bit per pixel, compared with the 8 bits per pixel for grayscale, or 24 bits per pixel for color, checked by current technologies.
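As a rough numeric check of the transmission-time claim above, using assumed figures (a 3 MB image, a 56 kbit/s dialup line and a 50:1 compression ratio; none of these come from MatrixView's published data):

```python
# Illustrative numbers only.
file_bytes = 3_000_000            # a 3 MB image
link_bits_per_second = 56_000     # a dialup line
ratio = 50                        # an assumed compression ratio

raw_seconds = file_bytes * 8 / link_bits_per_second
compressed_seconds = raw_seconds / ratio     # time shrinks by the same factor

print(f"uncompressed: {raw_seconds:7.1f} s")   # roughly seven minutes
print(f"compressed:   {compressed_seconds:7.1f} s")
```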

4. Data Protection through unique, simultaneous and personalized encryption

ABO employs a simultaneous encryption mechanism as it optimizes, which ensures secure data transfer. The technology does not require a third-party encryption layer and thus does not add extra data, complexity or time delays to the decoding of the content. The encryption is personalized and unique, as it allows the same content to be uniquely encoded for every use and distributed securely, much like the public key infrastructure (PKI).

5. Lower Complexities

Compared to the complex mathematical coding employed by comparable technologies, ABO has much lower complexity, which translates into significant benefits in terms of:
• Higher compression speeds
• Lower processing and battery power requirements
• Lower costs of implementation on hardware (e.g. as compared to MPEG4 on a DSP chip)
Software implementations of lossless compression algorithms have been around for a long time now; examples include intensive Markov modeling and Lempel-Ziv and its variants. However, one of the biggest drawbacks of these algorithms is the long calculation times involved because of their complexity, and the resultant impact on power consumption and system performance. Though there have been several hardware implementations of lossless data compression, none of them has been able to reach the simplicity levels that ABO has achieved. An immediate impact of such a reduction in the number of calculations is on battery life.
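For a feel of the calculation-time trade-off mentioned here, the sketch below exercises zlib, an LZ77-derived lossless coder from Python's standard library, at different effort levels. The sample text and the timings are machine-dependent illustrations, not benchmarks of any of the algorithms named above.

```python
import time
import zlib

sample = b"the quick brown fox jumps over the lazy dog. " * 20_000

for level in (1, 6, 9):                     # low, default and maximum effort
    start = time.perf_counter()
    packed = zlib.compress(sample, level)
    elapsed = time.perf_counter() - start
    print(f"level {level}: {len(packed):>8} bytes in {elapsed * 1000:.1f} ms")

# The round trip is exact, as required of any lossless coder.
assert zlib.decompress(zlib.compress(sample)) == sample
```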

What Makes ABO So Powerful?

To understand and appreciate ABO's digital content optimization powers, it is essential to understand compression theory and how ABO goes beyond established beliefs about data compression and transmission.

ABO and Shannon Theory – Better than the Best

The year 1948 proved to be a watershed for communication, as Claude Shannon put forth his fundamental laws on data compression and transmission. Some of the key aspects of Shannon's theory are those relating to lossless data compression. Shannon says that the source of information should be modeled as a random process. Shannon grouped communication systems into three types, discrete, continuous and mixed, stating that the discrete case is the foundation for the other two and "has applications not only in communication theory, but also in the theory of computing machines, the design of telephone exchanges and other fields."
After establishing this, Shannon introduced the idea of an information source being probabilistic by posing the following question to the reader: if an information source is producing messages by successively selecting symbols from a finite set, and the probability of a symbol appearing depends on the previous choice, then what is the amount of information associated with this source? Shannon answered this question by describing information in terms of entropy. If a communication source is not characterized by a large degree of randomness or choice, then the information (or entropy) is low; the corresponding redundancy is given by redundancy = 1 - (actual entropy / maximum entropy).
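The entropy and redundancy just defined are easy to compute for a concrete source. A minimal sketch, using an assumed sample message and treating its symbols as independent:

```python
import math
from collections import Counter

message = "AAAAABBBCCD"                      # an assumed sample message

counts = Counter(message)
probs = [n / len(message) for n in counts.values()]

# Shannon entropy in bits per symbol: H = -sum(p * log2 p).
entropy = -sum(p * math.log2(p) for p in probs)

# The maximum entropy for this alphabet is log2(number of distinct symbols).
max_entropy = math.log2(len(counts))

redundancy = 1 - entropy / max_entropy       # the formula quoted above
print(f"H = {entropy:.3f} bits/symbol, H_max = {max_entropy:.3f} bits/symbol")
print(f"redundancy = {redundancy:.2%}")
```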
Shannon understood that to derive the fundamental theorems for a communication system, he would need to provide a precise definition of information, such that it was a physical parameter that could be quantified. He did this by providing his entropic definition of information. Once Shannon had established this concept of information, he was able to work within this framework to discover his two fundamental theories of communication.
The first theorem deals with communication over a noiseless channel. The main idea behind this theorem is that the amount of information that can be transmitted is governed by its entropy, or randomness. Therefore, based on the statistical characteristics of the information source, it is possible to code the information so that it can be transmitted at the maximum rate the channel allows. This was revolutionary, as communication engineers previously thought that the maximum signal that could be transported across a medium depended on various physical factors such as frequency, not on the concept of information.
The second theorem states that, no matter what the noise, there is an encoding scheme that allows information to be transmitted over the channel with an arbitrarily low error rate, provided the transmission rate stays below the channel capacity. Again, this idea was revolutionary, as it was believed that beyond a certain level of noise it would be impossible to transmit the desired signal.
ABO has taken Shannon's theory a step further and has shown that data can be transmitted independently of channel restrictions. It proves that it is mathematically possible to send information across channels and receive it without any loss, irrespective of the distortions taking place in the channel.
ABO is able to achieve this by transforming the data into a new format that involves coding the data set at different levels. Unlike other compression algorithms, ABO does not depend on probability but on how the data is arranged. In this way, ABO enhances the Shannon theory by adding a new dimension through its simple, elegant, mathematical approach to data optimization.
Shannon’s architecting of the parts of a communication system, his modeling of information as entropy, and his theorizing on the limits of communication both with and without noise were leaps beyond contemporary thinking in communication engineering.
ABO, in its attempt to prove that data transfer can be independent of channel restrictions, has demonstrated that it can approach the core problem of data compression and transmission without worrying about extraneous factors such as channel distortions and entropy. Just as Shannon demonstrated, ABO has shown that abstraction and simplification of this complex problem can lead to the solution that had eluded researchers in the past.
The problem of data optimization has been clouded by extraneous considerations of one sort or another. What ABO has succeeded in doing is breaking this problem down into its main issues, which can then be analyzed more clearly and perhaps lead to a solution. In doing so, ABO may have stripped the problem down to the point where it no longer resembles the problem it originally was, but it is this simplification that has helped in cracking it.
In saying that ABO has focused on how to represent data, we do not wish to undermine the importance of errors and channel restrictions. ABO works with standard error-correction techniques, including retransmission requests and error-correction algorithms, and in doing so makes the core algorithm itself resistant to errors. This in fact helps ABO manage errors with an exceptional level of robustness and data reproducibility.
The ABO algorithm is starkly different from existing compression algorithms in the following aspects:
• The way data is arranged
• Reduction in entropies
• Geometrical approach to encoding data in different layers of bit-planes

CONCLUSION

The compression techniques and features of ABO have been examined. Initial tests and pilot projects have shown that ABO goes beyond any prevalent digital content optimization algorithm, both lossless and lossy. ABO clearly has many advantages and will become the superior standard in the digital content optimization space.
ABO has a number of significant functionalities. These include:
• The ability to handle data in a variety of formats: video, text, speech, and 2-dimensional and 3-dimensional synthetically generated audio and video
• An error-resilient core algorithm
• A mathematical framework for arranging and coding data
• A simple and error-free technique for representing data, enabling higher and faster compression
• Support for encryption to ensure secure data transfer
