ABSTRACT:

Image compression is an application of data compression: it encodes an image using fewer bits than the original representation. Its purpose is to reduce redundancy, and in particular to reduce irrelevance in the image data, so that the data can be stored or transmitted in an efficient form. Lossless compression techniques, also known as entropy coding or noiseless coding, allow the original image to be recovered perfectly: the exact original data is reproduced when the file is decompressed. These techniques are judged on two criteria, efficiency and complexity, and they use statistical or decomposition techniques to reduce the redundancy in the image, effectively lowering the total number of bits needed to represent the information through different codings.

AIM:

Lossless compression is used to compress data so that it can be reconstructed exactly when decompressed. It is also known as entropy coding, as it uses decomposition or statistical techniques to remove or reduce redundancy.

KEYWORDS:

Huffman encoding, run-length encoding, variable-length encoding, LZW encoding, arithmetic encoding.

INTRODUCTION:

Lossless image compression reduces the size of an image without any loss of quality, for example by removing unnecessary metadata from JPEG and PNG files. Several image formats, such as GIF, PNG, and BMP, are considered "lossless". The big benefit of lossless image compression is that it allows you to retain the full quality of your images while reducing their file size: the decompressed data is identical to the original. Lossless compression is used in many applications, such as the ZIP file format and the GNU tool gzip. Lossless compression algorithms and their implementations are usually tested in head-to-head benchmarks; a number of such benchmarks exist, though some cover only the data compression ratio. No lossless compression algorithm can guarantee compression for every input data set: each file is represented by a string of bits of some arbitrary length, and no algorithm can transform all files into output files that are shorter than the originals. Every single bit of data that was originally in the file remains after the file is decompressed. The Graphics Interchange Format (GIF) is an image format used on the web that provides lossless compression. Lossless image compression rewrites the data of the original file in a more efficient way; image compression addresses the problem of reducing the amount of data required to represent a digital image with no significant loss of information.

HUFFMAN ENCODING:

Huffman encoding is a form of statistical coding that attempts to reduce the number of bits required to represent a string of symbols. The probability of a symbol's occurrence has a direct bearing on the length of its representation: the more probable a symbol is, the shorter its bit representation will be. Huffman compression is a variable-length coding system that assigns shorter codes to more frequently used characters and longer codes to less frequently used characters, in order to reduce the size of files being compressed and transferred. The original image is reconstructed exactly; decompression is done using Huffman decoding.

The encoder reads the input symbol by symbol and builds the tree from the symbol frequencies until the last element has been placed in the tree. It then performs a traversal of the tree to generate the code table, determining the code for each element of the tree in the following way: the code for each symbol is obtained by tracing the path to the symbol from the root of the tree, assigning a 1 to a branch in one direction and a 0 to a branch in the other direction. A symbol reached by branching right twice, then left once, may be represented by the pattern '110'. The figure below depicts codes for nodes of a sample tree.

For example:

                  *
                /   \
             (0)     (1)
                    /   \
                (10)     (11)
                        /    \
                   (110)      (111)

Symbol   Probability   Code
A1       0.40          0
A2       0.35          10
A3       0.20          110
A4       0.05          111

(The internal node weights are 0.05 + 0.20 = 0.25 and 0.25 + 0.35 = 0.60.)

The average length of the code is given by the sum, over all symbols, of the product of the probability of the symbol and the number of bits used to encode it. For the tree above this is 0.40×1 + 0.35×2 + 0.20×3 + 0.05×3 = 1.85 bits per symbol.
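The tree construction described above can be sketched in Python. This is a minimal illustration, not a production coder; the symbol names and weights come from the A1..A4 example, and the heap-merging approach is one standard way to build the tree:

```python
# Minimal Huffman code-table builder (sketch). Repeatedly merge the two
# lightest subtrees; symbols in the lighter subtree gain a '0' prefix,
# symbols in the heavier one gain a '1'.
import heapq

def huffman_codes(freqs):
    """Build a {symbol: bitstring} table from a {symbol: weight} mapping."""
    # Heap entries: (weight, tiebreak, {symbol: code-so-far}).
    heap = [(w, i, {sym: ""}) for i, (sym, w) in enumerate(freqs.items())]
    heapq.heapify(heap)
    count = len(heap)
    while len(heap) > 1:
        w1, _, t1 = heapq.heappop(heap)
        w2, _, t2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in t1.items()}
        merged.update({s: "1" + c for s, c in t2.items()})
        heapq.heappush(heap, (w1 + w2, count, merged))
        count += 1
    return heap[0][2]

probs = {"A1": 0.40, "A2": 0.35, "A3": 0.20, "A4": 0.05}
codes = huffman_codes(probs)
avg = sum(probs[s] * len(c) for s, c in codes.items())  # 1.85 bits/symbol
```

The exact 0/1 assignment can differ between implementations, but the code lengths (1, 2, 3, 3) and the 1.85-bit average match the worked example above.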

RUN LENGTH ENCODING:

Run-length encoding is a very simple technique of data compression. When applied to images, each individual channel matrix is retrieved and processed separately, and each group of repeated pixel values is replaced by the pixel value and its frequency of occurrence. The technique performs a lossless compression of input images based on sequences of identical values. It is less useful with images in which the same colour rarely repeats; it works best when there are long runs, such as long runs of white pixels separated by short runs of black pixels, or data in which a large number of consecutive pixels share the same intensity value. Runs of identical data values are stored as a single data value together with a count of the original run. This is most useful on data that contains many such runs: for example, relatively simple graphic images such as icons, line drawings, and animations. A run-length encoded bitmap was used to compress the Windows 3.x startup screen.

For example:

Applying the run-length encoding image compression algorithm to a scan line gives:

(12W) (1B) (12W) (3B) (24W) (1B) (14W)

where 12W means a run of 12 white pixels, 1B a run of 1 black pixel, and so on.

Sample processing example:

Input stream:  22 22 22 57 57 57 33 33 33 33 33 22
Output stream: (3,22) (3,57) (5,33) (1,22)

The output stream is a series of count-value pairs, as previously discussed.
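The count-value pairing can be sketched in Python; the function names are illustrative and the input stream is the one from the example above:

```python
# Minimal run-length encode/decode sketch over a flat pixel stream.
def rle_encode(stream):
    """Collapse consecutive repeats into (count, value) pairs."""
    runs = []
    for value in stream:
        if runs and runs[-1][1] == value:
            runs[-1] = (runs[-1][0] + 1, value)  # extend the current run
        else:
            runs.append((1, value))              # start a new run
    return runs

def rle_decode(runs):
    """Expand (count, value) pairs back into the original stream."""
    out = []
    for count, value in runs:
        out.extend([value] * count)
    return out

pixels = [22, 22, 22, 57, 57, 57, 33, 33, 33, 33, 33, 22]
runs = rle_encode(pixels)  # [(3, 22), (3, 57), (5, 33), (1, 22)]
```

Note that the trailing single 22 becomes the pair (1, 22), which is longer than the raw value; this is exactly why run-length encoding only pays off when runs are long.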

VARIABLE-LENGTH ENCODING:

Sort the symbols according to the frequency count of their occurrences, then recursively divide the symbols into two parts, each with approximately the same total count, until every part contains only one symbol. This is in contrast to fixed-length coding methods, for which data compression is only possible for large blocks of data.

For example:

Symbol   H   E   L   O
Count    1   1   2   1

Frequency count of the symbols in "HELLO".

Variable-length codes can allow sources to be compressed and decompressed with zero error (lossless data compression) and still be read back symbol by symbol. The mapping M1 = {a→0, b→0, c→1} is not non-singular, since a and b share the same codeword. The mapping M2 = {a→1, b→011, c→01110, d→1110, e→10011, f→0} is non-singular; if its extension is also uniquely decodable, it generates a lossless coding, which is useful for general data transmission (but this feature is not always required). Note that a non-singular code need not be more compact than the source.
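The recursive "split into two halves of roughly equal count" procedure described above is Shannon-Fano coding. A sketch in Python, using the "HELLO" counts from the example (the function name and split heuristic are one common formulation, not the only one):

```python
# Shannon-Fano coding sketch: sort symbols by descending count, then
# recursively split each group where the two halves' totals are closest,
# appending '0' to the left half's codes and '1' to the right half's.
def shannon_fano(symbols):
    """symbols: list of (symbol, count) pairs; returns {symbol: bitstring}."""
    codes = {}

    def split(group, prefix):
        if len(group) == 1:
            codes[group[0][0]] = prefix or "0"
            return
        total = sum(c for _, c in group)
        best, cut = None, 1
        for i in range(1, len(group)):  # find the most balanced split point
            left = sum(c for _, c in group[:i])
            diff = abs(total - 2 * left)
            if best is None or diff < best:
                best, cut = diff, i
        split(group[:cut], prefix + "0")
        split(group[cut:], prefix + "1")

    split(sorted(symbols, key=lambda sc: -sc[1]), "")
    return codes

codes = shannon_fano([("H", 1), ("E", 1), ("L", 2), ("O", 1)])
```

With these counts the most frequent symbol L gets the 1-bit code and the rest get 2-3 bits, so "HELLO" encodes in 10 bits instead of the 15 a fixed 3-bit code would need.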

LZW ENCODING:

LZW is a lossless compression algorithm for digital data of many kinds, named for its creators, Abraham Lempel and Jacob Ziv, and a later contributor, Terry Welch. LZW is based on a translation table that maps strings of input characters to codes. Through its incorporation in the graphics file formats GIF_89a and TIFF_LZW, LZW has come to be strongly associated with image compression. The LZW method achieves compression by using codes 256 through 4095 to represent sequences of bytes. For example, code 523 may represent the sequence of three bytes 231 124 234: each time the compression algorithm encounters this sequence in the input file, code 523 is placed in the encoded file, and during decompression code 523 is translated via the code table to recreate the true three-byte sequence. The longer the sequence assigned to a single code, and the more often that sequence is repeated, the higher the compression achieved. LZW performs well on highly redundant data files, such as tabulated numbers and computer source code. LZW encoding works on the multiplicity of occurrence of bit sequences in the pixels to be encoded: it replaces strings of characters with single codes without doing any analysis of the incoming text. LZW is an adaptive technique: as the compression algorithm runs, it maintains a changing dictionary of the strings that have appeared in the text so far.
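The adaptive dictionary described above can be sketched in Python. This simplified version omits the variable code widths and clear codes that real GIF/TIFF LZW uses; the test string is a classic illustration, not from the original text:

```python
# Simplified LZW sketch: codes 0-255 are the single bytes; new codes
# from 256 upward are assigned to byte sequences as they are first seen.
def lzw_encode(data):
    """Encode a byte string into a list of integer codes."""
    table = {bytes([i]): i for i in range(256)}
    next_code, result, current = 256, [], b""
    for byte in data:
        candidate = current + bytes([byte])
        if candidate in table:
            current = candidate           # keep growing the match
        else:
            result.append(table[current])
            table[candidate] = next_code  # learn the new sequence
            next_code += 1
            current = bytes([byte])
    if current:
        result.append(table[current])
    return result

def lzw_decode(codes):
    """Rebuild the byte string; the table is regrown during decoding."""
    table = {i: bytes([i]) for i in range(256)}
    next_code = 256
    prev = table[codes[0]]
    out = [prev]
    for code in codes[1:]:
        entry = table[code] if code in table else prev + prev[:1]
        out.append(entry)
        table[next_code] = prev + entry[:1]
        next_code += 1
        prev = entry
    return b"".join(out)

codes = lzw_encode(b"TOBEORNOTTOBEORTOBEORNOT")
```

The decoder never receives the table: it reconstructs the same dictionary from the code stream itself, which is what makes LZW adaptive.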

ARITHMETIC ENCODING:

Arithmetic coding can be used to build a lossless image compression algorithm in which a model is selected appropriately for each pixel position: one of a large number of possible, dynamic probability distributions is chosen, and the current pixel's prediction error is encoded using that distribution as the model for the arithmetic encoder. Such algorithms have been compared experimentally with Lossless JPEG, the lossless image compression standard at the time, and also with FELICS and other lossless compression algorithms. Arithmetic coding differs from other forms of entropy encoding, such as Huffman coding, in that rather than separating the input into component symbols and replacing each with a code, arithmetic coding encodes the entire message into a single number, an arbitrary-precision fraction q where 0.0 ≤ q < 1.0.
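The idea of encoding a whole message as one fraction in [0, 1) can be sketched with exact rational arithmetic. This toy version uses Python's Fraction to sidestep the fixed-precision renormalisation a real coder needs; the alphabet and message are illustrative:

```python
# Toy arithmetic coder: each symbol owns a slice of [0, 1); encoding a
# message repeatedly narrows the interval, and any q inside the final
# interval identifies the message exactly.
from fractions import Fraction

def build_intervals(probs):
    """Assign each symbol a [low, high) slice of [0, 1)."""
    intervals, low = {}, Fraction(0)
    for sym, p in probs.items():
        intervals[sym] = (low, low + Fraction(p))
        low += Fraction(p)
    return intervals

def arith_encode(message, probs):
    intervals = build_intervals(probs)
    low, high = Fraction(0), Fraction(1)
    for sym in message:
        s_low, s_high = intervals[sym]
        width = high - low
        low, high = low + width * s_low, low + width * s_high
    return low  # any q with low <= q < high would do

def arith_decode(q, probs, length):
    intervals = build_intervals(probs)
    out = []
    for _ in range(length):
        for sym, (s_low, s_high) in intervals.items():
            if s_low <= q < s_high:
                out.append(sym)
                q = (q - s_low) / (s_high - s_low)  # rescale into [0, 1)
                break
    return "".join(out)

probs = {"A": Fraction(1, 2), "B": Fraction(1, 4), "C": Fraction(1, 4)}
q = arith_encode("ABCA", probs)  # a single fraction in [0, 1)
```

More probable symbols own wider slices, so they narrow the interval less and therefore cost fewer bits, which is how arithmetic coding approaches the entropy of the source.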