DCT and Fractal Image Compression

Date of Submission

December 1992

Date of Award

Winter 12-12-1993

Institute Name (Publisher)

Indian Statistical Institute

Document Type

Master's Dissertation

Degree Name

Master of Technology

Subject Name

Computer Science


Department

Machine Intelligence Unit (MIU-Kolkata)


Supervisor

Kundu, Malay Kumar (MIU-Kolkata; ISI)

Abstract (Summary of the Work)

A simple gray-level image can be defined, in a very simplistic way, as a function f: R^2 → R, where the function value f(x, y) gives the brightness (or the gray level, possibly after a change of scale) at the point (x, y). To represent this image on a computer screen, which has finite resolution, the image is discretized both spatially and in amplitude. The image can then be considered a matrix of points whose row and column indices identify a point in the image, and whose corresponding element value gives the gray level at that point (such a point is called a pixel or pel). Now suppose the size of the above-mentioned matrix is N x N and the number of available gray levels is M; then the number of bits required to store this image is N x N x log2(M). Even for moderate resolution (N) and number of gray levels (M), the size of the image is too high for practical use. Today's computing requires constant processing, storage, and retrieval of images, and that with the least possible delay. This fast processing requires that images be as small as possible, so image compression is a need of today's computing world. With this we start our discussion of image compression.

1.2 Image Compression Fundamentals

In a digital image there can be three basic types of data redundancy: coding redundancy, inter-pixel redundancy, and psycho-visual redundancy. An image compression algorithm must identify these and exploit them to achieve good compression. For more details one may refer to [1].

An image is said to have coding redundancy if its gray levels are coded in a way that uses more code symbols than absolutely necessary to represent each gray level. This can be avoided if the coding of symbols is done depending on their probability of occurrence; Huffman coding reduces coding redundancy.

Various relationships among the objects in an image lead to inter-pixel redundancy in the image.
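Before continuing, the Huffman remedy for coding redundancy mentioned above can be made concrete. The following is a minimal sketch (not the dissertation's implementation; the function name and the toy symbol stream are illustrative): frequent gray levels receive short codewords, rare ones receive long codewords.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a Huffman code (symbol -> bitstring) from a sequence of symbols."""
    freq = Counter(symbols)
    # Heap entries are (frequency, tiebreak, tree); a tree is either a bare
    # symbol (leaf) or a (left, right) pair. The tiebreak keeps comparisons
    # away from the trees themselves.
    heap = [(f, i, s) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    if len(heap) == 1:
        # Degenerate case: a single distinct symbol gets the one-bit code "0".
        return {heap[0][2]: "0"}
    count = len(heap)
    while len(heap) > 1:
        # Repeatedly merge the two least-frequent subtrees.
        f1, _, t1 = heapq.heappop(heap)
        f2, _, t2 = heapq.heappop(heap)
        heapq.heappush(heap, (f1 + f2, count, (t1, t2)))
        count += 1
    codes = {}
    def walk(tree, prefix):
        if isinstance(tree, tuple):
            walk(tree[0], prefix + "0")
            walk(tree[1], prefix + "1")
        else:
            codes[tree] = prefix
    walk(heap[0][2], "")
    return codes
```

On the stream [0, 0, 0, 0, 1, 1, 2, 3], a fixed-length code needs 2 bits per symbol (16 bits total), while the Huffman code assigns lengths 1, 2, 3, 3 and encodes the stream in 14 bits.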
Because of these relationships, the values of some pixels can be estimated from the values of their neighbours. When such estimation is possible, coding each pixel separately is not necessary. This redundancy can be removed by applying certain transformations to the image; these can be quite elaborate, like the Karhunen-Loeve transform, or very simple, like taking differences of successive pixels on each scan line.

In normal visual processing, certain information in an image simply has less relative importance than other information. Such information is said to be psycho-visually redundant and can be eliminated without introducing noticeable visual distortion in the image. Elimination of psycho-visually redundant data results in a loss of quantitative information; the process, commonly referred to as quantization, is irreversible and results in lossy data compression.

So a general model of the source encoder is as in figure (1.1), and that of the decoder is as in figure (1.2).

Figure 1.1: Source encoder — f(x, y) → Mapper (inter-pixel) → Quantizer (psycho-visual) → Symbol encoder (coding) → channel

Figure 1.2: Source decoder — channel → Symbol decoder → Inverse mapper → f(x, y)

After looking at encoders in general, we classify them into two categories: lossless coders, whose decoder can reconstruct the original image with no loss of information of any kind, and lossy coders, whose reconstructed image contains less information than the original.
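The two mapper/quantizer stages above can be sketched in a few lines. This is only an illustration under simple assumptions (one scan line, uniform quantization; the function names are made up here): successive-pixel differencing is exactly invertible and so lossless, while quantization discards amplitude information and is not.

```python
def row_differences(row):
    """Inter-pixel mapper: replace each pixel by its difference from the
    previous pixel on the scan line; the first pixel is kept as-is."""
    return [row[0]] + [row[i] - row[i - 1] for i in range(1, len(row))]

def undo_differences(diffs):
    """Inverse mapper: running sum recovers the original row exactly."""
    row = [diffs[0]]
    for d in diffs[1:]:
        row.append(row[-1] + d)
    return row

def uniform_quantize(value, step):
    """Psycho-visual stage: map a gray level to the nearest multiple of
    `step`. Irreversible -- distinct inputs can share one output."""
    return step * round(value / step)
```

On a smooth scan line such as [100, 102, 101, 101, 105], the differences [100, 2, -1, 0, 4] cluster around zero, which is precisely what makes a subsequent entropy coder (e.g. Huffman) effective.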


ProQuest Collection ID: http://gateway.proquest.com/openurl?url_ver=Z39.88-2004&rft_val_fmt=info:ofi/fmt:kev:mtx:dissertation&res_dat=xri:pqm&rft_dat=xri:pqdiss:28843107

Control Number


Creative Commons License

Creative Commons Attribution 4.0 International License
This work is licensed under a Creative Commons Attribution 4.0 International License.


