One of the main concerns with “pics” these days is their quality and file size when uploading them to the web. Obviously we all need, and buy, the camera that gives the best quality within our budget constraints, and most digital cameras are now quite cheap and provide good quality for the money. From what I know, 8-16 megapixel cameras are quite common these days, and most of us already know that as the megapixel count of a camera increases, the resolution of the image increases, while the storage space does not grow as much because cameras are getting smarter about applying a fair amount of JPEG compression while taking shots. Even so, these image sizes are just too big for the web: a 3 to 8 MB+ photo takes a good amount of time to download, and with more images that adds up to a lot of download time. As a rough figure, a 5 MB photo over a 1 Mbps connection takes about 40 seconds to download. We also can't assume everyone has a 2 Mbps+ connection: say you want to send some of your birthday pics to your cousin's email, and your cousin doesn't have a decent connection speed, then it is going to take a very long time to download, and it is even worse if he is on a metered (per-MB) internet plan. For these and many other reasons, the raw images from the camera must be compressed in terms of both resolution and file size without visible loss in the images. This is possible because images tend to have certain types of redundancies, i.e. duplicated/repetitive information. Removal or encoding of these redundancies leads to the following types of compression:
- Lossless compression: Here the information present in the original image is kept intact, and the original image can be reconstructed exactly ("as it is") after decompression.
- Lossy compression: Here some of the information present in the original image is permanently lost, and the decompressed (reconstructed) image is a close approximation of the original image.
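To see the difference concretely, here is a minimal sketch of a lossless round trip using Python's built-in zlib module (zlib is a general-purpose compressor, not an image codec; it is used here only to illustrate the idea of exact reconstruction):

```python
import zlib

# Repetitive "pixel" data compresses well because of its redundancy.
data = b"\x80\x80\x80\x81\x81\x82" * 1000

compressed = zlib.compress(data)           # lossless encoding
restored = zlib.decompress(compressed)     # exact reconstruction

print(len(data), "bytes ->", len(compressed), "bytes")
assert restored == data                    # lossless: identical to the original
```

A lossy codec like JPEG would not pass that final check: the decoded pixels are only a close approximation of the originals.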
As suggested in the literature, there are 3 basic types of redundancies in images:
- Psychovisual Redundancy
- Coding Redundancy
- Interpixel Redundancy
Now, let's go through each of the redundancies mentioned above.
Psychovisual Redundancy:
Achieving very good compression ratios for natural images is possible because these types of images contain a good amount of psychovisual redundancy, i.e. information in the image to which our eyes are least sensitive. This simply means that information of this sort can be removed from the image and we wouldn't notice it 99% of the time, unless we zoom in too much. The following fact is exploited here: the human eye is less sensitive to higher frequencies and more sensitive to lower frequencies in the visual spectrum.
Coding Redundancy:
In any image we can group pixels according to their intensities or gray levels. Certain intensities occur more frequently than others, or simply put, some pixel colors appear more often than others, and this repetition can be encoded so that pixels which occur more often are assigned shorter codewords while those which occur less frequently get longer codewords. The entire image is then represented using these codewords, and since the pixels have been grouped, the space required to represent a codeword is much less than that of the individual pixels. Because we only encode the repetitive information, none of it is permanently lost, which gives lossless compression of the image; the compression ratio is often not great, but it is acceptable. Famous examples of this are Huffman coding, Arithmetic coding, etc.
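To make the idea concrete, here is a tiny Huffman-coding sketch in Python; the intensity values and their counts are made up for illustration, and a real image coder would of course be more involved:

```python
import heapq
from collections import Counter

def huffman_codes(pixels):
    """Frequent intensities get short codewords, rare ones get long codewords."""
    freq = Counter(pixels)
    if len(freq) == 1:                       # degenerate case: only one intensity
        return {next(iter(freq)): "0"}
    # Heap entries: (total frequency, tie-breaker, {intensity: code so far})
    heap = [(f, i, {sym: ""}) for i, (sym, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    tie = len(heap)
    while len(heap) > 1:
        f1, _, left = heapq.heappop(heap)    # merge the two least frequent subtrees
        f2, _, right = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in left.items()}
        merged.update({s: "1" + c for s, c in right.items()})
        tie += 1
        heapq.heappush(heap, (f1 + f2, tie, merged))
    return heap[0][2]

# Toy "image": intensity 200 dominates, so it ends up with the shortest codeword.
pixels = [200] * 60 + [35] * 25 + [90] * 10 + [255] * 5
codes = huffman_codes(pixels)
print(codes)
fixed_bits = len(pixels) * 8                       # 8 bits per pixel, uncompressed
coded_bits = sum(len(codes[p]) for p in pixels)    # variable-length codewords
print(fixed_bits, "bits ->", coded_bits, "bits")
```

The 100-pixel toy example drops from 800 bits to 155 bits, and because every codeword maps back to exactly one intensity, nothing is lost.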
Interpixel Redundancy:
This redundancy exists because adjacent pixels are correlated, which is to say that adjacent pixels have similar intensities (most of the time), and hence we can predict the intensity of a pixel from its neighbours. Usually the image is transformed from the spatial domain into the frequency domain to remove interpixel redundancies. Examples are the famous Fourier Transform, the Discrete Cosine Transform (DCT) used in JPEG, the Walsh-Hadamard Transform, etc. Of these, the DCT is the better option and is used as the standard for achieving lossy image compression in the JPEG compression technique. The DCT step becomes lossy because, after transforming the image to the frequency domain, the high-frequency coefficients are quantised (rounded off), which results in a permanent loss of information.
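As a rough illustration of the DCT step (a simplified sketch only; real JPEG additionally tiles the whole image into 8x8 blocks and uses quantisation tables, zig-zag ordering and entropy coding), assuming NumPy and SciPy are available:

```python
import numpy as np
from scipy.fft import dctn, idctn

# A made-up 8x8 block with a smooth gradient, i.e. strongly correlated
# neighbouring pixels (interpixel redundancy).
block = np.add.outer(np.arange(8.0), np.arange(8.0)) * 10.0 + 50.0

coeffs = dctn(block, norm="ortho")              # spatial domain -> frequency domain

# Crude stand-in for quantisation: simply drop the high-frequency coefficients.
keep_low = np.add.outer(np.arange(8), np.arange(8)) < 4
coeffs_kept = coeffs * keep_low

restored = idctn(coeffs_kept, norm="ortho")     # back to the spatial domain

print("coefficients kept:", int(keep_low.sum()), "of 64")
print("max pixel error:", np.abs(block - restored).max())
```

Because most of the block's energy sits in the low-frequency coefficients, throwing the rest away changes the pixels only slightly; that small, permanent change is exactly the loss JPEG trades for smaller files.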
Widely used Image File Formats on Internet:
Some of the most widely used file formats on the web are:
1. JPG (JPEG – Joint Photographic Experts Group)
2. PNG (Portable Network Graphics)
3. GIF (Graphics Interchange Format, originally developed by CompuServe)
I generally use JPG for images taken with a camera, since it gives the best results for this kind of image.
The storage space required for images can be reduced by:
- 1. Reducing the resolution of the image
- 2. Converting to a suitable file format (lossy/lossless)
- 3. Encoding the image information (lossless)
- 4. Reducing the image quality (lossy), e.g. by reducing the color space, among other techniques
We shall use a combination of the methods mentioned above. Now let's actually see how we compress the image (permanently, hence lossy) to a point where it is easy to exchange images on the Internet. The software that I frequently use for processing raw camera images is IrfanView, which is free to use.
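IrfanView does all of this through its GUI and batch mode; if you would rather script the same steps, here is a rough equivalent using Python's Pillow library (the file names and the 1024 px / quality 80 settings are just placeholder choices, not values prescribed by this guide):

```python
from PIL import Image

src = "olympus_1.jpg"        # hypothetical input straight from the camera
dst = "olympus_1_web.jpg"    # smaller copy for the web / email

img = Image.open(src).convert("RGB")

# Method 1: reduce the resolution (longest side capped at 1024 px, aspect ratio kept).
img.thumbnail((1024, 1024), Image.LANCZOS)

# Methods 2 and 4: re-save as JPEG with a lower quality setting (lossy).
img.save(dst, "JPEG", quality=80, optimize=True, progressive=True)

print("saved", dst, "at", img.size)
```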
Cameras and Images used for this Guide:
1. Olympus FE-170 6 Megapixel Camera :
Images -> olympus_1.jpg to olympus_6.jpg
2. Canon EOS 1000D 10.1 Megapixel Camera :
Images -> canon_1.jpg to canon_3.jpg
3. Nokia N73 ME 3.2 Megapixel :
Images -> n73_1.jpg to n73_2.jpg
4. Fujifilm FinePix HS20EXR 16 Megapixel Camera :
Images -> fujifilm_1.jpg to fujifilm_3.jpg
How do I download Original Images? (Links below)
1. Links to Original Images taken from Olympus FE-170 :
Olympus_6Mpix_1.rar
Olympus_6Mpix_2.rar
2. Link to Original Images taken from Canon EOS 1000D :
Canon_EOS_1000D.rar
3. Link to Original Images taken using Nokia n73 :
Nokia_N73_ME_3.2Mpix.rar
4. Link to Original Images taken using Fujifilm FinePix HS20EXR :
Fujifilm_FinePix_HS20EXR_16Mpix.rar
How do I download Processed Images?
1. To download the processed images as done in PART I and PART II of this guide, just click on the image (thumbnail) shown.
2. To download the thumbnails as generated in PART III, right-click on any thumbnail in PART I or II and save it as an image, since the thumbnails are used inline in this guide (post). Clicking on a thumbnail will lead you to the corresponding processed file in PART I or II. The thumbnails used in this guide were generated with the same method as described in PART III.
My Assumption: I assume you will be using this guide to optimize images taken from a camera or mobile phone and not computer-generated images (CGI), though this guide is fairly applicable to CGI as well.
Compression Ratio Basics:
From what I've studied in the literature, there are 2 common methods to calculate the compression ratio.
Method 1: Compression Ratio = Compressed Size / Original Size
[Lower values mean higher/better compression]
Method 2: Compression Ratio = Original Size / Compressed Size
[Higher values mean higher/better compression]
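As a quick worked example (the sizes below are made-up numbers, not measurements from the images in this guide):

```python
original_kb = 4800.0    # e.g. a 4.8 MB photo straight from the camera
compressed_kb = 600.0   # the same photo after resizing and re-saving as JPEG

method_1 = compressed_kb / original_kb   # 0.125 -> lower means better compression
method_2 = original_kb / compressed_kb   # 8.0   -> usually quoted as 8:1

print(method_1, method_2)
```

Both numbers describe the same result; Method 2 is the form you will most often see quoted (e.g. "8:1").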