1. Introduction
Over the decades, the demand for multimedia products has grown rapidly in the field of communication. Digital images strain the available bandwidth and consume a large amount of storage space on memory devices (Miaou, et al., 2009; Lin & Hao, 2005). Thus, there is a need to reduce the data redundancy in an image so as to save hardware space and transmission bandwidth. Indeed, image compression plays a very important role in both storage and transmission by representing the image in the least number of bits with no loss of the essential information content encapsulated in the original image. Every image contains redundant data, that is, duplicated information (Chen, et al., 2005; Shen & Rangayyan, 1997). The redundancy may be due to the frequent repetition of pixel values across the image. Redundancies in an image are classified into psycho-visual redundancy, spatial redundancy, and coding redundancy. The elimination of the correlation among the pixels of a natural image via transform coding or predictive coding addresses inter-pixel redundancy, also called spatial redundancy. Psycho-visual redundancy is exploited by discarding information to which the human visual system is least sensitive, thereby reducing the quantity of data. Coding redundancy is reduced with variable-length codes built from a statistical model of the data (Sanchez, et al., 2008; Srikanth & Ramakrishnan, 2005).
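As a minimal illustrative sketch (not taken from the paper), the predictive-coding approach to spatial redundancy mentioned above can be demonstrated by replacing each pixel with its difference from its left neighbour: in smooth image regions the residuals cluster near zero and are cheaper to encode, while the original row remains exactly recoverable. The function names and sample values here are hypothetical.

```python
def predict_residuals(row):
    """Replace each pixel with its difference from its left neighbour."""
    residuals = [row[0]]  # first pixel is kept as-is (no predictor)
    for i in range(1, len(row)):
        residuals.append(row[i] - row[i - 1])
    return residuals

def reconstruct(residuals):
    """Invert the prediction exactly, so the scheme is lossless."""
    row = [residuals[0]]
    for r in residuals[1:]:
        row.append(row[-1] + r)
    return row

row = [120, 121, 121, 123, 122, 122]   # a smooth image row (assumed values)
res = predict_residuals(row)            # residuals are small: [120, 1, 0, 2, -1, 0]
assert reconstruct(res) == row          # round-trips with no loss
```

The small residual values have a far more skewed distribution than the raw pixels, which is precisely what the entropy coders discussed below exploit.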
In general, the main approaches to image compression are categorized as lossless and lossy techniques, on the basis of whether the original image can be reconstructed from the compressed image (Taquet & Labit, 2012). With lossless compression, the original image is recovered exactly from the compressed image, whereas with lossy compression there is a small difference between the reconstructed image and the original image, in exchange for a higher compression rate (Velisavljevic, et al., 2007; Creusere, 1997; Sanchez, et al., 2010; de Queiroz, et al., 2000). Run Length Encoding, Entropy Encoding, Huffman Encoding, and Arithmetic Coding come under the lossless image compression techniques. Lossy image compression includes Scalar Quantization and Vector Quantization (Kim & Cho, 2014; Lin, et al., 2018). With lossy image compression, the speed of encoding and decoding of images as well as the Signal to Noise Ratio and compression ratio are high.
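Run Length Encoding, the first lossless technique listed above, can be sketched in a few lines. This is a generic illustration (the function names and sample data are hypothetical, not from the paper): runs of identical values are stored as (value, count) pairs, which pays off only when the data contains long repetitive runs, as noted later in this section.

```python
def rle_encode(pixels):
    """Collapse runs of identical values into (value, run_length) pairs."""
    encoded = []
    for p in pixels:
        if encoded and encoded[-1][0] == p:
            # extend the current run
            encoded[-1] = (p, encoded[-1][1] + 1)
        else:
            # start a new run
            encoded.append((p, 1))
    return encoded

def rle_decode(pairs):
    """Expand (value, run_length) pairs back to the original sequence."""
    return [v for v, n in pairs for _ in range(n)]

data = [255, 255, 255, 0, 0, 255]        # e.g. a binary-image scanline
enc = rle_encode(data)                    # [(255, 3), (0, 2), (255, 1)]
assert rle_decode(enc) == data            # lossless round trip
```

On data without long runs, the (value, count) pairs can exceed the original size, which is why RLE is described below as sufficient only for highly repetitive files.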
The most popular image compression technique is the DCT, although its block-based processing is prone to the blocking artifact effect, which makes the sub-image boundaries visible. The JPEG baseline coding system is the common mode employed with the DCT, as it fits most compression applications. This JPEG technique has a low compression ratio, and RLE is sufficient only for files with highly repetitive data. Fractal Encoding has an excellent mathematical encoding framework, but its encoding scheme is slow (Aliaga & Carlbom, 2005; Lee, et al., 2015; Pang, et al., 2019). Arithmetic coding makes use of fractional values, and these values require complex computations. Vector Quantization avoids coefficient quantization, and hence this scheme is simple to use (Fu, et al., 2018). However, codebook generation in Vector Quantization is much slower, and it operates at low bit rates. Huffman Encoding is more effective for text or program files than for images. Thus, with the aim of overcoming the issues in the existing image compression techniques, there is a need for an appropriate image compression technique with better compression and reconstruction rates.
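To make the variable-length-code idea behind Huffman Encoding concrete, the following is a hedged sketch using Python's standard library (the implementation details are my own, not the paper's): frequent symbols receive short codewords, which is why the technique suits data with skewed symbol statistics, such as text, better than raw image pixels.

```python
import heapq
from collections import Counter

def huffman_codes(data):
    """Build a prefix-free code table: frequent symbols get shorter codes."""
    # Heap entries: [frequency, tie-break index, {symbol: partial code}]
    heap = [[freq, i, {sym: ""}]
            for i, (sym, freq) in enumerate(Counter(data).items())]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate single-symbol input
        return {next(iter(heap[0][2])): "0"}
    while len(heap) > 1:
        lo = heapq.heappop(heap)   # least frequent subtree
        hi = heapq.heappop(heap)   # next least frequent subtree
        # Prepend a bit to every code in each merged subtree.
        merged = {s: "0" + c for s, c in lo[2].items()}
        merged.update({s: "1" + c for s, c in hi[2].items()})
        heapq.heappush(heap, [lo[0] + hi[0], lo[1], merged])
    return heap[0][2]

codes = huffman_codes("aaaabbc")
# The most frequent symbol 'a' gets a codeword no longer than any other.
assert len(codes["a"]) < len(codes["c"])
```

A 7-symbol message like "aaaabbc" thus needs fewer bits than a fixed-length code would use, illustrating the reduction of coding redundancy mentioned earlier in this section.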
The major contributions of this paper are outlined below: