1. INTRODUCTION
A digital image is a brightness function whose values correspond to pixel intensities. This representation involves a large amount of data, so the requirements for storage space, computing power, and communication bandwidth are very high. Image compression minimizes these requirements by depicting the information in a reduced form (Gonzalez, 2004). The capacity of a compression technique to decrease the data size is called the compression ratio. Removing redundant data yields lossless compression, while additionally removing irrelevant data yields lossy compression (Holtz, 1993). Lossy compression techniques achieve relatively higher compression ratios than lossless ones; compression ratio and reconstructed image
quality is always a tradeoff.
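As a hedged illustration (not from the paper), the compression ratio can be computed as the original size divided by the compressed size; a minimal sketch using Python's standard `zlib` codec on highly redundant sample data:

```python
import zlib

# Sample data with high redundancy (a repeated pattern compresses well).
original = b"abcabcabc" * 1000

# Compress losslessly with zlib (DEFLATE).
compressed = zlib.compress(original)

# Compression ratio: original size divided by compressed size.
ratio = len(original) / len(compressed)
print(f"original: {len(original)} bytes, compressed: {len(compressed)} bytes")
print(f"compression ratio: {ratio:.1f}:1")

# Lossless: decompression recovers the data exactly.
assert zlib.decompress(compressed) == original
```

The final assertion demonstrates the lossless property: the reconstructed bytes are bit-for-bit identical to the original.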
Nowadays, with the rise in mobile phone popularity, images have become an important form of record. Image compression is required for storing and processing large numbers of such images. Depending on the requirements for data preservation and the accuracy of the reconstructed data, data compression techniques can be divided into lossless and lossy compression. The main objective of lossless image compression is to compress the data without sacrificing its originality: the reconstructed data is identical to the original. It is suited primarily to compression of text, medical imaging, law forensics, military imagery, satellite imaging, etc. In lossy compression the reconstructed data is an acceptable approximation of the original; a higher compression ratio can be achieved, and it is applicable to compression of natural images, audio, video, etc. (Hosseini, 2012).
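The lossy principle above can be sketched with uniform quantization of pixel values — a hedged illustration, not a method from the paper. Coarser quantization steps shrink the symbol alphabet (helping a later entropy coder) at the cost of bounded reconstruction error:

```python
# Lossy compression sketch: uniform quantization of 8-bit pixel values.
# Coarser steps give higher compression (fewer distinct symbols) but
# larger reconstruction error, illustrating the quality tradeoff.

def quantize(pixels, step):
    """Map each pixel to the centre of its quantization bin."""
    return [min(255, (p // step) * step + step // 2) for p in pixels]

pixels = [0, 13, 14, 15, 100, 101, 200, 255]
reconstructed = quantize(pixels, step=16)

# The reconstruction is only an approximation of the original ...
errors = [abs(p - r) for p, r in zip(pixels, reconstructed)]
print("reconstructed:", reconstructed)
print("max error:", max(errors))  # bounded by step // 2

# ... but the number of distinct symbols shrinks, which is what lets
# a subsequent entropy coder reach a higher compression ratio.
print("distinct values:", len(set(pixels)), "->", len(set(reconstructed)))
```

The error is bounded by half the quantization step, so quality degrades gracefully and controllably as the step grows.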
There is always a limit to the compression ratio that can be achieved by lossless compression (Rehman, 1952). According to Shannon, the amount of information content in the data (its entropy) can be used to find the theoretical maximum compression ratio for lossless compression techniques. Lossy techniques, on the other hand, can compress data to as little as 10 percent of its actual size, and they require less complex encoders and decoders than lossless techniques.
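The entropy bound mentioned above can be sketched as follows (an illustration under a memoryless, i.i.d. symbol model — an assumption, not a claim from the paper): Shannon entropy H, in bits per byte, is the minimum average code length for lossless coding, so 8/H bounds the achievable lossless compression ratio for byte data.

```python
import math
from collections import Counter

def shannon_entropy(data: bytes) -> float:
    """Shannon entropy in bits per byte: H = -sum(p * log2(p))."""
    counts = Counter(data)
    n = len(data)
    return -sum((c / n) * math.log2(c / n) for c in counts.values())

data = b"aaaabbbbccdd" * 100  # skewed symbol distribution
h = shannon_entropy(data)

# Theoretical maximum lossless compression ratio under a memoryless
# model: 8 bits per byte divided by H bits per byte.
max_ratio = 8 / h
print(f"entropy: {h:.3f} bits/byte, max lossless ratio ~ {max_ratio:.2f}:1")
```

Real coders can exceed this per-symbol bound only by exploiting inter-symbol structure (context modelling, transforms), which is precisely why entropy is the reference point rather than a hard ceiling for all models.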
The Shannon entropy concept is explored in this paper to point out different possibilities for increasing the compression ratio to its maximum extent. The paper also discusses various concepts related to compression techniques. One alternative for dealing with the tradeoff between image quality and compression ratio is to opt for near-lossless compression, where the difference between the original and reconstructed data is kept within a user-specified amount called the maximum absolute distortion (MAD). This may be suitable for compression of medical images, hyperspectral images, videos, etc.
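The MAD criterion can be expressed directly as a per-sample check — a minimal sketch, assuming integer sample values (the function name and sample data are illustrative, not from the paper):

```python
def within_mad(original, reconstructed, mad):
    """Near-lossless criterion: every reconstructed sample must differ
    from the original by at most the maximum absolute distortion (MAD)."""
    return all(abs(o - r) <= mad for o, r in zip(original, reconstructed))

original      = [10, 52, 200, 131, 7]
reconstructed = [11, 50, 201, 129, 7]   # small, bounded errors

print(within_mad(original, reconstructed, mad=2))  # True
print(within_mad(original, reconstructed, mad=1))  # False (|52 - 50| = 2)
```

Setting `mad=0` recovers the lossless case, which makes near-lossless compression a tunable bridge between the two regimes.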
In addition to storage space requirements and processing-time overhead, all users on a given network are encouraged to minimize the size of their data and use the network resources optimally (Kavitha, 2016). Since compression is both time-effective and cost-effective, it helps share network resources and enhances network performance.
2. MACHINE LEARNING METHODS AND FEATURE
IMPORTANCE
In 1999, Holtz gave a review of lossless image compression techniques, stating, "Theories are usually the starting point of any new technology." The review explains several lossless compression methods, namely Shannon's theory, the Huffman code, the Lempel-Ziv (LZ) code, and data trees for self-learning autosophy. Hosseini
published another review in 2012, which discussed many algorithms along with their performance and
applications, including the Huffman algorithm, the Run-Length Encoding (RLE) algorithm, the LZ algorithm,
the arithmetic coding algorithm, JPEG, and MPEG.
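To make one of the reviewed methods concrete, a minimal sketch (not the paper's or either review's implementation) of Run-Length Encoding, which replaces each run of repeated symbols with a (symbol, count) pair:

```python
def rle_encode(data: str):
    """Run-Length Encoding: collapse each run into a (symbol, count) pair."""
    runs = []
    for ch in data:
        if runs and runs[-1][0] == ch:
            runs[-1][1] += 1          # extend the current run
        else:
            runs.append([ch, 1])      # start a new run
    return [(c, n) for c, n in runs]

def rle_decode(runs):
    """Inverse: expand each (symbol, count) pair back into a run."""
    return "".join(c * n for c, n in runs)

encoded = rle_encode("aaaabbbcca")
print(encoded)  # [('a', 4), ('b', 3), ('c', 2), ('a', 1)]
assert rle_decode(encoded) == "aaaabbbcca"  # lossless round trip
```

RLE only pays off on data with long runs (e.g. binary or palette images); on data without runs it can expand the input, which is why reviews pair it with entropy coders such as Huffman coding.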
https://doi.org/10.17993/3ctecno.2022.v11n2e42.38-49
3C Tecnología. Glosas de innovación aplicadas a la pyme. ISSN: 2254-4143
Ed. 42 Vol. 11 N.º 2 August - December 2022