# I. Introduction

The encoder generates a set of symbols when a two-dimensional image f(x, y) is given as input. These symbols are transmitted through a channel, and the encoded image is delivered to the decoder, which generates a reconstructed image f'(x, y). With lossless compression, the output f'(x, y) is an exact replica of f(x, y); otherwise, some distortion is present in the reconstructed image [1].

JPEG, named after the Joint Photographic Experts Group committee that shaped the standard, is a well-known lossy compression scheme. A JPEG-compressed image not only uses less memory, but the reconstructed image also appears very similar to the original: although the quality is reduced, the image looks nearly identical, because the JPEG algorithm discards high-frequency components that the human eye cannot perceive.

# a) JPEG Algorithm

The JPEG algorithm involves the following steps.

1. Divide the acquired image into 8-pixel by 8-pixel blocks. If the image dimensions are not exact multiples of 8, pad the empty pixels around the edges with zeros [1].

If an 8x8 block contains a lot of variation in its pixel values, the number of significant DCT coefficients becomes large; otherwise only the first few DCT coefficients are significant while the others are zeros. When a smoothing filter is applied, the image is smoothed and the variation of the pixel values within a block is reduced [1].

# II. Proposed JPEG Compression Algorithms

Because smoothing reduces the variation within each 8x8 block, and hence the number of significant DCT coefficients, there are two different ways to incorporate the filter into the JPEG algorithm.
1) Before segregating the image into 8x8 blocks, the image corrupted with Poisson, speckle, salt & pepper, or Gaussian noise is convolved with the alpha-trimmed mean filter.
2) Before the application of the normalization matrix, the image is convolved with the alpha-trimmed mean filter.

This paper compares the proposed approaches with standard JPEG compression; the proposed methods show better results than JPEG in terms of encoded bits. The algorithms are implemented with MATLAB tools, and the test images are taken from the SIPI image database.

Algorithm 1: Alpha-trimmed-mean-based JPEG algorithm on noisy images.
Step 1: Read the image.
Step 2: Apply the alpha-trimmed mean smoothing operator.
Step 3: Apply standard JPEG compression [7, 8, 9].

# III. Implementation of the Proposed JPEG Algorithms

In this paper, alpha-trimmed-mean-based JPEG compression is executed on images of different sizes. The results confirm that the proposed compression techniques are a prominent alternative, since they perform better on image quality metrics such as PSNR, MSE, AD, SC, and compression ratio. The compression ratio is defined as

CR = N1/N2

where N1 is the number of information-carrying units required to represent the uncompressed dataset and N2 is the number of units in the encoded dataset; N1 and N2 are expressed in the same units.

Algorithm 2:
Step 1: Read the image.
Step 2: Add speckle/Poisson/Gaussian/salt & pepper noise.
Step 3: Apply the alpha-trimmed mean filter.
Step 4: Apply standard JPEG compression [8, 9].

With lossless compression algorithms the reconstructed image is identical to the original: they remove only the redundancy present in the data while preserving all the information in the input image. Lossy compression algorithms achieve higher compression because the output image is not identical to the input image.
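The alpha-trimmed mean smoothing step above can be sketched as follows. This is an illustrative Python stand-in for the paper's MATLAB implementation; the 3x3 window, the trim parameter `d`, and the function names are assumptions, not taken from the paper.

```python
def alpha_trimmed_mean(window, d=2):
    """Sort the window, drop the d/2 smallest and d/2 largest values,
    and average the rest. d=0 gives the plain mean; larger d behaves
    more like the median, which is what suppresses impulse noise."""
    s = sorted(window)
    trimmed = s[d // 2 : len(s) - d // 2]
    return sum(trimmed) / len(trimmed)

def smooth(img, d=2):
    """3x3 alpha-trimmed mean filter over a 2-D list of pixel values.
    Border pixels are left unchanged for simplicity."""
    h, w = len(img), len(img[0])
    out = [row[:] for row in img]
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            window = [img[i + di][j + dj]
                      for di in (-1, 0, 1) for dj in (-1, 0, 1)]
            out[i][j] = alpha_trimmed_mean(window, d)
    return out

# A salt & pepper spike of 255 in a flat region of 10s is trimmed away:
print(alpha_trimmed_mean([10, 10, 10, 10, 255, 10, 10, 10, 10]))  # 10.0
```

Trimming the extremes before averaging is why this filter handles salt & pepper impulses better than a plain mean filter while still smoothing Gaussian-like noise.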
Either subjective fidelity criteria or objective fidelity criteria can be used to compare the original and reconstructed images; root mean square (RMS) error is an example of an objective fidelity criterion. Measurement of image quality is an important concern in image processing: many applications require an estimate of the quality of the image, and human judgment alone is not sufficient. Therefore additional metrics such as peak signal-to-noise ratio (PSNR) and mean square error (MSE) are needed. PSNR is one of the most widely used image quality metrics; when the PSNR value is high, the differences between the reconstructed image and the input image are small.

This paper compares the proposed alpha-trimmed-mean-based approaches with standard JPEG compression, and the proposed approaches show improved results. Of the proposed variants, applying the alpha-trimmed mean filter to images corrupted with Poisson noise in Algorithm 1 encodes the images with the fewest bits, so those images can be transmitted at high speed. The alpha-trimmed mean filter reduces the number of encoded bits required to represent the compressed image, and images corrupted with Poisson noise yield a higher compression ratio with the proposed algorithm than with the standard JPEG compression technique.

# IV. Results

This paper presents the evaluation of the proposed alpha-trimmed-mean-based JPEG approaches against standard JPEG compression.
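The metrics reported in the results (MSE, RMS error, PSNR, and CR = N1/N2) can be computed as below. This is an illustrative Python sketch, not the paper's MATLAB code; the function names are my own.

```python
import math

def mse(a, b):
    """Mean square error between two equal-sized 2-D pixel arrays."""
    h, w = len(a), len(a[0])
    return sum((a[i][j] - b[i][j]) ** 2
               for i in range(h) for j in range(w)) / (h * w)

def rms_error(a, b):
    """Root mean square error, an objective fidelity criterion."""
    return math.sqrt(mse(a, b))

def psnr(a, b, peak=255):
    """Peak signal-to-noise ratio in dB for 8-bit images; higher PSNR
    means a smaller difference between the two images."""
    m = mse(a, b)
    return float("inf") if m == 0 else 10 * math.log10(peak ** 2 / m)

def compression_ratio(n1, n2):
    """CR = N1/N2: uncompressed units over encoded units (same units)."""
    return n1 / n2

# e.g. a 512x512 8-bit image (N1 = 2097152 bits) encoded in 160880 bits:
print(round(compression_ratio(512 * 512 * 8, 160880), 1))  # 13.0
```

Note that PSNR is a monotone function of MSE, so ranking compressors by PSNR and by MSE gives the same ordering; the tables report both only for convenience.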
The proposed approaches show improved results compared to JPEG. This paper uses MATLAB tools to implement the proposed algorithms, and the images are taken from the SIPI image database.

# Conclusion

In this paper, an alpha-trimmed-mean-based JPEG compression algorithm is proposed and evaluated against the standard JPEG algorithm. The proposed algorithm uses fewer encoded bits to compress the images, so loading and storing an image takes less time; its mean square error (MSE) is lower, and its peak signal-to-noise ratio (PSNR) higher, than those of standard JPEG. The intended compression ratio can be realized with good image quality. Images corrupted with various types of noise, such as Gaussian, Poisson, speckle, and salt & pepper noise, are compressed efficiently: the proposed alpha-trimmed JPEG compression algorithm suppresses the noise and encodes the image with fewer bits than the standard JPEG compression technique.

![Fig. 1: Structure of the planned JPEG algorithms on images corrupted with various types of noise.](image-2.png)

![Fig. 2: Structure of the planned JPEG algorithms on images corrupted with various types of noise.](image-3.png)

![Figs. 3–8: Comparison between JPEG and the alpha-trimmed mean in terms of the number of bits transmitted for images of size 256×256.](image-4.png)

Quantization of the DCT coefficients with the normalization matrix suppresses the high-frequency components. Next, the significant 2-D normalized DCT coefficients are selected by traversing the block in a zigzag fashion and arranging them in a 1-D array.
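The DCT and zigzag steps just described can be sketched in Python. This is an illustrative stand-in for the paper's MATLAB implementation (the paper does not give this code); it also shows the point made in Section I: a smooth block concentrates all of its energy in the first few coefficients.

```python
import math

def dct2_8x8(block):
    """Naive 2-D DCT-II of one 8x8 block (written for clarity, not speed)."""
    n = 8
    out = [[0.0] * n for _ in range(n)]
    for u in range(n):
        for v in range(n):
            cu = math.sqrt((1 if u == 0 else 2) / n)
            cv = math.sqrt((1 if v == 0 else 2) / n)
            s = sum(block[x][y]
                    * math.cos((2 * x + 1) * u * math.pi / (2 * n))
                    * math.cos((2 * y + 1) * v * math.pi / (2 * n))
                    for x in range(n) for y in range(n))
            out[u][v] = cu * cv * s
    return out

def zigzag(block):
    """Serialize an n x n block along anti-diagonals with alternating
    direction: element 0 is the DC term, the remaining ones are AC terms."""
    n = len(block)
    order = []
    for s in range(2 * n - 1):          # s = row + col is constant per diagonal
        diag = [(i, s - i) for i in range(n) if 0 <= s - i < n]
        order.extend(diag if s % 2 else reversed(diag))
    return [block[i][j] for i, j in order]

# A perfectly smooth block: all energy lands in the single DC coefficient,
# so the zigzag sequence is one large value followed by 63 (near-)zeros.
flat = [[128] * 8 for _ in range(8)]
zz = zigzag(dct2_8x8(flat))
print(round(zz[0]))                        # 1024  (DC term)
print(all(abs(c) < 1e-9 for c in zz[1:]))  # True  (AC terms vanish)
```

The zigzag order matters because it sorts coefficients roughly by frequency, so the trailing run of zeros in the 1-D array becomes as long as possible, which is exactly what the subsequent entropy coding exploits.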
In the 1-D array there are two types of DCT coefficients: the first is termed the direct-current (DC) element, while the other coefficients are called alternating-current (AC) elements. Variable-length Huffman coding is used to code the AC components.

6. Decompression is the reverse of compression. First, the normalized DCT values are recovered by decoding the compressed bit stream with the Huffman code. The DCT values are then arranged in a 2-D array in the same zigzag fashion. The denormalized DCT values are obtained by multiplying them by the normalization coefficients, and an IDCT is executed on the denormalized DCT array. Because the compression is lossy, each decoded image block is not identical to the corresponding original block used during encoding.

Table 1:

| Images | | | | |
|---|---|---|---|---|
| No. of bits required | 38915 | 35567 | 40756 | 48505 |
| Saved bits | 485373 | 488721 | 483532 | 475483 |
| RMS error | 1.99 | 2.14 | 2.16 | 2.95 |
| Compression ratio | 13.47 | 14.74 | 12.86 | 10.8 |
| PSNR | 42.19 | 41.55 | 41.48 | 38.76 |
| MSE | 3.96 | 4.58 | 4.66 | 8.71 |

Table 2:

| Images | 5.2.08 | 5.2.10 | 7.1.03 | 7.1.05 |
|---|---|---|---|---|
| No. of bits required | 160880 | 185945 | 151629 | 171235 |
| Saved bits | 1936272 | 1911207 | 1945523 | 1925917 |
| RMS error | 1.98 | 2.15 | 1.92 | 1.97 |
| Compression ratio | 13.03 | 11.27 | 13.83 | 12.24 |
| PSNR | 48.27 | 47.54 | 48.52 | 48.30 |
| MSE | 3.91 | 4.61 | 3.68 | 3.88 |

© 2017 Global Journals Inc. (US)
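As a minimal illustration of the variable-length Huffman coding used for the AC components above: the sketch below builds an ad-hoc Huffman tree from symbol frequencies. This is a simplification I am introducing for exposition; the actual JPEG standard uses predefined run-length/size Huffman tables, not a per-image tree.

```python
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code assigning shorter bit strings to more
    frequent symbols (simplified; real JPEG uses canonical tables)."""
    freq = Counter(symbols)
    if len(freq) == 1:                       # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    # Heap entries carry a unique tie-break index so dicts are never compared.
    heap = [(f, i, {s: ""}) for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    i = len(heap)
    while len(heap) > 1:
        f1, _, c1 = heapq.heappop(heap)      # two least frequent subtrees
        f2, _, c2 = heapq.heappop(heap)
        merged = {s: "0" + c for s, c in c1.items()}
        merged.update({s: "1" + c for s, c in c2.items()})
        heapq.heappush(heap, (f1 + f2, i, merged))
        i += 1
    return heap[0][2]

# Quantized AC coefficients are mostly zeros, so '0' gets the shortest code
# and the encoded stream shrinks well below 8 bits per coefficient.
ac = [0] * 55 + [3, -1, 2, 0, 1, -2, 1, 5]
code = huffman_code(ac)
bits = "".join(code[s] for s in ac)
print(len(code[0]) == 1)       # True: the dominant zero symbol gets 1 bit
print(len(bits) < 63 * 8)      # True: far fewer bits than fixed-length coding
```

Because the code is prefix-free, the decoder in step 6 can walk the bit stream symbol by symbol without any delimiters, which is what makes Huffman decoding of the compressed stream unambiguous.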
Table 3:

| Images | 5.1.09 | 5.1.11 | 5.1.12 | 5.1.13 |
|---|---|---|---|---|
| No. of bits required | 33233 | 32096 | 37968 | 52879 |
| Saved bits | 491055 | 492192 | 486320 | 471409 |
| RMS error | 1.62 | 2.21 | 1.96 | 2.66 |
| Compression ratio | 15.77 | 16.33 | 13.80 | 9.91 |
| PSNR | 43.95 | 41.28 | 42.30 | 39.67 |
| MSE | 2.64 | 4.88 | 3.86 | 7.07 |

Table 4:

| Images | 5.1.09 | 5.1.11 | 5.1.12 | 5.1.13 |
|---|---|---|---|---|
| No. of bits required | 131762 | 128052 | 130105 | 112493 |
| Saved bits | 392526 | 396236 | 394183 | 411795 |
| RMS error | 8.30 | 8.21 | 8.14 | 7.18 |
| Compression ratio | 3.97 | 4.09 | 4.02 | 4.66 |
| PSNR | 29.79 | 29.88 | 29.95 | 31.04 |
| MSE | 68.84 | 67.39 | 66.25 | 51.56 |

| Images | 5.2.08 | 5.2.10 | 7.1.03 | 7.1.05 |
|---|---|---|---|---|
| No. of bits required | 134461 | 170584 | 124837 | 151040 |
| Saved bits | 1962691 | 1926568 | 1972315 | 1946112 |
| RMS error | 1.55 | 1.78 | 1.46 | 1.54 |
| Compression ratio | 15.59 | 12.29 | 16.79 | 13.88 |
| PSNR | 50.37 | 49.17 | 50.87 | 50.45 |
| MSE | 2.41 | 3.17 | 2.14 | 2.37 |

| Images | 5.2.08 | 5.2.10 | 7.1.03 | 7.1.05 |
|---|---|---|---|---|
| No. of bits required | 534501 | 571916 | 523400 | 543731 |
| Saved bits | 1562651 | 1525236 | 1573752 | 1553421 |
| RMS error | 8.24 | 8.22 | 8.28 | 8.29 |
| Compression ratio | 3.92 | 3.66 | 4.00 | 3.85 |
| PSNR | 35.86 | 35.89 | 35.82 | 35.82 |
| MSE | 67.97 | 67.49 | 68.57 | 68.64 |

| Images | 5.1.09 | 5.1.11 | 5.1.12 | 5.1.13 |
|---|---|---|---|---|
| No. of bits required | 32309 | 27980 | 34650 | 46407 |
| Saved bits | 492249 | 496308 | 489638 | 477881 |
| RMS error | 1.56 | 1.82 | 1.92 | 2.46 |
| Compression ratio | 16.36 | 18.73 | 15.13 | 11.29 |
| PSNR | 44.33 | 42.96 | 42.50 | 40.35 |
| MSE | 2.42 | 3.31 | 3.69 | 6.04 |

| Images | 5.2.08 | 5.2.10 | 7.1.03 | 7.1.05 |
|---|---|---|---|---|
| No. of bits required | 134490 | 170840 | 124478 | 150890 |
| Saved bits | 1962662 | 1926312 | 1972674 | 1946262 |
| RMS error | 1.53 | 1.82 | 1.46 | 1.54 |
| Compression ratio | 15.59 | 12.27 | 16.84 | 13.89 |
| PSNR | 50.52 | 49.00 | 50.90 | 50.44 |
| MSE | 2.33 | 3.30 | 2.13 | 2.37 |

Table 5:

| Images | 5.1.09 | 5.1.11 | 5.1.12 | 5.1.13 |
|---|---|---|---|---|
| No. of bits required | 31807 | 28385 | 34746 | 46210 |
| Saved bits | 492481 | 495903 | 489542 | 478078 |
| RMS error | 1.56 | 1.86 | 1.86 | 2.49 |
| Compression ratio | 16.48 | 18.47 | 15.08 | 11.34 |
| PSNR | 44.28 | 42.76 | 42.78 | 40.22 |
| MSE | 2.44 | 3.47 | 3.46 | 6.22 |

Table 17:

| Images | 5.1.09 | 5.1.11 | 5.1.12 | 5.1.13 |
|---|---|---|---|---|
| No. of bits required | 29370 | 22971 | 30094 | 45560 |
| Saved bits | 494918 | 501317 | 494194 | 478728 |
| RMS error | 1.35 | 1.68 | 1.48 | 2.17 |
| Compression ratio | 17.85 | 22.8 | 17.42 | 11.5 |
| PSNR | 45.58 | 43.68 | 44.78 | 41.44 |
| MSE | 1.81 | 2.81 | 2.18 | 4.71 |

Table 18:

| Images | 5.2.08 | 5.2.10 | 7.1.03 | 7.1.05 |
|---|---|---|---|---|
| No. of bits required | 126602 | 168049 | 115194 | 147353 |
| Saved bits | 1970550 | 1929103 | 1981958 | 1949799 |
| RMS error | 1.35 | 1.74 | 1.29 | 1.43 |
| Compression ratio | 16.56 | 12.47 | 18.2 | 14.23 |
| PSNR | 51.57 | 49.37 | 51.97 | 51.05 |
| MSE | 1.83 | 3.03 | 1.67 | 2.06 |

Table 19: JPEG compression on images of size 256×256.

| Images | 5.1.09 | 5.1.11 | 5.1.12 | 5.1.13 |
|---|---|---|---|---|
| No. of bits required | 60840 | 40534 | 50289 | 65622 |
| Saved bits | 463448 | 483754 | 473999 | 458666 |
| RMS error | 4.25 | 2.26 | 3.04 | 3.6 |
| Compression ratio | 8.61 | 12.93 | 10.42 | 7.98 |
| PSNR | 35.59 | 41.10 | 38.50 | 37.5 |
| MSE | 18.10 | 5.09 | 9.26 | 12.94 |

Table 20:

| Images | 5.2.08 | 5.2.10 | 7.1.03 | 7.1.05 |
|---|---|---|---|---|
| No. of bits required | 246431 | 363397 | 243255 | 298239 |
| Saved bits | 1850721 | 1733755 | 1853897 | 1798913 |
| RMS error | 3.48 | 5.39 | 3.8 | 4.7 |
| Compression ratio | 8.51 | 5.77 | 8.62 | 7.03 |
| PSNR | 43.35 | 39.55 | 42.58 | 40.74 |
| MSE | 12.11 | 29.09 | 14.46 | 22.11 |

# References

1. Olivier Egger and Wei Li, "Very Low Bit Rate Image Coding Using Morphological Operators and Adaptive Decompositions," IEEE International Conference on Image Processing, vol. 3, Nov. 1994.
2. Ricardo L. de Queiroz, "Processing JPEG-Compressed Images and Documents," IEEE Transactions on Image Processing, vol. 7, no. 12, December 1998.
3. Ravi Prakash, Joan L. Mitchell, and David A. Stepneski, "Enhanced JPEG Compression of Documents," IEEE International Conference on Image Processing, vol. 3, Oct. 2001.
4. Bai Xiangzhi and Zhou Fugen, "Edge Detection Based on Mathematical Morphology and Iterative Thresholding," IEEE International Conference on Image Processing, vol. 2, Nov. 2006.
5. G. Sreelekha and P. S. Sathidevi, "An Improved JPEG Compression Scheme Using Human Visual System Model," IEEE, June 2007.
6. G. Srinivas and P. V. G. D. Prasad Reddy, "An N-Square Approach for Reduced Complexity Non-Binary Encoding," GJCST, vol. XI, issue XI.
7. "Centric JPEG Compression for an Objective Image Quality Enhancement of Noisy Images," Springer International Conference on Smart Computing and Its Application, pp. 143–152, 2017.
8. Ch. Ramesh, N. B. Venkateswarlu, and J. V. Murthy, "Filter Augmented JPEG Compressions," IJCA, vol. 60, no. 17, Dec. 2012.
9. Marlapalli Krishna, G. Srinivas, and P. V. G. D. Prasad Reddy, "Image Smoothening and Morphological Operators Based JPEG Compression," JATIT, vol. 85, no. 3, March 2016.
10. Marlapalli Krishna, P. V. G. D. Prasad Reddy, and G. Srinivas, "A Smoothening Based JPEG Compression for an Objective Image Quality Enhancement of Regular and Noisy Images," IJAER, vol. 11, no. 6, 2016.
11. G. Srinivas, P. Srinivasu, T. Rao, and Ch. Ramesh, "Harmonic and Contra Harmonic Mean"