Authors - Rohini Hongal, Rahil Sanadi, Supriya Katwe, Rajeshwari M.

Abstract - Image compression reduces digital image file sizes while preserving visual quality. Interval arithmetic is a mathematical technique that operates on ranges of values rather than precise numbers, enabling robust computation across a variety of applications. Vector quantization is a data compression method that groups similar data points so they can be represented efficiently. This study develops an advanced image compression method that integrates Convolutional Neural Networks (CNNs) with Interval-Arithmetic Vector Quantization (IAVQ), and it examines and validates the practical relevance of attribute preservation. The proposed framework involves two main stages: training and compression. During the training stage, a CNN is trained to learn feature representations that encapsulate important image characteristics such as contrast and luminance intensity. During the compression stage, the trained CNN extracts features from input images, and interval-arithmetic-based quantization maps these features to predefined quantization intervals, considering interval attributes such as sum, difference, and product. The study thoroughly compares normal compression with interval-arithmetic compression, and the latter shows promising results: it consistently yields superior outcomes in pixels per second, compressed file size, and PSNR. Across all tested image types, the IAVQ method achieves nearly 20 per cent higher compression quality, processes 0.5 to 0.9 more pixels per second, lowers PSNR by 0.7 to 2.3 dB, and saves 11 to 19 KB of storage compared to the standard method.
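The abstract describes mapping CNN features to predefined quantization intervals using interval-arithmetic attributes (sum, difference, product). The sketch below is an illustrative assumption of how those pieces could fit together, not the authors' actual implementation: the interval operations follow standard interval-arithmetic rules, and the interval boundaries and the nearest-midpoint quantization rule are hypothetical.

```python
# Minimal sketch of interval-arithmetic operations and interval-based
# quantization. All interval choices and function names are illustrative.

def interval_add(a, b):
    """Sum of two intervals [a0, a1] + [b0, b1]."""
    return (a[0] + b[0], a[1] + b[1])

def interval_sub(a, b):
    """Difference of two intervals: subtract the upper/lower bounds crosswise."""
    return (a[0] - b[1], a[1] - b[0])

def interval_mul(a, b):
    """Product of two intervals: min/max over all endpoint products."""
    products = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(products), max(products))

def quantize(value, intervals):
    """Map a scalar feature to the midpoint of the nearest interval."""
    best = min(intervals, key=lambda iv: abs(value - (iv[0] + iv[1]) / 2))
    return (best[0] + best[1]) / 2

# Hypothetical quantization intervals over a normalized feature range [0, 1].
intervals = [(0.0, 0.25), (0.25, 0.5), (0.5, 0.75), (0.75, 1.0)]
print(quantize(0.6, intervals))  # 0.625
```

In a full pipeline, each CNN feature value would be replaced by its interval representative, and the interval operations above would let later stages propagate bounds on the quantization error.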