MPC v1.2

MPC is a GPU-based lossless compressor/decompressor written in CUDA for binary IEEE 754 32-bit single-precision (float) and 64-bit double-precision (double) floating-point data. It outperforms many other lossless compression algorithms in both compression ratio and speed, including when compressing the weights of DNNs.

The source code is available for download, as is a description of MPC. Sample little-endian double-precision and single-precision datasets are also available. Note that MPC is protected by the license included at the beginning of the code.

Important notice: The provided code is not necessarily meant to be used as is. (While it should work correctly, it is slow due to PCIe data transfers.) Rather, it is meant as an example of how to invoke the compression and decompression kernels directly from your own code.
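The PCIe overhead disappears when the data already resides in device memory. The following sketch illustrates that pattern; the kernel name MPCcompress, its signature, and the launch configuration are placeholders, not the actual API — consult the downloaded source for the real kernel declarations.

```cuda
// Hypothetical sketch only: MPCcompress and its parameters are placeholders.
// The point is that data produced on the GPU can be compressed in place,
// without any host<->device copies.
#include <cuda_runtime.h>

void compressOnDevice(const float* d_input, int n)
{
  unsigned char* d_output;
  int* d_outSize;
  // Worst case: the compressed output is no larger than the input.
  cudaMalloc(&d_output, n * sizeof(float));
  cudaMalloc(&d_outSize, sizeof(int));

  // Launch the compression kernel directly on device-resident data
  // (placeholder launch; see the MPC source for the real signature):
  // MPCcompress<<<blocks, threads>>>(d_input, n, d_output, d_outSize);

  cudaFree(d_output);
  cudaFree(d_outSize);
}
```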

The source code can be compiled as follows (substitute the name of the downloaded .cu file if it differs):

nvcc -O3 -arch=sm_35 MPC_float.cu -o MPC_float
nvcc -O3 -arch=sm_35 MPC_double.cu -o MPC_double

For GPUs with a higher compute capability, sm_35 should be adjusted accordingly (e.g., -arch=sm_70 for a compute capability 7.0 device).

To compress the single-precision file single.bin with a dimensionality of 1, enter:

./MPC_float single.bin 1

This generates a compressed file called single.bin.mpc. To decompress this file, enter:

./MPC_float single.bin.mpc

This, in turn, generates a decompressed file containing the original data.

Note that the input files must be a multiple of 4 bytes long for single-precision data and a multiple of 8 bytes long for double-precision data, and must contain nothing but binary floating-point values. Only little-endian systems are currently supported.


A. Yang, H. Mukka, F. Hesaaraki, and M. Burtscher. "MPC: A Massively Parallel Compression Algorithm for Scientific Data." IEEE Cluster Conference. September 2015.

Note that version 1.2 of the code is much faster and compresses slightly better than the version that is described in the above paper and presentation.

This work has been supported in part by the National Science Foundation, Texas State University, and by equipment donations from Nvidia Corporation.
