I don’t understand the instructions given here and here.
Could someone offer a step-by-step guide for installing nvCOMP, using the following assumptions and step format (or equivalent)?

System info: how you would do it on your Ubuntu or other Linux machine.

The steps:

- Download "exact_installation_package_name(s)_here".
- If needed, where to place the decompressed installation package, eg, `/usr/local/`.
- If needed, how to run cmake to install nvCOMP (exact code as if running on your computer), eg:

```
cmake -DNVCOMP_EXTS_ROOT=/path/to/nvcomp_exts/${CUDA_VERSION} ..
make -j
```
Questions about the code above (taken from this site): Is `CUDA_VERSION` a literal string or a placeholder for, say, `CUDA_11.4`? Is `CUDA_VERSION` supposed to be a bash variable already defined by the installation package, or is it a variable supposed to be recognisable by the operating system because of some prior CUDA installation? And what is `nvcomp_exts`, or what does it refer to?

- If needed, the code for specifying the path(s) in `~/.bashrc`.
- If needed, how to cmake the sample codes, ie, in which directory to run the terminal and what exact code to run.
- The exact folder-and-code sequence to build and run `high_level_quickstart_example.cpp`, which comes with the installation package, down to the exact line of code.
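For context, here is my (possibly wrong) reading of how `CUDA_VERSION` would behave if it is just an ordinary bash variable that I am expected to set myself; the value `CUDA_11.4` below is only a guess:

```shell
# Guess: CUDA_VERSION is a plain shell variable the user sets beforehand,
# so the path in the cmake flag would expand like this.
export CUDA_VERSION=CUDA_11.4   # assumed value, not from any documentation
echo "/path/to/nvcomp_exts/${CUDA_VERSION}"
# → /path/to/nvcomp_exts/CUDA_11.4
```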
Many thanks.
I will answer my own question.
Here is the system information obtained from the command line:

- `uname -r`: 5.15.0-46-generic
- `lsb_release -a`: Ubuntu 20.04.5 LTS
- `nvcc --version`: Cuda compilation tools, release 10.1, V10.1.243
- `nvidia-smi`: (screenshot omitted)
- `cmake --version`: cmake version 3.22.5
- `make --version`: GNU Make 4.2.1
- `lscpu`: Xeon CPU E5-2680 V4 @ 2.40GHz - 56 CPU(s)

Observation: Although there are two GPUs installed in the server, nvCOMP only works with the RTX.
Step 1: The nvCOMP library

Download the nvCOMP library from https://developer.nvidia.com/nvcomp. The file I downloaded was named `nvcomp_install_CUDA_11.x.tgz`. I left the extracted folder in the `Downloads` directory and renamed it `nvcomp`.
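The extract-and-rename step looks like this from a terminal. Since I cannot ship the real download, the sketch fabricates a stand-in archive first (the lines marked "setup only"); only the last three commands are the actual steps, and the extracted folder's original name is an assumption:

```shell
cd "$(mktemp -d)"                          # work in a scratch directory
# --- setup only: fabricate a stand-in for the real download -------------
mkdir -p nvcomp_install_CUDA_11.x/include
tar -czf nvcomp_install_CUDA_11.x.tgz nvcomp_install_CUDA_11.x
rm -r nvcomp_install_CUDA_11.x
# --- the steps I actually ran in ~/Downloads ----------------------------
tar -xvzf nvcomp_install_CUDA_11.x.tgz     # unpack the release archive
mv nvcomp_install_CUDA_11.x nvcomp         # rename the extracted folder
ls nvcomp
# → include
```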
Step 2: The nvCOMP test package on GitHub

Download it from https://github.com/NVIDIA/nvcomp. Click the green "Code" icon, then click "Download ZIP". By default, the downloaded zip file is called `nvcomp-main.zip`. I left the extracted folder, named `nvcomp-main`, in the `Downloads` directory.
Step 3: The NVIDIA CUB library on GitHub

Download it from https://github.com/nvidia/cub. Click the green "Code" icon, then click "Download ZIP". By default, the downloaded zip file is called `cub-main.zip`. I left the extracted folder, named `cub-main`, in the `Downloads` directory.
There is no "installation" of the CUB library other than making the folder path "known", ie available, to the calling program.
Comments: The nvCOMP GitHub site did not seem to explain that the CUB library was needed to run nvCOMP, and I only found that out from an error message during an attempted compilation of the test files in Step 2.
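At this point three folders should sit side by side in `Downloads`. A quick sanity check before building; `DL` is just my stand-in variable for your own `Downloads` path:

```shell
DL=${DL:-$HOME/Downloads}   # assumption: your Downloads folder
for d in nvcomp nvcomp-main cub-main; do
  # Report each of the Step 1-3 folders as present or absent
  if [ -d "$DL/$d" ]; then echo "found $d"; else echo "missing $d"; fi
done
```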
Step 4: "Building CPU and GPU Examples, GPU Benchmarks provided on Github"
The nvCOMP GitHub landing page has a section with the exact name as this Step. The instructions could have been more detailed.
Step 4.1: cmake

In my `Downloads` directory are the folders `nvcomp` (the Step 1 nvCOMP library), `nvcomp-main` (Step 2), and `cub-main` (Step 3). Start a terminal in `nvcomp-main`, ie, go to `/your-path/Downloads/nvcomp-main`, and run:

```
cmake -DCMAKE_PREFIX_PATH=/your-path/Downloads/nvcomp -DCUB_DIR=/your-path/Downloads/cub-main .
```

This `cmake` step sets up the build files for the next `make` step. During `cmake`, a harmless yellow-colored cmake warning appeared. Otherwise, `cmake` variously stated that it found `Threads`, `nvcomp`, and `ZLIB` (on my system), and it was done with "Configuring" and "Build files have been written".

Step 4.2: make

Run `make` in the same terminal as above. There were no errors during the `make` compilation.

Step 5: Running the examples/benchmarks
Let's run the "built-in" example before running the benchmarks with the (now outdated) Fannie Mae single-family loan performance data from NVIDIA's RAPIDS repository.
Check if there are executables in `/your-path/Downloads/nvcomp-main/bin`. These are the executables created from the `cmake` and `make` steps above.

You can try running these executables on your to-be-compressed files. They are built with different compression algorithms and functionalities, and the name of each executable indicates the algorithm used and/or its functionality.

Some of the executables require the files to be of a certain size, eg, the `benchmark_cascaded_chunked` executable requires the target file's size to be a multiple of 4 bytes. I have not tested all of these executables.
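A small helper for the multiple-of-4-bytes requirement mentioned above, to check a file before feeding it to `benchmark_cascaded_chunked` (the helper name `check_mult4` is my own):

```shell
# check_mult4: report whether a file's size is a multiple of 4 bytes,
# which benchmark_cascaded_chunked requires of its input.
check_mult4() {
  size=$(stat -c%s "$1")          # GNU stat; on macOS use: stat -f%z
  if [ $((size % 4)) -eq 0 ]; then
    echo "ok: $size bytes"
  else
    echo "pad or trim: $size bytes is not a multiple of 4"
  fi
}
```

For example, `check_mult4 /full-path-to-your-target/my-file.bin`.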
Step 5.1: CPU compression examples

For example:

```
time /your-path/Downloads/nvcomp-main/bin/gdeflate_cpu_compression -f /full-path-to-your-target/my-file.txt
```

I ran `gdeflate_cpu_compression` on an updated Fannie Mae loan data file "2002Q1.csv" (11GB). You can do the same with `lz4_cpu_compression` or `lz4_cpu_decompression`.
Step 5.2: The benchmarks with the Fannie Mae files from NVIDIA Rapids
Apart from following the NVIDIA instructions here, it seems the "benchmark" executables in the above "bin" directory can be run with "any" file. Just use the executable in the same way as in Step 5.1 and adhere to the particular executable specifications.
Below is one example following the NVIDIA instruction.
Long story short, the `nvcomp-main` (Step 2) test package contains the files to (i) extract a column of homogeneous data from an outdated Fannie Mae loan data file, (ii) save the extraction in binary format, and (iii) run the benchmark executable(s) on the binary extraction.

The Fannie Mae single-family loan performance data files, old or new, all use "|" as the delimiter. In the outdated Rapids version, the first column, indexed as column "0" in the code (zero-based numbering), contains the 12-digit loan IDs for the loans sampled from the (real) Fannie Mae loan portfolio. In the new Fannie Mae data files from the official Fannie Mae site, the loan IDs are in column 2 and the data files have a `csv` file extension.
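To illustrate the layout only (not the internals of `text_to_binary.py`, which also converts types), here is how the first "|"-delimited column can be pulled out with `awk`; the two sample rows and loan IDs below are made up:

```shell
# Two made-up rows in the old "|"-delimited layout: loan ID, date, rate
f=$(mktemp)
printf '100000000001|2000-01-31|7.5\n100000000002|2000-01-31|7.6\n' > "$f"
# Column "0" in the python script's zero-based numbering is awk's $1
awk -F'|' '{ print $1 }' "$f"
# → 100000000001
# → 100000000002
```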
Download the dataset "1 Year" Fannie Mae data, not the "1GB Splits*" variant, by following the link from here, or by going directly to RAPIDS.

Place the downloaded `mortgage_2000.tgz` anywhere and unzip it with `tar -xvzf mortgage_2000.tgz`.

There are four txt files in `/mortgage_2000/perf`. I will use `Performance_2000Q1.txt` as an example.
- Check if python is installed on the system.
- Check if `text_to_binary.py` is in `/nvcomp-main/benchmarks`.
- Start a terminal (anywhere).
- As shown below, use the python script to extract the first column, indexed "0", in format `long` (or `string`), from `Performance_2000Q1.txt`, and put the `.bin` output file somewhere.

```
time python /your-path/Downloads/nvcomp-main/benchmarks/text_to_binary.py /your-other-path-to/mortgage_2000/perf/Performance_2000Q1.txt 0 long /another-path/2000Q1-col0-long.bin
time python /your-path/Downloads/nvcomp-main/benchmarks/text_to_binary.py /your-other-path-to/mortgage_2000/perf/Performance_2000Q1.txt 0 string /another-path/2000Q1-col0-string.bin
```
Run the benchmarking executables with the target bin files as shown at the bottom of the web page of the NVIDIA official guide:

```
/your-path/Downloads/nvcomp-main/bin/benchmark_hlif lz4 -f /another-path/2000Q1-col0-long.bin
```
Step 5.3: The high_level_quickstart_example and low_level_quickstart_example

These executables are also in `/nvcomp-main/bin`. For example, run `high_level_quickstart_example` without any input arguments. Please see the corresponding C++ source code in `/nvcomp-main/examples` and see the official nvCOMP guides on GitHub.

This could be another long thread, but let's keep it short. Note that NVIDIA used various A-series cards for its benchmarks and I used a GeForce RTX 3060.
Speed: R's `data.table` took 25.648 seconds to do the same task.

Compression ratio: compare running `benchmark_hlif lz4 -f 2000Q1-col0-string.bin` with the python output vs running `benchmark_hlif lz4 -f 2000Q1-col0-string.txt` with the R output.

Overall performance, accounting for file size and memory limits:
Use of the nvCOMP library is limited by GPU memory: no more than 12GB on the RTX 3060 tested. And depending on the compression algorithm, an 8GB target file can easily trigger a stop with `cudaErrorMemoryAllocation: out of memory`.

In both speed and compression ratio, `pigz` trumped the tested nvCOMP executables when the target files were the new Fannie Mae data files containing 108 columns of strings and numbers.