I want to compare TensorFlow installed via native pip install with TensorFlow built from source, in terms of computation speed.
Basically, I first followed this blog post:
However, there were two additional issues to solve:
- Need to use Bazel 0.5.2 instead of the latest version; the latest version failed to build at the time I tried.
- sudo apt-get purge bazel
- Go to this link to install 0.5.2.
- If facing the error “ImportError: libcuda.so.1: cannot open shared object file: No such file or directory”, follow these steps:
- sudo apt install nvidia-361-dev
- sudo find /usr/ -name 'libcuda.so.1' (this tells you the path of libcuda.so.1). If it is already inside the /usr/local/cuda directory, there is no need for the third step.
- Otherwise, copy libcuda.so.1 into the /usr/local/cuda directory.
- Most importantly, reboot your machine!
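After the reboot, a quick way to confirm that the dynamic loader can now find the CUDA driver library is to try loading it from Python. This is a generic check I am adding for convenience, not a step from the original post:

```python
import ctypes

# Try to load the CUDA driver library the same way TensorFlow's GPU
# runtime would. An OSError here means the loader still cannot find
# libcuda.so.1 on its search path.
try:
    ctypes.CDLL("libcuda.so.1")
    print("libcuda.so.1: OK")
except OSError as err:
    print("libcuda.so.1: still not found ->", err)
```

If this still fails after copying the file, double-check that the directory you copied into is on the loader's search path (e.g. listed under /etc/ld.so.conf.d/ or in LD_LIBRARY_PATH).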
If everything goes well, you can test your GPU as in this link.
I did a comparison between two VMs on Google Cloud – one using ‘pip install’ and the other built from source – with the same hardware configuration and everything else identical. I ran a basic CNN model on the CIFAR10 image set. When using the GPU, there was no difference in computation time. However, when I switched off the GPU and used only the CPU for computation, the build-from-source version reduced processing time by around 25%–30%.
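For timing comparisons like this, the measurement can be sketched with a small generic harness. This is not my exact benchmark script; the workload below is a placeholder where one training step of the CIFAR10 CNN would go:

```python
import time

def avg_seconds_per_run(step_fn, warmup=2, iters=10):
    """Average wall-clock time of a callable, discarding warm-up runs.

    For the CPU-only comparison, hide the GPU by setting
    os.environ["CUDA_VISIBLE_DEVICES"] = "-1" before importing TensorFlow.
    """
    for _ in range(warmup):
        step_fn()
    start = time.perf_counter()
    for _ in range(iters):
        step_fn()
    return (time.perf_counter() - start) / iters

# Stand-in workload; in the real comparison this would be one
# training step of the CNN model.
t = avg_seconds_per_run(lambda: sum(i * i for i in range(50000)))
print(f"avg: {t:.6f} s/iter")
```

Discarding the first few runs matters here, since the first iteration pays one-off costs (graph construction, memory allocation, CUDA context creation) that would distort the comparison.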