I use a hack/workaround to avoid having to build the whole TF library myself. This saves time (setup takes about 3 minutes), disk space, and the dev dependencies, and it keeps the resulting binary small. It's officially unsupported, but it works well if you just want to jump in quickly.
Install TF through pip (`pip install tensorflow` or `pip install tensorflow-gpu`). Then find its library `_pywrap_tensorflow.so` (TF 0.x-1.0) or `_pywrap_tensorflow_internal.so` (TF 1.1+). In my case (Ubuntu) it's located at `/usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so`.
Then create a symlink to this library called `lib_pywrap_tensorflow.so` somewhere your build system can find it (e.g. `/usr/local/lib`). The `lib` prefix is important! You can also give it another `lib*.so` name: if you call it `libtensorflow.so`, you may get better compatibility with other programs written to work with TF.
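In shell, the whole step is roughly the following (the library path is the one from my Ubuntu example above; adjust it to your Python version and TF version):

```shell
# Path of the pip-installed TF library (TF <= 1.0 name; on TF 1.1+ the file
# is _pywrap_tensorflow_internal.so). Adjust to your system.
TF_SO="/usr/local/lib/python2.7/dist-packages/tensorflow/python/_pywrap_tensorflow.so"

# The "lib" prefix is mandatory, otherwise -l_pywrap_tensorflow cannot find it.
# Run as root (or prefix with sudo).
ln -sf "$TF_SO" /usr/local/lib/lib_pywrap_tensorflow.so
ldconfig  # refresh the dynamic linker cache
```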
Then create a C++ project as you are used to (CMake, Make, Bazel, whatever you like).
And then you're ready to just link against this library to have TF available for your project (note that you also have to link against the `python2.7` libraries). In CMake, for example, you just add `target_link_libraries(target _pywrap_tensorflow python2.7)`.
The C++ header files are located next to this library, e.g. in `/usr/local/lib/python2.7/dist-packages/tensorflow/include/`.
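Putting the link and include steps together, a minimal `CMakeLists.txt` could look like this (the project name, the target name, and the Python 2.7 paths are assumptions; adjust them to your setup):

```cmake
cmake_minimum_required(VERSION 2.8)
project(tf_inference)

# Root of the pip-installed TF package (assumed path; adjust to your system).
set(TF_PYTHON_DIR /usr/local/lib/python2.7/dist-packages/tensorflow)

# Headers ship inside the pip package.
include_directories(${TF_PYTHON_DIR}/include)

add_executable(tf_inference main.cpp)

# _pywrap_tensorflow resolves to the lib_pywrap_tensorflow.so symlink;
# python2.7 is needed because the library depends on it.
target_link_libraries(tf_inference _pywrap_tensorflow python2.7)
```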
Once again: this way is officially unsupported and you may run into various issues. The library seems to be statically linked against e.g. protobuf, so you may hit odd link-time or run-time issues. But I am able to load a stored graph, restore the weights and run inference, which is IMO the most wanted functionality in C++.
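For illustration, loading a frozen graph (weights already baked into the `GraphDef`) and running inference with the TF 1.x C++ API looks roughly like this. The file name `graph.pb` and the node names `input`/`output` are placeholders for your own model, and the input shape is just an example:

```cpp
#include <string>
#include <vector>

#include "tensorflow/core/framework/tensor.h"
#include "tensorflow/core/platform/env.h"
#include "tensorflow/core/public/session.h"

int main() {
  // Create a session.
  tensorflow::Session* session;
  TF_CHECK_OK(tensorflow::NewSession(tensorflow::SessionOptions(), &session));

  // Load the serialized GraphDef ("graph.pb" is a placeholder file name).
  tensorflow::GraphDef graph_def;
  TF_CHECK_OK(tensorflow::ReadBinaryProto(tensorflow::Env::Default(),
                                          "graph.pb", &graph_def));
  TF_CHECK_OK(session->Create(graph_def));

  // Build an input tensor (example shape; use your model's real one).
  tensorflow::Tensor input(tensorflow::DT_FLOAT,
                           tensorflow::TensorShape({1, 784}));

  // "input" and "output" are placeholder node names from your graph.
  std::vector<tensorflow::Tensor> outputs;
  TF_CHECK_OK(session->Run({{"input", input}}, {"output"}, {}, &outputs));

  TF_CHECK_OK(session->Close());
  delete session;
  return 0;
}
```

Compile it against the headers and the symlinked library described above, and you have a self-contained inference binary without ever running Bazel.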