Tensorflow plugin

NOTE: this page describes how to build the Tensorflow and Tensorflow LITE C++ APIs for Linux, Android, and Windows.

Tensorflow 2.1.0

A difficulty for many people working with Tensorflow is how to build it properly. With that in mind, we created Docker images with the CUDA and Tensorflow libraries available, for GNU/Linux builds here and for Android builds here. These Docker images can be used to build plugins for Linux and Android; however, they cannot handle Windows builds. Here we carefully guide you through a proper build of Tensorflow LITE Native and the Tensorflow C++ API for our three supported platforms.

You will need:

  • Python 3

  • Bazel 0.29.1

  • Tensorflow 2.1.0 repository:

    git clone https://github.com/tensorflow/tensorflow.git
    cd tensorflow
    git checkout v2.1.0
    
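Before configuring anything, it can save time to confirm the toolchain matches the versions pinned above. The helper below is our own (not part of Tensorflow) and simply compares an installed version string against the expected one; the awk pattern assumes Bazel's usual "Build label: X.Y.Z" output.

```shell
#!/bin/sh
# Helper (ours, not part of Tensorflow): compare an installed version
# against the one this guide pins, and report the result.
check_version() {
  tool="$1"; found="$2"; expected="$3"
  if [ "$found" = "$expected" ]; then
    echo "$tool ok ($found)"
  else
    echo "$tool mismatch: found '$found', expected '$expected'"
  fi
}

# Bazel prints its version as "Build label: 0.29.1".
if command -v bazel >/dev/null 2>&1; then
  check_version bazel "$(bazel version 2>/dev/null | awk '/Build label/ {print $3}')" "0.29.1"
fi
```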

We assembled the Tensorflow headers needed to build plugins; to access them, you only have to extract the libs.tar.gz file found under jami-project/plugins/contrib. However, if you are using another Tensorflow version or want to assemble them yourself, the assembly instructions for Tensorflow LITE Native and the C++ API are available in the README_ASSEMBLE file under gitlab:jami-plugins.
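The extraction step itself is a single tar invocation. The sketch below fabricates a stand-in archive so the commands run anywhere; with the real repository you would skip straight to the last command, run against jami-project/plugins/contrib/libs.tar.gz.

```shell
# Stand-in for the real libs.tar.gz so this snippet is self-contained;
# in the actual repository the archive already exists, so only the final
# tar command is needed.
mkdir -p contrib/libs
echo "// tensorflow header stub" > contrib/libs/stub.h
tar -czf contrib/libs.tar.gz -C contrib libs
rm -rf contrib/libs

# The actual step: unpack the assembled Tensorflow headers in place.
tar -xzf contrib/libs.tar.gz -C contrib
```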

Linux

Tensorflow LITE does not support desktop GPUs. If you want to use them, please consider using the C++ API instead.

If you want to build the Tensorflow C++ API with GPU support, be sure to have a CUDA-capable GPU, that you have followed all installation steps for the Nvidia drivers, CUDA Toolkit, cuDNN, and TensorRT, that their versions match, and that they are correct for the Tensorflow version you want to build.

The following links may be very helpful:

  • https://www.tensorflow.org/install/source

  • https://developer.nvidia.com/cuda-gpus

  • https://developer.nvidia.com/cuda-toolkit-archive

  • https://developer.nvidia.com/cudnn

Set up your build options with ./configure.
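The configure script can also be driven non-interactively through environment variables. The variable names below are the ones the Tensorflow 2.1 configure script reads; the CUDA values are only examples and must be adapted to your hardware.

```shell
# Answer the configure prompts up front via environment variables
# (example values; adjust TF_NEED_CUDA and the compute capability
# to your own GPU, per the cuda-gpus link above).
export PYTHON_BIN_PATH="$(command -v python3)"
export TF_NEED_CUDA=1                        # 0 for a CPU-only build
export TF_CUDA_COMPUTE_CAPABILITIES="6.1"    # match your GPU
# then run: ./configure
```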

  • Tensorflow LITE Native

    bazel build //tensorflow/lite:libtensorflowlite.so
    
  • Tensorflow C++ API

    bazel build --config=v1 --define framework_shared_object=false --define=no_tensorflow_py_deps=true //tensorflow:libtensorflow_cc.so
    
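Once Bazel finishes, the shared objects for the targets above land under bazel-bin, mirroring the target paths. The sketch below simulates that layout so it runs anywhere; "plugin/lib" is only an example destination, not a path mandated by Jami or Tensorflow.

```shell
# Simulated bazel-bin layout (the real one is produced by the builds above);
# with a real build, only the cp line is needed. "plugin/lib" is an example
# destination for wherever your plugin build expects the library.
mkdir -p bazel-bin/tensorflow/lite plugin/lib
touch bazel-bin/tensorflow/lite/libtensorflowlite.so

cp bazel-bin/tensorflow/lite/libtensorflowlite.so plugin/lib/
```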

Windows

Tensorflow LITE does not support desktop GPUs. If you want to use them, please consider using the C++ API instead.

If you want to build the Tensorflow C++ API with GPU support, be sure to have a CUDA-capable GPU, that you have followed all installation steps for the Nvidia drivers, CUDA Toolkit, cuDNN, and TensorRT, that their versions match, and that they are correct for the Tensorflow version you want to build.

The following links may be very helpful:

  • https://www.tensorflow.org/install/source

  • https://developer.nvidia.com/cuda-gpus

  • https://developer.nvidia.com/cuda-toolkit-archive

  • https://developer.nvidia.com/cudnn

Set up your build options with python3 configure.py.

  • Tensorflow LITE Native

    bazel build //tensorflow/lite:tensorflowlite.dll
    
  • Tensorflow C++ API

    bazel build --config=v1 --define framework_shared_object=false --config=cuda --define=no_tensorflow_py_deps=true //tensorflow:tensorflow_cc.dll
    

There may be some missing references while compiling a plugin with the Tensorflow C++ API. If that happens, you have to rebuild Tensorflow and explicitly export the missing symbols. Fortunately, Tensorflow now has an easy workaround: you only have to add the desired symbols to this file.

Android - Tensorflow LITE Native

For mobile applications, Tensorflow LITE is the only option you want to consider, and to successfully build it you will also need:

  • Android NDK r18

Setup your build options with:

./configure
        >> Do you wish to build TensorFlow with XLA JIT support? [Y/n]: n
        >> Do you wish to download a fresh release of clang? (Experimental) [y/N]: y
        >> Would you like to interactively configure ./WORKSPACE for Android builds? [y/N]: y
        >> Please specify the home path of the Android NDK to use. [Default is /home/<username>/Android/Sdk/ndk-bundle]: <enter the path to NDK r18>

And build as desired:

  • armeabi-v7a

    bazel build //tensorflow/lite:libtensorflowlite.so --crosstool_top=//external:android/crosstool --cpu=armeabi-v7a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
    
  • arm64-v8a

    bazel build //tensorflow/lite:libtensorflowlite.so --crosstool_top=//external:android/crosstool --cpu=arm64-v8a --host_crosstool_top=@bazel_tools//tools/cpp:toolchain --cxxopt="-std=c++11"
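The two Android invocations above differ only in the --cpu flag, so a small wrapper (our own, not part of Tensorflow) can keep the shared flags in one place. It echoes the command as a dry run; drop the echo to actually build.

```shell
#!/bin/sh
# Dry-run wrapper (ours) around the Android TFLite builds above; the only
# varying part is the ABI. Remove "echo" to run the real build.
build_tflite_android() {
  abi="$1"    # armeabi-v7a or arm64-v8a
  echo bazel build //tensorflow/lite:libtensorflowlite.so \
    --crosstool_top=//external:android/crosstool \
    --cpu="$abi" \
    --host_crosstool_top=@bazel_tools//tools/cpp:toolchain \
    --cxxopt="-std=c++11"
}

build_tflite_android armeabi-v7a
build_tflite_android arm64-v8a
```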