How to build?

NOTE: this page describes how to build a Jami plugin and its dependencies.

If you want to do something with your video call, chances are you will use OpenCV and/or deep learning models (TensorFlow, PyTorch, etc.). So, before building the plugin itself, you need to build its dependencies.

Dependencies

Here we give you the steps to build OpenCV and ONNX Runtime, but do not feel limited to these libraries. We offer a [page](7.2 - Tensorflow Plugin) with a detailed explanation of how to build the TensorFlow C++ API for Windows, Linux, and Android. Other libraries should work as long as they and the plugin are correctly built!

OpenCV 4.1.1

We added OpenCV 4.1.1 as a contrib in the daemon. This way you can easily build OpenCV for Android, Linux, and Windows; just follow the instructions for your platform.

Windows

set DAEMON=<path/to/daemon>
cd %DAEMON%\compat\msvc
python3 winmake.py -fb opencv

Linux

Using Docker (recommended):

export DAEMON=<path/to/daemon>
cd ${DAEMON}/../
docker build -f plugins/docker/Dockerfile_ubuntu_18.04_onnxruntime -t plugins-linux .
docker run --rm -it -v ${DAEMON}/../:/home/plugins/jami:rw plugins-linux:latest /bin/bash
cd jami/daemon/contrib
mkdir native
cd native
../bootstrap --disable-argon2 --disable-asio --disable-fmt --disable-gcrypt --disable-gmp --disable-gnutls --disable-gpg-error --disable-gsm --disable-http_parser --disable-iconv --disable-jack --disable-jsoncpp --disable-libarchive --disable-libressl --disable-msgpack --disable-natpmp --disable-nettle --enable-opencv --disable-opendht --disable-pjproject --disable-portaudio --disable-restinio --disable-secp256k1 --disable-speexdsp --disable-upnp --disable-uuid --disable-yaml-cpp --disable-zlib
make list
make fetch opencv opencv_contrib
make

Using your own system:

export DAEMON=<path/to/daemon>
cd ${DAEMON}/contrib/native
../bootstrap --enable-ffmpeg --disable-argon2 --disable-asio --disable-fmt --disable-gcrypt --disable-gmp --disable-gnutls --disable-gpg-error --disable-gsm --disable-http_parser --disable-iconv --disable-jack --disable-jsoncpp --disable-libarchive --disable-libressl --disable-msgpack --disable-natpmp --disable-nettle --enable-opencv --disable-opendht --disable-pjproject --disable-portaudio --disable-restinio --disable-secp256k1 --disable-speexdsp --disable-upnp --disable-uuid --disable-yaml-cpp --disable-zlib
make list
make fetch opencv opencv_contrib
make
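
Once make finishes, you can run a quick sanity check for the installed OpenCV libraries (install paths vary with the host triplet; adjust the prefix if you built inside the Docker container):

# Look for the installed OpenCV static libraries (exact paths may vary)
find ${DAEMON}/contrib -name "libopencv_core*"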

Android

Using Docker (recommended):

export DAEMON=<path/to/jami-plugins>
cd ${DAEMON}
docker build -f docker/Dockerfile_android_onnxruntime -t plugins-android .
docker run --rm -it -v ${DAEMON}/:/home/gradle/plugins:rw plugins-android:latest /bin/bash
cd plugins/contrib
ANDROID_ABI="arm64-v8a" sh build-dependencies.sh

NOTE: if the onnxruntime build fails because it is running as root, pass the --allow_running_as_root flag to its build script.

Using your own system:

export DAEMON=<path/to/daemon>
cd ${DAEMON}
export ANDROID_NDK=<NDK>
export ANDROID_ABI=arm64-v8a
export ANDROID_API=29
export TOOLCHAIN=$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64
export TARGET=aarch64-linux-android
export CC=$TOOLCHAIN/bin/$TARGET$ANDROID_API-clang
export CXX=$TOOLCHAIN/bin/$TARGET$ANDROID_API-clang++
export AR=$TOOLCHAIN/bin/$TARGET-ar
export LD=$TOOLCHAIN/bin/$TARGET-ld
export RANLIB=$TOOLCHAIN/bin/$TARGET-ranlib
export STRIP=$TOOLCHAIN/bin/$TARGET-strip
export PATH=$PATH:$TOOLCHAIN/bin
cd contrib
mkdir native-${TARGET}
cd native-${TARGET}
../bootstrap --build=x86_64-pc-linux-gnu --host=$TARGET$ANDROID_API --enable-opencv --enable-opencv_contrib
make

Onnxruntime 1.6.0

A recurring difficulty for people working with deep learning models is how to deploy them. With that in mind, we give users the possibility of using onnxruntime. Development libraries for training and testing are usually too heavy to deploy; TensorFlow with CUDA support, for instance, can easily surpass 400 MB. For our GreenScreen plugin we chose onnxruntime because it is lighter (around 140 MB with CUDA support) and supports model conversion from several development libraries (TensorFlow, PyTorch, Caffe, etc.).
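
As an illustration of such a conversion, a TensorFlow SavedModel can be turned into an ONNX model with the tf2onnx tool. This sketch assumes tf2onnx is installed separately (it is not part of this repository), and the model paths are placeholders:

# Illustration: convert a TensorFlow SavedModel to ONNX (pip install tf2onnx)
# ./my_model and model.onnx are placeholder paths
python3 -m tf2onnx.convert --saved-model ./my_model --opset 11 --output model.onnx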

  • For more advanced and curious third-party developers, we also [provide instructions](7.2 - Tensorflow Plugin) to build the TensorFlow C++ API for Windows and Linux, and the TensorFlow Lite C++ API for Android.

To build onnxruntime-based plugins for Linux and Android, we strongly recommend using the Docker files available under <plugins>/docker/. We do not offer a Windows Docker file, but below we carefully guide you through the proper build of this library for all three supported platforms.

If you want to build onnxruntime with Nvidia GPU support, be sure you have a CUDA-capable GPU, that you have followed all installation steps for the Nvidia drivers, CUDA Toolkit, and cuDNN, and that their versions match.

The following links may be very helpful:

  • https://developer.nvidia.com/cuda-gpus

  • https://developer.nvidia.com/cuda-toolkit-archive

  • https://developer.nvidia.com/cudnn
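
Before building, you can sanity-check that the driver, CUDA Toolkit, and cuDNN versions line up. The cuDNN header path below is a common default, not a guarantee (older cuDNN releases keep these defines in cudnn.h):

nvidia-smi       # driver version and the highest CUDA version it supports
nvcc --version   # installed CUDA Toolkit version
grep -A2 "#define CUDNN_MAJOR" /usr/include/cudnn_version.h   # cuDNN version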

Linux and Android

We added onnxruntime as a contrib in the daemon. This way you can easily build onnxruntime for Android and Linux.

  • Linux - Without acceleration:

export DAEMON=<path/to/daemon>
cd ${DAEMON}/contrib/native
../bootstrap
make .onnx

  • Linux - With CUDA acceleration (CUDA 10.2):

export CUDA_PATH=/usr/local/cuda/
export CUDA_HOME=${CUDA_PATH}
export CUDNN_PATH=/usr/lib/x86_64-linux-gnu/
export CUDNN_HOME=${CUDNN_PATH}
export CUDA_VERSION=10.2
export USE_NVIDIA=True
export DAEMON=<path/to/daemon>
cd ${DAEMON}/contrib/native
../bootstrap
make .onnx

  • Android - With NNAPI acceleration:

export DAEMON=<path/to/daemon>
cd ${DAEMON}
export ANDROID_NDK=<NDK>
export ANDROID_ABI=arm64-v8a
export ANDROID_API=29
export TOOLCHAIN=$ANDROID_NDK/toolchains/llvm/prebuilt/linux-x86_64
export TARGET=aarch64-linux-android
export CC=$TOOLCHAIN/bin/$TARGET$ANDROID_API-clang
export CXX=$TOOLCHAIN/bin/$TARGET$ANDROID_API-clang++
export AR=$TOOLCHAIN/bin/$TARGET-ar
export LD=$TOOLCHAIN/bin/$TARGET-ld
export RANLIB=$TOOLCHAIN/bin/$TARGET-ranlib
export STRIP=$TOOLCHAIN/bin/$TARGET-strip
export PATH=$PATH:$TOOLCHAIN/bin
cd contrib
mkdir native-${TARGET}
cd native-${TARGET}
../bootstrap --build=x86_64-pc-linux-gnu --host=$TARGET$ANDROID_API
make .onnx

Windows

  • Pre-build:

mkdir pluginsEnv
export PLUGIN_ENV=<full-path/pluginsEnv>
cd pluginsEnv
mkdir onnxruntime
mkdir onnxruntime/cpu
mkdir onnxruntime/nvidia-gpu
mkdir onnxruntime/include
git clone https://github.com/microsoft/onnxruntime.git onnx
cd onnx
git checkout v1.6.0 && git checkout -b v1.6.0

  • Without acceleration:

.\build.bat --config Release --build_shared_lib --parallel --cmake_generator "Visual Studio 16 2019"
cp ./build/Windows/Release/Release/onnxruntime.dll ../onnxruntime/cpu/onnxruntime.dll

  • With CUDA acceleration (CUDA 10.2):

.\build.bat --config Release --build_shared_lib --parallel --cmake_generator "Visual Studio 16 2019" --use_cuda --cudnn_home <cudnn home path> --cuda_home <cuda home path> --cuda_version 10.2
cp ./build/Windows/Release/Release/onnxruntime.dll ../onnxruntime/nvidia-gpu/onnxruntime.dll

  • Post-build:

cp -r ./include/onnxruntime/core/ ../onnxruntime/include/
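
After these steps, the pluginsEnv directory should look roughly like this (each onnxruntime.dll is present only if you ran the corresponding build):

pluginsEnv/
├── onnx/                       (cloned onnxruntime sources)
└── onnxruntime/
    ├── cpu/onnxruntime.dll
    ├── nvidia-gpu/onnxruntime.dll
    └── include/core/...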

For further build instructions, please refer to the official onnxruntime GitHub repository.

Plugin

To exemplify a plugin build, we will use the GreenScreen plugin available here.

Linux/Android

First, go to the plugins folder in your cloned ring-project:

cd <ring-project>/plugins

  • Linux - Nvidia GPU

PROCESSOR=NVIDIA python3 build-plugin.py --projects=GreenScreen

  • Linux - CPU

python3 build-plugin.py --projects=GreenScreen

  • Android

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-amd64/jre
export ANDROID_HOME=/home/${USER}/Android/Sdk
export ANDROID_SDK=${ANDROID_HOME}
export ANDROID_NDK=${ANDROID_HOME}/ndk/21.1.6352462
export ANDROID_NDK_ROOT=${ANDROID_NDK}
export PATH=${PATH}:${ANDROID_HOME}/tools:${ANDROID_HOME}/platform-tools:${ANDROID_NDK}:${JAVA_HOME}/bin
ANDROID_ABI="<android-architectures-separated-by-space>" python3 build-plugin.py --projects=GreenScreen --distribution=android
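
For example, to build for both 64-bit and 32-bit ARM in a single run (standard NDK ABI names):

ANDROID_ABI="arm64-v8a armeabi-v7a" python3 build-plugin.py --projects=GreenScreen --distribution=android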

The GreenScreen.jpl file will be available under <ring-project>/plugins/build/.
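
To our understanding, a .jpl package is a plain zip container, so you can verify its contents (assuming unzip is available):

# Assumption: .jpl is zip-packaged; the listing should show the plugin manifest and libraries
unzip -l <ring-project>/plugins/build/GreenScreen.jpl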

Windows

Windows plugin builds are tied to the daemon repository and its build scripts. So, to build our example plugin:

cd <ring-project>/daemon/compat/msvc
python3 winmake.py -fb GreenScreen

The GreenScreen.jpl file will be available under <ring-project>/plugins/build/.
