The GeLU plugin is adapted from https://github.com/NVIDIA/TensorRT/tree/master/plugin/geluPlugin.
This repository explains how to work with custom layers in an end-to-end deep learning pipeline. I have added a custom layer to the model architecture using the Keras custom layer API. The model is then trained on a demo dataset to do dogs-vs-cats classification. After training, I have written converter code to convert the trained model to a TensorRT engine, and I have also included inference code to run inference with the converted engine.
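For reference, below is a minimal sketch of what a custom GeLU layer can look like with the Keras custom layer API. The class name and the exact formulation are illustrative; the layer actually used in this repository may differ in detail.

```python
# Minimal sketch of a custom GeLU layer using the Keras custom layer API.
# The class name and formulation here are illustrative only.
import numpy as np
import tensorflow as tf

class GeluLayer(tf.keras.layers.Layer):
    def call(self, inputs):
        # tanh approximation of GeLU, the formulation commonly
        # used by the TensorRT geluPlugin
        return 0.5 * inputs * (1.0 + tf.tanh(
            np.sqrt(2.0 / np.pi) * (inputs + 0.044715 * tf.pow(inputs, 3))))
```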
The model can be trained using the script do_train.py, which saves the trained model once training finishes.
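Assuming the script takes no required arguments, training is started with:

```
python do_train.py
```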
Before the model is converted to a TensorRT engine file, the custom plugin needs to be compiled. The custom TensorRT layer for GeLU is inside the "geluPluginv2" directory. To compile the GeLU custom layer:
```
cd geluPluginv2
mkdir build && cd build
cmake ..
make
```
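If the build succeeds, the compiled plugin library, libGeluPlugin.so, should appear in the build directory (the exact output path may vary with your CMake setup). This is the file that the converter and inference scripts below need to load.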
You can use converter.py in the trt_utils directory to convert the saved model to a TensorRT engine. Please change the line that sets the path to libGeluPlugin.so to the correct path on your system.
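For orientation, this is roughly how a converter script registers the compiled plugin with TensorRT before building the engine; the path below is a placeholder, and the actual loading code in converter.py may differ:

```python
# Sketch of registering the compiled GeLU plugin with TensorRT before
# engine conversion. The .so path is a placeholder; adjust it to where
# your build step put libGeluPlugin.so.
import ctypes
import tensorrt as trt

PLUGIN_PATH = "geluPluginv2/build/libGeluPlugin.so"  # adjust for your system
ctypes.CDLL(PLUGIN_PATH)  # load the shared library so its creator is available

TRT_LOGGER = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(TRT_LOGGER, "")  # register all plugin creators
```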
You can run inference.py in the trt_utils directory to do inference with the converted engine. As in the last section, please change the path to libGeluPlugin.so.
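As a rough guide, the sketch below shows inference with the converted engine, assuming the classic bindings-based TensorRT Python API and pycuda. File names, shapes, and the engine path are placeholders; see inference.py for the actual code.

```python
# Minimal sketch of TensorRT inference with the converted engine.
# Paths and input data are placeholders.
import ctypes
import numpy as np
import pycuda.autoinit  # noqa: F401  (creates a CUDA context)
import pycuda.driver as cuda
import tensorrt as trt

ctypes.CDLL("geluPluginv2/build/libGeluPlugin.so")  # adjust for your system

logger = trt.Logger(trt.Logger.INFO)
trt.init_libnvinfer_plugins(logger, "")

with open("model.engine", "rb") as f, trt.Runtime(logger) as runtime:
    engine = runtime.deserialize_cuda_engine(f.read())

context = engine.create_execution_context()

# Allocate one host/device buffer pair per binding.
bindings, host_bufs, dev_bufs = [], [], []
for i in range(engine.num_bindings):
    size = trt.volume(engine.get_binding_shape(i))
    dtype = trt.nptype(engine.get_binding_dtype(i))
    host = cuda.pagelocked_empty(size, dtype)
    dev = cuda.mem_alloc(host.nbytes)
    bindings.append(int(dev))
    host_bufs.append(host)
    dev_bufs.append(dev)

# Random data stands in for a preprocessed input image here.
host_bufs[0][:] = np.random.rand(*host_bufs[0].shape).astype(host_bufs[0].dtype)
cuda.memcpy_htod(dev_bufs[0], host_bufs[0])
context.execute_v2(bindings)
cuda.memcpy_dtoh(host_bufs[-1], dev_bufs[-1])
print("output:", host_bufs[-1])
```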
For more details, please refer to the documentation.