dbnet/README.md
The PyTorch implementation is DBNet.
To generate the DBNet.wts file, first download the code and model from DBNet and configure your environment.
Then open tools/predict.py, set --save_wts to True, and run it; DBNet.wts will be generated.
An ONNX model can also be exported by setting --onnx to True.
Then build and run:

```
mkdir build
cd build
cmake ..
make
cp /your_wts_path/DBNet.wts .
sudo ./dbnet -s               // serialize model to plan file, i.e. 'DBNet.engine'
sudo ./dbnet -d ./test_imgs   // deserialize plan file and run inference; all images in the test_imgs folder will be processed
```
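For context, the -s and -d modes follow the standard TensorRT serialize / deserialize flow for plan files. Below is a minimal sketch assuming a TensorRT 7-style API; the helper names are illustrative and are not the functions actually used in dbnet.cpp.

```cpp
#include <fstream>
#include <iterator>
#include <string>
#include <vector>
#include "NvInfer.h"

using namespace nvinfer1;

// Write a built engine to a plan file such as 'DBNet.engine' (what -s ends with).
void savePlan(ICudaEngine& engine, const std::string& path) {
    IHostMemory* plan = engine.serialize();              // engine -> serialized blob
    std::ofstream f(path, std::ios::binary);
    f.write(static_cast<const char*>(plan->data()), plan->size());
    plan->destroy();                                      // TRT 7-style cleanup
}

// Load a plan file back into an engine for inference (what -d starts with).
ICudaEngine* loadPlan(IRuntime& runtime, const std::string& path) {
    std::ifstream f(path, std::ios::binary);
    std::vector<char> blob((std::istreambuf_iterator<char>(f)),
                           std::istreambuf_iterator<char>());
    return runtime.deserializeCudaEngine(blob.data(), blob.size(), nullptr);
}
```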
For more details, see https://github.com/BaofengZan/DBNet-TensorRT.

Known issues and possible improvements:
1. In common.hpp, the two functions below have identical signatures and can be merged into one (see the sketch after this list):
   `ILayer* convBnLeaky(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)`
   `ILayer* convBnLeaky2(INetworkDefinition *network, std::map<std::string, Weights>& weightMap, ITensor& input, int outch, int ksize, int s, int g, std::string lname, bool bias = true)`
2. The post-processing here should be optimized; it differs slightly from the PyTorch side.
3. The input image here is resized directly to 640 x 640, while the PyTorch side uses a letterbox resize (see the second sketch below).
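Regarding item 1, since the two helpers share the same signature, one possible way to merge them is a single function that exposes whatever actually differs as an extra parameter. This is only a sketch under assumptions: the weight-key suffixes, the addBatchNorm2d helper (as found in typical tensorrtx common.hpp files), and the useRelu switch are illustrative, not code from this repo.

```cpp
#include <map>
#include <string>
#include "NvInfer.h"

using namespace nvinfer1;

// Assumed to be declared elsewhere in common.hpp (typical tensorrtx helper).
IScaleLayer* addBatchNorm2d(INetworkDefinition* network, std::map<std::string, Weights>& weightMap,
                            ITensor& input, std::string lname, float eps);

// Merged replacement for convBnLeaky / convBnLeaky2: the hypothetical useRelu flag
// stands in for whatever actually differs between the two originals.
ILayer* convBnLeakyMerged(INetworkDefinition* network,
                          std::map<std::string, Weights>& weightMap,
                          ITensor& input, int outch, int ksize, int s, int g,
                          std::string lname, bool bias = true,
                          bool useRelu = false) {
    Weights emptywts{DataType::kFLOAT, nullptr, 0};
    int p = ksize / 2;  // same-padding for odd kernel sizes

    // Weight-key suffixes below are assumptions about the checkpoint layout.
    IConvolutionLayer* conv = network->addConvolutionNd(
        input, outch, DimsHW{ksize, ksize},
        weightMap[lname + ".weight"],
        bias ? weightMap[lname + ".bias"] : emptywts);
    conv->setStrideNd(DimsHW{s, s});
    conv->setPaddingNd(DimsHW{p, p});
    conv->setNbGroups(g);

    IScaleLayer* bn = addBatchNorm2d(network, weightMap, *conv->getOutput(0),
                                     lname + ".bn", 1e-5);

    IActivationLayer* act = network->addActivation(
        *bn->getOutput(0), useRelu ? ActivationType::kRELU : ActivationType::kLEAKY_RELU);
    if (!useRelu) act->setAlpha(0.1f);
    return act;
}
```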
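Regarding item 3, a letterbox preprocessing step keeps the aspect ratio and pads the remainder, which is what the PyTorch side does instead of a direct 640 x 640 resize. Here is a minimal OpenCV sketch; the function name, padding value, and centering are illustrative and may differ from the exact PyTorch behaviour.

```cpp
#include <algorithm>
#include <cmath>
#include <opencv2/opencv.hpp>

// Resize with preserved aspect ratio, then pad to the target size.
cv::Mat letterbox(const cv::Mat& img, int dstW = 640, int dstH = 640) {
    float r = std::min(dstW / static_cast<float>(img.cols),
                       dstH / static_cast<float>(img.rows));
    int newW = static_cast<int>(std::round(img.cols * r));
    int newH = static_cast<int>(std::round(img.rows * r));

    cv::Mat resized;
    cv::resize(img, resized, cv::Size(newW, newH));

    // Pad the remaining area (114 is a common but arbitrary fill value).
    cv::Mat out(dstH, dstW, img.type(), cv::Scalar(114, 114, 114));
    int dx = (dstW - newW) / 2;
    int dy = (dstH - newH) / 2;
    resized.copyTo(out(cv::Rect(dx, dy, newW, newH)));
    return out;
}
```

If preprocessing is changed this way, remember to invert the same scale and offsets when mapping detected text boxes back to the original image coordinates.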