Jetson Inference: DNN Vision Library

tensorNet.h File Reference

#include <NvInfer.h>
#include <jetson-utils/cudaUtility.h>
#include <jetson-utils/commandLine.h>
#include <jetson-utils/imageFormat.h>
#include <jetson-utils/timespec.h>
#include <jetson-utils/logging.h>
#include <vector>
#include <sstream>
#include <math.h>


Classes

class tensorNet
    Abstract class for loading a tensor network with TensorRT.
class tensorNet::Logger
    Logger class for GIE info/warning/errors.
class tensorNet::Profiler
    Profiler interface for measuring layer timings.
struct tensorNet::layerInfo

Namespaces

namespace nvinfer1

Macros

#define DIMS_C(x) x.c
#define DIMS_H(x) x.h
#define DIMS_W(x) x.w
#define NV_TENSORRT_MAJOR 1
#define NV_TENSORRT_MINOR 0
#define NOEXCEPT
#define TENSORRT_VERSION_CHECK(major, minor, patch) (NV_TENSORRT_MAJOR > major || (NV_TENSORRT_MAJOR == major && NV_TENSORRT_MINOR > minor) || (NV_TENSORRT_MAJOR == major && NV_TENSORRT_MINOR == minor && NV_TENSORRT_PATCH >= patch))
    Macro for checking the minimum version of TensorRT that is installed.
#define DEFAULT_MAX_BATCH_SIZE 1
    Default maximum batch size.
#define LOG_TRT "[TRT] "
    Prefix used for tagging printed log output from TensorRT.

Typedefs

typedef nvinfer1::Dims3 Dims3

Enumerations

enum precisionType { TYPE_DISABLED = 0, TYPE_FASTEST, TYPE_FP32, TYPE_FP16, TYPE_INT8, NUM_PRECISIONS }
    Enumeration indicating the desired precision that the network should run in, if available in hardware.
enum deviceType { DEVICE_GPU = 0, DEVICE_DLA, DEVICE_DLA_0 = DEVICE_DLA, DEVICE_DLA_1, NUM_DEVICES }
    Enumeration indicating the desired device that the network should run on, if available in hardware.
enum modelType { MODEL_CUSTOM = 0, MODEL_CAFFE, MODEL_ONNX, MODEL_UFF, MODEL_ENGINE }
    Enumeration indicating the format of the model imported into TensorRT (caffe, ONNX, or UFF).
enum profilerQuery { PROFILER_PREPROCESS = 0, PROFILER_NETWORK, PROFILER_POSTPROCESS, PROFILER_VISUALIZE, PROFILER_TOTAL }
    Profiling queries.
enum profilerDevice { PROFILER_CPU = 0, PROFILER_CUDA }
    Profiler device.

Functions

const char * precisionTypeToStr (precisionType type)
    Stringize function that returns precisionType in text.
precisionType precisionTypeFromStr (const char *str)
    Parse the precision type from a string.
const char * deviceTypeToStr (deviceType type)
    Stringize function that returns deviceType in text.
deviceType deviceTypeFromStr (const char *str)
    Parse the device type from a string.
const char * modelTypeToStr (modelType type)
    Stringize function that returns modelType in text.
modelType modelTypeFromStr (const char *str)
    Parse the model format from a string.
modelType modelTypeFromPath (const char *path)
    Parse the model format from a file path.
const char * profilerQueryToStr (profilerQuery query)
    Stringize function that returns profilerQuery in text.

Macro Definition Documentation

DIMS_C

#define DIMS_C(x) x.c

DIMS_H

#define DIMS_H(x) x.h

DIMS_W

#define DIMS_W(x) x.w

NOEXCEPT

#define NOEXCEPT

NV_TENSORRT_MAJOR

#define NV_TENSORRT_MAJOR 1

NV_TENSORRT_MINOR

#define NV_TENSORRT_MINOR 0

Typedef Documentation

Dims3

typedef nvinfer1::Dims3 Dims3