AI² Interfaces for Tools with Deep Learning
MERLIC comes with Artificial Intelligence Acceleration Interfaces (AI²) for the NVIDIA® TensorRT™ SDK and the Intel® Distribution of OpenVINO™ toolkit. They enable you to benefit from AI accelerator hardware that is compatible with the NVIDIA® TensorRT™ SDK or the OpenVINO™ toolkit. In MERLIC tools that provide deep learning functionality, you can use this AI accelerator hardware by selecting a processing unit that supports NVIDIA® TensorRT™ or the OpenVINO™ toolkit. MERLIC then performs optimized inference on the respective hardware. As a result, significantly faster deep learning inference times can be achieved on these types of processing units, i.e., on NVIDIA® GPUs or Intel® processors including CPUs, GPUs, NPUs, and VPUs. The expanded range of supported Intel® and NVIDIA® devices gives you even more flexibility in your choice of hardware for inference optimization in your deep learning applications.
Most MERLIC tools with deep learning technology also support deep learning models that were optimized for an AI² interface when the trained model was exported. Using such an optimized model improves the loading time of the MVApp and reduces the amount of memory that is needed for the model.
Supported Hardware and Plug-ins
The following hardware and plug-ins are supported by the AI² interface in MERLIC:
NVIDIA® TensorRT™
- NVIDIA® GPUs (no installation step necessary)
GPUs with a compute capability of less than 7.5 are not supported. For more information about supported platforms, features, and hardware capabilities, see the page Support Matrix in the NVIDIA® TensorRT™ manual.
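If you are unsure about the compute capability of your GPU, you can query it, for example, with the nvidia-smi utility that is installed with the NVIDIA® driver. This is a minimal sketch; the compute_cap query field is an assumption that requires a reasonably recent driver version:
rem Query the name and compute capability of each installed NVIDIA GPU
nvidia-smi --query-gpu=name,compute_cap --format=csv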
OpenVINO™ Toolkit
- CPUs
- Intel® GPUs
- Intel® NPUs
- Intel® Movidius™ VPUs: HDDL plug-in and MYRIAD plug-in
For more information about the devices and plug-ins, see the page Supported Devices in the OpenVINO™ toolkit documentation.
System Requirements
If you intend to use the Intel® Distribution of OpenVINO™ toolkit, you have to make sure that the system requirements for the OpenVINO™ toolkit are met, in addition to the system requirements of MERLIC. For more information, see the respective documentation.
Using NVIDIA® TensorRT™ in MERLIC
You can use NVIDIA® TensorRT™, i.e., NVIDIA® GPUs, for inference optimization without any further installation effort. All necessary drivers are delivered with MERLIC. If an NVIDIA® GPU is available on your system, you can use it in the following MERLIC tools:
- Processing → Counting → Count with Deep Learning
- Processing → Deep Learning - AI → Classify Image
- Processing → Deep Learning - AI → Detect Anomalies
- Processing → Deep Learning - AI → Detect Anomalies in the Global Context
- Processing → Deep Learning - AI → Find Objects
- Processing → Deep Learning - AI → Segment Image Pixel-Precisely
- Processing → Reading → Read Text and Numbers with Deep Learning
The hardware for the optimization via NVIDIA® TensorRT™ can be selected as follows in each of these MERLIC tools:
- Click on the parameter "Processing Unit" to display the list of available hardware.
- Select the desired NVIDIA® GPU from the list. You can recognize the respective hardware by its name with the prefix "TensorRT(TM)".
Using the OpenVINO™ Toolkit in MERLIC
In MERLIC, you can use hardware that supports the OpenVINO™ toolkit in all MERLIC tools that provide deep learning functionality, e.g., Classify Image. However, depending on the type of hardware you want to use, it might be necessary to set up the OpenVINO™ toolkit first to use the respective hardware in MERLIC.
Using an Intel® GPU, Intel® NPU, or CPU with the OpenVINO™ Toolkit
If you want to use an Intel® GPU, Intel® NPU, or CPU with the OpenVINO™ toolkit, no additional installation is necessary: you can use the hardware immediately after the MERLIC installation. If the hardware is not available, an additional driver update might be necessary.
For more information on the drivers, see the page Additional Configurations of the OpenVINO™ toolkit documentation.
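If you are unsure which graphics driver is installed, you can, for example, list the display adapters and their driver versions from a command prompt. This is a sketch using the wmic utility, which is deprecated on recent Windows versions but still available on many systems:
rem List display adapters and their installed driver versions
wmic path Win32_VideoController get Name,DriverVersion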
Using an Intel® Movidius™ VPU with the OpenVINO™ Toolkit
To use Intel® Movidius™ VPUs with the OpenVINO™ toolkit as processing unit in MERLIC, you first have to install the Intel® Distribution of OpenVINO™ toolkit. In addition, you have to start MERLIC in an OpenVINO™ toolkit environment to make sure that the VPUs with OpenVINO™ toolkit support are recognized and available in MERLIC.
For more information on the installation and how to start MERLIC in an OpenVINO™ toolkit environment, see the following sections.
Installation of the Intel® Distribution of OpenVINO™ Toolkit
To use a VPU with the AI² interface as processing unit in tools such as "Classify Image", an installed version of the Intel® Distribution of OpenVINO™ toolkit is needed. This is necessary because most OpenVINO™ toolkit plug-ins require the installation of additional drivers and programs, which are shipped with the OpenVINO™ toolkit.
- Download the installer for the OpenVINO™ toolkit from the Intel® website: OpenVINO™ toolkit download.
- Start the installation and follow the instructions of the installer.
Starting MERLIC in an OpenVINO™ Toolkit Environment
To use the OpenVINO™ toolkit for optimization, MERLIC has to be started in an OpenVINO™ toolkit environment as follows:
- Open a command prompt.
- Change to the "bin" directory of the OpenVINO™ toolkit installation directory.
- Run the following batch file: setupvars.bat
- Start MERLIC from the command line by calling the MERLIC executable file with the full path, e.g., "%PROGRAMFILES%\MVTec\MERLIC-26.03\bin\x64-win64\merlic_creator.exe".
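For a default installation, the complete sequence could look as follows; the paths are examples and depend on your OpenVINO™ toolkit and MERLIC installation directories: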
cd "C:\Program Files (x86)\Intel\openvino_2024\bin"
setupvars.bat
"C:\Program Files\MVTec\MERLIC-26.03\bin\x64-win64\merlic_creator.exe"
For more information on the installation and the configuration of the environment, see the documentation Get Started on the OpenVINO™ toolkit website.
Selecting Hardware with OpenVINO™ Toolkit Support in a MERLIC Tool
You can use the OpenVINO™ toolkit for inference optimization in the following MERLIC tools:
- Processing → Counting → Count with Deep Learning
- Processing → Deep Learning - AI → Classify Image
- Processing → Deep Learning - AI → Detect Anomalies
- Processing → Deep Learning - AI → Detect Anomalies in the Global Context
- Processing → Deep Learning - AI → Find Objects
- Processing → Deep Learning - AI → Segment Image Pixel-Precisely
- Processing → Reading → Read Text and Numbers with Deep Learning
The hardware for the optimization via the OpenVINO™ toolkit can be selected as follows in each of these MERLIC tools:
- Click on the parameter "Processing Unit" to display the list of available hardware.
- Select the desired hardware with OpenVINO™ toolkit support from the list. You can recognize the respective hardware by its name with the prefix "OpenVINO(TM)".
If the desired hardware is not shown with the "OpenVINO(TM)" prefix at the parameter "Processing Unit", make sure that all prerequisites regarding the installation and OpenVINO™ toolkit environment are met as described above.
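As a quick check whether your command prompt is actually running in an OpenVINO™ toolkit environment, you can inspect the environment variables set by setupvars.bat. This is a sketch; the variable INTEL_OPENVINO_DIR is an assumption based on common OpenVINO™ toolkit versions:
rem Check whether setupvars.bat has been run in this session
if defined INTEL_OPENVINO_DIR (echo OpenVINO environment: %INTEL_OPENVINO_DIR%) else (echo OpenVINO environment not set)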
Usage of Third-Party Libraries
The AI² interfaces depend on the third-party libraries of NVIDIA® TensorRT™ and the OpenVINO™ toolkit. See the file "third_party_licenses.txt" in the MERLIC installation directory or the "Help > Third-Party Licenses" dialog for copyright and license information.
For more information about the License Agreement for NVIDIA Software Development Kits, see the page License Agreement for NVIDIA Software Development Kits.
General legal information concerning the Intel® Distribution of OpenVINO™ toolkit can be found on the following website: Terms of Use for OpenVINO™
The End User License Agreements for Intel® Software Development Tools and other Intel® software development products can be found on the following website: Intel® End User License Agreements