Plugin for the OpenVINO Toolkit
MVTec
The plugin enables significantly faster deep learning inference on Intel hardware, including CPUs, GPUs, and VPUs. By expanding the range of supported hardware, users can draw on the performance of a wide variety of Intel devices to accelerate their deep learning applications and are no longer limited to a few specific device types.
Customers thus gain even more flexibility in their choice of hardware. At the same time, the integration works seamlessly and is not tied to hardware-specific details: simply by changing a parameter, the inference of an existing deep learning application can be executed on any device supported by the OpenVINO toolkit.
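In MVTec software, this device selection is exposed through the software's own parameters. Purely to illustrate the underlying idea of retargeting inference by changing a single device parameter, the following minimal sketch uses the OpenVINO toolkit's Python API directly rather than the MVTec plugin; the model path and device name are placeholders, and it assumes the openvino Python package (2023.0 or newer) and a model with a static input shape.

```python
# Minimal sketch: retargeting inference to another device by changing one parameter,
# shown with the OpenVINO Python API (not the MVTec plugin itself).
import numpy as np
import openvino as ov

core = ov.Core()
print("Available devices:", core.available_devices)  # e.g. ['CPU', 'GPU']

# "model.xml" is a placeholder path to an OpenVINO IR model.
model = core.read_model("model.xml")

# Changing only this string moves inference to another supported device,
# e.g. "GPU"; available names depend on the installed hardware and drivers.
device = "CPU"
compiled = core.compile_model(model, device_name=device)

# Run inference on dummy data shaped like the model's first input.
input_tensor = np.random.rand(*compiled.input(0).shape).astype(np.float32)
result = compiled([input_tensor])
```

The point mirrored from the text above is that the application logic stays unchanged; only the device parameter determines where inference runs.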