OpenVINO™ for Deep Learning¶
This tutorial shows how to install OpenVINO™ on Clear Linux* OS, run an OpenVINO sample application for image classification, and run benchmark_app to estimate inference performance, using SqueezeNet 1.1.
Prerequisites¶
- Clear Linux OS installed on the host system
Install OpenVINO¶
OpenVINO in Clear Linux OS offers pre-built OpenVINO sample applications with which developers can try inferencing immediately.
In Clear Linux OS, OpenVINO is included in the computer-vision-basic bundle. To install OpenVINO, enter:
sudo swupd bundle-add computer-vision-basic
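To confirm the bundle was added, you can check the installed-bundle list. A minimal sketch using standard swupd commands:

# List installed bundles and look for computer-vision-basic
sudo swupd bundle-list | grep computer-vision-basic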
OpenVINO Inference Engine libraries are located in
/usr/lib64/
To verify one of the added libraries, enter:
ls /usr/lib64/libinference_engine.so
If bundle installation is successful, the output shows:
/usr/lib64/libinference_engine.so
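The bundle installs additional Inference Engine libraries alongside it. A quick way to list them, using standard shell tools:

ls /usr/lib64/ | grep -i inference_engine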
To view the OpenVINO Model Optimizer, enter:
ls /usr/share/openvino/model-optimizer
To view the OpenVINO sample application executables, enter:
ls /usr/bin/benchmark_app \
   /usr/bin/classification_sample_async \
   /usr/bin/hello_classification \
   /usr/bin/hello_nv12_input_classification \
   /usr/bin/hello_query_device \
   /usr/bin/hello_reshape_ssd \
   /usr/bin/object_detection_sample_ssd \
   /usr/bin/speech_sample \
   /usr/bin/style_transfer_sample
Note
If bundle installation is successful, the above files should appear.
To view the pre-built OpenVINO sample application source code, enter:
ls /usr/share/doc/inference_engine/samples
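If you want to browse or modify the sample sources, you can copy them into a writable location first. A minimal sketch; the destination directory name is only an example:

# Copy the sample sources into your home directory for easy browsing
cp -r /usr/share/doc/inference_engine/samples ~/openvino-samples
ls ~/openvino-samples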
In the next section, you learn how to use an OpenVINO sample application.
Run OpenVINO sample application¶
After installing OpenVINO on Clear Linux OS, you need a model to test against. In this example, we use the public SqueezeNet 1.1 model for image classification. Test results vary based on the system used.
Use a model to test¶
If you don’t have a model, you can download an Intel® model or a public model using the OpenVINO Model Downloader.
- Check the list of public models you can download from /usr/share/open_model_zoo/models/public
- Check the list of Intel® models you can download from /usr/share/open_model_zoo/intel_models
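For example, you can list those directories directly to see the available model names (a simple sketch):

ls /usr/share/open_model_zoo/models/public
ls /usr/share/open_model_zoo/intel_models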
Change to the directory containing the OpenVINO Model Downloader:
cd /usr/share/open_model_zoo/tools/downloader
In general, download models with the following command:
python3 downloader.py --name <model_name> -o <downloading_path>
Note
- Where <model_name> is the model you chose in the previous step
- Where <downloading_path> is your project directory
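If you are unsure of the exact model name, the Model Downloader can print the names it knows about. A sketch; this flag is present in recent Open Model Zoo releases but may vary by version:

python3 downloader.py --print_all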
For this example, enter:
python3 downloader.py --name squeezenet1.1 -o $HOME/.
After running this command, the model is downloaded to $HOME/classification/squeezenet/1.1/caffe, with output such as:
###############|| Downloading topologies ||###############
========= Downloading /$HOME/classification/squeezenet/1.1/caffe/squeezenet1.1.caffemodel
... 100%, 4834 KB, 2839 KB/s, 1 seconds passed ...
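You can confirm the download by listing the target directory shown in the output above (a simple sketch):

ls $HOME/classification/squeezenet/1.1/caffe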
Convert model to IR format¶
As necessary, follow the instructions in Convert deep learning models to convert your model to IR format.
Navigate to the model:
cd $HOME/classification/squeezenet/1.1/caffe
Enter the command:
python3 /usr/share/openvino/model-optimizer/mo.py --input_model squeezenet1.1.caffemodel
The output will show these files being generated:
squeezenet1.1.xml
squeezenet1.1.bin
Finally, enter ls to view the newly added model and files.
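The Model Optimizer also accepts options to control where the IR files are written and their precision. A hedged sketch with example values; the output directory below is only an illustration, and exact flags can vary by OpenVINO release:

python3 /usr/share/openvino/model-optimizer/mo.py \
    --input_model squeezenet1.1.caffemodel \
    --data_type FP16 \
    --output_dir $HOME/classification/squeezenet/1.1/caffe/FP16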
Run image classification¶
This sample application demonstrates how to run image classification in asynchronous mode on supported devices. In this example, we use an image of a specific type of automobile to test the inference engine. SqueezeNet 1.1 is designed to perform image classification and has been trained on the ImageNet database.
We provide an image of an automobile, shown in Figure 1. For ease of use, save this image into the classification model directory.
To execute the sample application, enter the command:
classification_sample_async -i <path_to_image> -m <path_to_model_ir> -d <device>
Note
- Where <path_to_image> is the image that you selected
- Where <path_to_model_ir> is the path to the IR model file
- Where <device> is your choice of CPU, GPU, etc.
In this case, we replace <path_to_image> with the previously saved image and run CPU inferencing:
classification_sample_async -i ./automobile.png -m squeezenet1.1.xml
Note
If you do not specify the device, the CPU is used by default.
The results show the highest probability is 67% for a sports car.
classid probability
------- -----------
817     0.6717085
511     0.1611409

classid 817 sports car, sport car
classid 511 convertible

Next, add -d GPU to the end of the above command for GPU inferencing.
classification_sample_async -i ./automobile.png -m squeezenet1.1.xml -d GPU
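If you are unsure which devices are available for the -d option, the hello_query_device sample installed above lists the devices the Inference Engine detects (a simple sketch):

hello_query_device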
Run benchmark_app¶
This sample application demonstrates how to use the benchmark application to estimate deep learning inference performance on supported devices. We use the same image of an automobile, Figure 1, from the previous section.
To execute this sample application, enter:
benchmark_app -i <path_to_image> -m <path_to_model> -d <device>
Note
- Where <path_to_image> is the image that you selected
- Where <path_to_model> is the path to the IR model file
- Where <device> is your choice of CPU, GPU, etc.
Change directory:
cd $HOME/classification/squeezenet/1.1/caffe
Enter the following command for CPU inferencing.
benchmark_app -i ./automobile.png -m squeezenet1.1.xml
For the CPU, the results show a Throughput of 243.202 FPS.
Count:      1464 iterations
Duration:   60196.8 ms
Latency:    164.104 ms
Throughput: 243.202 FPS
Next, add -d GPU to the end of the same command for GPU inferencing.
benchmark_app -i ./automobile.png -m squeezenet1.1.xml -d GPU
For the GPU, the results show a Throughput of 372.677 FPS.
Count:      2240 iterations
Duration:   60105.7 ms
Latency:    107.554 ms
Throughput: 372.677 FPS
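benchmark_app also accepts options to control the run itself, such as the test duration, API mode, and batch size. A hedged sketch with example values; exact flags can vary by OpenVINO release:

# Run for 30 seconds in synchronous mode with batch size 1 (example values)
benchmark_app -i ./automobile.png -m squeezenet1.1.xml -d CPU -t 30 -api sync -b 1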