Hailo Toolbox Quick Start Guide

This document introduces how to install and use Hailo Toolbox, a comprehensive toolkit for deep learning model conversion and inference. It covers everything from basic installation to advanced usage.

Table of Contents

  • System Requirements
  • Installation
  • Verify Installation
  • Model Conversion
  • Model Inference

System Requirements

Basic Requirements

  • Python Version: 3.8 ≤ Python < 3.12
  • Operating System: Linux (Ubuntu 18.04+ recommended), Windows 10+
  • Memory: At least 8GB RAM (16GB+ recommended)
  • Storage: At least 2GB available space

Hailo-Specific Requirements

  • Hailo Dataflow Compiler: Required for model conversion (x86 architecture and Linux only); refer to the installation tutorial
  • HailoRT: Required for inference; refer to the installation tutorial
  • Hailo Hardware: Required for hardware-accelerated inference
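
If you are unsure whether the two software packages are visible to Python, the quick check below may help. It assumes HailoRT's Python bindings are distributed as hailo_platform and the Dataflow Compiler as hailo_sdk_client, which matches Hailo's usual packaging but should be confirmed against your installation:

import importlib.util

# Probe for the Hailo Python packages without fully importing them.
# The package names are assumptions based on Hailo's typical distributions.
for name, purpose in [
    ("hailo_platform", "HailoRT (inference)"),
    ("hailo_sdk_client", "Dataflow Compiler (conversion)"),
]:
    found = importlib.util.find_spec(name) is not None
    print(f"{name} ({purpose}): {'found' if found else 'missing'}")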

Python Dependencies

The following core dependencies are installed automatically:

opencv-python>=4.5.0
numpy<2.0.0
requests>=2.25.0
matplotlib>=3.3.0
onnx
onnxruntime
pillow
pyyaml
tqdm

Installation

Install from Source

# Clone project source code
git clone https://github.com/Seeed-Projects/hailo_toolbox.git

# Enter project directory
cd hailo_toolbox

# Install project (development mode)
pip install -e .

# Or install directly
pip install .

Install in a Virtual Environment

# Create virtual environment
python -m venv hailo_env

# Activate virtual environment
# Linux/macOS:
source hailo_env/bin/activate
# Windows:
hailo_env\Scripts\activate

# Install project
git clone https://github.com/Seeed-Projects/hailo_toolbox.git
cd hailo_toolbox
pip install -e .

Verify Installation

After installing, verify that everything works with the following commands:

# Check version information
hailo-toolbox --version

# View help information
hailo-toolbox --help

# View conversion functionality help
hailo-toolbox convert --help

# View inference functionality help
hailo-toolbox infer --help
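
You can also verify the package from a Python interpreter. A minimal sketch (assuming the package exposes a __version__ attribute; if it does not, a successful import alone confirms the installation):

# Confirm the package imports; print its version if one is exposed
import hailo_toolbox

print(getattr(hailo_toolbox, "__version__", "installed (no __version__ attribute)"))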

Model Conversion

Hailo Toolbox supports converting models from various deep learning frameworks to efficient .hef format for running on Hailo hardware.

Supported Model Formats

| Framework | Format | Supported | Target Format | Notes |
|-----------|--------|-----------|---------------|-------|
| ONNX | .onnx | ✓ | .hef | Recommended format |
| TensorFlow | .h5 | ✓ | .hef | Keras models |
| TensorFlow | SavedModel (.pb) | ✓ | .hef | TensorFlow SavedModel |
| TensorFlow Lite | .tflite | ✓ | .hef | Mobile models |
| PyTorch | .pt (TorchScript) | ✓ | .hef | TorchScript models |
| PaddlePaddle | inference model | ✓ | .hef | PaddlePaddle inference models |
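
Since ONNX is the recommended format, you will often export from your training framework first. A minimal sketch of a PyTorch-to-ONNX export (torch and torchvision are assumptions here, not toolbox dependencies; the model and shapes are placeholders):

import torch
import torchvision

# Build a sample model; any torch.nn.Module exported the same way will work
model = torchvision.models.resnet18(weights=None).eval()

# Dummy input fixing the shape the exported graph will expect
dummy = torch.randn(1, 3, 224, 224)

torch.onnx.export(
    model,
    dummy,
    "model.onnx",
    input_names=["input"],
    output_names=["output"],
    opset_version=11,
)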

Basic Conversion Commands

# View conversion help
hailo-toolbox convert --help

# Basic conversion (ONNX to HEF)
hailo-toolbox convert model.onnx --hw-arch hailo8

# Complete conversion example
hailo-toolbox convert model.onnx \
--hw-arch hailo8 \
--input-shape 320,320,3 \
--save-onnx \
--output-dir outputs \
--profile \
--calib-set-path ./calibration_images
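
The --calib-set-path option above points at a folder of sample images used for quantization calibration. A minimal sketch for assembling one from an existing image directory (the source folder name and target size are placeholders; match the size to your model's input shape):

import os
import cv2

SRC_DIR = "./raw_images"          # hypothetical folder of representative images
DST_DIR = "./calibration_images"  # folder passed to --calib-set-path
os.makedirs(DST_DIR, exist_ok=True)

count = 0
for name in sorted(os.listdir(SRC_DIR)):
    img = cv2.imread(os.path.join(SRC_DIR, name))
    if img is None:               # skip files OpenCV cannot read
        continue
    img = cv2.resize(img, (320, 320))   # match --input-shape 320,320,3
    cv2.imwrite(os.path.join(DST_DIR, f"calib_{count:04d}.jpg"), img)
    count += 1

print(f"Wrote {count} calibration images to {DST_DIR}")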

Conversion Parameter Details

| Parameter | Required | Default | Description | Example |
|-----------|----------|---------|-------------|---------|
| model | Yes | - | Model file path to convert | model.onnx |
| --hw-arch | No | hailo8 | Target Hailo hardware architecture | hailo8, hailo8l, hailo15, hailo15l |
| --calib-set-path | No | None | Calibration dataset folder path | ./calibration_data/ |
| --use-random-calib-set | No | False | Use random data for calibration | - |
| --calib-set-size | No | None | Calibration dataset size | 100 |
| --model-script | No | None | Custom model script path | ./custom_script.py |
| --end-nodes | No | None | Specify model output nodes | output1,output2 |
| --input-shape | No | [640,640,3] | Model input shape | 320,320,3 |
| --save-onnx | No | False | Save compiled ONNX file | - |
| --output-dir | No | Same as model directory | Output file save directory | ./outputs/ |
| --profile | No | False | Generate performance analysis report | - |
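
To find suitable values for --input-shape and --end-nodes, you can inspect the ONNX graph directly; the onnx package is already a core dependency. A minimal sketch (the file name is a placeholder):

import onnx

model = onnx.load("model.onnx")

# Declared input shapes (informs --input-shape)
for inp in model.graph.input:
    dims = [d.dim_value if d.dim_value > 0 else d.dim_param
            for d in inp.type.tensor_type.shape.dim]
    print(f"input {inp.name}: {dims}")

# Graph output names (candidates for --end-nodes)
for out in model.graph.output:
    print(f"output: {out.name}")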

Model Inference

Hailo Toolbox provides flexible inference interfaces supporting various input sources and output formats.

Inference Examples

# Enter the examples directory
cd examples

# Basic inference example
python Hailo_Object_Detection.py

Supported Input Source Types

| Input Source Type | Format | Example | Description |
|-------------------|--------|---------|-------------|
| Image files | jpg, png, bmp, etc. | image.jpg | Single image inference |
| Image folders | Directory path | ./images/ | Batch image inference |
| Video files | mp4, avi, mov, etc. | video.mp4 | Video file inference |
| USB cameras | Device ID | 0, 1 | Real-time camera inference |
| IP cameras | RTSP/HTTP stream | rtsp://ip:port/stream | Network camera inference |
| Network video streams | URL | http://example.com/stream | Online video stream inference |
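
Each of these source types is opened the same way through create_source; only the argument changes. A short sketch (the paths and URLs are placeholders, and whether a camera ID is passed as an integer or a string may depend on the toolbox version):

from hailo_toolbox import create_source

# Each call returns an iterable of frames, regardless of the source type
source = create_source("image.jpg")                  # single image file
source = create_source("./images/")                  # image folder
source = create_source("video.mp4")                  # video file
source = create_source(0)                            # USB camera by device ID
source = create_source("rtsp://ip:port/stream")      # IP camera
source = create_source("http://example.com/stream")  # network video stream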

Code Explanation

An annotated walkthrough of the example script:

from hailo_toolbox import create_source    # API for loading input sources
from hailo_toolbox.models import ModelsZoo # Model zoo
from hailo_toolbox.process.visualization import DetectionVisualization # Pre-built object detection visualizer
import cv2 # OpenCV

if __name__ == "__main__":
    # Create the model input source (here, a network video stream)
    source = create_source(
        "https://hailo-csdata.s3.eu-west-2.amazonaws.com/resources/video/example.mp4"
    )

    # Load the YOLOv8s model from the object detection task family
    inference = ModelsZoo.detection.yolov8s()
    # Load the visualization module
    visualization = DetectionVisualization()

    # Read the input source frame by frame
    for img in source:
        # Run inference; the module applies the pre- and postprocessing defined
        # by the model configuration and wraps the output in ready-to-use results
        results = inference.predict(img)
        # The model accepts multiple images for simultaneous inference, so the
        # return value holds one result per input image
        for result in results:
            # Visualize the inference results
            img = visualization.visualize(img, result)
            cv2.imshow("Detection", img)
            cv2.waitKey(1)
            # print(f"Detected {len(result)} objects")

            # Predicted bounding boxes for the current image
            boxes = result.get_boxes()
            # Predicted confidence scores for the current image
            scores = result.get_scores()
            # Predicted class IDs for the current image
            class_ids = result.get_class_ids()

            # Show the first 5 detection results
            for i in range(min(5, len(result))):
                print(
                    f" Object{i}: bbox{boxes[i]}, score{scores[i]:.3f}, class{class_ids[i]}"
                )
            print("---")