
Integrate ONE AI Models (ONNX / TensorFlow Lite)

This guide explains how to use the AI models exported by ONE AI. ONE AI can export models in the ONNX and TensorFlow Lite formats, which can be integrated into virtually any application or platform.

Export Settings

When exporting your model, make sure to enable "Include Pre- and Postprocessing" in the export options. This simplifies integration by embedding all necessary preprocessing (normalization, resizing) and postprocessing (result interpretation) directly into the model.

[Image: Enable Pre- and Postprocessing export option]

Input Format

Single Image Input

When your model uses a single image as input, the input tensor has the following shape:

[1, height, width, 3]
| Dimension | Description |
| --- | --- |
| 1 | Batch size (always 1 for inference) |
| height | Image height in pixels (as configured during training) |
| width | Image width in pixels (as configured during training) |
| 3 | RGB color channels |

Data Type: float32 with values in the range [0, 255] (when preprocessing is included)
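
The snippet below is a minimal sketch of how such an input tensor could be prepared in Python with Pillow and NumPy. The 224×224 input size and the file name are placeholders; substitute the width and height you configured during training.

```python
import numpy as np
from PIL import Image

MODEL_WIDTH, MODEL_HEIGHT = 224, 224  # placeholder: use the size configured during training

# Load the image, force RGB, and resize it to the model input size
image = Image.open("example.jpg").convert("RGB").resize((MODEL_WIDTH, MODEL_HEIGHT))

# float32 tensor of shape [1, height, width, 3] with raw 0-255 values
# (no extra normalization when pre- and postprocessing are included in the export)
input_tensor = np.asarray(image, dtype=np.float32)[np.newaxis, ...]
print(input_tensor.shape)  # (1, 224, 224, 3)
```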

Multiple Image Input

For models that compare multiple images (e.g., difference detection), the input tensor shape is:

[1, height, width, 3, image_count]
| Dimension | Description |
| --- | --- |
| 1 | Batch size (always 1 for inference) |
| height | Image height in pixels |
| width | Image width in pixels |
| 3 | RGB color channels |
| image_count | Number of input images |
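
As a sketch, the individual images can be loaded as shown above and stacked along a new trailing axis to obtain this shape; the file names and the 224×224 input size are placeholders.

```python
import numpy as np
from PIL import Image

MODEL_WIDTH, MODEL_HEIGHT = 224, 224  # placeholder: use the size configured during training

def load_image(path):
    img = Image.open(path).convert("RGB").resize((MODEL_WIDTH, MODEL_HEIGHT))
    return np.asarray(img, dtype=np.float32)  # [height, width, 3]

# Stack the images along a new trailing axis, then add the batch dimension
images = np.stack([load_image("reference.jpg"), load_image("current.jpg")], axis=-1)
input_tensor = images[np.newaxis, ...]  # [1, height, width, 3, 2]
```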

Output Format

The output format depends on the task type configured during training.

Classification

For image classification tasks, the output tensor has the shape:

[1, detected_classes, 2]
| Index | Description |
| --- | --- |
| 0 | Confidence value (0.0 - 1.0) |
| 1 | Class ID |

Example Output:

[[0.95, 0], [0.03, 1], [0.02, 2]]

This means: Class 0 with 95% confidence, Class 1 with 3% confidence, Class 2 with 2% confidence.
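
A minimal parsing sketch, using the example output above and a hypothetical list of class names:

```python
import numpy as np

output = np.array([[[0.95, 0], [0.03, 1], [0.02, 2]]])  # example classification output
class_names = ["class_a", "class_b", "class_c"]          # placeholder: your own labels

predictions = output[0]                                  # rows of [confidence, class_id]
confidence, class_id = max(predictions, key=lambda row: row[0])
print(f"{class_names[int(class_id)]} with {confidence:.0%} confidence")
```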

Object Detection

For object detection tasks, the output tensor has the shape:

[1, detected_objects, 6]
| Index | Description |
| --- | --- |
| 0 | X center (in pixels, relative to the model input size) |
| 1 | Y center (in pixels, relative to the model input size) |
| 2 | Width (in pixels) |
| 3 | Height (in pixels) |
| 4 | Confidence value (0.0 - 1.0) |
| 5 | Class ID |

Example Output:

[[128, 96, 64, 48, 0.92, 1]]

This means: Object of Class 1 at center position (128px, 96px) with size (64px × 48px) and 92% confidence.
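
A minimal parsing sketch, using the example output above; the confidence threshold is a placeholder value you should tune for your application.

```python
import numpy as np

output = np.array([[[128, 96, 64, 48, 0.92, 1]]], dtype=np.float32)  # example detection output
CONF_THRESHOLD = 0.5  # placeholder: choose a threshold that suits your application

for x_center, y_center, width, height, confidence, class_id in output[0]:
    if confidence < CONF_THRESHOLD:
        continue
    # Convert center/size to a top-left / bottom-right box (still in model-input pixels)
    x1, y1 = x_center - width / 2, y_center - height / 2
    x2, y2 = x_center + width / 2, y_center + height / 2
    print(f"class {int(class_id)}: ({x1:.0f}, {y1:.0f}) - ({x2:.0f}, {y2:.0f}), {confidence:.0%}")
```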

Important: Coordinate Reference

The pixel coordinates in the output refer to the model's input dimensions, not your original image size. If you resize or crop your image before feeding it into the model, you need to transform the coordinates back to your original image space.
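
A minimal sketch of this back-transformation, assuming the image was only resized (no cropping or letterboxing); the 224×224 model input and the 1920×1080 original size are placeholders.

```python
MODEL_WIDTH, MODEL_HEIGHT = 224, 224          # placeholder: model input size
ORIGINAL_WIDTH, ORIGINAL_HEIGHT = 1920, 1080  # placeholder: original image size

scale_x = ORIGINAL_WIDTH / MODEL_WIDTH
scale_y = ORIGINAL_HEIGHT / MODEL_HEIGHT

# One detection in model-input pixels (example values from above)
x_center, y_center, width, height = 128, 96, 64, 48

# The same box expressed in original-image pixels
x_center_orig, y_center_orig = x_center * scale_x, y_center * scale_y
width_orig, height_orig = width * scale_x, height * scale_y
```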

Segmentation

For semantic segmentation tasks, the output tensor has the shape:

[1, height, width, 1]

Each pixel position contains the predicted Class ID for that location.

Example: A 256x256 segmentation output would be a tensor of shape [1, 256, 256, 1] where each value represents the class at that pixel.
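
A minimal sketch of working with such an output, for example to count how many pixels were assigned to each class; the zero-filled tensor is only a placeholder for the real model result.

```python
import numpy as np

output = np.zeros((1, 256, 256, 1), dtype=np.int64)  # placeholder for the real segmentation output

mask = output[0, :, :, 0]  # [height, width] array of class IDs

# Count how many pixels belong to each class, e.g. to estimate area coverage
class_ids, pixel_counts = np.unique(mask, return_counts=True)
for class_id, count in zip(class_ids, pixel_counts):
    print(f"class {int(class_id)}: {count} pixels")
```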


Integration Options

C# SDK

We provide a ready-to-use C# SDK that handles all the complexity of model loading, inference, and result parsing.

C++ Project or Executable

For deployment on embedded systems, on servers, or whenever you need maximum performance, ONE AI can export a complete C++ project or a precompiled executable based on TensorFlow Lite.

Direct Integration

For custom integrations, you can use the standard ONNX or TensorFlow Lite runtimes:

ONNX Runtime
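
A minimal inference sketch with the ONNX Runtime Python API; the file name and the zero-filled input tensor are placeholders for your exported model and an input prepared as described under "Input Format".

```python
import numpy as np
import onnxruntime as ort

# Placeholder input; prepare the real tensor as described under "Input Format"
input_tensor = np.zeros((1, 224, 224, 3), dtype=np.float32)

session = ort.InferenceSession("model.onnx")  # placeholder: path to your exported model
input_name = session.get_inputs()[0].name
output = session.run(None, {input_name: input_tensor})[0]
print(output.shape)
```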

TensorFlow Lite
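
A minimal inference sketch with the TensorFlow Lite interpreter (the same API is also available in the lighter tflite-runtime package); again, the file name and input tensor are placeholders.

```python
import numpy as np
import tensorflow as tf

# Placeholder input; prepare the real tensor as described under "Input Format"
input_tensor = np.zeros((1, 224, 224, 3), dtype=np.float32)

interpreter = tf.lite.Interpreter(model_path="model.tflite")  # placeholder: path to your exported model
interpreter.allocate_tensors()

input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()

interpreter.set_tensor(input_details[0]["index"], input_tensor)
interpreter.invoke()
output = interpreter.get_tensor(output_details[0]["index"])
print(output.shape)
```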