Getting Started with One AI for Object Detection

1. Create a New Project

Open the Project Creator by clicking on File -> New -> Project

New Project

Set Project options

New Project Options

You only need to specify the project name; the other settings matter only if you want to program an FPGA.
You can find more information about that here.


2. Initialize the AI Project

Make sure that the correct project is selected before you create a new AI project.
Navigate to the AI tab and choose “Open AI Generator”.

AI Generator Modal

Enter your AI Project Name and choose the AI Type.

Note: Currently, only “Image Detection” is available as the AI type.


3. Dataset Preparation - Build Your AI Foundation

Before training your model, you need to load and organize your image data. A clean and well-labeled dataset is the foundation for accurate AI performance.

3.1 Load Your Images

Access the Dataset tab in your ONE AI workspace to prepare your visual training data.

Training Data View

  • Use “Select Images” to load files directly from your device.
  • Select “Camera Tool” to capture and load images directly within ONE AI for real-time data collection.
  • Additional Settings - Use Preview Size to adjust how images display in your workspace for efficient labeling.

3.2 Choose Labeling Mode

Label Mode

  • Classification: Select a single class for each image. If your images are already sorted into folders that define the image class, classification is done automatically.
  • Annotation: Mark objects in the image by drawing boxes around objects/defects. Required for object detection.

3.3 Dataset Organization for AI Training

Proper dataset organization is crucial for building reliable AI models. Follow these steps to split your data effectively.

Training Set

The training set teaches your AI what to recognize - it's your model's foundation.
Use about 70% of your total dataset, with every image properly labeled. Ideally, it should include at least 50 images per class; more variety means better real-world performance.

Validation Set

The validation set monitors your model's performance on unseen data during training.
It evaluates performance without being directly involved in training. Labels are required for the validation set as well, so that the AI's performance on unseen data can be monitored while training.

Validation Setting

Using Validation Split: If you don't have separate validation images, you can enable "Use Validation Split" to auto-divide your training set:

  • 20% for standard datasets
  • 30% for small datasets
  • 10% for large datasets
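The split guidelines above can be sketched in code. A minimal example, assuming images are plain file paths and the ratio is chosen per the guidelines (the function name and seed are illustrative, not part of ONE AI):

```python
import random

def split_dataset(images, val_ratio=0.2, seed=42):
    """Shuffle and split a list of image paths into train/validation sets.

    val_ratio follows the guideline: 0.2 for standard datasets,
    0.3 for small ones, 0.1 for large ones.
    """
    rng = random.Random(seed)        # fixed seed for a reproducible split
    shuffled = images[:]
    rng.shuffle(shuffled)
    n_val = max(1, round(len(shuffled) * val_ratio))
    return shuffled[n_val:], shuffled[:n_val]   # (train, validation)

train, val = split_dataset([f"img_{i}.png" for i in range(100)], val_ratio=0.2)
```

Shuffling before splitting matters: if the images were recorded in order (e.g. all "defect" images last), a plain slice would put whole classes into only one of the sets.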

Test Set

The test set provides a final performance evaluation after training.
Keep this set completely separate from training and validation data. Labels are optional but recommended. Make sure it represents real deployment conditions for an objective accuracy measurement.

This organized approach ensures your AI model will be robust, accurate, and ready for real-world deployment with ONE AI.

Test Setting

If you don't have a separate test dataset, you can use images from the train or validation dataset to test the AI. Because ONE AI only uses the validation dataset to stop training when there is no more improvement, and not for hyperparameter tuning, the results should not be too far off if you use the validation dataset for the final evaluation.

3.4 Add Your Labels

Open the Labels tab and create labels for each class you want to detect, like "defect" or "strawberry". Assign unique colors to make annotation faster and easier.

Two Label Types

  • Classification - Categorize entire images with one label per image.
    Example: "defect" or "no defect" for quality control

    Labeled Image 1 Labeled Image 2
  • Object Detection - Mark specific objects by drawing bounding boxes. Multiple objects and labels possible per image.
    Example: Box individual "strawberry" or "foreign" objects

    yolo Image 1 yolo Image 2
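The difference between the two label types can be seen in how one labeled image is represented. A sketch with hypothetical field names (this is not ONE AI's actual storage format):

```python
# Classification: exactly one label for the whole image.
classification_label = {"image": "berry_01.png", "label": "no defect"}

# Object detection: any number of boxes, each with its own class.
# Boxes here are (x, y, width, height) in pixels.
detection_label = {
    "image": "berry_02.png",
    "objects": [
        {"label": "strawberry", "box": (34, 50, 120, 110)},
        {"label": "foreign",    "box": (210, 80, 40, 35)},
    ],
}
```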

4. Prefilters - Optimize Your Dataset

Apply prefilters before or after augmentation to optimize your dataset and boost model performance.

Prefilter View

When to Use Prefilters

  • Before Augmentation: Optimize your dataset for higher generalization and easier detection
  • After Augmentation: Some prefilters only reflect real-world performance if they are applied after augmentation

Resize Filter

Object resolution view Object resolution view

Adjust image resolution based on your smallest target objects.

Keep resolution high enough to preserve key details but avoid excessive size, as this increases prediction time and may reduce accuracy when the AI struggles with too much detail.
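One way to reason about this trade-off is to shrink the image only as far as the smallest object stays recognizable. A rough sketch (the 16-pixel minimum is an illustrative assumption, not a ONE AI default):

```python
def target_resolution(image_size, smallest_object_px, min_object_px=16):
    """Compute a downscaled resolution that keeps the smallest target
    object at least min_object_px pixels across.

    image_size         -- (width, height) of the original image
    smallest_object_px -- edge length of the smallest object, in pixels
    """
    if smallest_object_px <= min_object_px:
        return image_size                       # cannot shrink without losing the object
    scale = min_object_px / smallest_object_px  # largest shrink that keeps the object usable
    w, h = image_size
    return (round(w * scale), round(h * scale))
```

For example, a 1920x1080 image whose smallest defect spans 64 pixels could be shrunk by a factor of 4, cutting prediction time while keeping the defect detectable.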

Essential Prefilters

Color Enhancement

Object resolution view Object resolution view

Make objects stand out when they blend into similar backgrounds. Boost saturation, contrast, brightness, and hue to create clear visual separation. Add threshold filtering for sharp background removal while preserving critical object features.
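Conceptually, these enhancements are per-pixel transforms. A pure-Python sketch of brightness and contrast on 8-bit grayscale values (the mid-gray pivot of 128 is an illustrative choice):

```python
def adjust(pixels, brightness=0, contrast=1.0):
    """Apply contrast (gain around mid-gray 128) and brightness (offset)
    to a flat list of 8-bit grayscale values, clamping to 0..255."""
    out = []
    for p in pixels:
        v = (p - 128) * contrast + 128 + brightness
        out.append(max(0, min(255, round(v))))
    return out
```

Raising the contrast pushes values away from mid-gray in both directions, which is what creates the visual separation between an object and a similar background.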

Smart Cropping

Smart Crop

Eliminate visual clutter and zero in on your target areas. Focus on regions where objects consistently appear to cut background noise, sharpen detection accuracy, and accelerate training performance.

Frequency Filtering

Simplify images while preserving critical details.

  • Highpass - Removes the uniform background and keeps only the changes in the image

  • Lowpass - Smooths textures and removes visual noise that confuses models

    LowPass filter 1 LowPass filter 2
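The two filters are complementary: a lowpass keeps the smooth component of the signal, and a highpass is the original minus that smooth component. A 1-D sketch, with a simple box blur standing in for the real filter:

```python
def lowpass(values, radius=1):
    """Box blur: each sample becomes the mean of its neighborhood."""
    out = []
    for i in range(len(values)):
        lo, hi = max(0, i - radius), min(len(values), i + radius + 1)
        window = values[lo:hi]
        out.append(sum(window) / len(window))
    return out

def highpass(values, radius=1):
    """Original signal minus its smoothed version: only changes remain."""
    smooth = lowpass(values, radius)
    return [v - s for v, s in zip(values, smooth)]
```

A perfectly flat region produces zero highpass output, which is why the highpass result "removes the background" and highlights edges.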

Advanced Prefilter Options

  • Color Space Conversion - Switch between HSV and RGB for optimal color processing

  • Edge Sharpening - Emphasizes object boundaries for clearer detection

  • Threshold Processing - Creates high-contrast black and white images for specific applications

  • Dataset Normalization - Rescales the image's brightness so that the darkest pixels become black and the brightest pixels become white

  • Channel Filtering

    Channel Filter 1 Channel Filter 2

    Remove or isolate specific color channels (red, green, blue) when one introduces unwanted visual noise
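The Dataset Normalization behavior described above (darkest pixel becomes black, brightest becomes white) is a min-max rescale. A minimal sketch on a flat list of grayscale values:

```python
def normalize(pixels):
    """Rescale so the darkest value maps to 0 and the brightest to 255."""
    lo, hi = min(pixels), max(pixels)
    if hi == lo:
        return [0] * len(pixels)   # flat image: nothing to stretch
    return [round((p - lo) * 255 / (hi - lo)) for p in pixels]
```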


5. Augmentations

Augmentations are applied during training with random values within specified ranges. Their purpose is to increase the diversity of the training data, helping the AI to generalize better. By varying the training data it is possible to increase the size of the dataset without the need to record or annotate additional data. Furthermore, it is possible to make the AI model more robust against certain variations in the data by intentionally reproducing these variations with augmentations.
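The "random values within specified ranges" idea can be sketched as follows: for each training sample, a fresh parameter set is drawn from the configured ranges (the range values and field names here are illustrative, not ONE AI defaults):

```python
import random

def augment_params(rng, rotate_range=(-15, 15), shift_range=(-0.1, 0.1)):
    """Draw one random augmentation configuration from the given ranges.

    A new draw per training sample means the model rarely sees the exact
    same image twice, which is what improves generalization.
    """
    return {
        "rotate_deg": rng.uniform(*rotate_range),
        "shift_x":    rng.uniform(*shift_range),   # fraction of image width
        "shift_y":    rng.uniform(*shift_range),   # fraction of image height
        "flip":       rng.random() < 0.5,
    }

rng = random.Random(0)
params = augment_params(rng)
```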

Move Augmentation

Shifts the image along the X and Y axes within a specified range to introduce positional variability.

Move Augmentation

Rotate Augmentation

Define the range of angles within which the image can be rotated.

Rotation Augmentation

Flip Augmentation

Apply random flips to increase diversity.

Flip Augmentation

Resize Augmentation

Scales the image in different dimensions for better object size detection.

Resize Augmentation

Color Augmentation

Enhances object detection under varying lighting conditions by adjusting brightness, contrast, saturation and hue.

Color Augmentation

Frequency Augmentation

Use high- and lowpass filters to reduce noise and improve generalization.

Frequency Augmentation

Noise Augmentation

Add random noise to images to help the model become robust against real-world image imperfections.

Noise Augmentation

6. Model Settings

Tune Model Complexity

Optimize your model according to your specific requirements

In Classification Mode:

  • Classification Type - Select whether the classes should be detected independently of each other or whether each image always has exactly one class

In Annotation Mode:

  • Prediction Type - Select whether both the size and the position of objects should be detected, or only the position

You can also annotate objects in the dataset and then train only to detect which classes appear in the image, or the single class of the image. Compared to plain classification labels, this helps ONE AI choose the right AI model, and you can experiment with more detection types.

  • X/Y Precision (%) - Set the precision level for predicting coordinates
  • Size Precision (%) - Controls prediction of object sizes
  • Prioritize Precision - Adjust the model's balance between false positives and false negatives
  • Minimum FPS - Minimum predictions per second with selected hardware
  • Maximum Memory Usage (%) - Percentage of memory used for weights and calculations

FPGA related:

  • Maximum Multiplier Usage (%) - Limit the number of DSP elements of your FPGA that are used

  • FPGA Clock Speed (MHz) - Set the clock speed of your FPGA

    Model Tune Settings

Input Settings

  • Estimated Surrounding Min Width (%) - Estimate the minimum width of the area required to detect the smallest object correctly

  • Estimated Surrounding Min Height (%) - Estimate the minimum height of the area required to detect the smallest object correctly

  • Estimated Surrounding Max Width (%) - Estimate the maximum width of the area required to detect the largest object correctly

  • Estimated Surrounding Max Height (%) - Estimate the maximum height of the area required to detect the largest object correctly

  • Same Class Difference - Estimate how different the objects within one class are

  • Background Difference - Estimate how different the backgrounds are across the images

  • Detect Simplicity (%) - Estimate how easy it is to detect the object class

    Model Input Settings

In Classification Mode:

  • Estimated Min Object Width (%) - The width of the smallest object or area used for classification
  • Estimated Min Object Height (%) - The height of the smallest object or area used for classification
  • Estimated Average Object Width (%) - The width of the average object or area used for classification
  • Estimated Average Object Height (%) - The height of the average object or area used for classification
  • Estimated Max Object Width (%) - The width of the largest object or area used for classification
  • Estimated Max Object Height (%) - The height of the largest object or area used for classification
  • Maximum Number of Features for Classification - The maximum number of features used for classification
  • Average Number of Features for Classification - The average number of features used for classification

7. Hardware Settings

Select or define hardware resources to create a model that is optimized for your hardware.

Used Hardware

Choose the hardware that is used to run the AI model.

Advanced Settings

  • Hardware Type - Select the hardware type
  • Prioritize Speed Optimization
  • Compute Capability - Specify the computational power of your hardware
  • Prioritize Memory Optimization - Enable this if your hardware, such as an FPGA with limited internal RAM, requires efficient memory usage; this can give higher accuracy with fewer model parameters
  • Memory Limit - Define the available memory
  • Optimize for Parallel Execution - Enable this option for FPGA/ASIC parallel architectures
  • Quantized Calculations - Enable quantization to boost performance. This can slightly reduce accuracy but significantly increases speed. For most applications, especially on microcontrollers, TPUs, FPGAs, or ASICs, quantization is highly recommended.
  • Bits per Value - Set precision level for neural network calculations
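Quantization maps floating-point weights onto a small integer grid, trading a little precision for much cheaper arithmetic. A symmetric 8-bit sketch (the scaling scheme is illustrative; ONE AI's internal scheme may differ):

```python
def quantize(weights, bits=8):
    """Symmetric linear quantization of float weights to signed integers."""
    qmax = 2 ** (bits - 1) - 1                       # e.g. 127 for 8 bits
    scale = max(abs(w) for w in weights) / qmax or 1.0
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Map the integers back to approximate float values."""
    return [v * scale for v in q]

q, scale = quantize([0.5, -1.0, 0.25])
restored = dequantize(q, scale)
```

Each weight is now a small integer plus one shared scale factor, which is why quantized models need less memory and run much faster on integer hardware such as microcontrollers and FPGAs.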

8. Training

For these steps, you need to be connected to the ONE AI Cloud.

Ensure that your training data is uploaded, labeled, and properly prepared. This includes applying any necessary prefilters and selecting the most effective augmentations. Once your data is ready, double-check your model and hardware settings before starting the training process.

Create

You can train different AI models for the same project, so you can test out different configurations.

Train

Test

You can test the AI with your test data.

Test

Export

Create Tool

Choose the tool format based on target hardware and application needs.

Model Settings