
Wildfire Segmentation on FPGA

Try it yourself

To try the AI in One Ware Studio, simply click on the Try Demo button below. If you don't have an account yet, you will be prompted to sign up.

The Download Project button provides the files you need to run the model locally on your FPGA or PC.

About this demo

This demo evaluates a wildfire segmentation model for drone imagery, deployed on an Intel Agilex 3 FPGA (ACX3000 board with an A3CY100BM16AE7S device).

The setup uses a 128×128 input resolution at a 100 MHz clock frequency, demonstrating real-time segmentation for autonomous drone surveillance systems, where wildfire detection must happen onboard with minimal latency.

The model performs single-class segmentation (wildfire vs. background) on grayscale input and is optimized for power-constrained edge deployment on FPGAs.

Comparison: DeepLabv3+ vs UNet vs Custom CNN

To evaluate FPGA deployment feasibility, we compared classical segmentation baselines with a task-specific model generated by ONE AI.

The Challenge with Universal Models

UNet requires 322.98 GFLOPs and 28.24M parameters, making FPGA synthesis impractical for real-time drone applications.

DeepLabv3+ (MobileNet backbone) reduces complexity to 17.63 GFLOPs and 10.99M parameters, but still exceeds typical FPGA resource budgets for embedded vision systems.

The Solution with ONE AI

Using ONE AI, we generated a custom CNN optimized for this exact use case: single-class wildfire segmentation, 128×128 grayscale input, and FPGA deployment.

The resulting architecture has only 7,705 parameters and requires just 0.05 GFLOPs, achieving 95.2 FPS on Intel Agilex 3 at 100 MHz while maintaining segmentation quality suitable for fire detection.
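As a back-of-envelope sanity check on these figures (assuming one frame is processed end-to-end per 1/95.2 s, which the demo does not state explicitly), the cycle budget and per-frame latency follow directly from the clock and frame rate:

```python
# Derived throughput figures for the ONE AI custom CNN.
# The clock and FPS come from the demo text; cycles per frame
# and latency are computed, not measured.

CLOCK_HZ = 100e6   # Intel Agilex 3 clock from the demo setup
FPGA_FPS = 95.2    # reported max FPGA speed

cycles_per_frame = CLOCK_HZ / FPGA_FPS   # ~1.05 million clock cycles
latency_ms = 1_000 / FPGA_FPS            # ~10.5 ms per frame

print(f"{cycles_per_frame:,.0f} cycles/frame, {latency_ms:.1f} ms latency")
```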


Segmentation result with ONE AI on FPGA.

Results

| Model      | Parameters | GFLOPs | Avg. CPU Speed | Max. FPGA Speed* |
|------------|------------|--------|----------------|------------------|
| ONE AI     | 0.01 M     | 0.05   | ~600 FPS **    | 95.2 FPS         |
| DeepLabv3+ | 10.99 M    | 17.63  | 12.7 FPS       | –                |
| UNet       | 28.24 M    | 322.98 | 0.44 FPS       | –                |

* On Intel Agilex 3 @ 100 MHz

** A quantized TFLite model was used for the CPU benchmark.
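The relative reductions can be computed directly from the table above (a quick derivation from the reported figures, not an additional benchmark):

```python
# Compute and parameter ratios of the baselines relative to the
# ONE AI model, using the figures reported in the table above.

models = {
    "ONE AI":     {"params_m": 0.01,  "gflops": 0.05},
    "DeepLabv3+": {"params_m": 10.99, "gflops": 17.63},
    "UNet":       {"params_m": 28.24, "gflops": 322.98},
}

ref = models["ONE AI"]
for name, m in models.items():
    if name == "ONE AI":
        continue
    print(f"{name}: {m['gflops'] / ref['gflops']:.0f}x more GFLOPs, "
          f"{m['params_m'] / ref['params_m']:.0f}x more parameters")
```

By this measure, UNet needs roughly 6,460× the compute of the ONE AI model, and DeepLabv3+ roughly 353×.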

FPGA Resource Usage

Device: Intel Agilex 3 (A3CY100BM16AE7S)

| Resource     | Usage          | Utilization |
|--------------|----------------|-------------|
| Logic (ALMs) | 3,167 / 34,000 | 9%          |
| RAM          | –              | 15%         |
| DSP Blocks   | 53 / 138       | 38%         |
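The rounded percentages in the table follow from the raw usage counts (RAM is omitted here, since only a percentage is reported for it):

```python
# Recompute the utilization percentages from the raw counts
# reported in the resource table above.

resources = {
    "Logic (ALMs)": (3_167, 34_000),
    "DSP Blocks":   (53, 138),
}

for name, (used, total) in resources.items():
    pct = 100 * used / total
    print(f"{name}: {used}/{total} = {pct:.1f}%")
```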

The low resource footprint enables integration with additional processing pipelines (image preprocessing, post-processing, communication interfaces) on the same FPGA fabric.

Model Configuration

Further Details

A more detailed walkthrough of how to obtain a segmentation model with ONE AI can be found in this guide.

The following ONE AI plugin settings were used for this FPGA-targeted wildfire segmentation pipeline.

Data Processing

Preprocessing settings for grayscale drone imagery

The model uses grayscale input (single channel) to reduce computational complexity. Images are resized to 128×128 and converted to grayscale before augmentation.
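A minimal sketch of an equivalent preprocessing step, using NumPy only. The BT.601 luminance weights and nearest-neighbor resize are assumptions for illustration; the demo only states that images are resized to 128×128 and converted to grayscale, and ONE AI's actual pipeline may differ:

```python
import numpy as np

def preprocess(rgb: np.ndarray, size: int = 128) -> np.ndarray:
    """Convert an HxWx3 uint8 RGB image to a size x size grayscale float array."""
    # Luminance conversion (BT.601 weights -- an assumption; the demo
    # only says images are converted to grayscale).
    gray = rgb @ np.array([0.299, 0.587, 0.114])
    # Nearest-neighbor resize to the model's input resolution
    # (also an assumption; any standard resize would work here).
    h, w = gray.shape
    rows = np.arange(size) * h // size
    cols = np.arange(size) * w // size
    resized = gray[np.ix_(rows, cols)]
    # Scale to [0, 1] for a single-channel model input.
    return (resized / 255.0).astype(np.float32)

# Example: a synthetic 480x640 RGB frame standing in for a drone image.
frame = np.random.randint(0, 256, (480, 640, 3), dtype=np.uint8)
x = preprocess(frame)
print(x.shape, x.dtype)  # (128, 128) float32
```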

Model Settings

Model settings for wildfire segmentation

The segmentation output resolution was set to 100% for pixel-accurate masks, while the model architecture remains compact enough for FPGA synthesis at 100 MHz clock frequency.

Conclusion

This setup targets real-time drone surveillance with a single segmentation class (wildfire).

FPGA deployment enables:

  • Low latency (< 11 ms per frame at 95.2 FPS)
  • Deterministic timing for safety-critical applications
  • Low power consumption for battery-powered drones
  • Parallel processing with other onboard perception tasks

Need Help? We're Here for You!

Christopher from our development team is ready to help with any questions about ONE AI usage, troubleshooting, or optimization. Don't hesitate to reach out!

Our Support Email: support@one-ware.com