Edge AI on FPGA — Potato Chip Inspection
ONE AI generates application-specific model architectures so compact that the choice of hardware becomes secondary. This benchmark deploys a quality inspection model on a decade-old Altera MAX® 10 FPGA and compares it against an Nvidia Jetson Orin Nano running a conventional network.
Full results published in the Altera × ONE WARE Whitepaper.
The Task
Detect burn marks and defects on potato chips in real time on a fast production line — under strict limits on latency, power, and cost.

[Example images: a good chip vs. a defective chip with burn marks]
Results
| Metric | MAX® 10 + ONE AI | Jetson Orin Nano (VGG19) | Improvement |
|---|---|---|---|
| Accuracy | 99.5 % (INT8) | 88 % (FP32) | 24× fewer errors |
| Power | 0.5 W | 10 W | 20× lower |
| Latency | 0.086 ms | 42 ms | 488× faster |
| Cost | €45 | €250 | 6× cheaper |
| Throughput | 1,736 FPS | 24 FPS | 72× higher |
| Footprint | 11×11 mm | 70×45 mm | 26× smaller |
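The improvement column follows directly from the raw figures. A quick sanity check, noting that the accuracy gain is measured on error rate and the footprint on board area:

```python
# Verify the improvement factors from the table above.
# All input figures come from the table; ratios are rounded as reported.

# Accuracy: compare error rates, not accuracies.
err_fpga, err_gpu = 100 - 99.5, 100 - 88       # 0.5 % vs. 12 %
assert round(err_gpu / err_fpga) == 24         # 24x fewer errors

assert 10 / 0.5 == 20                          # 20x lower power
assert round(42 / 0.086) == 488                # 488x faster
assert round(250 / 45) == 6                    # ~6x cheaper
assert round(1736 / 24) == 72                  # 72x higher throughput

# Footprint compares board area (mm^2), not edge length.
assert round((70 * 45) / (11 * 11)) == 26      # 26x smaller
```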
Why It Works
Optimized Architecture
ONE AI generated a network with only 6,750 parameters and 0.0175 GOPs — compared to VGG19's 127 million parameters and 25 GOPs. The result: higher accuracy with a fraction of the compute.
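For scale, the reduction relative to VGG19 works out to roughly four orders of magnitude in parameters and three in compute (figures taken from the paragraph above):

```python
# Model-size reduction relative to VGG19, from the stated figures.
params_vgg19, params_oneai = 127_000_000, 6_750
gops_vgg19, gops_oneai = 25.0, 0.0175

print(f"{params_vgg19 / params_oneai:,.0f}x fewer parameters")  # ~18,815x
print(f"{gops_vgg19 / gops_oneai:,.0f}x fewer operations")      # ~1,429x
```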
Quantization-Aware Training
Training directly in INT8 preserves accuracy during quantization — a critical step for FPGA deployment where every bit matters.
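ONE AI's exact training procedure is not detailed here; as a minimal sketch, the standard fake-quantization approach simulates INT8 rounding in the forward pass so the network learns weights that survive quantization. Symmetric per-tensor scaling is shown, and `fake_quantize_int8` is an illustrative helper, not a ONE AI API:

```python
import numpy as np

def fake_quantize_int8(w: np.ndarray) -> np.ndarray:
    """Simulate INT8 storage: quantize to 8-bit integers, then dequantize.

    During quantization-aware training the network's forward pass sees these
    rounded weights, so training learns to tolerate the precision loss that
    the final FPGA deployment will impose.
    """
    scale = np.abs(w).max() / 127.0                          # symmetric per-tensor scale
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q.astype(np.float32) * scale                      # dequantized values

rng = np.random.default_rng(0)
w = rng.normal(size=(16, 16)).astype(np.float32)
w_q = fake_quantize_int8(w)
print("max round-trip error:", np.abs(w - w_q).max())        # bounded by scale / 2
```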
HDL Deployment
The optimized model compiles into RTL/HDL and runs natively on the FPGA fabric. No runtime overhead, deterministic microsecond latency, and seamless integration with existing control logic.
Takeaway
Even with decade-old FPGA hardware, an optimized ONE AI model outperforms a modern GPU across every dimension — accuracy, speed, power, cost, and size. The bottleneck in edge AI is not the hardware — it's the model design.
