
Ultimate ONNX for Deep Learning Optimization

SKU:9789349887206

Regular price $37.95 USD
Taxes included. Shipping calculated at checkout.
Free Book Preview

ISBN: 9789349887206
eISBN: 9789349887343
Rights: Worldwide
Author Name: Meet Patel
Publishing Date: 29-Dec-2025
Dimensions: 7.5 × 9.25 inches
Binding: Paperback
Page Count: 242

Download code from GitHub

Description

Bringing Deep Learning Models to the Edge Efficiently Using ONNX.

Key Features

● Master end-to-end ONNX workflows, from exporting models out of your framework to deploying them at the edge.
● Apply hands-on optimization techniques such as quantization, pruning, and knowledge distillation for real-world edge AI performance.
● Production-grade case studies across vision, speech, and language models on edge devices.

Book Description

ONNX has emerged as the de facto standard for deploying portable, framework-agnostic machine learning models across diverse hardware platforms.

Ultimate ONNX for Deep Learning Optimization provides a structured, end-to-end guide to the ONNX ecosystem, starting with ONNX fundamentals, model representation, and framework integration. You will learn how to export models from PyTorch, TensorFlow, and Scikit-Learn, inspect and modify ONNX graphs, and leverage ONNX Runtime and ONNX Simplifier for inference optimization. Each chapter builds technical depth, equipping you with the tools required to move models beyond experimentation.

The book then turns to performance-critical optimization techniques, including quantization, pruning, and knowledge distillation, followed by practical deployment on edge devices such as the Raspberry Pi. Through complete, real-world case studies covering object detection, speech recognition, and compact language models, you will implement custom operators, follow deployment best practices, and understand production constraints. By the end of this book, you will be able to design, optimize, and deploy efficient ONNX-based AI systems for edge environments.

What you will learn

● Design and understand ONNX models, graphs, operators, and runtimes.
● Convert and integrate models from PyTorch, TensorFlow, and Scikit-Learn.
● Optimize inference using graph simplification, quantization, and pruning.
● Apply knowledge distillation to retain accuracy on constrained devices.
● Deploy and benchmark ONNX models on Raspberry Pi and edge hardware.
● Build custom ONNX operators and extend models beyond standard layers.

Who is this book for?

This book is tailored for Machine Learning Engineers, AI Engineers, Data Scientists, Embedded AI Developers, and Software Engineers transitioning ONNX models from research to production. Readers should have a working knowledge of machine learning fundamentals and basic Python experience to apply the optimization and edge deployment workflows effectively.

Table of Contents

1. Introduction to ONNX and Edge Computing
2. Getting Started with ONNX
3. ONNX Integration with Deep Learning Frameworks
4. Model Optimization Using ONNX Simplifier and ONNX Runtime
5. Model Quantization Using ONNX Runtime
6. Model Pruning in PyTorch and Exporting to ONNX
7. Knowledge Distillation for Edge AI
8. Deploying ONNX Models on Edge Devices
9. End-to-End Execution of YOLOv12
10. End-to-End Execution of the Whisper Speech Recognition Model
11. End-to-End Execution of the SmolLM Model
12. ONNX Model from Scratch and Custom Operators
13. Real-World Applications, Best Practices, Security, and Future Trends in ONNX for Edge AI
Index

About the Author

Meet Patel is a machine learning engineer with over seven years of experience dedicated to a single challenge: making Artificial Intelligence (AI) faster, smaller, and more efficient. His passion lies in unlocking the potential of AI on resource-constrained devices, pushing models from the lab into the real world.

His transition into AI from a mechanical engineering background underscores a journey fueled by curiosity and self-motivation, driven by a passion to master the intricacies of machine learning. Meet has extensive hands-on experience in taking models from initial research and training through advanced optimization techniques such as quantization, pruning, and knowledge distillation, all the way to compiler-level enhancements and final deployment.

About the Technical Reviewer

Heflin Stephen Raj S is an AI engineer and innovator specializing in advanced natural language processing, transformer architectures, ONNX, and edge intelligence. With a strong foundation in data science and emerging technologies, he has consistently transformed cutting-edge research into scalable, high-impact solutions that drive business value and enhance user experiences worldwide.

At Zoho, Heflin serves as a Member Technical Staff (NLP), where he builds AI-powered capabilities that elevate automation across a product ecosystem used by over 20 million people each month. His work spans text classification, semantic search, and optimized model deployment across both cloud and on-premise environments. Previously, he contributed to CodeOps, engineering APIs that reduced infrastructure costs, and to Webnexs, where he developed enterprise-grade chatbots, forecasting systems, and CRMs that streamlined operations for diverse organizations.