NetsPresso Documentation

INTRODUCTION

  • Welcome
  • What is NetsPresso?
  • Why NetsPresso is needed
  • Where to go?

How NetsPresso works

  • Overview
  • Stage 1: Model Development
  • Stage 2: Model Optimization
  • Stage 3: Model Testing

How to use NetsPresso

  • Using PyNetsPresso
    • How to use trainer
    • How to use compressor
      • Method: Structured Pruning
      • Method: Filter Decomposition
    • How to use converter
    • How to use benchmarker
  • Using Studio
    • Training Studio
      • Step 1: Install NetsPresso GUI
      • Step 2: Train a model
      • Step 3: Compress the model
      • Step 4: Convert the model framework
      • Step 5: Benchmark the model on devices
      • Step 6: Retrain the model
      • Step 7: Export and Visualization
      • Tutorial 1: Compression Workflow
      • Tutorial 2: Conversion / Quantization Workflow
    • Optimization Studio
      • Step 1: Upload model
      • Step 2: Review profiling results
      • Step 3: Run optimization
      • Step 4: Review optimization results
    • Benchmark Studio
      • Step 1: Framework Conversion
      • Step 2: Model Profiling
      • INT8 quantization with Benchmark Studio

Scope of support

  • Trainer: Supported models
  • Compressor: Supported models
  • Converter: Scope of support
    • Supported JetPack-ONNX version
  • Benchmarker: Scope of support

Success Cases

  • Results of compression
    • Super Resolution
    • Semantic Segmentation
    • Object Detection
    • Image Classification

FAQ

  • NetsPresso FAQ
  • NetsPresso X Qualcomm AI Hub

Developer References

  • Reference documents

NetsPresso API Description

  • Trainer
  • Compressor
    • Automatic Compression
    • Advanced Compression
      • Recommendation Compression
      • Manual Compression
      • Compression Method
      • Pruning Options
    • Get Compression Information
  • Quantizer
    • Plain Quantization
    • Automatic Quantization
    • Recommendation Precision
    • Custom Precision Quantization by Layer Name
    • Custom Precision Quantization by Operator Type
  • Converter
  • Benchmarker
  • Enums
    • Task
    • Framework
    • Extension
    • Origin From
    • Compression Method
    • Recommendation Method
    • Policy
    • Group Policy
    • LayerNorm
    • Device Name
    • Software Version
    • Hardware Type
    • Task Status
    • Data Type
    • QuantizationMode
    • QuantizationPrecision
    • SimilarityMetric
    • OnnxOperator

NetsPresso QAI API Description

  • Base
    • Upload Dataset
    • Get Dataset
    • Get Datasets
    • Upload Models
    • Get Model
    • Get Models
    • Get Devices
    • Get Device Attributes
    • Get Job Summaries
    • Get Job
  • Quantizer
    • Quantize Model
    • Download Model
    • Get Quantize Task Status
    • Update Quantize Task
  • Converter
    • Convert Model
    • Download Model
    • Get Convert Task Status
    • Update Convert Task
  • Benchmarker
    • Benchmark Model
    • Download Benchmark Results
    • Get Benchmark Task Status
    • Update Benchmark Task
    • Get Inference Task Status
    • Inference Model
  • Options
    • Common Options
    • Compile Options
    • Profile Options
    • Quantize Options

Get Model

Updated about 1 month ago

