NetsPresso Releases

NetsPresso Release - 1.6.0 (November 21, 2023)

We are pleased to announce that NetsPresso 1.6.0 has been released with new features. 🎉

New devices are available for conversion and benchmarking through PyNetsPresso and LaunchX.

Updates & New Features

Devices with the Arm Cortex-M85 are now supported for conversion and benchmarking.

  • The list of compatible devices is as follows:
    • Renesas RA8D1 (Arm Cortex-M85)
    • Alif Ensemble DevKit Gen2 (Arm Cortex-M55+Ethos-U55)
  • TensorFlow Lite INT8 quantization is now supported.

Users can benchmark and compare models with and without Arm Helium.

  • Renesas RA8D1 and Alif Ensemble DevKit Gen2 are available for use.
  • Benchmark results with Helium can be up to twice as fast as without it.

NetsPresso Release - 1.5.0 (November 7, 2023)

We are pleased to announce that NetsPresso 1.5.0 has been released with new features! 🎉

Updates & New Features

Users can resume or stop a project while it is in progress.

  • If a project stops unexpectedly, users can retry it.
  • A project consists of three stages (Train, Convert, and Benchmark), and users can view the status of each stage on the NetsPresso project page.

New task support for Model Compressor and Model Launcher.

  • Image classification models built with MixNet and ShuffleNet are now compressible.
  • Semantic segmentation models built with SegFormer are also compressible.
  • Image classification and semantic segmentation models can be converted for NVIDIA and Intel devices.

Bug Fixes

  • Minor fixes to Model Compressor to support an STMicro model.

NetsPresso Release - 1.4.0 (September 21, 2023)

We are pleased to announce that NetsPresso 1.4.0 has been released with new features! 🎉

Updates & New Features

Users can now use their local datasets on NetsPresso.

  • Train models with a local dataset on a personal server.

Several updates to Model Compressor.

  • TensorFlow-Keras is now compatible with version 2.8.
  • SegFormer for the segmentation task is now available.
  • Staircase pruning is now available.

Bug Fixes

Model Compressor

  • Fixed handling of a simple flatten layer that is not linked to the output layer.

NetsPresso Release - 1.3.0 (May 18, 2023)

We are pleased to announce that NetsPresso 1.3.0 has been released with new features! 🎉

Updates & New Features

Users can now use their personal servers on NetsPresso.

  • Users can set a personal GPU server as the training server.

Users can compress the latest YOLO model with Model Compressor.

  • YOLOv8 is now available with Model Compressor.

Bug Fixes

  • Minor bug fixes for Model Compressor.
  • Fixed misleading class information on a dataset page.

NetsPresso Release - 1.2.2 (April 21, 2023)

Updates & New Features

The NetsPresso console is now integrated into one system. 🎉

  • The previously split compressor modules are merged into one.
  • Users can see the list of models on one integrated page.

Bug Fixes

  • Fixed a bug in CP Decomposition for the TensorFlow-Keras framework.
  • Minor bug fixes for Model Compressor.

NetsPresso Release - 1.2.1 (February 9, 2023)

Updates & New Features

New JetPack versions for NVIDIA Jetsons are available in NetsPresso Searcher and Launcher.

  • JetPack 4.6 and 5.0.2 are supported for NVIDIA Jetson Xavier NX.
  • JetPack 4.4.1 and 4.6 are supported for NVIDIA Jetson Nano.

Input shape information is now required when uploading a custom model.

  • Users need to provide Batch, Channel, and Dimension (e.g., height and width for images) as input shape information when uploading a custom model.
  • Models with dynamic input shapes will be compatible with Model Compressor based on the given input shape information.

Bug Fixes

  • Minor fixes for Model Compressor.

NetsPresso Release - 1.2.0 (December 16, 2022)

Updates & New Features

New task support for Model Searcher: Image Classification and Semantic Segmentation

  • Users can make image classification models with ShuffleNet v2+ and MixNet for all devices supported in NetsPresso.
  • Users can make semantic segmentation models with SegFormer for all devices in NetsPresso except the Raspberry Pi series.
  • Model Compressor and Model Launcher support for these models will be available in future releases.

New version of Dataset Validator: task support and usability improvement

  • Users can prepare datasets in ImageNet format for the image classification task.
  • Users can prepare datasets in UNet (like YOLO) format for the semantic segmentation task.
  • The UI has been updated for better usability, and users can now check the progress of the validation process.
  • Users must download the updated version to validate and upload datasets for image classification and semantic segmentation tasks.

New best practice for Model Compressor: ViT (transformer-based classification model)

  • Users can compress transformer models with Model Compressor.
  • Follow and customize the guides to build your best compressed model.

New hardware for Model Searcher and Model Launcher: NVIDIA Jetson AGX Orin

  • Users can make optimized AI models for NVIDIA Jetson AGX Orin.

Bug Fixes

  • Fixed misleading latency information for retraining projects.

NetsPresso Release - 1.1.2 (November 29, 2022)

Updates & New Features

New compression method in Model Compressor: Filter Decomposition - Singular Value Decomposition

  • Singular Value Decomposition (SVD) decomposes a pointwise convolution or fully-connected layer into two pointwise or fully-connected layers. A recommendation feature is also available.
  • Users can use this method with Advanced Compression in Model Compressor.
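To illustrate the idea behind SVD-based filter decomposition (a generic sketch with random weights, not NetsPresso's actual implementation), the weight of a pointwise convolution or fully-connected layer can be treated as a matrix and split with a truncated SVD into two thinner layers:

```python
import numpy as np

# Weight of a 1x1 (pointwise) convolution, viewed as a matrix:
# 256 output channels x 512 input channels.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))

# A rank-k truncated SVD splits W into two factors, equivalent to
# stacking two thinner pointwise (or fully-connected) layers.
k = 64
U, S, Vt = np.linalg.svd(W, full_matrices=False)
W1 = S[:k, None] * Vt[:k, :]   # first layer:  k x 512
W2 = U[:, :k]                  # second layer: 256 x k

approx = W2 @ W1               # rank-k approximation of W

# Fewer parameters after decomposition:
params_before = W.size                 # 256 * 512 = 131072
params_after = W1.size + W2.size       # 64 * 512 + 256 * 64 = 49152
print(params_before, params_after)
```

The compression ratio is controlled by the rank k; a recommendation feature such as the one mentioned above would choose k per layer.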

New best practices for Model Compressor: YOLOX, FCN ResNet50

  • Follow and customize the guides to make your best compressed model.

Names of projects and models are editable

  • Names of projects and models can be changed after creation, so users can manage them in NetsPresso.

Bug Fixes

  • Change in the Project Info: the base model item now shows the specific model name selected when creating a new project.
  • When using TensorFlow-Keras models with Model Compressor, the model file must contain not only the weights but also the structure of the model (do not use save_weights).
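In TensorFlow-Keras terms, this means saving the full model (structure plus weights) with model.save rather than save_weights. A minimal sketch with a toy model (the layer sizes are arbitrary):

```python
import tensorflow as tf

# A toy model for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32,)),
    tf.keras.layers.Dense(8, activation="relu"),
    tf.keras.layers.Dense(2),
])

# OK for upload: the file contains both the structure and the weights.
model.save("model.h5")

# Not sufficient: only the weights are stored, so the model structure
# needed by Model Compressor is lost.
model.save_weights("model.weights.h5")

# The fully saved model can be restored without the original code.
restored = tf.keras.models.load_model("model.h5")
```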

Coming Soon

  • Model Searcher will support Image Classification and Semantic Segmentation models soon.
  • Wait for the next release to train and optimize your own image classification and semantic segmentation models with NetsPresso!

NetsPresso Release - 1.1.0 (October 28, 2022)

We are excited to announce that NetsPresso is now the world’s first deep learning development platform that supports Arm Virtual Hardware (AVH)! Arm Virtual Hardware is a solution for accelerating hardware-aware AI model development. Users can evaluate AI models on Arm Virtual Hardware to estimate their performance before production.

Updates & New Features

New object detection model: YOLO-Fastest

  • Users can select YOLO-Fastest with Model Searcher to train a model for Arm Virtual Hardware.
  • TFLite INT8 Quantization is available.
  • Users can compress the YOLO-Fastest model with Model Compressor.

New hardware: Arm Virtual Hardware - Corstone-300 (Ethos-U65 High End)

  • Users can select AVH Corstone-300 (Ethos-U65 High End) to convert and benchmark models.
  • TensorFlow Lite INT8 quantization is required.
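For reference, full-integer INT8 quantization can be produced with the standard TensorFlow Lite converter. The sketch below uses a toy model and random calibration data and is a generic example, not the NetsPresso workflow:

```python
import numpy as np
import tensorflow as tf

# A toy model for illustration only.
model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(32, 32, 3)),
    tf.keras.layers.Conv2D(8, 3, activation="relu"),
    tf.keras.layers.GlobalAveragePooling2D(),
    tf.keras.layers.Dense(10),
])

def representative_dataset():
    # Calibration samples; real data should match the training distribution.
    for _ in range(16):
        yield [np.random.rand(1, 32, 32, 3).astype(np.float32)]

converter = tf.lite.TFLiteConverter.from_keras_model(model)
converter.optimizations = [tf.lite.Optimize.DEFAULT]
converter.representative_dataset = representative_dataset
# Force full-integer quantization, including inputs and outputs.
converter.target_spec.supported_ops = [tf.lite.OpsSet.TFLITE_BUILTINS_INT8]
converter.inference_input_type = tf.int8
converter.inference_output_type = tf.int8

tflite_model = converter.convert()
```

The resulting flatbuffer uses INT8 operators end to end, which is what Ethos-U targets such as Corstone-300 require.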

New custom model upload format: PyTorch GraphModule

  • Users can upload models in PyTorch GraphModule (.pt) format as well as ONNX (.onnx) format.
  • The PyTorch GraphModule format increases the compatibility of Model Compressor.
  • A how-to guide for the conversion is available in ModelZoo-torch.

Bug Fixes

  • Minor fixes for dataset uploading.
  • Enhanced stability of the conversion feature.
  • Improved health of the hardware in the NetsPresso device farm.

Coming Soon

  • New compression method will be available in the next release.

NetsPresso Release - 1.0.1 (September 28, 2022)

Updates & New Features

Model Compressor

  • Model Compressor core — Enhanced compatibility for CNNs. More complex graphs are supported.
    • Added validated PyTorch models in torchvision.models
      • Classification: ResNeXt50_32x4d, RegNet_y_400mf, RegNet_x_400mf
      • Semantic Segmentation: FCN ResNet50, FCN ResNet101
    • Enhanced compatibility for supported modules
    • More details will be added to the documentation.
  • Advanced Compression — The Pruning Policy option is now set automatically by NetsPresso. Users don’t need to struggle with the policy; NetsPresso selects and applies the best policy for each method.

Bug Fixes

  • Minor fix in the processing code example for Packaging. Users don’t need to modify input shapes if the model was made with NetsPresso Model Searcher.

NetsPresso Release - 1.0.0 (August 30, 2022)

Hardware-aware AI Optimization Platform, NetsPresso, is now live!

We are excited to announce the release of NetsPresso 1.0.0. The release is the first official version of NetsPresso made by Nota AI after improvements based on feedback from hundreds of beta users.

This release contains the three modules of NetsPresso: Model Searcher, Model Compressor, and Model Launcher. Users can develop optimized AI models for target hardware by using all modules together or only the module that suits their development stage.

Key Features

Model Searcher

  • Quick Search is an automatic training pipeline based on open-source models. Quick Search provides expected latency measures on target hardware for multiple candidate models to let users easily select the appropriate model to train.
  • Retraining makes the fine-tuning process easier. Users can retrain the base model with a different dataset to improve it. Compressed models can be retrained to recover accuracy if they were made by Model Searcher.

Model Compressor

  • Automatic Compression simplifies the compression process. Compression methods are already implemented and users only need to set the overall compression ratio of the model.
  • Advanced Compression provides more detailed compression settings. Users can select the compression method and set compression ratios for each layer. Visualized model graphs and latency profiles help users decide which layers to compress more.

(Beta) Model Launcher

  • Converting provides various options to quantize and compile a model to be executable on target hardware. In the beta version, users can only convert models built with Model Searcher.
  • Packaging gets users ready for deployment. Users can package their models with pre/post-processing code to be deployed directly on the target hardware.

Get more information about each module in ‘Features & Scope of Support’ and ‘Quick Start’ in the documentation.