How to use the Benchmarker

NetsPresso Benchmarker can be used in two ways: through PyNetsPresso or through LaunchX.


PyNetsPresso

If you are using it for the first time, or want more detailed information about PyNetsPresso, please visit the PyNetsPresso GitHub repository.

from netspresso import NetsPresso
from netspresso.enums import DeviceName, SoftwareVersion

# 1. Log in and declare benchmarker
netspresso = NetsPresso(email="YOUR_EMAIL", password="YOUR_PASSWORD")
benchmarker = netspresso.benchmarker_v2()

# 2. Run benchmark
benchmark_result = benchmarker.benchmark_model(
    input_model_path="./outputs/converted/TENSORRT_JETSON_AGX_ORIN_JETPACK_5_0_1/TENSORRT_JETSON_AGX_ORIN_JETPACK_5_0_1.trt",
    target_device_name=DeviceName.JETSON_AGX_ORIN,
    target_software_version=SoftwareVersion.JETPACK_5_0_1,
)
print(f"model inference latency: {benchmark_result.benchmark_result.latency} ms")
print(f"model gpu memory footprint: {benchmark_result.benchmark_result.memory_footprint_gpu} MB")
print(f"model cpu memory footprint: {benchmark_result.benchmark_result.memory_footprint_cpu} MB")
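Beyond printing the metrics, it can be convenient to collect them into a plain dictionary, for example to log them or to check the measured latency against a deployment budget. The sketch below is only an illustration: the field names mirror the snippet above, and `LATENCY_BUDGET_MS`, `summarize`, and all numeric values are hypothetical, not part of the PyNetsPresso API.

```python
# Hypothetical helper: gather benchmark metrics into a dict and flag whether
# the measured latency stays within a target budget. The budget value and the
# sample numbers are assumptions for illustration only.

LATENCY_BUDGET_MS = 50.0  # hypothetical deployment target, not from NetsPresso


def summarize(latency_ms, mem_gpu_mb, mem_cpu_mb, budget_ms=LATENCY_BUDGET_MS):
    """Return a metrics dict plus a pass/fail flag against a latency budget."""
    return {
        "latency_ms": latency_ms,
        "memory_footprint_gpu_mb": mem_gpu_mb,
        "memory_footprint_cpu_mb": mem_cpu_mb,
        "within_budget": latency_ms <= budget_ms,
    }


# In practice the inputs would come from benchmark_result.benchmark_result;
# here they are made-up numbers.
report = summarize(12.3, 480.0, 120.0)
print(report["within_budget"])  # → True
```

A dictionary like this can then be appended to a results log or compared across devices when benchmarking the same model on several targets.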

To learn more about how to use PyNetsPresso, please visit the Recipes page below and follow its step-by-step guides.
PyNetsPresso Recipes


What’s Next