[P] [D] How to get TensorFlow model to run on Jetson Nano?

This page summarizes the projects mentioned and recommended in the original post on /r/MachineLearning

  • keras-onnx

    (Discontinued) Convert tf.keras/Keras models to ONNX

  • Conversion was done from Keras/TensorFlow to ONNX using https://github.com/onnx/keras-onnx, followed by ONNX to TensorRT using https://github.com/onnx/onnx-tensorrt. The Python code used for inference with TensorRT can be found at https://github.com/jonnor/modeld/blob/tensorrt/tensorrtutils.py

  • onnx-tensorrt

    ONNX-TensorRT: TensorRT backend for ONNX

  • modeld

    Self driving car lane and path detection

  • server

    The Triton Inference Server provides an optimized cloud and edge inferencing solution. (by triton-inference-server)

  • There is a stand-alone Triton server that will take TensorFlow models (and others) and run them directly: https://github.com/triton-inference-server/server/releases/tag/v2.10.0

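The Keras → ONNX → TensorRT chain described in the comments above can be sketched as a short command sequence. This is a hedged sketch, not the poster's exact commands: `model.h5` is a placeholder path, `keras2onnx` is the (discontinued) keras-onnx package, and `onnx2trt` is the command-line tool built by onnx-tensorrt.

```shell
# Step 1: Keras/TensorFlow model -> ONNX, using the (discontinued) keras-onnx package.
# keras-onnx targets TF 1.x / early TF 2.x, so a matching TensorFlow version is assumed.
pip install keras2onnx onnx
python - <<'EOF'
import tensorflow as tf
import keras2onnx

model = tf.keras.models.load_model("model.h5")            # placeholder path to your trained model
onnx_model = keras2onnx.convert_keras(model, model.name)  # returns an ONNX ModelProto
keras2onnx.save_model(onnx_model, "model.onnx")
EOF

# Step 2: ONNX -> TensorRT engine, using the onnx2trt tool from onnx-tensorrt.
# Run this on the Jetson Nano itself so the engine is built for its GPU.
onnx2trt model.onnx -o model.trt
```

The resulting `model.trt` engine can then be loaded for inference, as in the linked tensorrtutils.py.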
NOTE: The number of mentions on this list indicates mentions on common posts plus user suggested alternatives. Hence, a higher number means a more popular project.
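For the stand-alone Triton route mentioned above, the server loads TensorFlow SavedModels straight from a model repository. A minimal sketch, assuming Triton's standard repository layout; "mymodel" and the paths are placeholder names:

```shell
# Minimal Triton model repository for a TensorFlow SavedModel:
#   models/mymodel/config.pbtxt
#   models/mymodel/1/model.savedmodel/   <- your exported SavedModel goes here
mkdir -p models/mymodel/1/model.savedmodel

cat > models/mymodel/config.pbtxt <<'EOF'
name: "mymodel"
platform: "tensorflow_savedmodel"
max_batch_size: 8
EOF

# Launch (requires the Triton server binary or container on the device):
# tritonserver --model-repository="$(pwd)/models"
```

Depending on the Triton release, you may also need explicit `input`/`output` sections in config.pbtxt, or you may be able to let Triton derive them from the SavedModel itself.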

Suggest a related project

Related posts

  • "A matching Triton is not available"

    1 project | /r/StableDiffusion | 15 Oct 2023
  • Operationalize TensorFlow Models With ML.NET

    5 projects | dev.to | 17 Aug 2023
  • Keras Core: Keras for TensorFlow, Jax, and PyTorch

    5 projects | news.ycombinator.com | 11 Jul 2023
  • Triton Inference Server - Backend

    2 projects | /r/learnmachinelearning | 13 Jun 2023
  • Single RTX 3080 or two RTX 3060s for deep learning inference?

    1 project | /r/computervision | 12 Apr 2023