All NVIDIA® Jetson™ modules and developer kits are supported by the same software stack, enabling you to develop once and deploy everywhere. Jetson Software is designed to provide end-to-end acceleration for AI applications and accelerate your time to market. We bring the same powerful NVIDIA technologies that power data center and cloud deployments to the edge.
The Jetson software stack begins with the NVIDIA JetPack™ SDK, which provides a full development environment and includes CUDA-X accelerated libraries and other NVIDIA technologies to kickstart your development.
JetPack is the most comprehensive solution for building AI applications. To accelerate your AI application end-to-end, we include TensorRT and cuDNN for accelerating AI inference, CUDA for accelerating general computing, VPI for accelerating computer vision and image processing, the Jetson Multimedia APIs for accelerating multimedia, and libargus and V4L2 for accelerating camera processing.
JetPack includes the NVIDIA Jetson Linux Driver Package (L4T), which provides the Linux kernel, bootloader, NVIDIA drivers, flashing utilities, sample filesystem, and toolchains for the Jetson platform. It also includes security features, over-the-air update capabilities, and much more.
JetPack includes the NVIDIA Container Runtime, enabling cloud-native technologies and workflows at the edge. Transform how you develop and deploy software by containerizing your AI applications and managing them at scale with cloud-native technologies.
Read more about JetPack, Jetson Linux, and Cloud-Native on Jetson by following the links below.
NVIDIA TAO simplifies the time-consuming parts of a deep learning workflow, from data preparation to training to optimization, shortening the time to value. Speed up your development by 10X when you start with production-ready pre-trained AI models from the NVIDIA NGC™ catalog. These models have been trained to high accuracy for domains including computer vision, conversational AI, and more.
NVIDIA Triton™ Inference Server simplifies deployment of AI models at scale. Triton Inference Server is open source and provides a single standardized inference platform that supports multi-framework model inference across deployments such as the data center, cloud, embedded devices, and virtualized environments. It supports different types of inference queries through advanced batching and scheduling algorithms and supports live model updates.
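Triton's scheduler is implemented inside the server itself, but the core idea behind its dynamic batching can be sketched in a few lines of plain Python. The class and names below are invented for this illustration and are not Triton APIs: requests queue up until a maximum batch size is reached, and the model then runs once per batch instead of once per request.

```python
from dataclasses import dataclass, field
from typing import Callable, List

@dataclass
class DynamicBatcher:
    """Toy sketch of dynamic batching (not a Triton API)."""
    max_batch_size: int
    run_model: Callable[[List[float]], List[float]]  # batched inference fn
    _queue: List[float] = field(default_factory=list)
    results: List[float] = field(default_factory=list)

    def submit(self, request: float) -> None:
        # Queue the request; flush once a full batch has accumulated.
        self._queue.append(request)
        if len(self._queue) >= self.max_batch_size:
            self.flush()

    def flush(self) -> None:
        if self._queue:
            # One model invocation serves the whole batch.
            self.results.extend(self.run_model(self._queue))
            self._queue.clear()

# Example: the stand-in "model" just doubles each input.
batcher = DynamicBatcher(max_batch_size=4, run_model=lambda xs: [2 * x for x in xs])
for x in [1.0, 2.0, 3.0, 4.0, 5.0]:
    batcher.submit(x)
batcher.flush()  # flush the leftover partial batch
print(batcher.results)  # → [2.0, 4.0, 6.0, 8.0, 10.0]
```

In the real server, a flush is also triggered by a configurable queueing delay, so latency stays bounded even when traffic is light.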
NVIDIA Riva is a fully accelerated SDK for building multimodal conversational AI applications using an end-to-end deep learning pipeline. The Riva SDK includes pretrained conversational AI models, the NVIDIA TAO Toolkit, and optimized end-to-end skills for speech, vision, and natural language processing (NLP) tasks.
NVIDIA DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing and video and image understanding on Jetson. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixel and sensor data to actionable insights.
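DeepStream pipelines are built from GStreamer plugins and are configured rather than hand-coded, but the shape of a streaming analytics pipeline (source → inference → sink) can be sketched with plain Python generators. Every name below is a hypothetical stand-in for illustration, not DeepStream API.

```python
from typing import Dict, Iterable, Iterator, List

def source(num_frames: int) -> Iterator[Dict]:
    """Stand-in for a camera/decoder element: emits raw 'frames'."""
    for i in range(num_frames):
        yield {"frame_id": i, "pixels": f"raw-{i}"}

def infer(frames: Iterable[Dict]) -> Iterator[Dict]:
    """Stand-in for an inference element: attaches detections to each frame."""
    for frame in frames:
        frame["detections"] = ["person"] if frame["frame_id"] % 2 == 0 else []
        yield frame

def sink(frames: Iterable[Dict]) -> List[Dict]:
    """Stand-in for a display/message sink: keeps frames with detections."""
    return [f for f in frames if f["detections"]]

# Chain the elements, analogous to linking elements in a GStreamer pipeline.
events = sink(infer(source(6)))
print([f["frame_id"] for f in events])  # → [0, 2, 4]
```

Because generators process one frame at a time, frames flow through the chain without buffering the whole stream, which is the same streaming property the real pipeline provides with hardware-accelerated plugins.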
NVIDIA Isaac makes it easy for developers to create and deploy AI-powered robotics. The platform includes the Isaac Engine (an application framework), Isaac ROS GEMs (image processing and computer vision packages, including DNN-based algorithms highly optimized for NVIDIA GPUs and Jetson, for incorporation into ROS-based robotic applications), Isaac Apps (reference applications), and Isaac Sim for Navigation (a powerful simulation platform). These tools and APIs accelerate robot development by making it easier to add artificial intelligence for perception and navigation into robots.
Learn about Developer Tools for the Jetson platform under Develop → Tools.