All NVIDIA® Jetson™ modules and developer kits are supported by the same software stack, enabling you to develop once and deploy everywhere. Jetson software is designed to provide end-to-end acceleration for AI applications and accelerate your time to market. We bring the same NVIDIA technologies that power data center and cloud deployments to the edge.
The Jetson software stack begins with the NVIDIA JetPack™ SDK, which provides a full development environment and includes CUDA-X accelerated libraries and other NVIDIA technologies to kickstart your development.
JetPack is the most comprehensive solution for building AI applications. To accelerate your AI application end-to-end, we include TensorRT and cuDNN for accelerating AI inference, CUDA for accelerating general-purpose computing, VPI for accelerating computer vision and image processing, the Jetson Multimedia APIs for accelerating multimedia, and libargus and V4L2 for accelerating camera processing.
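As a quick illustration of the libargus camera path, a common smoke test on Jetson is a GStreamer pipeline built on the `nvarguscamerasrc` element, which captures from a CSI camera through libargus. This is a minimal sketch; the resolution and framerate are example values, it assumes a connected camera module, and the sink element available varies by Jetson Linux release.

```shell
# Capture from a CSI camera via libargus (nvarguscamerasrc) and render it.
# Width/height/framerate are example values; requires a connected camera.
gst-launch-1.0 nvarguscamerasrc ! \
  'video/x-raw(NVMM),width=1920,height=1080,framerate=30/1' ! \
  nvvidconv ! nvoverlaysink
```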
JetPack includes the NVIDIA Jetson Linux Driver Package, which provides the Linux kernel, bootloader, NVIDIA drivers, flashing utilities, sample filesystem, and toolchains for the Jetson platform. It also includes security features, over-the-air update capabilities, and much more.
JetPack includes the NVIDIA Container Runtime, enabling cloud-native technologies and workflows at the edge. Transform your experience of developing and deploying software by containerizing your AI applications and managing them at scale with cloud-native technologies.
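For example, a containerized AI application can be launched on a Jetson device with GPU access by selecting the NVIDIA runtime when starting the container. This is a minimal sketch; the image name `my-jetson-app:latest` is a hypothetical placeholder.

```shell
# Run a containerized application on Jetson with GPU access by selecting
# the NVIDIA Container Runtime. "my-jetson-app:latest" is a placeholder
# image name.
docker run -it --rm --runtime nvidia my-jetson-app:latest
```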
Read more about JetPack, Jetson Linux, and Cloud-Native on Jetson by following the links below.
NVIDIA TAO simplifies the time-consuming parts of a deep learning workflow, from data preparation to training to optimization, shortening the time to value. Speed up your development by 10X when you start with production-ready pre-trained AI models from the NVIDIA NGC™ catalog. These models have been trained to high accuracy for domains including computer vision, conversational AI, and more.
Data collection and annotation is an expensive and laborious process. Simulation can help bridge the data gap. NVIDIA Omniverse Replicator uses simulation to generate synthetic data that is an order of magnitude faster and cheaper to create than real data. With Omniverse Replicator you can quickly create diverse, massive, and accurate datasets for training AI models.
NVIDIA Triton™ Inference Server simplifies deployment of AI models at scale. Triton Inference Server is open source and provides a single standardized inference platform that supports multi-framework model inferencing across deployments such as data center, cloud, embedded devices, and virtualized environments. It supports different types of inference queries through advanced batching and scheduling algorithms and supports live model updates.
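The "single standardized inference platform" rests on Triton exposing the community-standard KServe v2 inference protocol over HTTP/REST and gRPC, so any client can talk to any backend the same way. Below is a minimal sketch that builds a v2-style JSON request body in plain Python; the input name `input__0` and the shape/data are hypothetical placeholders, not tied to any real model.

```python
import json

def build_infer_request(input_name, shape, data):
    """Build a KServe v2-style inference request body as a JSON string.

    A client would POST this to the Triton endpoint
    http://<server>:8000/v2/models/<model_name>/infer
    """
    body = {
        "inputs": [
            {
                "name": input_name,      # model input tensor name
                "shape": shape,          # tensor shape, e.g. [batch, features]
                "datatype": "FP32",      # element type per the v2 protocol
                "data": data,            # flattened tensor values
            }
        ]
    }
    return json.dumps(body)

# Hypothetical input: one request with a 1x4 FP32 tensor.
request_json = build_infer_request("input__0", [1, 4], [0.1, 0.2, 0.3, 0.4])
```

Because the protocol is the same regardless of framework backend (TensorRT, ONNX Runtime, PyTorch, and so on), the client code does not change when the model behind the endpoint does.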
NVIDIA Riva is a fully accelerated SDK for building multimodal conversational AI applications using an end-to-end deep learning pipeline. The Riva SDK includes pretrained conversational AI models, the NVIDIA TAO Toolkit, and optimized end-to-end skills for speech, vision, and natural language processing (NLP) tasks.
NVIDIA DeepStream SDK delivers a complete streaming analytics toolkit for AI-based multi-sensor processing and video and image understanding on Jetson. DeepStream is an integral part of NVIDIA Metropolis, the platform for building end-to-end services and solutions that transform pixel and sensor data to actionable insights.
NVIDIA Isaac ROS GEMs are hardware-accelerated packages that make it easier for ROS developers to build high-performance solutions on NVIDIA hardware. NVIDIA Isaac Sim, powered by Omniverse, is a scalable robotics simulation application. It includes Replicator, a tool to generate diverse synthetic datasets for training perception models. Isaac Sim also provides photorealistic, physically accurate virtual environments to develop, test, and manage AI-based robots.
Learn about Developer Tools for the Jetson platform under Develop → Tools.