Nvidia Drive is a computer platform by Nvidia aimed at providing autonomous car and driver-assistance functionality powered by deep learning. [1] [2] The platform was introduced at the Consumer Electronics Show (CES) in Las Vegas in January 2015. [3] An enhanced version, the Drive PX 2, was introduced at CES a year later, in January 2016. [4]
The closely related software release program was at some point branded Nvidia Drive Hyperion, with a revision number indicating the hardware generation it is built for; Nvidia also offers ready-to-order bundles under this name. Previously there were only the terms Nvidia Drive SDK, for the developer package, and the included Nvidia Drive OS, for the system software that shipped with the evaluation platforms and could later be downloaded for OS switching and updates.
Nvidia's first chips for autonomous driving were announced at CES 2015 and were based on the Maxwell GPU microarchitecture. [5] The line-up consisted of two platforms:
The Drive CX was based on a single Tegra X1 SoC (system on a chip) and was marketed as a digital cockpit computer, providing a rich dashboard, navigation and multimedia experience. Early Nvidia press releases reported that the Drive CX board would be capable of carrying either a Tegra K1 or a Tegra X1. [6]
The first version of the Drive PX was based on two Tegra X1 SoCs and served as an initial development platform targeted at (semi-)autonomous cars.
Drive PX platforms based on the Pascal GPU microarchitecture were first announced at CES 2016. [7] This time, only a new version of the Drive PX was announced, although in multiple configurations.
The Nvidia Drive PX 2 is based on one or two Tegra X2 SoCs, each containing two Denver cores, four ARM Cortex-A57 cores and a Pascal-generation GPU. [8] There are two real-world board configurations:
Nvidia further proposes achieving fully autonomous driving by combining multiple AutoChauffeur boards and connecting them via interfaces such as UART, CAN, LIN, FlexRay, USB, 1 Gbit Ethernet or 10 Gbit Ethernet. According to board block diagrams available on the web, derived custom PCB designs also have the option of linking the Tegra X2 processors via a PCIe bus bridge.
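Because these are standard automotive and networking interfaces, inter-board traffic can be handled with ordinary operating-system facilities. As a minimal sketch, assuming a Linux-based Drive OS image with SocketCAN enabled and a CAN interface named can0 (both assumptions for illustration, not documented configuration), one board could broadcast a status frame to the others like this:

```python
import socket
import struct

# Minimal SocketCAN sender: broadcasts one classic CAN frame on "can0".
# Interface name, CAN ID and payload are placeholders chosen for illustration.
CAN_INTERFACE = "can0"
CAN_ID = 0x123                     # arbitrary 11-bit identifier
PAYLOAD = b"\x01\x02\x03\x04"      # classic CAN carries up to 8 data bytes

sock = socket.socket(socket.AF_CAN, socket.SOCK_RAW, socket.CAN_RAW)
sock.bind((CAN_INTERFACE,))

# struct can_frame layout: 32-bit CAN ID, 8-bit length, 3 padding bytes, 8 data bytes.
frame = struct.pack("=IB3x8s", CAN_ID, len(PAYLOAD), PAYLOAD.ljust(8, b"\x00"))
sock.send(frame)
sock.close()
```

The same pattern would apply to the Ethernet links, where standard TCP/UDP sockets take the place of the raw CAN socket.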
All Tesla Motors vehicles manufactured from mid-October 2016 include a Drive PX 2, which is used for neural-net processing to enable Enhanced Autopilot and full self-driving functionality. [9] Another application is Roborace. [10] Disassembly of the Nvidia-based control unit from a recent Tesla showed that the car used a modified single-chip Drive PX 2 AutoCruise with a GP106 GPU added as an MXM module. The chip markings strongly suggested a Tegra X2 Parker as the CPU SoC. [11] [12]
Systems based on the Volta GPU microarchitecture were first announced at CES 2017. [13]
The first Volta-based Drive PX system was announced at CES 2017 as the Xavier AI Car Supercomputer. [13] It was presented again at CES 2018 as Drive PX Xavier. [14] [15] Initial reports of the Xavier SoC suggested a single chip with processing power similar to the Drive PX 2 AutoChauffeur system. [16] However, in 2017 the stated performance of the Xavier-based system was revised upward to 50% greater than that of the Drive PX 2 AutoChauffeur. [13] Drive PX Xavier is supposed to deliver 30 INT8 TOPS of performance while consuming only 30 watts of power. [17] This performance is split across two distinct units: the iGPU, rated at 20 INT8 TOPS as published early on, and the later-announced, newly introduced DLA, which provides an additional 10 INT8 TOPS.
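As an illustrative consistency check derived only from the figures quoted above (not an Nvidia-published calculation), the two units add up to the stated total, giving roughly one TOPS per watt:

```latex
\underbrace{20}_{\text{iGPU}} + \underbrace{10}_{\text{DLA}} = 30\ \text{INT8 TOPS},
\qquad
\frac{30\ \text{INT8 TOPS}}{30\ \text{W}} = 1\ \text{TOPS/W}
```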
In October 2017, Nvidia and partner development companies announced the Drive PX Pegasus system, based on two Xavier CPU/GPU devices and two post-Volta (Turing) generation GPUs. The companies stated that this third-generation Drive PX system would be capable of Level 5 autonomous driving, with a total of 320 INT8 TOPS of AI computational power and a TDP of 500 watts. [18] [19]
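Using only the figures quoted above, an illustrative comparison of power efficiency with the single-SoC Drive PX Xavier can be made:

```latex
\frac{320\ \text{INT8 TOPS}}{500\ \text{W}} \approx 0.64\ \text{TOPS/W}\ \text{(Pegasus)}
\qquad\text{vs.}\qquad
\frac{30\ \text{INT8 TOPS}}{30\ \text{W}} = 1\ \text{TOPS/W}\ \text{(Xavier)}
```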
The Drive AGX Orin board family was announced on December 18, 2019, at GTC China 2019. [20] On May 14, 2020, Nvidia announced that Orin would use the new Ampere GPU microarchitecture, would begin sampling for manufacturers in 2021, and would be available for production in 2022. [21] Follow-up variants are expected to be equipped with further chip models and/or modules from the Tegra Orin SoC family.
Nvidia announced the SoC codenamed Atlan on April 12, 2021 at GTC 2021. [22]
On September 20, 2022, Nvidia announced the cancellation of Atlan, which was to have been equipped with a Grace-Next CPU and an Ada Lovelace based GPU, and announced that its next SoC would be called Thor. [23]
Announced on September 20, 2022, [24] Nvidia DRIVE Thor comes equipped with an Arm Neoverse V3AE CPU [25] and a Blackwell-based GPU, which was announced on March 18, 2024. [26] It features 8-bit floating-point support (FP8) and delivers 1000 INT8 TOPS, 1000 FP8 TFLOPS or 500 FP16 TFLOPS of performance. [27] Two Thor SoCs can be connected via NVLink-C2C. [24]
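FP8 here refers to 8-bit floating-point formats such as E4M3 (1 sign bit, 4 exponent bits, 3 mantissa bits), which trade precision for throughput and memory footprint. The sketch below is an illustrative NumPy simulation of E4M3 rounding and saturation, assuming the common E4M3 variant with a maximum finite value of 448; it is not Nvidia's hardware or library implementation:

```python
import numpy as np

def e4m3_values():
    """Enumerate the non-negative values representable in FP8 E4M3 (exponent bias 7)."""
    vals = [0.0]
    # Subnormals: mantissa/8 * 2**-6.
    vals += [(m / 8.0) * 2.0 ** -6 for m in range(1, 8)]
    # Normals: exponent field 1..15; only mantissa=111 of the top exponent is NaN,
    # so the largest finite value is 1.75 * 2**8 = 448.
    for e in range(1, 16):
        for m in range(8):
            if e == 15 and m == 7:
                continue  # NaN encoding
            vals.append((1 + m / 8.0) * 2.0 ** (e - 7))
    return np.array(sorted(vals))

def quantize_e4m3(x):
    """Round each element to the nearest representable E4M3 magnitude (sign preserved)."""
    table = e4m3_values()
    mag = np.minimum(np.abs(x), table[-1])          # saturate at 448
    idx = np.clip(np.searchsorted(table, mag), 1, len(table) - 1)
    lower, upper = table[idx - 1], table[idx]
    nearest = np.where(mag - lower <= upper - mag, lower, upper)
    return np.sign(x) * nearest

print(quantize_e4m3(np.array([0.1, 1.37, 250.0, 1000.0])))
```

Running the example shows, for instance, that 1000.0 saturates to 448 while 0.1 rounds to 0.1015625, illustrating the coarse precision accepted in exchange for the higher quoted throughput.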
BYD, Hyper, XPENG, Li Auto and ZEEKR have said they will use DRIVE Thor in their vehicles. [28]
Under the Hyperion [29] label for its reference platform [30] series, Nvidia promotes its production hardware so that partners can easily evaluate it and then build their own automotive-grade products on top of it. The feature-rich software part of the base system, in particular, is intended to help these partners move quickly into developing their application-specific solutions. Third-party companies such as DeepRoute.ai have publicly indicated using this software platform as their base of choice. [31] The design concentrates on UNIX/POSIX-compatible or derived runtime environments (Linux, [32] Android, [33] QNX, i.e. the Drive OS variants) with dedicated support for the aforementioned semiconductors, both internal (CUDA, Vulkan) and external (interfaces and drivers for cameras, lidar, CAN and many more) on the respective reference boards. For clarity, Nvidia bundles the core developer software as the Drive SDK, which is subdivided into the DRIVE OS, DriveWorks, DRIVE AV, and DRIVE IX components. [34]
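The split into DRIVE OS, DriveWorks, DRIVE AV and DRIVE IX is essentially a layering of system software, middleware, driving functions and cockpit functions. The sketch below illustrates that layering in outline only; all class and function names are hypothetical placeholders, not the actual DRIVE OS, DriveWorks, DRIVE AV or DRIVE IX APIs:

```python
# Hypothetical layering sketch -- names below are placeholders, not real Nvidia APIs.

class DriveOSLayer:
    """System-software layer: boots the SoC and exposes sensors and compute (CUDA, Vulkan)."""
    def open_camera(self, index: int):
        return f"camera-{index}-stream"          # placeholder for a real sensor handle

class DriveWorksLayer:
    """Middleware layer: sensor abstraction, calibration, basic perception building blocks."""
    def detect_objects(self, frame):
        return [{"class": "vehicle", "distance_m": 42.0}]   # stand-in for a DNN pipeline

class DriveAVLayer:
    """Driving layer: turns perception output into planning and control decisions."""
    def plan(self, detections):
        return "keep_lane" if all(d["distance_m"] > 30 for d in detections) else "brake"

class DriveIXLayer:
    """Cockpit layer: surfaces the system state to the driver (visualization, monitoring)."""
    def show(self, decision):
        print(f"current maneuver: {decision}")

# Wiring the layers together, top to bottom.
os_layer = DriveOSLayer()
frame = os_layer.open_camera(0)
decision = DriveAVLayer().plan(DriveWorksLayer().detect_objects(frame))
DriveIXLayer().show(decision)
```

In a real application the perception and planning stages would run as DNN pipelines on the iGPU and DLA rather than as the toy functions shown here.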
Hyperion Version | Announced | Latest Chip Launch | Start of Road Usage | Target Use Case | Semiconductors | Reference Platforms / Developer Kits | Drive OS Version | Sensor Support |
---|---|---|---|---|---|---|---|---|
7.1 [35] | 2020 | | | Level 2+ autonomous driving | Xavier, Turing GPU | DRIVE AGX Xavier Developer Kit, DRIVE AGX Pegasus Developer Kit | | vehicle external: 7x camera, 8x radar; vehicle internal: 1x camera |
8 [36] [32] | 2020 | | | | Xavier, Turing GPU | DRIVE AGX Pegasus GV100, DRIVE AGX Xavier | 5.0.13.2 (Linux) | vehicle external: 12x camera, 9x radar, 1x lidar |
8.1 [37] | 2022 | | estimated for 2024 | | Orin, Xavier, Turing GPU | DRIVE AGX Orin, DRIVE AGX Pegasus, DRIVE Hyperion 8.1 Developer Kits [38] | Orin: 6.0 (latest: 6.0.4); Xavier/Pegasus: 5.2.6 [34] | vehicle external: 12x camera, 9x radar, 1x lidar |
9 [39] [40] | March 2022 | 2024 | estimated for 2026 | | Atlan (Cancelled) | | | vehicle external: 14x camera, 9x radar, 3x lidar, 20x ultrasonic; vehicle internal: 3x camera, 1x radar |
Note: The above table is still recent and may be incomplete.
Nvidia provided reference board | Drive CX | Drive PX | Drive PX 2 (AutoCruise) | Drive PX 2 (Tesla) | Drive PX 2 (AutoChauffeur) | Drive PX 2 (Tesla 2.5) | Drive PX Xavier [15] | Drive PX Pegasus [18] | Drive AGX Orin [20] | Drive AGX Pegasus OA [41] | Drive Atlan (Cancelled) | Drive Thor |
---|---|---|---|---|---|---|---|---|---|---|---|---|
GPU Microarchitecture | Maxwell (28 nm) | Pascal (16 nm) | Volta (12 nm) | Ampere (8 nm [42] ) | Ada Lovelace (TSMC 4N) | Blackwell (TSMC 4NP [43] ) | ||||||
Announced | January 2015 | September 2016 [44] | October 2016 [45] | January 2016 | August 2017 [46] | January 2017 | October 2017 | December 2019 | April 2021 [47] | September 2022 [48] | ||
Launched | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 2022 [49] | Cancelled [23] | 2025 [48] | ||
Chips | 1x Tegra X1 | 2x Tegra X1 | 1x Tegra X2 (Parker) + 1x Pascal GPU | 2x Tegra X2 (Parker) + 2x Pascal GPU | 2x Tegra X2 (Parker) + 1x Pascal GPU [50] | 1x Tegra Xavier [51] | 2x Tegra Xavier + 2x Turing GPU | 2x Tegra Orin | 2x Tegra Orin + 2x Ampere GPU | ?x Grace-Next CPU [47] + ?x Ada Lovelace GPU [23] | ?x Arm Neoverse Poseidon AE CPU [52] + ?x Blackwell GPU [53] | |
CPU | 4x Cortex A57 4x Cortex A53 | 8x Cortex A57 8x Cortex A53 | 2x Denver 4x Cortex A57 | 4x Denver 8x Cortex A57 | 4x Denver 8x Cortex A57 | 8x Carmel ARM64 [51] | 16x Carmel ARM64 | 12x Cortex A78AE | 24x Cortex A78AE | ?x Grace-Next [47] | ?x Arm Neoverse Poseidon AE [54] | |
GPU | 2 SMM Maxwell 256 CUDA cores | 4 SMM Maxwell 512 CUDA cores | 1x Parker GPGPU (1x 2 SM Pascal, | 1x Parker GPGPU (1x 2 SM Pascal, | 2x Parker GPGPU (2x 2 SM Pascal, | 1x Parker GPGPU | 1x Volta iGPU (512 CUDA cores) [51] | 2x Volta iGPU (512 CUDA cores) | 2x Ampere iGPU (?CUDA cores) | 2x Ampere iGPU (? CUDA cores) | ?x Ada Lovelace [23] | ?x Blackwell GPU [56] |
Accelerator | 1x DLA 1x PVA [51] | 2x DLA 2x PVA | 2x DLA 2x PVA | 2x DLA 2x PVA | ? | ? | ||||||
Memory | 8 GB LPDDR4 [57] | 16 GB LPDDR4 [57] | 16 GB LPDDR4 [51] | 32 GB LPDDR5 | ? | ? | ||||||
Storage | 64 GB eMMC [57] | 128 GB eMMC [57] | ? | ? | ||||||||
Performance | 4 FP32 TFLOPS | 4 FP32 TFLOPS | 16 FP16 TFLOPS 8 FP32 TFLOPS | 4 FP32 TFLOPS | 20 INT8 TOPS, 1.3 FP32 TFLOPS (GPU) 10 INT8 TOPS, 5 FP16 TFLOPS (DLA) [51] | 320 INT8 TOPS (total) [60] | 400 INT8 TOPS (total) | 2000 INT8 TOPS (total) | 1000 INT8 TOPS [23] | 2000 FP8 TOPS [48] | ||
TDP | 20 W [59] | 40 W SoC portion: 10 W [44] | 40 W SoC portion: 10 W [44] | 80 W [61] [62] [59] [63] SoC portion: 20 W [44] | 60 W [61] [62] [59] SoC portion: 20 W [44] | 30 W [51] | 500 W [60] | 130 W | 750 W | ? | ? |
Note: dGPU and memory are stand-alone semiconductors; all other components, in particular the ARM cores, iGPU and DLA, are integrated components of the listed main computing device(s).