Nvidia Drive is a computer platform by Nvidia aimed at providing autonomous car and driver-assistance functionality powered by deep learning. [1] [2] The platform was introduced at the Consumer Electronics Show (CES) in Las Vegas in January 2015. [3] An enhanced version, the Drive PX 2, was introduced at CES a year later, in January 2016. [4]
The closely related software release program was at some point branded Nvidia DRIVE Hyperion, with a revision number that matches the hardware generation it is created for, and ready-to-order bundles are offered under that term. Previously there were only the terms Nvidia Drive SDK for the developer package and, included within it, Nvidia Drive OS for the system software that shipped with the evaluation platforms or could be downloaded later for OS switching and updating.
The first of Nvidia's autonomous chips was announced at CES 2015, based on the Maxwell GPU microarchitecture. [5] The line-up consisted of two platforms:
The Drive CX was based on a single Tegra X1 SoC (System on a Chip) and was marketed as a digital cockpit computer, providing a rich dashboard, navigation and multimedia experience. Early Nvidia press releases reported that the Drive CX board would be capable of carrying either a Tegra K1 or a Tegra X1. [6]
The first version of Drive PX was based on two Tegra X1 SoCs and served as an initial development platform for (semi-)autonomous cars.
Drive PX platforms based on the Pascal GPU microarchitecture were first announced at CES 2016. [7] This time only a new version of the Drive PX was announced, albeit in multiple configurations.
The Nvidia Drive PX 2 is based on one or two Tegra X2 SoCs, each containing two Denver cores, four ARM Cortex-A57 cores and a GPU from the Pascal generation. [8] There are two real-world board configurations:
Nvidia further proposed achieving fully autonomous driving by combining multiple AutoChauffeur boards and connecting them via, e.g., UART, CAN, LIN, FlexRay, USB, 1 Gbit Ethernet or 10 Gbit Ethernet. For derived custom PCB designs, board block diagrams found on the web also show the option of linking the Tegra X2 processors via a PCIe bus bridge.
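To illustrate the kind of inter-board messaging such links carry, the sketch below packs a classic CAN 2.0 frame into the 16-byte `struct can_frame` layout used by Linux SocketCAN. The CAN ID and payload are hypothetical examples, not taken from any Nvidia design.

```python
import struct

def pack_can_frame(can_id: int, data: bytes) -> bytes:
    """Pack a classic CAN 2.0 frame into the 16-byte SocketCAN layout:
    a 32-bit CAN ID, an 8-bit data length code (DLC), 3 padding bytes,
    then up to 8 payload bytes (zero-padded)."""
    if len(data) > 8:
        raise ValueError("classic CAN payload is at most 8 bytes")
    return struct.pack("=IB3x8s", can_id, len(data), data.ljust(8, b"\x00"))

# Hypothetical status message exchanged between two boards.
frame = pack_can_frame(0x123, b"\x01\x02\x03")
```

On a real system such a frame would be written to a raw `AF_CAN` socket; the packing step alone is shown here because it captures the fixed-size frame structure the bus transports.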
All Tesla Motors vehicles manufactured from mid-October 2016 include a Drive PX 2, used for neural-net processing to enable Enhanced Autopilot and full self-driving functionality. [9] Roborace is another application. [10] Disassembly of the Nvidia-based control unit from a recent Tesla showed that the car used a modified single-chip Drive PX 2 AutoCruise, with a GP106 GPU added as an MXM module. The chip markings strongly suggested the Tegra X2 Parker as the CPU SoC. [11] [12]
Systems based on the Volta GPU microarchitecture were first announced at CES 2017. [13]
The first Volta-based Drive PX system was announced at CES 2017 as the Xavier AI Car Supercomputer. [13] It was re-presented at CES 2018 as Drive PX Xavier. [14] [15] Initial reports of the Xavier SoC suggested a single chip with processing power similar to the Drive PX 2 AutoChauffeur system. [16] However, in 2017 the performance of the Xavier-based system was revised upward, to 50% greater than the Drive PX 2 AutoChauffeur system. [13] Drive PX Xavier is supposed to deliver 30 INT8 TOPS of performance while consuming only 30 watts of power. [17] This total is spread across two distinct units: the iGPU, with 20 INT8 TOPS as published early on, and the later-announced, newly introduced DLA, which contributes an additional 10 INT8 TOPS.
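A quick back-of-the-envelope check of the figures above, using only the numbers quoted in this section:

```python
# Published Drive PX Xavier figures: 20 INT8 TOPS from the iGPU
# plus 10 INT8 TOPS from the DLA, within a 30 W power budget.
igpu_tops = 20
dla_tops = 10
power_w = 30

total_tops = igpu_tops + dla_tops   # matches the quoted 30 INT8 TOPS
efficiency = total_tops / power_w   # TOPS per watt
```

The arithmetic confirms that the iGPU and DLA figures sum to the headline 30 TOPS, for an efficiency of 1 TOPS per watt at the stated power budget.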
In October 2017 Nvidia and partner development companies announced the Drive PX Pegasus system, based on two Xavier CPU/GPU devices and two post-Volta (Turing) generation GPUs. The companies stated that this third-generation Drive PX system would be capable of Level 5 autonomous driving, with a total of 320 INT8 TOPS of AI computational power and a 500 W TDP. [18] [19]
The Drive AGX Orin board family was announced on December 18, 2019, at GTC China 2019. [20] On May 14, 2020, Nvidia announced that Orin would use the new Ampere GPU microarchitecture, would begin sampling for manufacturers in 2021, and would be available for production in 2022. [21] Follow-up variants are expected to be equipped with further chip models and/or modules from the Tegra Orin SoC family.
Nvidia announced the SoC codenamed Atlan on April 12, 2021 at GTC 2021. [22]
On September 20, 2022, Nvidia announced the cancellation of Atlan, which was to be equipped with a Grace-Next CPU and an Ada Lovelace based GPU, and announced that its next SoC would be called Thor. [23]
Announced on September 20, 2022, [24] Nvidia DRIVE Thor comes equipped with an Arm Neoverse V3AE CPU [25] and a Blackwell based GPU, which was announced on March 18, 2024. [26] It features 8-bit floating point (FP8) support and delivers 1000 INT8 TOPS, 1000 FP8 TFLOPS or 500 FP16 TFLOPS of performance. [27] Two Thor SoCs can be connected via NVLink-C2C. [24]
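Low-precision throughput figures like the INT8 and FP8 numbers above matter because inference workloads are commonly quantized before deployment. The sketch below shows generic symmetric per-tensor INT8 quantization; it illustrates the technique in general, not Nvidia's specific implementation.

```python
def quantize_int8(values):
    """Symmetric per-tensor INT8 quantization: map the float range
    [-max_abs, max_abs] linearly onto the integer range [-127, 127]."""
    max_abs = max(abs(v) for v in values) or 1.0
    scale = max_abs / 127.0
    q = [round(v / scale) for v in values]
    return q, scale

def dequantize_int8(q, scale):
    """Recover approximate float values from INT8 codes."""
    return [x * scale for x in q]

weights = [0.5, -1.27, 0.02]
q, scale = quantize_int8(weights)
approx = dequantize_int8(q, scale)  # close to the original weights
```

The round trip loses at most half a quantization step per value, which is why hardware can trade FP32 throughput for several times more INT8 or FP8 throughput with little accuracy cost on many neural networks.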
BYD, Hyper, XPENG, Li Auto and ZEEKR have said they will use DRIVE Thor in their vehicles. [28]
With the label Hyperion [29] added to its reference platform [30] series, Nvidia promotes its mass products so that others can easily test-drive them and then build their own automotive-grade products on top. The feature-rich software part of the base system in particular is meant to help these partners move quickly into developing application-specific solutions. Third-party companies such as DeepRoute.ai have publicly indicated using this software platform as their base of choice. [31] The design concentrates on UNIX/POSIX-compatible or derived runtime environments (Linux, [32] Android, [33] QNX, i.e. the Drive OS variants), with special support for the aforementioned semiconductors in the form of internal support (CUDA, Vulkan) and external support (special interfaces and drivers for camera, lidar, CAN and many more) for the respective reference boards. For clarity, Nvidia bundles the core developer software as the Drive SDK, which is subdivided into DRIVE OS, DriveWorks, DRIVE AV, and DRIVE IX components. [34]
Hyperion Version | Announced | Latest Chip Launch | Start of Road Usage | Target Use Case | Semiconductors | Reference Platforms / Developer Kits | Drive OS Version | Sensor Support |
---|---|---|---|---|---|---|---|---|
7.1 [35] | 2020 | | | Level 2+ autonomous driving | Xavier, Turing GPU | DRIVE AGX Xavier Developer Kit, DRIVE AGX Pegasus Developer Kit | | vehicle external: 7x camera, 8x radar; vehicle internal: 1x camera |
8 [36] [32] | 2020 | | | | Xavier, Turing GPU | DRIVE AGX Pegasus GV100, DRIVE AGX Xavier | 5.0.13.2 (Linux) | vehicle external: 12x camera, 9x radar, 1x lidar |
8.1 [37] | 2022 | | estimated for 2024 | | Orin, Xavier, Turing GPU | NVIDIA DRIVE AGX Orin, DRIVE AGX Pegasus, DRIVE Hyperion 8.1 Developer Kits [38] | Orin: 6.0 (latest: 6.0.4); Xavier/Pegasus: 5.2.6 [34] | vehicle external: 12x camera, 9x radar, 1x lidar |
9 [39] [40] | March 2022 | 2024 | estimated for 2026 | | Atlan (Cancelled) | | | vehicle external: 14x camera, 9x radar, 3x lidar, 20x ultrasonic; vehicle internal: 3x camera, 1x radar |
Note: the table above is recent and may still be incomplete.
Nvidia provided reference board | Drive CX | Drive PX | Drive PX 2 (AutoCruise) | Drive PX 2 (Tesla) | Drive PX 2 (AutoChauffeur) | Drive PX 2 (Tesla 2.5) | Drive PX Xavier [15] | Drive PX Pegasus [18] | Drive AGX Orin [20] | Drive AGX Pegasus OA [41] | Drive Atlan (Cancelled) | Drive Thor |
---|---|---|---|---|---|---|---|---|---|---|---|---|
GPU Microarchitecture | Maxwell (28 nm) | Pascal (16 nm) | Volta (12 nm) | Ampere (8 nm [42] ) | Ada Lovelace (TSMC 4N) | Blackwell (TSMC 4NP [43] ) | ||||||
Announced | January 2015 | September 2016 [44] | October 2016 [45] | January 2016 | August 2017 [46] | January 2017 | October 2017 | December 2019 | April 2021 [47] | September 2022 [48] | ||
Launched | N/A | N/A | N/A | N/A | N/A | N/A | N/A | 2022 [49] | Cancelled [23] | 2025 [48] | ||
Chips | 1x Tegra X1 | 2x Tegra X1 | 1x Tegra X2 (Parker) + 1x Pascal GPU | 2x Tegra X2 (Parker) + 2x Pascal GPU | 2x Tegra X2 (Parker) + 1x Pascal GPU [50] | 1x Tegra Xavier [51] | 2x Tegra Xavier + 2x Turing GPU | 2x Tegra Orin | 2x Tegra Orin + 2x Ampere GPU | ?x Grace-Next CPU [47] + ?x Ada Lovelace GPU [23] | ?x Arm Neoverse Poseidon AE CPU [52] + ?x Blackwell GPU [53] | |
CPU | 4x Cortex A57 4x Cortex A53 | 8x Cortex A57 8x Cortex A53 | 2x Denver 4x Cortex A57 | 4x Denver 8x Cortex A57 | 4x Denver 8x Cortex A57 | 8x Carmel ARM64 [51] | 16x Carmel ARM64 | 12x Cortex A78AE | 24x Cortex A78AE | ?x Grace-Next [47] | ?x Arm Neoverse Poseidon AE [54] | |
GPU | 2 SMM Maxwell 256 CUDA cores | 4 SMM Maxwell 512 CUDA cores | 1x Parker GPGPU (1x 2 SM Pascal, …) | 1x Parker GPGPU (1x 2 SM Pascal, …) | 2x Parker GPGPU (2x 2 SM Pascal, …) | 1x Parker GPGPU | 1x Volta iGPU (512 CUDA cores) [51] | 2x Volta iGPU (512 CUDA cores) | 2x Ampere iGPU (? CUDA cores) | 2x Ampere iGPU (? CUDA cores) | ?x Ada Lovelace [23] | ?x Blackwell GPU [56]
Accelerator | 1x DLA 1x PVA [51] | 2x DLA 2x PVA | 2x DLA 2x PVA | 2x DLA 2x PVA | ? | ? | ||||||
Memory | 8GB LPDDR4 [57] | 16GB LPDDR4 [57] | 16GB LPDDR4 [51] | 32GB LPDDR5 | ? | ? | ||||||
Storage | 64GB eMMC [57] | 128GB eMMC [57] | ? | ? | ||||||||
Performance | 4 FP32 TFLOPS | 4 FP32 TFLOPS | 16 FP16 TFLOPS 8 FP32 TFLOPS | 4 FP32 TFLOPS | 20 INT8 TOPS, 1.3 FP32 TFLOPS (GPU) 10 INT8 TOPS, 5 FP16 TFLOPS (DLA) [51] | 320 INT8 TOPS (total) [60] | 400 INT8 TOPS (total) | 2000 INT8 TOPS (total) | 1000 INT8 TOPS [23] | 2000 FP8 TOPS [48] | ||
TDP | 20W [59] | 40W SoC portion: 10W [44] | 40W SoC portion: 10W [44] | 80W [61] [62] [59] [63] SoC portion: 20W [44] | 60W [61] [62] [59] SoC portion: 20W [44] | 30W [51] | 500W [60] | 130W | 750W | ? | ? |
Note: dGPU and memory are stand-alone semiconductors; all other components, especially ARM cores, iGPU and DLA are integrated components of the listed main computing device(s)
Nvidia Corporation is an American multinational corporation and technology company headquartered in Santa Clara, California, and incorporated in Delaware. It is a software and fabless company which designs and supplies graphics processing units (GPUs), application programming interfaces (APIs) for data science and high-performance computing, as well as system on a chip units (SoCs) for the mobile computing and automotive market. Nvidia is also a dominant supplier of artificial intelligence (AI) hardware and software.
A graphics processing unit (GPU) is a specialized electronic circuit initially designed to accelerate computer graphics and image processing. After their initial design, GPUs were found to be useful for non-graphic calculations involving embarrassingly parallel problems due to their parallel structure. Other non-graphical uses include the training of neural networks and cryptocurrency mining.
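The "embarrassingly parallel" structure mentioned above can be sketched in ordinary Python: every output element is computed independently of the others, which is exactly the property GPUs exploit across thousands of cores. This is illustrative only; real GPU code would use CUDA or a similar API.

```python
from concurrent.futures import ThreadPoolExecutor

def saxpy(a, x, y):
    """Elementwise a*x + y. Each output element depends only on its
    own inputs, so the work splits across workers with no coordination,
    i.e. it is embarrassingly parallel."""
    with ThreadPoolExecutor(max_workers=4) as pool:
        return list(pool.map(lambda pair: a * pair[0] + pair[1], zip(x, y)))

result = saxpy(2.0, [1.0, 2.0, 3.0], [10.0, 20.0, 30.0])
```

Because there is no shared state between elements, the same computation maps directly onto a GPU kernel where each thread handles one index.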
Tegra is a system on a chip (SoC) series developed by Nvidia for mobile devices such as smartphones, personal digital assistants, and mobile Internet devices. The Tegra integrates an ARM architecture central processing unit (CPU), graphics processing unit (GPU), northbridge, southbridge, and memory controller onto one package. Early Tegra SoCs were designed as efficient multimedia processors. The Tegra line evolved to emphasize performance for gaming and machine learning applications without sacrificing power efficiency, before shifting direction drastically towards vehicle-automation platforms, marketed under the "Nvidia Drive" brand name for reference boards and their semiconductors, and under the "Nvidia Jetson" brand name for boards suited to AI applications in, e.g., robots or drones, and to various high-level smart automation purposes.
Project Denver is the codename of a central processing unit designed by Nvidia that implements the ARMv8-A 64/32-bit instruction sets using a combination of simple hardware decoder and software-based binary translation where "Denver's binary translation layer runs in software, at a lower level than the operating system, and stores commonly accessed, already optimized code sequences in a 128 MB cache stored in main memory". Denver is a very wide in-order superscalar pipeline. Its design makes it suitable for integration with other SIPs cores into one die constituting a system on a chip (SoC).
Mobileye Global Inc. is an Israeli autonomous driving company. It is developing self-driving technologies and advanced driver-assistance systems (ADAS) including cameras, computer chips, and software. Mobileye was acquired by Intel in 2017 and went public again in 2022.
Nvidia Tesla is the former name for a line of products developed by Nvidia targeted at stream processing or general-purpose graphics processing units (GPGPU), named after pioneering electrical engineer Nikola Tesla. Its products began using GPUs from the G80 series, and have continued to accompany the release of new chips. They are programmable using the CUDA or OpenCL APIs.
NVLink is a wire-based serial multi-lane near-range communications link developed by Nvidia. Unlike PCI Express, a device can consist of multiple NVLinks, and devices use mesh networking to communicate instead of a central hub. The protocol was first announced in March 2014 and uses a proprietary high-speed signaling interconnect (NVHS).
Volta is the codename, but not the trademark, for a GPU microarchitecture developed by Nvidia, succeeding Pascal. It was first announced on a roadmap in March 2013, although the first product was not announced until May 2017. The architecture is named after 18th–19th century Italian chemist and physicist Alessandro Volta. It was Nvidia's first chip to feature Tensor Cores, specially designed cores that have superior deep learning performance over regular CUDA cores. The architecture is produced with TSMC's 12 nm FinFET process. The Ampere microarchitecture is the successor to Volta.
From 2014 until 2024, Apple undertook a research and development effort to develop an electric and self-driving car, codenamed "Project Titan". Apple never openly discussed any of its automotive research, but around 5,000 employees were reported to be working on the project as of 2018. In May 2018, Apple reportedly partnered with Volkswagen to produce an autonomous employee shuttle van based on the T6 Transporter commercial vehicle platform. In August 2018, the BBC reported that Apple had 66 road-registered driverless cars, with 111 drivers registered to operate those cars. In 2020, it was believed that Apple was still working on self-driving related hardware, software and service as a potential product, instead of actual Apple-branded cars. In December 2020, Reuters reported that Apple was planning on a possible launch date of 2024, but analyst Ming-Chi Kuo claimed it would not be launched before 2025 and might not be launched until 2028 or later.
An AI accelerator, deep learning processor, or neural processing unit (NPU) is a class of specialized hardware accelerator or computer system designed to accelerate artificial intelligence and machine learning applications, including artificial neural networks and machine vision. Typical applications include algorithms for robotics, Internet of Things, and other data-intensive or sensor-driven tasks. They are often manycore designs and generally focus on low-precision arithmetic, novel dataflow architectures or in-memory computing capability. As of 2024, a typical AI integrated circuit chip contains tens of billions of MOSFETs.
Tesla Autopilot is an advanced driver-assistance system (ADAS) developed by Tesla that amounts to partial vehicle automation. Tesla provides "Base Autopilot" on all vehicles, which includes lane centering and traffic-aware cruise control. Owners may purchase an upgrade to "Enhanced Autopilot" (EA) which adds semi-autonomous navigation on limited access roadways, self-parking, and the ability to summon the car from a garage or parking spot. The company claims the features reduce accidents caused by driver negligence and fatigue from long-term driving. Collisions and deaths involving Tesla cars with Autopilot engaged have drawn the attention of the press and government agencies.
Vibrante is the name of a Linux distribution created by Nvidia and used for at least its Drive PX 2 platform series. The name is listed as a registered trademark of Nvidia. It first appeared around 2010, when it labeled a fairly general-purpose multimedia engine covering audio, video and 3D building display, developed in close cooperation with Audi. At an Nvidia TechDay in December 2015 the distribution was reported with version 3.0 for the Jetson TK1 Pro and Drive CX, and with version 4.0 for the Drive CX and PX platforms; the Jetson TK1 is mentioned as running the Linux4Tegra package instead. Companies such as Toradex have built and published sample application code on top of it. Abbreviations of Vibrante Linux such as V3L, V3Le or V4L (the number denoting the version), along with terms like L4T assigned to certain devices, can be found in some history and release documents, e.g. for Nvidia VisionWorks. Nvidia's VisionWorks Toolkit can run on top of Vibrante, Vibrante is one of the targets that OpenCV4Tegra can run upon, and the Nvidia PerfKit package also supports Vibrante.
Nvidia Jetson is a series of embedded computing boards from Nvidia. The Jetson TK1, TX1 and TX2 models all carry a Tegra processor from Nvidia that integrates an ARM architecture central processing unit (CPU). Jetson is a low-power system and is designed for accelerating machine learning applications.
aiMotive is an autonomous vehicle technology company. The company aims to work with automotive manufacturers and Tier 1 suppliers to enable automated technologies. aiMotive describes its approach as "vision-first": a system that primarily relies on cameras and artificial intelligence to detect its surroundings. The technology is designed to be implemented by automobile manufacturers to create autonomous vehicles that can operate in all conditions and locations. In September 2017, PSA Group teamed up with aiMotive.
DeepScale, Inc. was an American technology company headquartered in Mountain View, California, that developed perceptual system technologies for automated vehicles. On October 1, 2019, the company was acquired by Tesla, Inc.
Ampere is the codename for a graphics processing unit (GPU) microarchitecture developed by Nvidia as the successor to both the Volta and Turing architectures. It was officially announced on May 14, 2020 and is named after French mathematician and physicist André-Marie Ampère.
Nvidia GTC is a global artificial intelligence (AI) conference for developers that brings together developers, engineers, researchers, inventors, and IT professionals. Topics focus on AI, computer graphics, data science, machine learning and autonomous machines. Each conference begins with a keynote from Nvidia CEO and founder Jensen Huang, followed by a variety of sessions and talks with experts from around the world.
Tesla Dojo is a supercomputer designed and built by Tesla for computer vision video processing and recognition. It will be used for training Tesla's machine learning models to improve its Full Self-Driving (FSD) advanced driver-assistance system. According to Tesla, it went into production in July 2023.
SXM is a high-bandwidth socket solution for connecting Nvidia compute accelerators to a system. Every generation of Nvidia Tesla since the P100 models, along with the DGX computer series and the HGX boards, comes with an SXM socket type that provides high bandwidth, power delivery and more for the matching GPU daughter cards. Nvidia offers these combinations as end-user products, e.g. in models of its DGX system series. The socket generations are SXM for Pascal-based GPUs, SXM2 and SXM3 for Volta-based GPUs, SXM4 for Ampere-based GPUs, and SXM5 for Hopper-based GPUs. These sockets are used for specific models of these accelerators and offer higher performance per card than PCIe equivalents. The DGX-1 system was the first to be equipped with SXM2 sockets: it was the first to carry the form-factor-compatible SXM modules with P100 GPUs, and was later revealed to allow upgrading to SXM2 modules with V100 GPUs.