Ultra Accelerator Link (UALink) is an open specification for a die-to-die interconnect and serial bus between AI accelerators. It is co-developed by AMD, Astera Labs,[1] AWS, Cisco, Google, Hewlett Packard Enterprise, Intel, Meta, and Microsoft.[2]
UALink was officially incorporated as an organization in 2024, and its first specification provides interconnectivity specifically for a scale-up network. The initial 1.0 version of the 200 Gbps UALink specification enables the connection of up to 1,024 accelerators within an AI pod and is based on the IEEE P802.3dj physical layer. The specification was due to be available to Contributor Members in 2024 and to be released to the public during the first quarter of 2025.[3]
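To put the headline numbers in perspective, the short sketch below works out the raw aggregate bandwidth of a fully populated pod. The 200 Gbps signaling rate and the 1,024-accelerator ceiling come from the specification summary above; the number of lanes per accelerator is an assumed, hypothetical value chosen only for illustration and is not taken from the UALink 1.0 specification.

```python
# Back-of-the-envelope arithmetic only; not derived from the UALink 1.0 spec text.
GBPS_PER_LANE = 200          # per-lane signaling rate cited above
LANES_PER_ACCELERATOR = 4    # assumed lane count per accelerator (hypothetical)
ACCELERATORS_PER_POD = 1024  # maximum pod size cited above

per_accelerator_gbps = GBPS_PER_LANE * LANES_PER_ACCELERATOR
pod_aggregate_tbps = per_accelerator_gbps * ACCELERATORS_PER_POD / 1000

print(f"Raw bandwidth per accelerator: {per_accelerator_gbps} Gbps")
print(f"Raw aggregate pod bandwidth:   {pod_aggregate_tbps:.0f} Tbps")
```

Under these assumed values a full pod would expose roughly 800 Tbps of raw scale-up bandwidth; actual usable throughput depends on lane counts, encoding overhead, and topology choices not covered here.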