Oracle Exalogic

Exalogic is a computer appliance made by Oracle Corporation, commercially available since 2010. [1] It is a cluster of x86-64 servers with Oracle Linux or Oracle Solaris preinstalled.

Its full trademark is Oracle Exalogic Elastic Cloud (derived from the SI prefix exa- and -logic, probably from WebLogic). The vendor positions it as a preconfigured clustered application server for cloud computing with elastic scaling capabilities. [2]

History

Image: Oracle Exadata and Exalogic racks

Oracle Corporation announced Exalogic at the Oracle OpenWorld conference in San Francisco in September 2010. The company presented it as a continuation of the Oracle engineered systems product line, which had started in 2008 with Exadata, a preconfigured database cluster. [1] [2]

Exalogic is a factory-assembled 19-inch rack of 42 rack units, populated with servers and network equipment. Four configurations are offered at different prices, depending on how much of the rack is filled. [3] A full rack weighs about one ton (more than 2,000 lb); a quarter rack weighs roughly half as much. [4]

Hardware

The hardware of the X2-2 appliance consisted of a group of 1U Intel Xeon servers, each with two six-core 2.93 GHz processors and two solid-state drives for the operating system and swap space; a shared storage area network; and a set of InfiniBand and Ethernet switches. [5] A full rack contains 30 server nodes; a half rack, 16; a quarter rack, 8; and an eighth rack, 4. Each server node has 96 GB of RAM, four 10 Gigabit Ethernet interfaces, and redundant InfiniBand connections. The storage area network is the same for all configurations, with 40 TB of raw capacity. The vendor's specifications and marketing material usually quote aggregate figures (360 processor cores and about 2.9 TB of RAM for a full rack). [4] An X3-2 model was announced in 2012 with newer processors and more memory, [6] [7] and since late 2013 an X4-2 model has been commercially available, with still more processor cores and four times the solid-state drive capacity. [8]
The latest model, the X5-2, has compute nodes with two 18-core Intel Xeon E5-2699 v3 processors running at 2.3 GHz and eight 32 GB DDR4-2133 memory modules, for a total of 256 GB of RAM per node, along with two 400 GB solid-state drives in a RAID 1 mirror and redundant power supplies. [9]
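
The X2-2 full-rack aggregates can be reproduced from the per-node figures above with a back-of-the-envelope calculation:

    30 nodes × 12 cores per node = 360 processor cores
    30 nodes × 96 GB of RAM per node = 2,880 GB ≈ 2.8 TiB (roughly 2.9 decimal TB)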

Software

The server nodes run one of two 64-bit operating systems: Oracle Linux 5.5 or Solaris 11. [5] All servers come with a clustered configuration of Oracle WebLogic Server and the Oracle Coherence distributed in-memory cache pre-installed. Java applications can be run on either the HotSpot or JRockit virtual machine. The appliance is managed with the Oracle Enterprise Manager toolset, which is also pre-installed. The Tuxedo transaction monitor [10] is optionally supplied.
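
As a minimal sketch of how an application deployed on the appliance's WebLogic cluster might use the pre-installed Coherence cache, the following Java example uses Coherence's standard NamedCache API. The cache name "prices" and the class name are hypothetical, and the sketch assumes the Coherence libraries are on the classpath:

    import com.tangosol.net.CacheFactory;
    import com.tangosol.net.NamedCache;

    public class CoherenceSketch {
        public static void main(String[] args) {
            // Join the Coherence cluster and obtain a named distributed cache
            // ("prices" is a hypothetical cache name).
            NamedCache cache = CacheFactory.getCache("prices");

            // Entries are partitioned across the cache servers, so every node
            // in the cluster sees the same data.
            cache.put("ORCL", 25.0);
            System.out.println("ORCL -> " + cache.get("ORCL"));

            // Leave the cluster and release local resources.
            CacheFactory.shutdown();
        }
    }

Because Coherence partitions and replicates cache data across the cluster's memory, cached state can survive the loss of an individual node.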

Customers

Exalogic is deployed by the University of Melbourne, the United States Food and Drug Administration (FDA), Amway, the Hyundai Motor Group, Bank of Chile, Haier, Deutsche Post DHL, and the Public Authority of Minors Affairs (PAMA) in Kuwait. [11] [12]

Criticism

Marc Benioff, founder of Salesforce.com, argues that any appliance inherently lacks end-user scalability compared with infrastructure supplied as a service, and describes the Exalogic approach as a rollback to the obsolete mainframe computer concept. [13] Commentators have also questioned the use of the word "elastic" in the product name: [14] despite built-in load balancing, a single box has fixed computing limits that cannot be exceeded on demand, as they could be in a truly elastic environment. The same criticism applies to other solutions designed for private cloud computing, in particular products from EMC Corporation and Hewlett-Packard. [14] On the other hand, any computing environment is ultimately a collection of servers, and since multiple Exalogic machines can be combined, capacity is not limited to a single box, which can be regarded as merely a building block.

See also

  Oracle Exadata
  Oracle Big Data Appliance
  Oracle Database Appliance
  Oracle Linux
  IBM PureSystems

References

  1. Clarke, Gavin (2010-09-20). "HP and Oracle avoid blows over disgraced Hurd". The Register. Retrieved 2011-05-29.
  2. Nairn, Geoff (2010-09-27). "Big Data, Big Blue and Going Green". Financial Times. ISSN 0307-1766. Retrieved 2011-05-29. More surprising was to hear the software giant announce a piece of hardware, the Oracle Exalogic Elastic Cloud. As its name suggests, it is Oracle's attempt to steal the cloud computing spotlight. It comprises a mix of Oracle software and high-performance hardware and is aimed at enterprises that want to build their own "private cloud" using their own hardware. Sounds suspiciously like mainframe computer.
  3. "Oracle Engineered Systems Price List" (PDF). Oracle price lists. Oracle. September 12, 2013. Retrieved September 17, 2013.
  4. 1 2 "Oracle Exalogic Elastic Cloud X2-2" (PDF). Data Sheet. Oracle. March 25, 2011. Archived from the original (PDF) on April 9, 2011. Retrieved September 17, 2013.
  5. Frazier, Mitch (2010-09-20). "The Oracle Exalogic Elastic Cloud". Linux Journal. Retrieved 2011-05-29. Each 1U "node" in an Exalogic rack consists of two Xeon chips. Each Xeon chip is a 6-core processor running at 2.93 GHz. Each node has redundant InfiniBand connections. Each node also contains two solid-state disks (SSD) for the operating system and for local swap space. A full rack would contain 360 CPU Cores, 2.8 TB (TeraBytes, 1 TB = 1024 GB) of RAM, 6 TB of SSD, and 60 TB of SAS (Serial Attached SCSI) disk.
  6. Pedro Hernandez (October 4, 2012). "Oracle Debuts Exalogic X3-2 Server". Server Watch. Retrieved September 17, 2013.
  7. "Oracle Exalogic Elastic Cloud X2-2" (PDF). Data Sheet. March 6, 2013. Retrieved September 17, 2013.
  8. James Sullivan (2013-12-20). "Oracle Puts A Cloud In A Single Rack: Elastic Cloud X4-2". Tom’s IT Pro. Retrieved 2014-01-01.
  9. "Oracle Exalogic Elastic Cloud X5-2" (PDF).
  10. "Oracle Exalogic Elastic Cloud Software Data Sheet" (PDF). Oracle Data Sheet. Oracle. 2011-03-25. Archived from the original (PDF) on 2011-04-09. Retrieved 2011-05-29.
  11. Are Oracle's Exadata racks fluffing Apple's iCloud?
  12. OpenWorld Recap Day 1: Innovations in Oracle Fusion Middleware, Exalogic, Cloud Application Foundation
  13. Clarke, Gavin (2010-12-07). "Salesforce's Benioff: 'Ellison flunks vision test'. Oracle dreams of a mainframe past". The Register. Retrieved 2011-05-31. 'The cloud is not in a box — you don't have to add more boxes to get scalability,' Benioff said
  14. Williams, Alex (2010-09-30). "Why the Oracle Exalogic Cloud is Not Elastic". ReadWriteWeb. Archived from the original on 2012-08-12. Retrieved 2011-05-31. Placing the term "Elastic" in the name of this offering is stretching the accepted definition of the term as it relates to cloud computing ... You can scale your applications up and down within this solution, but in the end, you are limited to the number of cores, amount of RAM, and size of the storage you purchased ... EMC and HP are both making solutions that fit this description ... use case ends, those resources are then returned to the common pool to be redeployed, just as they would be in a larger cloud infrastructure