Java performance

In software development, the programming language Java was historically considered slower than the fastest third-generation typed languages such as C and C++. [1] The main reason is a difference in language design: after compiling, Java programs run on a Java virtual machine (JVM) rather than directly on the computer's processor as native code, as C and C++ programs do. Performance was a matter of concern because much business software has been written in Java since the language quickly became popular in the late 1990s and early 2000s.

Since the late 1990s, the execution speed of Java programs improved significantly via introduction of just-in-time compilation (JIT) (in 1997 for Java 1.1), [2] [3] [4] the addition of language features supporting better code analysis, and optimizations in the JVM (such as HotSpot becoming the default for Sun's JVM in 2000). Hardware execution of Java bytecode, such as that offered by ARM's Jazelle, was also explored to offer significant performance improvements.

The performance of a Java bytecode compiled Java program depends on how optimally its given tasks are managed by the host Java virtual machine (JVM), and how well the JVM exploits the features of the computer hardware and operating system (OS) in doing so. Thus, any Java performance test or comparison must always report the version, vendor, OS and hardware architecture of the JVM used. In a similar manner, the performance of the equivalent natively compiled program will depend on the quality of its generated machine code, so the test or comparison also has to report the name, version and vendor of the compiler used, and its activated compiler optimization directives.
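As a minimal illustration (class name invented for this sketch), the JVM and platform details that such a benchmark report should state can be read from the standard system properties:

```java
// Sketch: printing the JVM/OS details a Java benchmark report should
// include. All property keys below are standard JVM system properties.
public class JvmInfo {
    public static void main(String[] args) {
        for (String key : new String[] {
                "java.vm.name", "java.vm.version", "java.vm.vendor",
                "os.name", "os.arch" }) {
            System.out.println(key + " = " + System.getProperty(key));
        }
    }
}
```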

Virtual machine optimization methods

Many optimizations have improved the performance of the JVM over time. Although the JVM was often the first virtual machine to implement them successfully, they have often been used in other similar platforms as well.

Just-in-time compiling

Early JVMs always interpreted Java bytecodes. This imposed a large performance penalty, between a factor of 10 and 20, for Java versus C in average applications. [5] To combat this, a just-in-time (JIT) compiler was introduced into Java 1.1. Due to the high cost of compiling, an added system called HotSpot was introduced in Java 1.2 and was made the default in Java 1.3. Using this framework, the Java virtual machine continually analyses program performance for hot spots which are executed frequently or repeatedly. These are then targeted for optimization, leading to high-performance execution with a minimum of overhead for less performance-critical code. [6] [7] Some benchmarks show a 10-fold speed gain by this means. [8] However, due to time constraints, the compiler cannot fully optimize the program, and thus the resulting program is slower than native code alternatives. [9] [10]
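The warm-up effect this describes can be observed directly. In the following sketch (class name and iteration counts are arbitrary choices for illustration), the same method is called repeatedly; once HotSpot has identified it as a hot spot and compiled it, later rounds typically run faster than the first:

```java
// Sketch: repeated timing of one method to observe JIT warm-up.
// Later rounds are typically faster once HotSpot compiles the method.
public class JitWarmup {
    // A small, hot method the JIT is likely to compile after enough calls.
    static long sumOfSquares(int n) {
        long s = 0;
        for (int i = 0; i < n; i++) s += (long) i * i;
        return s;
    }

    public static void main(String[] args) {
        for (int round = 0; round < 5; round++) {
            long t0 = System.nanoTime();
            long r = sumOfSquares(1_000_000);
            long t1 = System.nanoTime();
            System.out.println("round " + round + ": " + (t1 - t0) + " ns (sum=" + r + ")");
        }
    }
}
```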

Adaptive optimizing

Adaptive optimizing is a method in computer science that performs dynamic recompilation of parts of a program based on the current execution profile. With a simple implementation, an adaptive optimizer may simply make a trade-off between just-in-time compiling and interpreting instructions. At another level, adaptive optimizing may exploit local data conditions to optimize away branches and use inline expansion.

A Java virtual machine like HotSpot can also deoptimize code formerly JITed. This allows performing aggressive (and potentially unsafe) optimizations, while still being able to later deoptimize the code and fall back to a safe path. [11] [12]

Garbage collection

The 1.0 and 1.1 Java virtual machines (JVMs) used a mark-sweep collector, which could fragment the heap after a garbage collection. Starting with Java 1.2, the JVMs changed to a generational collector, which has a much better defragmentation behaviour. [13] Modern JVMs use a variety of methods that have further improved garbage collection performance. [14]

Other optimizing methods

Compressed Oops

Compressed Oops allow Java 5.0+ to address up to 32 GB of heap with 32-bit references. Java does not support access to individual bytes, only objects, which are 8-byte aligned by default. Because of this, the lowest 3 bits of a heap reference will always be 0. By lowering the resolution of 32-bit references to 8-byte blocks, the addressable space can be increased to 32 GB. This significantly reduces memory use compared to using 64-bit references, as Java uses references much more than some languages like C++. Java 8 supports larger alignments, such as 16-byte alignment, to support up to 64 GB with 32-bit references.[ citation needed ]
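The address-space arithmetic above can be checked directly (a sketch, not from the original text):

```java
// With 2^32 distinct 32-bit reference values and every object aligned
// to an 8-byte boundary, a 32-bit reference can address 2^32 * 8 bytes
// = 32 GiB of heap; 16-byte alignment doubles this to 64 GiB.
public class CompressedOops {
    public static void main(String[] args) {
        long references = 1L << 32;      // distinct 32-bit reference values
        long heap8  = references * 8;    // 8-byte object alignment
        long heap16 = references * 16;   // 16-byte object alignment
        System.out.println(heap8 / (1L << 30) + " GiB with 8-byte alignment");
        System.out.println(heap16 / (1L << 30) + " GiB with 16-byte alignment");
    }
}
```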

Split bytecode verification

Before executing a class, the Sun JVM verifies its Java bytecodes (see bytecode verifier). This verification is performed lazily: classes' bytecodes are only loaded and verified when the specific class is loaded and prepared for use, and not at the beginning of the program. However, as the Java class libraries are also regular Java classes, they must also be loaded when they are used, which means that the start-up time of a Java program is often longer than for C++ programs, for example.

A method named split-time verification, first introduced in the Java Platform, Micro Edition (J2ME), is used in the JVM since Java version 6. It splits the verification of Java bytecode in two phases: [15]

  • Design time – when compiling a class from source to bytecode
  • Runtime – when loading a class

In practice this method works by capturing knowledge that the Java compiler has of class flow and annotating the compiled method bytecodes with a synopsis of the class flow information. This does not make runtime verification appreciably less complex, but does allow some shortcuts.[ citation needed ]

Escape analysis and lock coarsening

Java is able to manage multithreading at the language level. Multithreading allows programs to perform multiple processes concurrently, thus improving the performance for programs running on computer systems with multiple processors or cores. Also, a multithreaded application can remain responsive to input, even while performing long running tasks.

However, programs that use multithreading need to take extra care of objects shared between threads, locking access to shared methods or blocks when they are used by one of the threads. Locking a block or an object is a time-consuming operation due to the nature of the underlying operating system-level operation involved (see concurrency control and lock granularity).

As the Java library does not know which methods will be used by more than one thread, the standard library always locks blocks when needed in a multithreaded environment.

Before Java 6, the virtual machine always locked objects and blocks when asked to by the program, even if there was no risk of an object being modified by two different threads at once. For example, in this case, a local Vector was locked before each of the add operations to ensure that it would not be modified by other threads (Vector is synchronized), but because it is strictly local to the method this is needless:

   public String getNames() {
       final Vector<String> v = new Vector<>();
       v.add("Me");
       v.add("You");
       v.add("Her");
       return v.toString();
   }

Starting with Java 6, code blocks and objects are locked only when needed, [16] so in the above case, the virtual machine would not lock the Vector object at all.

Since version 6u23, Java includes support for escape analysis. [17]
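A hedged sketch of the kind of code escape analysis targets (the class and method names below are invented for illustration): the object allocated in the method never leaves it, so an escape-analysis-enabled JIT may allocate it on the stack or replace it with its fields in registers ("scalar replacement") instead of heap-allocating it.

```java
// The Point allocated in distSq never escapes the method, so HotSpot
// may elide the heap allocation entirely. The program's result is the
// same either way; only the allocation cost changes.
public class EscapeDemo {
    static final class Point {
        final double x, y;
        Point(double x, double y) { this.x = x; this.y = y; }
    }

    static double distSq(double x, double y) {
        Point p = new Point(x, y); // reference never leaves this frame
        return p.x * p.x + p.y * p.y;
    }

    public static void main(String[] args) {
        System.out.println(distSq(3, 4)); // prints 25.0
    }
}
```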

Register allocation improvements

Before Java 6, allocation of registers was very primitive in the client virtual machine (register values did not live across blocks), which was a problem in CPU designs with few processor registers available, such as x86. If there are no more registers available for an operation, the compiler must spill from register to memory (or memory to register), which takes time (registers are significantly faster to access). However, the server virtual machine used a graph-coloring allocator and did not have this problem.

An optimization of register allocation was introduced in Sun's JDK 6; [18] it was then possible to use the same registers across blocks (when applicable), reducing memory accesses. This led to a reported performance gain of about 60% in some benchmarks. [19]

Class data sharing

Class data sharing (called CDS by Sun) is a mechanism which reduces the startup time for Java applications, and also reduces memory footprint. When the JRE is installed, the installer loads a set of classes from the system JAR file (the JAR file holding all the Java class library, called rt.jar) into a private internal representation, and dumps that representation to a file, called a "shared archive". During subsequent JVM invocations, this shared archive is memory-mapped in, saving the cost of loading those classes and allowing much of the JVM's metadata for these classes to be shared among multiple JVM processes. [20]

The corresponding improvement in start-up time is more obvious for small programs. [21]

History of performance improvements

Apart from the improvements listed here, each release of Java introduced many performance improvements in the JVM and Java application programming interface (API).

JDK 1.1.6: First just-in-time compilation (Symantec's JIT-compiler) [2] [22]

J2SE 1.2: Use of a generational collector.

J2SE 1.3: Just-in-time compiling by HotSpot.

J2SE 1.4: See here for a Sun overview of performance improvements between versions 1.3 and 1.4.

Java SE 5.0: Class data sharing [23]

Java SE 6: split bytecode verification, escape analysis and lock coarsening, and register allocation improvements, among other changes described in the sections above.

See also 'Sun overview of performance improvements between Java 5 and Java 6'. [26]

Java SE 6 Update 10

Java 7

Several performance improvements have been released for Java 7. Further performance improvements were planned for an update of Java 6 or for Java 7. [31]

Comparison to other languages

Objectively comparing the performance of a Java program and an equivalent one written in another language such as C++ needs a carefully and thoughtfully constructed benchmark which compares programs completing identical tasks. The target platform of Java's bytecode compiler is the Java platform, and the bytecode is either interpreted or compiled into machine code by the JVM. Other compilers almost always target a specific hardware and software platform, producing machine code that will stay virtually unchanged during execution[ citation needed ]. Very different and hard-to-compare scenarios arise from these two different approaches: static vs. dynamic compilations and recompilations, the availability of precise information about the runtime environment and others.

Java is often compiled just-in-time at runtime by the Java virtual machine, but may also be compiled ahead-of-time, as C++ is. When compiled just-in-time, the micro-benchmarks of The Computer Language Benchmarks Game give an indication of its performance relative to other languages. [38]

Program speed

Benchmarks often measure performance for small, numerically intensive programs. In some rare real-life programs, Java outperforms C. One example is the benchmark of Jake2 (a clone of Quake II written in Java by translating the original GPL C code). The Java 5.0 version performs better in some hardware configurations than its C counterpart. [42] While it is not specified how the data was measured (for example, whether the original Quake II executable compiled in 1997 was used, which would make a poor comparison, since current C compilers may achieve better optimizations for Quake), it shows how the same Java source code can gain a large speed boost just from updating the VM, something impossible to achieve with a 100% static approach.

For other programs, the C++ counterpart can, and usually does, run significantly faster than the Java equivalent. A benchmark performed by Google in 2011 showed a factor of 10 difference between C++ and Java. [43] At the other extreme, an academic benchmark performed in 2012 with a 3D modelling algorithm showed the Java 6 JVM being from 1.09 to 1.91 times slower than C++ under Windows. [44]

Some optimizations that are possible in Java and similar languages may not be possible in certain circumstances in C++. [45]

The JVM is also able to perform processor-specific optimizations or inline expansion. The ability to deoptimize code that is already compiled or inlined sometimes allows it to perform more aggressive optimizations than statically compiled languages can when external library functions are involved. [46] [47]

Results of microbenchmarks between Java and C++ depend highly on which operations are compared. For example, comparisons with Java 5.0 showed differing results across 32-bit integer arithmetic, 64-bit floating-point arithmetic, file I/O, exceptions, arrays and trigonometric functions. [48] [49] [50] [51] [52] [53]


Notes
  1. Contention of this nature can be alleviated in C++ programs at the source-code level by employing advanced methods such as custom allocators, exploiting precisely the kind of low-level coding complexity that Java was designed to conceal and encapsulate; however, this approach is rarely practical unless adopted (or at least anticipated) while the program is still under primary development.

Multi-core performance

The scalability and performance of Java applications on multi-core systems is limited by the object allocation rate. This effect is sometimes called an "allocation wall". [54] However, in practice, modern garbage collector algorithms use multiple cores to perform garbage collection, which to some degree alleviates this problem. Some garbage collectors are reported to sustain allocation rates of over a gigabyte per second, [55] and there exist Java-based systems that have no problems scaling to several hundreds of CPU cores and heaps sized several hundreds of GB. [56]

Automatic memory management in Java allows for efficient use of lockless and immutable data structures that are extremely hard, or sometimes impossible, to implement without some kind of garbage collection.[ citation needed ] Java offers a number of such high-level structures in its standard library, in the java.util.concurrent package, while many languages historically used for high-performance systems, like C or C++, still lack them.[ citation needed ]
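As a sketch of the structures mentioned above (the class and method names here are invented for the example), two threads update a lock-free queue and counter from java.util.concurrent with no synchronized blocks; the garbage collector handles the safe memory reclamation such structures rely on:

```java
import java.util.concurrent.ConcurrentLinkedQueue;
import java.util.concurrent.atomic.AtomicLong;

// Two producer threads share lock-free structures; no locks are taken.
public class LockFreeDemo {
    static long produce(int perThread) throws InterruptedException {
        ConcurrentLinkedQueue<Integer> queue = new ConcurrentLinkedQueue<>();
        AtomicLong counter = new AtomicLong();
        Runnable producer = () -> {
            for (int i = 0; i < perThread; i++) {
                queue.add(i);              // lock-free, CAS-based enqueue
                counter.incrementAndGet(); // lock-free increment
            }
        };
        Thread t1 = new Thread(producer);
        Thread t2 = new Thread(producer);
        t1.start(); t2.start();
        t1.join(); t2.join();
        return counter.get(); // increments from both threads, no locking
    }

    public static void main(String[] args) throws InterruptedException {
        System.out.println("total additions: " + produce(1000)); // prints 2000
    }
}
```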

Startup time

Java startup time is often much slower than that of many languages, including C, C++, Perl and Python, because many classes (first of all, classes from the platform class libraries) must be loaded before being used.

When compared against similar popular runtimes, for small programs running on a Windows machine, the startup time appears to be similar to Mono's and a little slower than .NET's. [57]

It seems that much of the startup time is due to input-output (IO) bound operations rather than JVM initialization or class loading (the rt.jar class data file alone is 40 MB, and the JVM must seek through much of this large file). [27] Some tests showed that although the new split bytecode verification method improved class loading by roughly 40%, it only realized about a 5% startup improvement for large programs. [58]

Though the improvement is small, it is more visible in small programs that perform a simple operation and then exit, because the Java platform data loading can represent many times the load of the actual program's operation.

Starting with Java SE 6 Update 10, the Sun JRE comes with a Quick Starter that preloads class data at OS startup to get data from the disk cache rather than from the disk.

Excelsior JET approaches the problem from the other side. Its Startup Optimizer reduces the amount of data that must be read from the disk on application startup, and makes the reads more sequential.

In November 2004, Nailgun, a "client, protocol, and server for running Java programs from the command line without incurring the JVM startup overhead", was publicly released, [59] introducing for the first time an option for scripts to use a JVM as a daemon, for running one or more Java applications with no JVM startup overhead. The Nailgun daemon is insecure: "all programs are run with the same permissions as the server". Where multi-user security is needed, Nailgun is inappropriate without special precautions. Scripts for which per-application JVM startup dominates resource use see one to two orders of magnitude runtime performance improvements. [60]

Memory use

Java memory use is much higher than C++'s memory use, due among other things to per-object overhead, the memory used by the virtual machine itself, and the sizing behaviour of the garbage-collected heap. [61] [62] [63] [64]

In most cases a C++ application will consume less memory than an equivalent Java application due to the large overhead of Java's virtual machine, class loading and automatic memory resizing. For programs in which memory is a critical factor for choosing between languages and runtime environments, a cost/benefit analysis is needed.
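As a starting point for such a cost/benefit analysis, a program can sample the JVM's own heap use from inside the process (a rough sketch, not a rigorous measurement; the class name is invented here):

```java
// Rough measurement of current heap use via the standard Runtime API.
public class MemoryUse {
    static long usedHeapBytes() {
        Runtime rt = Runtime.getRuntime();
        return rt.totalMemory() - rt.freeMemory(); // bytes currently in use
    }

    public static void main(String[] args) {
        System.out.println("approx. heap in use: " + usedHeapBytes() / 1024 + " KiB");
    }
}
```

Note that this reports only the Java heap; the JVM's total footprint (metadata, JIT code caches, thread stacks) is larger.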

Trigonometric functions

Performance of trigonometric functions is poor compared to C, because Java has strict specifications for the results of mathematical operations, which may not correspond to the underlying hardware implementation. [65] On the x87 floating point subset, Java since 1.4 does argument reduction for sin and cos in software, [66] causing a big performance hit for values outside the range [-π/4, π/4]. [67]
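The behaviour the specification demands can be sketched as follows (class name invented for this example): Java must return a result within 1 ulp of the true sine even for very large arguments, which is what forces the expensive software argument reduction.

```java
// Both calls below are required by the Math javadoc to be accurate to
// within 1 ulp; the second one needs full argument reduction.
public class TrigDemo {
    public static void main(String[] args) {
        double small = Math.sin(0.5);    // argument already in the fast range
        double large = Math.sin(1.0e10); // far outside it; reduced in software
        System.out.println(small);
        System.out.println(large);
    }
}
```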

Java Native Interface

The Java Native Interface incurs a high overhead, making it costly to cross the boundary between code running on the JVM and native code. [68] [69] [70] Java Native Access (JNA) provides Java programs easy access to native shared libraries (dynamic-link libraries (DLLs) on Windows) via Java code only, with no JNI or native code. This functionality is comparable to Windows' Platform/Invoke and Python's ctypes. Access is dynamic at runtime, without code generation. But this has a cost, and JNA is usually slower than JNI. [71]

User interface

Swing has been perceived as slower than native widget toolkits, because it delegates the rendering of widgets to the pure Java 2D API. However, benchmarks comparing the performance of Swing versus the Standard Widget Toolkit, which delegates the rendering to the native GUI libraries of the operating system, show no clear winner, and the results greatly depend on the context and the environments. [72] Additionally, the newer JavaFX framework, intended to replace Swing, addresses many of Swing's inherent issues.

Use for high performance computing

Java performance for high performance computing (HPC) has been found to be similar to Fortran on compute-intensive benchmarks, but JVMs still have scalability issues for performing intensive communication on a grid computing network. [73]

However, high performance computing applications written in Java have won benchmark competitions. In 2008, [74] and 2009, [75] [76] an Apache Hadoop (an open-source high performance computing project written in Java) based cluster was able to sort a terabyte and petabyte of integers the fastest. The hardware setup of the competing systems was not fixed, however. [77] [78]

In programming contests

Programs in Java start slower than those in other compiled languages. [79] [80] Thus, some online judge systems, notably those hosted by Chinese universities, use longer time limits for Java programs [81] [82] [83] [84] [85] to be fair to contestants using Java.

See also

Citations

  1. "Java versus C++ benchmarks".
  2. 1 2 "Symantec's Just-In-Time Java Compiler To Be Integrated Into Sun JDK 1.1".
  3. "Short Take: Apple licenses Symantec's just-in-time compiler". cnet.com. May 12, 1998. Retrieved November 15, 2015.
  4. "Java gets four times faster with new Symantec just-in-time compiler".
  5. "Performance Comparison of Java/.NET Runtimes (Oct 2004)".
  6. Kawaguchi, Kohsuke (March 30, 2008). "Deep dive into assembly code from Java". Archived from the original on April 2, 2008. Retrieved April 2, 2008.
  7. "Fast, Effective Code Generation in a Just-In-Time Java Compiler" (PDF). Intel Corporation. Retrieved June 22, 2007.
  8. This article shows that the performance gain between interpreted mode and Hotspot amounts to more than a factor of 10.
  9. Numeric performance in C, C# and Java
  10. Algorithmic Performance Comparison Between C, C++, Java and C# Programming Languages Archived March 31, 2010, at the Wayback Machine
  11. "The Java HotSpot Virtual Machine, v1.4.1". Sun Microsystems. Retrieved April 20, 2008.
  12. Nutter, Charles (January 28, 2008). "Lang.NET 2008: Day 1 Thoughts" . Retrieved January 18, 2011. Deoptimization is very exciting when dealing with performance concerns, since it means you can make much more aggressive optimizations...knowing you'll be able to fall back on a tried and true safe path later on
  13. IBM DeveloperWorks Library
  14. For example, the duration of pauses is less noticeable now. See for example this clone of Quake II written in Java: Jake2.
  15. "New Java SE 6 Feature: Type Checking Verifier". Java.net. Retrieved January 18, 2011.[ permanent dead link ]
  16. Brian Goetz (October 18, 2005). "Java theory and practice: Synchronization optimizations in Mustang". IBM . Retrieved January 26, 2013.
  17. "Java HotSpot Virtual Machine Performance Enhancements". Oracle Corporation . Retrieved January 14, 2014. Escape analysis is a technique by which the Java Hotspot Server Compiler can analyze the scope of a new object's uses and decide whether to allocate it on the Java heap. Escape analysis is supported and enabled by default in Java SE 6u23 and later.
  18. Bug report: new register allocator, fixed in Mustang (JDK 6) b59
  19. Mustang's HotSpot Client gets 58% faster! Archived March 5, 2012, at the Wayback Machine in Osvaldo Pinali Doederlein's Blog at java.net
  20. Class Data Sharing at java.sun.com
  21. Class Data Sharing in JDK 1.5.0 in Java Buzz Forum at artima developer
  22. Mckay, Niali. "Java gets four times faster with new Symantec just-in-time compiler".
  23. Sun overview of performance improvements between 1.4 and 5.0 versions.
  24. STR-Crazier: Performance Improvements in Mustang Archived January 5, 2007, at the Wayback Machine in Chris Campbell's Blog at java.net
  25. See here for a benchmark showing a performance boost of about 60% from Java 5.0 to 6 for the application JFreeChart
  26. Java SE 6 Performance White Paper at http://java.sun.com
  27. 1 2 Haase, Chet (May 2007). "Consumer JRE: Leaner, Meaner Java Technology". Sun Microsystems. Retrieved July 27, 2007. At the OS level, all of these megabytes have to be read from disk, which is a very slow operation. Actually, it's the seek time of the disk that's the killer; reading large files sequentially is relatively fast, but seeking the bits that we actually need is not. So even though we only need a small fraction of the data in these large files for any particular application, the fact that we're seeking all over within the files means that there is plenty of disk activity.
  28. Haase, Chet (May 2007). "Consumer JRE: Leaner, Meaner Java Technology". Sun Microsystems. Retrieved July 27, 2007.
  29. Haase, Chet (May 2007). "Consumer JRE: Leaner, Meaner Java Technology". Sun Microsystems. Retrieved July 27, 2007.
  30. Campbell, Chris (April 7, 2007). "Faster Java 2D Via Shaders". Archived from the original on June 5, 2011. Retrieved January 18, 2011.
  31. Haase, Chet (May 2007). "Consumer JRE: Leaner, Meaner Java Technology". Sun Microsystems. Retrieved July 27, 2007.
  32. "JSR 292: Supporting Dynamically Typed Languages on the Java Platform". jcp.org. Retrieved May 28, 2008.
  33. Goetz, Brian (March 4, 2008). "Java theory and practice: Stick a fork in it, Part 2". IBM . Retrieved March 9, 2008.
  34. Lorimer, R.J. (March 21, 2008). "Parallelism with Fork/Join in Java 7". infoq.com. Retrieved May 28, 2008.
  35. "New Compiler Optimizations in the Java HotSpot Virtual Machine" (PDF). Sun Microsystems. May 2006. Retrieved May 30, 2008.
  36. Humble, Charles (May 13, 2008). "JavaOne: Garbage First". infoq.com. Retrieved September 7, 2008.
  37. Coward, Danny (November 12, 2008). "Java VM: Trying a new Garbage Collector for JDK 7". Archived from the original on December 8, 2011. Retrieved November 15, 2008.
  38. "Computer Language Benchmarks Game". benchmarksgame.alioth.debian.org. Archived from the original on January 25, 2015. Retrieved June 2, 2011.
  39. "Computer Language Benchmarks Game". benchmarksgame.alioth.debian.org. Archived from the original on January 13, 2015. Retrieved June 2, 2011.
  40. "Computer Language Benchmarks Game". benchmarksgame.alioth.debian.org. Archived from the original on January 10, 2015. Retrieved June 2, 2011.
  41. "Computer Language Benchmarks Game". benchmarksgame.alioth.debian.org. Archived from the original on January 2, 2015. Retrieved June 2, 2011.
  42. 260/250 frame/s versus 245 frame/s (see benchmark)
  43. Hundt, Robert. "Loop Recognition in C++/Java/Go/Scala" (PDF). Scala Days 2011. Stanford, California. Retrieved March 23, 2014.
  44. L. Gherardi; D. Brugali; D. Comotti (2012). "A Java vs. C++ performance evaluation: a 3D modeling benchmark" (PDF). University of Bergamo . Retrieved March 23, 2014. Using the Server compiler, which is best tuned for long-running applications, have instead demonstrated that Java is from 1.09 to 1.91 times slower(...)In conclusion, the results obtained with the server compiler and these important features suggest that Java can be considered a valid alternative to C++
  45. Lewis, J.P.; Neumann, Ulrich. "Performance of Java versus C++". Computer Graphics and Immersive Technology Lab, University of Southern California.
  46. "The Java HotSpot Performance Engine: Method Inlining Example". Oracle Corporation . Retrieved June 11, 2011.
  47. Nutter, Charles (May 3, 2008). "The Power of the JVM" . Retrieved June 11, 2011. What happens if you've already inlined A's method when B comes along? Here again the JVM shines. Because the JVM is essentially a dynamic language runtime under the covers, it remains ever-vigilant, watching for exactly these sorts of events to happen. And here's the really cool part: when situations change, the JVM can deoptimize. This is a crucial detail. Many other runtimes can only do their optimization once. C compilers must do it all ahead of time, during the build. Some allow you to profile your application and feed that into subsequent builds, but once you've released a piece of code it's essentially as optimized as it will ever get. Other VM-like systems like the CLR do have a JIT phase, but it happens early in execution (maybe before the system even starts executing) and doesn't ever happen again. The JVM's ability to deoptimize and return to interpretation gives it room to be optimistic...room to make ambitious guesses and gracefully fall back to a safe state, to try again later.
  48. "Microbenchmarking C++, C#, and Java: 32-bit integer arithmetic". Dr. Dobb's Journal. July 1, 2005. Retrieved January 18, 2011.
  49. "Microbenchmarking C++, C#, and Java: 64-bit double arithmetic". Dr. Dobb's Journal. July 1, 2005. Retrieved January 18, 2011.
  50. "Microbenchmarking C++, C#, and Java: File I/O". Dr. Dobb's Journal. July 1, 2005. Retrieved January 18, 2011.
  51. "Microbenchmarking C++, C#, and Java: Exception". Dr. Dobb's Journal. July 1, 2005. Retrieved January 18, 2011.
  52. "Microbenchmarking C++, C#, and Java: Array". Dr. Dobb's Journal. July 1, 2005. Retrieved January 18, 2011.
  53. "Microbenchmarking C++, C#, and Java: Trigonometric functions". Dr. Dobb's Journal. July 1, 2005. Retrieved January 18, 2011.
  54. Yi Zhao, Jin Shi, Kai Zheng, Haichuan Wang, Haibo Lin and Ling Shao, Allocation wall: a limiting factor of Java applications on emerging multi-core platforms, Proceedings of the 24th ACM SIGPLAN conference on Object oriented programming systems languages and applications, 2009.
  55. C4: The Continuously Concurrent Compacting Collector
  56. Azul bullies Java with 768 core machine
  57. "Benchmark start-up and system performance for .Net, Mono, Java, C++ and their respective UI". September 2, 2010.
  58. "How fast is the new verifier?". 7 February 2006. Archived from the original on 16 May 2006. Retrieved 9 May 2007.
  59. Nailgun
  60. The Nailgun Background page demonstrates "best case scenario" speedup of 33 times (for scripted "Hello, World!" programs i.e., short-run programs).
  61. "How to calculate the memory usage of Java objects".
  62. "InformIT: C++ Reference Guide > the Object Model". Archived from the original on 21 February 2008. Retrieved 22 June 2009.
  63. Understanding Java Garbage Collection – a talk by Gil Tene at JavaOne: https://www.youtube.com/watch?v=M91w0SBZ-wc
  64. ".: ToMMTi-Systems :: Hinter den Kulissen moderner 3D-Hardware".
  65. "Math (Java Platform SE 6)". Sun Microsystems . Retrieved June 8, 2008.
  66. Gosling, James (July 27, 2005). "Transcendental Meditation". Archived from the original on August 12, 2011. Retrieved June 8, 2008.
  67. Cowell-Shah, Christopher W. (January 8, 2004). "Nine Language Performance Round-up: Benchmarking Math & File I/O". Archived from the original on October 11, 2018. Retrieved June 8, 2008.
  68. Wilson, Steve; Jeff Kesselman (2001). "JavaTM Platform Performance: Using Native Code". Sun Microsystems . Retrieved February 15, 2008.
  69. Kurzyniec, Dawid; Vaidy Sunderam. "Efficient Cooperation between Java and Native Codes - JNI Performance Benchmark" (PDF). Archived from the original (PDF) on 14 February 2005. Retrieved 15 February 2008.
  70. Bloch 2018, p. 285, Chapter §11 Item 66: Use native methods judiciously.
  71. "How does JNA performance compare to custom JNI?". Sun Microsystems . Retrieved December 26, 2009.[ permanent dead link ]
  72. Igor, Križnar (10 May 2005). "SWT Vs. Swing Performance Comparison" (PDF). cosylab.com. Archived from the original (PDF) on 4 July 2008. Retrieved 24 May 2008. It is hard to give a rule-of-thumb where SWT would outperform Swing, or vice versa. In some environments (e.g., Windows), SWT is a winner. In others (Linux, VMware hosting Windows), Swing and its redraw optimization outperform SWT significantly. Differences in performance are significant: factors of 2 and more are common, in either direction
  73. Brian Amedro; Vladimir Bodnartchouk; Denis Caromel; Christian Delbe; Fabrice Huet; Guillermo L. Taboada (August 2008). "Current State of Java for HPC". INRIA . Retrieved September 9, 2008. We first perform some micro benchmarks for various JVMs, showing the overall good performance for basic arithmetic operations(...). Comparing this implementation with a Fortran/MPI one, we show that they have similar performance on computation intensive benchmarks, but still have scalability issues when performing intensive communications.
  74. Owen O'Malley - Yahoo! Grid Computing Team (July 2008). "Apache Hadoop Wins Terabyte Sort Benchmark". Archived from the original on 15 October 2009. Retrieved 21 December 2008. This is the first time that either a Java or an open source program has won.
  75. "Hadoop Sorts a Petabyte in 16.25 Hours and a Terabyte in 62 Seconds". CNET.com. May 11, 2009. Archived from the original on May 16, 2009. Retrieved September 8, 2010. The hardware and operating system details are:(...)Sun Java JDK (1.6.0_05-b13 and 1.6.0_13-b03) (32 and 64 bit)
  76. "Hadoop breaks data-sorting world records". CNET.com. May 15, 2009. Retrieved September 8, 2010.
  77. Chris Nyberg; Mehul Shah. "Sort Benchmark Home Page" . Retrieved November 30, 2010.
  78. Czajkowski, Grzegorz (November 21, 2008). "Sorting 1PB with MapReduce" . Retrieved December 1, 2010.
  79. "TCO10". Archived from the original on 18 October 2010. Retrieved 21 June 2010.
  80. "How to write Java solutions @ Timus Online Judge".
  81. "FAQ".
  82. "FAQ | TJU ACM-ICPC Online Judge". Archived from the original on June 29, 2010. Retrieved May 25, 2010.
  83. "FAQ | CodeChef".
  84. "HomePage of Xidian Univ. Online Judge". Archived from the original on 19 February 2012. Retrieved 13 November 2011.
  85. "FAQ".

Related Research Articles

<span class="mw-page-title-main">Java applet</span> Small application written in Java

Java applets were small applications written in the Java programming language, or another programming language that compiles to Java bytecode, and delivered to users in the form of Java bytecode. The user launched the Java applet from a web page, and the applet was then executed within a Java virtual machine (JVM) in a process separate from the web browser itself. A Java applet could appear in a frame of the web page, a new application window, Sun's AppletViewer, or a stand-alone tool for testing applets.

Java (programming language) – Object-oriented programming language

Java is a high-level, class-based, object-oriented programming language that is designed to have as few implementation dependencies as possible. It is a general-purpose programming language intended to let programmers write once, run anywhere (WORA), meaning that compiled Java code can run on all platforms that support Java without the need to recompile. Java applications are typically compiled to bytecode that can run on any Java virtual machine (JVM) regardless of the underlying computer architecture. The syntax of Java is similar to that of C and C++, but it has fewer low-level facilities than either of them. The Java runtime provides dynamic capabilities that are typically not available in traditional compiled languages.

Java virtual machine – Virtual machine that runs Java bytecode

A Java virtual machine (JVM) is a virtual machine that enables a computer to run Java programs as well as programs written in other languages that are also compiled to Java bytecode. The JVM is detailed by a specification that formally describes what is required in a JVM implementation. Having a specification ensures interoperability of Java programs across different implementations so that program authors using the Java Development Kit (JDK) need not worry about idiosyncrasies of the underlying hardware platform.

Java and C++ are two prominent object-oriented programming languages. By many language popularity metrics, the two languages have dominated object-oriented and high-performance software development for much of the 21st century, and are often directly compared and contrasted. Java's syntax was based on C/C++.

Bytecode is a form of instruction set designed for efficient execution by a software interpreter. Unlike human-readable source code, bytecodes are compact numeric codes, constants, and references that encode the result of a compiler's parsing and semantic analysis of things such as the type, scope, and nesting depth of program objects.
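As an illustration of the stack-based encoding described above, the following minimal Java method compiles to just four bytecode instructions (shown in comments as `javap -c` typically prints them); the class name `AddExample` is chosen for this sketch:

```java
// A minimal Java method and, in comments, the stack-based bytecode
// that `javap -c` typically shows for it after compilation with javac.
public class AddExample {
    public static int add(int a, int b) {
        return a + b;
        // Corresponding bytecode (typical javac output):
        //   iload_0   // push the first int argument onto the operand stack
        //   iload_1   // push the second int argument
        //   iadd      // pop two ints, push their sum
        //   ireturn   // return the top of the stack
    }

    public static void main(String[] args) {
        System.out.println(add(2, 3)); // prints 5
    }
}
```

Each instruction is a single opcode byte (plus operands where needed), which is what makes the class file compact compared to source text.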

The GNU Compiler for Java (GCJ) is a discontinued free compiler for the Java programming language. It was part of the GNU Compiler Collection.

In computing, just-in-time (JIT) compilation is compilation during execution of a program rather than before execution. This may consist of source code translation but is more commonly bytecode translation to machine code, which is then executed directly. A system implementing a JIT compiler typically continuously analyses the code being executed and identifies parts of the code where the speedup gained from compilation or recompilation would outweigh the overhead of compiling that code.
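The "compile only what is hot" policy described above can be sketched with a simple invocation counter. This is an invented, illustrative model only: the class name, `COMPILE_THRESHOLD` value, and bookkeeping maps are assumptions for the sketch, not HotSpot's actual mechanism or tuning values.

```java
import java.util.HashMap;
import java.util.Map;

// Illustrative sketch of a counter-based JIT trigger: interpret a method
// until its invocation count crosses a threshold, then mark it "compiled".
public class JitPolicySketch {
    static final int COMPILE_THRESHOLD = 10_000;           // invented value
    static final Map<String, Integer> invocationCounts = new HashMap<>();
    static final Map<String, Boolean> compiled = new HashMap<>();

    // Record one invocation of `method`; return true once it counts as compiled.
    static boolean onInvoke(String method) {
        int n = invocationCounts.merge(method, 1, Integer::sum);
        if (n >= COMPILE_THRESHOLD) {
            compiled.put(method, true); // a real JIT would emit machine code here
        }
        return compiled.getOrDefault(method, false);
    }

    public static void main(String[] args) {
        boolean hot = false;
        for (int i = 0; i < 10_000; i++) {
            hot = onInvoke("Foo.bar");
        }
        System.out.println(hot); // prints true: the method crossed the threshold
    }
}
```

Real JVMs refine this idea considerably (for example, counting loop back-edges as well as invocations, and recompiling at higher optimization levels), but the cost/benefit trade-off is the same: compilation effort is spent only where execution time is concentrated.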

javac is the primary Java compiler included in the Java Development Kit (JDK) from Oracle Corporation. Martin Odersky implemented the GJ compiler, and his implementation became the basis for javac.

HotSpot, released as Java HotSpot Performance Engine, is a Java virtual machine for desktop and server computers, developed by Sun Microsystems and now maintained and distributed by Oracle Corporation. It features improved performance via methods such as just-in-time compilation and adaptive optimization.

Jikes Research Virtual Machine is a mature virtual machine that runs programs written for the Java platform. Unlike most other Java virtual machines (JVMs), it is written in the programming language Java, in a style of implementation termed meta-circular. It is free and open source software released under an Eclipse Public License.

Java (software platform) – Set of computer software and specifications

Java is a set of computer software and specifications that provides a software platform for developing application software and deploying it in a cross-platform computing environment. Java is used in a wide variety of computing platforms, from embedded devices and mobile phones to enterprise servers and supercomputers. Java applets, which were less common than standalone Java applications, were commonly run in secure, sandboxed environments to provide many features of native applications through being embedded in HTML pages.

Eclipse OpenJ9 is a high performance, scalable, Java virtual machine (JVM) implementation that is fully compliant with the Java Virtual Machine Specification.

Dalvik is a discontinued process virtual machine (VM) in the Android operating system that executes applications written for Android. Dalvik was an integral part of the Android software stack in Android versions 4.4 "KitKat" and earlier, which were commonly used on mobile devices such as mobile phones and tablet computers, as well as in some devices such as smart TVs and wearables. Dalvik is open-source software, originally written by Dan Bornstein, who named it after the fishing village of Dalvík in Eyjafjörður, Iceland.

Azul Systems – Computer manufacturer of appliances for executing Java-based applications

Azul Systems, Inc. develops runtimes for executing Java-based applications. Founded in March 2002, Azul Systems is headquartered in Sunnyvale, California.

Da Vinci Machine – Sun Microsystems project

The Da Vinci Machine, also called the Multi Language Virtual Machine, was a Sun Microsystems project aiming to prototype the extension of the Java Virtual Machine (JVM) to add support for dynamic languages.

The Java Development Kit (JDK) is a distribution of Java technology by Oracle Corporation. It implements the Java Language Specification (JLS) and the Java Virtual Machine Specification (JVMS) and provides the Standard Edition (SE) of the Java Application Programming Interface (API). It is a derivative of the community-driven OpenJDK, which Oracle stewards. It provides software for working with Java applications. Examples of included software are the virtual machine, a compiler, performance monitoring tools, a debugger, and other utilities that Oracle considers useful for a Java programmer.

In computing, Java bytecode is the bytecode-structured instruction set of the Java virtual machine (JVM), a virtual machine that enables a computer to run programs written in the Java programming language and several other programming languages (see List of JVM languages).

GraalVM – Java virtual machine

GraalVM is a Java VM and Java Development Kit (JDK) based on Oracle JDK/OpenJDK, written in Java. Besides just-in-time (JIT) compilation, GraalVM provides an ahead-of-time compilation technology to compile Java applications into standalone binaries that start instantly, provide peak performance with no warmup, and use fewer resources. It supports additional programming languages and execution modes. The first production-ready version, GraalVM 19.0, was released in May 2019. The most recent version is GraalVM for JDK 21, made available in September 2023.

Quarkus is a Java framework tailored for deployment on Kubernetes. Key technology components surrounding it are OpenJDK HotSpot and GraalVM. The goal of Quarkus is to make Java a leading platform in Kubernetes and serverless environments while offering developers a unified reactive and imperative programming model to optimally address a wider range of distributed application architectures.
