Apache Flink

Developer(s): Apache Software Foundation
Initial release: May 2011
Stable release: 1.18.0 / 24 October 2023 [1] [2]
Written in: Java and Scala
Operating system: Cross-platform
License: Apache License 2.0
Website: flink.apache.org

Apache Flink is an open-source, unified stream-processing and batch-processing framework developed by the Apache Software Foundation. The core of Apache Flink is a distributed streaming data-flow engine written in Java and Scala. [3] [4] Flink executes arbitrary dataflow programs in a data-parallel and pipelined (hence task parallel) manner. [5] Flink's pipelined runtime system enables the execution of bulk/batch and stream processing programs. [6] [7] Furthermore, Flink's runtime supports the execution of iterative algorithms natively. [8]


Flink provides a high-throughput, low-latency streaming engine [9] as well as support for event-time processing and state management. Flink applications are fault-tolerant in the event of machine failure and support exactly-once semantics. [10] Programs can be written in Java, Scala, [11] Python, [12] and SQL [13] and are automatically compiled and optimized [14] into dataflow programs that are executed in a cluster or cloud environment. [15]

Flink does not provide its own data-storage system; instead, it offers data-source and sink connectors to systems such as Apache Doris, Amazon Kinesis, Apache Kafka, HDFS, Apache Cassandra, and Elasticsearch. [16]

Development

Apache Flink is developed under the Apache License 2.0 [17] by the Apache Flink Community within the Apache Software Foundation. The project is driven by over 25 committers and over 340 contributors.

Overview

Apache Flink's dataflow programming model provides event-at-a-time processing on both finite and infinite datasets. At a basic level, Flink programs consist of streams and transformations. “Conceptually, a stream is a (potentially never-ending) flow of data records, and a transformation is an operation that takes one or more streams as input, and produces one or more output streams as a result.” [18]

Apache Flink includes two core APIs: a DataStream API for bounded or unbounded streams of data and a DataSet API for bounded data sets. Flink also offers a Table API, which is a SQL-like expression language for relational stream and batch processing that can be easily embedded in Flink's DataStream and DataSet APIs. The highest-level language supported by Flink is SQL, which is semantically similar to the Table API and represents programs as SQL query expressions.

Programming Model and Distributed Runtime

Upon execution, Flink programs are mapped to streaming dataflows. [18] Every Flink dataflow starts with one or more sources (a data input, e.g., a message queue or a file system) and ends with one or more sinks (a data output, e.g., a message queue, file system, or database). An arbitrary number of transformations can be performed on the stream. These streams can be arranged as a directed, acyclic dataflow graph, allowing an application to branch and merge dataflows.
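As an illustration, the following minimal sketch (assuming the Scala DataStream API, with a hypothetical socket source and output path) builds a small dataflow with one source, two chained transformations, and one sink:

import org.apache.flink.streaming.api.scala._

object SimpleDataflow {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Source: read lines from a socket (hypothetical host and port)
    val source = env.socketTextStream("localhost", 9999)

    // Transformations: trim each line and drop empty ones
    val cleaned = source
      .map(_.trim)
      .filter(_.nonEmpty)

    // Sink: write the results to a file (hypothetical path)
    cleaned.writeAsText("/tmp/flink-output")

    env.execute("Simple Dataflow")
  }
}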

Flink offers ready-built source and sink connectors with Apache Kafka, Amazon Kinesis, [19] HDFS, Apache Cassandra, and more. [16]

Flink programs run as a distributed system within a cluster and can be deployed in standalone mode as well as on YARN, Mesos, or Docker-based setups, along with other resource-management frameworks. [20]

State: Checkpoints, Savepoints, and Fault-tolerance

Apache Flink includes a lightweight fault-tolerance mechanism based on distributed checkpoints. [10] A checkpoint is an automatic, asynchronous snapshot of the state of an application and of the position in a source stream. In the case of a failure, a Flink program with checkpointing enabled will, upon recovery, resume processing from the last completed checkpoint, ensuring that Flink maintains exactly-once state semantics within an application. The checkpointing mechanism also exposes hooks that allow application code to include external systems in a checkpoint (for example, by opening and committing transactions with a database system).
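As a minimal sketch (assuming the Scala DataStream API), checkpointing is enabled per job through the execution environment; the interval and consistency mode shown here are illustrative values:

import org.apache.flink.streaming.api.CheckpointingMode
import org.apache.flink.streaming.api.scala._

object CheckpointedJob {
  def main(args: Array[String]): Unit = {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    // Snapshot all operator state every 10 seconds with exactly-once guarantees
    env.enableCheckpointing(10000, CheckpointingMode.EXACTLY_ONCE)

    env.socketTextStream("localhost", 9999)  // hypothetical source
      .map(_.length)
      .print()

    env.execute("Checkpointed Job")
  }
}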

Flink also includes a mechanism called savepoints, which are manually triggered checkpoints. [21] A user can generate a savepoint, stop a running Flink program, then resume the program from the same application state and position in the stream. Savepoints enable updates to a Flink program or a Flink cluster without losing the application's state. As of Flink 1.2, savepoints also make it possible to restart an application with a different parallelism, allowing users to adapt to changing workloads.

DataStream API

Flink's DataStream API enables transformations (e.g. filters, aggregations, window functions) on bounded or unbounded streams of data. The DataStream API includes more than 20 different types of transformations and is available in Java and Scala. [22]

A simple example of a stateful stream processing program is an application that emits a word count from a continuous input stream and groups the data in 5-second windows:

import org.apache.flink.streaming.api.scala._
import org.apache.flink.streaming.api.windowing.time.Time

case class WordCount(word: String, count: Int)

object WindowWordCount {
  def main(args: Array[String]) {
    val env = StreamExecutionEnvironment.getExecutionEnvironment

    val text = env.socketTextStream("localhost", 9999)

    val counts = text.flatMap { _.toLowerCase.split("\\W+") filter { _.nonEmpty } }
      .map { WordCount(_, 1) }
      .keyBy("word")
      .timeWindow(Time.seconds(5))
      .sum("count")

    counts.print

    env.execute("Window Stream WordCount")
  }
}

Apache Beam

Apache Beam “provides an advanced unified programming model, allowing (a developer) to implement batch and streaming data processing jobs that can run on any execution engine.” [23] The Apache Flink-on-Beam runner is the most feature-rich according to a capability matrix maintained by the Beam community. [24]

data Artisans, in conjunction with the Apache Flink community, worked closely with the Beam community to develop a Flink runner. [25]

DataSet API

Flink's DataSet API enables transformations (e.g., filters, mapping, joining, grouping) on bounded datasets. The DataSet API includes more than 20 different types of transformations. [26] The API is available in Java and Scala, along with an experimental Python API. Flink's DataSet API is conceptually similar to the DataStream API.
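For illustration, a minimal batch word count with the Scala DataSet API might look like the following sketch (the input path is hypothetical):

import org.apache.flink.api.scala._

object BatchWordCount {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment

    // Read a bounded dataset from a (hypothetical) text file
    val text = env.readTextFile("/tmp/input.txt")

    val counts = text
      .flatMap { _.toLowerCase.split("\\W+").filter(_.nonEmpty) }
      .map { (_, 1) }
      .groupBy(0)   // group by the word field
      .sum(1)       // sum the counts per word

    counts.print()
  }
}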

Table API and SQL

Flink's Table API is a SQL-like expression language for relational stream and batch processing that can be embedded in Flink's Java and Scala DataSet and DataStream APIs. The Table API and SQL interface operate on a relational Table abstraction. Tables can be created from external data sources or from existing DataStreams and DataSets. The Table API supports relational operators such as selection, aggregation, and joins on Tables.

Tables can also be queried with regular SQL. The Table API and SQL offer equivalent functionality and can be mixed in the same program. When a Table is converted back into a DataSet or DataStream, the logical plan, which was defined by relational operators and SQL queries, is optimized using Apache Calcite and is transformed into a DataSet or DataStream program. [27]
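As a sketch, assuming the Flink 1.2-era Scala Table API (method names such as sql were renamed in later releases), a Table created from a small in-memory DataSet can be processed with relational operators and queried with SQL in the same program:

import org.apache.flink.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

object TableAndSqlExample {
  def main(args: Array[String]): Unit = {
    val env = ExecutionEnvironment.getExecutionEnvironment
    val tableEnv = TableEnvironment.getTableEnvironment(env)

    // A small in-memory DataSet of (product, amount) pairs
    val orders = env.fromElements(("books", 10), ("games", 25), ("books", 5))

    // Table API: relational operators on a Table created from the DataSet
    val ordersTable = orders.toTable(tableEnv, 'product, 'amount)
    val totals = ordersTable
      .groupBy('product)
      .select('product, 'amount.sum as 'total)

    // SQL on the same data: register the Table and run an equivalent query
    tableEnv.registerTable("Orders", ordersTable)
    val sqlTotals = tableEnv.sql(
      "SELECT product, SUM(amount) AS total FROM Orders GROUP BY product")

    // Converting back to DataSets triggers plan optimization and translation
    totals.toDataSet[(String, Int)].print()
    sqlTotals.toDataSet[(String, Int)].print()
  }
}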

Flink Forward

Flink Forward is an annual conference about Apache Flink. The first edition of Flink Forward took place in 2015 in Berlin. The two-day conference had over 250 attendees from 16 countries. Sessions were organized in two tracks with over 30 technical presentations from Flink developers and one additional track with hands-on Flink training.

In 2016, 350 participants joined the conference and over 40 speakers presented technical talks in 3 parallel tracks. On the third day, attendees were invited to participate in hands-on training sessions.

In 2017, the event expanded to San Francisco as well. The conference day is dedicated to technical talks on how Flink is used in the enterprise, Flink system internals, ecosystem integrations with Flink, and the future of the platform. It features keynotes, talks from Flink users in industry and academia, and hands-on training sessions on Apache Flink.

In 2020, following the COVID-19 pandemic, Flink Forward's spring edition, which was supposed to be hosted in San Francisco, was canceled. Instead, the conference was hosted virtually, starting on April 22 and concluding on April 24, featuring live keynotes, Flink use cases, Apache Flink internals, and other topics on stream processing and real-time analytics. [28]

History

In 2010, the research project "Stratosphere: Information Management on the Cloud" [29] led by Volker Markl (funded by the German Research Foundation (DFG)) [30] was started as a collaboration of Technical University Berlin, Humboldt-Universität zu Berlin, and Hasso-Plattner-Institut Potsdam. Flink started from a fork of Stratosphere's distributed execution engine and it became an Apache Incubator project in March 2014. [31] In December 2014, Flink was accepted as an Apache top-level project. [32] [33] [34] [35]

Version | Original release date | Latest version | Release date | Support status
0.9  | 2015-06-24 | 0.9.1  | 2015-09-01 | Old version, no longer maintained
0.10 | 2015-11-16 | 0.10.2 | 2016-02-11 | Old version, no longer maintained
1.0  | 2016-03-08 | 1.0.3  | 2016-05-11 | Old version, no longer maintained
1.1  | 2016-08-08 | 1.1.5  | 2017-03-22 | Old version, no longer maintained
1.2  | 2017-02-06 | 1.2.1  | 2017-04-26 | Old version, no longer maintained
1.3  | 2017-06-01 | 1.3.3  | 2018-03-15 | Old version, no longer maintained
1.4  | 2017-12-12 | 1.4.2  | 2018-03-08 | Old version, no longer maintained
1.5  | 2018-05-25 | 1.5.6  | 2018-12-26 | Old version, no longer maintained
1.6  | 2018-08-08 | 1.6.3  | 2018-12-22 | Old version, no longer maintained
1.7  | 2018-11-30 | 1.7.2  | 2019-02-15 | Old version, no longer maintained
1.8  | 2019-04-09 | 1.8.3  | 2019-12-11 | Old version, no longer maintained
1.9  | 2019-08-22 | 1.9.2  | 2020-01-30 | Old version, no longer maintained
1.10 | 2020-02-11 | 1.10.3 | 2021-01-29 | Old version, no longer maintained
1.11 | 2020-07-06 | 1.11.6 | 2021-12-16 | Old version, no longer maintained
1.12 | 2020-12-10 | 1.12.7 | 2021-12-16 | Old version, no longer maintained
1.13 | 2021-05-03 | 1.13.6 | 2022-02-18 | Old version, no longer maintained
1.14 | 2021-09-29 | 1.14.6 | 2022-09-28 | Old version, no longer maintained
1.15 | 2022-05-05 | 1.15.4 | 2023-03-15 | Old version, no longer maintained
1.16 | 2022-10-28 | 1.16.3 | 2023-11-29 | Old version, no longer maintained
1.17 | 2023-03-23 | 1.17.2 | 2023-11-29 | Older version, still maintained
1.18 | 2023-10-24 | 1.18.0 | 2023-10-24 | Current stable version


The 1.14.1, 1.13.4, 1.12.6, and 1.11.5 releases, which were intended to contain only a Log4j upgrade to 2.15.0, were skipped because CVE-2021-45046 was discovered during release publication. [36]


References

  1. "Release 1.18.0". 24 October 2023. Retrieved 18 November 2023.
  2. "All stable Flink releases". flink.apache.org. Apache Software Foundation. Retrieved 2021-12-20.
  3. "Apache Flink: Scalable Batch and Stream Data Processing". apache.org.
  4. "apache/flink". GitHub. 29 January 2022.
  5. Alexander Alexandrov, Rico Bergmann, Stephan Ewen, Johann-Christoph Freytag, Fabian Hueske, Arvid Heise, Odej Kao, Marcus Leich, Ulf Leser, Volker Markl, Felix Naumann, Mathias Peters, Astrid Rheinländer, Matthias J. Sax, Sebastian Schelter, Mareike Höger, Kostas Tzoumas, and Daniel Warneke. 2014. The Stratosphere platform for big data analytics. The VLDB Journal 23, 6 (December 2014), 939-964. DOI
  6. Ian Pointer (7 May 2015). "Apache Flink: New Hadoop contender squares off against Spark". InfoWorld.
  7. "On Apache Flink. Interview with Volker Markl". odbms.org.
  8. Stephan Ewen, Kostas Tzoumas, Moritz Kaufmann, and Volker Markl. 2012. Spinning fast iterative data flows. Proc. VLDB Endow. 5, 11 (July 2012), 1268-1279. DOI
  9. "Benchmarking Streaming Computation Engines at Yahoo!". Yahoo Engineering. Retrieved 2017-02-23.
  10. Carbone, Paris; Fóra, Gyula; Ewen, Stephan; Haridi, Seif; Tzoumas, Kostas (2015-06-29). "Lightweight Asynchronous Snapshots for Distributed Dataflows". arXiv:1506.08603 [cs.DC].
  11. "Apache Flink 1.2.0 Documentation: Flink DataStream API Programming Guide". ci.apache.org. Retrieved 2017-02-23.
  12. "Apache Flink 1.2.0 Documentation: Python Programming Guide". ci.apache.org. Retrieved 2017-02-23.
  13. "Apache Flink 1.2.0 Documentation: Table and SQL". ci.apache.org. Retrieved 2017-02-23.
  14. Fabian Hueske, Mathias Peters, Matthias J. Sax, Astrid Rheinländer, Rico Bergmann, Aljoscha Krettek, and Kostas Tzoumas. 2012. Opening the black boxes in data flow optimization. Proc. VLDB Endow. 5, 11 (July 2012), 1256-1267. DOI
  15. Daniel Warneke and Odej Kao. 2009. Nephele: efficient parallel data processing in the cloud. In Proceedings of the 2nd Workshop on Many-Task Computing on Grids and Supercomputers (MTAGS '09). ACM, New York, NY, USA, Article 8, 10 pages. DOI
  16. "Apache Flink 1.2.0 Documentation: Streaming Connectors". ci.apache.org. Retrieved 2017-02-23.
  17. "ASF Git Repos - flink.git/blob - LICENSE". apache.org. Archived from the original on 2017-10-23. Retrieved 2015-04-12.
  18. "Apache Flink 1.2.0 Documentation: Dataflow Programming Model". ci.apache.org. Retrieved 2017-02-23.
  19. "Kinesis Data Streams: processing streaming data in real time". 5 January 2022.
  20. "Apache Flink 1.2.0 Documentation: Distributed Runtime Environment". ci.apache.org. Retrieved 2017-02-24.
  21. "Apache Flink 1.2.0 Documentation: Distributed Runtime Environment - Savepoints". ci.apache.org. Retrieved 2017-02-24.
  22. "Apache Flink 1.2.0 Documentation: Flink DataStream API Programming Guide". ci.apache.org. Retrieved 2017-02-24.
  23. "Apache Beam". beam.apache.org. Retrieved 2017-02-24.
  24. "Apache Beam Capability Matrix". beam.apache.org. Retrieved 2017-02-24.
  25. "Why Apache Beam? A Google Perspective | Google Cloud Big Data and Machine Learning Blog | Google Cloud Platform". Google Cloud Platform. Archived from the original on 2017-02-25. Retrieved 2017-02-24.
  26. "Apache Flink 1.2.0 Documentation: Flink DataSet API Programming Guide". ci.apache.org. Retrieved 2017-02-24.
  27. "Stream Processing for Everyone with SQL and Apache Flink". flink.apache.org. 24 May 2016. Retrieved 2020-01-08.
  28. "Flink Forward Virtual Conference 2020".
  29. "Stratosphere". stratosphere.eu.
  30. "Stratosphere - Information Management on the Cloud". Deutsche Forschungsgemeinschaft (DFG). Retrieved 2023-12-01.
  31. "Stratosphere". apache.org.
  32. "Project Details for Apache Flink". apache.org.
  33. "The Apache Software Foundation Announces Apache™ Flink™ as a Top-Level Project : The Apache Software Foundation Blog". apache.org. 12 January 2015.
  34. "Will the mysterious Apache Flink find a sweet spot in the enterprise?". siliconangle.com. 9 February 2015.
  35. (in German)
  36. "Apache Flink Log4j emergency releases". flink.apache.org. Apache Software Foundation. 16 December 2021. Retrieved 2021-12-22.