Apache Pig

Developer(s): Apache Software Foundation, Yahoo Research
Initial release: September 11, 2008
Stable release: 0.17.0 / June 19, 2017
Operating system: Microsoft Windows, OS X, Linux
Type: Data analytics
License: Apache License 2.0
Website: pig.apache.org

Apache Pig [1] is a high-level platform for creating programs that run on Apache Hadoop. The language for this platform is called Pig Latin. [1] Pig can execute its Hadoop jobs in MapReduce, Apache Tez, or Apache Spark. [2] Pig Latin abstracts the programming from the Java MapReduce idiom into a notation which makes MapReduce programming high level, similar to that of SQL for relational database management systems. Pig Latin can be extended using user-defined functions (UDFs) which the user can write in Java, Python, JavaScript, Ruby or Groovy [3] and then call directly from the language.
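For illustration, a minimal sketch of calling a Java UDF from Pig Latin, using the UPPER function from the Piggybank shared repository (the jar location and input path are assumptions for the example):

  -- register the Piggybank jar so its UDFs become available to the script
  REGISTER 'piggybank.jar';
  -- bind a short alias to the fully qualified Java class of the UDF
  DEFINE Upper org.apache.pig.piggybank.evaluation.string.UPPER();

  lines   = LOAD '/tmp/input.txt' AS (line:chararray);
  -- call the UDF like any built-in function
  shouted = FOREACH lines GENERATE Upper(line);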

History

Apache Pig was originally [4] developed at Yahoo Research around 2006 for researchers to have an ad hoc way of creating and executing MapReduce jobs on very large data sets. In 2007, [5] it was moved into the Apache Software Foundation.

Version | Original release date | Latest version | Release date [6] | Status
0.1     | 2008-09-11            | 0.1.1          | 2008-12-05       | Old version, no longer maintained
0.2     | 2009-04-08            | 0.2.0          | 2009-04-08       | Old version, no longer maintained
0.3     | 2009-06-25            | 0.3.0          | 2009-06-25       | Old version, no longer maintained
0.4     | 2009-08-29            | 0.4.0          | 2009-08-29       | Old version, no longer maintained
0.5     | 2009-09-29            | 0.5.0          | 2009-09-29       | Old version, no longer maintained
0.6     | 2010-03-01            | 0.6.0          | 2010-03-01       | Old version, no longer maintained
0.7     | 2010-05-13            | 0.7.0          | 2010-05-13       | Old version, no longer maintained
0.8     | 2010-12-17            | 0.8.1          | 2011-04-24       | Old version, no longer maintained
0.9     | 2011-07-29            | 0.9.2          | 2012-01-22       | Old version, no longer maintained
0.10    | 2012-01-22            | 0.10.1         | 2012-04-25       | Old version, no longer maintained
0.11    | 2013-02-21            | 0.11.1         | 2013-04-01       | Old version, no longer maintained
0.12    | 2013-10-14            | 0.12.1         | 2014-04-14       | Old version, no longer maintained
0.13    | 2014-07-04            | 0.13.0         | 2014-07-04       | Old version, no longer maintained
0.14    | 2014-11-20            | 0.14.0         | 2014-11-20       | Old version, no longer maintained
0.15    | 2015-06-06            | 0.15.0         | 2015-06-06       | Old version, no longer maintained
0.16    | 2016-06-08            | 0.16.0         | 2016-06-08       | Old version, no longer maintained
0.17    | 2017-06-19            | 0.17.0         | 2017-06-19       | Current stable version

Naming

The name Pig was chosen arbitrarily and stuck because it was memorable, easy to spell, and novel. [7] [8] [9]

The story goes that the researchers working on the project initially referred to it simply as 'the language'. Eventually they needed to call it something. Off the top of his head, one researcher suggested Pig, and the name stuck. It is quirky yet memorable and easy to spell. While some have hinted that the name sounds coy or silly, it has provided us with an entertaining nomenclature, such as Pig Latin for the language, Grunt for the shell, and PiggyBank for the CPAN-like shared repository.

Alan Gates and Daniel Dai, "What Is Pig?", Programming Pig, 2nd Edition (November 2016)

Example

Below is an example of a "Word Count" program in Pig Latin:

  input_lines = LOAD '/tmp/my-copy-of-all-pages-on-internet' AS (line:chararray);

  -- Extract words from each line and put them into a pig bag
  -- datatype, then flatten the bag to get one word on each row
  words = FOREACH input_lines GENERATE FLATTEN(TOKENIZE(line)) AS word;

  -- filter out any words that are just white spaces
  filtered_words = FILTER words BY word MATCHES '\\w+';

  -- create a group for each word
  word_groups = GROUP filtered_words BY word;

  -- count the entries in each group
  word_count = FOREACH word_groups GENERATE COUNT(filtered_words) AS count, group AS word;

  -- order the records by count
  ordered_word_count = ORDER word_count BY count DESC;

  STORE ordered_word_count INTO '/tmp/number-of-words-on-internet';

The above program will generate parallel executable tasks which can be distributed across multiple machines in a Hadoop cluster to count the number of words in a dataset such as all the webpages on the internet.
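Assuming the script is saved as wordcount.pig (the file name is illustrative), it can be submitted in any of the supported execution modes through the pig command-line client:

  pig -x mapreduce wordcount.pig   # run on a Hadoop cluster as MapReduce jobs
  pig -x tez wordcount.pig         # run on Apache Tez (Pig 0.14 and later)
  pig -x spark wordcount.pig       # run on Apache Spark (Pig 0.17 and later)
  pig -x local wordcount.pig       # run locally, useful for development and testing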

Pig vs SQL

In comparison to SQL, Pig

  1. has a nested relational model,
  2. uses lazy evaluation,
  3. uses extract, transform, load (ETL),
  4. is able to store data at any point during a pipeline,
  5. declares execution plans,
  6. supports pipeline splits, thus allowing workflows to proceed along DAGs instead of strictly sequential pipelines.

On the other hand, it has been argued that DBMSs are substantially faster than the MapReduce system once the data is loaded, but that loading takes considerably longer in database systems. It has also been argued that RDBMSs offer out-of-the-box support for column storage, working with compressed data, indexes for efficient random data access, and transaction-level fault tolerance. [10]

Pig Latin is procedural and fits very naturally in the pipeline paradigm, while SQL is declarative. In SQL, users specify that data from two tables must be joined, but not which join implementation to use (some SQL dialects do allow join hints, but "... for many SQL applications the query writer may not have enough knowledge of the data or enough expertise to specify an appropriate join algorithm."). Pig Latin allows users to specify an implementation, or aspects of an implementation, to be used in executing a script in several ways. [11] In effect, Pig Latin programming is similar to specifying a query execution plan, making it easier for programmers to explicitly control the flow of their data processing task. [12]
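For instance, a script can request a map-side fragment-replicate join simply by naming it; a minimal sketch, with relation names and paths invented for the example:

  big_table   = LOAD '/tmp/big'   AS (k:chararray, v:chararray);
  small_table = LOAD '/tmp/small' AS (k:chararray, w:chararray);
  -- 'replicated' asks Pig to load the second (small) relation into memory
  -- on each mapper instead of running a reduce-side join
  joined = JOIN big_table BY k, small_table BY k USING 'replicated';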

SQL is oriented around queries that produce a single result. SQL handles trees naturally, but has no built-in mechanism for splitting a data processing stream and applying different operators to each sub-stream. A Pig Latin script instead describes a directed acyclic graph (DAG) rather than a pipeline. [11]
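A minimal sketch of such a split, with relation and field names invented for the example, using Pig's SPLIT operator to route one input stream into two sub-streams that are then processed and stored independently:

  logs = LOAD '/tmp/logs' AS (level:chararray, msg:chararray);
  -- route each record to a sub-stream by predicate
  SPLIT logs INTO errors IF level == 'ERROR', warnings IF level == 'WARN';
  -- apply a different operator to each sub-stream
  error_count = FOREACH (GROUP errors ALL) GENERATE COUNT(errors);
  STORE error_count INTO '/tmp/error-count';
  STORE warnings   INTO '/tmp/warnings';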

Pig Latin's ability to include user code at any point in the pipeline is useful for pipeline development. If SQL is used, data must first be imported into the database, and then the cleansing and transformation process can begin. [11]

See also

Apache Cocoon
MapReduce
Apache Hadoop
Apache Solr
Language Integrated Query (LINQ)
Microsoft SQL Server
Apache HBase
Apache Mahout
Apache Hive
Data-intensive computing
Data-centric programming language
Prepared statement
Apache Drill
Apache Impala
Jaql
Apache Spark
Apache Phoenix
Apache Flink
Apache Parquet

References

  1. 1 2 "Hadoop: Apache Pig" . Retrieved Sep 2, 2011.
  2. "[PIG-4167] Initial implementation of Pig on Spark - ASF JIRA". issues.apache.org. Retrieved 2018-12-29.
  3. "Pig user defined functions" . Retrieved May 3, 2013.
  4. "Yahoo Blog:Pig – The Road to an Efficient High-level language for Hadoop". Archived from the original on February 3, 2016. Retrieved May 23, 2015.
  5. "Pig into Incubation at the Apache Software Foundation". Archived from the original on February 3, 2016. Retrieved May 23, 2015.
  6. "Apache Pig Releases". Apache. Retrieved 2019-03-13.
  7. "1. What Is Pig? - Programming Pig, 2nd Edition [Book]". www.oreilly.com. Retrieved 2021-08-01.
  8. Gates, Alan (2016). Programming Pig. Daniel Dai (Second ed.). Sebastopol, CA. ISBN   978-1-4919-3706-8. OCLC   964523786.
  9. Gates, Alan (2021-07-27). "Pig mascot questions". Pig User Mailing List (Mailing list). Archived from the original on 1 August 2021. Retrieved 1 August 2021.
  10. "Communications of the ACM: MapReduce and Parallel DBMSs: Friends or Foes?" (PDF). Archived from the original (PDF) on July 1, 2015. Retrieved May 23, 2015.
  11. 1 2 3 "Yahoo Pig Development Team: Comparing Pig Latin and SQL for Constructing Data Processing Pipelines". Archived from the original on May 30, 2015. Retrieved May 23, 2015.
  12. "ACM SigMod 08: Pig Latin: A Not-So-Foreign Language for Data Processing" (PDF). Retrieved May 23, 2015.