Discipline | Computer science |
---|---|
Language | English |
Edited by | Leah Epstein |
Publication details | |
History | 1971–present |
Publisher | Elsevier |
Frequency | 24/year |
Open access | No |
Impact factor | 0.959 (2020) |
Standard abbreviations | |
ISO 4 | Inf. Process. Lett. |
Indexing | |
ISSN | 0020-0190 |
OCLC no. | 38995181 |
Information Processing Letters (IPL) is a peer-reviewed scientific journal in the field of computer science, published by Elsevier. The aim of the journal is to enable fast dissemination of results in the field of information processing in the form of short papers. Submissions are limited to nine double-spaced pages.
The scope of IPL covers fundamental aspects of information processing and computing. This naturally covers topics in the broadly understood field of theoretical computer science, including algorithms, formal languages and automata, computational complexity, computational logic, distributed and parallel algorithms, computational geometry, learning theory, computational number theory, computational biology, coding theory, theoretical cryptography, and applied discrete mathematics. Generally, submissions in all areas of scientific inquiry are considered, provided that they describe research contributions credibly motivated by applications to computing and involve rigorous methodology. High-quality experimental papers that address topics of sufficiently broad interest are also considered.
IPL implements a three-tier review process. Each submission is assigned to an associate editor, who determines whether it falls within IPL's scope and meets basic quality criteria. On average, about 60% of submissions are desk-rejected by associate editors. Submissions deemed suitable for further review are distributed among the members of the editorial board, who handle the review process, which typically involves soliciting external reviews from two to three experts in the area. Between 2017 and 2020, the overall acceptance rate of IPL averaged 20–25%.
Established in 1971, IPL is one of the oldest journals in computer science. In its more than 50-year history, IPL has published research contributions from leading figures in computer science, including multiple Turing Award winners: Alan Perlis, Edsger Dijkstra, Donald Knuth, Robert Floyd, Stephen Cook, Niklaus Wirth, Richard Karp, John Hopcroft, Robert Tarjan, Ronald Rivest, Edmund Clarke, Judea Pearl, Silvio Micali, and Leslie Lamport. Among its earlier (pre-1990) articles are influential papers related to the topics and researchers summarized below:
Computer science is the study of computation, information, and automation. It spans theoretical as well as applied disciplines. Though more often considered an academic discipline, computer science is closely related to computer programming.
In computer science, the computational complexity, or simply complexity, of an algorithm is the amount of resources required to run it. Particular focus is given to computation time and memory storage requirements. The complexity of a problem is the complexity of the best algorithms that solve it.
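As a minimal, purely illustrative sketch of the time/memory distinction (the function names and the summation task are assumptions made for this example, not something from the article), the following Python snippet contrasts a constant-space loop with a linear-space variant of the same computation:

```python
def sum_constant_memory(n):
    # O(n) time, O(1) extra memory: keep only a running total.
    total = 0
    for i in range(1, n + 1):
        total += i
    return total


def sum_linear_memory(n):
    # O(n) time, O(n) extra memory: materializes the whole list first.
    values = list(range(1, n + 1))
    return sum(values)


if __name__ == "__main__":
    assert sum_constant_memory(1000) == sum_linear_memory(1000) == 500500
```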
In theoretical computer science and mathematics, computational complexity theory focuses on classifying computational problems according to their resource usage, and on relating these classes to each other. A computational problem is a task solved by a computer; it is solvable by the mechanical application of mathematical steps, such as an algorithm.
Discrete mathematics is the study of mathematical structures that can be considered "discrete" rather than "continuous". Objects studied in discrete mathematics include integers, graphs, and statements in logic. By contrast, discrete mathematics excludes topics in "continuous mathematics" such as real numbers, calculus or Euclidean geometry. Discrete objects can often be enumerated by integers; more formally, discrete mathematics has been characterized as the branch of mathematics dealing with countable sets. However, there is no exact definition of the term "discrete mathematics".
Distributed computing is a field of computer science that studies distributed systems, defined as computer systems whose inter-communicating components are located on different networked computers.
In theoretical computer science and mathematics, the theory of computation is the branch that deals with which problems can be solved on a model of computation using an algorithm, and how efficiently or to what degree they can be solved. The field is divided into three major branches: automata theory and formal languages, computability theory, and computational complexity theory, which are linked by the question: "What are the fundamental capabilities and limitations of computers?".
Dijkstra's algorithm is an algorithm for finding the shortest paths between nodes in a weighted graph, which may represent, for example, road networks. It was conceived by computer scientist Edsger W. Dijkstra in 1956 and published three years later.
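A compact Python sketch of the algorithm using a binary heap is shown below; the adjacency-list encoding ({node: [(neighbor, weight), ...]}) and the toy graph are assumptions made for this example, not part of the original article.

```python
import heapq


def dijkstra(graph, source):
    """Shortest-path distances from `source` in a graph given as
    {node: [(neighbor, non_negative_weight), ...]}."""
    dist = {source: 0}
    heap = [(0, source)]                 # (distance, node) priority queue
    while heap:
        d, u = heapq.heappop(heap)
        if d > dist.get(u, float("inf")):
            continue                     # stale queue entry; skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(heap, (nd, v))
    return dist


# Toy road-network-like graph used only for illustration.
roads = {"A": [("B", 4), ("C", 1)], "C": [("B", 2)], "B": []}
assert dijkstra(roads, "A") == {"A": 0, "C": 1, "B": 3}
```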
Ronald Linn Rivest is a cryptographer and computer scientist whose work has spanned the fields of algorithms and combinatorics, cryptography, machine learning, and election integrity. He is an Institute Professor at the Massachusetts Institute of Technology (MIT), and a member of MIT's Department of Electrical Engineering and Computer Science and its Computer Science and Artificial Intelligence Laboratory.
In computer science, the clique problem is the computational problem of finding cliques in a graph. It has several different formulations depending on which cliques, and what information about the cliques, should be found. Common formulations of the clique problem include finding a maximum clique, finding a maximum weight clique in a weighted graph, listing all maximal cliques, and solving the decision problem of testing whether a graph contains a clique larger than a given size.
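The sketch below illustrates the maximum-clique formulation by plain exhaustive search over vertex subsets; it runs in exponential time and is meant only to make the problem statement concrete (the dictionary-of-neighbor-sets encoding is an assumption for this example).

```python
from itertools import combinations


def is_clique(graph, nodes):
    # `graph` maps each vertex to its (symmetric) set of neighbors;
    # a clique requires every pair of chosen vertices to be adjacent.
    return all(v in graph[u] for u, v in combinations(nodes, 2))


def maximum_clique(graph):
    # Exhaustive search from the largest candidate size downward;
    # exponential time, as expected for an NP-hard problem.
    vertices = list(graph)
    for size in range(len(vertices), 0, -1):
        for subset in combinations(vertices, size):
            if is_clique(graph, subset):
                return set(subset)
    return set()


# Toy undirected graph: the triangle {1, 2, 3} is the maximum clique.
g = {1: {2, 3, 4}, 2: {1, 3}, 3: {1, 2}, 4: {1}}
assert maximum_clique(g) == {1, 2, 3}
```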
Theoretical computer science (TCS) is a subset of general computer science and mathematics that focuses on mathematical aspects of computer science such as the theory of computation (TOC), formal language theory, the lambda calculus and type theory.
Combinatorial optimization is a subfield of mathematical optimization that consists of finding an optimal object from a finite set of objects, where the set of feasible solutions is discrete or can be reduced to a discrete set. Typical combinatorial optimization problems are the travelling salesman problem ("TSP"), the minimum spanning tree problem ("MST"), and the knapsack problem. In many such problems, including the ones just mentioned, exhaustive search is not tractable, so one must instead resort to specialized algorithms that quickly rule out large parts of the search space, or to approximation algorithms.
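As one concrete illustration of avoiding exhaustive search, the sketch below solves the 0/1 knapsack problem with the textbook dynamic-programming recurrence (pseudo-polynomial in the capacity); the instance data are made up for the example.

```python
def knapsack(values, weights, capacity):
    """Maximum total value of items fitting within `capacity`,
    via the classic O(n * capacity) dynamic program."""
    best = [0] * (capacity + 1)          # best[c] = optimum using capacity c
    for value, weight in zip(values, weights):
        # Iterate capacities downward so each item is used at most once.
        for c in range(capacity, weight - 1, -1):
            best[c] = max(best[c], best[c - weight] + value)
    return best[capacity]


# Made-up instance: the optimum picks the items worth 10 and 7
# (total weight 5 + 3 = 8 <= 8), for a value of 17.
assert knapsack(values=[10, 7, 4], weights=[5, 3, 4], capacity=8) == 17
```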
In the mathematical theory of directed graphs, a graph is said to be strongly connected if every vertex is reachable from every other vertex. The strongly connected components of a directed graph form a partition into subgraphs that are themselves strongly connected. It is possible to test the strong connectivity of a graph, or to find its strongly connected components, in linear time (that is, Θ(V + E)).
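One way to achieve this linear-time bound is Kosaraju's two-pass algorithm; the Python sketch below is an illustrative implementation under an assumed adjacency-list encoding (several other linear-time algorithms, such as Tarjan's, exist).

```python
def strongly_connected_components(graph):
    """Kosaraju's algorithm: two depth-first passes, O(V + E) overall.
    `graph` maps each vertex to a list of its out-neighbors."""
    order, seen = [], set()

    def dfs(start, adj, out):
        # Iterative depth-first search; appends vertices in finish order.
        stack = [(start, iter(adj.get(start, [])))]
        seen.add(start)
        while stack:
            node, neighbors = stack[-1]
            advanced = False
            for v in neighbors:
                if v not in seen:
                    seen.add(v)
                    stack.append((v, iter(adj.get(v, []))))
                    advanced = True
                    break
            if not advanced:
                stack.pop()
                out.append(node)

    for u in graph:
        if u not in seen:
            dfs(u, graph, order)         # first pass: record finish order

    reverse = {u: [] for u in graph}     # build the transpose graph
    for u in graph:
        for v in graph[u]:
            reverse[v].append(u)

    seen.clear()
    components = []
    for u in reversed(order):            # second pass on the transpose
        if u not in seen:
            component = []
            dfs(u, reverse, component)
            components.append(set(component))
    return components


# Toy graph: 1 -> 2 -> 3 -> 1 forms one component, 4 is on its own.
g = {1: [2], 2: [3], 3: [1, 4], 4: []}
assert strongly_connected_components(g) == [{1, 2, 3}, {4}]
```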
In computational complexity, problems that are in the complexity class NP but are neither in the class P nor NP-complete are called NP-intermediate, and the class of such problems is called NPI. Ladner's theorem, shown in 1975 by Richard E. Ladner, is a result asserting that, if P ≠ NP, then NPI is not empty; that is, NP contains problems that are neither in P nor NP-complete. Since it is also true that if NPI problems exist, then P ≠ NP, it follows that P = NP if and only if NPI is empty.
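Writing NPI for the class just described, the two implications in this paragraph combine into the stated equivalence; in symbols (a restatement with notation chosen for this sketch):

```latex
% Ladner's theorem and its easy converse, combined into an equivalence.
\[
  \mathsf{P} \neq \mathsf{NP} \implies \mathsf{NPI} \neq \varnothing,
  \qquad
  \mathsf{NPI} \neq \varnothing \implies \mathsf{P} \neq \mathsf{NP},
\]
\[
  \text{and therefore}\qquad
  \mathsf{P} = \mathsf{NP} \iff \mathsf{NPI} = \varnothing .
\]
```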
Mihalis Yannakakis is professor of computer science at Columbia University. He is noted for his work in computational complexity, databases, and other related fields. He won the Donald E. Knuth Prize in 2005.
In computational complexity theory, a problem is NP-complete when it is both in NP, meaning that a proposed solution can be verified in polynomial time, and NP-hard, meaning that every problem in NP can be reduced to it in polynomial time.
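A compact formalization of this definition is sketched below, using $\le_{p}$ for polynomial-time many-one (Karp) reducibility, a standard but here assumed notation:

```latex
% L is NP-complete iff L is in NP and every NP problem reduces to L.
\[
  L \ \text{is NP-complete}
  \;\iff\;
  L \in \mathsf{NP}
  \ \text{and}\ 
  \forall L' \in \mathsf{NP}:\ L' \le_{p} L .
\]
```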
Lance Jeremy Fortnow is a computer scientist known for major results in computational complexity and interactive proof systems. As of January 2024, he was the Dean of the College of Computing at the Illinois Institute of Technology.
Natural computing, also called natural computation, is a terminology introduced to encompass three classes of methods: 1) those that take inspiration from nature for the development of novel problem-solving techniques; 2) those that are based on the use of computers to synthesize natural phenomena; and 3) those that employ natural materials to compute. The main fields of research that compose these three branches are artificial neural networks, evolutionary algorithms, swarm intelligence, artificial immune systems, fractal geometry, artificial life, DNA computing, and quantum computing, among others.
In graph theory, a bipolar orientation or st-orientation of an undirected graph is an assignment of a direction to each edge that causes the graph to become a directed acyclic graph with a single source s and a single sink t, and an st-numbering of the graph is a topological ordering of the resulting directed acyclic graph.
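The following Python sketch makes the definition concrete: given the directed edges of an orientation (an assumed edge-list encoding), it checks the bipolar conditions and, when they hold, derives an st-numbering from a topological order, so that every edge points from a lower number to a higher one.

```python
from graphlib import CycleError, TopologicalSorter


def st_numbering(oriented_edges):
    """Return {vertex: number} if the given orientation is bipolar
    (acyclic with a unique source and a unique sink), else None.
    Numbers follow a topological order, so every directed edge goes
    from a lower number to a higher one."""
    vertices = {u for edge in oriented_edges for u in edge}
    preds = {v: set() for v in vertices}
    succs = {v: set() for v in vertices}
    for u, v in oriented_edges:
        preds[v].add(u)
        succs[u].add(v)

    sources = [v for v in vertices if not preds[v]]
    sinks = [v for v in vertices if not succs[v]]
    if len(sources) != 1 or len(sinks) != 1:
        return None                      # not a single source s and sink t

    try:
        order = list(TopologicalSorter(preds).static_order())
    except CycleError:                   # orientation is not acyclic
        return None
    return {v: i + 1 for i, v in enumerate(order)}


# Example: an orientation of a 4-cycle with source "s" and sink "t".
edges = [("s", "a"), ("s", "b"), ("a", "t"), ("b", "t")]
numbering = st_numbering(edges)
assert numbering["s"] == 1 and numbering["t"] == 4
```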