Transshipment problem

Transshipment problems form a subclass of transportation problems in which transshipment is allowed. In transshipment, transportation may or must go through intermediate nodes, possibly changing the mode of transport.

The transshipment problem is said to have its origins in medieval times, when trading started to become a mass phenomenon. Obtaining the minimum-cost route was initially the main priority; however, technological development gradually shifted attention toward minimum-duration transportation problems.

Overview

Transshipment or transhipment is the shipment of goods or containers to an intermediate destination, and from there to yet another destination. One possible reason is to change the means of transport during the journey (for example, from ship transport to road transport), known as transloading. Another reason is to combine small shipments into a large shipment (consolidation), dividing the large shipment at the other end (deconsolidation). Transshipment usually takes place in transport hubs. Much international transshipment also takes place in designated customs areas, thus avoiding the need for customs checks or duties, which would otherwise be a major hindrance to efficient transport.

Formulation of the problem

A few initial assumptions are required in order to formulate the transshipment problem completely:

Notations

Mathematical formulation of the problem

The goal is to minimize the overall duration of the transportation, subject to the supply and demand constraints; one way this can be written, under assumed notation, is sketched below.
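
The following sketch uses assumed notation and is illustrative rather than necessarily identical to the original formulation: m origins with supplies a_i, n destinations with demands b_j, x_{i,m+j} the quantity shipped from origin i to destination m+j, and D_{i,m+j} the duration of the route actually chosen for that pair, including any transshipment through intermediate nodes.

    \min \; T = \max \{\, D_{i,m+j} : x_{i,m+j} > 0 \,\}

    \text{subject to} \quad \sum_{j=1}^{n} x_{i,m+j} = a_i \quad (i = 1,\dots,m), \qquad \sum_{i=1}^{m} x_{i,m+j} = b_j \quad (j = 1,\dots,n), \qquad x_{i,m+j} \ge 0.

Because each D_{i,m+j} depends on how the shipment is routed through the intermediate nodes, the objective has no simple closed form in general, which motivates the two-phase solution method described below.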

Solution

Since in most cases an explicit expression for the objective function does not exist, an alternative method was suggested by Rajeev and Satya. The method uses two consecutive phases to reveal the minimal-duration routes from the origins to the destinations. The first phase solves a time-minimizing problem for each origin–destination pair, in each case using the remaining intermediate nodes as transshipment points; this yields the minimal-duration transportation times between all sources and destinations. During the second phase, a standard time-minimizing transportation problem is solved using these durations. The solution of the time-minimizing transshipment problem is the combined outcome of these two phases.

Phase 1

Since costs are independent of the shipped amount, in each individual origin–destination problem the shipped quantity can be normalized to 1. The problem then simplifies to an assignment problem from origin i to destination m+j. Let x_{r,s} be 1 if the edge between nodes r and s is used in the solution, and 0 otherwise. The goal is to determine all x_{r,s} that minimize the total duration of the chosen route, subject to the assignment constraints; one such formulation is sketched below.
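
A sketch of such an assignment formulation, under assumed notation: t_{r,s} is the duration of the edge between nodes r and s, the sums run over all nodes of the network (origin i, destination m+j and the intermediate nodes), and a node not used by the route is assigned to itself through a zero-duration self-loop.

    \min \; z = \sum_{r} \sum_{s} t_{r,s} \, x_{r,s}

    \text{subject to} \quad \sum_{s} x_{r,s} = 1 \;\; \text{for every node } r, \qquad \sum_{r} x_{r,s} = 1 \;\; \text{for every node } s, \qquad x_{r,s} \in \{0, 1\}.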

Corollary

  • The self-loop variables at the origin and at the destination (x_{i,i} and x_{m+j,m+j}) need to be excluded from the model; on the other hand, without this constraint the optimal solution would consist only of x_{r,r}-type loops, which obviously cannot be a feasible solution.
  • Instead of excluding these variables explicitly, one can set t_{i,i} = t_{m+j,m+j} = M, where M is an arbitrarily large positive number. With that modification the formulation above is reduced to the form of a standard assignment problem, which can be solved with the Hungarian method (see the sketch after this list).
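
As an illustration of that reduction, the following minimal sketch solves a small assignment problem with the Hungarian method (SciPy's linear_sum_assignment), using the big-M device to forbid the self-loops at a hypothetical origin and destination. The duration matrix and the node indices are invented for the example; they are not taken from the article.

    # Minimal sketch: a Phase-1 style assignment solved with the Hungarian method.
    # The duration matrix and the chosen origin/destination are illustrative only.
    import numpy as np
    from scipy.optimize import linear_sum_assignment

    M = 10**6                        # arbitrarily large positive number (big-M)
    t = np.array([                   # t[r, s]: duration of edge (r, s); self-loops cost 0
        [0.0, 4.0, 9.0, 7.0],
        [4.0, 0.0, 3.0, 6.0],
        [9.0, 3.0, 0.0, 2.0],
        [7.0, 6.0, 2.0, 0.0],
    ])

    origin, destination = 0, 3       # hypothetical origin i and destination m+j
    t[origin, origin] = M            # forbid the self-loop at the origin ...
    t[destination, destination] = M  # ... and at the destination

    rows, cols = linear_sum_assignment(t)  # Hungarian method
    used = [(r, s) for r, s in zip(rows, cols) if r != s]
    print("edges used:", used, "total duration:", t[rows, cols].sum())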

Phase 2

During the second phase, a time-minimization problem is solved with m origins and n destinations, without transshipment. This phase differs from the original setup in two main aspects:

In mathematical form

The goal is to find the shipped quantities x_{i,m+j} which minimize the overall duration of the transportation, such that every supply is shipped and every demand is met; one common way of writing this, under assumed notation, is sketched below.
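
A sketch of the standard time-minimizing (bottleneck) transportation problem, under assumed notation: a_i is the supply at origin i, b_j the demand at destination m+j, t_{i,m+j} the minimal duration between them found in Phase 1, and x_{i,m+j} the quantity shipped.

    \min \; T = \max \{\, t_{i,m+j} : x_{i,m+j} > 0 \,\}

    \text{subject to} \quad \sum_{j=1}^{n} x_{i,m+j} = a_i \quad (i = 1,\dots,m), \qquad \sum_{i=1}^{m} x_{i,m+j} = b_j \quad (j = 1,\dots,n), \qquad x_{i,m+j} \ge 0.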

This problem can be solved easily with the method developed by Prakash. The set of durations needs to be partitioned into subgroups L_1, L_2, ..., L_k, where each subgroup contains the durations t_{i,m+j} with the same value. The sequence is organized so that L_1 contains the largest durations, L_2 the second largest, and so on. Furthermore, positive priority factors M_1, M_2, ..., M_k are assigned to the subgroups, chosen so that any shipment on a higher-duration subgroup outweighs all shipments on lower-duration subgroups. With this notation, the goal is to find all x_{i,m+j} which minimize the resulting weighted goal function, subject to the same supply and demand constraints; a sketch of one such construction is given below.
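
One possible way to realize such priority factors, sketched under the assumption that the supplies and demands are integers (so that an optimal basic solution of the transportation problem is integral): let S = \sum_i a_i bound the total amount shipped, set M_k = 1 and M_l = S \sum_{l' > l} M_{l'} + 1 for l = k-1, \dots, 1, and solve

    \min \; \sum_{l=1}^{k} M_l \sum_{(i,j) \,:\, t_{i,m+j} \in L_l} x_{i,m+j}

subject to the same supply and demand constraints as above. With this choice, reducing the amount shipped on a higher-duration subgroup by one unit always outweighs any feasible amount shipped on lower-duration subgroups, so the weighted objective enforces the intended priority ordering.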

Extension

Some authors, such as Das et al. (1999) and Malakooti (2013), have considered the multi-objective transshipment problem.

References

  1. "Transshipment Problem and Its Variants: A Review". ResearchGate. Retrieved 2020-11-02.