Central place foraging (CPF) theory is an evolutionary ecology model for analyzing how an organism can maximize foraging rates while traveling to and from a patch (a discrete resource concentration). Its key distinction is that the forager travels from a home base to a distant foraging location rather than simply passing through an area or travelling at random. CPF was initially developed to explain how red-winged blackbirds might maximize energy returns when traveling to and from a nest. [1] The model has been further refined and used by anthropologists studying human behavioral ecology and archaeology. [2]
Orians and Pearson (1979) found that red-winged blackbirds in eastern Washington State tend to capture a larger number of prey items per trip than the same species in Costa Rica, which brought back single large insects. [1] Foraging specialization by Costa Rican blackbirds was attributed to the increased search and handling costs of nocturnal foraging, whereas birds in eastern Washington forage diurnally for prey with lower search and handling costs. Studies of seabirds and seals have also found that load size tends to increase with foraging distance from the nest, as predicted by CPF. [3] Other central place foragers, such as social insects, also show support for CPF theory. European honeybees increase their nectar load as travel time from the hive to nectar sites increases. [4] Beavers have been found to preferentially collect larger-diameter trees as distance from their lodge increases. [5]
To apply the central place foraging model to ethnographic and experimental archaeological data through middle-range theory, Bettinger et al. (1997) simplify the Barlow and Metcalfe (1996) central place model to explore the archaeological implications of acorn (Quercus kelloggii) and mussel (Mytilus californianus) procurement and processing. [6] [7] This model assumes foragers gather resources at a distance from their central place with the goal of efficiently returning the resource home. Travel time is expected to determine the degree to which foragers process a resource to increase its utility before returning from a foraging location to their central place. Transport capabilities in aboriginal California were established by measuring the volume of burden baskets and extrapolating load weight from ethnographic data on basket use.
Ethnographic and experimental data were used to estimate utility at each possible stage of processing. Drawing on the ecology and procurement methods of the two species, the authors used the central place foraging model to predict the conditions under which field processing of each will occur.
An understanding of central place foraging has implications for studying archaeological site formation. Variability of remains at sites can tell us about mobility – whether or not groups are central place foragers, which resources they are mapping onto, and their degree of mobility. Based on the application of central place foraging to the processing of mussels and acorns, Bettinger et al. (1997) make several predictions about archaeological expectations. [6] The study shows that procurement with field processing is more costly than foraging and processing resources residentially. These results imply that highly mobile foragers will establish a home base in close proximity to staple resources and process those resources entirely at the residence. Less residentially mobile populations would in turn map onto only a few resources, and would be expected to field process non-local resources on logistical procurement forays at greater distances from their central place. Processing debris at archaeological sites should therefore reflect changes in mobility.
Glover (2009) used a CPF model to determine whether late nineteenth-century silver miners near Gothic, Colorado were choosing mine locations efficiently given the costs of transporting silver ore to the mill, the value of silver, and the amount of silver per kilogram of ore. [8] Estimates of transport costs were obtained from physiological research on the most energetically efficient load size. Newspaper articles were used to determine the hourly wage a miner could have earned working in town instead. Newspapers were also used to estimate the value of silver at the time, and estimates of the amount of silver per kilogram of ore were obtained from records of area silver mills as well as from newspapers. These differed, with the newspapers optimistically claiming that silver deposits were far more productive than the more accurate mill records demonstrated.
These estimates were used to determine the optimal placement of mines. A number of historic mining locations were recorded using GPS. These data were used to calculate least cost paths from the mines to Gothic, which provided the distances to the central place. The results were compared to two different CPF models based on newspaper propaganda and the more realistic mill records, respectively.
Miners were choosing locations much farther away than was economically feasible given the value of silver and its actual abundance. However, the mines were within the distance predicted using the optimistic newspaper estimates. Glover suggested that miners, being new to the area, used social learning strategies, basing their decisions on newspaper propaganda and rumors rather than individual experience. They therefore chose locations that were too far away to be economically viable.
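The economic comparison Glover draws can be sketched as a toy break-even calculation. Everything here is a hypothetical illustration: the function, the haulage model, and all parameter values are assumptions, not figures from the study.

```python
# Toy break-even distance for hauling ore versus earning a town wage.
# All numbers below are hypothetical placeholders, not values from
# Glover (2009).

def break_even_distance_km(load_kg, silver_g_per_kg_ore,
                           price_per_g, wage_per_hour, speed_kmh):
    """Farthest one-way haul at which a load of ore is still worth at
    least the wage forgone during the round trip."""
    load_value = load_kg * silver_g_per_kg_ore * price_per_g
    wage_cost_per_km = wage_per_hour * (2.0 / speed_kmh)  # round trip
    return load_value / wage_cost_per_km

# Optimistic (newspaper) vs. realistic (mill-record) ore richness:
optimistic = break_even_distance_km(30, 5.0, 0.05, 0.30, 4.0)
realistic = break_even_distance_km(30, 1.0, 0.05, 0.30, 4.0)
# With five times the claimed silver per kilogram of ore, the optimistic
# estimate justifies mines five times farther from the mill.
```

The same logic, run with the two ore-richness estimates, reproduces the qualitative result: mine locations that look viable under newspaper figures fall well outside the break-even distance implied by the mill records.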
Shellfish exemplify the resources targeted by the CPF model – those with a heavy, bulky, low utility component (e.g. shell) surrounding a smaller, lighter high utility component (e.g. meat). If foragers differentially field process and transport shellfish prey items, analyses of midden composition may incorrectly estimate the importance of some species and their relative contribution to prehistoric diets. Using foraging data from the Meriam of Australia, Bird and Bliege Bird (1997) compare observed shellfish field acquisition to shell deposition at residential sites, and test the hypotheses of the CPF model. [9]
The Meriam inhabit the Torres Strait Islands of Australia, are of Melanesian descent, and have strong cultural and historical ties to New Guinea. They continue to harvest marine resources such as sea turtles, fish, squid, and shellfish. Bird and Bliege Bird conducted “focal individual foraging follows” of 33 children, 16 men, and 42 women during intertidal foraging bouts on reef flats and rocky shores. Foraging technology includes 10-liter plastic buckets, long-blade knives, and hammers. Foragers are constrained by time (2–4 hours at low tide) and load size (the 10-liter bucket).
Large clams (Hippopus hippopus and Tridacna spp.) collected on the reef flat constitute over half of the edible weight collected, but since they are almost always field processed, their shells make up only 10% of residential site deposition. In contrast, sunset clams (Asaphis violascens) and nerites (Nerita undata) are usually processed residentially. Large clams were therefore underrepresented, while small clams and nerites were overrepresented, in the reconstructed diet.
Since reef flat and rocky shore foraging occurs at multiple sites at variable distances from the residential camp, the authors calculated the mean one-way travel distance processing threshold (z, in meters) for each species. The CPF model accurately predicts field processing for the majority of reef flat foraging events for bivalves. Hippopus and Tridacna have small processing threshold distances (z = 74.6 m and 137 m, respectively), and no shell is returned to camp from distances beyond 150 meters. Women’s behavior fits the predictions nearly 100% of the time, but children and men made the optimal choice less frequently because they usually forage for shellfish opportunistically and therefore do not always carry the appropriate processing technology.
For gastropods (Lambis lambis, z = 278.7 m), the model accurately predicts processing only 58–59% of the time. This could be due in part to a preference for cooking some species inside their shells (i.e., the shell has some utility), or because some prey items are prepared at “dinner-time camps” rather than the residential camp. A. violascens and N. undata are never field processed, consistent with their large processing threshold distances (z = 2418.5 m and 5355.7 m, respectively).
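The per-species decision can be expressed compactly: field process only when the one-way travel distance exceeds the species' threshold. The threshold values are those reported above; the function itself is just an illustrative sketch, not the authors' code.

```python
# Threshold distances (meters) reported by Bird and Bliege Bird (1997).
THRESHOLDS_M = {
    "Hippopus hippopus": 74.6,
    "Tridacna spp.": 137.0,
    "Lambis lambis": 278.7,
    "Asaphis violascens": 2418.5,
    "Nerita undata": 5355.7,
}

def predict_field_processing(species, one_way_distance_m):
    """CPF prediction: field process only beyond the threshold distance."""
    return one_way_distance_m > THRESHOLDS_M[species]

predict_field_processing("Tridacna spp.", 200)  # beyond 137 m: field process
predict_field_processing("Nerita undata", 500)  # far below 5355.7 m: haul whole
```

Because the nerite and sunset-clam thresholds dwarf any realistic foraging distance on these islands, the model predicts they are never field processed, which matches the observations.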
Overall, prey types that were difficult or inefficient to process and/or were collected near the residential or temporary camp were not field processed. Species that required little processing time to increase returns and/or were collected far from camp were field processed. The field processing predictions of the CPF model might be incorrect where shellfish are transported whole in order to maintain freshness for later consumption or trade, or where the shell itself is valuable.
Barlow and Metcalfe (1996) address the issues of field processing of plant materials. [7] Decisions of central place foragers may confound archaeological interpretations of the contribution of plant material to the diet. Two interrelated issues are pertinent: the location of the central place, and field processing.
Barlow and Metcalfe study archaeological materials from two sites, Danger Cave and Hogup Cave, in the area of the Great Salt Lake. These sites contain evidence for the use of piñon pine (Pinus monophylla) and pickleweed (Allenrolfea occidentalis).
Samples for experimental processing were obtained from extant piñon groves and pickleweed patches in the vicinity of the cave sites. Piñon and pickleweed were harvested and processed in carefully timed and controlled stages. After each stage, the useful (i.e., edible) portion of the remaining material was weighed and recorded before proceeding to the next stage. Stages consisted of gathering, drying, and a variety of processes (parching, hulling, winnowing, etc.) to remove inedible constituents. Caloric values of the samples were then determined by laboratory analysis. These values, along with assumed load sizes of 3 to 15 kg (based on ethnographic burden basket sizes), were then used to generate field processing model predictions.
At a distance of 15 kilometers from the central place, the estimated net return rates for field-processed loads of piñon and pickleweed are 3,000 and 190 calories per hour, respectively. Because piñon has high overall return rates, the additional effort of field processing pays off in a higher delivered rate of return. Because pickleweed has a low rate of return, the additional effort required for field processing is not worthwhile. Therefore, the central place will be situated closer to pickleweed patches than to piñon groves in order to exploit the lower-ranked resource more effectively.
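To see what the gap between those two return rates means in practice, a quick calculation compares the work time each resource demands. The 2,000-calorie daily requirement is a hypothetical figure for illustration, not a value from the study.

```python
# Hours of foraging needed to meet a hypothetical 2,000-calorie daily
# requirement at 15 km, given the return rates quoted above.

PINON_CAL_PER_HOUR = 3000       # field-processed pinon at 15 km
PICKLEWEED_CAL_PER_HOUR = 190   # pickleweed at 15 km

DAILY_CAL = 2000                # hypothetical requirement, for illustration

hours_pinon = DAILY_CAL / PINON_CAL_PER_HOUR            # about 0.67 hours
hours_pickleweed = DAILY_CAL / PICKLEWEED_CAL_PER_HOUR  # about 10.5 hours
```

At 15 km, pickleweed demands roughly fifteen times as much work per calorie delivered, which is why the model places the central place near the pickleweed rather than the piñon.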
These results imply that the archaeological evidence for pickleweed at the caves may overestimate its actual contribution to the diet. If foragers choose to reside closer to pickleweed patches and bring back largely unprocessed plants, a high density of pickleweed macrofossils will be incorporated into site deposits. The opposite is true for piñon, which is largely processed in the field: most sites will contain little macrofossil evidence of the inedible portions of piñon that could later be recovered by archaeologists. As such, the relative abundance of macrofossils in most cases does not directly translate into the relative contribution of those resources to the diet of central place foragers.
The goal of the field processing model is for a forager to maximize its return rate per round trip from home base to patch. The model typically solves for the amount of travel time that makes it worthwhile to process a resource to a certain stage. To determine this, we need to relate the benefit of processing and the time spent processing to the travel time. We let

z = point on the transport-time axis where field processing becomes profitable
t_u = time to procure a load of unprocessed resources
t_p = time to procure and process a load of resources
u_u = utility of a load without field processing
u_p = utility of a load with field processing

The relationship is then specified by:

z = (u_p t_u − u_u t_p) / (u_u − u_p)
With values for the utilities (u_u, u_p) and procurement times (t_u, t_p) of the unprocessed and processed loads, we can solve for z. The right-hand side of the equation is the difference in utility-weighted procurement times relative to the difference in utilities. Two conditions must be satisfied for a meaningful threshold. First, the processed load must have higher utility than the unprocessed load. Second, the return rate of the unprocessed load must be at least as good as the return rate of the processed load. Formally,

If u_p > u_u, then the threshold z is finite and well defined.
If, in addition, u_u / t_u ≥ u_p / t_p, then z ≥ 0.
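The threshold and its two boundary conditions translate directly into code. This is a minimal sketch of the equation above, not an implementation from the literature; variable names follow the definitions in the text.

```python
# Travel-time threshold z at which field processing becomes profitable,
# solving u_u / (z + t_u) = u_p / (z + t_p) for z.

def processing_threshold(t_u, t_p, u_u, u_p):
    if u_p <= u_u:
        return float("inf")   # condition 1 violated: never process
    z = (u_p * t_u - u_u * t_p) / (u_u - u_p)
    return max(z, 0.0)        # z < 0: condition 2 violated, always process

# Example: processing doubles a load's utility but triples its
# procurement time.
z = processing_threshold(t_u=1.0, t_p=3.0, u_u=1.0, u_p=2.0)
# At z = 1.0 the two return rates are equal: 1/(1+1) == 2/(1+3).
```

Beyond one hour of travel in this example, the processed load delivers the higher return rate; below it, hauling the unprocessed load home wins.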
Many resources have multiple components that can be removed during processing to increase utility. Multistage field processing models provide a way to calculate a travel threshold for each stage when a resource has more than one component. Each successive stage increases the utility per load, but also increases the time needed to procure a complete load.
The benefit of each stage of processing is:

U_j = ( Σ_{k>j} p_k u_k ) / ( Σ_{k>j} p_k )

where

u_j = utility of resource component j
p_j = proportion of the package composed of resource component j prior to processing
U_j = utility of the load at field-processing stage j (stage j meaning components 1 through j have been removed)
The cost in terms of time for each stage of processing is:

T_j = [ w_l / ( w_p Σ_{k>j} p_k ) ] ( h + Σ_{k≤j} c_k )

where

c_j = time required to remove resource component j
w_l = weight of the optimal load size for transport
w_p = weight of an unmodified resource package
h = time required to handle each resource package
T_j = total handling and processing time required to reach stage j of processing

The first factor is the number of packages that must be handled to fill a load once the removed components are discarded.
These values can now be used to calculate z_j, the travel threshold for processing to stage j. Beyond a resource with multiple components, the same model generalizes to processing in multiple stages, each of which removes several components at once, provided each component can be removed independently of the others (i.e., with no additional cost). It can be further generalized, through recursion, to the case where components with additional removal costs are taken off in multiple stages of processing.
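The multistage bookkeeping can be sketched directly from the equations above. The example package, its component values, and the pairwise-threshold step (comparing each stage to the one before it) are illustrative assumptions.

```python
# Multistage field processing: stage j means components 1..j removed.
# p: weight proportions, u: per-weight utilities, c: removal times per
# package, h: handling time per package, w_l: load weight, w_p: package
# weight. All example values are hypothetical.

def stage_utility(p, u, j):
    """Utility per unit weight of a load processed to stage j."""
    kept = sum(p[j:])
    return sum(pk * uk for pk, uk in zip(p[j:], u[j:])) / kept

def stage_time(p, c, h, w_l, w_p, j):
    """Handling plus processing time to assemble a full load at stage j."""
    packages = w_l / (w_p * sum(p[j:]))   # packages needed per load
    return packages * (h + sum(c[:j]))

def stage_threshold(p, u, c, h, w_l, w_p, j):
    """Travel time z_j beyond which stage j beats stage j - 1."""
    u0, u1 = stage_utility(p, u, j - 1), stage_utility(p, u, j)
    t0 = stage_time(p, c, h, w_l, w_p, j - 1)
    t1 = stage_time(p, c, h, w_l, w_p, j)
    return (u1 * t0 - u0 * t1) / (u0 - u1)

# A two-component package: half inedible shell (utility 0), half meat.
z1 = stage_threshold(p=[0.5, 0.5], u=[0.0, 10.0],
                     c=[0.1], h=0.05, w_l=10.0, w_p=1.0, j=1)
```

In this example the shell-removal stage pays off once round-trip travel exceeds two hours; with more components, the same pairwise comparison yields a threshold for each successive stage.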
This model rests on a number of assumptions. The most important are that foragers transport full loads of a fixed size, that processing only removes low-utility portions (so the utility of a load never decreases with processing), and that procurement, processing, and travel times are known and independent of one another.
There are three key predictions from the field processing model. First, field processing should increase with travel time from the central place. Second, resources procured within the threshold travel time z should be transported unprocessed. Third, resources whose processing yields a large gain in utility for little added time should have short threshold distances, and vice versa.
Transport decay curves demonstrate the reduction in return rates (cal/hour) experienced by a central place forager as a function of round trip travel time.
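A decay curve is easy to generate numerically. The loads and handling times below are hypothetical; the point is the shape: both curves fall with travel time, and the field-processed curve overtakes the unprocessed one past the threshold.

```python
# Transport decay curves: return rate (cal/hour) versus round-trip
# travel time, for unprocessed and field-processed loads of the same
# hypothetical resource.

def return_rate(load_cal, fixed_hours, round_trip_hours):
    return load_cal / (fixed_hours + round_trip_hours)

travel = [0, 1, 2, 4, 8]                                  # round-trip hours
raw = [return_rate(2000, 1.0, t) for t in travel]         # unprocessed
processed = [return_rate(5000, 4.0, t) for t in travel]   # field processed
# raw starts higher (2000 vs 1250 cal/hr) but decays faster; the two
# curves cross at 1 hour, beyond which field processing wins.
```

Plotting `raw` and `processed` against `travel` gives the characteristic pair of decaying curves whose crossover is the processing threshold z.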