Polymer field theory

A polymer field theory is a statistical field theory describing the statistical behavior of a neutral or charged polymer system. It can be derived by transforming the partition function from its standard many-dimensional integral representation over the particle degrees of freedom into a functional integral representation over an auxiliary field function, using either the Hubbard–Stratonovich transformation or the delta-functional transformation. Computer simulations based on polymer field theories have been shown to deliver useful results, for example for calculating the structures and properties of polymer solutions (Baeurle 2007, Schmid 1998), polymer melts (Schmid 1998, Matsen 2002, Fredrickson 2002) and thermoplastics (Baeurle 2006).

Canonical ensemble

Particle representation of the canonical partition function

The standard continuum model of flexible polymers, introduced by Edwards (Edwards 1965), treats a solution composed of $n$ linear monodisperse homopolymers as a system of coarse-grained polymers, in which the statistical mechanics of the chains is described by the continuous Gaussian thread model (Baeurle 2007) and the solvent is taken into account implicitly. The Gaussian thread model can be viewed as the continuum limit of the discrete Gaussian chain model, in which the polymers are described as continuous, linearly elastic filaments. The canonical partition function of such a system, kept at an inverse temperature $\beta=1/k_{B}T$ and confined in a volume $V$, can be expressed as

$$Z(n,V,\beta)=\frac{1}{n!\,\lambda_{T}^{3nN}}\int\prod_{j=1}^{n}\tilde{D}\mathbf{r}_{j}\,\exp\left\{-\beta\left[\bar{\Phi}[\mathbf{r}]+\Phi_{0}[\mathbf{r}]\right]\right\}\qquad(1)$$

where $\lambda_{T}$ is the thermal de Broglie wavelength, $\tilde{D}\mathbf{r}_{j}$ denotes the path-integral measure over the conformations of chain $j$, and $\bar{\Phi}[\mathbf{r}]$ is the potential of mean force given by

$$\bar{\Phi}[\mathbf{r}]=\frac{1}{2}\sum_{j=1}^{n}\sum_{k=1}^{n}\int_{0}^{N}ds\int_{0}^{N}ds'\;\bar{\Phi}\left(\left|\mathbf{r}_{j}(s)-\mathbf{r}_{k}(s')\right|\right)\qquad(2)$$

representing the solvent-mediated non-bonded interactions among the segments, while $\Phi_{0}[\mathbf{r}]$ represents the harmonic binding energy of the chains. The latter energy contribution can be formulated as

$$\beta\Phi_{0}[\mathbf{r}]=\frac{3}{2b^{2}}\sum_{j=1}^{n}\int_{0}^{N}ds\,\left|\frac{d\mathbf{r}_{j}(s)}{ds}\right|^{2}$$

where $b$ is the statistical segment length and $N$ the polymerization index.
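
Before moving to the field-theoretic transformation, the discrete-to-continuum correspondence can be checked numerically. The following sketch (illustrative Python with arbitrary parameter values, not taken from the cited references) samples discrete Gaussian chains and confirms that they reproduce the unperturbed radius of gyration $R_{g0}=b\sqrt{N/6}$ of the continuous model quoted further below:

```python
import numpy as np

# Minimal sketch: discrete Gaussian chains reproduce the unperturbed
# radius of gyration R_g0 = b*sqrt(N/6) of the continuous thread model.
rng = np.random.default_rng(42)
b, N, n_chains = 1.0, 200, 2000      # segment length, chain length, sample size (arbitrary)

# Gaussian bond vectors with <bond^2> = b^2 (variance b^2/3 per component)
bonds = rng.normal(scale=b / np.sqrt(3), size=(n_chains, N, 3))
r = np.cumsum(bonds, axis=1)         # segment positions along each chain

# Radius of gyration: mean-square distance of segments from the chain's center of mass
rg2 = ((r - r.mean(axis=1, keepdims=True)) ** 2).sum(axis=2).mean(axis=1)
print(np.sqrt(rg2.mean()), b * np.sqrt(N / 6))   # the two values agree for large N
```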

Field-theoretic transformation

To derive the basic field-theoretic representation of the canonical partition function, one introduces the segment density operator of the polymer system,

$$\hat{\rho}(\mathbf{r})=\sum_{j=1}^{n}\int_{0}^{N}ds\,\delta\left(\mathbf{r}-\mathbf{r}_{j}(s)\right)\qquad(3)$$

Using this definition, one can rewrite Eq. (2) as

$$\bar{\Phi}[\mathbf{r}]=\frac{1}{2}\int d\mathbf{r}\int d\mathbf{r}'\,\hat{\rho}(\mathbf{r})\,\bar{\Phi}\left(\left|\mathbf{r}-\mathbf{r}'\right|\right)\hat{\rho}(\mathbf{r}')$$

Next, one converts the model into a field theory by making use of the Hubbard–Stratonovich transformation or delta-functional transformation

$$F[\hat{\rho}]=\int D\rho\,\delta\left[\rho-\hat{\rho}\right]F[\rho]\qquad(4)$$

where $F[\hat{\rho}]$ is a functional and $\delta\left[\rho-\hat{\rho}\right]$ is the delta functional given by

$$\delta\left[\rho-\hat{\rho}\right]=\int Dw\,\exp\left[i\int d\mathbf{r}\,w(\mathbf{r})\left(\rho(\mathbf{r})-\hat{\rho}(\mathbf{r})\right)\right]\qquad(5)$$

with $w(\mathbf{r})$ representing the auxiliary field function. Here we note that expanding the field function in a Fourier series, $w(\mathbf{r})=\sum_{\mathbf{G}}w(\mathbf{G})\,\exp\left[i\mathbf{G}\cdot\mathbf{r}\right]$, implies that periodic boundary conditions are applied in all directions and that the $\mathbf{G}$-vectors designate the reciprocal lattice vectors of the supercell.
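
The content of the Hubbard–Stratonovich step is easiest to see in zero dimensions, where the functional integral collapses to an ordinary one. The following sketch (a numerical check added for illustration, not part of the cited derivations) verifies the scalar identity $e^{-a\rho^{2}/2}=(2\pi a)^{-1/2}\int dw\,e^{-w^{2}/(2a)-iw\rho}$, the analogue of trading the pair interaction for a fluctuating auxiliary field $w$:

```python
import numpy as np
from scipy.integrate import quad

# Zero-dimensional Hubbard-Stratonovich check:
#   exp(-a*rho^2/2) = (2*pi*a)^(-1/2) * Integral dw exp(-w^2/(2a) - i*w*rho)
a, rho = 1.3, 0.7                      # arbitrary test values

# The imaginary part vanishes by symmetry, so integrate the real part only.
rhs, _ = quad(lambda w: np.exp(-w**2 / (2 * a)) * np.cos(w * rho), -np.inf, np.inf)
rhs /= np.sqrt(2 * np.pi * a)
lhs = np.exp(-a * rho**2 / 2)
print(lhs, rhs)                        # agree to quadrature accuracy
```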

Basic field-theoretic representation of canonical partition function

Using Eqs. (3), (4) and (5), we can recast the canonical partition function in Eq. (1) in field-theoretic representation, which leads to

$$Z(n,V,\beta)=Z_{0}\int Dw\,\exp\left[-\frac{1}{2\beta}\int d\mathbf{r}\int d\mathbf{r}'\,w(\mathbf{r})\,\bar{\Phi}^{-1}\left(\left|\mathbf{r}-\mathbf{r}'\right|\right)w(\mathbf{r}')\right]Q^{n}[iw]\qquad(6)$$

where

$$Z_{0}=\frac{\Omega^{n}}{n!\,\lambda_{T}^{3nN}}$$

can be interpreted as the partition function for an ideal gas of non-interacting polymers and

$$\Omega=\int\tilde{D}\mathbf{r}\,\exp\left[-\beta\Phi_{0}[\mathbf{r}]\right]\qquad(7)$$

is the path integral of a free polymer in a zero field with elastic energy

$$\beta\Phi_{0}[\mathbf{r}]=\frac{3}{2b^{2}}\int_{0}^{N}ds\,\left|\frac{d\mathbf{r}(s)}{ds}\right|^{2}$$

In the latter equation, the unperturbed radius of gyration of a chain is $R_{g0}=b\sqrt{N/6}$. Moreover, in Eq. (6) the partition function of a single polymer, subjected to the field $\Psi(\mathbf{r})=iw(\mathbf{r})$, is given by

$$Q[iw]=\frac{1}{\Omega}\int\tilde{D}\mathbf{r}\,\exp\left[-\beta\Phi_{0}[\mathbf{r}]-i\int_{0}^{N}ds\,w\left(\mathbf{r}(s)\right)\right]\qquad(8)$$
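
Equation (8) is a ratio of path integrals and can therefore be read as an average of the field Boltzmann factor over free-chain conformations. For a real trial field this average can be sampled directly, as in the following sketch (the Gaussian-well field and all parameters are hypothetical, chosen only for illustration; the chain is pinned at the origin for simplicity):

```python
import numpy as np

# Sketch: Q[Psi] as an average of exp(-sum_s Psi(r_s)) over free-chain
# conformations, for a real (hypothetical) trial field Psi.
rng = np.random.default_rng(7)
b, N, n_chains = 1.0, 100, 5000

def Psi(r):
    """Hypothetical attractive Gaussian well at the origin."""
    return -0.05 * np.exp(-(r ** 2).sum(axis=-1))

bonds = rng.normal(scale=b / np.sqrt(3), size=(n_chains, N, 3))
r = np.cumsum(bonds, axis=1)                 # free-chain conformations
Q = np.exp(-Psi(r).sum(axis=1)).mean()       # Q[Psi] = <exp(-sum_s Psi)>
print(Q)                                     # > 1 for an attractive field
```

For the purely imaginary argument $iw$ actually appearing in Eq. (8), the same average acquires an oscillatory weight, which foreshadows the sign problem discussed in the simulation section below.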

Grand canonical ensemble

Basic field-theoretic representation of grand canonical partition function

To derive the grand canonical partition function, we use its standard thermodynamic relation to the canonical partition function, given by

$$\Xi(\mu,V,\beta)=\sum_{n=0}^{\infty}e^{\beta\mu n}\,Z(n,V,\beta)$$

where $\mu$ is the chemical potential and $Z(n,V,\beta)$ is given by Eq. (6). Performing the sum, which is of exponential type since $\sum_{n=0}^{\infty}\left(\xi\,Q[iw]\right)^{n}/n!=\exp\left(\xi\,Q[iw]\right)$, provides the field-theoretic representation of the grand canonical partition function,

$$\Xi(\mu,V,\beta)=\Xi_{0}\int Dw\,\exp\left[-S[w]\right]$$

where

$$S[w]=\frac{1}{2\beta}\int d\mathbf{r}\int d\mathbf{r}'\,w(\mathbf{r})\,\bar{\Phi}^{-1}\left(\left|\mathbf{r}-\mathbf{r}'\right|\right)w(\mathbf{r}')-\xi\,Q[iw]$$

is the grand canonical action with $Q[iw]$ defined by Eq. (8) and the constant

$$\Xi_{0}=\left[\int Dw\,\exp\left(-\frac{1}{2\beta}\int d\mathbf{r}\int d\mathbf{r}'\,w(\mathbf{r})\,\bar{\Phi}^{-1}\left(\left|\mathbf{r}-\mathbf{r}'\right|\right)w(\mathbf{r}')\right)\right]^{-1}$$

normalizing the Gaussian field measure.

Moreover, the parameter $\xi$ related to the chemical potential is given by

$$\xi=\frac{e^{\beta\mu}\,\Omega}{\lambda_{T}^{3N}}$$

where $\Omega$ is provided by Eq. (7).

Mean field approximation

A standard approximation strategy for polymer field theories is the mean field (MF) approximation, which consists in replacing the many-body interaction term in the action by a term where all bodies of the system interact with an average effective field. This approach reduces any multi-body problem to an effective one-body problem by assuming that the partition function integral of the model is dominated by a single field configuration. A major benefit of solving problems with the MF approximation, or its numerical implementation, commonly referred to as self-consistent field theory (SCFT), is that it often provides useful insights into the properties and behavior of complex many-body systems at relatively low computational cost. Successful applications of this approximation strategy exist for various systems of polymers and complex fluids, such as strongly segregated block copolymers of high molecular weight, highly concentrated neutral polymer solutions, or highly concentrated block polyelectrolyte (PE) solutions (Schmid 1998, Matsen 2002, Fredrickson 2002). There are, however, a multitude of cases for which SCFT provides inaccurate or even qualitatively incorrect results (Baeurle 2006a). These comprise neutral polymer or polyelectrolyte solutions in dilute and semidilute concentration regimes, block copolymers near their order-disorder transition, polymer blends near their phase transitions, etc. In such situations the partition function integral defining the field-theoretic model is not entirely dominated by a single MF configuration, and field configurations far from it can make important contributions; such cases require the use of more sophisticated calculation techniques beyond the MF level of approximation.
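
To make the SCFT loop concrete, the following sketch iterates the self-consistency condition for a single homopolymer species confined between soft repulsive walls in one dimension, using a contact interaction $w(x)=v\,\rho(x)$ and the chain propagator obtained from the modified diffusion equation, a standard device in numerical SCFT implementations. This is a minimal sketch under these stated assumptions, with arbitrary parameter values, not a reproduction of any calculation in the cited works:

```python
import numpy as np

# Minimal 1D SCFT sketch: homopolymers between soft walls, contact
# interactions w(x) = v*rho(x), chain propagator from the modified
# diffusion equation dq/ds = (b^2/6) q'' - w q, solved pseudo-spectrally.
M, L = 128, 10.0                      # grid points, box size (units of b)
x = np.linspace(0.0, L, M, endpoint=False)
dx = L / M
k = 2.0 * np.pi * np.fft.fftfreq(M, d=dx)

N, b, v, rho0 = 50, 1.0, 0.1, 1.0     # chain length, segment length, excluded volume, mean density
ns = 200                              # contour discretization
ds = N / ns

# Soft repulsive walls standing in for confinement
wall = 50.0 * (np.exp(-x**2 / 0.5) + np.exp(-(x - L)**2 / 0.5))

def propagate(w):
    """Solve dq/ds = (b^2/6) d^2q/dx^2 - w q, q(x,0) = 1, by operator splitting."""
    q = np.ones((ns + 1, M))
    expw = np.exp(-0.5 * ds * w)
    expk = np.exp(-ds * (b**2 / 6.0) * k**2)
    for s in range(ns):
        q[s + 1] = expw * np.fft.ifft(expk * np.fft.fft(expw * q[s])).real
    return q

w = np.zeros(M)
for it in range(500):                 # Picard iteration with mixing
    q = propagate(w + wall)
    rho = np.trapz(q * q[::-1], dx=ds, axis=0)    # ~ int ds q(x,s) q(x,N-s)
    rho *= rho0 * L / np.trapz(rho, x)            # fix the mean density to rho0
    w_new = v * rho                               # self-consistency map
    if np.max(np.abs(w_new - w)) < 1e-6:
        break
    w = 0.9 * w + 0.1 * w_new                     # under-relaxation for stability

print(f"converged after {it} iterations; depletion near wall: rho[0]/rho0 = {rho[0]/rho0:.3f}")
```

The under-relaxed Picard update is the simplest stable choice for such fixed-point iterations; production SCFT codes typically accelerate it, e.g. with Anderson mixing.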

Higher-order corrections

One possibility to face the problem is to calculate higher-order corrections to the MF approximation. Tsonchev et al. developed such a strategy including leading (one-loop) order fluctuation corrections, which made it possible to gain new insights into the physics of confined PE solutions (Tsonchev 1999). However, in situations where the MF approximation is poor, many computationally demanding higher-order corrections to the integral are necessary to reach the desired accuracy.
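
The generic structure of such a one-loop correction follows directly from the field-theoretic action: expanding $S[w]$ to quadratic order about the saddle point $w^{*}$ and performing the resulting Gaussian integral gives (a standard saddle-point result, stated here schematically rather than in the specific form used by Tsonchev et al.):

```latex
% Schematic one-loop (Gaussian) correction to the mean-field free energy:
% expand S[w] = S[w^*] + (1/2) \int \delta w \, S''[w^*] \, \delta w + ...
% and integrate out the Gaussian fluctuations \delta w.
\beta F \;\approx\; S[w^{*}]
  \;+\; \frac{1}{2}\,\operatorname{Tr}\,\ln
  \left.\frac{\delta^{2} S[w]}{\delta w(\mathbf{r})\,\delta w(\mathbf{r}')}\right|_{w=w^{*}}
```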

Renormalization techniques

An alternative theoretical tool for coping with the strong-fluctuation problems occurring in field theories was provided in the late 1940s by the concept of renormalization, originally devised to calculate functional integrals arising in quantum field theories (QFTs). In QFTs a standard approximation strategy is to expand the functional integrals in a power series in the coupling constant using perturbation theory. Unfortunately, most of the expansion terms generally turn out to be infinite, rendering such calculations impracticable (Shirkov 2001). A way to remove the infinities from QFTs is to make use of the concept of renormalization (Baeurle 2007). It mainly consists in replacing the bare values of the coupling parameters, such as electric charges or masses, by renormalized coupling parameters and requiring that the physical quantities do not change under this transformation, thereby leading to finite terms in the perturbation expansion. A simple physical picture of the procedure of renormalization can be drawn from the example of a classical electrical charge, $Q$, inserted into a polarizable medium, such as an electrolyte solution. At a distance $r$ from the charge, due to polarization of the medium, its Coulomb field will effectively depend on a function $Q(r)$, i.e. the effective (renormalized) charge, instead of the bare electrical charge, $Q$. At the beginning of the 1970s, K.G. Wilson further pioneered the power of renormalization concepts by developing the formalism of renormalization group (RG) theory, to investigate critical phenomena of statistical systems (Wilson 1971).

Renormalization group theory

The RG theory makes use of a series of RG transformations, each of which consists of a coarse-graining step followed by a change of scale (Wilson 1974). In the case of statistical-mechanical problems the steps are implemented by successively eliminating and rescaling the degrees of freedom in the partition sum or integral that defines the model under consideration. De Gennes used this strategy to establish an analogy between the behavior of the zero-component classical vector model of ferromagnetism near the phase transition and a self-avoiding random walk of a polymer chain of infinite length on a lattice, and thereby to calculate the polymer excluded-volume exponents (de Gennes 1972). Adapting this concept to field-theoretic functional integrals implies studying in a systematic way how a field theory model changes while a certain number of degrees of freedom are eliminated and rescaled in the partition function integral (Wilson 1974).

Hartree renormalization

An alternative approach is known as the Hartree approximation or self-consistent one-loop approximation (Amit 1984). It takes advantage of Gaussian fluctuation corrections to the zeroth-order MF contribution to renormalize the model parameters and to extract, in a self-consistent way, the dominant length scale of the concentration fluctuations in critical concentration regimes.

Tadpole renormalization

In a more recent work Efimov and Nogovitsin showed that an alternative renormalization technique originating from QFT, based on the concept of tadpole renormalization, can be a very effective approach for computing functional integrals arising in statistical mechanics of classical many-particle systems (Efimov 1996). They demonstrated that the main contributions to classical partition function integrals are provided by low-order tadpole-type Feynman diagrams, which account for divergent contributions due to particle self-interaction. The renormalization procedure performed in this approach acts on the self-interaction contribution of a charge (such as an electron or an ion), resulting from the static polarization induced in the vacuum due to the presence of that charge (Baeurle 2007). As evidenced by Efimov and Ganbold in an earlier work (Efimov 1991), the procedure of tadpole renormalization can be employed very effectively to remove the divergences from the action of the basic field-theoretic representation of the partition function and leads to an alternative functional integral representation, called the Gaussian equivalent representation (GER). They showed that the procedure provides functional integrals with significantly ameliorated convergence properties for analytical perturbation calculations. In subsequent works Baeurle et al. developed effective low-cost approximation methods based on the tadpole renormalization procedure, which have been shown to deliver useful results for prototypical polymer and PE solutions (Baeurle 2006a, Baeurle 2006b, Baeurle 2007a).

Numerical simulation

Another possibility is to use Monte Carlo (MC) algorithms and to sample the full partition function integral in field-theoretic formulation. The resulting procedure is then called a polymer field-theoretic simulation. In a recent work, however, Baeurle demonstrated that MC sampling in conjunction with the basic field-theoretic representation is impracticable due to the so-called numerical sign problem (Baeurle 2002). The difficulty is related to the complex and oscillatory nature of the resulting distribution function, which causes poor statistical convergence of the ensemble averages of the desired thermodynamic and structural quantities. In such cases special analytical and numerical techniques are necessary to accelerate the statistical convergence (Baeurle 2003, Baeurle 2003a, Baeurle 2004).
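
The severity of the problem can be illustrated with a one-dimensional caricature of the oscillatory integrand: estimating $\langle\cos(c\,x)\rangle$ for $x\sim\mathcal{N}(0,1)$ by direct sampling, for which the exact answer $e^{-c^{2}/2}$ is known. As the oscillation frequency $c$ grows, the signal decays exponentially while the statistical noise stays fixed. This toy model is an illustration of the generic mechanism added here, not of any specific calculation in the cited works:

```python
import numpy as np

# Toy sign problem: estimate E[cos(c*x)] for x ~ N(0,1) by direct sampling.
# Exact answer: exp(-c^2/2). The oscillatory weight makes the mean decay
# exponentially in c while the sampling noise stays O(1/sqrt(samples)).
rng = np.random.default_rng(0)
x = rng.normal(size=1_000_000)
for c in (0.5, 2.0, 4.0, 6.0):
    w = np.cos(c * x)
    mean, err = w.mean(), w.std() / np.sqrt(x.size)
    print(f"c={c}: estimate {mean:+.2e} +/- {err:.1e}, exact {np.exp(-c**2/2):.2e}")
```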

Mean field representation

To make the methodology amenable to computation, Baeurle proposed to shift the contour of integration of the partition function integral through the homogeneous MF solution using Cauchy's integral theorem, providing its so-called mean-field representation. This strategy was previously employed successfully by Baer et al. in field-theoretic electronic structure calculations (Baer 1998). Baeurle demonstrated that this technique provides a significant acceleration of the statistical convergence of the ensemble averages in the MC sampling procedure (Baeurle 2002, Baeurle 2002a).
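
The benefit of shifting the contour through the saddle point can again be seen in the one-dimensional caricature from the previous section. For $Z(c)=\int dx\,e^{-x^{2}/2+icx}/\sqrt{2\pi}$ the saddle lies at $x=ic$; deforming the contour through it, which Cauchy's theorem permits since the Gaussian decays at infinity, removes the oscillations entirely in this toy case. A minimal sketch under these assumptions:

```python
import numpy as np

# Toy contour shift: Z(c) = E[exp(i*c*x)], x ~ N(0,1); exact value exp(-c^2/2).
rng = np.random.default_rng(1)
x = rng.normal(size=100_000)
c = 4.0

naive = np.exp(1j * c * x).mean()       # sample on the real axis: oscillatory
# Shift x -> x + i*c through the saddle of -x^2/2 + i*c*x; the sampling
# density stays N(0,1), so reweight by the ratio of integrands:
shifted = np.exp(-(x + 1j * c)**2 / 2 + 1j * c * (x + 1j * c) + x**2 / 2).mean()

print(abs(naive), abs(shifted), np.exp(-c**2 / 2))
# The shifted estimator has zero variance here: the exponent reduces to the
# constant -c^2/2. In the field theory the gain is partial rather than total.
```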

Gaussian equivalent representation

In subsequent works Baeurle et al. (Baeurle 2002, Baeurle 2002a, Baeurle 2003, Baeurle 2003a, Baeurle 2004) applied the concept of tadpole renormalization, leading to the Gaussian equivalent representation of the partition function integral, in conjunction with advanced MC techniques in the grand canonical ensemble. They demonstrated convincingly that this strategy provides a further boost in the statistical convergence of the desired ensemble averages (Baeurle 2002).

