Young's convolution inequality

In mathematics, Young's convolution inequality is a mathematical inequality about the convolution of two functions,[1] named after William Henry Young.

Statement

Euclidean space

In real analysis, the following result is called Young's convolution inequality:[2]

Suppose $f$ is in the Lebesgue space $L^p(\mathbb{R}^d)$ and $g$ is in $L^q(\mathbb{R}^d)$ and
\[ \frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1 \]
with $1 \leq p, q, r \leq \infty.$ Then
\[ \|f * g\|_r \leq \|f\|_p \, \|g\|_q. \]

Here the star denotes convolution, $L^p$ is Lebesgue space, and
\[ \|f\|_p = \left( \int_{\mathbb{R}^d} |f(x)|^p \, dx \right)^{1/p} \]
denotes the usual $L^p$ norm.

Equivalently, if $p, q, r \geq 1$ and $\tfrac{1}{p} + \tfrac{1}{q} + \tfrac{1}{r} = 2,$ then
\[ \left| \int_{\mathbb{R}^d} \int_{\mathbb{R}^d} f(x) \, g(x - y) \, h(y) \, dx \, dy \right| \leq \|f\|_p \, \|g\|_q \, \|h\|_r. \]
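The inequality can also be illustrated numerically. The sketch below is only a sanity check, not part of the cited result; the grid, the test functions, and the exponents $p = 2,$ $q = 3/2$ (hence $r = 6$) are arbitrary choices, and the convolution and norms are approximated by Riemann sums with NumPy.

```python
# Numerical sanity check of Young's convolution inequality on R.
# Everything here (grid, test functions, exponents) is an arbitrary
# illustrative choice, not canonical.
import numpy as np

dx = 0.01
x = np.arange(-20.0, 20.0, dx)

f = np.exp(-np.abs(x))  # belongs to every L^p space
g = np.exp(-x ** 2)     # belongs to every L^q space

p, q = 2.0, 1.5
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)  # from 1/p + 1/q = 1/r + 1, so r = 6

def lp_norm(h, s):
    # Riemann-sum approximation of the L^s norm on the grid
    return (np.sum(np.abs(h) ** s) * dx) ** (1.0 / s)

fg = np.convolve(f, g, mode="same") * dx  # (f*g)(x) ~ sum_y f(y) g(x-y) dx

lhs = lp_norm(fg, r)
rhs = lp_norm(f, p) * lp_norm(g, q)
print(f"{lhs:.6f} <= {rhs:.6f}: {lhs <= rhs}")
```

(Discretization error can in principle perturb such a check, but for rapidly decaying functions like these the inequality is satisfied with room to spare.)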

Generalizations

Young's convolution inequality has a natural generalization in which we replace $\mathbb{R}^d$ by a unimodular group $G.$ If we let $\mu$ be a bi-invariant Haar measure on $G$ and we let $f, g : G \to \mathbb{R}$ or $\mathbb{C}$ be integrable functions, then we define $f * g$ by
\[ (f * g)(x) = \int_G f(y) \, g(y^{-1} x) \, d\mu(y). \]

Then in this case, Young's inequality states that for $f \in L^p(G, \mu)$ and $g \in L^q(G, \mu)$ and $p, q, r \in [1, \infty]$ such that
\[ \frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1, \]
we have the bound
\[ \|f * g\|_r \leq \|f\|_p \, \|g\|_q. \]

Equivalently, if $p, q, r \geq 1$ and $\tfrac{1}{p} + \tfrac{1}{q} + \tfrac{1}{r} = 2,$ then
\[ \left| \int_G \int_G f(x) \, g(y^{-1} x) \, h(y) \, d\mu(x) \, d\mu(y) \right| \leq \|f\|_p \, \|g\|_q \, \|h\|_r. \]

Since $\mathbb{R}^d$ is a locally compact abelian group (and therefore unimodular), with the Lebesgue measure the desired Haar measure, this is indeed a generalization.
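The group form admits the same kind of numerical illustration, under the same caveats as before. The sketch below takes $G = \mathbb{Z}_n,$ the cyclic group of order $n,$ whose counting measure is a bi-invariant Haar measure, so the inequality holds with constant 1; the test data and exponents are arbitrary choices.

```python
# Sketch: Young's inequality on the finite cyclic group Z_n with counting
# measure (a unimodular group with bi-invariant Haar measure).
# The test data and exponents are arbitrary illustrative choices.
import numpy as np

n = 32
rng = np.random.default_rng(seed=0)
f = rng.random(n)
g = rng.random(n)

def group_conv(f, g):
    # (f*g)(x) = sum_y f(y) g(y^{-1} x); in Z_n, y^{-1} x is (x - y) mod n
    return np.array([sum(f[y] * g[(x - y) % n] for y in range(n))
                     for x in range(n)])

def lp(h, s):
    # L^s norm with respect to counting measure
    return np.sum(np.abs(h) ** s) ** (1.0 / s)

p, q = 2.0, 1.5
r = 1.0 / (1.0 / p + 1.0 / q - 1.0)  # r = 6
lhs = lp(group_conv(f, g), r)
rhs = lp(f, p) * lp(g, q)
print(f"{lhs:.6f} <= {rhs:.6f}: {lhs <= rhs}")
```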

This generalization may be refined. Let $G$ and $\mu$ be as before and assume $1 < p, q, r < \infty$ satisfy
\[ \frac{1}{p} + \frac{1}{q} = \frac{1}{r} + 1. \]
Then there exists a constant $C$ such that for any $f \in L^p(G, \mu)$ and any measurable function $g$ on $G$ that belongs to the weak $L^q$ space $L^{q, \infty}(G, \mu),$ which by definition means that the following supremum
\[ \|g\|_{q, \infty}^q = \sup_{t > 0} t^q \, \mu(\{x \in G : |g(x)| > t\}) \]
is finite, we have $f * g \in L^r(G, \mu)$ and[3]
\[ \|f * g\|_r \leq C \, \|f\|_p \, \|g\|_{q, \infty}. \]
For example, on $\mathbb{R}^d$ the kernel $g(x) = |x|^{-d/q}$ belongs to $L^{q, \infty}$ but not to $L^q,$ so this refinement covers convolutions that the original inequality does not.

Applications

As an example application, Young's inequality can be used to show that the heat semigroup is a contracting semigroup in the $L^1$ norm (that is, the Weierstrass transform does not enlarge the $L^1$ norm).
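To spell out the computation (a standard argument, stated here for completeness): the heat semigroup on $\mathbb{R}^d$ acts by convolution with the heat kernel
\[ \Phi_t(x) = (4 \pi t)^{-d/2} \, e^{-|x|^2 / (4t)}, \qquad \|\Phi_t\|_1 = 1 \text{ for all } t > 0, \]
so Young's inequality with $p = q = r = 1$ gives
\[ \|e^{t \Delta} f\|_1 = \|\Phi_t * f\|_1 \leq \|\Phi_t\|_1 \, \|f\|_1 = \|f\|_1. \]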

Proof

Proof by Hölder's inequality

Young's inequality has an elementary proof with the non-optimal constant 1.[4]

We assume that the functions $f, g : G \to [0, \infty)$ are nonnegative and integrable, where $G$ is a unimodular group endowed with a bi-invariant Haar measure $\mu.$ We use the fact that $\mu(A) = \mu(A^{-1})$ for any measurable $A \subseteq G.$ Since
\[ f(y) \, g(y^{-1} x) = \left( f(y)^p \, g(y^{-1} x)^q \right)^{1/r} \left( f(y)^p \right)^{\frac{1}{p} - \frac{1}{r}} \left( g(y^{-1} x)^q \right)^{\frac{1}{q} - \frac{1}{r}}, \]
by the Hölder inequality for three functions we deduce that
\[ (f * g)(x) \leq \left( \int_G f(y)^p \, g(y^{-1} x)^q \, d\mu(y) \right)^{1/r} \left( \int_G f(y)^p \, d\mu(y) \right)^{\frac{1}{p} - \frac{1}{r}} \left( \int_G g(y^{-1} x)^q \, d\mu(y) \right)^{\frac{1}{q} - \frac{1}{r}}. \]
The conclusion then follows by left-invariance of the Haar measure, the fact that integrals are preserved by inversion of the domain, and by Fubini's theorem.
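To complete the argument (filling in the steps just cited): by left-invariance and inversion-invariance of $\mu,$ the third factor equals $\|g\|_q^{q \left( \frac{1}{q} - \frac{1}{r} \right)}$ independently of $x,$ and the second factor is $\|f\|_p^{p \left( \frac{1}{p} - \frac{1}{r} \right)}.$ Raising the bound to the $r$-th power, integrating in $x,$ and applying Fubini's theorem then yields
\[ \|f * g\|_r^r \leq \|f\|_p^{r - p} \, \|g\|_q^{r - q} \int_G \int_G f(y)^p \, g(y^{-1} x)^q \, d\mu(y) \, d\mu(x) = \|f\|_p^r \, \|g\|_q^r, \]
which is the claimed inequality with constant 1.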

Proof by interpolation

Young's inequality can also be proved by interpolation; see the article on Riesz–Thorin interpolation for a proof.
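In outline (a standard argument, not spelled out in the source): fix $g \in L^q$ and consider the map $T : f \mapsto f * g.$ Minkowski's integral inequality gives the endpoint bound $\|f * g\|_q \leq \|f\|_1 \, \|g\|_q,$ and Hölder's inequality gives $\|f * g\|_\infty \leq \|f\|_{q'} \, \|g\|_q,$ where $q'$ is the conjugate exponent of $q.$ Interpolating between these two bounds with the Riesz–Thorin theorem yields $\|f * g\|_r \leq \|f\|_p \, \|g\|_q$ for the full range $\tfrac{1}{p} + \tfrac{1}{q} = \tfrac{1}{r} + 1.$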

Sharp constant

In case $p, q > 1,$ Young's inequality can be strengthened to a sharp form, via
\[ \|f * g\|_r \leq c_{p, q}^d \, \|f\|_p \, \|g\|_q, \]
where the constant $c_{p, q} < 1.$[5][6][7] When this optimal constant is achieved, the functions $f$ and $g$ are multidimensional Gaussian functions.
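For reference, the optimal constant identified in the cited works of Beckner and of Brascamp and Lieb can be written (as usually stated; the notation here is ours) as
\[ c_{p, q} = A_p \, A_q \, A_{r'}, \qquad A_m = \left( \frac{m^{1/m}}{m'^{\,1/m'}} \right)^{1/2}, \]
where each $m'$ denotes the Hölder conjugate exponent, $\tfrac{1}{m} + \tfrac{1}{m'} = 1,$ and $r'$ is the conjugate of $r.$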

See also

Hölder's inequality
Minkowski inequality
Young's inequality for products
Young's inequality for integral operators

Notes

  1. Young, W. H. (1912), "On the multiplication of successions of Fourier constants", Proceedings of the Royal Society A, 87 (596): 331–339, doi:10.1098/rspa.1912.0086, JFM 44.0298.02, JSTOR 93120
  2. Bogachev, Vladimir I. (2007), Measure Theory, vol. I, Berlin, Heidelberg, New York: Springer-Verlag, ISBN 978-3-540-34513-8, MR 2267655, Zbl 1120.28001, Theorem 3.9.4
  3. Bahouri, Chemin & Danchin 2011, pp. 5–6.
  4. Lieb, Elliott H.; Loss, Michael (2001). Analysis. Graduate Studies in Mathematics (2nd ed.). Providence, R.I.: American Mathematical Society. p. 100. ISBN 978-0-8218-2783-3. OCLC 45799429.
  5. Beckner, William (1975). "Inequalities in Fourier Analysis". Annals of Mathematics. 102 (1): 159–182. doi:10.2307/1970980. JSTOR 1970980.
  6. Brascamp, Herm Jan; Lieb, Elliott H. (1976-05-01). "Best constants in Young's inequality, its converse, and its generalization to more than three functions". Advances in Mathematics. 20 (2): 151–173. doi:10.1016/0001-8708(76)90184-5.
  7. Fournier, John J. F. (1977), "Sharpness in Young's inequality for convolution", Pacific Journal of Mathematics, 72 (2): 383–397, doi:10.2140/pjm.1977.72.383, MR 0461034, Zbl 0357.43002

Related Research Articles

<span class="mw-page-title-main">Convolution</span> Integral expressing the amount of overlap of one function as it is shifted over another

In mathematics, convolution is a mathematical operation on two functions that produces a third function that expresses how the shape of one is modified by the other. The term convolution refers to both the result function and to the process of computing it. It is defined as the integral of the product of the two functions after one is reflected about the y-axis and shifted. The choice of which function is reflected and shifted before the integral does not change the integral result. The integral is evaluated for all values of shift, producing the convolution function.

In mathematics, the Lp spaces are function spaces defined using a natural generalization of the p-norm for finite-dimensional vector spaces. They are sometimes called Lebesgue spaces, named after Henri Lebesgue, although according to the Bourbaki group they were first introduced by Frigyes Riesz.

In mathematical analysis, Hölder's inequality, named after Otto Hölder, is a fundamental inequality between integrals and an indispensable tool for the study of Lp spaces.

In mathematical analysis, the Minkowski inequality establishes that the Lp spaces are normed vector spaces. Let be a measure space, let and let and be elements of Then is in and we have the triangle inequality

In the calculus of variations and classical mechanics, the Euler–Lagrange equations are a system of second-order ordinary differential equations whose solutions are stationary points of the given action functional. The equations were discovered in the 1750s by Swiss mathematician Leonhard Euler and Italian mathematician Joseph-Louis Lagrange.

<span class="mw-page-title-main">Jensen's inequality</span> Theorem of convex functions

In mathematics, Jensen's inequality, named after the Danish mathematician Johan Jensen, relates the value of a convex function of an integral to the integral of the convex function. It was proved by Jensen in 1906, building on an earlier proof of the same inequality for doubly-differentiable functions by Otto Hölder in 1889. Given its generality, the inequality appears in many forms depending on the context, some of which are presented below. In its simplest form the inequality states that the convex transformation of a mean is less than or equal to the mean applied after convex transformation; it is a simple corollary that the opposite is true of concave transformations.

In quantum mechanics, information theory, and Fourier analysis, the entropic uncertainty or Hirschman uncertainty is defined as the sum of the temporal and spectral Shannon entropies. It turns out that Heisenberg's uncertainty principle can be expressed as a lower bound on the sum of these entropies. This is stronger than the usual statement of the uncertainty principle in terms of the product of standard deviations.

In mathematics, the Riesz–Thorin theorem, often referred to as the Riesz–Thorin interpolation theorem or the Riesz–Thorin convexity theorem, is a result about interpolation of operators. It is named after Marcel Riesz and his student G. Olof Thorin.

In mathematics, a norm is a function from a real or complex vector space to the non-negative real numbers that behaves in certain ways like the distance from the origin: it commutes with scaling, obeys a form of the triangle inequality, and is zero only at the origin. In particular, the Euclidean distance in a Euclidean space is defined by a norm on the associated Euclidean vector space, called the Euclidean norm, the 2-norm, or, sometimes, the magnitude of the vector. This norm can be defined as the square root of the inner product of a vector with itself.

Differential entropy is a concept in information theory that began as an attempt by Claude Shannon to extend the idea of (Shannon) entropy, a measure of average (surprisal) of a random variable, to continuous probability distributions. Unfortunately, Shannon did not derive this formula, and rather just assumed it was the correct continuous analogue of discrete entropy, but it is not. The actual continuous version of discrete entropy is the limiting density of discrete points (LDDP). Differential entropy is commonly encountered in the literature, but it is a limiting case of the LDDP, and one that loses its fundamental association with discrete entropy.

In mathematics, the Wasserstein distance or Kantorovich–Rubinstein metric is a distance function defined between probability distributions on a given metric space . It is named after Leonid Vaseršteĭn.

In mathematics, the Prékopa–Leindler inequality is an integral inequality closely related to the reverse Young's inequality, the Brunn–Minkowski inequality and a number of other important and classical inequalities in analysis. The result is named after the Hungarian mathematicians András Prékopa and László Leindler.

Beliefs depend on the available information. This idea is formalized in probability theory by conditioning. Conditional probabilities, conditional expectations, and conditional probability distributions are treated on three levels: discrete probabilities, probability density functions, and measure theory. Conditioning leads to a non-random result if the condition is completely specified; otherwise, if the condition is left random, the result of conditioning is also random.

In mathematics, the Babenko–Beckner inequality (after K. Ivan Babenko and William E. Beckner) is a sharpened form of the Hausdorff–Young inequality having applications to uncertainty principles in the Fourier analysis of Lp spaces. The (qp)-norm of the n-dimensional Fourier transform is defined to be

In mathematics, the Pettis integral or Gelfand–Pettis integral, named after Israel M. Gelfand and Billy James Pettis, extends the definition of the Lebesgue integral to vector-valued functions on a measure space, by exploiting duality. The integral was introduced by Gelfand for the case when the measure space is an interval with Lebesgue measure. The integral is also called the weak integral in contrast to the Bochner integral, which is the strong integral.

In mathematical analysis, Lorentz spaces, introduced by George G. Lorentz in the 1950s, are generalisations of the more familiar spaces.

In mathematics, the symmetric decreasing rearrangement of a function is a function which is symmetric and decreasing, and whose level sets are of the same size as those of the original function.

In the field of mathematical analysis, an interpolation inequality is an inequality of the form

In mathematics, Young's inequality for products is a mathematical inequality about the product of two numbers. The inequality is named after William Henry Young and should not be confused with Young's convolution inequality.

In mathematical analysis, the Young's inequality for integral operators, is a bound on the operator norm of an integral operator in terms of norms of the kernel itself.

References

Bahouri, Hajer; Chemin, Jean-Yves; Danchin, Raphaël (2011). Fourier Analysis and Nonlinear Partial Differential Equations. Grundlehren der mathematischen Wissenschaften. Vol. 343. Berlin, Heidelberg: Springer.