Carey Foster bridge

In electronics, the Carey Foster bridge is a bridge circuit used to measure medium resistances, or to measure small differences between two large resistances. It was invented by Carey Foster as a variant on the Wheatstone bridge. He first described it in his 1872 paper "On a Modified Form of Wheatstone's Bridge, and Methods of Measuring Small Resistances" (Telegraph Engineer's Journal, 1872–1873, 1, 196).

Use

Figure: The Carey Foster bridge. The thick-edged areas are busbars of almost zero resistance.

In the adjacent diagram, X and Y are the resistances to be compared. P and Q are nearly equal resistances forming the other half of the bridge. A sliding (jockey) contact D is moved along the bridge wire EF until the galvanometer G reads zero. The thick-bordered areas are thick copper busbars of very low resistance, so that their contribution to the measurement is negligible.

  1. Place a known resistance in position Y.
  2. Place the unknown resistance in position X.
  3. Adjust the contact D along the bridge wire EF so as to null the galvanometer. Record this position, as a percentage of the distance from E to F, as ℓ₁.
  4. Swap X and Y. Adjust D to the new null point. Record this position as ℓ₂.
  5. If the resistance of the wire per percentage point of its length is σ, then the resistance difference is the resistance of the length of bridge wire between ℓ₁ and ℓ₂:

     X - Y = σ(ℓ₂ - ℓ₁)

To measure a low unknown resistance X, replace Y with a copper busbar that can be assumed to be of zero resistance.
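The arithmetic behind these steps is straightforward; the following minimal Python sketch (the function names and the example readings are illustrative, not taken from the original description) turns the two null-point readings into a resistance value:

    # Carey Foster bridge: recover an unknown resistance from two null-point readings.
    # sigma : bridge-wire resistance per percent of its length (ohm per percent)
    # l1, l2: null-point positions (percent of E to F) before and after swapping X and Y

    def resistance_difference(sigma, l1, l2):
        """Resistance difference X - Y implied by the shift of the null point."""
        return sigma * (l2 - l1)

    def unknown_resistance(known_y, sigma, l1, l2):
        """Unknown X when a known resistance is placed at Y.
        Use known_y = 0.0 when Y is replaced by a copper busbar of assumed zero resistance."""
        return known_y + resistance_difference(sigma, l1, l2)

    # Illustrative readings (not measured data): 1.0000 ohm standard at Y,
    # wire resistance 0.02 ohm per percent, null points at 47.5 % and 52.5 %.
    print(unknown_resistance(1.0000, 0.02, 47.5, 52.5))  # -> 1.1 ohm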

In practical use, when the bridge is unbalanced, the galvanometer is shunted with a low resistance to avoid burning it out. It is only used at full sensitivity when the anticipated measurement is close to the null point.

To measure σ

To measure the unit resistance of the bridge wire EF, place a known resistance smaller than the total resistance of the wire (e.g., a standard 1 ohm resistance) in position X, and a copper busbar of assumed zero resistance in position Y.
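In terms of the relation from the previous section, with Y taken as zero and the known standard resistance R placed as X, the two null points give σ = R / (ℓ₂ - ℓ₁). A minimal sketch of this calibration (illustrative values, not measured data):

    # Calibrate sigma, the bridge-wire resistance per percent of its length.
    # r_standard: known resistance placed at X (ohm); Y is a busbar of ~0 ohm.
    # l1, l2    : null-point positions (percent) before and after swapping X and Y
    def wire_resistance_per_percent(r_standard, l1, l2):
        return r_standard / (l2 - l1)

    # Illustrative readings: a 1 ohm standard shifts the null point from 25.0 % to 75.0 %,
    # so sigma = 1 / 50 = 0.02 ohm per percent.
    print(wire_resistance_per_percent(1.0, 25.0, 75.0))  # -> 0.02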

Theory

Two resistances to be compared, X and Y, are connected in series with the bridge wire. Thus, considered as a Wheatstone bridge, the two resistances are X plus a length of bridge wire, and Y plus the remaining bridge wire. The two remaining arms are the nearly equal resistances P and Q, connected in the inner gaps of the bridge.

Figure: A standard Wheatstone bridge for comparison. Points A, B, C and D in both circuit diagrams correspond. X and Y correspond to R1 and R2; P and Q correspond to R3 and RX. Note that with the Carey Foster bridge, we are measuring R1 rather than RX.

Let ℓ₁ be the position of the null point D on the bridge wire EF, in percent. Let α be the unknown extra resistance EX at the left end, β the unknown extra resistance FY at the right end, and σ the resistance per percent length of the bridge wire. At the null point:

   P/Q = (X + α + σℓ₁) / (Y + β + σ(100 - ℓ₁))

and add 1 to each side:

   (P + Q)/Q = (X + Y + α + β + 100σ) / (Y + β + σ(100 - ℓ₁))   (equation 1)

Now swap X and Y, and let ℓ₂ be the new null-point reading in percent:

   P/Q = (Y + α + σℓ₂) / (X + β + σ(100 - ℓ₂))

and add 1 to each side:

   (P + Q)/Q = (X + Y + α + β + 100σ) / (X + β + σ(100 - ℓ₂))   (equation 2)

Equations 1 and 2 have the same left-hand side and the same numerator on the right-hand side, so the denominators on the right-hand side must also be equal:

   Y + β + σ(100 - ℓ₁) = X + β + σ(100 - ℓ₂)

which rearranges to

   X - Y = σ(ℓ₂ - ℓ₁)

Thus the difference between X and Y is the resistance of the bridge wire between ℓ₁ and ℓ₂. Note that the end resistances α and β, and the exact values of P and Q, cancel out of the result.
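As an informal cross-check (not part of the original derivation), the result can be verified symbolically. The sketch below uses the sympy library with the notation defined above:

    # Symbolic check that the two balance conditions are consistent with X - Y = sigma*(l2 - l1).
    import sympy as sp

    X, Y, sigma, alpha, beta, l1, l2 = sp.symbols('X Y sigma alpha beta l1 l2', positive=True)

    # Arm ratio at the first null point l1, and at the second null point l2 after swapping X and Y.
    ratio_1 = (X + alpha + sigma * l1) / (Y + beta + sigma * (100 - l1))
    ratio_2 = (Y + alpha + sigma * l2) / (X + beta + sigma * (100 - l2))

    # Both ratios equal P/Q, so they must be equal to each other.
    # Substituting X = Y + sigma*(l2 - l1) should make their difference vanish identically.
    difference = (ratio_1 - ratio_2).subs(X, Y + sigma * (l2 - l1))
    print(sp.simplify(difference))  # prints 0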

The bridge is most sensitive when P, Q, X and Y are all of comparable magnitude.

