Limiting magnitude of reference source

At optical and near-infrared wavelengths the brightness of stars is conventionally expressed on the stellar magnitude scale of Pogson (1856). The apparent magnitude $m$ at a given wavelength $\lambda$ is defined in terms of the amplitude $A$ of the incident electromagnetic waves:
\begin{displaymath}
m = -5\log_{10} \left ( A \right ) + k \left ( \lambda \right ) \qquad (1.10)
\end{displaymath}

where the wavelength-dependent constant $k \left ( \lambda \right )$ can be defined in a number of ways, for example through the Johnson magnitude system (Aller et al., 1982).
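
Since the detected flux $F$ is proportional to the square of the wave amplitude, $F \propto A^{2}$, Equation 1.10 is equivalent to the more familiar flux form of the magnitude scale,
\begin{displaymath}
m = -2.5\log_{10} \left ( F \right ) + k' \left ( \lambda \right )
\end{displaymath}
where $k' \left ( \lambda \right )$ absorbs the constant of proportionality, so that a difference of five magnitudes corresponds to a factor of 100 in flux.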

For observations in a given waveband, the apparent magnitude of the faintest reference source which can be used by a high-resolution imaging technique is called the reference source limiting magnitude $m_{l}$. The applicability of the imaging technique depends on the density $\rho \left ( m<m_{l} \right )$ of stars brighter than this limiting magnitude on the night sky. For the range of limiting magnitudes appropriate to most of the imaging techniques described here, this density is reasonably well fitted over the majority of the night sky by $\rho \left ( m<m_{l} \right ) \propto 10^{0.35 m_{l}}$ (see e.g. Bahcall & Soneira, 1984; Cox, 2000). Improving the limiting magnitude of any one of the imaging techniques by only one magnitude therefore typically doubles the sky coverage of the technique, dramatically broadening the range of astronomical studies which can be undertaken with it.
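
The factor of two follows directly from the power-law fit: a one-magnitude improvement in $m_{l}$ increases the density of usable reference stars by
\begin{displaymath}
\frac{\rho \left ( m<m_{l}+1 \right )}{\rho \left ( m<m_{l} \right )} = 10^{0.35} \approx 2.2 ,
\end{displaymath}
roughly doubling the fraction of the sky over which a suitable reference source can be found.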


