From Wikipedia, the free encyclopedia
The moment magnitude scale (abbreviated as MMS; denoted as MW or M) is used by seismologists to measure the size of earthquakes in terms of the energy released. The magnitude is based on the seismic moment of the earthquake, which is equal to the rigidity of the Earth multiplied by the average amount of slip on the fault and the size of the area that slipped. The scale was developed in the 1970s to succeed the 1930s-era Richter magnitude scale (ML). Even though the formulae are different, the new scale retains the familiar continuum of magnitude values defined by the older one. The MMS is now the scale used to estimate magnitudes for all modern large earthquakes by the United States Geological Survey.
In 1935, Charles Richter and Beno Gutenberg developed the local magnitude (ML) scale (popularly known as the Richter scale) with the goal of quantifying medium-sized earthquakes (between magnitude 3.0 and 7.0) in Southern California. This scale was based on the ground motion measured by a particular type of seismometer (a Wood-Anderson seismograph) at a distance of 100 kilometres (62 mi) from the earthquake's epicenter. Because of this, there is an upper limit on the highest measurable magnitude, and all large earthquakes tend to have a local magnitude of around 7. Further, the magnitude becomes unreliable for measurements taken at a distance of more than about 600 kilometres (370 mi) from the epicenter. Since this ML scale was simple to use and corresponded well with the observed damage, it was extremely useful for engineering earthquake-resistant structures and gained common acceptance.
Although the Richter scale represented a major step forward, it was not as effective for characterizing some classes of quakes. As a result, Beno Gutenberg expanded Richter's work to consider earthquakes detected at distant locations. At such large distances the higher-frequency vibrations are attenuated, and the seismic surface waves (Rayleigh and Love waves) are dominated by waves with a period of 20 seconds (corresponding to a wavelength of about 60 km). These waves were assigned a surface-wave magnitude scale (MS). Gutenberg also combined compressional P-waves and transverse S-waves (which he termed "body waves") to create a body-wave magnitude scale (Mb), measured for periods between 1 and 10 seconds. Ultimately Gutenberg and Richter collaborated to produce a combined scale which was able to estimate the energy released by an earthquake in terms of Gutenberg's surface-wave magnitude scale (MS).
The Richter scale, as modified, was successfully applied to characterize localities. This enabled local building codes to establish standards for earthquake-resistant buildings. However, a series of "great earthquakes" was poorly handled by the modified Richter scale: quakes whose faults broke along lines of up to 1,000 km. Examples include the 1952 Aleutian Fox Islands quake and the 1960 Chilean quake, both of which broke faults approaching 1,000 km in length. The MS scale was unable to characterize these great earthquakes accurately.
The difficulties with using MS to characterize such quakes resulted from their size. Great quakes produced 20 s waves such that MS was comparable to that of normal quakes, but they also produced very long-period waves (more than 200 s) which carried large amounts of energy. As a result, the modified Richter scale methodology was deficient for estimating the energy of the highest-energy earthquakes.
In 1972, Keiiti Aki introduced elastic dislocation theory to improve understanding of the earthquake mechanism. This theory proposed that the energy release from a quake is proportional to the surface area that breaks free, the average distance that the fault is displaced, and the rigidity of the material adjacent to the fault. This was found to correlate well with seismologic readings from long-period seismographs. Hence the moment magnitude scale (MW) represented a major step forward in characterizing earthquakes.
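As a worked illustration of this proportionality, the seismic moment M0 = rigidity × rupture area × average slip can be evaluated for a hypothetical rupture; the rigidity, fault dimensions, and slip below are illustrative assumptions, not measurements of any particular earthquake.

```python
# Seismic moment: M0 = rigidity * rupture area * average slip.
# All values below are illustrative assumptions.
mu = 3.0e10            # rigidity of crustal rock in Pa (~30 GPa, a typical value)
area = 100e3 * 20e3    # rupture area: 100 km long x 20 km deep, in m^2
slip = 2.0             # average slip on the fault, in m

M0 = mu * area * slip  # seismic moment in N*m
print(f"M0 = {M0:.2e} N*m")  # -> M0 = 1.20e+20 N*m
```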
The moment magnitude MW is defined by the formula

MW = (2/3)(log10 M0 − 9.1)

where M0 is the seismic moment in N⋅m (10^7 dyne⋅cm). The constant values in the equation are chosen to achieve consistency with the magnitude values produced by earlier scales, the local magnitude (ML) and the surface-wave magnitude (MS), both referred to as the "Richter" scale by reporters.
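The standard moment-magnitude relation, MW = (2/3)(log10 M0 − 9.1) with M0 in N⋅m, can be sketched as a small function; the sample moment passed in is an illustrative value.

```python
import math

def moment_magnitude(m0):
    """Moment magnitude MW from seismic moment m0 in N*m,
    via the standard relation MW = (2/3) * (log10(m0) - 9.1)."""
    return (2.0 / 3.0) * (math.log10(m0) - 9.1)

print(round(moment_magnitude(1.0e20), 1))  # a 1e20 N*m moment -> about 7.3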
As with the Richter scale, an increase of one step on this logarithmic scale corresponds to a 10^1.5 ≈ 32 times increase in the amount of energy released, and an increase of two steps corresponds to a 10^3 = 1000 times increase in energy.
The following formula, obtained by solving the previous equation for M0, allows one to assess the proportional difference in energy release between earthquakes of two different moment magnitudes, say m1 and m2:

M0(m1) / M0(m2) = 10^(1.5 (m1 − m2))
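Because the scale is logarithmic with the factor 3/2, the ratio of energy release between two magnitudes m1 and m2 is 10^(1.5 (m1 − m2)); a short numerical check:

```python
def energy_ratio(m1, m2):
    """Ratio of energy release between moment magnitudes m1 and m2,
    using the 10**(1.5 * (m1 - m2)) scaling of the logarithmic scale."""
    return 10.0 ** (1.5 * (m1 - m2))

print(energy_ratio(6.0, 5.0))  # one step: about 31.6 times
print(energy_ratio(7.0, 5.0))  # two steps: 1000 times
```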
Potential energy is stored in the crust in the form of built-up stress. During an earthquake, this stored energy is transformed and results in cracks and deformation in rocks, heat, and radiated seismic energy.
The seismic moment M0 is a measure of the total amount of energy that is transformed during an earthquake. Only a small fraction of the seismic moment is converted into radiated seismic energy ES, which is what seismographs register. Using the estimate ES ≈ M0 × 1.6 × 10^−5, Choy and Boatwright defined in 1995 the energy magnitude

Me = (2/3) log10 ES − 2.9

where ES is in N⋅m.
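Choy and Boatwright's energy magnitude, Me = (2/3) log10 ES − 2.9 with ES in N⋅m, can be sketched alongside the rough average estimate ES ≈ 1.6 × 10^−5 M0; the moment used below is an illustrative value, not a measured one.

```python
import math

def energy_magnitude(es):
    """Choy & Boatwright (1995) energy magnitude Me from radiated
    seismic energy es in N*m (joules): Me = (2/3) * log10(es) - 2.9."""
    return (2.0 / 3.0) * math.log10(es) - 2.9

m0 = 1.0e20          # seismic moment in N*m (illustrative)
es = 1.6e-5 * m0     # rough average estimate of radiated energy
print(round(energy_magnitude(es), 2))  # close to the MW value for the same moment
```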
A rule of thumb equivalence from seismology used in the study of nuclear proliferation asserts that a one kiloton nuclear explosion creates a seismic signal with a magnitude of approximately 4.0. This in turn leads to the equation

Mn = (2/3) log10 (mTNT / 1 Mt) + 6

where mTNT is the mass of the explosive TNT that is quoted for comparison (relative to megatons Mt).
Such comparison figures are not very meaningful. As with earthquakes, during an underground explosion of a nuclear weapon only a small fraction of the total amount of energy transformed ends up being radiated as seismic waves. Therefore, a seismic efficiency has to be chosen for a bomb that is quoted as a comparison. Using the conventional specific energy of TNT (4.184 MJ/kg), the above formula implies the assumption that about 0.5% of the bomb's energy is converted into radiated seismic energy ES. For real underground nuclear tests, the actual seismic efficiency achieved varies significantly and depends on the site and design parameters of the test.
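The 0.5% figure can be sanity-checked with the numbers already given: one kiloton of TNT at 4.184 MJ/kg releases about 4.2 × 10^12 J, while a magnitude-4.0 signal corresponds, through the energy-magnitude relation Me = (2/3) log10 ES − 2.9, to roughly 2.2 × 10^10 J of radiated seismic energy. The arithmetic below is an illustration of that back-of-the-envelope check, not a model of any real test.

```python
# Energy of 1 kiloton of TNT: 1e6 kg at 4.184 MJ/kg, in joules.
kiloton_joules = 1.0e6 * 4.184e6

# Radiated seismic energy implied by a magnitude-4.0 signal,
# inverting Me = (2/3) * log10(Es) - 2.9  =>  Es = 10 ** (1.5 * (Me + 2.9)).
magnitude = 4.0
es = 10.0 ** (1.5 * (magnitude + 2.9))

efficiency = es / kiloton_joules
print(f"seismic efficiency ~ {efficiency:.1%}")  # roughly 0.5%
```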
The moment magnitude (MW) scale was introduced in 1979 by Caltech seismologists Thomas C. Hanks and Hiroo Kanamori to address the shortcomings of the ML scale while maintaining consistency. Thus, for medium-sized earthquakes, moment magnitude values should be similar to Richter values; a magnitude 5.0 earthquake will be about 5.0 on both scales. This scale was based on the physical properties of the earthquake, specifically the seismic moment (M0). Unlike other scales, the moment magnitude scale does not saturate at the upper end; there is no upper limit to the possible measurable magnitudes. However, this has the side effect that the scales diverge for smaller earthquakes.
The concept of seismic moment was introduced in 1966, but it took 13 years before the MW scale was designed. The reason for the delay was that the necessary spectra of seismic signals had to be derived by hand at first, which required personal attention to every event. Faster computers than those available in the 1960s were necessary, and seismologists had to develop methods to process earthquake signals automatically. In the mid-1970s, Dziewonski started the Harvard Global Centroid Moment Tensor Catalog. After this advance, it was possible to introduce MW and estimate it for large numbers of earthquakes.
Moment magnitude is now the most common measure for medium to large earthquake magnitudes, but breaks down for smaller quakes. For example, the United States Geological Survey does not use this scale for earthquakes with a magnitude of less than 3.5, which is the great majority of quakes.
Magnitude scales differ from earthquake intensity, which is the perceptible shaking and local damage experienced during a quake. The shaking intensity at a given spot depends on many factors, such as soil types, soil sublayers, depth, type of displacement, and distance from the epicenter (not counting the complications of building engineering and architectural factors). Magnitude scales instead estimate the size of the quake with a single number.
The following table compares magnitudes towards the upper end of the Richter Scale for major Californian earthquakes.
|Date|Seismic moment (dyne⋅cm)|Richter scale|Moment magnitude|