
Problem regarding the absorption lines of the sun

Astronomy Asked on December 12, 2020

Some of the wavelengths of light that are emitted from the sun will be absorbed by atoms in the outer layer of the sun and also the atmosphere of the sun, and we see this as absorption lines in the spectrum. Now, this absorbed radiation will indeed be re-emitted again, so one might think that these emission lines should "cancel out" the absorption lines. The usual explanation for why this doesn’t happen is that the re-emitted light is radiated in all directions, not just towards us, meaning that to us these wavelengths will be much fainter than the other wavelengths.

But the problem I have is that this happens all around the sun (since the atmosphere is completely surrounding it), and intuitively it seems then that all of this re-emitted light should combine such that far away it would appear that the sun is radiating these wavelengths just as it is radiating all the other wavelengths. And if that is true, then we shouldn’t see absorption lines in the spectrum. So what is it that I am missing?

2 Answers

Possibly you are labouring under the misapprehension that the number of photons is somehow a conserved quantity? That isn't true: there are more photons at any given wavelength deeper into the star, because there is a temperature gradient. Cooler material further out is less emissive because fewer of its atoms are in excited states.
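As a rough illustration of that last point, the fraction of atoms sitting in an excited (emitting) level falls steeply with temperature through the Boltzmann factor. A minimal sketch, assuming a generic two-level atom with a 2 eV excitation energy and equal statistical weights (the numbers are illustrative, not for any particular solar line):

```python
import math

# Boltzmann factor for the relative population of an excited atomic level:
#   n_upper / n_lower = (g_upper / g_lower) * exp(-dE / (k_B * T))
# Illustrative two-level case with dE = 2 eV and equal statistical weights.

k_B = 8.617e-5        # Boltzmann constant in eV/K
dE = 2.0              # assumed excitation energy in eV (not a specific solar line)

for T in (4500.0, 5800.0, 7000.0):   # cooler outer layers vs. deeper, hotter gas
    ratio = math.exp(-dE / (k_B * T))
    print(f"T = {T:6.0f} K  ->  n_upper/n_lower ~ {ratio:.2e}")
```

The excited-level population grows by roughly a factor of a few between the cooler and hotter temperatures, which is why the cooler outer material emits far less at the line wavelengths than the hotter gas below it.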

The temperature gradient is responsible for the formation of absorption lines. If the photosphere of the Sun were at a single temperature then we would see a perfect blackbody spectrum, for the reasons you outline.

The filling in of absorption by scattering would only take place if the radiation field that the atoms were in were isotropic. But it isn't isotropic because of the temperature gradient.

A much better way to think about the spectrum of a star is to imagine that you can see to a wavelength-dependent depth into the star. Where there is a strong atomic absorption feature, you cannot see very far into the star at that wavelength.

Since the star gets hotter the deeper you go into it, and the emissivity scales as $T^4$, the deeper we can see into the star at a given wavelength, the brighter it will appear at that wavelength (and vice versa).
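A toy numerical version of this picture, assuming an Eddington-Barbier-like rule (the emergent intensity roughly equals the Planck function at the depth where the line-of-sight optical depth reaches about 2/3) and an invented linear temperature rise with continuum optical depth; the opacity ratio used for the "line core" is made up:

```python
import numpy as np

# Toy "you see to a wavelength-dependent depth" picture:
# the emergent intensity at each wavelength is roughly the Planck function
# at the depth where the optical depth reaches ~2/3.  Large line opacity
# means tau = 2/3 is reached higher up, where the gas is cooler and dimmer.

h, c, k_B = 6.626e-34, 2.998e8, 1.381e-23   # SI constants

def planck(wav, T):
    """Planck spectral radiance B_lambda(T) in W m^-3 sr^-1."""
    return (2 * h * c**2 / wav**5) / np.expm1(h * c / (wav * k_B * T))

def T_of_tau(tau_c):
    """Assumed photospheric temperature vs. continuum optical depth (illustrative)."""
    return 4500.0 + 1400.0 * tau_c

wav = 500e-9                                  # a visible wavelength
for name, opacity_ratio in (("continuum", 1.0), ("line core", 10.0)):
    tau_seen = (2.0 / 3.0) / opacity_ratio    # continuum depth where tau_lambda = 2/3
    T_seen = T_of_tau(tau_seen)
    print(f"{name:9s}: formed at tau_c = {tau_seen:.2f}, T = {T_seen:.0f} K, "
          f"B_lambda = {planck(wav, T_seen):.3e}")
```

The "line core" row is formed higher up, at a lower temperature, and therefore comes out dimmer than the neighbouring continuum, which is exactly what an absorption line is in this picture.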

EDIT:

More formally: the radiative transfer equation, if you want to consider the absorption and re-emission as some sort of scattering process, would be $$\frac{dI_{\nu}}{ds} = -\sigma_\nu I_{\nu} + \sigma_\nu J_\nu\, ,$$ where $I$ is the specific intensity in the solar photosphere (in this case, directed towards the Earth), $J$ is the mean specific intensity at a point in the solar photosphere averaged over all directions (i.e. $J = \int I\, d\Omega/4\pi$, where $\Omega$ is solid angle), $\sigma$ is the scattering coefficient (assumed to be isotropic) and $ds$ is a piece of path length towards the observer. The $\nu$ subscript just indicates that everything is wavelength/frequency dependent.

To avoid creating an absorption or emission line, $dI_\nu/ds$ must equal zero (i.e. nothing is added to or subtracted from the beam of light).

This will only happen if $I_\nu = J_\nu$, which would require that the specific intensity averaged over all directions is equal to the specific intensity emerging from the Sun and heading towards the observer. This will only be true if the radiation field is isotropic and equal to $I_\nu$ in all directions.

Whilst this would be true for a blackbody radiation field at a single temperature, it isn't true in the solar photosphere. Because of the temperature gradient in the photosphere (it is hotter further into the interior), the specific intensity heading towards us (generally outwards) is always larger than the specific intensity heading away (generally inwards), regardless of which portion of the visible solar disc is considered. That means $I_\nu$ is always greater than $J_\nu$, hence $dI_\nu/ds < 0$ and we have net absorption.
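A minimal numerical sketch of this argument, integrating the scattering equation above along one outgoing ray with an assumed constant $J_\nu < I_\nu$ (all numbers are illustrative): where $\sigma_\nu$ is large (the line core) the beam relaxes towards the smaller $J_\nu$ and an absorption line appears; where it is small (the continuum) the beam passes almost unchanged.

```python
# Integrate dI/ds = -sigma * (I - J) along an outgoing ray (explicit Euler).
# If the radiation field were isotropic (J = I) nothing would change; with a
# temperature gradient J < I, so the beam is pulled down wherever sigma is large.

def emergent_intensity(I0, J, sigma, path_length=1.0, n_steps=10000):
    ds = path_length / n_steps
    I = I0
    for _ in range(n_steps):
        I -= sigma * (I - J) * ds
    return I

I0 = 1.0   # outward specific intensity entering the scattering layer
J = 0.7    # assumed angle-averaged intensity (smaller, since inward rays come from cooler gas above)

for label, sigma in (("continuum", 0.1), ("line core", 10.0)):
    print(f"{label:9s}: sigma = {sigma:5.1f} -> emergent I = {emergent_intensity(I0, J, sigma):.3f}")
```

The continuum ray emerges almost unchanged, while the line-core ray has relaxed nearly all the way down to $J_\nu$: a net deficit at the line frequency, even though every scattered photon was re-emitted somewhere.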

Correct answer by Rob Jeffries on December 12, 2020

The atmospheric layer that produces the absorption lines acts somewhat like a mirror at these frequencies and scatters the light back into the Sun (although this is diffuse reflection, not specular reflection like an actual mirror). In principle, light is also scattered outwards (with a probability of 1/2 for each scattering event), but since the layer is very dense at the line frequencies, it takes many scattering events to get through. After two scattering events only a fraction 1/2 * 1/2 = 1/4 is still heading outwards, after three 1/2 * 1/2 * 1/2 = 1/8, and so on (this is just to demonstrate the principle; in reality it is a bit more complicated due to multiple scattering back and forth in the layer). So many scattering events are required that very little gets through. Almost all of it is scattered back into the lower layers of the atmosphere, where it is eventually converted to photons of different frequencies.
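A small Monte Carlo sketch of that idea (a toy model, not tied to real solar numbers): photons enter a purely scattering, plane-parallel layer and random-walk until they escape either through the far side (transmitted) or back towards the source (reflected). The transmitted fraction collapses as the line optical depth grows.

```python
import numpy as np

# Photons random-walking through a purely scattering plane-parallel layer of
# total optical depth tau0.  Each free path is exponentially distributed in
# optical depth; each scattering picks a new, isotropic direction cosine mu.

rng = np.random.default_rng(0)

def transmitted_fraction(tau0, n_photons=5000):
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0                 # enter the layer moving straight in
        while True:
            tau += mu * rng.exponential()  # advance by one free path
            if tau >= tau0:
                transmitted += 1           # escaped through the far side
                break
            if tau <= 0.0:
                break                      # scattered back out towards the source
            mu = rng.uniform(-1.0, 1.0)    # isotropic scattering
    return transmitted / n_photons

for tau0 in (0.1, 1.0, 10.0, 100.0):
    print(f"tau0 = {tau0:6.1f} -> transmitted fraction ~ {transmitted_fraction(tau0):.3f}")
```

At large optical depth only a small fraction of the incident light gets through; the rest random-walks back out on the illuminated side, which is the "diffuse mirror" behaviour described above.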

It is a bit similar to why so little sunlight reaches the ground under a dense cloud layer compared to a clear sky. If you have ever been in an airplane five miles up, above the clouds, you realize that the light missing under the clouds is in fact reflected back from the top into space, making the clouds appear blindingly white. It is just the reverse situation in the solar atmosphere (if you could take a spectrum from below the layer responsible for the Fraunhofer lines, looking upwards, you would see those lines all in emission).

Edit: The following diagram (taken from https://courses.lumenlearning.com/astronomy/chapter/formation-of-spectral-lines/ ) illustrates what happens here:

[Diagram from the linked page: formation of emission and absorption line spectra from a continuum source and a gas cloud, viewed from different directions.]

The only specific difference here is that the geometry of the scattering layer is different, being more like an infinitely extended plane-parallel layer than a kind of cylinder. So in this case you can see the emission-line (bright-line) spectrum only from underneath the solar layer that produces the absorption lines, looking upwards (this is the emission the OP was missing in the absorption spectrum). In all other directions you always see, for obvious geometrical reasons, the continuum source behind it (which you have to assume is an extended plane layer as well) and thus the absorption spectrum.

Edit 2: Note that the accepted answer above is incorrect. It claims to describe the scattering of radiation, but the quoted equation effectively neglects the scattering source term when it later associates the source term with the thermal black-body term in order to bring in the temperature argument. The correct equation is (see http://irina.eas.gatech.edu/EAS8803_Fall2017/petty_11.pdf )

$$\frac{dI_\nu}{ds} = -\beta_e I_\nu + \beta_e\,(1-\tilde\omega)\,B_\nu(T) + \frac{\beta_e\,\tilde\omega}{4\pi}\int_{4\pi} p(\hat\Omega',\hat\Omega)\, I_\nu(\hat\Omega')\, d\Omega'\, ,$$

where $p$ is the scattering phase function. Note that $\beta_e$ is here the combined absorption/scattering (extinction) coefficient going into the loss term (with the minus sign), and $\tilde\omega = \beta_s/\beta_e = \beta_s/(\beta_a+\beta_s)$ is the relative contribution of scattering to the extinction coefficient. This means that for pure scattering we have $\tilde\omega = 1$ and the thermal black-body radiation term vanishes. The temperature argument given in the accepted answer above is therefore not applicable in this case. It is clear from this that the thermal emission is related only to the continuum absorption, which however a) is negligible in the visible region above the photosphere and b) cannot produce absorption lines anyway, whether there is a temperature gradient or not.
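Spelling out the two limiting cases of the source function implied by this equation (a summary only, assuming isotropic scattering so that the phase-function integral reduces to the mean intensity $J_\nu$):

$$S_\nu = (1-\tilde\omega)\,B_\nu(T) + \tilde\omega\,J_\nu\, ,$$

so that $\tilde\omega = 0$ gives $S_\nu = B_\nu(T)$ (pure thermal absorption and emission), while $\tilde\omega = 1$ gives $S_\nu = J_\nu$ (pure scattering, with no dependence on the local temperature).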

So absorption lines can only be produced by resonance scattering, as already qualitatively explained by the colour illustration above. In this respect I have made some explicit numerical calculations with my own radiative transfer program (reproduced at https://www.plasmaphysics.org.uk/programs/plantrans.htm ), modified somewhat to show the actual line profile rather than frequency-integrated intensities.

This is what you get for the transmitted line from a mono-directional continuum source falling from one side onto an isothermal, purely scattering, plane-parallel layer with a line-centre optical depth $\tau=10$ (assuming a Doppler (Gaussian) scattering emissivity), looking vertically into the layer from the other side and including the continuum source:

[Plot: transmitted line profile, $\tau=10$]

and this is what is being vertically reflected back to the continuum source

[Plot: reflected line profile, $\tau=10$]


Here is the same for a line-centre optical depth $\tau=100$ instead:

[Plot: transmitted line profile, $\tau=100$]

[Plot: reflected line profile, $\tau=100$]

If one looks at the actual numerical scale of the graphs, it is obvious that the amount reflected back does not fully explain the amount missing from the continuum on the other side. This is simply because these plots hold for a fixed (vertical) viewing direction only and are furthermore normalized to a solid angle of 1 steradian (which is only $1/(2\pi)$ of the full half-space the radiation is scattered back into). If one added up the back-scattered radiation over the complete half-space, taking into account that the line shape and intensity vary with the viewing direction, it would exactly account for the radiation that is missing in the transmitted spectrum. The question the OP had can only be answered in this way.
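To make that bookkeeping explicit, here is a rough cross-check of the conservation argument. This is not the plantrans program linked above; it is a simplified Monte Carlo that assumes coherent, isotropic scattering with a Doppler-shaped line opacity $\tau(x) = \tau_0 e^{-x^2}$ and ignores frequency redistribution. At every frequency offset the transmitted and reflected fractions, integrated over all directions, add up to 1, so whatever is missing from the transmitted line shows up in the back-scattered light.

```python
import numpy as np

# Energy-conservation cross-check for a purely scattering plane-parallel layer:
# every photon that enters must leave either through the far side (transmitted)
# or back towards the source (reflected), so at each frequency the two
# angle-integrated fractions must sum to 1.  Line opacity is Doppler-shaped,
# tau(x) = tau0 * exp(-x^2), with x the frequency offset in Doppler widths.

rng = np.random.default_rng(1)

def split_fractions(tau_line, n_photons=4000):
    """Monte Carlo transmitted/reflected fractions at one frequency point."""
    transmitted = 0
    for _ in range(n_photons):
        tau, mu = 0.0, 1.0
        while True:
            tau += mu * rng.exponential()   # one free path along the current direction
            if tau >= tau_line:
                transmitted += 1
                break
            if tau <= 0.0:
                break
            mu = rng.uniform(-1.0, 1.0)     # isotropic, coherent scattering
    t = transmitted / n_photons
    return t, 1.0 - t                       # pure scattering: nothing is destroyed

tau0 = 10.0                                 # line-centre optical depth
for x in (0.0, 0.5, 1.0, 1.5, 2.0, 3.0):    # frequency offsets in Doppler widths
    t, r = split_fractions(tau0 * np.exp(-x * x))
    print(f"x = {x:3.1f}: transmitted = {t:.3f}, reflected = {r:.3f}, sum = {t + r:.3f}")
```

The transmitted fraction traces out an absorption-line profile in frequency, the reflected fraction the complementary emission-like profile, and their sum is 1 everywhere: the back-scattered light accounts exactly for what is missing from the transmitted spectrum once all directions are included.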

Answered by Thomas on December 12, 2020
