Saturday, July 21, 2018

Atmospheric Dispersion Correction in Image Post Processing

The Earth's atmosphere refracts light of different wavelengths to different degrees, with the strength of the effect depending on an object's altitude above the horizon (and hence on its declination). In other words, the Earth's atmosphere acts as a prism.

Consequently, objects at the zenith do not show this effect, since light from that position arrives perpendicular to the Earth's surface (and atmosphere) at that point.


This image shows the effect of atmospheric dispersion: the lower edge displays a red tinge and the upper edge a bluish tinge.

The next 3 images show the individual frames after separating the channels using Michael Unsold's venerable ImagesPlus ( http://www.mlunsold.com/ ).

Red channel

Green channel

Blue channel

The image below is a GIF showing the vertical displacement between the individual RGB frames.

3 frame GIF

Finally, the result of aligning then recombining the individual frames, again, with ImagesPlus.

Result of colors split, aligned and recombined 
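If you'd rather script the same split/align/recombine workflow, here is a minimal Python sketch (not the ImagesPlus procedure) using scikit-image and SciPy. The file names are placeholders, and it assumes an 8-bit RGB image whose dispersion is small enough for a simple translation to correct.

import numpy as np
from skimage import io
from skimage.registration import phase_cross_correlation
from scipy.ndimage import shift

# Load the image and split it into its R, G and B channels.
img = io.imread("moon.png").astype(float)      # placeholder file name
r, g, b = img[..., 0], img[..., 1], img[..., 2]

# Measure each channel's offset against green with sub-pixel precision.
# Dispersion displaces the channels mostly along the vertical axis.
shift_r, _, _ = phase_cross_correlation(g, r, upsample_factor=10)
shift_b, _, _ = phase_cross_correlation(g, b, upsample_factor=10)

# Shift red and blue into registration with green, then recombine.
aligned = np.dstack([shift(r, shift_r), g, shift(b, shift_b)])
io.imsave("moon_corrected.png", np.clip(aligned, 0, 255).astype(np.uint8))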

With compliments to Mike.


Thursday, July 5, 2018

M57 - The Ring Nebula 04-Jul-2018


Seeing was reasonably good last night at ~2.47 arcseconds (as measured by image link).

Ambient temp 85°F. I believe it will take a cooled camera to eliminate the red streaking.
The images were taken when M57 was just about at the zenith - clearer, darker, and with no atmospheric color dispersion, such as is encountered when imaging planets lower in the sky.

Tracking was excellent, between 0.44 and 0.50 arc-seconds.



Bill Shaheen
Superstition Mountain Astronomical League®


Monday, July 2, 2018

Elements of Astrophotography - Calculating Image Scale/Field of View

The general formula for determining the Field of View (FOV) for a light collecting area, whether at the pixel level or for the overall imaging area, is:
206,265 x pixel (or sensor) size / focal length of the telescope (or lens), with size and focal length in the same units.
(See Note 1 at bottom.)


For the purposes of our calculations, it is simpler to use 206.265 with the pixel size expressed in µm and the focal length in mm. For example, my Sky-Watcher Esprit 120 refractor has a focal length of 840mm and my Nikon D7500 has a pixel size of 4.2 µm. (See below for determining your camera's pixel size.)

Hence, the per pixel FOV (aka image scale) is (206.265 x 4.2) / 840 = 1.03 arc-seconds, sometimes expressed as as/p (arc-seconds per pixel).
The above assumes square pixels, of course, which is usually the case these days.
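As a quick sanity check, the arithmetic is one line of Python (the function name is my own):

# Image scale in arc-seconds per pixel:
# pixel size in micrometers, focal length in millimeters.
def image_scale(pixel_um, focal_mm):
    return 206.265 * pixel_um / focal_mm

print(image_scale(4.2, 840))   # -> ~1.03 as/p (Esprit 120 + D7500)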

At the chip level (or for non-square pixels), the horizontal (width) and vertical (height) fields of view need to be determined separately.
I.e., FOV in arc-minutes = ((206,265 x sensor dimension in mm (width or height)) / FL) / 60.
E.g., the Nikon D7500 sensor array is 23.5mm (w) x 15.7mm (h).
(You may frequently see these dimensions given in the reverse order - h x w.)


So, Horizontal FOV = ((206,265 x 23.5) / 840) / 60 = 96.175 arc-minutes.
(If you encounter an intermediate overflow error, use 206.265 and multiply the result by 1000, or get a new calculator.)

As an alternative, simply multiply the per-pixel image scale by the pixel count on each axis (then divide by 60 to get arc-minutes).
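Both approaches are easy to verify in Python; a small sketch, with illustrative function names of my own choosing:

# FOV along one sensor axis, in arc-minutes.
def fov_arcmin(sensor_mm, focal_mm):
    return (206265 * sensor_mm / focal_mm) / 60

print(fov_arcmin(23.5, 840))   # horizontal -> ~96.18 arc-minutes
print(fov_arcmin(15.7, 840))   # vertical   -> ~64.25 arc-minutes

# Alternative: per-pixel image scale x pixel count, converted to arc-minutes.
print((206.265 * 4.2 / 840) * 5600 / 60)   # -> ~96.26 arc-minutes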

Solving for either FOV or image scale 

Now, the question may be posed, what camera or telescope should I choose to achieve an image scale of, say, 1 arc-second? And, it's a great question, since you may want to design your imaging system to provide a particular per pixel image scale or an overall field of view.

In that case, you simply want to substitute your desired FOV (or image scale) in the above equations and solve for either pixel size (or sensor size) or image scale.

You need to choose one or the other. For example: Since as/p = (206.265 x pixel size in µm) / FL, then FL = (206.265 x pixel size in µm) / image scale.

Or, more typically, you already own a particular telescope and want to identify a camera whose pixel size matches, or comes close, in which case (image scale x FL) / 206.265 = pixel size in µm.

In my case, I'll target an image scale of 0.8 as/p (as opposed to the above 1.03 as/p). So, (0.8 as/p x 840) / 206.265 = 3.26 µm (which is a commonly available pixel size).
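The two rearrangements, sketched in Python (function names are mine):

# Focal length (mm) needed for a target image scale with a given pixel size.
def focal_for_scale(pixel_um, scale_as_per_px):
    return 206.265 * pixel_um / scale_as_per_px

# Pixel size (um) needed for a target image scale at a given focal length.
def pixel_for_scale(scale_as_per_px, focal_mm):
    return scale_as_per_px * focal_mm / 206.265

print(pixel_for_scale(0.8, 840))   # -> ~3.26 um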

Now let's look at arriving at an overall FOV (horizontally).
Horizontal FOV is ((206,265 x sensor width in mm) / FL) / 60 = Width FOV in arc-minutes. Note that this calculation is independent of the individual pixel size. The entire sensor is the image collecting area. 
In other words, image scale in arc-seconds per pixel determines resolution, whereas chip size determines overall field of view.

Again, we assume the FOV and solve, this time for sensor width.
So, to repeat: ((206,265 x sensor width in mm) / FL) / 60 = Width FOV.
Rearranged: sensor width in mm = (FOV (in arc-minutes) x FL x 60) / 206,265.

Example: (96 arc-min x 840mm FL x 60) / 206,265 = 23.46mm on the chip's horizontal side.
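Or, as a one-liner in Python (again, a name of my own choosing):

# Sensor dimension (mm) needed for a target FOV (arc-minutes) at a given FL.
def sensor_for_fov(fov_arcmin, focal_mm):
    return fov_arcmin * focal_mm * 60 / 206265

print(sensor_for_fov(96, 840))   # -> ~23.46mm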

Calculating Pixel Size from camera specs

In some cases you may not know the camera's pixel size, although that data can be had from the manufacturer's specifications or from third-party resources. But, just to illustrate, simply divide the chip's width in mm by the number of pixels along that axis. The D7500 has an array of 23.5mm x 15.7mm and the pixel count is 5600 x 3728. (In the days before square pixels, you had to make two calculations. Now, one is sufficient.)

So, 23.5mm / 5600 = 0.004196 mm per pixel, or 4.196 µm per pixel.
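In Python (hypothetical function name):

# Pixel size in micrometers from sensor width (mm) and pixel count.
def pixel_size_um(sensor_mm, pixel_count):
    return sensor_mm / pixel_count * 1000

print(pixel_size_um(23.5, 5600))   # -> ~4.196 um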

Or, use a calculator

Of course, there are a number of calculators on the web that can do this for you. One such calculator that you can download is Ron Wodaski's CCD Calculator.

Note 1
206265 is the number of arc-seconds in a radian. See: https://en.wikipedia.org/wiki/Minute_and_second_of_arc

