2012-12-25

Astronomy HDR still in its infancy

It was a nice evening, a clear sky, and a new 10 cm refractor was ready for me at Vyškov's observatory. I have always been fascinated by earthshine (Earth's reflected light), and the evening was ideal for my project: to acquire a high dynamic range (HDR) image of the Moon including earthshine. Unfortunately, earthshine is very difficult to record and to display due to the enormous intensity range between the light and dark parts (a ratio of about one thousand).

Softening contrast

I currently have experience with global mapping methods, which non-linearly compress a high range to a low one. When we define contrast as the ratio of a detail's intensity to a local mean, the contrast is compressed together with the intensity. As a result of the compression, pictures have characteristically softened details.

Since our perception apparently prefers contrast over global luminance, the main problem of HDR photography is to compress the global luminance while preserving the local contrast.
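
To illustrate the softening, here is a minimal sketch (my own illustration, not code from any package mentioned below) of a global gamma mapping; note that the contrast of a detail against its local mean is compressed by exactly the same exponent as the global range:

#include <math.h>

/* global gamma tone mapping of a linear intensity x in 0..1,
   with a compressing exponent g < 1 (e.g. g = 0.3) */
double tonemap_global(double x, double g)
{
  return pow(x, g);
}

/* the contrast of a detail d around a local mean m is compressed
   to (d/m)^g by the very same mapping -- details are softened */
double mapped_contrast(double d, double m, double g)
{
  return tonemap_global(d, g) / tonemap_global(m, g);
}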

Luminance HDR

During last summer, I spent some time with algorithms for rendering HDR images. The project which I have studied most deeply is Luminance HDR. The advanced algorithms included in the package work well (or at least acceptably) on both ordinary scenes (landscapes, interiors, erotic photographs, etc.) and astronomical ones.

Earthshine

The procedure for taking the photos was:
  • I took a set of exposures with different exposure times to cover the full range of intensities. I prefer a divide-by-two sequence: 16, 8, 4, … 1/32, 1/64 … s. For long-exposure night scenes and high ISOs, dark frames are necessary.
The processing has two parts:

  • All images are scaled to the same exposure time. The images are spatially (angularly) aligned and composed by a weighted summation. The weights are chosen as a function with a maximum at half of the saturation range and zero outside the range (pfscalibration); see the sketch after this list.
  • The tone mapping maps the HDR data to a limited range. The package offers many methods, but for my requirements only "A Perceptual Framework for Contrast Processing of High Dynamic Range Images" by Rafał Mantiuk, Karol Myszkowski and Hans-Peter Seidel is acceptable (source).
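
A minimal sketch of the weighted composition (the triangular weight and all names are my illustration, not pfscalibration's actual code): each frame is scaled to a reference exposure time and summed per pixel with a weight peaking at half of the saturation range.

#include <math.h>
#include <stddef.h>

#define SATURATION 65535.0   /* assumed full range of the detector */

/* triangular weight: maximum at half saturation, zero at 0 and at saturation */
static double weight(double x)
{
  if (x <= 0.0 || x >= SATURATION) return 0.0;
  return 1.0 - fabs(2.0*x/SATURATION - 1.0);
}

/* combine n aligned frames of npix pixels into hdr[]; texp[k] is the
   exposure time of frame k and tref a chosen reference time */
void compose(size_t n, size_t npix, double **frame, const double *texp,
             double tref, double *hdr)
{
  for (size_t i = 0; i < npix; i++) {
    double sum = 0.0, wsum = 0.0;
    for (size_t k = 0; k < n; k++) {
      double w = weight(frame[k][i]);
      sum  += w * frame[k][i] * (tref / texp[k]);  /* unify exposure times */
      wsum += w;
    }
    hdr[i] = wsum > 0.0 ? sum / wsum : 0.0;
  }
}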



Images

Images of the Moon and earthshine were taken on Oct 20 2012, about 17h UT, with a Canon EOS 60D (ISO 400): 15 exposures from 32 to 1/64 s, stepped by a factor of two. Near the horizon, in the primary focus of a 10 cm refractor; medium seeing, light mist. Approximate ratio of dark to light parts 1:1000.


Near MonteBoo observatory, a superior night in Brno. 32, 16, 8 … 1/512, 1/1024 s, fish-eye lens, Canon EOS 60D (ISO 640), ratio 1:2000.

My office. Canon PowerShot S90 (ISO 80): 2, 1/2, 1/8, 1/32, 1/128, 1/512 s. Ratio 1:1000.

Český ráj, sandstone cliffs. Canon PowerShot S90 (ISO 80): 1/25, 1/60, 1/200, 1/1600 s. Ratio 1:60.


Conclusions

Although the images in daylight conditions are very nice,
there are some open problems:
  • Some aureole, reduced contrast and lower resolution are visible. I suppose these are products of too-rough alignment (better sub-pixel alignment is needed). Note that all images are published with two- or three-times binning.
  • I worry about correctly displaying the really full range; single exposures show even more detail.
  • Many defects (black areas in clouds, purple tint in dark exposures) also visually corrupt the images.
  • I have no ideas for a better intensity profile (just gamma is used).
  • The best rendering algorithm is Mantiuk's. Fattal's also gives acceptable results (both remove gradients). All the others, including Gaussian blurring, work essentially as global methods and give poor contrast.
  • The night exposures are even worse, probably amplifying the defects noted above.
  • I consider the best rendering to be images that are visually indistinguishable from the original scene (to my eye). Various strongly colored pictures look cool, but they are outside my interest.

Why is an astronomical CCD image missing? Because there is no FITS → EXR (HDR) converter. :)

Applying HDR algorithms to astronomical pictures is my future goal.

Many thanks to Fantom, who pointed me to qtpfsgui (as the starting point).

2012-10-06

New Photo Preprocessing

This year I dedicated St. Václav's day and the long weekend to developing a new approach to photometric preprocessing in Munipack. My body was recovering from flu and a cold, so I spent a long time at the terminal.

By photometric preprocessing I mean the photometric corrections:
  • correction for dark current (dark frame),
  • removal of the preamplifier offset (bias),
  • correction of the variable light response of the (CCD) detector together with the whole optical system (flat-field).
Munipack's older approach to the corrections has roots in the '90s, when we acquired dark frames for every exposure time. The bias is included in both the scientific and the dark exposures, so the bias correction is naturally included in both. The same approach holds for the scientific and flat frames.

The new approach follows the method widely recommended today. A photometrically corrected image Ic_ij is computed for every input image I_ij as
  Ic_ij = (I_ij − t·D_ij − B_ij) / f_ij,
where i,j indexes a pixel, D_ij is the dark frame, t is the ratio of the exposure times of I and D, B_ij is the bias and f_ij is the flat-field frame normalized (to preserve absolute photometric fluxes) as
  f_ij = F_ij / ⟨F⟩,
where ⟨F⟩ is the average level determined by a robust mean.
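
A minimal sketch of the correction applied per pixel (my illustration, not Munipack's actual routine; flatmean stands for the robust mean ⟨F⟩):

#include <stddef.h>

/* apply Ic = (I - t*D - B)/f to all npix pixels; tsci and tdark are
   the exposure times of the scientific and the dark frame */
void calibrate(size_t npix, const double *img, const double *dark,
               const double *bias, const double *flat, double flatmean,
               double tsci, double tdark, double *out)
{
  double t = tsci / tdark;            /* exposure-time ratio */
  for (size_t i = 0; i < npix; i++) {
    double f = flat[i] / flatmean;    /* normalized flat-field */
    out[i] = (img[i] - t*dark[i] - bias[i]) / f;
  }
}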

I think the change does not cause any controversy. Moreover:
  • The older approach is still available when the bias is treated as zero.
  • From that point of view, the new approach is primarily a new façade for users, making usage easier.
  • Users are not confused: the method is exactly the same as textbooks describe.
  • The important parts of the algorithms are the same; changes in the functional parts are minor.
  • It saves observation time: a single long-exposure dark can be prepared and applied to all shorter exposures (also to longer ones, but the noise can devalue the frames).
  • Bias exposures are very short, so acquiring a lot of biases is not too time-consuming.
  • The processing tools make better use of previous data products, and no temporarily reduced frames are created.
This is practically a completely new implementation of the preprocessing routines. Of course, all my previous experience is included. The work is currently uploaded to the Mercurial repository and will be available in the first public release.

2012-06-28

New Generation Release

I finished the modernization of the Munipack engine during this spring. The incoming release fully replaces the classic 2009 edition. All the functionality has been recovered (and much more has been added).

All the current functionality originates in the all-inclusive representation of FITS (images carry all processed data as additional HDUs). On this base I prepared new versions of:

  • astrometry,
  • photometry,
  • listing of data (star catalogues, light curves),
  • composition of images.

The classic edition, tagged 0.4.2, should now be considered obsolete; its functionality is fully superseded.

Since this release, Munipack is prepared to be suitable for regular data processing. Please be careful; really extensive testing is still necessary!

Many of the implemented features still have an unfinished GUI.


Astrometry

The astrometry part has been improved at many points:

  • advanced GUI dialog,
  • batch GUI dialog,
  • the matching has been theoretically studied and the algorithm practically developed from scratch,
  • the robust fitting part has been improved by further simplification and fine-tuning of the algorithm,
  • the GUI contains a visualization of the matching.

Robust Algorithm

A very important improvement is included in the robust algorithm, the core of all robust estimation. Until this release, I estimated the median by a fast algorithm by Dijkstra (see Wirth (1976)) which selects the n/2-th element of an n-element sequence. It is excellent for many data points (over twenty), but its ignorance of the correct definition of the median (which is the mean of the two middle elements for even n) has a poor impact on the estimate in the case of a few points.

The algorithm for estimating the median (and scatter) has been modified accordingly, as sketched below. For a few points, the median is computed by the definition from a sorted sequence (with ~n·log(n) sorting complexity). For many points, the algorithm is unchanged and uses the fast selection algorithm (with ~n complexity).
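
A minimal sketch of the hybrid estimate (my illustration, not Munipack's actual code; the threshold of 20 follows the "over twenty" remark above):

static void swap(double *a, double *b) { double t = *a; *a = *b; *b = t; }

/* Wirth-style selection of the k-th smallest of a[0..n-1]; ~n complexity */
static double select_kth(double a[], int n, int k)
{
  int lo = 0, hi = n - 1;
  while (lo < hi) {
    double x = a[k];
    int i = lo, j = hi;
    do {
      while (a[i] < x) i++;
      while (x < a[j]) j--;
      if (i <= j) { swap(&a[i], &a[j]); i++; j--; }
    } while (i <= j);
    if (j < k) lo = i;
    if (k < i) hi = j;
  }
  return a[k];
}

static void isort(double a[], int n)   /* insertion sort, fine for small n */
{
  for (int i = 1; i < n; i++) {
    double x = a[i]; int j = i;
    while (j > 0 && a[j-1] > x) { a[j] = a[j-1]; j--; }
    a[j] = x;
  }
}

/* median by the textbook definition for few points (mean of the two
   middle elements for even n), fast selection otherwise;
   the array is reordered in place */
double median(double a[], int n)
{
  if (n < 20) {
    isort(a, n);
    return n % 2 ? a[n/2] : 0.5*(a[n/2 - 1] + a[n/2]);
  }
  return select_kth(a, n, n/2);
}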

The results seem more precise and stable.


Localization

There is an unsolved problem with localization. Many of my colleagues have installed a localized version of a GNU/Linux distribution which uses a decimal comma (not a decimal point) in real numbers, so the underlying software (wxWidgets) must use locale rules to convert numbers to the right format. So I prepared (untested) code to be locale-friendly.

Unfortunately, there is no complete solution to the problem. There are many cases where the decimal points must be left as they are: VO-related data and FITS headers must contain only ASCII characters.
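
A minimal sketch of one possible approach (my illustration, not the actual Munipack/wxWidgets code; strdup is POSIX): temporarily switch LC_NUMERIC to the C locale when formatting numbers that must keep the decimal point.

#include <locale.h>
#include <stdio.h>
#include <stdlib.h>
#include <string.h>

/* format a double for a FITS header: always with a decimal point,
   regardless of the user's locale (not thread-safe) */
void fits_format_double(char *buf, size_t len, double value)
{
  char *saved = strdup(setlocale(LC_NUMERIC, NULL));  /* remember user locale */
  setlocale(LC_NUMERIC, "C");                         /* force decimal point */
  snprintf(buf, len, "%.15G", value);
  setlocale(LC_NUMERIC, saved);                       /* restore */
  free(saved);
}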

Other changes:

  • HTML documentation updated to HTML5,
  • docs have been restructured and extended with descriptions of completed features and tutorials,
  • the helper utilities konve and picko have been removed from the main tree and are available as independent applications (but are included in the binary version),
  • backup strategy replaces FITS conventions and implements the standard GNU backup behavior,
  • better processing of output filenames in batch,
  • improvements and bug fixes in the displaying of images,
  • created a photometry GUI dialog,
  • rewrite of munilist (added listing of all objects, …) and kombine (works with spherical coordinates, which enables geometrical transformations and mosaicking),
  • updated for new versions of g++/gfortran and wxWidgets,
  • the patchelf utility needed for the binary distribution has been fully replaced by the -rpath option,
  • clarified verbose/pipelog prints.

Summer Plans:

  • update GUI + add GUI for remaining features,
  • a library for image/color processing (common to fitspng) (?),
  • use slalib.


Acknowledgment

Many thanks to my students, who were the first testers of the new versions. Many errors were identified and corrected during the testing period.

2012-04-19

Benchmarking of mathematical co-processors

While working on speed optimizations of fitspng, I found that the power function pow() extremely slows down the whole run. (pow() is used in the conversion from CIE XYZ to CIE Luv.)

My inspection of the glibc sources revealed that the numerical functions provided by GNU glibc just wrap their implementation in the mathematical co-processor. No rational approximation of glibc's own is implemented. The hardware support is directly assured by the C99 IEEE standard.

Following this track, I created small test codes to measure the execution time of various functions of the GNU system math library. In practice, this tests the implementation of the functions in the mathematical co-processor.


Fortran


program funs
  ! timing of elementary functions: sum the tested expression
  ! over 10^8 points to get a measurable run time
  integer, parameter :: double = selected_real_kind(15)
  integer, parameter :: nsteps = 10000
  integer :: i,j
  real(double) :: x,y, s, xsteps = nsteps

  s = 0.0
  do j = 1,nsteps
     do i = 1,nsteps
        x = i/xsteps
        y = x*(1.0_double/3.0_double)   ! the tested expression
        s = s + y
     end do
  end do

  write(*,*) s   ! printing s prevents over-optimization
end program funs




C



#include <math.h>

int main()
{
  const int nsteps = 10000;
  int i,j;
  double xsteps = (double) nsteps;
  double x,y,s;

  s = 0.0;
  for(j = 1; j <= nsteps; j++)
    for(i = 1; i <= nsteps; i++) {
      x = (double) i / xsteps;
/*!*/ y = pow(x,1.0/3.0);   /* the tested function */
      s += y;
    }

  return s > 0.0 ? 1 : 0;   /* using s prevents over-optimization */
}

(Note that the no-effect summations and return values are there to prevent the compiler from optimizing the loops away.)

The function on the line marked /*!*/ was changed for each test.

The computations were done on an Intel(R) Core Quad and verified on Intel(R) Xeon machines (about 30% faster). The gcc and gfortran compilers give the same results (gfortran may produce slightly faster code).

The results are summarized in the following table.


                  Core Quad   Core Quad   ratio    Fortran    i686
                  -O [ns]     -O4 [ns]    (-O4)    -O4 [ns]   [ns]
pow(x,1.0/3.0)     128.56      123.01      7.2      123.06   287.68
exp(1/3*log(x))    120.21      116.14      6.8      117.14   269.20
cbrt(x)             90.15       85.79      5.0        -         -
sinh(x)             95.15       91.80      5.4       91.99   235.34
log10(x)            95.74       91.14      5.4       91.73    95.35
atan2(x,2.0)        83.94       79.22      4.6       79.09   106.00
log(x)              79.30       74.98      4.4       74.88    95.22
cosh(x)             78.13       74.34      4.4       74.61   198.99
tan(x)              78.89       73.29      4.3       73.50   118.17
atan(x)             64.27       58.02      3.4       57.99   101.92
exp(x)              58.97       54.58      3.2       54.55   143.93
asin(x)             58.52       53.87      3.2       53.44   165.31
acos(x)             58.70       55.30      3.2       54.94   165.32
cos(x)              51.63       45.57      2.7       45.64    93.98
sin(x)              50.03       44.29      2.6       44.77    92.30
sqrt(x)             47.68       37.23      2.2       39.35    34.11
fabs(x)             20.93       16.14      1.0       16.11    17.25
(1.0/3.0)*x         23.02       17.16      1.0       17.22    17.87

(The ratio column is the -O4 time relative to the arithmetic baseline (1.0/3.0)*x.)
(i686 = Intel(R) Pentium(R) 4 CPU 2.66GHz, clock period 0.376 ns)
(Core Quad = Intel(R) Core(TM)2 Quad CPU Q6600 @ 2.40GHz, clock period 0.4167 ns)

As expected, the optimization flag -O4 speeds up the computation only marginally, because the functions are actually passed to the co-processor.

From the table it is directly visible that pow(x,a) is computed as exp(a*log(x)), because the times add up: 54.6 ns (exp) + 75.0 ns (log) ≈ 129.6 ns, close to the 123.0 ns of pow. Moreover, the results show that computing pow() by direct use of the exp(a*log(x)) identity is even a little bit faster (116.1 ns).
Also, log10 is perhaps computed as log(x)*(1/log(10)), because 75.0 ns (log) + 17.2 ns (a multiplication) ≈ 92.2 ns, close to the 91.1 ns of log10.

The computation times divide the functions into groups by duration:
  • fast: arithmetical operations, fabs
  • medium: sin, cos, tan, atan, cbrt, sqrt, asin, acos, sinh, cosh, log, log10, exp, atan2
  • slow: pow
By the way, I changed the computation of x^(1/3) from pow(x,1.0/3.0) to cbrt(x), which speeds up my code by only a few percent; an example of the change follows. Of course, this does not solve the slow computation of a general power.
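
The change sketched on the fitspng use case (the function is my illustration of the CIE lightness formula, not fitspng's actual code):

#include <math.h>

/* CIE lightness L* for y = Y/Yn above the linear cutoff; the cube
   root is a fixed power, so the general pow() can be replaced */
double lightness(double y)
{
  /* return 116.0 * pow(y, 1.0/3.0) - 16.0; */  /* before: ~123 ns per call */
  return 116.0 * cbrt(y) - 16.0;                /* after: ~86 ns per call */
}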

Finally, the computation times look horrible, but I think that a method which computes a function to a precision of 15 decimal digits only 7 times slower (!) than basic arithmetical operations is a miracle. A real-world miracle. :)

2012-01-15

Konve and picko



I separated the utilities konve and picko from Munipack's main source tree. The sources of both converters are unmaintained and I expect only a small interest in them. On the other hand, there are probably huge archives of frames in these formats, so the existence of the utilities may be important.


Both utilities can easily be used by hand (from the command line). Integration into munipack/xmunipack is not planned (although it would be easy).

Moreover, I also prepared:

  • GNU autoconf/automake machinery (but a simple compilation by gcc will do the same job),
  • creation of DEB-based packages (is there somebody who will do it for RPM-based ones?),
  • homepages: konve and picko,
  • the source is maintained under the Mercurial VCS.



Renoir: A Girl with a Watering Can.