Direction Dependent Beams
Prerequisites
Short Topical Videos
Reference Material
- S. Bhatnagar, T. J. Cornwell, K. Golap, J. M. Uson, "Correcting direction-dependent gains in the deconvolution of radio interferometric images", doi:10.1051/0004-6361:20079284
1 Direction-Dependent Beams
1.1 The Problem
In the context of W-Projection, we discussed how to correct for the direction-dependent phase error that results from a baseline having a non-zero projection toward the desired phase center. The solution was to project onto the uv-plane by convolving each visibility with a gridding kernel that is the Fourier transform of the phase corrections you need to apply.
Now we pose a different problem. Suppose you are observing a phase center for a long time, and for one reason or another, the primary beam of your antenna changes during your observation. This can happen for a variety of reasons. Perhaps you are drift-scanning (i.e. your antenna is not tracking the phase center). Or perhaps, as your antennas track the phase center overhead, the dish slowly rotates around the axis of pointing (this happens for almost every antenna built, except for the very clever ASKAP antennas). In any case, you now have direction-dependent gains that differ between your various measurements. How do you combine them? If you don't do anything, and just make your synthesis image as you would have normally, then you will have calibration errors that depend on where in the image you are looking. This means that, when you go to CLEAN or otherwise deconvolve your image, you will get undesirable imaging artifacts around regions where different measurements disagree about the strength of a source. Ouch.
1.2 The Solution
The first step toward a solution is to ask what you would have done if the beam were not changing between measurements. There would still be a primary beam that affects the amplitude of sources in the synthesis image. However, having done your best job deconvolving the synthesis image (where now no calibration errors were introduced), you are free to divide each pixel in the image by the known primary beam response in that direction. This establishes a consistent flux scale across the image.
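The per-pixel beam correction described above can be sketched as follows. This is purely illustrative: the Gaussian beam model, image size, source positions, and the 10% blanking cutoff are all assumptions, not values from the text.

```python
import numpy as np

# Illustrative primary-beam correction of a (perfectly) deconvolved image.
# A Gaussian beam stands in for the real antenna response.
npix = 128
x = np.linspace(-1, 1, npix)
l, m = np.meshgrid(x, x)
beam = np.exp(-(l**2 + m**2) / (2 * 0.3**2))  # peak-normalized beam response

true_sky = np.zeros((npix, npix))
true_sky[40, 40] = 1.0   # a 1 Jy point source off-center
true_sky[64, 64] = 2.0   # a 2 Jy point source near beam center

apparent = true_sky * beam            # what the deconvolved image shows
sensitive = beam > 0.1                # blank pixels where the beam is weak
corrected = np.where(sensitive, apparent / beam, np.nan)
# corrected recovers the true fluxes wherever the beam is sensitive
```

Blanking below some beam-response threshold is a common practical choice, since dividing by a tiny beam value amplifies noise near the beam edge (the same pathology discussed next).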
Now, of course, as we know from the convolution theorem and our experience with W-Projection, anything that can be done in the image domain can be done in the visibility domain. It would be completely equivalent to convolve each visibility in the uv-plane by the Fourier transform of the correcting factor that was applied in the image domain. However, this correction factor can be a little problematic: near the edges of the primary beam, the correction factor (one over a small beam response) becomes very large, and this makes for a very ugly kernel in the visibility domain.
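The image-domain/uv-domain equivalence invoked here is easy to verify numerically: multiplying by a factor in the image domain is the same as convolving by its Fourier transform in the uv domain (up to the DFT normalization). The arrays below are illustrative stand-ins, and the direct circular-convolution helper is written for clarity, not speed.

```python
import numpy as np

def circ_conv2(A, B):
    """Direct circular convolution of two n x n arrays (slow but explicit)."""
    n = A.shape[0]
    out = np.zeros_like(A)
    for u in range(n):
        for v in range(n):
            Bs = np.roll(np.roll(B[::-1, ::-1], u + 1, axis=0), v + 1, axis=1)
            out[u, v] = np.sum(A * Bs)
    return out

n = 16
rng = np.random.default_rng(0)
img = rng.normal(size=(n, n))                   # stand-in sky image
x = np.linspace(-2, 2, n)
taper = np.exp(-x[:, None]**2 - x[None, :]**2)  # stand-in image-domain factor

# Route 1: apply the factor in the image domain, then transform.
vis_a = np.fft.fft2(img * taper)
# Route 2: transform each, then convolve in the uv domain.
vis_b = circ_conv2(np.fft.fft2(img), np.fft.fft2(taper)) / n**2

assert np.allclose(vis_a, vis_b)  # the two routes agree
```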
A better solution is to choose the kernel to be, instead of the Fourier transform of the correction factor, the Fourier transform of the beam itself. This kernel is used to convolve each visibility onto the uv-plane, and it should also be used to convolve each weight into the sampling grid. (Note: for optimal signal-to-noise, you will actually want to convolve the weights by the Fourier transform of the square of the primary beam, which gives you inverse-variance weighting in each uv cell.) This weighting allows the synthesized beam to accurately deconvolve the image, and it accurately encompasses the direction-dependence of the gain.
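A minimal sketch of gridding with a beam kernel might look like the following. The kernel here is an assumed Gaussian standing in for the Fourier transform of the real primary beam, and the grid sizes, indices, and truncated kernel support are all illustrative choices.

```python
import numpy as np

def grid_visibility(grid, wgt, u_idx, v_idx, vis, weight, kernel):
    """Convolve one visibility (and its weight) onto the uv grid."""
    h = kernel.shape[0] // 2
    grid[u_idx - h:u_idx + h + 1, v_idx - h:v_idx + h + 1] += weight * vis * kernel
    wgt[u_idx - h:u_idx + h + 1, v_idx - h:v_idx + h + 1] += weight * kernel
    return grid, wgt

ngrid = 64
grid = np.zeros((ngrid, ngrid), dtype=complex)
wgt = np.zeros((ngrid, ngrid))

# Assumed Gaussian kernel standing in for the FT of the primary beam,
# truncated to a small support so each visibility only touches nearby
# uv cells. (For optimal S/N, the weights would instead be gridded with
# the FT of the *squared* beam, i.e. this kernel convolved with itself.)
k = np.arange(7) - 3
kernel = np.exp(-(k[:, None]**2 + k[None, :]**2) / 2.0)
kernel /= kernel.sum()

grid, wgt = grid_visibility(grid, wgt, 20, 30, 1.5 + 0.5j, 1.0, kernel)
```

Because the kernel is normalized to unit sum, the gridded data and weights preserve the total visibility amplitude and total weight, respectively.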
It should now be clear that, in the case that the beam changes between measurements, you should convolve each measurement by the appropriate beam kernel for that particular measurement. Hence, the solution for compensating for direction-dependent gains that may vary with time is not altogether dissimilar from W-Projection: you convolve by a correcting kernel.
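The per-measurement part of the scheme can be sketched by parameterizing the beam by its state at each measurement. Here the dish rotation about the pointing axis is modeled, purely for illustration, as a rotation angle applied to an assumed elliptical Gaussian kernel; real kernels would come from measured or modeled beams.

```python
import numpy as np

def beam_kernel(angle, ksize=7):
    """FT of an assumed elliptical Gaussian beam rotated by `angle` (radians).
    Illustrative only: real kernels come from measured/modeled beams."""
    k = np.arange(ksize) - ksize // 2
    u, v = np.meshgrid(k, k)
    # rotate uv coordinates by the per-measurement angle
    ur = u * np.cos(angle) + v * np.sin(angle)
    vr = -u * np.sin(angle) + v * np.cos(angle)
    ker = np.exp(-ur**2 / 2.0 - vr**2 / 8.0)  # elongated beam
    return ker / ker.sum()

# Each measurement records the beam state it was taken with (e.g. the
# rotation of the dish about the pointing axis); gridding then looks up
# the matching kernel rather than using one global kernel.
angles = [0.0, 0.3, 0.6]                  # assumed per-measurement angles
kernels = [beam_kernel(a) for a in angles]
# ...visibility i is then convolved onto the uv grid with kernels[i].
```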