ARL API¶
The ARL API consists of data models, expressed as Python classes, and state-free functions.
The best way to learn the API is via this directory, the Jupyter notebooks and the test programs.
Data models¶
Memory Data models¶
The data models used in ARL:
Note
There are two visibility formats:
BlockVisibility
is conceived as an ingest and calibration format. The visibility
data are kept in a block of shape (number of antennas, number of antennas, number of channels,
number of polarisations). One block is kept per integration. The other columns are time and uvw.
The sampling in time is therefore the same for all baselines.
Visibility
is designed to hold coalesced data where the integration time and
channel width can vary with baseline length. The visibility data are kept in a visibility
vector of length equal to the number of polarisations. Everything else is a separate
column: time, frequency, uvw, channel_bandwidth, integration time.
-
class
Configuration
(name='', data=None, location=None, names='%s', xyz=None, mount='alt-az', frame='', receptor_frame=<data_models.polarisation.ReceptorFrame object>, diameter=None)[source]¶ Describe a Configuration as locations in x,y,z, mount type, diameter, names, and overall location
-
property
names
¶ Names of the antennas/stations
-
property
diameter
¶ diameter of antennas/stations
-
property
xyz
¶ XYZ locations of antennas/stations
-
property
mount
¶ Mount type
-
class
GainTable
(data=None, gain: numpy.array = None, time: numpy.array = None, interval=None, weight: numpy.array = None, residual: numpy.array = None, frequency: numpy.array = None, receptor_frame: data_models.polarisation.ReceptorFrame = <data_models.polarisation.ReceptorFrame object>, phasecentre=None, configuration=None)[source]¶ Gain table with data_models: time, antenna, gain[:, chan, rec, rec], weight columns
The weight is usually that output from gain solvers.
-
class
PointingTable
(data=None, pointing: numpy.array = None, nominal: numpy.array = None, time: numpy.array = None, interval=None, weight: numpy.array = None, residual: numpy.array = None, frequency: numpy.array = None, receptor_frame: data_models.polarisation.ReceptorFrame = <data_models.polarisation.ReceptorFrame object>, pointing_frame: str = 'local', pointingcentre=None, configuration=None)[source]¶ Pointing table with data_models: time, antenna, offset[:, chan, rec, 2], weight columns
The weight is usually that output from gain solvers.
-
class
Image
[source]¶ Image class with Image data (as a numpy.array) and the AstroPy implementation of a World Coordinate System
Many operations can be done conveniently using numpy operations on Image.data.
Most of the imaging processing functions require an image in canonical format with 4 axes: RA, DEC, POL, FREQ.
The conventions for indexing in WCS and numpy are opposite: in astropy.wcs the order is (longitude, latitude, polarisation, frequency), whereas in numpy the order is (frequency, polarisation, latitude, longitude).
Warning
The polarisation_frame is kept in two places, the WCS and the polarisation_frame variable. The latter should be considered definitive.
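For example, a minimal usage sketch of direct numpy access to an Image im in canonical form (the variable im is assumed to already exist):
nchan, npol, ny, nx = im.data.shape   # numpy axis order: frequency, polarisation, latitude, longitude
im.data *= 2.0                        # numpy operations act directly on the pixel array
print(im.wcs.wcs.ctype)               # the WCS axis order is the reverse of the numpy order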
-
class
GridData
[source]¶ Class to hold Gridded data for Fourier processing - Has four or more coordinates: [chan, pol, z, y, x] where x can be u, l; y can be v, m; z can be w, n
The conventions for indexing in WCS and numpy are opposite: in astropy.wcs the order is (longitude, latitude, polarisation, frequency), whereas in numpy the order is (frequency, polarisation, depth, latitude, longitude).
Warning
The polarisation_frame is kept in two places, the WCS and the polarisation_frame variable. The latter should be considered definitive.
-
class
ConvolutionFunction
[source]¶ Class to hold Gridded data for Fourier processing - Has four or more coordinates: [chan, pol, z, y, x] where x can be u, l; y can be v, m; z can be w, n
The conventions for indexing in WCS and numpy are opposite: in astropy.wcs the order is (longitude, latitude, polarisation, frequency), whereas in numpy the order is (frequency, polarisation, depth, latitude, longitude).
Warning
The polarisation_frame is kept in two places, the WCS and the polarisation_frame variable. The latter should be considered definitive.
-
class
Skycomponent
(direction=None, frequency=None, name=None, flux=None, shape='Point', polarisation_frame=<data_models.polarisation.PolarisationFrame object>, params=None)[source]¶ Skycomponents are used to represent compact sources on the sky. They possess direction, flux as a function of frequency and polarisation, shape (with params), and polarisation frame.
For example, the following creates and predicts the visibility from a collection of point sources drawn from the GLEAM catalog:
sc = create_low_test_skycomponents_from_gleam(flux_limit=1.0,
    polarisation_frame=PolarisationFrame("stokesIQUV"),
    frequency=frequency, kind='cubic',
    phasecentre=phasecentre, radius=0.1)
model = create_image_from_visibility(vis, cellsize=0.001, npixel=512,
    frequency=frequency,
    polarisation_frame=PolarisationFrame('stokesIQUV'))
bm = create_low_test_beam(model=model)
sc = apply_beam_to_skycomponent(sc, bm)
vis = predict_skycomponent_visibility(vis, sc)
-
class
SkyModel
(image=None, components=None, gaintable=None, mask=None, fixed=False)[source]¶ A model for the sky, including an image, components, gaintable and a mask
-
class
Visibility
(data=None, frequency=None, channel_bandwidth=None, phasecentre=None, configuration=None, uvw=None, time=None, antenna1=None, antenna2=None, vis=None, weight=None, imaging_weight=None, integration_time=None, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, cindex=None, blockvis=None, source='anonymous', meta=None)[source]¶ Visibility table class
Visibility with uvw, time, integration_time, frequency, channel_bandwidth, a1, a2, vis, weight as separate columns in a numpy structured array. The fundamental unit is a complex vector of polarisation.
Visibility is defined to hold an observation with one direction. The polarisation frame is the same for the entire data set and can be stokes, circular, or linear. The configuration is also an attribute.
The phasecentre is the direction of delay tracking i.e. n=0. If uvw are rotated then this should be updated with the new delay tracking centre. This is important for the wstack and wproject algorithms.
If a visibility is created by coalescence then the cindex column is filled with a pointer to the row in the original block visibility from which this row was derived. The original blockvisibility is also preserved as an attribute so that decoalescence is expedited. If you don’t need that then the storage can be released by setting self.blockvis to None.
-
class
BlockVisibility
(data=None, frequency=None, channel_bandwidth=None, phasecentre=None, configuration=None, uvw=None, time=None, vis=None, weight=None, integration_time=None, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, imaging_weight=None, source='anonymous', meta={})[source]¶ Block Visibility table class
BlockVisibility with uvw, time, integration_time, frequency, channel_bandwidth, pol, a1, a2, vis, weight as columns in a numpy structured array.
BlockVisibility is defined to hold an observation with one direction.
The phasecentre is the direction of delay tracking i.e. n=0. If uvw are rotated then this should be updated with the new delay tracking centre. This is important for the wstack and wproject algorithms.
The polarisation frame is the same for the entire data set and can be stokesI, circular, or linear.
The configuration is also an attribute.
Buffer Data Models¶
Buffer equivalents of memory data models
To create from an existing model in the buffer, make some changes, and then sync to buffer:
my_buffer_skymodel = BufferSkyModel(conf["buffer"], conf["inputs"]["skymodel"])
my_memory_skymodel = my_buffer_skymodel.memory_data_model
... do some stuff
my_memory_skymodel.sync()
To create a new buffer for a memory_data_model:
my_buffer_skymodel = BufferSkyModel(conf["buffer"], conf["inputs"]["skymodel"], my_memory_skymodel)
my_memory_skymodel.sync()
An explicit sync is required in both cases.
-
class
BufferDataModel
(json_buffer, json_model, mdm=None)[source]¶ Buffer version of data model
To create from an existing model in the buffer, make some changes, and then sync to buffer:
my_buffer_skymodel = BufferSkyModel(conf["buffer"], conf["inputs"]["skymodel"])
my_memory_skymodel = my_buffer_skymodel.memory_data_model()
... do some stuff
my_memory_skymodel.sync()
To create a new buffer for a memory_data_model:
my_buffer_skymodel = BufferSkyModel(conf["buffer"], conf["inputs"]["skymodel"], my_memory_skymodel)
my_memory_skymodel.sync()
An explicit sync is required in both cases.
-
class
BufferImage
(json_buffer, json_model, mdm=None)[source]¶ Buffer version of memory data model Image
-
class
BufferGridData
(json_buffer, json_model, mdm=None)[source]¶ Buffer version of memory data model GridData
-
class
BufferConvolutionFunction
(json_buffer, json_model, mdm=None)[source]¶ Buffer version of memory data model ConvolutionFunction
-
class
BufferBlockVisibility
(json_buffer, json_model, mdm=None)[source]¶ Buffer version of memory data model BlockVisibility
-
class
BufferSkyModel
(json_buffer, json_model, mdm=None)[source]¶ Buffer version of memory data model SkyModel
Data model persistence¶
Functions to help with persistence of data models
These do data conversion and persistence. Functions from processing_library and processing_components are used.
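For example, a round trip through HDF5 using the export/import functions documented below (a sketch; bvis and the file name are illustrative):
export_blockvisibility_to_hdf5(bvis, 'test_blockvisibility.hdf')
bvis_copy = import_blockvisibility_from_hdf5('test_blockvisibility.hdf')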
-
convert_earthlocation_to_string
(el: astropy.coordinates.earth.EarthLocation)[source]¶ Convert Earth Location to string
- Parameters
el –
- Returns
-
convert_earthlocation_from_string
(s: str)[source]¶ Convert Earth Location from string
- Parameters
s – String
- Returns
-
convert_direction_to_string
(d: astropy.coordinates.sky_coordinate.SkyCoord)[source]¶ Convert SkyCoord to string
- Parameters
d – SkyCoord
- Returns
-
convert_direction_from_string
(s: str)[source]¶ Convert direction (SkyCoord) from string
- Parameters
s – String
- Returns
-
convert_configuration_to_hdf
(config: data_models.memory_data_models.Configuration, f)[source]¶ Convert Configuration to HDF
- Parameters
config –
f –
- Returns
-
convert_visibility_to_hdf
(vis, f)[source]¶ Convert visibility to HDF
- Parameters
vis –
f – HDF root
- Returns
-
convert_blockvisibility_to_hdf
(vis: data_models.memory_data_models.BlockVisibility, f)[source]¶ Convert blockvisibility to HDF
- Parameters
vis –
f – HDF root
- Returns
-
convert_hdf_to_blockvisibility
(f)[source]¶ Convert HDF root to blockvisibility
- Parameters
f –
- Returns
-
export_visibility_to_hdf5
(vis, filename)[source]¶ Export a Visibility to HDF5 format
- Parameters
vis –
filename –
- Returns
-
import_visibility_from_hdf5
(filename)[source]¶ Import a Visibility from HDF5 format
- Parameters
filename –
- Returns
If only one then a Visibility, otherwise a list of Visibilities
-
export_blockvisibility_to_hdf5
(vis, filename)[source]¶ Export a BlockVisibility to HDF5 format
- Parameters
vis –
filename –
- Returns
-
import_blockvisibility_from_hdf5
(filename)[source]¶ Import a BlockVisibility from HDF5 format
- Parameters
filename –
- Returns
If only one then a BlockVisibility, otherwise a list of BlockVisibilities
-
convert_gaintable_to_hdf
(gt: data_models.memory_data_models.GainTable, f)[source]¶ Convert GainTable to HDF
- Parameters
gt –
f – HDF root
- Returns
-
export_gaintable_to_hdf5
(gt: data_models.memory_data_models.GainTable, filename)[source]¶ Export a GainTable to HDF5 format
- Parameters
gt –
filename –
- Returns
-
import_gaintable_from_hdf5
(filename)[source]¶ Import GainTable(s) from HDF5 format
- Parameters
filename –
- Returns
single gaintable or list of gaintables
-
convert_pointingtable_to_hdf
(pt: data_models.memory_data_models.PointingTable, f)[source]¶ Convert PointingTable to HDF
- Parameters
pt –
f – HDF root
- Returns
-
export_pointingtable_to_hdf5
(pt: data_models.memory_data_models.PointingTable, filename)[source]¶ Export a PointingTable to HDF5 format
- Parameters
pt –
filename –
- Returns
-
import_pointingtable_from_hdf5
(filename)[source]¶ Import PointingTable(s) from HDF5 format
- Parameters
filename –
- Returns
single pointingtable or list of pointingtables
-
convert_skycomponent_to_hdf
(sc: data_models.memory_data_models.Skycomponent, f)[source]¶ Convert Skycomponent to HDF
- Parameters
sc – Skycomponent
f – HDF root
- Returns
-
export_skycomponent_to_hdf5
(sc: data_models.memory_data_models.Skycomponent, filename)[source]¶ Export a Skycomponent to HDF5 format
- Parameters
sc – SkyComponent
filename –
- Returns
-
import_skycomponent_from_hdf5
(filename)[source]¶ Import Skycomponent(s) from HDF5 format
- Parameters
filename –
- Returns
single skycomponent or list of skycomponents
-
convert_image_to_hdf
(im: data_models.memory_data_models.Image, f)[source]¶ Convert Image to HDF
- Parameters
im – Image
f – HDF root
- Returns
-
export_image_to_hdf5
(im, filename)[source]¶ Export an Image to HDF5 format
- Parameters
im –
filename –
- Returns
-
import_image_from_hdf5
(filename)[source]¶ Import Image(s) from HDF5 format
- Parameters
filename –
- Returns
single image or list of images
-
export_skymodel_to_hdf5
(sm, filename)[source]¶ Export a Skymodel to HDF5 format
- Parameters
sm –
filename –
- Returns
-
import_skymodel_from_hdf5
(filename)[source]¶ Import a Skymodel from HDF5 format
- Parameters
filename –
- Returns
SkyModel
-
convert_griddata_to_hdf
(gd: data_models.memory_data_models.GridData, f)[source]¶ Convert GridData to HDF
- Parameters
gd – GridData
f – HDF root
- Returns
-
export_griddata_to_hdf5
(gd, filename)[source]¶ Export a GridData to HDF5 format
- Parameters
gd –
filename –
- Returns
-
import_griddata_from_hdf5
(filename)[source]¶ Import GridData from HDF5 format
- Parameters
filename –
- Returns
single GridData or list of GridData
-
convert_convolutionfunction_to_hdf
(cf: data_models.memory_data_models.ConvolutionFunction, f)[source]¶ Convert ConvolutionFunction to HDF
- Parameters
cf – ConvolutionFunction
f – HDF root
- Returns
-
convert_hdf_to_convolutionfunction
(f)[source]¶ Convert HDF root to a ConvolutionFunction
- Parameters
f –
- Returns
-
export_convolutionfunction_to_hdf5
(cf, filename)[source]¶ Export a ConvolutionFunction to HDF5 format
- Parameters
cf –
filename –
- Returns
-
import_convolutionfunction_from_hdf5
(filename)[source]¶ Import ConvolutionFunction from HDF5 format
- Parameters
filename –
- Returns
single ConvolutionFunction or list of ConvolutionFunctions
-
memory_data_model_to_buffer
(model, jbuff, dm)[source]¶ Copy a memory data model to a buffer data model
The file type is derived from the file extension. All are HDF5 only, with the exception of Image which can also be FITS.
- Parameters
model – Memory data model to be sent to buffer
jbuff – JSON describing buffer
dm – JSON describing data model
-
buffer_data_model_to_memory
(jbuff, dm)[source]¶ Copy a buffer data model into memory data model
The file type is derived from the file extension. All are HDF5 only, with the exception of Image which can also be FITS.
- Parameters
jbuff – JSON describing buffer
dm – JSON describing data model
- Returns
data model
Parameter handling¶
We use the standard kwargs mechanism for arguments. For example:
kernelname = get_parameter(kwargs, "kernel", "2d")
oversampling = get_parameter(kwargs, "oversampling", 8)
padding = get_parameter(kwargs, "padding", 2)
The kwargs may need to be passed down to called functions.
All functions possess an API which is always of the form:
def processing_function(idatastruct1, idatastruct2, ..., **kwargs):
return odatastruct1, odatastruct2,... other
Processing parameters are passed via the standard Python kwargs approach.
Inside a function, the values can be accessed directly from the kwargs dictionary, or, if a default is needed, retrieved via a helper function:
log = get_parameter(kwargs, 'log', None)
vis = get_parameter(kwargs, 'visibility', None)
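As an illustrative sketch (the function name and parameter names here are hypothetical, not part of the ARL API), a processing function following this convention might look like:
def my_processing_function(vis, model, **kwargs):
    # optional parameters are retrieved from kwargs with sensible defaults
    niter = get_parameter(kwargs, "niter", 100)
    gain = get_parameter(kwargs, "gain", 0.1)
    # ... do the processing ...
    return vis, model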
Function parameters should obey a consistent naming convention:
Name | Meaning
---|---
vis | Name of Visibility
sc | Name of Skycomponent
gt | Name of GainTable
conf | Name of Configuration
im | Name of input image
qa | Name of quality assessment
log | Name of processing log
If a function argument has a better, more descriptive name e.g. normalised_gt, newphasecentre, use it.
Keyword=value pairs should have descriptive names. The names should be lower case with underscores to separate words:
Name | Meaning | Example
---|---|---
loop_gain | Clean loop gain | 0.1
niter | Number of iterations | 10000
eps | Fractional tolerance | 1e-6
threshold | Absolute threshold | 0.001
fractional_threshold | Threshold as fraction of e.g. peak | 0.1
G_solution_interval | Solution interval for G term | 100
phaseonly | Do phase-only solutions | True
phasecentre | Phase centre (usually as SkyCoord) | SkyCoord("-1.0d", "37.0d", frame='icrs', equinox='J2000')
spectral_mode | Visibility processing mode | 'mfs' or 'channel'
Polarisation¶
Functions for defining polarisation conventions. These include definitions via classes and conversion functions.
For example:
stokes = numpy.array(random.uniform(-1.0, 1.0, [3, 4, 128, 128]))
ipf = PolarisationFrame('stokesIQUV')
opf = PolarisationFrame('circular')
cir = convert_pol_frame(stokes, ipf, opf)
st = convert_pol_frame(cir, opf, ipf)
or:
stokes = numpy.array([1, 0.5, 0.2, -0.1])
circular = convert_stokes_to_circular(stokes)
These functions operate on numpy arrays. They are also packaged for use on Images; the Image functions are probably more useful.
-
class
ReceptorFrame
(name)[source]¶ Define polarisation frames for receptors
circular, linear, and stokesI. The latter is non-physical but useful for some types of testing.
-
property
nrec
¶ Number of receptors (should be 2)
-
class
PolarisationFrame
(name)[source]¶ Define polarisation frames post correlation
-
property
npol
¶ Number of correlated polarisations
-
polmatrixmultiply
(cm, vec, polaxis=1)[source]¶ Matrix multiply of appropriate axis of vec […,:] by cm
For an image, vec has axes [nchan, npol, ny, nx] and polaxis=1. For a visibility, vec has axes [row, nchan, npol] and polaxis=2. For a blockvisibility, vec has axes [row, ant, ant, nchan, npol] and polaxis=4.
- Parameters
cm – matrix to apply
vec – array to be multiplied […,:]
polaxis – which axis contains the polarisation
- Returns
multiplied vec
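The operation can be sketched with numpy.einsum (a minimal illustration, not the library implementation):
import numpy

def polmatrixmultiply_sketch(cm, vec, polaxis=1):
    # move the polarisation axis to the end, apply the matrix, and move it back
    moved = numpy.moveaxis(vec, polaxis, -1)
    out = numpy.einsum('ij,...j->...i', cm, moved)
    return numpy.moveaxis(out, -1, polaxis)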
-
convert_stokes_to_linear
(stokes, polaxis=1)[source]¶ Convert Stokes IQUV to Linear
- Parameters
stokes – […,4] Stokes vector in I,Q,U,V (can be complex)
polaxis – Axis of stokes with polarisation (default 1)
- Returns
linear vector in XX, XY, YX, YY sequence
Equation 4.58 TMS
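For reference, this relation (up to an overall normalisation convention, which varies between texts) maps the Stokes vector onto the linear coherency products as:
XX = I + Q
XY = U + iV
YX = U - iV
YY = I - Q
so the conversion is a fixed 4x4 complex matrix applied along the polarisation axis.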
-
convert_linear_to_stokes
(linear, polaxis=1)[source]¶ Convert Linear to Stokes IQUV
- Parameters
linear – […,4] linear vector in XX, XY, YX, YY sequence
polaxis – Axis of linear with polarisation (default 1)
- Returns
Complex I,Q,U,V
Equation 4.58 TMS, inverted with numpy.linalg.inv
-
convert_linear_to_stokesI
(linear, polaxis=1)[source]¶ Convert Linear to Stokes I
- Parameters
linear – […,4] linear vector in XX, XY, YX, YY sequence
polaxis – Axis of linear with polarisation (default 1)
- Returns
Complex I
Equation 4.58 TMS, inverted with numpy.linalg.inv
-
convert_stokes_to_circular
(stokes, polaxis=1)[source]¶ Convert Stokes IQUV to Circular
- Parameters
stokes – […,4] Stokes vector in I,Q,U,V (can be complex)
polaxis – Axis of stokes with polarisation (default 1)
- Returns
circular vector in RR, RL, LR, LL sequence
Equation 4.59 TMS
-
convert_circular_to_stokes
(circular, polaxis=1)[source]¶ Convert Circular to Stokes IQUV
- Parameters
circular – […,4] circular vector in RR, RL, LR, LL sequence
polaxis – Axis of circular with polarisation (default 1)
- Returns
Complex I,Q,U,V
Equation 4.59 TMS, inverted with numpy.linalg.inv
-
convert_circular_to_stokesI
(circular, polaxis=1)[source]¶ Convert Circular to Stokes I
- Parameters
circular – […,4] circular vector in RR, RL, LR, LL sequence
polaxis – Axis of circular with polarisation (default 1)
- Returns
Complex I
Equation 4.59 TMS, inverted with numpy.linalg.inv
Processing Library¶
The following are functions that support the processing components but are not expected to interface to workflows.
Image¶
Cleaners¶
Image Deconvolution functions
-
hogbom
(dirty, psf, window, gain, thresh, niter, fracthresh, prefix='')[source]¶ Clean the point spread function from a dirty image
See Hogbom CLEAN (1974A&AS…15..417H)
This version operates on numpy arrays.
- Parameters
fracthresh –
prefix –
dirty – The dirty Image, i.e., the Image to be deconvolved
psf – The point spread-function
window – Regions where clean components are allowed. If True, entire dirty Image is allowed
gain – The “loop gain”, i.e., the fraction of the brightest pixel that is removed in each iteration
thresh – Cleaning stops when the maximum of the absolute deviation of the residual is less than this value
niter – Maximum number of components to make if the threshold thresh is not hit
- Returns
clean component Image, residual Image
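The core loop can be sketched in a few lines of numpy (a simplified illustration, not the library implementation; it assumes dirty and psf have the same shape with the PSF peak at the centre, ignores the window, and omits proper edge handling):
import numpy

def hogbom_sketch(dirty, psf, gain, thresh, niter):
    """Very simplified Hogbom CLEAN on 2D numpy arrays."""
    comps = numpy.zeros_like(dirty)
    res = dirty.copy()
    pmax = psf[psf.shape[0] // 2, psf.shape[1] // 2]
    for _ in range(niter):
        # find the current peak of the absolute residual
        iy, ix = numpy.unravel_index(numpy.argmax(numpy.abs(res)), res.shape)
        flux = gain * res[iy, ix]
        comps[iy, ix] += flux
        # subtract a shifted, scaled PSF (numpy.roll wraps around; real code clips at the edges)
        res -= (flux / pmax) * numpy.roll(numpy.roll(psf, iy - psf.shape[0] // 2, axis=0),
                                          ix - psf.shape[1] // 2, axis=1)
        if numpy.max(numpy.abs(res)) < thresh:
            break
    return comps, res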
-
hogbom_complex
(dirty_q, dirty_u, psf_q, psf_u, window, gain, thresh, niter, fracthresh)[source]¶ Clean the point spread function from a dirty Q+iU image
This uses the complex Hogbom CLEAN for polarised data (2016MNRAS.462.3483P)
The starting-point for the code was the standard Hogbom clean algorithm available in ARL.
- Parameters
dirty_q (numpy array) – The dirty Q Image, i.e., the Q Image to be deconvolved
dirty_u (numpy array) – The dirty U Image, i.e., the U Image to be deconvolved
psf_q (numpy array) – The point spread-function in Stokes Q
psf_u (numpy array) – The point spread-function in Stokes U
window (float) – Regions where clean components are allowed. If True, entire dirty Image is allowed
gain (float) – The “loop gain”, i.e., the fraction of the brightest pixel that is removed in each iteration
thresh (float) – Cleaning stops when the maximum of the absolute deviation of the residual is less than this value
niter (int) – Maximum number of components to make if the threshold thresh is not hit
fracthresh (float) – The predefined fractional threshold at which to stop cleaning
- Returns
comps.real: real clean component image, comps.imag: imaginary clean component image, res.real: real residual image, res.imag: imaginary residual image
-
overlapIndices
(res, psf, peakx, peaky)[source]¶ Find the indices where two arrays overlap
- Parameters
res –
psf –
peakx – peak in x
peaky – peak in y
- Returns
(limits in a1, limits in a2)
-
msclean
(dirty, psf, window, gain, thresh, niter, scales, fracthresh, prefix='')[source]¶ Perform multiscale clean
Multiscale CLEAN (IEEE Journal of Selected Topics in Sig Proc, 2008 vol. 2 pp. 793-801)
This version operates on numpy arrays.
- Parameters
prefix –
fracthresh –
dirty – The dirty image, i.e., the image to be deconvolved
psf – The point spread-function
window – Regions where clean components are allowed. If True, all of the dirty image is allowed
gain – The “loop gain”, i.e., the fraction of the brightest pixel that is removed in each iteration
thresh – Cleaning stops when the maximum of the absolute deviation of the residual is less than this value
niter – Maximum number of components to make if the threshold “thresh” is not hit
scales – Scales (in pixels width) to be used
- Returns
clean component image, residual image
-
create_scalestack
(scaleshape, scales, norm=True)[source]¶ Create a cube consisting of the scales
- Parameters
scaleshape – desired shape of stack
scales – scales (in pixels)
norm – Normalise each plane to unity?
- Returns
stack
-
convolve_scalestack
(scalestack, img)[source]¶ Convolve img by the specified scalestack, returning the resulting stack
- Parameters
scalestack – stack containing the scales
img – Image to be convolved
- Returns
stack
-
convolve_convolve_scalestack
(scalestack, img)[source]¶ Convolve img by the specified scalestack, returning the resulting stack
- Parameters
scalestack – stack containing the scales
img – Image to be convolved
- Returns
Twice convolved image [nscales, nscales, nx, ny]
-
find_max_abs_stack
(stack, windowstack, couplingmatrix)[source]¶ Find the location and value of the absolute maximum in this stack
- Parameters
stack – stack to be searched
windowstack – Window for the search
couplingmatrix – Coupling matrix between different scales
- Returns
x, y, scale
-
spheroidal_function
(vnu)[source]¶ Evaluates the PROLATE SPHEROIDAL WAVEFUNCTION
m=6, alpha = 1 from Schwab, Indirect Imaging (1984). This is one factor in the basis function.
-
msmfsclean
(dirty, psf, window, gain, thresh, niter, scales, fracthresh, findpeak='ARL', prefix='')[source]¶ Perform image plane multiscale multi frequency clean
This algorithm is documented as Algorithm 1 in: U. Rau and T. J. Cornwell, “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry,” A&A 532, A71 (2011). Note that this is only the image plane parts.
Specific code is linked to specific lines in that algorithm description.
This version operates on numpy arrays that have been converted to moments on the last axis.
- Parameters
fracthresh –
dirty – The dirty image, i.e., the image to be deconvolved
psf – The point spread-function
window – Regions where clean components are allowed. If True, all of the dirty image is allowed
gain – The “loop gain”, i.e., the fraction of the brightest pixel that is removed in each iteration
thresh – Cleaning stops when the maximum of the absolute deviation of the residual is less than this value
niter – Maximum number of components to make if the threshold “thresh” is not hit
scales – Scales (in pixels width) to be used
fracthresh – Fractional stopping threshold
findpeak – Method of finding peak in mfsclean: ‘Algorithm1’|’CASA’|’ARL’, Default is ARL.
prefix – Prefix to log messages to provide context
- Returns
clean component image, residual image
-
find_global_optimum
(hsmmpsf, ihsmmpsf, smresidual, windowstack, findpeak)[source]¶ Find the optimum peak using one of a number of algorithms
-
update_scale_moment_residual
(smresidual, ssmmpsf, lhs, rhs, gain, mscale, mval)[source]¶ Update residual by subtracting the effect of model update for each moment
-
update_moment_model
(m_model, scalestack, lhs, rhs, gain, mscale, mval)[source]¶ Update model with an appropriately scaled and centered blob for each moment
-
calculate_scale_moment_residual
(residual, scalestack)[source]¶ Calculate scale-dependent moment residuals
Part of the initialisation for Algorithm 1: lines 12 - 17
- Parameters
scalestack –
residual – residual [nmoment, nx, ny]
- Returns
scale-dependent moment residual [nscales, nmoment, nx, ny]
-
calculate_scale_scale_moment_moment_psf
(psf, scalestack)[source]¶ Calculate scale-dependent moment psfs
Part of the initialisation for Algorithm 1
- Parameters
scalestack –
psf – psf
- Returns
scale-dependent moment psf [nscales, nscales, nmoment, nmoment, nx, ny]
-
calculate_scale_inverse_moment_moment_hessian
(scale_scale_moment_moment_psf)[source]¶ Calculate inverse_scale dependent moment moment hessian
Part of the initialisation for Algorithm 1. Lines 7 - 9
- Parameters
scale_scale_moment_moment_psf – scale_moment_psf [nscales, nscales, nmoment, nmoment]
- Returns
scale-dependent moment-moment inverse hessian
-
calculate_scale_moment_principal_solution
(smresidual, ihsmmpsf)[source]¶ Calculate the principal solution in moment space for each scale
Lines 20 - 26
- Parameters
smresidual – scale-dependent moment residual [nscales, nmoment, nx, ny]
ihsmmpsf – Inverse of scale dependent moment moment Hessian
- Returns
Decoupled residual images [nscales, nmoment, nx, ny]
Calibration¶
Solvers¶
Functions to solve for antenna/station gain
This uses an iterative substitution algorithm due to Larry D’Addario c 1980’ish. Used in the original VLA Dec-10 Antsol.
For example:
gtsol = solve_gaintable(vis, originalvis, phase_only=True, niter=niter, crosspol=False, tol=1e-6)
vis = apply_gaintable(vis, gtsol, inverse=True)
-
solve_from_X
(gt: data_models.memory_data_models.GainTable, x: numpy.ndarray, xwt: numpy.ndarray, chunk, crosspol, niter, phase_only, tol, npol) → data_models.memory_data_models.GainTable[source]¶ Solve for gains from the point source equivalents
- Parameters
gt –
x – point source visibility
xwt – point source weight
chunk – which chunk of the gaintable?
crosspol –
niter –
phase_only –
tol –
npol –
- Returns
-
solve_antenna_gains_itsubs_scalar
(gain, gwt, x, xwt, niter=30, tol=1e-08, phase_only=True, refant=0, damping=0.5)[source]¶ Solve for the antenna gains
x(antenna2, antenna1) = gain(antenna1) conj(gain(antenna2))
This uses an iterative substitution algorithm due to Larry D’Addario c 1980’ish (see ThompsonDaddario1982 Appendix 1). Used in the original VLA Dec-10 Antsol.
- Parameters
gain – gains
gwt – gain weight
x – Equivalent point source visibility[nants, nants, …]
xwt – Equivalent point source weight [nants, nants, …]
niter – Number of iterations
tol – tolerance on solution change
phase_only – Do solution for only the phase? (default True)
refant – Reference antenna for phase (default=0)
- Returns
gain [nants, …], weight [nants, …]
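As a hedged sketch (not the library code), one substitution update for the scalar case, assuming x[ant2, ant1] is modelled as gain[ant1] * conj(gain[ant2]), can be written as:
import numpy

def antsol_update_sketch(gain, x, xwt):
    # weighted substitution: g_i <- sum_j w_ji x_ji g_j / sum_j w_ji |g_j|^2
    top = numpy.einsum('ji,ji,j->i', xwt, x, gain)
    bottom = numpy.einsum('ji,j->i', xwt, numpy.abs(gain) ** 2)
    return top / bottom

# in practice the update is damped and iterated until the change is below tol, e.g.
# gain = 0.5 * (gain + antsol_update_sketch(gain, x, xwt))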
-
solve_antenna_gains_itsubs_vector
(gain, gwt, x, xwt, niter=30, tol=1e-08, phase_only=True, refant=0)[source]¶ Solve for the antenna gains using full matrix expressions
x(antenna2, antenna1) = gain(antenna1) conj(gain(antenna2))
See Appendix D, section D.1 in:
J. P. Hamaker, “Understanding radio polarimetry - IV. The full-coherency analogue of scalar self-calibration: Self-alignment, dynamic range and polarimetric fidelity,” Astronomy and Astrophysics Supplement Series, vol. 143, no. 3, pp. 515–534, May 2000.
- Parameters
gain – gains
gwt – gain weight
x – Equivalent point source visibility[nants, nants, …]
xwt – Equivalent point source weight [nants, nants, …]
niter – Number of iterations
tol – tolerance on solution change
phase_only – Do solution for only the phase? (default True)
refant – Reference antenna for phase (default=0)
- Returns
gain [nants, …], weight [nants, …]
-
solve_antenna_gains_itsubs_matrix
(gain, gwt, x, xwt, niter=30, tol=1e-08, phase_only=True, refant=0)[source]¶ Solve for the antenna gains using full matrix expressions
x(antenna2, antenna1) = gain(antenna1) conj(gain(antenna2))
See Appendix D, section D.1 in:
J. P. Hamaker, “Understanding radio polarimetry - IV. The full-coherency analogue of scalar self-calibration: Self-alignment, dynamic range and polarimetric fidelity,” Astronomy and Astrophysics Supplement Series, vol. 143, no. 3, pp. 515–534, May 2000.
- Parameters
gain – gains
gwt – gain weight
x – Equivalent point source visibility[nants, nants, …]
xwt – Equivalent point source weight [nants, nants, …]
niter – Number of iterations
tol – tolerance on solution change
phase_only – Do solution for only the phase? (default True)
refant – Reference antenna for phase (default=0)
- Returns
gain [nants, …], weight [nants, …]
-
solution_residual_scalar
(gain, x, xwt)[source]¶ Calculate residual across all baselines of gain for point source equivalent visibilities
- Parameters
gain – gain [nant, …]
x – Point source equivalent visibility [nant, …]
xwt – Point source equivalent weight [nant, …]
- Returns
residual[…]
-
solution_residual_vector
(gain, x, xwt)[source]¶ Calculate residual across all baselines of gain for point source equivalent visibilities
Vector case i.e. off-diagonals of gains are zero
- Parameters
gain – gain [nant, …]
x – Point source equivalent visibility [nant, …]
xwt – Point source equivalent weight [nant, …]
- Returns
residual[…]
Fourier transform¶
FFT support¶
FFT support functions
-
fft
(a)[source]¶ Fourier transformation from image to grid space
Note
If there are four axes then only the two innermost axes are transformed
- Parameters
a – image in lm coordinate space
- Returns
uv grid
-
ifft
(a)[source]¶ Fourier transformation from grid to image space
Note
If there are four axes then only the two innermost axes are transformed
- Parameters
a – uv grid to transform
- Returns
an image in lm coordinate space
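For reference, one common convention for such a shifted 2D transform pair (offered as a sketch; the library may differ in normalisation and in its handling of extra axes):
import numpy

def fft_sketch(a):
    # image (lm) to grid (uv), with the zero coordinate at npixel//2 on both axes
    return numpy.fft.fftshift(numpy.fft.fft2(numpy.fft.ifftshift(a)))

def ifft_sketch(a):
    # grid (uv) to image (lm); exact inverse of fft_sketch
    return numpy.fft.fftshift(numpy.fft.ifft2(numpy.fft.ifftshift(a)))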
-
pad_mid
(ff, npixel)[source]¶ Pad a far field image with zeroes to make it the given size.
Effectively as if we were multiplying with a box function of the original field’s size, which is equivalent to a convolution with a sinc pattern in the uv-grid.
Note
Only the two innermost axes are transformed
This function does not handle odd-sized dimensions properly
- Parameters
ff – The input far field. Should be smaller than npixelxnpixel.
npixel – The desired far field size
-
extract_mid
(a, npixel)[source]¶ Extract a section from middle of a map
Suitable for zero frequencies at npixel/2. This is the reverse operation to pad.
Note
Only the two innermost axes are transformed
- Parameters
npixel – desired size of the section to extract
a – grid from which to extract
-
extract_oversampled
(a, xf, yf, kernel_oversampling, kernelwidth)[source]¶ Extract the (xf-th,yf-th) w-kernel from the oversampled parent
Offsets are suitable for correcting of fractional coordinates, e.g. an offset of (xf,yf) results in the kernel for an (-xf,-yf) sub-grid offset.
We do not want to make assumptions about the source grid’s symmetry here, which means that the grid’s side length must be at least kernel_oversampling*(npixel+2) to contain enough information in all circumstances
- Parameters
xf –
yf –
a – grid from which to extract
kernel_oversampling – oversampling factor
kernelwidth – size of section
Convolutional Gridding¶
Imaging is based on use of the FFT to perform Fourier transforms efficiently. Since the observed visibility data do not arrive naturally on grid points, the sampled points are resampled on the FFT grid using a convolution function to smear out the sample points. The resulting grid points are then FFT’ed. The result can be corrected for the gridding convolution function by division in the image plane of the transform.
This approach may be extended to include image plane effects such as the w term and the antenna/station primary beam.
This module contains functions for performing the gridding process and the inverse degridding process.
-
coordinateBounds
(npixel)[source]¶ Returns lowest and highest coordinates of an image/grid given:
Step size is \(1/npixel\):
\[\frac{high-low}{npixel-1} = \frac{1}{npixel}\]
The coordinate \(\lfloor npixel/2\rfloor\) falls exactly on zero:
\[low + \left\lfloor\frac{npixel}{2}\right\rfloor * (high-low) = 0\]
This is the coordinate system for shifted FFTs.
-
coordinates2
(npixel: int)[source]¶ Two dimensional grids of coordinates spanning -1 to 1 in each dimension
a step size of 2/npixel and
(0,0) at pixel (floor(n/2),floor(n/2))
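A minimal numpy sketch matching this description (not necessarily the library implementation):
import numpy

def coordinates2_sketch(npixel):
    # two grids of coordinates with step 2/npixel and zero at pixel npixel//2
    return (numpy.mgrid[0:npixel, 0:npixel] - npixel // 2) * (2.0 / npixel)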
-
coordinates2Offset
(npixel: int, cx: int, cy: int, quadrant=False)[source]¶ Two dimensional grids of coordinates centred on an arbitrary point.
This is used for A and w beams.
a step size of 2/npixel and
(0,0) at pixel (cx, cy)
-
grdsf
(nu)[source]¶ Calculate PSWF using an old SDE routine re-written in Python
Find Spheroidal function with M = 6, alpha = 1 using the rational approximations discussed by Fred Schwab in ‘Indirect Imaging’. This routine was checked against Fred’s SPHFN routine, and agreed to about the 7th significant digit. The griddata function is (1-NU**2)*GRDSF(NU) where NU is the distance to the edge. The grid correction function is just 1/GRDSF(NU) where NU is now the distance to the edge of the image.
-
w_beam
(npixel, field_of_view, w, cx=None, cy=None, remove_shift=False)[source]¶ W beam, the fresnel diffraction pattern arising from non-coplanar baselines
- Parameters
npixel – Size of the grid in pixels
field_of_view – Field of view
w – Baseline distance to the projection plane
cx – location of delay centre (default: npixel//2)
cy – location of delay centre (default: npixel//2)
remove_shift – Remove overall phase shift at the centre of the image
- Returns
npixel x npixel array with the far field
-
frac_coord
(npixel, kernel_oversampling, p)[source]¶ Compute whole and fractional parts of coordinates, rounded to kernel_oversampling-th fraction of pixel size
The fractional values are rounded to nearest 1/kernel_oversampling pixel value. At fractional values greater than (kernel_oversampling-0.5)/kernel_oversampling coordinates are rounded to next integer index.
- Parameters
npixel – Number of pixels in total
kernel_oversampling – Fractional values to round to
p – Coordinate in range [-.5,.5[
-
weight_gridding
(shape, visweights, vuvwmap, vfrequencymap, vpolarisationmap=None, weighting='uniform')[source]¶ Reweight data using one of a number of algorithms
- Parameters
shape –
visweights – Visibility weights
vuvwmap – map uvw to grid fractions
vfrequencymap – map frequency to image channels
vpolarisationmap – map polarisation to image polarisation
weighting – ‘’ | ‘uniform’
- Returns
visweights, density, densitygrid
Imaging¶
Parameters¶
Functions that aid definition of Fourier transform processing.
-
get_frequency_map
(vis, im: data_models.memory_data_models.Image = None)[source]¶ Map channels from visibilities to image
Util¶
Array Functions¶
Useful array functions.
-
average_chunks_jit
(arr, wts, chunksize)[source]¶ Average the array arr with weights by chunks
Array len does not have to be multiple of chunksize
This is a version written for numba. When used with numba.jit, it’s about 25 - 30% faster than the numpy version without jit.
- Parameters
arr – 1D array of values
wts – 1D array of weights
chunksize – averaging size
- Returns
1D array of averaged data, 1D array of weights
-
average_chunks
(arr, wts, chunksize)[source]¶ Average the array arr with weights by chunks
Array len does not have to be multiple of chunksize
This version is optimised for plain numpy. It is roughly ten times faster than average_chunks_jit when used without numba jit. It cannot (yet) be used with numba because add.reduceat is not supported in numba 0.31.
- Parameters
arr – 1D array of values
wts – 1D array of weights
chunksize – averaging size
- Returns
1D array of averaged data, 1D array of weights
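A hedged sketch of the numpy approach using add.reduceat (simplified; no protection against zero chunk weights):
import numpy

def average_chunks_sketch(arr, wts, chunksize):
    # sum data*weights and weights over each chunk, then normalise
    places = numpy.arange(0, len(arr), chunksize)
    chunk_sum = numpy.add.reduceat(wts * arr, places)
    chunk_wts = numpy.add.reduceat(wts, places)
    return chunk_sum / chunk_wts, chunk_wts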
-
average_chunks2
(arr, wts, chunksize)[source]¶ Average the two dimensional array arr with weights by chunks
Array len does not have to be multiple of chunksize.
- Parameters
arr – 2D array of values
wts – 2D array of weights
chunksize – 2-tuple of averaging region e.g. (2,3)
- Returns
2D array of averaged data, 2D array of weights
-
tukey_filter
(x, r)[source]¶ Calculate the Tukey (tapered cosine) filter
See e.g. https://uk.mathworks.com/help/signal/ref/tukeywin.html
- Parameters
x – x coordinate (float)
r – transition point of filter (float)
- Returns
Value of filter for x
-
insert_array
(im, x, y, flux, bandwidth=1.0, support=7, insert_function=<function insert_function_L>)[source]¶ Insert point into image using specified function
- Parameters
im – Image
x – x in float pixels
y – y in float pixels
flux – Flux[nchan, npol]
bandwidth – Support of data in uv plane
support – Support of function in image space
insert_function – insert_function_L or insert_function_Sinc or insert_function_pswf
- Returns
Coordinate Support¶
Coordinate support
We follow the casa definition of coordinate systems http://casa.nrao.edu/Memos/CoordConvention.pdf :
UVW is a right-handed coordinate system, with W pointing towards the source, and a baseline convention of \(ant2 - ant1\) where \(index(ant1) < index(ant2)\). Consider an XYZ Celestial coordinate system centered at the location of the interferometer, with \(X\) towards the East, \(Z\) towards the NCP and \(Y\) to complete a right-handed system. The UVW coordinate system is then defined by the hour-angle and declination of the phase-reference direction such that
when the direction of observation is the NCP (ha=0,dec=90), the UVW coordinates are aligned with XYZ,
V, W and the NCP are always on a Great circle,
when W is on the local meridian, U points East
when the direction of observation is at zero declination, an hour-angle of -6 hours makes W point due East.
The \((l,m,n)\) coordinates are parallel to \((u,v,w)\) such that \(l\) increases with Right-Ascension (or increasing longitude coordinate), \(m\) increases with Declination, and \(n\) is towards the source. With this convention, images will have Right Ascension increasing from Right to Left, and Declination increasing from Bottom to Top.
-
xyz_at_latitude
(local_xyz, lat)[source]¶ Rotate local XYZ coordinates into celestial XYZ coordinates. These coordinate systems are very similar, with X pointing towards the geographical east in both cases. However, before the rotation Z points towards the zenith, whereas afterwards it will point towards celestial north (parallel to the earth axis).
- Parameters
lat – target latitude (radians or astropy quantity)
local_xyz – Array of local XYZ coordinates
- Returns
Celestial XYZ coordinates
-
xyz_to_uvw
(xyz, ha, dec)[source]¶ Rotate \((x,y,z)\) positions in earth coordinates to \((u,v,w)\) coordinates relative to astronomical source position \((ha, dec)\). Can be used for both antenna positions as well as for baselines.
Hour angle and declination can be given as single values or arrays of the same length. Angles can be given as radians or astropy quantities with a valid conversion.
- Parameters
xyz – \((x,y,z)\) co-ordinates of antennas in array
ha – hour angle of phase tracking centre (\(ha = ra - lst\))
dec – declination of phase tracking centre.
-
uvw_to_xyz
(uvw, ha, dec)[source]¶ Rotate \((x,y,z)\) positions relative to a sky position at \((ha, dec)\) to earth coordinates. Can be used for both antenna positions as well as for baselines.
Hour angle and declination can be given as single values or arrays of the same length. Angles can be given as radians or astropy quantities with a valid conversion.
- Parameters
uvw – \((u,v,w)\) co-ordinates of antennas in array
ha – hour angle of phase tracking centre (\(ha = ra - lst\))
dec – declination of phase tracking centre
-
baselines
(ants_uvw)[source]¶ Compute baselines in uvw co-ordinate system from uvw co-ordinate system station positions
- Parameters
ants_uvw – (u,v,w) co-ordinates of antennas in array
-
xyz_to_baselines
(ants_xyz, ha_range, dec)[source]¶ Calculate baselines in \((u,v,w)\) co-ordinate system for a range of hour angles (i.e. non-snapshot observation) to create a uvw sampling distribution
- Parameters
ants_xyz – \((x,y,z)\) co-ordinates of antennas in array
ha_range – list of hour angle values for astronomical source as function of time
dec – declination of astronomical source [constant, not \(f(t)\)]
-
skycoord_to_lmn
(pos: astropy.coordinates.sky_coordinate.SkyCoord, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord)[source]¶ Convert astropy sky coordinates into the l,m,n coordinate system relative to a phase centre.
The l,m,n is a RHS coordinate system with its origin on the sky sphere, m,n and the celestial north on the same plane, and l,m a tangential plane of the sky sphere.
Note that this means that l increases east-wards
-
lmn_to_skycoord
(lmn, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord)[source]¶ Convert l,m,n coordinate system + phasecentre to astropy sky coordinate relative to a phase centre.
The l,m,n is a RHS coordinate system with its origin on the sky sphere, m,n and the celestial north on the same plane, and l,m a tangential plane of the sky sphere.
Note that this means that l increases east-wards
-
simulate_point
(dist_uvw, l, m)[source]¶ Simulate visibilities for unit amplitude point source at direction cosines (l,m) relative to the phase centre.
This includes phase tracking to the centre of the field (hence the minus 1 in the exponent.)
Note that the point source is a delta function, therefore the FT relationship becomes an exponential, evaluated at (uvw.lmn)
- Parameters
dist_uvw – \((u,v,w)\) distribution of projected baselines (in wavelengths)
l – horizontal direction cosine relative to phase tracking centre
m – orthogonal direction cosine relative to phase tracking centre
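A short numpy sketch of the underlying relation (sign convention assumed; dist_uvw is taken to have shape [nrows, 3] in wavelengths):
import numpy

def simulate_point_sketch(dist_uvw, l, m):
    # delta-function source: V(u,v,w) = exp(-2*pi*j*(u*l + v*m + w*(n-1))), n = sqrt(1 - l^2 - m^2)
    s = numpy.array([l, m, numpy.sqrt(1.0 - l ** 2 - m ** 2) - 1.0])
    return numpy.exp(-2j * numpy.pi * numpy.dot(dist_uvw, s))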
-
visibility_shift
(uvw, vis, dl, dm)[source]¶ Shift visibilities by the given image-space distance. This is based on simple FFT laws. It will require kernels to be suitably shifted as well to work correctly.
- Parameters
uvw – \((u,v,w)\) distribution of projected baselines (in wavelengths)
vis – Input visibilities
dl – Horizontal shift distance as directional cosine
dm – Vertical shift distance as directional cosine
- Returns
New visibilities
-
uvw_transform
(uvw, transform_matrix)[source]¶ Transforms UVW baseline coordinates such that the image is transformed with the given matrix. Will require kernels to be suitably transformed to work correctly.
Reference: Sault, R. J., L. Staveley-Smith, and W. N. Brouw. “An approach to interferometric mosaicing.” Astronomy and Astrophysics Supplement Series 120 (1996): 375-384.
- Parameters
uvw – \((u,v,w)\) distribution of projected baselines (in wavelengths)
transform_matrix – 2x2 matrix for image transformation
- Returns
New baseline coordinates
-
parallactic_angle
(ha, dec, lat)[source]¶ Calculate parallactic angle of source at ha, dec observed from a site at latitude lat
H = t - α
sin(a) = sin(δ) sin(φ) + cos(δ) cos(φ) cos(H)
sin(A) = - sin(H) cos(δ) / cos(a)
cos(A) = { sin(δ) - sin(φ) sin(a) } / cos(φ) cos(a)
- Parameters
ha – Hour angle (radians)
dec – Declination (radians)
lat – Site latitude (radians)
- Returns
-
pa_z
(ha, dec, lat)[source]¶ Calculate parallactic angle and zenith angle of source at ha, dec observed from a site at latitude lat
H = t - α
sin(a) = sin(δ) sin(φ) + cos(δ) cos(φ) cos(H)
sin(A) = - sin(H) cos(δ) / cos(a)
cos(A) = { sin(δ) - sin(φ) sin(a) } / cos(φ) cos(a)
- Parameters
ha – Hour angle (radians)
dec – Declination (radians)
lat – Site latitude (radians)
- Returns
-
hadec_to_azel
(ha, dec, latitude)[source]¶ Convert HA Dec to Az El
TMS Appendix 4.1
sin(el) = sin(lat) sin(dec) + cos(lat) cos(dec) cos(ha)
cos(el) cos(az) = cos(lat) sin(dec) - sin(lat) cos(dec) cos(ha)
cos(el) sin(az) = - cos(dec) sin(ha)
- Parameters
ha –
dec –
latitude –
- Returns
az, el
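A direct numpy transcription of these relations (a sketch; the library may differ in branch and angle-wrapping conventions):
import numpy

def hadec_to_azel_sketch(ha, dec, latitude):
    sinel = numpy.sin(latitude) * numpy.sin(dec) + numpy.cos(latitude) * numpy.cos(dec) * numpy.cos(ha)
    cosel_cosaz = numpy.cos(latitude) * numpy.sin(dec) - numpy.sin(latitude) * numpy.cos(dec) * numpy.cos(ha)
    cosel_sinaz = -numpy.cos(dec) * numpy.sin(ha)
    az = numpy.arctan2(cosel_sinaz, cosel_cosaz)
    el = numpy.arcsin(sinel)
    return az, el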
-
azel_to_hadec
(az, el, latitude)[source]¶ Convert Az El to HA Dec
TMS Appendix 4.1
sin(dec) = sin(lat) sin(el) + cos(lat) cos(el) cos(az)
cos(dec) cos(ha) = cos(lat) sin(el) - sin(lat) cos(el) cos(az)
cos(dec) sin(ha) = - cos(el) sin(az)
- Parameters
az –
el –
latitude –
- Returns
ha, dec
Processing Components¶
These Python functions contain the core algorithms in ARL. They can be executed directly or via workflows.
Image¶
Operations¶
Image operations visible to the Execution Framework as Components
-
export_image_to_fits
(im: data_models.memory_data_models.Image, fitsfile: str = 'imaging.fits')[source]¶ Write an image to fits
- Parameters
im – Image
fitsfile – Name of output fits file
-
import_image_from_fits
(fitsfile: str) → data_models.memory_data_models.Image[source]¶ Read an Image from fits
- Parameters
fitsfile –
- Returns
Image
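For example, a FITS round trip (the file name is illustrative):
export_image_to_fits(model, fitsfile='results/model.fits')
model_reread = import_image_from_fits('results/model.fits')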
-
reproject_image
(im: data_models.memory_data_models.Image, newwcs: astropy.wcs.wcs.WCS, shape=None) -> (<class 'data_models.memory_data_models.Image'>, <class 'data_models.memory_data_models.Image'>)[source]¶ Re-project an image to a new coordinate system
Currently uses the reproject python package. This seems to have some features, so be careful using this method. For timeslice imaging I had to use griddata.
- Parameters
im – Image to be reprojected
newwcs – New WCS
shape –
- Returns
Reprojected Image, Footprint Image
-
add_image
(im1: data_models.memory_data_models.Image, im2: data_models.memory_data_models.Image, docheckwcs=False) → data_models.memory_data_models.Image[source]¶ Add two images
- Parameters
docheckwcs –
im1 –
im2 –
- Returns
Image
-
qa_image
(im, context='') → data_models.memory_data_models.QA[source]¶ Assess the quality of an image
- Parameters
im –
- Returns
QA
-
show_image
(im: data_models.memory_data_models.Image, fig=None, title: str = '', pol=0, chan=0, cm='Greys', components=None, vmin=None, vmax=None, vscale=1.0)[source]¶ Show an Image with coordinates using matplotlib, optionally with components
- Parameters
im – Image
fig – Matplotlib figure
title –
pol – Polarisation
chan – Channel
components – Optional components
vmin – Clip to this minimum
vmax – Clip to this maximum
vscale – scale max, min by this amount
- Returns
-
show_components
(im, comps, npixels=128, fig=None, vmax=None, vmin=None, title='')[source]¶ Show components against an image
- Parameters
im –
comps –
npixels –
fig –
- Returns
-
smooth_image
(model: data_models.memory_data_models.Image, width=1.0, normalise=True)[source]¶ Smooth an image with a kernel
- Parameters
model – Image
width – Kernel in pixels
normalise – Normalise kernel peak to unity
-
calculate_image_frequency_moments
(im: data_models.memory_data_models.Image, reference_frequency=None, nmoment=1) → data_models.memory_data_models.Image[source]¶ Calculate frequency weighted moments
Weights are ((freq-reference_frequency)/reference_frequency)**moment
Note that the spectral axis is replaced by a MOMENT axis.
For example, to find the moments and then reconstruct from just the moments:
moment_cube = calculate_image_frequency_moments(model_multichannel, nmoment=5)
reconstructed_cube = calculate_image_from_frequency_moments(model_multichannel, moment_cube)
- Parameters
im – Image cube
reference_frequency – Reference frequency (default None uses average)
nmoment – Number of moments to calculate
- Returns
Moments image
-
calculate_image_from_frequency_moments
(im: data_models.memory_data_models.Image, moment_image: data_models.memory_data_models.Image, reference_frequency=None) → data_models.memory_data_models.Image[source]¶ Calculate image from frequency weighted moments
Weights are ((freq-reference_frequency)/reference_frequency)**moment
Note that a new image is created
For example, to find the moments and then reconstruct from just the moments:
moment_cube = calculate_image_frequency_moments(model_multichannel, nmoment=5)
reconstructed_cube = calculate_image_from_frequency_moments(model_multichannel, moment_cube)
- Parameters
im – Image cube to be reconstructed
moment_image – Moment cube (constructed using calculate_image_frequency_moments)
reference_frequency – Reference frequency (default None uses average)
- Returns
reconstructed image
-
remove_continuum_image
(im: data_models.memory_data_models.Image, degree=1, mask=None)[source]¶ Fit and remove continuum visibility in place
Fit a polynomial in frequency of the specified degree where mask is True
- Parameters
im –
degree – 1 is a constant, 2 is a slope, etc.
mask –
- Returns
-
convert_stokes_to_polimage
(im: data_models.memory_data_models.Image, polarisation_frame: data_models.polarisation.PolarisationFrame)[source]¶ Convert a stokes image to polarisation_frame
Gather/Scatter¶
Functions that define and manipulate images. Images are just data and a World Coordinate System.
-
image_scatter_facets
(im: data_models.memory_data_models.Image, facets=1, overlap=0, taper=None) → List[data_models.memory_data_models.Image][source]¶ Scatter an image into a list of subimages using the image_raster_iterator
If the overlap is greater than zero, we choose to keep all images the same size so the outer ring of facets is ignored. So if facets=4 and overlap > 0 then the scatter returns (facets-2)**2 = 4 images.
- Parameters
im – Image
facets – Number of image partitions on each axis (2)
overlap – Number of pixels overlap
taper – Taper at edges None or ‘linear’
- Returns
list of subimages
-
image_gather_facets
(image_list: List[data_models.memory_data_models.Image], im: data_models.memory_data_models.Image, facets=1, overlap=0, taper=None, return_flat=False)[source]¶ Gather a list of subimages back into an image using the image_raster_iterator
If the overlap is greater than zero, we choose to keep all images the same size so the outer ring of facets is ignored. So if facets=4 and overlap > 0 then the gather expects (facets-2)**2 = 4 images.
To normalize the overlap we make a set of flats, gather that and divide. The flat may be optionally returned instead of the result
- Parameters
image_list – List of subimages
im – Output image
facets – Number of image partitions on each axis (2)
overlap – Overlap between neighbours in pixels
taper – Taper at edges None or ‘linear’ or ‘Tukey’
return_flat – Return the flat
- Returns
Gathered image (or the flat, if return_flat is True)
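For example, a scatter/gather round trip over facets (a sketch; model is assumed to be an existing Image):
facet_list = image_scatter_facets(model, facets=4)
# ... process each facet independently ...
model = image_gather_facets(facet_list, model, facets=4)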
-
image_scatter_channels
(im: data_models.memory_data_models.Image, subimages=None) → List[data_models.memory_data_models.Image][source]¶ Scatter an image into a list of subimages using the channels
- Parameters
im – Image
subimages – Number of channels
- Returns
list of subimages
-
image_gather_channels
(image_list: List[data_models.memory_data_models.Image], im: data_models.memory_data_models.Image = None, subimages=0) → data_models.memory_data_models.Image[source]¶ Gather a list of subimages back into an image using the channel_iterator
If the template image is not given then it will be formed assuming that the list has been generated by image_scatter_channels with subimages = number of channels
- Parameters
image_list – List of subimages
im – Output image
subimages – Number of subimages (channels)
- Returns
Gathered image
Deconvolution¶
Image deconvolution functions
The standard deconvolution algorithms are provided:
hogbom: Hogbom CLEAN See: Hogbom CLEAN A&A Suppl, 15, 417, (1974)
msclean: MultiScale CLEAN See: Cornwell, T.J., Multiscale CLEAN (IEEE Journal of Selected Topics in Sig Proc, 2008 vol. 2 pp. 793-801)
mfsmsclean: MultiScale Multi-Frequency See: U. Rau and T. J. Cornwell, “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry,” A&A 532, A71 (2011).
For example to make dirty image and PSF, deconvolve, and then restore:
model = create_image_from_visibility(vt, cellsize=0.001, npixel=256)
dirty, sumwt = invert_2d(vt, model)
psf, sumwt = invert_2d(vt, model, dopsf=True)
comp, residual = deconvolve_cube(dirty, psf, niter=1000, threshold=0.001, fracthresh=0.01, window='quarter',
gain=0.7, algorithm='msclean', scales=[0, 3, 10, 30])
restored = restore_cube(comp, psf, residual)
-
deconvolve_cube
(dirty: data_models.memory_data_models.Image, psf: data_models.memory_data_models.Image, prefix='', **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'data_models.memory_data_models.Image'>)[source]¶ Clean using a variety of algorithms
Functions that clean a dirty image using a point spread function. The algorithms available are:
hogbom: Hogbom CLEAN See: Hogbom CLEAN A&A Suppl, 15, 417, (1974)
msclean: MultiScale CLEAN See: Cornwell, T.J., Multiscale CLEAN (IEEE Journal of Selected Topics in Sig Proc, 2008 vol. 2 pp. 793-801)
mfsmsclean, msmfsclean, mmclean: MultiScale Multi-Frequency See: U. Rau and T. J. Cornwell, “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry,” A&A 532, A71 (2011).
For example:
comp, residual = deconvolve_cube(dirty, psf, niter=1000, gain=0.7, algorithm='msclean', scales=[0, 3, 10, 30], threshold=0.01)
For the MFS clean, the psf must have number of channels >= 2 * nmoment
- Parameters
dirty – Image dirty image
psf – Image Point Spread Function
window_shape – Window image (Bool) - clean where True
mask – Window in the form of an image, overrides window_shape
algorithm – Cleaning algorithm: ‘msclean’|’hogbom’|’mfsmsclean’
gain – loop gain (float) 0.7
threshold – Clean threshold (0.0)
fractional_threshold – Fractional threshold (0.01)
scales – Scales (in pixels) for multiscale ([0, 3, 10, 30])
nmoment – Number of frequency moments (default 3)
findpeak – Method of finding peak in mfsclean: ‘Algorithm1’|’ASKAPSoft’|’CASA’|’ARL’, Default is ARL.
- Returns
componentimage, residual
Iterators¶
Functions that define and manipulate images. Images are just data and a World Coordinate System.
-
image_null_iter
(im: data_models.memory_data_models.Image, facets=1, overlap=0) → collections.abc.Iterable[source]¶ One time iterator
- Parameters
im –
facets – Number of image partitions on each axis (2)
overlap – overlap in pixels
- Returns
-
image_raster_iter
(im: data_models.memory_data_models.Image, facets=1, overlap=0, taper='flat', make_flat=False) → collections.abc.Iterable[source]¶ Create an image_raster_iter generator, returning images, optionally with overlaps
The WCS is adjusted appropriately for each raster element. Hence this is a coordinate-aware way to iterate through an image.
Provided we don’t break reference semantics, memory should be conserved. However make_flat creates a new set of images and thus reference semantics don’t hold.
To update the image in place:
for r in image_raster_iter(im, facets=2):
    r.data[...] = numpy.sqrt(r.data[...])
If the overlap is greater than zero, we choose to keep all images the same size so the outer ring of facets is ignored. So if facets=4 and overlap > 0 then the iterator returns (facets-2)**2 = 4 images.
A taper is applied in the overlap regions. None implies a constant value, linear is a ramp, and quadratic is parabolic at the ends.
- Parameters
im – Image
facets – Number of image partitions on each axis (2)
overlap – overlap in pixels
taper – method of tapering at the edges: ‘flat’ or ‘linear’ or ‘quadratic’ or ‘tukey’
make_flat – Make the flat images
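A minimal sketch of facet iteration, assuming im is an Image in canonical format; since the generator returns views with adjusted WCS, in-place updates propagate back to im:
# Iterate over 2x2 facets; each facet carries a WCS adjusted to its position.
# overlap= and taper= may be added, at the cost of the outer ring of facets.
for facet in image_raster_iter(im, facets=2):
    facet.data[...] *= 2.0   # in-place update propagates back to im via reference semantics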
-
image_channel_iter
(im: data_models.memory_data_models.Image, subimages=1) → collections.abc.Iterable[source]¶ Create a image_channel_iter generator, returning images
The WCS is adjusted appropriately for each channel sub-image. Hence this is a coordinate-aware way to iterate through an image.
Provided we don’t break reference semantics, memory should be conserved
To update the image in place:
for subim in image_channel_iter(im, subimages=4):
    subim.data[...] = numpy.sqrt(subim.data[...])
- Parameters
im – Image
subimages – Number of subimages
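A hedged sketch of channel iteration over an image im, splitting the frequency axis into four sub-cubes:
# Each sub-image holds a contiguous block of channels with its WCS adjusted accordingly.
for subim in image_channel_iter(im, subimages=4):
    print(subim.data.shape)   # roughly [nchan // 4, npol, ny, nx]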
Calibration¶
Calibration¶
Functions to solve for antenna/station gain
This uses an iterative substitution algorithm due to Larry D’Addario, circa 1980, as used in the original VLA Dec-10 Antsol.
For example:
gtsol = solve_gaintable(vis, originalvis, phase_only=True, niter=niter, crosspol=False, tol=1e-6)
vis = apply_gaintable(vis, gtsol, inverse=True)
-
solve_gaintable
(vis: data_models.memory_data_models.BlockVisibility, modelvis: data_models.memory_data_models.BlockVisibility = None, gt=None, phase_only=True, niter=30, tol=1e-08, crosspol=False, normalise_gains=True, **kwargs) → data_models.memory_data_models.GainTable[source]¶ Solve a gain table by fitting an observed visibility to a model visibility
If modelvis is None, a point source model is assumed.
- Parameters
vis – BlockVisibility containing the observed data_models
modelvis – BlockVisibility containing the visibility predicted by a model
gt – Existing gaintable
phase_only – Solve only for the phases (default=True)
niter – Number of iterations (default 30)
tol – Iteration stops when the fractional change in the gain solution is below this tolerance
crosspol – Do solutions including cross polarisations i.e. XY, YX or RL, LR
- Returns
GainTable containing solution
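A hedged sketch of a phase-only self-calibration step, assuming bvis is an observed BlockVisibility and model_bvis the corresponding model visibility:
# Solve for per-antenna gains against the model, inspect the solution quality,
# then apply the inverse gains to correct the observed data.
gt = solve_gaintable(bvis, model_bvis, phase_only=True, niter=30, tol=1e-8)
print(qa_gaintable(gt))
bvis = apply_gaintable(bvis, gt, inverse=True)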
Operations¶
Functions for calibration, including creation of gaintables, application of gaintables, and merging gaintables.
-
gaintable_summary
(gt: data_models.memory_data_models.GainTable)[source]¶ Return string summarizing the Gaintable
-
create_gaintable_from_blockvisibility
(vis: data_models.memory_data_models.BlockVisibility, timeslice=None, frequencyslice: float = None, **kwargs) → data_models.memory_data_models.GainTable[source]¶ Create gain table from visibility.
This makes an empty gain table consistent with the BlockVisibility.
- Parameters
vis – BlockVisibility
timeslice – Time interval between solutions (s)
frequencyslice – Frequency solution width (Hz)
- Returns
GainTable
-
apply_gaintable
(vis: data_models.memory_data_models.BlockVisibility, gt: data_models.memory_data_models.GainTable, inverse=False, vis_slices=None, **kwargs) → data_models.memory_data_models.BlockVisibility[source]¶ Apply a gain table to a block visibility
The corrected visibility is:
V_corrected = {g_i * g_j^*}^-1 V_obs
If the visibility data are polarised e.g. polarisation_frame(“linear”) then the inverse operator represents an actual inverse of the gains.
- Parameters
vis – Visibility to have gains applied
gt – Gaintable to be applied
inverse – Apply the inverse (default=False)
- Returns
input vis with gains applied
-
append_gaintable
(gt: data_models.memory_data_models.GainTable, othergt: data_models.memory_data_models.GainTable) → data_models.memory_data_models.GainTable[source]¶ Append othergt to gt
- Parameters
gt –
othergt –
- Returns
GainTable gt + othergt
-
copy_gaintable
(gt: data_models.memory_data_models.GainTable, zero=False)[source]¶ Copy a GainTable
Performs a deepcopy of the data array
-
create_gaintable_from_rows
(gt: data_models.memory_data_models.GainTable, rows: numpy.ndarray, makecopy=True) → data_models.memory_data_models.GainTable[source]¶ Create a GainTable from selected rows
- Parameters
gt – GainTable
rows – Boolean array of row selection
makecopy – Make a deep copy (True)
- Returns
GainTable
-
qa_gaintable
(gt: data_models.memory_data_models.GainTable, context=None) → data_models.memory_data_models.QA[source]¶ Assess the quality of a gaintable
- Parameters
gt –
- Returns
QA
-
gaintable_plot
(gt: data_models.memory_data_models.GainTable, ax, title='', value='amp', ants=None, channels=None, label_max=10, **kwargs)[source]¶ Standard plot of gain table
- Parameters
gt – Gaintable
ax – matplotlib axes
value – ‘amp’ or ‘phase’ or ‘residual’
ants – Antennas to plot
channels – Channels to plot
kwargs –
- Returns
Imaging¶
Base¶
Functions that aid Fourier transform processing. These are built on top of the core functions in processing_library.fourier_transforms.
The measurement equation for a sufficiently narrow field of view interferometer is:
\[V(u,v,w) = \int I(l,m) e^{-2 \pi j (ul+vm)} dl dm\]
The measurement equation for a wide field of view interferometer is:
\[V(u,v,w) = \int \frac{I(l,m)}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm + w(\sqrt{1-l^2-m^2}-1))} dl dm\]
This and related modules contain various approaches for dealing with the wide-field problem, where the extra phase term in the Fourier transform cannot be ignored.
-
shift_vis_to_image
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], im: data_models.memory_data_models.Image, tangent: bool = True, inverse: bool = False) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]¶ Shift visibility to the FFT phase centre of the image in place
- Parameters
vis – Visibility data
im – Image model used to determine phase centre
tangent – Is the shift purely on the tangent plane True|False
inverse – Do the inverse operation True|False
- Returns
visibility with phase shift applied and phasecentre updated
-
normalize_sumwt
(im: data_models.memory_data_models.Image, sumwt) → data_models.memory_data_models.Image[source]¶ Normalize out the sum of weights
- Parameters
im – Image, im.data has shape [nchan, npol, ny, nx]
sumwt – Sum of weights [nchan, npol]
-
predict_2d
(vis: Union[data_models.memory_data_models.BlockVisibility, data_models.memory_data_models.Visibility], model: data_models.memory_data_models.Image, gcfcf=None, **kwargs) → Union[data_models.memory_data_models.BlockVisibility, data_models.memory_data_models.Visibility][source]¶ Predict using convolutional degridding.
This is at the bottom of the layering i.e. all transforms are eventually expressed in terms of this function. Any shifting needed is performed here.
- Parameters
vis – Visibility to be predicted
model – model image
gcfcf – (Grid correction function i.e. in image space, Convolution function i.e. in uv space)
- Returns
resulting visibility (in place works)
-
invert_2d
(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, dopsf: bool = False, normalize: bool = True, gcfcf=None, **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'numpy.ndarray'>)[source]¶ Invert using 2D convolution function, using the specified convolution function
Use the image im as a template. Do PSF in a separate call.
This is at the bottom of the layering i.e. all transforms are eventually expressed in terms of this function. Any shifting needed is performed here.
- Parameters
vis – Visibility to be inverted
im – image template (not changed)
dopsf – Make the psf instead of the dirty image
normalize – Normalize by the sum of weights (True)
gcfcf – (Grid correction function i.e. in image space, Convolution function i.e. in uv space)
- Returns
resulting image
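A hedged sketch of the predict/invert round trip at the 2d level, assuming vt is a Visibility and model an Image in canonical format:
# Fill the visibility by degridding the model, then grid back to a dirty image and PSF.
vt = predict_2d(vt, model)
dirty, sumwt = invert_2d(vt, model)
psf, sumwt = invert_2d(vt, model, dopsf=True)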
-
predict_skycomponent_visibility
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], sc: Union[data_models.memory_data_models.Skycomponent, List[data_models.memory_data_models.Skycomponent]]) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]¶ Predict the visibility from a Skycomponent, add to existing visibility, for Visibility or BlockVisibility
- Parameters
vis – Visibility or BlockVisibility
sc – Skycomponent or list of SkyComponents
- Returns
Visibility or BlockVisibility
-
create_image_from_visibility
(vis: Union[data_models.memory_data_models.BlockVisibility, data_models.memory_data_models.Visibility], **kwargs) → data_models.memory_data_models.Image[source]¶ Make an empty image from params and Visibility
This makes an empty, template image consistent with the visibility, allowing optional overriding of select parameters. This is a convenience function and does not transform the visibilities.
- Parameters
vis –
phasecentre – Phasecentre (Skycoord)
channel_bandwidth – Channel width (Hz)
cellsize – Cellsize (radians)
npixel – Number of pixels on each axis (512)
frame – Coordinate frame for WCS (ICRS)
equinox – Equinox for WCS (2000.0)
nchan – Number of image channels (Default is 1 -> MFS)
- Returns
image
-
advise_wide_field
(vis: Union[data_models.memory_data_models.BlockVisibility, data_models.memory_data_models.Visibility], delA=0.02, oversampling_synthesised_beam=3.0, guard_band_image=6.0, facets=1, wprojection_planes=1, verbose=True)[source]¶ Advise on parameters for wide field imaging.
Calculate sampling requirements on various parameters
For example:
advice = advise_wide_field(vis, delA)
wstep = get_parameter(kwargs, 'wstep', advice['w_sampling_primary_beam'])
- Parameters
vis –
delA – Allowed coherence loss (def: 0.02)
oversampling_synthesised_beam – Oversampling of the synthesized beam (def: 3.0)
guard_band_image – Number of primary beam half-widths-to-half-maximum to image (def: 6)
facets – Number of facets on each axis
wprojection_planes – Number of planes in wprojection
- Returns
dict of advice
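A hedged sketch of acting on the advice, using only the dictionary key shown in the example above; any other keys in the returned dict should be checked before use:
# Ask for advice on wide-field imaging of vt, then pick a w sampling from it.
advice = advise_wide_field(vt, delA=0.02, oversampling_synthesised_beam=3.0,
                           guard_band_image=6.0, wprojection_planes=1)
wstep = advice['w_sampling_primary_beam']   # key taken from the example above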
Timeslice¶
The w-term can be viewed as a time-variable distortion. Approximating the array as instantaneously co-planar, we have that w can be expressed in terms of u,v:
\[w = a u + b v\]
Transforming to a new coordinate system:
\[l' = l + a ( \sqrt{1-l^2-m^2}-1 )\]
\[m' = m + b ( \sqrt{1-l^2-m^2}-1 )\]
Ignoring changes in the normalisation term, we have:
\[V(u,v,w) = \int \frac{I(l',m')}{\sqrt{1-l'^2-m'^2}} e^{-2 \pi j (ul'+vm')} dl' dm'\]
-
fit_uvwplane_only
(vis: data_models.memory_data_models.Visibility) -> (<class 'float'>, <class 'float'>)[source]¶ Fit the best fitting plane p u + q v = w
- Parameters
vis – visibility to be fitted
- Returns
direction cosines defining plane
-
fit_uvwplane
(vis: data_models.memory_data_models.Visibility, remove=False) -> (<class 'data_models.memory_data_models.Image'>, <class 'float'>, <class 'float'>)[source]¶ Fit and optionally remove the best fitting plane p u + q v = w
- Parameters
vis – visibility to be fitted
remove – Remove the fitted w permanently from vis?
- Returns
direction cosines defining plane
-
predict_timeslice_single
(vis: data_models.memory_data_models.Visibility, model: data_models.memory_data_models.Image, predict=<function predict_2d>, remove=True, gcfcf=None, **kwargs) → data_models.memory_data_models.Visibility[source]¶ Predict using a single time slice.
This fits a single plane and corrects the image geometry.
- Parameters
vis – Visibility to be predicted
model – model image
predict –
remove – Remove fitted w (so that wprojection will do the right thing)
gcfcf – (Grid correction function, convolution function)
- Returns
resulting visibility (in place works)
-
invert_timeslice_single
(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, dopsf, normalize=True, remove=True, gcfcf=None, **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'numpy.ndarray'>)[source]¶ Process single time slice
Extracted for re-use in the parallel version.
- Parameters
vis – Visibility to be inverted
im – image template (not changed)
dopsf – Make the psf instead of the dirty image
gcfcf – (Grid correction function, convolution function)
normalize – Normalize by the sum of weights (True)
WStack¶
The w-stacking or w-slicing approach is to partition the visibility data by slices in w. The measurement equation is approximated as:
\[V(u,v,w) = \sum_i \int \frac{I(l,m) e^{-2 \pi j w_i(\sqrt{1-l^2-m^2}-1)}}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm)} dl dm\]
If images constructed from slices in w are added after applying a w-dependent image plane correction, the w term will be corrected.
-
predict_wstack_single
(vis, model, remove=True, gcfcf=None, **kwargs) → data_models.memory_data_models.Visibility[source]¶ Predict using a single w slice.
This processes a single w plane, rotating out the w beam for the average w
- Parameters
vis – Visibility to be predicted
model – model image
- Returns
resulting visibility (in place works)
-
invert_wstack_single
(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, dopsf, normalize=True, remove=True, gcfcf=None, **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'numpy.ndarray'>)[source]¶ Process single w slice
- Parameters
vis – Visibility to be inverted
im – image template (not changed)
dopsf – Make the psf instead of the dirty image
normalize – Normalize by the sum of weights (True)
Weighting¶
Functions that aid weighting the visibility data prior to imaging.
- There are two classes of functions:
Changing the weight dependent on noise level or sample density or a combination
Tapering the weight spatially to avoid effects of sharp edges or to emphasize a given scale size in the image
-
weight_visibility
(vis, model, gcfcf=None, weighting='uniform', **kwargs)[source]¶ Weight the visibility data
This is done collectively so the weights are summed over all visibilities and then corrected
- Parameters
vis –
model – Model required to determine weighting parameters
weighting – Type of weighting
kwargs – Parameters for functions in graphs
- Returns
Visibility with imaging_weight column modified
-
taper_visibility_gaussian
(vis: data_models.memory_data_models.Visibility, beam=None) → data_models.memory_data_models.Visibility[source]¶ Taper the visibility weights
These are cumulative. The imaging_weights can be reset using processing_library.imaging.weighting.weight_visibility.
- Parameters
vis – Visibility with imaging_weight’s to be tapered
beam – desired resolution (Full width half maximum, radians)
- Returns
visibility with imaging_weight column modified
-
taper_visibility_tukey
(vis: data_models.memory_data_models.Visibility, tukey=0.1) → data_models.memory_data_models.Visibility[source]¶ Taper the visibility weights
This algorithm is present in WSClean.
See https://sourceforge.net/p/wsclean/wiki/Tapering
tukey: a circular taper that smooths the outer edge set by -maxuv-l
inner-tukey: a circular taper that smooths the inner edge set by -minuv-l
edge-tukey: a square-shaped taper that smooths the edge set by the uv grid and -taper-edge
These are cumulative. The imaging_weights can be reset using processing_library.imaging.weighting.weight_visibility.
- Parameters
vis – Visibility with imaging_weight’s to be tapered
- Returns
visibility with imaging_weight column modified
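A hedged sketch combining re-weighting and tapering before imaging, assuming vt is a Visibility, model a template Image, and that weight_visibility returns the re-weighted Visibility:
# Uniform weighting first, then a Gaussian taper towards ~0.01 rad resolution.
vt = weight_visibility(vt, model, weighting='uniform')
vt = taper_visibility_gaussian(vt, beam=0.01)
dirty, sumwt = invert_2d(vt, model)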
Skycomponent¶
Operations¶
Function to manage sky components.
-
create_skycomponent
(direction: astropy.coordinates.sky_coordinate.SkyCoord, flux: numpy.array, frequency: numpy.array, shape: str = 'Point', polarisation_frame=<data_models.polarisation.PolarisationFrame object>, params: dict = None, name: str = '') → data_models.memory_data_models.Skycomponent[source]¶ A single Skycomponent with direction, flux, shape, and params for the shape
- Parameters
polarisation_frame –
params –
direction –
flux –
frequency –
shape – ‘Point’ or ‘Gaussian’
name –
- Returns
Skycomponent
-
find_nearest_skycomponent_index
(home, comps) → int[source]¶ Find nearest component in a list to a given direction (home)
- Parameters
home – Home direction
comps – list of skycomponents
- Returns
index of best in comps
-
find_nearest_skycomponent
(home: astropy.coordinates.sky_coordinate.SkyCoord, comps) -> (<class 'data_models.memory_data_models.Skycomponent'>, <class 'float'>)[source]¶ Find nearest component to a given direction
- Parameters
home – Home direction
comps – list of skycomponents
- Returns
Nearest component and its separation
-
find_separation_skycomponents
(comps_test, comps_ref=None)[source]¶ Find the matrix of separations for two lists of components
- Parameters
comps_test – List of components to be tested
comps_ref – If None then set to comps_test
- Returns
-
find_skycomponent_matches_atomic
(comps_test, comps_ref, tol=1e-07)[source]¶ Match a list of candidates to a reference set of skycomponents
find_skycomponent_matches is faster since it uses the astropy catalog matching
many to one is allowed.
- Parameters
tol –
comps_test –
comps_ref –
- Returns
-
find_skycomponent_matches
(comps_test, comps_ref, tol=1e-07)[source]¶ Match a list of candidates to a reference set of skycomponents
many to one is allowed.
- Parameters
comps_test –
comps_ref –
tol – Tolerance in radians for a match
- Returns
-
select_components_by_separation
(home, comps, rmax=6.283185307179586, rmin=0.0) → [<class 'data_models.memory_data_models.Skycomponent'>][source]¶ Select components with a range in separation
- Parameters
home – Home direction
comps – list of skycomponents
rmin – minimum range
rmax – maximum range
- Returns
selected components
-
select_neighbouring_components
(comps, target_comps)[source]¶ Assign components to nearest in the target
- Parameters
comps –
target_comps –
- Returns
Indices of components in target_comps
-
remove_neighbouring_components
(comps, distance)[source]¶ Remove the faintest of a pair of components that are within a specified distance
- Parameters
comps –
distance – Minimum distance
- Returns
Indices of retained components, selected components
-
find_skycomponents
(im: data_models.memory_data_models.Image, fwhm=1.0, threshold=1.0, npixels=5) → List[data_models.memory_data_models.Skycomponent][source]¶ Find gaussian components in Image above a certain threshold as Skycomponent
- Parameters
im – Image to be searched
fwhm – Full width half maximum of gaussian in pixels
threshold – Threshold for component detection. Default: 1 Jy.
npixels – Number of connected pixels required
- Returns
list of sky components
-
apply_beam_to_skycomponent
(sc: Union[data_models.memory_data_models.Skycomponent, List[data_models.memory_data_models.Skycomponent]], beam: data_models.memory_data_models.Image) → Union[data_models.memory_data_models.Skycomponent, List[data_models.memory_data_models.Skycomponent]][source]¶ Apply a primary beam to a Skycomponent or list of Skycomponents
- Parameters
beam –
sc – SkyComponent or list of SkyComponents
- Returns
List of skycomponents
-
filter_skycomponents_by_index
(sc, indices)[source]¶ Filter sky components by index
- Parameters
sc –
indices –
- Returns
-
filter_skycomponents_by_flux
(sc, flux_min=-inf, flux_max=inf)[source]¶ Filter sky components by stokes I flux
- Parameters
sc –
flux_min –
flux_max –
- Returns
-
insert_skycomponent
(im: data_models.memory_data_models.Image, sc: Union[data_models.memory_data_models.Skycomponent, List[data_models.memory_data_models.Skycomponent]], insert_method='Nearest', bandwidth=1.0, support=8) → data_models.memory_data_models.Image[source]¶ Insert a Skycomponent into an image
- Parameters
im –
sc – SkyComponent or list of SkyComponents
insert_method – ‘Nearest’ | ‘Sinc’ | ‘Lanczos’
bandwidth – Fraction of uv plane to optimise over (1.0)
support – Support of kernel (8)
- Returns
image
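A hedged sketch of a component round trip, assuming model is an Image in canonical format and comps a list of Skycomponents:
# Insert the components into the model image, then recover and match them.
model = insert_skycomponent(model, comps, insert_method='Nearest')
recovered = find_skycomponents(model, fwhm=1.0, threshold=0.5, npixels=5)
matches = find_skycomponent_matches(recovered, comps, tol=1e-5)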
-
voronoi_decomposition
(im, comps)[source]¶ Construct a Voronoi decomposition of a set of components
The array return contains the index into the Voronoi structure
- Parameters
im –
comps –
- Returns
Voronoi structure, vertex image
-
image_voronoi_iter
(im: data_models.memory_data_models.Image, components: data_models.memory_data_models.Skycomponent) → collections.abc.Iterable[source]¶ Iterate through Voronoi decomposition, returning a generator yielding fullsize images
- Parameters
im – Image
components – Components to define Voronoi decomposition
Visibility¶
Base¶
Base simple visibility operations, placed here to avoid circular dependencies
-
vis_summary
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility])[source]¶ Return string summarizing the Visibility
-
copy_visibility
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], zero=False) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]¶ Copy a visibility
Performs a deepcopy of the data array
-
create_visibility
(config: data_models.memory_data_models.Configuration, times: numpy.array, frequency: numpy.array, channel_bandwidth, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord, weight: float, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, integration_time=1.0, zerow=False, elevation_limit=0.2617993877991494, source='unknown', meta=None) → data_models.memory_data_models.Visibility[source]¶ Create a Visibility from Configuration, hour angles, and direction of source
Note that we keep track of the integration time for BDA purposes
- Parameters
config – Configuration of antennas
times – hour angles in radians
frequency – frequencies (Hz) [nchan]
weight – weight of a single sample
phasecentre – phasecentre of observation
channel_bandwidth – channel bandwidths (Hz) [nchan]
integration_time – Integration time (‘auto’ or value in s)
polarisation_frame – PolarisationFrame(‘stokesI’)
- Returns
Visibility
-
create_blockvisibility
(config: data_models.memory_data_models.Configuration, times: numpy.array, frequency: numpy.array, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord, weight: float = 1.0, polarisation_frame: data_models.polarisation.PolarisationFrame = None, integration_time=1.0, channel_bandwidth=1000000.0, zerow=False, elevation_limit=None, source='unknown', meta=None, **kwargs) → data_models.memory_data_models.BlockVisibility[source]¶ Create a BlockVisibility from Configuration, hour angles, and direction of source
Note that we keep track of the integration time for BDA purposes
- Parameters
config – Configuration of antennas
times – hour angles in radians
frequency – frequencies (Hz) [nchan]
weight – weight of a single sample
phasecentre – phasecentre of observation
channel_bandwidth – channel bandwidths (Hz) [nchan]
integration_time – Integration time (‘auto’ or value in s)
polarisation_frame –
- Returns
BlockVisibility
-
create_visibility_from_rows
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], rows: numpy.ndarray, makecopy=True)[source]¶ Create a Visibility from selected rows
- Parameters
vis – Visibility
rows – Boolean array of row selection
makecopy – Make a deep copy (True)
- Returns
Visibility
-
phaserotate_visibility
(vis: data_models.memory_data_models.Visibility, newphasecentre: astropy.coordinates.sky_coordinate.SkyCoord, tangent=True, inverse=False) → data_models.memory_data_models.Visibility[source]¶ Phase rotate from the current phase centre to a new phase centre
If tangent is False the uvw are recomputed and the visibility phasecentre is updated. Otherwise only the visibility phases are adjusted
- Parameters
vis – Visibility to be rotated
newphasecentre –
tangent – Stay on the same tangent plane? (True)
inverse – Actually do the opposite
- Returns
Visibility
-
export_blockvisibility_to_ms
(msname, vis_list, source_name=None, ack=False)[source]¶ Minimal BlockVisibility to MS converter
The MS format is much more general than the ARL BlockVisibility so we cut many corners. This requires casacore to be installed; if it is not, a ModuleNotFoundError exception is raised.
Write a list of BlockVisibility’s to a MS file, split by field and spectral window
- Parameters
msname – File name of MS
vis_list – List of BlockVisibility
- Returns
-
list_ms
(msname, ack=False)[source]¶ List sources and data descriptors in a MeasurementSet
- Parameters
msname – File name of MS
- Returns
-
create_blockvisibility_from_ms
(msname, channum=None, start_chan=None, end_chan=None, ack=False, datacolumn='DATA', selected_sources=None, selected_dds=None)[source]¶ Minimal MS to BlockVisibility converter
The MS format is much more general than the ARL BlockVisibility so we cut many corners. This requires casacore to be installed; if it is not, a ModuleNotFoundError exception is raised.
Creates a list of BlockVisibility’s, split by field and spectral window
Reading of a subset of channels is possible using either start_chan and end_chan or channum. Using start_chan and end_chan is preferred since it only reads the channels required. channum is more flexible and can be used to read a random list of channels.
- Parameters
msname – File name of MS
channum – range of channels e.g. range(17,32), default is None meaning all
start_chan – Starting channel to read
end_chan – End channel to read
- Returns
-
create_visibility_from_ms
(msname, channum=None, start_chan=None, end_chan=None, ack=False)[source]¶ Minimal MS to BlockVisibility converter
The MS format is much more general than the ARL BlockVisibility so we cut many corners. This requires casacore to be installed; if it is not, a ModuleNotFoundError exception is raised.
Creates a list of BlockVisibility’s, split by field and spectral window
Reading of a subset of channels is possible using either start_chan and end_chan or channum. Using start_chan and end_chan is preferred since it only reads the channels required. channum is more flexible and can be used to read a random list of channels.
- Parameters
msname – File name of MS
channum – range of channels e.g. range(17,32), default is None meaning all
start_chan – Starting channel to read
end_chan – End channel to read
- Returns
-
create_blockvisibility_from_uvfits
(fitsname, channum=None, ack=False, antnum=None)[source]¶ Minimal UVFITS to BlockVisibility converter
The UVFITS format is much more general than the ARL BlockVisibility so we cut many corners.
Creates a list of BlockVisibility’s, split by field and spectral window
- Parameters
fitsname – File name of UVFITS
channum – range of channels e.g. range(17,32), default is None meaning all
antnum – the number of antennas
- Returns
-
create_visibility_from_uvfits
(fitsname, channum=None, ack=False, antnum=None)[source]¶ Minimal UVFITS to BlockVisibility converter
Creates a list of BlockVisibility’s, split by field and spectral window
- Parameters
fitsname – File name of UVFITS file
channum – range of channels e.g. range(17,32), default is None meaning all
antnum – the number of antennas
- Returns
Operations¶
Visibility operations
-
append_visibility
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], othervis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility]) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]¶ Append othervis to vis
- Parameters
vis –
othervis –
- Returns
Visibility vis + othervis
-
sort_visibility
(vis, order=None)[source]¶ Sort a visibility on a given column
- Parameters
vis –
order – Array of strings naming the columns to be used for sorting
- Returns
-
concatenate_visibility
(vis_list, sort=True)[source]¶ Concatenate a list of visibilities, with an optional sort back to index order
- Parameters
vis_list –
- Returns
Visibility
-
concatenate_blockvisibility_frequency
(bvis_list)[source]¶ Concatenate a list of BlockVisibility’s in frequency
- Parameters
bvis_list –
- Returns
BlockVisibility
-
sum_visibility
(vis: data_models.memory_data_models.Visibility, direction: astropy.coordinates.sky_coordinate.SkyCoord) → numpy.array[source]¶ Direct Fourier summation in a given direction
- Parameters
vis – Visibility to be summed
direction – Direction of summation
- Returns
flux[nch,npol], weight[nch,npol]
-
subtract_visibility
(vis, model_vis, inplace=False)[source]¶ Subtract model_vis from vis, returning new visibility
- Parameters
vis –
model_vis –
- Returns
-
qa_visibility
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], context=None) → data_models.memory_data_models.QA[source]¶ Assess the quality of Visibility
- Parameters
context –
vis – Visibility to be assessed
- Returns
QA
-
remove_continuum_blockvisibility
(vis: data_models.memory_data_models.BlockVisibility, degree=1, mask=None) → data_models.memory_data_models.BlockVisibility[source]¶ Fit and remove continuum visibility
Fit a polynomial in frequency of the specified degree where mask is True
- Parameters
vis –
degree – Degree of polynomial
mask –
- Returns
-
divide_visibility
(vis: data_models.memory_data_models.BlockVisibility, modelvis: data_models.memory_data_models.BlockVisibility)[source]¶ Divide visibility by model forming visibility for equivalent point source
This is a useful intermediate product for calibration. Variation of the visibility in time and frequency due to the model structure is removed and the data can be averaged to a limit determined by the instrumental stability. The weight is adjusted to compensate for the division.
Zero divisions are avoided and the corresponding weight set to zero.
- Parameters
vis –
modelvis –
- Returns
-
integrate_visibility_by_channel
(vis: data_models.memory_data_models.BlockVisibility) → data_models.memory_data_models.BlockVisibility[source]¶ Integrate visibility across channels, returning new visibility
- Parameters
vis –
- Returns
BlockVisibility
-
convert_visibility_to_stokes
(vis)[source]¶ Convert the polarisation frame data into Stokes parameters.
- Parameters
vis – ARL visibility data
- Returns
Converted visibility data
-
convert_blockvisibility_to_stokes
(vis)[source]¶ Convert the polarisation frame data into Stokes parameters.
- Parameters
vis – ARL visibility data
- Returns
Converted visibility data
Coalesce¶
Functions for visibility coalescence and decoalescence.
The BlockVisibility format describes the visibility data_models as it would come from the correlator: [time, ant2, ant1, channel, pol]. This is well-suited to calibration and some visibility processing such as continuum removal. However the BlockVisibility format is vastly oversampled on the short spacings where the visibility (after calibration) varies slowly compared to the longest baselines. The coalescence operation resamples the visibility at a rate inversely proportional to baseline length. This cannot be held in the BlockVisibility format so it is stored in the Visibility format. For SKA1-LOW, for example, coalescing typically reduces the number of visibilities by a factor between 10 and 30.
A typical use might be:
vt = predict_skycomponent_visibility(vt, comps)
cvt = convert_blockvisibility_to_visibility(vt, time_coal=1.0, max_time_coal=100, frequency_coal=0.0,
max_frequency_coal=1)
dirtyimage, sumwt = invert_2d(cvt, model)
-
coalesce_visibility
(vis: data_models.memory_data_models.BlockVisibility, **kwargs) → data_models.memory_data_models.Visibility[source]¶ Coalesce the BlockVisibility data_models. The output format is a Visibility, as needed for imaging
Coalesce by baseline-dependent averaging (optional). The number of integrations averaged goes as the ratio of the maximum possible baseline length to that for this baseline. This number can be scaled by coalescence_factor and limited by max_coalescence.
When faceting, the coalescence factors should be roughly the same as the number of facets on one axis.
If coalescence_factor=0.0 then just a format conversion is done
- Parameters
vis – BlockVisibility to be coalesced
- Returns
Coalesced visibility with cindex and blockvis filled in
-
convert_blockvisibility_to_visibility
(vis: data_models.memory_data_models.BlockVisibility) → data_models.memory_data_models.Visibility[source]¶ Convert the BlockVisibility data with no coalescence
- Parameters
vis – BlockVisibility to be converted
- Returns
Visibility with cindex and blockvis filled in
-
decoalesce_visibility
(vis: data_models.memory_data_models.Visibility, **kwargs) → data_models.memory_data_models.BlockVisibility[source]¶ Decoalesce the visibilities to the original values (opposite of coalesce_visibility)
This relies upon the block vis and the index being part of the vis. Needs the index generated by coalesce_visibility
- Parameters
vis – (Coalesced visibility)
- Returns
BlockVisibility with vis and weight columns overwritten
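A hedged sketch of the coalesce/decoalesce round trip, assuming bvis is a BlockVisibility and model a template Image; the coalescence keywords follow the example earlier in this section and are passed via kwargs:
# Baseline-dependent averaging into a Visibility, imaging, and expansion back.
cvis = coalesce_visibility(bvis, time_coal=1.0, max_time_coal=100,
                           frequency_coal=0.0, max_frequency_coal=1)
dirty, sumwt = invert_2d(cvis, model)
bvis_filled = decoalesce_visibility(cvis)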
-
average_in_blocks
(vis, uvw, wts, imaging_wts, times, integration_time, frequency, channel_bandwidth, time_coal=1.0, max_time_coal=100, frequency_coal=1.0, max_frequency_coal=100)[source]¶ Average visibility in blocks
- Parameters
vis –
uvw –
wts –
imaging_wts –
times –
integration_time –
frequency –
channel_bandwidth –
time_coal –
max_time_coal –
frequency_coal –
max_frequency_coal –
- Returns
Gather/Scatter¶
Visibility iterators for iterating through a BlockVisibility or Visibility.
A typical use would be to make a sequence of snapshot images:
for rows in vis_timeslice_iter(vt, vis_slices=vis_slices):
visslice = create_visibility_from_rows(vt, rows)
dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1)
dirtySnapshot, sumwt = invert_2d(visslice, dirtySnapshot)
-
visibility_scatter
(vis: data_models.memory_data_models.Visibility, vis_iter, vis_slices=1) → List[data_models.memory_data_models.Visibility][source]¶ Scatter a visibility into a list of subvisibilities
If vis_iter is over time then the type of the output visibilities will be the same as the input. If vis_iter is over w then the type of the output visibilities will always be Visibility.
- Parameters
vis – Visibility
vis_iter – visibility iterator
vis_slices – Number of slices to be made
- Returns
list of subvisibilities
-
visibility_gather
(visibility_list: List[data_models.memory_data_models.Visibility], vis: data_models.memory_data_models.Visibility, vis_iter, vis_slices=None) → data_models.memory_data_models.Visibility[source]¶ Gather a list of subvisibilities back into a visibility
The iterator setup must be the same as used in the scatter.
- Parameters
visibility_list – List of subvisibilities
vis – Output visibility
vis_iter – visibility iterator
vis_slices – Number of slices to be gathered (optional)
- Returns
vis
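A hedged sketch of a scatter/process/gather cycle over time slices, assuming vt is a Visibility:
# Scatter into 10 time slices, taper each slice, then gather back into one Visibility.
slices = visibility_scatter(vt, vis_iter=vis_timeslice_iter, vis_slices=10)
slices = [taper_visibility_gaussian(s, beam=0.01) for s in slices]
vt = visibility_gather(slices, vt, vis_iter=vis_timeslice_iter, vis_slices=10)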
Iterators¶
Visibility iterators for iterating through a BlockVisibility or Visibility.
A typical use would be to make a sequence of snapshot images:
for rows in vis_timeslice_iter(vt):
visslice = create_visibility_from_rows(vt, rows)
dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1)
dirtySnapshot, sumwt = invert_2d(visslice, dirtySnapshot)
-
vis_null_iter
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], vis_slices=1) → numpy.ndarray[source]¶ One-time iterator returning True for all rows
- Parameters
vis –
vis_slices –
- Returns
-
vis_timeslice_iter
(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], vis_slices=None) → numpy.ndarray[source]¶ Time slice iterator
- Parameters
vis –
vis_slices – Number of time slices
- Returns
Boolean array with selected rows=True
-
vis_timeslices
(vis: data_models.memory_data_models.Visibility, timeslice='auto') → int[source]¶ Calculate number of time slices in a visibility
- Parameters
vis – Visibility
timeslice – ‘auto’ or float (seconds)
- Returns
Number of slices
Util¶
Testing Support¶
Functions that aid testing in various ways. A typical use would be:
lowcore = create_named_configuration('LOWBD2-CORE')
times = numpy.linspace(-3, +3, 13) * (numpy.pi / 12.0)
frequency = numpy.array([1e8])
channel_bandwidth = numpy.array([1e7])
# Define the component and give it some polarisation and spectral behaviour
f = numpy.array([100.0])
flux = numpy.array([f])
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-35.0 * u.deg, frame='icrs', equinox='J2000')
compabsdirection = SkyCoord(ra=17.0 * u.deg, dec=-36.5 * u.deg, frame='icrs', equinox='J2000')
comp = create_skycomponent(flux=flux, frequency=frequency, direction=compabsdirection,
polarisation_frame=PolarisationFrame('stokesI'))
image = create_test_image(frequency=frequency, phasecentre=phasecentre,
                          cellsize=0.001,
                          polarisation_frame=PolarisationFrame('stokesI'))
vis = create_visibility(lowcore, times=times, frequency=frequency,
channel_bandwidth=channel_bandwidth,
phasecentre=phasecentre, weight=1,
polarisation_frame=PolarisationFrame('stokesI'),
integration_time=1.0)
-
create_test_image
(canonical=True, cellsize=None, frequency=None, channel_bandwidth=None, phasecentre=None, polarisation_frame=<data_models.polarisation.PolarisationFrame object>) → data_models.memory_data_models.Image[source]¶ Create a useful test image
This is the test image M31 widely used in ALMA and other simulations. It is actually part of an H-alpha region in M31.
- Parameters
canonical – Make the image into a 4 dimensional image
cellsize –
frequency – Frequency (array) in Hz
channel_bandwidth – Channel bandwidth (array) in Hz
phasecentre – Phase centre of image (SkyCoord)
polarisation_frame – Polarisation frame
- Returns
Image
-
create_test_image_from_s3
(npixel=16384, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, cellsize=1.5e-05, frequency=array([1.e+08]), channel_bandwidth=array([1000000.]), phasecentre=None, fov=20, flux_limit=0.001) → data_models.memory_data_models.Image[source]¶ Create MID test image from S3
- The input catalog was generated at http://s-cubed.physics.ox.ac.uk/s3_sex using the following query::
Database: s3_sex
SQL: select * from Galaxies where (pow(10,itot_151)*1000 > 1.0) and (right_ascension between -5 and 5) and (declination between -5 and 5);
Number of rows returned: 29966
For frequencies < 610MHz, there are three tables to use:
data/models/S3_151MHz_10deg.csv, use fov=10
data/models/S3_151MHz_20deg.csv, use fov=20
data/models/S3_151MHz_40deg.csv, use fov=40
For frequencies > 610MHz, there are three tables:
data/models/S3_1400MHz_1mJy_10deg.csv, use flux_limit >= 1e-3
data/models/S3_1400MHz_100uJy_10deg.csv, use flux_limit < 1e-3
data/models/S3_1400MHz_1mJy_18deg.csv, use flux_limit >= 1e-3
data/models/S3_1400MHz_100uJy_18deg.csv, use flux_limit < 1e-3
The component spectral index is calculated from the 610MHz and 151MHz or 1400MHz and 610MHz, and then calculated for the specified frequencies.
If polarisation_frame is not stokesI then the image will have a polarised axis but the values will be zero.
- Parameters
npixel – Number of pixels
polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
cellsize – cellsize in radians
frequency –
channel_bandwidth – Channel width (Hz)
phasecentre – phasecentre (SkyCoord)
fov – fov 10 | 20 | 40
flux_limit – Minimum flux (Jy)
- Returns
Image
-
create_test_skycomponents_from_s3
(polarisation_frame=<data_models.polarisation.PolarisationFrame object>, frequency=array([1.e+08]), channel_bandwidth=array([1000000.]), phasecentre=None, fov=20, flux_limit=0.001, radius=None)[source]¶ Create test skycomponents from S3
- The input catalog was generated at http://s-cubed.physics.ox.ac.uk/s3_sex using the following query::
Database: s3_sex
SQL: select * from Galaxies where (pow(10,itot_151)*1000 > 1.0) and (right_ascension between -5 and 5) and (declination between -5 and 5);
Number of rows returned: 29966
For frequencies < 610MHz, there are three tables to use:
data/models/S3_151MHz_10deg.csv, use fov=10
data/models/S3_151MHz_20deg.csv, use fov=20
data/models/S3_151MHz_40deg.csv, use fov=40
For frequencies > 610MHz, there are three tables:
data/models/S3_1400MHz_1mJy_10deg.csv, use flux_limit >= 1e-3
data/models/S3_1400MHz_100uJy_10deg.csv, use flux_limit < 1e-3
data/models/S3_1400MHz_1mJy_18deg.csv, use flux_limit >= 1e-3
data/models/S3_1400MHz_100uJy_18deg.csv, use flux_limit < 1e-3
The component spectral index is calculated from the 610MHz and 151MHz or 1400MHz and 610MHz, and then calculated for the specified frequencies.
If polarisation_frame is not stokesI then the image will have a polarised axis but the values will be zero.
- Parameters
npixel – Number of pixels
polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
cellsize – cellsize in radians
frequency –
channel_bandwidth – Channel width (Hz)
phasecentre – phasecentre (SkyCoord)
fov – fov 10 | 20 | 40
flux_limit – Minimum flux (Jy)
- Returns
List of Skycomponents
-
create_low_test_image_from_gleam
(npixel=512, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, cellsize=1.5e-05, frequency=array([1.e+08]), channel_bandwidth=array([1000000.]), phasecentre=None, kind='cubic', applybeam=False, flux_limit=0.1, flux_max=inf, flux_min=-inf, radius=None, insert_method='Nearest') → data_models.memory_data_models.Image[source]¶ Create LOW test image from the GLEAM survey
Stokes I is estimated from a cubic spline fit to the measured fluxes. The polarised flux is always zero.
See http://www.mwatelescope.org/science/gleam-survey The catalog is available from Vizier.
VIII/100 GaLactic and Extragalactic All-sky MWA survey (Hurley-Walker+, 2016)
GaLactic and Extragalactic All-sky Murchison Wide Field Array (GLEAM) survey. I: A low-frequency extragalactic catalogue. Hurley-Walker N., et al., Mon. Not. R. Astron. Soc., 464, 1146-1167 (2017), 2017MNRAS.464.1146H
- Parameters
npixel – Number of pixels
polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
cellsize – cellsize in radians
frequency –
channel_bandwidth – Channel width (Hz)
phasecentre – phasecentre (SkyCoord)
kind – Kind of interpolation (see scipy.interpolate.interp1d) Default: cubic
- Returns
Image
-
create_low_test_skymodel_from_gleam
(npixel=512, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, cellsize=1.5e-05, frequency=array([1.e+08]), channel_bandwidth=array([1000000.]), phasecentre=None, kind='cubic', applybeam=True, flux_limit=0.1, flux_max=inf, flux_threshold=1.0, insert_method='Nearest', telescope='LOW') → data_models.memory_data_models.SkyModel[source]¶ Create LOW test skymodel from the GLEAM survey
Stokes I is estimated from a cubic spline fit to the measured fluxes. The polarised flux is always zero.
See http://www.mwatelescope.org/science/gleam-survey The catalog is available from Vizier.
VIII/100 GaLactic and Extragalactic All-sky MWA survey (Hurley-Walker+, 2016)
GaLactic and Extragalactic All-sky Murchison Wide Field Array (GLEAM) survey. I: A low-frequency extragalactic catalogue. Hurley-Walker N., et al., Mon. Not. R. Astron. Soc., 464, 1146-1167 (2017), 2017MNRAS.464.1146H
- Parameters
telescope –
npixel – Number of pixels
polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
cellsize – cellsize in radians
frequency –
channel_bandwidth – Channel width (Hz)
phasecentre – phasecentre (SkyCoord)
kind – Kind of interpolation (see scipy.interpolate.interp1d) Default: cubic
applybeam – Apply the primary beam?
flux_limit – Weakest component
flux_max – Maximum strength component to be included in components
flux_threshold – Split between components (brighter) and image (weaker)
insert_method – Nearest | PSWF | Lanczos
- Returns
SkyModel
-
create_low_test_skycomponents_from_gleam
(flux_limit=0.1, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, frequency=array([1.e+08]), kind='cubic', phasecentre=None, radius=1.0) → List[data_models.memory_data_models.Skycomponent][source]¶ Create sky components from the GLEAM survey
Stokes I is estimated from a cubic spline fit to the measured fluxes. The polarised flux is always zero.
See http://www.mwatelescope.org/science/gleam-survey The catalog is available from Vizier.
VIII/100 GaLactic and Extragalactic All-sky MWA survey (Hurley-Walker+, 2016)
GaLactic and Extragalactic All-sky Murchison Wide Field Array (GLEAM) survey. I: A low-frequency extragalactic catalogue. Hurley-Walker N., et al., Mon. Not. R. Astron. Soc., 464, 1146-1167 (2017), 2017MNRAS.464.1146H
- Parameters
flux_limit – Only write components brighter than this (Jy)
polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
frequency – Frequencies at which the flux will be estimated
kind – Kind of interpolation (see scipy.interpolate.interp1d) Default: cubic
phasecentre – Desired phase centre (SkyCoord) default None implies all sources
radius – Radius of sources selected around phasecentre (default 1.0 rad)
- Returns
List of Skycomponents
-
replicate_image
(im: data_models.memory_data_models.Image, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, frequency=array([1.e+08])) → data_models.memory_data_models.Image[source]¶ Make a new canonical shape Image, extended along third and fourth axes by replication.
The order of the data is [chan, pol, dec, ra]
- Parameters
frequency –
im –
polarisation_frame – Polarisation_frame
- Returns
Image
-
create_blockvisibility_iterator
(config: data_models.memory_data_models.Configuration, times: numpy.array, frequency: numpy.array, channel_bandwidth, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord, weight: float = 1, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, integration_time=1.0, number_integrations=1, predict=<function predict_2d>, model=None, components=None, phase_error=0.0, amplitude_error=0.0, sleep=0.0, **kwargs)[source]¶ Create a sequence of Visibilities and optionally predicting and coalescing
This is useful mainly for performing large simulations. Do something like:
vis_iter = create_blockvisibility_iterator(config, times, frequency, channel_bandwidth,
                                           phasecentre=phasecentre, weight=1.0,
                                           integration_time=30.0, number_integrations=3)
for i, vis in enumerate(vis_iter):
    if i == 0:
        fullvis = vis
    else:
        fullvis = append_visibility(fullvis, vis)
- Parameters
config – Configuration of antennas
times – hour angles in radians
frequency – frequencies (Hz) Shape [nchan]
weight – weight of a single sample
phasecentre – phasecentre of observation
npol – Number of polarizations
integration_time – Integration time (‘auto’ or value in s)
number_integrations – Number of integrations to be created at each time.
model – Model image to be inserted
components – Components to be inserted
sleep_time – Time to sleep between yields
- Returns
Visibility
-
simulate_gaintable
(gt: data_models.memory_data_models.GainTable, phase_error=0.1, amplitude_error=0.0, smooth_channels=1, leakage=0.0, **kwargs) → data_models.memory_data_models.GainTable[source]¶ Simulate a gain table
- Parameters
phase_error – std of normal distribution, zero mean
amplitude_error – std of log normal distribution
leakage – std of cross hand leakage
smooth_channels – Use bspline over smooth_channels
kwargs –
- Returns
Gaintable
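A hedged sketch of corrupting a simulated observation, assuming bvis is a BlockVisibility produced by create_blockvisibility:
# Build an empty gaintable from the BlockVisibility, fill it with random phase errors,
# and apply it to corrupt the data.
gt = create_gaintable_from_blockvisibility(bvis)
gt = simulate_gaintable(gt, phase_error=0.1, amplitude_error=0.0)
bvis = apply_gaintable(bvis, gt)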
-
simulate_pointingtable
(pt: data_models.memory_data_models.PointingTable, pointing_error, static_pointing_error=None, global_pointing_error=None, seed=None, **kwargs) → data_models.memory_data_models.PointingTable[source]¶ Simulate a pointing table
- Parameters
pointing_error – std of normal distribution (radians)
static_pointing_error – std of normal distribution (radians)
global_pointing_error – 2-vector of global pointing error (rad)
kwargs –
- Returns
PointingTable
-
simulate_pointingtable_from_timeseries
(pt, type='wind', time_series_type='precision', pointing_directory=None, reference_pointing=False, seed=None)[source]¶ Create a pointing table with time series created from PSD.
- Parameters
pt – Pointing table to be filled
type – Type of pointing: ‘tracking’ or ‘wind’
pointing_directory – Name of the pointing directory
reference_pointing – Use reference pointing?
- Returns
-
insert_unittest_errors
(vt, seed=180555, calibration_context='TG', amp_errors=None, phase_errors=None)[source]¶ Simulate gain errors and apply
- Parameters
vt –
seed – Random number seed; set to a big integer to repeat values from run to run
phase_errors – e.g. {‘T’: 1.0, ‘G’: 0.1, ‘B’: 0.01}
amp_errors – e.g. {‘T’: 0.0, ‘G’: 0.01, ‘B’: 0.01}
- Returns
Workflows using arlexecute¶
These are workflows assembled from processing components functions, using the arlexecute framework. arlexecute wraps Dask in a convenient way so that the decision to use Dask can be made at run time.
Dask is a python-based flexible parallel computing library for analytic computing. arlexecute.execute (equivalent to dask.delayed) can be used to wrap functions for deferred execution thus allowing construction of graphs. For example, to build a graph for a major/minor cycle algorithm:
arlexecute.set_client(use_dask=True)
model_imagelist = arlexecute.execute(create_image_from_visibility)(vt, npixel=512, cellsize=0.001, npol=1)
solution_list = create_solve_image_list(vt, model_imagelist=model_imagelist, psf_list=psf_list,
context='timeslice', algorithm='hogbom',
niter=1000, fractional_threshold=0.1,
threshold=1.0, nmajor=3, gain=0.1)
solution_list.visualize()
arlexecute.compute(solution_list, sync=True)
If use_dask is set False all steps are executed immediately.
Construction of the components requires that the number of nodes (e.g. w slices or time-slices) be known at construction, rather than execution. To counteract this, at run time, a given node should be able to act as a no-op. We use None to denote a null node.
Calibration¶
-
calibrate_list_arlexecute_workflow
(vis_list, model_vislist, calibration_context='TG', global_solution=True, **kwargs)[source]¶ Create a set of components for (optionally global) calibration of a list of visibilities
If global solution is true then visibilities are gathered to a single visibility data set which is then self-calibrated. The resulting gaintable is then effectively scattered out for application to each visibility set. If global solution is false then the solutions are performed locally.
- Parameters
vis_list –
model_vislist –
calibration_context – String giving terms to be calibrated e.g. ‘TGB’
global_solution – Solve for global gains
kwargs – Parameters for functions in components
- Returns
Imaging¶
Workflows for imaging, including predict, invert, residual, restore, deconvolve, weight, taper, zero, subtract and sum results from invert
-
predict_list_arlexecute_workflow
(vis_list, model_imagelist, context, vis_slices=1, facets=1, gcfcf=None, **kwargs)[source]¶ Predict, iterating over both the scattered vis_list and image
The visibility and image are scattered, the visibility is predicted on each part, and then the parts are assembled.
- Parameters
vis_list –
model_imagelist – Model used to determine image parameters
vis_slices – Number of vis slices (w stack or timeslice)
facets – Number of facets (per axis)
context – Type of processing e.g. 2d, wstack, timeslice or facets
gcfcf – tuple containing grid correction and convolution function
kwargs – Parameters for functions in components
- Returns
List of vis_lists
-
invert_list_arlexecute_workflow
(vis_list, template_model_imagelist, context, dopsf=False, normalize=True, facets=1, vis_slices=1, gcfcf=None, **kwargs)[source]¶ Sum results from invert, iterating over the scattered image and vis_list
- Parameters
vis_list –
template_model_imagelist – Model used to determine image parameters
dopsf – Make the PSF instead of the dirty image
facets – Number of facets
normalize – Normalize by sumwt
vis_slices – Number of slices
context – Imaging context
gcfcf – tuple containing grid correction and convolution function
kwargs – Parameters for functions in components
- Returns
List of (image, sumwt) tuple
-
residual_list_arlexecute_workflow
(vis, model_imagelist, context='2d', gcfcf=None, **kwargs)[source]¶ Create a graph to calculate residual image
- Parameters
vis –
model_imagelist – Model used to determine image parameters
context –
gcfcf – tuple containing grid correction and convolution function
kwargs – Parameters for functions in components
- Returns
-
restore_list_arlexecute_workflow
(model_imagelist, psf_imagelist, residual_imagelist=None, restore_facets=1, restore_overlap=0, restore_taper='tukey', **kwargs)[source]¶ Create a graph to calculate the restored image
- Parameters
model_imagelist – Model list
psf_imagelist – PSF list
residual_imagelist – Residual list
kwargs – Parameters for functions in components
restore_facets – Number of facets used per axis (used to distribute)
restore_overlap – Overlap in pixels (0 is best)
restore_taper – Type of taper.
- Returns
-
deconvolve_list_arlexecute_workflow
(dirty_list, psf_list, model_imagelist, prefix='', mask=None, **kwargs)[source]¶ Create a graph for deconvolution, adding to the model
- Parameters
dirty_list –
psf_list –
model_imagelist –
prefix – Informative prefix to log messages
mask – Mask for deconvolution
kwargs – Parameters for functions in components
- Returns
graph for the deconvolution
-
deconvolve_list_channel_arlexecute_workflow
(dirty_list, psf_list, model_imagelist, subimages, **kwargs)[source]¶ Create a graph for deconvolution by channels, adding to the model
Does deconvolution channel by channel.
- Parameters
subimages –
dirty_list –
psf_list – Must be the size of a facet
model_imagelist – Current model
kwargs – Parameters for functions in components
- Returns
-
weight_list_arlexecute_workflow
(vis_list, model_imagelist, gcfcf=None, weighting='uniform', **kwargs)[source]¶ Weight the visibility data
This is done collectively so the weights are summed over all vis_lists and then corrected
- Parameters
vis_list –
model_imagelist – Model required to determine weighting parameters
weighting – Type of weighting
kwargs – Parameters for functions in graphs
- Returns
List of vis_graphs
-
taper_list_arlexecute_workflow
(vis_list, size_required)[source]¶ Taper to desired size
- Parameters
vis_list –
size_required –
- Returns
-
zero_list_arlexecute_workflow
(vis_list)[source]¶ Initialise vis to zero: creates new data holders
- Parameters
vis_list –
- Returns
List of vis_lists
Pipelines¶
Pipeline functions. SDP standard pipelines expressed as functions.
-
ical_list_arlexecute_workflow
(vis_list, model_imagelist, context, vis_slices=1, facets=1, gcfcf=None, calibration_context='TG', do_selfcal=True, **kwargs)[source]¶ Create graph for ICAL pipeline
- Parameters
vis_list –
model_imagelist –
context – imaging context e.g. ‘2d’
calibration_context – Sequence of calibration steps e.g. TGB
do_selfcal – Do the selfcalibration?
kwargs – Parameters for functions in components
- Returns
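A hedged sketch of building and running the ICAL graph, assuming vis_list and model_list are arlexecute-wrapped lists; the unpacking into (deconvolved, residual, restored) mirrors the spectral line pipeline below and is an assumption here:
arlexecute.set_client(use_dask=True)
ical_graph = ical_list_arlexecute_workflow(vis_list, model_imagelist=model_list,
                                           context='wstack', vis_slices=11,
                                           calibration_context='TG', do_selfcal=True,
                                           nmajor=5, niter=1000, threshold=0.01)
# Nothing is executed until compute is called on the graph.
deconvolved, residual, restored = arlexecute.compute(ical_graph, sync=True)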
-
continuum_imaging_list_arlexecute_workflow
(vis_list, model_imagelist, context, gcfcf=None, vis_slices=1, facets=1, **kwargs)[source]¶ Create graph for the continuum imaging pipeline.
Same as ICAL but with no selfcal.
- Parameters
vis_list –
model_imagelist –
context – Imaging context
kwargs – Parameters for functions in components
- Returns
-
spectral_line_imaging_list_arlexecute_workflow
(vis_list, model_imagelist, context, continuum_model_imagelist=None, vis_slices=1, facets=1, gcfcf=None, **kwargs)[source]¶ Create graph for spectral line imaging pipeline
Uses the continuum imaging arlexecute pipeline after subtraction of a continuum model
- Parameters
vis_list – List of visibility components
model_imagelist – Spectral line model graph
continuum_model_imagelist – Continuum model list
context – Imaging context
kwargs – Parameters for functions in components
- Returns
(deconvolved model, residual, restored)
Simulation¶
Pipelines expressed as dask components
-
simulate_list_arlexecute_workflow
(config='LOWBD2', phasecentre=<SkyCoord (ICRS): (ra, dec) in deg (15., -60.)>, frequency=None, channel_bandwidth=None, times=None, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, order='frequency', format='blockvis', rmax=1000.0, zerow=False)[source]¶ A component to simulate an observation
The simulation step can generate a single BlockVisibility or a list of BlockVisibility’s. The order keyword determines the way that the list is constructed. If order=’frequency’ then len(frequency) BlockVisibility’s with all times are created. If order=’time’ then len(times) BlockVisibility’s with all frequencies are created. If order = ‘both’ then len(times) * len(frequency) BlockVisibility’s are created, each with a single time and frequency. If order = None then all data are created in one BlockVisibility.
The output format can be either ‘blockvis’ (for calibration) or ‘vis’ (for imaging)
- Parameters
config – Name of configuration: def LOWBD2
phasecentre – Phase centre def: SkyCoord(ra=+15.0 * u.deg, dec=-60.0 * u.deg, frame=’icrs’, equinox=’J2000’)
frequency – def [1e8]
channel_bandwidth – def [1e6]
times – Observing times in radians: def [0.0]
polarisation_frame – def PolarisationFrame(“stokesI”)
order – ‘time’ or ‘frequency’ or ‘both’ or None: def ‘frequency’
format – ‘blockvis’ or ‘vis’: def ‘blockvis’
- Returns
vis_list with different frequencies in different elements
-
corrupt_list_arlexecute_workflow
(vis_list, gt_list=None, seed=None, **kwargs)[source]¶ Create a graph to apply gain errors to a vis_list
- Parameters
vis_list –
gt_list – Optional gain table graph
kwargs –
- Returns
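A brief sketch of corrupting the simulated visibilities, assuming the import path; phase_error is an example of an error parameter forwarded to the gain simulation, and with gt_list=None a gain table is created internally:

    # Sketch only: import path and the phase_error kwarg are assumptions.
    from workflows.arlexecute.simulation.simulation_arlexecute import \
        corrupt_list_arlexecute_workflow

    corrupted_vis_list = corrupt_list_arlexecute_workflow(vis_graph_list,
                                                          phase_error=1.0,
                                                          seed=180555)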
-
calculate_residual_from_gaintables_arlexecute_workflow
(sub_bvis_list, sub_components, sub_model_list, no_error_gt_list, error_gt_list)[source]¶ Calculate residual image corresponding to a set of gaintables
The visibility differences for a set of components are calculated for both the error and the no-error gaintables, and the corresponding residual images are constructed.
- Parameters
sub_bvis_list –
sub_components –
sub_model_list –
no_error_gt_list –
error_gt_list –
- Returns
Execution¶
Execution (optionally via Dask)¶
Execute wraps Dask so that, with the same code, Dask.delayed can be replaced by immediate calculation.
Dask init¶
Initialise dask
-
get_dask_Client
(timeout=30, n_workers=None, threads_per_worker=1, processes=True, create_cluster=True, memory_limit=None, local_dir='.', with_file=False, scheduler_file='./scheduler.json', dashboard_address=':8787')[source]¶ Get a Dask.distributed Client for an externally defined scheduler, otherwise create one
If the environment variable ARL_DASK_SCHEDULER is set, it is interpreted as pointing to the scheduler and a client using that scheduler is returned. Otherwise a new client is created.
- Parameters
timeout – Time out for creation
n_workers – Number of workers
threads_per_worker –
processes – Use processes instead of threads
create_cluster – Create a LocalCluster
memory_limit – Memory limit per worker (bytes e.g. 8e9)
- Returns
Dask client
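A sketch of the usual start-up sequence, assuming the import paths and the arlexecute.set_client/close calls from the execution support layer:

    # Sketch only: module paths are assumptions.
    from wrappers.arlexecute.execution_support.dask_init import get_dask_Client
    from wrappers.arlexecute.execution_support.arlexecute import arlexecute

    # If ARL_DASK_SCHEDULER is set this connects to that scheduler;
    # otherwise a LocalCluster with the requested resources is created.
    client = get_dask_Client(n_workers=4, threads_per_worker=1,
                             memory_limit=8e9)
    arlexecute.set_client(client)

    # ... build and compute graphs ...

    arlexecute.close()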
Serial workflows¶
Imaging functions¶
Manages the imaging context. This takes a string and returns a dictionary containing:
- Predict function
- Invert function
- image_iterator function
- vis_iterator function
-
predict_list_serial_workflow
(vis_list, model_imagelist, context, vis_slices=1, facets=1, gcfcf=None, **kwargs)[source]¶ Predict, iterating over both the scattered vis_list and image
The visibility and image are scattered, the visibility is predicted on each part, and then the parts are assembled.
- Parameters
vis_list –
model_imagelist – Model used to determine image parameters
vis_slices – Number of vis slices (w stack or timeslice)
facets – Number of facets (per axis)
context – Type of processing e.g. 2d, wstack, timeslice or facets
gcfcf – Tuple containing grid correction and convolution function
kwargs – Parameters for functions in components
- Returns
List of vis_lists
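A minimal sketch of a serial predict call, assuming the module path and that vis_list and model_list were created earlier; the context string selects the algorithm ('2d', 'wstack', 'timeslice' or 'facets'):

    # Sketch only: module path assumed; vis_list and model_list assumed to exist.
    from workflows.serial.imaging.imaging_serial import predict_list_serial_workflow

    predicted_vis_list = predict_list_serial_workflow(vis_list, model_list,
                                                      context='wstack',
                                                      vis_slices=31)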
-
invert_list_serial_workflow
(vis_list, template_model_imagelist, dopsf=False, normalize=True, facets=1, vis_slices=1, context='2d', gcfcf=None, **kwargs)[source]¶ Sum results from invert, iterating over the scattered image and vis_list
- Parameters
vis_list –
template_model_imagelist – Model used to determine image parameters
dopsf – Make the PSF instead of the dirty image
facets – Number of facets
normalize – Normalize by sumwt
vis_slices – Number of slices
context – Imaging context
gcfcf – Tuple containing grid correction and convolution function
kwargs – Parameters for functions in components
- Returns
List of (image, sumwt) tuple
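A matching sketch for the serial invert, under the same assumptions; each element of the result is an (image, sum-of-weights) pair, and dopsf=True makes the PSF instead of the dirty image:

    # Sketch only: module path assumed; vis_list and model_list assumed to exist.
    from workflows.serial.imaging.imaging_serial import invert_list_serial_workflow

    dirty_list = invert_list_serial_workflow(vis_list, model_list,
                                             context='wstack', vis_slices=31)
    psf_list = invert_list_serial_workflow(vis_list, model_list,
                                           context='wstack', vis_slices=31,
                                           dopsf=True)
    dirty_image, sumwt = dirty_list[0]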
-
residual_list_serial_workflow
(vis, model_imagelist, context='2d', gcfcf=None, **kwargs)[source]¶ Create a graph to calculate residual image
- Parameters
vis –
model_imagelist – Model used to determine image parameters
context –
gcfcf – Tuple containing grid correction and convolution function
kwargs – Parameters for functions in components
- Returns
-
restore_list_serial_workflow
(model_imagelist, psf_imagelist, residual_imagelist=None, **kwargs)[source]¶ Create a graph to calculate the restored image
- Parameters
model_imagelist – Model list
psf_imagelist – PSF list
residual_imagelist – Residual list
kwargs – Parameters for functions in components
- Returns
-
deconvolve_list_serial_workflow
(dirty_list, psf_list, model_imagelist, prefix='', mask=None, **kwargs)[source]¶ Create a graph for deconvolution, adding to the model
- Parameters
dirty_list –
psf_list –
model_imagelist –
prefix – Informative prefix to log messages
mask – Mask for deconvolution
kwargs – Parameters for functions in components
- Returns
(graph for the deconvolution, graph for the flat)
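A sketch of a serial deconvolve-and-restore step using the dirty and PSF lists from the invert sketch above; the module path is assumed, and niter, threshold and fractional_threshold are examples of kwargs forwarded to the deconvolution components:

    # Sketch only: module path and deconvolution kwargs are assumptions.
    from workflows.serial.imaging.imaging_serial import (
        deconvolve_list_serial_workflow, restore_list_serial_workflow)

    # Deconvolve, adding to the current model_list; per the docstring the
    # result pairs the deconvolution with the flat.
    deconvolve_result = deconvolve_list_serial_workflow(
        dirty_list, psf_list, model_imagelist=model_list,
        niter=1000, threshold=0.001, fractional_threshold=0.01)

    # Restore the model with the clean beam fitted from the PSF, adding the
    # residuals when supplied.
    restored_list = restore_list_serial_workflow(model_list, psf_list,
                                                 residual_imagelist=dirty_list)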
-
weight_list_serial_workflow
(vis_list, model_imagelist, gcfcf=None, weighting='uniform', **kwargs)[source]¶ Weight the visibility data
This is done collectively, so the weights are summed over all vis_lists and then corrected.
- Parameters
vis_list –
model_imagelist – Model required to determine weighting parameters
weighting – Type of weighting
kwargs – Parameters for functions in graphs
- Returns
List of vis_graphs
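A sketch of collective weighting followed by tapering, assuming the module path; because the weights are summed over every element of vis_list, all visibilities must be passed in a single call:

    # Sketch only: module path assumed; vis_list and model_list assumed to exist.
    from workflows.serial.imaging.imaging_serial import (
        weight_list_serial_workflow, taper_list_serial_workflow)

    weighted_vis_list = weight_list_serial_workflow(vis_list, model_list,
                                                    weighting='uniform')

    # Then taper to the desired size (units per the underlying taper function).
    tapered_vis_list = taper_list_serial_workflow(weighted_vis_list,
                                                  size_required=0.01)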
-
taper_list_serial_workflow
(vis_list, size_required)[source]¶ Taper to desired size
- Parameters
vis_list –
size_required –
- Returns
Pipeline functions¶
Pipeline functions. SDP standard pipelines expressed as functions. These serial versions are quite slow and are provided mainly for completeness; use the arlexecute versions in pipelines/components.py for speed.
-
ical_list_serial_workflow
(vis_list, model_imagelist, context, vis_slices=1, facets=1, gcfcf=None, calibration_context='TG', do_selfcal=True, **kwargs)[source]¶ Run ICAL pipeline
- Parameters
vis_list –
model_imagelist –
context – imaging context e.g. ‘2d’
calibration_context – Sequence of calibration steps e.g. TGB
do_selfcal – Do the selfcalibration?
kwargs – Parameters for functions in components
- Returns
-
continuum_imaging_list_serial_workflow
(vis_list, model_imagelist, context, gcfcf=None, vis_slices=1, facets=1, **kwargs)[source]¶ Create graph for the continuum imaging pipeline.
Same as ICAL but with no selfcal.
- Parameters
vis_list –
model_imagelist –
context – Imaging context
kwargs – Parameters for functions in components
- Returns
-
spectral_line_imaging_list_serial_workflow
(vis_list, model_imagelist, context, continuum_model_imagelist=None, vis_slices=1, facets=1, gcfcf=None, **kwargs)[source]¶ Create graph for spectral line imaging pipeline
Uses the continuum imaging serial pipeline after subtraction of a continuum model
- Parameters
vis_list – List of visibility components
model_imagelist – Spectral line model graph
continuum_model_imagelist – Continuum model list
context – Imaging context
kwargs – Parameters for functions in components
- Returns
(deconvolved model, residual, restored)