Processing Components

These Python components contain the core algorithms in ARL. They can be executed directly or via workflows.

Image

Operations

Image operations visible to the Execution Framework as Components

add_image(im1: data_models.memory_data_models.Image, im2: data_models.memory_data_models.Image, docheckwcs=False) → data_models.memory_data_models.Image[source]

Add two images

Parameters:
  • docheckwcs
  • im1
  • im2
Returns:

Image

calculate_image_frequency_moments(im: data_models.memory_data_models.Image, reference_frequency=None, nmoments=3) → data_models.memory_data_models.Image[source]

Calculate frequency weighted moments

Weights are ((freq-reference_frequency)/reference_frequency)**moment

Note that the spectral axis is replaced by a MOMENT axis.

For example, to find the moments and then reconstruct from just the moments:

moment_cube = calculate_image_frequency_moments(model_multichannel, nmoments=5)
reconstructed_cube = calculate_image_from_frequency_moments(model_multichannel, moment_cube)
Parameters:
  • im – Image cube
  • reference_frequency – Reference frequency (default None uses average)
  • nmoments – Number of moments to calculate
Returns:

Moments image

calculate_image_from_frequency_moments(im: data_models.memory_data_models.Image, moment_image: data_models.memory_data_models.Image, reference_frequency=None) → data_models.memory_data_models.Image[source]

Calculate image from frequency weighted moments

Weights are ((freq-reference_frequency)/reference_frequency)**moment

Note that a new image is created

For example, to find the moments and then reconstruct from just the moments:

moment_cube = calculate_image_frequency_moments(model_multichannel, nmoments=5)
reconstructed_cube = calculate_image_from_frequency_moments(model_multichannel, moment_cube)
Parameters:
  • im – Image cube to be reconstructed
  • moment_image – Moment cube (constructed using calculate_image_frequency_moments)
  • reference_frequency – Reference frequency (default None uses average)
Returns:

reconstructed image

convert_polimage_to_stokes(im: data_models.memory_data_models.Image)[source]

Convert a polarisation image to stokes (complex)

convert_stokes_to_polimage(im: data_models.memory_data_models.Image, polarisation_frame: data_models.polarisation.PolarisationFrame)[source]

Convert a stokes image to polarisation_frame
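
For example, a minimal round-trip sketch (stokes_image is assumed to be an existing Image in Stokes parameters, and PolarisationFrame comes from data_models.polarisation):

polimage = convert_stokes_to_polimage(stokes_image, PolarisationFrame('linear'))
stokes_again = convert_polimage_to_stokes(polimage)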

export_image_to_fits(im: data_models.memory_data_models.Image, fitsfile: str = 'imaging.fits')[source]

Write an image to fits

Parameters:
  • im – Image
  • fitsfile – Name of output fits file
import_image_from_fits(fitsfile: str) → data_models.memory_data_models.Image[source]

Read an Image from fits

Parameters:fitsfile
Returns:Image
qa_image(im, context='') → data_models.memory_data_models.QA[source]

Assess the quality of an image

Parameters:im
Returns:QA
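
For example, a small sketch that reads a FITS image, assesses it, and writes it back out (the file names are purely illustrative):

im = import_image_from_fits('dirty.fits')
print(qa_image(im, context='Dirty image'))
export_image_to_fits(im, fitsfile='dirty_copy.fits')
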
remove_continuum_image(im: data_models.memory_data_models.Image, degree=1, mask=None)[source]

Fit and remove the continuum from the image, in place

Fit a polynomial in frequency of the specified degree where mask is True

Parameters:
  • im
  • degree – 1 is a constant, 2 is a slope, etc.
  • mask
Returns:

reproject_image(im: data_models.memory_data_models.Image, newwcs: astropy.wcs.wcs.WCS, shape=None) -> (<class 'data_models.memory_data_models.Image'>, <class 'data_models.memory_data_models.Image'>)[source]

Re-project an image to a new coordinate system

Currently uses the reproject Python package. This seems to have some issues, so be careful using this method. For timeslice imaging I had to use griddata.

Parameters:
  • im – Image to be reprojected
  • newwcs – New WCS
  • shape
Returns:

Reprojected Image, Footprint Image
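
For example, a sketch of reprojecting an image onto a new coordinate system (newwcs is assumed to be an astropy.wcs.WCS built elsewhere; the shape is kept the same as the input):

reprojected, footprint = reproject_image(im, newwcs, shape=im.data.shape)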

show_image(im: data_models.memory_data_models.Image, fig=None, title: str = '', pol=0, chan=0, cm='Greys', components=None, vmin=None, vmax=None)[source]

Show an Image with coordinates using matplotlib, optionally with components

Parameters:
  • im
  • fig
  • title
  • pol – Polarisation
  • chan – Channel
  • components – Optional components
  • vmin – Clip to this minimum
  • vmax – Clip to this maximum
Returns:

smooth_image(model: data_models.memory_data_models.Image, width=1.0)[source]

Smooth an image with a kernel

Gather/Scatter

Functions that define and manipulate images. Images are just data and a World Coordinate System.

image_gather_channels(image_list: List[data_models.memory_data_models.Image], im: data_models.memory_data_models.Image = None, subimages=0) → data_models.memory_data_models.Image[source]

Gather a list of subimages back into an image using the channel_iterator

If the template image is not given then it will be formed assuming that the list has been generated by image_scatter_channels with subimages = number of channels.

Parameters:
  • image_list – List of subimages
  • im – Output image
  • subimages – Number of subimages (normally the number of channels)
Returns:

Image

image_gather_facets(image_list: List[data_models.memory_data_models.Image], im: data_models.memory_data_models.Image, facets=1, overlap=0, taper=None, return_flat=False)[source]

Gather a list of subimages back into an image using the image_raster_iterator

If the overlap is greater than zero, we choose to keep all images the same size, so the outer ring of facets is ignored. Hence if facets=4 and overlap > 0 then the gather expects (facets-2)**2 = 4 images.

To normalize the overlap we make a set of flats, gather them, and divide. The flat may optionally be returned instead of the result.

Parameters:
  • image_list – List of subimages
  • im – Output image
  • facets – Number of image partitions on each axis (2)
  • overlap – Overlap between neighbours in pixels
  • taper – Taper at edges None or ‘linear’ or ‘Tukey’
  • return_flat – Return the flat
Returns:

Gathered image (or the flat if return_flat=True)

image_scatter_channels(im: data_models.memory_data_models.Image, subimages=None) → List[data_models.memory_data_models.Image][source]

Scatter an image into a list of subimages using the channels

Parameters:
  • im – Image
  • subimages – Number of channels
Returns:

list of subimages
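
For example, a channel scatter/gather round-trip sketch (cube is assumed to be a multi-channel Image):

channel_images = image_scatter_channels(cube)
cube = image_gather_channels(channel_images, cube)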

image_scatter_facets(im: data_models.memory_data_models.Image, facets=1, overlap=0, taper=None) → List[data_models.memory_data_models.Image][source]

Scatter an image into a list of subimages using the image_raster_iterator

If the overlap is greater than zero, we choose to keep all images the same size, so the outer ring of facets is ignored. Hence if facets=4 and overlap > 0 then the scatter returns (facets-2)**2 = 4 images.

Parameters:
  • im – Image
  • facets – Number of image partitions on each axis (2)
  • overlap – Number of pixels overlap
  • taper – Taper at edges None or ‘linear’
Returns:

list of subimages
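
For example, a facet scatter/gather round-trip sketch (each facet could be processed independently before the gather):

facet_list = image_scatter_facets(model, facets=4)
model = image_gather_facets(facet_list, model, facets=4)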

Deconvolution

Image deconvolution functions

The standard deconvolution algorithms are provided:

hogbom: Hogbom CLEAN See: Hogbom CLEAN A&A Suppl, 15, 417, (1974)

msclean: MultiScale CLEAN See: Cornwell, T.J., Multiscale CLEAN (IEEE Journal of Selected Topics in Sig Proc, 2008 vol. 2 pp. 793-801)

mfsmsclean: MultiScale Multi-Frequency See: U. Rau and T. J. Cornwell, “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry,” A&A 532, A71 (2011).

For example, to make the dirty image and PSF, deconvolve, and then restore:

model = create_image_from_visibility(vt, cellsize=0.001, npixel=256)
dirty, sumwt = invert_2d(vt, model)
psf, sumwt = invert_2d(vt, model, dopsf=True)

comp, residual = deconvolve_cube(dirty, psf, niter=1000, threshold=0.001, fracthresh=0.01, window='quarter',
                             gain=0.7, algorithm='msclean', scales=[0, 3, 10, 30])

restored = restore_cube(comp, psf, residual)
deconvolve_cube(dirty: data_models.memory_data_models.Image, psf: data_models.memory_data_models.Image, prefix='', **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'data_models.memory_data_models.Image'>)[source]

Clean using a variety of algorithms

Functions that clean a dirty image using a point spread function. The algorithms available are:

hogbom: Hogbom CLEAN See: Hogbom CLEAN A&A Suppl, 15, 417, (1974)

msclean: MultiScale CLEAN See: Cornwell, T.J., Multiscale CLEAN (IEEE Journal of Selected Topics in Sig Proc, 2008 vol. 2 pp. 793-801)

mfsmsclean, msmfsclean, mmclean: MultiScale Multi-Frequency See: U. Rau and T. J. Cornwell, “A multi-scale multi-frequency deconvolution algorithm for synthesis imaging in radio interferometry,” A&A 532, A71 (2011).

For example:

comp, residual = deconvolve_cube(dirty, psf, niter=1000, gain=0.7, algorithm='msclean',
                                 scales=[0, 3, 10, 30], threshold=0.01)

For the MFS clean, the psf must have number of channels >= 2 * nmoments

Parameters:
  • dirty – Image dirty image
  • psf – Image Point Spread Function
  • window – Window image (Bool) - clean where True
  • algorithm – Cleaning algorithm: ‘msclean’|’hogbom’|’mfsmsclean’
  • gain – loop gain (float) 0.7
  • threshold – Clean threshold (0.0)
  • fractional_threshold – Fractional threshold (0.01)
  • scales – Scales (in pixels) for multiscale ([0, 3, 10, 30])
  • nmoments – Number of frequency moments (default 3)
  • findpeak – Method of finding peak in mfsclean: ‘Algorithm1’|’ASKAPSoft’|’CASA’|’ARL’, Default is ARL.
Returns:

componentimage, residual

restore_cube(model: data_models.memory_data_models.Image, psf: data_models.memory_data_models.Image, residual=None, **kwargs) → data_models.memory_data_models.Image[source]

Restore the model image by smoothing with a clean beam and adding the residuals

Parameters:
  • model – Model image
  • psf – Input PSF
  • residual – Residual image (optional)
Returns:restored image

Solvers

Solvers that combine the predict, invert, and deconvolution functions into a major cycle/minor cycle loop.

solve_image(vis: data_models.memory_data_models.Visibility, model: data_models.memory_data_models.Image, components=None, context='2d', **kwargs) -> (<class 'data_models.memory_data_models.Visibility'>, <class 'data_models.memory_data_models.Image'>, <class 'data_models.memory_data_models.Image'>)[source]

Solve for image using deconvolve_cube and specified predict, invert

This is the same as a major cycle/minor cycle algorithm. The components are removed prior to deconvolution.

See also the arguments for the predict, invert, and deconvolve_cube functions.

Parameters:
  • vis
  • model – Model image
  • predict – Predict function e.g. predict_2d, predict_wstack
  • invert – Invert function e.g. invert_2d, invert_wstack
Returns:

Residual visibility, model image, residual image
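
For example, a hedged sketch of a joint deconvolution solve (the deconvolution keywords are passed through to deconvolve_cube):

vis, model, residual = solve_image(vis, model, context='2d', niter=1000,
                                   threshold=0.001, algorithm='msclean',
                                   scales=[0, 3, 10, 30])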

Calibration

Calibration

Functions to solve for antenna/station gain

This uses an iterative substitution algorithm due to Larry D'Addario, circa 1980, used in the original VLA Dec-10 Antsol.

For example:

gtsol = solve_gaintable(vis, originalvis, phase_only=True, niter=niter, crosspol=False, tol=1e-6)
vis = apply_gaintable(vis, gtsol, inverse=True)
solve_gaintable(vis: data_models.memory_data_models.BlockVisibility, modelvis: data_models.memory_data_models.BlockVisibility = None, gt=None, phase_only=True, niter=30, tol=1e-08, crosspol=False, normalise_gains=True, **kwargs) → data_models.memory_data_models.GainTable[source]

Solve a gain table by fitting an observed visibility to a model visibility

If modelvis is None, a point source model is assumed.

Parameters:
  • vis – BlockVisibility containing the observed data_models
  • modelvis – BlockVisibility containing the visibility predicted by a model
  • gt – Existing gaintable
  • phase_only – Solve only for the phases (default=True)
  • niter – Number of iterations (default 30)
  • tol – Iteration stops when the fractional change in the gain solution is below this tolerance
  • crosspol – Do solutions including cross polarisations i.e. XY, YX or RL, LR
Returns:

GainTable containing solution

Operations

Functions for calibration, including creation of gaintables, application of gaintables, and merging gaintables.

append_gaintable(gt: data_models.memory_data_models.GainTable, othergt: data_models.memory_data_models.GainTable) → data_models.memory_data_models.GainTable[source]

Append othergt to gt

Parameters:
  • gt
  • othergt
Returns:

GainTable gt + othergt

apply_gaintable(vis: data_models.memory_data_models.BlockVisibility, gt: data_models.memory_data_models.GainTable, inverse=False, vis_slices=None, **kwargs) → data_models.memory_data_models.BlockVisibility[source]

Apply a gain table to a block visibility

The corrected visibility is:

\[V_{\mathrm{corrected}} = \{g_i g_j^*\}^{-1} V_{\mathrm{obs}}\]

If the visibility data are polarised e.g. polarisation_frame(“linear”) then the inverse operator represents an actual inverse of the gains.

Parameters:
  • vis – Visibility to have gains applied
  • gt – Gaintable to be applied
  • inverse – Apply the inverse (default=False)
Returns:

input vis with gains applied

copy_gaintable(gt: data_models.memory_data_models.GainTable, zero=False) → data_models.memory_data_models.GainTable[source]

Copy a GainTable

Performs a deepcopy of the data array

create_gaintable_from_blockvisibility(vis: data_models.memory_data_models.BlockVisibility, timeslice=None, frequencyslice: float = None, **kwargs) → data_models.memory_data_models.GainTable[source]

Create gain table from visibility.

This makes an empty gain table consistent with the BlockVisibility.

Parameters:
  • vis – BlockVisibility
  • timeslice – Time interval between solutions (s)
  • frequencyslice – Frequency solution width (Hz)
Returns:

GainTable
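
For example, a sketch of creating, solving, and applying a gain table (blockvis and modelvis are assumed to be existing BlockVisibilities):

gt = create_gaintable_from_blockvisibility(blockvis, timeslice=10.0)
gt = solve_gaintable(blockvis, modelvis, gt=gt, phase_only=True)
blockvis = apply_gaintable(blockvis, gt, inverse=True)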

create_gaintable_from_rows(gt: data_models.memory_data_models.GainTable, rows: numpy.ndarray, makecopy=True) → data_models.memory_data_models.GainTable[source]

Create a GainTable from selected rows

Parameters:
  • gt – GainTable
  • rows – Boolean array of row selection
  • makecopy – Make a deep copy (True)
Returns:

GainTable

gaintable_summary(gt: data_models.memory_data_models.GainTable)[source]

Return string summarizing the Gaintable

qa_gaintable(gt: data_models.memory_data_models.GainTable, context=None) → data_models.memory_data_models.QA[source]

Assess the quality of a gaintable

Parameters:gt
Returns:QA

SkyModel calibration

Radio interferometric calibration using an expectation maximisation algorithm

See the SDP document “Model Partition Calibration View Packet”

In this code:

  • A single model partition is taken to be a list composed of (skymodel, gaintable) tuples.
  • The E step for a specific model partition is the sum of the partition data model and the discrepancy between the
    observed data and the summed (over all partitions) data models.
  • The M step for a specific partition is the optimisation of the model partition given the data model from the E
    step. This involves fitting a skycomponent and fitting for the gain phases.
create_modelpartition(vis: data_models.memory_data_models.BlockVisibility, skymodels, **kwargs)[source]

Create a set of associations between skymodel and gaintable

Parameters:
  • vis – BlockVisibility to process
  • skymodels – List of skyModels
  • kwargs
Returns:

modelpartition_expectation_all(vis: data_models.memory_data_models.BlockVisibility, modelpartitions, **kwargs)[source]

Calculates E step in equation A12

This is the sum of the data models over all skymodels

Parameters:
  • vis – Visibility
  • csm – List of (skymodel, gaintable) tuples
  • kwargs
Returns:

Sum of data models (i.e. a visibility)

modelpartition_expectation_step(vis: data_models.memory_data_models.BlockVisibility, evis_all: data_models.memory_data_models.BlockVisibility, modelpartition, **kwargs)[source]

Calculates E step in equation A12

This is the data model for this window plus the difference between observed data and summed data models

Parameters:
  • evis_all – Sum data models
  • csm – csm element being fit
  • kwargs
Returns:

Data model (i.e. visibility) for this csm

modelpartition_fit_gaintable(evis, modelpartition, gain=0.1, niter=3, tol=0.001, **kwargs)[source]

Fit a gaintable to a visibility

This is the update to the gain part of the window

Parameters:
  • evis – Expected vis for this ssm
  • modelpartition – csm element being fit
  • gain – Gain in step
  • niter – Number of iterations
  • kwargs – Gaintable
modelpartition_fit_skymodel(vis, modelpartition, gain=0.1, **kwargs)[source]

Fit a single skymodel to a visibility

Parameters:
  • evis – Expected vis for this ssm
  • modelpartition – scm element being fit i.e. (skymodel, gaintable) tuple
  • gain – Gain in step
  • kwargs
Returns:

skymodel

modelpartition_maximisation_step(evis: data_models.memory_data_models.BlockVisibility, modelpartition, **kwargs)[source]

Calculates M step in equation A13

This maximises the likelihood of the ssm parameters given the existing data model. Note that the skymodel and gaintable are done separately rather than jointly.

Parameters:
  • ssm
  • kwargs
Returns:

modelpartition_solve(vis, skymodels, niter=10, tol=1e-08, gain=0.25, **kwargs)[source]

Solve for model partitions

Solve by iterating, performing E step and M step.

Parameters:
  • vis – Initial visibility
  • components – Initial components to be used
  • gaintables – Initial gain tables to be used
  • kwargs
Returns:

The individual data models and the residual visibility
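
For example, a hedged sketch of a model partition solve (vis is a BlockVisibility and skymodels is a prepared list of SkyModels, one per partition):

modelpartitions, residual_vis = modelpartition_solve(vis, skymodels, niter=10, gain=0.25)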

Imaging

Base

Functions that aid Fourier transform processing. These are built on top of the core functions in libs.fourier_transforms.

The measurement equation for a sufficiently narrow field of view interferometer is:

\[V(u,v,w) =\int I(l,m) e^{-2 \pi j (ul+vm)} dl dm\]

The measurement equation for a wide field of view interferometer is:

\[V(u,v,w) =\int \frac{I(l,m)}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm + w(\sqrt{1-l^2-m^2}-1))} dl dm\]

This and related modules contain various approaches for dealing with the wide-field problem, where the extra phase term in the Fourier transform cannot be ignored.

advise_wide_field(vis: data_models.memory_data_models.Visibility, delA=0.02, oversampling_synthesised_beam=3.0, guard_band_image=6.0, facets=1, wprojection_planes=1)[source]

Advise on parameters for wide field imaging.

Calculate sampling requirements on various parameters

For example:

advice = advise_wide_field(vis, delA)
wstep = get_parameter(kwargs, 'wstep', advice['w_sampling_primary_beam'])
Parameters:
  • vis
  • delA – Allowed coherence loss (def: 0.02)
  • oversampling_synthesised_beam – Oversampling of the synthesized beam (def: 3.0)
  • guard_band_image – Number of primary beam half-widths-to-half-maximum to image (def: 6)
  • facets – Number of facets on each axis
  • wprojection_planes – Number of planes in wprojection
Returns:

dict of advice

create_image_from_visibility(vis, **kwargs) → data_models.memory_data_models.Image[source]

Make an empty image from params and Visibility

This makes an empty, template image consistent with the visibility, allowing optional overriding of select parameters. This is a convenience function and does not transform the visibilities.

Parameters:
  • vis
  • phasecentre – Phasecentre (Skycoord)
  • channel_bandwidth – Channel width (Hz)
  • cellsize – Cellsize (radians)
  • npixel – Number of pixels on each axis (512)
  • frame – Coordinate frame for WCS (ICRS)
  • equinox – Equinox for WCS (2000.0)
  • nchan – Number of image channels (Default is 1 -> MFS)
Returns:

image

invert_2d(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, dopsf: bool = False, normalize: bool = True, **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'numpy.ndarray'>)[source]

Invert using 2D convolution function, including w projection optionally

Use the image im as a template. Do PSF in a separate call.

This is at the bottom of the layering i.e. all transforms are eventually expressed in terms of this function. Any shifting needed is performed here.

Parameters:
  • vis – Visibility to be inverted
  • im – image template (not changed)
  • dopsf – Make the psf instead of the dirty image
  • normalize – Normalize by the sum of weights (True)
Returns:

resulting image

normalize_sumwt(im: data_models.memory_data_models.Image, sumwt) → data_models.memory_data_models.Image[source]

Normalize out the sum of weights

Parameters:
  • im – Image, im.data has shape [nchan, npol, ny, nx]
  • sumwt – Sum of weights [nchan, npol]
predict_2d(vis: Union[data_models.memory_data_models.BlockVisibility, data_models.memory_data_models.Visibility], model: data_models.memory_data_models.Image, **kwargs) → Union[data_models.memory_data_models.BlockVisibility, data_models.memory_data_models.Visibility][source]

Predict using convolutional degridding.

This is at the bottom of the layering i.e. all transforms are eventually expressed in terms of this function. Any shifting needed is performed here.

Parameters:
  • vis – Visibility to be predicted
  • model – model image
Returns:

resulting visibility (in place works)

predict_skycomponent_visibility(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], sc: Union[data_models.memory_data_models.Skycomponent, typing.List[data_models.memory_data_models.Skycomponent]]) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]

Predict the visibility from a Skycomponent, add to existing visibility, for Visibility or BlockVisibility

Parameters:
  • vis – Visibility or BlockVisibility
  • sc – Skycomponent or list of SkyComponents
Returns:

Visibility or BlockVisibility

rad_and_deg(x)[source]

Stringify x in radian and degree forms

residual_image(vis: data_models.memory_data_models.Visibility, model: data_models.memory_data_models.Image, invert_residual=<function invert_2d>, predict_residual=<function predict_2d>, **kwargs) → Tuple[data_models.memory_data_models.Visibility, data_models.memory_data_models.Image, numpy.ndarray][source]

Calculate residual image and visibility

Parameters:
  • vis – Visibility to be inverted
  • im – image template (not changed)
  • invert – invert to be used (default invert_2d)
  • predict – predict to be used (default predict_2d)
Returns:

residual visibility, residual image, sum of weights

shift_vis_to_image(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, tangent: bool = True, inverse: bool = False) → data_models.memory_data_models.Visibility[source]

Shift visibility to the FFT phase centre of the image in place

Parameters:
  • vis – Visibility data
  • im – Image model used to determine phase centre
  • tangent – Is the shift purely on the tangent plane True|False
  • inverse – Do the inverse operation True|False
Returns:

visibility with phase shift applied and phasecentre updated

Imaging functions

Manages the imaging context. This takes a string and returns a dictionary containing:

  • Predict function
  • Invert function
  • image_iterator function
  • vis_iterator function

imaging_contexts()[source]

Contains all the context information for imaging

The fields are:
  • predict: Predict function to be used
  • invert: Invert function to be used
  • image_iterator: Iterator for traversing images
  • vis_iterator: Iterator for traversing visibilities
  • inner: The innermost axis
Returns:
invert_function(vis, im: data_models.memory_data_models.Image, dopsf=False, normalize=True, context='2d', inner=None, vis_slices=1, facets=1, overlap=0, taper=None, **kwargs)[source]

Invert using algorithm specified by context:

  • 2d: Two-dimensional transform
  • wstack: wstacking with either vis_slices or wstack (spacing between w planes) set
  • wprojection: w projection with wstep (spacing between w planes) set, also kernel=’wprojection’
  • timeslice: snapshot imaging with either vis_slices or timeslice set. timeslice=’auto’ uses each unique time as a slice
  • facets: Faceted imaging with facets facets on each axis
  • facets_wprojection: facets AND wprojection
  • facets_wstack: facets AND wstacking
  • wprojection_wstack: wprojection and wstacking
Parameters:
  • vis
  • im
  • dopsf – Make the psf instead of the dirty image (False)
  • normalize – Normalize by the sum of weights (True)
  • context – Imaging context e.g. ‘2d’, ‘timeslice’, etc.
  • inner – Inner loop ‘vis’|’image’
  • kwargs
Returns:

Image, sum of weights

predict_function(vis, model: data_models.memory_data_models.Image, context='2d', inner=None, vis_slices=1, facets=1, overlap=0, taper=None, **kwargs) → data_models.memory_data_models.Visibility[source]

Predict visibilities using algorithm specified by context

  • 2d: Two-dimensional transform
  • wstack: wstacking with either vis_slices or wstack (spacing between w planes) set
  • wprojection: w projection with wstep (spacing between w planes) set, also kernel=’wprojection’
  • timeslice: snapshot imaging with either vis_slices or timeslice set. timeslice=’auto’ uses each unique time as a slice
  • facets: Faceted imaging with facets facets on each axis
  • facets_wprojection: facets AND wprojection
  • facets_wstack: facets AND wstacking
  • wprojection_wstack: wprojection and wstacking
Parameters:
  • vis
  • model – Model image, used to determine image characteristics
  • context – Imaging context e.g. ‘2d’, ‘timeslice’, etc.
  • inner – Inner loop ‘vis’|’image’
  • kwargs
Returns:
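
For example, a hedged sketch of context-based imaging with w-stacking (the number of w slices is purely illustrative):

modelvis = predict_function(vis, model, context='wstack', vis_slices=31)
dirty, sumwt = invert_function(vis, model, context='wstack', vis_slices=31)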

Timeslice

The w-term can be viewed as a time-variable distortion. Approximating the array as instantaneously co-planar, we have that w can be expressed in terms of u,v:

\[w = a u + b v\]

Transforming to a new coordinate system:

\[l' = l + a (\sqrt{1-l^2-m^2}-1)\]
\[m' = m + b (\sqrt{1-l^2-m^2}-1)\]

Ignoring changes in the normalisation term, we have:

\[V(u,v,w) =\int \frac{I(l',m')}{\sqrt{1-l'^2-m'^2}} e^{-2 \pi j (ul'+vm')} dl' dm'\]
fit_uvwplane(vis: data_models.memory_data_models.Visibility, remove=False) -> (<class 'data_models.memory_data_models.Image'>, <class 'float'>, <class 'float'>)[source]

Fit and optionally remove the best fitting plane p u + q v = w

Parameters:
  • vis – visibility to be fitted
  • remove – Remove the fitted w permanently from vis?
Returns:

direction cosines defining plane

fit_uvwplane_only(vis: data_models.memory_data_models.Visibility) -> (<class 'float'>, <class 'float'>)[source]

Fit the best fitting plane p u + q v = w

Parameters:vis – visibility to be fitted
Returns:direction cosines defining plane
invert_timeslice_single(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, dopsf, normalize=True, facets=1, vis_slices=1, **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'numpy.ndarray'>)[source]

Process single time slice

Extracted for re-use in the parallel version.

Parameters:
  • vis – Visibility to be inverted
  • im – image template (not changed)
  • dopsf – Make the psf instead of the dirty image
  • normalize – Normalize by the sum of weights (True)

lm_distortion(im: data_models.memory_data_models.Image, a, b) -> (<class 'numpy.ndarray'>, <class 'numpy.ndarray'>, <class 'numpy.ndarray'>, <class 'numpy.ndarray'>)[source]

Calculate the nominal and distorted coordinates for w=au+bv

Parameters:
  • im – Image with the coordinate system
  • a, b – parameters in fit
Returns:

meshgrids for l,m nominal and distorted

predict_timeslice_single(vis: data_models.memory_data_models.Visibility, model: data_models.memory_data_models.Image, predict=<function predict_2d>, remove=True, facets=1, vis_slices=1, **kwargs) → data_models.memory_data_models.Visibility[source]

Predict using a single time slice.

This fits a single plane and corrects the image geometry.

Parameters:
  • vis – Visibility to be predicted
  • model – model image
  • predict
  • remove – Remove fitted w (so that wprojection will do the right thing)
Returns:

resulting visibility (in place works)

WStack

The w-stacking or w-slicing approach is to partition the visibility data by slices in w. The measurement equation is approximated as:

\[V(u,v,w) =\sum_i \int \frac{I(l,m) e^{-2 \pi j w_i(\sqrt{1-l^2-m^2}-1)}}{\sqrt{1-l^2-m^2}} e^{-2 \pi j (ul+vm)} dl dm\]

If images constructed from slices in w are added after applying a w-dependent image plane correction, the w term will be corrected.

invert_wstack_single(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, dopsf, normalize=True, remove=True, facets=1, vis_slices=1, **kwargs) -> (<class 'data_models.memory_data_models.Image'>, <class 'numpy.ndarray'>)[source]

Process single w slice

Parameters:
  • vis – Visibility to be inverted
  • im – image template (not changed)
  • dopsf – Make the psf instead of the dirty image
  • normalize – Normalize by the sum of weights (True)
predict_wstack_single(vis, model, remove=True, facets=1, vis_slices=1, **kwargs) → data_models.memory_data_models.Visibility[source]

Predict using a single w slice.

This processes a single w plane, rotating out the w beam for the average w

Parameters:
  • vis – Visibility to be predicted
  • model – model image
Returns:

resulting visibility (in place works)

Weighting

Functions that aid weighting the visibility data prior to imaging.

There are two classes of functions:
  • Changing the weight depending on the noise level or sample density or a combination
  • Tapering the weight spatially to avoid the effects of sharp edges or to emphasize a given scale size in the image
taper_visibility_gaussian(vis: data_models.memory_data_models.Visibility, beam=None) → data_models.memory_data_models.Visibility[source]

Taper the visibility weights

These are cumulative. You can reset the imaging_weights using libs.imaging.weighting.weight_visibility.

Parameters:
  • vis – Visibility with imaging_weight’s to be tapered
  • beam – desired resolution (Full width half maximum, radians)
Returns:

visibility with imaging_weight column modified

taper_visibility_tukey(vis: data_models.memory_data_models.Visibility, tukey=0.1) → data_models.memory_data_models.Visibility[source]

Taper the visibility weights

This algorithm is present in WSClean.

See https://sourceforge.net/p/wsclean/wiki/Tapering

  • tukey, a circular taper that smooths the outer edge set by -maxuv-l
  • inner-tukey, a circular taper that smooths the inner edge set by -minuv-l
  • edge-tukey, a square-shaped taper that smooths the edge set by the uv grid and -taper-edge

These are cumulative. You can reset the imaging_weights using libs.imaging.weighting.weight_visibility.

Parameters:vis – Visibility with imaging_weight’s to be tapered
Returns:visibility with imaging_weight column modified
weight_visibility(vis: data_models.memory_data_models.Visibility, im: data_models.memory_data_models.Image, **kwargs) → data_models.memory_data_models.Visibility[source]

Reweight the visibility data using a selected algorithm

Imaging uses the column “imaging_weight”. This function sets that column using a variety of algorithms.

Options are:
  • Natural: by visibility weight (optimum for noise in final image)
  • Uniform: weight of sample divided by sum of weights in cell (optimum for sidelobes)
  • Super-uniform: As uniform, but the sum of weights is over an extended box region
  • Briggs: Compromise between natural and uniform
  • Super-briggs: As Briggs, but the sum of weights is over an extended box region
Parameters:
  • vis
  • im
Returns:

visibility with imaging_weights column added and filled
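
For example, a hedged sketch of reweighting and tapering before imaging (the weighting keyword used to select the algorithm is an assumption):

vis = weight_visibility(vis, model, weighting='uniform')  # 'weighting' keyword assumed
vis = taper_visibility_gaussian(vis, beam=0.01)
dirty, sumwt = invert_2d(vis, model)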

Skycomponent

Operations

Function to manage sky components.

apply_beam_to_skycomponent(sc: Union[data_models.memory_data_models.Skycomponent, typing.List[data_models.memory_data_models.Skycomponent]], beam: data_models.memory_data_models.Image, flux_limit=0.0) → Union[data_models.memory_data_models.Skycomponent, typing.List[data_models.memory_data_models.Skycomponent]][source]

Apply a primary beam to a Skycomponent or list of Skycomponents

Parameters:
  • beam
  • sc – SkyComponent or list of SkyComponents
  • flux_limit – flux limit on input
Returns:

List of skycomponents

create_skycomponent(direction: astropy.coordinates.sky_coordinate.SkyCoord, flux: <built-in function array>, frequency: <built-in function array>, shape: str = 'Point', polarisation_frame=<data_models.polarisation.PolarisationFrame object>, params: dict = None, name: str = '') → data_models.memory_data_models.Skycomponent[source]

A single Skycomponent with direction, flux, shape, and params for the shape

Parameters:
  • params
  • direction
  • flux
  • frequency
  • shape – ‘Point’ or ‘Gaussian’
  • name
Returns:

Skycomponent

find_nearest_skycomponent(home: astropy.coordinates.sky_coordinate.SkyCoord, comps) -> (<class 'data_models.memory_data_models.Skycomponent'>, <class 'float'>)[source]

Find nearest component to a given direction

Parameters:
  • home – Home direction
  • comps – list of skycomponents
Returns:

Nearest component and its separation

find_nearest_skycomponent_index(home, comps) → int[source]

Find nearest component in a list to a given direction (home)

Parameters:
  • home – Home direction
  • comps – list of skycomponents
Returns:

index of best in comps

find_separation_skycomponents(comps_test, comps_ref=None)[source]

Find the matrix of separations for two lists of components

Parameters:
  • comps_test – List of components to be tested
  • comps_ref – If None then set to comps_test
Returns:

find_skycomponent_matches(comps_test, comps_ref, tol=1e-07)[source]

Match a list of candidates to a reference set of skycomponents

Many-to-one matching is allowed.

Parameters:
  • comps_test
  • comps_ref
  • tol – Tolerance in radians for a match
Returns:

find_skycomponent_matches_atomic(comps_test, comps_ref, tol=1e-07)[source]

Match a list of candidates to a reference set of skycomponents

find_skycomponent_matches is faster since it uses the astropy catalog matching

Many-to-one matching is allowed.

Parameters:
  • comps_test
  • comps_ref
Returns:

find_skycomponents(im: data_models.memory_data_models.Image, fwhm=1.0, threshold=10.0, npixels=5) → List[data_models.memory_data_models.Skycomponent][source]

Find Gaussian components in an Image above a certain threshold and return them as Skycomponents

Parameters:
  • fwhm – Full width half maximum of gaussian
  • threshold – Threshold for component detection. Default: 10 standard deviations over median.
  • im – Image to be searched
  • params
Returns:

list of sky components

insert_skycomponent(im: data_models.memory_data_models.Image, sc: Union[data_models.memory_data_models.Skycomponent, typing.List[data_models.memory_data_models.Skycomponent]], insert_method='Nearest', bandwidth=1.0, support=8) → data_models.memory_data_models.Image[source]

Insert a Skycomponent into an image

Parameters:
  • params
  • im
  • sc – SkyComponent or list of SkyComponents
  • insert_method – ‘Nearest’ | ‘Sinc’ | ‘Lanczos’
  • bandwidth – Fraction of the uv plane to optimise over (1.0)
  • support – Support of kernel (8)
Returns:

image
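
For example, a sketch of inserting components into a model image and then recovering them:

model = insert_skycomponent(model, comps, insert_method='Lanczos')
found = find_skycomponents(model, fwhm=1.0, threshold=10.0, npixels=5)
matches = find_skycomponent_matches(found, comps, tol=1e-5)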

select_components_by_flux(comps, fmax=inf, fmin=-inf) → [<class 'data_models.memory_data_models.Skycomponent'>][source]

Select components with a range in flux

Parameters:
  • comps – list of skycomponents
  • fmin – minimum range
  • fmax – maximum range
Returns:

selected components

select_components_by_separation(home, comps, max=6.283185307179586, min=0.0) → [<class 'data_models.memory_data_models.Skycomponent'>][source]

Select components with a range in separation

Parameters:
  • home – Home direction
  • comps – list of skycomponents
  • min – minimum range
  • max – maximum range
Returns:

selected components

SkyModel

Operations

Function to manage skymodels.

copy_skymodel(sm)[source]

Copy a sky model

predict_skymodel_visibility(vis, sm, **kwargs)[source]

Predict the visibility for a sky model.

The skymodel is a collection of skycomponents and images

Parameters:
  • vis
  • sm
  • kwargs
Returns:

solve_skymodel(vis, skymodel, gain=0.1, **kwargs)[source]

Fit a single skymodel to a visibility

Parameters:
  • vis – Visibility to be fitted
  • skymodel – Skymodel to be fitted
  • gain – Gain in step
  • method – ‘fit’ or ‘sum’
  • kwargs
Returns:

skycomponent

Visibility

Base

Base simple visibility operations, placed here to avoid circular dependencies

copy_visibility(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], zero=False) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]

Copy a visibility

Performs a deepcopy of the data array

create_blockvisibility(config: data_models.memory_data_models.Configuration, times: <built-in function array>, frequency: <built-in function array>, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord, weight: float = 1.0, polarisation_frame: data_models.polarisation.PolarisationFrame = None, integration_time=1.0, channel_bandwidth=1000000.0, zerow=False, **kwargs) → data_models.memory_data_models.BlockVisibility[source]

Create a BlockVisibility from Configuration, hour angles, and direction of source

Note that we keep track of the integration time for BDA purposes

Parameters:
  • config – Configuration of antennas
  • times – hour angles in radians
  • frequency – frequencies (Hz) [nchan]
  • weight – weight of a single sample
  • phasecentre – phasecentre of observation
  • channel_bandwidth – channel bandwidths (Hz) [nchan]
  • integration_time – Integration time (‘auto’ or value in s)
  • polarisation_frame
Returns:

BlockVisibility

create_blockvisibility_from_ms(msname, channum=None, ack=False)[source]

Minimal MS to BlockVisibility converter

The MS format is much more general than the ARL BlockVisibility, so we cut many corners. This requires casacore to be installed; if it is not, a ModuleNotFoundError exception is raised.

Creates a list of BlockVisibilities, split by field and spectral window

Parameters:
  • msname – File name of MS
  • channum – range of channels e.g. range(17,32), default is None meaning all
Returns:

create_visibility(config: data_models.memory_data_models.Configuration, times: <built-in function array>, frequency: <built-in function array>, channel_bandwidth, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord, weight: float, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, integration_time=1.0, zerow=False) → data_models.memory_data_models.Visibility[source]

Create a Visibility from Configuration, hour angles, and direction of source

Note that we keep track of the integration time for BDA purposes

Parameters:
  • config – Configuration of antennas
  • times – hour angles in radians
  • frequency – frequencies (Hz) [nchan]
  • weight – weight of a single sample
  • phasecentre – phasecentre of observation
  • channel_bandwidth – channel bandwidths (Hz) [nchan]
  • integration_time – Integration time (‘auto’ or value in s)
  • polarisation_frame – PolarisationFrame(‘stokesI’)
Returns:

Visibility

create_visibility_from_ms(msname, channum=None, ack=False)[source]

Minimal MS to BlockVisibility converter

The MS format is much more general than the ARL BlockVisibility, so we cut many corners. This requires casacore to be installed; if it is not, a ModuleNotFoundError exception is raised.

Creates a list of BlockVisibilities, split by field and spectral window

Parameters:
  • msname – File name of MS
  • channum – range of channels e.g. range(17,32), default is None meaning all
Returns:

create_visibility_from_rows(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], rows: numpy.ndarray, makecopy=True) → None[source]

Create a Visibility from selected rows

Parameters:
  • vis – Visibility
  • rows – Boolean array of row selection
  • makecopy – Make a deep copy (True)
Returns:

Visibility

phaserotate_visibility(vis: data_models.memory_data_models.Visibility, newphasecentre: astropy.coordinates.sky_coordinate.SkyCoord, tangent=True, inverse=False) → data_models.memory_data_models.Visibility[source]

Phase rotate from the current phase centre to a new phase centre

If tangent is False the uvw are recomputed and the visibility phasecentre is updated. Otherwise only the visibility phases are adjusted

Parameters:
  • vis – Visibility to be rotated
  • newphasecentre
  • tangent – Stay on the same tangent plane? (True)
  • inverse – Actually do the opposite
Returns:

Visibility
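
For example, a sketch of rotating to a new phase centre (SkyCoord and u are from astropy, as in the Testing Support example below):

newcentre = SkyCoord(ra=17.0 * u.deg, dec=-36.5 * u.deg, frame='icrs', equinox='J2000')
rotated_vis = phaserotate_visibility(vis, newcentre, tangent=False)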

vis_summary(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility])[source]

Return string summarizing the Visibility

Operations

Visibility operations

append_visibility(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], othervis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility]) → Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility][source]

Append othervis to vis

Parameters:
  • vis
  • othervis
Returns:

Visibility vis + othervis

concatenate_visibility(vis_list, sort=True)[source]

Concatenate a list of visibilities, with an optional sort back to index order

Parameters:vis_list
Returns:Visibility
divide_visibility(vis: data_models.memory_data_models.BlockVisibility, modelvis: data_models.memory_data_models.BlockVisibility)[source]

Divide visibility by model forming visibility for equivalent point source

This is a useful intermediate product for calibration. Variation of the visibility in time and frequency due to the model structure is removed and the data can be averaged to a limit determined by the instrumental stability. The weight is adjusted to compensate for the division.

Zero divisions are avoided and the corresponding weight set to zero.

Parameters:
  • vis
  • modelvis
Returns:
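
For example, a hedged sketch of forming the point-source-equivalent visibility and averaging it over channels:

pointvis = divide_visibility(blockvis, modelvis)
avgvis = integrate_visibility_by_channel(pointvis)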

integrate_visibility_by_channel(vis: data_models.memory_data_models.BlockVisibility) → data_models.memory_data_models.BlockVisibility[source]

Integrate visibility across channels, returning new visibility

Parameters:vis
Returns:BlockVisibility
qa_visibility(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], context=None) → data_models.memory_data_models.QA[source]

Assess the quality of Visibility

Parameters:
  • context
  • vis – Visibility to be assessed
Returns:

QA

remove_continuum_blockvisibility(vis: data_models.memory_data_models.BlockVisibility, degree=1, mask=None) → data_models.memory_data_models.BlockVisibility[source]

Fit and remove continuum visibility

Fit a polynomial in frequency of the specified degree where mask is True

Parameters:
  • vis
  • degree – Degree of polynomial
  • mask
Returns:

sort_visibility(vis, order=None)[source]

Sort a visibility on a given column

Parameters:
  • vis
  • order – Array of strings naming the columns to be used for sorting
Returns:

subtract_visibility(vis, model_vis, inplace=False)[source]

Subtract model_vis from vis, returning new visibility

Parameters:
  • vis
  • model_vis
Returns:

sum_visibility(vis: data_models.memory_data_models.Visibility, direction: astropy.coordinates.sky_coordinate.SkyCoord) → <built-in function array>[source]

Direct Fourier summation in a given direction

Parameters:
  • vis – Visibility to be summed
  • direction – Direction of summation
Returns:

flux[nch,npol], weight[nch,npol]

Coalesce

Functions for visibility coalescence and decoalescence.

The BlockVisibility format describes the visibility data_models as they would come from the correlator: [time, ant2, ant1, channel, pol]. This is well-suited to calibration and some visibility processing such as continuum removal. However, the BlockVisibility format is vastly oversampled on the short spacings, where the visibility (after calibration) varies slowly compared to the longest baselines. The coalescence operation resamples the visibility at a rate inversely proportional to baseline length. This cannot be held in the BlockVisibility format, so it is stored in the Visibility format. For SKA1-LOW, for example, coalescing typically reduces the number of visibilities by a factor between 10 and 30.

A typical use might be:

vt = predict_skycomponent_visibility(vt, comps)
cvt = coalesce_visibility(vt, time_coal=1.0, max_time_coal=100, frequency_coal=0.0,
    max_frequency_coal=1)
dirtyimage, sumwt = invert_2d(cvt, model)
coalesce_visibility(vis: data_models.memory_data_models.BlockVisibility, **kwargs) → data_models.memory_data_models.Visibility[source]

Coalesce the BlockVisibility data_models. The output format is a Visibility, as needed for imaging

Coalesce by baseline-dependent averaging (optional). The number of integrations averaged goes as the ratio of the maximum possible baseline length to that for this baseline. This number can be scaled by coalescence_factor and limited by max_coalescence.

When faceting, the coalescence factors should be roughly the same as the number of facets on one axis.

If coalescence_factor=0.0 then just a format conversion is done

Parameters:vis – BlockVisibility to be coalesced
Returns:Coalesced visibility with cindex and blockvis filled in
convert_blockvisibility_to_visibility(vis: data_models.memory_data_models.BlockVisibility) → data_models.memory_data_models.Visibility[source]

Convert the BlockVisibility data with no coalescence

Parameters:vis – BlockVisibility to be converted
Returns:Visibility with cindex and blockvis filled in
convert_visibility_to_blockvisibility(vis: data_models.memory_data_models.Visibility) → data_models.memory_data_models.BlockVisibility[source]

Convert a Visibility to equivalent BlockVisibility format

Parameters:vis – Coalesced visibility
Returns:BlockVisibility
decoalesce_vis(vshape, cvis, cindex)[source]

Decoalesce data using Time-Baseline

We use the index into the coalesced data_models. For every output row, this gives the corresponding row in the coalesced data_models.

Parameters:
  • vshape – Shape of template visibility data_models
  • cvis – Coalesced visibility values
  • cindex – Index array from coalescence
Returns:

uncoalesced vis

decoalesce_visibility(vis: data_models.memory_data_models.Visibility, overwrite=False, **kwargs) → data_models.memory_data_models.BlockVisibility[source]

Decoalesce the visibilities to the original values (opposite of coalesce_visibility)

This relies upon the block vis and the index being part of the vis. Needs the index generated by coalesce_visibility

Parameters:vis – (Coalesced visibility)
Returns:BlockVisibility with vis and weight columns overwritten
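
For example, a coalesce/decoalesce round-trip sketch (the coalescence parameters are illustrative):

cvis = coalesce_visibility(blockvis, time_coal=1.0, frequency_coal=0.0)
blockvis_restored = decoalesce_visibility(cvis)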

Gather/Scatter

Visibility iterators for iterating through a BlockVisibility or Visibility.

A typical use would be to make a sequence of snapshot visibilities:

for rows in vis_timeslice_iter(vt, vis_slices=vis_slices):
    visslice = create_visibility_from_rows(vt, rows)
    dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1)
    dirtySnapshot, sumwt = invert_2d(visslice, dirtySnapshot)
visibility_gather(visibility_list: List[data_models.memory_data_models.Visibility], vis: data_models.memory_data_models.Visibility, vis_iter, vis_slices=None) → data_models.memory_data_models.Visibility[source]

Gather a list of subvisibilities back into a visibility

The iterator setup must be the same as used in the scatter.

Parameters:
  • visibility_list – List of subvisibilities
  • vis – Output visibility
  • vis_iter – visibility iterator
  • vis_slices – Number of slices to be gathered (optional)
Returns:

vis

visibility_gather_channel(vis_list: List[data_models.memory_data_models.Visibility], vis: data_models.memory_data_models.Visibility = None)[source]

Gather a visibility by channel

Parameters:
  • vis_list
  • vis
Returns:

visibility_scatter(vis: data_models.memory_data_models.Visibility, vis_iter, vis_slices=1) → List[data_models.memory_data_models.Visibility][source]

Scatter a visibility into a list of subvisibilities

If vis_iter is over time then the type of the output visibilities will be the same as the input. If vis_iter is over w then the type of the output visibilities will always be Visibility.

Parameters:
  • vis – Visibility
  • vis_iter – visibility iterator
  • vis_slices – Number of slices to be made
Returns:

list of subvisibilities
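
For example, a sketch of scattering by time slice, operating on each slice, and gathering the result back (vis_timeslice_iter is described under Iterators below):

vis_list = visibility_scatter(vis, vis_timeslice_iter, vis_slices=16)
# ... process each sub-visibility in vis_list ...
vis = visibility_gather(vis_list, vis, vis_timeslice_iter, vis_slices=16)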

visibility_scatter_channel(vis: data_models.memory_data_models.BlockVisibility) → List[data_models.memory_data_models.Visibility][source]

Scatter channels to separate visibilities

Parameters:vis
Returns:

Iterators

Visibility iterators for iterating through a BlockVisibility or Visibility.

A typical use would be to make a sequence of snapshot images:

for rows in vis_timeslice_iter(vt):
    visslice = create_visibility_from_rows(vt, rows)
    dirtySnapshot = create_image_from_visibility(visslice, npixel=512, cellsize=0.001, npol=1)
    dirtySnapshot, sumwt = invert_2d(visslice, dirtySnapshot)
vis_null_iter(vis: Union[data_models.memory_data_models.Visibility, data_models.memory_data_models.BlockVisibility], vis_slices=1) → numpy.ndarray[source]

One-shot iterator returning True for all rows

Parameters:
  • vis
  • vis_slices
Returns:

vis_timeslice_iter(vis: data_models.memory_data_models.Visibility, vis_slices=None) → numpy.ndarray[source]

Time slice iterator

Parameters:
  • vis
  • vis_slices – Number of time slices
Returns:

Boolean array with selected rows=True

vis_timeslices(vis: data_models.memory_data_models.Visibility, timeslice='auto') → int[source]

Calculate number of time slices in a visibility

Parameters:
  • vis – Visibility
  • timeslice – ‘auto’ or float (seconds)
Returns:

Number of slices

vis_wslice_iter(vis: data_models.memory_data_models.Visibility, vis_slices=1) → numpy.ndarray[source]

W slice iterator

Parameters:
  • vis
  • vis_slices – Number of slices
Returns:

Boolean array with selected rows=True

vis_wslices(vis: data_models.memory_data_models.Visibility, wslice=10.0) → int[source]

Calculate number of w slices (or stack) in a visibility

Parameters:
  • vis – Visibility
  • wslice – width of w slice (in lambda)
Returns:

Number of slices

Util

Testing Support

Functions that aid testing in various ways. A typical use would be:

lowcore = create_named_configuration('LOWBD2-CORE')
times = numpy.linspace(-3, +3, 13) * (numpy.pi / 12.0)

frequency = numpy.array([1e8])
channel_bandwidth = numpy.array([1e7])

# Define the component and give it some polarisation and spectral behaviour
f = numpy.array([100.0])
flux = numpy.array([f])

phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-35.0 * u.deg, frame='icrs', equinox='J2000')
compabsdirection = SkyCoord(ra=17.0 * u.deg, dec=-36.5 * u.deg, frame='icrs', equinox='J2000')

comp = create_skycomponent(flux=flux, frequency=frequency, direction=compabsdirection,
                                polarisation_frame=PolarisationFrame('stokesI'))
image = create_test_image(frequency=frequency, phasecentre=phasecentre,
                                              cellsize=0.001,
                                              polarisation_frame=PolarisationFrame('stokesI'))

vis = create_visibility(lowcore, times=times, frequency=frequency,
                             channel_bandwidth=channel_bandwidth,
                             phasecentre=phasecentre, weight=1,
                             polarisation_frame=PolarisationFrame('stokesI'),
                             integration_time=1.0)
create_LOFAR_configuration(antfile: str) → data_models.memory_data_models.Configuration[source]

Define from the LOFAR configuration file

Parameters:antfile
Returns:Configuration
create_blockvisibility_iterator(config: data_models.memory_data_models.Configuration, times: <built-in function array>, frequency: <built-in function array>, channel_bandwidth, phasecentre: astropy.coordinates.sky_coordinate.SkyCoord, weight: float = 1, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, integration_time=1.0, number_integrations=1, predict=<function predict_2d>, model=None, components=None, phase_error=0.0, amplitude_error=0.0, sleep=0.0, **kwargs)[source]

Create a sequence of Visibilities, optionally predicting and coalescing

This is useful mainly for performing large simulations. Do something like:

vis_iter = create_blockvisibility_iterator(config, times, frequency, channel_bandwidth, phasecentre=phasecentre,
                                      weight=1.0, integration_time=30.0, number_integrations=3)

for i, vis in enumerate(vis_iter):
    if i == 0:
        fullvis = vis
    else:
        fullvis = append_visibility(fullvis, vis)
Parameters:
  • config – Configuration of antennas
  • times – hour angles in radians
  • frequency – frequencies (Hz) Shape [nchan]
  • weight – weight of a single sample
  • phasecentre – phasecentre of observation
  • npol – Number of polarizations
  • integration_time – Integration time (‘auto’ or value in s)
  • number_integrations – Number of integrations to be created at each time.
  • model – Model image to be inserted
  • components – Components to be inserted
  • sleep_time – Time to sleep between yields
Returns:

Visibility

create_configuration_from_file(antfile: str, location: astropy.coordinates.earth.EarthLocation = None, mount: str = 'altaz', names: str = '%d', frame: str = 'local', diameter=35.0, rmax=None, name='') → data_models.memory_data_models.Configuration[source]

Define from a file

Parameters:
  • names – Antenna names
  • antfile – Antenna file name
  • location
  • mount – mount type: ‘altaz’, ‘xy’
  • frame – ‘local’ | ‘global’
  • diameter – Effective diameter of station or antenna
Returns:

Configuration

create_low_test_beam(model: data_models.memory_data_models.Image) → data_models.memory_data_models.Image[source]

Create a test power beam for LOW using an image from OSKAR

Parameters:model – Template image
Returns:Image
create_low_test_image_from_gleam(npixel=512, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, cellsize=1.5e-05, frequency=array([1.e+08]), channel_bandwidth=array([1000000.]), phasecentre=None, kind='cubic', applybeam=False, flux_limit=0.1, radius=None, insert_method='Nearest') → data_models.memory_data_models.Image[source]

Create LOW test image from the GLEAM survey

Stokes I is estimated from a cubic spline fit to the measured fluxes. The polarised flux is always zero.

See http://www.mwatelescope.org/science/gleam-survey The catalog is available from Vizier.

VIII/100 GaLactic and Extragalactic All-sky MWA survey (Hurley-Walker+, 2016)

GaLactic and Extragalactic All-sky Murchison Wide Field Array (GLEAM) survey. I: A low-frequency extragalactic catalogue. Hurley-Walker N., et al., Mon. Not. R. Astron. Soc., 464, 1146-1167 (2017), 2017MNRAS.464.1146H

Parameters:
  • npixel – Number of pixels
  • polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
  • cellsize – cellsize in radians
  • frequency
  • channel_bandwidth – Channel width (Hz)
  • phasecentre – phasecentre (SkyCoord)
  • kind – Kind of interpolation (see scipy.interpolate.interp1d) Default: cubic
Returns:

Image
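
For example, a sketch of making a GLEAM-based model image (phasecentre as defined in the example at the start of this section; numpy is assumed to be imported):

gleam_model = create_low_test_image_from_gleam(npixel=512, cellsize=0.001,
                                               frequency=numpy.array([1e8]),
                                               phasecentre=phasecentre,
                                               flux_limit=0.3, applybeam=True)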

create_low_test_skycomponents_from_gleam(flux_limit=0.1, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, frequency=array([1.e+08]), kind='cubic', phasecentre=None, radius=1.0) → List[data_models.memory_data_models.Skycomponent][source]

Create sky components from the GLEAM survey

Stokes I is estimated from a cubic spline fit to the measured fluxes. The polarised flux is always zero.

See http://www.mwatelescope.org/science/gleam-survey The catalog is available from Vizier.

VIII/100 GaLactic and Extragalactic All-sky MWA survey (Hurley-Walker+, 2016)

GaLactic and Extragalactic All-sky Murchison Wide Field Array (GLEAM) survey. I: A low-frequency extragalactic catalogue. Hurley-Walker N., et al., Mon. Not. R. Astron. Soc., 464, 1146-1167 (2017), 2017MNRAS.464.1146H

Return type:

Union[None, List[libs.data_models.data_models.Skycomponent], List]

Parameters:
  • flux_limit – Only write components brighter than this (Jy)
  • polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
  • frequency – Frequencies at which the flux will be estimated
  • kind – Kind of interpolation (see scipy.interpolate.interp1d) Default: cubic
  • phasecentre – Desired phase centre (SkyCoord) default None implies all sources
  • radius – Radius of sources selected around phasecentre (default 1.0 rad)
Returns:

List of Skycomponents

create_named_configuration(name: str = 'LOWBD2', **kwargs) → data_models.memory_data_models.Configuration[source]

Standard configurations e.g. LOWBD2, MIDBD2

Parameters:
  • name – name of Configuration LOWBD2, LOWBD1, LOFAR, VLAA, ASKAP
  • rmax – Maximum distance of station from the average (m)
Returns:

For LOWBD2, setting rmax gives the following number of stations:

  rmax (m)   Stations
  100.0      13
  300.0      94
  1000.0     251
  3000.0     314
  10000.0    398
  30000.0    476
  100000.0   512

create_test_image(canonical=True, cellsize=None, frequency=None, channel_bandwidth=None, phasecentre=None, polarisation_frame=<data_models.polarisation.PolarisationFrame object>) → data_models.memory_data_models.Image[source]

Create a useful test image

This is the test image of M31 widely used in ALMA and other simulations. It is actually part of an H-alpha region in M31.

Parameters:
  • canonical – Make the image into a 4 dimensional image
  • cellsize
  • frequency – Frequency (array) in Hz
  • channel_bandwidth – Channel bandwidth (array) in Hz
  • phasecentre – Phase centre of image (SkyCoord)
  • polarisation_frame – Polarisation frame
Returns:

Image

create_test_image_from_s3(npixel=16384, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, cellsize=1.5e-05, frequency=array([1.e+08]), channel_bandwidth=array([1000000.]), phasecentre=None, fov=20, flux_limit=0.001) → data_models.memory_data_models.Image[source]

Create LOW test image from S3

The input catalog was generated at http://s-cubed.physics.ox.ac.uk/s3_sex using the following query:

Database: s3_sex
SQL: select * from Galaxies where (pow(10,itot_151)*1000 > 1.0) and (right_ascension between -5 and 5) and (declination between -5 and 5);

Number of rows returned: 29966

For frequencies < 610MHz, there are three tables to use:

data/models/S3_151MHz_10deg.csv, use fov=10
data/models/S3_151MHz_20deg.csv, use fov=20
data/models/S3_151MHz_40deg.csv, use fov=40

For frequencies > 610MHz, there are four tables:

data/models/S3_1400MHz_1mJy_10deg.csv, use flux_limit >= 1e-3
data/models/S3_1400MHz_100uJy_10deg.csv, use flux_limit < 1e-3
data/models/S3_1400MHz_1mJy_18deg.csv, use flux_limit >= 1e-3
data/models/S3_1400MHz_100uJy_18deg.csv, use flux_limit < 1e-3

The component spectral index is calculated from the 610MHz and 151MHz fluxes, or the 1400MHz and 610MHz fluxes, and the flux is then calculated for the specified frequencies.

If polarisation_frame is not stokesI then the image will have a polarisation axis but the values will be zero.

Parameters:
  • npixel – Number of pixels
  • polarisation_frame – Polarisation frame (default PolarisationFrame(“stokesI”))
  • cellsize – cellsize in radians
  • frequency
  • channel_bandwidth – Channel width (Hz)
  • phasecentre – phasecentre (SkyCoord)
  • fov – fov 10 | 20 | 40
  • flux_limit – Minimum flux (Jy)
Returns:

Image

insert_unittest_errors(vt, seed=180555, amp_errors=None, phase_errors=None)[source]

Simulate gain errors and apply

Parameters:
  • vt
  • seed – Random number seed; set to a big integer to repeat values from run to run
  • phase_errors – e.g. {‘T’: 1.0, ‘G’: 0.1, ‘B’: 0.01}
  • amp_errors – e.g. {‘T’: 0.0, ‘G’: 0.01, ‘B’: 0.01}
Returns:

replicate_image(im: data_models.memory_data_models.Image, polarisation_frame=<data_models.polarisation.PolarisationFrame object>, frequency=array([1.e+08])) → data_models.memory_data_models.Image[source]

Make a new canonical shape Image, extended along third and fourth axes by replication.

The order of the data is [chan, pol, dec, ra]

Parameters:
  • frequency
  • im
  • polarisation_frame – Polarisation_frame
Returns:

Image

simulate_gaintable(gt: data_models.memory_data_models.GainTable, phase_error=0.1, amplitude_error=0.0, smooth_channels=1, leakage=0.0, seed=None, **kwargs) → data_models.memory_data_models.GainTable[source]

Simulate a gain table

Parameters:
  • phase_error – std of normal distribution, zero mean
  • amplitude_error – std of log normal distribution
  • leakage – std of cross hand leakage
  • seed – Seed for random numbers def: 180555
  • smooth_channels – Use bspline over smooth_channels
  • kwargs
Returns:

Gaintable
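
For example, a sketch of simulating gain errors and applying them to a BlockVisibility:

gt = create_gaintable_from_blockvisibility(blockvis)
gt = simulate_gaintable(gt, phase_error=0.1, amplitude_error=0.01, seed=180555)
blockvis = apply_gaintable(blockvis, gt)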