refnx.dataset

class refnx.dataset.Data1D(data=None, mask=None, **kwds)[source]

Bases: object

A basic representation of a 1D dataset.

Parameters
  • data ({str, file-like, Path, tuple of np.ndarray}, optional) –

    data can be a string, file-like, or Path object referring to a file to load the dataset from. The file should be plain text and have 2 to 4 columns separated by space, comma, or tab. The columns represent x, y [, y_err [, x_err]].

    Alternatively it is a tuple containing the data from which the dataset will be constructed. The tuple should have between 2 and 4 members.

    • data[0] - x

    • data[1] - y

    • data[2] - uncertainties on y, y_err

    • data[3] - uncertainties on x, x_err

    The tuple must contain at least two members, x and y. If it contains at least 3 members the third is y_err; if it contains 4 members the fourth is x_err. All arrays must have the same shape (see the construction sketch after the parameter list).

  • mask (array-like) – Specifies which data points are (un)masked. Must be broadcastable to the y-data. Data1D.mask = None clears the mask. If a mask value equates to True the point is included; if it equates to False the point is excluded.
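
A minimal construction sketch, using illustrative arrays rather than a real measurement:

>>> import numpy as np
>>> from refnx.dataset import Data1D
>>> x = np.linspace(0.01, 0.3, 50)
>>> y = np.exp(-20.0 * x)
>>> y_err = 0.05 * y
>>> data = Data1D(data=(x, y, y_err))
>>> # mask the last five points (True keeps a point, False excludes it)
>>> mask = np.ones_like(y, dtype=bool)
>>> mask[-5:] = False
>>> data.mask = mask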

filename

The file the data was read from

Type

{str, Path, None}

weighted

Whether the y data has uncertainties

Type

bool

metadata

Information that should be retained with the dataset.

Type

dict

add_data(data_tuple, requires_splice=False, trim_trailing=True)[source]

Adds more data to the dataset.

Parameters
  • data_tuple (tuple) – 2 to 4 member tuple containing the (x, y, y_err, x_err) data to add to the dataset. y_err and x_err are optional.

  • requires_splice (bool, optional) – If True, the new data is scaled vertically when it is added so that it overlaps with the existing data. y and y_err in data_tuple are both multiplied by the scaling factor.

  • trim_trailing (bool, optional) – If True, points from the existing data that lie in the overlap region are removed when the new data is concatenated. This might be done because the datapoints in the data_tuple you are adding have lower y_err than the preceding data.

Notes

Raises ValueError if there are no points in the overlap region and requires_splice was True. The added data is not masked.
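
A short splicing sketch; data is the Data1D constructed above, and x2, y2, y_err2 are assumed to be arrays that overlap the tail of the existing x range:

>>> data.add_data((x2, y2, y_err2), requires_splice=True, trim_trailing=True)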

property data

4-tuple containing the (x, y, y_err, x_err) data

property finite_data

4-tuple containing the (x, y, y_err, x_err) datapoints that are finite.

load(f)[source]

Loads a dataset from file, overwriting existing data. The file must be 2 to 4 column ASCII.

Parameters

f ({file-like, string, Path}) – File to load the dataset from.

property mask

Specifies which data points are (un)masked (True includes a point, False excludes it).

plot(fig=None)[source]

Plot the dataset.

Requires matplotlib be installed.

Parameters

fig (Figure instance, optional) – If fig is not supplied then a new figure is created. Otherwise the graph is created on the current axes on the supplied figure.

Returns

fig, ax – matplotlib Figure and Axes objects.

Return type

matplotlib.figure.Figure, matplotlib.Axes
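
A brief usage sketch, assuming matplotlib is installed (the output filename is illustrative):

>>> fig, ax = data.plot()
>>> ax.set_yscale("log")
>>> fig.savefig("dataset.png")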

refresh()[source]

Refreshes a previously loaded dataset.

save(f, header=None)[source]

Saves the data to file as 4-column ASCII.

Parameters

f ({file-like, str, Path}) – File to save the dataset to.
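
A round-trip sketch with an illustrative filename:

>>> data.save("dataset.dat")
>>> data.load("dataset.dat")  # overwrites the in-memory data with the file contents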

scale(scalefactor=1.0)[source]

Scales the y and y_err data by dividing by scalefactor.

Parameters

scalefactor (float) – The scalefactor to divide by.
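
For example, to halve the y values and their uncertainties:

>>> data.scale(2.0)  # y and y_err are divided by 2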

sort()[source]

Sorts the data in ascending order

synthesise(random_state=None)[source]

Synthesise a new dataset by adding Gaussian noise onto each of the datapoints of the existing data.

Parameters

random_state ({int, numpy.random.RandomState, numpy.random.Generator}, optional) – If random_state is not specified the numpy.random.RandomState singleton is used. If random_state is an int, a new RandomState instance is used, seeded with random_state. If random_state is already a RandomState or a Generator instance, then that object is used. Specify random_state for repeatable synthesising.

Returns

dataset – A new synthesised dataset

Return type

Data1D
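
A sketch producing a repeatable synthetic copy of the dataset constructed earlier:

>>> rng = np.random.default_rng(1)
>>> new_dataset = data.synthesise(random_state=rng)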

property unmasked_data

4-tuple containing unmasked (x, y, y_err, x_err) data

property x

x data (possibly masked)

Type

np.ndarray

property x_err

x uncertainty (possibly masked)

Type

np.ndarray

property y

y data (possibly masked)

Type

np.ndarray

property y_err

uncertainties on the y data (possibly masked)

Type

np.ndarray

class refnx.dataset.OrsoDataset(data, **kwds)[source]

Bases: Data1D

A thinly wrapped version of an ORSODataset

Parameters

data ({str, file-like, Path}) – The file to load the ORSO dataset from.

Notes

Multiplies the resolution information contained in the fourth column of the ORSO dataset by 2√(2 ln 2) (≈ 2.355) to convert from standard deviation to FWHM.

load(f)[source]

Loads an ORSO dataset from file, and overwrites existing data.

Parameters

f ({str, file-like, Path}) – The file to load the spectrum from, or a str/Path that specifies the file name
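
A minimal sketch, assuming orsopy is installed and an ORSO file named measurement.ort exists (the filename is illustrative):

>>> from refnx.dataset import OrsoDataset
>>> dataset = OrsoDataset("measurement.ort")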

class refnx.dataset.ReflectDataset(data=None, **kwds)[source]

Bases: Data1D

A 1D Reflectivity dataset.

load(f)[source]

Load a dataset from file. The file can either be 2-4 column ASCII or XML.

Parameters

f ({str, file-like, Path}) – The file to load the spectrum from, or a str that specifies the file name

save_xml(f, start_time=0)[source]

Saves the reflectivity data to an XML file.

Parameters
  • f (str or file-like) – The file to write the spectrum to, or a str that specifies the file name

  • start_time (int, optional) – Epoch time specifying when the sample started
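
A short sketch writing a dataset to the XML format (the arrays and filename are illustrative):

>>> from refnx.dataset import ReflectDataset
>>> dataset = ReflectDataset(data=(x, y, y_err))
>>> dataset.save_xml("dataset.xml")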

refnx.dataset.load_data(f)[source]

Loads a dataset

Parameters

f ({file-like, str}) – f can be a string or file-like object referring to a file to load the dataset from.

Returns

data – The loaded dataset

Return type

Data1D-like
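
A minimal usage sketch with an illustrative filename; the returned object behaves like a Data1D:

>>> from refnx.dataset import load_data
>>> data = load_data("dataset.dat")
>>> data.x, data.y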