2. Python Module

This section documents the owl module API. It applies to the owl module version 0.19.128.

owl.sys_info()

Collect relevant system and debugging information.

Returns:

system_information – Version information for the required and optional packages used by owl.

Return type:

dictionary

Example

>>> owl.sys_info()
{'python': '3.8.5 | packaged by conda-forge | (default, Sep 27 2020, 15:56:17) \n[GCC 7.5.0]',
 'owl': '0.15.18',
 'sys.executable': '/home/ramona/falcongui/bin/python',
 'sys.exec_prefix': '/home/ramona/falcongui',
 'platform.node': 'falcon',
 'mcam_sdk': 'is not installed',
 'pyilluminate': '0.6.5',
 'teensytoany': '0.0.24',
 'opencv': '4.5.2',
 'numpy': '1.21.4',
 'scipy': '1.7.0',
 'imageio': '2.9.0',
 'scikit-image': '0.18.2',
 'xarray': '0.18.2',
 'wabisabi': '0.2.8',
 'dask': '2021.07.0',
 'vispy': '0.7.0',
 'tqdm': '4.61.2',
 'pillow': '8.3.1',
 'zaber-motion': '2.4.1',
 'json_tricks': '3.15.5'}

MCAM Instrument Class

class owl.instruments.MCAM(*, serial_number=None, **kwargs)

The MCAM object.

This object allows you to interact with the MCAM as a whole, capturing images under various conditions and saving data.

Parameters:
  • serial_number (str, or None) –

    • If provided, the MCAM with the given serial number will be opened.

  • with_z_stage (bool, or None) –

    • If True, opening the MCAM will fail if no z-stage is found.

    • If None, then a warning will be issued if no z-stage is found.

    • If False, opening the MCAM will not attempt to open the z-stage.

  • with_y_stage (bool, or None) –

    • If True, opening the MCAM will fail if no y-stage is found.

    • If None, then a warning will be issued if no y-stage is found.

    • If False, opening the MCAM will not attempt to open the y-stage.

  • with_x_stage (bool, or None) –

    • If True, opening the MCAM will fail if no x-stage is found.

    • If None, then a warning will be issued if no x-stage is found.

    • If False, opening the MCAM will not attempt to open the x-stage.

  • with_x_sample_stage (bool, or None) –

    • If True, opening the MCAM will fail if no x sample stage is found.

    • If None, then a warning will be issued if no x sample stage is found.

    • If False, opening the MCAM will not attempt to open the x sample stage.

  • with_reflection_illumination (bool, or None) –

    • If True, opening the MCAM will fail if no reflection-illumination is found.

    • If None, then a warning will be issued if no reflection-illumination is found.

    • If False, opening the MCAM will not attempt to open the reflection-illumination.

  • with_transmission_illumination (bool, or None) –

    • If True, opening the MCAM will fail if no transmission-illumination is found.

    • If None, then a warning will be issued if no transmission-illumination is found.

    • If False, opening the MCAM will not attempt to open the transmission-illumination.

  • with_fluorescence_illumination (bool, or None) –

    • If True, opening the MCAM will fail if no fluorescence-illumination is found.

    • If None, then a warning will be issued if no fluorescence-illumination is found.

    • If False, opening the MCAM will not attempt to open the fluorescence-illumination.

  • with_temperature_monitor (bool, or None) –

    • If True, opening the MCAM will fail if no temperature_monitor is found.

    • If None, then a warning will be issued if no temperature_monitor is found.

    • If False, opening the MCAM will not attempt to open the temperature_monitor.

  • with_plate_tapper (bool, or None) –

    • If True, opening the MCAM will fail if no plate_tapper is found.

    • If None, then a warning will be issued if no plate_tapper is found.

    • If False, opening the MCAM will not attempt to open the plate_tapper.

  • with_calibration (True, False, or None) –

    • If False, then no calibration file is loaded.

    • If None, then the default calibration file is loaded.

Notes

Changed in version 0.18.89: The keyword argument exif_orientation is formally deprecated and will be removed in a future version.
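
Example

A brief usage sketch based on the parameters above; the keyword values shown are illustrative choices, not requirements:

>>> from owl.instruments import MCAM
>>> # Open the first available MCAM. Skip opening the z-stage entirely,
>>> # and only warn (rather than fail) if no reflection illumination is found.
>>> mcam = MCAM(with_z_stage=False, with_reflection_illumination=None)
>>> mcam.close()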

property N_cameras

Tuple containing the number of cameras in each direction.

property N_cameras_X

The number of cameras in the X direction.

property N_cameras_Y

The number of cameras in the Y direction.

property N_cameras_total

The total number of micro cameras in the array.

acquire_full_field_of_view(*, skip_bad_sensors=True, data=None)

Acquires an image from all sensors and updates the metadata.

Parameters:
  • data (array) –

    Buffer with shape (N_cameras_X, N_cameras_Y, sensor_width, sensor_height) to hold the acquired image. If set to None, then the data is stored in the built-in buffer MCAM.data.

    Added in version 0.18.85: The data keyword argument was added.

  • skip_bad_sensors (bool) – Don’t acquire from non-functioning sensors.

Returns:

The dataset containing the MCAM metadata in addition to the image data. For more information about the metadata included, please see https://docs.ramonaoptics.com/python_metadata.html. This dataset’s metadata will not change when the imaging parameters of the MCAM are modified. However, the data included within the dataset may change unless the data parameter is provided to this function.

Changed in version 0.18.85: The return value was changed from a dictionary containing low level information from the MCAM to the full dataset.

Return type:

dataset
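
Example

A short sketch of a snapshot acquisition, assuming mcam is an open MCAM instance and that the image data lives in the dataset's 'images' variable, as in datasets created by owl.mcam_data.new_dataset:

>>> dataset = mcam.acquire_full_field_of_view()
>>> images = dataset['images']  # image data alongside the acquisition metadata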

acquire_high_speed_video(N_frames, *, selection_slice=None, tqdm=None, stop_event=None, data_buffers=None, **kwargs)
Acquire a given number of video frames from all or a subset of cameras at the maximum frame rate.

Before calling this function, the user is expected to have allocated a video buffer using MCAM.allocate_video_buffer.

>>> from owl.instruments import MCAM
>>> from tqdm import tqdm
>>> mcam = MCAM()
>>> mcam.bin_mode = 4
>>> mcam.frame_rate_setpoint = 30
>>> mcam.exposure = 30E-3
>>> mcam.allocate_video_buffer(N_frames=100, tqdm=tqdm)
>>> video_dataset = mcam.acquire_high_speed_video(100)

Parameters:
  • N_frames (int) – Number of frames to acquire.

  • selection_slice (slice) – Python slice selecting the portion of the MCAM array to acquire a video from.

  • tqdm (tqdm object) – tqdm-like object used to monitor the progress of memory allocation.

  • start_event (threading.Event) –

    If provided, the method start_event.set() will be called after video setup has been completed. The call to the set() method should not be blocking otherwise it will incur additional delay on the video acquisition.

    Added in version 0.18.59: The start_event parameter was added.

  • stop_event (threading.Event) –

    A threading.Event-like object used to stop the acquisition. Once the acquisition is stopped, None may be returned.

    Added in version 0.18.371: The stop_event parameter was added.

  • data_buffers (np.ndarray) – Optionally, the user may provide a pre-allocated array to store the acquired data. If not provided, the buffer will be allocated upon calling this function.

Returns:

mcam_dataset – Dataset containing the video data acquired.

Return type:

xarray.Dataset

Notes

Changed in version 0.18.29: The metadata generated by this function no longer keeps track of the frame_number dimension for many of the properties, as the properties are assumed to be unchanging.

As of version 0.18.29, the returned software_timestamp is independent of the image indices image_y and image_x.

acquire_image_from_sensor(index, *, skip_bad_sensors=True, data=None)

Acquires an image from a single sensor and updates the metadata.

Parameters:
  • index (tuple) – Tuple containing the index of the desired sensor, (row, column).

  • data (array) –

    Buffer with shape (N_cameras_X, N_cameras_Y, sensor_width, sensor_height) to hold the acquired image. If set to None, then the data is stored in the built-in buffer MCAM.data.

    Added in version 0.18.85: The data keyword argument was added.

  • skip_bad_sensors (bool) – Don’t acquire from non-functioning sensors.

Returns:

The dataset containing the MCAM metadata in addition to the image data. For more information about the metadata included, please see https://docs.ramonaoptics.com/python_metadata.html. This dataset’s metadata will not change when the imaging parameters of the MCAM are modified. However, the data included within the dataset may change unless the data parameter is provided to this function.

Changed in version 0.18.85: The return value was changed from a dictionary containing low level information from the MCAM to the full dataset.

Return type:

dataset

acquire_new_image(index, *, skip_bad_sensors=True, data=None)

Acquires an image from a single sensor and updates the metadata.

Parameters:
  • index (tuple) – Tuple containing the index of the desired sensor, (row, column).

  • data (array) –

    Buffer with shape (N_cameras_X, N_cameras_Y, sensor_width, sensor_height) to hold the acquired image. If set to None, then the data is stored in the built-in buffer MCAM.data.

    Added in version 0.18.85: The data keyword argument was added.

  • skip_bad_sensors (bool) – Don’t acquire from non-functioning sensors.

Returns:

The dataset containing the MCAM metadata in addition to the image data. For more information about the metadata included, please see https://docs.ramonaoptics.com/python_metadata.html. This dataset’s metadata will not change when the imaging parameters of the MCAM are modified. However, the data included within the dataset may change unless the data parameter is provided to this function.

Changed in version 0.18.85: The return value was changed from a dictionary containing low level information from the MCAM to the full dataset.

Return type:

dataset

acquire_selection(selection, *, skip_bad_sensors=True, data=None)

Acquires an image from the selected sensors.

After calling this method, the metadata included in the dataset attribute will be updated.

Parameters:
  • selection (np.array[bool]) – A boolean array of shape (N_cameras_X, N_cameras_Y) with the sensors selected for acquisition set to true.

  • data (array) – Buffer with shape (N_cameras_X, N_cameras_Y, sensor_width, sensor_height) to hold the acquired image. If set to None, then the data is stored in the built-in buffer MCAM.data.

  • skip_bad_sensors (bool) – Don’t acquire from non-functioning sensors.

Returns:

The dataset containing the MCAM metadata in addition to the image data. For more information about the metadata included, please see https://docs.ramonaoptics.com/python_metadata.html. This dataset’s metadata will not change when the imaging parameters of the MCAM are modified. However, the data included within the dataset may change unless the data parameter is provided to this function.

Changed in version 0.18.85: The return value was changed from a dictionary containing low level information from the MCAM to the full dataset.

Return type:

dataset
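
Example

A sketch of acquiring from a single sensor through a boolean selection; mcam is assumed to be an open MCAM instance:

>>> import numpy as np
>>> selection = np.zeros(mcam.N_cameras, dtype=bool)
>>> selection[0, 0] = True  # select only the sensor in the first row and column
>>> dataset = mcam.acquire_selection(selection)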

acquire_video_to_file(filename, N_frames, *, selection_slice=None, include_timestamp=True, temporary_buffer_size=None, stop_event=None, tqdm=None, start_event=None, metadata=None, keypoints=None, tqdm_save=None)

Record a video from the MCAM saving it directly to a file.

This function records a certain number of frames directly to a particular file. It uses the computer RAM as temporary storage in case the disk file is too busy to store the incoming frames.

A larger buffer enables larger durations of temporary slowdowns on the disk.

Added in version 0.18.68: First version of the method acquire_video_to_file.

Parameters:
  • filename (Path) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • N_frames (int) – Number of frames to acquire.

  • selection_slice (slice) – Python slice selecting the portion of the MCAM array to acquire a video from.

  • tqdm (tqdm object) – tqdm-like object used to monitor the progress of memory allocation.

  • start_event (threading.Event) – If provided, the method start_event.set() will be called after video setup has been completed. The call to the set() method should not be blocking otherwise it will incur additional delay on the video acquisition.

  • metadata

    Additional metadata to add to the saved file after the acquisition is complete.

    Added in version 0.18.191.

  • keypoints

    Well plate keypoints structure. If provided the data will be cropped according to the provided keypoints during the saving pipeline.

    Added in version 0.18.193.

  • stop_event (threading.Event) –

    If provided, the method stop_event.is_set() will be polled to check if the video acquisition should be aborted.

    Added in version 0.18.371: First version where the parameter stop_event became an option.

  • tqdm_save

    A TQDM object to specifically track the progress on the data saving portion of the acquisition. If video encoding is slightly slower than the acquisition frame rate, encoding the final few frames may happen after the acquisition is complete.

    Added in version 0.18.224.

Returns:

filename – The filename where the data has been stored. It includes the optional timestamp, and the new suffix.

Return type:

Path-like
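
Example

A sketch of recording a short video to disk; the filename is hypothetical and mcam is assumed to be an open MCAM instance:

>>> saved_path = mcam.acquire_video_to_file('my_experiment', 100)
>>> # saved_path includes the appended timestamp and the '.nc' suffix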

acquire_video_to_multi_mp4(filename, N_frames, *, selection_slice=None, keypoints=None, include_timestamp=True, temporary_buffer_size=None, stop_event=None, tqdm=None, start_event=None, metadata=None, tqdm_save=None)

Record a video from the MCAM, saving it directly to a compressed MP4 file.

This function records a certain number of frames directly to a particular file. It uses the computer RAM as temporary storage in case the disk file is too busy to store the incoming frames.

A larger buffer enables larger durations of temporary slowdowns on the video encoding pipeline.

Added in version 0.18.242: First version of the method acquire_video_to_multi_mp4.

Parameters:
  • filename (Path) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • N_frames (int) – Number of frames to acquire.

  • selection_slice (slice) – Python slice selecting the portion of the MCAM array to acquire a video from.

  • tqdm (tqdm object) – tqdm-like object used to monitor the progress of memory allocation.

  • start_event (threading.Event) – If provided, the method start_event.set() will be called after video setup has been completed. The call to the set() method should not be blocking otherwise it will incur additional delay on the video acquisition.

  • metadata – Metadata to join to the MCAM dataset upon saving.

  • keypoints – Well plate keypoints structure. If provided the data will be cropped according to the provided keypoints during the saving pipeline.

  • stop_event (threading.Event) –

    If provided, the method stop_event.is_set() will be polled to check if the video acquisition should be aborted.

    Added in version 0.18.371: First version where the parameter stop_event became an option.

  • tqdm_save – A TQDM object to specifically track the progress on the data saving portion of the acquisition. If video encoding is slightly slower than the acquisition frame rate, encoding the final few frames may happen after the acquisition is complete.

Returns:

filename – The filename where the data has been stored. It includes the optional timestamp, and the new suffix.

Return type:

Path-like

acquire_video_to_tiledmp4(filename, N_frames, *, selection_slice=None, keypoints=None, include_timestamp=True, temporary_buffer_size=None, stop_event=None, tqdm=None, start_event=None, metadata=None, tqdm_save=None)

Record a video from the MCAM, saving it directly to a compressed MP4 file.

This function records a certain number of frames directly to a particular file. It uses the computer RAM as temporary storage in case the disk file is too busy to store the incoming frames.

A larger buffer enables larger durations of temporary slowdowns on the video encoding pipeline.

Added in version 0.18.193: First version of the method acquire_video_to_tiledmp4.

Parameters:
  • filename (Path) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • N_frames (int) – Number of frames to acquire.

  • selection_slice (slice) – Python slice selecting the portion of the MCAM array to acquire a video from.

  • tqdm (tqdm object) – tqdm-like object used to monitor the progress of memory allocation.

  • start_event (threading.Event) – If provided, the method start_event.set() will be called after video setup has been completed. The call to the set() method should not be blocking otherwise it will incur additional delay on the video acquisition.

  • metadata – Metadata to join to the MCAM dataset upon saving.

  • keypoints – Well plate keypoints structure. If provided the data will be cropped according to the provided keypoints during the saving pipeline.

  • stop_event (threading.Event) –

    If provided, the method stop_event.is_set() will be polled to check if the video acquisition should be aborted.

    Added in version 0.18.371: First version where the parameter stop_event became an option.

  • tqdm_save

    A TQDM object to specifically track the progress on the data saving portion of the acquisition. If video encoding is slightly slower than the acquisition frame rate, encoding the final few frames may happen after the acquisition is complete.

    Added in version 0.18.224.

Returns:

filename – The filename where the data has been stored. It includes the optional timestamp, and the new suffix.

Return type:

Path-like

add_illumination(light, illumination_device, *, owndevice=False)

Add illumination board to the MCAM.

Once the board is added, the user should not attempt to independently close it. The board will close automatically upon closing the MCAM, or it can be closed by calling MCAM.remove_illumination().

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:

light (owl.instruments.Illuminate) – Open illumination board to attach to the MCAM

add_reflection_illumination(light, *, owndevice=False)

Add reflection illumination board to the MCAM.

Once the board is added, the user should not attempt to independently close it. The board will close automatically upon closing the MCAM, or it can be closed by calling MCAM.remove_reflection_illumination().

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:

light (owl.instruments.Illuminate) – Open reflection illumination board to attach to the MCAM

add_transmission_illumination(light, *, owndevice=False)

Add transmission illumination board to the MCAM.

Once the board is added, the user should not attempt to independently close it. The board will close automatically upon closing the MCAM, or it can be closed by calling MCAM.remove_transmission_illumination().

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:

light (owl.instruments.Illuminate) – Open transmission illumination board to attach to the MCAM

add_x_sample_stage(stage=None, owndevice=None)

Add a sample side X stage to the MCAM.

The stage will be closed upon closing the MCAM or calling MCAM.remove_x_sample_stage() if ownership of the stage is explicitly passed to the MCAM.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:
  • stage (owl.instruments.X_LSM_E) –

    Opened stage to attach to the MCAM. If no stage is provided, the MCAM will attempt to open the stage from the system settings.

    Changed in version 0.18.373: Providing a value of None is allowed.

  • owndevice (bool) – True if MCAM has permission to close the stage. If the stage is None, owndevice will be set to True by default.

add_x_stage(stage=None, owndevice=None)

Add an X-stage to the MCAM.

The stage will be closed upon closing the MCAM or calling MCAM.remove_x_stage() if ownership of the stage is explicitly passed to the MCAM.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:
  • stage (owl.instruments.X_LSM_E) –

    Opened stage to attach to the MCAM. If no stage is provided, the MCAM will attempt to open the stage from the system settings.

    Changed in version 0.18.373: Providing a value of None is allowed.

  • owndevice (bool) – True if MCAM has permission to close the stage. If the stage is None, owndevice will be set to True by default.

add_y_stage(stage=None, owndevice=None)

Add a Y-stage to the MCAM.

The stage will be closed upon closing the MCAM or calling MCAM.remove_y_stage() if ownership of the stage is explicitly passed to the MCAM.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:
  • stage (owl.instruments.X_LSM_E) –

    Opened stage to attach to the MCAM. If no stage is provided, the MCAM will attempt to open the stage from the system settings.

    Changed in version 0.18.373: Providing a value of None is allowed.

  • owndevice (bool) – True if MCAM has permission to close the stage. If the stage is None, owndevice will be set to True by default.

add_z_stage(stage=None, owndevice=None)

Add a Z-stage to the MCAM.

The stage will be closed upon closing the MCAM or calling MCAM.remove_z_stage() if ownership of the stage is explicitly passed to the MCAM.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

Parameters:
  • stage (owl.instruments.X_LSM_E) –

    Opened stage to attach to the MCAM. If no stage is provided, the MCAM will attempt to open the stage from the system settings.

    Changed in version 0.18.373: Providing a value of None is allowed.

  • owndevice (bool) – True if MCAM has permission to close the stage. If the stage is None, owndevice will be set to True by default.

allocate_video_buffer(N_frames, *, selection=None, garbage_collection=True, tqdm=None, cpu_count=None)

Allocates a buffer for video frames.

Before allocating any new data, the reference to the old data will be released. Upon failure, there may be no video buffer in memory.

Parameters:
  • N_frames (int) – The number of frames to allocate memory for.

  • selection (array-like [N_cameras_Y, N_cameras_X]) – A boolean array selecting the image sensors to acquire from.

  • tqdm (tqdm-like) – A tqdm-like progress bar manager.

  • garbage_collection (bool) – Passed to free_video_buffer if one needs to reallocate memory.

  • cpu_count (int) – The number of CPUs to use to allocate the memory. If None, all CPUs will be used.

See also

free_video_buffer, acquire_video

property analog_gain

Analog gain applied globally to all pixels of all sensors.

property available_illumination

The available illumination hardware devices connected to the MCAM.

This property reports a tuple of strings containing the available illumination devices presently connected to the MCAM.

The capabilities of these devices, in conjunction with the MCAM, determine the available illumination modes.

property available_illumination_modes

The available illumination modes of the MCAM.

This property reports a tuple of strings containing the available illumination modes of the MCAM given the presently connected illumination devices.

See also

available_illumination, set_illumination_brightness

property available_mechanical_states

A list of mechanical states currently available for the MCAM.

For MCAM systems equipped with one or more motion stages, the MCAM may be defined with a set of pre-calibrated mechanical states.

This list reports the available states for the MCAM, taking into account the presently connected hardware, so that users may programmatically select the most appropriate state for their use case.

property bin_mode

The number of pixels binned in the X-axis and skipped in the Y-axis.

Changing the bin mode does not affect the image exposure.

The data property is updated when bin_mode is modified to accommodate the changing image resolution.

The data acquired in previous binning modes can still be retrieved by changing the binning mode back to its previous value.

Valid bin_mode values are either 1, 2, or 4.
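
For illustration, a short sketch of changing the bin mode on an open MCAM instance:

>>> mcam.bin_mode = 2   # valid values are 1, 2, or 4
>>> mcam.image_shape    # updated to reflect the new binning
>>> mcam.bin_mode = 1   # data from the previous bin mode can be retrieved again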

property calibration_filename

File containing calibration information specific to the MCAM device.

Returns:

calibration_filename – Path object pointing to the calibration file. This file may or may not exist.

Return type:

Path

can_reuse_video_buffer(N_frames, selection=None)

Checks if the video buffer can be reused for the next acquisition.

This function may be used for interactive applications to guide users in their navigation of the interface.

Returns:

video_buffer_reusable – If True, MCAM.allocate_video_buffer will return instantly since the previous video buffer will simply be reused.

Return type:

bool
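
Example

A sketch of the check-then-allocate pattern this method enables; mcam is assumed to be an open MCAM instance:

>>> if not mcam.can_reuse_video_buffer(100):
...     mcam.allocate_video_buffer(100)
>>> video_dataset = mcam.acquire_high_speed_video(100)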

close(*)

Close the connection to the MCAM.

property color_data

The color data as measured by the MCAM.

This property returns a lazy array containing the color information that would be returned by the camera after it has been debayered.

The reason it returns a lazy array is that the full color information is quite large and takes a long time to compute.

This way, you can slice the color_data as you would a regular numpy array. If data from only a few cameras is selected, then only the required portion of the data will be computed.
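
For example, a sketch of requesting color data for a single camera on an open MCAM instance:

>>> lazy_color = mcam.color_data
>>> corner = lazy_color[0, 0]  # only the required portion of the data is computed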

property digital_gain

Digital gain applied globally to all pixels of all sensors.

property digital_gain_color

Digital gain applied to each individual color in a bayer pixel.

You can assign to this property a tuple of four values (red_gain, green0_gain, blue_gain, green1_gain) or three values (red_gain, green_gain, blue_gain).

Returns:

digital_gain_color – Digital gain values for each color in the form of the tuple.

Return type:

(red_gain, green0_gain, blue_gain, green1_gain)
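
Example

Both documented assignment forms, with illustrative gain values:

>>> mcam.digital_gain_color = (1.2, 1.0, 1.5)       # (red, green, blue)
>>> mcam.digital_gain_color = (1.2, 1.0, 1.5, 1.0)  # (red, green0, blue, green1)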

property end_pixel

Ending pixel position in the coordinate space of the physical sensor.

The value is returned as a tuple (end_pixel_y, end_pixel_x).

These values can be changed through a call to select_pixels.

property exif_orientation

Image Orientation flag to facilitate image display.

Should be set to one of

1, 2, 3, 4, 5, 6, 7, or 8

depending on the orientation of the MCAM relative to the sample [1].

Changing this parameter only has an effect on the next acquisition’s metadata.

property exposure

Image sensor exposure in seconds.

property exposure2

Image sensor exposure in seconds used in iHDR acquisitions.

property frame_rate

The sensor limited frame rate.

property frame_rate_setpoint

The current frame rate setpoint for high speed video acquisition.

Changing this value has a side effect of changing the exposure of the image sensors.

The true frame rate will be limited by both the frame rate setpoint, and the exposure of the image sensor.

free_video_buffer(*, garbage_collection=True)

Free the allocated video buffer to reclaim the allocated memory.

Parameters:

garbage_collection (bool) – If True, the system will call Python’s gc.collect() to attempt garbage collection on the unused memory.

property gain

Total gain of the sensor.

Equal to the product of analog_gain and digital_gain.

Note

This property cannot be set. To change the gain, you must change either the digital_gain or the analog_gain.

get_illumination_brightness()

The brightness of the present illumination mode as a fraction.

Returns:

brightness – The illumination brightness of the illumination modes as a number between 0. and 1.

Return type:

float

hold_state()

Context manager to ensure the state isn’t mutated in a code block.

Example

>>> from owl.instruments import MCAM
>>> mcam = MCAM()
>>> mcam.exposure = 100E-3
>>> with mcam.hold_state():
...     mcam.exposure = 10E-3
...     print(f"The MCAM exposure is temporarily set to {mcam.exposure * 1000:.2f} ms")
>>> print(f"The MCAM exposure is {mcam.exposure * 1000:.2f} ms")
property illumination_mode

The illumination mode of the MCAM at the present state.

property image_nbytes: int

Size of image in bytes from a single sensor.

property image_shape: int, int

The image shape obtained from the acquisition of a single sensor.

The image shape is affected by the following imaging parameters:

  • bin_mode

  • start_pixel (defined by select_pixels)

  • end_pixel (defined by select_pixels)

property image_size: int

Size of image in pixels from a single sensor.

property interlaced_hdr

Interlaced HDR mode for the MCAM.

property maximum_analog_gain

The maximum value that the analog gain can be set to.

property maximum_datarate

The maximum acquisition data rate supported by the MCAM hardware.

property maximum_digital_gain

The maximum value that the digital gain can be set to.

property maximum_exposure

The maximum exposure in seconds given the current imaging configuration.

property maximum_frame_rate_setpoint

The maximum frame rate setpoint as limited by the sensor hardware.

property mechanical_state

The mechanical state of the MCAM system.

For MCAM systems equipped with one or more motion stages, the MCAM may be defined with a set of pre-calibrated mechanical states.

This variable reflects the current mechanical state of the system.

The initial state of the MCAM is undefined; as such, this property will return None upon opening the MCAM.

property micro_camera_separation

Separation between micro cameras in the array in meters.

property minimum_analog_gain

The minimum value that the analog gain can be set to.

property minimum_digital_gain

The minimum value that the digital gain can be set to.

property minimum_exposure

The minimum exposure in seconds given the current imaging configuration.

property minimum_frame_rate_setpoint

The minimum frame rate setpoint as limited by the sensor hardware.

open(*)

Open the MCAM for communication.

optimize_sensor_timing(mode='stability')

Optimize the sensor timing for different acquisition modes.

Parameters:

mode ('stability' or 'frame_rate') – The desired sensor timing mode. For snapshot acquisition, 'stability' is recommended. Those requiring high frame rate acquisition are encouraged to contact Ramona Optics at info@ramonaoptics.com to discuss their application.

Notes

The primary use of this function is to enable high speed video acquisition. The goal is to deprecate the call to this function.

As of version 0.16.27, the use of 'frame_rate' is still experimental and will emit a warning to the user.

property pixel_width

The width of a pixel’s FOV in meters.

property plate_tapper

The MCAM plate tapper subcomponent, if it is connected.

remove_illumination(illumination_device)

Remove and close illumination board.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

remove_reflection_illumination()

Remove and close the reflection illumination board.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

remove_transmission_illumination()

Remove and close the transmission illumination board.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

remove_x_sample_stage()

Remove the x sample stage from the MCAM. If the MCAM has ownership of the stage, the stage will be closed.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

remove_x_stage()

Remove the x-stage from the MCAM. If the MCAM has ownership of the stage, the stage will be closed.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

remove_y_stage()

Remove the y-stage from the MCAM. If the MCAM has ownership of the stage, the stage will be closed.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

remove_z_stage()

Remove the z-stage from the MCAM. If the MCAM has ownership of the stage, the stage will be closed.

Note

This function is not thread safe. Video or image acquisition should be stopped before calling this function.

save_calibration(calibration_data=None, calibration_filename=None)

Save the given calibration data at the desired location.

Parameters:
  • calibration_data (Dict) – Data for alignment and photometric corrections as well as metadata.

  • calibration_filename (Path-like, optional) – Location to save the calibration data. If no location is given defaults to self.calibration_filename.

Returns:

calibration_filename – The location where the calibration data was saved.

Return type:

Path

select_center_pixels(shape, step=None)

Select the desired pixels centered about the center of the sensor.

Parameters:
  • shape (int, int) – Image shape to capture

  • step (1, 2, 4, or None) – Bin mode to select. If None, the current bin mode is maintained.

See also

select_pixels

select_center_square_pixels(step=None, step_alignment=4)

Select the largest center square pixels supported by the MCAM.

Currently, this is the same as selecting the center square (3072, 3072) pixels.

Parameters:
  • step (1, 2, 4, or None) – The binning mode used when selecting the sensors. If None, the current bin mode is used.

  • step_alignment (4) – Different bin modes may support slightly different shapes for the maximum supported square images. By default, we use image extents that are supported by the largest values for bin mode. This guarantees that they are supported by the smaller values as well.

See also

select_pixels

select_default_4_by_3_pixels(step=None)

Select the default pixel selection supported by the MCAM.

Currently, this is the same as selecting the center (3120, 4096) pixels.

Parameters:

step (1, 2, 4, or None) – The binning mode used when selecting the sensors. If None, the current bin mode is used.

See also

select_pixels

select_pixels(start_pixel, end_pixel, step=None)

Set the pixel sub-selection and bin mode of the MCAM sensors.

When selecting pixels, the following constraints must be satisfied after binning (see the example after the parameter list):

  • The width must be greater than or equal to 320 and divisible by 8.

  • The height must be greater than or equal to 8 and divisible by 16.

  • The product of the width and the height must be divisible by 4096.

Calling this function is equivalent to invalidating all the data in the dataset. The data (and metadata) stored in MCAM.dataset should be considered invalid.

The exposure time is not guaranteed to stay constant after calling this function.

Parameters:
  • start_pixel ((int, int)) – The y and x start pixels, relative to how data is stored in software.

  • end_pixel ((int, int)) – The y and x end pixels, relative to how data is stored in software.

  • step (int) – The binning mode. Must be 1, 2, or 4. If None, the current bin mode is maintained.
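
Example

A sketch of a pixel selection that satisfies the constraints above; the window coordinates are illustrative:

>>> # Width 1280 >= 320 and divisible by 8; height 1024 >= 8 and divisible
>>> # by 16; 1024 * 1280 is divisible by 4096.
>>> mcam.select_pixels((0, 0), (1024, 1280), step=1)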

select_sensor_regions_of_interest(image_span, *, start_pixels, step=None)

Per sensor region of interest selection.

Added in version 0.18.21.

Parameters:
  • image_span ((int, int)) – The image space across pixels in the imaging sensors. This span is common to all imaging sensors.

  • start_pixels (array-like [N_cameras, 2]) – Starting pixels for each sensor in the MCAM array.

  • step (1, 2, 4 or None) – A valid bin mode for the imaging sensor

Note

This feature is still experimental and does not maintain consistency across the entire metadata. This function’s API is also subject to change without notice.

property selection_all_cameras

The selection of sensors that corresponds to all cameras being used.

Returns:

selection – Array of bool corresponding to the selection of all sensors. In this array, all values are True. This array has shape MCAM.N_cameras.

Return type:

np.array[bool]

property selection_center_12

The selection of sensors that corresponds to the center 12 cameras.

The center 4 x 3 cameras are selected.

This property returns an array of bools corresponding to the selection of the center 12 sensors.

Deprecated since version 0.18.4.

Notes

This function is only supported by micro camera arrays of shape (9, 6).

property sensor_chroma

The sensor chromaticity.

This property can take on one of 5 string values:

  • 'monochrome'

  • 'bayer_grbg', 'bayer_rggb', 'bayer_bggr' or, 'bayer_gbrg'

property sensor_height: int

The height in pixels of an image from a single sensor.

This is an inherent property of the physical sensor and unaffected by the imaging parameters.

property sensor_pixel_pitch

Distance, in meters, between pixel centers in a sensor.

property sensor_pixel_width

Distance, in meters, between pixel centers in a sensor.

property sensor_shape: int, int

A tuple containing the physical number of pixels of the sensors.

This is an inherent property of the physical sensor and unaffected by the imaging parameters.

property sensor_width: int

The width in pixels of an image from a single sensor.

This is an inherent property of the physical sensor and unaffected by the imaging parameters.

property serial_number

The serial number of the MCAM.

set_illumination_brightness(brightness, illumination_mode=None, **kwargs)

Set the brightness of the chosen illumination mode.

Parameters:
  • brightness (float) – A number between 0 and 1 indicating the overall brightness of the illumination mode.

  • illumination_mode (str) – The illumination mode of the MCAM. This should be one of the reported available illumination modes.
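
Example

A sketch of adjusting the brightness; it assumes that omitting illumination_mode targets the present illumination mode:

>>> mcam.available_illumination_modes  # inspect the supported modes first
>>> mcam.set_illumination_brightness(0.5)  # half brightness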

set_state(state, *, strict=True)

Set the MCAM to a previously saved state.

Parameters:
  • state – The desired MCAM state.

  • strict (bool) – If True, the state will be validated for the appropriate serial numbers, and for the validity of all other parameters. If False, the serial numbers of the MCAM and the state will not be compared, and invalid parameters will be replaced by None and the MCAM will retain the previously applied state.

property start_pixel

Starting pixel position in the coordinate space of the physical sensor.

The value is returned as a tuple (start_pixel_y, start_pixel_x).

These values can be changed through a call to select_pixels.

property temperature_monitor

The MCAM temperature monitor subcomponent, if it is connected.

property use_hardware_trigger

Use trigger pulses to control image acquisition.

Using the hardware trigger will update minimum_exposure.

See also

minimum_exposure

property video_buffer_nbytes

The number of bytes allocated in the video acquisition buffer.

property white_light_reflection

LED color that provides white light to the MCAM in reflection.

These LED values come from the photometric calibration data, which models the response of our system when used in reflection.

The user should take care to normalize these LED values to a value that suits their application.

property white_light_transmission

LED color that provides white light to the MCAM in transmission.

These LED values come from the photometric calibration data, which models the response of our system when used in transmission.

The user should take care to normalize these LED values to a value that suits their application.

MCAM Dataset Manipulation

owl.mcam_data.new_dataset(N_cameras: (int, int) = (9, 6), image_shape: (int, int) = (3120, 4096), dtype=<class 'numpy.uint8'>, dims=('image_y', 'image_x', 'y', 'x'), coords=None, *, delayed: bool = False, array=None)

Create a new xarray.Dataset for MCAM data.

Returns an xarray object that contains MCAM data along with the bare minimum metadata. The dataset holds an mcam_data xr.DataArray of size (*N_cameras, *image_shape) with dimensions dims and coordinates coords.

If a coordinate is not provided, a new coordinate is set with np.arange.

Parameters:
  • N_cameras (tuple of length 2) – Number of cameras that exist in the MCAM.

  • image_shape (tuple) – The size of the images returned by the individual cameras.

  • dtype – The dtype of the array that should be allocated.

  • dims – A list or tuple containing the names of the dimensions.

  • coords – A dictionary containing the information for the coordinates.

  • delayed – If set to True, this returns a lazy dask array. If set to False (the default), it returns a numpy array within the xarray.

  • array – If provided, N_cameras, image_shape, and dtype will be overwritten to match array. The array will be used as the data in the returned xarray object.

Returns:

mcam_dataset – The dataset containing an 'images' array with associated dimensions.

Return type:

xr.Dataset
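
Example

A sketch of creating an empty dataset with the documented defaults:

>>> import numpy as np
>>> from owl import mcam_data
>>> dataset = mcam_data.new_dataset(N_cameras=(9, 6), image_shape=(3120, 4096), dtype=np.uint8)
>>> dataset['images'].shape  # (*N_cameras, *image_shape)
(9, 6, 3120, 4096)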

owl.mcam_data.new_rgb_dataset(N_cameras=(6, 4), image_shape=(2432, 4320, 3), dtype=<class 'numpy.uint8'>, dims=('image_y', 'image_x', 'y', 'x', 'rgb'), coords=None, delayed=False, array=None)

Create a xarray.Dataset containing RGB MCAM data.

By default, this calls new, but passes the parameters

dims=('image_y', 'image_x', 'y', 'x', 'rgb') and coords={'rgb': ['r', 'g', 'b']}

Returns:

mcam_dataset – An xarray.Dataset with the last dimension having coordinates of 'rgb'.

Return type:

xr.Dataset

owl.mcam_data.new_rgba_dataset(N_cameras=(6, 4), image_shape=(2432, 4320, 4), dtype=<class 'numpy.uint8'>, dims=('image_y', 'image_x', 'y', 'x', 'rgba'), coords=None, delayed=False, array=None)

Create a new xr.Dataset containing the MCAM data with RGBA colors.

By default, this calls new, but passes the parameters

dims=('image_y', 'image_x', 'y', 'x', 'rgba') and coords={'rgba': ['r', 'g', 'b', 'a']}

Returns:

mcam_dataset – An xarray.Dataset with the last dimension having coordinates of 'rgba'.

Return type:

xr.Dataset

owl.mcam_data.load(directory: Path, *, delayed=True, progress=True, scheduler='threading', update=True, **_kwargs)

Load mcam_data from a directory of exported images and metadata.

Parameters:
  • directory (Path) – The path where the data is stored as bmps.

  • metadata_filename (str) – The name of the metadata file. If not provided, a file named metadata.json or metadata.nc will be looked for, in that order.

  • delayed (bool, optional) – If True, the computation will return a lazy object. As the user of this library, you will have to explicitly force the computation of the lazy object. If None, this function will attempt to automatically determine if the data should be loaded lazily or eagerly based on the size of the data. If a particular behavior is desired in your application, the delayed parameter should be specified. Metadata is loaded eagerly.

  • progress – If True, then a progress bar is shown during loading operations. This is only valid if delayed=True.

  • scheduler – Parameter passed to dask.compute to select which schedule is used to load the data in parallel.

  • update – Set to False if you wish to load the raw un-updated data. Setting this parameter to False keeps the raw metadata as is in the .nc file but means that the loaded dataset is unsupported by the remainder of the owl analysis functions.

Returns:

mcam_data – Return the mcam_data in an xarray DataArray with all the metadata.

Return type:

xarray DataArray or Dataset

owl.mcam_data.save(mcam_dataset, filename: Path, *, mode='w', engine=None, include_timestamp: bool = True, disk_space_tolerance=100000000.0)

Save data as an hdf5 file using the netcdf4 API.

The default extension is '.nc' if none is given.

Returns the saved path name as a Path object.

Parameters:
  • mcam_dataset (xarray Dataset) – The mcam_data you wish to save.

  • filename (Path-like) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • mode (str) – The mode to open the file in. Valid options are 'w' (write) or 'x' (exclusive creation, failing if the file already exists).

  • engine – Parameter passed to xarray.Dataset.to_netcdf to select the backend used for writing data to disk.

  • include_timestamp (bool) – If set to True, this will append a timestamp to the provided path name. If set to false, no timestamp will be appended to the path name possibly overwriting any files currently within the previous path name.

  • disk_space_tolerance (float) – The required margin, in bytes, between the free disk space in the given directory and the size of the dataset to be saved. If the directory has fewer free bytes than this tolerance plus the dataset size, an error will be raised.

  • single_file

    Changed in version 0.14.0: In version 0.14.0 this parameter is ignored, and the behavior is always that of single_file=True

    Changed in version 0.18.9: In version 0.18.9 this parameter will emit a deprecation warning indicating that it will be removed in version 0.20.0

Returns:

filename – The filename where the data has been stored. It includes the optional timestamp and the new suffix.

Return type:

Path-like
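
Example

A sketch of a save and reload round trip; the filename is hypothetical and dataset is assumed to be an MCAM xarray Dataset:

>>> from owl import mcam_data
>>> saved_path = mcam_data.save(dataset, 'experiment_01')  # '.nc' and a timestamp are appended
>>> reloaded = mcam_data.load_netcdf(saved_path)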

owl.mcam_data.save_video(mcam_dataset, filename: Path, *, include_timestamp: bool = True, disk_space_tolerance=100000000.0, tqdm=None, mode='w')

Save datasets that contain stacks and provide progress information.

This method provides an optimized way to save datasets for speed of writing and reading. It also provides the user with feedback during data writing through the form of an optional progress bar.

Parameters:
  • mcam_dataset (xarray Dataset) – The mcam_data you wish to save.

  • filename (Path-like) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • include_timestamp (bool) – If set to True, this will append a timestamp to the provided path name. If set to false, no timestamp will be appended to the path name possibly overwriting any files currently within the previous path name.

  • disk_space_tolerance (float) – The required margin, in bytes, between the free disk space in the given directory and the size of the dataset to be saved. If the directory has fewer free bytes than this tolerance plus the dataset size, an error will be raised.

  • mode (str) – The mode to open the file in. Valid options are 'w' (write) or 'x' (exclusive creation, failing if the file already exists).

Returns:

save_filepath – The path where the data was saved.

Return type:

Path-like

owl.mcam_data.save_metadata(mcam_dataset, directory, *, include_timestamp: bool = True, metadata_filename: str = 'metadata.nc', allow_nan: bool = True, mode='w')

Save mcam metadata to a given directory.

Parameters:
  • mcam_dataset (xarray Dataset) – The mcam_dataset in an xarray Dataset with all the metadata.

  • directory (pathlike) – The directory where to save the metadata.

  • metadata_filename (str, optional) – The name of the metadata file.

  • include_timestamp (bool, optional) – If set to True, this will append a timestamp to the provided directory. If set to false, no timestamp will be appended to the directory possibly overwriting any files currently within the previous directory.

  • allow_nan (bool, optional) – If True nan values will be allowed when exporting metadata as a json. If False attempting to export metadata as a json will result in an exception.

Returns:

save_directory – The directory where the metadata was saved. If include_timestamp is True, then this include the added timestamp.

Return type:

Path

owl.mcam_data.export(mcam_dataset, directory: Path, *, mode='w', metadata_filename=None, include_timestamp: bool = True, imagename_format=None, image_mode=None, disk_space_tolerance=10000000.0, image_export_format=None, save_filename_separator=None, **kwargs) → Path

Save data as bmps and metadata as json.

You may pass additional data to be serialized as a dictionary to experiment_data.

Returns the saved directory as a Path object.

Parameters:
  • mcam_dataset – The mcam_dataset you wish to save.

  • directory – The directory where the data should be saved.

  • metadata_filename – The filename within the directory where the metadata stored as a json file should be saved. If set to None, no metadata file will be saved.

  • include_timestamp – If set to True, this will append a timestamp to the provided directory. If set to false, no timestamp will be appended to the directory possibly overwriting any files currently within the previous directory.

  • imagename_format – Passed to save_one_as_tif or save_one_as_bmp.

  • image_mode ('gray', 'rgb', 'rgba', 'bayer', 'bggr', 'rggb', 'grbg', 'gbrg' or None) – If mcam_dataset is not an xarray.DataArray, then the image_mode can be used to override the code that we use to guess the dimensions of your image.

  • disk_space_tolerance (float) – The required margin, in bytes, between the free disk space in the given directory and the size of the dataset to be saved. If the directory has fewer free bytes than this tolerance plus the dataset size, an error will be raised.

  • save_filename_separator (string) – If not None, this will be used to create a flat structure using the provided separator, the toplevel directory name as a prefix, and the imagename_format. For example, if the separator is '_', then the output structure will be {directory.parent}/{directory.name}_{imagename_format}.

Returns:

The name of the folder where the data has been exported.

Return type:

directory

owl.mcam_data.to_rgb(mcam_dataarray, bayer_pattern=None)

Convert raw data obtained from the mcam to RGB data.

Parameters:
  • mcam_dataarray – The raw data acquired from the MCAM. Should be an xarray DataArray object that was ideally created with mcam_data.new.

  • bayer_pattern ([None, 'rggb', 'bggr', 'grbg', 'gbrg']) – If None is provided, then the bayer_pattern is inferred from the bayer_pattern coordinate of the raw_data. In recent versions, this coordinate should be automatically populated if the data was acquired with an MCAM. If bayer_pattern is specified, then the coordinate of the raw_data is ignored and this parameter is passed to bayer2rgb as the assumed color ordering.

Returns:

rgb_dataarray – Deep copy of mcam_dataarray with rgb color mcam_data.

Return type:

xarray DataArray
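
Example

A sketch of converting a raw acquisition to RGB; dataset is assumed to hold raw bayer data under its 'images' variable:

>>> from owl import mcam_data
>>> rgb_dataarray = mcam_data.to_rgb(dataset['images'])  # bayer pattern inferred from the coordinate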

owl.mcam_data.to_rgba(mcam_dataarray, bayer_pattern=None)

Converts raw data obtained from the mcam to RGBA data.

Parameters:
  • mcam_dataarray – The raw data acquired from the MCAM. Should be an xarray object that was ideally created with mcam_data.new.

  • bayer_pattern ([None, 'rggb', 'bggr', 'grbg', 'gbrg']) – If None is provided, then the bayer_pattern is inferred from the bayer_pattern coordinate of the raw_data. In recent versions, this coordinate should be automatically populated if the data was acquired with an MCAM. If bayer_pattern is specified, then the coordinate of the raw_data is ignored and this parameter is passed to bayer2rgba as the assumed color ordering.

Returns:

rgba_dataarray – Deep copy of mcam_dataarray with rgba color mcam_data.

Return type:

xarray DataArray

owl.mcam_data.resize_as_square(images, axes=(2, 3))

Slice your image retaining the center square.

Parameters:

images (array-like) – numpy-like array containing the following leading dimensions: ['image_y', 'image_x', 'y', 'x'].

Returns:

images – With equal extents along the 'y' and 'x' dimensions.

Return type:

array-like

owl.mcam_data.dataset_from_large_image(image, chunks=(6, 4), contiguous=True, trim_edge=False)

Creates an mcam_data like Dataset from a large image.

Parameters:
  • image (array-like, grayscale [N, M], or rgb [N, M, 3], or rgba [N, M, 4]) – The image you wish to chunk up into smaller pieces.

  • chunks ((int, int), optional) – Tuple of int for the number of chunks in each dimensions.

  • contiguous (bool, optional) – If true, will ensure that the resulting array is contiguous in memory.

  • trim_edge (bool, optional) – If true, will trim away edges to make a smaller image in the case that the chunks do not line up with the original image size.

Returns:

m – mcam_data like xarray.Dataset structure that holds the image data as the variable images.

Return type:

mcam_data

Example

>>> from owl import mcam_data
>>> from imageio import imread
>>> image = imread('large_image_example.tif')
>>> dataset = mcam_data.dataset_from_large_image(image, trim_edge=True)
>>> dataset
<xarray.Dataset>
Dimensions:           (chunk_y: 6, chunk_x: 4, y: 256, x: 128)
Coordinates:
* chunk_y           (chunk_y) int64 0 1 2 3 4 5
* chunk_x           (chunk_x) int64 0 1 2 3
* y                 (y) int64 0 1 2 3 4 5 6 7 ... 249 250 251 252 253 254 255
* x                 (x) int64 0 1 2 3 4 5 6 7 ... 121 122 123 124 125 126 127
    __owl_version__   <U29 '0.18.123.dev6+ga658ddd8.dirty'
    __sys_version__   <U80 '3.9.13 | packaged by Ramona Optics | (main, Aug 3...
    __owl_sys_info__  <U674 "{'python': '3.9.13 | packaged by Ramona Optics |...
Data variables:
    images            (chunk_y, chunk_x, y, x) float64 0.0 0.0 0.0 ... 0.0 0.0
>>> mcam_data.save(dataset, "large_image_chunked", include_timestamp=False)

owl.mcam_data.from_large_image(image, chunks=(6, 4), contiguous=True, trim_edge=False)

Creates an mcam_data like DataArray from a large image.

Parameters:
  • image (array-like, grayscale [N, M], or rgb [N, M, 3], or rgba [N, M, 4]) – The image you wish to chunk up into smaller pieces.

  • chunks ((int, int), optional) – Tuple of int for the number of chunks in each dimensions.

  • contiguous (bool, optional) – If true, will ensure that the resulting array is contiguous in memory.

  • trim_edge (bool, optional) – If true, will trim away edges to make a smaller image in the case that the chunks do not line up with the original image size.

Returns:

m – mcam_data like xarray.DataArray structure that holds the image.

Return type:

mcam_data

Example

>>> from owl import mcam_data
>>> from imageio import imread
>>> image = imread('large_image_example.tif')
>>> images = mcam_data.from_large_image(image, trim_edge=True)
>>> images
<xarray.DataArray 'images' (chunk_y: 6, chunk_x: 4, y: 4028, x: 4602)>
array([[[[ 0, ...,  0],
         [ 0, ...,  0]]]], dtype=uint8)
Coordinates:
  * chunk_y          (chunk_y) int64 0 1 2 3 4 5
  * chunk_x          (chunk_x) int64 0 1 2 3
  * y                (y) int64 0 1 2 3 4 5 6 ... 4022 4023 4024 4025 4026 4027
  * x                (x) int64 0 1 2 3 4 5 6 ... 4596 4597 4598 4599 4600 4601
    __owl_version__  <U24 '0.9.11'
    __sys_version__  <U79 '3.7.3 | ...'
>>> mcam_data.save(images, "large_image_chunked", include_timestamp=False)

owl.mcam_data.save_netcdf(mcam_dataset, filename: Path, *, mode='w', engine=None, include_timestamp: bool = True, disk_space_tolerance=100000000.0)

Save data as an hdf5 file using the netcdf4 API.

The default extension is '.nc' if none is given.

Returns the saved path name as a Path object.

Parameters:
  • mcam_dataset (xarray Dataset) – The mcam_data you wish to save.

  • filename (Path-like) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • mode (str) – The mode to open the file in. Valid options are 'w' (write) or 'x' (exclusive creation, failing if the file already exists).

  • engine – Parameter passed to xarray.Dataset.to_netcdf to select the backend used for writing data to disk.

  • include_timestamp (bool) – If set to True, this will append a timestamp to the provided path name. If set to false, no timestamp will be appended to the path name possibly overwriting any files currently within the previous path name.

  • disk_space_tolerance (float) – The required margin, in bytes, between the free disk space in the given directory and the size of the dataset to be saved. If the directory has fewer free bytes than this tolerance plus the dataset size, an error will be raised.

  • single_file

    Changed in version 0.14.0: In version 0.14.0 this parameter is ignored, and the behavior is always that of single_file=True

    Changed in version 0.18.9: In version 0.18.9 this parameter will emit a deprecation warning indicating that it will be removed in version 0.20.0

Returns:

filename – The filename where the data has been stored. It includes the optional timestamp and the new suffix.

Return type:

Path-like

owl.mcam_data.save_video_netcdf(mcam_dataset, filename: Path, *, include_timestamp: bool = True, disk_space_tolerance=100000000.0, tqdm=None, mode='w')

Save datasets that contain stacks and provide progress information.

This method provides an optimized way to save datasets for speed of writing and reading. It also provides the user with feedback during data writing through the form of an optional progress bar.

Parameters:
  • mcam_dataset (xarray Dataset) – The mcam_data you wish to save.

  • filename (Path-like) – The filename where the data should be saved. If the filename has no extension, then an '.nc' extension is added.

  • include_timestamp (bool) – If set to True, this will append a timestamp to the provided path name. If set to False, no timestamp will be appended, possibly overwriting any file currently at that path.

  • disk_space_tolerance (float) – The required difference, in bytes, between the free disk space in the given directory and the size of the dataset to be saved. If the directory has fewer free bytes than this tolerance plus the dataset size, an error will be raised.

  • mode (str) – The mode to open the file in. Valid options are 'w' (write) or 'x' (exclusive creation, failing if the file already exists).

Returns:

save_filepath – The path where the data was saved.

Return type:

Path-like

owl.mcam_data.load_netcdf(filename: Path, *, delayed=True, scheduler='threading', progress=True, update=True, engine='ramona', chunks=None)

Load mcam_data from an HDF5 file using the netCDF4 API and convert it to an mcam_data structure.

Parameters:
  • filename (Path-like) – The netcdf4 file where the data is stored.

  • delayed (bool, optional) – If True, a lazy object is returned, and as the user of this library you will have to explicitly force its computation. If left unspecified, this function will attempt to automatically determine whether the data should be loaded lazily or eagerly; if a particular behavior is desired in your application, specify the delayed parameter explicitly.

  • progress – If True, then a progress bar is shown during loading operations. This is only valid if delayed=True.

  • scheduler – Parameter passed to dask.compute to select which schedule is used to load the data in parallel.

  • update – Set to False if you wish to load the raw, un-updated data. Setting this parameter to False keeps the raw metadata as it is in the .nc file, but means that the loaded dataset is unsupported by the remainder of the owl analysis functions.

  • engine – xarray engine used to open the dataset.

  • chunks – A dictionary mapping the image axes to chunk to the size of the chunks along each axis. Axes should be referenced by their coordinate name. This is best used in conjunction with delayed=True.

Returns:

mcam_dataset – The MCAM data as an xarray DataArray with all the metadata.

Return type:

DataArray
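
As a usage sketch, a saved dataset might be loaded lazily and chunked per camera as follows (the filename is illustrative; image_y and image_x are the usual camera coordinate names):

>>> from owl import mcam_data
>>> dataset = mcam_data.load_netcdf('experiment_01.nc', delayed=True,
...                                 chunks={'image_y': 1, 'image_x': 1})
>>> # Explicitly force the computation of the lazy object when needed
>>> dataset = dataset.compute()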

owl.mcam_data.append_netcdf(filename, ds_to_append, unlimited_dims, *, engine=None)

Append dataset to netCDF4 file.

Append the provided dataset to the file along the unlimited_dim.

Parameters:
  • filename (Path-like or File-like object) –

    The path to the file where the data should be written or alternatively, a file object from netCDF4 or h5netcdf. If filename is a File-like handle, the engine parameter is ignored.

    Changed in version 0.18.15: The filename parameter can now accept a file object.

  • ds_to_append (xarray.Dataset) – xarray dataset to append to the filename.

  • unlimited_dims (str or List[str]) – Dimension over which to append the dataset to the file. Currently, only one dimension is supported.

  • engine (str) –

    The name of the backend to use to write the file. Should be one of "netcdf4", "h5netcdf", or "ramona".

    Added in version 0.18.15: The engine parameter.
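
A minimal usage sketch, assuming new_frames is an xarray.Dataset with the same structure as the data already on disk, appended along the frame_number dimension (both names are illustrative):

>>> from owl import mcam_data
>>> mcam_data.append_netcdf('timelapse.nc', new_frames, 'frame_number',
...                         engine='h5netcdf')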

owl.mcam_data.get_photometric_corrections(mcam_dataset, illumination_type)

Get all color corrections available for a calibrated system.

Parameters:
  • mcam_dataset (xarray.Dataset) – An xarray.Dataset of mcam_data.

  • illumination_type (string ('reflection', 'transmission')) – Which illumination board to use to illuminate the sensors.

Returns:

  • response_matrix (numpy array) – The matrix that describes the sensor's response to the LEDs. It is an M x N x 4 x 4 array of floats, where M and N are the shape of the sensor array.

  • coefficient_corrections (numpy array) – Array of pixel correction coefficients of shape M x N x image_shape.

  • offset_corrections (numpy array) – Array of pixel correction offsets of shape M x N x image_shape.

owl.mcam_data.get_photometric_sensor_corrections(mcam_dataset, illumination_type)

Get the matrix used to create the average value of the individual images.

Parameters:
  • mcam_dataset (xarray.Dataset) – An xarray.Dataset of mcam_data.

  • illumination_type (string ('reflection', 'transmission')) – Which illumination board to use to illuminate the sensors.

Raises:

ValueError – Unable to get correction if photometric calibration data is not in mcam_dataset.

Returns:

response_matrix – The matrix that describes the sensor's response to the LEDs. It is an M x N x 4 x 4 array of floats, where M and N are the shape of the sensor array.

Return type:

numpy array

owl.mcam_data.bayer_dataset_to_single_channel(dataset, color)

Extract a fixed color pixel from data acquired with CFA sensors.

Given data acquired with a sensor containing a color filter array, this function extracts a single monochrome pixel from the array.

This function can help reduce the data analysis load and speed up algorithms that work well with monochromatic images.

While a standard Bayer pattern has one red pixel, and one blue pixel, it contains two green pixels. We denote the green pixel on the first row as the 'green0' pixel, and the green pixel on the second row as the 'green1' pixel. Here 'green' is shorthand for 'green0'.

Parameters:
  • dataset (mcam_dataset) – Dataset containing MCAM images. The dataset must contain the bayer_pattern that describes the pixel ordering in the color filter array.

  • color – Color to extract. Must be one of: ['red', 'green', 'blue', 'green0', 'green1']

Returns:

The returned dataset no longer contains the bayer_pattern variable to indicate that it has been converted to that of a monochromatic image.

Return type:

dataset

Notes

The images in the dataset must all have been acquired with the same pattern for the color filter array.
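
For instance, extracting the first green channel from a raw acquisition might look like the following (the filename is illustrative):

>>> from owl import mcam_data
>>> dataset = mcam_data.load_netcdf('raw_acquisition.nc')
>>> green = mcam_data.bayer_dataset_to_single_channel(dataset, 'green')
>>> 'bayer_pattern' in green  # removed to mark the data as monochrome
False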

owl.mcam_data.bayer_dataset_to_rgb(dataset)

Convert a dataset acquired with a bayer sensor to an RGB dataset.

Given data acquired with a sensor containing a color filter array, this converts the raw data to that of an RGB image.

Parameters:

dataset (mcam_dataset) – Dataset containing MCAM images. The dataset must contain the bayer_pattern that describes the pixel ordering in the color filter array. The images variable will gain a new dimension labeled 'rgb' with a size of 3.

Returns:

The returned dataset no longer contains the bayer_pattern variable, indicating that it has been converted to an RGB image.

Return type:

dataset

owl.mcam_data.bayer_dataset_to_grayscale(dataset, *, gray_vector=None)

Convert a dataset acquired with a bayer sensor to a grayscale dataset.

Given data acquired with a sensor containing a color filter array, this converts the raw data to that of a grayscale image.

Parameters:

dataset (mcam_dataset) – Dataset containing MCAM images. The dataset must contain the bayer_pattern that describes the pixel ordering in the color filter array. The images variable will contain the new grayscale dataset. All other metadata will be retained.

Returns:

The returned dataset no longer contains the bayer_pattern variable to indicate that it has been converted to that of a monochromatic image.

Return type:

dataset

owl.mcam_data.rgb_dataset_to_grayscale(dataset, *, gray_vector=None)

Convert a debayered rgb dataset to a grayscale dataset.

Parameters:
  • dataset (mcam_dataset) – Dataset containing MCAM images. The dataset must contain the dimension rgb. The images variable will contain the new grayscale dataset. All other metadata will be retained.

  • gray_vector (tuple) – A 3-tuple of floats that describes the weights of the red, green, and blue channels in the conversion to grayscale. The default values are the ITU-R BT.709-1 standard.

Returns:

The returned dataset no longer contains the rgb dimension indicating that it has been converted to that of a grayscale image.

Return type:

dataset
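
A short sketch chaining the conversions above, assuming dataset is a bayered mcam_dataset; the gray_vector shown spells out the ITU-R BT.709 weights that the default also uses:

>>> from owl import mcam_data
>>> rgb = mcam_data.bayer_dataset_to_rgb(dataset)
>>> gray = mcam_data.rgb_dataset_to_grayscale(
...     rgb, gray_vector=(0.2126, 0.7152, 0.0722))
>>> 'rgb' in gray.dims  # the rgb dimension is gone after conversion
False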

owl.mcam_data.get_color_dataset(dataset)
owl.mcam_data.get_color_dataset_stack(dataset_stack)
Parameters:

dataset_stack (mcam_dataset) – Dataset with bayered images. This must also include bayer_pattern information.

Returns:

color_dataset_stack – Dataset with debayered images saved as a dask array and identical metadata to the given dataset, except for bayer_pattern, which is removed.

owl.mcam_data.get_color_data(dataset)
owl.mcam_data.get_gray_dataset(dataset, gray_vector=None)

Note: this function only considers single-frame datasets.

owl.mcam_data.get_software_frame_rate(dataset, *, frame_number_coordinate='frame_number')

Compute the average frame rate of the data acquired in the dataset.

Using the software timestamp, compute the frame rate of the acquired dataset.

Parameters:
  • dataset (mcam_dataset) – Dataset containing mcam_data and software_timestamp.

  • frame_number_coordinate (str) – Coordinate over which to compute the frame rate. The result is averaged over all other coordinates.

Returns:

frame_rate – Average frame rate over all cameras, in frames per second.

Return type:

float

owl.mcam_data.get_software_frame_time_difference(dataset, *, frame_number_coordinate='frame_number')

Compute the average time difference between subsequent frames.

Using the software timestamp, compute the time difference between subsequent frames in the acquired dataset.

Parameters:
  • dataset (mcam_dataset) – Dataset containing mcam_data and software_timestamp.

  • frame_number_coordinate (str) – Coordinate over which to compute the time difference. The result is averaged over all other coordinates.

Returns:

time_difference – Average time difference over all cameras in seconds between each subsequent frame.

Return type:

float
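
As a usage sketch, assuming video_ds is a dataset acquired as a video stack with software timestamps (the variable name is illustrative):

>>> from owl import mcam_data
>>> fps = mcam_data.get_software_frame_rate(video_ds)
>>> dt = mcam_data.get_software_frame_time_difference(video_ds)
>>> print(f"{fps:.2f} frames per second, {dt * 1e3:.2f} ms between frames")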

owl.mcam_data.get_bayer_pattern(dataset, default=None)

Return the bayer pattern of the underlying sensors.

Parameters:
  • dataset (mcam_data) – A dataset containing MCAM data. The bayer pattern is expected to be found in a key called bayer_pattern.

  • default – The default value in the case that the bayer_pattern key is not found in the dataset.

Returns:

bayer_pattern – The bayer pattern as a string. Typical bayer patterns include "rggb", "bggr", "grbg", "gbrg".

Return type:

str

owl.mcam_data.get_bin_mode(dataset)

Return the binning mode of the dataset.

Parameters:

dataset (mcam_data) – A dataset containing MCAM data. The pixel information is expected to be contained in the coordinates 'y' and 'x'.

Returns:

bin_mode – A single integer for the bin mode.

Return type:

int

Note

The binning mode is assumed to be symmetric in both the row (y) and column (x) dimension. This function uses the information in the y coordinate to extract the bin mode.

owl.mcam_data.get_offset(dataset)

Return the offset of the dataset.

Parameters:

dataset (mcam_data) – A dataset containing MCAM data. The pixel information is expected to be contained in the coordinates 'y' and 'x'.

Returns:

offset – A tuple of integers for the offset in the y and x directions.

Return type:

tuple(int, int)

owl.mcam_data.get_stack_dimension(dataset)

Stitching

Ramona Optics provides a stitching solution that utilizes the Hugin toolbox.

owl.stitch.hugin_stitching

class owl.stitch.hugin_stitching(mcam_data_path, *, save_directory=None, stitch_filename=None, pto_filename=None, template_pto_filename=None, load_masks_filename=None, save_masks_filename=None, blender='enblend', estimated_overlap=(None, 0.35), selection_slice=None, cp_threshhold_dist=100, ignore_calibration=False, attempt_custom_alignment=False, use_gpu=False, output_shape=None, bbox_indices=None, primary_seam_generator='nft', text_output=None, **kwargs)

Stitch mcam images using hugin.

Stitches MCAM images utilizing the Hugin toolbox and exports the final stitched image as a TIFF file.

Data can be stitched by giving only the mcam_data directory path as shown:

>>> from pathlib import Path
>>> from owl.stitch.hugin import hugin_stitching
>>> mcam_data_path = Path('path_to_walkthrough_images')
>>> stitch_filename, pto_filename = hugin_stitching(mcam_data_path)

Pass a pto file path to template_pto_filename to use a premade template instead of creating a new one:

>>> stitch_filename, pto_filename = hugin_stitching(mcam_data_path,
                                                    template_pto_filename=pto_filename)

Give a location via save_masks_filename and the blending masks will be saved during blending:

>>> save_masks_filename = mcam_data_path / 'mask%n.tif'
>>> stitch_filename, pto_filename = hugin_stitching(mcam_data_path,
                                                    save_masks_filename=save_masks_filename)

Give a location of previously saved masks via load_masks_filename to reuse them during blending:

>>> load_masks_filename = save_masks_filename
>>> stitch_filename, pto_filename = hugin_stitching(mcam_data_path,
                                                    template_pto_filename=pto_filename,
                                                    load_masks_filename=load_masks_filename)

Designate the nona blender (the default blender is enblend); this is quicker, but seams are more obvious:

>>> stitch_filename, pto_filename = hugin_stitching(mcam_data_path,
                                                    blender='nona')
Parameters:
  • mcam_data_path (PathLike) – The path to either the directory containing all of the exported images to stitch, or the single file path to the mcam_data.

  • save_directory (PathLike) – If provided, the stitched image and the pto file will be created in the provided path. By default, if an exported dataset is provided, the save_directory will be that of the dataset. If a single .nc file is provided, a new directory will be created with the same name as the stem of the .nc file.

  • stitch_filename (PathLike, optional) – Path to file that will contain the stitched tiff (extension will be appended). Default is stitched.tif in the save_directory.

  • pto_filename (PathLike, optional) – Path to the pto file containing all the stitching information (must include pto extension). Default is template.pto in the save_directory.

  • template_pto_filename (PathLike) – When passed a pto path, the given path will be used to find a prewritten pto file that will be used to align the images before blending. If it is not found, or the file does not match N_cameras, the code will raise an error. If a template pto file is used, then a copy of the pto file, modified to reflect the used blender, is saved as template.pto in the same directory as the stitched image, or at the file path given to pto_filename.

  • load_masks_filename (String or None) – If given a string, the blender will attempt to load masks from the filename given. Use “%n” to reference indexed masks (similar to save_masks_filename). Only applies if blender='enblend'. The default is None.

  • save_masks_filename (String or None) – If given a string, the masks generated during the blending process are saved at the given location with the given file extension (tif is suggested). Include “%n” to add a counter to the name so that all masks have unique names. Only applies if blender='enblend'. The default is None.

  • blender ('enblend', 'nona', OR 'none') – The blending engine used for the stitched image. Enblend uses Dijkstra's shortest-path algorithm to minimize the difference between overlapping pixels along the seam, while nona selects seams based on the watershed algorithm. Enblend provides less visible seams, but nona is much faster. If the blender is set to 'none', only the template will be generated, without creating a stitched image. The default is 'enblend'.

  • estimated_overlap ((float, float), optional) – Percentage of the image size that is expected to overlap with neighboring images, as (vertical, horizontal). The default is (0.13, 0.35).

  • selection_slice (slice) – Section of the mcam_data array to stitch. If a section is used with a template, the template must have been made using the same section. If the section is None, the full array will be used. When selecting a section of a saved dataset without camera index (0, 0), index as if the lowest indexed camera is (0, 0).

  • cp_threshhold_dist (Num) – The distance in pixels from the initial positioning that a control point pair can be. All point pairs at a greater distance will be removed.

  • ignore_calibration (Boolean) – If True, ignores the calibration stitching transforms when generating initial alignments and instead uses the given estimated overlap.

  • attempt_custom_alignment (Boolean) – If True, will attempt to align images based on found control points. If False, will stitch images based on their initial positions.

  • use_gpu (bool) – If True, use the GPU to speed up the performance of nona. Setting this to True with blender='enblend' decreases the time very slightly, but with blender='nona' the time to stitch decreases by about 30%.

  • output_shape ((int, int)) – A tuple holding the desired dimensions of the composite image in (HEIGHT, WIDTH).

  • bbox_indices (((image_y0, image_x0, pix_y0, pix_x0), (image_y1, image_x1, pix_y1, pix_x1))) – Two diagonally opposite points that define the bounding box. Points are in data array space, meaning a point is defined by its camera index and pixel index.

  • primary_seam_generator (str) – The primary seam generator used by enblend. Set to either ‘graph-cut’ (‘gc’) or ‘nearest-feature-transform’ (‘nft’).

Returns:

  • stitch_filename (Path or None) – The path to the stitched image. If the blender is set to ‘none’ this value will be None.

  • pto_filename (Path) – The path to the template file containing the data necessary to create the global transforms.

owl.stitch.hugin_global_transforms

class owl.stitch.hugin_global_transforms(pto_filename, imagename_format=None, correct_transforms=True, bin_mode=1)

Create a homography matrix from the alignments in the given pto file.

Parameters:

pto_filename (PathLike) – Path to the desired Hugin template (.pto) from which to take the position data.

Returns:

  • global_transform (array of floats) – An array of floats of shape (y_cameras, x_cameras, 3, 3) made up of the 3x3 global homography matrices used in the pto file.

  • output_shape (tuple of int) – The size of the numpy array in (height, width) pixels that will hold the composite.
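
A minimal sketch that reuses the pto file produced by hugin_stitching above (the 6 x 4 camera array in the comment is an assumption matching the examples elsewhere in this document):

>>> from owl.stitch import hugin_global_transforms
>>> global_transform, output_shape = hugin_global_transforms(pto_filename)
>>> global_transform.shape  # (y_cameras, x_cameras, 3, 3); 6 x 4 cameras assumed
(6, 4, 3, 3)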

Instruments

When opened, the MCAM class will contain up to 3 sub-devices:

  • A transmission illumination unit. This device can be accessed through the transmission_illumination attribute and is an object of the Illuminate class.

  • A reflection illumination unit. This device can be accessed through the reflection_illumination attribute and is an object of the Illuminate class.

  • A stage to control the height. This device can be accessed through the z_stage attribute and is an object of class X_LSM_E.

Illumination module

class owl.instruments.Illuminate(*, N_cameras_Y=None, N_cameras_X=None, flip_along_y=False, flip_along_x=False, check_version=True, **kwargs)
property NA: float

Numerical aperture for bf / df / dpc / cdpc patterns.

property about: str

Display information about this LED Array.

property analog_brightness_settings

Electrical current settings for the LEDs.

The analog brightness settings are only valid for Illuminate boards that have the necessary hardware. As of today, only the c-008-falcon-transmission board supports analog current control through the use of special features in the TLC5955 [1].

In the parameter definition below, MC refers to the maximum current setting, BC refers to the brightness control setting, and DC refers to the dot correction setting.

The settings are expected to be organized as a tuple of length 3. Each tuple should contain either a tuple of 3 integers or a single integer.

>>> light.analog_brightness_settings = ((MC_R, MC_G, MC_B),
...                                     (BC_R, BC_G, BC_B),
...                                     (DC_R, DC_G, DC_B))

Notes

Setting MC, BC, and DC to 0 will not turn that LED channel off.

[1] https://www.ti.com/lit/ds/symlink/tlc5955.pdf

annulus(minNA: float, maxNA: float) → None

Display annulus pattern set by min/max NA.

property array_distance: float

LED array distance in meters.

ask(data: str) → int | float | None

Send data, read the output, check for error, extract a number.

property autoclear: bool

Toggle clearing of array between led updates.

Returns:

value – The current setting of autoclear

Return type:

bool

property autoupdate: bool

Toggle updating of array between led commands.

Returns:

value – The current setting of autoupdate

Return type:

bool

property background_color

The background RGB color set when the IR LEDs are on; that is, the color of the RGB LEDs.

Only valid when 'ir850_analog_fullarray' is used.

property background_lux

The target brightness (in Lux) of the RGB leds.

Only valid when ‘ir850_analog_fullarray’ is used.

brightfield() → None

Display brightfield pattern.

classmethod by_device_name(device_name, *args, **kwargs)

Connect to an LED board by device name.

property channel_current

The analog current output of each channel.

This function helps estimate the average analog current provided to each LED given the present analog and grayscale (pulse width modulation or PWM for short) settings for the LED Board.

For example, the C008-Transmission (based on the TLC5955 controller) can output a current between 0.08384 mA to 31.9 mA when PWM is disabled.

The analog current cannot be set to 0 unless the PWM settings are also set to 0.

Examples

Demonstrating the need to set the color value to 0 first in order to set the current to 0:

>>> light = Illuminate()
>>> # It is best to change the analog settings when the PWM is set to 0
>>> light.color = 0
>>> light.analog_brightness_settings = (0, 0, 0)
>>> light.color = (255, 255, 0)
>>> print(light.channel_current)
(8.384E-5, 8.384E-5, 0.0)

To set the output current to 0, the PWM settings must be set to zero:

>>> light.color = 0
>>> print(light.channel_current)
(0.0, 0.0, 0.0)

Set the brightness based on an expected color ratio and brightness:

>>> color_ratio = (0.813, 0.168, 0.557)
>>> brightness_percentage = 0.1
>>> light.color = (255, 255, 255)
>>> max_current = light.get_maximum_channel_current()
>>> channel_current = tuple(m * c * brightness_percentage
...                         for m, c in zip(max_current, color_ratio))
>>> light.channel_current = channel_current
>>> light.fill_array()
>>> print(light.channel_current)
(0.000678, 0.000143, 0.000464)

clear() → None

Clear the LED array.

close() → None

Force close the serial port.

property color: Tuple[float, ...]

LED array color.

Returns a tuple for the (red, green, blue) value of the LEDs.

Returns:

  • red – Integer value for the brightness of the red pixel.

  • green – Integer value for the brightness of the green pixel.

  • blue – Integer value for the brightness of the blue pixel.

property color_maximum_value

Maximum color intensity that can be provided to the LED board.

property color_minimum_increment

Minimum intensity increment that can be provided to the LED board.

darkfield() → None

Display darkfield pattern.

debug(value=None)

Set a debug flag. Toggles if value is None.

delay(t)

Simply puts the device in a loop for the given amount of time in seconds.

Prints a newline approximately every 100 ms.

Returns:

None

demo(time: float = 10) → None

Run a demo routine to show what the array can do.

Note that while the demo is blinking, the board will not respond to serial commands; attempts to wake it up during the demo are simply ignored.

The board may also blink for a while before the demo starts, possibly while turning on UV LEDs on some boards, so a longer demo time than the default may be needed.

property device_info

Provide the serial number and other device info in a dictionary.

The returned dictionary contains the following keys: ['serial_number', 'device_name', 'mac_address']

property device_name

The human readable name of the device.

discoparty_demo(n_leds=1, time=10)

Run a demo routine to show what the array can do.

Parameters:
  • n_leds – Number of LEDs to turn on at once.

  • time – The amount of time to run the patterns, in seconds.

draw_channel(led)

Draw LED by hardware channel (use for debugging).

draw_circle(Y_index, X_index, radius=1, set_leds=True, led_type='rgb')

Illuminate the LEDs in a circle centered at a given index.

Units provided are in “microcamera” units, where 1 camera corresponds to the pitch between microcameras.

For the Falcon Illumination units, this is 13.5 mm.

Parameters:
  • Y_index (int) – Y coordinate of the center of the circle.

  • X_index (int) – X coordinate of the center of the circle.

  • radius (float) – Radius of the circle.

  • set_leds (bool) – If set to False, this will just return the LEDs without sending them to the LED board, so they can be used in sequences (see the sketch below).

  • led_type ({'rgb', 'uv', 'ir', 'all'}) – The type of LED to turn on around the perimeter.

Returns:

leds – LEDs that correspond to the circle directly below the indicated camera index.

Return type:

List[int]
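
For example, the pattern can be precomputed without sending it to the board and then turned on explicitly through the led property (the indices here are illustrative):

>>> leds = light.draw_circle(2, 1, radius=1.5, set_leds=False)
>>> light.led = leds  # send the precomputed circle to the board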

draw_edge(num_leds=4, led_type='rgb', set_leds=True)

Lights the LEDs on the perimeter of the board.

This can be used for quick global dark field illumination to remove the reflection from reflective samples.

Currently, only the reflection illumination board is supported.

Parameters:
  • num_leds (int) – The number of LEDs on the perimeter to light up.

  • led_type ('rgb', 'uv', 'ir', or 'all') – The type of LED to turn on around the perimeter.

  • set_leds (bool) – If set to False, this will just return a list of LEDs that would be set instead of directly changing the LED pattern on the board.

Returns:

led_list – The list of LEDs that were turned on by this function.

Return type:

List[int]

draw_hole(Y_index, X_index, radius=1)

Illuminate LEDs around a single hole.

draw_quadrant(red: int, green: int, blue: int) → None

Draws single quadrant.

draw_square(Y_index, X_index, width=1.25, set_leds=True, led_type='rgb')

Illuminate the LEDs in a square at a given micro camera index.

Units provided are in “microcamera” units, where 1 camera corresponds to the pitch between microcameras.

For the Falcon Illumination units, this is 13.5 mm.

Parameters:
  • Y_index (int) – Y coordinate of the center of the square.

  • X_index (int) – X coordinate of the center of the square.

  • width (float) – Width of the square.

  • set_leds (bool) – If set to False, this will just return the LEDs without sending them to the LED board, so they can be used in sequences.

  • led_type ({'rgb', 'uv', 'ir', 'all'}) – The type of LED to turn on around the perimeter.

Returns:

leds – LEDs that correspond to the ones below the camera index.

Return type:

List[int]

fill_array(led_type='rgb')

Turn on all leds of a given type.

Parameters:

led_type ({'any', 'rgb', 'uv', 'ir'})

static find(serial_numbers=None)

Find all the serial ports that are associated with Illuminate.

Parameters:

serial_numbers (list of str, or None) – If provided, will only match the serial numbers that are contained in the provided list.

Returns:

devices – List of serial devices

Return type:

list of serial devices

Note

If a list of serial numbers is not provided, then this function may match Teensy 3.1/3.2 microcontrollers that are connected to the computer but that may not be associated with the Illuminate boards.

find_max_brightness(num_leds, color_ratio=None)

Calculate the maximum brightness for each color channel of an LED that won’t exceed the TLC’s internal current limit.

Parameters:
  • num_leds (int) – The number of LEDs to be illuminated.

  • color_ratio ((float, float, float)) – The required ratio for the brightness values of each color channel (r, g, b)

Returns:

brightness – The maximum scaled brightness for each color channel.

Return type:

(float, float, float)

find_nearest(y, x, led_type='rgb')

Finds the nearest LED to the given coordinate.

Parameters:
  • y (float) – y coordinate in meters.

  • x (float) – x coordinate in meters.

  • led_type ({'any', 'rgb', 'uv', 'ir'}) – LED chromaticity to search for.

Returns:

led_index – LED index closest to the provided coordinate according to the Euclidean distance (i.e. L2-norm).

Return type:

int

find_row(led, direction='horizontal', led_type='any')

Given an LED, find the row which it belongs to.

Parameters:
  • led (int)

  • led_type ({'any', 'rgb', 'uv', 'ir'})

  • direction ({'horizontal', 'vertical'})

Returns:

leds – List of the indices of all LEDs in the same row as the provided led.

Return type:

list

half_annulus(pattern: str, minNA: float, maxNA: float) → None

Illuminate half annulus.

half_circle(pattern: str) → None

Illuminate half circle(DPC) pattern.

Parameters:

pattern (should be 'top', 'bottom', 'left' or 'right')

half_circle_color(red: int, green: int, blue: int) → None

Illuminate color DPC pattern.

property help: str

Display help information from the illuminate board.

illuminate_uv(number: int) → None

Illuminate UV LED.

property led: List[int]

Turn on list of LEDs.

Note that the LEDs along the edges do not have all the colors. Therefore, it might be deceiving if you set the color to red, then call Illuminate.led = 0, which makes it seem like it turned off the LEDs; in fact, it simply set LED #0 to the color red, which for that particular LED doesn't exist.

property led_current_amps

Maximum current in amps per LED channel.

property led_positions

Position of each LED in Cartesian coordinates [mm].

property led_positions_NA

Print the position of each LED in NA coordinates.

Not working: see [PR #8](https://github.com/zfphil/illuminate/pull/8).

property led_state

Current state of the Illuminate LEDs in RGB as a DataArray.

static list_all_serial_numbers(serial_numbers=None)

Find all the currently connected Illuminate serial numbers.

Parameters:

serial_numbers (list of str, or None) – If provided, will only match the serial numbers that are contained in the provided list.

Returns:

serial_numbers – List of connected serial numbers.

Return type:

list of serial numbers

Note

If a list of serial numbers is not provided, then this function may match Teensy 3.1/3.2 microcontrollers that are connected to the computer but that may not be associated with the Illuminate boards.

property mac_address: str

MAC Address of the Teensy that drives the LED board.

property parameters_json

Print system parameters in JSON format.

NA, LED array z-distance, etc.

positions_as_xarray()

Return the position of the led information as an xarray.DataArray.

Returns:

led_position – This DataArray contains an Nx3 matrix whose rows hold the z, y, x coordinates of the LEDs.

Return type:

xr.DataArray

property precision

Python interface bit depth.

print_sequence() → str

Print sequence values to the terminal.

Returns:

s – Human-readable string of the sequence values.

Return type:

string

print_sequence_length()

Print sequence length to the terminal.

print_values()

Print LED value for software interface.

read(size: int = 10000) → bytearray

Read data from the serial port.

Returns:

data – bytearray of data read.

Return type:

bytearray

read_paragraph(raw=False) → List[str]

Read a whole paragraph of text.

Returns:

lines – A list of the lines in the paragraph.

Return type:

list

readline() → str

Call underlying readline and decode as utf-8.

reboot()

Run setup routine again, for resetting LED array.

reset_sequence()

Reset sequence index to start.

run_sequence(delay: float, trigger_modes: List[float]) → None

Run sequence with specified delay between each update.

If the update speed is too fast, a ':(' is shown on the LED array.

run_sequence_fast(delay, trigger_modes)

Not implemented yet.

scan_brightfield(delay: float | None = None) → None

Scan all brightfield LEDs.

Sends trigger pulse in between images.

Outputs LED list to serial terminal.

scan_full(delay: float | None = None) → None

Scan all active LEDs.

Sends trigger pulse in between images.

Delay in seconds.

Outputs LED list to serial terminal.

property sequence: List[int]

LED sequence value.

The sequence should be a list of LEDs with their LED number.

property sequence_bit_depth

Set the bit depth of sequence values: 1, 8, [or 16?].

property sequence_length: int

Sequence length in terms of independent patterns.

set_brightness(brightness_fraction, illumination_mode=None, *, color_ratio=None, background_lux=0, background_color_ratio=None)

Set the brightness for a given illumination mode and color.

Parameters:
  • brightness_fraction (float) – A value from 0 to 1 that sets the illumination board to that fraction of the max possible brightness.

  • illumination_mode (str) – The desired mode of illumination. If none is given, the current illumination mode is kept.

  • color_ratio (tuple of floats) – A tuple of length 3 that lists the ratio between the channels. Any values can be given, but the ratios will be normalized so that they sum to 1. This is only valid for the ‘visible’ illumination modes.

set_pin_order(red_pin, green_pin, blue_pin, led=None)

Set pin order (R/G/B) for setup purposes.

step_sequence(trigger_start, trigger_update)

Trigger sequence.

Triggers represent the trigger output from each trigger pin on the teensy. The modes can be:

0: No triggering
1: Trigger at start of frame
2: Trigger each update of pattern

trigger(index)

Output TTL trigger pulse to camera.

trigger_print()

Print information about the current I/O trigger setting.

Returns:

s – Human readable string describing the trigger.

Return type:

string

trigger_setup(index, pin_index, delay)

Set up hardware (TTL) triggering.

trigger_test(index)

Wait for trigger pulses on the defined channel.

turn_on_led(leds: int | Iterable[int]) → None

Turn on a single LED (or multiple LEDs in an iterable).

Parameters:

leds (single item or list-like) – If this is single item, then the single LED is turned on. If this is an iterable, such as a list, tuple, or numpy array, turn on all the LEDs listed in the iterable. ND numpy arrays are first converted to 1D numpy arrays, then to a list.

update() → None

Update the LED array.

static update_firmware(serial_number, *, device_name=None, mcu=None)

Update a device firmware.

Update the device firmware by specifying its serial number.

For unregistered devices, one can specify the device_name manually.

Parameters:
  • serial_number (str) – The serial number of the device to open.

  • device_name (str) – Optional parameter to specify exactly what firmware to program on the device.

  • mcu (None or 'TEENSY31') – The microcontroller unit used in the LED board. If device_name is not specified, this parameter is ignored.

property version: str

Display controller version number.

water_drop_demo(time: float = 10) → None

Water drop demo.

write(data) → None

Write data to the port.

If it is a string, encode it as ‘utf-8’.

Motion Controllers

class owl.instruments.X_LSM_E(port=None, *, ftdi_serial_number=None, ensure_homed=True, stage_serial_number=None, connection_manager=None, controller_type='axis', controller_id=1, device=None, device_address=None, set_default_parameters=True, **kwargs)

Zaber Miniature Motorized linear stage with encoder and controller.

Tested with the X-LSM-E series stages.

Parameters:
  • ftdi_serial_number (str) – Serial number for the port on which to connect to for the motor.

  • port (str) – USB port on which to connect to for the motor.

  • stage_serial_number (str) – The serial number of the Zaber motor to connect to.

  • connection_manager (MotorConnectionManager) – The MotorConnectionManager which manages the connection this motor should use. The connection manager should currently be active and in use. This will override a supplied serial_number or port.

  • ensure_homed (bool) – If True, ensures that the stage is homed upon connection. Motors that have not been homed do not report an accurate position offset.

  • controller_type (str) – One of “lockstep” or “axis”. The type of controller to be used for this stage.

  • controller_id (int) – The id of the axis or lockstep group being used for this stage.

  • device (zaber_motion.ascii.Device) – An open device to be used with this stage. Using a pre-opened device may speed up the opening process

  • device_address (int) – Numerical order of the device on the connection. If provided, this can speed up opening the device.

Examples

>>> from owl.instruments import X_LSM_E
>>> z_stage = X_LSM_E()
>>> # Set the stage position to 3 mm
>>> z_stage.position = 3E-3
property acceleration

Stage acceleration in meters per second squared.

property busy

A property describing if the device is busy processing a command.

A device is typically busy when moving to a specified position or when homing.

classmethod by_axis(axis, *args, connection_managers=None, **kwargs)

Connect to an X_LSM_E stage by device axis.

Parameters:
  • axis (str) – The axis direction of the stage. One of ‘z’, ‘x’, or ‘y’.

  • connection_managers (List[MotorConnectionManager] or List[FakeMotorConnectionManager]) – If the device may be on an open connection, the MotorConnectionManager object(s) for the possible connection(s) must be provided.

close()

Close the device for communication.

ensure_homed(wait_until_idle=True)

Ensures that the stage has been homed.

Returns immediately if the stage is already homed.

Parameters:

wait_until_idle (bool) – If True, the call will block until the stage has finished homing. If False, the call returns immediately, and the caller must monitor the busy signal themselves to ensure the stage has stopped moving.

property firmware_version

Firmware version of the stage controller.

property ftdi_serial_number

The serial number of the FTDI USB->Serial adapter for the stage.

home(wait_until_idle=True)

Move stage to home position.

property homed

A property describing whether or not the stage has been homed.

Stages require homing every time they are power cycled.

property maximum_speed

Maximum speed setpoint for the stage in meters per second.

move_sin(*, amplitude, frequency, duration=1, count=None)

Move the stage in a sinusoidal fashion.

Examples

>>> from owl.instruments import X_LSM_E
>>> stage = X_LSM_E()
>>> stage.move_sin(amplitude=1E-6, frequency=300)
open(*)

Open the device for communication.

This function is automatically called at the end of the object creation. It is mostly useful for users that need to manually close the device and reopen it later for communication.

property position

Position of the stage in meters.

property position_mm

Current position of the stage in millimeters.

set_position(value, wait=True, ignore_flip=False)

Set the device to a specified position.

Parameters:
  • value (float) – Position to provide to the stage in meters.

  • wait (bool) – If True, the call will wait until the stage is idle before returning.

set_position_mm(value, *args, **kwargs)

Sets the position of the stage in millimeters.

property stage_serial_number

The serial number of the stage as a string.

stream_call(*, stream_index=0, buffer_index=0)

Initiate a streamed motion.

property stream_num_buffers

The number of allowable buffers used to define stream sequences.

property stream_num_streams

The number of allowable streams.

stream_prepare_smoothed_vibration(*, amplitude, frequency, duration=1, stream_index=0, buffer_index=0, tqdm=None)

Prepare a precisely controlled vibration motion on the stage.

The motion uses a Tukey window to create a smooth transition from the resting position to the maximum amplitude.

The first and last 10 oscillations are apodized using a raised cosine to smoothly increase the amplitude of the motion from 0 to the desired amplitude.

Examples

>>> from owl.instruments import MCAM, X_LSM_E
>>> from tqdm import tqdm
>>> stage = X_LSM_E.by_axis('z')
>>> # Streams can be customized but each should be unique
>>> # to a given buffer_index
>>> stage.stream_prepare_smoothed_vibration(
...     amplitude=1E-6,
...     frequency=300,
...     duration=1,
...     buffer_index=1,
...     tqdm=tqdm,
... )
>>> # The creation of streams can be a lengthy process
>>> # A tqdm constructor can be used to show an indication of progress
>>> stage.stream_prepare_smoothed_vibration(
...     amplitude=1E-6,
...     frequency=250,
...     duration=2,
...     buffer_index=0,
...     tqdm=tqdm,
... )
>>> stage.stream_prepare_smoothed_vibration(
...     amplitude=1E-6,
...     frequency=350,
...     duration=0.5,
...     buffer_index=2,
...     tqdm=tqdm,
... )
>>> # Stream can be played back in any order
>>> print("250 Hz")
>>> stage.stream_call(buffer_index=0)
>>> print("350 Hz")
>>> stage.stream_call(buffer_index=2)
>>> print("300 Hz")
>>> stage.stream_call(buffer_index=1)
>>> stage.close()
Parameters:
  • amplitude (float) – In meters, the amplitude of the oscillations at maximum intensity. The peak to peak amplitude is twice this value. Typical values range between 0.25E-6 and 1E-6.

  • frequency (float) – In Hertz, the frequency of oscillation of the movement.

  • duration (float) – In seconds, the duration of the oscillation from start to finish (including the ramp up time from the window function).

  • stream_index (int) – The stream index to use for this movement. Typically this parameter is left at 0.

  • buffer_index (int) – The index where to store the motion in the device. Typical values range between 0 and 99 inclusive, but the legal range can vary from device to device.

wait_until_idle(*, sleep_time=0.01)

Wait until the device is idle.

Parameters:

sleep_time (float) – Amount of time to wait between polling the device.
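
A short sketch of a non-blocking move, assuming a connected stage as in the examples above (the position value is illustrative):

>>> stage.set_position(3E-3, wait=False)
>>> # ... acquire or process data while the stage is moving ...
>>> stage.wait_until_idle(sleep_time=0.05)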

Ranging Module

class owl.instruments.MultiLidar(*, serial_number: str = None, device_addresses: int | Iterable_T[int] | None = None, open_device=True, **kwargs)

Open a Ramona Optics Multi-Lidar ranging system.

Example

>>> from owl.instruments import MultiLidar
>>> lidar = MultiLidar()
>>> print(lidar.distance)
Parameters:
  • serial_number (str) – Serial_number of microcontroller. If not provided, auto-detection of the microcontroller will be attempted.

  • device_addresses (int or List[int]) – For devices shipped with Ramona Optics products, this should be left as None. Device addresses of Lidar units. 8-bit addresses are assumed. One address can be provided as a single integer. Multiple addresses can be provided as a list.

  • open_device (bool) –

    If True, the last operation of the constructor will be to call the open method to ensure the device is ready for reading.

    Added in version 0.18.11: The open_device parameter.

close() → None

Close the device for communication.

See also

MultiLidar, open

property coprocessor_firmware_version: List[int]

Coprocessor firmware version.

We expect to see Version 210.

property distance: List[float]

The distance as read from the sensor in meters.

This returns a list of floating point values corresponding to the reading from each lidar sensor.

property hardware_version: List[int]

Hardware Version.

We expect to see Version 16.

property lidar_count

The number of lidar sensors connected to the module.

Added in version 0.18.11.

open(*) → None

Open the device for communication.

See also

MultiLidar, close

property soc_temperature_C: List[float]

The temperature of the system on a chip (SOC) in Celsius.

property temperature_C: List[float]

The temperature of the device in degrees Celsius.

Multiple values are returned as a list, one value for each Lidar unit.

Data analysis and other tools

owl.analysis.hdr_combine(images, exposures, image_range=(None, None), invalid_relative_range=(0.1, 0.1), target_exposure=None)

Combine a stack of images taken with different exposures.

If a given pixel is valid in multiple images, the weighted average of the intensity in all the images will be taken into account in the final value returned.

Pixels are combined on a per pixel basis. This means that the images may be of any shape, so long as the shape is consistent between images.

Parameters:
  • images (list of array-like objects) – An array containing the image data. These can be multi-dimensional arrays so long as they all have the same shape.

  • exposures (list of floats) – List of the exposures taken for all the images.

  • image_range (tuple of floats (min, max)) – The range of the image. If not provided, limits are taken to be those from skimage.util.dtype.dtype_limits with clip_negative=True.

  • invalid_relative_range (tuple of floats (min_invalid, max_invalid)) – Pixels within a certain fraction of the minimum and maximum are considered to be invalid with the exception of the image taken with minimal and maximal exposure.

  • target_exposure (float) – If this is set, the output image will be scaled to match this exposure value. If None is given, it will be set to the lowest exposure value from the given list.

Examples

>>> import numpy as np
>>> image_low_exposure = np.asarray([[0.6, 0.6],
...                                  [0.01, 0.01]])
>>> image_high_exposure = np.asarray([[1.0, 1.0],
...                                   [0.16, 0.16]])
>>> exposures = [1, 10]
>>> hdr_combine([image_low_exposure, image_high_exposure], exposures)
array([[0.6  , 0.6  ],
       [0.016, 0.016]])

Notice that in this case, the pixels on the first row were correctly exposed during the short exposure. The pixels on the second row were correctly exposed during the longer exposure. HDR combine selects the pixels that were correctly exposed in both cases and creates a composite from the input images.
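
Continuing the example above, the composite can also be rescaled to a different reference exposure through target_exposure (here, the longer of the two exposures):

>>> hdr = hdr_combine([image_low_exposure, image_high_exposure],
...                   exposures, target_exposure=10)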

owl.analysis.equalize_intensity(images, params, *, roi_fraction_up_down=(0.3, 0.3), roi_fraction_left_right=(0.3, 0.3))

Often, images might have slight variations from camera to camera.

This attempts to estimate these intensity variations by looking at the region around the overlap. Given an image with an overlap of $(N_y, N_x)$ pixels, this algorithm will analyze a number of pixels equal to $(N_y * overlapFraction_y, N_x * overlapFraction_x)$ and ensure that the average intensity in that region is the same.

Parameters:
  • images (mcam_data) – must be at least 4D (grayscale), with (image_y, image_x) as the leading dimensions.

  • params (dictionary) – Stitching parameters from the fourier stitcher.

  • roi_fraction_up_down ((float, float)) – Tuple describing the fraction of the overlap in each dimension to use for intensity calibration when comparing the images in the Y dimension.

  • roi_fraction_left_right ((float, float)) – Tuple describing the fraction of the overlap in each dimension to use for intensity calibration when comparing the images in the X dimension.

Returns:

  • images_out – mcam_data of the same shape, but float-like, with the intensity normalized.

  • intensity_variations – The relative intensity to which each image was multiplied.

Example

This example will load data, stitch it, and then use the stitching parameters to normalize the image intensity so as to equalize the imaging lighting intensity between acquisitions.

This normalization will work best with images of specular targets illuminated with incoherent light.

>>> from owl import mcam_data
>>> from owl.color import rgb2gray
>>> from owl.analysis import equalize_intensity
>>> from owl.visualize import view_large_image
>>> data = mcam_data.load('your_data_set', output_mode='rgb')
>>> data_gray = rgb2gray(data)
>>> data_float = data.astype('float32') / 255
>>> # find_stitching_parameters and generate_composite come from the
>>> # Fourier stitcher (imports not shown in this example)
>>> params = find_stitching_parameters(data_gray)
>>> data_equalized, variations = equalize_intensity(data_float, params)
>>> image_corrected = generate_composite(data_equalized, params)
>>> view_large_image(image_corrected)
owl.analysis.find_in_focus_indices(data, *, search_axis=0, image_dims=2)

Find the indices that are in focus for each camera.

The provided data should be something that looks like grayscale mcam_data with an additional dimension, the one you wish to search along.

For example, the mcam_data might look something like:

>>> raw_data
<xarray.DataArray 'stack-329162f6' (z: 12, image_y: 6, image_x: 4, y: 2432, x: 4320)>
dask.array<shape=(12, 6, 4, 2432, 4320), dtype=uint8, chunksize=(1, 1, 1, 2432, 4320)>
Coordinates:
  * image_y           (image_y) int64 0 1 2 3 4 5
  * image_x           (image_x) int64 0 1 2 3
  * y                  (y) int64 0 1 2 3 4 5 6 ... 2426 2427 2428 2429 2430 2431
  * x                  (x) int64 0 1 2 3 4 5 6 ... 4314 4315 4316 4317 4318 4319
  * z                  (z) float64 0.0 0.5 1.0 1.5 2.0 ... 3.5 4.0 4.5 5.0 5.5
Attributes:
    sys_info:          Python: 3.6.6 | packaged by conda-forge | (default, Ju...
    imagename_format:  cam{image_y}_{image_x}.bmp

With z being the dimension we wish to search along.

The function would then be called, returning a delayed object that consists of the indices corresponding to the slice in focus for each camera. Here we ask that the metric only be computed considering every other pixel. This is an acceptable compromise since it avoids any Bayer Pattern artifacts and speeds up the computation.

>>> in_focus_z_indices = find_in_focus_indices(raw_data[..., ::2, ::2])
>>> in_focus_z_indices
<xarray.DataArray 'z_stage' (image_y: 6, image_x: 4)>
dask.array<shape=(6, 4), dtype=int64, chunksize=(1, 1)>
Coordinates:
  * image_y  (image_y) int64 0 1 2 3 4 5
  * image_x  (image_x) int64 0 1 2 3

Data should be a dask array, with the dimension you want to search along being the leading dimension.
Parameters:
  • data (ArrayLike [N_z, N_cameras_y, N_cameras_x, y, x]) – A 5D data volume containing info about all cameras.

  • search_axis (int) – Not supported yet, but hopefully at some point this will specify which axis to search along. This doesn't seem to be immediately useful; for now, it is simply set to the axis of N_z.

  • image_dims (int) – The number of dimensions each image has. For now, this is hardcoded to 2 (grayscale images), but in the future it may be expanded to support more than grayscale images.

Returns:

in_focus_indices – The indices where each camera has the best focus. This is potentially a delayed object.

Return type:

ArrayLike [image_y, image_x]

owl.analysis.slice_in_focus_images(data, in_focus_indices)

Picks out the images that are in focus from MCAM data.

The data is assumed to be an xarray with the leading dimension being the one we wish to slice into. For example, if the stack is a focal stack where the leading dimension contains measurements at 10 different stage heights z, the array might have the following dimensions

>>> data = mcam_data.new(N_cameras=(10, 6, 4),
                         dims=['z_stage', 'image_y', 'image_x', 'y', 'x'])
>>> data.shape
(10, 6, 4, 2432, 4320)

in_focus_indices is an array of shape [N_image_y, N_image_x] containing the index that should be sliced in for each camera.

>>> color_data = slice_in_focus_images(color_data, in_focus_z_index)
>>> color_data
<xarray.DataArray 'stack-e599e60' (image_y: 6, image_x: 4, y: 2432, x: 4320, rgb: 3)>
dask.array<shape=(6, 4, 2432, 4320, 3), dtype=uint8, chunksize=(1, 1, 2432, 4320, 3)>
Coordinates:
  * image_y           (image_y) int64 0 1 2 3 4 5
  * image_x           (image_x) int64 0 1 2 3
  * y                  (y) int64 0 1 2 3 4 5 6 ... 2426 2427 2428 2429 2430 2431
  * x                  (x) int64 0 1 2 3 4 5 6 ... 4314 4315 4316 4317 4318 4319
    bayer_pattern      (image_y, image_x) <U4 'bggr' 'bggr' ... 'bggr' 'bggr'
    camera_number      (image_y, image_x) int64 20 21 23 22 16 ... 6 0 1 3 2
    exposure           (image_y, image_x) int64 500 500 500 ... 500 500 500
    gain               (image_y, image_x) int64 1 1 1 1 1 1 1 ... 1 1 1 1 1 1
  * rgb                (rgb) <U1 'r' 'g' 'b'
    acquisition_count  (image_y, image_x) int64 5 6 3 3 4 5 6 ... 4 3 3 4 3 3
    trigger            (image_y, image_x) int64 175 176 173 ... 174 173 173
    z                  (image_y, image_x) float64 2.0 2.5 1.0 ... 1.5 1.0 1.0
Attributes:
    sys_info:          Python: 3.6.6 | packaged by conda-forge | (default, Ju...
    imagename_format:  cam{image_y}_{image_x}.bmp
>>> color_data = color_data.persist(scheduler='threads')
>>> color_data
<xarray.DataArray 'stack-e599e60' (image_y: 6, image_x: 4, y: 2432, x: 4320, rgb: 3)>
dask.array<shape=(6, 4, 2432, 4320, 3), dtype=uint8, chunksize=(1, 1, 2432, 4320, 3)>
Coordinates:
  * image_y           (image_y) int64 0 1 2 3 4 5
  * image_x           (image_x) int64 0 1 2 3
  * y                  (y) int64 0 1 2 3 4 5 6 ... 2426 2427 2428 2429 2430 2431
  * x                  (x) int64 0 1 2 3 4 5 6 ... 4314 4315 4316 4317 4318 4319
    bayer_pattern      (image_y, image_x) <U4 'bggr' 'bggr' ... 'bggr' 'bggr'
    camera_number      (image_y, image_x) int64 20 21 23 22 16 ... 6 0 1 3 2
    exposure           (image_y, image_x) int64 500 500 500 ... 500 500 500
    gain               (image_y, image_x) int64 1 1 1 1 1 1 1 ... 1 1 1 1 1 1
  * rgb                (rgb) <U1 'r' 'g' 'b'
    acquisition_count  (image_y, image_x) int64 5 6 3 3 4 5 6 ... 4 3 3 4 3 3
    trigger            (image_y, image_x) int64 175 176 173 ... 174 173 173
    z                  (image_y, image_x) float64 2.0 2.5 1.0 ... 1.5 1.0 1.0
Attributes:
    sys_info:          Python: 3.6.6 | packaged by conda-forge | (default, Ju...
    imagename_format:  cam{image_y}_{image_x}.bmp

The use of persist instead of compute keeps the data discontiguous in memory. This is useful since the final array, especially for Gigacam data, might be several GB large. Moving it around needlessly should be avoided unless you require it to be fully contiguous.

Parameters:
  • data (ArrayLike[N_z, N_cameras_y, N_cameras_x, y, x, [...]]) – The mcam_data you wish to slice into.

  • in_focus_indices ([N_cameras_y, N_cameras_x]) – The in-focus indices for each camera.

Returns:

in_focus_data – The mcam_data at the desired slice.

Return type:

ArrayLike[N_cameras_y, N_cameras_x, y, x, […]]

owl.analysis.find_best_z_position_array(mcam_dataset, tqdm=<cyfunction _tqdm>, stop_event=None)

Find the array of most in-focus images, considering images individually.

Given a list of z heights and a directory holding the z-stack images, measure the focus of the individual images and give an array of the most in-focus individual images.

Measures the focus of the images and organizes the data with measure_slice_array.

Parameters:

mcam_dataset (xarray Dataset) – MCAM data.

Returns:

  • measured_list_array (xarray.DataArray) – Array of focus scores.

  • best_images (xarray.DataArray) – DataArray of the most in-focus image for each sensor.

owl.color.bayer2rgb(image, bayer_pattern='rggb', output_array=None)

Convert a raw image from a sensor with a Bayer filter to an RGB image.

Parameters:
  • image (ArrayLike[N, M]) – The image to be converted. N, M should be even.

  • bayer_pattern – The bayer pattern of the sensor. Should be one of ['rggb', 'bggr', 'grbg', 'gbrg'].

  • output_array (ArrayLike[N, M, 3] or None) – If OpenCV2 is installed, this specifies the output array of the operation. Without the OpenCV backend, this parameter can only take the value of None. If provided, this should be a C contiguous array.

Returns:

output_array – The image converted to RGB format.

Return type:

ArrayLike[N, M, 3]
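
A minimal sketch on a synthetic raw frame; the frame shape is an assumption matching the per-camera images shown elsewhere in this document:

>>> import numpy as np
>>> from owl.color import bayer2rgb
>>> raw = np.zeros((2432, 4320), dtype=np.uint8)  # synthetic raw frame
>>> rgb = bayer2rgb(raw, bayer_pattern='bggr')
>>> rgb.shape
(2432, 4320, 3)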

owl.color.bayer2rgba(image, bayer_pattern: str = 'rggb', output_array=None)

Convert a raw image from a sensor with a Bayer filter to an RGBA image.

Parameters:
  • image (ArrayLike[N, M]) – The image to be converted. N, M should be even.

  • bayer_pattern – The bayer pattern of the sensor. Should be one of ['rggb', 'bggr', 'grbg', 'gbrg'].

  • output_array (ArrayLike[N, M, 4] or None) – If OpenCV2 is installed, this specifies the output array of the operation. Without the OpenCV backend, this parameter can only take the value of None. If provided, this should be a C contiguous array.

Returns:

output_array – The image converted to RGBA format.

Return type:

ArrayLike[N, M, 4]

owl.color.bayer2gray(image, bayer_pattern, output_array=None)

Convert an image from a sensor with a Bayer filter to a grayscale image.

Parameters:
  • image (ArrayLike[N, M]) – The image to be converted. N, M should be even.

  • bayer_pattern – The bayer pattern of the sensor. Should be one of ['rggb', 'bggr', 'grbg', 'gbrg'].

  • output_array (ArrayLike[N, M] or None) – If OpenCV2 is installed, this specifies the output array of the operation. Without the OpenCV backend, this parameter can only take the value of None. If provided, this should be a C contiguous array.

Returns:

output_array – The image converted to grayscale format.

Return type:

ArrayLike[N, M]

owl.color.rgb2gray(data, *, preserve_range=False, output_array=None, casting='same_kind', gray_vector=None)

Convert ND images from rgb color space to grayscale color space.

Conversion from RGB color space to grayscale color space is done with the following coefficients:

[0.299, 0.587, 0.114]

Parameters:
  • data (ndarray [..., 3]) – Multi-dimensional image where the last axis has size 3. The color coordinates correspond to the red, green and blue channels respectively.

  • preserve_range

    If True, integer images will not be scaled between 0 and 1 prior to color space conversion. If False, integer images will be converted to floating point numbers between 0 and 1 following scikit-image conventions.

    Changed in version 0.14.0: The default value of preserve_range was changed to False.

  • output_array (ArrayLike[N, M] or None) – The output array of the operation. If provided, this should be a C contiguous array.

  • casting ({'no', 'equiv', 'safe', 'same_kind', 'unsafe'}, optional) – Controls what kind of data casting may occur when writing the result.

  • gray_vector (array_like) – The coefficients to convert RGB to grayscale. By default, it is [0.299, 0.587, 0.114] but can be changed to another set of coefficients.
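
Example

A short sketch of the conversion. With floating point input no rescaling occurs, so the result is expected to match the weighted sum implied by the coefficients above (this equivalence is an assumption, not a documented guarantee):

>>> import numpy as np
>>> from owl import color
>>> rgb = np.random.rand(2, 64, 64, 3)  # any leading dimensions; last axis is RGB
>>> gray = color.rgb2gray(rgb)
>>> gray.shape
(2, 64, 64)
>>> np.allclose(gray, rgb @ np.array([0.299, 0.587, 0.114]))
True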

owl.color.rgba2gray(data, *, preserve_range=False)

Convert ND images from RGBA color space to grayscale color space.

Conversion from RGBA color space to grayscale color space is done with the following coefficients:

[0.299, 0.587, 0.114, 0]

Parameters:
  • data (ndarray [..., 4]) – Multi-dimensional image where the last axis has size 4. The color coordinates correspond to the red, green, blue and alpha channels respectively.

  • preserve_range

    If True, integer images will not be scaled between 0 and 1 prior to color space conversion. If False, integer images will be converted to floating point numbers between 0 and 1 following scikit-image conventions.

    Changed in version 0.14.0: The default value of preserve_range was changed to False.
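
Example

Since the alpha coefficient is zero, the alpha channel should not affect the result; a minimal check with synthetic data (this behavior is inferred from the coefficients above):

>>> import numpy as np
>>> from owl import color
>>> rgba = np.random.rand(64, 64, 4)
>>> altered = rgba.copy()
>>> altered[..., 3] = 0.0  # change only the alpha channel
>>> np.allclose(color.rgba2gray(rgba), color.rgba2gray(altered))
True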

owl.color.get_converted_data(dataset, conversion_matrix)
owl.color.get_converted_dataset(dataset, conversion_matrix)

Apply the photometric response to the dataset's images variable.

Parameters:
  • dataset (xarray Dataset) – MCAM data containing image data as well as additional metadata. Image data should be RGB.

  • conversion_matrix (numpy array) – NxMx3x3 matrix to be applied to the image data. N and M should match the image_y and image_x dimensions of the dataset.

Returns:

converted_dataset

Return type:

xarray Dataset
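
Example

A hedged sketch, assuming dataset is an MCAM xarray Dataset with image_y and image_x dimensions; an identity matrix at every position should leave the images unchanged:

>>> import numpy as np
>>> from owl import color
>>> n_y = dataset.sizes['image_y']
>>> n_x = dataset.sizes['image_x']
>>> conversion_matrix = np.tile(np.eye(3), (n_y, n_x, 1, 1))  # NxMx3x3
>>> converted = color.get_converted_dataset(dataset, conversion_matrix)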

owl.color.get_converted_dataset_stack(dataset_stack, conversion_matrix)
owl.calibration.calculate_led_for_desired_pixel(response_matrix, pixels)

Compute the LED values that will produce the given pixel ratio.

Parameters:
  • response_matrix (numpy array) – Array of floats that models the responses of the sensors. The array has shape (image_y, image_x, 4, 4).

  • pixels (tuple) – Ratio of the desired pixel values. It must have length 3.

Returns:

led_values – The LED values that produce approximately the desired pixel values on white paper.

Return type:

tuple
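
Example

A hedged usage sketch; response_matrix would come from an earlier calibration step (not shown), and the ratio values are illustrative only:

>>> from owl import calibration
>>> # response_matrix: numpy array of shape (image_y, image_x, 4, 4)
>>> led_values = calibration.calculate_led_for_desired_pixel(
...     response_matrix, (1.0, 0.9, 0.8))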

owl.calibration.create_pixel_corrections(coefficient_polynomial_coefficients, offset_polynomial_coefficients, *, image_shape, tqdm=<cyfunction tqdm>)

Create a coefficient and offset pixel correction based on the polynomial coefficients.

Parameters:
  • coefficient_polynomial_coefficients (numpy array) – An MxNx4x4 array of polynomial coefficients describing the pixel coefficient corrections.

  • offset_polynomial_coefficients (numpy array) – An MxNx4x4 array of polynomial coefficients describing the pixel offset corrections.

  • image_shape (tuple) – The desired shape of the images in pixels, (y_pixels, x_pixels).

Returns:

  • coefficient_corrections (numpy array) – Array of pixel correction coefficients of shape M x N x image_shape.

  • offset_corrections (numpy array) – Array of pixel correction offsets of shape M x N x image_shape.
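
Example

A hedged sketch; the polynomial coefficient arrays would come from a prior calibration step (not shown), and the image shape is illustrative:

>>> from owl import calibration
>>> coefficient_corrections, offset_corrections = calibration.create_pixel_corrections(
...     coefficient_polynomial_coefficients,
...     offset_polynomial_coefficients,
...     image_shape=(3000, 4000))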

Visualization tools

The owl module provides the following visualization tools to explore the large datasets generated by the MCAM.

Graphical User Interface

The MCAM ships with a basic graphical user interface that lets you interact with its core features in an easy-to-use fashion.

Upon closing the interface, the last used settings are saved to the user's directory. On Windows (MCAM_OWL only), the settings are stored in:

  • C:\Users\<Username>\Application Data\Local Settings\ramonaoptics\mcam

and on Linux, the configuration file is stored in:

  • ~/.config/mcam