3. MCAM Dataset

The core data organization of the MCAM is an xarray.Dataset. These datasets support many standard NumPy operations while providing straightforward metadata management and disk serialization.

Image coordinates included in data

Examining the object returned by mcam_data.new_dataset, we notice not only an array of zeros, but also coordinates such as the x and y indices, as well as the image_x and image_y indices.

>>> from owl import mcam_data
>>> dataset = mcam_data.new_dataset()
>>> dataset
<xarray.Dataset>
Dimensions:           (image_y: 9, image_x: 6, y: 3120, x: 4096)
Coordinates:
  * image_y           (image_y) int64 0 1 2 3 4 5 6 7 8
  * image_x           (image_x) int64 0 1 2 3 4 5
  * y                 (y) int64 0 1 2 3 4 5 6 ... 3114 3115 3116 3117 3118 3119
  * x                 (x) int64 0 1 2 3 4 5 6 ... 4090 4091 4092 4093 4094 4095
Data variables:
    images            (image_y, image_x, y, x) uint8 0 0 0 0 0 0 ... 0 0 0 0 0 0

This kind of metadata is important when slicing data. For example, slicing the newly created dataset object, we can see that the coordinates are sliced along with the data:

>>> dataset.isel({'y': slice(None, None, 2), 'x': slice(1, None, 3)})
<xarray.Dataset>
Dimensions:           (image_y: 9, image_x: 6, y: 1560, x: 1365)
Coordinates:
  * image_y           (image_y) int64 0 1 2 3 4 5 6 7 8
  * image_x           (image_x) int64 0 1 2 3 4 5
  * y                 (y) int64 0 2 4 6 8 10 ... 3108 3110 3112 3114 3116 3118
  * x                 (x) int64 1 4 7 10 13 16 ... 4078 4081 4084 4087 4090 4093
Data variables:
    images            (image_y, image_x, y, x) uint8 0 0 0 0 0 0 ... 0 0 0 0 0 0

We notice that the x and y coordinates now reflect how the original array was sliced. Retaining this information can be important when performing advanced analysis on your datasets.

The raw data, without any metadata, is accessible through the data attribute of the images data variable.

>>> dataset.images.data
array([[[[0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         ...,
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0],
         [0, 0, 0, ..., 0, 0, 0]]]], dtype=uint8)

Coordinates are also attributes, and can be accessed in a similar fashion:

>>> dataset.image_x
<xarray.DataArray 'image_x' (image_x: 6)>
array([0, 1, 2, 3, 4, 5])
Coordinates:
  * image_x           (image_x) int64 0 1 2 3 4 5

The owl library leverages the work done by the xarray community to offer labeled datasets. To learn more about all the features of xarray, please refer to its documentation.

Experiment metadata

Data created during experiments includes additional information, such as the exposure and gain settings. We can access the Dataset obtained during an experiment through the dataset attribute of the MCAM object.

>>> from owl.instruments import MCAM
>>> mcam = MCAM()
>>> mcam.dataset
<xarray.Dataset>
Dimensions:                                      (image_x: 6, image_y: 9, reflection_illumination.led_number: 377, reflection_illumination.rgb: 3, reflection_illumination.yx: 2, transmission_illumination.led_number: 377, transmission_illumination.rgb: 3, transmission_illumination.yx: 2, x: 4096, y: 3120)
Coordinates:
  * reflection_illumination.led_number           (reflection_illumination.led_number) int64 ...
  * reflection_illumination.rgb                  (reflection_illumination.rgb) object ...
  * transmission_illumination.led_number         (transmission_illumination.led_number) int64 ...
  * transmission_illumination.rgb                (transmission_illumination.rgb) object ...
  * image_x                                      (image_x) int64 0 1 2 3 4 5
  * image_y                                      (image_y) int64 0 1 2 ... 7 8
  * y                                            (y) int64 0 1 2 ... 3118 3119
  * x                                            (x) int64 0 1 2 ... 4094 4095
  * transmission_illumination.yx                 (transmission_illumination.yx) <U1 ...
    transmission_illumination.led_positions      (transmission_illumination.led_number, transmission_illumination.yx) float64 ...
    transmission_illumination.chroma             (transmission_illumination.led_number) <U3 ...
  * reflection_illumination.yx                   (reflection_illumination.yx) <U1 ...
    reflection_illumination.led_positions        (reflection_illumination.led_number, reflection_illumination.yx) float64 ...
    reflection_illumination.chroma               (reflection_illumination.led_number) <U3 ...
    exif_orientation                             int64 8
Data variables:
    images                                       (image_y, image_x, y, x) uint8
    acquisition_count                            (image_y, image_x) int64 0
    trigger                                      (image_y, image_x) int64 0
    exposure                                     (image_y, image_x) float64
    bayer_pattern                                (image_y, image_x) <U4
    software_timestamp                           (image_y, image_x) datetime64[ns]
    digital_red_gain                             (image_y, image_x) float64
    digital_green1_gain                          (image_y, image_x) float64
    digital_blue_gain                            (image_y, image_x) float64
    digital_green2_gain                          (image_y, image_x) float64
    analog_gain                                  (image_y, image_x) float64
    digital_gain                                 (image_y, image_x) float64
    acquisition_index                            (image_y, image_x) int64
    latest_acquisition_index                     int64 0
    z_stage                                      float64 0.0
    transmission_illumination.state              (transmission_illumination.led_number, transmission_illumination.rgb) float64
    reflection_illumination.state                (reflection_illumination.led_number, reflection_illumination.rgb) float64

A few key DataArrays have been added to the Dataset, namely exposure, the analog and digital gains, and acquisition_count. The MCAM object only stores the last acquisition.

The pixel data can be accessed directly through the images DataArray within the dataset:

>>> mcam.dataset['images']
<xarray.DataArray 'images' (image_y: 9, image_x: 6, y: 3120, x: 4096)>
Coordinates:
 * image_x                               (image_x) int64 0 1 2 3 4 5
 * image_y                               (image_y) int64 0 1 2 3 4 5 6 7 8
 * y                                      (y) int64 0 1 2 3 ... 3117 3118 3119
 * x                                      (x) int64 0 1 2 3 ... 4093 4094 4095
   exif_orientation                       int64 8

and contains information pertaining to the location of each pixel on the underlying imaging sensors. To obtain the underlying NumPy array, use the data attribute of the returned DataArray.

>>> mcam.dataset['images'].data
array([[[[ 83,  83,  83, ...,  86,  85,  84],
         [ 83,  87,  81, ...,  87,  83,  87],
         [ 81,  87,  83, ...,  85,  86,  85],
         ...,
         [ 87,  85,  87, ...,  84,  85,  85],
         [ 85,  84,  87, ...,  84,  87,  84],
         [ 84,  84,  86, ...,  88,  84,  84]]]], dtype=uint8)

The exposure refers to the exposure setting that was set for each of the cameras. For example, if we acquire a full field of view with an exposure of 100 ms, and then an image from a single micro-camera with an exposure of 50 ms, the resulting exposure DataArray takes on the following values:

>>> mcam.exposure = 100E-3
>>> mcam.acquire_full_field_of_view()
>>> mcam.exposure = 50E-3
>>> mcam.acquire_new_image(index=(3, 2))
>>> mcam.dataset.exposure
<xarray.DataArray 'exposure' (image_y: 9, image_x: 6)>
array([[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.04998618, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
       [0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245]])
Coordinates:
  * image_x                                     (image_x) int64 0 1 2 3 4 5
  * image_y                                     (image_y) int64 0 1 2 ... 7 8
    exif_orientation                             int64 8

We notice that a new image was acquired for index (3, 2) and that all the properties associated with that sensor have been updated in the array.

Detailed list of coordinate variables in the MCAM dataset

image_y (int64, shape (N_cameras_Y,))
    The camera index in the Y direction for the particular micro-camera. For the Falcon, this is an array of shape (9,) with values array([0, 1, 2, 3, 4, 5, 6, 7, 8]).

image_x (int64, shape (N_cameras_X,))
    The camera index in the X direction for the particular micro-camera. For the Falcon, this is an array of shape (6,) with values array([0, 1, 2, 3, 4, 5]).

y (int64, shape (image_shape_y,))
    The pixel index in the Y direction within a given micro-camera.

x (int64, shape (image_shape_x,))
    The pixel index in the X direction within a given micro-camera.

exif_orientation (int64, scalar)
    Integer describing the EXIF orientation of the MCAM Dataset. Valid values are integers between 1 and 8 inclusive. See exif for more information.
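For example, to display a single micro-camera image upright, the stored orientation can be applied with NumPy. The mapping below covers only the pure-rotation EXIF values and sketches one possible display convention; it is not an owl API (np.rot90 rotates counter-clockwise):

>>> import numpy as np
>>> img = mcam.dataset['images'].isel(image_y=0, image_x=0).data
>>> rotations = {1: 0, 3: 2, 6: 3, 8: 1}  # EXIF value -> number of 90° CCW turns
>>> k = rotations[int(mcam.dataset.exif_orientation)]
>>> upright = np.rot90(img, k)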

Detailed list of data variables in the MCAM dataset

If a variable's coordinates are listed as None, the variable is stored as a scalar.

images (uint8, coordinates (image_y, image_x, y, x), units: 0-255)
    The raw pixel data obtained from the array of image sensors.

exposure (float64, coordinates (image_y, image_x), units: seconds)
    The exposure with which each micro-camera captured the last acquisition.

digital_gain (float64, coordinates (image_y, image_x), units: scaling factor)
    The digital gain with which the image was acquired in a particular micro-camera. This number is only valid if the digital gain for all pixel colors is equal.

analog_gain (float64, coordinates (image_y, image_x), units: scaling factor)
    The analog gain with which the image was acquired in a particular micro-camera.

acquisition_count (int64, coordinates (image_y, image_x), units: integer)
    The number of images that have been acquired with the micro-camera.

software_timestamp (datetime64[ns], coordinates (image_y, image_x), units: datetime)
    The approximate time when the image from a particular micro-camera was taken.

bayer_pattern (<U4, coordinates (image_y, image_x), units: Bayer pattern)
    If the image was taken with a sensor that contains a color filter array, this contains information about the pixel ordering of the sensor. Valid values are ['rggb', 'bggr', 'grbg', 'gbrg'].

latest_acquisition_index (int64, coordinates None, units: integer)
    The acquisition index of the latest set of images acquired at the same time.

digital_red_gain (float64, coordinates (image_y, image_x), units: scaling factor)
    The digital gain for the red pixels with which the image was acquired in a particular micro-camera.

digital_green1_gain (float64, coordinates (image_y, image_x), units: scaling factor)
    The digital gain for the green pixels adjacent to the red ones with which the image was acquired in a particular micro-camera.

digital_blue_gain (float64, coordinates (image_y, image_x), units: scaling factor)
    The digital gain for the blue pixels with which the image was acquired in a particular micro-camera.

digital_green2_gain (float64, coordinates (image_y, image_x), units: scaling factor)
    The digital gain for the green pixels adjacent to the blue ones with which the image was acquired in a particular micro-camera.
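As noted above, digital_gain is only meaningful when the four per-channel digital gains agree, so it can be worth verifying this before relying on it. A minimal sketch using the variables listed above:

>>> ds = mcam.dataset
>>> per_channel = [ds.digital_red_gain, ds.digital_green1_gain,
...                ds.digital_blue_gain, ds.digital_green2_gain]
>>> all(bool((g == ds.digital_gain).all()) for g in per_channel)
True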

System information

Information about the system that acquired the MCAM Dataset is also saved as part of the dataset coordinates at the time the MCAM object is opened.

The following coordinates are part of each MCAM Dataset.

__owl_version__ (<U)
    The version of the owl package at the time of dataset creation.

__sys_version__ (<U)
    The version returned by import sys; sys.version. This corresponds to the version of the Python interpreter.

__owl_sys_info__ (<U)
    A string containing the dictionary returned by import owl; owl.sys_info().

__falcon.build_info.version__ (<U)
    The version of the FPGA logic on the Falcon acquisition system.

__falcon.build_info.configuration__ (<U)
    The build configuration of the FPGA logic on the Falcon acquisition system.

__falcon.build_info.branch__ (<U)
    The branch from which the FPGA logic was built.

__falcon.sensor_board.serial_number__ (<U)
    The sensor board serial number.

__falcon.dna__ (<U)
    The FPGA DNA (serial number).

__z_stage.stage_serial_number__ (<U)
    The serial number of the z-stage, if the z-stage is attached to the system.

__z_stage.ftdi_serial_number__ (<U)
    The serial number of the z-stage FTDI controller, if the z-stage is attached to the system.

__transmission_illumination.serial_number__ (<U)
    The serial number of the transmission illumination board, if the board is attached to the system.

__transmission_illumination.version__ (<U)
    The firmware version of the transmission illumination board, if the board is attached to the system.

__transmission_illumination.device_name__ (<U)
    The device name of the transmission illumination board, if the board is attached to the system.

__reflection_illumination.serial_number__ (<U)
    The serial number of the reflection illumination board, if the board is attached to the system.

__reflection_illumination.version__ (<U)
    The firmware version of the reflection illumination board, if the board is attached to the system.

__reflection_illumination.device_name__ (<U)
    The device name of the reflection illumination board, if the board is attached to the system.

Saved datasets and metadata

By default, MCAM datasets are saved as NetCDF4 files that can easily be opened by a variety of applications to access the raw data for custom analysis. At their core, these are HDF5 files, a format supported by many scientific applications, and several HDF5 data navigators exist for rapidly visualizing the data.
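For example, a saved dataset can be re-opened with xarray or, because NetCDF4 files are HDF5 files at their core, inspected directly with h5py (a sketch; substitute your own file path):

>>> import xarray as xr
>>> dataset = xr.open_dataset('/path/to/mcam_dataset.nc')   # the full labeled Dataset
>>> import h5py
>>> hdf5_file = h5py.File('/path/to/mcam_dataset.nc', 'r')  # the same file as raw HDF5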

Exporting datasets as images

Saved data can also be exported, through routines labelled export, into individual image files along with a single metadata.json file. Note, however, that this does not preserve the full metadata. Contact us if you require more information on how to use the exported data.

Adding custom metadata

Adding custom metadata is possible by adding data or coordinates to the dataset before it is saved.

For example, if we wanted to record the height at which a stage was set during the experiment, we could do so with the following lines of code:

>>> mcam.dataset['z_stage'] = 5E-3
>>> mcam.dataset
<xarray.Dataset>
Dimensions:                                (image_x: 6, image_y: 9, x: 4096, y: 3120)
Coordinates:
  * image_x                                (image_x) int64 0 1 2 3 4 5
  * image_y                                (image_y) int64 0 1 2 3 4 5 6 7 8
  * y                                      (y) int64 0 1 2 3 ... 3117 3118 3119
  * x                                      (x) int64 0 1 2 3 ... 4093 4094 4095
    exif_orientation                       int64 8
Data variables:
    images                                 (image_y, image_x, y, x) uint8 8...
    acquisition_count                      (image_y, image_x) int64 0 0 ... 0
    trigger                                (image_y, image_x) int64 0 0 ... 0
    exposure                               (image_y, image_x) float64 0.0 ....
    bayer_pattern                          (image_y, image_x) <U4 'gbrg' .....
    software_timestamp                     (image_y, image_x) datetime64[ns] ...
    digital_red_gain                       (image_y, image_x) float64 0.0 ....
    digital_green1_gain                    (image_y, image_x) float64 0.0 ....
    digital_blue_gain                      (image_y, image_x) float64 0.0 ....
    digital_green2_gain                    (image_y, image_x) float64 0.0 ....
    analog_gain                            (image_y, image_x) float64 0.0 ....
    digital_gain                           (image_y, image_x) float64 0.0 ....
    acquisition_index                      (image_y, image_x) int64 0 0 ... 0
    latest_acquisition_index               int64 0
    z_stage                                float64 0.005

We notice that the z_stage data variable was set and now stores the value 0.005.

Should we wish to add a variable whose dimensions refer to the image_y or image_x indices, we should first create the appropriate xarray object with the desired dimensions.

>>> import numpy as np
>>> from xarray import DataArray
>>> emission_filters = DataArray(
...     np.zeros(mcam.N_cameras), dims=['image_y', 'image_x'],
...     name='emission_filters')
>>> emission_filters[::2, ::2] = 540E-9
>>> emission_filters[1::2, ::2] = 560E-9
>>> emission_filters[1::2, 1::2] = 570E-9
>>> emission_filters[0::2, 1::2] = 580E-9
>>> mcam.dataset['emission_filters'] = emission_filters
>>> mcam.dataset
<xarray.Dataset>
Dimensions:                                (image_x: 6, image_y: 9, x: 4096, y: 3120)
Coordinates:
  * image_x                               (image_x) int64 0 1 2 3 4 5
  * image_y                               (image_y) int64 0 1 2 3 4 5 6 7 8
  * y                                      (y) int64 0 1 2 3 ... 3117 3118 3119
  * x                                      (x) int64 0 1 2 3 ... 4093 4094 4095
    exif_orientation                       int64 8
Data variables:
images                                 (image_y, image_x, y, x) uint8 8...
    acquisition_count                      (image_y, image_x) int64 0 0 ... 0
    trigger                                (image_y, image_x) int64 0 0 ... 0
    exposure                               (image_y, image_x) float64 0.0 ....
    bayer_pattern                          (image_y, image_x) <U4 'gbrg' .....
    software_timestamp                     (image_y, image_x) datetime64[ns] ...
    digital_red_gain                       (image_y, image_x) float64 0.0 ....
    digital_green1_gain                    (image_y, image_x) float64 0.0 ....
    digital_blue_gain                      (image_y, image_x) float64 0.0 ....
    digital_green2_gain                    (image_y, image_x) float64 0.0 ....
    analog_gain                            (image_y, image_x) float64 0.0 ....
    digital_gain                           (image_y, image_x) float64 0.0 ....
    acquisition_index                      (image_y, image_x) int64 0 0 ... 0
    latest_acquisition_index               int64 0
    z_stage                                float64 0.005
    emission_filters                       (image_y, image_x) float64 5.4e-...
>>> mcam.dataset.emission_filters
<xarray.DataArray 'emission_filters' (image_y: 9, image_x: 6)>
array([[5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
       [5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
       [5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
       [5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
       [5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
       [5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
       [5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
       [5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
       [5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07]])
Coordinates:
  * image_x                               (image_x) int64 0 1 2 3 4 5
  * image_y                               (image_y) int64 0 1 2 3 4 5 6 7 8
    exif_orientation                       int64 8

We now notice that our emission_filters variable has taken on values along the image_y and image_x dimensions that match the rest of our data. If these features seem useful to your workflow, we suggest learning more about them by reading through the relevant parts of the xarray documentation.

Color Transforms - Grayscale

Images may be exported and displayed in color (RGB) or grayscale from within the GUI. RGB images are converted to grayscale by combining the three color channels of each pixel using the following equation:

\[\begin{split}\begin{bmatrix} p_{gray} \end{bmatrix}_{x, y} = \begin{bmatrix} 0.2125 & 0.7154 & 0.0721 \end{bmatrix} \begin{bmatrix} p_r \\ p_g \\ p_b \end{bmatrix}_{x, y}\end{split}\]

where \(p_r\), \(p_g\), and \(p_b\) are the red, green, and blue channel values of pixel \((x, y)\). The vector \(\begin{bmatrix} 0.2125 & 0.7154 & 0.0721 \end{bmatrix}\) holds the weights applied to the red, green, and blue channels of pixel \((x, y)\); their weighted sum is \(p_{gray}\), the pixel's grayscale value.
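A minimal NumPy sketch of this conversion, assuming an RGB array whose last axis holds the three color channels:

>>> import numpy as np
>>> weights = np.array([0.2125, 0.7154, 0.0721])
>>> rgb = np.random.rand(480, 640, 3)   # stand-in for a debayered RGB image
>>> gray = rgb @ weights                # weighted sum over the last (RGB) axis
>>> gray.shape
(480, 640)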

Color Corrections - RGB Lighting

The MCAM imaging head is made up of an array of individual camera modules. Each module has its own sensor, which must be calibrated against the other sensors in the array so that the array produces a unified response. This calibration must also account for the spatially varying brightness of the illumination. The goal of this calibration is to unify the response of the pixels of the full array to the illumination, so that a pixel value in one location of the array can be compared with confidence to a pixel value in another section of the array.

To account for variations in the image sensors and lenses that make up each of the camera modules, each MCAM is calibrated under a variety of known illumination conditions. The following briefly describes the calibration procedure and the mathematical operations applied to each image in the MCAM viewer to ensure uniform measurements across the entire field of view.

We begin by denoting each camera in the array with a 2-dimensional index, \((i, j)\).

During calibration, a sensor response matrix is created by collecting the average pixel response of the sensors to a specific illumination pattern. For each measurement, a different combination of LED channel values, defined as a vector of 3 values, \(l=\left[l_r, l_g, l_b\right]\), is applied and one image is acquired from each of the cameras in the array. The average pixel value for each of the 3 color channels, red, green, and blue, is recorded in a vector \(p = \left[p_r, p_g, p_b\right]\) for each sensor \((i, j)\). Pixel responses are split by channel because the different pixel channels (red, green, and blue) are expected to respond to the illumination patterns differently. The measurement is repeated \(n\) times, each time varying the vector \(l\) and recording \(p\). Images are also collected under no illumination to document the dark current of the sensors. Each component in the vectors above is given a superscript denoting the acquisition index at which it was recorded.

Together, these \(n\) measurements can be grouped in a matrix equation written as:

\[\begin{split}\begin{bmatrix} p_r^0 & p_r^1 & p_r^2 & \dots & p_r^n \\ p_g^0 & p_g^1 & p_g^2 & \dots & p_g^n \\ p_b^0 & p_b^1 & p_b^2 & \dots & p_b^n \end{bmatrix}_{i,j} = \begin{bmatrix} r_{rr} & r_{rg} & r_{rb} & b_r\\ r_{gr} & r_{gg} & r_{gb} & b_g\\ r_{br} & r_{bg} & r_{bb} & b_b \end{bmatrix}_{i,j} \begin{bmatrix} l_r^0 & l_r^1 & l_r^2 & \ldots & l_r^n \\ l_g^0 & l_g^1 & l_g^2 & \ldots & l_g^n \\ l_b^0 & l_b^1 & l_b^2 & \ldots & l_b^n \\ 1 & 1 & 1 & \ldots & 1 \end{bmatrix}\end{split}\]

Where \(p_r\), \(p_g\), and \(p_b\) are sensor \((i, j)\)'s average red, green, and blue channel values for the \(n\) measurements, and \(l_r\), \(l_g\), \(l_b\) are the illumination's red, green, and blue channel powers, respectively.

The response matrix’s elements are the following:

  • \(r_{rr}\) - the sensor’s red channel’s average response to the illumination’s red channel.

  • \(r_{rg}\) - the sensor’s red channel’s average response to the illumination’s green channel.

  • \(r_{rb}\) - the sensor’s red channel’s average response to the illumination’s blue channel.

  • \(b_{r}\) - the sensor’s red channel’s average response when no illumination is applied.

  • \(r_{gr}\) - the sensor’s green channel’s average response to the illumination’s red channel.

  • \(r_{gg}\) - the sensor’s green channel’s average response to the illumination’s green channel.

  • \(r_{gb}\) - the sensor’s green channel’s average response to the illumination’s blue channel.

  • \(b_{g}\) - the sensor’s green channel’s average response when no illumination is applied.

  • \(r_{br}\)- the sensor’s blue channel’s average response to the illumination’s red channel.

  • \(r_{bg}\) - the sensor’s blue channel’s average response to the illumination’s green channel.

  • \(r_{bb}\) - the sensor’s blue channel’s average response to the illumination’s blue channel.

  • \(b_{b}\) - the sensor’s blue channel’s average response when no illumination is applied.

Writing the equation above compactly as a matrix equation, we obtain:

\[P_{i,j} = R_{i,j}L\]

The corrections for the sensor responses are created by taking the element-by-element average of the responses across sensors and multiplying it by the pseudo-inverse of the individual response matrix.

\[C_{i,j} = R' R_{i,j}^{-1}\]

Where \(C_{i,j}\) is the correction matrix, \(R'\) is the average response matrix across sensors, and \(R_{i,j}^{-1}\) is the pseudo-inverse of \(R_{i,j}\) (the response matrix is not square, so the Moore-Penrose pseudo-inverse is used).
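A schematic NumPy sketch of this fit for a single sensor, using synthetic stand-ins for the measurement matrices \(P\) and \(L\) defined above (in practice \(R'\) is the element-wise mean over all sensors; np.linalg.pinv supplies the pseudo-inverse of the 3 × 4 response matrix):

>>> import numpy as np
>>> rng = np.random.default_rng(0)
>>> L = np.vstack([rng.uniform(0, 1, size=(3, 16)), np.ones(16)])  # LED settings plus bias row
>>> R_true = rng.uniform(0, 1, size=(3, 4))                        # synthetic sensor response
>>> P = R_true @ L                                                 # simulated measurements
>>> R = P @ np.linalg.pinv(L)       # least-squares fit of the response matrix
>>> R_avg = R                       # stand-in for the average response across sensors
>>> C = R_avg @ np.linalg.pinv(R)   # correction matrix C_{i,j}; ~identity in this toy case
>>> np.allclose(C, np.eye(3))
True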

Along with this, each sensor's response is not uniform across all pixels. To account for this, surfaces are defined to model the spatially varying corrections of the sensors to the illumination source. These are modeled by the following polynomial:

\[S_{i,j}(x, y) = a + by + cy^{2} + dx + exy + fx^{2}\]

Where \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) are coefficients, and \(x\) and \(y\) define the pixel's position on the sensor.

Applying the sensor and pixel corrections follows the formula:

\[P'_{i,j}(x, y) = S_{(i,j),coefficient}(x, y)((C_{i,j} P_{i,j}(x, y)) + S_{(i,j),offset}(x, y))\]

Where \(P'_{i,j}(x, y)\) is the corrected pixel at position \((x, y)\) on sensor \((i, j)\). \(S_{i,j,coefficient}(x, y)\) is the multiplicative correction for sensor \((i, j)\) at pixel position \((x, y)\). \(P_{i,j}(x, y)\) is the original pixel value at position \((x, y)\) on sensor \((i, j)\), and \(S_{i,j,offset}(x, y)\) is the additive correction for sensor \((i, j)\) at pixel position \((x, y)\).

This procedure ensures that the corrected pixel values for camera \((i, j)\), \(P'_{i,j}(x, y)\), can be compared with confidence to those of camera \((k, l)\), \(P'_{k,l}(w, z)\).
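This per-pixel formula vectorizes naturally in NumPy. Below is a sketch in which the correction matrix and the surface coefficients are hypothetical stand-ins for real calibration products:

>>> import numpy as np
>>> def surface(coeffs, x, y):
...     # S(x, y) = a + b*y + c*y**2 + d*x + e*x*y + f*x**2
...     a, b, c, d, e, f = coeffs
...     return a + b*y + c*y**2 + d*x + e*x*y + f*x**2
>>> C = np.eye(3)                         # hypothetical correction matrix C_{i,j}
>>> coeff = (1.0, 0, 0, 0, 0, 0)          # hypothetical multiplicative surface coefficients
>>> offset = (0.0, 0, 0, 0, 0, 0)         # hypothetical additive surface coefficients
>>> pixels = np.random.rand(480, 640, 3)  # P_{i,j}(x, y): one RGB triple per pixel
>>> y, x = np.mgrid[0:480, 0:640]
>>> corrected = surface(coeff, x, y)[..., None] * (
...     pixels @ C.T + surface(offset, x, y)[..., None])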

Color Corrections - IR (850nm) Lighting

Similar to the RGB calibration described above, a calibration is also performed for the 850 nm infrared (IR) illumination mode. This again ensures consistency, so that a pixel value in one location of the array can be compared with confidence to one in another section of the array.

We begin by denoting each camera in the array with a 2-dimensional index, \((i, j)\).

During calibration, a sensor response matrix is created by collecting the average pixel response of the sensors to varying illumination intensities. For each measurement, an intensity of 850 nm IR light, defined as a scalar \(l\), is applied and one image is acquired from each of the cameras in the array. The average pixel value for each of the 3 color channels, red, green, and blue, is recorded in a vector \(p = \left[p_r, p_g, p_b\right]\) for each sensor \((i, j)\). Pixel responses are split by channel because the different pixel channels (red, green, and blue) are expected to respond to the IR light differently. The measurement is repeated \(n\) times, each time varying the intensity of the light \(l\) and recording \(p\). Images are also collected under no illumination to document the dark current of the sensors. Each component in the vectors above is given a superscript denoting the acquisition index at which it was recorded.

Together, these \(n\) measurements can be grouped in a matrix equation written as:

\[\begin{split}\begin{bmatrix} p_r^0 & p_r^1 & p_r^2 & \dots & p_r^n \\ p_g^0 & p_g^1 & p_g^2 & \dots & p_g^n \\ p_b^0 & p_b^1 & p_b^2 & \dots & p_b^n \end{bmatrix}_{i,j} = \begin{bmatrix} r_{r} & b_r\\ r_{g} & b_g\\ r_{b} & b_b \end{bmatrix}_{i,j} \begin{bmatrix} l^0 & l^1 & l^2 & \ldots & l^n \\ 1 & 1 & 1 & \ldots & 1 \end{bmatrix}\end{split}\]

Where \(p_r\), \(p_g\), and \(p_b\) are sensor \((i, j)\)'s average red, green, and blue channel values for the \(n\) measurements, and \(l\) is the IR illumination's power.

The response matrix’s elements are the following:
\(r_{r}\) - the sensor’s red channel’s average response to the IR illumination.
\(b_{r}\) - the sensor’s red channel’s average response when no IR illumination is applied.
\(r_{g}\) - the sensor’s green channel’s average response to the IR illumination.
\(b_{g}\) - the sensor’s green channel’s average response when no IR illumination is applied.
\(r_{b}\)- the sensor’s blue channel’s average response to the IR illumination.
\(b_{b}\) - the sensor’s blue channel’s average response when no IR illumination is applied.

This can be abstracted to:

\[P_{i,j} = R_{i,j}L\]

Corrections for the sensor responses are created by taking the element-by-element average of the responses across sensors and multiplying it by the individual response matrix's pseudo-inverse.

\[C_{i,j} = R' R_{i,j}^{-1}\]

Where \(C_{i,j}\) is the correction matrix, \(R'\) is the average response matrix across sensors, and \(R_{i,j}^{-1}\) is the pseudo-inverse of \(R_{i,j}\).

Along with this, each sensor's response is not uniform across all pixels. To account for this, surfaces are defined to model the spatially varying corrections of the sensors to the illumination source. These are modeled by the following polynomial:

\[S_{i,j}(x, y) = a + by + cy^{2} + dx + exy + fx^{2}\]

Where \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) are coefficients, and \(x\) and \(y\) define the pixel's position on the sensor.

Applying the sensor and pixel corrections follows the formula:

\[P'_{i,j}(x, y) = S_{(i,j),coefficient}(x, y)((C_{i,j} P_{i,j}(x, y)) + S_{(i,j),offset}(x, y))\]

Where \(P'_{i,j}(x, y)\) is the corrected pixel at position \((x, y)\) on sensor \((i, j)\), \(S_{i,j,coefficient}(x, y)\) is the multiplicative correction for sensor \((i, j)\) at pixel position \((x, y)\), \(P_{i,j}(x, y)\) is the original pixel value at position \((x, y)\) on sensor \((i, j)\), and \(S_{i,j,offset}(x, y)\) is the additive correction for sensor \((i, j)\) at pixel position \((x, y)\).

Because of this correction, pixel value \(P'_{i,j}(x, y)\) can be compared to \(P'_{k,l}(w, z)\) with confidence.

Color Corrections - White Lighting

For systems using the white light rail illumination, a separate calibration must be performed. These systems must not only correct the sensors to balance their relative responses, but also balance each sensor's channel responses to the white light. For RGB illumination we are able to tune the balance of the lighting channels to create a lighting ratio that appears white to the sensors; with white light illumination this tuning is unavailable. Instead, we must correct the raw pixel values using the calibrated corrections to remove any unexpected coloration of the image.

During calibration, a sensor response matrix is created by collecting the average pixel response of the sensors to varying illumination intensities. For each measurement, an intensity of white light, defined as a scalar \(l\), is applied and one image is acquired from each of the cameras in the array. The average pixel value for each of the 3 color channels, red, green, and blue, is recorded in a vector \(p = \left[p_r, p_g, p_b\right]\) for each sensor \((i, j)\). Pixel responses are split by channel because the different pixel channels (red, green, and blue) are expected to respond to the white light differently. The measurement is repeated \(n\) times, each time varying the intensity of the light \(l\) and recording \(p\). Images are also collected under no illumination to document the dark current of the sensors. Each component in the vectors above is given a superscript denoting the acquisition index at which it was recorded.

Together, these \(n\) measurements can be grouped in a matrix equation written as:

\[\begin{split}\begin{bmatrix} p_r^0 & p_r^1 & p_r^2 & \dots & p_r^n \\ p_g^0 & p_g^1 & p_g^2 & \dots & p_g^n \\ p_b^0 & p_b^1 & p_b^2 & \dots & p_b^n \end{bmatrix}_{i,j} = \begin{bmatrix} r_{r} & b_r\\ r_{g} & b_g\\ r_{b} & b_b \end{bmatrix}_{i,j} \begin{bmatrix} l^0 & l^1 & l^2 & \ldots & l^n \\ 1 & 1 & 1 & \ldots & 1 \end{bmatrix}\end{split}\]

Where \(p_r\), \(p_g\), and \(p_b\) are sensor \((i, j)\)'s average red, green, and blue channel values for the \(n\) measurements, and \(l\) is the white light rail illumination's power.

The response matrix’s elements are the following:
\(r_{r}\) - the sensor’s red channel’s average response to the white light illumination.
\(b_{r}\) - the sensor’s red channel’s average response when no white light illumination is applied.
\(r_{g}\) - the sensor’s green channel’s average response to the white light illumination.
\(b_{g}\) - the sensor’s green channel’s average response when no white light illumination is applied.
\(r_{b}\) - the sensor’s blue channel’s average response to the white light illumination.
\(b_{b}\) - the sensor’s blue channel’s average response when no white light illumination is applied.

Corrections for the white light response should make the responses of all channels equal. To create these corrections we multiply a scaled identity matrix by the pseudo-inverse of the individual sensor's response matrix.

\[C_{i,j} = s I R_{i,j}^{-1}\]

Where \(C_{i,j}\) is the correction matrix, \(R_{i,j}^{-1}\) is the pseudo-inverse of \(R_{i,j}\), and \(I\) is the identity matrix. The scalar value \(s\) is found by taking the root mean square of the three channels' responses.

\[s = \sqrt{(r_{r}^2 + r_{g}^2 + r_{b}^2) / 3}\]

This scalar keeps the corrected pixel values at the same magnitude as the raw pixel values.

Along with this, each sensor's response is not uniform across all pixels. To account for this, surfaces are defined to model the spatially varying corrections of the sensors to the illumination source. These are modeled by the following polynomial:

\[S_{i,j}(x, y) = a + by + cy^{2} + dx + exy + fx^{2}\]

Where \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) are coefficients, and \(x\) and \(y\) define the pixel's position on the sensor.

Applying the sensor and pixel corrections follows the formula:

\[P'_{i,j}(x, y) = S_{(i,j),coefficient}(x, y)((C_{i,j} P_{i,j}(x, y)) + S_{(i,j),offset}(x, y))\]

Where \(P'_{i,j}(x, y)\) is the corrected pixel at position \((x, y)\) on sensor \((i, j)\), \(S_{i,j,coefficient}(x, y)\) is the multiplicative correction for sensor \((i, j)\) at pixel position \((x, y)\), \(P_{i,j}(x, y)\) is the original pixel value at position \((x, y)\) on sensor \((i, j)\), and \(S_{i,j,offset}(x, y)\) is the additive correction for sensor \((i, j)\) at pixel position \((x, y)\).

Because of this correction, pixel value \(P'_{i,j}(x, y)\) can be compared to \(P'_{k,l}(w, z)\) with confidence.

Illumination Channel ID

The illumination mode used for an acquisition is accessible through the illumination_channel_id variable of the MCAM dataset.

>>> from owl import mcam_data
>>> dataset = mcam_data.load('/path/to/mcam_dataset.nc')
>>> dataset.illumination_channel_id
<xarray.DataArray 'illumination_channel_id' ()>
array(1)
Coordinates:
    __owl_version__  <U25 '0.18.363'

Illumination Mode                                   Channel ID
external                                            0
transmission_visible_pwm_fullarray                  1
transmission_visible_pwm_perimeter                  2
transmission_visible_analog_fullarray               3
transmission_visible_analog_perimeter               4
transmission_ir850_pwm_fullarray                    5
transmission_ir850_pwm_perimeter                    6
transmission_ir850_analog_fullarray                 7
transmission_ir850_analog_perimeter                 8
reflection_visible_pwm_fullarray                    20
reflection_visible_pwm_perimeter                    21
reflection_visible_analog_fullarray                 22
reflection_visible_analog_perimeter                 23
reflection_ir850_pwm_fullarray                      24
reflection_ir850_pwm_perimeter                      25
reflection_ir850_analog_fullarray                   26
reflection_ir850_analog_perimeter                   27
fluorescence                                        50
fluorescence_front                                  51
fluorescence_back                                   52
fluorescence_unknown                                53
fluorescence_unknown_front                          54
fluorescence_unknown_back                           55
fluorescence_380nm                                  56
fluorescence_380nm_front                            57
fluorescence_380nm_back                             58
fluorescence_440nm                                  59
fluorescence_440nm_front                            60
fluorescence_440nm_back                             61
fluorescence_590nm                                  62
fluorescence_590nm_front                            63
fluorescence_590nm_back                             64
fluorescence_633nm                                  65
fluorescence_633nm_front                            66
fluorescence_633nm_back                             67
fluorescence_850nm                                  68
fluorescence_850nm_front                            69
fluorescence_850nm_back                             70
fluorescence_white_5650K                            71
fluorescence_white_5650K_front                      72
fluorescence_white_5650K_back                       73
fluorescence_530nm                                  74
fluorescence_530nm_front                            75
fluorescence_530nm_back                             76
fluorescence_brightfield_530nm_focused_diffuser     96
fluorescence_brightfield_530nm_focused              97
fluorescence_brightfield_530nm_high_contrast        98
fluorescence_brightfield_530nm_diffuser             99
fluorescence_brightfield                            100
fluorescence_brightfield_440nm                      101
fluorescence_brightfield_530nm                      102
transmission_red_pwm_fullarray                      103
transmission_red_pwm_perimeter                      104
transmission_red_analog_fullarray                   105
transmission_red_analog_perimeter                   106
transmission_green_pwm_fullarray                    107
transmission_green_pwm_perimeter                    108
transmission_green_analog_fullarray                 109
transmission_green_analog_perimeter                 110
transmission_blue_pwm_fullarray                     111
transmission_blue_pwm_perimeter                     112
transmission_blue_analog_fullarray                  113
transmission_blue_analog_perimeter                  114
reflection_red_pwm_fullarray                        115
reflection_red_pwm_perimeter                        116
reflection_red_analog_fullarray                     117
reflection_red_analog_perimeter                     118
reflection_green_pwm_fullarray                      119
reflection_green_pwm_perimeter                      120
reflection_green_analog_fullarray                   121
reflection_green_analog_perimeter                   122
reflection_blue_pwm_fullarray                       123
reflection_blue_pwm_perimeter                       124
reflection_blue_analog_fullarray                    125
reflection_blue_analog_perimeter                    126
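If your analysis code needs the human-readable mode name rather than the raw integer, a small lookup dictionary built from the table above does the job (an excerpt only, not an owl API):

>>> ILLUMINATION_MODES = {
...     0: 'external',
...     1: 'transmission_visible_pwm_fullarray',
...     2: 'transmission_visible_pwm_perimeter',
...     20: 'reflection_visible_pwm_fullarray',
...     50: 'fluorescence',
... }
>>> ILLUMINATION_MODES[int(dataset.illumination_channel_id)]
'transmission_visible_pwm_fullarray'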