3. MCAM Dataset¶
The core data structure of the MCAM is an xarray.Dataset.
These datasets support many standard NumPy operations, while
enabling straightforward metadata management and disk serialization.
At Ramona, we are committed to the FAIR Guiding Principles so that the data we produce, including our metadata, is Findable, Accessible, Interoperable, and Reusable. If you find any part of this documentation unclear as it pertains to loading the data you need, please reach out to us by email at help@ramonaoptics.com.
The organization of our data within the NetCDF4 format ensures that the raw data and metadata captured by the MCAM software remain accessible over time.
A note on forward and backward compatibility¶
We strive to make each version of our software compatible with all the historic data you have acquired. However, if you inspect a dataset with your own software, you may occasionally find that certain fields are missing.
This is because we are constantly improving our software and adding new features. As such, older datasets may not contain all the fields that are present in the latest version of the software. We do our best to ensure that the core fields are always present.
We recommend defensive programming practices such as:
- Always check the value of __owl_version__.
- Check whether a key is present in the metadata before fetching it.
- Use default values when a field is missing.
For example, Python's dict.get('key', default_value) safely fetches a
value from a dictionary.
>>> metadata = your_function_to_load_metadata('metadata.json')
>>> value = metadata.get('some_key', default_value)
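Putting these recommendations together, here is a minimal sketch of defensive metadata loading. The field names follow the text above (`__owl_version__`) plus an illustrative `exposure` key; the exact keys in your files may differ, so treat this as a pattern rather than an exact API.

```python
import json

def load_metadata_defensively(text, default_exposure=0.0):
    """Parse a metadata JSON string while tolerating missing fields."""
    metadata = json.loads(text)
    # Always check the software version that produced the data.
    version = metadata.get('__owl_version__', 'unknown')
    # Fall back to a default when an optional field is absent.
    exposure = metadata.get('exposure', default_exposure)
    return version, exposure

# A newer file carries both fields; an older one may carry neither.
new_file = '{"__owl_version__": "0.19.460", "exposure": 0.1}'
old_file = '{}'
```

Both files load without raising, which is the point of the defensive pattern.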
Image coordinates included in data¶
Examining the object returned by mcam_data.new_dataset, we notice not only an array
of zeros, but also coordinates such as the x and y indices, as well as
the image_x and image_y indices.
>>> from owl import mcam_data
>>> dataset = mcam_data.new_dataset()
>>> dataset
<xarray.Dataset>
Dimensions: (image_y: 9, image_x: 6, y: 3120, x: 4096)
Coordinates:
* image_y (image_y) int64 0 1 2 3 4 5 6 7 8
* image_x (image_x) int64 0 1 2 3 4 5
* y (y) int64 0 1 2 3 4 5 6 ... 3114 3115 3116 3117 3118 3119
* x (x) int64 0 1 2 3 4 5 6 ... 4090 4091 4092 4093 4094 4095
Data variables:
images (image_y, image_x, y, x) uint8 0 0 0 0 0 0 ... 0 0 0 0 0 0
This kind of metadata is important when slicing data. For example, slicing the
newly created dataset, we can see that the coordinates are also
sliced:
>>> dataset.isel({'y': slice(None, None, 2), 'x': slice(1, None, 3)})
<xarray.Dataset>
Dimensions: (image_y: 9, image_x: 6, y: 1560, x: 1365)
Coordinates:
* image_y (image_y) int64 0 1 2 3 4 5 6 7 8
* image_x (image_x) int64 0 1 2 3 4 5
* y (y) int64 0 2 4 6 8 10 ... 3108 3110 3112 3114 3116 3118
* x (x) int64 1 4 7 10 13 16 ... 4078 4081 4084 4087 4090 4093
Data variables:
images (image_y, image_x, y, x) uint8 0 0 0 0 0 0 ... 0 0 0 0 0 0
We notice that the x and y coordinates now reflect that the original
array has been sliced in a particular fashion. This information can be important
to retain when doing advanced analysis on your datasets.
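The bookkeeping xarray performs here can be mimicked with plain NumPy to see why the sliced coordinates are useful: they map positions in the sliced array back to original pixel indices. This is an illustrative sketch, not owl code.

```python
import numpy as np

# Full-resolution pixel coordinates, matching the dataset above.
x = np.arange(4096)
y = np.arange(3120)

# The same slices passed to dataset.isel above.
y_sliced = y[::2]    # slice(None, None, 2)
x_sliced = x[1::3]   # slice(1, None, 3)

# Position k of the sliced data came from original pixel x_sliced[k],
# which is exactly what the sliced x coordinate records.
original_pixel_of_first_column = x_sliced[0]
```

The lengths (1560 and 1365) and values (1, 4, 7, ...) match the sliced coordinates printed above.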
The raw data, without any metadata, is accessible through the data attribute
of the images data variable.
>>> dataset.images.data
array([[[[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
...,
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0],
[0, 0, 0, ..., 0, 0, 0]]]], dtype=uint8)
Coordinates are also attributes, and can be accessed in a similar fashion:
>>> dataset.image_x
<xarray.DataArray 'image_x' (image_x: 6)>
array([0, 1, 2, 3, 4, 5])
Coordinates:
* image_x (image_x) int64 0 1 2 3 4 5
The owl library leverages the work done by the xarray community to
offer labeled datasets. To learn more about the features of xarray,
please refer to their documentation.
Experiment metadata¶
Data created during experiments includes additional information such as the
exposure, and gain settings. We can access the Dataset obtained during an
experiment from the dataset attribute of the MCAM object.
>>> from owl.instruments import MCAM
>>> mcam = MCAM()
>>> mcam.dataset
<xarray.Dataset>
Dimensions: (image_x: 6, image_y: 9, reflection_illumination.led_number: 377, reflection_illumination.rgb: 3, reflection_illumination.yx: 2, transmission_illumination.led_number: 377, transmission_illumination.rgb: 3, transmission_illumination.yx: 2, x: 4096, y: 3120)
Coordinates:
* reflection_illumination.led_number (reflection_illumination.led_number) int64 ...
* reflection_illumination.rgb (reflection_illumination.rgb) object ...
* transmission_illumination.led_number (transmission_illumination.led_number) int64 ...
* transmission_illumination.rgb (transmission_illumination.rgb) object ...
* image_x (image_x) int64 0 1 2 3 4 5
* image_y (image_y) int64 0 1 2 ... 7 8
* y (y) int64 0 1 2 ... 3118 3119
* x (x) int64 0 1 2 ... 4094 4095
* transmission_illumination.yx (transmission_illumination.yx) <U1 ...
transmission_illumination.led_positions (transmission_illumination.led_number, transmission_illumination.yx) float64 ...
transmission_illumination.chroma (transmission_illumination.led_number) <U3 ...
* reflection_illumination.yx (reflection_illumination.yx) <U1 ...
reflection_illumination.led_positions (reflection_illumination.led_number, reflection_illumination.yx) float64 ...
reflection_illumination.chroma (reflection_illumination.led_number) <U3 ...
exif_orientation int64 8
Data variables:
images (image_y, image_x, y, x) uint8
acquisition_count (image_y, image_x) int64 0
trigger (image_y, image_x) int64 0
exposure (image_y, image_x) float64
bayer_pattern (image_y, image_x) <U4
software_timestamp (image_y, image_x) datetime64[ns]
digital_red_gain (image_y, image_x) float64
digital_green1_gain (image_y, image_x) float64
digital_blue_gain (image_y, image_x) float64
digital_green2_gain (image_y, image_x) float64
analog_gain (image_y, image_x) float64
digital_gain (image_y, image_x) float64
acquisition_index (image_y, image_x) int64
latest_acquisition_index int64 0
z_stage float64 0.0
transmission_illumination.state (transmission_illumination.led_number, transmission_illumination.rgb) float64
reflection_illumination.state (reflection_illumination.led_number, reflection_illumination.rgb) float64
A few key DataArrays have been added to the Dataset, namely exposure,
gain, and acquisition_count.
The MCAM object only stores the last acquisition.
The pixel data can be directly accessed through the images
DataArray within the dataset:
>>> mcam.dataset['images']
<xarray.DataArray 'images' (image_y: 9, image_x: 6, y: 3120, x: 4096)>
Coordinates:
* image_x (image_x) int64 0 1 2 3 4 5
* image_y (image_y) int64 0 1 2 3 4 5 6 7 8
* y (y) int64 0 1 2 3 ... 3117 3118 3119
* x (x) int64 0 1 2 3 ... 4093 4094 4095
exif_orientation int64 8
and contains information pertaining to the location of each pixel on the
underlying imaging sensors. To obtain the underlying NumPy array, utilize the
data attribute of the returned DataArray.
>>> mcam.dataset['images'].data
array([[[[ 83, 83, 83, ..., 86, 85, 84],
[ 83, 87, 81, ..., 87, 83, 87],
[ 81, 87, 83, ..., 85, 86, 85],
...,
[ 87, 85, 87, ..., 84, 85, 85],
[ 85, 84, 87, ..., 84, 87, 84],
[ 84, 84, 86, ..., 88, 84, 84]]]], dtype=uint8)
The exposure refers to the exposure setting that was set for each of the given
cameras. For example, if we acquire a full field of view with an exposure of
100 ms, and an image from a single micro-camera with an exposure of 50 ms, the
resulting exposure DataArray would take on the following values:
>>> mcam.exposure = 100E-3
>>> mcam.acquire_full_field_of_view()
>>> mcam.exposure = 50E-3
>>> mcam.acquire_new_image(index=(3, 2))
>>> mcam.dataset.exposure
<xarray.DataArray 'exposure' (image_y: 9, image_x: 6)>
array([[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.04998618, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245],
[0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245, 0.09999245]])
Coordinates:
* image_x (image_x) int64 0 1 2 3 4 5
* image_y (image_y) int64 0 1 2 ... 7 8
exif_orientation int64 8
We notice that a new image was acquired for index (3, 2) and that all
the properties associated with that sensor have been updated in the array.
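This per-camera bookkeeping can be sketched with a NumPy array shaped (image_y, image_x): a full-field acquisition fills every entry, and re-acquiring a single micro-camera updates only its own cell. The shapes and values mirror the example above; this is not the MCAM implementation.

```python
import numpy as np

# One exposure value per micro-camera, indexed (image_y, image_x).
exposure = np.full((9, 6), 100e-3)  # full field of view at 100 ms

# Re-acquiring a single micro-camera updates only that entry.
exposure[3, 2] = 50e-3              # index (3, 2) at 50 ms

# All other cameras keep the full-field exposure value.
```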
Detailed list of coordinate variables in the MCAM dataset¶
| Name | Datatype | Shape | Description |
|---|---|---|---|
| image_y | int64 | (9,) | The camera index in the Y direction for the particular micro-camera. For the Falcon, this is an array of shape (9,). |
| image_x | int64 | (6,) | The camera index in the X direction for the particular micro-camera. For the Falcon, this is an array of shape (6,). |
| y | int64 | (3120,) | The pixel index in the Y direction within a given micro-camera. |
| x | int64 | (4096,) | The pixel index in the X direction within a given micro-camera. |
| exif_orientation | int64 | Scalar | Integer describing the EXIF orientation of the MCAM Dataset. Valid values are integers between 1 and 8. |
Detailed list of data variables in the MCAM dataset¶
If a variable has coordinates of (image_y, image_x), it stores one value per micro-camera.
System information¶
Information about the system that acquired the MCAM Dataset is also saved within the dataset itself as part of the dataset coordinates at the time the MCAM object is opened.
The following coordinates are part of each MCAM Dataset.
| Name | Datatype | Description |
|---|---|---|
| | | The version of the |
| | | The version returned by |
| | | The string containing the dictionary returned by |
| | | The version of the FPGA logic on the Falcon acquisition system. |
| | | The build configuration of the FPGA logic on the Falcon acquisition system. |
| | | The branch on which the FPGA logic was built at build time. |
| | | The sensor board serial number. |
| | | The FPGA DNA (serial number). |
| | | The serial number of the z-stage, if the z-stage is attached to the system. |
| | | The serial number of the z-stage FTDI controller, if the z-stage is attached to the system. |
| | | The serial number of the transmission illumination board, if the board is attached to the system. |
| | | The firmware version of the transmission illumination board, if the board is attached to the system. |
| | | The device name of the transmission illumination board, if the board is attached to the system. |
| | | The serial number of the reflection illumination board, if the board is attached to the system. |
| | | The firmware version of the reflection illumination board, if the board is attached to the system. |
| | | The device name of the reflection illumination board, if the board is attached to the system. |
Saved datasets and metadata¶
By default, MCAM datasets are saved as NetCDF4 files that can easily be opened by a variety of applications to access the raw data for custom analysis. At their core, they are HDF5 files that are supported by various scientific applications. A few HDF5 data navigators exist to rapidly visualize the data.
We point users of the MCAM to:
Exporting datasets as images¶
Saved data can also be exported through routines labelled export into
individual image files along with one metadata.json file. However, this
does not provide the full metadata information. Contact us if you require
more information on how to use the exported data.
Adding custom metadata¶
Adding custom metadata is possible by adding data or coordinates to the dataset before it is saved.
For example, if we wanted to record the height at which a stage was set during the experiment, we could do so with the following lines of code:
>>> mcam.dataset['z_stage'] = 5E-3
>>> mcam.dataset
<xarray.Dataset>
Dimensions: (image_x: 6, image_y: 9, x: 4096, y: 3120)
Coordinates:
* image_x (image_x) int64 0 1 2 3 4 5
* image_y (image_y) int64 0 1 2 3 4 5 6 7 8
* y (y) int64 0 1 2 3 ... 3117 3118 3119
* x (x) int64 0 1 2 3 ... 4093 4094 4095
exif_orientation int64 8
Data variables:
images (image_y, image_x, y, x) uint8 8...
acquisition_count (image_y, image_x) int64 0 0 ... 0
trigger (image_y, image_x) int64 0 0 ... 0
exposure (image_y, image_x) float64 0.0 ....
bayer_pattern (image_y, image_x) <U4 'gbrg' .....
software_timestamp (image_y, image_x) datetime64[ns] ...
digital_red_gain (image_y, image_x) float64 0.0 ....
digital_green1_gain (image_y, image_x) float64 0.0 ....
digital_blue_gain (image_y, image_x) float64 0.0 ....
digital_green2_gain (image_y, image_x) float64 0.0 ....
analog_gain (image_y, image_x) float64 0.0 ....
digital_gain (image_y, image_x) float64 0.0 ....
acquisition_index (image_y, image_x) int64 0 0 ... 0
latest_acquisition_index int64 0
z_stage float64 0.005
We notice that a new variable z_stage was set and stores the value 0.005.
Should we wish to add a coordinate that has dimensions that refer to the
image_y, or image_x indices, we should first create the appropriate
xarray object with the desired coordinates.
>>> import numpy as np
>>> from xarray import DataArray
>>> emission_filters = DataArray(
... np.zeros(mcam.N_cameras), dims=['image_y', 'image_x'],
... name='emission_filters')
>>> emission_filters[::2, ::2] = 540E-9
>>> emission_filters[1::2, ::2] = 560E-9
>>> emission_filters[1::2, 1::2] = 570E-9
>>> emission_filters[0::2, 1::2] = 580E-9
>>> mcam.dataset['emission_filters'] = emission_filters
>>> mcam.dataset
<xarray.Dataset>
Dimensions: (image_x: 6, image_y: 9, x: 4096, y: 3120)
Coordinates:
* image_x (image_x) int64 0 1 2 3 4 5
* image_y (image_y) int64 0 1 2 3 4 5 6 7 8
* y (y) int64 0 1 2 3 ... 3117 3118 3119
* x (x) int64 0 1 2 3 ... 4093 4094 4095
exif_orientation int64 8
Data variables:
images (image_y, image_x, y, x) uint8 8...
acquisition_count (image_y, image_x) int64 0 0 ... 0
trigger (image_y, image_x) int64 0 0 ... 0
exposure (image_y, image_x) float64 0.0 ....
bayer_pattern (image_y, image_x) <U4 'gbrg' .....
software_timestamp (image_y, image_x) datetime64[ns] ...
digital_red_gain (image_y, image_x) float64 0.0 ....
digital_green1_gain (image_y, image_x) float64 0.0 ....
digital_blue_gain (image_y, image_x) float64 0.0 ....
digital_green2_gain (image_y, image_x) float64 0.0 ....
analog_gain (image_y, image_x) float64 0.0 ....
digital_gain (image_y, image_x) float64 0.0 ....
acquisition_index (image_y, image_x) int64 0 0 ... 0
latest_acquisition_index int64 0
z_stage float64 0.005
emission_filters (image_y, image_x) float64 5.4e-...
>>> mcam.dataset.emission_filters
<xarray.DataArray 'emission_filters' (image_y: 9, image_x: 6)>
array([[5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
[5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
[5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
[5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
[5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
[5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
[5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07],
[5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07, 5.6e-07, 5.7e-07],
[5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07, 5.4e-07, 5.8e-07]])
Coordinates:
* image_x (image_x) int64 0 1 2 3 4 5
* image_y (image_y) int64 0 1 2 3 4 5 6 7 8
exif_orientation int64 8
We now notice that our emission_filters variable has taken on values for
the image_y and image_x coordinates that match the rest of our data. If
you think these features are useful to your workflow, we suggest you learn more
about them by reading through the relevant parts of the xarray documentation.
Image and Video Storage Formats¶
The MCAM software provides options to save the image data in various formats such as .nc, .mp4, and .tif.
In this section, we describe the tradeoffs between different image storage backends offered by the MCAM software.
In the context of the MCAM, data storage considerations include:
Lossy or lossless compression. - Typically, lossless compression offers smaller compression ratios (good compression factors are between 1.5 and 2x). - Lossy compression can achieve compression ratios of 10x or more, but at the cost of losing the finer image (and video) details.
Random access or sequential access. - Random access allows reading arbitrary portions of the data without reading the entire image (or video). - File formats that require sequential access are often more space-efficient. However, data retrieval is slower.
Is the image captured by brightfield or fluorescence microscopy? - Brightfield images will typically be biased around a gray value (ideally 128 for 8-bit images) and have very few pixels with values smaller than 5. Some compression algorithms will squash these values to zero (0), which may not be a problem for brightfield images. - Fluorescence images will typically be biased much closer to zero, with few pixels (those containing the critical information) taking on values greater than 5. Maintaining the values of these pixels is critical and can limit the effectiveness of compression algorithms.
Natural vs synthetic image - Natural images often have more continuous values and can be harder to compress due to the inherent noise in the imaging system. - Synthetic images, or masks, that are the result of analysis algorithms are discrete in the values they take on, often with large contiguous portions of the image taking on the exact same value.
RGB sensor or Monochrome - Monochrome sensors have a single (optional) wavelength filter that is used across the entire sensor. Each location on the image sensor (pixel) is associated with a single value. - RGB sensors utilize a Bayer filter (https://en.wikipedia.org/wiki/Bayer_filter) to capture color information. Each pixel measures an approximation of the red, green, or blue component of an image. With Bayer filters, no one pixel measures all 3 color components at once. However, a software post-processing step combines measurements of neighboring pixels to associate 3 colors (red, green, and blue) with a single pixel. Data can either be stored as "RGB" pixels, or in the raw captured format. The raw captured data takes less space when stored in an uncompressed fashion but may be harder to work with. The RGB format may be more intuitive but takes more space when stored in an uncompressed fashion.
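The storage difference between raw Bayer data and demosaiced RGB can be quantified directly; the sensor dimensions below are taken from the datasets shown earlier.

```python
# Uncompressed size of one frame from a single micro-camera.
height, width = 3120, 4096

raw_bayer_bytes = height * width      # one 8-bit sample per pixel
rgb_bytes = height * width * 3        # three 8-bit channels per pixel

# Demosaicing to RGB triples the uncompressed footprint.
ratio = rgb_bytes / raw_bayer_bytes
```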
No matter where the image data is stored, a file called metadata.nc will be
used to store the metadata information (pixel width, image name, exposure, and
other associated information). This file is used in conjunction with the mp4
or tif images to create the full MCAM dataset.
As of version 0.19.460 (released on March 30, 2026), we offer the following options for saving MCAM datasets:
.nc files - Lossless compression with random access. - Data stored in these files is the closest representation of exactly what is measured by the MCAM, but may be prohibitively difficult to transfer between machines due to its sheer size.
Expected Compression Factor: 1.
Best for: Initial exploration to ensure data that is being captured is correct.
.mp4 files stored as multi-mp4 - High quality lossy compression, sequential access. - Our multi-mp4 format stores the output of each micro-camera as its own MP4 file. This makes it easy to take data from the MP4 and process it as part of your own post-processing pipeline.
While typical MP4 files are not compatible with true random access, the MP4 files generated by the MCAM software provide rapid seeking to frames at offsets that are integer multiples of 60. This allows for rapid, predictable seeking to frames at large offsets without having to decompress the entire video.
Expected Compression Factor: 10x.
Best for: Data captured with “High Resolution” settings.
Note: While the data is captured at the frame rate of your choosing, the MP4 file will have a frame rate of 30 frames per second regardless of the recording frame rate. This is to ensure high scientific image quality regardless of the captured frame rate. Our metadata.nc file stores the original frame rate to help our software report the playback frame rate accordingly. If you are using software like VLC to play the MP4 files, VLC provides ways to speed up and slow down video playback with the keys
[ (slower) and ] (faster). You can use these keys to adjust the playback speed of the MP4 files to match the original frame rate.
.mp4 files stored as tiled-mp4 - High quality lossy compression, sequential access. - Our tiled-mp4 format combines the output of multiple micro-cameras into a single MP4 file. This improves the compression speed since there is less overhead in creating the MP4 file structure.
Expected Compression Factor: 10x.
Best for: Data captured with “High Speed” settings for the Kestrel in Behavior mode.
Note: While the data is captured at the frame rate of your choosing, the MP4 file will have a frame rate of 30 frames per second regardless of the recording frame rate. This is to ensure high scientific image quality regardless of the captured frame rate. Our metadata.nc file stores the original frame rate to help our software report the playback frame rate accordingly. If you are using software like VLC to play the MP4 files, VLC provides ways to speed up and slow down video playback with the keys
[ (slower) and ] (faster). You can use these keys to adjust the playback speed of the MP4 files to match the original frame rate.
.tif files - Lossless or lossy compression with random access. - Each TIFF file contains information captured by one micro-camera at one given location. When using lossless compression, data stored in these files is the closest representation of exactly what is captured by the MCAM, bit for bit. When using lossy compression, the quality of the images can be tuned as one trades off between image size and visual quality.
Expected compression factor: Lossless 1.5-2x.
Expected compression factor: Lossy 10x or more.
Best for: Multi-channel fluorescence data.
.png files - Lossless compression with random access. - Each PNG file contains information captured by one micro-camera at one given location. Data stored in these files is the closest representation of exactly what is captured by the MCAM, bit for bit. Note that PNG decompression can be quite slow compared to more modern compression algorithms used with TIFF images.
Expected compression factor: 2x.
Best for: Compatibility with software that does not support TIFF images.
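A back-of-the-envelope size estimate helps when choosing a format. The sketch below uses the sensor dimensions from the datasets above and the compression factors quoted for each format; real ratios depend heavily on scene content.

```python
# Uncompressed size of one full 9 x 6 snapshot (8-bit raw samples).
n_cameras = 9 * 6
frame_bytes = n_cameras * 3120 * 4096

# Rough on-disk size per snapshot for the quoted compression factors.
estimated_bytes = {
    'nc (factor 1)': frame_bytes / 1,
    'tif lossless (factor 2)': frame_bytes / 2,
    'mp4 lossy (factor 10)': frame_bytes / 10,
}
```

A single uncompressed snapshot is roughly 690 MB, which is why the `.nc` format can be difficult to transfer between machines.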
Compatibility with External Image and Video Viewers¶
We strive to make sure our data is compatible with a variety of external image viewers. However, our image and video quality settings are often tuned to extremely high quality, which limits the playback performance of many image and video players.
If our videos are incompatible with your image viewers, please let us know and we will do our best to explain why and to provide alternative options.
Color Transforms - Grayscale¶
Images may be exported and displayed in color (RGB) or grayscale from within the GUI. Converting the RGB images into grayscale is done by combining the three color channels of each pixel using the following equation:

\[
p_{gray}(x, y) = 0.2125\, p_r(x, y) + 0.7154\, p_g(x, y) + 0.0721\, p_b(x, y)
\]

where \(p_r\) is the red channel value, \(p_g\) is the green channel value, and \(p_b\) is the blue channel value of pixel \((x, y)\). The vector \(\begin{bmatrix} 0.2125 & 0.7154 & 0.0721 \end{bmatrix}\) represents the fraction of the red, green, and blue pixel channels of pixel \((x, y)\) to be summed to create \(p_{gray}\), the pixel’s grayscale value.
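The weighted sum is straightforward to apply with NumPy; the weights are the ones given above, and the helper name is illustrative.

```python
import numpy as np

# Luma weights for the red, green, and blue channels.
WEIGHTS = np.array([0.2125, 0.7154, 0.0721])

def rgb_to_gray(rgb):
    """Combine R, G, B (last axis) into a single grayscale value."""
    return rgb @ WEIGHTS

# Because the weights sum to 1, a pure white pixel stays at full scale.
white = np.array([1.0, 1.0, 1.0])
```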
Color Corrections - RGB Lighting¶
The MCAM imaging head is made up of an array of individual camera modules. Each of these modules has its own sensor that must be calibrated to the other sensors in the array so that the array produces a unified response. This calibration must also account for the spatially varying brightness of the illumination. The goal of this calibration is to unify the response of the pixels of the full array to the illumination so that a pixel value in one location of the array can be compared to another section of the array in confidence.
To account for variations in the image sensors and lenses that make up each of the camera modules, each MCAM is calibrated under a variety of known illumination conditions. The following briefly describes the calibration procedure and the mathematical operations applied to each image in the MCAM viewer to ensure uniform measurements across the entire field of view.
We begin by denoting each camera in the array with a 2-dimensional index, \((i, j)\).
During calibration a sensor response matrix is created by collecting the average pixel response of the sensors to a specific illumination pattern. For each measurement, a different combination of LED channel values, defined as a vector of 3 values, \(l=\left[l_r, l_g, l_b\right]\), is applied and one image is acquired from each of the cameras in the array. The average pixel value, for each of the 3 color channels, red, green and blue, is recorded in a vector \(p\) given by \(p = \left[p_r, p_g, p_b\right]\) for each sensor \((i, j)\). Pixel responses are split by channels as it is expected that the different pixel channels (red, green and blue) will respond to the illuminations patterns differently. The measurement is repeated \(n\) times, each time varying the vector \(l\) and recording \(p\). Images are also collected under no illumination to document the dark current of the sensors. We give each component in the vectors above a superscript to denote the index in the acquisition when it was acquired.
Together, these \(n\) measurements can be grouped into a system of equations. For measurement \(k\):

\[
\begin{aligned}
p_r^k &= r_{rr}\, l_r^k + r_{rg}\, l_g^k + r_{rb}\, l_b^k + b_r \\
p_g^k &= r_{gr}\, l_r^k + r_{gg}\, l_g^k + r_{gb}\, l_b^k + b_g \\
p_b^k &= r_{br}\, l_r^k + r_{bg}\, l_g^k + r_{bb}\, l_b^k + b_b
\end{aligned}
\]
Where \(p_r\) is sensor (i, j) average red channel value,
\(p_g\) is sensor (i, j) average green channel value, and
\(p_b\) is sensor (i, j) average blue channel value for \(n\) measurements.
\(l_r\), \(l_g\), \(l_b\) are the illumination’s red, green and blue channels power respectively.
The response matrix’s elements are the following:
\(r_{rr}\) - the sensor’s red channel’s average response to the illumination’s red channel.
\(r_{rg}\) - the sensor’s red channel’s average response to the illumination’s green channel.
\(r_{rb}\) - the sensor’s red channel’s average response to the illumination’s blue channel.
\(b_{r}\) - the sensor’s red channel’s average response when no illumination is applied.
\(r_{gr}\) - the sensor’s green channel’s average response to the illumination’s red channel.
\(r_{gg}\) - the sensor’s green channel’s average response to the illumination’s green channel.
\(r_{gb}\) - the sensor’s green channel’s average response to the illumination’s blue channel.
\(b_{g}\) - the sensor’s green channel’s average response when no illumination is applied.
\(r_{br}\)- the sensor’s blue channel’s average response to the illumination’s red channel.
\(r_{bg}\) - the sensor’s blue channel’s average response to the illumination’s green channel.
\(r_{bb}\) - the sensor’s blue channel’s average response to the illumination’s blue channel.
\(b_{b}\) - the sensor’s blue channel’s average response when no illumination is applied.
When writing the equations above as a matrix equation, we obtain:

\[
\begin{bmatrix} p_r^1 & \cdots & p_r^n \\ p_g^1 & \cdots & p_g^n \\ p_b^1 & \cdots & p_b^n \end{bmatrix}
=
\begin{bmatrix} r_{rr} & r_{rg} & r_{rb} & b_r \\ r_{gr} & r_{gg} & r_{gb} & b_g \\ r_{br} & r_{bg} & r_{bb} & b_b \end{bmatrix}
\begin{bmatrix} l_r^1 & \cdots & l_r^n \\ l_g^1 & \cdots & l_g^n \\ l_b^1 & \cdots & l_b^n \\ 1 & \cdots & 1 \end{bmatrix}
= R_{i,j} L
\]
The corrections for the sensor responses are created by finding the element-by-element average of the response matrices across sensors and multiplying by the individual response matrix’s inverse:

\[
C_{i,j} = R'\, R_{i,j}^{-1}
\]

where \(C_{i,j}\) is the correction matrix, \(R'\) is the average response matrix across sensors,
and \(R_{i,j}^{-1}\) is the inverse of \(R_{i,j}\).
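The construction can be sketched numerically: average the per-sensor response matrices, then right-multiply by each sensor's inverse response. The toy 3x3 matrices below stand in for the calibrated responses (the bias column is omitted for brevity); by construction, a corrected sensor reproduces the average response.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy color-response matrices for a two-sensor array; real matrices
# come from the calibration measurements described above.
responses = [np.eye(3) + 0.05 * rng.standard_normal((3, 3))
             for _ in range(2)]

# Element-by-element average response across sensors: R'.
r_avg = np.mean(responses, axis=0)

# Per-sensor correction matrix: C = R' @ inv(R).
corrections = [r_avg @ np.linalg.inv(r) for r in responses]

# Applying C to a sensor's response recovers the average: C @ R == R'.
```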
Along with this, each sensor’s response is not uniform across all pixels. To account for this, surfaces are defined to model the spatially varying corrections of the sensors to the illumination source. These are modeled by a polynomial of the form:

\[
S(x, y) = a x^2 + b y^2 + c x y + d x + e y + f
\]

where \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) are coefficients and \(x\) and \(y\)
define the pixel’s position on the sensor.
Applying the sensor corrections and the pixel corrections follows the formula:

\[
P'_{i,j}(x, y) = S_{i,j,coefficient}(x, y)\, P_{i,j}(x, y) + S_{i,j,offset}(x, y)
\]

where \(P'_{i,j}(x, y)\) is the corrected pixel at position \((x, y)\) on sensor \((i, j)\). \(S_{i,j,coefficient}(x, y)\) is the multiplicative correction for sensor \((i, j)\) at pixel position \((x, y)\). \(P_{i,j}(x, y)\) is the original pixel value at position \((x, y)\) on sensor \((i, j)\), and \(S_{i,j,offset}(x, y)\) is the additive correction for sensor \((i, j)\) at pixel position \((x, y)\).
This procedure ensures that the corrected pixel values for camera \((i, j)\), \(P'_{i,j}(x, y)\), can be compared to those of camera \((k, l)\), \(P'_{k,l}(w, z)\), with confidence.
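Applying both per-pixel corrections is a single element-wise multiply-add per image. The toy arrays below are illustrative; real correction surfaces come from the calibration described above.

```python
import numpy as np

# A flat toy image from one sensor, P_{i,j}(x, y).
image = np.full((4, 4), 100.0)

# Toy correction surfaces with the same shape as the image.
s_coefficient = np.full((4, 4), 1.1)   # multiplicative correction
s_offset = np.full((4, 4), -5.0)       # additive correction

# P' = S_coefficient * P + S_offset, applied element-wise.
corrected = s_coefficient * image + s_offset
```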
Color Corrections - IR (850nm) Lighting¶
Similar to the RGB calibration described above, calibration is also done for the 850nm infrared (IR) illumination mode. This again allows consistency so that a pixel value in one location of the array can be compared to another section of the array in confidence.
We begin by denoting each camera in the array with a 2-dimensional index, \((i, j)\).
During calibration a sensor response matrix is created by collecting the average pixel response of the sensors to varying illumination intensities. For each measurement, an intensity of 850nm IR light, defined as value, \(l\), is applied and one image is acquired from each of the cameras in the array. The average pixel value, for each of the 3 color channels, red, green and blue, is recorded in a vector \(p\) given by \(p = \left[p_r, p_g, p_b\right]\) for each sensor \((i, j)\). Pixel responses are split by channels as it is expected that the different pixel channels (red, green and blue) will respond to the IR light differently. The measurement is repeated \(n\) times, each time varying the intensity of the light \(l\) and recording \(p\). Images are also collected under no illumination to document the dark current of the sensors. We give each component in the vectors above a superscript to denote the index in the acquisition when it was acquired.
Together, these \(n\) measurements can be grouped in a matrix equation written as:

\[
\begin{bmatrix} p_r^1 & \cdots & p_r^n \\ p_g^1 & \cdots & p_g^n \\ p_b^1 & \cdots & p_b^n \end{bmatrix}
=
\begin{bmatrix} r_r & b_r \\ r_g & b_g \\ r_b & b_b \end{bmatrix}
\begin{bmatrix} l^1 & \cdots & l^n \\ 1 & \cdots & 1 \end{bmatrix}
\]

where \(r_r\), \(r_g\), and \(r_b\) are each channel’s response to the IR illumination, and \(b_r\), \(b_g\), and \(b_b\) are the responses under no illumination (dark current).
Where \(p_r\) is sensor (i, j) average red channel value,
\(p_g\) is sensor (i, j) average green channel value, and
\(p_b\) is sensor (i, j) average blue channel value for \(n\) measurements.
\(l\) is the IR illumination’s power.
This can be abstracted to:

\[
P_{i,j} = R_{i,j} L
\]
Corrections for the sensor responses are created by finding the element-by-element average of the response matrices across sensors and multiplying by the individual response matrix’s pseudo-inverse:

\[
C_{i,j} = R'\, R_{i,j}^{+}
\]

where \(C_{i,j}\) is the correction matrix, \(R'\) is the average response matrix across sensors,
and \(R_{i,j}^{+}\) is the pseudo-inverse of \(R_{i,j}\), which is not square, so a true inverse does not exist.
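Because the IR response matrix has more rows than columns, the correction uses `numpy.linalg.pinv` rather than a true inverse. The toy 3x2 matrices below (a response column and a dark-offset column) are illustrative; for matrices of full column rank, `pinv(R) @ R` is the identity, so the corrected response reproduces the average.

```python
import numpy as np

# Toy 3x2 IR responses: [response to IR, dark offset] per color channel.
r_a = np.array([[1.00, 2.0],
                [0.80, 1.5],
                [0.60, 1.0]])
r_b = np.array([[1.10, 2.2],
                [0.75, 1.4],
                [0.65, 0.9]])

# Element-by-element average response across sensors: R'.
r_avg = (r_a + r_b) / 2

# Non-square response, so use the Moore-Penrose pseudo-inverse.
corrections = [r_avg @ np.linalg.pinv(r) for r in (r_a, r_b)]

# Each corrected sensor reproduces the average response: C @ R == R'.
```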
Along with this, each sensor’s response is not uniform across all pixels. To account for this, surfaces are defined to model the spatially varying corrections of the sensors to the illumination source. These are modeled by a polynomial of the form:

\[
S(x, y) = a x^2 + b y^2 + c x y + d x + e y + f
\]

where \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) are coefficients and \(x\) and \(y\)
define the pixel’s position on the sensor.
Applying the sensor corrections and the pixel corrections follows the formula:

\[
P'_{i,j}(x, y) = S_{i,j,coefficient}(x, y)\, P_{i,j}(x, y) + S_{i,j,offset}(x, y)
\]
Where \(P'_{i,j}(x, y)\) is the corrected pixel at position (x, y) on sensor (i, j).
\(S_{i,j,coefficient}(x, y)\) is the multiplicative correction for sensor (i, j) at pixel position (x, y).
\(P_{i,j}(x, y)\) is the original pixel value at position (x, y) on sensor (i, j), and
\(S_{i,j,offset}(x, y)\) is the additive correction for sensor (i, j) at pixel position (x, y).
Because of this correction pixel value \(P'_{i,j}(x, y)\) can be compared to \(P'_{k,l}(w, z)\) with confidence.
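A minimal numpy sketch of the per-pixel step, assuming the quadratic surface form described above; the surface coefficients, sensor size, and pixel values are invented for illustration:

```python
import numpy as np

h, w = 4, 6                                   # tiny "sensor" for illustration
y, x = np.mgrid[0:h, 0:w].astype(float)       # pixel coordinate grids

def quadratic_surface(x, y, a, b, c, d, e, f):
    # S(x, y) = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
    return a * x**2 + b * y**2 + c * x * y + d * x + e * y + f

# Invented coefficients for the multiplicative and additive surfaces.
S_coeff = quadratic_surface(x, y, 1e-4, 1e-4, 0.0, 0.0, 0.0, 1.0)
S_offset = quadratic_surface(x, y, 0.0, 0.0, 0.0, 0.0, 0.0, -2.0)

P = np.full((h, w), 100.0)                    # raw pixel values
P_corrected = S_coeff * P + S_offset          # P'(x, y)
```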
Color Corrections - White Lighting¶
Systems using the white light rail illumination require a separate calibration. These systems must not only correct the sensors to balance their relative responses, but also balance each sensor’s channel responses to the white light. For RGB illumination we are able to tune the balance of the lighting channels to create a lighting ratio that appears white to the sensors, but with white light illumination this tuning is unavailable. Instead we must correct the raw pixel values using the calibrated corrections to remove any unexpected coloration of the image.
During calibration a sensor response matrix is created by collecting the average pixel response of the sensors to varying illumination intensities. For each measurement, an intensity of white light, denoted \(l\), is applied and one image is acquired from each of the cameras in the array. The average pixel value for each of the 3 color channels (red, green, and blue) is recorded in a vector \(p = \left[p_r, p_g, p_b\right]\) for each sensor \((i, j)\). Pixel responses are split by channel because the different pixel channels are expected to respond to the white light differently. The measurement is repeated \(n\) times, each time varying the intensity of the light \(l\) and recording \(p\). Images are also collected under no illumination to document the dark current of the sensors. Each component in the vectors above is given a superscript denoting the index of the acquisition in which it was recorded.
Together, these \(n\) measurements can be grouped in a matrix equation written as:

\[\begin{bmatrix} p_r^1 & p_r^2 & \cdots & p_r^n \\ p_g^1 & p_g^2 & \cdots & p_g^n \\ p_b^1 & p_b^2 & \cdots & p_b^n \end{bmatrix} = R_{i,j} \begin{bmatrix} l^1 & l^2 & \cdots & l^n \\ 1 & 1 & \cdots & 1 \end{bmatrix}\]

where \(p_r\), \(p_g\), and \(p_b\) are sensor \((i, j)\)'s average red, green, and blue channel values over the \(n\) measurements, \(l\) is the white light rail illumination's power, and the constant row of ones accounts for the dark-current offset.
Corrections for the white light response should bring the responses of all channels to equality. These corrections are created by multiplying a scaled identity matrix by the pseudo-inverse of the individual sensor’s response matrix:

\[C_{i,j} = s I R_{i,j}^{-1}\]

where \(C_{i,j}\) is the correction matrix, \(R_{i,j}^{-1}\) is the pseudo-inverse of \(R_{i,j}\), and \(I\) is the identity matrix. The scalar \(s\) is found by taking the root mean square of the three channels’ responses; it keeps the corrected pixel values at the same magnitude as the raw pixel values.
Along with this, each sensor’s response is not uniform across all pixels. To account for this, surfaces are defined that model the spatially varying correction of each sensor to the illumination source. These surfaces are modeled by the polynomial:

\[S(x, y) = a x^2 + b y^2 + c x y + d x + e y + f\]

where \(a\), \(b\), \(c\), \(d\), \(e\), and \(f\) are fitted coefficients and \(x\) and \(y\) define the pixel’s position on the sensor.
Applying the sensor corrections and the pixel corrections follows the formula:

\[P'_{i,j}(x, y) = S_{i,j,coefficient}(x, y) \, P_{i,j}(x, y) + S_{i,j,offset}(x, y)\]

where \(P'_{i,j}(x, y)\) is the corrected pixel value at position \((x, y)\) on sensor \((i, j)\),
\(S_{i,j,coefficient}(x, y)\) is the multiplicative correction for sensor \((i, j)\) at pixel position \((x, y)\),
\(P_{i,j}(x, y)\) is the original pixel value at position \((x, y)\) on sensor \((i, j)\), and
\(S_{i,j,offset}(x, y)\) is the additive correction for sensor \((i, j)\) at pixel position \((x, y)\).
After this correction, the pixel value \(P'_{i,j}(x, y)\) can be compared with confidence to \(P'_{k,l}(w, z)\) from any other sensor.
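A toy numpy sketch of the white-light scaling, assuming (for illustration only) a diagonal response matrix built from three invented channel responses; the real response matrix also carries dark-current terms:

```python
import numpy as np

# Invented fitted responses of the red, green, and blue channels to
# the white light.
channel_responses = np.array([2.0, 1.5, 0.5])

# s is the root mean square of the three channel responses; it keeps
# the corrected values at the same magnitude as the raw values.
s = np.sqrt(np.mean(channel_responses**2))

# Toy diagonal response matrix: C = s * I * pinv(R), so that the
# corrected responses C @ R of all three channels become equal (s).
R = np.diag(channel_responses)
C = s * np.eye(3) @ np.linalg.pinv(R)
```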
Illumination Channel ID¶
Accessible through the illumination_channel_id variable of the
MCAM dataset.
>>> from owl import mcam_data
>>> dataset = mcam_data.load('/path/to/mcam_dataset.nc')
>>> dataset.illumination_channel_id
<xarray.DataArray 'illumination_channel_id' ()>
array(1)
Coordinates:
__owl_version__ <U25 '0.18.363'
| Illumination Mode | Channel ID |
|---|---|
| external | 0 |
| transmission_visible_pwm_fullarray | 1 |
| transmission_visible_pwm_perimeter | 2 |
| transmission_visible_analog_fullarray | 3 |
| transmission_visible_analog_perimeter | 4 |
| transmission_ir850_pwm_fullarray | 5 |
| transmission_ir850_pwm_perimeter | 6 |
| transmission_ir850_analog_fullarray | 7 |
| transmission_ir850_analog_perimeter | 8 |
| reflection_visible_pwm_fullarray | 20 |
| reflection_visible_pwm_perimeter | 21 |
| reflection_visible_analog_fullarray | 22 |
| reflection_visible_analog_perimeter | 23 |
| reflection_ir850_pwm_fullarray | 24 |
| reflection_ir850_pwm_perimeter | 25 |
| reflection_ir850_analog_fullarray | 26 |
| reflection_ir850_analog_perimeter | 27 |
| fluorescence | 50 |
| fluorescence_front | 51 |
| fluorescence_back | 52 |
| fluorescence_unknown | 53 |
| fluorescence_unknown_front | 54 |
| fluorescence_unknown_back | 55 |
| fluorescence_380nm | 56 |
| fluorescence_380nm_front | 57 |
| fluorescence_380nm_back | 58 |
| fluorescence_440nm | 59 |
| fluorescence_440nm_front | 60 |
| fluorescence_440nm_back | 61 |
| fluorescence_590nm | 62 |
| fluorescence_590nm_front | 63 |
| fluorescence_590nm_back | 64 |
| fluorescence_633nm | 65 |
| fluorescence_633nm_front | 66 |
| fluorescence_633nm_back | 67 |
| fluorescence_850nm | 68 |
| fluorescence_850nm_front | 69 |
| fluorescence_850nm_back | 70 |
| fluorescence_white_5650K | 71 |
| fluorescence_white_5650K_front | 72 |
| fluorescence_white_5650K_back | 73 |
| fluorescence_530nm | 74 |
| fluorescence_530nm_front | 75 |
| fluorescence_530nm_back | 76 |
| fluorescence_500nm | 77 |
| fluorescence_500nm_front | 78 |
| fluorescence_500nm_back | 79 |
| fluorescence_410nm | 80 |
| fluorescence_410nm_front | 81 |
| fluorescence_410nm_back | 82 |
| fluorescence_470nm | 83 |
| fluorescence_470nm_front | 84 |
| fluorescence_470nm_back | 85 |
| fluorescence_540nm | 86 |
| fluorescence_540nm_front | 87 |
| fluorescence_540nm_back | 88 |
| fluorescence_650nm | 89 |
| fluorescence_650nm_front | 90 |
| fluorescence_650nm_back | 91 |
| fluorescence_brightfield_530nm_focused_diffuser | 96 |
| fluorescence_brightfield_530nm_focused | 97 |
| fluorescence_brightfield_530nm_high_contrast | 98 |
| fluorescence_brightfield_530nm_diffuser | 99 |
| fluorescence_brightfield | 100 |
| fluorescence_brightfield_440nm | 101 |
| fluorescence_brightfield_530nm | 102 |
| transmission_red_pwm_fullarray | 103 |
| transmission_red_pwm_perimeter | 104 |
| transmission_red_analog_fullarray | 105 |
| transmission_red_analog_perimeter | 106 |
| transmission_green_pwm_fullarray | 107 |
| transmission_green_pwm_perimeter | 108 |
| transmission_green_analog_fullarray | 109 |
| transmission_green_analog_perimeter | 110 |
| transmission_blue_pwm_fullarray | 111 |
| transmission_blue_pwm_perimeter | 112 |
| transmission_blue_analog_fullarray | 113 |
| transmission_blue_analog_perimeter | 114 |
| reflection_red_pwm_fullarray | 115 |
| reflection_red_pwm_perimeter | 116 |
| reflection_red_analog_fullarray | 117 |
| reflection_red_analog_perimeter | 118 |
| reflection_green_pwm_fullarray | 119 |
| reflection_green_pwm_perimeter | 120 |
| reflection_green_analog_fullarray | 121 |
| reflection_green_analog_perimeter | 122 |
| reflection_blue_pwm_fullarray | 123 |
| reflection_blue_pwm_perimeter | 124 |
| reflection_blue_analog_fullarray | 125 |
| reflection_blue_analog_perimeter | 126 |
| fluorescence_brightfield_540nm_focused_diffuser | 127 |
| fluorescence_brightfield_540nm_focused | 128 |
| fluorescence_brightfield_540nm_high_contrast | 129 |
| fluorescence_brightfield_540nm_diffuser | 130 |
| fluorescence_brightfield_540nm | 131 |
| transmission_visible_analog_535nm_diffuser | 132 |
| transmission_visible_analog_535nm_focused | 133 |
| transmission_visible_pwm_535nm_diffuser | 134 |
| transmission_visible_pwm_535nm_focused | 135 |
| fluorescence_brightfield_590nm_focused_diffuser | 136 |
| fluorescence_brightfield_590nm_focused | 137 |
| fluorescence_brightfield_590nm_high_contrast | 138 |
| fluorescence_brightfield_590nm_diffuser | 139 |
| fluorescence_brightfield_590nm | 140 |
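Following the defensive-programming advice earlier in this document, a stored channel ID can be mapped back to its illumination mode name by inverting a copy of the table and using `dict.get` with a default, which guards against IDs introduced by newer software versions. Only a few rows are reproduced here for illustration:

```python
# A few rows copied from the table above; the full mapping has many more.
ILLUMINATION_CHANNEL_ID = {
    "external": 0,
    "transmission_visible_pwm_fullarray": 1,
    "transmission_visible_pwm_perimeter": 2,
    "fluorescence": 50,
}

# Invert the table so a stored ID can be looked up by value.
id_to_mode = {v: k for k, v in ILLUMINATION_CHANNEL_ID.items()}

# dict.get with a default avoids a KeyError on unrecognized IDs,
# e.g. when reading data acquired with newer software.
mode = id_to_mode.get(1, "unknown")
```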