Introduction¶
Resonon hyperspectral imagers are built from custom optical components that direct light onto the sensor of a standard, commercially available machine vision camera. For most applications, we recommend acquiring data with Resonon software. SpectrononPro is a general-purpose desktop application that offers many tools for collecting and analyzing hyperspectral data, including a customizable plugin system. Resonon also offers specialized software for acquiring data with airborne systems and for performing real-time machine vision. For those with specialized needs, or who wish to integrate Resonon devices into their own software, the camera itself can be controlled using an API provided by the camera manufacturer. This documentation provides guidelines and examples for programmers to correctly configure these cameras and interpret the data collected from them as hyperspectral data.
Working Principle¶
Resonon hyperspectral imagers are line-scan imagers. Light entering the imager passes through a slit, which creates a one-dimensional image of the scene. A diffraction grating then disperses that light onto the camera sensor, so that different wavelengths of light fall on different parts of the sensor along one dimension; along the other dimension, spatial structure is retained. A two-dimensional image can then be assembled by stacking repeated lines of data.
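The stacking described above can be sketched in a few lines of NumPy. This is an illustrative simulation, not Resonon code; the frame dimensions and random data are arbitrary:

```python
import numpy as np

# Each line-scan frame is a 2D array of shape (bands, samples):
# one full spectrum for every spatial pixel along the slit.
bands, samples = 300, 900

# Simulate grabbing 10 successive frames as the scene moves past the slit.
frames = [np.random.randint(0, 4095, size=(bands, samples), dtype=np.uint16)
          for _ in range(10)]

# Stacking the frames along a new "lines" axis yields a hyperspectral cube.
cube = np.stack(frames, axis=0)
print(cube.shape)  # (10, 300, 900)
```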
Camera Setup and Windowing¶
Each imager is spectrally calibrated by Resonon before it is shipped, and has a unique sensor window and relationship between the pixel position on the camera sensor and the wavelength represented by that pixel. Only a portion of the camera sensor array is actually used by the hyperspectral imagers, and the camera must be windowed to this region prior to beginning data collection. The required camera windowing is defined by the following parameters:
X Offset: The number of pixels between the first sensor pixel and the first pixel used by the hyperspectral imager in the sensor’s X dimension. Calibrated by Resonon for each unit (may be zero).
Y Offset: The number of pixels between the first sensor pixel and the first pixel used by the hyperspectral imager in the sensor’s Y dimension. Calibrated by Resonon for each unit.
ROI Width: The number of samples (spatial pixels) collected in each frame of data. This quantity is constant within each model of hyperspectral imager.
ROI Height: The number of sensor rows (unbinned spectral pixels) used in each frame of data. This quantity is constant within each model of hyperspectral imager.
Y Binning: Binning combines the information from adjacent pixels in a digital camera sensor and reports the combined (summed) brightness of those pixels as if it were a single pixel. Resonon imagers are designed to use a specific binning factor in the spectral dimension that causes the combined size of the binned pixels to correspond to the design of the imager’s optics. The number of independent spectral channels is given by the ROI height divided by the Y binning factor. This quantity is constant within each model of hyperspectral imager.
| Imager Model | Camera Manufacturer | ROI Width | ROI Height | Y Binning | Special Considerations |
|---|---|---|---|---|---|
| Pika L / L-GigE | Basler | 900 | 600 | 2 | |
| Pika LF | Basler | 720 | 480 | 2 | |
| Pika XC2 | Basler | 1600 | 924 | 2 | Band order reversed (see Spectral Calibration note) |
| Pika IR | Allied | 320 | 168 | 1 | |
| Pika IR+ | Allied | 640 | 336 | 1 | |
| Pika IR rev2 | Allied | 320 | 172 | 1 | |
| Pika IR+ rev2 | Allied | 640 | 344 | 1 | |
| Pika IR-L | Allied | 320 | 240 | 1 | |
| Pika IR-L+ | Allied | 640 | 478 | 1 | |
| Pika UV | Basler | 1500 | 1080 | 4 | |
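As an illustration of how the model parameters above are used, the following hypothetical lookup table (the dictionary and function names are our own, not part of any Resonon API) computes the number of independent spectral channels as ROI height divided by the Y binning factor:

```python
# Hypothetical transcription of a few rows of the parameter table above.
MODEL_PARAMS = {
    # model:       (roi_width, roi_height, y_binning)
    "Pika L":      (900, 600, 2),
    "Pika XC2":    (1600, 924, 2),
    "Pika IR+":    (640, 336, 1),
    "Pika UV":     (1500, 1080, 4),
}

def num_spectral_channels(model: str) -> int:
    """Independent spectral channels = ROI height / Y binning factor."""
    _, roi_height, y_binning = MODEL_PARAMS[model]
    return roi_height // y_binning

print(num_spectral_channels("Pika L"))   # 300
print(num_spectral_channels("Pika UV"))  # 270
```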
Spectral Calibration¶
The relationship between pixel position on the sensor array and wavelength is given by a quadratic polynomial of the form:

\[\lambda = a x^2 + b x + c\]

where \(\lambda\) is wavelength in nanometers, \(a\), \(b\), and \(c\) are imager-specific calibration coefficients, and \(x\) is the pixel number of the sensor in the spectral dimension. The pixel number begins at zero, is counted from the beginning of the sensor (not the beginning of the ROI), and is in unbinned coordinates.
In general, the pixel number on the raw sensor (\(x\) in the equation above) of spectral band \(i\) (zero-based) in the configured image array can be calculated as:

\[x = O_y + B_y \, i + 0.5 B_y - 0.5\]

Where \(O_y\) is the device-specific Y offset from the configuration report, and \(B_y\) is the model-specific binning factor from the table above. The term \(0.5 B_y - 0.5\) is a midpoint correction, so that the pixel position input into the polynomial calibration equation corresponds to the center of the binned region instead of its edge (using zero-based, C-style indexing).
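As a sketch, this calculation can be implemented as follows. The function names are our own, and the coefficient and offset values are taken from the sample Pika L configuration report shown later in this document (with \(B_y = 2\) for the Pika L, per the table above):

```python
def raw_sensor_pixel(band_index, y_offset, y_binning):
    """Raw (unbinned) sensor row for a zero-based band index in the ROI,
    including the midpoint correction 0.5 * B_y - 0.5."""
    return y_offset + y_binning * band_index + 0.5 * y_binning - 0.5

def wavelength_nm(band_index, a, b, c, y_offset, y_binning):
    """Evaluate the quadratic spectral calibration at the band's center."""
    x = raw_sensor_pixel(band_index, y_offset, y_binning)
    return a * x**2 + b * x + c

# Values from the sample Pika L configuration report:
A, B, C = 0.00010350000229664147, 0.9359210133552551, 83.2490005493164

print(round(wavelength_nm(0, A, B, C, y_offset=312, y_binning=2), 1))  # 385.8
```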
Code examples of this calculation can be found at Pika L setup and frame-grab loop and Pika IR setup and frame-grab loop.
Note
In the special case of the Pika XC2, the order of bands is reversed, such that lower wavelength bands are found at higher numbered pixels and higher wavelength bands are found at lower numbered pixels. To compensate, the calculated position must be subtracted from the total number of pixels in the sensor’s vertical dimension (1216):

\[x_{\mathrm{XC2}} = 1216 - x\]
Obtaining Device-Specific Parameters¶
Device-specific parameters can be obtained in one of three ways:
Resonon can always provide a written copy of imager-specific windowing and spectral calibration coefficients if you contact support@resonon.com with the serial number of your imager(s).
The coefficients can be read from the camera’s non-volatile memory. See On Camera Configuration Storage for Basler-based imagers and On Camera Configuration Storage for Allied-based imagers for details.
The coefficients can be looked up using Resonon’s imager control software, SpectrononPro, available at https://downloads.resonon.com. Plug the camera into your computer and start SpectrononPro. Select Preferences from the File menu and navigate to the Imager tab. The button labeled Generate configuration report can be used to produce a text file with your imager-specific data.
Here is a sample configuration report for a Pika L:
```
Basler imager configuration report
Imager Type: Pika L
Imager SN: 100121-50
Camera SN: 22121845
Coeff A: 0.00010350000229664147
Coeff B: 0.9359210133552551
Coeff C: 83.2490005493164
x offset (samples): 540
y offset (bands): 312
Windowing Spatial Begin: 540
Windowing Spectral Begin: 312
----------------------------------
12804
953749028
1064278149
1118207869
35389752
----------------------------------
```
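A report in this format can be parsed with a few lines of Python. This is an illustrative sketch, not a Resonon-provided tool: it reads the `key: value` lines and ignores the separator lines and the trailing block of integers:

```python
def parse_config_report(text: str) -> dict:
    """Collect 'key: value' pairs from a configuration report; lines
    without a colon (separators, bare integers) are skipped."""
    params = {}
    for line in text.splitlines():
        if ":" not in line:
            continue
        key, _, value = line.partition(":")
        params[key.strip()] = value.strip()
    return params

# Excerpt of the sample report above:
report = """Imager Type: Pika L
Coeff A: 0.00010350000229664147
Coeff B: 0.9359210133552551
Coeff C: 83.2490005493164
x offset (samples): 540
y offset (bands): 312"""

params = parse_config_report(report)
print(params["Imager Type"])            # Pika L
print(int(params["y offset (bands)"]))  # 312
```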
Flat Field Calibration¶
Camera detectors have pixel-to-pixel variations in their sensitivity to light, and some detector noise is present even when no light is recorded. To account for this, most applications should incorporate a flat-field, or “white reference,” calibration.
This calibration requires two measurements: a dark frame recorded with no light incident on the camera sensor, and a reference frame recorded with a known reference level of light uniformly incident on the sensor. A common way to apply this calibration is to record a dark frame with a lens cap in place, and to record a reference (or “white”) frame with a material of known reflectance spanning the entirety of the imager’s field of view.
After the two reference points have been collected, each subsequent frame of camera data is corrected as:

\[C = r_{ref} \, \frac{R - D}{W - D}\]

Where \(C\) is the corrected frame, \(r_{ref}\) (scalar) is the reflectance of the reference material (often assumed to be 1), \(R\) is the raw camera frame to be corrected, \(W\) is the white reference frame, and \(D\) is the dark reference frame.
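The correction can be sketched with NumPy as follows. This is a minimal illustration assuming all three frames share the same shape; in practice the dark and white frames are often averaged over many captures to reduce noise:

```python
import numpy as np

def flat_field_correct(raw, white, dark, r_ref=1.0):
    """C = r_ref * (R - D) / (W - D), computed in floating point."""
    raw = np.asarray(raw, dtype=np.float64)
    white = np.asarray(white, dtype=np.float64)
    dark = np.asarray(dark, dtype=np.float64)
    return r_ref * (raw - dark) / (white - dark)

# Synthetic frames: dark level 100 counts, white level 3100 counts.
dark = np.full((4, 4), 100.0)
white = np.full((4, 4), 3100.0)
raw = np.full((4, 4), 1600.0)

# (1600 - 100) / (3100 - 100) = 0.5 reflectance everywhere.
print(flat_field_correct(raw, white, dark)[0, 0])  # 0.5
```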
For a code example of this process, see Flat Field Correction Example.
APIs¶
Pika IR, IR+, IR-L, and IR-L+¶
Resonon’s hyperspectral imagers covering the 900-1700nm spectral range are based on Allied Vision Goldeye cameras, and can be controlled using the Allied Vimba SDK. The Vimba SDK provides APIs in Python, C, C++, and .NET. The Python API can be found at https://github.com/alliedvision/VimbaPython.
Pika L, L-GigE, LF, XC2, and UV¶
Imagers covering the UV, visible, and near-IR spectral ranges are based on Basler cameras, and can be controlled using the Basler Pylon SDK. The Pylon SDK provides APIs in C++, .NET, C, and Java. An official Python wrapper is provided at https://github.com/basler/pypylon.
Code Examples¶
All code examples are illustrated in Python for readability, but the basic sequence of API calls and camera Feature access is similar across all supported languages.