Sunday, January 23, 2022

Solving Wordle Heuristically

I recently wrote a paper about solving the popular word game Wordle.

A preprint may be found here.

Thursday, December 2, 2021

Low-Cost Visible Light Spectral Imaging

My research paper "Low-Cost Visible Light Spectral Imaging" was recently published in the Journal of the International Colour Association.

It describes how to produce informationally-complete, observer-independent, high-resolution measurements of light & color for less than 10% of the cost of a hyperspectral camera.

This paper improves upon the methods described in my earlier article "Low-Cost Hyperspectral Imaging".



Sunday, October 18, 2020

Low-Cost Hyperspectral Imaging

Update 2021-12-02: This article has been obsoleted by subsequent improvements to the methodology; see "Low-Cost Visible Light Spectral Imaging".

Abstract

Hyperspectral imaging is an emerging technology for capturing light and color more accurately and with finer granularity than conventional photography. It is most commonly used for machine vision tasks in medicine, agriculture, forensics, and manufacturing. Hyperspectral images may be thought of as pre-RAW, observer- and device-independent images, providing greater optionality in subsequent image processing. Adoption remains hindered by the high cost; turnkey commercial solutions typically cost $10-25k. Reducing cost and improving access would open the technology to new fields and users.

Presented here is an efficient, practical method for low-cost, hyperspectral, visible-light imaging at ~5% of the cost of a commercial solution. It is a variant of the spectral scanning paradigm, using a bandpass filter set and the camera's built-in Bayer filter to form a semi-orthonormal vector space for SPD estimation with approximately 7 principal components. The hardware setup consists of a Canon 650D DSLR camera with a 40 mm prime lens, a subset of 5 filters from the MidOpt FS100 bandpass filter kit, a tripod, a quick-release plate, and a remote shutter. A photo is captured with each filter under the same camera settings, and spectral power distributions (SPDs) are estimated at each pixel as linear combinations of the filter-attenuated sensitivity curves, with weights given by the associated photo RGB values. The SPDs are then non-dimensionalized against a simulated equal-intensity response, dimensionalized against a presumed illuminant, and corrected using a calibration profile. Reconstructed RGB images may be calculated from SPDs using standard observer functions, e.g. CIE 1931 2° XYZ to simulate human color perception. The method is validated against a chart of 108 paint color samples of independently known properties, showing reasonable agreement, and a calibration profile is established. Several real-world examples are shown.

Concept diagram

Related Work

A number of authors have explored spectral imaging with ordinary cameras:
  • Cosentino used 12 bandpass filters to estimate SPDs across a 2D image and compared results to a commercial spectral imaging system, showing good agreement. However, because the Bayer filter is discarded in conversion to grayscale, the filter quantity and cost are higher than necessary, at 12 filters and approximately $2k respectively. [3]
  • Baek et al. developed a single-shot method using an ordinary prism to disperse light onto the camera's image sensor; computation then separates the combined spatial and wavelength information. Although accurate, this method requires construction of a specialized prism adapter and has limitations stemming from its reliance on edges and from the spatial sparsity of spectral cues. [6]
  • Habel et al. similarly developed a single-shot method using a diffraction grating; however, it requires construction of a large and complex camera objective, and its reconstructed images are limited to 120 x 120 pixels. [5]
  • Seoung et al. used a three-camera setup for SPD estimation, exploiting small variations in camera sensitivities. An image registration process aligns images between the cameras under an assumption of scene planarity. PCA is performed on a spectral database of Munsell colors to create a vector space describing SPDs as low-dimensional linear combinations, and a system of linear equations is solved at each pixel. [4]

By comparison, the proposed method has the following advantages:
  • The camera's built-in Bayer filter is leveraged to reduce the required photo/filter pairs
  • No requirement for planarity or high-contrast edges within the scene
  • Image registration is not required
  • Only a single camera is required
  • All hardware is commercial-off-the-shelf; no construction required
  • Total filter quantity and cost are minimized at 5 and $500 respectively
  • SPD calculation is mathematically simple and computationally fast, occurring at the datacube (not pixel) level at megapixel scale
  • Matrix inversion at each pixel is avoided for faster datacube-level linear algebra
  • RAW images are not required

Filtered Photo Capture and SPD Estimation

The essential goal of this method is to use a commodity camera in place of a hyperspectral camera. The key differences between the two types of camera are:
  • A hyperspectral camera has near-uniform sensitivity over its domain; an ordinary camera has non-linear trichromatic sensitivities that mimic the human eye.
  • A hyperspectral camera has an adjustable filter system that allows it to reject all except the narrow wavelength band being measured; an ordinary camera's trichromatic filters (found in the Bayer filter) are fixed and broad.

The visible wavelength band spans approximately 400 to 700 nm. A visible-light hyperspectral camera may report SPD with a wavelength resolution of 5 or 10 nm. Thus, at each pixel, an SPD measurement comprises some 30-60 values. However, it has been shown empirically that 99% of observed variance in surface reflectance can be explained by 8 principal components [4]; thus, SPDs may reasonably be estimated from far fewer than 30-60 measurements. Note that SPDs and reflectance differ by a factor of the illuminant, which is of lower dimensionality; to a good approximation, SPDs may be thought of as having the same number of principal components as surface reflectance.

Ordinary cameras employ a Bayer filter to attenuate the spectral sensitivity of a camera to mimic that of the human eye. Bayer filters are an ultra-fine mosaic of red, green, and blue (R, G, B) sub-filters that cover the image sensor. An ordinary camera may be thought of as having uniform spectral sensitivity that is attenuated by the Bayer filter.


Bayer filter illustration, showing red, green, and blue sub-filters on an image sensor. Source: Wikipedia.

In essence, ordinary photos represent the dot product of the camera's spectral sensitivity with the SPDs of the scene, at each pixel, for each color channel. These dot product values become RGB values after minor processing, such as demosaicing, white balance, levels, etc. Note that a dot product is fundamentally a lossy operation; in a photo, the SPD itself is not measured, only its dot products.
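Concretely, writing S_c(λ) for the channel-c sensitivity curve and SPD(λ) for the scene spectrum at a pixel (notation chosen here for illustration), an unfiltered channel value behaves as:

\[ COL_c \propto \sum_{\lambda} S_c(\lambda)\, \mathrm{SPD}(\lambda) \]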

A filter placed in front of a camera's lens further attenuates its spectral sensitivity. This attenuation is described by the filter's transmission spectrum, which reports the fraction of light admitted as a function of its wavelength. This attenuation occurs as a simple element-wise multiplication, i.e. wavelength-by-wavelength. The filter may be thought of as modifying the spectral sensitivity of the camera by attenuation.

A system of equations is developed to describe filtered photo capture and SPD estimation, beginning with the following definitions:
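The following notation is used below (symbol names chosen here for concreteness):
  • λ: wavelength, sampled at w points spanning roughly 400-700 nm
  • S_c(λ): camera spectral sensitivity for color channel c ∈ {R, G, B}
  • T_f(λ): transmission spectrum of filter f = 1, ..., 5
  • COL_f,c: photo value in channel c captured through filter f, scaled 0-1
  • SPD(λ): the spectral power distribution to be estimated at a pixel
  • I(λ), K(λ): the presumed illuminant and the calibration curve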


The process of filtered photo capture is modeled as follows:
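In this notation, a rendering consistent with the description below is:

\[ COL_{f,c} \propto \sum_{\lambda} S_c(\lambda)\, T_f(\lambda)\, \mathrm{SPD}(\lambda) \]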


The process is described in terms of proportionality rather than equality, to avoid dimensionalizing the SPD and modeling finer details such as ISO and shutter speed.

A set of "ST" curves are calculated, equal in quantity to the number of photo color channels, 3, times the number of filters, 5, for 15 total. This quantity is equal to the total number of color measurements per pixel:


It is convenient to define a single "st" index as a composite of the filter/photo index and color channel index. The vector U is then calculated as the sum of all ST curves over the st index:
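With st running over the 15 filter/channel pairs:

\[ U(\lambda) = \sum_{st} ST_{st}(\lambda) \]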


Finally, SPDs are reconstructed as linear combinations of the ST curves, weighted by the associated photo values scaled 0-1, normalized by U (itself scaled 0-1), and then scaled by the illuminant and calibration curves:
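In the notation above, with I(λ) the presumed illuminant and K(λ) the calibration curve:

\[ \mathrm{SPD}(\lambda) \propto \frac{\sum_{st} COL_{st}\, ST_{st}(\lambda)}{U(\lambda)}\; I(\lambda)\, K(\lambda) \]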


Image Reconstruction from SPDs

Images can be reconstructed from SPDs using standard methods. [7] The preceding discussion has neglected units and scaling, which the proportional form of the equations permits. However, scaling becomes important in image reconstruction due to the fixed limits of the color space.

Any set of observer functions (e.g. for a population, camera, individual, etc.) can be used for image reconstruction. The most common of these is the CIE 1931 2° XYZ observer. [8] Taking the dot product of the SPD at each pixel with each of the observer sensitivity curves yields an XYZ tristimulus color. These XYZ colors are then normalized 0-1 and converted to RGB for display.

Standard tristimulus observer functions for image reconstruction from SPDs
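A minimal sketch of this conversion in MATLAB, with hypothetical variable names (spd is the m-by-n-by-w SPD datacube; cmf is a w-by-3 matrix of observer functions sampled at the same wavelengths):

  % Dot product of each pixel's SPD with each observer curve
  [m, n, w] = size(spd);
  XYZ = reshape(reshape(spd, m*n, w) * cmf, m, n, 3);
  XYZ = XYZ ./ max(XYZ(:));   % normalize 0-1
  RGB = xyz2rgb(XYZ);         % convert to sRGB for display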

Because SPDs are relative, not absolute, the sensor has no ability to set the black point and white point in the image. Instead, the operator may manually specify where these limits occur according to their knowledge of the scene, or aesthetic goals for the image. They may also elect to scale their image to the limits of the color space for maximum contrast, although this is generally unlikely to produce perceptually accurate colors.

A simple default scaling method is suggested here (sketched in code below):
  • Convert the image to grayscale.
  • Calculate the probability density function (PDF) of the image pixel values.
  • Calculate the cumulative distribution function (CDF) as the cumulative sum of the PDF.
  • Look up the values corresponding to 1% and 99% in the CDF.
  • Using imadjust or equivalent, scale the value range of the image so that 1% and 99% on the CDF correspond to 5% and 95% on the value range.
These values are good for default general-purpose scaling, but may be adjusted depending on the specific scene being imaged.
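A minimal sketch of this default scaling in MATLAB, assuming an RGB image img with values scaled 0-1 (name hypothetical); stretchlim performs the PDF/CDF lookup internally:

  gray = rgb2gray(img);                  % grayscale copy for statistics
  lims = stretchlim(gray, [0.01 0.99]);  % pixel values at 1% and 99% on the CDF
  out  = imadjust(img, repmat(lims, 1, 3), repmat([0.05; 0.95], 1, 3));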

Computational Approach

All computation described in this article is implemented in MATLAB. Although SPDs may be calculated pixel-by-pixel, this approach scales poorly to multiple-megapixel images. Instead, SPDs are calculated at the datacube level using reshape and repmat to eliminate per-pixel for loops and leverage MATLAB's matrix math efficiency. This works as follows (a code sketch follows the list):
  1. Initialize an SPD datacube with zeros for all entries, measuring m-by-n-by-w, for photos of resolution n-by-m and w wavelengths
  2. Pick an ST curve
  3. Retrieve the color channel of the photo corresponding to this ST curve
  4. Reshape the ST curve along the 3rd (wavelength) dimension with reshape
  5. Repeat the ST curve at each pixel with repmat
  6. Repeat the color channel at each wavelength with repmat
  7. Calculate the element-wise product of the color datacube and the ST datacube
  8. Add this result to the SPD datacube
  9. Repeat back to Step 2 for all remaining ST curves
  10. Divide the SPD datacube by the sum of all ST curves, similarly reshaped and repeated
  11. Multiply the SPD datacube by the illuminant and calibration curves, similarly reshaped and repeated

Depending on the camera and filter set, some wavelengths may have zero or near-zero sensitivity. This commonly occurs at the fringes of the visible light spectrum, where the camera is least sensitive. At these wavelengths, the SPD is undefined, or nearly so. To address this, wavelength channels corresponding to a sensitivity of < 1% of the peak value are discarded, and set equal to the closest non-discarded wavelength channel. Depending on the specific camera-filter system, it may be preferable in other cases to interpolate or extrapolate instead.

This "insensitivity" check is performed on U, the sum of the ST curves over the st index, shown below. It may be thought of as the overall sensitivity of the camera/filter system, as a function of wavelength. For a truly non-overlapping filter set, e.g. for a narrow bandpass filter set, U would have a sawtooth shape, and interpolation would be preferable in the insensitive regions between peaks. For the regular bandpass filter set under consideration, overlap occurs, and insensitivity occurs only at the extremes of the domain, where the SPDs from the closest "sensitive" wavelength are simply copied, or held.

Overall sensitivity, used to normalize and to determine where to interpolate/extrapolate
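Continuing the sketch above, a minimal version of the hold-at-the-extremes correction:

  valid = U >= 0.01 * max(U);       % keep channels with at least 1% of peak sensitivity
  first = find(valid, 1, 'first');  % first sensitive wavelength channel
  last  = find(valid, 1, 'last');   % last sensitive wavelength channel
  spd(:, :, 1:first-1)  = repmat(spd(:, :, first), 1, 1, first-1); % hold at low extreme
  spd(:, :, last+1:end) = repmat(spd(:, :, last),  1, 1, w-last);  % hold at high extreme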

Filter Set Selection

As discussed in Seoung et al. [4], real-world reflectance spectra can empirically be decomposed into approximately 8 principal components, supposing > 99% of variance is to be explained. SPDs differ from reflectance by a factor of the illuminant, which by inspection is of lower dimensionality for everyday cases. [8] [9]


From the perspective of sensor design, this indicates that SPDs should be well-approximated by 8 values distributed evenly over the visible wavelength band of roughly 400 to 700 nm, equivalent to a spectral resolution of approximately 43 nm (8 evenly-spaced samples spanning 300 nm sit 300/7 ≈ 43 nm apart). Between these wavelength values, interpolation can be used to good approximation. For an image sensor with monochromatic sensitivity, this would necessitate at least 8 filters. However, the trichromatic sensitivity of ordinary cameras can be used to decrease the filter count.

The principal components of a camera/filter set system can be evaluated using the ST matrix shown above:


which expands as follows:
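One rendering consistent with the index convention described below, with rows indexed by st and columns by wavelength:

\[
\mathbf{ST} =
\begin{bmatrix}
ST_a(\lambda_1) & ST_a(\lambda_2) & \cdots & ST_a(\lambda_w) \\
ST_b(\lambda_1) & ST_b(\lambda_2) & \cdots & ST_b(\lambda_w) \\
\vdots & \vdots & \ddots & \vdots \\
ST_{st}(\lambda_1) & ST_{st}(\lambda_2) & \cdots & ST_{st}(\lambda_w)
\end{bmatrix}
\]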


where 1, 2, 3, ... λ denotes the wavelength index and a, b, c, ... st denotes the ST index.

At an arbitrary pixel, the ST matrix can be used to set up an overdetermined system of linear equations per the photo capture expression:
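That is, at each pixel, with COL (15-by-1), ST (15-by-w), and SPD (w-by-1), and up to the proportionality noted earlier:

\[ \mathbf{COL} = \mathbf{ST} \cdot \mathbf{SPD} \]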



where COL is a vector of color measurements, and SPD is a vector of the spectral power distribution.

This system of linear equations is equivalent to the canonical Ax=b formulation with a known measurement vector b (COL), known transformation matrix A (ST), and unknown predictor vector x (SPD). It is mathematically possible to solve this overdetermined system directly, e.g. with a linear least-squares (LLS) method:
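In the standard normal-equations form:

\[ \widehat{\mathbf{SPD}} = \left(\mathbf{ST}^{\top}\mathbf{ST}\right)^{-1} \mathbf{ST}^{\top}\, \mathbf{COL} \]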


However, the ST matrix tends to be poorly conditioned, producing non-physical and non-convergent SPD results when solved with LLS. LLS is also prohibitively slow at megapixel scale.

Nevertheless, the ST matrix is still useful for principal component analysis (PCA), to estimate the spectral resolution of the camera / filter set system. Because PCA is a function only of the transformation matrix ST, it can be performed in the absence of any photo data, and is a powerful tool for evaluating filter sets.
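A sketch of one such evaluation via SVD, a simple proxy for the PCA described in [4] (whether this matches the exact procedure used is an assumption; singular values are unaffected by transposing ST):

  % ST: 15-by-w transformation matrix for a candidate camera/filter system
  s = svd(ST);                            % singular values of ST
  explained = cumsum(s.^2) / sum(s.^2);   % cumulative fraction of variance
  numPC = find(explained >= 0.99, 1);     % components needed for 99% of variance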

It can be shown using PCA that the ideal filter set has transmission spectra with minimal overlap and maximum uniformity. If the spectra do not overlap, the filters are orthogonal in SPD vector space, and their utility in terms of PC per filter is maximized. It follows that ideal filters have "spike" spectra that measure a single wavelength only. In practice, this is best realized with a series of bandpass filters with minimal full width at half maximum (FWHM).

Using this PCA approach, several filter sets were evaluated, ranging in quantity from 5 to 12, and in cost from $30 to $3,500. These included options from Thorlabs, Edmund Optics, MidOpt, and PixelTeq. These fell roughly into the following tiers:

Comparison of filter set tiers by key cost and performance metrics; selection highlighted

The tiers differ significantly in terms of cost and performance. Gel-like color filters are very cheap, [2] but because their transmission spectra are arbitrary and usually nowhere zero, they overlap significantly, and it is practically difficult to develop more than 5 PC. Further, they usually lack published transmission data. Bandpass filters with 60-90 nm FWHM are more expensive but get much closer to the target of 8 PC. Their relatively broad FWHM limits the maximum number of useful filters that can be used to ~5, and the PC count to ~7. Narrow bandpass filters with 20-40 nm FWHM are more expensive yet, but offer greater optionality. They can be made to develop as many as 14 PC. 8 PC is also achievable, but at substantially greater cost than regular bandpass filters with 7 PC: ~$1,400 vs. $500.

Size and form factor/deployability were also considered. The required ~43 mm filter diameter is large relative to typical offerings, which are commonly limited to 1" [25.4 mm]. Many filter kits come in large, bulky cases, or are unmounted.

Ultimately, the MidOpt FS100 was selected. At $500 and 10 filters, it is an outlier in value, despite only 5 of the 10 filters being useful for visible light imaging. The swiveling fan form factor is ideal for no-contact lens filtering, as it can be operated with one hand, protects the filters, and is extremely compact when collapsed.

MidOpt FS100 Filter Swatch Kit

Several filters in the FS100 kit are not useful for this application, as they operate outside the visible light range, are polarizing filters, or excessively overlap another filter. The useful subset of filters was removed and re-bound using a shorter brass binding post:

Subset of useful filters from MidOpt FS100

The filter colors appear artificially warm in this picture due to their red-orange reflective coatings. The final filter subset is:
  1. MidOpt BP470, FWHM 85 nm, blue
  2. MidOpt BP525, FWHM 80 nm, light green
  3. MidOpt BP590, FWHM 75 nm, orange
  4. MidOpt BP635, FWHM 60 nm, light red
  5. MidOpt BP660, FWHM 65 nm, dark red

Bandpass transmission spectra for filters chosen from MidOpt FS100 kit

Colors for filters chosen from MidOpt FS100 kit - for visual check only, not used in calculation

This filter set is shown to have 7 PC of diminishing explanatory power, using the same standard as Seoung et al.: [4]

Principal component analysis of camera/filter system, showing ~7 PC

Photographic Aspects

Camera Selection

It is a prerequisite for this method that the spectral sensitivity of the camera is known. This data is not typically reported by the manufacturer, nor can it be measured without specialized equipment. Fortunately, the literature provides data for some common models, most prominently Jiang et al., who published a database of 28 models in 2013; that database is used in this study. [1]

In some cases, it may be acceptable to use a similar camera's data, particularly if the manufacturer is the same and the model year/type is similar. Because camera spectral sensitivities are designed to mimic the human eye, they generally resemble standard observer functions with minor deviations.


In this study, photos are captured with a Canon 650D, with sensitivity data given by the similar Canon 600D. 

Photo Capture

Minimizing perturbation of the camera while capturing the photo stack is critical. To this end, a tripod and remote shutter are used. The filters are neither attached to nor in contact with the camera at any time; they are simply held in front of the lens by hand at a distance of approximately 1/4". A prime (i.e. non-zoom, or fixed focal length) lens is preferable as it has one fewer degree of freedom than a zoom lens, and is less prone to perturbation and backlash. Prime lenses also tend to have smaller front elements than zoom lenses, which enables the use of smaller filters. A Canon 40 mm prime lens is used in this study, chosen for its generality and naturalistic field of view. Accounting for the crop sensor of the Canon 650D, the effective focal length is approximately 65 mm.

Photography setup: filter fan in left hand, remote shutter in right hand; no contact with camera

Since this method deliberately avoids the RAW format, care must be taken to avoid saturating the limits of the color space while capturing photos, particularly by overexposure. Saturation occurs in color channels individually before the corresponding grayscale image saturates; therefore the R, G, and B histograms should be inspected individually, rather than the grayscale histogram by itself.

ISO and exposure time should be minimized to reduce sensor noise and blurriness, while still ensuring that the image adequately spans the color space. The specific settings for ISO and exposure time will vary depending on the subject and imaging goals. White balance should be set to match the scene as best as possible.

It is advisable to take several test photos without a filter to correct the camera settings before capturing the photo stack. The goal should be to get the unfiltered photo just shy of overexposure, so that the filtered photos have maximum dynamic range without saturation. Because the MidOpt filters have a very high peak transmission of 90-100%, no adjustment of ISO or shutter speed is necessary between the test photos and the stack photos. These test photos also serve as a qualitative check for the reconstructed images.

Validation and Calibration

SPD estimates are compared against samples of known properties - specifically, a color chart consisting of 108 samples of oil paint mixtures spanning the color gamut, originally developed by painter Richard Schmid. [10] [11] SPDs are estimated by capturing a filtered photo stack of the Schmid chart under noon daylight, corresponding to the CIE D65 illuminant. SPD ground truth is calculated as the product of the D65 illuminant and the spectral reflectance of each swatch, as measured by the Spectro 1 spectrophotometer. The quality of SPD estimation is assessed by comparing the estimated and measured SPDs for each swatch.

Comparison of measured vs. estimated SPD at each swatch for un-calibrated system

From this comparison, a calibration curve is developed. At each swatch, the ratio of estimated to measured SPD is calculated at each wavelength, i.e. the gain required to eliminate the error between estimation and measurement. These gains are averaged over all swatches at each wavelength, resulting in the following calibration curve:

Overall error signal for Schmid chart before calibration

To maintain generality and avoid overfitting, a simple linear calibration curve is fit to this curve:

Fitted calibration curve to minimize error signal for Schmid chart
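A sketch of this calibration computation, with hypothetical names (spdEst and spdMeas are w-by-108 arrays of estimated and measured swatch SPDs; wl is the w-by-1 wavelength vector), assuming the resulting curve is applied as a multiplicative gain on the estimate, so the ratio is taken as measured over estimated:

  gain  = spdMeas ./ spdEst;    % per-swatch, per-wavelength gain that corrects the estimate
  gbar  = mean(gain, 2);        % average gain at each wavelength over all 108 swatches
  p     = polyfit(wl, gbar, 1); % simple linear fit to avoid overfitting
  calib = polyval(p, wl);       % calibration curve applied during SPD estimation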

The Schmid chart is then reevaluated using the calibration curve:

Comparison of measured vs. estimated SPD at each swatch for calibrated system

The fit between measurement and estimation is much improved. The average error is centered on 1.00, indicating that the linear fit has the intended effect.

Overall error signal for Schmid chart after calibration

Results

Several results are shown. All are calculated with a D65 illuminant, the Schmid calibration curve applied, a wavelength resolution of 20 nm, and values scaled to 5% and 95% with a nominal gamma (linearity) value of 1.00. For each, the unfiltered/test (left) and reconstructed (right) images are compared, and a mesh of characteristic spectra is shown.



Schmid Color Chart



Cabbage



John W. Weeks Bridge




Little Fresh Pond 1



Sugar Maple



Little Fresh Pond 2



Western Av. Bridge



John W. Weeks Bridge Entrance




Belmont Victory Gardens, Rock Meadow



Thicket



JFK Memorial Park 1



JFK Memorial Park 2



Desk

Conclusions

A simple method for hyperspectral imaging with a commodity camera is demonstrated using only commercially-available hardware. Accounting for the camera's Bayer filter reduces the number of lens filters required by increasing the number of principal components in the photo capture transformation matrix, ST. For the specific combination of a Canon 650D camera and the MidOpt FS100 filter kit, reasonable agreement is shown between measured and estimated SPDs for samples of known properties. Future work may examine the filter selection sub-problem in greater depth to improve SPD estimation and/or reduce cost, and/or study differences between unfiltered and reconstructed images.

Source Code

A MATLAB GUI was developed to facilitate rapid adjustment and iteration of SPD estimation and image reconstruction parameters. It currently supports 2 filter sets, 7 illuminants, 28 cameras, and 2 observers, and can readily be expanded as needed. Full source code may be found on GitHub.

MATLAB GUI for creation of SPDs and reconstructed images

References

  1. Jiang et al. What is the Space of Spectral Sensitivity Functions for Digital Color Cameras? IEEE Workshop on the Applications of Computer Vision (WACV), 2013.
  2. Leeuw et al. In situ Measurements of Phytoplankton Fluorescence Using Low Cost Electronics. Sensors 2013, 13, 7872-7883.
  3. Antonino Cosentino. Multispectral Imaging of Pigments with a Digital Camera and 12 Interferential Filters. e-Preservation Science, 2015.
  4. Seoung et al. Do It Yourself Hyperspectral Imaging with Everyday Digital Cameras. IEEE Conference on Computer Vision and Pattern Recognition (CVPR), 2016.
  5. Habel et al. Practical Spectral Photography. Eurographics, Volume 31, Number 2, 2012.
  6. Baek et al. Compact Single-Shot Hyperspectral Imaging Using a Prism. ACM Transactions on Graphics, Vol. 36, No. 6, Article 217. Publication date: November 2017.
  7. Lindbloom. Spectral Computation of XYZ. Revised April 8, 2017.
  8. Colour & Vision Research Laboratory, University College London. Standard Illuminants.
  9. PublicResource.org. CIE 15: Supplementary Tables [10 CFR 430 Subpart B, App. R, 4.1.1], cie.15.2004.tables.xls.
  10. Dichter. How Paints Mix. 2019.
  11. Schmid. Alla Prima II. Page 219. Fourth Printing, March 2018.