Olympus AI for Highly Robust Label-Free Nucleus Detection and Segmentation in Microwell Plates

Motivation

Labeling cells with chromatic and, in particular, fluorescent markers is an invaluable approach for the observation and analysis of biological features and processes. Prior to the development of these labeling techniques, microscopic analysis of biological samples was performed through label-free observation. With the dramatic improvements in image analysis owing to machine-learning methods, label-free observation has recently seen a significant resurgence in importance. Deep-learning-based approaches can provide new access to information encoded in transmitted-light images and have the potential to replace fluorescent markers used for structural staining of cells or compartments (Christiansen et al., "In Silico Labeling: Predicting Fluorescent Labels in Unlabeled Images," Cell, 2018) (Figure 1).

Figure 1
From left to right: AI prediction of nuclei positions (blue), green fluorescent protein (GFP) histone 2B labels showing nuclei (green), and raw brightfield transmission image (gray).

No label-free approach can completely replace fluorescence, because information obtained from directly attaching labels to target molecules is still invaluable. However, gaining information about the sample without or with fewer labels has a number of clear advantages:

  • Reduced complexity in sample preparation
  • Reduced phototoxicity
  • Saving fluorescence channels for other markers
  • Faster imaging
  • Improved viability of living cells by avoiding stress from transfection or chemical markers

The limitations of label-free assays are due in large part to the lack of methods to robustly deal with the challenges inherent to transmitted-light image analysis. These challenges include:

  • Low contrast, in particular, of brightfield images
  • Dust and other imperfections in the optical path, which degrade image quality more strongly than in fluorescence microscopy
  • Additional constraints of techniques to improve contrast, such as phase contrast or differential interference contrast (DIC)
  • Higher level of background compared to fluorescence

For live-cell imaging in microwell plates, it can be particularly challenging to acquire transmission images of sufficient quality for analysis because of the liquid meniscus of the buffer medium and other noise contributions (Figures 2 and 3). The challenges of microwell plates include:

  • Phase contrast is often impossible
  • DIC works only in glass dishes
  • Brightfield images are strongly shaded at the well borders
  • Condensation artifacts may require removing the lid
  • Particles in suspension increase the background

Figure 2
Brightfield transmission images (10X magnification, HeLa cells) of part of one well of a 96-well microplate, showing many of the challenges for analysis with this kind of image. Note in particular the scratch (top row) and the shading caused by the meniscus effect and condensation of evaporated buffer on the lid after long-term observation.

Figure 3
Detail view showing the strong background and inhomogeneities that can occur in brightfield transmission imaging. Note in particular the out-of-focus contributions from unwanted particles.

Deep-Learning Technology

Transmission brightfield imaging is a natural approach for label-free analysis applications, but it also presents image analysis and segmentation challenges that have long been unsolved. To address these challenges, Olympus has integrated an image analysis approach based on deep convolutional neural networks into the scanR high-content screening (HCS) analysis software. This kind of neural network architecture has recently been described as the most powerful object segmentation technology (Long et al., "Fully Convolutional Networks for Semantic Segmentation," 2014). Neural networks of this kind feature an unrivaled adaptability to various challenging image analysis tasks, making them an optimal choice for the nontrivial analysis of transmission brightfield images in label-free applications.

In a training phase, the scanR AI’s neural networks automatically learn how to predict desired parameters—for example, the positions and contours of cells or cell compartments—in a process called object-of-interest segmentation. During the training phase, the network is fed with pairs of example images and “ground truth” data (i.e., object masks in which the objects of interest are annotated). Once the network is trained, it can be applied to new images and predict the object masks with high precision.
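To make this training contract concrete, the following is a minimal, illustrative sketch of how a fully convolutional network can be trained on pairs of brightfield images and binary nucleus masks in PyTorch. It is not the scanR implementation; the tiny architecture, the binary cross-entropy loss, and the data loader interface are all assumptions made for illustration.

    # Illustrative sketch only -- not the scanR implementation.
    # Trains a small fully convolutional network on (brightfield, mask) pairs.
    import torch
    import torch.nn as nn

    class TinySegNet(nn.Module):
        """Fully convolutional: 1-channel brightfield in, per-pixel nucleus logit out."""
        def __init__(self):
            super().__init__()
            self.net = nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
                nn.Conv2d(32, 1, 1),  # 1x1 conv -> per-pixel logit
            )

        def forward(self, x):
            return self.net(x)

    def train(model, loader, epochs=10):
        device = "cuda" if torch.cuda.is_available() else "cpu"
        model.to(device)
        opt = torch.optim.Adam(model.parameters(), lr=1e-3)
        loss_fn = nn.BCEWithLogitsLoss()  # pixel-wise nucleus vs. background
        for _ in range(epochs):
            for bf, mask in loader:  # brightfield batch, ground-truth masks
                bf, mask = bf.to(device), mask.to(device)
                opt.zero_grad()
                loss = loss_fn(model(bf), mask)
                loss.backward()
                opt.step()
        return model

A production network would be far deeper (for example, a U-Net-style encoder-decoder), but the training contract is the same: images in, pixel-wise ground truth out.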

Typically in machine learning, the annotations (for example, the boundaries of cells) are provided by human experts. This can be a tedious and time-consuming step because neural networks require large amounts of training data to fully exploit their potential.

To overcome these difficulties, Olympus uses self-learning microscopy. In self-learning microscopy, the microscope automatically generates the ground truth required for training the neural network by acquiring reference images during the training phase. For example, in order to teach the neural network the robust detection and segmentation of nuclei in brightfield images in difficult conditions, the nuclei can be labeled with a fluorescent marker. The microscope then automatically acquires large numbers of image pairs (brightfield and fluorescence). These pairs are used to train the neural network to correctly analyze the images (Figures 4 and 5).
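As a rough illustration of how a ground-truth mask can be derived automatically from the fluorescence channel, the sketch below applies a global Otsu threshold and a size filter. Note that the workflow described in the next section uses edge-based object segmentation instead, so the threshold approach here is purely a stand-in assumption.

    # Stand-in sketch: derive a binary nucleus mask from the GFP channel.
    # (The actual workflow uses edge-based segmentation; Otsu is an assumption.)
    import numpy as np
    from skimage import filters, morphology

    def gfp_to_mask(gfp: np.ndarray, min_size: int = 50) -> np.ndarray:
        """Binary ground-truth mask from a fluorescence image."""
        mask = gfp > filters.threshold_otsu(gfp)        # global threshold
        mask = morphology.remove_small_objects(mask, min_size=min_size)
        return mask.astype(np.float32)                  # float mask for training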

Since this approach to ground truth generation requires little human interaction and scales easily, huge numbers of training image pairs can be acquired within a short time. This makes it possible for the neural network to adapt to variations and distortions during training, which results in a learned model that is robust against these challenging conditions.

Figure 4
GFP channel of the region shown in Figure 2.

Figure 5
Histone 2B GFP label, highlighting the nuclei, on top of the brightfield image shown in Figure 3. The GFP channel in this case is used to generate the ground truth automatically.

Label-Free Segmentation Training

To demonstrate a typical use case, a whole 96-well plate exhibiting variations in buffer filling level, condensation effects, meniscus-induced imaging artifacts, etc., is imaged with the following parameters:

  • UPLSAPO objective (10X magnification, NA = 0.6)
  • Adherent HeLa cells in liquid buffer (fixed)
  • GFP channel: histone 2B GFP as a marker for the nucleus (ground truth)
  • Brightfield channel: three Z-slices with a 6 μm step size (to include defocused images in the training)

The ground truth for the neural network training is generated by automated segmentation of the fluorescence images using conventional methods (edge-based object segmentation). Including slightly defocused images in the example data set during training allows the neural network to better account for small focus variations later. The neural network is trained with pairs of ground truth and brightfield images, as depicted in Figure 6. Five wells with 40 positions each are used as training data. The training phase took 90 minutes on an NVIDIA GTX 1070 graphics card (GPU).
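A sketch of how such a training set might be assembled: each of the three brightfield z-slices acquired at a stage position is paired with the single mask segmented from the corresponding fluorescence image, so slightly defocused views enter the training data. The loader callbacks and position identifiers below are assumptions made for illustration.

    # Sketch: pair every brightfield z-slice with the mask from the
    # fluorescence image of the same position (names are assumed).
    import torch
    from torch.utils.data import Dataset

    class BrightfieldPairs(Dataset):
        def __init__(self, positions, load_bf, load_mask, n_slices=3):
            # positions: stage-position identifiers
            # load_bf(pos, z) -> (H, W) array; load_mask(pos) -> (H, W) array
            self.items = [(p, z) for p in positions for z in range(n_slices)]
            self.load_bf, self.load_mask = load_bf, load_mask

        def __len__(self):
            return len(self.items)

        def __getitem__(self, i):
            pos, z = self.items[i]
            bf = torch.as_tensor(self.load_bf(pos, z)).float().unsqueeze(0)
            mask = torch.as_tensor(self.load_mask(pos)).float().unsqueeze(0)
            return bf, mask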

Figure 6
Schematic showing the training process of the neural network.

Label-Free Nucleus Detection and Segmentation

During the detection phase, the learned neural network model is applied to brightfield images as depicted in Figure 7. It predicts for each pixel whether it belongs to a nucleus or not. The result is a probability image, which can be visualized as shown in Figures 8 and 9 by color-coding the probability and generating an overlay image.
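A minimal sketch of this inference step, reusing the hypothetical model from the training sketch above: logits are converted to a per-pixel probability image with a sigmoid, and the probabilities are blended into the blue channel of the brightfield frame for display. The blending scheme is an illustrative choice, not the scanR rendering.

    # Sketch: probability image from the trained network, plus a simple
    # blue-channel overlay for visualization (blending scheme assumed).
    import numpy as np
    import torch

    @torch.no_grad()
    def predict_probability(model, bf: np.ndarray, device="cpu") -> np.ndarray:
        x = torch.as_tensor(bf, dtype=torch.float32, device=device)[None, None]
        return torch.sigmoid(model(x))[0, 0].cpu().numpy()  # P(nucleus) per pixel

    def overlay(bf: np.ndarray, prob: np.ndarray, alpha: float = 0.5) -> np.ndarray:
        gray = (bf - bf.min()) / (np.ptp(bf) + 1e-8)         # normalize to [0, 1]
        rgb = np.stack([gray, gray, gray], axis=-1)
        rgb[..., 2] = (1 - alpha) * rgb[..., 2] + alpha * prob  # blue = nuclei
        return rgb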

Figure 7
Schematic showing the application (inference) of the trained neural network.

The images in Figures 8 and 9 show that the neural network, which learned to predict cellular nuclei from brightfield images, finds the nuclei at the exact positions where they appear in the brightfield image, clearly demonstrating the value of the AI-based approach:

  • High-precision detection and segmentation of nuclei
  • Optimal for cell counting and geometric measurements such as area or shape
  • Less than 1 second of processing time per position (on an NVIDIA GTX 1080 Ti GPU)


Validation of the Results

Deep-learning predictions can be extremely precise and robust, but it is essential to validate them carefully to ensure that no artifacts or other errors are produced. In this sense, the approach is similar to a classical image analysis pipeline, but errors are harder to anticipate without careful validation because they depend on the data used for training.

The Olympus HCS analysis software is well suited for systematic validation of the AI results. Figure 10 compares the software results to the fluorescence-based analysis and manual inspection of 100 randomly selected nuclei. Overall cell counts of the wells were also compared (Figure 11).

Figure 10 shows that Olympus’ AI results correspond well with the fluorescence results. The distribution of cell counts across all wells also appears nearly identical (Figure 11). However, the total cell count from the deep-learning approach is around 3% higher than the count based on fluorescence imaging (1.13 million versus 1.10 million nuclei).

One reason for this discrepancy was that the AI was able to detect nuclei that did not produce enough GFP signal to be detected by fluorescence. Another reason was identified by creating scatter plots of circularity versus area for the detected objects.
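The per-object measurements behind these scatter plots can be sketched as follows: label the connected components of a mask, compute area and circularity for each object, and flag those above the 800-pixel threshold. The 4πA/P² circularity definition used here is a common convention and an assumption, not necessarily the scanR formula.

    # Sketch: per-object area and circularity for the scatter plots,
    # flagging unusually large objects (> 800 pixels).
    import numpy as np
    from skimage import measure

    def object_stats(mask: np.ndarray, large_px: int = 800):
        labels = measure.label(mask > 0.5)               # connected components
        areas, circs = [], []
        for r in measure.regionprops(labels):
            areas.append(r.area)
            # circularity = 4*pi*A / P^2 (1.0 for a perfect circle; assumed definition)
            circs.append(4 * np.pi * r.area / (r.perimeter ** 2 + 1e-8))
        areas = np.asarray(areas)
        n_large = int((areas > large_px).sum())
        return areas, np.asarray(circs), n_large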

Figure 8
Probability image of AI prediction of nuclei positions from the brightfield image. Same part of the well as in Figure 2.

Figure 9
Probability image overlaid on a brightfield image. Example of AI prediction of nuclei positions from a brightfield image.

These plots revealed 22,000 (2%) unusually large objects (>800 pixels) in the fluorescence plot, compared to 7,000 (0.6%) in the AI plot (Figure 12). Figure 13 shows a random selection of these unusually large objects side by side, indicating that the AI better separates nuclei in close contact.

Figure 10
Random selection of 100 nuclei from the whole validation data set. (A) GFP nuclear labels, (B) brightfield image, and (C) AI prediction of nuclei positions from the brightfield image.

Figure 11
Comparison of cell counts from the reference method, counted on the GFP channel with a conventional approach (left), and Olympus AI, counted on the brightfield channel using the neural network (right). Wells 1–5 were used for training and are excluded from the validation.

Figure 12
Scatter plot showing circularity versus area distribution of the 1.10 million nuclei detected in the GFP channel (left) and the 1.13 million nuclei detected in the brightfield channel by AI (right). The yellow rectangle indicates unusually large objects.

Figure 13
Random selection of 100 unusually large objects from the whole validation data set. (A) GFP nuclear labels, (B) brightfield image, and (C) AI prediction of nuclei positions from the brightfield image.

Conclusions

The AI software on Olympus’ scanR system can reliably derive nuclei positions and masks in microwells solely from brightfield transmission images. The HCS software can achieve this after a brief training stage. No manual annotations are required thanks to the self-learning microscopy approach. Fully automated training data generation enables segmentation of nuclei with potentially better accuracy than measurements based on fluorescence.

Use of Olympus’ AI-based approach offers significant benefits to many live-cell analysis workflows. Aside from the improved accuracy, using brightfield images avoids the need for genetic modifications or nuclear markers. Not only does this save time on sample preparation, it also saves the fluorescence channel for other markers. Furthermore, the shorter exposure times of brightfield imaging mean reduced phototoxicity and further time savings during imaging.

Author

Dr. Mike Woerdemann
Product Manager
Olympus Soft Imaging Solutions GmbH
Münster, Germany

