
Ask the Experts

Deep Learning: Opening Doors to New Applications

Join Manoel and Kathy as they explain how to unleash the power of deep learning to tackle challenging image analysis tasks, such as detecting cells in brightfield images and performing cell classifications that are difficult for the human eye.

Presenters:

Manoel Veiga, Application Specialist, Life Science Research
Kathy Lindsley, Application Specialist, Life Science

FAQ

Webinar FAQs | Deep Learning

Can you do quantitative intensity measurements after deep learning algorithms are applied?

The deep learning algorithms predict the position of the fluorescent signal but not its intensity. However, you can perform deep-learning-based segmentation and then use a secondary channel (e.g., a fluorescence channel) to perform a quantitative intensity analysis.

For example, if you are working label-free because you want to measure protein expression while irradiating your specimen at very low power, you can perform the segmentation on the brightfield image and then measure the fluorescence in the secondary channel. If the intensity is low but the background is constant, you can still perform a quantitative analysis, even when the signal is only a couple of counts above the camera noise.
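
To illustrate this workflow in general terms (outside any particular software package), here is a minimal Python sketch using NumPy and scikit-image. The file names are hypothetical placeholders, the segmentation is assumed to be available as an integer label image, and using the median of the non-object pixels as a constant background estimate is just one possible choice.

    import numpy as np
    from skimage import io, measure

    # Hypothetical inputs: a label image produced by deep-learning segmentation of
    # the brightfield channel, and the matching fluorescence channel.
    labels = io.imread("brightfield_segmentation_labels.tif")           # integer labels, 0 = background
    fluorescence = io.imread("fluorescence_channel.tif").astype(float)

    # Estimate the constant background from pixels outside all detected objects.
    background = np.median(fluorescence[labels == 0])

    # Mean background-corrected fluorescence intensity per segmented object.
    for region in measure.regionprops(labels, intensity_image=fluorescence):
        corrected = region.mean_intensity - background
        print(f"object {region.label}: mean intensity above background = {corrected:.2f}")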

Can deep learning software be applied to stained histology slides (e.g., H&E)?

Yes, cellSens™ software includes a dedicated neural network architecture for RGB images. This RGB network uses an augmentation procedure that slightly modifies the contributions of the different color channels during training, so the resulting network is robust to slight variations in color balance.
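
The exact augmentation used in cellSens is not detailed here, but a generic sketch of this kind of per-channel color perturbation, with an arbitrarily chosen perturbation range, could look like this in Python:

    import numpy as np

    def jitter_rgb(image, max_shift=0.05, rng=None):
        """Randomly rescale each RGB channel by up to +/- max_shift.
        Generic illustration only, not the cellSens implementation.
        `image` is a float array of shape (height, width, 3) in [0, 1]."""
        rng = np.random.default_rng() if rng is None else rng
        gains = 1.0 + rng.uniform(-max_shift, max_shift, size=3)   # one gain per channel
        return np.clip(image * gains, 0.0, 1.0)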

How many images does deep learning software need for training?

The key parameter is the number of annotated objects rather than the number of images. In some cases, 20 to 30 objects are enough, but the resulting neural network can then only analyze images with similar contrast. If you want to go beyond that, for example to work label-free and analyze objects under difficult conditions, you will typically need thousands of annotations. This number of annotations can be reached by generating the ground truth automatically from a fluorescence channel, for example.
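
As a rough sketch of what automated ground truth from a fluorescence channel can look like: threshold a nuclear or cytoplasmic fluorescence image, remove small artifacts, and use the resulting labels as annotations for the paired brightfield image. The Otsu threshold and the size filter below are illustrative choices, not a prescribed recipe.

    from skimage import filters, measure, morphology

    def masks_from_fluorescence(fluor, min_size=50):
        """Derive an integer label mask from a fluorescence image to serve as
        automated ground truth for the paired brightfield image (illustrative)."""
        threshold = filters.threshold_otsu(fluor)                      # global intensity threshold
        mask = morphology.remove_small_objects(fluor > threshold, min_size=min_size)
        return measure.label(mask)                                     # one label per object (annotation)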

Are Olympus deep learning algorithms based on U-Net?

Yes, they are inspired by U-Net. They are not exactly the same, but the overall structure is based on U-Net.
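
For readers unfamiliar with the architecture, the sketch below is a minimal U-Net-style encoder/decoder with a single skip connection, written in PyTorch. It only illustrates the general pattern (a downsampling path, a bottleneck, and an upsampling path that concatenates the skip features); it is not the Olympus implementation, and the depth and channel counts are arbitrary.

    import torch
    import torch.nn as nn

    def conv_block(in_ch, out_ch):
        # Two 3x3 convolutions with ReLU, the basic building block of U-Net.
        return nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1), nn.ReLU(inplace=True))

    class TinyUNet(nn.Module):
        """Two-level U-Net-style network: encoder, bottleneck, and decoder with a
        skip connection. Illustrates the general structure only."""
        def __init__(self, in_ch=1, n_classes=2):
            super().__init__()
            self.enc = conv_block(in_ch, 32)
            self.pool = nn.MaxPool2d(2)
            self.bottleneck = conv_block(32, 64)
            self.up = nn.ConvTranspose2d(64, 32, kernel_size=2, stride=2)
            self.dec = conv_block(64, 32)            # 64 = 32 (skip) + 32 (upsampled)
            self.head = nn.Conv2d(32, n_classes, kernel_size=1)

        def forward(self, x):
            e = self.enc(x)                                   # encoder features, kept for the skip
            b = self.bottleneck(self.pool(e))                 # lower-resolution bottleneck
            d = self.dec(torch.cat([self.up(b), e], dim=1))   # decoder with concatenated skip
            return self.head(d)                               # per-pixel class scores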

What is the difference between deep neural networks and convolutional neural networks?

Neural networks have an input layer and an output layer. A deep neural network has at least one intermediate layer between the input and output layers (generally there are several intermediate layers). A convolutional neural network is a class of deep neural networks in which the intermediate layers perform convolutions: learned filters (kernels) are convolved with the output of the previous layer. Convolution is a mathematical operation that works very well for image analysis tasks, which is why convolutional neural networks are used to analyze microscopy images. Deep learning is also used in fields outside of image analysis; those applications do not require convolutional networks and use other types of networks instead.
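
To make the convolution idea concrete, the short sketch below convolves an image with a fixed 3×3 edge-detection kernel using SciPy. A convolutional layer performs essentially the same sliding-filter operation, except that the network learns many such kernels from the training data rather than using hand-designed ones.

    import numpy as np
    from scipy.ndimage import convolve

    # A hand-designed 3x3 edge-detection (Laplacian-like) kernel.
    kernel = np.array([[-1., -1., -1.],
                       [-1.,  8., -1.],
                       [-1., -1., -1.]])

    image = np.random.rand(128, 128)       # stand-in for a microscopy image
    response = convolve(image, kernel)     # slide the filter over every pixel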


Related Products

Imaging Software

cellSens

Providing intuitive operations and a seamless workflow, cellSens software’s user interface is customizable so you control the layout. Offered in a range of packages, cellSens software provides a variety of features optimized for your specific imaging needs. Its Graphic Experiment Manager and Well Navigator features facilitate 5D image acquisition. Achieve improved resolution through TruSight™ deconvolution and share your images using Conference Mode.

  • Improve experiment efficiency with TruAI™ deep-learning segmentation analysis, providing label-free nuclei detection and cell counting
  • Modular imaging software platform
  • Intuitive application-driven user interface
  • Broad feature set, ranging from simple snapshot to advanced multidimensional real-time experiments

Experts
Kathy Lindsley
Application Specialist, Life Science Applications

I’m Kathy Lindsley, and I’m an application specialist at Olympus supporting camera-based imaging systems. I have a B.Sc. in biochemistry from Iowa State University. I joined Olympus in 2006 as a research imaging sales representative, and in 2012 I transitioned to the life science applications group. Before joining Olympus, I worked as a research assistant in academic research for 15 years, gaining experience in patch clamping, calcium imaging, tissue culture, and immunohistochemistry.

Manoel Veiga
Application Specialist, Life Science Research
Olympus Soft Imaging Solutions

Manoel Veiga earned his Ph.D. in Physical Chemistry at the University of Santiago de Compostela, Spain. Following two postdoctoral positions, at the Complutense University of Madrid and at WWU Münster, he joined PicoQuant GmbH. After five years supporting customers worldwide in the fields of FLIM and time-resolved spectroscopy, he joined Olympus Soft Imaging Solutions GmbH in 2017, where he works as a Global Application Specialist with a focus on high-content analysis and deep learning.
