Light

Jun 01, 2023

Communications Biology volume 6, Article number: 502 (2023) Cite this article


Light-sheet fluorescence microscopy has transformed our ability to visualize and quantitatively measure biological processes rapidly and over long time periods. In this review, we discuss current and future developments in light-sheet fluorescence microscopy that we expect to further expand its capabilities. This includes smart and adaptive imaging schemes to overcome traditional imaging trade-offs, i.e., spatiotemporal resolution, field of view and sample health. In smart microscopy, a microscope will autonomously decide where, when, what and how to image. We further assess how image restoration techniques provide avenues to overcome these trade-offs and how "open top" light-sheet microscopes may enable multi-modal imaging with high throughput. As such, we predict that light-sheet microscopy will fulfill an important role in biomedical and clinical imaging in the future.

Over the past decades, microscopes have provided us with invaluable insights on how biological processes are organized in space and time. A core innovation has been the selective labeling of proteins and lipids with fluorescent markers1,2, enabling fluorescent microscopy techniques such as light-sheet fluorescence microscopy (or light-sheet microscopy for short)3. Light-sheet microscopy enables us to visualize, quantify and dynamically track structural components in vivo4,5,6 and in vitro7,8,9. The fundamentals of light-sheet microscopy are covered in several reviews10,11,12,13,14,15,16,17,18,19, but in short, it relies on an orthogonal separation of the illumination and detection path, enabling selective illumination of a whole imaging plane and simultaneous widefield detection (Fig. 1a).

a Traditional light-sheet microscopy such as three-objective Selective Plane Illumination Microscopy (SPIM) relies on an orthogonal arrangement of the illumination (blue; illumination objectives IL1 and IL2) and detection (green; detection objective DO). This ensures that the axial resolution of imaging is mainly governed by the thickness of the light-sheet (blue), enabling imaging across large samples with widefield detection (green) and good optical sectioning. To acquire a 3D volume, the sample is scanned along the detection axis either by moving the sample itself or by scanning the light sheet together with the objective in the detection path. b–d Prime examples of imaging with light-sheets include continuous, long-term imaging of developmental processes in mouse and zebrafish embryos, and imaging of cleared tissue with subcellular resolution. b Katie McDole et al.4 characterized the cellular movements involved in mouse development from early streak (E6.5) to somite stages (E8.5) by imaging a CAGTAG1 expressing mouse embryo with a histone marker (H2B-eGFP) for over 44 h. Scale bar: 100 μm. c Selected projections from multi-view imaging5 (three angles) of the growing embryonic zebrafish vasculature labeled with the fluorescent vascular marker (Tg(kdrl:EGFP), cyan) and the red blood cell marker (Tg(GATA1a:dsRed), magenta), imaged from 20 h post-fertilization (hpf) to 86 hpf. Scale bar: 500 μm. d Adam Glaser et al.37 performed large-scale imaging of an expanded slice of kidney of 3.2 cm × 2.1 cm size and 1 mm thickness. High-resolution regions of interest revealed the morphology of glomeruli (Scale bar: 40 μm), vessels (Scale bar: 80 μm) and tubules (Scale bar: 50 μm). The increased resolution due to expansion was further demonstrated with a multi-channel zoom-in of DAPI-counterstained tissue (Scale bars: 100 μm [top] and 20 μm [bottom]). The scale bars thereby indicate the dimensions of the native, unexpanded tissue.
Panel b was adapted with permission from Katie McDole et al. (2018)4. Panel c adapted from Daetwyler et al. (2019)5. Panel d adapted from Glaser et al. (2019)37.

In this review, we will look ahead and discuss potential future avenues of fluorescence imaging with light-sheets. We will describe how volumetric and temporal imaging barriers limit the application of optical microscopy to capture large specimens with high spatiotemporal resolution and explore strategies to overcome them. This includes progress in technology, novel smart and adaptive imaging schemes and image restoration techniques. Moreover, we will review how such schemes will go hand-in-hand with flexible, open top light-sheet microscopes to enable multi-modality imaging with high throughput.

Light-sheet microscopy stands out for its efficient and gentle 3D imaging capacity. It is characterized by a light intensity distribution in the shape of a sheet, which illuminates the focal plane of a microscope detection system (Fig. 1a)3,16. This provides many advantages. Most importantly, only (or at least predominantly) the focal plane of the detection system is illuminated, which results in optical sectioning and minimal out-of-focus excitation15,16,19. This leads to crisp images devoid of blur, and massively reduced sample bleaching compared to conventional, epi-fluorescent microscopy techniques, such as widefield or confocal10,20.

The excitation of the focal plane is traditionally achieved with one or two illumination objectives to launch the light-sheet(s)3,21. Thereby, coherent laser light is shaped into Gaussian3, top hat21, single or multiple Bessel22,23, Airy24 or other25,26 beams to create an intensity distribution in the sample that approximates a sheet over a finite distance. Volumetric imaging is achieved by either scanning the sample3, increasing the depth of focus27, or moving the sheet and the detection28,29.

Detection of the excited fluorophores is achieved by capturing the fluorescence signal of the illuminated plane with widefield detection on a scientific camera16. To understand the benefit of light-sheet microscopy, the concept of spatial duty cycle is important. It describes the fraction of an exposure during which a given fluorophore is actively excited. As the entire plane is illuminated at once, the spatial duty cycle is much higher in light-sheet microscopy than in conventional point-scanning confocal microscopes, where only one point of the volume is excited at a time. Consequently, lower laser powers can be applied to produce a similar signal. This is important as many photo-bleaching and photo-toxic effects are highly nonlinear in the excitation intensity10,20. However, the widefield detection limits the optical penetration depth of light-sheet microscopes, as scattering occludes imaging deeper in tissues. Nevertheless, for small organoids and model organisms, light-sheet microscopy can be applied, especially when combined with image fusion30, which combines information from different orientations of the sample. Thereby, areas that would otherwise be occluded by scattering can be visited from their most favorable viewing angle15,31,32. Multiple views can be acquired by sample rotation, or in recent implementations, up to four objectives provide optical paths for alternating between light-sheet illumination and detection33,34,35.
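The duty-cycle argument above can be made concrete with a back-of-envelope comparison. The numbers here (a 200-plane stack of 2048 × 2048 frames) are hypothetical, illustrative values, not figures from the cited work:

```python
# Illustrative comparison of spatial duty cycle: the fraction of the total
# volume acquisition time during which a given fluorophore is excited.
# All numbers are hypothetical example values.

def duty_cycle_light_sheet(planes_per_volume: int) -> float:
    """In light-sheet imaging the whole plane is illuminated during its
    exposure, so each fluorophore is on for ~1/N of the stack acquisition."""
    return 1.0 / planes_per_volume

def duty_cycle_point_scanning(voxels_per_volume: int) -> float:
    """In point scanning only one voxel is excited at any instant."""
    return 1.0 / voxels_per_volume

ls = duty_cycle_light_sheet(200)                   # 200 planes per stack
ps = duty_cycle_point_scanning(2048 * 2048 * 200)  # same volume, voxel by voxel
print(f"light-sheet duty cycle:    {ls:.3e}")
print(f"point-scanning duty cycle: {ps:.3e}")
print(f"ratio: {ls / ps:.1e}")  # ~4.2e6-fold higher for light-sheet
```

The roughly four-million-fold higher duty cycle is why a light-sheet system can deliver a similar signal at far lower peak intensity, which matters because photo-damage scales nonlinearly with intensity.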

These abilities have allowed light-sheet microscopes to gently acquire 3D data over hundreds to thousands of stacks per sample. The resulting data revealed and enabled quantification of dynamic processes such as subcellular signaling and morphology23,25, embryogenesis over durations of days4,5,36 (Fig. 1b, c) and provided fast acquisition of large cleared tissues with subcellular resolution37,38,39,40 (Fig. 1d).

Despite the fast and gentle volumetric imaging provided by light-sheet and other fluorescence microscopes16, they are ultimately constrained by volumetric and temporal imaging barriers (Fig. 2). While the former refers to our inability to image large specimens with high resolution, the latter refers to our inability to image very fast processes continuously over extended time periods.

a Current imaging techniques such as light-sheet, confocal and super-resolution microscopy are limited in the volume they can image due to technical and practical limitations (blue gradient: from low to high spatial resolution; white dashed circles indicate the predominant application regimes of the three microscopy techniques). To overcome this volumetric imaging barrier, expansion microscopy enables lower resolution imaging techniques to acquire with an effectively higher resolution. Additionally, we expect novel adaptive, smart imaging techniques and multi-resolution imaging to overcome the volumetric imaging barrier by selectively imaging parts of a large volume with high resolution. Moreover, image reconstruction algorithms, such as compressed sensing and deep learning approaches, will provide avenues to obtain high-resolution images from partially sampled, large volumes. b Additionally, we expect that adaptive and smart imaging schemes will overcome the temporal imaging barrier to image fast processes selectively over long time periods.

The volumetric imaging barrier (Fig. 2a) is given by the maximum volumetric reach of a given imaging technology. For example, large specimens, such as a whole mouse, cannot currently be imaged with confocal or super-resolution imaging in their entirety. This is in part governed by physical limitations, such as optical penetration due to light-scattering, and optical engineering, e.g. a trade-off between Numerical Aperture and working distance as well as field of view41.

Additionally, proper Nyquist sampling can become rate limiting so that imaging a large specimen at high resolution is impractical. To illustrate this, a common voxel dwell time for a conventional laser scanning confocal microscope with 250 nm resolution is ~1 μs42, and thus it would take ~4.2 s to capture a 2048 × 2048 × 1 confocal image. Doubling the resolution with an Airyscan microscope to 120 nm43 would require ~16.8 s for the same field of view. Imaging a Drosophila egg44 of size 9 × 10−3 mm3 (0.18 mm width, 0.51 mm length) using a confocal microscope with Nyquist sampling would thus take 76 min, or over 10 h with an Airyscan confocal. This prevents imaging at rates enabling studies of cellular dynamics, such as endocytosis processes that occur within a minute45. While light-sheet microscopy is much faster due to its widefield acquisition and high duty cycle, high-resolution versions of it such as Lattice light-sheet microscopy23,46 or Axially Swept Light Sheet Microscopy (ASLM)47 can still struggle to acquire large volumes sufficiently fast.
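The timing estimates above follow from simple arithmetic, which can be reproduced as a sketch. The only assumptions are those stated in the text: a 1 μs voxel dwell time and Nyquist sampling at half the optical resolution:

```python
# Back-of-envelope acquisition times for point-scanning at Nyquist sampling,
# using the figures quoted in the text (~1 us voxel dwell time).

DWELL_S = 1e-6  # voxel dwell time, ~1 us

def frame_time(nx: int, ny: int, dwell_s: float = DWELL_S) -> float:
    """Time to raster-scan a single nx x ny frame."""
    return nx * ny * dwell_s

def volume_time(volume_mm3: float, resolution_nm: float,
                dwell_s: float = DWELL_S) -> float:
    """Time to scan a volume at Nyquist (voxel pitch = resolution / 2)."""
    voxel_m = (resolution_nm * 1e-9) / 2          # Nyquist sampling pitch
    n_voxels = (volume_mm3 * 1e-9) / voxel_m ** 3  # volume / voxel volume
    return n_voxels * dwell_s

print(f"2048x2048 confocal frame: {frame_time(2048, 2048):.1f} s")        # ~4.2 s
print(f"Airyscan frame (2x res, 4x pixels): {frame_time(4096, 4096):.1f} s")  # ~16.8 s
print(f"Drosophila egg at 250 nm: {volume_time(9e-3, 250) / 60:.0f} min")  # ~77 min
print(f"Drosophila egg at 120 nm: {volume_time(9e-3, 120) / 3600:.1f} h")  # >10 h
```

The calculation confirms the text's figures: doubling resolution quadruples the per-frame cost in 2D and increases the volumetric cost eightfold.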

Similarly, there is a temporal imaging barrier (Fig. 2b). We currently cannot image rapid processes over long time periods due to the amount of data generated and the impact on sample health and bleaching of the fluorophores. For example, taking an image of 2048 × 2048 × 1 voxels every 1 ms over the period of one day amounts to a dataset of almost 700 TB. While data issues might be resolved in future with new hardware and larger storages, continuous imaging induces photo-toxic effects, which accumulate over imaging cycles20,48.
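The data-volume estimate can likewise be checked directly. Assuming a 16-bit camera (an assumption; the text does not state the bit depth), a day of millisecond-rate frames comes to roughly 725 TB in decimal units, or about 660 TiB in binary units, consistent with the "almost 700 TB" quoted above:

```python
# Data volume for continuous high-speed imaging, as in the text's example:
# one 2048 x 2048 frame every 1 ms for 24 hours, assuming 16-bit pixels.

BYTES_PER_PIXEL = 2              # assumed 16-bit camera output
frames = 24 * 3600 * 1000        # one frame per millisecond for a day
frame_bytes = 2048 * 2048 * BYTES_PER_PIXEL
total_tb = frames * frame_bytes / 1e12    # decimal terabytes
total_tib = frames * frame_bytes / 2**40  # binary tebibytes
print(f"{total_tb:.0f} TB (decimal), {total_tib:.0f} TiB (binary) per day")
```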

Consequently, traditional acquisitions governed by Nyquist sampling are limited by trade-offs between sample health, temporal resolution, spatial resolution, and field of view or volumetric coverage, respectively (Fig. 3a). To account for these trade-offs, a microscopist has to choose one imaging modality to best fit the biological question at hand and perform an experiment with the chosen settings to the end (Fig. 3b).

a Traditionally, an acquisition is governed by a limited photon budget of the sample. Therefore, improved spatial and temporal resolution is typically antagonistic to sample health and the field of view that is imaged. Optimizing one corner of the pyramid thus leads to trade-offs towards other corners. b Consequently, in a traditional acquisition, one microscope and/or one set of microscope settings is chosen to best reflect the imaging needs defined by the biological questions asked: best sample health, e.g. through non-fluorescent acquisitions (bright-field imaging), highest spatial resolution, e.g. for studying molecular signaling (blue building block), largest field of view to e.g. capture a whole organism or organ such as the whole brain (orange building block), or highest temporal resolution to capture fast processes such as neuronal signaling or organismal movements (yellow building block). c In the future, we anticipate that new smart and adaptive imaging schemes will overcome the current limitations by providing modular imaging within one experiment. Thereby, a microscope will be able to use event-based detections to switch automatically between imaging modes, which optimize, for example, for a large field of view (orange building block), spatial (blue building block) and temporal (yellow building blocks) resolution, and sample health (green building blocks). d In a recent implementation of such a novel imaging scheme, Mahecic et al.96 utilized neural networks for event-based detections. Here, the architecture of the utilized U-Net is displayed, which takes an acquired input image and outputs an event-based probability map for guiding the microscope. The U-Net consists of encoder (downsampling layers, blue) and decoder (upsampling, green) sections and connections between layers (beige).

To push the volumetric imaging barrier, multi-photon excitation49,50,51, wave-front shaping52, tissue clearing53 and expansion microscopy54,55 have been developed to overcome the physical limitations for imaging due to light-scattering and absorption processes. Physical scattering and absorption arise due to tissue-inherent absorbing chromophores such as blood, melanin, water or pigments, and small- and large-scale scatterers in the structure of cells and tissues56,57. This results in reduced penetration of optical microscopy into tissue, limiting imaging to a few tens to hundreds of micrometers from the tissue surface58 (i.e. one optical mean free path).

Multi-photon excitation49,50,51 has increased the optical penetration depth to more than a mm in some tissues49,59. Importantly though, light-sheet microscopy does not benefit as strongly from multiphoton excitation as raster scanning techniques do. This is because a light-sheet microscope still needs to form a widefield image, which is severely limited by light-scattering at visible wavelengths. In contrast, multi-photon raster scanning microscopes do not need to form a sharp image with the returning fluorescence photons and as such can go much deeper. Thus, intravital light-sheet microscopy is currently limited to a depth of less than 100 microns in most tissues.

As an alternative, a judicious choice of a reasonably translucent model organism, such as zebrafish lines with no pigmentation60, has enabled light-sheet imaging in situ and in vivo. Furthermore, tuning the refractive index of the immersion medium to better match the sample61,62 reduces scattering and thus improves penetration depth. Also, a shift to fluorescent probes in the near infrared II window (900–1700 nm) has shown promise to increase the reach of light-sheet microscopes in tissues63. Longer wavelengths intrinsically have a longer scattering mean free path and can overlap with the absorption window of biological tissues64. Development of probes for this optical window, however, has remained challenging as absorbing and emitting at longer wavelengths necessitates increased electronic conjugation, which is often accompanied by reduced molecular rigidity, increased sources of non-radiative decay, and low quantum yields65,66. Quantum dots67 and carbon nanotubes68 have been used as alternatives to fluorescent proteins/dye molecules, but complicate labeling specificity and biocompatibility. Therefore, the future of near infrared light-sheet imaging strongly depends on future breakthroughs in probe development.

Additionally, progress in wavefront correction schemes, in particular multi-conjugate adaptive optics (MCAO)69,70, may increase the optical penetration depth further. MCAO addresses the issue that conventional adaptive optics can only correct a small area, the so-called isoplanatic patch71,72. In tissues, this patch can be smaller than the field of view of the camera, negating the benefits of parallelized detection of light-sheet microscopy. By correcting different regions of tissue separately, MCAO has the potential to increase the isoplanatic patch size69,70 and may enable effective light-sheet imaging in tissues. Conventional adaptive optics for light-sheet microscopy has been demonstrated73,74, but the setups were highly complex. To rapidly correct spatially varying aberrations, dedicated wavefront sensors and deformable mirrors were employed both in the excitation and emission path of the light-sheet microscope. As such, it may seem at first far-fetched to add even more components for MCAO, making such a system overly complex. However, we envision that through machine learning, aberrations can be sensed without dedicated wavefront sensors75,76, significantly easing the equipment constraints. Further, instead of deformable mirrors, transmissive deformable waveplates have shown promise for wavefront correction77. In principle, such devices can be stacked in an image space of the microscope to perform MCAO, or a dedicated, integrated 3D wavefront shaping device might be devised.

For fixed tissues, sample preparation through tissue clearing53 can largely overcome the depth limitations associated with light-scattering. Particularly interesting in this context is expansion microscopy54,55, which can physically magnify a sample tenfold or more78,79 (Fig. 1d). This effectively increases the resolving power of any microscope by the expansion factor, and thus enables light-sheet microscopes to reach resolution levels that were hitherto limited to super-resolution microscopy (Fig. 2a). Therefore, expansion microscopy is a way to overcome the volumetric imaging barrier by modifying the sample, with the caveat that the expansion process might not always preserve the ultrastructure and careful validation is needed80. The challenge is now to image the thousandfold larger volumes effectively, which will test even the most efficient volumetric light-sheet microscopes. As expansion microscopy progresses, we see further need to engineer novel light-sheet microscopes with ever larger fields of view, larger cameras and longer working distances. Also, techniques that rapidly tile81,82 or scan39 the light-sheet to cover large fields of view might become more necessary in this quest.
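The scaling involved is worth making explicit: expansion improves effective resolution linearly with the expansion factor, but the volume to be imaged grows with its cube. A minimal sketch, assuming a hypothetical light-sheet system with ~300 nm native lateral resolution:

```python
# Effective resolution and data growth under expansion microscopy.
# The 300 nm native resolution is a hypothetical, illustrative value.

def effective_resolution_nm(native_nm: float, expansion_factor: float) -> float:
    """Resolution in native-tissue coordinates improves by the expansion factor."""
    return native_nm / expansion_factor

def volume_scaling(expansion_factor: float) -> float:
    """Imaged volume (and hence data size) grows with the cube of expansion."""
    return expansion_factor ** 3

print(effective_resolution_nm(300, 10))  # 30 nm: the super-resolution regime
print(volume_scaling(10))                # 1000x more voxels for the same sample
```

This cubic growth is precisely why the text argues that expansion microscopy will stress even the fastest light-sheet systems.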

While it is possible to push the imaging barriers with novel developments, as light-sheet microscopy3 and expansion microscopy54,55 have done, a complementary approach is to modularly combine the strengths of different techniques into one imaging workflow (Fig. 3c). Thereby, the microscope system will determine by itself when and how to apply which module, such as spatiotemporal sampling, field of view and sample irradiation. Thus, we envision that such smart and adaptive imaging schemes will overcome traditional Nyquist sampling and expand the capabilities of light microscopy, including light-sheet microscopy.

First steps towards such universal smart and adaptive schemes have already been achieved. An emerging requirement for any smart and adaptive imaging scheme is a feedback loop based on real-time, on-the-fly processing of the acquired data to monitor for changes.

Real-time analysis of microscope images has been established to improve imaging parameters within one imaging modality. In a landmark paper, McDole et al.4 applied adaptive light-sheet microscopy to capture mouse embryo development (Fig. 1b) by real-time specimen tracking and automatic adjustment of the overall imaging volume and other microscope parameters. The imaging scheme thereby compensates for drift, growth and changing optical properties, improving over the previously published automated microscopy routine AutoPilot83, which required near-constant size and shape. Important parameters for light-sheet based techniques are optimization of the imaging volume and of the spatial overlap between the light-sheet and detection focal planes, including their relative offsets and angles4,83. In our view, the main difference to a traditional microscope is the decoupling of illumination and detection, and hence this relative alignment is critical for best imaging performance. Moreover, imaging schemes have been devised to automatically find the best angles in SPIM acquisitions31 or tailor illumination dosage in super-resolution microscopy84,85 and multi-photon microscopy86,87,88. Additionally, automatic adjustment of the imaging volume to fit the sample morphology showed a drastic reduction in the duration of imaging and overall light dose, and thus improved sample health89,90,91.

To change between imaging modalities (Fig. 3c), mechanisms to detect events of interest are required. Event-detection thereby relies on the early identification of changes in biological structures or behavior, such as an upcoming cell division or cell signaling, to name a few examples. In an early implementation of event-driven microscopy, Almada et al.92 performed unsupervised, high-content, event-driven sample treatment and live-to-fixed imaging. Thereby, they relied on mitotic cell rounding as a biological cue, determined by on-the-fly cell segmentation with Otsu thresholding. Combining widefield imaging for event detection with STED super-resolution imaging, Alvelid et al.93 designed an automated multiscale method to selectively image protein recruitment, vesicle trafficking and biosensor activity at high resolution. Applying GPU accelerated peak detection, they realized data processing on a millisecond time scale. Additionally, GPU-based deep learning networks such as the U-Net architecture94 (Fig. 3d) hold great potential for event-detection due to their inherently fast, parallel processing of large images once trained and the active development of specialized hardware, such as Tensor Processing Units (TPU)95. Mahecic et al.96 applied such a network for event detection of upcoming mitochondrial and bacterial divisions, enabling selective rapid imaging of these processes at rates matching their temporal dynamics.
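The core of such an event-driven feedback loop can be sketched compactly. The following is an illustrative sketch, not the pipeline of Almada et al.: a minimal Otsu threshold segments each incoming frame, and a simple area criterion (a stand-in for a real biological cue such as cell rounding) decides whether to trigger a mode switch:

```python
import numpy as np

def otsu_threshold(img: np.ndarray) -> float:
    """Minimal Otsu threshold: maximize between-class variance over a histogram."""
    hist, edges = np.histogram(img.ravel(), bins=256)
    centers = (edges[:-1] + edges[1:]) / 2
    w = hist / hist.sum()
    best_t, best_var = centers[0], -1.0
    for i in range(1, 256):
        w0, w1 = w[:i].sum(), w[i:].sum()
        if w0 == 0 or w1 == 0:
            continue
        mu0 = (w[:i] * centers[:i]).sum() / w0
        mu1 = (w[i:] * centers[i:]).sum() / w1
        var = w0 * w1 * (mu0 - mu1) ** 2
        if var > best_var:
            best_var, best_t = var, centers[i]
    return best_t

def detect_event(frame: np.ndarray, area_trigger: int) -> bool:
    """Trigger when the segmented foreground (a stand-in for, e.g., a
    rounding cell) exceeds an area threshold -- the cue on which a smart
    microscope would switch imaging modes."""
    mask = frame > otsu_threshold(frame)
    return bool(mask.sum() >= area_trigger)

# Synthetic frame: dim background with one bright 20 x 20 "cell".
frame = np.full((128, 128), 10.0)
frame[50:70, 50:70] = 200.0
print(detect_event(frame, area_trigger=300))  # True: 400 bright pixels
```

In a real system this check would run on-the-fly on each camera frame and, on a trigger, hand control to a high-resolution or high-speed acquisition module.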

While live imaging is currently the main driver of such smart acquisition schemes, we also envision them to become important in the exploration of cleared organs, and even entire animals. While time is not a hard barrier (after all, the samples are no longer alive), it is still a factor. This is especially true for repeated experiments and particularly in clinical settings where mm-size biopsies are routine and cellular resolution needs to be achieved for accurate cell type identification for prognosis. The amount of data generated by imaging large, cleared tissues also cannot be overstated, especially in the context of expansion microscopy. Smart imaging schemes will therefore be crucial to explore cleared tissues and autonomously switch to higher resolution imaging only in areas of interest.

At the heart of smart and adaptive microscope routines is the microscope control software that enables adaptive control schemes and event detections. In the available implementations, the importance of open-source control software has become evident. Open-source software allows for controlling and modifying every aspect of microscope acquisition and integration with available fast image analysis software. These efforts are spearheaded by open-source software such as Micro-Manager97, Pycro-Manager98, AutoScanJ99, or other, Python-based control software100,101. As open-source software is often developed and maintained by few contributors, it will remain a challenge to maintain and adapt scripts to new hardware and to incorporate new on-the-fly processing algorithms. Therefore, modularity of the software is essential, and containerization of image processing workflows could help maintain compatibility by allowing for several software environments on one computer102,103. For commercial microscope providers, we believe it will be paramount to provide interfaces to these open-source tools. One way to achieve this could be through enabling network message triggered events in the acquisition protocols99.

While the fundamentals for smart and adaptive microscopes have been laid, the era of smart microscopes has just dawned. Recognizing the advancement that deep learning networks have achieved in other fields such as sequence-to-structure prediction with AlphaFold104 and large-scale generative language models with GPT105, we envision microscope experiments where a user could input keywords such as "capture all endocytosis events" and the microscope would then systematically image these events in the specimen.

This would require training deep learning networks with universal grounding in fundamental biology concepts and to associate biological terms with their visual microscopy appearances. The growing availability of public image repositories, e.g., the Image Data Resource106, and NIH's recent guideline of making all image data associated with a publication available might be a first step in this direction to train such networks. Challenges remain; for example, the availability of massive amounts of microscope data with insufficient annotation might render them less impactful for training the anticipated deep learning networks for microscopy. Crucially, unlike natural language, images are not bound to a standard visual ‘dictionary’ or ‘vocabulary’ but demonstrate significant visual heterogeneity even for the same biology. It remains an open question how to ensure generalizability and scalability of any trained network beyond a single biological process and lab. Moreover, training such a universal microscopy network will likely incur significant costs that are currently beyond the reach of microscopy institutions, let alone individual labs. To overcome some of these limitations, self-supervised deep learning networks have shown promise in identifying similar morphologic features within large repositories of whole-slide histology images, independent of repository image size and with almost no annotations107. The development of techniques to learn from only weak or limited supervision108 or deploy expert-in-the-loop active learning109 to annotate and refine the ‘hard’ cases may also present a promising cost-efficient avenue to scale learning.

In a similar vein, ‘unsupervised’ probabilistic models might learn the distribution of available images to enable the searching of rare events to uncover previously unknown biology. An example of such a rare find by human annotators is the discovery, after many years of research and imaging, of structures in the zebrafish brain vasculature termed Kugeln110. In the future, a smart microscope might make such discoveries itself.

Another current limitation for on-the-fly processing is the time required for data processing, as a light-sheet microscope can easily generate gigabytes of data within seconds. However, we expect considerable progress towards faster processing pipelines in the foreseeable future. Besides better algorithms, this acceleration will come in part from progress in computing hardware. Current computer architectures still predominantly rely on discrete, split CPU and GPU memory, which requires slow transfer of data between them. In the future, we expect that microscope acquisition will rely on a single physical memory resource shared by CPU and GPU (e.g., SoC DRAM, as currently available on the NVIDIA Jetson111), which will eliminate copying data back and forth between CPU and GPU and thus make the best of both worlds available for fast processing, potentially even on the camera chip. Thereby, microscopy may benefit from the development of tools that enable autonomous driving, where huge numbers of images are analyzed on-the-fly to identify street hazards, other cars, or pedestrians.

In addition to changing the imaging parameters and modules during acquisition with smart and adaptive imaging schemes, acquisition schemes can be devised where a higher-resolution image data set is reconstructed after acquisition from a low-resolution scan or a scan which deliberately contains missing regions. This has the potential to significantly reduce the overall light dose on the sample, acquired data volume and acquisition time. After image reconstruction, the resolution of the original imaging system is recovered, or even increased, overcoming the traditional imaging trade-offs (Fig. 3a). With progress in the theory of image reconstruction with compressed sensing112,113,114 and machine learning115,116,117,118 approaches, we expect that such algorithms will become more widespread in the future.

Compressed sensing112,113,114 is a mathematical framework that describes how to capture and represent signals (images) at rates significantly below the Nyquist rate. The theory of compressed sensing is traditionally based on three concepts: sparsity, incoherence and random sampling119. If all three are given, successful reconstruction of an under-sampled signal can be achieved. A signal is sparse when it can be represented in a certain domain or basis with only few non-zero parameters. Therefore, the sparsity constraint is usually fulfilled in fluorescence microscopy, as images are often already sparse in their pixel representation (e.g., few selectively labeled structures) or can be easily compressed, which means that there is a basis, e.g., in wavelets, in which many components are zero. However, incoherence (the values in the measurement matrix are uniformly spread out) and uniform random sampling are often lacking in microscopy119. After all, current image sensors acquire data deterministically over a 2D array, and not in a random fashion. Nevertheless, compressed sensing applications have been demonstrated successfully, and new principles for compressed sensing have been introduced to bridge the gap between theory and practice: asymptotic incoherence, asymptotic sparsity and multilevel sampling119.
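The interplay of sparsity, incoherence and random sampling can be demonstrated in a few lines. The following sketch, not drawn from any of the cited implementations, recovers a 5-sparse signal of 100 unknowns from only 50 random Gaussian measurements using iterative soft-thresholding (ISTA), a standard solver for the underlying sparse recovery problem:

```python
import numpy as np

def soft_threshold(z: np.ndarray, t: float) -> np.ndarray:
    """Proximal operator of the l1 norm: shrink values towards zero."""
    return np.sign(z) * np.maximum(np.abs(z) - t, 0.0)

def ista(A: np.ndarray, y: np.ndarray, lam: float = 0.01,
         n_iter: int = 2000) -> np.ndarray:
    """Iterative soft-thresholding: recover a sparse x from y = A x
    with fewer measurements than unknowns."""
    L = np.linalg.norm(A, 2) ** 2  # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(n_iter):
        x = soft_threshold(x + A.T @ (y - A @ x) / L, lam / L)
    return x

rng = np.random.default_rng(0)
n, m, k = 100, 50, 5                           # unknowns, measurements, nonzeros
A = rng.standard_normal((m, n)) / np.sqrt(m)   # incoherent random sensing matrix
x_true = np.zeros(n)
x_true[rng.choice(n, k, replace=False)] = rng.choice([-1.0, 1.0], k)
y = A @ x_true                                 # compressed measurements

x_hat = ista(A, y)
rel_err = np.linalg.norm(x_hat - x_true) / np.linalg.norm(x_true)
print(f"relative reconstruction error: {rel_err:.3f}")
```

The random Gaussian matrix here supplies the incoherence and random sampling that, as the text notes, real camera sensors lack; practical microscopy implementations must engineer these properties into the illumination or acquisition scheme instead.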

Several successful applications of compressed sensing in imaging and microscopy have been demonstrated120. These applications include massively accelerated camera frame rates, reaching 100 billion frames per second121. Moreover, compressed sensing has been implemented on a lattice light-sheet microscope and an epifluorescence microscope to reduce light exposure and acquisition time 5–10 fold122, and applied for high-throughput anatomical imaging of whole mouse brains of ~400 mm3 on a timescale of ~10 min123. Importantly, compressed sensing reconstruction is unsupervised and does not require prior training data. Moreover, reconstruction accuracy improves as resolution increases124. This makes compressed sensing an appealing technique for the future of multi-resolution, smart light-sheet imaging schemes.

Similarly, deep learning networks can learn image restoration from training data125,126,127. Thereby, deep learning can address several limitations of compressed sensing126. Traditionally, compressed sensing requires a handcrafted reconstruction procedure, which might be difficult to establish for sophisticated image models. Moreover, such reconstruction procedures are based on iterative inverse optimization algorithms which tend to be slow and delicate to tune correctly, and thus reconstruction is hard to achieve in real-time. In contrast, reconstruction with deep learning requires only a single, fast forward propagation through the network. Moreover, the (asymptotic) incoherence of data required for compressed sensing is not strictly required for deep learning networks; on the contrary, they might even benefit from coherent data. Not surprisingly, the application of deep learning to image reconstruction is therefore a very active area of research, and a variety of networks have been developed for this task126,127.

Deep learning, however, faces several challenges. Deep learning models do not yet provide the generalization, robustness and stability of reconstructions provided by established compressed sensing, and they suffer from hallucinations, i.e., the creation of realistic-looking artefacts127,128. This relates to the question of how well trained networks generalize outside their training set, and to model fairness, the ability of models to capture and represent common and rare phenotypes equally well. We expect considerable progress on these questions in the next years. Reporting quantified uncertainties will therefore be crucial for interpreting reconstructions. To this end, models such as Bayesian inversion129 and techniques such as Bayesian dropout130 exist to generate uncertainty measures from deep learning models. However, the accuracy of these uncertainty measures requires further external assessment to ascertain their ability to account for aleatoric and epistemic uncertainty131,132.
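As a concrete illustration of the dropout-based idea, the toy sketch below (a tiny network with made-up weights, not any published model) keeps dropout active at inference time and runs many stochastic forward passes; the spread of the resulting predictions serves as a crude uncertainty estimate:

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy two-layer network with fixed (pretend "trained") weights.
W1 = rng.normal(0, 0.5, (16, 4))
W2 = rng.normal(0, 0.5, (1, 16))

def forward(x, p_drop=0.5):
    """One stochastic forward pass: dropout stays ON at inference time."""
    h = np.maximum(W1 @ x, 0)            # ReLU hidden layer
    mask = rng.random(h.shape) > p_drop  # Bernoulli dropout mask
    h = h * mask / (1 - p_drop)          # inverted-dropout scaling
    return (W2 @ h).item()

x = np.array([0.2, -1.0, 0.5, 0.3])
samples = np.array([forward(x) for _ in range(1000)])

pred_mean = samples.mean()  # point prediction
pred_std = samples.std()    # spread, interpreted as (epistemic) model uncertainty
```

In a restoration setting, the same loop would be run per pixel of the reconstructed image, yielding an uncertainty map alongside the restored data.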

Moreover, the data-driven approach of deep learning depends on the availability of good (in size, balance, and quality) training data reflective of the intended application. This is particularly true for image reconstruction, which requires output at the highest resolution, in contrast to classification tasks and (binary) segmentations, which are in essence coarse outputs. However, while popular networks such as GPT105 rely on an abundance of available data with limited restrictions, microscopy image reconstruction tasks are usually specialized applications with small datasets. One hope is that, with future mandates to host all microscope data accompanying a publication, vastly more and better-annotated training data will eventually become available. Additionally, data augmentation, e.g., by geometric transformations such as image rotations, can increase the data size133. Moreover, applying a deep learning network pre-trained on different tasks with more diverse and larger datasets might be beneficial, a concept known as transfer learning134. Additionally, meta-learning approaches135,136 that specifically train networks in the few-image setting may yield more performant networks in real deployment. Similarly, more physically informed network architectures that model the image generation process can help to ensure realistic predictions and reduce the number of free parameters to fit, for faster, more generalizable learning137,138. Lastly, adopting a continuous learning paradigm instead of one-off training may help networks continuously adapt to new data and imaging conditions.
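Geometric augmentation is the simplest of these remedies to sketch. For a 2D microscopy image there are eight label-preserving orientation variants (four 90° rotations, each with an optional flip); generating them multiplies a small dataset by eight at no acquisition cost (illustrative helper, not from any cited toolkit):

```python
import numpy as np

def augment8(img):
    """Return the 8 distinct orientations (4 rotations x optional flip)
    of a 2D image -- a cheap, label-preserving way to enlarge small
    microscopy training sets."""
    out = []
    for k in range(4):
        r = np.rot90(img, k)       # rotate by k * 90 degrees
        out.append(r)
        out.append(np.fliplr(r))   # mirrored counterpart
    return out

img = np.arange(16, dtype=float).reshape(4, 4)
augmented = augment8(img)  # 8 variants, all with the same pixel values
```

For anisotropic 3D stacks, only rotations about the detection axis preserve the point-spread function, so the usable subgroup is smaller — one reason augmentation alone cannot replace more training data.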

Despite these concerns, deep learning restoration techniques have been applied successfully and have even been granted FDA approval for select applications such as CT scans139. In super-resolution microscopy, considerable acquisition speed improvements have been reported through restoration140,141,142. For light-sheet microscopy, deep learning networks such as CARE143 have improved the SNR of images acquired with less laser power or faster exposure. Interestingly, deep learning networks have also been applied to directly influence the sampling process. Horstmeyer et al. developed convolutional neural networks to optimize the physical layout of a microscope, improving the accuracy of identifying malaria-infected cells by 5–10%144. We expect that future developments will further leverage this avenue of co-optimizing image acquisition with the deep learning networks used for analysis. Understanding both the imaging system and the imaging process will thereby lead to faster processing times and better reconstructions138.

Ultimately, it is also important to realize that an image is often only an intermediary step towards the quantification of a biological process. For many studies, a visually appealing deep-learning-reconstructed image may therefore be less important than image data that supports rigorously quantifiable conclusions. This might alleviate some of the challenges described above, but it requires a precise understanding of the imaging system and image formation. Towards this end, Pégard et al. demonstrated compressive light-field microscopy, enabling real-time quantification of brain activity without ever reconstructing a 3D image145.

We anticipate that light-sheet technology itself will advance in the form of refined optical designs, better detectors, novel probes, NIR imaging capability and potentially non-fluorescent contrast methods such as Raman scattering. Some technical aspects of light-sheet technology may, however, already be optimized to their maximum extent, such as the numerical aperture that can be covered in a light-sheet microscope146,147; further gains in this area would likely also diminish the practicability of the instrument. This limit is imposed by the orthogonal configuration of LSFM and the fact that high-NA objectives need a large opening angle. As a result, improving the lateral resolution beyond a certain threshold comes at the cost of reducing the axial resolution and vice versa, as the excitation and detection light cones share a limited solid angle.
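The geometric argument can be made concrete with a small calculation. In an idealized orthogonal arrangement, the detection and illumination half-angles must together fit within 90°, so raising one NA necessarily lowers the ceiling on the other (illustrative numbers; water immersion assumed, and mechanical clearances ignored, which in practice make the constraint even tighter):

```python
import math

N_WATER = 1.333  # refractive index of water immersion (assumption)

def half_angle_deg(na, n=N_WATER):
    """Collection half-angle of an objective, from NA = n * sin(theta)."""
    return math.degrees(math.asin(na / n))

# Orthogonal light-sheet geometry: the two cones share one quadrant,
# so their half-angles must sum to at most 90 degrees.
na_det = 1.1                       # hypothetical high-NA water detection objective
theta_det = half_angle_deg(na_det)           # ~56 degrees of the quadrant used up
theta_ill_max = 90.0 - theta_det             # angle left for the illumination cone
na_ill_max = N_WATER * math.sin(math.radians(theta_ill_max))  # ~0.75 remaining NA
```

Pushing the detection NA toward the immersion-medium limit thus starves the illumination path of aperture, degrading the achievable light-sheet thickness and hence the axial resolution.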

Instead, we think that the future impact of light-sheet systems greatly depends on their practicality and applicability to biological and biomedical research questions. Many traditional light-sheet designs require non-traditional sample mounting5,36 and offer only limited space for the sample itself (Fig. 1a).

A promising alternative is offered by open-top37,148 and oblique plane microscopes (OPM)8,28,149, which leave one half-space free (Fig. 4a) to place samples of, in principle, arbitrary size. Progress in the optical design of these microscopes has enabled large, millimeter-sized fields of view150,151,152,153,154, high resolution8,155, and even both combined156. Recently, commercial microscopes based on open-top designs have also been developed157. As such, we believe that open-top and OPM systems will enable widespread adoption of light-sheet microscopy, as conventional sample mounting methods can be employed, and integration into standard microscope bodies is in principle possible with OPM. This opens the way for three-dimensional high-throughput imaging using multi-well plates, imaging of toxic or infectious specimens contained in (sealed) dishes, and multimodal imaging approaches (Fig. 4b).

a Oblique plane microscopy (OPM) is one example of an open-top geometry, in which a single high-NA primary objective provides both the illumination (blue) and the detection (three fluorescent emitters along the light-sheet are shown in dark green; light green shows the fluorescence collected by the OPM from these three emitters). This removes the need for the additional illumination objectives of traditional light-sheet microscopes, and thus provides new optical design space and improved accessibility. To scan a 3D volume, the light-sheet is swept across the objective's field of view; no stage or sample movement is necessary. b OPM facilitates light-sheet microscopy for (i) novel high-throughput, multi-well applications and microfluidic devices, (ii) imaging of samples sealed to reduce contamination over long time periods and imaging of pathogens such as bacteria and viruses, and (iii) combinations with other modalities such as atomic force microscopy (AFM).

Open-top and OPM designs have also opened new spaces for optical engineering. OPM relies on remote refocusing158, the ability to create an aberration-free 3D image of the specimen in a remote space away from the sample. The tilted light-sheet plane within that 3D image can then be mapped onto a camera with another microscope. While the remote focusing principle158 underlying OPM was established over a decade ago, it has recently been re-analyzed159 to enable imaging across different refractive indices. These new findings may enable high-resolution light-sheet imaging in any immersion medium, furthering the versatility and applicability of light-sheet microscopy. This serves as just one example of how improvements may still come from discoveries in optical principles and theory, as well as from engineering.

We expect that light-sheet microscopy will play an important role in the biomedical sciences and in clinical applications for microscopic and macroscopic imaging. Its rapid yet gentle volumetric imaging will serve as the basis for physiologically relevant studies of cellular biology in cell culture, in organoids, in (engineered) tissue, in clinical biopsies, and in entire animals. As such, one can dream big: a future where sub-cellular biology can be studied live in its native context, without the limitations imposed by traditional cell culture methods on coverslips.

To achieve this dream, we expect light-sheet microscopes to overcome the volumetric and temporal imaging barriers imposed by constraints of the microscope system and the sample. This will no longer be a task that a human microscope operator or image analyst can handle alone. Instead, novel smart and adaptive microscope control schemes will explore samples in an autonomous, self-driving fashion to image processes of interest selectively, at rates matching their dynamics. These schemes will enable new autonomous biological discoveries and systematic imaging studies of processes that take place over multiple length and time scales.

Besides increasing throughput, such acquisition schemes also promise to rein in the data deluge. Current light-sheet datasets can already reach petabyte scales and will likely grow by further orders of magnitude soon. Since smart and adaptive acquisition schemes no longer apply Nyquist sampling at the finest level across the entire dataset, the finest sampling will be applied only selectively. Moreover, algorithmic selection of regions of interest will remove human bias and thus improve the reproducibility of imaging studies.
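To put the data deluge in perspective, a rough back-of-the-envelope estimate for a single camera (illustrative numbers, not measurements from any specific instrument) already lands in the terabytes-per-day range:

```python
# Back-of-the-envelope data rate for one sCMOS camera in a light-sheet
# microscope (illustrative, assumed parameters).
width, height = 2048, 2048  # pixels per frame
bytes_per_px = 2            # 16-bit depth
fps = 100                   # frames per second, sustained

rate_gb_s = width * height * bytes_per_px * fps / 1e9  # ~0.84 GB/s
per_day_tb = rate_gb_s * 86400 / 1e3                   # ~72 TB per day
```

Multi-camera and multi-view systems multiply these figures further, which is why selective, adaptive sampling is one of the few ways to keep continuous long-term imaging tractable.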

As with any look into the future, it is likely that the field will take very different directions. After all, who before 2015 would have foreseen expansion microscopy54,55, which has since impacted fluorescence microscopy in ways then unimaginable? Thus, while we are excited about the possibilities described herein, we also hope that the microscopy community will remain as imaginative as it has been over the last 20 years, with many more surprises in store.

Further information on research design is available in the Nature Portfolio Reporting Summary linked to this article.

Lippincott-Schwartz, J. & Patterson, G. H. Development and use of fluorescent protein markers in living cells. Science 300, 87–91 (2003).

Article CAS PubMed Google Scholar

Sezgin, E. & Schwille, P. Fluorescence techniques to study lipid dynamics. Cold Spring Harb. Perspect. Biol. 3, a009803 (2011).

Article PubMed PubMed Central Google Scholar

Huisken, J., Swoger, J., Del Bene, F., Wittbrodt, J. & Stelzer, E. H. K. Optical sectioning deep inside live embryos by selective plane illumination microscopy. Science 305, 1007–1009 (2004). The landmark paper by Huisken et al. heralds the application of light-sheet microscopy.

Article CAS PubMed Google Scholar

McDole, K. et al. In toto imaging and reconstruction of post-implantation mouse development at the single-cell level. Cell 175, 859–876.e33 (2018). This milestone study leverages adaptive imaging schemes to image post-implantation mouse development at unprecedent resolution and volumetric coverage.

Article CAS PubMed Google Scholar

Daetwyler, S., Günther, U., Modes, C. D., Harrington, K. & Huisken, J. Multi-sample SPIM image acquisition, processing and analysis of vascular growth in zebrafish. Development 146, dev.173757 (2019).

Article Google Scholar

Jain, A. et al. Regionalized tissue fluidization is required for epithelial gap closure during insect gastrulation. Nat. Commun. 11, 5604 (2020).

Article CAS PubMed PubMed Central Google Scholar

Serra, D. et al. Self-organization and symmetry breaking in intestinal organoid development. Nature 569, 66–72 (2019).

Article CAS Google Scholar

Sapoznik, E. et al. A versatile oblique plane microscope for large-scale and high-resolution imaging of subcellular dynamics. eLife 9, e57681 (2020). In this study, an oblique plane microscopy with subcellular resolution and opto-genetics capability is presented.

Article CAS PubMed PubMed Central Google Scholar

Welf, E. S. et al. Actin-membrane release initiates cell protrusions. Dev. Cell 55, 723–736.e8 (2020).

Article CAS PubMed PubMed Central Google Scholar

Reynaud, E. G., Kržič, U., Greger, K. & Stelzer, E. H. K. Light sheet‐based fluorescence microscopy: More dimensions, more photons, and less photodamage. HFSP J. 2, 266–275 (2008).

Article PubMed PubMed Central Google Scholar

Khairy, K. & Keller, P. J. Reconstructing embryonic development. genesis 49, 488–513 (2011).

Article PubMed Google Scholar

Tomer, R., Khairy, K. & Keller, P. J. Shedding light on the system: studying embryonic development with light sheet microscopy. Curr. Opin. Genet. Dev. 21, 558–565 (2011).

Article CAS PubMed Google Scholar

Höckendorf, B., Thumberger, T. & Wittbrodt, J. Quantitative analysis of embryogenesis: a perspective for light sheet microscopy. Dev. Cell 23, 1111–1120 (2012).

Article PubMed Google Scholar

Pampaloni, F., Chang, B.-J. & Stelzer, E. H. K. Light sheet-based fluorescence microscopy (LSFM) for the quantitative imaging of cells and tissues. Cell Tissue Res. 360, 129–141 (2015).

Article CAS PubMed Google Scholar

Weber, M. & Huisken, J. Light sheet microscopy for real-time developmental biology. Curr. Opin. Genet. Dev. 21, 566–572 (2011).

Article CAS PubMed Google Scholar

Daetwyler, S. & Huisken, J. Fast fluorescence microscopy with light sheets. Biol. Bull. 231, 14–25 (2016).

Article PubMed Google Scholar

Stelzer, E. H. K. et al. Light sheet fluorescence microscopy. Nat. Rev. Methods Prim. 1, 73 (2021). A recent, in-depth review of light-sheet microscopy.

Article CAS Google Scholar

Huisken, J. Slicing embryos gently with laser light sheets. BioEssays 34, 406–411 (2012).

Article PubMed Google Scholar

Wan, Y., McDole, K. & Keller, P. J. Light-sheet microscopy and its potential for understanding developmental processes. Annu. Rev. Cell Dev. Biol. 35, 655–681 (2019).

Article CAS PubMed Google Scholar

Icha, J., Weber, M., Waters, J. C. & Norden, C. Phototoxicity in live fluorescence microscopy, and how to avoid it. BioEssays 39, 1700003 (2017).

Article Google Scholar

Huisken, J. & Stainier, D. Y. R. Even fluorescence excitation by multidirectional selective plane illumination microscopy (mSPIM). Opt. Lett. 32, 2608–2610 (2007).

Article PubMed Google Scholar

Fahrbach, F. O. & Rohrbach, A. Propagation stability of self-reconstructing Bessel beams enables contrast-enhanced imaging in thick media. Nat. Commun. 3, 632 (2012).

Article PubMed Google Scholar

Chen, B.-C. et al. Lattice light-sheet microscopy: Imaging molecules to embryos at high spatiotemporal resolution. Science 346, 1257998 (2014). Seminal paper on subcellular light-sheet microscopy using optical lattices.

Article PubMed PubMed Central Google Scholar

Vettenburg, T. et al. Light-sheet microscopy using an Airy beam. Nat. Methods 11, 541–544 (2014).

Article CAS PubMed Google Scholar

Chang, B.-J. et al. Universal light-sheet generation with field synthesis. Nat. Methods https://doi.org/10.1038/s41592-019-0327-9 (2019).

Golub, I., Chebbi, B. & Golub, J. Toward the optical "magic carpet": reducing the divergence of a light sheet below the diffraction limit. Opt. Lett. 40, 5121–5124 (2015).

Article PubMed Google Scholar

Tomer, R. et al. SPED light sheet microscopy: fast mapping of biological system structure and function. Cell 163, 1796–1806 (2015).

Article CAS PubMed PubMed Central Google Scholar

Dunsby, C. Optically sectioned imaging by oblique plane microscopy. Opt. Express 16, 20306–20316 (2008). Seminal paper on oblique plane microscopy, which enables light-sheet microscopy through a single objective.

Article CAS PubMed Google Scholar

Fahrbach, F. O., Voigt, F. F., Schmid, B., Helmchen, F. & Huisken, J. Rapid 3D light-sheet microscopy with a tunable lens. Opt. Express 21, 21010–21026 (2013).

Article PubMed Google Scholar

Hörl, D. et al. BigStitcher: reconstructing high-resolution image datasets of cleared and expanded samples. Nat. Methods 16, 870–874 (2019).

Article PubMed Google Scholar

He, J. & Huisken, J. Image quality guided smart rotation improves coverage in microscopy. Nat. Commun. 11, 150 (2020).

Article CAS PubMed PubMed Central Google Scholar

Swoger, J., Verveer, P., Greger, K., Huisken, J. & Stelzer, E. H. K. Multi-view image fusion improves resolution in three-dimensional microscopy. Opt. Express 15, 8029–8042 (2007).

Article PubMed Google Scholar

Krzic, U., Gunther, S., Saunders, T. E., Streichan, S. J. & Hufnagel, L. Multiview light-sheet microscope for rapid in toto imaging. Nat. Methods 9, 730–733 (2012).

Article CAS PubMed Google Scholar

Tomer, R., Khairy, K., Amat, F. & Keller, P. J. Quantitative high-speed imaging of entire developing embryos with simultaneous multiview light-sheet microscopy. Nat. Methods 9, 755–763 (2012). Seminal paper on simultaneous multi-view imaging in light-sheet microscopy.

Article CAS PubMed Google Scholar

Schmid, B. et al. High-speed panoramic light-sheet microscopy reveals global endodermal cell dynamics. Nat. Commun. 4, 2207 (2013).

Article PubMed Google Scholar

Kaufmann, A., Mickoleit, M., Weber, M. & Huisken, J. Multilayer mounting enables long-term imaging of zebrafish development in a light sheet microscope. Development 139, 3242–3247 (2012).

Article CAS PubMed Google Scholar

Glaser, A. K. et al. Multi-immersion open-top light-sheet microscope for high-throughput imaging of cleared tissues. Nat. Commun. 10, 2781 (2019). This study introduces a light-sheet system to image cleared tissue samples in an open top geometry.

Article PubMed PubMed Central Google Scholar

Voigt, F. F. et al. The mesoSPIM initiative: open-source light-sheet microscopes for imaging cleared tissue. Nat. Methods 16, 1105–1108 (2019). Important paper that introduced mesoscopic cleared tissue light-sheet microscopy.

Article CAS PubMed PubMed Central Google Scholar

Chakraborty, T. et al. Light-sheet microscopy of cleared tissues with isotropic, subcellular resolution. Nat. Methods 16, 1109–1113 (2019). In this paper, a light-sheet microscopy method for cleared tissue imaging is presented that is compatible with any clearing technique and can provide isotropic, subcellular resolution.

Article CAS PubMed PubMed Central Google Scholar

Gao, R. et al. Cortical column and whole-brain imaging with molecular contrast and nanoscale resolution. Science 363, eaau8302 (2019).

Article CAS PubMed PubMed Central Google Scholar

Zhang, Y. & Gross, H. Systematic design of microscope objectives. Part I: System review and analysis. Adv. Opt. Technol. 8, 313–347 (2019).

Google Scholar

Dean, K. M. et al. Isotropic imaging across spatial scales with axially swept light-sheet microscopy. Nat. Protoc. https://doi.org/10.1038/s41596-022-00706-6 (2022).

Wu, X. & Hammer, J. A. ZEISS Airyscan: optimizing usage for fast, gentle, super-resolution imaging. In: Confocal Microscopy: Methods and Protocols (eds. Brzostowski, J. & Sohn, H.) 111–130 (Springer US, 2021). https://doi.org/10.1007/978-1-0716-1402-0_5.

Markow, T. A., Beall, S. & Matzkin, L. M. Egg size, embryonic development time and ovoviviparity in Drosophila species. J. Evolut. Biol. 22, 430–434 (2009).

Article CAS Google Scholar

Shamir, M., Bar-On, Y., Phillips, R. & Milo, R. SnapShot: timescales in cell biology. Cell 164, 1302–1302.e1 (2016).

Article CAS PubMed Google Scholar

Tsai, Y.-C. et al. Rapid high resolution 3D imaging of expanded biological specimens with lattice light sheet microscopy. Methods 174, 11–19 (2020).

Article CAS PubMed Google Scholar

Dean, K. M., Roudot, P., Welf, E. S., Danuser, G. & Fiolka, R. Deconvolution-free subcellular imaging with axially swept light sheet microscopy. Biophys. J. 108, 2807–2815 (2015).

Article CAS PubMed PubMed Central Google Scholar

Laissue, P. P., Alghamdi, R. A., Tomancak, P., Reynaud, E. G. & Shroff, H. Assessing phototoxicity in live fluorescence imaging. Nat. Methods 14, 657–661 (2017).

Article CAS PubMed Google Scholar

Lecoq, J., Orlova, N. & Grewe, B. F. Wide. Fast. Deep: recent advances in multiphoton microscopy of in vivo neuronal activity. J. Neurosci. 39, 9042 (2019).

Article CAS PubMed PubMed Central Google Scholar

Zipfel, W. R., Williams, R. M. & Webb, W. W. Nonlinear magic: multiphoton microscopy in the biosciences. Nat. Biotechnol. 21, 1369–1377 (2003).

Article CAS PubMed Google Scholar

Horton, N. G. et al. In vivo three-photon microscopy of subcortical structures within an intact mouse brain. Nat. Photonics 7, 205–209 (2013). First practical demonstration of three-photon imaging, which can increase the imaging depth in scattering tissues.

Article CAS PubMed PubMed Central Google Scholar

Yu, Z. et al. Wavefront shaping: a versatile tool to conquer multiple scattering in multidisciplinary fields. Innovation 3, 100292 (2022).

PubMed PubMed Central Google Scholar

Richardson, D. S. et al. Tissue clearing. Nat. Rev. Methods Prim. 1, 84 (2021).

Article CAS Google Scholar

Chen, F., Tillberg, P. W. & Boyden, E. S. Expansion microscopy. Science 347, 543–548 (2015). This paper introduces expansion microscopy to overcome physical limitations in imaging.

Article CAS PubMed PubMed Central Google Scholar

Wassie, A. T., Zhao, Y. & Boyden, E. S. Expansion microscopy: principles and uses in biological research. Nat. Methods 16, 33–41 (2019).

Article CAS PubMed Google Scholar

Jacques, S. L. Optical properties of biological tissues: a review. Phys. Med. Biol. 58, R37 (2013).

Article PubMed Google Scholar

Sandell, J. L. & Zhu, T. C. A review of in-vivo optical properties of human tissues and its impact on PDT. J. Biophotonics 4, 773–787 (2011).

Article PubMed PubMed Central Google Scholar

Berke, I. M., Miola, J. P., David, M. A., Smith, M. K. & Price, C. Seeing through musculoskeletal tissues: improving in situ imaging of bone and the lacunar canalicular system through optical clearing. PLOS One 11, e0150268 (2016).

Article PubMed PubMed Central Google Scholar

Si, K., Fiolka, R. & Cui, M. Fluorescence imaging beyond the ballistic regime by ultrasound-pulse-guided digital phase conjugation. Nat. Photonics 6, 657–661 (2012). This study applies ultrasound assisted wavefront shaping to image deep in scattering media.

Article CAS PubMed PubMed Central Google Scholar

White, R. M. et al. Transparent adult Zebrafish as a tool for in vivo transplantation analysis. Cell Stem Cell 2, 183–189 (2008).

Article CAS PubMed PubMed Central Google Scholar

Iijima, K., Oshima, T., Kawakami, R. & Nemoto, T. Optical clearing of living brains with MAGICAL to extend in vivo imaging. iScience 24, 101888 (2021).

Article CAS PubMed Google Scholar

Boothe, T. et al. A tunable refractive index matching medium for live imaging cells, tissues and model organisms. eLife 6, e27240 (2017).

Article PubMed PubMed Central Google Scholar

Wang, F. et al. Light-sheet microscopy in the near-infrared II window. Nat. Methods 16, 545–552 (2019).

Article PubMed PubMed Central Google Scholar

Shi, L., Sordillo, L. A., Rodríguez-Contreras, A. & Alfano, R. Transmission in near-infrared optical windows for deep brain imaging. J. Biophotonics 9, 38–43 (2016).

Article CAS PubMed Google Scholar

Lavis, L. D. & Raines, R. T. Bright ideas for chemical biology. ACS Chem. Biol. 3, 142–155 (2008).

Article CAS PubMed PubMed Central Google Scholar

Li, B., Zhao, M. & Zhang, F. Rational design of near-infrared-II organic molecular dyes for bioimaging and biosensing. ACS Mater. Lett. 2, 905–917 (2020).

Article CAS Google Scholar

Gil, H. M. et al. NIR-quantum dots in biomedical imaging and their future. iScience 24, 102189 (2021).

Article CAS PubMed PubMed Central Google Scholar

Welsher, K. et al. A route to brightly fluorescent carbon nanotubes for near-infrared imaging in mice. Nat. Nanotechnol. 4, 773–780 (2009).

Article CAS PubMed PubMed Central Google Scholar

Simmonds, R. D. & Booth, M. J. Modelling of multi-conjugate adaptive optics for spatially variant aberrations in microscopy. J. Opt. 15, 094010 (2013).

Article Google Scholar

Kam, Z., Kner, P., Agard, D. & Sedat, J. W. Modelling the application of adaptive optics to wide-field microscope live imaging. J. Microsc. 226, 33–42 (2007).

Article PubMed Google Scholar

Booth, M. J. Adaptive optics in microscopy. In Optical and Digital Image Processing 295–322 https://doi.org/10.1002/9783527635245.ch14 (2011).

Hampson, K. M. et al. Adaptive optics for high-resolution imaging. Nat. Rev. Methods Prim. 1, 68 (2021).

Article CAS Google Scholar

Liu, T.-L. et al. Observing the cell in its native state: imaging subcellular dynamics in multicellular organisms. Science 360, eaaq1392 (2018). Seminal paper on the use of adaptive optics in light-sheet microscopy.

Article PubMed PubMed Central Google Scholar

Malivert, M. et al. Active image optimization for lattice light sheet microscopy in thick samples. Biomed. Opt. Express 13, 6211–6228 (2022).

Article PubMed PubMed Central Google Scholar

Krishnan, A. P. et al. Optical aberration correction via phase diversity and deep learning. bioRxiv 2020.04.05.026567 https://doi.org/10.1101/2020.04.05.026567 (2020).

Hu, Q. et al. Universal adaptive optics for microscopy through embedded neural network control. arXiv https://doi.org/10.48550/ARXIV.2301.02647 (2023).

Banerjee, K., Rajaeipour, P., Ataman, Ç. & Zappe, H. Optofluidic adaptive optics. Appl. Opt. 57, 6338–6344 (2018).

Article CAS PubMed Google Scholar

Klimas, A. et al. Magnify is a universal molecular anchoring strategy for expansion microscopy. Nat. Biotechnol. https://doi.org/10.1038/s41587-022-01546-1 (2023).

Chang, J.-B. et al. Iterative expansion microscopy. Nat. Methods 14, 593–599 (2017).

Article CAS PubMed PubMed Central Google Scholar

Kubalová, I. et al. Prospects and limitations of expansion microscopy in chromatin ultrastructure determination. Chromosome Res. 28, 355–368 (2020).

Article PubMed PubMed Central Google Scholar

Gao, L. Extend the field of view of selective plan illumination microscopy by tiling the excitation light sheet. Opt. Express 23, 6102–6111 (2015).

Article PubMed Google Scholar

Chen, Y. et al. A versatile tiling light sheet microscope for imaging of cleared tissues. Cell Rep. 33, 108349 (2020).

Article CAS PubMed Google Scholar

Royer, L. A. et al. Adaptive light-sheet microscopy for long-term, high-resolution imaging in living organisms. Nat. Biotechnol. 34, 1267–1278 (2016). This paper introduced adaptive optimization of light-sheet parameters during an ongoing imaging session.

Article CAS PubMed Google Scholar

Chakrova, N., Canton, A. S., Danelon, C., Stallinga, S. & Rieger, B. Adaptive illumination reduces photobleaching in structured illumination microscopy. Biomed. Opt. Express 7, 4263–4274 (2016).

Article PubMed PubMed Central Google Scholar

Štefko, M., Ottino, B., Douglass, K. M. & Manley, S. Autonomous illumination control for localization microscopy. Opt. Express 26, 30882–30900 (2018).

Article PubMed Google Scholar

Chu, K. K., Lim, D. & Mertz, J. Enhanced weak-signal sensitivity in two-photon microscopy by adaptive illumination. Opt. Lett. 32, 2846–2848 (2007).

Article PubMed Google Scholar

Li, B., Wu, C., Wang, M., Charan, K. & Xu, C. An adaptive excitation source for high-speed multiphoton microscopy. Nat. Methods 17, 163–166 (2020).

Article CAS PubMed Google Scholar

Pinkard, H. et al. Learned adaptive multiphoton illumination microscopy for large-scale immune response imaging. Nat. Commun. 12, 1916 (2021).

Article PubMed PubMed Central Google Scholar

Heine, J. et al. Adaptive-illumination STED nanoscopy. Proc. Natl Acad. Sci. 114, 9797–9802 (2017).

Article CAS PubMed PubMed Central Google Scholar

Abouakil, F. et al. An adaptive microscope for the imaging of biological surfaces. Light. Sci. Appl. 10, 210 (2021).

Article CAS PubMed PubMed Central Google Scholar

Dreier, J. et al. Smart scanning for low-illumination and fast RESOLFT nanoscopy in vivo. Nat. Commun. 10, 556 (2019).

Article PubMed PubMed Central Google Scholar

Almada, P. et al. Automating multimodal microscopy with NanoJ-Fluidics. Nat. Commun. 10, 1223 (2019).

Article PubMed PubMed Central Google Scholar

Alvelid, J., Damenti, M., Sgattoni, C. & Testa, I. Event-triggered STED imaging. Nat. Methods 19, 1268–1275 (2022).

Article CAS PubMed PubMed Central Google Scholar

Ronneberger, O., Fischer, P. & Brox, T. U-net: Convolutional networks for biomedical image segmentation. In International Conference on Medical image computing and computer-assisted intervention 234–241 (Springer, 2015).

Jouppi, N. P. et al. In-datacenter performance analysis of a tensor processing unit. In 2017 ACM/IEEE 44th Annual International Symposium on Computer Architecture (ISCA) 1–12 https://doi.org/10.1145/3079856.3080246 (2017).

Mahecic, D. et al. Event-driven acquisition for content-enriched microscopy. Nat. Methods 19, 1262–1267 (2022).

Article CAS PubMed Google Scholar

Edelstein, A., Amodaj, N., Hoover, K., Vale, R. & Stuurman, N. Computer control of microscopes using µManager. Curr. Protoc. Mol. Biol. 92, 14.20.1–14.20.17 (2010).

Article Google Scholar

Pinkard, H. et al. Pycro-Manager: open-source software for customized and reproducible microscope control. Nat. Methods 18, 226–228 (2021).

Article CAS PubMed PubMed Central Google Scholar

Tosi, S. et al. AutoScanJ: a suite of ImageJ scripts for intelligent microscopy. Front. Bioinforma. 1, 627626 (2021).

Article Google Scholar

Moreno, X. C., Al-Kadhimi, S., Alvelid, J., Bodén, A. & Testa, I. ImSwitch: generalizing microscope control in Python. J. Open Source Softw. 6, 3394 (2021).

Article Google Scholar

Fox, Z. R. et al. Enabling reactive microscopy with MicroMator. Nat. Commun. 13, 2199 (2022).

Article CAS PubMed PubMed Central Google Scholar

Mitra-Behura, S., Fiolka, R. P. & Daetwyler, S. Singularity containers improve reproducibility and ease of use in computational image analysis workflows. Front. Bioinforma. 1, 757291 (2022).

Article Google Scholar

Kurtzer, G. M., Sochat, V. & Bauer, M. W. Singularity: scientific containers for mobility of compute. PLOS One 12, e0177459 (2017).

Article PubMed PubMed Central Google Scholar

Jumper, J. et al. Highly accurate protein structure prediction with AlphaFold. Nature 596, 583–589 (2021).

Article CAS PubMed PubMed Central Google Scholar

Nakano, R. et al. WebGPT: Browser-assisted question-answering with human feedback. arXiv https://doi.org/10.48550/ARXIV.2112.09332 (2021).

Williams, E. et al. Image Data Resource: a bioimage data integration and publication platform. Nat. Methods 14, 775–781 (2017).

Article CAS PubMed PubMed Central Google Scholar

Chen, C. et al. Fast and scalable search of whole-slide images via self-supervised deep learning. Nat. Biomed. Eng. 6, 1420–1434 (2022).

Article PubMed PubMed Central Google Scholar

Dawoud, Y., Bouazizi, A., Ernst, K., Carneiro, G. & Belagiannis, V. Knowing what to label for few shot microscopy image cell segmentation. arXiv https://doi.org/10.48550/ARXIV.2211.10244 (2022).

Pachitariu, M. & Stringer, C. Cellpose 2.0: how to train your own model. Nat. Methods 19, 1634–1641 (2022).

Article CAS PubMed PubMed Central Google Scholar

Kugler, E. C. et al. Cerebrovascular endothelial cells form transient Notch-dependent cystic structures in zebrafish. EMBO Rep. 20, e47047 (2019).

Article PubMed PubMed Central Google Scholar

Brundyn, A. Demystifying unified memory on Jetson. Presentation. NVIDIA conference GTC Spring 2022. https://www.nvidia.com/en-us/on-demand/session/gtcspring22-se2600 (2022).

Donoho, D. L. Compressed sensing. IEEE Trans. Inf. Theory 52, 1289–1306 (2006).

Candès, E. J., Romberg, J. K. & Tao, T. Stable signal recovery from incomplete and inaccurate measurements. Commun. Pure Appl. Math. 59, 1207–1223 (2006).

Baraniuk, R. G. Compressive sensing [Lecture Notes]. IEEE Signal Process. Mag. 24, 118–121 (2007).

Liu, G. et al. Image inpainting for irregular holes using partial convolutions. Preprint at arXiv https://doi.org/10.48550/ARXIV.1804.07723 (2018).

Iizuka, S., Simo-Serra, E. & Ishikawa, H. Globally and locally consistent image completion. ACM Trans. Graph. 36, 1–14 (2017).

Tian, K. et al. Designing BERT for convolutional networks: sparse and hierarchical masked modeling. Preprint at arXiv https://doi.org/10.48550/ARXIV.2301.03580 (2023).

Yu, J. et al. Free-form image inpainting with gated convolution. Preprint at arXiv https://doi.org/10.48550/ARXIV.1806.03589 (2018).

Adcock, B., Hansen, A. C., Poon, C. & Roman, B. Breaking the coherence barrier: a new theory for compressed sensing. Forum Math. Sigma 5, e4 (2017).

Calisesi, G. et al. Compressed sensing in fluorescence microscopy. Prog. Biophys. Mol. Biol. 168, 66–80 (2022).

Gao, L., Liang, J., Li, C. & Wang, L. V. Single-shot compressed ultrafast photography at one hundred billion frames per second. Nature 516, 74–77 (2014). In this study, ultrafast imaging is demonstrated leveraging compressed sensing.

Woringer, M., Darzacq, X., Zimmer, C. & Mir, M. Faster and less phototoxic 3D fluorescence microscopy using a versatile compressed sensing scheme. Opt. Express 25, 13668–13683 (2017).

Fang, C. et al. Minutes-timescale 3D isotropic imaging of entire organs at subcellular resolution by content-aware compressed-sensing light-sheet microscopy. Nat. Commun. 12, 107 (2021).

Roman, B., Hansen, A. & Adcock, B. On asymptotic structure in compressed sensing. Preprint at arXiv https://doi.org/10.48550/ARXIV.1406.4178 (2014).

Zhang, X., Zhai, D., Li, T., Zhou, Y. & Lin, Y. Image inpainting based on deep learning: a review. Inf. Fusion 90, 74–94 (2023).

Adcock, B. & Hansen, A. C. From compressed sensing to deep learning. In Compressive Imaging: Structure, Sampling, Learning (eds Hansen, A. C. & Adcock, B.) 427–430 (Cambridge University Press, 2021). https://doi.org/10.1017/9781108377447.023.

Ongie, G. et al. Deep learning techniques for inverse problems in imaging. Preprint at arXiv https://doi.org/10.48550/ARXIV.2005.06001 (2020).

Gottschling, N. M., Antun, V., Hansen, A. C. & Adcock, B. The troublesome kernel – On hallucinations, no free lunches and the accuracy-stability trade-off in inverse problems. Preprint at arXiv https://doi.org/10.48550/ARXIV.2001.01258 (2020).

Adler, J. & Öktem, O. Deep Bayesian inversion. Preprint at arXiv https://doi.org/10.48550/ARXIV.1811.05910 (2018).

Gal, Y. & Ghahramani, Z. Dropout as a Bayesian approximation: representing model uncertainty in deep learning. In Proceedings of the 33rd International Conference on Machine Learning (eds Balcan, M. F. & Weinberger, K. Q.) vol. 48, 1050–1059 (PMLR, 2016).

Hüllermeier, E. & Waegeman, W. Aleatoric and epistemic uncertainty in machine learning: an introduction to concepts and methods. Mach. Learn. 110, 457–506 (2021).

Kendall, A. & Gal, Y. What uncertainties do we need in Bayesian deep learning for computer vision? In Advances in Neural Information Processing Systems (eds. Guyon, I. et al.) vol. 30 (Curran Associates, Inc., 2017).

Shorten, C. & Khoshgoftaar, T. M. A survey on image data augmentation for deep learning. J. Big Data 6, 60 (2019).

Weiss, K., Khoshgoftaar, T. M. & Wang, D. A survey of transfer learning. J. Big Data 3, 9 (2016).

Yao, B. et al. Depixelation and image restoration with meta-learning in fiber-bundle-based endomicroscopy. Opt. Express 30, 5038–5050 (2022).

Khadka, R. et al. Meta-learning with implicit gradients in a few-shot setting for medical image segmentation. Comput. Biol. Med. 143, 105227 (2022).

Qiao, C. et al. Rationalized deep learning super-resolution microscopy for sustained live imaging of rapid subcellular processes. Nat. Biotechnol. https://doi.org/10.1038/s41587-022-01471-3 (2022).

Li, Y. et al. Incorporating the image formation process into deep learning improves network performance. Nat. Methods 19, 1427–1437 (2022).

Ravishankar, S., Ye, J. C. & Fessler, J. A. Image reconstruction: from sparsity to data-adaptive methods and machine learning. Proc. IEEE 108, 86–109 (2020).

Wang, Y. et al. Blind sparse inpainting reveals cytoskeletal filaments with sub-Nyquist localization. Optica 4, 1277–1284 (2017).

Zhou, Z., Kuang, W., Wang, Z. & Huang, Z.-L. ResNet-based image inpainting method for enhancing the imaging speed of single molecule localization microscopy. Opt. Express 30, 31766–31784 (2022).

Ouyang, W., Aristov, A., Lelek, M., Hao, X. & Zimmer, C. Deep learning massively accelerates super-resolution localization microscopy. Nat. Biotechnol. 36, 460–468 (2018).

Weigert, M. et al. Content-aware image restoration: pushing the limits of fluorescence microscopy. Nat. Methods 15, 1090–1097 (2018).

Horstmeyer, R., Chen, R. Y., Kappes, B. & Judkewitz, B. Convolutional neural networks that teach microscopes how to image. Preprint at arXiv https://doi.org/10.48550/ARXIV.1709.07223 (2017).

Pégard, N. C. et al. Compressive light-field microscopy for 3D neural activity recording. Optica 3, 517–524 (2016).

Theer, P., Dragneva, D. & Knop, M. πSPIM: high NA high resolution isotropic light-sheet imaging in cell culture dishes. Sci. Rep. 6, 32880 (2016).

Cao, B., Coelho, S., Li, J., Wang, G. & Pertsinidis, A. Volumetric interferometric lattice light-sheet imaging. Nat. Biotechnol. 39, 1385–1393 (2021).

McGorty, R., Xie, D. & Huang, B. High-NA open-top selective-plane illumination microscopy for biological imaging. Opt. Express 25, 17798–17810 (2017). This study introduces optical concepts for open-top light-sheet microscopy.

Bouchard, M. B. et al. Swept confocally-aligned planar excitation (SCAPE) microscopy for high-speed volumetric imaging of behaving organisms. Nat. Photonics 9, 113–119 (2015). This paper introduces a rapid scan mechanism for OPM, which greatly increases the acquisition speed in light-sheet microscopy.

Hoffmann, M., Henninger, J., Richter, L. & Judkewitz, B. Brain-wide imaging of an adult vertebrate with image transfer oblique plane microscopy. Preprint at bioRxiv https://doi.org/10.1101/2022.05.16.492103 (2022).

Shao, W. et al. Mesoscopic oblique plane microscopy with a diffractive light-sheet for large-scale 4D cellular resolution imaging. Optica 9, 1374–1385 (2022).

Singh, R. et al. Oblique plane microscope for mesoscopic imaging of freely moving organisms with cellular resolution. Opt. Express 31, 2292–2301 (2023).

Hoffmann, M. & Judkewitz, B. Diffractive oblique plane microscopy. Optica 6, 1166–1170 (2019). This paper introduces a mesoscopic variant of OPM leveraging a diffractive element in the detection path.

Chen, B. et al. Increasing the field-of-view in oblique plane microscopy via optical tiling. Biomed. Opt. Express 13, 5616–5627 (2022).

Chen, B. et al. Resolution doubling in light-sheet microscopy via oblique plane structured illumination. Nat. Methods 19, 1419–1426 (2022).

Glaser, A. K. et al. A hybrid open-top light-sheet microscope for versatile multi-scale imaging of cleared tissues. Nat. Methods 19, 613–619 (2022).

Carl Zeiss Microscopy GmbH. ZEISS Lattice Lightsheet 7: long-term volumetric imaging of living cells. https://www.zeiss.com/microscopy/en/products/light-microscopes/light-sheet-microscopes/lattice-lightsheet-7.html (2023).

Botcherby, E. J., Juskaitis, R., Booth, M. J. & Wilson, T. Aberration-free optical refocusing in high numerical aperture microscopy. Opt. Lett. 32, 2007–2009 (2007). This seminal paper introduced remote focusing, which is a key optical principle used in OPM.

Millett-Sikking, A. amsikking/any_immersion_remote_refocus_microscopy: v1.0.1. https://doi.org/10.5281/zenodo.7425705 (2022). This study extends the concept of remote focusing, which is the basis for OPM, to imaging in any refractive index media.

We thank the National Institutes of Health (grant no. R35GM133522) for support. The authors thank Dr. Anna Bajur, Dr. Kevin Dean and Dr. Felix Zhou for comments and feedback on the manuscript.

Lyda Hill Department of Bioinformatics, University of Texas Southwestern Medical Center, Dallas, TX, USA

Stephan Daetwyler & Reto Paul Fiolka

Department of Cell Biology, University of Texas Southwestern Medical Center, Dallas, TX, USA

Stephan Daetwyler & Reto Paul Fiolka

S.D. and R.F. conceptualized and wrote the manuscript.

Correspondence to Reto Paul Fiolka.

The authors declare no competing interests.

Communications Biology thanks the anonymous reviewers for their contribution to the peer review of this work. Primary Handling Editor: Manuel Breuer.

Publisher's note Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons license, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons license, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons license and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this license, visit http://creativecommons.org/licenses/by/4.0/.

Daetwyler, S., Fiolka, R.P. Light-sheets and smart microscopy, an exciting future is dawning. Commun Biol 6, 502 (2023). https://doi.org/10.1038/s42003-023-04857-4

Received: 04 February 2023

Accepted: 20 April 2023

Published: 09 May 2023

DOI: https://doi.org/10.1038/s42003-023-04857-4
