A Brief History of Computational Adaptive Optics
How “just compute it” gradually replaced expensive hardware
Before Computation: The Hardware Era
In the 1990s, astronomers faced a fundamental problem: atmospheric turbulence blurs telescope images. Their solution, adaptive optics (AO), was elegant but expensive. A wavefront sensor measures distortions in real time, and a deformable mirror physically reshapes the incoming wavefront to compensate.
This worked beautifully. It also cost hundreds of thousands of dollars and required constant calibration.
By the late 1990s, ophthalmologists had adopted the technology for retinal imaging (Liang et al., 1997). The hardware complexity remained: deformable mirrors, Shack–Hartmann sensors, control loops running at kilohertz rates.
The implicit assumption: aberrations must be corrected optically.
2007: The First Crack — ISAM
Ralston et al. (Nature Physics, 2007) asked a different question: what if we don’t need the mirror at all?
Their insight was specific to coherent imaging systems like optical coherence tomography (OCT). These systems preserve phase information in their measurements. And phase encodes both the sample structure and the optical distortions.
ISAM (Interferometric Synthetic Aperture Microscopy) showed that defocus—the blur from being out of focus—could be corrected computationally, after acquisition. No moving parts. Just mathematics.
The limitation: ISAM only worked for known, deterministic physics. Defocus follows predictable equations. Apply the inverse, get a sharper image.
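To make "just mathematics" concrete, here is a minimal sketch of the simplest member of this family: digitally refocusing a complex field by a known distance under the paraxial (Fresnel) approximation. Multiply the field's 2-D spectrum by the conjugate of the defocus phase, then transform back. This is illustrative only, not ISAM itself (ISAM performs a resampling in 3-D frequency space), and the function name and parameters are hypothetical.

```python
import numpy as np

def refocus(field, wavelength, dz, pixel_size):
    """Digitally refocus a complex 2-D field by a known defocus distance dz.

    Minimal sketch under the paraxial (Fresnel) approximation; not the
    full ISAM reconstruction, which resamples the 3-D spectrum.
    """
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)  # spatial frequencies along x (1/m)
    fy = np.fft.fftfreq(ny, d=pixel_size)  # spatial frequencies along y (1/m)
    FX, FY = np.meshgrid(fx, fy)
    # Conjugate Fresnel defocus phase: undoes propagation by dz
    H = np.exp(-1j * np.pi * wavelength * dz * (FX**2 + FY**2))
    return np.fft.ifft2(np.fft.fft2(field) * H)
```

The point stands on its own: when the physics is known and deterministic, the correction is a fixed operator applied after acquisition.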
2012: The Generalization — CAO
Adie et al. (PNAS, 2012) took the conceptual leap. What about aberrations we don’t know ahead of time?
Real optical systems have imperfections: astigmatism from lens stress, coma from misalignment, higher-order aberrations from the sample itself. These vary between systems, between days, between samples.
Computational Adaptive Optics (CAO) reframed the problem: instead of calculating a known correction, search for the correction that makes the image sharpest.
The key ingredients:
- Parameterize aberrations using Zernike polynomials (an orthogonal basis on circular pupils)
- Define a sharpness metric
- Optimize until image quality peaks
This was no longer analysis—it was estimation. The problem shifted from “compute the inverse” to “solve an inverse problem.”
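Here is a minimal sketch of that estimation loop, assuming a complex-valued image, a deliberately tiny two-mode Zernike basis, and a normalized-intensity-squared sharpness metric. The function names, the metric, and the choice of optimizer are my own illustrative picks, not the method of any particular paper; for simplicity the trial phase is applied across the whole frequency grid.

```python
import numpy as np
from scipy.optimize import minimize

# Two low-order Zernike modes as a deliberately tiny example basis
def z_defocus(rho, theta):
    return 2 * rho**2 - 1              # Z_2^0 (defocus)

def z_astig(rho, theta):
    return rho**2 * np.cos(2 * theta)  # Z_2^2 (astigmatism)

def apply_phase(field, coeffs, modes):
    """Apply a trial aberration phase to the field's 2-D spectrum."""
    ny, nx = field.shape
    y, x = np.mgrid[-1:1:1j * ny, -1:1:1j * nx]  # normalized pupil coordinates
    rho, theta = np.hypot(x, y), np.arctan2(y, x)
    phase = sum(c * m(rho, theta) for c, m in zip(coeffs, modes))
    spectrum = np.fft.fftshift(np.fft.fft2(field))
    return np.fft.ifft2(np.fft.ifftshift(spectrum * np.exp(-1j * phase)))

def sharpness(img):
    """Normalized intensity-squared metric; larger means sharper."""
    I = np.abs(img) ** 2
    p = I / I.sum()
    return np.sum(p ** 2)

def estimate_aberration(field, modes=(z_defocus, z_astig)):
    """Search for the Zernike coefficients that maximize sharpness."""
    cost = lambda c: -sharpness(apply_phase(field, c, modes))
    return minimize(cost, np.zeros(len(modes)), method="Nelder-Mead").x
```

Swap in more modes, a different metric (entropy, spatial-frequency content), or a different optimizer (the SPGD and PSO entries in the table below fill exactly this slot) and you have the basic shape of most CAO pipelines.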
Timeline graphic:
1990 ━━━━ Hardware AO (mirrors + sensors)
$$$$, complex
2007 ━━━━ ISAM (computational defocus)
Known physics only
2012 ━━━━ CAO (optimize unknown aberrations)
Estimation problem
2012 ━━━━ ??? (different field)
Inverse problems revolution
2024 ━━━━ Convergence
My research here 🎯
The Slow Build: 2012–2023
Progress was steady but incremental:
| Year | Development |
|---|---|
| 2012 | Adie et al. establish CAO framework |
| 2015–2017 | Metric-optimization algorithms such as SPGD (stochastic parallel gradient descent) and PSO (particle swarm optimization) |
| 2020 | Zhu et al. demonstrate real-time CAO |
| 2020 | Ruiz-Lopera et al. handle phase-unstable systems (SHARP) |
| 2021 | Liu et al. apply CAO to brain imaging |
| 2023 | Lee et al. achieve wide-field retinal imaging |
Almost all foundational work traces back to a small number of labs, particularly Stephen Boppart's group at UIUC, which produced many of the field's key researchers (Adie, Carney, Ralston, Shemonski, Liu).
The Closed Garden
If you come from modern open-source software culture, computational optics can feel strangely inaccessible. Papers describe algorithms, but code is rarely released. Implementation details live in supplementary materials—or in the heads of lab members.
This isn’t malice. It’s structural:
- Hardware coupling: Code assumes specific optical setups
- Small community: Sharing helps direct competitors
- Commercial interests: Patents and spinoffs matter
- Legacy tools: MATLAB workflows predate GitHub culture
- Data scarcity: Code without matched calibration data is often useless
The result: entry barriers stay high, knowledge concentrates in lineages, and replication requires reverse-engineering from equations.
The Arc So Far
Looking back, the pattern is clear:
1990s — Hardware AO: Correct aberrations physically. Expensive, complex, real-time.
2007 — ISAM: Some distortions are deterministic. Compute the inverse. No mirrors needed.
2012 — CAO: Unknown aberrations require estimation. Optimize until sharp.
2015–2023 — Refinement: Faster algorithms, broader applications, but still the same fundamental framework.
Each step: less hardware, more computation, more generality.
But also: more dependence on optimization choices, metrics, and modeling assumptions.
An Open Question
The progression suggests an obvious next step. Each era replaced hardware constraints with computational flexibility. Each transition happened when someone asked: “What if we don’t need that assumption?”
ISAM asked: “What if we don’t need the deformable mirror for defocus?”
CAO asked: “What if we don’t need to know the aberration ahead of time?”
The pattern continues. Another field experienced its own revolution in 2012—one that transformed how we approach inverse problems, learned priors, and high-dimensional optimization.
These two fields developed in parallel for over a decade. Different conferences. Different journals. Different cultures. Yet they were solving remarkably similar problems with increasingly compatible mathematics.
So here’s my question for you: what do you think I’m researching now? 🤔