Aug 13, 2018

Google-developed AI can read eye scans as well as humans

AI segments a 3D eye scan into sections representing different types of tissue. Animation: DeepMind

Doctors at a U.K. eye hospital are getting algorithmic help interpreting the results of 3D eye scans, using a system developed at Google's DeepMind that can identify more than 50 eye problems and recommend a course of action with human expert-level accuracy.

Why it matters: DeepMind's system shows an intermediate step in its work and tells doctors how confident it is in its assessment. This is crucial because AI systems are often too opaque to explain their reasoning, making them risky to deploy in high-stakes environments like hospitals.

The big picture: Black-box algorithms make it difficult to check results for accuracy, and their ambiguous reasoning can decrease trust in their recommendations.

  • DeepMind’s two-step approach, published Monday in the journal Nature Medicine, tries to mitigate those problems.
  • The system has been tested for two years at Moorfields Eye Hospital in London, and still needs to undergo clinical trials before it can be implemented more broadly.

How it works: The system reads optical coherence tomography (OCT) scans, which are 3D representations of the back of a patient's eye.

  • In the first step, a neural network segments the scan, which is difficult for humans to read in its raw form, into colored areas that represent different types of tissue.
  • Then a second neural network, a classifier, analyzes the segmentation map and identifies signs of disease.
  • This second step offers clinicians the most likely diagnosis and recommends a course of action.
  • The results are paired with a percentage that shows the system’s confidence in a diagnosis or recommendation.
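The two stages above can be sketched in miniature. The functions below are toy stand-ins for the real networks, and the tissue and decision labels are illustrative, not DeepMind's actual taxonomy:

```python
# Hypothetical labels -- illustrative only, not DeepMind's actual label set.
TISSUES = ["retina", "intraretinal fluid", "vitreous", "other"]

def segment(raw_scan):
    """Stage 1 (toy stand-in for a 3D segmentation network): label each
    voxel of the raw OCT volume with a tissue type, producing a
    device-independent tissue map."""
    return [TISSUES[int(v) % len(TISSUES)] for v in raw_scan]

def classify(seg_map):
    """Stage 2 (toy stand-in for the classification network): read only
    the tissue map and return a referral decision with a score."""
    fluid = seg_map.count("intraretinal fluid") / len(seg_map)
    decision = "urgent referral" if fluid > 0.2 else "routine"
    return decision, fluid

scan = [0, 1, 1, 2, 1, 3, 0, 1]          # toy flattened OCT volume
decision, score = classify(segment(scan))
```

The key design point is that the classifier never touches raw voxels, only the intermediate tissue map, which is what clinicians can inspect when checking the system's reasoning.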

"One of the reasons we're putting so much effort into explainability and interpretation is that we desperately want to build trust with nurses and doctors."
— Mustafa Suleyman, DeepMind’s co-founder

The system makes errors on just 5.5% of cases, matching or exceeding the accuracy of human eye experts, the DeepMind and University College London researchers wrote in the paper.

Since OCT scans can be ambiguous — different eye doctors will often interpret them differently — the DeepMind system’s recommendation is the result of not one analysis but a combination of 25 of them.

  • DeepMind uses five slightly different segmentation networks to create five segmentation maps. Then, it runs five slightly different classification networks on each of those maps, resulting in 25 interpretations.
  • The confidence percentages displayed to clinicians show the results of these iterations. If nearly all the analyses indicated that a patient has choroidal neovascularization, that diagnosis would be highlighted with high confidence in the final result.
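The aggregation step above amounts to counting votes across the 25 network pairs. A minimal sketch, assuming each pair emits a single diagnosis label (the labels here are hypothetical):

```python
from collections import Counter

def ensemble_confidence(votes):
    """Return, for each candidate diagnosis, the fraction of the
    (segmentation, classification) network pairs that voted for it."""
    counts = Counter(votes)
    n = len(votes)
    return {label: c / n for label, c in counts.items()}

# 5 segmentation nets x 5 classification nets = 25 independent analyses.
# Here most pairs agree on choroidal neovascularization ("CNV").
votes = ["CNV"] * 21 + ["routine"] * 3 + ["observation"]
conf = ensemble_confidence(votes)
top = max(conf, key=conf.get)   # "CNV", with 21/25 = 84% confidence
```

In this toy run the clinician would see "CNV" flagged with 84% confidence, while the scattered minority votes signal how ambiguous the scan was.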

The two-step process also makes the system easier to retrain for different scanning equipment, so it can work on new, state-of-the-art devices soon after they come out, or on older scanners that might be less accurate.

  • Without the intermediate step, the system would need to see tens or even hundreds of thousands of scans from a new piece of equipment in order to learn to interpret them correctly, DeepMind Health research lead Trevor Back told Axios.
  • But DeepMind’s system needs fewer than 200 scans from a new device to retrain the segmentation network, which provides the color-coded map used for diagnoses and recommendations.
  • The DeepMind team focused on the product’s generalizability, Back said, in an effort to create a system that will actually be useful in eye clinics and hospitals.
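The reason only the segmentation stage needs retraining is that it is the only stage that sees raw device output; the classifier reads the device-independent tissue map and can be reused unchanged. A sketch of that decoupling, with hypothetical stand-in functions:

```python
def classifier(seg_map):
    """Shared across devices: only ever sees the tissue map,
    never raw scanner output."""
    return "urgent" if seg_map.count("fluid") > 2 else "routine"

def segmenter_device_b(scan):
    """Hypothetical segmenter retrained for a new scanner whose raw
    intensities sit on a different scale (fewer than 200 labeled
    scans sufficed, per the paper)."""
    return ["fluid" if v > 50 else "retina" for v in scan]

def make_pipeline(segmenter, classify_fn):
    """Compose the two stages; swapping in a retrained segmenter
    leaves the classifier untouched."""
    def pipeline(scan):
        return classify_fn(segmenter(scan))
    return pipeline

pipeline_b = make_pipeline(segmenter_device_b, classifier)
result = pipeline_b([60, 70, 80, 10, 5])
```

Adapting to yet another scanner means training one new `segmenter` function, not revisiting the diagnosis logic.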

What’s next: Since OCT scans are 3D, the technology DeepMind developed to analyze them could be useful for other types of 3D medical imaging, like CT scans, Suleyman said.

  • "3D imaging is one of the harder modalities to work on," he told Axios. "We want to learn as much as possible about the way our algorithms work in order to use them in other areas of radiology."
  • Suleyman said this research could help advance fundamental research into AI image and video understanding beyond hospital uses.

Go deeper: Read a DeepMind blog post about the new research, or the Nature Medicine paper for more technical details.
