
In the future, A.I. medicine will let patients own their health data

As A.I. takes over the grunt work, doctors can get back to healing


A.I. has the power to transform the world, or at least that’s what we’re constantly being told. Yes, it powers voice assistants and robotic dogs, but in some fields A.I. is doing more than making things easier and more convenient. In medicine and health care, it’s actually saving lives.

There has been pushback lately, though. Medical professionals and government officials remain bullish about artificial intelligence’s long-term transformative potential, but researchers are taking a more cautious, measured approach to implementation. Even so, in just the past year we’ve seen huge leaps forward that take A.I.’s potential in medical care and turn it into a reality.

Today, we stand on the brink of a significant transformation in how we’ll all experience and use our medical data in the future.


A.I. in a broken system

“We became serious about it as a discipline maybe five years ago, but my whole career I’ve been haunted by the need for this technology,” Dr. Richard White, chair of radiology at Ohio State University’s Wexner Medical Center, told Digital Trends about the institution’s foray into A.I.

“It’s up to the patient and the doctors to try to fix it, because we are the agents of the last resort.”

“For the longest time, I could not figure out why there wasn’t a use for computers to replicate what humans are doing: to laboriously look through all the images that were dynamic and try to come up with this, and then have the computer make the same mistakes that I was making was very frustrating for at least three decades.”

White said that when they tried to venture into radiomics, they saw a true need for computer smarts. “About four or five years ago, things were coming together and it was the right thing to do. It was meeting a dire need, and that’s when we started [getting] serious [with A.I.] in our labs.”

Radiologists from health systems participating at GTC this year, including White, Dr. Paul Chang, professor and vice chairman of radiology at the University of Chicago, and Dr. Christopher Hess, professor and chair of radiology at the University of California, San Francisco (UCSF), began exploring A.I. simply because the amount of medical data from improved imaging scans became overwhelming.


Advances to medical imaging technology resulted in the collection of significantly more patient data, Chang and his colleagues said, which led to doctor burnout. Doctors see A.I.’s transformative potential, as the technology could allow them to regain some of the time spent on laboriously going through scans, and this, according to Dr. Hess, allows “doctors to become healers again.”

But Chang cautions his fellow practitioners from being “seduced” by the new technology, noting that it must be correctly implemented to be effective. “You can’t prematurely incorporate A.I. into a system that’s broken,” he said.

In many ways, it’s that exact scenario that has led us to where we are today.

Owning your own data

The current practice of medicine is centered on algorithms and electronic health records. This software isn’t built around patient care or learning; it’s a system for categorizing treatments, which in turn allows insurers to pay doctors for the services performed.

“The industry has transformed doctors into clients to put in codes so that they can be billed,” said Dr. Walter De Brouwer, CEO of data analytics firm doc.ai. “We have to stop what we’re doing because it doesn’t work. If you take 2019, the predictions are that 400 doctors will commit suicide, 150,000 people will die, and the first cause of bankruptcy will be medical records, so we trust that everyone will try to fix a system that’s unfixable. It’s up to the patient and the doctors to try to fix it, because we are the agents of the last resort.”

People can actually monetize their data as a latent economic asset. That’s the promise of deep learning.

For White, changing how data flows through the system is an important first step to truly leveraging the power of A.I. Unlike other fields where A.I. has largely been a successful technology enabler, such as customer service and autonomous driving, the health care vertical is saddled with regulations designed to protect patient privacy rights.

“I think the patient has to be entrusted with their own data, and then they direct how that data gets used when we’re brought into their lives,” he said. “It is our moral obligation to protect it.”

For Anthem, the nation’s second-largest provider of health insurance, covering more than 40 million Americans, the bet is that if sharing data were more convenient, patients would feel more compelled to do it.

Doc.ai users use the app to choose which data trials to join, and which aspects of their health data to share. doc.ai

“It’s really a tradeoff of convenience and privacy,” said Rajeev Ronanki, Anthem’s chief digital officer. “So far, we haven’t done a good job of making healthcare simple, easy, and convenient, so therefore everyone wants to value privacy over everything else. For example, if it is going to save you fifteen minutes from trying to fill out the same redundant forms in your doctor’s office about your health conditions and you can get in and out quicker, then most people will choose convenience over wanting to make their data private. Surely, some people will choose to keep their health information private, and we want to be able to support both.”

As mobile devices become more powerful, health care professionals envision a world where patients own and store their data on their own devices, leaving health institutions responsible for creating a system where the data can be anonymized, shared, and exchanged.

“Getting your hands on good data is a very big challenge.”

“No institution is going to allow large amounts of data to be sent from their systems, so we have to bring the models and develop the model, by circulating them to the subscribers and then watching the arrangement,” White said. “It’s just much more practical.”

A larger pool of data shared by patients could lead to more accurate clinical studies and reduce bias in medicine. In this model, researchers want to rely on edge learning rather than the cloud to process the data. Instead of sending information to the cloud, edge learning follows the Apple model for A.I., where data is stored and processed locally, promising a higher degree of privacy. And because data is processed on the device, it can be processed much faster, De Brouwer claimed.

“So I collect all my data – my healthcare records – if I want to do a clinical trial,” De Brouwer continued. “If I am given a protocol, I trace my data through the protocols on my phone. I get tensors. I send off the tensors, which are irreversible, and they are averaged with all the other data, and I get back the data on my phone. My data is private, but I get a better prediction because tensors are the average of the average of the average of the average, which is better than the first average.”


De Brouwer claimed this would completely change medical research. “We can actually combine our tensors and leave our data where it is. People can actually monetize their data as a latent economic asset. That’s the promise of deep learning.”
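The scheme De Brouwer describes, training on-device and sharing only averaged model tensors rather than raw records, resembles what the machine learning community calls federated averaging. Below is a minimal, hypothetical sketch of that idea; the model, data shapes, and function names are illustrative assumptions, not doc.ai’s actual protocol.

```python
import numpy as np

def local_update(weights, features, labels, lr=0.1):
    """One training step on a single device's private data.

    Only the resulting weight tensor leaves the device; the raw
    features and labels never do.
    """
    preds = 1.0 / (1.0 + np.exp(-features @ weights))  # logistic model
    grad = features.T @ (preds - labels) / len(labels)
    return weights - lr * grad

def federated_average(updates):
    """Server-side step: average the submitted weight tensors."""
    return np.mean(updates, axis=0)

# Simulate three devices, each holding its own private data.
rng = np.random.default_rng(0)
true_weights = np.array([1.0, -2.0, 0.5])  # hidden relationship to learn
global_weights = np.zeros(3)

for _ in range(20):  # communication rounds
    updates = []
    for _ in range(3):  # each device trains locally
        X = rng.normal(size=(16, 3))
        y = (X @ true_weights > 0).astype(float)
        updates.append(local_update(global_weights, X, y))
    global_weights = federated_average(updates)  # only tensors are shared
```

After a few rounds, the averaged global model approximates the pattern hidden in everyone’s data, even though no device ever transmitted its records, which is the privacy property De Brouwer is pointing at.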

With technology enablers, like 5G, connected home sensors, and smart health devices, medical researchers may soon have access to new data sources that they may not have considered as relevant for their medical research today.

doc.ai predicts that the amount of this so-called fuzzy data will grow by as much as 32 times each year, and that by 2020 we’ll be headed toward a factorial future. “A.I. is here to help because it brings us the gift of time,” De Brouwer said. “I am very optimistic about the future.”

Reducing bias

As part of its initiative for the responsible and ethical use of A.I., Anthem is now working with data scientists to evaluate 17 million records from its databases to ensure that there aren’t any biases in the algorithms that it has created.


“When you create algorithms that impact people’s lives, then you have to be a lot more careful,” said Democratic Congressman Jerry McNerney (co-chair of the Congressional A.I. Caucus), in a separate talk at GTC that emphasized some of the life and death consequences when A.I. is used in critical infrastructure such as military applications. “When you have data that’s badly biased, then you are going to have similar results. Getting your hands on good data is a very big challenge.”

Additionally, bias can creep in more easily when data is limited, Hess explained, skewing medical studies and the interpretation of results. Citing Stanford University research showcasing how A.I.-derived algorithms are “better” than actual radiologists at detecting pneumonia, Hess pointed out some of the fallacies in that presumption.

While A.I. is good at repetitive, time-consuming tasks, you still need the human interaction in patient care.

“What is better?” Hess asked facetiously, trying to extract a definition of the word. While he admitted that Stanford’s algorithm had a high success rate, upwards of 75 percent, at detecting pneumonia by reading X-rays and other scans, it still underperformed when compared against the diagnoses made by the four radiologists cited in the study.

Though Hess views A.I. as a time-saving technology that allows physicians to go back to patient care rather than spend time on coding charts, he warns that the technology isn’t quite perfect, noting that A.I.’s object detection algorithms can completely misidentify scans.

Medical A.I. as a drone

As such, Hess and his colleagues view A.I. as a complementary technology in medicine that will help, not replace, human doctors. While A.I. is good at the repetitive, time-consuming tasks of identifying tumors and abnormalities in scans, Chang said, you still need the human interaction in patient care.

To interpret the massive troves of data that will be collected, industry observers predict that each doctor will create numerous additional jobs for data scientists who build the algorithms that make sense of that data. “We’re going to have the same in medicine. I think that every doctor will create a hundred data scientist jobs, so healthcare will become a continuous function,” De Brouwer said.

“We will always need caring people to interface with a human being, human-to-human,” White said. “I hope we never lose the touch of a hand on another person’s hand asking for help, and someone has to translate it to real-world situations.”

Chuong Nguyen
Silicon Valley-based technology reporter and Giants baseball fan who splits his time between Northern California and Southern…