We work closely with clinicians, including nurses and doctors. We understood their eagerness to improve clinical practice, but also their need to protect and respect patient privacy. So we chose depth sensors that capture only 3D depth signals and contain no identifiable information about the scenes or the people in them. Our team works with bioethicists, legal scholars, and ethics professors at Stanford on every research project.
AI for senior care is one of the research areas I am most passionate about, partly because I have been taking care of my ailing parents, who are in their late 70s, for many years. With several health organizations and geriatricians, our team has been working on pilot projects that aim to use smart sensor technology to help clinicians and caregivers understand the progression of health conditions in seniors, such as gait or gesture changes that might signal an increased risk of falling, or activity abnormalities that need further assessment or intervention.
Do you think efforts to use AI to fight Covid-19 could have unintended consequences? The pandemic highlights how inequitable society is, and without due care, AI algorithms could reinforce that inequity—if algorithms work better for rich, white patients, for example.
I have worked on applying AI to health care for more than eight years. As my collaborator, the Stanford medical school professor Arnold Milstein, always says, we have to focus on the most vulnerable groups of patients and their circumstances—housing, economics, access to health care, and so on. Good intentions are not enough; we need to get different stakeholders involved to have the right effect. We don’t want to keep repeating unintended consequences.
How do you ensure your own research doesn’t do this?
In addition to all the required guardrails for research involving human subjects, HAI is starting to conduct ethical reviews of our research grant proposals. These aren’t required by anyone yet, but we feel they are necessary, and we should continue to improve our efforts. Because of Covid, we should put even more effort into guardrails [such as more diverse teams and practices designed to prevent bias].
What would you say is HAI’s most important achievement to date?
I am especially proud of how we responded after Covid hit our country. We had been planning a conference on neuroscience and AI for April 1, but on March 1 we asked ourselves, what can we do for this crisis? In a couple of weeks, we put together a program with scientists, national leaders, social scientists, ethicists, and so on. If you look at our agenda, we had medicine and a drug discovery track, but also the international picture, privacy aspects of contact tracing, and the social side of things, such as xenophobia towards different ethnic groups in the US.
Then, two months later, on June 1, we held another conference to look at the economic and electoral impact of Covid. We brought together national security scholars, doctors, and economists to talk about financing vaccines and the impact on the elections. I think this is an example of HAI engaging with impactful events and topics through an interdisciplinary approach, engaging with everyone.
Tell us why you chose to join Twitter’s board.
I was flattered that Twitter invited me. Twitter is an unprecedented platform that gives individuals a voice on a global scale, shaping conversations in our society near and far. And because of that, it is so important to do it right. Twitter is committed to advocating for healthy conversation. As a scientist, I joined to be helpful, mostly on the technical side. This is only week three or four, but I hope that I will have a positive impact, and Twitter’s aspiration of fostering healthy conversations aligns with that goal.
As a user of social technology myself, I obviously know its negative aspects, and I hope all of us, inside or outside Twitter, can help. And it will take time. It won’t be a light-switch moment. It will take a lot of time, and trial and error, even mistakes, but we have to try.
This article was syndicated from wired.com