Editor’s note: This is part two of a two-part interview on AI and cybersecurity with David Heaney from Mass General Brigham. To read part one, click here.
In the first installment of this deep-dive interview, Mass General Brigham Chief Information Security Officer David Heaney explained defensive and offensive uses of artificial intelligence in healthcare. He said understanding the environment, knowing where one's controls are deployed and being great at the basics are all the more critical when AI is involved.
Today, Heaney lays out best practices healthcare CISOs and CIOs can employ to secure the use of AI and how his team applies them, how he gets his team up to speed on securing with and against AI, the human element of AI and cybersecurity, and the types of AI he uses to combat cyberattacks.
Q. What are some best practices that healthcare CISOs and CIOs can employ for securing the use of AI? And how are you and your team using them at Mass General Brigham?
A. It’s important to start with the way you phrase that question, which is about understanding that these AI capabilities are going to drive amazing changes in how we care for patients and how we discover new approaches and so much more in our industry.
It really is about how we support that and how we help to secure that. As I mentioned in part one, it’s really important to make sure we’re getting the basics right. So, if there’s an AI-driven service that uses our data or is being run in our environment, we have the same requirements in place for risk assessments, for business associate agreements, for any other legal agreements we’d have with non-AI services.
Because at some level we're talking about another app, and it needs to be controlled just like any other app in the environment, including restrictions against using unapproved applications. None of that is to say there aren't AI-specific considerations we would want to address, and there are a few that come to mind. In addition to the standard legal agreements I just mentioned, there certainly are additional data use considerations.
For example, do you want your organization’s data to be used to train your vendor’s AI models downstream? The security of the AI model itself is important. Organizations need to consider options around continuous validation of the model to ensure it is providing accurate outputs in all scenarios, and that can be part of the AI governance I mentioned in part one.
There's also adversarial testing of the models: if we put in bad input, does the output change in ways it shouldn't? (A toy illustration of this kind of test follows this answer.) And one area of the basics whose importance I've actually seen change a little in this environment is the ease of adoption of so many of these tools.
An example there: Look at meeting note-taking services like Otter AI or Read AI, and there are so many others. These services are incentivized to make adoption simple and frictionless, and they've done a great job of it.
While the concerns around these services and the data they can access don't change, the combination of how easily end users can adopt them and, frankly, the sheer cool factor of these and other applications makes it important to focus on how you're onboarding applications, especially AI-driven ones.
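To make the adversarial-testing idea above concrete, here is a toy sketch. The keyword-based "model," the perturbation and the sample message are invented stand-ins rather than anything MGB actually runs; the point is simply the pattern of feeding deliberately manipulated input and checking whether the output flips when it shouldn't.

```python
# Toy illustration only: a stand-in "model" and a crude perturbation, used to
# show the shape of an adversarial test (bad input in, does the output flip?).
def toy_phishing_score(text: str) -> float:
    """Pretend detection model: flags messages containing known-bad phrases."""
    bad_phrases = ["verify your password", "wire transfer", "urgent action"]
    return 1.0 if any(phrase in text.lower() for phrase in bad_phrases) else 0.0

def perturb(text: str) -> str:
    """Adversarial-style tweak: split a keyword with an invisible zero-width space."""
    return text.replace("password", "pass\u200bword")

original = "Please verify your password immediately."
assert toy_phishing_score(original) == 1.0            # the model catches the plain message
assert toy_phishing_score(perturb(original)) == 0.0   # the perturbed message slips through

print("Perturbed input evaded the toy model; in a real test, this becomes a finding.")
```

A real program would run the production model against a much broader set of perturbations, and any flipped output would feed back into the AI governance process Heaney describes.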
Q. How have you been getting your team up to speed when it comes to securing with and against AI? What’s the human element at play here?
A. It's huge. And one of my top values for my security team is curiosity. I would argue it's the single skill behind everything we do in cybersecurity. It's the thing where you see something that's a little bit funny and you say, "I wonder why that happened?" And you start digging in.
That’s the start of virtually every improvement we make in the industry. So, to that end, a huge part of the answer is having curious team members who get excited about this and want to learn about it on their own. And they just go out and they play with some of these tools.
I try to set an example in this area by sharing how I've used the various tools to make my job easier. But nothing replaces that curiosity. Within MGB, within our digital team, we do try to dedicate one day a month to learning, and we provide access to a variety of training services with relevant content in the space. But the challenge there is the technology changes faster than the training can keep up.
So really nothing replaces just going out and playing with the technology. But also, perhaps with a little bit of irony, one of my favorite uses for generative AI is for learning. One of the things I do is use a prompt along the lines of "Create a table of contents for a book titled X," where X is whatever topic I want to learn about. I also usually specify a little bit about what the author is like and the purpose of the book.
That creates a great outline of how to learn about that topic. And then you can either ask your AI friend, "Hey, can you expand on chapter one? And what does that mean?" Or potentially go to other sources or other forums to find the relevant content there.
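As a concrete illustration of that technique, here is a minimal sketch of how such a learning prompt might be assembled. The topic, book framing and wording are invented for the example, not Heaney's exact prompt.

```python
# Minimal sketch of the "book outline" learning prompt described above.
# The topic, title framing and wording are invented; adjust to taste.
topic = "adversarial testing of machine learning models"  # whatever you want to learn

outline_prompt = (
    f"Create a table of contents for a book titled 'A Practitioner's Guide to {topic}'. "
    "Assume the author is a hands-on security engineer writing for working analysts, "
    "and the purpose of the book is to get a newcomer productive quickly."
)

followup_prompt = "Expand on chapter one of that outline. What does it cover, and why?"

# Paste these into whichever generative AI assistant your organization has approved;
# the resulting outline becomes a self-directed learning plan.
print(outline_prompt)
print(followup_prompt)
```

The same pattern works for any topic; specifying the author and audience is what keeps the resulting outline practical rather than generic.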
Q. What are some types of AI you use, without giving away any secrets, of course, to combat cyberattacks? Perhaps you could explain in broader terms how these types of AI are meant to work and why you like them?
A. Our overall digital strategy at MGB is really focused on leveraging platforms from our technology vendors. Picking up a little bit from part one’s vendor question, our focus is working with these companies to develop the most valuable capabilities, many of which are going to be AI-driven.
And just to give a picture of what that looks like, at least in general terms, so as not to give away the golden goose, so to speak: our endpoint protection tools use a variety of AI algorithms to identify potentially malicious behavior. Those tools then send logs from all of these endpoints to a central collection point, where a combination of rules-based and AI-based analysis looks for broader trends.
So not just on one system, but across the entire environment: are there trends indicative of elevated risk? We also have an identity governance suite, the tooling used to provision access, that is, to grant and remove access in the environment. That suite has various capabilities built in to identify potentially risky access combinations that might already be in place, or even to evaluate access requests as they come in so we avoid granting that access in the first place. (A toy illustration of the central log-analysis pattern follows this answer.)
So that’s the world of the platforms themselves and the technology that’s built in. But beyond that, going back to how we can use generative AI in some of these areas, we use that to accelerate all kinds of tasks we used to do manually.
The team has gotten, I couldn't put a number on it, but I'll say tons of time savings by using generative AI to write custom scripts for triage, for forensics, for remediation of systems. It's not perfect. The AI gets us, I don't know, 80% complete, but our analysts then finalize the script and do so much more quickly than if they were creating it from scratch.
Similarly, we use some of these AI tools to create queries that go into our other tools. We get our junior analysts up to speed much faster by letting them have access to these tools to help them more effectively use the various other technologies we have in place.
Our senior analysts are just more efficient. They already know how to do a lot of this, but it’s always better to start from 80% than to start from zero.
In general, I describe it as my really eager intern. I can ask it to do anything and it’ll come back with something between a really good starting point and potentially a great and complete answer. But I certainly wouldn’t go and use that answer without doing my own checks and finishing it first.
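The platform pattern Heaney outlines earlier in this answer, endpoint logs flowing to a central collection point where both rules and models look for trends, can be sketched in miniature. The hostnames, event types, single detection rule and z-score baseline below are all invented for illustration; commercial platforms use far richer telemetry and trained models rather than this simple statistic.

```python
# Toy illustration of the pattern described above: endpoint events arrive at a
# central collection point, a rule engine checks individual events, and a simple
# statistical baseline (a stand-in for a platform's ML models) looks for
# environment-wide outliers. All data and thresholds here are invented.
from collections import Counter
from statistics import mean, pstdev

# Hypothetical centralized log records: (hostname, event_type)
events = [
    ("host-01", "failed_login"), ("host-01", "failed_login"),
    ("host-02", "failed_login"),
    ("host-03", "failed_login"), ("host-03", "failed_login"),
    ("host-03", "failed_login"), ("host-03", "failed_login"),
    ("host-03", "powershell_encoded_command"),
]

# Rules-based analysis: single events that are always worth flagging.
RULES = {"powershell_encoded_command": "Encoded PowerShell command observed"}
for host, event in events:
    if event in RULES:
        print(f"[RULE] {host}: {RULES[event]}")

# Baseline/anomaly analysis: flag hosts whose failed-login volume is an outlier
# relative to the rest of the environment, not just unusual on one system.
failed = Counter(host for host, event in events if event == "failed_login")
counts = list(failed.values())
mu, sigma = mean(counts), pstdev(counts) or 1.0
for host, n in failed.items():
    if (n - mu) / sigma > 1.0:  # crude threshold, for the sketch only
        print(f"[ANOMALY] {host}: {n} failed logins vs. environment mean {mu:.1f}")
```

The division of labor, per-event rules plus environment-wide analysis over centrally collected logs, is the recognizable part; real platforms swap the z-score for trained models.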
Editor's Note: This is the tenth and final installment in a series of features on top voices in health IT discussing the use of artificial intelligence. Read the other installments:
- Dr. John Halamka of Mayo Clinic Platform
- Dr. Aalpen Patel of Geisinger
- Helen Waters of Meditech
- Sumit Rana of Epic
- Dr. Rebecca G. Mishuris of Mass General Brigham
- Dr. Melek Somai of the Froedtert & Medical College of Wisconsin Health Network
- Dr. Brian Hasselfeld of Johns Hopkins Medicine
- Craig Kwiatkowski of Cedars-Sinai
- Dr. Bruce Darrow of Mount Sinai Health System
Follow Bill’s HIT coverage on LinkedIn: Bill Siwicki
Healthcare IT News is a HIMSS Media publication.