10 tips to avoid planting AI timebombs in your organization

At the recent HIMSS Global Health Conference & Exhibition in Orlando, I delivered a talk focused on protecting against some of the pitfalls of artificial intelligence in healthcare.

The objective was to encourage healthcare professionals to think deeply about the realities of AI transformation, while providing them with real-world examples of how to proceed safely and effectively. My goal was for everyone in the audience to join me in cutting through the hype to focus on a mature understanding of how to build this exciting future.

Thankfully, my message was well received. The attendees appreciated the potential that emerges when we move beyond gimmicks and the fear of missing out. That post-hype mindset represents a higher level of leadership, one where thoughtful individuals collaborate across functions to establish clear, actionable goals for improving outcomes.

The appetite for this post-hype approach to AI was so substantial that I felt compelled to write a brief summary of my talk and share it widely with the readers of Healthcare IT News. 

I’ll briefly touch on AI time bombs that have already exploded, offer ten tips to help you avoid planting your own, and share two examples of organizations I’m working with that are getting AI implementation right.

What not to do

Both within and outside the healthcare sector, hastily launched AI initiatives are already showing signs of failure. 

For instance, Air Canada’s customer-facing chatbot incorrectly promised a discounted fare to a passenger. The company then tried to dodge responsibility, arguing that the AI was a separate legal entity “responsible for its own actions.” Unsurprisingly, a Canadian tribunal did not accept the “it wasn’t us, it was the AI” defense, and the airline is now obligated to honor the mistakenly promised discount.

This past year, the National Eating Disorders Association planned to replace its highly experienced helpline staff with Tessa, a chatbot designed to assist people seeking advice on eating disorders. Just days before the scheduled transition, however, the bot was found to be giving problematic advice, including recommendations to restrict caloric intake, weigh in frequently, and set rigid weight-loss goals. Although Tessa never replaced the helpline, the incident underscores the devastating consequences that can result from rushing into AI solutions.

A recent paper published in JAMA Network Open sheds light on multiple instances of biased algorithms that perpetuate “racial and ethnic disparities in health and healthcare.” The authors detailed several cases of biased and harmful algorithms that have been developed and deployed, adversely impacting “access to, or eligibility for, interventions and services, and the allocation of resources.”

And it’s particularly concerning because many of these biased algorithms are still in operation. 

Put simply, AI time bombs have already detonated, and they will continue to do so unless proactive measures are taken to mitigate these issues.

What to do

To assist leaders in addressing the risks associated with AI, I’ve developed ten tips for approaching AI transformation in a safe and sustainable way. These tips are designed to ensure that healthcare executives achieve the best possible return on their investments:

  • Prioritize Transparency and Explainability. Choose AI systems that offer transparent algorithms and explainable outcomes. 

  • Implement Robust Data Governance. Ensuring high-quality, diverse, and accurately labeled data is crucial. 

  • Engage with Ethical and Regulatory Bodies Early. Understanding and aligning with ethical guidelines and regulatory requirements early can prevent costly revisions and ensure patient safety. 

  • Foster Interdisciplinary Collaboration. An interdisciplinary approach ensures that the AI tools developed are practical, ethical, and patient-centered.

  • Ensure Scalability and Interoperability. AI tools should be designed to integrate seamlessly with existing healthcare IT systems and be scalable across different departments or even institutions.

  • Invest in Continuous Education and Training. Investing in continuous education and training ensures that staff can effectively use AI, interpret its outputs, and make informed decisions.

  • Develop a Patient-Centric Approach. Adopt AI practices that enhance patient engagement, personalize healthcare delivery, and do not inadvertently exacerbate health disparities.

  • Monitor Performance and Impact Continuously. Develop mechanisms for worker and patient feedback, enabling ongoing refinement of AI tools to better meet the needs of stakeholders (a brief sketch of one such monitoring check follows at the end of this section).

  • Establish Clear Accountability Frameworks. Define clear lines of accountability for decisions made with the assistance of AI.

  • Promote an Ethical AI Culture. Encourage discussions about the ethics of AI, promote responsible AI use, and ensure decisions are made with consideration for the welfare of all stakeholders.

Let these tips guide you on your AI journey. Use them to develop principles, policies, procedures, and protocols to get AI right the first time and to deftly navigate instances when things don’t go according to plan. Proactively incorporating these tips at the beginning of AI transformation will save time, money, and, ultimately, lives.
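To make the monitoring tip a bit more concrete, here is a minimal sketch (not taken from my talk or from any specific deployment) of how a data team might periodically audit a deployed risk model’s performance across patient subgroups. The file name, column names, and the 0.05 review threshold are illustrative assumptions.

```python
# Hedged sketch: periodic subgroup audit of a deployed risk model.
# The file name, column names, and review threshold are hypothetical.
import pandas as pd
from sklearn.metrics import roc_auc_score

# Hypothetical log of recent predictions, observed outcomes, and demographics
log = pd.read_csv("prediction_log.csv")

overall_auc = roc_auc_score(log["outcome"], log["risk_score"])
print(f"Overall AUC: {overall_auc:.3f}")

# Flag any subgroup whose discrimination drifts well below the overall figure
for group, subset in log.groupby("race_ethnicity"):
    if subset["outcome"].nunique() < 2:
        continue  # AUC is undefined when a subgroup has only one outcome class
    group_auc = roc_auc_score(subset["outcome"], subset["risk_score"])
    status = "REVIEW" if group_auc < overall_auc - 0.05 else "ok"
    print(f"{group}: AUC {group_auc:.3f} ({status})")
```

A routine check like this, paired with the feedback mechanisms and accountability frameworks described above, turns “monitor continuously” from a slogan into a standing operational habit.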

What others are doing

AI transformation necessitates several fundamental components working in unison. As I mentioned in my HIMSS talk: Like a Thanksgiving rite of passage, it’s time to graduate from the AI kids’ table – where the conversation is obsessively centered around ChatGPT – to the adults’ table, where leaders are actively taking steps to lay the foundation for mature AI transformation.

Two of these essential elements that I’ve been focusing on, in partnership with large healthcare organizations, are adopting a holistic approach to deployment and investing in a robust, data-driven culture.

In one health system, we developed a blueprint for safely implementing large language models. This blueprint covers various impact areas to consider, such as the economic and privacy implications of LLMs, and it includes essential questions to ask in each of these domains.

The objective was to present everyone in the C-suite with specific and interconnected questions about the risks and benefits associated with deploying LLMs. This approach helps to highlight trade-offs – like speed vs. safety or quality vs. cost – and provides this diverse group of leaders with a common language to identify opportunities and discuss risks.

In another health system, we developed ten key performance indicators to ensure their leaders, teams, and processes all contribute to a data-driven, AI-ready culture of care. We’ve also created a survey based on these KPIs to establish a baseline understanding of where the data culture excels and where there’s room for improvement.
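For illustration only (the health system’s actual KPIs and survey items are not reproduced here), establishing that baseline can be as simple as summarizing responses per KPI and per department. In the sketch below, the file name, column names, and the 1-5 response scale are assumptions.

```python
# Hedged sketch: turning data-culture survey responses into per-KPI baselines.
# The CSV, column names, and 1-5 Likert scale are illustrative assumptions.
import pandas as pd

responses = pd.read_csv("data_culture_survey.csv")  # one row per respondent

# Mean score per department for two example KPIs on the assumed 1-5 scale
baseline = (
    responses
    .groupby("department")[["kpi_data_access", "kpi_data_literacy"]]
    .mean()
    .round(2)
)
print(baseline)

# Departments scoring below an assumed 3.5 threshold on any KPI get flagged
needs_attention = baseline[baseline.lt(3.5).any(axis=1)]
print("Room for improvement:")
print(needs_attention)
```

Repeating the same survey at regular intervals shows whether the culture is actually moving, not just whether people feel it is.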

By focusing on understanding its clinicians’ data needs and providing them with high-quality, relevant data when they need it, the organization has seen a rapid and impressive rise in “the good numbers,” such as employee engagement and patient satisfaction.

This serves as a prime example of how AI transformation starts well before the flash of emerging technologies and hype. By focusing on the fundamentals like data, leaders can achieve quick wins while preparing their organizations for lasting success.

What comes next

The future of healthcare demands a “leadership first, tech last” mindset. Executives must prioritize the needs of their people, as well as the challenges and opportunities inherent in their processes.

This approach involves using science to understand their organization in a systematic and predictable way and relying on high-quality data to generate accurate and reliable insights for guiding change.

Adopting a leadership first, tech last mindset also means that decision-makers combine science and data with their hard-won experience to expertly craft solutions tailored to their specific context.

This is why the American Medical Association defines AI as “augmented intelligence” – emphasizing its role in enhancing human intelligence rather than replacing it. Their definition highlights the importance of keeping our cognitive and emotional abilities at the forefront of decision-making before turning to technology.

Executives embracing these timeless human qualities will foster a mature AI-powered future.

Brian R. Spisak, PhD, is an independent consultant focusing on digital transformation in healthcare. He’s also a research associate at the National Preparedness Leadership Initiative at Harvard T.H. Chan School of Public Health, a faculty member at the American College of Healthcare Executives and the author of the book Computational Leadership.
