A wave of new artificial intelligence (AI) tools is rapidly transforming how patients receive care, how providers deliver it, and how systems measure outcomes. Now, leaders across Massachusetts and the nation are grappling with an important question: Can this technology truly make healthcare more equitable—or will it widen the digital divide?
From clinical decision support systems to AI-powered chatbots triaging patient needs, the healthcare sector is embracing automation in unprecedented ways. Health systems are piloting AI-driven imaging diagnostics, predictive analytics for hospital capacity, and natural language processing for faster documentation.
But as excitement grows, so does concern about bias in algorithms, access to AI-powered tools, and how patients from underserved communities will be affected.
“AI is here to stay,” said Jason Gilleylen, MD, Program Director at Clearscope Tech and a digital health advocate. “What we do now—how we choose to train, regulate, and apply it—will determine whether it becomes a force for equity or a new frontier for exclusion.”
Community Health Centers Eye Opportunities—and Risks
At a recent Healthcare Pulse Forum event, providers from community-based health centers shared both optimism and caution. Several pointed to pilot programs that use machine learning to flag high-risk patients for early interventions—like identifying social determinants of health that could affect readmission rates.
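For readers curious what “flagging high-risk patients” looks like in practice, here is a deliberately simplified, illustrative sketch—not drawn from any named pilot. The feature names, weights, and threshold (`prior_admissions`, `housing_instability`, `flag_high_risk`, and so on) are all made up for illustration; real programs train statistical models on clinical and social data rather than hand-picking weights.

```python
# Toy readmission-risk flagging sketch. All feature names and weights are
# hypothetical; real pilots learn these from data.

def readmission_risk_score(patient):
    """Weighted sum of hypothetical risk factors, capped at 1.0."""
    weights = {
        "prior_admissions": 0.15,     # each prior admission adds risk
        "housing_instability": 0.30,  # social-determinant flag (0 or 1)
        "missed_appointments": 0.10,
        "chronic_conditions": 0.12,
    }
    score = sum(weights[k] * patient.get(k, 0) for k in weights)
    return min(score, 1.0)

def flag_high_risk(patients, threshold=0.5):
    """Return IDs of patients whose score meets the outreach threshold."""
    return [p["id"] for p in patients
            if readmission_risk_score(p) >= threshold]

patients = [
    {"id": "A", "prior_admissions": 3, "housing_instability": 1},
    {"id": "B", "chronic_conditions": 1},
]
print(flag_high_risk(patients))  # → ['A']
```

The equity concerns discussed below apply directly to a model like this: if the training data under-represents certain communities, or a proxy feature encodes existing disparities, the flags themselves inherit that bias.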
But there are challenges.
“Most safety-net clinics don’t have the infrastructure to implement enterprise AI tools,” said one public health IT leader. “We need capacity-building alongside innovation.”
And while large hospital systems can afford cutting-edge AI integrations, many smaller clinics are still working on basics like electronic health record (EHR) upgrades, broadband access, and cybersecurity protections.
Equity by Design, Not Afterthought
Experts warn that AI must be designed with inclusivity in mind—from the data it’s trained on to the patients it aims to serve. National studies have shown that algorithms built on biased or incomplete data can overlook the needs of Black, Latino, Indigenous, and immigrant communities.
The Healthcare Pulse Forum supports initiatives that push for transparency in AI models, ethics review boards, and inclusive datasets.
“Community voice is critical,” said Isaiah Brown, Senior Consultant at Clearscope Tech. “We have to involve the people most affected by health inequities in how these technologies are developed and deployed.”
Preparing the Workforce for an AI-Driven System
As AI grows more sophisticated, healthcare workers—especially frontline and administrative staff—will need new training to use these tools effectively. The shift toward automation is expected to create new roles and demand cross-functional skills in both technology and care delivery.
Organizations like Clearscope Training are working to build AI literacy across all levels of the workforce. Programs include role-based training in data ethics, decision support, and patient communication with digital tools.
“There’s a real opportunity to make AI work for healthcare workers, not replace them,” said Eugene Green, President of Clearscope Tech. “But we need investment in upskilling now.”
Looking Ahead
AI has the potential to revolutionize healthcare delivery, reduce administrative burdens, and increase access to timely care. But unlocking that potential requires deliberate strategy, equitable policy, and cross-sector collaboration.
That’s where the Healthcare Pulse Forum comes in—bringing together clinicians, technologists, policymakers, and community leaders to ensure innovation moves hand-in-hand with inclusion.
Want to join the conversation?
Attend our next forum or subscribe for updates on digital health equity in action.