
Navigating complex AI regulations in quality management


Introduction


As artificial intelligence (AI) reshapes the medical device, pharmaceutical, and life science industries, its influence on quality management has been transformative—unlocking new efficiencies, predictive capabilities, and opportunities for innovation. 
Yet, the rapid adoption of AI comes with a regulatory twist: a shifting landscape of guidelines designed to address the unique challenges of these powerful technologies.

For quality assurance (QA) professionals, compliance can feel like chasing a moving target, as frameworks evolve to keep pace with advancements in AI.

From data privacy and algorithmic transparency to ethical decision-making and bias mitigation, regulations are demanding unprecedented rigor from organizations deploying AI. The stakes are high: missteps in compliance can lead to reputational damage, hefty fines, or worse, harm to patients or end users.

This dynamic environment calls for QA teams to take on a dual role—not just ensuring product quality but interpreting, applying, and even anticipating complex regulatory requirements. This means that QA professionals are uniquely positioned to bridge the gap between technical teams, legal counsel, regulatory teams, and external regulators.

However, recent studies from the Pistoia Alliance, a non-profit collaboration in the pharmaceutical and life science industries that works with the FDA, EMA, NIH, and WHO, among others, show that only 9% of life sciences professionals feel they know and understand EU and US AI regulations well. Not great, considering the potential of AI solutions and their impact on patients worldwide. The good news: according to the World Quality Report by Sogeti, 77% of companies are investing in AI to enhance their QA processes, leveraging it for continuous testing and quality management.

In this article, we’ll explore how QA teams can navigate this evolving terrain, ensuring both regulatory adherence and the delivery of trustworthy AI systems.


AI regulations today


The integration of artificial intelligence (AI) has revolutionized the life sciences, enabling innovations such as predictive diagnostics, personalized treatment plans, and advanced imaging techniques. AI is embedded in many medical device categories, from patient monitors and CPAP machines to image enhancement software and surgical targeting systems. It is safe to say AI has already improved the treatment of millions of patients around the world.

However, these advancements come under strict regulatory scrutiny to ensure safety, effectiveness, and patient privacy.

Key regulatory frameworks guiding AI in medical devices today include official regulations focused specifically on medical devices, as well as wider legislation such as the EU Data Act and GDPR, which govern digital systems and AI more broadly and must also be considered.

Staying compliant with AI regulations in life sciences might feel like an uphill battle.

Regulatory bodies are continuously updating standards to address the unique challenges of AI. Likewise, AI technologies are racing ahead, and without proper anticipation, planning, and innovation during medical device development, compliance quickly becomes complicated.

AI systems, particularly those based on deep learning, often operate as "black boxes": systems whose internal workings, such as decision-making processes and the logic behind their outputs, are non-transparent and difficult to understand, even for experts. These black boxes make it increasingly difficult to explain decisions to regulators, clinicians, and patients, which is a critical requirement for compliance.

Medical AI relies on large volumes of data, which must be collected, processed, and stored in compliance with stringent privacy laws like GDPR and HIPAA. Ensuring that training data is unbiased, representative, and free of errors adds another layer of complexity.
And last, but not least, compliance requires close collaboration between AI engineers, regulatory specialists, QA professionals, and clinicians, which can be resource-intensive and complex. In pressured economic climates, such as the one we are currently experiencing, RA and QA are often among the departments facing the largest budget cuts.




AI regulations tomorrow


Current AI regulations are mostly industry-specific; future AI regulations will be holistic. Regulation of AI systems will move from rules targeting specific industries or products to higher-level frameworks that apply to products across industries and technologies.

Equally, the hope is that AI regulations will become more flexible and allow for greater innovation: 21% of respondents to the previously mentioned Pistoia Alliance survey believe existing AI regulations block their research.

In the US, the FDA is hard at work improving its AI and machine learning (ML) regulations. In 2023, the Center for Drug Evaluation and Research (CDER) issued a discussion paper on artificial intelligence in drug manufacturing, while the FDA issued its own discussion paper and request for feedback on using AI and ML in the development of drug and biological products.

In October 2023, President Joe Biden issued an Executive Order addressing the risks of AI-enabled technologies in health care and the use of AI in drug development.

In the EU, the AI Act, published in the Official Journal of the European Union in July 2024, provides the world with its first-ever set of regulations for the use of artificial intelligence. While the AI Act does impose stricter requirements on life sciences companies, it describes the responsible use of AI, something the industry has been asking for for years. The regulation applies to all AI systems and outputs deployed within the EU, regardless of where the manufacturer or system is physically located. The AI Act divides AI systems into risk categories, each with its own set of required actions and risk controls.

The regulatory landscape of tomorrow will not just enforce compliance; it will actively guide the development of safe, effective, and ethical AI technologies from a holistic perspective.


"Life science organizations can proactively prepare for the paradigm shift of AI systems in healthcare by encouraging investments in the right tools and people, supporting the timely implementation of processes related to AI systems, and ensuring the safety and privacy of their patients and users."


Nanna Finne, Senior Regulatory Affairs Specialist

Overcoming challenges


AI systems in healthcare present a unique set of challenges: how do you develop devices that improve the quality of life of the patient without causing harm? Here we present the top three challenges of the implementation of AI in the life sciences, as well as possible solutions. 




Ethicality


Challenge

There are lots of ethical aspects to consider when dealing with AI systems in the life sciences: transparency, privacy, accountability, sustainability, accuracy, security, and the responsible use of patient data and AI-based decision-making, among others.

Solution
Manufacturers must consider all aspects of the development and production of AI systems and how they impact their patients. They should develop global policies for ethical operations in all aspects of development and production and ensure all departments, production lines, suppliers and distributors live up to them.

 


Constant adaptation


Challenge

Due to the speed and innovation happening within AI, machine learning, large language models, deep learning, and other AI systems, manufacturers and regulators must stay up to date, and adapt constantly. Taking on too much too soon can quickly become unsafe or inefficient for patients and users.

Solution
Manufacturers must invest heavily in the proper infrastructure needed for adaptations to a constantly changing market and industry. This includes personnel, equipment, tools, and training. Considering the time it takes to get a medical device to market, quality, regulatory, and developmental intelligence is also of the essence.


 


Paradigm shifts towards AI


Challenge

Up until now, the technical “language” of regulatory and quality submissions has been that of engineers and scientists. Changing the language of an entire industry to better suit AI systems will be a challenge for regulators and companies used to the traditional medical device space.

Solution
Regulatory and quality personnel must be taught to review AI documentation, understand the underlying mechanisms, and advise on the best course of action. Specialists in AI systems should be hired as regulatory and quality personnel and can mentor existing personnel through the new AI paradigm.

Actionable ways to prepare for AI compliance in the future


Preparing for the future of AI compliance might seem overwhelming and complex. And we’re not going to lie – it can be. But the competitive advantage and good relationships with regulating bodies will be worth it. Here are some ways to prepare:

Electronic QMS


The amount of documentation required for quality assurance in the life sciences can be staggering, and documentation requirements will not become less strict when working with AI systems; quite the contrary. Investing in an eQMS that connects and documents all your quality-related processes is invaluable.

Support regulatory and quality intelligence


With so much happening in the regulatory space, one of the biggest challenges to compliance is staying up to date. Establishing processes for regulatory monitoring, gap and impact assessments, and consolidated lists of regulations and standards can help you be at the forefront of compliance.
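One building block of such a regulatory-monitoring process can be sketched in code. This is a minimal, illustrative example only: the feed entries and IDs below are made up, and a real implementation would pull from actual regulator sources and feed flagged items into a formal gap/impact assessment.

```python
# Illustrative sketch: diff the latest list of guidance documents from a
# monitored source against what has already been impact-assessed, so new
# items can be routed for review. The feed data is hypothetical.

def flag_new_guidance(latest: list[dict], assessed_ids: set[str]) -> list[dict]:
    """Return guidance items that have not yet been impact-assessed."""
    return [item for item in latest if item["id"] not in assessed_ids]

feed = [
    {"id": "EU-AIA-2024", "title": "EU AI Act", "source": "EUR-Lex"},
    {"id": "FDA-ML-2023", "title": "AI/ML discussion paper", "source": "FDA"},
]
already_assessed = {"FDA-ML-2023"}

for item in flag_new_guidance(feed, already_assessed):
    print(f"New for review: {item['title']} ({item['source']})")
```

The value here is the process, not the code: every flagged item gets a documented gap and impact assessment, which is exactly the trail an auditor will ask for.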

Manage expectations


Quality assurance and compliance can take a long time. It is important to ensure that everyone, from top leadership to your teams in production, understands the timelines, resources, and work required to implement any updated regulations. Rome wasn't built in a day, and your AI compliance won't be either.

Focus on cybersecurity


Protecting patient data and device security is one of the most critical considerations for AI systems. It is also the area most regulatory bodies shine the brightest light on. Threat prediction, security protocols, and monitoring network traffic for anomalies are of the essence.
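As one concrete example of "monitoring network traffic for anomalies", a simple statistical check can flag traffic volumes that deviate sharply from a recent baseline. This is a hedged sketch, not a security recommendation: the threshold, window, and sample numbers are illustrative assumptions, and production systems would use far more sophisticated detection.

```python
# Illustrative sketch: flag a traffic sample (e.g. requests per minute)
# whose volume deviates sharply from the recent baseline, using a z-score.
# Threshold and baseline values here are made-up examples.
from statistics import mean, stdev

def is_anomalous(history: list[int], current: int, z_threshold: float = 3.0) -> bool:
    """Flag `current` if it lies more than z_threshold std devs from the mean."""
    if len(history) < 2:
        return False  # not enough data to form a baseline
    mu, sigma = mean(history), stdev(history)
    if sigma == 0:
        return current != mu
    return abs(current - mu) / sigma > z_threshold

baseline = [100, 104, 98, 101, 99, 103, 97, 102]
print(is_anomalous(baseline, 101))  # normal traffic -> False
print(is_anomalous(baseline, 450))  # sudden spike  -> True
```

Even a toy check like this illustrates the compliance point: anomaly detection must be defined, documented, and monitored as part of the quality system, not bolted on after an incident.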

Classify your AI devices


The EU AI Act is here, and you might as well get a head start: you can already begin to classify your AI systems according to risk. Even if you don't plan to sell in the EU, it doesn't hurt to understand the risk class of your device.
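A first-pass triage of your AI systems into risk tiers can be as simple as the sketch below. To be clear, the keyword lists and tiers here are simplified assumptions for illustration only; actual classification requires a detailed regulatory and legal review against the AI Act's annexes.

```python
# Illustrative sketch only: rough first-pass triage of AI systems into
# EU AI Act-style risk tiers. Keywords and tiers are simplified
# assumptions, not a substitute for formal classification.

PROHIBITED_USES = {"social scoring"}
HIGH_RISK_USES = {"diagnosis", "surgical targeting", "patient monitoring"}

def triage_risk(intended_use: str) -> str:
    """Return a provisional risk tier based on the stated intended use."""
    use = intended_use.lower()
    if any(k in use for k in PROHIBITED_USES):
        return "unacceptable"
    if any(k in use for k in HIGH_RISK_USES):
        return "high"
    return "limited-or-minimal (needs detailed review)"

print(triage_risk("AI-assisted diagnosis of skin lesions"))  # -> high
```

The point of a triage like this is prioritization: it tells you which systems to put in front of your regulatory specialists first.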


A real-world use case of excelling in AI regulatory compliance

Let’s say you’re a QA Manager at a medical device company. A former colleague sends you an article on LinkedIn about the upcoming publication of the EU AI Act. You’ve never heard of it before. You don’t know what it is about or how it will impact your company. The next day, you check with your colleagues. Only one of them has heard of this new regulation, and only briefly, from a poster at a conference; they didn’t think it applied to them. You spend the rest of the day thinking about how to flag the situation to your manager.

Now, consider a different scenario. A few years ago, your company invested in a regulatory intelligence AI system that tracks changes to regulations, international standards, and guidelines issued by regulatory bodies in the major markets of the world. The AI flagged a draft of the EU AI Act, published on the website of the European Commission. After an adequate time to review it, you shared it with your team, started the activities needed to conform with the AI Act, and by the time it was published, your company was already in full compliance. You might even have had time to provide comments on the Act or participate in working groups on the implementation of it.

Because this company’s management decided to invest in a regulatory intelligence AI platform, the QA team was given a heads-up on the coming regulation in time to establish their processes and achieve compliance without unnecessary stress. While AI might seem scary to some and the uses of some AI systems are questionable, AI can be a wonderful tool in automating some of the heavier tasks of compliance, freeing up time for personnel to work on tasks that require human problem-solving.


Key takeaways

Holistic AI regulations demand collaboration


AI regulations will shift to a holistic approach, which necessitates close collaboration between departments and teams. Cross-functional teams encourage compliance.

Patient safety and privacy are everything


Establishing processes for cybersecurity, security breaches, and data handling is critical for ensuring appropriate patient protection.

Regulatory Intelligence is the key to proactive compliance


Regulatory intelligence is critical for staying compliant, as are eQMS systems and other digital tools that help life science companies proactively seek and achieve compliance.