Nanna Finne, Senior Regulatory Affairs Specialist
As artificial intelligence (AI) reshapes the medical device, pharmaceutical, and life science industries, its influence on quality management has been transformative—unlocking new efficiencies, predictive capabilities, and opportunities for innovation.
Yet, the rapid adoption of AI comes with a regulatory twist: a shifting landscape of guidelines designed to address the unique challenges of these powerful technologies.
For quality assurance (QA) professionals, compliance can feel like chasing a moving target, as frameworks evolve to keep pace with advancements in AI.
From data privacy and algorithmic transparency to ethical decision-making and bias mitigation, regulations are demanding unprecedented rigor from organizations deploying AI. The stakes are high: missteps in compliance can lead to reputational damage, hefty fines, or worse, harm to patients or end users.
This dynamic environment calls for QA teams to take on a dual role—not just ensuring product quality but interpreting, applying, and even anticipating complex regulatory requirements. This means that QA professionals are uniquely positioned to bridge the gap between technical teams, legal counsel, regulatory teams, and external regulators.
However, recent studies from the Pistoia Alliance, a life sciences non-profit that collaborates with the FDA, EMA, NIH, and WHO, among others, show that only 9% of life sciences professionals feel they know and understand EU and US AI regulations well. Not great, considering the potential of AI solutions and their impact on patients worldwide. The good news is that, according to the World Quality Report by Sogeti, almost 77% of companies are investing in AI to enhance their QA processes, leveraging AI for continuous testing and quality management.
In this article, we’ll explore how QA teams can navigate this evolving terrain, ensuring both regulatory adherence and the delivery of trustworthy AI systems.
The integration of AI has revolutionized the life sciences, enabling innovations such as predictive diagnostics, personalized treatment plans, and advanced imaging techniques. AI is embedded in many medical device categories, from patient monitors to CPAP machines to image enhancement software and surgical targeting systems. It is safe to say AI has already improved the treatment of millions of patients around the world.
However, these advancements come under strict regulatory scrutiny to ensure safety, effectiveness, and patient privacy.
Key regulatory frameworks guiding AI in medical devices today include the EU AI Act and the FDA’s evolving requirements for AI/ML-enabled devices, both discussed below.
Besides regulations focused on medical devices, wider regulations such as the EU Data Act and GDPR also apply to digital systems and AI and must be considered.
Staying compliant with AI regulations in life sciences might feel like an uphill battle.
Regulatory bodies are continuously updating standards to address the unique challenges of AI. Meanwhile, AI technologies are racing ahead, and without proper anticipation, planning, and innovation during medical device development, compliance quickly becomes complicated.
AI systems, particularly those based on deep learning, often operate as "black boxes": systems whose internal workings, such as decision-making processes and the logic behind their outputs, are opaque and difficult to understand, even for experts. These black boxes make it increasingly difficult to explain decisions to regulators, clinicians, and patients, which is a critical requirement for compliance.
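To make this concrete, here is a minimal sketch of one common explainability technique, permutation feature importance, using scikit-learn. The model, data, and feature names are hypothetical stand-ins, not a validated medical algorithm; the point is simply that even a "black box" model can be probed to show which inputs drive its decisions.

```python
# A minimal sketch of probing a "black box" model with permutation feature
# importance. All data and feature names below are hypothetical placeholders.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Stand-in for clinical training data (e.g., vitals and lab values).
X, y = make_classification(n_samples=1000, n_features=6, random_state=0)
feature_names = ["heart_rate", "spo2", "resp_rate", "age", "bmi", "lab_crp"]

X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature and measure how much performance drops: the features
# whose shuffling hurts the model most are the ones driving its decisions.
result = permutation_importance(model, X_test, y_test, n_repeats=10, random_state=0)
for name, mean in sorted(zip(feature_names, result.importances_mean),
                         key=lambda p: p[1], reverse=True):
    print(f"{name}: {mean:.3f}")
```

Outputs like these will not satisfy a regulator on their own, but they are the kind of evidence that helps explain a model’s behavior to reviewers, clinicians, and patients.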
Medical AI relies on large volumes of data, which must be collected, processed, and stored in compliance with stringent privacy laws like GDPR and HIPAA. Ensuring that training data is unbiased, representative, and free of errors adds another layer of complexity.
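As a small illustration of the data-quality side, the sketch below checks whether a hypothetical training set is representative of a reference patient population before training ever starts. The groups, shares, and tolerance threshold are illustrative assumptions only; a real bias assessment would be far more thorough.

```python
# A minimal sketch of a representativeness check on a hypothetical training
# set: compare the demographic mix of the data against a reference population
# and flag under-represented groups before training.
import pandas as pd

training_data = pd.DataFrame({
    "sex": ["F", "M", "M", "M", "F", "M", "M", "M"],
    "age_band": ["18-39", "40-64", "40-64", "65+", "18-39", "40-64", "65+", "40-64"],
})

# Reference shares, e.g., from the intended patient population (illustrative).
reference = {"F": 0.51, "M": 0.49}

observed = training_data["sex"].value_counts(normalize=True)
for group, expected in reference.items():
    share = observed.get(group, 0.0)
    if abs(share - expected) > 0.10:  # illustrative tolerance
        print(f"WARNING: group {group!r} is {share:.0%} of data, expected ~{expected:.0%}")
```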
Last but not least, compliance requires close collaboration between AI engineers, regulatory specialists, QA professionals, and clinicians, a task that can be resource-intensive and complex. In pressured economic climates, such as the one we are currently experiencing, regulatory affairs (RA) and QA are often among the departments facing the largest budget reductions.
Current AI regulations are most frequently industry-specific. Future AI regulations will be holistic.
The regulation of AI systems will shift from rules that apply to specific industries or products to higher-level regulations that apply to products across industries and technologies.
Equally, the hope is for AI regulations to become more flexible and allow for greater innovation, considering that 21% of respondents to the previously mentioned Pistoia Alliance survey believe existing AI regulations block their research.
In the US, the FDA is hard at work improving its AI and machine learning (ML) regulations. In 2023, the Center for Drug Evaluation and Research (CDER) issued a discussion paper on artificial intelligence in drug manufacturing, while the FDA issued its discussion paper and request for feedback on using AI and ML in the development of drug and biological products.
In October 2023, President Joe Biden issued an Executive Order addressing the risks of AI-enabled technologies in health care and the use of AI in drug development.
In the EU, the AI Act, published in the Official Journal of the European Union in July 2024, provides the world with its first comprehensive set of regulations for the use of artificial intelligence. While the AI Act does impose stricter requirements on life sciences companies, it describes the responsible use of AI, something the industry has been asking for for years. The regulation applies to all AI systems and output deployed within the EU, regardless of where the manufacturer or system is physically located. The AI Act divides AI systems into risk categories, ranging from minimal to unacceptable risk, each with its own set of actions and risk controls.
The regulatory landscape of tomorrow will not just enforce compliance; it will actively guide the development of safe, effective, and ethical AI technologies from a holistic perspective.
AI systems in healthcare present a unique set of challenges: how do you develop devices that improve patients’ quality of life without causing harm? Here we present the top three challenges of implementing AI in the life sciences, along with possible solutions.
Challenge
There are lots of ethical aspects to consider when dealing with AI systems in the life sciences: transparency, privacy, accountability, sustainability, accuracy, security, and the responsible use of patient data and AI-based decision-making, among others.
Solution
Manufacturers must consider all aspects of the development and production of AI systems and how they impact patients. They should develop global policies for ethical operations across development and production and ensure that all departments, production lines, suppliers, and distributors live up to them.
Challenge
Given the speed of innovation in machine learning, large language models, deep learning, and other AI systems, manufacturers and regulators must stay up to date and adapt constantly. Taking on too much too soon can quickly become unsafe or inefficient for patients and users.
Solution
Manufacturers must invest heavily in the infrastructure needed to adapt to a constantly changing market and industry. This includes personnel, equipment, tools, and training. Considering the time it takes to get a medical device to market, quality, regulatory, and development intelligence is also of the essence.
Challenge
Up until now, the technical “language” of regulatory and quality submissions has been that of engineers and scientists. Changing the language of an entire industry to better suit AI systems will be a challenge for regulators and companies used to the traditional medical device space.
Solution
Regulatory and quality personnel must be taught to review AI documentation, understand the underlying mechanisms, and advise on the best course of action. Specialists in AI systems should be hired into regulatory and quality roles, where they can mentor existing personnel through the new AI paradigm.
Preparing for the future of AI compliance might seem overwhelming and complex. And we’re not going to lie: it can be. But the competitive advantage and good relationships with regulatory bodies will be worth it. Here are some ways to prepare:
The amount of documentation required for quality assurance in the life sciences can be staggering, and the requirements will not become less strict when working with AI systems; au contraire. Investing in an eQMS that connects and documents all your quality-related processes is invaluable.
With so much happening in the regulatory space, one of the biggest compliance challenges is simply staying up to date. Establishing processes for regulatory monitoring, gap and impact assessments, and consolidated lists of applicable regulations and standards can keep you at the forefront of compliance.
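As one possible starting point, the sketch below shows a bare-bones regulatory monitoring loop: it polls a watchlist of regulator web pages and flags any page whose content has changed since the last check. The URLs and state file are illustrative assumptions; in practice, flagged changes would feed into your gap and impact assessments.

```python
# A minimal sketch of automated regulatory monitoring: poll a watchlist of
# pages and flag any whose content hash changed since the last check.
# The watchlist URLs and state file below are illustrative examples.
import hashlib
import json
import pathlib
import urllib.request

WATCHLIST = [
    "https://www.fda.gov/medical-devices/software-medical-device-samd",
    "https://artificialintelligenceact.eu/",
]
STATE_FILE = pathlib.Path("reg_watch_state.json")

state = json.loads(STATE_FILE.read_text()) if STATE_FILE.exists() else {}
for url in WATCHLIST:
    body = urllib.request.urlopen(url, timeout=30).read()
    digest = hashlib.sha256(body).hexdigest()
    # Flag only pages we have seen before whose content has changed.
    if state.get(url) not in (None, digest):
        print(f"CHANGED: {url} (review for gap/impact assessment)")
    state[url] = digest
STATE_FILE.write_text(json.dumps(state, indent=2))
```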
Quality assurance and compliance take time. It is important to ensure that everyone, from top leadership to production staff, understands the timelines, resources, and work required to implement any updated regulations. Rome wasn’t built in a day, and your AI compliance won’t be either.
Protecting patient data and device security is one of the most critical considerations for AI systems, and it is the area on which most regulatory bodies shine the brightest light. Threat prediction, security protocols, and monitoring network traffic for anomalies are essential.
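As a toy example of anomaly monitoring, the sketch below trains an IsolationForest on synthetic baseline traffic features and flags observations that deviate from that baseline. The features and contamination rate are illustrative assumptions; a real deployment would stream features from network logs and tune the detector carefully.

```python
# A minimal sketch of network anomaly detection with an IsolationForest.
# The traffic features below are synthetic stand-ins for real network logs.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(0)
# Columns: bytes transferred, packets per second, distinct destination ports.
normal_traffic = rng.normal(loc=[5000, 40, 3], scale=[800, 5, 1], size=(500, 3))
detector = IsolationForest(contamination=0.01, random_state=0).fit(normal_traffic)

# Score new observations: a prediction of -1 flags an anomaly to investigate.
new_samples = np.array([[5100.0, 42.0, 3.0], [90000.0, 400.0, 60.0]])
for sample, label in zip(new_samples, detector.predict(new_samples)):
    status = "ANOMALY" if label == -1 else "ok"
    print(status, sample)
```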
The EU AI Act is here, and you might as well get a head start: you can already begin to classify your AI systems according to risk. Even if you don’t plan to sell in the EU, it doesn’t hurt to understand the risk class of your device.
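As a thought exercise, a first-pass triage of your systems might look like the sketch below. The questions and mapping are a heavily simplified illustration, not legal advice; the AI Act’s actual classification rules (prohibited practices, Annex III high-risk use cases, transparency obligations) are far more detailed.

```python
# A heavily simplified sketch of triaging AI systems into EU AI Act risk
# categories. Illustrative only: the Act's real classification rules are
# far more detailed, and legal review is still required.
from enum import Enum

class AIActRisk(Enum):
    UNACCEPTABLE = "prohibited practice"
    HIGH = "high risk: conformity assessment, risk management, logging"
    LIMITED = "limited risk: transparency obligations"
    MINIMAL = "minimal risk: voluntary codes of conduct"

def triage(uses_prohibited_practice: bool,
           is_safety_component_or_annex_iii: bool,
           interacts_with_humans: bool) -> AIActRisk:
    if uses_prohibited_practice:
        return AIActRisk.UNACCEPTABLE
    if is_safety_component_or_annex_iii:
        return AIActRisk.HIGH
    if interacts_with_humans:
        return AIActRisk.LIMITED
    return AIActRisk.MINIMAL

# Example: AI acting as a safety component of a medical device is
# typically high risk under this simplified triage.
print(triage(False, True, True))  # AIActRisk.HIGH
```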
Let’s say you’re a QA manager at a medical device company. A former colleague sends you an article on LinkedIn about the upcoming publication of the EU AI Act. You’ve never heard of it before; you don’t know what it is about or how it will impact your company. The next day, you check with your colleagues. Only one of them had heard of the new regulation, and only briefly, from a poster at a conference, and they didn’t think it applied to them. You spend the rest of the day thinking about how to flag the situation to your manager.
Now, consider a different scenario. A few years ago, your company invested in a regulatory intelligence AI system that tracks changes to regulations, international standards, and guidelines issued by regulatory bodies in the world’s major markets. The AI flagged a draft of the EU AI Act published on the website of the European Commission. After adequate time to review it, you shared it with your team and started the activities needed to conform to the AI Act, and by the time it was published, your company was already in full compliance. You might even have had time to comment on the Act or participate in working groups on its implementation.
Because this company’s management decided to invest in a regulatory intelligence AI platform, the QA team was given a heads-up on the coming regulation in time to establish their processes and achieve compliance without unnecessary stress. While AI might seem scary to some and the uses of some AI systems are questionable, AI can be a wonderful tool in automating some of the heavier tasks of compliance, freeing up time for personnel to work on tasks that require human problem-solving.