AI certification is a puzzle. Regulations are evolving, but they aren’t always clear, and companies developing AI systems often find themselves navigating compliance frameworks built for traditional software. Instead of waiting for regulators to define the process, the team behind the Diconium pneumonia detection project took a different approach: they reverse-engineered AI certification itself.
Rather than building an AI model and then figuring out how to certify it, they flipped the process. They started with the regulatory requirements and worked backward, developing an AI system that met them from the ground up. The goal wasn’t just to certify one AI model—it was to create a scalable, repeatable approach to AI assurance that could be applied across industries.
Unlike traditional medical devices, AI isn’t static. A medical scanner produces the same image no matter who uses it, but an AI model learns from data, adapts over time, and can even drift in performance. That makes it difficult to apply rigid pass/fail criteria the way regulators do with conventional products.
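To make that concrete, here is a minimal sketch of how a team might flag such drift in practice: compare the model's recent accuracy against the accuracy it was certified with and raise a flag once the gap exceeds an agreed tolerance. The function name, data shape, and tolerance value are illustrative assumptions, not details from the Diconium project.

```python
# Minimal sketch: flag performance drift by comparing a model's recent
# accuracy against the accuracy measured at certification time.
# All names and thresholds here are illustrative, not taken from the project.

def check_for_drift(recent_correct: list[bool],
                    baseline_accuracy: float,
                    drift_tolerance: float = 0.05) -> bool:
    """Return True if recent accuracy has dropped by more than the allowed tolerance."""
    if not recent_correct:
        return False  # no new data, nothing to compare
    recent_accuracy = sum(recent_correct) / len(recent_correct)
    return (baseline_accuracy - recent_accuracy) > drift_tolerance


if __name__ == "__main__":
    # Accuracy at certification time was 0.92; the latest batch looks worse.
    recent = [True] * 80 + [False] * 20  # 0.80 accuracy on recent predictions
    if check_for_drift(recent, baseline_accuracy=0.92):
        print("Drift detected: model no longer meets its certified performance.")
```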
The Diconium project aimed to develop an AI model that could detect pneumonia from chest X-rays—but with a twist. The team wasn’t just interested in whether they could build an effective AI model. They wanted to figure out how to get it certified under real-world regulatory standards. That meant aligning with strict safety, accuracy, and fairness criteria while navigating the uncertainty of AI regulation.
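The article does not describe the model's architecture, so the sketch below is purely hypothetical: a common way to build this kind of binary chest X-ray classifier is to fine-tune a pretrained convolutional backbone such as ResNet-18, which is what the example assumes.

```python
# Hypothetical sketch of a binary chest X-ray classifier of the kind described
# in the article; the actual Diconium architecture and training setup are not
# published here, so everything below is an assumption for illustration.
import torch
import torch.nn as nn
from torchvision import models


def build_pneumonia_model() -> nn.Module:
    # Start from an ImageNet-pretrained backbone and replace the final layer
    # with a single logit: pneumonia vs. no pneumonia.
    model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
    model.fc = nn.Linear(model.fc.in_features, 1)
    return model


model = build_pneumonia_model()
dummy_xray = torch.randn(1, 3, 224, 224)        # one preprocessed X-ray tensor
probability = torch.sigmoid(model(dummy_xray))  # predicted pneumonia probability
print(f"Predicted pneumonia probability: {probability.item():.2f}")
```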
To develop an AI model that could be certified, the team first needed to understand what regulators actually wanted. They analyzed existing medical device standards, AI risk management frameworks, and emerging compliance guidelines, identifying key focus areas such as safety, accuracy, and fairness.
By defining these criteria at the start, the team ensured that certification wasn’t just an afterthought—it was built into the AI development process.
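One way to make "built in from the start" tangible is to write the acceptance criteria down as machine-checkable thresholds before any training happens. The metrics and numbers below are hypothetical examples, not the project's actual requirements.

```python
# Sketch: expressing certification criteria as explicit, machine-checkable
# thresholds before any model is trained. The specific metrics and numbers
# are hypothetical examples, not the project's actual requirements.
from dataclasses import dataclass


@dataclass(frozen=True)
class CertificationCriteria:
    min_sensitivity: float   # missed pneumonia cases are the key safety risk
    min_specificity: float   # limit false alarms
    max_subgroup_gap: float  # fairness: largest allowed accuracy gap between patient subgroups

    def is_met(self, sensitivity: float, specificity: float, subgroup_gap: float) -> bool:
        return (sensitivity >= self.min_sensitivity
                and specificity >= self.min_specificity
                and subgroup_gap <= self.max_subgroup_gap)


criteria = CertificationCriteria(min_sensitivity=0.95, min_specificity=0.90, max_subgroup_gap=0.03)
print(criteria.is_met(sensitivity=0.96, specificity=0.91, subgroup_gap=0.02))  # True
```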
Once the requirements were clear, the team moved on to the next step: developing an AI model that wasn’t just effective but also certifiable. That meant integrating compliance measures directly into the AI development workflow rather than bolting them on at the end.
By embedding these measures into the system from day one, the team didn’t just build a high-performing AI—they built one that aligned with regulatory expectations.
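What might one such measure look like? A simple and widely used option is to record every evaluation run as a timestamped, machine-readable report so that results stay traceable for auditors. The report fields and file layout below are assumptions for illustration, not the project's actual format.

```python
# Sketch of one possible compliance measure: recording every evaluation run as
# a timestamped, machine-readable report that auditors can trace later.
import json
from datetime import datetime, timezone
from pathlib import Path


def write_evaluation_report(model_version: str, metrics: dict, out_dir: str = "audit_reports") -> Path:
    report = {
        "model_version": model_version,
        "evaluated_at": datetime.now(timezone.utc).isoformat(),
        "metrics": metrics,
    }
    path = Path(out_dir)
    path.mkdir(parents=True, exist_ok=True)
    report_file = path / f"evaluation_{model_version}.json"
    report_file.write_text(json.dumps(report, indent=2))
    return report_file


# Example: log the metrics from a validation run for later review.
print(write_evaluation_report("pneumonia-v0.3", {"sensitivity": 0.96, "specificity": 0.91}))
```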
One of the biggest hurdles in AI certification is that every model is different. Unlike traditional software, which can be tested against a fixed set of benchmarks, AI requires customized assurance processes for each new use case.
The Diconium team tackled this by developing a structured, repeatable AI certification framework that could be applied beyond just their pneumonia detector.
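As a rough illustration of what "repeatable" could mean in code, the sketch below wraps the evaluation and pass/fail decision into a single routine that any binary classifier can be run through, with only the thresholds changing per use case. None of this is the team's published framework; it is an assumption about how such a harness might look.

```python
# Sketch of a reusable assurance check: the same routine can be pointed at any
# binary classifier's predictions, with use-case-specific thresholds passed in.
# Metric names and thresholds are illustrative, not a published framework.
from typing import Sequence


def certification_report(y_true: Sequence[int], y_pred: Sequence[int],
                         thresholds: dict[str, float]) -> dict:
    tp = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    tn = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 0)
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    metrics = {
        "sensitivity": tp / (tp + fn) if (tp + fn) else 0.0,
        "specificity": tn / (tn + fp) if (tn + fp) else 0.0,
    }
    passed = all(metrics[name] >= minimum for name, minimum in thresholds.items())
    return {"metrics": metrics, "passed": passed}


# The same harness works for a pneumonia detector or any other binary classifier;
# only the thresholds change per use case.
report = certification_report(
    y_true=[1, 1, 1, 0, 0, 0],
    y_pred=[1, 1, 0, 0, 0, 1],
    thresholds={"sensitivity": 0.6, "specificity": 0.6},
)
print(report)
```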
The Diconium pneumonia detection project proved that AI certification doesn’t have to be a roadblock—it can be built directly into AI development. By reverse-engineering the requirements, the team demonstrated a scalable approach that could help other industries navigate AI regulation more effectively.
This method can be applied beyond healthcare, to AI systems used in autonomous driving, finance, and risk assessment. Instead of developing AI first and worrying about compliance later, companies can design AI models with certification in mind from the start, reducing approval delays and ensuring safer deployments.
AI regulations will continue to evolve, but one thing is clear: companies that integrate AI assurance early will be the ones that move the fastest and build the most trust. The Diconium project showed that AI safety and compliance aren’t obstacles—they’re essential components of responsible AI innovation.