
As AI Governance Mandates Converge, One Framework Aims to Solve Pharma’s Multi-Jurisdiction Compliance Problem

Updated on: 13 April, 2026 04:28 PM IST | Mumbai
Buzzfeed | faizan.farooqui@mid-day.com

Zelthy’s AI governance framework helps pharma companies manage multi-jurisdiction compliance ahead of EU AI Act 2026 rules.


Vibhor Agnihotri

A new governance architecture for pharmaceutical AI aims to solve multi-jurisdiction compliance before the EU AI Act's high-risk obligations take effect in August 2026.

Pharmaceutical companies deploying artificial intelligence into regulated workflows are facing a problem that has less to do with the technology itself than with the governance infrastructure around it. The FDA’s January 2025 draft guidance on AI in drug development introduced a risk-based credibility assessment framework for AI models supporting regulatory decisions. The EU AI Act’s high-risk system obligations, which cover healthcare applications, take effect August 2, 2026. In January 2026, the EMA and FDA jointly published ten guiding principles for AI use across the medicines lifecycle. And across APAC markets, pharmacovigilance authorities are tightening expectations for AI-assisted adverse event reporting.

The result is that many pharmaceutical companies are now facing several AI compliance requirements at the same time, even though most of their existing IT systems were never built to manage that kind of complexity all at once. Companies that operate in different jurisdictions now need AI systems that can meet different regulatory expectations within one controlled platform. This includes FDA requirements for audit trails, EU rules on transparency and human oversight under the AI Act, and local pharmacovigilance reporting standards in different regions.


It is against this backdrop that Vibhor Agnihotri, who leads U.S. strategy and growth at Zelthy, an AI-enabled platform provider for regulated pharmaceutical operations, has architected what the company calls its AI Governance Framework for Life Sciences. The framework covers six regulated areas: compliance, regulatory affairs, pharmacovigilance, patient services, advanced therapies, and supply chain traceability. It was designed from the outset to operate across jurisdictional boundaries without requiring parallel governance systems for each market.

“Every governance failure we see traces back to the same root cause: AI was integrated as an add-on, and the audit trail has a gap in it,” Agnihotri said. “We built this framework so that governance is a structural property of the platform, not a layer applied after the fact.”

Governance as Architecture, Not Policy

The framework is built on Zango, Zelthy’s open-source, Django-based application framework for regulated enterprise environments. What distinguishes Agnihotri’s approach from traditional compliance systems is that governance is enforced by the technology itself rather than depending on policy.

Zango’s role-based access control engine governs what users and AI agents can initiate, review, approve, or override within any regulated workflow. AI actions and human actions share the same permission architecture; there is no separate governance layer for machine-generated outputs. Every action, regardless of origin, is subject to identical access controls, escalation paths, and approval chains.
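The shared-permission idea can be sketched in a few lines. This is an illustrative assumption, not Zango's actual API: the `Actor`, `Role`, and `can_perform` names are hypothetical, and the point is only that a human reviewer and an AI agent pass through one identical permission gate.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Role:
    name: str
    permissions: frozenset  # e.g. {"initiate", "review", "approve"}

@dataclass(frozen=True)
class Actor:
    identity: str
    kind: str   # "human" or "ai_agent" -- same model for both
    role: Role

def can_perform(actor: Actor, action: str) -> bool:
    # One permission check for every actor: AI-generated actions pass
    # through the identical gate as human-initiated ones.
    return action in actor.role.permissions

reviewer = Role("medical_reviewer", frozenset({"review", "approve"}))
drafter = Role("drafting_agent", frozenset({"initiate"}))

human = Actor("j.doe@pharma.example", "human", reviewer)
agent = Actor("summarizer-v2", "ai_agent", drafter)

assert can_perform(human, "approve")
assert can_perform(agent, "initiate")
assert not can_perform(agent, "approve")  # an AI agent cannot self-approve
```

Because both actor kinds resolve through the same role table, there is no second code path for machine-generated actions that an auditor would have to inspect separately.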

The platform’s audit logging captures a tamper-evident, object-level record of every system event: model version, data inputs, user identity, timestamp, and decision outcome. The workflow engine enforces sequential, role-controlled process steps, meaning no AI output can advance in a regulated workflow without the required human review checkpoints being completed and logged.
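One common way to make an audit trail tamper-evident is a hash chain, where each entry's digest incorporates the previous entry's digest. The sketch below illustrates that general technique; the field names (`model_version`, `decision`, and so on) are assumptions for illustration, not Zango's actual schema.

```python
import hashlib
import json

class AuditLog:
    """Append-only log where each entry is chained to its predecessor."""

    def __init__(self):
        self.entries = []
        self._prev_hash = "0" * 64  # genesis value

    def record(self, **event):
        payload = json.dumps(event, sort_keys=True)
        digest = hashlib.sha256((self._prev_hash + payload).encode()).hexdigest()
        self.entries.append({"event": event, "hash": digest, "prev": self._prev_hash})
        self._prev_hash = digest

    def verify(self) -> bool:
        # Recompute the chain; any retroactive edit breaks it.
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps(e["event"], sort_keys=True)
            if e["prev"] != prev:
                return False
            if hashlib.sha256((prev + payload).encode()).hexdigest() != e["hash"]:
                return False
            prev = e["hash"]
        return True

log = AuditLog()
log.record(actor="summarizer-v2", model_version="2.3.1",
           inputs=["case-8841"], decision="draft_created",
           ts="2026-02-01T09:14Z")
log.record(actor="j.doe@pharma.example", decision="approved",
           ts="2026-02-01T10:02Z")
assert log.verify()

# Editing a past entry invalidates every hash from that point on:
log.entries[0]["event"]["decision"] = "auto_approved"
assert not log.verify()
```

The "nothing to reconstruct" claim quoted below maps onto this structure: the evidence is emitted as a side effect of each workflow step, and its integrity can be checked mechanically.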

This design choice has practical implications for the EU AI Act’s upcoming high-risk obligations, which will require providers and deployers of high-risk AI systems to implement risk management, human oversight, and technical documentation by August 2026. By building these controls into the platform’s architecture rather than layering them on through policy documents, Agnihotri argues, the framework already satisfies the structural requirements that many companies will spend the coming months trying to retrofit.

“When the FDA or an auditor asks for a complete record of every AI-assisted decision, who reviewed it, what model version was running, what data it accessed, the answer has to come from the system architecture, not from someone compiling spreadsheets,” Agnihotri said. “Zango’s audit logging and policy framework make that record a byproduct of normal operation. There is nothing to reconstruct.”

A Jurisdiction-Agnostic Approach to Multi-Market Compliance

One of the framework’s more notable design decisions is the separation of structural governance controls from jurisdiction-specific regulatory configuration. The core controls, including role-based access, audit logging, version tracking, and evidence generation, operate identically regardless of market. Jurisdiction-specific rules, reporting formats, escalation thresholds, and submission requirements are configurable per deployment.

For enterprise pharmaceutical companies operating across FDA, EMA, and APAC regulatory environments, this means maintaining a single governed platform rather than running parallel compliance systems. The approach recognizes that, although specific regulatory requirements differ from one jurisdiction to another, the core controls that make AI governance auditable, such as access management, provenance tracking, and human-in-the-loop enforcement, are fundamentally the same everywhere.
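The stated separation of invariant controls from per-market configuration can be sketched as follows. The class and field names are illustrative assumptions rather than Zelthy's actual API; the 15-day deadline reflects the common expedited adverse-event reporting window under both FDA FAERS and EMA EudraVigilance rules.

```python
from dataclasses import dataclass

# Structural controls: identical in every deployment, regardless of market.
CORE_CONTROLS = frozenset({
    "role_based_access", "audit_logging",
    "version_tracking", "evidence_generation",
})

@dataclass(frozen=True)
class JurisdictionConfig:
    """Per-market regulatory surface: the part that varies by deployment."""
    market: str
    reporting_format: str            # e.g. FDA FAERS vs EMA EudraVigilance
    ae_reporting_deadline_days: int  # expedited adverse-event window
    human_oversight_required: bool = True

def build_deployment(config: JurisdictionConfig) -> dict:
    # Every deployment carries the same structural controls;
    # only the regulatory configuration differs.
    return {"controls": CORE_CONTROLS, "config": config}

us = build_deployment(JurisdictionConfig("US", "FAERS", 15))
eu = build_deployment(JurisdictionConfig("EU", "EudraVigilance", 15))

# One governed core everywhere, a different regulatory surface per market:
assert us["controls"] is eu["controls"]
assert us["config"] != eu["config"]
```

The design choice this illustrates is that adding a new market becomes a configuration exercise rather than a second governance system to validate.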

The framework covers six regulated operational domains: compliance operations (including promotional review, adverse event management, and third-party due diligence), regulatory affairs (submission management, dossier assembly, regulatory intelligence), patient services (digital PSP platforms, benefits verification, adverse event detection in call center operations), advanced therapies (cell and gene therapy orchestration, REMS compliance, cold chain tracking), commercial operations (HCP engagement, sample management under PDMA, field force management), and supply chain traceability (serialization, authentication, last-mile tracking).

First U.S. Enterprise Deployment

The framework’s first practical test in the U.S. market came with an enterprise deployment at a global pharmaceutical company. The client’s IT validation and quality assurance teams evaluated the AI governance architecture against the documentation, audit evidence, and change control standards applied to validated computerized systems under FDA 21 CFR Part 11.

According to Agnihotri, the implementation was completed in weeks rather than the roughly six months typically required for validated system implementations at large U.S. pharmaceutical companies. He attributes the faster timeline to the platform’s ability to generate audit and change control evidence automatically, rather than requiring validation teams to spend months compiling that documentation manually.

“U.S. enterprise pharma has a well-established bar for validated system deployment,” Agnihotri said. “When we walked their quality team through the audit logging architecture, the RBAC policy framework, and the change control evidence that Zango generates natively, they could evaluate it against their existing validation protocols directly. That alignment is what compressed the timeline.”

The Convergence Challenge Ahead

Agnihotri’s work on the governance framework reflects a broader challenge the pharmaceutical industry will have to confront as AI governance mandates accelerate. The FDA’s January 2025 draft guidance signaled the agency’s intent to formalize expectations around AI credibility assessment in regulatory submissions. The EMA-FDA joint principles published in January 2026 indicate increasing international alignment on foundational AI governance expectations. And the EU AI Act’s high-risk obligations, once enforceable in August 2026, will impose concrete technical and documentation requirements on AI systems used in healthcare contexts.

Before joining Zelthy, Agnihotri founded Dashmed, a healthcare technology company that built interoperability infrastructure for India’s national digital health ecosystem. That experience, which involved designing systems that had to work at scale across fragmented regulatory environments, helped shape his approach to the governance framework.

“The companies that will struggle most are the ones treating AI governance as a compliance exercise rather than an architecture problem,” Agnihotri said. “Compliance is a moving target. Architecture is what lets you hit it across every jurisdiction simultaneously.”

Zelthy’s platform is currently live in more than ten countries, with 300 applications in production serving global pharmaceutical companies. The AI Governance Framework is available for enterprise evaluation.


