Decision-Making in PV: How Experts Evaluate Uncertain Safety Data
- 16/02/2026
- 6 min read
In pharmacovigilance, the most critical decisions often arise from uncertain safety data. These situations occur when information is incomplete, the available data conflict, and the clinical picture is far from straightforward.
Real-world safety cases rarely resemble textbook examples. Patients often have comorbidities, take multiple medications simultaneously, or present with atypical disease courses or undocumented risk factors. Add to this fragmented reports, missing timelines, and varying interpretations, and each case becomes a complex analytical challenge.
Despite advances in automation, AI, and big data, expert judgment remains the cornerstone of effective pharmacovigilance. While data can guide the process, final decisions still require experience, clinical reasoning, and the ability to navigate uncertainty.
Making Complex Safety Data Clear and Compliant
In real-world PV practice, most complex cases do not fit neatly into the categories of “related” or “not related.”
A single case can have several plausible explanations. These may involve the drug’s mechanism of action, interactions, underlying conditions, or disease progression.
For example, a patient may be taking five different medications, each with its own safety profile. Symptoms may be side effects, manifestations of an underlying disease, or the result of an interaction. Even the event’s timing – one of the key evaluation criteria – may not provide a clear answer.
Yet PV teams are expected to provide concrete, regulatory-supported decisions:
- Should the event be considered a potential risk?
- Does it require an update to the SmPC?
- Should regulators be informed, or is further assessment needed?
Interview with an Expert: Making Sense of Uncertain Safety Data
Digital platforms like DrugCard do not replace expert judgment. Instead, they enhance it, helping specialists focus on accurate risk interpretation.
We spoke with Mircea Ciuca, a pharmacovigilance expert who regularly handles complex and ambiguous safety cases. He explained how he evaluates incomplete data, distinguishes “likely related” from “uncertain,” and balances evidence for regulatory decisions.
When you receive a complex safety case with multiple contributing factors, how do you structure your initial assessment?
I start by reconstructing the clinical narrative and timeline. I outline what happened, in which sequence, and how it relates to treatment, dose changes, and other medical events. Then I break down the possible contributors (underlying disease, comorbidities, concomitant medications, procedural factors, lifestyle or physiological factors) and assess how plausibly each could explain or contribute to the main adverse event. Finally, I combine that with prior knowledge – known risks from the product label, class effects, and any emerging signals – to arrive at a structured, transparent assessment that others can follow and challenge if needed. Much of the focus is on missing data: if the inventory of missing key information is too large, we postpone the final assessment, request targeted follow-up, and finalise the report once the additional information arrives.
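To make this structure concrete, here is a minimal sketch of how such an assessment could be captured as data. Every class name, field, and the follow-up threshold below is a hypothetical illustration, not part of DrugCard or any regulatory standard:

```python
from dataclasses import dataclass, field

@dataclass
class TimelineEvent:
    day: int          # days relative to the first dose of the suspect drug
    description: str  # e.g. "dose increased", "rash onset", "drug withdrawn"

@dataclass
class Contributor:
    name: str          # e.g. "concomitant drug X", "underlying liver disease"
    plausibility: str  # graded judgment: "high", "moderate", "low"
    rationale: str     # why this factor could (or could not) explain the event

@dataclass
class CaseAssessment:
    narrative: str
    timeline: list[TimelineEvent] = field(default_factory=list)
    contributors: list[Contributor] = field(default_factory=list)
    prior_knowledge: list[str] = field(default_factory=list)   # label, class effects, signals
    missing_key_info: list[str] = field(default_factory=list)  # drives targeted follow-up

    def ready_for_final_assessment(self, max_gaps: int = 2) -> bool:
        """Postpone the conclusion while too much key information is missing."""
        return len(self.missing_key_info) <= max_gaps
```

Writing the assessment down this way keeps it transparent: a colleague can see exactly which contributors were considered, on what grounds, and which gaps still block a final conclusion.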
How do you determine whether an event is more likely related to the drug rather than the disease or concomitant medications?
I look at four key elements together: temporal relationship, biological plausibility, alternative explanations, and consistency with prior evidence. A clean temporal association alone is never enough; it becomes meaningful when supported by a plausible mechanism, a pattern that fits what we know from trials or post‑marketing data, and when competing causes are less convincing or do not fully account for the clinical picture. Often, the answer is not a binary “yes/no” but a probabilistic judgment that I then express in a way that is meaningful for regulatory decision‑making (for example, “consistent with a causal relationship” or “cannot be excluded but alternative explanations remain more likely”).
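Purely as an illustration of making such a weighing auditable, the four elements could be recorded as graded judgments and mapped to the hedged wordings quoted above. The grades and thresholds below are invented for this sketch; real causality conclusions are narrative judgments, not scores:

```python
from enum import IntEnum

class Grade(IntEnum):
    AGAINST = -1     # the element argues against drug causation
    NEUTRAL = 0      # uninformative or missing
    SUPPORTIVE = 2   # the element supports drug causation

def provisional_wording(temporal: Grade, plausibility: Grade,
                        alternatives_weak: Grade, prior_consistency: Grade) -> str:
    """Map the four graded elements to a regulatory-style wording (toy thresholds)."""
    score = temporal + plausibility + alternatives_weak + prior_consistency
    if score >= 6:
        return "consistent with a causal relationship"
    if score >= 2:
        return "cannot be excluded, but alternative explanations remain more likely"
    return "causal relationship unlikely; alternative explanations predominate"

# Example: strong timing, plausible mechanism, consistent priors, alternatives
# uninformative -> score 6 -> "consistent with a causal relationship"
print(provisional_wording(Grade.SUPPORTIVE, Grade.SUPPORTIVE,
                          Grade.NEUTRAL, Grade.SUPPORTIVE))
```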
What role do formal causality algorithms play compared to your clinical judgment?
Formal algorithms provide structure and a degree of reproducibility, especially in large teams or when training less experienced colleagues, and they help ensure key domains (time to onset, dechallenge, rechallenge, alternative causes) are not overlooked. However, they are ultimately simplifications of complex clinical reality and can over‑ or under‑estimate relatedness in cases with atypical presentations, multimorbidity, or incomplete data. For complex cases, I use them as a supportive tool rather than a decision-maker: clinical judgment, grounded in pathophysiology and the totality of available evidence, always has primacy.
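One widely used formal algorithm of this kind is the Naranjo ADR probability scale: ten yes/no/unknown questions with fixed point values whose sum maps to a category. The sketch below encodes the published scoring; as the answer above stresses, the result is a supportive input, not a verdict:

```python
# Naranjo ADR probability scale: (question, points for "yes", points for "no").
# "Do not know / not done" scores 0 for every question.
NARANJO = [
    ("Previous conclusive reports on this reaction?",               1,  0),
    ("Event appeared after the suspected drug was given?",          2, -1),
    ("Reaction improved on dechallenge or a specific antagonist?",  1,  0),
    ("Reaction reappeared on rechallenge?",                         2, -1),
    ("Alternative causes that could alone explain the reaction?",  -1,  2),
    ("Reaction reappeared when a placebo was given?",              -1,  1),
    ("Drug detected in blood at known toxic concentrations?",       1,  0),
    ("More severe at higher dose or less severe at lower dose?",    1,  0),
    ("Similar reaction to the same or similar drugs previously?",   1,  0),
    ("Event confirmed by objective evidence?",                      1,  0),
]

def naranjo_score(answers: list[str]) -> tuple[int, str]:
    """Sum the scale from ten answers given as 'yes', 'no', or 'unknown'."""
    score = sum({"yes": y, "no": n, "unknown": 0}[a]
                for (_, y, n), a in zip(NARANJO, answers, strict=True))
    if score >= 9:
        return score, "definite"
    if score >= 5:
        return score, "probable"
    if score >= 1:
        return score, "possible"
    return score, "doubtful"

# Example: positive dechallenge and rechallenge, no alternative cause,
# objective confirmation -> (8, 'probable')
print(naranjo_score(["unknown", "yes", "yes", "yes", "no",
                     "unknown", "unknown", "unknown", "unknown", "yes"]))
```

Note how the example reaches only "probable" despite positive dechallenge and rechallenge: the scale reserves "definite" for near-complete evidence, which is precisely why atypical or data-poor cases still need the clinical judgment described above.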
Does your approach change when there are potential regulatory consequences – for example, an SmPC update or use restrictions?
The fundamental principles remain the same, but experts must act more carefully and handle uncertainty more thoughtfully when regulatory decisions are at stake. I strongly focus on transparency: clearly articulating the evidence’s strengths and limitations, the size and characteristics of the exposed population, and the potential impact on patients and clinical practice. In those situations, I also place more emphasis on collaboration – involving cross‑functional experts (clinical, epidemiology, regulatory, sometimes external advisors) – to ensure that any proposed change to the SmPC or risk minimisation measures is both scientifically justified and proportionate to the level of risk and uncertainty.
How do you handle cases where key data are missing, incomplete, or conflicting?
First, I try to close the gaps actively: targeted follow‑up with the reporter, checking additional data sources, or triangulating information from similar cases or relevant studies, rather than passively accepting the initial report as fixed. When uncertainty remains – which is often the reality – I avoid overstating the assessment’s precision and make that uncertainty explicit in the case documentation and any aggregate evaluation. At the population level, I look for patterns across multiple imperfect cases, supported where possible by non‑interventional studies or other real‑world data, to see whether a coherent signal emerges despite the noise of incomplete information.
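At the population level, one standard way to test whether "a coherent signal emerges" from many imperfect reports is a disproportionality statistic such as the proportional reporting ratio (PRR). The counts below are hypothetical, and the common screening convention (PRR ≥ 2 with at least 3 cases, usually alongside a chi-squared test) flags a signal for expert review rather than establishing causation:

```python
def prr(a: int, b: int, c: int, d: int) -> float:
    """Proportional reporting ratio from a 2x2 table of spontaneous reports.

    a: reports with the suspect drug AND the event of interest
    b: reports with the suspect drug, other events
    c: reports with other drugs AND the event of interest
    d: reports with other drugs, other events
    """
    return (a / (a + b)) / (c / (c + d))

# Hypothetical database counts: 12 of 1,000 reports for the drug mention the
# event, versus 40 of 99,000 reports for all other drugs.
a, b, c, d = 12, 988, 40, 98960
ratio = prr(a, b, c, d)
print(f"PRR = {ratio:.1f}")          # PRR = 29.7
signal_flag = ratio >= 2 and a >= 3  # screening convention, not a causality verdict
```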
What role do digital platforms and AI play today in supporting your expert judgment?
Digital platforms and AI are becoming essential for handling volume and complexity: they help surface relevant information faster, standardise routine steps, and highlight patterns that might otherwise be missed in large datasets. For me, their most significant value is freeing human experts from repetitive tasks, allowing more time for nuanced clinical reasoning, risk–benefit analysis, and communication with stakeholders. At the same time, I see AI as a decision‑support partner rather than a replacement for expert judgment. The most robust outcomes emerge when experts combine transparent algorithms with their experience, continuously calibrate results, and understand each tool’s limitations.
Conclusion
Pharmacovigilance rarely offers simple answers. Experts often face uncertain safety data, incomplete reports, and complex patient histories. Making defensible decisions requires more than raw data – it demands experience, clinical reasoning, and a structured approach to risk assessment.
Combining deep clinical expertise with modern digital platforms sets a new standard of quality in PV.
Even in the era of automation, AI, and big data, expert judgment remains the cornerstone of pharmacovigilance. Handling uncertain safety data effectively requires both skill and the right tools. This combination enables PV teams to make accurate, reliable decisions, even in the most complex and ambiguous cases.