Podcast
Arash Azhand - Trusting the Machine: A Deep Dive into AI Safety and Assurance
A conversation with Arash Azhand about AI safety and assurance, and how trustworthy systems require monitoring, transparency, robustness, and continuous checks after deployment.

In this conversation
Arash Azhand discusses AI safety and assurance as practical engineering and governance work. The conversation moves from medical AI and other safety-critical applications to transparency, robustness, cybersecurity, and continuous assurance.
Arash treats trust as something that has to be earned technically and operationally. Trustworthy AI requires evidence, monitoring, tooling, and a way to keep checking a system after it is deployed.
Central question
How do we make AI systems reliable enough for settings where failure has consequences?
What we cover
- Why regulatory gaps matter for AI in safety-relevant domains.
- How transparency, robustness, and assurance tools support trust.
- Why generative AI creates both cybersecurity risks and defensive opportunities.
Guest background
Arash Azhand works on AI safety, assurance, and trustworthy systems. His work connects technical diagnostics with the governance needed for high-stakes AI use.
Things to listen for
- The difference between model performance and system assurance.
- Why safety has to continue after deployment.
- How interpretability and robustness tools can support decision-making without solving everything.
Trust in AI systems
The episode contrasts two approaches to trusting a model:
One-time validation
A model is tested once, reported as accurate, and then treated as stable from that point on.
Continuous assurance
The system is monitored, challenged, documented, and re-evaluated as data, context, and risks change.
