Search Results for author: John McDermid

Found 7 papers, 0 papers with code

Safety Analysis of Autonomous Railway Systems: An Introduction to the SACRED Methodology

no code implementations • 18 Mar 2024 • Josh Hunter, John McDermid, Simon Burton

To combat these difficulties, we introduce SACRED, a safety methodology for producing an initial safety case and determining important safety metrics for autonomous systems.

What's my role? Modelling responsibility for AI-based safety-critical systems

no code implementations • 30 Dec 2023 • Philippa Ryan, Zoe Porter, Joanna Al-Qaddoumi, John McDermid, Ibrahim Habli

Many authors have commented on the "responsibility gap", where it is difficult for developers and manufacturers to be held responsible for the harmful behaviour of an AI-based safety-critical system (AI-SCS).

Unravelling Responsibility for AI

no code implementations • 4 Aug 2023 • Zoe Porter, Philippa Ryan, Phillip Morgan, Joanna Al-Qaddoumi, Bernard Twomey, John McDermid, Ibrahim Habli

It is widely acknowledged that we need to establish where responsibility lies for the outputs and impacts of AI-enabled systems.

Tasks: Philosophy, valid

Safety Assessment for Autonomous Systems' Perception Capabilities

no code implementations • 17 Aug 2022 • John Molloy, John McDermid

The well-established safety-analysis methods developed for conventional safety-critical (SC) systems are not well-matched to autonomous systems (AS), ML, or the sensing systems used by AS.

Tasks: Decision Making, Scene Understanding

A Principles-based Ethics Assurance Argument Pattern for AI and Autonomous Systems

no code implementations • 29 Mar 2022 • Zoe Porter, Ibrahim Habli, John McDermid, Marten Kaas

An assurance case is a structured argument, typically produced by safety engineers, to communicate confidence that a critical or complex system, such as an aircraft, will be acceptably safe within its intended context.

Tasks: Ethics

The Role of Explainability in Assuring Safety of Machine Learning in Healthcare

no code implementations • 1 Sep 2021 • Yan Jia, John McDermid, Tom Lawton, Ibrahim Habli

Established approaches to assuring safety-critical systems and software are difficult to apply to systems employing ML where there is no clear, pre-defined specification against which to assess validity.

Tasks: BIG-bench Machine Learning, Explainable Artificial Intelligence (XAI)

A Framework for Assurance of Medication Safety using Machine Learning

no code implementations • 11 Jan 2021 • Yan Jia, Tom Lawton, John McDermid, Eric Rojas, Ibrahim Habli

As healthcare is now data-rich, it is possible to augment safety analysis with machine learning to discover actual causes of medication error from the data, and to identify where they deviate from what was predicted in the safety analysis.

Tasks: BIG-bench Machine Learning
