
How can we trust AI?

CMS DigitalBytes

15 November 2018

Humans are already trusting AI. For example, in May 2018 the US FDA approved the first medical device to diagnose disease without a doctor. So we need to engineer AI for accountability, as explained by Dr Joanna Bryson in her thought-provoking article referenced below. Some (such as Jacob Turner in his recent book "Robot Rules: Regulating Artificial Intelligence") argue that AI deployments should be treated as legal entities and that there should be a registration system to keep track of them. Joanna presents the opposite view, arguing that human characteristics, such as the suffering we feel when we lose status, liberty or property, are a key component in ensuring that AI deployments are safe and made to the highest standards. She argues that these characteristics would be lost if AI deployments were treated as legal entities. These were interesting points to think about in those small moments between speaking with potential entrants to the legal profession last night at the Legal Cheek event.

AI & Global Governance: No One Should Trust AI

The content above was originally posted on CMS DigitalBytes - CMS lawyers sharing comment and commentary on all things tech.

Authors

Dr. Rachel Free
Partner
London