
Seven moral rules for AI algorithms to share

CMS DigitalBytes

13 February 2019

Researchers have recently identified seven principles of morality used by all societies to ensure cooperation and collective problem solving. The seven are “help your family”, “help your group”, “return favours”, “be brave”, “defer to superiors”, “divide resources fairly” and “respect others’ property”.

If these rules are fundamental to the human moral code across all human societies, would it be sensible to use the same moral code for groups of autonomous AI agents?

In the future we will need to design, and possibly regulate, AI agents according to a defined moral code. As humans ourselves, we are perhaps biased towards using our own moral code. But it seems to me that there is no easy, direct translation from human moral codes to machine moral codes. And many may argue that human moral codes are themselves flawed, since human societies exhibit many negative behaviours. More research is needed in this field.
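To make the translation problem concrete, here is a minimal sketch of one way such a shared code might be encoded for machine agents. Everything in it (the MoralRule enum, the permitted() check, the toy predicates) is a hypothetical illustration rather than an established technique: the genuinely hard part, as noted above, is writing the predicates themselves.

```python
# Purely illustrative sketch: one way a shared "moral code" might be
# represented for a group of cooperating agents. All names here
# (MoralRule, permitted, the example predicates) are hypothetical.
from enum import Enum
from typing import Callable, Dict

class MoralRule(Enum):
    HELP_FAMILY = "help your family"
    HELP_GROUP = "help your group"
    RETURN_FAVOURS = "return favours"
    BE_BRAVE = "be brave"
    DEFER_TO_SUPERIORS = "defer to superiors"
    DIVIDE_FAIRLY = "divide resources fairly"
    RESPECT_PROPERTY = "respect others' property"

def permitted(action: dict, checks: Dict[MoralRule, Callable[[dict], bool]]) -> bool:
    """A proposed action is permitted only if it violates none of the shared rules."""
    return all(check(action) for check in checks.values())

# Trivial stand-in predicates: deciding whether a real-world action
# "divides resources fairly" is exactly the unsolved translation problem.
checks = {
    MoralRule.RESPECT_PROPERTY: lambda a: not a.get("takes_others_property", False),
    MoralRule.DIVIDE_FAIRLY: lambda a: a.get("share_kept", 1.0) <= 0.5,
}

print(permitted({"takes_others_property": False, "share_kept": 0.4}, checks))  # True
print(permitted({"takes_others_property": True, "share_kept": 0.4}, checks))   # False
```

Even in this toy form, the difficulty is visible: the rules are easy to list, but each predicate must somehow ground a moral concept in machine-readable terms, and the rules can conflict in ways the sketch does not resolve.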

The content above was originally posted on CMS DigitalBytes - CMS lawyers sharing comment and commentary on all things tech.

Authors

Dr. Rachel Free
Partner
London