Decisions are increasingly being taken by computer systems rather than humans. These decisions often rely on computer algorithms that analyse data and the patterns within it, and then aim to produce an evidence-based decision. The most sophisticated systems use machine learning (ML) – incorporating algorithms that can adjust and adapt on the fly, transforming their own behaviours as they learn rather than being prescriptively designed in advance by a human programmer.
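To make "adapting on the fly" concrete, here is a minimal sketch of online learning: a simple perceptron whose weights adjust with every new example it sees, rather than being fixed in advance by a programmer. The data and names are invented for illustration and do not describe any particular real system.

```python
# Illustrative sketch only: a perceptron that updates its own weights
# as each new labelled example arrives (labels are +1 or -1).

def perceptron_update(weights, bias, features, label, lr=0.1):
    """Adjust weights after seeing one labelled example."""
    activation = sum(w * x for w, x in zip(weights, features)) + bias
    prediction = 1 if activation >= 0 else -1
    if prediction != label:  # only adapt when the current rule is wrong
        weights = [w + lr * label * x for w, x in zip(weights, features)]
        bias += lr * label
    return weights, bias

# A stream of (features, label) pairs arriving one at a time:
stream = [([1.0, 0.0], 1), ([0.0, 1.0], -1), ([1.0, 1.0], 1), ([0.0, 0.5], -1)]
weights, bias = [0.0, 0.0], 0.0
for features, label in stream:
    weights, bias = perceptron_update(weights, bias, features, label)
```

The point of the sketch is that the decision rule after the loop is not one any human wrote down: it emerged from the order and content of the data, which is precisely why such systems need scrutiny.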
There have already been reports of systems that have made discriminatory decisions, whether by design or by mistake. Many of the systems, devices and sensors embedded around us lack well-engineered security and are vulnerable to hacking and compromise.
If the right approach is not taken, the downside of this emergent generation of systems is that they will be discriminatory, biased, unaccountable and manipulative, and will create significant security, privacy and trust issues. However, if well applied, the upside is that these computer-based systems will help support better policy-making, and hence better outcomes for us all, in important areas such as healthcare, education and transport.
Society needs to have trust in these increasingly pervasive and influential systems, and in the decisions they make. This trust needs to encompass everything from the security of the systems and the way they protect personal data, through to the decisions they make.
Trust will require consistent standards of security and privacy engineering together with transparency about the decisions these systems are making. We need to have confidence, and evidence, that they behave in unbiased, non-discriminatory and non-invasive ways and are making applicable, acceptable and legal decisions. Any exceptions will need to be identified so that appropriate remedial, or legal, action can be taken.
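One way to gather such evidence is through routine statistical checks on the decisions a system produces. The sketch below shows one of the simplest: comparing approval rates across two groups (a "demographic parity" style test). The decision records, group names and threshold are all invented for illustration; real auditing would use richer fairness measures and legally informed thresholds.

```python
# Illustrative sketch only: flag a decision log if approval rates
# differ too much between two groups.

def approval_rate(decisions, group):
    """Fraction of approvals among decisions affecting the given group."""
    relevant = [d for d in decisions if d["group"] == group]
    return sum(d["approved"] for d in relevant) / len(relevant)

# Hypothetical decision log for two groups, A and B:
decisions = [
    {"group": "A", "approved": True},
    {"group": "A", "approved": True},
    {"group": "A", "approved": False},
    {"group": "B", "approved": True},
    {"group": "B", "approved": False},
    {"group": "B", "approved": False},
]

gap = abs(approval_rate(decisions, "A") - approval_rate(decisions, "B"))
flagged = gap > 0.2  # the threshold is a policy choice, not a technical one
```

A check like this does not prove a system is fair, but it illustrates the kind of transparency and evidence that would let exceptions be identified and remedial action taken.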
Stance is researching the impact of algorithms and machine learning. In particular, we are evaluating the policy opportunities, implications and actions required for the public and private sectors to work together to ensure trust in machine-based algorithms and the decisions they make.