
Thinking Space Technology

How do we factor ethics into the code?

Algorithmic systems have enormous power, and that power can be abused if there are no rules. For some time, a number of initiatives and companies have been discussing how to use algorithms safely. In 2018 the EU, too, convened a group of experts, who presented guidelines on how to deal with the issue. Talks are also underway between the EU and international partners such as Singapore, Japan and Canada.


“Technologies and algorithms know no borders,” an EU paper states. Against this background, the Bertelsmann Foundation teamed up with the think tank iRights.Lab in 2019 to develop a set of ethical rules for the use of algorithms; the results can also be seen in our exhibition.

1. Developing expertise

Those who develop and operate algorithmic systems, or make decisions about their use, must have the necessary expertise and a suitably detailed understanding of how the technology works and what effects it can have. Sharing individual and institutional knowledge (...) is just as central here as qualification measures. These are to be integrated into the training (...) of new employees.

2. Defining responsibilities

Responsibilities must be clearly assigned, and whoever bears a responsibility must be aware of the tasks it involves. This also applies when responsibility is shared by multiple individuals or organisations. The assignment must be fully and transparently documented, both internally and externally. Responsibility must not be shifted to the algorithmic system, to users or to other persons involved.

3. Documenting goals

The goals pursued by the algorithmic system must be clearly defined (...). (...) A documented impact assessment must be conducted before the algorithmic system is deployed. Especially in the case of learning systems (...) the impact assessment must be repeated at regular intervals. In this context, risks of discrimination and other (...) consequences must be considered. Value considerations in relation to (...) the use of algorithmic systems must be recorded.
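What such documentation might look like in practice: the sketch below keeps an impact-assessment record as a plain data structure alongside the system. It is only an illustration; every field name (stated_goals, discrimination_risks, the review dates) is our own choice, not something the rules prescribe.

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ImpactAssessment:
    """One documented impact assessment for an algorithmic system (rule 3)."""
    system_name: str
    stated_goals: list[str]            # the goals the system is meant to pursue
    discrimination_risks: list[str]    # risks considered before deployment
    value_considerations: list[str]    # recorded value trade-offs
    assessed_on: date
    next_review: date                  # learning systems: repeat at regular intervals

# Hypothetical example system and example entries.
assessment = ImpactAssessment(
    system_name="loan-prescreening",
    stated_goals=["rank incoming applications for manual review"],
    discrimination_risks=["proxy variables correlated with age or gender"],
    value_considerations=["final decision stays with a human case worker"],
    assessed_on=date(2019, 5, 1),
    next_review=date(2019, 11, 1),
)
print(assessment)
```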

4. Ensuring security

The reliability and robustness of an algorithmic system and its underlying data in the event of attacks, unauthorised access and manipulation must be guaranteed under all circumstances. To this end, security considerations must from the outset be a fixed element in the design of the algorithmic system (security by design). The system must be tested in a protected environment before it is used. Security precautions must be documented.
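One concrete “security by design” measure is to detect manipulation before the system starts: record checksums of the model and data while the system is still in the protected test environment, and refuse to deploy if they no longer match. A minimal Python sketch; the file names and digest entries are placeholders we invented for illustration.

```python
import hashlib
from pathlib import Path

# Digests recorded while the system was tested in the protected environment.
# The values below are placeholders, not real checksums.
EXPECTED_SHA256 = {
    "model.bin": "<digest recorded at test time>",
    "training_data.csv": "<digest recorded at test time>",
}

def verify_integrity(artifact_dir: Path) -> bool:
    """Refuse to start the system if any artifact has been manipulated."""
    for name, expected in EXPECTED_SHA256.items():
        digest = hashlib.sha256((artifact_dir / name).read_bytes()).hexdigest()
        if digest != expected:
            print(f"Integrity check failed for {name}; refusing deployment.")
            return False
    return True
```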

5. Labelling

Wherever algorithmic systems are used, the persons interacting with them must be able to recognise, through appropriate labelling, that a decision or prediction is based on an algorithmic system. This applies especially when the system imitates a human in the way it interacts (speech, appearance, etc.).
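In code, labelling can be as simple as never returning a bare prediction: every output carries a machine-readable marker and a human-readable notice. A minimal sketch; the field names are our own, not mandated by the rules.

```python
def label_algorithmic_output(prediction: str) -> dict:
    """Wrap a prediction so recipients can recognise its algorithmic origin."""
    return {
        "result": prediction,
        "source": "algorithmic-system",  # machine-readable label
        "notice": "This result was produced by an automated system.",
    }

print(label_algorithmic_output("application routed to review queue B"))
```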

6. Ensuring understandability

An algorithmic system and its functional principles (...) must be made understandable enough that humans can question and scrutinise them. To this end, information about the data and models underlying the system, its architecture and its potential impacts must be published (...). It must always be examined whether a goal (...) could also be achieved with an algorithmic system that is less complex and (...) easier to understand.
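That last requirement can be made routine: train an interpretable model alongside the complex one and compare their quality. A sketch using scikit-learn and its bundled demo dataset as a stand-in for the real task; the one-percentage-point tolerance is an arbitrary choice for illustration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_breast_cancer(return_X_y=True)  # demo data, stand-in for the real task

complex_model = GradientBoostingClassifier()
simple_model = LogisticRegression(max_iter=5000)  # far easier to explain

acc_complex = cross_val_score(complex_model, X, y, cv=5).mean()
acc_simple = cross_val_score(simple_model, X, y, cv=5).mean()

# If the interpretable model is (almost) as accurate, rule 6 favours it.
if acc_complex - acc_simple <= 0.01:
    print(f"Prefer the simpler model ({acc_simple:.3f} vs {acc_complex:.3f}).")
else:
    print(f"Extra complexity must be justified ({acc_simple:.3f} vs {acc_complex:.3f}).")
```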

7. Ensuring controllability

An algorithmic system must remain manageable: all persons involved in (...) its development and (...) deployment must jointly retain control over the system at all times. Overall control must be maintained even when tasks are distributed among (...) persons and work areas. The operation of a system must never become so complex (...) that humans can no longer control or change it. This applies especially to self-learning systems. If this controllability cannot be ensured, the system should not be used (...).
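Retaining overall control includes a human-operable off switch that no part of the system can bypass. A minimal sketch of what such a wrapper could look like; the class and method names are invented for illustration.

```python
class ControlledSystem:
    """Keeps a human-operable off switch in front of the model (rule 7)."""

    def __init__(self, model):
        self.model = model
        self.enabled = True  # operators can revoke this at any time

    def disable(self, reason: str) -> None:
        self.enabled = False
        print(f"System disabled by operator: {reason}")

    def decide(self, case):
        if not self.enabled:
            raise RuntimeError("System disabled; route the case to a human.")
        return self.model(case)

system = ControlledSystem(model=lambda score: "approve" if score > 0.5 else "review")
print(system.decide(0.7))  # normal operation
system.disable("unexplained behaviour found during an audit")
# Any further system.decide(...) call now raises and forces human handling.
```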

8. Checking effectiveness

An algorithmic system must be actively monitored to ensure that the intended goals are actually being pursued and that its use does not violate existing law. Appropriate technical arrangements must be made so that external inspectors can test an algorithmic system (...) comprehensively and independently. If a negative effect is detected, (...) the algorithmic system must be adjusted accordingly.
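Active monitoring can include a recurring check for discriminatory effects, for example comparing positive-outcome rates across groups and raising an alert when they diverge. A sketch in plain Python; the 0.1 gap threshold is an illustrative choice, not part of the rules.

```python
from collections import defaultdict

def monitor_group_outcomes(decisions, max_gap=0.1):
    """Alert if positive-outcome rates diverge across groups (rule 8).

    `decisions` is an iterable of (group, positive) pairs, positive in {0, 1}.
    """
    totals, positives = defaultdict(int), defaultdict(int)
    for group, positive in decisions:
        totals[group] += 1
        positives[group] += positive
    rates = {g: positives[g] / totals[g] for g in totals}
    gap = max(rates.values()) - min(rates.values())
    if gap > max_gap:
        print(f"ALERT: outcome gap of {gap:.2f} across groups {rates}; adjust the system.")
    return rates

# Toy data: group A receives positive outcomes noticeably more often than group B.
monitor_group_outcomes([("A", 1), ("A", 1), ("A", 0), ("B", 0), ("B", 0), ("B", 1)])
```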

9. Enabling complaints

The entity employing an algorithmic system must provide easily accessible points of contact. Firstly, affected persons must be able to request qualified and detailed information about specific decisions and the considerations behind them. (...) Secondly, a simple, low-threshold and effective complaints procedure must be available. Complaints, and the measures they trigger, must be documented.
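Both requirements presuppose that every decision is documented together with the considerations behind it, so it can be explained on request and complaints can be attached to it. A minimal sketch; the identifiers and field names are our own.

```python
import datetime

decision_log: dict[str, dict] = {}  # decision id -> documented record
complaints: list[dict] = []

def record_decision(decision_id: str, outcome: str, reasons: list[str]) -> None:
    """Store the considerations behind a decision so they can be explained later."""
    decision_log[decision_id] = {
        "outcome": outcome,
        "reasons": reasons,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }

def explain(decision_id: str) -> dict:
    """First requirement: qualified information on a specific decision."""
    return decision_log[decision_id]

def file_complaint(decision_id: str, text: str) -> None:
    """Second requirement: a simple complaints procedure, itself documented."""
    complaints.append({"decision_id": decision_id, "complaint": text})

record_decision("2019-0042", "rejected", ["income below model threshold"])
print(explain("2019-0042"))
file_complaint("2019-0042", "The threshold does not reflect my current income.")
```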