
Trustworthy AI for responsible competitiveness

How does one teach machines and robots, basically a bunch of circuitry and binary code, to behave ethically? Artificial intelligence isn’t evil, but how its creators wield and apply AI may make it seem so.

Two bodies so far – Singapore’s IMDA and the European Commission – seem to believe that, like everything else built into machines and robots, ethics too can be programmed into them. More specifically, ethically guided settings can be built in to make AI behave ethically.
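As a loose illustration of what "building ethics in" could look like in practice, the sketch below shows a rule-based pre-action check. It is purely hypothetical: the rule names, categories, and structure are this article's own invention, not part of the HLEG draft or any IMDA framework.

```python
# Hypothetical sketch: a hard-coded pre-action "ethics" gate.
# Rule names and categories are illustrative, not drawn from the
# HLEG draft or IMDA's framework.

# Actions a system should never take on its own.
FORBIDDEN_ACTIONS = {
    "identify_person_without_consent",
    "score_citizen_without_consent",
}

def requires_human_oversight(action: str) -> bool:
    """Flag actions that should be escalated to a human operator."""
    return action.startswith("use_force")

def check_action(action: str) -> str:
    """Return 'deny', 'escalate', or 'allow' for a proposed action."""
    if action in FORBIDDEN_ACTIONS:
        return "deny"
    if requires_human_oversight(action):
        return "escalate"
    return "allow"
```

Even this toy example hints at the real difficulty: someone has to decide, in advance and in code, which actions are forbidden and which require a human in the loop.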

According to an ethics guideline draft authored by the European Commission’s AI high-level expert group (HLEG), trustworthy AI has two components.

Firstly, it should respect fundamental rights, applicable regulation and core principles and values, ensuring an “ethical purpose”. Secondly, it should be “technically robust” and reliable since, even with good intentions, a lack of technological mastery can cause unintentional harm.

There are real-life examples of this already – autonomous vehicles mowing down innocent pedestrians.

Ensuring ethical purpose

Ethics principles are abstract, but HLEG’s draft tries to operationalise ethics for artificial intelligence by referring to the fundamental rights commitment of the EU Treaties and Charter of Fundamental Rights, as well as other legal instruments (the European Social Charter) and legislative acts (the General Data Protection Regulation).

All these serve the purpose of identifying ethical principles and specifying how concrete ethical values can be operationalised.

Fundamental rights provide the bedrock for the formulation of ethical principles. Those principles are abstract high-level norms that developers, deployers, users and regulators should follow in order to uphold the purpose of human-centric and Trustworthy AI. Values, in turn, provide more concrete guidance on how to uphold ethical principles, while also underpinning fundamental rights.

At the end of the day, we want to leverage artificial intelligence, but not cripple it with restrictions that ethical values may place on it. The idea is to enable competitiveness, but responsible competitiveness.

The ten requirements for Trustworthy AI listed below have been derived from the rights, principles and values of Chapter I. While they are all equally important, the specific context of each application domain and industry needs to be taken into account when applying them.

  1. Accountability
  2. Data Governance
  3. Design for all
  4. Governance of AI Autonomy (Human oversight)
  5. Non-Discrimination
  6. Respect for (& Enhancement of) Human Autonomy
  7. Respect for Privacy
  8. Robustness
  9. Safety
  10. Transparency

The 37-page draft also broaches a list of critical concerns.

Critical concerns raised by AI on which no conclusion could be reached

  1. Identification without consent.
  2. Covert AI systems – humans must be able to request and validate the fact that they are interacting with an AI identity.
  3. Normative and mass citizen scoring without consent – citizen scoring is already used in school systems and for driver’s licences, for example. A fully transparent procedure should be available to citizens, at least in limited social domains, so they may make an informed decision to opt out where possible.
  4. Lethal autonomous weapon systems (LAWS) – LAWS take over the critical functions of selecting and attacking individual targets. Human control can potentially be relinquished entirely, and the risks of malfunction are left unaddressed.
  5. Longer-term concerns – future AI use is, at best, speculative in nature and requires extrapolating into the future.