Embedding practical ethics into AI
Bit by bit, one and zero at a time, we are piling up huge quantities of personal data. Quite likely, we will never manage this data entirely ourselves. We do need personal control over personal data, but the average reasonable person will soon hand that control over to an artificially intelligent helper, one she trusts to be ethically safe. Our experience of corporate entities "taking care" of what we see on the web, which products we buy, who our friends are, and even what thoughts we think is, luckily, still too recent for us to forget to build some ethics into these new artificial helpers.
There is barely any ethical layer in the current digital domain. This track aims to raise awareness of the many ways, both appealing and hard, in which we need to embed ethics into the solutions we are building for the future. Ethics needs to become more than just a nice word in conversations about personal data. It needs to become a practical tool that reaches beyond transparent algorithms and decision making, to the fair remuneration of the individuals behind AI training data, and beyond.
Keywords: artificial intelligence, algorithmic transparency, automated decision making, AI training data, ethics