Behind the scenes of artificial intelligence
It is said that artificial intelligence will change our lives fundamentally, at least as much as the discovery of electricity did. AI can improve medical care and make climate protection measures more efficient. However, intelligent systems can also exhibit bias, enable surveillance and restrict people's scope of action.
The aim of the “Demystify AI” conference, organised by the German Center for Research and Innovation New York (DWIH NY) in November 2019, was to name the opportunities and risks posed by this technology as specifically as possible while critically examining popular narratives. The event brought together more than 160 German and American experts from the field of artificial intelligence and was part of the new Future Forum conference format. “In future, we aim to hold this event once a year to focus on a topical issue that is deliberately aimed at a wider audience,” said Benedikt Brisch, Director of the DWIH New York.
As well as discussing new technical possibilities, the participants also examined some of the ethical issues involved in the development of AI applications. “Our aim was to take a look behind the scenes of this technology and to see it from both sides,” said Brisch. “Firstly, in the form of intelligent solutions for highly complex tasks, and secondly, as the scenario of an unsettling technology that has us at its mercy.”
Alumniportal Deutschland talked to three participants – an artist, a journalist and an academic – about their views on the subject.
Dr Misselhorn, can machines act ethically?
Dr Catrin Misselhorn, professor of philosophy at the University of Göttingen:
“That’s a fascinating and important question which is currently being explored in depth by both philosophers and computer scientists. The aim is to program computers in such a way that they are able to make moral decisions. It is not yet clear to what extent this is even possible and whether such decisions can or even should be left to machines.
I believe, however, that fundamental guidelines for good artificial morality can be formulated, which incidentally are also generally transferable to artificial intelligence: the self-determination of human beings should be encouraged, not restricted; artificial systems should not make life-and-death decisions; and it must be ensured that humans always retain responsibility in a substantial way.
I think such guidelines can serve us very well when it comes to checking the social implications of the latest AI applications early on. For me, the Future Forum’s “A.I. for Social Good” workshop offered a good example of this. The use of facial recognition software in a Brooklyn apartment building against the will of its residents, which we discussed there, can definitely not be regarded as a good application in the light of these guidelines: it is a blatant breach of the residents’ informational self-determination. However, it is equally important that these guidelines are not seen as obstacles to innovation, but as inspiration for good design.”
Ms Schellmann, can artificial intelligence make fair decisions?
Hilke Schellmann, professor at New York University, freelance reporter for the Wall Street Journal and DAAD alumna:
“It is now an undisputed fact that artificial intelligence can be extremely effective in helping us analyse large amounts of data. My research shows that it becomes problematic when attempts are made to extend this pattern-recognition expertise to complex social areas.
One example of this is the use of AI in recruiting. Until quite recently, American companies used the distance between home and the workplace as a relevant recruitment criterion in the application process. Statistically, the probability of an employee quitting their job increases the further they have to commute. However, anyone who makes decisions according to this logic not only risks overlooking genuinely talented people, but is also discriminating against whole sections of the population. In the US, many people from socially disadvantaged backgrounds live on the very edge of cities or in certain districts that they were assigned to historically. This means unintentional discrimination occurs through the back door of the algorithm, so to speak.
What we really need in future when it comes to AI is therefore, first of all, greater awareness: just because artificial intelligence makes a decision, it doesn’t automatically mean that this decision is objective. Secondly, we should aim for transparency: it needs to be clear which variables are being used and how they are weighted. And finally, all the experts I spoke to want to see greater regulation – either by the state or by an independent body.”
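The proxy effect Schellmann describes, where a seemingly neutral variable such as commute distance ends up standing in for group membership, can be sketched in a few lines. The scoring function, the penalty weight and all the numbers below are illustrative assumptions, not any real company's hiring system:

```python
# A minimal sketch of proxy discrimination in an automated screening score.
# All figures and group labels are invented for illustration only.

def screening_score(qualification: float, commute_km: float) -> float:
    """Toy recruiting score: qualification minus a commute penalty.

    The penalty stands in for the real-world practice of treating
    distance to the workplace as a retention signal.
    """
    return qualification - 0.2 * commute_km

# Two equally qualified groups of candidates; one group historically
# lives much farther from the city-centre workplace.
group_a = [(80, 5), (75, 8), (82, 6)]     # (qualification, commute in km)
group_b = [(80, 40), (75, 45), (82, 38)]  # same qualifications, longer commutes

threshold = 75
hired_a = sum(screening_score(q, d) >= threshold for q, d in group_a)
hired_b = sum(screening_score(q, d) >= threshold for q, d in group_b)

print(hired_a, hired_b)  # prints "2 0": group B is shut out despite
                         # identical qualifications
```

No variable in the sketch mentions a protected characteristic, yet the outcome differs sharply between the groups. That is the "back door" of the algorithm: transparency about which variables are used and how they are weighted is what makes such effects auditable at all.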
Ms Siddiquie, can art explain artificial intelligence to us?
Esther Siddiquie, media and performance artist and DAAD alumna:
“I believe art can be a very good introduction to engaging with artificial intelligence. This is primarily because art lets people directly experience the essential nature of this technology. At the heart of these systems is something highly artificial which, on the surface, interacts with us and therefore takes on almost human characteristics. Like chatbots, which we might sometimes forget are actually computer programs if they have been programmed well. And then we feel caught out: damn, I was just talking to a robot!
Reversing this relationship between naturalness and artificiality was the idea behind the work I created for the Future Forum. People see an abstract object on a pedestal, a hologram floating in a transparent pyramid. This hairy green thing is deliberately designed to look as un-human as possible, yet it has something human-like about it – namely the movements it makes. These are not hand-animated but based on motion-capture data of human movement. This provided a surprising change of perspective for many people. And that is precisely what art can and should do in this context: question the seemingly inexorable technological trend towards tools that act in an ever more human way. AI can only appear to become more human. It would make a lot more sense for us to remain aware of its artificiality and to focus on the aspects that will always distinguish us from it.”
German Centers for Research and Innovation
The German Centers for Research and Innovation (DWIH) are a network of German research organisations, higher education institutions and research-based companies. They are managed by the DAAD and funded by the Federal Foreign Office. At five locations in New York, São Paulo, New Delhi, Moscow and Tokyo, the DWIH provide a joint platform for German innovation leaders, showcase the capabilities of German research and connect German researchers with local cooperation partners.
Do you think AI is a positive technology for the future or should it be treated with caution? Discuss this topic with us!