SIUTranslations Frankfurt - The Human Side of AI: The Role of Artificial Intelligence in Society

Author: Vanessa Hübner | Edited by: Burcu Anil Kirmizitas

November 20th, 2017 was a cold and rainy day – the kind of day that tempts one to sit under a warm blanket at home and sip hot tea. Nevertheless, SIU Frankfurt organised another exciting SIUTranslations event, “The Human Side of AI: The Role of Artificial Intelligence in Society”. The event was fully booked: more than 80 interested attendees swapped their warm blankets and hot teas for enlightening talks and, afterwards, delicious wine while networking with fellow attendees. They were not disappointed: SIU members learned about the shift from ‘logical’ to ‘numerical’ artificial intelligence, its current and future applications, and how these technologies can be used more ethically for the benefit of all. Who presented? Two Frankfurt experts in the field of artificial intelligence and big data: Prof. Gregory Wheeler and Prof. Roberto V. Zicari.

“What’s so deep about deep learning?”

Gregory Wheeler is a professor of philosophy and computer science at the Frankfurt School of Finance and Management. He has been a research scientist at several universities and institutes in the USA, Germany and Portugal, is the author of many scientific publications and books, and is an editor of several scientific journals. His interests cover “philosophy, artificial intelligence, statistics and cognitive science”.

He explained that the goal of artificial intelligence is to develop a system that can either think or act like a human. But how can this be achieved? The first approach – making a system think – involves defining rules to manipulate given representations, so that the system can perceive objects, understand sentences and evaluate situations, and thus act. The second, more feasible approach starts from a given task the system should perform, such as perceiving an object, and involves choosing representations that are incorporated into an algorithm. Machine learning falls within this second approach: the system is programmed to pursue a specific goal, and it learns “from data without being explicitly programmed to do so”. This strength lets machine learning touch every part of our lives – processing vision and language, improving robotics, science and medicine, and serving government and commerce. The underlying principles come from computer science and statistics.

However, as Gregory explained, artificial intelligence is not a ‘new’ phenomenon. It is an old idea, initiated in 1956 at the Dartmouth Conference, which for decades followed the ‘logical’ approach: making the system think, with numerous representations guiding it to solutions – Deep Blue, the chess-playing computer, being a prime example. The emergence of large data sets at the beginning of the new millennium shifted the field from this ‘logical’ approach to a ‘numerical’ concept of artificial intelligence: large data sets expose interactions between variables and make it possible to predict certain outcomes. This allows a system to do. However, since we are at the beginning of this new, promising approach, a long road of research still lies ahead before it reaches its goal of systems that act like a human.
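The phrase learning “from data without being explicitly programmed to do so” can be made concrete with a toy sketch. The code below (purely illustrative – nothing like it was shown at the event) recovers a hidden rule from example data by least squares, instead of having the rule hand-coded:

```python
def fit_line(xs, ys):
    """Learn slope and intercept from example data via least squares."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    # Covariance of x and y divided by variance of x gives the slope
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    var = sum((x - mean_x) ** 2 for x in xs)
    slope = cov / var
    intercept = mean_y - slope * mean_x
    return slope, intercept

# Training data generated by a rule the learner never sees: y = 2x + 1
xs = [0.0, 1.0, 2.0, 3.0, 4.0]
ys = [2 * x + 1 for x in xs]

slope, intercept = fit_line(xs, ys)  # the program 'discovers' the rule
```

The program contains no statement of the rule y = 2x + 1; it recovers it from the examples – a miniature version of the ‘numerical’, data-driven approach Gregory described.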
Therefore, Gregory urged the audience to appreciate the long-term objectives of artificial intelligence and not to fall into the trap described by Amara’s Law: “we tend to overestimate the effect of a technology in the short run and underestimate the effect in the long run.”

“Big Data & The Great Awakening”

Roberto V. Zicari is a full professor and founder of the Frankfurt Big Data Lab. He is also the Director of Goethe Unibator, a network in Frankfurt supporting bright minds in translating their innovative ideas into market-ready products, a member of the Global Venture Lab network and the Object Management Group, and the editor of ODBMS.org – an internet platform providing information on big data and new trends in data management and data science – and its blog. Roberto has a sound scientific publishing record in the field of data science and has spent time as a visiting scientist in numerous labs in the USA, Switzerland, Mexico and Denmark. He is also an internationally recognised expert in database and information systems.

Roberto’s talk focussed on the ethical and societal implications of big data and artificial intelligence. He explained that having and handling big data is what has made the recent progress in artificial intelligence possible. Companies with large data pools, such as Google, Facebook, Microsoft and Apple, can feed these data into algorithms that send personalised ads to prospective customers. For such ads, decisions that are merely better on average suffice to increase the odds of making a sale. However, similar algorithms can be used to make higher-stakes decisions, such as medical diagnoses, loan approvals, hiring and crime prevention. Here, better on average is not good enough any more: higher-stakes decisions affect individuals’ lives and demand accuracy, fairness and the ability to discriminate between different conditions. This leads to the notion of an auditing tool that makes the technology able to explain itself, its data-driven algorithm and its decisions – and to the questions of how much transparency we desire and whether we wish to have a “human in the loop”. But the involvement of humans in these systems bears further ethical implications: Who would be responsible for the consequences of these decisions? What are the human motivations to interfere in these decisions? Who would regulate these motivations? Is it realistic? And how can human involvement steer the use of big data and artificial intelligence towards the common good? For this reason, Roberto and Andrej Zwitter, a professor at the University of Groningen, Netherlands, launched the Data for Humanity initiative with the goal to “bring people and institutions together who share the motivation to use data for the common good”.
With this initiative, they not only emphasise the need for stronger collaboration between researchers and decision makers in charities and government, but also urge adherence to the following ethical rules when using data:

·       Do no harm

·       Use data to help create peaceful coexistence

·       Use data to help vulnerable people and people in need

·       Use data to preserve and improve natural environment

·       Use data to help create a fair world without discrimination
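The auditing idea Roberto raised – making an algorithm’s decisions inspectable for fairness – can be illustrated with a toy check. The sketch below uses entirely hypothetical loan decisions and group labels, and computes a simple ‘demographic parity’ gap, just one of several (much-debated) fairness measures; it is an illustration, not anything presented at the event:

```python
def approval_rate(decisions, groups, group):
    """Fraction of positive decisions (1 = approved) received by one group."""
    outcomes = [d for d, g in zip(decisions, groups) if g == group]
    return sum(outcomes) / len(outcomes)

def parity_gap(decisions, groups):
    """Absolute difference in approval rates between the two groups present."""
    a, b = sorted(set(groups))
    return abs(approval_rate(decisions, groups, a)
               - approval_rate(decisions, groups, b))

# Hypothetical decisions with a group label per applicant
decisions = [1, 0, 1, 1, 0, 1, 0, 0]
groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]

gap = parity_gap(decisions, groups)  # group A: 0.75 approved, group B: 0.25
```

A large gap does not by itself prove unfair treatment, but an audit tool reporting such numbers is a first step towards the transparency Roberto called for.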

Finally, Roberto emphasised that although software designers carry a distinctive ethical responsibility, we are all responsible for the use of big data and artificial intelligence: from employees to software developers to politicians to associations. He reiterated that there is a big gap between the idea of doing something good for society and actually acting on it. Roberto stressed that this gap will be bridged only if everyone, in their daily lives, becomes aware and takes responsibility, finally becoming an active participant in the development and progress of artificial intelligence for the common good.


This SIUTranslations event was documented in an amazing photo series by the Frankfurt-based photographer Nikolay Nikolov (www.blindspoteurope.weebly.com). Photos and videos of the event can be viewed on our Facebook page “Science Innovation Union”.