Photo: Gene Kogan

Artificial Intelligence in Art and Society

Program or Be Programmed

Creative tools, medical analyses, autonomous driving…many promising technologies rely on Artificial Intelligence (AI) to make rule-based decisions behind the scenes. The implications are dizzying. Will they take tedious tasks off our hands, leaving us with more time for the good things in life? Or will they make us redundant and trigger decisions we might regret?

Making music, creating art, discovering the secret power of algorithms…the Futurium Lab invites us all to experience first-hand the creative potential and dangers of AI.

Invisible and Everywhere

According to Science Journalist Ulrich Eberl, AI is already here and all around us: machines “drive cars without human intervention, learn to cook and serve, paint and make music, think and debate. Some have already surpassed us. They make more accurate medical diagnoses than human doctors, speak 20 languages, or identify technical problems even before the machine fails.”

If this has made your adrenaline spike up a bit, relax. Generally speaking, AI isn’t about to replace us any time soon. On the contrary, it can free us of boring, repetitive jobs and even help us become more creative. Unlike our own brains, computers can compare billions of data sets and recognise patterns in a few seconds. When used responsibly, AI could potentially benefit us in almost all parts of life.

More Artificial than Intelligent

But let’s rewind to the basics. What we currently call Artificial Intelligence is anything but “intelligent,” at least when judged by human standards. To date, four-year-old children are still better at learning than digital systems. The more apt term would be “silicon-based, single-issue specialists” because AI machines are usually only good at solving one specific problem. To tackle their assigned issue, these machines are trained on large data sets and encouraged to establish patterns and connections. Based on this, they can draw conclusions, recommend actions, or make decisions. The key premise can be summed up like this: “AI is neither malicious nor kind; does not have independently developed intentions or goals; and does not engage in self-reflection. What AI can do is to perform well-specified tasks to help discover associations between data and actions, providing solutions for quandaries people find difficult and perhaps impossible.”

A quick aside on the question of ‘AI’s’ versus ‘AIs’: ‘AIs’ would theoretically be the correct plural, but the point is moot because ‘Artificial Intelligences’ is not an established term.

The Evolution of Machines

In just the past few years, AI and machine learning have made huge strides. So-called ‘deep learning’ mimics the workings of our own brains. Software developer Dhanoop Karunakaran explains: “Deep learning algorithms are structured like the neurons in our nervous system where each neuron connects to others and passes on information. Tasks are broken down and distributed across machine learning algorithms and each layer builds on the output from the previous layer. This is similar to the problem-solving carried out by neurons in the human brain.” Additionally, such deep learning systems are no longer told which characteristics to look for. Instead, they work this information out for themselves.
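
For readers who want a peek under the bonnet, here is a minimal sketch of that layered structure, written in Python with the PyTorch library (not code from the exhibit; the layer sizes are arbitrary illustrative choices):

```python
# A minimal layered network, assuming PyTorch is installed.
import torch
import torch.nn as nn

# Three stacked layers: each one takes the previous layer's output as its input.
model = nn.Sequential(
    nn.Linear(784, 128),  # first layer reads the raw input (e.g. image pixels)
    nn.ReLU(),            # non-linearity lets the network capture complex patterns
    nn.Linear(128, 64),   # second layer builds on the first layer's output
    nn.ReLU(),
    nn.Linear(64, 10),    # final layer produces one score per possible class
)

x = torch.randn(1, 784)   # a single random stand-in for an input image
scores = model(x)         # information flows through the layers in order
print(scores.shape)       # torch.Size([1, 10])
```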

At the Futurium, the artist Gene Kogan uses machine learning algorithms to transform camera captures into living paintings. Kogan is very excited by the sheer wealth of opportunities AI opens up for artists and scientists alike. “Such neural networks can tell us what is truly inside images or speech.”

Pushing Our Buttons

To gain a better grasp of AI’s true potential, visitors are encouraged to get involved. Without any training or expert knowledge, they can use Gene Kogan’s installations to turn their mirror image into a piece of art, sketch out elaborate landscapes, or create photo-realistic maps from a few coloured disks. Just a few steps away, Kling Klang Klong has installed a range of AI-assisted instruments that enable anyone to conduct, compose, or improvise like a pro. We point the AI in the desired direction and the neural network takes care of the rest. What happens within the machines remains a mystery, however, even to the artists and developers who use AI for their own projects.

‘Wrong’ Ideas, Right Results

Since such machines no longer ‘learn’ from people but find their own methods and solutions, the results can be unsettling. Take AlphaZero, for example. Unlike previous chess computers, it was never trained on countless ‘real’ chess matches to determine the ultimate winning strategy, but only fed the basic rules of the game. After just a few days, AlphaZero had not only beaten every human and digital chess master it competed against, but achieved this using moves that felt strange and even ‘wrong’ to chess experts. The founder of the company behind AlphaZero called it “chess from another dimension”. To him, it was eerie proof that the next generation of AI “is no longer constrained by the limits of human knowledge.”

Such self-training systems are improving in leaps and bounds, especially in image recognition. They don’t just power Gene Kogan’s artworks, but also simplify tricky medical diagnoses. Great examples include skin cancer and breast cancer screening, where algorithms have already become better than human specialists at analysing the relevant medical images.
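
As a rough illustration of how such diagnostic image classifiers are often built (a generic recipe, not the specific systems mentioned above), one common approach is to reuse a network pre-trained on everyday photos and retrain only its final layer on medical images; the two classes below are an illustrative assumption:

```python
# A generic transfer-learning recipe, assuming PyTorch and torchvision are installed.
import torch
import torch.nn as nn
from torchvision import models

# Start from a network pre-trained on everyday photos (ImageNet).
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False            # keep the learned visual features fixed

# Replace the final layer with a new one for two illustrative classes,
# e.g. 'benign' vs. 'malignant' (an assumption, not a validated setup).
model.fc = nn.Linear(model.fc.in_features, 2)

# Only the new head would be trained, on labelled medical images.
optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```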

Ingenious Idiots

Ready to welcome your new robot overlords yet? Don’t worry, they’re not about to invade your space (other than the increasing number of robotic vacuum cleaners hunting dust bunnies in our homes). The reason is surprisingly simple: while AI is great at answering many difficult questions in record time, it is still easily stumped by seemingly obvious questions like “Can a crocodile play volleyball?” because AI machines are only trained for one specific task. Janelle Shane’s blog has some great examples of what happens when you confront Artificial Intelligence with unknown challenges. Kling Klang Klong’s Fernando Knof adds, “Artificial Intelligence machines are invariably specialists. They are good at the one thing they are trained for, and that’s it, really. The important decisions are still taken by people because our thinking is not limited by context. When I make music, I might get inspired by a book. This lateral connection between music and literature is still difficult, if not impossible, for a neural network.”

Photo: Franck V. on Unsplash

Sound Partners

When treated as simple tools and helpers, these technologies can really expand the range of artists and other creative individuals. Thanks to prefab AI modules and platforms, it doesn’t take much expertise to get started. Musicians can automate many aspects of their craft without giving up creative control. Almost any step of the process can be handed to AI for tweaking, from composition and sound design to live performances and marketing. Felipe Sanchez Luna of Kling Klang Klong is convinced that “machines won’t take our jobs away, at least not in the foreseeable future. Sure, Artificial Intelligence has evolved to the point where it can write entire scores, but it still can’t interpret them like a human since this is an emotional and subjective process.” His collaborator Fernando Knof adds that “there are plenty of daily tasks that can be tedious, for example mixing the finished music. Maybe we just want to specify which parts should sound brighter, darker, or louder. That’s something AI can do. It could also assist composers. Many composers only sketch out a general melody or idea that is then fleshed out by other musicians.”

Gene Kogan, too, is optimistic about the potential of creative AI technologies: “They augment a user's creativity in ways that they didn't know they could be creative. In the future, we’ll be able to apply the style transfer principle not just to images, but also to things like words.”
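
To make the ‘style transfer principle’ a little more tangible: in the classic formulation, the ‘style’ of an image is summarised by correlations between the feature maps of a pre-trained network, and a new image is then optimised to match one picture’s content and another’s style statistics. The sketch below shows only the style term and is not Kogan’s installation code; the choice of network layers is an assumption:

```python
# Style term of classic neural style transfer, assuming PyTorch and torchvision.
import torch
import torch.nn.functional as F
from torchvision import models

# Feature extractor pre-trained on everyday photos; which layers to use is an assumption.
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
style_layers = vgg[:9]   # an early block of convolutional layers

def gram_matrix(features: torch.Tensor) -> torch.Tensor:
    # Correlations between feature channels summarise the 'style' of an image.
    b, c, h, w = features.shape
    flat = features.view(b, c, h * w)
    return flat @ flat.transpose(1, 2) / (c * h * w)

def style_loss(generated: torch.Tensor, style: torch.Tensor) -> torch.Tensor:
    # How far the generated image's style statistics are from the style image's.
    return F.mse_loss(gram_matrix(style_layers(generated)),
                      gram_matrix(style_layers(style)))

# In full style transfer, the generated image is optimised to minimise this
# style loss plus a content loss computed on deeper layers.
```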

How Deep Learning Spawns Deep Fakes

On the other hand, people are also (mis-)using deep learning to create deceptively real pictures or videos for casual consumption. “It is certainly true that it has become more difficult for people to tell actual content made by humans apart from AI-generated content. This has serious implications for the information we consume online,” Gene Kogan states with a nod to projects like This Person Does Not Exist by the software developer Phillip Wang. Every two seconds, Wang’s AI experiment generates a new ‘person’ and scammers are already using such images to launch fake Facebook profiles. DataGrids expands this idea to full-body shots of photorealistic models and outfits for online shops and catalogues. Or how about a virtual model with its own biography? Imma has more than 50,000 Instagram followers, sports a hip streetstyle look, and graces several magazine spreads.
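
The underlying generative principle can be sketched in a few lines: a ‘generator’ network turns random noise into an image after being trained against a ‘discriminator’ on real photos. The toy example below is vastly simplified compared to the StyleGAN system behind This Person Does Not Exist; all sizes are illustrative:

```python
# A toy generator, assuming PyTorch; untrained, so it only produces noise.
import torch
import torch.nn as nn

generator = nn.Sequential(
    nn.Linear(128, 512), nn.ReLU(),
    nn.Linear(512, 64 * 64 * 3),   # one value per pixel of a 64x64 RGB image
    nn.Tanh(),                     # squash outputs into the [-1, 1] image range
)

z = torch.randn(1, 128)                       # fresh random noise = a new 'person'
fake_image = generator(z).view(1, 3, 64, 64)  # reshape the flat output into an image
# In a real GAN, the generator is trained until a discriminator network can no
# longer tell its outputs apart from genuine photographs.
```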

The Sound of Our Voice

Deep learning can also be used to produce deceptively ‘genuine’ sounds. Thanks to AI, our spontaneous compositions made at Kling Klang Klong’s stations are almost indistinguishable from popular chart hits or professional pianists’ improvisations. Even the sound of our own voice is up for grabs, as companies like Vocal ID have shown. Miles away from Stephen Hawking’s iconic (and tinny) standard male voice ‘Perfect Paul’, the latest innovations offer new voices to people who can no longer speak, modelled on family members or even their own personality. Natural inflexion, credible pauses, pleasant pitch…all of these have become possible thanks to AI advances.

Lost in Translation

Meanwhile, linguists are more interested in what goes on behind the scenes of online translation services based on deep learning. Google Translate seems to have developed its own incomprehensible proto-language to communicate with itself. Translatotron (also by Google) no longer analyses words and text at all to deliver real-time translations in natural speech; it works directly on the underlying sound frequency patterns. And Microsoft and Alibaba recently claimed that their software has become better at reading comprehension than humans.
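
For the curious, neural translation of this kind can be tried out with openly available models rather than Google’s own systems; the sketch below assumes the open-source transformers library and its t5-small model:

```python
# Neural machine translation with an open-source model (assumes the
# 'transformers' library is installed; downloads t5-small on first run).
from transformers import pipeline

translator = pipeline("translation_en_to_de", model="t5-small")
result = translator("The future is decided by those who show up.")
print(result[0]["translation_text"])
```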

We’re only just beginning to explore the possibilities of all these different AI approaches. The artistic experiments of Kogan and kling klang klong already hint at the scope and diversity of recombining different technological building blocks into ever-new applications. Just how these will affect our future – in a positive or negative way – is impossible to predict at this point.

What Works for Us?

Take the labour market. While there’s room for speculation, especially in segments that are easier to automate, we still can’t tell which tasks and jobs might be threatened by AI. One thing is certain, though: it would be futile to compete directly because “machines are slaves. Anybody that competes with slaves becomes a slave.” (Kurt Vonnegut). Rather than a collapse of the labour market, the economists Ajay Agrawal, Joshua Gans, and Avi Goldfarb expect to see a major shift. “Further advances in machine learning will lower the value of human predictions since automated predictions will become cheaper and more accurate. Yet this doesn’t mean the end of human jobs as many have predicted. Instead, human judgement will become more valuable.”

The Decision Is Ours

Just how and in which areas we will shift decision-making from humans to machines remains contentious. Another Futurium exhibit invites us to experience the repercussions of such a shift first-hand. Does AI make better decisions than people? And in which parts of society should we embrace this? In some areas, AI could really help to deliver fairer, less biased results. “Judges rule differently depending on the time of day; employers have less confidence in equally qualified female applicants; and let’s not forget the structural disadvantages against people with foreign-sounding names,” states Philosophy Professor Lisa Herzog. “So, should we entrust difficult decisions to computers?”

For experts and ethicists, the answer is a clear ‘yes and no’. While Artificial Intelligence is theoretically capable of arriving at fairer decisions, a lot – and perhaps too much – depends on the rules, structure, and data used to train it, not to mention the intent of the people behind the system. Since we can no longer retrace or comprehend how deep learning algorithms arrive at their output, this lack of true transparency can have scary results.

The biggest challenge will be to ensure that Artificial Intelligence will be trained in a non-discriminatory way.

Sven Laumer, Business Computer Scientist

The AI Now Institute is already sounding the alarm. As the AI industry continues to be dominated by white males, its systems often reflect this population segment’s conscious or unconscious biases. After Google’s embarrassing image recognition disaster, which classified African Americans as gorillas because the system had only been trained on Caucasian faces, the right selection of criteria and data has become ever more pressing to guarantee fair, representative results. Without external checks and balances, chatbots turn racist; household items from low-income countries aren’t recognised; and Amazon disadvantages female applicants. Many of these biases are not intended, but occur due to a lack of diversity in AI teams. Amazon’s recruiting engine, for example, was mainly trained on profiles of male applicants and therefore classified ‘male’ traits as signifying success.
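
How such a bias creeps in can be shown with a toy example: if historical hiring data favours profiles with a ‘male-typical’ feature, a model trained on that data simply reproduces the pattern. The data and feature names below are invented purely for illustration:

```python
# Synthetic example of bias inherited from skewed training data
# (assumes NumPy and scikit-learn; all numbers are invented).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n = 1000
is_male = rng.integers(0, 2, n)     # 1 = 'male-typical' profile feature
skill = rng.normal(size=n)          # the attribute that should actually matter
# Historical labels: skill counts, but male profiles were favoured anyway.
hired = (skill + 1.5 * is_male + rng.normal(scale=0.5, size=n)) > 1.0

model = LogisticRegression().fit(np.column_stack([is_male, skill]), hired)

# Two equally skilled candidates who differ only in the gender-linked feature:
print(model.predict_proba([[1, 0.5], [0, 0.5]])[:, 1])  # the 'male' profile scores higher
```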

When Big Data Gets Too Big

At the same time, we shouldn’t underestimate the commercial interest in using AI to boost business. Many corporations rely on algorithms by AI specialists like Blue Yonder to optimise pricing and predict future sales. Most of us are only too happy to share our data for tailored special offers, better navigation, or perfect search results. But this intrinsic belief in the ‘good,’ constructive use of our data is inherently dangerous. With his AI project at the Futurium Lab, Alexander Peterhaensel wants us to take a good look at ourselves and our naïve optimism about technology. His Smile to Vote polling booth asks us to take a look in the mirror and then face the results. Anyone who enters the voting booth receives a quick facial scan. The AI compares these characteristics with those of politicians and automatically ticks the box of the party you resemble the most. Why make up your own mind when technology can do it for you? Peterhaensel’s project is a timely reminder that we shouldn’t get too comfortable with the ostensible ease offered by technology, especially if it means letting systems we no longer truly understand decide for us.
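
The matching step such a booth presumably performs can be sketched as a nearest-neighbour comparison of face descriptors; the embeddings here are hypothetical placeholders, not Peterhaensel’s actual system:

```python
# Nearest-neighbour matching on face descriptors (hypothetical sketch;
# real face embeddings would come from a trained recognition model).
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))

def closest_party(visitor: np.ndarray, politicians: dict[str, np.ndarray]) -> str:
    # Pick the party whose reference face is most similar to the visitor's scan.
    return max(politicians, key=lambda p: cosine_similarity(visitor, politicians[p]))

# Toy usage with random vectors standing in for real embeddings:
rng = np.random.default_rng(1)
parties = {name: rng.normal(size=128) for name in ["Party A", "Party B", "Party C"]}
print(closest_party(rng.normal(size=128), parties))
```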

Comparable technologies are already determining our employability, credit scores, and insurance premiums. “It frightens me that large tech companies, whose business models are based on psychometric measurements, aren’t stopped from undermining our basic privacy rights. We are already being judged and scored on the basis of our behaviour and biological disposition. If we don’t demand stringent regulations, we won’t be able to protect our privacy,” states Peterhaensel. He wants us not only to retain the rights to our own data, but also to become far more conscious and deliberate about the way we interact with tech, so that we don’t sacrifice future freedom of choice for today’s click-worthy convenience.

Everything is Illuminated

The systematic measuring, collecting, and processing of every part of our digital existence is very deliberate, because “you’ll wind up with much better algorithms if you disregard privacy concerns and concentrate all the information on a billion people in one database than if you respect individual privacy and have only partial information on a million people in your database,” explains the Israeli historian Yuval Noah Harari.

Like Harari and Peterhaensel, Gene Kogan thinks that all of us need to become more aware of the implications and more involved in exercising our rights. “We will have to make a lot of important decisions. The more familiar with these technologies people are, the better they will be able to participate in that decision-making process.” After all, "if the future is decided in our absence," Harari adds, "we won't be exempt from the consequences."