Economic theory states that technological change comes in waves: one innovation rapidly triggers another, launching the disruptions from which new industries, workplaces and jobs are born. Steam power set in motion the industrial revolution, and since the 1990s a torrent of digital and software developments has likewise transformed industries and our working lives. But the revolutionary very quickly becomes humdrum, and once-radical innovations like the telephone, email, smartphones and Skype become part of everyday, even mundane experience. Despite all the time-saving devices we have successfully integrated into our lives, there is a collective anxiety about the current wave of technological change and what more the future holds. Mainstream dystopian visions of our relationship with technology abound, but are we in fact engaged in a group act of cognitive dissonance: using our smartphones to read and worry about robots taking over our jobs, whilst wishing for a shorter working week and more time for creative pursuits?
The British Academy recently brought together a panel of experts in robotics, economics, retail and sociology to talk about how technology is reshaping our working lives. This review summarises some of their thoughts on the situation now, and what developments lie ahead. Watch the full debate here.
Helen Dickinson OBE reported on the British Retail Consortium’s project, Retail 2020, a practical example of how technology is changing consumer behaviour and affecting firms in her industry. On the one hand, the UK’s retail sector has embraced technology and created a success story: the UK has the highest e-commerce spend per head in the developed world, with around 15% of transactions taking place online, and at 3.0 million employees the sector is also the largest private sector employer in the UK. Beneath this, however, internet price comparison has ushered in fierce price competition. Retailers are using technology to improve manufacturing and logistics efficiencies to control costs and offset shrinking profit margins. Physical stores are closing as sales migrate online, and the BRC predicts a net 900,000 jobs will be lost by 2025. Nor will the expected impact be even: deprived regions are more reliant on retail employers and so will be more affected by job losses. Likewise, the most vulnerable, with less education or fewer skills and looking for work in their local area, will be the hardest hit.
Prof Judy Wajcman resisted the urge to overly rejoice or despair at technological developments. For her, this revolution is not so different to the waves which have come before. It is impossible to predict what new needs, wants, skills and jobs will be created by technological advances. Undoubtedly some jobs will be eliminated, others changed, and some created. However, we can certainly think beyond the immediate like-for-like: a washing machine saves labour, but it has also changed our cultural sense of what it means to be clean. Critically, we should stop thinking of technology as any kind of neutral, inevitable, unstoppable force. All technology is manmade and political, reflecting the values, biases and cultures of those creating it. As Wajcman said, ‘if we can put a man on the moon, why are women still doing so much washing?’ In other words, female subjugation to domestic labour could have been eliminated by technology, but persistent cultural norms have prevented this from happening.
cc by 2.0 Adrian Scottow, via Flickr
Dr Sabine Hauert is a self-professed technological optimist. For her technology has the potential to make us safer and empower us, for example by reducing road accidents, or allowing those who cannot currently drive to do so. Hauert sees a future not where robots completely replace humans, but where collaborative robots work alongside them to help with specific tasks. The crucial issue for dealing with this future lies in communication and education about new technologies, since the general public, mainly informed by news and cultural media, is ill-served by a steady drip of negative stories about our future with robots.
The short film Humans Need Not Apply is one such alarming production, chiming with Dr Daniel Susskind’s altogether gloomier view of the longer-term effects of technological advances on the workforce. To date, manufacturing jobs have been those most affected by automation, but traditionally white-collar jobs also contain many repetitive tasks and activities (just ask the employee drumming their fingers on the photocopier). Computing advances mean that many more of these are now in scope for automation, such as the Japanese insurer replacing some underwriters with artificial intelligence. For Susskind, it is not certain that workers will continue to benefit from increased efficiencies as technology advances. A human uses a satnav provided s/he is still needed to drive, but the same satnav could just as easily interface with a self-driving car, eliminating the need for any kind of human-machine interaction. Calling to mind the wholesale changes to UK heavy industry in the 1980s, any redeployment of labour will present huge challenges, and what work eventually remains may not be enough to keep large populations in well-paid, stable employment.
cc by 2.0 Mike Mozart, via Flickr
Can humans benefit from robots in the workplace? The panel agreed that technological change will continue apace, with wide-reaching ramifications for our workplaces and our wider societies, but that it is our human qualities that will give us an advantage over machines. Perhaps this is the most pressing notion: we urgently need to recalculate the value we place on tasks within society. Work where social skills, communication, empathy and personal interaction are prioritised (like teaching or nursing) may develop a value above that which is rewarded today.
If we smell such change coming, it is no wonder we are anxious. The panellists differed on the ability of our society to absorb and adapt to coming technological change, and the distribution of any net benefit or loss. So, is the only option to accept the inevitable and brace for the tsunami to hit? Well, no. We need to realise that ‘technology’ is not one vast, distant wave on the horizon, but a series of smaller ripples already lapping higher around our ankles. Returning to Wajcman’s point, all technologies are created by people. If innovation has a cultural dimension, it can be influenced, so we must take heart and believe in our ability to effect change.
The further we can work to democratise and widen the pool of creative engineers, developers, artists, designers and critical thinkers contributing to the development of technologies, the broader the spectrum of resulting applications and consequent benefits to society as a whole. We can be conscious in our choices as consumers as we adopt new products and services into our lives, and challenge the new social norms emerging around work and life as technology allows us to blur the boundaries between them. And finally, we need to consider who profits, and who doesn’t, from new business models. We should lobby government to be deliberate in designing policy that looks to these future developments, and their likely unequal impacts across regions, industries and populations, to ensure that existing social inequalities are not entrenched or magnified. Hopefully the creative community can help steer this wave in the right direction, painting a vivid picture of our possible futures, to persuade the powerful to act in the interests of the greater good.
Helen Dickinson OBE, Chief Executive, British Retail Consortium
Dr Sabine Hauert, Lecturer in Robotics, University of Bristol
Dr Daniel Susskind, Fellow in Economics, University of Oxford and co-author of The Future of the Professions: How Technology Will Transform the Work of Human Experts (OUP, 2015)
Professor Judy Wajcman FBA, Anthony Giddens Professor of Sociology, LSE and author of Pressed for Time: The Acceleration of Life in Digital Capitalism (Chicago, 2015)
Since 2005, Inke Arns has been the curator and artistic director of Hartware MedienKunstVerein, an institution focused on the intersection of media and technology in experimental and contemporary art. This year, she curated the exhibition titled alien matter for transmediale festival’s thirty-year anniversary. I had the pleasure of meeting Inke and taking a leisurely stroll with her around the exhibition.
The interview below took shape as a late-night email exchange with Inke a couple of weeks after our initial meeting.
CS: How did the idea come about? In your introductory text you mention The Terminator. Were you truly watching Arnold when alien matter occurred to you as an exploratory concept?
IA: Haha, good question! No, seriously, this particular scene from Terminator 2 (1991) had been sitting in the back of my head for years, maybe even decades. It’s the scene where the T-1000, a shape-shifting android, appears as the main (evil) antagonist to the T-800, played by Arnold Schwarzenegger. The T-1000 is composed of a mimetic polyalloy. Its liquid metal body allows it to assume the form of other objects or people, typically terminated victims. It can use this ability to fit through narrow openings, morph its arms into bladed weapons, or change its surface colour and texture to convincingly imitate non-metallic materials. It is capable of accurately mimicking voices as well, including the ability to extrapolate from a relatively small voice sample in order to generate a wider array of words or inflections as required.
The T-1000 is effectively impervious to mechanical damage: if any body part is detached, the part turns into liquid form and simply flows back into the T-1000’s body, even from a range of up to 9 miles. Somehow, the strange material of the T-1000 was teaming up in my head with Jean-François Lyotard’s notion of “Les Immatériaux” (1985). Lyotard tried to describe new kinds of matter that at first sight look like something we have long known, but are in fact materials that have been taken apart and re-assembled, and therefore come to us with radically new qualities. It is essentially alien matter that Lyotard was describing.
CS: You also comment on intelligent liquid and then make reference to four subcategories for the ‘rise of new object cultures’: AI, Plastic, Infrastructure, and the Internet of Things. Is this what makes up ‘alien matter’ to you? Inorganic materials? At the same time, HFT The Gardener and Hard Body Trade explicitly and dominantly utilise nature.
IA: Well, the shape-shifting intelligent liquid acts more like a metaphor. It is a metaphor for the fact that the clear division between active subjects and passive objects is becoming more and more blurred. Today, we are increasingly faced with active objects, with things that are acting for us. The German philosopher Günther Anders, yet another inspiration for alien matter, described in his seminal book The Obsolescence of Man (Die Antiquiertheit des Menschen) how machines – or computers – are “coming down”, how over time they have come to look less and less like machines, and how they are becoming part of the ‘background’. Or, if you wish, how they have become environment. That’s what I tried to capture in the four subcategories AI, Internet of Things, Infrastructure and Plastic. These are subcategories that reflect our contemporary situation and at the same time will themselves become obsolete. All of this is becoming part of the big machine that is becoming visible on the horizon. The description that Anders uses is eerily up to date.
Is this alien matter inorganic? Well, yes and no. It is primarily inorganic, as plastic could be described as one of the earliest alien matters – its qualities, such as its lifespan, are radically different from human qualities. However, it is something that increasingly merges with organic matter – Alien in Green showed this in their workshop dealing with the xeno-hormones released by plastic and how they can be found in our own bodies. They did this by analyzing the participants’ urine samples, in which they found substances that were profoundly alien.
In the exhibition, everything is highly artificial, even if it looks like nature, like in Ignas Krunglevicius’ video Hard Body Trade or Suzanne Treister’s series of drawings/prints HFT The Gardener. The ‘natural’ is becoming increasingly polluted by potentially intelligent xeno-matter. We are advancing into murky waters.
CS: There is no use of walls in the exhibition, other than Video Palace, standing as a monumental structure made out of VHS tapes. Why did you decide to exclude setting up rooms or walls for alien matter?
IA: I knew right from the beginning that I wanted to keep the space as open as possible. Anything you build into this specific space will look kind of awkward. This is also how I make exhibitions in general: keeping the exhibition space as open as possible, building as few separate spaces as possible in order to allow for dialogues to happen between the individual works. For alien matter we worked with raumlaborberlin, an architectural office that is known for its unusual and experimental spatial solutions and that has been working with transmediale for quite some time now. This was my first time working with them and I am super happy with the result. We met several times during the development process, and raumlabor proposed these amazing tripods you can see in the show. They serve as support for screens and the lighting system. (Almost) nothing is attached to the walls or the ceiling. raumlabor were very inspired by the aliens in H.G. Wells’ War of the Worlds – where the extraterrestrials are depicted with three legs and a gigantic head. Even if the show is not about aliens, I really liked the idea and the appearance of these tripods. They look elegant and strange at the same time, and through their sheer size they are also a bit awe-inspiring. Strange, elegant aliens, so to speak, whom we have to look up to in order to see. At the same time they are ‘caring’ for the exhibition, almost as if they were making sure that everything is running smoothly.
CS: What can you tell me about the narrative behind Johannes Paul Raether’s Protekto.x.x. 184.108.40.206.pcp.? You mentioned that it was originally a performance in the Apple Store, nearly branding the artist a terrorist.
IA: Correct. The figure central to the installation is one of the many fictional identities of artist Johannes Paul Raether, Protektorama. It investigates people’s obsession with their smartphones, explores portable computer systems as body prosthetics, and addresses the materiality, manufacturing, and mines of information technologies. Protektorama became known to a wider audience in July 2016 when a performance in Berlin, in which gallium—a harmless metal—was liquefied in an Apple store, led to a police operation at Kurfürstendamm. In contrast to the shrill tabloid coverage, the performative work of the witch is based on complex research and visualizations, presented here for the first time in the form of a sculptural ensemble including original audio tracks from the performance. The figure of Protektorama stems from Raether’s cyclical performance system Systema identitekturae (Identitecture), which he has been developing since 2009.
CS: Throughout the exhibition there is an awareness that technological singularity can and possibly will overcome the human body and condition. In the context of the exhibition, do you think that we may be accelerating towards technological and machinic singularity? As humans, are we already mourning the future?
IA: The technological singularity is a trans-humanist figure of thought that is currently being propagated by the mathematician Vernor Vinge and the author, inventor and Google employee Ray Kurzweil. This is understood as a point in time, and here I resort to Wikipedia, “at which machines rapidly improve themselves by way of artificial intelligence (AI) and thus accelerate technical progress in such a way that the future of humanity beyond this event is no longer predictable.” The next question you are probably going to ask is whether I believe in the singularity.
CS: Do you?
IA: Whether I believe in it? (laughs) The singularity is in fact an almost theological figure. Technology and theology are very close to one another in a sense. The famous British science fiction author Arthur C. Clarke once said that any sufficiently advanced technology is indistinguishable from magic. I consider the singularity to be an interesting speculative figure of thought. Assuming the development of technology were to continue on its course as rapidly as it has to date, and Moore’s Law (stating that the computing performance of computer chips doubles every 12-24 months) retained its validity, what would then be possible in 30 years? Could it really come to this tipping point of the singularity, in which pure quantity is transformed into quality? I don’t know. What is interesting right now is that instead of the singularity, we are faced with something that the technology anthropologist Justin Pickard calls the ‘crapularity’: “3D printing + spam + micropayments = tribbles that you get billed for, as it replicates wildly out of control. 90% of everything is rubbish, and it’s all in your spare room – or someone else’s spare room, which you’re forced to rent through AirBnB.” I also suggest checking out the ‘Internet of Shit’ Twitter feed.
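The back-of-the-envelope arithmetic behind this thought experiment is easy to make concrete. A minimal sketch (the 30-year horizon and the 12-24 month doubling rates come from the interview; the function name and exact figures are illustrative only, not a forecast):

```python
# If performance doubles every d months, then after y years
# there are (y * 12 / d) doublings, i.e. a factor of 2 ** (y * 12 / d).

def performance_multiplier(years, months_per_doubling):
    doublings = years * 12 / months_per_doubling
    return 2 ** doublings

# The interview's 30-year horizon at the quoted doubling rates:
for months in (12, 18, 24):
    factor = performance_multiplier(30, months)
    print(f"doubling every {months} months over 30 years: x{factor:,.0f}")
```

Even at the slow end of the quoted range the multiplier is in the tens of thousands, which is what gives the "quantity into quality" question its force.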
CS: You come from a literary background. Noticing the selection and curation of alien matter, it becomes clear that you love working with narratives. Do you feel as though your approach of combining narrative and speculative imaginations is fruitful and rewarding?
IA: I do (if I didn’t, I wouldn’t do it). I think narrative – or storytelling – and speculative imaginations are powerful tools of art. They allow us to see the world from a different perspective: one that is not necessarily ours, or that is maybe improbable or unthinkable today. The Russian Formalists called this (literary) procedure ‘estrangement’ (this was ten years before Bertolt Brecht and his ‘estrangement effect’). Storytelling and/or speculative imaginations help us grasp things that might be difficult to access from our own or from today’s perspective. It’s like an interface into the unknown. Maybe you can compare it to learning a foreign language – it greatly helps you to understand your own native language.
CS: On a final note, I’d like to revisit a conversation we had during transmediale’s opening weekend. We spoke about a potential dichotomy or contention between the discourse followed by transmediale and that of the contemporary art world, using The Guardian’s review of the Berlin Biennale as an example. Beautifully written, although you seemed to disagree with some points made – particularly the notion advanced by the writer that works shown there, similar in nature to the works in alien matter, are not ‘art’. Could you elaborate on your thoughts?
IA: You are mixing up several things – let me try to disentangle them. I was referring to the article “Welcome to the LOLhouse”, published in The Guardian. The article was especially critical of the supposed cynicism and sarcasm it detected in the approaches of the Berlin Biennale curators and most of the artists. What was true of the Berlin Biennale was that it showed many younger artists from the field of what some people call ‘post-Internet’ art. This generation of artists – the ‘digital natives’ – mostly grew up with digital media. And one of the realities of all-pervasive digital media is the predominance of surfaces. The generation of artists presented at the Berlin Biennale dealt a lot with these surfaces. In that sense it was a very timely and at the same time cold reflection of the realities we are constantly faced with. I felt as if the artists held up a mirror in which today’s pervasiveness of shiny surfaces was reflected. It could be interpreted as sarcasm or cynicism – I would rather call it a realistic reflection of contemporary realities. And what we could see in this mirror was not necessarily nice. But I liked it exactly because of this unresolved ambivalence.
About transmediale and the contemporary art world: these are in fact two worlds that merge or mix very rarely. I have often heard from people deeply involved in the field of contemporary art (even some friends of mine) that they are not interested in transmediale and/or that they would never attend the festival or go and see the exhibition. And vice versa. This is mainly due to the fact that the art people think that transmediale is too nerdy, it’s for the tech geeks (there is some truth in this), and the transmediale people are not interested in the contemporary art world as they deem it superficial (there is some truth in this as well). For my part, I am not interested in preaching to the converted. That’s why I included a lot of artists in the show who have never exhibited at transmediale before (like Joep van Liefland, Suzanne Treister, Johannes Paul Raether, Mark Leckey). However, despite these borders, the fields have become increasingly blurred. It is also visible that what comes more from a transmediale (or ‘media art’) context clearly displays a greater interest in the (politics of) infrastructures that are covered by the ever-shiny surfaces (which bring along their own, but different, politics).
I could continue but I’d rather stop, as it is Monday morning, 3:01 am.
You can also read a review of alien matter, available here.
alien matter is on display until the 5th of March, in conjunction with the closing weekend of transmediale. Don’t snooze on the last chance to see it!
All in-text images are courtesy of Luca Girardini, 2017 (CC NC-SA 4.0)
Main image is a still from the movie Terminator 2 (1991)
Within the context of transmediale’s thirty-year anniversary, Inke Arns curates an exhibition titled alien matter. Housed in Haus der Kulturen der Welt, alien matter is a stand-alone exhibition more than a year in the making, featuring thirty artists from Berlin and beyond. In the introductory text, Arns draws on her background in literature and borrows a quote from J.G. Ballard, an English novelist associated with New Wave science fiction and post-apocalyptic stories. The quote reads:
The only truly alien planet is Earth. – J.G. Ballard in his essay Which Way to Inner Space?
Ballard was redefining the notion of space: ‘outer space’ as that seemingly beyond the Earth, and ‘inner space’ as the matter constituting the planet we live on. For him, the idea of outer space is irrelevant if we do not fully understand the components of our inner space, claiming, ‘It is inner space, not outer, that needs to be explored’. The ever-increasing and accelerating modes of infrastructural, and therefore environmental, change caused by humans on our Earth are immense. Arns searches for the ways in which this form of change has contributed to the making of alien matter on a planet we consider secure, familiar and, essentially, our home. In an age where technological advancements are so rapid that machines are taking over human labour, singularity is a predominant theme, whilst the human condition is reaching a deadlock in more ways than we can predict. The works shown in alien matter respond to this deadlock by shedding their status as mere objects of utility and evolving into autonomous agents, thus posing the question, ‘where does agency lie?’
Entering the space housing alien matter, one is immediately confronted with a giant wall – not one like Trump’s, but a structure made out of approximately 20,000 obsolescent VHS tapes on wooden shelves. It is Joep van Liefland’s Video Palace #44, hollow inside, with a green glow emanating from its entry point. The audience has the opportunity to enter the palace and be encapsulated within its plastic, green fluorescent walls, reminiscent perhaps of old video rental stores with an added touch of neon. The massive sculpture acts as an archaeological monument. It highlights one of Arns’ subcategories encompassing alien matter, (The Outdatedness of) Plastic(s); the rest are as follows: (The Outdatedness of) Artificial Intelligence, (The Outdatedness of) Infrastructure and (The Outdatedness of) Internet(s) of Things.
Part of Plastic(s) is Morehshin Allahyari and Daniel Rourke’s project The 3D Additivist Cookbook, which made its conceptual debut at last year’s transmediale festival. With contributions from Ami Drach, Dov Ganchrow, Joey Holder and Kuang-Yi Ku, the Cookbook examines 3D printing as possessing innovative capabilities to further the functions of human activities in a post-human age. The 3D printer is no longer just an object for realising speculative ideas, but a means of creating items that may initially (and currently) be considered alien to human utility. Kuang-Yi Ku’s contribution, The Fellatio Modification Project, for example, applies biological techniques of dentistry through 3D printing in order to enhance sexual pleasure. Through The 3D Additivist Cookbook, plastic is transformed into a material with infinite possibilities, which may also be considered alien because of their unfamiliarity to humans.
Alienness and unfamiliarity are also prevalent in the way the works are laid out and lit throughout the exhibition. Video Palace #44 aside, the exhibition space is devoid of walls and rooms. Instead, what we witness are erect structures, or tripods, clasping screens and lights. These architectural constructions are, as Arns points out in the interview we conducted, reminiscent of the extraterrestrial tripods invading the Earth in H.G. Wells’ science fiction novel The War of the Worlds, first illustrated by Warwick Goble in 1898. The perception of alien matter is enriched through this witty handling of technical requirements as audiences wander amongst unknown fabrications.
Amidst and through these alien structures, screens become manifestations of expressive AIs. Pinar Yoldas’ The Kitty AI: Artificial Intelligence for Governance envisages the world in the near future of 2039. Now, in the year 2017, the Kitty AI appears to the viewer as a slightly humorous political statement; however, much of what Kitty says may not be far from plausible speculation. The Kitty AI appears in the form of rudimentary, aged video graphics of a cute kitten, possibly so as not to alarm humans with its words. It speaks against paralysed politicians, expounds on the overloaded infrastructures of human settlement, on the refugee crisis still ongoing in 2039 but at larger dimensions, and… love.
The Kitty AI is ‘running our lives, controlling all the systems it learns for us’, providing us with a politician-free zone; it states that it ‘can love up to three million people at a time’ and that it ‘cares and cares about you’. The Kitty AI has evolved and possesses the capacity to fulfil our most base desires and needs – solutions to problems of which humans are intrinsically the cause. The Kitty AI is a perfect example when taking into consideration Paul Virilio’s theory in his book A Landscape of Events:
And so we went from the metempsychosis of the evolutionary monkey to the embodiment of a human mind in an android; why not move on after that to those evolving machines whose rituals could be jolted into action by their own energy potential. – Paul Virilio in his book A Landscape of Events
Virilio doesn’t necessarily condemn the evolution of AIs; humans have had equal opportunity to progress throughout the years. Instead, his concerns arise from worries that this evolution is unpredictably diminishing human agency. The starting stage for this loss of agency would be the fabrication of algorithms with the ability to speculate on possible scenarios or futures. Such is the work of Nicolas Maigret and Maria Roszkowska, Predictive Art Bot. Almost nonsensical and increasingly witty, the Predictive Art Bot borrows headlines from global market developments, purchasing behaviour, phrases from websites containing articles about digital art and hacktivism, and sometimes even crimes, to create its own hypothetical, yet conceivable, storyboards. The interchange of concepts, ranging from economics to ecologies, art, transhumanism and even medicine, touches on subjects like ‘tactical self-driving cars’ and ‘radical pranks’ for disruption, ‘political drones’ and even ‘hardcore websites perverting the female entity’.
To a certain degree, both the Kitty AI and the Predictive Art Bot could be seen as radical statements regarding the future of human agency, particularly in politics. There is always an underlying danger in fading human agency, and its importance for both these works and their imagined scenarios – particularly when taking into consideration Sascha Pohflepp’s Recursion.
Recursion, performed by Erika Ostrander, is an attempt by an AI to speak about human ideas drawn from Wikipedia, songs by The Beatles and Joni Mitchell, and even Hegel’s philosophy, regarding ‘self-consciousness’, ‘sexual consciousness’, the ‘good form of the economy’, and ‘the reality of social contract’. Ostrander’s performance of the piece is uncannily close to how we might expect AIs to understand and read through language on these subjects. The AI has been programmed to compose a text from these readings starting with the word ‘human’ – the result is a computer which passes a Turing test, almost mimetic of what in its own eyes is an ‘other’; the simulacrum gains dialectical power as the slippage becomes mutual. Simultaneously, these words are performed by a seemingly human entity, posing the question: have we been aliens within all along, without self-conscious awareness?
Throughout alien matter it becomes gradually apparent that AIs are problematic for agency because of their ability to imitate, or even be connected to, a natural entity. In Ignas Krunglevičius’ video Hard Body Trade, we are encapsulated by panoramic landscapes of mountains complemented by a soundtrack of soothing chords and a dynamic sub-bass. Over it, the AI warns us to be afraid: ‘we are sending you a message in real time’, for they are ‘the brand new’ and ‘wear masks just like you’, implying they now emulate human personas. The time-lapse continues and the AI echoes, ‘we are replacing things with math while your ideas and building in your body like fat’ – are humans reaching a point of finitude in a landscape where everything moves much faster than we do?
Arns’ potential resolution might be to foster environments of participation and understanding, as with the inclusion of Johannes Paul Raether’s Protektor.x.x. 220.127.116.11.pcp. Raether’s project is a participatory narrative following the daily structures of the WorldWideWitches, and tells the story of an Apple Store ‘infiltration’ which took place on the 9th of July 2016 in Berlin. The performance itself was part of the Cycle Music and Art Festival and was falsely depicted by the media as scandalous; the Berliner Post called it ‘outrageous’. The performance featured Raether, wearing alien attire, walking into the store and letting liquid gallium pool on a table. Gallium as a substance is completely harmless to human beings, but if it touches aluminium, the liquid metal can completely dissolve it.
The installation is a means of communicating not only the narrative of the WorldWideWitches, but also of uncovering the fixation that humans have with metal objects such as iPhones. The installation itself is interactive and quite often drew a big crowd around it, all curious to see what it was. It was placed on a table covered in an imitation of gallium spread over cracked screens, with pipes holding audio ports through which the audience could listen to the WorldWideWitches’ story. Raether’s work, much like the exhibition as a whole, is immersive, engaging and participatory.
The exhibition precisely depicts alien matter in all its various and potential manifestations. The space, with its constant flood of sounds, echoes and reverberations, simulates an environment in which the works foster intimacy not only with transmediale, but also with its audience. Indeed, Arns, with a deft curatorial touch, has fruitfully brought together the work of these gifted artists, fostering an environment that is as entertaining as it is contemplative. You can read more about Arns’ curatorial process and thoughts on alien matter in her recent interview with Furtherfield.
alien matter is on display until the 5th of March, in conjunction with the closing weekend of transmediale. Don’t snooze on the last chance to see it!
All images are courtesy of Luca Girardini, 2017 (CC NC-SA 4.0)
“AI just 3D printed a brand-new Rembrandt, and it’s shockingly good” reads the title of a PC World article from April 2016. Advertising firm J. Walter Thompson had unveiled a 3D-printed painting called “The Next Rembrandt”, based on 346 paintings by the old master. PC World was not alone: many other articles ran similar headlines, presenting the painting to the public as if it had been made by a computer, a 3D printer, artificial intelligence and deep learning. But the programmers who worked on the project are clearly not computers, and neither are the people who tagged the 346 Rembrandt paintings by hand. The painting was made by a team of programmers and researchers, and it took them 18 months to do so.
A very successful feat of advertising, and a great example of how eager we are to attribute human qualities to computers and to see data as the magic powder bringing life to humanity’s most confusing tool. Data is the new black… it can even touch our soul, according to a Microsoft spokesperson on the website of the Next Rembrandt: “Data is used by many people today to help them be more efficient and knowledgeable about their daily work, and about the decisions they need to make. But in this project it’s also used to make life itself more beautiful. It really touches the human soul.” (Ron Augustus, Microsoft). We have elevated data to divine standards and, in the process, developed a tendency to confuse tools with their creators. Nobody in the 17th century would have dreamed of claiming that a brush and some paint created The Night Watch, or that it’s a good idea to spend 18 months on one painting.
The anthropomorphisation of computers was researched in depth by Reeves and Nass in The Media Equation (1996). Through multiple experiments they show how people treat computers, television, and new media like real people and places. On the back of the book, Bill Gates says Nass and Reeves show us some “amazing things”. And he was right. Even though test subjects were completely unaware of it, they responded to computers as they would to people: being polite and cooperative, and attributing personality characteristics such as aggressiveness, humour, expertise, even gender. If only Microsoft would use this knowledge to improve the way people interact with their products, instead of using it for advertising campaigns promoting a belief in the magic powers of computers and data. Or… oh wait… This belief, combined with the anthropomorphising of computers, profoundly alters the way people interact with machines, and makes it much more likely that users will accept and adapt to the limitations and demands of technology, instead of demanding that technology adapt to them.
Strangely enough, the anthropomorphising of computers goes hand in hand with attributing authority, objectivity, even superiority to computer output, by obfuscating the human hand in its generation. It seems paradoxical to attribute human qualities to something while at the same time considering it more objective than humans. How can these two beliefs exist side by side? We are easily fooled; ask any magician. As long as our attention is distracted and steered, you can hide things in plain sight. Nor have we been too bothered by this paradox in the past. The obfuscation of the human hand in the generation of messages that require an objective or authoritative feel is very old. As a species, we have always turned to godly or mythical agents to make sense of what we did not understand, and to seek counsel. We asked higher powers to guide us. These higher powers rarely spoke to us directly; usually their messages were mediated by humans: priestesses, shamans or oracles. These mediations were validated as objective and true transmissions through the formalisation of the process in ritual, later institutionalised as religion, obfuscating the human hand in the generation of these messages. Although disputed, it is commonly believed that the Delphic oracle delivered messages from her god Apollo in a state of trance, induced by intoxicating vapours arising from the chasm over which she was seated. Possessed by her god, the oracle spoke, ecstatically and spontaneously. Priests of the temple translated her words into the prophecies the seekers of advice were sent home with. Apollo had spoken.
Nowadays we turn to data for advice. The oracle of big data functions much like the oracle of Delphi. Algorithms programmed by humans are fed data and spit out numbers, which researchers then translate and interpret into the prophecies the seekers of advice are sent home with. The bigger the data set, the more accurate the results. Data has spoken. We are brought closer to the truth, to reality as it is, unmediated by us subjective, biased and error-prone humans. We seek guidance, just like our ancestors, hoping we can steer events in our favour. Because of this point of departure, very strong emotions attach to big data analysis: feelings of great hope and intense fear. There are visions of utopia, the hope that it will create new insights into climate change and accurate predictions of terrorist attacks, protecting us from great disaster. At the same time there are visions of dystopia: of a society where privacy invasion is business as usual, enabling an ever-increasing grip on people through state and corporate control, both direct and through unseen manipulation.
Let’s take a closer look at big data utopia, where the analysis of data will protect us from harm. This ideology is fed by fear, and has driven states and corporations alike to gather data like there is no tomorrow or right to privacy, at once linking it very tightly to big data dystopia. What is striking is that the fear of terrorist attacks has led to vast data gathering, new laws and military action on the part of governments, while the fear of climate change has led to only very moderate activity, even though the latter is likely to have more far-reaching consequences for humanity’s ability to survive. In any case, the idea of being able to predict disaster is a tricky one. In the case of global warming, we can see it coming and worsening because it is already taking place. But other disasters are hard to predict and can only be explained in retrospect. In Antifragile, Nassim Nicholas Taleb (2013, pp. 92-93), inspired by a metaphor of Bertrand Russell’s, quite brilliantly explains the tendency to mistake absence of evidence for evidence of absence with a story about turkeys. A turkey is fed for a thousand days by a butcher, leading it to believe, backed up by statistical evidence, that butchers love turkeys. Right when the turkey is most convinced of this, when it is well fed and everything is quiet and predictable, the butcher surprises it, and the turkey has to drastically revise its beliefs.
An only slightly more subtle version of big data utopia is one where data can speak and bring us closer to reality as it is, where we can safely forget theory, ideas and other messy, subjective human influences by crunching enormous amounts of numbers.
“This is a world where massive amounts of data and applied mathematics replace every other tool that might be brought to bear. Out with every theory of human behaviour, from linguistics to sociology. Forget taxonomy, ontology, and psychology. Who knows why people do what they do? The point is they do it, and we can track and measure it with unprecedented fidelity. With enough data, the numbers speak for themselves.” (Anderson, 2008)
In this rhetoric there is no more need for models or hypotheses; correlation is enough. Just throw in the numbers and the algorithms will spit out patterns that traditional scientific methods were unable to bring to light. This promise sells well, and companies providing data analysis and storage services promote it with great enthusiasm, as demonstrated by two slides in an Oracle presentation at Strata 2015: “Data Capital is the land grab of the future” and “It’s yours for the taking” (Pollock, 2015).
Can numbers really speak for themselves? On closer inspection, a number of assumptions come to light that together construct the myth of the oracle. Harford (2014) describes these assumptions as four articles of faith. The first is the belief in uncanny accuracy. This belief focuses all attention on the cases where data analysis made a correct prediction, while ignoring all false positives. Being right one time out of ten may still be highly profitable for some business applications, but uncannily accurate it is not. The second article is the belief that correlation, not causation, is what matters. The biggest issue with this belief is that if you don’t understand why things correlate, you have no idea why they might stop correlating either, making predictions very fragile in an ever-changing world. Third is the faith that massive data sets are immune to sampling bias, because no selection is taking place. Yet found data contains a lot of bias: not everyone has a smartphone, for example, and not everyone is on Twitter. Last but not least, the fourth belief, that numbers can speak for themselves, is hard to cling to when spurious correlations create so much noise that it is hard to filter out the real discoveries. Taleb (2013, p. 418) points to the enormous amount of cherry-picking done in big data research. There are far too many variables in modern life, making spurious relations grow at a much faster pace than real information.
As a rather poetic example, Leinweber (2007) demonstrated that data mining techniques could show a strong but spurious correlation between changes in the S&P 500 stock index and butter production in Bangladesh. There is more to meaningful data analysis than finding statistical patterns, which show correlation rather than causation. Boyd and Crawford describe the beliefs attached to big data as a mythology: “the widespread belief that large data sets offer a higher form of intelligence and knowledge that can generate insights that were previously impossible, with the aura of truth, objectivity, and accuracy” (Boyd and Crawford, 2012). Belief in this oracle has quite far-reaching implications. For one, it dehumanises humans by asserting that human involvement, through hypotheses and interpretation, is unreliable, and that only by removing humans from the equation can we finally see the world as it is. While putting humans and human thought on the sideline, it obfuscates the human hand in the generation of its messages, and anthropomorphises the computer by claiming it is able to analyse, draw conclusions, even speak to us. The practical consequence of this dynamic is that it is no longer possible to argue with the outcome of big data analysis. This becomes painful when you find yourself in the wrong category of a social sorting algorithm guiding real-world decisions on insurance, mortgages, work, border checks, scholarships and so on. Exclusion from certain privileges is only the most optimistic scenario; darker ones involve basic human rights.
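Taleb’s point about spurious relations outpacing real information is easy to reproduce. The sketch below is a hypothetical illustration, not taken from any of the studies cited here: it generates fifty mutually independent random walks using only Python’s standard library, then scans every pair for the strongest correlation. Cherry-picking across enough variable pairs all but guarantees an impressive-looking “discovery” where, by construction, no real relationship exists.

```python
import random

random.seed(42)

def random_walk(n_steps):
    """A random walk: the running sum of independent +1/-1 steps."""
    walk, pos = [], 0
    for _ in range(n_steps):
        pos += random.choice([-1, 1])
        walk.append(pos)
    return walk

def pearson(xs, ys):
    """Pearson correlation coefficient of two equal-length series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sum((x - mx) ** 2 for x in xs) ** 0.5
    sy = sum((y - my) ** 2 for y in ys) ** 0.5
    return cov / (sx * sy)

# 50 mutually independent "indicators", 100 observations each.
series = [random_walk(100) for _ in range(50)]

# Scan all 1225 pairs for the strongest correlation: classic cherry-picking.
best = max(
    abs(pearson(series[i], series[j]))
    for i in range(len(series))
    for j in range(i + 1, len(series))
)
print(f"strongest 'discovered' correlation: {best:.2f}")
```

Running this typically reports a very strong “best” correlation, even though every series is pure, independent noise; the more variables you scan, the more such patterns appear, and none of them says anything about the world.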
The deconstruction of this myth was attempted as early as 1984. In “A Spreadsheet Way of Knowledge”, Steven Levy describes how the authoritative look of a spreadsheet, and the fact that it was produced by a computer, has a strong persuasive effect on people, leading them to accept the proposed model of reality as gospel. Levy points out that all the benefits of using spreadsheets are meaningless if the metaphor is taken too seriously. He concludes with a statement completely opposite to what Anderson would state 24 years later: “Fortunately, few would argue that all relations between people can be quantified and manipulated by formulas. Of human behaviour, no faultless assumptions – and so no perfect model — can be made”. A lot has changed in 24 years. Also writing in the eighties, Theodore Roszak described the subjective core of software beautifully: “Computers, as the experts continually remind us, are nothing more than their programs make them. But as the sentiments above should make clear, the programs may have a program hidden within them, an agenda of values that counts for more than all the interactive virtues and graphic tricks of the technology. The essence of the machine is its software, but the essence of the software is its philosophy” (Roszak, 1986). This essence, sadly, is often forgotten, and the outcomes of data analysis are therefore misinterpreted. To assess the outcomes of data analysis correctly, it is essential to acknowledge that interpretation lies at the heart of the process, and that assumptions, bias and limitations are undeniable parts of it.
To better understand the widespread belief in the myth of big data, it is important to look at the shift in the meaning of the word ‘information’ (Roszak, 1986). In the 1950s, with the advent of cybernetics, the study of feedback in self-regulating closed systems, information was transformed from a short statement of fact into the means to control a system, any system, be it mechanical, physical, biological, cognitive or social (Wiener, 1950). In the 1960s artificial intelligence researchers started viewing both computers and humans as information-processing systems (Weizenbaum, p. 169). In the 1970s information was granted an even more powerful status: that of a commodity. The information economy was born, and promoted with great enthusiasm in the 1980s.
“Reading Naisbitt and Toffler is like a fast jog down a world’s Fair Midway. We might almost believe, from their simplistic formulation of the information economy, that we will soon be living on a diet of floppy disks and walking streets paved with microchips. Seemingly, there are no longer any fields to till, any ores to mine, any heavy industrial goods to manufacture; at most these continuing necessities of life are mentioned in passing and then lost in the sizzle of pure electronic energy somehow meeting all human needs painlessly and instantaneously.” (Roszak, 1986, p. 22).
Nowadays not only corporations and governments but also individuals have become information-hungry. What started as a slightly awkward hobby in the 80s, the quantified self, has now become mainstream, with people self-monitoring everything from sleep to eating habits, from sporting activities to mood, using smartphones and smartwatches with built-in sensors, and uploading intimate details such as heart rate, sleep patterns and whereabouts to corporate servers in order to improve their performance.
Even with this change in the way we view information in mind, it is difficult to believe we cannot see through the myth. How did it become plausible to transpose cybernetic thinking from feedback in self-regulating closed systems to society, and even to human beings? It is quite a leap, but we somehow took it. Do humans have anything in common with such systems? Weizenbaum (1976) explains our view of man as machine through our strong emotional ties to computers, and through the internalisation of aspects of computers, in the form of kinaesthetic and perceptual habits, required to operate them. He describes how, in that sense, man’s instruments become part of him and alter the nature of his relationship to himself. In The Empty Brain (2016), research psychologist Robert Epstein writes about how we nowadays tend to view ourselves as information processors, but points out a very essential difference between us and computers: humans have no physical representations of the world in their brains. Our memory and that of a computer have nothing in common; we do not store, retrieve and process information, and we are not guided by algorithms.
Epstein refers to George Zarkadakis’ book In Our Own Image (2015), which describes six different metaphors people have employed over the past 2,000 years to try to explain human intelligence. In the earliest one, eventually preserved in the Bible, humans were formed from clay or dirt, which an intelligent god then infused with its spirit; this spirit somehow provided our intelligence. The invention of hydraulic engineering in the 3rd century BC led to the popularity of a hydraulic model of human intelligence: the idea that the flow of different fluids in the body, the ‘humours’, accounted for both our physical and mental functioning. By the 1500s, automata powered by springs and gears had been devised, eventually inspiring leading thinkers such as René Descartes to assert that humans are complex machines. By the 1700s, discoveries about electricity and chemistry led to new theories of human intelligence, again largely metaphorical in nature. In the mid-1800s, inspired by recent advances in communications, the German physicist Hermann von Helmholtz compared the brain to a telegraph. Predictably, just a few years after the dawn of computer technology in the 1940s, the brain was said to operate like a computer, with the role of physical hardware played by the brain itself and our thoughts serving as software.
This explanation of human intelligence, the information processing metaphor, has infused our thinking over the past 75 years. Even though our brains are wet and warm, obviously an inseparable part of our bodies, and run not only on electrical impulses but also on neurotransmitters, blood, hormones and more, it is nowadays not uncommon to think of neurons as ‘processing units’ and synapses as ‘circuitry’, ‘processing’ sensory ‘input’ and creating behavioural ‘outputs’. The use of metaphors to explain science to laymen has also led to the idea that we are ‘programmed’ through the ‘code’ contained in our DNA. Besides raising the question of who the programmer is, these metaphors make it hard to investigate the nature of our intelligence and the nature of our machines with an open mind. They make it hard to distinguish the human hand in computers from the characteristics of the machine itself, anthropomorphising the machine while at the same time dehumanising ourselves. For instance, Alan Turing’s test of whether a computer could be regarded as thinking is now popularly seen as a test that proves a computer is thinking. Only in horror movies would a puppeteer start to perceive his puppet as an autonomous and conscious entity. The Turing test only shows whether or not a computer can be perceived as thinking by a human. That is why Turing called it the imitation game. “The Turing Test is not a definition of thinking, but an admission of ignorance — an admission that it is impossible to ever empirically verify the consciousness of any being but yourself.” (Schulman, 2009). What a critical examination of the IP metaphor makes clear is that, in an attempt to make sense of something we don’t understand, we’ve invented a way of speaking about ourselves and our technology that obscures rather than clarifies both our nature and that of our machines.
Why do we dehumanise and marginalise ourselves through this paradox: viewing ourselves as flawed, wanting a higher form of intelligence to guide us, envisioning superhuman powers in a machine created by us, filled with programs written by us, giving us nothing but numbers that need interpretation by us, obfuscating our own role in the process yet viewing the outcome as authoritative and superior? A magician cannot trick himself. Once revealed, never concealed. Yet we manage to fall for a self-created illusion time and time again. We fall for it because we want to believe. We joined the church of big data out of fear, in the hope it would protect us from harm by making the world predictable and controllable. With the sword of Damocles hanging over our heads, global warming setting in motion a chain of catastrophes that threatens our survival, and the inevitable death of capitalism’s myth of eternal growth as the earth’s resources run out, we need a way out. Since changing light bulbs didn’t do the trick, and changing the way society is run seems too complicated, the promise of a technological solution inspires great hope. Really smart scientists, with the help of massive data sets and ridiculously clever AI, will suddenly find the answer. In ten years’ time we’ll laugh at the way we panicked about global warming, safely aboard our CO2 vacuum cleaners orbiting a temporarily abandoned planet Earth.
The information processing metaphor got us completely hooked on gathering data. We are after all information processors. The more data at our fingertips, the more powerful we become. Once we have enough data, we will finally be in control, able to predict formerly unforeseen events, able to steer the outcome of any process because for the first time in history we’ll understand the world as it really is. False hope. Taken to its extreme, the metaphor leads to the belief that human consciousness, being so similar to computer software, can be transferred to a computer.
“One prediction – made by the futurist Kurzweil, the physicist Stephen Hawking and the neuroscientist Randal Koene, among others – is that, because human consciousness is supposedly like computer software, it will soon be possible to download human minds to a computer, in the circuits of which we will become immensely powerful intellectually and, quite possibly, immortal.” (Epstein, 2016).
False hope. Another example is the Alliance to Rescue Civilization (ARC), a project by the scientists E. Burrows and Robert Shapiro that aims to back up human civilisation in a lunar facility. The project artificially separates the “hardware” of the planet, with its oceans and soils, from the “data” of human civilisation (Bennan, 2016). Even though the perceived need to store things off-planet conveys a less than optimistic outlook on the future, the project gives the false impression that technology can separate us from the earth. A project pushing this separation to its extreme is Elon Musk’s SpaceX plan to colonise Mars, announced in June 2016 and gaining momentum with his presentation at the 67th International Astronautical Congress in Guadalajara on September 27th. The goal of the presentation was to make living on Mars seem possible within our lifetime. Possible, and fun.
“It would be quite fun because you have gravity, which is about 37% that of Earth, so you’d be able to lift heavy things and bound around and have a lot of fun.” (Musk, 2016).
We are inseparable from the world we live in. An artificial separation from the earth, of which we are part and on which our existence depends, will only lead to a more casual attitude towards the destruction of its ecosystems. We will never be immortal, downloadable, or rescued from a lunar facility in the form of a backup. Living on Mars is not only completely impossible at this moment; nothing guarantees it will be possible in the future. Even if it were, and you were one of the select few who could afford to go, you’d spend your remaining days isolated on a lifeless, desolate planet, chronically sleep-deprived, with a high risk of cancer, a bloated head, and bone density and muscle mass worse than those of a 120-year-old, due to the low gravity and high radiation (Chang, 2014). Just as faster and more precise calculations of the positions of the planets in our solar system will not make astrology more accurate in predicting the future, faster machines, more compact forms of data storage and larger data sets will not make us able to predict and control the future. Technological advances will not transform our species from a slowly evolving one into one that can adapt instantly to extreme changes in our environment, as we would need to in the case of a rapidly changing climate or a move to another planet.
These are the more extreme examples, interesting because they make the escapist attitude to our situation so painfully clear. Yet the more widely accepted beliefs are just as damaging. The belief that, with so much data at our fingertips, we’ll make amazing new discoveries that will save us just in time leads to a hope that distracts us from the real issues that threaten us. There are 7.4 billion of us. The earth’s resources are running out. Climate change will set in motion a chain of events we cannot predict precisely, but dramatic sea level rises and mass extinction will without doubt be part of it. We cannot all Houdini our way out of this one, no matter how tech-savvy we might become in the coming decades. Hope is essential, but a false sense of safety paralyses us, and we need to start acting. In an attempt to understand the world, to become less fragile, in the most rational and scientifically sound way we could think of, we’ve started anthropomorphising machines and dehumanising ourselves. This has, besides inspiring a great number of Hollywood productions, created a massive blind spot and left us paralysed. While we are bravely filling up the world’s hard disks, we ignore our biggest weakness: this planet we are on is irreplaceable, and our existence on it is only possible under certain conditions. Our habits, our quest to become more productive, more efficient, safer, less mortal, more superhuman, actually endanger us as a species… With technology there to save us, there is no urgency to act. It is time to dispel the myth, to exit the church of big data, and to start acting in our best interest as a species, not as isolated packages of selfish genes organised as information processors ready to be swooshed off the planet in a singularity-style rapture.
In this essay the focus is on the socio-technical aspect of the phenomenon, the assumptions and beliefs surrounding big data, and on research using “data exhaust” such as status updates on social media, web searches, and credit card payments. This is an important distinction to make, as there is much valid research that also falls under the umbrella term of big data, most notably climate research, which uses data based on measurements made by scientists rather than so-called found data.