
AI TRAPS: Automating Discrimination

On 16 June, Tatiana Bazzichelli and Lieke Ploeger presented a new Disruption Network Lab conference entitled “AI TRAPS”, scrutinising artificial intelligence and automated discrimination. The conference touched on several topics, from biometric surveillance to diversity in data, taking a closer look at how AI and algorithms reinforce the prejudices and biases of their human creators and societies, and searching for solutions and countermeasures.

A focus on facial recognition technologies opened the first panel “THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science”, discussing how massive sets of images have been used by academic, commercial, defence and intelligence agencies around the world for research and development. The artist and researcher Adam Harvey addressed this technology as the focal point of an emerging authoritarian logic, based on probabilistic determinations and on the assumption that identities are static and reality is made of absolute norms. He considered two recent reports about the UK and China showing how the technology is still unreliable and dangerous. According to data released under the UK’s Freedom of Information law, 98% of “matches” made by London’s Metropolitan Police using facial recognition were mistakes. Meanwhile, over 200 million cameras are active in China and – although only around 15% are thought to be technically capable of effective face recognition – Chinese authorities are deploying a new system of this kind to racially profile, track and control the Uighur Muslim minority.

Big companies like Google and Facebook hold collections of billions of images, most of them available through search engines (63%), on Flickr (25%) and on IMDb (11%). Biometric companies around the world run facial recognition algorithms on pictures of ordinary people, collected from unsuspected places like dating apps and social media, to be used for private profit and governmental mass surveillance. These images end up mostly in China (37%), the US (34%), the UK (21%) and Australia (4%), as Harvey reported.

Metis Senior Data Scientist Sophie Searcy, a technical expert who has also researched extensively on diversity in tech, contributed to the discussion of this crucial issue underlying the design and implementation of AI, reinforcing the picture of a technology that tends to be defective, unable to contextualise or account for the complexity of the reality it interacts with. This generates many false predictions and mistakes. To maximise results and reduce errors, tech companies and research institutions that develop AI algorithms use the Stochastic Gradient Descent (SGD) technique, which picks a few randomly selected samples from a dataset instead of analysing the whole of it at each iteration, saving a considerable amount of time. As Searcy explained in conversation with the panel moderator, Adriana Groh, the technique needs huge amounts of data, and tech companies are therefore becoming increasingly hungry for it.
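The idea Searcy referred to can be sketched in a few lines of Python. This is only an illustrative minibatch SGD for a simple linear model, not code shown at the conference: each parameter update is computed on a small random sample of the data rather than on the full dataset.

```python
import numpy as np

def minibatch_sgd(X, y, lr=0.01, batch_size=32, epochs=20, seed=0):
    """Fit a linear model y ≈ X @ w with minibatch SGD.

    Instead of computing the gradient over the whole dataset at every step,
    each update uses only a small random minibatch, which is what makes the
    method fast on large datasets.
    """
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    for _ in range(epochs):
        for idx in np.array_split(rng.permutation(n), max(n // batch_size, 1)):
            Xb, yb = X[idx], y[idx]                      # random minibatch
            grad = 2 * Xb.T @ (Xb @ w - yb) / len(idx)   # gradient on the batch only
            w -= lr * grad                               # parameter update
    return w

# Toy usage: recover known weights from noisy data
rng = np.random.default_rng(1)
X = rng.normal(size=(10_000, 3))
true_w = np.array([1.5, -2.0, 0.5])
y = X @ true_w + rng.normal(scale=0.1, size=10_000)
print(minibatch_sgd(X, y))   # ≈ [1.5, -2.0, 0.5]
```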

Adam Harvey, Sophie Searcy and Adriana Groh during the panel discussion “THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science”

To take a closer look at the relationship between governments and AI technology, the researcher and writer Crofton Black presented the study he conducted with Cansu Safak at The Bureau of Investigative Journalism on the UK government’s use of big data. They used publicly available data to build a picture of companies, services and projects in the area of AI and machine learning, mapping which IT systems the British government has been buying. To do so they interviewed experts and academics, analysed official transparency data and scraped government websites. Transparency and accountability over the way public money is spent are a requirement for public administrations, and relying on this principle they filed dozens of requests under the Freedom of Information Act to obtain audit trails from public authorities. In this way they mapped an ecosystem of relations between the UK public sector and corporate entities: more than 1,800 IT companies, from big players like BAE Systems and IBM down to a constellation of small start-ups.
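A rough sketch of the kind of aggregation such mapping involves is given below. This is not the Bureau’s actual pipeline; it simply assumes a few locally downloaded departmental “spend over £25,000” transparency CSVs (file and column names here are invented for illustration) and tallies which suppliers recur across government.

```python
import pandas as pd

# Hypothetical local copies of downloaded transparency CSVs; real file names
# and column labels vary by department and by month.
files = ["dept_a_spend_2019_01.csv", "dept_b_spend_2019_01.csv"]

frames = []
for path in files:
    df = pd.read_csv(path)
    df.columns = [c.strip().lower() for c in df.columns]   # normalise headers
    frames.append(df[["supplier", "amount"]])

spend = pd.concat(frames)
# Total spend per supplier, largest first: a first, crude cut at mapping
# which IT vendors appear across the public sector.
top_suppliers = spend.groupby("supplier")["amount"].sum().sort_values(ascending=False)
print(top_suppliers.head(20))
```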

As Black explained in conversation with the keynote’s moderator, Daniel Eriksson, Transparency International’s Head of Technology, the investigation ran into systemic problems with disclosure from authorities that do not keep transparent, accessible records. Just 25% of UK government departments provided some form of information. Details of the contracts therefore remain unknown, but it is at least possible to list the services that companies deploying AI and machine learning can offer governments: connecting data and identifying links between people, objects and locations; setting up automated alerts for border and immigration control that flag changes in data and events of interest; working on passport application programmes, implementing risk-based approaches to application assessments; and providing identity verification services via smartphones, gathering real-time biometric authentication. These are just a few examples.

Crofton Black and Daniel Eriksson during the panel discussion “HOW IS GOVERNMENT USING BIG DATA?”

Maya Indira Ganesh opened the panel “AI FOR THE PEOPLE: AI Bias, Ethics & the Common Good” by questioning how tech and research have historically almost always been developed and conducted on prejudiced parameters, falsifying results and distorting reality. For instance, data about women’s heart attacks went unconsidered for decades, until doctors and scientists determined that ECG machines calibrated on data collected since the early 1960s could neither predict heart attacks in women nor provide reliable data for therapeutic purposes, because they had been trained only on male populations. Only from 2007 were ECG machines recalibrated with parameters based on data collected from women. It is impossible to calculate the impact this gender inequality has had on the development of modern cardiovascular medicine and on the lives of millions of women.

As the issue of algorithmic bias in tech, and specifically in AI, grows, big tech firms and research institutions are writing ethics charters, establishing ethics boards and sponsoring research on these topics. Detractors often call this ethics-washing, which Ganesh sees as a trick that presents ethics and morality as something definable in universal terms or on a universal scale: though ethics cannot be computed by machines, corporations need us to believe it is measurable. In this way, she suggested, the abstraction and complexity of the machine become easier to process, as ethics becomes the interface used to obfuscate what is going on inside the black box and to represent its abstractions. “But these abstractions are us and our ways of building relations,” she objected.

Ganesh consequently asked by what principle it should be acceptable to train a facial recognition system on videos of transgender people, as happened in the alarming “Robust transgender face recognition” research, based on data from people undergoing hormone replacement therapy: YouTube videos, diaries and time-lapse documentation of the transition process. The HRT Transgender Dataset, used to train AI to recognise transgender people, worsens the harassment and targeting that trans people already experience daily, harming them as a group. It was nevertheless partly financed by the FBI and the US Army, confirming that law enforcement and national security agencies appear to be very interested in these kinds of datasets and look for private companies and researchers able to provide them.

On the same panel, Slava Jankin, professor of Data Science and Public Policy, reflected on how machine learning can be used for the common good in the public sector. As was noted during the discussion moderated by Nicole Shephard, researcher on gender, technology and the politics of data, the “common good” is not easy to define and, like ethics, is not universally given. It could be identified with those goods that guarantee and determine the respect and practice of human rights. The project Jankin presented was developed inside the Essex Centre for Data Analytics in a joint effort of developers, researchers, universities and local authorities. Together they tried to build an AI able to reliably predict where children lacking school readiness are most likely to be found geographically, taking social, economic and environmental conditions into account, in order to support them in their transition and in gaining competencies.
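Purely as an illustration of what such a prediction system might look like (this is not the Essex Centre’s model; the dataset and feature names below are invented), a minimal area-level risk classifier could be sketched like this:

```python
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Hypothetical area-level dataset: one row per neighbourhood, with invented
# social, economic and environmental indicators plus a label for low school
# readiness observed in past cohorts.
areas = pd.read_csv("areas.csv")
features = ["income_deprivation", "adult_education_level",
            "housing_overcrowding", "air_quality_index"]

X_train, X_test, y_train, y_test = train_test_split(
    areas[features], areas["low_school_readiness"], test_size=0.3, random_state=0)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
print("AUC:", roc_auc_score(y_test, model.predict_proba(X_test)[:, 1]))

# Rank areas by predicted risk so that support can be targeted where it is
# most likely to be needed.
areas["risk_score"] = model.predict_proba(areas[features])[:, 1]
print(areas.sort_values("risk_score", ascending=False).head(10))
```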

Maya Indira Ganesh during her lecture, part of “AI FOR THE PEOPLE: AI Bias, Ethics & the Common Good”

The first keynote speaker of the conference was the researcher and activist Charlotte Webb, who presented her project Feminist Internet in the talk “WHAT IS A FEMINIST AI?”

“There is not just one possible internet and there is not just one possible feminism, but only possible feminisms and possible internets.” Starting from this assumption, Webb talked about Feminist Human-Computer Interaction, a discipline born to improve understanding of how gender identities and relations shape the design and use of interactive technologies. Her Feminist Internet is a non-profit organisation founded to make the internet a more equal space for women and other marginalised groups. Its approach combines art, design, critical thinking, creative technology development and feminism, seeking to build more responsible and bias-free AI able to empower people by considering the causes of marginalisation and discrimination. In her words, a feminist AI is not an algorithm, nor a system built to evangelise for a certain political or ideological cause. It is a tool that aims to recognise differences without minimising them for the sake of universality, meeting human needs with an awareness of the entire ecosystem in which it sits.

Tech adapts plastically to pre-existing discriminations and gender stereotypes. A recent UN report defines the ‘female’ obsequiousness and servility expressed by digital assistants like Alexa and the Google Assistant as examples of gender bias coded into tech products, since these assistants are often projected as young women, programmed to be submissive and to accept abuse. As Feldman (2016) put it, by encouraging consumers to understand the objects that serve them as women, technologists abet the prejudice by which women are considered objects. With her projects, Webb pushes to create alternatives that educate in order to shift this systemic problem, rather than complying with market demands, starting from the recognition that there is a diversity crisis in the AI sector and in Silicon Valley. Between 2.5% and 4% of Google, Facebook and Microsoft employees are black, while there are no public data on transgender workers within these companies. Moreover, as Webb pointed out, just 22% of the people building AI right now are female, only 18% of authors at major AI conferences are women, and over 80% of AI professors are men. At companies with a decisive impact on society, women comprise only 15% of AI research staff at Facebook and 10% at Google.

Women, people of colour, minorities, LGBTQ and other marginalised groups have substantially no say in the design and implementation of AI and algorithms; they are excluded from the processes of coding and programming. The work of engineers and designers is not inherently neutral, and as a result the automated systems they build reflect their perspectives, preferences, priorities and, ultimately, their biases.

Charlotte Webb during her keynote “WHAT IS A FEMINIST AI?”

Washington tech policy advisor Mutale Nkonde focused on this issue in her keynote “RACIAL DISCRIMINATION IN THE AGE OF AI”. She opened her talk by reporting that Google’s facial recognition team is composed of 893 people, of whom just one is a black woman, an intern. The questions, answers and predictions in their technological work will always reflect a political and socioeconomic point of view, consciously or unconsciously. Many tech people confronted with this wide-ranging problem seem to underestimate it, showing colour-blind tendencies about the impact their technology has on minorities, and specifically on black people. Historically, credit scores have been correlated with racially segregated neighbourhoods, while risk analyses and predictive policing data are corrupted by racist prejudice, producing biased data collection that reinforces privilege. Without a conscious effort to address racism in technology, new technologies will replicate old divisions and conflicts. By instituting tools like facial recognition we simply replicate entrenched behaviours along racial lines and gender stereotypes, now mediated by algorithms. Nkonde warns that civil liberties need an update for the era of AI, advancing racial literacy in tech.

In conversation with the moderator, the writer Rhianna Ilube, Nkonde recalled that in Brownsville, a poor, predominantly black New York neighbourhood with historically high rates of crime and violence, a private social-housing landlord wanted to replace keys with facial recognition software, so that people either accept surveillance or lose their homes. The case echoes wider concerns about the lack of awareness of racism. Nkonde thinks that white people must be able to cope with the discomfort of talking about race, with the countervailing pressures and their lack of cultural preparation, or simply with the risk of getting it wrong. Acting ethically is not easy if you do not work on it, and many big tech companies just like to crow about their diversity and inclusion efforts, publishing diversity goals and offering courses to reduce bias. However, there is a high level of racial discrimination in the tech sector, and specifically in Silicon Valley; at best, colour-blindness, said Nkonde, since many believe that racial classification does not limit a person’s opportunities within society, ignoring the economic and social obstacles that prevent full individual development and participation, limit freedom and equality, and exclude marginalised and disadvantaged groups from political, economic and social organisation. Nkonde concluded her keynote by stressing that we need to empower minorities, providing tools that allow them to overcome socio-economic obstacles autonomously and to participate fully in society. It is about sharing power and taking into consideration people’s unconscious biases, starting, for example, with those designing the technology.

Mutale Nkonde during her keynote “RACIAL DISCRIMINATION IN THE AGE OF AI”

The closing panel “ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism” discussed the effects of a tool shown to be not neutral, but rather the product of the prevailing social and economic model.

Dia Kayyali, leader of the Tech and Advocacy programme at WITNESS, described how AI is facilitating white supremacy, nationalism, racism and transphobia, recalling the dramatic persecution of the Rohingya in Myanmar and China’s oppressive social scoring and surveillance systems. Among the critical aspects, Kayyali reported the case of YouTube’s anti-extremism algorithm, which removed thousands of videos documenting atrocities in Syria in an effort to purge hate speech and propaganda from the platform. The algorithm was trained to automatically flag and eliminate content that potentially breached its guidelines, and ended up deleting documentation relevant to the prosecution of war crimes. Once again, the inability to contextualise creates severe risks in the way machines operate and make decisions. Likewise, applying general parameters without considering specificities and the complex concept of identity, Facebook imposed new policies in 2015 that arbitrarily exposed drag queens, trans people and other users at risk who were not using their legal names for safety and privacy reasons, including protection from domestic violence and stalking.

Os Keyes, researcher on gender, tech and (counter)power, argued that AI is not the problem but the symptom: the problem is the structures creating AI. We live in an environment where a few highly wealthy people and companies rule everything, and we have bias in AI and tech because their development is driven by exactly those individuals. To fix AI we have to change the requirements and expectations around it; we can fight for AI based on explainability and transparency, but if we strive to fix AI without looking at the wider picture, in ten years the same debate will arise over another technology. Keyes argued that AI technology has been discriminatory, racialised and gendered since its very beginning, because society is capitalist, racist, homophobic, transphobic and misogynistic. The question to pose is how we start building spaces that are prefigurative and constructed on the values we want a wider society to embrace.

As Tatiana Bazzichelli, founder and curator of the Disruption Network Lab, pointed out while moderating this panel, the problem of bias in algorithms is related to several major “bias traps” that algorithm-based prediction systems fail to escape. The real aspect to discuss is the fact that AI is political – not just because of the question of what is to be done with it, but because of the political tendencies of the technology itself.

In his analysis of the political effects of AI, Dan McQuillan, Lecturer in Creative and Social Computing at the University of London, underlined that while the reform of AI is endlessly discussed, there seems to be no serious attempt to question whether we should be using it at all. We need to think collectively about ways out, learning from and with each other rather than relying on machine learning. He suggests countering the thoughtlessness of AI with practices of solidarity, self-management and collective care, because by bringing the perspective of marginalised groups to the core of AI practice it is possible to build a new society within the old, based on social autonomy.

What McQuillan calls AI realism appears close to the far-right perspective, as it trivialises complexity and naturalises inequalities. Learning through AI indeed involves reductive simplifications, and simplifying social problems into matters of exclusion is the politics of the populist and fascist right. McQuillan suggests taking guidance from feminist and decolonial technology studies, which have cast doubt on our ideas about objectivity and neutrality. An antifascist AI, he explains, would involve some kind of people’s councils, putting the perspective of marginalised groups at the core of AI practice and transforming machine learning into a form of critical pedagogy.

Dia Kayyali, Os Keyes, Dan McQuillan and Tatiana Bazzichelli during the panel “ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism”

We see increasing investment in AI, machine learning and robots. Automated decision-making informed by algorithms is already a predominant reality, and its range of applications has broadened to almost all aspects of life. Current ethical debates about the consequences of automation focus on the rights of individuals and marginalised groups. However, algorithmic processes also generate a collective impact that can only partially be addressed at the level of individual rights, as it is the result of a collective cultural legacy. A society soaked in racial and sexual discrimination will replicate it inside its technology.

Moreover, when it comes to surveillance technology and face recognition software, existing ethical and legal criteria appear ineffective, and the lack of standards around their use and sharing only reinforces their intrusive and discriminatory nature.

While building alternatives we need to consider inclusion and diversity: if more brown and black people were involved in building these systems, there would be less bias. But this is not enough. Automated systems are mostly trying to identify and predict risk, and risk is defined according to cultural parameters that reflect the historical, social and political milieu, producing answers that fit a certain point of view and drive decisions. What we are and where we are as a collective, what we have achieved and what we still lack culturally, is what goes into software to make those same decisions in the future. In such a context, a diverse team within a discriminatory, conflictual society might find ways to wash the problem of bias away in one place, but it will resurface somewhere else.

The truth is that automated discrimination, racism and sexism are integrated into tech infrastructures. A new generation of start-ups is fulfilling authoritarian needs, commercialising AI technologies and automating biases based on skin colour and ethnicity, sexual orientation and identity. They develop censored search engines and platforms for authoritarian governments and dictators, and refine high-tech military weapons by training them with facial recognition on millions of people without their knowledge. Governments and corporations are developing technology in ways that threaten civil liberties and human rights. It is not hard to imagine the impact of tools for automated gender recognition in countries where non-white, non-male and non-binary individuals are discriminated against. Bathrooms and changing rooms that open only on AI gender detection, or cars that start the engine only if a man is driving, are to be expected. Those who are not gender conforming, who do not fit traditional gender structures, will end up systematically blocked and discriminated against.

Open source, transparency and diversity alone will not defeat colour-blind attitudes, reactionary backlashes, monopolies, imposed conformity and cultural oppression by design. As was discussed at the conference, using algorithms to label people by sexual identity or ethnicity has become easy and common. If you build a technology able to catalogue people by ethnicity or sexual identity, someone will exploit it to repress genders or ethnicities, as China shows.
In this sense, no better facial recognition is possible, no mass-surveillance tech is safe, and attempts at building good tech will continue to fail. To tackle bias, discrimination and harm in AI, we have to integrate research on and development of technology with the humanities and social sciences, consciously deciding to create a society in which everybody can participate in the organisation of our common future.


Curated by Tatiana Bazzichelli and developed in cooperation with Transparency International, this Disruption Network Lab conference was the second in the 2019 series The Art of Exposing Injustice.
More information about the conference, its speakers and themes can be found here: https://www.disruptionlab.org/ai-traps
Videos of the conference are on YouTube, and the Disruption Network Lab is also on Twitter and Facebook.

To follow the Disruption Network Lab, sign up for its newsletter and stay informed about its conferences, ongoing research and projects. The next Disruption Network Lab event, “Citizens of Evidence”, is planned for 20–21 September at Kunstquartier Bethanien, Berlin. Make sure you don’t miss it!

Photo credits: Maria Silvano for Disruption Network Lab

Nervous Systems – Algorithms and our everyday life

What does the comic book heroine Wonder Woman have to do with the lie detector? How is the Situationists’ dérive connected to Google Maps’ real-time recordings of our patterns of movement? Do we live in Borges’ story “On Exactitude in Science”, in which the art of cartography became so perfect that the maps of the land are as big as the land itself? These are the topics of the Nervous Systems exhibition – questions pertinent to the state of the world we are in at this moment.

The motto “Quantified life and the social question” sets the frame. Science started out as the quantification of the world outside of us, but from the 19th century onwards it moved towards observing and quantifying human behaviour. What are the consequences of our daily life being more and more controlled by algorithms? How does it change our behaviour if we are caught up in a feedback loop about what we are doing and how it relates to what others are doing? Does this contribute to more pressure towards normalised behaviour?

Art from now to then

These are just some of the questions that come to mind wandering around the densely packed space. You need to bring plenty of time when you visit “Nervous Systems” – it is easy to spend the whole day watching the video works and reading the text panels.

There are three distinct parts: the Grid, Triangulation and the White Room, each treating these questions from a different perspective. The Grid is the art part of the show and features younger artists like Melanie Gilligan, whose video series “The Common Sense” imagines a world where people can directly experience other people’s feelings. In the beginning this “patch” allows people to grow closer together, increasing empathy and togetherness, but soon it becomes just another tool for surveillance and the optimisation of workflows in the capitalist economy – an uncanny metaphor for the internet.

Melanie Gilligan, The Common Sense, 2014, Courtesy: Galerie Max Mayer

The Swiss art collective !Mediengruppe Bitnik rebuilt Julian Assange’s room in the Ecuadorian Embassy – including a treadmill, a whisky bottle, science fiction novels, a mobile phone collection and server infrastructure – all in a few square metres. The installation illustrates the truism that history is made by real people who eat and drink and have bodies existing outside the roles they are known for – which is exactly why Assange has been confined since 2012 without being convicted by a regular court.

The NSA surveillance scandal does not play a major part in the exhibition, as the curators – Stephanie Hankey and Marek Tuszynski from the Tactical Technology Collective and Anselm Franke, head of the HKW’s Department for Visual Arts and Film – concentrate instead on the evolution of methods of quantification. For this reason we see not only younger artists who deal directly with data and its implications, but also works going back to conceptual art and performance of the 1970s and earlier.

The performance artist Vito Acconci is represented with his 1973 work “Theme Song”, which evokes today’s Instagram and Snapchat culture by building an intimate space in which he pretends to share secrets with the audience. Today he could be one of the YouTube ASMR stars whispering nonsense in order to trigger an unproven neural reaction.

Harun Farocki’s film “How to Live in the FRG”, on the other hand, shows how society even then used strategies of optimisation to train the human material for the capitalist risk society.

From art to social science

Installation view, Triangulations. Photo: Laura Fiorio, Haus der Kulturen der Welt

Triangulation is the method of determining the location of a point in space by measuring the angles to it from two known points instead of measuring the distance directly; as a method it goes back to antiquity. In the exhibition, the Triangulation stations provide a theoretical and cultural background to the notion of a quantified understanding of the world. Here the exhibition offers a rich historical background, mixing stories about the UK Mass Observation project, in which volunteers made detailed notes about dance hall etiquette, with analyses of the history of mapping and work optimisation.
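The geometric idea the section borrows its name from can be made concrete with a short worked example (illustrative only): given the length of a baseline between two observers and the angles each measures towards a target, the target’s position follows from the law of sines, without any distance to it ever being measured directly.

```python
import math

def triangulate(baseline, angle_a_deg, angle_b_deg):
    """Return (x, y) of the target, with observer A at (0, 0) and B at (baseline, 0)."""
    a, b = math.radians(angle_a_deg), math.radians(angle_b_deg)
    gamma = math.pi - a - b                              # angle at the target point
    dist_a = baseline * math.sin(b) / math.sin(gamma)    # law of sines: distance A -> target
    return dist_a * math.cos(a), dist_a * math.sin(a)

# Observers 100 m apart, sighting the target at 60° and 50° from the baseline
print(triangulate(100, 60, 50))   # ≈ (40.8, 70.6)
```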

The triangulations in the exhibition are written by eminent scholars, activists and philosophers. The legal researcher Lawrence Liang writes about re-emerging forensic techniques like polygraphs and brain mapping that have their roots in the positivism of the nineteenth century. Other triangulations cover smart design (Orit Halpern); algorithms, patterns and anomalies (Matteo Pasquinelli); and quantification and the social question, where Avery F. Gordon, together with curator Anselm Franke, speaks about the connection between governance, industrial capitalism and quantification.

“Today’s agitated state apparatuses and overreaching institutions act according to the fantasy that given sufficient information, threats, disasters, and disruptions can be predicted and controlled; economies can be managed; and profit margins can be elevated,” the curators say in their statement. We see this belief everywhere – in the state, in the financial sector, in the drive towards self-optimisation. Many of the underlying assumptions can be traced back to the 19th century – the belief that if we have enough information we can control everything.

Like Pierre-Simon Laplace and his demon, we believe that if we know more we can determine more. The problem is that knowing is only possible through a specific lens and context, so we become caught up in a feedback loop that only confirms what we already knew to begin with. Most notably, this can be seen in the concept of predictive policing, which is also present in the Triangulation part of the exhibition. Algorithms only catch patterns that are pre-determined by the people who program them.

What we can do – The White Room

Going through the Grid of artworks (the exhibition architecture by Kris Kimpe masterfully manifests a rectangular, grid-like structure in the exhibition space) and reading the Triangulation stations, the visitor is left a bit bereft. Is there still hope? One feels as at the end of George Orwell’s 1984 – the main character is broken and Big Brother’s reign unchallenged. This is where the White Room comes in. Here visitors can be active: the Tactical Technology Collective provides a workshop programme where one can learn how to secure one’s digital devices, how to avoid being tracked – on the web or by smartphone – and what the alternatives are to corporate data collectors like Facebook or Google.

Installation view, The White Room. Photo: Laura Fiorio / Haus der Kulturen der Welt

In the end the nature of the internet is twofold and conflicting: on the one hand it allows for unprecedented observation and monitoring, on the other it is a tool for resistance, connecting people who were separated by space and time. It is not yet clear whether this is enough or whether (like radio and other technological media before it) it will be co-opted by the ones in power.

This leads me to one problem with an otherwise very necessary and inspiring show: for all its social justice impetus, the narrative presented here is very androcentric – there are few if any feminist, queer or post-colonial perspectives in the exhibition, and where they appear, they are presented from the outside. This is not a merely theoretical objection, as women, people of colour and other minorities (who are actually majorities) are especially vulnerable to the kind of hegemonic enclosure built on data and algorithms. In a talk in the supporting programme, the feminist philosopher Ewa Majewska gave some pointers towards a feminist critique of quantification: the policing of women’s reproductive abilities, affective labour, or privacy as a political tool. More of this in the exhibition itself would have gone a long way.

The exhibition is accompanied by an ambitious lecture programme. The finissage day, May 8, is bound to be especially interesting: Franco Berardi, Laboria Cuboniks, Evgeny Morozov, Ana Teixeira Pinto and Seb Franklin are invited to think further about digital life, autonomy, governance and algorithms.

Nervous Systems. Quantified Life and the Social Question
March 11 – May 9, 2016
Haus der Kulturen der Welt Berlin
Opening hours and information about the lecture programme on the website.

Share your Values with the Museum of Contemporary Commodities

LAB #1 in the Art Data Money series

Walkshop and Commodity Consultation
Come for one or both sessions, or just drop in for a chat about MoCC over tea and cake.

VISITING INFORMATION

For both the morning and afternoon session please BOOK HERE.

10:30am – 1:30pm  Walkshop – BOOK HERE.

Join us for a walkshop exploring places, moments and technologies of trade and exchange in the Finsbury Park retail area. We will be out and about for around 90 minutes, followed by a group conversation on the relations between data, trade and values and how they are affecting our daily lives and spaces. Please dress for the weather and bring a smartphone/camera and a means to download images. Coffee and cake provided.

2:00–4:30pm Commodity Consultation – BOOK HERE.

Use LEGO re-creations and animated gifs to explore the values held in your own experiences of trade and exchange. Our Commodity Consultant will be available throughout the afternoon to research your commodity questions, helping you add your own things of value to the Museum of Contemporary Commodities.

Part of Furtherfield’s Art Data Money programme.

Art Data Money Logo

FURTHER INFO

The Museum of Contemporary Commodities (MoCC) is neither a building nor a permanent collection of stuff – it’s an invitation: to consider every shop, online store and warehouse full of stuff as if it were a museum, and all the things in it part of our collective future heritage.

MoCC is an art-social science project led by artist Paula Crutchlow (Blind Ditch) and cultural geographer Ian Cook (University of Exeter) in collaboration with Furtherfield.

The project is supported by the Economic and Social Research Council, Islington Council, All Change Arts, Exeter City Council and the University of Exeter.

MORE INFO

VISIT MoCC WEBSITE

Museum of Contemporary Commodities – Data Walkshop

Explore and discuss the data surveillance processes at play in Finsbury Park through a process of rapid group ethnography. Arrive from 5.30pm at Furtherfield Commons, the community lab space in Finsbury Park, for a short introduction to the project. We will leave at 6pm for a 90-minute walk around the area, followed by food and discussion.

Please bring:

This walkshop event is part of the research and development for the Museum of Contemporary Commodities.

VISITING INFORMATION

Sex and Security!

http://fossbox.org.uk/content/sex-and-security

A two-day event by Fossbox in collaboration with Furtherfield and Autonomous Tech Fetish around surveillance, gender and society. The event will consist of a practical privacy workshop followed by a day of discussion, making and performance. Open to everyone, women/LGBTQ especially welcome. 

Practical workshop on privacy and security: 7 March, 11am
A focused exploration of the issues around sexuality, gender and surveillance. Visit this Meetup Group to join.

Discussion workshops – sex and surveillance: 22 March, 11am
A workshop using play, performance and discussion to develop a better understanding and a more collectivised civil response around these issues. Visit this Meetup Group to join.

If you’re hazy about how digital mass surveillance works and what you can do, you’ll get most out of it by attending both workshops. If you already know about the tech side, you might want to help out at the practical workshop!

MORE ABOUT THIS EVENT

The ‘Internet of Things’ (IoT) is heralded as a quantum leap in economy, society, culture and government — yet most people outside of the tech and creative industries struggle to get their heads around what this really means. A number of recent scandals involving ‘smart’ devices, as well as Snowden’s revelations, have highlighted the huge and slightly scary security and privacy issues at the heart of our brave, new cyberworld. The ICO and Ofcom are acutely aware that this pervasive surveillance is making a mockery of current EU/UK privacy legislation, which urgently needs an overhaul. Policy, however, prioritises rapid development of the IoT over protecting sensitive data, calling for ‘transparency’ rather than restrictions on how our data can be collected and used. Meanwhile, the corporate and governmental chorus that law-abiding citizens have nothing to worry about from pervasive surveillance is wearing thinner every day.

As the digital domain begins to bleed into our ordinary, physical surroundings, the government is telling us that we don’t need privacy and even wants to outlaw the secure encryption which can protect our private life from being used for invasive profiling, ending up on public websites and social media, or going on sale in the data black market. Cameron has stated that there should be absolutely no communication or data which the government can’t read. Is it really OK for corporations to compile and sell personal profiles so intimate that they know more about us than our loved ones do? Does the government really have a right to gather and keep our every private thought, to be used in profiling if we ever do fall foul of the state for whatever reason? There is already a discussion of the ‘militarisation’ of social science and ‘predictive policing’ – we’ve seen in the past which vulnerable groups are most likely to be targeted for profiling. What about the constant monitoring and manipulation of movement around cities? What if our government takes even more steps to the right whilst conditions get harder for most of the population? What if there were mass protests? What if a future right-wing government outside the EU decided to outlaw LGBTQ identities? What might these ubiquitous personal dossiers and pervasive control of urban space be used for then? Are we sleepwalking into a society where everyone is always-already a criminal and non-conformity or protest is no longer an option?

How is this a gender issue?

These developments are going to affect everyone, but there are many issues around online security which will have particular resonances for women and queers and may affect us in very specific ways. There has been a lot of discussion and press about gender-trolling, ‘revenge porn’, ‘Gamergate’ and Wikipedia, the ‘quantified self’, Facebook’s real-names policy and binary sex dropdown lists in databases — to name but a few. However, there is not very much discussion about how these issues all fit together and what the overall impact of this production of digitised social space, hyper-self-awareness and networked sexuality might be for women, queers and marginalised groups. How do corporate and government surveillance and profiling affect our sense of self, our personal and public spaces, and our freedom to speak and organise with other women and queers? Where is the line between design which facilitates us and design which manipulates us?

Can we really make a difference?

The issues are hard to wrap your head around and keeping your private life private requires the commitment of a little effort — it can interrupt the manicured flow of our digital ‘user experience’. You might not really want to know what happens after you push the little button that says ‘send’ but staying safe online in 2015 and beyond is going to get a little bit more challenging. Not caring is going to be an increasingly risky option.

Technical solutions are needed, along with a small army of ‘infosec’ practitioners, strong encryption and hardened systems. However, this won’t be enough, because US, UK and EU governments are more concerned with corporate profits and the security of their own power than with the safety and wellbeing of civil society. Alongside our own private actions to secure our data and the efforts of civil society hackers, non-profits and NGOs striving to keep us informed, skilled and safe, we also need an aware and empowered civil society response. But is the ‘smart’ economy an all-round bad idea purely to be defended against, or is there the possibility of ‘smart’ technology and smart systems co-designed by and for women themselves, and a respectful way to manage ‘big data’?