On June 16th, Tatiana Bazzichelli and Lieke Ploeger presented a new Disruption Network Lab conference entitled “AI TRAPS”, scrutinizing Artificial Intelligence and automated discrimination. The conference touched on several topics, from biometric surveillance to diversity in data, taking a closer look at how AI and algorithms reinforce the prejudices and biases of their human creators and societies, in search of solutions and countermeasures.
A focus on facial recognition technologies opened the first panel, “THE TRACKED & THE INVISIBLE: From Biometric Surveillance to Diversity in Data Science”, discussing how massive sets of images have been used by academic, commercial, defence and intelligence organisations around the world for research and development. The artist and researcher Adam Harvey addressed this technology as the focal point of an emerging authoritarian logic, based on probabilistic determinations and on the assumption that identities are static and reality is made of absolute norms. Harvey considered two recent reports about the UK and China showing how this technology is still unreliable and dangerous. According to data released under the UK's Freedom of Information Act, 98% of the “matches” made by London's Metropolitan Police using facial recognition were mistakes. Meanwhile, over 200 million cameras are active in China and – although only 15% are thought to be technically capable of effective face recognition – Chinese authorities are deploying a new system of this kind to racially profile, track and control the Uighur Muslim minority.
Big companies like Google and Facebook hold collections of billions of images, most of which come from search engines (63%), Flickr (25%) and IMDb (11%). Biometric companies around the world run facial recognition algorithms on pictures of ordinary people, collected in unexpected places like dating apps and social media, to be used for private profit and governmental mass surveillance. As Harvey reported, these images end up mostly in China (37%), the US (34%), the UK (21%) and Australia (4%).
Metis Senior Data Scientist Sophie Searcy, a technical expert who has also researched extensively on diversity in tech, contributed to the discussion of this crucial issue underlying the design and implementation of AI, reinforcing the picture of a technology that tends to be defective, unable to contextualise or grasp the complexity of the reality it interacts with. This generates many false predictions and mistakes. To maximise their results and reduce mistakes, tech companies and research institutions that develop algorithms for AI use the Stochastic Gradient Descent (SGD) technique. It lets a model update on a few samples selected at random from a dataset, instead of analysing the whole of it at each iteration, saving a considerable amount of time. As Searcy explained in conversation with the panel moderator, Adriana Groh, this technique needs huge amounts of data, and tech companies are therefore becoming increasingly hungry for it.
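The idea behind SGD can be sketched in a few lines of code. This is a minimal illustration, not code from any system discussed at the conference: a toy linear model is fitted by repeatedly sampling a small random mini-batch and stepping along its gradient, rather than computing the gradient over the full dataset at every iteration. All names and parameter values here are illustrative assumptions.

```python
import random

random.seed(42)  # fixed seed so the demo is reproducible

def sgd_linear_regression(xs, ys, lr=0.02, batch_size=4, steps=3000):
    """Fit y ~ w*x + b, updating on small random mini-batches (SGD)."""
    w, b = 0.0, 0.0
    n = len(xs)
    for _ in range(steps):
        # Pick a few samples at random instead of scanning the whole set.
        batch = random.sample(range(n), min(batch_size, n))
        grad_w = grad_b = 0.0
        for i in batch:
            err = (w * xs[i] + b) - ys[i]  # prediction error on one sample
            grad_w += 2 * err * xs[i]
            grad_b += 2 * err
        # Average the gradients over the mini-batch and step downhill.
        w -= lr * grad_w / len(batch)
        b -= lr * grad_b / len(batch)
    return w, b

# Toy data generated from y = 2x + 1; SGD should recover w close to 2, b close to 1.
xs = [x / 10 for x in range(50)]
ys = [2 * x + 1 for x in xs]
w, b = sgd_linear_regression(xs, ys)
```

The time saving Searcy described comes from each update touching only `batch_size` samples, which is also why the technique rewards ever-larger datasets: more data gives the noisy mini-batch estimates more to average over.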
For a closer look at the relation between governments and AI technology, the researcher and writer Crofton Black presented the study conducted with Cansu Safak at The Bureau of Investigative Journalism on the UK government's use of big data. They used publicly available data to build a picture of companies, services and projects in the area of AI and machine learning, mapping what IT systems the British government has been buying. To do so, they interviewed experts and academics, analysed official transparency data and scraped governmental websites. Transparency and accountability over the way public money is spent are a requirement for public administrations, and the researchers relied on this principle, filing dozens of requests under the Freedom of Information Act to obtain audit trails from public authorities. They thus mapped an ecosystem of connections between the UK public sector and corporate entities: more than 1,800 IT companies, from big ones like BAE Systems and IBM to a constellation of small start-ups.
As Black explained in conversation with the keynote moderator Daniel Eriksson, Transparency International's Head of Technology, the investigation faced systemic problems with disclosure from authorities, which do not keep transparent and accessible records. Indeed, just 25% of UK government departments provided some form of information. Details of the contracts are therefore still unknown, but it is at least possible to list the services that companies deploying AI and machine learning can offer governments: connecting data and identifying links between people, objects and locations; setting up automated alerts in the context of border and immigration control, spotting changes in data and events of interest; working on passport application programmes, implementing risk-based approaches to application assessments; and working on identity verification services that use smartphones to gather real-time biometric authentication. These are just a few examples.
Maya Indira Ganesh opened the panel “AI FOR THE PEOPLE: AI Bias, Ethics & the Common Good” by questioning how tech and research have historically been developed and conducted on prejudiced parameters, falsifying results and distorting reality. For instance, data about women's heart attacks wasn't taken into consideration for decades, until doctors and scientists determined that ECG machines calibrated on data collected since the early 1960s could neither predict heart attacks in women nor give reliable data for therapeutic purposes, because they had been trained only on male populations. Only from 2007 were ECG machines recalibrated on parameters based on data collected from female individuals. It is impossible to calculate the impact this gender inequality has had on the development of modern cardiovascular medicine and on the lives of millions of women.
As the issue of algorithmic bias in tech, and specifically in AI, grows, all the big tech firms and research institutions are writing ethics charters, establishing ethics boards and sponsoring research on these topics. Detractors often call this ethics-washing, and Ganesh sees it as a trick that presents ethics and morality as something definable in universal terms or on a universal scale: though ethics cannot be computed by machines, corporations need us to believe it is something measurable. The researcher suggested that in this way the abstraction and complexity of the machine become easy to process, as ethics becomes the interface used to obfuscate what is going on inside the black box and to represent its abstractions. “But these abstractions are us and our ways of building relations,” she objected.
Ganesh consequently asked on what principle it could be acceptable to train a facial recognition system on videos of transgender people, as happened in the alarming “Robust transgender face recognition” research, based on data from people undergoing hormone replacement therapy: YouTube videos, diaries and time-lapse documentation of the transition process. The HRT Transgender Dataset used to train AI to recognize transgender people worsens the harassment and targeting that trans people already experience daily, harming them as a group. It was nevertheless partly financed by the FBI and the US Army, confirming that law enforcement and national security agencies are very interested in these kinds of datasets and look for private companies and researchers able to provide them.
In the same panel, Slava Jankin, professor of Data Science and Public Policy, reflected on how machine learning can be used for the common good in the public sector. As was noted during the discussion moderated by Nicole Shephard, researcher on gender, technology and the politics of data, the “common good” is not easy to define and, like ethics, is not universally given. It could be identified with those goods that guarantee and determine the respect of human rights and their practice. The project Jankin presented was developed at the Essex Centre for Data Analytics in a joint effort of developers, researchers, universities and local authorities. Together, they tried to build an AI able to predict reliably where children lacking school readiness are most likely to be found geographically, in order to support their transition and acquisition of competencies, considering social, economic and environmental conditions.
The first keynote speaker of the conference was the researcher and activist Charlotte Webb, who presented her project Feminist Internet in the talk “WHAT IS A FEMINIST AI?”
“There is not just one possible internet and there is not just one possible feminism, but many possible feminisms and many possible internets.” Starting from this assumption, Webb talked about Feminist Human Computer Interaction, a discipline born to improve understanding of how gender identities and relations shape the design and use of interactive technologies. Her Feminist Internet is a non-profit organisation founded to make the internet a more equal space for women and other marginalized groups. Its approach combines art, design, critical thinking, creative technology development and feminism, seeking to build more responsible and bias-free AI able to empower people by considering the causes of marginalization and discrimination. In her words, a feminist AI is not an algorithm, nor a system built to evangelize for a certain political or ideological cause. It is a tool that aims at recognizing differences without minimizing them for the sake of universality, meeting human needs with awareness of the entire ecosystem in which it sits.
Tech adapts plastically to pre-existing discriminations and gender stereotypes. A recent UN report defines the ‘female’ obsequiousness and servility expressed by digital assistants like Alexa and the Google Assistant as examples of gender biases coded into tech products, since these assistants are often projected as young women, programmed to be submissive and to accept abuse. As Feldman (2016) stated, by encouraging consumers to understand the objects that serve them as women, technologists abet the prejudice by which women are considered objects. With her projects, Webb pushes to create alternatives that educate people to shift this systemic problem – rather than complying with market demands – starting from the fact that there is a diversity crisis in the AI sector and in Silicon Valley. Between 2.5% and 4% of Google, Facebook and Microsoft employees are black, while there are no public data on transgender workers within these companies. Moreover, as Webb pointed out, just 22% of the people building AI right now are female, only 18% of authors at major AI conferences are women, and over 80% of AI professors are men. Looking at companies with a decisive impact on society, women comprise only 15% of AI research staff at Facebook and 10% at Google.
Women, people of colour, minorities, LGBTQ and marginalized groups have substantially no say in the design and implementation of AI and algorithms; they are excluded from the processes of coding and programming. The work of engineers and designers is not inherently neutral, and as a result the automated systems they build reflect their perspectives, preferences, priorities and, eventually, their biases.
Washington Tech Policy Advisor Mutale Nkonde focused on this issue in her keynote “RACIAL DISCRIMINATION IN THE AGE OF AI.” She opened her talk by reporting that Google's facial intelligence team is composed of 893 people, only one of whom is a black woman, an intern. Questions, answers and predictions in their technological work will always reflect a political and socioeconomic point of view, consciously or unconsciously. Many tech people confronted with this wide-ranging problem seem to underestimate it, showing colour-blind tendencies about the impact their technology has on minorities and specifically on black people. Historically, credit scores correlate with racially segregated neighbourhoods, and risk analyses and predictive policing data are corrupted by racist prejudice, leading to biased data collection that reinforces privilege. Without a conscious effort to address racism in technology, new technologies will replicate old divisions and conflicts: by deploying technologies like facial recognition, we just replicate entrenched behaviours based on racial lines and gender stereotypes, now mediated by algorithms. Nkonde warns that civil liberties need an update for the era of AI, advancing racial literacy in tech.
In a talk with the moderator, the writer Rhianna Ilube, Nkonde recalled that in Brownsville – a poor, predominantly black New York neighbourhood with historically high rates of crime and violence – a private landlord in social housing wanted to exchange keys for facial recognition software, so that people either accept surveillance or lose their homes. The case echoes wider concerns about a lack of awareness of racism. Nkonde thinks that white people must be able to cope with the discomfort of talking about race, with the countervailing pressures and their lack of cultural preparation, or simply with the risk of getting it wrong. Acting ethically isn't easy if you do not work on it, and many big tech companies just like to crow about their diversity and inclusion efforts, disclosing diversity goals and offering courses meant to reduce bias. However, there is a high level of racial discrimination in the tech sector, and specifically in Silicon Valley – at best colour-blindness, said Nkonde, since many believe that racial classification does not limit a person's opportunities within society. This ignores the economic and social obstacles that prevent full individual development and participation, limiting freedom and equality and excluding marginalized and disadvantaged groups from political, economic and social organization. Nkonde concluded her keynote by stressing that we need to empower minorities, providing tools that allow them to overcome socio-economic obstacles autonomously and to participate fully in society. It is about sharing power and taking into consideration people's unconscious biases, starting, for example, with those designing the technology.
The closing panel, “ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism”, discussed the effects of a tool shown to be anything but neutral: the product of the prevailing socio-economic model.
Dia Kayyali, leader of the Tech and Advocacy program at WITNESS, described how AI is facilitating white supremacy, nationalism, racism and transphobia, recalling the dramatic case of the Rohingya persecution in Myanmar and the oppressive Chinese social scoring and surveillance systems. Pointing out critical aspects, the researcher reported the case of YouTube's anti-extremism algorithm, which removed thousands of videos documenting atrocities in Syria in an effort to purge hate speech and propaganda from the platform. The algorithm was trained to automatically flag and eliminate content that potentially breached the guidelines, and it ended up deleting documents relevant to the prosecution of war crimes. Once again, the inability to contextualize leads to severe risks in the way machines operate and make decisions. Likewise, by applying general parameters without considering specificities and the complex concept of identity, Facebook imposed new policies in 2015 that arbitrarily exposed to risk drag queens, trans people and other users who were not using their legal names for safety and privacy reasons, including protection from domestic violence and stalking.
Os Keyes, researcher on gender, tech and (counter)power, argued that AI is not the problem but the symptom: the problem is the structures creating AI. We live in an environment where a few highly wealthy people and companies rule everything, and we have bias in AI and tech because their development is driven by exactly those individuals. To fix AI we have to change the requirements and expectations around it; we can fight for AI based on explainability and transparency, but if we strive to fix AI without looking at the wider picture, in ten years the same debate will arise over another technology. Keyes considered that AI technology has been discriminatory, racialized and gendered from its very beginning, because society is capitalist, racist, homo-transphobic and misogynistic. The question to pose is how we start building spaces that are prefigurative, constructed on the values we want a wider society to embrace.
As Tatiana Bazzichelli, founder and curator of the Disruption Network Lab, pointed out while moderating this panel, the problem of bias in algorithms is related to several major “bias traps” that algorithm-based prediction systems fail to escape. The real aspect to discuss is the fact that AI is political – not just because of the question of what is to be done with it, but because of the political tendencies of the technology itself.
In his analysis of the political effects of AI, Dan McQuillan, Lecturer in Creative and Social Computing at the University of London, underlined that while the reform of AI is endlessly discussed, there seems to be no serious attempt to question whether we should be using it at all. We need to think collectively about ways out, learning from and with each other rather than relying on machine learning. He suggests countering the thoughtlessness of AI with practices of solidarity, self-management and collective care: by bringing the perspective of marginalised groups to the core of AI practice, it is possible to build a new society within the old, based on social autonomy.
What McQuillan calls “AI realism” appears close to the far-right perspective, as it trivialises complexity and naturalises inequalities. Learning through AI indeed involves reductive simplifications, and simplifying social problems to matters of exclusion is the politics of the populist and fascist right. McQuillan suggests taking guidance from the feminist and decolonial technology studies that have cast doubt on our ideas of objectivity and neutrality. An antifascist AI, he explains, would involve some kind of people's councils, putting the perspective of marginalised groups at the core of AI practice and transforming machine learning into a form of critical pedagogy.
Pic 7: Dia Kayyali, Os Keyes, Dan McQuillan and Tatiana Bazzichelli during the panel “ON THE POLITICS OF AI: Fighting Injustice & Automatic Supremacism”
We see increasing investment in AI, machine learning and robots. Automated decision-making informed by algorithms is already a predominant reality, whose range of applications has broadened to almost all aspects of life. Current ethical debates about the consequences of automation focus on the rights of individuals and marginalized groups. However, algorithmic processes also generate a collective impact, which can only partially be addressed at the level of individual rights, as it is the result of a collective cultural legacy. A society soaked in racial and sexual discrimination will replicate it inside its technology.
Moreover, when it comes to surveillance technology and face recognition software, existing ethical and legal criteria appear to be ineffective, and the lack of standards around their use and sharing only benefits their intrusive and discriminatory nature.
While building alternatives, we need to consider inclusion and diversity: if more brown and black people were involved in the building and making of these systems, there would be less bias. But this is not enough. Automated systems mostly try to identify and predict risk, and risk is defined according to cultural parameters that reflect the historical, social and political milieu, to give answers that fit a certain point of view and make decisions. What we are and where we are as a collective, what we have achieved and what we still lack culturally, is what goes into software that will make those same decisions in the future. In such a context, a diverse team within a discriminatory, conflictual society might find ways to flush the problem of bias away in one place, but it will resurface somewhere else.
The truth is that automated discrimination, racism and sexism are built into tech infrastructures. A new generation of start-ups is fulfilling authoritarian needs, commercialising AI technologies and automating biases based on skin colour, ethnicity, sexual orientation and identity. They develop censored search engines and platforms for authoritarian governments and dictators, and refine high-tech military weapons, training them with facial recognition on millions of people without their knowledge. Governments and corporations are developing technology in ways that threaten civil liberties and human rights. It is not hard to imagine the impact of tools for robotic gender recognition in countries where non-white, non-male and non-binary individuals are discriminated against. Bathrooms and changing rooms that open only after AI gender detection, or cars whose engines start only if a man is driving, are to be expected. Those who are not gender conforming, who do not fit traditional gender structures, will end up being systematically blocked and discriminated against.
Open source, transparency and diversity alone will not defeat colour-blind attitudes, reactionary backlashes, monopolies, enforced conformity and cultural oppression by design. As was discussed at the conference, using algorithms to label people by sexual identity or ethnicity has become easy and common. If you build a technology able to catalogue people by ethnicity or sexual identity, someone will exploit it to repress those ethnicities or identities, as China shows.
In this sense, no better facial recognition is possible, no mass-surveillance technology is safe, and attempts at building good tech of this kind will continue to fail. To tackle bias, discrimination and harm in AI, we have to integrate research on and development of technology with the humanities and social sciences, consciously deciding to create a society in which everybody can participate in the organisation of our common future.
Curated by Tatiana Bazzichelli and developed in cooperation with Transparency International, this Disruption Network Lab conference was the second of the 2019 series “The Art of Exposing Injustice”.
More info on the conference, its speakers and themes can be found here: https://www.disruptionlab.org/ai-traps
The videos of the conference are on YouTube, and the Disruption Network Lab is also on Twitter and Facebook.
To follow the Disruption Network Lab, sign up for its newsletter and get informed about its conferences, ongoing research and projects. The next Disruption Network Lab event, “Citizens of Evidence”, is planned for September 20-21 at Kunstquartier Bethanien, Berlin. Make sure you don't miss it!
Photo credits: Maria Silvano for Disruption Network Lab
The leaking of the Panama Papers in 2015 (followed later by the Bahamas and the Paradise Papers) exposed the extent to which part of the world's richest elite has a consolidated habit of avoiding taxes. The leaks from the Panamanian global law firm Mossack Fonseca uncovered the illegal assets and murky fiscal dealings of hundreds of powerful individuals and corporations, providing detailed information on their bank accounts and shell companies.
Two years after the release of the Panama Papers, the 15th Disruption Network Lab conference, “DARK HAVENS: Confronting Hidden Money and Power”, was held at Kunstquartier Bethanien in Berlin on April 5-6, 2019. It brought together researchers and individuals who were part of global investigations and took severe personal risks to expose hidden money and power structures. This time, the Disruption Network Lab focused on secrecy, financial corruption and tax havens, to identify informational, political, technological and artistic countermeasures.
Tax evasion is a colossal and complicated issue. Several unexpected countries that offer comparatively low corporate tax rates and protect bank secrecy can indeed be included within a broader definition of a tax haven. Many receive information about assets and income held abroad but do not share information about what happens domestically with outside authorities, to such an extent that the Caymans and the Bahamas are in some ways far less permissive than US states such as Delaware and Nevada, where shell companies are very easy to open and bank secrecy is essential.
Author and researcher Nicholas Shaxson opened the conference by describing an articulated system that contributes to global inequalities and shifts wealth from poor to rich. Transparency International's Senior Policy Advisor Maíra Martini described the role of banks in cross-border corruption cases, presenting schemes connecting shell companies, multiple offshore bank accounts and money laundering. The panel moderated by Simon Shuster discussed why, despite the global outrage caused by the Papers, the practice of billionaires and corporations stashing their cash in tax havens is still common.
Mossack Fonseca was indeed just one of many providers in Panama's offshore industry. And Panama is just one of many tax havens where complacent governments guarantee international investors that they will neither regulate nor prosecute their conduct, nor share information about their activities.
EU institutions turn a blind eye to billions of euros' worth of wealth that disappears across Europe, not always out of sight of local tax authorities. MEPs recently stated that seven member states (Belgium, Cyprus, Hungary, Ireland, Luxembourg, Malta and the Netherlands) “display traits of a tax haven and facilitate aggressive tax planning”. Tax avoidance is obviously not limited to exotic, illegal corporate offshore activities.
The leaks confirmed that several British satellite havens (e.g. the British Virgin Islands, the Cayman Islands and the Channel Islands) and the City of London are closely linked through commercial and legal ties, with high chances that dark money flows undetected through the UK's overseas territories and crown dependencies. As many conference participants pointed out, Brexit can only worsen this situation.
The political and economic ramifications of the leaks suggest that such a system is hard to disrupt, since it guarantees that the very rich and powerful elite of the world retain power. 50% of the wealth held in tax havens belongs to households with more than USD 50 million in net wealth: only ultra-high-net-worth individuals can afford activities in these havens, which carry very high fees – although the fees are substantially lower than the taxes owed.
Reporters Ryan Gallagher and Friedrich Lindenberg from the Organized Crime and Corruption Reporting Project (OCCRP) discussed the ethics of massive data leaks, security and secrecy in juxtaposition to openness and transparency, as well as source protection and collective mobilization in the analysis of the material.
Considering the amount of leaked data (16.8 million confidential financial documents about offshore investments over three years), collective mobilization is necessary to analyse it all and uncover injustices. By these means, people can push companies and private individuals to end the systematic tax evasion that lies at the heart of the global economic system.
How, then, can we find a way in? It is important to share information, also creatively, with tools like “The Offshore Tour Operator”, a GPS prototype presented during the conference that guides the user's walk through the Panama Papers database, in search of physical traces of offshore banking in the city landscape. A group of twenty people saw the tool in action, guided through Berlin by two members of RYBN.ORG, the Paris-based collective founded in 1999 that created the project.
Projects and online platforms like this could be part of the solution. They are open not just to law enforcement officials and activists but also to individuals, tax experts and reporters, and we should all enhance access and participation.
In 2018, the net wealth of 26 billionaires equalled that of the poorest 3.8 billion people of the world (Oxfam). This extreme disparity accelerates year by year, and it counts only official capital and investments, while a whole universe of dark money stored in tax havens is impossible to calculate. The impact of such private corruption on fundamental public services like hospitals and schools is devastating. It is an inequality comparable to that of Versailles before the French Revolution, as the documentary “Panama Papers”, which had its German premiere during the Disruption Network Lab conference, suggests.
The documentary shows how millions of files were leaked by the anonymous whistleblower John Doe to Süddeutsche Zeitung journalists Frederik Obermaier and Bastian Obermayer, who shared them with the Washington-based International Consortium of Investigative Journalists (ICIJ) and coordinated a worldwide investigation, setting up an incredible enterprise.
Obermaier described his choice to open the information to other colleagues as the only possible way of analysing it, considering that a single person would need 32 years to get through 1.4 terabytes of leaked data. The efforts of 376 journalists from 76 countries, working in secret for two years until the simultaneous publication in April 2016, were awarded the Pulitzer Prize.
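The "32 years" figure is plausible as a back-of-envelope estimate. In the sketch below, the document count is the widely reported size of the Panama Papers leak, while the reading rate is a purely hypothetical assumption for illustration:

```python
# Rough consistency check of the "32 years for one person" claim.
leak_documents = 11_500_000   # widely reported document count of the Panama Papers leak
docs_per_day = 1_000          # hypothetical: one person skims 1,000 documents a day
days_per_year = 365           # reading every single day, no breaks

years_needed = leak_documents / (docs_per_day * days_per_year)
# years_needed comes out to roughly 31.5, in line with the cited figure.
```

Even under this generous reading rate, a lone analyst would need a working lifetime – which is exactly why the investigation had to be collective.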
Following the Papers' release, more than a thousand official investigations were opened and dozens of criminal cases have been initiated. This activity has brought more than USD 1.2 billion back onshore, as governments around the world have recouped money in fines and back taxes (UK 253 million, Germany 150 million). Participation works!
However, whistleblowers and investigative journalists working on tax avoidance and international fraud are criminalised and even killed. Obermaier reported that 17% of the countries involved in the Papers have seen a backlash against journalists who covered the investigation, instead of offering them effective legal protections.
To reverse this trend, the conference focused on the personal impact of investigative journalism and whistleblowing for those who expose offshore corruption and tax havens. The stories of women suffering the consequences of their anti-corruption activities were presented in a panel moderated by Michael Hornsby, communications officer at Transparency International.
The video “Daphne Caruana Galizia. Anti-Corruption Award Winner 2018”, directed by David Velduque, opened the discussion with an interview with Paul Caruana Galizia, son of Daphne, the Maltese journalist assassinated in 2017 after she uncovered a thread of hidden connections linking major multinational deals and global money-laundering operations to the heart of Malta's government.
Exposing injustice had consequences for the Turkish journalist Pelin Ünker, who – as she recounted – was sentenced to 13 months in prison for writing about the dodgy dealings of the former Turkish prime minister Binali Yildirim and his sons, who were found to be stashing cash in a Maltese shell company. (Update: one month after the conference, on May 6th, Pelin Ünker's prison sentence was overturned by an appeal court, although she still has to pay a fine.)
The experiences of Stéphanie Gibaud, a French whistleblower who revealed tax evasion and fraud by the investment bank UBS, and Khadija Ismayilova, an Azerbaijani investigative journalist and radio host well known for her reports on corruption in Azerbaijan, completed the panel. Due to a travel ban, the latter could only take part in the Disruption Network Lab conference via a video, in which she described her case: trumped-up accusations, imprisonment, 3.5 years on probation and a 2-year ban on professional activity. As she recounted, she has been spied on and blackmailed by her government for years, and she is still not free.
The importance of collaborative networks of investigative journalists was something that all those taking part in the conference mentioned. Thanks to organisations like OCCRP, which connects 45 non-profit investigative centers in 34 countries, journalists and activists can work together to turn the tables on corruption and build greater accountability by exposing the abuse of power at the expense of the people.
Transparency could be built on open platforms for data sharing and on technological tools able to ensure access to relevant information, control and participation, fostering accountability. The documents are still available and they still have a lot to say: a collective mobilization could support the work of investigative journalists and oblige authorities to act.
Estimates put offshore financial wealth at USD 7.6 trillion, or 8% of global household financial wealth (Zucman, 2015); others indicate, though, that the super-rich might hold up to USD 32 trillion in offshore havens, excluding non-financial assets such as real estate, gold and other undetectable investments (Henry, 2016). Our economic systems are hostage to these numbers. Banks hold massive amounts of loans in tax-haven-based businesses, while a small economic elite makes and escapes the rules, undermining democracies via financial deregulation and feeding social injustice.
Financial secrecy is a key facilitator of money laundering, corruption and financial crime. For too long it has been clear that corporations simply shift profits to places where taxes are low, whilst ultrawealthy households hide dark money in a handful of tax shelters to avoid paying their fair share. What benefit, then, comes from preserving bank secrecy, considering that it only increases inequalities and violations of human and social rights?
Curated by Tatiana Bazzichelli and developed in cooperation with the Berlin-based International Secretariat of Transparency International, this Disruption Network Lab-conference was the first of the 2019 series “The Art of Exposing Injustice”.
Info about the 15th Disruption Network Lab conference, all its speakers and themes is available here: https://www.disruptionlab.org/dark-havens
The videos of the conference are on YouTube.
To follow the Disruption Network Lab, sign up for its newsletter and get informed about its conferences, ongoing research and projects. The next Disruption Network Lab event is planned for June. Make sure you don't miss it!
The Disruption Network Lab is also on Twitter and Facebook.
Photo credits: Maria Silvano for Disruption Network Lab
On the 7th of September, the Disruption Network Lab opened its 14th conference in Berlin, entitled “INFILTRATION: Challenging Supremacism“. The two-day event was a journey inside right-wing extremism and supremacist ideology to provoke direct change, the second event in the Misinformation Ecosystems series that began in May. In the Kunstquartier Bethanien, journalists, activists, researchers and infiltrators had the chance to discuss the increasing presence of movements that oppose immigration, multiculturalism and political correctness, sharing their experiences and proposing a constructive critical approach, based on the motivation of understanding the current debates in society as well as transforming mere opposition into a concrete path for inspirational change.
“How can you hate me when you don’t even know me?” With this question Daryl Davis tries to crumble the wall of ignorance and fear that he believes to be the basis of racial hatred. This 65-year-old author, activist and blues man, who played for decades with Chuck Berry, Jerry Lee Lewis and B.B. King, has spent 35 years studying race relations and befriending members of the Ku Klux Klan to turn them away from racism. In the context of increased supremacist ideologies and right-wing extremism, the Disruption Network Lab invited Davis to speak about racism and his interactions with individuals holding racist beliefs.
Growing up, Davis lived a privileged life as the son of a U.S. Foreign Service officer, travelling around the world and studying in an international context surrounded by children of other Foreign Service workers. His first shocking encounter with racism occurred when he was 10 years old, in Massachusetts in 1968. He was marching in a parade, carrying the US flag in front of his scouting group, as people yelled racial epithets and threw rocks and bottles at him. His parents later explained that those people were targeting him alone because of the color of his skin. People who knew absolutely nothing about him or his life wanted to inflict pain upon him for no other reason than that. Because of this hateful reaction from so many white spectators along the route, Davis started asking himself that fateful question.
Ignorance causes fear, and the theses of supremacist and racist groups are built on these two components. Many years ago Davis decided to sit with them and listen to their point of view, contradicting their falsehoods through dialogue. Davis is convinced that if we do not fight ignorance it will escalate to destruction: “ignorance breeds fear; fear breeds hatred; hatred breeds destruction,” as he previously stated. So, when someone says he thinks white people are superior, Davis faces them and answers: “we are equal.” On this basis, Davis befriended hundreds of KKK members and convinced them to rethink their choices. According to the media, he has persuaded more than two hundred of them to throw away their hoods and robes, their stereotypes and beliefs. His activity became national news as he befriended the KKK member Roger Kelly, and CNN broadcast a story on their unusual relationship. When they first met, Kelly was “Maryland’s Grand Dragon”. Kelly didn’t know Davis was a black man and agreed to meet him. During their first meeting he spewed a lot of stereotypes, but – as Davis narrates – by the end of the evening they could agree on a few topics. The Grand Dragon told Davis they would never agree on racial issues; he said his Klan views on race and segregation were “cemented.” They continued to meet and converse about difficult and controversial matters for a long period: Kelly would visit Davis’s house and Davis would go to KKK rallies. It took a few years, but Kelly’s cemented beliefs grew weaker, until he decided to quit the Klan and run in local elections. He had meanwhile become “Imperial Wizard” – that is, national leader of the Klan.
During his keynote Davis explained that his search for the answer to his question began one night in 1983. After he had played in a country music bar, a white man approached him and offered him a drink. The man later told him that it was the first time he had ever sat down and had a drink with a black man, because he was a member of the Ku Klux Klan. Davis thought at first that the man was joking, but he wasn’t. The bluesman decided to talk to him, focusing on the fact that “they are just human beings,” he says. “I respect these people when they sit, talk and listen. It’s just about difference of opinion. If you talk with them you can find things in common.” Some might object to Daryl Davis that Fascism, racism and supremacism cannot be considered opinions, that they are crimes and that normalizing their cult is dangerous. But Davis prefers dialogue to posturing and fights. Davis believes in addressing ignorance through communication and education, to ease fear and prevent destruction. His efforts at dialogue are represented by his collection of hoods and robes from former Klan members he has befriended over time. Davis thinks that society should give these people a chance to express their views publicly, in order to challenge them and force them to actively listen to someone else and, through dialogue, learn something. Many of them are anti-Semites, neo-Nazis, Holocaust deniers and racist white supremacists, but he sees them mostly as victims of ignorance, fearing something that they simply do not know. For these reasons he talks with them, trying to overcome their prejudices. “Always keep the lines of communication with your antagonist open, because when you’re talking, you’re not fighting.”
Davis offers an extreme example of breaking down stereotypes to change the minds of white supremacists. It can be fully understood only in the context of his US-American background and cultural formation. His keynote speech tended to get soaked in clichés, enriched by several declarations of “I am proud of my country” and “my country is great.” Maybe it is just a way to deny right-wing racists a monopoly over patriotic discourse, through a moderate and gentle approach, to disrupt their one-way narrative that conflates patriotism with rabid nationalism, showing them that he has traits in common with them. All in all, his underlying conviction is that racism comes from a lack of personal knowledge of the African American experience and history, for example in music, and from a lack of personal relationships between a certain part of the white community and human beings who are not white. He thinks that by befriending ignorant racists they could relent, change their minds, have a change of heart and learn how to respect others. Davis is conscious of the fact that such an uncommon approach can be considered, at the least, controversial. Many disapprove of it, pointing out that he is offering these people a prominent stage in the national and international spotlight.
Davis’s activity can also be dangerous. In the past thirty years he has been attacked because of what he does. However, he is not afraid of the Klan or of racist groups. He has the cultural tools to face their dialectics, he has a strong identity and he doesn’t want to fight against someone else’s idea of identity. On the contrary, he is convinced that people of all backgrounds should come together, getting along without losing their sense of identity or individual dimension, as no one should be forced to accept an idea. Matt Ornstein has directed a documentary about Daryl Davis, entitled “Accidental Courtesy: Daryl Davis, Race and America,” which the Disruption Network Lab decided to screen during the third day of this 14th conference.
In Germany, individuals and organizations have been mobilizing to prevent neo-Nazis from accessing public platforms and media to spread negationism and racist propaganda. Dialoguing with neo-Nazis and allowing them to exhibit symbols and to present reactionary bigotry and hatred as something normal is not accepted by many people in Germany, and the audience of the conference showed reservations about Davis’s words. Davis replied that his approach pays off. To those who tell him that he is giving racist and violent groups a platform to be normalized and to become part of the public discourse, he reminds them that most of the KKK members he approached decided afterwards to quit the group. It took him courage and dedication: he went to KKK rallies, listened to their hymns, watched them set giant wooden crosses on fire during liturgical rites, witnessing moments of collective frenzy, delirium and hatred.
The documentary also shows Davis’s efforts to dialogue with representatives of the movement Black Lives Matter, which sadly ended in a moment of misunderstanding and dramatic confrontation. Davis and Black Lives Matter have since met again and found a way to work together, approaching the issue of racism and discrimination with two alternative techniques that are not mutually exclusive. However, Davis’s approach remains markedly distant from that of this grassroots movement, which organizes demonstrations and protests.
The audience of the 14th DNL conference challenged Daryl Davis, as his approach of “we are all human beings” looks fragile in days of uncertainty, when extreme-right movements are gaining support through lies and discrimination. Inevitably, the debate after Davis’s speech focused on the cultural shift represented by Donald Trump’s election and what came after it on a global scale. Davis said that in his opinion what is happening works like a bucket of cold water that wakes people up and makes them engage and fight for change, reacting with indignation. Davis explained that in his opinion the #MeToo campaign emerged as a positive consequence of Donald Trump’s election. “Obama was not elected by black people, who are all in all 12% of the US population. Things change if we dialogue together, creating the bases for that change. In this way we can accomplish things that just a few decades ago were thought to be unachievable.”
The panel of September the 7th represented a cross-section of the research being conducted by journalists, researchers and artists currently working on extreme-right movements and the alt-right narrative. By accessing mainstream parties and connecting to moderate-leaning voters, right-wing extremists have managed to exercise a significant influence on social and political discourse, with an impact that is increasingly visible in Europe. The speakers on this first day of the conference reported on their experiences, with a focus on what is happening in their countries of origin: Sweden, Germany and Slovenia. Interconnecting three methodologies of provoking critical reflection within right-wing political groups, the panel reflected on possible strategies of cultural and political change that go beyond mere opposition.
Recalling all this, the moderator Christina Lee, Head of Ambassador Program at Hostwriter, introduced the first panelist of the day, Mattias Gardell, Swedish professor of comparative religion at Uppsala University, who has dedicated part of his studies to religious extremism and religious racism, addressing groups such as the Nation of Islam and its connections to the KKK and other American racist activists, and focusing then on the rise of neo-Paganism and its meaning for the radical right. Among his publications are a book on his encounters with Peter Mangs, “the most effective and successful racist serial killer” Sweden has ever encountered, as he writes, and recent analyses of the “lone wolf” tactic of militant action and of groups from the extreme right and radical Islamism that operate under the radar to avoid being detected and blocked by the authorities.
At the time of the conference, parliamentary elections were about to take place in Sweden. The country was set for political uncertainty after a tight vote in which the far right and other small parties made gains at the expense of the traditional big parties. Gardell reported that in Sweden the political and social climate of intolerance has risen: more than half of the mosques have been assaulted or set on fire, and minorities are continuously under threat. During his speech he focused on how new radical nationalist parties and movements are investing in narratives built on positive images of love and community, nostalgic sentiments and promises to restore the once good society and its original harmony. They are nationalist and ‘identitarian’ groups (as they call themselves), from different nations and united by their belief in separation on the basis of national identities. They often portray themselves as common citizens, worried about the vanishing of their country and identity due to a program of multicultural globalism that aims at substituting national identities and peoples by means of a “white genocide”: a constant sense of paranoia that Gardell perceives even in a country like Sweden, where the economy is flourishing and inequalities mostly hit migrants and the non-white population.
These groups work to spread the idea of a “white nostalgia”, a rhetorical discourse based on their efforts to resurrect a rosy but hazy period when life was better for the white native population of a certain territory. They ambiguously evoke a moment in history, one that has probably never existed, at which national identities were free from external contamination and people were wealthy sovereign citizens. This propaganda feeds into a multifaceted production in music, film and the visual arts. It is not the image of the “angry white men” alone that can contain such a new, fragmented and liquid reality; in fact, explained Gardell, the opposite is true. They often offer a narrative that appears to be built on love rather than on hate: love for their nation, love for their hypothetical race, for their selected groups and communities. It is not an imaginary love; it is a deep, true feeling that they feel and upon which they construct their sense of collectivity.
Gardell underlined the importance of studying everyday Fascism, focusing on its essence made up of ordinary individuals who like football, accompany their children to school, listen to music and therefore have things in common with their neighbors and colleagues, to whom they might appear as moderate people. “You can’t defeat national socialism with garlic. You have to face the fact that Fascism has been supported by millions of ordinary people who considered themselves to be good and decent citizens,” he said. It is necessary to unveil the false representation of a political view evoked through posters of blonde children and pictures of smiling women, designed to embody a bright future and a safe homeland. It is necessary to oppose the program of selective love and restricted solidarity that extreme-right and nationalist groups promote. Therefore, says Gardell, we need to challenge those representations of love for nation, homeland and family built on a language that is impressive-sounding but neither meaningful nor sincere. And not just because “white nostalgia” is a fictional invention, but – more importantly – because in the collective and public sphere, love and solidarity are meaningful only if they are universal and express the value of equality; otherwise they are just synonyms for a privilege from which only a few people can benefit.
At the moment, the ultra-nationalist and radical-right parties assembling the new political scene appear able to influence traditional parties and vast parts of the population using love as a political weapon, affecting the social and political landscape in many countries and succeeding in making those traditional parties copy their agenda. Their recurrent themes exploit the desire for individual social retribution, the tradition of a misogynistic masculinity, and the enhancement of self-government tendencies and isolation in opposition to openness and solidarity: a rhetoric that exploits the presence of non-white minorities and the economic instability of late capitalism, creating hateful propaganda. An intense online activity of manipulation supports the point of view of these ultra-nationalists. As the DNL conference “Hate News” (May 2018) showed, online facts can become irrelevant against a torrent of abuse, memes and hate.
Online and offline, right-wing extremists can easily find supporters in isolated realities, in the countryside or in closed web communities. Consequently, it is important to act locally and stay focused, disrupting their ability to contaminate small groups. Young people are still intrigued by the gruesome and brutal part of the black metal scene, by the fringes of anarcho-fascists and by hooligans, feeding into an international network of neo-Nazi black metal labels and groups. But there are now also presentable faces: new political formations with attractive slogans, supported by glossy music bands and influencers, that are building a narrative of love. Mattias Gardell concluded his intervention by saying that these groups are currently on the rise throughout Europe, while a storm of Fascism is coming again, widely, to hurt exposed individuals and communities, as it is already doing. He is disillusioned and reminded the audience of the Disruption Network Lab that it is necessary to focus and act to defeat it, knowing that it will cost blood.
The analyses of Mattias Gardell introduced the topics covered by the second panelist of the day, Richard Gebhardt, political scientist and journalist, who gave an insight into hooliganism based on his direct research over the last four years in Germany and England, where the Football Lads Alliance has established itself as a complex reality. He focused on the reasons that pushed this violent collective to become a political movement, connected in Germany to the foundation of Pegida (Patriotic Europeans Against the Islamisation of the West) and the increasing popularity of the parliamentary party AfD (Alternative for Germany), which is today – according to opinion polls and surveys – the second biggest political formation in Germany.
Gebhardt’s intervention began with a quotation from the leader of the alt-right party Alternative for Germany: “We will hunt them down.” The parliament member was suggesting that the new members of parliament from his political formation would use their new powers to hold Angela Merkel’s government to account for its refugee policies, “to reclaim their country and people.”
At the time of the conference only a few days had passed since right-wing extremist thugs and neo-Nazis organized an assault on foreigners in the German city of Chemnitz on the 26th and 27th of August, in reaction to a murder that had happened a few days earlier. It was a shocking moment for many Germans. However, in the following days politicians and members of the German government tried to downplay the events, showing that big moderate parties tend to favor a certain kind of narrative. Far-right violence in Germany has indeed risen sharply in recent years. In this context Gebhardt talked about the group “Hooligans gegen Salafisten”, also known as HoGeSa (Hooligans against Salafists), and its origins.
On October 26, 2014, HoGeSa organized its first rally against Salafism in Cologne. The number of participants can be estimated at around 4,000 people: violent hooligans who threw stones, bottles and firecrackers. They gathered at Cologne Central Station, with several speakers and live music, before marching through the streets of Cologne. Xenophobic and neo-Nazi slogans were frequent, and so was the Hitler salute. During the riots dozens of police officers were injured and several police cars were damaged. The police were surprised by the crowd’s inclination to excessive and unpredictable use of force. In that year thousands of refugees were traveling to Germany from conflict-ridden Middle Eastern countries, and HoGeSa was already targeting them.
In the days immediately after the demonstration, leading German politicians and prominent jurists sought to give a lighter representation of the events. The first official comments on the HoGeSa demonstration did not refer to it as a neo-Nazi demonstration, stressing instead that hooligans are “for the most part politically indifferent” and that “they are not political but antisocial. They meet just to fight and drink.” The motto “Fußball ist Fußball und Politik bleibt Politik“ (football is football, politics stays politics) was repeated often but did not sound convincing at all. Hooligans gegen Salafisten undeniably represented a new network of neo-Nazis that had joined forces with football hooligans, nationalists and other right-wing extremists. Thousands of football supporters appeared to have left their football clubs of choice behind in favor of uniting against a common enemy: Islam. They chose the name HoGeSa hoping to gain popular support by invoking the fight against Islamist extremists.
Nonetheless, not every hooligan is a neo-Nazi. The press reported that in Hannover, for example, hooligans and ultras distanced themselves from the HoGeSa demonstration, and that non-fascist ultra groups in Aachen, Dortmund, Duisburg, Braunschweig and Düsseldorf say they have been threatened, chased down and beaten by these Nazi hooligans. Gebhardt recommended to the audience of the conference a book, “Among the Thugs” by Bill Buford, to better understand the dynamics behind hooliganism. The book follows the adventures of Bill, an American writer in England, as he explores the world of soccer hooligans and “the lads”. Setting himself the task of explaining why young men in England riot and pillage in the name of sports fandom, Bill travels deep into a culture of violence both horrific and hilarious.
Gebhardt portrayed these extreme right-wing rioters from HoGeSa as men who claim to be equally distant from conservative and progressive parties, and who want to be seen simply as football supporters not carrying any ideological content, neither of the Left nor of the Right. However, the nonpolitical hooligan is a myth: they are the heirs of a fascist tradition based on prevarication, arrogance and violence, one that plays with the aestheticization of fighting and war and the glorification of militarism and pseudo-heroism. They are not worried citizens; they are thugs “ready for a civil war.” They claim to speak for the silent majority of their community, defending their country and their people. Gebhardt’s work can be seen in the documentary “Inside HoGeSa” (2018) and in his articles online.
The last guest of Friday’s conference was a member of the project ”Janez Janša, Janez Janša and Janez Janša,” who ran for office in Slovenia in the 2018 elections, confronting the leader of the conservative Slovenian Democratic Party (SDS) and former Prime Minister, Janez Janša. “Old names, new faces” was their motto.
In 2007 three artists decided to legally change their name to Janez Janša and joined the right-wing conservative Slovenian Democratic Party (SDS, which was originally a moderate political formation). Janez Janša is also the name of the former Prime Minister of Slovenia. All of a sudden there were at least four Janez Janšas in the country: the three artists and the politician famous for his aggressiveness and contentiousness towards the opposition and anyone who dares to criticize his choices. At the time Janša made a public statement about the artists, and pro-government media started to comment on their name change, criticising their “politicized art”. The artists’ activity inside the SDS served to explore the bureaucratic and political systems of their home country. Their work of investigation is, however, much more complex. It reveals how the perceptual influence of a name can interfere with social dynamics. On both a collective and a subjective dimension, they researched the meaning of identity and dissected how their private lives were affected by such a name change. They showed that names are just a convention, an instrument, but one with a relevant role. As an example, Janša recalled that the Slovenian Democratic Party, despite its name, turned into a radical right-wing conservative party between 2000 and 2005. Nowadays it is engaged in anti-migrant rhetoric and populist right-wing propaganda.
The artist illustrated how, over the last decade, the Janšas responded with art, cleverness and culture to campaigns of hate and propaganda, an approach that is the basis for their political interventions. Their experience was the subject of the 2012 documentary “My Name Is Janez Janša” and is internationally known. Artists and academics are still pondering the meaning of the Janez Janšas’ experience: political critique, artwork, activism, provocation or never-ending joke.
During the concluding debate all panellists agreed that the world they have been in touch with, and which they described at the conference, is mostly a world of men. Women are generally present as an accompaniment and/or an accessory. It is certainly a characteristic of Fascism, described in literature and art, as in the book “Male Fantasies” by Klaus Theweleit, in which the author analyses the fantasies that preoccupied a group of men who played a crucial role in the rise of Nazism – proto-fascists seeking out and reconstructing their images of women. Another aspect that all three guests agreed on is the fact that individuals are massively abstaining from voting and from taking part in public life, since they are increasingly distrustful of traditional media and politicians. European moderate politicians, on the other side, bear the responsibility for a systematic dismantlement of social rights; they justified and supported an unequal economic system of wealth distribution for too many years. Now, scandals and arrogance in public and institutional life do not seem to affect the popularity of extreme-right parties, which ridicule the excess of fair play and the interests of those moderate politicians.
Focusing on new strategies to directly provoke change, the conference on the 8th of September began with a performative conversation between Stewart Home (artist, filmmaker, writer and activist from London) and Florian Cramer (reader in 21st Century Visual Culture at the Willem de Kooning Academy in Rotterdam), moderated by Tatiana Bazzichelli, artistic director of the Disruption Network Lab. The universe of the extreme right seems to have embraced a path of transgression, arrogance and nonconformity, employed to suggest that its members are holders of a new alternative approach to cultural, political and social criticism. What comes out of such a wave of counterculture is an articulated patchwork that flirts with violence, discrimination and authoritarianism.
Bazzichelli asked the audience to question the present-day extreme right’s definition of its political offer as an “alternative,” considering that the issues of transgression and counterculture have been widely developed by the academic and artistic Left, and that experimentation, theorisation and political antagonism have grown together in the left-leaning universe. In such a perspective, “working on something alternative” – explained Bazzichelli – is supposed to be synonymous with creating a strong criticism of media and society through political engagement, art and intellectual effort: an alternative that could make a positive, constructive contribution to the collective socio-economic discourse. Today, words like “infowar” and “alternative” tend instead to be associated with a chaotic far-right countercultural production. On this basis, Bazzichelli introduced the lecture by Stewart Home and Florian Cramer, which investigated whether and to what extent it is possible to affirm that ideas and values drawn from the Left are now being reclaimed and distorted in an extreme-right alternative narrative.
In a historical excursus on art, literature and subcultures, the two speakers focused on the countercultural currents of the 1970s-1990s that used radical performance, viral communication and media hoaxes, and examined the degree to which they may be seen as playbooks for the info warfare of the contemporary extreme right. With their presentation they suggested that it is improper to state that the alt-right has now occupied established leftist countercultural territories; there have been several examples of the parallel development and interpenetration of very opposite points of view over time. Filippo Tommaso Marinetti, father of Futurism and of its manifesto on “War, the World’s Only Hygiene,” mixed anarchist rebellion with violent reaction and later became a fervent supporter of Italian Fascism, which in turn glorified the new Futurist approach. However, Futurism also means sound poetry, since discordant sound had a vital role in Futurist art and politics – an experience that developed into the noise movement, with an influence that reached post-industrial musicians and beyond.
Cramer recalled that Futurism also represents an avant-garde and counterculture of the early 1900s that had similarities with Dadaism. In fact, though Dadaism was anti-war and anti-bourgeois, the two shared a spirit of mockery and provocative performance, mixing distant genres and making massive use of communication, experimental media and magazines. Staying with the beginning of the 20th century, the lecturers recalled the production of the painter Hugo Höppener, known as Fidus, an expression of the Life Reform Movement, linked to both left- and right-leaning political views, who strongly influenced Hitler and Nazism – showing the roots of an alternative counterculture that fed into both the political extreme right and left.
In the 1970s and 1980s, the use of fascist symbols as provocation and transgression was frequent in subcultural production and artistic performances, for example in the punk scene, which notoriously ranged from left-wing to right-wing views as pseudo-fascist camp in post-punk culture turned into actual Fascism. This conscious ambiguity, part of the experimentation, also meant – particularly in the U.S. – leaving space for things that stood in contrast to each other. In the context of US underground culture, the speakers mentioned publications like Re/Search's "Pranks!", on the subject of pranks, obscure music and films, industrial culture and many other experimental topics. Pranks were intended as a form of visionary media manipulation and reality hacking. Among the contributors were artists from the industrial movement, like the controversial Peter Sotos and Boyd Rice, who have today become an established part of the right-wing countercultural movement.
Talking about the present, Cramer and Home also mentioned CasaPound, a neofascist squat and political formation from Rome that adopted the "temporary autonomous zones" described by the anarchist writer Hakim Bey, which redefined the psychogeographical nooks of autonomy – and that appropriated the name of Ezra Pound, a central figure of the early modernist poetry movement.
All this suggests that the so-called alt-right has probably not hijacked counterculture – by deploying tactics of subversive humour and transgression, for example, or through cultural appropriation – since there is a whole history of grey zones and of the presence of both extreme right and left in the avant-garde and in countercultures, as well as overlooked fascist undertones in the various libertarian ideologies that flourished in the underground. Home and Cramer recalled their shared experience in the Luther Blissett project, based on a collective pseudonym used by several artists, performers, activists and squatter collectives in the nineties. The possibility of performing anonymously under a pseudonym gave birth to a mixed production with undefined borders, in a few cases an expression of reactionary drives – an experience that can easily be reconnected to the development of 4chan, the English-language imageboard that was central to the early stage of Anonymous and is today very popular among members of the alt-right scene.
Cramer thus illustrated how Libertarianism can sometimes flip into a reactionary ideology. The same can happen to Anarchism (witness Anarcho-Capitalism) and to Cyberlibertarianism, just as with the subcultures. In the Chaos Computer Club – explained Cramer – there is a strong cyberlibertarian component, but one might also find grey zones where an extreme-right minority can find ways to express itself. Spores of extreme-right and fascist-anarchical degeneration can thus be found in the activities of political and art collectives of the Left and, in this sense, it seems necessary to expose their presence in those grey areas, which could become a context for spreading ambiguous points of view within cultural production.
Marinetti, Pound and Heidegger have a general relevance that cannot be denied. Home and Cramer underlined that, at the moment, nothing of what we see internationally in the extreme-right panorama can be considered culturally relevant. The alt-right is not re-enacting counterculture. This "alternative" of the extreme right consists mostly of a cluster of media outlets producing hate and propaganda within a revisionist narrative. It picks up an old rhetoric of heroic rebellion, arrogance, overbearing masculinity, the mythisation of war and the use of violence, in most cases applying new definitions to old concepts. Home and Cramer concluded that there is no intrinsic value in being transgressive, and that transgression alone cannot be enough to confer quality, because transgression is just a tool. Artists and activists cannot stop experimenting and using that tool to criticise society, building alternatives and being alternative. The moderate approach in an era of political correctness is a way to enchain the Left; moreover, people have the right to hate their condition, their job and the inequalities that affect their lives – a feeling legitimately generated by critical thinking.
The panel of the second day of the conference reflected on the practice of political, journalistic and activist infiltration as a way of better understanding extremist groups. The moderator explained how, on the one side, infiltration maps extremist groups from the inside and, on the other, it analyses how those groups build their networks, spreading both online and offline. The aim is to explore such groups from within, analysing the reasons why people join them and understanding their inner dynamics.
Rebecca Pates, political anthropologist at the University of Leipzig, moderated the discussion and introduced the four guests, commenting that a number of different things can be done when infiltrating. The activities and the achievements can differ, as can the technique, from total concealment to complete openness about the infiltration. Pates suggested that from the inside it is possible to understand, for example, why young people are attracted to groups that from the outside look so angry and violent, and to define the sense of comradeship and belonging that convinces individuals to participate in these movements.
Julia Ebner is a Research Fellow at the Institute for Strategic Dialogue (ISD) and author of the bestselling book The Rage: The Vicious Circle of Islamist and Far-Right Extremism. She opened the panel explaining how, after the terrorist attacks by right-wing extremists in Europe and the US, she decided to gain a better understanding of the world of the far right and its narrative. She infiltrated both online and offline, undercover, with fake identities and avatar accounts, changing her appearance. Her goal was to get into groups that are ideologically very different from one another, such as neo-Nazi groups, old conservative fascist movements and the counter-jihad movement. During her speech she described how she built up a new identity and made the connections necessary for her purpose.
To get in touch with active members she used social media and crowdsourcing platforms available to the extreme right, such as "Gab", the alt-right equivalent of Twitter, and "Wasp Love", a place to date "reformed Christian, confederate, home-schooled, white nationalist, alt-right and sovereign singles." She was asked to send a full account of her genetic ancestry to be accepted, or to share a picture of her skin colour, and underwent voice-chat interviews enquiring into her ideological background or sexual orientation. Ebner entered an alternative universe of disinformation ecosystems and accessed subcultures that interact in parallel as parts of the same bigger network. When asked to justify her freshly created profiles, she could benefit from the fact that many far-right users are removed and banned for what they post. She started frequenting the various tech platforms considered a safe environment for far-right extremism, where users can openly cultivate antisemitic and conspiracy theories and anti-left rhetoric, and coordinate doxing and harassment. In 2016 the writer and researcher joined the English Defence League undercover and went to one of its rallies against what they would call Muslim grooming gangs. A year later she was recruited into the movement Generation Identity, or Identitäre Bewegung, also part of the new European alt-right (alternative right), and was invited to join them in public and private meetings, such as a secret meeting in an Airbnb location in Brixton. On that occasion she sat among 20 white nationalists discussing strategies to launch a British branch of their group, with a manifest focus on optics and media-strategy briefings on how to deal with tough questions from journalists about antisemitism and racism. They discussed their political backgrounds and their selection procedures, aimed at achieving good branding and quality in their membership.
The obsession with appearing as decent citizens in public was, and is, very important at rallies like Charlottesville. Reports attest that far-right groups were concerned about how to dress, and even told some people who were not particularly good looking that they could not join the event as they would not make a good impression. Events like Chemnitz or Charlottesville's "Unite the Right" rally, or the experience of Defend Europe – an illegal far-right ship mission that sought to hamper the rescue of refugees in the Mediterranean in 2017 – represent cross-border collaboration between movements that until a few years ago were not communicating. Such events bring them together on the basis of their lowest common denominator, for the sake of having a bigger impact.
After Ebner wrote articles for the Guardian and the Independent she got backlash from the far right and the English Defence League. Its founder, Tommy Robinson, ended up storming into her office with a cameraman, filming the whole confrontation and live-streaming it to Rebel Media, a far-right news outlet in Canada. Robinson has 300,000 followers, and these channels are very popular too: they gave immediate resonance to the aggression and set off a long chain reaction among other far-right and alt-right news platforms, globally. Her whole life came under scrutiny, and all available data were used to discredit her publicly. The researcher realised how much can be done with online data to intimidate political opponents and critics. Ebner and her colleagues experienced the hate-campaign machine first hand. She noticed that women are attacked and threatened more, a symptom of the widespread anti-feminist and misogynistic culture. It seemed to her that the whole universe was against her activity of infiltration and that she had no supporters. Many different groups and networks were creating a distorted representation of her engagement, and this pushed her to embark on a research project about the interconnectedness of the variegated far-right media galaxy.
With other colleagues Ebner analysed about 5,000 pieces of content, carrying out extensive linguistic analysis and studying interactions with social-media monitoring tools. Thanks to this work the researcher can describe the mainstreaming strategies of the extreme right and how its members try to create compelling and persuasive countercultural campaigns using humour, satire and transgression, and co-opting pop culture. It is an attitude they share with fundamentalist Islamism: they create content that appeals to young people on the Internet, but they also concentrate on traditional media, making sure those pick up on their provocations or fake news, triggering the media to report on them by staging online complaints designed to go viral. Ebner has also started a project in collaboration with the organisation #ichbinhier e.V., discovering that this technique of coordinated interactions often creates the illusion that they represent the majority of users. The research shows something different: 50% of the interactions and hateful comments below the news articles they analysed came from just 5% of all active accounts – a small but very loud minority that now dominates the whole discourse, amplified by bots and media outlets, sometimes Russian ones, staging online psychological operations and hiding extreme-right hate campaigns behind humorous images, jokes and memes.
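The concentration finding above lends itself to a simple calculation: given per-account comment counts, what is the smallest fraction of accounts that produces half of all activity? The sketch below is only an illustration of that arithmetic, with hypothetical data and function names – it is not the actual #ichbinhier tooling:

```python
from collections import Counter

def share_of_accounts_for_comment_share(comments_by_account, target=0.5):
    """Return the fraction of accounts (loudest first) needed to reach
    `target` share of all comments."""
    counts = sorted(comments_by_account.values(), reverse=True)
    total = sum(counts)
    running, needed = 0, 0
    for c in counts:
        running += c
        needed += 1
        if running / total >= target:
            break
    return needed / len(counts)

# Hypothetical data: a small hyperactive minority plus many ordinary users.
activity = Counter({f"acct{i}": 200 for i in range(5)})   # 5 very loud accounts
activity.update({f"user{i}": 10 for i in range(95)})      # 95 ordinary accounts

share = share_of_accounts_for_comment_share(activity, target=0.5)
print(f"{share:.0%} of accounts produce half of all comments")  # → 5%
```

With this toy distribution, 5% of the accounts account for over half of the comments – the same kind of skew the researchers reported.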
Memetic warfare and gamification are two very relevant aspects, as frequent as quotations from the films The Matrix and Fight Club, with the rhetoric of "red-pilling" to see the truth. Most of the accounts active in this effort coordinated posts and hashtags so that their content would get prioritised in the feeds and create viral campaigns, striving to dominate the whole social-media discourse. They have very clear hierarchies, which can also be ascribed to the gaming dimension. Hateful comments and negative interactions appear in a flow, quickly reaching the top section thanks to a high degree of coordination. Generation Identity is known for sharing content according to the tactics of the so-called media guerrilla warfare manual, based on a highly militarised language that describes actions, goals and sniper missions to target and intimidate political opponents by exploiting the media. Everything comes in a very gamified form: they talk about a virtual battlefield and electronic items, where a good performance lets you level up. During the German elections in 2017, members and followers of Reconquista Germanica (an extreme-right channel running on the Discord platform) were quite successful in spreading extreme-right topics and making politicians and media pick up on them; some of their hashtags were listed in the top five trends in the two weeks before the vote. In the meantime, they were evaluating and analysing their activity, celebrating successful "generals" or "soldiers" who were promoted to higher levels. Ebner expressed her concern that this reflects, in real-world practice, what they would do if they managed to establish their own vision and get into power. Since Trump was elected we have seen a growing ecosystem that repeats itself, where the extreme right is certainly reappearing: similar tactics and vocabulary can be spotted among several European far-right groups, in the campaigns of the Italian, German, French, Swedish and Dutch elections.
She underlined how important it is to better understand far-right extremism and its relationship with Islamist extremism, as the two have a lot in common and reinforce each other.
Anti-racist activist and Hope Not Hate researcher Patrik Hermansson reflected on the meaning of radical right-wing practices today, drawing on his own story as an undercover activist inside the international alt-right, published in The International Alternative Right Report. Starting in the fall of 2016 he joined a London-based organisation and then travelled through many other related groups, living a dual life. He described his year of infiltration inside an international secret formation called the London Forum, to enter which you need to be vetted, background-checked, and have someone to let you in. Hermansson works as a researcher for Hope Not Hate, an organisation established to offer a more positive and engaged way of doing anti-Fascism. This 26-year-old man became "Erik", a fascist who had come to London inspired by Brexit and keen to get away from the liberal prejudice of Swedish universities. He entered and investigated the Forum, discovering its members, techniques and goals, until he witnessed the terrible violence in Charlottesville, Virginia. To do so, Hermansson became the Swedish teacher of one of its members and was quickly drawn into the world of the extreme right. From former Tory Party members to famous alt-right influencers, he met people in different countries and different social contexts. The Forum offered a safe space of anonymity that you do not find on social media.
Hermansson described to the conference audience the social aspects animating the group, where members can feel part of a community and make new friends, overcoming political differences until they have nothing else outside it. Conspiracy theories have a relevant role too: Holocaust denial, addressed as "the biggest PR event in history", or chemtrails are an important part of their theorisation. Members feel part of a group bound together by secrets that allow you to see behind the curtain and understand more than the rest of the population. Hermansson pointed out that infiltration is a difficult and morally troubling business: it exploits people's trust. It is justified by the need to expose recruitment techniques and data, but we should not romanticise it, go too far, or be ruthless. Hermansson infiltrated for a purpose – to get a closer picture of right-wing extremism – and decided to expose the top players of the organisation, musicians and influencers. The most effective part of the activity, he said, was the sabotage: infiltration makes people point fingers, paranoia spreads through the movement and things break apart. Hermansson explained that this was a conscious decision, the anti-fascist part of the research. The London Forum is no longer active and people have left it, showing that the method of exposure is quite effective. He found that his activity raised the cost of their recruiting process, which is now much tighter.
The panel was concluded by a member of the Unicorn Riot collective, "a decentralized non-profit media organization of artists and journalists, dedicated to exposing root causes of dynamic social and environmental issues through amplifying stories and exploring sustainable alternatives in the globalized world." The investigative journalist Christopher Schiano presented his work analysing and publishing leaked messages from white supremacist, neo-Nazi and various alt-right fascist groups in the US, followed by an introduction to the DiscordLeaks platform by the developer Heartsucker, an affiliated volunteer with Unicorn Riot. The guests talked about how Unicorn Riot obtained hundreds of thousands of messages from white supremacist and neo-Nazi Discord chat servers after the events in Charlottesville, and decided to organise them into an open archive of far-right activity to allow public scrutiny through data journalism.
Discord is a voice-over-Internet-Protocol (VoIP) application for video gaming communities, offering text, image, video and audio communication between users in chat channels. Through DiscordLeaks, the US non-profit media organisation has exposed hundreds of thousands of chats received from alt-right and far-right servers. Parker was receiving screenshots of real-time communications between alt-right activists involved in planning the Charlottesville rally, and got hold of a "general orders" document along with audio recordings of a planning meeting ahead of the rally. The screenshots then kept coming throughout the following days.
As reported by the Washington Post, Discord allowed the organisers and participants of the rally to convene in private, invite-only threads shrouded in anonymity, with usernames such as "kristall.night" and "WhiteTrash." On a Discord server called "Charlottesville 2.0," they planned everything from car pools, dress code and lodging in Charlottesville to how one might improvise weapons in case of a fight; some suggested using flag poles as makeshift spears or clubs. Many of these things took place. The collective also received internal logs, which enabled it to better see the scope of the plans for the Unite the Right rally. Since its founding, Unicorn Riot has gained relevance among people looking for alternative news sources, principally covering protests with an on-the-ground perspective that many mainstream outlets miss; it was, for example, among the first media outlets to reach and cover the rally in Charlottesville. Through their investigations they explained how the far right tries to recruit new members via Discord, and unveiled the attempts of extremists to look like ordinary Trump supporters, building a victim narrative to insinuate the idea that they are targeted citizens. Some of them support the police, and members of the police force have been exposed for leaking information to far-right members. They exposed the movement Anticom (anti-communist action), active mostly in shitposting, and the group Patriot Front, whose members unite under the motto "we are Americans and we are Fascists."
At the end, the panellists reasoned on the importance of infiltration as a means to study the extreme right and expose its networks and members, its strategies and tactics. It can also help to predict what these groups are about to do, foreseeing their next steps. It means getting in touch with them and entering circles based on comradeship and the exchange of personal experiences. Ebner commented that the use of lies and distortion is the cost of it, wondering, however, what the cost of inaction would be instead. Hermansson reported on the effects of infiltration in terms of the desensitisation he went through, taking part in conversations without reacting. A similar desensitisation process can be observed in memetic warfare.
As part of the Disruption Network Lab thematic series "Misinformation Ecosystems" (2018), this three-day conference concluded the lab's 2018 programme. The series began with a focus on hate news, manipulators, trolls and influencers, investigating online opinion manipulation and strategic hate speech within a growing international misinformation ecosystem, and their impact on civil rights. HATE NEWS focused on the issue of opinion manipulation, from the interconnections of traditional and online media to behavioural profiling in the Cambridge Analytica debate. This second conference took the process further by pointing to specific research and investigations illustrating how a process has clearly been set in motion, whereby the radical right is working at an international level, building cross-national connections and establishing global cooperation.
It is not just Steve Bannon and a galaxy of media outlets and online platforms pushing for a new authoritarian turn, based on discrimination and ultra-nationalism and with factual impact on political systems. At a grassroots level there are local networks and formations able to unite different realities and backgrounds, melted together under new trendy labels, slogans and influencers: a new scene carefully designed to appeal to a moderate-leaning electorate, where you can find hooligans, hipsters, neo-Nazis and politicians dressed in suits and ties, all striving to appear like conscientious citizens and decent members of society, part of a new generation of activists. Beyond the facade, however, the majority of far-right groups show themselves to be against an open, multicultural society and against inter-religious and inter-cultural togetherness. They play on economic uncertainty, fear, anger and resentment to spread hate, attack opponents and discriminate against minorities, often through a meme-driven alt-right humour designed to cover their racist propaganda and fascist drives with dark hilarity. Jokes are used by public figures and influencers to promote misogyny, homophobia, a distorted idea of masculinity and racism, and to justify unacceptable statements. Too often mainstream media and newspapers pick up staged news from such misinformation ecosystems, reinforcing a revisionist narrative built on manipulated facts and interactions, arrogance and violence.
Conspiratorial and paranoid thinking acts like a catalyst, provoking participation and fascinating individuals who want to become warriors and custodians of knowledge. Alongside the image of the angry white man there is a whole narrative of love and solidarity for their chosen group, the community they decide to protect, identified on a utilitarian basis.
Despite what is represented in the media, many speakers at the conference pointed out that there is nothing alternative or innovative in what these groups are offering. Yet mainstream parties and media tend to follow their reactionary narrative, reinforcing the idea that it is competitive. The guests of the Disruption Network Lab came from Africa, Europe, North and South America and exposed an intertwined scenario of radical-right transnationalism. The direct engagement of activists who decide to infiltrate, together with the work of researchers, journalists and artists, allowed for a clearer image of what is going on at a global as well as a local scale, and of how this process can be interrupted by working actively within civil society. Sabotage and exposure are useful instruments for disrupting and unveiling strategies aimed at rolling back a hundred years of human rights achievements. Thanks go to Tatiana Bazzichelli and the Disruption Network Lab team, who offered a stage for learning about constructive practices that can be activated in order to change the course of things.
On 25 May, the day the General Data Protection Regulation (GDPR) went into effect in Europe, the Disruption Network Lab opened its 13th conference in Berlin, entitled "HATE NEWS: Manipulators, Trolls & Influencers". The two-day event looked into the consequences of online opinion manipulation and strategic hate speech, and investigated the technological responses to these phenomena in the context of the battle for civil rights.
The conference began with Jo Havemann presenting #DefyHateNow, a campaign by the r0g_agency for open culture and critical transformation, a community peace-building initiative aimed at combating online hate speech and mitigating incitement to offline violence in South Sudan. More than ten years ago the bulk of African countries' online ecosystem consisted of just a few million users, whilst today's landscape is far different. The project started as a response to the way social media was used to feed the conflicts that exploded in the country in 2013 and 2016. It is a call to mobilise individuals and communities for civic action against hate speech and social-media incitement to violence in South Sudan. Its latest initiative is the music video #Thinkbe4uclick, a new awareness campaign specifically targeted at young people.
In Africa, hate campaigns and manipulation techniques have been causing serious consequences for much longer than a decade. The work of #DefyHateNow counters a global challenge with local solutions, suggesting that what is perceived in Europe and the US as a new problem should instead be considered in its global dimension. The same point was made by the keynote speaker of the day, Nanjala Nyabola, a writer and political analyst based in Nairobi. Focusing on social media and politics in the digital age, she described Kenya's recent history as widely instructive, warning that manipulation and rumours can not only twist or influence election results, but also drive conflicts and feed violence.
The reliance on rumours and fake news was the principal cause of the horrifying escalation of violence following the Kenyan 2007 general election. More than 1,000 people were killed and 650,000 displaced in a crisis triggered by accusations of election fraud. The violence unfolded fast, with the police's brutal use of force against non-violent protesters causing most of the fatalities. The outbreak was largely blamed on ethnic clashes inflamed by hate speech: revenge attacks for massacres supposedly carried out against ethnic groups in remote areas of the country – unverified rumours about events that had never taken place. Misinformation and hate were broadcast over local vernacular radio stations and through SMS campaigns, inciting violence and turning different groups against one another.
The general election in 2013 was relatively peaceful. However, ethnic tensions continued to grow across the country, and ethnically driven political intolerance appeared increasingly on social media, used mainly by young Kenyans. Online manipulation and disinformation proliferated again on social media before and after the 2017 general election campaign.
Nyabola explained that nowadays the media industry in Kenya is more lucrative than in most other African regions, which could be considered a positive sign suggesting that the Kenyan press is free. In reality, a majority of media companies depend heavily on government advertising revenue, which the authorities in turn use as leverage to censor antagonistic coverage. It should be no wonder that Kenyans appear more reliant on rumours now than in 2007: people are increasingly distrustful of traditional media. The high risk of manipulation through media campaigns, and a de facto duopoly on the distribution of news, have led to social media being used as the principal reliable source of information. It is still too early to have a clear image of the 2017 election in terms of interference affecting its results, but Nyabola directly experienced how the misinformation and manipulation present on social media were a contributing factor feeding ethnic angst.
Rafiki, the innovative Kenyan film presented at the Cannes Film Festival, is now the subject of a censorship controversy over its lesbian storyline. Nyabola is one of the African voices expressing support for the distribution of the movie. "As something new and unexpected this movie might make certain people within the country feel uncomfortable," she said, "but it cannot be considered a vehicle for hate, promoting homosexuality in violation of moral values." It is essential not to confuse actual hate speech with something that is merely labelled hate speech in order to discredit it. Hate speech can be defined as expression intended to offend, insult, intimidate or threaten an individual or group based on an attribute such as sexual orientation, religion, colour, gender or disability. The writer from Nairobi reminded the audience that, when we talk about hate speech, it is important to focus both on how it makes people feel and on what it aims to accomplish. We should always remember that we regulate hate speech because it creates the conditions in which social, political and economic violence is fed, affecting the way we think about groups and individuals – and not just because it is offensive.
Nyabola indicated a few key factors that she considers capable of increasing the impact of hate speech and manipulation on social media. Firstly, information travels fast and can remain insulated. Whilst Twitter is a highly public space where content and comments flow freely, Facebook is a platform where you connect with a smaller group of people, mostly friends, and WhatsApp is based on groups limited to a small number of contacts. The smaller the sphere of interaction, the harder it is for fact-checkers to see when and where rumours and hate speech go viral: it is difficult to find and stop them, and their impact can be calculated only once they have already spread quickly and widely. The challenges that distinguish online hate speech and manipulation from their offline counterparts also relate to the way information now moves among people who support each other, without a counterpart and without anyone being held to account.
Nowadays Kenya boasts an increasingly technological population, though not all rural areas have yet been able to benefit from the country being one of the most connected in sub-Saharan Africa. In this context, reports indicate that since 2013 the British consulting firm Cambridge Analytica had been working in the country to interfere with elections, organising conventions and orchestrating campaigns to sway the electorate away from specific candidates. It should be no surprise that the reach of Cambridge Analytica extended well beyond the United Kingdom and the USA. In her speech, Nyabola expressed her frustration at seeing western media focus their attention on developing countries only when they fear a threat of violence coming from there, ignoring that the rest of the world is also a place of innovation and decision-making.
Kenya has one of the highest rates of Internet penetration in Africa, with millions of active Kenyan Facebook and Twitter accounts. People using social media are a growing minority, and they are learning how to defeat misinformation and manipulation; for them social media can become an instrument of social change. In the period of last year's election, none of the main networks covered news related to female candidates until the campaigns circulating on social media could no longer be ignored. These platforms are now a formidable tool in Kenya for mobilising civil society to achieve social, gender and economic equality. This positive outlook is, however, hindered by the reality of control and manipulation.
Most countries currently have no effective legal regulation to safeguard their citizens online. The GDPR now in force in the EU obliges publishers and companies to comply with stricter rules on privacy and data harvesting within its geographic area. In Africa, national institutions are weaker, and self-regulation is often left in the hands of private companies; citizens are therefore even more vulnerable to manipulation and strategic hate speech. In Kenya, which still does not have an effective data protection law, users have been subject to targeted manipulation. “The effects of such a polluted ecosystem of misinformation have affected and changed personal relationships and lives for good,” said the writer.
On social media, without regulation and control, hatred and discrimination can produce devastating consequences. Kenya is just one of many countries experiencing this. Hate speech exploded on Facebook at the start of the Rohingya crisis in Myanmar; Nyabola noted that, as in many other cases, the problem was plain for all to see, yet the company was unable to combat the spread of ethnically based discrimination and hate speech.
Moving on from the interconnections of traditional and online media in the Kenyan misinformation ecosystem, the second part of the day focused on the privacy implications of behavioural profiling on social media, covering the Cambridge Analytica controversy. Friday's panel opened with the analysis of David Carroll, best known as the professor who filed a lawsuit against Cambridge Analytica in the UK to gain a better understanding of what data the company had collected about him and for what purpose. When he got access to his voter file from the 2016 U.S. election, he realised the company had been secretly profiling him. Carroll was the first person to receive and publish his file, discovering that Cambridge Analytica held personal data on the vast majority of registered voters in the US. He then requested precise details on how these data were obtained, processed and used. After the British consulting firm refused to disclose them, he decided to pursue a court case.
As Carroll is a U.S. citizen, Cambridge Analytica assumed that he had no recourse under either federal U.S. legislation or UK data protection legislation. They were wrong. The legal challenge, a British court case centred on Cambridge Analytica's compliance with the UK Data Protection Act 1998, could proceed because Carroll's data had been processed in the United Kingdom. The company filed for bankruptcy not long after it was revealed that it had used the data of 87 million Facebook users to profile and manipulate them, likely in contravention of UK law. Professor Carroll could never have imagined that his initiative would help bring down the company.
Cambridge Analytica, working with an election management firm called SCL Group, appears to have been a master propaganda machine, able to manipulate voters through combined psychometric data, exploiting above all Facebook likes and interactions. Its technique disguised attempts at political manipulation by integrating them seamlessly into the online environment.
Carroll talked about how technology and data were used for the first time to influence elections and popular votes in countries like the USA and the UK, whereas campaign promoters had long been hired to act this way on an international scale elsewhere. In Carroll's opinion Cambridge Analytica was an ‘oil spill’ moment: an epiphany, a sudden deep understanding of what was happening on a broader scale. It made people aware of the threat to their privacy and of the fact that many other companies harvest data.
Since 2012 Facebook and Google have been assigning a DoubleClick ID to users, attaching it to their accounts, de-anonymising them and tracking their every action. It is an ad tracker that gives companies and advertisers the power to measure impressions and interactions with their campaigns. Integrated with cookies, it also allows third-party platforms to serve retargeting ads after users visit external websites, accomplishing targeted profiling at several levels. This is how the AdTech industry works. Carroll gave a broad description of how insidious such a technique can be: when users download an app to their smartphone to help with sport and staying healthy, it may well be the product of a health insurer or a bank, built to collect data on potential customers and to profile and acquire knowledge about individuals and groups. Ordinary users have no idea what is hidden beneath the surface of their apps. Thousands of companies synchronise and exchange data, collected in a plethora of ways and used to shape the messages users see, building up tailor-made propaganda that would not be recognisable, for example, as political advertising. The mechanism works in several ways and for different purposes: to sell a product, to sell a brand or to sell a politician.
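The linking mechanism Carroll describes can be illustrated with a minimal sketch. All the site names, topics and IDs below are invented for illustration; the point is only that a single tracker ID, set once in the browser, lets otherwise unrelated websites be stitched into one behavioural profile.

```python
from collections import defaultdict

# Purely illustrative events: each website reports a visit carrying the same
# third-party tracker ID stored in the user's browser.
events = [
    {"tracker_id": "u-301", "site": "news.example",    "topic": "politics"},
    {"tracker_id": "u-301", "site": "fitness.example", "topic": "running"},
    {"tracker_id": "u-301", "site": "bank.example",    "topic": "loans"},
    {"tracker_id": "u-778", "site": "news.example",    "topic": "sports"},
]

def build_profiles(events):
    """Group events by tracker ID, linking activity across unrelated sites."""
    profiles = defaultdict(list)
    for e in events:
        profiles[e["tracker_id"]].append((e["site"], e["topic"]))
    return dict(profiles)

profiles = build_profiles(events)
# One ID now ties news reading, fitness habits and loan interest together.
print(profiles["u-301"])
```

Retargeting is simply this lookup run by an ad platform: the next site the user visits asks what "u-301" has been doing elsewhere.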
In this context, Professor Carroll welcomed the new European GDPR as a way to improve the veracity of information on the internet and create a safer environment. Carroll explained that the way the AdTech industry handles our data now contaminates the quality of our lives, as individuals and communities, affecting our private sphere and our choices. The GDPR, which will hopefully give consumers more ownership of their data, constitutes a relevant risk for companies that do not take steps to comply. The U.S. professor pointed out how companies want users to believe that they are seriously committed to protecting privacy and that they can resolve every conflict between advertising and data protection. Carroll argued instead that they are merely consolidating their power at an unprecedented rate. Users have never been as exposed as they are today.
Media companies promote the idea that they collect people's data for good purposes and that, so far, this activity cannot be proved harmful. The truth is that these companies cannot even effectively monitor the ads appearing on their platforms. A well-known case is that of YouTube, accused of showing advertisements from multinational companies like Mercedes on channels promoting Nazi and jihadist propaganda, which monetised these ads.
Carroll then focused on the online advertising industry and what he called “the fraud of the AdTech industry.” Economic data and results from this sector are unreliable and manipulated, as thousands of computers load ads and make real money by communicating with each other. This fraud, built on bots and easy clicks tailor-made for a user, now cheats the economy out of an estimated 11 billion dollars a year. The industry enabled this to happen, and the digital advertising ecosystem has evolved into an unsafe and collusive environment.
Alphabet and Facebook dominate the advertising business and are responsible for the use of most trackers. Publishers as well as AdTech platforms have the ability to link person-based identifiers by way of login and profile info.
Social scientists have demonstrated that a few Facebook likes can be enough to reveal and accurately predict individual choices and ideas. Basic digital records can thus be used to automatically estimate a wide range of personal attributes and traits that are supposed to belong to the private sphere, such as sexual orientation, religious beliefs or political affiliation. This new potential excited politicians, who asked external companies to harvest data in order to build a predictive model to exploit. Cambridge Analytica's audience-targeting methodology was for several years “export-controlled by the British government”: in the House of Commons it was described as a weapon, employing weapons-grade communications tactics. It is understandable, then, that companies using this technology can easily sell their ability to influence voters and change the course of elections, polarising the people who use social networks.
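The core of such likes-based prediction can be sketched very simply. This is a toy model in the spirit of the research described above, not Cambridge Analytica's actual method; the page names and weights are invented, and a real system would learn weights from millions of profiles rather than hard-code them.

```python
# Invented weights: each "like" contributes evidence for or against
# a hypothetical personal attribute.
weights = {
    "page:philosophy_quotes": 0.8,
    "page:monster_trucks":   -0.6,
    "page:poetry_daily":      0.7,
    "page:fast_food_deals":  -0.3,
}

def predict_score(likes):
    """Sum the weights of a user's likes; a positive total predicts the attribute."""
    return sum(weights.get(like, 0.0) for like in likes)

user_likes = ["page:philosophy_quotes", "page:poetry_daily"]
score = predict_score(user_likes)
print(score > 0)  # the model predicts the attribute is present
```

Scaled to hundreds of liked pages and dozens of target traits, this is how a handful of seemingly trivial signals becomes a psychometric profile.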
The goal of such manipulation and profiling is not to persuade everybody, but to increase the likelihood that specific individuals will react positively and engage with certain content, becoming part of the mechanism and feeding it. It is designed to work not on everyone but on particular members of a community. To find that small vulnerable slice of the U.S. population, for example, Cambridge Analytica had to profile a huge part of the electorate. By doing so it apparently succeeded in swaying the final results, guiding human behaviour and choices.
Bernd Fix, hacker veteran of the Chaos Computer Club in Germany, joined the panel conversation by tracing the development of contemporary cybernetics from its original principle, in order to contextualise the runaway system that Cambridge Analytica represented. He presented the cybernetic model as a control theory: a monitor compares what is happening in a system with a standard value representing what should be happening, and when necessary a controller adjusts the system's behaviour to bring it back to the standard expected by the monitor. Fix explained how this model, widely applied across disciplines, failed as systems grew more complex and classical cybernetics could not handle the huge amounts of data involved. Its successor is machine learning (or artificial intelligence), which is based on training a model (an algorithm) on massive data sets to make predictions. Traditional IT has made way for the intelligence-based business model, which now dominates the scene.
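The monitor-and-controller loop Fix describes can be sketched in a few lines. This is a generic proportional feedback loop, assumed here as a minimal stand-in for his diagram; the setpoint and gain values are arbitrary.

```python
def control_step(state, setpoint, gain=0.5):
    """One cybernetic cycle: the monitor measures the deviation from the
    standard value, and the controller corrects part of it."""
    error = setpoint - state       # monitor: how far is the system off-standard?
    return state + gain * error    # controller: adjust toward the standard

state = 0.0       # what is happening
setpoint = 10.0   # what should be happening
for _ in range(20):
    state = control_step(state, setpoint)

# After repeated cycles the system converges on the expected standard.
print(abs(setpoint - state) < 0.01)
```

The model breaks down exactly where Fix locates its failure: when the "system" is millions of people and the "standard" cannot be written down as a single number, the simple error term no longer exists, and prediction from mass data takes over from explicit control.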
Machine learning can predict what it is asked to with high accuracy, but, as the hacker explained, it is not possible to determine how the algorithm arrived at its result. Nowadays most of our online environment runs on algorithms programmed to serve their masters' interests, as big companies collect and analyse data to maximise profit. All the services they provide, apparently for free, cost users their privacy. Thanks to predictive models, they can create needs, convincing users to do something by subtly manipulating their perspective. Much of the responsibility lies with AdTech and social media companies, as they sustain a business model that erodes privacy, rights and information. The challenge now is to make people understand that these companies do not act in their interest and are simply taking their data to build psychometric profiles to exploit.
The hacker finally reported the alarming case of China's “social credit” platform, promoted by the national authorities and designed to cover every aspect of online and offline existence. It is supposed to monitor each person and catalogue any “infractions and misbehaviours”, using an algorithm to integrate them into a single score that rates the subject's fidelity to accepted social standards: a complex form of ultimate social control, still in its prototype stage, but one that could become part of a global future in which socio-political regulation and control are governed by cybernetic regulatory circuits. Fix is not convinced that regulation can be the solution: to him, binding private actors and authorities to specific restrictions as a way of holding them accountable is useless if people are not aware of what is going on. Most people around us are plugged into this dimension, where the bargaining away of data seems irrelevant and the Big Three – Google, Facebook and Amazon – are allowed to determine the level of privacy themselves. People are too often happy consumers who want companies to know their lives.
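Nothing reliable is public about how such a score would actually be computed, so the following is a purely speculative sketch of "integrating infractions into a single score", with invented categories and weights. It shows only the shape of the mechanism: heterogeneous behaviours collapsed into one number that rates a person.

```python
# Invented penalty weights: each catalogued "misbehaviour" costs points.
penalties = {
    "jaywalking":        5,
    "late_payment":     20,
    "criticism_online": 50,
}

def social_score(infractions, base=1000):
    """Start from a base score and subtract a weighted penalty per infraction."""
    return base - sum(penalties.get(i, 0) for i in infractions)

print(social_score(["jaywalking", "late_payment"]))  # 975
```

The danger Fix points to lies less in the arithmetic than in the catalogue: whoever chooses the categories and weights defines, invisibly, what counts as good behaviour.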
The last panellist of the afternoon was the artist and researcher Marloes de Valk, who co-developed a video game for vintage 1986 Nintendo consoles that challenges the player to unveil, recognise and deconstruct the techniques used to manipulate public opinion. The player faces the Propaganda Machine, level after level, to save the planet.
“Acid rain is a natural phenomenon”, “passive smoking doesn't affect health”, “greenhouse effects are irrelevant.” Such claims are a scientific aberration nowadays, but in the ‘80s private groups and corporations fought to make them look like legitimate theories. The Dutch artist analysed yesterday's and today's media landscape and, basing her research on specific misinformation campaigns, succeeded in showing how propaganda has become more direct while retaining all its old characteristics. De Valk drew, for example, on old documents from the American Tobacco Institute, leaked documents from U.S. corporations and official press articles from the ‘80s.
The result is a dark-humoured game whose purpose is to help people orient themselves in the world of misinformation and vested interests that affects our lives today, where profit and lobbying can hide behind a pseudoscientific point of view or be the reason rumours are spread. The artist and researcher explained that what you find in the game represents the effects of late capitalism, where self-regulation, together with complacent governments that do not protect their citizens, shapes a world with no room for transparency and accountability.
In the game, players encounter basic propaganda strategies such as “aggressively disseminate the facts you manufactured” or “seek allies: create connections, also secret ones”. The device used to play, from the same period as the misinformation campaigns, reminds us with a touch of nostalgia of where we started, but also of where we are going. Things have not changed since the ‘80s: corporations still try to sell us their ready-made opinions, to make more money and concentrate more power.
International corporations like Facebook have refined their methods of propaganda and are able to create induced needs, thus altering the representation of reality. We need to learn how to interact with such a polluted dimension. De Valk asked the audience to consider official statements like “we want to foster and facilitate free and open democratic debate and promote positive change in the world” (Twitter) and “we create technology that gives people the power to build community and bring the world closer together” (Facebook). There is a whole narrative built to emphasise their social relevance. By contextualising these statements within recent international events, it is possible to broaden our understanding of what these companies want and how they manipulate people to obtain it.
What is the relation between deliberate spread of hate online and political manipulation?
As part of the Disruption Network Lab thematic series “Misinformation Ecosystems”, the second day of the conference investigated the ideology and reasons behind hate speech, focusing on the stories of people who have been trapped and affected by hate campaigns, violence and sexual assault both online and offline. The keynote was introduced by Renata Avila, an international lawyer from Guatemala and digital rights advocate. The speaker was Andrea Noel, a journalist from Mexico, “one of the most dangerous countries in the world for reporters and writers, with high rates of violence against women”, as Avila recalled.
Noel has spent the last two years studying hate speech, fake news, bots, trolls and influencers, and decided to use her personal experience to focus on the correlation between misinformation and business, criminal organisations and politics. On March 8, 2016, International Women's Day, Noel became the victim of a sexual assault. While she was walking down the street in La Condesa (Mexico City), a man ran after her, suddenly lifted up her dress and pulled her underwear down. It all lasted about three seconds.
When the journalist posted the surveillance footage of the assault on Twitter, commenting “If anyone recognizes this idiot please identify him,” she spent the rest of the evening and the following morning facing trolls who supported the attacker. In one day her name became a national trending topic on Twitter; within a few days the assault was international news. She thus became the target of haters and of a misogynistic and sexist campaign, which forced her to move abroad as the threats became concrete and her private address was disclosed. Trolls targeted her with the purpose of intimidating her, sending rape and death threats and pictures of decapitated heads, machetes and guns.
In Mexico women are murdered, abused and raped daily; they are victims of family members, husbands, authorities, criminals and strangers. Trolls have long been active online promoting offensive hashtags such as #MujerGolpeadaMujerFeliz, which translates as ‘a beaten woman is a happy woman’. It is a symptom of the machismo culture affecting Mexico and many other Latin American countries, and of the epidemic of gender-based violence and sexual assault.
Facts can be irrelevant against a torrent of abuse and hate toward journalists. Noel also received hundreds of messages telling her that a group of famous pranksters known as “master trolls” used to assault people on the streets in that same way, to generate clicks and money. Noel found out that they had become best known for pulling down people's pants and underwear in public, and that this had brought them directly onto popular TV shows: a profitable and growing business.
The journalist decided to face her trolls one by one and later realised that they were mostly part of an organised activity, run not by a TV show but by a political group targeting her, a fact that made everything far more intricate. In two years she “got to know her trolls”, as she put it, and studied their ecosystem. The whole story is available on the podcast Reply All.
Moving on from her story, Noel focused in the second part of her talk on the relations linking trolls, criminal organisations, and political and social manipulation. She described how, by using algorithms, bots and trolls, it is possible to make political and election-related postings go viral on Facebook and Twitter. Manipulation also comes from weaponising memes to propel hate speech and denigration, creating false campaigns to distract public attention from real news such as corruption and atrocious cartel crimes.
Marginal voices and fake news can be spread by inflating the number of retweets and shares. Hashtags and trends are part of an orchestrated system in which publishers and social media are not held accountable for the fraud. Automated or semi-automated accounts manipulate public opinion by boosting the popularity of online posts and amplifying rumours, and behind them is a universe of humans acting like bots, each controlling hundreds of fake accounts.
Noel is particularly critical of Twitter. Its legal team has expressed its engagement with this “new major problem and novel threats”, but the journalist hypothesised that the company had been well aware of the issue since 2010 and decided not to intervene to weed out the organised groups manipulating its environment. Moreover, they knew that organised discrediting campaigns can water down the impact of genuine grassroots protests and movements.
These manipulation techniques were responsible for digitally swaying the 2012 election toward the candidate Peña Nieto, with an army of thousands of bots organised to artificially generate trends on Twitter. Trends on the platform rise and fall based on the number of tweets on a topic or hashtag and the speed at which they are shared and retweeted, so trolls and bots can easily hijack the trending-topic mechanism with intense spamming activity.
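Why spamming works on trends is easy to see in a sketch. Twitter's real ranking is proprietary; the velocity-based scoring below is an assumption for illustration, with invented numbers, but it captures the vulnerability Noel describes: trend rankings reward volume per unit time, which a botnet produces effortlessly.

```python
def trend_score(timestamps, now, window=3600):
    """Score a hashtag by its recent velocity: tweets within the last hour."""
    recent = [t for t in timestamps if 0 <= now - t <= window]
    return len(recent)

# Organic chatter: 50 tweets spread evenly over the last hour.
organic = [i * 72 for i in range(50)]
# Bot spam: 500 automated tweets fired over the same hour.
botnet = [i * 7 for i in range(500)]

now = 3600
print(trend_score(botnet, now) > trend_score(organic, now))  # True
```

A ranking driven only by counts like these cannot distinguish 500 coordinated fake accounts from 500 genuinely interested people, which is exactly what makes manufactured trends cheap.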
Noel reported that false stories are shared via WhatsApp too; they are difficult to track and the most challenging to debunk. Her portrayal of the social media and information market is not unlike the one given on the first day of the conference by the writer Nanjala Nyabola.
To see the future of social media manipulation in politics, we need to look at Mexico. All parties there have used bots to promote their campaigns, while journalists and opponents are overwhelmed with meaningless spam and denigrating hashtags. Offline, the media landscape across Mexico is not free, and organised crime has long used propaganda and manipulation to further its own aims. President Peña Nieto's administration spent hundreds of millions of dollars on advertising, making the media dependent and complicit, a system that suppresses investigative articles and intimidates reporters.
The next general election is scheduled for July 1st. Andrea Noel warned that manipulation, trolls and bots are already irreversibly polluting the debate, in a country where more than 100 candidates had already been murdered (at the time of the conference) and a history of corruption makes media and authorities unreliable in the eyes of the public.
In response, universities and NGOs formed an anti-fake-news initiative called “Verificado”, a platform that encourages people to forward stories found on social media using the hashtag #QuieroQueVerifiquen, ‘I want you to verify this’. The project's researchers answer with fact-checking and publish their findings online. When asked, Noel expressed appreciation for the efforts of organisations and civil society, but she is becoming increasingly disillusioned: she sees no immediate prospect of solutions able to slow or halt the impact of misinformation and hate speech online, and in her opinion projects like Verificado can easily be hijacked. On the other side, genuine social media campaigns are still an effective tool in the hands of civil society, but the lack of trust in media fed by corruption often undermines efforts to mobilise society, leading the public to routinely dismiss initiatives against injustice.
When asked whether shutting down social networks could be a solution, Noel admitted she had thought about it. A first step could be to oblige media like Twitter and Facebook to guarantee users a safe environment, where economic interest comes after the need for an environment free of hate speech and manipulation. The way they operate confirms that they are content platforms, and as media entities they lack transparency and accountability. These companies shirk their obligation to publish responsibly; they should be held to account when they spread lies and allow groups to act unethically or to target individuals and communities.
The programme of the second day continued with the presentation of the documentary The Cleaners, by Hans Block and Moritz Riesewieck, a project started in 2013 and in cinemas at the time of the conference. Initially, the authors wanted to learn more about the removal of child pornography and sexualised images of children on Facebook. Social networks have repeatedly pledged to work harder to identify and remove illegal, offensive and improper content, to limit violations and curb hate speech. But how does this work? Who decides what shall be deleted, and on what basis? These questions arose frequently during the first part of the Hate News conference, and the German authors could answer them in relation to Facebook, the subject of their documentary.
The choice of what shall and shall not belong on the internet is a subjective one. Content moderators, who censor postings and content on platforms like Facebook, have a controversial and central role: their work is subject to almost no open scrutiny, yet it shapes the attitudes and trends of both individuals and social groups, impacting public discourse and the cultural dialectic. When a social network decides to censor content and delete videos about the effects of drone bombings, on the grounds that Daesh builds its propaganda by showing civilian victims, it makes a choice that affects the narrative of events and the perception of facts.
Investigating how Facebook polices online content, and the direct impact of these decisions on users' interactions, Block and Riesewieck ended up in Manila, where Facebook has its biggest content moderation department, with more than 10,000 contractors. The Cleaners shows how the platform sees its responsibilities, both toward the people moderating and censoring content and toward its users. Based on interviews with Philippine content moderators at work, the documentary contributes to the debate about the public responsibilities of social media and online publishing platforms, from political manipulation and propaganda to data protection.
Humans are still the first line of content moderation, and they suffer horrible consequences and traumas because they see the worst of the web daily. Companies like Facebook have developed algorithms and artificial intelligence tools that work as a first filter, but most of this technology cannot substitute human capabilities. Some content moderators describe themselves as custodians of moral values, as their work turns into decisions that can shape social media and consequently society. There are indeed countries where people consider Facebook to be the internet, unaware that the world wide web is much more than that one platform.
The authors go further, showing that the Manila cleaners are influenced by their cultural background and social beliefs. They draw a parallel between Philippine Catholicism, with its discourse of universal enslavement of humans to God and of sacrifice, and the country as captured during the government of Rodrigo Duterte, the controversial president leading a war against drugs and moral corruption built on extrajudicial killings and a violent, abusive approach.
Despite the company's denials, the cleaners in Manila also moderate Europeans' posts, and they are trained to do so. A single word or historical reference, combined with a picture, can make all the difference between an innocent joke and hate speech. Whilst memes can be used as weapons, for example by alt-right groups or by reactionary movements against gender equality, cleaners have just a few seconds to decide between removing and keeping a piece of content, checking more than 35,000 images per day. The authors of the documentary explained that it is almost impossible for them to contextualise content, and there is almost no control over their work: a team leader can review only 3% of what a cleaner does.
The last panel, closing the conference on the second day, was moderated by the curator, artist and writer Margarita Tsomou. The American independent online harassment researcher Caroline Sinders focused her talk on online protests and political campaigns within the hate speech discourse. She recalled recent campaigns that polluted public debate by creating chaotic and misleading messages to promote a reactionary, anti-progressive culture. Misogyny thrives on social media; hatred of women and entrenched prejudice against them are everywhere on the internet, and fake online campaigns are often subtly orchestrated to target women.
In 2014 an organised action appeared on social networks under the hashtag #EndFathersDay, presented as a feminist political campaign to abolish Father's Day as a “celebration of patriarchy and oppression”. That campaign had nothing to do with feminism and grassroots movements: it was a harassment campaign against women, a fake built on manipulated images and hundreds of trolls, feeding a sentiment of hatred and hostility against activists for civil rights and equality.
It is not the only case of its kind. The #Gamergate campaign, which in 2014 targeted several women from the video game industry (on Twitter, Reddit, 4chan and 8chan), falls into this context. The campaign was not immediately perceived as an instrument of harassment because of attempts to make it appear a movement against political correctness and bad journalistic ethics. It was, however, a misogynistic, reactionary campaign against female game developers, and it soon revealed its true face as a right-wing sexist backlash: under this hashtag women were subjected to doxing and threats of rape and death.
Sinders explained that in recent years we have seen a shift from a niche market to a global dimension in which we are all potentially identifiable as gamers. Video games and gaming culture are now mainstream: people are continuously connected to all kinds of devices, enabling the global gaming industry to generate more than 100 billion dollars every year. The Gamergate controversy reopened the debate over gaming as a world for (white) males, pointing out that the video game industry has a diversity problem, as sexism and racial and gender discrimination appear to be constant factors in video game culture.
A relevant aspect of the controversy is how the trolls organised and tried to reframe the narrative of the harassment campaign: instead of a misogynistic and violent action, they claimed it was about journalistic integrity and candid reviewing, denouncing collusion between the press, feminists and social critics. Most of the trolls and supporters were anonymous, which ensured that the campaign came to be defined by the harassment committed against women and by its character as a reaction to what its participants saw as the increasing influence of feminism on video game culture.
Sinders concluded by explaining that organised actions and campaigns like those described above are structured around precise tactics and harassment techniques that have already entered our vocabulary: words like doxing, swatting, sealioning and dogpiling are neologisms describing strategies of hate speech and harassment that are now common.
The Norwegian journalist Øyvind Strømmen, author and managing editor of Hate Speech International, has extensively researched and written about how extreme-right movements and religious fundamentalists build effective communication online and use the web as an infrastructure to strategically enhance their activities. He joined the panel noting that, despite his intense international activity, he has never been subjected to the harassment and death threats his female colleagues face, whilst he daily encounters organised activity sowing hatred and intolerance to repress women.
Cathleen Berger, former international cyber policy coordination staff at the German Foreign Office and currently lead of Mozilla's strategic engagement with global Internet fora, closed the conference with an analysis of the new German NetzDG legislation, described by media as an extreme example of governments' efforts to make social media liable for what circulates on their pages. The law was adopted at the end of 2017 to combat illegal and harmful content on social media platforms. It is also known as the anti-hate-speech law, as it was written in the historical context of the mass migration of refugees to Europe and the new neo-nazi propaganda from political formations like the Alternative for Germany (AfD). At the time, fake news and racist material were being shared on several mainstream channels for the first time, with a relevant impact on public opinion.
The new German law requires social media companies to provide users with a wide-ranging complaints structure ensuring that discriminatory and illegal posts can be reported quickly. It is left to the platforms themselves to decide whether reported content constitutes promotion of or incitement to terrorism, child abuse, hate or discrimination of any kind.
The law also forces social media to act quickly. Under NetzDG, social media platforms with more than 2 million users in Germany have 24 hours to remove posts reported by users as illegal. Facebook, Twitter and YouTube appear to be the law's main focus; failure to comply carries a fine of up to €50 million.
The German government’s Network Enforcement Act has been criticised for the risk of inadvertent censorship, limiting legitimate expressions of opinion and free speech. Once again, private companies, which are neither courts nor public authorities of any kind, have the power to decide whether reported content is in fact unlawful.
All credit is due to Tatiana Bazzichelli and the Disruption Network Lab, who once again provided a forum for discussion and exchange of information that raises awareness of matters of particular concern, drawing on the different perspectives of the guests, many of them women, whose international work and research capture several topical issues.
This 13th Conference (https://www.disruptionlab.org/hate-news/) was a valuable opportunity to discuss and rationalise the need for civil society to remain globally vigilant against new forms of hate speech, manipulation and censorship. The ideological motives behind hate speech and online manipulation are on the table, and the framing is clear enough to hold online media and publishing companies accountable for the spread of fraud, falsehood and discrimination within their networks.
Companies like Facebook and Twitter have demonstrated their inability to recognise real threats and appear to think of profit and control without considering the repercussions of their choices. Yet we are delegating to them the power to define what is legal and what is not. Their power of censorship shapes society, interfering with fundamental rights and freedoms, and feeding conflict and polarisation. As a legal response to hate speech and manipulation, in the context of the battle for privacy and civil rights, this is completely inadequate.
Propaganda and hate speech have historically been tools used in all countries to influence decision-making and to manipulate and frighten public opinion. Forms of intrusive persuasion that use rumour or manipulation to influence people’s choices, beliefs and behaviours now occupy the web too. Individuals should be able to give due value to their online interactions, mindful of the risks they run when they click on something. There is too little awareness of how companies, aggressive trolls, criminals, private groups and advertisers subtly manipulate the online environment for political and economic interest.
Such a corrupted online ecosystem, where almost nothing we encounter can be trusted and where individuals and communities are exposed to private interests, often generates hate campaigns targeting women and minorities, normalising crimes, reactionary gender stereotyping and deplorable cultural customs. As all the speakers suggested, cyber-ethnography can be a worthwhile online research method for studying communities and cultures created through computer-mediated social interaction. It could help to study local online exchanges and find local solutions. By researching the available data from these microcosms, it is possible to prevent ethnic, socioeconomic and political conflicts linked to the online activity of manipulators, destructive trolls and influential groups, to disrupt the insularity of closed media and to unveil the economic and political interests behind them.
HATE NEWS: Manipulators, Trolls & Influencers
May 25-26, 2018 – Kunstquartier Bethanien, Berlin
Info about the 13th Disruption Network Lab Conference, its speakers and themes is available online at https://www.disruptionlab.org/hate-news/
To follow the Disruption Network Lab, sign up for its newsletter and stay informed about its conferences, ongoing research and projects. The next Disruption Network Lab event is planned for September. Make sure you don’t miss it!
Photo credits: Maria Silvano for Disruption Network Lab