Category: social media

Stop preaching to the converted: Talking feminism in online video gaming!

This article, which first appeared on creativetimesreport, may seem irrelevant at first sight, but it’s actually a VERY IMPORTANT one! It is a great example of someone who went out of her “comfort zone” and stopped preaching to the converted, a strategy at the heart of all good campaigning work. Her example, and the lessons she shares, are enlightening!

 

Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft, 2012.

[Chastity]: Abortion is wrong and any woman who gets one should be sterilized for life.
[Purpwhiteowl]: should i mention the rape theory?
[Snuh]: What if they don’t have the means to pay for the child and got raped?
[Xentrist]: clearly Chastity in sick
[Snuh]: What if they are 14 years old and were raped?
[Chastity]: I was raped growing up. Repeatedly. By a family member. If i had gotten pregnant i wouldnt have murdered the poor child. because THE CHILD did not rape me.

This intense and personal discussion regarding the ethics of abortion unfolded in the lively city of Orgrimmar, one of the capitals of an online universe populated by more than 7 million players: World of Warcraft (WoW). After several years of raiding dungeons with guilds, slaying goblins and sorcerers, wearing spiked shoulder pads with eyeballs embedded in them and flying on dragons over flaming volcanic ruins, I decided to abandon playing the game as directed. Fed up with the casual sexism exhibited by players on my servers, in 2012 I founded the Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft to facilitate discussions about the misogynistic, homophobic, racist and otherwise discriminatory language used within the game space.

As a gamer who is also an artist and a feminist, I consider it my responsibility to dispel stereotypes about gamers—especially WoW players—who have been mislabeled as unattractive, mean-spirited losers. At the same time, I question my fellow gamers’ propagation of the hateful speech that earns them those epithets. The incredible social spaces designed by game developers suggest that things could have been otherwise; in WoW’s guilds, teams come together for hours to discuss strategy, forming intimate bonds as they exercise problem-solving and leadership skills. Unfortunately, somewhere along the way, this promising communication system bred codes to let women and minorities know that they didn’t belong.

Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness: Red Shirts and Blue Shirts (The Gay Agenda), 2014 (excerpt).

Trying to explain to someone who has never played WoW (or any similar game) that the orcs and elves riding flying dragons are engaging in meaningful long-term relationships and collaborative team-building experiences can be a little difficult. Typical Urban Dictionary entries for WoW define the game as “crack in CD-ROM form” and note, “players are widely stereotyped as fat guys living in there parents basements with out a life or a job or a girl friend [sic].” One only needs to look into the ongoing saga of #gamergate—an online social movement orchestrated by thousands of gamers to silence women and minorities who have raised questions about their representation and treatment within the gaming community—to see how certain individuals play directly into the hands of this stereotype by attempting to lay exclusive claim to the “gamer” identity. But gamers, increasingly, are not a homogeneous social group.

World of Warcraft is a perfect Petri dish for conversations about feminism with people who are uninhibited by IRL accountability

When women and minorities who love games question why they are abused, poorly represented or made to feel out of place, self-identified gamers often respond with an age-old argument: “If you don’t like it, why don’t you make your own?” Those on the receiving end of this arrogant question are doing just that, reshaping the gaming landscape by independently designing their own critical games and writing their own cultural criticism. Organizations like Dames Making Games, game makers like Anna Anthropy, Molleindustria and Merritt Kopas and game writers like Leigh Alexander, Samantha Allen, Lana Polansky and others listed on The New Inquiry’s Gaming and Feminism Syllabus are becoming more and more visible and broadly distributed in opposition to an industry that cares much more about consumer sales data and profit than about cultural innovation, storytelling and diversity of voices.

What’s especially strange about the sexism present in WoW is that players not only come from diverse social, economic and racial backgrounds but are also, according to census data taken by the Daedalus Project, 28 years old on average. (“It’s just a bunch of 14-year-old boys trolling you” won’t cut it as a defense.) If #gamergate supporters need to respect this diversity, many non-gamers also need to accept that the dichotomy between the physical (real) and the virtual (fake) is dated; in game spaces, individuals perform their identities in ways that are governed by the same social relations that are operative in a classroom or park, though with fewer inhibitions. That’s why—instead of either continuing on quests to kill more baddies or declaring the game a trivial, reactionary space where sexists thrive and abandoning it—I embarked on a quest to facilitate conversations about discriminatory language in WoW’s public discussion channels. I realized that players’ geographic dispersion generates a population that is far more representative of American opinion than those of the art or academic circles that I frequent in New York and San Diego, making it a perfect Petri dish for conversations about women’s rights, feminism and gender expression with people who are uninhibited by IRL accountability.

Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft, 2012.

WoW, like many other virtual spaces, can be a bastion of homophobia, racism and sexism existing completely unchecked by physical world ramifications. Because of the time investment the game requires, only those dedicated enough to go through the leveling process will ever make it to a chatty capital city (like Orgrimmar, where most of my discussions take place), meaning that only the most avid players are capable of raising these issues within the game space. At such moments, the diplomatic facades required of everyday social and professional life are broken down, and an inverse policy of “radical truth” emerges. When I asked them about the underrepresentation of women in WoW—less than 15 percent of the playerbase is female—some of these unabashed purveyors of “truth” have attributed it not to the outspoken misogyny of players like themselves but to the “fact” that gaming is a naturally male activity. Many of the men I’ve talked to suggest that women are also inherently more interested in playing “healer” characters. These arguments are made as if they were obviously true—as if they were rooted in science.

When I ask men why they play female characters, I’ve repeatedly been told: “I’d rather look at a girl’s butt all day in WoW”

Women now have to “come out” as women in the game space, risking ridicule and sexualization, as more than half the female avatars running around in WoW are played by men (women, by contrast, are rarely interested in playing men). Unfortunately this is not because WoW is an empathetic utopia in which men play women to better understand their experiences and perspectives; WoW merely offers men another opportunity to control an objectified, simulated female body. When I ask men why they play female characters, I’ve repeatedly been told: “I’d rather look at a girl’s butt all day in WoW,” “because it would be gay to look at a guy’s butt all day” and “I project an attractive human woman on my character because I like to watch pretty girls.” I found these responses, which were corroborated by a study recently cited in Slate, disturbing to say the least. They also bring to mind Laura Mulvey’s discussion of the male gaze in her influential essay “Visual Pleasure and Narrative Cinema,” published in 1975: “In a world ordered by sexual imbalance, pleasure in looking has been split between active/male and passive/female. The determining male gaze projects its phantasy on to the female form which is styled accordingly. In their traditional exhibitionist role women are simultaneously looked at and displayed, with their appearance coded for strong visual and erotic impact.”

The simulated avatar woman customized and controlled by a man who gets pleasure out of projecting his fantasy onto her is in strict competition with the woman who talks back—the woman who plays women because, as Taetra points out in the image below, for women it is logical to do so. Women haven’t been socialized to capitalize on—or in many contexts even to admit to having—sexual desires and consequently do not project sexual objects to conquer and control onto their avatars.

Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness: Playing a Girl, 2013 (excerpt).

As I continued to facilitate discussions about the discriminatory language usage on various WoW servers, I realized that the topic generating the most negative responses and the greatest misunderstanding was “feminism.” Here’s a small sample of the responses I’ve gotten when asking for player definitions of feminism (and framing my question as part of a research project):

[Chastity]: Feminists are man hating whores who think their better than everyone else. Personally I think a woman’s job is to stay home, take care of her house, her babies, her kitchen and her man. And before you ask, yes I am female
[Xentrist]: Feminism is about EQUAL rights for women
[Hyperjump]: well all you really need to know is pregnant, dish’s, naked, masturbate, shaven, and solid firm titties. feminism is all about big titties and long stretchy nipples for kids to breastfeed.
[Taetra]: Feminism is the attention whore term of saying that women are better than men and deserve everything if not more than them, which is not true in certain terms. Identifying with the female society instead of humans. Working against the males instead of with.
[Yukarri]: isnt it when somebody acts really girly
[Try]: google it bro
[Holypizza]: girls have boobs. gb2 kitchen
[Raspberrie]: idk like angry more rights for females can’t take a kitchen joke kind of lady
[Defeated]: is that supporting woman who don’t make me sammichs? they need to make my samwicths faster
[Kigensobank]: i dont know if WOW is the best place to ask for feminists
[Mallows]: I think that hardcore feminists often think that women are better lol and they change their mind when they don’t like something that men have that is undesirable
[Alvister]: da fuq
[Misstysmoo]: lol feminism is another way communism to be put into society under the pretense of protecting women

[Seirina]: Feminists are women who think they are better than men. Theyre nuts. Men and women are equal. We’re just sexier.
[Yesimapally]: Big Chicks who love a buffet but hate to shave their hairy armpits??
[Nimrodson]: i think it’s a word with too many negative/positive connotations to be worth defining
[Dante]: woman are usefull as healer
[Scrub]: yes, women were discriminated against while back, but after many feminist movements the laws were changed. It is now the 21st century and women have all if not more rights then men do. so the feminist activists are doing nothing more then creating drama

The tone of many of these comments reflects what one might find on a men’s rights forum. Recently the gaming and men’s rights communities have overlapped unambiguously, as Roosh V—a so-called pick-up artist dubbed “the Web’s most infamous misogynist” by The Daily Dot—just created an online support site for #gamergate supporters despite not being a gamer himself. I conducted an interview with him for another (seemingly unrelated) project a week before he announced this site.

Angela Washko, BANGED, currently in progress.

Most of the women I’ve addressed in WoW do not see themselves as victims within this system, likely because their scarcity greatly increases their value as projected-upon objects of desire (as long as they don’t ask too many questions) without having it related to the physical body outside of the screen. Among the women I’ve talked to, I’ve found that there are two common yet distinct responses to my questions about feminism and being a woman inside of WoW. Response type #1: “Feminists hate men and feminism encourages physically attractive women to be sluts.” Response type #2: “Feminism is about equal rights for women, but I don’t talk about it in WoW because bringing up issues about the community’s exclusivity compromises my participation in competitive play and makes me a target for ridicule.”

Opportunities to interact online without potential repercussions for one’s offline life are becoming fewer and fewer.

Of course phrases like “get back to the kitchen/gb2kitchen” or “make me a sandwich” can be said in jest, but they nonetheless reinforce conservative viewpoints regarding women’s roles. The overwhelmingly popular belief communicated in this space—that women are not biologically wired to play video games (but rather to cook, clean, produce and take care of babies, maintain long, dye-free hair and faithfully serve their deserving men)—creates a barrier for women who hope to excel in the game and participate in its social potential. This barrier keeps women from being taken seriously for their contributions within the game beyond existing as abstracted, fetishized sex objects. Women who reject this role may be publicly demonized and called “feminazis.”

Unfortunately I did not learn how to turn WoW into a space for equitable, respectful conversation, as I had intended. Instead I came away with some thoughts about how much bigger the issues are than the game itself. Back in the days of dial-up modems, when my family finally realized the impending necessity of “getting the internet,” there was a huge fear of allowing anyone to know “who you really were.” Anonymity was the default then, and protecting your identity was key to avoiding scams, having your credit card information stolen, being stalked IRL or whatever else parents everywhere imagined might happen if someone on the internet knew your “real identity.”

What I learned early on from playing MUD games (text-based multiplayer dungeon games—precursors to MMORPGs like WoW) was that you could actually be quite intimate, revealing and honest with little consequence. There was no connection to your physical self in that kind of setting. But that seems to have changed drastically since the transition from Web 1.0 to 2.0. Web 2.0 has all but eliminated the idealized possibilities of performing an anonymous virtual self, moving internet users toward performing an (often professionalized) online version of one’s physical self (i.e., branding). The possibility of anonymity has disappeared as an increasing number of sites, Facebook foremost among them, require us to use our real names and identities to interact with other individuals online. Opportunities to interact online without potential repercussions for one’s offline life are becoming fewer and fewer.

Angela Washko, The Council on Gender Sensitivity and Behavioral Awareness in World of Warcraft, 2013

Though I had initially hoped to convince many WoW players to reconsider the adopted communal language therein, I quickly realized that this was both a terribly icky colonialist impulse on my part and that its persistence was related to a more complicated desire to hold on to a set of values that is becoming increasingly outdated and unacceptable. Throughout my interventions in the massively multiplayer video game space, I’ve found that WoW is a space in which the suppressed ideologies, feelings and experiences of an ostensibly politically correct American society flourish.

“It’s just a bunch of 14-year-old boys trolling you” won’t cut it—gamers are not a homogeneous social group.

In many areas of physical space, racism, homophobia and misogyny play out systemically rather than overtly. It has fallen out of fashion to openly be a sexist, homophobic bigot, so people carve out marginal spaces where this language can live on. WoW is a space in which the learned professional and social behaviors (or performances) that we all employ as we shift from context to context in our everyday life outside of the screen are unnecessary. At the same time, this anonymity produces one of the few remaining opportunities to have a space for solidarity among those who are extremely socially conservative in a seemingly unsurveilled environment unattached to participants’ professional and social identities. For the players I talk to, my research project provides a potentially meaningful platform to share concerns about how social value systems are evolving while protected by the facade of their avatars.

Thanks to the emerging visibility and solidarity of visual artists, writers, game makers and other cultural producers fostering a “queer futurity of games” (to quote Merritt Kopas) and more inclusive internet spaces in general, I believe that new spaces will be produced by and for those targeted by #gamergate and its ilk. I hope that efforts will move beyond examining how marginalized groups are represented and move toward creating game spaces that promote empathy. Rather than playing a female blood elf solely because you like the design of her ass, players would be allowed to fully experience the perspective of a person they might not understand or agree with. Perhaps by living as an other in this queer utopian game space, players will come to respect people unlike themselves; at the least, they will have a harder time denying that the experiences of other gamers are valid, acceptable and even worth celebrating.

2019: Celebrate Blade Runner’s Year of the Replicant!

2019 is a blessed year for science fiction fans. It is a crucial year for Neo-Tokyo, in Katsuhiro Otomo’s cult manga Akira. But it is also and especially the year that Ridley Scott’s Blade Runner is set in.

(Adapted from an article published in Le Monde.)

In the real 2019, Los Angeles is not yet completely overtaken by the pollution Ridley Scott depicted. But the androids at the heart of Blade Runner’s plot are already here – albeit in a very different form from the Replicants, those artificial beings impossible to distinguish from a human without resorting to a complex test.

Today’s replicants do not haunt the basements of large mega-cities but the depths of the Web. And they are everywhere, as New York magazine summarizes in a long article entitled “How Much of the Internet Is Fake?”. A lot, it turns out. A substantial portion of website traffic is generated by automated programs, not by humans. Some are useful and well known, like Google’s crawlers, which roam the Web to index all pages and their updates almost in real time. Others, on the other hand, are designed to pass as humans. Their goal is simple: to inflate visit or view statistics, or even to click on advertisements. You can buy thousands of views of a YouTube video for a few euros; and there are automated networks that click on ads to “inflate” their numbers and bring earnings to the more or less legitimate sites that host them.

The problem is such that in 2013, according to the Times, almost half of the clicks on YouTube were made by robots – making the company’s engineers fear a phenomenon of “inversion”: once machine clicks exceeded human clicks, the anti-bot tools would end up considering human traffic to be the “fake” traffic and would turn against the site’s legitimate users.

The “inversion moment” never officially arrived; without achieving the complexity of Blade Runner’s Voight-Kampff test, anti-spam tools have improved. Historically, the most common was the captcha, which asked the user to decipher one or two badly written words to prove they were human. That test proved too simple in the face of increasingly sophisticated robots and has largely been replaced by a more analytical one, which asks the user to identify objects in images. Google and others are already working on a new generation of tools that analyze how the mouse moves on the screen to guess whether it is being manipulated by a real being.
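
One behavioral signal such tools can draw on is easy to illustrate. The sketch below (a toy heuristic, not any vendor’s actual method) scores how straight a cursor trajectory is: scripted cursors tend to travel in near-perfect lines, while human hands wobble.

```python
import math

def straightness(points):
    """Ratio of straight-line distance to total path length.

    1.0 means a perfectly straight trajectory; human mouse paths,
    which wobble and overshoot, score noticeably lower.
    """
    def dist(a, b):
        return math.hypot(b[0] - a[0], b[1] - a[1])

    path = sum(dist(points[i], points[i + 1]) for i in range(len(points) - 1))
    direct = dist(points[0], points[-1])
    return direct / path if path else 1.0

# A scripted cursor moving in a perfect line vs. a wobbly human-like path.
robotic = [(0, 0), (25, 25), (50, 50), (75, 75), (100, 100)]
human = [(0, 0), (30, 18), (45, 60), (80, 70), (100, 100)]
print(round(straightness(robotic), 6))  # -> 1.0
print(round(straightness(human), 2))    # -> 0.93
```

A real detector would combine many such features (speed profiles, pauses, acceleration) rather than a single ratio, but the principle is the same: behavior that is too regular is suspicious.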

But knowing that a human is clicking is not always enough. The Russian propagandists of the Internet Research Agency, who tried to influence the US presidential election, are very human, as are the employees of the “click farms” who inflate their customers’ YouTube views.

And as control tools improve, so do the skills of those who generate fake traffic.

In the past two years, simple AI tools have made it much easier to create “deepfakes”, faked – and mostly pornographic – videos in which one person’s face is superimposed, in a relatively convincing way, onto a character in a video. “The fact is that trying to protect yourself from the Internet and its depravity is basically a lost cause… The Internet is a vast wormhole of darkness that eats itself,” actress Scarlett Johansson, a regular victim of deepfakes, said in a rather disillusioned interview with the Washington Post.

The worst is perhaps yet to come: on the Internet, pornographic innovations usually find other uses, and 2019 could be a good year for shady political videos. Because at the end of the day, one of the biggest differences between Blade Runner’s 2019 and the one we are now living is that “replicating” is no longer just a multinational’s business, as it is for the Tyrell Corporation in the film. Today almost everyone can, for a small cost, buy or build a small robot factory. More than Philip K. Dick’s original novel, on which Blade Runner was based, ever foresaw, the replicants are now truly among us.

Having said that, Happy 2019!

 

Is humanity controlled by alien lizards? – how fake news and robots influence us from within our own social circles.

Even these days, 12% of Americans still believe humanity is controlled by alien lizards who have taken human shape. Replace “alien lizards” with “bots”, and the laughable conspiracy theory might not be so funny anymore.

Increasingly, social debates and political elections are manipulated by social bots, and the most worrying news is that opponents of a cause or a party manipulate its supporters from within their own social circles. We must absolutely understand how this works against our social struggles if we are to keep control of our campaigning strategies.

 

One of the most verified truths of campaigning is that people are only really influenced by the attitudes and behaviors of other members of their social circles, as conformity bias drives most of us to follow what we perceive our fellows think and do.

And where do these patterns appear more clearly than on social media? Clicks, likes and comments drive most of us to distinguish what is appropriate from what is not.

Political strategists have constantly researched how to make the most of this, using individuals as one of their main channels to propagate their ideas.

In recent years, the explosion of social bots, allied to a shameless use of fake news, has given strategists worrying new tools to influence attitudes and behaviors, including our own.

The increasing presence of bots in social and political discussions

Social (ro)bots are software-controlled accounts that artificially generate content and establish interactions with non-robots. They seek to imitate human behavior and to pass as human in order to interfere in spontaneous debates and create forged discussions.

The strategists behind the bots create fake news and fake opinions, then disseminate them via millions of messages sent through social media platforms.

With this type of manipulation, robots create a false sense of broad political support for a given proposal, idea or public figure. These massive communication flows modify the direction of public policies, interfere with the stock market, spread rumors, false news and conspiracy theories, and generate misinformation.

In all social debates, it is now becoming common to observe the orchestrated use of robot networks (botnets) to generate a movement at a given moment, manipulating trending topics and the debate in general. Their presence has been evidenced in all recent major political confrontations, from Brexit to the US elections and, very recently, the Brazilian elections:

On October 17, the daily Folha de S.Paulo revealed that four services specialized in sending mass messages on WhatsApp (Quick Mobile, Yacows, Croc Services, SMS Market) had signed multimillion-dollar contracts with companies supporting Jair Bolsonaro’s campaign.

According to the revelations, the four companies sent hundreds of millions of messages to large lists of WhatsApp accounts, which they collected via cellphone companies or other channels.

What these artificial flows represent in terms of proportion is frightening.

According to a Brazilian study led by the Getúlio Vargas Foundation, which analyzed Twitter discussions during the televised debates of the 2014 Brazilian presidential election, 6.29 percent of Twitter interactions during the first round were made by social bots, accounts controlled by software that mass-produced posts to manipulate the discussion on social media. During the second round, the proliferation of social bots was even worse: bots created 11 percent of the posts. And during the 2017 general strike, more than 22 percent of Twitter interactions between users in favor of the strike were triggered by this type of account.

The foundation conducted several more case studies, all with similar results.

Twitter is Bot land

Bots spread more easily on Twitter than on Facebook for a variety of reasons. Twitter’s text format (a restricted number of characters) imposes a communication limit that makes human behavior easier to imitate. In addition, using @ to mention users, even those outside one’s own network, allows robots to randomly tag real people, adding a factor that closely resembles human interaction.

Robots also take advantage of the fact that people generally lack critical thinking when following a profile on Twitter, and usually reciprocate when they receive a new follower. Even on Facebook, where people tend to be a bit more careful about accepting new friends, experiments show that 20% of real users accept friend requests indiscriminately, and 60% accept when they have at least one friend in common. In this way, robots add a large number of real people at once, follow the real pages of famous people, and follow a large number of other robots, creating mixed communities that include both real and false profiles (Ferrara et al., 2016).

How WhatsApp is distorting the debate in Brazil

But Twitter is not the only channel. All social media experience the same strategies of infiltration, depending on what is being used by the specific group targeted by unscrupulous strategists.

In most countries, WhatsApp is a medium restricted to private communications within a close circle.

But in Brazil, it has largely replaced social media. Of 210 million Brazilians, 120 million have an active WhatsApp account. In 2016, a Harvard Business Review study indicated that 96% of Brazilians who have a smartphone used WhatsApp as their preferred messaging app.

Although disseminating information is rather difficult, with WhatsApp groups limited to 256 people, the influence of messages is extremely high, as levels of trust within WhatsApp groups are higher than anywhere else. Investing in reaching these groups therefore turns out to be extremely effective.

Furthermore, regulation and traceability of fake news are extremely difficult as messages are encrypted.

As a result, some Brazilians have reported receiving up to 500 messages per day, according to Agence France-Presse.

And the impact of this tactic is not to be underestimated: the internet watchdog Comprova, created by over 50 journalists, analyzed the fifty most viral images within these groups and found that 56% of them propagate fake news or present misleading facts.

The “virality” of fake news is particularly strong in the case of images and memes, such as the one claiming that Fernando Haddad, the candidate of the PT, aimed to impose “gay kits” in schools.

Not only is this highly immoral but, in the case of Brazil, also illegal, as the law only allows a party to send messages to its enrolled supporters. Not to mention that it constitutes illegal funding of political campaigning.

Following the disclosure, WhatsApp closed 100,000 accounts linked to the four companies, but this represents only a fraction of the problem, and in any case the damage was done.

This manipulation is generated within supporter groups to discredit their opponents

In line with what has been happening since the beginnings of politics, influencers act within groups of supporters of a cause or a party to discredit their opponents and help tighten the group.

The same strategy is applied to target movable audiences and win them over.

In this respect, the major change that bots bring is the size and speed of the manipulation.

Attacking from within

But the worrying trend is that this army of fake news and distorted opinions also attacks our movements from within.

In October 2018, the University of Washington released the results of investigations into social media discussions during the 2016 US presidential election, showing that many tweets that seemed to come from #BlackLivesMatter supporters were not posted by “real” supporters but by Russia’s Internet Research Agency (IRA) as part of its influence campaign targeting the election. Of course, the same was true of #BlueLivesMatter.

The creepy graph below shows the IRA accounts in orange, within the larger blue circles of pro- and anti-BLM conversations.

The IRA accounts impersonated activists on both sides of the conversation. On the left were IRA accounts that enacted the personas of African-American activists supporting #BlackLivesMatter. On the right were IRA accounts that pretended to be conservative U.S. citizens or political groups critical of the #BlackLivesMatter movement. Infiltrating the BLM movement by amplifying radical opinions was a clear strategy to undermine electoral support for Hillary Clinton by encouraging BLM supporters not to vote.

Outrageous fake news coming from our opponents is relatively easy to spot and dismiss. But when more subtle fake news and artificial massification of opinion use our own frames and come from what seem to be elements of our own movements, the danger is much greater.

What does this mean for SOGI campaigning?

Political pressure on social media to reinforce regulations is mounting from governments and multilateral institutions such as the EU.

Issues of sexual orientation, gender identity or expression, and sex characteristics are almost always used by conservatives to discredit progressives and whip up moral panics.

Supporting institutional efforts to control fake news would probably always work in our favor.

More and more public and private initiatives are being developed to bust fake profiles.

For example, Brazil developed PegaBot, a tool that estimates the probability of a profile being a social bot (e.g. profiles that post more than once per second).
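PegaBot’s actual model is not described here, but the posting-rate signal mentioned above is easy to sketch. The following is a minimal, purely illustrative heuristic (function name, threshold, and data are all assumptions, not PegaBot’s implementation): it scores an account by the share of consecutive posts separated by less than a second, something humans almost never do.

```python
from datetime import datetime, timedelta

def bot_likelihood(timestamps, burst_threshold=1.0):
    """Score how bot-like a posting pattern is (illustrative heuristic).

    timestamps: chronologically sorted datetimes of an account's posts.
    Returns the fraction of consecutive posts separated by less than
    `burst_threshold` seconds -- a human rarely posts twice in one second.
    """
    if len(timestamps) < 2:
        return 0.0
    gaps = [(b - a).total_seconds() for a, b in zip(timestamps, timestamps[1:])]
    bursts = sum(1 for gap in gaps if gap < burst_threshold)
    return bursts / len(gaps)

# Hypothetical human-paced account: minutes between posts.
human = [datetime(2018, 1, 1) + timedelta(minutes=5 * i) for i in range(10)]
# Hypothetical bot-paced account: two posts per second.
bot = [datetime(2018, 1, 1) + timedelta(seconds=0.5 * i) for i in range(10)]

print(bot_likelihood(human))  # 0.0
print(bot_likelihood(bot))    # 1.0
```

Real detectors combine many such signals (posting rate, account age, retweet ratios, network position); a single threshold like this would flag only the crudest bots.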

The BBC reports that through the International Fact Checking Network (IFCN), a branch of the Florida-based journalism think tank Poynter, Facebook users in the US and Germany can now flag articles they think are deliberately false; these are then passed to third-party fact checkers signed up with the IFCN.

Those fact checkers come from media organisations like the Washington Post and websites such as the urban legend debunking site Snopes.com. The third-party fact checkers, says IFCN director Alexios Mantzarlis, “look at the stories that users have flagged as fake and if they fact check them and tag them as false, these stories then get a disputed tag that stays with them across the social network.” Another warning appears if users try to share the story, although Facebook doesn’t prevent such sharing or delete the fake news story. The “fake” tag will, however, negatively impact the story’s score in Facebook’s algorithm, meaning that fewer people will see it pop up in their news feeds.

The opposite approach could also be favored, with “fact-checked” labels being issued by certified sources and given priority by social media algorithms.

Of course, this would raise strong concerns over who would hold the “truth label” and how it might be used to silence voices outside the ruling systems.

But beyond these and other initiatives to get the social media platforms to exert control, campaign organisations also need to take direct action.

As a systematic step, educating our own social circles on fake news and bots now seems unavoidable.

We might even need to disseminate internal information to our readers, membership or followers, warning them of possible infiltration of the debates by fake profiles that look radical. But this might also end up discrediting the genuinely radical thinking which we desperately need.

One of the most useful activities could be to increase our presence in other social circles and help these circles identify and combat fake news. Some people are so entrenched in their hatred that they will believe almost anything that justifies it. But most people are genuinely looking for true information. After all, no one likes to be lied to and manipulated. If we keep identifying and exposing fake news within the social circles of these moderate people, we can surely achieve something, or at least help block specific profiles by reporting them.

The net is ablaze with discussions on how to counter the manipulation of public opinion by bots. As some of the first victims of this manipulation, we surely have our part to play.

Will truth be defeated? What can be done when 12 million Americans believe Obama is an alien lizard?

On February 12, 2014, the New Zealand Prime Minister proudly announced on TV that he could medically prove that he was not… a lizard.

Although this made everyone laugh, the sad truth is that he had to respond to a constitutional request from a citizen who demanded that the PM prove he was not “a lizard alien in human shape trying to enslave the Human race”. And sadder still, that citizen was not alone: in 2013, 4% of the US population (that’s 12 million people) believed the alien lizard myth, and that Queen Elizabeth and Barack Obama were among the lizards.

Funny?

If you draw a parallel with the myths and urban legends surrounding LGBTI people, it is not. “Abuse of children”, “witchcraft” and “demonization” are just a few of the myths used to persecute, and often kill, LGBT people. Hardly an earthquake goes by that is not blamed on “gays”, in places as different as Italy, the USA, Haiti or, more recently, Indonesia.

From firm belief that planet Earth is flat, to certainty that HIV can be cured with garlic, there are countless urban legends and myths that resist all forms of argumentation.

Some campaigners will argue that education in rational thinking will, over time, overcome legends and myths. But while education might be a necessary condition, it is by no means a sufficient one. Actually, in many cases the more educated people are, the better equipped they are to justify their beliefs. Education might make it more difficult for people to adopt crazy beliefs, but once they do, they will use their education to cling to them even more.

That is one of the reasons why systematically targeting “people with higher education” in our campaigns is something we should seriously research, rather than just assuming they are more progressive, or easier to convince.

Social research into human behavior has shown that people distinguish between true and false, or right and wrong, on the basis of the group they (want to) belong to, and not on the basis of what they know to be true. Hence religious dogma and “alternative facts”.

And with the choice of communication channels increasingly in the hands of the users (no more sitting in front of the 8 o’clock news), people live in a social bubble and the influence of the “in-group” is getting stronger and stronger.

Social media research shows that the bubbles are tighter than ever, with very little flow between opposing bubbles.

So your “truth” is unlikely to reach your target in the first place. And if it does, it is likely to be dismissed.

So is truth once and for all a losing game?

“Providing information” and doing so on one’s Facebook page is definitely not the most effective thing to do when it comes to changing people, but there might be some other options to consider:

The most obvious move is of course to reach beyond your own “bubble” and identify the “bubbles” that are closest to you: the first tier. Human rights groups, women’s liberation forums, and all your natural allies.

But some of the “second tier” bubbles are harder to identify, although this is often where the biggest gains can be achieved. If you aim at early adopters of new trends, discussion forums on technological progress could be a good target. If you aim at young modern women, you might want to try discussion forums on fashion or modern lifestyle. When you know that a new series with an LGB or T character hits the net, it might be a better use of your time to participate in the discussions on mainstream discussion forums rather than on your own channels.

But even so, the basics of campaign communication still apply: aggressively trolling these circles will be counterproductive, only alienating people even further. Communication has to be smartly framed, and this takes a bit of preparation.

Counter-intuitive as it may be, “truth” won’t change people.

If we want even a slight chance of changing hearts and minds, we have to be good at becoming part of our target’s reference groups. And this requires going out of our bubbles and taking the conversation to where people are.

Trolls attack from within: How we are all being manipulated from within our own communities

This article is reproduced from Medium.com

For researchers in online disinformation and information operations, it’s been an interesting week. On Wednesday, Twitter released an archive of tweets shared by accounts from the Internet Research Agency (IRA), an organization in St. Petersburg, Russia, with alleged ties to the Russian government’s intelligence apparatus. This data archive provides a new window into Russia’s recent “information operations.” On Friday, the U.S. Department of Justice filed charges against a Russian citizen for her role in ongoing operations and provided new details about their strategies and goals.

Information operations exploit information systems (like social media platforms) to manipulate audiences for strategic, political goals—in this case, one of the goals was to influence the U.S. election in 2016.

In our lab at the University of Washington (UW), we’ve been accidentally studying these information operations since early 2016. These recent developments offer new context for our research and, in many ways, confirm what we thought we were seeing—at the intersection of information operations and political discourse in the United States—from a very different view.

A few years ago, UW PhD student Leo Stewart initiated a project to study online conversations around the #BlackLivesMatter movement. This research grew to become a collaborative project that included PhD student Ahmer Arif, iSchool assistant professor Emma Spiro, and me. As the research evolved, we began to focus on “framing contests” within what turned out to be a very politicized online conversation.

Framing can be a powerful political tool.

The concept of framing has interesting roots and competing definitions (see Goffman, Entman, Benford and Snow). In simple terms, a frame is a way of seeing and understanding the world that helps us interpret new information. Each of us has a set of frames we use to make sense of what we see, hear, and experience. Frames exist within individuals, but they can also be shared. Framing is the process of shaping other people’s frames, guiding how other people interpret new information. We can talk about the activity of framing as it takes place in classrooms, through news broadcasts, political ads, or a conversation with a friend helping you understand why it’s so important to vote. Framing can be a powerful political tool.

Framing contests occur when two (or more) groups attempt to promote different frames—for example, in relation to a specific historical event or emerging social problem. Think about the recent images of the group of Central American migrants trying to cross the border into Mexico. One framing for these images sees these people as refugees trying to escape poverty and violence and describes their coordinated movement (in the “caravan”) as a method for ensuring their safety as they travel hundreds of miles in hopes of a better life. A competing framing sees this caravan as a chaotic group of foreign invaders, “including many criminals,” marching toward the United States (due to weak immigration laws created by Democrats), where they will cause economic damage and perpetrate violence. These are two distinct frames and we can see how people with political motives are working to refine, highlight, and spread their frame and to undermine or drown out the other frame.

In 2017, we published a paper examining framing contests on Twitter related to a subset of #BlackLivesMatter conversations that took place around shooting events in 2016. In that work, we first took a meta-level view of more than 66,000 tweets and 8,500 accounts that were highly active in that conversation, creating a network graph (below) based on a “shared audience” metric that allowed us to group accounts together based on having similar sets of followers.

“Shared Audience” Network Graph of Accounts in Twitter Conversations about #BlackLivesMatter and Shooting Events in 2016. Courtesy of Kate Starbird/University of Washington.

That graph revealed that, structurally, the #BlackLivesMatter Twitter conversation had two distinct clusters or communities of accounts—one on the political “left” that was supportive of #BlackLivesMatter and one on the political “right” that was critical of #BlackLivesMatter.

Next, we conducted qualitative analysis of the different content that was being shared by accounts on the two different sides of the conversation. Content, for example, like these tweets (from the left side of the graph):

Tweet: Cops called elderly Black man the n-word before shooting him to death #KillerCops #BlackLivesMatter

Tweet: WHERE’S ALL THE #BlueLivesMatter PEOPLE?? 2 POLICE OFFICERS SHOT BY 2 WHITE MEN, BOTH SHOOTERS IN CUSTODY NOT DEAD.

And these tweets (from the right side of the graph):

Tweet: Nothing Says #BlackLivesMatter like mass looting convenience stores & shooting ppl over the death of an armed thug.

Tweet: What is this world coming to when you can’t aim a gun at some cops without them shooting you? #BlackLivesMatter.

In these tweets, you can see the kinds of “framing contests” that were taking place. On the left, content coalesced around frames that highlighted cases where African-Americans were victims of police violence, characterizing this as a form of systemic racism and ongoing injustice. On the right, content supported frames that highlighted violence within the African-American community, implicitly arguing that police were acting reasonably in using violence. You can also see how the content on the right attempts to explicitly counter and undermine the #BlackLivesMatter movement and its frames—and, in turn, how content from the left reacts to and attempts to contest the counter-frames from the right.

Our research surfaced several interesting findings about the structure of the two distinct clusters and the nature of “grassroots” activism shaping both sides of the conversation. But at a high level, two of our main takeaways were how divided those two communities were and how toxic much of the content was.

Our initial paper was accepted for publication in autumn 2017, and we finished the final version in early October. Then things got interesting.

A few weeks later, in November 2017, the House Intelligence Committee released a list of accounts, given to them by Twitter, that were found to be associated with Russia’s Internet Research Agency (IRA) and their influence campaign targeting the 2016 U.S. election. The activities of these accounts—the information operations that they were part of—had been occurring at the same time as the politicized conversations we had been studying so closely.

Looking over the list, we recognized several account names. We decided to cross-check the list of accounts with the accounts in our #BlackLivesMatter dataset. Indeed, dozens of the accounts in the list appeared in our data. Some—like @Crystal1Johnson and @TEN_GOP—were among the most retweeted accounts in our analysis. And some of the tweet examples we featured in our earlier paper, including some of the most problematic tweets, were not posted by “real” #BlackLivesMatter or #BlueLivesMatter activists, but by IRA accounts.

To get a better view of how IRA accounts participated in the #BlackLivesMatter Twitter conversation, we created another network graph (below) using retweet patterns from the accounts. Similar to the graph above, we saw two different clusters of accounts that tended to retweet other accounts in their cluster, but not accounts in the other cluster. Again, there was a cluster of accounts (on the left, in magenta) that was pro-BlackLivesMatter and liberal/Democrat and a cluster (on the right, in green) that was anti-BlackLivesMatter and conservative/Republican.

Retweet Network Graph of Accounts in Twitter Conversations about #BlackLivesMatter and Shooting Events in 2016. Courtesy of Kate Starbird/University of Washington

Next, we identified and highlighted the accounts identified as part of the IRA’s information operations. That graph—in all its creepy glory—is below, with the IRA accounts in orange and other accounts in blue.

Retweet Network Graph plus IRA Troll Accounts. Courtesy of Kate Starbird/University of Washington

As you can see, the IRA accounts impersonated activists on both sides of the conversation. On the left were IRA accounts like @Crystal1Johnson, @gloed_up, and @BleepThePolice that enacted the personas of African-American activists supporting #BlackLivesMatter. On the right were IRA accounts like @TEN_GOP, @USA_Gunslinger, and @SouthLoneStar that pretended to be conservative U.S. citizens or political groups critical of the #BlackLivesMatter movement.

Ahmer Arif conducted a deep qualitative analysis of the IRA accounts active in this conversation, studying their profiles and tweets to understand how they carefully crafted and maintained their personas. Among other observations, Arif described how, as a left-leaning person who supports #BlackLivesMatter, it was easy to problematize much of the content from the accounts on the “right” side of the graph: Some of that content, which included racist and explicitly anti-immigrant statements and images, was profoundly disturbing. But in some ways, he was more troubled by his reaction to the IRA content from the left side of the graph, content that often aligned with his own frames. At times, this content left him feeling doubtful about whether it was really propaganda after all.

This underscores the power and nuance of these strategies. These IRA agents were enacting caricatures of politically active U.S. citizens. In some cases, these were gross caricatures of the worst kinds of online actors, using the most toxic rhetoric. But, in other cases, these accounts appeared to be everyday people like us, people who care about the things we care about, people who want the things we want, people who share our values and frames. These suggest two different aspects of these information operations.

First, these information operations are targeting us within our online communities, the places we go to have our voices heard, to make social connections, to organize political action. They are infiltrating these communities by acting like other members of the community, developing trust, gathering audiences. Second, these operations begin to take advantage of that trust for different goals, to shape those communities toward the strategic goals of the operators (in this case, the Russian government).

One of these goals is to “sow division,” to put pressure on the fault lines in our society. A divided society that turns against itself, that cannot come together and find common ground, is one that is easily manipulated. Look at how the orange accounts in the graph (Figure 3) are at the outside of the clusters; perhaps you can imagine them literally pulling the two communities further apart. Russian agents did not create political division in the United States, but they were working to encourage it.

That IRA accounts sent messages supporting #BlackLivesMatter does not mean that ending racial injustice in the United States aligns with Russia’s strategic goals or that #BlackLivesMatter is an arm of the Russian government.

Their second goal is to shape these communities toward their other strategic aims. Not surprisingly, considering what we now know about their 2016 strategy, the IRA accounts on the right in this graph converged in support of Donald Trump. Their activity on the left is more interesting. As we discussed in our previous paper (written before we knew about the IRA activities), the accounts in the pro-#BlackLivesMatter cluster were harshly divided in sentiment about Hillary Clinton and the 2016 election. When we look specifically at the IRA accounts on the left, they were consistently critical of Hillary Clinton, highlighting previous statements of hers they perceived to be racist and encouraging otherwise left-leaning people not to vote for her. Therefore, we can see the IRA accounts using two different strategies on the different sides of the graph, but with the same goal (of electing Donald Trump).

The #BlackLivesMatter conversation isn’t the only political conversation the IRA targeted. With the new data provided by Twitter, we can see there were several conversational communities they participated in, from gun rights to immigration issues to vaccine debates. Stepping back and keeping these views of the data in mind, we need to be careful, both in the case of #BlackLivesMatter and these other public issues, to resist the temptation to say that because these movements or communities were targeted by Russian information operations, they are therefore illegitimate. That IRA accounts sent messages supporting #BlackLivesMatter does not mean that ending racial injustice in the United States aligns with Russia’s strategic goals or that #BlackLivesMatter is an arm of the Russian government. (IRA agents also sent messages saying the exact opposite, so we can assume they are ambivalent at most).

If you accept this, then you should also be able to think similarly about the IRA activities supporting gun rights and ending illegal immigration in the United States. Russia likely does not care about most domestic issues in the United States. Their participation in these conversations has a different set of goals: to undermine the U.S. by dividing us, to erode our trust in democracy (and other institutions), and to support specific political outcomes that weaken our strategic positions and strengthen theirs. Those are the goals of their information operations.

One of the most amazing things about the internet age is how it allows us to come together—with people next door, across the country, and around the world—and work together toward shared causes. We’ve seen the positive aspects of this with digital volunteerism during disasters and online political activism during events like the Arab Spring. But some of the same mechanisms that make online organizing so powerful also make us particularly vulnerable, in these spaces, to tactics like those the IRA are using.

Passing along recommendations from Arif, if we could leave readers with one goal, it’s to become more reflective about how we engage with information online (and elsewhere), to tune in to how this information affects us (emotionally), and to consider how the people who seek to manipulate us (for example, by shaping our frames) are not merely yelling at us from the “other side” of these political divides, but are increasingly trying to cultivate and shape us from within our own communities.

Written by Kate Starbird

Asst. Professor of Human Centered Design & Engineering at UW. Researcher of crisis informatics and online rumors. Aging athlete. Army brat.

Why People Share: The Psychology of Social Sharing

We lately found this article on coschedule.com and though it’s business-focused, some lessons seem to be transferable to our sector. Below are the elements we suggest you have a look at and see how they resonate with your current practice on social media.


“People buy (and share content) from those that they know, like, and trust. Most sharing, as it turns out, is primarily dependent on the personal relationships of your readers. The data shows that the likelihood of your content being shared has more to do with your readers’ relationships to others than with their relationship to you.

The most common reasons people share something with others are pretty surprising. Let’s look at the data.

  1. To bring valuable and entertaining content to others.  49% say sharing allows them to inform others of products they care about and potentially change opinions or encourage action
  2. To define ourselves to others. 68% share to give people a better sense of who they are and what they care about
  3. To grow and nourish our relationships. 78% share information online because it lets them stay connected to people they may not otherwise stay in touch with
  4. Self-fulfillment. 69% share information because it allows them to feel more involved in the world
  5. To get the word out about causes or brands. 84% share because it is a way to support causes or issues they care about

It was also found that some users share as an act of “information management.” 73% of respondents said that they process information more deeply, thoroughly and thoughtfully when they share it.

As if that wasn’t enough, you also need to realize that good content comes with a high entertainment factor. Rather than a generic stock image, consider custom graphics or charts that present your content to readers in a brand new way. If you haven’t before, consider a video or infographic as a way to add more value, and more entertainment, to your content.

Connect Your Readers To Others

Your readers have an instinctual need to connect with others. Just look at the success of social networks like Facebook and Twitter. People like people.

In content marketing, the fabric of these connections is directly related to the content that we consume and share with our online network.

Here’s a small example: when is the last time that you left a comment on a post without sharing the post itself? Probably never. When we attach a conversation to a piece of content, we become very likely to share that content with others.

In addition, some readers will actually share their comment with a social share. The Facebook and Google+ commenting utilities prove how closely these two things are connected.

One way to do this is to try to end as many posts as possible with a question that our readers can answer in the comments. While they won’t always do it, the question will often get them thinking and help them apply the content.

Another option is to occasionally publish a controversial post. Overall, this is a good thing and helps people connect with others.

Make Them Feel More Valuable

In the New York Times study one respondent was quoted as saying that she enjoyed “getting comments that I sent great information and that my friends will forward it to their friends because it’s so helpful. It makes me feel valuable.”

This is pretty cool! Not only can your content help your readers become a subject matter expert in their field, but it can also help them look like one for their peers.

Why Facebook Is a Waste of Time—and Money—for Arts Nonprofits

The team from Artistic Activism takes a stand on an issue that is a major preoccupation for all non-profits. A bold move but are we ready to give up FB???

This article first appeared on Artnet.

Steve Lambert,

Why Facebook Is a Waste of Time—and Money—for Arts Nonprofits

The co-founder of the nonprofit Center for Artistic Activism explains why his organization has officially de-friended Facebook.

Facebook CEO Mark Zuckerberg in San Francisco, California. Photo: Josh Edelson/AFP/Getty Images.

Like many nonprofits, we use Facebook to connect with our audiences, and they use Facebook to stay in touch with us. It’s not our preferred way, but it’s where more than 4,000 people have chosen to stay informed about what we do at the Center for Artistic Activism. Part of our philosophy at the C4AA is to meet people where they are, and, undeniably, hundreds of millions of people (and some bots) are on Facebook. However, looking at the statistics provided by Facebook, we’ve come to realize that the connection we were after isn’t actually made.

That’s why we’ve decided to stop putting effort into Facebook. The world’s largest social network has become an increasingly inhospitable place for nonprofits.

We currently have 4,093 “fans” of our page on Facebook. For a scrappy organization focused on artistic activism, that’s not bad (especially since we never bought followers to boost our numbers). Those thousands came from years of hard work doing outreach.

From left: Steve Lambert, Rebecca Bray, and Stephen Duncombe, directors of the C4AA. Courtesy of Steve Lambert.

Stephen Duncombe and I started the organization around 2009, shortly after Facebook asked organizations to create “pages” to help differentiate from personal “profiles.” In those early years, we used our fan page to share the progress we were making to support artists and activists fighting corruption in West Africa, to help save lives in the opioid crisis, to get proper healthcare for LGBTQ people in Eastern Europe, and our work to make activism more creative, fun, and effective.

After trainings and other events, our page was especially active as new alumni from countries around the world joined to stay in touch. However, in recent years, the traffic dropped off.

Looking at the Numbers

During that time, we’ve grown significantly as an organization—adding staff positions, increasing programming—but I wouldn’t blame our Facebook followers for thinking the C4AA was dormant, if not dead.

They weren’t seeing everything we shared—and may not have been seeing anything. They’ve asked to hear from us, but Facebook decides if and when they actually do. And in reality, it’s not often. Here are the stats Facebook provides us:

Screenshot of C4AA's Facebook analytics. Courtesy of Steve Lambert.

This shows how many people (anyone, not exclusively fans of our page) have seen our posts over the past three months. With a few exceptions, you can see most posts don’t reach more than a tenth of the number who have opted to follow our page. In recent weeks, we’ve reached an average of around 3 percent.

This is by design. People think the Facebook algorithm is complicated, and it does weigh many factors, but reaching audiences through their algorithm is driven by one thing above all others: payment. Facebook’s business model for organizations is to sell your audience back to you.

In the past, you could boost your social media reach by writing better posts and including images and video. But in recent years, targeted spending on advertising has overtaken all other tips and tricks. To reach more people who already requested to hear from the C4AA, we’d have to give our donors’ money to Facebook to “boost” our posts.

Now, are we simply against paying Facebook? Do we not want to give our donors’ money to one of the largest corporations on the planet, one that has enriched its leadership and shareholders by not paying the artists, journalists, and everyday people who give the site value? Do we want to withhold support to a company that’s barely taken responsibility for enabling Russian disinformation to reach US citizens in an effort to undermine democratic elections? Do we think that Facebook is turning the internet from an autonomous, social democratic space into an expanding, poorly managed shopping mall featuring a food court of candied garbage and Jumbotrons blasting extreme propaganda that’s built on top of the grave of the free and open web? Yes, yes, yes, and yes. That’s why we’ve never been big fans, much less paid to use Facebook.

Why Facebook Is Bad News

However, for the sake of argument, let’s imagine that we accept that this is Facebook’s business model, and it is free to create its own rules on its private platform. Fine. There’s still a broader inequity to address.

Facebook’s pricing treats nonprofits and artists the same as a multinational corporation like Coca-Cola, a high-end neighborhood boutique hair salon, or a vitamin supplement scam. The advertising model makes no exceptions for nonprofits—even though we have nothing to sell and our mission, legally bound, is for the common good.

This difference in purpose is significant. It’s why the US government does not charge taxes to nonprofits, and the postal service offers reduced rates. Even other tech companies put nonprofits in a different category. PayPal charges less to process charitable donations and enables fundraising opportunities through partners like eBay.

At the C4AA, we use the messaging system Slack, and were delighted to learn it offers a significant discount to non-profits to upgrade from their free plan to the standard plan. That discount? 100 percent. To upgrade to the top plan, the Plus Plan, the discount is 85 percent. Slack partners with the non-profit TechSoup, which arranges discounted software, hardware, and support from for-profits to nonprofit organizations. One TechSoup partner, Google—yes, that Google—offers thousands of in-kind dollars for “ad grants” so nonprofits can compete to communicate alongside for-profit companies.

Facebook offers no such discount. It considers all communication from any organization to be a form of “advertising.” Facebook will take the money of anyone who pays—whether to sell products or discord.

Sure, we can keep posting there anyway for free, but fewer than 3 percent of our followers would know.

Meanwhile, the Facebook-using public—around two billion people—is unaware of what they are missing. My social network may consist of a mix of the causes I care about, artists who challenge my thinking, independent news organizations I trust, some friends and family, and even a few businesses I like. But what I select is not what I see—at least not entirely. And this is a system that puts artists and nonprofits at a disadvantage.

In the past two years, we’ve seen this problem get worse. After the 2016 election, the C4AA began considering this decision more seriously, and after much internal discussion among our leadership and a few board members, along with last week’s indictments, we felt it was time. As much as Facebook and Mark Zuckerberg claim to want to build community and bring the world closer together, their business decisions tell another story.

Looking Ahead

For some nonprofits, paying Facebook for access to supporters is a deal they’re willing to make. No judgment here. C4AA staff still use it to stay in touch with friends. Many organizations we work alongside use Facebook for advocacy efforts. We know for some it may not be a reasonable option to withdraw. We’re not insisting anyone needs to adhere to some arbitrary purity standard. We’ve just decided Facebook is not for us.

For now, we’ve found our email newsletters much more effective because at least we know the message reaches the subscribers’ inbox. And while we are no longer investing our time or our donors’ money into Facebook, it’s not a complete departure. We’re letting automated systems repost from our website and from other social networks.
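An automated reposting setup like the one described can be sketched with a small script that reads a site’s RSS feed and formats each new item as a status update. Everything here is a hypothetical illustration: the feed content is invented, and the actual posting step (to a social network’s API) is left as a simple `print`.

```python
import xml.etree.ElementTree as ET

# Invented RSS snippet standing in for a site's real feed.
SAMPLE_FEED = """<?xml version="1.0"?>
<rss version="2.0"><channel>
  <title>C4AA Updates</title>
  <item><title>New workshop announced</title>
        <link>https://example.org/workshop</link></item>
  <item><title>Latest newsletter</title>
        <link>https://example.org/news</link></item>
</channel></rss>"""

def extract_posts(feed_xml):
    """Parse an RSS feed and return (title, link) pairs for reposting."""
    root = ET.fromstring(feed_xml)
    return [(item.findtext("title"), item.findtext("link"))
            for item in root.iter("item")]

def format_repost(title, link, limit=280):
    """Build a short status update, truncated to a character limit."""
    text = f"{title} {link}"
    return text if len(text) <= limit else text[: limit - 1] + "…"

# In a real setup this loop would call a social network's posting API.
for title, link in extract_posts(SAMPLE_FEED):
    print(format_repost(title, link))
```

A cron job running a script like this is one low-maintenance way to keep a page populated without investing staff time in the platform itself.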

Leaving history’s biggest social network feels risky. We don’t want to lose those 4,000-plus people—though, in a way, they’ve been lost for a long time. And we remember: It’s not that big of a deal! This makes us only slightly more radical than the Unilever Corporation.

If you’re at a nonprofit and wondering what you can do, have a conversation with your leadership and make a conscious choice. Look at your Facebook stats. Are you reaching your audience? Is paying worth it? Is the money, content, and audience you give Facebook consistent with the goals and mission of your organization?
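Those questions can be made concrete with a bit of arithmetic. The numbers below are illustrative placeholders, not real C4AA figures: a minimal sketch of the two ratios worth pulling out of your page stats.

```python
def organic_reach_rate(reached, followers):
    """Share of followers who actually see an unpaid post."""
    return reached / followers

def cost_per_reached_follower(ad_spend, paid_reach):
    """What each additional impression costs when you pay to boost a post."""
    return ad_spend / paid_reach

# Illustrative: 4,000 followers, 120 of whom see a typical unpaid post.
organic = organic_reach_rate(120, 4000)
print(f"Organic reach: {organic:.1%}")  # prints "Organic reach: 3.0%"

# Hypothetical boost: $50 to reach 2,500 extra people.
cost = cost_per_reached_follower(50, 2500)
print(f"Cost per person reached: ${cost:.2f}")
```

Comparing that per-person cost against, say, the cost of acquiring an email subscriber (whose inbox you reach every time) is one way to decide whether paying is worth it for your organization.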

The Center for Artistic Activism is at C4AA.org. You can sign up for the Center for Artistic Activism email newsletter here. You could also follow us on Facebook, but what would be the point?

Steve Lambert is an associate professor of new media at the State University of New York at Purchase College, a co-founder and co-director of the Center for Artistic Activism, and an artist whose work can be seen at visitsteve.com.

Can we enter the fight against extremism?

Very useful for activists: maybe homophobic campaigns can be identified as extremism and erased from YouTube!

Terrorism is an attack on open societies, and addressing the threat posed by violence and hate is a critical challenge for us all. Google and YouTube are committed to being part of the solution. We are working with government, law enforcement and civil society groups to tackle the problem of violent extremism online. There should be no place for terrorist content on our services.

While we and others have worked for years to identify and remove content that violates our policies, the uncomfortable truth is that we, as an industry, must acknowledge that more needs to be done. Now.

We have thousands of people around the world who review and counter abuse of our platforms. Our engineers have developed technology to prevent re-uploads of known terrorist content using image-matching technology. We have invested in systems that use content-based signals to help identify new videos for removal. And we have developed partnerships with expert groups, counter-extremism agencies, and the other technology companies to help inform and strengthen our efforts.

Today, we are pledging to take four additional steps.

First, we are increasing our use of technology to help identify extremist and terrorism-related videos. This can be challenging: a video of a terrorist attack may be informative news reporting if broadcast by the BBC, or glorification of violence if uploaded in a different context by a different user. We have used video analysis models to find and assess more than 50 per cent of the terrorism-related content we have removed over the past six months. We will now devote more engineering resources to apply our most advanced machine learning research to train new “content classifiers” to help us more quickly identify and remove extremist and terrorism-related content.

Second, because technology alone is not a silver bullet, we will greatly increase the number of independent experts in YouTube’s Trusted Flagger programme. Machines can help identify problematic videos, but human experts still play a role in nuanced decisions about the line between violent propaganda and religious or newsworthy speech. While many user flags can be inaccurate, Trusted Flagger reports are accurate over 90 per cent of the time and help us scale our efforts and identify emerging areas of concern. We will expand this programme by adding 50 expert NGOs to the 63 organisations who are already part of the programme, and we will support them with operational grants. This allows us to benefit from the expertise of specialised organisations working on issues like hate speech, self-harm, and terrorism. We will also expand our work with counter-extremist groups to help identify content that may be being used to radicalise and recruit extremists.

Third, we will be taking a tougher stance on videos that do not clearly violate our policies — for example, videos that contain inflammatory religious or supremacist content. In future these will appear behind an interstitial warning and they will not be monetised, recommended or eligible for comments or user endorsements. That means these videos will have less engagement and be harder to find. We think this strikes the right balance between free expression and access to information without promoting extremely offensive viewpoints.

Finally, YouTube will expand its role in counter-radicalisation efforts. Building on our successful Creators for Change programme promoting YouTube voices against hate and radicalisation, we are working with Jigsaw to implement the “Redirect Method” more broadly across Europe. This promising approach harnesses the power of targeted online advertising to reach potential Isis recruits, and redirects them towards anti-terrorist videos that can change their minds about joining. In previous deployments of this system, potential recruits have clicked through on the ads at an unusually high rate, and watched over half a million minutes of video content that debunks terrorist recruiting messages.

We have also recently committed to working with industry colleagues—including Facebook, Microsoft, and Twitter—to establish an international forum to share and develop technology and support smaller companies and accelerate our joint efforts to tackle terrorism online.

Collectively, these changes will make a difference. And we’ll keep working on the problem until we get the balance right. Extremists and terrorists seek to attack and erode not just our security, but also our values; the very things that make our societies open and free. We must not let them. Together, we can build lasting solutions that address the threats to our security and our freedoms. It is a sweeping and complex challenge. We are committed to playing our part.
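To give a sense of what the “content classifiers” mentioned in the statement above mean in practice, here is a toy naive Bayes text classifier in pure Python. This is only an illustrative sketch: the training phrases are invented placeholders, and YouTube’s production systems are vastly more sophisticated, operating on video, audio, and metadata rather than short text snippets.

```python
import math
from collections import Counter, defaultdict

class NaiveBayesClassifier:
    """Tiny word-count naive Bayes: the simplest kind of content classifier."""

    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> number of examples
        self.vocab = set()

    def train(self, text, label):
        words = text.lower().split()
        self.word_counts[label].update(words)
        self.label_counts[label] += 1
        self.vocab.update(words)

    def classify(self, text):
        words = text.lower().split()
        total_docs = sum(self.label_counts.values())
        best_label, best_score = None, float("-inf")
        for label in self.label_counts:
            # Log prior plus log likelihood with add-one smoothing.
            score = math.log(self.label_counts[label] / total_docs)
            total_words = sum(self.word_counts[label].values())
            for w in words:
                count = self.word_counts[label][w] + 1
                score += math.log(count / (total_words + len(self.vocab)))
            if score > best_score:
                best_label, best_score = label, score
        return best_label

clf = NaiveBayesClassifier()
# Invented placeholder examples, for illustration only.
clf.train("join our cause fight attack enemies", "flag")
clf.train("violence against them is justified", "flag")
clf.train("cute cat plays piano", "ok")
clf.train("recipe for chocolate cake", "ok")
print(clf.classify("they deserve violence"))  # prints "flag"
```

The design point the statement makes still holds for this sketch: a classifier like this only surfaces candidates for review; the hard, context-dependent calls (news footage versus glorification) remain with human experts.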

Express yourself(ie) !

Expressing ourselves is at the heart of every campaign.

Our expression is what makes us visible, what makes us liked or disliked, and what brings us enemies and allies.

Expressions come in many forms, and each campaigner faces an early, crucial choice: whose expression are we considering, and in what form?

The answer to the first question is very often “everyone’s”: while many campaigns choose to have celebrities, moral authorities or selected individuals carry a standard message, many others increasingly choose to call for public expression.

Public expression campaigns have the combined benefit of generating original content, which can serve as a basis for advocacy (for example, when the campaign aims at collecting powerful stories that will then be brought to decision makers), and of reinforcing the community by drawing more people into the action.

But inviting the public to express themselves is not necessarily easy.

The answer to the second question is often “selfies”. Many campaigns are indeed based on people sending selfies, which is arguably the easiest form of participation, both for those who contribute and for those in charge of validating the content: a split second tells you whether a photo is OK to post, or to remain on a Facebook page or Tumblr account, whereas written contributions often take long to read, and it can at times be difficult to determine whether a piece of writing is acceptable.

Most selfie campaigns will be based on people sending a picture of themselves holding a sign with their message.

But as time goes by, selfie campaigns have become quite worn out, and campaigners need fresh ideas for public expression campaigns.

In a previous article, we documented the ‘Kiss the Pride’ initiative which invited the public to send ‘Rainbow lips’ selfies.

We also documented how nudity and sexuality are being used in selfie campaigns.

There are many ways in which a selfie campaign can be tailored to the campaign’s message.

A feminist campaign once asked the public to deconstruct images of masculinity/patriarchy.


A campaign from an LGBT organisation, which wanted to make the point that legal and social obstacles to expressing your full sexuality left people incomplete, asked the public to send half portraits of themselves and created a giant display of these submissions.


In some contexts, coming out as LGBT is just too risky to allow for a selfie campaign. BUT there are creative ways around it. This inability to show your face publicly can become the very message of your campaign. French photographer Philippe Castetbon created a campaign in which people sent creative shots of themselves in which they remained unidentifiable. The campaign’s message was clearly that repressive legislation and social climates deprive people of the very basis of their identity: their image. In places where criminal laws are in place, selfies can feature people’s faces masked by prison bars.


Holding a mirror in front of your face when you take the selfie is also a powerful way to demonstrate how the person looking at you (and maybe condemning you) could easily be in your place.

 

Need more ideas to inspire your next selfie campaign? Check out

Buzzfeed

Improvephotography

If you feel your public needs advice on taking good selfies, check out these guides, and see below a nice infographic from the Post Planner site.

 

[Infographic: “Taking a Selfie,” via Post Planner]

 

 

 

Virtual reality gets real in latest campaigns

It’s difficult to imagine how LGBT campaigning can integrate VR. Would an experience of rejection and discrimination filmed in VR and brought to the viewer be an effective tool? VR has been called the “empathy machine,” but there is little experimentation yet as to how far this goes. In any case, there are bound to be many discussions on this in the future, so LGBT campaigners should probably get themselves on top of things.

From Greenpeace’s Mobilisation Lab


Learning from the frontlines of VR at Greenpeace and beyond
Since the first mission to remote Amchitka, Alaska, in 1971, Greenpeace has heightened awareness by pushing the boundaries of reporting. Storytelling – and bringing people into the conversation about what’s at stake – is always evolving as technologies, cultural sensitivities, and the problems themselves shift.

Journey to the Arctic virtual reality

This summer, in keeping with this evolution and tradition of experimentation, Greenpeace launched A Journey to the Arctic. The project was the organisation’s first virtual reality (VR) campaign, about the rapid and devastating impact of climate change in the Arctic.

Using new technology – not to mention an expensive and uncharted one that asks viewers to wear silly masks that can cause motion sickness – is always a leap of faith. How did Greenpeace pull this campaign off and what can we learn?

Taking People to a Place Nobody Ever Sees

A Journey to the Arctic depicts the sublime beauty of Northern Svalbard, a pristine Norwegian archipelago, immersing us within the beautiful, remote, and yet integrally important Arctic region that has become increasingly fragile due to human-driven climate change. With a VR viewer strapped to your face and your head swivelling around to explore, you begin your journey in front of the Arctic Sunrise as it breaks its way through the ice to Svalbard.

A Journey to the Arctic slowly takes viewers deep inside a glacier, with hints of all the wildlife hidden within the snow and ice. You even see a mother polar bear with a cub, curiously investigating the camera – or rather, the six GoPro cameras used for 360-degree video.

Bringing people from around the world to the frontlines of climate change is critical, especially as  Arctic ice melt accelerates. Yet doing so without further damaging the environment requires a mediated experience. Rasmus Törnqvist, the project’s Director of Photography, chose VR for its power to transport people and elicit emotional responses.

Empathy and Ecotourism 2.0

Törnqvist, who  began working for Greenpeace as a campaigner 11 years ago, told us that VR “provides a unique opportunity to take millions of people to the arctic,” calling it Ecotourism 2.0.

The VR experience is still too new for definitive results, Törnqvist says, but initial findings are promising. A Journey to the Arctic was created with face-to-face campaigning in mind. The film may be viewed anywhere but at 3.5 minutes it was made to test how VR works with campaigners and fundraisers on the street.

When people on the street see the VR video, “most of the time they’re amazed and ready to support,” said Törnqvist. In some instances, campaigners have credited VR for more than doubling donations. Törnqvist hopes that within a few more months of campaigning, they will have provable stats to see just how effective this new technology is for Greenpeace.

Getting to Behavior Change (and Impact)

This positive response mirrors results found by Stanford University’s Virtual Human Interaction Lab. The Lab’s former Hardware Manager, Cody Karutz, told us by email that several studies show VR can nourish empathy and, more importantly, behavior change, in relation to the environment. One study showed a relationship between immersive video and reductions in hot water use. Two other studies found that VR can be used to give people an animal’s perspective and thereby create greater feelings of connection between the self and nature.

Karutz told us that A Journey to the Arctic “gives the user enough time to accommodate to the Arctic spaces.” However, he says, “the piece is still focused on showcasing an exotic space.” This helps reduce one’s psychological distance from the issue, which is important. But bringing the issue home to the user’s local reality is integral to the work’s success. The campaigner, the human handing the viewer the VR goggles, needs to frame the story and give the user a hook; that framing is an integral part of the piece.

An Empathy Machine is not Enough

The conversation around VR in tech spaces tends to highlight its empathetic powers. In a 2015 TED talk, Vrse CEO and founder Chris Milk called VR an “empathy machine.” Törnqvist takes inspiration from Milk’s work but rejects that framing, instead calling VR an “amplifier of emotion.”

The technology can isolate, enrage, or build empathy; the context, framing, and work is what makes the difference. As Jeremy Bailenson, founding director of the Stanford lab on VR, said, “It’s up to us to choose.”

Ainsley Sutherland, a fellow at BuzzFeed Open Lab who studied VR and empathy while at MIT, has also been critical of efforts to cast empathy creation as the most important aspect of virtual reality. Sutherland wrote that VR “cannot reproduce internal states, only the physical conditions that might influence that.” There are hundreds of relational factors, such as where you use VR, how it is presented to the user, and by whom the story is framed that can create, hinder, or alter the emotional connection between the VR environment and the viewer.

The VR Experience is More than Goggles

Contexts (and campaigners) frame the story. Greenpeace’s Törnqvist notes the continued primary and powerful role of the campaigner – and campaign. Greenpeace found that the setting in which the VR is shared influences the user experience. On a crowded street, few people will agree to sit and wear awkward headgear. Those that do have a less immersive experience than users at festivals or other locations. Törnqvist attributed this to a more relaxed, convivial setting. The quality of the VR experience matters, but context can make or break the VR as well.

Törnqvist tells us that anyone who says they know how to make great VR films is either lying or from the future. However, he and others have some important lessons based on countless hours of filming. In largely stationary shots that allow the viewer to control where they look, building a didactic narrative is less effective.

Place as story. Evan Wexler, Technical Director and Cinematographer for On the Brink of Famine, says the key value of VR is building an experience of the site itself. Wexler calls this “place as story.” We see this in A Journey to the Arctic when the narrator invites us, upon arrival in Svalbard, to simply “just look around.” Wexler and Törnqvist both note that it’s important to find the right location on which to focus the viewer’s attention – the place where their presence may have a transformative effect.

Positive emotions are more powerful. Törnqvist also found that positive emotions tend to create more powerful experiences. He sought Svalbard as an environment still largely untouched by humans in order to “offer the same awe and passion that we [at Greenpeace] feel about the planet.” The media shows what sublime landscape is at stake, not what is already lost.

Create depth. Wexler and Törnqvist also discussed the importance and challenge of creating depth. 360 degree stationary camera rigs do not offer a large depth of field. Have the key subject nearby and other objects of value at middle and far distances to create a richer environment. This obviously presented some challenges in Svalbard, a land largely comprised of snow.

Where’s the audience? The empathy and impact of any communications medium depends on the reader or viewer. In his TED talk, Chris Milk points out the importance of connecting his film for the United Nations, Clouds Over Sidra, to those with the power to make a difference.

Clouds over Sidra - Virtual Reality

The UN screened Milk’s film about Sidra, a 12-year-old Syrian girl in the Zaatari Refugee Camp, at the World Economic Forum’s meeting in Davos, Switzerland. It’s useful for campaigns to consider how and where their targeted audiences will view VR stories.

Where to Go with Virtual Reality

Karutz notes that there is a great lack of interaction in most VR video, including A Journey to the Arctic. Without “embodied engagement with the user and the VR environment,” Karutz says, there could be less lasting behavior change. This can be mediated by the campaigners, but Greenpeace is already working on pushing VR even further.

Pete Speller of Greenpeace International is working with The Feelies, a multi-sensory design team, and Alchemy VR, experts in creating compelling virtual reality narrative experiences, on a VR project that takes viewers inside Sawré Muybu village, home of the Munduruku Indigenous People in the Amazon rainforest.

In the Tapajós project, as it’s called, multi-sensory viewing pods will complement the VR film to create an immersive experience incorporating sounds, imagery, motion, smell and touch. The goal is to create a deeper connection to the Munduruku people and Amazon rainforest. The work will be publicly launched in Rio de Janeiro in early 2017.

“A fundamental of Greenpeace has always been the act of bearing witness,” Törnqvist told me. “Now, with VR, we have an opportunity for anyone to do so.” VR is a new way of telling stories but using it effectively requires a creative coupling with all the old tools campaigners have been honing for decades. Finding that balance remains the challenge.