
Q U I C K   L I N K S

Believing that Aristotelian rhetoric offered an unsurpassed guide to knowledge of human nature and the art of controlling/inflaming “the passions,” Hobbes made a free translation of Aristotle’s Rhetoric, dictated to William Cavendish (later 3rd earl of Devonshire) while Hobbes was his tutor.
  Learn more about Thomas Hobbes’ textbook of rhetorized psychology, A Briefe of the Art of Rhetorique (1st edn., 1637) in the Editor’s Introduction for Lib. Cat. No. THOB1637.

She-philosopher.com’s detailed study of California’s flawed “Good Neighbor Fence Act of 2013” (California Assembly Bill 1404) critiques our postmodern resort to militant ignorance and a demagogic politics of certainty.
  I believe that a classical agonistic politics of persuasion — not the polling and data-driven demagoguery (calculated appeals designed to manipulate us) which controls policy-making today — best serves the type of pluralist democratic society to which many of us aspire.

An IN BRIEF topic promoting the more ethical rhetoric of critical pluralism — an art of engagement & confrontation, born of respect for discomfiting difference.
  Among other useful aphorisms you’ll find there:
  “Difference must be not merely tolerated, but seen as a fund of necessary polarities between which our creativity can spark like a dialectic. Only then does the necessity for interdependence become unthreatening....” —Audre Lorde, Sister Outsider (1984)

The PBS NewsHour has produced an excellent series on the ways in which data aggregators and brokers like Facebook weaponize metadata (e.g., for psychographics, psychographic filtering, and other high-tech forms of psychological warfare) to manipulate our behavior: keeping users addicted to social media, and encouraging us to promote, buy, consume, vote (or not vote), mobilize, fear, hate, believe and spread lies, lean into “group think,” etc.
1. Part 1 of 4 (“How Facebook’s News Feed Can Be Fooled into Spreading Misinformation”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 4/25/2018).
  SUMMARY: “Facebook’s news feed algorithm learns in great detail what we like, and then strives to give us more of the same — and it’s that technology that can be taken advantage of to spread junk news like a virus. Science correspondent Miles O’Brien begins a four-part series on Facebook’s battle against misinformation that began after the 2016 presidential election.”
2. Part 2 of 4 (“Online Anger Is Gold to this Junk-News Pioneer”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 5/2/2018).
  SUMMARY: “Meet one of the Internet’s most prolific distributors of hyper-partisan fare. From California, Cyrus Massoumi caters to both liberals and conservatives, serving up political grist through various Facebook pages. Science correspondent Miles O’Brien profiles a leading purveyor of junk news who has hit the jackpot exploiting the trend toward tribalism.”
3. Part 3 of 4 (“Why We Love to Like Junk News that Reaffirms our Beliefs”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 5/9/2018).
  SUMMARY: “Facebook is exquisitely designed to feed our addiction to hyper-partisan content. In this world, fringe players who are apt to be more strident end up at the top of our news feeds, burying the middle ground. Science correspondent Miles O’Brien reports on the ways junk news feeds into our own beliefs about politics, institutions and government.”
  Most disturbing about this episode: the ways in which we have refashioned independent thought in terms of “confirmation bias” (for Betty Manlove, anti-establishment news which she perceives as edgy, ergo true):
  “[BETTY MANLOVE:] I believe what I want to believe. I’m too much of an independent thinker to allow emotions to take over. And news is news and opinion is opinion, and so I just go for the true news.
  “[MILES O’BRIEN:] But finding what is true in her news feed is not so easy. She has been convinced Barack Obama was born in Kenya… [...] and Parkland school shooting survivor David Hogg is a fraud.  ¶   So, where do these ideas come from? From the filter bubble created on Facebook by liking a post or clicking on a targeted ad that unwittingly makes users followers of a hyperpartisan page.  ¶   In the mix, some misinformation from Russia. Her grandson helped her find that out by going to a site on Facebook for users to see if they have liked any pages linked to Russia’s Internet Research Agency.” (n. pag.)
  As commentators were quick to point out, Manlove’s beliefs didn’t originate with her Facebook news feed, which O’Brien elsewhere confirms: “Neither Betty nor Gabe say their opinions have been swayed on Facebook, just hardened.”
4. Part 4 of 4 (“Inside Facebook’s Race to Separate News from Junk”) in Miles O’Brien’s reporting for his weekly segment on the Leading Edge of Technology (first aired on 5/16/2018).
  SUMMARY: “At Facebook, there are two competing goals: keep the platform free and open to a broad spectrum of ideas and opinions, while reducing the spread of misinformation. The company says it’s not in the business of making editorial judgments, so they use fact-checkers, artificial intelligence and their users. Can they stop junk news from adapting? Science correspondent Miles O’Brien reports.”
5. A supplementary episode in Paul Solman’s Making Sen$e series, “Why We Should Be More Like Cats than Dogs When It Comes to Social Media” (first aired 5/17/2018).
  SUMMARY: “Computer scientist and virtual reality pioneer Jaron Lanier doesn’t mince words when it comes to social media. In his latest book, Ten Arguments for Deleting Your Social Media Accounts Right Now, [he] says the economic model is based on ‘sneaky manipulation.’ Economics correspondent Paul Solman sits down with Lanier to discuss how the medium is designed to [engage] us and how it could hurt us.” (n. pag.)
  One exchange of note:
  “[JARON LANIER:] ... There’s sort of the cognitive extortion racket now, where the idea is that, you know what, nobody’s going to know about your book, nobody’s going to know about your store, nobody’s going to know about your candidacy unless you’re putting money into these social network things.
  “[PAUL SOLMAN:] Right.  ¶   All that information we share about ourselves online, Lanier argues, is not only used to sell us stuff, but to manipulate our civic behavior in uncivilly destabilizing ways.  ¶   Just look at the spread of fake news and the Cambridge Analytica scandal.
  “[JARON LANIER:] In the last presidential election in the U.S., what we saw was targeted nihilism or cynicism, conspiracy theories, paranoia, negativity at voter groups that parties were trying to suppress.  ¶   The thing about negativity is, it comes up faster, it’s cheaper to generate, and it lingers longer. So, for instance, it takes a long time to build trust, but you can lose trust very quickly.
  “[PAUL SOLMAN:] Right, always easier to destroy than to build.
  “[JARON LANIER:] So, the thing is, since these systems are built on really quick feedback, negativity is more efficient, cheaper, more effective. So if you want to turn an election, for instance, you don’t do it with positivity about your candidate. You do it with negativity about the other candidate.” (n. pag.)
  And from another exchange of note:
  “[JARON LANIER:] So, we’re dealing with statistical effect.  ¶   So let’s say I take a million people, and for each of them, I have this online dossier that’s been created by observing them in detail for years through their phones. And then I send out messages that are calculated to, for instance, make them a little cynical on Election Day if they were tending to vote for a candidate I don’t like.  ¶   I can say, without knowing exactly which people I influenced — let’s say 10 percent became 3 percent less likely to vote because I got them confused and bummed out and cynical. It’s a slight thing, but here’s something about slight changes.  ¶   When you have slight changes that you can predict well, and you can use them methodically, you can actually make big changes.” (n. pag.)
  And again:
  “[JARON LANIER:] Well, it’s even a little sneakier than that, because, for instance, they might be sending you notifications about singles services because, statistically, people who are in the same grouping with you get a little annoyed about that, and that engages them a little bit more.” (n. pag.)
  And finally:
  “[PAUL SOLMAN:] So, how to become a cat? Lanier has long argued that we have to force the social media business model to change, insisting companies should be paid by users, instead of third-party advertisers, subscription, instead of supposedly free TV.” (n. pag.)
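  Lanier’s arithmetic is easy to make concrete. Below is a minimal Monte Carlo sketch in Python — every number is hypothetical, loosely echoing the figures he cites — of how a slight, statistically predictable nudge to individual turnout probabilities aggregates into a sizable expected vote deficit:

```python
import random

random.seed(42)  # reproducible illustration

POPULATION = 1_000_000   # hypothetical pool of targeted voters
BASE_TURNOUT = 0.60      # hypothetical baseline probability that each one votes
REACHED = 0.10           # "10 percent" effectively nudged by the messaging
NUDGE = 0.03             # "3 percent less likely to vote"

def simulate(nudged: bool) -> int:
    """Count votes cast by the targeted population, with or without the nudge."""
    votes = 0
    for i in range(POPULATION):
        p = BASE_TURNOUT
        if nudged and i < POPULATION * REACHED:  # the reached subgroup
            p *= 1 - NUDGE
        if random.random() < p:
            votes += 1
    return votes

print(f"votes without targeted messaging: {simulate(nudged=False):,}")
print(f"votes with targeted messaging:    {simulate(nudged=True):,}")
# expected deficit = 1,000,000 * 0.10 * 0.60 * 0.03 = 1,800 votes
print(f"expected suppression: {POPULATION * REACHED * BASE_TURNOUT * NUDGE:,.0f} votes")
```

  No individual in the pool can tell they were influenced, yet the expected deficit (here, about 1,800 votes per million people targeted) is of the same order as the narrow margins that decide close state contests — which is precisely Lanier’s point about slight, methodical, well-predicted changes.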

NEW  And another PBS NewsHour special about unregulated, predatory high-tech (in this case, companies using Big Data and psychographics to actively target people’s addictions): “How Social Casinos Leverage Facebook User Data to Target Vulnerable Gamblers” (first aired 8/13/2019).
  SUMMARY: “Every year, more people are playing games on their phones, and a category of apps called social casinos has quickly become a multi-billion dollar industry. But are game developers targeting vulnerable users, with Facebook’s help and massive trove of personal data? Nate Halverson of Reveal at the Center for Investigative Reporting has the story of this treacherous platform for addiction.”
  The piece has drawn the usual conservative cries about the need for building character by taking individual responsibility for our choices: scil., “My GOD.....Is ANYONE personally responsible for their own actions anymore?” (comment posted by “guitarman121”). Presumably, this reaction was triggered not just by Ms. Kelly’s consensual participation in her own victimization, but also by her choice to sue the social casino industry that targeted her so successfully: “[NATE HALVERSON:] Suzie Kelly joined a lawsuit last year in the state of Washington, where Big Fish Casino is based, arguing that the game constitutes illegal gambling, and she is asking for her money back. She is now getting help for her gambling addiction, and says she no longer spends money on Big Fish. But, she [along with her family] is still dealing with near financial ruin from the game.” (n. pag.)
  As this case study makes clear, cyberspace is no longer a fair playing field for most of us (humans are no match for emotionless calculators armed with an endless supply of Big Data). We know that people with “very severe gambling problems, or gambling-like problems [...] can’t just walk away,” even in the brick-and-mortar world, which is why “real casinos would be required to cut [users like Suzie Kelly] off, or face big fines. But there are no regulations on social casino games.”
  Plus, the data-driven virtual world is even more adept at manipulating our human vulnerabilities: “He said social casino games appear to be five times more addictive than traditional casinos.” (n. pag.)
  “[NATE HALVERSON:] Do we want hyper-targeted ads from beer companies to alcoholics? Do we want hyper-targeted ads from casinos to gambling addicts?
  “[SAM LESSIN:] No, of course, we don’t want those things, right? Like, no thinking person is like, that’s great. But then the question is, well, OK, like, let’s be really clear, what rule do you want to write? Right? And how are you going to enforce that rule?” (n. pag.)
  Personal responsibility is all well and good, but there’s no way a single flawed individual goes up against a corporate predator in the global economy and wins!
  We need the protections of prudent regulatory capitalism if human beings are to come out ahead in a brave new data-centric world.

+

NEW  A tentative step in this direction (reining in the aggressive monetizing of Internet user data by tech firms large and small) has been taken by the California state legislature, with its California Consumer Privacy Act (Assembly Bill 375, as amended by Senate Bill 1121).
  Unfortunately, as documented here (sidebar entry), that legislation (which takes effect 1/1/2020) was seriously flawed from the get-go, and already needs a make-over.

+

NEW  Twitter has drawn praise for its November 2019 decision to limit “the unprecedented power of digital platforms to monetize speech that deceives, divides and weakens democratic discourse” by banning political ads, but critics argue that CEO Jack Dorsey’s regulatory “solution won’t work”: scil. the op-ed by Ellen P. Goodman and Karen Kornbluh, “The More Outrageous the Lie, the Better for Facebook’s Bottom Line: As long as a social platform’s financial returns align with outrage, the site will be optimized for distributing disinformation” (Los Angeles Times, 11/10/2019, p. A23).
  Goodman & Kornbluh point out that the social-media “platforms want to make this debate about free speech, not about how their algorithms and use of personal data amplify speech. The conversation they want to avoid is about how they make money. That’s why it’s important to focus on ... structural, not content-based, regulation.” (A23)

+

NEW  Katie Fitzpatrick’s book review for The Nation (vol. 308, no. 14, 13 May 2019: 27–28 and 30) of Shoshana Zuboff’s The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power (2019) raises yet more questions about the possibility of regulating our way out of the kind of large-scale social engineering projects on the digital horizon, including the seemingly ubiquitous quest (by employers, landlords, merchants, banks, insurers, hospitals, schools, etc.) to profit from data-driven behavior modification (according to a Michael Serazio op-ed reprinted in the 9/8/2019 issue of the Los Angeles Times, even sports fans want in on this action!).
  Fitzpatrick believes that organized resistance (e.g., by unions) is still our best hope, citing as a model the 2018 wildcat strike by West Virginia teachers — who were protesting not just state austerity, but also the introduction of a workplace wellness program called Go365 “that would monitor their health, rewarding points for exercise and good behavior.”
  I believe the situation is dire enough to require a creative mix of both/and solutions.

To more effectively combat the rise of data-driven demagoguery, critics like Victor Pickard recommend that we “claw back the Internet from unaccountable monopolies” and reinvest in public-service journalism (e.g., local news, investigative journalism, policy reporting).
  In his article advocating for media democracy, “Breaking Facebook’s Grip: To renew journalism, we must take back the Internet from monopolies” (The Nation, 21 May 2018, vol. 306, no. 15, pp. 22–24), Pickard calls for an antitrust investigation to look into “how Facebook exploits its control over” Big Data and for “policy interventions that rein in Facebook and redistribute revenue as part of a new regulatory system designed to address the digital giants’ negative impacts on society.”
  Pickard recommends establishing an independent social-media regulatory agency to implement a “broader, bolder vision” of government regulation, moving beyond typically “weak self-regulation that will fade over time” to “real public oversight” and the pursuit of “redistributive measures” (V. Pickard, 23).
  Sensitive to the need for “preventing government overreach,” Pickard suggests a public-media tax, which would generate resources for a journalism trust fund, insulated from government influence, to support independent journalism (V. Pickard, 24).

For more on the growing dispute over Russian trolls using data-driven demagoguery in the digital agon to foment division and subvert pluralist democratic societies, see:
1. The PBS NewsHour interview with Kathleen Hall Jamieson, author of Cyberwar: How Russian Hackers and Trolls Helped Elect a President (first aired 11/1/2018).
  SUMMARY: “Did the involvement of Russian trolls and hackers swing the 2016 presidential election? Kathleen Hall Jamieson, author of Cyberwar, believes it is ‘highly probable’ that they did. She joins Judy Woodruff to discuss her research on how the Russians found the right messages and delivered them to key audiences using social media — as well as how we can manage foreign election meddling in the future.”
  Of note, Jamieson also blames the mainstream media — not just the new social media — for propagating “Russian stolen content hacked from Democratic accounts illegally.”
  “[KATHLEEN HALL JAMIESON:] The social media platforms have made many changes to try to minimize the likelihood that they will be able to replicate 2016. They have increased the likelihood that they’re going to catch anybody trying to illegally buy ads as a foreign national, for example.  ¶   The place that we haven’t seen big changes is with the press. We haven’t heard from our major media outlets. If tomorrow, somebody hacked our candidates and released the content into the media stream, how would you cover it? Would you cover it the same? And would you assume its accuracy, instead of questioning it and finding additional sourcing for it, before you release it into the body politic?  ¶   I would like to know what the press is going to do confronted with the same situation again.  ¶   I do have some sense of what the social media platforms will do.” (n. pag.)
2. Aaron Maté’s article, “New Studies Show Pundits Are Wrong about Russian Social-Media Involvement in US Politics: Far from being a sophisticated propaganda campaign, it was small, amateurish, and mostly unrelated to the 2016 election” (posted to The Nation website on 12/28/2018).
  According to Maté, Russian trolls “were actually engaging in clickbait capitalism: targeting unique demographics like African Americans or evangelicals in a bid to attract large audiences for commercial purposes. Reporters who have profiled the IRA have commonly described it as ‘a social media marketing campaign.’ Mueller’s indictment of the IRA disclosed that it sold ‘promotions and advertisements’ on its pages that generally sold in the $25-$50 range. ‘This strategy,’ Oxford observes, ‘is not an invention for politics and foreign intrigue, it is consistent with techniques used in digital marketing.’ New Knowledge notes that the IRA even sold merchandise that ‘perhaps provided the IRA with a source of revenue,’ hawking goods such as T-shirts, ‘LGBT-positive sex toys and many variants of triptych and 5-panel artwork featuring traditionally conservative, patriotic themes.’” (n. pag.)

In his book Zucked: Waking Up to the Facebook Catastrophe (Penguin Press, 2019), the tech venture capitalist, early mentor to Mark Zuckerberg, and Facebook investor, Roger McNamee “also proposes [like many critics of social media before him, including Jaron Lanier] that digital platforms ditch advertising for subscription-based models (think Netflix). This, he hopes, would tame political microtargeting and end the click race among digital platforms. Funded by subscriptions, the platforms would not need to worry about selling their users’ ‘headspace’ to advertisers.” (Evgeny Morozov’s book review of Zucked, “A Former Social Media Evangelist Unfriends Facebook,” posted to The Washington Post website, 2/14/2019, n. pag.)
  But Evgeny Morozov is rightly skeptical of evangelizing subscriber funding as a panacea: “But would McNamee’s subscription-based models reduce addiction? Probably not. As long as user choices (and the data they leave behind) help ‘curate’ digital platforms, subscription-based alternatives will still have incentives to extract user data, deploying it to personalize their offerings and ensure that users do not leave the site. Companies with inferior curation systems would simply be eaten away by their competitors.” (E. Morozov, n. pag.)
  The subscription-based publishing model was developed in 17th-century Britain as an alternative to the traditional patronage model, which for centuries funded the arts & sciences. For details, see She-philosopher.com’s IN BRIEF topic on the early-modern practitioners of subscription.

+

In a PBS NewsHour video essay, “The Dangers of Our ‘New Data Economy,’ and How to Avoid Them” (first aired 3/14/2019), Roger McNamee offers his humble opinion on why, as consumers, we need to stop being passive and take control of how we share our personal information.

If you aren’t worried enough yet about how data brokers are manipulating us at all levels of society, see the feature story, “Big Brother Is Watching, and Wants Your Vote: Data brokers are using phones and other devices to track users and selling the info to political campaigns” by Evan Halper (Los Angeles Times, 2/24/2019, pp. A1 and A12), retitled “Your Phone and TV Are Tracking You, and Political Campaigns Are Listening In” for online posting.
  And it is not just political campaigns that are able to track your movements “with unnerving accuracy”: “Antiabortion groups, for example, used the technology to track women who entered waiting rooms of abortion clinics in more than half a dozen cities. RealOptions, a California-based network of so-called pregnancy crisis centers, along with a partner organization, had hired a firm to track cell phones in and around clinic lobbies and push ads touting alternatives to abortion. Even after the women left the clinics, the ads continued for a month.” (E. Halper, A12)
  Advocacy groups lobbying politicians are also “building ‘geo-fences’ ... around the homes, workplaces and hangouts of legislators and their families, enabling a campaign to bombard their devices with a message and leave the impression that a group’s campaign is much bigger in scope than it actually is.” (E. Halper, A12)
  Most insidious, as things stand now, “Which political campaigns and other clients receive all that tracking information can’t be traced.” (E. Halper, A12)
  At least one individual commenting on this story has called for vigorous regulation of Big Data brokers and their clients: “It is time to outlaw such invasion of privacy in the USA. Make it a felony. Also specify a monetary fine per offense that is high, with a right of private enforcement and the right to class action and nullification of mandatory arbitration as contrary to the public good.” (comment posted by “independentshold thebalance,” n. pag.)
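  (For readers wondering what a “geo-fence” amounts to technically: it is little more than a running distance test of a device’s reported coordinates against a stored center point and radius. A minimal Python sketch, with invented coordinates, is below.)

```python
import math

EARTH_RADIUS_M = 6_371_000

def inside_geofence(lat: float, lon: float,
                    fence_lat: float, fence_lon: float,
                    radius_m: float) -> bool:
    """Haversine great-circle distance test: is a device inside the fence?"""
    phi1, phi2 = math.radians(lat), math.radians(fence_lat)
    dphi = math.radians(fence_lat - lat)
    dlmb = math.radians(fence_lon - lon)
    a = (math.sin(dphi / 2) ** 2
         + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2)
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a)) <= radius_m

# hypothetical 100 m fence around an invented office location
print(inside_geofence(34.0537, -118.2428, 34.0540, -118.2430, 100.0))  # True
```

  Once a phone’s advertising identifier is observed inside such a fence, it can be added to an audience list and targeted long afterward — which is how the clinic visitors described above kept seeing ads for a month after they left.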

For more on how the filter bubbles constructed for us by Big Data (especially e-commerce analytics firms) and Big Search companies determine what we see and don’t see, what we know and don’t know, see the sidebar entry about Eli Pariser’s tech criticism at She-philosopher.com’s “A Note on Site Design” Web page.

Search engine shenanigans are drawing scrutiny from a number of different groups these days, from socially-conscious Google employees in revolt against business as usual ... to irate conservatives complaining about search results they believe are biased in favor of the liberal establishment ... to me!
  See my discussion of this website’s contrarian business model and design philosophy — including our reliance on ethical search tools — at She-philosopher.com’s “A Note on Site Design” Web page.
  See also the sidebar entry on that same page for an intriguing suggestion re. a Big Search utopian alternative: a public search engine.

In an 8/28/2018 tweet, President Donald Trump claimed that “Google & others are suppressing voices of Conservatives and hiding information and news that is good. They are controlling what we can & cannot see.” According to the president, Google Search results are rigged in favor of the “Fake News Media” while “Republican/Conservative & Fair Media is shut out.”
  As usual, the president “cited no evidence for the claim, which echoes both his own attacks on the press and a conservative talking point.  ¶   Google, operator of a popular search engine, responded by saying: ‘We never rank search results to manipulate political sentiment.’  ¶   Trump tweeted before dawn [on 8/28/2018]: ‘This is a very serious situation-will be addressed!’ Hours later, Larry Kudlow, the president’s top economic adviser, said the White House is ‘taking a look’ at whether Google searches should be subject to some regulation.” (Darlene Superville, “Trump Accuses Google of ‘Rigged’ Search Results,” posted to the PBS NewsHour website, 8/28/2018; n. pag.)
  There is no question that Big Search has always privileged elite, establishment media sources (not just The New York Times, but also The Wall Street Journal, among other reputable conservative organs), because these are the acknowledged gatekeepers for their profession — these are the news organizations which have risen to the top by modeling “best practices” and professional journalistic standards (for example, promoting objectivity in reporting, which I would note is not the same thing as neutrality: no one denies that reporters and editors have a point of view!). Big Search also privileges new media, such as Wikipedia, when it, too, models professional standards (in this case, for academic research). Every profession has its authoritative voices & institutions, and Big Search has traditionally accepted these establishment designations and hierarchies without question (e.g., treating *.edu and *.gov domains as more trustworthy sources of information than *.com domains).
  President Trump, used to a cozy relationship with the tabloid press, is upset that elite journalism is not a cheerleader for his presidency, in the way that conservative talk radio and Fox News are. See “Inside the Unprecedented Partnership between Fox News and the Trump White House” (first aired on the PBS NewsHour, 3/5/2019) for a look at Trump’s preferred media relationships: “President Trump has long acknowledged top-rated Fox News as his favorite media outlet, and the network relishes its role as a conservative voice. But its increasingly close relationship with the administration is drawing criticism. William Brangham talks to the New Yorker’s Jane Mayer about an unprecedented ‘feedback loop’ and whether the president has made policy decisions to help Fox succeed.”
  Assiduously cultivating favorable media coverage in this manner is not illegal, even if it is unprofessional (on all sides). But misusing government regulatory powers to force Big Search to “rank search results to manipulate political sentiment” in Trump’s favor would be corrupt and unacceptable in a real democracy.
  Many things contribute to improved search engine rankings, but in my experience, a liberal political agenda is not among them. “Search engines use complex mathematical algorithms to interpret which websites a user seeks,” as is illustrated in the diagram at Wikipedia’s page on Search Engine Optimization (SEO), where it is noted that algorithms change often, and there are no guarantees of achieving or keeping a high organic ranking: “According to Google’s CEO, Eric Schmidt, in 2010, Google made over 500 algorithm changes — almost 1.5 per day.” (n. pag.; accessed 3/26/2019)
  I have been actively engaged in “white hat SEO,” at all of my websites, since launching She-philosopher.com in 2004, and have at various times benefited, or not, from unpredictable changes to Big Search algorithms (such as Google’s move to mobile-first indexing). Over the years, I’ve had to just go with the flow. I follow SEO best practices where I can, but on occasion, I choose to do things that I know will hurt my websites’ rankings, such as duplicating a write-up of my archival research at a companion website, instead of linking to it; and deep-linking to recommended content at external websites (otherwise prospective viewers might never find it!), while refusing to participate in popular link-manipulation schemes (exchanging, buying, and selling links with other content providers).
  From this divided practice, I have learned that what may look like “suppressing voices of Conservatives and hiding information and news that is good” for Donald Trump is not, in fact, what’s going on: “If site A is first on a SERP (search engine results page) one month, and then tenth the next month search neutrality advocates cry ‘foul play,’ but in reality it is often the page’s loss in popularity, relevance, or quality content that has caused the move.” (Wikipedia’s page on Search Neutrality, accessed 3/26/2019)
  Without question, search engine bias exists, but not in the way a casual observer perceives it. “Neutrality in search is complicated by the fact that search engines, by design and in implementation, are not intended to be neutral or impartial. Rather, search engines and other information retrieval applications are designed to collect and store information (indexing), receive a query from a user, search for and filter relevant information based on that query (searching/filtering), and then present the user with only a subset of those results, which are ranked from most relevant to least relevant (ranking). ‘Relevance’ is a form of bias used to favor some results and rank those favored results. Relevance is defined in the search engine so that a user is satisfied with the results and is therefore subject to the user’s preferences. And because relevance is so subjective, putting search neutrality into practice has been so contentious.” (Wikipedia’s page on Search Neutrality, accessed 3/26/2019)
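  To see how “relevance” works as an engineered bias, consider a toy ranker. The sketch below — a simple TF-IDF-style score over an invented three-page corpus; real engines combine hundreds of signals — shows that some formula always decides the ordering:

```python
import math
from collections import Counter

# An invented three-page "index"; real engines crawl billions of pages.
docs = {
    "news.example/story":        "election results certified by officials nationwide",
    "junkblog.example/post":     "election rigged rigged rigged claims spread online",
    "encyclopedia.example/page": "an election is a formal group decision process",
}

def score(query: str, text: str) -> float:
    """Toy TF-IDF relevance: one possible definition of 'relevance' among many."""
    words = text.lower().split()
    tf = Counter(words)
    total = 0.0
    for term in query.lower().split():
        df = sum(1 for doc in docs.values() if term in doc.lower().split())
        if df == 0:
            continue
        idf = math.log(len(docs) / df)  # rarer terms weigh more: a design choice
        total += (tf[term] / len(words)) * idf
    return total

query = "election rigged"
for url in sorted(docs, key=lambda d: score(query, docs[d]), reverse=True):
    print(f"{score(query, docs[url]):.3f}  {url}")
```

  Note that on the query “election rigged,” the strident junk page ranks first — not because the scorer harbors political bias, but because term-frequency “relevance” rewards repetition. Every ranking formula privileges something; the only question is what.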
  Despite conservative cries for more “balance” in Big Search rankings, Trump is not after search neutrality, which would probably bury the favorable media coverage he receives even deeper. What he really wants is regime change within elite mainstream media conglomerates (making them more like the tabloid press), but the science behind Big Search cannot be tweaked to help him with this.

NEW  UPDATE:  “The Technology 202: Silicon Valley Pans White House Bias Tool as a Gimmick” by Cat Zakrzewski (posted to The Washington Post website, 5/17/2019).
  As always, President Trump and his team will not let go of a good conspiracy theory, no matter what the experts say.
  Impressions gleaned from our “overlapping yet irreconcilable experiences” always trump real evidence for this White House, and the Trump administration’s determined effort to assemble “alternative facts” more to its liking is behind the White House’s new survey about search-engine bias.
  “The White House hasn’t said how it plans to use the data it’s collecting about people’s experiences with bias on social media. But in Silicon Valley, it’s viewed as the administration’s latest political stunt. [...] ‘pure kabuki theatre’ and an attempt to curry political points with conservatives. [Venky Ganesan, a partner at technology investor Menlo Ventures] said the Trump administration’s repeated accusations that tech companies censor conservative voices are unfounded because even though most Silicon Valley executives are liberal or libertarian, they wouldn’t let politics get in the way of their primary goal: making money.  ¶   ‘Algorithms and products don’t have political biases because that’s not how you optimise to make money,’ he wrote in an email. ‘The only bias the products have is to monetise users and make money for the companies.’” (C. Zakrzewski, n. pag.)
  FWIW, Darrell Huff’s How to Lie with Statistics (1954; rpt. New York: W. W. Norton & Co., 1982; ISBN 0-393-09426-X, pbk) is still an invaluable guide to the making of “alternative facts” in our new age of data-driven demagoguery.

While I generally think that a public search engine is a good idea, I’m not sure it would — or could, given First Amendment protections — do much about the spread of junk news, misinformation, tribalism, & hate via social media which “relies on users to supply the content.”
  Some of the difficulties faced even by private companies trying to stop the rapid-fire spread of junk news online are raised in the Los Angeles Times editorial, “Pinterest Takes On Fake News” (2/24/2019, p. A15), retitled “Pinterest Strikes Back at Online Disinformation. Are You Paying Attention, Facebook?” for online posting.
  The editorial applauds Pinterest’s trial run restricting the spread of “fake health news” by “disabling searches related to these topics. Now, searching on Pinterest for ‘vaccine harms’ will return a blank page with the explanation, ‘Pins about this topic often violate our community guidelines, so we’re currently unable to show search results.’ The same happens on a search for ‘diabetes cures,’ for example.” (A15)
  However, Pinterest’s latest attempt at controlling “what gets found and shared” is a work-in-progress, as the platform experiments with redirects for filtered searches which push vetted, higher-quality health information at users (here taking on an editorial role that other purveyors of junk news, such as Facebook, have refused to assume). “Anti-vaxxers may bristle at the censorship Pinterest is imposing and complain that their speech rights are being infringed. But as a private company, Pinterest has the right to enforce its own rules for what gets shared on its site, and to define the line between idle chatter and harmful misinformation. We welcome its efforts on the health front, and hope it blazes a trail for other social networks to follow.” (Editorial Board for The Los Angeles Times, A15)
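  Mechanically, Pinterest’s intervention is query-level filtering — whole topics return an empty results page plus a notice, rather than moderating pins one by one. A much-simplified, hypothetical Python sketch:

```python
# Blocklist entries and index contents are illustrative only.
BLOCKED_QUERIES = {"vaccine harms", "diabetes cures"}
NOTICE = ("Pins about this topic often violate our community guidelines, "
          "so we're currently unable to show search results.")

INDEX = {  # stand-in for a real search index
    "knitting patterns": ["pin/101", "pin/202"],
}

def search(query: str):
    """Return (results, notice): blocked topics yield no results at all."""
    q = query.lower().strip()
    if q in BLOCKED_QUERIES:
        return [], NOTICE  # blank page, with an explanation
    return INDEX.get(q, []), None

print(search("vaccine harms"))      # ([], 'Pins about this topic often violate ...')
print(search("knitting patterns"))  # (['pin/101', 'pin/202'], None)
```

  The trade-off is plain even in the sketch: a blocklist is blunt (it catches exact queries only, and can be dodged by rephrasing), which is presumably why Pinterest is also experimenting with redirects to vetted health information.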

As numerous position papers from the Electronic Frontier Foundation (EFF) have warned — e.g., “Platform Censorship: Lessons From the Copyright Wars” by Corynne McSherry (posted 9/26/2018) — most current schemes for preventing the spread of “fake health news” on social-media platforms have had unintended consequences, sometimes censoring the very voices and “good” content we wish to promote.
  Even the best models for “content moderation” on the Web involve trade-offs, to which we need to give a lot more thought.
  E.g., EFF has argued that “Corporate Speech Police Are Not the Answer to Online Hate” (posted 10/25/2018). EFF acknowledges that a range of players “are trying to articulate better content moderation practices, and we appreciate that goal. But we are also deeply skeptical that even the social media platforms can get this right, much less the broad range of other services that fall within the rubric proposed here. We have no reason to trust that they will, and every reason to expect that their efforts to do so will cause far too much collateral damage.” (Corynne McSherry, n. pag.)
  Seemingly benign calls to treat social-media platforms created and run by corporations as “public forums” actually threaten “the free speech rights of Internet users and the platforms they use.” See the Electronic Frontier Foundation position paper, “EFF To U.S. Supreme Court: Rule Carefully in Free Speech Case about Private Operators, State Actors, and the First Amendment,” by Karen Gullo and David Greene (posted 12/12/2018).
  And laws like FOSTA (Allow States and Victims to Fight Online Sex Trafficking Act) — making it illegal to post content on the Internet that “facilitates” prostitution or “contributes” to sex trafficking — have led Internet websites and forums “to censor speech with adult content on their platforms to avoid running afoul of the new anti-sex trafficking law FOSTA. The measure’s vague, ambiguous language and stiff criminal and civil penalties are driving constitutionally protected content off the Internet.  ¶   The consequences of this censorship are devastating for marginalized communities and groups that serve them, especially organizations that provide support and services to victims of trafficking and child abuse, sex workers, and groups and individuals promoting sexual freedom. Fearing that comments, posts, or ads that are sexual in nature will be ensnared by FOSTA, many vulnerable people have gone offline and back to the streets, where they’ve been sexually abused and physically harmed.” See EFF’s position paper, “With FOSTA Already Leading to Censorship, Plaintiffs Are Seeking Reinstatement of their Lawsuit Challenging the Law’s Constitutionality” by Karen Gullo and David Greene (posted 3/1/2019).

+

NEW  And for an alternate model: “Troll Patrol: How Amazon’s Twitch Is Protecting Its LGBT Community: With an aggressive mix of filters, moderators, and lawyers, Twitch is trying to keep the Internet’s horde of harassers at bay” by Jeff Green (posted to Bloomberg News website, 6/28/2019).
  When Twitch was attacked by anonymous trolls on 5/25/2019, it “did something unusual for a major social media platform during an era in which online harassment often goes unchecked for long periods of time. Twitch took immediate and decisive action to protect its community.  ¶   For the next two days, as it grappled with the attackers, Twitch prevented all new users from streaming. It imposed two-factor authentication for certain accounts. And it filed a lawsuit in federal court seeking damages for trademark infringement, breach of contract, and fraud from the anonymous assailants.  ¶   The aggressive countermeasures were part of a broader, ongoing effort by Twitch to carve out a safer space for the 500,000 streamers who go live on the platform each day. The company is particularly mindful of its minority members who often bear the brunt of online harassment. ‘It’s a bit tough to tackle because as many ways as you can shield your community, there are ways that people will come up with to work around it,’ says Katrina Jones, who was hired by Twitch in September [2018] as its first diversity and inclusion executive.” (J. Green, n. pag.)
  The general consensus is that Twitch’s strategy is working. According to one content creator, “The harassment protections ... are appreciated if imperfect. ‘You’re never going to be immune, but it’s a trade-off,’ says Roberts. ‘If I advertise myself as LGBT, I get homophobic and transphobic raiders, but for every one troll, I get 100 people who care for me and I can really connect to.’” (J. Green, n. pag.)
  And again: “‘When it comes to being an LGBT creator on a platform, sometimes it’s a little bit scary and intimidating because of people who say things that aren’t that great,’ says Antphrodite. ‘But Twitch is really great at protecting creators, especially LGBT. It’s the one thing that keeps me from being afraid.’” (J. Green, n. pag.)

+

A PBS NewsHour Weekend segment raises yet more issues relating to content moderation that we need to think about: “Facebook Moderators Battle Hate Speech and Violence” (first aired 5/4/2019).
  SUMMARY: “Facebook has banned several high-profile accounts it says engage in ‘violence and hate.’ The move also follows several recent acts of violence livestreamed on the company’s site. Facebook employs thousands of people known as moderators, who are on the frontlines of a battle to stop extremist material online. But as The Verge editor Casey Newton tells Hari Sreenivasan, their jobs come at a cost.”
  The fact that full-time immersion in the dark side of Facebook causes long-term PTSD (akin to that experienced by first responders in the brick-and-mortar world) should be a wake-up call for everyone! At the very least, social media giants such as Facebook need to invest much more heavily in content moderation, starting with bringing it in-house (IMO, the “call center model of content moderation,” using subcontractors, is grossly inadequate to the task).
  Casey Newton notes the true value of professional content moderators for users: “And I think that the time has come for us to shift our perspective on what these platforms are and on the value of the work that these folks are doing. Because again, if you take content moderation off of any social network, whether it’s Facebook, YouTube, Twitter, Reddit those places quickly become totally unusable. They’re overrun by trolls. You and I would never want to spend any time there. And so because of the work that these folks are doing and because of the really disturbing stuff that they are subjected to and work through they may create a safer world for the rest of us.  ¶   And yet they can be fired for basically anything and then they never get any help from the company that put them into that position. So I do think that that is ripe for rethinking.” (n. pag.)
  I would argue that more professional content moderation is also of great value to business and marketers. Few organizations advertising on social media want their brands associated with terrorism, hate groups, or loathsome content.

+

NEW  As of August 2019, Mark Zuckerberg — “keen to reestablish Facebook as a source of trustworthy information after being used to disseminate Russian-sponsored ‘fake news’ during the 2016 presidential election” — has committed Facebook to more professional content moderation, and is introducing a news curation function to be carried out by professional journalists employed full-time by Facebook. But this move is sure to fan conservatives’ fears of bias and discrimination, even though there is “no proof that such discrimination happens,” as reported by Jeff Bercovici in “Facebook Turns to Editors: In culture shift for social media giant, ‘News tab’ will highlight stories selected by company’s employees” (Los Angeles Times, 8/23/2019, p. C3), retitled “Facebook Will Use Journalists to Curate News, Opening Itself to More Bias Allegations” for online posting.
  In addition to giving human beings responsibility for making editorial content decisions, Facebook will continue to “host a much larger volume of algorithmically selected news, personalized through signals such as what pages a user follows on the social network and what content he or she has engaged with,” even though there is ample evidence that the Facebook News Feed is susceptible “to websites that look like news outlets but aren’t. During the last presidential election cycle, phony news stories published for profit or as propaganda outperformed the biggest news publishers, according to a Buzzfeed analysis. To prevent fake news from infiltrating the news tab, Facebook is considering imposing eligibility requirements, only featuring websites that are registered in the company’s news index and barring those with a history of being flagged as misinformation providers.” (J. Bercovici, C3)
  Another important change in the offing: Facebook’s professional editors will try to shift traffic away from so much reposted, repurposed content to “original” news stories. “‘One of the things we want to reward is provenance,’ Brown said.” (J. Bercovici, C3)

+

NEW  As for Facebook’s latest attempt to bypass vexing First Amendment issues — the tech giant reports that it is now censoring inappropriate behavior, not speech, in the digital public square: “‘We’re taking down these pages and accounts based on their behavior [“coordinated inauthentic behavior”], not the content they posted.’” (“Facebook Shuts Down Israel-Based Disinformation Campaigns as Election Manipulation Increasingly Goes Global,” by Craig Timberg and Tony Romm; posted to The Washington Post website, 5/16/2019, n. pag.)
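  In engineering terms, policing “coordinated inauthentic behavior” means classifying posting patterns rather than posted text. Below is a hypothetical, much-simplified Python sketch of one such signal (the thresholds and the notion of “coordination” are invented for illustration): flag clusters of accounts that post identical content within a short window, whatever that content says.

```python
from collections import defaultdict

WINDOW_SECONDS = 60  # illustrative threshold
MIN_CLUSTER = 5      # illustrative threshold

def flag_coordinated(posts):
    """posts: iterable of (account_id, text, unix_timestamp).
    Returns account_ids that burst-posted identical text in lockstep."""
    by_text = defaultdict(list)
    for account, text, ts in posts:
        by_text[text].append((ts, account))
    flagged = set()
    for events in by_text.values():
        events.sort()
        for start_ts, _ in events:
            window = {a for t, a in events
                      if start_ts <= t <= start_ts + WINDOW_SECONDS}
            if len(window) >= MIN_CLUSTER:
                flagged |= window
    return flagged

# five sockpuppets post the same line within seconds; one organic user does not
posts = [(f"sock{i}", "the same talking point", 1000 + i) for i in range(5)]
posts.append(("organic_user", "an original thought", 1000))
print(sorted(flag_coordinated(posts)))  # ['sock0', 'sock1', 'sock2', 'sock3', 'sock4']
```

  Note what the filter never inspects: whether “the same talking point” is true, false, political, or benign — which is exactly the First Amendment-sidestepping appeal of behavior-based takedowns.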

Amendment I to the United States Constitution reads in full: “Congress shall make no law respecting an establishment of religion, or prohibiting the free exercise thereof; or abridging the freedom of speech, or of the press; or the right of the people peaceably to assemble, and to petition the Government for a redress of grievances.” (adopted in 1791)
  It is customary to think of the freedoms of speech and press enshrined in the Bill of Rights at the end of the 18th century as representing the founding principles of these United States.
  But our 17th-century founders saw things somewhat differently. At a time when deep political divisions, and growing Stuart authoritarianism in Britain, threatened to derail popular government in the most ethnically and religiously diverse colony in British North America (New Jersey), the settlers’ representative assembly chose to limit freedom of the press in the new proprietary colony, making it a criminal offense to “wittingly and willingly forge or publish any false news.”
  Government regulation of mendacious speech and communication in this country dates back to 1645, when Puritans in Massachusetts (and afterwards, in the English Commonwealth) made wilful lying a statutory crime. In more heterogeneous New Jersey, the first law against the spread of fake news (called “false news” in the statute) — “whereby the minds of people are frequently disquieted or exasperated in relation to publick affairs” — was enacted by the Third General Assembly in 1675. Accordingly, propagators of false news were to be fined ten shillings, which was also the punishment for a first offense of slander (a second offense costing twenty shillings). A subsequent law enacted by the Sixth General Assembly of New Jersey imposed even harsher penalties for spreading fake news and untruth.
  These initial attempts to promote populist truth-telling in and about “any part of America” are documented in a new write-up on colonial law and “prudential” law-making (still forthcoming, as of August 2019), to be added to the “Legislative process in the most rebellious and diverse of the founding Thirteen American Colonies (East New Jersey)” section of She-philosopher.com’s study page re. California’s flawed “Good Neighbor Fence Act of 2013.”

+

As noted below, EFF has argued that our “lack of a set definition for” the term “fake news” presents numerous problems for would-be regulators. Trying to reconcile “multiple and inconsistent meanings” across the ages further complicates things.
  E.g., post-2016, Donald Trump has popularized a narrow and narcissistic definition of “fake news” as anything that is critical of him or his presidency. To his mind, the truthfulness of the reporting has nothing to do with judging its quality. For President Trump, “fake news” is about marketing (undermining brand authenticity), not epistemology. His stated intent is to delegitimize elite news organizations — “to discredit you all and demean you all, so when you write negative stories about me no one will believe you” (as he reportedly told 60 Minutes correspondent Lesley Stahl off-camera).
  President Trump’s sophism concerning “fake news” adds an historic twist to American legal history, turning us even further from our radical republican beginnings.
  Seventeenth-century America also had a fake news problem, which was remedied by the legal innovations of godly New England colonists, inspired by the “more truth and light” dictum of the separatist theologian John Robinson (1575/6?–1625) to make lying indictable.
  Sadly, Anglo-Americans’ dissenting emphasis on the primacy of truth in government is no more. In President Trump’s reframing of “fake news,” as in the criminal libel at common law described in 1609 by the great lawyer, judge, jurisprudent, and parliamentarian Sir Edward Coke, the truth “is not material.” And with this false rebranding of real as “fake,” we appear to have come full circle: in an odd turn of events, our pluto-populist president’s identification of MSM criticism with seditious libel (e.g., calling mainstream media “the enemy of the people”) smacks of Stuart tyranny.
  (What Financial Times economic correspondent Martin Wolf refers to as “pluto-populism” — “policies that benefit plutocrats, justified by populist rhetoric” — has a long history in Anglo-American dynastic politics, starting with the authoritarian leadership and populist policies of Protector Somerset, Edward Seymour [c.1500–1552], lord protector of England [1547–49] and effective ruler of England on behalf of Edward VI.)
  NEW  Of note, while the original Massachusetts Bay colonists “considered the privilege of petition a sacred right,” their experiment in theocracy would brook no criticism: “In 1649 at Salem, Mary Oliver was sentenced to be whipped not more than 20 stripes for saying the Governor was unjust. Only a few months after the colony was established, Philip Ratliffe was severely punished and banished for speaking against the government and Church, and Henry Lynn was whipped and banished for writing to England against the government and justice of Massachusetts Bay.” (T. L. Wolford, “The Laws and Liberties of 1648,” 156n40)
  NEW  And under Stuart colonial rule in 1640, criticizing the legislature (in this case, Anglo-America’s first representative institution, the Virginia House of Burgesses) could cost you your livelihood: “Francis Willis, clerk of Charles River court turned out of his place and fined for speaking against the laws of last Assembly and the persons concerned in making them.” (Extract from the “Minutes of the Proceedings of the Governor and Council of Virginia” for 1640 [an MS. belonging to Thomas Jefferson]; transcribed in The Statutes at Large, ed. W. W. Hening, 4 vols., new edn., 1820–​1823, 1.552)
  So President Trump’s lashing out against critics of his government — as in the recent spate of “America, love it or leave it” demagoguery (his tweets of 7/14/2019 and following) targeting four women of color in the U.S. House of Representatives (Reps. Alexandria Ocasio-Cortez, D-N.Y., Ayanna Pressley, D-Mass., Rashida Tlaib, D-Mich., Ilhan Omar, D-Minn.) and their supporters — is not unprecedented.
  But President Trump’s own propensity for spreading “false news” and manufacturing chaos is unprecedented. At this country’s founding during the 17th century, it was a criminal offense in multiple colonies (Massachusetts, Virginia, New Jersey, Pennsylvania) for any person at the age of discretion (14 years) “to wittingly and willingly make or publish any lye, which may be pernicious to the public weal, or with intent to deceive and abuse the people with false news and reports,” “whereof no certain authority or authentick letters out of any part of America, can be produced” in evidence of truthfulness, and “whereby the minds of people are frequently disquieted or exasperated in relation to publick affairs.” And the penalty for disrupting the public sphere in this manner was severe: any perpetrator of false reports was “to be stockt or whipt” if they lacked the means to pay the escalating fines which “shall be levied upon his or their estate, for the use of the publick.” (Were we still subject to our founders’ original laws & Christian values, pathological liars in high places, such as Donald Trump, would be in big trouble! ;-)

+

NEW  And for a primer on modern incitement law (“Can speech on a social media site, or a presidential platform, incite violence?”), see the op-ed by Danielle Allen and Richard Ashby Wilson, “The Rules of Incitement Should Apply To — and Be Enforced On — Social Media” (posted to The Washington Post website, 8/8/2019).
  The authors believe that “obscurity hinders fair and consistent application” of the current law of incitement (as determined by the 1969 Supreme Court case, Brandenburg v. Ohio), and they argue that “updating the law of incitement and enforcing it on social media platforms will also clarify the rules of speech governing the presidential platform,” for which legal jeopardy is similarly unclear.
  “The Brandenburg test was developed in a pre-Internet era and requires updating. Mainstream social media companies such as Facebook and Twitter have hate-speech guidelines that allow them to remove incendiary content. Like private clubs, they set their own terms of service and regulate speech more assiduously than government. For instance, mainstream social media regularly remove content that denigrates racial, religious or immigrant groups, or calls for harm against them.  ¶   Fringe platforms such as 4chan and 8chan set no such standards, and they thrive on racist, anti-immigrant and inciting language. They allow far-right communities of hate to coalesce and incite their members to commit mass shootings. Under Section 230 of the Communications Decency Act of 1996, Internet providers and social media sites bear no liability for content that third parties post on their platforms. The time has come to challenge this again in court and to pursue civil liability for those platforms that are grossly negligent in regulating the content on their sites.” (D. Allen and R. A. Wilson, n. pag.)
  Of note, “We could easily tighten up the current law of incitement without undermining free-speech protections.” (D. Allen and R. A. Wilson, n. pag.)

Given our evolving cultural anxieties over “fake news” in a post-First Amendment digital age — when anyone with an Internet connection can easily “deceive people with false reports” — it is to be expected that academic researchers would begin studying the signified concept, in hopes of better understanding what it is and how it propagates.
  Axel Gelfert’s article, “Fake News: A Definition,” published in the academic journal, Informal Logic: Reasoning and Argumentation in Theory and Practice, stresses the importance of distinguishing fake news from related, but distinct, types of public disinformation, false or misleading claims, and propaganda. “Fake news, I argue, is best defined as the deliberate presentation of (typically) false or misleading claims as news, where the claims are misleading by design.” (A. Gelfert, 85–6)
  Noting that “Fake news is not itself a new phenomenon,” Gelfert emphasizes the novel effects of it “when combined with online social media that enable the targeted, audience-specific manipulation of cognitive biases and heuristics”: this “forms a potent — and, as the events of 2016 show, politically explosive — mix.... [O]nline social media, which, as a Psychology Today article puts it, work on cognitive biases ‘like steroids’ ... has opened up new systemic ways of presenting consumers with news-like claims that are misleading by design. As a result, given the increasing permeability between online and offline news sources, and with traditional news media often reporting on fake news in order to debunk it (a worthy goal that is rendered ineffective by further cognitive biases such as source confusion, belief perseverance, and the backfire effect), we find ourselves increasingly confronted with publicly disseminated disinformation that masquerades as news, yet whose main purpose it is to feed off our cognitive biases in order to ensure its own continued production and reproduction.” (A. Gelfert, 113)

From the PBS NewsHour segment, “How Social Media Platforms Reacted to Viral Video of New Zealand Shootings” (first aired 3/18/2019): “Amid the many questions swirling around the New Zealand mosque shootings [on 3/15/2019] is whether Facebook and other digital platforms acted swiftly enough to stop video footage of the attacks from circulating. These social media giants are already facing scrutiny for enabling users to perpetuate false stories and hate speech. Judy Woodruff talks to The Washington Post’s Elizabeth Dwoskin for more.”
  “[JUDY WOODRUFF:] Well — and, of course, all this raises the question, do these social media platforms, do they see their responsibility as stopping this kind of material from being spread?
  “[ELIZABETH DWOSKIN:] They would say yes. But the reality is, is that that’s where they fail.  ¶   They also will tell you that it’ll never not be posted, because they have a system where there’s not prior review. Anyone can post, and it only gets reviewed later, if it gets reviewed. And as long as you have that system, you’re going to accept that some of the stuff goes up and gets spread.  ¶   And then let’s add to — let’s add to this their responsibility. It’s not just like the content goes up and anyone sees it. YouTube and Facebook, they have highly personalized algorithms, where the content is actually designed to be turbocharged when people click on it. They start recommending it.  ¶   So they’re making a lot of editorial, curatorial actions that actually promote content to people who didn’t even ask for it. And so they have a huge role. I talked today to a former director at YouTube who said that he himself was stunned by the level of irresponsibility of those design choices.” (n. pag.)
  Several commentators on this piece objected to any further attempts at censoring such videos on the Web; e.g., see the comment posted by “Bob”: “I watched the video and ironically it’s as if the far right terrorist took a page out of ISIS who posted thousands of videos pridefully showing their deeds... before being crushed by those they victimized the most (Iraqi’s, Kurds, Syrians). Same tactics, same mindset, same outcome.  ¶   Why the scramble to remove this particular video when there are countless other videos that are equal to or far worse that haven’t been removed? YouTube has attempted to sanitize its videos with the usual sanitized videos (CNN, Fox, AP, the usual) and it is far more tame than it was just a few years ago, but to what end? I have found that the UK and US over-sanitize their news to begin with, keeping the viewer distanced from what is really going on in the world. These videos smash a hole in conspiracy theorists and those who deny such things occur.  ¶   I see no reason why someone who chooses to see these videos should be prevented from viewing them. If it’s marked ‘explicit’ then it should be up to the viewer to decide to click on the video, fair warning. In any case, this video is still out there.” (n. pag.)
  For more on the deadliest mass shootings in modern New Zealand history — which were live-streamed on Facebook — see the Wikipedia page on Christchurch Mosque Shootings.
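  For readers who want a concrete sense of the feedback loop Dwoskin describes above, the following toy sketch (in Python) may help; every name, field, and weight in it is my own invention for illustration, not any platform’s actual code.

```python
# Toy sketch only: an engagement-ranked feed, NOT Facebook's or YouTube's
# actual algorithm. All names, fields, and weights here are hypothetical.
from dataclasses import dataclass

@dataclass
class Post:
    title: str
    clicks: int       # clicks observed so far
    impressions: int  # times the post has been shown

def engagement_score(post: Post) -> float:
    """Score a post by its observed click-through rate (CTR).

    Because this score decides future placement, every click raises the
    post's visibility, which earns more clicks: the self-reinforcing
    'turbocharging' loop Dwoskin describes.
    """
    return post.clicks / max(post.impressions, 1)

def rank_feed(posts: list[Post]) -> list[Post]:
    return sorted(posts, key=engagement_score, reverse=True)

feed = [
    Post("Measured policy analysis", clicks=40, impressions=1000),
    Post("Inflammatory conspiracy clip", clicks=300, impressions=1000),
]
for post in rank_feed(feed):
    print(f"{engagement_score(post):.2f}  {post.title}")
# The inflammatory clip (CTR 0.30) outranks the sober piece (0.04),
# so it is shown more, clicked more, and climbs still further.
```

  The point of the sketch: once observed clicks decide future placement, the loop needs no editorial malice to bury the middle ground; it simply amplifies whatever is most clicked.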

+

One of social media’s more successful attempts at controlling the Internet’s rapid-fire spread of disinformation and bias is analyzed in the PBS NewsHour segment, “Why Kicking Alex Jones Off Social Media Is Not Legally Censorship” (first aired 8/8/2018).
  SUMMARY: “iTunes, Facebook, Spotify and YouTube have all removed conspiracy theorist Alex Jones’ audio and video content from their platforms, saying he violated their hate-speech policies. P. J. Tobia takes a closer look at his media operation, and William Brangham examines the pushback and legal questions with Lyrissa Lidsky, dean of the University of Missouri School of Law.”
  In the (pre-First Amendment) 17th century — when “the identification of untruth with malicious criminal defamation was a forward-looking legal practice” pioneered by the American Puritans (R. B. Morris, “Massachusetts and the Common Law: The Declaration of 1646,” 144) — Jones would have been prosecuted by state government as a publisher of “false news,” and the fines alone (to say nothing of the threat of corporal punishment) would probably have caused him to cease & desist his practice of tabloid journalism.

The Electronic Frontier Foundation (EFF) continues to oppose most 21st-century government mandates responding to the “fake news” phenomenon.
  In the position paper, “EFF to the Inter-American System: If You Want to Tackle ‘Fake News,’ Consider Free Expression First” (posted 2/28/2019), Veridiana Alimonti notes that “Disinformation flows are not a new issue, neither is the use of ‘fake news’ as a label to attack all criticism as baseless propaganda. The lack of a set definition for this term magnifies the problem, rendering its use susceptible to multiple and inconsistent meanings. Time and again legitimate concerns about misinformation and manipulation were misconstrued or distorted to entrench the power of established voices and stifle dissent. To combat these pitfalls, EFF’s submission presented recommendations — and stressed that the human rights standards on which the Inter-American System builds its work, already provide substantial guidelines and methods to address disinformation without undermining free expression and other fundamental rights.  ¶   The Americas’ human rights standards — which include the American Convention on Human Rights — declare that restrictions to free expression must be (1) clearly and precisely defined by law, (2) serve compelling objectives authorized by the American Convention, and (3) be necessary and appropriate in a democratic society to accomplish the objectives pursued as well as strictly proportionate to the intended objective. New prohibitions on the online dissemination of information based on vague ideas, such as ‘false news,’ for example, fail to comply with this three-part test. Restrictions on free speech that vaguely claim to protect the ‘public order’ also fall short of meeting these requirements.” (V. Alimonti, n. pag.)
  In another position paper, “Victory! Dangerous Elements Removed from California’s Bot-Labeling Bill” (posted 10/5/2018), Jamie Williams and Jeremy Gillula describe how California Senate Bill 1001 — “a new law requiring all ‘bots’ used for purposes of influencing a commercial transaction or a vote in an election to be labeled” — “originally included a provision that would have been abused as a censorship tool, and would have threatened online anonymity and resulted in the takedown of lawful human speech.” (J. Williams and J. Gillula, n. pag.)
  Also see “EFF to Court: Remedy for Bad Content Moderation Isn’t to Give Government More Power to Control Speech” by David Greene (posted 11/26/2018), which documents EFF’s ongoing struggle for a voluntary, not government-mandated, “human rights framing for removing or downgrading content and accounts” from social-media sites: “We’ve taken Internet service companies and platforms like Facebook, Twitter, and YouTube to task for bad content moderation practices that remove speech and silence voices that deserve to be heard. We’ve catalogued their awful decisions. We’ve written about their ambiguous policies, inconsistent enforcement, and failure to appreciate the human rights implications of their actions. We’re part of an effort to devise a human rights framing for removing or downgrading content and accounts from their sites, and are urging all platforms to adopt them as part of their voluntary internal governance. Just last week, we joined more than 80 international human rights groups in demanding that Facebook clearly explain how much content it removes, both rightly and wrongly, and provide all users with a fair and timely method to appeal removals and get their content back up.  ¶   These efforts have thus far been directed at urging the platforms to adopt voluntary practices rather than calling for them to be imposed by governments through law. Given the long history of governments using their power to regulate speech to promote their own propaganda, manipulate the public discourse, and censor disfavored speech, we are very reluctant to hand the U.S. government a role in controlling the speech that appears on the Internet via private platforms. This is already a problem in other countries.” (D. Greene, n. pag.)
  And EFF’s alternative approach to reform has generated some real wins: “Facebook Responds to Global Coalition’s Demand That Users Get a Say in Content Removal Decisions” by Karen Gullo and Jillian C. York (posted 12/20/2018).
  Click/tap here (direct link to PDF file) to read the text of EFF’s recommended Santa Clara Principles on Transparency and Accountability in Content Moderation — a set of minimum content moderation standards with a human rights framing created by EFF and its partners.

For modern political communities long accustomed to self-government (which our 17th-century founders were not), an alternate community-based model of quality control — for promoting a popular culture of truth, and restricting the spread of fake news — is being pioneered at Wikipedia, with its “large, diverse editor base” of amateurs, which “makes it very difficult for any person or group to censor and impose bias” over time.

Another alternative proposal for building digital democracy from the grassroots: “The Rise of a Cooperatively Owned Internet: Platform cooperativism gets a boost” by Nathan Schneider (The Nation, vol. 303, no. 18, 31 Oct. 2016, p. 4).
  “Platform cooperatives weren’t something one could even call for until December 2014, when New School professor Trebor Scholz posted an online essay about ‘platform cooperativism,’ putting the term on the map. A year later, he and I organized a packed conference on the subject in New York City. We’re about to publish Ours to Hack and to Own [OR Books, 2017], a collective manifesto with contributions from more than 60 authors that Scholz and I edited. The authors include leading tech critics like Yochai Benkler, Douglas Rushkoff, and Astra Taylor, as well as entrepreneurs, labor organizers, workers, and others. The theory and practice of platform cooperativism are spreading.” (N. Schneider, 4)

As for Facebook’s latest solution to the growing problems of a digital agon shaped by data-driven demagoguery: see “Mark Zuckerberg Says He’ll Reorient Facebook toward Privacy and Encryption,” by Elizabeth Dwoskin of The Washington Post (posted to the Los Angeles Times website, 3/6/2019).
  Of note, “Zuckerberg described the changes using the metaphor of transforming Facebook from a town square into a living room. ‘As I think about the future of the internet,’ he wrote, ‘I believe a privacy-focused communications platform will become even more important than today’s open platforms. Privacy gives people the freedom to be themselves and connect more naturally, which is why we build social networks.’” (E. Dwoskin, n. pag.)
  “The moves — which Zuckerberg, in a blog post, outlined in broad strokes rather than as a set of specific product changes — would shift the company’s focus from a social network in which people widely broadcast information to one in which people communicate with smaller groups, with their content disappearing after a short period of time, Zuckerberg said. Facebook’s core social network is structured around public conversation, but it also owns private messaging services WhatsApp and Messenger, which are closed networks. Instagram, Facebook’s photo-sharing platform, has also seen huge growth thanks to ephemeral messaging.” (E. Dwoskin, n. pag.)
  I fail to see how any of this addresses the real issues with Facebook’s profitable practice of building detailed psychographic profiles of users, which can be sold to unidentified parties, as well as used by Facebook and advertisers to “target” us as website visitors, consumers, citizens, and voters.
  I’m also not sure that social media denizens are going to be all that eager to vacate the town square and keep to their living rooms! I expect liberated grandmothers the world over have come to enjoy digital life beyond the domestic sphere, encountering the unknown, and communicating with hundreds of voices beyond their immediate circle of family and friends.
  Fortunately, Zuckerberg’s vision of “the future of the internet” — a more intimate digital public sphere, colonized by private capital — is not our only option.

+

Elizabeth Dwoskin updates her reporting on Facebook’s planned reorientation in a PBS NewsHour interview: “With Proposed Changes, Is Facebook Sincere about Prioritizing Privacy?” (first aired 5/1/2019).
  SUMMARY: “While Facebook remains one of the world’s largest companies, it has lost some public trust in recent years, due to the Cambridge Analytica scandal, Russian influence campaigns during the 2016 election and privacy issues. Now, founder and CEO Mark Zuckerberg is embarking upon a major shift to the platform’s basic design and approach. Jeffrey Brown talks to The Washington Post’s Elizabeth Dwoskin.”
  I found the following exchange noteworthy:
  “[JEFFREY BROWN:] And what about the business model? What are they saying about how they will make money with this new approach?
  “[ELIZABETH DWOSKIN:] So, in my interview with Zuckerberg, I asked him about that very directly. And he said, you know, I’m not sure how we’re going to profit off this transition to messaging, but I’m confident we will be fine.  ¶   So I’m looking at that, thinking they’re going to find a way to collect data about you, even though they can’t read the messages. And potentially that will come from the fact that they’re making all their services interoperable.  ¶   So you think of Facebook as a social network, but Facebook is a conglomerate of WhatsApp, Instagram, Facebook Messenger. And now they’re going to unify them. You can send a message to WhatsApp, someone on WhatsApp, through Facebook.  ¶   And so that will allow them to track even more behavior than before and will push people to engage even more than before. And, you know, their real obsession, Zuckerberg said, you know, people — people think, we’re all about data. He said, what we’re really all about is attention, which I was very surprised to hear.” (n. pag.)
  And again:
  “[ELIZABETH DWOSKIN:] I think Mark Zuckerberg has always been a person who cared more about human behavior and growth than actual money.  ¶   And we remember he wanted to connect the world and make Facebook free, when people didn’t want Facebook to be free. So I think he’s confident that, if we win in the attention game, the dollars will follow. And, so far, Wall Street rewards that.  ¶   In terms of the sincerity around privacy, just remember, in order to get people’s attention, if Mark says that’s the most important thing, you need to know things about them, you need to collect data.  ¶   And it’s deeply in that company’s DNA to profile your behavior, to understand behavior, to create psychological learning tactics to keep your attention there. And I don’t see that going away.” (n. pag.)
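  Dwoskin’s point about profiting from messages the company “can’t read” is worth unpacking: even with end-to-end encryption, the routing “envelope” around each message still tells the platform who talked to whom, when, across which services, and roughly what kind of content changed hands. A hypothetical sketch (all field names are my own, not any real platform’s schema):

```python
# Hypothetical sketch: the message body is unreadable to the platform,
# but the routing "envelope" is not. Field names are illustrative only.
from datetime import datetime

encrypted_message = {
    "sender": "user_1842",
    "recipient": "user_0097",
    "route": "whatsapp->messenger",    # cross-service, per interoperability
    "timestamp": datetime(2019, 5, 1, 21, 14),
    "size_bytes": 2_048_000,           # a photo- or video-sized payload
    "ciphertext": b"\x8f\x02\xa1...",  # opaque to the platform
}

# The platform cannot decrypt `ciphertext`, yet everything else is still
# perfectly usable input for a behavioral profile:
profile_signal = {
    "contact_pair": (encrypted_message["sender"], encrypted_message["recipient"]),
    "active_hour": encrypted_message["timestamp"].hour,   # 21 = evening user
    "likely_media": encrypted_message["size_bytes"] > 100_000,
    "cross_service": "->" in encrypted_message["route"],
}
print(profile_signal)
```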

It is not news to designers (in any field, including tech) that brands operate as emotional triggers.
  But a study published in the Proceedings of the National Academy of Sciences suggesting that we “limit the presentations” of brands and other graphic symbols (by which we sort ourselves into tribes) online in order “to eliminate echo chambers and partisan rancor on social media” is news worth debating.
  As reported by Nsikan Akpan in “How Seeing a Political Logo Can Impair Your Understanding of Facts” (posted to the PBS NewsHour website, 9/3/2018):
  “Merely seeing these political and social labels can cause you to reject facts that you would otherwise support,” according to researchers.
  “There’s a name for the behavioral pattern observed among participants who saw the logos — priming — and it is common among political and social discourse. Research shows that priming with small partisan cues, whether they involve politics, race or economics, can sway the opinions of people. These knee-jerk decisions happen even if you’re encountering an issue for the first time or have little time to react.  ¶   ‘When people are reminded of their partisan identity — when that is brought to the front of their minds — they think as partisans first and as “thoughtful people” second,’ Dannagal Young, a communications psychologist at the University of Delaware, told the PBS NewsHour via email.  ¶   Young said this happens because logos prompt our minds to change how they process our goals. When reminded of our partisan identity, we promote ideas that are consistent with our attitudes and social beliefs in a bid to seem like good members of the group — in this case, Democrat or Republican.  ¶   Like teenagers at a digital school lunch table, we emphasize our most extreme opinions and place less weight on facts. When partisan cues are stripped away, people made considerations based on objective accuracy, rather than choosing goals by beliefs or peer pressure.  ¶   Young said any online social network — including the ones in the study — are conceptually distinct from the way humans exist in the world, but Centola’s experiments offer insights into how partisan cues can affect people’s attitudes and opinions in digital spaces.  ¶   The study also offers clues for journalists reporting on partisan issues as well as for the designers of social networks like Facebook, which has pledged to reduce the damage caused by the spread of political news and propaganda on its platform.  ¶   ‘The biggest takeaway for me is that individuals and journalists seeking to overcome partisan biases need to drop the “Republicans say this and Democrats say that” language from their discussion of policy,’ Young said. ‘These findings encourage journalists to cover policy in ways that are more about the substance of the issues rather than in terms of the personalized, dramatized political fights.’” (Nsikan Akpan, n. pag.)
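  What might “stripping partisan cues away” look like in practice? Here is a deliberately crude sketch; the cue list is hypothetical, and real de-biasing of political text is far harder than pattern-matching, so treat this as illustration only.

```python
# Crude illustrative sketch of "cue-stripping": replacing partisan labels
# with a neutral placeholder before a passage is displayed. The label list
# is hypothetical; this is not a real de-biasing method.
import re

PARTISAN_CUES = re.compile(
    r"\b(Democrats?|Republicans?|left-wing|right-wing|liberals?|conservatives?)\b",
    re.IGNORECASE,
)

def strip_cues(text: str) -> str:
    """Replace party labels with a neutral placeholder before display."""
    return PARTISAN_CUES.sub("[group]", text)

headline = "Republicans say the policy fails; Democrats say it works."
print(strip_cues(headline))
# -> "[group] say the policy fails; [group] say it works."
```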

And a technological means for reducing the amount of data-driven demagoguery in our lives: “The Flip Phone Is Back. Have people had enough of constant connection?” by Elizabeth Flock (posted to the PBS NewsHour website, 4/26/2019).
  “Smartphones have been around for more than a decade, but Adrian Ward, an assistant professor at the University of Texas at Austin, who has authored several studies on the cognitive consequences of smartphones, said he believes there has been a lag period for people to notice the negative effects.  ¶   ‘It takes people time to realize that they’re personally getting “more” than they expected from technology — not just productivity tools and instant access to cat videos, but also an attentional black hole,’ he wrote in an email. That, he added, coupled with rising concern at a societal level, and tech companies who have gotten even better at capturing consumers’ attention, may have led to a rising resistance to smartphones and renewed interest in simpler phones.” (E. Flock, n. pag.)
  However, the old cheap flip phone has mostly been replaced by “an entire new generation of flip phones that are more like fancy smartphones in disguise.” (n. pag.) For example, Nokia’s retro “banana phone” “not only allows users to play the retro video game Snake, but also check their Facebook and Twitter, take beautiful photos, and create a mobile hotspot.” “These [new flip] phones aren’t solving the problems of smartphone use, but Kostadin Kushlev, an assistant professor of psychology at Georgetown University who has studied the subject for years, said he didn’t believe companies should be expected to.” (E. Flock, n. pag.)

NEW  For better or worse, it is not just the demagogue’s art which is being radically transformed by machine learning-based optimization of messaging and micro-targeting.
  As reported by the PBS NewsHour, Big Data is now driving content creation in industries such as perfumery, streaming services (e.g., Netflix), and pornography: “How Big Data Is Transforming Creative Commerce” (first aired 10/17/2019).
  SUMMARY: “Big data is disrupting nearly every aspect of modern life. Artificial intelligence, which involves machines learning, analyzing and acting upon enormous sets of data, is transforming industries and eliminating certain jobs. But that data can also be used to appeal more directly to what customers want. Special correspondent and Washington Post columnist Catherine Rampell reports.”
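  Mechanically, “machine learning-based optimization of messaging” often amounts to automated experimentation at scale. The following toy epsilon-greedy “bandit” is my own illustrative stand-in, not any vendor’s system: per audience segment, it learns which message framing draws the most clicks, with no editor in the loop.

```python
# Toy epsilon-greedy "bandit" for message optimization: an illustrative
# stand-in for A/B-testing-at-scale machinery, not any vendor's system.
import random
from collections import defaultdict

VARIANTS = ["fear appeal", "hope appeal", "in-group flattery"]

shows = defaultdict(lambda: defaultdict(int))   # segment -> variant -> count
clicks = defaultdict(lambda: defaultdict(int))

def pick_variant(segment: str, epsilon: float = 0.1) -> str:
    """Usually exploit the best-performing variant; occasionally explore."""
    if random.random() < epsilon or not shows[segment]:
        return random.choice(VARIANTS)
    return max(
        VARIANTS,
        key=lambda v: clicks[segment][v] / max(shows[segment][v], 1),
    )

def record_impression(segment: str, variant: str, clicked: bool) -> None:
    shows[segment][variant] += 1
    clicks[segment][variant] += int(clicked)

# Over many impressions, each segment converges on whichever emotional
# framing it responds to most: the micro-targeting dynamic described above.
```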

To learn more about the engraver of the 17th-century head-piece pictured to the left, see the IN BRIEF biography for Wenceslaus Hollar.

N O T E

With revisions made to this page on 8/15/2019, I have begun using NEW stickers to make it easier for this page’s many repeat visitors to locate new content, which is evolving at a faster rate than I anticipated when I first created this page.
  I’m still trying to figure out the best way to manage burgeoning scholarly content in slow haste, and have already raised some related issues in another sidebar note, here.
  So, please be patient as you watch me try out new ideas ... make big mistakes ... then correct course, and try again....

Can’t find something you’re sure you learned about here?
  Try using our customized search tool (search box at the top of the right-hand sidebar on this page), which is updated every time new content is added to the public areas of the website, thus ensuring the most comprehensive and reliable searches of She-philosopher.​com.
  Learn more about our ethical, customized search tool here.
  To ensure that you’re viewing She-philosopher.com’s most recently-updated content (both here and elsewhere at the website), don’t forget to use your browser’s Reload current page button — typically, an icon featuring a broken circle, with arrowhead on one end. On most computers, the keyboard shortcuts Ctrl+R or F5 (Windows) and Command-R (Mac) will also work; or you can right-click for a context-sensitive menu with the Reload this page button/command.
  Refreshing a page is especially important if you find yourself visiting the same Web page more than once within a relatively short time frame. I may have made modifications to the page in the interim, and you won’t always know this unless you force your browser to access the server (rather than your computer’s cache) to retrieve the requested Web page.

go to TOP of page


First Published:  23 February 2019
Revised (substantive):  23 November 2019


“So easie are men to be drawn to believe any thing, from such men as have gotten credit with them; and can with gentlenesse, and dexterity, take hold of their fear, and ignorance.”

 THOMAS HOBBES (1588–1679), Leviathan, or the Matter, Forme, & Power of a Common-Wealth Ecclesiasticall and Civill, 1st edn., 1651, p. 56

Under Construction

S O R R Y,  but this Web page on data-driven demagoguery — harnessing the power of Big Data and psychographics in the service of rhetorical trickery — is still under construction.

printer's decorative block

^ 17th-century head-piece, showing six boys with farm tools, engraved by Wenceslaus Hollar (1607–1677).

We apologize for the inconvenience, and hope that you will return to check on its progress another time.

If you have specific questions relating to She-philosopher.com’s ongoing research projects, contact the website editor.

go to TOP of page

go up a level: She-philosopher.com’s IN BRIEF page