CRES (not) ON FACEBOOK
Zuckerberg . . . is an enemy of the state, and I mean the United States of America. He doesn’t give a shit about us, the United States. He knows he can transcend it. He can get away to any place. And so it’s just about filthy lucre, that’s it. . . . Because these people — and Sheryl is complicit . . . . The truth is that these companies won’t fundamentally change because . . .
--Sacha Baron Cohen

Here is a link to the posts page on the CRES Facebook site: https://www.facebook.com/pg/CRESKC/posts/
Friends have prevailed upon me to allow them to post good stuff to Facebook because they are "more realistic" than I am. I think Facebook is evil, but I know it also does a lot of good. I am not letting my own judgment override the opinion of so many. But below I am retaining analyses about the dangers of Facebook. I also question Twitter and will not use it.
--Vern Barnet
FACEBOOK DANGER updates:
Facebook and Democracy: What's to be done about this evil?
211005 WaPo: Facebook betrays its early ideals for greed and power
Facebook's (Political) Strategy
Facebook’s Shameful Data Sharing
NYTimes: Facebook: Delay, Deny and Deflect
https://www.nytimes.com/2018/10/03/opinion/midterms-facebook-foreign-meddling.html
https://www.nytimes.com/2018/03/19/technology/facebook-alex-stamos.html
https://www.nytimes.com/2018/03/19/opinion/facebook-cambridge-analytica.html
REAL CULPRIT: TV NEWS, says Douthat
https://www.nytimes.com/2019/10/31/opinion/aaron-sorkin-mark-zuckerberg-facebook.html
https://www.nytimes.com/2019/10/29/opinion/trump-zuckerberg.html
https://www.nytimes.com/2018/03/17/us/politics/cambridge-analytica-trump-campaign.html
LONDON — As the upstart voter-profiling company Cambridge Analytica prepared to wade into the 2014 American midterm elections, it had a problem. The firm had secured a $15 million investment from Robert Mercer, the wealthy Republican donor, and wooed his political adviser, Stephen K. Bannon, with the promise of tools that could identify the personalities of American voters and influence their behavior. But it did not have the data to make its new products work. So the firm harvested private information from the Facebook profiles of more than 50 million users without their permission, according to former Cambridge employees, associates and documents, making it one of the largest data leaks in the social network’s history. The breach allowed the company to exploit the private social media activity of a huge swath of the American electorate, developing techniques that underpinned its work on President Trump’s campaign in 2016. An examination by The New York Times and The Observer of London reveals how Cambridge Analytica’s drive to bring to market a potentially powerful new weapon put the firm — and wealthy conservative investors seeking to reshape politics — under scrutiny from investigators and lawmakers on both sides of the Atlantic. Christopher Wylie, who helped found Cambridge and worked there until late 2014, said of its leaders: “Rules don’t matter for them. For them, this is a war, and it’s all fair.” “They want to fight a culture war in America,” he added. “Cambridge Analytica was supposed to be the arsenal of weapons to fight that culture war.” Details of Cambridge’s acquisition and use of Facebook data have surfaced in several accounts since the business began working on the 2016 campaign, setting off a furious debate about the merits of the firm’s so-called psychographic modeling techniques. But the full scale of the data leak involving Americans has not been previously disclosed — and Facebook, until now, has not acknowledged it. Interviews with a half-dozen former employees and contractors, and a review of the firm’s emails and documents, have revealed that Cambridge not only relied on the private Facebook data but still possesses most or all of the trove. Cambridge paid to acquire the personal information through an outside researcher who, Facebook says, claimed to be collecting it for academic purposes. During a week of inquiries from The Times, Facebook downplayed the scope of the leak and questioned whether any of the data still remained out of its control. But on Friday, the company posted a statement expressing alarm and promising to take action. “This was a scam — and a fraud,” Paul Grewal, a vice president and deputy general counsel at the social network, said in a statement to The Times earlier on Friday. He added that the company was suspending Cambridge Analytica, Mr. Wylie and the researcher, Aleksandr Kogan, a Russian-American academic, from Facebook. “We will take whatever steps are required to see that the data in question is deleted once and for all — and take action against all offending parties,” Mr. Grewal said. Alexander Nix, the chief executive of Cambridge Analytica, and other officials had repeatedly denied obtaining or using Facebook data, most recently during a parliamentary hearing last month. But in a statement to The Times, the company acknowledged that it had acquired the data, though it blamed Mr. Kogan for violating Facebook’s rules and said it had deleted the information as soon as it learned of the problem two years ago. 
In Britain, Cambridge Analytica is facing intertwined investigations by Parliament and government regulators into allegations that it performed illegal work on the “Brexit” campaign. The country has strict privacy laws, and its information commissioner announced on Saturday that she was looking into whether the Facebook data was “illegally acquired and used.” In the United States, Mr. Mercer’s daughter, Rebekah, a board member, Mr. Bannon and Mr. Nix received warnings from their lawyer that it was illegal to employ foreigners in political campaigns, according to company documents and former employees. Congressional investigators have questioned Mr. Nix about the company’s role in the Trump campaign. And the Justice Department’s special counsel, Robert S. Mueller III, has demanded the emails of Cambridge Analytica employees who worked for the Trump team as part of his investigation into Russian interference in the election. While the substance of Mr. Mueller’s interest is a closely guarded secret, documents viewed by The Times indicate that the firm’s British affiliate claims to have worked in Russia and Ukraine. And the WikiLeaks founder, Julian Assange, disclosed in October that Mr. Nix had reached out to him during the campaign in hopes of obtaining private emails belonging to Mr. Trump’s Democratic opponent, Hillary Clinton. The documents also raise new questions about Facebook, which is already grappling with intense criticism over the spread of Russian propaganda and fake news. The data Cambridge collected from profiles, a portion of which was viewed by The Times, included details on users’ identities, friend networks and “likes.” Only a tiny fraction of the users had agreed to release their information to a third party. “Protecting people’s information is at the heart of everything we do,” Mr. Grewal said. “No systems were infiltrated, and no passwords or sensitive pieces of information were stolen or hacked.” Still, he added, “it’s a serious abuse of our rules.”

Reading Voters’ Minds

The Bordeaux flowed freely as Mr. Nix and several colleagues sat down for dinner at the Palace Hotel in Manhattan in late 2013, Mr. Wylie recalled in an interview. They had much to celebrate. Mr. Nix, a brash salesman, led the small elections division at SCL Group, a political and defense contractor. He had spent much of the year trying to break into the lucrative new world of political data, recruiting Mr. Wylie, then a 24-year-old political operative with ties to veterans of President Obama’s campaigns. Mr. Wylie was interested in using inherent psychological traits to affect voters’ behavior and had assembled a team of psychologists and data scientists, some of them affiliated with Cambridge University. The group experimented abroad, including in the Caribbean and Africa, where privacy rules were lax or nonexistent and politicians employing SCL were happy to provide government-held data, former employees said. Then a chance meeting brought Mr. Nix into contact with Mr. Bannon, the Breitbart News firebrand who would later become a Trump campaign and White House adviser, and with Mr. Mercer, one of the richest men on earth. Mr. Nix and his colleagues courted Mr. Mercer, who believed a sophisticated data company could make him a kingmaker in Republican politics, and his daughter Rebekah, who shared his conservative views. Mr. Bannon was intrigued by the possibility of using personality profiling to shift America’s culture and rewire its politics, recalled Mr.
Wylie and other former employees, who spoke on the condition of anonymity because they had signed nondisclosure agreements. Mr. Bannon and Mr. Mercer declined to comment. Mr. Mercer agreed to help finance a $1.5 million pilot project to poll voters and test psychographic messaging in Virginia’s gubernatorial race in November 2013, where the Republican attorney general, Ken Cuccinelli, ran against Terry McAuliffe, the Democratic fund-raiser. Though Mr. Cuccinelli lost, Mr. Mercer committed to moving forward. The Mercers wanted results quickly, and more business beckoned. In early 2014, the investor Toby Neugebauer and other wealthy conservatives were preparing to put tens of millions of dollars behind a presidential campaign for Senator Ted Cruz of Texas, work that Mr. Nix was eager to win. When Mr. Wylie’s colleagues failed to produce a memo explaining their work to Mr. Neugebauer, Mr. Nix castigated them over email. “ITS 2 PAGES!! 4 hours work max (or an hour each). What have you all been doing??” he wrote. Mr. Wylie’s team had a bigger problem. Building psychographic profiles on a national scale required data the company could not gather without huge expense. Traditional analytics firms used voting records and consumer purchase histories to try to predict political beliefs and voting behavior. But those kinds of records were useless for figuring out whether a particular voter was, say, a neurotic introvert, a religious extrovert, a fair-minded liberal or a fan of the occult. Those were among the psychological traits the firm claimed would provide a uniquely powerful means of designing political messages. Mr. Wylie found a solution at Cambridge University’s Psychometrics Centre. Researchers there had developed a technique to map personality traits based on what people had liked on Facebook. The researchers paid users small sums to take a personality quiz and download an app, which would scrape some private information from their profiles and those of their friends, activity that Facebook permitted at the time. The approach, the scientists said, could reveal more about a person than their parents or romantic partners knew — a claim that has been disputed. When the Psychometrics Centre declined to work with the firm, Mr. Wylie found someone who would: Dr. Kogan, who was then a psychology professor at the university and knew of the techniques. Dr. Kogan built his own app and in June 2014 began harvesting data for Cambridge Analytica. The business covered the costs — more than $800,000 — and allowed him to keep a copy for his own research, according to company emails and financial records. All he divulged to Facebook, and to users in fine print, was that he was collecting information for academic purposes, the social network said. It did not verify his claim. Dr. Kogan declined to provide details of what happened, citing nondisclosure agreements with Facebook and Cambridge Analytica, though he maintained that his program was “a very standard vanilla Facebook app.” He ultimately provided over 50 million raw profiles to the firm, Mr. Wylie said, a number confirmed by a company email and a former colleague. Of those, roughly 30 million contained enough information, including places of residence, that the company could match users to other records and build psychographic profiles. Only about 270,000 users — those who participated in the survey — had consented to having their data harvested.
[Image: An email from Dr. Kogan to Mr. Wylie describing traits that could be predicted.]
Mr.
Wylie said the Facebook data was “the saving grace” that let his team deliver the models it had promised the Mercers. “We wanted as much as we could get,” he acknowledged. “Where it came from, who said we could have it — we weren’t really asking.” Mr. Nix tells a different story. Appearing before a parliamentary committee last month, he described Dr. Kogan’s contributions as “fruitless.”

An International Effort

Just as Dr. Kogan’s efforts were getting underway, Mr. Mercer agreed to invest $15 million in a joint venture with SCL’s elections division. The partners devised a convoluted corporate structure, forming a new American company, owned almost entirely by Mr. Mercer, with a license to the psychographics platform developed by Mr. Wylie’s team, according to company documents. Mr. Bannon, who became a board member and investor, chose the name: Cambridge Analytica. The firm was effectively a shell. According to the documents and former employees, any contracts won by Cambridge, originally incorporated in Delaware, would be serviced by London-based SCL and overseen by Mr. Nix, a British citizen who held dual appointments at Cambridge Analytica and SCL. Most SCL employees and contractors were Canadian, like Mr. Wylie, or European. But in July 2014, an American election lawyer advising the company, Laurence Levy, warned that the arrangement could violate laws limiting the involvement of foreign nationals in American elections. In a memo to Mr. Bannon, Ms. Mercer and Mr. Nix, the lawyer, then at the firm Bracewell & Giuliani, warned that Mr. Nix would have to recuse himself “from substantive management” of any clients involved in United States elections. The data firm would also have to find American citizens or green card holders, Mr. Levy wrote, “to manage the work and decision making functions, relative to campaign messaging and expenditures.” In summer and fall 2014, Cambridge Analytica dived into the American midterm elections, mobilizing SCL contractors and employees around the country. Few Americans were involved in the work, which included polling, focus groups and message development for the John Bolton Super PAC, conservative groups in Colorado and the campaign of Senator Thom Tillis, the North Carolina Republican. Cambridge Analytica, in its statement to The Times, said that all “personnel in strategic roles were U.S. nationals or green card holders.” Mr. Nix “never had any strategic or operational role” in an American election campaign, the company said. Whether the company’s American ventures violated election laws would depend on foreign employees’ roles in each campaign, and on whether their work counted as strategic advice under Federal Election Commission rules. Cambridge Analytica appears to have exhibited a similar pattern in the 2016 election cycle, when the company worked for the campaigns of Mr. Cruz and then Mr. Trump. While Cambridge hired more Americans to work on the races that year, most of its data scientists were citizens of the United Kingdom or other European countries, according to two former employees. Under the guidance of Brad Parscale, Mr. Trump’s digital director in 2016 and now the campaign manager for his 2020 re-election effort, Cambridge performed a variety of services, former campaign officials said. That included designing target audiences for digital ads and fund-raising appeals, modeling voter turnout, buying $5 million in television ads and determining where Mr. Trump should travel to best drum up support.
Cambridge executives have offered conflicting accounts about the use of psychographic data on the campaign. Mr. Nix has said that the firm’s profiles helped shape Mr. Trump’s strategy — statements disputed by other campaign officials — but also that Cambridge did not have enough time to comprehensively model Trump voters. In a BBC interview last December, Mr. Nix said that the Trump efforts drew on “legacy psychographics” built for the Cruz campaign.

After the Leak

By early 2015, Mr. Wylie and more than half his original team of about a dozen people had left the company. Most were liberal-leaning, and had grown disenchanted with working on behalf of the hard-right candidates the Mercer family favored. Cambridge Analytica, in its statement, said that Mr. Wylie had left to start a rival firm, and that it later took legal action against him to enforce intellectual property claims. It characterized Mr. Wylie and other former “contractors” as engaging in “what is clearly a malicious attempt to hurt the company.” Near the end of that year, a report in The Guardian revealed that Cambridge Analytica was using private Facebook data on the Cruz campaign, sending Facebook scrambling. In a statement at the time, Facebook promised that it was “carefully investigating this situation” and would require any company misusing its data to destroy it. Facebook verified the leak and — without publicly acknowledging it — sought to secure the information, efforts that continued as recently as August 2016. That month, lawyers for the social network reached out to Cambridge Analytica contractors. “This data was obtained and used without permission,” said a letter that was obtained by The Times. “It cannot be used legitimately in the future and must be deleted immediately.” Mr. Grewal, the Facebook deputy general counsel, said in a statement that both Dr. Kogan and “SCL Group and Cambridge Analytica certified to us that they destroyed the data in question.” But copies of the data still remain beyond Facebook’s control. The Times viewed a set of raw data from the profiles Cambridge Analytica obtained. While Mr. Nix has told lawmakers that the company does not have Facebook data, a former employee said that he had recently seen hundreds of gigabytes on Cambridge servers, and that the files were not encrypted. Today, as Cambridge Analytica seeks to expand its business in the United States and overseas, Mr. Nix has mentioned some questionable practices. This January, in undercover footage filmed by Channel 4 News in Britain and viewed by The Times, he boasted of employing front companies and former spies on behalf of political clients around the world, and even suggested ways to entrap politicians in compromising situations. All the scrutiny appears to have damaged Cambridge Analytica’s political business. No American campaigns or “super PACs” have yet reported paying the company for work in the 2018 midterms, and it is unclear whether Cambridge will be asked to join Mr. Trump’s re-election campaign. In the meantime, Mr. Nix is seeking to take psychographics to the commercial advertising market. He has repositioned himself as a guru for the digital ad age — a “Math Man,” as he puts it. In the United States last year, a former employee said, Cambridge pitched Mercedes-Benz, MetLife and the brewer AB InBev, but has not signed them on.
Matthew Rosenberg, Nicholas Confessore and Carole Cadwalladr reported from London. Gabriel J.X. Dance contributed reporting from London, and Danny Hakim from New York.
Facebook and Democracy - 2

https://www.nytimes.com/2016/11/20/opinion/cambridge-analytica-facebook-quiz.html
Cambridge Analytica and the Secret Agenda of a Facebook Quiz
By McKENZIE FUNK, NOV. 19, 2016

Do you panic easily? Do you often feel blue? Do you have a sharp tongue? Do you get chores done right away? Do you believe in the importance of art? If ever you’ve answered questions like these on one of the free personality quizzes floating around Facebook, you’ll have learned what’s known as your Ocean score: How you rate according to the big five psychological traits of Openness, Conscientiousness, Extraversion, Agreeableness and Neuroticism. You may also be responsible the next time America is shocked by an election upset. For several years, a data firm eventually hired by the Trump campaign, Cambridge Analytica, has been using Facebook as a tool to build psychological profiles that represent some 230 million adult Americans. A spinoff of a British consulting company and sometime-defense contractor known for its counterterrorism “psy ops” work in Afghanistan, the firm does so by seeding the social network with personality quizzes. Respondents — by now hundreds of thousands of us, mostly female and mostly young but enough male and older for the firm to make inferences about others with similar behaviors and demographics — get a free look at their Ocean scores. Cambridge Analytica also gets a look at their scores and, thanks to Facebook, gains access to their profiles and real names. Cambridge Analytica worked on the “Leave” side of the Brexit campaign. In the United States it takes only Republicans as clients: Senator Ted Cruz in the primaries, Mr. Trump in the general election. Cambridge is reportedly backed by Robert Mercer, a hedge fund billionaire and a major Republican donor; a key board member is Stephen K. Bannon, the head of Breitbart News who became Mr. Trump’s campaign chairman and is set to be his chief strategist in the White House. In the age of Facebook, it has become far easier for campaigners or marketers to combine our online personas with our offline selves, a process that was once controversial but is now so commonplace that there’s a term for it, “onboarding.” Cambridge Analytica says it has as many as 3,000 to 5,000 data points on each of us, be it voting histories or full-spectrum demographics — age, income, debt, hobbies, criminal histories, purchase histories, religious leanings, health concerns, gun ownership, car ownership, homeownership — from consumer-data giants. No data point is very informative on its own, but profiling voters, says Cambridge Analytica, is like baking a cake. “It’s the sum of the ingredients,” its chief executive officer, Alexander Nix, told NBC News. Because the United States lacks European-style restrictions on second- or thirdhand use of our data, and because our freedom-of-information laws give data brokers broad access to the intimate records kept by local and state governments, our lives are open books even without social media or personality quizzes. Ever since the advertising executive Lester Wunderman coined the term “direct marketing” in 1961, the ability to target specific consumers with ads — rather than blanketing the airwaves with mass appeals and hoping the right people will hear them — has been the marketer’s holy grail. What’s new is the efficiency with which individually tailored digital ads can be tested and matched to our personalities. Facebook is the microtargeter’s ultimate weapon.
The explosive growth of Facebook’s ad business has been overshadowed by its increasing role in how we get our news, real or fake. In July, the social network posted record earnings: quarterly sales were up 59 percent from the previous year, and profits almost tripled to $2.06 billion. While active users of Facebook — now 1.71 billion monthly active users — were up 15 percent, the real story was how much each individual user was worth. The company makes $3.82 a year from each global user, up from $2.76 a year ago, and an average of $14.34 per user in the United States, up from $9.30 a year ago. Much of this growth comes from the fact that advertisers not only have an enormous audience in Facebook but an audience they can slice into the tranches they hope to reach. One recent advertising product on Facebook is the so-called “dark post”: A newsfeed message seen by no one aside from the users being targeted. With the help of Cambridge Analytica, Mr. Trump’s digital team used dark posts to serve different ads to different potential voters, aiming to push the exact right buttons for the exact right people at the exact right times. Imagine the full capability of this kind of “psychographic” advertising. In future Republican campaigns, a pro-gun voter whose Ocean score ranks him high on neuroticism could see storm clouds and a threat: The Democrat wants to take his guns away. A separate pro-gun voter deemed agreeable and introverted might see an ad emphasizing tradition and community values, a father and son hunting together. In this election, dark posts were used to try to suppress the African-American vote. According to Bloomberg, the Trump campaign sent ads reminding certain selected black voters of Hillary Clinton’s infamous “super predator” line. It targeted Miami’s Little Haiti neighborhood with messages about the Clinton Foundation’s troubles in Haiti after the 2010 earthquake. Federal Election Commission rules are unclear when it comes to Facebook posts, but even if they do apply and the facts are skewed and the dog whistles loud, the already weakening power of social opprobrium is gone when no one else sees the ad you see — and no one else sees “I’m Donald Trump, and I approved this message.” While Hillary Clinton spent more than $140 million on television spots, old-media experts scoffed at Trump’s lack of old-media ad buys. Instead, his campaign pumped its money into digital, especially Facebook. One day in August, it flooded the social network with 100,000 ad variations, so-called A/B testing on a biblical scale, surely more ads than could easily be vetted by human eyes for compliance with Facebook’s “community standards.” Perhaps out of necessity, the Trump team was embracing a new-media lesson: It didn’t have to build everything from scratch. Mark Zuckerberg and others had already built the infrastructure the campaign needed to reach voters directly. When “Trump TV” went live on Facebook before and after the second debate it raked in $9 million in donations in 120 minutes. In the immediate wake of Mr. Trump’s surprise election, so many polls and experts were so wrong that it became fashionable to declare that big data was dead. But it isn’t, not when its most obvious avatar, Facebook, was so crucial to victory. 
On Monday, after a similar announcement from Google, Facebook said it would no longer allow fake-news websites to show ads, on their own sites, from Facebook’s ad network — a half-step that neither blocks what appears on your newsfeed nor affects how advertisers can microtarget users on the social network. There are surely more changes to come. Mr. Zuckerberg is young, still skeptical that his radiant transparency machine could be anything but a force for good, rightly wary of policing what the world’s diverse citizens say and share on his network, so far mostly dismissive of Facebook’s role in the election. If Mr. Zuckerberg takes seriously his oft-stated commitments to diversity and openness, he must grapple honestly with the fact that Facebook is no longer just a social network. It’s an advertising medium that’s now dangerously easy to weaponize. A Trump administration is unlikely to enforce transparency about who is targeted by dark posts and other hidden political ads — or to ensure that politicians take meaningful ownership of what the ads say. But Facebook can.

Facebook and Democracy - 3

https://www.washingtonpost.com/news/global-opinions/wp/2018/02/19
Who is afraid of special counsel Robert S. Mueller III? President Trump is afraid. So are those who worked on his campaign. But they are not alone. Over the weekend, Rob Goldman made it clear that some of America’s biggest social media companies are scared of Mueller, too. Goldman is Facebook’s vice president for advertising, and according to his Twitter bio, a “student, seeker, raconteur, burner.” On Friday, he took to Twitter to proclaim his company’s innocence. He was, he wrote, “very excited to see the Mueller indictment today,” since Facebook had “shared Russian ads with Congress, Mueller and the American people.” But “still, there are key facts about the Russian actions that are still not well understood.” He went on: “Most of the coverage of Russian meddling involves their attempt to effect the outcome of the 2016 US election. I have seen all of the Russian ads and I can say very definitively that swaying the election was *NOT* the main goal.” Instead, he said, the main goal was to “divide America by using our institutions, like free speech and social media, against us. It has stoked fear and hatred amongst Americans. It is working incredibly well.” In a short string of tweets, in other words, Facebook’s vice president for advertising twisted and obfuscated the issues almost beyond recognition. For one, the indictment states clearly that the Russians were not merely buying ads: It alleges that they used fake American identities, fraudulently obtained PayPal accounts and fraudulent Social Security numbers to set up Facebook pages for groups such as “Blacktivist,” “Secured Borders” and “Army of Jesus.” They did indeed use those pages to spread fear and hatred, reaching tens and possibly hundreds of millions of people. They began this project in 2014, well before the election. And when the election began, they were under clear instructions, according to the indictment, to “use any opportunity to criticize Hillary [Clinton] and the rest (except [Bernie] Sanders and Trump—we support them).” By the time the election began in earnest, the attempt to “divide America” was an attempt to elect Trump. They pushed anti-Clinton messages on websites aimed at the far-right fringe and tried to suppress voter turnout on websites aimed at minorities. I’m not sure where Goldman’s idea that “swaying the election was not the main goal” comes from, but it is diametrically opposed to the content of Mueller’s indictment. No wonder Trump tweeted this on Saturday: “The Fake News Media never fails. Hard to ignore the fact from the Vice President of Facebook Ads, Rob Goldman!” But Goldman is right to be afraid. The social media companies, including Facebook as well as Twitter, YouTube and Reddit, really do bear a part of the responsibility for the growing polarization and bitter partisanship in American life that the Russians, and not only the Russians, sought to exploit. They have not become conduits for Russian propaganda, and not only Russian propaganda, by accident. The Facebook algorithm, by its very nature, is pushing Americans, and everybody else, into ever more partisan echo chambers — and people who read highly partisan material are much more likely to believe false stories. 
At the same time, Facebook has declared itself free of responsibility: The company continues to argue that it is not legally liable for material that appears on its platform because it is not a “publisher,” even though it behaves in every other way like a publisher, including by collecting advertising revenue that used to go to publishers. The result is that anyone who seeks to spread false information on Facebook or any other social media site is, in practice, no longer bound by laws on libel or false advertising that were explicitly designed to stop them. This is not the only problem: There is plenty of evidence now that the very nature of the platforms encourages ever more extreme, ever more offensive material. Studies of YouTube have shown how automated video production, governed by algorithms, not humans, leads inexorably to more violent and more disturbing videos. One recent survey suggests that up to 15 percent of Twitter accounts — some 48 million — may not be human at all. Many think that is a gross underestimate. Don’t let them off the hook: Until they take responsibility
for what appears on their platforms — or until they are held legally liable
— the social media companies will continue to fuel the division that Goldman
piously denounces. They are not accidental victims of Russia’s information
war. They are its tools.
Algorithmic Results

Legally mandated open application programming interfaces for social media platforms . . . would help the public identify what is being delivered by social media algorithms, and thus help protect our democracy.
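To make that proposal concrete: below is a minimal sketch, in Python, of the kind of audit a legally mandated open API could enable. Everything in it (the record fields, the criterion names, the sample data) is invented for illustration; no platform currently publishes delivery records in any such form.

    # Hypothetical sketch only: no real platform exposes delivery records
    # like this today; every field and criterion name here is invented.
    from collections import Counter
    from dataclasses import dataclass

    @dataclass
    class DeliveryRecord:
        item_id: str               # the post or ad that was shown
        reason: str                # e.g. "engagement_ranking", "lookalike_audience"
        criteria: tuple[str, ...]  # targeting attributes the algorithm used

    def summarize_deliveries(records: list[DeliveryRecord]) -> Counter:
        """Tally how often each targeting criterion drove a delivery,
        the kind of audit an open API would make possible."""
        counts: Counter = Counter()
        for record in records:
            for criterion in record.criteria:
                counts[criterion] += 1
        return counts

    if __name__ == "__main__":
        sample = [
            DeliveryRecord("post-1", "engagement_ranking", ("interest:politics", "age:18-24")),
            DeliveryRecord("ad-7", "lookalike_audience", ("interest:politics",)),
        ]
        print(summarize_deliveries(sample))
        # Counter({'interest:politics': 2, 'age:18-24': 1})

Even a tally this simple, run over real delivery records, would let outside researchers see which targeting criteria actually determine what each of us is shown.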
How Evil Is Tech?
NYTimes | OP-ED COLUMNIST David Brooks, NOV. 20, 2017

Not long ago, tech was the coolest industry. Everybody wanted to work at Google, Facebook and Apple. But over the past year the mood has shifted. Some now believe tech is like the tobacco industry — corporations that make billions of dollars peddling a destructive addiction. Some believe it is like the N.F.L. — something millions of people love, but which everybody knows leaves a trail of human wreckage in its wake. Surely the people in tech — who generally want to make the world a better place — don’t want to go down this road. It will be interesting to see if they can take the actions necessary to prevent their companies from becoming social pariahs. There are three main critiques of big tech. The first is that it is destroying the young. Social media promises an end to loneliness but actually produces an increase in solitude and an intense awareness of social exclusion. Texting and other technologies give you more control over your social interactions but also lead to thinner interactions and less real engagement with the world. As Jean Twenge has demonstrated in book and essay, since the spread of the smartphone, teens are much less likely to hang out with friends, they are less likely to date, they are less likely to work. Eighth graders who spend 10 or more hours a week on social media are 56 percent more likely to say they are unhappy than those who spend less time. Eighth graders who are heavy users of social media increase their risk of depression by 27 percent. Teens who spend three or more hours a day on electronic devices are 35 percent more likely to have a risk factor for suicide, like making a plan for how to do it. Girls, especially hard hit, have experienced a 50 percent rise in depressive symptoms. The second critique of the tech industry is that it is causing this addiction on purpose, to make money. Tech companies understand what causes dopamine surges in the brain and they lace their products with “hijacking techniques” that lure us in and create “compulsion loops.” Snapchat has Snapstreak, which rewards friends who snap each other every single day, thus encouraging addictive behavior. News feeds are structured as “bottomless bowls” so that one page view leads down to another and another and so on forever. Most social media sites create irregularly timed rewards; you have to check your device compulsively because you never know when a burst of social affirmation from a Facebook like may come. The third critique is that Apple, Amazon, Google and Facebook are near monopolies that use their market power to invade the private lives of their users and impose unfair conditions on content creators and smaller competitors. The political assault on this front is gaining steam. The left is attacking tech companies because they are mammoth corporations; the right is attacking them because they are culturally progressive. Tech will have few defenders on the national scene. Obviously, the smart play would be for the tech industry to get out in front and clean up its own pollution. There are activists like Tristan Harris of Time Well Spent, who is trying to move the tech world in the right directions. There are even some good engineering responses. I use an app called Moment to track and control my phone usage.
The big breakthrough will come when tech executives clearly acknowledge the central truth: Their technologies are extremely useful for the tasks and pleasures that require shallower forms of consciousness, but they often crowd out and destroy the deeper forms of consciousness people need to thrive. Online is a place for human contact but not intimacy. Online is a place for information but not reflection. It gives you the first stereotypical thought about a person or a situation, but it’s hard to carve out time and space for the third, 15th and 43rd thought. Online is a place for exploration but discourages cohesion. It grabs control of your attention and scatters it across a vast range of diverting things. But we are happiest when we have brought our lives to a point, when we have focused attention and will on one thing, wholeheartedly with all our might. Rabbi Abraham Joshua Heschel wrote that we take a break from the distractions of the world not as a rest to give us more strength to dive back in, but as the climax of living. “The seventh day is a palace in time which we build. It is made of soul, joy and reticence,” he said. By cutting off work and technology we enter a different state of consciousness, a different dimension of time and a different atmosphere, a “mine where the spirit’s precious metal can be found.” Imagine if instead of claiming to offer us the best things in life, tech merely saw itself as providing efficiency devices. Its innovations can save us time on lower-level tasks so we can get offline and there experience the best things in life. Imagine if tech pitched itself that way. That would
be an amazing show of realism and, especially, humility, which these days
is the ultimate and most disruptive technology.
Excerpt from a column by The Post’s Margaret Sullivan, “Facebook’s strategy: Avert disaster, apologize and keep growing”:
The Post’s reviewer, Susan Benkelman of the American Press Institute, summed up its takeaway: “The company has put growth and profits above all else, even when it was clear that misinformation and hate speech were circulating across the platform and that the company was violating the privacy of its users.” And then I remembered a few things — like having been present at the Senate hearing in 2018 when Facebook founder Mark Zuckerberg tried to defend the company policies that enabled the political consulting firm Cambridge Analytica, intent on electing Donald Trump as president, to get its hands on data from millions of users. I remembered how thoroughly incapable many senators were of even understanding the way Facebook works, much less regulating it effectively, and how at one point 84-year-old Orrin Hatch of Utah asked a question about the company’s business model so basic that Zuckerberg was able to answer it in four words: “Senator, we run ads.”
From 2018: Members of Congress can’t possibly regulate Facebook. They don’t understand it.
. . . To the casual user posting wedding photos or recipes, doing research or finding that old friend, using Facebook may seem to be not only fun but free. But in reality, the price is astonishingly high. And it’s only going in one direction.
#211005 Washington Post: Margaret Sullivan, “Facebook is harming our society.”
[Photo: Frances Haugen, a Facebook whistleblower, speaking to Scott Pelley on “60 Minutes” on Sunday. Haugen showed the extent of Facebook’s toxicity, Sullivan writes, but our current tools of government aren’t equipped to confront it. (Robert Fortunato/AFP/Getty Images)]
By Margaret Sullivan
Frances Haugen, who revealed herself Sunday as the Facebook whistleblower, could not have made things any clearer. “Facebook has realized that if they change the algorithm to be safer, people will spend less time on the site, they’ll click on less ads, they’ll make less money,” the former member of Facebook’s civic integrity team, who left the company this spring, told Scott Pelley of CBS’s “60 Minutes.” This wasn’t just Haugen’s opinion as a digital-economy veteran, with a long stint at Google before she joined Facebook. She had the goods. The huge trove of documents that she took when she left the behemoth social network spells out its ugly incentive structure in case you had any remaining doubt: Outrage, hate and lies are what drive digital engagement, and therefore revenue. The system is broken. And we all suffer from it. But how to fix it? A problem that threatens the underpinnings of our civil society calls for a radical solution: A new federal agency focused on the digital economy. The idea comes from none other than a former Federal Communications Commission chairman, Tom Wheeler, who maintains that neither his agency nor the Federal Trade Commission is nimble or tech-savvy enough to protect consumers in this volatile and evolving industry. “You need an agency that doesn’t say ‘here are the rigid rules,’ when the rules become obsolete almost immediately,” Wheeler, who headed the FCC from 2013 to 2017, told me Monday. Too much of the digital world operates according to Mark Zuckerberg’s famous motto: “Move fast and break things.” That’s a perfect expression of what Wheeler called “consequence-free behavior.” So if we really want to think about the public interest in the fast-paced digital world, it’ll be necessary to revise “the cumbersome, top-down rule-making process that has been in place since the industrial era,” as Wheeler wrote in a Harvard Shorenstein Center paper, with Phil Verveer, the Justice Department lead counsel on a suit that resulted in the breakup of AT&T, and Gene Kimmelman, a prominent consumer-protection advocate. Digital platforms like Facebook and Google have become “pseudo-governments that make the rules,” Wheeler told me. No surprise that they make the rules to benefit themselves. The existing regulatory structure just doesn’t work, he argued in a Brookings Institution piece. The FCC and FTC are filled with dedicated professionals but are constrained. Their antitrust actions may grab headlines but can’t protect against more general consumer abuses — like those take-it-or-leave-it “terms and conditions” they force on their customers. And it’s not as though Facebook hasn’t been punished for its offenses. In 2019, the FTC slapped the company with a record-breaking $5 billion fine for deceiving billions of users and failing to protect their privacy. But such a penalty doesn’t address the issues that Haugen was talking about Sunday, or those that she’s expected to discuss Tuesday when she testifies before Congress. “The thing I saw at Facebook over and over again was there were conflicts of interest between what was good for the public and what was good for Facebook,” she told CBS. “Facebook, over and over again, chose to optimize for its own interests, like making more money.” Haugen’s simple delivery made for a powerful interview, hammering home the details of the shocking information she shared with the Wall Street Journal for its recent blockbuster investigation, The Facebook Files.
Among the revelations: Facebook is thoroughly aware that the mental health of teens is damaged by engagement with Instagram, which it owns (“Teens blame Instagram for increases in the rate of anxiety and depression,” stated one slide from an internal presentation) but has done little to change that. And despite its constant protestations to the contrary, Facebook has built a business model that it knows full well relies on the anger and outrage of its nearly 3 billion users to keep them engaged and clicking. (“Misinformation, toxicity and violent content are inordinately prevalent among reshares,” its own data scientists concluded, according to the Journal report.) As Haugen explained, this phenomenon motivates politicians not just to communicate differently but to govern differently, by embracing less reasonable, more outrage-inducing policy positions. You can see this playing out in extreme rhetoric on emotional issues like immigration policy. Facebook’s practices, she believes, even propelled the Jan. 6 insurrection at the Capitol by allowing misinformation to flourish and organizers to congregate on its sites. Facebook’s representatives deny many of her charges, calling some of them ludicrous. “We continue to make significant improvements to tackle the spread of misinformation and harmful content,” said Facebook spokesperson Lena Pietsch. “To suggest we encourage bad content and do nothing is just not true.” Company founder Zuckerberg, meanwhile, has repeatedly said he thinks more regulation is necessary. (As long as it doesn’t cut into profits, one can assume he means.) That all sounds mighty reasonable. And mighty familiar. Zuckerberg loves to apologize sincerely and carry on. Facebook keeps growing in size, value and influence — vividly demonstrated when the massive platform crashed Monday, along with its subsidiaries Instagram and WhatsApp, and disrupted an enormous chunk of the planet that has come to rely on them. Something has to change. And that doesn’t mean a
little tinkering around the edges of what already exists. The digital revolution
requires a revolutionary change in restraining out-of-control practitioners.
https://www.nytimes.com/2024/06/22/technology/zuckerberg-instagram-child-safety-lawsuits.html
How Mark Zuckerberg’s Meta Failed Children on Safety
The C.E.O. and his team drove Meta’s efforts to capture young users and misled the public about the risks, lawsuits by state attorneys general say.
By Natasha Singer, who covers children’s online privacy and reviewed several thousand pages of legal filings in states’ lawsuits against Meta for this article. June 22, 2024

In April 2019, David Ginsberg, a Meta executive, emailed his boss, Mark Zuckerberg, with a proposal to research and reduce loneliness and compulsive use on Instagram and Facebook. In the email, Mr. Ginsberg noted that the company faced scrutiny for its products’ impacts “especially around areas of problematic use/addiction and teens.” He asked Mr. Zuckerberg for 24 engineers, researchers and other staff, saying Instagram had a “deficit” on such issues. A week later, Susan Li, now the company’s chief financial officer, informed Mr. Ginsberg that the project was “not funded” because of staffing constraints. Adam Mosseri, Instagram’s head, ultimately declined to finance the project, too.
MOSSERI’S EMAIL: Unfortunately I don’t see us funding this from Instagram any time soon.
The email exchanges are just one slice of evidence cited among more than a dozen lawsuits filed since last year by the attorneys general of 45 states and the District of Columbia. The states accuse Meta of unfairly ensnaring teenagers and children on Instagram and Facebook while deceiving the public about the hazards. Using a coordinated legal approach reminiscent of the government’s pursuit of Big Tobacco in the 1990s, the attorneys general seek to compel Meta to bolster protections for minors. A New York Times analysis of the states’ court filings — including roughly 1,400 pages of company documents and correspondence filed as evidence by the State of Tennessee — shows how Mr. Zuckerberg and other Meta leaders repeatedly promoted the safety of the company’s platforms, playing down risks to young people, even as they rejected employee pleas to bolster youth guardrails and hire additional staff. In interviews, the attorneys general of several states suing Meta said Mr. Zuckerberg had led his company to drive user engagement at the expense of child welfare. “A lot of these decisions ultimately landed on Mr. Zuckerberg’s desk,” said Raúl Torrez, the attorney general of New Mexico. “He needs to be asked explicitly, and held to account explicitly, for the decisions that he’s made.” The state lawsuits against Meta reflect mounting concerns that teenagers and children on social media can be sexually solicited, harassed, bullied, body-shamed and algorithmically induced into compulsive online use. Last Monday, Dr. Vivek H. Murthy, the United States surgeon general, called for warning labels to be placed on social networks, saying the platforms present a public health risk to young people. His warning could boost momentum in Congress to pass the Kids Online Safety Act, a bill that would require social media companies to turn off features for minors, like bombarding them with phone notifications, that could lead to “addiction-like” behaviors. (Critics say the bill could hinder minors’ access to important information. The News/Media Alliance, a trade group that includes The Times, helped win an exemption in the bill for news sites and apps that produce news videos.) In May, New Mexico arrested three men who were accused of targeting children for sex after, Mr.
Torrez said, they solicited state investigators who had posed as children on Instagram and Facebook. Mr. Torrez, a former child sex crimes prosecutor, said Meta’s algorithms enabled adult predators to identify children they would not have found on their own.
[Photo: “This is Mr. Zuckerberg’s fault,” Raúl Torrez, the attorney general of New Mexico, said during a news conference last month. His state has sued Meta, accusing it of deceiving the public on child safety. Credit: Greg Kahn for The New York Times]
Meta disputed the states’ claims and has filed motions to dismiss their lawsuits. In a statement, Liza Crenshaw, a spokeswoman for Meta, said the company was committed to youth well-being and had many teams and specialists devoted to youth experiences. She added that Meta had developed more than 50 youth safety tools and features, including limiting age-inappropriate content and restricting teenagers under 16 from receiving direct messages from people they didn’t follow. “We want to reassure every parent that we have their interests at heart in the work we’re doing to help provide teens with safe experiences online,” Ms. Crenshaw said. The states’ legal complaints, she added, “mischaracterize our work using selective quotes and cherry-picked documents.” But parents who say their children died as a result of online harms challenged Meta’s safety assurances. “They preach that they have safety protections, but not the right ones,” said Mary Rodee, an elementary school teacher in Canton, N.Y., whose 15-year-old son, Riley Basford, was sexually extorted on Facebook in 2021 by a stranger posing as a teenage girl. Riley died by suicide several hours later. Ms. Rodee, who sued the company in March, said Meta had never responded to the reports she submitted through automated channels on the site about her son’s death. “It’s pretty unfathomable,” she said.

The Push to Win Teenagers

Meta has long wrestled with how to attract and retain teenagers, who are a core part of the company’s growth strategy, internal company documents show. Teenagers became a major focus for Mr. Zuckerberg as early as 2016, according to the Tennessee complaint, when the company was still known as Facebook and owned apps including Instagram and WhatsApp. That spring, an annual survey of young people by the investment bank Piper Jaffray reported that Snapchat, a disappearing-message app, had surpassed Instagram in popularity. Later that year, Instagram introduced a similar disappearing photo- and video-sharing feature, Instagram Stories. Mr. Zuckerberg directed executives to focus on getting teenagers to spend more time on the company’s platforms, according to the Tennessee complaint. The “overall company goal is total teen time spent,” wrote one employee, whose name is redacted, in an email to executives in November 2016, according to internal correspondence among the exhibits in the Tennessee case. Participating teams should increase the number of employees dedicated to projects for teenagers by at least 50 percent, the email added, noting that Meta already had more than a dozen researchers analyzing the youth market.
EMAIL ON ZUCKERBERG’S GOALS: Mark has decided that the top priority for the company in 2017 is teens.
In April 2017, Kevin Systrom, Instagram’s chief executive, emailed Mr. Zuckerberg asking for more staff to work on mitigating harms to users, according to the New Mexico complaint. Mr.
Zuckerberg replied that he would include Instagram in a plan to hire more staff, but he said Facebook faced “more extreme issues.” At the time, legislators were criticizing the company for having failed to hinder disinformation during the 2016 U.S. presidential campaign. Mr. Systrom asked colleagues for examples to show the urgent need for more safeguards. He soon emailed Mr. Zuckerberg again, saying Instagram users were posting videos involving “imminent danger,” including a boy who shot himself on Instagram Live, the complaint said. Two months later, the company announced that the Instagram Stories feature had hit 250 million daily users, dwarfing Snapchat. Mr. Systrom, who left the company in 2018, didn’t respond to a request for comment. Meta said an Instagram team developed and introduced safety measures and experiences for young users. The company didn’t respond to a question about whether Mr. Zuckerberg had provided the additional staff.

‘Millions’ of Underage Users

In January 2018, Mr. Zuckerberg received a report estimating that four million children under the age of 13 were on Instagram, according to a lawsuit filed in federal court by 33 states. Facebook’s and Instagram’s terms of use prohibit users under 13. But the company’s sign-up process for new accounts enabled children to easily lie about their age, according to the complaint. Meta’s practices violated a federal children’s online privacy law requiring certain online services to obtain parental consent before collecting personal data, like contact information, from children under 13, the states allege. In March 2018, The Times reported that Cambridge Analytica, a voter profiling firm, had covertly harvested the personal data of millions of Facebook users. That set off more scrutiny of the company’s privacy practices, including those involving minors. Mr. Zuckerberg testified the next month at a Senate hearing, “We don’t allow people under the age of 13 to use Facebook.” Attorneys general from dozens of states disagree.
STATE ATTORNEYS GENERAL COMPLAINT: Within the company, Meta’s actual knowledge that millions of Instagram users are under the age of 13 is an open secret that is routinely documented, rigorously analyzed and confirmed, and zealously protected from disclosure to the public.
In late 2021, Frances Haugen, a former Facebook employee, disclosed thousands of pages of internal documents that she said showed the company valued “profit above safety.” Lawmakers held a hearing, grilling her on why so many children had accounts. Meanwhile, company executives knew that Instagram use by children under 13 was “the status quo,” according to the joint federal complaint filed by the states. In an internal chat in November 2021, Mr. Mosseri acknowledged those underage users and said the company’s plan to “cater the experience to their age” was on hold, the complaint said.
MOSSERI’S MESSAGE: Tweens want access to Instagram, and they lie about their age to get it now.
In its statement, Meta said Instagram had measures in place to remove underage accounts when the company identified them. Meta has said it has regularly removed hundreds of thousands of accounts that could not prove they met the company’s age requirements.

Fighting Over Beauty Filters

A company debate over beauty filters on Instagram encapsulated the internal tensions over teenage mental health — and ultimately the desire to engage more young people prevailed.
It began in 2017 after Instagram introduced camera effects that enabled users to alter their facial features to make them look funny or “cute/pretty,” according to internal emails and documents filed as evidence in the Tennessee case. The move was made to boost engagement among young people. Snapchat already had popular face filters, the emails said. But a backlash ensued in the fall of 2019 after Instagram introduced an appearance-altering filter, Fix Me, which mimicked the nip/tuck lines that cosmetic surgeons draw on patients’ faces. Some mental health experts warned that the surgery-like camera effects could normalize unrealistic beauty standards for young women, exacerbating body-image disorders. As a result, Instagram in October 2019 temporarily disallowed camera effects that made dramatic, surgical-looking facial alterations — while still permitting obviously fantastical filters, like goofy animal faces. The next month, concerned executives proposed a permanent ban, according to Tennessee court filings. Other executives argued that a ban would hurt the company’s ability to compete. One senior executive sent an email saying Mr. Zuckerberg was concerned whether data showed real harm. In early 2020, ahead of an April meeting with Mr. Zuckerberg to discuss the issue, employees prepared a briefing document on the ban, according to the Tennessee court filings. One internal email noted that employees had spoken with 18 mental health experts, each of whom raised concerns that cosmetic surgery filters could “cause lasting harm, especially to young people.” But the meeting with Mr. Zuckerberg was canceled. Instead, the chief executive told company leaders that he was in favor of lifting the ban on beauty filters, according to an email he sent that was included in the court filings.
ZUCKERBERG’S EMAIL TO EXECUTIVES: It has always felt paternalistic to me that we’ve limited people’s ability to present themselves in these ways, especially when there’s no data I’ve seen that suggests doing so is helpful or not doing so is harmful.
Several weeks later, Margaret Gould Stewart, then Facebook’s vice president for product design and responsible innovation, reached out to Mr. Zuckerberg, according to an email included among the exhibits. In the email, she noted that as a mother of teenage daughters, she knew social media put “intense” pressure on girls “with respect to body image.”
STEWART’S EMAIL TO ZUCKERBERG: I was hoping we could maintain a moderately protective stance here given the risk to minors. … I just hope that years from now we will look back and feel good about the decision we made here.
Ms. Stewart, who subsequently left Meta, did not respond to an email seeking comment. In the end, Meta said it barred filters “that directly promote cosmetic surgery, changes in skin color or extreme weight loss” and clearly indicated when one was being used.

Priorities and Youth Safety

In 2021, Meta began planning for a new social app. It was to be aimed specifically at children and called Instagram Kids. In response, 44 attorneys general wrote a letter that May urging Mr. Zuckerberg to “abandon these plans.” “Facebook has historically failed to protect the welfare of children on its platforms,” the letter said. Meta subsequently paused plans for an Instagram Kids app. By August, company efforts to protect users’ well-being had become “increasingly urgent” for Meta, according to another email to Mr. Zuckerberg filed as an exhibit in the Tennessee case.
Nick Clegg, now Meta’s head of global affairs, warned his boss of mounting concerns from regulators about the company’s impact on teenage mental health, including “potential legal action from state A.G.s.” Describing Meta’s youth well-being efforts as “understaffed and fragmented,” Mr. Clegg requested funding for 45 employees, including 20 engineers.

CLEGG’S EMAIL TO ZUCKERBERG: We need to do more and we are being held back by a lack of investment on the product side which means that we’re not able to make changes and innovations at the pace required to be responsive to policymaker concerns.

In September 2021, The Wall Street Journal published an article saying Instagram knew it was “toxic for teen girls,” escalating public concerns. An article in The Times that same month mentioned a video that Mr. Zuckerberg had posted of himself riding across a lake on an “electric surfboard.” Internally, Mr. Zuckerberg objected to that description, saying he was actually riding a hydrofoil he pumped with his legs and that he wanted to post a correction on Facebook, according to employee messages filed in court. Mr. Clegg found the idea of a hydrofoil post “pretty tone deaf given the gravity” of recent accusations that Meta’s products caused teenage mental health harms, he said in a text message exchange with communications executives that was included in court filings.

CLEGG’S TEXT MESSAGE TO COMMUNICATIONS STAFF ABOUT ZUCKERBERG: If I was him, I wouldn’t want to be asked ‘while your company was being accused of aiding and abetting teenage suicide, why was your only public pronouncement a post about surfing?’

Mr. Zuckerberg went ahead with the correction. In November 2021, Mr. Clegg, who had not heard back from Mr. Zuckerberg about his request for more staff, sent a follow-up email with a scaled-down proposal, according to Tennessee court filings. He asked for 32 employees, none of them engineers.

CLEGG’S EMAIL TO ZUCKERBERG: This investment is important to ensure we have the product roadmaps necessary to stand behind our external narrative of well-being on our apps.

Ms. Li, the finance executive, responded a few days later, saying she would defer to Mr. Zuckerberg and suggesting that the funding was unlikely, according to an internal email filed in the Tennessee case. Meta didn’t respond to a question about whether the request had been granted. A few months later, Meta said that although its revenue for 2021 had increased 37 percent to nearly $118 billion from a year earlier, fourth-quarter profit plummeted because of a $10 billion investment in developing virtual reality products for immersive realms, known as the metaverse.

Explicit Videos Involving Children

Last fall, the Match Group, which owns dating apps like Tinder and OkCupid, found that ads the company had placed on Meta’s platforms were running adjacent to “highly disturbing” violent and sexualized content, some of it involving children, according to the New Mexico complaint. Meta removed some of the posts flagged by Match, telling the dating giant that “violating content may not get caught a small percentage of the time,” the complaint said. Dissatisfied with Meta’s response, Bernard Kim, the chief executive of the Match Group, reached out to Mr. Zuckerberg by email with a warning, saying his company could not “turn a blind eye,” the complaint said.

KIM’S EMAIL TO ZUCKERBERG: Meta is placing ads adjacent to offensive, obscene — and potentially illegal — content, including sexualization of minors and gender based violence.

Mr. Zuckerberg didn’t respond to Mr. Kim, according to the complaint.
Meta said the company had spent years building technology to combat child exploitation. Last month, a judge denied Meta’s motion to dismiss the New Mexico lawsuit, but the court granted a request to drop Mr. Zuckerberg, who had been named as a defendant, from the case.

The "Facebook Papers": internal Facebook documents disclosed by whistleblower Frances Haugen to the SEC

#221211 https://www.nytimes.com/2022/12/11/opinion/what-twitter-can-learn-from-quakers.html?
OPINION: Ezra Klein, Dec. 11, 2022
The Great Delusion Behind Twitter

For what feels like ages, we’ve been told that Twitter is, or needs to be, the world’s town square. That was Dick Costolo’s line in 2013, when he was Twitter’s chief executive (“We think of it as the global town square”), and Jack Dorsey, one of Twitter’s founders, used it, too, in 2018 (“People use Twitter as a digital public square”). Now the line comes from the “chief twit,” Elon Musk (“The reason I acquired Twitter is because it is important to the future of civilization to have a common digital town square”).

This metaphor is wrong on three levels.

First, there isn’t, can’t be and shouldn’t be a “global town square.” The world needs many town squares, not one. Public spaces are rooted in the communities and contexts in which they exist. This is true, too, for Twitter, which is less a singular entity than a digital multiverse. What Twitter is for activists in Zimbabwe is not what it is for gamers in Britain.

Second, town squares are public spaces, governed in some way by the public. That is what makes them a town square rather than a square in a town. They are not the playthings of whimsical billionaires. They do not exist, as Twitter did for so long, to provide returns to shareholders. (And as wild as Musk’s reign has already been, remember that he tried to back out of this deal, and Twitter’s leadership, knowing he neither wanted the service nor would treat it or its employees with care, forced it through to ensure that executives and shareholders got their payout.) A town square controlled by one man isn’t a town square. It’s a storefront, an art project or possibly a game preserve.

Third, what matters for a polity isn’t the mere existence of a town square but the condition the townspeople are in when they arrive. Town squares can host debates. They can host craft fairs. They can host brawls. They can host lynchings. Civilization does not depend on a place to gather. It depends on what happens when people gather.

So much genius and trickery and money have gone into a mistaken metaphor. The competition to create and own the digital square may be good business, but it has led to terrible politics. Think of the hopeful imaginings that accompanied the early days of social media: We would know one another across time and space; we would share with one another across cultures and generations; we would inform one another across borders and factions.

Billions of people use these services. Their scale is truly civilizational. And what have they wrought? Is the world more democratic? Is G.D.P. growth higher? Is innovation faster? Do we seem wiser? Do we seem kinder? Are we happier? Shouldn’t something, anything, have gotten noticeably better in the short decades since these services fought their way into our lives?

I think there is a reason that so little has gotten better and so much has gotten worse. It is this: The cost of so much connection and information has been the deterioration of our capacity for attention and reflection.
And it is the quality of our attention and reflection that matters most. In a recent paper, Benjamin Farrer, a political scientist at Knox College in Illinois, argues that we have mistaken the key resource upon which democracy, and perhaps civilization, depends. That resource is attention. But not your attention or my attention. Our attention. Attention, in this sense, is a collective resource; it is the depth of thought and consideration a society can bring to bear on its most pressing problems. And as with so many collective resources, from fresh air to clean water, it can be polluted or exhausted.

Borrowing a concept from Elinor Ostrom, the first woman to win the Nobel in economic science, Farrer argues that attention is subject to a problem known as the tragedy of the commons. A classic example of a tragedy of the commons is an open pasture that any shepherd can use for his flock. Without wise governance, every shepherd will send his flock to graze, because if he doesn’t, the other shepherds will do so first. Soon enough, the pasture is bare, and the resource is depleted.

Farrer argues that our collective attention is like a public pasture: It is valuable, it is limited, and it is being depleted. Everyone from advertisers to politicians to newspapers to social media giants wants our attention. The competition is fierce, and it has led to more sensationalism, more outrageous or infuriating content, more algorithmic tricks, more of anything that might give a brand or a platform or a politician an edge, even as it leaves us harried, irritable and distracted.

One telling study recruited participants across 17 countries and six continents and measured skin conductivity — a signal of emotional response — when participants saw positive, negative and neutral news. Negative news was, consistently, the most engaging. If you’ve ever wondered why the news is so focused on tragedy and conflict or why social media furnishes more outrage than inspiration, that’s the reason. Negativity captures our attention better than positivity or neutrality.

This is not a new dynamic, and it is by no means unique to Twitter. “The mission of the press is to spread culture while destroying the attention span,” Karl Kraus, the Austrian satirist, wrote in the early 1900s. But it is worse now. The tools available to those who would command our attention are far more powerful than in past eras.

Twitter’s problems did not begin and will not end with Musk. They are woven into the fabric of the platform. Twitter makes it easy to discuss hard topics poorly. And it does that by putting its participants in the worst state of mind for a discussion. Twitter forces nuanced thoughts down to bumper-sticker bluntness. The chaotic, always moving newsfeed leaves little time for reflection on whatever has just been read. The algorithm’s obsession with likes and retweets means users mainly see (and produce) speech that flatters their community or demonizes those they already loathe. The quote tweet function encourages mockery rather than conversation. The frictionless slide between thought and post, combined with the absence of an edit function, encourages impulsive reaction rather than sober consideration.

It is not that difficult conversations cannot or have not happened on the platform. It is more that they should not happen on the platform. But they do. Of course they do. And this is what critics of the platform, including me, need to reckon with.
“The whole issue of police violence against Black people was fully exposed because of Twitter,” Sherrilyn Ifill, a former president of the NAACP Legal Defense Fund, told me. “Because of videos of Walter Scott running in that park and Philando Castile and Freddie Gray and so many others. Presenting this incontrovertible evidence of the truth we’d been living with and that was so disparaged by white political leaders has forever transformed the conversation over public safety.”

Twitter has real strengths, many of which are the flip side of its weaknesses. It is as flat a medium as any that has existed. It is as fast a medium as has ever existed; that can be maddening, but it can also draw attention to something that is happening and has to change right now. It is an unusually confrontational medium, and that has permitted movements like Black Lives Matter and #MeToo to flower and for socialists to get a new hearing in American politics — and it has also, of course, given new succor and life to the racist right. Put simply, Twitter’s value is how easy it makes it to talk. Its cost is how hard it makes it to listen.

It is a failure of imagination to think that our choice is the social media platforms we have now or nothing. I keep thinking about something that Robin Sloan, a novelist and former Twitter employee, wrote this year: “There are so many ways people might relate to one another online, so many ways exchange and conviviality might be organized. Look at these screens, this wash of pixels, the liquid potential! What a colossal bummer that Twitter eked out a local maximum, that its network effect still (!) consumes the fuel for other possibilities, other explorations.”

What’s surprised me most as Twitter has convulsed in recent weeks is how threadbare the social media cupboard really is. So many are open to trying something new, but as of yet, there’s nothing that feels all that new to try. Everything feels like a take on Twitter. It may be faster or slower, more decentralized or more moderated, but they’re all variations on the same theme: experiments in how to capture attention rather than deepen it, platforms built to encourage us to speak rather than to help us listen or think.

Permit me a weird turn here. I became interested this year in how Quakers deliberate. As a movement, Quakers have been far ahead of the moral curve time and again — early to abolitionism, to equality between the sexes, to prison reform, to pressuring governments to help save Jews from the Holocaust. That is not to say Quakers have gotten nothing wrong, but what has led them to get so much right?

The answer suggested by Rex Ambler’s lovely book “The Quaker Way” is silence. In a typical Quaker meeting, Ambler writes, community members “sit in silence together for an hour or so, standing up to speak only if they are led to do so, and then only to share some insight which they sense will be of value to others.” If they must decide an issue collectively, “they will wait in silence together, again, to discern what has to be done.”

There is much that debate can offer but much that it can obscure. “To get a clear sense of what is happening in our lives, we Quakers try to go deeper,” he writes. “We have to let go our active and fretful minds in order to do this. We go quiet and let a deeper, more sensitive awareness arise.”

I find this powerful in part because I see it in myself. I know how I respond in the heat of an argument, when my whole being is tensed to react.
And I know how I process hard questions or difficult emotions after quiet reflection, when there is time for my spirit to settle. I know which is my better self.

Democracy is not and will not be one long Quaker meeting. But there is wisdom here worth mulling. We do not make our best decisions, as individuals or as a collective, when our minds are most active and fretful. And yet “active and fretful” is about as precise a description as I can imagine of the Twitter mind. And having put us in an active, fretful mental state, Twitter then encourages us to fire off declarative statements on the most divisive possible issues, always with one eye to how quickly they will rack up likes and retweets and thus viral power. It’s insane.

And it will get so much worse from here. OpenAI recently released ChatGPT, an artificial intelligence system that can be given requests in plain language (“Write me an argument for the benefits of single-payer health care, in the style of a Taylor Swift song”) and spit out remarkably passable results. What ChatGPT can do is a marvel. We are at the dawn of a new technological era. But it is easy to see how it could turn dark — and quickly.

A.I. systems like this make the production and manipulation of text (and code and images and eventually audio and video) functionally costless. They will be deployed to produce whatever makes us most likely to click. But these systems do not and cannot know what they are producing. The cost of creating and optimizing content that grabs our attention is plummeting, but the cost of producing valuable and truthful work isn’t.

These are technologies that lend themselves to cacophony, not community. I fear a world in which the business models behind them run on our attention or profit off our anger. But other worlds and other models are possible.

A few weeks back, I spoke to Audrey Tang, Taiwan’s minister of digital affairs. I asked her what it would mean for social media to be run democratically, given the mistrust many Americans have — and for good reason — of the state. (Imagine if the Trump administration had owned Twitter.) “Does the social sector mean anything in the American context?” she asked me.

By the social sector, Tang meant what we sometimes call civil society — the layer of associations and organizations between the government and the market. In Taiwan, key parts of digital infrastructure are managed at this level. The PTT Bulletin Board System, which she described as Taiwan’s Reddit, if Reddit were far more central to social and political life, is still owned by the student group that started it. It was part of how Taiwan responded so early and so effectively to the coronavirus. “It has no shareholders,” Tang said. “No advertisers. It is entirely within the academic network. It’s entirely open source. It’s entirely community governed. People can freely join it. It’s a public digital space.”

It sounded like utopia to me, before I remembered that a key part of our digital infrastructure is run similarly. Wikipedia remains one of the most-visited sites on the web, and it is owned and managed by the nonprofit Wikimedia Foundation. It shows. Wikipedia has never tried to become more than it is. It never pivoted to video or remade itself around an algorithmic feed in order to harvest more of our attention. It is a commons but one that is governed so we may use it rather than so that it may use us. It gives so much more than it takes.
It thrives, quietly and gently, as a reminder that a very different internet, governed in a very different way, intended for a very different purpose, is possible.

There are those who believe the social web is reaching its terminal point. I hope they’re right. Platform after platform was designed to make it easier and more addictive for us to share content with one another so the corporations behind them could sell ever more of our attention and data. In different ways, most of these platforms are now in decline.

What if the next turn of the media dial was measured not by how much attention we gave to a platform but by how much it gave to us? I am not sure what such a service would look like. But I am hungry for it, and I suspect a lot of other people are, too.