Biden Must Repair—and Reinvigorate—Tech Diplomacy

The Biden-Harris administration has officially named Antony Blinken as its pick for secretary of state. In keeping with other nominees announced in the past few days, Blinken is an experienced diplomat and foreign policy hand, having served as deputy secretary of state from 2015 to 2017 and, before that, as deputy national security adviser to President Obama. He brings a wealth of expertise to the table.

Nominations like Blinken’s underscore the Biden-Harris administration’s expected hard pivot back to multilateralism and alliance-building, much needed after four years of zero-sum foreign policy and nationalist chest-thumping. Any diplomatic reinvigoration, though, must focus on tech as well—many of the world’s technology problems will not be solved unilaterally or through military means, and certainly not by Silicon Valley internet giants. There are several ways a Biden-Harris administration could make this renewed US tech diplomacy, and by extension tech policy leadership, a reality.

The Trump administration did the diplomats responsible for digital issues no favors; it cut the State Department’s overall budget, minimized the importance of its technology work, and pulled the rug out from underneath those working in areas like free internet access and 5G supply chain policy. In an ideal case, a new administration will not merely aim to “reset to 2016” but, rather, completely reorient to address this damage and give diplomacy an even more concerted push. To do this, the State Department writ large, and specifically its technology work, will need more funding.

But beyond the necessity of more diplomatic resources, the “optimal” path forward is not as clear. The administration will have to decide how exactly to situate cyber in the US diplomatic apparatus—given its entanglement with human rights and counterterrorism, free speech and modern trade, capacity-building and national security. In 2011, the Obama White House set up the Office of the Coordinator for Cyber Issues to centralize the department’s technology work. Trump officials effectively lowered the office’s importance, and John Bolton, in what was widely seen as a power-consolidation move, eliminated the White House cyber coordinator position in May 2018 right after he landed as national security adviser. In response to these changes, the final report of the Cyberspace Solarium Commission recommends that Congress create a Senate-confirmed role of national cyber director and an assistant secretary position in the State Department to head a new Bureau of Cyberspace and Emerging Technologies.

As for how the US organizes its diplomatic tools internally, the administration will have many paths available. It’s possible the president-elect will restore the White House cyber coordinator, though having the position be Senate-confirmed may be undesirable from an executive branch perspective. Pressing for a much bigger diplomacy budget would also have great value in the digital sphere—for better addressing issues like supply chain security and 5G, and for building coalitions on digital trade (like the one that 15 Asia-Pacific nations just signed)—but that, too, is no guarantee in light of a years-long decline in State Department funding. Ultimately, appointments below the secretary level at State will shape how US diplomats use their available resources to work on tech.

The strategy the US will take on digital engagement internationally could also go a number of different ways. Lately, a lot of attention has been paid to the idea that democracies should band together to battle technologically infused authoritarianism; it was certainly a theme at the Halifax Security Forum this past weekend, where Senator Chris Coons said, “If we are going to make it as a world community of democracies, this is an absolutely pivotal year.” In the last year alone, several tech-specific diplomatic initiatives have emerged, like the Global Partnership on AI (GPAI), launched by the OECD, which all G7 countries plus India, South Korea, Singapore, Slovenia, and the EU have already signed onto. There is also the D10 alliance formed by the United Kingdom to pursue, alongside other democratic nations, 5G alternatives to Chinese telecom Huawei. Not to mention, of course, a history of bilateral and multilateral engagements on which the US can double down.

The AstraZeneca Covid Vaccine Data Isn’t Up to Snuff

There were other dosing issues, too, that haven’t been explained even though dosing is the centerpiece of the press release. There are many different regimens in these trials—the UK study has more than two dozen arms, meaning the volunteers were divided into that many groups according to age and how much of the vaccine would be administered and when. The doses are measured by the number of altered viral particles they contain, and the developers decided that the standard dose would be 5 × 10¹⁰ viral particles. But for many of those arms in the UK trial—as well as everyone who got the vaccine in the Brazilian trial—publicly available trial information shows that the standard dose could be between 3.5 and 6.5 × 10¹⁰ viral particles. The lower end of that range isn’t far off from a half-dose.

How did Oxford-AstraZeneca end up with this patched-together analysis instead of data from a single, large trial? After all, this vaccine went into Phase 3 testing before either BNT-Pfizer’s or Moderna’s did. But in the UK, where that testing started, the Covid-19 outbreak happened to be receding. That meant results would be coming in very slowly.

A month later, a second Phase 3 trial for the vaccine started in Brazil. That one was for healthcare workers, for whom the risk of being exposed to Covid was far higher than it was for the people in the UK trial. But the two trials had other substantive differences. In the UK, for example, the volunteers who did not get the experimental Covid vaccine were injected with meningococcal vaccine; in Brazil, those in the comparison group were given a saline injection as a placebo.

Meanwhile, BNT-Pfizer and Moderna began Phase 3 trials for their coronavirus vaccines on the same day in July: Both planned to include 30,000 volunteers at the time, and both trial plans were approved by the FDA. Oxford-AstraZeneca then announced they, too, would run a 30,000-person trial in the US.

But that research on the Oxford-AstraZeneca vaccine quickly fell behind the others’. The US trial was approved by the FDA, but it didn’t start recruiting people until the end of August; and just a week later, it was put on hold so the FDA could investigate a serious adverse event in the UK trial. It wasn’t clear what caused the volunteer to get sick, but the FDA did not give the all-clear for Oxford-AstraZeneca’s US trial to resume until Oct. 23. By then the protocol for the trial had been publicly released. It says the plan is to inject the vaccine in two standard doses, a month apart; and two people will be vaccinated for every one who gets a placebo saline injection.

So here we are at the end of November. BNT-Pfizer and Moderna have offered up a masterclass in how to do major vaccine trials quickly in a pandemic, while Oxford-AstraZeneca has, for the moment, only an assortment of smaller ones ready to look at.

But wait, more red flags! Last week, Oxford-AstraZeneca published some results from an earlier stage of the UK trial. That paper included a trial protocol for the UK study, attached as an appendix. Deep in that document, and apparently overlooked by reporters and commentators, was an eyebrow-raising suggestion: Under a section marked “Interim and primary analyses of the primary outcome,” the trialists outline a plan to combine and analyze data from four clinical trials (only half of which are Phase 3), carried out in different ways on three different continents. The plan, they wrote, was to pull out results only for the people across these four trials who had gotten “two standard-dose vaccines,” and then pool those together for what’s called a meta-analysis.

Overcoming Vaccine Skepticism Starts in the Community

Governments are already planning who will get the Covid-19 vaccine first, prioritizing the elderly and the vulnerable.

Those plans should not presume that everyone who can have the vaccine will be willing to receive it. There is already much scepticism, resistance, and all-out hostility to vaccination, particularly in minority communities.

This is about much more than the “anti-vax” movement that has lately been associated with the far-right in the United States, or, where I live, the largely white anti-lockdown and anti-vaccine protestors who have regularly marched throughout England.

I sit on the management group of the trial of the Novavax vaccine—one of six vaccines pre-ordered by the UK government—at the Bradford Royal Infirmary, the first trial of its type anywhere in the world. Bradford is one of the most ethnically diverse parts of Britain; more than a third of the town is not white British. A quarter of Bradford’s residents are Pakistani.

Ethnic minorities were about one-tenth as likely as the general population to participate in the vaccine trial: They comprise 36 percent of the population, but only 3 percent of trial participants.

Those same minorities who are more likely to refuse a vaccine are also twice as likely to catch Covid, and two to three times as likely to die from the disease.

Many of the factors that make them more susceptible to Covid also make them more likely to refuse a vaccine.

The common thread is lack of access to and distrust of official government communication. In March, I called for all official government Covid information websites to be available in multiple languages. Eight months later, some governments are still only communicating in official languages. This immediately excludes many older first generation immigrants—precisely the demographic that is most at risk. In Bradford, Pakistani and Kashmiri immigrants who speak Urdu and local Kashmiri languages like Potwari are largely left in the dark.

There also needs to be a shift away from top-down, almost dictatorial communications like the daily televised government press briefings. These pressers (along with an aggressive social media strategy) have been a ratings hit and invaluable in providing a single, authoritative source of information. But what about communities who do not watch the mainstream channels, or don’t actively use social media?

Minorities also already have poorer health outcomes than the general population. When many minorities feel failed by health services (despite their own communities being over-represented in the delivery of health and care), there is naturally lower trust.

Compounding this is the fact that many minorities also already felt alienated by government policies. The ever closer alliance between health experts and political leaders is likely to tar the former with the distrust directed towards the latter.

African-Americans are more than three times as likely to be killed during a police encounter, just as Black Britons are forty times more likely to be stopped and searched by police. Many Latino communities in the US live in constant fear of ICE enforcement teams. Muslims on both sides of the Atlantic have complained of profiling and over-zealous counter-extremism programs like the Prevent strategy. If you’re not white, it’s inevitable these policies will color your feelings about an officially endorsed vaccine.

This is a (perhaps unforeseen) consequence of the politicization of health authorities. Epidemiologists like Anthony Fauci or Chris Whitty, England’s chief medical officer, may feel that they can stand at the podium next to the president or prime minister and still claim to be impartial scientists. Optics matter, and in some quarters health authorities are now as distrusted as the governments who fund them.

This has created real resistance in some communities towards vaccines. When the Bradford Novavax trial sent representatives to the local Mosque to plead for minority participants, they were politely welcomed, but it didn’t increase participation.

What minority communities need is to receive this message about vaccine safety from those they identify with and trust within their own communities. Instead of top-down communication from health authorities and medical professors, we need horizontal encouragement: relatives, friends, the server in the restaurant, the taxi driver who drives you to school, they should all be encouraging you. Crucially, we need respected and trusted figures in the community to advocate. Religious leadership is also key. Mosque leaders and spiritual authorities should be publicly taking the vaccine.

Why Isn’t Susan Wojcicki Getting Grilled By Congress?

If there is a singular moment that defines YouTube’s intentional opacity, and the lack of accountability it facilitates, perhaps it was in 2018, when Google (and therefore YouTube) provided the most limited data set of the three companies to the independent researchers tasked by the Senate Select Committee on Intelligence with preparing reports analyzing the nature and extent of Russian interference in the 2016 US election. Our collective lack of insight into what is happening on the platform in the four years since has been an ongoing echo of that moment.

And yet, by and large, YouTube’s game plan of giving less to scrutinize has worked. Why?

In part, the problem is practical and technical. It is much harder—and more time-consuming—to search and analyze audio and video content than it is text. In part, it is an audience problem: The people who write about and research platforms tend to live on Twitter (and, to a lesser extent, Facebook). Perhaps the problem is also a product of unconscious bias, with academics and journalists over-indexing on the importance of the written word. It’s certainly a generational problem: Users of YouTube, and of other platforms that focus on video content like TikTok or Twitch, tend to be younger. Fundamentally, it’s also a storytelling problem: It’s simply harder to write a captivating story about a platform’s failure to take action or release a policy than it is to write about a platform that releases one. That is, until the results of failing to have a policy become all too clear, as they have for YouTube since Election Day. I am guilty of all these biases, and they are evident in my work too. But to solve the challenges posed by content moderation and its governance, the focus must extend beyond the problems that are easier to write about. Opacity should not be so rewarded.

The YouTube problem is not just a problem with YouTube. It’s also indicative of a broader truth: In general, researchers, lawmakers, and journalists focus on the problems that are most visible and tractable, even if they are not necessarily the only important ones. As more content moves from the biggest “mainstream” platforms to smaller ones—perhaps precisely because they have more lax content moderation standards—this will be an increasingly common challenge. Likewise, as platforms and users create more “private” or “disappearing” content, it will be harder to track. This does not mean social media will not still have all the usual problems—hate speech, disinformation, misinformation, incitement to violence—that always exist where people create content online.

This is not a call for a swath of new policies banning any and all false political content (whatever that would mean). In general, I favor intermediate measures like aggressive labelling, de-amplification, and increased friction for users sharing it further. But most of all, I favor platforms taking responsibility for the role they play in our information ecosystem, thinking ahead, being transparent, explaining their content moderation choices, and showing how they have been enforced. Clear policies, announced in advance, are an important part of platform governance: Content moderation must not only be done, but it must be seen to be legitimate and understood.

YouTube ultimately did append a small label to videos about election results stating that “The AP has called the Presidential race for Joe Biden.” Whether or not this is adequate, YouTube’s failure to announce in advance that it planned to do so (as other platforms did) is inexplicable. This ad hoc approach creates the opening for speculation that its actions are influenced by political outcomes, rather than objective criteria it laid out beforehand. YouTube’s role in modern public discourse is important enough that it needs to do better than complacent reassurances that “our systems are generally working as intended.”

Trump’s Bogus Ballot Lawsuits Are the Mark of an Autocrat

Votes in the US presidential election are still being counted and made official, and throughout this process, media outlets like the Associated Press have remained reliable information sources where citizens can stay apprised of the tallying. I’ve watched several news channels since Election Day, and journalists have put impressive effort into carefully detailing the numbers—what is official, predicted, and unknown.

Despite all this, President Trump’s campaign said it is pursuing legal action over ballot counts in Pennsylvania, Michigan, Wisconsin, and Georgia. Former vice president Joe Biden’s campaign says it is unfazed, as legal experts call the lawsuits baseless. “There’s no legal cause of action that says, ‘Stop the count and declare me the winner,’” law professor Joshua A. Douglas told The Washington Post. But nonsensical claims of election illegitimacy were never about the law or about facts—they derive from Trump’s authoritarian worldview.

As many votes are still undergoing tabulation, Trump has continued spreading disinformation online. He tweeted lies about vote counts and procedures in multiple swing states; Twitter labeled these tweets with a banner indicating potentially misleading information. Early on Wednesday morning, Trump also made a false victory claim that was broadcast on Facebook and Twitter. The Facebook post only carried a warning label (and quickly racked up millions of views); Twitter had none at all. Now, his campaign says it is filing lawsuits in multiple states to contest how ballots are processed, while MAGA influencers concurrently push baseless election fraud claims on their own social media profiles. Over 150 Trump supporters, some armed, surrounded the entrance to a Phoenix election office last night while chanting, “Count the vote.”

The incumbent’s demands to count ballots past a cutoff in one state blatantly contradict his demands to discount those similarly situated in another. Logical consistency or principle was of course never the point—the Trump campaign’s proclamations and lawsuits were never about procedural rules in the first place.

Instead, the best lens through which to understand these events is authoritarianism: Ballots cast in Trump’s favor are legitimate; those cast in opposition are not. Because it is he who should be in power, noncompliant votes are invalid, and the only fair process is the one which results in his victory. This worldview is the very reason the president’s enablers now vie to further exclude as many Biden votes as possible, no matter how baseless their legal assertions.

It should not surprise anyone, because Donald Trump made his intentions quite clear: falsely claiming for months that mail-in voting wasn’t secure, denying clear evidence of voter suppression, and not agreeing to recognize the election’s results before they’re known. It should not surprise because the Trump administration, like any number of autocrats who purge opposition within government or believe media exists to serve their interests, has carried out reprisals against Department of Homeland Security personnel talking about Russian election interference, has forced officials to manipulate Centers for Disease Control and Prevention data to mirror the president’s lying about Covid-19, and has overseen the development of a “purge list” of CIA personnel not ideologically aligned with the president. Within the government, misalignment with Trump is automatically disqualifying.

Trump’s 2020 campaign was itself built on this foundation, as administration officials flagrantly trampled the Hatch Act, a 1939 law limiting federal officials’ political activity in their official positions. Trump delivered a campaign speech—accepting the Republican Party’s nomination, in fact—on the White House lawn, fist raised, the white marble pillars towering behind him. Secretary of State Mike Pompeo spoke at the Republican National Convention while calling in from Israel, a trip made for official government business. The administration’s defenses of this and similar behavior, though, conveyed that those wielding power apparently cannot abuse it. In a 2016 interview, president-elect Trump asserted it was impossible for the president to have a conflict of interest. This went a step further than implying a lack of accountability for those in power; it suggested that a leader’s personal interests actually cannot be misaligned with those of the people they were elected to serve.

An Election Forecaster Reflects: We Have Too Many Polls

An earth scientist colleague wrote to me this week to ask about the election. In the climate-forecasting business, he wrote, one often uses “persistence”—that is, the assumption that conditions remain unchanged from one year to the next—as a control condition and basis for comparisons. He wanted to know what would happen if you applied the same logic to electoral politics: Were this year’s poll-based predictions any better than what you’d get by guessing that the 2016 results would repeat themselves?

My quick answer was, no, the persistence method would not have worked. If you’d just copied the 2016 results, you would have had a Republican victory, and as of Thursday it looks like Joe Biden won the presidential election with victories in many key states and a slightly higher share of the national vote than Hillary Clinton received four years ago. But we can do better than that. Political scientists have developed models that do a good job of forecasting the national vote based on so-called “fundamentals”: key variables such as economic growth, approval ratings, and incumbency. If we’d taken one of these models and adjusted it based on the parties’ vote shares from 2016 (as opposed to using recent polling data), we would have projected a narrow Biden win, and likely ended up closer to the mark than any guess derived from the famous poll averages. Even better, we would have done so at a fraction of the cost.

I say this as a co-creator of one of those famous—or maybe I should say “notorious”—poll averages. Our election forecast at The Economist ended up predicting Biden would win more than 54 percent of the two-party vote, and gave him a 97 percent chance of winning the electoral college. Given the closeness of the election, we’re now feeling a bit uncomfortable with that latter claim. On the other hand, the popular vote, electoral vote, and vote shares in all or almost all the states (including Florida!) seem to have fallen within our 95 percent uncertainty intervals—so maybe it’s fairer to say that we successfully expressed our uncertainty.

The question here, though, is whether polling and forecasting are a waste of time and resources, given that, at least in this election, we could’ve done better with no polls at all. We should be able to study this using our forecasting model. It’s Bayesian, meaning that it combines information from past elections, a fundamentals-based forecast, and polls during the campaign.
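The core Bayesian move is simple enough to sketch: treat the fundamentals-based forecast as a prior on the vote share, treat the poll average as a noisy observation, and combine the two by precision weighting. This is a toy illustration, not The Economist's actual model, and every number in it is an invented assumption:

```python
def combine_normal(prior_mean, prior_sd, poll_mean, poll_sd):
    """Posterior for a normal prior updated by one normal observation."""
    w_prior = 1 / prior_sd**2   # precision of the fundamentals prior
    w_poll = 1 / poll_sd**2     # precision of the poll average
    post_mean = (w_prior * prior_mean + w_poll * poll_mean) / (w_prior + w_poll)
    post_sd = (w_prior + w_poll) ** -0.5
    return post_mean, post_sd

# Illustrative inputs: fundamentals say 52% of the two-party vote, +/- 3;
# the poll average says 54%, +/- 2.
mean, sd = combine_normal(52.0, 3.0, 54.0, 2.0)
print(f"posterior: {mean:.1f}% +/- {sd:.1f}")
```

The posterior lands between the two inputs, pulled toward whichever source claims less uncertainty—which is why a strong fundamentals prior can keep a poll-driven forecast honest.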

One thing I can say with some confidence is that we currently have too many polls—too many state polls and too many national polls. At some point, polling a state or the country over and over again has diminishing returns, because all the polls can be off—as we saw in several states this election.
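That intuition—"all the polls can be off"—can be made concrete with a toy error model: each poll's error is a bias shared by every poll (say, one kind of voter systematically not responding) plus independent sampling noise. Averaging more polls shrinks only the noise term, so precision plateaus at the size of the shared bias. The standard deviations below are illustrative assumptions, not estimates from real data:

```python
def avg_poll_error_sd(n_polls, bias_sd=3.0, noise_sd=3.0):
    """Standard deviation of the error of an average of n_polls surveys,
    where every poll shares one common bias plus its own independent noise."""
    return (bias_sd**2 + noise_sd**2 / n_polls) ** 0.5

# The independent noise averages out; the shared bias never does.
for n in (1, 5, 25, 100):
    print(n, round(avg_poll_error_sd(n), 2))
```

Going from 25 polls to 100 barely moves the number: past a certain point, every additional survey mostly re-measures the same bias.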

Then again, I’m not paying for the polls. Many surveys are done by commercial pollsters who make money by asking business-related questions on their surveys. Election polling serves as a loss leader for these firms, a way for the polling organization to get some publicity. The good thing about this system is that the pollsters have an economic motivation to get things right. For example, the fact that the Selzer poll performed so well in Iowa, predicting a strong Republican finish, should be good for their business.

But this logic led me and others to be too sanguine about poll performance in this election. Sure, some key state polls bombed in 2016, but the pollsters learned from their mistakes, right? They did fine in 2018. The wide uncertainties in our 2020 forecast were based on our historical analysis of state-level polling errors, and they came in handy this time, as they allowed our prediction intervals to include the ultimate election outcomes despite the poll failures.


What went wrong with the polls this year? It wasn’t just Donald Trump. The polls also systematically understated the vote for Republican congressional candidates. We can’t be sure at this point, but right now I’m guessing that the big factors were differential nonresponse (Republican voters being less likely to respond to polls and Democratic voters being more likely) and differential turnout (Republicans being more likely to go out and vote). We had a record number of voters this year, and part of this was Republicans coming out on Election Day after hearing about record early voting by Democrats. Other possible reasons for discrepancies between the polls and the vote include differential rates of ballot rejection and last-minute changes in opinion among undecided voters.

So How Wrong Were the Polls This Year, Really?

Going into Election Day, forecasters predicted a Joe Biden victory, but cautioned that Donald Trump still had a chance. Experts warned that the process of counting mail-in votes could take days or even weeks to finish, and that early vote totals in the Rust Belt states might start off looking heavily pro-Trump but then shift blue as absentee ballots were counted. We heard that Trump planned to declare himself the winner on election night and call for ballot counting to halt in states where he was in the lead.

On Wednesday afternoon, as I write this column, Biden is on track for a close victory, but Trump still has a chance. The results hinge on a handful of swing states that may not finish counting votes until the end of the week; in the Rust Belt, Trump’s early leads look to be morphing into narrow Biden victories as absentee ballots get counted. Meanwhile, Trump has indeed declared that “as far as I’m concerned, we already have won it.” In other words, everything is turning out just as we’d been told. So why does it all feel so surprising?

The answer begins in Florida. Heading into Tuesday, The New York Times announced it would be reviving its notorious election needle, but only for the three states that were expected to count most of their votes quickly and report detailed statistics on who voted where, and by what method: Florida, North Carolina, and Georgia. FiveThirtyEight’s poll averages gave Biden an advantage in those states, in percentage points, of 2.5, 1.8, and 1.2, respectively. All practically toss-ups, but heading into the evening it seemed reasonable to guess that Biden would win at least one of these key states, and that we’d know it before bedtime.

This did not happen. Florida, where Trump won by 1.2 points in 2016, was the first to report results—and they were stunning. The Times needle swung dramatically to the right as results came in, indicating a near-guaranteed Trump victory. While Biden did well in some parts of the state, Trump increased his 2016 margins in the heavily Cuban stronghold of Miami-Dade County. The Times started predicting a 4-point Trump victory; as of now, with most of the votes counted, that lead is closer to 3 percent, suggesting that the FiveThirtyEight average had been off by 5.5 points.

The other Times needles, feeding on Florida’s data, reacted accordingly, predicting similarly decisive wins for Trump in Georgia and North Carolina. But as the night went on, it became clear that the results in Florida—perhaps the nation’s most demographically and politically idiosyncratic swing state—both over- and under-predicted the scope of the national polling error. In Georgia and North Carolina, as more votes got counted, the races tightened; as of now, Trump is on track to win North Carolina by just over 1 percentage point, while in Georgia, a batch of outstanding ballots from the Atlanta area could actually deliver the state narrowly to Biden.

Elsewhere in the country, however, the polls downplayed Trump’s support even more flagrantly than they did in Florida. Biden led the averages by 8.4 points in Wisconsin and 7.9 points in Michigan. As of now, with all votes counted in Wisconsin and nearly all in Michigan, he is up in those states by just 0.6 and 1.2 points, respectively. In Ohio, which Trump won easily last time, polls showed Biden within less than a point. He’s currently down 8. These errors are even larger than the ones from 2016.
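Those gaps can be checked with a few lines of arithmetic, using the figures quoted above. The Ohio entries are approximations, since the text gives only "within less than a point" and "down 8":

```python
# error = Biden's lead in the polling average minus his lead in the count,
# in percentage points; negative margins mean a Trump lead.
races = {
    "Wisconsin": (8.4, 0.6),
    "Michigan": (7.9, 1.2),
    "Ohio": (-0.9, -8.0),  # approximate: "within less than a point" vs. down ~8
}
for state, (poll_margin, result_margin) in races.items():
    print(f"{state}: polls missed by {poll_margin - result_margin:.1f}")
```

On these numbers, the misses cluster around seven points, versus roughly five and a half in Florida.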

Polls are not and never have been perfect, and state polls are typically worse than national ones because it’s harder to build representative samples with smaller subgroups. This year’s apparent errors may also shrink once all the votes are counted. “Some of these outcomes are still moving targets,” said Courtney Kennedy, director of survey research at Pew Research Center. “Michigan, Pennsylvania, Arizona, Nevada—I think it’s important that we pump the brakes for the next few days and let that play out, because it’s very likely that those vote outcomes are going to shift a little bit more toward where the polling was.” Nate Silver of FiveThirtyEight predicts that, once all the votes have been counted, this year’s national polling error will settle around 3 percent, with some states ending up better than predicted for Trump and others worse—just like 2016.

The Senate Race That Could be Pivotal for America—and Wikipedia

A political newcomer, Theresa Greenfield has never held public office, and her life lacks the typical arc of a political climber. In 1988, her husband died in a freak accident; the Social Security benefits she received allowed her family to survive, a story that has become the centerpiece of her campaign. After earning a college degree, Greenfield became the president of a small Des Moines real estate firm.

This has made Greenfield an unusual candidate for national office: Her tragedies have been private, while her ambitions, if not modest, were focused: trying to raise two children as a single parent with a business. Greenfield’s lack of notability—which she shares with the vast majority of people she is running to represent—is in many ways a primary theme of her campaign.

In short, Wikipedia’s notability litmus test doesn’t just advantage political incumbents; it advantages the kind of people—insiders, celebrities, men—who already enjoy notable status in a social and economic hierarchy that others in politics may wish to democratize.

Greenfield’s dilemma is one that often faces female candidates: what might be called a “notability trap.” Political challengers who are deemed non-notable tend to be women, and they are often faced with only one path to getting a page on Wikipedia: winning their race. In 2018, for example, Alexandria Ocasio-Cortez saw her Wikipedia entry appear on June 27, the day after she won an upset primary victory.

In the “blue wave” later that year, 88 newcomers would win election to Congress. Of the 52 challengers considered notable enough to have Wikipedia entries before their elections, almost 70 percent were men and 30 percent women. And among the 10 challengers already considered notable for their private-life achievements, eight were men, among them a liquor store magnate, the brother of Vice President Pence, a former NFL wide receiver, and a California man who won the lottery. Meanwhile, among the women not considered notable were a Navy commander, an Air Force captain and sports company executive, a key architect of the auto-industry bailout, a law professor, and an Iowa state official. All received their Wikipedia articles shortly after they won election.

The notability trap has become a topic of controversy outside of politics, too. In 2018, Canadian physicist Donna Strickland was repeatedly denied a Wikipedia page for lack of notability. That changed one day in October, around 9:56 am—the morning she won the Nobel Prize. Strickland shared the prize with a male colleague, Gérard Mourou, who has had a Wikipedia page since 2005. Earlier that year, when users attempted to create a page for Strickland, a moderator denied the request, replying that the article’s references “do not show that the subject qualifies” for Wikipedia.

For activists, the Greenfield example reflects a familiar pattern. “Absences on Wikipedia echo throughout the Internet, and that is universal for any field—art, politics, and so on,” says Kira Wisniewski, the executive director of the organization Art+Feminism, a group founded in 2014 to correct what it saw as gender imbalances in the arts on Wikipedia. Wisniewski pointed to a 2011 survey that suggested more than 90 percent of Wikipedia editors were male, one reason she suspects women might be less likely to have their past achievements deemed notable.

Lih, the Wikipedia expert, is more reluctant to attribute Greenfield’s rejection to gender—some male Senate candidates, like Al Gross in Alaska, similarly did not have a Wikipedia page for much of this year—but nevertheless calls Wikipedia’s political rules a serious problem. “It’s pretty obvious an article was merited,” he says of the Greenfield case, later adding: “We’re not doing the right thing.”

Yet that wasn’t so obvious on Wikipedia. As the Iowa race became a virtual toss-up, Greenfield’s proponents became increasingly heated. They pointed to the growing national interest in the campaign. “This draft now clearly exceeds [the] notability threshold,” wrote one user.

But the other side insisted that Greenfield’s life was just not notable, and never would be—unless she won. “Drop the stick, and move away from the [horse] carcass,” wrote Muboshgu. “She’ll get an article if she wins.” Another user evaluated Greenfield’s biography and wrote, “I don’t think that gives her a meaningful career outside of her current Senate run,” adding that if Greenfield lost, “she will very likely be seen as insignificant.”

The Science That Spans #MeToo, Memes, and Covid-19

Network science’s underlying theory predates the internet, but the rise of social media underscored the need for a science of how people are connected. And while there are myriad fun and interesting questions about the way that people interact, few have been more pertinent than how social movements are born.

Take this year’s #Hashtag Activism, for example, in which Brooke Foucault Welles, Sarah Jackson, and Moya Bailey use network science to uncover the growth of social media activism.

Foucault Welles, an associate professor at Northeastern, says that network science “lets us distill vast, chaotic online communication data down to its essence” and “pull out important themes, people, and events for close reading.” This intersection with big data is critical: Because it can extract patterns from terabytes of social media interactions, network science strengthens the reach of its conclusions—the findings aren’t about how a small set of users behaves, but about aggregate behavior.

The approaches highlighted in #Hashtag Activism can reveal fundamental principles of social movements that apply to the digital activism movements of recent times. From a network of activist narratives built from quantitative and qualitative data, Foucault Welles describes how, “in #MeToo, we discovered that talking about sexual assault online is really powerful because it reduces stigma and encourages other people to disclose. The first few people to come forward have to be really brave and talk about what happened to them, even though they might not be believed, they might not be supported, and they might be blamed. But each time someone is brave and comes forward, it reduces the risk for other people to come forward.”
The work of Foucault Welles and colleagues provides part of a blueprint for how to construct hashtag movements moving forward. “In any given social justice movement,” she says, “there’s a committed core of activists who work really hard to craft and spread a message. Then there’s a huge periphery of allies and supporters who amplify that message. I love this finding because it shows how activists and regular people can work hand in hand—how we have to work hand in hand to keep things going.”
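
The core-and-periphery structure Foucault Welles describes can be sketched with a toy network. In the hypothetical records below (invented for illustration, not drawn from #Hashtag Activism), a handful of activists author many original posts while a much larger set of allies each amplify once; classifying users by how much they author versus amplify recovers the two tiers.

```python
from collections import Counter

# Hypothetical (user, action) records for one hashtag:
# "post" = original message, "amplify" = retweet/share of someone else's post
records = (
    [("activist_a", "post")] * 40
    + [("activist_b", "post")] * 35
    + [(f"ally_{i}", "amplify") for i in range(500)]
)

posts = Counter(user for user, action in records if action == "post")
amplifications = Counter(user for user, action in records if action == "amplify")

core = {user for user, n in posts.items() if n >= 10}  # committed authors
periphery = set(amplifications) - core                 # one-off amplifiers

print(len(core), len(periphery))     # 2 core activists, 500 allies
print(sum(amplifications.values()))  # the periphery supplies the volume
```

The threshold of 10 posts is arbitrary; the point is only that a small committed core crafts the message while the large periphery provides most of the raw activity.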

While social movements have only recently come into network science’s crosshairs, the field has long focused on epidemiology. It takes little imagination to see why a science dedicated to understanding the connections between people matters for infectious disease. Network science has driven a large number of breakthroughs in epidemiology, from identifying the role of airline transportation in the global spread of epidemics to revealing how the replacement of sick workers with healthy ones can drive the dynamics of influenza.

The dynamics of Covid-19 have proven especially challenging to understand, as questions have persisted about the importance of asymptomatic transmission and superspreading events. The network perspective has added layers to how we consider basic aspects of an epidemic, such as the basic reproduction number (the R0), a signature of contagiousness. The study of networks highlights that this number is truly an average: it doesn’t capture how select individuals embedded in a network can infect far more people than the R0 alone would predict.
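
The point that R0 is an average can be made concrete with a quick simulation. The sketch below (not from any study cited here; the dispersion value k is an assumed illustration) draws secondary-case counts from two offspring distributions with the same mean R0 of 2.5: a Poisson, where everyone transmits similarly, and a negative binomial with small dispersion, where a few superspreaders account for most transmission.

```python
import math
import random

random.seed(42)
R0 = 2.5     # mean secondary cases per infection
k = 0.1      # negative-binomial dispersion (assumed; small k = superspreading)
N = 100_000  # number of simulated index cases

def poisson(lam):
    # Knuth's inverse-transform algorithm for Poisson samples
    threshold, p, n = math.exp(-lam), 1.0, 0
    while True:
        p *= random.random()
        if p <= threshold:
            return n
        n += 1

def neg_binomial(mean, k):
    # Gamma-Poisson mixture: individual infectiousness varies by a Gamma draw
    return poisson(random.gammavariate(k, mean / k))

homogeneous = [poisson(R0) for _ in range(N)]
heterogeneous = [neg_binomial(R0, k) for _ in range(N)]

print(sum(homogeneous) / N)                    # close to 2.5
print(sum(heterogeneous) / N)                  # also close to 2.5, same average
print(sum(x == 0 for x in heterogeneous) / N)  # yet most cases infect no one
```

Both populations report the same R0, but in the heterogeneous one transmission is concentrated in rare superspreading events, which is why two outbreaks with identical R0 can behave very differently.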

Dina Mistry, a postdoctoral fellow at the Institute for Disease Modeling, has conducted cutting-edge work on human interaction networks and social mixing patterns. That is, she builds careful, detailed simulations of exactly how people interact in order to inform public health interventions, all of which is highly germane to the Covid-19 pandemic.

“We don’t know how to model contact patterns, especially in metro areas, and households,” says Mistry. Work like this is central to conversations about contact tracing, the safe reopening of schools, and other debates that have arisen during the Covid-19 pandemic. Mistry further suggests it’s important “to collect and report on distributions of data, rather than point estimates. For example, if we think that way then maybe we can explore heterogeneity in things like behavior adoption—I want to know more than just the percent of people adopting a behavior, rather what’s the distribution of willingness to adopt behaviors, for example, mask wearing, and the covariates that go with it.”
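
Mistry’s distinction between point estimates and distributions is easy to illustrate. In the toy example below (invented numbers, not her data), two populations have the identical 50 percent average mask adoption, but one is uniformly lukewarm while the other is polarized; an intervention aimed at the unwilling half only makes sense in the second.

```python
N = 10_000

# Population A: everyone is moderately willing to mask (willingness 0.5)
pop_a = [0.5] * N
# Population B: polarized, half always mask and half never do
pop_b = [1.0] * (N // 2) + [0.0] * (N // 2)

def summarize(pop):
    # Mean is the point estimate; variance captures the distribution's shape
    mean = sum(pop) / len(pop)
    var = sum((w - mean) ** 2 for w in pop) / len(pop)
    return mean, var

print(summarize(pop_a))  # (0.5, 0.0): same point estimate...
print(summarize(pop_b))  # (0.5, 0.25): ...very different distribution
```

A model fed only the 50 percent point estimate cannot tell these populations apart, even though they call for entirely different public health strategies.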

Network science and our perilous future

The cases of both Foucault Welles and Mistry demonstrate network science’s versatility and the importance of integrating theory with data science, which together enable the field to describe large, complicated patterns. But the true measure of a field is in what it offers for the future.

On the Week of the Election, Social Media Must Go Dark

America has given social media giants ample time to figure out how to stop their platforms from being used to sow political discord. Yet we find ourselves stuck in an even more precarious situation than in 2016—not only is the possibility of a stolen election real, but democracy itself is vulnerable to a potential heist. No meaningful laws have improved the landscape. Instead, it is up to Facebook, Twitter, YouTube, and others to show a rare sense of self-awareness and take a few days off—not as an admission of failure, but to reduce the odds of enabling harm. Social media outlets should voluntarily go silent for a few days before and after the election.

A few days of silence would prevent many online attempts at election interference and would hinder President Trump’s effort to build a preemptive narrative—for example, portraying a potential blue shift (as mail-in ballots are counted) as fraudulent.



Martin Skladany is a professor of law and technology, intellectual property, and law and international development at Penn State Dickinson Law. He is the author of Copyright’s Arc (Cambridge University Press, 2020).

Not only do a majority of both Democrats and Republicans support the idea of social media platforms going offline for the week of the election, but there are also analogous examples around the world. By law, the French observe a period of no electoral press coverage starting 44 hours before an election. In the UK, TV and radio stations are prohibited from covering the election when the polls are open—between 7 am and 10 pm on election day. In many countries, there is a blackout period, ranging from one to over 10 days, before an election in which opinion polls may not be publicly released.

Before the US election, this silence would prevent a litany of ills—false claims about when and where to vote, voter intimidation, poll station violence, and other schemes like those already seen in the past few weeks. This benefit outweighs any good from last-minute get-out-the-vote social campaigns, because falsehoods travel through social media faster than the truth does.

Further, nonprofits attempting to combat voter suppression could use other avenues to communicate; they could text or call. Of course, ne’er-do-wells would also have access to these same tools. Yet email, text, and Zoom work well for community organizers and poorly for trolls. And making communication harder will deter the casual spreader of falsehoods but will not stop a volunteer committed to keeping the election fair from staying in touch with other monitors.

Social media firms could consider making one exception during the blackout. They could prevent all communication by third parties on their platforms while they themselves reported on attempts at election interference. Platforms could either collect such information through the mainstream press or set up a hotline allowing users to report election problems to the social media companies themselves, which would then verify the authenticity of tips before alerting users. Facebook staff could verify, for example, attempts at voter intimidation in a precinct, and then alert all users in the area. Such an exception would allow social media to do some good.

A break from social media is also necessary immediately after the election. Prime-time election night coverage of the 2016 presidential race saw 12.1 million viewers tune in to Fox News, Trump’s preferred network, through which he’s told countless lies. Meanwhile, @realDonaldTrump has over 87 million followers and @POTUS over 31 million. Yes, Fox News will continue to blare. But going socially silent would prevent the president from reaching tens of millions more individuals with false claims of victory or election theft. Taking away his loudest bullhorn will buy neutral poll workers the time they need to do their job unobstructed.

A social media CEO might claim that any vacation by the dominant platforms would open the door for upstarts. Yet the largest conservative competitor, Parler, has only a few million users. Most voters don’t know it exists. Plus, a swing of a few million accounts is a rounding error against Facebook’s 2.7 billion users. Finally, network effects still heavily favor incumbent platforms. That said, a dominant platform in one space that goes silent (Twitter) could lose users to the leader in another space that doesn’t (Facebook). Yet given that the two let users do different things, any migration would likely be modest.
