Connected cars – Data protection, privacy, and cybersecurity

CONNECTED CAR TECH SERIES PART 4: Complex regulatory landscape threatens to restrict the market’s development 

Contributed by: Waseem Haider

In the first three parts of our Connected Car Tech Series, we discussed the immense possibilities this space offers to car manufacturers, network operators and other stakeholders. That may give the impression that the grass is all green, which is not the case: connected cars face a number of challenges.

One significant roadblock facing the connected car industry relates to regulations and standards. The need for regulations governing in-vehicle data and other connected car resources is one of the most pressing issues affecting connected car stakeholders. This is partially because regulations were meant to deal with basic connectivity, e.g. emergency calling, and partially because of the expansion of the connected car ecosystem. Current regulations do not sufficiently address the challenges posed by increased connectivity and the role of different stakeholders.

Though multiple areas are affected by the lack of standardization and proper regulation in the connected car ecosystem, two stand out due to their impact on both end consumers and service providers: data protection and privacy, and cybersecurity.

Data Protection and Privacy

One of the biggest challenges faced by the connected car ecosystem is the protection of consumer data. Even though regulatory authorities have made some significant policy changes around connected cars, data access and privacy regulations have yet to be tackled adequately. For example, the EU updated its Motor Vehicle Type Approval Regulation in 2019, but increasing vehicle connectivity remains a topic of discussion.

One major data protection law from the EU is the General Data Protection Regulation (GDPR). Since its introduction, there has been considerable regulatory uncertainty between the EU and the US. Meanwhile, numerous regional data protection efforts inspired by the GDPR have emerged. One such regulation is the California Consumer Privacy Act (CCPA). The CCPA directly addresses how car manufacturers and automotive suppliers worldwide capture telematics data, and influences cloud service providers’ data privacy practices.

The amount of data generated not only within the car but also outside of it poses a clear threat to the protection of personal data and raises serious privacy issues. By some estimates, a connected car produces almost 25 gigabytes of data per hour, much of it personal data about the driver and passengers. Moreover, this data has suddenly attracted the interest of multiple stakeholders – law enforcement and government authorities, car insurance companies, car manufacturers and other third parties.

Primarily, connected cars are generating data from three different categories of functionalities: Telematics, V2X and Infotainment (see Figure 1 below).

Figure 1: Main Data Sources in Connected Cars 

Source: ENISA

The functions shown in the graphic enhance the customer experience for car owners, and some of them are essential for safety and emergency services. However, the amount of personal data these systems generate is a cause for concern regarding the protection of the data and privacy of individual car owners and/or related parties. Note that we are not yet talking about the fully autonomous vehicles of the future, which will generate and gather even larger amounts of data than today’s connected cars.

Hence the question arises: how do we adopt data protection and privacy standards today that will stand the test of time? While there is some progress in creating standards and regulations for connected cars – for instance the new Motor Vehicle Type Approval Regulation, the EU GDPR, and the CCPA – many issues have not been addressed comprehensively or consistently enough to support the growth of this new market.

Cybersecurity

Another big challenge for the connected car ecosystem is its increased vulnerability to cyberattacks and hacking threats. The transformation of the automotive industry into one offering digital mobility products and services has raised the importance of cybersecurity in the connected car ecosystem (see Figure 2 below). Though the digital features in connected cars add great customer value, they also expose connected cars to multiple touchpoints for possible cyberattacks. As connected cars carry more and more in-vehicle software units, hackers can gain access to electronic systems and data, posing potential threats to critical safety functions and data privacy.

Figure 2: Cyberattack scenarios in connected cars 

Source: Frost & Sullivan

In the past few years there have been multiple instances of cyberattacks on connected cars in which hackers have taken full control of the vehicles. The major challenge is the lack of clear regulatory guidelines and standards for the connected car ecosystem. As such, the cybersecurity problem is closely related to data protection and privacy: the two are sides of the same coin, with cybersecurity representing the outside-in scenario and data protection the inside-out scenario.

One important point to highlight here is that regulators are having a tough time formulating such laws. Part of the challenge is the involvement of multiple stakeholders in the connected car ecosystem. This influences current supplier contracts with OEMs and other third-party relationships for software development, testing and managing over-the-air (OTA) updates.

Regulators face a difficult situation in adopting standards across the entire automotive value chain. For the last few years, however, they have been working on a cybersecurity framework for the automotive industry that will cover the entire value chain. This year, the United Nations Economic Commission for Europe (UNECE) adopted a regulation requiring automotive manufacturers to implement a Cyber Security Management System (CSMS). The regulation makes cybersecurity an integral part of the entire connected car ecosystem, and OEMs will need to maintain a certified CSMS across the entire lifecycle of any given connected vehicle in the near future.

Next Up: Data ownership

Among the many regulatory issues in the connected car ecosystem is the question of who owns the data it generates. In the next part of this series, we will take a deeper look at data ownership in the connected car space.

*

Image credit: Erik Mclean

What Can President Biden Do to End the Disinformation Age?

On Jan. 6, 2021, something extraordinarily dangerous occurred. During Congress’ certification of the Electoral College votes from the 2020 election, armed protesters stormed the Capitol Building, overwhelming police officers and forcing lawmakers to seek shelter. What made this occurrence so out-of-the-ordinary was that the protesters were supporters of then-President Donald J. Trump, who had been defeated in his bid for re-election by former Vice President Joe Biden.

Five people died in the attack or as a result of it. Brian Sicknick, a Capitol Police officer, died after confronting the insurrectionists. Ashli Babbitt, a Trump supporter, was shot and killed by Capitol Police. Three other Trump supporters died that day as well: Kevin D. Greeson, who suffered a fatal heart attack; Rosanne Boyland, who was apparently trampled in the crowd as it attempted to breach police lines; and Benjamin Philips, founder of a pro-Trump website called Trumparoo, who reportedly suffered a fatal stroke.

As for what sparked the violence? Disinformation.

What is that, anyway?

According to Merriam-Webster, disinformation is “false information deliberately and often covertly spread (as by the planting of rumors) in order to influence public opinion or obscure the truth.” In other words, disinformation is always intentional.

This is in contrast to misinformation, which is simply “incorrect or misleading information.” Admittedly, members of the news media frequently use these terms interchangeably. That needs to change, because they could not be any more different.

Misinformation can happen by accident, and be a simple mistake without malicious intent. Disinformation, however, is intentionally malicious, as it purposefully aims to spread falsehoods.

In this case, Trump, prior to Election Night, continually claimed (falsely) that the election was rigged against him. Later, rather than accept his loss, Trump loudly and frequently insisted (again, falsely) that the election had been stolen from him. He claimed that a massive plot of voter fraud had robbed him of a second term and, after various state recounts failed to sway the results, took to the courts in an effort to prove his case. In the overwhelming majority of those cases, the Trump campaign’s arguments were heard and dismissed, typically for lack of evidence, though all too often for improper legal preparation.

Trump, throughout it all, continued to call upon his followers to resist, to “Stop the Steal,” and hinted that there would be violence if the so-called plot against him didn’t end. And his followers believed his disinformation, culminating in their storming of the Capitol Building.

The events of Jan. 6 demonstrate one of the most visible – and increasingly common — outcomes of disinformation: violence.

HOW DID THIS ALL START?

Let’s walk things back a bit. Over a decade ago, an Ohio State University study warned that news consumers, rather than accessing a diverse array of ideas, were migrating to news networks that reinforced the beliefs they already held. That bias has, in the decade-plus since, metastasized into the disinformation age we see today, as people increasingly regard their own values and points of view as “correct,” and any dissenting viewpoint as “wrong.” In time, dissenters were attacked as “anti-American.” Even the way people perceive the world is different, with the concept of “common ground” having been relegated to the dustbin of social history.

At some point, many people stopped considering their opinions as merely their own beliefs, but as “facts.” For a growing segment of the population, even considering facts counter to what they believed led to cognitive dissonance, defined as a “psychological conflict resulting from incongruous beliefs and attitudes held simultaneously.” Moreover, it has been shown that attempting to simply change someone’s mind on a deeply held belief actually triggers parts of the brain associated with self-identity and negative emotions. In other words, the brain actually rejects concepts that run counter to what the person believes.

So, now you know why people “believe the crazy things they do,” and why they aren’t swayed by facts. Even when that information is wrong, they still believe it. And then they spread their bad beliefs, their “alternative facts,” creating a dangerous, and ever-growing, cycle that eventually leads to the demise of objective facts, and of truth.

Fun fact: Science, journalism, and voting, the bedrock of politics, rely on the idea that facts are objective, not subjective, and are thus reliable. When one starts to question whether “truth” is real or not, it jeopardizes belief in those institutions. And that can lead to outcomes like we see today, with a growing, partisan divide between those who believe in science, in journalism … and those who do not.

Disinformation sparks lack of trust in journalism, science, and political structures. That lack of trust creates a void. And ironically enough, as nature abhors a vacuum, something will always attempt to fill that empty space.

Enter disinformation.

On Dec. 30, 2020, Sen. Ben Sasse, R-Nebraska, noted that “America has always been fertile soil for groupthink, conspiracy theories, and showmanship. But Americans have common sense. We know up from down, and if it sounds too good to be true, it probably is. We need that common sense if we’re going to rebuild trust.”

However, Emily Dreyfuss, editor of Harvard University’s Media Manipulation Casebook, warns that the proliferation of disinformation has a way of overriding common sense:

“Social science studies have shown that the more a person hears something or is exposed to something, the more true it sounds. It’s kind of a glitch in the human brain. It has evolutionarily served us before. But in a disinformation ecosystem, it really is dangerous. And what these hashtags do, what viral slogans and all of these – even memes – what they do is they take really complicated, nuanced issues that people can debate about, that people feel passionate about, and they distill them down to this really simple piece of information that becomes unstoppable in some ways.”

These days, everyone has their own definition of reality. Even as I write this, many in this country believe, whole-heartedly, in two different, conflicting realities. In one, Biden won the 2020 presidential election, making him the president-elect. In the other, Trump won re-election by a landslide. In the first, people believe that an authoritarian president with aspirations to dictatorship was unseated. In the latter, people believe that the Chosen One was brought down by a massive conspiracy of fraud.

It is important to point out that, for the purpose of this presentation, we will be proceeding from the objective reality: that Biden won the U.S. presidency with 81.2 million votes, compared to Trump’s 74.2 million; that Biden won the Electoral College vote, 306 to Trump’s 232; that the Electoral College certified that victory; and that the U.S. Congress certified the Electoral College results.

I bring this up because the current social and political atmosphere is a direct result of disinformation. Indeed, the repeated assertion that the election was “stolen” from Trump directly led to the assault on Congress. 

Elizabeth Neumann, former assistant secretary of counterterrorism at the Department of Homeland Security, put it simply:

“A huge portion of the base of the Republican party has now bought into a series of lies that the election was stolen from them, that there is rampant fraud, and, therefore, their voice is no longer heard.”

Indeed, Hallie Jackson, chief White House correspondent for NBC News, mentioned this in December 2020. She referenced Trump counselor Kellyanne Conway’s 2017 comment that “alternative facts” were used to estimate the size of the crowd present at Trump’s inauguration. Jackson warned that the U.S. is “reaching peak alternative fact-cism,” adding that “here we are four years later and it’s not just alternative facts … It’s alternative realities.”

Effectively, those who attacked Congress firmly believed the alternate reality pushed by Trump and his allies. Despite the lack of provable facts behind the argument, this disinformation radicalized Trump’s followers to the brink of violence. One more push, provided by Trump himself, and the insurrection exploded.

But how did we get to that point?

Let’s take a look.

AUTHENTIC HUMAN CONNECTIONS

The entire concept of social media hinges on the idea that it creates social connections online between people. The key word in that simplified explanation is “people.” When you’re on social media, you expect to be communicating and sharing ideas with other people. That honest communication, the authenticity of the human connection, is what makes the entire concept of social media thrive. 

Let’s be honest: we humans worship celebrities. From composers to pop singers, actors of stage and screen, athletes to politicians, we equate celebrity with power. The more popular a person is, the more power we assume that they have.  (Money, of course, is also associated with power. But more money does not automatically equate to more popularity or influence – at least, not in the eyes of the public.  After all, if you had a list of the world’s top billionaires, how many of the names would you actually recognize?)

So, popularity equals power online. On social media, popularity is measured in the number of followers, and the number of accounts that respond to your posts. And if the popularity isn’t enough, there’s a more personal payoff for social media users: a quick high, as though you’ve taken a drug. According to the research magazine Now:

Neuroscientists are studying the effects of social media on the brain and finding that positive interactions (such as someone liking your tweet) trigger the same kind of chemical reaction that is caused by gambling and recreational drugs.

 According to an article by Harvard University researcher Trevor Haynes, when you get a social media notification, your brain sends a chemical messenger called dopamine along a reward pathway, which makes you feel good. Dopamine is associated with food, exercise, love, sex, gambling, drugs … and now, social media. Variable reward schedules up the ante; psychologist B.F. Skinner first described this in the 1930s. When rewards are delivered randomly (as with a slot machine or a positive interaction on social media), and checking for the reward is easy, the dopamine-triggering behavior becomes a habit.
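Skinner’s variable-ratio schedule is easy to see in a toy simulation. The sketch below (all numbers are illustrative, not drawn from any study) models each “check” of a feed as paying off with a fixed probability, so rewards arrive at unpredictable intervals — the pattern described above as the most habit-forming:

```python
import random

def simulate_checks(n_checks: int, reward_prob: float, seed: int = 42) -> list[bool]:
    """Simulate a variable-ratio schedule: each 'check' of the feed
    yields a reward (a like, mention, or reply) with fixed probability,
    so the user can never predict which check will pay off."""
    rng = random.Random(seed)
    return [rng.random() < reward_prob for _ in range(n_checks)]

outcomes = simulate_checks(n_checks=100, reward_prob=0.15)
rewards = sum(outcomes)

# Measure the gaps between rewarded checks: they vary randomly,
# which is exactly the delivery pattern Skinner found most habit-forming.
gaps, last = [], None
for i, hit in enumerate(outcomes):
    if hit:
        if last is not None:
            gaps.append(i - last)
        last = i

print(f"{rewards} rewards in 100 checks; gaps between rewards: {gaps}")
```

Running it shows rewards clustered and spaced irregularly — never on a predictable rhythm — which is what keeps the checking behavior going.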

In other words … “Hello. My name is Social Media User, and I am an addict.”

So, you have a system that a) rewards social media users by giving them more influence and power when they attain enough followers, and b) provides an addictive instant-high reward system. And we tend to believe that the system is honest and fair and true.

The problem, of course, is that it isn’t.

ENTER THE BOTS

Bots, as we’ve covered before, are automated computer algorithms programmed to perform specific tasks that a human would normally perform. One of the things that makes them so useful is that they can be programmed to simulate human interaction. A common example is the automated customer service chat that many websites offer.
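As a minimal sketch of how such automation works — the `Platform` class and canned messages here are hypothetical stand-ins, not any real network’s API — a few lines of Python are enough to let a handful of scripted accounts out-post any human:

```python
import itertools

# Hypothetical stand-in for a social network client; real platforms
# expose very different APIs, authentication, and rate limits.
class Platform:
    def __init__(self):
        self.timeline = []

    def post(self, account: str, text: str) -> None:
        self.timeline.append((account, text))

# Illustrative canned talking points the bot rotates through.
TALKING_POINTS = [
    "Share if you agree! #slogan",
    "They don't want you to know this. #slogan",
    "RT to spread the word! #slogan",
]

def run_bot(platform: Platform, account: str, n_posts: int) -> None:
    """Cycle through the canned messages, posting at a volume
    no human account could sustain by hand."""
    for text in itertools.islice(itertools.cycle(TALKING_POINTS), n_posts):
        platform.post(account, text)
        # A real bot would pace itself (e.g. sleep between posts)
        # to evade the platform's spam detection.

net = Platform()
for bot_id in range(5):              # a tiny 'botnet' of five accounts
    run_bot(net, f"bot_{bot_id}", n_posts=20)

print(len(net.timeline))             # 100 posts from just 5 accounts
```

The point of the sketch is the asymmetry: five scripted accounts produce a hundred on-message posts in an instant, and scaling the loop to thousands of accounts is trivial.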

However, technological aids such as bots jeopardize that human connection we were discussing, particularly when social media users’ all-too-human responses are driven not by a post conceived by a human, but by a computer algorithm designed to provoke an emotional, and sometimes irrational, response.

For a long time, Trump was at the top of the news cycle, so he’s an easy example. Large parts of his popularity, prior to his general exile from social media, were due to his social media followers, whom he frequently rewarded by mentioning them. Indeed, during the first presidential debate of the 2016 election campaign, he noted that he had 30 million followers on Twitter and Facebook. That number had, prior to Jan. 6, 2021, risen to 88.5 million followers on Twitter and 35.1 million followers on Facebook. An impressive following, to be sure.

But was it real?

A 2016 Oxford University study revealed that, between the first and second presidential debates that year, more than a third of pro-Trump tweets, and nearly a fifth of pro-Clinton tweets, came from bot-controlled accounts — a total of more than a million tweets.

The study also found:

  • During the debates, the bot accounts created up to 27 percent of all Twitter traffic related to the election
  • By the time of the election, 81 percent of the bot-controlled tweets involved some form of Trump messaging

And this isn’t just a problem during high-profile events like presidential debates. Two years later, a Pew Research Center study showed that bots had made a disproportionate impact on social media. In summer 2017, the center examined 1.2 million tweets that shared URL links to determine how many of them were actually posted by bots, as opposed to people. The findings were worrisome:

  • Sixty-six percent of all tweeted links were posted by suspected bots, which suggests that links shared by bots are actually more common than links shared by humans.
  • Sixty-six percent of links to sites dealing with news and current events were posted by suspected bots. Higher numbers were seen in the areas of adult content (90 percent), sports (76 percent), and commercial products (73 percent).
  • Eighty-nine percent of tweeted links to news aggregation sites were posted by bots.
  • Putting it all a bit more in perspective: The 500 people who were the most active online generated only an estimated six percent of links to news sites. In contrast, the 500 most active bot accounts were responsible for 22 percent of the tweeted links to popular news and current events sites. In other words, bot accounts tweeted more than three times as much as their human-controlled counterparts.
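The arithmetic behind that last comparison is simple enough to check:

```python
# Shares of tweeted news links from the Pew figures cited above:
human_share = 0.06   # top 500 human accounts: ~6% of links to news sites
bot_share = 0.22     # top 500 bot accounts: 22% of links to news sites

ratio = bot_share / human_share
print(f"Top bot accounts posted {ratio:.1f}x as many news links as top humans")
# -> roughly 3.7x, i.e. "more than three times as much"
```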

In other words, bots had essentially seized control of a large portion of social media. The digital province of humans was, instead, being partially ruled by bots. A few more examples of how that manifests, and the results:

  • In 2016, Congress passed the Better Online Ticket Sales Act, which banned the use of bots to “circumvent a security measure, access control system, or other technological control or measure on an Internet website or online service that is used by the ticket issuer to enforce posted event ticket purchasing limits or to maintain the integrity of posted online ticket purchasing order rules.”
  • November 2018: The FBI warned that “Americans should be aware that foreign actors—and Russia in particular—continue to try to influence public sentiment and voter perceptions through actions intended to sow discord. They can do this by spreading false information about political processes and candidates, lying about their own interference activities, disseminating propaganda on social media, and through other tactics.” The statement was a joint release with the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence.
  • February 2019: A study showed that bots, including thousands based in Russia and Iran, were much more active during the 2018 midterm elections than previously thought. In nearly every state, more than a fifth of Twitter posts about the elections in the weeks before Election Day were posted by bots.
  • 2019: Twitter detected and removed more than 26,600 bot-controlled accounts. Granted, that sounds like a lot, until you consider that, at the time, the platform had more than 330 million active users.
  • May 2020: Researchers determined that nearly half of the Twitter accounts posting information about COVID-19 were, in fact, bots. They found more than 100 fake narratives about COVID-19 being published by the bot accounts, including conspiracy theories “about hospitals being filled with mannequins,” or that the spread of the coronavirus was connected to 5G wireless towers.
  • September 2020: Facebook and Twitter warn that the Russian group that interfered in the 2016 presidential election had again set up a network of fake accounts, as well as a website designed to look like a left-wing news site.
  • October 2020: Emilio Ferrara, a data scientist at the University of Southern California in Los Angeles, warns that bot-controlled social media accounts have become more sophisticated and harder to detect.

As we’ve discovered, bots are excellent at shaping ongoing public narratives to influence public opinion. Another example: Bots have been discovered being used by scammers to write and post fake consumer reviews for ride-share companies, restaurants, hotels, and many other industries. The very information you rely upon to make informed decisions might have been subtly influenced by bots designed to shift your thinking along a predetermined narrative.

But, of course, the bots don’t just appear out of thin air. They are created, and controlled, by humans.

SEND IN THE TROLLS

According to Merriam-Webster, a troll is a person who intentionally antagonizes others online by posting inflammatory, irrelevant, or offensive comments or other disruptive content.

Now, this isn’t necessarily a bad thing. Trolls, of course, can serve a useful purpose in society by generating conversations that people may be reluctant to begin. Writing and publishing a controversial post can be a useful way to get people talking.

Of course, there’s the other kind of troll that is more concerning. Some people post disinformation in order to control the narrative. This type of troll has no interest in an honest, open dialogue. Rather, they want to spread their message, regardless of how harmful it is. And that is always a danger in a social media environment … particularly in a politically polarized nation further traumatized by a global pandemic.

To a large degree, trolls are responsible for a great deal of the disinformation plaguing the internet. Some countries establish troll farms to carry out disinformation campaigns against other sovereign nations, or even just to target specific individuals. As has been previously established, Russia did just that during the 2016 election campaign, acting both to support Trump and weaken Democratic nominee Hillary Rodham Clinton.

Trolling has always been a problem on the internet, but it picked up in 2020 during the COVID-19 crisis.

“Because so many in such a brief span of time have experienced the pandemic and indirectly the sudden increase in unemployment, the contagion effect associated with trolling behavior should be more extensive,” warns Dr. Kent Bausman, a Maryville University professor in the Online Sociology program. “Therefore, what may be grotesquely cathartic at the individual level simultaneously blooms into a toxic form of expression that ultimately erodes collective good will.”

Adds Jevin West, an associate professor at the University of Washington’s Information School: “It is difficult to measure whether trolls during this crisis are worse than others, but we are seeing a lot of troll activity and misinformation. We are swimming in a cesspool of (disinformation). The pandemic likely makes it worse because increased levels of uncertainty (create) the kinds of environments that trolls take advantage of.”

Trolls can simply post disinformation on social media networks. And, of course, that’s a relatively simple task. But, to really make an impact, they turn to more automated techniques.

Bots, anyone?

DIGITAL PANDEMIC

With the help of bots and trolls, disinformation spreads like wildfire over social media networks. Claire Wardle, of First Draft News, a truth-seeking non-profit based at Harvard’s Shorenstein Center, covered this in an interview with the BBC:

“In the early days of Twitter, people would call it a ‘self-cleaning oven,’ because yes there were falsehoods, but the community would quickly debunk them. But now we’re at a scale where if you add in automation and bots, that oven is overwhelmed.

“There are many more people now acting as fact-checkers and trying to clean all the ovens, but it’s at a scale now that we just can’t keep up.”

For example, fake content was widespread during the 2016 presidential campaign. Facebook has estimated that 126 million of its platform users saw articles and posts promulgated by Russian sources. Twitter has found 2,752 accounts established by Russian groups that tweeted 1.4 million times in 2016. Despite billions of dollars spent annually by big tech on R&D, they still haven’t solved these problems.

Dreyfuss, who is also a Harvard Shorenstein Center journalist, explained recently why disinformation is so pervasive:

“A lot of these media manipulation campaigns, and especially when it comes to vaccine hesitancy, they really prey on existing social ledges and cultural inequalities. So groups of people who may already be hesitant and distrustful of doctors are often targeted. …  But in that environment where people are looking for answers and there aren’t necessarily simple and easy answers readily available, into that environment flows disinformation.”

Indeed, disinformation poses a clear threat — particularly when people desperately need information like health guidelines during a global pandemic. It can also stoke anger and spark violence, as we saw on Jan. 6.

It is disingenuous to suggest that all of Trump’s supporters advocate the violence that occurred in the Capitol attack. What is known, however, is the composition of the mobs that ran riot at the Capitol.

According to ABC News:

 “Members of far-right groups, including the violent Proud Boys, joined the crowds that formed in Washington to cheer on President Donald Trump as he urged them to protest Congress’ counting of Electoral College votes confirming President-elect Joe Biden’s win. Then they headed to the Capitol. Members of smaller white supremacist and neo-Nazi groups also were spotted in the crowds. Police were photographed stopping a man identified as a leading promoter of the QAnon conspiracy theory from storming the Senate floor.”

White supremacy and neo-Nazi philosophies, of course, are forms of disinformation that harm society because they a) promulgate a false narrative of inherent racial superiority to their believers, and b) cause varied and widespread harm to those they deem “inferior.” Conspiracy theories are also forms of disinformation, spreading contradictory and often nonsensical ideas.

Security officials and terrorism researchers warn that the embrace of conspiracy theories and disinformation causes a “mass radicalization,” which increases the potential for right-wing violence.

Back in December 2020, National Public Radio delivered this warning:

“At conferences, in op-eds and at agency meetings, domestic terrorism analysts are raising concern about the security implications of millions of conservatives buying into baseless right-wing claims. They say the line between mainstream and fringe is vanishing, with conspiracy-minded Republicans now marching alongside armed extremists at rallies across the country. Disparate factions on the right are coalescing into one side, analysts say, self-proclaimed ‘real Americans’ who are cocooned in their own news outlets, their own social media networks and, ultimately, their own ‘truth.’”

BAD ACTORS

The debate over free speech vs. hate speech has persisted … oh, pretty much since forever. Granted, the U.S. Supreme Court has never “created a category of speech that is defined by its hateful conduct, labeled it hate speech, and said that that is categorically excluded by the First Amendment.” Because of that, hate speech cannot be made illegal simply because of its hateful content. However, when you examine the context, then “speech with a hateful message may be punished, if in a particular context it directly causes certain specific, imminent, serious harm — such as a genuine threat that means to instill a reasonable fear on the part of the person at whom the threat is targeted that he or she is going be subject to violence.”

That said, after the Capitol attack, social media platforms moved to further restrict hate speech, conspiracy theories, and other harmful disinformation. Granted, they had been attempting to do so for years, but critics said that the companies’ pattern of what they considered half-measures had helped cause the crisis.

“Blame for the violence (at Congress) will appropriately fall on Trump and his enablers on Capitol Hill and in right-wing media,” said Roger McNamee, an early advisor to Facebook founder Mark Zuckerberg. “But internet platforms — Facebook, Instagram, Google, YouTube, Twitter, and others — have played a central role.”

The Capitol attack had been organized on social media platforms for months. Red-State Secession, a Facebook group, called for a revolution on Jan. 6. After a BuzzFeed reporter exposed the group, Facebook shut it down the same day as the attack. Without BuzzFeed’s alert, Facebook might still be booking revenues from the ads served to the group’s supporters. Any thoughtful observer would wonder why Facebook doesn’t spend more on self-policing. It’s worth noting that Facebook ended 3Q20 with nearly $56 billion in cash and cash equivalents on its books, over twice what it had before Trump took office. The company has benefited enormously from looking the other way.

McNamee warns that internet platforms “amplify hate speech, disinformation and conspiracy theories, while only selectively enforcing their terms of service.” It is an argument with which others agree.

 Let’s face it: While we’d like to blame trolls for all of the disinformation free-flowing on social media, we can’t. To some degree, this is because the tech companies that run the social media platforms a) have a difficult time keeping up with the sheer amount of false information, and b) possibly have no real interest in reining in such information, as doing so might negatively impact their financial goals.

Admittedly, there is evidence to support both arguments. For example, in March 2020, Twitter made an effort to update its Developer Policy. It sought to, among other goals:

  • Take “a more proactive approach to improving the health of our developer platform by continuing to remove bad actors, which resulted in over 144,000 app suspensions during the last six months.”
  • Ask that “developers clearly indicate (in their account bio or profile) if they are operating a bot account, what the account is, and who the person behind it is, so it’s easier for everyone on Twitter to know what’s a bot – and what’s not.”

In the context of U.S. politics, critics blasted the effort as too little, too late, and demanded that the platform do more to remove disinformation from its content. One critic, CNN journalist Lisa Ling, attacked Twitter on Jan. 2, 2021, saying, “At least you’re trying to call out disinformation but so much damage has been done. TRY TO FIX IT! Our country has never been more divided and you have played a massive role in it.”

James Murdoch, the youngest son of Rupert Murdoch, recently continued that theme in a joint statement with his wife Kathryn:

“Spreading disinformation — whether about the election, public health or climate change — has real world consequences,” the two said. “Many media property owners have as much responsibility for this as the elected officials who know the truth but choose instead to propagate lies. We hope the awful scenes we have all been seeing will finally convince those enablers to repudiate the toxic politics they have promoted once and forever.”

Indeed, after the November election, Newsmax and elements of Fox News began to walk back their false “massive voter fraud” narrative, as the threat of legal liability became too great to ignore.

And in the aftermath of the Capitol insurrection, Twitter and Facebook moved more aggressively against disinformation – specifically against Trump. Twitter temporarily locked Trump’s account for 12 hours, noting that he had violated the platform’s standards against disinformation and glorifying violence. The next day, Facebook suspended Trump’s account on its platform and on Instagram until after Biden’s inauguration.

“We believe the risks of allowing the President to continue to use our service during this period are simply too great,” wrote Facebook chief executive Mark Zuckerberg. “Therefore, we are extending the block we have placed on his Facebook and Instagram accounts indefinitely and for at least the next two weeks until the peaceful transition of power is complete.”

After that, social media companies began banning Trump from their platforms, or restricting his use. In addition, they stepped up their battles against disinformation by targeting content that glorified violence, much of which involved Trump, QAnon adherents, or ideologues supporting neo-Nazi or White supremacist beliefs. A few examples:

Guy Rosen, vice president of integrity at Facebook, summarized measures that had been implemented, or were going to be implemented for Facebook and Instagram, that were designed to battle the spread of hate speech and incitements to violence. The measures included:

  • Taking “enforcement action consistent with our policy banning militarized social movements like the Oathkeepers and the violence-inducing conspiracy theory QAnon.”
  • Continuing “to enforce our ban on hate groups including the Proud Boys and many others. We’ve already removed over 600 militarized social movements from our platform.”
  • Boosting the requirement that Group admins “review and approve posts” prior to publication
  • “Automatically disabling comments … (in groups with) a high rate of hate speech or content that incites violence”
  • Using artificial intelligence to identify and remove content that likely violates Facebook policies.

Again, critics say the moves are too little, too late.

“While I’m pleased to see social media platforms like Facebook, Twitter and YouTube take long-belated steps to address the President’s sustained misuse of their platforms to sow discord and violence, these isolated actions are both too late and not nearly enough,” said Sen. Mark R. Warner, D-Virginia. “Disinformation and extremism researchers have for years pointed to broader network-based exploitation of these platforms.”

A growing number of people on both sides of the political divide have called for more regulation of social media platforms. Trump and conservatives want more regulation because they claim the platforms censor conservatives … even though ample evidence exists showing that conservative content dominates engagement on these platforms – making that claim itself a form of disinformation. Democrats and liberals are also calling for change, mostly because of how much hate speech exists online that can be directly traced to conservatives.

“The social media sphere is, at its core, a connection and amplification machine, which can be used for both bad and good,” says Morten Bay, a research fellow at the University of Southern California Annenberg’s Center for the Digital Future. “… But unlike, say, the ‘public square’ that social media CEOs want their platforms to be, we have no established ethics for social media, and so neither platforms nor users know what can be considered good and right, except for obvious cases, like extremism and hate speech,” Bay noted. “If we did, most people would know how to handle trolls best, which is to simply ignore them.”

However, human nature makes it difficult to ignore trolls, as we’re compelled to respond to information that we either strongly believe in, or seriously disagree with. Add in the fact that trolls and bots tend to reinforce the messaging of other trolls and bots, and you begin to see a feedback loop that can easily spread. As a result, online discourse can quickly get hijacked by disinformation specialists, whether they are human or not.

WHAT CAN BIDEN DO?

In December 2020, a group of Democratic lawmakers asked Biden, after his inauguration, to combat the “infodemic” of disinformation plaguing America:

“Understanding and addressing misinformation – and the wider phenomena of declining public trust in institutions, political polarization, networked social movements, and online information environments that create fertile grounds for the spread of falsehoods – is a critical part of our nation’s public health response.”

In a previous blog, we discussed what Biden, once inaugurated as president of the United States, might do to enhance our security and protect our privacy on the digital front. The purpose was to relay suggestions on approaches that could be used to deal with threats to our privacy and security in the form of cyberattacks, over-reaching retailers, and the abuse of authority when using biometric technologies such as facial recognition.  

However, the blog did not delve into the threat posed by disinformation. Let’s correct that now, and reflect upon the various actions the newly inaugurated president can take to help bring the Disinformation Age to an end.

REGULATION OF SOCIAL MEDIA COMPANIES

President Biden should consider several of the recommendations proposed by the Forum on Information and Democracy:

  • New transparency standards “should relate to all platforms’ core functions in the public information ecosystem: content moderation, content ranking, content targeting, and social influence building.”
  • “Sanctions for non-compliance could include large fines, mandatory publicity in the form of banners, liability of the CEO, and administrative sanctions such as closing access to a country’s market.”
  • “Online service providers should be required to better inform users regarding the origin of the messages they receive, especially by labelling those which have been forwarded.”

Getting a bit more in-depth, Biden should:

  • Set new legal guidelines establishing that “whoever finances dissemination of fake news, or orders it from an institution, (will be held legally responsible) for the disinformation,” and held accountable.
  • Draft new definitions of protected speech, designed to eliminate hate speech as a protected class of free speech. Biden can, perhaps, take cues from Germany’s laws, in which, as Wired describes, there are limitations to freedom of speech:

 Germany passed laws prohibiting Volksverhetzung—“incitement to hatred”—in 1960, in response to the vandalism of a Cologne synagogue with black, symmetrical swastikas. The laws forbid Holocaust denial and eventually various forms of hate speech and instigation of violence, and they’re controversial chiefly outside Germany, in places like the US, which is subject to interpretive, precedent-based common law and, of course, a rousing if imprecise fantasy of “free speech.” 

  • Establish new rules, perhaps in the form of additions to the Communications Decency Act of 1996, defining acceptable content, and setting penalties for violations of those definitions. (In 2017, Germany passed its Network Enforcement Act, a law requiring internet companies to remove “obviously illegal” content within 24 hours of being notified about it, and other illegal content within a week. It should be noted that, unlike the United States, Germany has long had some of the world’s toughest laws involving hate speech; denying the Holocaust or inciting hatred against minorities, for example, results in federal criminal charges. Companies can be fined up to $57 million for content that is not deleted from the platform.)
  • Make it illegal to profit from disinformation. “Clickbait” in general is designed to generate profit from brand-building and/or ad revenues. Profiting from disinformation should result in legal action and financial penalties against the executives running the companies that violate the regulation.
  • Require social media platforms to police the accuracy of their content, and hold them legally liable for any disinformation published on their platform. This would require changes to the Communications Decency Act, specifically, Section 230.
  • Consider the recommendation of the News Media Alliance, which sent Biden’s staffers suggestions on how to “work with Congress on a comprehensive revision” of Section 230 in order to remove legal immunity for platforms that “continuously amplify – and profit from – false and overtly dangerous content.” This would be a punitive measure that would affect only those platforms that refuse to alter their format.
  • Demand “real name” requirements for social media platforms, in which the accounts can only be opened with a photocopy of a government-issued ID card. Admittedly, there would be a loss of privacy here, but it would lead to a decrease in the frequent “mob” mentality we see online, and an increase in the accountability of account users for their content. For verification, require the platform to confirm the account applicant’s information with two-factor verification: one via text or email, and the other via snail mail, such as a code sent in a letter. (Corporate accounts would likewise have to have verifiable people behind the accounts.)
  • Require social media platforms to pay for news content that was created not on the platform, but by a journalism outlet. The platform should be required to pay the originating news outlet for the use of its content. Australia took the lead on this type of legislation back in December 2020.
  • Require social media platforms to ban the use of bot-controlled accounts, and require big tech to use its deep pockets to scrutinize accounts more closely in order to detect and delete such accounts.

NEW EDUCATION GUIDELINES

  • The U.S. Department of Education should be ordered to develop better training for students in the areas of critical thinking and news literacy. These guidelines should then be disseminated to states for consideration in elementary education.
  • The Education Department should launch grants “to support partnerships between journalists, businesses, educational institutions, and nonprofit organizations to encourage news literacy.”

NEW REGULATION OF JOURNALISM OUTLETS

  • Expand the reach of the Federal Communications Commission to include newspapers, as well as cable and online news outlets. Currently, the FCC covers only radio and broadcast television.
  • Reestablish the Fairness doctrine, a communications policy established in 1949 by the FCC. The rule, which applied to licensed radio and television broadcasters, required them to present “fair and balanced coverage of controversial issues of interest to their communities, including by devoting equal airtime to opposing points of view.” The FCC repealed the guidance in 1987. Biden should also update the FCC guidance to include cable news channels and online news outlets. He should then push for the Fairness doctrine to be made into law, and ensure that adherence to the law is a vital part of the licensing for broadcast, cable and online journalism outlets.
  • Set new legal guidance, based on the Fairness doctrine, defining what constitutes factual, objective news, as opposed to the “slanted” takes we see so often on news platforms such as Huffington Post, MSNBC, Fox News, Breitbart, One America Network and NewsMax. Hold news outlets accountable for broadcasting disinformation.
  • Establish concrete definitions of what constitutes a news outlet, as opposed to a venue for entertainment. Ban disinformation from being disseminated by news outlets.

CONCLUSION

These are but a few of the approaches that President Biden might take to end the Disinformation Age. He’ll need to make changes to education, as well as the laws and regulations governing education and social media platforms. Of course, some of the above recommendations will likely be seen as controversial. Some need to be fleshed out within legislative and regulatory bodies. And, of course, there will be those that will inevitably argue that fighting disinformation is a violation of the freedom of speech.

What good is this freedom, though, when it is being abused to spread disinformation? The freedom of speech already has one intelligent exception: the classic “shouting fire in a crowded theater.” If we can make that exception, which is aimed at preventing harm, then we should do the same with disinformation. After all, disinformation is all about taking advantage of others, which inevitably leads to harm. No one should have the right to cause harm in the name of politics, or some insane idea of racial superiority, or because of belief in some fantastical conspiracy myth.

Disinformation does not benefit society. It tears it apart. If the “United” – currently divided – States of America is to continue as a coherent nation, we would do well to remember that.

Abraham Lincoln, after accepting the Illinois Republican Party’s nomination as U.S. senator, spoke in 1858 on the “agitation” caused by differing opinions on slavery. Although the White supremacy component of the agitation is smaller today than it was in Lincoln’s day, I believe parts of the speech still apply, particularly if we apply it to the “agitation” of disinformation:

 If we could first know where we are, and whither we are tending, we could better judge what to do, and how to do it.

 We are now far into the fifth year, since a policy was initiated, with the avowed object, and confident promise, of putting an end to slavery agitation.

 Under the operation of that policy, that agitation has not only, not ceased, but has constantly augmented.

 In my opinion, it will not cease, until a crisis shall have been reached, and passed –

 A house divided against itself cannot stand.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.  

 

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Image credits: (1) iStock, by Getty Images (cover); (2) Gayatri Malhotra (Biden flag); (3) Charles Deluvio (troll doll); (4) Joshua Hoehne (smartphone close-up); (5) Joshua Bedford (Abraham Lincoln statue).


On the digital front, what can President Biden do to enhance our security and protect our privacy?

After Joseph R. Biden Jr. takes the oath of office on Jan. 20, 2021, the newly inaugurated president of the United States will need to contend with an America in turmoil.

Naturally, the scourge of COVID-19, and its devastating impact on the nation, will be near or at the top of his list of Things That Must Be Dealt with Immediately. With over 311,000 total deaths and more than 3,000 more deaths per day (as of Dec. 18, 2020), he has no choice but to respond to that grim reality. Recently released vaccines from Pfizer and Moderna will no doubt be very useful weapons in his arsenal.

However, America is also still reeling from Russia’s ongoing, unprecedented cyberattack against U.S. governmental agencies and corporations. Even though tens of billions of dollars had been spent to prevent such an attack, it had gone undetected for most of a year — and remains an ongoing concern.

Toss in the fact that states and consumers are becoming more wary of the power wielded by corporations and social media platforms to use your personal data for their own ends and profit – effectively turning you into a monetized resource for their exploitation.

And, of course, there is also the growing concern that facial recognition technology is being weaponized against underrepresented minorities in the U.S. – invading their privacy and possibly violating their rights.

When it is all added up, it becomes clear that America is on the precipice of a digital war. The only question as yet unanswered is, when all is said and done, will the war for cybersecurity and digital privacy be decided in our favor, or in favor of those who would exploit us for money and power?

Soon-to-be President Biden has several options available to deal with these issues. Let’s explore what we know, and what Biden might do.

CYBERSECURITY

What We Know

The U.S. government had spent billions of dollars creating a new war room for U.S. Cyber Command, while also installing Einstein, a web of sensors throughout the nation that was designed to detect and avert cyberattacks. Unfortunately, according to the U.S. intelligence community, Russia designed its most recent attacks to bypass Einstein, slipping its assault past the sensor web and into the computer infrastructure of corporations and government agencies.

The list of impacted agencies is large: The U.S. Commerce, Homeland Security, Treasury and Energy departments reported having been hit, as did the Pentagon, the U.S. Postal Service, and the National Institutes of Health.

Although the sheer breadth of the attacks was stunning — indeed, it is believed that the attack is one of the largest ever — it has not been revealed what information might have been stolen, or whether the hacks succeeded in changing or destroying data.

Investigators have yet to determine whether any classified systems were breached. Still, the intrusion seems to be one of the biggest ever, with the amount of information put at risk dwarfing other network intrusions.

However, it is known that the hackers exploited a weakness in the cyber infrastructure. The attackers accessed software from SolarWinds, an Austin, Texas-based company. SolarWinds’ Orion software, which is designed to monitor computer networks, is used by thousands of companies and by many federal agencies, making it an inviting target.

Indeed, SolarWinds estimated, in a Securities and Exchange Commission filing on Dec. 14, that perhaps as many as 18,000 of its customers may have been impacted by the breaches.

On Dec. 13, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered all federal agencies “to immediately disconnect or power down affected SolarWinds Orion products from their network.” CISA is part of the Department of Homeland Security which, on Dec. 16, announced that it, the FBI and the Office of the Director of National Intelligence (DNI) had formed a joint team to “coordinate a whole-of-government response to this significant cyber incident.”

Aside from that, there has been no comment from President Donald Trump regarding the attack. Critics are saying that Trump’s silence is more proof that he refuses to take a stand against Russia, no matter the provocation.

Meanwhile, CISA is warning that “this threat poses a grave risk to the (federal government) and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.”

What Biden Can Do:

Once in the White House, Biden has a wealth of options at his disposal:

  • Declare, in no uncertain terms, that Russia is responsible for various intrusions into corporate, state, and U.S. governmental computer systems and that such actions need to be halted immediately.
  • Determine how many government agencies, states, and corporations use the same or similar software, and order researchers to a) find more diverse software for computer monitoring, or b) create ways to strengthen the security of the software to resist intrusion.
  • Form an agreement with other nations to refuse to sell any software or computer hardware (or parts) to Russia, Russia-controlled nations and territories, or Russian-headquartered businesses.
  • Create an equivalent to the National Transportation Safety Board. Rather than investigate accidents and transportation standards, this proposed agency would “track attacks, conduct investigations into the root causes of vulnerabilities and issue recommendations on how to prevent them in the future,” says Alex Stamos, director of the Stanford Internet Observatory. Stamos is former chief information security officer of Yahoo and Facebook.
  • Make sure Congress passes a law requiring companies and government agencies to reveal every time their cybersecurity is breached. Currently, no such law exists to force such compliance in areas other than medical or banking information. As Stamos notes, you “can’t respond to the overall risk as long as we’re discussing only a fraction of the significant security failures.”
  • Implement harsh financial sanctions on the leaders of Russia’s technology industries.
  • Launch federal investigations into the cyberattacks in an effort to identify individual hackers. If possible, prosecute the hackers and their superiors.
  • Establish a ban on all Russian-created software and hardware in the United States. Such a ban should include Kaspersky Labs, which is currently prohibited from selling to the federal government, but remains free to sell otherwise.
  • Conduct mandatory cybersecurity “stress tests” of state and federal governmental computer systems, as well as those utilized by major corporations, banks, hospital systems and insurance companies.
  • Update all federal government computer systems to include stronger security.
  • Launch a series of retaliatory cyberattacks against the business holdings of Russian President Vladimir Putin’s most ardent financial backers, as well as the banks he himself uses.

Not only would these changes result in a more digitally secure America, but they would also provide a massive boost to the U.S. economy. As the COVID-19 pandemic continues to rage, MTN Consulting’s research has shown that the pandemic has proved beneficial to parts of the communications industry, as:

  • “The sudden, widespread need to work and study from home has increased demand for the cloud services offered by many webscale players.”
  • “Technology investments by the webscale sector are also (surging, with research and development) spending increased by 17% in 3Q20 to $46.1 billion.”
  • “Webscale spending on … network infrastructure has also spiked,” with total capital expenditures rising 25 percent year-over-year “to hit $34.7 billion in 3Q20. A good portion of capex in 2020 has supported the growth of ecommerce activity, which was given a lift by pandemic-related lifestyle changes. However, the Network/IT/Software portion of capex grew 31% YoY in 3Q20 to $16.0 billion. New data center construction slowed in 2020 but rapid growth of traffic and cloud services adoption forced operators to invest heavily in new servers and other incremental capacity additions.”

A sudden, technology industry-wide push to secure the nation’s cyber infrastructure would create jobs, inject large amounts of money into the economy, and, of course, make the country more secure. A win-win for a newly installed president.

CONSUMER PRIVACY

What We Know

In recent weeks, we’ve seen state governments open a new front in the war for digital privacy. People have become more aware of the fact that social media platforms and other telecommunications companies collect your personal data, store it, and then use it to fuel their marketing efforts, or sell the data to other business entities. However, it is difficult to tell what company is doing what to/with the data, as many companies are not remotely transparent about what happens after they acquire the data.

Americans are very much aware that their everyday lives – both online and off – are being watched closely by various corporate interests. A 2019 Pew Research Center survey confirmed as much, revealing that a majority of Americans believe they are being heavily monitored by both corporate interests and the federal government.

“Roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government,” the report warned.

Granted, the Pew report also admitted that “data-driven products and services are often marketed with the potential to save users time and money or even lead to better health and well-being.” Still, 81 percent of those surveyed expressed the belief that “the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection.” The report also noted that 79 percent of respondents worry about how their data is used by companies, while 64 percent worry about the same data’s use by the government. Indeed, “most also feel they have little or no control over how these entities use their personal information.”

Enter the Federal Trade Commission. On Dec. 14, the FTC ordered Amazon, Discord, Facebook, Reddit, Snap, Twitter, WhatsApp, YouTube, and ByteDance, which operates TikTok, to “provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.”

In a statement, the FTC commissioners said that: 

These digital products may have launched with the simple goal of connecting people or fostering creativity. But, in the decades since, the industry model has shifted from supporting users’ activities to monetizing them. This transition has been fueled by the industry’s increasing intrusion into our private lives. Several social media and video streaming companies have been able to exploit their user-surveillance capabilities to achieve such significant financial gains that they are now among the most profitable companies in the world.

Never before has there been an industry capable of surveilling and monetizing so much of our personal lives. Social media and video streaming companies now follow users everywhere through apps on their always-present mobile devices. This constant access allows these firms to monitor where users go, the people with whom they interact, and what they are doing. But to what end? Is this surveillance used to build psychological profiles of users? Predict their behavior? Manipulate experiences to generate ad sales? Promote content to capture attention or shape discourse? Too much about the industry remains dangerously opaque.

A few days later, another gauntlet was thrown. Thirty-eight state attorneys general filed an antitrust lawsuit against Google – the company’s third antitrust complaint in less than two months.

“Google sits at the crossroads of so many areas of our digital economy and has used its dominance to illegally squash competitors, monitor nearly every aspect of our digital lives, and profit to the tune of billions,” said New York Attorney General Letitia James.

In other words, states were worried that Google had used its massive amounts of data on what people do online to benefit itself at the expense of its competitors. Sound familiar?

Meanwhile, a leaked Google document detailing the company’s plan to undermine European Union legislation for its own ends has EU lawmakers on the alert. According to the New York Times:

“Academic allies” would raise questions about the new rules. Google would attempt to erode support within the European Commission to complicate the policymaking process. And the company would try to seed a trans-Atlantic trade dispute by enlisting U.S. officials against the European policy.

For many officials in Brussels, the document confirmed what they had long suspected: Google and other American tech giants are engaged in a broad lobbying campaign to stop stronger regulation against them.

As MTN analyst Matt Walker puts it, “Big tech wants to serve up ads to exactly the right person, at the right time, in the right place – and the only way to do this is by a massive invasion of what many would consider private information.”

According to Zenith Media, about $587 billion was spent on advertising worldwide in 2020.

Another firm, Magna, says that digital ad spending, which it estimates rose 8 percent in 2020, will comprise 59 percent of all global ad spending by year end. This eclipses traditional advertising such as television, radio, print and out-of-home, which Magna estimates has fallen 18 percent from 2019.

What Biden Can Do

Many groups and organizations, including Public Citizen and the Parent Coalition for Student Privacy, have offered recommendations on this matter. As with cybersecurity, Biden has a variety of options:

  • If Democrats win both Senate runoff races in Georgia this January, they will control the U.S. Senate, and Biden may consider expanding the responsibilities of the Consumer Financial Protection Bureau to include regulation of social media platforms and corporations in the realms of consumer privacy and data usage. Created in 2010 by the Obama administration, in which Biden served as vice president, the CFPB’s current mandate is consumer protection in the financial sector. However, it already has experience engaging “with the data economy in a number of ways. Its enforcement actions have required it to look at how financial entities are using social media and algorithms to sell to consumers. The agency has become active in enforcing privacy matters. It has also taken steps toward improving data portability principles and building a regulatory sandbox.”
  • Limit access by others to our digital lives. As we’ve noted previously, an increasing number of employers, schools and federal government agencies are requiring access to our digital accounts. U.S. border enforcement agents are demanding that travelers unlock their devices and provide passwords. Schools are utilizing services that allow them to access students’ devices and social media accounts. All of those entities should be required to obtain a warrant prior to being granted access. After all, the right not to incriminate yourself IS spelled out in the U.S. Constitution.
  • Ban social media platforms and other companies from using consumer data without express written permission from said consumers. Consumers should have a standardized form governing whether to grant companies permission to sell or share their personal data.
  • Require all companies and lobbying entities to have fully transparent systems in place as to how data is collected and used.
  • Require all entities that collect consumer data to publish an annual notice to consumers whose data they use.
  • Ban anonymous social media accounts. In other words, social media accounts must have a verifiable name, address, phone number and email address prior to an account’s activation. Said information must be confirmed every two years. (This might help defuse some of the mob mentality currently evident on social media platforms.)
  • Hold social media platforms responsible for the content that they publish. Ban content that advocates harm against others based on race, ethnicity, gender, gender ID, sexual orientation, religion, etc.
  • The previous suggestion could work alongside a redesign or elimination of Section 230, a section of the Communications Decency Act of 1996. The section shields internet companies from liability over the content they publish. In recent years, Republicans – notably Trump – and Democrats have argued for reforming or abolishing the rule. Indeed, Bruce Reed, Biden’s top technology adviser, advises reforming Section 230 in a book he coauthored, “Which Side of History? How Technology Is Reshaping Democracy and Our Lives.” In it, he and coauthor James Steyer, a Stanford University lecturer, argue that if internet companies and social media platforms “sell ads that run alongside harmful content, they should be considered complicit in the harm. If their algorithms promote harmful content, they should be held accountable for helping redress the harm. In the long run, the only real way to moderate content is to moderate the business model.”
  • Companies should be required to establish easier ways for consumers to manage their devices’ and accounts’ privacy settings.
  • After it was revealed that many members of Congress simply didn’t comprehend how social media platforms work, even as they were trying to regulate the industry, members of Congress should be required to be briefed annually on the current state of social media, as well as its impact on their constituents.
  • Require technology companies to create more secure privacy settings for minors using social media.
  • Push the Federal Communications Commission to reassert net neutrality, a rule that banned telecommunications operators from blocking or slowing internet traffic originating from unaffiliated Internet access providers.

FACIAL RECOGNITION

What We Know

In the above discussion on privacy, one area that we neglected to delve into is the impact of facial recognition on privacy. A fundamental aspect of the American criminal justice system is that people are innocent until proven guilty, an axiom more commonly known as the “presumption of innocence.” This is echoed in the Fifth Amendment to the U.S. Constitution, which states, in part, that no person “shall be compelled in any criminal case to be a witness against himself.” In other words, when people “take the Fifth,” they are exercising their right not to incriminate themselves.

By contrast, the growing usage of facial recognition technology, which is widely recognized as a tool to enhance security and identify potential criminal suspects, jeopardizes people’s right to privacy, as well as that presumption of innocence. Indeed, on Dec. 22, 2020, New York Gov. Andrew M. Cuomo signed into law the nation’s first statewide ban on using biometric identifying technology such as facial recognition in schools. The law bans the use of such technology in schools until July 1, 2022, or until after the state Education Department has conducted extensive research into whether the technology should be used in schools.

“This technology is moving really quickly without a lot of concern about the impact on children,” said Stefanie Coyle, deputy director of education policy for the New York Civil Liberties Union. “This bill will actually put the brakes on that.”

Even scientists are growing concerned about the assault on privacy posed by facial recognition systems, with many calling for “a firmer stance against unethical facial-recognition research. It’s important to denounce controversial uses of the technology, but that’s not enough, ethicists say. Scientists should also acknowledge the morally dubious foundations of much of the academic work in the field — including studies that have collected enormous data sets of images of people’s faces without consent, many of which helped hone commercial or military surveillance algorithms.”

With the growing push in retail spheres toward more protections of consumers’ privacy, is it so surprising that a similar push would emerge in other areas? The use of facial recognition to surveil public spaces has been under debate for some time – particularly as people gain a deeper understanding of how unreliable the systems are when dealing with people who are not White men.

Indeed, in December 2019, a National Institute of Standards and Technology study detailed the results of testing 189 facial recognition systems from 99 companies. The study found that the majority of the software had some form of bias. Indeed, among the broad findings were these troubling revelations:

  • One-to-one matching revealed higher error rates for “Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
  • Among U.S.-made software, “there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups (which include Native American, American Indian, Alaskan Indian and Pacific Islanders). The American Indian demographic had the highest rates of false positives.”

Such errors in identifying criminal suspects can be devastating to those innocents who are caught up in a criminal investigation. One prominent example comes from January 2020 in Michigan: Detroit police arrested Robert Williams, a Black man, as a suspect in a shoplifting case. However, they were following the lead of a facial recognition scan, which had incorrectly identified Williams as the suspect. The charges were later dropped, but the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” said the American Civil Liberties Union. “… Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about.”


As previously mentioned, there is a growing view that facial recognition technology is being weaponized against underrepresented minorities in the United States. In recent months, in the time since the deaths – some would say murders — of George Floyd and Breonna Taylor at the hands of White police officers, civil rights groups have pointed to the use of facial recognition technologies by law enforcement at protests. Also, critics of Trump have noted similar technologies in use by law enforcement at protests against the now-outgoing president. And with growing awareness of right-wing and White supremacist influences in law enforcement, people are wary of permitting any more advances that can be used in an oppressive fashion.

As I indicated in a previous essay on facial recognition systems, such digital tools are used for a variety of purposes, many of them beneficial. However, as I also demonstrated, those tools are also extremely easy to abuse, particularly in the hands of governments and the law enforcement community. And in today’s politically explosive environment, all it takes is the wrong person in elected office to turn a beneficial tool into a weapon for suppression.

There is, of course, the “Big Brother” scenario: George Orwell’s dystopian nightmare of a totalitarian government that maintains control through constant electronic surveillance of its citizens. Although people argue that “such things can never happen here,” a great many things have happened in America over the last four years that people once argued only happened in dictatorships or “Third-World” countries. For example, armed, unidentifiable “security officers” never used to roam America’s streets, grabbing up citizens and transporting them to places unknown. Attorneys working for elected officials didn’t use to call for the deaths of their clients’ perceived enemies. White supremacists didn’t openly accept orders from the president of the United States. Conspiracy theorists didn’t publicly tout their illogical views while running for, or working in, public office. And the president of the United States, and his supporters in Congress, didn’t flatly assert that an election was fraudulent just because he lost it.

A lot can happen in a nation “where it can’t possibly happen here.” In fact, many of the examples cited above used to “only happen overseas.” Of course, if something happens overseas, it should not be all that difficult to believe that it could happen here in America. Which is why the following developments, here and abroad, are so troubling:

  • In April 2019, it was revealed that the Chinese government was using facial recognition technology to surveil Uighurs, a mostly Muslim ethnic group. As the New York Times also reported, hundreds of thousands of Uighurs were surveilled, arrested, and then imprisoned in secret camps.
  • In January 2020, Amnesty International warned that, “In the hands of Russia’s already very abusive authorities, and in the total absence of transparency and accountability for such systems, the facial recognition technology is a tool which is likely to take reprisals against peaceful protest to an entirely new level.” The warning came as a Moscow court took on a case by a civil rights activist and a politician who argued that Russia’s surveillance of public protests was a violation of their right to peacefully assemble.
  • Six months later, in Portland, Oregon, unidentified “federal police officers” began detaining those protesting police violence. Portland Mayor Ted Wheeler called them Trump’s “personal army,” and Attorney General Bill Barr acknowledged sending the officers. Many of those detained were imprisoned for a short time, then released, often with no charges being filed and no way to identify the officers involved.
  • In the summer of 2020, Black Lives Matter protesters, as well as those protesting Trump’s policies, complained that they were being surveilled by police officers using facial recognition software.
  • And in December 2020, it was revealed that Huawei is marketing facial recognition software to the Chinese government that is reportedly capable of sending “automated ‘Uighur alarms’ to government authorities when its camera systems identify members of the oppressed minority group.” On Dec. 16, it was revealed that tech giant Alibaba also possessed a similar system.

America is a nation too often consumed by racial tensions. Indeed, we see increasingly violent rhetoric and actions from right-wing activists, who are often fueled by, and in turn fuel, right-wing media and White supremacist ideologies. So when we see other countries cracking down on racial minorities, it is important to remember that the same thing can happen here. It is equally important to remember that race-based violence and suppression are a long part of America’s history, built into its very foundation.

And with racially coded language in political speeches such as “Take Back America” and “Make America Great Again,” underrepresented minorities see themselves being blamed for America’s failures by a rising number of politicians who identify with or are followed by conspiracy theorists and/or White supremacists. Regretfully, the accusers are not mature enough to recognize their own culpability in such failures because they can’t see past their own self-interest.

What Biden Can Do

This is one area in which Biden will absolutely need a majority in Congress with which he can work. If he gains that advantage, he can:

  • Follow the lead of soon-to-be Vice President Kamala Harris, who, as part of a group of legislators, sent letters to the FBI, the Equal Employment Opportunity Commission (EEOC), and the FTC to point out research showing how facial recognition can produce and reinforce racial and gender bias. Harris asked “that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and called on the FTC to consider requiring facial recognition developers to disclose the technology’s potential biases to purchasers.”
  • Take the suggestion from IBM and Microsoft to craft a federal law regulating the use of facial recognition systems.
  • Order an evaluation of all facial recognition technology in use by government agencies, as well as state and local law enforcement agencies, to determine their accuracy dealing with diverse groups of people.
  • Offer incentives to companies that crack the bias problem in facial recognition technologies.
  • Set a new federal threshold for such systems, at least 85 percent accuracy for all racial/ethnic groups, before use by law enforcement agencies.
  • In federal cases, ban use of facial recognition tech when it is being used as the primary reason for probable cause.
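The accuracy-threshold suggestion above lends itself to a simple pre-deployment audit. The sketch below is purely illustrative – the function names, group labels, and evaluation numbers are all hypothetical, and 85 percent is just the floor proposed above – but it shows how a regulator could mechanically check that a system clears the bar for every demographic group, not merely on average:

```python
# Illustrative sketch only: checking a facial recognition system against a
# minimum per-group accuracy threshold, in the spirit of the 85 percent
# floor suggested above. All names and numbers here are hypothetical.

MIN_ACCURACY = 0.85  # hypothetical federal threshold

def meets_threshold(per_group_accuracy: dict, threshold: float = MIN_ACCURACY) -> bool:
    """Return True only if every group's measured accuracy clears the bar."""
    return all(acc >= threshold for acc in per_group_accuracy.values())

def failing_groups(per_group_accuracy: dict, threshold: float = MIN_ACCURACY) -> list:
    """List the groups that fall below the threshold, for audit reporting."""
    return [g for g, acc in per_group_accuracy.items() if acc < threshold]

# Made-up evaluation results for three demographic groups:
results = {"Group A": 0.97, "Group B": 0.81, "Group C": 0.90}
print(meets_threshold(results))   # → False (Group B is below 0.85)
print(failing_groups(results))    # → ['Group B']
```

The key design point is that the check uses a minimum over groups rather than an overall average: a system that is 99 percent accurate overall but 60 percent accurate for one group would still fail.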

CONCLUSION

These are but a few of the approaches Biden can take to improve America’s cybersecurity infrastructure while improving consumer privacy. There are, of course, likely many more ideas out there that experts will recommend.

I hope he keeps an open mind and considers them.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a strategic communications firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, former editor at The Buffalo News, and current instructor at Hilbert College.

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Image credits: (1) Gayatri Malhotra (cover); (2) John Salvino; (3) iStock, by Getty Images.


Security Specialist Barracuda Reports 7% Revenue Growth; Margins Still An Issue

Barracuda Networks, a security/data protection solution vendor, yesterday reported $94.3M in revenues for the quarter ended August (~3Q17). That’s 7% growth from the prior year (June-August 2016). This growth rate would satisfy many companies, including lots of vendors selling into telecom networks.

Margins not going in the right direction

For Barracuda, a vendor focused on cloud-based security solutions to a wide range of vertical markets, the 7% is a step down. After going public in November 2013, Barracuda’s YoY revenues grew steadily in double digits. This was organic growth, largely, as the company’s few acquisitions had minimal revenue impact. In the last three quarters, YoY revenue growth has been in the 6-8% range. Some growth moderation is normal, as the company started from a small base. But this comes at a time when Barracuda is still struggling to make money. As shown in the figure, operating margins (operating income/revenues) have fallen in the last few quarters, and they were already low.
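The margin metric used above – operating income divided by revenues – is easy to compute for any quarter. The figures below are hypothetical, made up purely to illustrate the calculation (only the $94.3M revenue figure comes from the reported quarter; the operating income values are not Barracuda's actual results):

```python
# Illustrative only: computing the operating margin trend described above.
# The operating income figures here are hypothetical, not actual results.

def operating_margin(operating_income: float, revenues: float) -> float:
    """Operating margin = operating income / revenues, as a percentage."""
    return 100.0 * operating_income / revenues

# Hypothetical quarterly figures, in $M: (operating income, revenues)
quarters = {
    "1Q17": (2.1, 87.0),
    "2Q17": (1.4, 90.5),
    "3Q17": (0.8, 94.3),  # revenue matches the reported quarter; income is made up
}

for q, (oi, rev) in quarters.items():
    print(f"{q}: {operating_margin(oi, rev):.1f}%")
```

With numbers like these, margins shrink from roughly 2.4% to under 1% even as revenues grow – the pattern of low and falling operating margins the figure describes.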


IPO in November 2013

Despite low or negative margins, Barracuda has managed to stay free cash flow positive for every quarter since going public in November 2013. For the last 2 years, its average quarterly FCF was +$12.3M. Not a lot for a big vendor like Cisco or HPE (both competitors), but enough to leave a small one like Barracuda with a cash reserve of $207M as of August. That could be handy either for small M&A transactions or as a buffer against a few more low-margin quarters. (Note that Barracuda’s net income has been in the 1-4% of revenues range for the last 7 quarters.)

Several security rivals are losing money outright, including Palo Alto Networks, Symantec, FireEye, and Proofpoint. This last one is interesting. Proofpoint bills itself as a security-as-a-service provider, playing into a similar cloud-based security market. The company’s latest annual revenues of $376M put it just $23M ahead of Barracuda (comparing fiscal year to fiscal year). Proofpoint’s current market cap is roughly 3x Barracuda’s, though. Proofpoint is growing much faster, with revenues up 42% in 2016. That growth has not come with positive margins; Proofpoint’s net loss was 30% of revenues for the year. Many expect Proofpoint (and Palo Alto Networks, and others) to grow out of their losses.

Made in California

Barracuda has physical products (e.g. the Next-Generation Firewall), not just software, and manufactures these appliances in California. To some, that might suggest higher production costs and/or slower delivery to customers. Barracuda’s cost of revenue is relatively low, though, averaging 24% of revenues for the last 8 quarters. Turnaround time is also quick: Barracuda says most orders are received in the same quarter as the revenues are ultimately booked. One thing that helps here is that around 70% of Barracuda’s revenue comes from the US market, a figure that hasn’t changed much since going public. Also helpful is Barracuda’s vast distributor network, which should accelerate customer acceptance.

Balancing the revenue model

Barracuda gets revenues from both physical appliances, and subscriptions. In 2013-14, appliances accounted for 30% of revenues, with subscriptions the remainder. Since then, appliance revenues have been falling, down to under 20% of total in 3Q17. That’s not necessarily a problem. Subscriptions bring recurring revenues, after all. Further, if the margins on subscription services are high enough, giving away the appliance for free may even be an option. That’s not the case here.

Barracuda’s renewal rates are high, at 92% for the 6 months ended August. There’s no guarantee that will persist, though. Moreover, customers are opting for shorter contract lengths in fiscal year 2017. This adds uncertainty to revenue projections, and generates more work for the sales force. Average contract length, and Barracuda’s sales costs, should be watched closely.