
On the digital front, what can President Biden do to enhance our security and protect our privacy?

After Joseph R. Biden Jr. takes the oath of office on Jan. 20, 2021, the newly inaugurated president of the United States will need to contend with an America in turmoil.

Naturally, the scourge of COVID-19, and its devastating impact on the nation, will be near or at the top of his list of Things That Must Be Dealt with Immediately. With over 311,000 deaths in total and more than 3,000 deaths per day (as of Dec. 18, 2020), he has no choice but to respond to that grim reality. Recently released vaccines from Pfizer and Moderna will no doubt be very useful weapons in his arsenal.

However, America is also still reeling from Russia’s ongoing, unprecedented cyberattack against U.S. governmental agencies and corporations. Even though tens of billions of dollars have been spent to prevent such an attack, it went undetected for most of a year — and remains an ongoing concern.

Toss in the fact that states and consumers are becoming more wary of the power wielded by corporations and social media platforms, which use consumers’ personal data for their own ends and profit – effectively turning users into monetized resources for their exploitation.

And, of course, there is also the growing concern that facial recognition technology is being weaponized against underrepresented minorities in the U.S. – invading their privacy and possibly violating their rights.

When it is all added up, it becomes clear that America is on the precipice of a digital war. The only question as yet unanswered is, when all is said and done, will the war for cybersecurity and digital privacy be decided in our favor, or in the favor of those who would exploit us for money and power?

Soon-to-be President Biden has several options available to deal with these issues. Let’s explore what we know, and what Biden might do.

CYBERSECURITY

What We Know

The U.S. government spent billions of dollars creating a new war room for U.S. Cyber Command, while also installing Einstein, a web of sensors throughout the nation designed to detect and avert cyberattacks. Unfortunately, according to the U.S. intelligence community, Russia designed its most recent attacks to bypass Einstein, slipping its assault past the sensor web and into the computer infrastructure of corporations and government agencies.

The list of impacted agencies is large: The U.S. Commerce, Homeland Security, Treasury and Energy departments reported having been hit, as did the Pentagon, the U.S. Postal Service, and the National Institutes of Health.

Although the sheer breadth of the attacks was stunning — indeed, the attack is believed to be one of the largest ever — it has not been revealed what information might have been stolen, or whether the hacks succeeded in changing or destroying data.

Investigators have yet to determine whether any classified systems were breached. Still, the amount of information put at risk dwarfs that of previous network intrusions.

However, it is known that the hackers exploited a weakness in the cyber infrastructure: compromised software from SolarWinds, an Austin, Texas-based company. SolarWinds’ Orion software, which is designed to monitor computer networks, is used by thousands of companies and by many federal agencies, making it an inviting target.

Indeed, SolarWinds estimated, in a Securities and Exchange Commission filing on Dec. 14, that as many as 18,000 of its customers may have been impacted by the breaches.

On Dec. 13, the U.S. Cybersecurity and Infrastructure Security Agency (CISA) ordered all federal agencies “to immediately disconnect or power down affected SolarWinds Orion products from their network.” CISA is part of the Department of Homeland Security, which, on Dec. 16, announced that it, the FBI and the Office of the Director of National Intelligence (DNI) had formed a joint team to “coordinate a whole-of-government response to this significant cyber incident.”

Beyond those agency actions, there has been no comment from President Donald Trump regarding the attack. Critics are saying that Trump’s silence is more proof that he refuses to take a stand against Russia, no matter the provocation.

Meanwhile, CISA is warning that “this threat poses a grave risk to the (federal government) and state, local, tribal, and territorial governments as well as critical infrastructure entities and other private sector organizations.”

What Biden Can Do

Once in the White House, Biden has a wealth of options at his disposal:

  • Declare, in no uncertain terms, that Russia is responsible for various intrusions into corporate, state, and U.S. governmental computer systems and that such actions need to be halted immediately.
  • Determine how many government agencies, states, and corporations use the same or similar software, and order researchers to a) find more diverse software for network monitoring, or b) create ways to strengthen the security of the software to resist intrusion.
  • Form an agreement with other nations to refuse to sell any software or computer hardware (or parts) to Russia, Russia-controlled nations and territories, or Russian-headquartered businesses.
  • Create an equivalent to the National Transportation Safety Board. Rather than investigate accidents and transportation standards, this proposed agency would “track attacks, conduct investigations into the root causes of vulnerabilities and issue recommendations on how to prevent them in the future,” says Alex Stamos, director of the Stanford Internet Observatory. Stamos is the former chief information security officer of Yahoo and Facebook.
  • Make sure Congress passes a law requiring companies and government agencies to reveal every time their cybersecurity is breached. Currently, no such law exists to force such compliance in areas other than medical or banking information. As Stamos notes, you “can’t respond to the overall risk as long as we’re discussing only a fraction of the significant security failures.”
  • Implement harsh financial sanctions on the leaders of Russia’s technology industries.
  • Launch federal investigations into the cyberattacks in an effort to identify individual hackers. If possible, prosecute the hackers and their superiors.
  • Establish a ban on all Russian-created software and hardware in the United States. Such a ban should include Kaspersky Labs, which is currently prohibited from selling to the federal government, but remains free to sell otherwise.
  • Conduct mandatory cybersecurity “stress tests” of state and federal governmental computer systems, as well as those utilized by major corporations, banks, hospital systems and insurance companies.
  • Update all federal government computer systems to include stronger security.
  • Launch a series of retaliatory cyberattacks against the business holdings of Russian President Vladimir Putin’s most ardent financial backers, as well as the banks Putin himself uses.

Not only would these changes result in a more digitally secure America, but they would also provide a massive boost to the U.S. economy. As the COVID-19 pandemic continues to rage, MTN Consulting’s research has shown that the pandemic has proved beneficial to parts of the communications industry, as:

  • “The sudden, widespread need to work and study from home has increased demand for the cloud services offered by many webscale players.”
  • “Technology investments by the webscale sector are also” surging, with research and development spending up 17% in 3Q20 to $46.1 billion.
  • “Webscale spending on … network infrastructure has also spiked,” with total capital expenditures rising 25 percent year-over-year “to hit $34.7 billion in 3Q20. A good portion of capex in 2020 has supported the growth of ecommerce activity, which was given a lift by pandemic-related lifestyle changes. However, the Network/IT/Software portion of capex grew 31% YoY in 3Q20 to $16.0 billion. New data center construction slowed in 2020 but rapid growth of traffic and cloud services adoption forced operators to invest heavily in new servers and other incremental capacity additions.”

A sudden, technology industry-wide push to secure the nation’s cyber infrastructure would create jobs, inject large amounts of money into the economy, and, of course, make the country more secure. A win-win for a newly installed president.

CONSUMER PRIVACY

What We Know

In recent weeks, we’ve seen state governments open a new front in the war for digital privacy. People have become more aware that social media platforms and other telecommunications companies collect their personal data, store it, and then use it to fuel marketing efforts or sell it to other businesses. However, it is difficult to tell which company is doing what with the data, as many companies are not remotely transparent about what happens after they acquire it.

Americans are very much aware that their everyday lives – both online and off – are being watched closely by various corporate interests. A 2019 Pew Research Center survey found that a majority of Americans believe their lives are heavily monitored by both corporate interests and the federal government.

“Roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government,” the report warned.

Granted, the Pew report also admitted that “data-driven products and services are often marketed with the potential to save users time and money or even lead to better health and well-being.” Still, 81 percent of those surveyed expressed the belief that “the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection.” The report also noted that 79 percent of respondents worry about how their data is used by companies, while 64 percent worry about the same data’s use by the government. Indeed, “most also feel they have little or no control over how these entities use their personal information.”

Enter the Federal Trade Commission. On Dec. 14, the FTC ordered Amazon, Discord, Facebook, Reddit, Snap, Twitter, WhatsApp, YouTube, and ByteDance, which operates TikTok, to “provide data on how they collect, use, and present personal information, their advertising and user engagement practices, and how their practices affect children and teens.”

In a statement, the FTC commissioners said:

These digital products may have launched with the simple goal of connecting people or fostering creativity. But, in the decades since, the industry model has shifted from supporting users’ activities to monetizing them. This transition has been fueled by the industry’s increasing intrusion into our private lives. Several social media and video streaming companies have been able to exploit their user-surveillance capabilities to achieve such significant financial gains that they are now among the most profitable companies in the world.

Never before has there been an industry capable of surveilling and monetizing so much of our personal lives. Social media and video streaming companies now follow users everywhere through apps on their always-present mobile devices. This constant access allows these firms to monitor where users go, the people with whom they interact, and what they are doing. But to what end? Is this surveillance used to build psychological profiles of users? Predict their behavior? Manipulate experiences to generate ad sales? Promote content to capture attention or shape discourse? Too much about the industry remains dangerously opaque.

A few days later, another gauntlet was thrown: 38 state attorneys general filed an antitrust lawsuit against Google – the company’s third antitrust complaint in less than two months.

“Google sits at the crossroads of so many areas of our digital economy and has used its dominance to illegally squash competitors, monitor nearly every aspect of our digital lives, and profit to the tune of billions,” said New York Attorney General Letitia James.

In other words, states were worried that Google had used its massive amounts of data on what people do online to benefit itself at the expense of its competitors. Sound familiar?

Meanwhile, a leaked Google document detailing the company’s plan to undermine European Union legislation for its own ends has EU lawmakers on the alert. According to the New York Times:

“Academic allies” would raise questions about the new rules. Google would attempt to erode support within the European Commission to complicate the policymaking process. And the company would try to seed a trans-Atlantic trade dispute by enlisting U.S. officials against the European policy.

For many officials in Brussels, the document confirmed what they had long suspected: Google and other American tech giants are engaged in a broad lobbying campaign to stop stronger regulation against them.

As MTN analyst Matt Walker puts it, “Big tech wants to serve up ads to exactly the right person, at the right time, in the right place – and the only way to do this is by a massive invasion of what many would consider private information.”

According to Zenith Media, about $587 billion was spent on advertising worldwide in 2020.

Another firm, Magna, says that digital ad spending, which it estimates rose 8 percent in 2020, will comprise 59 percent of all global ad spending by year’s end. That eclipses traditional advertising such as television, radio, print and out-of-home, which Magna estimates has fallen 18 percent from 2019.

What Biden Can Do

Many groups and organizations, including Public Citizen and the Parent Coalition for Student Privacy, have offered recommendations on this matter. As with cybersecurity, Biden has a variety of options:

  • If Democrats win both of Georgia’s Senate runoff races in January, they will control the U.S. Senate, and Biden may consider expanding the responsibilities of the Consumer Financial Protection Bureau to include regulation of social media platforms and corporations in the realms of consumer privacy and data usage. Created in 2010 by the Obama administration, in which Biden served as vice president, the CFPB’s current mandate is consumer protection in the financial sector. However, it already has experience engaging “with the data economy in a number of ways. Its enforcement actions have required it to look at how financial entities are using social media and algorithms to sell to consumers. The agency has become active in enforcing privacy matters. It has also taken steps toward improving data portability principles and building a regulatory sandbox.”
  • Limit access by others to our digital lives. As we’ve noted previously, an increasing number of employers, schools and federal government agencies are requiring access to our digital accounts. U.S. border enforcement agents are demanding that travelers unlock their devices and provide passwords. Schools are utilizing services that allow them to access students’ devices and social media accounts. All of those entities should be required to obtain a warrant prior to being granted access. After all, the right not to incriminate yourself IS spelled out in the U.S. Constitution.
  • Ban social media platforms and other companies from using consumer data without express written permission from said consumers. A standardized form should govern whether consumers grant companies permission to sell or share their personal data.
  • Require all companies and lobbying entities to have fully transparent systems in place as to how data is collected and used.
  • Require all entities that collect consumer data to publish an annual notice to consumers whose data they use.
  • Ban anonymous social media accounts. In other words, social media accounts must have a verifiable name, address, phone number and email address prior to the account’s activation. Said information must be confirmed every two years. (This might help defuse some of the mob mentality currently evident on social media platforms.)
  • Hold social media companies responsible for the content that they publish. Ban content that advocates harm against others based on race, ethnicity, gender, gender identity, sexual orientation, religion, etc.
  • The previous suggestion could work alongside a redesign or elimination of Section 230, a section of the Communications Decency Act of 1996. The section shields internet companies from liability over the content they publish. In recent years, Republicans – notably Trump – and Democrats have argued for reforming or abolishing the rule. Indeed, Bruce Reed, Biden’s top technology adviser, advocates reforming Section 230 in a book he coauthored, “Which Side of History? How Technology Is Reshaping Democracy and Our Lives.” In it, he and coauthor James Steyer, a Stanford University lecturer, argue that if internet companies and social media platforms “sell ads that run alongside harmful content, they should be considered complicit in the harm. If their algorithms promote harmful content, they should be held accountable for helping redress the harm. In the long run, the only real way to moderate content is to moderate the business model.”
  • Companies should be required to establish easier ways for consumers to manage their devices’ and accounts’ privacy settings.
  • After it was revealed that many members of Congress simply didn’t comprehend how social media platforms work, even as they were trying to regulate the industry, require members of Congress to be briefed annually on the current state of social media, as well as its impact on their constituents.
  • Require technology companies to create more secure privacy settings for minors using social media.
  • Push the Federal Communications Commission to reassert net neutrality, a rule that barred internet service providers from blocking, slowing or otherwise discriminating against lawful internet traffic.

FACIAL RECOGNITION

What We Know

In the above discussion on privacy, one area that we neglected to delve into is the impact of facial recognition on privacy. A fundamental aspect of the American criminal justice system is that people are innocent until proven guilty, an axiom more commonly known as the “presumption of innocence.” This is echoed in the Fifth Amendment to the U.S. Constitution, which states, in part, that no person “shall be compelled in any criminal case to be a witness against himself.” In other words, when people “take the Fifth,” they are exercising their right not to incriminate themselves.

By contrast, the growing use of facial recognition technology, which is widely recognized as a tool to enhance security and identify potential criminal suspects, jeopardizes people’s right to privacy, as well as that presumption of innocence. Indeed, on Dec. 22, 2020, New York Gov. Andrew M. Cuomo signed into law the nation’s first statewide ban on using biometric identifying technology such as facial recognition in schools. The law bans the use of such technology in schools until July 1, 2022, or until after the state Education Department has conducted extensive research into whether the technology should be used in schools.

“This technology is moving really quickly without a lot of concern about the impact on children,” said Stefanie Coyle, deputy director of education policy for the New York Civil Liberties Union. “This bill will actually put the brakes on that.”

Even scientists are growing concerned about the assault on privacy posed by facial recognition systems, with many calling for “a firmer stance against unethical facial-recognition research. It’s important to denounce controversial uses of the technology, but that’s not enough, ethicists say. Scientists should also acknowledge the morally dubious foundations of much of the academic work in the field — including studies that have collected enormous data sets of images of people’s faces without consent, many of which helped hone commercial or military surveillance algorithms.”

With the growing push in the retail sphere toward more protection of consumers’ privacy, is it so surprising that a similar push would emerge in other areas? The use of facial recognition to surveil public spaces has been under debate for some time – particularly as people gain a deeper understanding of how unreliable the systems are when dealing with people who are not White men.

Indeed, in December 2019, a National Institute of Standards and Technology study presented the results of testing 189 facial recognition algorithms from 99 developers. The study found that the majority of the software had some form of bias. Indeed, among the broad findings were these troubling revelations (a simple sketch of how such per-group error rates are tallied follows the list):

  • One-to-one matching revealed higher error rates for “Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
  • Among U.S.-made software, “there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups (which include Native American, American Indian, Alaskan Indian and Pacific Islanders). The American Indian demographic had the highest rates of false positives.”
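
To make those findings concrete, here is a minimal sketch, in Python, of how a per-demographic false-match rate might be tallied. Everything in it is fabricated for illustration: the group names, the handful of records, and the rates they produce. A real evaluation like NIST’s runs millions of labeled photo pairs through each algorithm.

```python
# Minimal sketch: tallying false-match rates per demographic group.
# All records and group names below are fabricated for illustration.
from collections import defaultdict

# Each record: (demographic group, same person in both photos?, matcher said "match"?)
comparisons = [
    ("group_a", False, True),   # different people, matcher says "match": a false positive
    ("group_a", False, False),
    ("group_a", True,  True),
    ("group_b", False, True),
    ("group_b", False, True),
    ("group_b", False, False),
]

false_matches = defaultdict(int)
impostor_pairs = defaultdict(int)  # pairs of different people; a "match" here is an error

for group, same_person, matched in comparisons:
    if not same_person:                   # only impostor pairs can yield false positives
        impostor_pairs[group] += 1
        false_matches[group] += int(matched)

for group in sorted(impostor_pairs):
    fmr = false_matches[group] / impostor_pairs[group]
    print(f"{group}: false-match rate = {fmr:.2f}")
```

The factor-of-10-to-100 differentials NIST reported would show up in output like this as one group’s rate dwarfing another’s.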

Such errors in identifying criminal suspects can be devastating to the innocent people who are caught up in a criminal investigation. One prominent example comes from January 2020 in Michigan: Detroit police arrested Robert Williams, a Black man, as a suspect in a shoplifting case. However, they were following the lead of a facial recognition scan, which had incorrectly identified Williams as the suspect. The charges were later dropped, but the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” said the American Civil Liberties Union. “… Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about.”


As previously mentioned, there is a growing view that facial recognition technology is being weaponized against underrepresented minorities in the United States. In recent months, in the time since the deaths – some would say murders – of George Floyd and Breonna Taylor at the hands of White police officers, civil rights groups have pointed to the use of facial recognition technologies by law enforcement at protests. Critics of Trump have noted similar technologies in use by law enforcement at protests against the now-outgoing president. And with growing awareness of right-wing and White supremacist influences in law enforcement, people are wary of permitting any more advances that can be used in an oppressive fashion.

As I indicated in a previous essay on facial recognition systems, such digital tools are used for a variety of purposes, many of them beneficial. However, as I also demonstrated, those tools are also extremely easy to abuse, particularly in the hands of governments and the law enforcement community. And in today’s politically explosive environment, all it takes is the wrong person in elected office to turn a beneficial tool into a weapon for suppression.

There is, of course, the “Big Brother” scenario: George Orwell’s dystopian nightmare of a totalitarian government that maintains control through constant electronic surveillance of its citizens. Although people argue that “such things can never happen here,” a great many things have happened in America over the last four years that people once argued only happened in dictatorships or “Third-World” countries. For example, armed, unidentifiable “security officers” never used to roam America’s streets, grabbing up citizens and transporting them to places unknown. Attorneys working for elected officials didn’t use to call for the deaths of their client’s perceived enemies. White supremacists didn’t openly accept orders from the president of the United States. Conspiracy theorists didn’t publicly tout their illogical views while running for, or working in, public office. And the president of the United States, and his supporters in Congress, didn’t flatly assert that an election was fraudulent just because he lost it.

A lot can happen in a nation “where it can’t possibly happen here.” In fact, many of the examples cited above used to “only happen overseas.” Of course, if something happens overseas, it should not be all that difficult to believe that it could happen here in America. Which is why the following developments, here and abroad, are so troubling:

  • In April 2019, it was revealed that the Chinese government was using facial recognition technology to surveil Uighurs, a mostly Muslim ethnic group. As the New York Times also reported, hundreds of thousands of Uighurs were surveilled, arrested, and then imprisoned in secret camps.
  • In January 2020, Amnesty International warned that, “In the hands of Russia’s already very abusive authorities, and in the total absence of transparency and accountability for such systems, the facial recognition technology is a tool which is likely to take reprisals against peaceful protest to an entirely new level.” The warning came as a Moscow court took on a case by a civil rights activist and a politician who argued that Russia’s surveillance of public protests was a violation of their right to peacefully assemble.
  • Six months later, in Portland, Oregon, unidentified “federal police officers” began detaining those protesting police violence. Portland Mayor Ted Wheeler called them Trump’s “personal army,” and Attorney General Bill Barr acknowledged sending the officers. Many of those detained were imprisoned for a short time, then released, often with no charges being filed and no way to identify the officers involved.
  • In the summer of 2020, Black Lives Matter protesters, as well as those protesting Trump’s policies, complained that they were being surveilled by police officers using facial recognition software.
  • And in December 2020, it was revealed that Huawei is marketing facial recognition software to the Chinese government that is reportedly capable of sending “automated ‘Uighur alarms’ to government authorities when its camera systems identify members of the oppressed minority group.” On Dec. 16, it was revealed that tech giant Alibaba also possessed a similar system.

America is a nation too often consumed by racial tensions. Indeed, we see increasingly violent rhetoric and actions from right-wing activists, who are often fueled by, and in turn fuel, right-wing media and White supremacist ideologies. So when we see other countries cracking down on racial minorities, it is important to remember that the same thing can happen here. It is equally important to remember that race-based violence and suppression are a longstanding part of America’s history, built into its very foundation.

And with racially coded language in political speeches such as “Take Back America” and “Make America Great Again,” underrepresented minorities see themselves being blamed for America’s failures by a rising number of politicians who identify with or are followed by conspiracy theorists and/or White supremacists. Regrettably, the accusers are not mature enough to recognize their own culpability in such failures because they can’t see past their own self-interest.

What Biden Can Do

This is one area in which Biden will absolutely need a majority in Congress with which he can work. If he gains that advantage, he can:

  • Follow the lead of soon-to-be Vice President Kamala Harris, who, as part of a group of legislators, sent letters to the FBI, the Equal Employment Opportunity Commission (EEOC), and the FTC to point out research showing how facial recognition can produce and reinforce racial and gender bias. Harris asked “that the EEOC develop guidelines for employers on the fair use of facial analysis technologies and called on the FTC to consider requiring facial recognition developers to disclose the technology’s potential biases to purchasers.”
  • Take the suggestion from IBM and Microsoft to craft a federal law regulating the use of facial recognition systems.
  • Order an evaluation of all facial recognition technology in use by government agencies, as well as state and local law enforcement agencies, to determine their accuracy in dealing with diverse groups of people.
  • Offer incentives to companies that crack the bias problem in facial recognition technologies.
  • Set a new federal threshold for such systems, requiring at least 85 percent accuracy for every racial/ethnic group before law enforcement agencies may use them (see the sketch after this list).
  • In federal cases, ban the use of facial recognition matches as the primary basis for probable cause.
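
The 85 percent proposal above amounts to a compliance gate: a system qualifies only if its worst-performing demographic group still clears the floor. Here is a minimal sketch of that logic; the group names and accuracy scores are hypothetical.

```python
# Minimal sketch of the compliance gate the 85 percent proposal implies.
# Group names and accuracy figures are hypothetical.
FEDERAL_ACCURACY_FLOOR = 0.85

def approved_for_law_enforcement(accuracy_by_group: dict) -> bool:
    """True only if EVERY demographic group meets the federal floor."""
    return all(acc >= FEDERAL_ACCURACY_FLOOR for acc in accuracy_by_group.values())

vendor_results = {
    "Group A": 0.97,
    "Group B": 0.88,
    "Group C": 0.83,  # one underperforming group disqualifies the whole system
}

print(approved_for_law_enforcement(vendor_results))  # False, because 0.83 < 0.85
```

The design choice worth noting is the use of the minimum, not the average: a system that is superbly accurate on most groups but unreliable on one would still be barred from law enforcement use.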

CONCLUSION

These are but a few of the approaches Biden can take to improve America’s cybersecurity infrastructure while improving consumer privacy. There are, of course, likely many more ideas out there that experts will recommend.

I hope he keeps an open mind and considers them.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a strategic communications firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, former editor at The Buffalo News, and current instructor at Hilbert College.

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Image credits: (1) Gayatri Malhotra (cover); (2) John Salvino; (3) iStock, by Getty Images.


THE RISE OF DIGITAL ADVERTISING AND THE DEATH OF OBJECTIVE JOURNALISM

In the beginning, the people of Earth told their truths, voiced their opinions, and advertised their wares and services in print media such as newspapers and magazines.

And the people of Earth looked upon their works, and called it good.

In time, humanity harnessed the power of parts of the electromagnetic spectrum. And lo, the people of Earth told their truths, voiced their opinions, and advertised their wares and services on radio.

And the people of Earth looked upon their works, and called it good.

In time, humanity harnessed the power of more of the electromagnetic spectrum. And lo, the people of Earth told their truths, voiced their opinions, and advertised their wares and services on television.

And the people of Earth looked upon their works, and called it good.

In time, humanity harnessed the power of even more of the electromagnetic spectrum. And lo, the people of Earth told their truths, voiced their opinions, and advertised their wares and services in the digital realm.

And the people of Earth looked upon their works, and … well, that’s where all kinds of things got screwed up … particularly for news outlets.

Getting to the point

Of course, “screwed up” is a completely subjective interpretation of the current state of journalism. Still, I consider it appropriate, at least as it relates to digital marketing and social media. One advantage of a journalism career that ran from 1997 to 2018 is that I was able to watch in real time as journalism as a whole shifted to accommodate the new reality of the internet. I’ve also been able to watch as the rise and growth of social media was seemingly accompanied by a loss of many Americans’ critical thinking skills and, from there, the emergence of a hyper-partisan nation where one’s political affiliation dictates one’s news source.

My biased version of history notwithstanding, there is no arguing that there has been a massive shift in how journalism is defined and perceived in this country. With the rise of the internet came the birth of social media. With social media came an expansion of where people could share their news and voice their opinions.

Where things got “screwed up,” as I said, is when people stopped looking at their opinions as their own thoughts and biases, and started perceiving them as “facts.”  And then they spread these “alternative facts” (more on this later) via social media. And as the social media platforms gained power, and truth became more and more subjective, news organizations lost power.

Recent news

Digital advertising has fueled the growth of Facebook, Google, Baidu, Tencent, and other internet services companies. The companies don’t charge their users, which enhances the social media platforms’ popularity. Still, the proliferation of the social media platforms hasn’t been the best thing for the news business. Indeed, digital companies have been in the news fairly often in recent years, as has their connection to news outlets. A few examples:

  • In fall 2019, it became apparent that the two reigning giants of digital advertising would have to acknowledge a third member of the club. Facebook and Google, which had ruled the industry for most of the decade, were about to be joined by Amazon. Indeed, Amazon’s advertising revenue has continued to grow, with even its 1Q20 numbers, from the beginning of the COVID-19 crisis, showing a year-over-year increase of 44 percent. Still, that growth comes with a price. Amazon was founded, and is led, by Jeff Bezos, the owner of The Washington Post. The Post’s investigative reporting on President Donald J. Trump drew Trump’s wrath, and that rage spilled over onto Amazon.
  • A growing number of companies, in response to the growing repercussions of the Black Lives Matter protests, have been removing their ads from Facebook. The companies have been declaring solidarity with the #StopHateForProfit boycott, which is being led by a coalition of civil rights groups. Among the more than 400 companies is the retailer Patagonia, which tweeted on June 21 that “Patagonia is proud to join the Stop Hate for Profit campaign. We will pull all ads on Facebook and Instagram, effective immediately, through at least the end of July, pending meaningful action from the social media giant.” In a later series of tweets, the company said:

“From secure elections to a global pandemic to racial justice, the stakes are too high to sit back and let the company continue to be complicit in spreading disinformation and fomenting fear and hatred. As companies across the country work hard to ensure that Americans have access to free and fair elections this fall, we can’t stand by and contribute resources to companies that contribute to the problem.”

  • Amid this backdrop, media buying agency GroupM predicted that “digital advertising on platforms such as Google, Facebook and Alibaba is set this year to overtake spending on traditional media for the first time, a historic shift in market share that has been accelerated by the coronavirus pandemic. Excluding online ads sold by old media outlets such as news publishers or broadcasters, digital marketing is predicted to account for more than half the $530 (billion) global advertising industry in 2020, according to GroupM, the media buying agency owned by WPP.”

It is that latter point that shows the phenomenal power of digital advertising. Indeed, the Internet Advertising Bureau (IAB) has tracked U.S. digital advertising revenues almost since the medium’s beginning. To give you an idea of how much and how quickly digital advertising has grown, the IAB reported that, in 1996, revenues from such advertising had reached $267 million. By 2000, a mere four years later, that number had grown to $8.2 billion. As the calendar rolled forward, so, too, did the ad spending, soaring to $12.5 billion in 2005; $26.0 billion in 2010; $59.6 billion in 2015; exceeding $100 billion for the first time in 2018; and hitting $124.6 billion in 2019. That’s roughly 10 times the 2005 figure.

Is that spending worth it? According to Statista, the average return-on-investment for each dollar spent on digital advertising was about $11 in 2018, “making it the medium with the highest return on advertising spending (or ROAS).”
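
For readers who want the arithmetic behind that figure spelled out, here is a short worked example. The $1,000,000 campaign budget is hypothetical; the $11-per-dollar return is the Statista average quoted above.

```python
# Worked ROAS (return on advertising spend) arithmetic for the Statista
# figure quoted above. The campaign budget is a hypothetical example.
ad_spend = 1_000_000   # dollars spent on digital ads
roas = 11.0            # ~$11 of attributable revenue per dollar spent (2018 average)

revenue = ad_spend * roas        # revenue the campaign can claim credit for
net_gain = revenue - ad_spend    # gain before production and other costs

print(f"${ad_spend:,} in ad spend -> ${revenue:,.0f} in attributable revenue")
print(f"Net gain before other costs: ${net_gain:,.0f}")
```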

What does all this have to do with journalism?

A whole heck of a lot, actually.

A less biased history of technology and journalism

Journalism has always been fueled by three forces: retail sales, subscribers, and advertisers. When journalism was pretty much only newspapers and magazines, the businesses survived on revenue generated by the sales of copies in retail venues such as shops and newsstands; via paid subscriptions, which allowed the copies to be mailed to the recipients; and through what advertisers paid to have their wares and services mentioned in print.

In time, however, radio rose up. Although the first radio program was broadcast by Canadian-born inventor Reginald Fessenden on Dec. 24, 1906, it took several more years before the first radio news broadcast would occur. On Aug. 31, 1920, a Detroit radio station aired the first radio news broadcast. Shortly thereafter, for the one-time cost of purchasing a radio (as well as the cost of powering the device), listeners could hear news and entertainment without needing to pay for the content itself. Unfortunately, sales of the radios themselves didn’t benefit the news outlets, and the subscription model didn’t work for radio, either. As a result, radio news outlets relied on corporate sponsorships and advertisers. Newspapers and magazines continued as normal, but kept a wary eye on their new competitor.

Television was next, with the 1922 transmission via radio waves of a still picture. The technology was improved in 1925, with the successful transmission of a live human face.

Meanwhile, the federal government decided to regulate these new technologies, starting with the Communications Act of 1934. The law created the Federal Communications Commission (FCC). Here’s how the FCC was initially described, according to “That’s the Way It Is: A History of Television News in America,” by Charles L. Ponce de Leon:

(The FCC) was responsible for overseeing the broadcasting industry and the nation’s airwaves, which, at least in theory, belonged to the public. Rather than selling frequencies, which would have violated this principle, the FCC granted individual parties station licenses. These allowed licensees sole possession of a frequency to broadcast to listeners in their community or region. This system allocated a scarce resource—the nation’s limited number of frequencies—and made possession of a license a lucrative asset for businessmen eager to exploit broadcasting’s commercial potential. …  As part of this process, they had to demonstrate to the FCC that at least some of the programs they aired were in the “public interest.” Inspired by a deep suspicion of commercialization, which had spread widely among the public during the early 1900s, the FCC’s public-interest requirement was conceived as a countervailing force that would prevent broadcasting from falling entirely under the sway of market forces.

In reality, however, the FCC tended to be “unusually sympathetic to the businessmen who owned individual stations and possessed broadcast licenses and made it quite easy for them to renew their licenses. They were allowed to air a bare minimum of public-affairs programming and fill their schedules with the entertainment programs that appealed to listeners and sponsors alike. By interpreting the public-interest requirement so broadly, the FCC encouraged the commercialization of broadcasting and unwittingly tilted the playing field against any programs—including news and public affairs—that could not compete with the entertainment shows that were coming to dominate the medium.”

The National Broadcasting Company’s Red Network launched the first daily radio news program on Feb. 24, 1930. The Columbia Broadcasting System (CBS) followed on Sept. 29, 1930, with radio broadcaster Lowell Thomas as the host. Thomas made history again in 1939, when he simultaneously read a news report over radio and television, making it the first television news broadcast.

It wasn’t until Sept. 2, 1963, that the daily news program came into its own on television, when Walter Cronkite began anchoring network television’s first daily half-hour news program for CBS. For well over a decade, ABC, NBC, and CBS ruled television news – until the June 1, 1980, launch of the Cable News Network, better known as CNN. The creation of a 24-hour television news network forever changed the way news was delivered. In response, the three broadcast networks expanded beyond their 30-minute daily programs, creating longer-form news shows, including morning news broadcasts.

Throughout all these changes, news outlets continued to operate pretty much as they had at the very beginning. Newspapers and magazines continued to rely on retail sales, subscribers, and advertising revenue. Radio and television broadcasters relied far more heavily on advertising revenue. As time marched on, both print and broadcast news outlets updated how they gathered and spread the news as technology evolved.

But their time-tested business models were about to be upended by one new technology that would change everything.

The advent of the Internet

The world-spanning phenomenon known as the internet began as a Department of Defense research project in 1969. A few quick facts about it, mostly courtesy of Vox:

  • “The internet began as ARPANET, an academic research network that was funded by the military’s Advanced Research Projects Agency (ARPA, now DARPA).”
  • “… In 1973, software engineers … began work on the next generation of networking standards for the ARPANET. These standards, known as TCP/IP, became the foundation of the modern internet. ARPANET switched to using TCP/IP on January 1, 1983.”
  • “During the 1980s, funding for the internet shifted from the military to the National Science Foundation. The NSF funded the long-distance networks that served as the internet’s backbone from 1981 until 1994. In 1994, the Clinton Administration turned control over the internet backbone to the private sector. It has been privately operated and funded ever since.”
  • As of April 2020, nearly 4.57 billion people “were active internet users,” encompassing 59 percent of the global population.
  • And no, despite the hype, former Vice President Al Gore did not create the internet. In truth, he never claimed that he had. What he did say, in a 1999 interview with CNN, was that he “took the initiative in creating the internet.” The actual inventors of the internet, TCP/IP designers Bob Kahn and Vint Cerf, have said that Gore was “the first political leader to recognize the importance of the internet and to promote and support its development” — particularly with his sponsorship of the 1991 High Performance Computing and Communications Act (HPCCA). Kahn and Cerf say that law “became one of the major vehicles for the spread of the internet beyond the field of computer science.”

In 1994, three years after the HPCCA’s passage, the founding of Netscape and Yahoo! helped to kickstart mass-market adoption of the web. Online searches were simplified by Google’s arrival in 1998. Hundreds of other internet companies were founded in the latter half of the 1990s — too many, as it turned out. However, the advertising-supported model of free internet services survived the dotcom bubble’s collapse in 2001.

The trouble starts

In 2004, the Pew Research Center published its first annual State of the News Media Report. In the report’s inaugural edition, Pew noted that then-President George W. Bush had told ABC News in December 2002 that he “preferred to get his news not from journalists but from people he trusted, who ‘give me the actual news’ and ‘don’t editorialize.’” Indeed, a New Yorker writer noted that senior White House staff “saw the news media as just another special interest group whose agenda was making money, not serving the public – and surveys suggest increasingly that the public agrees.”

These observations were some of the earliest warnings that a sharply partisan divide was coming over previously agreed-upon norms such as truth, facts, and the mission of journalism.  Indeed, the 2004 report went on to warn:

Some argue that as people move online, the notion of news consumers is giving way to something called “prosumers,” in which citizens simultaneously function as consumers, editors and producers of a new kind of news in which journalistic accounts are but one element.

With audiences now fragmented across hundreds of outlets with varying standards and agendas, others say the notions of a common public understanding, a common language and a common public square are disappearing.

For some, these are all healthy signals of the end of oligarchical control over news. For others, these are harbingers of chaos, of unchecked spin and innuendo replacing the role of journalists as gatekeepers over what is fact, what is false and what is propaganda. Whichever view one prefers, it seems everything is changing.

Sound familiar?

Going back to digital for a moment … among the 2004 Pew report’s conclusions, two items stand out:

  • The biggest question may not be technological but economic…If online proves to be a less useful medium for subscription fees or advertising, will it provide as strong an economic foundation for newsgathering as television and newspapers have? If not, the move to the Web may lead to a general decline in the scope and quality of American journalism, not because the medium isn’t suited for news, but because it isn’t suited to the kind of profits that underwrite newsgathering. 
  • Those who would manipulate the press and public appear to be gaining leverage over the journalists who cover them. Several factors point in this direction. One is simple supply and demand. As more outlets compete for their information, it becomes a seller’s market for information. Another is workload. The content analysis of the 24-hour news outlets suggests that their stories contain fewer sources.

So, at the time, news outlets weren’t yet certain of what to make of using the internet to gather and disseminate news. And concerns about manipulation of the media were already on people’s radar.

The fuse is lit

The 2008 presidential campaign of Barack Obama, and his 2009 inauguration, exacerbated the racism and xenophobia of many White Americans. Fox News, which had been known for its motto of being “fair and balanced,” immediately launched racist attacks on Obama and his wife, Michelle:

  • May 2008: Fox News contributor Liz Trotta joked that then-candidate Obama should be assassinated.
  • June 6: On the Fox News program “America’s Pulse,” host E.D. Hill referred to the fist bump the Obamas shared celebrating his clinching of the Democratic presidential nomination as a “terrorist fist jab.” Hill was dropped from her show a week later.
  • June 11: A Fox News chyron referred to Michelle Obama as “Obama’s Baby Mama.”

Things would only deteriorate from there.

Amid the sharply partisan tone of Fox News and other conservative news outlets came the explosion of social media platforms. As the BBC noted:

Clearly the enabler of the modern form of “fake news” – or, if you like, misinformation – has been the explosive growth of social media.

“In the early days of Twitter, people would call it a ‘self-cleaning oven’, because yes there were falsehoods, but the community would quickly debunk them,” (says Clare Wardle of First Draft News, a truth-seeking non-profit based at Harvard’s Shorenstein Center). “But now we’re at a scale where if you add in automation and bots, that oven is overwhelmed.

“There are many more people now acting as fact-checkers and trying to clean all the ovens, but it’s at a scale now that we just can’t keep up.”

One example of the latter comment is the proliferation of bots on social media. As has been noted elsewhere, bots serve a useful purpose for social media companies. Regrettably, they also represent a danger in that they can rapidly spread misinformation and propaganda. Indeed, the malicious use of bots helped sway public opinion in the 2016 election, in which Russia used them to boost Trump’s chances of winning the presidency.

In addition, in 2009, an Ohio State University study warned that more Americans, rather than seeking to be informed by their news outlets, were instead increasingly flocking to news that reinforced their own prejudices. Conservatives headed to Fox News; liberals, to CNN and MSNBC.

Meanwhile, during Obama’s two terms in office, the Pew Research Center noticed something else happening to journalism: tech companies were increasingly becoming involved in their operations:

In 2013, the business of journalism saw another twist in its digital evolution: An influx of new money – and interest – from the tech world.

 At this point, professional newsgathering is still largely supported by advertising directed to such legacy platforms as print and television and, secondarily, by audience revenues (mostly subscriptions). But other ways of paying for news are becoming more visible. Much of the momentum is around this high-profile interest from the tech world, in the form of venture capital and individual and corporate investments, which bring with them different skill sets and approaches to journalism.

But as the tech companies continued to invest, and more advertising shifted to the digital world, news outlets found themselves “competing for an increasingly smaller share of those dollars.” As a result, even as news outlets fought a battle against an increasingly partisan readership, they were also fighting a war on a different front: namely, against the impact and influence of technology companies and social media platforms. According to the conclusions in Pew’s 2016 report:

  • It has been evident for several years that the financial realities of the web are not friendly to news entities, whether legacy or digital only. There is money being made on the web, just not by news organizations. 
  • Increasingly, the data suggest that the impact these technology companies are having on the business of journalism goes far beyond the financial side, to the very core elements of the news industry itself. In the predigital era, journalism organizations largely controlled the news products and services from beginning to end, including original reporting; writing and production; packaging and delivery; audience experience; and editorial selection. Over time, technology companies like Facebook and Apple have become an integral, if not dominant player in most of these arenas, supplanting the choices and aims of news outlets with their own choices and goals. 
  • The ties that now bind these tech companies to publishers began in many ways as lifelines for news organizations struggling to find their way in a new world. First tech companies created new pathways for distribution, in the form of search engines and email. The next industry overlap involved the financial model, with the creation of ad networks and app stores, followed by developments that impact audience engagement … Now, the recent accusations regarding Facebook editors’ possible involvement in “trending topics” selections have shined a spotlight on technology companies’ integral role in the editorial process.

Meanwhile, as the end of Obama’s second term neared, the lines of partisan perception had been drawn: CNN, MSNBC, The New York Times, The Washington Post, and many other news outlets were firmly perceived as “liberal.” Fox News, Breitbart, The New York Post and the Wall Street Journal were counted as “conservative.”

By June 16, 2015, when Trump descended an escalator in Trump Tower to announce his candidacy for the 2016 presidential election, the opposing forces of technology companies and news outlets were about to combust.

Things go boom

You know what happened next:

  • The bot attacks from Russia that manipulated social media, influenced the very facts Americans used to vote, and continue to be a clear and present danger to American democracy.
  • Fox News effectively becoming a part of the Trump campaign. Indeed, Fox News, Trump’s news outlet of choice during his campaign, stopped using its motto “fair and balanced” in August 2016 – one month after Trump was confirmed as the Republican presidential nominee. In June 2017, five months after Trump’s inauguration, Fox News officially dropped the motto.

Meanwhile, we’ve seen new phrases that shatter our understanding of once-simple concepts such as truth or facts. Trump advisor Kellyanne Conway coined the phrase “alternative facts” on Jan. 21, 2017, in an effort to defend false estimates by then White House press secretary Sean Spicer on the size of the crowd gathered in Washington for Trump’s inauguration.

And, of course, there is “fake news,” which was popularized by Hillary Rodham Clinton in a speech on Dec. 8, 2016. While refuting the Pizzagate conspiracy theory, she noted “the epidemic of malicious fake news and false propaganda that flooded social media over the past year. It’s now clear that so-called fake news can have real-world consequences. This isn’t about politics or partisanship. Lives are at risk… lives of ordinary people just trying to go about their days, to do their jobs, contribute to their communities.”

Less than a month later, then-President-elect Trump made use of the phrase during a press conference in which he said “you’re fake news” to CNN reporter Jim Acosta. A day or so later, on Jan. 11, 2017, he tweeted (in all caps) about various news investigations into his political and business dealings, “FAKE NEWS – A TOTAL POLITICAL WITCH HUNT!” Since then, he has made “fake news” a standard response to virtually any news report that casts him in a negative light.

And that brings us to today, when Americans are increasingly growing wary of the role social media platforms play in delivering the news… particularly when those platforms have been shown to be receptive to inflammatory points of view. The current ad boycott of Facebook is but one example.

The reach of technology companies, political partisanship, and social media platforms has had a definite impact on modern journalism. Unlike previous technological advances such as radio and television, digital technology’s influence on news outlets has changed the very nature of journalism, as well as how journalism is perceived. Indeed, even as more than half of Americans get their news from social media, a Pew Research Center study shows that:

  • Almost all Americans – about nine-in-ten (88%) – recognize that social media companies have at least some control over the mix of news people see.
  • About six-in-ten (62%) say social media companies have too much control over the mix of news that people see on their sites, roughly four times as many as say that they don’t have enough control (15%).
  • Just 21% say that social media companies have the right amount of control over the news people see.
  • While social media companies say these efforts are meant to make the news experience on their sites better for everyone, most Americans think they just make things worse. A majority (55%) say that the role social media companies play in delivering the news on their sites results in a worse mix of news.
  • About eight-in-ten U.S. adults (82%) say social media sites treat some news organizations differently than others.
  • As large majorities say that the tone of American political debate has become more negative in recent years, about a third of U.S. adults (35%) say that uncivil discussions about the news are a very big problem when it comes to news on social media. Additionally, about a quarter (27%) say that the harassment of journalists is a very big problem associated with news on social media.

The questions that remain are simple:

  • Will we continue to tolerate the outsized influence of technology companies on journalism?
  • Should social media companies be legally required to police the accuracy of their content?
  • Should social media platforms ban all political advertising?
  • Will Americans see the value of paying a fee for news content, as long as the content is objective and of high quality?
  • And, finally, will Americans see past their own prejudices to force news outlets to present news that is not only factually accurate, but free from bias?

Time will tell.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.

 

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Image credits: (1) iStock, by Getty Images (cover); (2) Will Francis; (3) CBS; (4) Shutterstock; (5) CNN‘s Twitter feed.


DIGITAL PRIVACY, PART TWO: WHAT CAN WE DO ABOUT OUR DATA’S PRIVACY?

As I indicated in Part One of these reports on digital privacy, digital tools such as facial recognition are used for many beneficial purposes. However, as I demonstrated, those tools are also extremely easy to abuse, particularly in the hands of governments and the law enforcement community.

One film in the blockbuster Marvel Cinematic Universe series demonstrates the threat quite capably.

Reel life reflecting real life

In “Captain America: The Winter Soldier,” Steve Rogers, the titular super-soldier, finds himself in a race against time to stop a deadly conspiracy that is fueled by abuse of digital surveillance. It’s discovered that the government security agency SHIELD has been infiltrated by a terrorist group known as Hydra. As Hydra scientist Arnim Zola explains, “Hydra was founded on the belief that humanity could not be trusted with its own freedom. What we did not realize is that if you try to take that freedom, they resist. (World War II) taught us much. Humanity needed to surrender its freedom willingly. After the war … the new Hydra grew. For 70 years, Hydra has been secretly feeding crises, reaping war. … Hydra created a world so chaotic that humanity is finally ready to sacrifice its freedom to gain its security.”

Hydra infiltrator Jasper Sitwell explains how digital information is being used to determine the targets for the imminent lethal uprising. “The 21st century is a digital book. Zola taught Hydra how to read it. Your bank records, medical histories, voting patterns, emails, phone calls, your damn SAT scores. Zola’s algorithm evaluates people’s past to predict their future. … And then the Insight helicarriers [heavily armed aerial transports] scratch people off the list a few million at a time.”

Yeah, that’s a frightening scenario: “Big Brother” writ large. Depending on your age and education, you might wonder what the hit CBS television show has to do with digital privacy. After all, the reality TV show is designed for entertainment. But the phrase “Big Brother” debuted in George Orwell’s 1949 novel “1984,” in which a totalitarian government maintains control through constant electronic surveillance of its citizens. Today, the phrase “Big Brother” is “a synonym for abuse of government power, particularly in respect to civil liberties, often specifically related to mass surveillance.”

And, as I demonstrated in the previous essay, digital information, particularly facial recognition, can easily be misused and abused … as demonstrated by these most recent examples, which were made public after the last essay was published:

  • In Michigan, Robert Williams, a Black man, was arrested by Detroit police in his driveway. Police thought Williams was a suspect in a shoplifting case. However, the inciting factor for the arrest was a facial recognition scan, which had incorrectly suggested that Williams was the suspect. And while the charges were later dropped, the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” says the American Civil Liberties Union, which has filed a complaint with the Detroit Police Department.
  • In May, Harrisburg University announced that two of its professors and a graduate student had “developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal. With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face.” On June 23, over 1,500 academics condemned the research paper in a public letter. In response, Springer Nature will not be publishing the research, which the academics blasted as having been “based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The academics also warn that it is not possible to predict criminal activity without racial bias, “because the category of ‘criminality’ itself is racially biased.”

Today, we’ll explore another aspect of digital privacy — namely, how much of a threat your own digital footprint can pose to your security. Because, as has become readily apparent, humanity doesn’t need “to surrender its freedom willingly.” It’s already done it.

I always feel like somebody’s watching me

Not too long ago, my wife and I were relaxing. I was reading a book; she was watching television. She asked me a question about the drug commercial that had just aired, and we laughed while discussing how the “minor” side effects of the drug didn’t sound all that minor. In the midst of our laughter, her phone started talking, telling us all about the drug.

She hadn’t touched the phone, which was sitting beside her on the couch.

I said, “Skynet is real.” She picked up the phone, put it into sleep mode, and set it back down, all without directly looking at it. And then, of course, for the next week or so, her supposed interest in the drug influenced what kind of ads popped up on her phone and laptop.


The thing is, both of us have voice recognition software on our phones. With it, we can instruct our phones to open a specific app, call a person, search Google for information, and various other tasks.  It just never occurred to us that the phones could listen in without our specifically activating the voice-recognition software.

But we’re not the only ones creeped out by our devices’ antics. A few years ago, users of Amazon’s Alexa reported that the AI assistant would abruptly laugh for no apparent reason. Sometimes, the laughter would come in response to a user query. Other times, the user would be sitting silently when Alexa would suddenly chuckle. The laughter disturbed a lot of Alexa users. One user tweeted, “Lying in bed about to fall asleep when Alexa on my Amazon Echo Dot lets out a very loud and creepy laugh… there’s a good chance I get murdered tonight.”

Admittedly, we know that it’s highly unlikely that our phones, or Alexa, are going to pick up a weapon and come after us. While some say the age of Skynet is inching ever closer, most of us realize that it’s not all that likely that all of our machines are going to rise up and wipe us out … the repeated attempts of GPS notwithstanding. (Ever had your GPS tell you to “Turn right” … while you’re in the middle of a bridge over a very wide and deep river? I have!)

They say it’s not being paranoid if people really are out to get you. And they are out to get you … or your data, anyway. Corporations want your data, which fuels their marketing; social media platforms want your data, which fuels their interconnectivity, as well as the demographic data they can use for targeting their ads; the government wants your data, to fuel their research, voting, and criminal justice databases; and hackers want your data so they can steal your money.

The ACLU breaks down its concerns over privacy and technology into the categories of Internet privacy; cybersecurity; location tracking; privacy at borders and checkpoints; medical and genetic privacy; consumer privacy; and workplace privacy.

Internet privacy

In 2019, the Pew Research Center published the results of a survey in which a majority of Americans admitted their belief that their activities — both online and offline — were being monitored by the government and companies. “Roughly six-in-ten U.S. adults say they do not think it is possible to go through daily life without having data collected about them by companies or the government,” the report warned.

Although the report acknowledges that “data-driven products and services are often marketed with the potential to save users time and money or even lead to better health and well-being,” 81 percent of respondents said that “the potential risks they face because of data collection by companies outweigh the benefits, and 66% say the same about government data collection. At the same time, a majority of Americans report being concerned about the way their data is being used by companies (79%) or the government (64%). Most also feel they have little or no control over how these entities use their personal information.”

There are various aspects of Internet privacy:

  • Consumer Online Privacy: One of the concerns that many consumers have is how their data is collected online, and what happens to it afterward. Have you ever Googled a retail website, only to have ads for that website start popping up shortly afterward in the margins of whatever other website you’re looking at? Your Internet Service Provider likely sold your information, or the website left “cookies” on your computer that allowed it to target you for the ads (see the sketch after this list). Because ISPs are in the perfect position to see everything we do online, Maine regulates how they operate by requiring that they gain permission from users before using their data. Others are using Virtual Private Networks to put barriers between their devices and the ever-watching eyes of the ISPs. Maine is an outlier, though, and VPNs can be inconvenient and costly.
  • Social Networking Privacy: When you’re not at work, you would think that what you post on social media is your own private business. But increasingly, employers, schools and the federal government are requiring access to our digital lives. U.S. border enforcement agents are demanding that travelers unlock their devices and provide passwords. Schools are utilizing services that allow them to access students’ devices and social media accounts. The concern about overreach has become so widespread that some states have taken steps to prevent employers from researching the habits and postings of job applicants on social media, or from requiring that employees surrender passwords to their accounts.
  • Cell Phone Privacy: You know from movies, television shows and the news that your digital devices also act as reliable tracking devices. Indeed, recent events alone have shown that the tech can track stolen devices; allow advocacy and voting rights groups to track the movements of protesters (and communicate with them); and allow companies like Venntel to collect and then sell data from citizens’ phones to government agencies, which can lead to warrantless tracking of their activities.
  • Email Privacy: One of the biggest stories in 2016 was the hacking of emails that belonged to the Democratic National Committee and then-presidential candidate Hillary Clinton. Today, concerns remain over just how private our emails really are, creating opportunities for services that give you more control. Mozilla is now offering Firefox Relay, which effectively acts as call forwarding in the form of email aliases that are connected to your real account but don’t give others access to it. Google is also offering new features to make Gmail safer.
  • Cybersecurity: As our reliance on our digital devices continues to grow, and the technology that connects those devices continues to improve, it can be argued that we are increasing the opportunities for hackers to exploit that same dependence. As the prevalence of hacker attacks grows, so does the need to protect computers, databases, electronic systems, mobile devices, networks and servers. A recent poll showed that most companies’ electronic security breaches were the result of poorly planned security infrastructure. Yet, even though U.S. business losses in cybersecurity attacks averaged $1.41 million in 2018, with over 68 million sensitive records exposed in 2019, the United States faces a coming shortage of cybersecurity experts. At a time when 5G is fast becoming the go-to resource for connectivity, its expansion will also mean a massive expansion of internet-connected devices, raising the stakes for cybersecurity even higher. One such concern, from Consumer Watchdog, is that “all the top 2020 cars have Internet connections to safety critical systems that leave them vulnerable to fleet wide hacks,” which could lead to “a 9-11 scale catastrophe.”
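To make the cookie mechanism in the first bullet concrete, here is a minimal sketch of how a first-party tracking cookie works, written with the Flask web framework. It illustrates the general technique only; the route, cookie, and variable names are invented, not any real ad network’s code.

```python
# A minimal sketch of a tracking cookie, using Flask (pip install flask).
# All names here are invented for illustration.
import uuid
from flask import Flask, request, make_response

app = Flask(__name__)
visits = {}  # visitor_id -> pages seen (a real tracker stores far more)

@app.route("/<path:page>")
def serve(page):
    # Reuse the ID the browser sent back, or mint one for a new visitor.
    visitor_id = request.cookies.get("visitor_id") or str(uuid.uuid4())
    visits.setdefault(visitor_id, []).append(page)

    resp = make_response(f"You are visitor {visitor_id}")
    # The Set-Cookie header is what lets the server recognize this browser
    # on every future request -- the basis of cross-visit ad targeting.
    resp.set_cookie("visitor_id", visitor_id, max_age=60 * 60 * 24 * 365)
    return resp

if __name__ == "__main__":
    app.run()
```

Third-party trackers work the same way, except the cookie is set by an ad server whose code is embedded on many different sites, which is what lets a single company recognize your browser across much of the web.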

Location Tracking

The very nature of a cell phone requires that the device be tracked from tower to tower to maintain the integrity of calls. Since mobile companies store that location data, the government can obtain a great deal of information about you from your movements. Similarly, the mobile company might sell that information. Because your phone’s position can be triangulated from the towers it connects to, your location data can reveal where you live, your doctor’s office, your school, your workplace, your place of worship, your friends’ homes … the list goes on.
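To see how little it takes to pin someone down, here is a minimal sketch of tower-based positioning in Python. Given rough distances to three towers, which carriers can estimate from signal timing and strength, a least-squares fit recovers the phone’s location; all coordinates and distances below are invented for illustration.

```python
# A minimal sketch of locating a phone from its distances to three towers.
import numpy as np
from scipy.optimize import least_squares

towers = np.array([[0.0, 0.0], [10.0, 0.0], [5.0, 8.0]])  # tower (x, y), in km
distances = np.array([6.4, 6.4, 4.0])                      # measured range to each, km

def residuals(point):
    # For a candidate location, how far off is each measured distance?
    return np.linalg.norm(towers - point, axis=1) - distances

fit = least_squares(residuals, x0=[4.0, 3.0])  # start from a rough guess
print("Estimated position:", fit.x)            # converges to ~(5.0, 4.0)
```

Each such fix is approximate, but a carrier logging them around the clock can reconstruct a detailed map of a subscriber’s movements over days or years.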

Back in 2013, whistleblower Edward Snowden revealed that the National Security Agency was obtaining almost 5 billion records a day from cellphones around the world. This collection effort allowed U.S. intelligence officials to track phones, the phones’ users, and map out the relationships of the phones’ users to other users and their phones. This meant that people around the world, Americans among them, were caught up in the NSA’s web, where that data was stored … all without a warrant.

However, the pace at which technology evolves means that, for every advance that can be used for oppressive purposes, a counter will shortly follow. In September 2019, protesters in Hong Kong were well aware of the fact that Chinese authorities monitored WiFi and the Internet. They turned to Bridgefy, a Bluetooth-based app that does not use the Internet and, as a result, is more difficult for the Chinese authorities to trace.

Privacy at Borders and Checkpoints

People carry a great deal of information — personal and professional — on their phones and other devices. Typically, they take steps to ensure that said information doesn’t fall into the wrong hands. For example, my phone contains my emails, as well as access to my various social media accounts. My communications with my clients are accessible via my laptop and my cell phone. As a result, I keep my devices pass-coded. However, a growing problem is that of government agents attempting to gain access to travelers’ devices without a warrant. The constitutional prohibition on unreasonable search and seizure appears to be under assault, as government agents at border crossings demand access to travelers’ devices.

Medical and Genetic Privacy

What would you do if your employer, acting upon a question from your health insurance company, asked you to submit the results of that DNA testing you had performed via Ancestry.com or 23andMe? Right now, your medical and genetic information is protected under the provisions of the 1996 Health Insurance Portability and Accountability Act (signed by President Bill Clinton) and the 2008 Genetic Information Nondiscrimination Act (signed by President George W. Bush). Meanwhile, you are protected from being denied health insurance because of pre-existing conditions by the 2010 Affordable Care Act (signed by President Barack Obama). But these laws, intended to protect Americans from predatory health care, may not be as strong as we think they are:

  • The ACA, which is more commonly known as “Obamacare,” has been under relentless attack by the Trump administration. Indeed, during the writing of this report, the Trump administration has asked the U.S. Supreme Court to “invalidate” the ACA. If it falls, insurance companies could again deny coverage to those with pre-existing conditions. It should be noted that a 2017 U.S. Department of Health and Human Services analysis estimated that between 61 million and 133 million Americans have a preexisting condition.
  • In 2017, a committee in the Republican-controlled House of Representatives approved HR 1313, “a bill that would let companies make employees get genetic testing and share that information with their employer — or pay thousands of dollars as a penalty.” The bill died from inaction after Democrats won the House in the 2018 midterm elections.
  • In 2018, police in California identified a serial killer by matching his DNA with DNA from members of his family who had signed up for a genealogy website called GEDmatch. Since then, questions have been raised about the accessibility of such data to law enforcement agencies.
  • Consumer genetics testing companies frequently sell your data, often to pharmaceutical companies. This is a modern spin on the case of Henrietta Lacks, a poor Black woman who was treated for cervical cancer in 1951. Researchers discovered that her cells were incredibly resilient, and where other cancer patients’ cell samples would die, Lacks’ cells would continue to live and thrive. Without Lacks’ permission, researchers used her cells to conduct research into “the effects of toxins, drugs, hormones and viruses on the growth of cancer cells without experimenting on humans. They have been used to test the effects of radiation and poisons, to study the human genome, to learn more about how viruses work, and played a crucial role in the development of the polio vaccine.” Although the discovery was a boon to the worlds of science and medicine, the fact remains that Lacks’ medical and genetic information was used without her permission in what is now an industry worth more than $1 billion annually, as of 2019.
  • The GINA law bans employers and health care companies from using genetic data to deny you coverage or employment. However, “companies with fewer than 15 people are exempt from this rule, as are life insurance, disability insurance, and long-term care insurance companies — all of which can request genetic testing as part of their application process.”
  • What happens if companies make your medical or genetic information part of the interview process? Granted, some state and federal laws protect against genetic discrimination, but those laws do not cover everything.

Consumer Privacy

I mentioned earlier that companies want your data for their marketing. Indeed, it’s been shown that Facebook, for example, views its users less as people and more as products to be sold to advertisers. A Vermont law forced some transparency on companies that sell our information, but very little is actually known about who has access to our information, and what happens to it after it is sold. California has taken steps to address that lack of knowledge, but it is too early to see what kind of impact the law is having.


Workplace Privacy

When you’re at work, you likely don’t expect to have privacy. A 2018 study showed that 50 percent of companies monitor their workers’ emails and social media accounts, “along with who they met with and how they utilized their workspaces.” Fast forward a year, and 62 percent of companies were “leveraging new tools to collect data on their employees.” And even in the current work-from-home reality of COVID-19, employers are still keeping an eye on their workers. Random screenshots of employees’ screens show what they’re actually doing, and “monitoring software can track keystrokes, email, file transfers, applications used and how much time the employee spends on each task.” Also, if you ever use your personal phone for business purposes, you could be risking all of your personal data should your employment be terminated.

A hopeless situation?

Earlier, I mentioned a 2019 Pew Research Center study showing that “roughly six-in-ten U.S. adults” believed they were being monitored by the government and companies, and that they “do not think it is possible to go through daily life without having data collected about them by companies or the government.” The report also noted that most Americans feel “they have little or no control over how these entities use their personal information.”

With that information as a backdrop, it is important to realize that what may seem like a hopeless situation actually is an opportunity. Granted, U.S. laws on privacy have fallen behind the pace of technology, and it has been shown that those in charge of regulating technology and social media platforms often do not understand the very technologies they’ve been charged with monitoring. As a 2018 Brookings Institution paper warns:

“This is where we are with data privacy in America today. More and more data about each of us is being generated faster and faster from more and more devices, and we can’t keep up. It’s a losing game both for individuals and for our legal system. If we don’t change the rules of the game soon, it will turn into a losing game for our economy and society.”

Indeed, news reports seem to bear this out. From the Snowden revelations in 2013, to the 2017 Equifax breaches that exposed the data of nearly 146 million Americans, to the 2018 Cambridge Analytica scandal and beyond, more and more attention is being paid to the subject of data privacy. But that just means that we’re more aware. What can we actually do about it?

Recommendations

First of all, I strongly recommend that you make use of the one aspect of your digital data that you do control — namely, your device privacy settings. Some of the data sharing our devices perform is within our ability to control. However, it is important to remember that many aspects of it are not.

Second, take the time to actually read the privacy policies of the digital devices and online services that you use. As the Brookings Institution has noted, perhaps the concept of “informed consent was practical two decades ago, but it is a fantasy today. In a constant stream of online interactions, especially on the small screens that now account for the majority of usage, it is unrealistic to read through privacy policies. And people simply don’t.” The fact that so many people don’t bother to read those policies effectively makes them complicit when their own data is used against them.

Third, take a closer look at the various apps you keep on your devices. How much access do those apps have to your private information? For example, do you really need to grant “Candy Crush” access to your microphone, camera, and location?

And then, of course, there are the changes you could demand from digital service companies.

Pitch these ideas to lawmakers, activists, and journalists to force companies to discuss them:

  • Demand that companies establish easier ways to manage your devices’ privacy settings.
  • Demand that companies be transparent about how they store, and whether they share, your information.
  • Demand “real name” requirements for social media, in which accounts can only be opened with a photocopy of a government-issued ID card. Admittedly, there would be a loss of privacy here, but it would lead to a decrease in the frequent “mob” mentality we see online, and an increase in the accountability of account users for their content.
  • Demand a standardized form governing whether companies may sell or share your personal data.
  • Question lawmakers to ensure that they understand the technologies they’re attempting to regulate.
  • Demand that, if someone is cleared of criminal charges, any biometric data gathered as a result of the arrest be deleted within 30 days.
  • Demand greater scrutiny of digital applications coming from foreign countries that have a history of intellectual property theft.
  • Demand that lawmakers reveal any financial ties to technology companies.
  • Demand that technology companies create more secure privacy settings for minors using social media.

Admittedly, some of the above suggestions may be long shots, particularly given how much money technology companies and their lobbyists have at their disposal. Taking back control of our digital data would seem an impossible task.

However, to put it into a historical context: Women’s right to vote, the end of slavery, and same-sex marriage were once considered impossible tasks, too.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.

 

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Image credit: iStock, by Getty Images


DIGITAL PRIVACY, PART ONE: THE DANGERS OF FACIAL RECOGNITION

There’s been a lot of talk in recent weeks regarding facial recognition technology. Much of the conversation has centered on privacy concerns. Other aspects concern the technical flaws in the software, which impact the technology’s accuracy. Still others center on the demonstrated gender and racial biases of such systems, and the potential of governments and police forces using facial recognition to weaponize racial bias.

Indeed, the media has been following the conversations. Reports have dealt with China’s current use of facial recognition in its crackdown on a minority group; the questionable accuracy of the technology itself, particularly when involving people of color; and, of course, the intersection of privacy, law enforcement and racial bias when U.S. agencies and local police forces use facial recognition technologies.

A few other examples:

  • Concern that PimEyes, which identifies itself as a tool to help prevent the abuse of people’s private images, could instead “enable state surveillance, commercial monitoring and even stalking on a scale previously unimaginable.”
  • Concern that Clearview AI’s facial recognition system could easily be abused, as the app’s database was assembled by “scraping” pictures from social media, enabling the company to access your name, address and other details — all without your permission. The app, although not available to the public, is being “used by hundreds of law enforcement agencies in the U.S., including the FBI.” In May, Clearview AI announced that it would cease selling its software to private companies.
  • In response to the mask-related laws connected to the spread of COVID-19, tech companies have been attempting to update their facial recognition software so that it still works even when the subject of the scan is wearing a face mask.
  • Business Insider, Wired, U.S. News & World Report, Popular Mechanics, the Guardian, and the Washington Post have all published reports on ways to defeat facial recognition systems.
  • IBM’s announcement, in a letter to Congress, that “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
  • Amazon’s announcement that they are “implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”
  • Microsoft President Brad Smith confirmed that the company “will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
  • Other tech companies — NEC and Clearview AI among them — restated their commitment to providing facial recognition technology to police departments and governmental agencies.

So, yes, people are talking about facial recognition technology. And as the conversation grows, more people and corporations are joining in. MTN Consulting, like Amazon, IBM, Microsoft, and others, is expressing alarm at how the technology is used and, in a growing number of instances, abused.

Oddly, many people don’t know a great deal about the technology, such as how it works, how accurate it is, or how much of a threat it poses.

Let’s explore:

What is facial recognition?

According to the Electronic Frontier Foundation, facial recognition “is a method of identifying or verifying the identity of an individual using their face. Facial recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops.”

How does it work?

According to Norton, a picture of your face is saved from a video or photograph. The software then looks at the way your face is constructed. In other words, it “reads the geometry of your face. Key factors include the distance between your eyes and the distance from forehead to chin. The software identifies facial landmarks — one system identifies 68 of them — that are key to distinguishing your face. The result: your facial signature.”

Next, your facial signature “is compared to a database of known faces. And consider this: at least 117 million Americans have images of their faces in one or more police databases. According to a May 2018 report, the FBI has had access to 412 million facial images for searches.”

Finally, the system determines whether your face matches any of the other stored images.
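To make that pipeline concrete, here is a minimal sketch using the open-source face_recognition Python library. It illustrates the general technique only; it is not the software used by any vendor or agency discussed in this report, and the image filenames are placeholders.

```python
# A minimal sketch of the matching pipeline described above, using the
# open-source face_recognition library (pip install face_recognition).
# The filenames are placeholders, not real data.
import face_recognition

# Step 1: derive a "facial signature" -- a 128-number encoding of the
# geometry of the face found in the image.
probe_image = face_recognition.load_image_file("unknown_person.jpg")
probe_encoding = face_recognition.face_encodings(probe_image)[0]

# Step 2: build the database of known signatures (one entry here for brevity).
known_image = face_recognition.load_image_file("known_person.jpg")
known_encoding = face_recognition.face_encodings(known_image)[0]

# Step 3: compare. Smaller distance means more similar faces; the library
# treats distances under 0.6 as a match by default.
distance = face_recognition.face_distance([known_encoding], probe_encoding)[0]
is_match = face_recognition.compare_faces([known_encoding], probe_encoding)[0]
print(f"distance = {distance:.3f}, match = {is_match}")
```

In a real deployment, step 2 is a database of millions of signatures, and the system returns every stored face whose distance to the probe falls under the threshold.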

How is it used?

Facial recognition has many uses. For example, the 2002 film “Minority Report” imagined the potential outcomes of the technology. In the film, when the main character, played by Tom Cruise, enters a mall, he is inundated by personalized greetings and advertising, all holographic, and all keyed to his facial scan – particularly, his eyes. Later, he enters the subway system, and facial recognition is again used, this time in lieu of immediate payment or carrying identification.

“Minority Report,” in its own way, was prescient in its prediction that facial recognition software would be everywhere, although it primarily addressed the commercial applications. In real life, however, the technology is used by both corporations and governments. A few examples:

  • The Moscow Times recently reported that Russia plans to equip more than 43,000 Russian schools with facial recognition. The 2 billion ruble ($25.4 million) project, named “Orwell,” will “ensure children’s safety by monitoring their movements and identifying outsiders on the premises,” said Yevgeny Lapshev, a spokesman for Elvees Neotech, a subsidiary of the state-controlled technology company Rusnano. According to Vedomosti, a Russian-language business daily, “Orwell” has already been installed in more than 1,608 schools.
  • Mobile phones are sold with facial recognition software that is used to unlock the phone, replacing the need for a password or PIN. Many companies – including Apple, Guangdong OPPO, Huawei, LG, Motorola, OnePlus and Samsung — offer phones with this technology.
  • As for laptops, Apple is lagging behind other manufacturers at the moment. The company recently announced that it is planning to add facial recognition software to its MacBook Pro laptop and iMac screen lines. Meanwhile, Acer, Asus, Dell, HP, Lenovo, and Microsoft have offered the technology in their laptops for years.

There are, of course, many other ways in which the technology is used:

  • The U.S. government uses it at airports to monitor passengers.
  • Some colleges use it to monitor classrooms, as it can be used for security purposes, as well as something simpler like taking roll.
  • Facebook uses it to identify faces when photos are uploaded to its platform, so as to offer members the opportunity to “tag” people in the photos.
  • Some companies have eschewed security badges and identification cards in favor of facial recognition systems.
  • Some churches use it to monitor who attends services and events.
  • Retailers use surveillance cameras and facial recognition to identify regular shoppers and potential shoplifters. (“Minority Report,” anyone?)
  • Some airline companies scan your face while your ticket is being scanned at the departure gate.
  • Marketers and advertisers use it at events such as concerts. It allows them to target consumers by gender, age, and ethnicity.

So, what’s the concern?

Well, there are three main concerns: privacy, accuracy, and governmental abuse. There is, however, a strong thread of racism that runs through all three.

Privacy

Although using a facial scan to gain access to your phone is more secure than, say, a short password, it isn’t perfect. There are some concerns about how and where the data is stored.

Admittedly, many people use facial recognition systems for fun. Specialized apps designed for, or that offer, the technology include B612, Cupace, FaceApp, Face Swap (by Microsoft), and Snapchat. The apps permit you to scan your face, and swap it with, say, that of a friend or film star.

The easy accessibility of such apps is a boon for those who would use them. However, the very popularity of the apps gives rise to certain questions. For example, if the company stores the facial images in the cloud, how good is the security? How accessible is the data to third parties? Does the company ever sell that data to other companies? A simple leak of data, or a more aggressive hacking of the database, could result in many people’s data being compromised.

Another privacy aspect involves monitoring people without their knowledge or consent. People going about their daily business don’t typically expect to be monitored … but there are exceptions, depending on where you live. Last year, China was accused of human rights abuses in Xinjiang, a region that is home to millions of members of the mostly Muslim ethnic group known as Uighurs. The New York Times reported on how the government used facial recognition systems to identify Uighurs, hundreds of thousands of whom were then seized and imprisoned in clandestine camps. Millions of others are monitored daily to track their activities.

In the U.S., reports circulated that some police departments were using technology developed by Clearview AI. The startup had scraped billions of photos from social media accounts in order to assemble a massive database that law enforcement officials could access – all without people’s consent. In other words, any photos that you’ve posted on Snapchat, Twitter, Facebook, Instagram, or any other social media platform could be part of the database without your knowledge. The only way you would find out is if the police connect your face to a crime and come knocking on your door.

Indeed, Clearview AI has raised the ire of the American Civil Liberties Union, the European Data Protection Board, members of the U.S. Senate, as well as provincial and federal watchdogs in Canada.

Admittedly, some will argue that, although the collection of the data is likely an invasion of people’s privacy, the data itself is useful to assist law enforcement. Granted, that interpretation is subjective, but relevant to the argument at hand. However, it also assumes two things: that people being surveilled by the police are suspects; and that the technology is accurate.

In both cases, however, the reverse is often true. And because of that, innocent people can be surveilled without their knowledge or consent; the wrong people can end up arrested, tried and convicted for crimes they didn’t commit; and racial bias can be weaponized. More on that latter point in a bit.

Accuracy

In December 2019, researchers at Kneron decided to put facial recognition to the test. Using images of other people — in the form of 2-D photos, images stored on cell phones, and 3-D printed masks — they managed to penetrate security at various locations. Although most sites weren’t fooled by the 2-D image or video copies, the 3-D mask sailed through most of the scans, including at a high-speed rail station in China and at point-of-sale terminals. Worse, the team was able to pass through a self-check-in terminal at Schiphol Airport, one of Europe’s three busiest airports, with a picture saved on a cell phone. They were also able to unlock at least one popular cell phone model.

So, we know that the face-matching aspect of facial recognition can be fooled. Granted, one might argue that using a 3-D printer isn’t that common an occurrence. However, given that worldwide sales of 3-D printers generated $11.58 billion in 2019; that 1.42 million units were sold in 2018; and that annual global sales are expected to hit 8.04 million units by 2027, it can be safely assumed that 3-D masks pose a risk to facial recognition systems.

Still, obvious attempts to beat the system notwithstanding, there’s an even deeper concern regarding facial recognition — the face-matching aspect of the software isn’t always that accurate, and it has shown a demonstrated bias against women and people of color:

  • In 2018, the ACLU used Amazon’s facial recognition tech to scan the faces of members of Congress. Amazon’s “Rekognition” tool “incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime. The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.”
  • The FBI admitted in October 2019 that its facial recognition database “may not be sufficiently reliable to accurately locate other photos of the same identity, resulting in an increased percentage of misidentifications.”
  • In the United Kingdom, police departments use facial recognition systems that generate results with an error rate as high as 98 percent. In other words, for every 100 people identified as suspects, 98 of them were not, in fact, actual suspects.
  • In June 2019, a problem with a Chinese company’s facial recognition system went viral after one employee’s facial scan, used to clock into and out of work, “kept matching (the) employee’s face to his colleagues, both male and female. People started joking that the man must have one of those faces that looks way too common.”
  • Back in January, Robert Williams, a Black man, was arrested by Detroit police in his driveway. He then spent over 24 hours in a “crowded and filthy cell,” according to his attorneys. Police thought Williams was a suspect in a shoplifting case. However, the inciting factor for the arrest was a facial recognition scan, which had incorrectly suggested that Williams was the suspect. And while the charges were later dropped, the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” says the American Civil Liberties Union, which has filed a complaint with the Detroit Police Department. “Study after study has confirmed that face recognition technology is flawed and biased, with significantly higher error rates when used against people of color and women. And we have long warned that one false match can lead to an interrogation, arrest, and, especially for Black men like Robert, even a deadly police encounter. Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about,” the ACLU warns.
  • In May, Harrisburg University announced that two of its professors and a graduate student had “developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal. With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face.” On June 23, over 1,500 academics condemned the research paper in a public letter. In response, Springer Nature will not be publishing the research, which the academics blasted as having been “based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The academics also warn that it is not possible to predict criminal activity without racial bias, “because the category of ‘criminality’ itself is racially biased.”

As I indicated earlier, racism runs through the entire argument surrounding facial recognition. It’s not just that the technology can be used in a discriminatory manner (more on that later). It is also that the scan results themselves can show bias against women and people of color.


In 2012, a joint university study that was co-authored by the FBI showed that the accuracy of facial recognition scans was lower for African Americans than for other demographics. The software also misidentifies “other ethnic minorities, young people, and women at higher rates.” The fact that more recent studies, including some as recent as last year, show these same problems indicates that the bias is known, and yet is still not being addressed.

Another joint university study, this one published in 2019, found that the facial recognition software used by Amazon, IBM, Kairos, Megvii, and Microsoft was significantly less accurate when identifying women and people of color. Among the findings: Kairos’ and Amazon’s software performed better on male faces than female faces; both performed much better on light-skinned faces than on darker faces; both performed worst on dark-skinned women, with Kairos showing an error rate of 22.5 percent and Amazon an error rate of 31.4 percent; and neither company’s software showed any measurable error rate for lighter-skinned men.

In December 2019, a National Institute of Standards and Technology study presented the results of testing 189 facial recognition algorithms from 99 companies. The study found that the majority of the software had some form of bias. Indeed, among the broad findings:

  • One-to-one matching revealed higher error rates for “Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
  • Among U.S.-made software, “there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups (which include Native American, American Indian, Alaskan Indian and Pacific Islanders). The American Indian demographic had the highest rates of false positives.”
  • For software made in Asian countries doing one-to-one matching, there was no dramatic difference in false positives for Asian and Caucasian faces.
  • “For one-to-many matching, the team saw higher rates of false positives for African American females. Differentials in false positives in one-to-many matching are particularly important because the consequences could include false accusations.”
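That last point, on one-to-many matching, is worth making concrete: every additional face in a database is another chance for a coincidental match, so even a highly accurate algorithm will generate false positives once the gallery is large. The Python sketch below is a back-of-the-envelope illustration; the per-comparison false-match rate is an assumed figure, not a measured one.

```python
# Back-of-the-envelope: why one-to-many searches produce false positives.
# ASSUMPTION for illustration: the matcher falsely matches two different
# people once in every 100,000 comparisons (a hypothetical, optimistic rate).
p = 1e-5

for gallery_size in (1, 1_000, 100_000, 1_000_000):
    # Probability that at least one gallery face falsely matches the probe.
    at_least_one = 1 - (1 - p) ** gallery_size
    print(f"gallery of {gallery_size:>9,} faces: "
          f"P(at least one false match) = {at_least_one:.2%}")
```

Against a single photo, the false-match risk is negligible; against a mugshot database of a million faces, a false match becomes a near-certainty. That is why the NIST finding on one-to-many differentials matters so much for policing.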

As we discussed earlier, three of America’s top technology companies recently announced that they would temporarily halt, or end altogether, the sale of facial recognition technology to police departments. The announcement by Amazon, IBM and Microsoft surprised police departments, market analysts and journalists for a specific reason: those particular companies had previously shown no real interest in what advocates for racial justice and civil rights had to say.

Although such advocates have complained for years about the threat posed to their communities by mass surveillance, and corporate complicity in that surveillance, it wasn’t until nationwide protests against police brutality and systemic racism that America’s top tech companies began to listen. As we’ve already determined, facial recognition is not all that accurate when dealing with people who are not White men. Even low error rates can result in mistaken arrests. And, as there is a demonstrated police bias against people of color, as shown in arrest rates, the idea of such technology being abused when used against “suspects” of color is not so unbelievable.

In a March 2017 hearing of the U.S. House of Representatives’ oversight committee, ranking member Elijah Cummings warned against law enforcement’s use of facial recognition software. “If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” Cummings said. “That’s a hell of a combination.”

So, we know that the technology isn’t foolproof, that it discriminates against women and people of color, and that it is increasingly being used by governmental agencies and police departments.

What can this lead to?

Remember the earlier observation about China?

Governmental Abuses

Last year, PBS went undercover in China’s Xinjiang region to investigate accusations of mass surveillance and detentions of Uighurs, a mostly Muslim ethnic group. As the New York Times reported, hundreds of thousands of Uighurs have been seized and imprisoned in clandestine camps, while millions of others are monitored daily to track their activities.

In January, Amnesty International warned that, “In the hands of Russia’s already very abusive authorities, and in the total absence of transparency and accountability for such systems, the facial recognition technology is a tool which is likely to take reprisals against peaceful protest to an entirely new level.” The warning came as a Moscow court took up a case brought by a civil rights activist and a politician, who argued that Russia’s surveillance of public protests violated their right to peacefully assemble.

And, of course, we have the United States, where governmental agencies and police departments use demonstrably racially biased facial recognition software.

As the ACLU reported after Amazon, IBM and Microsoft halted or ended the sale of facial recognition technology to law enforcement agencies, “racial justice and civil rights advocates had been warning (for years) that this technology in law enforcement hands would be the end of privacy as we know it. It would supercharge police abuses, and it would be used to harm and target Black and Brown communities in particular.”

The ACLU warned that facial technology “surveillance is the most dangerous of the many new technologies available to law enforcement. And while face surveillance is a danger to all people, no matter the color of their skin, the technology is a particularly serious threat to Black people in at least three fundamental ways”:

  • The technology itself is racially biased (see above).
  • Police departments use databases of mugshots, which “recycles racial bias from the past, supercharging that bias with 21st century surveillance technology. … Since Black people are more likely to be arrested than white people for minor crimes like cannabis possession, their faces and personal data are more likely to be in mugshot databases. Therefore, the use of facial recognition technology tied into mugshot databases exacerbates racism in a criminal legal system that already disproportionately polices and criminalizes Black people.”
  • Even if the algorithms were equally accurate across race (again, see above), “government use of face surveillance technology will still be racist (because) … Black people face overwhelming disparities at every single stage of the criminal punishment system, from street-level surveillance and profiling all the way through to sentencing and conditions of confinement.”

And, indeed, fresh concerns about law enforcement’s use of facial recognition technologies have surfaced as the Black Lives Matter protests gain steam in the wake of George Floyd’s May 25th death, while unarmed, under the knee of a White police officer. The protests, which consist of American citizens exercising their First Amendment rights, have been met by heavily armored police, aerial surveillance by drones, fake cellular towers designed to capture the stored data on protesters’ phones, covert government surveillance, and threats from President Donald J. Trump.

Of course, it would be wrong to say that all police officers, or all governmental officials, are racist. It would be ludicrous, however, to say that the various systems that make up the infrastructure of the United States do not have a strong foundation that is racist in origin – particularly when it comes to law enforcement.

As the ACLU warned, “(the) White supremacist, anti-Black history of surveillance and tracking in the United States persists into the present. It merely manifests differently, justified by the government using different excuses. Today, those excuses generally fall into two categories: spying that targets political speech, too often conflated with ‘terrorism,’ and spying that targets people suspected of drug or gang involvement.” One currently relevant example is the FBI surveillance program that targets what the federal government considers to be “Black Identity Extremists” — the FBI’s way of justifying surveillance of Black Lives Matter activists, much as it kept a close watch on the Rev. Dr. Martin Luther King Jr. during the civil rights protests of the 1960s.

That some of America’s technology companies have decided, at least for now, to no longer be complicit in exacerbating racist policies is something to be applauded. However, it remains to be seen how long these changes will last, who will follow their lead … and whether any important lessons will be learned.

Time will tell.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.

 

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

 

Image credit: iStock, by Getty Images


BOTS: A CLEAR AND PRESENT DANGER

So … let’s talk about bots.

You’ve probably heard about them already … most likely connected to social media and the 2016 presidential election.

But, do you know what they are? Or what makes them so dangerous?

Let’s review:

What’s a bot?

A bot is a software program designed to perform a specific task automatically. By their nature, bots themselves are neutral. One of the things that makes them so useful is that they can be programmed to simulate human interaction. A common example is the automated customer service that many websites offer: you log in, seek customer service, and a chat window opens. The person you end up talking to may, in fact, not be a person at all.

How do they work?

Bots are designed to automatically perform tasks that a human would normally perform. For example, you could pick up your phone, open your search engine (we’ll use Google), and type in “What are bots?” Or, you could simply say, “Hey, Google … what are bots?” And your phone, thanks to the bot linked to your voice recognition software, would answer you. In many ways, bots simplify our lives. Regrettably, they increasingly also make things more complex and difficult.
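As a concrete illustration, here is a minimal keyword-matching bot of the customer-service sort described above, written in Python. The rules and canned replies are invented; real chatbots layer natural-language models on top of the same basic loop.

```python
# A minimal sketch of a keyword-based customer-service bot. The rules
# and replies are invented for illustration.
RULES = {
    "hours": "We're open 9 a.m. to 5 p.m., Monday through Friday.",
    "refund": "Refunds are processed within five business days.",
    "human": "One moment -- transferring you to a live agent.",
}

def reply(message: str) -> str:
    """Return the canned answer for the first keyword found, else a fallback."""
    text = message.lower()
    for keyword, answer in RULES.items():
        if keyword in text:
            return answer
    return "Sorry, I didn't catch that. Could you rephrase?"

if __name__ == "__main__":
    print(reply("What are your hours?"))     # store hours
    print(reply("I'd like a refund."))       # refund policy
    print(reply("Let me talk to a human."))  # handoff
```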

Why should you care?

Ever been enraged when, after waiting a long time for tickets to your favorite event to go on sale, the event sells out in mere minutes? In December 2016, President Barack Obama signed the “Better Online Ticket Sales Act,” which banned “the circumvention of control measures used by Internet ticket sellers to ensure equitable consumer access to tickets for certain events.” In other words, it banned people from using bots to scoop up huge numbers of tickets in order to resell them, usually at exorbitant rates, on secondary markets.

Unconvinced? In 2018, the Pew Research Center released a study showing that bots were making a disproportionate impact on social media. During a six-week period in the summer of 2017, the center examined 1.2 million tweets that shared URL links to determine how many of them were actually posted by bots, as opposed to people. Among the findings:

  • Sixty-six percent of all tweeted links were posted by suspected bots, which suggests that links shared by bots are actually more common than links shared by humans.
  • Sixty-six percent of links to sites dealing with news and current events were posted by suspected bots. Higher numbers were seen in the areas of adult content (90 percent), sports (76 percent), and commercial products (73 percent).
  • Eighty-nine percent of tweeted links to news aggregation sites were posted by bots.
  • The 500 most active bot accounts were responsible for 22 percent of the tweeted links to popular news and current events sites. On the human side of the equation, the 500 most active human users were responsible for only an estimated six percent of those links.

In other words, social media, which was designed by humans for use by humans, has instead become the province of bots.
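The same signals Pew measured, relentless posting and link-heavy feeds, are also the raw material of bot detection. Here is a sketch of the kind of first-pass heuristic scoring researchers start from; the thresholds, weights, and account names are invented for illustration, and production systems such as Indiana University’s Botometer rely on far richer features and trained models.

```python
# A sketch of first-pass heuristic bot scoring. Thresholds, weights, and
# account names are invented; real systems use many more features and
# trained models rather than hand-set rules.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    tweets_per_day: float
    link_share: float      # fraction of tweets that contain a URL
    account_age_days: int

def bot_score(acct: Account) -> float:
    """Return a 0-to-1 score; higher means more bot-like (heuristic only)."""
    score = 0.0
    if acct.tweets_per_day > 50:    # humans rarely sustain this pace
        score += 0.4
    if acct.link_share > 0.9:       # nearly every tweet pushes a link
        score += 0.4
    if acct.account_age_days < 30:  # brand-new accounts are higher risk
        score += 0.2
    return score

accounts = [
    Account("news_blaster_9000", tweets_per_day=400, link_share=0.98, account_age_days=12),
    Account("ordinary_human", tweets_per_day=3, link_share=0.20, account_age_days=2200),
]
for a in accounts:
    print(f"{a.name}: bot score {bot_score(a):.1f}")
```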

And then, of course, there’s always the chance that the information that you read and share on social media, the information that helps you decide how to vote, has been subtly influenced by bots designed to shift your thinking along a predetermined narrative.

In 2016, Scottie Nell Hughes, a conservative political commentator, told CNN anchor Anderson Cooper that “(the) only place that we’re hearing that Donald Trump honestly is losing is in the media or these polls. You’re not seeing it with the crowd rallies, you’re not seeing it on social media—where Donald Trump is two to three times more popular than Hillary Clinton on every social media platform.”

Trump himself touted his social media popularity during the campaign, saying during the first presidential debate that he had 30 million followers on Twitter and Facebook. That apparent popularity, in the eyes of a culture that translates “worth and fame” into support on social media, made Trump look even more like a winner among his followers.

However … what if those numbers were, in fact, a lie?

In 2016, an Oxford University study revealed that, between the first and second presidential debates, more than a third of pro-Trump tweets, and nearly a fifth of pro-Clinton tweets, came from bot-controlled accounts — a total of more than a million tweets.

The study also found:

  • During the debates, the bot accounts created up to 27 percent of all Twitter traffic related to the election.
  • By the time of the election, 81 percent of the bot-controlled tweets involved some form of Trump messaging.
  • On Election Day, as Trump’s victory became apparent, traffic from automated pro-Trump accounts abruptly stopped.

What about today?

Today, the race for the White House has begun once again, with Trump facing a challenger in former Vice President Joe Biden.

And the bots, as you might expect, are at it again. This time, however, people and social media platforms are better armed, and better prepared to fight back.

  • In November 2018, the FBI warned that “Americans should be aware that foreign actors—and Russia in particular—continue to try to influence public sentiment and voter perceptions through actions intended to sow discord. They can do this by spreading false information about political processes and candidates, lying about their own interference activities, disseminating propaganda on social media, and through other tactics.” The statement was a joint release with the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence.
  • In February 2019, a study showed that bots, including thousands based in Russia and Iran, were much more active during the 2018 midterm elections than previously thought. In nearly every state, more than a fifth of Twitter posts about the elections in the weeks before Election Day were posted by bots.
  • In 2019, Twitter detected and removed more than 26,600 bot-controlled accounts. Granted, that sounds like a lot, until you consider that the platform had more than 330 million active users at the time. Still, for Twitter, a company known for its openness as well as its reluctance to require truth and authenticity of its accounts, it was a start. Its efforts, however, are like fighting the tide with a bucket; for every bot account that is deleted, many more are already being created. The platform has also begun flagging tweets by Trump that it says glorify violence or are factually inaccurate.
  • In September 2019, a study by the University of Southern California’s Information Sciences Institute showed that “although social media service providers put increasing efforts to protect their platforms, malicious bot accounts continuously evolve to escape detection. In this work, we monitored the activity of almost 245 (thousand) accounts engaged in the Twitter political discussion during the last two U.S. voting events. We identified approximately 31 (thousand) bots. … We show that, in the 2018 midterms, bots (learned) to better mimic humans and avoid detection.”
  • Because social media platforms have a global reach, they also have a global impact. In March 2020, ProPublica revealed that, since August 2019, it had been tracking more than 10,000 Twitter accounts it suspected of being part of an influence campaign linked to the Chinese government. “Among those are the hacked accounts of users from around the world that now post propaganda and disinformation about the coronavirus outbreak, the Hong Kong protests and other topics of state interest,” the report said.
  • In May 2020, NortonLifeLock began offering BotSight, which it calls “a new tool to detect bots on Twitter in real-time” that will quantify “disinformation on Twitter, one tweet at a time.” A simplified illustration of the kind of behavioral signals such detectors weigh appears after this list.
  • On June 11, 2020, Twitter announced that it had closed down more than 170,000 accounts connected to China’s government. The accounts were designed to spread “geopolitical narratives favorable to the Communist Party of China” by disseminating misinformation about the Hong Kong protests, COVID-19, and other issues.
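Detection tools like BotSight do not publish their exact methods, and real detectors rely on machine-learned models far richer than anything shown here. Still, as a purely hypothetical sketch of the kind of behavioral signals involved, an account that posts at inhuman speed and repeats itself verbatim is a prime suspect:

    # A deliberately naive bot-likelihood score (hypothetical illustration;
    # not BotSight's or Twitter's actual method).
    from collections import Counter
    from typing import List

    def bot_score(timestamps: List[float], tweets: List[str]) -> float:
        """Rough 0.0-1.0 bot-likelihood score for a single account.

        timestamps -- posting times in seconds, ascending
        tweets     -- the text of each post
        """
        if len(tweets) < 2:
            return 0.0

        # Signal 1: median gap between posts. Humans rarely sustain
        # a post every few seconds for long stretches.
        gaps = sorted(later - earlier for earlier, later in zip(timestamps, timestamps[1:]))
        median_gap = gaps[len(gaps) // 2]
        inhuman_speed = 1.0 if median_gap < 5.0 else 0.0

        # Signal 2: share of posts that are exact duplicates of another post.
        counts = Counter(tweets)
        duplicates = sum(count - 1 for count in counts.values())
        repetition = duplicates / len(tweets)

        return min(1.0, 0.5 * inhuman_speed + 0.5 * repetition)

    # Example: 100 identical posts fired two seconds apart score near 1.0.
    print(bot_score([2.0 * i for i in range(100)], ["Share this now!"] * 100))

As the USC study above notes, modern bots deliberately vary their timing and wording to slip under exactly these kinds of thresholds, which is why detection remains an arms race rather than a solved problem.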

What can you do?

You have the facts. Now, you need to decide what to do.

Yes, some bots, such as those used in customer service, exist to make our lives easier. However, it has been shown, time and again, that they also represent a tool that can be used to damage our democracy. In a nation that prides itself on “one person, one vote,” the fact that bots can actively tamper with the information people use to determine how they will vote is a clear and present danger to our nation’s security.

If you’re concerned that bots are a threat, then contact Twitter, Facebook, and the other social media platforms. Demand that they ban the use of bot-controlled accounts, and that they find ways to scrutinize accounts more closely in order to detect and delete such accounts. If they refuse to act, then contact your elected representatives in the Senate and the House of Representatives. Demand that they pressure the social media platforms to act.

The integrity of our elections system goes beyond partisan politics. It is, in fact, the fabric that keeps this country together.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

For context on this series, see our June 8, 2020 post, “It’s time for tech to take a stand.” Questions or comments can be directed to Matt Walker, MTN Consulting’s Chief Analyst (matt@www.mtn-c.com).

Image credit: iStock, by Getty Images