
DIGITAL PRIVACY, PART ONE: THE DANGERS OF FACIAL RECOGNITION

There’s been a lot of talk in recent weeks regarding facial recognition technology. Much of the conversation has centered on privacy concerns. Other aspects concern technical flaws in the software, which impact the technology’s accuracy. Still others center on the demonstrated gender and racial biases of such systems, and the potential for governments and police forces to weaponize racial bias through facial recognition.

Indeed, the media has been following the conversations. Reports have dealt with China’s current use of facial recognition in its crackdown on a minority group; the questionable accuracy of the technology itself, particularly when involving people of color; and, of course, the intersection of privacy, law enforcement and racial bias when U.S. agencies and local police forces use facial recognition technologies.

A few other examples:

  • Concern that PimEyes, which identifies itself as a tool to help prevent the abuse of people’s private images, could instead “enable state surveillance, commercial monitoring and even stalking on a scale previously unimaginable.”
  • Concern that use of Clearview AI’s facial recognition system could easily be abused, as the app’s database was assembled by “scraping” pictures from social media, enabling the company to access your name, address and other details — all without your permission. The app, although not available to the public, is being “used by hundreds of law enforcement agencies in the U.S., including the FBI.” In May, Clearview AI announced that it would cease selling its software to private companies.
  • In response to the mask-related laws connected to the spread of COVID-19, tech companies have been attempting to update their facial recognition software so that it still works even when the subject of the scan is wearing a face mask.
  • Business Insider, Wired, U.S. News & World Reports, Popular Mechanics, the Guardian, and the Washington Post have all published reports on ways to defeat facial recognition systems.
  • IBM’s announcement, in a letter to Congress, that “IBM no longer offers general purpose IBM facial recognition or analysis software. IBM firmly opposes and will not condone uses of any technology, including facial recognition technology offered by other vendors, for mass surveillance, racial profiling, violations of basic human rights and freedoms, or any purpose which is not consistent with our values and Principles of Trust and Transparency. We believe now is the time to begin a national dialogue on whether and how facial recognition technology should be employed by domestic law enforcement agencies.”
  • Amazon’s announcement that they are “implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.”
  • Microsoft CEO Brad Smith confirmed that the company “will not sell facial-recognition technology to police departments in the United States until we have a national law in place, grounded in human rights, that will govern this technology.”
  • Other tech companies — NEC and Clearview AI among them — restated their commitment to providing facial recognition technology to police departments and governmental agencies.

So, yes, people are talking about facial recognition technology. And as the conversation grows, more people and corporations are joining it. MTN Consulting, like Amazon, IBM, Microsoft, and others, is expressing alarm at how the technology is used and, in a growing number of instances, abused.

Oddly, many people don’t know a great deal about the technology, such as how it works, how accurate it is, or how much of a threat it poses.

Let’s explore:

What is facial recognition?

According to the Electronic Frontier Foundation, facial recognition “is a method of identifying or verifying the identity of an individual using their face. Facial recognition systems can be used to identify people in photos, video, or in real-time. Law enforcement may also use mobile devices to identify people during police stops.”

How does it work?

According to Norton, a picture of your face is saved from a video or photograph. The software then looks at the way your face is constructed. In other words, it “reads the geometry of your face. Key factors include the distance between your eyes and the distance from forehead to chin. The software identifies facial landmarks — one system identifies 68 of them — that are key to distinguishing your face. The result: your facial signature.”

Next, your facial signature “is compared to a database of known faces. And consider this: at least 117 million Americans have images of their faces in one or more police databases. According to a May 2018 report, the FBI has had access to 412 million facial images for searches.”

Finally, the system determines whether your face matches any of the other stored images.
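The landmark-and-distance idea described above is simple enough to sketch in code. The following is a hypothetical toy illustration in Python, not any vendor’s actual algorithm: it reduces a handful of (x, y) landmarks to a scale-invariant set of pairwise distances (the “signature”) and picks the closest signature in a small database.

```python
import math

def facial_signature(landmarks):
    """Reduce a set of (x, y) facial landmarks to a signature:
    all pairwise distances between landmarks, normalized by the
    largest distance so the signature is scale-invariant."""
    dists = [
        math.dist(landmarks[i], landmarks[j])
        for i in range(len(landmarks))
        for j in range(i + 1, len(landmarks))
    ]
    largest = max(dists)
    return [d / largest for d in dists]

def best_match(signature, database):
    """Compare a signature against a database of known faces and
    return the closest match and its distance score (lower = closer)."""
    scores = {
        name: math.dist(signature, known)
        for name, known in database.items()
    }
    name = min(scores, key=scores.get)
    return name, scores[name]

# Toy landmarks: two eyes, nose tip, chin (real systems use ~68 points).
alice = [(30, 40), (70, 40), (50, 60), (50, 95)]
bob   = [(28, 38), (72, 38), (50, 65), (50, 105)]

db = {"alice": facial_signature(alice), "bob": facial_signature(bob)}

# A new scan of Alice's face, shifted slightly in the frame,
# still matches her, because only relative distances matter.
scan = [(31, 41), (71, 41), (51, 61), (51, 96)]
name, score = best_match(facial_signature(scan), db)
print(name)  # prints "alice"
```

Because the signature depends only on relative distances, the same face shifted elsewhere in the frame still matches. Real systems layer many more landmarks, 3-D modeling, and machine learning on top of this basic idea.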

How is it used?

Facial recognition has many uses. For example, the 2002 film “Minority Report” imagined the potential outcomes of the technology. In the film, when the main character, played by Tom Cruise, enters a mall, he is inundated by personalized greetings and advertising, all holographic, and all keyed to his facial scan – particularly, his eyes. Later, he enters the subway system, and facial recognition is again used, this time in lieu of immediate payment or carrying identification.

“Minority Report,” in its own way, was prescient in its prediction that facial recognition software would be everywhere, although it primarily addressed the commercial applications. In real life, however, the technology is used by both corporations and governments. A few examples:

  • The Moscow Times recently reported that Russia plans to equip more than 43,000 Russian schools with facial recognition. The 2 billion ruble ($25.4 million) project, named “Orwell,” will “ensure children’s safety by monitoring their movements and identifying outsiders on the premises,” said Yevgeny Lapshev, a spokesman for Elvees Neotech, a subsidiary of the state-controlled technology company Rusnano. According to Vedomosti, a Russian-language business daily, “Orwell” has already been installed in more than 1,608 schools.
  • Mobile phones are sold with facial recognition software that is used to unlock the phone, replacing the need for a password or PIN. Many companies – including Apple, Guangdong OPPO, Huawei, LG, Motorola, OnePlus and Samsung — offer phones with this technology.
  • As for laptops, Apple is lagging behind other manufacturers at the moment. The company recently announced that it is planning to add facial recognition software to its MacBook Pro laptop and iMac screen lines. Meanwhile, Acer, Asus, Dell, HP, Lenovo, and Microsoft have offered the technology in their laptops for years.

There are, of course, many other ways in which the technology is used:

  • The U.S. government uses it at airports to monitor passengers.
  • Some colleges use it to monitor classrooms, both for security purposes and for simpler tasks such as taking roll.
  • Facebook uses it to identify faces when photos are uploaded to its platform, so as to offer members the opportunity to “tag” people in the photos.
  • Some companies have eschewed security badges and identification cards in favor of facial recognition systems.
  • Some churches use it to monitor who attends services and events.
  • Retailers use surveillance cameras and facial recognition to identify regular shoppers and potential shoplifters. (“Minority Report,” anyone?)
  • Some airline companies scan your face while your ticket is being scanned at the departure gate.
  • Marketers and advertisers use it at events such as concerts. It allows them to target consumers by gender, age, and ethnicity.

So, what’s the concern?

Well, there are three main concerns: privacy, accuracy, and governmental abuse. A strong thread of racism, however, runs through all three.

Privacy

Although using a facial scan to gain access to your phone is more secure than, say, a short password, it isn’t perfect. There are some concerns about how and where the data is stored.

Admittedly, many people use facial recognition systems for fun. Specialized apps designed for, or that offer, the technology include B612, Cupace, FaceApp, Face Swap (by Microsoft), and Snapchat. The apps permit you to scan your face and swap it with, say, that of a friend or film star.

The easy accessibility of such apps is a boon for those who would use them. However, the very popularity of the apps gives rise to certain questions. For example, if the company stores the facial images in the cloud, how good is the security? How accessible is the data to third parties? Does the company ever sell that data to other companies? A simple leak of data, or a more aggressive hacking of the database, could result in many people’s data being compromised.

Another privacy aspect involves monitoring people without their knowledge or consent. People going about their daily business don’t typically expect to be monitored … but there are exceptions, depending on where you live. Last year, China was accused of human rights abuses in Xinjiang, a region that is home to the mostly Muslim ethnic group known as Uighurs. The New York Times reported on how the government used facial recognition systems to identify Uighurs, hundreds of thousands of whom were then seized and imprisoned in clandestine camps. Millions of others are monitored daily to track their activities.

In the U.S., reports circulated that some police departments were using technology developed by Clearview AI. The startup had scraped billions of photos from social media accounts in order to assemble a massive database that law enforcement officials could access – all without people’s consent. In other words, any photos that you’ve posted on Snapchat, Twitter, Facebook, Instagram, or another social media platform could be part of the database without your knowledge. The only way you would find out is if the police connect your face to a crime and come knocking on your door.

Indeed, Clearview AI has raised the ire of the American Civil Liberties Union, the European Data Protection Board, members of the U.S. Senate, as well as provincial and federal watchdogs in Canada.

Admittedly, some will argue that, although the collection of the data is likely an invasion of people’s privacy, the data itself is useful to assist law enforcement. Granted, that interpretation is subjective, but relevant to the argument at hand. However, it also assumes two things: that people being surveilled by the police are suspects; and that the technology is accurate.

In both cases, however, the reverse is often true. And because of that, innocent people can be surveilled without their knowledge or consent; the wrong people can end up arrested, tried and convicted for crimes they didn’t commit; and racial bias can be weaponized. More on that latter point in a bit.

Accuracy

In December 2019, researchers at Kneron decided to put facial recognition to the test. Using images of other people — in the form of 2-D photos, images stored on cell phones, and 3-D printed masks — they managed to penetrate security at various locations. Although most sites weren’t fooled by the 2-D images or video copies, the 3-D mask sailed through most of the scans, including at a high-speed rail station in China and at point-of-sale terminals. Worse, the team was able to pass through a self-check-in terminal at Schiphol Airport, one of Europe’s three busiest airports, with a picture saved on a cell phone. They were also able to unlock at least one popular cell phone model.

So, we know that the face-matching aspect of facial recognition can be fooled. Granted, one might argue that using a 3-D printer isn’t that common an occurrence. However, given that worldwide sales of 3-D printers generated $11.58 billion in 2019; that 1.42 million units were sold in 2018; and that annual global sales are expected to hit 8.04 million units by 2027, it can safely be assumed that 3-D masks pose a risk to facial recognition systems.

Still, obvious attempts to beat the system notwithstanding, there’s an even deeper concern regarding facial recognition — the face-matching aspect of the software isn’t always that accurate, and it has shown a demonstrated bias against women and people of color:

  • In 2018, the ACLU used Amazon’s facial recognition tech to scan the faces of members of Congress. Amazon’s “Rekognition” tool “incorrectly matched 28 members of Congress, identifying them as other people who have been arrested for a crime. The members of Congress who were falsely matched with the mugshot database we used in the test include Republicans and Democrats, men and women, and legislators of all ages, from all across the country.”
  • The FBI admitted in October 2019 that its facial recognition database “may not be sufficiently reliable to accurately locate other photos of the same identity, resulting in an increased percentage of misidentifications.”
  • In the United Kingdom, police departments use facial recognition systems that generate results with an error rate as high as 98 percent. In other words, for every 100 people identified as suspects, 98 of them were not, in fact, actual suspects.
  • In June 2019, a problem with a Chinese company’s facial recognition system went viral after one employee’s facial scan, used to clock into and out of work, “kept matching (the) employee’s face to his colleagues, both male and female. People started joking that the man must have one of those faces that looks way too common.”
  • Back in January, Robert Williams, a Black man, was arrested by Detroit police in his driveway. He then spent over 24 hours in a “crowded and filthy cell,” according to his attorneys. Police thought Williams was a suspect in a shoplifting case. However, the inciting factor for the arrest was a facial recognition scan, which had incorrectly suggested that Williams was the suspect. And while the charges were later dropped, the damage was done: Williams’ “DNA sample, mugshot, and fingerprints — all of which were taken when he arrived at the detention center — are now on file. His arrest is on the record,” says the American Civil Liberties Union, which has filed a complaint with the Detroit Police Department. “Study after study has confirmed that face recognition technology is flawed and biased, with significantly higher error rates when used against people of color and women. And we have long warned that one false match can lead to an interrogation, arrest, and, especially for Black men like Robert, even a deadly police encounter. Given the technology’s flaws, and how widely it is being used by law enforcement today, Robert likely isn’t the first person to be wrongfully arrested because of this technology. He’s just the first person we’re learning about,” the ACLU warns.
  • In May, Harrisburg University announced that two of its professors and a graduate student had “developed automated computer facial recognition software capable of predicting whether someone is likely going to be a criminal. With 80 percent accuracy and with no racial bias, the software can predict if someone is a criminal based solely on a picture of their face.” On June 23, over 1,500 academics condemned the research paper in a public letter. In response, Springer Nature will not be publishing the research, which the academics blasted as having been “based on unsound scientific premises, research, and methods which … have [been] debunked over the years.” The academics also warn that it is not possible to predict criminal activity without racial bias, “because the category of ‘criminality’ itself is racially biased.”

As I indicated earlier, aspects of racism exist with the argument surrounding facial recognition. It’s not just that the technology can be used in a discriminatory manner (more on that later). It is also because the scan results themselves can show bias against women and people of color.

“If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong. That’s a hell of a combination.”

— Congressman Elijah Cummings, March 2017

In 2012, a joint university study that was co-authored by the FBI showed that the accuracy of facial recognition scans was lower for African Americans than for other demographics. The software also misidentifies “other ethnic minorities, young people, and women at higher rates.” The fact that more recent studies, including some as recent as last year, show these same problems indicates that the bias is known, and yet is still not being addressed.

Another joint university study, this one published in 2019, found that the facial recognition software used by Amazon, IBM, Kairos, Megvii, and Microsoft was significantly less accurate when identifying women and people of color. Among the findings: Kairos’ and Amazon’s software performed better on male faces than on female faces, and much better on light-skinned faces than on darker ones; both performed worst on dark-skinned women, with Kairos showing an error rate of 22.5 percent and Amazon an error rate of 31.4 percent; and neither company’s software showed a measurable error rate for lighter-skinned men.

In December 2019, a National Institute of Standards and Technology study presented the results of testing 189 facial recognition algorithms from 99 developers. The study found that the majority of the software showed some form of bias. Indeed, among the broad findings:

  • One-to-one matching revealed higher error rates for “Asian and African American faces relative to images of Caucasians. The differentials often ranged from a factor of 10 to 100 times, depending on the individual algorithm.”
  • Among U.S.-made software, “there were similar high rates of false positives in one-to-one matching for Asians, African Americans and native groups (which include Native American, American Indian, Alaskan Indian and Pacific Islanders). The American Indian demographic had the highest rates of false positives.”
  • For software made in Asian countries doing one-to-one matching, there was no dramatic difference in false positives for Asian and Caucasian faces.
  • “For one-to-many matching, the team saw higher rates of false positives for African American females. Differentials in false positives in one-to-many matching are particularly important because the consequences could include false accusations.”

As we discussed earlier, three of America’s top technology companies recently announced that they would temporarily halt, or end altogether, the sale of facial recognition technology to police departments. The announcement by Amazon, IBM and Microsoft surprised police departments, market analysts and journalists for a specific reason: those particular companies had previously shown no real interest in what advocates for racial justice and civil rights had to say.

Although such advocates have complained for years about the threat posed to their communities by mass surveillance, and corporate complicity in that surveillance, it wasn’t until nationwide protests against police brutality and systemic racism that America’s top tech companies began to listen. As we’ve already determined, facial recognition is not all that accurate when dealing with people who are not White men. Even low error rates can result in mistaken arrests. And, as there is a demonstrated police bias against people of color, as shown in arrest rates, the idea of such technology being abused when used against “suspects” of color is not so unbelievable.

In a March 2017 hearing of the U.S. House of Representatives’ oversight committee, ranking member Elijah Cummings warned against law enforcement’s use of facial recognition software. “If you’re black, you’re more likely to be subjected to this technology and the technology is more likely to be wrong,” Cummings said. “That’s a hell of a combination.”

So, we know that the technology isn’t foolproof, that it discriminates against women and people of color, and that it is increasingly being used by governmental agencies and police departments.

What can this lead to?

Remember the earlier observation about China?

Governmental Abuses

Last year, PBS went undercover into China’s Xinjiang province to investigate accusations of mass surveillance and detentions of Uighurs, a mostly Muslim ethnic group. As the New York Times reported, hundreds of thousands of Uighurs were then seized and imprisoned in clandestine camps, while millions of others are monitored daily to track their activities.

In January, Amnesty International warned that, “In the hands of Russia’s already very abusive authorities, and in the total absence of transparency and accountability for such systems, the facial recognition technology is a tool which is likely to take reprisals against peaceful protest to an entirely new level.”  The warning came as a Moscow court took on a case by a civil rights activist and a politician who argued that Russia’s surveillance of public protests was a violation of their right to peacefully assemble.

And, of course, we have the United States, where governmental agencies and police departments use demonstrably racially biased facial recognition software.

As the ACLU reported after Amazon, IBM and Microsoft halted or ended the sale of facial recognition technology to law enforcement agencies, “racial justice and civil rights advocates had been warning (for years) that this technology in law enforcement hands would be the end of privacy as we know it. It would supercharge police abuses, and it would be used to harm and target Black and Brown communities in particular.”

The ACLU warned that facial technology “surveillance is the most dangerous of the many new technologies available to law enforcement. And while face surveillance is a danger to all people, no matter the color of their skin, the technology is a particularly serious threat to Black people in at least three fundamental ways”:

  • The technology itself is racially biased (see above).
  • Police departments use databases of mugshots, which “recycles racial bias from the past, supercharging that bias with 21st century surveillance technology. … Since Black people are more likely to be arrested than white people for minor crimes like cannabis possession, their faces and personal data are more likely to be in mugshot databases. Therefore, the use of facial recognition technology tied into mugshot databases exacerbates racism in a criminal legal system that already disproportionately polices and criminalizes Black people.”
  • Even if the algorithms were equally accurate across race (again, see above), “government use of face surveillance technology will still be racist (because) … Black people face overwhelming disparities at every single stage of the criminal punishment system, from street-level surveillance and profiling all the way through to sentencing and conditions of confinement.”

And, indeed, fresh concerns about law enforcement’s use of facial recognition technologies have surfaced as the Black Lives Matter protests gain steam in the wake of George Floyd’s May 25th death, while unarmed, under the knee of a White police officer. The protests, which consist of American citizens exercising their First Amendment rights, have been met by heavily armored police, aerial surveillance by drones, fake cellular towers designed to capture the stored data on protesters’ phones, covert government surveillance, and threats from President Donald J. Trump.

Of course, it would be wrong to say that all police officers, all governmental officials, are racist. It would be ludicrous, however, to say that the various systems that make up the infrastructure of the United States do not have a strong foundation that is racist in origin – particularly when it comes to law enforcement.

As the ACLU warned, “(the) White supremacist, anti-Black history of surveillance and tracking in the United States persists into the present. It merely manifests differently, justified by the government using different excuses. Today, those excuses generally fall into two categories: spying that targets political speech, too often conflated with ‘terrorism,’ and spying that targets people suspected of drug or gang involvement.” One currently relevant example is the FBI surveillance program that targets what the federal government considers to be “Black Identity Extremists” — the FBI’s way of justifying surveillance of Black Lives Matter activists, much as it kept a close watch on the Rev. Dr. Martin Luther King Jr. during the civil rights protests of the 1960s.

That some of America’s technology companies have decided, at least for now, to no longer be complicit in exacerbating racist policies is something to be applauded. However, it remains to be seen how long these changes will last, who will follow their lead … and whether any important lessons will be learned.

Time will tell.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.


Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.


Image credit: iStock, by Getty Images


BOTS: A CLEAR AND PRESENT DANGER

So … let’s talk about bots.

You’ve probably heard about them already … most likely connected to social media and the 2016 presidential election.

But, do you know what they are? Or what makes them so dangerous?

Let’s review:

What’s a bot?

A bot is a software program designed to perform a specific task automatically. By their nature, bots themselves are neutral. One of the things that makes them so useful is that they can be programmed to simulate human interaction. A common example is the automated customer service that many websites offer. You log in, seek customer service, and a chat window opens. The person you end up talking to may, in fact, not be a person at all.

How do they work?

Bots are designed to automatically perform tasks that a human would normally perform. For example, you could pick up your phone, open your search engine (we’ll use Google), and type in “What are bots?” Or, you could simply say, “Hey, Google … what are bots?” And your phone, thanks to the bot linked to your voice recognition software, would answer you. In many ways, bots simplify our lives. Regrettably, they also increasingly make things more complex and difficult.
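To make the idea concrete, here is a minimal, hypothetical keyword-matching bot in Python. It is far cruder than any commercial chat bot, but it runs the same basic loop a customer-service bot does: read a message, pick a response, reply.

```python
# A minimal, hypothetical customer-service bot. It matches keywords
# in the user's message to canned replies; real chat bots replace the
# keyword lookup with far more sophisticated language understanding.
RULES = [
    ("refund", "I can help with refunds. Please share your order number."),
    ("hours",  "We're open 9am-5pm, Monday through Friday."),
    ("human",  "Connecting you to a live agent now."),
]
FALLBACK = "Sorry, I didn't understand. Could you rephrase that?"

def bot_reply(message):
    """Return the first canned reply whose keyword appears in the message."""
    text = message.lower()
    for keyword, reply in RULES:
        if keyword in text:
            return reply
    return FALLBACK

print(bot_reply("What are your hours?"))  # prints the store-hours reply
print(bot_reply("I demand a refund!"))    # prints the refund reply
```

Even a sketch this small shows why bots scale so easily: the same dozen lines can answer one customer or ten thousand, with no human in the loop.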

Why should you care?

Ever been enraged when, after waiting a long time for ticket sales to open to your favorite event, the event sells out in mere minutes? In December 2016, President Barack Obama signed the “Better Online Ticket Sales Act,” which banned “the circumvention of control measures used by Internet ticket sellers to ensure equitable consumer access to tickets for certain events.” In other words, it banned people from using bots to scoop up huge numbers of tickets in order to resell them, usually at exorbitant rates, on secondary markets.

Unconvinced? In 2018, the Pew Research Center released a study showing that bots were making a disproportionate impact on social media. During a six-week period in the summer of 2017, the center examined 1.2 million tweets that shared URL links to determine how many of them were actually posted by bots, as opposed to people. Among the findings:

  • Sixty-six percent of all tweeted links were posted by suspected bots, which suggests that links shared by bots are actually more common than links shared by humans.
  • Sixty-six percent of links to sites dealing with news and current events were posted by suspected bots. Higher numbers were seen in the areas of adult content (90 percent), sports (76 percent), and commercial products (73 percent).
  • Eighty-nine percent of tweeted links to news aggregation sites were posted by bots.
  • The 500 most active bot accounts were responsible for 22 percent of the tweeted links to popular news and current events sites. On the human side of the equation, the 500 most active human users were responsible for only an estimated six percent of those links.

In other words, social media, which was designed by humans for use by humans, has instead become the province of bots.

And then, of course, there’s always the chance that the information that you read and share on social media, the information that helps you decide how to vote, has been subtly influenced by bots designed to shift your thinking along a predetermined narrative.

In 2016, Scottie Nell Hughes, a conservative political commentator, told CNN anchor Anderson Cooper that “(the) only place that we’re hearing that Donald Trump honestly is losing is in the media or these polls. You’re not seeing it with the crowd rallies, you’re not seeing it on social media—where Donald Trump is two to three times more popular than Hillary Clinton on every social media platform.”

Trump himself touted his social media popularity during the campaign, saying during the first presidential debate that he had 30 million followers on Twitter and Facebook. That apparent popularity, in the eyes of a culture that translates “worth and fame” into support on social media, made Trump look even more like a winner among his followers.

However … what if those numbers were, in fact, a lie?

In 2016, an Oxford University study revealed that, between the first and second presidential debates, more than a third of pro-Trump tweets, and nearly a fifth of pro-Clinton tweets, came from bot-controlled accounts — a total of more than a million tweets.

The study also found:

  • During the debates, the bot accounts created up to 27 percent of all Twitter traffic related to the election.
  • By the time of the election, 81 percent of the bot-controlled tweets involved some form of Trump messaging.
  • On Election Day, as Trump’s victory became apparent, traffic from automated pro-Trump accounts abruptly stopped.

What about today?

Today, the race for the White House has begun once again, with Trump facing a challenger in former Vice President Joe Biden.

And the bots, as you might expect, are at it again. This time, however, people and social media platforms are better armed, and better prepared to fight back.

  • In November 2018, the FBI warned that “Americans should be aware that foreign actors—and Russia in particular—continue to try to influence public sentiment and voter perceptions through actions intended to sow discord. They can do this by spreading false information about political processes and candidates, lying about their own interference activities, disseminating propaganda on social media, and through other tactics.” The statement was a joint release with the Department of Homeland Security, the Department of Justice, and the Office of the Director of National Intelligence.
  • In February 2019, a study showed that bots, including thousands based in Russia and Iran, were much more active during the 2018 midterm elections than previously thought. In nearly every state, more than a fifth of Twitter posts about the elections in the weeks before Election Day were posted by bots.
  • In 2019, Twitter detected and removed more than 26,600 bot-controlled accounts. Granted, that sounds like a lot, until you consider that, at the time, the platform had more than 330 million active users. Still, for Twitter — which is known for its openness, as well as for its reluctance to set truth and authenticity as a rule for its accounts — it was a start. The company’s efforts, however, are like fighting the tide with a bucket; for every bot account that is deleted, many, many more are already being created. The platform has also begun flagging tweets by Trump that it says glorify violence or are factually inaccurate.
  • In September 2019, a study by the University of Southern California’s Information Sciences Institute showed that “although social media service providers put increasing efforts to protect their platforms, malicious bot accounts continuously evolve to escape detection. In this work, we monitored the activity of almost 245 (thousand) accounts engaged in the Twitter political discussion during the last two U.S. voting events. We identified approximately 31 (thousand) bots. … We show that, in the 2018 midterms, bots (learned) to better mimic humans and avoid detection.”
  • Because social media platforms have a global reach, they also have a global impact. In March, ProPublica revealed that, since August 2019, it had been tracking more than 10,000 Twitter accounts it suspected of being part of an influence campaign linked to the Chinese government. “Among those are the hacked accounts of users from around the world that now post propaganda and disinformation about the coronavirus outbreak, the Hong Kong protests and other topics of state interest,” the report said.
  • In May, NortonLifeLock began offering BotSight, which it calls “a new tool to detect bots on Twitter in real-time” that will quantify “disinformation on Twitter, one tweet at a time.”
  • On June 11, Twitter announced that it had closed down more than 170,000 accounts connected to China’s government. The accounts were designed to spread “geopolitical narratives favorable to the Communist Party of China,” by disseminating misinformation about the Hong Kong protests, COVID-19, and other issues.

What can you do?

You have the facts. Now, you need to decide what to do.

Yes, some bots, such as those used in customer service, exist to make our lives easier. However, it has been shown, time and again, that they also represent a tool that can be used to damage our democracy. In a nation that prides itself on “one person, one vote,” the fact that bots can actively tamper with the information people use to determine how they will vote is a clear and present danger to our nation’s security.

If you’re concerned that bots are a threat, then contact Twitter, Facebook, and the other social media platforms. Demand that they ban bot-controlled accounts, and that they scrutinize accounts more closely in order to detect and delete them. If they refuse to act, then contact your elected representatives in the Senate and the House of Representatives. Demand that they pressure the social media platforms to act.

The integrity of our elections system goes beyond partisan politics. It is, in fact, part of the fabric that holds this country together.

About the author

Melvin Bankhead III is the founder of MB Ink Media Relations, a boutique public relations firm based in Buffalo, New York. An experienced journalist, he is a former syndicated columnist for Cox Media Group, and a former editor at The Buffalo News.

 

Reference Materials

See active hyperlinks within the text, above.

Note from MTN Consulting

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

For context on this series, see our June 8, 2020 post, “It’s time for tech to take a stand.” Questions or comments can be directed to Matt Walker, MTN Consulting’s Chief Analyst (matt@www.mtn-c.com).

Image credit: iStock, by Getty Images


It’s time for tech to take a stand

In 2000, Google famously incorporated a simple catchphrase into its corporate code of conduct: “Don’t be evil.”

The idea, said Google, was that “everything we do in connection with our work at Google will be, and should be, measured against the highest possible standards of ethical business conduct.”

Google’s founders recognized that the growth of its search and ad platforms was turning the company into a powerful entity with the ability to shape users’ understanding of the world. While the “don’t be evil” catchphrase was mocked by some, it did at least imply that the company saw that its growing power came with certain responsibilities. The tech industry could use more of this sentiment in 2020.

Chaos in the streets is a feature, not a bug

Fast forward 20 years, 3.5 years after Facebook helped elect Donald Trump to the presidency, and America is in crisis.

The country is now run by a president who, as Jim Mattis, Trump’s first Secretary of Defense, put it, “is the first president in my lifetime who does not try to unite the American people—does not even pretend to try. Instead, he tries to divide us.” There are parallels in this to how Trump ran his 2016 campaign, deftly using Facebook and other social media to micro-target his messaging.

Since George Floyd was killed by a Minneapolis police officer on May 25, and the video of the killing went viral, protests have spread nationwide, to even the smallest of towns. Some opportunists have used the protests for looting, as is always the case, and some far-right, pro-Trump actors have deliberately engaged in looting and vandalism in order to give cover to any resulting police crackdowns. The bulk of the violence, though, is top-down. Egged on by Trump, police officers and an array of other armed security officers have reacted to largely peaceful assemblies of their fellow Americans with violent tactics and gear designed for fighting wars.

Patrick Skinner, a writer, former intelligence officer, and now police officer in Georgia, implied this violence was by design on his Twitter feed recently:

“Don’t let us off the hook by saying this orgy of violence is a failure in training. It is not. It is the result of training for war. Don’t say it’s a lack of a few de-escalation power points. It is not. It is the result of training for war. Our entire mindset is a war on crime.”

Racism didn’t start with Trump, nor did the militarization of the police. But this President has used a unified right-wing mass media propaganda machine and the tech industry’s social media tools to make both hip again. Cultivating a tough-guy image, he once urged a police group, “Please don’t be too nice” to suspects. Note his focus: “Suspects,” as opposed to convicted criminals.

Today, hundreds of thousands (if not millions) are protesting to be heard, at great personal risk, while the COVID-19 pandemic rages on. Republican politicians are under pressure to preserve an image of a good economy, in hopes of a Trump re-election, so public health concerns take a back seat. The political movement that claimed to be concerned with the lives of the unborn, and responds to “Black Lives Matter” chants with the inane “All Lives Matter,” is now persuading the public to overlook the 100,000+ deaths from COVID-19 and just get back to work.

In my home state of Arizona, which has a population of over 7 million, more than 1,000 people have died from COVID-19. Prior to this, I lived in Thailand for a decade. That country, with a population of more than 70 million — more than 10 times that of Arizona — has recorded fewer than 100 COVID-19 deaths. And Arizona’s gross domestic product per capita (nominal) is over five times that of Thailand. What good is wealth if elected leaders don’t use it to invest in things like public health for their constituents?

As Mattis said in his recent statement, “We are witnessing the consequences of three years without mature leadership.”

Tech executives continue to hedge their bets

We are also witnessing how obsessed with money the rich and powerful of this country have become.

The hundreds of Internet companies that have made it big since Google’s advent have become even bigger since Trump’s 2017 tax reform directed massive tax cuts to corporations and high-income individuals. Their top execs have become far wealthier. Even with extreme levels of unemployment and a steep GDP drop inevitable in 2020, these folks are doing just fine.

Surely, you would think, the largely liberal (so we’re told) tech sector would have spoken out by now, publicly critiquing not only specific acts of police violence but, more importantly, the messaging sent from the top. Yet, when we surveyed the top few execs of the largest companies in the U.S. Internet and telecom sectors, we came up largely dry. If wealth is supposed to free you to do and say what you want, the results have been revealing (Table 1).

Table 1: Public comments on George Floyd and Racism by Tech Execs 

Company (market cap, U.S. $B) | Tech executive | Public comments

  • Alphabet (977.0) | Sundar Pichai, CEO | Posted a picture of a modified Google search home page, with new text: “We stand in support of racial equality, and all those who search for it.” Pichai’s post: “Today on US @Google and @YouTube homepages we share our support for racial equality in solidarity with the Black community and in memory of George Floyd, Breonna Taylor, Ahmaud Arbery & others who don’t have a voice. For those feeling grief, anger, sadness and fear, you are not alone.”
  • Amazon (1,220.0) | Jeffrey Wilke, CEO, Consumer | Two-tweet thread: (1) “A friend who is a Black man sent me an email today that included: “The narrative that security of accomplishment will somehow lead to equality in this country for people of color, especially Black men, is a false narrative. It is simply not real.” (2) “Since I’ve subscribed to this idea — that facilitating achievement was the key to solving the problem — I looked in the mirror and asked “Have I done enough? Have I listened carefully enough?” Clearly the answer to both is “no.””
  • Amazon (1,220.0) | Andrew Jassy, CEO, Amazon Web Services | Tweeted “*What* will it take for us to refuse to accept these unjust killings of black people? How many people must die, how many generations must endure, how much eyewitness video is required? What else do we need? We need better than what we’re getting from courts and political leaders.”
  • Amazon (1,220.0) | Jeff Bezos, COB & CEO | Posted an essay on Instagram called “Maintaining Professionalism in the Age of Black Death is…A Lot”. Bezos’ personal intro to the essay: “The pain and emotional trauma caused by the racism and violence we are witnessing toward the black community has a long reach. I recommend you take a moment to read this powerful essay from @goldinggirl617, especially if you’re a manager or leader.”
  • Apple (1,380.0) | Tim Cook, CEO, Director | Tweeted “Minneapolis is grieving for a reason. To paraphrase Dr. King, the negative peace which is the absence of tension is no substitute for the positive peace which is the presence of justice. Justice is how we heal.”
  • Disney (211.9) | Robert Iger, Executive COB | Tweeted “Below is a link to a statement we sent to our fellow @Disney employees. It’s from Bob Chapek, our CEO, Latondra Newton, our Chief Diversity Officer, and me. Thank you.” The link is a letter to Disney employees that discusses George Floyd.
  • Microsoft (1,390.0) | Satya Nadella, CEO, Director | Re-tweeted a Microsoft Corp. post that it would be using its platform to “amplify voices from the Black and African American community at Microsoft.” Nadella’s post says, “There is no place for hate and racism in our society. Empathy and shared understanding are a start, but we must do more. I stand with the Black and African American community and we are committed to building on this work in our company and in our communities.”
  • Netflix (184.6) | Reed Hastings, COB, President, CEO | Retweeted a video promoting non-violence, which said: “Some protestors in Brooklyn calling to loot the Target, but organizers are rushing in front of the store to stop them, keep things non-violent #nycprotest”
  • Snap (27.4) | Evan Spiegel, CEO, Co-Founder, Director | Posted a Snapchat with intro saying, “We condemn racism. We must embrace profound change. It starts with advocating for creating more opportunity, and for living the American values of freedom, equality and justice for all. Our CEO Evan’s memo to our team:”, followed by a link to a message written by Evan to his team members.
  • Twitter (24.3) | Jack Dorsey, CEO, Director | Active participant in online discussion, largely through re-tweets, several of which highlight police violence. In May, raised Trump’s ire by flagging one of his tweets for “glorifying violence.” An important but small step, though: the New York Times reviewed a set of Trump tweets for the week of May 24th, and found at least 26 out of 139 posts contained clearly false claims.
  • Verizon (237.4) | Hans Vestberg, COB, CEO | Pinned a tweet of a video clip of himself speaking on the death of Floyd (also posted to Instagram and to Verizon’s Twitter feed), captioned “We cannot commit to the brand purpose of moving the world forward unless we are committed to helping ensure we move it forward for everyone. We stand united as one Verizon.”
  • Verizon (237.4) | Ronan Dunne, EVP, CEO Consumer Group | Tweeted, “While it’s hard to find the right words, we need to do more than speak — we need to listen and act. I’ll do my part to learn and help elevate the voices that will drive the change we want and need to see in the world. #ForwardTogether”, followed by a link to a video of CEO Hans Vestberg speaking on the subject.

Note: all posts are from the May 30-June 3 timeframe; exact dates available in links.

Most prominent execs have simply kept their heads down. One big exception is Jack Dorsey of Twitter, who appears to have had a recent awakening as to the power of his company’s platform and how well it has been manipulated by the powers that be. Watch Jack.

Snap CEO Evan Spiegel has also started to find a voice, first deciding to stop promoting (for free) content from Trump on Snap, and saying that Snap needs to “embrace profound change.”

Many more execs have issued bland, low-risk statements, sometimes head-scratchingly vague, as with the Verizon CEO’s focus on “the brand purpose of moving the world forward.” Apple CEO Tim Cook quoted the Rev. Dr. Martin Luther King Jr. on Twitter, saying “positive peace” requires the “presence of justice.” Cook also sent a letter to employees which received some public praise.

Yet the Cook letter also risked almost nothing, for Apple as a company and Cook personally. Silicon Valley VC Vinod Khosla pointed this out in response, saying that “it’s easy to support equality & justice…it’s when one has to give up something to support it that belief in our real values show up. @tim_cook easy to talk but why do you suck up to @realDonaldTrump?”

Exactly the point.

Let’s not forget, we are talking about some of the wealthiest, most powerful people in America. The few who have spoken recently are clearly in favor of equality, and pro-human rights, but their statements read as largely vacuous lip service. Recall that clause within the U.S. Declaration of Independence, “All men are created equal.” Inspirational, yes, but, at the time, white male property owners just happened to be a little more “equal” than others.

Words are easy to toss around, then and now. Actions count.

If you have ever read the Bible, whether as a believer or a student of philosophy, this quote seems apt: “To whom much is given, much will be required.”

What can tech do?

The first step to fixing a problem is accepting that you have one. Some tech companies have arrived at this point, notably Twitter.

The second step, in this case, is deciding that you have the resources to fix the problem. On that note, some market data may come in handy.

Figure 1 below illustrates just how deep the pockets are in the webscale network operator sector tracked by MTN Consulting. The “webscale” sector encompasses big Internet services companies like Facebook and Google that have built out their own physical network infrastructure to support their services and operations, with data centers taking up much of the spending. The figure shows the free cash flow generated in 2019, and year-end cash reserves, of U.S.-based webscale players.

Figure 1: Free cash flow and cash & short term investments at year-end in the webscale sector, 2019

Source: MTN Consulting, “Webscale Network Operators: 4Q19 Market Review”

These are immense companies which have recorded profit margins far above most other sectors, and for many years. There’s always pressure to grow profits more, or to use more of the cash for mergers and acquisitions in order to position for growth or forestall new competitors. But saying that they can’t afford to improve their platforms is a hard argument to make.

Then there’s another question: Why should they bother? Many will read this and, even if they oppose Trump, may think it’s not tech’s job to get involved in politics. It’s not a tech CEO’s job to combat rising authoritarianism, racism, or the metaphorical shredding of the Constitution. That, they will argue, is the job of voters.

However, these tech and telecom CEOs do have a responsibility to ensure their platforms are not used and manipulated by evil actors to do evil things. Not just for moral reasons, but also to ensure their platforms can thrive over the long-term. It’s been clear for at least 3.5 years that many are failing at this aspect of their job.

MTN Consulting’s contribution

MTN Consulting is an industry analysis and research firm, not a company that typically comments on politics. We remain focused on companies who build and operate networks, and the vendors who supply them. That isn’t changing. However, we are going to dig into some of the technology issues related to these networks and networking platforms which are having (or will have) negative societal effects.

Specifically, over the next few weeks, we will issue reports on:

  • Bots on social media platforms: How they work, how they shape public opinion, and how they can directly impact elections
  • Privacy: How social media and telecom companies exploit user data to sell more ads, and how this user data is often sold to and misused by third parties (including government actors)
  • Digital advertising and journalism: How tech companies’ takeover of advertising markets has impacted the news industry and complicated citizens’ efforts to get reliable information
  • Deep fakes: How machine learning and artificial intelligence (AI) research, much of it done by the webscale sector, is about to make it even harder to distinguish fact from fiction; how that may reduce the value of social media platforms; and how both webscale players and users will have to cope.

For those of you accustomed to seeing us write about data centers, optical fiber, mobile radio access networks and similarly dry topics, have no fear – that will all continue. This is a moment in time, however, when sitting on the sidelines of more consequential debates is no longer an option.

-end-

Photo by Khalid Naji-Allah, Executive Office of the Mayor via AP


5G to follow a politicized path in developing markets – telcos beware

Huawei has dominated telecom news since the arrest last December of Meng Wanzhou, the Chinese vendor’s CFO, in Vancouver. Since then, the US Commerce Department has restricted Huawei’s access to US-built tech components, including Google’s Android ecosystem. Huawei needs these components, so the heat is on. What happens next?

Let the Huawei chaos begin

Those waiting for a grand resolution to US-China disputes surrounding Huawei will be disappointed – the company’s problems did not arise with the Trump administration’s trade battles. Concerns about Huawei’s private company origins and independence from the Chinese state are fairly bipartisan in the US, at least two decades old, and shared by many European and Asian governments.

Yet Huawei certainly isn’t going anywhere; it has the broadest portfolio of products in the industry, and its 22% market share in network infrastructure sales to telcos (“Telco NI”) is nearly as large as Nokia’s and Ericsson’s combined (figure, below). Since Meng’s arrest, the vendor has hardly backed away from its ambitions – and the Chinese government has made clear its support for Huawei’s long-term growth.

In the developing world, Huawei’s network infra share is over 30%, and its share in most developing markets is rising, due in part to “China Inc”. Huawei – and its customers – continue to benefit from cut-rate financing available from Chinese banks, among other incentives. This activity has picked up as Belt and Road Initiative (BRI)-related projects have got underway. Egypt’s new capital is an example – Huawei is supplying nearly all of the new telecom network infrastructure for an entirely new city intended to house 6.5 million.

Given Huawei’s position as a powerhouse in the developing world, it’s impossible to discuss 5G without addressing Huawei’s prospects.

5G not a rush in low ARPU markets

In developing regions such as CIS, Latin America (LA), and Sub-Saharan Africa, 3G remains the primary mobile connection technology. While 4G will overtake 3G soon even in these low ARPU markets, 5G will take years to emerge. According to stats from the GSMA, these regions will respectively see 5G account for 12%, 8%, and 3% of their total connections by 2025.

These are cellular connections and don’t factor in IoT – a big caveat given 5G’s promise for device-to-device connections. However, the point remains that 5G will be a slow evolution – telcos like to stretch the life of technologies whenever possible.

That’s especially true for telcos with high debt levels – and there are a lot of these. The net debt (debt minus cash) of the global telco sector was roughly half of revenues in 2018, having been in the 30-40% range of revenues at the cusp of the LTE buildout cycle. Few telcos have room in their budgets for a 5G capex splurge. Even if there are 5G trials underway across the developed world, the developing world will need 10 years or more for widespread migrations to complete.
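The net-debt-to-revenue metric cited above is straightforward to compute. A minimal sketch, with figures invented purely for illustration:

```python
# Net debt = total debt minus cash & equivalents. The post cites the global
# telco sector at roughly 50% of revenues in 2018, up from the 30-40% range
# at the cusp of the LTE buildout. The inputs below are hypothetical.
def net_debt_to_revenue(total_debt, cash, revenue):
    """Return net debt as a fraction of annual revenue."""
    return (total_debt - cash) / revenue

# Hypothetical telco: $40B debt, $15B cash, $50B revenue
print(net_debt_to_revenue(40e9, 15e9, 50e9))  # 0.5, i.e. net debt at half of revenues
```

A ratio drifting from ~0.35 toward ~0.5 is exactly the kind of balance-sheet pressure that crowds out a 5G capex splurge.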

Individual operators reflect this different pace. Etisalat for example is already advertising ZTE-provided 5G in its home market of the UAE ($41K GDP per capita); however, in the west African country of Togo ($617 GDP per capita), its local unit Moov Togo only launched 4G in mid-2018. There is little need or incentive for Etisalat to push 5G anytime soon in Togo.

The natural conservatism of telcos is heightened when lots of things are changing on the supply side. Right now, Huawei-related uncertainty is slowing down procurement. Even if a product is on the shelf, a telco needs to know it can be supported after the sale. Given that some countries are considering restrictions on Huawei, it’s only natural for telcos to take a breath.

Supply side push likely from Huawei

Any good vendor sales rep talks to customers frequently about new products, in search of interest and/or commitments. Huawei has been especially proactive about stirring up business in small markets like Togo, and successful in turning single-country projects into much larger ones. If Huawei can keep its supply chains running – although this is not certain – it will likely launch an aggressive supply side push for 5G in its strongest developing markets (e.g. Thailand). We can expect more low-cost financing, joint R&D facilities, university partnerships, tie-ins with Huawei’s device and cloud business, and lobbying. Huawei wants to seize the moment.

This could all end up being good for operators if they play it smartly. A better pitch from Huawei should provoke its rivals into doing the same, ultimately benefiting telco customers. The complication is on the financing end and the use of China’s state-owned banks – primarily CDB and Ex-Im. Politics are by definition part of the decision-making process of these banks, and telcos may not want to embroil themselves in that process.

This is now a political issue, as concerns about foreign debt levels grow. Just last month the Kiel Institute for the World Economy issued a report on “China’s Overseas Lending”, noting that for the 50 main recipients of Chinese direct lending, “the average stock of debt owed to China has increased from less than 1% of GDP in 2005 to more than 15% of debtor country GDP in 2017.” The study also found that “about one half of China’s overseas loans to the developing world are ‘hidden’”.

Telcos forced to do more with less as webscale operators splurge

Telcos’ network department headcounts and R&D budgets have been declining for many years. This has made telcos more reliant on vendors for knowledge and technical support, and even rudimentary design. In effect telcos have outsourced much of their R&D to their suppliers. This tends to benefit incumbent vendors.

Network operators in the webscale world – Amazon, Facebook, Microsoft, etc. – are by contrast splurging on staff. They spend heavily on R&D, an average of 10.3% of revenues in 2018 (vs. 1.3% for telcos; figure). Webscale R&D projects are all over the map, in line with the range of the companies’ business interests. Most important, all of the big webscale network operators (WNOs) spend heavily on network R&D, designing equipment to suit their high-capacity, high-growth needs precisely. They typically use original design manufacturers (ODMs) to build and then ship the gear to sites worldwide.

These webscale companies have pushed open networking and open source efforts for years, starting in a big way with Facebook’s founding of the Open Compute Project (OCP) in 2011. Much of the webscale network equipment deployed in their clouds is either compliant with or derived from these open source-oriented bodies.

Change comes slower to the telco world, but AT&T giving open networking a push

However, telco adoption of open networking/open source has been slow, due to weak OSS/BSS system support and telcos’ slow buying cycle: they do not introduce change into the network quickly. There are signs that this is changing; for instance, AT&T’s Dec. 2018 commitment to deploy “white box routers” at up to 60,000 5G cell towers over the next few years. AT&T first laid out its virtualization plan in 2013, which included using SDN and its internally developed ONAP (Open Network Automation Platform) to virtualize its network functions.

With AT&T’s white box commitment, open source hardware in the 5G RAN has become more attractive – even if just for routers. However, AT&T’s open source commitment comes at a cost. The company has a significant R&D budget, totaling $1.4B in 2018 (or 0.9% of revenues). In the case of the cell site routers, AT&T is not just buying something off the shelf. The UfiSpace white box is powered by a network operating system called Vyatta, which required both internal R&D and an acquisition (of Brocade’s Vyatta division) to develop. On the flip side, AT&T has managed to keep its capex outlays to just 12.2% of revenues (2018), among the lowest of all big telcos worldwide.

Not all carriers in the developing world can develop their own network operating system, clearly. Most need to allocate more funding to R&D, though, with the explicit goal of capex reduction – and increased leverage over their suppliers. That’s all the more important to do now as supply chains are in upheaval. Telcos with country operations in the developing world should be more involved in key bodies like ONF, OCP, O-RAN Alliance, and the Telecom Infrastructure Project (TIP).

There is a benefit to being an early mover, and that’s especially true now – lots of small players are eager to sign deals that give them bragging rights. Accton’s Edgecore Networks, for instance, is working on white box cell site gateways with large carriers Vodafone, Telefonica, TIM Brasil, BT, and Orange – all but BT have significant operations in developing markets where deployment is possible. Locally owned competitors would have strong incentives to follow.

New vendor opportunities emerging amidst the Huawei chaos

As 5G becomes a reality and Huawei still has issues, vendors elsewhere in Asia are looking to exploit the uncertainty. That doesn’t just mean other RAN suppliers; it involves fiber, transmission, router/switch, and other product areas, as well as software/IT services. It also involves many countries: India, Korea, Taiwan, and Japan all host competitive players in the telecom network infrastructure space. None approach the scope of even a mini-Huawei, but telcos are more willing to buy a la carte nowadays.

India is interesting because its latest Telecom Policy (2018) explicitly called for the development of its telecom equipment sector. Well before the Huawei crisis, India’s Telecom Secretary, Aruna Sundararajan, argued that India should embrace 5G aggressively, not just for services but to help develop India’s export sector. India is a big enough market that the big global RAN vendors are making local investments in R&D and manufacturing, and partnering locally. Ultimately this could expand prospects (and product lines) for companies in other segments like Sterlite and Tejas. It could also help open networking specialist Radisys, now owned by India’s largest telco Jio.

India becomes more interesting in terms of network infrastructure when you consider Taiwan. Its local tech trade association, TAITRA, is pushing hard on India for both export and partnership opportunities. India’s traditional strength (workforce-wise) has been in software (e.g. Wipro, Tech Mahindra), while Taiwan is strong in electronics manufacturing, chips, displays, and sensors. There are some partnership opportunities that look attractive on paper. Already Taiwan’s Foxconn is moving some iPhone production to India, for instance. But politics are a factor in the India-Taiwan avenue. And if politics is what motivates a deal, then a new political environment could make the deal unstable, so things are likely to go slowly here.

What’s an operator to do?

Mobile operators face an unsettled vendor landscape and tight capex budgets. Planning 5G in this climate is not easy. If I led a developing market mobile telco – Axiata, say, or America Movil – I would use this time to:

  • Study my current network equipment inventory (including software elements) to gauge security and regulatory risks – for all vendors;
  • Push regulators to guarantee no future unfunded mandates to rip & replace;
  • Adopt network design and procurement practices from webscale players when workable, but avoid adopting their lax security and privacy practices;
  • Increase R&D budget by at least 0.1% of revenues. This modest increase could potentially fund hundreds of new R&D hires for a company like America Movil; and,
  • Use the new hires to fully evaluate cost-saving opportunities related to open networking, and infrastructure spinoffs to the carrier-neutral network operator sector.
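The "hundreds of new R&D hires" claim in the list above is easy to sanity-check. A rough back-of-envelope calculation, with the revenue and staffing-cost figures assumed purely for illustration (not taken from the post):

```python
# What does an extra 0.1% of revenues buy in R&D headcount?
# Assumptions (illustrative): ~$50B annual revenue for a large
# developing-market telco such as America Movil, and a fully loaded
# cost of ~$120K per R&D engineer per year.
revenue_usd = 50_000_000_000          # assumed annual revenue
rd_increase = 0.001 * revenue_usd     # the proposed 0.1%-of-revenues bump
cost_per_engineer = 120_000           # assumed all-in annual cost per hire
new_hires = rd_increase // cost_per_engineer
print(f"Extra R&D budget: ${rd_increase/1e6:.0f}M -> ~{new_hires:.0f} engineers")
# prints: Extra R&D budget: $50M -> ~416 engineers
```

Even with conservative cost assumptions, the math lands comfortably in the "hundreds of hires" range.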

Finally, I would make sure I was getting objective advice on prospects for 5G business use cases, and the right investment strategy to pursue them. More capex isn’t always the answer.

-end-

 

Source of cover image: My Edmonds News


Reliance Jio’s aggressive cloud push with Microsoft has Amazon and Google fretting in India

After taking the Indian telecom scene by storm to reach the pinnacle (by subscriber base) in just three years of commercialization, Reliance Jio Infocomm (Jio) is all set to spread its wings into the booming Indian cloud market. In a 10-year deal with cloud heavyweight Microsoft, Jio will build new cloud data centers across India to support Microsoft’s Azure cloud platform and offer economical, India-native cloud solutions for enterprises. As part of this, Jio is building two initial data centers in the Indian states of Gujarat and Maharashtra, both slated to go live by the end of 2020. These two facilities are reportedly ~7.5MW in capacity, small relative to the largest global facilities but significant for India.

Microsoft has been part of the Indian cloud scene since 2015, before its closest webscale network operator (WNO) rivals Amazon (2016), Google (2017) and Alibaba (2018). Though Microsoft claims to operate three data centers in India, these are in fact hosted within facilities of existing data center companies such as CtrlS Datacenters and Netmagic (as are Amazon’s and Google’s). The partnership with Jio has a similar setup: Microsoft’s Azure cloud hosted in Jio’s data centers. By contrast, Microsoft’s recent cloud partnership with AT&T will likely have the telco relying primarily on Microsoft-built infrastructure.

The Jio-Microsoft deal also marks telcos’ greater engagement in the Indian webscale arena, offering cloud and network connectivity solutions. Airtel has been active here for some time: it operates a wholly-owned data center unit, Nxtra Data, which is prepping for a data center footprint expansion.

Jio, Microsoft deal a win-win for both

The key to this deal is how it allows both firms to focus on their respective competitive edges. Jio’s scale and infrastructure clout, coupled with its understanding of the Indian landscape, will assist in delivering seamless connectivity, while Microsoft will focus on what it does best: developing and deploying its Azure cloud and AI solutions on Jio’s network. The deal also allows Microsoft to grow its cloud market share in India, a key point considering that cloud has become Microsoft’s biggest business segment by revenues, and the company is looking at India as a market to boost this growth further.

Jio, on the other hand, will bank on Azure’s brand of solutions to help persuade Indian enterprises to switch from the cloud platforms of Amazon, Google, and Alibaba onto Jio’s Azure-backed network. Besides, Jio’s quest to build high-growth businesses beyond telecom complements its decision to venture into cloud.

Key deal disruptors – ‘pricing’ and ‘native language compatibility’ – to benefit target market, and unsettle rivals

India is a price-conscious market, and Jio’s strategy is apparent: trigger a price war by aiming at the bottom of the ‘enterprise pyramid’ (primarily the startup ecosystem and SMEs), leveraging Microsoft’s Azure brand without compromising on quality. Jio will offer ‘free’ connectivity and cloud infrastructure to promising startups, and SMEs will be offered customized, bundled solutions encompassing connectivity, productivity and automation tools starting at just INR1,500 (US$21) per month. Similar solutions offered by rivals such as Amazon and Google can cost ~10x that price.

In addition, the Jio-Microsoft duo is looking to plug a key void left by the existing peer offerings for SMEs, i.e. local language compatibility. Jio will leverage Microsoft’s speech and language cognitive services to provide cloud and digital solutions supporting major Indian languages. This could prove to be a game-changer in a market with such language diversity as India. Local language support will likely boost broader adoption among SMEs who still largely cater to the needs of native regions.

These developments are surely going to hurt the existing cloud players, especially Amazon, Google, and Alibaba, who have a lot to ponder in countering the Jio-Microsoft threat. Amazon, which has a sizeable SME clientele in India, faces the most risk, as scores of SME customers are expected to switch from its cloud platform. Alibaba, a Chinese operator, may try to counter the Jio-Microsoft pricing, but privacy and political concerns may push customers to Jio.

So how will the peers respond?

It is clear that Jio is looking to replicate its telecom price war success story in the cloud space: offering free and discounted cloud solutions that will eventually force bigger peers to match tariffs while pushing smaller rivals out of business. Amazon, Google, and Alibaba will thus likely come up with bundled connectivity solutions at cheaper rates. Another likelihood is more webscale partnerships with local telcos. Airtel, which already operates data centers through Nxtra Data and is on an expansion spree across India, could well be a beneficiary. But it remains to be seen whether these efforts will be enough to keep the Jio-Microsoft duo at bay; Jio’s mobile rivals are still struggling to recover from its disruption of telecom. At the least, the Jio-Microsoft partnership will help accelerate India’s cloud adoption and digital transformation.


Tencent Holdings: 2Q19 Earnings Snapshot

Tencent posted its 2Q19 earnings on August 14, 2019. MTN Consulting’s one-slide review is now available for download:

MTN Consulting – Earnings snapshot 2Q19 – Tencent

 

Cover Image: Tencent President Martin Lau (source: Tencent)


5G and Data Center-Friendly Transmission Network Architectures

Introduction

In the last few years the demands from webscale network operators (WNOs, Figure 1) on transmission network architectures have changed considerably. Beyond raw capacity and lower costs, webscale players now prioritize highly scalable, advanced point-to-point bandwidth bundling interface technologies.

Figure 1: List of Webscale Network Operators (WNOs)

Source: MTN Consulting, LLC

Webscale operators’ field of expertise is the data center, and most planned, at least initially, to rent capacity as leased lines from telcos (or telecommunications network operators, aka TNOs). However, many TNOs did not have end-to-end transmission networks able to support webscale needs for capacity, latency, and cost. Further, telco networks were not flexible enough to rapidly follow WNOs’ needs for modifications, additions, and changes to the services they bought.

Hence several years ago, WNOs themselves decided to build their own backbone and regional transmission networks, sometimes linking continents. Undersea, the WNOs either leased capacity from existing submarine consortia systems or started to build submarine cables for their own dedicated use. The largest WNOs, such as Microsoft, Facebook and Alphabet, have increasingly favored the latter (self-build) approach. With these initiatives, WNOs seek to have full control of the transmission network, and adequate time to market for their needs.

As webscale players have built out their networks, they have become more influential across the industry.  Their buying power alone is a major reason; Figure 2 shows how webscale operator capex has grown dramatically since 2012, while telco capex has stagnated.

Figure 2: Capex – telco vs webscale (US$B)

Source: MTN Consulting, LLC

At the same time, the telco market is far larger, and the largest integrated telcos spend well over 10% of capex on their transport networks. These telcos are heavily investing in the transformation of their transport networks, and supporting 5G is a central goal.

In the past, mobile services were sold on the back of convenience of use. With few considerations beyond coverage and no capacity objective, there was never a firm commitment to service quality. 5G is probably the first access network whose services are subject to a wide variety of SLAs, from best effort to non-congested, and very low latency services with limits as low as 1ms. 5G will help operators move from best-effort services for all to a tiered, service level agreement (SLA)-based portfolio. Telcos hope this will help them be more profitable, at least for the more sophisticated services.

Network slicing poised to play important role

The big change in transmission network strategy is that a telecom operator’s planning, design, engineering and operations will soon be subject to much tighter contracts and commitments.

For years, wild overbooking levels have been the norm, especially for mobile services, and networks were in most cases engineered for coverage alone. This won’t be possible for the next generation of services, which will require more than a 10x increase in bandwidth, and 10x less latency than the current generation.

In addition, webscale operators and large enterprises have demanding network KPI requirements. To serve this market, telcos must develop their transmission network end to end with enough flexibility to satisfy the capacity growth needs and resiliency requirements of these customers.

TNOs and WNOs both accept that the demanding requirements on bandwidth, latency and operational scalability to ensure short time to market for 5G services cannot be supported with existing network architectures.

A potential solution is “network slicing”, which starts with adding more TDM capabilities in the data plane to provide hard separation in how services with different KPIs use network resources. This separation is orchestrated by a centralized SDN control and management plane.

Network slicing brings improvements to traffic engineering, with clear KPIs for bandwidth, latency and packet congestion. That helps to support all types of services over the same network infrastructure. Low priority services such as web browsing are effectively separated from network resources dedicated to services with demanding SLAs such as low latency leased lines or 5G inter-vehicle communications.

Operators pursuing FlexO technology to help cope with looming Shannon limit

Historically the main requirements telcos have standardized for transmission network architectures and platforms are high resiliency, powerful operations administration maintenance features, multiservice support, and backwards compatibility with legacy platforms.

This makes a lot of sense, as most of the costs of running the network are operational in nature, such as repairs and maintenance. Further, multiservice capabilities can facilitate the migration of legacy services to newer platforms. This reduces the need to support overlaid networks, and also avoids the cost of capacity expansions on older platforms at or near their end-of-life (EOL) dates.

In recent years, technological developments have pushed transmission networks towards the limit in the bandwidth per distance product, or the “Shannon limit”. The transmission technology is starting to hit the limits of the fiber medium.

One way to cope with this comes from the ITU, with its Flexible OTN, or FlexO, standard (G.709.1/Y.1331.1). FlexO allows client OTN handoffs above 100Gbps by defining an “OTUCn” modular structure: “an aggregate OTUCn (n ≥ 1) can be transferred using bonded FlexO short-reach interfaces as lower bandwidth elements.” FlexO also supports standard 100GbE optical modules.

FlexO has led telcos to consider how to fully exploit the flexibility of coherent transmission systems, allowing very high capacity transmission on non-regenerated short links, say 400Gbps over 300km, and lower capacity transmission over longer links, for example 100Gbps over 1,500km (figures for illustration only).

FlexO can bundle a number of lower rates at the TDM level to serve a higher capacity service over very long distances. For instance, through inverse multiplexing (bundling), a 400G service interface could be carried over four 100G links for (say) 1,500km without regeneration.
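As a rough illustration of that inverse-multiplexing arithmetic, the sketch below shows how a high-rate client maps onto bonded lower-rate members. The helper name and rates are ours for illustration, not taken from the G.709.1 text:

```python
import math

# Illustrative FlexO-style bonding: a high-rate OTUCn client is
# carried over n bonded lower-rate short-reach member interfaces.

def flexo_bundle(client_gbps: float, member_gbps: float = 100) -> int:
    """Number of bonded member interfaces needed to carry the client."""
    return math.ceil(client_gbps / member_gbps)

print(flexo_bundle(400))  # → 4: a 400G service rides four 100G links
print(flexo_bundle(250))  # → 3: partial members still count in full
```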

True to their backwards compatible requirements, telcos have made sure that FlexO supports 100G transmission requirements, and is an extension of existing OTN standards. This should simplify the roll out of FlexO on existing platforms.

FlexE to improve utilization, end-end manageability and router-transport connectivity

Operators – both telco & webscale – have also been exploring breakthroughs in the interfaces between transmission systems and servers and routers.

Aligned with FlexO, the Optical Internetworking Forum’s Flexible Ethernet (FlexE) supports similar schemes of bundling and multiplexing of interfaces between routers and transmission systems. FlexE offers a way to transport a range of Ethernet MAC rates whether or not they correspond to existing physical (Ethernet PHY) rates. Network utilization should improve, as should end-end manageability. One key element of FlexE was that Ethernet would grow within a TDM frame. This may pave the way to network slicing through the use of hard boundaries between tranches of services with different SLAs.
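The idea of Ethernet growing within a TDM frame can be sketched concretely: in OIF FlexE, each 100G PHY is divided into 20 calendar slots of 5Gbps, and a FlexE client’s MAC rate maps to a set of slots. The simplified allocator below is illustrative only, not the spec’s actual calendar distribution algorithm:

```python
# Simplified sketch of FlexE calendar slot allocation (illustrative).
# Assumes the OIF FlexE slot granularity: 20 slots of 5 Gbps per 100G PHY.

SLOT_GBPS = 5
SLOTS_PER_100G_PHY = 20

def allocate_slots(client_rates_gbps, n_phys=1):
    """Greedily map FlexE client MAC rates to calendar slot counts."""
    free = n_phys * SLOTS_PER_100G_PHY
    slots = []
    for rate in client_rates_gbps:
        needed = rate // SLOT_GBPS
        if needed > free:
            raise ValueError(f"not enough calendar slots for {rate}G client")
        slots.append(needed)
        free -= needed
    return slots, free

# A 25G and a 50G client on one 100G PHY occupy 5 and 10 slots,
# leaving 5 slots (25G) of hard-partitioned spare capacity.
slots, spare = allocate_slots([25, 50])
print(slots, spare)  # → [5, 10] 5
```

Because each client owns its slots outright, a congested low-priority client cannot steal capacity from its neighbors, which is the hard separation that network slicing relies on.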

Most webscale operators lack an access link to the end user, making them rely heavily on telcos. And smaller webscale players like Netflix rent their clouds from other providers. Maintaining control of the user experience is an uphill battle. FlexO and FlexE help achieve this, in theory. On the UNI side, a WNO transmission network would now use FlexE interfaces with data platforms and servers and storage. On the NNI side, towards the fiber and other transmission systems, the WNO would use FlexO interfaces and standards.

Transmission interoperability improving due in part to the webscale push

Interoperability is something transport engineers always wish for but rarely achieve, due in part to incompatible network management interfaces. Further, with coherent transmission there is a problem of interface incompatibility between vendors, each of whom may be more interested in higher performance and feature differentiation than in simplicity.

The telco response to the interoperability challenge has generally been to achieve subnetwork level interoperability rather than network element interoperability outright.

Things will change, though, as FlexO could be called the first optical standard that thrives on multivendor equipment operations.

Furthermore, webscale operators have designed simple transmission platforms and aimed to use cheap components already available from larger industries. Examples include the use of Ethernet interface components at 25G and 50G that were originally proposed for intra-data center connectivity and rack cabling between servers and top-of-rack switches. These will also be used in 5G base stations and mobile cloud engine platforms that require a transmission network to interconnect.

Conclusions

There is a growing alignment in the requirements for transmission network architectures across telecommunications and webscale network operators. They both need more flexible ways to grow their networks and manage them on an end to end basis. They want to benefit from low cost, open source components and procurement, but adapt technology to suit their customer base. They need to be able to support different classes of service and traffic. Even when providing free services, operators need to deliver a high quality of experience in order to monetize.

5G transport and data center interconnectivity services pose such a challenge to both TNOs and WNOs that work-arounds will not make up for limitations in either the data centers or the network. For many operators, building a transmission network that supports network slicing principles will require a fresh start and new investments.

Source of cover image: CommScope.

[Note: a condensed version of this article first appeared at Telecomasia.net.]


Network disaggregation shaping up as crucial to telecom industry’s future

Network disaggregation is one of those topics that is hard to build an audience around. Its appeal is mostly on the cost side, and it isn’t short and enticing like “5G”. It also means different things to different people, even within the same operator’s network department. Yet OFC sessions made clear how important this concept is becoming for operators, both telco and webscale.

Post-OFC news reinforced this, as Nvidia announced on March 11 that it would pay $6.9B for Mellanox Technologies. Mellanox has been an advocate of open networking for years in forums like the OCP and ONF, pitching its portfolio as an “Open Ethernet approach to network disaggregation.” For anyone wondering if this approach had market appeal, the Nvidia deal may have tipped the scales.

AT&T, NTT among the big telcos making moves towards disaggregation

In the optical networks space, disaggregation generally refers to open line systems, where line systems and transponders are decoupled. That allows for faster upgrades of transponders, and avoids vendor lock-in. AT&T is on board here, as Scott Mountford confirmed at Monday’s “Open Platform Summit”, saying “we’ve been pretty vocal these last few years about open optical networks”. That includes founding support for the Open ROADM MSA, which was featured in a demo at OFC involving AT&T, Orange, Fujitsu, Ciena, and the University of Texas at Dallas.

More broadly, large operators and select vendors have been trying to promote a “white box ecosystem” where hardware can be decoupled (or disaggregated) from software. AT&T made a splash in December when it announced it would deploy white box routers at up to 60,000 towers over the next few years. The company will release as open source the software it is writing for the routers. A large operator like AT&T can make this early commitment, but most others are more cautious. At a 5G session on Monday, AT&T’s Kent McCammon noted that standards bodies like the ONF, Open Compute, and the Linux Foundation are important because, in order “to reduce costs we need to simplify operator requirements around commonalities”.

For Japan’s NTT, lowering network opex is a central goal of the white box shift. NTT’s Akira Hirano discussed how white boxes can help operators lower opex through “zero touch functions”. The company cited its use of the Cassini white box as a success, because it automates L3 network configuration, requiring only one command. However, the NTT speaker noted that the application is only for data center interconnect, and that “for sure” this is not in use in long haul networks yet.

AT&T also underscored the gradual nature of change in telco fiber networks. They have been built over decades, have a range of different attached network equipment, and are subject to a variety of depreciation rates. AT&T’s Mountford also noted that “operational systems need development” in this area, in order to actually manage decoupled network elements. That is something the webscale sector is able to attack more easily, given their relatively simple networks.

Google and Microsoft full steam ahead

The biggest webscale providers spend billions per year on their networks. Most have embraced open networking from the start. Microsoft’s Mark Filer stated at the Summit that “open and disaggregated networks are already powering Microsoft’s cloud”. In making this happen, he emphasized the importance of a set of software tools built internally, “Microsoft SDN”, which includes a topology engine, zero touch configuration tools, data collection tools, and alerts & correlations.

Similarly, Google’s Eric Breverman emphasized software in his talk on “Optical Zero Touch Networking”. The goal of ZTN, Breverman explained, is essentially to “keep people from actually touching the network”. Humans make mistakes, and they are too costly to keep hiring at the same pace as traffic growth. Automatic network configuration is important. Google says it now supports intent-driven networking on 50% of “Google’s Production Optical Network”. OpenConfig is important here, as it makes working across multiple vendors much easier than TL1 and SNMP do.

Telcos need software skills

It’s no surprise that telecom operators are eager to lower the cost of growing and operating their networks. Open platforms have the potential to contribute, and not just in optics. Building the right software tools to manage these platforms is crucial, though, and webscale providers are further along than telcos. As an analyst, I have to wonder whether telcos need to reach deeper into their pockets for R&D. AT&T, one of the biggest telco spenders, devoted just 0.7% of revenues to R&D in 2018, down from 1.3% in 2014. Webscale R&D spending averages about 10% of revenues, and it shows.

Cover image: Shutterstock.


Facebook stepped in it this time

Whether you’ve joined the #deletefacebook camp or not, it’s hard to deny that Facebook has dug a deep hole for itself this time.

Yesterday’s NYT report was a harsh assessment of the company’s trustworthiness. It’s worse when combined with the late September news that Facebook had “exposed the personal information of nearly 50 million users”. These two reports – and a range of more brutal looks at the company – highlight the risks of trusting any large company with your data, much less the volume and sensitivity of data which Facebook demands. For a company that relies almost entirely on advertising for revenues, this is serious.

Immensely profitable. Still.

Let’s not cry for Facebook though. It has had an incredible run. The company’s 12 month revenues have grown from under $30B in 2016 to over $50B for the period ended September 2018 (figure); even the relatively modest 31% YoY growth recorded in 3Q18 far outpaces most tech companies.

Facebook’s growth has delivered high profitability rates, whether measured by net margin (38% annualized in 3Q18) or free cash flow to revenues (34% in 3Q18). Its excess cash has allowed it to invest in both capex and internal R&D at relatively high rates. Facebook’s capex deployment ratio (capex to revenues) is now higher than most telecom operators, at 23%.

You could also argue that Facebook’s high rates of proprietary tech investment (R&D) and capex spend on strategic infrastructure (mostly data centers) have driven earnings, not the other way around. In reality, it has probably been a virtuous circle for FB so far, but one dependent on incredible growth rates in usage and ad dollars. As Facebook’s advertisers see millions of users quitting or spending less time on the platform, clicking on fewer ads, and turning fewer of those clicks into transactions, they will find new outlets. Amazon is counting on it, in fact, with its recent foray into ads, and it has been successful so far.

(For more on Amazon’s strategy, see MTN Consulting’s Webscale Playbook: Amazon).

Effect on vendors

As the figure above hints at, Facebook spends big on the network infrastructure behind its business: for the first nine months of 2018, its capex on Network, IT & Software was $4.47B, about half of the company’s $9.6B total capex. Any slowdown in growth will eventually hit network spending.

Even if Facebook does some development in house, now including chips, it still buys lots of tech (hardware and software). Some companies & markets to watch:

Servers: Facebook works with several contract manufacturers in Taiwan for production, including Quanta Computer, Wistron, and Wiwynn. These companies may see the effects of any slowdown first, if new server orders fall due to slower traffic growth rates, and/or new data center opening dates are delayed.

Chips: as discussed in a previous blog, Facebook made the big decision to self-develop chips earlier this year. That poses a modest competitive threat to Nvidia, Intel, and Qualcomm. The economic and operational incentive to keep building its own chips hasn’t gone away since then. If any privacy concerns can be unearthed in the chip area, though, Facebook will certainly face them. More interesting is the potential impact on Qualcomm, whose chips Facebook uses for its rural connectivity project, Terragraph. This program could be at risk, even after recent trials in Hungary, Malaysia, and Indonesia.

Subsea communications: Facebook is a founding investor/owner of five major submarine cables: an Argentina-Brazil cable with Globenet, two transatlantic cables (MAREA and HAVFRUE), and two transpacific cables (JUPITER and PLCN). These projects have long planning cycles and probably would not be affected by a FB slowdown. However, Facebook’s current search for the right cable investment in Africa may be delayed, or require more partners (Google, Microsoft and Amazon are also looking at the region).

Optical components: Lumentum, NeoPhotonics, and Applied Optoelectronics are FB’s main OC vendors; for the same reasons cited above, they could face some volatility in demand from Facebook.

Earnings calls over the next few weeks may be revealing.

Source of photo: Facebook


Alibaba Takes On Amazon, Google, And Microsoft Head-On In India’s Cloud Market

It’s a battle between the emerging giant of the east and the pioneers of the west in the highly competitive Indian cloud computing market, as Alibaba prepares to take on the “big three”: Amazon, Microsoft, and Google.

The Chinese ecommerce behemoth, which announced international ambitions for its cloud business (called Alibaba Cloud/AliCloud or AliYun) in 2015, recently launched a cloud data center in Mumbai, India. Alibaba has been active in India since 2007, but this data center is the first notable cloud investment made by any Chinese player in India.

Alibaba’s decision to open an India-based data center looks like a step to kill two birds with one stone: grab cloud market share from the big three, and allay data privacy fears by establishing a local data center. This was a logical move, and low risk given the company’s deep pockets; it has over $33B in cash & stock on hand. Alibaba’s progress in India’s SME segment is worth watching over the next few quarters. Alibaba aims to be a top global cloud player, and India is one important testing ground. It will also test Alibaba’s ability to navigate some messy international political conflicts.

AWS, Azure, and GCP all opened data centers in India in 2016-17

Global cloud giants Amazon and Microsoft continue to scale up their cloud businesses in India: Amazon pumped US$215 million mid last year into its Indian data services arm, which offers cloud computing solutions. Microsoft recently partnered with the Indian ecommerce giant Flipkart and ride hailing services provider Ola to provide custom solutions via its Azure platform. Google completed three data centers across India in 2017.

The table below summarizes the India presence & cloud capabilities of the world’s largest webscale network operators. Amazon, Microsoft & Google are shown first, as they have the largest presence locally, followed by Alibaba. Apple, Facebook, Baidu, and Tencent each have India operations but no local data center (yet).

Table 1: Webscale network operators in India: local presence & data centers

Amazon
  Commercial operations:
  • Amazon India opened June 2013
  • Acquired local payments company Emvantage Payments Pvt. Ltd. in 2016
  • AWS India has six office locations in Bengaluru, Chennai, Hyderabad, Mumbai, New Delhi and Pune
  Network infrastructure:
  • Launched its India network with two data centers in Mumbai in 2016
  • Since the launch, the AWS customer base in India grew by more than 50%, from 75,000 in 2016 to 120,000 in 2017

Microsoft
  Commercial operations:
  • Active since 1988
  • 6,500 employees
  • Has more than 9,000 cloud partners in India
  Network infrastructure:
  • First of the top three cloud vendors to launch cloud data centers in India, at three locations in 2015: Pune, Chennai, and Mumbai (serving the Central India, South India, and West India regions respectively)
  • Provides all three forms of cloud: public, private and hybrid

Google
  Commercial operations:
  • Started operations in 2004
  • Approximately 1,850 employees
  Network infrastructure:
  • Launched its first cloud region in Mumbai, India in 2017, hosted across three data centers
  • Primarily serves the West India and South India regions

Alibaba
  Commercial operations:
  • Active in India since 2007, mostly through notable investments in online retail (Snapdeal), digital payments (PayTM), and online grocery (BigBasket)
  Network infrastructure:
  • First India-based data center launched in Mumbai in January 2018
  • Provides large-scale computing, storage resources, and Big Data processing capabilities

Apple
  Commercial operations:
  • Started operations in 1996
  • Opened a development center in Hyderabad in 2016, and an app accelerator facility in Bengaluru in 2017
  Network infrastructure:
  • None in India currently

Facebook
  Commercial operations:
  • Started operations in 2010
  • Has offices in Hyderabad, Mumbai, and Delhi NCR
  Network infrastructure:
  • None in India currently

Baidu
  Commercial operations:
  • Launched its India office in Delhi NCR in 2015, which employs ~10 people
  • Claims to have 45 million monthly active users in India for its mobile applications
  Network infrastructure:
  • None in India currently

Tencent
  Commercial operations:
  • Active since 2015 through investments in Indian startups such as Practo (2015), Hike Messenger (2016), Flipkart (2017) and Ola (2017)
  • Announced reviving its India business in early 2018, with an investment of US$200 million in gaming
  Network infrastructure:
  • None in India currently

Source: MTN Consulting

Alibaba is the only Chinese webscale provider with a data center in India.

Mumbai data center to support a range of cloud capabilities, including AI-based solutions

According to Alex Li (Asia Pacific General Manager – Alibaba Cloud), Alibaba’s new Mumbai facility is a “mega-scale” (aka webscale) data center that will cater to the regional customers in the Indian peninsula, and could support the regional cloud needs for the next 3-5 years. Prior to its construction, Alibaba provided services to a number of Indian companies through its data centers located elsewhere. Now Alibaba will be better positioned to serve these existing customers more cheaply and reliably, while offering localized services to address the increasing market demand from small and medium enterprises (SMEs).

The Mumbai data center provides a broad suite of cloud computing and data intelligence capabilities that include elastic computing, database, storage and CDN, networking, analytics and big data, containers, middleware, and security. In addition, Alibaba Cloud may introduce its proprietary AI-based offering, ET Brain, into the Indian market. ET Brain has applications in industrial manufacturing, city administration, urban transport, and logistics. This strategic step could help capture the lucrative infrastructure and government sectors in the country, as AliCloud has started to do in Malaysia where its City Brain AI-based offering is used to help ease traffic woes. Such an offering could play into the Indian government’s Smart Cities Initiative.

Alibaba will target the underpenetrated yet growing SME segment

In evaluating services, large enterprises tend to emphasize brand, reliability and global coverage issues. SMEs are more open to a new entrant, niche provider as long as the price is right. This is the case in India’s cloud services market. As cloud adoption rates grow in India, Alibaba sees an opening for itself in targeting the country’s 51 million-strong SMEs. This effort is a key part of Alibaba’s globalization strategy.

As a new cloud entrant, Alibaba needs to build its brand and customer confidence. Partnerships will help Alibaba build compelling service offerings. That’s especially important in the network space, where Alibaba needs on-ramps to its cloud network. Recognizing this, the Chinese cloud giant has partnered with Global Cloud Xchange (GCX), a subsidiary of Reliance Communications, to enable direct access to Alibaba Cloud Express Connect via GCX’s CLOUD X Fusion service. Prior to this, AliCloud relied on Tata Communications’ IZO™ Private Connect service.

But can it survive the increased data security scrutiny of Chinese players?

Alibaba’s reasonably priced offerings seem positioned well to gain interest in the Indian SME market. However, Alibaba is a Chinese company, and viewed as such by Indian government authorities. Many countries, including India, have raised concerns around data security and privacy in their dealings with Chinese tech companies.

Just a few months before AliCloud’s Mumbai data center launch, in fact, Alibaba faced a huge controversy in India: its internet browser, UCWeb, came under the Indian government’s lens for allegedly sending personal data on its Indian users to Chinese servers. Alibaba risked a ban if found guilty. A few months later, the mobile browser application was taken down from the Google Play Store.

With this case fresh in the Indian public’s mind, Alibaba worked hard to alleviate the data security and privacy fears of Indian customers and government authorities prior to its cloud data center launch. The company claims to comply with the highest cyber protection standards recognized by a number of international organizations. For instance, Alibaba Cloud is the first Asian cloud provider to complete the assessment for the Cloud Computing Compliance Controls Catalogue (C5), set out by Germany’s Federal Office for Information Security, including the additional requirements. This definitely boosts Alibaba’s credentials, as only five cloud providers, including Amazon and Microsoft, have obtained C5 validation.