
A look back at the biggest stories this week


Whether it’s important, depressing or just entertaining, the telecoms industry is always one which attracts attention.

Here are the stories we think are worth a second look at this week:


GSMA cosies up to O-RAN Alliance

The GSMA, the telco industry lobby group, has announced a new partnership with the O-RAN Alliance to accelerate the adoption of Open Radio Access Network (RAN) technologies.

Full story here


Europe backtracks on market consolidation opposition

The General Court of the European Court of Justice has annulled a decision made in 2016 to block the merger between O2 and Three in the UK, potentially opening the door for consolidation.

Full story here


Huawei CFO loses first legal battle in extradition case

Huawei CFO Wanzhou Meng, the daughter of Ren Zhengfei, has lost her first legal battle in Canada and will now have to face an extradition case.

Full story here


Data privacy is in the same position as cybersecurity five years ago

It has taken years for the technology and telecoms industry to take security seriously, and now we are at the beginning of the same story arc with privacy.

Full story here


Indian telco association pushes for ‘floor tariffs’ on data pricing

In an open letter to India’s telecoms regulator, the Cellular Operators Association of India (COAI) has pressed for quicker decision making on pricing restriction rules.

Full story here


UK’s National Cyber Security Centre launches another Huawei probe

The National Cyber Security Centre (NCSC) has confirmed it is attempting to understand what impact potential US sanctions directed towards Huawei would have on UK networks.

Full story here


 


Amazon attempts to capitalise on society’s anger, but leaves questions unanswered


The general public is angry with authority today and it appears Amazon is attempting to capitalise on this sentiment with a shallow PR stunt.

Earlier this week, IBM made a very powerful statement: it would indefinitely halt the development and commercialisation of all facial recognition technologies. The dangers of this technology are too great, it said, so it would wait for legislative action. Big Blue has seemingly put societal responsibility ahead of commercial gain and should be applauded.

Amazon has attempted to claim some of the same praise with its own announcement, but it is not the same. It leaves more questions than answers and appears to be a shallow attempt to gain favour in the court of public opinion without making a material sacrifice.

This is the cynical view on this announcement, but it is difficult to have any other stance when the company does not answer simple and direct questions.

Firstly, what has Amazon actually said? The statement is as follows:

We’re implementing a one-year moratorium on police use of Amazon’s facial recognition technology. We will continue to allow organizations like Thorn, the International Center for Missing and Exploited Children, and Marinus Analytics to use Amazon Rekognition to help rescue human trafficking victims and reunite missing children with their families.

We’ve advocated that governments should put in place stronger regulations to govern the ethical use of facial recognition technology, and in recent days, Congress appears ready to take on this challenge. We hope this one-year moratorium might give Congress enough time to implement appropriate rules, and we stand ready to help if requested.

After reading this statement, Telecoms.com had a few questions:

  • Does this mean the partnerships between Ring devices and Police authorities will also end?
  • Will AWS continue to collect personal data for Rekognition during this period?
  • Will existing contracts for Rekognition with police authorities be honoured or put on hold?
  • Will existing contracts with border authorities (ICE in the US) be honoured or put on hold?
  • Is this an order which is applicable worldwide or just in the US?
  • What happens if no legislation is tabled? Will the stop-order be extended?

In response to these enquiries, Amazon pointed Telecoms.com back to the original statement, claiming there is sufficient information already available.

The issue is that there is not enough information.

If some existing relationships are being honoured by Amazon, would it not be fair to presume the majority, if not all, are? Does this mean the work with the Immigration and Customs Enforcement (ICE) in the US will continue? Is this not the sort of implementation the one-year moratorium is supposed to be protecting the general public from?

Another question which has been raised is whether this year-long moratorium would be extended should Congress not get its ducks in a row. 12 months seems like a long time, but in politics issues can be pushed around the aisles without actually being addressed for decades. Should we get to June 2021 with nothing accomplished, Amazon has given no indication of whether it would continue to hold back its controversial technology.

It is also curious that, if Amazon has been campaigning for stronger regulations to govern this segment, it entered commercial relationships with organisations where there is little transparency or accountability. It seems the Amazon moral compass can be sent bananas with a big enough cheque.

Ultimately this is another case of corporate opportunism, a cheap attempt from a business which already divides opinion to get on the good side of the general public.

Jeff Bezos and Amazon have consistently been criticised for poor ethics and principles in the way employees are treated, so the sudden discovery of a conscience is a very curious development. It should surprise no-one that there is very little depth to this promise, a paper-thin commitment which will perhaps be forgotten once the 12-month shot clock counts down to zero.

Aside from the shallow attempt to curry favour with the general public, the other significant takeaway from this announcement is whether Congress can pass any meaningful regulation or legislation by June 2021.

For a bill to be enacted into law there is a pretty long-winded procedure. Firstly, the bill has to be written by a politician, then there is the shoulder-rubbing process of getting sponsors and supporters, before being introduced. At this point it is sent to one of the sitting committees who review, research, and revise the bill before voting on whether or not to send the bill back to the House floor. Should the bill get back to the House floor, it is then debated and voted on.

Should the bill pass through the House, it is then sent to the Senate where many of the same procedures are repeated. If it successfully survives the scrutiny of the Senate, it is then sent to the President to be signed into law.

However, at any point the bill could be scrapped and sent back to the first step, such is the partisan nature of politics today. Should a Democrat introduce a bill, the likelihood is that the Republican party will oppose it (and vice-versa). Considering the House of Representatives has a Democratic majority and the Senate a Republican majority, the bill could be ping-ponging everywhere for a considerable period of time.

For a bill to be conceived, fine-tuned, navigated through both political chambers and signed into law by the President within 12 months, without any legal or lobbying challenges along the way, pretty much everything would have to function perfectly.

The smart money would be on nothing changing. The cynic would suggest this topic will be forgotten before too long and Amazon will be able to return to business as usual in 12 months.



Europe congratulates itself for GDPR, but more needs to be done


While some might be looking for holes to pick in Europe’s General Data Protection Regulation, the rules have laid the foundations of a safer and more consumer-empowered digital economy.

The European Commission might be cumbersome, bloated and short-sighted in some areas, but GDPR should be applauded. These are rules to govern a digital economy which offer control to the consumer, force transparency on corporations and drive regulators towards a digital mindset. The success of these rules should also be judged by those who follow; Chile, South Korea, Brazil, Japan, Kenya, India and California have all been spurred on to redraft and reimagine privacy.

However, it would be irresponsible to suggest this has been a perfect implementation.

“The GDPR has successfully met its objectives and has become a reference point across the world for countries that want to grant to their citizens a high level of protection,” said Didier Reynders, the European Commission’s Commissioner for Justice. “We can do better though, as today’s report shows.

“For example, we need more uniformity in the application of the rules across the Union: this is important for citizens and for businesses, especially SMEs. We need also to ensure that citizens can make full use of their rights. The Commission will monitor progress, in close cooperation with the European Data Protection Board and in its regular exchanges with Member States, so that the GDPR can deliver its full potential.”

Theoretically, the rules are sound, but it is the reality of GDPR which is perhaps falling short. As Reynders points out, the rules have been haphazardly applied across different European nations, while they are still overly complicated for some SMEs.

Looking at the national data protection authorities (DPAs), those empowered to uphold the new digital privacy standard, there is a lot of work to be done. Budgets increased 49% between 2016 and 2019, while headcount increased 42%, but this is not uniform across the Union.

Open source web browser Brave has been particularly critical of the implementation of GDPR, directing disapproval towards certain nations which seem unfussed about the rules. For example, Estonia’s DPA has an annual budget of €800,000 while Romania’s has €1.3 million. These are extreme examples, but half of the nations have budgets of less than €5 million, and only six have more than 10 specialist tech investigation staff.

Other questions have been raised as to whether the rules are flexible enough to deal with the introduction of new technologies.

“Whilst we’ve seen some justifiably big fines dished out, unfortunately, as organisations continue to digitally transform, the lack of clarity around new technologies like blockchain and AI is actually mostly hitting law-abiding companies that are just trying to be compliant,” said Chris Harris, EMEA Technical Director at Thales.

“We need to ensure GDPR operates as the protective bubble around personal information that we all want, without restricting the innovation and development that the world needs from these disruptive technologies.”

While it is hardly uncommon for companies to poke the bureaucratic bear, progress has been made. Fortunately for everyone involved, the European Commission recognises its shortcomings in delivering GDPR, though governments and national regulators should also shoulder their fair share of the blame.

More can be done, but it was never going to be a perfect implementation and anyone who says otherwise should not be considered a rational individual.

Most Brits reckon contact tracing data will be misused but will provide it regardless


Identity software company Okta has surveyed a bunch of people in the UK and found that we’re among the most willing to provide location data to help fight COVID-19.

This blitz spirit remains in spite of a healthy scepticism about data privacy, with 84% of Brits believing their contact tracing data will be used for purposes unrelated to COVID-19, most probably advertising. 60% of UK respondents said they would be comfortable in providing location data to help the cause, much higher than the Netherlands (45%), Germany (47%), the US (48%) and Australia (49%).

“It’s great to see that despite privacy concerns, UK citizens are willing to provide their data in order to aid containment of COVID-19,” said Jesper Frederiksen, VP & GM EMEA at Okta. “However, it’s important that this trust is not abused. Over half (58%) of British citizens want a limit on who can access this data and many (46%) want a time limit on how long it can be tracked. Those collecting this data need to ensure they restrict who can access it and what it is used for.”

As ever with these corporate surveys, the purpose of the whole exercise will have been to generate demand for their products and services. So a lot of the canned quotes amount to ‘this just goes to show how important it is to protect your digital identity,’ something that Okta specialises in, of course.

But, assuming the data is clean, the findings remain valid. There doesn’t seem to have been a question asked regarding concerns about governments misusing the data themselves, as opposed to just commercializing it. If it was made clearer that the UK government could use contact tracing data to fundamentally infringe on civil liberties then we suspect that 60% number would start to fall rapidly.

Law enforcement agencies are reportedly buying hacked data


An investigation by Vice has revealed the market in stolen data counts government agencies as customers, as they seek to bypass data privacy laws.

The Vice report starts with the revelation that a company called SpyCloud is unapologetic about dealing in stolen data. “We’re turning the criminals’ data against them, or at least we’re empowering law enforcement to do that,” Dave Endler of SpyCloud told Vice. But the data doesn’t belong to the criminals, of course, it belongs to the regular people law enforcement is supposed to be protecting.

So what we have here is a multi-layered moral conundrum. On one hand, since the data has already been nicked, doesn’t using it to catch bad guys go some way towards offsetting the crime? On the other, by growing the market for such data, isn’t law enforcement directly incentivising further such theft?

But arguably the most disturbing part of this story is that the government purchasers of this information seem to be motivated, at least in part, by a desire to circumvent existing data privacy law. Presumably frustrated by laws that prevent them hacking whoever they want in the name of justice, they’re simply reverting to the black market to get what they want. If a civilian did the same, however, those coppers would presumably arrest them for breaking the law.

As the piece explores, due process is there to protect individuals and ensure that everyone receives equal treatment under the law. The recent EncroChat hack revealed law enforcement agencies are given a license to hack under certain circumstances, but this report indicates many can’t even be bothered with that much due process and would rather indirectly do business with the criminals themselves.

UK and Aussie privacy watchdogs to investigate Clearview AI


Privacy authorities in the UK and Australia have announced a joint investigation into Clearview AI, a US firm which provides facial recognition technologies.

In what might be seen as an ironic sequence of events, as the US acts as the international cheerleader in combating the Chinese threat to cybersecurity and privacy, two of its allies have launched a privacy investigation into one of its own firms. This is also set against a backdrop of police and intelligence authorities allegedly abusing privacy rights with the implementation of biased facial recognition technologies in various US cities.

According to a press release from the Office of the Australian Information Commissioner (OAIC) and the UK’s Information Commissioner’s Office (ICO), the probe will focus on Clearview AI’s ‘scraping’ techniques and its analysis of biometric data.

“The investigation highlights the importance of enforcement cooperation in protecting the personal information of Australian and UK citizens in a globalised data environment,” the statement reads.

The investigation will aim to understand whether the data scraping activities of Clearview AI are legal with respect to the Australian Privacy Act 1988 and the UK Data Protection Act 2018.

Clearview AI was founded in 2017, creating a database of biometric data which is sold to various authorities for identification purposes. Not only has the accuracy of this data and technology been questioned in the past, the sourcing of the data is also slightly suspect.

In January 2020, Twitter sent a cease and desist letter to Clearview, demanding the company stop sourcing data from its platform but also the deletion of any data which has been collected already. Facebook and Google took similar action in February.

While Clearview AI has persistently stated its technology and databases are only used by law enforcement agencies, a data leak contradicted these claims. Analysis of the breach in February suggested the company was working with 2,200 authorities, companies and individuals around the world including the NBA, Macy’s and Walmart. It appears the commercial remit of Clearview AI has been widened.

Although data scraping is a technique which has been used by financial analysis organisations for decades, in recent years it has taken a twist. In the early days, scraped financial data was used to build products, much as scraped personal and biometric data is today, but the difference lies in the privacy implications.

As privacy laws are being updated rapidly today, there are grey areas which can be taken advantage of or rules which are simply ignored, but there are regulatory issues with data scraping.

“Where businesses engage data scraping service providers, the business is responsible for providing the individuals with a privacy notice,” writes Fiona Campbell, a technology, privacy and outsourcing lawyer at Field Fisher.

“The privacy notice must contain specific information, set out in Article 14 GDPR, which includes data subject rights and how to exercise them – it must be provided to the individuals within one month of scraping their data.”

Interestingly enough, the United States Court of Appeals for the Ninth Circuit said data scraping companies do not have to seek approval from websites if the relevant pages are public. In this case, LinkedIn was blocked by the courts from preventing data analytics firm HiQ from scraping data, despite the social media company attempting to protect the privacy of its users.

The US and Europe certainly have different opinions on what privacy rights actually mean, hence the creation of GDPR and the EU-US Privacy Shield to extend the privacy rights of European citizens beyond Europe’s borders. This investigation from the ICO and the OAIC might be another step forward in protecting the interests of European and Australian citizens in countries where privacy rights are little more than irrelevant footnotes.

Turkey’s internet crackdown has users flocking to VPNs


Prospect of much tighter controls on social media sparks renewed interest in privacy tools.

Recep Tayyip Erdoğan, Turkey’s increasingly censorious president, has sparked another rush for VPN services. No, this is not the result of some government campaign to encourage VPN adoption, but a reaction to upcoming legislation that threatens to stamp out basic internet freedoms.

A new bill stipulates that the likes of Facebook, YouTube, Twitter, Instagram and TikTok will have to comply with requests to remove content or provide the identity of users. If not, the bandwidth allocated to these platforms can be limited by up to 95%.

Raising the hackles of net neutrality supporters still further was a proposal by the Nationalist Movement Party (MHP) to ban access to VPN apps that can get around Ankara’s punitive social media control measures. No surprise then that Turks have snapped up VPN services in their droves before a ban might come into place.

NordVPN, a VPN provider, reported a 20% growth in interest in its product “overnight”.

“Whenever a government announces an increase in surveillance, internet restrictions, or other types of constraints, people turn to privacy tools like VPN,” said Laura Tyrell, Head of Public Relations at NordVPN.

This is not the first time there’s been a spike in VPN interest in Turkey. NordVPN reports that it’s happened three times in the last five years, caused by bans on social media or restrictions on free access to information.

TikTok plans European data centre as heat rises


TikTok has seemingly taken Huawei’s position as the primary focus of US aggression in recent weeks, though the social media app is planning a data centre investment to ease European concerns.

Few regulatory or bureaucratic authorities have shown themselves to be as privacy conscious as the European Commission, perhaps explaining TikTok’s most recent announcement.

“Back in April, I wrote about our approach to security and explained how my team is laser-focused on building our advanced security infrastructure, designing relevant programs, and engaging with the industry to develop our capabilities further,” said Roland Cloutier, Global Chief Information Security Officer at TikTok.

“A core component of this commitment is our approach to data centre locations – and following a process that first began last year, today, we’re announcing our intention to establish a new data centre in Ireland, and our first data centre in Europe.

“This investment in Ireland, to the value of approximately €420million, will create hundreds of new jobs and play a key role in further strengthening the safeguarding and protection of TikTok user data, with a state of the art physical and network security defence system planned around this new operation.”

While European authorities have not taken as strident an approach to TikTok as the US has, there have been reports to suggest the Chinese application will be placed under greater scrutiny.

In May, Dutch privacy authorities said an investigation was being launched into TikTok to understand whether there is enough privacy protection for Dutch children. This might sound inconspicuous and irrelevant, but Al Capone was sentenced to 11 years’ imprisonment for tax evasion. A month later, the European Data Protection Board set up a probe to examine the data protection practices of TikTok.

Setting up a data centre footprint within the borders of the European bloc will not instantly dissipate the scowling looks; Silicon Valley has shown it can irritate the European Commission regardless of how much money is spent. But it is a good start. Ensuring European data remains in Europe will ease some concerns, though there will be plenty more facets to the investigations.


White House formally bans TikTok and WeChat in US


President Trump issued two executive orders on Thursday to prohibit all transactions with TikTok and WeChat in 45 days’ time.

The executive order to ban TikTok is the culmination of the angst the President has shown in public recently. The axe on WeChat, on the other hand, fell very quickly. Owned by the Chinese internet and gaming company Tencent, the messaging and payment service had only been cited once as one of the rotten apples, in an interview with Mike Pompeo, the Secretary of State, prior to the executive order.

The contents of the two orders are largely identical. Trump invoked two laws enacted in the 1970s, the International Emergency Economic Powers Act (IEEPA) (1977) and the National Emergencies Act (NEA) (1976) to justify his moves. The President found that these two mobile apps “developed and owned by companies in the People’s Republic of China (China) continue to threaten the national security, foreign policy, and economy of the United States.”

Their offences are deemed to include “automatically capturing vast swaths of information from its users.” In TikTok’s case, such data, including internet and other network activity information such as location data and browsing and search histories, “threatens to allow the Chinese Communist Party access to Americans’ personal and proprietary information — potentially allowing China to track the locations of Federal employees and contractors, build dossiers of personal information for blackmail, and conduct corporate espionage.”

In WeChat’s case, the data collected also includes “the personal and proprietary information of Chinese nationals visiting the United States”, which could allow “the Chinese Communist Party a mechanism for keeping tabs on Chinese citizens who may be enjoying the benefits of a free society for the first time in their lives.”

The executive orders forbid all transactions by any person or entity under the jurisdiction of the United States with ByteDance (the owner of TikTok) and WeChat, starting 45 days after the orders are issued. However, an exception is made for contracts entered into, or licences or permits granted, before the date of the orders.

An additional measure relates to WeChat’s “ability to transfer funds or other assets instantaneously”: those identified as having an interest in making such transfers will not receive prior notice of the measures, as such notice would render the prohibition ineffectual.

It is worth noting that the banning of WeChat does not extend to Tencent’s other business in the US, for example its gaming business.

The impacts of the orders on the users of the two applications and on the two companies that own them are different. While a large number of American users of TikTok will find the ban inconvenient (the app has been downloaded over 175 million times in the United States), they can find a replacement without too much difficulty, with Instagram likely to be the biggest beneficiary. For ByteDance, however, if the expanded acquisition deal with Microsoft goes through, it will mean the company retreating to become a China-only business.

WeChat, on the other hand, has only about 3 million users in the United States, largely the Chinese diaspora and those with close ties to China. But the impact on this small group will be much heavier. Since WhatsApp, Telegram and the like are inaccessible in China, WeChat has become the de facto communication channel for maintaining contact with loved ones living in China. More importantly for the livelihoods of many, a large number of export-oriented small companies and individual traders rely on WeChat to receive payments from their trading partners overseas. The asynchronous onslaught of COVID-19 in China and in the US has already driven some out of business. Taking away a vital tool in WeChat Pay could be a fatal blow to others. For Tencent, on the other hand, the impact would be tangible but limited, with its one billion users primarily in China.

If Mr. Pompeo’s comment that the US is going to take care of “countless more” Chinese software and technology companies operating in America is anything to go by, these two executive orders are highly unlikely to be the last. And it is hard for the Chinese companies to “sit this one out” until after the general election in November. Whichever party wins the presidency, as we reported earlier, facing off against China in the technology arena is becoming one of the very few issues that can win bipartisan support in American politics today. The most recent example is that, hours before the executive orders were issued, the Senate unanimously passed a bill to forbid federal government employees from using TikTok on government-issued devices.

Android 11 released with new messaging, control and privacy features


The latest version of the operating system that runs most of the world’s smartphones has been unveiled.

Android stopped using desserts and sweets to identify major new versions when it apparently hit a creative wall at version 9, which was rather lamely codenamed ‘Pie’. The prospect of resorting to Quince Jelly or some such excruciating compromise for Android 10 was clearly too bleak for the Android people, but they must now be regretting missing out on Rice Pudding.

Anyway, the not especially exciting new features are broadly grouped into three categories: managing conversations, managing connected devices and managing privacy. The drag-down notifications feature will now have a special section for active conversations, which is designed to make it easier to manage them if you’re involved in a bunch of them simultaneously.

There’s a new screen recorder function that extends the screen grab facility to video, which could have interesting privacy implications. A long press of the power button also now brings up a control panel for connected devices, which includes connected home and connected car, as the phone increasingly becomes the remote control for everything else.

While other people’s privacy may be threatened by the screen record function, the smartphone user is given more protection through tools that give them more control over the amount of access apps get to the phone’s functions. There is also an enhanced privacy and security auto-update feature through the Play Store.

It looks like this update is being rolled out quicker than usual, with Pixel users, as ever, at the head of the queue but, intriguingly, a bunch of Chinese smartphone makers (but not Huawei) also getting it at the same time. The new features are not very exciting, but any time the software on which most of the world depends for its mobile communications gets an update, it’s worth paying attention.

UK government determined not to let a good digital identity crisis go to waste


There is apparently a trust problem around digital identities and no wonder, with talk of vaccine passports being required for previously unrestricted activities.

To address it, the UK government today published its draft rules of engagement regarding the future use of digital identities. The big idea is to make it quicker and easier for people to verify themselves using modern technology; in other words, to create digital ID documents that are just as trusted as analogue ones such as passports and birth certificates.

On a lot of levels this is a great idea. We can already pay for things, store tickets and do banking on our phones, so why shouldn’t they be trusted for all other types of identification? If done properly, a digital identity would be completely trusted and be the only form of ID you need. This is described as a ‘trust framework’ in the government policy paper.

“Establishing trust online is absolutely essential if we are to unleash the future potential of our digital economy,” said Digital Infrastructure Minister Matt Warman. “Today we are publishing draft rules of the road to guide organisations using new digital identity technology and we want industry, civil society groups and the public to make their voices heard. Our aim is to help people confidently verify themselves while safeguarding their privacy so we can build back better and fairer from the pandemic.”

He had to go and say ‘build back better’ didn’t he? And then they wonder why some people get spooked by the raft of new rules and regulations being pushed through under cover of the pandemic recovery. Build Back Better is a slogan adopted by many global politicians to signify the opportunity to use the economic wreckage brought on by the Covid pandemic and resulting lockdowns to introduce much more radical reform than would otherwise have been tolerated by the public. It’s closely aligned with the World Economic Forum’s Great Reset initiative.

Again, what’s not to like? There are certainly plenty of things that need improving and if we can steer the recovery in the most constructive direction then surely that’s a good thing. As ever with grand, top-down initiatives, the devil is not just in the detail, but in the other consequences, intended or otherwise.

That’s why it’s disturbing to hear politicians use that phrase in the same sentence as digital identity. What, exactly, would a ‘better’ digital identity look like? What data would it contain? Under what circumstances would it be required? Would it be mandatory and, if so, what would be the punishment for not having one? These sorts of questions are why there has always been such resistance to the introduction of ID cards.

This announcement comes just a day after a government Minister admitted we’re in talks with other countries to introduce some kind of vaccine passport, without which we wouldn’t be permitted to travel. Once that precedent is set, who’s to say there won’t be other activities, such as going to the pub, that will now be denied us unless our papers are in order?

“Products that help digitally to verify a person’s identity are becoming increasingly important as more areas of our work and home lives move online,” said Cabinet Office Minister Julia Lopez. “Creating a common trust framework will give greater clarity and certainty to organisations who want to work in this field about what is expected of them. More importantly, however, it will help to deepen users’ trust and confidence in digital identities and the standards we expect in the safeguarding of their personal data and privacy.”

It’s good to see the government look digital issues in the eye and this seems like another example of us looking to move quickly, unencumbered by EU bureaucracy. Done well, a robust and trusted digital ID would open up a host of opportunities for the individual. But if the government tries to sneak in extra levels of surveillance and restrictions at the same time, it risks undermining public trust and thus the whole enterprise.

WhatsApp resists Indian mass surveillance demands


The Indian government is increasingly unhappy about its inability to intercept private communications.

Yesterday we reported that India has taken to raiding the offices of social media companies reluctant to enforce its censorship requests. Now we learn WhatsApp has been forced to take the Indian government to court over new rules requiring it to break its privacy protections.

Reuters has anonymous sources that told it the lawsuit had been filed in Delhi. It is contesting part of a new set of rules that requires social media companies to identify the ‘first originator of information’ to authorities. Leaving aside the chilling prospects of what the government would do with such information, such a backdoor would totally corrupt the end-to-end encryption built into such services.

Among its security and privacy FAQs, WhatsApp has a post titled ‘What is traceability and why does WhatsApp oppose it?’ Traceability is effectively what the Indian government is asking for, and WhatsApp argues implementing it would undermine security for everyone. The post also explores the appalling human rights implications of giving a government such power.

Back in the good old days, while governments couldn’t necessarily control all sources of public information, they could at least identify them. If the traditional media made a contentious claim they could contact the editor or owner and have a word. The capacity for individuals, as opposed to institutions, to spread information widely was very limited and expensive.

Social media has blown that model out of the water and, while that isn’t a problem for countries that make no pretence at freedom of speech, such as China, it creates a novel challenge for the leaders of those that do. Indian Prime Minister Modi seems to be growing increasingly authoritarian in his desire to control the flows of public information, so legal actions such as this could set a major precedent, one way or the other.

State phone-hacking rears its ugly head once more


A set of reports has been published that alleges some countries are using malware to spy on political opponents, activists and journalists.

The investigations were carried out by a media alliance called Forbidden Stories, with contributions from Amnesty International. They call it ‘the Pegasus Project’ because it focuses on the use of Pegasus spyware made by Israeli firm NSO Group. “Our products help government intelligence and law-enforcement agencies use technology to meet the challenges of encryption to prevent and investigate terror and crime,” says the NSO website.

We would imagine the principal challenge posed by encryption lies in breaking it in order to gain access to the material. The reason encryption exists, of course, is to protect the privacy of those using it, which is why WhatsApp initiated legal action against NSO a couple of years ago, a case which is still ongoing. It’s all very well helping to hack the phones of baddies, but it’s hardly surprising if some governments decide anyone who is politically inconvenient to them should get the same treatment.

The Guardian has gone all-in on the story and has some handy graphics, which reveal Mexico, Morocco and the UAE are the biggest users of Pegasus to hack specific phone numbers in locations that include Western Europe. Apparently even the Editor of the FT has been targeted but somehow Telecoms.com has escaped the wrath of global authoritarians so far.

“The Pegasus Project lays bare how NSO’s spyware is a weapon of choice for repressive governments seeking to silence journalists, attack activists and crush dissent, placing countless lives in peril,” said Agnès Callamard, Secretary General of Amnesty International.

“These revelations blow apart any claims by NSO that such attacks are rare and down to rogue use of their technology. While the company claims its spyware is only used for legitimate criminal and terror investigations, it’s clear its technology facilitates systemic abuse. They paint a picture of legitimacy, while profiting from widespread human rights violations.

“Clearly, their actions pose larger questions about the wholesale lack of regulation that has created a wild west of rampant abusive targeting of activists and journalists. Until this company and the industry as a whole can show it is capable of respecting human rights, there must be an immediate moratorium on the export, sale, transfer and use of surveillance technology.”

As you would expect, NSO doesn’t view the matter in quite the same way. “The report by Forbidden Stories is full of wrong assumptions and uncorroborated theories that raise serious doubts about the reliability and interests of the sources,” it said in a published statement. “It seems like the ‘unidentified sources’ have supplied information that has no factual basis and are far from reality.

“After checking their claims, we firmly deny the false allegations made in their report. Their sources have supplied them with information which has no factual basis, as evident by the lack of supporting documentation for many of their claims. In fact, these allegations are so outrageous and far from reality, that NSO is considering a defamation lawsuit.”

This isn’t the first time NSO has threatened legal retaliation for one of these exposés, but we’re not aware of it ever following through on one. The fact remains that any software specifically designed to hack phones clearly has the potential for misuse in the wrong hands. Once such software is sold, its owner is free to use it on whoever they please and it’s hard to see how the vendor could do anything to prevent that.

Will Apple child safety gift governments the keys to comms?


Apple has caused a stir with its new child safety measures, which other tech firms and privacy advocates say could unwittingly open the door to let governments spy on their citizens.

Later this year, updates to iOS and iPadOS for US users will scan users’ images for matches with known child sexual abuse material (CSAM) before they are stored on Apple’s iCloud Photos platform.

The system works by using an algorithm on the user’s device to turn each image into a lengthy numeric hash, which is then compared against a database of hashes derived from known child sexual abuse images.
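
As a rough illustration of hash-set matching, and only that (Apple’s actual system reportedly relies on a perceptual ‘NeuralHash’ and cryptographic matching techniques, none of which are reproduced here), a minimal sketch under those simplifying assumptions might look like the following; the hash database and photo directory are purely hypothetical.

```python
# Minimal sketch of hash-set matching before upload. This is NOT Apple's CSAM
# system: it uses a plain SHA-256 of the file bytes as a stand-in for a
# perceptual hash, and a hard-coded set as a stand-in for the hash database.
import hashlib
from pathlib import Path

# Hypothetical database of hashes of known prohibited images (illustrative only).
KNOWN_HASHES = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def image_hash(path: Path) -> str:
    """Hash the raw bytes of an image file."""
    return hashlib.sha256(path.read_bytes()).hexdigest()

def flag_before_upload(path: Path) -> bool:
    """Return True if the image's hash matches a known hash."""
    return image_hash(path) in KNOWN_HASHES

if __name__ == "__main__":
    photos_dir = Path("camera_roll")  # hypothetical local photo directory
    if photos_dir.is_dir():
        for photo in photos_dir.glob("*.jpg"):
            if flag_before_upload(photo):
                print(f"match found: {photo}")
```

A real perceptual hash is designed to survive resizing and recompression, which an exact byte-level hash like the one above does not, and that gap is exactly where the false-positive concerns discussed below come in.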

“This is a really bad idea,” says cryptography professor Matthew Green in a Twitter thread. Essentially it adds scanning systems to end-to-end encrypted messaging systems, so “imagine what it could do in the hands of an authoritarian government”, he asks.

Others argue hashing algorithms are not foolproof and may turn up false positives, worrying perhaps for parents with photographs of their children in the bathtub or at the beach. Furthermore, say other critics, child sexual abusers will simply buy a device without the feature.

“I think this is the wrong approach and a setback for people’s privacy all over the world,” says Will Cathcart, head of WhatsApp, which since 2014 has been part of Facebook. Countries where iPhones are sold will have “different definitions on what is acceptable,” and the system could very easily be used to scan private content for anything “a government decides it wants to control,” argues Cathcart.

Fuelling the dispute is a longstanding rivalry between Facebook and Apple, which offer competing messaging platforms. Apple CEO Tim Cook has accused the social media platform of selling users’ data to advertisers. Facebook’s Mark Zuckerberg, for his part, has warned investors that new privacy settings on Apple’s iOS 14.5 (called “App Tracking Transparency”), which make apps ask users’ permission to track their internet activity, could imperil the company’s revenues in the future. So for Zuckerberg, this is a highly welcome chance to argue maybe Apple isn’t quite as keen on privacy as it lets on.

Apple defends new device-scanning tech as criticisms grow


Apple has launched a defence of its controversial new system to scan users’ devices for child sexual abuse material after over 5,000 people and organisations signed an open letter against it.

The critics argue the scanning system, which hashes images on a user’s device and compares the hash with known images of sexual abuse material, also creates a backdoor authoritarian governments can use to spy on their people, crack down on political dissent, or enforce anti-LGBT policies.

Tim Cook’s company has responded by publishing a lengthy question-and-answer document, and pledging it “will not accede to any government’s request to expand” the system. Critics, though, point out Apple has made concessions in the past to continue operating in countries around the world. It removed 39,000 apps from the Chinese App Store at the end of last year, after Beijing embarked on a crackdown on unlicensed games. The government has also made Apple remove VPN, news, and other apps, and to store iCloud data of Chinese citizens on a server owned by a government-controlled company.

Adapting the system from searching for child sexual abuse material (CSAM) to, for example, images of the 1989 Tank Man in Tiananmen Square or Russian dissident Alexei Navalny would only take “expansion of the machine learning parameters to look for additional types of content”, argues the Electronic Frontier Foundation, which adds, “that’s not a slippery slope; that’s a fully-built system just waiting for external pressure to make the slightest change”.

There are three different measures at work in Apple’s plans: as well as scanning for digital fingerprints, iMessage will now enable explicit photo warnings for children’s accounts, and Siri and search will respond to requests for CSAM with a warning and links to help. The measures use different technologies.

But at the end of the day, says the Electronic Frontier Foundation, “even a thoroughly documented, carefully thought-out, and narrowly-scoped backdoor is still a backdoor”.


ProtonMail and WhatsApp under pressure over user privacy failings


A couple of investigations have revealed that some services that pride themselves on user privacy might not be nearly as secure as they claim.

TechCrunch has done a great job of summarising the case against Switzerland-based ProtonMail, which positions itself as one of the most secure email platforms available. Apparently, having been requested to do so by Europol, the Swiss authorities handed the French police the IP address of the person who created the ProtonMail account of a French activist, who was subsequently arrested.

ProtonMail itself has written a blog addressing the matter. “In this case, Proton received a legally binding order from Swiss authorities which we are obligated to comply with. There was no possibility to appeal this particular request,” it says. The most awkward part of this story, which isn’t really addressed in the blog, is the fact that ProtonMail was even logging user IP addresses in the first place.

Meanwhile ProPublica has published an investigation that alleges Facebook-owned WhatsApp has teams of contractors that sift through the private messages of its users and that it regularly shares such information with prosecutors. If true, this contradicts claims that WhatsApp messages are subject to strict end-to-end encryption that prevents anyone being able to intercept messages.

‘WhatsApp’s director of communications, Carl Woog, acknowledged that teams of contractors in Austin and elsewhere review WhatsApp messages to identify and remove “the worst” abusers,’ says the accompanying article. ‘But Woog told ProPublica that the company does not consider this work to be content moderation, saying: “We actually don’t typically use the term for WhatsApp.”’

Sounds like mere semantics to us. These revelations come on the back of Apple deciding not to spy on its users’ photos after significant backlash. This is unlikely to be a coincidence and there is a growing body of evidence that digital service providers are under increasing pressure from governments around the world to help them spy on their citizens. If even ProtonMail can’t be trusted then it’s not clear where privacy-conscious consumers can turn.

Facebook launches a pair of spy shades


Internet giant Facebook has teamed up with Ray-Ban to launch a pair of shades that not only hide your eyes but enable you to surreptitiously record video.

Clumsily named Ray-Ban Stories, these smart glasses have a pair of 5MP cameras that, presumably, are there to help people tell ‘stories’. “You can easily record the world as you see it, taking photos and up to 30-second videos using the capture button or hands-free with Facebook Assistant voice commands,” says the press release. “A hard-wired capture LED lights up to let people nearby know when you’re taking a photo or video. Streamlined, open-ear speakers are built in, and Ray-Ban Stories’ three-microphone audio array delivers richer voice and sound transmission for calls and videos.”

In other words, be very careful what you say or do in front of a person wearing a pair of Ray-Bans from now on. The rest of the press release is mainly just self-congratulatory drivel about what a great job they did designing these shades just five years after competitor Snapchat managed it. It has a whole section on privacy protections, but the little light is the only protection third parties get.

This launch is just the start of Facebook’s ambitions for a thing it calls ‘the metaverse’, in which the lines between the digital and the actual world are increasingly blurred. Eventually these shades will have some kind of augmented reality heads-up display, enabling you to improve your everyday experience by superimposing a bunch of digital noise over the top of it. If some of the initial reactions are anything to go by, Facebook has some serious cultural resistance to overcome before this sort of thing becomes mainstream. And they cost 300 bucks.

Apple joins the NSO pile-on


US gadget giant Apple is suing Israeli smartphone spyware maker NSO Group on behalf of iPhone users it alleges had their phones hacked.

NSO has been under increasing scrutiny this year, following the publication of a bunch of stories by a media alliance calling itself Forbidden Stories, which claimed some governments were using NSO’s Pegasus software to hack the phones of selected individuals. The precise motives for such acts can only be speculated about but, suffice it to say, there’s often a fine line between national security and political expedience.

Inevitably, some of those phones allegedly hacked were iPhones, something Apple seems to be taking personally. Its legal complaint hits the ground running, stating ‘Defendants are notorious hackers—amoral 21st century mercenaries who have created highly sophisticated cyber-surveillance machinery that invites routine and flagrant abuse.’ How very Apple to frame this as a moral crusade. We were aware of no public statements on this matter from NSO at the time of writing.

“State-sponsored actors like the NSO Group spend millions of dollars on sophisticated surveillance technologies without effective accountability. That needs to change,” said Craig Federighi, Apple’s SVP of Software Engineering. “Apple devices are the most secure consumer hardware on the market — but private companies developing state-sponsored spyware have become even more dangerous. While these cybersecurity threats only impact a very small number of our customers, we take any attack on our users very seriously, and we’re constantly working to strengthen the security and privacy protections in iOS to keep all our users safe.”

Since the US government already acted against NSO earlier this month, this seems to be, at least in part, a piece of opportunistic virtue-signalling by Apple, as the self-promotion in the above canned quote implies. Safe in the knowledge that it has the support of the US state, Apple can present itself as the plucky champion of its otherwise vulnerable customers.

Having said that, Apple does seem to have added to the pool of knowledge on this matter, going on to offer new information on NSO Group’s FORCEDENTRY, an exploit for a now-patched vulnerability used to break into an Apple device and install the latest version of Pegasus. The lawsuit seeks to ban NSO even more than it already is and to get ‘redress’ for the hacking it enabled.

“The steps we’re taking today will send a clear message: In a free society, it is unacceptable to weaponize powerful state-sponsored spyware against those who seek to make the world a better place,” said Ivan Krstić, head of Apple Security Engineering and Architecture. “Our threat intelligence and engineering teams work around the clock to analyze new threats, rapidly patch vulnerabilities, and develop industry-leading new protections in our software and silicon. Apple runs one of the most sophisticated security engineering operations in the world, and we will continue to work tirelessly to protect our users from abusive state-sponsored actors like NSO Group.”

More thinly-veiled self-promotion – yay. Apple says it will add $10 million to any cash it wins from the lawsuit (not clear whether that is after it has covered its costs) and donate it to groups like the Citizen Lab at the University of Toronto, which was the first to identify the exploit used by FORCEDENTRY. We trust Apple will be no less vigilant if it ever finds the US government (or the French) is up to similar tricks.

Life360 reportedly sells precise, identifiable location data


Self-styled “family safety platform”, Life360, is alleged to have sold the precise location data of its users to many data brokers, which then sell it on to all kinds of customers.

Former employees of Life360, one of the most popular data tracking apps, have told The Markup, a New York-based non-profit organisation, that their former employer sells location data collected from its own app to around a dozen data brokers including Cuebiq, X-Mode, SafeGraph, and Arity. These brokers then apparently sell it on to whoever wishes to buy it, for example advertisers, insurance companies, and government agencies, according to former employees at two data brokers who corroborated the account to The Markup.

Chris Hulls, Life360’s founder and CEO, did not deny the company sells data when approached by The Markup, and claimed that monetising data “allows us to keep the core Life360 services free for the majority of our users”, which is reasonable. Instead, Hulls defended his business on two fronts.

With regard to whether the data can be traced back to individual users, Hulls said the company de-identifies the data before selling it to customers, removing usernames, emails, phone numbers, and other types of identifiable user information. However, the de-identification process does not remove the device’s latitude and longitude coordinates or its mobile advertising ID, which is more critical for targeted marketing than email addresses and phone numbers. Meanwhile, the former employees claimed that sometimes even this basic de-identification was not done.
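
To make the distinction concrete, here is a minimal sketch of why that kind of de-identification falls short; the field names and values are hypothetical, not Life360’s actual schema. Stripping the direct identifiers still leaves a stable advertising ID and precise coordinates, which together can single out a device.

```python
# Illustrative only: hypothetical record fields, not Life360's real schema.
DIRECT_IDENTIFIERS = {"username", "email", "phone_number"}

def de_identify(record: dict) -> dict:
    """Drop direct identifiers but keep the advertising ID and precise
    coordinates, which together can still single out an individual device."""
    return {k: v for k, v in record.items() if k not in DIRECT_IDENTIFIERS}

raw = {
    "username": "jane_doe",
    "email": "jane@example.com",
    "phone_number": "+1-555-0100",
    "advertising_id": "38400000-8cf0-11bd-b23e-10b96e40000d",  # IDFA/AAID-style ID
    "latitude": 51.5014,
    "longitude": -0.1419,
    "timestamp": "2021-12-01T08:45:00Z",
}

print(de_identify(raw))
# advertising_id, latitude, longitude and timestamp all survive "de-identification"
```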

When it comes to its customers re-selling data, Hulls said “Life360’s contracts prohibit its customers from re-identifying individual users, along with other privacy and safety protective practices.” Also, more broadly, “from a philosophical standpoint” as he put it, Hulls did not support the idea that government agencies, including the military and the CDC, which have been identified as customers to the data brokers, should buy data from the commercial market. By doing so the public agencies could compromise the individual’s right to due process. However, he also conceded that it’s a challenge to monitor what partners do with the data when the data is already in their hands.

So, in a way, the defence Hulls put up was a feeble one. Many companies sell data as a means of monetisation. However, selling data is different from selling identifiable data. Many telcos are in the market, but they don’t sell raw data directly; instead, they sell analytics produced using aggregated, anonymised data. What’s most worrying for Life360 users is the data related to children, which may be re-identified retrospectively. As Justin Sherman, Duke Tech Policy Lab fellow, said to The Markup, “Families would probably not like the slogan, ‘You can watch where your kids are, and so can anyone who buys this information.’”

Among the identifiable data points, the unique advertising identifier of each device, called the Identifier for Advertisers (IDFA) on iOS devices and the Google Advertising ID (AAID) on Android devices, is the most valuable. It enables advertisers to target users more accurately and more persistently, especially when paired with geolocation data such as latitude and longitude coordinates. The Financial Times estimated that Apple’s iOS update making it harder for advertisers to trace individual devices cost Snap, Facebook, Twitter, and YouTube close to $10 billion in revenue.

Life360 has been investing to make the data in its possession more comprehensive and more accurate, especially through targeted acquisitions. It has bought ZenScreen, which monitors screen time, and Jiobit, which makes wearable location trackers. More recently it acquired Tile, a company that sells tracking tags in competition with Apple’s own. Hulls told The Markup that the company does not plan to sell data generated by these recent acquisitions.

Life360 had not responded to Telecoms.com’s request for comment at the time of writing.

 

UPDATE 09:00 9/12/21 – we received the following emailed statement from Life360:

To be clear: We do have data partnerships, but not a single one that permits children’s data to be personally identified. All of our partners have extremely strict contractual limitations to ensure privacy. We do not allow government enforcement agencies to purchase raw data.

We provide a clear and simple way for our members to opt out of sharing their data – this is not something we hide or obfuscate deep in our privacy policy.

Data partnerships enable us to provide our members – 80% of whom use Life360 for free – valuable safety features at no cost. We protect millions of members each month with our products and services.

We respect Tile and its users and will honor their existing data policies and practices.

UK government reportedly plans anti-encryption marketing campaign


The UK state’s obsession with end-to-end encryption shows no sign of abating, with a questionable £0.5 million ad campaign set to be launched.

We have Rolling Stone to thank for the scoop. It got hold of some documents that indicate the UK government has hired ad agency M&C Saatchi to launch a publicity campaign designed to convince the public that end-to-end encryption, especially in direct messaging apps such as WhatsApp and Facebook Messenger, is bad.

As with most state attempts to curtail freedom and civil liberties, this power grab is being done in the name of safety. It seems the campaign will lazily draw on the ‘think of the children’ cliché, with dodgy-sounding ads featuring adults looking furtively at kids, apparently to imply that encryption facilitates child abuse.

“We have engaged M&C Saatchi to bring together the many organisations who share our concerns about the impact end-to-end encryption would have on our ability to keep children safe,” a Home Office spokesperson told Rolling Stone.

A Freedom of Information request sent last September by someone other than the author of the piece eventually got a response confirming the budget for this campaign is £534,000. It’s reasonable to question this expense at a time of such economic stress for the country but the bigger concern surrounds possible ulterior motives for this initiative.

The UK state has had a problem with encrypted messaging for years and there are indications it’s also responding to pressure from its allies (which usually means just the US). The perma-emergency attached to the Covid pandemic has served to significantly whet the appetite of governments around the world for increased surveillance powers. Encryption is a major impediment to that ambition but, in common with so many other government bright ideas over the past couple of years, this piece of ill-timed profligacy is likely to achieve nothing, at best.




