Friday Five 12/18
Privacy labels, GDPR fines, and bias in facial recognition services - catch up on all of the week's infosec news with the Friday Five!
1. Apple's App 'Privacy Labels' Are Here--and They're a Big Step Forward by Lily Hay Newman
Apps on the Mac and iOS app stores are now required to display rundowns of their privacy policies. While there has been a push over the last decade for companies to be more transparent about how they collect and use data, Apple is the first major tech company to implement notable change. The labels feature three categories: Data Used to Track You, Data Linked to You, and Data Not Linked to You. The transition will take time, as the labels are mandatory only once a developer submits a new app or an update for review. They won’t solve every privacy concern, since oversight of their accuracy still rests largely with the third-party developers themselves; even so, the new labels are a step in the right direction for privacy.
2. The 'SolarWinds' Hacks Show Supply Chain Attacks Are Business as Usual by Lorenzo Franceschi-Bicchierai
In the wake of this week’s SolarWinds attack, this article explores the increasing frequency of supply chain hacks. A supply chain attack occurs when attackers breach a third-party provider, giving them access to the potentially vast network of organizations that use that provider’s software. While the SolarWinds hack is certainly eye-popping because of its scale, gaining access through a third-party vendor is nothing new. Troubling as it is, the hack should be put in the context of previous supply chain attacks such as the infamous NotPetya incident in 2017. Supply chain attacks are here to stay: as organizations prioritize security and become harder to crack directly, attackers shift their focus to third-party vendors with less sophisticated defenses. For now, regular updating and patching remain the best defense.
3. Scope of Russian Hack Becomes Clear: Multiple U.S. Agencies Were Hit by David E. Sanger, Nicole Perlroth, Eric Schmitt
The article covers the scope of the SolarWinds attack, notably that the State Department, the Department of Homeland Security, the Treasury, the Commerce Department, and the Pentagon all appear to have been compromised. The full extent of the damage could take years to sort out, but the potential access to important trade and government secrets gained by what is widely assumed to be Russian intelligence is enormous; nearly all Fortune 500 companies use SolarWinds. Even in the incident’s early days, it’s clear that the attack was highly targeted: the hackers exploited only the most valuable targets. And even if companies and agencies immediately power down the software, that doesn’t remove hackers who planted backdoors to maintain access to their systems. From a national security standpoint, it’s also concerning that the attack was discovered by a private company rather than by the government.
4. Twitter fined ~$550K over a data breach in Ireland's first major GDPR decision by Natasha Lomas
In Ireland’s first major GDPR decision, Twitter was fined roughly $547,000 for a data breach; specifically, Twitter failed to properly document or declare the breach under the guidelines laid out in the GDPR. It’s the first cross-border GDPR decision from Ireland, which is the lead regulator on many cases involving tech giants. The ruling could set a precedent for the numerous cases in the backlog, including active investigations into Google, Apple, Facebook, WhatsApp, and LinkedIn. The decision comes amid a broader push for greater regulation of tech companies. It is also a positive development for GDPR enforcement, which has been criticized in the past for taking too long to reach decisions and for being too unwieldy to be effective.
5. When AI Sees a Man, It Thinks 'Official.' A Woman? 'Smile' by Tom Simonite
A new paper examining bias in image recognition services found significant discrimination. For example, when researchers ran photos of congresspeople through the services, pictures of women received three times as many annotations as pictures of men. Moreover, the women’s annotations skewed toward superficial, lower-status stereotypes such as “smile” or “chin,” while the men’s skewed the opposite way, toward labels such as “official” or “businessperson.” The results were consistent across the various platforms, with women lawmakers characterized by their appearance and sometimes failing to be detected by the services at all. The paper is further evidence that the algorithms behind image and facial recognition software tend to replicate or amplify historical and cultural biases, rather than achieving the more utopian ideal of bias-free mathematical detachment. It’s a reminder that facial recognition software still has a long way to go before it can be deployed in society in a positive way.