How Apple’s Find My feature works

Wired published details on how the Find My feature on Apple devices will work. The feature allows Apple users to find lost or stolen devices even when those devices are offline. Below is my understanding of the process, along with an attempt to illustrate how it works with visuals for easier interpretation.

Here’s how the new system works, as Apple describes it, step by step:

When you first set up Find My on your Apple devices—and Apple confirmed you do need at least two devices for this feature to work—it generates an unguessable private key that’s shared on all those devices via end-to-end encrypted communication, so that only those machines possess the key.
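To make this concrete for myself, here is a minimal Python sketch of that setup step using the `cryptography` package. The choice of curve (P-256) and the idea of syncing the raw private scalar between devices are my assumptions purely for illustration; Apple hasn't published these details.

```python
# Minimal sketch of the setup step (assumptions: P-256 curve, `cryptography` package).
from cryptography.hazmat.primitives.asymmetric import ec

# Generated once when Find My is set up; this is the secret only your devices hold.
master_private_key = ec.generate_private_key(ec.SECP256R1())

# The raw private scalar which, per my understanding, would be synced to your other
# devices over Apple's end-to-end encrypted channel, so every device you own ends up
# holding the same secret.
master_scalar = master_private_key.private_numbers().private_value
```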

Each device also generates a public key. As in other public key encryption setups, this public key can be used to encrypt data such that no one can decrypt it without the corresponding private key, in this case the one stored on all your Apple devices. This is the “beacon” that your devices will broadcast out via Bluetooth to nearby devices.
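Continuing the same sketch, the matching public key is what would be broadcast over Bluetooth. The compressed-point encoding below is just my guess at a compact beacon format, not Apple's actual one.

```python
# Continuing the sketch above: derive the public half and serialize it as the beacon.
from cryptography.hazmat.primitives import serialization

beacon_public_key = master_private_key.public_key()

# Compact byte encoding suitable for a small Bluetooth advertisement payload
# (compressed-point encoding is my assumption; Apple's actual format is unpublished).
beacon_bytes = beacon_public_key.public_bytes(
    serialization.Encoding.X962,
    serialization.PublicFormat.CompressedPoint,
)
print(len(beacon_bytes), "bytes broadcast over BLE")  # 33 bytes for P-256
```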

That public key frequently changes, “rotating” periodically to a new number. Thanks to some mathematical magic, that new number doesn’t correlate with previous versions of the public key, but it still retains its ability to encrypt data such that only your devices can decrypt it. Apple refused to say just how often the key rotates. But every time it does, the change makes it that much harder for anyone to use your Bluetooth beacons to track your movements.
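Since Apple hasn't said how the rotation is implemented, the sketch below is only one plausible construction I can imagine: derive each period's private key from the shared master secret plus a period counter using a KDF, so that your own devices can recompute the whole sequence while outsiders see unrelated-looking keys.

```python
# One *hypothetical* way the rotation could work: derive key i from the shared
# master secret and a period counter. Your own devices can recompute the same
# sequence; to everyone else, successive beacons look unrelated.
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

P256_ORDER = 0xFFFFFFFF00000000FFFFFFFFFFFFFFFFBCE6FAADA7179E84F3B9CAC2FC632551

def rotated_private_key(master_scalar: int, period: int) -> ec.EllipticCurvePrivateKey:
    """Derive the private key for rotation period `period` (illustrative only)."""
    seed = master_scalar.to_bytes(32, "big")
    digest = HKDF(
        algorithm=hashes.SHA256(),
        length=32,
        salt=None,
        info=b"find-my-rotation-" + period.to_bytes(4, "big"),
    ).derive(seed)
    scalar = 1 + int.from_bytes(digest, "big") % (P256_ORDER - 1)  # keep in [1, n-1]
    return ec.derive_private_key(scalar, ec.SECP256R1())

def rotated_beacon_bytes(master_scalar: int, period: int) -> bytes:
    """The public beacon broadcast during rotation period `period`."""
    return rotated_private_key(master_scalar, period).public_key().public_bytes(
        serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint
    )
```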

Say someone steals your MacBook. Even if the thief carries it around closed and disconnected from the internet, your laptop will emit its rotating public key via Bluetooth. A nearby stranger’s iPhone, with no interaction from its owner, will pick up the signal, check its own location, and encrypt that location data using the public key it picked up from the laptop. The public key doesn’t contain any identifying information, and since it frequently rotates, the stranger’s iPhone can’t link the laptop to its prior locations either.
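Here is how I picture the finder-side step, again only a sketch under my own assumptions: an ECIES-style construction (ephemeral ECDH, then HKDF, then AES-GCM) so that only the holder of the matching private key can recover the location.

```python
# Sketch of the finder-side step: encrypt the finder's location under the beacon's
# public key, ECIES-style (ephemeral ECDH + HKDF + AES-GCM). The exact scheme is
# my assumption; Apple hasn't published it.
import json, os
from cryptography.hazmat.primitives import hashes, serialization
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def encrypt_location(beacon_bytes: bytes, latitude: float, longitude: float) -> dict:
    beacon_key = ec.EllipticCurvePublicKey.from_encoded_point(ec.SECP256R1(), beacon_bytes)

    # Ephemeral key pair: lets the finder agree on a secret with the *owner's*
    # private key without ever knowing it.
    ephemeral = ec.generate_private_key(ec.SECP256R1())
    shared = ephemeral.exchange(ec.ECDH(), beacon_key)
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"location-report").derive(shared)

    nonce = os.urandom(12)
    plaintext = json.dumps({"lat": latitude, "lon": longitude}).encode()
    ciphertext = AESGCM(aes_key).encrypt(nonce, plaintext, None)

    return {
        "ephemeral_public": ephemeral.public_key().public_bytes(
            serialization.Encoding.X962, serialization.PublicFormat.CompressedPoint),
        "nonce": nonce,
        "ciphertext": ciphertext,
    }
```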

The stranger’s iPhone then uploads two things to Apple’s server: The encrypted location, and a hash of the laptop’s public key, which will serve as an identifier. Since Apple doesn’t have the private key, it can’t decrypt the location.
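In other words, the server only ever sees a hash of the beacon plus an opaque ciphertext. A dictionary keyed by the SHA-256 of the beacon bytes is my stand-in for Apple's storage here.

```python
# The "server" only ever sees a hash of the beacon plus an opaque ciphertext.
# A dict keyed by SHA-256(beacon) is my stand-in for Apple's storage.
import hashlib

location_store: dict[bytes, list[dict]] = {}

def upload_report(beacon_bytes: bytes, encrypted_report: dict) -> None:
    beacon_hash = hashlib.sha256(beacon_bytes).digest()  # identifier only, not reversible
    location_store.setdefault(beacon_hash, []).append(encrypted_report)
```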

When you want to find your stolen laptop, you turn to your second Apple device—let’s say an iPad—which contains both the same private key as the laptop and has generated the same series of rotating public keys. When you tap a button to find your laptop, the iPad uploads the same hash of the public key to Apple as an identifier, so that Apple can search through its millions upon millions of stored encrypted locations, and find the matching hash. One complicating factor is that iPad’s hash of the public key won’t be the same as the one from your stolen laptop, since the public key has likely rotated many times since the stranger’s iPhone picked it up. Apple didn’t quite explain how this works. But Johns Hopkins’ Green points out that the iPad could upload a series of hashes of all its previous public keys, so that Apple could sort through them to pull out the previous location where the laptop was spotted.
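Green's suggestion maps naturally onto my sketch: the iPad regenerates the beacons its devices would have broadcast over recent rotation periods, hashes each one, and asks the store for matches. The helper below reuses rotated_beacon_bytes and location_store from the earlier sketches.

```python
# Owner-side lookup: regenerate the hashes of every beacon your devices have
# broadcast recently and ask the store for matches (reuses rotated_beacon_bytes
# and location_store from the sketches above).
import hashlib

def fetch_reports(master_scalar: int, periods: range) -> list[tuple[int, dict]]:
    matches = []
    for period in periods:
        beacon_hash = hashlib.sha256(rotated_beacon_bytes(master_scalar, period)).digest()
        for report in location_store.get(beacon_hash, []):
            matches.append((period, report))
    return matches
```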

Apple returns the encrypted location of the laptop to your iPad, which can use its private key to decrypt it and tell you the laptop’s last known location. Meanwhile, Apple has never seen the decrypted location, and since hashing functions are designed to be irreversible, it can’t even use the hashed public keys to collect any information about where the device has been.
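And the final step in my sketch: only a device that holds the master secret can re-derive the rotated private key for that period and decrypt the report.

```python
# Final step: only a device holding the master secret can rebuild the rotated
# private key and decrypt the report (reuses rotated_private_key from above).
import json
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec
from cryptography.hazmat.primitives.ciphers.aead import AESGCM
from cryptography.hazmat.primitives.kdf.hkdf import HKDF

def decrypt_report(master_scalar: int, period: int, report: dict) -> dict:
    private_key = rotated_private_key(master_scalar, period)
    ephemeral = ec.EllipticCurvePublicKey.from_encoded_point(
        ec.SECP256R1(), report["ephemeral_public"])
    shared = private_key.exchange(ec.ECDH(), ephemeral)
    aes_key = HKDF(algorithm=hashes.SHA256(), length=32, salt=None,
                   info=b"location-report").derive(shared)
    plaintext = AESGCM(aes_key).decrypt(report["nonce"], report["ciphertext"], None)
    return json.loads(plaintext)  # e.g. {"lat": ..., "lon": ...}
```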

Source: "The Clever Cryptography Behind Apple's 'Find My' Feature" (Wired)
Exhibit 1 – Two devices, each with its own public key and a shared private key
Exhibit 2 – A step-by-step illustration of the process, from top to bottom

If you think there are any errors in my understanding of how this works, please leave me a comment and share your thoughts.

GDPR – Positive impact on firms

Last May, GDPR officially went into effect. Under GDPR, users are given more privacy rights, and firms have to adhere to stricter privacy regulations than ever before unless they want to be subject to hefty fines: up to 20 million euros or 4% of a firm's global annual revenue, whichever is higher. For companies such as Google or Facebook, which earn billions of dollars in annual revenue, the fines could be significant.

I have been in favor of GDPR. Even though it's not perfect, as is the case with any law enacted for the first time, I believe that GDPR moves us in the right direction. Below are a few examples:

According to Cisco 2019 Data Privacy Benchmark Study:

GDPR-ready companies are benefitting from their privacy investments beyond compliance in a number of tangible ways. They had shorter sales delays due to customers' privacy concerns (3.4 weeks vs. 5.4 weeks). They were less likely to have experienced a breach in the last year (74% vs. 89%), and when a breach occurred, fewer data records were impacted (79k vs. 212k records) and system downtime was shorter (6.4 hours vs. 9.4 hours). As a result, the overall costs associated with these breaches were lower; only 37% of GDPR-ready companies had a loss of over $500,000 last year vs. 64% of the least GDPR ready.

Ad trackers were reduced, leading to faster-loading pages and a more pleasant user experience. Big firms are being held more accountable: Google was fined $57 million for its GDPR violations, and without the new regulation, I believe the amount would have been much smaller. California passed its toughest privacy law yet, inspired by GDPR.

There is an argument that GDPR might lead to less competition in the advertising field, as only the likes of Google and Facebook have the resources to meet its requirements. An initial study seemed to support that.

Nonetheless, I think that even without GDPR, who could challenge Facebook and Google when it comes to serving ads? At least when more rights and protections are given to end users, we hand some power back to users and hold firms to a higher standard. After all, innovation comes from raising standards, doesn't it? Hence, GDPR is still a move in the right direction and should be improved incrementally in the future. As a result, firms should pay more attention to privacy and security. It will no longer be a check-off-the-list item; it will be a competitive advantage moving forward, especially as everything goes digital.

Facebook & Privacy First Mentality

Quite a week for Facebook

It has been quite a few days for Facebook. First, two days ago on TechCrunch:

Facebook has confirmed it does in fact use phone numbers that users provided it for security purposes to also target them with ads.

Specifically a phone number handed over for two factor authentication (2FA) — a security technique that adds a second layer of authentication to help keep accounts secure.

Then, a bombshell was dropped yesterday. Per Wired:

On Friday, Facebook revealed that it had suffered a security breach that impacted at least 50 million of its users, and possibly as many as 90 million. What it failed to mention initially, but revealed in a followup call Friday afternoon, is that the flaw affects more than just Facebook. If your account was impacted it means that a hacker could have accessed any account that you log into using Facebook.

Facebook's track record in data security and privacy hasn't been particularly stellar recently. 2018 is not 2010: Facebook no longer holds the dominant position it used to enjoy in the social network market, and users have plenty of alternatives and substitutes to spend their time on. These scandals, coupled with its role in the "free speech vs hate speech" row, do no good to Facebook's image or its appeal to users at a time when privacy has become a more and more pressing concern.

Privacy & regulations

I have been resigned to the fact that there is no anonymity on the Internet and that complete privacy isn't possible. Yet when users trust a company with their data, whatever the data is, it's the company's responsibility to protect that data. As many important aspects of our lives take place on the Internet, the need to feel safe online is greater than ever. Without feeling safe, how could users feel comfortable using a service? Privacy and data security will be, if they aren't already, expected of companies by default. It's not a nice-to-have feature any more; it's a do-or-see-your-competitors-get-ahead game.

But companies are not in the business of losing money. If they are not legally required to bolster their security, don't expect them to. That's why companies fought hard against GDPR and the privacy law passed in California this year. And this is where I don't understand some of the criticism directed at regulations such as GDPR. Yes, no law is perfect, especially in the beginning; that's why we have amendments, and GDPR is no exception. It is a great first step toward giving power back to users and holding companies liable for their actions and inactions.

A common criticism of GDPR that I came across is that compliance is too expensive for small companies and startups, widening the moat, or competitive-advantage gap, between giants such as Google and Facebook and SMBs. Well, if a company with deep pockets and better security measures has 10% of its 500,000-user base breached, the impact is 50,000 users. If a small company with fewer resources and much weaker security measures loses the data of all 50,000 of its users, the impact is the same as in the first scenario. Hence, breaches at SMBs can have significant damages and ramifications as well.

Sure, the best-case scenario is to have different levels of compliance applied to companies of different sizes. I'd love to see that happen. Nonetheless, without privacy regulations, imagine how little companies would care about our data and how much of a mess it would be. Despite having HIPAA in place, every year seems to set new records for data breaches in US healthcare, and healthcare organizations spend only 3% of their IT budget on cybersecurity. Verizon reported in its 2018 Payment Security Report that only 40% of all interviewed companies in North America maintained full compliance with PCI. And despite all the data-security scandals in its past, Facebook still lets more unfortunate events happen. To be fair, I don't imagine having impeccable security is easy. However, would companies even try to secure your data without any legal requirements?

Progress happens when we raise standards. Would cars be more environmentally friendly if we hadn't enforced emissions regulations? If a university wants to raise its standards for incoming students, will it lower or raise the GMAT/SAT requirement? Will a drug be safer for patients if the FDA requires more tests or fewer? Big companies have the means to comply with stringent privacy regulations. Small companies and startups, though they have a harder time, have more access to capital funding than ever. Plus, public cloud providers are investing to make their infrastructure compliant with many regulations (see more here for AWS compliance and Azure compliance). Regardless of size, companies have to take privacy seriously and consider it an integral piece of the puzzle: a competitive advantage if done right, or a threat to their competitiveness if ignored.

Government or Tech Corporations for our data and privacy

I had a brief conversation with a few close friends on WhatsApp about how to remain anonymous on the Internet and about the role of governments and technology corporations in the fight to protect our data and privacy from being abused. As much of our life involves the Internet, whether for work or personal use, the issue of our personal data and privacy looms larger than ever. The question is whom we can trust with our data: governments or tech corporations.

Regarding governments, it's safe to say that they haven't done much to generate confidence. Many of my peers express a lack of confidence in governments' ability to handle a huge amount of data and protect it from breaches. Worse, some said that the data could be used to violate their privacy. For instance, the US government asked Apple to build a backdoor into the iPhone, and the Australian government wanted to build backdoors into encrypted communications apps. Even though I am convinced that access to encrypted content may be required in some extreme cases (investigations, terrorist threats), the fact that governments forcefully want to build backdoors into our devices and data doesn't feel good.

Technology corporations, on the other hand, need no introduction. They are motivated to acquire as much of our data as possible; in some cases, they know more about us than we know about ourselves. But they don't actually protect our data well, to say the least.

Personally, I don't think it is possible to remain anonymous in this day and age any more. We are past the point of being able to do anything about technology companies having our data; as long as we rely on their services for productivity and social purposes, we cannot avoid them. The same goes for governments: when served with subpoenas, corporations have no choice but to surrender our data.

Both have motivations to go against our wishes, and neither has our full confidence. Nonetheless, it's not possible to choose one over the other. I believe that governments can keep technology companies in check with regulations such as GDPR, HIPAA, or PCI, and that citizens can elect officials who care about consumers and users into office. On the other hand, technology companies can push governments to evolve and not slack off.


Each has a role to play in this check-and-balance system. It may sound idealistic, but I believe it is our reality. Governments and big tech corporations are not going away any time soon, and from our perspective as citizens and consumers, we need both to keep the balance. How will it be achieved? I don't know. But I don't think it's a zero-sum game, nor do I think it is in our interest to favor one side over the other.