Category Archives: Security

A Real Gatekeeper


In the years since Apple released the iPhone, with its “locked-down-by-nature” approach to application security, the company has progressively chipped away at the freedom Mac developers have historically enjoyed to do, more or less, whatever the heck they wanted.

With the introduction of the Mac Application Sandbox in 2012, Apple applied an iOS-like mechanism through which applications are entitled only to access their own data, and must explicitly request permission from Apple to access any resources “outside of their own sandbox.” At the time, I wrote that while the technology was promising, it left much to be desired.

Around the same time, they introduced Developer ID, a system for certifying at runtime that a given piece of software has been cryptographically signed by a developer whose identity is known to Apple. Applications that are not signed with Developer ID are allowed to run in macOS, but by default are met with a foreboding warning about the safety of doing so. The component of macOS that is responsible for limiting the launch of software from unknown developers is called “Gatekeeper.”

Last year, in 2018, Apple introduced a new notarization service, an expansion of Developer ID functionality. Developers submit their applications to Apple, where they are scanned for known malware, and have their use of specific system technologies vetted. The “notarization” on an app allows the system to verify at runtime that a given application passes a baseline safety metric for downloaded software.

Finally, in 2019, Apple announced that software signed with Developer ID certificates, that is to say all non-Mac App Store software, must also be notarized. The Catalina 10.15 public beta identifies software that has not been notarized as potentially risky because it “cannot be scanned for malware.”

In effect: developers who ship software directly to end-users are now required to notarize their apps.

While working on the notarization process for my own apps, and for a company I work with, I noticed an interesting error from “altool”, the command-line program used to submit binaries to Apple for verification:

1 package(s) were not uploaded because they had problems:
Error Messages:
To use this application, you must first sign in to 
iTunes Connect and sign the relevant contracts. (1048)
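
For reference, a submission that produces this kind of output looks roughly like the following; the bundle identifier, Apple ID, and archive name here are placeholders, and the app-specific password is assumed to be stored in the keychain under the name AC_PASSWORD:

xcrun altool --notarize-app \
    --primary-bundle-id "com.example.myapp" \
    --username "developer@example.com" \
    --password "@keychain:AC_PASSWORD" \
    --file MyApp.zip

When the contracts are in order, the same command instead reports a request identifier that can be used to check on the status of the submission.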

The error is easily worked around by logging in to App Store Connect and agreeing to any updates Apple has recently made to their contracts. I’m so used to more or less blindly agreeing to these changes that it didn’t sink in for me at first what a potentially major change this is.

My colleague Patrick Machielse noticed right away what the larger implication is: all Mac software, inside or outside of the Mac App Store, can now be held up by unsigned contract agreements with Apple. In a rush to fix a horrible bug and get it out to customers? Better review that new contract ASAP.

For the past 35 years, any Mac developer who wanted to ship an update directly to customers could do so by recompiling a binary and distributing it. When macOS 10.15 ships this fall, the status quo will change. Mac developers must register with Apple and sign their products. They must submit their binaries to Apple for notarization. And most significantly of all, they must agree to the terms of Apple’s App Store developer contracts, even if they don’t distribute their apps through the App Store.

Mac Sandboxing: Privileged File Operations

At WWDC 2018, Apple announced with great fanfare that two beloved Mac apps, Transmit and BBEdit, would be returning to the Mac App Store.

Each of these apps had departed the App Store years ago, citing various reasons, chief among them the limitations of the Mac App Sandbox, which restricts the functionality of apps in the Mac App Store.

I was curious whether Apple made any specific concessions to these developers, and whether those concessions would be opened up to “the rest of us” or not.

Today, Panic launched Transmit 5 on the Mac App Store. It’s a free download, and costs $24.99/year after an initial 7-day free trial.

I downloaded Transmit even though I own a copy of the direct-purchase version. I wanted an answer to my question, which I got, at least partially, by dumping the application binary’s “entitlements”, which represent the sandboxing exceptions that the app has received.
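
If you want to inspect an app’s entitlements yourself, codesign can dump them; the path here assumes the Mac App Store copy of Transmit installed in /Applications:

codesign -d --entitlements :- /Applications/Transmit.app

The output is a property list of key/value pairs, including the two discussed below.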

New to me among the entitlements is “com.apple.developer.security.privileged-file-operations”, which is a boolean value set to true for Transmit. I don’t see any Google results for this key, so I’m assuming it’s something new that was added for Panic (and maybe BBEdit), and which may or may not be documented in the future for use by other developers.

Another interesting entitlement is “com.apple.security.automation.apple-events”, which is documented by Apple, but only in the context of the new “Hardened Runtime.” This technology is aimed primarily at developers who are not developing for the Mac App Store, but who want to provide enhanced security for their customers. In that context, I believe this entitlement provides unfettered access to sending Apple Events, except that in Mojave and later the app is still subject to fine-grained system alerts that require user approval for each application that is targeted.

In short: it appears that Transmit possesses at least two “official” entitlements that could be made available, or are perhaps already available, to other developers. One way to find out: add them to your app and submit it for approval!
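
If you do want to try that experiment, one way to add such a key is with PlistBuddy; “MyApp.entitlements” is a placeholder for whatever entitlements file your target already uses:

/usr/libexec/PlistBuddy -c 'Add :com.apple.developer.security.privileged-file-operations bool true' MyApp.entitlements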

Update: Thanks to Jeff Nadeau for alerting me to the pertinent API that correlates with the privileged file operations entitlement. NSWorkspaceAuthorization can be used to request privileged file access from the user, and Apple includes a link for requesting access to the entitlement.

Update 2: It turns out my intrigue around “com.apple.security.automation.apple-events” was ill-founded. I assumed that a sandboxed app could use this entitlement to gain unfettered access to automating other apps, but in the case of a sandboxed app it turns out to work in conjunction with the existing “com.apple.security.temporary-exception.apple-events” entitlement, which requires enumeration of specific targets. Thanks to Jeff Johnson and Paolo Andrade for talking me through my misunderstanding of the situation.

Terminal Security Profiles

In macOS Mojave, Apple introduced a number of new security features that impact the day-to-day use of the computer. Activities such as running scripts, or using apps that access private information, now prompt users with one-time permission-granting requests.

One consequence of these changes is that you can no longer access certain parts of your home directory from the Terminal. Don’t believe me? Try opening Applications > Utilities > Terminal, and run the following command:

ls ~/Library/Mail

In all previous macOS releases, this would list the contents of Apple Mail’s internal files. As a privacy enhancement, access to these files is now restricted unless apps have requested or been proactively granted access.
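
On a Mojave system where Terminal has not been granted any special access, the listing is refused with a permissions error along these lines (the exact wording and path may vary):

ls: /Users/you/Library/Mail: Operation not permitted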

If you really want to regain access to these files via the Terminal, you have to grant the app “Full Disk Access.” This is a new section of the Security & Privacy pane in System Preferences.

Well, that’s fine. Now you can “ls” anything in your home folder, but absolutely every other thing you run in Terminal can as well. To grant myself the ability to list files in ~/Library/Mail, am I willing to grant the same access to every single thing I’ll ever run in Terminal?

This isn’t earth-shattering: it’s been the case forever that tools you run in the Terminal have access to “all your files.” But the new restrictions in macOS Mojave shine a light on a problem: the bluntness of security restrictions and relaxations with regard to Terminal.

I’ve run into a variation of this problem in the past. I use the excellent TripMode to limit bandwidth usage when I’m traveling and tethered to my phone. A consequence of this is that, unless I grant unlimited network access to Terminal, I can’t perform routine tasks such as pushing git changes to a server.

Ideally these permission grants would be applicable at the tool level, rather than at the application level. It would be better if I could say “let ls access my Mail” rather than “let anything I run from Terminal access my Mail.”

I don’t completely understand the limitations there, but I suspect that because commands in the Terminal are running as subprocesses of Terminal, there is some technical challenge to making the permissions apply at such a fine-grained level.

As an alternative, I wonder if Apple could introduce some kind of “Security Profiles” feature for Terminal, so that individual windows within the app could be run with different permissions. This could build on Terminal’s existing support for “Profiles,” which already allows Terminal settings to vary dramatically on a per-window basis.

With Security Profiles, a user would configure an arbitrary number of named profiles, and security privileges acquired by Terminal would be stored separately for the active profile. Each profile would be considered by the system effectively as a different app. For example, given my uses of Terminal, I might set up a few profiles for the types of work I regularly do:

  • Personal: Everyday productivity tasks including running scripts, editing files in my home directory, etc.
  • Administrative: Tasks that pertain to the overall maintenance of my Mac: examining system logs, delving into configuration files, etc.
  • Collaborative: Tasks that involve installing and running third-party tools that I trust, committing to shared source repositories, etc.
  • Experimental: Tasks that involve installing or running third-party tools that I am not familiar with and do not have a high degree of faith in.

These are off the top of my head, and just to give an idea of the kinds of profiles that might make sense here. Switching between these modes would also switch the system’s active list of entitlements for Terminal. If I run a script that accesses my Calendar items from the “Personal” profile, the system would prompt me once to ask my permission, but never prompt me again in that profile. When I switch to “Experimental” and run some unfamiliar third-party tool that tries to access my calendar, it would ask permission again for that profile.

I filed Radar #45042684: “Support a finer-grained permissions model for Terminal”, requesting this, or something like it.

Reauthorizing Automation in Mojave

The macOS Mojave betas include a significant enhancement to user control over which applications can perform automation tasks. When we talk about automation on the Mac, we usually think of AppleScript or Automator, but with a broader view automation can be seen as any communication from one application to another.

One ubiquitous example of such automation is “Reveal in Finder” type functionality. For example, if you right-click a song file in iTunes, an option in the contextual menu allows you to reveal the file in the Finder. This is a very basic automation, accomplished by sending an “Apple Event” from iTunes to the Finder.

In the macOS Mojave betas, you’ll notice that invoking such a command in an application will most likely lead to a panel asking permission from the user. The terminology used is along the lines of:

“WhateverApp” would like to control the application “Finder”.

If the user selects “OK”, the application sending the command will thereafter be whitelisted, and allowed to send arbitrary events (not just the one that prompted the alert) to the Finder. If you’re running macOS Mojave you can see a list of applications you’ve already permitted in System Preferences, under “Security & Privacy,” “Privacy,” “Automation.”
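
You can provoke the same kind of prompt deliberately from Terminal by sending an Apple Event to the Finder with osascript; any file or folder path will do, so this example just reveals Safari:

osascript -e 'tell application "Finder" to reveal POSIX file "/Applications/Safari.app"'

The first time this runs on Mojave, the system asks whether Terminal may control the Finder, and Terminal subsequently shows up in the Automation list described above.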

These alerts are a bit annoying, but I can get behind the motivation to give users more authority over which applications are allowed to control other applications. Unfortunately, there are a number of usability issues and practical pitfalls that come as side-effects of this change. Felix Schwarz made a great analysis of many of the problems on his blog.

I ran into another usability challenge that Felix didn’t itemize: the problem of denying authorization to an application and then living to regret it. I guess at some point I must have hastily denied permission for Xcode (Apple’s software development app) to control the Finder. This resulted in a seemingly permanent impairment to Xcode’s “Show in Finder” feature. I often use this feature to quickly navigate from Xcode’s interface to the Finder’s view of the same files. After denying access once, the feature has the unfortunate behavior of succeeding in activating the Finder (I guess that one is whitelisted), but failing silently when it comes to revealing the file.

OK, that’s fine. I messed up. But how do I undo it? Unfortunately, the list of applications in the Security and Privacy preference pane is only of those that I have clicked “OK” for. There’s no list of the ones that I’ve denied, and no apparent option to drag in or add applications explicitly. For this high level problem, I filed Radar #42081464: “TCC needs user-facing mechanism for allowing previously denied privileges.”

What’s TCC? I’ll be darned, I don’t know what it stands for. But it’s the name of the mechanism Apple uses for managing the system’s so-called “privacy database.” This is where these and other permissions, granted by the user, are saved. For instance, in macOS 10.13 when the system asks whether to grant access to your Address Book or Contacts, the permission is saved, and managed thereafter, by TCC.

Resetting TCC Privileges

I knew from past experience testing Contacts privileges in my own apps that Apple supports a mechanism for resetting privileges. Unfortunately, it’s pretty crude: if you want to change the authorization setting for an application you’ve previously weighed in on, you have to universally wipe out all the privileges for all apps using a particular service. For Contacts, for example:

tccutil reset AddressBook

This completely removes the list of apps authorized to access Contacts. (The AddressBook naming is a vestige of the app’s former user-facing name.) In fact, if you type “man tccutil” from the Terminal, you’ll find that AddressBook is the only service explicitly documented by the tool. Fixing my Xcode problem is not going to happen by resetting AddressBook privileges. So what do I reset? Trying the most obvious choice, “Automation,” results in an error: “tccutil: Failed to reset database”.

What’s the service called, and does tccutil even support resetting it? After a crude search of the private TCC.framework’s binary, I discovered I was looking for “AppleEvents”:

tccutil reset AppleEvents

After running this, I quit and reopened Xcode (the TCC privileges seem to be cached), and selected “Show in Finder” on a file. Voila! The Finder was activated and I was again asked if I wanted to permit the behavior. This time, I made sure to say “OK.”

You can get a sense for the variety of services tccutil apparently supports resetting by dumping the pertinent strings from the framework:

strings /System/Library/PrivateFrameworks/TCC.framework/TCC | grep kTCCService

The list of matching strings includes names like AppleEvents and AddressBook, as well as other names for things I don’t recognize, and a seemingly useful “All,” which can presumably be used to wipe out all authorizations across all services.
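
Presumably, then, a full reset would look something like the following, though be warned that it would wipe every permission the system has recorded, for every app:

tccutil reset All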

Because tccutil is far more useful than advertised, and because users are undoubtedly going to end up needing to reset services more than ever in Mojave, I also filed Radar #42081070: “Documentation and command-line help for tccutil should enumerate services.” There are some items in the dumped list that appear likely to be private to Apple, but anything genuinely useful to customers (or more likely, the consultants who fix their Macs) should be listed in the manual.

Lighten Up, Eh?

While I support the technical and user-facing changes suggested by Felix Schwarz in the previously linked blog post, some issues would be avoided by simply giving apps the benefit of the doubt for widely used, innocuous forms of automation.

I mentioned earlier that the Apple Event sent by Xcode to activate the Finder was apparently whitelisted by the system. Evidently Apple saw wisdom in the thinking that simply causing another application to become active is unlikely to be widely abused. I think the same argument holds for asking the Finder to reveal a file. I filed Radar #42081629: “TCC could whitelist certain widely used, innocuous Apple Events.”

I mentioned before that I can support Apple’s effort to put more power into users’ hands with this feature, but one side-effect of requiring the authorization even for innocuous events like “Show in Finder” is that apps that do not otherwise offer automation functionality to users will nonetheless require that users grant that power.

If the merit in the feature is to allow users to limit what kinds of automation apps can perform, then supporting a “Show in Finder” feature in an application should not require me to simultaneously permit it to do whatever kind of Finder automation it chooses. For example, an application so authorized is now empowered, presumably, to send automation commands to the Finder that modify or delete arbitrary user files.

These days Apple always seems to be pushing the privacy and security envelope, and in many ways that is great for their users and for their platforms. With a little common sense and some extra engineering (“It should be easy” — Hah!), we can get the best of the protection these features offer, while suffering the fewest of the downsides.

Sandbox Transparency

Apple’s sandboxing technology provides a mechanism for developers to specify “entitlements” that an app needs in order to provide functionality that users want. For example, on the Mac, an app can specify the entitlements to “print” and to “make network requests.” This system of granular privilege designation is a great baseline both for developers, to avoid accidentally overstepping intended bounds, and for users, to protect against apps intentionally or accidentally causing harm.

One of the biggest problems with Apple’s approach to sandboxing is that the accountability component has been left entirely to Apple itself. Developers are held accountable for the specific entitlements they request only when they distribute software through the iOS or Mac App Stores. In the review process, Apple may determine that a specific entitlement requested from an app is inappropriate for that app’s domain, and demand that the developer remove the entitlement before being approved. Or, in rare cases, they may approve an entitlement that other developers are not typically granted.

Yesterday, Gizmodo reported that Uber had been granted an entitlement for their iOS app that allowed them to capture an image of an iPhone’s screen at any time, even when the Uber app was not the active app on the phone. This is a big deal, because users don’t typically expect that an iPhone app that is not active might have the ability to eavesdrop on anything they are doing.

I have long felt that the sandboxing infrastructure on both iOS and Mac should be used to more accurately convey to users specifically what the apps they install are capable of doing. Currently the sandboxing system is used primarily to identify to Apple what a specific app’s privileges are. The requested entitlements are used to inform Apple’s decision to approve or reject an app, but the specific list of entitlements is not easily available to users, whose security is actually on the line.

I think the next step for sandboxing, on both iOS and the Mac, is to expose the list of entitlements that apps possess, in a way that is reasonably understandable to all users, and even more open to scrutiny by power users. Any user who is wary of an app should be able to examine its entitlements so that any unusual privileges can be evaluated. With this level of transparency, you can bet that Uber’s ability to arbitrarily record the screen would have been revealed much earlier.

Being more transparent with entitlements would also pave the way for overcoming an unfortunate side-effect of sandboxing: the elimination of whole classes of power-user level apps. If users were empowered to know what the privileges of an app are, through a combination of user prompting and an interface for inspecting entitlements, then it would be reasonable to grant more indulgent entitlements to developers.

Mac apps such as TextExpander essentially became unqualified for the Mac App Store with the advent of sandboxing, because they require access to system services such as monitoring the user’s keyboard input, in order to provide valuable macro text substitution. If entitlements were transparent across the board, and users were consistently informed about the extent of an application’s capabilities, it would empower users to make more reasonable decisions about the software they run. It would empower them to allow apps like TextExpander that are currently disallowed by the App Store’s sandboxing policies, and to reject apps like Uber that may be unexpectedly allowed to capture footage of users’ activity even while running other apps.

Bad Preference Gatekeeper

With the release of OS X 10.11.4, developers of standalone preference panes face a new challenge with respect to users installing their software.

Apparently, the validation process that Apple applies to downloaded software, Gatekeeper, fails to validate OS X preference panes, even if they are signed with a legitimate Developer ID code signature.

The upshot of this is that when users download a bona fide 3rd party preference pane such as Noodlesoft’s excellent Hazel, instead of having the software install as expected, a scary warning is displayed indicating the purported untrustworthiness of the software.

According to Paul Kim of Noodlesoft, the problem affects every preference pane he’s tested, including a freshly built, completely plain preference pane built with Apple’s latest tools. I put this to the test in Xcode 7.3, running on 10.11.4, by creating a new Preference Pane project from Apple’s template, setting it to sign with my Developer ID, and creating a release build of the project.
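
For the record, a roughly equivalent command-line build looks something like this, assuming the template project is named TestPanel and substituting your own Developer ID identity:

xcodebuild -project TestPanel.xcodeproj -configuration Release build CODE_SIGN_IDENTITY="Developer ID Application: Your Name (TEAMID)"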

Running Apple’s “spctl” tool on a binary is a reasonably approximate way of determining whether Gatekeeper would reject the binary after downloading it from the web. Here’s the result, which is the same for all the affected preference panes:

% spctl -av ./TestPanel.prefPane
./TestPanel.prefPane: rejected
source=obsolete resource envelope

Ah, that pesky “obsolete resource envelope” message. Those of us who survived the transition from Version 1 to Version 2 code signing remember it well. But it’s not an accurate assessment in this case:

 % codesign -dv ./TestPanel.prefPane              
Executable=/Volumes/Data/daniel/Desktop/TestPanel 2016-03-31 14-03-51/Products/
[...]
Sealed Resources version=2 rules=12 files=2
Internal requirements count=1 size=220

The “version=2” indicates we are using an appropriate version for the signed resources. It would be hard, perhaps impossible, to do otherwise on a modern system with a modern Xcode toolchain.

The “spctl” tool supports a command line option to ask for more and more verbose results by adding “v”s to the command line. Unfortunately “spctl -avvvvvvv” doesn’t yield anything more informative than the seemingly inaccurate “obsolete resource envelope.”

I wondered if there was some magic flag that preference panes must now exhibit, or some new requirement that internals be signed in a different way than before. Surely, if anybody could get this right, it would be Apple! Their “Network Link Conditioner” is the only downloadable preference pane I could think of, and what do you know, it was updated as part of the Hardware I/O Tools for Xcode 7.3 download package, released on March 20. I downloaded a fresh copy to be sure I had the best that Apple could offer, located the preference pane, and double-clicked it.

Screen capture showing the Gatekeeper warning for Apple’s own Network Link Conditioner preference pane, portrayed as untrusted

You know it’s bad when even Apple’s own downloads are portrayed as untrustworthy.

This is a minor annoyance for folks trying to install an obscure development tool, but it’s a major issue for developers like Noodlesoft whose entire livelihood is built on the distribution of software packaged as a preference pane. The scary wording in the dialog casts doubt on the reputation of the developer, and for the more savvy, on the reputation of Apple’s ability to properly assess the trustworthiness of software that we download.

Let’s hope Apple can address this problem soon. Although it doesn’t pose a security risk, it seems appropriate to include a fix in a security update. After all, it has everything to do with preserving trust between users, developers, and Apple.

Update: According to Paul Kim, it’s not just preference panes that are affected, but any standalone non-app code bundle. So, for example, color palettes, screensaver modules, and, if anybody still uses them, Dashboard widgets are all affected. Pretty, pretty, pretty, pretty bad.

(Radar #25468728)

iGotYourBack

I have been an ardent Apple fan since 1993, when I got my first Mac: a PowerBook Duo 210. From then, to the day I joined Apple in 1996, to the day I left in 2002, to present day, one thing has always been true about Apple: they are not a typical tech company. Pushing against the status quo has in many respects been a defining characteristic of the company, through down times and up times. Apple does what it thinks is right for itself, for its customers, and to some significant extent, for the world at large.

Tim Cook shared yesterday in A Message to Our Customers one example of Apple’s atypical attitude rearing its beautiful head. In response to the FBI’s demand that Apple supply custom software that would allow the agency to unlock an iPhone held as evidence, Apple tendered its refusal:

Up to this point, we have done everything that is both within our power and within the law to help them. But now the U.S. government has asked us for something we simply do not have, and something we consider too dangerous to create. They have asked us to build a backdoor to the iPhone.

The news has split public sentiment in predictable ways. There will always be a contingent that believes law enforcement should be aided in any feasible manner, regardless of long-term implications for individual privacy or civil liberties. And there will also be people so cynical about government and the police, that even Apple’s cooperation thus far, handing over information that it does possess, is viewed as a betrayal of customer rights. And of course, there is a massive group of folks in the middle, who aren’t sure where the line should be drawn.

Apple has a clear sense of where the line should be drawn, and they have stated it: they will not weaken the security of their products for the benefit of the FBI or (presumably) any other agency. Although the current request from the FBI only applies to an older iPhone model, whose security is easier to circumvent than later ones, the point Apple emphasizes is that complying with the order would set a terrible precedent, putting the needs of government ahead of the personal security of end-users.

To my mind, this is a fine place for Apple to draw a line.

Other tech companies with huge investments in the consumer market should be lining up behind Apple in defiance of the FBI. To do otherwise, whether by explicitly defending the FBI’s demands, or by implicitly approving in silence, would be a betrayal of their own customers. It would be wrong both from an ethical perspective with respect to their duty to protect customer data, and from a PR perspective with respect to the public’s perception of their managing that duty.

If a couple other large companies, say Facebook and Google, come to Apple’s side, it will send a powerful message to the FBI and the rest of government. If a dozen large companies do, it will create a firewall that will be difficult for government to dismantle without very publicly reiterating and reaffirming its disdain for personal privacy.

I think it’s best for all parties if the “firewall” scenario comes to pass. The stage is set for a civil rights showdown, and while we need to speak out as individuals, we can also benefit enormously from the powerful voices of these tech giants.

But if other companies don’t step up, I’m not sure all is lost. Apple, as the largest American tech company, which also has the largest cash reserves, is well-suited on many fronts to fight this battle. Alone, if necessary.

People have criticized Apple for amassing a giant pile of money while never giving completely convincing explanations for what it plans to do with it. When your modus operandi is not only to push the leading edge of personal technology, but also to defend your customers’ personal data, and to possibly help establish the legal precedent that will defend the customers of all tech companies for decades to come, you never know when having $200B to “spare” might come in handy.

As a stockholder I don’t relish the idea of Apple burning through all that money just to defend their right to protect customer data. Although it’s arguable that it would be money well spent, it’s not an obvious, ideal use of shareholder equity in a public company. Luckily, I don’t think the cash will be spent. The $200B serves mainly to fortify Apple’s resolve in defying the FBI. Apple’s courage in the face of threats to its pro-consumer security policies is bolstered by the strength of those massive cash reserves.

Some may see this confrontation between Apple and the FBI as an industry vs. government dispute, but it’s far more than that. As personal technology and the internet permeate almost every aspect of wider society, the “tech industry” is indistinguishable from society as a whole. The right to defend our personal information, and the rights of companies to act on our behalf in that pursuit, are completely and inextricably tied to our rights as members of society. Eventually, we must win the right to protect our data from government. Apple, Google, Facebook, and other tech giants can step up to help us secure these rights today, or we’ll have a longer, harder fight ahead of us in years to come.

Lazy Password Storage

When you run an app on your Mac that connects to a secure web service, how confident are you that the password will be treated with care, and protected from prying eyes?

As a rule, Mac developers are pretty responsible about storing passwords and other private data in the OS X system keychain but, of course, there are exceptions.

I found a handy trick for uncovering passwords stored insecurely by applications directly in their preferences. The trick takes advantage of a cool feature of the OS X “defaults” command line tool, which you can run from the “Terminal” app:

'defaults' [-currentHost | -host <hostname>] followed by one of the following:
  [...]
  find <word>     lists all entries containing word

How convenient: a simple command line tool to search the entirety of all the preferences stored by all of your apps. So, a good first step would be to simply search for “password”:

defaults find password

On my Mac, this yields an overwhelming number of matches, including a lot of false positives: preferences pertaining to 1Password, preferences pertaining to apps’ password dialog windows, and other innocuous uses of the term.

It occurred to me that most developers storing passwords insecurely in preferences would probably store the value either under the key “password,” or some variation such as “twitterPassword”. So I tweaked the command line to try to filter out the false positives. The “defaults find” command doesn’t take any options, but I can winnow the results using grep:

defaults find password | grep -i -E "password\"? ="

This grep invocation searches for case insensitive matches for “password”, optionally followed by a quotation mark, then a space and an equal sign. In other words, examples where a key that ends in “password” is being assigned a value.

This actually did reveal some problematic password storage on my Mac, but the grep is so good at filtering the output that I can’t see which app to blame. I need to match both the lines that pinpoint the app and the lines that look like they store a value into a password. Adding an | (or) case to the grep expression matches the tell-tale lines that summarize findings per-app:

defaults find password | grep -i -E "password\"? =|keys in domain"

Here I find a neat summary of potentially problematic password storage. Some of the results remain false positives, but the list is now small enough to interpret easily. For any app that I plan to use again, I’ll be in touch with the developer to encourage them to improve their password storage security. And for any app that I’ll never run again?

defaults delete com.example.lazyapp

And the insecurely stored password is obliterated from my preferences.

Obviously this trick won’t match all the careless password storage that apps on your Mac may be committing, but I suspect it will root out a good number of offenders. Experiment with the grep commands to filter based on different, less restrictive matches. You might also have some luck searching for apps that store other sensitive information such as credit card numbers, secret questions and answers, etc.
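
For example, a similar, speculative search for stored secrets or API tokens might look like this, again keeping the per-app summary lines:

defaults find token | grep -i -E "token\"? =|keys in domain"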

Whose Phone Is This?

On the latest episode of Core Intuition, my co-host Manton Reece described the experience his wife had of leaving her iPhone behind on an airplane, only to have the airline thankfully announce over the airport loudspeakers that the phone had been found.

When they returned to claim it, they asked how the airline had been able to determine the name of the owner from the locked phone. The answer? They “just asked Siri whose phone it was.”

Apparently this is common knowledge to the airline employees, and to no doubt countless iPhone users, but it was news to Manton and me. You can try it yourself: if you have an iPhone, and have set a contact card as “My Info” in Siri’s preferences, lock your phone and then ask Siri: “Whose phone is this?”

On the face of it, it seems like a great feature. Who wouldn’t want to empower the well-intentioned finder of one’s lost phone to make an effort at returning it?

The problem to my mind is not that Siri shares my name and contact information, but that it goes a step further, showing not only my main telephone number, but my physical address, all my telephone numbers, email addresses, as well as my AIM, Twitter, and Facebook accounts. It also happily provides my birthdate, the names of my wife, mom, dad, brother, heck, the names of any person I have assigned a relationship to.

When my friend Dan Moren gave me a ride to Çingleton last year, we killed time in the car playing with Siri’s abundant personalization features. Anybody with access to my locked phone will soon learn that Dan is in fact more than just a friend to me:

Screen capture showing personal details revealed via Siri

Of course, you don’t have to share all this information with whatever stranger manages to pick up your phone. Simply disable Siri access from the lock screen, and nobody will be able to access your private information using it. Of course, this means no airline employee who finds your phone tucked between the seats will be able to easily return your phone to you, either.

Alternatively, you could change the information on your “My Info” contact card. For example I could add an entry “Daniel Minimal” to my Contacts list, and only include a telephone number or email address. The problem here is much of Siri’s usefulness (as we discussed on the podcast) is rooted in it knowing specific details about you. Requests such as “give me directions home,” “call my mom,” or heck, “text my sweet, sweet ride to Montréal,” will fall upon deaf ears if you don’t include this information in the card that you associate with Siri’s “My Info.”

Depending on how paranoid you are about what happens when a stranger gets ahold of your phone, you might read this and decide to do nothing, to delete all the personal information from your “Me” contact card, or to forbid Siri from being accessed when the phone is locked. Personally, I don’t think any of these solutions is ideal. I’d like to be able to ask Siri to share some of my personal information from the lock screen, to increase the odds of my lost phone being returned. But I’d like to draw the line somewhere reasonable, without having to share every last detail about myself with the stranger who is holding the device.

Fingerprints As Access Tokens

Everybody seems to have an opinion about the new TouchID fingerprint sensor on Apple’s iPhone 5S. I suppose I do, as well.

Critics object to the idea that a fingerprint sensor, no matter how good, should be used to safeguard critical data. Dustin Kirkland makes the case (via John Moltz) that biometric information is inherently bad as a substitute for a password, because it cannot be “independently chosen, changed, and rotated.”

I take his points seriously, and they seem well reasoned from a security point of view, but they are based upon the premise that passwords are the end-all be-all of security, when in fact common sense proves they are not. The oldest, most trusted, and most widely deployed method of authentication on the planet is in fact “biometric”: the human ability to recognize a familiar face. The fact that my appearance could technically be spoofed does not change the fact that arriving at the home of a childhood friend after 20 years of separation will still earn me an invitation to the dinner table, if not a bed for the night.

So fingerprints make lousy passwords. Who cares? Their use in practice need not replace other authentication schemes, it only needs to augment other schemes in a manner that increases overall security.

Most authentication systems in society are scaled appropriately for the context in which they are deployed. When I travel by airplane, I am asked to show a government ID to get through the security gate, but thereafter, a simple piece of printed paper will get me on a plane. The government takes it for granted that once I’m in the boarding area, the odds of somebody getting hold of my scrap of paper, or my getting hold of theirs, and neither one of us subsequently complaining, are marginally small. Furthermore, if the two of us mutually agree to swap tickets and travel to the other’s destination, we haven’t really caused any significant harm, except perhaps to the egos of the folks in charge at the TSA.

Boarding passes are little scraps of paper that make lousy identification cards. I can’t use them to reserve a hotel, file a police report, or obtain a marriage license. Yet in the right context, I can use one to travel from Boston to Shanghai without a single person batting an eye or thinking twice about verifying my identity.

In this sense the airline boarding pass is like an access token. On the web, an access token is something obtained by stronger authentication that permits continued access with weaker authentication. For example when I allow a Twitter client to connect to my Twitter account, I must first visit Twitter.com and possibly enter my full account credentials. After the access token has been vended however, it serves much like a boarding pass, allowing free access until and unless I or Twitter registers a complaint.

There’s another sort of abstract access token that has been available to users of iOS devices since day one: your continuous use of the device. If you have in your possession an iOS device and you abstain from turning it off or letting it sit idle, you retain free access to the various data on the phone. From a security point of view, this token is even worse than a fingerprint: anybody, including the family cat, can sustain it if they are so inclined. Leave a phone on a table for 5 seconds, somebody else picks it up, they have your “activity token,” and they didn’t even need to scan your fingerprint.

I view the fingerprint sensor on the iPhone 5S and other devices as an opportunity for extending this kind of implicit authentication. It’s not a substitute for a password, but rather a convenient token for obtaining streamlined, continued access to protected resources. It’s the boarding pass that prevents you needing to take out your ID, and go through a body scanner or pat-down again, just to get on the damned plane.

We can argue about whether Apple has chosen the right boundaries for where a fingerprint should be traded for full authentication, but as a technology it stands to fill that gap between the frighteningly insecure “unlocked while active,” and frustratingly unusable “full authentication required when inactive.”

In short, I would like to see fingerprint authentication deployed in a way that pays respect both to the relative convenience and the relative insecurity of a fingerprint. If I can for example configure my phone to require a fingerprint unlock after 1 minute of inactivity, but to require a passcode unlock after 30 minutes of inactivity, my concerns about fingerprint security would be effectively put to rest. Could a malicious person steal a high quality impression of my thumb print, construct a prosthetic, fleshy representation of it, and use it to unlock my phone? Perhaps. But if they can’t do it within half an hour of stealing my phone, they better get to work on cracking the passcode.

Out Of The Bag

AppleInsider reported on Friday that the number of visitors to their site purportedly running a pre-release version of Mac OS X 10.9 had risen dramatically in January. Federico Viticci of MacStories followed up on Twitter, confirming a similar trend.

I was curious about my own web statistics, so I started poking around at my Apache log files. Each entry starts with the IP address of the visitor and includes various other information: the URL that was accessed, the referrer, and, most importantly here, the user agent string for the browser.
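
A typical line in such a log, in Apache’s “combined” format, looks something like this (the address and request are made up for illustration):

17.149.160.10 - - [28/Jan/2013:09:15:02 -0800] "GET /index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) AppleWebKit/537.28.2 (KHTML, like Gecko) Version/6.1 Safari/537.28.2"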

Although the vast majority of visitors to my sites are running Mac OS X 10.8, or iOS, or even Windows, there were indeed a few examples of visitors who appeared to be running 10.9. This is what the user agent string looks like:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) AppleWebKit/537.28.2 (KHTML, like Gecko) Version/6.1 Safari/537.28.2

See that 10_9? It’s a strong indicator, combined with the respectably “higher than 10.8” Safari and WebKit versions, that the visitor is indeed running 10.9. Could it be fake? Sure, but the odds of anybody faking this kind of thing seem relatively low: there is little imaginable reward for duping a site into believing that a solitary IP address is running 10.9, and it would be challenging to orchestrate some kind of distributed fraud without being found out.

If you have access to your own site’s HTTP access log, and the format is like mine, you can sift out the 10.9 accesses by simply grepping for the 10_9 substring:

grep 10_9 access_log

If you have any matches, odds are good that they will be from IP addresses that start with 17. Why? Because Apple is somewhat unique in that it owns outright an entire class A network of IP addresses: all addresses starting with “17.” are theirs.

So people at Apple are running 10.9. What’s the big deal? For one thing, anybody with access to a reasonably popular web site’s access logs now has an insight into Apple’s development schedule. Look at the graph from the AppleInsider link above and you can deduce not only that the number of users actively running 10.9 has gone up, but I would also guess that the troughs and peaks in the graph are correlated with the release cycle of internal test builds. What is this worth to a competitor? Probably not much, but who knows.

The other issue that comes to mind is that not all the IP addresses are liable to start with 17. Why? For one thing, Apple employees may be working from home, either in the Bay Area near Apple headquarters, or scattered around the world in their respective telecommuting locations. For another, Apple may have granted early access to close business partners who would naturally be running the operating system in their own office environments, on other subnets than 17. To see if you’ve been treated to any of these visitors, and to further refine the list to avoid duplicates from the same IP, try this:

grep 10_9 access_log | grep -v ^17\\. | sort -u -t- -k1,1

If you found any results, first of all I strongly encourage you not to share the IP addresses in public. I am writing this article at least in part to call out the reasons why Apple’s divulging this information is a risk to its employees and partners. You should protect the confidence of your site’s visitors.

That said, you may want to privately perform a rough geographic lookup based on the IP address; Googling will turn up many services that do this. You will probably find that the IP address maps to a location in San Francisco, San Jose, or Santa Cruz. But some of my 10.9 visitors hailed from other parts of the US.

So Apple’s broadcasting of the Safari user agent string reveals information about their development schedule, and divulges the IP addresses of likely employees or business partners. While I can’t quite imagine somebody taking advantage of the employee IP addresses, it sets off my spidey-sense creepiness alarm. The potential for divulging business partners could be of more obvious pragmatic interest to investors or competitors. The discovery of an alliance between Apple and another company would seem likely to affect the perceived value of either company, and could ruffle the feathers of other business partners who feel threatened by the cooperation.

So what should Apple do? The answer was in their hands before Safari launched: spoof the user agent! Don Melton was on the Safari team and wrote recently about keeping the project a secret:

Nobody at Apple was stupid enough to blog about work, so what was I worried about?

Server logs. They scared the hell out of me.

To guard clues about their development schedule, they should probably spoof the user agent string until the release is in a large enough number of hands that the number of user agents is uninterestingly diverse. But to protect the IP addresses of their employees and business partners from prying eyes they should at least spoof the user agent on non-17 subnets.

Apple’s famous secrecy is not foolproof. We don’t know yet what exciting new features 10.9 will bring or which hardware it will support. We don’t know how much it will cost, or which of the diminishing number of code names it will have. But we know it’s coming, and we know collectively the IP addresses of those who are testing it. The cat is still a secret, but the paws are out of the bag.