Lazy Password Storage

When you run an app on your Mac that connects to a secure web service, how confident are you that the password will be treated with care, and protected from prying eyes?

As a rule, Mac developers are pretty responsible about storing passwords and other private data in the OS X system keychain but, of course, there are exceptions.

I found a handy trick for uncovering passwords stored insecurely by applications directly to their preferences storage. The trick takes advantage of a cool functionality of the OS X “defaults” command line tool, which you can run from the “Terminal” app:

'defaults' [-currentHost | -host <hostname>] followed by one of the following:
  [...]
  find <word>     lists all entries containing word

How convenient: a simple command line tool to search the entirety of all the preferences stored by all of your apps. So, a good first step would be to simply search for “password”:

defaults find password

On my Mac, this yields an overwhelming number of matches that includes a lot of false positives such as, for example, the preferences pertaining to 1Password, preferences pertaining to apps’ password dialog windows, and other innocuous uses of the term.

It occurred to me that most developers storing passwords insecurely in preferences would probably store the value either under the key “password,” or some variation such as “twitterPassword”. So I tweaked the command line to try to filter out these results. The “defaults find” command doesn’t take any options, but I can winnow the results using grep:

defaults find password | grep -i -E "password\"? ="

This grep invocation searches for case-insensitive matches of “password”, optionally followed by a quotation mark, then a space and an equal sign. In other words, lines where a key that ends in “password” is being assigned a value.

This actually did reveal some problematic password storage on my Mac, but the grep is so good at filtering the results that I can’t see which app to blame. I need to match all the lines that pinpoint the app, as well as all the lines that look like they store a value into a password. Adding an | (or) case to the grep expression matches the tell-tale signs of the lines that summarize findings per-app:

defaults find password | grep -i -E "password\"? =|keys in domain"

Here I find a neat summary of potentially problematic password storages. Some of them remain false positives, but the list is now small enough to easily interpret. Any example where the app is something I plan to use again, I’ll be in touch with the developer to encourage them to improve the password storage security. Any example where the app is nothing I’ll ever run again?
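To see exactly what this filter keeps and discards, here’s a self-contained demo run against made-up sample output. The domains and keys below are invented for illustration; the real input comes from `defaults find password`:

```shell
# Simulated "defaults find password" output; domains and keys are invented.
sample="Found 2 keys in domain 'com.example.lazyapp': {
    twitterPassword = hunter2;
}
Found 1 keys in domain 'com.example.goodapp': {
    showPasswordDialog = 1;
}"

# The same filter used against the real output: keep the per-app summary
# lines, plus any line that assigns a value to a key ending in "password".
matches=$(printf '%s\n' "$sample" | grep -i -E 'password"? =|keys in domain')
printf '%s\n' "$matches"
```

Note that the innocuous `showPasswordDialog` key is filtered out, because “Password” is followed by more letters rather than an assignment, while `twitterPassword = hunter2;` and both per-domain summary lines survive.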

defaults delete com.example.lazyapp

And the insecurely stored password is obliterated from my preferences.

Obviously this trick won’t match all the careless password storage that apps on your Mac may be committing, but I suspect it will root out a good number of them. Experiment with the grep commands to filter out based on different, less restrictive matches. You might also have some luck searching for examples of apps that store other sensitive information such as credit card numbers, secret questions and answers, etc.

Living Room Engagement

I am home from Apple’s New York “Apple TV Tech Talks.” These events are always a joy to attend, because they combine some of the high quality preparation and delivery that we’ve come to expect from WWDC, with the refreshing brevity and focus of a one day event. Oh, and they’re totally free, apart from the transportation and lodging you might need to pay for.

I went to the event with some uncertainty, because I am skeptical about my prospects developing for the Apple TV. The platform inherits many of the pricing and marketing challenges of iOS, with the added constraints of working with a shared-user ownership model, limited user input, and a bias towards entertainment software suitable to somebody reclining on a couch.

Although I didn’t come away from the tech talks with a clear inspiration for a “killer app,” I did think a bit about the high-level classes of app that are likely to be successful on Apple TV. This is somewhat off the cuff, so I might be missing something big, but I think Apple TV apps will fall into these main categories:

  1. Passive entertainment. This is the obvious, classic use case for television. To succeed with this model, you will probably need to have access to your own library of streaming media. Past and present episodes from a network television company are a canonical example for this kind of app. Indie app developers are unlikely to succeed in this realm, except as consulting engineers for media companies.
  2. Games. When the earliest video game consoles came out over 40 years ago, they introduced interactivity to the previously passive experience of using a television. It seems appropriate then that on the Apple TV, interactive entertainment in the form of games will remain a top-tier use case for the device. This is great news for indie developers who happen to be interested in game development, but for those of us who have tended to focus on productivity or creative software, there is little to lure us here, either.
  3. Interactive entertainment. In addition to the passive video programming that we associate most closely with television, there is an opportunity to engage users with the level of interactivity found in games, but with an aim to educate or entertain in a non-goal-oriented sense. For example, an app that makes it easy to kick back on the couch and subject oneself to a never-ending supply of dictionary definitions, or Wikipedia articles, would fit in here. Indie developers may have opportunities here because of the large amount of open-source or government-owned data that could be leveraged to build apps that present this data in novel and engaging ways.
  4. Interactive construction. The default input device for the Apple TV, the Siri remote, is pretty limiting for tasks like text input and other productivity-oriented tasks that we take for granted on a computer or iOS device. But what it lacks in precision it makes up for in crude expressiveness. Imagine apps that leverage the expressiveness of the remote’s touchscreen, or its ability to reckon its ever-shifting position in 3D space. Imagine a family gathered around the dining table, with a large blank piece of butcher paper and a variety of creative tools on hand. What does the family do to the paper? Anything you can imagine that empowers a family to be collaboratively creative on the screen, as they would otherwise be on that paper, is a potential hit for the Apple TV.

What am I missing? I know there must be huge categories of Apple TV app ideas that are going to be obvious in retrospect. Two years from now, we’ll look back at a hopefully robust catalog of Apple TV software and find many examples of classic “if only I had thought of that!” ideas. I’m still fairly skeptical that I’ll be one of the developers who stumbles on groundbreaking ideas for the platform, but I credit the Apple Tech Talk with at least getting my thinking moving in the right direction.

Medium Permalinks

I was intrigued to read that Basecamp’s (née 37 Signals) Signal v. Noise blog has moved to Medium. David Heinemeier Hansson argues that Medium is “just the right mix of flexibility and constraint,” celebrating its web-based editor, the network of users, Medium’s demonstration of concern for its customers, and Basecamp’s admirable desire to get out of the business of maintaining their own blog-hosting software.

Hansson lists the ability to use one’s own custom domain among the user-friendly changes Medium has made. “By offering custom domains, we’re ensured that no permalink ever has to break, even if we leave the platform.” Indeed, this is a valuable improvement. But it got me thinking: for a blog like Signal v. Noise, with hundreds of posts spanning more than a decade, how would Signal v. Noise’s existing permalinks be preserved? Does Medium offer some fancy permalink customization for imported posts? Is there some ability to upload static HTML content to reside alongside newer, Medium-native posts? Surely a company as obsessed with the web as Basecamp would be concerned about this. How did they solve it?

It took a little poking around and scratching my head to realize that they solved the problem in a novel way that doesn’t exactly get them out of the blog-hosting business. Signal v. Noise is now hosted on two domains:

  1. https://signalvnoise.com/ – The previous home of the blog, hosted and managed by Basecamp themselves, continues to host the backlog of archived posts.
  2. https://m.signalvnoise.com/ – The new home, hosted by Medium.

So when any new Medium post is linked, it points directly to “m.signalvnoise.com” and is handled by Medium. When any older post is linked, it goes directly to “signalvnoise.com” and is handled the same as ever. The one change? When the main page at signalvnoise.com is visited, it redirects with a “302 Found” response to the Medium site, getting a casual visitor on track to viewing only the latest posts.
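The routing scheme can be sketched in a few lines of shell. This is my reading of the setup, not Basecamp’s actual server configuration, and the example archive path is invented:

```shell
# Sketch of the two-domain routing: the bare home page 302s to Medium,
# older permalinks stay on the original host, and new posts live on the
# Medium-hosted subdomain.
route() {
  case "$1" in
    https://signalvnoise.com/)    echo "302 -> https://m.signalvnoise.com/" ;;
    https://signalvnoise.com/*)   echo "archived post, served by Basecamp" ;;
    https://m.signalvnoise.com/*) echo "new post, served by Medium" ;;
  esac
}

route "https://signalvnoise.com/"                  # the casual visitor
route "https://signalvnoise.com/some-old-post"     # a hypothetical old permalink
```

The ordering matters: the exact match for the bare home page must come before the wildcard pattern, or every archived permalink would be redirected too.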

I continue to admire a lot of the work Medium is doing. I agree with Basecamp that their web editor is among the best I’ve used. For my tastes, it can’t touch the experience of a native Mac app, but of course I’m biased. Hopefully Medium’s API will continue to evolve and make Medium a more viable platform for folks who share my preference for a desktop editor.

Hey Siri, Don’t Trivialize My Timer

Siri’s dictated timers are a feature I use all the time. Especially given my propensity to be distracted and lose track of short-term time commitments, the ability to blurt out in the kitchen: “Hey Siri, set a timer for one minute,” has saved many a pancake from being burned.

Something that has bothered me about this feature for a long time is the determination Siri has to be lighthearted and downright goofy when creating a timer for certain specific durations, such as one or three minutes.

“Hey Siri, set a timer for three minutes.”

Siri's visual response when asked to set a three-minute timer.

Ha. Ha. Yeah, I’m always cooking an egg when I set a three-minute timer. By contrast, a request to set a two-minute timer is always met with a curt, yet still humanized, “Two minutes and counting.” If you set a timer for one minute? Whoo-ee, you’re in for some sass. Sometimes Siri just says “Your timer is set for one minute,” but more often you’ll hear a quip like “Remember, a watched iPhone never boils.”

I’ve been more irritated by this cuteness after years of using the feature and, I suppose, the frequency with which I set one and three-minute timers. It’s mostly a passing exasperation for me, but it strikes me as an example where emotionally charging a software interaction is a risky proposition.

My one-minute timers are usually for something trivial like pancakes, but what if I were using them in a more fraught scenario? What if one-minute timers play a serious role in the administration of care to a loved one? What if it’s a key interval in a CPR procedure? One of Siri’s cute quips is “The suspense is killing me.” What if the suspense really is killing somebody?

(Radar #23776483)

Siri’s Headphone Tax

I wrote earlier today about Siri’s impressive, instant attentiveness on the iPhone 6s. I remarked that although I had set out to report a bug to Apple, I discovered it was actually a very awesome feature.

Unfortunately, I still have a Siri-related bug to file today. Armed with my knowledge that, as a rule, Siri will begin processing instructions at the moment the home button is pressed, I have been exercising it quite a bit. But after I popped on some headphones and headed out for a walk, I discovered an unfortunate shortcoming: Siri’s instant attentiveness is thwarted by the presence of headphones.

For some reason, if headphones are plugged in, iOS goes through the old-and-busted delay in which you must wait for the audible signal that Siri is ready to listen. The result is that, having gotten used to it being instantly ready, you will push the home button and start talking, only to hear the tell-tale audio signal and realize that it hasn’t actually been listening yet.

I’m hoping this is a bug in which the old mechanism is simply being inappropriately activated while headphones are plugged in. I don’t think there’s anything about activating the audio out through headphones that should inherently limit Siri’s ability to be instantly attentive. In fact, with headphones plugged in I can still get an instant response when I use “Hey Siri” to precede my request.

In summary: Siri starts listening instantaneously on an iPhone 6s, whether you’re in silent or non-silent mode, provided you haven’t made the mistake of plugging in headphones. Filed as Radar #22881933.

If It Ain’t Fixed, Break It

I have always kept my phone on silent, and thus come to depend on vibration feedback for a lot of workflow tasks. One such task is invoking Siri by holding the home button, to make a quick inquiry or request.

On my iPhone 6, long-pressing the home button always resulted in a short pause before a pleasant vibration indicating that Siri was ready to listen. I trained myself to wait until that vibration was felt before bothering to speak, lest Siri miss anything important.

On my iPhone 6s, however, there is no such feedback. Apple “broke” it. I was all in a huff this morning to file a bug about this, sure in my knowledge that the usability of the phone had been diminished by removing this feedback. Every time I invoke Siri now, in the absence of vibration feedback, I wait to see the tell-tale animated sound wave detector. I’m never quite sure when Siri is ready for me.

I complained on Twitter about the problem, hoping there was a secret setting that I had missed when restoring my phone. I got lots of commiseration from folks who also miss the feedback, and assurances from others that I simply needed to set my “vibrate on ring” or “vibrate on silent” settings correctly. I appreciate the responses, but they were all wrong. And I’m wrong. We’re all wrong.

Apple “broke” the haptic feedback associated with invoking Siri, by “fixing” the problem that there had ever been any latency before. Have an iPhone 6s or 6s Plus? Go ahead, I dare you: hold down the home button and start talking to Siri. You will not escape its attention. It’s ready to go when you are, so it would be obnoxious of it to impose any contrived delay or to give taptic feedback that is uncalled for. Siri has become a more perfect assistant, and we have to change our habits to accommodate this.

The elimination of latency in Siri’s attentiveness seems related to the fact that Apple has added dedicated functionality to the phone’s M9 chip to allow Siri to remain efficiently at attention:

The integrated M9 works so efficiently and intelligently that Siri is always on and waiting for your voice commands. You can easily activate Siri by saying “Hey Siri” whenever your iPhone 6s is nearby.

The lesson is that sometimes our instinct tells us something has been terribly broken, when it has actually been gloriously fixed. Now you can quickly dictate “Hey Siri remind me to file a bug about Siri feedback,” before quickly amending yourself: “Hey Siri delete the reminder,” and dismiss any perceived obligation you felt to tell Apple just how “buggy” the iPhone is.

Familiar Spell Checking

Even if you’re a great speller, automatic spell checking provides a valuable safety net, often preventing us from sharing with the world our own individual ignorances of the languages we communicate with.

For years, spell checkers of all kinds have relied upon word databases to determine that a word has been misspelled. I’m sure it’s at least a little bit more complicated than this, but in a nutshell: if the word you’ve just typed is not in this enormous list of words, then it’s probably a misspelling, and is marked as such.

Thus as you type your master works, when you either flub your typing or can’t recall the correct spelling, most modern editors will flag the word in question, for example by showing a red squiggle below it.
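That “in a nutshell” description can be sketched in a few lines of shell. The three-word dictionary here is obviously a toy stand-in for the real database of hundreds of thousands of words:

```shell
# Toy list-based spell check: a word is flagged as a probable
# misspelling if it is absent from the word list.
dictionary="pancake timer keychain"

check_word() {
  for known in $dictionary; do
    [ "$known" = "$1" ] && { echo "$1: ok"; return; }
  done
  echo "$1: misspelled?"
}

check_word pancake   # in the list, no flag
check_word pancaek   # a typing flub: gets the red squiggle
```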

One challenge, though, is that for any individual, a number of words that are spelled correctly may nonetheless be absent from the spell checker’s enormous database. One great example of this is the variety of proper names we deal in that may not happen to be included. For example, if I were a Boston Red Sox fan I might type Dustin Pedroia’s name often, and at least OS X’s built-in spell checker would mark it as incorrect. The workaround for these situations is to control-click the word and, from the popup menu, select “Learn Spelling” to avoid future flagging of the item.

The problem can be especially annoying when the OS repeatedly flags the names of people in your own family. For example, my own last name, “Jalkut,” is incredibly uncommon and would be flagged as a misspelling by most spell checkers in the world. However, on my Mac it is not, even though I’ve never asked the system to “learn” the spelling. In fact, none of the names of my family, friends, or business associates are marked as misspellings. How does this work?

Apple’s engineers realized that they could spare you the hassle of “learning” these spellings, since they already have access to a massive database of proper names that matter to you. Your Contacts database. If you’re on a Mac right now, open up the Terminal application and paste in the following line:

/System/Library/Services/AppleSpell.service/Contents/MacOS/findNames

The results that should come flying down the Terminal window are the proper names of everybody and everything in your Contacts database. Specifically, Apple consults every entry in your database for the following attributes:

  • First Name
  • Middle Name
  • Last Name
  • Nickname
  • Maiden Name
  • Organization
  • Address
  • City

Each of these words, if present, is used to augment the already-massive list of correctly spelled words. So if you happen to know somebody named Frenk Xssl, who lives in Cwmystwyth and works for Infinitea, you can write all about them and their work, and never be bothered once by the tell-tale red squiggle of a word misspelled.
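The augmentation itself amounts to a simple union of word lists. Here is a toy sketch reusing the invented names from above; on a real Mac, the names would of course come from the findNames tool rather than a hard-coded string:

```shell
# Toy sketch of augmenting a base dictionary with Contacts-derived names.
base_words="the quick brown fox"
contact_names="Frenk Xssl Cwmystwyth Infinitea"
augmented="$base_words $contact_names"

is_known() {
  for w in $augmented; do
    [ "$w" = "$1" ] && { echo yes; return; }
  done
  echo no
}

is_known Cwmystwyth   # no squiggle, thanks to the contacts list
```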

Everything But The Web

Since Apple announced the new Apple TV on Wednesday, we developers have been poring over the details of what the SDK will, and what it won’t allow us to achieve on the platform.

One of the most surprising, and most impactful limitations to the SDK is that it provides no facility for presenting web content in an app. Not only is there no built-in “browser” on the Apple TV, third party apps are unlikely to be able to offer any web browsing functionality.

This is a big deal, but many people see it as a wise choice on Apple’s part: by forbidding the use of web technologies, they will encourage app developers to design natively for Apple TV. On iOS, by comparison, a large number of “native apps” that are downloaded from the App Store are in fact only thin wrappers around web content. These offer the same interactive experience that a user would have if they navigated to the company’s site in a browser. Forbidding web views on Apple TV all but guarantees that developers will provide a more tailored experience, designed in the spirit of Apple’s guidelines.

But forbidding web content outright will also be an unnecessary impediment to many developers whose apps are either tastefully implemented with the help of web technologies, or whose core functionality is to deliver content — not web sites, mind you — that happens to be formatted with HTML. Daniel Pasco of Black Pixel points out that his company’s NetNewsWire, an RSS news reader, falls squarely into this category. On that point, although I have a hard time imagining the utility of a blog editor on Apple TV, my own MarsEdit would also be “unfairly” restricted by this policy.

As Pasco acknowledges, we don’t know Apple’s real motivation for omitting web views from Apple TV. There may be technical challenges or performance shortcomings that contributed to the decision. But let’s assume for the sake of reasoning that it is purely political, that they want to discourage “web wrappers” and to promote a more native look and feel in TV apps. I propose that Apple could strike a compromise that would serve those ambitions while also supporting the tasteful handling of web content in apps. How? By forbidding network access to web content. Apps themselves could still access the network, but not from within their web views.

Blocking network access from web content would immediately knock out the “web wrapper” type of native app. Any such app that connects to a site meant to also serve regular web browsers will be rendered useless if the referenced resources from the site are not allowed to load. Images would be blank, stylesheets omitted, etc. A clever developer might try to overcome these limitations by taking all network loading into their own hands, but I expect this would be a complicated mess and quickly encourage such a developer to seek a less cumbersome solution.

On the other hand, all apps that use web content tastefully would be liberated to continue doing so. Whether an app simply uses a small web view here or there to support the native UI with styled text and images, or if it is a full-fledged news reader like NetNewsWire, the ability to capitalize on WebKit’s core functionality to convert web content into a visual format should be no more controversial or politically limited than the ability to process TIFFs, JPEGs, and PNGs and turn them into attractive, or not so attractive, visuals in an app’s interface.

I’m not sure where JavaScript support should stand in this “compromise.” It would be somewhat ironic to omit it, seeing as the Apple TV supports a whole framework for creating apps that is based in JavaScript, but I can also see an argument that supporting JavaScript in web views is too much of a lure away from using native iOS technologies. Personally, I think they should include it and let developers decide whether there are suitable use cases for it, but I imagine the vast majority of current iOS developers would be extremely satisfied to be granted the mere ability to render static HTML content in their Apple TV apps.

I’m excited about the Apple TV even if, like many developers, I haven’t quite wrapped my head around what I could or should develop for the platform. These major announcements from Apple always come with a healthy mix of tantalizing allure for the possible, and sobering reminders that Apple defines the constraints in which we must operate. The lack of web views on Apple TV was not a constraint I imagine most developers were anticipating. I hope that it does achieve the laudable goal of encouraging more developers to embrace native designs, but I also hope that Apple loosens up and finds a way to allow responsible developers to use a powerful set of technologies that we’ve grown accustomed to relying upon.

SiriScript

Today I faced a long list of alarms on my iPhone, and decided that I wanted to clean them out. The typical iOS “Edit” interface puts a red “delete” button next to each item, and upon tapping it you must then confirm it by tapping the explicit word “delete” at the other end of the item. Suffice to say: for a list of any significant size, this is very tedious.

On a whim, I decided to give Siri a shot at simplifying the process. I long-pressed the home button, and uttered: “delete all my alarms.”

Siri's visual response when asked to delete all my alarms.

Well, isn’t that nice?

I’ve come to realize that when I am faced with a problem on iOS or watchOS, for which I wish there were an automated, “power user” mechanism to simplify it, I reach for Siri and hope for the best.

In this respect Siri fills the gap that is left by the omission of an automating service such as AppleScript. On a Mac, when I’m faced with a problem like this, I look to Script Editor, and hope that the app is scriptable enough to get the job done. For example, if alarms were a service provided by my Mac (and why aren’t they!?), then I would expect a script like this to complete the task at hand:

tell application "Clock" to delete all alarms

Having AppleScript at my disposal is great, but I am also frustrated that Siri doesn’t live on my Mac. I should be able to invoke the dictation UI (fn-fn keystroke, by default) and ask Siri to “add to my reminders,” or to perform any of the other common tasks I can perform with my phone or watch.

Siri is more limiting than AppleScript, because it can only carry out tasks that Apple’s engineers have predicted that I will want to perform. But it’s also much easier than opening up Script Editor, scrutinizing a scripting dictionary, spending 10 years learning AppleScript, and then writing and running a script.

Ideally, I’d like to have the best of both worlds: the ease of asking Siri to perform complex procedures and the option of extending the catalog of procedures that it knows how to perform. And I’d like this perfect combination of ease and extensibility on all of Apple’s platforms.

Since the debut of the iPhone, people have speculated about whether we would ever see an official solution for automation, along the lines of AppleScript or Automator. I think we’ve actually been seeing it since Apple acquired and integrated Siri. I hope that one day this will culminate in the consistent, extensible solution for automation that I have imagined here.

Surviving The Work Week At Home

I was honored when Serenity Caldwell of iMore asked me to contribute a column to The Network. I was racking my brain to come up with a topic for the article, but I found inspiration in my MarsEdit drafts folder, with an article I had loosely speculated about writing, but never followed up on. The article went live at iMore today: “How to survive working at home“.

My working title for the article was “Surviving the Work Week at Home,” but they changed it at iMore, probably for the better! I’m a sucker for the passive voice.

The staff at iMore also added helpful headings, and restructured paragraphs where I had simply rambled on. I love the freedom of writing fast and loose for my own blogs, but it’s a nice break to collaborate with a publication that will add the finishing touches to one’s writing to bring it to a higher level.

Thanks to Serenity and the entire team at iMore for helping me get my article polished up, and for sharing it with your audience!

Microsoft WinObjC

Microsoft appears to be following through on its promise to provide resources to iOS developers that facilitate the porting of apps to Windows.

The project, identified by the code name “Islandwood” earlier this year, has been renamed to Windows Bridge for iOS, and the company is making the source code available.

I had a surprisingly hard time finding the GitHub sources for the project. No news on Microsoft’s developer home page, nor on the Interoperability Bridges blog (oops, not updated since January, 2014). I even searched Microsoft’s GitHub repositories but couldn’t find anything matching “windows bridge ios” or variations.

Finally, I searched Twitter for related news until I found a link to a story that actually linked to the GitHub project.

The project’s familiar name is WinObjC. And yes, it is on GitHub.

Update: If I knew The Verge’s format a bit better I might have noticed they credit the source, Microsoft’s own Windows blog, which in fact links to the GitHub project for WinObjC.