Nineteen Years

Nineteen years ago today, I joined Apple as a full-time employee.

I was 20 years old, on the verge of 21. I dropped into a workplace filled with the most ambitious, most laid-back, most serious, most bizarre, most intelligent, least obsessed-with-intelligence people I have ever met. They made up, and were made from, the culture that is Apple.

If you had told me 19 years ago that Apple would become the most successful company in the world, I would have believed you. Even at its lowest points, the place seemed to be teeming with latent success. It’s why I wanted to work there so badly, and why I’m so glad that I did.

The Risks And Rewards Of Criticism

Marco Arment responded to speculation by Eli Schiff that he and other Apple developers hesitate to criticize Apple for fear of retribution.

I was particularly surprised by the section of Schiff’s post that described Shifty Jelly developer Russell Ivanovic’s experience of being cut off by Apple from what had previously been a well-supported position. The way it’s described in the post, Ivanovic’s close marketing ties to Apple were severed when he decided to launch a version of his app on Google’s Play store before Apple’s App Store. I haven’t listened to the podcast yet, but it sounds great, and may provide slightly more details about the situation.

Ivanovic’s experience sounds devastating, but it doesn’t strike me as treatment that many developers should live in fear of also suffering.

As a company, Apple doesn’t care about individual developers. This works both ways of course: they don’t go out of their way to help, but also don’t go out of their way to harm. When a developer benefits or suffers at the hands of Apple, I believe it’s always thanks to either a wide-sweeping corporate policy that affects all developers, or to an individual at the company whose everyday choices on the job can have a profound impact. An editor who chooses to feature an app on the store, for example, or a reviewer who chooses to notice and raise a fuss about a slightly non-compliant behavior in an app.

I’m confident that at the level of individuals within Apple, efforts are almost always in the spirit of helping developers. You don’t have to meet many Apple employees to form an opinion that, on the whole, the company is made up of good people. So, naturally, the majority of folks there are working to cause good outcomes for people both inside and outside of Apple. The culture at Apple leans towards building people up rather than tearing people down. This is, incidentally, why their products tend to be so great. And why in spite of some truly confounding decisions, the company tends to promote stellar third party products through its App Stores.

On the other hand, the company is huge, and you simply can’t have that many thousands of people in varying positions of power without at least tens or hundreds of them being spiteful, angry, and petty. Oops, that sucks. While I was trying to make sense on Twitter of Ivanovic’s unfathomably petty experience, another slighted developer chimed in. Matthew Drayton, who like Ivanovic lives and works in Australia, pins his own similar experience on an individual:

I can’t quite tell if the implication is that the same individual is likely to be responsible for the “blackballing” that both Drayton and Ivanovic say they’ve felt. But for the sake of Apple, I hope it is indeed down to one person. One person can often be fired, reprimanded, or simply decide to move on. It would obviously be much worse if there were a systematic policy of suppressing developers who fail to “walk the line,” so to speak.

The risks of being critical are usually not on the scale of upsetting an entire company and suffering its wrath. Instead they are on the scale of possibly upsetting, or merely frustrating, or even just vaguely losing attractiveness to an individual whose help you would otherwise have enjoyed. This is true both in the context of Apple and outside of it. For example, an off-hand remark about the bitterness of the coffee at your local shop might earn you a less professional effort on your next visit.

On the other hand, an astute barista may take the criticism to heart and become hell-bent on ensuring your next cup exceeds expectations. This is what happens when well-formed criticism meets the ears of a confident, competent individual: the facts are taken to heart and studied, perhaps grudgingly. But upon reflection and determination that there was merit in the complaint, respect for the source of provocation goes through the roof.

These are the risks and rewards of criticism: depending upon how far your opinions reach, you may garner either immense respect or massive disdain from the individuals who consider it. In that light, is it risky to be publicly critical of a company upon which you base your entire livelihood? Possibly. But it could be just as risky to remain meekly under the radar while the thoughtful professionals at that company go out of their way to reward the people whose meaningful criticism they value.

Crazy Apple Car Rumors

When I first heard rumors about Apple’s alleged development of a car, I disregarded them without thinking. The idea that the company would stretch its focus so far away from its current line of computer software and hardware products seemed ridiculous, and happened to overlap with countless jokes over the years about the hilariousness that would ensue if Apple entered this, that, or another market.

My head jerked to attention, however, when the Wall Street Journal recently added its weight to the rumors, reporting a code name, “Titan,” for the project, and asserting that hundreds of employees are already working on the team.

Even in the wake of this revelation I clung to my skepticism, sensing that it would simply be too “out there” for Apple to tackle the automotive market. I agreed with reasons cited by folks such as Jean-Louis Gassée, who dismisses the idea as fantastical based on comparatively low profits, challenging customer-service obligations, and the absence of Moore’s Law-style advances over time in automotive technologies.

But today’s report from Jordan Kahn of 9to5Mac, listing a variety of automotive-industry experts who are now working for Apple, has really got me doubting my earlier dismissiveness.

What does it mean that Apple has hired a significant number of people with expertise in the auto industry? To me it means that they are either making a car, or that they are making a product that they know will uniquely leverage the abilities of people familiar with cars.

Personally, I’ve flipped over to being cautiously optimistic that the Apple car will become a reality. My first inclination was to worry that it represented a departure of focus for Apple, and that it would mean stretching their limited resources even thinner. But the 9to5Mac story drives home that a lot of the expertise required to pursue this dream, if that’s what they do, can be hired from outside the pool of software and hardware engineers that Apple has typically employed. I think it’s reasonable, for example, to be optimistic that a drive-train engineer’s efforts are not being wasted by working on a car instead of a MacBook Pro’s cooling fans.

Putting aside the significant effort of designing, manufacturing, marketing, distributing, and servicing a line of Apple-branded vehicles, having these products exist and in use by even a modestly large number of customers would offer some interesting benefits to Apple. Particularly, I’m curious to see how they might leverage the technological ownership of a whole car to serve their ambitions in mapping and navigation.

I thought it was a big loss for Apple when Google acquired Waze, the crowd-sourced navigation service that uses mobile phones to collect traffic data. Since then, I’ve been hoping that Apple might eventually offer a similar solution. With a suitably pre-rigged Apple car, the amount and quality of data collection might leapfrog even Waze’s impressive installed base. Imagine even 100,000 Apple cars in the US, equipped with built-in cameras on four sides, transmitting GPS and environmental graphics (anonymously and with user consent!) to Apple HQ. It might finally give Google something to worry about (assuming Google’s own cars haven’t already captured as much interest).

These rumors have fueled an enormous amount of speculation outside of Apple about whether or not they should build a car. Regardless of whether they do so or not, it’s clear from the amount of automotive-related hiring they have done that a great deal more speculation has probably been done inside of Apple, by minds that are now suited to make constructive decisions about whether Apple will build a car, what kind of car it will be, and when it will be available. I for one can’t wait to see what comes of it all.

The Siri Standard

John Gruber writes about his impression that Siri’s performance has improved over the past year:

Siri is noticeably faster than it used to be. Even just a year ago, I don’t think Siri could have held its own with Google Now pulling information like the current temperature or sports scores, but today, it does. Apple has clearly gotten much better at something everyone agreed was a serious weakness.

Michael Tsai chimes in with agreement, emphasizing improvements in reliability:

I had stopped using it because for years it would essentially throw away what I’d said. It was either unavailable (most of the time) or it didn’t understand me properly (less often). Now I regularly use it to make reminders while driving, and it pretty much always works.

I use Siri in much the same way that John and Michael seem to: for quick, relatively simple data inquiries, text messages, timers, and reminders. I share their impression that Siri has gotten faster and more reliable. It was most striking for me when I first updated to the iPhone 6:

I’m really impressed with the speed and accuracy of Siri on my iPhone 6. It’s exciting to know that Apple is making such progress on this.

Which is not to say Siri is perfect or doesn’t cause frustration to me and others. I use it frequently enough that I’m probably stymied by its misinterpretation of my command at least once a day. But the consequences of the misbehavior are usually not dire, and can be remedied right away. Usually it’s just a matter of sighing and rephrasing the command with a structure that I know will be “more Siri compatible.” And every so often, I say something instinctively before remembering “oh, that doesn’t work with Siri,” but before I’ve had a chance to cancel and restate it, I discover that in fact, it now does work with Siri. I know some people will have horror stories about Siri’s behavior, but for me, and apparently many others, it’s quietly improving all the time.

How many other Apple technologies are earning this kind of unsolicited praise right now? Especially in light of recent discussions about perceptions of a steady decline in quality, the progress by Apple in the Siri department is particularly noticeable.

What if all of Apple’s high-impact technologies were improving so demonstrably that folks were moved to praise the progress? What would the usually gripe-filled Apple blogging, Twittering, and forum-posting scene sound like? Let’s indulge the dream that these enthusiastic posts might grace the web someday soon:

It’s been weeks since I restarted any of my AirPort routers. File sharing between my Macs “just works.” Great work, Apple!

Continuity and AirDrop have become so reliable, I actually worry more about data getting lost by emailing it to myself than by beaming it instantly with Bluetooth.

Just deleted Google Maps from my phone. Apple has work to do with placemarks, but these new transit directions are awesome! A huge step above what we lost years ago, and I’m so much more comfortable having Apple handle my private location data.

Tried to backup my phone to iCloud, and Apple says I’m 2GB over my storage limit. It’s cool that they do the backup anyway, and give you 30 days to decide whether to upgrade the plan or download the backup archive. Seems like upgrading is a no-brainer?

No serious complaints about my apps for a year, so Apple just updated my account to “Solo” status. It’s so great to publish updates immediately to my customers. This is a privilege and a responsibility!

OK, OK. Some of these may be a little over the top. But, a boy can dream, can’t he?

I don’t doubt that the groups at Apple responsible for these … less often praised … technologies are composed of individuals striving to improve things as quickly as possible. It’s hard to say how much the impression of slow progress is due to internal challenges we don’t know about, Apple’s lack of knowledge about the breadth of defects, or the public’s perception being skewed by the severity of the impact from problems that persist.

Whatever combination of luck, hard work, and pragmatism is powering the Siri team’s “year of good work,” perhaps it should serve as a model, or at least as a symbol of hope for these teams as they move forward adding features, fixing bugs, and finessing the public’s perception of the value of their work. A world in which every group at Apple somehow achieved the standard of apparent progress that Siri has achieved would be a very good world indeed.

The Functional High Ground

Marco Arment laments his perception that Apple’s software quality is in such a rapid decline that the company has “completely lost the functional high ground.” I like this turn of phrase, even if I don’t agree with the extremity of the sentiment. Marco expands:

“It just works” was never completely true, but I don’t think the list of qualifiers and asterisks has ever been longer. We now need to treat Apple’s OS and application releases with the same extreme skepticism and trepidation that conservative Windows IT departments employ.

I myself am particularly paranoid when it comes to Apple’s future. I spent the earliest years of my professional career working for the company, and to this day I consider the education I received at Apple to have been equal parts technical and philosophical. I learned not only how to build quality software, but why it should be done: to not only serve customers, but to delight and surprise them.

For years, my concerns about Apple’s future have been largely to do with my worry that those philosophical values are decreasingly shared by Apple’s engineering staff and management. And yet, over the years, I have been surprised and delighted by the steady stream of new, quality products that Apple releases.

The current state of Apple’s software does not particularly concern me. Are there embarrassing blemishes? Yes. Does the annual schedule for major OS updates seem rushed? Of course. Are there Apple employees in positions of power who do not share Marco’s and my enthusiasm for software that “just works?” I regret to surmise that, indeed, there are.

But I’ve indulged these doubts about Apple since shortly after I was hired … in 1996. The mysterious, seemingly magical nostalgic components of Apple’s past success have always seemed threatened by the rapid waves of change that undo and reconfigure the company’s priorities. After the NeXT acquisition in late 1996, many of my colleagues and I feared the influx of new engineers would spell the end of the Mac as we knew it. In fact it did, but the new priorities of Mac OS X meshed well with the old priorities of Mac OS 9, yielding what I believe is an indisputably better, more Apple-like operating system than Apple was likely to have come up with on its own. There were many fits and starts along the way, including questions about arcane matters such as filename extensions and case sensitivity. These were but a few of many questions that would seem to make or break the legacy of the Mac. Choices were made, hearts were broken, and the Mac lives on.

Since I left Apple in 2002, I have been no stranger to criticizing the company for its flaws. The mistakes they ship in hardware and software are sometimes so glaringly obvious, it’s impossible to imagine how any engineer, manager, or executive could suffer the embarrassments. And yet, sometimes these defects linger for years before being properly addressed.

The problem has also been a focus of popular geek culture at many, many times in history. Way back in 2005, Dan Wood of Karelia was so frustrated by persistent flakiness in Apple’s software that he encouraged developers to report an Apple bug on Fridays. It worked: Brent Simmons, Wolf Rentzsch, Sven-S. Porst, countless others, and I were moved to file bugs not just that Friday, but for many weeks to follow.

Over the years I have never been at a loss for identifying problems big and small with Apple’s products, or with the way it conducts its business. I’m sure I had plenty of complaints starting in 2002, but I didn’t start blogging in earnest until 2005. Here are some highlights to remind you that things have never been fine with Apple:

  • 2005 – Keychain Inaccessibility. I lamented the poor behavior of Apple’s Keychain Access app, even after improvements that came in Mac OS X 10.4.3. Nearly ten years later, to the delight of the folks who make 1Password, this embarrassment remains largely uncorrected.
  • 2006 – We Need a Hero. I shined a light on the difficulty of implementing AppleScript support in applications. Things have steadily improved, but are still very frustrating and error-prone. At least now we have two automation languages to pull our hair out over.
  • 2006 – All Work and No Play… Apple’s first Intel portable computer was a sight for sore eyes, but a cause of sore ears. The maddening “CPU whine” persisted through several iterations of the hardware design until the machines finally became more or less (to my ears) quiet.
  • 2007 – Leopard Isn’t the Problem. Speaking of annual software release schedules, here’s my nearly 8-year-old reaction to Apple’s failure to meet the planned release schedules for both Mac and iOS in parallel. Is Apple suddenly more fixated on marketing than on engineering? Not judging by my assessment way back then that their statement was “bluntly crafted, sleazy marketing bullshit.”
  • 2008 – NSURLConnection Crashing Epidemic. Wouldn’t it be embarrassing if Apple shipped a bug so pervasive that it could crash any app that uses Cocoa’s standard URL loading mechanism? That’s what they did in Mac OS X 10.4.11, and it took them months to fix it. When they finally did, I ended up receiving a security update credit!
  • 2009 – Is Apple Evil? Speaking of embarrassments, how pathetic is it that nearly 7 years after the iOS App Store debuted, capricious rejections are still a mainstay of iOS tech journalism? In 2009, I reacted: “Alongside the stubbornly perfected refinement of its products, marketing, and public image, the company has always worn blemishes such as these.” Some things truly never change.
  • 2010 – Surviving Success. From the midst of “antennagate,” in which Steve Jobs accidentally coined the famous anti-advice “you’re holding it wrong.” I fretted that Apple was losing its marketing cool, and that Jobs should chill out:

    He spins the truth in that barely plausible manner that used to be celebrated as the “reality distortion field,” but now comes off as purposefully dishonest and manipulative.

    We don’t have Jobs to blame any longer for Apple’s less tasteful distortions of reality.

  • 2011 – Huh. I couldn’t find any particularly cogent complaints in my archives. Maybe I was too busy reacting with panic to Apple’s new Mac Application Sandbox. I did complain in an interview with The Mac Observer about “having to come to terms with the vast amount of stuff that Apple’s doing,” but that “it’s been a persistent, joyous complaint … that Apple is doing too much.”
  • 2012 – Fix the Sandbox. Having fully digested the impact of the Sandbox on shipping apps, I drew attention to the many problems I saw in Apple’s approach to (allegedly) enhancing user security:

    Given the current limitations of sandboxing, a significant number of developers will not adopt the technology, so its usefulness to users and to the security of the platform will be diminished.

  • 2013 – Respect the Crowd. Oh, right, Maps. Remember when Apple used to have reliable driving directions, place data, and even public transit directions?

    It’s all about the data. It doesn’t matter how beautiful Apple’s maps are, or how quickly they load, if they consistently assign wrong names and locations to the businesses and landmarks that customers search for on a daily basis.

    Apple has made significant improvements to their mapping data, and there are rumors, based largely in their acquisition of transit-oriented companies, that they may restore transit directions at some point. But to this day, Google Maps remains my go-to app for transit directions, while Google’s other directions app, Waze, gets my business for driving directions.

  • 2014 – Breach of Trust. We’re getting so close to modern times by now that Apple’s tactless imposition of a U2 album on everybody’s iPhone, whether they wanted it or not, could be considered part of Marco’s current diagnosis of what ails Apple. The nut of my take on the incident:

    It doesn’t matter much that Apple inserts an unwanted music album into your purchased list. But even a little move in a direction that threatens the primacy of users is a relatively big move for companies like Twitter or Apple, whose track records have inspired us to trust that we retain more authority over the personalization of these products than perhaps we do.

And now it’s 2015, and in the immortal words of Kurt Cobain: “Hey! Wait! I’ve got a new complaint.” Don’t we all. A company like Apple, moving at a breakneck speed, will undoubtedly continue to give us plenty to obsess about, both positively and negatively. I’ve been following the company closely since my hiring in 1996. Since that time, the company has consistently produced nothing short of the best hardware and software in the world, consistently marred by nothing short of the most infuriating, most embarrassing, most “worrisome for the company’s future” defects.

Apple is clearly doomed. I think Apple is going to be okay.

Blockpass For Dummies

After I wrote recently about my tool for preventing accidental typing of my password into plain text fields, I received a large number of requests asking if I would open source the tool. I generally hesitate to open source my private tools, because I throw them together with understandably lower standards than the code that I ship to users, and because I often rely upon my accumulated convenience classes and frameworks to get the job done expeditiously.

But for some reason I’m deciding to share Blockpass on GitHub. I had to do some work to make using and running it a little more bulletproof. Rather than rewrite keychain access to avoid using my private “RSKeychain” class, I decided to just include that.

Instructions for configuring and installing the tool are detailed in the Readme file on the GitHub project page. You probably should not pursue the project unless you are comfortable using Xcode and building projects from scratch. I may consider building a standalone version of the tool someday, but today is not that day.

If you have any specific questions or feedback, feel free to open an issue on the project or drop me a line on Twitter.

Push Notification Traps

Recently Marco Arment bemoaned Apple’s use of push notifications for promotional purposes. Apple sent a notification promoting their project (RED) products for sale in the App Store, which Marco judged as user-hostile and in poor taste, even if it can be argued it was “for a good cause.” I tend to agree with Marco on this point.

In the latest episode of the Accidental Tech Podcast, Marco, along with co-hosts John Siracusa and Casey Liss, talked more about the problem of notification spam in general and the difficulty of policing it at app review time. They seemed to be in agreement that the only realistic tool at Apple’s disposal is to devise a crowd-sourced flagging system for inappropriate notifications, using that collective information to pinpoint the worst offenders and then impose consequences upon them.

They went on to lament that Apple is not very good at these kinds of crowd-sourcing solutions, and that in all probability the vast majority of iOS users are not concerned or aware that they should be concerned about notification spam. The lack of consumer awareness about the nature of the problem could itself be a limiting factor in any crowd-sourced solution.

But I propose that Apple does have tools at its disposal that could help flag the worst offenders immediately, without the cooperation of the public, and without violating any user’s privacy.

All remote push notifications are delivered from an app’s developer to an end-user’s device via the Apple Push Notification service. This is good, because it puts Apple in a position to intercept and e.g. immediately shut down a bad actor from delivering notifications to any of its intended recipients. However, the content of all these notifications passing through Apple’s service is encrypted. This is good, even required, because it protects developer and company data from being eavesdropped. But it’s bad from an enforcement standpoint because it thwarts possible solutions such as using a Bayesian filter on content to flag spam, similarly to the way an app like SpamSieve works on the Mac.

So Apple has complete control over the distribution mechanism, but zero ability (apart from metadata including the originating company and the target device) to examine the content passing through. Game over? I don’t think so.

Apple can still use its unique role as the center of all things iOS to devise a system through which they would themselves be virtually subscribed to all unremarkable notifications from a particular app’s developer. Think about the worst notification spam you’ve seen. In my experience it’s not super-personalized. In fact, it’s liable to be an inducement to keep using the app, to advance in a game, to become more engaged, etc. I think Apple would collect a ton of useful information about spammy developers if they simply arranged that every app on the App Store that is capable of sending push notifications included, among its list of registered devices, a “pseudo-device” in Cupertino whose sole purpose was to receive notifications, scan them for spammy keywords, apply Bayesian filters, and flag questionable developers.
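To make the idea concrete, here is a toy sketch of the kind of Bayesian scoring such a trap device could run over incoming notification text. The labeled examples and the scoring function are entirely invented for illustration; a real system (in the spirit of the SpamSieve-style filtering mentioned above) would train on a large corpus of actual notifications.

```python
import math
import re
from collections import Counter

# Toy labeled corpus, invented for illustration only. In the scheme
# described above, these would be real notifications collected by
# Apple's hypothetical "trap devices."
SPAM = [
    "Come back! Your village misses you",
    "Limited time offer: 50% off gems, play now",
    "You haven't opened the app in 3 days",
    "Don't miss out! Daily reward waiting",
]
HAM = [
    "Your package was delivered",
    "Alice sent you a message",
    "Flight UA 210 is boarding at gate 32",
    "Your ride is arriving now",
]

def tokens(text):
    return re.findall(r"[a-z']+", text.lower())

def counts(docs):
    c = Counter()
    for d in docs:
        c.update(tokens(d))
    return c

spam_counts, ham_counts = counts(SPAM), counts(HAM)
vocab = len(set(spam_counts) | set(ham_counts))
spam_total = sum(spam_counts.values())
ham_total = sum(ham_counts.values())

def spam_score(text):
    """Naive Bayes log-odds that a notification is spam.

    Positive means "looks like spam"; add-one smoothing keeps
    unseen words from zeroing out the product.
    """
    score = 0.0
    for w in tokens(text):
        p_spam = (spam_counts[w] + 1) / (spam_total + vocab)
        p_ham = (ham_counts[w] + 1) / (ham_total + vocab)
        score += math.log(p_spam / p_ham)
    return score
```

With this toy data, an engagement-bait notification such as “Don’t miss your daily reward, play now” scores well above zero, while “Your package was delivered” scores well below it. Developers whose notifications consistently land on the spammy side of that line would be the ones flagged for human review.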

Because Apple controls the namespace for device IDs, has access to the executables for all the apps in the store, and is technically equipped to run these apps in contrived environments, they could coax applications to perceive themselves as having been installed and run on a device with an ID of Apple’s choosing. In fact, it’s probably simplest if this very thing happens while App Store reviewers are evaluating apps. It’s true that they won’t see the spammy notifications during review, but the mechanics of triggering an app’s registration for future notifications would ensure delivery to a “trap device,” actually a giant database against which arbitrary research could be conducted.

This would not be a violation of anybody’s privacy, because only the artificial App Store review team’s data (if any) would be involved. Most likely, it would not capture most bona fide useful notifications, because reviewers wouldn’t use the app to the extent that such notifications are generated. But it would capture all the “send a notice to everybody who’s ever launched the app” and “send a notice to folks who haven’t launched lately” type spam. That seems like a pretty big deal.

At the very least, such a system could serve as a baseline mechanism for flagging developers, and in the event that some future crowd-sourced solution was unveiled, it would layer nicely on a system in which Apple was already collecting massive amounts of data about the most humdrum, spammy notifications that developers send.

Insecure Keyboard Entry

If you use a passphrase to control access to your computer, as you probably should, then it has no doubt become second nature to type it quickly when you sit down to get to work. If you’ve set an aggressive lock-screen timeout, as you probably also should, then you have become blazingly efficient at typing this password. Perhaps too blazing, perhaps too efficient.

If this sounds like you so far, perhaps I can complete the picture by describing the heart-stopping horror of sitting down to your computer after a short time away, methodically typing your password to unlock it, only to realize the computer wasn’t locked at all, and you just typed your password into a chat window, or worse, posted it to Twitter.

I set out recently to address this problem on my computer by writing my own nefarious little tool, which would act as a global keystroke sniffer, looking for any indication that I am typing my password, at which point it puts up a helpful reminder:

Panel reminding me not to type my password in plain text fields.

The beauty of this tool is it catches me at the moment I type my password (actually just a prefix of it, but that’s a technicality), and by nature of putting up a modal dialog that jumps in my face, absorbs any muscle-memory-driven effort to complete the password and press return in whatever insecure text field I might have been typing into.
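For illustration, the prefix-matching heart of such a tool might look something like the following sketch. The class and the example prefix are hypothetical, not the actual Blockpass code; in the real tool, the keystroke feed would presumably come from a system-wide event tap rather than a Python loop.

```python
from collections import deque

class PrefixWatcher:
    """Hypothetical sketch of the matching logic only.

    The real tool would feed this from a system-wide keystroke
    event tap; only the matching step is modeled here.
    """

    def __init__(self, prefix):
        # Watch for a *prefix* of the password, so the full secret
        # never has to be stored or matched before the warning fires.
        self.prefix = prefix
        self.recent = deque(maxlen=len(prefix))

    def feed(self, char):
        """Feed one keystroke; True the instant the prefix is typed."""
        self.recent.append(char)
        return "".join(self.recent) == self.prefix

# Hypothetical example prefix; never the whole password.
watcher = PrefixWatcher("hunter")
```

Each keystroke is fed in as it arrives, and the watcher fires at the exact moment the final character of the prefix is typed, which is what lets the warning dialog interrupt the muscle-memory typing mid-password.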

You may wonder whether this prevents the legitimate entry of my password, e.g. into the fields the system presents when asking me to confirm an administrative task. The answer is no, because part of the beauty of those standardized password fields is that Apple has taken care to enable a secure keyboard entry mode while they are active. While a standard password field is focused, none of your typing is (trivially) available to other processes on the system. So my tool, along with any other keyboard loggers that may be installed on the system, is at least prevented from seeing passwords being typed.

I’ve been running my tool for a few weeks, confident in the knowledge that it will prevent me from accidentally typing my password into a public place. But its aggressive nature has also revealed to me a couple of areas that I expected to be secure, but which are not.

Insecure Input Fields

The first insecure input area I noticed was the Terminal. As a power-user, it is not terribly uncommon for me to invoke super-user powers in order to e.g. clean up a system-owned cache folder, install additional system packages, kill system-owned processes that are flying out of control, or simply poke around at parts of the system that are normally off-limits. For example, sometimes I edit the system hosts file to force a specific hostname to map to an artificial IP address:

sudo vi /etc/hosts
Password:

The nice “•” is new to Yosemite, I believe. Previously, tools such as sudo just blocked typing, leaving a blank space. But in Yosemite I notice the same “secure style” bullet is displayed by both sudo and ssh when prompting for a password. To me this implies a sense of enhanced security: clearly, the Terminal knows that I am inputting a password here, so I would assume it applies the same care that the rest of the system does when I’m entering text into a secure field. But it doesn’t. When I type my password to sudo something in the Terminal, my little utility barks at me. There’s no way around it: it saw me typing my password. I confirmed that it sees my typing when entering an ssh password, as well.
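The distinction matters: a terminal password prompt only suppresses echo at the tty layer, which is an entirely separate mechanism from the system’s secure keyboard entry mode. Here is a rough Python sketch of that echo-off trick, in the spirit of what getpass(3)-style prompts do; it is my own illustration, not sudo’s or ssh’s actual code.

```python
import sys
import termios

def read_password(prompt="Password: ", stream=None):
    """Read one line with terminal echo turned off, getpass(3)-style.

    Only the *display* of the typing is suppressed (the tty ECHO
    flag); the keystrokes themselves still travel the normal input
    path, untouched by the system's secure keyboard entry mode.
    """
    if stream is None and sys.stdin.isatty():
        fd = sys.stdin.fileno()
        old = termios.tcgetattr(fd)
        new = list(old)
        new[3] &= ~termios.ECHO  # clear the ECHO bit in the local flags
        sys.stderr.write(prompt)
        sys.stderr.flush()
        try:
            termios.tcsetattr(fd, termios.TCSADRAIN, new)
            line = sys.stdin.readline()
        finally:
            termios.tcsetattr(fd, termios.TCSADRAIN, old)
            sys.stderr.write("\n")
        return line.rstrip("\n")
    # Non-interactive fallback (piped or simulated input): plain read.
    source = stream if stream is not None else sys.stdin
    return source.readline().rstrip("\n")
```

The keystrokes flow through the normal input path the whole time, which is exactly why my utility, or any other event-tap logger, can still see them even though nothing appears on screen.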

The other app I noticed a problem with is Apple’s own Screen Sharing app. While logged in to another Mac on my network, I happened to want to connect back, via AppleShare, to the Mac I was connecting from. To do this, I had to authenticate and enter my password. Zing! Up comes my utility, warning me of the transgression. Just because the remote system is securely accepting my virtual keystrokes, doesn’t mean the local system is doing anything special with them!

What Should You Do?

If you do type sensitive passwords into Terminal or Screen Sharing, what can you do to limit your exposure? Terminal, at least, makes it easy to enable the same secure keyboard entry mode that standard password fields employ, and to leave it active the entire time you are in Terminal. To activate it, just choose Terminal -> Secure Keyboard Entry. I have confirmed that when this option is checked, my tool is not able to see the typing of passwords.

Why doesn’t Apple enable this option in Terminal by default? The main drawback is that my tool, and other tools like it, can’t see any of your typing. That sounds like a good thing, unless you take advantage of very handy utilities such as TextExpander, which rely upon respectful, trusted access to the content of your typing in order to provide real value. Furthermore, if you rely upon assistive software such as VoiceOver, enabling Secure Keyboard Entry could impact the functionality of that software. In short: turning on secure mode shuts down a broad variety of software solutions that may very well be beneficial to users.

As for Screen Sharing, I’m not sure there is any way to protect your typing while using it. As a “raw portal” to another machine, it knows nothing about the context of what you’re doing, so as far as it’s concerned, typing into a password field on the other machine is no different from typing into a word processor. Unfortunately, Screen Sharing does not offer an option similar to Terminal’s application-wide “Secure Keyboard Entry.”

What Should Apple Do?

Call me an idealist, but every time that tell-tale “•” appears in Terminal, the system should be protecting my keystrokes from snooping processes. I don’t know the specifics of how or why, for example, both ssh and sudo receive the same treatment at the command line, but I suspect it has to do with their using a standard UNIX mechanism for requesting passwords, such as the function getpass() or pam_prompt(). Knowing little about the infrastructure here, I’m not going to argue that it’s trivial for Apple to make this work as expected, but being in charge of all the moving parts, they should make it a priority to handle this sensitive data as common sense would dictate.

For Screen Sharing, I would argue that Apple should offer a similar option to Terminal’s “Secure Keyboard Entry” mode, except that perhaps with Screen Sharing, it should be enabled by default. The sense of separation and abstraction from the “current machine” is so great with Screen Sharing, that I’m not sure it’s valuable or expected that keyboard events should be intercepted by processes running on the local machine.

What Should Other Developers Do?

In a technical note about secure input, Apple makes a big deal of stressing that developers should “use secure input fairly.” By this they mean that any developer who opts to enable secure input mode (the way Terminal does) should do so in a limited fashion and be very conscientious about turning it back off again when it’s no longer needed. Ideally it should be disabled within the developer’s own app except for those moments when, e.g., a password is being entered, and it should absolutely be disabled again when another app takes control of the user’s typing focus.

Despite the strong language from Apple, it makes sense to me that some applications should nonetheless take a stronger stance, enabling secure input mode whenever it makes sense for the app. For example, I think other screen sharing apps such as Screens should probably offer a similar (possibly on by default) option to secure all typing in an open session. I would make a similar argument for virtualization software such as VMware Fusion. It’s arguable that virtualized environments tend to contain less secure data, but it seems dangerous to make that assumption. It does not serve users’ expectations for security that whole classes of application permit what appears to be secure typing (e.g. in a secure field in the guest operating system) that is nonetheless visible to processes running on the host system that is running the virtualization.

What Should I Do?

Well, apart from writing this friendly notice to let you know what you’re all up against, I should certainly file at least two bugs. And I have:

  • Radar #19189911 – “Standard” password input in the Terminal should activate secure input
  • Radar #19189946 – Screen Sharing should offer support for securing keyboard input

Hopefully the information I have shared here gives you a better understanding of the exposure Terminal, Screen Sharing, and other apps may be subjecting you to, with respect to what you might have assumed was secure keyboard input.

Manton’s Twitter Apps

My long-time friend and podcasting partner, Manton Reece, is finally saying a painful goodbye to all of his apps that use Twitter’s API. Reacting to Twitter’s recent announcements about full-history search:

I was thrilled by this upgrade to the Twitter service. That the search was so limited for so long was the primary reason I built Tweet Library and Watermark to begin with. Unfortunately, this functionality is only for the official Twitter apps. It will not be made available to third-party developers.

Manton is probably the most earnest developer I know. He is eager and ambitious in his indie pursuits, but always slightly more interested in serving the greater good than in serving his own interests. To me this is a charming, admirable quality, even if it has led to some inevitable frustrations and disappointments.

It’s easy to imagine how a developer like Manton Reece would have been so eager to participate in the Twitter developer platform of 2007, and how devastating it must have been for him to watch as his ambitions for the platform became less and less viable over time.

How Many Blogs Do You Have?

One of the things that has kept me from blogging more over the years has been the problem of worrying that, or at least wondering if, the specific thing that is on my mind right now is particularly useful or interesting for my readers.

I find it sort of charming when people write “whole person” blogs that may contain material spanning from their personal emotions, to the culture they appreciate, to the work that they do, and the politics they believe in. But I also find it kind of irritating when I don’t happen to value or share in common one or more of those many disparate interests. Slogging through myriad posts about renaissance faires or meat rendering techniques, just to get the rare morsel about, say, optimizing Objective-C code, is not my idea of enjoying the written word as a reader of blogs.

And so I try to be very careful to keep things pertinent to the blog at hand. This has led to my having, for a long time now, at least two, and often far more, active blogs at a time. I started with a single LiveJournal blog more than a decade ago, but when I started building Red Sweater it made sense to add a company blog as well. I pushed the limits of what is appropriate for a company blog, frequently using it as a soapbox for my own personal beliefs, usually about tech issues, but occasionally straying into discussions about the environment, or endorsing a political candidate. I even eulogized my dad when he passed away four years ago.

I enjoyed a significant audience on the Red Sweater Blog, but I became increasingly uncomfortable with the fact that it was a personal blog more than a professional one. Sure, I announced all my product news, but also wrote about, well, almost anything I felt like. That didn’t seem right.

There was also a ton of stuff I didn’t write about at all. Stuff that wasn’t related to my business and furthermore wasn’t related to technology. For this, I kept an old “personal blog” at Blogspot, which was basically the evolution of my original LiveJournal blog. Here, for example, I wrote a long post on buying a car, sharing the tips I’d picked up in my own process of doing so.

But that personal blog wasn’t really suitable, or I didn’t think so anyway, for technical rants or programming advice. If I wanted to make broad observations about a tech company, or wanted to share advice about code signing, these didn’t really belong on either Red Sweater or my personal site.

So I’ve basically added blogs until I no longer hem and haw about whether or not to post something. There’s still a challenge sometimes in deciding which of my blogs to post to, but never a limitation of there not being a suitable outlet if I want it.

The only problem is that now, whenever I post a new blog entry and share it on Twitter, somebody will inevitably have seen the blog for the first time and ask, “how many blogs do you have, anyway?”

Let’s see if I can enumerate them all, as well as my rough idea of the audience they serve and the correlated limitations on content.

  • Red Sweater Blog. My official company blog serves to inform existing users about updates to my software in a casual way, with more verbose explanation than a mere bullet list of changes. The blog also, at its best, shares tips and tricks about using not only my software but software that is highly pertinent to Mac and iOS users as a whole.
  • Bitsplitting. This is my technical soapbox. If something feels technical in nature but is not clearly tied to my work at Red Sweater in such a way that it’s meaningful to Red Sweater customers, then it goes here. I find this particularly liberating because it gives me a chance to share opinions about tech companies and people that might be less appropriate coming from an official company blog.
  • Indie Stack. Some of my best posts on the Red Sweater Blog were long excursions into the process of debugging or programming for the Mac and iOS. Granted, some normal people found these posts interesting, but for the most part they fly right over the heads of those who are tuning in to learn about either my products or my philosophies about technology. Indie Stack is the nerd haven where anything goes so long as it’s suitable to other developers or people who happen to be interested in developer technologies.
  • Punk It Up. Often neglected for long periods of time, this is where my non-technical writing belongs. Observations about social situations, jokes, advice about buying cars, etc. If it’s suitable for a general audience, it goes here. Wait, that’s not right, because this is also my blog for crude, relatively unedited quips on whatever subject. In short, these are my liberal arts writings, but they have also sometimes been uncensored. Perhaps that’s an opportunity for further bifurcation.

And unless I’m missing something, that’s how many blogs I have. Oh, but I forgot the podcasts and audio:

  • Core Intuition. My weekly podcast with Manton Reece. We talk about anything related to being a Mac and iOS “indie” developer. Geared towards both developers and people who enjoy a peek inside the minds of two guys actively pursuing our indie ambitions.
  • Bitsplitting Podcast. Spun off from the Bitsplitting blog, the idea with the podcast was to fill a void I perceived in other tech podcasts: a failure to dive deeper into the backstory of individuals being interviewed. The format for this show is a long-form interview that doesn’t hesitate to get philosophical about the life ambition of a guest, and how their stories have fulfilled that ambition thus far.
  • TwitPOP. Born from my idea one day that (nearly) literal renditions of poetic tweets in musical form would be a good way to start doing something musical again, and to explore my fascination with the elegance of Twitter’s 140-character expressions.

You might wonder how a crazy person like myself manages to keep this many blogs going. I’m far from perfect, so of course there is some amount of neglect. I just posted to Punk It Up for the first time in four years, but it was nice to have it there when I finally got around to it.

But the other thing is that this is only possible because MarsEdit makes editing a large number of distinct blogs somewhat sane. All of my blog posts and podcast episodes start in a familiar, Mac-based editor interface where all my favorite keyboard shortcuts, scripts, saved images, and macros live. Whether I’m writing to my company’s users or to the few people who take joy in my musical tweets, the interface for doing so is the same.

To be fair, there is certainly a cost to splitting everything up like this. Whatever notoriety I may gain with one blog is unlikely to transfer directly to the others. So if I wanted as many people as possible to see a specific post, it would have to go to the most visited blog, whether it was suitable content or not.

The compromise I’ve taken to address this problem is to treat Twitter as the over-arching, meta-topicked super-blog that acts as the umbrella to all the others. Regardless of the blog I post to, I’m likely to link to it from my @danielpunkass Twitter account. Sure, folks who follow me on Twitter may get tired of seeing links to various subjects that don’t interest them, but that is far less tedious to dig through than whole articles placed where they clearly don’t belong.

Now you know about all my blogs, and why they exist in such numbers. Just don’t ask how many Twitter accounts I have …

The 2014 Retina Web

When Apple announced the first “Retina” HiDPI device, the iPhone 4, it set into motion a slow (slower than I expected, anyway!) migration away from a web on which it was safe to assume every client had roughly the same screen resolution, towards one in which the resolutions of some clients would be so much higher as to warrant distinct image resources.

From a HiDPI device, it’s obvious to most people when a site has taken care to ensure that all the images are suitably high resolution to look sharp on screen. Sites that are not updated look blurry if not downright pixelated, and really take the shine off these fancy displays.

So it seems obvious to me, and should seem obvious to you, that if it’s at all feasible, every web publisher should ensure that her or his site renders beautifully on a HiDPI device. But how feasible is it, really?

Solutions in 2010

The problem in 2010 was that HiDPI seemed to take the web by such surprise that there was no drop-dead stupid way of updating a web site so that it served higher resolution files to the new devices while continuing to serve smaller images, which were also, by definition, a better fit for lower-resolution screens. An undoubtedly non-exhaustive list of the solutions advised at the time:

  • Serve @2x images. Where you used to have a 100×100 pixel JPG, serve a 200×200 JPG but keep the width and height at 100. It works as expected for older devices, but newer devices with reasonable browsers will take advantage of the extra information density to draw the image with greater precision. The main downside to this approach was that even older devices would be forced to download the larger, higher-resolution image files.
  • Use CSS background images. This approach took advantage of the ability for CSS to specify that specific CSS rules should be applied only on devices where the ratio of pixels to screen points was e.g. 2 instead of 1. Because the CSS would be evaluated before any resources are loaded, using this technique would allow a browser to download only the image suitable for display on the current device. The main downside I saw to this was that it encouraged moving away from semantic “img” tags and towards using e.g. div tags that just happen to behave just like images. Things tend to go to hell when printing a page that uses this trick, and I have to imagine it isn’t super friendly to screen-reading technologies.
  • Use JavaScript hacks. I say “hacks” with a careful tongue, meant to express both disdain and admiration. Actually, I don’t know how many bona fide solutions there were in the early days, but I seem to recall people talking of dynamic scripts that would rewrite the “src” attributes of image URLs depending on whether they were being loaded on a HiDPI screen or not. The downsides here are that it feels super fiddly, and there were questions, borne out as justified I think, as to whether the tricks would work universally or not.
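For what it’s worth, the CSS background image approach from the list above can be sketched as a tiny rule generator. This is a Python sketch with made-up selector and file names; the -webkit- prefixed media query shown was the commonly used form of the day:

```python
def hidpi_css(selector, image, image_2x, width, height):
    """Emit a CSS rule pair: a normal background image, plus a HiDPI
    override gated on a device-pixel-ratio media query."""
    return (
        f"{selector} {{\n"
        f"  background-image: url({image});\n"
        f"  background-size: {width}px {height}px;\n"
        f"  width: {width}px; height: {height}px;\n"
        f"}}\n"
        f"@media (-webkit-min-device-pixel-ratio: 2) {{\n"
        f"  {selector} {{ background-image: url({image_2x}); }}\n"
        f"}}\n"
    )

print(hidpi_css(".logo", "logo.png", "logo@2x.png", 100, 100))
```

Because the browser evaluates the media query before fetching resources, only the matching image is downloaded — which is exactly the bandwidth win the technique was prized for, semantic-markup downsides notwithstanding.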

I jumped to update most of the Red Sweater pages. Why? Mainly for the reasons I listed in Target The Forward Fringe:

HiDPI customers may be a fringe group, but they are a forward-facing fringe. They represent the users of the future, and the more we cater to them now, the more deeply embedded our products and designs will be in their culture. The future culture.

Great thinking, Daniel. Only, in spite of more-or-less supporting Retina very early on, I never really got good at it. I embraced a combination of “just serve @2x images” and “use CSS background images.” But both solutions have bugged me, and made it less fun to change anything about the graphical make-up of my sites. Thus, I have mostly adopted the “if it ain’t broke” approach for the past 4 years, and that has been fine.

Except no, it hasn’t been fine. Because it is broke. Only after finally getting my first Retina MacBook Pro earlier this year have I finally found myself in front of a HiDPI browser frequently enough to become truly judgmental of the LoDPI web. And wouldn’t you know it, one of the offenders is none other than the Red Sweater site. The main page and product pages all sport fancy HiDPI graphics of the application icons, but incidental badges and, worst of all, screenshots of my apps are fuzzy when viewed on a HiDPI Mac. The very “forward fringe” I’m supposed to be catering to will not be so confident of that fact if they rely solely upon my screenshots. So this morning I took to the long-postponed task of correcting my Retina ways.

Solutions in 2014

Surely in 2014, having had four years to bake, the methods for supporting HiDPI on the web will have gelled into a few no-brainer, 100% effective techniques? I had heard a few things over the years about image sets, picture tags, etc., but nothing really jumped out as being the obvious way to support Retina. That’s annoying. I even took to Google and tried searching for definitive rundowns of the 2014 state of the art. Admittedly, my Google-fu is weak (does adding “site:alistapart.com” to any query count as deep-diving in the web realm?), but I wasn’t turning up anything very promising. I took to Twitter:

My reference to “srcset” alluded to my barely understood impression that a smart-enough browser would interpret the presence of a “srcset” attribute on img tags, and use the content of that attribute to deduce the most suitable image resource for the HTML view being served.
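As I understand it, the browser’s job when it honors srcset boils down to choosing the best-matching candidate for the current screen. Here is a rough sketch of that selection, with a deliberately simplified parser that handles only “1x, 2x”-style density descriptors — the function name and logic are my own, not the standard’s actual algorithm:

```python
def pick_srcset_candidate(srcset, device_pixel_ratio):
    """Parse a simple 'url 1x, url 2x' srcset string and return the URL
    whose density descriptor best matches the device, preferring the
    smallest density at or above the device's ratio."""
    candidates = []
    for entry in srcset.split(","):
        url, descriptor = entry.split()
        candidates.append((float(descriptor.rstrip("x")), url))
    candidates.sort()
    for density, url in candidates:
        if density >= device_pixel_ratio:
            return url
    return candidates[-1][1]  # fall back to the densest available

print(pick_srcset_candidate("logo.png 1x, logo@2x.png 2x", 2))  # logo@2x.png
```

The real attribute also supports width descriptors and the companion “sizes” attribute, and browsers are free to factor in bandwidth — so treat this strictly as an intuition pump.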

Unfortunately I didn’t get a definitive response along the lines of “you should go read this ‘The 2014 Retina Web’ article.” I’m assuming that’s because it’s really hard to pin down a definitive approach when so many different people have differing priorities: how much effort do you put into supporting older browsers, how important is it to minimize bandwidth costs, are you willing to take on 3rd party JavaScript libraries, yadda, yadda, yadda.

In the absence of such an article, I guess that’s what I’m trying to approximate here, for myself and for all my peers who have not paid a ton of attention to the state of the art since 2010, and who would like at least to set themselves down the path towards making an informed decision. My take on the rough choices we have today is probably all wrong, because I just learned most of it 5 minutes ago, but because I think I would have nonetheless benefited from such a rundown, here it is:

  • Keep doing things the 2010 way. That is, if it actually ain’t broke, or you actually don’t care.
  • Use srcset and associated technologies. These are specified in the W3C’s HTML draft standard as the new “picture” tag and an extension of the “img” tag with attributes such as srcset. To answer my own question, “can I just use srcset?”: I think the answer is more or less “yes,” as long as you don’t mind degrading to a lower-resolution experience in any browser that doesn’t yet support the evolving standard. And I’m not 100% sure yet, but I think I don’t mind.
  • Use a polyfill. I just learned that a polyfill is a fancy word for a JavaScript library specifically geared towards providing a compatibility layer such that older browsers behave even when you use newer web technologies. I think the gist of this approach is to more or less use the W3C draft standard features including picture tags and srcset attributes, but to load a JavaScript library such as Picturefill to ensure that the best possible experience is had even by folks with clunky old browsers.
  • Use 2014 JavaScript hacks. You could argue the polyfill approach is also a hack, but distinct from that is a popular approach in which a robust library such as Retina.js is used, not to facilitate any kind of semi-standard W3C-approved approach, but to simply get the job done using runtime JavaScript substitution, in a manner that does not require extensive changes to your existing HTML source code. The gist of Retina.js in particular is that in its simplest deployment, it will look for any img tags and replace the src attribute with a URL that points to the @2x version of the asset, if appropriate for the screen the page is being loaded on.
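The substitution at the heart of that last approach amounts to a filename rewrite. Here’s a rough sketch in Python rather than JavaScript — Retina.js itself also verifies that the @2x asset actually exists on the server before swapping, which this toy version skips:

```python
import os.path

def at2x_url(src, device_pixel_ratio=2):
    """Rewrite an image URL to its @2x counterpart on HiDPI screens,
    e.g. 'images/logo.png' -> 'images/logo@2x.png'."""
    if device_pixel_ratio < 2:
        return src  # standard-resolution screens keep the original asset
    root, ext = os.path.splitext(src)
    return root + "@2x" + ext

print(at2x_url("images/logo.png"))  # images/logo@2x.png
```

The appeal is obvious: your HTML keeps its plain img tags, and the naming convention does the rest. The cost, as with the 2010 @2x trick, is an extra network round-trip and a visible swap on slower connections.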

Further Reading

My searching and the responses of folks on Twitter turned up some valuable resources that may help to paint a clearer picture of what’s been going on. In no particular order:


I want to emphasize that this post is an exposition of a few inklings of truth that I gleaned from surveying the web and some friendly, responsive folks on Twitter. There’s no need to roast me for being wrong about anything here, because I don’t claim to know anything about the topic. Well, maybe 5 minutes’ worth of research more than you…

Many thanks to @edwardloveall, @tomdiggle, @samuelfine, @josephschmitt, @adamklevy, @octothorpe, @seiz, @nico_h and others I no doubt missed or who chimed in after I published this piece, for responding to my Twitter query and helping me to start painting a picture of the current state of the art.