
Out Of The Bag

AppleInsider reported on Friday that the number of visitors to their site purportedly running a pre-release version of Mac OS X 10.9 had risen dramatically in January. Federico Viticci of MacStories followed up on Twitter, confirming a similar trend.

I was curious about my own web statistics, so I started poking around in my Apache log files. Each entry starts with the IP address of the visitor and includes various other details: the URL that was accessed, the referrer, and, most importantly here, the user agent string for the browser.

Although the vast majority of visitors to my sites are running Mac OS X 10.8, or iOS, or even Windows, there were indeed a few examples of visitors who appeared to be running 10.9. This is what the user agent string looks like:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) AppleWebKit/537.28.2 (KHTML, like Gecko) Version/6.1 Safari/537.28.2

See that 10_9? It’s a strong indicator, combined with the respectably “higher than 10.8” Safari and WebKit versions, that the visitor is indeed running 10.9. Could it be fake? Sure, but the odds of anybody faking this kind of thing seem relatively low: there is little imaginable reward for duping a site into believing that a solitary IP address is running 10.9, and it would be challenging to orchestrate some kind of distributed fraud without being found out.

If you have access to your own site’s HTTP access log, and the format is like mine, you can sift out the 10.9 accesses by simply grepping for the 10_9 substring:

grep 10_9 access_log

If you have any matches, odds are good that they will be from IP addresses that start with 17. Why? Because Apple is nearly unique in owning outright an entire class A block of IP addresses: every address starting with “17.” is theirs.
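
If you want to pull out just the Apple-owned addresses, a pipeline along these lines should do it, assuming that, as in my logs, the visitor’s IP address is the first field of each line:

# Unique visitor IPs in Apple's 17.x.x.x block with a 10.9 user agent
grep 10_9 access_log | grep "^17\." | awk '{print $1}' | sort -u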

So people at Apple are running 10.9. What’s the big deal? For one thing, anybody with access to a reasonably popular web site’s access logs now has insight into Apple’s development schedule. Look at the graph from the AppleInsider link above and you can deduce that the number of users actively running 10.9 has gone up; I would also guess that the troughs and peaks in the graph correlate with the release cycle of internal test builds. What is this worth to a competitor? Probably not much, but who knows.
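
If you’re curious to eyeball the same trend in your own logs, a rough per-day tally of 10.9 hits is a one-liner. This assumes the standard Apache log format, in which the bracketed timestamp is the fourth space-separated field:

# Count 10.9 hits per day (timestamps look like [18/Jan/2013:10:00:00)
grep 10_9 access_log | awk '{print substr($4, 2, 11)}' | sort | uniq -c

The days sort alphabetically by month name rather than chronologically, but for spotting peaks and troughs it’s good enough.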

The other issue that comes to mind is that not all of the IP addresses are liable to start with 17. Why? For one thing, Apple employees may be working from home, either in the Bay Area near Apple headquarters or scattered around the world in their respective telecommuting locations. For another, Apple may have granted early access to close business partners, who would naturally be running the operating system in their own office environments, on subnets other than 17. To see whether you’ve been treated to any of these visitors, and to further refine the list to avoid duplicates from the same IP, try this:

grep 10_9 access_log | grep -v "^17\." | sort -u -t- -k1,1

If you found any results, first of all I strongly encourage you not to share the IP addresses in public. I am writing this article at least in part to call out the reasons why Apple’s divulging this information is a risk to its employees and partners. You should protect the confidence of your site’s visitors.

That said, you may want to privately perform a rough geographic lookup based on the IP address. Googling will turn up many services for this. You will probably find that the IP address maps to a location in San Francisco, San Jose, or Santa Cruz, but some of my 10.9 visitors hailed from other parts of the US.
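
If you’d rather not paste addresses into somebody else’s web form, the whois tool that ships with Mac OS X can give you a rough idea. The output format varies by registry, but for addresses registered with ARIN a filter along these lines works (the address below is just a placeholder):

# Show the registrant's organization and rough location for one address
whois 203.0.113.42 | grep -E -i 'orgname|city|stateprov|country'

Keep in mind this locates the organization that registered the block, not necessarily the person at the keyboard, which is about as precise as any of these lookups get anyway.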

So Apple’s broadcasting of the Safari user agent string reveals information about their development schedule, and divulges the IP addresses of likely employees or business partners. While I can’t quite imagine somebody taking advantage of the employee IP addresses, it sets off my spidey-sense creepiness alarm. The potential for divulging business partners could be of more obvious pragmatic interest to investors or competitors. The discovery of an alliance between Apple and another company would seem likely to affect the perceived value of either company, and could ruffle the feathers of other business partners who feel threatened by the cooperation.

So what should Apple do? The answer was in their hands before Safari launched: spoof the user agent! Don Melton was on the Safari team and wrote recently about keeping the project a secret:

Nobody at Apple was stupid enough to blog about work, so what was I worried about?

Server logs. They scared the hell out of me.

To guard the clues about their development schedule, they should probably spoof the user agent string until the release is in a large enough number of hands that the pool of user agents is uninterestingly diverse. But to protect the IP addresses of their employees and business partners from prying eyes, they should at least spoof the user agent on non-17 subnets.

Apple’s famous secrecy is not foolproof. We don’t know yet what exciting new features 10.9 will bring or which hardware it will support. We don’t know how much it will cost, or which of the diminishing number of code names it will have. But we know it’s coming, and we know collectively the IP addresses of those who are testing it. The cat is still a secret, but the paws are out of the bag.

Reminder Plumbing

I am a fan of The Omni Group’s OmniFocus for both the Mac and iOS. While I’ve owned the apps for a long time, I’ve only recently started taking fuller advantage of them. They have become critical to my own deployment of the Getting Things Done task-management methodology.

One particularly great workflow is afforded by an option in the iOS version of the app to automatically import reminders from the iPhone’s default reminders database. What this means in practice is that you can use Siri to add items to OmniFocus. You say, “add take out the trash to my reminders list,” and the next time you open OmniFocus, the item is instantly imported into OmniFocus and removed from the system list. (Intrigued? You have to make sure you turn on the option in OmniFocus for iOS’s preferences.)

Unfortunately, OmniFocus for Mac doesn’t support this. I love OmniFocus for both Mac and iOS, but it turns out that because I lean so heavily on using Siri to add items, I tend not to open OmniFocus while I’m on the go. When I come home and get to work on my Mac, I notice that OmniFocus doesn’t contain any of my recently added items, so I have to go through the cumbersome steps of opening my iPhone and launching OmniFocus just to get this theoretically time-saving trick to work right.

I’m looking forward to a future release of OmniFocus that supports a similar mechanism for automatically importing reminders. Who knows, maybe the feature will even make its way into the forthcoming OmniFocus 2.0. But I don’t want to wait even a single day longer for this functionality, so I decided to tackle the problem myself.

I developed a tool, RemindersImport, that solves the problem by adding to my Mac behavior that closely emulates what’s built in to OmniFocus for iOS. When launched, the tool scans for non-location-based reminders, adds them to OmniFocus (with start and due dates intact!), and then removes them from Apple’s reminders list.

If this sounds as fantastic to you as it does to me, I invite you to share in the wealth of this tool:

Click to download RemindersImport 1.0b3.

How To Use It

Warning: RemindersImport is designed to scan the reminders in your Mac OS X Reminders list and remove them so that they can be added instead to OmniFocus. Be sure you understand that this is what you want before running the tool.

Let’s say you have 5 Reminders that you added via Siri on your phone. In the background, thanks to Apple’s aggressive syncing, these have been migrated over to your Mac and are now visible in Reminders.app. To migrate these from Reminders to OmniFocus, just run the tool once:

./RemindersImport

If you’ve opted to use a different reminders list for OmniFocus, you can specify the name on the command line to import from that list instead:

./RemindersImport "Junk to Do"

Of course, running the tool by hand is about as annoying as having to remember to open up the iPhone and launch OmniFocus, so ideally you’ll want to set this thing up to run automatically. I haven’t yet settled on the ideal approach for this, but a crude way of setting it up would be to use Mac OS X’s built-in cron scheduling service to run the tool very often, say every minute:

*/1 * * * * /Users/daniel/bin/RemindersImport > /dev/null 2>&1

(Note: to edit your personal crontab on Mac OS X, just type “crontab -e” from the Terminal, then paste in a line like the one above, changing the path to match where you’ve stored the tool.)
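
If cron feels a little crufty, a launchd agent is the more Mac-native way to schedule the same thing. This is a rough sketch rather than a polished recipe: the “com.example” label is a placeholder, and you should change the program path to wherever you keep the tool. It runs the tool once a minute, just like the crontab line above.

# Write a per-user launchd agent that runs the tool every 60 seconds.
# The label and the path below are placeholders; adjust to your setup.
cat > ~/Library/LaunchAgents/com.example.remindersimport.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.remindersimport</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/daniel/bin/RemindersImport</string>
    </array>
    <key>StartInterval</key>
    <integer>60</integer>
</dict>
</plist>
EOF

# Load the agent so it starts running immediately (and at every login).
launchctl load ~/Library/LaunchAgents/com.example.remindersimport.plist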

Something I’d like to look into is whether it would make sense to set this tool up as a lightweight daemon that stays running all the time, waiting for Reminders database changes to happen and then snagging the new items. For now, the crontab-based trick is doing the job well enough for my needs.

How To Leverage It

I am sharing the source code for the tool under the liberal terms of the MIT License. You can download the source code on GitHub, and of course I would also welcome pull requests if you make meaningful improvements to the code.

The RemindersImport tool satisfies my need for automatic OmniFocus import from a single list. Maybe your needs are more complicated: you only want to import tasks that meet certain criteria, or you want to import but leave the existing items in Reminders. Or you want to do something similar with an app other than OmniFocus entirely.

It should also go without saying that the general structure of the code serves as a working model for how you might implement an “import from Reminders” type of feature in your own apps. Ever since I learned about the OmniFocus for iOS trick, I keep noticing other apps that could benefit from applying the same technique.

For example, imagine if the Amazon app offered a feature to import any items from a list called “Amazon.” Then I could, in the middle of a run, ask Siri to “add running shoes to my Amazon list,” and be assured that the item would find its way to the right place.

Since Siri first debuted as a system-level feature of iOS, developers have been yearning for “a Siri API.” In the absence of that, this is as good as it gets. This “reminder plumbing” is available to every app but has so far been woefully under-utilized. Maybe once you play around with how well it works with OmniFocus, you’ll get inspired to add something to your own apps, or to beg for similar functionality from the developers of apps you love. When you do, I hope my contributions provide you with a head-start.

AAPL Stops On A Dime

Two months ago, Joe Springer of Seeking Alpha called out January 18, today, as a point of interest in the trajectory of Apple’s stock price. He suggested that because of a particularly large number of open AAPL options expiring today, this would be a turning point: the stock price would remain artificially deflated through today, and then rise more organically starting next week.

Earlier this week, John Gruber of Daring Fireball linked to the post and gave his own summary of the situation:

Billions of dollars at stake if AAPL stays near or under $500 a share until January 19 and then makes a run after that. No tinfoil hat required to see the motivation here.

I’m not sure where the $500 number comes from, because it wasn’t cited in the original article. I suspect that Gruber did some more research and determined that in the months since Springer’s article, $500 had become the most popular strike price among investors, and thus carried the heaviest weight among the variously priced options set to expire today.

Today, Apple’s stock price closed at exactly $500. Sometimes the way things unfold seems too precise to be mere coincidence, and Gruber’s reaction to the news says as much:

I still have that bridge to sell you if you don’t think the fix was in on this.

But was it a fix, or merely an “honest” market doing what markets do? I don’t claim to know too much about the perplexing ebbs and flows of the stock market, particularly when it comes to options, but this article by Rocco Pendola offers a counterpoint to the conspiracy angle, taken verbatim from his interview in 2011 with Neil Pearson:

Neil Pearson: Let’s use AAPL as an example. Friday, AAPL’s closing price was near $340. Further, let’s suppose that there is a large trader or group of traders who follow a hedging strategy that requires them to sell aggressively if AAPL rises above $340, and buy aggressively if AAPL falls below $340. If this is the case, their trading will have a tendency to “pin” AAPL at or near $340. It is only a tendency, because during the week there might be some event, either a news announcement or trading by some other investors, that dwarfs the effect of the hedging strategy and moves AAPL away from $340.

In other words, Pendola agrees that the large number of open options had a part in pushing the stock price to $500, but insists that the fact that it closed precisely on that number was hardly guaranteed or “fixed” as Gruber suggests.

Because Pendola and Pearson are experts in stock analysis who have covered precisely this topic before, with Apple as the subject no less, I tend to respect their conclusion. I also noticed that in after-hours trading, AAPL hasn’t begun rocketing upwards. If there were some conspiratorial manipulation of the stock to keep it at $500 only through the close of trading today, one would imagine it would have traded higher than $500.31 after-hours.

I was as quick as anybody to jump on the conspiracy wagon when the stock closed exactly at $500, but sometimes truth really is stranger than fiction.

Dell’s Downfall

I wrote seven years ago that Dell was on the way out. Apple had just announced they would be moving to Intel-based CPUs for the Mac, and I extrapolated, somewhat wildly it turns out, that this would lead to Dell’s downfall.

I was wrong.

A huge, erroneous assumption in my condemnation of Dell was that Apple’s ability to boot Windows on Mac hardware would make the buying decision easy for folks who cared about Windows but wanted a high-quality machine. In retrospect, I don’t think the ability to run Windows on Mac has done nearly as much to help Apple as I predicted. Why? Windows became irrelevant. In late 2005 I saw the future of personal computing as a battle between Macs, PCs, and Linux. With the debut of Intel Macs, I saw Apple coming to the table with a trump card: “if you like our hardware, we can run your OS!” For a moment, the Mac could run every relevant, mainstream personal-computing OS. That didn’t last long.

The debut of iOS, Android, and to a lesser extent, Windows 8, changed the landscape. Nobody cares that the Mac can run Windows anymore, because nobody cares about Windows. And as much as it pains me to say it, outside of the relatively small group of enthusiasts to which I belong, nobody cares about the Mac. The mass market turned to mobile, and it was Apple, Google, and Samsung who ended up seizing on that opportunity.

I haven’t kept close tabs on Dell over the past several years. Heck, I thought they were on the way out of business, so why should I bother? But taking another look I find their marketing emphasis is almost identical to what it was before. You can buy a laptop, you can buy a tower, you can buy a monitor. That’s the Dell way, and although I’ve been wrong before, I am doubling down: the Dell way will be Dell’s downfall.

Aaron Swartz

I woke to the sad news that Aaron Swartz has died by suicide.

I have been inspired by Aaron’s work and philosophy since 2004, when I started reading his thought-provoking blog. I checked in with him after a few of his more down-spirited posts, and this led to a very loose, occasional friendship by email. Our paths nearly crossed in the Boston area a few times but owing separately to his or my own social anxieties, we never met in person.

Like many successful people, Aaron never seemed to appreciate his own achievements the way others did. After Reddit was acquired by Wired, presumably making him rich, he wrote about his inability to celebrate like his co-founders did, and all but wished it undone.

A piece that will probably get a lot of attention in the wake of his death is his own dramatic post about suicide from 2007, which was originally written autobiographically. The post elicited responses from many people, including me. I wrote to reassure him, lightly, that many folks would miss him if he left us. He thanked me, and said he was “just having a really bad week.”

Given the combination of challenges Aaron faced, from the inner voices that talked down his successes or criticized his appearance, to the fear of imprisonment on trumped-up charges of wire fraud, these “really bad weeks” may have been frequent.

It’s hard to find an appropriate perspective for commemorating somebody who has died so suddenly and so tragically. When suicide is involved, our society tends to look for someone or something to blame, often the victim himself. After witnessing a small extent of the struggles Aaron fought, I choose to commemorate him with gratitude for the many bad weeks when he resisted drastic action, and gave us all more time to appreciate and share his contributions.

Shame Projection

Marco Arment addresses the common defense among media pirates that the lack of a legal alternative has “forced” them into pirating the content:

Admit it: you’re ripping it off, it’s morally questionable at best (and illegal), but you don’t care.

A few years ago my wife opened my eyes to the phenomenon of shame projection. In short: assigning blame for your own shortcomings to external circumstances. Now it’s like that thing where you learn a new word and suddenly start noticing it everywhere: our society is swimming in shame projection.

I am no psychologist, nor have I ever taken a psychology class, nor have I even finished the “Psychological Projection” Wikipedia article cited above. So I’m probably completely misguided and wrong about this, but an example that springs to mind is when a motorist almost runs me down, and then screams at me for being in the way.

What happened in the blink of an eye is:

  1. Motorist is cruising along, distracted, late for work, whatever.
  2. I step into the crosswalk, over-confident of my right of way.
  3. Motorist proceeds to within inches of striking me before braking abruptly.
  4. Motorist feels terrified, ashamed, regretful, and then grateful I’m not hurt.
  5. I, nearly killed, glare my visceral outrage at motorist.
  6. Motorist feels offended by my lack of graciousness upon not being killed.
  7. Motorist’s brain casts about unconsciously for somewhere to project blame.
  8. Upon not finding any reasonable outlet, brain settles on me. Watch where you’re going, you idiot!
  9. Motorist carries on, content that he or she was the victim, not me.

Most folks who pirate media feel some of the same terror, shame, regret, and gratitude that the motorist felt upon almost killing me. In the case Marco cites, the projection lands on the companies for not making the media available.

This kind of projection seems to have a delightful efficiency. When the media companies do make the media available, the blame will be on their pricing it too high. When the price is right, it’s the media format that’s wrong. If the media format is right, then the DRM is too odious. If DRM is absent, then the authors are making too much money, anyway. If the authors aren’t making much, you’re only pirating to try it out. Once you’ve tried it and like it, you’ll pay for it when you get your next paycheck. You wouldn’t have to pirate at all if your boss wasn’t such a cheapskate and paid you better…

Hacking My AOL Account

When I read Mat Honan’s article today about the relative uselessness of passwords in protecting the security of our various online accounts, I was attracted by his assertion that it’s particularly easy to hack into an AOL account:

Let’s say you’re on AOL. All I need to do is go to the website and supply your name plus maybe the city you were born in, info that’s easy to find in the age of Google. With that, AOL gives me a password reset, and I can log in as you.

Although I have never been an avid AOL user, I do have an AOL Instant Messenger (AIM) account. I figured, and was correct, that for the purposes of this assertion, the accounts are one and the same. I quickly set about hacking my own account, just to see if it was as easy as Honan had described.

After navigating to AOL, I clicked the Login link and then clicked the “Forgot password” link to get to a very friendly, step-by-step process for resetting the password on an account. As Honan predicted, it offered to let me reset my password if I could supply my home town and another piece of personal information such as my birthday. But try as I might, I couldn’t get the right answers, and therefore I couldn’t break into my own account.

You see, I have long had a habit of supplying bogus answers when prompted for personal information. Obviously I break this habit when dealing with an institution that legitimately requires it, but I guessed that when I signed up for AIM, I had supplied false information. The nice side effect of this appeared to be that my account was now less hackable than a typical account.

Of course Honan has had a little more practice than me with this, and when he saw my tweet he was inspired to ask if he could give it a shot. He and I are friends, and I trusted him to do nothing more than test the security of my account, so I agreed. He said it might take a day or two. A few hours later he sent me a screenshot of the AOL page where it was helpfully offering to let him enter a new password for my account.

I simply hadn’t tried diligently enough to get through the ridiculous “security” wizard. In the end it was as simple as knowing that I grew up in Santa Cruz (a value that I had evidently chosen to enter accurately), and that my email address was my last name at “red-sweater.com.” Totally, outstandingly, ridiculously poor security.

I made a video of myself “hacking” my own account, to show just how awful it is. The worst part is that AOL will offer a laughably guessable hint about the alternate email address (j****t@red-sweater.com) down one security path, which can then be used to answer the secret question down another path.

As I said in the video, my best advice for anybody who has an AOL/AIM account is to change every personal detail on the account to something bogus, and to write those values down in an encrypted note somewhere for future reference. An idea I had is to choose as the email address something like “jalkut+fdj29f292935@red-sweater.com”. This way the confirmation email will still get to you if it’s ever needed, but the address will be much harder to guess.
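
If you want the random suffix to be genuinely unguessable, any source of randomness will do. For example, from the Terminal (the name and domain here are just the ones from my example above):

# Generate a hard-to-guess plus-addressed alias
echo "jalkut+$(openssl rand -hex 8)@red-sweater.com"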

Shame on you, AOL.

The Anti-Apple Market

John Gruber points to Amazon’s willingness to use its valuable home page to antagonize Apple and its fans by deriding the iPad mini in comparison to the Kindle Fire HD:

I’m sure some people will love it; they’re going for the anti-Apple market.

I first learned about Amazon while working at Apple in the mid-1990s. There was a series of mail-sorting cubbyholes in a hall at work, and I remember noticing the Amazon-branded boxes multiply as my colleagues turned on to the dead-simple innovation: a huge selection of books and music delivered to your door for a reasonable price.

Although I was an early, occasional patron of internet commerce, I looked at Amazon with some derision because of a character flaw. For as much as I appreciate and yearn for innovation, I tend to overvalue the status quo. These folks were buying nearly all their books and music online. My internal dialogue judged them as entitled and too lazy to support a local shop. They may well have been, but a few years later, I was shopping at Amazon too.

From the beginning Amazon has shared Apple’s focus on simplification and iterative improvement. Their catalog evolved into the internet’s de facto product review authority. It’s as easy today to buy all your toilet paper from Amazon as it was to buy all your books from them 15 years ago. Amazon CEO Jeff Bezos’s reputation for customer-focused innovation is often compared with Steve Jobs’s mastery of that art.

The company has grown consistently from within but also through judicious acquisition. The companies they buy typically share that quality of providing customers with some product at a level of service that was previously thought infeasible. Zappos sold upscale shoes with an unfathomably generous return policy. IMDb collected an unprecedented database of information for film buffs. Audible cracked open the market for digital audiobooks. I made my first CDNow purchase through a terminal connection over telnet! “Amazon companies” are always innovative, and focused primarily on the service of premium customers.

As with Apple, Amazon’s pursuit of premium customers has resulted in a strong appeal to the mass market. I believe that companies like Amazon and Apple have expanded the expectations of the mass market, such that it’s getting easier to market a company based on its unique, premium innovations. The “anti-Apple” market historically pooh-poohs these advantages, reducing the product landscape to a crude, high-level comparison of features and prices. This is what Amazon has done on their home page, comparing the $199 Kindle Fire HD to the $329 iPad mini, while audaciously claiming they offer “Much More for Much Less.”

Since when does Amazon target the anti-Apple market? The companies compete in a growing number of areas including digital music, movies, and eBooks. But Amazon has thrived with this competition largely because it targets the same market that Apple does, while doing some things better than Apple. From the early days when my colleagues were tearing open shipping boxes at Infinite Loop, to the present time when many Mac and iPhone aficionados cling tenaciously to their authentic Amazon Kindles, the pro-Apple market is the pro-Amazon market. Why would a company that has historically aimed so high change its focus to the lower end?

I see this as a rare example of concession on Amazon’s part. Traditionally, when the company discovers they are not the best in a market they wish to dominate, they acquire the stunning leader and integrate its advantages. Here they are going up against Apple, which happens to be both the largest company in the world and the most inimitable hardware designer. Amazon can’t buy it, and Amazon can’t copy it. They must compete on price, and they must confuse on features. They must go anti-Apple, which is a shame for Amazon and a shame for customers, but it’s the only reasonable choice they have.

Fairly Priced

Tapbots released a Mac version of Tweetbot, their popular Twitter client.

I have been beta testing the application for months and have found it to be a suitable replacement for Twitter.app, the once neglected, now abandoned official application.

But folks are talking less about Tweetbot’s features and more about its price: $20. In today’s culture of low-priced apps, anything costing more than a Starbucks latte raises the eyebrows of the suddenly cash-poor masses who shelled out for expensive iPhones and Macs.

A healthy opposition to the price-whiners has risen to the task of preaching the merits of “fairly priced” software: $20 is a relatively small investment for something you use all the time, quality software takes a ton of time and effort, and unless your beloved software can sustain its creators comfortably, you’ll be disappointed when they abandon it or sell it to a multi-national corporation.

I agree with all this “fair pricing” rhetoric, but I can’t help noticing a key point missing from it: Tapbots doesn’t want to charge $20. From their announcement:

Because of Twitter’s recent enforcement of token limits, we only have a limited number of tokens available for Tweetbot for Mac … This limit and our desire to continue to support the app once we sell out is why we’ve priced Tweetbot for Mac a little higher than we’d like.

The culture of low-priced software is artificially pulling the prices of many apps downward, while in this case Twitter, with its API token-limitation policy, is artificially pulling the price upward.

Is $20 a reasonable amount to pay for Tweetbot? I think so. But if Tapbots would have preferred to charge even less, has it been fairly priced? Many folks are seizing on the coincidence of Tapbots needing to charge more as an opportunity to exalt “fair pricing,” when this was a result of coercion in two directions.

With price pulled downward by the expectation of free or ultra-cheap software, and upward by Twitter’s inconsiderate API policies, Tapbots have settled on a stasis point. It’s not as low as they wish it could be, and at the same time not as high as it’s “worth.” If Twitter’s API policies were not a factor, then staking out a bold $20 price would merit applause. But as the developers have settled on the price grudgingly, this is no victory for fair pricing. It’s an opportunity to acknowledge the discomfort of being pulled in two directions and to congratulate Tapbots on making a pragmatic choice. Well done.

Twitter’s Token Rush

One of the developer-hostile aspects of Twitter’s announcement of upcoming changes in the 1.1 release of their API is the imposition of new “user token limits.” Developers of traditional client applications will be limited to 100K tokens (or, for apps that already have more than 100K, twice the number currently issued). What’s a user token? It gives a person like you or me who downloads the app the ability to actually connect to and work with your Twitter account data. No token? No service.

Suddenly the total number of users any Twitter client developer can expect to support has a hard limit. Limited resources tend to rise in value, and we’re already seeing that play out in the client market. On a recent episode of Core Intuition, we welcomed Twitterrific developer Craig Hockenberry, who spoke candidly about the changes. During the episode, he pointed out that the ad-supported, “freemium” model adopted by the Iconfactory and many other developers may not survive this transition.

Tapbots, the developers of the popular Tweetbot client, announced today that they have pulled their Tweetbot alpha release for Mac, citing the new user token limitations. While exposure to a large number of users is undoubtedly good for testing and promotional purposes, they don’t anticipate it being worth the potential cost in “lost tokens.”

Matthew Panzarino of The Next Web reported on the Tweetbot news, taking care to emphasize that because user tokens don’t expire on any regular schedule, they can be used up even by users who download a client once, connect, and never launch the app again. The total number of outstanding user tokens doesn’t go down unless users log in to Twitter and explicitly revoke access to the client application.

During our conversation on Core Intuition, I pointed out that Twitter’s new policy runs the risk of provoking Apple’s ire as well. Apple’s App Stores for Mac and iOS are host to dozens if not hundreds of Twitter API clients, many of which meet Twitter’s criteria for “traditional clients.” When a particular app reaches its 100K token limit, what happens to the user who purchases the app from Apple’s App Store a second later? I suppose it will be up to developers to anticipate their proximity to 100K and start winding down the operation (and their business) by removing the app from the store, etc. But if they don’t? Apple has just sold an app that, through no fault of Apple’s or the developer’s, is useless for connecting to the service it’s meant to support.

A Token Degree Of Control

It’s bad enough that clients will have to contend with a hard limit of 100K active users per application, but what must be particularly infuriating to developers is the knowledge that some of those 100K tokens may not have been used for years, and may never be used again.

The Tapbots announcement included an overt request that users of any Twitter client “help 3rd party developers out” by revoking any tokens they’re not using. This underscores the doubly subservient position developers have been put in by this move: Twitter imposes a hard limit on the number of user tokens, only end users can free up previously used tokens, and developers, helpless to address any of this on a meaningful level, are left to suffer the worst of the consequences.

At a minimum, Twitter should support developer-driven token expiration. Google, for example, supports an API endpoint for revoking OAuth 1.0 or 2.0 tokens. This gives developers the ability to improve the user’s experience when revoking a token makes most sense: e.g. if a user has opted to “Uninstall” an application. But it also provides some discretion and flexibility for the developer to revoke in other scenarios where it makes sense. A scenario such as the one Twitter is imposing, for example.
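
For reference, Google’s version of this is about as simple as an API gets: a single HTTP request to a revocation endpoint, with the token passed as a parameter. As the endpoint is documented at the time of writing, something like this is all it takes (the token value is a placeholder, of course):

# Ask Google to revoke a previously issued OAuth token (placeholder value)
curl "https://accounts.google.com/o/oauth2/revoke?token=ya29.EXAMPLE"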

With the ability to programmatically revoke tokens, the particulars of doing so would be up to developers. For example, if I were the developer of a 3rd party client such as Twitterrific or Tweetbot, I might arrange for the client application to communicate token usage data to a centralized server. This would theoretically give me the ability to say “expire any user tokens that haven’t been used within the past year” and rest assured that I’ve freed up a bunch of tokens without inadvertently inconveniencing an active user.

There are security issues at play here, and the unfortunate potential to seriously inconvenience an active user if a user token is revoked prematurely. But given the hard limits being imposed by Twitter, developers are due some hard coping mechanisms. Tokens are a precious, limited resource, and neither users nor developers can take them for granted any longer.

Cachet

It’s common to observe in retrospect that a popular band, television show, clothing style, or even a country has outlived its popularity. We internet hipsters refer to this as jumping the shark, in homage to the popular American television series “Happy Days.”

It’s less common to know when some focus of celebrity is at a precise peak of popularity, poised for a sad, gradual decline into the oblivion of historical footnotes. I didn’t predict Friendster’s demise until it was obvious, in spite of increasingly poor page-load performance that made it impossible for me, as a fan of the site, to visit it as often as I might have liked. When MySpace was at its peak, it seemed as inevitable as Facebook or Twitter is today. Of course you have a MySpace page, otherwise you don’t exist! When I first got involved in blogging, it was on LiveJournal. Where else would you host a blog?

Relevance on the social internet is fleeting. Facebook, Twitter, Tumblr, WordPress, and Google all know this. They’re among the vanguard for the moment, enjoying the same notoriety that MySpace, LiveJournal and Friendster once cherished.

I’m not enough of a business genius to claim, even in retrospect, to know why each of these former social-internet giants fell. But I am enough of a smart-ass to propose that each fell because the company lost its way. A company loses its way by diverging from the path its customers expect it to follow. For companies like Friendster and MySpace, that arguably happened by standing still while the needs of customers shifted. In other cases, the customers’ sights are set in one direction while the company envisions something completely different.

I have often wondered why some people like Facebook while other people like Twitter, and yet other people seem to favor them equally. I should come out and admit now that I “like” both services, but when it comes down to it, I devote the vast majority of my attention to Twitter. I’m not sure I completely “get” Facebook.

That’s a lie.

I get Facebook, but from where I sit, the selling point for Facebook is to be in touch with all the mundane, everyday things that your friends and family have to share. I love my friends and family, and I do love to keep in touch with their mundane activities. But that’s just it, isn’t it? Facebook is for the mundane. Love it or hate it, and sometimes there is an awful lot to love, Facebook is not in the business of providing a venue for punchy, thought-provoking, elevated banter.

That’s where Twitter comes in. The reason I spend the vast majority of my time reading and interacting with friends, acquaintances, and strangers on Twitter is that the expectation of quality is high. Twitter, like blogging before it, has been broadly ridiculed as being about “what I ate for breakfast.” But in practice, that’s just not the case. The 140-character limit, and certain cultural expectations among the users I interact with, mean that cutting humor, philosophical insight, and up-to-the-minute gossip and news are to be expected. In short? Twitter has cachet.

Among the reasons for Twitter’s cachet is its distance from the tactics of other social networks. While Facebook revels in trashing up your timeline with mindless games, polls, and other nonsense that distract from the core content of the people you follow, Twitter has remained relatively pure. I usually connect to Twitter with a desktop or mobile client, but even when I visit the web site, I’m mostly looking at a long list of things people said. And nothing else.

Twitter’s cachet has earned a lot of goodwill, but also a lot of skepticism about how it intends to sustain itself going forward. From day one, it seems, people have criticized the company for its lack of an obvious business model. Now it seems poised to answer that criticism with a vengeance. It has already locked out former partners such as Facebook and Tumblr, and is cracking down unilaterally on 3rd party apps that aim to offer first-class, full-service interfaces to the service.

As Dan Frommer explained in his Understanding Twitter post, Twitter is in a position where, to keep doing what it’s doing, it needs to hunker down and make money. Fast. Recently its actions have revealed that the way it intends to do that is by 1. Owning the core Twitter user experience and 2. Monetizing the ownership of that experience through ads or other means.

The problem with cachet is that it’s easy to maintain when you’re giving, but much harder to maintain while you’re begging. Twitter’s new emphasis on earning money will quiet the criticisms of those who mocked it for lacking a business plan. But for customers who were attracted to the service for its simplicity, its elevated tone, or its apparent disregard for the vulgarities of earning money, forfeiting that precious cachet may be the worst business plan of all.