Google’s Next Great Thing

Google Express promises same-day delivery of goods from a variety of San Francisco retailers.

Roberto Baldwin makes the very San Francisco comparison to Kozmo, the famously failed ’90s home-delivery startup, while John Gruber chastises Google for an evident lack of focus.

But what if this is the embodiment of Google finding its focus?

Steve Jobs said of Apple, before returning to the company in 1996, that he would “milk the Macintosh for all it’s worth — and get busy on the next great thing.” It’s reasonable to argue in retrospect that this is exactly what he did.

Suppose Google recognizes that it can’t play king of online advertising forever, and that it must hunker down and focus on its own “next great thing.” What technology does Google own that sets it farthest apart from potential competitors? Driverless vehicles.

Google has allegedly been testing its delivery service with employees and their friends since at least October 2012. If the fleet is not driverless yet, I’m sure it’s slated to become so.

I’m not sure it matters too much if Google succeeds, at least in the conventional sense of toppling rivals such as Amazon in the home delivery market. I imagine they see this as a no-lose gamble. If they happen to strike a chord of convenience, price, and quality in retail delivery, they may just give Amazon a run for their money. If they don’t, they will still have pushed their driverless cars through another phase of real-world testing. Amazon can keep its massive, profitless customer base, and Google can keep its next great thing.

Bitsplitting With Erika Hall

I’m pleased to announce that the second episode of the Bitsplitting Podcast is now available, featuring my friend Erika Hall of Mule Design Studio.

It was a lot of fun talking with Erika. Easily the greatest challenge so far in recording this podcast is trying to keep the length of shows around the 1-hour mark instead of talking for four hours as I’d be inclined to do.

I hope you enjoy the interview!

The Bitsplitting Podcast

I’m excited to announce that the previously hinted-at Bitsplitting Podcast is now live. The first episode features Guy English of Çingleton and Aged & Distilled fame.

The show will come out on a biweekly schedule, so in general you can expect a new episode “every other Friday” from here on out.

New episodes will always appear on the podcast’s home page, but I encourage you to subscribe via iTunes, or through another app using the podcast-specific RSS feed.

I hope you enjoy the show.

NetNewsWire Cloud

My friend Brent Simmons created NetNewsWire over 10 years ago. The app, a Mac-based RSS aggregator, has been a constant companion to me for what feels like an eternity: I check it daily to keep up with the blogs, Google searches, and referral notices I most want to stay on top of. Most important of all to me, NetNewsWire spawned MarsEdit, the blogging app that I now develop and which supports me and my family.

Today Google announced they are shuttering Google Reader, the web service that has brought RSS aggregation to what I would guess is the largest number of people ever to enjoy the benefits of “news feeds” on the web. The termination of Google Reader is a disappointment to its loyal users, but it will also have a huge impact on the many client apps, including NetNewsWire, that have built their syncing functionality on top of the service. When Google Reader goes away, all those apps will lose their syncing capability, or stop working outright.

Some folks will claim that nobody cares about RSS anymore. But the loud outcry on Twitter and through other channels indicates there is still a significant number of people who rely on the technology.

Which brings me back to NetNewsWire. My guess is that after Google Reader, and possibly after Newsvine, NetNewsWire is the most recognized brand in the world in the admittedly niche market of “RSS readers.” When the top brand in a market drops out, it puts a huge amount of focus on those that remain. Black Pixel, the current developers of NetNewsWire, have to be taking notice.

At this point Black Pixel need to ask themselves one question: are we interested in RSS, or aren’t we? They acquired NetNewsWire because they no doubt loved it and had become reliant on using it themselves. They wanted to see it live on and prosper. But did they expect to be put in a position where they are faced with the challenge/opportunity of becoming the world’s leading RSS services company? Probably not.

My understanding is that the slowness in developing and releasing a successor to NetNewsWire 3 stems largely from coming to terms with the challenges of working around Google Reader issues. With Google Reader out of the picture, not just for NetNewsWire but for everybody, a new future for RSS syncing arises: NetNewsWire Cloud.

By implementing a suitable syncing API for RSS, along with a reasonably useful web interface, Black Pixel could establish NetNewsWire Cloud as the de facto replacement for Google Reader. Charging a reasonable fee for the service would likely inoculate it against the risk of sudden termination, and it would doubly serve to provide the very service that NetNewsWire needs to thrive on the desktop and on iOS.
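To make this concrete, the API surface need not be exotic. Purely as a hypothetical sketch (the sync.example.com host and URL paths are my own invention, not anything Black Pixel has announced), the core of a Reader replacement boils down to a handful of authenticated endpoints:

curl -u user:pass https://sync.example.com/v1/subscriptions
curl -u user:pass -d url="https://daringfireball.net/feeds/main" https://sync.example.com/v1/subscriptions
curl -u user:pass "https://sync.example.com/v1/items?unread=true"
curl -u user:pass -X POST https://sync.example.com/v1/items/12345/read

List subscriptions, add a subscription, fetch unread items, mark an item read. The hard part isn’t the API shape; it’s the server-side crawling, and keeping per-item read state consistent across clients, which is exactly the work Google Reader had been doing for free.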

Don’t get me wrong: this is a tall order. I would not fault Black Pixel one iota for looking at the challenge and deciding to take a pass. But if they are truly passionate about RSS, this is their moment. This is the time when accepting the impossible challenge will reap the greatest reward.

An Indie State Of Mind

Marco Arment and John Siracusa recently wrapped up their long-running podcasts, Build & Analyze and Hypercritical. They have since gone on, in collaboration with Casey Liss, to establish two new podcasts: Neutral, a show about cars, and the Accidental Tech Podcast.

Each of the retired shows was on the popular 5by5 podcasting network, while neither of the new shows is. Since John Gruber’s sudden departure from 5by5 last year, people have been particularly keen to look for elements of drama related to the network. I’m guilty of it, myself. But Occam’s razor applies in situations such as this, too. The simple explanation? People who are satisfied stay where they are, and people who are less satisfied move on.

Marco acknowledged that he has been asked repeatedly why the new shows aren’t on 5by5, and relented by sharing his own personal reasoning:

I want to own everything I do professionally unless there’s a very compelling reason not to. Some people feel uneasy having that level of control, but I feel uneasy not having it (to a fault).

There you have it: Marco is an indie, and he is bound to behave like one. Working within the confines, no matter how cushy, of another institution is simply not his style. He was destined to be unsatisfied with the status quo, and leaving to do his own thing no doubt resolved a great deal of tension.

I’m surprised by how consistently people assume that joining a large podcasting network is an end goal for indie podcasters. My friend Manton Reece and I have been recording Core Intuition for over 4 years, yet when I have guest-hosted Build & Analyze or appeared on other 5by5 shows, a significant number of people have written or commented on Twitter that I should “have my own show.” I have my own show, thank you very much. And I’m starting another one.

I worked at Apple for more than 7 years, before branching out on my own to focus exclusively on Red Sweater. I’m grateful that, in contrast to indie podcasting, there is far less bias towards conglomeration in the indie software scene. I’m not constantly nagged about when I’m going to re-join Apple, or Google, or Microsoft, or Twitter. And when I ship a new app, I don’t face a barrage of questioning about which larger company will be distributing it. It’s understood that I build, test, distribute, debug, and market the software by myself. And people respect that.

Like Marco, I derive a great amount of satisfaction from doing things for myself. Also like Marco, I recognize that this can sometimes be a fault. No doubt I would benefit in many ways from working for a company or from joining a podcasting network. The resources and reach of these institutions could help me build greater things and get them into the hands of more people. On the other hand, they could force me to build things that suck. Folks like us, we with an indie state of mind, tend to face a far simpler choice: be dissatisfied working for somebody else, or gratified by the thrill of trying our own thing.

Coming Soon: The Bitsplitting Podcast

I have really enjoyed, and will continue to enjoy, producing the Core Intuition podcast with my friend Manton Reece. What started as an irregular, very casual podcast more than four years ago turned into a much less irregular, weekly show last year.

The success of Core Intuition inspired me to revisit a long-standing interest in interview-style podcasts. I have always been a fan of the humanistic style of interview perfected by Fresh Air’s Terry Gross, and while I have no illusion of matching Ms. Gross’s impeccable style, I hope to start down the path toward being at least a bit more like her. In a nutshell: I’ll be interviewing folks I’ve had the luxury of meeting who have interesting perspectives. I’ll differentiate my show from some others by focusing more on the personal backgrounds of my guests, and on trying to discern a philosophical arc to their life and career choices.

We are currently enjoying a renaissance in podcasting (hat-tip for that assessment to Brent Simmons), and I’m excited to be among the lucky folks who are more-or-less ready to seize on the opportunity. I have learned a lot from producing Core Intuition, and from being a guest on countless other shows. What can I say? I’m a lucky guy. Right place, right time, right skills.

With Core Intuition, we waited over four years before we started accepting sponsorships. For the Bitsplitting podcast, I will accept them from day one. I was skeptical about sponsorship, but I’ve learned that it creates both a positive obligation and a positive reward. Once a sponsor has committed to paying for the privilege of a mention on my show, I feel obligated to not only record and publish the show, but to do so in a way that exudes professionalism worthy of the sponsor’s blessing.

The rewards are more complex. Obviously, there’s the money. Money is good. But less obvious is a certain validation that comes from somebody else sticking their neck out for your work. Spending money is a somewhat crude, yet very unambiguous, way of sticking one’s neck out. I have found with Core Intuition that while the money is nice, what matters most is the unambiguous message from our sponsors that the show is valuable. It helps us, literally, to get out of bed in the morning to record the show.

So I need sponsors. I understand that with Bitsplitting, “there’s no there there” yet, and that makes this a harder sell. If you work for or own a company with vision and a willingness to take a chance, consider being among my debut sponsors. How many listeners do I have? Zero. How many listeners will I have? Time will tell. To balance the faith required to sponsor something new, I’m offering a reduced-cost sponsorship while the wheels are set in motion. If you’re interested, please check out this preliminary sponsorship information, and drop me a line.

Update February 21, 2013: I am humbled by the reaction to my call for sponsors. We are in good shape for the launch of the show, and I’ll resume booking sponsors for future episodes after the show debuts.

Whether or not you’re in a position to sponsor the show, I hope you’ll keep your eyes on this site, or on the @bitsplitting Twitter account, to learn about the launch of the podcast, which I am confident you will find both educational and amusing.

Virtual ROM

My attention was drawn by Anil Dash on Twitter to two posts discussing the purported and actual capacities of devices that, for example, advertise “64GB of storage.”

Marco Arment linked to an article on The Verge, asserting Microsoft’s 64GB Surface Pro will have only 23GB of usable space. Arment went on to suggest that device manufacturers should be required to market devices based on the “amount of space available for end-user data.”

Ed Bott’s take, on the other hand, was that Microsoft’s Surface Pro is more comparable to a MacBook Air than other tablets, and its baseline disk usage should be considered in that context.

I think both Arment’s and Bott’s analyses are useful. It would be nice, as Arment suggests, if users were presented with a realistic expectation of how much capacity a device will have once they actually start to use it. And there is merit in Bott’s defense that a powerful tablet, which uses a more computer-scale share of its built-in storage, should be compared with other full-fledged computers.

Let’s just say that if fudging capacity numbers were patented, every tech company would be in hot water with the patent trolls. A quick glance at iTunes reveals that my allegedly 64GB iPhone actually has a capacity of 57.3GB.

I don’t know precisely what accounts for this discrepancy, but I can guess that technological detritus, such as the metadata the filesystem uses merely to manage the content on the disk, takes up a significant amount of space. On top of that, the discrepancy may include space allotted for Apple’s operating system, bundled frameworks, and apps. Additional features such as recovery partitions always come at the cost of that precious disk space. Nonetheless, Apple doesn’t sell this 64GB iPhone as the 57.3GB iPhone. No other company would, either.
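It’s also worth noting that a good chunk of the gap is pure arithmetic rather than data. Device makers count a gigabyte as 10^9 bytes, while iTunes of this era appeared to report capacity in base-2 gigabytes of 2^30 bytes each. A quick sanity check in Terminal, under that assumption:

echo "scale=1; (64 * 10^9) / 2^30" | bc

The answer, 59.6, means the “64GB” iPhone shrinks to roughly 59.6GB before a single byte of iOS, bundled apps, or filesystem metadata is counted; the remaining couple of gigabytes are presumably the overhead described above.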

It seems that in the marketing of computers, capacity has always been cited without any clarification about actual utility. One of my first computers (after my Timex Sinclair 1000) was the Commodore 64, a computer whose RAM capacity was built into the very name of the product. Later, Apple followed this scheme with computers that would be known as the Mac 128K and Mac 512K, each name alluding to an ever-increasing RAM capacity.

The purported RAM capacity was somewhat misleading. Sure, the Commodore 64 had 64K of RAM, but some of it was used up by the operating system; a program that expected the full 64K to itself could not actually run. The machine’s own startup screen conceded the point, reporting “38911 BASIC BYTES FREE.” So was it misleading? Yes, all marketing is misleading, and just as it’s easier to describe an iPhone 5 as having 64GB of capacity, it was easier to describe a Commodore as having 64K, or a Mac as having 128K of RAM.

But the capacity claims were more honest than they might have been, thanks to the pragmatic allure of storing much of a computer’s own private data in ROM instead of RAM. Because code in ROM was instantly available, rather than having to be loaded from a slow disk into precious RAM, there was a practical, competitive reason for a company to store as much of its “nuts & bolts” code as possible in ROM. The Mac 128K shipped with 128K of RAM, but also with 64K of ROM, in which much of the operating system’s own code could be stored and accessed.

Thanks to the ROM that came bundled with computers, more of the installed RAM was available to end users. And thanks to the slowness of floppy and hard disks, not to mention the question of whether a particular user would even have one, disk storage was also left primarily to the user’s discretion. It was only after the performance of hard drives and RAM increased that the allure of ROM diminished, and computer makers gradually turned to storing their own data in the space previously reserved for users. With the increasing speed and size of RAM, and then with the advent of virtual memory on consumer computers, disk and RAM storage graduated into a sort of virtual ROM.

The transition took some time. In the years after the Mac 128K, for example, Apple continued to increase the amount of ROM it included in its computers. I started working at Apple in an era when a good portion of the operating system would be “burned in ROM,” with only the requisite bug fixes and updates patched in from disk. I haven’t kept up with the latest developments, but I wouldn’t be surprised if the ROM on modern Macs is only sufficient to boot the computer and bootstrap the operating system from disk. For example, the technical specifications of the Mac Mini don’t even list ROM among its attributes. The vast majority of capacity for running code on modern computers is derived from leveraging the RAM and hard disk capacities of the device in tandem.

So we have transitioned from a time when the advertised capacity of a device was largely available to the user, to an era when the technical capacity may deviate greatly from what a user can realistically work with. The speed and affordability of RAM, magnetic disks, and most recently SSD storage have created a situation where the parts of a computer that the vendor most wants to exploit are the same ones the customer covets. Two parties laying claim to a single precious resource can be a recipe for disaster. Fortunately for us customers, RAM, SSDs, and hard disks are cheaper than they have ever been. Whether we opt for 64GB, 128GB, or 1TB, we can probably afford to lend the device makers a little space for their virtual ROM.

Out Of The Bag

AppleInsider reported on Friday that the number of visitors to their site purportedly running a pre-release version of Mac OS X 10.9 had risen dramatically in January. Federico Viticci of MacStories followed up on Twitter, confirming a similar trend.

I was curious about my own web statistics, so I started poking around in my Apache log files. Each entry starts with the IP address of the visitor and includes various other information, among it the URL that was accessed, the referrer, and most importantly here, the user agent string for the browser.

Although the vast majority of visitors to my sites are running Mac OS X 10.8, or iOS, or even Windows, there were indeed a few examples of visitors who appeared to be running 10.9. This is what the user agent string looks like:

Mozilla/5.0 (Macintosh; Intel Mac OS X 10_9) AppleWebKit/537.28.2 (KHTML, like Gecko) Version/6.1 Safari/537.28.2

See that 10_9? It’s a strong indicator, combined with the respectably “higher than 10.8” Safari and WebKit versions, that the visitor is indeed running 10.9. Could it be fake? Sure, but the odds of anybody faking this kind of thing seem relatively low: there is little imaginable reward for duping a site into believing that a solitary IP address is running 10.9, and it would be challenging to orchestrate some kind of distributed fraud without being found out.

If you have access to your own site’s HTTP access log, and the format is like mine, you can sift out the 10.9 accesses by simply grepping for the 10_9 substring:

grep 10_9 access_log

If you have any matches, odds are good that they will be from IP addresses that start with 17. Why? Because Apple is somewhat unique in owning outright an entire class A network of IP addresses: every address starting with “17.” is theirs.

So people at Apple are running 10.9. What’s the big deal? For one thing, anybody with access to a reasonably popular web site’s access logs now has an insight into Apple’s development schedule. Look at the graph from the AppleInsider link above and you can deduce not only that the number of users actively running 10.9 has gone up, but I would also guess that the troughs and peaks in the graph are correlated with the release cycle of internal test builds. What is this worth to a competitor? Probably not much, but who knows.
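If you want to eyeball that cycle in your own logs, a rough sketch in Terminal will do it. This assumes the common Apache combined log format, where the bracketed timestamp is the fourth whitespace-separated field:

grep 10_9 access_log | awk '{print $4}' | cut -d: -f1 | tr -d '[' | sort | uniq -c

Each line of output is a per-day count of 10.9 visits; spikes following quiet stretches would be consistent with a new internal build rolling out.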

The other issue that comes to mind is that not all the IP addresses are liable to start with 17. Why? For one thing, Apple employees may be working from home, either in the Bay Area near Apple headquarters, or scattered around the world in their respective telecommuting locations. For another, Apple may have granted early access to close business partners who would naturally be running the operating system in their own office environments, on other subnets than 17. To see if you’ve been treated to any of these visitors, and to further refine the list to avoid duplicates from the same IP, try this:

grep 10_9 access_log | grep -v '^17\.' | sort -u -t- -k1,1

If you found any results, first of all I strongly encourage you not to share the IP addresses in public. I am writing this article at least in part to call out the reasons why Apple’s divulging this information is a risk to its employees and partners. You should protect the confidence of your site’s visitors.

That said, you may want to privately perform a rough geographic lookup based on the IP address. Googling will turn up many services for this; this is just one that I used. You will probably find that the IP address maps to a location in San Francisco, San Jose, or Santa Cruz. But some of my 10.9 visitors hailed from other parts of the US.
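If you’d rather not paste visitor addresses into a third-party web service, the stock whois tool gives a rough answer from the command line. A quick sketch, using 203.0.113.5 as a stand-in documentation address rather than any real visitor:

whois 203.0.113.5 | grep -i -E 'orgname|city|country'

The registry’s organization and location records are coarse, often pointing at an ISP’s headquarters rather than the visitor, but they’re usually enough to distinguish Cupertino from the rest of the world.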

So Apple’s broadcasting of the Safari user agent string reveals information about their development schedule, and divulges the IP addresses of likely employees or business partners. While I can’t quite imagine somebody taking advantage of the employee IP addresses, it sets off my spidey-sense creepiness alarm. The potential for divulging business partners could be of more obvious pragmatic interest to investors or competitors. The discovery of an alliance between Apple and another company would seem likely to affect the perceived value of either company, and could ruffle the feathers of other business partners who feel threatened by the cooperation.

So what should Apple do? The answer was in their hands before Safari launched: spoof the user agent! Don Melton was on the Safari team and wrote recently about keeping the project a secret:

Nobody at Apple was stupid enough to blog about work, so what was I worried about?

Server logs. They scared the hell out of me.

To guard clues about their development schedule, they should probably spoof the user agent string until the release is in enough hands that the pool of user agents becomes uninterestingly diverse. But to protect the IP addresses of their employees and business partners from prying eyes, they should at least spoof the user agent on non-17 subnets.

Apple’s famous secrecy is not foolproof. We don’t know yet what exciting new features 10.9 will bring or which hardware it will support. We don’t know how much it will cost, or which of the diminishing number of code names it will have. But we know it’s coming, and we know collectively the IP addresses of those who are testing it. The cat is still a secret, but the paws are out of the bag.

Reminder Plumbing

I am a fan of The Omni Group’s OmniFocus for both the Mac and iOS. While I’ve owned the apps for a long time I’ve only recently started taking more advantage of them. They have become critical to my own deployment of the Getting Things Done task-management methodology.

One particularly great workflow is afforded by an option in the iOS version of the app to automatically import reminders from the default iPhone Reminders database. What this means in practice is that you can use Siri to add items to OmniFocus. You say: “add take out the trash to my reminders list,” and the next time you open OmniFocus, the item is instantly imported into OmniFocus and removed from the system list. (Intrigued? Be sure to turn on the option in OmniFocus for iOS’s preferences.)

Unfortunately, OmniFocus for Mac doesn’t support this. I love OmniFocus on both Mac and iOS, but it turns out that because I lean so heavily on Siri to add items, I tend not to open OmniFocus while I’m on the go. When I come home and get to work on my Mac, I notice that OmniFocus doesn’t contain any of my recently added items, so I have to go through the cumbersome steps of unlocking my iPhone and launching OmniFocus just to get this theoretically time-saving trick to work right.

I’m looking forward to a future release of OmniFocus that supports a similar mechanism for automatically importing reminders. Who knows, maybe the feature will even make its way into the forthcoming OmniFocus 2.0. But I don’t want to wait even a single day longer for this functionality, so I decided to tackle the problem myself.

I developed a tool, RemindersImport, that solves the problem by adding behavior to my Mac that closely emulates the behavior built into OmniFocus for iOS. When launched, the tool scans for non-location-based reminders, adds them to OmniFocus (with start and due dates intact!), and then removes them from Apple’s Reminders list.

If this sounds as fantastic to you as it does to me, I invite you to share in the wealth of this tool:

Click to download RemindersImport 1.0b3.

How To Use It

Warning: RemindersImport is designed to scan the reminders on your Mac and remove them from the default list so that they can be added to OmniFocus instead. You should be very sure this is what you want to do before running the tool.

Let’s say you have 5 Reminders that you added via Siri on your phone. In the background, thanks to Apple’s aggressive syncing, these have been migrated over to your Mac and are now visible in Reminders.app. To migrate these from Reminders to OmniFocus, just run the tool once:

./RemindersImport

If you’ve opted to use a different reminders list for OmniFocus, you can specify the name on the command line to import from that list instead:

./RemindersImport "Junk to Do"

Of course, running the tool by hand is about as annoying as having to remember to open up the iPhone and launch OmniFocus, so ideally you’ll want to set this thing up to run on its own automatically. I haven’t yet settled on the ideal approach for this, but a crude way of setting it up would be to just use Mac OS X’s built-in cron scheduling service to run the tool very often, say every minute:

*/1 * * * * /Users/daniel/bin/RemindersImport > /dev/null 2>&1

(Note: to edit your personal crontab on Mac OS X, just type “crontab -e” in Terminal. Then paste in a line like the one above, changing the path to match wherever you’ve stored the tool.)

Something I’d like to look into is whether it would make sense to set this tool up as a lightweight daemon that stays running all the time, waiting for Reminders database changes to happen, and then snagging the new items. For now, the crontab-based trick is doing the job well enough for my needs.
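For what it’s worth, launchd is the more Mac-native alternative to cron, and its WatchPaths key hints at exactly that kind of change-driven behavior. A minimal LaunchAgent sketch, with a placeholder label and my own path to the tool (I haven’t verified which on-disk path one would watch for Reminders changes, so this version just runs every 60 seconds):

cat > ~/Library/LaunchAgents/com.example.remindersimport.plist <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>com.example.remindersimport</string>
    <key>ProgramArguments</key>
    <array>
        <string>/Users/daniel/bin/RemindersImport</string>
    </array>
    <key>StartInterval</key>
    <integer>60</integer>
</dict>
</plist>
EOF
launchctl load ~/Library/LaunchAgents/com.example.remindersimport.plist

Swap StartInterval for a WatchPaths array pointed at the right data store, and launchd would fire the tool only when something actually changes.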

How To Leverage It

I am sharing the source code for the tool under the liberal terms of the MIT License. You can download the source code on GitHub, and of course I would also welcome pull requests if you make meaningful improvements to the code.

The RemindersImport tool satisfied my need for automatic OmniFocus import from a single list. Maybe your needs are more complicated: you only want to import tasks that meet certain criteria, or you want to import items while leaving the originals in Reminders. Or maybe you want to do something similar with a completely different app than OmniFocus.

It should also go without saying that the general structure of the code serves as a working model for how you might implement an “import from Reminders” type of feature in your own apps. Ever since I learned about the OmniFocus for iOS trick, it jumps out at me whenever another app could benefit from applying the same technique.

For example, imagine if the Amazon app offered a feature to import any items from a list called “Amazon.” Then I could, in the middle of a run, ask Siri to “add running shoes to my Amazon list,” and be assured that the item would find its way to the right place.

Since Siri first debuted as a system-level feature of iOS, developers have been yearning for “a Siri API.” In the absence of one, this is as good as it gets. This “reminder plumbing” is available to every app, but has so far been woefully under-utilized. Maybe once you play around with how well it works with OmniFocus, you’ll be inspired to add something to your own apps, or to beg for similar functionality from the developers of apps you love. When you do, I hope my contributions provide you with a head start.

AAPL Stops On A Dime

Two months ago, Joe Springer of Seeking Alpha called out January 18, today, as a point of interest in the trajectory of Apple’s stock price. He suggested that because of a particularly large number of open AAPL options expiring today, this would be a turning point: the stock price would remain artificially deflated through today, and then rise more organically starting next week.

Earlier this week, John Gruber of Daring Fireball linked to the post and gave his own summary of the situation:

Billions of dollars at stake if AAPL stays near or under $500 a share until January 19 and then makes a run after that. No tinfoil hat required to see the motivation here.

I’m not sure where the $500 number comes from, because it wasn’t cited in the original article. I suspect that Gruber did some more research and determined that in the months since Springer’s article, $500 had become the most popular strike price among investors, and thus carried the heaviest weight among the variously priced options set to expire today.

Today, Apple’s stock price closed at exactly $500. Sometimes the way things unfold seems too precise to be mere coincidence, and Gruber’s reaction to the news says as much:

I still have that bridge to sell you if you don’t think the fix was in on this.

But was it a fix, or merely an “honest” market doing what markets do? I don’t claim to know too much about the perplexing ebbs and flows of the stock market, particularly when it comes to options, but this article by Rocco Pendola offers a counterpoint to the conspiracy angle, taken verbatim from his interview in 2011 with Neil Pearson:

Neil Pearson: Let’s use AAPL as an example. Friday, AAPL’s closing price was near $340. Further, let’s suppose that there is a large trader or group of traders who follow a hedging strategy that requires them to sell aggressively if AAPL rises above $340, and buy aggressively if AAPL falls below $340. If this is the case, their trading will have a tendency to “pin” AAPL at or near $340. It is only a tendency, because during the week there might be some event, either a news announcement or trading by some other investors, that dwarfs the effect of the hedging strategy and moves AAPL away from $340.

In other words, Pendola agrees that the large number of open options had a part in pushing the stock price to $500, but insists that the fact that it closed precisely on that number was hardly guaranteed or “fixed” as Gruber suggests.

Because Pendola and Pearson are experts in stock analysis who have covered precisely this topic before, even with Apple as a previous subject, I tend to respect their conclusion. I also noticed that in after-hours trading, AAPL hasn’t begun rocketing upwards. If there were some conspiratorial manipulation of the stock to keep it at $500 only through the close of trading today, one would imagine it would have traded higher than $500.31 after hours.

I was as quick as anybody to jump on the conspiracy wagon when the stock closed exactly at $500, but sometimes truth really is stranger than fiction.

Dell’s Downfall

I wrote seven years ago that Dell was on the way out. Apple had just announced it would be moving the Mac to Intel CPUs, and I extrapolated, somewhat wildly it turns out, that this would lead to Dell’s downfall.

I was wrong.

A huge, erroneous assumption in my condemnation of Dell was that Apple’s ability to boot Windows on Mac hardware would make the buying decision easy for folks who cared about Windows but wanted a high-quality machine. In retrospect, I don’t think the ability to run Windows on Mac has done nearly as much to help Apple as I predicted. Why? Windows became irrelevant. In late 2005 I saw the future of personal computing as a battle between Macs, PCs, and Linux. With the debut of Intel Macs, I saw Apple coming to the table with a trump card: “if you like our hardware, we can run your OS!” For a moment, the Mac could run every relevant, mainstream personal-computing OS. That didn’t last long.

The debut of iOS, Android, and to a lesser extent, Windows 8, changed the landscape. Nobody cares that the Mac can run Windows anymore, because nobody cares about Windows. And as much as it pains me to say it, outside of the relatively small group of enthusiasts to which I belong, nobody cares about the Mac. The mass market turned to mobile, and it was Apple, Google, and Samsung who ended up seizing on that opportunity.

I haven’t kept close tabs on Dell over the past several years. Heck, I thought they were on the way out of business, so why should I bother? But taking another look I find their marketing emphasis is almost identical to what it was before. You can buy a laptop, you can buy a tower, you can buy a monitor. That’s the Dell way, and although I’ve been wrong before, I am doubling down: the Dell way will be Dell’s downfall.