
Stagnation Or Stability?

Michael Lopp is moving on from Things, a popular to-do management app.

That’s an understatement. The title of the post is “R.I.P. Things,” and he wastes no time explaining that he is “throwing away” the app. This sounds juicy. What could have him so riled up?

How can I trust that I’m using the state of the art in productivity systems when I’m using an application that took over two years to land sync I could easily use? What other innovations are they struggling to land in the application? Why hasn’t the artwork changed in forever? What is that smell? That smell is stagnation.

The nut of Lopp’s rationale for tossing Things to the curb is the relatively slow pace of its development over the years. Delays in delivering long-desired features, namely sync, made him conclude that the software will not evolve in ways that suit his needs moving forward. He’s not giving up on Things because of specific shortcomings, but because of anticipated shortcomings and a loss of confidence in the developers of the app.

This line of reasoning gets my hackles up in part because I’m a cautious, deliberate developer. I tend to add features, rework user interfaces, and adopt new platforms at a pace that frustrates even my most loyal customers. I’m slow, but I’m good! When Lopp attacks Cultured Code, the makers of Things, and questions their core competence, I feel that I am being attacked as well.

But what really frustrates me in this case is that the software has served him perfectly, and he thanks it with a slap to the face. It’s one thing to denigrate a product for failing to meet your expectations, or for exhibiting a clear lack of craftsmanship, but Lopp admits that those problems do not apply:

Part of me has been fine with this lack of change because I don’t need my productivity system to do much more than capture a task, allow me to easily categorize and prioritize tasks, make it easy to search and filter them, and do all this work frictionlessly. “Things does these things well,” I thought to myself, “I don’t need anything else.”

He applauds the app for allowing him to do his work “frictionlessly.” How does a software developer achieve this level of performance? By first building a quality product and then working deliberately over months and years to address the minor issues that remain. Woodworking makes a reasonable analogy: after a chair has been carved and assembled the job is functionally complete. It’s a chair, you can sit in it. It’s done. But customers will gripe with good cause about its crudeness unless the hard work of detailing, sanding, and lacquering is carried out. Only then will it be considered finely crafted.

As a seasoned software manager, I know Lopp appreciates how hard it is to achieve the stability Things has provided for him. But as a user, he’s as excited as any of us to see new, fresh designs. As an onlooker, it’s easy to associate dramatic change and motion with competence, and quiet refinement with laziness. We must draw on our own experiences attempting to build great things to appreciate how much work takes place in stillness, to have faith that even though things may appear stagnant, the benefit of frictionlessness is quietly accruing. An app at rest may be in that long, arduous phase of becoming finely crafted.

There is a time for dramatic change as well, but it comes with costs. If after years of careful refinement a product is found to be lacking in some important way, bring out the hatchets. Chop it all to bits and rebuild from scratch. The possibilities of positive change in major reworks are exhilarating as a developer, and tantalizing to customers. But every reworked component of a product also resets that process of refinement.

Software should be criticized. Even apps that consistently wow me with their intuitiveness and polish leave me scratching my head about perplexing, nuanced failures. But criticize an app for its failure to do something important, not for an unspecific failure to change in general.

I’ve whined about stagnation, too. I waited years for an update to Keynote and, over time, became more and more grumpy about the lack of change. The fact that it’s just about the best application I’ve ever used slowly lost sway in my judgement of the software and, by association, the team of developers who build it.

The next time I’m tempted to think harshly of a developer working at a slower pace than I’d like, I’ll try to step back and appreciate that I care enough about their software to be concerned. And, more importantly, to appreciate that they care about their software too. So much that they work slowly, deliberately, painstakingly in the pursuit of a frictionless experience for me and other users.

Respect The Crowd

Everybody knows Apple’s maps are not as good as Google’s maps.

If somebody had belligerently stated a year ago that “Apple is not going to just walk in and be a serious player in maps,” they would have been proven right. Apple shipped their own Maps app on iOS 6, displacing the Google maps that had been a key component of the operating system since 1.0, and set the overall usability and “magic-ness” of iOS back a few notches.

It’s all about the data. It doesn’t matter how beautiful Apple’s maps are, or how quickly they load, if they consistently assign wrong names and locations to the businesses and landmarks that customers search for on a daily basis. Here’s a map of “Spy Pond Park,” a neighborhood playground and baseball field that is central to many iPhone-toting parents’ regular routines. Inexplicably, Apple’s Maps refers to it as “Boston Park”:

[Screenshot: Apple’s Maps labeling Spy Pond Park as “Boston Park”]

Upon upgrading to iOS 6, this landmark was one of the first locations I looked up. Finding it mislabeled, I dutifully selected the “Report a Problem” option and submitted detailed correction information.

That was over 6 months ago. Today it’s still “Boston Park” on my phone. And it’s infuriating.

It’s not just “Boston Park.” My local post office shows up on the wrong side of the street. The nearest Whole Foods supermarket is purported to exist in an industrial park behind the local subway station, when it is actually located across the expressway and down the road about a quarter mile. Other parks in my town are represented as large blank areas on the map, not locatable by name, even through trial and error.

Each of these issues is minor in isolation, but the weight in accumulation is enough to drive any sensible person to another mapping solution. If you can’t trust your Maps app to get you where you need to be, then you can’t use the Maps app. That is unfortunate, indeed.

I was among the most excited of Apple fanboys when I first heard the rumors about Apple entering the mapping market. I put my faith in Apple’s ability to zero in on the remaining nuanced usability problems in maps: the things that Google had overlooked. Instead, I learned upon updating that I had lost access to the effortless transit directions I had grown so accustomed to, and lost all faith in the accuracy of Maps’s data.

It would be foolish to expect perfection from any map, but to be a serious contender the data has to be reputable enough that mistakes are an exception rather than the rule. But more importantly for an app with such a critical impact on day-to-day living, it’s imperative that corrections to the data be as useful to the consumer as to the vendor. Currently, corrections to Apple’s Maps app are only useful to Apple, presuming they are taking them into consideration at all.

In the old days of paper maps, we expected the data to be mostly accurate, but could accommodate an occasional error. Street names change. Town borders shift. Highways are demolished and reconstructed. But in the old days, corrections were also as easy as applying pen to paper: mark out the mistake and clarify the current state of the world. One deft move and the problematic map was fixed — for the owner — forever.

I commend Apple for including the “Report a Problem” feature in their Maps app from day one. They knew that the data was not bulletproof, and they understood that their vast, loyal user base was a great resource for improving it. But I think this reporting process is failing Apple precisely because corrections to Apple’s maps lack all of the advantages of the old-fashioned pen-and-paper method. After laboriously detailing the problems with a point of interest in Apple’s maps, correcting its name, dragging its pinpoint to a corrected location, etc., the user’s reward is to go on suffering with the same incorrect data.

These are the most important points of data in Apple’s maps: the ones that a specific user has taken pains to refine and finesse. And Apple opts to leave them in their infuriating, sometimes dangerous state of error, making the app decreasingly useful to the customer.

I’m holding out hope that Apple is working on some major coup for the integrity of their mapping data. It would be fantastic if they announced at WWDC, for example, that they have listened to feedback from developers and customers, and are embracing some new approach to gathering and refining mapping data. Could they have something up their sleeve that would facilitate leap-frogging Google and other POI data-mongers? We can only hope.

In the absence of such improvements they should offer their users something akin to the instant fixes that were afforded by pen and paper. When I report to Apple through my own copy of Maps that the post office is in the wrong place, it should no longer be up for debate where the post office is. When I state with no uncertainty that “Boston Park” is actually called “Spy Pond Park,” I should from that point onward be able to request “directions to Spy Pond Park” without frustration.

Crowdsourcing data refinement can be a very powerful tool. Look at the success Wikipedia has had in their efforts to catalog, in a nutshell, high-level synopses of all the world’s encyclopedic data. Wikipedia works because well-intentioned contributors who spot an omission or error in the data can submit a fix and see the changes immediately. Never again (unless the change is explicitly backed out) will they be punished by reading the non-factual or incomplete information that prompted them to take action.

There are good arguments for why Apple can’t be quite as open as Wikipedia, or to choose a more apt comparison, as open as OpenStreetMap. Apple puts their brand on the iPhone because it is supposed to exude quality, and they expect to be held responsible for the quality of that product from top to bottom. Completely opening mapping data for iOS would undoubtedly lead to attempts at sabotaging Apple’s reputation by injecting embarrassingly incorrect data into the database.

On the other hand, completely botching map data in many locales, while doing little or nothing to address the problem, is also detrimental to Apple’s brand. I used to sing the praises of my iPhone above all competitors. Now, when I am jarred from my fanboy-hypnosis, staring down at an alleged life-changer that doesn’t know how to get me from point A to point B, I’m not so convinced I can defend it.

In order for Apple’s customers to continue “reporting a problem” with Maps, they need to feel that their reports are having some impact. They need to feel respected. Ideally, good reports would lead to timely corrections on a mass level that would benefit all other iOS users. Anecdotally, this is not happening. So at a minimum a user’s own report should be respected by the device they hold in their hands. Let the customer know their voice was heard by improving the usability of their device immediately. Customers demand confidence in map data, whether it be from Apple or fine-tuned by their own hand. If we can’t count on map data, we won’t use the app, we won’t report problems, and we won’t help Apple one iota in shoring up this massive shortcoming.

You Can Check Out Anytime You Like

A week or so ago on John Gruber’s The Talk Show, Gruber and special guest John Moltz recapped the situation with WWDC selling out and with the sprinkling of alternative conferences and events springing up to fill excess demand during the same week in San Francisco. Among those conferences is altWWDC, put on by folks from Appsterdam, and the CocoaConf Alt conference.

During the podcast they remarked on the use of “WWDC” literally in the naming of “altWWDC,” and joked about how likely it was that Apple would take notice and demand something change on that front. As far as I know, altWWDC has escaped thus far unscathed, but CocoaConf Alt has not been so fortunate:

We had secured space in the hotel directly next door to the big show, and we were putting together a phenomenal list of speakers. Ticket sales were better than we had hoped. All was well until we got an email from the Intercontinental San Francisco, saying that they had determined that our event was in conflict with Apple and that due to their contract with Apple, we couldn’t use the space.

Taken at face value: CocoaConf reserved space in a reputable San Francisco hotel, counted on that reservation to sell tickets and to begin organizing the conference in earnest, and now the hotel has backed out of its agreement.

There is a lot of “who, what, when, why and how” missing here. Did Apple specifically ask the Intercontinental to cancel the deal with CocoaConf upon learning about it, or did somebody at the hotel discover a conflict while reviewing the contract terms and proactively seek to avoid an issue with Apple? My hunch is that the hotel is either overreacting on its own initiative, or that some individual at Apple is overreacting without the full, reasoned consent of Apple’s leadership.

Whatever went down, and whoever is to blame for it: this is not good for developers, not good for San Francisco, and not good for Apple. In an era when WWDC conferences sell out in minutes, it’s only natural that other events would rush in to help to fill the void. And it’s only natural that some of those events will seek to capitalize on the momentum of Apple’s huge event already drawing the spotlight on San Francisco and attracting hundreds if not thousands of additional visitors who are not registered attendees of the conference.

Apple should actively encourage parallel events such as these. They could even go a step further by participating to a limited extent in the events. Sending a few company representatives out to float among each of the satellite activities would give attendees of those events a sense of connectedness to Apple without overly straining Apple’s limited resources inside the conference.

One of the major benefits of WWDC to Apple is to draw the world’s attention to the company’s relevance to desktop and mobile developers, and to how eager the company is to serve them. Even being cited as the cause of quashing meet-ups in the periphery of WWDC is not in the service of that goal. If Apple was involved in pushing for this decision, they should clarify and retract that position. If they were not involved, they should take care to ensure that the hotels they sign contracts with in the future understand they hold no ill will towards these events.

I’m Feeling Useless

I was intrigued to see that Google has changed the country identification for Palestine from “Palestine territories” to just “Palestine.” A subtle but serious hint that the company recognizes Palestine’s right to independent statehood.

I was sort of mystified, however, by John Gruber’s observation that the “I’m Feeling Lucky” button in the BBC’s screenshots instead shows the text “stroke of luck.” It would be one thing to learn that the phrase is localized to various regional English dialects around the world, but in my brief tests I have yet to find a single other English Google site where the button text is altered. I also find it very strange that the button text in the screenshot is lowercased. It’s literally “stroke of luck” and not “Stroke of Luck” as one would expect, to fit in with both conventional UI design and with the rest of Google’s UI.

Looking closer, I see the standard “Google Search” button shows up as “Google search a”, and the “About Google” link at the bottom says (I think): “Never you like to know about Google.”

I am inclined to think that the source of the screenshot is the Palestinian Google home page translated to some dialect of English by an automatic translation service (perhaps Google itself). I don’t think Google has adopted “Stroke of Luck” as part of its revamping of the Palestine Google home page.

Looking into this got me interested in trying out the “I’m Feeling Lucky” button for the first time in many years. I returned to my native English www.google.com where the text still reads “I’m Feeling Lucky,” but funnily enough you can’t actually use the button to meaningfully achieve what it used to: jumping to the one presumably most-relevant result for your search. Why? Because the moment you type any text in the search field, a prerequisite for using the “I’m Feeling Lucky” button, the entire UI of Google’s famously simple home page shifts dynamically to a completion-list-oriented UI. The lucky button is long gone. Look carefully, and you’ll see there are now “I’m Feeling Lucky” links next to each completion-list result, but these are only visible if you arrow-select, or hover your mouse over the item in the list.

So what is the point of the “I’m Feeling Lucky” button on Google’s home page? You can only click it before you’ve bothered to type anything. On the Palestinian Google home page, clicking the button takes you to Google’s doodles page. But on the American Google home page, merely hovering a mouse cursor over it will change its text to something even more whimsical: “I’m Feeling Wonderful,” “I’m Feeling Stellar,” or “I’m Feeling Puzzled,” for example. Click one of these and you’ll be shuttled off to some vaguely appropriate internet destination.

The “I’m Feeling Lucky” button hasn’t, to my knowledge, been changed to “stroke of luck” in any regional version of Google’s home page. It has, however, been changed into a useless button whose behavior has no relevance to the original “most-relevant result” behavior. It’s just a piece of useless junk on Google’s otherwise still admirably minimalist home page.

Update: Matthew Panzarino commented on Twitter that the dynamic removal of the button is caused by the Google Instant feature, which users can turn off to restore traditional functionality. To Google’s credit, the preference can even be selected without logging in to a user account. However, given that the traditional behavior of the button now works only in a non-standard, user-customized configuration, I think it would best be ejected from the home page.

End WWDC

Not long ago, when Apple’s WWDC conference dates were announced, a slow trickle of registrations would occur as developers consulted with spouses, bosses, and co-workers to determine, in their own sweet time, whether or not they would attend. There was never any rush, because the conference never sold out. Weeks after the announcement, developers who had not registered might even receive a personalized telephone call from Apple, urging them to make a decision. Seriously, kids, this is how it used to be.

Over the past several years, demand increased such that at last, those telephone reminders from Apple were no longer necessary. By 2008, the conference sold out for the first time, in a matter of months. By 2010 it took eight days. Last year it took less than two hours, and this year? Less than two minutes. I was one of the lucky ones who got a ticket. A few minutes later, as I witnessed friends and colleagues upset after missing the boat, I cancelled my order. It made me uncomfortable to know we had all made the same effort to register as quickly as possible, but for arbitrary reasons I was admitted and they were left out.

The conference has room for at most 5,000 developers. According to Apple’s job stimulus statistics, there are 275,000 or more registered iOS developers alone. Let’s assume for the sake of argument that Mac developers add only 25,000, bringing the total to 300,000. Every year, 5,000 attendees are selected from the qualified pool, meaning just 1 out of 60, or about 1.7%, of potential attendees will have the chance to attend.
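For the skeptical, the ratio above checks out (using the cited 5,000-seat capacity, Apple’s 275,000 iOS developer figure, and the assumed 25,000 Mac developers):

```python
# Rough odds of attending WWDC, using the figures cited above.
attendees = 5_000
ios_developers = 275_000   # Apple's published figure
mac_developers = 25_000    # assumed for the sake of argument
pool = ios_developers + mac_developers

odds = attendees / pool
print(f"1 in {pool // attendees}, or about {odds:.1%}")  # 1 in 60, or about 1.7%
```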

What are the goals of WWDC, anyway? For Apple, it’s primarily a chance to educate developers and to encourage them to contribute to the growth of Apple’s platforms. By teaching developers the latest and greatest technologies, they leverage developer efforts to differentiate Apple and to make its platforms more competitive.

For developers the main goal is to get a leg up on the persistent challenge of developing great software for these platforms, even as they are constantly changing. A side-benefit is the opportunity to commune with like-minded developers who are trying to do the same. Ideally with folks who share similar visions for how software should be developed and how the end product should behave for customers.

As the sheer number of Apple developers increases, the capacity of WWDC remains the same. The goals of the conference both for Apple and for developers are increasingly unmet as the number of developers who would like to be educated, indoctrinated, and communed with far exceeds the number of developers who actually can be.

Over the years people have made plenty of flip suggestions for how Apple can solve the problems that plague WWDC: get a bigger venue, charge more money, split it up into multiple conferences. But any of these would be a very small band-aid on a very large wound. WWDC is flat-out busted, and can’t be fixed by any of these analog solutions.

The whole point of the conference needs to be rethought, and the goals addressed from scratch using new approaches. As the greatest challenge for WWDC is in scaling to meet demand, I think it’s obvious that the rethought WWDC should be considered in terms of digital solutions. Call it WWDC if you like, but it needs to take place 365 days a year instead of 4. It needs to serve 300,000 developers, not 5,000. And it needs to take place online, not within the cramped confines of a small convention center in San Francisco.

Apple has effectively headed down this course with their laudable offering of free videos of conference sessions. The high-level goal of merely educating developers is largely met by these. But what of the other goals? The vast majority of benefits that Apple and developers see in WWDC could be achieved online using more effective digital materials that are available to, and more importantly, that scale to the vast number of developers eager to learn about and promote Apple’s platforms.

Instead of a week each year when a developer must enter a lottery for a chance at talking directly with a knowledgeable Apple engineer in the labs, beef up the existing Developer Technical Support process and workflow so that vexing issues can be driven to the point of resolution, and so that the fruits of those discoveries can be shared with others. For every “lifesaving” tip a developer has received in the WWDC labs, how many others continue to struggle in anguish because the effort was never made to codify that wisdom in the form of a developer technote or other reference material? It doesn’t make sense … it’s a bug, if you will … that so many Apple developers feel that their only opportunity to solve a problem is by meeting in person with an Apple engineer at WWDC.

Instead of asking Apple’s engineers to spend weeks every year preparing, rehearsing, and delivering sessions in San Francisco, ask them to spend a reasonable percentage of the year consulting with and assisting in the development of long-term interactive, iteratively improved video documentation. Start with the last three years of WWDC talks on a given subject and condense them down to a concise summary of the most pertinent instruction, tips, and demos. It would be ridiculous for Apple to maintain separate text documents for each year, and for developers to be told “Oh, that was addressed in 2011’s NSTextView documentation, go back and look it up.” Yet that’s what developers are forced to do when trying to extract gems of knowledge from past WWDC sessions. (Cough, it’s regrettably true that Apple’s “Release Notes” sometimes serve as a similar kind of decentralized documentation authority).

And what about the community incentive for developers? Isn’t it important to have an opportunity to meet with and catch up with developers from around the world? Yes, it is important, or I should say it would be if it actually worked any longer at WWDC. The very small fraction of developers who are admitted, combined with the unpredictability of whether you or your friends will make the cut, makes it essentially useless as an annual catching-up venue. Look to smaller conferences for this ambition. While some of them are similarly challenged in meeting demand for attendance, many are more fine-tuned both in teaching style and in topic choice. They each have a special feel of their own, which naturally attracts a repeat audience whose members are more likely to find fellowship with one another than in the comparatively giant, rotating petri dish of this year’s random WWDC ticket winners.

I have loved the times I’ve attended WWDC, and I may yet end up enjoying it again, but its time has passed. It’s time to move on. In 1983, 1993, and 2003 it was the right tool for the job because it largely fulfilled the objectives for both Apple and developers. In 2013 it’s a strangely exclusive, rotating club with arbitrary membership rules, and increasingly dubious advantages. It’s a source of annual stress and uncertainty for would-be attendees, and has just delivered a whopping blow to thousands of developers who didn’t make the cut for this year’s show.

I would miss many things about WWDC, but the things I would miss could easily be offset by superior, scalable solutions. And I would be happy to leave behind the increasing number of obnoxious aspects of the yearly ritual. It’s time for something better. It’s time to end WWDC.

Why Mention Android

Facebook is apparently due to launch an Android-based phone next week. John Sherrod wonders why they bother to mention the Android brand at all (emphasis mine):

Lately the trend has been for companies to develop phones and tablets based on a heavily customized version of Android and not even mention Google’s OS in their press events. The mention of Android is particularly surprising given all the ways that Google and Facebook compete with one another.

Google has been ridiculed in the years since Android’s debut for failing to profit much from the technology, in spite of using it to stake out a modicum of control over a large segment of the mobile industry.

Let’s assume for the sake of argument that the Facebook product is a success. Let’s assume they make a ton of money off of it. What better way to rub it in a competitor’s face than to make it very clear that you succeeded not in spite of but thanks to their technology. That you succeeded with it in a way that they couldn’t?

Mentioning Android today sets the stage for a graceful postmortem, regardless. If Facebook’s phone is a flop, they can assign some blame to Android (cf. the Motorola Rokr). If it’s a huge hit: “Google, we pwned you!”

(Via Daring Fireball)

Google’s Next Great Thing

Google Express promises same-day delivery of goods from a variety of San Francisco retailers.

Roberto Baldwin makes the very San Francisco comparison to Kozmo, the famously failed ’90s home-delivery startup, while John Gruber chastises Google for an evident lack of focus.

But what if this is the embodiment of Google finding its focus?

Steve Jobs said of Apple, before returning to the company in 1996, that he would “milk the Macintosh for all it’s worth — and get busy on the next great thing.” It’s reasonable to argue in retrospect that this is exactly what he did.

Suppose Google recognizes that it can’t play king of online advertising forever, and that it must hunker down and focus on its own “next great thing”? What technology does Google own that sets it farthest apart from potential competitors? Driverless vehicles.

Google has allegedly been testing its delivery service with employees and their friends since at least October 2012. If this fleet is not driverless yet, I’m sure it’s slated to become so.

I’m not sure it matters too much if Google succeeds, at least in the conventional sense of toppling rivals such as Amazon in the home delivery market. I imagine they see this as a no-lose gamble. If they happen to strike a chord of convenience, price, and quality in retail delivery, they may just give Amazon a run for their money. If they don’t, they will still have pushed their driverless cars through another phase of real-world testing. Amazon can keep its massive, profitless customer base, and Google can keep its next great thing.

NetNewsWire Cloud

My friend Brent Simmons created NetNewsWire over 10 years ago. The app, a Mac-based RSS aggregator, has been a constant companion to me for what feels like an eternity: I check it daily to keep up with the most important blogs, Google searches, and referral notices that I want to keep on top of. Most important of all to me, NetNewsWire spawned MarsEdit, the blogging app that I now develop and which supports me and my family.

Today Google announced they are shuttering Google Reader, the web service that has brought RSS aggregation to what I would guess is the largest number of people to ever enjoy the benefits of “news feeds” on the web. The termination of Google Reader is a disappointment to its loyal users, but it will also have a huge impact on the number of client apps, including NetNewsWire, that have built their syncing functionality on top of the service. When Google Reader goes away, all those apps will lose their syncing capability, or outright stop working.

Some folks will claim that nobody cares about RSS anymore. But the loud outcry on Twitter and through other channels indicates there is still a significant number of people who rely on the technology.

Which brings me back to NetNewsWire. My guess is that after Google Reader, and possibly after Newsvine, NetNewsWire is the most recognized brand in the world in the admittedly niche market of “RSS readers.” When the top brand in the market drops out, it puts a huge amount of focus on the remainders. Black Pixel, the current developers of NetNewsWire, have to be taking notice.

At this point Black Pixel need to ask themselves one question: are we interested in RSS, or aren’t we? They acquired NetNewsWire because they no doubt loved it and had become reliant on using it themselves. They wanted to see it live on and prosper. But did they expect to be put in a position where they are faced with the challenge/opportunity of becoming the world’s leading RSS services company? Probably not.

My understanding is that the slowness in developing and releasing a successor to NetNewsWire 3 stems largely from the challenges of working around Google Reader issues. With Google Reader out of the picture, not just for NetNewsWire, but for everybody, a new future for RSS syncing arises: NetNewsWire Cloud.

By implementing a suitable syncing API for RSS, and implementing a reasonably useful web interface, Black Pixel could establish NetNewsWire Cloud as the de facto replacement for Google Reader. Charging a reasonable fee for this service would likely inoculate it from the risk of sudden termination, and it would doubly serve to provide the very service that NetNewsWire needs to thrive on the desktop and on iOS.
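The heart of any such syncing API is reconciling read/unread state across devices. Purely as an illustration of the idea, and not anything Black Pixel has announced (every name and field here is invented), a minimal last-write-wins merge might look like:

```python
# Hypothetical sketch of last-write-wins sync for per-article read state.
# Each device keeps a map of article URL -> (is_read, timestamp); the
# server merges incoming state by keeping whichever entry is newer.

def merge_read_state(server_state, client_state):
    """Merge client read-state into server state; the newest timestamp wins."""
    merged = dict(server_state)
    for url, (is_read, ts) in client_state.items():
        if url not in merged or ts > merged[url][1]:
            merged[url] = (is_read, ts)
    return merged

server = {"https://example.com/a": (True, 100)}
client = {"https://example.com/a": (False, 150),   # marked unread more recently
          "https://example.com/b": (True, 120)}

print(merge_read_state(server, client))
# {'https://example.com/a': (False, 150), 'https://example.com/b': (True, 120)}
```

Last-write-wins is the simplest possible conflict strategy; a real service would also need authentication, subscription-list sync, and some care around clock skew between devices, but the basic shape of the problem is this small.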

Don’t get me wrong: this is no small order. I would not fault Black Pixel one iota for looking at the challenge and deciding to take a pass. But if they are truly passionate about RSS, this is their moment. This is the time when accepting the impossible challenge will reap the greatest reward.

An Indie State Of Mind

Marco Arment and John Siracusa recently wrapped up their long-running podcasts, Build & Analyze and Hypercritical. They have since gone on, in collaboration with Casey Liss, to establish two new podcasts: Neutral, a show about cars, and the Accidental Tech Podcast.

Each of the retired shows was on the popular 5by5 podcasting network, while each of the new shows is not. Since John Gruber’s sudden departure last year from 5by5, people are particularly keen to look for elements of drama related to the network. I’m guilty of it, myself. But Occam’s razor applies in situations such as this as well. The simple explanation? People who are satisfied stay where they are, and people who are less satisfied move on.

Marco acknowledged that he has been asked repeatedly why the new shows aren’t on 5by5, and conceded to share his own personal reasoning:

I want to own everything I do professionally unless there’s a very compelling reason not to. Some people feel uneasy having that level of control, but I feel uneasy not having it (to a fault).

There you have it: Marco is an indie, and he is bound to behave like one. Working within the confines, no matter how cushy, of another institution is simply not his style. He was destined to be unsatisfied with the status quo, and leaving to do his own thing no doubt resolved a great deal of tension.

I’m surprised by how consistently people assume that joining a large podcasting network is an end-goal for indie podcasters. My friend Manton Reece and I have been recording Core Intuition for over four years, yet when I have guest-hosted Build & Analyze or appeared on other 5by5 shows, a significant number of people write or comment on Twitter that I should “have my own show.” I have my own show, thank you very much. And I’m starting another one.

I worked at Apple for more than seven years, before branching out on my own to focus exclusively on Red Sweater. I’m grateful that, in contrast to indie podcasting, there is far less bias towards conglomeration in the indie software scene. I’m not constantly nagged about when I’m going to re-join Apple, or Google, or Microsoft, or Twitter. And when I ship a new app, I don’t face a barrage of questioning about which larger company will be distributing it. It’s understood that I build, test, distribute, debug, and market the software by myself. And people respect that.

Like Marco, I derive a great amount of satisfaction from doing things for myself. Also like Marco, it can sometimes be a fault. No doubt I would benefit in many ways from working for a company or from joining a podcasting network. The resources and reach of these institutions could help me build greater things and get them in the hands of more people. On the other hand, they could force me to build things that suck. Folks like us, we with an indie state of mind, tend to face a far simpler choice: be dissatisfied working for somebody else, or gratified by the thrill of trying our own thing.

Coming Soon: The Bitsplitting Podcast

I have really enjoyed, and will continue to enjoy, producing the Core Intuition podcast with my friend Manton Reece. What started as an irregular, very casual podcast more than four years ago turned into a much less irregular, weekly show last year.

The success of Core Intuition inspired me to revisit a long-standing interest I’ve had in interview-style podcasts. I have always been a fan of the humanistic style of interview as perfected by Fresh Air’s Terry Gross, and while I have no illusion of matching Ms. Gross’s impeccable style, I hope to start down the path toward being at least a bit more like her. In a nutshell, my show will be interviews with folks I’ve had the luxury of meeting, who have interesting perspectives. I’ll differentiate my show from some others by focusing more on the personal background of my guests, and on trying to discern a philosophical arc to their life and career choices.

We are currently enjoying a renaissance in podcasting (hat-tip for that assessment to Brent Simmons), and I’m excited to be among the lucky folks who are more-or-less ready to seize on the opportunity. I have learned a lot from producing Core Intuition, and from being a guest on countless other shows. What can I say? I’m a lucky guy. Right place, right time, right skills.

With Core Intuition, we waited over four years before we started accepting sponsorships. For the Bitsplitting podcast, I will accept them from day one. I was skeptical about sponsorship, but I’ve learned that it creates both a positive obligation and a positive reward. Once a sponsor has committed to paying for the privilege of a mention on my show, I feel obligated to not only record and publish the show, but to do so in a way that exudes professionalism worthy of the sponsor’s blessing.

The rewards are more complex. Obviously, there’s the money. Money is good. But less obvious is a certain validation that comes from somebody else sticking their neck out for your work. Spending money is a somewhat crude, yet very unambiguous way of sticking one’s neck out. I have found with Core Intuition that while the money is nice, it’s most important that we have an unambiguous message from our sponsors that the show is valuable. It helps us, literally, to get out of bed in the morning to record the show.

So I need sponsors. I understand that with Bitsplitting, “there’s no there there” yet, and that makes this a harder sell. If you work for or own a company with vision and a willingness to take a chance, consider being among my debut sponsors. How many listeners do I have? Zero. How many listeners will I have? Time will tell. To reward your faith in sponsoring something new, I’m offering a reduced-cost sponsorship while the wheels are set in motion. If you’re interested, please check out this preliminary sponsorship information, and drop me a line.

Update February 21, 2013: I am humbled by the reaction to my call for sponsors. We are in good shape for the launch of the show, and I’ll resume booking sponsors for future episodes after the show debuts.

Whether or not you’re in a position to sponsor the show, I hope you’ll keep your eyes on this site, or on the @bitsplitting Twitter account, to learn about the launch of the podcast, which I am confident you will find both educational and amusing.

Virtual ROM

My attention was drawn by Anil Dash on Twitter to two posts discussing the purported and actual capacities of devices that, for example, advertise “64GB of storage.”

Marco Arment linked to an article on The Verge, asserting Microsoft’s 64GB Surface Pro will have only 23GB of usable space. Arment went on to suggest that device manufacturers should be required to market devices based on the “amount of space available for end-user data.”

Ed Bott’s take, on the other hand, was that Microsoft’s Surface Pro is more comparable to a MacBook Air than other tablets, and its baseline disk usage should be considered in that context.

I think both Arment’s and Bott’s analyses are useful. It would be nice, as Arment suggests, if users were presented with a realistic expectation of how much capacity a device will have after they actually start to use it. And there is merit in Bott’s defense that a powerful tablet, using a more computer-scale percentage of a built-in disk’s storage, should be compared with other full-fledged computers.

Let’s just say that if fudging capacity numbers were patented, every tech company would be in hot water with the patent trolls. A quick glance at iTunes reveals that my allegedly 64GB iPhone actually has a capacity of 57.3GB.

[Screenshot: iTunes reporting the 64GB iPhone’s capacity as 57.3GB]

I don’t know precisely what accounts for this discrepancy, but I can guess that technological detritus, such as the metadata the filesystem uses merely to manage the content on the disk, takes up a significant amount of space. On top of that, the discrepancy may include space allotted for Apple’s operating system, bundled frameworks, and apps. Additional features such as recovery partitions always come at the cost of that precious disk space. Nonetheless, Apple doesn’t sell this 64GB iPhone as the 57.3GB iPhone. No other company would, either.
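Part of the gap, for what it’s worth, is plain unit arithmetic: storage is marketed in decimal gigabytes (10^9 bytes), while reporting tools of that era often measured in binary units (2^30 bytes). A quick back-of-the-envelope calculation, assuming iTunes reported in binary units:

```python
marketed_bytes = 64 * 10**9    # "64GB" as advertised: decimal gigabytes

# Measured in binary units of 2^30 bytes (an assumption about how
# iTunes reported capacity), the same storage "shrinks":
binary_gb = marketed_bytes / 2**30
print(f"{binary_gb:.1f}")      # 59.6

# The remaining gap down to the observed figure would be the OS,
# filesystem metadata, recovery data, and so on:
reported = 57.3
print(f"{binary_gb - reported:.1f}")  # 2.3
```

Under that assumption, unit conversion alone would account for most of the missing space, with a couple of gigabytes left over for the system’s own use.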

It seems that in the marketing of computers, capacity has always been cited in the absence of any clarification about actual utility. One of my first computers (after my Timex Sinclair 1000) was the Commodore 64, a computer whose RAM capacity was built into the very marketing name of the product. Later, Apple followed this scheme with computers that would be known as the Mac 128K and Mac 512K, each name alluding to the machine’s RAM capacity.

The purported RAM capacity was somewhat misleading. Sure, the Commodore 64 had 64K of RAM, but some of that was used up by the operating system. A program that expected the full 64K of RAM to itself could not have run. So was it misleading? Yes, all marketing is misleading, and just as it’s easier to describe an iPhone 5 as having 64GB capacity, it was easier to describe a Commodore as having 64K, or a Mac as having 128K of RAM.
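To put numbers on it: the Commodore 64’s boot screen famously reported “38911 BASIC BYTES FREE,” well short of the advertised 64K. The arithmetic:

```python
total_ram = 64 * 1024    # 65536 bytes, as marketed
basic_free = 38911       # free to a BASIC program at boot
overhead = total_ram - basic_free
print(overhead)          # 26625 bytes claimed by ROM mappings,
                         # screen memory, and BASIC's own workspace
```

(A machine-language program could bank out the ROMs and reclaim much of that space, but the out-of-the-box experience was the 38911 figure.)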

But the capacity claims were more honest than they might have been, thanks to the pragmatic allure of storing much of a computer’s own private data in ROM instead of RAM. Because in those days it was much faster to read data from ROM than from RAM, there was a practical, competitive reason for a company to store as much of its “nuts & bolts” code in ROM. The Mac 128K shipped with 128K of RAM, but also shipped with 64K of ROM, on which much of the operating system’s own code could be stored and accessed.

Thanks to the ROM storage that came bundled with computers, more of the installed RAM was available to end users. And thanks to the slowness of floppy and hard disks, not to mention the question of whether a particular user would even have one, disk storage was also primarily at the user’s discretion. It was only after the performance of hard drives and RAM increased that the allure of ROM diminished, and computer makers gradually turned away from keeping their own data separate from the space previously reserved for users. With the increasing speed and size of RAM, and then with the advent of virtual memory on consumer computers, disk and RAM storage graduated into a sort of virtual ROM.

The transition took some time. Over the years from that Mac 128K, for example, Apple did continue to increase the amount of ROM that it included in its computers. I started working at Apple in an era when a good portion of an operating system would be “burned in ROM”, with only the requisite bug fixes and additions patched in from disk. I haven’t kept up with the latest developments, but I wouldn’t be surprised if the ROM on modern Macs is only sufficient to boot the computer and bootstrap the operating system from disk. For example, the technical specifications of the Mac Mini don’t even list ROM among its attributes. The vast majority of capacity for running code on modern computers is derived from leveraging in tandem the RAM and hard disk capacities of the device.

So we have transitioned from a time where the advertised capacity of a device was largely available to a user, to an era where the technical capacity may deviate greatly from what a user can realistically work with. The speed and affordability of RAM, magnetic disk, and most recently SSD storage, have created a situation where the parts of a computer that the vendor most wants to exploit are the same that the customer covets. Two parties laying claim to a single precious resource can be a recipe for disaster. Fortunately for us customers, RAM, SSD, and hard disks are cheaper than they have ever been. Whether we opt for 64GB, 128GB, or 1TB, we can probably afford to lend the device makers a little space for their virtual ROM.