Open Source Innovation: Red ocean, blue ocean and the eternal coming of Linux
Many were surprised by the extraordinary sales of the Eee PC, and Asus plans to sell 3.8 million units next year. A single product seems capable of substantially raising the number of Linux users worldwide in a single year. How is that possible? Can we do even better?
At the end of each year since 2000 we are bombarded with opposing views about the next coming of Linux on the desktop, the growth or decline of open source software on servers, or whether Apache is growing and IIS regaining share. It reminds me of heated debates about football, or politics, or many other clearly undecidable questions; the debate has entertainment value in itself, so despite the lack of any practical value it remains a common sport. As I would never pass up such an entertaining opportunity, I will try to present a few opinions of my own.
[Photo: “Blue Ocean 1024” by Aube Insanité]
First of all, I strongly believe that the idea of a short-term (0-2 year) “tipping point” producing a sudden switch from Windows to Linux on the desktop has no factual basis. All the research on ICT and innovation diffusion shows that when the incumbent enjoys strong network effects (as Microsoft does, through the combination of economic incentives to its channel and the inertia of its installed user base) and is willing to adapt its pricing strategy to counter external threats, it can significantly delay the adoption of even technically perfect alternatives. Combined with the fact that at the moment a channel for Linux desktops does not exist (apart from some internal successes like IBM, or some external sales by Novell), my models predict less than 5% adoption for enterprise desktops within 2 years if everything stays the same.
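For illustration only, here is a minimal sketch of the kind of diffusion dynamics involved; the Bass-style coefficients below are invented for the example, not the parameters of the actual models, and simply show how weak external influence and dampened word-of-mouth keep two-year adoption in the low single digits.

```python
# Purely illustrative: a Bass-style diffusion curve with invented coefficients,
# showing how a weak external-influence term (p) and word-of-mouth (q) dampened
# by the incumbent's counter-moves keep two-year cumulative adoption low.

def bass_adoption(p=0.01, q=0.4, years=2, steps_per_year=365):
    """Cumulative adoption fraction F(t), Euler integration of
    dF/dt = (p + q*F) * (1 - F), starting from F(0) = 0."""
    dt = 1.0 / steps_per_year
    F = 0.0
    for _ in range(years * steps_per_year):
        F += (p + q * F) * (1.0 - F) * dt
    return F

print(f"Adoption after 2 years: {bass_adoption():.1%}")  # roughly 3%, well below 5%
```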
And what can change? The first important idea is that there are two ways of doing business: the “red ocean” (fighting for the same market and undercutting the competition) and the “blue ocean” (searching for new markets and ideas). My belief is that abrupt changes are much more difficult in red ocean environments, as everyone tries to outsmart everyone else, and those capable of surviving longer (for example, because they have more cash) are increasingly favored by this competitive model. But “order of magnitude” changes are possible with a blue ocean strategy, because the space for exploring new things is much larger. Andy Grove of Intel once mentioned that:
When a change in how some element of one’s business is conducted becomes an order of magnitude larger than what that business is accustomed to, then all bets are off. There’s wind and then there’s a typhoon, there are waves and then there’s a tsunami.
Can we find examples of this “order of magnitude” change? Some examples are Amazon EC2 (the cost of one hour of managed, scalable CPU is one order of magnitude lower than the alternatives), the Asus Eee PC (nearly one order of magnitude lower cost compared to other ultraportables), and the XO notebook (one order of magnitude reduction in cost, one order of magnitude or more in planned audience); all were surprisingly successful (even the XO, well before shipping, forced companies like Intel, AMD and Microsoft to react and compromise in order to participate in the same market).
Still with me? The missing piece is that we should strive to facilitate the choice of open source at the points of change; for example, it is easier to suggest an alternative when the current situation is already changing (like suggesting a migration to Linux when people have to replace their PCs). We should make sure that we propose something that costs one order of magnitude less than the alternatives, that can support sustainable business models, and that satisfies the needs of users. We have to create a software/hardware/services assembly (as the XO was created from scratch) to replace and enhance what desktop PCs are doing now. Technically speaking, we have to create a hardware assembly that costs one order of magnitude less to build, software that costs one order of magnitude less to maintain, and services that cost one order of magnitude less to run.
How can we do it? The hardware part is easy: design for the purpose. Take the lead from what the XO has done and create a similar platform for the desktop. Flash storage is still too costly, so design a single-platter disk with the controller and metal case soldered onto the motherboard; think about different chip designs (maybe leveraging the Niagara T2) that reduce the number of cores and add on-chip graphics and memory architectures (when source code is available, more sophisticated manual prefetching schemes become possible).

Software needs are in a sense easier: we still need to facilitate management (Sun’s APOC or GONICUS’ GOsa are good examples) and integrate into the system an easy way to receive external help. Think outside the box: maybe LLVM is a better compiler than GCC for some aspects of the machine (think about what Apple has done with it). Leverage external network services (like Wal-Mart’s gPC does with gOS): this means creating external backup and storage for moving users, allowing the “cloning” of one PC onto another when a replacement is needed, and easily synchronizing files and data with external services using tools like Conduit. Allow third parties to target this as a platform, as Google is doing with Android, and partner with local companies to create a channel that will sell services on top of it.

Since the cost of materials drops by roughly 10% for every order of magnitude of parts produced, an ambitious company could create a $99 PC with reasonable capabilities, packaged by local companies for local needs; the potential market can be estimated at 25% of the current installed PC base (both new users and users adopting it as a second or replacement platform), or roughly 200 million PCs.
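To put rough numbers on this, here is a small back-of-the-envelope sketch. Only the 10%-per-order-of-magnitude rule and the 25% / ~200 million market estimate come from the text above; the installed-base figure and the starting bill-of-materials cost are assumptions added purely for illustration.

```python
import math

# Back-of-the-envelope sketch. The installed base and the $130 starting
# materials cost are assumptions for illustration only.

def unit_cost(base_cost, base_volume, volume, discount=0.10):
    """Materials cost after a 10% reduction for every order of magnitude
    (factor of 10) of produced volume above base_volume."""
    orders = math.log10(volume / base_volume)
    return base_cost * (1.0 - discount) ** orders

installed_base = 800e6                  # assumed worldwide installed PCs, ~2007
addressable = 0.25 * installed_base
print(f"Potential market: ~{addressable / 1e6:.0f} million PCs")  # ~200 million

# Assumed starting point: $130 of materials at 100,000 units.
for volume in (1e5, 1e6, 1e7, 1e8):
    print(f"{volume:>12,.0f} units -> ~${unit_cost(130, 1e5, volume):.0f} of materials")
```

Under these assumed figures, the same rule brings materials below the $99 target somewhere around a hundred million units, which is roughly the scale a 200-million-PC market would imply.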
The assumption that everything will stay as it is today is just our inability to plan for a different future.