Many were surprised by the extraordinary sales of the eeepc, and Asus plans to sell 3.8 million units next year. A single product seems capable of substantially raising the number of Linux users worldwide in a single year. How is that possible? Can we do even better?
At the end of each year since 2000 we are bombarded with opposing views about the coming of Linux on the desktop, the growth or decline of open source software on servers, or whether Apache is growing and IIS is regaining share. It reminds me of heated debates about football, politics, or many other clearly undecidable questions; the debate has entertainment value in itself, so despite the lack of any practical value it remains a common sport. As I would never leave such an entertaining opportunity unfulfilled, I will try to present a few opinions of my own.
(Image: Blue Ocean 1024 by Aube Insanité)
First of all, I strongly believe that the overall idea of a short-term (0-2 years) “tipping point”, with a sudden switch from Windows to Linux on the desktop, has no factual basis. All the research on ICT and innovation diffusion shows that when the incumbent enjoys strong network effects (as Microsoft does, combining economic incentives to its channel with the inertia of its installed user base) and is willing to adapt its pricing strategy to counter external threats, it can significantly delay the adoption of even technically perfect alternatives. Combined with the fact that at the moment a channel for Linux desktops does not exist (apart from some internal successes like IBM, or some external sales by Novell), this means that my models predict less than 5% adoption within two years for enterprise desktops if everything stays the same.
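The post does not publish the underlying model, but a minimal Bass-style diffusion sketch gives a feel for how such a low two-year figure can come out. The coefficients below are purely illustrative assumptions chosen to reflect a strong incumbent and a missing sales channel, not the author's actual parameters.

```python
# Minimal Bass-diffusion sketch (illustrative only; not the post's actual model).
def bass_adoption(p, q, years, dt=1.0 / 12):
    """Cumulative adoption fraction over time via simple Euler steps.
    p: coefficient of innovation, q: coefficient of imitation (both per year)."""
    f, series = 0.0, []
    for _ in range(int(years / dt)):
        f += (p + q * f) * (1.0 - f) * dt  # Bass hazard rate times the remaining market
        series.append(f)
    return series

# Assumed low coefficients: weak external push (no channel) and slow word of mouth
# against an entrenched incumbent. Values are hypothetical.
curve = bass_adoption(p=0.01, q=0.4, years=2)
print(f"Cumulative enterprise-desktop adoption after two years: {curve[-1]:.1%}")
```

With these assumed values the curve sits around 3% after two years, and even doubling the imitation coefficient keeps it below 5%: that is the gist of the “no short-term tipping point” argument.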
And what can change? The first important idea is that there are two ways of doing business: the “red ocean” (fighting for the same market and undercutting the competition) and the “blue ocean” (searching for new markets and ideas). My belief is that abrupt changes are much more difficult in red ocean environments, as everyone tries to outsmart the others, and those capable of surviving longer (for example, because they have more cash) are increasingly favored by this competitive model. But “order of magnitude” changes are possible with a blue ocean strategy, because the space for exploring new things is much larger. As Andy Grove of Intel once put it:
“When a change in how some element of one’s business is conducted becomes an order of magnitude larger than what that business is accustomed to, then all bets are off. There’s wind and then there’s a typhoon, there are waves and then there’s a tsunami.”
Can we find examples of this “order of magnitude” change? Amazon EC2 (the cost of one hour of managed, scalable CPU is an order of magnitude lower than the alternatives), the Asus eeepc (nearly an order of magnitude lower cost than other ultraportables), and the XO notebook (an order of magnitude reduction in cost, and an order of magnitude or more in planned audience) all qualify, and all were surprisingly successful (even the XO, well before shipping, forced companies like Intel, AMD and Microsoft to react and compromise in order to participate in the same market).
Still with me? The missing piece is that we should strive to facilitate the choice of open source at the change points: it is easier to suggest an alternative when the current situation is already undergoing change (like suggesting a migration to Linux when people have to replace their PCs). We should make sure that we propose something that costs an order of magnitude less than the alternatives, that can support sustainable business models, and that satisfies the needs of users. We have to create a software/hardware/services assembly (much as the XO was created from scratch) to replace and enhance what desktop PCs are doing now. Technically speaking, we need a hardware assembly that costs an order of magnitude less, software that costs an order of magnitude less to maintain, and services that cost an order of magnitude less to deliver.
How can we do it? The hardware part is easy: design for the purpose. Take the lead from what the XO has done and create a similar platform for the desktop. Flash storage is still too costly, so design a single-platter disk with the controller and metal case soldered onto the motherboard; think about different chip designs (maybe leveraging Niagara T2), reducing the number of cores and adding on-chip graphics and memory architectures (when source code is available, more sophisticated manual prefetching schemes become possible).

The software needs are in a sense easier: we still need to facilitate management (Sun’s APOC or Gonicus’ GOsa are good examples) and integrate into the system an easy way to receive external help. Think outside the box: maybe LLVM is a better compiler than GCC for some aspects of the machine (think about what Apple has done with it)? Leverage external network services (as WalMart’s gPC and gOS do): create external backups and storage for moving users, allow “cloning” of one PC to another when a replacement is needed, and make it easy to synchronize files and data with external services using tools like Conduit.

Allow third parties to target this as a platform, as Google is doing with Android, and partner with local companies to create a channel that will sell services on top of it. As the cost of materials goes down by roughly 10% for every order of magnitude of parts produced, an ambitious company could create a $99 PC with reasonable capabilities, packaged by local companies for local needs; the potential market can be estimated at 25% of the current installed PC base (both new users and users adopting it as a second or replacement platform), or roughly 200 million PCs.
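The closing numbers can be made explicit with a quick back-of-the-envelope check. In the sketch below, the starting bill-of-materials cost and the worldwide installed base are assumed figures chosen only to show how the 10%-per-order-of-magnitude rule, the $99 target and the 200-million-unit estimate hang together.

```python
# Back-of-the-envelope check of the post's figures (inputs marked "assumed"
# are hypothetical, not taken from the post).
base_bom = 135.0          # assumed starting bill of materials at low volume, USD
discount_per_oom = 0.10   # post: materials cost drops ~10% per order of magnitude of volume
scale_up_ooms = 3         # assumed scale-up, e.g. from thousands to millions of units

unit_cost = base_bom * (1 - discount_per_oom) ** scale_up_ooms
print(f"Unit cost at volume: ${unit_cost:.0f}")  # ~$98, close to the $99 target

installed_base = 800e6    # assumed worldwide installed PC base circa 2007
addressable_share = 0.25  # post: ~25% of the installed base
print(f"Potential market: {installed_base * addressable_share / 1e6:.0f} million PCs")  # ~200 million
```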
The assumption that everything is going to stay as it is today is just our inability to plan for a different future.
Technorati Tags: Open Source Innovation, oss, open business, open source strategy, eeepc, ec2, niagara, XO
andrew aitken 7:16 am on January 9, 2008
Well, there’s certainly a lot of content in this posting, but I want to just address one portion, that of Savio’s comments you highlite. I think CIOs would disagree with the statement that support is of much less value than a product. Without some form of support most CIOs will not be willing to deploy a piece of software. And CIOs are willing to pay for quite a bit of piece of mind which is what support buys. We heard that loud and clear from CIOs interviewed at last year’s Think Tank. It will be interesting to see if they feel the same way this year. But, just because CIOs are willing to pay for support, doesn’t mean they are willing to pay for it only from the developer of the open source solution itself. One of the benefits of open source is choice. They may be willing to go elsewhere if they feel they can get as high a level of support. This brings up a couple of other points, a business with a high services revenue component traditionally doesn’t scale to meet the required returns for most investors, and in order to be successful today, growing a large and competent channel is critical. How much of the services is a vendor going to keep and how much are they going to give to their channel to motivate them? So,again, I disagree with Savio’s point, but it is a very complex issue.
Roberto Galoppini 12:47 pm on January 10, 2008
Hi Andrew, happy to see you joining the conversation.
I suspected as much, and I was just waiting for customer feedback like what you reported. What about the size of those enterprises?
Appropriating returns from the commons is not an easy task, but it is also true that well-established open source firms like Red Hat don’t seem too worried by copycat versions. Yet another complex and interesting issue here, am I right?
Well, I definitely agree with you, Andrew: the channel is critical, and few open source firms have found their way here. I am sure Open Source Franchising can play an important role. Do you agree?
andrew aitken 6:06 pm on January 10, 2008
Well, let’s address the Red Hat comment. Red Hat certainly does a good job, and as long as the overall adoption of Linux keeps growing worldwide, they are safe. But we’re hearing from some very large customers that, as their dependency upon Red Hat grows, and consequently their cost, they are beginning to figure out exactly where their break-even is between adding more RHEL subscriptions and hiring a sys admin, or training up one of their own. Additionally, as more lower-cost and free versions of Linux gain in functionality, usability and supportability, we’re hearing that enterprises are concentrating new RHEL subscriptions on high-volume and mission-critical workloads while using other Linux distributions for other functions. Virtualization is a huge trend and we’re seeing tremendous interest, but it’s still a bit early for many large-scale deployments.
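That break-even is simple arithmetic once numbers are plugged in; the subscription price and sys admin cost in the sketch below are assumptions for illustration, not figures from this thread.

```python
# Illustrative break-even between adding RHEL subscriptions and hiring/training
# an in-house sys admin (all figures below are assumed, not from the thread).
subscription_per_server = 1000.0  # assumed annual cost of one RHEL subscription, USD
sysadmin_annual_cost = 90000.0    # assumed fully loaded annual cost of one sys admin, USD

break_even_servers = sysadmin_annual_cost / subscription_per_server
print(f"Above roughly {break_even_servers:.0f} subscribed servers, moving some workloads "
      "to a free distribution supported in-house starts to pay for the extra admin")
```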
Roberto Galoppini 12:01 am on January 12, 2008
Retaining customers is not trivial once they become technologically autonomous, but as you stated there is a trade-off.
Besides economic considerations, I believe another factor is the need for customization. When a customer needs its own flavor of a commercial Linux distribution, whatever the reason, it is time to consider standing on its own feet. Flexibility matters. Right?