Updates from Carlo Daffara

  • Carlo Daffara 3:00 pm on August 14, 2007 Permalink | Reply  

    Open communities and lightweight consortia 

    The consortium is one of the oldest and most practiced structures for coordinating the resources of different companies towards a common goal, by creating a simple legal framework to coordinate and encourage a joint activity (for example promotion, development, or management of rights). It is based on a simple concept: to be self-sustaining, a consortium must be capable of creating more than the simple sum of its parts.

    What can be said of FLOSS-based consortia? The underlying “raw material” is usually based on open source projects that are available to all without limitation, so it cannot be a discriminating factor, contrary to other development consortia like Avalanche. In fact, even joint development efforts like the Common Customer View are neither improved nor hindered by whether the participating companies are all in a single consortium. It is not difficult to see that if such an approach looks technically interesting, non-members would probably add a compatible offering to their own projects; on the other hand, two companies in the same market would be hard pressed to participate in the same consortium, as Roberto correctly said, because there would be no economic incentive to be part of a non-differentiating common ground.

    I suspect that development consortia can accept only a single company per vertical market, while representative consortia (that leverage a common effort to provide a simplified “certification mark”) can probably be more effective in reducing the cost of promoting FLOSS-based solutions. In this sense, I would suggest that the OSA leverage more than simple interoperability, and try to promote a two-stage approach: an “inner circle” that provides the interoperability framework by leveraging paying customers, and a “subscriber circle” that leverages the shared resources (like IP clearing from Palamida, technical certification from SpikeSource, etc.) to obtain a “seal of approval” that could be used as a marketing instrument.

    After all, if we look at the OSA, we can see two different kinds of customers: those that are buying services and products from the members, and FLOSS companies that may become part of the consortium in the future; it is in my opinion sensible to try to address both.

    Technorati Tags: open communities, OSA, open source consortia

  • Carlo Daffara 9:54 am on August 6, 2007 Permalink | Reply  

    Open source collaboration: non-source code open projects 

    In the context of the joint research work with Roberto, I would like to present a small update on the OpenTTT project. OpenTTT is an EU-funded project (SSA-030595 INN7) that aims at bridging the separate worlds of technology transfer and open source software (OSS), by introducing novel methodologies to help companies in the take-up of technology.

    As part of the project, we are collecting examples of non-source-code projects where collaboration or open licensing is critical, and we have prepared a listing of such activities. The listing will be extended in the coming weeks, also including previous work like the “Open Source Gift Guide” or a list of non-software open source goods.

    As already discussed, a large portion of the work in OSS projects goes into non-code aspects, which should probably be investigated with the same interest that OSS code commands today.

    Technorati Tags: openttt, EU projects

  • Carlo Daffara 7:45 am on July 25, 2007 Permalink | Reply  

    Open Source Business Models: Joint Research Announcement 

    I am extremely happy to announce the start of a new joint research activity between the FLOSSMETRICS project and Roberto Galoppini, one of the most important European researchers working on FLOSS-based business models. The joint research work will be carried out with Carlo Daffara and will be centered on business model taxonomies, and on how the participating actors (like the FOSS communities, commercial companies and individual developers) and the licensing choices interact in a commercial exploitation context. The research will leverage the tools and research work carried out in the European project for analyzing OSS project participation and contributions, and, as with all of FLOSSMETRICS, will be publicly available.

    Technorati Tags: Commercial Open Source, Open Source Strategies, FLOSSMETRICS, robertogaloppini, carlodaffara, taxonomies

    • Savio Rodrigues 2:03 pm on July 25, 2007 Permalink

      Congrats Roberto! Look forward to seeing results from the research. Will you be studying the use of OSS by Traditional software vendors (like IBM, Oracle, Sun) to drive their Traditional software revenues?


    • Carlo Daffara 7:37 am on July 26, 2007 Permalink

      Dear Savio,
      yes, the study of how OSS models are used in traditional commercial software companies is one of the aspects of our research. We expect in the end to produce a set of papers helping companies assess existing OSS projects and compare the potentially applicable business models to decide on the most appropriate one. We hope to turn the results of what is basically software engineering research (as FLOSSMETRICS is) into a more concrete and helpful tool for companies interested in OSS.

  • Carlo Daffara 3:40 pm on June 4, 2007 Permalink | Reply  

    Open Source Production: not only code 

    Most people think that being “open source” means being coders or contributing patches, and it is still controversial how companies position themselves in the OSS market. Most people consider a company OSS when it contributes code to an OSS project, but nowadays a significant part of the value of open source lies in non-code contributions.

    During previous research projects I found several references to code contributions (most online tracking services, like Commits in Action or Ohloh, track them), but it is nearly impossible to find traces of non-code contributions. During the creation of the knowledge base of the COSPA project I found an amazingly well-written report by the French Réseau National en Technologies Logicielles called “New economic models, new software industry economy”, with a telling snippet on the OpenCASCADE CAD framework:

    In the year 2000, fifty outside contributors to Open Cascade provided various kinds of assistance: transferring software to other systems (IRIX 64 bits, Alpha OSF), correcting defects (memory leaks…) and translating the tutorial into Spanish, etc. Currently, there are seventy active contributors and the objective is to reach one hundred. These outside contributions are significant. Open Cascade estimates that they represent about 20 % of the value of the software.

    This 20% is mainly non-code related, but it is 20% of the project value nevertheless. And this happens in a very vertical, technically oriented environment; if we look at a highly successful open source project like KDE, we can find something like this:

    From Aaron Seigo’s speech at Akademy 2006

    Software development is just one of the tasks necessary to build a large scale, complex system like KDE, and I have no doubt that something similar applies to GNOME, Fedora or OpenSolaris.

    We should start thinking more about how to study non-code contributions, and how this relates to the commercialization of open source projects (and not only software).

    Technorati Tags: OpenCascade, Open Source Production, KDE

    • Ed Dodds 5:34 pm on June 5, 2007 Permalink

      I think this is why documentation and marketing are generally weak links in the OSosphere.

    • Antonio LdF 1:04 am on June 6, 2007 Permalink

      Do you remember me?
      I’m Antonio, aka Forrest Camp; do you remember Rome and the barcamp?

      In this period I have completely embraced the cause of my “Free Biz Projects” and I have thought a lot.

      I use only open source software, like Ubuntu, Gimp or Open Office.
      I would like to help improve them, but I’m not a developer and I have no money to give.

      I’m a creative, a project manager and a young marketing man. I want to create a bridge between marketing and developers. I would like to help open source reach the mainstream, using approaches that the code contributors probably ignore or despise..

      The important thing is to find something that each one of us can do for open source, and I will do everything in my power.
      Today I was going to send you an email, but this comment is better!

      See you soon Roberto!

  • Carlo Daffara 3:38 pm on May 29, 2007 Permalink | Reply  

    Open Source Firms: What is an OS company, anyway? 

    A recurring theme of discussion is what exactly defines an “OS company”. Many potential customers are finding it more and more difficult to distinguish between “real” open source, viewable-code licenses, quasi-open and more; companies are trying to leverage the opportunity of the OS market to push an offering, even when it is not OSS at all.

    Recursion by gadl

    Of course, a company like Alfresco can proudly claim that, its main offering being pure GPL software, it is OS, libre, and whatever. But what exactly makes a company an open one?
    It is not difficult to find previous traces of the same argument, starting from Mark Shuttleworth’s comments, Aitken’s, or Savio Rodrigues’. Within the FLOSSMETRICS project we are facing the same problem, that is, how to assess the “openness” of a company, and we observed a few things:

    • an OSS project is not only about code; in fact, in many projects the amount of non-code assets (like documentation, translations, ancillary digital material) is substantial. Considering companies open only by measuring code patches is reductive;
    • a company may sponsor a project in many ways. For example, granting hired programmers time to work on OSS projects (during work hours) is an indirect monetary sponsorship; hiring main developers and giving them the flexibility to continue developing OSS code is direct sponsorship.

    There are relatively few examples of the first kind; among them, companies that localize and create country-specific versions (like the Italian accounting scheme for the Adempiere ERP, created by Anthas to allow for simpler commercialization).

    The second model is quite common: IBM sponsors development of the Apache web server as the basis of its WebSphere product, Google employees are asked to work on OSS projects one day per week on company time (and Google sponsors the Summer of Code, by the way), and EnterpriseDB pays many PostgreSQL developers.

    Given this, we ended up classifying OS firms as those that:

    • sponsor, support or facilitate, directly or indirectly, an open source project, that is, a project with a license compatible with the OSD definition;
    • do so continuously, that is, not as a single, one-time contribution.

    This allows us to include only those companies that leverage OSS in an organic and structural way; otherwise they would not be able to justify the investment over an extensive period of time.

    This excludes one-time donations, for example; and it also excludes those companies that just take OSS and resell it packaged without added value, or “dump” worthless software code under an OSS license hoping that someone will take it up from there.
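    Sketched as code, the classification rule might look like the following. The field names and the threshold for “continuous” are my own illustrative choices, not part of the FLOSSMETRICS definition:

```python
# Hypothetical predicate for the "OS firm" classification sketched above:
# an OSD-compatible license plus continuous (not one-time) sponsorship.
# The thresholds are invented for illustration only.

def is_open_source_firm(osd_compatible_license: bool,
                        contributions_per_year: int,
                        years_of_activity: int) -> bool:
    """A firm qualifies if it supports an OSD-compatible project
    continuously, rather than through a single one-time contribution."""
    continuous = contributions_per_year >= 1 and years_of_activity >= 2
    return osd_compatible_license and continuous

print(is_open_source_firm(True, 12, 3))   # True: sustained sponsorship
print(is_open_source_firm(True, 1, 0))    # False: a one-time donation
print(is_open_source_firm(False, 50, 5))  # False: license not OSD-compatible
```

    The point of the second argument pair is exactly the one made above: volume alone does not qualify a firm; continuity and an OSD-compatible license do.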

    Technorati Tags: adempiere, alfresco, anthas, Commercial Open Source, Open Source Firm

    • Savio Rodrigues 6:22 am on May 31, 2007 Permalink

      I agree Roberto, defining an OSS company is difficult and is going to get increasingly so as more companies add OSS to their software strategy.

      Something that I’ve been thinking about since OSBC, which relates to your post…

      We’ve all heard about how much open source Google uses to run its business. We also know that Google pays (some of) its employees to work on OSS projects as part of their day jobs. But what of all the OSS changes that Google makes and does not contribute back to the community? Does that make Google a ‘bad’ OSS company? How can they be ‘bad’ when they’re using OSS in a way that the OSS license allows??? But really, we all know that Google can’t be bad/evil, right 🙂

    • Roberto Galoppini 11:53 am on May 31, 2007 Permalink

      @Stefano Maffulli: Stefano, I couldn’t manage to comment on your post (maybe it is high time to become an FSFE Fellow in order to? ;-), so I am writing to you here for the time being. Mission and CSR might help, but my guess is that it is more effective to judge firms by their actions, even if it requires some effort.

      @Open Source Solutions: I hosted Carlo’s post as a guest even though I previously took a completely different position on the matter, and now I am seriously reconsidering it all.

      @Savio: you are raising the same point I discussed earlier with Carlo, amazing! 😉 We all know Google is taking advantage of the GPL loophole, but it is also true that it contributes to many projects, and we can’t forget the Google Summer of Code either.

      I think we should start creating categories reflecting the corporate-community relationship, and also the “old” “externally funded”/“internally funded” models. But all these (complex) distinctions might bring more confusion than clarity, I am afraid.

  • Carlo Daffara 5:11 pm on May 21, 2007 Permalink | Reply  

    Open Source Adoption: OpenTTT, testing the IRC approach on open source 

    Choosing the best open source products is considered one of the biggest challenges in open source adoption. Software selection costs are so high that specialized consulting companies do it as their main job; see Optaros and SpikeSource, just to name two. Why is it so difficult?

    Choose by Dovaneh

    There are many reasons:

    • there is no single place to search for OSS (SourceForge hosts a significant percentage of projects, but some merely started there and then moved elsewhere; there are many other forge-like sites and many software listing sites like Freshmeat);
    • there is no consistency in software evaluation; even models like OSMM and BRR have many components based on human evaluation, and some more recent approaches even change the evaluation model and forms depending on the software area or market;
    • there are many excellent projects that are not widely known; a great example is the large, sophisticated packages in the scientific software area, virtually unknown outside of a small community.

    This means that only a few projects get any visibility, and many useful tools are not employed even when they would be a perfect match for a company. Based on this consideration, the EU funded a small project called OpenTTT, which tries to apply a “matching model” to help in the adoption process.

    It works like this:

    • a group of companies and public administrations is audited, and their needs in terms of software and IT functionality are collected in structured forms (using a modification of the original IRC forms, called TRs, or technology requests);
    • in parallel, OSS companies and developers are invited to fill in a complementary form indicating which projects they offer services on;
    • requests are grouped, whenever possible, to find a single match for multiple companies;
    • a manual matching process is performed to find potential matches between requests and offers; the matchmaking is then perfected in one-to-one personal meetings at special “matchmaking” events;
    • one such event was recently held at CeBIT and another at the CONFSL conference.

    An interesting twist of OpenTTT, which we hope to start soon, is the “club” concept. After all matches are performed, we expect that some needs will go unfulfilled; in this case we will try to find a “near match”, group users with the same need into user clubs, and forward to the groups of developers the information that an unfulfilled need has been identified. After this, users and developers or companies are free to negotiate a commercial agreement, for example for implementing the missing pieces.
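    The matching and club steps can be sketched with toy data. Everything below is hypothetical and heavily simplified: real technology requests are structured documents, reduced here to sets of keyword functionalities, and the requesters, vendors and feature names are invented.

```python
# Toy sketch of the OpenTTT matching step: group identical requests,
# match groups against offers, and collect unmatched needs into "clubs".
from collections import defaultdict

requests = {  # hypothetical technology requests: requester -> needed features
    "town-hall-A": {"document-management", "digital-signature"},
    "town-hall-B": {"document-management", "digital-signature"},
    "sme-C": {"crm", "invoicing"},
}
offers = {  # hypothetical offers: provider -> features it covers
    "vendor-X": {"document-management", "digital-signature", "workflow"},
}

# 1. Group identical requests so one match can serve several requesters.
groups = defaultdict(list)
for who, needs in requests.items():
    groups[frozenset(needs)].append(who)

# 2. Match each group against the offers; unmatched groups become clubs.
matches, clubs = [], []
for needs, members in groups.items():
    providers = [p for p, feats in offers.items() if needs <= feats]
    (matches if providers else clubs).append((sorted(members), providers))

print(matches)  # [(['town-hall-A', 'town-hall-B'], ['vendor-X'])]
print(clubs)    # [(['sme-C'], [])]
```

    The unmatched group is exactly the “club” of the post: a set of users sharing an unfulfilled need that can be forwarded, as a single negotiating party, to developers.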

    See a chart depicting the process.

    I hope that this model can become the basis for a more structured and “grassroots” model of interaction between users and developers, not least because it explicitly recognizes that OSS is not about price (at least not only about that), but also about flexibility and matching user needs in a better way.

    Technorati Tags: OpenTTT, confsl, best practice, IRC

  • Carlo Daffara 6:40 am on May 18, 2007 Permalink | Reply  

    Open Source Blueprints: replicable experiments in open source adoption 

    Is there a better way to help companies and public administrations in the OSS adoption process? Most adoptions follow a few different paths: grassroots adoption, consultancy intervention, or an attempt to replicate a known success story. In this sense, the concept of a “best practice” can be considered a way to tell others of something that worked well, but in the past it has rarely been enough to replicate the experience.

    Best Practices by andai

    So, considering that most public administrations are pushing for initiatives to help the adoption process (even if that mainly means creating another forge, like the Italian one just launched), I would like to propose the concept of the “implementation blueprint” as an extension of the best practice model. The idea came out of our experience in the OpenTTT project, which is trying to leverage the technology transfer process used in the IRC network to facilitate the match between technology demand and offer in OSS.

    A blueprint is a replicable and complete description of a set of tools and processes that satisfied a specific need. In this sense, a complete blueprint must contain the following items:

    • a complete description of the needs; this should include a complete textual description of what was requested, including mandatory and secondary requests
    • a description of the context of the needs, for example within a public administration with specific legal requirements, an SME, etc.
    • the set of technologies used
    • the process implemented
    • criticalities or additional constraints that appeared during the implementation process
    • an estimate of the human effort invested in the migration process.

    Why so much detail? Because replicability requires a significant amount of information, not only about the technological means, but also about how those tools were used to create a complete solution.
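    As a thought experiment, the item list above could be captured in a structured record, so that blueprints become comparable and searchable. The schema, field names and the example entry below are my own invention, not something OpenTTT prescribes:

```python
# A possible structured encoding of an "implementation blueprint".
# Field names are hypothetical; the post lists the required content,
# not a concrete schema.
from dataclasses import dataclass, field

@dataclass
class Blueprint:
    needs_description: str          # what was requested (mandatory and secondary)
    context: str                    # e.g. public administration with legal requirements, SME
    technologies: list[str] = field(default_factory=list)
    process: str = ""               # how the implementation was carried out
    criticalities: list[str] = field(default_factory=list)  # constraints that appeared
    effort_person_months: float = 0.0  # estimated human effort for the migration

# A made-up example entry:
bp = Blueprint(
    needs_description="Replace proprietary groupware with an OSS stack",
    context="Small municipality, national digital-signature requirements",
    technologies=["Linux", "OpenLDAP", "an OSS groupware suite"],
    process="Pilot in one department, then staged rollout",
    criticalities=["staff training", "legacy data migration"],
    effort_person_months=6.0,
)
print(bp.technologies)  # ['Linux', 'OpenLDAP', 'an OSS groupware suite']
```

    A fixed record like this is what would let a wiki of experiences be filtered by context and technology, instead of read one story at a time.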

    As these mapping efforts are already under way – for example, the Italian Open Source Observatory has a listing section, called “vetrina”, that provides short summaries of public administrations’ experiences with open source – it may be interesting to propose a collaborative writing process, maybe wiki-based, to turn nice-to-know stories into replicable experiences.

    Technorati Tags: Open Source Observatory, OpenTTT, best practice

  • Carlo Daffara 6:51 pm on May 15, 2007 Permalink | Reply  

    Trust networks, consultancies, and why proprietary market leaders are still leaders 

    Expanding on the idea of peer conversations as a basis for IT decisions, I would like to elaborate a little on why I believe this trend will probably continue, and lead to some unexpected results.

    Let’s start by thinking as a CIO who has to decide on a new technology, or on integrating a new software system into the company’s infrastructure. The only thing the CIO knows is that creating software from scratch is costly and requires significant ongoing maintenance, so she shifts the decision to a software platform from some vendor, and seeks advice from a company that may provide the necessary integration.

    Using this limited information, what the CIO knows is that:

    • there is a large number of potential platforms to choose from;
    • some may be more appropriate than others, and choosing the wrong one may cause significant delays and added cost;
    • just browsing through the advertising material is not sufficient to choose in an appropriate way;
    • the long-term viability of a vendor can only be guessed at.

    So, what is the best strategy? We can try to imagine what a perfectly rational CIO would do: she would create a probability tree and try to estimate the potential events, their probabilities, and their impact. So, for example, if we choose by ourselves, the probabilities may be:


    In this scenario, the CIO has to give an initial estimate of the probability of succeeding. How can she do it? By looking at similar tasks, for example. As most people use Microsoft, or IBM, or SAP, she is fairly confident that she can use those too, and as those companies are still alive, they are probably doing it right. This is of course a false assumption, as there is limited information on failed or delayed projects (outside the largest ones, like some government IT nightmares), but it is the only information the CIO does have. Given this information, she knows that by choosing wisely, the potential cost with vendor A is 1.5, with vendor B is 2.4, with vendor C is 4. But she does not know if the selection is appropriate, or if all the relevant vendors have been included in the list.
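    The probability-tree arithmetic itself is simple, and can be sketched in a few lines. The vendors and the probability/cost pairs below are hypothetical, chosen only to illustrate the expected-cost calculation, not to reproduce the figures above:

```python
# Expected-cost comparison over a simple probability tree.
# All probabilities and relative costs here are hypothetical illustrations.

def expected_cost(outcomes):
    """outcomes: list of (probability, relative_cost) pairs for one vendor."""
    assert abs(sum(p for p, _ in outcomes) - 1.0) < 1e-9, "probabilities must sum to 1"
    return sum(p * c for p, c in outcomes)

vendors = {
    # (probability, relative cost) for: on time, delayed, failed and redone
    "A": [(0.80, 1.0), (0.15, 2.0), (0.05, 4.0)],
    "B": [(0.50, 1.5), (0.30, 3.0), (0.20, 5.0)],
    "C": [(0.40, 2.0), (0.30, 4.0), (0.30, 6.0)],
}

ranked = sorted(vendors, key=lambda v: expected_cost(vendors[v]))
for v in ranked:
    print(v, round(expected_cost(vendors[v]), 2))  # A 1.3, B 2.65, C 3.8
```

    In these terms the CIO’s real problem is not the arithmetic, but estimating the probabilities in the first place, which is exactly where the information shortage bites.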

    We have also not considered what happens after the end of the project, like what happens if the company leaves the market, or decides to change the platform without giving enough time for a migration strategy, and so on; but we will leave this for a later post.

    Now, let’s say that the CIO has already tried some projects, and discovered that she is unable to estimate probabilities with reasonable accuracy. At this point she would probably go to a consultancy, that is, an independent party with better information on the products, one that has demonstrated the ability to select the appropriate probabilities more accurately. This is advantageous as long as the consultancy has an information advantage over the CIO; the price she pays buys a commensurate reduction in the risks associated with the project.

    But what happens when the consultancy seems no more able than the CIO to select the platform, or when it is suspected that its suggestions are not entirely independent? Then the CIO has no alternative but to stay as much as possible with the “tried and tested”, and hope that everything will continue to be fine.

    What happened recently? The change is that the idea of openness and the availability of open forums allowed users to exchange information (sometimes even anonymously), giving the CIO insight into what really works and what does not. This first-hand information is, for example, what allowed many open source server projects to be deployed in a grass-roots fashion: system administrators were exchanging information about them, and the best projects succeeded. Now this process is starting to be used at higher levels, and this goes back to the death of generalist conferences: as those do not allow for bilateral information exchange, users started feeling they were no longer useful when compared to the web, Second Life, traditional marketing and so on.

    So we suppose that users (CIOs) are more interested in conversations. But can a CIO base her own opinion on talking with strangers? The reality is that, in a way similar to how Google’s PageRank adjusts relevance, the user networks created on blogs, digg-like social sites or unconferences adjust themselves for relevance, and allow trust to emerge from seemingly untrusted parties.

    The concept is simple: let’s say a user writes on a blog about his experience with a product, and others read about it. Around this post many additional links may be created, some criticizing, some praising the text; and eventually, some users that share information often may become “daily reading material”. The usefulness and reliability of the source can easily be inferred by reading the text itself, judging whether the reasoning or the experience seems reasonable, and seeing how others react to the post.

    While it is imaginable that one blogger may be paid to talk in a positive way about a product, it is difficult to imagine that *every* user is biased or unreliable, and we can read and verify even the dissenting views with ease. This way, “reliable” writers and experts can emerge for free, and the CIO can verify everything without paying a consultant to get the same information. Of course this does not mean that errors do not happen; only that errors are public, and that everyone can check any step or any piece of information against public sources.
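    To make the PageRank analogy concrete, here is a toy rank iteration over an invented network of bloggers. The names and links are hypothetical; the only point is that repeated endorsement concentrates rank on a writer without any central authority:

```python
# Toy PageRank-style iteration over a hypothetical blogger link graph,
# illustrating how "trusted" voices can emerge from link structure alone.

links = {  # who links to whom (invented network)
    "alice": ["dana"],
    "bob": ["dana", "alice"],
    "carol": ["dana"],
    "dana": ["alice"],
}

def pagerank(links, damping=0.85, iterations=50):
    nodes = list(links)
    rank = {n: 1.0 / len(nodes) for n in nodes}
    for _ in range(iterations):
        # Teleport share plus damped redistribution along outgoing links.
        new = {n: (1 - damping) / len(nodes) for n in nodes}
        for src, targets in links.items():
            for t in targets:
                new[t] += damping * rank[src] / len(targets)
        rank = new
    return rank

ranks = pagerank(links)
best = max(ranks, key=ranks.get)
print(best)  # dana, the most-endorsed writer
```

    No one voted for dana explicitly; the rank emerges from who finds whom worth linking, which is the mechanism the post ascribes to blogs and digg-like sites.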

    This is the real value arising from “web2.0” networks: the spontaneous creation of networks of peers that can be trusted thanks to their transparency and willingness to cooperate. I can only guess that this form of value will probably not be judged positively by sellers, as it negates some lock-in advantages (the push for unified single-company platforms, for example); but it may be the only potential way out of a “lemon market”, giving back to the user the power to choose among products in an unbiased way.

    Technorati Tags: trust networks, peer discovery, open source

  • Carlo Daffara 7:03 pm on May 14, 2007 Permalink | Reply  

    Conferences, knowledge dissemination and the discovery of peers 

    As we have seen, the traditional process used by companies to disseminate information and collect potential customers is becoming less and less useful; and it is just the beginning of an overall transformation of how companies look at external information sources (like consulting companies).

    At the beginning of the commercial computer era, most users were connected through user clubs, since most software was developed in-house and the software market was still in its infancy. Groups like SHARE, the first Unix communities, VAX user groups and such provided the essential knowledge technicians needed, and were centered on the idea that software and hardware vendors were few, and user experiences were centered on real and concrete evidence.

    Unconference by MichaelBee

    With the consolidation of the shrinkwrapped software market and the multiplication of deployable technologies, the need for direction and information could no longer be satisfied by user conferences, and the consultancies were born: fundamentally, people with deep knowledge of a specific sector, reselling this knowledge to reduce the risk of implementing a new technology, or the time necessary to implement it. This period marked the beginning of comparison tools (like the infamous Quadrant), necessary in a world where one solution excluded all the others.

    Open standards, open source and the substantial opening of IT architectures changed everything again; this, plus the fact that consultancies were no longer current or reliable on trends that change in a very short time (does anyone remember the “push web” craze? the original Microsoft internet killer, Microsoft Network? WAP?) and were found to be not so impartial after all.

    This void is being filled by a new generation of knowledge disseminators, be they small and efficient consultancies like RedMonk (which show that openness can be effective) or vertical conferences, which are less trade shows and more conversations. This resurgence of exchanging information as peers is what is really innovative, or maybe a return to the roots: customers are being treated less as passive suppliers of money, and more as partners in a long-term strategy, in a way that is strikingly similar to the kind of partnership that OSS companies create with their customers.

    Technorati Tags: Open Source conference, peers discovery, redmonk, knowledge dissemination

  • Carlo Daffara 2:03 pm on April 6, 2007 Permalink | Reply  

    Open Source Business Models: a Taxonomy of Open Source Firms’ business models 

    Within the context of the FLOSSMETRICS project we are performing a study of the business models adopted by companies that are leveraging FLOSS source code, and of how the model changes with respect to licenses and commercialization approaches. In this post I present a draft of the results for 80 FLOSS-based companies and their business models, compiled using only publicly available data. Feedback and suggestions are welcome!

    Practical taxonomy by ellen’s attic


    An initial list of 120 companies was prepared during the first two months of 2007 using some popular open source news websites as sources, like FreshMeat, Slashdot, OSNews, LinuxToday and NewsForge, and some blogs devoted to FLOSS business models, like those of Matt Asay, Fabrizio Capobianco and Roberto Galoppini. Additional information was retrieved from Google searches. This list was further refined by eliminating companies that were not really adopting FLOSS, even under a very relaxed definition. Specifically, any company that allowed source code access only to non-commercial users, or that did not allow redistribution, was dropped from the list; companies for which no information was available, or for which no clear product or service was identifiable, were likewise eliminated. One of the companies included (SourceForge, from the OSTG group) is not open source in itself, but represents an example of an “ancillary” model, as the site hosts more than 100,000 open source projects and provides supporting services like mailing lists, source code versioning systems and file distribution. Also, companies that make significant OSS contributions, but for which FLOSS is not the core business model, were not included (this excludes, for example, IBM, HP and Sun: all important FLOSS contributors, but for which open source software is just one of several revenue streams).


    The final result is summarized in a table (pdf); the six main clusters identified are:

    Twin licensing: the same software code distributed under the GPL and a commercial license. This model is mainly used by producers of developer-oriented tools and software, and works thanks to the strong coupling clause of the GPL, which requires derivative works, or software directly linked to it, to be covered under the same license. Companies not willing to release their own software under the GPL can buy a commercial license that is, in a sense, an exception to the binding clause; those that value the “free as in speech” idea of free/libre software see this as a good compromise between helping those that abide by the GPL and receive the software for free (and make their own software available as FLOSS) and benefiting, through the commercial license, from those that want to keep their code proprietary. The downside of twin licensing is that external contributors must accept the same licensing regime, and this has been shown to reduce the volume of external contributions (which become mainly limited to bug fixes and small additions).

    Split OSS/commercial products: this model distinguishes between a basic FLOSS version and a commercial version based on the libre one, but with the addition of proprietary plugins. Most companies adopt the Mozilla Public License, as it explicitly allows this form of intermixing and allows for much greater participation from external contributors, since no acceptance of double licensing is required. The model has the intrinsic downside that the FLOSS product must be valuable enough to be attractive to users, but must not be complete enough to compete with the commercial one. This balance is difficult to achieve and maintain over time; also, if the software is of large interest, developers may try to complete the missing functionality in a purely open source way, thus reducing the attractiveness of the commercial version.

    Badgeware: a recent reinvention/extension of an older license constraint, usually based on the Mozilla Public License with the addition of a “visibility constraint”: the non-removability of visible trademarks or elements from the user interface. This allows the company to leverage trademark protection, and lets the original developers receive recognition even if the software is resold through independent resellers.

    Product specialists: companies that created or maintain a specific software project and use a pure FLOSS license to distribute it. The main revenues come from services like training and consulting (the “ITSC” class), following the original “best code here” and “best knowledge here” models of the original EUWG classification. The model leverages the commonly held assumption that the most knowledgeable experts on a piece of software are those who developed it; this way, services can be provided with a limited marketing effort by exploiting the free redistribution of the code. The downside of the model is the limited barrier to entry for potential competitors, as the only investment needed is the acquisition of specific skills and expertise on the software itself.

    Platform providers: companies that provide selection, support, integration and services on a set of projects that collectively form a tested and verified platform. In this sense, even Linux distributions were classified as platforms; the interesting observation is that these distributions are licensed, for a significant part, under pure FLOSS licenses to maximize external contributions, and leverage copyright protection to prevent outright copying but not “cloning” (the removal of copyrighted material like logos and trademarks to create a new product). The main value proposition comes in the form of guaranteed quality, stability and reliability, and the certainty of support for business-critical applications.

    Selection/consulting companies: companies in this class are not strictly developers, but provide consulting and selection/evaluation services on a wide range of projects, in a way that is close to the analyst role. These companies tend to have very limited impact on FLOSS communities, as the evaluation results and the evaluation process are usually proprietary assets.

    The remaining companies are too few in number to allow for any extrapolation, but they do show that non-trivial business models may be found in ancillary markets. For example, the Mozilla Foundation obtains a non-trivial amount of money from a search engine partnership with Google (an estimated $72M in 2006), while SourceForge/OSTG receives the majority of its revenues from e-commerce sales on the affiliated ThinkGeek site.


    • Seth Grimes 6:53 pm on May 18, 2007 Permalink


      Looking at http://www.robertogaloppini.net/documents/businessmodels.pdf

      – I believe that EnterpriseDB does not provide ANY OSS. They sell only closed-source extensions to PostgreSQL.

      – Given that you have SugarCRM, why not also list CentricCRM, which provides a good contrast?

      – And given Pentaho & JasperSoft, how about SpagoBI or all of Spago?

      – If you’re going to list Red Hat, then you should list Novell rather than SuSE Linux.

      – I’d suggest that “dual licensing” is a better term than “twin licensing.”



    • Carlo Daffara 2:15 pm on May 21, 2007 Permalink

      Seth: many thanks for your comments. On EnterpriseDB, the reason for inclusion is related to how we evaluate “open source” companies: if a company sponsors, directly or indirectly, an open source project that is the basis of its work, then we consider it a “marginal” open source company. The inclusion of EnterpriseDB is related to its direct funding of most PostgreSQL developers, through employment. In this sense, while not directly “selling” an open source version of PostgreSQL, they are creating a market model similar to the split OSS/commercial one.
      On Novell/SuSE you are right; the longer title was “Novell SuSE Linux”, to distinguish it from Novell’s other activities, and it simply got cropped.
      CentricCRM is simply not open source at all; the license explicitly states that “You may not redistribute the code, and you may not sublicense copies or derivatives of the code, either as software or as a service.”, and as such it clearly does not meet the definition of open source software.
      As for SpagoBI, Engineering seems at the moment to be mainly testing the waters with its OSS offer; I will wait a little to see if I can obtain balance sheet data on how much it earns through OSS offers.

    • James Dixon 4:21 am on May 22, 2007 Permalink

      That is a lot of research.

      If you are interested, I have developed a model to describe the open source model used by companies that write the majority of the code (JBoss, MySQL, Alfresco, Pentaho, SugarCRM etc.).


      James Dixon
      Chief Geek / CTO Pentaho

    • Martin 6:19 pm on May 31, 2007 Permalink

      James… I love the beekeeper analogy. The paper has helped to crystallise my own thoughts on successful software projects.

    • Roberto Galoppini 6:53 pm on May 31, 2007 Permalink

      Hi Martin,

      I also enjoyed the metaphor, really amusing.
      Quoting the observation from your comment that really got your attention:

      But one observation really got my attention. In POSS projects (or even FLOSS projects), the end user (/customer) is engaged at a much earlier stage in the process, thereby ensuring that design defects and unexpected use cases are brought to surface before it is too late.

      I don’t believe that listening to users is something unique to FLOSS; Microsoft and many other proprietary vendors listen too, sometimes even more than some OS firms (just have a look at many OS products’ forums and you’ll see for yourself!).

      OS applications’ ecosystems? Maybe, but they can be effective only under certain circumstances; definitely not an easy game to play, though.
