Full article title: Generalized Procedure for Screening Free Software and Open Source Software Applications
Author(s): Joyce, John
Author affiliation(s): Arcana Informatica; Scientific Computing
Primary contact: Email:
Year published: 2015
Distribution license: Creative Commons Attribution-ShareAlike 4.0 International
Abstract
Free Software and Open Source Software projects have become a popular alternative in both scientific research and other fields. However, selecting the optimal application for use in a project can be a major task in itself, as the list of potential applications must first be identified and screened to determine promising candidates before an in-depth analysis of systems can be performed. To simplify this process, we have initiated a project to generate a library of in-depth reviews of Free Software and Open Source Software applications. As a preliminary to this project, a review of the evaluation methods available in the literature was performed. As we found no single method that stood out, we synthesized a general procedure, drawing on a variety of available sources, for screening a designated class of applications to determine which ones to evaluate in more depth. In this paper, we examine a number of currently published processes to identify their strengths and weaknesses. By selecting from these processes, we synthesize a proposed screening procedure to triage available systems and identify those most promising to pursue. To illustrate the functionality of this technique, the screening procedure is executed against a selected class of applications.
Introduction
There is much confusion regarding Free Software and Open Source Software, and many people use the terms interchangeably; however, to some, the connotations associated with the terms are highly significant. So perhaps we should start with an examination of the terms to clarify what we are attempting to screen. While there are many groups and organizations involved with Open Source software, two of the main ones are the Free Software Foundation (FSF) and the Open Source Initiative (OSI).
When discussing Free Software, we are not explicitly discussing software for which no fee is charged; rather, we are referring to "free" in terms of liberty. To quote the Free Software Foundation (FSF)[1]:
A program is free software if the program's users have the four essential freedoms:
- The freedom to run the program as you wish, for any purpose (freedom 0).
- The freedom to study how the program works, and change it so it does your computing as you wish (freedom 1). Access to the source code is a precondition for this.
- The freedom to redistribute copies so you can help your neighbor (freedom 2).
- The freedom to distribute copies of your modified versions to others (freedom 3). By doing this you can give the whole community a chance to benefit from your changes. Access to the source code is a precondition for this.
This does not mean that a program is provided at no cost, or gratis, though some of these freedoms imply that it may be. In the FSF's analysis, any application that does not conform to these freedoms is unethical. There is also 'free software' or 'freeware' that is given away at no charge, or gratis, but without the source code; this would not be considered Free Software under the FSF definition.
The Open Source Initiative (OSI) was originally formed to promote Free Software, which it referred to as Open Source Software (OSS) to make the concept sound more business friendly. The OSI defines Open Source Software as any application that meets the following 10 criteria, which it based on the Debian Free Software Guidelines[2]:
- Free redistribution
- Source code included
- Must allow derived works
- Must preserve the integrity of the author's source code
- License must not discriminate against persons or groups
- License must not discriminate against fields of endeavor
- Distribution of licenses
- License must not be specific to a product
- License must not restrict other software
- License must be technology neutral
Open Source Software adherents take what they consider the more pragmatic view, focusing on the license requirements and putting significant effort into convincing commercial enterprises of the practical benefits of open source, meaning the free availability of application source code.
In an attempt to placate both groups when discussing the same software application, the term Free/Open Source Software (F/OSS) was developed. Since the term "free" still tended to confuse some people, the term "libre," which connotes freedom, was added, resulting in the term Free/Libre Open Source Software (FLOSS). If you perform a detailed analysis of the full specifications, you will find that all Free Software fits the Open Source Software definition, while not all Open Source Software fits the Free Software definition. However, any Open Source Software that is not also Free Software is the exception rather than the rule. As a result, you will find these acronyms used almost interchangeably, but there are subtle differences in meaning, so stay alert. In the final analysis, the software license that accompanies the software is what you legally have to follow.
The reality is that since both groups trace their history back to the same origins, the practical differences between an application being Free Software or Open Source are generally negligible. Keep in mind that the above descriptions are to some degree generalizations, as both organizations are involved in multiple activities. There are many additional groups interested in Open Source for a wide variety of reasons. However, this diversity is also a strong point, resulting in a vibrant and dynamic community. You should not allow the difference in terminology to be divisive. The fact that all of these terms can be traced back to the same origin should unite us.[3] In practice, many of the organizations' members will use the terms interchangeably, depending on the point that they are trying to get across. With in excess of 300,000 FLOSS applications currently registered on SourceForge.net[4] and over 10 million repositories on GitHub[5], there are generally multiple options available for any class of application, be it a Laboratory Information Management System (LIMS), an office suite, a database, or a document management system. Presumably you have gone through an assessment of the various challenges of using an Open Source application[6] and have decided to move ahead with selecting an appropriate application. The difficulty now becomes selecting which application to use. While there are multiple indexes of FOSS projects, these are normally just listings of the applications with a brief description provided by the developers, with no indication of a project's vitality and no independent evaluation.
What is missing is a catalog of in-depth reviews of these applications, eliminating the need for each group to go through the process of developing a list of potential applications, screening all available applications, and performing in-depth reviews of the most promising candidates. Once a tentative selection has been made, the organization will still need to perform its own testing to confirm that the selected application meets its specific needs, but there is no reason for everyone to go through the tedious process of identifying projects and weeding out the untenable ones.
Illustration 1: This diagram, originally by Chao-Kuei and updated by several others since, explains the different categories of software. It is available as a Scalable Vector Graphic and as an XFig document, under the terms of any of the GNU GPL v2 or later, the GNU FDL v1.2 or later, or the Creative Commons Attribution-Share Alike v2.0 or later.
The primary goal of this document is to describe a general procedure capable of being used to screen any selected class of software applications. The immediate concern is with screening FLOSS applications, though the process can be adjusted to allow at least a rough cross-comparison of FOSS and commercial applications. To that end, we start with an examination of published survey procedures. We then combine a subset of standard software evaluation procedures with recommendations for evaluating FLOSS applications. Because it is designed to screen such a diverse range of applications, the procedure is by necessity very general. However, as we move through the steps of the procedure, we will describe how to tune the process for the class of software that you are interested in.
You can also set aside any arguments regarding selecting between FLOSS and commercial applications. In this context, "commercial" refers to the marketing approach, not to the quality of the software. Many FLOSS applications are of comparable, if not superior, quality to products that are traditionally marketed and licensed. Wheeler discusses this issue in more detail, showing that by many definitions FLOSS is commercial software.[7]
The final objective of this process is to document a procedure that can then be applied to any class of FOSS applications to determine which projects in the class are the most promising to pursue, allowing us to expend our limited resources most effectively. As the information available for evaluating FOSS projects is generally quite different from that available for commercially licensed applications, this evaluation procedure has been optimized to best take advantage of this additional information.
Results
Literature review
A search of the literature returns thousands of papers related to Open Source software, but most are of limited value with regard to the scope of this project. The need for a process to assist in selecting between Open Source projects is mentioned in a number of these papers, and there appear to be over a score of different published procedures. Regrettably, none of these methodologies appears to have gained large-scale support in the industry. Stol and Babar have published a framework for comparing evaluation methods targeting Open Source software and include a comparison of 20 of them.[8] They noted that web sites that simply consisted of a suggestion list for selecting an Open Source application were not included in this comparison. This selection difficulty is nothing new with FLOSS applications. In their 1994 paper, Fritz and Carter review over a dozen existing selection methodologies, covering their strengths, weaknesses, the mathematics used, and other factors involved.[9]
Table 1: Comparison frameworks and methodologies for the examination of FLOSS applications, extracted from Stol and Babar.[8] The selection procedure is described in Stol and Babar's paper. 'Year' indicates the date of publication; 'Orig.' indicates whether the described process originated in industry (I) or research (R); 'Method' indicates whether the paper describes a formal analysis method and procedure (Yes) or just a list of evaluation criteria (No).
Extensive comparisons between some of these methods have also been published, such as Deprez and Alexandre's comparative assessment of the OpenBRR and QSOS techniques.[14] Wasserman and Pal have also published a paper under the title Evaluating Open Source Software, which appears to be more of an updated announcement and in-depth description of the Business Readiness Rating (BRR) framework.[15] Jadhav and Sonar have also examined the issue of both evaluating and selecting software packages. They include a helpful analysis of the strengths and weaknesses of the various techniques.[16] Perhaps more importantly, they clearly point out that there is no common list of evaluation criteria. While the majority of the articles they reviewed listed the criteria used, Jadhav and Sonar indicated that these criteria frequently did not include a detailed definition, which required each evaluator to use their own, sometimes conflicting, interpretation.
Since the publication of Stol and Babar's paper, additional evaluation methods have been published. Of particular interest is a series of papers by Pani et al. describing their proposed FAME (Filter, Analyze, Measure and Evaluate) methodology.[17][18][19][20] In their Transferring FAME paper, they emphasized that the evaluation frameworks previously described in the published literature were often not easy to apply in real environments, as they were developed using an analytic research approach that incorporated a multitude of factors.[20]
Their stated design objective with FAME is to reduce the complexity of performing the application evaluation, particularly for small organizations. As they specify, "The goals of FAME methodology are to aid the choice of high-quality F/OSS products, with high probability to be sustainable in the long term, and to be as simple and user friendly as possible." They further state that "The main idea behind FAME is that the users should evaluate which solution amongst those available is more suitable to their needs by comparing technical and economical factors, and also taking into account the total cost of individual solutions and cash outflows. It is necessary to consider the investment in its totality and not in separate parts that are independent of one another."[20]
This paper breaks the FAME methodology into four activities:
- Identify the constraints and risks of the projects
- Identify user requirements and rank
- Identify and rank all key objectives of the project
- Generate a priority framework to allow comparison of needs and features
Their paper includes a formula for generating a score from the information collected; the evaluated system with the highest 'major score', Pjtot, is the one selected. While it is common practice to define an analysis process which condenses all of the information gathered into a single score, I strongly caution against blindly accepting such a score. FAME, as well as a number of the other assessment methodologies, is designed for iterative use. The logical purpose of this is to allow the addition of factors initially overlooked into your assessment, as well as to change the weighting of existing factors as you reevaluate their importance. However, this feature means that it is also very easy to unconsciously, or consciously, skew the results of the evaluation to select any system you wish. Condensing everything down into a single value also strips out much of the information that you have worked so hard to gather. Note that you can generate the same result score using significantly different input values. While of value, selecting a system based on just the highest score could potentially leave you with a totally unworkable system.
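To make this caution concrete, the following is a minimal sketch of a generic weighted-sum score of the kind these methodologies produce. The criteria, weights, and ratings are hypothetical; this is not the published FAME formula. It simply illustrates how two quite different evaluation profiles can collapse to the same aggregate value.

```python
# Minimal sketch of a generic weighted-sum evaluation score.
# NOTE: hypothetical criteria and weights; this is NOT the published FAME formula.

CRITERIA_WEIGHTS = {
    "functionality": 0.4,
    "documentation": 0.2,
    "community": 0.2,
    "security": 0.2,
}

def aggregate_score(ratings):
    """Collapse per-criterion ratings (0-10) into a single weighted total."""
    return sum(CRITERIA_WEIGHTS[c] * ratings[c] for c in CRITERIA_WEIGHTS)

# Two very different evaluation profiles...
system_a = {"functionality": 9, "documentation": 2, "community": 5, "security": 5}
system_b = {"functionality": 5, "documentation": 8, "community": 8, "security": 4}

# ...collapse to the same aggregate score, hiding the trade-offs between them.
print(round(aggregate_score(system_a), 1))  # 6.0
print(round(aggregate_score(system_b), 1))  # 6.0
```

The identical totals are exactly the information loss warned about above: the single number cannot tell you whether a candidate earned its score through strong functionality or strong documentation.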
Pani et al. also describe a FAMEtool to assist in this data gathering and evaluation.[19] However, a general web search, as well as a review of their FAME papers, revealed no indication of how to obtain this resource. While this paper includes additional comparisons with other FLOSS analysis methodologies, and there are some hints suggesting that the FAMEtool is being provided as a web service, I have found no URL specified for it. As of now, I have received no responses from the research team, via either e-mail or Skype, regarding FAME, the FAMEtool, or feedback on its use.
During this same time frame, Soto and Ciolkowski also published papers describing the QualOSS Open Source Assessment Model and compared it to a number of the procedures in Stol and Babar's table.[21][22] Their focus was primarily on three process perspectives: product quality, process maturity, and sustainability of the development community. Due to the lack of anything more than a rudimentary process perspective examination, they felt that the following OSS project assessment models were unsatisfactory: QSOS, CapGemini OSMM, Navica OSMM, and OpenBRR. They position QualOSS as an extension of the traditional CMMI and SPICE process maturity models. While there are multiple items in the second paper that are worth incorporating into an in-depth evaluation process, they do not seem suitable for what is intended as a quick survey.
Another paper, published by Haaland and Groven, also compared a number of Open Source quality models. To this paper's credit, the authors devoted a significant amount of space to discussing the different definitions of quality and how the target audience of a tool might affect which definition was used.[23] Like Stol and Babar, they listed a number of the quality assessment models to choose from, including OSMM, QSOS, OpenBRR, and others. For their comparison, they selected OpenBRR and QualOSS. They appear to have classified OpenBRR as a first-generation tool with a "User view on quality" and QualOSS as a second-generation tool with a "business point of view". An additional distinction is that OpenBRR is primarily a manual tool while QualOSS is primarily an automated tool. Their analysis clearly demonstrates the steps involved in using these tools and highlights where they are objective and where they are subjective. While they were unable to answer their original question as to whether the first- or second-generation tools did a better job of evaluation, to me they answered an even more important, but unasked, question. As they proceeded through their evaluation, it became apparent how much the questions defined in the methods could affect the results of the evaluations. Even though the authors might have considered the questions to be objective, I could readily see how some of these questions could be interpreted in alternate ways. My takeaway is an awareness of the potential danger of using rigid tools, as they can skew the accuracy of the evaluation results depending on exactly what you want the evaluated application to do and how you plan to use it. These models can be very useful guides, but they should not be used to replace a carefully considered evaluation, as there will always be factors influencing the selection decision which did not occur to anyone when the specifications were being written.
Hauge et al. have noted that despite the development of several normative methods of assessment, empirical studies have failed to show widespread adoption of these methods.[24] From their survey of a number of Norwegian software companies, they noticed a tendency for selectors to skip the in-depth search for what they call the 'best fit' application and fall back on what they refer to as a 'first fit'. This is an iterative procedure, with the knowledge gained from the failure of one set of component tests being incorporated into the evaluation of the next one. Their recommendation is for researchers to stop attempting to develop either general evaluation schemas or normative selection methods applicable to any software application and instead focus on identifying situationally sensitive factors which can be used as evaluation criteria. This is a very rational approach, as all situations, even when evaluating the same set of applications, are going to be different, because each user's needs are different.
Ayala et al. performed a study to try to determine more accurately why more people don't take advantage of the various published selection methodologies.[25] While they looked at a number of factors and identified several possible problems, one of the biggest factors was the difficulty in obtaining the information needed for the evaluation. Many of the projects they studied did not provide a number of the basic pieces of information required for the evaluation or, perhaps worse, required extensive examination of the project web site and documentation to retrieve the required information. From their paper, it sounded as if this issue was more of a communication breakdown than an attempt to hide any of the information, though that distinction does nothing to make the information more accessible.
In addition to the low engagement rates for the various published evaluation methods, another concern is the viability of the sponsoring organizations. One of the assessment papers indicated that the published methods with the smallest footprint, or the easiest to use, appeared to be FAME and OpenBRR. I have already mentioned my difficulty obtaining additional information regarding FAME, and OpenBRR appears to be even more problematic. BRR was first registered on SourceForge in September 2005[26], and an extensive Request for Comments from the founding members of the BRR consortium (SpikeSource, the Center for Open Source Investigation at Carnegie Mellon West, and Intel Corporation) was released.[10] In 2006, in contrast to typical Open Source development groups, the OpenBRR group announced the formation of an OpenBRR Corporate Community group. Peter Galli's story indicates that "the current plan is that membership will not be open to all."[27] He quotes Murugan Pal saying "membership will be on an invitation-only basis to ensure that only trusted participants are coming into the system." However, for some reason, at least some in the group "expressed concern and unhappiness about the idea of the information discussed not being shared with the broader open-source community."[27]
While the original Business Readiness Rating web site still exists, it is currently little more than a static web page.[28] Some of the original information posted on the site is still there; you just have to know its URL to access it, as the original links on the web site have been removed. Otherwise, you may have to turn to the Internet Archive to retrieve some of their documentation. The lack of any visible activity regarding OpenBRR prompted a blog post from one graduate student in 2012 asking "What happened to OpenBRR (Business Readiness Rating for Open Source)?"[29]
It appears that at some point, any development activity regarding OpenBRR was folded into OSSpal.[30] However, background information on this project is sparse as well. While the site briefly mentions that OSSpal incorporates a number of lessons learned from BRR, there is very little additional information regarding the group or the method's procedures. Their 'All Projects' tab provides a list of over 30 Open Source projects, but the majority simply show 'No votes yet' under the various headings. In fact, as of now, the only projects showing any input at all are Ubuntu and Mozilla Firefox.
Evaluation and selection recommendations
At this point, we'll take a step back from the evaluation methodology papers and examine some of the more general recommendations regarding evaluating and selecting FLOSS applications. The consistency of their recommendations may provide a more useful guide for an initial survey of FLOSS applications.
In TechRepublic, de Silva recommends 10 questions to ask when selecting a FLOSS application.[31] While he provides a brief discourse on each question in his article to ensure you understand its point, I've collected the 10 questions into the following list. Once we see what overlap, if any, exists among our general recommendations, we'll address some of the consolidated questions in more detail.
- Are the open source license terms compatible with my business requirements?
- What is the strength of the community?
- How well is the product adopted by users?
- Can I get a warranty or commercial support if I need it?
- What quality assurance processes exist?
- How good is the documentation?
- How easily can the system be customized to my exact requirements?
- How is this project governed and how easily can I influence the road map?
- Will the product scale to my enterprise's requirements?
- Are there regular security patches?
Similarly, in InfoWorld, Phipps lists seven questions you should have answered before even starting to select a software package.[32] His list of questions, pulled directly from his article, is:
- Am I granted copyright permission?
- Am I free to use my chosen business model?
- Am I unlikely to suffer patent attack?
- Am I free to compete with other community members?
- Am I free to contribute my improvements?
- Am I treated as a development peer?
- Am I inclusive of all people and skills?
This list of questions reflects a moderately different point of view, as it addresses not just someone selecting an Open Source system, but someone looking to be involved in its direct development. Padin, of 8th Light, Inc., takes the viewpoint of a developer who might incorporate Open Source software into their projects.[33] The list of criteria pulled directly from his blog includes:
- Does it do what I need it to do?
- How much more do I need it to do?
- Documentation
- Easy to review source code
- Popularity
- Tests and specs
- Licensing
- Community
Metcalfe of OSS Watch lists his top tips as[34]:
- Reputation
- Ongoing effort
- Standards and interoperability
- Support (Community)
- Support (Commercial)
- Version
- Version 1.0
- Documentation
- Skill setting
- Project Development Model
- License
In his LIMSexpert blog, Joel Limardo of ForwardPhase Technologies, LLC lists the following as components to check when evaluating an Open Source application[35]:
- Check licensing
- Check code quality
- Test setup time
- Verify extensibility
- Check for separation of concerns
- Check for last updated date
- Check for dependence on outdated toolkits/frameworks
Perhaps the most referenced of the general articles on selecting FLOSS applications is David Wheeler's "How to Evaluate Open Source Software / Free Software (OSS/FS) Programs."[36] The detailed functionality to consider will vary with the types of applications being compared, but there are a number of general features that are relevant to almost any type of application. While we will cover them in more detail later, Wheeler categorizes the features to consider as the following:
- System functionality
- System cost – direct and indirect
- Popularity of application, i.e. its market share for that type of application
- Varieties of product support available
- Maintenance of application, i.e., is development still taking place
- Reliability of application
- Performance of application
- Scalability of application
- Usability of application
- Security of application
- Adaptability/customizability of application
- Interoperability of application
- Licensing and other legal issues
While a hurried glance might suggest a lot of diversity in the features these various resources recommend, a closer look at what they are actually saying reveals a recurring set of concerns. The most significant differences between the suggested lists are due more to how broad a slice of the analysis process each author is considering, as well as which underlying features they are most concerned with.
With a few additions, the high-level screening template described in the rest of this communication is based on Wheeler's previously mentioned document describing his recommended process for evaluating open source software and free software programs. Structuring the items this way will make it easier to locate the corresponding sections in his document, which includes many useful specific recommendations, as well as a great deal of background information to help you understand the reasoning behind each topic. I highly recommend reading it and following up on some of the links he provides. I will also include evaluation suggestions from several of the previously mentioned procedures where appropriate.
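As a rough illustration of what such a screening template can look like in practice, the sketch below records one note per attribute category for each candidate and flags the attributes that still lack any finding. The field names and the example project are hypothetical; they follow the spirit of Wheeler's categories rather than his exact wording.

```python
# Hypothetical sketch of a high-level screening template organized around
# Wheeler-style attribute categories; names are illustrative assumptions.
from dataclasses import dataclass, field

SCREENING_ATTRIBUTES = [
    "functionality", "cost", "popularity", "support", "maintenance",
    "reliability", "performance", "scalability", "usability",
    "security", "adaptability", "interoperability", "licensing",
]

@dataclass
class CandidateScreening:
    """Notes gathered for one candidate application during the survey pass."""
    name: str
    notes: dict = field(default_factory=lambda: {a: "" for a in SCREENING_ATTRIBUTES})

    def unanswered(self):
        # Attributes still lacking any finding; useful for spotting gaps
        # before comparing candidates against one another.
        return [a for a, text in self.notes.items() if not text]

candidate = CandidateScreening("Example LIMS project")  # made-up project name
candidate.notes["licensing"] = "GPLv2; compatible with internal policy"
print(candidate.unanswered())
```

Keeping the raw notes per attribute, rather than a single rolled-up number, preserves exactly the detail that the earlier discussion of aggregate scores warned against discarding.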
Wheeler defines four basic steps to this evaluation process, as listed below:
- Identify candidate applications
- Read existing product reviews
- Compare attributes of these applications to your needs
- Analyze the applications best matching your needs in more depth
Wheeler summarizes this process with the acronym IRCA. In this paper, we will focus on the I, R, and C components of the process. To confirm the efficacy of this protocol, we will later apply it to several classes of Open Source applications and examine its output.
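A minimal sketch of how the I, R, and C steps might be chained for a survey pass is shown below. The record layout, filtering predicates, and example index are illustrative assumptions, not drawn from Wheeler's document.

```python
# Hypothetical sketch of the Identify / Read reviews / Compare (IRC) triage flow.
# The record layout and filter predicates are illustrative assumptions only.

def identify_candidates(directory_listings, application_class):
    # Step I: pull candidates of the desired class from an index such as
    # SourceForge or GitHub (represented here as a list of dicts).
    return [p for p in directory_listings if p.get("class") == application_class]

def has_independent_review(project):
    # Step R: keep projects for which at least one independent review exists.
    return len(project.get("reviews", [])) > 0

def meets_needs(project, must_have):
    # Step C: compare advertised attributes against the must-have requirements.
    return must_have.issubset(set(project.get("features", [])))

def shortlist(directory_listings, application_class, must_have):
    return [p["name"]
            for p in identify_candidates(directory_listings, application_class)
            if has_independent_review(p) and meets_needs(p, must_have)]

# Example: screen a made-up index for LIMS candidates.
index = [
    {"name": "ProjectA", "class": "LIMS", "reviews": ["r1"], "features": {"sample tracking"}},
    {"name": "ProjectB", "class": "LIMS", "reviews": [], "features": {"sample tracking"}},
]
print(shortlist(index, "LIMS", {"sample tracking"}))  # ['ProjectA']
```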
Identify needs
Realistically, before you can perform a survey of applications to determine which ones best match your needs, you must determine what your needs actually are. The product of determining these needs is frequently referred to as the User Requirements Specification (URS).[37][38] This document can be generated in several ways, including having all of the potential users submit a list of the functions and capabilities that they feel are important. While the requirements document can be created by a single person, it is generally best to make it a group effort, with multiple reviews of the draft document and input from all of the users who will be working with the application. The reason for this is to ensure that an important requirement is not missed. When a requirement is missed, it is frequently because the requirement is so basic that it never occurs to anyone that it specifically needs to be included in the requirements document. Admittedly, a detailed URS is not required at the survey level, but it is worth having, if only to identify, by their implications, other features that might be significant.
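As a simple sketch of the group-effort approach described above (an assumed workflow, not prescribed by the cited URS guides), requirement lists submitted by individual users can be merged into a draft tally, making it easier to spot items only one person mentioned and to query the team about "obvious" needs that nobody wrote down at all. The roles and requirements in the example are entirely hypothetical.

```python
# Hedged sketch: merge per-user requirement lists into a draft URS tally.
# The example roles and requirements are entirely hypothetical.
from collections import Counter

def draft_urs(submissions):
    """submissions maps a user/role name to the capabilities they requested."""
    tally = Counter(req.strip().lower()
                    for reqs in submissions.values()
                    for req in reqs)
    # Most-requested items first; singletons at the end deserve a second look
    # during the group review of the draft document.
    return tally.most_common()

example = {
    "lab manager": ["Sample tracking", "Audit trail"],
    "analyst": ["sample tracking", "Instrument data import"],
}
print(draft_urs(example))
# [('sample tracking', 2), ('audit trail', 1), ('instrument data import', 1)]
```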
References
- ↑ "What is free software?". GNU Project. Free Software Foundation, Inc. 2015. http://www.gnu.org/philosophy/free-sw.html. Retrieved 17 June 2015.
- ↑ "The Open Source Definition". Open Source Initiative. 2015. http://opensource.org/osd. Retrieved 17 June 2015.
- ↑ Schießle, Björn (12 August 2012). "Free Software, Open Source, FOSS, FLOSS - same same but different". Free Software Foundation Europe. https://fsfe.org/freesoftware/basics/comparison.en.html. Retrieved 5 June 2015.
- ↑ "RepOSS: A Flexible OSS Assessment Repository" (PDF). Northeast Asia OSS Promotion Forum WG3. 5 November 2012. http://events.linuxfoundation.org/images/stories/pdf/lceu2012_date.pdf. Retrieved 05 May 2015.
- ↑ Doll, Brian (23 December 2013). "10 Million Repositories". GitHub, Inc. https://github.com/blog/1724-10-millionrepositories. Retrieved 08 August 2015.
- ↑ Sarrab, Mohamed; Elsabir, Mahmoud; Elgamel, Laila (March 2013). "The Technical, Non-technical Issues and the Challenges of Migration to Free and Open Source Software" (PDF). IJCSI International Journal of Computer Science Issues 10 (2.3). http://ijcsi.org/papers/IJCSI-10-2-3-464-469.pdf.
- ↑ Wheeler, David A. (14 June 2011). "Free-Libre / Open Source Software (FLOSS) is Commercial Software". dwheeler.com. http://www.dwheeler.com/essays/commercial-floss.html. Retrieved 28 May 2015.
- ↑ 8.0 8.1 Stol, Klaas-Jan; Ali Babar, Muhammad (2010). "A Comparison Framework for Open Source Software Evaluation Methods". In Ågerfalk, P.J.; Boldyreff, C.; González-Barahona, J.M.; Madey, G.R.; Noll, J. Open Source Software: New Horizons. Springer. pp. 389–394. doi:10.1007/978-3-642-13244-5_36. ISBN 9783642132445.
- ↑ Fritz, Catherine A.; Carter, Bradley D. (23 August 1994). A Classification And Summary Of Software Evaluation And Selection Methodologies. Mississippi State, MS: Department of Computer Science, Mississippi State University. http://citeseerx.ist.psu.edu/viewdoc/summary?doi=10.1.1.55.4470.
- ↑ 10.0 10.1 "OpenBRR, Business Readiness Rating for Open Source: A Proposed Open Standard to Facilitate Assessment and Adoption of Open Source Software" (PDF). OpenBRR. 2005. http://docencia.etsit.urjc.es/moodle/file.php/125/OpenBRR_Whitepaper.pdf. Retrieved 13 April 2015.
- ↑ Wasserman, A.I.; Pal, M.; Chan, C. (10 June 2006). "The Business Readiness Rating: a Framework for Evaluating Open Source" (PDF). Proceedings of the Workshop on Evaluation Frameworks for Open Source Software (EFOSS) at the Second International Conference on Open Source Systems. Lake Como, Italy. pp. 1–5. Archived from the original on 11 January 2007. http://web.archive.org/web/20070111113722/http://www.openbrr.org/comoworkshop/papers/WassermanPalChan_EFOSS06.pdf. Retrieved 15 April 2015.
- ↑ Majchrowski, Annick; Deprez, Jean-Christophe (2008). "An Operational Approach for Selecting Open Source Components in a Software Development Project". In O'Connor, R.; Baddoo, N.; Smolander, K.; Messnarz, R.. Software Process Improvement. Springer. pp. 176–188. doi:10.1007/978-3-540-85936-9_16. ISBN 9783540859369.
- ↑ Petrinja, E.; Nambakam, R.; Sillitti, A. (2009). "Introducing the Open Source Maturity Model". ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development, 2009. IEEE. pp. 37–41. doi:10.1109/FLOSS.2009.5071358. ISBN 9781424437207.
- ↑ Deprez, Jean-Christophe; Alexandre, Simon (2008). "Comparing Assessment Methodologies for Free/Open Source Software: OpenBRR and QSOS". In Jedlitschka, Andreas; Salo, Outi. Product-Focused Software Process Improvement. Springer. pp. 189–203. doi:10.1007/978-3-540-69566-0_17. ISBN 9783540695660.
- ↑ Wasserman, Anthony I.; Pal, Murugan (2010). "Evaluating Open Source Software" (PDF). Carnegie Mellon University - Silicon Valley. Archived from the original on 18 February 2015. https://web.archive.org/web/20150218173146/http://oss.sv.cmu.edu/readings/EvaluatingOSS_Wasserman.pdf. Retrieved 31 May 2015.
- ↑ Jadhav, Anil S.; Sonar, Rajendra M. (March 2009). "Evaluating and selecting software packages: A review". Information and Software Technology 51 (3): 555–563. doi:10.1016/j.infsof.2008.09.003.
- ↑ Pani, F.E.; Sanna, D. (11 June 2010). "FAME, A Methodology for Assessing Software Maturity". Atti della IV Conferenza Italiana sul Software Libero. Cagliari, Italy.
- ↑ Pani, F.E.; Concas, G.; Sanna, D.; Carrogu, L. (2010). "The FAME Approach: An Assessing Methodology". In Niola, V.; Quartieri, J.; Neri, F.; Caballero, A.A.; Rivas-Echeverria, F.; Mastorakis, N. (PDF). Proceedings of the 9th WSEAS International Conference on Telecommunications and Informatics. Stevens Point, WI: WSEAS. ISBN 9789549260021. http://www.wseas.us/e-library/conferences/2010/Catania/TELE-INFO/TELE-INFO-10.pdf.
- ↑ 19.0 19.1 Pani, F.E.; Concas, G.; Sanna, S.; Carrogu, L. (August 2010). "The FAMEtool: an automated supporting tool for assessing methodology" (PDF). WSEAS Transactions on Information Science and Applications 7 (8): 1078–1089. http://www.wseas.us/e-library/transactions/information/2010/88-137.pdf.
- ↑ 20.0 20.1 20.2 Pani, F.E.; Sanna, D.; Marchesi, M.; Concas, G. (2010). "Transferring FAME, a Methodology for Assessing Open Source Solutions, from University to SMEs". In D'Atri, A.; De Marco, M.; Braccini, A.M.; Cabiddu, F.. Management of the Interconnected World. Springer. pp. 495–502. doi:10.1007/978-3-7908-2404-9_57. ISBN 9783790824049.
- ↑ Soto, M.; Ciolkowski, M. (2009). "The QualOSS open source assessment model measuring the performance of open source communities". 3rd International Symposium on Empirical Software Engineering and Measurement, 2009. IEEE. pp. 498-501. doi:10.1109/ESEM.2009.5314237. ISBN 9781424448425.
- ↑ Soto, M.; Ciolkowski, M. (2009). "The QualOSS Process Evaluation: Initial Experiences with Assessing Open Source Processes". In O'Connor, R.; Baddoo, N.; Cuadrado-Gallego, J.J.; Rejas Muslera, R.; Smolander, K.; Messnarz, R.. Software Process Improvement. Springer. pp. 105–116. doi:10.1007/978-3-642-04133-4_9. ISBN 9783642041334.
- ↑ Haaland, Kirsten; Groven, Arne-Kristian; Glott, Ruediger; Tannenberg, Anna (1 July 2010). "Free/Libre Open Source Quality Models - a comparison between two approaches" (PDF). 4th FLOSS International Workshop on Free/Libre Open Source Software. Jena, Germany. pp. 1–17. http://publications.nr.no/directdownload/publications.nr.no/5444/Haaland_-_Free_Libre_Open_Source_Quality_Models-_a_compariso.pdf. Retrieved 15 April 2015.
- ↑ Hauge, O.; Osterlie, T.; Sorensen, C.-F.; Gerea, M. (2009). "An empirical study on selection of Open Source Software - Preliminary results". ICSE Workshop on Emerging Trends in Free/Libre/Open Source Software Research and Development, 2009. IEEE. pp. 42-47. doi:10.1109/FLOSS.2009.5071359. ISBN 9781424437207.
- ↑ Ayala, Claudia; Cruzes, Daniela S.; Franch, Xavier; Conradi, Reidar (2011). "Towards Improving OSS Products Selection – Matching Selectors and OSS Communities Perspectives". In Hissam, S.; Russo, B.; de Mendonça Neto, M.G.; Kon, F.. Open Source Systems: Grounding Research. Springer. pp. 244–258. doi:10.1007/978-3-642-24418-6_17. ISBN 9783642244186.
- ↑ Chan, C.; enugroho; Wasserman, T. (17 April 2013). "Business Readiness Rating (BRR)". SourceForge. https://sourceforge.net/projects/openbrr/. Retrieved 21 April 2015.
- ↑ 27.0 27.1 Galli, Peter (24 April 2006). "OpenBRR Launches Closed Open-Source Group". eWeek. QuinStreet, Inc. http://www.eweek.com/c/a/Linux-and-Open-Source/OpenBRR-Launches-Closed-OpenSource-Group. Retrieved 13 April 2015.
- ↑ "Welcome to Business Readiness Rating: A FrameWork for Evaluating OpenSource Software". OpenBRR. Archived from the original on 24 December 2014. https://web.archive.org/web/20141224233009/http://www.openbrr.org/. Retrieved 14 April 2015.
- ↑ Arjona, Laura (6 January 2012). "What happened to OpenBRR (Business Readiness Rating for Open Source)?". The Bright Side. https://larjona.wordpress.com/2012/01/06/what-happened-to-openbrr-business-readiness-rating-for-open-source/. Retrieved 13 April 2015.
- ↑ "Welcome to OSSpal". OSSpal. http://osspal.org/. Retrieved 18 April 2015.
- ↑ Silva, Chamindra de (20 December 2009). "10 questions to ask when selecting open source products for your enterprise". TechRepublic. CBS Interactive. http://www.techrepublic.com/blog/10-things/10-questions-to-ask-when-selecting-open-source-products-for-your-enterprise/. Retrieved 13 April 2015.
- ↑ Phipps, Simon (21 January 2015). "7 questions to ask any open source project". InfoWorld. InfoWorld, Inc. http://www.infoworld.com/article/2872094/open-source-software/seven-questions-to-ask-any-open-source-project.html. Retrieved 10 April 2015.
- ↑ Padin, Sandro (3 January 2014). "How I Evaluate Open-Source Software". 8th Light, Inc. https://blog.8thlight.com/sandro-padin/2014/01/03/how-i-evaluate-open-source-software.html. Retrieved 01 June 2015.
- ↑ Metcalfe, Randy (1 February 2004). "Top tips for selecting open source software". OSSWatch. University of Oxford. http://oss-watch.ac.uk/resources/tips. Retrieved 23 March 2015.
- ↑ Limardo, J. (2013). "DIY Evaluation Process". LIMSExpert.com. ForwardPhase Technologies, LLC. http://www.limsexpert.com/cgi-bin/bixchange/bixchange.cgi?pom=limsexpert3&iid=readMore;go=1363288315&title=DIY%20Evaluation%20Process. Retrieved 07 February 2015.
- ↑ Wheeler, David A. (5 August 2011). "How to Evaluate Open Source Software / Free Software (OSS/FS) Programs". dwheeler.com. http://www.dwheeler.com/oss_fs_eval.html. Retrieved 19 March 2015.
- ↑ "User Requirements Specification (URS)". validation-online.net. Validation Online. http://www.validation-online.net/user-requirements-specification.html. Retrieved 08 August 2015.
- ↑ O'Keefe, Graham (1 March 2015). "How to Create a Bullet-Proof User Requirement Specification (URS)". askaboutgmp. http://www.askaboutgmp.com/296-how-to-create-a-bullet-proof-urs. Retrieved 08 August 2015.
Notes
This article has not officially been published in a journal. However, this presentation is faithful to the original paper, with only a few minor changes to presentation. This article is being made available for the first time under the Creative Commons Attribution-ShareAlike 4.0 International license, the same license used on this wiki.