Chris Cannam, 2010-09-20 12:54 PM


The code hosting problem

Assumptions found in my head

  • Audio and music research groups in institutions lack effective access to version control systems
    • This is certainly historically true of C4DM; what about other groups?
  • Researchers often want to share their code selectively with other researchers in the same field but in other institutions
    • Internal code hosting doesn't usually facilitate this
  • Individual researchers may be happy to host their code in existing public hosting services (e.g. SourceForge, Google Code), but their supervisors are likely to be less keen
    • Supervisors don't necessarily appreciate these services' requirement that everything should be open source, and it is hard for them to keep track of what work their students are producing
    • The opposite dynamic may occur in some places -- researchers may be self-conscious about publishing code even when their supervisors encourage them to

How can we test these assumptions?

If these assumptions are correct, how do we solve these problems?

We could encourage and train institutions to provide better internal code hosting facilities

For example, by providing good recipes, templates, and support for setting up well-featured, friendly services. A good code management facility would bring together a version control system, a decent web front-end, project data sharing facilities (wiki etc.), and a sensible authentication system that doesn't involve maintaining yet another username/password database.
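
As one illustrative ingredient of such a recipe only (this assumes Mercurial plus a WSGI-capable web server handling authentication against the institution's existing directory; the config path below is hypothetical), the web front-end side can be little more than the standard hgweb WSGI entry point:

    # Minimal hgweb WSGI script, essentially as shipped with Mercurial.
    # The config path is hypothetical; hgweb.config lists the repositories
    # to publish and their display options. Authentication (e.g. against an
    # existing institutional LDAP directory) is left to the web server.
    from mercurial import demandimport; demandimport.enable()
    from mercurial.hgweb import hgweb

    application = hgweb(b"/srv/hg/hgweb.config")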

This is certainly likely to improve code development practice in an institution that has no facility at present. But it doesn't really solve the "selective sharing" problem, or help very much with the desire to move toward publication of software and reproducible research -- unless we can also convince people to make their own internal hosting facility a public one.

Audio and music research groups are typically too small to run their own facilities successfully. To do this well, they really need a horizontal approach -- facilities provided to all research areas by a common CS or IT service. That isn't necessarily the most effective approach if we want to improve search and access specifically for audio-related research code, but it may be the easiest to maintain. This is (presumably?) the sort of thing the general Software Sustainability Institute ought to be exercising itself with.

Some institutions will have a central system already. How many? Which? Are they happy with it? Can the SSI guess at any of these figures for us? Would the existence of a working, if not ideal, internal facility make a group less likely to accept any other approach that we might propose?

We could encourage institutions to make use of existing external facilities

Researchers are often already familiar with services like Google Code, SourceForge, and GitHub, and in some cases may use them even for hosting code that is not really supposed to be published ("yet") if they need to share it with one or more individuals at other institutions. If they are comfortable doing that, why not encourage it -- since it also promotes open publication and has little or no maintenance cost?

The big problem is that it doesn't address private hosting, for projects that are "not yet" ready for publication. It might be appealing to try to persuade groups that their code should all be public from the start, but that isn't very realistic, and in any case it's probably not wise, during advocacy, to mix up a technical solution to a practical problem (use of version control during development) with promotion of a philosophical position (code should be published).

Also, keeping track of projects in these external facilities is hard -- both for prospective users and reusers who want to find relevant code, and for institutions that want to keep track of the work their researchers are producing. We could perhaps help out by providing indexing and metadata services for projects through a central location.

All that said, these services work -- we don't want to find ourselves proposing methods that will be less attractive to motivated researchers.

We could provide a dedicated facility

We could set up a new code hosting facility offering private hosting and access control, so that in principle institutions can treat it as an internal facility, with the ability to "promote" projects to public status when desired.

This could solve the problem of selective sharing and the problem of maintaining private code. It could also store and provide more effective project metadata -- e.g. associate a project with its publications, or list all projects from a particular research lab -- making it easier to find, index, and consequently reuse project code.
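
Purely as an illustration of the kind of metadata meant here (every field name and value below is hypothetical, not a settled schema), a per-project record in such a facility might look something like this:

    # Illustrative sketch of a project metadata record; all names and
    # values are hypothetical examples, not a real project or schema.
    project_record = {
        "name": "example-beat-tracker",
        "research_group": "C4DM",                 # enables "all projects from lab X" listings
        "visibility": "private",                  # could later be promoted to "public"
        "repository": "https://code.example.org/hg/example-beat-tracker",
        "publications": ["doi:10.0000/example"],  # cross-references to associated papers
        "keywords": ["beat tracking", "music information retrieval"],
    }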

But it does have some difficulties:

  • Supervisors and other decision-makers would need to be reassured that they were not likely to be duped into publishing work they wanted to keep private
  • Supervisors and other decision-makers would need to be reassured (through policy? through technical means?) that they were not simply giving away their institution's assets to whoever was running the service
  • Researchers would need to perceive the facility as being at least as easy to use and effective as any of the existing external services
  • The service would require an ongoing maintenance budget separate from any individual institutional budget (the self-sustainability problem)
  • Consequently, all users would need some sort of reassurance that they wouldn't lose all of their code and project metadata if the funding ran out

And other practical risks:

  • It's possible that everyone would just create private projects, add a few selected other users, and never make them public at all
  • There is a "due diligence" step that people generally undertake when preparing to publish something -- adding license headers and README files, checking who the code actually belongs to, and so on -- which may get overlooked if the code starts out as private, since users may be more inclined to check in any code they happen to be working with, of whatever provenance. This also makes it more likely that the project will never become public, because even if the code is subsequently cleaned up, the history will remain.

Hybrid approaches

We may be able to combine the second and third approaches by encouraging people to use "whatever hosting facility suits you -- and here's one of our own if you can't find one", and then also providing indexing, cross-references, and metadata for external projects. How hard is this to do, technically?
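
As a rough feasibility sketch only (this assumes the public GitHub REST API as one example of an external service; SourceForge, Google Code and the rest would each need their own adapters, and the project name used here is hypothetical), pulling basic metadata for a central index might look like this:

    # Sketch: fetch basic metadata for an externally hosted project so it
    # can be indexed centrally. Uses the public GitHub REST API as one
    # example; unauthenticated access is rate-limited.
    import json
    import urllib.request

    def fetch_github_metadata(owner, repo):
        url = "https://api.github.com/repos/%s/%s" % (owner, repo)
        with urllib.request.urlopen(url) as response:
            data = json.load(response)
        # Keep only the fields a central index is likely to care about
        return {
            "name": data["name"],
            "description": data["description"],
            "homepage": data["html_url"],
            "last_updated": data["updated_at"],
        }

    # Example use (hypothetical project):
    # print(fetch_github_metadata("example-lab", "example-project"))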

Is there any way to combine the first approach with any other? Presumably, many institutions will have their own hosting facility, and advisory services like the SSI will quite reasonably be encouraging them to set one up. Does having, and using, an internal service in fact risk making it harder to publish and share code?