Posts Tagged 'software'

Update to AutoDBAdapter for Android Developers

Trouble with your database code or database helper in Android?

As I mentioned earlier, I have written a web application which automatically creates database adapter code in Java for your Android application, based on a text description of the database schema.  This grew beyond its initial intentions and now spits out Java code for a fair bit of stuff.

I have recently updated it.  Version 1.2 gets rid of some annoying bugs (like fields being wrongly declared – oops!), adds code to allow each table to be automatically initialised from an array, and also has code to easily nuke and re-initialise the db (don’t use in production!).

If you’re looking for ready-made DBAdapter Java code which is tailored to your database schema, or for a Java database adapter example, please give AutoDBAdapter a try.
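To give a feel for what schema-driven code generation saves you from typing by hand, here is a toy sketch. The one-line schema syntax and the output below are invented for illustration; they are not AutoDBAdapter’s actual input format or output.

```python
# Toy sketch of schema-driven SQL generation (illustration only: the
# "table: col type, col type" syntax below is invented for this example
# and is not AutoDBAdapter's real format).

def generate_create_sql(schema_line):
    """Turn a one-line schema description into a CREATE TABLE statement."""
    table, _, cols = schema_line.partition(":")
    col_defs = []
    for col in cols.split(","):
        name, ctype = col.split()
        col_defs.append(f"{name} {ctype.upper()}")
    # Android cursor adapters conventionally expect an _id primary key.
    return (f"CREATE TABLE {table.strip()} "
            f"(_id INTEGER PRIMARY KEY AUTOINCREMENT, "
            + ", ".join(col_defs) + ");")

print(generate_create_sql("notes: title text, body text"))
# CREATE TABLE notes (_id INTEGER PRIMARY KEY AUTOINCREMENT, title TEXT, body TEXT);
```

A real generator would go on to emit the insert/update/query boilerplate around this, which is where the hand-typing really adds up.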

UniPod for Android

Annoyed at Android media players in general, I’ve written my own.  The point?  To be able to play long-running audio files like university lectures or audio books.  Available for free in the Apps -> Books and Reference section on Android Market (or, if you’re technically inclined, download the apk yourself).

Features include:

* no playlist, choose a course and go

* no album art, but you do get large text telling you what you’re listening to

* no shuffle to turn on accidentally

* your place in what you are listening to is automatically updated, so even in the worst case, if the app crashes, it should restart where you were up to; in the best case you can always take up your listening from where you left off

* autoreview – it automatically rewinds

* customise these features as you desire.

* never see your music again!  If you want to just see (eg) lectures, you can filter out your music tracks.

* and more!
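The resume-after-crash behaviour in the list above can be sketched roughly as follows. This is a toy illustration, not UniPod’s actual code; the rewind amount and file names are invented.

```python
# Sketch of the resume idea: persist the position on every progress
# update so a crash loses at most one interval, and rewind slightly
# on resume ("autoreview") to restore context. Illustrative only.

REWIND_MS = 10_000  # hypothetical auto-rewind amount (10 seconds)

class PositionStore:
    def __init__(self):
        self.saved = {}  # track name -> last saved position in ms

    def on_progress(self, track, position_ms):
        # Called periodically during playback; cheap enough to do often.
        self.saved[track] = position_ms

    def resume_position(self, track):
        # Restart a little before the last saved point, never before 0.
        return max(0, self.saved.get(track, 0) - REWIND_MS)

store = PositionStore()
store.on_progress("lecture_07.mp3", 125_000)
print(store.resume_position("lecture_07.mp3"))  # 115000
```

On Android the saved positions would live in persistent storage (a preferences file or database) rather than an in-memory dict, so they survive a process restart.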

Known issues:

* some “application taking too long to respond” messages.  Hard to diagnose.

* this is a player only.  You need to get your media onto your phone first.

The Real Story behind Windows 7 Phone Home

ie: my guesses about it

Lauren Weinstein reports on a new feature of an update to Windows 7 (apparently called KB971033?), which continually checks (once every 90 days) whether the installation is authorised/activated/validly licensed.  Some sites have discussed this and made something of the fact that it is voluntary to install, and/or that it keeps checking the installation even after it has verified the copy as valid and even after the copy has been authorised.

I tend to think this has not so much to do with piracy per se as solving a couple of Microsoft’s other problems.  The first is the lumpiness of Microsoft’s revenue.  It gets a heap of extra revenue when it releases new versions of whatever.  Microsoft has tried to overcome this by its assurance programs, in which users are asked to sign on to a subscription program.  However, ultimately the subscription programs are dependent upon Microsoft actually releasing new versions from time to time (what a drag!).

The second is the problem of proving losses.  If you make an illegal copy of a Microsoft product and then buy a legitimate copy (or as many legitimate copies as illegal copies you have made), exactly what has Microsoft lost?  Microsoft, for example, gets the same amount of money, it just gets it at a different time.  This is because there is no tying of the licence to a particular period of time.  If a person made an illegal copy of Windows 98 (say) 10 years ago and buys a legitimate copy today (yes, I know that may not be possible in fact) it is hard to see what Microsoft has actually lost, except the time value of money over the past 10 years (this, unfortunately, might lead to ‘gaming’ of the system, where a person makes illegitimate copies until they are caught, then buys legitimate copies to cover their nefarious activity).  Note: there is nothing specific to Microsoft about this argument; it applies generally to software licences which are not limited in time.

Both of these problems are solved by tying.  Microsoft began tying a long time ago by tying copies to specific hardware.  This is what the authentication stickers are all about.  Their purpose is not to prevent piracy.  Rather, their purpose is to prevent a legitimate purchaser moving a particular copy from one computer to another.  If they can’t move the copy then, the theory goes, they must buy a new copy.  Moreover, by withdrawing product from the market, Microsoft can force upgrades to new versions of Windows.  If someone could take their legitimate copy of Windows XP from their old computer (wiping the old computer and loading it with Linux) and load it onto their new computer, would they have bought Vista?

The next form of tying is to time.  The point of having the check performed once every 90 days is to allow for the prospect of quarterly licensing.  You pay a fee each quarter in order to have continued access to your data.  The 90 day phone home is just the latest piece in the puzzle.  Others are things such as the time limited trial installations of Office that OEMs force onto people.  In a year or two you will no doubt see Microsoft offering (initially) optional, low priced time limited licences, with a view to moving the market over to the new licensing scheme over time.  Time limited licensing may also alleviate (although maybe not solve?) a problem that Microsoft faces with the netbook form factor – that is, it is too expensive to pay any significant amount for a Windows licence on a $300 (or cheaper) machine.  However, if you have a time based licence, the first three (or six) months might be given away free or charged at a low amount.  This would allow hardware manufacturers to legitimately sell Windows loaded machines at the same price point as a Linux loaded alternative.

Just follow the bouncing ball…

See also: Computerworld article, someone’s blog, someone else’s blog.

Taste of Vista, Silliness of EULA Laws

A relative bought a new computer last week and I went around to set it up.  It was my first experience of Vista (in theory I have a copy of Vista on my home machine, but I haven’t (re)connected the hard drive since I had to remove it to install my Linux set up).  I spent most of my time setting up the hardware, rather than looking at Vista, but it seems nice enough.  I can understand why the User Account Control prompts are annoying people.  I was only on the machine for 5 minutes and was already getting annoyed at them.

Interesting from my point of view was that the machine booted straight into the desktop – no EULA was presented.  I also noticed that the 7 day anti-virus trial only had 2 days left to run.  Presumably the store had set up the machine?  The machine is a name brand (Medion, produced by Aldi).  It has an authorisation sticker on the side.  The machine comes with minimal documentation and no printed EULA.

The $64 question is – is my relative licensed to use the software?  If so, what are the licence terms?

Community Must Value All Open Source Contributions

Brendan Scott, September 2008

Mary Gardiner has written a blog post about how to get females involved in projects.  I want to emphasise one of the points she makes:

Don’t discount what women do [‘what women do’ here used as ‘community management, documentation and similar activities’, via Geek chicks: second thoughts]

I believe there is a blind spot here for everyone in the open source community.  An open source company will think nothing of funding development effort and hiring x coders (for some non-trivial x), but to suggest that it make any other contribution to the community is to cross the line of acceptable suggestions.  But the community does not rely on coders alone.  The best code in the world is useless if no one knows of it, or if it is hidden behind a terrible user interface, or can’t be licensed because lawmakers have outlawed it (eg encryption code).  I have hinted at this problem in my post on FLOSS best practices.  This is what I am referring to by the references to KPIs in that post.

Because numbers are relatively easy to come by, it is comparatively easy to criticise companies for not submitting enough code to one project or another, but there are other things against which a contribution to the community should be measured. Unfortunately, the word which has filtered back to me is that managers are not assessed against these soft contributions.  It should therefore come as no surprise that companies are not committing to them.

It would be nice to see the community demand that organisations support the whole breadth of the open source community.  In this way, a more rounded view of what makes a good open source corporate citizen could be created.  This would include things such as supporting marketing, documentation or communication efforts, the making of submissions to governments or courts (or supporting compliance initiatives) and the myriad of other things which are essential to a functional community associated with either a project or with open source generally.

Thanks Readerware

One of my pet hates about closed source software is registration codes.  In order to install software that you’ve legitimately bought, you need to find the relevant registration codes.  If you’ve misplaced them (eg you’ve moved house, or you have kids) then you can kiss goodbye to however many hundreds of dollars you spent on the thing.  Even if you haven’t lost the codes, it can take anything from 5 minutes to hours to actually find them again.  Several years ago this was such a hassle for me when upgrading or changing my system that I decided to forswear closed source apps to the extent I could.

I made one exception, when we moved house a couple of years ago.  I bought a program called Readerware* to catalogue our books before we moved – a vain and thankless task but, now that it’s done, I thought I might update it with the books we’ve acquired since.  So I dug up my old version of Readerware from one of the backups, only to find that it didn’t seem to work on my current system (which has evolved a bit since then, particularly in May this year – I think it is looking for a 32 bit version of glibc).  I even had the CD – and the registration code.

On a whim I downloaded the most recent version of the product and entered my registration code – it worked.   I had to pay for neither the move to 64 bits nor the new version of the product.  How nice (there is no fee for upgrades within major releases 1, 2, 3 etc – the current version is 2.984).  Thanks Readerware.

I now face the task of identifying what files I need to copy to restore the database… (done) and finding what box I put the neat bit of the dot bomb (see below) in.

PS:  I even got a neat bit of the dot bomb at no extra cost when I bought the package.

* At the time Alexandria was a no-goer, although it seems to have come along since then…

Linux Kernel Drivers – AU Law on Interoperability

For those of you who haven’t seen it, some of the Linux Kernel developers have issued a statement on the use of closed source drivers in conjunction with the kernel (summary – it’s a Bad Thing ™).  My suggestion on this is that the interface for drivers should prohibit non-GPL drivers unless the driver’s author positively  asserts that the driver is not a derivative work.

This announcement has triggered off some discussions in mailing lists about how to determine whether or not something is a derivative work of the kernel.   Someone mentioned that derivative works should be functionally based, citing Linus Torvalds here.  I intended to chime in saying that the Australian High Court had actually said something like that some time ago, but then the short email got out of hand.  So now I’ve done a blog post on…

Some Observations on the Australian Law on Interoperability


Some time ago, the highest Australian court endorsed a functional analysis when identifying infringement of computer programs (ie if something is copied and is essential to the operation of a copyright program, then it’s an infringement), although it appears to have stepped back from this position more recently.

The current authorities still hold that the reproduction of a data set, even for the purpose of interoperability, will be an infringement if there is copyright in the data set.  Moreover, the courts have simply looked for a causal connection between the original work and the reproduction.  The re-implementation of a work by piecing it together from observation may still result in an infringement.

The legislature has introduced an interoperability exception but it seems to be worded in a way limited to reproductions for the purpose of gaining information – and therefore doesn’t seem to be a lot of help in practice.

The Australian position for the makers of device drivers is not clear.  There are particular problems when an implementation requires the reproduction (albeit indirectly and notwithstanding that it may be necessary for interoperability) of part of the kernel in which copyright subsists.  In this case there is a risk that the requirement to GPL may be invoked by virtue of clause 2(b).

Autodesk v Dyason

There is an interesting case in Australia called Autodesk Inc v Dyason [1] which held that copying 127 bits (yes, bits)[2] from a multi-kilobyte program (I have quoted 32Kb in the past, but now cannot find the reference – if you know please tell me where to find it) constituted a reproduction of a “substantial portion” of the larger program and was therefore a copyright infringement.

The 127 bits were in part of a chip in a dongle included by Autodesk in the package with their software.  The defendants observed the operation of the chip through an oscilloscope and used this information to produce their own dongle which mimicked the operation of the Autodesk dongle.  These bits were said to infringe the copyright in the program (called Widget C) which queried the dongle.

Some interesting aspects of the case are –

  • this is a case about primary infringement of the reproduction right;
  • the dongle was a hardware implementation.  In response to queries by Widget C, it responded in a certain way based on the electrical operation of the various hardware gates in the dongle.  In effect it operated as a look up table, but there was no look up table “stored” in the device;
  • the relevant defendant had no access to and did not analyse the Widget C program.  They only analysed the dongle (see para 85 of the judgment at first instance);
  • there are in fact two decisions by the High Court, with the second on procedural issues.

The important part of the decision is that the court expressly took into account the fact that the key bits were essential to the function of the computer program. From McKeogh, Stewart and Griffith, Intellectual Property in Australia (third edition) at para 8.6 (who, incidentally, criticise the decision):

“However, the emphasis on the function of a computer program as determining what is a ‘substantial part’ was maintained by the majority [of the court in Autodesk No 2 -see below], who emphasised that the [127 bits] was ‘essential’ or ‘critical’ to the operation of Widget C.”

This is a decision of the High Court – the ultimate court of appeal here equivalent to the US Supreme Court.

The defendants sought leave from the High Court to have the issue reheard (this is very unusual), and the High Court gave judgment on whether or not to look at the issue again (ie, it was determining an issue of procedure, not re-examining the substance of the appeal) the following year.[3]  The defendants were unsuccessful (3-2), although the then-Chief Justice (in dissent) indicated reservations about applying a functional analysis when determining substantiality for the infringement of computer programs.  His reasoning was later to be cited with approval in …

Data Access v Powerflex

In Data Access v Powerflex the court unanimously stepped back from the “essential to the operation” analysis in the Autodesk v Dyason Case (only one of the Autodesk judges remained on the bench by that time – see her short concurring judgment at the end) saying this at paragraphs 84ff:

There is great force in the criticism that the “but for” essentiality test which is effectively invoked by the majority in Autodesk No 2 is not practicable as a test for determining whether something which appears in a computer program is a substantial part of it. For that reason, we prefer Mason CJ’s opinion that, in determining whether something is a reproduction of a substantial part of a computer program, the “essential or material features of [the computer program] should be ascertained by considering the originality of the part allegedly taken”

In order for an item in a particular language to be a computer program, it must intend to express, either directly or indirectly, an algorithmic or logical relationship between the function desired to be performed and the physical capabilities of the “device having digital information processing capabilities”. It follows that the originality of what was allegedly taken from a computer program must be assessed with respect to the originality with which it expresses that algorithmic or logical relationship or part thereof. The structure of what was allegedly taken, its choice of commands, and its combination and sequencing of commands, when compared, at the same level of abstraction, with the original, would all be relevant to this inquiry.

That being so, a person who does no more than reproduce those parts of a program which are “data” or “related information” and which are irrelevant to its structure, choice of commands and combination and sequencing of commands will be unlikely to have reproduced a substantial part of the computer program. We say “unlikely” and not “impossible” because it is conceivable that the data, considered alone, could be sufficiently original to be a substantial part of the computer program.

It follows that we are unable to agree with the approach to determining “substantiality” which the majority took in Autodesk No 1 and Autodesk No 2. Because of the importance of the question, we think that the Court should re-open the question of what constitutes a substantial part of a computer program. To depart from the reasoning in the Autodesk cases does not necessarily mean that the outcomes in those cases were wrong. In our view, the look-up table in Widget C was merely data and was not capable of being a substantial part of the AutoCAD program unless the data itself had its own inherent originality. However, re-opening the reasoning in the Autodesk cases does not require the Court to express a view on whether the look-up table in that case had its own inherent originality.

The Data Access case was about whether the re-implementation of a programming language (called Dataflex) was an infringement of copyright in the reserved words of the language.  Surprisingly, this argument was successful at first instance, but was eventually knocked on the head in principle, though not necessarily in practice, in the High Court.  The reason it was not knocked on the head in practice is that another lookup table, used to implement a compression algorithm, was at the centre of the case.

In that case the Dataflex program stored its program files in a compressed form.  A form of compression called “Huffman coding”, which uses a lookup table, was used to implement the compression.  The lookup table in this compression scheme is created by identifying commonly occurring strings in sample files.  In order to create a competing program which read the compressed files (or to create compressed files for use with the Dataflex program), it was necessary to reverse engineer and re-implement the compression table.  The defendant re-created the table by creating a number of test files and seeing how the Dataflex program actually compressed them (see paragraph 117).  The court found that the original compression table was a literary work and that re-creating it in the manner described was a reproduction (para 124):

The fact that Dr Bennett used an ingenious method of determining the bit string assigned to each character does not make the output of such a process any less a “reproduction” than if Dr Bennett had sat down with a print-out of the table and copy-typed it …
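For the programmers, the table-recovery technique the court described can be sketched in miniature. The code table below is invented (this is not the Dataflex format), the “black box” stands in for the closed compressor, and real probing used whole test files rather than single characters, but the principle is the same: feed known inputs in, read the table back out.

```python
# Toy illustration of recovering a compression table by probing a
# "black box" compressor with known test inputs. The table here is
# invented for the example; it is not the actual Dataflex table.

SECRET_TABLE = {"e": "0", "t": "10", "a": "110", "o": "111"}  # hidden in the program

def black_box_compress(text):
    """Stand-in for the closed program: emits the bit string for its input."""
    return "".join(SECRET_TABLE[ch] for ch in text)

def recover_table(alphabet):
    """Reconstruct the table by compressing one-character test inputs."""
    return {ch: black_box_compress(ch) for ch in alphabet}

recovered = recover_table("etao")
print(recovered)  # identical, entry for entry, to the hidden table
```

The point the court made is that the recovered table is a reproduction of the original even though it was derived entirely from observed outputs, never from the program’s source.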

The court’s comment may raise problems for those wanting to create interoperable programs.  Anyone, it seems, can structure their program in such a way that the reproduction of a copyright work is necessary to interoperate with the program.  I will defer to the programmers out there on whether and how this might apply to drivers accessing the Linux kernel.  The legislature might have addressed this in…

Section 47D of the Copyright Act

Since the Data Access case the legislature has implemented section 47D of the Copyright Act.  It says that reproducing a copyright work that is a computer program (which includes literary works incorporated into a computer program) for interoperability is not an infringement… in certain circumstances.  Those circumstances are:

(1)  Subject to this Division, the copyright in a literary work that is a computer program is not infringed by the making of a reproduction or adaptation of the work if:

(a)  the reproduction or adaptation is made by, or on behalf of, the owner or licensee of the copy of the program (the original program) used for making the reproduction or adaptation; and

(b)  the reproduction or adaptation is made for the purpose of obtaining information necessary to enable the owner or licensee, or a person acting on behalf of the owner or licensee, to make independently another program (the new program), or an article, to connect to and be used together with, or otherwise to interoperate with, the original program or any other program; and

(c)  the reproduction or adaptation is made only to the extent reasonably necessary to obtain the information referred to in paragraph (b); and

(d)  to the extent that the new program reproduces or adapts the original program, it does so only to the extent necessary to enable the new program to connect to and be used together with, or otherwise to interoperate with, the original program or the other program; and

(e)  the information referred to in paragraph (b) is not readily available to the owner or licensee from another source when the reproduction or adaptation is made.

(2)  Subsection (1) does not apply to the making of a reproduction or adaptation of a computer program from an infringing copy of the computer program.

McKeogh, Stewart and Griffith, cited above, argue that s 47D is a complete answer to interoperability questions (see section 8.7) but I don’t see it.  The key words are in paragraphs (b) and (c), which make it clear that the “reproduction” which is excused by this section is the reproduction for the purposes of getting information (another section, section 47AB, gives “computer program” an expansive definition which would probably cover the Huffman table in the Dataflex case).  Paragraph (b) also makes clear that the reproduction in question must occur prior to the making of the interoperating program.  The key point is that the section says nothing about whether you can use the information that you obtain in this way, and the reproduction which occurs when this information is actually incorporated in the interoperating software does not appear to be covered by this wording.  Paragraph (d), which you might think covers it, is worded as a qualification on paragraphs (a), (b) and (c).

For example, I can’t see that the outcome of the Data Access case would have been different if s 47D had been in force.   McKeogh et al correctly observe that the compression table in the case would likely be within the extended meaning of computer program in the legislation.  However, the compression table is not the relevant thing being reproduced because a reproduction of it isn’t “made for the purpose of obtaining information…”. On the contrary, it is the information which has been obtained as the result of some other reproduction.

Some thoughts on 47D

In my view section 47D is not drafted in a manner which is of much practical assistance to someone wanting to create an interoperable software program.  Perhaps someone might be able to hang their hat on some Explanatory Memorandum, Hansard (hint: try around 30 November 2006) or a CRC report somewhere to explain its purpose, but to do so seems to require stepping outside the words of the section.  Incidentally, if there is a problem with section 47D, then there is also a problem with the exceptions to circumvention of technological protection measures, since it forms the basis of one of the exceptions.  I made submissions on behalf of OSIA asking for section 47D to be reworded in the course of the AUSFTA negotiations over the past couple of years (because this exception is relevant to the added prohibitions on technological protection measures).  As it happens, the Senate Committee agreed with me (see recommendation 13 here).  Strangely, it didn’t make it into the final draft of the legislation.

So, while the legislature seems to have intended to address the interoperability issue, it’s not clear it has gotten there.

[1] [No 1] (1992) 173 CLR 330.

[2] Actually, the judgment is not clear.  In some places it looks like there are 127 different 127-bit strings; in other places it refers to a single 127-bit string – see eg paras 12-14 of Dawson J’s judgment. See also the decision of Northrop J at first instance.

[3] Autodesk Inc v Dyason (No 2) [1993] HCA 6; (1993) 176 CLR 300 (3 March 1993).

[Addition 26 June 2008.  In response to Bruce Wood’s comment here is a more complete quote of paragraph 124/125 of the judgment (my emphasis):

124. In addition, in our opinion the Full Court was correct in holding that the process undertaken by Dr Bennett constituted a “reproduction” of the standard Dataflex Huffman table. The fact that Dr Bennett used an ingenious method of determining the bit string assigned to each character does not make the output of such a process any less a “reproduction” than if Dr Bennett had sat down with a print-out of the table and copy-typed it into the PFXplus program.

125. The finding that the respondents infringed the appellant’s copyright in the Huffman table embedded in the Dataflex program may well have considerable practical consequences. Not only may the finding affect the relations between the parties to these proceedings, it may also have wider ramifications for anyone who seeks to produce a computer program that is compatible with a program produced by others. These are, however, matters that can be resolved only by the legislature reconsidering and, if it thinks it necessary or desirable, rewriting the whole of the provisions that deal with copyright in computer programs. ]

[Addition 27 June 2008.  The Decision on appeal to the Full Federal court describes the table (under the heading Huffman Compression Table) as being 256 lines of source code (presumably one line to assign each character value).]

The Invisible Closed Source Overhead – 1

Brendan Scott, June 2008


Many people focus on the “hard” costs of software acquisition and maintenance such as licence fees and implementation costs. However, the adoption of a closed source solution has a number of hidden costs which are rarely properly accounted for. Many of these costs arise because of the extreme lack of flexibility in closed source licences and they are likely, at least in some cases, to be far more significant than the hard costs. In this series of posts we will work our way through the closed source acquisition and maintenance path and have a look at some of the more obvious of these hidden costs.

Underlying Reasons – Natural Monopoly

Software is a natural monopoly (because it has a high fixed cost and low marginal cost to produce). As such there is a tendency for the market to become increasingly concentrated, first within a particular product line (such as databases, operating systems or word processing software), then across groups of product lines (typically because of the creation of artificial cross product dependencies).
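The arithmetic behind the natural-monopoly claim can be illustrated with invented numbers: when the fixed cost dominates, the average cost per copy keeps falling as output grows, so the largest producer can always undercut smaller rivals.

```python
# Toy numbers (invented) illustrating high fixed cost plus low
# marginal cost: average cost per copy falls steeply with volume.

FIXED_COST = 10_000_000   # hypothetical development cost
MARGINAL_COST = 1         # hypothetical cost per additional copy

def average_cost(copies):
    return (FIXED_COST + MARGINAL_COST * copies) / copies

for n in (10_000, 100_000, 1_000_000):
    print(n, average_cost(n))  # 1001.0, then 101.0, then 11.0
```

A producer shipping a million copies can profitably price below the per-copy cost of a rival shipping ten thousand, which is the mechanism driving the market concentration described above.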

Underlying Reasons – Good Enough for Most

Professor Pamela Samuelson and Mr Mitch Kapor have run a lecture series at UC Berkeley on open source (which, by the way, is worth a listen). In one of their lectures they give the example of the university sector in the United States having particular requirements for their accounting, requirements which are not met by the closed source software industry. Indeed, the gap between their requirements and the capability of closed source solutions is so great that universities have banded together to create their own (open source) accounting software – at substantial expense. Ordinarily you would expect that universities would have enough market power to get additional functionality into closed source solutions – but they don’t. Remarkably, the US university sector is not important enough to influence the direction of closed source software vendors on features of their accounting software. This is not an isolated case.

If this sector isn’t influential enough, what hope does an ordinary organisation have? Any product which is designed to be sold on a per unit basis (regardless of the unit – eg copy, seat, site, user etc) must be designed to maximise sales of that unit. Here the 80-20 rule comes into play, with the vendor coding features to attract a large chunk of the market for the lowest cost. That is, in order to maximise profits a closed source vendor must aim for a product which is “good enough for most”.

While the market is young there may be niche players who will provide products into that tail 20% which is not properly served (or there might be multiple different products, each covering a different 80%). However, this market structure is unstable because software is a natural monopoly (as described above). As products begin to win market share they crowd out competitors, starting with the competing 80-percenters and then moving on to the niche players.

If your organisation has specific feature requirements it is likely not only that closed source solutions will not meet your needs, but also that you will be unable to influence the vendor to implement features to meet your needs. Even offering to pay the development costs will not necessarily influence a vendor to implement the feature, because this would burden their code with additional modules which would need to be maintained going forward, and the vendor may be unwilling to take on this maintenance burden.

In short, those who fall into “the many” category will be reasonably well served by their closed source vendor. However, those in “the few” (roughly 20% according to the 80-20 rule) won’t be. I will use the terms The Many and The Few for convenience in the balance of these posts.

Tragically Closed

Closed source vendors are typically reluctant to interoperate with others, especially if the vendor is well positioned in the market. Indeed, a vendor’s support for interoperability is inversely proportional to their market share – when they have small market share they need their software to be able to work with other people’s software, since by assumption most people have other people’s software. As the vendor’s market share grows, interoperability transforms into intraoperability – the ability to operate with other components in the vendor’s software portfolio. Indeed, vendors with large market share will even argue that intraoperability and interoperability are the same thing: since that vendor controls the market (they argue), intraoperability is de facto all that a customer needs.

As a vendor’s market share grows, offering interoperability only assists the growth of the smaller players in the market. So beyond a certain market share vendors have a disincentive to interoperate, and that disincentive grows with market share. Consider the extremes: for someone with no market share to enter the market, their first sale must interoperate with the existing software, so they support interoperability. Equally, for someone with 100% market share, supporting interoperability necessarily means losing some part of the market.

It follows from the fact that software is a natural monopoly that, because the natural tendency of a closed source market is towards concentration, there is also a natural tendency over time towards lack of interoperability. So even if you invest in a closed source product which supports or promotes interoperability today, you will likely be in trouble in five or ten years’ time. By then, either the product will have emerged as the market leader and will no longer support interoperability, or it will have fallen by the wayside and the actual market leader will be trying hard to lock out the product you invested in. Moreover, there is now a move among some closed source vendors to trade interoperability against patent rents, further increasing the cost of interoperability. The long term interoperability prospects for any closed source solution are therefore poor (I have used the heading “tragically” closed here in Hardin’s sense – the long run closure of the system follows almost inexorably from the nature of the industry).

The lack of interoperability is an enormous problem because interoperability is a precondition for competition. When software lacks interoperability it is a symptom that there is no competition in the market. As competition in a market decreases, not only do the prices of products in the market become artificially inflated, but the quality and diversity of the products simultaneously decrease. Lack of interoperability also means that a customer cannot avail themselves of self help to implement features that they want in a product or to remove dis-features (1) (2) from it. As mentioned above, unless your requirements are shared by a substantial proportion of the target market, you are unlikely to be able to have specific features implemented – even if you are willing to pay the cost of implementation.

A result of this Tragic Closure is that closed source software tends to be monolithic rather than modular.

In part 2 we look at some costs of acquisition.

The Tragedy of the Anti-Commons

or Why Government FLOSS Purchasing Policy is Misapplied


Misapplication of “value for money” requirements when purchasing software results in poor value for money – Government purchasing policies for software tend to support the creation of monopolies.

Government purchasing affects the price paid by citizens for the product purchased. In some cases it produces volume which permits scale discounts, and therefore a net benefit to citizens who also purchase the product. However, in the case of lock in software* Government purchasing can create a monopoly in the software, which leads to increased costs for citizen purchasers and a net detriment for society as a whole. It is not appropriate for value for money policies to be assessed on a per acquisition basis when software is being acquired. Doing so will almost certainly create net costs for the community when considered in the aggregate.

A Tale of Two Widgets

Consider the case where the government must buy one of two types of widget (called Widgets A and B respectively). Assume also that both widgets are more or less equivalent. Not only do both widgets meet the government’s needs, but they would also both meet the needs of most general purchasers of widgets. Assume further that the government price for Widget A is about half that of Widget B. At this point take out your taxpayer’s hat, place it squarely on your head, and think about which widget you’d like the government to buy. That is, would you prefer the government to take more of your money and spend it on Widget B, or would you prefer it to spend your money on the cheaper Widget A?

It was the Best of Times

Did you choose Widget A? Surely, based on what I’ve told you, that must be right! You’d think a government would need to be mad, bad or corrupt to purchase Widget B in those circumstances. Not surprisingly, this purchasing preference is reflected in government procurement policies.

Let’s assume now that the Government does purchase Widget A and take the scenario further to analyse some of the assumptions we made in arriving at the purchasing decision. What if Widget A is used to lay roads? What if Widget A is not interoperable?

What if Widget A has been specifically designed so that, if it is used to lay a road, then any car driven on that road must also be fitted with Widget A? (Assume, for the sake of fairness, that Widget B likewise requires a car to be fitted with Widget B.) Would you still be happy for the government to buy Widget A? Maybe you would – Widget A costs half as much as Widget B, so presumably you’ll be no worse off. You will need to buy one of the two widgets in any case, and you’ve already paid half** the price of Widget B when the government used your taxes to buy Widget A. If you have to pay half the price of Widget B again, you’re square with the price of buying Widget B – in fact you’re better than square, because if the government had bought Widget B you’d have paid the full price for it with your taxes and you’d have to pay the full price again in order to drive your car.

It was the Worst of Times

There is a “but” – and it is unrelated to the characteristics of Widget A and Widget B, their functions, design and operation. That “but” is the availability of Widget A and how it can be priced. Assume that Widget B is made by lots of different people and there is fierce price competition in relation to it. Assume also that Widget A is only available from a single vendor. The vendor of Widget A (Vendor A) is able to set different prices for Widget A in different markets (the vendors of Widget B are not, because of the assumed competition in the market). Vendor A can choose to set a very low price for Widget A for government purchasers, knowing that governments build a lot of roads. It can then set a higher price for other purchasers – the prices given above were prices for government purchasers (not for chumps like you and me). Want to drive on a government road? Sorry, that’ll be ten times the price of Widget B.*** Now, any way you cut it, you are out of pocket. Assuming most roads are laid by the government, over time Widget B will be pushed out of the market, or at the very least relegated to small niches.
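The arithmetic of the two scenarios can be made concrete. The following is a minimal sketch: the base price of Widget B is an assumed figure, and the ratios (half price to government purchasers, ten times the price of Widget B to private purchasers) are the ones used in the text.

```python
# Worked arithmetic for the widget scenarios. PRICE_B is an assumed
# figure; only the ratios come from the discussion above.

PRICE_B = 100  # assumed competitive price of Widget B


def total_cost_widget_b():
    # Government buys Widget B with your taxes, then you buy
    # Widget B again for your own car.
    return PRICE_B + PRICE_B


def total_cost_widget_a_naive():
    # The "best of times" assumption: Widget A costs half the
    # price of Widget B for everyone, government and citizen alike.
    return PRICE_B / 2 + PRICE_B / 2


def total_cost_widget_a_monopoly():
    # The "worst of times": the government price stays at half the
    # price of Widget B, but the citizen price is set at ten times
    # the price of Widget B.
    return PRICE_B / 2 + 10 * PRICE_B


print(total_cost_widget_b())          # 200
print(total_cost_widget_a_naive())    # 100.0
print(total_cost_widget_a_monopoly()) # 1050.0
```

On the naive assumption Widget A halves the community’s total outlay; once Vendor A can price discriminate, the same purchase costs the community more than five times what Widget B would have.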

Intermediate Conclusion

We can conclude from the above that it is not possible to make a judgment as to value in isolation. Something which seems to be good value in a particular purchase scenario can lead to extremely poor value from public expenditure.

What’s the Problem?

Government procurement can both create and reinforce a monopoly in the goods and services it acquires. Anecdotal evidence suggests that bureaucrats take “value for money” type formulae and assess them against the cost to Government on a purchase-by-purchase basis. This approach is fine in respect of goods and services which are easily substitutable (such as hammers, screws, cars etc). In respect of goods which are specifically designed to prevent substitutability – eg devices which are not designed to be interoperable – it is an extremely hazardous approach. If those goods also tend to be a natural monopoly (such as software in general, and particularly software which is designed not to be interoperable) this approach is absolutely the wrong one. The reasons should be obvious:

  • the vendor of the product can almost always underprice competitive offerings – even when competitors are loss leading;
  • the Government, bound by its bureaucrats’ incorrect understanding of the value for money policy, is required to purchase the monopolist’s product;
  • over time network effects enable the monopolist to crowd out competitive offerings;
  • the vendor, now a monopolist, can charge what it likes to the rest of the community safe in the knowledge that, because of the preponderant use by Government others have no choice but to acquire the product;
  • ironically, over time the monopolist will even be able to charge the Government more, because of its huge installed base and the previous elimination of competition.

The Tragedy of the Anti-Commons

A prospective monopolist has a period of vulnerability, before it has established the monopoly product in the market, in which this strategy is risky (because it must carry the costs of underpricing). However, the tragic (ie fatalistic) nature of the process, coupled with the huge rewards to be had from playing the game, makes it inevitable that sooner or later a monopoly will be established – it only takes one prospective monopolist to succeed for a monopoly to become entrenched. No number of failed attempts will prevent the next attempt, and the winner-takes-all nature of the scenario will continually draw in new prospective monopolists.

What is a Better Approach?

Government may have many roles in the procurement of goods and services but supporting, establishing and maintaining unregulated monopolies is not one of them.

While the value for money requirement is fundamentally correct, elevating it to the status of gospel or taking it out of context is not. Value for money must be determined by reference to the price paid in aggregate by the community for the Government’s acquisition, not the price paid by the acquiring Government or authority. This assessment is further complicated by the fact that it must necessarily:

  • be an ongoing one. A purchase today has no immediate cost impacts on the rest of the community – those impacts are all in the future, and some of them may be in the medium or long term;
  • involve a review of pre-existing Government acquisitions (as a previous (acceptable) acquisition may be unacceptable when taken in combination with a proposed acquisition); and
  • involve a review of things other than the software – in particular whether the vendor is likely to be in a position to manipulate the market.

Of course, the complexity of the issues to be addressed by such a value for money assessment makes it difficult to apply with any consistency or certainty. In relation to lock in software it may be only a feel good term with little real substance. Any doubt should be resolved against the creation of monopolies.

If the licence terms permit perpetual use of each copy and permit the Government to onsell each copy of the software acquired, independently of any hardware acquired in conjunction with it, that would reduce some of the monopolistic impact of the arrangement. Unfortunately the structure of copyright law usually forbids this arrangement in practice and, in any event, it would not eliminate all monopolistic effects.

Particular Example – Whole of Government Purchasing

Whole of Government acquisitions of lock in software provide especially poor value for money on this analysis. Such acquisitions create a vast installation of the software across government and effectively create an environment in which incremental improvements are net costs rather than net savings. For example, if the software has been purchased for the whole of Department X, then using a cheaper product for some users will not produce any cost savings – on the contrary, since there is an effective doubling up of licensing for that small group, it costs more to use the cheaper software! Further, such acquisitions create an entrenched installed base which increases the costs in the next round of acquisition (because the installed base effectively dictates the purchasing requirements in that round).
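The doubling-up effect is easy to see in numbers. A minimal sketch with assumed figures (a flat whole-of-department licence fee covering every seat, plus a cheaper per-seat alternative adopted for a small group):

```python
# Illustrative arithmetic only; all figures are assumptions.

SEATS = 1000
WHOLE_OF_DEPT_LICENCE = 50_000  # flat fee covering all seats in Department X
CHEAPER_PER_SEAT = 20           # per-seat price of the cheaper alternative


def cost_staying_put():
    # The whole-of-department licence already covers everyone.
    return WHOLE_OF_DEPT_LICENCE


def cost_switching_small_group(group_size):
    # The flat licence is still payable in full (it covers the whole
    # department regardless), so the cheaper product is a pure
    # additional cost for the group that moves to it.
    return WHOLE_OF_DEPT_LICENCE + CHEAPER_PER_SEAT * group_size


print(cost_staying_put())              # 50000
print(cost_switching_small_group(50))  # 51000
```

Because the flat licence fee does not shrink, every seat moved to the cheaper product is a net cost, not a saving – the incremental improvement is priced out before it starts.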

Particular Example – Key Resources

Government acquisitions of software for key services or resources also provide especially poor value for money on this analysis. In these cases the importance of the relevant project (for example, the provision of public information by a health body, broadcaster or library) creates lock in because of the general need of the public to interact with that body. That need for interaction creates in the public a need to acquire the software in order to access the resource, with the consequent establishment of a monopoly. It is a dangerous vanity for public resources to adopt lock in technologies in public facing interfaces for the provision of information or services.

Note on Formats, Standards and Protocols

This discussion has focussed on software primarily because it is more familiar and conceptually easier to understand. However, the arguments presented apply equally to the adoption of products which require the use of a particular data format, standard or protocol, if that format, standard or protocol has the lock in characteristic. Indeed, ultimately the issue lies not so much in the software which manipulates the data as in the manner in which data is stored and exchanged. In many instances software (and particularly lock in software) has a direct mapping to a specific data format and can therefore be identified with that format. If no lock in data format is used, the negative effects of acquiring lock in software are greatly reduced.


* For the purposes of this paper “lock in software” refers to software where: the licence for the software ties the licensee to the licensor or to any third party (if only implicitly), or the software is designed to effect such a tying; and there is no capacity, either at all or in practice, for the licensee to avoid or remove that tying. Lock in software includes software which is not interoperable (eg does not save to a standard format in its default configuration), or which is not interoperable outside the product set of a specific vendor or set of vendors, where there is only one manufacturing source for the software (multiple sales channels don’t count) and that source has a substantial degree of market power in the relevant software market. By definition, lock in software does not include software which meets the open source definition.

** Technically probably less than half. It’s only half if the Government must buy one widget per car on the road.

*** The price will be determined by the demand for Widget A and may not reach 10x (but might also exceed it). If, as in the case of software, the law prevents on-sale, Vendor A will also be able to price discriminate, charging each individual consumer as much as they are willing to pay.

Finally, a “widget” in this context is a placeholder term for some object the subject of discussion.

[Note initially made available for review on 7 January 2008]

FOSS Software and SAAS

Summary [updated 25 March 2008]

This is something of a theoretical argument, and one which is unlikely to see the light of day in court (if only because the court rooms in the Sydney registry of the Federal Court have no windows). The thrust of the argument is that the manner in which a SAAS model is implemented using FOSS might determine whether or not the implementation is legal/licensed. Conversely, it may provide a means for businesses to “monetize” open source applications if they control sufficient copyrights (although this is probably not possible with software under GPL v3). In short, it may be that recent “innovations” in copyright law have indirectly harmed open source models by shifting the ground underneath them.

Note on Application to Closed Source

This argument will probably also apply more or less unchanged to closed source programs, although in that case the licensor has flexibility to expressly customise the licence to cover end users. This may not be a practical option for FOSS licensors.

Note on Commercial Application [Added 25 March 2008]

If this argument is correct, then another consequence is that projects which control the copyright in the software will be able to exclude others (ie competitors) from providing SAAS for the software, or to charge them for doing so. Indeed, it may be the case that anyone who holds copyright in part of the software can prevent its use under a SAAS model. Assuming the community thought this inappropriate, a business doing so would run the risk of being blackballed.


The legislative background is (section references to the Copyright Act (Cth) 1968):

  1. it’s illegal to make a reproduction in a material form (s 31(a)(i)) of a substantial part (s 14(1)) of a literary work (which includes a computer program (s 10));
  2. subject to an exception, a reproduction in a material form can include reproductions made in memory in the course of running the program. While this is ultimately a question of fact in each case, historically judges refused to hold that a reproduction of a program in memory while it is running is an infringement: because the copy could not ordinarily be reproduced from the memory, even if it was a reproduction it was not in a material form, and thus not an infringement under s 31(a)(i). However (as part of the AUSFTA – s 186 of the US Free Trade Agreement Implementation Act No 120 of 2004), the definition of material form has been changed so that the capability of reproduction no longer counts, and a copy of the program in RAM is now fair game for an infringement argument; and
  3. the exception (s 47B(1)) provides that the normal running of a program is not an infringement if certain conditions are met. Those conditions are: the reproduction is “incidentally and automatically made” as “part of the technical process of” running the copy (s 47B(1)(a)); and “the running of the copy is done by, or on behalf of, the owner or licensee of the copy” (s 47B(1)(b)).

So the relevant questions are:

  1. when a customer fires up their browser and logs on to an instance of a piece of SAAS are they running a copy of the program (or do they otherwise run such a copy); and, if so,
  2. are they doing so “by, or on behalf of, the owner or licensee of the copy.”

The key words are “licensee of the copy“. That is, of the copy which is being run, not of any old copy. Typically the end user of the program will not have received a copy of it. Most FOSS licences commence from the time the person receives a copy – so while the service provider may be assumed to be properly licensed, the end user is by assumption not a licensee (in any event the particular copy being run is probably licensed to the service provider rather than the end user). Therefore, if the end user is not a licensee, then unless they are running the software on behalf of the service provider (or some other licensee of the copy) they are not within s 47B.

To be on the safe side, therefore, a service provider using FOSS ought to consider having program instances which run independently of its customers. That is, the customer ought not to be the person who initiates or perpetuates the running of a copy of the program. Rather, it should be the service provider (as the licensee of the copy in question) who runs the copy (it will then be run “by… the… licensee of the copy”).

If the end user is in fact the person running the copy of the software, and is not running it “on behalf of” the relevant service provider (presumed to be the licensee), then they won’t get the benefit of section 47B and will therefore run the risk of infringing section 31(a)(i). If there is in fact an infringement by the end user, the service provider is also likely to be liable for authorising the infringement (s 13(2) and a number of cases).

Counter arguments

As I mentioned, it’s not a glory (ie “a nice knock down argument”). You might argue that:

  1. the end user has received a copy of the software (constructively) when the service provider loads it up ready for them to run. If so, this may cause other problems in licences which have provision of a copy as a trigger event for other consequences (such as the supply of the source code);
  2. when the end user runs the program “it’s really” the licensed service provider who is running the copy or it is run on their behalf;
  3. that when the programs are run there is not in fact a substantial reproduction occurring (this would be a question of fact);
  4. the relevant licence does in fact extend to cover the particular end user. Licences which are structured as a grant from the original licensor to the recipient of a copy are unlikely to meet this exception;
  5. when a licensee is licensed in respect of one copy, they are automatically licensed in respect of all other copies (including those they haven’t received) (nb: question of fact);
  6. honestly, who’s going to sue anyway?


If it does turn out to be a problem, it would not be appropriate to lay the blame at the feet of the licences (or their drafters). Rather, it is a consequence of inadequate thought being given to the expansion of copyright law, creating problems which were not foreseen when the relevant licences were drafted. While judges were doing something comparatively sensible – requiring that a copy be capable of being reproduced before it counted as being in material form – this would not have been an issue. However, the AUSFTA has changed that. While there is a provision which attempts to preserve the sanity of the system, it did not anticipate the constantly evolving models in the technology sector.

Note on GPL v3 [Added 25 March 2008]

GPL v3 differs from most FOSS licences in that it defines its permissions implicitly, by reference to the copyright law (through the terms “conveying” and “propagating”). As the copyright law changes, GPL v3 is automatically updated to track those changes. Other FOSS licences, which are restricted to permitting specific actions (eg “reproduction” or “use”), are adversely affected by changes in the copyright law. For example, the addition of a rental right for computer software in the late 90s probably has the practical effect of limiting the scope of those licences – when they were drafted there was no rental right, so no one thought to anticipate it in the licence. Since it is not anticipated in the licence, the commercial rental of such programs is probably in doubt.

In general, therefore, this aspect of GPL v3 would recommend itself to licensors looking for resilience against legislative or judicial changes to the copyright law. Unfortunately, in this case the implicit referencing may not get the licence all the way there. The lynchpin of the argument is when and how a person becomes licensed, and GPL v3 has a similar structure to that mentioned above: the licence is effective upon receipt by the prospective licensee of a copy of the software.

That said, the right to “propagate” expressly permits licensees to authorise third parties to run the program. In practical terms GPL v3 probably does preserve the right to run GPLed software in a SAAS model. Under the law the end user will probably not have a licence (since they will not have received a copy). However, the service provider will have a licence, and that licence permits what would otherwise be a secondary infringement (eg authorisation). Thus the copyright owner might sue end users, but would be unlikely to be able to sue the service provider (and who’s going to be silly enough to sue end users?).
