Archive for June, 2008

NAS Little Box – My New NAS

Comments on NetGear ReadyNAS

I went looking for a NAS today so that I can have a more coherent approach to managing my business data. I was sorely tempted by some USB storage routers (routers with USB ports attached to them) from Netgear, solely because they have a little Tux penguin next to the word “powered” in the top corner of the box – who would have thought that would be used as a selling point.  While some places have a wide variety of NAS devices available if you order ahead, I was not in an order-ahead mood.  That really left me with two options – a Netgear ReadyNAS or a Buffalo EZLink (or I think those are their names anyway).  I decided against the Buffalo because I had heard it was their entry-level NAS and had a minimal feature set.  I was originally disinclined towards the Netgear because I had the (apparently incorrect) impression that they had something against Linux.  However, the box expressly lists Linux/UNIX as a supported client OS.

The box also says it has command line ssh access, but I’ve yet to fathom how to invoke it (it appears to refuse ssh connections?).

Observations once I got it home:

0. In the box is a hardware install guide and a printed copy of both the GPL and LGPL.  In addition there is a small yellow note saying that the device includes GPL/LGPL software and giving a web address from which the source code is available.

1. Has a Linux installer for its application – bonus.

2. Linux installer didn’t work.  Tried as ordinary user and as root.  Failed both times.  Probably a good thing, because I’d prefer not to have to worry about device-specific applications (eg at re-install time).

3. Had a hard time getting the right IP address for it on the LAN (probably because of my dullard router rather than the NAS).  Eventually found it by monitoring active sessions on the router…

4. Connection is easy but it shows no shares to start with.  In fact you have to point the browser at <ip>/shares/index.html, not just <ip>. Tch!  This is apparently a feature – once the NAS is set up you point the browser at the IP address and the (preassigned) shared folders are displayed.  However, HTTP access to most shares is disabled by default – hence nothing was showing.

5. Pressing “next” during setup after entering the email alert details crashed Firefox 3.  After setup, clicking one of the advanced tabs in Konqueror hung my screen! (Control returned by killing the process after logging in as root on ctrl-alt-F1.)

6. Logging on again, everything (except the advanced control button – which crashes Firefox) seems to work.  It has some neat additional options, such as automatic power off/on at set times.

7.  I will clearly need to read the documentation to make proper use of the NAS.

8. Have set it up as an NFS server (and removed the Apple-related items).  It took about two hours to transfer a 2.8GB directory across the network – probably a reflection of the hub/router.  Or rather, of the fact that sync was enabled for the file system.  With sync disabled, sustained writing speed looks like it’s between 1 and 2 MB/s, with a bias toward the low end of that range (a rough way of measuring this is sketched after this list).

9. The HTTP interface for shares seems broken in Firefox/Konqueror – it will not permit copy/paste.

10. Ouch.  Just tried copying some ISOs over the network using NFS and Konqueror.  It initially reported speeds of around 28MB/s until about 128MB had been copied, then hung my windowing system (windows did not refresh).  Ctrl-alt-F1 and attempting to kill Konqueror (didn’t use -9, silly me) froze the entire system.  Yeow.  Apparently it copies the file to a cache or something and gradually sends the data down the line.

11.  The way to copy data from a USB disk is to set up and run a backup job with the USB disk attached.
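As promised in item 8, here is a minimal sketch of the kind of crude write-speed test I have in mind. It is my own illustration only – the mount point, file name and transfer size are assumptions, not anything supplied with the NAS:

```python
import os
import time

# Rough sustained-write benchmark for an NFS-mounted share.
# MOUNT_POINT is an assumption -- substitute wherever the NAS is mounted.
MOUNT_POINT = "/mnt/nas"
TEST_FILE = os.path.join(MOUNT_POINT, "write_test.bin")
CHUNK = b"\0" * (1024 * 1024)   # 1 MiB of zeroes
TOTAL_MB = 256                  # total amount to write

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(TOTAL_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())        # make sure the data has actually reached the NAS
elapsed = time.time() - start

print("Wrote %d MiB in %.1f s -> %.2f MiB/s" % (TOTAL_MB, elapsed, TOTAL_MB / elapsed))
os.remove(TEST_FILE)
```

Dividing the amount written by the elapsed time (after the fsync) gives a sustained figure directly comparable with the 1–2 MB/s mentioned above.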

Someone else’s experience here.

Linux Kernel Drivers – AU Law on Interoperability

For those of you who haven’t seen it, some of the Linux Kernel developers have issued a statement on the use of closed source drivers in conjunction with the kernel (summary – it’s a Bad Thing ™).  My suggestion on this is that the interface for drivers should prohibit non-GPL drivers unless the driver’s author positively asserts that the driver is not a derivative work.

This announcement has triggered some discussion on mailing lists about how to determine whether or not something is a derivative work of the kernel.  Someone mentioned that the analysis of derivative works should be functionally based, citing Linus Torvalds here.  I intended to chime in saying that the Australian High Court had actually said something like that some time ago, but then the short email got out of hand.  So now I’ve done a blog post on…

Some Observations on the Australian Law on Interoperability

Summary

Some time ago, the highest Australian court endorsed a functional analysis when identifying infringement of computer programs (ie if something is copied and is essential to the operation of a copyright program, then it’s an infringement), although it appears to have stepped back from this position more recently.

The current authorities still hold that the reproduction of a data set, even for the purpose of interoperability, will be an infringement if there is copyright in the data set.  Moreover, the courts have simply looked for a causal connection between the original work and the reproduction.  The re-implementation of a work by piecing it together from observation may still result in an infringement.

The legislature has introduced an interoperability exception but it seems to be worded in a way limited to reproductions for the purpose of gaining information – and therefore doesn’t seem to be a lot of help in practice.

The Australian position for the makers of device drivers is not clear.  There are particular problems when an implementation requires the reproduction (albeit indirectly and notwithstanding that it may be necessary for interoperability) of part of the kernel in which copyright subsists.  In this case there is a risk that the requirement to GPL may be invoked by virtue of clause 2(b).

Autodesk v Dyason

There is an interesting case in Australia called Autodesk Inc v Dyason [1] which held that copying 127 bits (yes, bits)[2] from a multi-kilobyte program (I have quoted 32Kb in the past, but now cannot find the reference – if you know please tell me where to find it) constituted a reproduction of a “substantial portion” of the larger program and was therefore a copyright infringement.

The 127 bits were in part of a chip in a dongle included by Autodesk in the package with their software.  The defendants observed the operation of the chip through an oscilloscope and used this information to produce their own dongle which mimicked the operation of the Autodesk dongle.  These bits were said to infringe the copyright in the program (called Widget C) which queried the dongle.

Some interesting aspects of the case are –

  • this is a case about primary infringement of the reproduction right;
  • the dongle was a hardware implementation.  It responded to queries from Widget C in a certain way based on the electrical operation of the various hardware gates in the dongle.  In effect it operated as a look-up table, but there was no look-up table “stored” in the device;
  • the relevant defendant had no access to and did not analyse the Widget C program.  They only analysed the dongle (see para 85 of the judgment at first instance);
  • there are in fact two decisions by the High Court, with the second on procedural issues.

The important part of the decision is that the court expressly took into account the fact that the key bits were essential to the function of the computer program. From McKeogh, Stewart and Griffith, Intellectual Property in Australia, third edition, at para 8.6 (who, incidentally, criticise the decision):

“However, the emphasis on the function of a computer program as determining what is a ‘substantial part’ was maintained by the majority [of the court in Autodesk No 2 -see below], who emphasised that the [127 bits] was ‘essential’ or ‘critical’ to the operation of Widget C.”

This is a decision of the High Court – the ultimate court of appeal here, equivalent to the US Supreme Court.

The defendants sought leave from the High Court to have the issue reheard (which is very unusual), and the High Court gave judgment on whether or not to look at the issue again (ie it was determining an issue of procedure, not re-examining the substance of the appeal) the following year.[3]  The defendants were unsuccessful (3-2), although the then Chief Justice (in dissent) indicated reservations about applying a functional analysis when determining substantiality for the infringement of computer programs.  His reasoning was later to be cited with approval in …

Data Access v Powerflex

In Data Access v Powerflex the court unanimously stepped back from the “essential to the operation” analysis in the Autodesk v Dyason case (only one of the Autodesk judges remained on the bench by that time – see her short concurring judgment at the end), saying this at paragraphs 84ff:

There is great force in the criticism that the “but for” essentiality test which is effectively invoked by the majority in Autodesk No 2 is not practicable as a test for determining whether something which appears in a computer program is a substantial part of it. For that reason, we prefer Mason CJ’s opinion that, in determining whether something is a reproduction of a substantial part of a computer program, the “essential or material features of [the computer program] should be ascertained by considering the originality of the part allegedly taken”

In order for an item in a particular language to be a computer program, it must intend to express, either directly or indirectly, an algorithmic or logical relationship between the function desired to be performed and the physical capabilities of the “device having digital information processing capabilities”. It follows that the originality of what was allegedly taken from a computer program must be assessed with respect to the originality with which it expresses that algorithmic or logical relationship or part thereof. The structure of what was allegedly taken, its choice of commands, and its combination and sequencing of commands, when compared, at the same level of abstraction, with the original, would all be relevant to this inquiry.

That being so, a person who does no more than reproduce those parts of a program which are “data” or “related information” and which are irrelevant to its structure, choice of commands and combination and sequencing of commands will be unlikely to have reproduced a substantial part of the computer program. We say “unlikely” and not “impossible” because it is conceivable that the data, considered alone, could be sufficiently original to be a substantial part of the computer program.

It follows that we are unable to agree with the approach to determining “substantiality” which the majority took in Autodesk No 1 and Autodesk No 2. Because of the importance of the question, we think that the Court should re-open the question of what constitutes a substantial part of a computer program. To depart from the reasoning in the Autodesk cases does not necessarily mean that the outcomes in those cases were wrong. In our view, the look-up table in Widget C was merely data and was not capable of being a substantial part of the AutoCAD program unless the data itself had its own inherent originality. However, re-opening the reasoning in the Autodesk cases does not require the Court to express a view on whether the look-up table in that case had its own inherent originality.

The Data Access case was about whether the re-implementation of a programming language (called Dataflex) was an infringement of copyright in the reserved words of the language.  Surprisingly, this argument was successful at first instance, but it was eventually knocked on the head in principle – though not necessarily in practice – in the High Court.  The reason it was not knocked on the head in practice was a second issue at the centre of the case: a look-up table used to implement a compression algorithm.

In that case the Dataflex program stored its program files in a compressed form, using a form of compression called “Huffman coding” which relies on a lookup table.  The lookup table in this compression scheme is created by identifying commonly occurring characters or strings in sample files and assigning the shortest bit strings to the most common of them (a sketch of how such a table might be built appears after the quoted passage below).  In order to create a competing program which read the compressed files (or to create compressed files for use with the Dataflex program) it was necessary to reverse engineer and re-implement the compression table.  The defendant re-created the table by creating a number of test files and seeing how the Dataflex program actually compressed them (see paragraph 117).  The court found that the original compression table was a literary work and that re-creating it in the manner described was a reproduction (para 124):

The fact that Dr Bennett used an ingenious method of determining the bit string assigned to each character does not make the output of such a process any less a “reproduction” than if Dr Bennett had sat down with a print-out of the table and copy-typed it …
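To make the mechanics a little more concrete, here is a minimal sketch of how a Huffman-style code table can be built from sample text. This is my own illustration of the general technique only – it is not Dr Bennett’s code and not the actual 256-entry Dataflex table:

```python
import heapq
from collections import Counter
from itertools import count

def huffman_table(sample_text):
    """Build a character -> bit-string table from the character
    frequencies observed in sample text (illustrative only)."""
    freq = Counter(sample_text)
    tiebreak = count()  # unique tie-breaker so the heap never compares payloads
    heap = [(n, next(tiebreak), ch) for ch, n in freq.items()]
    heapq.heapify(heap)
    if len(heap) == 1:  # degenerate case: only one distinct character
        return {heap[0][2]: "0"}
    while len(heap) > 1:
        n1, _, left = heapq.heappop(heap)
        n2, _, right = heapq.heappop(heap)
        heapq.heappush(heap, (n1 + n2, next(tiebreak), (left, right)))
    _, _, tree = heap[0]
    table = {}
    def walk(node, prefix):
        if isinstance(node, tuple):      # internal node
            walk(node[0], prefix + "0")
            walk(node[1], prefix + "1")
        else:                            # leaf: a single character
            table[node] = prefix
    walk(tree, "")
    return table

if __name__ == "__main__":
    table = huffman_table("the quick brown fox jumps over the lazy dog " * 10)
    # Common characters end up with the shortest bit strings.
    for ch, bits in sorted(table.items(), key=lambda kv: (len(kv[1]), kv[0])):
        print("%r -> %s" % (ch, bits))
```

A competitor without access to such a table could, as Dr Bennett did, feed carefully chosen test files through the original program and read the assigned bit strings back out of the compressed output – and it is that process which the court nonetheless held to be a “reproduction” of the table.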

The court’s comment may raise problems for those wanting to create interoperable programs.  Anyone, it seems, can structure their program in such a way that the reproduction of a copyright work is necessary to interoperate with the program.  I will defer to the programmers out there on whether and how this might apply to drivers accessing the Linux kernel.  The legislature might have addressed this in…

Section 47D of the Copyright Act

Since the Data Access case the legislature has implemented section 47D of the Copyright Act.  It says that reproducing a copyright work that is a computer program (which includes literary works incorporated into a computer program) for interoperability is not an infringement… in certain circumstances.  Those circumstances are:

(1)  Subject to this Division, the copyright in a literary work that is a computer program is not infringed by the making of a reproduction or adaptation of the work if:

(a)  the reproduction or adaptation is made by, or on behalf of, the owner or licensee of the copy of the program (the original program) used for making the reproduction or adaptation; and

(b)  the reproduction or adaptation is made for the purpose of obtaining information necessary to enable the owner or licensee, or a person acting on behalf of the owner or licensee, to make independently another program (the new program), or an article, to connect to and be used together with, or otherwise to interoperate with, the original program or any other program; and

(c)  the reproduction or adaptation is made only to the extent reasonably necessary to obtain the information referred to in paragraph (b); and

(d)  to the extent that the new program reproduces or adapts the original program, it does so only to the extent necessary to enable the new program to connect to and be used together with, or otherwise to interoperate with, the original program or the other program; and

(e)  the information referred to in paragraph (b) is not readily available to the owner or licensee from another source when the reproduction or adaptation is made.

(2)  Subsection (1) does not apply to the making of a reproduction or adaptation of a computer program from an infringing copy of the computer program.

McKeogh, Stewart and Griffith, cited above, argue that s 47D is a complete answer to interoperability questions (see section 8.7), but I don’t see it.  The key words are in paragraphs (b) and (c), which make it clear that the “reproduction” excused by this section is the reproduction made for the purpose of getting information (another section, section 47AB, gives “computer program” an expansive definition which would probably cover the Huffman table in the Dataflex case).  Paragraph (b) also makes clear that the reproduction in question must occur prior to the making of the interoperating program.  The key point is that the section says nothing about whether you can use the information obtained in this way, and the reproduction which occurs when that information is actually incorporated in the interoperating software does not appear to be covered by this wording.  Paragraph (d), which you might think covers it, is worded as a qualification on paragraphs (a), (b) and (c).

For example, I can’t see that the outcome of the Data Access case would have been different if s 47D had been in force.   McKeogh et al correctly observe that the compression table in the case would likely be within the extended meaning of computer program in the legislation.  However, the compression table is not the relevant thing being reproduced because a reproduction of it isn’t “made for the purpose of obtaining information…”. On the contrary, it is the information which has been obtained as the result of some other reproduction.

Some thoughts on 47D

In my view section 47D is not drafted in a manner which is of much practical assistance to someone wanting to create an interoperable software program.  Perhaps someone might be able to hang their hat on some Explanatory Memorandum, Hansard (hint: try around 30 November 2006) or a CRC report somewhere to explain its purpose, but to do so seems to require stepping outside the words of the section.   Incidentally, if there is a problem with section 47D, then there is also a problem with the exceptions to the prohibition on circumvention of technological protection measures, since it forms the basis of one of those exceptions.  I made submissions on behalf of OSIA asking for section 47D to be reworded in the course of the AUSFTA negotiations over the past couple of years (because this exception is relevant to the added prohibitions on technological protection measures).  As it happens, the Senate Committee agreed with me (see recommendation 13 here).  Strangely, it didn’t make it into the final draft of the legislation.

So, while the legislature seems to have intended to address the interoperability issue, it’s not clear it has gotten there.

[1] [No 1] (1992) 173 CLR 330.

[2] Actually, the judgment is not clear.  In some places it looks like there are 127 different 127-bit strings; in other places it refers to a single 127-bit string – see eg paras 12-14 of Dawson J’s judgment. See also the decision of Northrop J at first instance.

[3] Autodesk Inc v Dyason (No 2) [1993] HCA 6; (1993) 176 CLR 300 (3 March 1993).

[Addition 26 June 2008.  In response to Bruce Wood’s comment, here is a more complete quote of paragraphs 124 and 125 of the judgment (my emphasis):

124. In addition, in our opinion the Full Court was correct in holding that the process undertaken by Dr Bennett constituted a “reproduction” of the standard Dataflex Huffman table. The fact that Dr Bennett used an ingenious method of determining the bit string assigned to each character does not make the output of such a process any less a “reproduction” than if Dr Bennett had sat down with a print-out of the table and copy-typed it into the PFXplus program.

125. The finding that the respondents infringed the appellant’s copyright in the Huffman table embedded in the Dataflex program may well have considerable practical consequences. Not only may the finding affect the relations between the parties to these proceedings, it may also have wider ramifications for anyone who seeks to produce a computer program that is compatible with a program produced by others. These are, however, matters that can be resolved only by the legislature reconsidering and, if it thinks it necessary or desirable, rewriting the whole of the provisions that deal with copyright in computer programs. ]

[Addition 27 June 2008.  The decision on appeal to the Full Federal Court describes the table (under the heading Huffman Compression Table) as being 256 lines of source code (presumably one line to assign each character value).]

The Invisible Closed Source Overhead – 1

Brendan Scott, June 2008

Groundwork

Many people focus on the “hard” costs of software acquisition and maintenance such as licence fees and implementation costs. However, the adoption of a closed source solution has a number of hidden costs which are rarely properly accounted for. Many of these costs arise because of the extreme lack of flexibility in closed source licences and they are likely, at least in some cases, to be far more significant than the hard costs. In this series of posts we will work our way through the closed source acquisition and maintenance path and have a look at some of the more obvious ones.

Underlying Reasons – Natural Monopoly

Software is a natural monopoly (because it has a high fixed cost and low marginal cost to produce). As such there is a tendency for the market to become increasingly concentrated, first within a particular product line (such as databases, operating systems or word processing software), then across groups of product lines (typically because of the creation of artificial cross product dependencies).

Underlying Reasons – Good Enough for Most

Professor Pamela Samuelson and Mr Mitch Kapor have run a lecture series at UC Berkeley on open source (which, by the way, is worth a listen). In one of their lectures they give the example of the university sector in the United States having particular requirements for its accounting, requirements which are not met by the closed source software industry. Indeed, the gap between their requirements and the capability of closed source solutions is so great that universities have banded together to create their own (open source) accounting software – at substantial expense. Ordinarily you would expect that universities would have enough market power to get additional functionality into closed source solutions – but they don’t. Remarkably, the US university sector is not important enough to influence the direction of closed source software vendors on features of their accounting software. This is not an isolated case.

If this sector isn’t influential enough, what hope does an ordinary organisation have? Any product designed to be sold on a per-unit basis (regardless of the unit – eg copy, seat, site, user etc) must be designed to maximise sales of that unit. Here the 80-20 rule comes into play, with the vendor coding the features that attract a large chunk of the market at the lowest cost. That is, in order to maximise profits a closed source vendor must aim for a product which is “good enough for most”.

While the market is young there may be niche players who provide products to the tail 20% that is not properly served (or there might be multiple products, each covering a different 80%). However, this market structure is unstable because software is a natural monopoly (as described above). As products begin to win market share they crowd out competitors, starting with the competing 80-percenters and then moving on to the niche players.

If your organisation has specific feature requirements, it is likely not only that closed source solutions will not meet your needs, but also that you will be unable to influence the vendor to implement features to meet them. Even offering to pay the development costs will not necessarily persuade a vendor to implement the feature, because doing so would burden their code with additional modules which would need to be maintained going forward, and the vendor may be unwilling to take on this maintenance burden.

In short, those who fall into “the many” category will be reasonably well served by their closed source vendor. However, those in “the few” (roughly 20% according to the 80-20 rule) won’t be. I will use the terms The Many and The Few for convenience in the balance of these posts.

Tragically Closed

Closed source vendors are typically reluctant to interoperate with others, especially if the vendor is well positioned in the market. Indeed, a vendor’s support for interoperability is inversely proportional to their market share – when they have a small market share they need their software to be able to work with other people’s software, since by assumption most people have other people’s software. As the vendor’s market share grows, interoperability transforms into intraoperability – the ability to operate with other components in the vendor’s own software portfolio. Indeed, vendors with large market share will even argue that intraoperability and interoperability are the same thing: since that vendor controls the market, (they argue) intraoperability is all that a customer needs de facto.

As market share grows, offering interoperability only assists the growth of the smaller players in the market, so beyond a certain market share vendors have a disincentive to interoperate, and that disincentive grows as market share grows. At the extremes: for someone with no market share to enter the market, their first sale must interoperate with the existing software, so they support interoperability; equally, for someone with 100% market share, supporting interoperability necessarily means losing some part of the market.

Because software is a natural monopoly, and the natural tendency of a closed source market is therefore towards concentration, there is also a natural tendency over time towards lack of interoperability. So, even if you invest in a closed source product which supports or promotes interoperability today, you will likely be in trouble in five or ten years’ time. By then, either the product will have emerged as a market leader and will therefore no longer support interoperability, or it will have fallen by the wayside and the actual market leader will be trying hard to lock out the product you have in fact invested in. Moreover, there is now a move among some closed source vendors to trade interoperability against patent rents, thus increasing the cost of interoperability. Therefore the long term interoperability prospects for any closed source solution are poor (I have used the heading “tragically” closed here in Hardin’s sense – that the long run closure of the system follows almost inexorably from the nature of the industry).

The lack of interoperability is an enormous problem because interoperability is a precondition to competition. When software lacks interoperability it is a symptom that there is no competition in the market. As competition in a market decreases, not only do the costs of products in the market become artificially inflated, but the quality and diversity of the products simultaneously decrease. Lack of interoperability means that a customer cannot avail themselves of self-help to implement features that they want in a product or to remove dis-features (1) (2) from a product. As we mentioned above, unless your requirements are shared by a substantial proportion of the target market, you will be unlikely to be able to have specific features implemented – even if you are willing to pay the cost of implementation.

A result of this Tragic Closure is that closed source software tends to be monolithic rather than modular.

In part 2 we look at some costs of acquisition.

Install from USB drive invisible to BIOS and GRUB

I am trying to set up an audio workstation for the kids (wiping an earlier Ubuntu installation, so GRUB is already on the machine). The distro I have chosen is Jack Audio Distribution (JAD), based on openSUSE 10.2. The computer is an old 800MHz Dell that I got at an online auction a couple of years ago. The bad thing is that JAD is over 1GB. The good thing is that it comes on a DVD (and only a DVD). The bad thing is that the computer doesn’t have a DVD drive, only a CD drive. The good thing is that I have an external USB DVD drive…. The bad thing is that it is not recognised by the BIOS or by GRUB.

So, I boot Knoppix from the CD drive, copy the /boot folder from the JAD DVD across to the hard drive, then reboot (taking out the Knoppix CD, but leaving the JAD DVD in the USB drive). From there I drop to the GRUB command line and point the kernel and initrd commands at the copies in the JAD boot folder (presumably copying those two files alone would have been sufficient?). This boots far enough to get the USB subsystem running… and the installer then automatically detects the DVD and continues the installation from there.

Neat. (It has installed, but I am still evaluating whether or not everything works on the machine)

Houghton and Sheehan on Economic Impact of Open Access

In July 2006 John Houghton and Peter Sheehan published a paper on The Economic Impact of Enhanced Access to Research Findings. Apparently they have built on this work in subsequent papers. The paper explains how the impact of access to information is factored into standard growth models for the economy. The authors posit two parameters (phi – representing the portion of research and development which ends up being useful – and epsilon – representing that knowledge may not be perfectly accessible). They then go on to identify the consequences of a change in these two factors as might occur from a transition to open access. On their analysis the rate of return on research and development increases by a percentage equal to the percentage increase in the efficiency and accessibility of the knowledge (ie in the parameters phi and epsilon). They state:

“… the results… imply that, if a move to open access has a significant beneficial impact on either or both the accessibility or efficiency of R&D, then the benefits of open access will be high also. Assuming, for example, that a move towards open access increased access and efficiency by 5% and that the social rate of return to GERD was 50%, then if there had been open access to all OECD research circa 2003 it would have increased the social returns to R&D by some USD 36 billion. These are recurring gains from the effect on one year’s R&D. Hence, assuming that the change is permanent, they can be converted to growth rate effects.”
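Read against their model, the arithmetic in that passage seems to run roughly as follows. This is my own back-of-envelope reconstruction rather than a calculation taken from the paper: if both the accessibility and efficiency parameters rise by 5%, their product rises by about 10.25%, and applying the assumed 50% social rate of return gives

```latex
\begin{align*}
\Delta R &\approx \big[(1+0.05)^2 - 1\big] \times 0.5 \times \mathrm{GERD}\\
         &\approx 0.1025 \times 0.5 \times \mathrm{GERD}
          \approx 0.051 \times \mathrm{GERD}.
\end{align*}
```

For that to equal the quoted USD 36 billion, the GERD base needs to be on the order of USD 700 billion, which is broadly consistent with OECD-wide R&D expenditure circa 2003 (again, my inference rather than a figure stated in the extract).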

The issue the paper does not address, however, is what the values of these parameters are in practice, and how a move to open access would affect them. They put forward some hypotheticals in a country-by-country table.

They Really Mean Open Access+

However, the assumption they have made is that the knowledge which is accessible may also be used. This is a stronger assumption than is made by some parts of the “open access” movement, which impose differing degrees of purpose-based and other restrictions – commentary: example 1, example 2.

Hayek on Free Software

Last month I gave a lecture on Free and Open Source Software to a computer science ethics class at the University of NSW (one day I may break it into parts and make separate articles out of them).  One of the themes I developed was the importance of an ethical stance on freedom in deriving substantive economic benefits.  In support of this I quoted the following from economist Friedrich Hayek:

“The enemies of liberty have always based their arguments on the contention that order in human affairs requires that some should give orders and others obey.  Much of the opposition to a system of freedom under general law arises from the inability to conceive of an effective co-ordination of human activities without deliberate organization by a commanding intelligence. …
“[The orderliness of social activity] cannot be the result of a unified direction if we want individuals to adjust their actions to the particular circumstances largely known only to them and never known in their totality to any one mind.  Order with reference to society thus means essentially that individual action is guided by successful forethought, that people not only make effective use of their knowledge but can also foresee with a high degree of confidence what collaboration they can expect from others.”

F. A. Hayek, The Constitution of Liberty, Routledge Classics, 2006 @ 140.

This quote, and the arguments that Hayek made in support of it, contains within it most, if not all, of the elements of a complete answer to most criticisms of free and open source software.  Feel free to fling it at cynics and the peddlers of false doctrine.

Benefits of FLOSS for the Kids’ PC

Having Linux loaded on the kids’ PC has been A Good Thing. It offers me a number of “management” advantages over the alternatives (although this may be a function of my lack of the same level of technical knowledge of the other systems).

On Linux I can:

  • do (almost) everything on their computer without leaving my own computer (ssh! or occasionally vnc) – this is a particularly important consideration when I am in front of my own computer with one hand holding a sleeping baby;
  • see what they’re doing on the computer (ps);
  • boot them off games or turn the computer off when they ignore their mother telling them to do something or other (kill);
  • change the volume on the computer (alsamixer);
  • turn access to certain games or programs (or the whole windowing system) on or off (/etc/inittab);
  • install and uninstall programs with ease (and without worrying about having to find the original disks and/or some access code); and
  • install lightweight windowing systems to improve performance.

If I wanted to I could also set login scripts to restrict usage to certain times but not, unfortunately, based on whether or not they are still wearing their pyjamas.
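For what it’s worth, here is a toy sketch of what such a time-based check might look like. The allowed hours, the message and the idea of calling it from the kids’ login scripts are all assumptions of mine for illustration:

```python
#!/usr/bin/env python
# Hypothetical check to be called from a login script: exit non-zero outside
# the allowed hours so the calling script can refuse (or cut short) the login.
import sys
from datetime import datetime

ALLOWED_HOURS = range(9, 19)   # assumed window: 9am to 6:59pm

if datetime.now().hour not in ALLOWED_HOURS:
    print("Computer time is over for today.")
    sys.exit(1)    # the login script can act on this, eg by logging straight back out

sys.exit(0)
```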

It also has beneficial side effects, such as plausible deniability regarding the ability to run cheapo non-Linux games (which accumulate from time to time as part of a bundle with something else), and (relatedly) the inability to install non-Linux rubbishware from the web.

By contrast, I have no ability to remotely manage them on the Windows box that they use from time to time.


