[Sorry to be repetitive at the frequency of 1/year]
This is experimental data which is immutable once acquired; read: it does
not change anymore, ever. It is also not read very often, and less so the
older it gets. It needs no high-speed access either; we keep copies of it
in databases for that.
The costs of keeping it around are roughly: equipment + ops time.
It appears that this particular equipment (NetApp filers), while easy to
operate, is becoming too expensive. It might be a good idea to invest a
little ops time in cheap storage boxes for data that does not need
filer-quality storage.
ATA disks are available at about half a Euro per Gigabyte in units of
300GB. Thus it is possible to put about a Terabyte of storage in any
simple Wintel box: slightly more without redundancy, slightly less with
software RAID. Any (old) Wintel box will do. The equipment cost
will be negligible: EUR 600 per terabyte if you use old Wintel boxes;
otherwise add the cost of the simplest Wintel box. When building a
couple of them and operating them all the same way, the ops cost will not
be too high. One does not even need RAID: just build two of them and have
a cron job rsyncing between them for full hot redundancy. Name them
cheapfiler-1 and cheapfiler-1-copy, and make the copy read-only to users.
Make as many as we need. Spread them around for physical redundancy.
Not rocket science. What's the problem?
I more or less agree. For EUR 12000 you can put 11 TB in a
rack-mounted box (prices from alternate.nl, a month or so ago):