under the terms of either licence, at your option.) See the file COPYING.GPL
for the terms of the GNU General Public License, version 2. See the file
COPYING.TGPPL.html for the terms of the Transitive Grace Period Public Licence,
-version 1.0.
+version 1.0. In addition, Allmydata, Inc. offers other licensing terms. If you
+would like to inquire about a commercial relationship with Allmydata, Inc.,
+please contact partnerships@allmydata.com and visit http://allmydata.com .
The most widely known example of an erasure code is the RAID-5 algorithm, which
makes it so that in the event of the loss of any one hard drive, the stored data
This will run the tests of the C API, the Python API, and the command-line
tools.
-To run the tests of the Haskell API, do XYZ.
+To run the tests of the Haskell API:
+ % runhaskell haskell/test/FECTest.hs
+
+Note that you must install the library before running the Haskell API tests:
+the interpreter cannot process FEC.hs directly, because it references a
+foreign (FFI) function.
* Community
The source is currently available via darcs on the web with the command:
-darcs get http://allmydata.org/source/zfec
+darcs get http://allmydata.org/source/zfec/trunk
More information on darcs is available at http://darcs.net
degenerates to the equivalent of the Unix "split" utility which simply splits
the input into successive segments. Similarly, when k == 1 it degenerates to
the equivalent of the unix "cp" utility -- each block is a complete copy of the
-input data. The "zfec" command-line tool does not implement these degenerate
-cases.)
+input data.)
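The two degenerate cases can be illustrated with a short, stdlib-only Python
sketch (zfec itself is not invoked here; this only demonstrates what the block
contents look like in those cases):

```python
data = b"abcdefgh"

# k == m: encoding degenerates to the Unix "split" utility -- the m
# blocks are just successive 1/k-sized segments of the input.
k = m = 4
seg = len(data) // k
split_blocks = [data[i * seg:(i + 1) * seg] for i in range(k)]
assert split_blocks == [b"ab", b"cd", b"ef", b"gh"]
assert b"".join(split_blocks) == data

# k == 1: encoding degenerates to the Unix "cp" utility -- every one
# of the m blocks is a complete copy of the input.
copy_blocks = [data for _ in range(3)]
assert all(block == data for block in copy_blocks)
```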
Note that each "primary block" is a segment of the original data, so its size is
1/k'th of the size of original data, and each "secondary block" is of the same
* Command-Line Tool
-NOTE: the format of the sharefiles was changed in zfec v1.1 to allow K == 1 and
-K == M. This change of the format of sharefiles means that zfec >= v1.1 cannot
-read sharefiles produced by zfec < v1.1.
-
The bin/ directory contains two Unix-style, command-line tools "zfec" and
"zunfec". Execute "zfec --help" or "zunfec --help" for usage instructions.
Note: a Unix-style tool like "zfec" does only one thing -- in this case erasure
coding -- and leaves other tasks to other tools. Other Unix-style tools that go
-well with zfec include "GNU tar" for archiving multiple files and directories
-into one file, "rzip" or "lrzip" for compression, and "GNU Privacy Guard" for
-encryption or "sha256sum" for integrity. It is important to do things in order:
-first archive, then compress, then either encrypt or sha256sum, then erasure
-code. Note that if GNU Privacy Guard is used for privacy, then it will also
-ensure integrity, so the use of sha256sum is unnecessary in that case.
+well with zfec include "GNU tar" or "7z" (a.k.a. "p7zip") for archiving
+multiple files and directories into one file, "7z" or "rzip" for compression,
+and "GNU Privacy Guard" for encryption or "sha256sum" for integrity. It is
+important to do things in order: first archive, then compress, then either
+encrypt or sha256sum, then erasure code. Note that if GNU Privacy Guard is
+used for privacy, then it also ensures integrity, so sha256sum is unnecessary
+in that case. Likewise, if 7z is used for archiving, then it also compresses,
+so a separate compressor is unnecessary in that case.
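The archive-compress-integrity ordering can be sketched with Python's standard
library alone (the filenames and zfec parameters in the comments are
illustrative; the final erasure-coding step is shown only as a comment, since
it requires zfec to be installed):

```python
import gzip
import hashlib
import io
import tarfile

# 1. Archive: pack files into a single tar stream (the "GNU tar" step).
tar_buf = io.BytesIO()
with tarfile.open(fileobj=tar_buf, mode="w") as tar:
    payload = b"example file contents"
    info = tarfile.TarInfo(name="example.txt")
    info.size = len(payload)
    tar.addfile(info, io.BytesIO(payload))

# 2. Compress the archive (gzip stands in here for rzip or 7z).
compressed = gzip.compress(tar_buf.getvalue())

# 3. Integrity: record a SHA-256 digest (or encrypt with GNU Privacy
#    Guard instead, which covers integrity as well).
digest = hashlib.sha256(compressed).hexdigest()

# 4. Erasure-code the result *last*, e.g. with the command-line tool:
#      % zfec -m 8 -k 5 archive.tar.gz
#    (not executed here)
```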
* Performance Measurements
block, which is the next few bytes of the file, and so on. The last primary
block has blocknum k-1. The blocknum of each secondary block is an arbitrary
integer between k and 255 inclusive. (When using the Python API, if you don't
-specify which blocknums you want for your secondary blocks when invoking
-encode(), then it will by default provide the blocks with ids from k to m-1
-inclusive.)
+specify which secondary blocks you want when invoking encode(), then it will by
+default provide the blocks with ids from k to m-1 inclusive.)
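For example, the numbering convention looks like this (a stdlib-only
illustration; zfec itself is not invoked here):

```python
k, m = 3, 8

# Primary blocks are numbered 0 .. k-1.
primary_ids = list(range(k))
assert primary_ids == [0, 1, 2]

# When no blocknums are requested from encode(), the secondary blocks
# default to ids k .. m-1 inclusive.
default_secondary_ids = list(range(k, m))
assert default_secondary_ids == [3, 4, 5, 6, 7]

# Any secondary blocknum must be an integer in the range k .. 255.
assert all(k <= i <= 255 for i in default_secondary_ids)
```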
** C API
The output from fec_encode() is the requested set of secondary blocks which are
written into output buffers provided by the caller.
+Note that this fec_encode() is a "low-level" API in that it requires the input
+data to be provided in a set of memory buffers of exactly the right sizes. If
+you are starting instead with a single buffer containing all of the data then
+please see easyfec.py's "class Encoder" as an example of how to split a single
+large buffer into the appropriate set of input buffers for fec_encode(). If you
+are starting with a file on disk, then please see filefec.py's
+encode_file_stringy_easyfec() for an example of how to read the data from a file
+and pass it to "class Encoder". The Python interface provides these
+higher-level operations, as does the Haskell interface. If you implement
+these higher-level tasks in a language other than Python or Haskell, then
+please send a patch to zfec-dev@allmydata.org so that your API can be
+included in future releases of zfec.
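For illustration, here is a rough, stdlib-only Python sketch of the kind of
buffer preparation that easyfec.py's "class Encoder" performs before calling
the low-level API: pad the input so it divides evenly, then slice it into k
equal-sized buffers. (The helper name is hypothetical, and this sketch omits
how the original length is recorded so the padding can be stripped after
decoding.)

```python
def prepare_input_buffers(data: bytes, k: int) -> list:
    """Pad data and slice it into the k equal-sized buffers that a
    low-level encode call requires. Hypothetical helper, not part of
    the zfec API."""
    block_size = (len(data) + k - 1) // k  # ceiling division
    padded = data + b"\x00" * (block_size * k - len(data))
    return [padded[i * block_size:(i + 1) * block_size] for i in range(k)]

buffers = prepare_input_buffers(b"0123456789", 3)
assert len(buffers) == 3
assert all(len(b) == 4 for b in buffers)
```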
+
fec_decode() takes as input an array of k pointers, where each pointer points to
a buffer containing a block. There is also a separate input parameter which is
an array of blocknums, indicating the blocknum of each of the blocks which is
from the input and had to be reconstructed. These reconstructed blocks are
written into output buffers provided by the caller.
+
** Python API
encode() and decode() take as input a sequence of k buffers, where a "sequence"
** Haskell API
-XYZ
+The Haskell code is fully Haddocked. To generate the documentation, run:
+ % runhaskell Setup.lhs haddock
* Utilities
A C compiler is required. To use the Python API or the command-line tools, a
Python interpreter is also required. We have tested it with Python v2.4 and
-v2.5. For the Haskell interface, a Haskell compiler is required XYZ.
+v2.5. For the Haskell interface, GHC >= 6.8.1 is required.
* Acknowledgements