zfec -- efficient, portable erasure coding tool
===============================================
Generate redundant blocks of information such that if some of the
blocks are lost then the original data can be recovered from the
remaining blocks. This package includes command-line tools, a C API,
a Python API, and a Haskell API.
This package implements an "erasure code", or "forward error correction code".
You may use this package under the GNU General Public License, version 2 or, at
your option, any later version. You may use this package under the Transitive
Grace Period Public Licence, version 1.0 or, at your option, any later version.
(You may choose to use this package under the terms of either licence, at your
option.) See the file COPYING.GPL for the terms of the GNU General Public
License, version 2. See the file COPYING.TGPPL.html for the terms of the
Transitive Grace Period Public Licence, version 1.0.
The most widely known example of an erasure code is the RAID-5 algorithm, which
makes it so that in the event of the loss of any one hard drive, the stored data
can be completely recovered. The algorithm in the zfec package has a similar
effect, but instead of recovering from the loss of only a single element, it can
be parameterized to choose in advance the number of elements whose loss it can
tolerate.
This package is largely based on the old "fec" library by Luigi Rizzo et al.,
which is a mature and optimized implementation of erasure coding. The zfec
package makes several changes from the original "fec" package, including
addition of the Python API, refactoring of the C API to support zero-copy
operation, a few clean-ups and optimizations of the core code itself, and the
addition of a command-line tool named "zfec".
Installation
------------

This package is managed with the "setuptools" package management tool. To build
and install the package directly into your system, just run ``python ./setup.py
install``. If you prefer to keep the package limited to a specific directory so
that you can manage it yourself (perhaps by using the "GNU stow" tool), then
give it these arguments: ``python ./setup.py install
--single-version-externally-managed
--record=${specificdirectory}/zfec-install.log --prefix=${specificdirectory}``
To run the self-tests, execute ``python ./setup.py test`` (or if you have
Twisted Python installed, you can run ``trial zfec`` for nicer output and test
options). This will run the tests of the C API, the Python API, and the
command-line tools.

To run the tests of the Haskell API: ``runhaskell haskell/test/FECTest.hs``

Note that in order to run the Haskell API tests you must have installed the
library first, since the interpreter cannot process FEC.hs directly: it takes
a reference to an FFI function.
Community
---------

The source is currently available via darcs on the web with the command:

  darcs get http://tahoe-lafs.org/source/zfec/trunk

More information on darcs is available at http://darcs.net

Please join the zfec mailing list and submit patches:

  <http://tahoe-lafs.org/cgi-bin/mailman/listinfo/zfec-dev>
Overview
--------

This package performs two operations, encoding and decoding. Encoding
takes some input data and expands its size by producing extra "check
blocks", also called "secondary blocks". Decoding takes some blocks --
any combination of blocks of the original data (called "primary
blocks") and secondary blocks -- and produces the original data.
The encoding is parameterized by two integers, k and m. m is the total number
of blocks produced, and k is how many of those blocks are necessary to
reconstruct the original data. m is required to be at least 1 and at most 256,
and k is required to be at least 1 and at most m.
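The parameter constraints above are easy to state in code. This is a minimal illustrative check, not part of the zfec API:

```python
def check_params(k: int, m: int) -> None:
    # Constraints described above: 1 <= m <= 256 and 1 <= k <= m.
    if not 1 <= m <= 256:
        raise ValueError("m must be at least 1 and at most 256")
    if not 1 <= k <= m:
        raise ValueError("k must be at least 1 and at most m")

check_params(94, 100)   # fine: the parameters used in the benchmark below
```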
(Note that when k == m then there is no point in doing erasure coding -- it
degenerates to the equivalent of the Unix "split" utility, which simply splits
the input into successive segments. Similarly, when k == 1 it degenerates to
the equivalent of the Unix "cp" utility -- each block is a complete copy of
the input data.)
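The two degenerate cases can be sketched with plain Python slicing; this is an illustration of the behaviour described above, not the zfec implementation:

```python
def split_blocks(data: bytes, k: int) -> list:
    """k == m case: coding degenerates to splitting the data into k
    successive segments (like the Unix "split" utility)."""
    block_size = -(-len(data) // k)  # ceiling division
    return [data[i * block_size:(i + 1) * block_size] for i in range(k)]

def copy_blocks(data: bytes, m: int) -> list:
    """k == 1 case: every block is a complete copy of the input
    (like the Unix "cp" utility)."""
    return [data] * m

blocks = split_blocks(b"abcdefgh", 4)   # 4 successive 2-byte segments
copies = copy_blocks(b"abcdefgh", 3)    # 3 full copies
```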
Note that each "primary block" is a segment of the original data, so its size is
1/k'th of the size of the original data, and each "secondary block" is of the
same size, so the total space used by all the blocks is m/k times the size of
the original data (plus some padding to fill out the last primary block to be
the same size as all the others). In addition to the data contained in the
blocks themselves, there are also a few pieces of metadata which are necessary
for later reconstruction. Those pieces are: 1. the value of k, 2. the value of
m, 3. the sharenum of each block, and 4. the number of bytes of padding that
were used. The "zfec" command-line tool compresses these pieces of data and
prepends them to the beginning of each share, so each sharefile produced by the
"zfec" command-line tool is between one and four bytes larger than the share
data it contains.
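The size arithmetic above can be sketched as follows; ``share_layout`` is a hypothetical helper name for illustration, not part of zfec:

```python
import math

def share_layout(data_len: int, k: int, m: int):
    """Sketch of the arithmetic described above: each primary block is
    1/k'th of the (padded) data, so m blocks take m/k times the
    original size. Returns (block_size, padding, total_bytes)."""
    block_size = math.ceil(data_len / k)
    padding = block_size * k - data_len
    total = block_size * m
    return block_size, padding, total

block_size, padding, total = share_layout(1000, 3, 8)
# 1000 bytes with k=3: blocks of 334 bytes, 2 bytes of padding;
# 8 blocks occupy 334 * 8 = 2672 bytes, about 8/3 of the original.
```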
The decoding step requires as input k of the blocks which were produced by the
encoding step. The decoding step produces as output the data that was earlier
input to the encoding step.
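zfec itself uses a Reed-Solomon code over GF(2^8), but the idea that any k of the m blocks suffice can be demonstrated in the simplest nontrivial case, m = k + 1, with a toy XOR parity code. This is illustrative only and is not the zfec algorithm:

```python
from functools import reduce

def xor_bytes(a: bytes, b: bytes) -> bytes:
    return bytes(x ^ y for x, y in zip(a, b))

def encode_parity(primary: list) -> bytes:
    """Toy encoder for m = k + 1: the single secondary block is the
    XOR of all k primary blocks."""
    return reduce(xor_bytes, primary)

def decode_one_missing(blocks: dict, k: int) -> list:
    """Recover all primary blocks from any k of the k + 1 blocks.
    `blocks` maps blocknum -> block; blocknum k is the parity block."""
    primary = [blocks.get(i) for i in range(k)]
    if None in primary:  # one primary block is missing: XOR the rest
        missing = primary.index(None)
        others = [b for i, b in blocks.items() if i != missing]
        primary[missing] = reduce(xor_bytes, others)
    return primary

data = [b"ab", b"cd", b"ef"]                  # k = 3 primary blocks
parity = encode_parity(data)                   # secondary block, blocknum 3
recovered = decode_one_missing({0: data[0], 2: data[2], 3: parity}, 3)
assert recovered == data                       # block 1 was reconstructed
```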
Command-Line Tool
-----------------

The bin/ directory contains two Unix-style, command-line tools, "zfec"
and "zunfec". Execute ``zfec --help`` or ``zunfec --help`` for usage
instructions.
Note: a Unix-style tool like "zfec" does only one thing -- in this
case erasure coding -- and leaves other tasks to other tools. Other
Unix-style tools that go well with zfec include `GNU tar`_ for
archiving multiple files and directories into one file, `lzip`_ for
compression, and `GNU Privacy Guard`_ for encryption or `sha256sum`_
for integrity. It is important to do things in order: first archive,
then compress, then either encrypt or integrity-check, then erasure
code. Note that if GNU Privacy Guard is used for privacy, then it
will also ensure integrity, so the use of sha256sum is unnecessary in
that case. Note also that you need to do integrity checking (such as
with sha256sum) on the blocks that result from the erasure coding in
*addition* to doing it on the file contents! (There are two
different subtle failure modes -- see "more than one file can match an
immutable file cap" on the `Hack Tahoe-LAFS!`_ Hall of Fame.)
The `Tahoe-LAFS`_ project uses zfec as part of a complete distributed
filesystem with integrated encryption, integrity, remote distribution
of the blocks, directory structure, backup of changed files or
directories, access control, immutable files and directories,
proof-of-retrievability, and repair of damaged files and directories.
.. _GNU tar: http://directory.fsf.org/project/tar/
.. _lzip: http://www.nongnu.org/lzip/lzip.html
.. _GNU Privacy Guard: http://www.gnupg.org/
.. _sha256sum: http://www.gnu.org/software/coreutils/
.. _Tahoe-LAFS: http://tahoe-lafs.org
.. _Hack Tahoe-LAFS!: http://tahoe-lafs.org/hacktahoelafs/
Performance
-----------

To run the benchmarks, execute the included bench/bench_zfec.py script
with optional --k= and --m= arguments.
On my Athlon 64 2.4 GHz workstation (running Linux), the "zfec" command-line
tool encoded a 160 MB file with m=100, k=94 (about 6% redundancy) in 3.9
seconds, where the "par2" tool encoded the file with about 6% redundancy in 27
seconds. zfec encoded the same file with m=12, k=6 (100% redundancy) in 4.1
seconds, where par2 encoded it with about 100% redundancy in 7 minutes and 56
seconds.
The underlying C library in benchmark mode encoded from a file at about 4.9
million bytes per second and decoded at about 5.8 million bytes per second.

On Peter's fancy Intel Mac laptop (2.16 GHz Core Duo), it encoded from a file
at about 6.2 million bytes per second.

On my even fancier Intel Mac laptop (2.33 GHz Core Duo), it encoded from a
file at about 6.8 million bytes per second.

On my old PowerPC G4 867 MHz Mac laptop, it encoded from a file at about 1.3
million bytes per second.
Here is a paper analyzing the performance of various erasure codes and their
implementations, including zfec:

  http://www.usenix.org/events/fast09/tech/full_papers/plank/plank.pdf
Zfec shows good performance on different machines and with different values of
k and m. It also has a nice small memory footprint.
API
---

Each block is associated with a "blocknum". The blocknum of each primary block
is its index (starting from zero), so the 0'th block is the first primary
block, which is the first few bytes of the file, the 1'st block is the next
primary block, which is the next few bytes of the file, and so on. The last
primary block has blocknum k-1. The blocknum of each secondary block is an
arbitrary integer between k and 255 inclusive. (When using the Python API, if
you don't specify which secondary blocks you want when invoking encode(), then
it will by default provide the blocks with ids from k to m-1 inclusive.)
C API
~~~~~

fec_encode() takes as input an array of k pointers, where each
pointer points to a memory buffer containing the input data (i.e.,
the i'th buffer contains the i'th primary block). There is also a
second parameter which is an array of the blocknums of the secondary
blocks which are to be produced. (Each element in that array is
required to be the blocknum of a secondary block, i.e. it is
required to be >= k and < m.)
The output from fec_encode() is the requested set of secondary
blocks, which are written into output buffers provided by the caller.
Note that this fec_encode() is a "low-level" API in that it requires
the input data to be provided in a set of memory buffers of exactly
the right sizes. If you are starting instead with a single buffer
containing all of the data then please see easyfec.py's "class
Encoder" as an example of how to split a single large buffer into
the appropriate set of input buffers for fec_encode(). If you are
starting with a file on disk, then please see filefec.py's
encode_file_stringy_easyfec() for an example of how to read the data
from a file and pass it to "class Encoder". The Python interface
provides these higher-level operations, as does the Haskell
interface. If you implement functions to do these higher-level
tasks in other languages than Python or Haskell, then please send a
patch to zfec-dev@tahoe-lafs.org so that your API can be included in
future releases of zfec.
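The buffer-splitting step can be sketched in a few lines of Python. This is only an illustration of the idea, not zfec's actual code; see easyfec.py for the real implementation:

```python
def split_for_encode(data: bytes, k: int) -> list:
    """Pad the data so it divides evenly into k blocks, then slice it
    into k equal-sized buffers suitable as the primary-block inputs of
    a low-level encoder. (Illustrative sketch only.)"""
    block_size = -(-len(data) // k)              # ceiling division
    padded = data.ljust(block_size * k, b"\x00") # zero-pad the tail
    return [padded[i * block_size:(i + 1) * block_size] for i in range(k)]

buffers = split_for_encode(b"hello world", 4)
# 11 bytes -> 4 buffers of 3 bytes; the last buffer carries 1 byte of padding
```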
fec_decode() takes as input an array of k pointers, where each
pointer points to a buffer containing a block. There is also a
separate input parameter which is an array of blocknums, indicating
the blocknum of each of the blocks which is being passed in.

The output from fec_decode() is the set of primary blocks which were
missing from the input and had to be reconstructed. These
reconstructed blocks are written into output buffers provided by the
caller.
Python API
~~~~~~~~~~

encode() and decode() take as input a sequence of k buffers, where a
"sequence" is any object that implements the Python sequence
protocol (such as a list or tuple) and a "buffer" is any object that
implements the Python buffer protocol (such as a string or array).
The contents that are required to be present in these buffers are
the same as for the C API.
encode() also takes a list of desired blocknums. Unlike the C API,
the Python API accepts blocknums of primary blocks as well as
secondary blocks in its list of desired blocknums. encode() returns
a list of buffer objects which contain the blocks requested. For
each requested block which is a primary block, the resulting list
contains a reference to the appropriate primary block from the
input list. For each requested block which is a secondary block,
the list contains a newly created string object containing that
secondary block.
decode() also takes a list of integers indicating the blocknums of
the blocks being passed in. decode() returns a list of buffer
objects which contain all of the primary blocks of the original data
(in order). For each primary block which was present in the input
list, the result list simply contains a reference to the object
that was passed in the input list. For each primary block which was
not present in the input, the result list contains a newly created
string object containing that primary block.
Beware of a "gotcha" that can result from the combination of mutable
data and the fact that the Python API returns references to its
inputs.

Returning references to its inputs is efficient since it avoids
making an unnecessary copy of the data, but if the object which was
passed as input is mutable and if that object is mutated after the
call to zfec returns, then the result from zfec -- which is just a
reference to that same object -- will also be mutated. This
subtlety is the price you pay for avoiding data copying. If you
don't want to have to worry about this then you can simply use
immutable objects (e.g. Python strings) to hold the data that you
pass to zfec.
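The gotcha does not depend on zfec specifically; any function that returns references to mutable inputs behaves this way. A minimal stand-in (not the zfec API) demonstrates it:

```python
def passthrough(blocks):
    """Stand-in for an API that, like zfec's encode()/decode(),
    returns references to its input buffers rather than copies."""
    return list(blocks)

buf = bytearray(b"primary block 0")       # a mutable input buffer
result = passthrough([buf])

buf[0:7] = b"MUTATED"                     # mutate the input after the call...
assert result[0] == bytearray(b"MUTATED block 0")  # ...the "result" changed too

safe = passthrough([bytes(buf)])          # an immutable copy avoids the gotcha
```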
Haskell API
~~~~~~~~~~~

The Haskell code is fully Haddocked; to generate the documentation,
run ``runhaskell Setup.lhs haddock``.
Utilities
---------

The filefec.py module has a utility function for efficiently reading a
file and encoding it piece by piece. This module is used by the "zfec"
and "zunfec" command-line tools from the bin/ directory.
Dependencies
------------

A C compiler is required. To use the Python API or the command-line
tools, a Python interpreter is also required. We have tested it with
Python v2.4, v2.5, v2.6, and v2.7. For the Haskell interface, GHC is
required.
Acknowledgements
----------------

Thanks to the author of the original fec lib, Luigi Rizzo, and the folks
that contributed to it: Phil Karn, Robert Morelos-Zaragoza, Hari
Thirumoorthy, and Dan Rubenstein. Thanks to the Mnet hackers who wrote
an earlier Python wrapper, especially Myers Carpenter and Hauke
Johannknecht. Thanks to Brian Warner and Amber O'Whielacronx for help
with the API, documentation, debugging, compression, and unit tests.
Thanks to Adam Langley for improving the C API and contributing the
Haskell API. Thanks to the creators of GCC (starting with Richard M.
Stallman) and Valgrind (starting with Julian Seward) for a pair of
excellent tools. Thanks to my coworkers at Allmydata --
http://allmydata.com -- Fabrice Grinda, Peter Secor, Rob Kinninmont,
Brian Warner, Zandr Milewski, Justin Boreta, and Mark Meras -- for
sponsoring this work and releasing it under a Free Software licence.
Thanks to Jack Lloyd, Samuel Neves, and David-Sarah Hopwood.