Efficient, portable, programmable erasure coding, a.k.a. forward error
correction. Generate redundant blocks of information such that if some
of the blocks are lost then the original data can be recovered from
the remaining blocks. This package includes command-line tools, a C API,
a Python API, and a Haskell API.
This package implements an "erasure code", or "forward error correction code".
You may use this package under the GNU General Public License, version 2 or, at
your option, any later version. You may also use this package under the
Transitive Grace Period Public Licence, version 1.0 or, at your option, any
later version. (You may choose to use this package under the terms of either
licence, at your option.) See the file COPYING.GPL for the terms of the GNU
General Public License, version 2. See the file COPYING.TGPPL.html for the
terms of the Transitive Grace Period Public Licence, version 1.0.
The most widely known example of an erasure code is the RAID-5 algorithm, which
makes it so that in the event of the loss of any one hard drive, the stored data
can be completely recovered. The algorithm in the zfec package has a similar
effect, but instead of recovering from the loss of only a single element, it can
be parameterized to choose in advance the number of elements whose loss it can
withstand.
This package is largely based on the old "fec" library by Luigi Rizzo et al.,
which is a mature and optimized implementation of erasure coding. The zfec
package makes several changes from the original "fec" package, including the
addition of the Python API, refactoring of the C API to support zero-copy
operation, a few clean-ups and optimizations of the core code itself, and the
addition of a command-line tool named "zfec".
This package is managed with the "setuptools" package management tool. To build
and install the package directly into your system, just run "python ./setup.py
install". If you prefer to keep the package limited to a specific directory so
that you can manage it yourself (perhaps by using the "GNU stow" tool), then
give it these arguments: "python ./setup.py install
--single-version-externally-managed
--record=${specificdirectory}/zfec-install.log --prefix=${specificdirectory}"
To run the self-tests, execute "python ./setup.py test" (or if you have Twisted
Python installed, you can run "trial zfec" for nicer output and test options).
This will run the tests of the C API, the Python API, and the command-line
tool.
To run the tests of the Haskell API:

% runhaskell haskell/test/FECTest.hs
Note that in order to run the Haskell API tests you must have installed the
library first, because the interpreter cannot process FEC.hs as it takes a
reference to an FFI function.
The source is currently available via darcs on the web with the command:

darcs get http://allmydata.org/source/zfec/trunk

More information on darcs is available at http://darcs.net

Please join the zfec mailing list and submit patches:

<http://allmydata.org/cgi-bin/mailman/listinfo/zfec-dev>
This package performs two operations, encoding and decoding. Encoding takes
some input data and expands its size by producing extra "check blocks", also
called "secondary blocks". Decoding takes some blocks -- any combination of
blocks of the original data (called "primary blocks") and "secondary blocks" --
and produces the original data.
The encoding is parameterized by two integers, k and m. m is the total number
of blocks produced, and k is how many of those blocks are necessary to
reconstruct the original data. m is required to be at least 1 and at most 256,
and k is required to be at least 1 and at most m.
(Note that when k == m then there is no point in doing erasure coding -- it
degenerates to the equivalent of the Unix "split" utility, which simply splits
the input into successive segments. Similarly, when k == 1 it degenerates to
the equivalent of the Unix "cp" utility -- each block is a complete copy of the
input data.)
Note that each "primary block" is a segment of the original data, so its size is
1/k'th of the size of the original data, and each "secondary block" is of the
same size, so the total space used by all the blocks is m/k times the size of
the original data (plus some padding to fill out the last primary block to be
the same size as all the others). In addition to the data contained in the
blocks themselves there are also a few pieces of metadata which are necessary
for later reconstruction. Those pieces are: 1. the value of k, 2. the value of
m, 3. the blocknum of each block, 4. the number of bytes of padding that were
used. The "zfec" command-line tool compresses these pieces of data and prepends
them to the beginning of each share, so each sharefile produced by the "zfec"
command-line tool is between one and four bytes larger than the share data it
contains.
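The metadata arithmetic above can be sketched in a few lines. This is
illustrative only: the function name `share_metadata` is hypothetical, and the
sketch computes the values zfec needs rather than reproducing zfec's actual
compressed header format.

```python
import math

def share_metadata(k, m, data_len):
    # Illustrative sketch, not zfec's real header format: compute the
    # per-share metadata needed for later reconstruction.
    block_size = math.ceil(data_len / k)   # size of every (primary and secondary) block
    padding = block_size * k - data_len    # zero bytes padding out the last primary block
    return {"k": k, "m": m, "padding": padding, "block_size": block_size}

meta = share_metadata(3, 8, 1000)
# 1000 bytes with k=3: block_size = 334, padding = 334*3 - 1000 = 2
```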
The decoding step requires as input k of the blocks which were produced by the
encoding step. The decoding step produces as output the data that was earlier
input to the encoding step.
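To make the "any k of m blocks suffice" property concrete, here is a toy
single-parity code for k=2, m=3, in which the one secondary block is the XOR of
the two primary blocks. This is only an illustration of the encode/decode
contract; zfec's actual algorithm uses Reed-Solomon-style arithmetic, not plain
XOR, and all function names here are hypothetical.

```python
# Toy erasure code with k=2, m=3: blocknums 0 and 1 are primary blocks,
# blocknum 2 is a parity (secondary) block. Any 2 of the 3 blocks
# suffice to recover the original data.

def xor(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def toy_encode(p0, p1):
    return [p0, p1, xor(p0, p1)]

def toy_decode(blocks):
    # blocks: dict mapping blocknum -> data, with exactly k=2 entries
    if 0 in blocks and 1 in blocks:
        return blocks[0], blocks[1]
    if 0 in blocks:                                   # primary block 1 lost
        return blocks[0], xor(blocks[0], blocks[2])
    return xor(blocks[1], blocks[2]), blocks[1]       # primary block 0 lost

shares = toy_encode(b"abcd", b"wxyz")
recovered = toy_decode({1: shares[1], 2: shares[2]})  # lose primary block 0
```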
The bin/ directory contains two Unix-style, command-line tools, "zfec" and
"zunfec". Execute "zfec --help" or "zunfec --help" for usage instructions.
Note: a Unix-style tool like "zfec" does only one thing -- in this case erasure
coding -- and leaves other tasks to other tools. Other Unix-style tools that go
well with zfec include "GNU tar" or "7z" a.k.a. "p7zip" for archiving multiple
files and directories into one file, "7z" or "rzip" for compression, and "GNU
Privacy Guard" for encryption or "sha256sum" for integrity. It is important to
do things in order: first archive, then compress, then either encrypt or
sha256sum, then erasure code. Note that if GNU Privacy Guard is used for
privacy, then it will also ensure integrity, so the use of sha256sum is
unnecessary in that case. Note that if 7z is used for archiving then it also
does compression, so you don't need a separate compressor in that case.
On my Athlon 64 2.4 GHz workstation (running Linux), the "zfec" command-line
tool encoded a 160 MB file with m=100, k=94 (about 6% redundancy) in 3.9
seconds, where the "par2" tool encoded the file with about 6% redundancy in 27
seconds. zfec encoded the same file with m=12, k=6 (100% redundancy) in 4.1
seconds, where par2 encoded it with about 100% redundancy in 7 minutes and 56
seconds.
The underlying C library in benchmark mode encoded from a file at about 4.9
million bytes per second and decoded at about 5.8 million bytes per second.

On Peter's fancy Intel Mac laptop (2.16 GHz Core Duo), it encoded from a file at
about 6.2 million bytes per second.

On my even fancier Intel Mac laptop (2.33 GHz Core Duo), it encoded from a file
at about 6.8 million bytes per second.

On my old PowerPC G4 867 MHz Mac laptop, it encoded from a file at about 1.3
million bytes per second.
Here is a paper analyzing the performance of various erasure codes and their
implementations, including zfec:

http://www.usenix.org/events/fast09/tech/full_papers/plank/plank.pdf
Zfec shows good performance on different machines and with different values of
k and m. It also has a nice small memory footprint.
Each block is associated with a "blocknum". The blocknum of each primary block
is its index (starting from zero), so the 0'th block is the first primary block,
which is the first few bytes of the file, the 1'st block is the next primary
block, which is the next few bytes of the file, and so on. The last primary
block has blocknum k-1. The blocknum of each secondary block is an arbitrary
integer between k and 255 inclusive. (When using the Python API, if you don't
specify which secondary blocks you want when invoking encode(), then it will by
default provide the blocks with ids from k to m-1 inclusive.)
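The blocknum layout described above can be stated in two lines of Python. This
is just an illustration of the numbering scheme, not zfec code:

```python
k, m = 3, 8

# Primary blocks carry blocknums 0 .. k-1, in file order.
primary_blocknums = list(range(k))        # [0, 1, 2]

# encode()'s default request is the secondary blocknums k .. m-1.
default_secondary_blocknums = list(range(k, m))
```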
fec_encode() takes as input an array of k pointers, where each pointer points
to a memory buffer containing the input data (i.e., the i'th buffer contains
the i'th primary block). There is also a second parameter which is an array of
the blocknums of the secondary blocks which are to be produced. (Each element
in that array is required to be the blocknum of a secondary block, i.e. it is
required to be >= k and < m.)
The output from fec_encode() is the requested set of secondary blocks, which
are written into output buffers provided by the caller.
Note that this fec_encode() is a "low-level" API in that it requires the input
data to be provided in a set of memory buffers of exactly the right sizes. If
you are starting instead with a single buffer containing all of the data then
please see easyfec.py's "class Encoder" as an example of how to split a single
large buffer into the appropriate set of input buffers for fec_encode(). If
you are starting with a file on disk, then please see filefec.py's
encode_file_stringy_easyfec() for an example of how to read the data from a
file and pass it to "class Encoder". The Python interface provides these
higher-level operations, as does the Haskell interface. If you implement
functions to do these higher-level tasks in languages other than Python or
Haskell, then please send a patch to zfec-dev@allmydata.org so that your API
can be included in future releases of zfec.
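The splitting step that "class Encoder" performs can be sketched as follows.
The helper name `split_into_primary_blocks` is hypothetical; this mirrors the
idea (pad, then cut into k equal-size buffers), not easyfec's actual code:

```python
import math

def split_into_primary_blocks(data, k):
    # Hypothetical sketch of what a higher-level encoder does before
    # handing buffers to the low-level API: zero-pad the data so its
    # length is a multiple of k, then cut it into k equal-size blocks.
    block_size = math.ceil(len(data) / k)
    padded = data + b"\x00" * (block_size * k - len(data))
    return [padded[i * block_size:(i + 1) * block_size] for i in range(k)]

blocks = split_into_primary_blocks(b"hello world", 4)
# 11 bytes with k=4: block_size = 3, one zero byte of padding on the last block
```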
fec_decode() takes as input an array of k pointers, where each pointer points
to a buffer containing a block. There is also a separate input parameter which
is an array of blocknums, indicating the blocknum of each of the blocks which
is being passed in.
The output from fec_decode() is the set of primary blocks which were missing
from the input and had to be reconstructed. These reconstructed blocks are
written into output buffers provided by the caller.
encode() and decode() take as input a sequence of k buffers, where a "sequence"
is any object that implements the Python sequence protocol (such as a list or
tuple) and a "buffer" is any object that implements the Python buffer protocol
(such as a string or array). The contents that are required to be present in
these buffers are the same as for the C API.
encode() also takes a list of desired blocknums. Unlike the C API, the Python
API accepts blocknums of primary blocks as well as secondary blocks in its list
of desired blocknums. encode() returns a list of buffer objects which contain
the blocks requested. For each requested block which is a primary block, the
resulting list contains a reference to the appropriate primary block from the
input list. For each requested block which is a secondary block, the list
contains a newly created string object containing that secondary block.
decode() also takes a list of integers indicating the blocknums of the blocks
being passed in. decode() returns a list of buffer objects which contain all
of the primary blocks of the original data (in order). For each primary block
which was present in the input list, the result list simply contains a
reference to the object that was passed in the input list. For each primary
block which was not present in the input, the result list contains a newly
created string object containing that primary block.
Beware of a "gotcha" that can result from the combination of mutable data and
the fact that the Python API returns references to its inputs.
Returning references to its inputs is efficient since it avoids making an
unnecessary copy of the data, but if the object which was passed as input is
mutable and if that object is mutated after the call to zfec returns, then the
result from zfec -- which is just a reference to that same object -- will also
be mutated. This subtlety is the price you pay for avoiding data copying. If
you don't want to have to worry about this then you can simply use immutable
objects (e.g. Python strings) to hold the data that you pass to zfec.
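The gotcha can be demonstrated in plain Python, without zfec installed, using a
stand-in function that (like zfec) returns references to its inputs rather than
copies:

```python
def passthrough(blocks):
    # Stand-in for an API that returns references to its inputs, not copies.
    return list(blocks)

buf = bytearray(b"primary block 0")   # a mutable buffer
result = passthrough([buf])

buf[0:7] = b"MUTATED"                 # mutate the input *after* the call...

# ...and the previously returned result silently changes too,
# because result[0] is the same object as buf:
assert result[0] is buf
assert bytes(result[0]) == b"MUTATED block 0"

# Passing an immutable bytes object instead makes this impossible.
```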
The Haskell code is fully Haddocked; to generate the documentation, run:

% runhaskell Setup.lhs haddock
The filefec.py module has a utility function for efficiently reading a file and
encoding it piece by piece. This module is used by the "zfec" and "zunfec"
command-line tools from the bin/ directory.
A C compiler is required. To use the Python API or the command-line tools, a
Python interpreter is also required. We have tested it with Python v2.4, v2.5,
v2.6, and v2.7. For the Haskell interface, GHC is also required.
Thanks to the author of the original fec lib, Luigi Rizzo, and the folks that
contributed to it: Phil Karn, Robert Morelos-Zaragoza, Hari Thirumoorthy, and
Dan Rubenstein. Thanks to the Mnet hackers who wrote an earlier Python wrapper,
especially Myers Carpenter and Hauke Johannknecht. Thanks to Brian Warner and
Amber O'Whielacronx for help with the API, documentation, debugging,
compression, and unit tests. Thanks to Adam Langley for improving the C API and
contributing the Haskell API. Thanks to the creators of GCC (starting with
Richard M. Stallman) and Valgrind (starting with Julian Seward) for a pair of
excellent tools. Thanks to my coworkers at Allmydata -- http://allmydata.com --
Fabrice Grinda, Peter Secor, Rob Kinninmont, Brian Warner, Zandr Milewski,
Justin Boreta, and Mark Meras, for sponsoring this work and releasing it under
a Free Software licence. Thanks to Jack Lloyd, Samuel Neves, and David-Sarah
Hopwood.