# -*- test-case-name: allmydata.test.test_hashtree -*-

"""
Read and write chunks from files.

A file is divided into blocks, each of which has size L{BLOCK_SIZE}
(except for the last block, which may be smaller).  Blocks are encoded
into chunks.  One publishes the hash of the entire file.  Clients
who want to download the file first obtain the hash, then the clients
can receive chunks in any order.  Cryptographic hashing is used to
verify each received chunk before writing to disk.  Thus it is
impossible to download corrupt data if one has the correct file hash.

One obtains the hash of a complete file via
L{CompleteChunkFile.file_hash}.  One can read chunks from a complete
file by the sequence operations of C{len()} and subscripting on a
L{CompleteChunkFile} object.  One can open an empty or partially
downloaded file with L{PartialChunkFile}, and read and write chunks
to this file.  A chunk will fail to write if its contents and index
are not consistent with the overall file hash passed to
L{PartialChunkFile} when the partial chunk file was first created.

The chunks have an overhead of less than 4% for files of size
less than C{10**20} bytes.

Benchmarks:

 - On a 3 GHz Pentium 3, it took 3.4 minutes to first make a
   L{CompleteChunkFile} object for a 4 GB file.  Up to 10 MB of
   memory was used as the constructor ran.  A metafile filename
   was passed to the constructor, and so the hash information was
   written to the metafile.  The object used a negligible amount
   of memory after the constructor was finished.
 - Creation of L{CompleteChunkFile} objects in future runs of the
   program took negligible time, since the hash information was
   already stored in the metafile.

@var BLOCK_SIZE:     Size of a block.  See L{BlockFile}.
@var MAX_CHUNK_SIZE: Upper bound on the size of a chunk.
                     See L{CompleteChunkFile}.

free (adj.): unencumbered; not under the control of others
Written by Connelly Barnes in 2005 and released into the
public domain with no warranty of any kind, either expressed
or implied.  It probably won't make your computer catch on fire,
or eat your children, but it might.  Use at your own risk.
"""
from allmydata.util import base32
from allmydata.util.hashutil import tagged_hash, tagged_pair_hash

__version__ = '1.0.0-allmydata'

BLOCK_SIZE     = 65536
MAX_CHUNK_SIZE = BLOCK_SIZE + 4096

def roundup_pow2(x):
    """
    Round integer C{x} up to the nearest power of 2.
    """
    ans = 1
    while ans < x:
        ans *= 2
    return ans
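A quick illustration of the rounding used to pad the leaf row to a power-of-2 width (a standalone sketch that restates the helper so it can run on its own):

```python
def roundup_pow2(x):
    # Round integer x up to the nearest power of 2; returns 1 for x <= 1.
    ans = 1
    while ans < x:
        ans *= 2
    return ans

# A 5-leaf tree gets padded to 8 leaves; exact powers of 2 are unchanged.
print(roundup_pow2(5), roundup_pow2(8), roundup_pow2(1))  # 8 8 1
```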
class CompleteBinaryTreeMixin:
    """
    Adds convenience methods to a complete binary tree.

    Assumes the total number of elements in the binary tree may be
    accessed via C{__len__}, and that each element can be retrieved
    using list subscripting.

    Tree is indexed like so::


                        0
                   /        \
                1               2
             /    \          /    \
           3       4       5       6
          / \     / \     / \     / \
         7   8   9  10  11  12  13  14

    """

    def parent(self, i):
        """
        Index of the parent of C{i}.
        """
        if i < 1 or (hasattr(self, '__len__') and i >= len(self)):
            raise IndexError('index out of range: ' + repr(i))
        return (i - 1) // 2

    def lchild(self, i):
        """
        Index of the left child of C{i}.
        """
        ans = 2 * i + 1
        if i < 0 or (hasattr(self, '__len__') and ans >= len(self)):
            raise IndexError('index out of range: ' + repr(i))
        return ans

    def rchild(self, i):
        """
        Index of the right child of C{i}.
        """
        ans = 2 * i + 2
        if i < 0 or (hasattr(self, '__len__') and ans >= len(self)):
            raise IndexError('index out of range: ' + repr(i))
        return ans

    def sibling(self, i):
        """
        Index of the sibling of C{i}.
        """
        parent = self.parent(i)
        if self.lchild(parent) == i:
            return self.rchild(parent)
        else:
            return self.lchild(parent)

    def needed_for(self, i):
        """
        Return a list of node indices that are necessary for the hash chain.
        """
        if i < 0 or i >= len(self):
            raise IndexError('index out of range: ' + repr(i))
        needed = []
        here = i
        while here != 0:
            needed.append(self.sibling(here))
            here = self.parent(here)
        return needed
    def depth_first(self, i=0):
        yield i, 0
        try:
            for child,childdepth in self.depth_first(self.lchild(i)):
                yield child, childdepth+1
        except IndexError:
            pass
        try:
            for child,childdepth in self.depth_first(self.rchild(i)):
                yield child, childdepth+1
        except IndexError:
            pass

    def dump(self):
        lines = []
        for i,depth in self.depth_first():
            lines.append("%s%3d: %s" % ("  "*depth, i,
                                        base32.b2a_or_none(self[i])))
        return "\n".join(lines) + "\n"

    def get_leaf_index(self, leafnum):
        return self.first_leaf_num + leafnum

    def get_leaf(self, leafnum):
        return self[self.first_leaf_num + leafnum]
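The mixin's navigation methods are pure index arithmetic over the array layout shown in the class docstring. A minimal standalone sketch of the same arithmetic (module-level functions instead of methods, with bounds checking omitted):

```python
def parent(i):
    return (i - 1) // 2

def lchild(i):
    return 2 * i + 1

def rchild(i):
    return 2 * i + 2

def sibling(i):
    p = parent(i)
    return rchild(p) if lchild(p) == i else lchild(p)

def needed_for(i):
    # Siblings along the path from node i up to (but not including) the root.
    needed = []
    while i != 0:
        needed.append(sibling(i))
        i = parent(i)
    return needed

# For leaf node 7 in a 15-node tree, the hash chain is [8, 4, 2].
print(needed_for(7))  # [8, 4, 2]
```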
def empty_leaf_hash(i):
    return tagged_hash('Merkle tree empty leaf', "%d" % i)
def pair_hash(a, b):
    return tagged_pair_hash('Merkle tree internal node', a, b)
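C{tagged_hash} and C{tagged_pair_hash} come from allmydata.util.hashutil. A rough standalone approximation of the same idea (domain-separated SHA-256; the exact allmydata tag encoding differs, this is only a sketch for illustration):

```python
import hashlib

def tagged_hash(tag, val):
    # Domain-separate by feeding the tag before the value. The real
    # allmydata.util.hashutil encoding differs; this is only a sketch.
    s = hashlib.sha256()
    s.update(tag)
    s.update(val)
    return s.digest()

def empty_leaf_hash(i):
    return tagged_hash(b'Merkle tree empty leaf', b"%d" % i)

def pair_hash(a, b):
    return tagged_hash(b'Merkle tree internal node', a + b)

# Combining two (distinct) empty-leaf hashes yields a 32-byte parent hash.
root = pair_hash(empty_leaf_hash(0), empty_leaf_hash(1))
print(len(root))  # 32
```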
class HashTree(CompleteBinaryTreeMixin, list):
    """
    Compute Merkle hashes at any node in a complete binary tree.

    Tree is indexed like so::


                        0
                   /        \
                1               2
             /    \          /    \
           3       4       5       6
          / \     / \     / \     / \
         7   8   9  10  11  12  13  14  <- List passed to constructor.

    """

    def __init__(self, L):
        """
        Create complete binary tree from list of hash strings.

        The list is augmented by hashes so its length is a power of 2, and
        then this is used as the bottom row of the hash tree.

        The augmenting is done so that if the augmented element is at index
        C{i}, then its value is C{hash(tagged_hash('Merkle tree empty leaf',
        '%d'%i))}.
        """
        # Augment the list.
        start = len(L)
        end   = roundup_pow2(len(L))
        self.first_leaf_num = end - 1
        L     = L + [None] * (end - start)
        for i in range(start, end):
            L[i] = empty_leaf_hash(i)
        # Form each row of the tree.
        rows = [L]
        while len(rows[-1]) != 1:
            last = rows[-1]
            rows += [[pair_hash(last[2*i], last[2*i+1])
                      for i in xrange(len(last)//2)]]
        # Flatten the list of rows into a single list.
        rows.reverse()
        self[:] = sum(rows, [])
    def needed_hashes(self, leafnum, include_leaf=False):
        """Which hashes will someone need to validate a given data block?

        I am used to answer a question: supposing you have the data block
        that is used to form leaf hash N, and you want to validate it,
        which hashes would you need?

        I accept a leaf number and return a set of 'hash index' values,
        which are integers from 0 to len(self). In the 'hash index' number
        space, hash[0] is the root hash, while hash[len(self)-1] is the
        last leaf hash.

        This method can be used to find out which hashes you should request
        from some untrusted source (usually the same source that provides
        the data block), so you can minimize storage or transmission
        overhead. It can also be used to determine which hashes you should
        send to a remote data store so that it will be able to provide
        validatable data in the future.

        I will not include '0' (the root hash) in the result, since the root
        is generally stored somewhere that is more trusted than the source
        of the remaining hashes. I will include the leaf hash itself only if
        you ask me to, by passing include_leaf=True.
        """

        needed = set(self.needed_for(self.first_leaf_num + leafnum))
        if include_leaf:
            needed.add(self.first_leaf_num + leafnum)
        return needed
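For example, in a tree built from 8 leaves (hash indices 7..14), validating leaf 0 (node 7) needs the sibling chain {8, 4, 2}. A standalone sketch of the same set computation (hash-index arithmetic only, assuming the leaf count is already a power of 2):

```python
def needed_hashes(leafnum, num_leaves, include_leaf=False):
    # num_leaves is assumed to already be a power of 2.
    first_leaf_num = num_leaves - 1
    i = first_leaf_num + leafnum
    needed = set()
    if include_leaf:
        needed.add(i)
    # Collect the sibling of every node on the path from the leaf to the
    # root; left children always have odd indices in this layout.
    while i != 0:
        sib = i + 1 if i % 2 == 1 else i - 1
        needed.add(sib)
        i = (i - 1) // 2
    return needed

print(sorted(needed_hashes(0, 8)))                     # [2, 4, 8]
print(sorted(needed_hashes(0, 8, include_leaf=True)))  # [2, 4, 7, 8]
```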
class NotEnoughHashesError(Exception):
    pass

class BadHashError(Exception):
    pass
class IncompleteHashTree(CompleteBinaryTreeMixin, list):
    """I am a hash tree which may or may not be complete. I can be used to
    validate inbound data from some untrustworthy provider who has a subset
    of leaves and a sufficient subset of internal nodes.

    Initially I am completely unpopulated. Over time, I will become filled
    with hashes, just enough to validate particular leaf nodes.

    If you desire to validate leaf number N, first find out which hashes I
    need by calling needed_hashes(N). This will return a list of node
    numbers (which will nominally be the sibling chain between the given
    leaf and the root, but if I already have some of those nodes,
    needed_hashes(N) will only return a subset). Obtain these hashes from
    the data provider, then tell me about them with set_hash(i, HASH). Once
    I have enough hashes, you can tell me the hash of the leaf with
    set_leaf_hash(N, HASH), and I will either return None or raise
    BadHashError.

    The first hash to be set will probably be 0 (the root hash), since this
    is the one that will come from someone more trustworthy than the data
    provider.
    """

    def __init__(self, num_leaves):
        L = [None] * num_leaves
        start = len(L)
        end = roundup_pow2(len(L))
        self.first_leaf_num = end - 1
        L = L + [None] * (end - start)
        rows = [L]
        while len(rows[-1]) != 1:
            last = rows[-1]
            rows += [[None for i in xrange(len(last)//2)]]
        # Flatten the list of rows into a single list.
        rows.reverse()
        self[:] = sum(rows, [])
    def needed_hashes(self, leafnum, include_leaf=False):
        """Which new hashes do I need to validate a given data block?

        I am much like HashTree.needed_hashes(), except that I don't include
        hashes that I already know about. When needed_hashes() is called on
        an empty IncompleteHashTree, it will return the same set as a
        HashTree of the same size. But later, once hashes have been added
        with set_hashes(), I will ask for fewer hashes, since some of the
        necessary ones have already been set.
        """

        maybe_needed = set(self.needed_for(self.first_leaf_num + leafnum))
        if include_leaf:
            maybe_needed.add(self.first_leaf_num + leafnum)
        return set([i for i in maybe_needed if self[i] is None])

    def _name_hash(self, i):
        name = "[%d of %d]" % (i, len(self))
        if i >= self.first_leaf_num:
            leafnum = i - self.first_leaf_num
            numleaves = len(self) - self.first_leaf_num
            name += " (leaf [%d] of %d)" % (leafnum, numleaves)
        return name
    def set_hashes(self, hashes={}, leaves={}):
        """Add a bunch of hashes to the tree.

        I will validate these to the best of my ability. If I already have a
        copy of any of the new hashes, the new values must equal the
        existing ones, or I will raise BadHashError. If adding a hash allows
        me to compute a parent hash, those parent hashes must match or I
        will raise BadHashError. If I raise BadHashError, I will forget
        about all the hashes that you tried to add, leaving my state exactly
        the same as before I was called. If I return successfully, I will
        remember all of the hashes that you gave me.

        I insist upon being able to validate all of the hashes that were
        given to me. If I cannot do this because I'm missing some hashes, I
        will raise NotEnoughHashesError (and forget about all the hashes
        that you tried to add). Note that this means that the root hash must
        either be included in 'hashes', or it must have been provided at
        some point in the past.

        'leaves' is a dictionary that uses 'leaf index' values, which range
        from 0 (the left-most leaf) to num_leaves-1 (the right-most leaf),
        and form the base of the tree. 'hashes' uses 'hash index' values,
        which range from 0 (the root of the tree) to 2*num_leaves-2 (the
        right-most leaf). leaf[i] is the same as hash[num_leaves-1+i].

        The best way to use me is to obtain the root hash from some 'good'
        channel, and use the 'bad' channel to obtain data block 0 and the
        corresponding hash chain (a dict with the same hashes that
        needed_hashes(0) tells you, e.g. {0:h0, 2:h2, 4:h4, 8:h8} when
        len(L)=8). Hash the data block to create leaf0, then feed everything
        into set_hashes() and see if it raises an exception or not::

          iht = IncompleteHashTree(numleaves)
          roothash = trusted_channel.get_roothash()
          otherhashes = untrusted_channel.get_hashes()
          # otherhashes.keys() should == iht.needed_hashes(leaves=[0])
          datablock0 = untrusted_channel.get_data(0)
          leaf0 = HASH(datablock0)
          # HASH() is probably hashutil.tagged_hash(tag, datablock0)
          hashes = otherhashes.copy()
          hashes[0] = roothash # from 'good' channel
          iht.set_hashes(hashes, leaves={0: leaf0})

        If the set_hashes() call doesn't raise an exception, the data block
        was valid. If it raises BadHashError, then either the data block was
        corrupted or one of the received hashes was corrupted.
        """

        assert isinstance(hashes, dict)
        for h in hashes.values():
            assert isinstance(h, str)
        assert isinstance(leaves, dict)
        for h in leaves.values():
            assert isinstance(h, str)
        new_hashes = hashes.copy()
        for leafnum,leafhash in leaves.iteritems():
            hashnum = self.first_leaf_num + leafnum
            if hashnum in new_hashes:
                if new_hashes[hashnum] != leafhash:
                    raise BadHashError("got conflicting hashes in my "
                                       "arguments: leaves[%d] != hashes[%d]"
                                       % (leafnum, hashnum))
            new_hashes[hashnum] = leafhash

        added = set() # we'll remove these if the check fails

        try:
            # first we provisionally add all hashes to the tree, comparing
            # any duplicates
            for i in new_hashes:
                if self[i]:
                    if self[i] != new_hashes[i]:
                        msg = "new hash does not match existing hash at "
                        msg += self._name_hash(i)
                        raise BadHashError(msg)
                else:
                    self[i] = new_hashes[i]
                    added.add(i)

            # then we start from the bottom and compute new parent hashes
            # upwards, comparing any that already exist. When this phase
            # ends, all nodes that have a sibling will also have a parent.

            hashes_to_check = list(new_hashes.keys())
            # leaf-most first means reverse sorted order
            while hashes_to_check:
                hashes_to_check.sort()
                i = hashes_to_check.pop(-1)
                if i == 0:
                    # The root has no sibling. How lonely.
                    continue
                if self[self.sibling(i)] is None:
                    # without a sibling, we can't compute a parent
                    continue
                parentnum = self.parent(i)
                # make sure we know right from left
                leftnum, rightnum = sorted([i, self.sibling(i)])
                new_parent_hash = pair_hash(self[leftnum], self[rightnum])
                if self[parentnum]:
                    if self[parentnum] != new_parent_hash:
                        raise BadHashError("h([%d]+[%d]) != h[%d]" %
                                           (leftnum, rightnum, parentnum))
                else:
                    self[parentnum] = new_parent_hash
                    added.add(parentnum)
                    hashes_to_check.insert(0, parentnum)

            # then we walk downwards from the top (root), and anything that
            # is reachable is validated. If any of the hashes that we've
            # added are unreachable, then they are unvalidated.

            reachable = set()
            if self[0]:
                reachable.add(0)
            # TODO: this could be done more efficiently, by starting from
            # each element of new_hashes and walking upwards instead,
            # remembering a set of validated nodes so that the searches for
            # later new_hashes goes faster. This approach is O(n), whereas
            # O(ln(n)) should be feasible.
            for i in range(1, len(self)):
                if self[i] and self.parent(i) in reachable:
                    reachable.add(i)

            # were we unable to validate any of the new hashes?
            unvalidated = set(new_hashes.keys()) - reachable
            if unvalidated:
                those = ",".join([str(i) for i in sorted(unvalidated)])
                raise NotEnoughHashesError("unable to validate hashes %s"
                                           % those)

        except (BadHashError, NotEnoughHashesError):
            for i in added:
                self[i] = None
            raise
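The usage pattern described in the set_hashes() docstring can be exercised end-to-end with a small self-contained model. This sketch uses plain SHA-256 stand-ins for the tagged hashes and hypothetical helper names (h_leaf, h_pair, validate_leaf); it is not the allmydata API, just the same sibling-chain verification idea:

```python
import hashlib

def h_leaf(data):
    # Stand-in for a tagged leaf hash (assumption, not the allmydata tags).
    return hashlib.sha256(b'leaf:' + data).digest()

def h_pair(a, b):
    # Stand-in for a tagged internal-node hash.
    return hashlib.sha256(b'node:' + a + b).digest()

# Build a complete 8-leaf tree the way HashTree does: leaves sit at
# nodes 7..14 of a 15-element array, internal nodes are pair hashes.
leaves = [h_leaf(b"block %d" % i) for i in range(8)]
tree = [None] * 15
tree[7:15] = leaves
for i in range(6, -1, -1):
    tree[i] = h_pair(tree[2 * i + 1], tree[2 * i + 2])

def validate_leaf(leaf_hash, chain, roothash):
    # chain maps hash-index -> hash for the leaf's sibling chain
    # (e.g. nodes {8, 4, 2} for leaf node 7). Recompute upwards and
    # compare the result against the trusted root hash.
    i, cur = 7, leaf_hash
    while i != 0:
        sib = i + 1 if i % 2 == 1 else i - 1
        left, right = (cur, chain[sib]) if i < sib else (chain[sib], cur)
        cur = h_pair(left, right)
        i = (i - 1) // 2
    return cur == roothash

chain = {8: tree[8], 4: tree[4], 2: tree[2]}
print(validate_leaf(h_leaf(b"block 0"), chain, tree[0]))  # True
print(validate_leaf(h_leaf(b"corrupt"), chain, tree[0]))  # False
```

The False case corresponds to the BadHashError path above: a corrupted data block (or a corrupted chain hash) produces a recomputed root that no longer matches the trusted one.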