From: Brian Warner
Date: Sun, 10 Jun 2007 03:31:48 +0000 (-0700)
Subject: update thingA/uri-extension docs
X-Git-Tag: allmydata-tahoe-0.3.0~5
X-Git-Url: https://git.rkrishnan.org/%5B/%5D%20/uri//%22%22?a=commitdiff_plain;h=5abc03437834cae66ddc7d68ce50b1107e25f115;p=tahoe-lafs%2Ftahoe-lafs.git

update thingA/uri-extension docs
---

diff --git a/docs/thingA.txt b/docs/thingA.txt
index 895de8a2..0b434809 100644
--- a/docs/thingA.txt
+++ b/docs/thingA.txt
@@ -1,5 +1,5 @@
-We need a new name for this intentionally-vague block of data.
+"URI Extension Block"
 
 This block is a bencoded dictionary. All buckets hold an identical copy. The
 hash of the serialized data is kept in the URI.
 
@@ -10,25 +10,25 @@
 before incremental validation can be performed. Full-file validation (for
 clients who do not wish to do incremental validation) can be performed
 solely with the data from this block.
 
-At the moment, this data block contains the following keys:
+At the moment, this data block contains the following keys (and an estimate
+on their sizes):
 
- size
- segment_size
- num_segments
- needed_shares
- total_shares
+ size 5
+ segment_size 7
+ num_segments 2
+ needed_shares 2
+ total_shares 3
 
- codec_name
- codec_params
- tail_codec_params
+ codec_name 3
+ codec_params 5+1+2+1+3=12
+ tail_codec_params 12
 
- share_root_hash
+ share_root_hash 32 (binary) or 52 (base32-encoded) each
 fileid
 plaintext_root_hash
 verifierid
 crypttext_root_hash
-
 Some pieces are needed elsewhere (size should be visible without pulling the
 block, the Tahoe3 algorithm needs total_shares to find the right peers, all
 peer selection algorithms need needed_shares to ask a minimal set of peers).
@@ -43,3 +43,20 @@
 files, regardless of file size. Therefore hash trees (which have a size that
 depends linearly upon the number of segments) are stored elsewhere in the
 bucket, with only the hash tree root stored in this data block.
+This block will be serialized as follows:
+
+ assert that all keys match ^[a-zA-Z_\-]+$
+ sort all the keys lexicographically
+ for k in keys:
+  write("%s:" % k)
+  write(netstring(data[k]))
+
+
+Serialized size:
+
+ dense binary (but decimal) packing: 160+46=206
+ including 'key:' (185) and netstring (6*3+7*4=46) on values: 231
+ including 'key:%d\n' (185+13=198) and printable values (46+5*52=306)=504
+
+We'll go with the 231-sized block, and provide a tool to dump it as text if
+we really want one.
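
For readers skimming the patch, here is a minimal sketch (in Python; illustrative,
not the project's actual code) of the packing rule the new text describes: keys are
sorted, and each field is written as "key:" followed by the value wrapped in a
netstring ("<length>:<bytes>,"). The function names and the sample field values
below are placeholders, chosen only to roughly match the size estimates in the
table.

import re

KEY_RE = re.compile(br"^[a-zA-Z_\-]+$")

def netstring(value: bytes) -> bytes:
    # Standard netstring framing: decimal length, ':', payload, ','.
    return b"%d:%s," % (len(value), value)

def pack_extension(data: dict) -> bytes:
    # 'data' maps ASCII key names to byte-string values (numbers are assumed
    # to have been rendered as decimal strings already).
    pieces = []
    for key in sorted(data):                   # sort keys lexicographically
        k = key.encode("ascii")
        assert KEY_RE.match(k), "illegal key name: %r" % key
        pieces.append(k + b":")                # write("%s:" % k)
        pieces.append(netstring(data[key]))    # write(netstring(data[k]))
    return b"".join(pieces)

if __name__ == "__main__":
    # Placeholder values sized roughly like the estimates above; the 32-byte
    # hash is a dummy, not a real share_root_hash.
    sample = {
        "size": b"12345",
        "segment_size": b"2097152",
        "num_segments": b"17",
        "needed_shares": b"25",
        "total_shares": b"100",
        "codec_name": b"crs",
        "codec_params": b"2097152-25-100",
        "tail_codec_params": b"1048576-25-100",
        "share_root_hash": b"\x00" * 32,
    }
    block = pack_extension(sample)
    print(len(block), repr(block[:40]))

As the patch itself concludes, this middle-sized variant keeps the block compact
while remaining simple to parse, and a separate tool can dump it as text when a
human needs to read it.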
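
Going the other direction, the sketch below shows how a downloader could split
such a block back into its fields and refuse it unless it matches the hash
recorded in the URI. Plain SHA-256 is an assumption made purely for illustration;
it stands in for whatever hash the URI format actually carries.

import hashlib
import re

# Each field looks like "<key>:<length>:<value bytes>,".
FIELD_RE = re.compile(br"([a-zA-Z_\-]+):(\d+):")

def unpack_extension(block: bytes) -> dict:
    data = {}
    pos = 0
    while pos < len(block):
        m = FIELD_RE.match(block, pos)
        if m is None:
            raise ValueError("malformed field at offset %d" % pos)
        key = m.group(1).decode("ascii")
        length = int(m.group(2))
        start = m.end()
        value = block[start:start + length]
        if block[start + length:start + length + 1] != b",":
            raise ValueError("missing netstring terminator for %r" % key)
        data[key] = value
        pos = start + length + 1
    return data

def check_block(block: bytes, expected_hash: bytes) -> dict:
    # A client would reject the block unless its hash matches the value
    # carried in the URI (hash function assumed here, see note above).
    if hashlib.sha256(block).digest() != expected_hash:
        raise ValueError("URI extension block fails hash check")
    return unpack_extension(block)

if __name__ == "__main__":
    # A hand-built two-field block: size=12345, total_shares=100.
    block = b"size:5:12345,total_shares:3:100,"
    print(unpack_extension(block))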