Connection Management:
*v1: foolscap, no relay, live=connected-to-introducer, broadcast updates,
     fully connected topology
 v2: configurable IP address -- http://allmydata.org/trac/tahoe/ticket/22
 v3: live != connected-to-introducer, connect on demand
Encoding:
*v1: single-segment, no merkle trees
*v2: multiple-segment (LFE)
*v3: merkle tree to verify each share
*v4: merkle tree to verify each segment
 v5: only retrieve the minimal number of hashes instead of all of them
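The v4/v5 combination can be illustrated in miniature: a merkle tree over
segment hashes lets a downloader verify one segment with only the log2(N)
sibling hashes along its path, rather than fetching the whole tree. A rough
self-contained sketch (not Tahoe's actual hashtree code; assumes a
power-of-two segment count):

  from hashlib import sha256

  def _h(data):
      return sha256(data).digest()

  def merkle_root(segments):
      # bottom layer: one hash per segment
      layer = [_h(s) for s in segments]
      while len(layer) > 1:
          layer = [_h(layer[i] + layer[i + 1])
                   for i in range(0, len(layer), 2)]
      return layer[0]

  def merkle_proof(segments, index):
      # collect the sibling hash at each level: log2(N) hashes total
      layer = [_h(s) for s in segments]
      proof = []
      while len(layer) > 1:
          proof.append(layer[index ^ 1])
          layer = [_h(layer[i] + layer[i + 1])
                   for i in range(0, len(layer), 2)]
          index //= 2
      return proof

  def verify_segment(segment, index, proof, root):
      # recompute the path from one segment up to the root
      node = _h(segment)
      for sibling in proof:
          node = _h(node + sibling) if index % 2 == 0 else _h(sibling + node)
          index //= 2
      return node == root

  segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
  root = merkle_root(segments)
  assert verify_segment(b"seg2", 2, merkle_proof(segments, 2), root)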
Codec:
*v1: fake it (replication)
*v2.5: ICodec-based codecs, but still using replication (sketched below)
*v3: C-based Reed-Solomon
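For context on v1/v2.5: replication can sit behind the same encode/decode
interface that a real codec uses. A minimal sketch (class and method names
are illustrative, not the actual ICodec signatures):

  class ReplicatingCodec:
      """Replication dressed up as an erasure codec: any 1 of m shares
      recovers the data. v3's Reed-Solomon keeps the same interface but
      makes shares 1/k the size, with any k of m sufficient."""
      def __init__(self, required_shares, total_shares):
          self.k = required_shares   # effectively 1 for replication
          self.m = total_shares

      def encode(self, data):
          # every share is a complete copy
          return [(shnum, data) for shnum in range(self.m)]

      def decode(self, shares):
          # any single (sharenum, data) pair suffices
          if not shares:
              raise ValueError("need at least one share")
          return shares[0][1]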
URI:
 v2: derive more information from version and filesize, to remove codec_name,
     codec_params, tail_codec_params, needed_shares, total_shares,
     segment_size
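A sketch of the v2 idea: pin the encoding defaults to the format version so
the URI only needs to carry the version and filesize. The specific numbers
below (3-of-10, 128KiB segments), the codec name, and the tuple shapes are
illustrative assumptions, not the real defaults:

  import math

  # hypothetical per-version defaults, for illustration only
  DEFAULTS = {2: dict(needed_shares=3, total_shares=10,
                      max_segment_size=128 * 1024)}

  def derive_params(version, filesize):
      # recover the six fields that a v2 URI would no longer carry
      assert filesize > 0
      d = DEFAULTS[version]
      k, m = d["needed_shares"], d["total_shares"]
      segment_size = min(filesize, d["max_segment_size"])
      num_segments = math.ceil(filesize / segment_size)
      tail_size = filesize - (num_segments - 1) * segment_size
      return dict(codec_name="crs",  # assumed: one codec per version
                  codec_params=(segment_size, k, m),
                  tail_codec_params=(tail_size, k, m),
                  needed_shares=k,
                  total_shares=m,
                  segment_size=segment_size)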
Upload Peer Selection:
*v1: permuted peer list, consistent hash
*v2: permute peers by verifierid and arrange them around a ring, intermixed
     with shareids on the same range; each share goes to the
     next-clockwise-available peer (see the sketch after this list)
 v3: reliability/goodness-point counting?
 v4: denver airport (chord)?
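A rough sketch of the v2 ring (illustrative Python, not the real
peer-selection code; verifierid and peerids are bytes): peers and shares are
both hashed onto the same ring keyed by the verifierid, and each share walks
clockwise to the next peer. The real version also skips peers that are full
("available"), which this sketch omits:

  from hashlib import sha256

  def ring_position(verifierid, ident):
      return int.from_bytes(sha256(verifierid + ident).digest(), "big")

  def place_shares(verifierid, peerids, num_shares):
      # peers sorted by their per-file ring position
      peers = sorted(peerids, key=lambda p: ring_position(verifierid, p))
      positions = [ring_position(verifierid, p) for p in peers]
      assignments = {}
      for shnum in range(num_shares):
          share_pos = ring_position(verifierid, b"share-%d" % shnum)
          # next-clockwise peer, wrapping around the ring
          for peer, pos in zip(peers, positions):
              if pos >= share_pos:
                  assignments[shnum] = peer
                  break
          else:
              assignments[shnum] = peers[0]
      return assignments

Because the permutation depends only on the verifierid and the peerids, a
downloader can recompute the same ordering later without any coordination.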
Download Peer Selection:
 v2: permute peers and shareids as in upload, ask the next-clockwise peers
     first (the "A" list); if necessary, ask the ones after them, etc.
Filetable Maintenance:
*v1: vdrive-based tree of MutableDirectoryNodes, persisted to the vdrive's
     disk
 v2: move the tree to the client side, serialize it to a file, upload it,
     vdrive.set_filetable_uri (still no accounts, just one global tree; see
     the sketch after this list)
 v3: break the world up into accounts, separate mutable spaces. Maybe
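The v2 flow might look roughly like the following; everything except
set_filetable_uri is a placeholder name, and JSON stands in for whatever
serialization actually gets chosen:

  import json

  def save_filetable(client, vdrive, tree):
      serialized = json.dumps(tree).encode("utf-8")  # stand-in serialization
      uri = client.upload(serialized)                # placeholder upload API
      vdrive.set_filetable_uri(uri)                  # the call named above

  def load_filetable(client, vdrive):
      uri = vdrive.get_filetable_uri()               # assumed counterpart
      return json.loads(client.download(uri))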
Checker/Repairer:
 v2: centralized checker, repair agent
 v3: nodes also check their own files
Deletion:
*v1: no deletion, one directory per verifierid, no owners of shares,
*v2: multiple shares per verifierid [zooko]
 v4: leases expire, delete expired data on demand, multiple owners per share
Web UI:
*v1: readonly webish (nevow, URLs are filepaths)
*v2: read/write webish, mkdir, del (files)
 v2.5: del (directories)
Operations/Deployment/Doc/Free Software/Community:
 - move this file into the wiki?
when nodes are unable to reach storage servers, make a note of it, and
eventually inform the verifier/checker. The verifier/checker then puts the
server under observation, or otherwise looks for differences between the
server's self-reported availability and the experiences of others
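As a toy model of that bookkeeping (all names hypothetical): count
reachability attempts per server, and flag a server for observation when the
availability others actually observe falls well below what it reports for
itself:

  from collections import defaultdict

  class AvailabilityWatcher:
      def __init__(self, threshold=0.2):
          self.threshold = threshold
          self.attempts = defaultdict(int)
          self.failures = defaultdict(int)

      def note_unreachable(self, serverid):
          self.attempts[serverid] += 1
          self.failures[serverid] += 1

      def note_reachable(self, serverid):
          self.attempts[serverid] += 1

      def under_observation(self, serverid, self_reported):
          # compare the server's claim against everyone else's experience
          if not self.attempts[serverid]:
              return False
          observed = 1 - self.failures[serverid] / self.attempts[serverid]
          return self_reported - observed > self.threshold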
store the filetable URI in the first 10 peers that appear after your own
nodeid. Each entry has a sequence number, and maybe a timestamp; on
recovery, find the newest
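A sketch of that placement and recovery (names illustrative): pick the 10
peers clockwise from our nodeid on the sorted ring, write a (seqnum, uri)
record to each, and on recovery keep the record with the highest sequence
number:

  def successor_peers(my_nodeid, peerids, count=10):
      ring = sorted(peerids)
      after = [p for p in ring if p > my_nodeid]
      return (after + ring)[:count]   # wrap around the ring if needed

  def recover_newest(records):
      # records: (seqnum, uri) pairs fetched back from the successors
      return max(records, key=lambda rec: rec[0])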
multiple categories of leases:
 1: committed leases -- we will not delete these in any case, but will
    instead tell an uploader that we are full
 1b: in-progress leases (partially filled, not closed, pb connection is
     currently open)
 2: uncommitted leases -- we will delete these in order to make room for
    new uploads
 2a: interrupted leases (partially filled, not closed, pb connection is
     currently not open, but they might come back)
 2b: expired leases

(I'm not sure about the precedence of these last two. Probably deleting
expired leases instead of deleting interrupted leases would be okay.)
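To make that precedence concrete, a sketch (illustrative names; the .state
attribute is an assumption) that deletes expired leases before interrupted
ones and never touches the committed categories:

  from enum import Enum

  class LeaseState(Enum):
      COMMITTED = 1     # 1: never deleted; report "full" instead
      IN_PROGRESS = 2   # 1b: partially filled, connection still open
      INTERRUPTED = 3   # 2a: connection gone, uploader might return
      EXPIRED = 4       # 2b: lease term has lapsed

  # reclaim order when space is needed; committed categories never appear
  RECLAIM_ORDER = [LeaseState.EXPIRED, LeaseState.INTERRUPTED]

  def reclaimable(leases):
      # yield leases in the order we would delete them to free space
      for state in RECLAIM_ORDER:
          for lease in leases:
              if lease.state == state:
                  yield lease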
peer list maintenance: lots of entries