Connection Management:
*v1: foolscap, no relay, live=connected-to-introducer, broadcast updates, fully connected topology
*v2: configurable IP address -- http://allmydata.org/trac/tahoe/ticket/22
 v3: live != connected-to-introducer, connect on demand
 v4: decentralized introduction -- http://allmydata.org/trac/tahoe/ticket/68

File Encoding:
*v1: single-segment, no merkle trees
*v2: multiple-segment (LFE)
*v3: merkle tree to verify each share
*v4: merkle tree to verify each segment
*v5: merkle tree on plaintext and crypttext: incremental validation
 v6: only retrieve the minimal number of hashes instead of all of them
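The v4-v6 progression above hinges on one property of merkle trees: validating a single segment needs only the sibling hashes along its path to the root, not every leaf. A minimal sketch, using plain SHA-256 over a power-of-two leaf count (Tahoe's real trees use tagged hashes and handle padding, so treat this as illustrative):

```python
import hashlib

def h(data):
    # Illustrative hash; Tahoe itself uses tagged SHA-256 variants.
    return hashlib.sha256(data).digest()

def merkle_root(leaves):
    """Fold a list of leaf hashes (length a power of two) up to the root."""
    level = leaves
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
    return level[0]

def sibling_path(leaves, index):
    """Collect the log2(n) sibling hashes needed to check one leaf -- the
    'minimal number of hashes' of v6, instead of fetching every leaf."""
    path, level = [], leaves
    while len(level) > 1:
        path.append(level[index ^ 1])  # sibling at this level
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        index //= 2
    return path

def verify_segment(segment, index, path, root):
    """Recompute the root from one segment plus its sibling path."""
    node = h(segment)
    for sib in path:
        node = h(node + sib) if index % 2 == 0 else h(sib + node)
        index //= 2
    return node == root
```

For an 8-segment file the path is only 3 hashes, and tampering with the segment (or fetching the wrong one) makes the recomputed root mismatch.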

Share Encoding:
*v1: fake it (replication)
*v2.5: ICodec-based codecs, but still using replication
*v3: C-based Reed-Solomon
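The "fake it" stage can be pictured as a codec that satisfies an encode/decode interface trivially: every share is a full copy, so any one share reconstructs the file. This is a hypothetical sketch, not the real ICodec API; class and method names are illustrative:

```python
class ReplicatingCodec:
    """Toy stand-in for an ICodec-shaped codec (names are illustrative,
    not Tahoe's actual interface).  encode() emits total_shares identical
    copies, so any 1 of them -- rather than needed_shares distinct ones --
    suffices to reconstruct.  The v3 Reed-Solomon codec keeps the same
    shape but each share is only about len(data)/needed_shares bytes."""

    def __init__(self, needed_shares, total_shares):
        self.needed_shares = needed_shares
        self.total_shares = total_shares

    def encode(self, data):
        # One (sharenum, share) pair per share; all shares are full copies.
        return [(shnum, data) for shnum in range(self.total_shares)]

    def decode(self, shares):
        # Any single replicated share is enough.
        shnum, data = shares[0]
        return data
```

Keeping the interface stable is what let v2.5 ship with replication underneath while the C-based Reed-Solomon codec was slotted in later.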

URI:
*v2: store URI Extension with shares
*v3: derive storage index from readkey
 v4: perhaps derive more information from version and filesize, to remove
     codec_name, codec_params, tail_codec_params, needed_shares,
     total_shares, segment_size from the URI Extension
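Deriving the storage index from the readkey (v3) can be as simple as a one-way tagged hash, so storage servers see an opaque index but never learn the key. A sketch under assumed parameters -- the tag string and the 128-bit truncation are illustrative, not necessarily Tahoe's exact construction:

```python
import hashlib

def storage_index_from_readkey(readkey):
    """Derive a 16-byte storage index from the AES readkey.  The tag and
    truncation length here are assumptions for illustration; the point is
    that the derivation is deterministic (every client computes the same
    index for the same file) and one-way (servers cannot recover the key)."""
    tag = b"illustrative_storage_index_from_readkey_v1"
    return hashlib.sha256(tag + readkey).digest()[:16]
```

Determinism is what makes the index usable for peer selection, and one-wayness is what keeps it safe to hand to untrusted servers.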

Upload Peer Selection:
*v1: permuted peer list, consistent hash
*v2: permute peers by verifierid and arrange around ring, intermixed with
     shareids on the same range, each share goes to the
     next-clockwise-available peer
 v3: reliability/goodness-point counting?
 v4: denver airport (chord)?
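The v2 scheme above -- a per-file permutation of the peer ring, with each share going to the next available peer clockwise -- can be sketched as follows. The availability check and wrap-around handling are simplified (a real uploader negotiates with servers and assumes at least one peer will accept):

```python
import hashlib

def permute_peers(peerids, verifierid):
    """Per-file permutation: each peer's ring position is a hash of the
    file's verifierid plus the peerid, so different files see different
    orderings but every client computes the same one for a given file."""
    return sorted(peerids,
                  key=lambda p: hashlib.sha256(verifierid + p).digest())

def place_shares(peerids, verifierid, num_shares, is_available):
    """Walk the permuted ring clockwise, giving each share to the next
    peer that accepts it.  is_available stands in for asking the server;
    this sketch assumes at least one peer always accepts."""
    ring = permute_peers(peerids, verifierid)
    placements, i = {}, 0
    for shnum in range(num_shares):
        while not is_available(ring[i % len(ring)]):
            i += 1
        placements[shnum] = ring[i % len(ring)]
        i += 1
    return placements
```

Because downloaders compute the identical permutation, they can ask the first peers on the ring (the "A" list of the download section below) and usually find shares immediately.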

Download Peer Selection:
 v2: permute peers and shareids as in upload, ask next-clockwise peers first
     (the "A" list), if necessary ask the ones after them, etc.

Directory/Filesystem Maintenance:
*v1: vdrive-based tree of MutableDirectoryNodes, persisted to vdrive's disk
*v2: single-host dirnodes, one tree per user, plus one global mutable space
 v3: maintain file manifest, delete on remove
 v3.5: distributed storage for dirnodes
 v4: figure out accounts, users, quotas, snapshots, versioning, etc.

Checker/Repairer:
 v1.5: maintain file manifest
 v2: centralized checker, repair agent
 v3: nodes also check their own files

Storage:
*v1: no deletion, one directory per verifierid, no owners of shares,
     leases never expire
*v2: multiple shares per verifierid [zooko]
*v3: disk space limits on storage servers -- ticket #34
 v5: leases expire, delete expired data on demand, multiple owners per share

UI:
*v1: readonly webish (nevow, URLs are filepaths)
*v2: read/write webish, mkdir, del (files)
*v2.5: del (directories)
 v4: FUSE -- http://allmydata.org/trac/tahoe/ticket/36

Operations/Deployment/Doc/Free Software/Community:
 - move this file into the wiki?

when nodes are unable to reach storage servers, make a note of it and
eventually inform the verifier/checker. the verifier/checker then puts the
server under observation, or otherwise looks for differences between its
self-reported availability and the experiences of others

store filetable URI in the first 10 peers that appear after your own nodeid.
each entry has a sequence number, maybe a timestamp.
on recovery, find the newest.
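The note above amounts to a tiny replicated-register protocol: write the URI (stamped with a sequence number) to the ten ring successors of your own nodeid, and on recovery take the copy with the highest sequence number. A sketch, with an in-memory dict standing in for the storage peers:

```python
def successors(all_peerids, my_nodeid, count=10):
    """The first `count` peers whose ids follow my_nodeid on the ring,
    wrapping around at the top."""
    ring = sorted(all_peerids)
    after = [p for p in ring if p > my_nodeid]
    return (after + ring)[:count]

def store_filetable(stores, all_peerids, my_nodeid, seqnum, uri):
    """Write a (seqnum, uri) entry to each of the ten successors.
    `stores` maps peerid -> list of entries, standing in for the servers."""
    for peer in successors(all_peerids, my_nodeid):
        stores.setdefault(peer, []).append((seqnum, uri))

def recover_filetable(stores, all_peerids, my_nodeid):
    """Gather every surviving copy from the successors and keep the entry
    with the newest sequence number."""
    entries = []
    for peer in successors(all_peerids, my_nodeid):
        entries.extend(stores.get(peer, []))
    return max(entries)[1] if entries else None
```

Spreading the ten copies over ring successors means recovery still works as long as any one of them is reachable and retains its entry.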

multiple categories of leases:
 1: committed leases -- we will not delete these in any case, but will instead
    tell an uploader that we are full
  1a: active leases
  1b: in-progress leases (partially filled, not closed, pb connection is
      currently open)
 2: uncommitted leases -- we will delete these in order to make room for new
    lease requests
  2a: interrupted leases (partially filled, not closed, pb connection is
      currently not open, but they might come back)
  2b: expired leases

 (I'm not sure about the precedence of these last two. Probably deleting
  expired leases instead of deleting interrupted leases would be okay.)
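One way to read the categories above is as an eviction policy: committed leases (closed and current, or still uploading over an open connection) are never deleted, and when space is needed, expired leases go before interrupted ones, per the suggested precedence. A hedged sketch -- the field names are illustrative, not a real server schema:

```python
from dataclasses import dataclass

@dataclass
class Lease:
    closed: bool           # share fully uploaded and closed
    connection_open: bool  # pb connection currently open
    expires: float         # expiry time, seconds since epoch

def category(lease, now):
    """Map a lease to the classes above.  1a/1b are committed and never
    deleted; 2a (interrupted) and 2b (expired) may be reclaimed."""
    if lease.closed and lease.expires > now:
        return "1a: active"
    if not lease.closed and lease.connection_open:
        return "1b: in-progress"
    if lease.expires <= now:
        return "2b: expired"
    return "2a: interrupted"

def eviction_order(leases, now):
    """Uncommitted leases only, with expired ones deleted before
    interrupted ones, per the precedence note above."""
    cats = [(category(l, now), l) for l in leases]
    expired = [l for c, l in cats if c.startswith("2b")]
    interrupted = [l for c, l in cats if c.startswith("2a")]
    return expired + interrupted
```

Deleting expired leases first is the safer choice the note leans toward: an interrupted upload might still resume, while an expired lease has already outlived its promise.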

peer list maintenance: lots of entries