Connection Management:
*v1: foolscap, no relay, live=connected-to-introducer, broadcast updates,
     fully connected topology
 v2: live != connected-to-introducer, connect on demand

Encoding:
*v1: single-segment, no merkle trees
*v2: multiple-segment (LFE)
*v3: merkle tree to verify each share
*v4: merkle tree to verify each segment
 v5: only retrieve the minimal number of hashes instead of all of them
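A minimal sketch of the per-segment verification idea (v4) and the minimal-hash retrieval idea (v5), assuming SHA-256 and a power-of-two segment count for simplicity; the helper names (build_tree, merkle_path, verify_segment) are illustrative, not the project's actual API:

```python
import hashlib

def h(data: bytes) -> bytes:
    return hashlib.sha256(data).digest()

def build_tree(segments):
    """Return the merkle tree as a list of levels, leaves first, root last."""
    level = [h(s) for s in segments]
    tree = [level]
    while len(level) > 1:
        level = [h(level[i] + level[i + 1]) for i in range(0, len(level), 2)]
        tree.append(level)
    return tree

def merkle_path(tree, index):
    """Sibling hashes needed to verify one leaf: the v5 'minimal' set,
    log2(n) hashes instead of all n leaf hashes."""
    path = []
    for level in tree[:-1]:
        path.append(level[index ^ 1])  # sibling at this level
        index //= 2
    return path

def verify_segment(segment, index, path, root):
    """Recompute the root from one segment plus its sibling path."""
    node = h(segment)
    for sibling in path:
        node = h(node + sibling) if index % 2 == 0 else h(sibling + node)
        index //= 2
    return node == root

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
tree = build_tree(segments)
root = tree[-1][0]
path = merkle_path(tree, 2)
assert verify_segment(b"seg2", 2, path, root)
```

A downloader holding only the root can thus check each segment as it arrives, fetching just the sibling hashes for that segment rather than the whole tree.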

Codecs:
*v1: fake it (replication)
*v2.5: ICodec-based codecs, but still using replication
*v3: C-based Reed-Solomon
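A sketch of the "fake it with replication" step behind an ICodec-style encode/decode interface; the class and method names here are assumptions for illustration, not the project's real ICodec:

```python
class ReplicationCodec:
    """Replication pretending to be an erasure codec: every share is a
    verbatim copy of the segment, and any one share suffices to decode."""

    def __init__(self, needed_shares: int, total_shares: int):
        self.needed_shares = needed_shares  # effectively 1 for replication
        self.total_shares = total_shares

    def encode(self, segment: bytes):
        # Emit (sharenum, data) pairs, one full copy per share.
        return [(shnum, segment) for shnum in range(self.total_shares)]

    def decode(self, shares):
        # Any single surviving share recovers the segment.
        shnum, data = shares[0]
        return data

codec = ReplicationCodec(needed_shares=1, total_shares=3)
shares = codec.encode(b"hello world")
assert codec.decode(shares[1:]) == b"hello world"
```

The point of v2.5 is that a real k-of-N Reed-Solomon codec (v3) can later be dropped in behind the same encode/decode interface without touching the upload or download logic.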

URI scheme:
 v2: derive more information from version and filesize, to remove codec_name,
     codec_params, tail_codec_params, needed_shares, total_shares, segment_size
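One way the v2 derivation could work, sketched under assumed values: a version byte selects a fixed parameter table (the codec, k/N, and maximum segment size shown are invented for illustration), and everything else follows from the filesize, so none of it needs to be serialized into the URI:

```python
VERSION_PARAMS = {
    # hypothetical table: version -> fixed encoding parameters
    1: {"codec_name": "replication", "needed_shares": 1, "total_shares": 3,
        "max_segment_size": 128 * 1024},
    2: {"codec_name": "reed-solomon", "needed_shares": 3, "total_shares": 10,
        "max_segment_size": 128 * 1024},
}

def derive_params(version: int, filesize: int):
    """Reconstruct the full parameter set from just (version, filesize)."""
    p = dict(VERSION_PARAMS[version])
    seg = p["max_segment_size"]
    num_segments = max(1, -(-filesize // seg))  # ceiling division
    p["segment_size"] = min(filesize, seg) if filesize else 0
    p["num_segments"] = num_segments
    # the final segment is usually shorter, hence separate tail params
    p["tail_segment_size"] = filesize - (num_segments - 1) * seg
    return p

p = derive_params(2, 300 * 1024)
assert p["num_segments"] == 3
assert p["tail_segment_size"] == 44 * 1024
```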

Upload Peer Selection:
*v1: permuted peer list, consistent hash
*v2: permute peers by verifierid and arrange around ring, intermixed with
     shareids on the same range; each share goes to the
     next-clockwise-available peer
 v3: reliability/goodness-point counting?
 v4: denver airport (chord)?
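The v2 scheme above can be sketched as follows, assuming SHA-256 as the permuting hash and a simple per-peer capacity; the helper names and the `b"share-%d"` labeling are illustrative, not the project's actual implementation:

```python
import bisect
import hashlib

def ring_position(verifierid: bytes, item: bytes) -> int:
    """Place an item on a 2^64 ring, permuted by the file's verifierid."""
    digest = hashlib.sha256(verifierid + item).digest()
    return int.from_bytes(digest[:8], "big")

def assign_shares(verifierid, peerids, num_shares, capacity=1):
    """Each share lands on the next-clockwise peer with room left."""
    peers = sorted(peerids, key=lambda p: ring_position(verifierid, p))
    positions = [ring_position(verifierid, p) for p in peers]
    load = {p: 0 for p in peers}
    assignment = {}
    for shnum in range(num_shares):
        pos = ring_position(verifierid, b"share-%d" % shnum)
        # first peer clockwise of this share's position...
        start = bisect.bisect_left(positions, pos) % len(peers)
        # ...skipping peers that are already "full"
        for offset in range(len(peers)):
            peer = peers[(start + offset) % len(peers)]
            if load[peer] < capacity:
                load[peer] += 1
                assignment[shnum] = peer
                break
    return assignment
```

Because the permutation depends on the verifierid, every file spreads its shares across a different ordering of peers, yet any node can recompute the same ordering later without coordination.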

Download Peer Selection:
 v2: permute peers and shareids as in upload, ask the next-clockwise peers
     first (the "A" list); if necessary, ask the ones after them, etc.

Filetable Maintenance:
*v1: vdrive-based tree of MutableDirectoryNodes, persisted to vdrive's disk
 v2: move tree to client side, serialize to a file, upload,
     vdrive.set_filetable_uri (still no accounts, just one global tree)
 v3: break world up into accounts, separate mutable spaces. Maybe

Checker/Repairer:
 v2: centralized checker, repair agent
 v3: nodes also check their own files

Storage:
*v1: no deletion, one directory per verifierid, no owners of shares,
     leases never expire
*v2: multiple shares per verifierid [zooko]
 v4: leases expire, delete expired data on demand, multiple owners per share

UI:
*v1: readonly webish (nevow, URLs are filepaths)
*v2: read/write webish, mkdir, del (files)
 v2.5: del (directories)

Operations/Deployment/Doc/Free Software/Community:
 - move this file into the wiki?

When nodes are unable to reach storage servers, they should make a note of it
and eventually inform the verifier/checker. The verifier/checker then puts the
server under observation, or otherwise looks for differences between its
self-reported availability and the experiences of other nodes.
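One simple way to flag the discrepancy described above, with assumed data shapes (a self-reported availability fraction, plus booleans from other nodes' contact attempts); the function name and 0.2 threshold are invented for illustration:

```python
def under_observation(self_reported: float, contact_results, threshold=0.2):
    """Return True when a server claims much better availability than
    other nodes actually experienced when contacting it.

    contact_results: list of booleans, one per contact attempt by peers."""
    if not contact_results:
        return False  # no evidence yet, give the server the benefit of doubt
    observed = sum(contact_results) / len(contact_results)
    return (self_reported - observed) > threshold

# claims 99% up, but peers reached it only 1 time in 4: flag it
assert under_observation(0.99, [True, False, False, False])
# claims 95% and peers always reached it: fine
assert not under_observation(0.95, [True, True, True, True])
```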

Store the filetable URI on the first 10 peers that appear after your own
nodeid. Each entry has a sequence number, and maybe a timestamp; on recovery,
find the newest.
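The storage and recovery rules above can be sketched as follows, assuming nodeids sort on a ring and each stored entry is a (sequence number, timestamp, uri) tuple; both helper names are illustrative:

```python
import bisect

def select_peers(sorted_peerids, my_nodeid, count=10):
    """First `count` peers that appear clockwise after our own nodeid,
    wrapping around the end of the sorted list."""
    start = bisect.bisect_right(sorted_peerids, my_nodeid)
    return [sorted_peerids[(start + i) % len(sorted_peerids)]
            for i in range(min(count, len(sorted_peerids)))]

def newest_entry(entries):
    """entries: list of (seqnum, timestamp, uri). The sequence number is
    authoritative; the timestamp only breaks ties between equal seqnums."""
    return max(entries, key=lambda e: (e[0], e[1]))[2]

entries = [(4, 1000, "uri-old"), (5, 900, "uri-new"), (5, 950, "uri-newest")]
assert newest_entry(entries) == "uri-newest"
```

Keying recovery on a sequence number rather than a timestamp alone means a peer with a skewed clock cannot resurrect a stale filetable.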

multiple categories of leases:
 1: committed leases -- we will not delete these in any case, but will
    instead tell an uploader that we are full
  1b: in-progress leases (partially filled, not closed, pb connection is
      currently open)
 2: uncommitted leases -- we will delete these in order to make room for new
    leases
  2a: interrupted leases (partially filled, not closed, pb connection is
      currently not open, but they might come back)
  2b: expired leases
 (I'm not sure about the precedence of these last two. Probably deleting
  expired leases instead of deleting interrupted leases would be okay.)
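The precedence above can be sketched as a victim-selection rule, assuming a lease record with a committed flag, a connection-open flag, and an expired flag (the `Lease` shape and function name are invented for illustration), and taking the tentative ordering from the note: expired leases go before interrupted ones, and committed or in-progress leases are never offered for deletion:

```python
from dataclasses import dataclass

@dataclass
class Lease:
    leaseid: str
    committed: bool        # category 1: fully uploaded and closed
    connection_open: bool  # 1b when not committed but uploader is connected
    expired: bool

def deletion_candidates(leases):
    """Uncommitted leases only, cheapest-to-delete first:
    expired (2b) before interrupted (2a); never 1 or 1b."""
    uncommitted = [l for l in leases
                   if not l.committed and not l.connection_open]
    expired = [l for l in uncommitted if l.expired]
    interrupted = [l for l in uncommitted if not l.expired]
    return expired + interrupted

leases = [
    Lease("a", committed=True, connection_open=False, expired=False),   # 1
    Lease("b", committed=False, connection_open=True, expired=False),   # 1b
    Lease("c", committed=False, connection_open=False, expired=False),  # 2a
    Lease("d", committed=False, connection_open=False, expired=True),   # 2b
]
assert [l.leaseid for l in deletion_candidates(leases)] == ["d", "c"]
```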

peer list maintenance: lots of entries