-= The Tahoe CLI commands =
-
-1. Overview
-2. CLI Command Overview
-3. Node Management
-4. Virtual Drive Manipulation
- 4.1. Starting Directories
- 4.1.1. SECURITY NOTE: For users of shared systems
- 4.2. Command Syntax Summary
- 4.3. Command Examples
-5. Virtual Drive Maintenance
-6. Debugging
-
-== Overview ==
-
-Tahoe provides a single executable named "tahoe", which can be used to create
-and manage client/server nodes, manipulate the filesystem, and perform
+======================
+The Tahoe CLI commands
+======================
+
+1. `Overview`_
+2. `CLI Command Overview`_
+3. `Node Management`_
+4. `Filesystem Manipulation`_
+
+ 1. `Starting Directories`_
+ 2. `Command Syntax Summary`_
+ 3. `Command Examples`_
+
+5. `Storage Grid Maintenance`_
+6. `Debugging`_
+
+
+Overview
+========
+
+Tahoe provides a single executable named "``tahoe``", which can be used to
+create and manage client/server nodes, manipulate the filesystem, and perform
several debugging/maintenance tasks.
-This executable lives in the source tree at "bin/tahoe". Once you've done a
-build (by running "make"), bin/tahoe can be run in-place: if it discovers
+This executable lives in the source tree at "``bin/tahoe``". Once you've done a
+build (by running "make"), ``bin/tahoe`` can be run in-place: if it discovers
that it is being run from within a Tahoe source tree, it will modify sys.path
as necessary to use all the source code and dependent libraries contained in
that tree.
-If you've installed Tahoe (using "make install", or by installing a binary
+If you've installed Tahoe (using "``make install``", or by installing a binary
package), then the tahoe executable will be available somewhere else, perhaps
-in /usr/bin/tahoe . In this case, it will use your platform's normal
+in ``/usr/bin/tahoe``. In this case, it will use your platform's normal
PYTHONPATH search paths to find the tahoe code and other libraries.
-== CLI Command Overview ==
+CLI Command Overview
+====================
-The "tahoe" tool provides access to three categories of commands.
+The "``tahoe``" tool provides access to three categories of commands.
- * node management: create a client/server node, start/stop/restart it
- * filesystem manipulation: list files, upload, download, delete, rename
- * debugging: unpack cap-strings, examine share files
+* node management: create a client/server node, start/stop/restart it
+* filesystem manipulation: list files, upload, download, delete, rename
+* debugging: unpack cap-strings, examine share files
-To get a list of all commands, just run "tahoe" with no additional arguments.
-"tahoe --help" might also provide something useful.
+To get a list of all commands, just run "``tahoe``" with no additional
+arguments. "``tahoe --help``" might also provide something useful.
-Running "tahoe --version" will display a list of version strings, starting
+Running "``tahoe --version``" will display a list of version strings, starting
with the "allmydata" module (which contains the majority of the Tahoe
functionality) and including versions for a number of dependent libraries,
like Twisted, Foolscap, pycryptopp, and zfec.
-== Node Management ==
+Node Management
+===============
-"tahoe create-node [NODEDIR]" is the basic make-a-new-node command. It
+"``tahoe create-node [NODEDIR]``" is the basic make-a-new-node command. It
creates a new directory and populates it with files that will allow the
-"tahoe start" command to use it later on. This command creates nodes that
+"``tahoe start``" command to use it later on. This command creates nodes that
have client functionality (upload/download files), web API services
(controlled by the 'webport' file), and storage services (unless
"--no-storage" is specified).
NODEDIR defaults to ~/.tahoe/, and newly-created nodes default to
publishing a web server on port 3456 (limited to the loopback interface, at
127.0.0.1, to restrict access to other programs on the same host). All of the
-other "tahoe" subcommands use corresponding defaults.
+other "``tahoe``" subcommands use corresponding defaults.
-"tahoe create-client [NODEDIR]" creates a node with no storage service.
-That is, it behaves like "tahoe create-node --no-storage [NODEDIR]".
+"``tahoe create-client [NODEDIR]``" creates a node with no storage service.
+That is, it behaves like "``tahoe create-node --no-storage [NODEDIR]``".
(This is a change from versions prior to 1.6.0.)
-"tahoe create-introducer [NODEDIR]" is used to create the Introducer node.
+"``tahoe create-introducer [NODEDIR]``" is used to create the Introducer node.
This node provides introduction services and nothing else. When started, this
node will produce an introducer.furl, which should be published to all
clients.
-"tahoe create-key-generator [NODEDIR]" is used to create a special
+"``tahoe create-key-generator [NODEDIR]``" is used to create a special
"key-generation" service, which allows a client to offload their RSA key
generation to a separate process. Since RSA key generation takes several
seconds, and must be done each time a directory is created, moving it to a
continue servicing other requests. The key generator exports a FURL that can
be copied into a node to enable this functionality.
-"tahoe run [NODEDIR]" will start a previously-created node in the foreground.
+"``tahoe run [NODEDIR]``" will start a previously-created node in the foreground.
-"tahoe start [NODEDIR]" will launch a previously-created node. It will launch
+"``tahoe start [NODEDIR]``" will launch a previously-created node. It will launch
the node into the background, using the standard Twisted "twistd"
daemon-launching tool. On some platforms (including Windows) this command is
unable to run a daemon in the background; in that case it behaves in the
-same way as "tahoe run".
+same way as "``tahoe run``".
-"tahoe stop [NODEDIR]" will shut down a running node.
+"``tahoe stop [NODEDIR]``" will shut down a running node.
-"tahoe restart [NODEDIR]" will stop and then restart a running node. This is
+"``tahoe restart [NODEDIR]``" will stop and then restart a running node. This is
most often used by developers who have just modified the code and want to
start using their changes.
-== Filesystem Manipulation ==
+Filesystem Manipulation
+=======================
These commands let you examine a Tahoe filesystem, providing basic
list/upload/download/delete/rename/mkdir functionality. They can be used as
except on Windows. The command-line arguments are assumed to use the
character encoding specified by the current locale.
-=== Starting Directories ===
+Starting Directories
+--------------------
As described in architecture.txt, the Tahoe distributed filesystem consists
of a collection of directories and files, each of which has a "read-cap" or a
To use this collection of files and directories, you need to choose a
starting point: some specific directory that we will refer to as a
-"starting directory". For a given starting directory, the "ls
-[STARTING_DIR]:" command would list the contents of this directory,
-the "ls [STARTING_DIR]:dir1" command would look inside this directory
-for a child named "dir1" and list its contents, "ls
-[STARTING_DIR]:dir1/subdir2" would look two levels deep, etc.
+"starting directory". For a given starting directory, the "``ls
+[STARTING_DIR]:``" command would list the contents of this directory,
+the "``ls [STARTING_DIR]:dir1``" command would look inside this directory
+for a child named "dir1" and list its contents, "``ls
+[STARTING_DIR]:dir1/subdir2``" would look two levels deep, etc.
Note that there is no real global "root" directory, but instead each
starting directory provides a different, possibly overlapping
Each tahoe node remembers a list of starting points, named "aliases",
in a file named ~/.tahoe/private/aliases. These aliases are short UTF-8
encoded strings that stand in for a directory read- or write- cap. If
-you use the command line "ls" without any "[STARTING_DIR]:" argument,
-then it will use the default alias, which is "tahoe", therefore "tahoe
-ls" has the same effect as "tahoe ls tahoe:". The same goes for the
+you use the command line "``ls``" without any "[STARTING_DIR]:" argument,
+then it will use the default alias, which is "tahoe", therefore "``tahoe
+ls``" has the same effect as "``tahoe ls tahoe:``". The same goes for the
other commands which can reasonably use a default alias: get, put,
mkdir, mv, and rm.
found in ~/.tahoe/private/aliases, the CLI will use the contents of
~/.tahoe/private/root_dir.cap instead. Tahoe-1.0 had only a single starting
point, and stored it in this root_dir.cap file, so Tahoe-1.1 will use it if
-necessary. However, once you've set a "tahoe:" alias with "tahoe set-alias",
+necessary. However, once you've set a "tahoe:" alias with "``tahoe set-alias``",
that will override anything in the old root_dir.cap file.
The Tahoe CLI commands use the same filename syntax as scp and rsync
The best way to get started with Tahoe is to create a node, start it, then
use the following command to create a new directory and set it as your
-"tahoe:" alias:
+"tahoe:" alias::
tahoe create-alias tahoe
-After that you can use "tahoe ls tahoe:" and "tahoe cp local.txt tahoe:",
-and both will refer to the directory that you've just created.
+After that you can use "``tahoe ls tahoe:``" and
+"``tahoe cp local.txt tahoe:``", and both will refer to the directory that
+you've just created.
-==== SECURITY NOTE: For users of shared systems ====
+SECURITY NOTE: For users of shared systems
+``````````````````````````````````````````
Another way to achieve the same effect as the above "tahoe create-alias"
-command is:
+command is::
tahoe add-alias tahoe `tahoe mkdir`
However, command-line arguments are visible to other users (through the
'ps' command, or the Windows Process Explorer tool), so if you are using a
tahoe node on a shared host, your login neighbors will be able to see (and
-capture) any directory caps that you set up with the "tahoe add-alias"
+capture) any directory caps that you set up with the "``tahoe add-alias``"
command.
-The "tahoe create-alias" command avoids this problem by creating a new
+The "``tahoe create-alias``" command avoids this problem by creating a new
directory and putting the cap into your aliases file for you. Alternatively,
you can edit the NODEDIR/private/aliases file directly, by adding a line like
-this:
+this::
fun: URI:DIR2:ovjy4yhylqlfoqg2vcze36dhde:4d4f47qko2xm5g7osgo2yyidi5m4muyo2vjjy53q4vjju2u55mfa
access to your files and directories.
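The aliases file itself is easy to inspect or generate with a few lines of
code. The following is a minimal, hypothetical sketch (Tahoe's own parser is
authoritative) of reading such a file, splitting each line on the first colon
only, since cap strings themselves contain colons:

```python
# Hypothetical sketch (not Tahoe's actual parser): read an aliases file
# where each line maps an alias name to a directory cap, e.g.
#   fun: URI:DIR2:ovjy4yhylqlfoqg2vcze36dhde:4d4f...

def parse_aliases(text):
    """Return a dict mapping alias name -> cap string."""
    aliases = {}
    for line in text.splitlines():
        line = line.strip()
        if not line:
            continue  # skip blank lines
        # Split on the FIRST colon only: cap strings contain colons too.
        name, _, cap = line.partition(":")
        aliases[name.strip()] = cap.strip()
    return aliases

sample = "tahoe: URI:DIR2:aaaa:bbbb\nfun: URI:DIR2:cccc:dddd\n"
print(parse_aliases(sample)["fun"])  # -> URI:DIR2:cccc:dddd
```

Splitting on the first colon only is the important detail: a naive
``line.split(":")`` would shred the cap string.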
-=== Command Syntax Summary ===
+Command Syntax Summary
+----------------------
tahoe add-alias alias cap
+
tahoe create-alias alias
+
tahoe list-aliases
+
tahoe mkdir
+
tahoe mkdir [alias:]path
+
tahoe ls [alias:][path]
+
tahoe webopen [alias:][path]
+
tahoe put [--mutable] [localfrom:-]
+
tahoe put [--mutable] [localfrom:-] [alias:]to
+
tahoe put [--mutable] [localfrom:-] [alias:]subdir/to
+
tahoe put [--mutable] [localfrom:-] dircap:to
+
tahoe put [--mutable] [localfrom:-] dircap:./subdir/to
+
tahoe put [localfrom:-] mutable-file-writecap
+
tahoe get [alias:]from [localto:-]
+
tahoe cp [-r] [alias:]frompath [alias:]topath
+
tahoe rm [alias:]what
+
tahoe mv [alias:]from [alias:]to
+
tahoe ln [alias:]from [alias:]to
+
tahoe backup localfrom [alias:]to
-=== Command Examples ===
+Command Examples
+----------------
-tahoe mkdir
+``tahoe mkdir``
This creates a new empty unlinked directory, and prints its write-cap to
stdout. The new directory is not attached to anything else.
-tahoe add-alias fun DIRCAP
+``tahoe add-alias fun DIRCAP``
- An example would be:
+ An example would be::
tahoe add-alias fun URI:DIR2:ovjy4yhylqlfoqg2vcze36dhde:4d4f47qko2xm5g7osgo2yyidi5m4muyo2vjjy53q4vjju2u55mfa
directory. Use "tahoe add-alias tahoe DIRCAP" to set the contents of the
default "tahoe:" alias.
-tahoe create-alias fun
+``tahoe create-alias fun``
- This combines 'tahoe mkdir' and 'tahoe add-alias' into a single step.
+ This combines "``tahoe mkdir``" and "``tahoe add-alias``" into a single step.
-tahoe list-aliases
+``tahoe list-aliases``
This displays a table of all configured aliases.
-tahoe mkdir subdir
-tahoe mkdir /subdir
+``tahoe mkdir subdir``
+
+``tahoe mkdir /subdir``
These both create a new empty directory and attach it to your root with the
name "subdir".
-tahoe ls
-tahoe ls /
-tahoe ls tahoe:
-tahoe ls tahoe:/
+``tahoe ls``
+
+``tahoe ls /``
+
+``tahoe ls tahoe:``
+
+``tahoe ls tahoe:/``
All four list the root directory of your personal virtual filesystem.
-tahoe ls subdir
+``tahoe ls subdir``
This lists a subdirectory of your filesystem.
-tahoe webopen
-tahoe webopen tahoe:
-tahoe webopen tahoe:subdir/
-tahoe webopen subdir/
+``tahoe webopen``
+
+``tahoe webopen tahoe:``
+
+``tahoe webopen tahoe:subdir/``
+
+``tahoe webopen subdir/``
This uses the python 'webbrowser' module to cause a local web browser to
open to the web page for the given directory. This page offers interfaces to
add, download, rename, and delete files in the directory. If not given an
alias or path, opens "tahoe:", the root dir of the default alias.
-tahoe put file.txt
-tahoe put ./file.txt
-tahoe put /tmp/file.txt
-tahoe put ~/file.txt
+``tahoe put file.txt``
+
+``tahoe put ./file.txt``
+
+``tahoe put /tmp/file.txt``
+
+``tahoe put ~/file.txt``
These upload the local file into the grid, and print the new read-cap to
stdout. The uploaded file is not attached to any directory. All one-argument
- forms of "tahoe put" perform an unlinked upload.
+ forms of "``tahoe put``" perform an unlinked upload.
+
+``tahoe put -``
-tahoe put -
-tahoe put
+``tahoe put``
These also perform an unlinked upload, but the data to be uploaded is taken
from stdin.
-tahoe put file.txt uploaded.txt
-tahoe put file.txt tahoe:uploaded.txt
+``tahoe put file.txt uploaded.txt``
+
+``tahoe put file.txt tahoe:uploaded.txt``
These upload the local file and add it to your root with the name
"uploaded.txt".
-tahoe put file.txt subdir/foo.txt
-tahoe put - subdir/foo.txt
-tahoe put file.txt tahoe:subdir/foo.txt
-tahoe put file.txt DIRCAP:./foo.txt
-tahoe put file.txt DIRCAP:./subdir/foo.txt
+``tahoe put file.txt subdir/foo.txt``
+
+``tahoe put - subdir/foo.txt``
+
+``tahoe put file.txt tahoe:subdir/foo.txt``
+
+``tahoe put file.txt DIRCAP:./foo.txt``
+
+``tahoe put file.txt DIRCAP:./subdir/foo.txt``
These upload the named file and attach it to a subdirectory of the given
root directory, under the name "foo.txt". Note that to use a directory
than ":", to help the CLI parser figure out where the dircap ends. When the
source file is named "-", the contents are taken from stdin.
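The "DIRCAP:./subdir/to" forms rely on the "./" marker to show where the
dircap ends. The following is a hypothetical sketch of that disambiguation
rule (illustrative only, not Tahoe's actual argument parser; the fallback to
the default "tahoe" alias follows the behavior described earlier):

```python
# Hypothetical sketch (NOT Tahoe's actual CLI parser): split a target
# argument into (root, path) following the conventions described above.
# Caps contain colons, so "DIRCAP:./path" uses the "./" marker to show
# where the cap ends; a bare "alias:path" splits on the first colon;
# no colon at all falls back to the default "tahoe" alias.

def split_target(target):
    marker = target.find(":./")
    if marker != -1:
        # dircap form: everything before ":./" is the cap itself
        return target[:marker], target[marker + 3:]
    root, sep, path = target.partition(":")
    if not sep:
        return "tahoe", target  # no alias given: use the default alias
    return root, path

print(split_target("DIRCAP:./subdir/foo.txt"))  # ('DIRCAP', 'subdir/foo.txt')
print(split_target("tahoe:subdir/foo.txt"))     # ('tahoe', 'subdir/foo.txt')
```

Without the "./" marker, a parser could not tell where a colon-laden cap ends
and the path begins; that is exactly why the CLI requires it for dircap targets.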
-tahoe put file.txt --mutable
+``tahoe put file.txt --mutable``
Create a new mutable file, fill it with the contents of file.txt, and print
the new write-cap to stdout.
-tahoe put file.txt MUTABLE-FILE-WRITECAP
+``tahoe put file.txt MUTABLE-FILE-WRITECAP``
Replace the contents of the given mutable file with the contents of file.txt
and print the same write-cap to stdout.
-tahoe cp file.txt tahoe:uploaded.txt
-tahoe cp file.txt tahoe:
-tahoe cp file.txt tahoe:/
-tahoe cp ./file.txt tahoe:
+``tahoe cp file.txt tahoe:uploaded.txt``
+
+``tahoe cp file.txt tahoe:``
+
+``tahoe cp file.txt tahoe:/``
+
+``tahoe cp ./file.txt tahoe:``
These upload the local file and add it to your root with the name
"uploaded.txt".
-tahoe cp tahoe:uploaded.txt downloaded.txt
-tahoe cp tahoe:uploaded.txt ./downloaded.txt
-tahoe cp tahoe:uploaded.txt /tmp/downloaded.txt
-tahoe cp tahoe:uploaded.txt ~/downloaded.txt
+``tahoe cp tahoe:uploaded.txt downloaded.txt``
+
+``tahoe cp tahoe:uploaded.txt ./downloaded.txt``
+
+``tahoe cp tahoe:uploaded.txt /tmp/downloaded.txt``
+
+``tahoe cp tahoe:uploaded.txt ~/downloaded.txt``
These download the named file from your tahoe root, and put the result on
your local filesystem.
-tahoe cp tahoe:uploaded.txt fun:stuff.txt
+``tahoe cp tahoe:uploaded.txt fun:stuff.txt``
This copies a file from your tahoe root to a different virtual directory,
set up earlier with "tahoe add-alias fun DIRCAP".
-tahoe rm uploaded.txt
-tahoe rm tahoe:uploaded.txt
+``tahoe rm uploaded.txt``
+
+``tahoe rm tahoe:uploaded.txt``
These delete a file from your tahoe root.
-tahoe mv uploaded.txt renamed.txt
-tahoe mv tahoe:uploaded.txt tahoe:renamed.txt
+``tahoe mv uploaded.txt renamed.txt``
+
+``tahoe mv tahoe:uploaded.txt tahoe:renamed.txt``
These rename a file within your tahoe root directory.
-tahoe mv uploaded.txt fun:
-tahoe mv tahoe:uploaded.txt fun:
-tahoe mv tahoe:uploaded.txt fun:uploaded.txt
+``tahoe mv uploaded.txt fun:``
+
+``tahoe mv tahoe:uploaded.txt fun:``
+
+``tahoe mv tahoe:uploaded.txt fun:uploaded.txt``
These move a file from your tahoe root directory to the virtual directory
set up earlier with "tahoe add-alias fun DIRCAP".
-tahoe backup ~ work:backups
+``tahoe backup ~ work:backups``
This command performs a full versioned backup of every file and directory
underneath your "~" home directory, placing an immutable timestamped
should delete the stale backupdb.sqlite file, to force "tahoe backup" to
upload all files to the new grid.
-tahoe backup --exclude=*~ ~ work:backups
+``tahoe backup --exclude=*~ ~ work:backups``
Same as above, but this time the backup process will ignore any
filename that ends with '~'. '--exclude' will accept any standard
attention that the pattern will be matched against any level of the
directory tree; it is still impossible to specify absolute path exclusions.
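These patterns are ordinary shell-style globs. Python's stdlib ``fnmatch``
module implements the same wildcard semantics, so it can be used to preview
what a given pattern will match (a demonstration of the pattern rules only,
not Tahoe's own exclusion code):

```python
# Demonstration of shell-style wildcard matching, as used by --exclude.
# fnmatch implements the standard *, ?, [seq] wildcard rules; this is not
# Tahoe's matching code, just an illustration of the pattern semantics.
from fnmatch import fnmatch

print(fnmatch("notes.txt~", "*~"))   # editor backup file: True (excluded)
print(fnmatch("notes.txt", "*~"))    # regular file: False (kept)
print(fnmatch("cache", "cach?"))     # '?' matches one character: True
```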
-tahoe backup --exclude-from=/path/to/filename ~ work:backups
+``tahoe backup --exclude-from=/path/to/filename ~ work:backups``
'--exclude-from' is similar to '--exclude', but reads exclusion
patterns from '/path/to/filename', one per line.
-tahoe backup --exclude-vcs ~ work:backups
+``tahoe backup --exclude-vcs ~ work:backups``
This command will ignore any known file or directory that's used by
- version control systems to store metadata. The list of the exluded
- names is:
+ version control systems to store metadata. The excluded names are:
* CVS
* RCS
* .hgignore
* _darcs
-== Storage Grid Maintenance ==
+Storage Grid Maintenance
+========================
+
+``tahoe manifest tahoe:``
+
+``tahoe manifest --storage-index tahoe:``
+
+``tahoe manifest --verify-cap tahoe:``
+
+``tahoe manifest --repair-cap tahoe:``
-tahoe manifest tahoe:
-tahoe manifest --storage-index tahoe:
-tahoe manifest --verify-cap tahoe:
-tahoe manifest --repair-cap tahoe:
-tahoe manifest --raw tahoe:
+``tahoe manifest --raw tahoe:``
This performs a recursive walk of the given directory, visiting every file
and directory that can be reached from that point. It then emits one line to
strings, and cap strings. The last line of the --raw output will be a JSON
encoded deep-stats dictionary.
-tahoe stats tahoe:
+``tahoe stats tahoe:``
This performs a recursive walk of the given directory, visiting every file
and directory that can be reached from that point. It gathers statistics on
the sizes of the objects it encounters, and prints a summary to stdout.
-== Debugging ==
+Debugging
+=========
For a list of all debugging commands, use "``tahoe debug``".
-"tahoe debug find-shares STORAGEINDEX NODEDIRS.." will look through one or
+"``tahoe debug find-shares STORAGEINDEX NODEDIRS..``" will look through one or
more storage nodes for the share files that are providing storage for the
given storage index.
-"tahoe debug catalog-shares NODEDIRS.." will look through one or more storage
-nodes and locate every single share they contain. It produces a report on
-stdout with one line per share, describing what kind of share it is, the
+"``tahoe debug catalog-shares NODEDIRS..``" will look through one or more
+storage nodes and locate every single share they contain. It produces a report
+on stdout with one line per share, describing what kind of share it is, the
storage index, the size of the file it is used for, etc. It may be useful to
concatenate these reports from all storage hosts and use it to look for
anomalies.
-"tahoe debug dump-share SHAREFILE" will take the name of a single share file
+"``tahoe debug dump-share SHAREFILE``" will take the name of a single share file
(as found by "tahoe debug find-shares") and print a summary of its contents to
stdout. This includes a list of leases, summaries of the hash tree, and
information from the UEB (URI Extension Block). For mutable file shares, it
will describe which version (seqnum and root-hash) is being stored in this
share.
-"tahoe debug dump-cap CAP" will take a URI (a file read-cap, or a directory
+"``tahoe debug dump-cap CAP``" will take a URI (a file read-cap, or a directory
read- or write- cap) and unpack it into separate pieces. The most useful
aspect of this command is to reveal the storage index for any given URI. This
can be used to locate the share files that are holding the encoded+encrypted
data for this file.
-"tahoe debug repl" will launch an interactive python interpreter in which the
-Tahoe packages and modules are available on sys.path (e.g. by using 'import
+"``tahoe debug repl``" will launch an interactive python interpreter in which
+the Tahoe packages and modules are available on sys.path (e.g. by using 'import
allmydata'). This is most useful from a source tree: it simply sets the
PYTHONPATH correctly and runs the 'python' executable.
-"tahoe debug corrupt-share SHAREFILE" will flip a bit in the given sharefile.
-This can be used to test the client-side verification/repair code. Obviously
-this command should not be used during normal operation.
+"``tahoe debug corrupt-share SHAREFILE``" will flip a bit in the given
+sharefile. This can be used to test the client-side verification/repair code.
+Obviously, this command should not be used during normal operation.
-
-= The Tahoe REST-ful Web API =
-
-1. Enabling the web-API port
-2. Basic Concepts: GET, PUT, DELETE, POST
-3. URLs, Machine-Oriented Interfaces
-4. Browser Operations: Human-Oriented Interfaces
-5. Welcome / Debug / Status pages
-6. Static Files in /public_html
-7. Safety and security issues -- names vs. URIs
-8. Concurrency Issues
-
-
-== Enabling the web-API port ==
+==========================
+The Tahoe REST-ful Web API
+==========================
+
+1. `Enabling the web-API port`_
+2. `Basic Concepts: GET, PUT, DELETE, POST`_
+3. `URLs`_
+
+ 1. `Child Lookup`_
+
+4. `Slow Operations, Progress, and Cancelling`_
+5. `Programmatic Operations`_
+
+ 1. `Reading a file`_
+ 2. `Writing/Uploading a File`_
+ 3. `Creating a New Directory`_
+ 4. `Get Information About A File Or Directory (as JSON)`_
+ 5. `Attaching an existing File or Directory by its read- or write-cap`_
+ 6. `Adding multiple files or directories to a parent directory at once`_
+ 7. `Deleting a File or Directory`_
+
+6. `Browser Operations: Human-Oriented Interfaces`_
+
+ 1. `Viewing A Directory (as HTML)`_
+ 2. `Viewing/Downloading a File`_
+ 3. `Get Information About A File Or Directory (as HTML)`_
+ 4. `Creating a Directory`_
+ 5. `Uploading a File`_
+ 6. `Attaching An Existing File Or Directory (by URI)`_
+ 7. `Deleting A Child`_
+ 8. `Renaming A Child`_
+ 9. `Other Utilities`_
+ 10. `Debugging and Testing Features`_
+
+7. `Other Useful Pages`_
+8. `Static Files in /public_html`_
+9. `Safety and security issues -- names vs. URIs`_
+10. `Concurrency Issues`_
+
+Enabling the web-API port
+=========================
Every Tahoe node is capable of running a built-in HTTP server. To enable
this, just write a port number into the "[node]web.port" line of your node's
This string is actually a Twisted "strports" specification, meaning you can
get more control over the interface to which the server binds by supplying
additional arguments. For more details, see the documentation on
-twisted.application.strports:
-http://twistedmatrix.com/documents/current/api/twisted.application.strports.html
+`twisted.application.strports
+<http://twistedmatrix.com/documents/current/api/twisted.application.strports.html>`_.
Writing "tcp:3456:interface=127.0.0.1" into the web.port line does the same
but binds to the loopback interface, ensuring that only the programs on the
-local host can connect. Using
-"ssl:3456:privateKey=mykey.pem:certKey=cert.pem" runs an SSL server.
+local host can connect. Using "ssl:3456:privateKey=mykey.pem:certKey=cert.pem"
+runs an SSL server.
This webport can be set when the node is created by passing a --webport
option to the 'tahoe create-node' command. By default, the node listens on
port 3456, on the loopback (127.0.0.1) interface.
-== Basic Concepts ==
+Basic Concepts: GET, PUT, DELETE, POST
+======================================
-As described in architecture.txt, each file and directory in a Tahoe virtual
+As described in `architecture.rst`_, each file and directory in a Tahoe virtual
filesystem is referenced by an identifier that combines the designation of
the object with the authority to do something with it (such as read or modify
the contents). This identifier is called a "read-cap" or "write-cap",
depending upon whether it enables read-only or read-write access. These
"caps" are also referred to as URIs.
+.. _architecture.rst: http://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/architecture.rst
+
The Tahoe web-based API is "REST-ful", meaning it implements the concepts of
"REpresentational State Transfer": the original scheme by which the World
Wide Web was intended to work. Each object (file or directory) is referenced
400-series code (like 404 Not Found for an unknown childname, or 400 Bad Request
when the parameters to a webapi operation are invalid), and the HTTP response
body will usually contain a few lines of explanation as to the cause of the
-error and possible responses. Unusual exceptions may result in a
-500 Internal Server Error as a catch-all, with a default response body containing
+error and possible responses. Unusual exceptions may result in a 500 Internal
+Server Error as a catch-all, with a default response body containing
a Nevow-generated HTML-ized representation of the Python exception stack trace
that caused the problem. CLI programs which want to copy the response body to
stderr should provide an "Accept: text/plain" header to their requests to get
-a plain text stack trace instead. If the Accept header contains */*, or
-text/*, or text/html (or if there is no Accept header), HTML tracebacks will
+a plain text stack trace instead. If the Accept header contains ``*/*``, or
+``text/*``, or ``text/html`` (or if there is no Accept header), HTML tracebacks will
be generated.
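A client written in Python might set that header as sketched below (stdlib
``urllib`` only; the URL is a placeholder cap, and no request is actually
sent here, so this just shows how the header is attached):

```python
# Sketch: ask the webapi for plain-text tracebacks instead of HTML ones
# by sending "Accept: text/plain". The cap in the URL is a placeholder;
# this only builds the request object and does not contact a node.
from urllib.request import Request

req = Request(
    "http://127.0.0.1:3456/uri/EXAMPLECAP",
    headers={"Accept": "text/plain"},
)
print(req.get_header("Accept"))  # -> text/plain
```

Passing ``req`` to ``urllib.request.urlopen`` would then perform the actual
HTTP request against a running node.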
-== URLs ==
+URLs
+====
Tahoe uses a variety of read- and write- caps to identify files and
directories. The most common of these is the "immutable file read-cap", which
-is used for most uploaded files. These read-caps look like the following:
+is used for most uploaded files. These read-caps look like the following::
URI:CHK:ime6pvkaxuetdfah2p2f35pe54:4btz54xk3tew6nd4y2ojpxj4m6wxjqqlwnztgre6gnjgtucd5r4a:3:10:202
The next most common is a "directory write-cap", which provides both read and
-write access to a directory, and look like this:
+write access to a directory, and look like this::
URI:DIR2:djrdkfawoqihigoett4g6auz6a:jx5mplfpwexnoqff7y5e4zjus4lidm76dcuarpct7cckorh2dpgq
a prefix (which indicates the HTTP server to use) with the cap (which
indicates which object inside that server to access). Since the default Tahoe
webport is 3456, the most common prefix is one that will use a local node
-listening on this port:
+listening on this port::
http://127.0.0.1:3456/uri/ + $CAP
So, to access the directory named above (which happens to be the
publicly-writeable sample directory on the Tahoe test grid, described at
-http://allmydata.org/trac/tahoe/wiki/TestGrid), the URL would be:
+http://allmydata.org/trac/tahoe/wiki/TestGrid), the URL would be::
http://127.0.0.1:3456/uri/URI%3ADIR2%3Adjrdkfawoqihigoett4g6auz6a%3Ajx5mplfpwexnoqff7y5e4zjus4lidm76dcuarpct7cckorh2dpgq/
(note that the colons in the directory-cap are url-encoded into "%3A"
sequences).
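This percent-encoding is ordinary URL quoting, which the stdlib can do for
you. A sketch in Python (using the sample directory cap from the text; the
port and host are the defaults described above):

```python
# Sketch: percent-encode a Tahoe cap for use in a /uri/ URL.
# urllib.parse.quote turns each ":" into "%3A"; safe="" ensures no
# characters are exempted from encoding (by default "/" would be kept).
from urllib.parse import quote

dircap = ("URI:DIR2:djrdkfawoqihigoett4g6auz6a:"
          "jx5mplfpwexnoqff7y5e4zjus4lidm76dcuarpct7cckorh2dpgq")
url = "http://127.0.0.1:3456/uri/" + quote(dircap, safe="") + "/"
print(url)
```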
-Likewise, to access the file named above, use:
+Likewise, to access the file named above, use::
http://127.0.0.1:3456/uri/URI%3ACHK%3Aime6pvkaxuetdfah2p2f35pe54%3A4btz54xk3tew6nd4y2ojpxj4m6wxjqqlwnztgre6gnjgtucd5r4a%3A3%3A10%3A202
In the rest of this document, we'll use "$DIRCAP" as shorthand for a read-cap
or write-cap that refers to a directory, and "$FILECAP" to abbreviate a cap
that refers to a file (whether mutable or immutable). So those URLs above can
-be abbreviated as:
+be abbreviated as::
http://127.0.0.1:3456/uri/$DIRCAP/
http://127.0.0.1:3456/uri/$FILECAP
The operation summaries below will abbreviate these further, by eliding the
-server prefix. They will be displayed like this:
+server prefix. They will be displayed like this::
/uri/$DIRCAP/
/uri/$FILECAP
-=== Child Lookup ===
+Child Lookup
+------------
Tahoe directories contain named child entries, just like directories in a regular
local filesystem. These child entries, called "dirnodes", consist of a name,
If you have a Tahoe URL that refers to a directory, and want to reference a
named child inside it, just append the child name to the URL. For example, if
our sample directory contains a file named "welcome.txt", we can refer to
-that file with:
+that file with::
http://127.0.0.1:3456/uri/$DIRCAP/welcome.txt
(or http://127.0.0.1:3456/uri/URI%3ADIR2%3Adjrdkfawoqihigoett4g6auz6a%3Ajx5mplfpwexnoqff7y5e4zjus4lidm76dcuarpct7cckorh2dpgq/welcome.txt)
-Multiple levels of subdirectories can be handled this way:
+Multiple levels of subdirectories can be handled this way::
http://127.0.0.1:3456/uri/$DIRCAP/tahoe-source/docs/webapi.txt
In this document, when we need to refer to a URL that references a file using
-this child-of-some-directory format, we'll use the following string:
+this child-of-some-directory format, we'll use the following string::
/uri/$DIRCAP/[SUBDIRS../]FILENAME
that this whole URL refers to a file of some sort, rather than to a
directory.
-When we need to refer specifically to a directory in this way, we'll write:
+When we need to refer specifically to a directory in this way, we'll write::
/uri/$DIRCAP/[SUBDIRS../]SUBDIR
Note that all components of pathnames in URLs are required to be UTF-8
encoded, so "resume.doc" (with an acute accent on both E's) would be accessed
-with:
+with::
http://127.0.0.1:3456/uri/$DIRCAP/r%C3%A9sum%C3%A9.doc
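The same stdlib quoting applies the UTF-8 rule to child names automatically,
so a client can build the URL above like this ($DIRCAP stands in for a real
directory cap, as elsewhere in this document):

```python
# Sketch: percent-encode a UTF-8 child name for a child-lookup URL.
# Non-ASCII characters are encoded as UTF-8 bytes and percent-escaped,
# so each "é" becomes "%C3%A9". "$DIRCAP" is a placeholder here.
from urllib.parse import quote

child = quote("résumé.doc")
print("http://127.0.0.1:3456/uri/$DIRCAP/" + child)
```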
the security properties of Tahoe caps to be extended across the webapi
interface.
-== Slow Operations, Progress, and Cancelling ==
+Slow Operations, Progress, and Cancelling
+=========================================
Certain operations can be expected to take a long time. The "t=deep-check",
described below, will recursively visit every file and directory reachable
to the POST or PUT request which starts the operation. The following
operations can then be used to retrieve status:
-GET /operations/$HANDLE?output=HTML (with or without t=status)
-GET /operations/$HANDLE?output=JSON (same)
+``GET /operations/$HANDLE?output=HTML`` (with or without t=status)
+
+``GET /operations/$HANDLE?output=JSON`` (same)
These two retrieve the current status of the given operation. Each operation
presents a different sort of information, but in general the page retrieved
will indicate:
- * whether the operation is complete, or if it is still running
- * how much of the operation is complete, and how much is left, if possible
+ * whether the operation is complete, or if it is still running
+ * how much of the operation is complete, and how much is left, if possible
Note that the final status output can be quite large: a deep-manifest of a
directory structure with 300k directories and 200k unique files is about
There may be more status information available under
/operations/$HANDLE/$ETC: i.e., the handle forms the root of a URL space.
-POST /operations/$HANDLE?t=cancel
+``POST /operations/$HANDLE?t=cancel``
This terminates the operation, and returns an HTML page explaining what was
cancelled. If the operation handle has already expired (see below), this
handles. Instead, they emit line-oriented status results immediately. Client
code can cancel the operation by simply closing the HTTP connection.
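A minimal polling sketch, assuming the decoded ``?output=JSON`` status carries a boolean "finished" field (per the deep-check status format described later in this document); ``wait_for_operation`` and its injected ``fetch_json`` callable are illustrative, not part of the webapi:

```python
import time

def wait_for_operation(fetch_json, poll_interval=5.0):
    # fetch_json is any callable that performs
    # GET /operations/$HANDLE?output=JSON and returns the decoded dict;
    # keeping the HTTP part injectable makes the loop easy to test.
    while True:
        status = fetch_json()
        if status.get("finished"):
            return status
        time.sleep(poll_interval)
```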
-== Programmatic Operations ==
+Programmatic Operations
+=======================
Now that we know how to build URLs that refer to files and directories in a
Tahoe virtual filesystem, what sorts of operations can we do with those URLs?
that use HTTP to communicate with a Tahoe node. A later section describes
operations that are intended for web browsers.
-=== Reading A File ===
+Reading A File
+--------------
-GET /uri/$FILECAP
-GET /uri/$DIRCAP/[SUBDIRS../]FILENAME
+``GET /uri/$FILECAP``
+
+``GET /uri/$DIRCAP/[SUBDIRS../]FILENAME``
This will retrieve the contents of the given file. The HTTP response body
will contain the sequence of bytes that make up the file.
"Browser Operations", for details on how to modify these URLs for that
purpose.
-=== Writing/Uploading A File ===
+Writing/Uploading A File
+------------------------
+
+``PUT /uri/$FILECAP``
-PUT /uri/$FILECAP
-PUT /uri/$DIRCAP/[SUBDIRS../]FILENAME
+``PUT /uri/$DIRCAP/[SUBDIRS../]FILENAME``
Upload a file, using the data from the HTTP request body, and add whatever
child links and subdirectories are necessary to make the file available at
Note that the 'curl -T localfile http://127.0.0.1:3456/uri/$DIRCAP/foo.txt'
command can be used to invoke this operation.
-PUT /uri
+``PUT /uri``
This uploads a file, and produces a file-cap for the contents, but does not
attach the file into the filesystem. No directories will be modified by
mutable file, and return its write-cap in the HTTP response. The default is
to create an immutable file, returning the read-cap as a response.
-=== Creating A New Directory ===
+Creating A New Directory
+------------------------
+
+``POST /uri?t=mkdir``
-POST /uri?t=mkdir
-PUT /uri?t=mkdir
+``PUT /uri?t=mkdir``
Create a new empty directory and return its write-cap as the HTTP response
body. This does not make the newly created directory visible from the
filesystem. The "PUT" operation is provided for backwards compatibility:
new code should use POST.
-POST /uri?t=mkdir-with-children
+``POST /uri?t=mkdir-with-children``
Create a new directory, populated with a set of child nodes, and return its
write-cap as the HTTP response body. The new directory is not attached to
Each dictionary key should be a child name, and each value should be a list
of [TYPE, PROPDICT], where PROPDICT contains "rw_uri", "ro_uri", and
"metadata" keys (all others are ignored). For example, the PUT request body
- could be:
+ could be::
{
"Fran\u00e7ais": [ "filenode", {
The metadata may have a "no-write" field. If this is set to true in the
metadata of a link, it will not be possible to open that link for writing
- via the SFTP frontend; see docs/frontends/FTP-and-SFTP.txt for details.
+ via the SFTP frontend; see `FTP-and-SFTP.rst`_ for details.
Also, if the "no-write" field is set to true in the metadata of a link to
a mutable child, it will cause the link to be diminished to read-only.
+
+ .. _FTP-and-SFTP.rst: http://tahoe-lafs.org/source/tahoe-lafs/trunk/docs/frontends/FTP-and-SFTP.rst
Note that the webapi-using client application must not provide the
"Content-Type: multipart/form-data" header that usually accompanies HTML
and the resulting string encoded into UTF-8. This UTF-8 bytestring should
then be used as the POST body.
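The body-construction rule above can be sketched as follows; the cap string is a placeholder, not a real Tahoe cap:

```python
import json

# Build the t=mkdir-with-children POST body: a JSON dict mapping each
# child name to [TYPE, PROPDICT], serialized and encoded as UTF-8.
children = {
    "Fran\u00e7ais": ["filenode", {
        "ro_uri": "URI:CHK:placeholder",  # placeholder cap
        "metadata": {"ctime": 1202777696.7564139,
                     "mtime": 1202777696.7564139},
    }],
}
body = json.dumps(children).encode("utf-8")  # the POST body bytes
```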
-POST /uri?t=mkdir-immutable
+``POST /uri?t=mkdir-immutable``
Like t=mkdir-with-children above, but the new directory will be
deep-immutable. This means that the directory itself is immutable, and that
A non-empty request body is mandatory, since after the directory is created,
it will not be possible to add more children to it.
-POST /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir
-PUT /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir
+``POST /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir``
+
+``PUT /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir``
Create new directories as necessary to make sure that the named target
($DIRCAP/SUBDIRS../SUBDIR) is a directory. This will create additional
The write-cap of the new directory will be returned as the HTTP response
body.
-POST /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir-with-children
+``POST /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir-with-children``
Like /uri?t=mkdir-with-children, but the final directory is created as a
child of an existing mutable directory. This will create additional
directory; or if it would require changing an immutable directory; or if
 the immediate parent directory already has a child named SUBDIR.
-POST /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir-immutable
+``POST /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir-immutable``
Like /uri?t=mkdir-immutable, but the final directory is created as a child
of an existing mutable directory. The final directory will be deep-immutable,
This operation will return an error if the parent directory is immutable,
or already has a child named SUBDIR.
-POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir&name=NAME
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir&name=NAME``
Create a new empty mutable directory and attach it to the given existing
directory. This will create additional intermediate directories as necessary.
whereas the /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=mkdir operation above has a URL
that points directly to the bottommost new directory.
-POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir-with-children&name=NAME
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir-with-children&name=NAME``
Like /uri/$DIRCAP/[SUBDIRS../]?t=mkdir&name=NAME, but the new directory will
be populated with initial children via the POST request body. This command
Note that the name= argument must be passed as a queryarg, because the POST
request body is used for the initial children JSON.
-POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir-immutable&name=NAME
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir-immutable&name=NAME``
Like /uri/$DIRCAP/[SUBDIRS../]?t=mkdir-with-children&name=NAME, but the
final directory will be deep-immutable. The children are specified as a
This operation will return an error if the parent directory is immutable,
or already has a child named NAME.
-=== Get Information About A File Or Directory (as JSON) ===
-
-GET /uri/$FILECAP?t=json
-GET /uri/$DIRCAP?t=json
-GET /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=json
-GET /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=json
-
- This returns a machine-parseable JSON-encoded description of the given
- object. The JSON always contains a list, and the first element of the list is
- always a flag that indicates whether the referenced object is a file or a
- directory. If it is a capability to a file, then the information includes
- file size and URI, like this:
-
- GET /uri/$FILECAP?t=json :
-
- [ "filenode", {
- "ro_uri": file_uri,
- "verify_uri": verify_uri,
- "size": bytes,
- "mutable": false
- } ]
-
- If it is a capability to a directory followed by a path from that directory
- to a file, then the information also includes metadata from the link to the
- file in the parent directory, like this:
-
- GET /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=json :
-
- [ "filenode", {
- "ro_uri": file_uri,
- "verify_uri": verify_uri,
- "size": bytes,
- "mutable": false,
- "metadata": {
- "ctime": 1202777696.7564139,
- "mtime": 1202777696.7564139,
- "tahoe": {
- "linkcrtime": 1202777696.7564139,
- "linkmotime": 1202777696.7564139
- } } } ]
+Get Information About A File Or Directory (as JSON)
+---------------------------------------------------
+
+``GET /uri/$FILECAP?t=json``
+
+``GET /uri/$DIRCAP?t=json``
+
+``GET /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=json``
+
+``GET /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=json``
+
+ This returns a machine-parseable JSON-encoded description of the given
+ object. The JSON always contains a list, and the first element of the list is
+ always a flag that indicates whether the referenced object is a file or a
+ directory. If it is a capability to a file, then the information includes
+ file size and URI, like this::
+
+ GET /uri/$FILECAP?t=json :
+
+ [ "filenode", {
+ "ro_uri": file_uri,
+ "verify_uri": verify_uri,
+ "size": bytes,
+ "mutable": false
+ } ]
+
+ If it is a capability to a directory followed by a path from that directory
+ to a file, then the information also includes metadata from the link to the
+ file in the parent directory, like this::
+
+ GET /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=json
+
+ [ "filenode", {
+ "ro_uri": file_uri,
+ "verify_uri": verify_uri,
+ "size": bytes,
+ "mutable": false,
+ "metadata": {
+ "ctime": 1202777696.7564139,
+ "mtime": 1202777696.7564139,
+ "tahoe": {
+ "linkcrtime": 1202777696.7564139,
+ "linkmotime": 1202777696.7564139
+ } } } ]
+
+ If it is a directory, then it includes information about the children of
+ this directory, as a mapping from child name to a set of data about the
+ child (the same data that would appear in a corresponding GET?t=json of the
+ child itself). The child entries also include metadata about each child,
+ including link-creation- and link-change- timestamps. The output looks like
+ this::
+
+ GET /uri/$DIRCAP?t=json :
+ GET /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=json :
+
+ [ "dirnode", {
+ "rw_uri": read_write_uri,
+ "ro_uri": read_only_uri,
+ "verify_uri": verify_uri,
+ "mutable": true,
+ "children": {
+ "foo.txt": [ "filenode", {
+ "ro_uri": uri,
+ "size": bytes,
+ "metadata": {
+ "ctime": 1202777696.7564139,
+ "mtime": 1202777696.7564139,
+ "tahoe": {
+ "linkcrtime": 1202777696.7564139,
+ "linkmotime": 1202777696.7564139
+ } } } ],
+ "subdir": [ "dirnode", {
+ "rw_uri": rwuri,
+ "ro_uri": rouri,
+ "metadata": {
+ "ctime": 1202778102.7589991,
+ "mtime": 1202778111.2160511,
+ "tahoe": {
+ "linkcrtime": 1202777696.7564139,
+ "linkmotime": 1202777696.7564139
+ } } } ]
+ } } ]
+
+ In the above example, note how 'children' is a dictionary in which the keys
+ are child names and the values depend upon whether the child is a file or a
+ directory. The value is mostly the same as the JSON representation of the
+ child object (except that directories do not recurse -- the "children"
+ entry of the child is omitted, and the directory view includes the metadata
+ that is stored on the directory edge).
+
+ The rw_uri field will be present in the information about a directory
+ if and only if you have read-write access to that directory. The verify_uri
+ field will be present if and only if the object has a verify-cap
+ (non-distributed LIT files do not have verify-caps).
+
+ If the cap is of an unknown format, then the file size and verify_uri will
+ not be available::
+
+ GET /uri/$UNKNOWNCAP?t=json :
+
+ [ "unknown", {
+ "ro_uri": unknown_read_uri
+ } ]
+
+ GET /uri/$DIRCAP/[SUBDIRS../]UNKNOWNCHILDNAME?t=json :
+
+ [ "unknown", {
+ "rw_uri": unknown_write_uri,
+ "ro_uri": unknown_read_uri,
+ "mutable": true,
+ "metadata": {
+ "ctime": 1202777696.7564139,
+ "mtime": 1202777696.7564139,
+ "tahoe": {
+ "linkcrtime": 1202777696.7564139,
+ "linkmotime": 1202777696.7564139
+ } } } ]
+
+ As in the case of file nodes, the metadata will only be present when the
+ capability is to a directory followed by a path. The "mutable" field is also
+ not always present; when it is absent, the mutability of the object is not
+ known.
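A sketch of consuming these responses client-side; ``describe`` is an illustrative helper and the cap value below is a placeholder:

```python
import json

def describe(response_body):
    # Every ?t=json response is a two-element list: a type flag
    # ("filenode", "dirnode", or "unknown") followed by an info dict.
    # rw_uri appears only when you have read-write access; metadata
    # appears only when the object was reached via a directory path.
    nodetype, info = json.loads(response_body)
    return nodetype, "rw_uri" in info, info.get("metadata")

# A filenode response shaped like the example above:
sample = ('["filenode", {"ro_uri": "URI:CHK:placeholder", '
          '"size": 1234, "mutable": false}]')
```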
+
+About the metadata
+``````````````````
+
+The value of the 'tahoe':'linkmotime' key is updated whenever a link to a
+child is set. The value of the 'tahoe':'linkcrtime' key is updated whenever
+a link to a child is created -- i.e. when there was not previously a link
+under that name.
+
+Note however, that if the edge in the Tahoe filesystem points to a mutable
+file and the contents of that mutable file is changed, then the
+'tahoe':'linkmotime' value on that edge will *not* be updated, since the
+edge itself wasn't updated -- only the mutable file was.
+
+The timestamps are represented as a number of seconds since the UNIX epoch
+(1970-01-01 00:00:00 UTC), with leap seconds not being counted in the long
+term.
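Since these are plain epoch-seconds values, converting one for display needs no Tahoe-specific code; a sketch, using a timestamp from the examples above:

```python
from datetime import datetime, timezone

# A 'tahoe':'linkmotime' value is a float of seconds since the epoch:
linkmotime = 1202777696.7564139
when = datetime.fromtimestamp(linkmotime, tz=timezone.utc)
```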
+
+In Tahoe earlier than v1.4.0, 'mtime' and 'ctime' keys were populated
+instead of the 'tahoe':'linkmotime' and 'tahoe':'linkcrtime' keys. Starting
+in Tahoe v1.4.0, the 'linkmotime'/'linkcrtime' keys in the 'tahoe' sub-dict
+are populated. However, prior to Tahoe v1.7beta, a bug caused the 'tahoe'
+sub-dict to be deleted by webapi requests in which new metadata is
+specified, and not to be added to existing child links that lack it.
+
+From Tahoe v1.7.0 onward, the 'mtime' and 'ctime' fields are no longer
+populated or updated (see ticket #924), except by "tahoe backup" as
+explained below. For backward compatibility, when an existing link is
+updated and 'tahoe':'linkcrtime' is not present in the previous metadata
+but 'ctime' is, the old value of 'ctime' is used as the new value of
+'tahoe':'linkcrtime'.
+
+The reason we added the new fields in Tahoe v1.4.0 is that there is a
+"set_children" API (described below) which you can use to overwrite the
+values of the 'mtime'/'ctime' pair, and this API is used by the
+"tahoe backup" command (in Tahoe v1.3.0 and later) to set the 'mtime' and
+'ctime' values when backing up files from a local filesystem into the
+Tahoe filesystem. As of Tahoe v1.4.0, the set_children API cannot be used
+to set anything under the 'tahoe' key of the metadata dict -- if you
+include 'tahoe' keys in your 'metadata' arguments then it will silently
+ignore those keys.
+
+Therefore, if the 'tahoe' sub-dict is present, you can rely on the
+'linkcrtime' and 'linkmotime' values therein to have the semantics described
+above. (This is assuming that only official Tahoe clients have been used to
+write those links, and that their system clocks were set to what you expected
+-- there is nothing preventing someone from editing their Tahoe client or
+writing their own Tahoe client which would overwrite those values however
+they like, and there is nothing to constrain their system clock from taking
+any value.)
+
+When an edge is created or updated by "tahoe backup", the 'mtime' and
+'ctime' keys on that edge are set as follows:
+
+* 'mtime' is set to the timestamp read from the local filesystem for the
+ "mtime" of the local file in question, which means the last time the
+ contents of that file were changed.
+
+* On Windows, 'ctime' is set to the creation timestamp for the file
+ read from the local filesystem. On other platforms, 'ctime' is set to
+ the UNIX "ctime" of the local file, which means the last time that
+ either the contents or the metadata of the local file was changed.
+
+There are several ways that the 'ctime' field could be confusing:
+
+1. You might be confused about whether it reflects the time of the creation
+ of a link in the Tahoe filesystem (by a version of Tahoe < v1.7.0) or a
+ timestamp copied in by "tahoe backup" from a local filesystem.
+
+2. You might be confused about whether it is a copy of the file creation
+ time (if "tahoe backup" was run on a Windows system) or of the last
+ contents-or-metadata change (if "tahoe backup" was run on a different
+ operating system).
+
+3. You might be confused by the fact that changing the contents of a
+ mutable file in Tahoe doesn't have any effect on any links pointing at
+ that file in any directories, although "tahoe backup" sets the link
+ 'ctime'/'mtime' to reflect timestamps about the local file corresponding
+ to the Tahoe file to which the link points.
+
+4. Also, quite apart from Tahoe, you might be confused about the meaning
+ of the "ctime" in UNIX local filesystems, which people sometimes think
+ means file creation time, but which actually means, in UNIX local
+ filesystems, the most recent time that the file contents or the file
+ metadata (such as owner, permission bits, extended attributes, etc.)
+ has changed. Note that although "ctime" does not mean file creation time
+ in UNIX, links created by a version of Tahoe prior to v1.7.0, and never
+ written by "tahoe backup", will have 'ctime' set to the link creation
+ time.
+
+
+Attaching an existing File or Directory by its read- or write-cap
+-----------------------------------------------------------------
+
+``PUT /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=uri``
+
+ This attaches a child object (either a file or directory) to a specified
+ location in the virtual filesystem. The child object is referenced by its
+ read- or write- cap, as provided in the HTTP request body. This will create
+ intermediate directories as necessary.
+
+ This is similar to a UNIX hardlink: by referencing a previously-uploaded file
+ (or previously-created directory) instead of uploading/creating a new one,
+ you can create two references to the same object.
+
+ The read- or write- cap of the child is provided in the body of the HTTP
+ request, and this same cap is returned in the response body.
+
+ The default behavior is to overwrite any existing object at the same
+ location. To prevent this (and make the operation return an error instead
+ of overwriting), add a "replace=false" argument, as "?t=uri&replace=false".
+ With replace=false, this operation will return an HTTP 409 "Conflict" error
+ if there is already an object at the given location, rather than
+ overwriting the existing object. To allow the operation to overwrite a
+ file, but return an error when trying to overwrite a directory, use
+ "replace=only-files" (this behavior is closer to the traditional UNIX "mv"
+ command). Note that "true", "t", and "1" are all synonyms for "True", and
+ "false", "f", and "0" are synonyms for "False", and the parameter is
+ case-insensitive.
+
+ Note that this operation does not take its child cap in the form of
+ separate "rw_uri" and "ro_uri" fields. Therefore, it cannot accept a
+ child cap in a format unknown to the webapi server, unless its URI
+ starts with "ro." or "imm.". This restriction is necessary because the
+ server is not able to attenuate an unknown write cap to a read cap.
+ Unknown URIs starting with "ro." or "imm.", on the other hand, are
+ assumed to represent read caps. The client should not prefix a write
+ cap with "ro." or "imm." and pass it to this operation, since that
+ would result in granting the cap's write authority to holders of the
+ directory read cap.
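A sketch of building the query string with the ``replace=`` values listed above; ``attach_query`` is an illustrative helper, not part of the webapi:

```python
from urllib.parse import urlencode

def attach_query(replace="true"):
    # Query string for PUT /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=uri.
    # The webapi accepts boolean synonyms (case-insensitive) or
    # "only-files"; anything else is rejected here before the request.
    value = replace.lower()
    if value not in {"true", "t", "1", "false", "f", "0", "only-files"}:
        raise ValueError("unsupported replace= value: %r" % replace)
    return "t=uri&" + urlencode({"replace": value})
```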
+
+Adding multiple files or directories to a parent directory at once
+------------------------------------------------------------------
+
+``POST /uri/$DIRCAP/[SUBDIRS..]?t=set_children``
+
+``POST /uri/$DIRCAP/[SUBDIRS..]?t=set-children`` (Tahoe >= v1.6)
+
+ This command adds multiple children to a directory in a single operation.
+ It reads the request body and interprets it as a JSON-encoded description
+ of the child names and read/write-caps that should be added.
+
+ The body should be a JSON-encoded dictionary, in the same format as the
+ "children" value returned by the "GET /uri/$DIRCAP?t=json" operation
+  described above. In this format, each key is a child name, and the
+ corresponding value is a tuple of (type, childinfo). "type" is ignored, and
+ "childinfo" is a dictionary that contains "rw_uri", "ro_uri", and
+ "metadata" keys. You can take the output of "GET /uri/$DIRCAP1?t=json" and
+ use it as the input to "POST /uri/$DIRCAP2?t=set_children" to make DIR2
+ look very much like DIR1 (except for any existing children of DIR2 that
+ were not overwritten, and any existing "tahoe" metadata keys as described
+ below).
+
+ When the set_children request contains a child name that already exists in
+ the target directory, this command defaults to overwriting that child with
+ the new value (both child cap and metadata, but if the JSON data does not
+ contain a "metadata" key, the old child's metadata is preserved). The
+ command takes a boolean "overwrite=" query argument to control this
+ behavior. If you use "?t=set_children&overwrite=false", then an attempt to
+ replace an existing child will instead cause an error.
+
+ Any "tahoe" key in the new child's "metadata" value is ignored. Any
+ existing "tahoe" metadata is preserved. The metadata["tahoe"] value is
+ reserved for metadata generated by the tahoe node itself. The only two keys
+ currently placed here are "linkcrtime" and "linkmotime". For details, see
+ the section above entitled "Get Information About A File Or Directory (as
+ JSON)", in the "About the metadata" subsection.
+
+ Note that this command was introduced with the name "set_children", which
+ uses an underscore rather than a hyphen as other multi-word command names
+ do. The variant with a hyphen is now accepted, but clients that desire
+ backward compatibility should continue to use "set_children".
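A sketch of preparing a set_children body from such a "children" mapping (e.g. taken from a GET ?t=json of another directory), dropping the reserved "tahoe" metadata key up front since the server would ignore it anyway; ``set_children_body`` is an illustrative helper:

```python
import json

def set_children_body(children):
    # children maps each name to [TYPE, PROPDICT]; strip any
    # metadata["tahoe"] entry, which is reserved for node-generated
    # values such as linkcrtime/linkmotime, then serialize as UTF-8.
    cleaned = {}
    for name, (childtype, info) in children.items():
        info = dict(info)
        metadata = dict(info.get("metadata", {}))
        metadata.pop("tahoe", None)
        info["metadata"] = metadata
        cleaned[name] = [childtype, info]
    return json.dumps(cleaned).encode("utf-8")
```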
- If it is a directory, then it includes information about the children of
- this directory, as a mapping from child name to a set of data about the
- child (the same data that would appear in a corresponding GET?t=json of the
- child itself). The child entries also include metadata about each child,
- including link-creation- and link-change- timestamps. The output looks like
- this:
-
- GET /uri/$DIRCAP?t=json :
- GET /uri/$DIRCAP/[SUBDIRS../]SUBDIR?t=json :
-
- [ "dirnode", {
- "rw_uri": read_write_uri,
- "ro_uri": read_only_uri,
- "verify_uri": verify_uri,
- "mutable": true,
- "children": {
- "foo.txt": [ "filenode", {
- "ro_uri": uri,
- "size": bytes,
- "metadata": {
- "ctime": 1202777696.7564139,
- "mtime": 1202777696.7564139,
- "tahoe": {
- "linkcrtime": 1202777696.7564139,
- "linkmotime": 1202777696.7564139
- } } } ],
- "subdir": [ "dirnode", {
- "rw_uri": rwuri,
- "ro_uri": rouri,
- "metadata": {
- "ctime": 1202778102.7589991,
- "mtime": 1202778111.2160511,
- "tahoe": {
- "linkcrtime": 1202777696.7564139,
- "linkmotime": 1202777696.7564139
- } } } ]
- } } ]
-
- In the above example, note how 'children' is a dictionary in which the keys
- are child names and the values depend upon whether the child is a file or a
- directory. The value is mostly the same as the JSON representation of the
- child object (except that directories do not recurse -- the "children"
- entry of the child is omitted, and the directory view includes the metadata
- that is stored on the directory edge).
-
- The rw_uri field will be present in the information about a directory
- if and only if you have read-write access to that directory. The verify_uri
- field will be present if and only if the object has a verify-cap
- (non-distributed LIT files do not have verify-caps).
-
- If the cap is of an unknown format, then the file size and verify_uri will
- not be available:
-
- GET /uri/$UNKNOWNCAP?t=json :
-
- [ "unknown", {
- "ro_uri": unknown_read_uri
- } ]
-
- GET /uri/$DIRCAP/[SUBDIRS../]UNKNOWNCHILDNAME?t=json :
-
- [ "unknown", {
- "rw_uri": unknown_write_uri,
- "ro_uri": unknown_read_uri,
- "mutable": true,
- "metadata": {
- "ctime": 1202777696.7564139,
- "mtime": 1202777696.7564139,
- "tahoe": {
- "linkcrtime": 1202777696.7564139,
- "linkmotime": 1202777696.7564139
- } } } ]
- As in the case of file nodes, the metadata will only be present when the
- capability is to a directory followed by a path. The "mutable" field is also
- not always present; when it is absent, the mutability of the object is not
- known.
-
-==== About the metadata ====
-
- The value of the 'tahoe':'linkmotime' key is updated whenever a link to a
- child is set. The value of the 'tahoe':'linkcrtime' key is updated whenever
- a link to a child is created -- i.e. when there was not previously a link
- under that name.
-
- Note however, that if the edge in the Tahoe filesystem points to a mutable
- file and the contents of that mutable file is changed, then the
- 'tahoe':'linkmotime' value on that edge will *not* be updated, since the
- edge itself wasn't updated -- only the mutable file was.
-
- The timestamps are represented as a number of seconds since the UNIX epoch
- (1970-01-01 00:00:00 UTC), with leap seconds not being counted in the long
- term.
-
- In Tahoe earlier than v1.4.0, 'mtime' and 'ctime' keys were populated
- instead of the 'tahoe':'linkmotime' and 'tahoe':'linkcrtime' keys. Starting
- in Tahoe v1.4.0, the 'linkmotime'/'linkcrtime' keys in the 'tahoe' sub-dict
- are populated. However, prior to Tahoe v1.7beta, a bug caused the 'tahoe'
- sub-dict to be deleted by webapi requests in which new metadata is
- specified, and not to be added to existing child links that lack it.
-
- From Tahoe v1.7.0 onward, the 'mtime' and 'ctime' fields are no longer
- populated or updated (see ticket #924), except by "tahoe backup" as
- explained below. For backward compatibility, when an existing link is
- updated and 'tahoe':'linkcrtime' is not present in the previous metadata
- but 'ctime' is, the old value of 'ctime' is used as the new value of
- 'tahoe':'linkcrtime'.
-
- The reason we added the new fields in Tahoe v1.4.0 is that there is a
- "set_children" API (described below) which you can use to overwrite the
- values of the 'mtime'/'ctime' pair, and this API is used by the
- "tahoe backup" command (in Tahoe v1.3.0 and later) to set the 'mtime' and
- 'ctime' values when backing up files from a local filesystem into the
- Tahoe filesystem. As of Tahoe v1.4.0, the set_children API cannot be used
- to set anything under the 'tahoe' key of the metadata dict -- if you
- include 'tahoe' keys in your 'metadata' arguments then it will silently
- ignore those keys.
-
- Therefore, if the 'tahoe' sub-dict is present, you can rely on the
- 'linkcrtime' and 'linkmotime' values therein to have the semantics described
- above. (This is assuming that only official Tahoe clients have been used to
- write those links, and that their system clocks were set to what you expected
- -- there is nothing preventing someone from editing their Tahoe client or
- writing their own Tahoe client which would overwrite those values however
- they like, and there is nothing to constrain their system clock from taking
- any value.)
-
- When an edge is created or updated by "tahoe backup", the 'mtime' and
- 'ctime' keys on that edge are set as follows:
-
- * 'mtime' is set to the timestamp read from the local filesystem for the
- "mtime" of the local file in question, which means the last time the
- contents of that file were changed.
-
- * On Windows, 'ctime' is set to the creation timestamp for the file
- read from the local filesystem. On other platforms, 'ctime' is set to
- the UNIX "ctime" of the local file, which means the last time that
- either the contents or the metadata of the local file was changed.
-
- There are several ways that the 'ctime' field could be confusing:
-
- 1. You might be confused about whether it reflects the time of the creation
- of a link in the Tahoe filesystem (by a version of Tahoe < v1.7.0) or a
- timestamp copied in by "tahoe backup" from a local filesystem.
-
- 2. You might be confused about whether it is a copy of the file creation
- time (if "tahoe backup" was run on a Windows system) or of the last
- contents-or-metadata change (if "tahoe backup" was run on a different
- operating system).
-
- 3. You might be confused by the fact that changing the contents of a
- mutable file in Tahoe doesn't have any effect on any links pointing at
- that file in any directories, although "tahoe backup" sets the link
- 'ctime'/'mtime' to reflect timestamps about the local file corresponding
- to the Tahoe file to which the link points.
-
- 4. Also, quite apart from Tahoe, you might be confused about the meaning
- of the "ctime" in UNIX local filesystems, which people sometimes think
- means file creation time, but which actually means, in UNIX local
- filesystems, the most recent time that the file contents or the file
- metadata (such as owner, permission bits, extended attributes, etc.)
- has changed. Note that although "ctime" does not mean file creation time
- in UNIX, links created by a version of Tahoe prior to v1.7.0, and never
- written by "tahoe backup", will have 'ctime' set to the link creation
- time.
-
-
-=== Attaching an existing File or Directory by its read- or write- cap ===
-
-PUT /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=uri
-
- This attaches a child object (either a file or directory) to a specified
- location in the virtual filesystem. The child object is referenced by its
- read- or write- cap, as provided in the HTTP request body. This will create
- intermediate directories as necessary.
-
- This is similar to a UNIX hardlink: by referencing a previously-uploaded file
- (or previously-created directory) instead of uploading/creating a new one,
- you can create two references to the same object.
-
- The read- or write- cap of the child is provided in the body of the HTTP
- request, and this same cap is returned in the response body.
-
- The default behavior is to overwrite any existing object at the same
- location. To prevent this (and make the operation return an error instead
- of overwriting), add a "replace=false" argument, as "?t=uri&replace=false".
- With replace=false, this operation will return an HTTP 409 "Conflict" error
- if there is already an object at the given location, rather than
- overwriting the existing object. To allow the operation to overwrite a
- file, but return an error when trying to overwrite a directory, use
- "replace=only-files" (this behavior is closer to the traditional UNIX "mv"
- command). Note that "true", "t", and "1" are all synonyms for "True", and
- "false", "f", and "0" are synonyms for "False", and the parameter is
- case-insensitive.
-
- Note that this operation does not take its child cap in the form of
- separate "rw_uri" and "ro_uri" fields. Therefore, it cannot accept a
- child cap in a format unknown to the webapi server, unless its URI
- starts with "ro." or "imm.". This restriction is necessary because the
- server is not able to attenuate an unknown write cap to a read cap.
- Unknown URIs starting with "ro." or "imm.", on the other hand, are
- assumed to represent read caps. The client should not prefix a write
- cap with "ro." or "imm." and pass it to this operation, since that
- would result in granting the cap's write authority to holders of the
- directory read cap.
-
-=== Adding multiple files or directories to a parent directory at once ===
-
-POST /uri/$DIRCAP/[SUBDIRS..]?t=set_children
-POST /uri/$DIRCAP/[SUBDIRS..]?t=set-children (Tahoe >= v1.6)
-
- This command adds multiple children to a directory in a single operation.
- It reads the request body and interprets it as a JSON-encoded description
- of the child names and read/write-caps that should be added.
-
- The body should be a JSON-encoded dictionary, in the same format as the
- "children" value returned by the "GET /uri/$DIRCAP?t=json" operation
- described above. In this format, each key is a child names, and the
- corresponding value is a tuple of (type, childinfo). "type" is ignored, and
- "childinfo" is a dictionary that contains "rw_uri", "ro_uri", and
- "metadata" keys. You can take the output of "GET /uri/$DIRCAP1?t=json" and
- use it as the input to "POST /uri/$DIRCAP2?t=set_children" to make DIR2
- look very much like DIR1 (except for any existing children of DIR2 that
- were not overwritten, and any existing "tahoe" metadata keys as described
- below).
-
- When the set_children request contains a child name that already exists in
- the target directory, this command defaults to overwriting that child with
- the new value (both child cap and metadata, but if the JSON data does not
- contain a "metadata" key, the old child's metadata is preserved). The
- command takes a boolean "overwrite=" query argument to control this
- behavior. If you use "?t=set_children&overwrite=false", then an attempt to
- replace an existing child will instead cause an error.
-
- Any "tahoe" key in the new child's "metadata" value is ignored. Any
- existing "tahoe" metadata is preserved. The metadata["tahoe"] value is
- reserved for metadata generated by the tahoe node itself. The only two keys
- currently placed here are "linkcrtime" and "linkmotime". For details, see
- the section above entitled "Get Information About A File Or Directory (as
- JSON)", in the "About the metadata" subsection.
-
- Note that this command was introduced with the name "set_children", which
- uses an underscore rather than a hyphen as other multi-word command names
- do. The variant with a hyphen is now accepted, but clients that desire
- backward compatibility should continue to use "set_children".
-
-
-=== Deleting a File or Directory ===
-
-DELETE /uri/$DIRCAP/[SUBDIRS../]CHILDNAME
-
- This removes the given name from its parent directory. CHILDNAME is the
- name to be removed, and $DIRCAP/SUBDIRS.. indicates the directory that will
- be modified.
-
- Note that this does not actually delete the file or directory that the name
- points to from the tahoe grid -- it only removes the named reference from
- this directory. If there are other names in this directory or in other
- directories that point to the resource, then it will remain accessible
- through those paths. Even if all names pointing to this object are removed
- from their parent directories, then someone with possession of its read-cap
- can continue to access the object through that cap.
-
- The object will only become completely unreachable once 1: there are no
- reachable directories that reference it, and 2: nobody is holding a read-
- or write- cap to the object. (This behavior is very similar to the way
- hardlinks and anonymous files work in traditional UNIX filesystems).
-
- This operation will not modify more than a single directory. Intermediate
- directories which were implicitly created by PUT or POST methods will *not*
- be automatically removed by DELETE.
-
- This method returns the file- or directory- cap of the object that was just
- removed.
-
-== Browser Operations ==
+Deleting a File or Directory
+----------------------------
+
+``DELETE /uri/$DIRCAP/[SUBDIRS../]CHILDNAME``
+
+ This removes the given name from its parent directory. CHILDNAME is the
+ name to be removed, and $DIRCAP/SUBDIRS.. indicates the directory that will
+ be modified.
+
+ Note that this does not actually delete the file or directory that the name
+ points to from the tahoe grid -- it only removes the named reference from
+ this directory. If there are other names in this directory or in other
+ directories that point to the resource, then it will remain accessible
+ through those paths. Even if all names pointing to this object are removed
+ from their parent directories, then someone with possession of its read-cap
+ can continue to access the object through that cap.
+
+ The object will only become completely unreachable once 1: there are no
+ reachable directories that reference it, and 2: nobody is holding a read-
+ or write- cap to the object. (This behavior is very similar to the way
+ hardlinks and anonymous files work in traditional UNIX filesystems).
+
+ This operation will not modify more than a single directory. Intermediate
+ directories which were implicitly created by PUT or POST methods will *not*
+ be automatically removed by DELETE.
+
+ This method returns the file- or directory- cap of the object that was just
+ removed.
+
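Since DELETE only severs a single directory link, a client mostly just needs to build the URL correctly. A minimal sketch (the base URL, cap, and child name below are made-up examples; each path segment is percent-encoded so names containing "/" or spaces survive):

```python
from urllib.parse import quote

def delete_url(base, dircap, subdirs, childname):
    """Build the URL for DELETE /uri/$DIRCAP/[SUBDIRS../]CHILDNAME.

    Each segment is percent-encoded individually so that child names
    containing '/' or other reserved characters stay intact.
    """
    segments = [dircap] + list(subdirs) + [childname]
    return base + "/uri/" + "/".join(quote(s, safe="") for s in segments)

# Hypothetical cap; a real one comes from a mkdir or upload operation.
url = delete_url("http://127.0.0.1:3456", "URI:DIR2:example",
                 ["docs"], "old notes.txt")
# An HTTP DELETE to this URL would sever only the "old notes.txt" link;
# the file itself remains reachable through any other caps or links.
```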
+Browser Operations: Human-oriented interfaces
+=============================================
This section describes the HTTP operations that provide support for humans
running a web browser. Most of these operations use HTML forms that use POST
specified by using <input type="hidden"> elements. For clarity, the
descriptions below display the most significant arguments as URL query args.
-=== Viewing A Directory (as HTML) ===
+Viewing A Directory (as HTML)
+-----------------------------
-GET /uri/$DIRCAP/[SUBDIRS../]
+``GET /uri/$DIRCAP/[SUBDIRS../]``
This returns an HTML page, intended to be displayed to a human by a web
browser, which contains HREF links to all files and directories reachable
contains forms to upload new files, and to delete files and directories.
Those forms use POST methods to do their job.
-=== Viewing/Downloading a File ===
+Viewing/Downloading a File
+--------------------------
-GET /uri/$FILECAP
-GET /uri/$DIRCAP/[SUBDIRS../]FILENAME
+``GET /uri/$FILECAP``
+
+``GET /uri/$DIRCAP/[SUBDIRS../]FILENAME``
This will retrieve the contents of the given file. The HTTP response body
will contain the sequence of bytes that make up the file.
most browsers will refuse to display it inline). "true", "t", "1", and other
case-insensitive equivalents are all treated the same.
- Character-set handling in URLs and HTTP headers is a dubious art[1]. For
+ Character-set handling in URLs and HTTP headers is a dubious art [1]_. For
maximum compatibility, Tahoe simply copies the bytes from the filename=
argument into the Content-Disposition header's filename= parameter, without
trying to interpret them in any particular way.
-GET /named/$FILECAP/FILENAME
+``GET /named/$FILECAP/FILENAME``
This is an alternate download form which makes it easier to get the correct
filename. The Tahoe server will provide the contents of the given file, with
this form can *only* be used with file caps; it is an error to use a
directory cap after the /named/ prefix.
-=== Get Information About A File Or Directory (as HTML) ===
+Get Information About A File Or Directory (as HTML)
+---------------------------------------------------
+
+``GET /uri/$FILECAP?t=info``
+
+``GET /uri/$DIRCAP/?t=info``
-GET /uri/$FILECAP?t=info
-GET /uri/$DIRCAP/?t=info
-GET /uri/$DIRCAP/[SUBDIRS../]SUBDIR/?t=info
-GET /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=info
+``GET /uri/$DIRCAP/[SUBDIRS../]SUBDIR/?t=info``
- This returns a human-oriented HTML page with more detail about the selected
- file or directory object. This page contains the following items:
+``GET /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=info``
- object size
- storage index
- JSON representation
- raw contents (text/plain)
- access caps (URIs): verify-cap, read-cap, write-cap (for mutable objects)
- check/verify/repair form
- deep-check/deep-size/deep-stats/manifest (for directories)
- replace-conents form (for mutable files)
+ This returns a human-oriented HTML page with more detail about the selected
+ file or directory object. This page contains the following items:
-=== Creating a Directory ===
+ * object size
+ * storage index
+ * JSON representation
+ * raw contents (text/plain)
+ * access caps (URIs): verify-cap, read-cap, write-cap (for mutable objects)
+ * check/verify/repair form
+ * deep-check/deep-size/deep-stats/manifest (for directories)
+ * replace-contents form (for mutable files)
-POST /uri?t=mkdir
+Creating a Directory
+--------------------
+
+``POST /uri?t=mkdir``
This creates a new empty directory, but does not attach it to the virtual
filesystem.
"false"), then the HTTP response body will simply be the write-cap of the
new directory.
-POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir&name=CHILDNAME
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=mkdir&name=CHILDNAME``
This creates a new empty directory as a child of the designated SUBDIR. This
will create additional intermediate directories as necessary.
the directory that was just created.
-=== Uploading a File ===
+Uploading a File
+----------------
-POST /uri?t=upload
+``POST /uri?t=upload``
This uploads a file, and produces a file-cap for the contents, but does not
attach the file into the filesystem. No directories will be modified by
this operation.
The file must be provided as the "file" field of an HTML encoded form body,
- produced in response to an HTML form like this:
+ produced in response to an HTML form like this::
+
<form action="/uri" method="POST" enctype="multipart/form-data">
<input type="hidden" name="t" value="upload" />
<input type="file" name="file" />
returning the upload results page as a response.
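The browser form above produces a ``multipart/form-data`` request body. The following is a hand-rolled sketch of that encoding (the boundary string and file contents are arbitrary; a real client would normally let an HTTP library do this):

```python
def multipart_body(boundary, fields, filename, filedata):
    """Encode form fields plus one file part as multipart/form-data,
    mirroring what a browser sends for the upload form above."""
    parts = []
    for name, value in fields.items():
        parts.append("--%s\r\n"
                     'Content-Disposition: form-data; name="%s"\r\n\r\n'
                     "%s\r\n" % (boundary, name, value))
    # The file part: the field must be named "file", per the form above.
    parts.append("--%s\r\n"
                 'Content-Disposition: form-data; name="file"; filename="%s"\r\n'
                 "Content-Type: application/octet-stream\r\n\r\n"
                 % (boundary, filename))
    return ("".join(parts).encode("ascii") + filedata
            + ("\r\n--%s--\r\n" % boundary).encode("ascii"))

body = multipart_body("XtahoeX", {"t": "upload"}, "hello.txt", b"hello world\n")
# POSTing this body to /uri with the header
#   Content-Type: multipart/form-data; boundary=XtahoeX
# would upload hello.txt and return its file-cap in the response.
```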
-POST /uri/$DIRCAP/[SUBDIRS../]?t=upload
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=upload``
This uploads a file, and attaches it as a new child of the given directory,
which must be mutable. The file must be provided as the "file" field of an
- HTML-encoded form body, produced in response to an HTML form like this:
+ HTML-encoded form body, produced in response to an HTML form like this::
+
<form action="." method="POST" enctype="multipart/form-data">
<input type="hidden" name="t" value="upload" />
<input type="file" name="file" />
the file that was just uploaded (a write-cap for mutable files, or a
read-cap for immutable files).
-POST /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=upload
+``POST /uri/$DIRCAP/[SUBDIRS../]FILENAME?t=upload``
This also uploads a file and attaches it as a new child of the given
directory, which must be mutable. It is a slight variant of the previous
directory. It is otherwise identical: this accepts mutable= and when_done=
arguments too.
-POST /uri/$FILECAP?t=upload
+``POST /uri/$FILECAP?t=upload``
This modifies the contents of an existing mutable file in-place. An error is
signalled if $FILECAP does not refer to a mutable file. It behaves just like
the "PUT /uri/$FILECAP" form, but uses a POST for the benefit of HTML forms
in a web browser.
-=== Attaching An Existing File Or Directory (by URI) ===
+Attaching An Existing File Or Directory (by URI)
+------------------------------------------------
-POST /uri/$DIRCAP/[SUBDIRS../]?t=uri&name=CHILDNAME&uri=CHILDCAP
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=uri&name=CHILDNAME&uri=CHILDCAP``
This attaches a given read- or write- cap "CHILDCAP" to the designated
directory, with a specified child name. This behaves much like the PUT t=uri
This accepts the same replace= argument as POST t=upload.
-=== Deleting A Child ===
+Deleting A Child
+----------------
-POST /uri/$DIRCAP/[SUBDIRS../]?t=delete&name=CHILDNAME
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=delete&name=CHILDNAME``
This instructs the node to remove a child object (file or subdirectory) from
the given directory, which must be mutable. Note that the entire subtree is
into the subtree will see that the child subdirectories are not modified by
this operation. Only the link from the given directory to its child is severed.
-=== Renaming A Child ===
+Renaming A Child
+----------------
-POST /uri/$DIRCAP/[SUBDIRS../]?t=rename&from_name=OLD&to_name=NEW
+``POST /uri/$DIRCAP/[SUBDIRS../]?t=rename&from_name=OLD&to_name=NEW``
This instructs the node to rename a child of the given directory, which must
be mutable. This has a similar effect to removing the child, then adding the
operation cannot move the child to a different directory.
This operation will replace any existing child of the new name, making it
- behave like the UNIX "mv -f" command.
+ behave like the UNIX "``mv -f``" command.
-=== Other Utilities ===
+Other Utilities
+---------------
-GET /uri?uri=$CAP
+``GET /uri?uri=$CAP``
This causes a redirect to /uri/$CAP, and retains any additional query
arguments (like filename= or save=). This is for the convenience of web
indicated by the $CAP: unlike the GET /uri/$DIRCAP form, you cannot
traverse to children by appending additional path segments to the URL.
-GET /uri/$DIRCAP/[SUBDIRS../]?t=rename-form&name=$CHILDNAME
+``GET /uri/$DIRCAP/[SUBDIRS../]?t=rename-form&name=$CHILDNAME``
This provides a useful facility to browser-based user interfaces. It
returns a page containing a form targetting the "POST $DIRCAP t=rename"
'from_name' field of that form. I.e. this presents a form offering to
rename $CHILDNAME, requesting the new name, and submitting POST rename.
-GET /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=uri
+``GET /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=uri``
This returns the file- or directory- cap for the specified object.
-GET /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=readonly-uri
+``GET /uri/$DIRCAP/[SUBDIRS../]CHILDNAME?t=readonly-uri``
This returns a read-only file- or directory- cap for the specified object.
If the object is an immutable file, this will return the same value as
t=uri.
-=== Debugging and Testing Features ===
+Debugging and Testing Features
+------------------------------
These URLs are less-likely to be helpful to the casual Tahoe user, and are
mainly intended for developers.
-POST $URL?t=check
-
- This triggers the FileChecker to determine the current "health" of the
- given file or directory, by counting how many shares are available. The
- page that is returned will display the results. This can be used as a "show
- me detailed information about this file" page.
-
- If a verify=true argument is provided, the node will perform a more
- intensive check, downloading and verifying every single bit of every share.
-
- If an add-lease=true argument is provided, the node will also add (or
- renew) a lease to every share it encounters. Each lease will keep the share
- alive for a certain period of time (one month by default). Once the last
- lease expires or is explicitly cancelled, the storage server is allowed to
- delete the share.
-
- If an output=JSON argument is provided, the response will be
- machine-readable JSON instead of human-oriented HTML. The data is a
- dictionary with the following keys:
-
- storage-index: a base32-encoded string with the objects's storage index,
- or an empty string for LIT files
- summary: a string, with a one-line summary of the stats of the file
- results: a dictionary that describes the state of the file. For LIT files,
- this dictionary has only the 'healthy' key, which will always be
- True. For distributed files, this dictionary has the following
- keys:
- count-shares-good: the number of good shares that were found
- count-shares-needed: 'k', the number of shares required for recovery
- count-shares-expected: 'N', the number of total shares generated
- count-good-share-hosts: this was intended to be the number of distinct
- storage servers with good shares. It is currently
- (as of Tahoe-LAFS v1.8.0) computed incorrectly;
- see ticket #1115.
- count-wrong-shares: for mutable files, the number of shares for
- versions other than the 'best' one (highest
- sequence number, highest roothash). These are
- either old ...
- count-recoverable-versions: for mutable files, the number of
- recoverable versions of the file. For
- a healthy file, this will equal 1.
- count-unrecoverable-versions: for mutable files, the number of
- unrecoverable versions of the file.
- For a healthy file, this will be 0.
- count-corrupt-shares: the number of shares with integrity failures
- list-corrupt-shares: a list of "share locators", one for each share
- that was found to be corrupt. Each share locator
- is a list of (serverid, storage_index, sharenum).
- needs-rebalancing: (bool) True if there are multiple shares on a single
- storage server, indicating a reduction in reliability
- that could be resolved by moving shares to new
- servers.
- servers-responding: list of base32-encoded storage server identifiers,
- one for each server which responded to the share
- query.
- healthy: (bool) True if the file is completely healthy, False otherwise.
- Healthy files have at least N good shares. Overlapping shares
- do not currently cause a file to be marked unhealthy. If there
- are at least N good shares, then corrupt shares do not cause the
- file to be marked unhealthy, although the corrupt shares will be
- listed in the results (list-corrupt-shares) and should be manually
- removed to wasting time in subsequent downloads (as the
- downloader rediscovers the corruption and uses alternate shares).
- Future compatibility: the meaning of this field may change to
- reflect whether the servers-of-happiness criterion is met
- (see ticket #614).
- sharemap: dict mapping share identifier to list of serverids
- (base32-encoded strings). This indicates which servers are
- holding which shares. For immutable files, the shareid is
- an integer (the share number, from 0 to N-1). For
- immutable files, it is a string of the form
- 'seq%d-%s-sh%d', containing the sequence number, the
- roothash, and the share number.
-
-POST $URL?t=start-deep-check (must add &ophandle=XYZ)
-
- This initiates a recursive walk of all files and directories reachable from
- the target, performing a check on each one just like t=check. The result
- page will contain a summary of the results, including details on any
- file/directory that was not fully healthy.
-
- t=start-deep-check can only be invoked on a directory. An error (400
- BAD_REQUEST) will be signalled if it is invoked on a file. The recursive
- walker will deal with loops safely.
-
- This accepts the same verify= and add-lease= arguments as t=check.
-
- Since this operation can take a long time (perhaps a second per object),
- the ophandle= argument is required (see "Slow Operations, Progress, and
- Cancelling" above). The response to this POST will be a redirect to the
- corresponding /operations/$HANDLE page (with output=HTML or output=JSON to
- match the output= argument given to the POST). The deep-check operation
- will continue to run in the background, and the /operations page should be
- used to find out when the operation is done.
-
- Detailed check results for non-healthy files and directories will be
- available under /operations/$HANDLE/$STORAGEINDEX, and the HTML status will
- contain links to these detailed results.
-
- The HTML /operations/$HANDLE page for incomplete operations will contain a
- meta-refresh tag, set to 60 seconds, so that a browser which uses
- deep-check will automatically poll until the operation has completed.
-
- The JSON page (/options/$HANDLE?output=JSON) will contain a
- machine-readable JSON dictionary with the following keys:
-
- finished: a boolean, True if the operation is complete, else False. Some
- of the remaining keys may not be present until the operation
- is complete.
- root-storage-index: a base32-encoded string with the storage index of the
- starting point of the deep-check operation
- count-objects-checked: count of how many objects were checked. Note that
- non-distributed objects (i.e. small immutable LIT
- files) are not checked, since for these objects,
- the data is contained entirely in the URI.
- count-objects-healthy: how many of those objects were completely healthy
- count-objects-unhealthy: how many were damaged in some way
- count-corrupt-shares: how many shares were found to have corruption,
- summed over all objects examined
- list-corrupt-shares: a list of "share identifiers", one for each share
- that was found to be corrupt. Each share identifier
- is a list of (serverid, storage_index, sharenum).
- list-unhealthy-files: a list of (pathname, check-results) tuples, for
- each file that was not fully healthy. 'pathname' is
- a list of strings (which can be joined by "/"
- characters to turn it into a single string),
- relative to the directory on which deep-check was
- invoked. The 'check-results' field is the same as
- that returned by t=check&output=JSON, described
- above.
- stats: a dictionary with the same keys as the t=start-deep-stats command
- (described below)
-
-POST $URL?t=stream-deep-check
+``POST $URL?t=check``
+
+ This triggers the FileChecker to determine the current "health" of the
+ given file or directory, by counting how many shares are available. The
+ page that is returned will display the results. This can be used as a "show
+ me detailed information about this file" page.
+
+ If a verify=true argument is provided, the node will perform a more
+ intensive check, downloading and verifying every single bit of every share.
+
+ If an add-lease=true argument is provided, the node will also add (or
+ renew) a lease to every share it encounters. Each lease will keep the share
+ alive for a certain period of time (one month by default). Once the last
+ lease expires or is explicitly cancelled, the storage server is allowed to
+ delete the share.
+
+ If an output=JSON argument is provided, the response will be
+ machine-readable JSON instead of human-oriented HTML. The data is a
+ dictionary with the following keys::
+
+ storage-index: a base32-encoded string with the object's storage index,
+ or an empty string for LIT files
+ summary: a string, with a one-line summary of the stats of the file
+ results: a dictionary that describes the state of the file. For LIT files,
+ this dictionary has only the 'healthy' key, which will always be
+ True. For distributed files, this dictionary has the following
+ keys:
+ count-shares-good: the number of good shares that were found
+ count-shares-needed: 'k', the number of shares required for recovery
+ count-shares-expected: 'N', the number of total shares generated
+ count-good-share-hosts: this was intended to be the number of distinct
+ storage servers with good shares. It is currently
+ (as of Tahoe-LAFS v1.8.0) computed incorrectly;
+ see ticket #1115.
+ count-wrong-shares: for mutable files, the number of shares for
+ versions other than the 'best' one (highest
+ sequence number, highest roothash). These are
+ either old ...
+ count-recoverable-versions: for mutable files, the number of
+ recoverable versions of the file. For
+ a healthy file, this will equal 1.
+ count-unrecoverable-versions: for mutable files, the number of
+ unrecoverable versions of the file.
+ For a healthy file, this will be 0.
+ count-corrupt-shares: the number of shares with integrity failures
+ list-corrupt-shares: a list of "share locators", one for each share
+ that was found to be corrupt. Each share locator
+ is a list of (serverid, storage_index, sharenum).
+ needs-rebalancing: (bool) True if there are multiple shares on a single
+ storage server, indicating a reduction in reliability
+ that could be resolved by moving shares to new
+ servers.
+ servers-responding: list of base32-encoded storage server identifiers,
+ one for each server which responded to the share
+ query.
+ healthy: (bool) True if the file is completely healthy, False otherwise.
+ Healthy files have at least N good shares. Overlapping shares
+ do not currently cause a file to be marked unhealthy. If there
+ are at least N good shares, then corrupt shares do not cause the
+ file to be marked unhealthy, although the corrupt shares will be
+ listed in the results (list-corrupt-shares) and should be manually
+ removed to avoid wasting time in subsequent downloads (as the
+ downloader rediscovers the corruption and uses alternate shares).
+ Future compatibility: the meaning of this field may change to
+ reflect whether the servers-of-happiness criterion is met
+ (see ticket #614).
+ sharemap: dict mapping share identifier to list of serverids
+ (base32-encoded strings). This indicates which servers are
+ holding which shares. For immutable files, the shareid is
+ an integer (the share number, from 0 to N-1). For
+ mutable files, it is a string of the form
+ 'seq%d-%s-sh%d', containing the sequence number, the
+ roothash, and the share number.
+
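A client can consume the ``output=JSON`` response programmatically. The sketch below uses a fabricated response body (the key names are the ones documented above; the values are invented for illustration):

```python
import json

# Fabricated example of a t=check&output=JSON response body.
response_body = json.dumps({
    "storage-index": "exampleb32storageindex",
    "summary": "Healthy",
    "results": {
        "healthy": True,
        "count-shares-good": 10,
        "count-shares-needed": 3,
        "count-shares-expected": 10,
        "list-corrupt-shares": [],
    },
})

def share_margin(check_json):
    """How many shares beyond 'k' are still available, i.e. how many
    further share losses the file could tolerate before becoming
    unrecoverable. Returns None for LIT files, whose 'results' dict
    carries only the 'healthy' key."""
    r = json.loads(check_json)["results"]
    if "count-shares-good" not in r:
        return None
    return r["count-shares-good"] - r["count-shares-needed"]

margin = share_margin(response_body)  # 10 good - 3 needed = 7
```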
+``POST $URL?t=start-deep-check`` (must add &ophandle=XYZ)
+
+ This initiates a recursive walk of all files and directories reachable from
+ the target, performing a check on each one just like t=check. The result
+ page will contain a summary of the results, including details on any
+ file/directory that was not fully healthy.
+
+ t=start-deep-check can only be invoked on a directory. An error (400
+ BAD_REQUEST) will be signalled if it is invoked on a file. The recursive
+ walker will deal with loops safely.
+
+ This accepts the same verify= and add-lease= arguments as t=check.
+
+ Since this operation can take a long time (perhaps a second per object),
+ the ophandle= argument is required (see "Slow Operations, Progress, and
+ Cancelling" above). The response to this POST will be a redirect to the
+ corresponding /operations/$HANDLE page (with output=HTML or output=JSON to
+ match the output= argument given to the POST). The deep-check operation
+ will continue to run in the background, and the /operations page should be
+ used to find out when the operation is done.
+
+ Detailed check results for non-healthy files and directories will be
+ available under /operations/$HANDLE/$STORAGEINDEX, and the HTML status will
+ contain links to these detailed results.
+
+ The HTML /operations/$HANDLE page for incomplete operations will contain a
+ meta-refresh tag, set to 60 seconds, so that a browser which uses
+ deep-check will automatically poll until the operation has completed.
+
+ The JSON page (/operations/$HANDLE?output=JSON) will contain a
+ machine-readable JSON dictionary with the following keys::
+
+ finished: a boolean, True if the operation is complete, else False. Some
+ of the remaining keys may not be present until the operation
+ is complete.
+ root-storage-index: a base32-encoded string with the storage index of the
+ starting point of the deep-check operation
+ count-objects-checked: count of how many objects were checked. Note that
+ non-distributed objects (i.e. small immutable LIT
+ files) are not checked, since for these objects,
+ the data is contained entirely in the URI.
+ count-objects-healthy: how many of those objects were completely healthy
+ count-objects-unhealthy: how many were damaged in some way
+ count-corrupt-shares: how many shares were found to have corruption,
+ summed over all objects examined
+ list-corrupt-shares: a list of "share identifiers", one for each share
+ that was found to be corrupt. Each share identifier
+ is a list of (serverid, storage_index, sharenum).
+ list-unhealthy-files: a list of (pathname, check-results) tuples, for
+ each file that was not fully healthy. 'pathname' is
+ a list of strings (which can be joined by "/"
+ characters to turn it into a single string),
+ relative to the directory on which deep-check was
+ invoked. The 'check-results' field is the same as
+ that returned by t=check&output=JSON, described
+ above.
+ stats: a dictionary with the same keys as the t=start-deep-stats command
+ (described below)
+
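The ``list-unhealthy-files`` entries carry pathnames as lists of strings, joinable with "/" as noted above. A sketch over a fabricated deep-check summary (field names as documented; values invented):

```python
# Fabricated /operations/$HANDLE?output=JSON deep-check result.
deep_check = {
    "finished": True,
    "count-objects-checked": 3,
    "count-objects-unhealthy": 1,
    "list-unhealthy-files": [
        [["docs", "old", "report.txt"], {"summary": "2 shares missing"}],
    ],
}

def unhealthy_paths(result):
    """Join each pathname (a list of components) with '/' and pair it
    with its check summary. Raises if the operation is still running,
    since the remaining keys may not be present yet."""
    if not result["finished"]:
        raise RuntimeError("operation still running; poll the handle again")
    return [("/".join(path), check["summary"])
            for path, check in result["list-unhealthy-files"]]

paths = unhealthy_paths(deep_check)
# → [("docs/old/report.txt", "2 shares missing")]
```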
+``POST $URL?t=stream-deep-check``
This initiates a recursive walk of all files and directories reachable from
the target, performing a check on each one just like t=check. For each
"file", "directory", or "stats".
For all units that have a type of "file" or "directory", the dictionary will
- contain the following keys:
+ contain the following keys::
"path": a list of strings, with the path that is traversed to reach the
object
unit is emitted to the HTTP response body before the child is traversed.
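Because each unit is a self-contained JSON dictionary emitted as the walk proceeds, a client can process the stream incrementally rather than buffering the whole response. A sketch assuming one JSON unit per line (the unit shapes below follow the "type"/"path" description above; the data itself is fabricated):

```python
import json

# Fabricated stream-deep-check response body: one JSON unit per line.
stream = "\n".join([
    json.dumps({"type": "directory", "path": []}),
    json.dumps({"type": "file", "path": ["notes.txt"],
                "check-results": {"results": {"healthy": True}}}),
    json.dumps({"type": "stats", "stats": {"count-files": 1}}),
])

def iter_units(body):
    """Decode one unit per non-empty line, in arrival order, so results
    can be acted on before the whole tree has been traversed."""
    for line in body.splitlines():
        if line.strip():
            yield json.loads(line)

types = [u["type"] for u in iter_units(stream)]
# → ["directory", "file", "stats"]
```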
-POST $URL?t=check&repair=true
-
- This performs a health check of the given file or directory, and if the
- checker determines that the object is not healthy (some shares are missing
- or corrupted), it will perform a "repair". During repair, any missing
- shares will be regenerated and uploaded to new servers.
-
- This accepts the same verify=true and add-lease= arguments as t=check. When
- an output=JSON argument is provided, the machine-readable JSON response
- will contain the following keys:
-
- storage-index: a base32-encoded string with the objects's storage index,
- or an empty string for LIT files
- repair-attempted: (bool) True if repair was attempted
- repair-successful: (bool) True if repair was attempted and the file was
- fully healthy afterwards. False if no repair was
- attempted, or if a repair attempt failed.
- pre-repair-results: a dictionary that describes the state of the file
- before any repair was performed. This contains exactly
- the same keys as the 'results' value of the t=check
- response, described above.
- post-repair-results: a dictionary that describes the state of the file
- after any repair was performed. If no repair was
- performed, post-repair-results and pre-repair-results
- will be the same. This contains exactly the same keys
- as the 'results' value of the t=check response,
- described above.
-
-POST $URL?t=start-deep-check&repair=true (must add &ophandle=XYZ)
-
- This triggers a recursive walk of all files and directories, performing a
- t=check&repair=true on each one.
-
- Like t=start-deep-check without the repair= argument, this can only be
- invoked on a directory. An error (400 BAD_REQUEST) will be signalled if it
- is invoked on a file. The recursive walker will deal with loops safely.
-
- This accepts the same verify= and add-lease= arguments as
- t=start-deep-check. It uses the same ophandle= mechanism as
- start-deep-check. When an output=JSON argument is provided, the response
- will contain the following keys:
-
- finished: (bool) True if the operation has completed, else False
- root-storage-index: a base32-encoded string with the storage index of the
- starting point of the deep-check operation
- count-objects-checked: count of how many objects were checked
-
- count-objects-healthy-pre-repair: how many of those objects were completely
- healthy, before any repair
- count-objects-unhealthy-pre-repair: how many were damaged in some way
- count-objects-healthy-post-repair: how many of those objects were completely
- healthy, after any repair
- count-objects-unhealthy-post-repair: how many were damaged in some way
-
- count-repairs-attempted: repairs were attempted on this many objects.
- count-repairs-successful: how many repairs resulted in healthy objects
- count-repairs-unsuccessful: how many repairs resulted did not results in
- completely healthy objects
- count-corrupt-shares-pre-repair: how many shares were found to have
- corruption, summed over all objects
- examined, before any repair
- count-corrupt-shares-post-repair: how many shares were found to have
- corruption, summed over all objects
- examined, after any repair
- list-corrupt-shares: a list of "share identifiers", one for each share
- that was found to be corrupt (before any repair).
- Each share identifier is a list of (serverid,
- storage_index, sharenum).
- list-remaining-corrupt-shares: like list-corrupt-shares, but mutable shares
- that were successfully repaired are not
- included. These are shares that need
- manual processing. Since immutable shares
- cannot be modified by clients, all corruption
- in immutable shares will be listed here.
- list-unhealthy-files: a list of (pathname, check-results) tuples, for
- each file that was not fully healthy. 'pathname' is
- relative to the directory on which deep-check was
- invoked. The 'check-results' field is the same as
- that returned by t=check&repair=true&output=JSON,
- described above.
- stats: a dictionary with the same keys as the t=start-deep-stats command
- (described below)
-
-POST $URL?t=stream-deep-check&repair=true
+``POST $URL?t=check&repair=true``
+
+ This performs a health check of the given file or directory, and if the
+ checker determines that the object is not healthy (some shares are missing
+ or corrupted), it will perform a "repair". During repair, any missing
+ shares will be regenerated and uploaded to new servers.
+
+ This accepts the same verify=true and add-lease= arguments as t=check. When
+ an output=JSON argument is provided, the machine-readable JSON response
+ will contain the following keys::
+
+ storage-index: a base32-encoded string with the object's storage index,
+ or an empty string for LIT files
+ repair-attempted: (bool) True if repair was attempted
+ repair-successful: (bool) True if repair was attempted and the file was
+ fully healthy afterwards. False if no repair was
+ attempted, or if a repair attempt failed.
+ pre-repair-results: a dictionary that describes the state of the file
+ before any repair was performed. This contains exactly
+ the same keys as the 'results' value of the t=check
+ response, described above.
+ post-repair-results: a dictionary that describes the state of the file
+ after any repair was performed. If no repair was
+ performed, post-repair-results and pre-repair-results
+ will be the same. This contains exactly the same keys
+ as the 'results' value of the t=check response,
+ described above.
+
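The pre- and post-repair results let a client confirm that a repair actually improved the file's health. A sketch over a fabricated response (key names as documented above; counts invented):

```python
import json

# Fabricated t=check&repair=true&output=JSON response body.
repair_json = json.dumps({
    "repair-attempted": True,
    "repair-successful": True,
    "pre-repair-results": {"count-shares-good": 6,
                           "count-shares-expected": 10},
    "post-repair-results": {"count-shares-good": 10,
                            "count-shares-expected": 10},
})

def shares_regenerated(body):
    """Number of shares the repairer restored: post-repair good shares
    minus pre-repair good shares. Zero if no repair was attempted (in
    which case pre- and post-repair results are identical anyway)."""
    r = json.loads(body)
    if not r["repair-attempted"]:
        return 0
    return (r["post-repair-results"]["count-shares-good"]
            - r["pre-repair-results"]["count-shares-good"])

regen = shares_regenerated(repair_json)  # 10 - 6 = 4 shares re-uploaded
```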
+``POST $URL?t=start-deep-check&repair=true`` (must add &ophandle=XYZ)
+
+ This triggers a recursive walk of all files and directories, performing a
+ t=check&repair=true on each one.
+
+ Like t=start-deep-check without the repair= argument, this can only be
+ invoked on a directory. An error (400 BAD_REQUEST) will be signalled if it
+ is invoked on a file. The recursive walker will deal with loops safely.
+
+ This accepts the same verify= and add-lease= arguments as
+ t=start-deep-check. It uses the same ophandle= mechanism as
+ start-deep-check. When an output=JSON argument is provided, the response
+ will contain the following keys::
+
+ finished: (bool) True if the operation has completed, else False
+ root-storage-index: a base32-encoded string with the storage index of the
+ starting point of the deep-check operation
+ count-objects-checked: count of how many objects were checked
+
+ count-objects-healthy-pre-repair: how many of those objects were completely
+ healthy, before any repair
+ count-objects-unhealthy-pre-repair: how many were damaged in some way
+ count-objects-healthy-post-repair: how many of those objects were completely
+ healthy, after any repair
+ count-objects-unhealthy-post-repair: how many were damaged in some way
+
+ count-repairs-attempted: repairs were attempted on this many objects.
+ count-repairs-successful: how many repairs resulted in healthy objects
+ count-repairs-unsuccessful: how many repairs did not result in
+                             completely healthy objects
+ count-corrupt-shares-pre-repair: how many shares were found to have
+ corruption, summed over all objects
+ examined, before any repair
+ count-corrupt-shares-post-repair: how many shares were found to have
+ corruption, summed over all objects
+ examined, after any repair
+ list-corrupt-shares: a list of "share identifiers", one for each share
+ that was found to be corrupt (before any repair).
+ Each share identifier is a list of (serverid,
+ storage_index, sharenum).
+ list-remaining-corrupt-shares: like list-corrupt-shares, but mutable shares
+ that were successfully repaired are not
+ included. These are shares that need
+ manual processing. Since immutable shares
+ cannot be modified by clients, all corruption
+ in immutable shares will be listed here.
+ list-unhealthy-files: a list of (pathname, check-results) tuples, for
+ each file that was not fully healthy. 'pathname' is
+ relative to the directory on which deep-check was
+ invoked. The 'check-results' field is the same as
+ that returned by t=check&repair=true&output=JSON,
+ described above.
+ stats: a dictionary with the same keys as the t=start-deep-stats command
+ (described below)
+
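+ The pre/post-repair counters obey simple invariants that a client can
+ sanity-check. A sketch, assuming the key meanings above (the sample
+ dictionary is invented, not real server output):

```python
def check_counters(r):
    # Invariants implied by the key descriptions above: every checked object
    # is either healthy or unhealthy before repair, and every attempted
    # repair either succeeded or did not.
    assert r["count-objects-checked"] == (
        r["count-objects-healthy-pre-repair"]
        + r["count-objects-unhealthy-pre-repair"])
    assert r["count-repairs-attempted"] == (
        r["count-repairs-successful"] + r["count-repairs-unsuccessful"])

sample = {                       # invented numbers, for illustration only
    "finished": True,
    "count-objects-checked": 10,
    "count-objects-healthy-pre-repair": 7,
    "count-objects-unhealthy-pre-repair": 3,
    "count-repairs-attempted": 3,
    "count-repairs-successful": 2,
    "count-repairs-unsuccessful": 1,
}
check_counters(sample)           # passes silently when the counters agree
```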
+``POST $URL?t=stream-deep-check&repair=true``
This triggers a recursive walk of all files and directories, performing a
t=check&repair=true on each one. For each unique object (duplicates are
file or directory repair fails, the traversal will continue, and the repair
failure will be indicated in the JSON data (in the "repair-successful" key).
-POST $DIRURL?t=start-manifest (must add &ophandle=XYZ)
-
- This operation generates a "manfest" of the given directory tree, mostly
- for debugging. This is a table of (path, filecap/dircap), for every object
- reachable from the starting directory. The path will be slash-joined, and
- the filecap/dircap will contain a link to the object in question. This page
- gives immediate access to every object in the virtual filesystem subtree.
-
- This operation uses the same ophandle= mechanism as deep-check. The
- corresponding /operations/$HANDLE page has three different forms. The
- default is output=HTML.
-
- If output=text is added to the query args, the results will be a text/plain
- list. The first line is special: it is either "finished: yes" or "finished:
- no"; if the operation is not finished, you must periodically reload the
- page until it completes. The rest of the results are a plaintext list, with
- one file/dir per line, slash-separated, with the filecap/dircap separated
- by a space.
-
- If output=JSON is added to the queryargs, then the results will be a
- JSON-formatted dictionary with six keys. Note that because large directory
- structures can result in very large JSON results, the full results will not
- be available until the operation is complete (i.e. until output["finished"]
- is True):
-
- finished (bool): if False then you must reload the page until True
- origin_si (base32 str): the storage index of the starting point
- manifest: list of (path, cap) tuples, where path is a list of strings.
- verifycaps: list of (printable) verify cap strings
- storage-index: list of (base32) storage index strings
- stats: a dictionary with the same keys as the t=start-deep-stats command
- (described below)
-
-POST $DIRURL?t=start-deep-size (must add &ophandle=XYZ)
-
- This operation generates a number (in bytes) containing the sum of the
- filesize of all directories and immutable files reachable from the given
- directory. This is a rough lower bound of the total space consumed by this
- subtree. It does not include space consumed by mutable files, nor does it
- take expansion or encoding overhead into account. Later versions of the
- code may improve this estimate upwards.
-
- The /operations/$HANDLE status output consists of two lines of text:
-
- finished: yes
- size: 1234
-
-POST $DIRURL?t=start-deep-stats (must add &ophandle=XYZ)
-
- This operation performs a recursive walk of all files and directories
- reachable from the given directory, and generates a collection of
- statistics about those objects.
-
- The result (obtained from the /operations/$OPHANDLE page) is a
- JSON-serialized dictionary with the following keys (note that some of these
- keys may be missing until 'finished' is True):
-
- finished: (bool) True if the operation has finished, else False
- count-immutable-files: count of how many CHK files are in the set
- count-mutable-files: same, for mutable files (does not include directories)
- count-literal-files: same, for LIT files (data contained inside the URI)
- count-files: sum of the above three
- count-directories: count of directories
- count-unknown: count of unrecognized objects (perhaps from the future)
- size-immutable-files: total bytes for all CHK files in the set, =deep-size
- size-mutable-files (TODO): same, for current version of all mutable files
- size-literal-files: same, for LIT files
- size-directories: size of directories (includes size-literal-files)
- size-files-histogram: list of (minsize, maxsize, count) buckets,
- with a histogram of filesizes, 5dB/bucket,
- for both literal and immutable files
- largest-directory: number of children in the largest directory
- largest-immutable-file: number of bytes in the largest CHK file
-
- size-mutable-files is not implemented, because it would require extra
- queries to each mutable file to get their size. This may be implemented in
- the future.
-
- Assuming no sharing, the basic space consumed by a single root directory is
- the sum of size-immutable-files, size-mutable-files, and size-directories.
- The actual disk space used by the shares is larger, because of the
- following sources of overhead:
-
- integrity data
- expansion due to erasure coding
- share management data (leases)
- backend (ext3) minimum block size
-
-POST $URL?t=stream-manifest
+``POST $DIRURL?t=start-manifest`` (must add &ophandle=XYZ)
+
+ This operation generates a "manifest" of the given directory tree, mostly
+ for debugging. This is a table of (path, filecap/dircap), for every object
+ reachable from the starting directory. The path will be slash-joined, and
+ the filecap/dircap will contain a link to the object in question. This page
+ gives immediate access to every object in the virtual filesystem subtree.
+
+ This operation uses the same ophandle= mechanism as deep-check. The
+ corresponding /operations/$HANDLE page has three different forms. The
+ default is output=HTML.
+
+ If output=text is added to the query args, the results will be a text/plain
+ list. The first line is special: it is either "finished: yes" or "finished:
+ no"; if the operation is not finished, you must periodically reload the
+ page until it completes. The rest of the results are a plaintext list, with
+ one file/dir per line, slash-separated, with the filecap/dircap separated
+ by a space.
+
+ If output=JSON is added to the query args, then the results will be a
+ JSON-formatted dictionary with six keys. Note that because large directory
+ structures can result in very large JSON results, the full results will not
+ be available until the operation is complete (i.e. until output["finished"]
+ is True)::
+
+ finished (bool): if False then you must reload the page until True
+ origin_si (base32 str): the storage index of the starting point
+ manifest: list of (path, cap) tuples, where path is a list of strings.
+ verifycaps: list of (printable) verify cap strings
+ storage-index: list of (base32) storage index strings
+ stats: a dictionary with the same keys as the t=start-deep-stats command
+ (described below)
+
+``POST $DIRURL?t=start-deep-size`` (must add &ophandle=XYZ)
+
+ This operation computes the sum of the sizes (in bytes) of all directories
+ and immutable files reachable from the given directory. This is a rough
+ lower bound of the total space consumed by this subtree. It does not
+ include space consumed by mutable files, nor does it take expansion or
+ encoding overhead into account. Later versions of the code may improve
+ this estimate upwards.
+
+ The /operations/$HANDLE status output consists of two lines of text::
+
+ finished: yes
+ size: 1234
+
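+ A client polling the text form of this handle can parse it trivially; a
+ sketch (this parsing helper is ours, not part of Tahoe):

```python
def parse_deep_size(text):
    # Parse the two-line "finished: yes / size: 1234" status shown above.
    # The size line is only expected once the operation has finished.
    fields = dict(line.split(": ", 1) for line in text.splitlines() if line)
    finished = (fields.get("finished") == "yes")
    return (finished, int(fields["size"]) if finished else None)

print(parse_deep_size("finished: yes\nsize: 1234"))  # (True, 1234)
```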
+``POST $DIRURL?t=start-deep-stats`` (must add &ophandle=XYZ)
+
+ This operation performs a recursive walk of all files and directories
+ reachable from the given directory, and generates a collection of
+ statistics about those objects.
+
+ The result (obtained from the /operations/$OPHANDLE page) is a
+ JSON-serialized dictionary with the following keys (note that some of these
+ keys may be missing until 'finished' is True)::
+
+ finished: (bool) True if the operation has finished, else False
+ count-immutable-files: count of how many CHK files are in the set
+ count-mutable-files: same, for mutable files (does not include directories)
+ count-literal-files: same, for LIT files (data contained inside the URI)
+ count-files: sum of the above three
+ count-directories: count of directories
+ count-unknown: count of unrecognized objects (perhaps from the future)
+ size-immutable-files: total bytes for all CHK files in the set, =deep-size
+ size-mutable-files (TODO): same, for current version of all mutable files
+ size-literal-files: same, for LIT files
+ size-directories: size of directories (includes size-literal-files)
+ size-files-histogram: list of (minsize, maxsize, count) buckets,
+ with a histogram of filesizes, 5dB/bucket,
+ for both literal and immutable files
+ largest-directory: number of children in the largest directory
+ largest-immutable-file: number of bytes in the largest CHK file
+
+ size-mutable-files is not implemented, because it would require extra
+ queries to each mutable file to get their size. This may be implemented in
+ the future.
+
+ Assuming no sharing, the basic space consumed by a single root directory is
+ the sum of size-immutable-files, size-mutable-files, and size-directories.
+ The actual disk space used by the shares is larger, because of the
+ following sources of overhead::
+
+ integrity data
+ expansion due to erasure coding
+ share management data (leases)
+ backend (ext3) minimum block size
+
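+ The size-files-histogram buckets above are described as 5dB wide: 5 dB is
+ a factor of 10**0.5 (about 3.16), so one plausible reading is that the
+ bucket edges grow as 1, 3, 10, 31, 100, 316, 1000, and so on. A sketch of
+ that reading (the exact edges Tahoe uses may differ):

```python
def bucket_edges(max_size):
    # Upper edges at successive half-powers of ten, i.e. 5dB apart:
    # 10**0, 10**0.5, 10**1, 10**1.5, ... truncated to integers.
    edges = []
    n = 0
    while not edges or edges[-1] < max_size:
        edges.append(int(10 ** (n / 2.0)))
        n += 1
    return edges

print(bucket_edges(100))  # [1, 3, 10, 31, 100]
```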
+``POST $URL?t=stream-manifest``
This operation performs a recursive walk of all files and directories
reachable from the given starting point. For each such unique object
"file", "directory", or "stats".
For all units that have a type of "file" or "directory", the dictionary will
- contain the following keys:
+ contain the following keys::
"path": a list of strings, with the path that is traversed to reach the
object
was untraversable, since the manifest entry is emitted to the HTTP response
body before the child is traversed.
-== Other Useful Pages ==
+Other Useful Pages
+==================
The portion of the web namespace that begins with "/uri" (and "/named") is
dedicated to giving users (both humans and programs) access to the Tahoe
virtual filesystem. The rest of the namespace provides status information
about the state of the Tahoe node.
-GET / (the root page)
+``GET /`` (the root page)
-This is the "Welcome Page", and contains a few distinct sections:
+This is the "Welcome Page", and contains a few distinct sections::
Node information: library versions, local nodeid, services being provided.
Grid Status: introducer information, helper information, connected storage
servers.
-GET /status/
+``GET /status/``
This page lists all active uploads and downloads, and contains a short list
of recent upload/download operations. Each operation has a link to a page
"mapupdate", "publish", or "retrieve" (the first two are for immutable
files, while the latter three are for mutable files and directories).
- The "upload" op-dict will contain the following keys:
-
- type (string): "upload"
- storage-index-string (string): a base32-encoded storage index
- total-size (int): total size of the file
- status (string): current status of the operation
- progress-hash (float): 1.0 when the file has been hashed
- progress-ciphertext (float): 1.0 when the file has been encrypted.
- progress-encode-push (float): 1.0 when the file has been encoded and
- pushed to the storage servers. For helper
- uploads, the ciphertext value climbs to 1.0
- first, then encoding starts. For unassisted
- uploads, ciphertext and encode-push progress
- will climb at the same pace.
-
- The "download" op-dict will contain the following keys:
-
- type (string): "download"
- storage-index-string (string): a base32-encoded storage index
- total-size (int): total size of the file
- status (string): current status of the operation
- progress (float): 1.0 when the file has been fully downloaded
+ The "upload" op-dict will contain the following keys::
+
+ type (string): "upload"
+ storage-index-string (string): a base32-encoded storage index
+ total-size (int): total size of the file
+ status (string): current status of the operation
+ progress-hash (float): 1.0 when the file has been hashed
+ progress-ciphertext (float): 1.0 when the file has been encrypted.
+ progress-encode-push (float): 1.0 when the file has been encoded and
+ pushed to the storage servers. For helper
+ uploads, the ciphertext value climbs to 1.0
+ first, then encoding starts. For unassisted
+ uploads, ciphertext and encode-push progress
+ will climb at the same pace.
+
+ The "download" op-dict will contain the following keys::
+
+ type (string): "download"
+ storage-index-string (string): a base32-encoded storage index
+ total-size (int): total size of the file
+ status (string): current status of the operation
+ progress (float): 1.0 when the file has been fully downloaded
Front-ends which want to report progress information are advised to simply
average together all the progress-* indicators. A slightly more accurate
implementation hashes synchronously, so clients will probably never see
progress-hash!=1.0).
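The averaging advice above can be applied generically to any op-dict, since
all of the relevant keys share the "progress" prefix. A sketch with invented
sample values:

```python
def overall_progress(op):
    # Average every key that starts with "progress", per the advice above.
    keys = [k for k in op if k.startswith("progress")]
    return sum(op[k] for k in keys) / len(keys)

upload = {                       # hypothetical snapshot of an "upload" op-dict
    "type": "upload",
    "status": "pushing shares",
    "progress-hash": 1.0,
    "progress-ciphertext": 1.0,
    "progress-encode-push": 0.5,
}
print(round(overall_progress(upload), 3))  # 0.833
```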
-GET /provisioning/
+``GET /provisioning/``
This page provides a basic tool to predict the likely storage and bandwidth
requirements of a large Tahoe grid. It provides forms to input things like
the grid. This information is very preliminary, and the model upon which it
is based still needs a lot of work.
-GET /helper_status/
+``GET /helper_status/``
If the node is running a helper (i.e. if [helper]enabled is set to True in
tahoe.cfg), then this page will provide a list of all the helper operations
JSON-formatted list of helper statistics, which can then be used to produce
graphs to indicate how busy the helper is.
-GET /statistics/
+``GET /statistics/``
This page provides "node statistics", which are collected from a variety of
- sources.
+ sources::
load_monitor: every second, the node schedules a timer for one second in
the future, then measures how late the subsequent callback
graphs of node behavior. The misc/munin/ directory in the source
distribution provides some tools to produce these graphs.
-GET / (introducer status)
+``GET /`` (introducer status)
For Introducer nodes, the welcome page displays information about both
clients and servers which are connected to the introducer. Servers make
By adding "?t=json" to the URL, the node will return a JSON-formatted
dictionary of stats values, which can be used to produce graphs of connected
- clients over time. This dictionary has the following keys:
+ clients over time. This dictionary has the following keys::
["subscription_summary"] : a dictionary mapping service name (like
"storage") to an integer with the number of
considered to be on the same host.
-== Static Files in /public_html ==
+Static Files in /public_html
+============================
The webapi server will take any request for a URL that starts with /static
and serve it from a configurable directory which defaults to
prettier front-end to the rest of the Tahoe webapi.
-== Safety and security issues -- names vs. URIs ==
+Safety and security issues -- names vs. URIs
+============================================
Summary: use explicit file- and dir- caps whenever possible, to reduce the
potential for surprises when the filesystem structure is changed.
directory) is found by following this name (or sequence of names) when my
request reaches the server". Use URIs if you want "this particular object".
-== Concurrency Issues ==
+Concurrency Issues
+==================
Tahoe uses both mutable and immutable files. Mutable files can be created
explicitly by doing an upload with ?mutable=true added, or implicitly by
this file.
-[1]: URLs and HTTP and UTF-8, Oh My
+.. [1] URLs and HTTP and UTF-8, Oh My
HTTP does not provide a mechanism to specify the character set used to
encode non-ascii names in URLs (rfc2396#2.1). We prefer the convention that
The response header will need to indicate a non-ASCII filename. The actual
mechanism to do this is not clear. For ASCII filenames, the response header
- would look like:
+ would look like::
Content-Disposition: attachment; filename="english.txt"
If Tahoe were to enforce the utf-8 convention, it would need to decode the
URL argument into a unicode string, and then encode it back into a sequence
of bytes when creating the response header. One possibility would be to use
- unencoded utf-8. Developers suggest that IE7 might accept this:
+ unencoded utf-8. Developers suggest that IE7 might accept this::
#1: Content-Disposition: attachment; filename="fianc\xC3\xA9e"
(note, the last four bytes of that line, not including the newline, are
RFC2231#4 (dated 1997): suggests that the following might work, and some
developers (http://markmail.org/message/dsjyokgl7hv64ig3) have reported that
- it is supported by firefox (but not IE7):
+ it is supported by firefox (but not IE7)::
#2: Content-Disposition: attachment; filename*=utf-8''fianc%C3%A9e
My reading of RFC2616#19.5.1 (which defines Content-Disposition) says that
the filename= parameter is defined to be wrapped in quotes (presumeably to
allow spaces without breaking the parsing of subsequent parameters), which
- would give us:
+ would give us::
#3: Content-Disposition: attachment; filename*=utf-8''"fianc%C3%A9e"
However this is contrary to the examples in the email thread listed above.
Developers report that IE7 (when it is configured for UTF-8 URL encoding,
- which is not the default in asian countries), will accept:
+ which is not the default in asian countries), will accept::
#4: Content-Disposition: attachment; filename=fianc%C3%A9e
+=============
+Mutable Files
+=============
This describes the "RSA-based mutable files" which were shipped in Tahoe v0.8.0.
-= Mutable Files =
+1. `Consistency vs. Availability`_
+2. `The Prime Coordination Directive: "Don't Do That"`_
+3. `Small Distributed Mutable Files`_
+
+ 1. `SDMF slots overview`_
+ 2. `Server Storage Protocol`_
+ 3. `Code Details`_
+ 4. `SMDF Slot Format`_
+ 5. `Recovery`_
+
+4. `Medium Distributed Mutable Files`_
+5. `Large Distributed Mutable Files`_
+6. `TODO`_
Mutable File Slots are places with a stable identifier that can hold data
that changes over time. In contrast to CHK slots, for which the
deleting or corrupting the shares), or attempt a rollback attack (which can
only succeed with the cooperation of at least k servers).
-== Consistency vs Availability ==
+Consistency vs. Availability
+============================
There is an age-old battle between consistency and availability. Epic papers
have been written, elaborate proofs have been established, and generations of
necessarily a problem (i.e. directory nodes can usually merge multiple "add
child" operations).
-== The Prime Coordination Directive: "Don't Do That" ==
+The Prime Coordination Directive: "Don't Do That"
+=================================================
The current rule for applications which run on top of Tahoe is "do not
perform simultaneous uncoordinated writes". That means you need non-tahoe
means to make sure that two parties are not trying to modify the same mutable
slot at the same time. For example:
- * don't give the read-write URI to anyone else. Dirnodes in a private
- directory generally satisfy this case, as long as you don't use two
- clients on the same account at the same time
- * if you give a read-write URI to someone else, stop using it yourself. An
- inbox would be a good example of this.
- * if you give a read-write URI to someone else, call them on the phone
- before you write into it
- * build an automated mechanism to have your agents coordinate writes.
- For example, we expect a future release to include a FURL for a
- "coordination server" in the dirnodes. The rule can be that you must
- contact the coordination server and obtain a lock/lease on the file
- before you're allowed to modify it.
+* don't give the read-write URI to anyone else. Dirnodes in a private
+ directory generally satisfy this case, as long as you don't use two
+ clients on the same account at the same time
+* if you give a read-write URI to someone else, stop using it yourself. An
+ inbox would be a good example of this.
+* if you give a read-write URI to someone else, call them on the phone
+ before you write into it
+* build an automated mechanism to have your agents coordinate writes.
+ For example, we expect a future release to include a FURL for a
+ "coordination server" in the dirnodes. The rule can be that you must
+ contact the coordination server and obtain a lock/lease on the file
+ before you're allowed to modify it.
If you do not follow this rule, Bad Things will happen. The worst-case Bad
Thing is that the entire file will be lost. A less-bad Bad Thing is that one
conflicts, not intra-node ones.
-== Small Distributed Mutable Files ==
+Small Distributed Mutable Files
+===============================
SDMF slots are suitable for small (<1MB) files that are editing by rewriting
the entire file. The three operations are:
The first use of SDMF slots will be to hold directories (dirnodes), which map
encrypted child names to rw-URI/ro-URI pairs.
-=== SDMF slots overview ===
+SDMF slots overview
+-------------------
Each SDMF slot is created with a public/private key pair. The public key is
known as the "verification key", while the private key is called the
The read-only URI contains the read key and the verification key hash. The
verify-only URI contains the storage index and the verification key hash.
+::
+
URI:SSK-RW:b2a(writekey):b2a(verification_key_hash)
URI:SSK-RO:b2a(readkey):b2a(verification_key_hash)
URI:SSK-Verify:b2a(storage_index):b2a(verification_key_hash)
The SDMF slot structure will be described in more detail below. The important
pieces are:
- * a sequence number
- * a root hash "R"
- * the encoding parameters (including k, N, file size, segment size)
- * a signed copy of [seqnum,R,encoding_params], using the signature key
- * the verification key (not encrypted)
- * the share hash chain (part of a Merkle tree over the share hashes)
- * the block hash tree (Merkle tree over blocks of share data)
- * the share data itself (erasure-coding of read-key-encrypted file data)
- * the signature key, encrypted with the write key
+* a sequence number
+* a root hash "R"
+* the encoding parameters (including k, N, file size, segment size)
+* a signed copy of [seqnum,R,encoding_params], using the signature key
+* the verification key (not encrypted)
+* the share hash chain (part of a Merkle tree over the share hashes)
+* the block hash tree (Merkle tree over blocks of share data)
+* the share data itself (erasure-coding of read-key-encrypted file data)
+* the signature key, encrypted with the write key
The access pattern for read is:
- * hash read-key to get storage index
- * use storage index to locate 'k' shares with identical 'R' values
- * either get one share, read 'k' from it, then read k-1 shares
- * or read, say, 5 shares, discover k, either get more or be finished
- * or copy k into the URIs
- * read verification key
- * hash verification key, compare against verification key hash
- * read seqnum, R, encoding parameters, signature
- * verify signature against verification key
- * read share data, compute block-hash Merkle tree and root "r"
- * read share hash chain (leading from "r" to "R")
- * validate share hash chain up to the root "R"
- * submit share data to erasure decoding
- * decrypt decoded data with read-key
- * submit plaintext to application
+
+* hash read-key to get storage index
+* use storage index to locate 'k' shares with identical 'R' values
+
+ * either get one share, read 'k' from it, then read k-1 shares
+ * or read, say, 5 shares, discover k, either get more or be finished
+ * or copy k into the URIs
+
+* read verification key
+* hash verification key, compare against verification key hash
+* read seqnum, R, encoding parameters, signature
+* verify signature against verification key
+* read share data, compute block-hash Merkle tree and root "r"
+* read share hash chain (leading from "r" to "R")
+* validate share hash chain up to the root "R"
+* submit share data to erasure decoding
+* decrypt decoded data with read-key
+* submit plaintext to application
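+ The first two read steps rely on a one-way derivation chain: the write-key
+ yields the read-key, which yields the storage index. A sketch of that
+ chain, using tagged SHA-256 hashes and a 128-bit key purely as illustrative
+ stand-ins (Tahoe's actual tagged-hash scheme and key sizes differ):

```python
from hashlib import sha256

def tagged_hash(tag, data):
    # Illustrative stand-in; Tahoe's real tagged hashes use different tags
    # and framing.
    return sha256(tag + b":" + data).digest()

writekey = b"\x01" * 16                                      # hypothetical key
readkey = tagged_hash(b"readkey", writekey)[:16]             # one-way step 1
storage_index = tagged_hash(b"storage-index", readkey)[:16]  # one-way step 2

# A read-cap holder knows the read-key, so it can derive the storage index
# and locate shares, but cannot invert the hash to recover the write-key.
```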
The access pattern for write is:
- * hash write-key to get read-key, hash read-key to get storage index
- * use the storage index to locate at least one share
- * read verification key and encrypted signature key
- * decrypt signature key using write-key
- * hash signature key, compare against write-key
- * hash verification key, compare against verification key hash
- * encrypt plaintext from application with read-key
- * application can encrypt some data with the write-key to make it only
- available to writers (use this for transitive read-onlyness of dirnodes)
- * erasure-code crypttext to form shares
- * split shares into blocks
- * compute Merkle tree of blocks, giving root "r" for each share
- * compute Merkle tree of shares, find root "R" for the file as a whole
- * create share data structures, one per server:
- * use seqnum which is one higher than the old version
- * share hash chain has log(N) hashes, different for each server
- * signed data is the same for each server
- * now we have N shares and need homes for them
- * walk through peers
- * if share is not already present, allocate-and-set
- * otherwise, try to modify existing share:
- * send testv_and_writev operation to each one
- * testv says to accept share if their(seqnum+R) <= our(seqnum+R)
- * count how many servers wind up with which versions (histogram over R)
- * keep going until N servers have the same version, or we run out of servers
- * if any servers wound up with a different version, report error to
- application
- * if we ran out of servers, initiate recovery process (described below)
-
-=== Server Storage Protocol ===
+
+* hash write-key to get read-key, hash read-key to get storage index
+* use the storage index to locate at least one share
+* read verification key and encrypted signature key
+* decrypt signature key using write-key
+* hash signature key, compare against write-key
+* hash verification key, compare against verification key hash
+* encrypt plaintext from application with read-key
+
+ * application can encrypt some data with the write-key to make it only
+ available to writers (use this for transitive read-onlyness of dirnodes)
+
+* erasure-code crypttext to form shares
+* split shares into blocks
+* compute Merkle tree of blocks, giving root "r" for each share
+* compute Merkle tree of shares, find root "R" for the file as a whole
+* create share data structures, one per server:
+
+ * use seqnum which is one higher than the old version
+ * share hash chain has log(N) hashes, different for each server
+ * signed data is the same for each server
+
+* now we have N shares and need homes for them
+* walk through peers
+
+ * if share is not already present, allocate-and-set
+  * otherwise, try to modify existing share:
+
+    * send testv_and_writev operation to each one
+    * testv says to accept share if their(seqnum+R) <= our(seqnum+R)
+
+  * count how many servers wind up with which versions (histogram over R)
+  * keep going until N servers have the same version, or we run out of servers
+
+ * if any servers wound up with a different version, report error to
+ application
+ * if we ran out of servers, initiate recovery process (described below)
+
+Server Storage Protocol
+-----------------------
The storage servers will provide a mutable slot container which is oblivious
to the details of the data being contained inside it. Each storage index
The container holds space for a container magic number (for versioning), the
write enabler, the nodeid which accepted the write enabler (used for share
migration, described below), a small number of lease structures, the embedded
-data itself, and expansion space for additional lease structures.
+data itself, and expansion space for additional lease structures::
# offset size name
1 0 32 magic verstr "tahoe mutable container v1" plus binary
The two methods provided by the storage server on these "MutableSlot" share
objects are:
- * readv(ListOf(offset=int, length=int))
- * returns a list of bytestrings, of the various requested lengths
- * offset < 0 is interpreted relative to the end of the data
- * spans which hit the end of the data will return truncated data
-
- * testv_and_writev(write_enabler, test_vector, write_vector)
- * this is a test-and-set operation which performs the given tests and only
- applies the desired writes if all tests succeed. This is used to detect
- simultaneous writers, and to reduce the chance that an update will lose
- data recently written by some other party (written after the last time
- this slot was read).
- * test_vector=ListOf(TupleOf(offset, length, opcode, specimen))
- * the opcode is a string, from the set [gt, ge, eq, le, lt, ne]
- * each element of the test vector is read from the slot's data and
- compared against the specimen using the desired (in)equality. If all
- tests evaluate True, the write is performed
- * write_vector=ListOf(TupleOf(offset, newdata))
- * offset < 0 is not yet defined, it probably means relative to the
- end of the data, which probably means append, but we haven't nailed
- it down quite yet
- * write vectors are executed in order, which specifies the results of
- overlapping writes
- * return value:
- * error: OutOfSpace
- * error: something else (io error, out of memory, whatever)
- * (True, old_test_data): the write was accepted (test_vector passed)
- * (False, old_test_data): the write was rejected (test_vector failed)
- * both 'accepted' and 'rejected' return the old data that was used
- for the test_vector comparison. This can be used by the client
- to detect write collisions, including collisions for which the
- desired behavior was to overwrite the old version.
+* readv(ListOf(offset=int, length=int))
+
+ * returns a list of bytestrings, of the various requested lengths
+ * offset < 0 is interpreted relative to the end of the data
+ * spans which hit the end of the data will return truncated data
+
+* testv_and_writev(write_enabler, test_vector, write_vector)
+
+ * this is a test-and-set operation which performs the given tests and only
+ applies the desired writes if all tests succeed. This is used to detect
+ simultaneous writers, and to reduce the chance that an update will lose
+ data recently written by some other party (written after the last time
+ this slot was read).
+  * test_vector=ListOf(TupleOf(offset, length, opcode, specimen))
+
+    * the opcode is a string, from the set [gt, ge, eq, le, lt, ne]
+    * each element of the test vector is read from the slot's data and
+      compared against the specimen using the desired (in)equality. If all
+      tests evaluate True, the write is performed
+
+ * write_vector=ListOf(TupleOf(offset, newdata))
+
+ * offset < 0 is not yet defined, it probably means relative to the
+ end of the data, which probably means append, but we haven't nailed
+ it down quite yet
+ * write vectors are executed in order, which specifies the results of
+ overlapping writes
+
+ * return value:
+
+ * error: OutOfSpace
+ * error: something else (io error, out of memory, whatever)
+ * (True, old_test_data): the write was accepted (test_vector passed)
+ * (False, old_test_data): the write was rejected (test_vector failed)
+
+    * both 'accepted' and 'rejected' return the old data that was used
+      for the test_vector comparison. This can be used by the client
+      to detect write collisions, including collisions for which the
+      desired behavior was to overwrite the old version.
In addition, the storage server provides several methods to access these
share objects:
- * allocate_mutable_slot(storage_index, sharenums=SetOf(int))
- * returns DictOf(int, MutableSlot)
- * get_mutable_slot(storage_index)
- * returns DictOf(int, MutableSlot)
- * or raises KeyError
+* allocate_mutable_slot(storage_index, sharenums=SetOf(int))
+
+ * returns DictOf(int, MutableSlot)
+
+* get_mutable_slot(storage_index)
+
+ * returns DictOf(int, MutableSlot)
+ * or raises KeyError
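A minimal in-memory sketch of these two accessors follows; the class bodies
here are assumptions for exposition (MutableSlot stands in for the share
container described earlier), not Tahoe's implementation:

```python
class MutableSlot:
    # Stand-in for the per-share mutable slot object described above.
    def __init__(self):
        self.data = b""

class StorageServer:
    def __init__(self):
        self._slots = {}     # storage_index -> {sharenum: MutableSlot}

    def allocate_mutable_slot(self, storage_index, sharenums):
        # Create any missing shares, then return DictOf(int, MutableSlot).
        shares = self._slots.setdefault(storage_index, {})
        for shnum in sharenums:
            shares.setdefault(shnum, MutableSlot())
        return dict((shnum, shares[shnum]) for shnum in sharenums)

    def get_mutable_slot(self, storage_index):
        # Returns DictOf(int, MutableSlot), or raises KeyError.
        return self._slots[storage_index]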
We intend to add an interface which allows small slots to allocate-and-write
in a single call, as well as do update or read in a single call. The goal is
to allow a reasonably-sized dirnode to be created (or updated, or read) in
just one round trip (to all N shareholders in parallel).
-==== migrating shares ====
+migrating shares
+````````````````
If a share must be migrated from one server to another, two values become
invalid: the write enabler (since it was computed for the old server), and
Migrating the leases will require a similar protocol. This protocol will be
defined concretely at a later date.
-=== Code Details ===
+Code Details
+------------
The MutableFileNode class is used to manipulate mutable files (as opposed to
ImmutableFileNodes). These are initially generated with
The methods of MutableFileNode are:
- * download_to_data() -> [deferred] newdata, NotEnoughSharesError
- * if there are multiple retrieveable versions in the grid, get() returns
- the first version it can reconstruct, and silently ignores the others.
- In the future, a more advanced API will signal and provide access to
- the multiple heads.
- * update(newdata) -> OK, UncoordinatedWriteError, NotEnoughSharesError
- * overwrite(newdata) -> OK, UncoordinatedWriteError, NotEnoughSharesError
+* download_to_data() -> [deferred] newdata, NotEnoughSharesError
+
+  * if there are multiple retrievable versions in the grid,
+    download_to_data() returns the first version it can reconstruct, and
+    silently ignores the others. In the future, a more advanced API will
+    signal and provide access to the multiple heads.
+
+* update(newdata) -> OK, UncoordinatedWriteError, NotEnoughSharesError
+* overwrite(newdata) -> OK, UncoordinatedWriteError, NotEnoughSharesError
download_to_data() causes a new retrieval to occur, pulling the current
contents from the grid and returning them to the caller. At the same time,
triggering the UncoordinatedWriteError.
update() is therefore intended to be used just after a download_to_data(), in
-the following pattern:
+the following pattern::
d = mfn.download_to_data()
d.addCallback(apply_delta)
(post-collision and post-recovery) form of the file, reapply their delta,
then submit the update request again. A randomized pause is necessary to
reduce the chances of colliding a second time with another client that is
-doing exactly the same thing:
+doing exactly the same thing::
d = mfn.download_to_data()
d.addCallback(apply_delta)
giving up after a while.
UCW does not mean that the update was not applied, so it is also a good idea
-to skip the retry-update step if the delta was already applied:
+to skip the retry-update step if the delta was already applied::
d = mfn.download_to_data()
d.addCallback(apply_delta)
raw files are uploaded into a mutable slot through the tahoe webapi (using
POST and the ?mutable=true argument), they are put in place with overwrite().
-
-
The peer-selection and data-structure manipulation (and signing/verification)
steps will be implemented in a separate class in allmydata/mutable.py .
-=== SMDF Slot Format ===
+SMDF Slot Format
+----------------
This SMDF data lives inside a server-side MutableSlot container. The server
is oblivious to this format.
all the way to the beginning of the encrypted private key (the encprivkey
offset is used both to terminate the share data and to begin the encprivkey).
- # offset size name
- 1 0 1 version byte, \x00 for this format
- 2 1 8 sequence number. 2^64-1 must be handled specially, TBD
- 3 9 32 "R" (root of share hash Merkle tree)
- 4 41 16 IV (share data is AES(H(readkey+IV)) )
- 5 57 18 encoding parameters:
- 57 1 k
- 58 1 N
- 59 8 segment size
- 67 8 data length (of original plaintext)
- 6 75 32 offset table:
- 75 4 (8) signature
- 79 4 (9) share hash chain
- 83 4 (10) block hash tree
- 87 4 (11) share data
- 91 8 (12) encrypted private key
- 99 8 (13) EOF
- 7 107 436ish verification key (2048 RSA key)
- 8 543ish 256ish signature=RSAenc(sigkey, H(version+seqnum+r+IV+encparm))
- 9 799ish (a) share hash chain, encoded as:
- "".join([pack(">H32s", shnum, hash)
- for (shnum,hash) in needed_hashes])
-10 (927ish) (b) block hash tree, encoded as:
- "".join([pack(">32s",hash) for hash in block_hash_tree])
-11 (935ish) LEN share data (no gap between this and encprivkey)
-12 ?? 1216ish encrypted private key= AESenc(write-key, RSA-key)
-13 ?? -- EOF
-
-(a) The share hash chain contains ceil(log(N)) hashes, each 32 bytes long.
+::
+
+ # offset size name
+ 1 0 1 version byte, \x00 for this format
+ 2 1 8 sequence number. 2^64-1 must be handled specially, TBD
+ 3 9 32 "R" (root of share hash Merkle tree)
+ 4 41 16 IV (share data is AES(H(readkey+IV)) )
+ 5 57 18 encoding parameters:
+ 57 1 k
+ 58 1 N
+ 59 8 segment size
+ 67 8 data length (of original plaintext)
+ 6 75 32 offset table:
+ 75 4 (8) signature
+ 79 4 (9) share hash chain
+ 83 4 (10) block hash tree
+ 87 4 (11) share data
+ 91 8 (12) encrypted private key
+ 99 8 (13) EOF
+ 7 107 436ish verification key (2048 RSA key)
+ 8 543ish 256ish signature=RSAenc(sigkey, H(version+seqnum+r+IV+encparm))
+ 9 799ish (a) share hash chain, encoded as:
+ "".join([pack(">H32s", shnum, hash)
+ for (shnum,hash) in needed_hashes])
+ 10 (927ish) (b) block hash tree, encoded as:
+ "".join([pack(">32s",hash) for hash in block_hash_tree])
+ 11 (935ish) LEN share data (no gap between this and encprivkey)
+ 12 ?? 1216ish encrypted private key= AESenc(write-key, RSA-key)
+ 13 ?? -- EOF
+
+ (a) The share hash chain contains ceil(log(N)) hashes, each 32 bytes long.
This is the set of hashes necessary to validate this share's leaf in the
share Merkle tree. For N=10, this is 4 hashes, i.e. 128 bytes.
-(b) The block hash tree contains ceil(length/segsize) hashes, each 32 bytes
+ (b) The block hash tree contains ceil(length/segsize) hashes, each 32 bytes
long. This is the set of hashes necessary to validate any given block of
share data up to the per-share root "r". Each "r" is a leaf of the share
    hash tree (with root "R"), from which a minimal subset of hashes is put in
the share hash chain in (8).
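The fixed-size portion of this layout (bytes 0 through 106) maps directly
onto struct format strings. A sketch with dummy field values; the names
below are illustrative, not taken from the Tahoe source:

```python
import struct

# Bytes 0-74 of the share: version byte, sequence number, root hash "R",
# IV, then the encoding parameters (k, N, segment size, data length).
HEADER_FMT = ">B Q 32s 16s B B Q Q"
# Bytes 75-106: the offset table -- four 4-byte and two 8-byte offsets.
OFFSETS_FMT = ">L L L L Q Q"

def pack_header(seqnum, root_hash, IV, k, N, segsize, datalen):
    return struct.pack(HEADER_FMT, 0, seqnum, root_hash, IV,
                       k, N, segsize, datalen)

header = pack_header(1, b"\x00" * 32, b"\x00" * 16, 3, 10, 131072, 1000)
assert len(header) == 75                   # offset table starts at byte 75
assert struct.calcsize(OFFSETS_FMT) == 32  # offset table ends at byte 107
```

The two assertions confirm the byte positions claimed in the table: the
offset table begins at offset 75 and the verification key at offset 107.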
-=== Recovery ===
+Recovery
+--------
The first line of defense against damage caused by colliding writes is the
Prime Coordination Directive: "Don't Do That".
The write-shares-to-peers algorithm is as follows:
- * permute peers according to storage index
- * walk through peers, trying to assign one share per peer
- * for each peer:
- * send testv_and_writev, using "old(seqnum+R) <= our(seqnum+R)" as the test
- * this means that we will overwrite any old versions, and we will
- overwrite simultaenous writers of the same version if our R is higher.
- We will not overwrite writers using a higher seqnum.
- * record the version that each share winds up with. If the write was
- accepted, this is our own version. If it was rejected, read the
- old_test_data to find out what version was retained.
- * if old_test_data indicates the seqnum was equal or greater than our
- own, mark the "Simultanous Writes Detected" flag, which will eventually
- result in an error being reported to the writer (in their close() call).
- * build a histogram of "R" values
- * repeat until the histogram indicate that some version (possibly ours)
- has N shares. Use new servers if necessary.
- * If we run out of servers:
- * if there are at least shares-of-happiness of any one version, we're
- happy, so return. (the close() might still get an error)
- * not happy, need to reinforce something, goto RECOVERY
-
-RECOVERY:
- * read all shares, count the versions, identify the recoverable ones,
- discard the unrecoverable ones.
- * sort versions: locate max(seqnums), put all versions with that seqnum
- in the list, sort by number of outstanding shares. Then put our own
- version. (TODO: put versions with seqnum <max but >us ahead of us?).
- * for each version:
- * attempt to recover that version
- * if not possible, remove it from the list, go to next one
- * if recovered, start at beginning of peer list, push that version,
- continue until N shares are placed
- * if pushing our own version, bump up the seqnum to one higher than
- the max seqnum we saw
- * if we run out of servers:
- * schedule retry and exponential backoff to repeat RECOVERY
- * admit defeat after some period? presumeably the client will be shut down
- eventually, maybe keep trying (once per hour?) until then.
-
-
-
-
-== Medium Distributed Mutable Files ==
+* permute peers according to storage index
+* walk through peers, trying to assign one share per peer
+* for each peer:
+
+ * send testv_and_writev, using "old(seqnum+R) <= our(seqnum+R)" as the test
+
+ * this means that we will overwrite any old versions, and we will
+    overwrite simultaneous writers of the same version if our R is higher.
+ We will not overwrite writers using a higher seqnum.
+
+ * record the version that each share winds up with. If the write was
+ accepted, this is our own version. If it was rejected, read the
+ old_test_data to find out what version was retained.
+  * if old_test_data indicates the seqnum was equal or greater than our
+    own, mark the "Simultaneous Writes Detected" flag, which will eventually
+    result in an error being reported to the writer (in their close() call).
+
+* build a histogram of "R" values
+* repeat until the histogram indicates that some version (possibly ours)
+  has N shares. Use new servers if necessary.
+* If we run out of servers:
+
+  * if there are at least shares-of-happiness of any one version, we're
+    happy, so return. (the close() might still get an error)
+  * not happy, need to reinforce something, goto RECOVERY
+
+Recovery:
+
+* read all shares, count the versions, identify the recoverable ones,
+ discard the unrecoverable ones.
+* sort versions: locate max(seqnums), put all versions with that seqnum
+ in the list, sort by number of outstanding shares. Then put our own
+ version. (TODO: put versions with seqnum <max but >us ahead of us?).
+* for each version:
+
+ * attempt to recover that version
+ * if not possible, remove it from the list, go to next one
+ * if recovered, start at beginning of peer list, push that version,
+ continue until N shares are placed
+ * if pushing our own version, bump up the seqnum to one higher than
+ the max seqnum we saw
+ * if we run out of servers:
+
+    * schedule retry and exponential backoff to repeat RECOVERY
+    * admit defeat after some period? presumably the client will be shut down
+      eventually, maybe keep trying (once per hour?) until then.
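The first step of the algorithm above, "permute peers according to storage
index", can be sketched as follows. The hash choice is illustrative, not
necessarily the permutation function Tahoe actually uses:

```python
import hashlib

def permute_peers(peerids, storage_index):
    # Order peers by the hash of (peerid + storage_index): every writer and
    # reader of a given slot walks the same peer sequence, while different
    # slots spread their shares across different orderings of the grid.
    return sorted(peerids,
                  key=lambda pid: hashlib.sha256(pid + storage_index).digest())
```

Because the ordering is deterministic per storage index, a later reader
contacts peers in the same order the writer used, so it finds shares early
in its walk.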
+
+
+Medium Distributed Mutable Files
+================================
These are just like the SDMF case, but:
- * we actually take advantage of the Merkle hash tree over the blocks, by
- reading a single segment of data at a time (and its necessary hashes), to
- reduce the read-time alacrity
- * we allow arbitrary writes to the file (i.e. seek() is provided, and
- O_TRUNC is no longer required)
- * we write more code on the client side (in the MutableFileNode class), to
- first read each segment that a write must modify. This looks exactly like
- the way a normal filesystem uses a block device, or how a CPU must perform
- a cache-line fill before modifying a single word.
- * we might implement some sort of copy-based atomic update server call,
- to allow multiple writev() calls to appear atomic to any readers.
+* we actually take advantage of the Merkle hash tree over the blocks, by
+ reading a single segment of data at a time (and its necessary hashes), to
+ reduce the read-time alacrity
+* we allow arbitrary writes to the file (i.e. seek() is provided, and
+ O_TRUNC is no longer required)
+* we write more code on the client side (in the MutableFileNode class), to
+ first read each segment that a write must modify. This looks exactly like
+ the way a normal filesystem uses a block device, or how a CPU must perform
+ a cache-line fill before modifying a single word.
+* we might implement some sort of copy-based atomic update server call,
+ to allow multiple writev() calls to appear atomic to any readers.
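The "cache-line fill" idea above can be sketched as a read-modify-write of
one segment. The accessor names are hypothetical, and this sketch assumes
the write fits within a single segment:

```python
def modify_in_place(read_segment, write_segment, segsize, offset, newdata):
    # To change part of one segment: fetch the whole segment, patch the
    # affected bytes, and write the whole segment back -- just as a
    # filesystem uses a block device, or a CPU fills a cache line.
    assert offset % segsize + len(newdata) <= segsize  # single-segment only
    segnum = offset // segsize
    seg = bytearray(read_segment(segnum))
    start = offset % segsize
    seg[start:start + len(newdata)] = newdata
    write_segment(segnum, bytes(seg))
```

A multi-segment write would simply repeat this per affected segment; each
segment rewrite also requires recomputing that segment's block hashes.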
MDMF slots provide fairly efficient in-place edits of very large files (a few
GB). Appending data is also fairly efficient, although each time a power of 2
MDMF1 uses the Merkle tree to enable low-alacrity random-access reads. MDMF2
adds cache-line reads to allow random-access writes.
-== Large Distributed Mutable Files ==
+Large Distributed Mutable Files
+===============================
LDMF slots use a fundamentally different way to store the file, inspired by
Mercurial's "revlog" format. They enable very efficient insert/remove/replace
LDMF1 provides deltas but tries to avoid dealing with multiple heads. LDMF2
provides explicit support for revision identifiers and branching.
-== TODO ==
+TODO
+====
improve allocate-and-write or get-writer-buckets API to allow one-call (or
maybe two-call) updates. The challenge is in figuring out which shares are on
update.. maybe send back a list of all old nodeids that we find, then try all
of them when we accept the update?
- We now do this in a specially-formatted IndexError exception:
- "UNABLE to renew non-existent lease. I have leases accepted by " +
- "nodeids: '12345','abcde','44221' ."
+We now do this in a specially-formatted IndexError exception::
+
+  "UNABLE to renew non-existent lease. I have leases accepted by " +
+  "nodeids: '12345','abcde','44221' ."
confirm that a repairer can regenerate shares without the private key. Hmm,
without the write-enabler they won't be able to write those shares to the