Configuring a Tahoe-LAFS node
=============================
-1. `Overall Node Configuration`_
-2. `Client Configuration`_
-3. `Storage Server Configuration`_
-4. `Frontend Configuration`_
-5. `Running A Helper`_
-6. `Running An Introducer`_
-7. `Other Files in BASEDIR`_
-8. `Other files`_
-9. `Backwards Compatibility Files`_
-10. `Example`_
+1. `Overall Node Configuration`_
+2. `Client Configuration`_
+3. `Storage Server Configuration`_
+4. `Frontend Configuration`_
+5. `Running A Helper`_
+6. `Running An Introducer`_
+7. `Other Files in BASEDIR`_
+8. `Other files`_
+9. `Example`_
A Tahoe-LAFS node is configured by writing to files in its base
directory. These files are read by the node when it starts, so each time you
``web.port = (strports string, optional)``
This controls where the node's webserver should listen, providing
- filesystem access and node status as defined in `webapi.rst
- <frontends/webapi.rst>`_. This file contains a Twisted "strports"
+ filesystem access and node status as defined in
+ `<frontends/webapi.rst>`_. This file contains a Twisted "strports"
specification such as "``3456``" or "``tcp:3456:interface=127.0.0.1``".
The "``tahoe create-node``" or "``tahoe create-client``" commands set the
``web.port`` to "``tcp:3456:interface=127.0.0.1``" by default; this is
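For example (an illustrative fragment using the default value mentioned above), the ``[node]`` section of ``tahoe.cfg`` could pin the webserver to the loopback interface::

    [node]
    web.port = tcp:3456:interface=127.0.0.1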
These three values set the default encoding parameters. Each time a new
file is uploaded, erasure-coding is used to break the ciphertext into
- separate pieces. There will be ``N`` (i.e. ``shares.total``) pieces
+ separate shares. There will be ``N`` (i.e. ``shares.total``) shares
created, and the file will be recoverable if any ``k``
- (i.e. ``shares.needed``) pieces are retrieved. The default values are
+ (i.e. ``shares.needed``) shares are retrieved. The default values are
3-of-10 (i.e. ``shares.needed = 3``, ``shares.total = 10``). Setting
``k`` to 1 is equivalent to simple replication (uploading ``N`` copies of
the file).
- These values control the tradeoff between storage overhead, performance,
- and reliability. To a first approximation, a 1MB file will use (1MB *
+ These values control the tradeoff between storage overhead and
+ reliability. To a first approximation, a 1MB file will use (1MB *
``N``/``k``) of backend storage space (the actual value will be a bit
more, because of other forms of overhead). Up to ``N``-``k`` shares can
- be lost before the file becomes unrecoverable, so assuming there are at
- least ``N`` servers, up to ``N``-``k`` servers can be offline without
- losing the file. So large ``N``/``k`` ratios are more reliable, and small
- ``N``/``k`` ratios use less disk space. Clearly, ``k`` must never be
- greater than ``N``.
-
- Large values of ``N`` will slow down upload operations slightly, since
- more servers must be involved, and will slightly increase storage
- overhead due to the hash trees that are created. Large values of ``k``
- will cause downloads to be marginally slower, because more servers must
- be involved. ``N`` cannot be larger than 256, because of the 8-bit
- erasure-coding algorithm that Tahoe-LAFS uses.
-
- ``shares.happy`` allows you control over the distribution of your
- immutable file. For a successful upload, shares are guaranteed to be
- initially placed on at least ``shares.happy`` distinct servers, the
- correct functioning of any ``k`` of which is sufficient to guarantee the
- availability of the uploaded file. This value should not be larger than
- the number of servers on your grid.
+ be lost before the file becomes unrecoverable. So large ``N``/``k``
+ ratios are more reliable, and small ``N``/``k`` ratios use less disk
+ space. ``N`` cannot be larger than 256, because of the 8-bit
+ erasure-coding algorithm that Tahoe-LAFS uses. ``k`` cannot be greater
+ than ``N``. See `<performance.rst>`_ for more details.
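The storage-overhead arithmetic above can be sketched in a few lines of Python (a back-of-the-envelope illustration, not part of Tahoe-LAFS):

```python
def expansion_factor(k: int, n: int) -> float:
    """Approximate backend storage used per byte of plaintext: N/k.

    The real figure is slightly higher because of hash trees and
    other per-share overhead.
    """
    if not 1 <= k <= n <= 256:
        raise ValueError("need 1 <= k <= N <= 256")
    return n / k


def tolerable_losses(k: int, n: int) -> int:
    """Number of shares that can be lost while the file stays recoverable."""
    return n - k


# With the 3-of-10 defaults, a 1MB file consumes roughly 3.33MB of
# backend storage, and any 7 of the 10 shares may be lost.
print(round(expansion_factor(3, 10), 2))  # 3.33
print(tolerable_losses(3, 10))            # 7
```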
+
+ ``shares.happy`` gives you control over how widely the shares of an
+ immutable file are spread out. For a successful upload, shares are
+ guaranteed to be initially placed on at least ``shares.happy`` distinct
+ servers, the correct functioning of any ``k`` of which is sufficient to
+ guarantee the availability of the uploaded file. This value should not be
+ larger than the number of servers on your grid.
A value of ``shares.happy`` <= ``k`` is allowed, but does not provide any
redundancy if some servers fail or lose shares.
(Mutable files use a different share placement algorithm that does not
currently consider this parameter.)
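Taken together, these encoding parameters live in the ``[client]`` section of ``tahoe.cfg``; for example, the defaults are equivalent to::

    [client]
    shares.needed = 3
    shares.happy = 7
    shares.total = 10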
+``mutable.format = sdmf or mdmf``
+
+ This value tells Tahoe-LAFS what the default mutable file format should
+ be. If ``mutable.format=sdmf``, then newly created mutable files will be
+ in the old SDMF format. This is desirable for clients that operate on
+ grids where some peers run older versions of Tahoe-LAFS, as these older
+ versions cannot read the new MDMF mutable file format. If
+ ``mutable.format`` is ``mdmf``, then newly created mutable files will use
+ the new MDMF format, which supports efficient in-place modification and
+ streaming downloads. You can override this value using a special
+ mutable-type parameter in the webapi. If you do not specify a value here,
+ Tahoe-LAFS will use SDMF for all newly-created mutable files.
+
+ Note that this parameter only applies to mutable files. Mutable
+ directories, which are stored as mutable files, are not controlled by
+ this parameter and will always use SDMF. We may revisit this decision
+ in future versions of Tahoe-LAFS.
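For example, to opt in to MDMF for newly-created mutable files, set the value in the ``[client]`` section of ``tahoe.cfg``::

    [client]
    mutable.format = mdmf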
Frontend Configuration
======================
"which peers am I connected to" list), and the shortened form (the first
few characters) is recorded in various log messages.
+``access.blacklist``
+
+ Gateway nodes may find it necessary to prohibit access to certain files. The
+ web-API has a facility to block access to filecaps by their storage index,
+ returning a 403 "Forbidden" error instead of the original file. For more
+ details, see the "Access Blacklist" section of `<frontends/webapi.rst>`_.
+
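As a rough sketch (the authoritative format is described in the "Access Blacklist" section of `<frontends/webapi.rst>`_, and the storage index below is made up for illustration), an ``access.blacklist`` file pairs each blocked storage index with a reason::

    # each non-comment line: <storage index> <reason>
    whlj3s56o4y3rzbkdzsbxgcbu4  copyright-takedown-2011-09-01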
Example
=======