From: Zooko O'Whielacronx Date: Mon, 1 Feb 2010 18:18:09 +0000 (-0800) Subject: docs: updates to relnotes.txt, NEWS, architecture, historical_known_issues, install... X-Git-Tag: allmydata-tahoe-1.6.0~10 X-Git-Url: https://git.rkrishnan.org/pf/content/en/service/FOOURL?a=commitdiff_plain;h=57e3af144744f61eb04b968a0b182afef43d4e4c;p=tahoe-lafs%2Ftahoe-lafs.git docs: updates to relnotes.txt, NEWS, architecture, historical_known_issues, install.html, etc. --- diff --git a/NEWS b/NEWS index 9d76ce48..07b83738 100644 --- a/NEWS +++ b/NEWS @@ -1,14 +1,14 @@ User visible changes in Tahoe-LAFS. -*- outline -*- -* Release ?.?.? (?) +* Release 1.6.0 (2010-02-01) ** New Features *** Immutable Directories -Tahoe can now create and handle immutable directories. These are read just -like normal directories, but are "deep-immutable", meaning that all their -children (and everything reachable from those children) must be immutable +Tahoe-LAFS can now create and handle immutable directories (#XXX). These are +read just like normal directories, but are "deep-immutable", meaning that all +their children (and everything reachable from those children) must be immutable objects (i.e. immutable/literal files, and other immutable directories). These directories must be created in a single webapi call, which provides all @@ -18,10 +18,10 @@ they cannot be changed after creation). They have URIs that start with interface (aka the "WUI") with a "DIR-IMM" abbreviation (as opposed to "DIR" for the usual read-write directories and "DIR-RO" for read-only directories). -Tahoe releases before 1.6.0 cannot read the contents of an immutable +Tahoe-LAFS releases before 1.6.0 cannot read the contents of an immutable directory. 1.5.0 will tolerate their presence in a directory listing (and -display it as an "unknown node"). 1.4.1 and earlier cannot tolerate them: a -DIR-IMM child in any directory will prevent the listing of that directory. +display it as "unknown"). 
1.4.1 and earlier cannot tolerate them: a DIR-IMM +child in any directory will prevent the listing of that directory. Immutable directories are repairable, just like normal immutable files. @@ -31,20 +31,20 @@ directories. See docs/frontends/webapi.txt for details. *** "tahoe backup" now creates immutable directories, backupdb has dircache The "tahoe backup" command has been enhanced to create immutable directories -(in previous releases, it created read-only mutable directories). This is -significantly faster, since it does not need to create an RSA keypair for -each new directory. Also "DIR-IMM" immutable directories are repairable, -unlike "DIR-RO" read-only mutable directories (at least in this release: a -future Tahoe release should be able to repair DIR-RO). +(in previous releases, it created read-only mutable directories) (#XXX). This +is significantly faster, since it does not need to create an RSA keypair for +each new directory. Also "DIR-IMM" immutable directories are repairable, unlike +"DIR-RO" read-only mutable directories (at least in this release: a future +Tahoe-LAFS release should be able to repair DIR-RO). In addition, the backupdb (used by "tahoe backup" to remember what it has -already copied) has been enhanced to store information about existing -immutable directories. This allows it to re-use directories that have moved -but still contain identical contents, or which have been deleted and later -replaced. (the 1.5.0 "tahoe backup" command could only re-use directories -that were in the same place as they were in the immediately previous backup). -With this change, the backup process no longer needs to read the previous -snapshot out of the Tahoe grid, reducing the network load considerably. +already copied) has been enhanced to store information about existing immutable +directories. This allows it to re-use directories that have moved but still +contain identical contents, or which have been deleted and later replaced. 
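The content-addressed dircache described above can be sketched as follows. The fingerprinting scheme and capability strings here are invented for illustration (the real backupdb is an SQLite database with its own schema); the point is that a directory is keyed by its contents rather than its location, so an identical directory is re-used wherever it appears:

```python
import hashlib

def dir_fingerprint(children):
    """Fingerprint a directory by its contents, not its location.

    `children` maps child name -> capability string. Two directories
    with identical children hash to the same key, so a cached
    immutable-directory cap can be re-used even if the directory has
    moved, or was deleted and later recreated.
    """
    h = hashlib.sha256()
    for name, cap in sorted(children.items()):
        # netstring-style length prefixes keep fields unambiguous
        h.update(("%d:%s,%d:%s;" % (len(name), name, len(cap), cap)).encode())
    return h.hexdigest()

dircache = {}  # fingerprint -> previously created immutable-directory cap

children = {"photo.jpg": "URI:CHK:aaaa", "notes.txt": "URI:CHK:bbbb"}
fp = dir_fingerprint(children)
if fp not in dircache:
    dircache[fp] = "URI:DIR2-CHK:made-up-cap"  # hypothetical cap value

# The same contents seen elsewhere in a later backup hit the cache:
reused = dir_fingerprint(dict(children)) in dircache
print(reused)  # True
```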
(the +1.5.0 "tahoe backup" command could only re-use directories that were in the +same place as they were in the immediately previous backup). With this change, +the backup process no longer needs to read the previous snapshot out of the +Tahoe-LAFS grid, reducing the network load considerably. A "null backup" (in which nothing has changed since the previous backup) will require only two Tahoe-side operations: one to add an Archives/$TIMESTAMP @@ -59,7 +59,7 @@ had to be uploaded too): it will require time proportional to the number and size of your directories. After this initial pass, all subsequent passes should take a tiny fraction of the time. -As noted above, Tahoe versions earlier than 1.5.0 cannot read immutable +As noted above, Tahoe-LAFS versions earlier than 1.5.0 cannot read immutable directories. The "tahoe backup" command has been improved to skip over unreadable objects @@ -67,36 +67,58 @@ The "tahoe backup" command has been improved to skip over unreadable objects command from reading their contents), instead of throwing an exception and terminating the backup process. It also skips over symlinks, because these cannot be represented faithfully in the Tahoe-side filesystem. A warning -message will be emitted each time something is skipped. (#729, #850, #641) +message will be emitted each time something is skipped. (#729, #850, #641) XXX *** "create-node" command added, "create-client" now implies --no-storage -The basic idea behind Tahoe's client+server and client-only processes is that -you are creating a general-purpose Tahoe "node" process, which has several -components activated (or not). Storage service is one of these optional -components, as is the Helper, FTP server, and SFTP server. (Client/webapi +The basic idea behind Tahoe-LAFS's client+server and client-only processes is +that you are creating a general-purpose Tahoe-LAFS "node" process, which has +several components that can be activated. 
Storage service is one of these +optional components, as is the Helper, FTP server, and SFTP server. Web gateway functionality is nominally on this list, but it is always active: a future -release will make it optional). The special-purpose servers remain separate -(introducer, key-generator, stats-gatherer). +release will make it optional. There are three special purpose servers that +can't currently be run as a component in a node: introducer, key-generator, +stats-gatherer. -So now "tahoe create-node" will create a Tahoe node process, and after +So now "tahoe create-node" will create a Tahoe-LAFS node process, and after creation you can edit its tahoe.cfg to enable or disable the desired services. It is a more general-purpose replacement for "tahoe create-client". The default configuration has storage service enabled. For convenience, the -"--no-storage" argument makes a tahoe.cfg file that disables storage service. +"--no-storage" argument makes a tahoe.cfg file that disables storage +service. (#XXX) -"tahoe create-client" has been changed to create a Tahoe node without a +"tahoe create-client" has been changed to create a Tahoe-LAFS node without a storage service. It is equivalent to "tahoe create-node --no-storage". This -helps to reduce the confusion surrounding the use of a command with "client" -in its name to create a storage *server*. Use "tahoe create-client" to create -a purely client-side node. If you want to offer storage to the grid, use -"tahoe create-node" instead. +helps to reduce the confusion surrounding the use of a command with "client" in +its name to create a storage *server*. Use "tahoe create-client" to create a +purely client-side node. If you want to offer storage to the grid, use "tahoe +create-node" instead. In the future, other services will be added to the node, and they will be -controlled through options in tahoe.cfg . 
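For illustration, a tahoe.cfg fragment of the kind "tahoe create-node --no-storage" is described as producing might look like this. The "[helper]" section name appears in docs/helper.txt; the "[storage]" section name is an assumption here, not confirmed by this document:

```ini
# Hypothetical tahoe.cfg fragment: a purely client-side node,
# equivalent in spirit to "tahoe create-client".
[storage]
enabled = false

[helper]
enabled = false
```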
The most important of these -services may get additional --enable-XYZ or --disable-XYZ arguments to "tahoe +controlled through options in tahoe.cfg . The most important of these services +may get additional --enable-XYZ or --disable-XYZ arguments to "tahoe create-node". +** Performance Improvements + +Download of immutable files begins as soon as the downloader has located the K +necessary shares (#XXX). In both the previous and current releases, a +downloader will first issue queries to all storage servers on the grid to +locate shares before it begins downloading the shares. In previous releases of +Tahoe-LAFS, download would not begin until all storage servers on the grid had +replied to the query, at which point K shares would be chosen for download from +among the shares that were located. In this release, download begins as soon as +any K shares are located. This means that downloads start sooner, which is +particularly important if there is a server on the grid that is extremely slow +or even hung in such a way that it will never respond. In previous releases +such a server would have a negative impact on all downloads from that grid. In +this release, such a server will have no impact on downloads (as long as K +shares can be found on other, quicker, servers.) This also means that +downloads now use the "best-alacrity" servers that they talk to, as measured by +how quickly the servers reply to the initial query. This might cause downloads +to go faster, especially on grids with heterogeneous servers or geographical +dispersion. + ** Minor Changes The webapi acquired a new "t=mkdir-with-children" command, to create and @@ -127,10 +149,9 @@ target filename, such as when you copy from a bare filecap. (#761) halting traversal. (#874, #786) Many small packaging improvements were made to facilitate the "tahoe-lafs" -package being added to Ubuntu's "Karmic Koala" 9.10 release. 
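The "download begins as soon as any K shares are located" behavior described under Performance Improvements above can be sketched with a toy simulation. Server names and latencies are invented, and threads stand in for the real downloader's network queries:

```python
import concurrent.futures
import time

K = 3  # shares needed (the erasure-coding parameter k)

def query_server(name, latency):
    """Simulate one "do you have shares?" query round-trip."""
    time.sleep(latency)
    return name

# One server is very slow; previously, downloads waited for it too.
servers = {"s1": 0.01, "s2": 0.02, "s3": 0.03, "s4": 0.04, "slow": 0.5}

pool = concurrent.futures.ThreadPoolExecutor(max_workers=len(servers))
futures = [pool.submit(query_server, n, lat) for n, lat in servers.items()]

located = []
for fut in concurrent.futures.as_completed(futures):
    located.append(fut.result())
    if len(located) >= K:
        break              # start downloading now; don't wait for "slow"
pool.shutdown(wait=False)

print(located)  # the K quickest responders, e.g. ['s1', 's2', 's3']
```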
Several -mac/win32 binary libraries were removed, some figleaf code-coverage files -were removed, a bundled copy of darcsver-1.2.1 was removed, and additional -licensing text was added. +package being included in Ubuntu. Several mac/win32 binary libraries were +removed, some figleaf code-coverage files were removed, a bundled copy of +darcsver-1.2.1 was removed, and additional licensing text was added. Several DeprecationWarnings for python2.6 were silenced. (#859) diff --git a/docs/architecture.txt b/docs/architecture.txt index cf8f4cb7..c62f1c59 100644 --- a/docs/architecture.txt +++ b/docs/architecture.txt @@ -5,123 +5,104 @@ OVERVIEW -At a high-level this system consists of three layers: the key-value store, -the filesystem, and the application. +There are three layers: the key-value store, the filesystem, and the +application. -The lowest layer is the key-value store, which is a distributed hashtable -mapping from capabilities to data. The capabilities are relatively short -ASCII strings, each used as a reference to an arbitrary-length sequence of -data bytes, and are like a URI for that data. This data is encrypted and -distributed across a number of nodes, such that it will survive the loss of -most of the nodes. +The lowest layer is the key-value store. The keys are "capabilities" -- short +ascii strings -- and the values are sequences of data bytes. This data is +encrypted and distributed across a number of nodes, such that it will survive +the loss of most of the nodes. There are no hard limits on the size of the +values, but there may be performance issues with extremely large values (just +due to the limitation of network bandwidth). In practice, values as small as a +few bytes and as large as tens of gigabytes are in common use. The middle layer is the decentralized filesystem: a directed graph in which the intermediate nodes are directories and the leaf nodes are files. 
The leaf -nodes contain only the file data -- they contain no metadata about the file -other than the length in bytes. The edges leading to leaf nodes have metadata -attached to them about the file they point to. Therefore, the same file may -be associated with different metadata if it is dereferenced through different -edges. +nodes contain only the data -- they contain no metadata other than the length +in bytes. The edges leading to leaf nodes have metadata attached to them +about the file they point to. Therefore, the same file may be associated with +different metadata if it is referred to through different edges. The top layer consists of the applications using the filesystem. Allmydata.com uses it for a backup service: the application periodically copies files from the local disk onto the decentralized filesystem. We later -provide read-only access to those files, allowing users to recover them. The -filesystem can be used by other applications, too. - - -THE GRID OF STORAGE SERVERS - -A key-value store is implemented by a collection of peer nodes -- processes -running on computers -- called a "grid". (The term "grid" is also used -loosely for the filesystem supported by these nodes.) The nodes in a grid -establish TCP connections to each other using Foolscap, a secure -remote-message-passing library. - -Each node offers certain services to the others. The primary service is that -of the storage server, which holds data in the form of "shares". Shares are -encoded pieces of files. There are a configurable number of shares for each -file, 10 by default. Normally, each share is stored on a separate server, but -a single server can hold multiple shares for a single file. - -Nodes learn about each other through an "introducer". Each node connects to a -central introducer at startup, and receives a list of all other nodes from -it. Each node then connects to all other nodes, creating a fully-connected -topology. 
In the current release, nodes behind NAT boxes will connect to all -nodes that they can open connections to, but they cannot open connections to -other nodes behind NAT boxes. Therefore, the more nodes behind NAT boxes, the -less the topology resembles the intended fully-connected topology. - -The introducer in nominally a single point of failure, in that clients who -never see the introducer will be unable to connect to any storage servers. -But once a client has been introduced to everybody, they do not need the -introducer again until they are restarted. The danger of a SPOF is further -reduced in other ways. First, the introducer is defined by a hostname and a +provide read-only access to those files, allowing users to recover them. +There are several other applications built on top of the Tahoe-LAFS filesystem +(see the RelatedProjects page of the wiki for a list). + + +THE KEY-VALUE STORE + +The key-value store is implemented by a grid of Tahoe-LAFS storage servers -- +user-space processes. Tahoe-LAFS storage clients communicate with the storage +servers over TCP. + +Storage servers hold data in the form of "shares". Shares are encoded pieces +of files. There are a configurable number of shares for each file, 10 by +default. Normally, each share is stored on a separate server, but in some +cases a single server can hold multiple shares of a file. + +Nodes learn about each other through an "introducer". Each server connects to +the introducer at startup and announces its presence. Each client connects to +the introducer at startup, and receives a list of all servers from it. Each +client then connects to every server, creating a "bi-clique" topology. In the +current release, nodes behind NAT boxes will connect to all nodes that they +can open connections to, but they cannot open connections to other nodes +behind NAT boxes. Therefore, the more nodes behind NAT boxes, the less the +topology resembles the intended bi-clique topology. 
+ +The introducer is a Single Point of Failure ("SPoF"), in that clients who +never connect to the introducer will be unable to connect to any storage +servers, but once a client has been introduced to everybody, it does not need +the introducer again until it is restarted. The danger of a SPoF is further +reduced in two ways. First, the introducer is defined by a hostname and a private key, which are easy to move to a new host in case the original one suffers an unrecoverable hardware problem. Second, even if the private key is -lost, clients can be reconfigured with a new introducer.furl that points to a -new one. Finally, we have plans to decentralize introduction, allowing any -node to tell a new client about all the others. With decentralized -"gossip-based" introduction, simply knowing how to contact any one node will -be enough to contact all of them. +lost, clients can be reconfigured to use a new introducer. + +For future releases, we have plans to decentralize introduction, allowing any +server to tell a new client about all the others. FILE ENCODING -When a node stores a file on its grid, it first encrypts the file, using a key -that is optionally derived from the hash of the file itself. It then segments -the encrypted file into small pieces, in order to reduce the memory footprint, -and to decrease the lag between initiating a download and receiving the first -part of the file; for example the lag between hitting "play" and a movie -actually starting. - -The node then erasure-codes each segment, producing blocks such that only a -subset of them are needed to reconstruct the segment. It sends one block from -each segment to a given server. The set of blocks on a given server -constitutes a "share". Only a subset of the shares (3 out of 10, by default) -are needed to reconstruct the file. 
- -A tagged hash of the encryption key is used to form the "storage index", which -is used for both server selection (described below) and to index shares within -the Storage Servers on the selected nodes. - -Hashes are computed while the shares are being produced, to validate the -ciphertext and the shares themselves. Merkle hash trees are used to enable -validation of individual segments of ciphertext without requiring the -download/decoding of the whole file. These hashes go into the "Capability -Extension Block", which will be stored with each share. +When a client stores a file on the grid, it first encrypts the file. It then +breaks the encrypted file into small segments, in order to reduce the memory +footprint, and to decrease the lag between initiating a download and receiving +the first part of the file; for example the lag between hitting "play" and a +movie actually starting. -The capability contains the encryption key, the hash of the Capability -Extension Block, and any encoding parameters necessary to perform the eventual -decoding process. For convenience, it also contains the size of the file -being stored. +The client then erasure-codes each segment, producing blocks of which only a +subset are needed to reconstruct the segment (3 out of 10, with the default +settings). It sends one block from each segment to a given server. The set of +blocks on a given server constitutes a "share". Therefore only a subset of the +shares (3 out of 10, by default) are needed to reconstruct the file. -On the download side, the node that wishes to turn a capability into a -sequence of bytes will obtain the necessary shares from remote nodes, break -them into blocks, use erasure-decoding to turn them into segments of -ciphertext, use the decryption key to convert that into plaintext, then emit -the plaintext bytes to the output target (which could be a file on disk, or it -could be streamed directly to a web browser or media player).
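A toy stand-in for the erasure-coding step described above: Tahoe-LAFS actually uses zfec for general k-of-N coding (3-of-10 by default), but a 2-of-3 XOR parity scheme is enough to show the idea that any k blocks suffice to reconstruct a segment:

```python
def encode_2_of_3(data):
    """Toy erasure code: split data into halves A and B, add parity A^B.
    Any 2 of the 3 blocks reconstruct the original.  (Illustrative only;
    Tahoe-LAFS really uses zfec, which handles arbitrary k-of-N.)"""
    if len(data) % 2:
        data += b"\x00"            # pad to an even length
    half = len(data) // 2
    a, b = data[:half], data[half:]
    parity = bytes(x ^ y for x, y in zip(a, b))
    return [a, b, parity]

def decode_2_of_3(blocks):
    """Reconstruct from any two of blocks {0: A, 1: B, 2: parity}."""
    if 0 in blocks and 1 in blocks:
        a, b = blocks[0], blocks[1]
    elif 0 in blocks:              # lost B: recover it as A ^ parity
        a = blocks[0]
        b = bytes(x ^ y for x, y in zip(a, blocks[2]))
    else:                          # lost A: recover it as B ^ parity
        b = blocks[1]
        a = bytes(x ^ y for x, y in zip(b, blocks[2]))
    return a + b

shares = encode_2_of_3(b"segment!")
# Any two blocks -- here block 0 and the parity -- recover the segment:
print(decode_2_of_3({0: shares[0], 2: shares[2]}))  # b'segment!'
```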
+A hash of the encryption key is used to form the "storage index", which is used +for both server selection (described below) and to index shares within the +Storage Servers on the selected nodes. -All hashes use SHA-256, and a different tag is used for each purpose. -Netstrings are used where necessary to insure these tags cannot be confused -with the data to be hashed. All encryption uses AES in CTR mode. The erasure -coding is performed with zfec. +The client computes secure hashes of the ciphertext and of the shares. It uses +Merkle Trees so that it is possible to verify the correctness of a subset of +the data without requiring all of the data. For example, this allows you to +verify the correctness of the first segment of a movie file and then begin +playing the movie file in your movie viewer before the entire movie file has +been downloaded. -A Merkle Hash Tree is used to validate the encoded blocks before they are fed -into the decode process, and a transverse tree is used to validate the shares -as they are retrieved. A third merkle tree is constructed over the plaintext -segments, and a fourth is constructed over the ciphertext segments. All -necessary hashes are stored with the shares, and the hash tree roots are put -in the Capability Extension Block. The final hash of the extension block goes -into the capability itself. +These hashes are stored in a small datastructure named the Capability +Extension Block which is stored on the storage servers alongside each share. -Note that the number of shares created is fixed at the time the file is -uploaded: it is not possible to create additional shares later. The use of a -top-level hash tree also requires that nodes create all shares at once, even -if they don't intend to upload some of them, otherwise the hashroot cannot be -calculated correctly. 
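The storage-index and Merkle-tree machinery above can be sketched as follows. The tag strings, key-derivation details, and tree layout here are illustrative assumptions, not Tahoe-LAFS's actual constructions; the point is that one small root hash lets a client verify any single segment without downloading the rest:

```python
import hashlib

def tagged_hash(tag, data):
    """Domain-separated SHA-256; a netstring-style length prefix keeps
    different uses of the hash from colliding (tag values made up)."""
    return hashlib.sha256(b"%d:%s," % (len(tag), tag) + data).digest()

# Storage index: derived from the encryption key, so it reveals nothing
# about the plaintext, yet anyone holding the same key derives the same
# index and therefore finds the same shares.
encryption_key = b"\x01" * 16
storage_index = tagged_hash(b"storage-index", encryption_key)[:16]

def merkle_root(leaves):
    """Root of a binary Merkle tree over already-hashed leaves."""
    level = list(leaves)
    while len(level) > 1:
        if len(level) % 2:
            level.append(level[-1])        # duplicate an odd tail
        level = [tagged_hash(b"node", level[i] + level[i + 1])
                 for i in range(0, len(level), 2)]
    return level[0]

segments = [b"seg0", b"seg1", b"seg2", b"seg3"]
leaves = [tagged_hash(b"leaf", s) for s in segments]
root = merkle_root(leaves)   # the kind of value kept in the CEB

# Verify segment 0 using only its sibling hash and the far subtree hash:
sibling, far = leaves[1], merkle_root(leaves[2:])
recomputed = tagged_hash(
    b"node", tagged_hash(b"node", tagged_hash(b"leaf", b"seg0") + sibling) + far)
assert recomputed == root    # segment 0 checks out on its own
```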
+The capability contains the encryption key, the hash of the Capability +Extension Block, and any encoding parameters necessary to perform the eventual +decoding process. For convenience, it also contains the size of the file +being stored. + +To download, the client that wishes to turn a capability into a sequence of +bytes will obtain the blocks from storage servers, use erasure-decoding to +turn them into segments of ciphertext, use the decryption key to convert that +into plaintext, then emit the plaintext bytes to the output target. CAPABILITIES @@ -148,11 +129,12 @@ The capability provides both "location" and "identification": you can use it to retrieve a set of bytes, and then you can use it to validate ("identify") that these potential bytes are indeed the ones that you were looking for. -The "key-value store" layer is insufficient to provide a usable filesystem, -which requires human-meaningful names. Capabilities sit on the -"global+secure" edge of Zooko's Triangle[1]. They are self-authenticating, -meaning that nobody can trick you into using a file that doesn't match the -capability you used to refer to that file. +The "key-value store" layer doesn't include human-meaningful +names. Capabilities sit on the "global+secure" edge of Zooko's +Triangle[1]. They are self-authenticating, meaning that nobody can trick you +into accepting a file that doesn't match the capability you used to refer to +that file. The filesystem layer (described below) adds human-meaningful names +atop the key-value layer. SERVER SELECTION @@ -204,13 +186,15 @@ get back any 3 to recover the file. This results in a 3.3x expansion factor. In general, you should set N about equal to the number of nodes in your grid, then set N/k to achieve your desired availability goals. -When downloading a file, the current release just asks all known nodes for any -shares they might have, chooses the minimal necessary subset, then starts -downloading and processing those shares. 
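The 3.3x figure above is simply N/k: each share is 1/k of the file's size and N shares are stored in total. A one-line sketch of the arithmetic:

```python
def expansion_factor(k, n):
    """Storage expansion for k-of-n erasure coding: each share holds
    1/k of the file, and n shares are stored in total."""
    return n / k

print(round(expansion_factor(3, 10), 1))  # 3.3, the default 3-of-10 case
```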
A later release will use the full -algorithm to reduce the number of queries that must be sent out. This -algorithm uses the same consistent-hashing permutation as on upload, but stops -after it has located k shares (instead of all N). This reduces the number of -queries that must be sent before downloading can begin. +When downloading a file, the current version just asks all known servers for +any shares they might have, chooses the minimal necessary subset, then starts +downloading and processing those shares. A future release will use the +server selection algorithm to reduce the number of queries that must be sent +out. This algorithm uses the same consistent-hashing permutation as on upload, +but stops after it has located k shares (instead of all N). This reduces the +number of queries that must be sent before downloading can begin. The actual number of queries is directly related to the availability of the nodes and the degree of overlap between the node list used at upload and at diff --git a/docs/helper.txt b/docs/helper.txt index b78f7742..9f852277 100644 --- a/docs/helper.txt +++ b/docs/helper.txt @@ -67,11 +67,8 @@ What sorts of machines are good candidates for running a helper? To turn a Tahoe-LAFS node into a helper (i.e. to run a helper service in addition to whatever else that node is doing), edit the tahoe.cfg file in your -node's base directory and set "enabled = true" in the section named -"[helper]". Then restart the node: - - echo "yes" >$BASEDIR/run_helper - tahoe restart $BASEDIR +node's base directory and set "enabled = true" in the section named +"[helper]". Then restart the node. This will signal the node to create a Helper service and listen for incoming requests.
Once the node has started, there will be a diff --git a/docs/historical/historical_known_issues.txt b/docs/historical/historical_known_issues.txt index cd76ef8c..a97f6b93 100644 --- a/docs/historical/historical_known_issues.txt +++ b/docs/historical/historical_known_issues.txt @@ -5,15 +5,13 @@ manage them. The current version of this file can be found at http://allmydata.org/source/tahoe/trunk/docs/historical/historical_known_issues.txt -Newer versions of this document describing issues in newer releases of -Tahoe-LAFS can be found at: +Issues in newer releases of Tahoe-LAFS can be found at: http://allmydata.org/source/tahoe/trunk/docs/known_issues.txt == issues in Tahoe v1.1.0, released 2008-06-11 == -(Tahoe v1.1.0 was superceded by v1.2.0 which was released 2008-07-21, -and then by v1.3.0 which was released 2009-02-13.) +(Tahoe v1.1.0 was superseded by v1.2.0 which was released 2008-07-21.) === more than one file can match an immutable file cap === diff --git a/docs/install.html b/docs/install.html index 072f13aa..604f1389 100644 --- a/docs/install.html +++ b/docs/install.html @@ -1,34 +1,34 @@ - Installing Tahoe + Installing Tahoe-LAFS

-About Tahoe
+About Tahoe-LAFS

-Welcome to the Tahoe project, a secure, decentralized, fault-tolerant filesystem. About Tahoe.
+Welcome to the Tahoe-LAFS project, a secure, decentralized, fault-tolerant filesystem. About Tahoe-LAFS.

-How To Install Tahoe
+How To Install Tahoe-LAFS

-This procedure has been verified to work on Windows, Cygwin, Mac, Linux, Solaris, FreeBSD, OpenBSD, and NetBSD. It's likely to work on other platforms. If you have trouble with this install process, please write to the tahoe-dev mailing list, where friendly hackers will help you out.
+This procedure has been verified to work on Windows, Cygwin, Mac, many flavors of Linux, Solaris, FreeBSD, OpenBSD, and NetBSD. It's likely to work on other platforms. If you have trouble with this install process, please write to the tahoe-dev mailing list, where friendly hackers will help you out.

 Install Python

-Check if you already have an adequate version of Python installed by running python -V. Python v2.4 (v2.4.2 or greater), Python v2.5 or Python v2.6 will work. Python v3 does not work. If you don't have one of these versions of Python installed, then follow the instructions on the Python download page to download and install Python v2.5.
+Check if you already have an adequate version of Python installed by running python -V. Python v2.4 (v2.4.2 or greater), Python v2.5 or Python v2.6 will work. Python v3 does not work. If you don't have one of these versions of Python installed, then follow the instructions on the Python download page to download and install Python v2.6.

 (If installing on Windows, you now need to manually install the pywin32 package -- see "More Details" below.)

-Get Tahoe
+Get Tahoe-LAFS

-Download the 1.5.0 release zip file:
+Download the 1.6.0 release zip file:

-http://allmydata.org/source/tahoe/releases/allmydata-tahoe-1.5.0.zip
+http://allmydata.org/source/tahoe/releases/allmydata-tahoe-1.6.0.zip

-Build Tahoe
+Build Tahoe-LAFS

 Unpack the zip file and cd into the top-level directory.
@@ -38,14 +38,14 @@
 Run bin/tahoe --version to verify that the executable tool prints out the right version number.

-Run
+Run Tahoe-LAFS

-Now you have the Tahoe source code installed and are ready to use it to form a decentralized filesystem. The tahoe executable in the bin directory can configure and launch your Tahoe nodes. See running.html for instructions on how to do that.
+Now you have the Tahoe-LAFS source code installed and are ready to use it to form a decentralized filesystem. The tahoe executable in the bin directory can configure and launch your Tahoe-LAFS nodes. See running.html for instructions on how to do that.

 More Details

-For more details, including platform-specific hints for Debian, Windows, and Mac systems, please see the InstallDetails wiki page. If you are running on Windows, you need to manually install "pywin32", as described on that page. Debian/Ubuntu users: use the instructions written above! Do not try to install Tahoe-LAFS using apt-get.
+For more details, including platform-specific hints for Debian, Windows, and Mac systems, please see the InstallDetails wiki page. If you are running on Windows, you need to manually install "pywin32", as described on that page.

diff --git a/docs/logging.txt b/docs/logging.txt index 0cff1a78..669d9e71 100644 --- a/docs/logging.txt +++ b/docs/logging.txt @@ -13,12 +13,6 @@ The foolscap distribution includes a utility named "flogtool" (usually at /usr/bin/flogtool) which is used to get access to many foolscap logging features. -Note that there are currently (in foolscap-0.3.2) a couple of problems in using -flogtool on Windows: - -http://foolscap.lothar.com/trac/ticket/108 # set base to "." if not running from source -http://foolscap.lothar.com/trac/ticket/109 # make a "flogtool" executable that works on Windows - == Realtime Logging == When you are working on Tahoe code, and want to see what the node is doing, diff --git a/docs/running.html b/docs/running.html index 4790919b..d165e6c9 100644 --- a/docs/running.html +++ b/docs/running.html @@ -1,15 +1,15 @@ - Running Tahoe + Running Tahoe-LAFS

-How To Start Tahoe
+How To Start Tahoe-LAFS

 This is how to run a Tahoe client or a complete Tahoe grid. First you have to install the Tahoe software, as documented in