If a verify=true argument is provided, the node will perform a more
intensive check, downloading and verifying every single bit of every share.
+ If an output=JSON argument is provided, the response will be
+ machine-readable JSON instead of human-oriented HTML. The data is a
+ dictionary with the following keys:
+
+ storage-index: a base32-encoded string with the object's storage index,
+ or an empty string for LIT files
+ results: a dictionary that describes the state of the file. For LIT files,
+ this dictionary has only the 'healthy' key, which will always be
+ True. For distributed files, this dictionary has the following
+ keys:
+ count-shares-good: the number of good shares that were found
+ count-shares-needed: 'k', the number of shares required for recovery
+ count-shares-expected: 'N', the number of total shares generated
+ count-good-share-hosts: the number of distinct storage servers with
+ good shares. If this number is less than
+ count-shares-good, then some shares are doubled
+ up, increasing the correlation of failures. This
+ indicates that one or more shares should be
+ moved to an otherwise unused server, if one is
+ available.
+ count-wrong-shares: for mutable files, the number of shares for
+ versions other than the 'best' one (highest
+ sequence number, highest roothash). These are
+ either leftover shares from an older version (perhaps
+ on a server that was offline when an update occurred),
+ or shares from a newer but unrecoverable version. For a
+ healthy file, this will be 0.
+ count-recoverable-versions: for mutable files, the number of
+ recoverable versions of the file. For
+ a healthy file, this will equal 1.
+ count-unrecoverable-versions: for mutable files, the number of
+ unrecoverable versions of the file.
+ For a healthy file, this will be 0.
+ count-corrupt-shares: the number of shares with integrity failures
+ list-corrupt-shares: a list of "share locators", one for each share
+ that was found to be corrupt. Each share locator
+ is a list of (serverid, storage_index, sharenum).
+ needs-rebalancing: (bool) True if there are multiple shares on a single
+ storage server, indicating a reduction in reliability
+ that could be resolved by moving shares to new
+ servers.
+ servers-responding: list of base32-encoded storage server identifiers,
+ one for each server which responded to the share
+ query.
+ healthy: (bool) True if the file is completely healthy, False otherwise.
+ Healthy files have at least N good shares. Overlapping shares
+ (indicated by count-good-share-hosts < count-shares-good) do not
+ currently cause a file to be marked unhealthy. If there are at
+ least N good shares, then corrupt shares do not cause the file to
+ be marked unhealthy, although the corrupt shares will be listed
+ in the results (list-corrupt-shares) and should be manually
+ removed to avoid wasting time in subsequent downloads (as the
+ downloader rediscovers the corruption and uses alternate shares).
+ sharemap: dict mapping share identifier to list of serverids
+ (base32-encoded strings). This indicates which servers are
+ holding which shares. For immutable files, the shareid is
+ an integer (the share number, from 0 to N-1). For
+ mutable files, it is a string of the form
+ 'seq%d-%s-sh%d', containing the sequence number, the
+ roothash, and the share number.
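+
+ For example, a client could fetch and inspect these results like this (a
+ minimal sketch: the node URL shown and the FILECAP variable holding a file
+ cap are hypothetical placeholders):
+
+   import urllib, simplejson
+   url = ("http://127.0.0.1:3456/uri/%s?t=check&output=JSON"
+          % urllib.quote(FILECAP))
+   data = simplejson.loads(urllib.urlopen(url, "").read())  # empty body POST
+   if not data["results"]["healthy"]:
+       print "file needs attention:", data["storage-index"]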
+
POST $URL?t=deep-check
This triggers a recursive walk of all files and directories reachable from
the given directory, performing a check on each one just like t=check.
This accepts the same verify=, when_done=, and return_to= arguments as
t=check.
- Be aware that this can take a long time: perhaps a second per object.
+ Be aware that this can take a long time: perhaps a second per object. No
+ progress information is currently provided: the server will be silent until
+ the full tree has been traversed, then will emit the complete response.
+
+ If an output=JSON argument is provided, the response will be
+ machine-readable JSON instead of human-oriented HTML. The data is a
+ dictionary with the following keys:
+
+ root-storage-index: a base32-encoded string with the storage index of the
+ starting point of the deep-check operation
+ count-objects-checked: count of how many objects were checked. Note that
+ non-distributed objects (i.e. small immutable LIT
+ files) are not checked, since for these objects,
+ the data is contained entirely in the URI.
+ count-objects-healthy: how many of those objects were completely healthy
+ count-objects-unhealthy: how many were damaged in some way
+ count-corrupt-shares: how many shares were found to have corruption,
+ summed over all objects examined
+ list-corrupt-shares: a list of "share identifiers", one for each share
+ that was found to be corrupt. Each share identifier
+ is a list of (serverid, storage_index, sharenum).
+ list-unhealthy-files: a list of (pathname, check-results) tuples, for
+ each file that was not fully healthy. 'pathname' is
+ a list of strings (which can be joined by "/"
+ characters to turn it into a single string),
+ relative to the directory on which deep-check was
+ invoked. The 'check-results' field is the same as
+ that returned by t=check&output=JSON, described
+ above.
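+
+ For example, the files that deep-check found to be unhealthy can be listed
+ like this (a minimal sketch: the node URL and the DIRCAP variable are
+ hypothetical placeholders):
+
+   import urllib, simplejson
+   url = ("http://127.0.0.1:3456/uri/%s?t=deep-check&output=JSON"
+          % urllib.quote(DIRCAP))
+   data = simplejson.loads(urllib.urlopen(url, "").read())  # empty body POST
+   for (pathname, check) in data["list-unhealthy-files"]:
+       print "/".join(pathname), check["results"]["count-shares-good"]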
+
+POST $URL?t=check&repair=true
+
+ This performs a health check of the given file or directory, and if the
+ checker determines that the object is not healthy (some shares are missing
+ or corrupted), it will perform a "repair". During repair, any missing
+ shares will be regenerated and uploaded to new servers.
+
+ This accepts the same when_done=URL, return_to=URL, and verify=true
+ arguments as t=check. When an output=JSON argument is provided, the
+ machine-readable JSON response will contain the following keys:
+
+ storage-index: a base32-encoded string with the object's storage index,
+ or an empty string for LIT files
+ repair-attempted: (bool) True if repair was attempted
+ repair-successful: (bool) True if repair was attempted and the file was
+ fully healthy afterwards. False if no repair was
+ attempted, or if a repair attempt failed.
+ pre-repair-results: a dictionary that describes the state of the file
+ before any repair was performed. This contains exactly
+ the same keys as the 'results' value of the t=check
+ response, described above.
+ post-repair-results: a dictionary that describes the state of the file
+ after any repair was performed. If no repair was
+ performed, post-repair-results and pre-repair-results
+ will be the same. This contains exactly the same keys
+ as the 'results' value of the t=check response,
+ described above.
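+
+ For example, the outcome of a repair attempt can be examined like this (a
+ minimal sketch: the node URL and FILECAP are hypothetical placeholders):
+
+   import urllib, simplejson
+   url = ("http://127.0.0.1:3456/uri/%s?t=check&repair=true&output=JSON"
+          % urllib.quote(FILECAP))
+   data = simplejson.loads(urllib.urlopen(url, "").read())  # empty body POST
+   if data["repair-attempted"] and not data["repair-successful"]:
+       print "repair failed for", data["storage-index"]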
+
+POST $URL?t=deep-check&repair=true
+
+ This triggers a recursive walk of all files and directories, performing a
+ t=check&repair=true on each one.
+
+ This accepts the same when_done=URL, return_to=URL, and verify=true
+ arguments as t=deep-check. When an output=JSON argument is provided, the
+ response will contain the following keys:
+
+ root-storage-index: a base32-encoded string with the storage index of the
+ starting point of the deep-check operation
+ count-objects-checked: count of how many objects were checked
+
+ count-objects-healthy-pre-repair: how many of those objects were completely
+ healthy, before any repair
+ count-objects-unhealthy-pre-repair: how many were damaged in some way
+ count-objects-healthy-post-repair: how many of those objects were completely
+ healthy, after any repair
+ count-objects-unhealthy-post-repair: how many were damaged in some way
+
+ count-repairs-attempted: repairs were attempted on this many objects.
+ count-repairs-successful: how many repairs resulted in healthy objects
+ count-repairs-unsuccessful: how many repairs did not result in
+ completely healthy objects
+ count-corrupt-shares-pre-repair: how many shares were found to have
+ corruption, summed over all objects
+ examined, before any repair
+ count-corrupt-shares-post-repair: how many shares were found to have
+ corruption, summed over all objects
+ examined, after any repair
+ list-corrupt-shares: a list of "share identifiers", one for each share
+ that was found to be corrupt (before any repair).
+ Each share identifier is a list of (serverid,
+ storage_index, sharenum).
+ list-remaining-corrupt-shares: like list-corrupt-shares, but mutable shares
+ that were successfully repaired are not
+ included. These are shares that need
+ manual processing. Since immutable shares
+ cannot be modified by clients, all corruption
+ in immutable shares will be listed here.
+ list-unhealthy-files: a list of (pathname, check-results) tuples, for
+ each file that was not fully healthy. 'pathname' is
+ relative to the directory on which deep-check was
+ invoked. The 'check-results' field is the same as
+ that returned by t=check&repair=true&output=JSON,
+ described above.
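+
+ For example, the number of objects that repair brought back to health can
+ be computed from the pre/post counters (a minimal sketch: the node URL and
+ DIRCAP are hypothetical placeholders):
+
+   import urllib, simplejson
+   url = ("http://127.0.0.1:3456/uri/%s?t=deep-check&repair=true&output=JSON"
+          % urllib.quote(DIRCAP))
+   data = simplejson.loads(urllib.urlopen(url, "").read())  # empty body POST
+   healed = (data["count-objects-unhealthy-pre-repair"]
+             - data["count-objects-unhealthy-post-repair"])
+   print "repair healed %d objects" % healed
+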
GET $DIRURL?t=manifest
def get_repair_attempted(self):
return self.repair_attempted
def get_repair_successful(self):
+ if not self.repair_attempted:
+ return False
return self.repair_successful
def get_pre_repair_results(self):
return self.pre_repair_results
sharemap = {}
for (shnum,nodeids) in self.sharemap.items():
hosts.update(nodeids)
- sharemap[shnum] = [idlib.nodeid_b2a(nodeid) for nodeid in nodeids]
+ sharemap[shnum] = nodeids
data["count-good-share-hosts"] = len(hosts)
- data["servers-responding"] = [base32.b2a(serverid)
- for serverid in self.responded]
+ data["servers-responding"] = list(self.responded)
data["sharemap"] = sharemap
r.set_data(data)
that was found to be corrupt. Each share
locator is a list of (serverid, storage_index,
sharenum).
- servers-responding: list of base32-encoded storage server identifiers,
+ servers-responding: list of (binary) storage server identifiers,
one for each server which responded to the share
query.
sharemap: dict mapping share identifier to list of serverids
- (base32-encoded strings). This indicates which servers are
- holding which shares. For immutable files, the shareid is
- an integer (the share number, from 0 to N-1). For
- immutable files, it is a string of the form
- 'seq%d-%s-sh%d', containing the sequence number, the
- roothash, and the share number.
+ (binary strings). This indicates which servers are holding
+ which shares. For immutable files, the shareid is an
+ integer (the share number, from 0 to N-1). For mutable
+ files, it is a string of the form 'seq%d-%s-sh%d',
+ containing the sequence number, the roothash, and the
+ share number.
The following keys are most relevant for mutable files, but immutable
files will provide sensible values too::
"""Return a boolean, True if a repair was attempted."""
def get_repair_successful():
"""Return a boolean, True if repair was attempted and the file/dir
- was fully healthy afterwards."""
+ was fully healthy afterwards. False if no repair was attempted or if
+ a repair attempt failed."""
def get_pre_repair_results():
"""Return an ICheckerResults instance that describes the state of the
file/dir before any repair was attempted."""
shareid = "%s-sh%d" % (smap.summarize_version(verinfo), shnum)
if shareid not in sharemap:
sharemap[shareid] = []
- sharemap[shareid].append(base32.b2a(peerid))
+ sharemap[shareid].append(peerid)
data["sharemap"] = sharemap
- data["servers-responding"] = [base32.b2a(serverid)
- for serverid in smap.reachable_peers]
+ data["servers-responding"] = list(smap.reachable_peers)
r.set_healthy(healthy)
r.set_needs_rebalancing(needs_rebalancing)
self.failUnlessEqual(d["list-corrupt-shares"], [], where)
if not incomplete:
self.failUnlessEqual(sorted(d["servers-responding"]),
- sorted([idlib.nodeid_b2a(c.nodeid)
- for c in self.clients]), where)
+ sorted([c.nodeid for c in self.clients]),
+ where)
self.failUnless("sharemap" in d, where)
+ all_serverids = set()
+ for (shareid, serverids) in d["sharemap"].items():
+ all_serverids.update(serverids)
+ self.failUnlessEqual(sorted(all_serverids),
+ sorted([c.nodeid for c in self.clients]),
+ where)
+
self.failUnlessEqual(d["count-wrong-shares"], 0, where)
self.failUnlessEqual(d["count-recoverable-versions"], 1, where)
self.failUnlessEqual(d["count-unrecoverable-versions"], 0, where)
d = self.set_up_nodes()
d.addCallback(self.set_up_tree)
d.addCallback(self.do_test_good)
+ d.addCallback(self.do_test_web)
return d
def do_test_good(self, ignored):
d.addCallback(self.deep_check_and_repair_is_healthy, 0, "small")
return d
+
+ def web_json(self, n, **kwargs):
+ kwargs["output"] = "json"
+ url = (self.webish_url + "uri/%s" % urllib.quote(n.get_uri())
+ + "?" + "&".join(["%s=%s" % (k,v) for (k,v) in kwargs.items()]))
+ d = getPage(url, method="POST")
+ def _decode(s):
+ try:
+ data = simplejson.loads(s)
+ except ValueError:
+ self.fail("%s (%s): not JSON: '%s'" % (where, url, s))
+ return data
+ d.addCallback(_decode)
+ return d
+
+ def json_check_is_healthy(self, data, n, where, incomplete=False):
+
+ self.failUnlessEqual(data["storage-index"],
+ base32.b2a(n.get_storage_index()), where)
+ r = data["results"]
+ self.failUnlessEqual(r["healthy"], True, where)
+ needs_rebalancing = bool( len(self.clients) < 10 )
+ if not incomplete:
+ self.failUnlessEqual(r["needs-rebalancing"], needs_rebalancing, where)
+ self.failUnlessEqual(r["count-shares-good"], 10, where)
+ self.failUnlessEqual(r["count-shares-needed"], 3, where)
+ self.failUnlessEqual(r["count-shares-expected"], 10, where)
+ if not incomplete:
+ self.failUnlessEqual(r["count-good-share-hosts"], len(self.clients), where)
+ self.failUnlessEqual(r["count-corrupt-shares"], 0, where)
+ self.failUnlessEqual(r["list-corrupt-shares"], [], where)
+ if not incomplete:
+ self.failUnlessEqual(sorted(r["servers-responding"]),
+ sorted([idlib.nodeid_b2a(c.nodeid)
+ for c in self.clients]), where)
+ self.failUnless("sharemap" in r, where)
+ all_serverids = set()
+ for (shareid, serverids_s) in r["sharemap"].items():
+ all_serverids.update(serverids_s)
+ self.failUnlessEqual(sorted(all_serverids),
+ sorted([idlib.nodeid_b2a(c.nodeid)
+ for c in self.clients]), where)
+ self.failUnlessEqual(r["count-wrong-shares"], 0, where)
+ self.failUnlessEqual(r["count-recoverable-versions"], 1, where)
+ self.failUnlessEqual(r["count-unrecoverable-versions"], 0, where)
+
+ def json_check_and_repair_is_healthy(self, data, n, where, incomplete=False):
+ self.failUnlessEqual(data["storage-index"],
+ base32.b2a(n.get_storage_index()), where)
+ self.failUnlessEqual(data["repair-attempted"], False, where)
+ self.json_check_is_healthy(data["pre-repair-results"],
+ n, where, incomplete)
+ self.json_check_is_healthy(data["post-repair-results"],
+ n, where, incomplete)
+
+ def json_check_lit(self, data, n, where):
+ self.failUnlessEqual(data["storage-index"], "", where)
+ self.failUnlessEqual(data["results"]["healthy"], True, where)
+
+ def do_test_web(self, ignored):
+ d = defer.succeed(None)
+
+ # check, no verify
+ d.addCallback(lambda ign: self.web_json(self.root, t="check"))
+ d.addCallback(self.json_check_is_healthy, self.root, "root")
+ d.addCallback(lambda ign: self.web_json(self.mutable, t="check"))
+ d.addCallback(self.json_check_is_healthy, self.mutable, "mutable")
+ d.addCallback(lambda ign: self.web_json(self.large, t="check"))
+ d.addCallback(self.json_check_is_healthy, self.large, "large")
+ d.addCallback(lambda ign: self.web_json(self.small, t="check"))
+ d.addCallback(self.json_check_lit, self.small, "small")
+
+ # check and verify
+ d.addCallback(lambda ign:
+ self.web_json(self.root, t="check", verify="true"))
+ d.addCallback(self.json_check_is_healthy, self.root, "root")
+ d.addCallback(lambda ign:
+ self.web_json(self.mutable, t="check", verify="true"))
+ d.addCallback(self.json_check_is_healthy, self.mutable, "mutable")
+ d.addCallback(lambda ign:
+ self.web_json(self.large, t="check", verify="true"))
+ d.addCallback(self.json_check_is_healthy, self.large, "large", incomplete=True)
+ d.addCallback(lambda ign:
+ self.web_json(self.small, t="check", verify="true"))
+ d.addCallback(self.json_check_lit, self.small, "small")
+
+ # check and repair, no verify
+ d.addCallback(lambda ign:
+ self.web_json(self.root, t="check", repair="true"))
+ d.addCallback(self.json_check_and_repair_is_healthy, self.root, "root")
+ d.addCallback(lambda ign:
+ self.web_json(self.mutable, t="check", repair="true"))
+ d.addCallback(self.json_check_and_repair_is_healthy, self.mutable, "mutable")
+ d.addCallback(lambda ign:
+ self.web_json(self.large, t="check", repair="true"))
+ d.addCallback(self.json_check_and_repair_is_healthy, self.large, "large")
+ d.addCallback(lambda ign:
+ self.web_json(self.small, t="check", repair="true"))
+ d.addCallback(self.json_check_lit, self.small, "small")
+
+ # check+verify+repair
+ d.addCallback(lambda ign:
+ self.web_json(self.root, t="check", repair="true", verify="true"))
+ d.addCallback(self.json_check_and_repair_is_healthy, self.root, "root")
+ d.addCallback(lambda ign:
+ self.web_json(self.mutable, t="check", repair="true", verify="true"))
+ d.addCallback(self.json_check_and_repair_is_healthy, self.mutable, "mutable")
+ d.addCallback(lambda ign:
+ self.web_json(self.large, t="check", repair="true", verify="true"))
+ d.addCallback(self.json_check_and_repair_is_healthy, self.large, "large", incomplete=True)
+ d.addCallback(lambda ign:
+ self.web_json(self.small, t="check", repair="true", verify="true"))
+ d.addCallback(self.json_check_lit, self.small, "small")
+
+ return d
from allmydata import interfaces, provisioning, uri, webish
from allmydata.immutable import upload, download
from allmydata.web import status, common
-from allmydata.util import fileutil
+from allmydata.util import fileutil, idlib
from allmydata.test.common import FakeDirectoryNode, FakeCHKFileNode, \
FakeMutableFileNode, create_chk_filenode
from allmydata.interfaces import IURI, INewDirectoryURI, \
import time
+import simplejson
from nevow import rend, inevow, tags as T
from twisted.web import html
from allmydata.web.common import getxmlfile, get_arg, IClient
def _render_results(self, cr):
assert ICheckerResults(cr)
return T.pre["\n".join(self._html(cr.get_report()))] # TODO: more
+
+ def _json_check_and_repair_results(self, r):
+ data = {}
+ data["storage-index"] = r.get_storage_index_string()
+ data["repair-attempted"] = r.get_repair_attempted()
+ data["repair-successful"] = r.get_repair_successful()
+ pre = r.get_pre_repair_results()
+ data["pre-repair-results"] = self._json_check_results(pre)
+ post = r.get_post_repair_results()
+ data["post-repair-results"] = self._json_check_results(post)
+ return data
+
+ def _json_check_results(self, r):
+ data = {}
+ data["storage-index"] = r.get_storage_index_string()
+ data["results"] = self._json_check_counts(r.get_data())
+ data["results"]["needs-rebalancing"] = r.needs_rebalancing()
+ data["results"]["healthy"] = r.is_healthy()
+ return data
+
+ def _json_check_counts(self, d):
+ r = {}
+ r["count-shares-good"] = d["count-shares-good"]
+ r["count-shares-needed"] = d["count-shares-needed"]
+ r["count-shares-expected"] = d["count-shares-expected"]
+ r["count-good-share-hosts"] = d["count-good-share-hosts"]
+ r["count-corrupt-shares"] = d["count-corrupt-shares"]
+ r["list-corrupt-shares"] = [ (idlib.nodeid_b2a(serverid),
+ base32.b2a(si), shnum)
+ for (serverid, si, shnum)
+ in d["list-corrupt-shares"] ]
+ r["servers-responding"] = [idlib.nodeid_b2a(serverid)
+ for serverid in d["servers-responding"]]
+ sharemap = {}
+ for (shareid, serverids) in d["sharemap"].items():
+ sharemap[shareid] = [base32.b2a(serverid) for serverid in serverids]
+ r["sharemap"] = sharemap
+
+ r["count-wrong-shares"] = d["count-wrong-shares"]
+ r["count-recoverable-versions"] = d["count-recoverable-versions"]
+ r["count-unrecoverable-versions"] = d["count-unrecoverable-versions"]
+
+ return r
+
def _html(self, s):
if isinstance(s, (str, unicode)):
return html.escape(s)
assert isinstance(s, (list, tuple))
return [html.escape(w) for w in s]
+class LiteralCheckerResults(rend.Page):
+ docFactory = getxmlfile("literal-checker-results.xhtml")
+
+ def renderHTTP(self, ctx):
+ t = get_arg(inevow.IRequest(ctx), "output", "")
+ if t.lower() == "json":
+ return self.json(ctx)
+ return rend.Page.renderHTTP(self, ctx)
+
+ def json(self, ctx):
+ inevow.IRequest(ctx).setHeader("content-type", "text/plain")
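+ # LIT files carry their data entirely in the URI: there is no storage
+ # index and there are no shares to check, so report trivial health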
+ data = {"storage-index": "",
+ "results": {"healthy": True},
+ }
+ return simplejson.dumps(data, indent=1)
+
class CheckerResults(rend.Page, ResultsBase):
docFactory = getxmlfile("checker-results.xhtml")
def __init__(self, results):
self.r = ICheckerResults(results)
+ def renderHTTP(self, ctx):
+ t = get_arg(inevow.IRequest(ctx), "output", "")
+ if t.lower() == "json":
+ return self.json(ctx)
+ return rend.Page.renderHTTP(self, ctx)
+
+ def json(self, ctx):
+ inevow.IRequest(ctx).setHeader("content-type", "text/plain")
+ data = self._json_check_results(self.r)
+ return simplejson.dumps(data, indent=1)
+
def render_storage_index(self, ctx, data):
return self.r.get_storage_index_string()
def __init__(self, results):
self.r = ICheckAndRepairResults(results)
+ def renderHTTP(self, ctx):
+ t = get_arg(inevow.IRequest(ctx), "output", None)
+ if t == "json":
+ return self.json(ctx)
+ return rend.Page.renderHTTP(self, ctx)
+
+ def json(self, ctx):
+ inevow.IRequest(ctx).setHeader("content-type", "text/plain")
+ data = self._json_check_and_repair_results(self.r)
+ return simplejson.dumps(data, indent=1)
+
def render_storage_index(self, ctx, data):
return self.r.get_storage_index_string()
assert IDeepCheckResults(results)
self.r = results
+ def renderHTTP(self, ctx):
+ t = get_arg(inevow.IRequest(ctx), "output", None)
+ if t == "json":
+ return self.json(ctx)
+ return rend.Page.renderHTTP(self, ctx)
+
+ def json(self, ctx):
+ inevow.IRequest(ctx).setHeader("content-type", "text/plain")
+ data = {}
+ data["root-storage-index"] = self.r.get_root_storage_index_string()
+ c = self.r.get_counters()
+ data["count-objects-checked"] = c["count-objects-checked"]
+ data["count-objects-healthy"] = c["count-objects-healthy"]
+ data["count-objects-unhealthy"] = c["count-objects-unhealthy"]
+ data["count-corrupt-shares"] = c["count-corrupt-shares"]
+ data["list-corrupt-shares"] = [ (idlib.b2a(serverid),
+ idlib.b2a(storage_index),
+ shnum)
+ for (serverid, storage_index, shnum)
+ in self.r.get_corrupt_shares() ]
+ data["list-unhealthy-files"] = [ (path_t, self._json_check_results(r))
+ for (path_t, r)
+ in self.r.get_all_results().items()
+ if not r.is_healthy() ]
+ return simplejson.dumps(data, indent=1)
+
def render_root_storage_index(self, ctx, data):
return self.r.get_root_storage_index_string()
assert IDeepCheckAndRepairResults(results)
self.r = results
+ def renderHTTP(self, ctx):
+ t = get_arg(inevow.IRequest(ctx), "output", None)
+ if t == "json":
+ return self.json(ctx)
+ return rend.Page.renderHTTP(self, ctx)
+
+ def json(self, ctx):
+ inevow.IRequest(ctx).setHeader("content-type", "text/plain")
+ data = {}
+ data["root-storage-index"] = self.r.get_root_storage_index_string()
+ c = self.r.get_counters()
+ data["count-objects-checked"] = c["count-objects-checked"]
+
+ data["count-objects-healthy-pre-repair"] = c["count-objects-healthy-pre-repair"]
+ data["count-objects-unhealthy-pre-repair"] = c["count-objects-unhealthy-pre-repair"]
+ data["count-objects-healthy-post-repair"] = c["count-objects-healthy-post-repair"]
+ data["count-objects-unhealthy-post-repair"] = c["count-objects-unhealthy-post-repair"]
+
+ data["count-repairs-attempted"] = c["count-repairs-attempted"]
+ data["count-repairs-successful"] = c["count-repairs-successful"]
+ data["count-repairs-unsuccessful"] = c["count-repairs-unsuccessful"]
+
+ data["count-corrupt-shares-pre-repair"] = c["count-corrupt-shares-pre-repair"]
+ data["count-corrupt-shares-post-repair"] = c["count-corrupt-shares-pre-repair"]
+
+ data["list-corrupt-shares"] = [ (idlib.b2a(serverid),
+ idlib.b2a(storage_index),
+ shnum)
+ for (serverid, storage_index, shnum)
+ in self.r.get_corrupt_shares() ]
+ data["list-remaining-corrupt-shares"] = [ (idlib.b2a(serverid),
+ idlib.b2a(storage_index),
+ shnum)
+ for (serverid, storage_index, shnum)
+ in self.r.get_remaining_corrupt_shares() ]
+
+ data["list-unhealthy-files"] = [ (path_t, self._json_check_results(r))
+ for (path_t, r)
+ in self.r.get_all_results().items()
+ if not r.get_pre_repair_results().is_healthy() ]
+ return simplejson.dumps(data, indent=1)
+
def render_root_storage_index(self, ctx, data):
return self.r.get_root_storage_index_string()
getxmlfile, RenderMixin
from allmydata.web.filenode import ReplaceMeMixin, \
FileNodeHandler, PlaceHolderNodeHandler
-from allmydata.web.checker_results import CheckerResults, DeepCheckResults, \
- DeepCheckAndRepairResults
+from allmydata.web.checker_results import CheckerResults, \
+ CheckAndRepairResults, DeepCheckResults, DeepCheckAndRepairResults
class BlockingFileError(Exception):
# TODO: catch and transform
def _POST_check(self, req):
# check this directory
- d = self.node.check()
- d.addCallback(lambda res: CheckerResults(res))
+ verify = boolean_of_arg(get_arg(req, "verify", "false"))
+ repair = boolean_of_arg(get_arg(req, "repair", "false"))
+ if repair:
+ d = self.node.check_and_repair(verify)
+ d.addCallback(lambda res: CheckAndRepairResults(res))
+ else:
+ d = self.node.check(verify)
+ d.addCallback(lambda res: CheckerResults(res))
return d
def _POST_deep_check(self, req):
from allmydata.interfaces import IDownloadTarget, ExistingChildError
from allmydata.immutable.upload import FileHandle
+from allmydata.immutable.filenode import LiteralFileNode
from allmydata.util import log
from allmydata.web.common import text_plain, WebError, IClient, RenderMixin, \
boolean_of_arg, get_arg, should_create_intermediate_directories
-from allmydata.web.checker_results import CheckerResults, CheckAndRepairResults
+from allmydata.web.checker_results import CheckerResults, \
+ CheckAndRepairResults, LiteralCheckerResults
class ReplaceMeMixin:
def _POST_check(self, req):
verify = boolean_of_arg(get_arg(req, "verify", "false"))
repair = boolean_of_arg(get_arg(req, "repair", "false"))
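+ # LIT files have no shares to check or repair: short-circuit with a
+ # trivially-healthy result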
+ if isinstance(self.node, LiteralFileNode):
+ return defer.succeed(LiteralCheckerResults())
if repair:
d = self.node.check_and_repair(verify)
d.addCallback(lambda res: CheckAndRepairResults(res))
--- /dev/null
+<html xmlns:n="http://nevow.com/ns/nevow/0.1">
+ <head>
+ <title>AllMyData - Tahoe - Check Results</title>
+ <!-- <link href="http://www.allmydata.com/common/css/styles.css"
+ rel="stylesheet" type="text/css"/> -->
+ <link href="/webform_css" rel="stylesheet" type="text/css"/>
+ <meta http-equiv="Content-Type" content="text/html; charset=utf-8" />
+ </head>
+ <body>
+
+<h1>File Check Results for LIT file</h1>
+
+<div>Literal files are always healthy: their data is contained in the URI</div>
+
+<div n:render="return" />
+
+ </body>
+</html>