Also rename magic_folder_parent_dircap to collective_dircap.
Author: David Stainton <david@leastauthority.com>
Author: Daira Hopwood <daira@jacaranda.org>
Signed-off-by: Daira Hopwood <daira@jacaranda.org>
+++ /dev/null
-.. -*- coding: utf-8-with-signature -*-
-
-===============================
-Tahoe-LAFS Drop-Upload Frontend
-===============================
-
-1. `Introduction`_
-2. `Configuration`_
-3. `Known Issues and Limitations`_
-
-
-Introduction
-============
-
-The drop-upload frontend allows an upload to a Tahoe-LAFS grid to be triggered
-automatically whenever a file is created or changed in a specific local
-directory. It currently works on Linux and Windows.
-
-The implementation was written as a prototype at the First International
-Tahoe-LAFS Summit in June 2011, and is not currently in as mature a state as
-the other frontends (web, CLI, SFTP and FTP). This means that you probably
-should not rely on all changes to files in the local directory to result in
-successful uploads. There might be (and have been) incompatible changes to
-how the feature is configured.
-
-We are very interested in feedback on how well this feature works for you, and
-suggestions to improve its usability, functionality, and reliability.
-
-
-Configuration
-=============
-
-The drop-upload frontend runs as part of a gateway node. To set it up, you
-need to choose the local directory to monitor for file changes, and a mutable
-directory on the grid to which files will be uploaded.
-
-These settings are configured in the ``[drop_upload]`` section of the
-gateway's ``tahoe.cfg`` file.
-
-``[drop_upload]``
-
-``enabled = (boolean, optional)``
-
- If this is ``True``, drop-upload will be enabled. The default value is
- ``False``.
-
-``local.directory = (UTF-8 path)``
-
- This specifies the local directory to be monitored for new or changed
- files. If the path contains non-ASCII characters, it should be encoded
- in UTF-8 regardless of the system's filesystem encoding. Relative paths
- will be interpreted starting from the node's base directory.
-
-In addition, the file ``private/drop_upload_dircap`` must contain a
-writecap pointing to an existing mutable directory to be used as the target
-of uploads. It will start with ``URI:DIR2:``, and cannot include an alias
-or path.
-
-After setting the above fields and starting or restarting the gateway,
-you can confirm that the feature is working by copying a file into the
-local directory. Then, use the WUI or CLI to check that it has appeared
-in the upload directory with the same filename. A large file may take some
-time to appear, since it is only linked into the directory after the upload
-has completed.
-
-The 'Operational Statistics' page linked from the Welcome page shows
-counts of the number of files uploaded, the number of change events currently
-queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
-page and the node log_ may be helpful to determine the cause of any failures.
-
-.. _log: ../logging.rst
-
-
-Known Issues and Limitations
-============================
-
-This frontend only works on Linux and Windows. There is a ticket to add
-support for Mac OS X and BSD-based systems (`#1432`_).
-
-Subdirectories of the local directory are not monitored. If a subdirectory
-is created, it will be ignored. (`#1433`_)
-
-If files are created or changed in the local directory just after the gateway
-has started, it might not have connected to a sufficient number of servers
-when the upload is attempted, causing the upload to fail. (`#1449`_)
-
-Files that were created or changed in the local directory while the gateway
-was not running, will not be uploaded. (`#1458`_)
-
-The only way to determine whether uploads have failed is to look at the
-'Operational Statistics' page linked from the Welcome page. This only shows
-a count of failures, not the names of files. Uploads are never retried.
-
-The drop-upload frontend performs its uploads sequentially (i.e. it waits
-until each upload is finished before starting the next), even when there
-would be enough memory and bandwidth to efficiently perform them in parallel.
-A drop-upload can occur in parallel with an upload by a different frontend,
-though. (`#1459`_)
-
-On Linux, if there are a large number of near-simultaneous file creation or
-change events (greater than the number specified in the file
-``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
-could be missed. This is fairly unlikely under normal circumstances, because
-the default value of ``max_queued_events`` in most Linux distributions is
-16384, and events are removed from this queue immediately without waiting for
-the corresponding upload to complete. (`#1430`_)
-
-The Windows implementation might also occasionally miss file creation or
-change events, due to limitations of the underlying Windows API
-(ReadDirectoryChangesW). We do not know how likely or unlikely this is.
-(`#1431`_)
-
-Some filesystems may not support the necessary change notifications.
-So, it is recommended for the local directory to be on a directly attached
-disk-based filesystem, not a network filesystem or one provided by a virtual
-machine.
-
-Attempts to read the mutable directory at about the same time as an uploaded
-file is being linked into it, might fail, even if they are done through the
-same gateway. (`#1105`_)
-
-When a local file is changed and closed several times in quick succession,
-it may be uploaded more times than necessary to keep the remote copy
-up-to-date. (`#1440`_)
-
-Files deleted from the local directory will not be unlinked from the upload
-directory. (`#1710`_)
-
-The ``private/drop_upload_dircap`` file cannot use an alias or path to
-specify the upload directory. (`#1711`_)
-
-Files are always uploaded as immutable. If there is an existing mutable file
-of the same name in the upload directory, it will be unlinked and replaced
-with an immutable file. (`#1712`_)
-
-If a file in the upload directory is changed (actually relinked to a new
-file), then the old file is still present on the grid, and any other caps to
-it will remain valid. See `docs/garbage-collection.rst`_ for how to reclaim
-the space used by files that are no longer needed.
-
-Unicode filenames are supported on both Linux and Windows, but on Linux, the
-local name of a file must be encoded correctly in order for it to be uploaded.
-The expected encoding is that printed by
-``python -c "import sys; print sys.getfilesystemencoding()"``.
-
-On Windows, local directories with non-ASCII names are not currently working.
-(`#2219`_)
-
-On Windows, when a node has drop-upload enabled, it is unresponsive to Ctrl-C
-(it can only be killed using Task Manager or similar). (`#2218`_)
-
-.. _`#1105`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1105
-.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
-.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
-.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
-.. _`#1433`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1433
-.. _`#1440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1440
-.. _`#1449`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1449
-.. _`#1458`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1458
-.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
-.. _`#1710`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1710
-.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
-.. _`#1712`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1712
-.. _`#2218`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2218
-.. _`#2219`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2219
-
-.. _docs/garbage-collection.rst: ../garbage-collection.rst
-
--- /dev/null
+.. -*- coding: utf-8-with-signature -*-
+
+================================
+Tahoe-LAFS Magic Folder Frontend
+================================
+
+1. `Introduction`_
+2. `Configuration`_
+3. `Known Issues and Limitations`_
+
+
+Introduction
+============
+
+The Magic Folder frontend allows an upload to a Tahoe-LAFS grid to be triggered
+automatically whenever a file is created or changed in a specific local
+directory. It currently works on Linux and Windows.
+
+The implementation was written as a prototype at the First International
+Tahoe-LAFS Summit in June 2011, and is not currently in as mature a state as
+the other frontends (web, CLI, SFTP and FTP). This means that you probably
+should not rely on all changes to files in the local directory to result in
+successful uploads. There might be (and have been) incompatible changes to
+how the feature is configured.
+
+We are very interested in feedback on how well this feature works for you, and
+suggestions to improve its usability, functionality, and reliability.
+
+
+Configuration
+=============
+
+The Magic Folder frontend runs as part of a gateway node. To set it up, you
+need to choose the local directory to monitor for file changes, and a mutable
+directory on the grid to which files will be uploaded.
+
+These settings are configured in the ``[magic_folder]`` section of the
+gateway's ``tahoe.cfg`` file.
+
+``[magic_folder]``
+
+``enabled = (boolean, optional)``
+
+ If this is ``True``, Magic Folder will be enabled. The default value is
+ ``False``.
+
+``local.directory = (UTF-8 path)``
+
+ This specifies the local directory to be monitored for new or changed
+ files. If the path contains non-ASCII characters, it should be encoded
+ in UTF-8 regardless of the system's filesystem encoding. Relative paths
+ will be interpreted starting from the node's base directory.
+
+In addition:
+
+* the file ``private/magic_folder_dircap`` must contain a writecap pointing
+  to an existing mutable directory to be used as the target of uploads.
+  It will start with ``URI:DIR2:``, and cannot include an alias or path.
+* the file ``private/collective_dircap`` must contain a readcap for the
+  collective directory.
+
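+For example, a minimal ``tahoe.cfg`` stanza enabling the feature (the
+local directory name here is only illustrative)::
+
+  [magic_folder]
+  enabled = True
+  local.directory = magic
+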
+After setting the above fields and starting or restarting the gateway,
+you can confirm that the feature is working by copying a file into the
+local directory. Then, use the WUI or CLI to check that it has appeared
+in the upload directory with the same filename. A large file may take some
+time to appear, since it is only linked into the directory after the upload
+has completed.
+
+The 'Operational Statistics' page linked from the Welcome page shows
+counts of the number of files uploaded, the number of change events currently
+queued, and the number of failed uploads. The 'Recent Uploads and Downloads'
+page and the node log_ may be helpful to determine the cause of any failures.
+
+.. _log: ../logging.rst
+
+
+Known Issues and Limitations
+============================
+
+This frontend only works on Linux and Windows. There is a ticket to add
+support for Mac OS X and BSD-based systems (`#1432`_).
+
+Subdirectories of the local directory are not monitored. If a subdirectory
+is created, it will be ignored. (`#1433`_)
+
+If files are created or changed in the local directory just after the gateway
+has started, it might not have connected to a sufficient number of servers
+when the upload is attempted, causing the upload to fail. (`#1449`_)
+
+Files that were created or changed in the local directory while the gateway
+was not running will not be uploaded. (`#1458`_)
+
+The only way to determine whether uploads have failed is to look at the
+'Operational Statistics' page linked from the Welcome page. This only shows
+a count of failures, not the names of files. Uploads are never retried.
+
+The Magic Folder frontend performs its uploads sequentially (i.e. it waits
+until each upload is finished before starting the next), even when there
+would be enough memory and bandwidth to efficiently perform them in parallel.
+A Magic Folder upload can occur in parallel with an upload by a different
+frontend, though. (`#1459`_)
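Setting Twisted's Deferred machinery aside, the strictly sequential behavior described above can be sketched in plain Python (a hypothetical helper for illustration, not Tahoe's actual code):

```python
from collections import deque

def drain_queue(paths, process):
    """Drain the queue strictly sequentially: each queued item is
    fully processed before the next one is started."""
    queue = deque(paths)
    results = []
    while queue:
        path = queue.popleft()
        # the next iteration cannot begin until process() returns,
        # mirroring the one-upload-at-a-time behavior described above
        results.append(process(path))
    return results

# each "upload" here is just a stand-in callable
uploads = drain_queue(["a.txt", "b.txt"], lambda p: "uploaded " + p)
```

The real frontend chains each upload onto a lazy tail of Deferreds, which achieves the same ordering asynchronously.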
+
+On Linux, if there are a large number of near-simultaneous file creation or
+change events (greater than the number specified in the file
+``/proc/sys/fs/inotify/max_queued_events``), it is possible that some events
+could be missed. This is fairly unlikely under normal circumstances, because
+the default value of ``max_queued_events`` in most Linux distributions is
+16384, and events are removed from this queue immediately without waiting for
+the corresponding upload to complete. (`#1430`_)
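The limit mentioned above can be inspected directly from procfs. A small hedged helper (illustrative, not part of Tahoe; it falls back to the common distribution default on non-Linux systems):

```python
def inotify_max_queued_events(default=16384):
    """Return the kernel's inotify event-queue limit from procfs,
    falling back to the common distribution default where the file
    is absent or unreadable (e.g. on non-Linux systems)."""
    path = "/proc/sys/fs/inotify/max_queued_events"
    try:
        with open(path) as f:
            return int(f.read().strip())
    except (OSError, ValueError):
        return default

print(inotify_max_queued_events())
```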
+
+The Windows implementation might also occasionally miss file creation or
+change events, due to limitations of the underlying Windows API
+(ReadDirectoryChangesW). We do not know how likely or unlikely this is.
+(`#1431`_)
+
+Some filesystems may not support the necessary change notifications,
+so it is recommended that the local directory be on a directly attached,
+disk-based filesystem, not a network filesystem or one provided by a
+virtual machine.
+
+Attempts to read the mutable directory at about the same time as an uploaded
+file is being linked into it might fail, even if they are done through the
+same gateway. (`#1105`_)
+
+When a local file is changed and closed several times in quick succession,
+it may be uploaded more times than necessary to keep the remote copy
+up-to-date. (`#1440`_)
+
+Files deleted from the local directory will not be unlinked from the upload
+directory. (`#1710`_)
+
+The ``private/magic_folder_dircap`` and ``private/collective_dircap`` files
+cannot use an alias or path to specify the upload directory. (`#1711`_)
+
+Files are always uploaded as immutable. If there is an existing mutable file
+of the same name in the upload directory, it will be unlinked and replaced
+with an immutable file. (`#1712`_)
+
+If a file in the upload directory is changed (actually relinked to a new
+file), then the old file is still present on the grid, and any other caps to
+it will remain valid. See `docs/garbage-collection.rst`_ for how to reclaim
+the space used by files that are no longer needed.
+
+Unicode filenames are supported on both Linux and Windows, but on Linux, the
+local name of a file must be encoded correctly in order for it to be uploaded.
+The expected encoding is that printed by
+``python -c "import sys; print(sys.getfilesystemencoding())"``.
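As a hedged illustration (this helper is not part of Tahoe-LAFS), a filename's compatibility with the reported filesystem encoding can be checked directly:

```python
import sys

def is_representable(name, encoding=None):
    """True if `name` can be encoded in the given (or the system's
    reported) filesystem encoding, i.e. a correctly configured locale
    could hold this file where Magic Folder would see it."""
    try:
        name.encode(encoding or sys.getfilesystemencoding())
        return True
    except UnicodeEncodeError:
        return False

print(is_representable(u"notes.txt"))
print(is_representable(u"r\u00e9sum\u00e9.txt", "ascii"))
```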
+
+On Windows, local directories with non-ASCII names are not currently working.
+(`#2219`_)
+
+On Windows, when a node has Magic Folder enabled, it is unresponsive to Ctrl-C
+(it can only be killed using Task Manager or similar). (`#2218`_)
+
+.. _`#1105`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1105
+.. _`#1430`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1430
+.. _`#1431`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1431
+.. _`#1432`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1432
+.. _`#1433`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1433
+.. _`#1440`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1440
+.. _`#1449`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1449
+.. _`#1458`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1458
+.. _`#1459`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1459
+.. _`#1710`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1710
+.. _`#1711`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1711
+.. _`#1712`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/1712
+.. _`#2218`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2218
+.. _`#2219`: https://tahoe-lafs.org/trac/tahoe-lafs/ticket/2219
+
+.. _docs/garbage-collection.rst: ../garbage-collection.rst
+
# ControlServer and Helper are attached after Tub startup
self.init_ftp_server()
self.init_sftp_server()
- self.init_drop_uploader()
+ self.init_magic_folder()
# If the node sees an exit_trigger file, it will poll every second to see
# whether the file still exists, and what its mtime is. If the file does not
sftp_portstr, pubkey_file, privkey_file)
s.setServiceParent(self)
- def init_drop_uploader(self):
+ def init_magic_folder(self):
if self.get_config("drop_upload", "enabled", False, boolean=True):
- if self.get_config("drop_upload", "upload.dircap", None):
- raise OldConfigOptionError("The [drop_upload]upload.dircap option is no longer supported; please "
- "put the cap in a 'private/drop_upload_dircap' file, and delete this option.")
+ raise OldConfigOptionError("The [drop_upload] section must be renamed to [magic_folder].\n"
+ "See docs/frontends/magic-folder.rst for more information.")
- upload_dircap = self.get_or_create_private_config("drop_upload_dircap")
- local_dir_config = self.get_config("drop_upload", "local.directory").decode("utf-8")
+ if self.get_config("magic_folder", "enabled", False, boolean=True):
+ upload_dircap = self.get_or_create_private_config("magic_folder_dircap")
+ local_dir_config = self.get_config("magic_folder", "local.directory").decode("utf-8")
local_dir = abspath_expanduser_unicode(local_dir_config, base=self.basedir)
try:
- from allmydata.frontends import drop_upload
+ from allmydata.frontends import magic_folder
dbfile = os.path.join(self.basedir, "private", "magicfolderdb.sqlite")
dbfile = abspath_expanduser_unicode(dbfile)
- parent_dircap_path = os.path.join(self.basedir, "private", "magic_folder_parent_dircap")
- parent_dircap_path = abspath_expanduser_unicode(parent_dircap_path)
- parent_dircap = fileutil.read(parent_dircap_path).strip()
+ collective_dircap_path = os.path.join(self.basedir, "private", "collective_dircap")
+ collective_dircap_path = abspath_expanduser_unicode(collective_dircap_path)
+ collective_dircap = fileutil.read(collective_dircap_path).strip()
- s = drop_upload.DropUploader(self, upload_dircap, parent_dircap, local_dir, dbfile)
+ s = magic_folder.MagicFolder(self, upload_dircap, collective_dircap, local_dir, dbfile)
s.setServiceParent(self)
s.startService()
# start processing the upload queue when we've connected to enough servers
self.upload_ready_d.addCallback(s.upload_ready)
except Exception, e:
- self.log("couldn't start drop-uploader: %r", args=(e,))
+ self.log("couldn't start Magic Folder: %r", args=(e,))
def _check_exit_trigger(self, exit_trigger_file):
if os.path.exists(exit_trigger_file):
+++ /dev/null
-
-import sys, os, stat
-import os.path
-from collections import deque
-
-from twisted.internet import defer, reactor, task
-from twisted.python.failure import Failure
-from twisted.python import runtime
-from twisted.application import service
-
-from allmydata.interfaces import IDirectoryNode
-
-from allmydata.util import log
-from allmydata.util.fileutil import abspath_expanduser_unicode, precondition_abspath
-from allmydata.util.encodingutil import listdir_unicode, to_filepath, \
- unicode_from_filepath, quote_local_unicode_path, FilenameEncodingError
-from allmydata.immutable.upload import FileName, Data
-from allmydata import backupdb, magicpath
-
-
-IN_EXCL_UNLINK = 0x04000000L
-
-def get_inotify_module():
- try:
- if sys.platform == "win32":
- from allmydata.windows import inotify
- elif runtime.platform.supportsINotify():
- from twisted.internet import inotify
- else:
- raise NotImplementedError("filesystem notification needed for drop-upload is not supported.\n"
- "This currently requires Linux or Windows.")
- return inotify
- except (ImportError, AttributeError) as e:
- log.msg(e)
- if sys.platform == "win32":
- raise NotImplementedError("filesystem notification needed for drop-upload is not supported.\n"
- "Windows support requires at least Vista, and has only been tested on Windows 7.")
- raise
-
-
-class DropUploader(service.MultiService):
- name = 'drop-upload'
-
- def __init__(self, client, upload_dircap, parent_dircap, local_dir, dbfile, inotify=None,
- pending_delay=1.0):
- precondition_abspath(local_dir)
-
- service.MultiService.__init__(self)
- self._local_dir = abspath_expanduser_unicode(local_dir)
- self._upload_lazy_tail = defer.succeed(None)
- self._pending = set()
- self._client = client
- self._stats_provider = client.stats_provider
- self._convergence = client.convergence
- self._local_path = to_filepath(self._local_dir)
- self._dbfile = dbfile
-
- self._upload_deque = deque()
- self.is_upload_ready = False
-
- self._inotify = inotify or get_inotify_module()
-
- if not self._local_path.exists():
- raise AssertionError("The '[drop_upload] local.directory' parameter was %s "
- "but there is no directory at that location."
- % quote_local_unicode_path(local_dir))
- if not self._local_path.isdir():
- raise AssertionError("The '[drop_upload] local.directory' parameter was %s "
- "but the thing at that location is not a directory."
- % quote_local_unicode_path(local_dir))
-
- # TODO: allow a path rather than a cap URI.
- self._parent = self._client.create_node_from_uri(upload_dircap)
- if not IDirectoryNode.providedBy(self._parent):
- raise AssertionError("The URI in 'private/drop_upload_dircap' does not refer to a directory.")
- if self._parent.is_unknown() or self._parent.is_readonly():
- raise AssertionError("The URI in 'private/drop_upload_dircap' is not a writecap to a directory.")
-
- self._processed_callback = lambda ign: None
- self._ignore_count = 0
-
- self._notifier = inotify.INotify()
- if hasattr(self._notifier, 'set_pending_delay'):
- self._notifier.set_pending_delay(pending_delay)
-
- # We don't watch for IN_CREATE, because that would cause us to read and upload a
- # possibly-incomplete file before the application has closed it. There should always
- # be an IN_CLOSE_WRITE after an IN_CREATE (I think).
- # TODO: what about IN_MOVE_SELF, IN_MOVED_FROM, or IN_UNMOUNT?
- #
- self.mask = ( inotify.IN_CLOSE_WRITE
- | inotify.IN_MOVED_TO
- | inotify.IN_MOVED_FROM
- | inotify.IN_DELETE
- | inotify.IN_ONLYDIR
- | IN_EXCL_UNLINK
- )
- self._notifier.watch(self._local_path, mask=self.mask, callbacks=[self._notify],
- recursive=True)
-
- def _check_db_file(self, childpath):
- # returns True if the file must be uploaded.
- assert self._db != None
- r = self._db.check_file(childpath)
- filecap = r.was_uploaded()
- if filecap is False:
- return True
-
- def _scan(self, localpath):
- if not os.path.isdir(localpath):
- raise AssertionError("Programmer error: _scan() must be passed a directory path.")
- quoted_path = quote_local_unicode_path(localpath)
- try:
- children = listdir_unicode(localpath)
- except EnvironmentError:
- raise(Exception("WARNING: magic folder: permission denied on directory %s" % (quoted_path,)))
- except FilenameEncodingError:
- raise(Exception("WARNING: magic folder: could not list directory %s due to a filename encoding error" % (quoted_path,)))
-
- for child in children:
- assert isinstance(child, unicode), child
- childpath = os.path.join(localpath, child)
- # note: symlinks to directories are both islink() and isdir()
- isdir = os.path.isdir(childpath)
- isfile = os.path.isfile(childpath)
- islink = os.path.islink(childpath)
-
- if islink:
- self.warn("WARNING: cannot backup symlink %s" % quote_local_unicode_path(childpath))
- elif isdir:
- # process directories unconditionally
- self._append_to_deque(childpath)
-
- # recurse on the child directory
- self._scan(childpath)
- elif isfile:
- must_upload = self._check_db_file(childpath)
- if must_upload:
- self._append_to_deque(childpath)
- else:
- self.warn("WARNING: cannot backup special file %s" % quote_local_unicode_path(childpath))
-
- def startService(self):
- self._db = backupdb.get_backupdb(self._dbfile)
- if self._db is None:
- return Failure(Exception('ERROR: Unable to load magic folder db.'))
-
- service.MultiService.startService(self)
- d = self._notifier.startReading()
-
- self._scan(self._local_dir)
-
- self._stats_provider.count('drop_upload.dirs_monitored', 1)
- return d
-
- def upload_ready(self):
- """upload_ready is used to signal us to start
- processing the upload items...
- """
- self.is_upload_ready = True
- self._turn_deque()
-
- def _append_to_deque(self, path):
- self._upload_deque.append(path)
- self._pending.add(path)
- self._stats_provider.count('drop_upload.objects_queued', 1)
- if self.is_upload_ready:
- reactor.callLater(0, self._turn_deque)
-
- def _turn_deque(self):
- try:
- path = self._upload_deque.pop()
- except IndexError:
- self._log("magic folder upload deque is now empty")
- self._upload_lazy_tail = defer.succeed(None)
- return
- self._upload_lazy_tail.addCallback(lambda ign: task.deferLater(reactor, 0, self._process, path))
- self._upload_lazy_tail.addCallback(lambda ign: self._turn_deque())
-
- def _notify(self, opaque, path, events_mask):
- self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
- path_u = unicode_from_filepath(path)
- if path_u not in self._pending:
- self._append_to_deque(path_u)
-
- def _process(self, path):
- d = defer.succeed(None)
-
- def _add_file(name):
- u = FileName(path, self._convergence)
- return self._parent.add_file(name, u, overwrite=True)
-
- def _add_dir(name):
- self._notifier.watch(to_filepath(path), mask=self.mask, callbacks=[self._notify], recursive=True)
- u = Data("", self._convergence)
- name += "@_"
- d2 = self._parent.add_file(name, u, overwrite=True)
- def _succeeded(ign):
- self._log("created subdirectory %r" % (path,))
- self._stats_provider.count('drop_upload.directories_created', 1)
- def _failed(f):
- self._log("failed to create subdirectory %r" % (path,))
- return f
- d2.addCallbacks(_succeeded, _failed)
- d2.addCallback(lambda ign: self._scan(path))
- return d2
-
- def _maybe_upload(val):
- self._pending.remove(path)
- relpath = os.path.relpath(path, self._local_dir)
- name = magicpath.path2magic(relpath)
-
- if not os.path.exists(path):
- self._log("drop-upload: notified object %r disappeared "
- "(this is normal for temporary objects)" % (path,))
- self._stats_provider.count('drop_upload.objects_disappeared', 1)
- return None
- elif os.path.islink(path):
- raise Exception("symlink not being processed")
-
- if os.path.isdir(path):
- return _add_dir(name)
- elif os.path.isfile(path):
- d2 = _add_file(name)
- def add_db_entry(filenode):
- filecap = filenode.get_uri()
- s = os.stat(path)
- size = s[stat.ST_SIZE]
- ctime = s[stat.ST_CTIME]
- mtime = s[stat.ST_MTIME]
- self._db.did_upload_file(filecap, path, mtime, ctime, size)
- self._stats_provider.count('drop_upload.files_uploaded', 1)
- d2.addCallback(add_db_entry)
- return d2
- else:
- raise Exception("non-directory/non-regular file not being processed")
-
- d.addCallback(_maybe_upload)
-
- def _succeeded(res):
- self._stats_provider.count('drop_upload.objects_queued', -1)
- self._stats_provider.count('drop_upload.objects_succeeded', 1)
- return res
- def _failed(f):
- self._stats_provider.count('drop_upload.objects_queued', -1)
- self._stats_provider.count('drop_upload.objects_failed', 1)
- self._log("%r while processing %r" % (f, path))
- return f
- d.addCallbacks(_succeeded, _failed)
- d.addBoth(self._do_processed_callback)
- return d
-
- def _do_processed_callback(self, res):
- if self._ignore_count == 0:
- self._processed_callback(res)
- else:
- self._ignore_count -= 1
- return None # intentionally suppress failures, which have already been logged
-
- def set_processed_callback(self, callback, ignore_count=0):
- """
- This sets a function that will be called after a notification has been processed
- (successfully or unsuccessfully).
- """
- self._processed_callback = callback
- self._ignore_count = ignore_count
-
- def finish(self, for_tests=False):
- self._notifier.stopReading()
- self._stats_provider.count('drop_upload.dirs_monitored', -1)
- if for_tests and hasattr(self._notifier, 'wait_until_stopped'):
- return self._notifier.wait_until_stopped()
- else:
- return defer.succeed(None)
-
- def remove_service(self):
- return service.MultiService.disownServiceParent(self)
-
- def _log(self, msg):
- self._client.log("drop-upload: " + msg)
- #open("events", "ab+").write(msg)
--- /dev/null
+
+import sys, os, stat
+import os.path
+from collections import deque
+
+from twisted.internet import defer, reactor, task
+from twisted.python.failure import Failure
+from twisted.python import runtime
+from twisted.application import service
+
+from allmydata.interfaces import IDirectoryNode
+
+from allmydata.util import log
+from allmydata.util.fileutil import abspath_expanduser_unicode, precondition_abspath
+from allmydata.util.encodingutil import listdir_unicode, to_filepath, \
+ unicode_from_filepath, quote_local_unicode_path, FilenameEncodingError
+from allmydata.immutable.upload import FileName, Data
+from allmydata import backupdb, magicpath
+
+
+IN_EXCL_UNLINK = 0x04000000L
+
+def get_inotify_module():
+ try:
+ if sys.platform == "win32":
+ from allmydata.windows import inotify
+ elif runtime.platform.supportsINotify():
+ from twisted.internet import inotify
+ else:
+            raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
+                                      "This currently requires Linux or Windows.")
+ return inotify
+ except (ImportError, AttributeError) as e:
+ log.msg(e)
+ if sys.platform == "win32":
+            raise NotImplementedError("filesystem notification needed for Magic Folder is not supported.\n"
+                                      "Windows support requires at least Vista, and has only been tested on Windows 7.")
+ raise
+
+
+class MagicFolder(service.MultiService):
+ name = 'magic-folder'
+
+ def __init__(self, client, upload_dircap, collective_dircap, local_dir, dbfile, inotify=None,
+ pending_delay=1.0):
+ precondition_abspath(local_dir)
+
+ service.MultiService.__init__(self)
+ self._local_dir = abspath_expanduser_unicode(local_dir)
+ self._upload_lazy_tail = defer.succeed(None)
+ self._pending = set()
+ self._client = client
+ self._stats_provider = client.stats_provider
+ self._convergence = client.convergence
+ self._local_path = to_filepath(self._local_dir)
+ self._dbfile = dbfile
+
+ self._upload_deque = deque()
+ self.is_upload_ready = False
+
+ self._inotify = inotify or get_inotify_module()
+
+ if not self._local_path.exists():
+ raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
+ "but there is no directory at that location."
+ % quote_local_unicode_path(local_dir))
+ if not self._local_path.isdir():
+ raise AssertionError("The '[magic_folder] local.directory' parameter was %s "
+ "but the thing at that location is not a directory."
+ % quote_local_unicode_path(local_dir))
+
+ # TODO: allow a path rather than a cap URI.
+ self._upload_dirnode = self._client.create_node_from_uri(upload_dircap)
+ if not IDirectoryNode.providedBy(self._upload_dirnode):
+ raise AssertionError("The URI in 'private/magic_folder_dircap' does not refer to a directory.")
+ if self._upload_dirnode.is_unknown() or self._upload_dirnode.is_readonly():
+ raise AssertionError("The URI in 'private/magic_folder_dircap' is not a writecap to a directory.")
+
+ self._processed_callback = lambda ign: None
+ self._ignore_count = 0
+
+ self._notifier = inotify.INotify()
+ if hasattr(self._notifier, 'set_pending_delay'):
+ self._notifier.set_pending_delay(pending_delay)
+
+ # We don't watch for IN_CREATE, because that would cause us to read and upload a
+ # possibly-incomplete file before the application has closed it. There should always
+ # be an IN_CLOSE_WRITE after an IN_CREATE (I think).
+ # TODO: what about IN_MOVE_SELF, IN_MOVED_FROM, or IN_UNMOUNT?
+ #
+ self.mask = ( inotify.IN_CLOSE_WRITE
+ | inotify.IN_MOVED_TO
+ | inotify.IN_MOVED_FROM
+ | inotify.IN_DELETE
+ | inotify.IN_ONLYDIR
+ | IN_EXCL_UNLINK
+ )
+ self._notifier.watch(self._local_path, mask=self.mask, callbacks=[self._notify],
+ recursive=True)
+
+ def _check_db_file(self, childpath):
+ # returns True if the file must be uploaded.
+        assert self._db is not None
+ r = self._db.check_file(childpath)
+ filecap = r.was_uploaded()
+ if filecap is False:
+ return True
+
+ def _scan(self, localpath):
+ if not os.path.isdir(localpath):
+ raise AssertionError("Programmer error: _scan() must be passed a directory path.")
+ quoted_path = quote_local_unicode_path(localpath)
+ try:
+ children = listdir_unicode(localpath)
+ except EnvironmentError:
+            raise Exception("WARNING: magic folder: permission denied on directory %s" % (quoted_path,))
+ except FilenameEncodingError:
+            raise Exception("WARNING: magic folder: could not list directory %s due to a filename encoding error" % (quoted_path,))
+
+ for child in children:
+ assert isinstance(child, unicode), child
+ childpath = os.path.join(localpath, child)
+ # note: symlinks to directories are both islink() and isdir()
+ isdir = os.path.isdir(childpath)
+ isfile = os.path.isfile(childpath)
+ islink = os.path.islink(childpath)
+
+ if islink:
+                self.warn("WARNING: cannot back up symlink %s" % quote_local_unicode_path(childpath))
+ elif isdir:
+ # process directories unconditionally
+ self._append_to_deque(childpath)
+
+ # recurse on the child directory
+ self._scan(childpath)
+ elif isfile:
+ must_upload = self._check_db_file(childpath)
+ if must_upload:
+ self._append_to_deque(childpath)
+ else:
+                self.warn("WARNING: cannot back up special file %s" % quote_local_unicode_path(childpath))
+
+ def startService(self):
+ self._db = backupdb.get_backupdb(self._dbfile)
+ if self._db is None:
+ return Failure(Exception('ERROR: Unable to load magic folder db.'))
+
+ service.MultiService.startService(self)
+ d = self._notifier.startReading()
+
+ self._scan(self._local_dir)
+
+ self._stats_provider.count('magic_folder.dirs_monitored', 1)
+ return d
+
+ def upload_ready(self):
+        """Signal that we may start processing queued upload items."""
+ self.is_upload_ready = True
+ self._turn_deque()
+
+ def _append_to_deque(self, path):
+ self._upload_deque.append(path)
+ self._pending.add(path)
+ self._stats_provider.count('magic_folder.objects_queued', 1)
+ if self.is_upload_ready:
+ reactor.callLater(0, self._turn_deque)
+
+ def _turn_deque(self):
+ try:
+ path = self._upload_deque.pop()
+ except IndexError:
+ self._log("magic folder upload deque is now empty")
+ self._upload_lazy_tail = defer.succeed(None)
+ return
+ self._upload_lazy_tail.addCallback(lambda ign: task.deferLater(reactor, 0, self._process, path))
+ self._upload_lazy_tail.addCallback(lambda ign: self._turn_deque())
+
+ def _notify(self, opaque, path, events_mask):
+ self._log("inotify event %r, %r, %r\n" % (opaque, path, ', '.join(self._inotify.humanReadableMask(events_mask))))
+ path_u = unicode_from_filepath(path)
+ if path_u not in self._pending:
+ self._append_to_deque(path_u)
+
+ def _process(self, path):
+ d = defer.succeed(None)
+
+ def _add_file(name):
+ u = FileName(path, self._convergence)
+ return self._upload_dirnode.add_file(name, u, overwrite=True)
+
+ def _add_dir(name):
+ self._notifier.watch(to_filepath(path), mask=self.mask, callbacks=[self._notify], recursive=True)
+ u = Data("", self._convergence)
+ name += "@_"
+ d2 = self._upload_dirnode.add_file(name, u, overwrite=True)
+ def _succeeded(ign):
+ self._log("created subdirectory %r" % (path,))
+ self._stats_provider.count('magic_folder.directories_created', 1)
+ def _failed(f):
+ self._log("failed to create subdirectory %r" % (path,))
+ return f
+ d2.addCallbacks(_succeeded, _failed)
+ d2.addCallback(lambda ign: self._scan(path))
+ return d2
+
+ def _maybe_upload(val):
+ self._pending.remove(path)
+ relpath = os.path.relpath(path, self._local_dir)
+ name = magicpath.path2magic(relpath)
+
+ if not os.path.exists(path):
+                self._log("magic folder: notified object %r disappeared "
+                          "(this is normal for temporary objects)" % (path,))
+ self._stats_provider.count('magic_folder.objects_disappeared', 1)
+ return None
+ elif os.path.islink(path):
+ raise Exception("symlink not being processed")
+
+ if os.path.isdir(path):
+ return _add_dir(name)
+ elif os.path.isfile(path):
+ d2 = _add_file(name)
+ def add_db_entry(filenode):
+ filecap = filenode.get_uri()
+ s = os.stat(path)
+ size = s[stat.ST_SIZE]
+ ctime = s[stat.ST_CTIME]
+ mtime = s[stat.ST_MTIME]
+ self._db.did_upload_file(filecap, path, mtime, ctime, size)
+ self._stats_provider.count('magic_folder.files_uploaded', 1)
+ d2.addCallback(add_db_entry)
+ return d2
+ else:
+ raise Exception("non-directory/non-regular file not being processed")
+
+ d.addCallback(_maybe_upload)
+
+ def _succeeded(res):
+ self._stats_provider.count('magic_folder.objects_queued', -1)
+ self._stats_provider.count('magic_folder.objects_succeeded', 1)
+ return res
+ def _failed(f):
+ self._stats_provider.count('magic_folder.objects_queued', -1)
+ self._stats_provider.count('magic_folder.objects_failed', 1)
+ self._log("%r while processing %r" % (f, path))
+ return f
+ d.addCallbacks(_succeeded, _failed)
+ d.addBoth(self._do_processed_callback)
+ return d
+
+ def _do_processed_callback(self, res):
+ if self._ignore_count == 0:
+ self._processed_callback(res)
+ else:
+ self._ignore_count -= 1
+ return None # intentionally suppress failures, which have already been logged
+
+ def set_processed_callback(self, callback, ignore_count=0):
+ """
+ This sets a function that will be called after a notification has been processed
+ (successfully or unsuccessfully).
+ """
+ self._processed_callback = callback
+ self._ignore_count = ignore_count
+
+ def finish(self, for_tests=False):
+ self._notifier.stopReading()
+ self._stats_provider.count('magic_folder.dirs_monitored', -1)
+ if for_tests and hasattr(self._notifier, 'wait_until_stopped'):
+ return self._notifier.wait_until_stopped()
+ else:
+ return defer.succeed(None)
+
+ def remove_service(self):
+ return service.MultiService.disownServiceParent(self)
+
+ def _log(self, msg):
+        self._client.log("magic folder: " + msg)
_check("helper.furl = pb://blah\n", "pb://blah")
@mock.patch('allmydata.util.log.msg')
- @mock.patch('allmydata.frontends.drop_upload.DropUploader')
- def test_create_drop_uploader(self, mock_drop_uploader, mock_log_msg):
- class MockDropUploader(service.MultiService):
- name = 'drop-upload'
+ @mock.patch('allmydata.frontends.magic_folder.MagicFolder')
+ def test_create_drop_uploader(self, mock_magic_folder, mock_log_msg):
+ class MockMagicFolder(service.MultiService):
+ name = 'magic-folder'
- def __init__(self, client, upload_dircap, parent_dircap, local_dir, dbfile, inotify=None,
+ def __init__(self, client, upload_dircap, collective_dircap, local_dir, dbfile, inotify=None,
pending_delay=1.0):
service.MultiService.__init__(self)
self.client = client
self.upload_dircap = upload_dircap
+ self.collective_dircap = collective_dircap
self.local_dir = local_dir
self.dbfile = dbfile
self.inotify = inotify
- mock_drop_uploader.side_effect = MockDropUploader
+ mock_magic_folder.side_effect = MockMagicFolder
upload_dircap = "URI:DIR2:blah"
local_dir_u = self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir")
config = (BASECONFIG +
"[storage]\n" +
"enabled = false\n" +
- "[drop_upload]\n" +
+ "[magic_folder]\n" +
"enabled = true\n")
- basedir1 = "test_client.Basic.test_create_drop_uploader1"
+ basedir1 = "test_client.Basic.test_create_magic_folder1"
os.mkdir(basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"), config)
- fileutil.write(os.path.join(basedir1, "private", "drop_upload_dircap"), "URI:DIR2:blah")
- fileutil.write(os.path.join(basedir1, "private", "magic_folder_parent_dircap"), "URI:DIR2:meow")
+ fileutil.write(os.path.join(basedir1, "private", "magic_folder_dircap"), "URI:DIR2:blah")
+ fileutil.write(os.path.join(basedir1, "private", "collective_dircap"), "URI:DIR2:meow")
self.failUnlessRaises(MissingConfigEntry, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
- config + "upload.dircap = " + upload_dircap + "\n")
+ config.replace("[magic_folder]\n", "[drop_upload]\n"))
self.failUnlessRaises(OldConfigOptionError, client.Client, basedir1)
fileutil.write(os.path.join(basedir1, "tahoe.cfg"),
config + "local.directory = " + local_dir_utf8 + "\n")
c1 = client.Client(basedir1)
- uploader = c1.getServiceNamed('drop-upload')
- self.failUnless(isinstance(uploader, MockDropUploader), uploader)
- self.failUnlessReallyEqual(uploader.client, c1)
- self.failUnlessReallyEqual(uploader.upload_dircap, upload_dircap)
- self.failUnlessReallyEqual(os.path.basename(uploader.local_dir), local_dir_u)
- self.failUnless(uploader.inotify is None, uploader.inotify)
- self.failUnless(uploader.running)
+ magicfolder = c1.getServiceNamed('magic-folder')
+ self.failUnless(isinstance(magicfolder, MockMagicFolder), magicfolder)
+ self.failUnlessReallyEqual(magicfolder.client, c1)
+ self.failUnlessReallyEqual(magicfolder.upload_dircap, upload_dircap)
+ self.failUnlessReallyEqual(os.path.basename(magicfolder.local_dir), local_dir_u)
+ self.failUnless(magicfolder.inotify is None, magicfolder.inotify)
+ self.failUnless(magicfolder.running)
class Boom(Exception):
pass
- mock_drop_uploader.side_effect = Boom()
+ mock_magic_folder.side_effect = Boom()
- basedir2 = "test_client.Basic.test_create_drop_uploader2"
+ basedir2 = "test_client.Basic.test_create_magic_folder2"
os.mkdir(basedir2)
os.mkdir(os.path.join(basedir2, "private"))
fileutil.write(os.path.join(basedir2, "tahoe.cfg"),
BASECONFIG +
- "[drop_upload]\n" +
+ "[magic_folder]\n" +
"enabled = true\n" +
"local.directory = " + local_dir_utf8 + "\n")
- fileutil.write(os.path.join(basedir2, "private", "drop_upload_dircap"), "URI:DIR2:blah")
- fileutil.write(os.path.join(basedir2, "private", "magic_folder_parent_dircap"), "URI:DIR2:meow")
+ fileutil.write(os.path.join(basedir2, "private", "magic_folder_dircap"), "URI:DIR2:blah")
+ fileutil.write(os.path.join(basedir2, "private", "collective_dircap"), "URI:DIR2:meow")
c2 = client.Client(basedir2)
- self.failUnlessRaises(KeyError, c2.getServiceNamed, 'drop-upload')
+ self.failUnlessRaises(KeyError, c2.getServiceNamed, 'magic-folder')
self.failUnless([True for arg in mock_log_msg.call_args_list if "Boom" in repr(arg)],
mock_log_msg.call_args_list)
+++ /dev/null
-
-import os, sys, stat, time
-
-from twisted.trial import unittest
-from twisted.internet import defer
-
-from allmydata.interfaces import IDirectoryNode, NoSuchChildError
-
-from allmydata.util import fake_inotify, fileutil
-from allmydata.util.encodingutil import get_filesystem_encoding, to_filepath
-from allmydata.util.consumer import download_to_data
-from allmydata.test.no_network import GridTestMixin
-from allmydata.test.common_util import ReallyEqualMixin, NonASCIIPathMixin
-from allmydata.test.common import ShouldFailMixin
-
-from allmydata.frontends import drop_upload
-from allmydata.frontends.drop_upload import DropUploader
-from allmydata import backupdb
-from allmydata.util.fileutil import abspath_expanduser_unicode
-
-
-class DropUploadTestMixin(GridTestMixin, ShouldFailMixin, ReallyEqualMixin, NonASCIIPathMixin):
- """
- These tests will be run both with a mock notifier, and (on platforms that support it)
- with the real INotify.
- """
-
- def setUp(self):
- GridTestMixin.setUp(self)
- temp = self.mktemp()
- self.basedir = abspath_expanduser_unicode(temp.decode(get_filesystem_encoding()))
- self.uploader = None
- self.dir_node = None
-
- def _get_count(self, name):
- return self.stats_provider.get_stats()["counters"].get(name, 0)
-
- def _createdb(self):
- dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.basedir)
- bdb = backupdb.get_backupdb(dbfile)
- self.failUnless(bdb, "unable to create backupdb from %r" % (dbfile,))
- self.failUnlessEqual(bdb.VERSION, 2)
- return bdb
-
- def _made_upload_dir(self, n):
- if self.dir_node == None:
- self.dir_node = n
- else:
- n = self.dir_node
- self.failUnless(IDirectoryNode.providedBy(n))
- self.upload_dirnode = n
- self.upload_dircap = n.get_uri()
- self.parent_dircap = "abc123"
-
- def _create_uploader(self, ign):
- dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.basedir)
- self.uploader = DropUploader(self.client, self.upload_dircap, self.parent_dircap, self.local_dir,
- dbfile, inotify=self.inotify, pending_delay=0.2)
- self.uploader.setServiceParent(self.client)
- self.uploader.upload_ready()
-
- # Prevent unclean reactor errors.
- def _cleanup(self, res):
- d = defer.succeed(None)
- if self.uploader is not None:
- d.addCallback(lambda ign: self.uploader.finish(for_tests=True))
- d.addCallback(lambda ign: res)
- return d
-
- def test_db_basic(self):
- fileutil.make_dirs(self.basedir)
- self._createdb()
-
- def test_db_persistence(self):
- """Test that a file upload creates an entry in the database."""
-
- fileutil.make_dirs(self.basedir)
- db = self._createdb()
-
- path = abspath_expanduser_unicode(u"myFile1", base=self.basedir)
- db.did_upload_file('URI:LIT:1', path, 0, 0, 33)
-
- c = db.cursor
- c.execute("SELECT size,mtime,ctime,fileid"
- " FROM local_files"
- " WHERE path=?",
- (path,))
- row = db.cursor.fetchone()
- self.failIfEqual(row, None)
-
- # Second test uses db.check_file instead of SQL query directly
- # to confirm the previous upload entry in the db.
- path = abspath_expanduser_unicode(u"myFile2", base=self.basedir)
- fileutil.write(path, "meow\n")
- s = os.stat(path)
- size = s[stat.ST_SIZE]
- ctime = s[stat.ST_CTIME]
- mtime = s[stat.ST_MTIME]
- db.did_upload_file('URI:LIT:2', path, mtime, ctime, size)
- r = db.check_file(path)
- self.failUnless(r.was_uploaded())
-
- def test_uploader_start_service(self):
- self.set_up_grid()
-
- self.local_dir = abspath_expanduser_unicode(self.unicode_or_fallback(u"l\u00F8cal_dir", u"local_dir"),
- base=self.basedir)
- self.mkdir_nonascii(self.local_dir)
-
- self.client = self.g.clients[0]
- self.stats_provider = self.client.stats_provider
-
- d = self.client.create_dirnode()
- d.addCallback(self._made_upload_dir)
- d.addCallback(self._create_uploader)
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.dirs_monitored'), 1))
- d.addBoth(self._cleanup)
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.dirs_monitored'), 0))
- return d
-
- def test_move_tree(self):
- self.set_up_grid()
-
- self.local_dir = abspath_expanduser_unicode(self.unicode_or_fallback(u"l\u00F8cal_dir", u"local_dir"),
- base=self.basedir)
- self.mkdir_nonascii(self.local_dir)
-
- self.client = self.g.clients[0]
- self.stats_provider = self.client.stats_provider
-
- empty_tree_name = self.unicode_or_fallback(u"empty_tr\u00EAe", u"empty_tree")
- empty_tree_dir = abspath_expanduser_unicode(empty_tree_name, base=self.basedir)
- new_empty_tree_dir = abspath_expanduser_unicode(empty_tree_name, base=self.local_dir)
-
- small_tree_name = self.unicode_or_fallback(u"small_tr\u00EAe", u"empty_tree")
- small_tree_dir = abspath_expanduser_unicode(small_tree_name, base=self.basedir)
- new_small_tree_dir = abspath_expanduser_unicode(small_tree_name, base=self.local_dir)
-
- d = self.client.create_dirnode()
- d.addCallback(self._made_upload_dir)
-
- d.addCallback(self._create_uploader)
-
- def _check_move_empty_tree(res):
- self.mkdir_nonascii(empty_tree_dir)
- d2 = defer.Deferred()
- self.uploader.set_processed_callback(d2.callback, ignore_count=0)
- os.rename(empty_tree_dir, new_empty_tree_dir)
- self.notify(to_filepath(new_empty_tree_dir), self.inotify.IN_MOVED_TO)
- return d2
- d.addCallback(_check_move_empty_tree)
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'), 1))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'), 0))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.directories_created'), 1))
-
- def _check_move_small_tree(res):
- self.mkdir_nonascii(small_tree_dir)
- fileutil.write(abspath_expanduser_unicode(u"what", base=small_tree_dir), "say when")
- d2 = defer.Deferred()
- self.uploader.set_processed_callback(d2.callback, ignore_count=1)
- os.rename(small_tree_dir, new_small_tree_dir)
- self.notify(to_filepath(new_small_tree_dir), self.inotify.IN_MOVED_TO)
- return d2
- d.addCallback(_check_move_small_tree)
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'), 3))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'), 1))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.directories_created'), 2))
-
- def _check_moved_tree_is_watched(res):
- d2 = defer.Deferred()
- self.uploader.set_processed_callback(d2.callback, ignore_count=0)
- fileutil.write(abspath_expanduser_unicode(u"another", base=new_small_tree_dir), "file")
- self.notify(to_filepath(abspath_expanduser_unicode(u"another", base=new_small_tree_dir)), self.inotify.IN_CLOSE_WRITE)
- return d2
- d.addCallback(_check_moved_tree_is_watched)
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'), 4))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'), 2))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.directories_created'), 2))
-
- # Files that are moved out of the upload directory should no longer be watched.
- def _move_dir_away(ign):
- os.rename(new_empty_tree_dir, empty_tree_dir)
- # Wuh? Why don't we get this event for the real test?
- #self.notify(to_filepath(new_empty_tree_dir), self.inotify.IN_MOVED_FROM)
- d.addCallback(_move_dir_away)
- def create_file(val):
- test_file = abspath_expanduser_unicode(u"what", base=empty_tree_dir)
- fileutil.write(test_file, "meow")
- return
- d.addCallback(create_file)
- d.addCallback(lambda ign: time.sleep(1))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'), 4))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_uploaded'), 2))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.directories_created'), 2))
-
- d.addBoth(self._cleanup)
- return d
-
- def test_persistence(self):
- """
- Perform an upload of a given file and then stop the client.
- Start a new client and uploader... and verify that the file is NOT uploaded
- a second time. This test is meant to test the database persistence along with
- the startup and shutdown code paths of the uploader.
- """
- self.set_up_grid()
- self.local_dir = abspath_expanduser_unicode(u"test_persistence", base=self.basedir)
- self.mkdir_nonascii(self.local_dir)
-
- self.client = self.g.clients[0]
- self.stats_provider = self.client.stats_provider
- d = self.client.create_dirnode()
- d.addCallback(self._made_upload_dir)
- d.addCallback(self._create_uploader)
-
- def create_file(val):
- d2 = defer.Deferred()
- self.uploader.set_processed_callback(d2.callback)
- test_file = abspath_expanduser_unicode(u"what", base=self.local_dir)
- fileutil.write(test_file, "meow")
- self.notify(to_filepath(test_file), self.inotify.IN_CLOSE_WRITE)
- return d2
- d.addCallback(create_file)
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'), 1))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- d.addCallback(self._cleanup)
-
- def _restart(ign):
- self.set_up_grid()
- self.client = self.g.clients[0]
- self.stats_provider = self.client.stats_provider
- d.addCallback(_restart)
- d.addCallback(self._create_uploader)
- d.addCallback(lambda ign: time.sleep(3))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'), 0))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- d.addBoth(self._cleanup)
- return d
-
- def test_drop_upload(self):
- self.set_up_grid()
- self.local_dir = os.path.join(self.basedir, self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir"))
- self.mkdir_nonascii(self.local_dir)
-
- self.client = self.g.clients[0]
- self.stats_provider = self.client.stats_provider
-
- d = self.client.create_dirnode()
-
- d.addCallback(self._made_upload_dir)
- d.addCallback(self._create_uploader)
-
- # Write something short enough for a LIT file.
- d.addCallback(lambda ign: self._check_file(u"short", "test"))
-
- # Write to the same file again with different data.
- d.addCallback(lambda ign: self._check_file(u"short", "different"))
-
- # Test that temporary files are not uploaded.
- d.addCallback(lambda ign: self._check_file(u"tempfile", "test", temporary=True))
-
- # Test that we tolerate creation of a subdirectory.
- d.addCallback(lambda ign: os.mkdir(os.path.join(self.local_dir, u"directory")))
-
- # Write something longer, and also try to test a Unicode name if the fs can represent it.
- name_u = self.unicode_or_fallback(u"l\u00F8ng", u"long")
- d.addCallback(lambda ign: self._check_file(name_u, "test"*100))
-
- # TODO: test that causes an upload failure.
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.files_failed'), 0))
-
- d.addBoth(self._cleanup)
- return d
-
- def _check_file(self, name_u, data, temporary=False):
- previously_uploaded = self._get_count('drop_upload.objects_succeeded')
- previously_disappeared = self._get_count('drop_upload.objects_disappeared')
-
- d = defer.Deferred()
-
- # Note: this relies on the fact that we only get one IN_CLOSE_WRITE notification per file
- # (otherwise we would get a defer.AlreadyCalledError). Should we be relying on that?
- self.uploader.set_processed_callback(d.callback)
-
- path_u = abspath_expanduser_unicode(name_u, base=self.local_dir)
- path = to_filepath(path_u)
-
- # We don't use FilePath.setContent() here because it creates a temporary file that
- # is renamed into place, which causes events that the test is not expecting.
- f = open(path_u, "wb")
- try:
- if temporary and sys.platform != "win32":
- os.unlink(path_u)
- f.write(data)
- finally:
- f.close()
- if temporary and sys.platform == "win32":
- os.unlink(path_u)
- self.notify(path, self.inotify.IN_DELETE)
- fileutil.flush_volume(path_u)
- self.notify(path, self.inotify.IN_CLOSE_WRITE)
-
- if temporary:
- d.addCallback(lambda ign: self.shouldFail(NoSuchChildError, 'temp file not uploaded', None,
- self.upload_dirnode.get, name_u))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_disappeared'),
- previously_disappeared + 1))
- else:
- d.addCallback(lambda ign: self.upload_dirnode.get(name_u))
- d.addCallback(download_to_data)
- d.addCallback(lambda actual_data: self.failUnlessReallyEqual(actual_data, data))
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_succeeded'),
- previously_uploaded + 1))
-
- d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('drop_upload.objects_queued'), 0))
- return d
-
-
-class MockTest(DropUploadTestMixin, unittest.TestCase):
- """This can run on any platform, and even if twisted.internet.inotify can't be imported."""
-
- def setUp(self):
- DropUploadTestMixin.setUp(self)
- self.inotify = fake_inotify
-
- def notify(self, path, mask):
- self.uploader._notifier.event(path, mask)
-
- def test_errors(self):
- self.set_up_grid()
-
- errors_dir = abspath_expanduser_unicode(u"errors_dir", base=self.basedir)
- os.mkdir(errors_dir)
- not_a_dir = abspath_expanduser_unicode(u"NOT_A_DIR", base=self.basedir)
- fileutil.write(not_a_dir, "")
- magicfolderdb = abspath_expanduser_unicode(u"magicfolderdb", base=self.basedir)
- doesnotexist = abspath_expanduser_unicode(u"doesnotexist", base=self.basedir)
-
- client = self.g.clients[0]
- d = client.create_dirnode()
- def _check_errors(n):
- self.failUnless(IDirectoryNode.providedBy(n))
- upload_dircap = n.get_uri()
- readonly_dircap = n.get_readonly_uri()
-
- self.shouldFail(AssertionError, 'nonexistent local.directory', 'there is no directory',
- DropUploader, client, upload_dircap, '', doesnotexist, magicfolderdb, inotify=fake_inotify)
- self.shouldFail(AssertionError, 'non-directory local.directory', 'is not a directory',
- DropUploader, client, upload_dircap, '', not_a_dir, magicfolderdb, inotify=fake_inotify)
- self.shouldFail(AssertionError, 'bad upload.dircap', 'does not refer to a directory',
- DropUploader, client, 'bad', '', errors_dir, magicfolderdb, inotify=fake_inotify)
- self.shouldFail(AssertionError, 'non-directory upload.dircap', 'does not refer to a directory',
- DropUploader, client, 'URI:LIT:foo', '', errors_dir, magicfolderdb, inotify=fake_inotify)
- self.shouldFail(AssertionError, 'readonly upload.dircap', 'is not a writecap to a directory',
- DropUploader, client, readonly_dircap, '', errors_dir, magicfolderdb, inotify=fake_inotify)
-
- def _not_implemented():
- raise NotImplementedError("blah")
- self.patch(drop_upload, 'get_inotify_module', _not_implemented)
- self.shouldFail(NotImplementedError, 'unsupported', 'blah',
- DropUploader, client, upload_dircap, '', errors_dir, magicfolderdb)
- d.addCallback(_check_errors)
- return d
-
-
-class RealTest(DropUploadTestMixin, unittest.TestCase):
- """This is skipped unless both Twisted and the platform support inotify."""
-
- def setUp(self):
- DropUploadTestMixin.setUp(self)
- self.inotify = drop_upload.get_inotify_module()
-
- def notify(self, path, mask):
- # Writing to the filesystem causes the notification.
- pass
-
-try:
- drop_upload.get_inotify_module()
-except NotImplementedError:
- RealTest.skip = "Drop-upload support can only be tested for-real on an OS that supports inotify or equivalent."
--- /dev/null
+
+import os, sys, stat, time
+
+from twisted.trial import unittest
+from twisted.internet import defer
+
+from allmydata.interfaces import IDirectoryNode, NoSuchChildError
+
+from allmydata.util import fake_inotify, fileutil
+from allmydata.util.encodingutil import get_filesystem_encoding, to_filepath
+from allmydata.util.consumer import download_to_data
+from allmydata.test.no_network import GridTestMixin
+from allmydata.test.common_util import ReallyEqualMixin, NonASCIIPathMixin
+from allmydata.test.common import ShouldFailMixin
+
+from allmydata.frontends import magic_folder
+from allmydata.frontends.magic_folder import MagicFolder
+from allmydata import backupdb
+from allmydata.util.fileutil import abspath_expanduser_unicode
+
+
+class MagicFolderTestMixin(GridTestMixin, ShouldFailMixin, ReallyEqualMixin, NonASCIIPathMixin):
+ """
+ These tests will be run both with a mock notifier, and (on platforms that support it)
+ with the real INotify.
+ """
+
+ def setUp(self):
+ GridTestMixin.setUp(self)
+ temp = self.mktemp()
+ self.basedir = abspath_expanduser_unicode(temp.decode(get_filesystem_encoding()))
+ self.magicfolder = None
+ self.dir_node = None
+
+ def _get_count(self, name):
+ return self.stats_provider.get_stats()["counters"].get(name, 0)
+
+ def _createdb(self):
+ dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.basedir)
+ bdb = backupdb.get_backupdb(dbfile)
+ self.failUnless(bdb, "unable to create backupdb from %r" % (dbfile,))
+ self.failUnlessEqual(bdb.VERSION, 2)
+ return bdb
+
+ def _made_upload_dir(self, n):
+        if self.dir_node is None:
+ self.dir_node = n
+ else:
+ n = self.dir_node
+ self.failUnless(IDirectoryNode.providedBy(n))
+ self.upload_dirnode = n
+ self.upload_dircap = n.get_uri()
+ self.collective_dircap = "abc123"
+
+ def _create_magicfolder(self, ign):
+ dbfile = abspath_expanduser_unicode(u"magicfolderdb.sqlite", base=self.basedir)
+ self.magicfolder = MagicFolder(self.client, self.upload_dircap, self.collective_dircap, self.local_dir,
+ dbfile, inotify=self.inotify, pending_delay=0.2)
+ self.magicfolder.setServiceParent(self.client)
+ self.magicfolder.upload_ready()
+
+ # Prevent unclean reactor errors.
+ def _cleanup(self, res):
+ d = defer.succeed(None)
+ if self.magicfolder is not None:
+ d.addCallback(lambda ign: self.magicfolder.finish(for_tests=True))
+ d.addCallback(lambda ign: res)
+ return d
+
+ def test_db_basic(self):
+ fileutil.make_dirs(self.basedir)
+ self._createdb()
+
+ def test_db_persistence(self):
+ """Test that a file upload creates an entry in the database."""
+
+ fileutil.make_dirs(self.basedir)
+ db = self._createdb()
+
+ path = abspath_expanduser_unicode(u"myFile1", base=self.basedir)
+ db.did_upload_file('URI:LIT:1', path, 0, 0, 33)
+
+ c = db.cursor
+ c.execute("SELECT size,mtime,ctime,fileid"
+ " FROM local_files"
+ " WHERE path=?",
+ (path,))
+ row = db.cursor.fetchone()
+ self.failIfEqual(row, None)
+
+ # Second test uses db.check_file instead of SQL query directly
+ # to confirm the previous upload entry in the db.
+ path = abspath_expanduser_unicode(u"myFile2", base=self.basedir)
+ fileutil.write(path, "meow\n")
+ s = os.stat(path)
+ size = s[stat.ST_SIZE]
+ ctime = s[stat.ST_CTIME]
+ mtime = s[stat.ST_MTIME]
+ db.did_upload_file('URI:LIT:2', path, mtime, ctime, size)
+ r = db.check_file(path)
+ self.failUnless(r.was_uploaded())
+
+ def test_magicfolder_start_service(self):
+ self.set_up_grid()
+
+ self.local_dir = abspath_expanduser_unicode(self.unicode_or_fallback(u"l\u00F8cal_dir", u"local_dir"),
+ base=self.basedir)
+ self.mkdir_nonascii(self.local_dir)
+
+ self.client = self.g.clients[0]
+ self.stats_provider = self.client.stats_provider
+
+ d = self.client.create_dirnode()
+ d.addCallback(self._made_upload_dir)
+ d.addCallback(self._create_magicfolder)
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.dirs_monitored'), 1))
+ d.addBoth(self._cleanup)
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.dirs_monitored'), 0))
+ return d
+
+ def test_move_tree(self):
+ self.set_up_grid()
+
+ self.local_dir = abspath_expanduser_unicode(self.unicode_or_fallback(u"l\u00F8cal_dir", u"local_dir"),
+ base=self.basedir)
+ self.mkdir_nonascii(self.local_dir)
+
+ self.client = self.g.clients[0]
+ self.stats_provider = self.client.stats_provider
+
+ empty_tree_name = self.unicode_or_fallback(u"empty_tr\u00EAe", u"empty_tree")
+ empty_tree_dir = abspath_expanduser_unicode(empty_tree_name, base=self.basedir)
+ new_empty_tree_dir = abspath_expanduser_unicode(empty_tree_name, base=self.local_dir)
+
+        small_tree_name = self.unicode_or_fallback(u"small_tr\u00EAe", u"small_tree")
+ small_tree_dir = abspath_expanduser_unicode(small_tree_name, base=self.basedir)
+ new_small_tree_dir = abspath_expanduser_unicode(small_tree_name, base=self.local_dir)
+
+ d = self.client.create_dirnode()
+ d.addCallback(self._made_upload_dir)
+
+ d.addCallback(self._create_magicfolder)
+
+ def _check_move_empty_tree(res):
+ self.mkdir_nonascii(empty_tree_dir)
+ d2 = defer.Deferred()
+ self.magicfolder.set_processed_callback(d2.callback, ignore_count=0)
+ os.rename(empty_tree_dir, new_empty_tree_dir)
+ self.notify(to_filepath(new_empty_tree_dir), self.inotify.IN_MOVED_TO)
+ return d2
+ d.addCallback(_check_move_empty_tree)
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'), 1))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.files_uploaded'), 0))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.directories_created'), 1))
+
+ def _check_move_small_tree(res):
+ self.mkdir_nonascii(small_tree_dir)
+ fileutil.write(abspath_expanduser_unicode(u"what", base=small_tree_dir), "say when")
+ d2 = defer.Deferred()
+ self.magicfolder.set_processed_callback(d2.callback, ignore_count=1)
+ os.rename(small_tree_dir, new_small_tree_dir)
+ self.notify(to_filepath(new_small_tree_dir), self.inotify.IN_MOVED_TO)
+ return d2
+ d.addCallback(_check_move_small_tree)
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'), 3))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.files_uploaded'), 1))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.directories_created'), 2))
+
+ def _check_moved_tree_is_watched(res):
+ d2 = defer.Deferred()
+ self.magicfolder.set_processed_callback(d2.callback, ignore_count=0)
+ fileutil.write(abspath_expanduser_unicode(u"another", base=new_small_tree_dir), "file")
+ self.notify(to_filepath(abspath_expanduser_unicode(u"another", base=new_small_tree_dir)), self.inotify.IN_CLOSE_WRITE)
+ return d2
+ d.addCallback(_check_moved_tree_is_watched)
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'), 4))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.files_uploaded'), 2))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.directories_created'), 2))
+
+ # Files that are moved out of the upload directory should no longer be watched.
+ def _move_dir_away(ign):
+ os.rename(new_empty_tree_dir, empty_tree_dir)
+ # Note: the real (inotify) test does not deliver an IN_MOVED_FROM event for
+ # this rename, so we do not simulate one here:
+ #self.notify(to_filepath(new_empty_tree_dir), self.inotify.IN_MOVED_FROM)
+ d.addCallback(_move_dir_away)
+ def create_file(val):
+ test_file = abspath_expanduser_unicode(u"what", base=empty_tree_dir)
+ fileutil.write(test_file, "meow")
+ d.addCallback(create_file)
+ d.addCallback(lambda ign: time.sleep(1))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'), 4))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.files_uploaded'), 2))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.directories_created'), 2))
+
+ d.addBoth(self._cleanup)
+ return d
+
+ def test_persistence(self):
+ """
+ Perform an upload of a given file and then stop the client.
+ Start a new client and magic-folder service, and verify that the file is NOT
+ uploaded a second time. This exercises the database persistence logic together
+ with the startup and shutdown code paths of the magic-folder service.
+ """
+ self.set_up_grid()
+ self.local_dir = abspath_expanduser_unicode(u"test_persistence", base=self.basedir)
+ self.mkdir_nonascii(self.local_dir)
+
+ self.client = self.g.clients[0]
+ self.stats_provider = self.client.stats_provider
+ d = self.client.create_dirnode()
+ d.addCallback(self._made_upload_dir)
+ d.addCallback(self._create_magicfolder)
+
+ def create_file(val):
+ d2 = defer.Deferred()
+ self.magicfolder.set_processed_callback(d2.callback)
+ test_file = abspath_expanduser_unicode(u"what", base=self.local_dir)
+ fileutil.write(test_file, "meow")
+ self.notify(to_filepath(test_file), self.inotify.IN_CLOSE_WRITE)
+ return d2
+ d.addCallback(create_file)
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'), 1))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ d.addCallback(self._cleanup)
+
+ def _restart(ign):
+ self.set_up_grid()
+ self.client = self.g.clients[0]
+ self.stats_provider = self.client.stats_provider
+ d.addCallback(_restart)
+ d.addCallback(self._create_magicfolder)
+ d.addCallback(lambda ign: time.sleep(3))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'), 0))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ d.addBoth(self._cleanup)
+ return d
+
+ def test_magic_folder(self):
+ self.set_up_grid()
+ self.local_dir = os.path.join(self.basedir, self.unicode_or_fallback(u"loc\u0101l_dir", u"local_dir"))
+ self.mkdir_nonascii(self.local_dir)
+
+ self.client = self.g.clients[0]
+ self.stats_provider = self.client.stats_provider
+
+ d = self.client.create_dirnode()
+
+ d.addCallback(self._made_upload_dir)
+ d.addCallback(self._create_magicfolder)
+
+ # Write something short enough for a LIT file.
+ d.addCallback(lambda ign: self._check_file(u"short", "test"))
+
+ # Write to the same file again with different data.
+ d.addCallback(lambda ign: self._check_file(u"short", "different"))
+
+ # Test that temporary files are not uploaded.
+ d.addCallback(lambda ign: self._check_file(u"tempfile", "test", temporary=True))
+
+ # Test that we tolerate creation of a subdirectory.
+ d.addCallback(lambda ign: os.mkdir(os.path.join(self.local_dir, u"directory")))
+
+ # Write something longer, and also test a Unicode name if the filesystem can represent it.
+ name_u = self.unicode_or_fallback(u"l\u00F8ng", u"long")
+ d.addCallback(lambda ign: self._check_file(name_u, "test"*100))
+
+ # TODO: add a test that causes an upload failure.
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.files_failed'), 0))
+
+ d.addBoth(self._cleanup)
+ return d
+
+ def _check_file(self, name_u, data, temporary=False):
+ previously_uploaded = self._get_count('magic_folder.objects_succeeded')
+ previously_disappeared = self._get_count('magic_folder.objects_disappeared')
+
+ d = defer.Deferred()
+
+ # Note: this relies on the fact that we only get one IN_CLOSE_WRITE notification per file
+ # (otherwise we would get a defer.AlreadyCalledError). Should we be relying on that?
+ self.magicfolder.set_processed_callback(d.callback)
+
+ path_u = abspath_expanduser_unicode(name_u, base=self.local_dir)
+ path = to_filepath(path_u)
+
+ # We don't use FilePath.setContent() here because it creates a temporary file that
+ # is renamed into place, which causes events that the test is not expecting.
+ f = open(path_u, "wb")
+ try:
+ if temporary and sys.platform != "win32":
+ os.unlink(path_u)
+ f.write(data)
+ finally:
+ f.close()
+ if temporary and sys.platform == "win32":
+ os.unlink(path_u)
+ self.notify(path, self.inotify.IN_DELETE)
+ fileutil.flush_volume(path_u)
+ self.notify(path, self.inotify.IN_CLOSE_WRITE)
+
+ if temporary:
+ d.addCallback(lambda ign: self.shouldFail(NoSuchChildError, 'temp file not uploaded', None,
+ self.upload_dirnode.get, name_u))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_disappeared'),
+ previously_disappeared + 1))
+ else:
+ d.addCallback(lambda ign: self.upload_dirnode.get(name_u))
+ d.addCallback(download_to_data)
+ d.addCallback(lambda actual_data: self.failUnlessReallyEqual(actual_data, data))
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_succeeded'),
+ previously_uploaded + 1))
+
+ d.addCallback(lambda ign: self.failUnlessReallyEqual(self._get_count('magic_folder.objects_queued'), 0))
+ return d
+
+
+class MockTest(MagicFolderTestMixin, unittest.TestCase):
+ """These tests can run on any platform, even if twisted.internet.inotify can't be imported."""
+
+ def setUp(self):
+ MagicFolderTestMixin.setUp(self)
+ self.inotify = fake_inotify
+
+ def notify(self, path, mask):
+ self.magicfolder._notifier.event(path, mask)
+
+ def test_errors(self):
+ self.set_up_grid()
+
+ errors_dir = abspath_expanduser_unicode(u"errors_dir", base=self.basedir)
+ os.mkdir(errors_dir)
+ not_a_dir = abspath_expanduser_unicode(u"NOT_A_DIR", base=self.basedir)
+ fileutil.write(not_a_dir, "")
+ magicfolderdb = abspath_expanduser_unicode(u"magicfolderdb", base=self.basedir)
+ doesnotexist = abspath_expanduser_unicode(u"doesnotexist", base=self.basedir)
+
+ client = self.g.clients[0]
+ d = client.create_dirnode()
+ def _check_errors(n):
+ self.failUnless(IDirectoryNode.providedBy(n))
+ upload_dircap = n.get_uri()
+ readonly_dircap = n.get_readonly_uri()
+
+ self.shouldFail(AssertionError, 'nonexistent local.directory', 'there is no directory',
+ MagicFolder, client, upload_dircap, '', doesnotexist, magicfolderdb, inotify=fake_inotify)
+ self.shouldFail(AssertionError, 'non-directory local.directory', 'is not a directory',
+ MagicFolder, client, upload_dircap, '', not_a_dir, magicfolderdb, inotify=fake_inotify)
+ self.shouldFail(AssertionError, 'bad upload.dircap', 'does not refer to a directory',
+ MagicFolder, client, 'bad', '', errors_dir, magicfolderdb, inotify=fake_inotify)
+ self.shouldFail(AssertionError, 'non-directory upload.dircap', 'does not refer to a directory',
+ MagicFolder, client, 'URI:LIT:foo', '', errors_dir, magicfolderdb, inotify=fake_inotify)
+ self.shouldFail(AssertionError, 'readonly upload.dircap', 'is not a writecap to a directory',
+ MagicFolder, client, readonly_dircap, '', errors_dir, magicfolderdb, inotify=fake_inotify)
+
+ def _not_implemented():
+ raise NotImplementedError("blah")
+ self.patch(magic_folder, 'get_inotify_module', _not_implemented)
+ self.shouldFail(NotImplementedError, 'unsupported', 'blah',
+ MagicFolder, client, upload_dircap, '', errors_dir, magicfolderdb)
+ d.addCallback(_check_errors)
+ return d
+
+
+class RealTest(MagicFolderTestMixin, unittest.TestCase):
+ """This is skipped unless both Twisted and the platform support inotify."""
+
+ def setUp(self):
+ MagicFolderTestMixin.setUp(self)
+ self.inotify = magic_folder.get_inotify_module()
+
+ def notify(self, path, mask):
+ # Writing to the filesystem causes the notification.
+ pass
+
+try:
+ magic_folder.get_inotify_module()
+except NotImplementedError:
+ RealTest.skip = "Magic Folder support can only be tested for-real on an OS that supports inotify or equivalent."