Also remove a couple of vestigial references to figleaf, which is long gone.
fixes #1409 (remove contrib/fuse)
check: test
-fuse-test: .built
- $(RUNPP) -d contrib/fuse -p -c runtests.py
-
test-coverage: build src/allmydata/_version.py
rm -f .coverage
$(TAHOE) debug trial --reporter=bwverbose-coverage $(TEST)
Release 1.9.0 (2011-??-??)
--------------------------
-
+- The unmaintained FUSE plugins were removed from the source tree. See
+ docs/frontends/FTP-and-SFTP.rst for how to use sshfs. (`#1409`_)
- Nodes now emit "None" for percentiles with higher implied precision
than the number of observations can support. Older stats gatherers
will throw an exception if they gather stats from a new storage
server and it sends a "None" for a percentile. (`#1392`_)
+.. _`#1409`: http://tahoe-lafs.org/trac/tahoe-lafs/ticket/1409
Release 1.8.2 (2011-01-30)
--------------------------
+++ /dev/null
-This directory contains code and extensions which are not strictly a part
-of Tahoe. They may or may not currently work.
-
+++ /dev/null
-
-Welcome to the tahoe fuse interface prototype!
-
-
-Dependencies:
-
-In addition to a working tahoe installation, this interface depends
-on the python-fuse bindings. The package is available on Ubuntu
-systems as "python-fuse". It is only known to work with the Ubuntu
-package version "2.5-5build1". The latest Ubuntu package (version
-"1:0.2-pre3-3") does not currently appear to work.
-
-Unfortunately this package appears poorly maintained (notice the wildly
-different version strings and changing API semantics), so if you know
-of a good replacement pythonic fuse interface, please let tahoe-dev know
-about it!
-
-
-Configuration:
-
-Currently tahoe-fuse.py uses the same ~/.tahoe/private/root_dir.cap
-file (which is also the CLI default). This is not configurable yet.
-Place a directory cap in this file. (Hint: If you can run "tahoe ls"
-and see a directory listing, this file is properly configured.)
-
-
-Commandline:
-
-The usage is "tahoe-fuse.py <mountpoint>". The mount point needs to
-be an existing directory which should be empty. (If it's not empty
-the contents will be safe, but unavailable while the tahoe-fuse.py
-process is mounted there.)
-
-
-Usage:
-
-To use the interface, use other programs to poke around the
-mountpoint. You should be able to see the same contents as you would
-by using the CLI or WUI for the same directory cap.
-
-
-Runtime Behavior Notes:
-
-Read-only:
-Only reading a tahoe grid is supported, which is reflected in
-the permission modes. With Tahoe 0.7.0, write access should be easier
-to implement, but is not yet present.
-
-In-Memory File Caching:
-Currently requesting a particular file for read causes the entire file to
-be retrieved into tahoe-fuse.py's memory before the read operation returns!
-This caching is reused for subsequent reads. Beware large files.
-When transitioning to a finer-grained fuse api, this caching should be
-replaced with straightforward calls to the WAPI. In my opinion, the
-Tahoe node should do all the caching tricks, so that extensions such as
-tahoe-fuse.py can be simple and thin.
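The whole-file caching described above reduces to a small pattern. Here is a minimal sketch of it in isolation (the `fetch` callable is a stand-in for the HTTP GET against the node's WAPI, not part of the real code):

```python
class FileCache(object):
    """Sketch of tahoe-fuse.py's read path: cache whole files in memory."""

    def __init__(self, fetch):
        self.fetch = fetch      # callable: path -> entire file contents
        self.contents = {}      # {path: cached contents}

    def read(self, path, length, offset):
        data = self.contents.get(path)
        if data is None:
            data = self.fetch(path)   # the *entire* file is retrieved here
            self.contents[path] = data
        return data[offset:offset + length]

    def release(self, path):
        # Drop the cache entry when the handle is closed.
        self.contents.pop(path, None)
```

Note that `fetch` runs at most once per path until `release`, which is exactly why large files are a hazard.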
-
-Backgrounding Behavior:
-When using the 2.5-5build1 ubuntu package, and no other arguments
-besides a mountpoint to tahoe-fuse.py, the process should remain in
-the foreground and print debug information. Other python-fuse
-versions appear to alter this behavior and may fork the process to
-the background and obscure the log output. Bonus points to whoever
-discovers the fate of these poor log messages in this case.
-
-"Investigative Logging":
-This prototype is designed to aid in further fuse development, so
-currently *every* fuse interface call figures out the process from
-which the file system request originates, then it figures out that
-process's command line (this uses the /proc file system). This is handy
-for interactive inspection of what kinds of behavior invokes which
-file system operations, but may not work for you. To disable this
-inspection, edit the source and comment out all of the "@trace_calls"
-method decorators by inserting a '#' so each line reads "#@trace_calls"
-(without quotes).
-
-Not-to-spec:
-The current version was not implemented according to any spec and
-makes quite a few dubious "guesses" for what data to pass the fuse
-interface. You may see bizarre values, which may potentially confuse
-any processes visiting the files under the mount point.
-
-Serial, blocking operations:
-Most fuse operations result in one or more http calls to the WAPI.
-These are serial and blocking (at least for the tested python-fuse
-version 2.5-5build1), so access to this file system is quite
-inefficient.
-
-
-Good luck!
+++ /dev/null
-#! /usr/bin/env python
-'''
-Tahoe thin-client fuse module.
-
-See the accompanying README for configuration/usage details.
-
-Goals:
-
-- Delegate to Tahoe webapi as much as possible.
-- Thin rather than clever. (Even when that means clunky.)
-
-
-Warts:
-
-- Reads cache entire file contents, violating the thinness goal. Can we GET spans of files?
-- Single threaded.
-
-
-Road-map:
-1. Add unit tests where possible with little code modification.
-2. Make unit tests pass for a variety of python-fuse module versions.
-3. Modify the design to make possible unit test coverage of larger portions of code.
-
-Wishlist:
-- Perhaps integrate cli aliases or root_dir.cap.
-- Research pkg_resources; see if it can replace the try-import-except-import-error pattern.
-- Switch to the standard library logging module instead of homebrew logging.
-'''
-
-
-#import bindann
-#bindann.install_exception_handler()
-
-import sys, stat, os, errno, urllib, time
-
-try:
- import simplejson
-except ImportError, e:
- raise SystemExit('''\
-Could not import simplejson, which is bundled with Tahoe. Please
-update your PYTHONPATH environment variable to include the tahoe
-"support/lib/python<VERSION>/site-packages" directory.
-
-If you run this from the Tahoe source directory, use this command:
-PYTHONPATH="$PYTHONPATH:./support/lib/python%d.%d/site-packages/" python %s
-''' % (sys.version_info[:2] + (' '.join(sys.argv),)))
-
-
-try:
- import fuse
-except ImportError, e:
- raise SystemExit('''\
-Could not import fuse, the pythonic fuse bindings. This dependency
-of tahoe-fuse.py is *not* bundled with tahoe. Please install it.
-On debian/ubuntu systems run: sudo apt-get install python-fuse
-''')
-
-# FIXME: Check for non-working fuse versions here.
-# FIXME: Make this work for all common python-fuse versions.
-
-# FIXME: Currently uses the old, silly path-based (non-stateful) interface:
-fuse.fuse_python_api = (0, 1) # Use the silly path-based api for now.
-
-
-### Config:
-TahoeConfigDir = '~/.tahoe'
-MagicDevNumber = 42
-UnknownSize = -1
-
-
-def main():
- basedir = os.path.expanduser(TahoeConfigDir)
-
- for i, arg in enumerate(sys.argv):
- if arg == '--basedir':
- try:
- basedir = sys.argv[i+1]
- sys.argv[i:i+2] = []
- except IndexError:
- sys.argv = [sys.argv[0], '--help']
-
-
- log_init(basedir)
- log('Commandline: %r', sys.argv)
-
- fs = TahoeFS(basedir)
- fs.main()
-
-
-### Utilities for debug:
-_logfile = None # Private to log* functions.
-
-def log_init(confdir):
- global _logfile
-
- logpath = os.path.join(confdir, 'logs', 'tahoe_fuse.log')
- _logfile = open(logpath, 'a')
- log('Log opened at: %s\n', time.strftime('%Y-%m-%d %H:%M:%S'))
-
-
-def log(msg, *args):
- _logfile.write((msg % args) + '\n')
- _logfile.flush()
-
-
-def trace_calls(m):
- def dbmeth(self, *a, **kw):
- pid = self.GetContext()['pid']
- log('[%d %r]\n%s%r%r', pid, get_cmdline(pid), m.__name__, a, kw)
- try:
- r = m(self, *a, **kw)
- if (type(r) is int) and (r < 0):
- log('-> -%s\n', errno.errorcode[-r],)
- else:
- repstr = repr(r)[:256]
- log('-> %s\n', repstr)
- return r
- except:
- sys.excepthook(*sys.exc_info())
-
- return dbmeth
-
-
-def get_cmdline(pid):
- f = open('/proc/%d/cmdline' % pid, 'r')
- args = f.read().split('\0')
- f.close()
- assert args[-1] == ''
- return args[:-1]
-
-
-class SystemError (Exception):
- def __init__(self, eno):
- self.eno = eno
- Exception.__init__(self, errno.errorcode[eno])
-
- @staticmethod
- def wrap_returns(meth):
- def wrapper(*args, **kw):
- try:
- return meth(*args, **kw)
- except SystemError, e:
- return -e.eno
- wrapper.__name__ = meth.__name__
- return wrapper
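The errno-mapping decorator above is the whole error protocol of the path-based FUSE API in miniature: exceptions carrying an errno become negative return values. A sketch of the same pattern in modern syntax (the class is renamed here so the sketch does not shadow the builtin `SystemError`, as the original class does):

```python
import errno

class FsError(Exception):
    # Carries a POSIX errno; renamed from the original's SystemError.
    def __init__(self, eno):
        self.eno = eno
        Exception.__init__(self, errno.errorcode[eno])

def wrap_returns(meth):
    # The path-based FUSE API signals failure by returning -errno.
    def wrapper(*args, **kw):
        try:
            return meth(*args, **kw)
        except FsError as e:
            return -e.eno
    wrapper.__name__ = meth.__name__
    return wrapper

@wrap_returns
def lookup(path):
    # Hypothetical operation used only to exercise the decorator.
    if path != '/':
        raise FsError(errno.ENOENT)
    return 0
```

With this in place, each filesystem method can raise on failure and the decorator translates it for the kernel.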
-
-
-### Heart of the Matter:
-class TahoeFS (fuse.Fuse):
- def __init__(self, confdir):
- log('Initializing with confdir = %r', confdir)
- fuse.Fuse.__init__(self)
- self.confdir = confdir
-
- self.flags = 0 # FIXME: What goes here?
- self.multithreaded = 0
-
- # silly path-based file handles.
- self.filecontents = {} # {path -> contents}
-
- self._init_url()
- self._init_rootdir()
-
- def _init_url(self):
- if os.path.exists(os.path.join(self.confdir, 'node.url')):
- self.url = file(os.path.join(self.confdir, 'node.url'), 'rb').read().strip()
- if not self.url.endswith('/'):
- self.url += '/'
- else:
- f = open(os.path.join(self.confdir, 'webport'), 'r')
- contents = f.read()
- f.close()
- fields = contents.split(':')
- proto, port = fields[:2]
- assert proto == 'tcp'
- port = int(port)
- self.url = 'http://localhost:%d' % (port,)
-
- def _init_rootdir(self):
- # For now we just use the same default as the CLI:
- rootdirfn = os.path.join(self.confdir, 'private', 'root_dir.cap')
- try:
- f = open(rootdirfn, 'r')
- cap = f.read().strip()
- f.close()
- except EnvironmentError, le:
- # FIXME: This user-friendly help message may be platform-dependent because it checks the exception description.
- if le.args[1].find('No such file or directory') != -1:
- raise SystemExit('%s requires a directory capability in %s, but it was not found.\n' % (sys.argv[0], rootdirfn))
- else:
- raise le
-
- self.rootdir = TahoeDir(self.url, canonicalize_cap(cap))
-
- def _get_node(self, path):
- assert path.startswith('/')
- if path == '/':
- return self.rootdir.resolve_path([])
- else:
- parts = path.split('/')[1:]
- return self.rootdir.resolve_path(parts)
-
- def _get_contents(self, path):
- contents = self.filecontents.get(path)
- if contents is None:
- node = self._get_node(path)
- contents = node.open().read()
- self.filecontents[path] = contents
- return contents
-
- @trace_calls
- @SystemError.wrap_returns
- def getattr(self, path):
- node = self._get_node(path)
- return node.getattr()
-
- @trace_calls
- @SystemError.wrap_returns
- def getdir(self, path):
- """
- return: [(name, typeflag), ... ]
- """
- node = self._get_node(path)
- return node.getdir()
-
- @trace_calls
- @SystemError.wrap_returns
- def mythread(self):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def chmod(self, path, mode):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def chown(self, path, uid, gid):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def fsync(self, path, isFsyncFile):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def link(self, target, link):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def mkdir(self, path, mode):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def mknod(self, path, mode, dev_ignored):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def open(self, path, mode):
- IgnoredFlags = os.O_RDONLY | os.O_NONBLOCK | os.O_SYNC | os.O_LARGEFILE
- # Note: IgnoredFlags are all ignored!
- for fname in dir(os):
- if fname.startswith('O_'):
- flag = getattr(os, fname)
- if flag & IgnoredFlags:
- continue
- elif mode & flag:
- log('Flag not supported: %s', fname)
- raise SystemError(errno.ENOSYS)
-
- self._get_contents(path)
- return 0
-
- @trace_calls
- @SystemError.wrap_returns
- def read(self, path, length, offset):
- return self._get_contents(path)[offset:offset+length]
-
- @trace_calls
- @SystemError.wrap_returns
- def release(self, path):
- del self.filecontents[path]
- return 0
-
- @trace_calls
- @SystemError.wrap_returns
- def readlink(self, path):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def rename(self, oldpath, newpath):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def rmdir(self, path):
- return -errno.ENOSYS
-
- #@trace_calls
- @SystemError.wrap_returns
- def statfs(self):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def symlink ( self, targetPath, linkPath ):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def truncate(self, path, size):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def unlink(self, path):
- return -errno.ENOSYS
-
- @trace_calls
- @SystemError.wrap_returns
- def utime(self, path, times):
- return -errno.ENOSYS
-
-
-class TahoeNode (object):
- NextInode = 0
-
- @staticmethod
- def make(baseurl, uri):
- typefield = uri.split(':', 2)[1]
- # FIXME: is this check correct?
- if uri.find('URI:DIR2') != -1:
- return TahoeDir(baseurl, uri)
- else:
- return TahoeFile(baseurl, uri)
-
- def __init__(self, baseurl, uri):
- if not baseurl.endswith('/'):
- baseurl += '/'
- self.burl = baseurl
- self.uri = uri
- self.fullurl = '%suri/%s' % (self.burl, self.uri)
- self.inode = TahoeNode.NextInode
- TahoeNode.NextInode += 1
-
- def getattr(self):
- """
- - st_mode (protection bits)
- - st_ino (inode number)
- - st_dev (device)
- - st_nlink (number of hard links)
- - st_uid (user ID of owner)
- - st_gid (group ID of owner)
- - st_size (size of file, in bytes)
- - st_atime (time of most recent access)
- - st_mtime (time of most recent content modification)
- - st_ctime (platform dependent; time of most recent metadata change on Unix,
- or the time of creation on Windows).
- """
- # FIXME: Return metadata that isn't completely fabricated.
- return (self.get_mode(),
- self.inode,
- MagicDevNumber,
- self.get_linkcount(),
- os.getuid(),
- os.getgid(),
- self.get_size(),
- 0,
- 0,
- 0)
-
- def get_metadata(self):
- f = self.open('?t=json')
- json = f.read()
- f.close()
- return simplejson.loads(json)
-
- def open(self, postfix=''):
- url = self.fullurl + postfix
- log('*** Fetching: %r', url)
- return urllib.urlopen(url)
-
-
-class TahoeFile (TahoeNode):
- def __init__(self, baseurl, uri):
- #assert uri.split(':', 2)[1] in ('CHK', 'LIT'), `uri` # fails as of 0.7.0
- TahoeNode.__init__(self, baseurl, uri)
-
- # nonfuse:
- def get_mode(self):
- return stat.S_IFREG | 0400 # Read only regular file.
-
- def get_linkcount(self):
- return 1
-
- def get_size(self):
- rawsize = self.get_metadata()[1]['size']
- if type(rawsize) is not int: # FIXME: What about sizes which do not fit in python int?
- assert rawsize == u'?', `rawsize`
- return UnknownSize
- else:
- return rawsize
-
- def resolve_path(self, path):
- assert path == []
- return self
-
-
-class TahoeDir (TahoeNode):
- def __init__(self, baseurl, uri):
- TahoeNode.__init__(self, baseurl, uri)
-
- self.mode = stat.S_IFDIR | 0500 # Read only directory.
-
- # FUSE:
- def getdir(self):
- d = [('.', self.get_mode()), ('..', self.get_mode())]
- for name, child in self.get_children().items():
- if name: # Just ignore this crazy case!
- d.append((name, child.get_mode()))
- return d
-
- # nonfuse:
- def get_mode(self):
- return stat.S_IFDIR | 0500 # Read only directory.
-
- def get_linkcount(self):
- return len(self.getdir())
-
- def get_size(self):
- return 2 ** 12 # FIXME: What do we return here? len(self.get_metadata())
-
- def resolve_path(self, path):
- assert type(path) is list
-
- if path:
- head = path[0]
- child = self.get_child(head)
- return child.resolve_path(path[1:])
- else:
- return self
-
- def get_child(self, name):
- c = self.get_children()
- return c[name]
-
- def get_children(self):
- flag, md = self.get_metadata()
- assert flag == 'dirnode'
-
- c = {}
- for name, (childflag, childmd) in md['children'].items():
- if childflag == 'dirnode':
- cls = TahoeDir
- else:
- cls = TahoeFile
-
- c[str(name)] = cls(self.burl, childmd['ro_uri'])
- return c
-
-
-def canonicalize_cap(cap):
- cap = urllib.unquote(cap)
- i = cap.find('URI:')
- assert i != -1, 'A cap must contain "URI:...", but this does not: ' + cap
- return cap[i:]
-
-
-if __name__ == '__main__':
- main()
-
+++ /dev/null
-This announcement is archived in the tahoe-dev mailing list archive:
-
-http://allmydata.org/pipermail/tahoe-dev/2008-March/000465.html
-
-[tahoe-dev] Another FUSE interface
-Armin Rigo arigo at tunes.org
-Sat Mar 29 04:35:36 PDT 2008
-
- * Previous message: [tahoe-dev] announcing allmydata.org "Tahoe", v1.0
- * Next message: [tahoe-dev] convergent encryption reconsidered -- salting and key-strengthening
- * Messages sorted by: [ date ] [ thread ] [ subject ] [ author ]
-
-Hi all,
-
-I implemented for fun another Tahoe-to-FUSE interface using my own set
-of FUSE bindings. If you are interested, you can check out the
-following subversion directory:
-
- http://codespeak.net/svn/user/arigo/hack/pyfuse
-
-tahoe.py is a 100-lines, half-an-hour-job interface to Tahoe, limited to
-read-only at the moment. The rest of the directory contains PyFuse, and
-many other small usage examples. PyFuse is a pure Python FUSE daemon
-(no messy linking issues, no dependencies).
-
-
-A bientot,
-
-Armin Rigo
-
-
+++ /dev/null
-from UserDict import DictMixin
-
-
-DELETED = object()
-
-
-class OrderedDict(DictMixin):
-
- def __init__(self, *args, **kwds):
- self.clear()
- self.update(*args, **kwds)
-
- def clear(self):
- self._keys = []
- self._content = {} # {key: (index, value)}
- self._deleted = 0
-
- def copy(self):
- return OrderedDict(self)
-
- def __iter__(self):
- for key in self._keys:
- if key is not DELETED:
- yield key
-
- def keys(self):
- return [key for key in self._keys if key is not DELETED]
-
- def popitem(self):
- while 1:
- try:
- k = self._keys.pop()
- except IndexError:
- raise KeyError, 'OrderedDict is empty'
- if k is not DELETED:
- return k, self._content.pop(k)[1]
-
- def __getitem__(self, key):
- index, value = self._content[key]
- return value
-
- def __setitem__(self, key, value):
- try:
- index, oldvalue = self._content[key]
- except KeyError:
- index = len(self._keys)
- self._keys.append(key)
- self._content[key] = index, value
-
- def __delitem__(self, key):
- index, oldvalue = self._content.pop(key)
- self._keys[index] = DELETED
- if self._deleted <= len(self._content):
- self._deleted += 1
- else:
- # compress
- newkeys = []
- for k in self._keys:
- if k is not DELETED:
- i, value = self._content[k]
- self._content[k] = len(newkeys), value
- newkeys.append(k)
- self._keys = newkeys
- self._deleted = 0
-
- def __len__(self):
- return len(self._content)
-
- def __repr__(self):
- res = ['%r: %r' % (key, self._content[key][1]) for key in self]
- return 'OrderedDict(%s)' % (', '.join(res),)
-
- def __cmp__(self, other):
- if not isinstance(other, OrderedDict):
- return NotImplemented
- keys = self.keys()
- r = cmp(keys, other.keys())
- if r:
- return r
- for k in keys:
- r = cmp(self[k], other[k])
- if r:
- return r
- return 0
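The deletion strategy in `__delitem__` above (tombstone the key's slot, then compress the key list once tombstones outnumber live entries) can be exercised in isolation. A sketch of just those two steps, operating on the same `keys`/`content` shapes the class uses:

```python
DELETED = object()  # tombstone sentinel, compared by identity

def delete_key(keys, content, key):
    # Replace the key's slot with a tombstone instead of shifting the list.
    index, _ = content.pop(key)
    keys[index] = DELETED

def compress(keys, content):
    # Rebuild the key list, renumbering the surviving entries.
    newkeys = []
    for k in keys:
        if k is not DELETED:
            i, value = content[k]
            content[k] = len(newkeys), value
            newkeys.append(k)
    return newkeys
```

Tombstoning keeps deletion O(1); the occasional compression pass restores the dense indexing.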
+++ /dev/null
-import os, stat, py, select
-import inspect
-from objectfs import ObjectFs
-
-
-BLOCKSIZE = 8192
-
-
-def remote_runner(BLOCKSIZE):
- import sys, select, os, struct
- stream = None
- while True:
- while stream is not None:
- iwtd, owtd, ewtd = select.select([0], [1], [])
- if iwtd:
- break
- pos = stream.tell()
- data = stream.read(BLOCKSIZE)
- res = ('R', path, pos, len(data))
- sys.stdout.write('%r\n%s' % (res, data))
- if len(data) < BLOCKSIZE:
- stream = None
-
- stream = None
- msg = eval(sys.stdin.readline())
- if msg[0] == 'L':
- path = msg[1]
- names = os.listdir(path)
- res = []
- for name in names:
- try:
- st = os.stat(os.path.join(path, name))
- except OSError:
- continue
- res.append((name, st.st_mode, st.st_size))
- res = msg + (res,)
- sys.stdout.write('%s\n' % (res,))
- elif msg[0] == 'R':
- path, pos = msg[1:]
- f = open(path, 'rb')
- f.seek(pos)
- data = f.read(BLOCKSIZE)
- res = msg + (len(data),)
- sys.stdout.write('%r\n%s' % (res, data))
- elif msg[0] == 'S':
- path, pos = msg[1:]
- stream = open(path, 'rb')
- stream.seek(pos)
- #elif msg[0] == 'C':
- # stream = None
-
-
-class CacheFs(ObjectFs):
- MOUNT_OPTIONS = {'max_read': BLOCKSIZE}
-
- def __init__(self, localdir, remotehost, remotedir):
- src = inspect.getsource(remote_runner)
- src += '\n\nremote_runner(%d)\n' % BLOCKSIZE
-
- remotecmd = 'python -u -c "exec input()"'
- cmdline = [remotehost, remotecmd]
- # XXX Unix style quoting
- for i in range(len(cmdline)):
- cmdline[i] = "'" + cmdline[i].replace("'", "'\\''") + "'"
- cmd = 'ssh -C'
- cmdline.insert(0, cmd)
-
- child_in, child_out = os.popen2(' '.join(cmdline), bufsize=0)
- child_in.write('%r\n' % (src,))
-
- control = Controller(child_in, child_out)
- ObjectFs.__init__(self, CacheDir(localdir, remotedir, control))
-
-
-class Controller:
- def __init__(self, child_in, child_out):
- self.child_in = child_in
- self.child_out = child_out
- self.cache = {}
- self.streaming = None
-
- def next_answer(self):
- answer = eval(self.child_out.readline())
- #print 'A', answer
- if answer[0] == 'R':
- remotefn, pos, length = answer[1:]
- data = self.child_out.read(length)
- self.cache[remotefn, pos] = data
- return answer
-
- def wait_answer(self, query):
- self.streaming = None
- #print 'Q', query
- self.child_in.write('%r\n' % (query,))
- while True:
- answer = self.next_answer()
- if answer[:len(query)] == query:
- return answer[len(query):]
-
- def listdir(self, remotedir):
- query = ('L', remotedir)
- res, = self.wait_answer(query)
- return res
-
- def wait_for_block(self, remotefn, pos):
- key = remotefn, pos
- while key not in self.cache:
- self.next_answer()
- return self.cache[key]
-
- def peek_for_block(self, remotefn, pos):
- key = remotefn, pos
- while key not in self.cache:
- iwtd, owtd, ewtd = select.select([self.child_out], [], [], 0)
- if not iwtd:
- return None
- self.next_answer()
- return self.cache[key]
-
- def cached_block(self, remotefn, pos):
- key = remotefn, pos
- return self.cache.get(key)
-
- def start_streaming(self, remotefn, pos):
- if remotefn != self.streaming:
- while (remotefn, pos) in self.cache:
- pos += BLOCKSIZE
- query = ('S', remotefn, pos)
- #print 'Q', query
- self.child_in.write('%r\n' % (query,))
- self.streaming = remotefn
-
- def read_blocks(self, remotefn, poslist):
- lst = ['%r\n' % (('R', remotefn, pos),)
- for pos in poslist if (remotefn, pos) not in self.cache]
- if lst:
- self.streaming = None
- #print 'Q', '+ '.join(lst)
- self.child_in.write(''.join(lst))
-
- def clear_cache(self, remotefn):
- for key in self.cache.keys():
- if key[0] == remotefn:
- del self.cache[key]
-
-
-class CacheDir:
- def __init__(self, localdir, remotedir, control, size=0):
- self.localdir = localdir
- self.remotedir = remotedir
- self.control = control
- self.entries = None
- def listdir(self):
- if self.entries is None:
- self.entries = []
- for name, st_mode, st_size in self.control.listdir(self.remotedir):
- if stat.S_ISDIR(st_mode):
- cls = CacheDir
- else:
- cls = CacheFile
- obj = cls(os.path.join(self.localdir, name),
- os.path.join(self.remotedir, name),
- self.control,
- st_size)
- self.entries.append((name, obj))
- return self.entries
-
-class CacheFile:
- def __init__(self, localfn, remotefn, control, size):
- self.localfn = localfn
- self.remotefn = remotefn
- self.control = control
- self.st_size = size
-
- def size(self):
- return self.st_size
-
- def read(self):
- try:
- st = os.stat(self.localfn)
- except OSError:
- pass
- else:
- if st.st_size == self.st_size: # fully cached
- return open(self.localfn, 'rb')
- os.unlink(self.localfn)
- lpath = py.path.local(self.partial())
- lpath.ensure(file=1)
- f = open(self.partial(), 'r+b')
- return DumpFile(self, f)
-
- def partial(self):
- return self.localfn + '.partial~'
-
- def complete(self):
- try:
- os.rename(self.partial(), self.localfn)
- except OSError:
- pass
-
-
-class DumpFile:
-
- def __init__(self, cf, f):
- self.cf = cf
- self.f = f
- self.pos = 0
-
- def seek(self, npos):
- self.pos = npos
-
- def read(self, count):
- control = self.cf.control
- self.f.seek(self.pos)
- buffer = self.f.read(count)
- self.pos += len(buffer)
- count -= len(buffer)
-
- self.f.seek(0, 2)
- curend = self.f.tell()
-
- if count > 0:
-
- while self.pos > curend:
- curend &= -BLOCKSIZE
- data = control.peek_for_block(self.cf.remotefn, curend)
- if data is None:
- break
- self.f.seek(curend)
- self.f.write(data)
- curend += len(data)
- if len(data) < BLOCKSIZE:
- break
-
- start = max(self.pos, curend) & (-BLOCKSIZE)
- end = (self.pos + count + BLOCKSIZE-1) & (-BLOCKSIZE)
- poslist = range(start, end, BLOCKSIZE)
-
- if self.pos <= curend:
- control.start_streaming(self.cf.remotefn, start)
- self.f.seek(start)
- for p in poslist:
- data = control.wait_for_block(self.cf.remotefn, p)
- assert self.f.tell() == p
- self.f.write(data)
- if len(data) < BLOCKSIZE:
- break
-
- curend = self.f.tell()
- while curend < self.cf.st_size:
- curend &= -BLOCKSIZE
- data = control.cached_block(self.cf.remotefn, curend)
- if data is None:
- break
- assert self.f.tell() == curend
- self.f.write(data)
- curend += len(data)
- else:
- self.cf.complete()
- control.clear_cache(self.cf.remotefn)
-
- self.f.seek(self.pos)
- buffer += self.f.read(count)
-
- else:
- control.read_blocks(self.cf.remotefn, poslist)
- result = []
- for p in poslist:
- data = control.wait_for_block(self.cf.remotefn, p)
- result.append(data)
- if len(data) < BLOCKSIZE:
- break
- data = ''.join(result)
- buffer += data[self.pos-start:self.pos-start+count]
-
- else:
- if self.pos + 60000 > curend:
- curend &= -BLOCKSIZE
- control.start_streaming(self.cf.remotefn, curend)
-
- return buffer
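The `& -BLOCKSIZE` arithmetic used throughout `read()` above is the standard power-of-two alignment trick: AND-ing with the negated block size clears the low bits. Isolated, with the same BLOCKSIZE:

```python
BLOCKSIZE = 8192  # must be a power of two for the mask trick to work

def align_down(pos):
    # Clear the low bits: round down to a block boundary.
    return pos & -BLOCKSIZE

def align_up(pos):
    # Bump just past the boundary, then round down.
    return (pos + BLOCKSIZE - 1) & -BLOCKSIZE
```

This is how the code computes `start` and `end` of the block range covering a requested byte span.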
+++ /dev/null
-import sys, os, Queue, atexit
-
-dir = os.path.dirname(os.path.dirname(os.path.abspath(__file__)))
-dir = os.path.join(dir, 'pypeers')
-if dir not in sys.path:
- sys.path.append(dir)
-del dir
-
-from greensock import *
-import threadchannel
-
-
-def _read_from_kernel(handler):
- while True:
- msg = read(handler.fd, handler.MAX_READ)
- if not msg:
- print >> sys.stderr, "out-kernel connection closed"
- break
- autogreenlet(handler.handle_message, msg)
-
-def add_handler(handler):
- autogreenlet(_read_from_kernel, handler)
- atexit.register(handler.close)
-
-# ____________________________________________________________
-
-THREAD_QUEUE = None
-
-def thread_runner(n):
- while True:
- #print 'thread runner %d waiting' % n
- operation, answer = THREAD_QUEUE.get()
- #print 'thread_runner %d: %r' % (n, operation)
- try:
- res = True, operation()
- except Exception:
- res = False, sys.exc_info()
- #print 'thread_runner %d: got %d bytes' % (n, len(res or ''))
- answer.send(res)
-
-
-def start_bkgnd_thread():
- global THREAD_QUEUE, THREAD_LOCK
- import thread
- threadchannel.startup()
- THREAD_LOCK = thread.allocate_lock()
- THREAD_QUEUE = Queue.Queue()
- for i in range(4):
- thread.start_new_thread(thread_runner, (i,))
-
-def wget(*args, **kwds):
- from wget import wget
-
- def operation():
- kwds['unlock'] = THREAD_LOCK
- THREAD_LOCK.acquire()
- try:
- return wget(*args, **kwds)
- finally:
- THREAD_LOCK.release()
-
- if THREAD_QUEUE is None:
- start_bkgnd_thread()
- answer = threadchannel.ThreadChannel()
- THREAD_QUEUE.put((operation, answer))
- ok, res = answer.receive()
- if not ok:
- typ, value, tb = res
- raise typ, value, tb
- #print 'wget returns %d bytes' % (len(res or ''),)
- return res
+++ /dev/null
-from kernel import *
-import os, errno, sys
-
-def fuse_mount(mountpoint, opts=None):
- if not isinstance(mountpoint, str):
- raise TypeError
- if opts is not None and not isinstance(opts, str):
- raise TypeError
- import dl
- try:
- fuse = dl.open('libfuse.so')
- except dl.error:
- fuse = dl.open('libfuse.so.2')
- if fuse.sym('fuse_mount_compat22'):
- fnname = 'fuse_mount_compat22'
- else:
- fnname = 'fuse_mount' # older versions of libfuse.so
- return fuse.call(fnname, mountpoint, opts)
-
-class Handler(object):
- __system = os.system
- mountpoint = fd = None
- __in_header_size = fuse_in_header.calcsize()
- __out_header_size = fuse_out_header.calcsize()
- MAX_READ = FUSE_MAX_IN
-
- def __init__(self, mountpoint, filesystem, logfile='STDERR', **opts1):
- opts = getattr(filesystem, 'MOUNT_OPTIONS', {}).copy()
- opts.update(opts1)
- if opts:
- opts = opts.items()
- opts.sort()
- opts = ' '.join(['%s=%s' % item for item in opts])
- else:
- opts = None
- fd = fuse_mount(mountpoint, opts)
- if fd < 0:
- raise IOError("mount failed")
- self.fd = fd
- if logfile == 'STDERR':
- logfile = sys.stderr
- self.logfile = logfile
- if self.logfile:
- print >> self.logfile, '* mounted at', mountpoint
- self.mountpoint = mountpoint
- self.filesystem = filesystem
- self.handles = {}
- self.nexth = 1
-
- def __del__(self):
- if self.fd is not None:
- os.close(self.fd)
- self.fd = None
- if self.mountpoint:
- cmd = "fusermount -u '%s'" % self.mountpoint.replace("'", r"'\''")
- self.mountpoint = None
- if self.logfile:
- print >> self.logfile, '*', cmd
- self.__system(cmd)
-
- close = __del__
-
- def loop_forever(self):
- while True:
- try:
- msg = os.read(self.fd, FUSE_MAX_IN)
- except OSError, ose:
- if ose.errno == errno.ENODEV:
- # on hardy, at least, this is what happens upon fusermount -u
- #raise EOFError("out-kernel connection closed")
- return
- if not msg:
- #raise EOFError("out-kernel connection closed")
- return
- self.handle_message(msg)
-
- def handle_message(self, msg):
- headersize = self.__in_header_size
- req = fuse_in_header(msg[:headersize])
- assert req.len == len(msg)
- name = req.opcode
- try:
- try:
- name = fuse_opcode2name[req.opcode]
- meth = getattr(self, name)
- except (IndexError, AttributeError):
- raise NotImplementedError
- #if self.logfile:
- # print >> self.logfile, '%s(%d)' % (name, req.nodeid)
- reply = meth(req, msg[headersize:])
- #if self.logfile:
- # print >> self.logfile, ' >>', repr(reply)
- except NotImplementedError:
- if self.logfile:
- print >> self.logfile, '%s: not implemented' % (name,)
- self.send_reply(req, err=errno.ENOSYS)
- except EnvironmentError, e:
- if self.logfile:
- print >> self.logfile, '%s: %s' % (name, e)
- self.send_reply(req, err = e.errno or errno.ESTALE)
- except NoReply:
- pass
- else:
- self.send_reply(req, reply)
-
- def send_reply(self, req, reply=None, err=0):
- assert 0 <= err < 1000
- if reply is None:
- reply = ''
- elif not isinstance(reply, str):
- reply = reply.pack()
- f = fuse_out_header(unique = req.unique,
- error = -err,
- len = self.__out_header_size + len(reply))
- data = f.pack() + reply
- while data:
- count = os.write(self.fd, data)
- if not count:
- raise EOFError("in-kernel connection closed")
- data = data[count:]
-
- def notsupp_or_ro(self):
- if hasattr(self.filesystem, "modified"):
- raise IOError(errno.ENOSYS, "not supported")
- else:
- raise IOError(errno.EROFS, "read-only file system")
-
- # ____________________________________________________________
-
- def FUSE_INIT(self, req, msg):
- msg = fuse_init_in_out(msg[:8])
- if self.logfile:
- print >> self.logfile, 'INIT: %d.%d' % (msg.major, msg.minor)
- return fuse_init_in_out(major = FUSE_KERNEL_VERSION,
- minor = FUSE_KERNEL_MINOR_VERSION)
-
- def FUSE_GETATTR(self, req, msg):
- node = self.filesystem.getnode(req.nodeid)
- attr, valid = self.filesystem.getattr(node)
- return fuse_attr_out(attr_valid = valid,
- attr = attr)
-
- def FUSE_SETATTR(self, req, msg):
- if not hasattr(self.filesystem, 'setattr'):
- self.notsupp_or_ro()
- msg = fuse_setattr_in(msg)
- if msg.valid & FATTR_MODE: mode = msg.attr.mode & 0777
- else: mode = None
- if msg.valid & FATTR_UID: uid = msg.attr.uid
- else: uid = None
- if msg.valid & FATTR_GID: gid = msg.attr.gid
- else: gid = None
- if msg.valid & FATTR_SIZE: size = msg.attr.size
- else: size = None
- if msg.valid & FATTR_ATIME: atime = msg.attr.atime
- else: atime = None
- if msg.valid & FATTR_MTIME: mtime = msg.attr.mtime
- else: mtime = None
- node = self.filesystem.getnode(req.nodeid)
- self.filesystem.setattr(node, mode, uid, gid,
- size, atime, mtime)
- attr, valid = self.filesystem.getattr(node)
- return fuse_attr_out(attr_valid = valid,
- attr = attr)
-
- def FUSE_RELEASE(self, req, msg):
- msg = fuse_release_in(msg, truncate=True)
- try:
- del self.handles[msg.fh]
- except KeyError:
- raise IOError(errno.EBADF, msg.fh)
- FUSE_RELEASEDIR = FUSE_RELEASE
-
- def FUSE_OPENDIR(self, req, msg):
- #msg = fuse_open_in(msg)
- node = self.filesystem.getnode(req.nodeid)
- attr, valid = self.filesystem.getattr(node)
- if mode2type(attr.mode) != TYPE_DIR:
- raise IOError(errno.ENOTDIR, node)
- fh = self.nexth
- self.nexth += 1
- self.handles[fh] = True, '', node
- return fuse_open_out(fh = fh)
-
- def FUSE_READDIR(self, req, msg):
- msg = fuse_read_in(msg)
- try:
- isdir, data, node = self.handles[msg.fh]
- if not isdir:
- raise KeyError # not a dir handle
- except KeyError:
- raise IOError(errno.EBADF, msg.fh)
- if msg.offset == 0:
- # start or rewind
- d_entries = []
- off = 0
- for name, type in self.filesystem.listdir(node):
- off += fuse_dirent.calcsize(len(name))
- d_entry = fuse_dirent(ino = INVALID_INO,
- off = off,
- type = type,
- name = name)
- d_entries.append(d_entry)
- data = ''.join([d.pack() for d in d_entries])
- self.handles[msg.fh] = True, data, node
- return data[msg.offset:msg.offset+msg.size]
-
- def replyentry(self, (subnodeid, valid1)):
- subnode = self.filesystem.getnode(subnodeid)
- attr, valid2 = self.filesystem.getattr(subnode)
- return fuse_entry_out(nodeid = subnodeid,
- entry_valid = valid1,
- attr_valid = valid2,
- attr = attr)
-
- def FUSE_LOOKUP(self, req, msg):
- filename = c2pystr(msg)
- dirnode = self.filesystem.getnode(req.nodeid)
- return self.replyentry(self.filesystem.lookup(dirnode, filename))
-
- def FUSE_OPEN(self, req, msg, mask=os.O_RDONLY|os.O_WRONLY|os.O_RDWR):
- msg = fuse_open_in(msg)
- node = self.filesystem.getnode(req.nodeid)
- attr, valid = self.filesystem.getattr(node)
- if mode2type(attr.mode) != TYPE_REG:
- raise IOError(errno.EPERM, node)
- f = self.filesystem.open(node, msg.flags & mask)
- if isinstance(f, tuple):
- f, open_flags = f
- else:
- open_flags = 0
- fh = self.nexth
- self.nexth += 1
- self.handles[fh] = False, f, node
- return fuse_open_out(fh = fh, open_flags = open_flags)
-
- def FUSE_READ(self, req, msg):
- msg = fuse_read_in(msg)
- try:
- isdir, f, node = self.handles[msg.fh]
- if isdir:
- raise KeyError
- except KeyError:
- raise IOError(errno.EBADF, msg.fh)
- f.seek(msg.offset)
- return f.read(msg.size)
-
- def FUSE_WRITE(self, req, msg):
- if not hasattr(self.filesystem, 'modified'):
- raise IOError(errno.EROFS, "read-only file system")
- msg, data = fuse_write_in.from_head(msg)
- try:
- isdir, f, node = self.handles[msg.fh]
- if isdir:
- raise KeyError
- except KeyError:
- raise IOError(errno.EBADF, msg.fh)
- f.seek(msg.offset)
- f.write(data)
- self.filesystem.modified(node)
- return fuse_write_out(size = len(data))
-
- def FUSE_MKNOD(self, req, msg):
- if not hasattr(self.filesystem, 'mknod'):
- self.notsupp_or_ro()
- msg, filename = fuse_mknod_in.from_param(msg)
- node = self.filesystem.getnode(req.nodeid)
- return self.replyentry(self.filesystem.mknod(node, filename, msg.mode))
-
- def FUSE_MKDIR(self, req, msg):
- if not hasattr(self.filesystem, 'mkdir'):
- self.notsupp_or_ro()
- msg, filename = fuse_mkdir_in.from_param(msg)
- node = self.filesystem.getnode(req.nodeid)
- return self.replyentry(self.filesystem.mkdir(node, filename, msg.mode))
-
- def FUSE_SYMLINK(self, req, msg):
- if not hasattr(self.filesystem, 'symlink'):
- self.notsupp_or_ro()
- linkname, target = c2pystr2(msg)
- node = self.filesystem.getnode(req.nodeid)
- return self.replyentry(self.filesystem.symlink(node, linkname, target))
-
- #def FUSE_LINK(self, req, msg):
- # ...
-
- def FUSE_UNLINK(self, req, msg):
- if not hasattr(self.filesystem, 'unlink'):
- self.notsupp_or_ro()
- filename = c2pystr(msg)
- node = self.filesystem.getnode(req.nodeid)
- self.filesystem.unlink(node, filename)
-
- def FUSE_RMDIR(self, req, msg):
- if not hasattr(self.filesystem, 'rmdir'):
- self.notsupp_or_ro()
- dirname = c2pystr(msg)
- node = self.filesystem.getnode(req.nodeid)
- self.filesystem.rmdir(node, dirname)
-
- def FUSE_FORGET(self, req, msg):
- if hasattr(self.filesystem, 'forget'):
- self.filesystem.forget(req.nodeid)
- raise NoReply
-
- def FUSE_READLINK(self, req, msg):
- if not hasattr(self.filesystem, 'readlink'):
- raise IOError(errno.ENOSYS, "readlink not supported")
- node = self.filesystem.getnode(req.nodeid)
- target = self.filesystem.readlink(node)
- return target
-
- def FUSE_RENAME(self, req, msg):
- if not hasattr(self.filesystem, 'rename'):
- self.notsupp_or_ro()
- msg, oldname, newname = fuse_rename_in.from_param2(msg)
- oldnode = self.filesystem.getnode(req.nodeid)
- newnode = self.filesystem.getnode(msg.newdir)
- self.filesystem.rename(oldnode, oldname, newnode, newname)
-
- def getxattrs(self, nodeid):
- if not hasattr(self.filesystem, 'getxattrs'):
- raise IOError(errno.ENOSYS, "xattrs not supported")
- node = self.filesystem.getnode(nodeid)
- return self.filesystem.getxattrs(node)
-
- def FUSE_LISTXATTR(self, req, msg):
- names = self.getxattrs(req.nodeid).keys()
- names = ['user.' + name for name in names]
- totalsize = 0
- for name in names:
- totalsize += len(name)+1
- msg = fuse_getxattr_in(msg)
- if msg.size > 0:
- if msg.size < totalsize:
- raise IOError(errno.ERANGE, "buffer too small")
- names.append('')
- return '\x00'.join(names)
- else:
- return fuse_getxattr_out(size=totalsize)
-
- def FUSE_GETXATTR(self, req, msg):
- xattrs = self.getxattrs(req.nodeid)
- msg, name = fuse_getxattr_in.from_param(msg)
- if not name.startswith('user.'): # ENODATA == ENOATTR
- raise IOError(errno.ENODATA, "only supports 'user.' xattrs, "
- "got %r" % (name,))
- name = name[5:]
- try:
- value = xattrs[name]
- except KeyError:
- raise IOError(errno.ENODATA, "no such xattr") # == ENOATTR
- value = str(value)
- if msg.size > 0:
- if msg.size < len(value):
- raise IOError(errno.ERANGE, "buffer too small")
- return value
- else:
- return fuse_getxattr_out(size=len(value))
-
- def FUSE_SETXATTR(self, req, msg):
- xattrs = self.getxattrs(req.nodeid)
- msg, name, value = fuse_setxattr_in.from_param_head(msg)
- assert len(value) == msg.size
- # XXX msg.flags ignored
- if not name.startswith('user.'): # ENODATA == ENOATTR
- raise IOError(errno.ENODATA, "only supports 'user.' xattrs")
- name = name[5:]
- try:
- xattrs[name] = value
- except KeyError:
- raise IOError(errno.ENODATA, "cannot set xattr") # == ENOATTR
-
- def FUSE_REMOVEXATTR(self, req, msg):
- xattrs = self.getxattrs(req.nodeid)
- name = c2pystr(msg)
- if not name.startswith('user.'): # ENODATA == ENOATTR
- raise IOError(errno.ENODATA, "only supports 'user.' xattrs")
- name = name[5:]
- try:
- del xattrs[name]
- except KeyError:
- raise IOError(errno.ENODATA, "cannot delete xattr") # == ENOATTR
-
-
-class NoReply(Exception):
- pass
+++ /dev/null
-import os, re, urlparse
-from handler import Handler
-from objectfs import ObjectFs
-
-
-class Root:
- def __init__(self):
- self.entries = {'gg': GoogleRoot()}
- def listdir(self):
- return self.entries.keys()
- def join(self, hostname):
- if hostname in self.entries:
- return self.entries[hostname]
- if '.' not in hostname:
- raise KeyError
- result = HtmlNode('http://%s/' % (hostname,))
- self.entries[hostname] = result
- return result
-
-
-class UrlNode:
- data = None
-
- def __init__(self, url):
- self.url = url
-
- def getdata(self):
- if self.data is None:
- print self.url
- g = os.popen("lynx -source %r" % (self.url,), 'r')
- self.data = g.read()
- g.close()
- return self.data
-
-
-class HtmlNode(UrlNode):
- r_links = re.compile(r'<a\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
- re.IGNORECASE | re.DOTALL)
- r_images = re.compile(r'<img\s[^>]*src="([^"]+[.]jpg)"', re.IGNORECASE)
-
- def format(self, text, index,
- TRANSTBL = ''.join([(32<=c<127 and c!=ord('/'))
- and chr(c) or '_'
- for c in range(256)])):
- return text.translate(TRANSTBL)
-
- def listdir(self):
- data = self.getdata()
-
- seen = {}
- def uniquename(name):
- name = self.format(name, len(seen))
- if name == '' or name.startswith('.'):
- name = '_' + name
- basename = name
- i = 1
- while name in seen:
- i += 1
- name = '%s_%d' % (basename, i)
- seen[name] = True
- return name
-
- for link, text in self.r_links.findall(data):
- url = urlparse.urljoin(self.url, link)
- yield uniquename(text), HtmlNode(url)
-
- for link in self.r_images.findall(data):
- text = os.path.basename(link)
- url = urlparse.urljoin(self.url, link)
- yield uniquename(text), RawNode(url)
-
- yield '.source', RawNode(self.url)
-
-
-class RawNode(UrlNode):
-
- def read(self):
- return self.getdata()
-
- def size(self):
- if self.data:
- return len(self.data)
- else:
- return None
-
-
-class GoogleRoot:
- def join(self, query):
- return GoogleSearch(query)
-
-class GoogleSearch(HtmlNode):
- r_links = re.compile(r'<a\sclass=l\s[^>]*href="([^"]+)"[^>]*>(.*?)</a>',
- re.IGNORECASE | re.DOTALL)
-
- def __init__(self, query):
- self.url = 'http://www.google.com/search?q=' + query
-
- def format(self, text, index):
- text = text.replace('<b>', '').replace('</b>', '')
- text = HtmlNode.format(self, text, index)
- return '%d. %s' % (index, text)
-
-
-if __name__ == '__main__':
- root = Root()
- handler = Handler('/home/arigo/mnt', ObjectFs(root))
- handler.loop_forever()
+++ /dev/null
-from struct import pack, unpack, calcsize
-import stat
-
-class Struct(object):
- __slots__ = []
-
- def __init__(self, data=None, truncate=False, **fields):
- if data is not None:
- if truncate:
- data = data[:self.calcsize()]
- self.unpack(data)
- for key, value in fields.items():
- setattr(self, key, value)
-
- def unpack(self, data):
- data = unpack(self.__types__, data)
- for key, value in zip(self.__slots__, data):
- setattr(self, key, value)
-
- def pack(self):
- return pack(self.__types__, *[getattr(self, k, 0)
- for k in self.__slots__])
-
- def calcsize(cls):
- return calcsize(cls.__types__)
- calcsize = classmethod(calcsize)
-
- def __repr__(self):
- result = ['%s=%r' % (name, getattr(self, name, None))
- for name in self.__slots__]
- return '<%s %s>' % (self.__class__.__name__, ', '.join(result))
-
- def from_param(cls, msg):
- limit = cls.calcsize()
- zero = msg.find('\x00', limit)
- assert zero >= 0
- return cls(msg[:limit]), msg[limit:zero]
- from_param = classmethod(from_param)
-
- def from_param2(cls, msg):
- limit = cls.calcsize()
- zero1 = msg.find('\x00', limit)
- assert zero1 >= 0
- zero2 = msg.find('\x00', zero1+1)
- assert zero2 >= 0
- return cls(msg[:limit]), msg[limit:zero1], msg[zero1+1:zero2]
- from_param2 = classmethod(from_param2)
-
- def from_head(cls, msg):
- limit = cls.calcsize()
- return cls(msg[:limit]), msg[limit:]
- from_head = classmethod(from_head)
-
- def from_param_head(cls, msg):
- limit = cls.calcsize()
- zero = msg.find('\x00', limit)
- assert zero >= 0
- return cls(msg[:limit]), msg[limit:zero], msg[zero+1:]
- from_param_head = classmethod(from_param_head)
-
-class StructWithAttr(Struct):
-
- def unpack(self, data):
- limit = -fuse_attr.calcsize()
- super(StructWithAttr, self).unpack(data[:limit])
- self.attr = fuse_attr(data[limit:])
-
- def pack(self):
- return super(StructWithAttr, self).pack() + self.attr.pack()
-
- def calcsize(cls):
- return super(StructWithAttr, cls).calcsize() + fuse_attr.calcsize()
- calcsize = classmethod(calcsize)
-
-
-def _mkstruct(name, c, base=Struct):
- typ2code = {
- '__u32': 'I',
- '__s32': 'i',
- '__u64': 'Q',
- '__s64': 'q'}
- slots = []
- types = ['=']
- for line in c.split('\n'):
- line = line.strip()
- if line:
- line, tail = line.split(';', 1)
- typ, nam = line.split()
- slots.append(nam)
- types.append(typ2code[typ])
- cls = type(name, (base,), {'__slots__': slots,
- '__types__': ''.join(types)})
- globals()[name] = cls
-
-class timeval(object):
-
- def __init__(self, attr1, attr2):
- self.sec = attr1
- self.nsec = attr2
-
- def __get__(self, obj, typ=None):
- if obj is None:
- return self
- else:
- return (getattr(obj, self.sec) +
- getattr(obj, self.nsec) * 0.000000001)
-
- def __set__(self, obj, val):
- val = int(val * 1000000000)
- sec, nsec = divmod(val, 1000000000)
- setattr(obj, self.sec, sec)
- setattr(obj, self.nsec, nsec)
-
- def __delete__(self, obj):
- delattr(obj, self.sec)
- delattr(obj, self.nsec)
-
-def _mktimeval(cls, attr1, attr2):
- assert attr1.startswith('_')
- assert attr2.startswith('_')
- tv = timeval(attr1, attr2)
- setattr(cls, attr1[1:], tv)
-
-INVALID_INO = 0xFFFFFFFFFFFFFFFF
-
-def mode2type(mode):
- return (mode & 0170000) >> 12
-
-TYPE_REG = mode2type(stat.S_IFREG)
-TYPE_DIR = mode2type(stat.S_IFDIR)
-TYPE_LNK = mode2type(stat.S_IFLNK)
-
-def c2pystr(s):
- n = s.find('\x00')
- assert n >= 0
- return s[:n]
-
-def c2pystr2(s):
- first = c2pystr(s)
- second = c2pystr(s[len(first)+1:])
- return first, second
-
-# ____________________________________________________________
-
-# Version number of this interface
-FUSE_KERNEL_VERSION = 7
-
-# Minor version number of this interface
-FUSE_KERNEL_MINOR_VERSION = 2
-
-# The node ID of the root inode
-FUSE_ROOT_ID = 1
-
-# The major number of the fuse character device
-FUSE_MAJOR = 10
-
-# The minor number of the fuse character device
-FUSE_MINOR = 229
-
-# Make sure all structures are padded to 64bit boundary, so 32bit
-# userspace works under 64bit kernels
-
-_mkstruct('fuse_attr', '''
- __u64 ino;
- __u64 size;
- __u64 blocks;
- __u64 _atime;
- __u64 _mtime;
- __u64 _ctime;
- __u32 _atimensec;
- __u32 _mtimensec;
- __u32 _ctimensec;
- __u32 mode;
- __u32 nlink;
- __u32 uid;
- __u32 gid;
- __u32 rdev;
-''')
-_mktimeval(fuse_attr, '_atime', '_atimensec')
-_mktimeval(fuse_attr, '_mtime', '_mtimensec')
-_mktimeval(fuse_attr, '_ctime', '_ctimensec')
-
-_mkstruct('fuse_kstatfs', '''
- __u64 blocks;
- __u64 bfree;
- __u64 bavail;
- __u64 files;
- __u64 ffree;
- __u32 bsize;
- __u32 namelen;
-''')
-
-FATTR_MODE = 1 << 0
-FATTR_UID = 1 << 1
-FATTR_GID = 1 << 2
-FATTR_SIZE = 1 << 3
-FATTR_ATIME = 1 << 4
-FATTR_MTIME = 1 << 5
-
-#
-# Flags returned by the OPEN request
-#
-# FOPEN_DIRECT_IO: bypass page cache for this open file
-# FOPEN_KEEP_CACHE: don't invalidate the data cache on open
-#
-FOPEN_DIRECT_IO = 1 << 0
-FOPEN_KEEP_CACHE = 1 << 1
-
-fuse_opcode = {
- 'FUSE_LOOKUP' : 1,
- 'FUSE_FORGET' : 2, # no reply
- 'FUSE_GETATTR' : 3,
- 'FUSE_SETATTR' : 4,
- 'FUSE_READLINK' : 5,
- 'FUSE_SYMLINK' : 6,
- 'FUSE_MKNOD' : 8,
- 'FUSE_MKDIR' : 9,
- 'FUSE_UNLINK' : 10,
- 'FUSE_RMDIR' : 11,
- 'FUSE_RENAME' : 12,
- 'FUSE_LINK' : 13,
- 'FUSE_OPEN' : 14,
- 'FUSE_READ' : 15,
- 'FUSE_WRITE' : 16,
- 'FUSE_STATFS' : 17,
- 'FUSE_RELEASE' : 18,
- 'FUSE_FSYNC' : 20,
- 'FUSE_SETXATTR' : 21,
- 'FUSE_GETXATTR' : 22,
- 'FUSE_LISTXATTR' : 23,
- 'FUSE_REMOVEXATTR' : 24,
- 'FUSE_FLUSH' : 25,
- 'FUSE_INIT' : 26,
- 'FUSE_OPENDIR' : 27,
- 'FUSE_READDIR' : 28,
- 'FUSE_RELEASEDIR' : 29,
- 'FUSE_FSYNCDIR' : 30,
-}
-
-fuse_opcode2name = []
-def setup():
- for key, value in fuse_opcode.items():
- fuse_opcode2name.extend([None] * (value+1 - len(fuse_opcode2name)))
- fuse_opcode2name[value] = key
-setup()
-del setup
-
-# Conservative buffer size for the client
-FUSE_MAX_IN = 8192
-
-FUSE_NAME_MAX = 1024
-FUSE_SYMLINK_MAX = 4096
-FUSE_XATTR_SIZE_MAX = 4096
-
-_mkstruct('fuse_entry_out', """
- __u64 nodeid; /* Inode ID */
- __u64 generation; /* Inode generation: nodeid:gen must \
- be unique for the fs's lifetime */
- __u64 _entry_valid; /* Cache timeout for the name */
- __u64 _attr_valid; /* Cache timeout for the attributes */
- __u32 _entry_valid_nsec;
- __u32 _attr_valid_nsec;
-""", base=StructWithAttr)
-_mktimeval(fuse_entry_out, '_entry_valid', '_entry_valid_nsec')
-_mktimeval(fuse_entry_out, '_attr_valid', '_attr_valid_nsec')
-
-_mkstruct('fuse_forget_in', '''
- __u64 nlookup;
-''')
-
-_mkstruct('fuse_attr_out', '''
- __u64 _attr_valid; /* Cache timeout for the attributes */
- __u32 _attr_valid_nsec;
- __u32 dummy;
-''', base=StructWithAttr)
-_mktimeval(fuse_attr_out, '_attr_valid', '_attr_valid_nsec')
-
-_mkstruct('fuse_mknod_in', '''
- __u32 mode;
- __u32 rdev;
-''')
-
-_mkstruct('fuse_mkdir_in', '''
- __u32 mode;
- __u32 padding;
-''')
-
-_mkstruct('fuse_rename_in', '''
- __u64 newdir;
-''')
-
-_mkstruct('fuse_link_in', '''
- __u64 oldnodeid;
-''')
-
-_mkstruct('fuse_setattr_in', '''
- __u32 valid;
- __u32 padding;
-''', base=StructWithAttr)
-
-_mkstruct('fuse_open_in', '''
- __u32 flags;
- __u32 padding;
-''')
-
-_mkstruct('fuse_open_out', '''
- __u64 fh;
- __u32 open_flags;
- __u32 padding;
-''')
-
-_mkstruct('fuse_release_in', '''
- __u64 fh;
- __u32 flags;
- __u32 padding;
-''')
-
-_mkstruct('fuse_flush_in', '''
- __u64 fh;
- __u32 flush_flags;
- __u32 padding;
-''')
-
-_mkstruct('fuse_read_in', '''
- __u64 fh;
- __u64 offset;
- __u32 size;
- __u32 padding;
-''')
-
-_mkstruct('fuse_write_in', '''
- __u64 fh;
- __u64 offset;
- __u32 size;
- __u32 write_flags;
-''')
-
-_mkstruct('fuse_write_out', '''
- __u32 size;
- __u32 padding;
-''')
-
-fuse_statfs_out = fuse_kstatfs
-
-_mkstruct('fuse_fsync_in', '''
- __u64 fh;
- __u32 fsync_flags;
- __u32 padding;
-''')
-
-_mkstruct('fuse_setxattr_in', '''
- __u32 size;
- __u32 flags;
-''')
-
-_mkstruct('fuse_getxattr_in', '''
- __u32 size;
- __u32 padding;
-''')
-
-_mkstruct('fuse_getxattr_out', '''
- __u32 size;
- __u32 padding;
-''')
-
-_mkstruct('fuse_init_in_out', '''
- __u32 major;
- __u32 minor;
-''')
-
-_mkstruct('fuse_in_header', '''
- __u32 len;
- __u32 opcode;
- __u64 unique;
- __u64 nodeid;
- __u32 uid;
- __u32 gid;
- __u32 pid;
- __u32 padding;
-''')
-
-_mkstruct('fuse_out_header', '''
- __u32 len;
- __s32 error;
- __u64 unique;
-''')
-
-class fuse_dirent(Struct):
- __slots__ = ['ino', 'off', 'type', 'name']
-
- def unpack(self, data):
- self.ino, self.off, namelen, self.type = struct.unpack('QQII',
- data[:24])
- self.name = data[24:24+namelen]
- assert len(self.name) == namelen
-
- def pack(self):
- namelen = len(self.name)
- return pack('QQII%ds' % ((namelen+7)&~7,),
- self.ino, getattr(self, 'off', 0), namelen,
- self.type, self.name)
-
- def calcsize(cls, namelen):
- return 24 + ((namelen+7)&~7)
- calcsize = classmethod(calcsize)
+++ /dev/null
-from kernel import *
-from handler import Handler
-import stat, time, os, weakref, errno
-from cStringIO import StringIO
-
-
-class MemoryFS(object):
- INFINITE = 86400.0
-
-
- class Dir(object):
- type = TYPE_DIR
- def __init__(self, attr):
- self.attr = attr
- self.contents = {} # { 'filename': Dir()/File()/SymLink() }
-
- class File(object):
- type = TYPE_REG
- def __init__(self, attr):
- self.attr = attr
- self.data = StringIO()
-
- class SymLink(object):
- type = TYPE_LNK
- def __init__(self, attr, target):
- self.attr = attr
- self.target = target
-
-
- def __init__(self, root=None):
- self.uid = os.getuid()
- self.gid = os.getgid()
- self.umask = os.umask(0); os.umask(self.umask)
- self.root = root or self.Dir(self.newattr(stat.S_IFDIR))
- self.root.id = FUSE_ROOT_ID
- self.nodes = weakref.WeakValueDictionary()
- self.nodes[FUSE_ROOT_ID] = self.root
- self.nextid = FUSE_ROOT_ID + 1
-
- def newattr(self, s, ino=None, mode=0666):
- now = time.time()
- attr = fuse_attr(size = 0,
- mode = s | (mode & ~self.umask),
- nlink = 1, # even on dirs! this confuses 'find' in
- # a good way :-)
- atime = now,
- mtime = now,
- ctime = now,
- uid = self.uid,
- gid = self.gid)
- if ino is None:
- ino = id(attr)
- if ino < 0:
- ino = ~ino
- attr.ino = ino
- return attr
-
- def getnode(self, id):
- return self.nodes[id]
-
- def modified(self, node):
- node.attr.mtime = node.attr.atime = time.time()
- if isinstance(node, self.File):
- node.data.seek(0, 2)
- node.attr.size = node.data.tell()
-
- def getattr(self, node):
- return node.attr, self.INFINITE
-
- def setattr(self, node, mode, uid, gid, size, atime, mtime):
- if mode is not None:
- node.attr.mode = (node.attr.mode & ~0777) | (mode & 0777)
- if uid is not None:
- node.attr.uid = uid
- if gid is not None:
- node.attr.gid = gid
- if size is not None:
- assert isinstance(node, self.File)
- node.data.seek(0, 2)
- oldsize = node.data.tell()
- if size < oldsize:
- node.data.seek(size)
- node.data.truncate()
- self.modified(node)
- elif size > oldsize:
- node.data.write('\x00' * (size - oldsize))
- self.modified(node)
- if atime is not None:
- node.attr.atime = atime
- if mtime is not None:
- node.attr.mtime = mtime
-
- def listdir(self, node):
- assert isinstance(node, self.Dir)
- for name, subobj in node.contents.items():
- yield name, subobj.type
-
- def lookup(self, dirnode, filename):
- try:
- return dirnode.contents[filename].id, self.INFINITE
- except KeyError:
- raise IOError(errno.ENOENT, filename)
-
- def open(self, filenode, flags):
- return filenode.data
-
- def newnodeid(self, newnode):
- id = self.nextid
- self.nextid += 1
- newnode.id = id
- self.nodes[id] = newnode
- return id
-
- def mknod(self, dirnode, filename, mode):
- node = self.File(self.newattr(stat.S_IFREG, mode=mode))
- dirnode.contents[filename] = node
- return self.newnodeid(node), self.INFINITE
-
- def mkdir(self, dirnode, subdirname, mode):
- node = self.Dir(self.newattr(stat.S_IFDIR, mode=mode))
- dirnode.contents[subdirname] = node
- return self.newnodeid(node), self.INFINITE
-
- def symlink(self, dirnode, linkname, target):
- node = self.SymLink(self.newattr(stat.S_IFLNK), target)
- dirnode.contents[linkname] = node
- return self.newnodeid(node), self.INFINITE
-
- def unlink(self, dirnode, filename):
- del dirnode.contents[filename]
-
- rmdir = unlink
-
- def readlink(self, symlinknode):
- return symlinknode.target
-
- def rename(self, olddirnode, oldname, newdirnode, newname):
- node = olddirnode.contents[oldname]
- newdirnode.contents[newname] = node
- del olddirnode.contents[oldname]
-
- def getxattrs(self, node):
- try:
- return node.xattrs
- except AttributeError:
- node.xattrs = {}
- return node.xattrs
-
-
-if __name__ == '__main__':
- import sys
- mountpoint = sys.argv[1]
- memoryfs = MemoryFS()
- handler = Handler(mountpoint, memoryfs)
- handler.loop_forever()
+++ /dev/null
-"""
-For reading and caching from slow file system (e.g. DVDs or network).
-
- python mirrorfs.py <sourcedir> <cachedir> <mountpoint>
-
-Makes <mountpoint> show a read-only copy of the files in <sourcedir>,
-caching all data ever read in the <cachedir> to avoid reading it
-twice. This script also features optimistic read-ahead: once a
-file is accessed, and as long as no other file is accessed, the
-whole file is read and cached as fast as the <sourcedir> will
-provide it.
-
-You have to clean up <cachedir> manually before mounting a modified
-or different <sourcedir>.
-"""
-import sys, os, posixpath, stat
-
-try:
- __file__
-except NameError:
- __file__ = sys.argv[0]
-this_dir = os.path.dirname(os.path.abspath(__file__))
-
-# ____________________________________________________________
-
-sys.path.append(os.path.dirname(this_dir))
-from blockfs import valuetree
-from handler import Handler
-import greenhandler, greensock
-from objectfs import ObjectFs
-
-BLOCKSIZE = 65536
-
-class MirrorFS(ObjectFs):
- rawfd = None
-
- def __init__(self, srcdir, cachedir):
- self.srcdir = srcdir
- self.cachedir = cachedir
- self.table = valuetree.ValueTree(os.path.join(cachedir, 'table'), 'q')
- if '' not in self.table:
- self.initial_read_dir('')
- self.table[''] = -1,
- try:
- self.rawfile = open(os.path.join(cachedir, 'raw'), 'r+b')
- except IOError:
- self.rawfile = open(os.path.join(cachedir, 'raw'), 'w+b')
- ObjectFs.__init__(self, DirNode(self, ''))
- self.readahead_at = None
- greenhandler.autogreenlet(self.readahead)
-
- def close(self):
- self.table.close()
-
- def readahead(self):
- while True:
- greensock.sleep(0.001)
- while not self.readahead_at:
- greensock.sleep(1)
- path, blocknum = self.readahead_at
- self.readahead_at = None
- try:
- self.readblock(path, blocknum, really=False)
- except EOFError:
- pass
-
- def initial_read_dir(self, path):
- print 'Reading initial directory structure...', path
- dirname = os.path.join(self.srcdir, path)
- for name in os.listdir(dirname):
- filename = os.path.join(dirname, name)
- st = os.stat(filename)
- if stat.S_ISDIR(st.st_mode):
- self.initial_read_dir(posixpath.join(path, name))
- q = -1
- else:
- q = st.st_size
- self.table[posixpath.join(path, name)] = q,
-
- def __getitem__(self, key):
- self.tablelock.acquire()
- try:
- return self.table[key]
- finally:
- self.tablelock.release()
-
- def readblock(self, path, blocknum, really=True):
- s = '%s/%d' % (path, blocknum)
- try:
- q, = self.table[s]
- except KeyError:
- print s
- self.readahead_at = None
- f = open(os.path.join(self.srcdir, path), 'rb')
- f.seek(blocknum * BLOCKSIZE)
- data = f.read(BLOCKSIZE)
- f.close()
- if not data:
- q = -2
- else:
- data += '\x00' * (BLOCKSIZE - len(data))
- self.rawfile.seek(0, 2)
- q = self.rawfile.tell()
- self.rawfile.write(data)
- self.table[s] = q,
- if q == -2:
- raise EOFError
- else:
- if q == -2:
- raise EOFError
- if really:
- self.rawfile.seek(q, 0)
- data = self.rawfile.read(BLOCKSIZE)
- else:
- data = None
- if self.readahead_at is None:
- self.readahead_at = path, blocknum + 1
- return data
-
-
-class Node(object):
-
- def __init__(self, mfs, path):
- self.mfs = mfs
- self.path = path
-
-class DirNode(Node):
-
- def join(self, name):
- path = posixpath.join(self.path, name)
- q, = self.mfs.table[path]
- if q == -1:
- return DirNode(self.mfs, path)
- else:
- return FileNode(self.mfs, path)
-
- def listdir(self):
- result = []
- for key, value in self.mfs.table.iteritemsfrom(self.path):
- if not key.startswith(self.path):
- break
- tail = key[len(self.path):].lstrip('/')
- if tail and '/' not in tail:
- result.append(tail)
- return result
-
-class FileNode(Node):
-
- def size(self):
- q, = self.mfs.table[self.path]
- return q
-
- def read(self):
- return FileStream(self.mfs, self.path)
-
-class FileStream(object):
-
- def __init__(self, mfs, path):
- self.mfs = mfs
- self.path = path
- self.pos = 0
- self.size, = self.mfs.table[path]
-
- def seek(self, p):
- self.pos = p
-
- def read(self, count):
- result = []
- end = min(self.pos + count, self.size)
- while self.pos < end:
- blocknum, offset = divmod(self.pos, BLOCKSIZE)
- data = self.mfs.readblock(self.path, blocknum)
- data = data[offset:]
- data = data[:end - self.pos]
- assert len(data) > 0
- result.append(data)
- self.pos += len(data)
- return ''.join(result)
-
-# ____________________________________________________________
-
-if __name__ == '__main__':
- import sys
- srcdir, cachedir, mountpoint = sys.argv[1:]
- mirrorfs = MirrorFS(srcdir, cachedir)
- try:
- handler = Handler(mountpoint, mirrorfs)
- greenhandler.add_handler(handler)
- greenhandler.mainloop()
- finally:
- mirrorfs.close()
+++ /dev/null
-from kernel import *
-import stat, errno, os, time
-from cStringIO import StringIO
-from OrderedDict import OrderedDict
-
-
-class ObjectFs:
- """A simple read-only file system based on Python objects.
-
- Interface of Directory objects:
- * join(name) returns a file or subdirectory object
- * listdir() returns a list of names, or a list of (name, object)
-
- join() is optional if listdir() returns a list of (name, object).
- Alternatively, Directory objects can be plain dictionaries {name: object}.
-
- Interface of File objects:
- * size() returns the size
- * read() returns the data
-
- Alternatively, File objects can be plain strings.
-
- Interface of SymLink objects:
- * readlink() returns the symlink's target, as a string
- """
-
- INFINITE = 86400.0
- USE_DIR_CACHE = True
-
- def __init__(self, rootnode):
- self.nodes = {FUSE_ROOT_ID: rootnode}
- if self.USE_DIR_CACHE:
- self.dircache = {}
- self.starttime = time.time()
- self.uid = os.getuid()
- self.gid = os.getgid()
- self.umask = os.umask(0); os.umask(self.umask)
-
- def newattr(self, s, ino, mode=0666):
- if ino < 0:
- ino = ~ino
- return fuse_attr(ino = ino,
- size = 0,
- mode = s | (mode & ~self.umask),
- nlink = 1, # even on dirs! this confuses 'find' in
- # a good way :-)
- atime = self.starttime,
- mtime = self.starttime,
- ctime = self.starttime,
- uid = self.uid,
- gid = self.gid)
-
- def getnode(self, nodeid):
- try:
- return self.nodes[nodeid]
- except KeyError:
- raise IOError(errno.ESTALE, nodeid)
-
- def getattr(self, node):
- timeout = self.INFINITE
- if isinstance(node, str):
- attr = self.newattr(stat.S_IFREG, id(node))
- attr.size = len(node)
- elif hasattr(node, 'readlink'):
- target = node.readlink()
- attr = self.newattr(stat.S_IFLNK, id(node))
- attr.size = len(target)
- attr.mode |= 0777
- elif hasattr(node, 'size'):
- sz = node.size()
- attr = self.newattr(stat.S_IFREG, id(node))
- if sz is None:
- timeout = 0
- else:
- attr.size = sz
- else:
- attr = self.newattr(stat.S_IFDIR, id(node), mode=0777)
- #print 'getattr(%s) -> %s, %s' % (node, attr, timeout)
- return attr, timeout
-
- def getentries(self, node):
- if isinstance(node, dict):
- return node
- try:
- if not self.USE_DIR_CACHE:
- raise KeyError
- return self.dircache[node]
- except KeyError:
- entries = OrderedDict()
- if hasattr(node, 'listdir'):
- for name in node.listdir():
- if isinstance(name, tuple):
- name, subnode = name
- else:
- subnode = None
- entries[name] = subnode
- if self.USE_DIR_CACHE:
- self.dircache[node] = entries
- return entries
-
- def listdir(self, node):
- entries = self.getentries(node)
- for name, subnode in entries.items():
- if subnode is None:
- subnode = node.join(name)
- self.nodes[uid(subnode)] = subnode
- entries[name] = subnode
- if isinstance(subnode, str):
- yield name, TYPE_REG
- elif hasattr(subnode, 'readlink'):
- yield name, TYPE_LNK
- elif hasattr(subnode, 'size'):
- yield name, TYPE_REG
- else:
- yield name, TYPE_DIR
-
- def lookup(self, node, name):
- entries = self.getentries(node)
- try:
- subnode = entries.get(name)
- if subnode is None:
- if hasattr(node, 'join'):
- subnode = node.join(name)
- entries[name] = subnode
- else:
- raise KeyError
- except KeyError:
- raise IOError(errno.ENOENT, name)
- else:
- return self.reply(subnode)
-
- def reply(self, node):
- res = uid(node)
- self.nodes[res] = node
- return res, self.INFINITE
-
- def open(self, node, mode):
- if not isinstance(node, str):
- node = node.read()
- if not hasattr(node, 'read'):
- node = StringIO(node)
- return node
-
- def readlink(self, node):
- return node.readlink()
-
- def getxattrs(self, node):
- return getattr(node, '__dict__', {})
-
-# ____________________________________________________________
-
-import struct
-try:
- HUGEVAL = 256 ** struct.calcsize('P')
-except struct.error:
- HUGEVAL = 0
-
-def fixid(result):
- if result < 0:
- result += HUGEVAL
- return result
-
-def uid(obj):
- """
- Return the id of an object as an unsigned number so that its hex
- representation makes sense
- """
- return fixid(id(obj))
-
-class SymLink(object):
- def __init__(self, target):
- self.target = target
- def readlink(self):
- return self.target
+++ /dev/null
-"""
-Two magic tricks for classes:
-
- class X:
- __metaclass__ = extendabletype
- ...
-
- # in some other file...
- class __extend__(X):
- ... # and here you can add new methods and class attributes to X
-
-Mostly useful together with the second trick, which lets you build
-methods whose 'self' is a pair of objects instead of just one:
-
- class __extend__(pairtype(X, Y)):
- attribute = 42
- def method((x, y), other, arguments):
- ...
-
- pair(x, y).attribute
- pair(x, y).method(other, arguments)
-
-This finds methods and class attributes based on the actual
-class of both objects that go into the pair(), with the usual
-rules of method/attribute overriding in (pairs of) subclasses.
-
-For more information, see test_pairtype.
-"""
-
-class extendabletype(type):
- """A type with a syntax trick: 'class __extend__(t)' actually extends
- the definition of 't' instead of creating a new subclass."""
- def __new__(cls, name, bases, dict):
- if name == '__extend__':
- for cls in bases:
- for key, value in dict.items():
- if key == '__module__':
- continue
- # XXX do we need to provide something more for pickling?
- setattr(cls, key, value)
- return None
- else:
- return super(extendabletype, cls).__new__(cls, name, bases, dict)
-
-
-def pair(a, b):
- """Return a pair object."""
- tp = pairtype(a.__class__, b.__class__)
- return tp((a, b)) # tp is a subclass of tuple
-
-pairtypecache = {}
-
-def pairtype(cls1, cls2):
- """type(pair(a,b)) is pairtype(a.__class__, b.__class__)."""
- try:
- pair = pairtypecache[cls1, cls2]
- except KeyError:
- name = 'pairtype(%s, %s)' % (cls1.__name__, cls2.__name__)
- bases1 = [pairtype(base1, cls2) for base1 in cls1.__bases__]
- bases2 = [pairtype(cls1, base2) for base2 in cls2.__bases__]
- bases = tuple(bases1 + bases2) or (tuple,) # 'tuple': ultimate base
- pair = pairtypecache[cls1, cls2] = extendabletype(name, bases, {})
- return pair
+++ /dev/null
-from kernel import *
-import errno, posixpath, os
-
-
-class PathFs(object):
- """Base class for a read-write FUSE file system interface
- whose underlying content is best accessed with '/'-separated
- string paths.
- """
- uid = os.getuid()
- gid = os.getgid()
- umask = os.umask(0); os.umask(umask)
- timeout = 86400.0
-
- def __init__(self, root=''):
- self._paths = {FUSE_ROOT_ID: root}
- self._path2id = {root: FUSE_ROOT_ID}
- self._nextid = FUSE_ROOT_ID + 1
-
- def getnode(self, nodeid):
- try:
- return self._paths[nodeid]
- except KeyError:
- raise IOError(errno.ESTALE, nodeid)
-
- def forget(self, nodeid):
- try:
- p = self._paths.pop(nodeid)
- del self._path2id[p]
- except KeyError:
- pass
-
- def cachepath(self, path):
- if path in self._path2id:
- return self._path2id[path]
- id = self._nextid
- self._nextid += 1
- self._paths[id] = path
- self._path2id[path] = id
- return id
-
- def mkattr(self, path, size, st_kind, mode, time):
- attr = fuse_attr(ino = self._path2id[path],
- size = size,
- mode = st_kind | (mode & ~self.umask),
- nlink = 1, # even on dirs! this confuses 'find' in
- # a good way :-)
- atime = time,
- mtime = time,
- ctime = time,
- uid = self.uid,
- gid = self.gid)
- return attr, self.timeout
-
- def lookup(self, path, name):
- npath = posixpath.join(path, name)
- if not self.check_path(npath):
- raise IOError(errno.ENOENT, name)
- return self.cachepath(npath), self.timeout
-
- def mknod(self, path, name, mode):
- npath = posixpath.join(path, name)
- self.mknod_path(npath, mode)
- return self.cachepath(npath), self.timeout
-
- def mkdir(self, path, name, mode):
- npath = posixpath.join(path, name)
- self.mkdir_path(npath, mode)
- return self.cachepath(npath), self.timeout
-
- def unlink(self, path, name):
- npath = posixpath.join(path, name)
- self.unlink_path(npath)
-
- def rmdir(self, path, name):
- npath = posixpath.join(path, name)
- self.rmdir_path(npath)
-
- def rename(self, oldpath, oldname, newpath, newname):
- noldpath = posixpath.join(oldpath, oldname)
- nnewpath = posixpath.join(newpath, newname)
- if not self.rename_path(noldpath, nnewpath):
- raise IOError(errno.ENOENT, oldname)
- # fix all paths in the cache
- N = len(noldpath)
- for id, path in self._paths.items():
- if path.startswith(noldpath):
- if len(path) == N or path[N] == '/':
- del self._path2id[path]
- path = nnewpath + path[N:]
- self._paths[id] = path
- self._path2id[path] = id
+++ /dev/null
-from kernel import *
-import errno, posixpath, weakref
-from time import time as now
-from stat import S_IFDIR, S_IFREG, S_IFMT
-from cStringIO import StringIO
-from handler import Handler
-from pathfs import PathFs
-from pysvn.ra_filesystem import SvnRepositoryFilesystem
-import pysvn.date
-
-
-class SvnFS(PathFs):
-
- def __init__(self, svnurl, root=''):
- super(SvnFS, self).__init__(root)
- self.svnurl = svnurl
- self.openfiles = weakref.WeakValueDictionary()
- self.creationtimes = {}
- self.do_open()
-
- def do_open(self, rev='HEAD'):
- self.fs = SvnRepositoryFilesystem(svnurl, rev)
-
- def do_commit(self, msg):
- rev = self.fs.commit(msg)
- if rev is None:
- print '* no changes.'
- else:
- print '* checked in revision %d.' % (rev,)
- self.do_open()
-
- def do_status(self, path=''):
- print '* status'
- result = []
- if path and not path.endswith('/'):
- path += '/'
- for delta in self.fs._compute_deltas():
- if delta.path.startswith(path):
- if delta.oldrev is None:
- c = 'A'
- elif delta.newrev is None:
- c = 'D'
- else:
- c = 'M'
- result.append(' %s %s\n' % (c, delta.path[len(path):]))
- return ''.join(result)
-
- def getattr(self, path):
- stat = self.fs.stat(path)
- if stat['svn:entry:kind'] == 'dir':
- s = S_IFDIR
- mode = 0777
- else:
- s = S_IFREG
- mode = 0666
- try:
- time = pysvn.date.decode(stat['svn:entry:committed-date'])
- except KeyError:
- try:
- time = self.creationtimes[path]
- except KeyError:
- time = self.creationtimes[path] = now()
- return self.mkattr(path,
- size = stat.get('svn:entry:size', 0),
- st_kind = s,
- mode = mode,
- time = time)
-
- def setattr(self, path, mode, uid, gid, size, atime, mtime):
- if size is not None:
- data = self.fs.read(path)
- if size < len(data):
- self.fs.write(path, data[:size])
- elif size > len(data):
- self.fs.write(path, data + '\x00' * (size - len(data)))
-
- def listdir(self, path):
- for name in self.fs.listdir(path):
- kind = self.fs.check_path(posixpath.join(path, name))
- if kind == 'dir':
- yield name, TYPE_DIR
- else:
- yield name, TYPE_REG
-
- def check_path(self, path):
- kind = self.fs.check_path(path)
- return kind is not None
-
- def open(self, path, mode):
- try:
- of = self.openfiles[path]
- except KeyError:
- of = self.openfiles[path] = OpenFile(self.fs.read(path))
- return of, FOPEN_KEEP_CACHE
-
- def modified(self, path):
- try:
- of = self.openfiles[path]
- except KeyError:
- pass
- else:
- self.fs.write(path, of.f.getvalue())
-
- def mknod_path(self, path, mode):
- self.fs.add(path)
-
- def mkdir_path(self, path, mode):
- self.fs.mkdir(path)
-
- def unlink_path(self, path):
- self.fs.unlink(path)
-
- def rmdir_path(self, path):
- self.fs.rmdir(path)
-
- def rename_path(self, oldpath, newpath):
- kind = self.fs.check_path(oldpath)
- if kind is None:
- return False
- self.fs.move(oldpath, newpath, kind)
- return True
-
- def getxattrs(self, path):
- return XAttrs(self, path)
-
-
-class OpenFile:
- def __init__(self, data=''):
- self.f = StringIO()
- self.f.write(data)
- self.f.seek(0)
-
- def seek(self, pos):
- self.f.seek(pos)
-
- def read(self, sz):
- return self.f.read(sz)
-
- def write(self, buf):
- self.f.write(buf)
-
-
-class XAttrs:
- def __init__(self, svnfs, path):
- self.svnfs = svnfs
- self.path = path
-
- def keys(self):
- return []
-
- def __getitem__(self, key):
- if key == 'status':
- return self.svnfs.do_status(self.path)
- raise KeyError(key)
-
- def __setitem__(self, key, value):
- if key == 'commit' and self.path == '':
- self.svnfs.do_commit(value)
- elif key == 'update' and self.path == '':
- if self.svnfs.fs.modified():
- raise IOError(errno.EPERM, "there are local changes")
- if value == '':
- rev = 'HEAD'
- else:
- try:
- rev = int(value)
- except ValueError:
- raise IOError(errno.EPERM, "invalid revision number")
- self.svnfs.do_open(rev)
- else:
- raise KeyError(key)
-
- def __delitem__(self, key):
- raise KeyError(key)
-
-
-if __name__ == '__main__':
- import sys
- svnurl, mountpoint = sys.argv[1:]
- handler = Handler(mountpoint, SvnFS(svnurl))
- handler.loop_forever()
+++ /dev/null
-"""
-A read-only svn fs showing all the revisions in subdirectories.
-"""
-from objectfs import ObjectFs, SymLink
-from handler import Handler
-from pysvn.ra import connect
-from pysvn.date import decode
-import errno, posixpath, time
-
-
-#USE_SYMLINKS = 0 # they are wrong if the original file had another path
-
-# use getfattr -d filename to see the node's attributes, which include
-# information like the revision at which the file was last modified
-
-
-class Root:
- def __init__(self, svnurl):
- self.svnurl = svnurl
- self.ra = connect(svnurl)
- self.head = self.ra.get_latest_rev()
-
- def listdir(self):
- for rev in range(1, self.head+1):
- yield str(rev)
- yield 'HEAD'
-
- def join(self, name):
- try:
- rev = int(name)
- except ValueError:
- if name == 'HEAD':
- return SymLink(str(self.head))
- else:
- raise KeyError(name)
- return TopLevelDir(self.ra, rev, rev, '')
-
-
-class Node:
- def __init__(self, ra, rev, last_changed_rev, path):
- self.ra = ra
- self.rev = rev
- self.last_changed_rev = last_changed_rev
- self.path = path
-
- def __repr__(self):
- return '<%s %d/%s>' % (self.__class__.__name__, self.rev, self.path)
-
-class Dir(Node):
- def listdir(self):
- rev, props, entries = self.ra.get_dir(self.path, self.rev,
- want_props = False)
- for key, stats in entries.items():
- yield key, getnode(self.ra, self.rev,
- posixpath.join(self.path, key), stats)
-
-class File(Node):
- def __init__(self, ra, rev, last_changed_rev, path, size):
- Node.__init__(self, ra, rev, last_changed_rev, path)
- self.filesize = size
-
- def size(self):
- return self.filesize
-
- def read(self):
- checksum, rev, props, data = self.ra.get_file(self.path, self.rev,
- want_props = False)
- return data
-
-
-class TopLevelDir(Dir):
- def listdir(self):
- for item in Dir.listdir(self):
- yield item
- yield 'svn:log', Log(self.ra, self.rev)
-
-class Log:
-
- def __init__(self, ra, rev):
- self.ra = ra
- self.rev = rev
-
- def getlogentry(self):
- try:
- return self.logentry
- except AttributeError:
- logentries = self.ra.log('', startrev=self.rev, endrev=self.rev)
- try:
- [self.logentry] = logentries
- except ValueError:
- self.logentry = None
- return self.logentry
-
- def size(self):
- return len(self.read())
-
- def read(self):
- logentry = self.getlogentry()
- if logentry is None:
- return 'r%d | (no change here)\n' % (self.rev,)
- datetuple = time.gmtime(decode(logentry.date))
- date = time.strftime("%c", datetuple)
- return 'r%d | %s | %s\n\n%s' % (self.rev,
- logentry.author,
- date,
- logentry.message)
-
-
-if 0:
- pass
-##if USE_SYMLINKS:
-## def getnode(ra, rev, path, stats):
-## committed_rev = stats['svn:entry:committed-rev']
-## if committed_rev == rev:
-## kind = stats['svn:entry:kind']
-## if kind == 'file':
-## return File(ra, rev, path, stats['svn:entry:size'])
-## elif kind == 'dir':
-## return Dir(ra, rev, path)
-## else:
-## raise IOError(errno.EINVAL, "kind %r" % (kind,))
-## else:
-## depth = path.count('/')
-## return SymLink('../' * depth + '../%d/%s' % (committed_rev, path))
-else:
- def getnode(ra, rev, path, stats):
- last_changed_rev = stats['svn:entry:committed-rev']
- kind = stats['svn:entry:kind']
- if kind == 'file':
- return File(ra, rev, last_changed_rev, path,
- stats['svn:entry:size'])
- elif kind == 'dir':
- return Dir(ra, rev, last_changed_rev, path)
- else:
- raise IOError(errno.EINVAL, "kind %r" % (kind,))
-
-
-if __name__ == '__main__':
- import sys
- svnurl, mountpoint = sys.argv[1:]
- handler = Handler(mountpoint, ObjectFs(Root(svnurl)))
- handler.loop_forever()
+++ /dev/null
-from kernel import *
-import stat, errno, os, time
-from cStringIO import StringIO
-from OrderedDict import OrderedDict
-
-INFINITE = 86400.0
-
-
-class Wrapper(object):
- def __init__(self, obj):
- self.obj = obj
-
- def getuid(self):
- return uid(self.obj)
-
- def __hash__(self):
- return hash(self.obj)
-
- def __eq__(self, other):
- return self.obj == other
-
- def __ne__(self, other):
- return self.obj != other
-
-
-class BaseDir(object):
-
- def join(self, name):
- "Return a file or subdirectory object"
- for item in self.listdir():
- if isinstance(item, tuple):
- subname, subnode = item
- if subname == name:
- return subnode
- raise KeyError(name)
-
- def listdir(self):
- "Return a list of names, or a list of (name, object)"
- raise NotImplementedError
-
- def create(self, name):
- "Create a file"
- raise NotImplementedError
-
- def mkdir(self, name):
- "Create a subdirectory"
- raise NotImplementedError
-
- def symlink(self, name, target):
- "Create a symbolic link"
- raise NotImplementedError
-
- def unlink(self, name):
- "Remove a file or subdirectory."
- raise NotImplementedError
-
- def rename(self, newname, olddirnode, oldname):
- "Move another node into this directory."
- raise NotImplementedError
-
- def getuid(self):
- return uid(self)
-
- def getattr(self, fs):
- return fs.newattr(stat.S_IFDIR, self.getuid(), mode=0777), INFINITE
-
- def setattr(self, **kwds):
- pass
-
- def getentries(self):
- entries = OrderedDict()
- for name in self.listdir():
- if isinstance(name, tuple):
- name, subnode = name
- else:
- subnode = None
- entries[name] = subnode
- return entries
-
-
-class BaseFile(object):
-
- def size(self):
- "Return the size of the file, or None if not known yet"
- f = self.open()
- if isinstance(f, str):
- return len(f)
- f.seek(0, 2)
- return f.tell()
-
- def open(self):
- "Return the content as a string or a file-like object"
- raise NotImplementedError
-
- def getuid(self):
- return uid(self)
-
- def getattr(self, fs):
- sz = self.size()
- attr = fs.newattr(stat.S_IFREG, self.getuid())
- if sz is None:
- timeout = 0
- else:
- attr.size = sz
- timeout = INFINITE
- return attr, timeout
-
- def setattr(self, size, **kwds):
- f = self.open()
- if self.size() == size:
- return
- if isinstance(f, str):
- raise IOError(errno.EPERM)
- f.seek(size)
- f.truncate()
-
-
-class BaseSymLink(object):
-
- def readlink(self):
- "Return the symlink's target, as a string"
- raise NotImplementedError
-
- def getuid(self):
- return uid(self)
-
- def getattr(self, fs):
- target = self.readlink()
- attr = fs.newattr(stat.S_IFLNK, self.getuid())
- attr.size = len(target)
- attr.mode |= 0777
- return attr, INFINITE
-
- def setattr(self, **kwds):
- pass
-
-# ____________________________________________________________
-
-class Dir(BaseDir):
- def __init__(self, **contents):
- self.contents = contents
- def listdir(self):
- return self.contents.items()
- def join(self, name):
- return self.contents[name]
- def create(self, fs, name):
- node = fs.File()
- self.contents[name] = node
- return node
- def mkdir(self, fs, name):
- node = fs.Dir()
- self.contents[name] = node
- return node
- def symlink(self, fs, name, target):
- node = fs.SymLink(target)
- self.contents[name] = node
- return node
- def unlink(self, name):
- del self.contents[name]
- def rename(self, newname, olddirnode, oldname):
- oldnode = olddirnode.join(oldname)
- olddirnode.unlink(oldname)
- self.contents[newname] = oldnode
-
-class File(BaseFile):
- def __init__(self):
- self.data = StringIO()
- def size(self):
- self.data.seek(0, 2)
- return self.data.tell()
- def open(self):
- return self.data
-
-class SymLink(BaseFile):
- def __init__(self, target):
- self.target = target
- def readlink(self):
- return self.target
-
-# ____________________________________________________________
-
-
-class RWObjectFs(object):
- """A simple read-write file system based on Python objects."""
-
- UID = os.getuid()
- GID = os.getgid()
- UMASK = os.umask(0); os.umask(UMASK)
-
- Dir = Dir
- File = File
- SymLink = SymLink
-
- def __init__(self, rootnode):
- self.nodes = {FUSE_ROOT_ID: rootnode}
- self.starttime = time.time()
-
- def newattr(self, s, ino, mode=0666):
- return fuse_attr(ino = ino,
- size = 0,
- mode = s | (mode & ~self.UMASK),
- nlink = 1, # even on dirs! this confuses 'find' in
- # a good way :-)
- atime = self.starttime,
- mtime = self.starttime,
- ctime = self.starttime,
- uid = self.UID,
- gid = self.GID)
-
- def getnode(self, nodeid):
- try:
- return self.nodes[nodeid]
- except KeyError:
- raise IOError(errno.ESTALE, nodeid)
-
- def getattr(self, node):
- return node.getattr(self)
-
- def setattr(self, node, mode, uid, gid, size, atime, mtime):
- node.setattr(mode=mode, uid=uid, gid=gid, size=size,
- atime=atime, mtime=mtime)
-
- def listdir(self, node):
- entries = node.getentries()
- for name, subnode in entries.items():
- if subnode is None:
- subnode = node.join(name)
- self.nodes[uid(subnode)] = subnode
- entries[name] = subnode
- if isinstance(subnode, str):
- yield name, TYPE_REG
- elif hasattr(subnode, 'readlink'):
- yield name, TYPE_LNK
- elif hasattr(subnode, 'size'):
- yield name, TYPE_REG
- else:
- yield name, TYPE_DIR
-
- def lookup(self, node, name):
- try:
- subnode = node.join(name)
- except KeyError:
- raise IOError(errno.ENOENT, name)
- else:
- res = uid(subnode)
- self.nodes[res] = subnode
- return res, INFINITE
-
- def mknod(self, dirnode, filename, mode):
- node = dirnode.create(filename)
- return self.newnodeid(node), INFINITE
-
- def mkdir(self, dirnode, subdirname, mode):
- node = dirnode.mkdir(subdirname)
- return self.newnodeid(node), INFINITE
-
- def symlink(self, dirnode, linkname, target):
- node = dirnode.symlink(linkname, target)
- return self.newnodeid(node), INFINITE
-
- def unlink(self, dirnode, filename):
- try:
- dirnode.unlink(filename)
- except KeyError:
- raise IOError(errno.ENOENT, filename)
-
- rmdir = unlink
-
- def open(self, node, mode):
- f = node.open()
- if isinstance(f, str):
- f = StringIO(f)
- return f
-
- def readlink(self, node):
- return node.readlink()
-
- def rename(self, olddirnode, oldname, newdirnode, newname):
- try:
- newdirnode.rename(newname, olddirnode, oldname)
- except KeyError:
- raise IOError(errno.ENOENT, oldname)
-
- def getxattrs(self, node):
- return getattr(node, '__dict__', {})
-
-# ____________________________________________________________
-
-import struct
-try:
- HUGEVAL = 256 ** struct.calcsize('P')
-except struct.error:
- HUGEVAL = 0
-
-def fixid(result):
- if result < 0:
- result += HUGEVAL
- return result
-
-def uid(obj):
- """
- Return the id of an object as an unsigned number so that its hex
- representation makes sense
- """
- return fixid(id(obj))
+++ /dev/null
-import py
-from handler import Handler
-from objectfs import ObjectFs
-
-
-class SvnDir:
- def __init__(self, path):
- self.path = path
-
- def listdir(self):
- for p in self.path.listdir():
- if p.check(dir=1):
- cls = SvnDir
- else:
- cls = SvnFile
- yield p.basename, cls(p)
-
-
-class SvnFile:
- data = None
-
- def __init__(self, path):
- self.path = path
-
- def size(self):
- if self.data is None:
- return None
- else:
- return len(self.data)
-
- def read(self):
- if self.data is None:
- self.data = self.path.read()
- return self.data
-
-
-if __name__ == '__main__':
- import sys
- svnurl, mountpoint = sys.argv[1:]
- root = SvnDir(py.path.svnurl(svnurl))
- handler = Handler(mountpoint, ObjectFs(root))
- handler.loop_forever()
+++ /dev/null
-"""
-PyFuse client for the Tahoe distributed file system.
-See http://allmydata.org/
-"""
-
-# Read-only for now.
-
-# Portions copied from the file contrib/fuse/tahoe_fuse.py distributed
-# with Tahoe 1.0.0.
-
-import os, sys
-from objectfs import ObjectFs
-from handler import Handler
-import simplejson
-import urllib
-
-
-### Config:
-TahoeConfigDir = '~/.tahoe'
-
-
-### Utilities for debug:
-def log(msg, *args):
- print msg % args
-
-
-class TahoeConnection:
- def __init__(self, confdir):
- self.confdir = confdir
- self._init_url()
-
- def _init_url(self):
- if os.path.exists(os.path.join(self.confdir, 'node.url')):
- self.url = file(os.path.join(self.confdir, 'node.url'), 'rb').read().strip()
- if not self.url.endswith('/'):
- self.url += '/'
- else:
- f = open(os.path.join(self.confdir, 'webport'), 'r')
- contents = f.read()
- f.close()
- fields = contents.split(':')
- proto, port = fields[:2]
- assert proto == 'tcp'
- port = int(port)
- self.url = 'http://localhost:%d/' % (port,)
-
- def get_root(self):
- # For now we just use the same default as the CLI:
- rootdirfn = os.path.join(self.confdir, 'private', 'root_dir.cap')
- f = open(rootdirfn, 'r')
- cap = f.read().strip()
- f.close()
- return TahoeDir(self, canonicalize_cap(cap))
-
-
-class TahoeNode:
- def __init__(self, conn, uri):
- self.conn = conn
- self.uri = uri
-
- def get_metadata(self):
- f = self._open('?t=json')
- json = f.read()
- f.close()
- return simplejson.loads(json)
-
- def _open(self, postfix=''):
- url = '%suri/%s%s' % (self.conn.url, self.uri, postfix)
- log('*** Fetching: %r', url)
- return urllib.urlopen(url)
-
-
-class TahoeDir(TahoeNode):
- def listdir(self):
- flag, md = self.get_metadata()
- assert flag == 'dirnode'
- result = []
- for name, (childflag, childmd) in md['children'].items():
- if childflag == 'dirnode':
- cls = TahoeDir
- else:
- cls = TahoeFile
- result.append((str(name), cls(self.conn, childmd['ro_uri'])))
- return result
-
-class TahoeFile(TahoeNode):
- def size(self):
- rawsize = self.get_metadata()[1]['size']
- return rawsize
-
- def read(self):
- return self._open().read()
-
-
-def canonicalize_cap(cap):
- cap = urllib.unquote(cap)
- i = cap.find('URI:')
- assert i != -1, 'A cap must contain "URI:...", but this does not: ' + cap
- return cap[i:]
-
-def main(mountpoint, basedir):
- conn = TahoeConnection(basedir)
- root = conn.get_root()
- handler = Handler(mountpoint, ObjectFs(root))
- handler.loop_forever()
-
-if __name__ == '__main__':
- basedir = os.path.expanduser(TahoeConfigDir)
- for i, arg in enumerate(sys.argv):
- if arg == '--basedir':
- basedir = sys.argv[i+1]
- sys.argv[i:i+2] = []
-
- [mountpoint] = sys.argv[1:]
- main(mountpoint, basedir)
+++ /dev/null
-from handler import Handler
-import stat, errno, os, time
-from cStringIO import StringIO
-from kernel import *
-
-
-UID = os.getuid()
-GID = os.getgid()
-UMASK = os.umask(0); os.umask(UMASK)
-INFINITE = 86400.0
-
-
-class Node(object):
- __slots__ = ['attr', 'data']
-
- def __init__(self, attr, data=None):
- self.attr = attr
- self.data = data
-
- def type(self):
- return mode2type(self.attr.mode)
-
- def modified(self):
- self.attr.mtime = self.attr.atime = time.time()
- t = self.type()
- if t == TYPE_REG:
- f = self.data
- pos = f.tell()
- f.seek(0, 2)
- self.attr.size = f.tell()
- f.seek(pos)
- elif t == TYPE_DIR:
- nsubdirs = 0
- for nodeid in self.data.values():
- nsubdirs += nodeid & 1
- self.attr.nlink = 2 + nsubdirs
-
-
-def newattr(s, mode=0666):
- now = time.time()
- return fuse_attr(ino = INVALID_INO,
- size = 0,
- mode = s | (mode & ~UMASK),
- nlink = 1 + (s == stat.S_IFDIR),
- atime = now,
- mtime = now,
- ctime = now,
- uid = UID,
- gid = GID)
-
-# ____________________________________________________________
-
-class Filesystem:
-
- def __init__(self, rootnode):
- self.nodes = {FUSE_ROOT_ID: rootnode}
- self.nextid = 2
- assert self.nextid > FUSE_ROOT_ID
-
- def getnode(self, nodeid):
- try:
- return self.nodes[nodeid]
- except KeyError:
- raise IOError(errno.ESTALE, nodeid)
-
- def forget(self, nodeid):
- pass
-
- def cachenode(self, node):
- id = self.nextid
- self.nextid += 2
- if node.type() == TYPE_DIR:
- id += 1
- self.nodes[id] = node
- return id
-
- def getattr(self, node):
- return node.attr, INFINITE
-
- def setattr(self, node, mode=None, uid=None, gid=None,
- size=None, atime=None, mtime=None):
- if mode is not None: node.attr.mode = (node.attr.mode&~0777) | mode
- if uid is not None: node.attr.uid = uid
- if gid is not None: node.attr.gid = gid
- if atime is not None: node.attr.atime = atime
- if mtime is not None: node.attr.mtime = mtime
- if size is not None and node.type() == TYPE_REG:
- node.data.seek(size)
- node.data.truncate()
-
- def listdir(self, node):
- for name, subnodeid in node.data.items():
- subnode = self.nodes[subnodeid]
- yield name, subnode.type()
-
- def lookup(self, node, name):
- try:
- return node.data[name], INFINITE
- except KeyError:
- pass
- if hasattr(node, 'findnode'):
- try:
- subnode = node.findnode(name)
- except KeyError:
- pass
- else:
- id = self.cachenode(subnode)
- node.data[name] = id
- return id, INFINITE
- raise IOError(errno.ENOENT, name)
-
- def open(self, node, mode):
- return node.data
-
- def mknod(self, node, name, mode):
- subnode = Node(newattr(mode & 0170000, mode & 0777))
- if subnode.type() == TYPE_REG:
- subnode.data = StringIO()
- else:
- raise NotImplementedError
- id = self.cachenode(subnode)
- node.data[name] = id
- node.modified()
- return id, INFINITE
-
- def mkdir(self, node, name, mode):
- subnode = Node(newattr(stat.S_IFDIR, mode & 0777), {})
- id = self.cachenode(subnode)
- node.data[name] = id
- node.modified()
- return id, INFINITE
-
- def symlink(self, node, linkname, target):
- subnode = Node(newattr(stat.S_IFLNK, 0777), target)
- id = self.cachenode(subnode)
- node.data[linkname] = id
- node.modified()
- return id, INFINITE
-
- def readlink(self, node):
- assert node.type() == TYPE_LNK
- return node.data
-
- def unlink(self, node, name):
- try:
- del node.data[name]
- except KeyError:
- raise IOError(errno.ENOENT, name)
- node.modified()
-
- rmdir = unlink
-
- def rename(self, oldnode, oldname, newnode, newname):
- if newnode.type() != TYPE_DIR:
- raise IOError(errno.ENOTDIR, newnode)
- try:
- nodeid = oldnode.data.pop(oldname)
- except KeyError:
- raise IOError(errno.ENOENT, oldname)
- oldnode.modified()
- newnode.data[newname] = nodeid
- newnode.modified()
-
- def modified(self, node):
- node.modified()
-
-# ____________________________________________________________
-
-if __name__ == '__main__':
- root = Node(newattr(stat.S_IFDIR), {})
- handler = Handler('/home/arigo/mnt', Filesystem(root))
- handler.loop_forever()
+++ /dev/null
-#!/usr/bin/env python
-
-#-----------------------------------------------------------------------------------------------
-from allmydata.uri import CHKFileURI, DirectoryURI, LiteralFileURI, is_literal_file_uri
-from allmydata.scripts.common_http import do_http as do_http_req
-from allmydata.util.hashutil import tagged_hash
-from allmydata.util.assertutil import precondition
-from allmydata.util import base32, fileutil, observer
-from allmydata.scripts.common import get_aliases
-
-from twisted.python import usage
-from twisted.python.failure import Failure
-from twisted.internet.protocol import Factory, Protocol
-from twisted.internet import reactor, defer, task
-from twisted.web import client
-
-import base64
-import errno
-import heapq
-import sha
-import socket
-import stat
-import subprocess
-import sys
-import os
-import weakref
-#import pprint
-
-# one needs either python-fuse to have been installed in sys.path, or
-# suitable affordances to be made in the build or runtime environment
-import fuse
-
-import time
-import traceback
-import simplejson
-import urllib
-
-VERSIONSTR="0.7"
-
-USAGE = 'usage: tahoe fuse [dir_cap_name] [fuse_options] mountpoint'
-DEFAULT_DIRECTORY_VALIDITY=26
-
-if not hasattr(fuse, '__version__'):
- raise RuntimeError, \
- "your fuse-py doesn't know of fuse.__version__, probably it's too old."
-
-fuse.fuse_python_api = (0, 2)
-fuse.feature_assert('stateful_files', 'has_init')
-
-class TahoeFuseOptions(usage.Options):
- optParameters = [
- ["node-directory", None, "~/.tahoe",
- "Look here to find out which Tahoe node should be used for all "
- "operations. The directory should either contain a full Tahoe node, "
- "or a file named node.url which points to some other Tahoe node. "
- "It should also contain a file named private/aliases which contains "
- "the mapping from alias name to root dirnode URI."
- ],
- ["node-url", None, None,
- "URL of the tahoe node to use, a URL like \"http://127.0.0.1:3456\". "
- "This overrides the URL found in the --node-directory ."],
- ["alias", None, None,
- "Which alias should be mounted."],
- ["root-uri", None, None,
- "Which root directory uri should be mounted."],
- ["cache-timeout", None, 20,
- "Time, in seconds, to cache directory data."],
- ]
- optFlags = [
- ['no-split', None,
- 'run stand-alone; no splitting into client and server'],
- ['server', None,
- 'server mode (should not be used by end users)'],
- ['server-shutdown', None,
- 'shutdown server (should not be used by end users)'],
- ]
-
- def __init__(self):
- usage.Options.__init__(self)
- self.fuse_options = []
- self.mountpoint = None
-
- def opt_option(self, fuse_option):
- """
- Pass mount options directly to fuse. See below.
- """
- self.fuse_options.append(fuse_option)
-
- opt_o = opt_option
-
- def parseArgs(self, mountpoint=''):
- self.mountpoint = mountpoint
-
- def getSynopsis(self):
- return "%s [options] mountpoint" % (os.path.basename(sys.argv[0]),)
-
-logfile = file('tfuse.log', 'ab')
-
-def reopen_logfile(fname):
- global logfile
- log('switching to %s' % (fname,))
- logfile.close()
- logfile = file(fname, 'ab')
-
-def log(msg):
- logfile.write("%s: %s\n" % (time.asctime(), msg))
- #time.sleep(0.1)
- logfile.flush()
-
-fuse.flog = log
-
-def unicode_to_utf8_or_str(u):
- if isinstance(u, unicode):
- return u.encode('utf-8')
- else:
- precondition(isinstance(u, str), repr(u))
- return u
-
-def do_http(method, url, body=''):
- resp = do_http_req(method, url, body)
- log('do_http(%s, %s) -> %s, %s' % (method, url, resp.status, resp.reason))
- if resp.status not in (200, 201):
- raise RuntimeError('http response (%s, %s)' % (resp.status, resp.reason))
- else:
- return resp.read()
-
-def flag2mode(flags):
- log('flag2mode(%r)' % (flags,))
- #md = {os.O_RDONLY: 'r', os.O_WRONLY: 'w', os.O_RDWR: 'w+'}
- md = {os.O_RDONLY: 'rb', os.O_WRONLY: 'wb', os.O_RDWR: 'w+b'}
- m = md[flags & (os.O_RDONLY | os.O_WRONLY | os.O_RDWR)]
-
- if flags & os.O_APPEND:
- m = m.replace('w', 'a', 1)
-
- return m
-
-class TFSIOError(IOError):
- pass
-
-class ENOENT(TFSIOError):
- def __init__(self, msg):
- TFSIOError.__init__(self, errno.ENOENT, msg)
-
-class EINVAL(TFSIOError):
- def __init__(self, msg):
- TFSIOError.__init__(self, errno.EINVAL, msg)
-
-class EACCESS(TFSIOError):
- def __init__(self, msg):
- TFSIOError.__init__(self, errno.EACCESS, msg)
-
-class EEXIST(TFSIOError):
- def __init__(self, msg):
- TFSIOError.__init__(self, errno.EEXIST, msg)
-
-class EIO(TFSIOError):
- def __init__(self, msg):
- TFSIOError.__init__(self, errno.EIO, msg)
-
-def logargsretexc(meth):
- def inner_logargsretexc(self, *args, **kwargs):
- log("%s(%r, %r)" % (meth, args, kwargs))
- try:
- ret = meth(self, *args, **kwargs)
- except:
- log('exception:\n%s' % (traceback.format_exc(),))
- raise
- log("ret: %r" % (ret, ))
- return ret
- inner_logargsretexc.__name__ = '<logwrap(%s)>' % (meth,)
- return inner_logargsretexc
-
-def logexc(meth):
- def inner_logexc(self, *args, **kwargs):
- try:
- ret = meth(self, *args, **kwargs)
- except TFSIOError, tie:
- log('error: %s' % (tie,))
- raise
- except:
- log('exception:\n%s' % (traceback.format_exc(),))
- raise
- return ret
- inner_logexc.__name__ = '<logwrap(%s)>' % (meth,)
- return inner_logexc
-
-def log_exc():
- log('exception:\n%s' % (traceback.format_exc(),))
-
-def repr_mode(mode=None):
- if mode is None:
- return 'none'
- fields = ['S_ENFMT', 'S_IFBLK', 'S_IFCHR', 'S_IFDIR', 'S_IFIFO', 'S_IFLNK', 'S_IFREG', 'S_IFSOCK', 'S_IRGRP', 'S_IROTH', 'S_IRUSR', 'S_IRWXG', 'S_IRWXO', 'S_IRWXU', 'S_ISGID', 'S_ISUID', 'S_ISVTX', 'S_IWGRP', 'S_IWOTH', 'S_IWUSR', 'S_IXGRP', 'S_IXOTH', 'S_IXUSR']
- ret = []
- for field in fields:
- fval = getattr(stat, field)
- if (mode & fval) == fval:
- ret.append(field)
- return '|'.join(ret)
-
-def repr_flags(flags=None):
- if flags is None:
- return 'none'
- fields = [ 'O_APPEND', 'O_CREAT', 'O_DIRECT', 'O_DIRECTORY', 'O_EXCL', 'O_EXLOCK',
- 'O_LARGEFILE', 'O_NDELAY', 'O_NOCTTY', 'O_NOFOLLOW', 'O_NONBLOCK', 'O_RDWR',
- 'O_SHLOCK', 'O_SYNC', 'O_TRUNC', 'O_WRONLY', ]
- ret = []
- for field in fields:
- fval = getattr(os, field, None)
- if fval is not None and (flags & fval) == fval:
- ret.append(field)
- if not ret:
- ret = ['O_RDONLY']
- return '|'.join(ret)
-
-class DownloaderWithReadQueue(object):
- def __init__(self):
- self.read_heap = []
- self.dest_file_name = None
- self.running = False
- self.done_observer = observer.OneShotObserverList()
-
- def __repr__(self):
- name = self.dest_file_name is None and '<none>' or os.path.basename(self.dest_file_name)
- return "<DWRQ(%s)> q(%s)" % (name, len(self.read_heap or []))
-
- def log(self, msg):
- log("%r: %s" % (self, msg))
-
- @logexc
- def start(self, url, dest_file_name, target_size, interval=0.5):
- self.log('start(%s, %s, %s)' % (url, dest_file_name, target_size, ))
- self.dest_file_name = dest_file_name
- file(self.dest_file_name, 'wb').close() # touch
- self.target_size = target_size
- self.log('start()')
- self.loop = task.LoopingCall(self._check_file_size)
- self.loop.start(interval)
- self.running = True
- d = client.downloadPage(url, self.dest_file_name)
- d.addCallbacks(self.done, self.fail)
- return d
-
- def when_done(self):
- return self.done_observer.when_fired()
-
- def get_size(self):
- if os.path.exists(self.dest_file_name):
- return os.path.getsize(self.dest_file_name)
- else:
- return 0
-
- @logexc
- def _read(self, posn, size):
- #self.log('_read(%s, %s)' % (posn, size))
- f = file(self.dest_file_name, 'rb')
- f.seek(posn)
- data = f.read(size)
- f.close()
- return data
-
- @logexc
- def read(self, posn, size):
- self.log('read(%s, %s)' % (posn, size))
- if self.read_heap is None:
- raise ValueError('read() called when already shut down')
- if posn+size > self.target_size:
- size -= self.target_size - posn
- fsize = self.get_size()
- if posn+size < fsize:
- return defer.succeed(self._read(posn, size))
- else:
- d = defer.Deferred()
- dread = (posn+size, posn, d)
- heapq.heappush(self.read_heap, dread)
- return d
-
- @logexc
- def _check_file_size(self):
- #self.log('_check_file_size()')
- if self.read_heap:
- try:
- size = self.get_size()
- while self.read_heap and self.read_heap[0][0] <= size:
- end, start, d = heapq.heappop(self.read_heap)
- data = self._read(start, end-start)
- d.callback(data)
- except Exception, e:
- log_exc()
- failure = Failure()
-
- @logexc
- def fail(self, failure):
- self.log('fail(%s)' % (failure,))
- self.running = False
- if self.loop.running:
- self.loop.stop()
- # fail any reads still pending
- for end, start, d in self.read_heap:
- reactor.callLater(0, d.errback, failure)
- self.read_heap = None
- self.done_observer.fire_if_not_fired(failure)
- return failure
-
- @logexc
- def done(self, result):
- self.log('done()')
- self.running = False
- if self.loop.running:
- self.loop.stop()
- precondition(self.get_size() == self.target_size, self.get_size(), self.target_size)
- self._check_file_size() # process anything left pending in heap
- precondition(not self.read_heap, self.read_heap, self.target_size, self.get_size())
- self.read_heap = None
- self.done_observer.fire_if_not_fired(self)
- return result
-
-
-class TahoeFuseFile(object):
-
- #def __init__(self, path, flags, *mode):
- def __init__(self, tfs, path, flags, *mode):
- log("TFF: __init__(%r, %r:%s, %r:%s)" % (path, flags, repr_flags(flags), mode, repr_mode(*mode)))
- self.tfs = tfs
- self.downloader = None
-
- self._path = path # for tahoe put
- try:
- self.parent, self.name, self.fnode = self.tfs.get_parent_name_and_child(path)
- m = flag2mode(flags)
- log('TFF: flags2(mode) -> %s' % (m,))
- if m[0] in 'wa':
- # write
- self.fname = self.tfs.cache.tmp_file(os.urandom(20))
- if self.fnode is None:
- log('TFF: [%s] open() for write: no file node, creating new File %s' % (self.name, self.fname, ))
- self.fnode = File(0, LiteralFileURI.BASE_STRING)
- self.fnode.tmp_fname = self.fname # XXX kill this
- self.parent.add_child(self.name, self.fnode, {})
- elif hasattr(self.fnode, 'tmp_fname'):
- self.fname = self.fnode.tmp_fname
- log('TFF: [%s] open() for write: existing file node lists %s' % (self.name, self.fname, ))
- else:
- log('TFF: [%s] open() for write: existing file node lists no tmp_file, using new %s' % (self.name, self.fname, ))
- if mode != (0600,):
- log('TFF: [%s] changing mode %s(%s) to 0600' % (self.name, repr_mode(*mode), mode))
- mode = (0600,)
- log('TFF: [%s] opening(%s) with flags %s(%s), mode %s(%s)' % (self.name, self.fname, repr_flags(flags|os.O_CREAT), flags|os.O_CREAT, repr_mode(*mode), mode))
- #self.file = os.fdopen(os.open(self.fname, flags|os.O_CREAT, *mode), m)
- self.file = os.fdopen(os.open(self.fname, flags|os.O_CREAT, *mode), m)
- self.fd = self.file.fileno()
- log('TFF: opened(%s) for write' % self.fname)
- self.open_for_write = True
- else:
- # read
- assert self.fnode is not None
- uri = self.fnode.get_uri()
-
- # XXX make this go away
- if hasattr(self.fnode, 'tmp_fname'):
- self.fname = self.fnode.tmp_fname
- log('TFF: reopening(%s) for reading' % self.fname)
- else:
- if is_literal_file_uri(uri) or not self.tfs.async:
- log('TFF: synchronously fetching file from cache for reading')
- self.fname = self.tfs.cache.get_file(uri)
- else:
- log('TFF: asynchronously fetching file from cache for reading')
- self.fname, self.downloader = self.tfs.cache.async_get_file(uri)
- # downloader is None if the cache already contains the file
- if self.downloader is not None:
- d = self.downloader.when_done()
- def download_complete(junk):
- # once the download is complete, revert to non-async behaviour
- self.downloader = None
- d.addCallback(download_complete)
-
- self.file = os.fdopen(os.open(self.fname, flags, *mode), m)
- self.fd = self.file.fileno()
- self.open_for_write = False
- log('TFF: opened(%s) for read' % self.fname)
- except:
- log_exc()
- raise
-
- def log(self, msg):
- log("<TFF(%s:%s)> %s" % (os.path.basename(self.fname), self.name, msg))
-
- @logexc
- def read(self, size, offset):
- self.log('read(%r, %r)' % (size, offset, ))
- if self.downloader:
- # then we're busy doing an async download
- # (and hence implicitly, we're in an environment that supports twisted)
- #self.log('passing read() to %s' % (self.downloader, ))
- d = self.downloader.read(offset, size)
- def thunk(failure):
- raise EIO(str(failure))
- d.addErrback(thunk)
- return d
- else:
- self.log('servicing read() from %s' % (self.file, ))
- self.file.seek(offset)
- return self.file.read(size)
-
- @logexc
- def write(self, buf, offset):
- self.log("write(-%s-, %r)" % (len(buf), offset))
- if not self.open_for_write:
- return -errno.EACCES
- self.file.seek(offset)
- self.file.write(buf)
- return len(buf)
-
- @logexc
- def release(self, flags):
- self.log("release(%r)" % (flags,))
- self.file.close()
- if self.open_for_write:
- size = os.path.getsize(self.fname)
- self.fnode.size = size
- file_cap = self.tfs.upload(self.fname)
- self.fnode.ro_uri = file_cap
- # XXX [ ] TODO: set metadata
- # write new uri into parent dir entry
- self.parent.add_child(self.name, self.fnode, {})
- self.log("uploaded: %s" % (file_cap,))
-
- # dbg
- print_tree()
-
- def _fflush(self):
- if 'w' in self.file.mode or 'a' in self.file.mode:
- self.file.flush()
-
- @logexc
- def fsync(self, isfsyncfile):
- self.log("fsync(%r)" % (isfsyncfile,))
- self._fflush()
- if isfsyncfile and hasattr(os, 'fdatasync'):
- os.fdatasync(self.fd)
- else:
- os.fsync(self.fd)
-
- @logexc
- def flush(self):
- self.log("flush()")
- self._fflush()
- # cf. xmp_flush() in fusexmp_fh.c
- os.close(os.dup(self.fd))
-
- @logexc
- def fgetattr(self):
- self.log("fgetattr()")
- s = os.fstat(self.fd)
- d = stat_to_dict(s)
- if self.downloader:
- size = self.downloader.target_size
- self.log("fgetattr() during async download, cache file: %s, size=%s" % (s, size))
- d['st_size'] = size
- self.log("fgetattr() -> %r" % (d,))
- return d
-
- @logexc
- def ftruncate(self, len):
- self.log("ftruncate(%r)" % (len,))
- self.file.truncate(len)
-
-class TahoeFuseBase(object):
-
- def __init__(self, tfs):
- log("TFB: __init__()")
- self.tfs = tfs
- self.files = {}
-
- def log(self, msg):
- log("<TFB> %s" % (msg, ))
-
- @logexc
- def readlink(self, path):
- self.log("readlink(%r)" % (path,))
- node = self.tfs.get_path(path)
- if node:
- raise EINVAL('Not a symlink') # nothing in tahoe is a symlink
- else:
- raise ENOENT('Invalid argument')
-
- @logexc
- def unlink(self, path):
- self.log("unlink(%r)" % (path,))
- self.tfs.unlink(path)
-
- @logexc
- def rmdir(self, path):
- self.log("rmdir(%r)" % (path,))
- self.tfs.unlink(path)
-
- @logexc
- def symlink(self, path, path1):
- self.log("symlink(%r, %r)" % (path, path1))
- self.tfs.link(path, path1)
-
- @logexc
- def rename(self, path, path1):
- self.log("rename(%r, %r)" % (path, path1))
- self.tfs.rename(path, path1)
-
- @logexc
- def link(self, path, path1):
- self.log("link(%r, %r)" % (path, path1))
- self.tfs.link(path, path1)
-
- @logexc
- def chmod(self, path, mode):
- self.log("XX chmod(%r, %r)" % (path, mode))
- #return -errno.EOPNOTSUPP
-
- @logexc
- def chown(self, path, user, group):
- self.log("XX chown(%r, %r, %r)" % (path, user, group))
- #return -errno.EOPNOTSUPP
-
- @logexc
- def truncate(self, path, len):
- self.log("XX truncate(%r, %r)" % (path, len))
- #return -errno.EOPNOTSUPP
-
- @logexc
- def utime(self, path, times):
- self.log("XX utime(%r, %r)" % (path, times))
- #return -errno.EOPNOTSUPP
-
- @logexc
- def statfs(self):
- self.log("statfs()")
- """
- Should return an object with statvfs attributes (f_bsize, f_frsize...).
- Eg., the return value of os.statvfs() is such a thing (since py 2.2).
- If you are not reusing an existing statvfs object, start with
- fuse.StatVFS(), and define the attributes.
-
- To provide usable information (ie., you want sensible df(1)
- output, you are suggested to specify the following attributes:
-
- - f_bsize - preferred size of file blocks, in bytes
- - f_frsize - fundamental size of file blcoks, in bytes
- [if you have no idea, use the same as blocksize]
- - f_blocks - total number of blocks in the filesystem
- - f_bfree - number of free blocks
- - f_files - total number of file inodes
- - f_ffree - nunber of free file inodes
- """
-
- block_size = 4096 # 4k
- preferred_block_size = 131072 # 128k, c.f. seg_size
- fs_size = 8*2**40 # 8Tb
- fs_free = 2*2**40 # 2Tb
-
- #s = fuse.StatVfs(f_bsize = preferred_block_size,
- s = dict(f_bsize = preferred_block_size,
- f_frsize = block_size,
- f_blocks = fs_size / block_size,
- f_bfree = fs_free / block_size,
- f_bavail = fs_free / block_size,
- f_files = 2**30, # total files
- f_ffree = 2**20, # available files
- f_favail = 2**20, # available files (root)
- f_flag = 2, # no suid
- f_namemax = 255) # max name length
- #self.log('statfs(): %r' % (s,))
- return s
-
- def fsinit(self):
- self.log("fsinit()")
-
- ##################################################################
-
- @logexc
- def readdir(self, path, offset):
- self.log('readdir(%r, %r)' % (path, offset))
- node = self.tfs.get_path(path)
- if node is None:
- return -errno.ENOENT
- dirlist = ['.', '..'] + node.children.keys()
- self.log('dirlist = %r' % (dirlist,))
- #return [fuse.Direntry(d) for d in dirlist]
- return dirlist
-
- @logexc
- def getattr(self, path):
- self.log('getattr(%r)' % (path,))
-
- if path == '/':
- # we don't have any metadata for the root (no edge leading to it)
- mode = (stat.S_IFDIR | 755)
- mtime = self.tfs.root.mtime
- s = TStat({}, st_mode=mode, st_nlink=1, st_mtime=mtime)
- self.log('getattr(%r) -> %r' % (path, s))
- #return s
- return stat_to_dict(s)
-
- parent, name, child = self.tfs.get_parent_name_and_child(path)
- if not child: # implicitly 'or not parent'
- raise ENOENT('No such file or directory')
- return stat_to_dict(parent.get_stat(name))
-
- @logexc
- def access(self, path, mode):
- self.log("access(%r, %r)" % (path, mode))
- node = self.tfs.get_path(path)
- if not node:
- return -errno.ENOENT
- accmode = os.O_RDONLY | os.O_WRONLY | os.O_RDWR
- if (mode & 0222):
- if not node.writable():
- log('write access denied for %s (req:%o)' % (path, mode, ))
- return -errno.EACCES
- #else:
- #log('access granted for %s' % (path, ))
-
- @logexc
- def mkdir(self, path, mode):
- self.log("mkdir(%r, %r)" % (path, mode))
- self.tfs.mkdir(path)
-
- ##################################################################
- # file methods
-
- def open(self, path, flags):
- self.log('open(%r, %r)' % (path, flags, ))
- if path in self.files:
- # XXX todo [ ] should consider concurrent open files of differing modes
- return
- else:
- tffobj = TahoeFuseFile(self.tfs, path, flags)
- self.files[path] = tffobj
-
- def create(self, path, flags, mode):
- self.log('create(%r, %r, %r)' % (path, flags, mode))
- if path in self.files:
- # XXX todo [ ] should consider concurrent open files of differing modes
- return
- else:
- tffobj = TahoeFuseFile(self.tfs, path, flags, mode)
- self.files[path] = tffobj
-
- def _get_file(self, path):
- if not path in self.files:
- raise ENOENT('No such file or directory: %s' % (path,))
- return self.files[path]
-
- ##
-
- def read(self, path, size, offset):
- self.log('read(%r, %r, %r)' % (path, size, offset, ))
- return self._get_file(path).read(size, offset)
-
- @logexc
- def write(self, path, buf, offset):
- self.log("write(%r, -%s-, %r)" % (path, len(buf), offset))
- return self._get_file(path).write(buf, offset)
-
- @logexc
- def release(self, path, flags):
- self.log("release(%r, %r)" % (path, flags,))
- self._get_file(path).release(flags)
- del self.files[path]
-
- @logexc
- def fsync(self, path, isfsyncfile):
- self.log("fsync(%r, %r)" % (path, isfsyncfile,))
- return self._get_file(path).fsync(isfsyncfile)
-
- @logexc
- def flush(self, path):
- self.log("flush(%r)" % (path,))
- return self._get_file(path).flush()
-
- @logexc
- def fgetattr(self, path):
- self.log("fgetattr(%r)" % (path,))
- return self._get_file(path).fgetattr()
-
- @logexc
- def ftruncate(self, path, len):
- self.log("ftruncate(%r, %r)" % (path, len,))
- return self._get_file(path).ftruncate(len)
-
-class TahoeFuseLocal(TahoeFuseBase, fuse.Fuse):
- def __init__(self, tfs, *args, **kw):
- log("TFL: __init__(%r, %r)" % (args, kw))
- TahoeFuseBase.__init__(self, tfs)
- fuse.Fuse.__init__(self, *args, **kw)
-
- def log(self, msg):
- log("<TFL> %s" % (msg, ))
-
- def main(self, *a, **kw):
- self.log("main(%r, %r)" % (a, kw))
- return fuse.Fuse.main(self, *a, **kw)
-
- # overrides for those methods which return objects not marshalled
- def fgetattr(self, path):
- return TStat({}, **(TahoeFuseBase.fgetattr(self, path)))
-
- def getattr(self, path):
- return TStat({}, **(TahoeFuseBase.getattr(self, path)))
-
- def statfs(self):
- return fuse.StatVfs(**(TahoeFuseBase.statfs(self)))
- #self.log('statfs()')
- #ret = fuse.StatVfs(**(TahoeFuseBase.statfs(self)))
- #self.log('statfs(): %r' % (ret,))
- #return ret
-
- @logexc
- def readdir(self, path, offset):
- return [ fuse.Direntry(d) for d in TahoeFuseBase.readdir(self, path, offset) ]
-
-class TahoeFuseShim(fuse.Fuse):
- def __init__(self, trpc, *args, **kw):
- log("TF: __init__(%r, %r)" % (args, kw))
- self.trpc = trpc
- fuse.Fuse.__init__(self, *args, **kw)
-
- def log(self, msg):
- log("<TFs> %s" % (msg, ))
-
- @logexc
- def readlink(self, path):
- self.log("readlink(%r)" % (path,))
- return self.trpc.call('readlink', path)
-
- @logexc
- def unlink(self, path):
- self.log("unlink(%r)" % (path,))
- return self.trpc.call('unlink', path)
-
- @logexc
- def rmdir(self, path):
- self.log("rmdir(%r)" % (path,))
- return self.trpc.call('unlink', path)
-
- @logexc
- def symlink(self, path, path1):
- self.log("symlink(%r, %r)" % (path, path1))
- return self.trpc.call('link', path, path1)
-
- @logexc
- def rename(self, path, path1):
- self.log("rename(%r, %r)" % (path, path1))
- return self.trpc.call('rename', path, path1)
-
- @logexc
- def link(self, path, path1):
- self.log("link(%r, %r)" % (path, path1))
- return self.trpc.call('link', path, path1)
-
- @logexc
- def chmod(self, path, mode):
- self.log("XX chmod(%r, %r)" % (path, mode))
- return self.trpc.call('chmod', path, mode)
-
- @logexc
- def chown(self, path, user, group):
- self.log("XX chown(%r, %r, %r)" % (path, user, group))
- return self.trpc.call('chown', path, user, group)
-
- @logexc
- def truncate(self, path, len):
- self.log("XX truncate(%r, %r)" % (path, len))
- return self.trpc.call('truncate', path, len)
-
- @logexc
- def utime(self, path, times):
- self.log("XX utime(%r, %r)" % (path, times))
- return self.trpc.call('utime', path, times)
-
- @logexc
- def statfs(self):
- self.log("statfs()")
- response = self.trpc.call('statfs')
- #self.log("statfs(): %r" % (response,))
- kwargs = dict([ (str(k),v) for k,v in response.items() ])
- return fuse.StatVfs(**kwargs)
-
- def fsinit(self):
- self.log("fsinit()")
-
- def main(self, *a, **kw):
- self.log("main(%r, %r)" % (a, kw))
-
- return fuse.Fuse.main(self, *a, **kw)
-
- ##################################################################
-
- @logexc
- def readdir(self, path, offset):
- self.log('readdir(%r, %r)' % (path, offset))
- return [ fuse.Direntry(d) for d in self.trpc.call('readdir', path, offset) ]
-
- @logexc
- def getattr(self, path):
- self.log('getattr(%r)' % (path,))
- response = self.trpc.call('getattr', path)
- kwargs = dict([ (str(k),v) for k,v in response.items() ])
- s = TStat({}, **kwargs)
- self.log('getattr(%r) -> %r' % (path, s))
- return s
-
- @logexc
- def access(self, path, mode):
- self.log("access(%r, %r)" % (path, mode))
- return self.trpc.call('access', path, mode)
-
- @logexc
- def mkdir(self, path, mode):
- self.log("mkdir(%r, %r)" % (path, mode))
- return self.trpc.call('mkdir', path, mode)
-
- ##################################################################
- # file methods
-
- def open(self, path, flags):
- self.log('open(%r, %r)' % (path, flags, ))
- return self.trpc.call('open', path, flags)
-
- def create(self, path, flags, mode):
- self.log('create(%r, %r, %r)' % (path, flags, mode))
- return self.trpc.call('create', path, flags, mode)
-
- ##
-
- def read(self, path, size, offset):
- self.log('read(%r, %r, %r)' % (path, size, offset, ))
- return self.trpc.call('read', path, size, offset)
-
- @logexc
- def write(self, path, buf, offset):
- self.log("write(%r, -%s-, %r)" % (path, len(buf), offset))
- return self.trpc.call('write', path, buf, offset)
-
- @logexc
- def release(self, path, flags):
- self.log("release(%r, %r)" % (path, flags,))
- return self.trpc.call('release', path, flags)
-
- @logexc
- def fsync(self, path, isfsyncfile):
- self.log("fsync(%r, %r)" % (path, isfsyncfile,))
- return self.trpc.call('fsync', path, isfsyncfile)
-
- @logexc
- def flush(self, path):
- self.log("flush(%r)" % (path,))
- return self.trpc.call('flush', path)
-
- @logexc
- def fgetattr(self, path):
- self.log("fgetattr(%r)" % (path,))
- #return self.trpc.call('fgetattr', path)
- response = self.trpc.call('fgetattr', path)
- kwargs = dict([ (str(k),v) for k,v in response.items() ])
- s = TStat({}, **kwargs)
- self.log('getattr(%r) -> %r' % (path, s))
- return s
-
- @logexc
- def ftruncate(self, path, len):
- self.log("ftruncate(%r, %r)" % (path, len,))
- return self.trpc.call('ftruncate', path, len)
-
-
-def launch_tahoe_fuse(tf_class, tobj, argv):
- sys.argv = ['tahoe fuse'] + list(argv)
- log('setting sys.argv=%r' % (sys.argv,))
- config = TahoeFuseOptions()
- version = "%prog " +VERSIONSTR+", fuse "+ fuse.__version__
- server = tf_class(tobj, version=version, usage=config.getSynopsis(), dash_s_do='setsingle')
- server.parse(errex=1)
- server.main()
-
-def getnodeurl(nodedir):
- f = file(os.path.expanduser(os.path.join(nodedir, "node.url")), 'rb')
- nu = f.read().strip()
- f.close()
- if nu[-1] != "/":
- nu += "/"
- return nu
-
-def fingerprint(uri):
- if uri is None:
- return None
- return base64.b32encode(sha.new(uri).digest()).lower()[:6]
-
-stat_fields = [ 'st_mode', 'st_ino', 'st_dev', 'st_nlink', 'st_uid', 'st_gid', 'st_size',
- 'st_atime', 'st_mtime', 'st_ctime', ]
-def stat_to_dict(statobj, fields=None):
- if fields is None:
- fields = stat_fields
- d = {}
- for f in fields:
- d[f] = getattr(statobj, f, None)
- return d
-
-class TStat(fuse.Stat):
- # in fuse 0.2, these are set by fuse.Stat.__init__
- # in fuse 0.2-pre3 (hardy) they are not. badness ensues if they're missing
- st_mode = None
- st_ino = 0
- st_dev = 0
- st_nlink = None
- st_uid = 0
- st_gid = 0
- st_size = 0
- st_atime = 0
- st_mtime = 0
- st_ctime = 0
-
- fields = [ 'st_mode', 'st_ino', 'st_dev', 'st_nlink', 'st_uid', 'st_gid', 'st_size',
- 'st_atime', 'st_mtime', 'st_ctime', ]
- def __init__(self, metadata, **kwargs):
- # first load any stat fields present in 'metadata'
- for st in [ 'mtime', 'ctime' ]:
- if st in metadata:
- setattr(self, "st_%s" % st, metadata[st])
- for st in self.fields:
- if st in metadata:
- setattr(self, st, metadata[st])
-
- # then set any values passed in as kwargs
- fuse.Stat.__init__(self, **kwargs)
-
- def __repr__(self):
- return "<Stat%r>" % (stat_to_dict(self),)
-
-class Directory(object):
- def __init__(self, tfs, ro_uri, rw_uri):
- self.tfs = tfs
- self.ro_uri = ro_uri
- self.rw_uri = rw_uri
- assert (rw_uri or ro_uri)
- self.children = {}
- self.last_load = None
- self.last_data = None
- self.mtime = 0
-
- def __repr__(self):
- return "<Directory %s>" % (fingerprint(self.get_uri()),)
-
- def maybe_refresh(self, name=None):
- """
- if the previously cached data was retrieved within the cache
- validity period, does nothing. otherwise refetches the data
- for this directory and reloads itself
- """
- now = time.time()
- if self.last_load is None or (now - self.last_load) > self.tfs.cache_validity:
- self.load(name)
-
- def load(self, name=None):
- now = time.time()
- log('%s.loading(%s)' % (self, name))
- url = self.tfs.compose_url("uri/%s?t=json", self.get_uri())
- data = urllib.urlopen(url).read()
- h = tagged_hash('cache_hash', data)
- if h == self.last_data:
- self.last_load = now
- log('%s.load() : no change h(data)=%s' % (self, base32.b2a(h), ))
- return
- try:
- parsed = simplejson.loads(data)
- except ValueError:
- log('%s.load(): unable to parse json data for dir:\n%r' % (self, data))
- return
- nodetype, d = parsed
- assert nodetype == 'dirnode'
- self.children.clear()
- for cname,details in d['children'].items():
- cname = unicode_to_utf8_or_str(cname)
- ctype, cattrs = details
- metadata = cattrs.get('metadata', {})
- if ctype == 'dirnode':
- cobj = self.tfs.dir_for(cname, cattrs.get('ro_uri'), cattrs.get('rw_uri'))
- else:
- assert ctype == "filenode"
- cobj = File(cattrs.get('size'), cattrs.get('ro_uri'))
- self.children[cname] = cobj, metadata
- self.last_load = now
- self.last_data = h
- self.mtime = now
- log('%s.load() loaded: \n%s' % (self, self.pprint(),))
-
- def get_children(self):
- return self.children.keys()
-
- def get_child(self, name):
- return self.children[name][0]
-
- def add_child(self, name, child, metadata):
- log('%s.add_child(%r, %r, %r)' % (self, name, child, metadata, ))
- self.children[name] = child, metadata
- url = self.tfs.compose_url("uri/%s/%s?t=uri", self.get_uri(), name)
- child_cap = do_http('PUT', url, child.get_uri())
- # XXX [ ] TODO: push metadata to tahoe node
- assert child_cap == child.get_uri()
- self.mtime = time.time()
- log('added child %r with %r to %r' % (name, child_cap, self))
-
- def remove_child(self, name):
- log('%s.remove_child(%r)' % (self, name, ))
- del self.children[name]
- url = self.tfs.compose_url("uri/%s/%s", self.get_uri(), name)
- resp = do_http('DELETE', url)
- self.mtime = time.time()
- log('child (%s) removal yielded %r' % (name, resp,))
-
- def get_uri(self):
- return self.rw_uri or self.ro_uri
-
- # TODO: rename to 'is_writeable', or switch sense to 'is_readonly', for consistency with Tahoe code
- def writable(self):
- return self.rw_uri and self.rw_uri != self.ro_uri
-
- def pprint(self, prefix='', printed=None, suffix=''):
- ret = []
- if printed is None:
- printed = set()
- writable = self.writable() and '+' or ' '
- if self in printed:
- ret.append(" %s/%s ... <%s> : %s" % (prefix, writable, fingerprint(self.get_uri()), suffix, ))
- else:
- ret.append("[%s] %s/%s : %s" % (fingerprint(self.get_uri()), prefix, writable, suffix, ))
- printed.add(self)
- for name,(child,metadata) in sorted(self.children.items()):
- ret.append(child.pprint(' ' * (len(prefix)+1)+name, printed, repr(metadata)))
- return '\n'.join(ret)
-
- def get_metadata(self, name):
- return self.children[name][1]
-
- def get_stat(self, name):
- child,metadata = self.children[name]
- log("%s.get_stat(%s) md: %r" % (self, name, metadata))
-
- if isinstance(child, Directory):
- child.maybe_refresh(name)
- mode = metadata.get('st_mode') or (stat.S_IFDIR | 0755)
- s = TStat(metadata, st_mode=mode, st_nlink=1, st_mtime=child.mtime)
- else:
- if hasattr(child, 'tmp_fname'):
- s = os.stat(child.tmp_fname)
- log("%s.get_stat(%s) returning local stat of tmp file" % (self, name, ))
- else:
- s = TStat(metadata,
- st_nlink = 1,
- st_size = child.size,
- st_mode = metadata.get('st_mode') or (stat.S_IFREG | 0444),
- st_mtime = metadata.get('mtime') or self.mtime,
- )
- return s
-
- log("%s.get_stat(%s)->%s" % (self, name, s))
- return s
-
-class File(object):
- def __init__(self, size, ro_uri):
- self.size = size
- if ro_uri:
- ro_uri = str(ro_uri)
- self.ro_uri = ro_uri
-
- def __repr__(self):
- return "<File %s>" % (fingerprint(self.ro_uri) or [self.tmp_fname],)
-
- def pprint(self, prefix='', printed=None, suffix=''):
- return " %s (%s) : %s" % (prefix, self.size, suffix, )
-
- def get_uri(self):
- return self.ro_uri
-
- def writable(self):
- return True
-
-class TFS(object):
- def __init__(self, nodedir, nodeurl, root_uri,
- cache_validity_period=DEFAULT_DIRECTORY_VALIDITY, async=False):
- self.cache_validity = cache_validity_period
- self.nodeurl = nodeurl
- self.root_uri = root_uri
- self.async = async
- self.dirs = {}
-
- cachedir = os.path.expanduser(os.path.join(nodedir, '_cache'))
- self.cache = FileCache(nodeurl, cachedir)
- ro_uri = DirectoryURI.init_from_string(self.root_uri).get_readonly()
- self.root = Directory(self, ro_uri, self.root_uri)
- self.root.maybe_refresh('<root>')
-
- def log(self, msg):
- log("<TFS> %s" % (msg, ))
-
- def pprint(self):
- return self.root.pprint()
-
- def compose_url(self, fmt, *args):
- return self.nodeurl + (fmt % tuple(map(urllib.quote, args)))
-
- def get_parent_name_and_child(self, path):
- """
- find the parent dir node, name of child relative to that parent, and
- child node within the TFS object space.
- @returns: (parent, name, child) if the child is found
- (parent, name, None) if the child is missing from the parent
- (None, name, None) if the parent is not found
- """
- if path == '/':
- return
- dirname, name = os.path.split(path)
- parent = self.get_path(dirname)
- if parent:
- try:
- child = parent.get_child(name)
- return parent, name, child
- except KeyError:
- return parent, name, None
- else:
- return None, name, None
-
- def get_path(self, path):
- comps = path.strip('/').split('/')
- if comps == ['']:
- comps = []
- cursor = self.root
- c_name = '<root>'
- for comp in comps:
- if not isinstance(cursor, Directory):
- self.log('path "%s" is not a dir' % (path,))
- return None
- cursor.maybe_refresh(c_name)
- try:
- cursor = cursor.get_child(comp)
- c_name = comp
- except KeyError:
- self.log('path "%s" not found' % (path,))
- return None
- if isinstance(cursor, Directory):
- cursor.maybe_refresh(c_name)
- return cursor
-
- def dir_for(self, name, ro_uri, rw_uri):
- #self.log('dir_for(%s) [%s/%s]' % (name, fingerprint(ro_uri), fingerprint(rw_uri)))
- if ro_uri:
- ro_uri = str(ro_uri)
- if rw_uri:
- rw_uri = str(rw_uri)
- uri = rw_uri or ro_uri
- assert uri
- dirobj = self.dirs.get(uri)
- if not dirobj:
- self.log('dir_for(%s) creating new Directory' % (name, ))
- dirobj = Directory(self, ro_uri, rw_uri)
- self.dirs[uri] = dirobj
- return dirobj
-
- def upload(self, fname):
- self.log('upload(%r)' % (fname,))
- fh = file(fname, 'rb')
- url = self.compose_url("uri")
- file_cap = do_http('PUT', url, fh)
- self.log('uploaded to: %r' % (file_cap,))
- return file_cap
-
- def mkdir(self, path):
- self.log('mkdir(%r)' % (path,))
- parent, name, child = self.get_parent_name_and_child(path)
-
- if child:
- raise EEXIST('File exists: %s' % (name,))
- if not parent:
- raise ENOENT('No such file or directory: %s' % (path,))
-
- url = self.compose_url("uri?t=mkdir")
- new_dir_cap = do_http('PUT', url)
-
- ro_uri = DirectoryURI.init_from_string(new_dir_cap).get_readonly()
- child = Directory(self, ro_uri, new_dir_cap)
- parent.add_child(name, child, {})
-
- def rename(self, path, path1):
- self.log('rename(%s, %s)' % (path, path1))
- src_parent, src_name, src_child = self.get_parent_name_and_child(path)
- dst_parent, dst_name, dst_child = self.get_parent_name_and_child(path1)
-
- if not src_child or not dst_parent:
- raise ENOENT('No such file or directory')
-
- dst_parent.add_child(dst_name, src_child, {})
- src_parent.remove_child(src_name)
-
- def unlink(self, path):
- parent, name, child = self.get_parent_name_and_child(path)
-
- if child is None: # parent or child is missing
- raise ENOENT('No such file or directory')
- if not parent.writable():
- raise EACCESS('Permission denied')
-
- parent.remove_child(name)
-
- def link(self, path, path1):
- src = self.get_path(path)
- dst_parent, dst_name, dst_child = self.get_parent_name_and_child(path1)
-
- if not src:
- raise ENOENT('No such file or directory')
- if dst_parent is None:
- raise ENOENT('No such file or directory')
- if not dst_parent.writable():
- raise EACCESS('Permission denied')
-
- dst_parent.add_child(dst_name, src, {})
-
-class FileCache(object):
- def __init__(self, nodeurl, cachedir):
- self.nodeurl = nodeurl
- self.cachedir = cachedir
- if not os.path.exists(self.cachedir):
- os.makedirs(self.cachedir)
- self.tmpdir = os.path.join(self.cachedir, 'tmp')
- if not os.path.exists(self.tmpdir):
- os.makedirs(self.tmpdir)
- self.downloaders = weakref.WeakValueDictionary()
-
- def log(self, msg):
- log("<FC> %s" % (msg, ))
-
- def get_file(self, uri):
- self.log('get_file(%s)' % (uri,))
- if is_literal_file_uri(uri):
- return self.get_literal(uri)
- else:
- return self.get_chk(uri, async=False)
-
- def async_get_file(self, uri):
- self.log('get_file(%s)' % (uri,))
- return self.get_chk(uri, async=True)
-
- def get_literal(self, uri):
- h = sha.new(uri).digest()
- u = LiteralFileURI.init_from_string(uri)
- fname = os.path.join(self.cachedir, '__'+base64.b32encode(h).lower())
- size = len(u.data)
- self.log('writing literal file %s (%s)' % (fname, size, ))
- fh = open(fname, 'wb')
- fh.write(u.data)
- fh.close()
- return fname
-
- def get_chk(self, uri, async=False):
- u = CHKFileURI.init_from_string(str(uri))
- storage_index = u.storage_index
- size = u.size
- fname = os.path.join(self.cachedir, base64.b32encode(storage_index).lower())
- if os.path.exists(fname):
- fsize = os.path.getsize(fname)
- if fsize == size:
- if async:
- return fname, None
- else:
- return fname
- else:
- self.log('warning file "%s" is too short %s < %s' % (fname, fsize, size))
- self.log('downloading file %s (%s)' % (fname, size, ))
- url = "%suri/%s" % (self.nodeurl, uri)
- if async:
- if fname in self.downloaders and self.downloaders[fname].running:
- downloader = self.downloaders[fname]
- else:
- downloader = DownloaderWithReadQueue()
- self.downloaders[fname] = downloader
- d = downloader.start(url, fname, target_size=u.size)
- def clear_downloader(result, fname):
- self.log('clearing %s from downloaders: %r' % (fname, result))
- self.downloaders.pop(fname, None)
- d.addBoth(clear_downloader, fname)
- return fname, downloader
- else:
- fh = open(fname, 'wb')
- download = urllib.urlopen(url)
- while True:
- chunk = download.read(4096)
- if not chunk:
- break
- fh.write(chunk)
- fh.close()
- return fname
-
- def tmp_file(self, id):
- fname = os.path.join(self.tmpdir, base64.b32encode(id).lower())
- return fname
-
-_tfs = None # to appease pyflakes; is set in main()
-def print_tree():
- log('tree:\n' + _tfs.pprint())
-
-
-def unmarshal(obj):
- if obj is None or isinstance(obj, int) or isinstance(obj, long) or isinstance(obj, float):
- return obj
- elif isinstance(obj, unicode) or isinstance(obj, str):
- #log('unmarshal(%r)' % (obj,))
- return base64.b64decode(obj)
- elif isinstance(obj, list):
- return map(unmarshal, obj)
- elif isinstance(obj, dict):
- return dict([ (k,unmarshal(v)) for k,v in obj.items() ])
- else:
- raise ValueError('object type not int,str,list,dict,none (%s) (%r)' % (type(obj), obj))
-
-def marshal(obj):
- if obj is None or isinstance(obj, int) or isinstance(obj, long) or isinstance(obj, float):
- return obj
- elif isinstance(obj, str):
- return base64.b64encode(obj)
- elif isinstance(obj, list) or isinstance(obj, tuple):
- return map(marshal, obj)
- elif isinstance(obj, dict):
- return dict([ (k,marshal(v)) for k,v in obj.items() ])
- else:
- raise ValueError('object type not int,str,list,dict,none (%s)' % type(obj))
-
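The marshal/unmarshal pair above passes every string through base64 so that arbitrary bytes survive the JSON transport. A minimal modern-Python sketch of the same round-trip, using bytes where the Python 2 original used str (the function names mirror the originals, but the code itself is mine, not from the removed module):

```python
import base64

def marshal(obj):
    # bytes are not JSON-serializable, so encode them as base64 text;
    # numbers and None pass through, containers recurse
    if obj is None or isinstance(obj, (int, float)):
        return obj
    if isinstance(obj, bytes):
        return base64.b64encode(obj).decode('ascii')
    if isinstance(obj, (list, tuple)):
        return [marshal(x) for x in obj]
    if isinstance(obj, dict):
        return {k: marshal(v) for k, v in obj.items()}
    raise ValueError('unsupported type %s' % type(obj))

def unmarshal(obj):
    # inverse: base64 text back to bytes, recursing into containers
    if obj is None or isinstance(obj, (int, float)):
        return obj
    if isinstance(obj, str):
        return base64.b64decode(obj)
    if isinstance(obj, list):
        return [unmarshal(x) for x in obj]
    if isinstance(obj, dict):
        return {k: unmarshal(v) for k, v in obj.items()}
    raise ValueError('unsupported type %s' % type(obj))
```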
-
-class TRPCProtocol(Protocol):
- compute_response_sha1 = True
- log_all_requests = False
-
- def connectionMade(self):
- self.buf = []
-
- def dataReceived(self, data):
- if data == 'keepalive\n':
- log('keepalive connection on %r' % (self.transport,))
- self.keepalive = True
- return
-
- if not data.endswith('\n'):
- self.buf.append(data)
- return
- if self.buf:
- self.buf.append(data)
- reqstr = ''.join(self.buf)
- self.buf = []
- self.dispatch_request(reqstr)
- else:
- self.dispatch_request(data)
-
- def dispatch_request(self, reqstr):
- try:
- req = simplejson.loads(reqstr)
- except ValueError, ve:
- log(ve)
- return
-
- d = defer.maybeDeferred(self.handle_request, req)
- d.addCallback(self.send_response)
- d.addErrback(self.send_error)
-
- def send_error(self, failure):
- log('failure: %s' % (failure,))
- if failure.check(TFSIOError):
- e = failure.value
- self.send_response(['error', 'errno', e.args[0], e.args[1]])
- else:
- self.send_response(['error', 'failure', str(failure)])
-
- def send_response(self, result):
- response = simplejson.dumps(result)
- header = { 'len': len(response), }
- if self.compute_response_sha1:
- header['sha1'] = base64.b64encode(sha.new(response).digest())
- hdr = simplejson.dumps(header)
- self.transport.write(hdr)
- self.transport.write('\n')
- self.transport.write(response)
- self.transport.loseConnection()
-
- def connectionLost(self, reason):
- if hasattr(self, 'keepalive'):
- log('keepalive connection %r lost, shutting down' % (self.transport,))
- reactor.callLater(0, reactor.stop)
-
- def handle_request(self, req):
- if type(req) is not list or not req or len(req) < 1:
- return ['error', 'malformed request']
- if req[0] == 'call':
- if len(req) < 3:
- return ['error', 'malformed request']
- methname = req[1]
- try:
- args = unmarshal(req[2])
- except ValueError, ve:
- return ['error', 'malformed arguments', str(ve)]
-
- try:
- meth = getattr(self.factory.server, methname)
- except AttributeError, ae:
- return ['error', 'no such method', str(ae)]
-
- if self.log_all_requests:
- log('call %s(%s)' % (methname, ', '.join(map(repr, args))))
- try:
- result = meth(*args)
- except TFSIOError, e:
- log('errno: %s; %s' % e.args)
- return ['error', 'errno', e.args[0], e.args[1]]
- except Exception, e:
- log('exception: ' + traceback.format_exc())
- return ['error', 'exception', str(e)]
- d = defer.succeed(None)
- d.addCallback(lambda junk: result) # result may be Deferred
- d.addCallback(lambda res: ['result', marshal(res)]) # only applies if not errback
- return d
-
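The wire format implied by send_response above is a one-line JSON header carrying `len` and an optional base64 sha1 digest, a newline, then the JSON payload; TRPC.req parses the same framing on the client side. A standalone sketch of that framing, using hashlib in place of the long-deprecated sha module (function names are mine):

```python
import base64
import hashlib
import json

def frame_response(result, with_sha1=True):
    # encode the result, then prepend a one-line JSON header describing it
    payload = json.dumps(result).encode('utf-8')
    header = {'len': len(payload)}
    if with_sha1:
        header['sha1'] = base64.b64encode(
            hashlib.sha1(payload).digest()).decode('ascii')
    return json.dumps(header).encode('utf-8') + b'\n' + payload

def parse_response(data):
    # split on the first newline: the header line, then the payload,
    # whose length and optional digest must match the header
    header_line, payload = data.split(b'\n', 1)
    header = json.loads(header_line)
    assert len(payload) == header['len']
    if 'sha1' in header:
        digest = base64.b64encode(hashlib.sha1(payload).digest()).decode('ascii')
        assert digest == header['sha1']
    return json.loads(payload)
```

json.dumps emits no literal newlines by default, so splitting on the first `\n` cleanly separates header from payload.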
-class TFSServer(object):
- def __init__(self, socket_path, server=None):
- self.socket_path = socket_path
- log('TFSServer init socket: %s' % (socket_path,))
-
- self.factory = Factory()
- self.factory.protocol = TRPCProtocol
- if server:
- self.factory.server = server
- else:
- self.factory.server = self
-
- def get_service(self):
- if not hasattr(self, 'svc'):
- from twisted.application import strports
- self.svc = strports.service('unix:'+self.socket_path, self.factory)
- return self.svc
-
- def run(self):
- svc = self.get_service()
- def ss():
- try:
- svc.startService()
- except:
- reactor.callLater(0, reactor.stop)
- raise
- reactor.callLater(0, ss)
- reactor.run()
-
- def hello(self):
- return 'pleased to meet you'
-
- def echo(self, arg):
- return arg
-
- def failex(self):
- raise ValueError('expected')
-
- def fail(self):
- return defer.maybeDeferred(self.failex)
-
-class RPCError(RuntimeError):
- pass
-
-class TRPC(object):
- def __init__(self, socket_fname):
- self.socket_fname = socket_fname
- self.keepalive = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
- self.keepalive.connect(self.socket_fname)
- self.keepalive.send('keepalive\n')
- log('requested keepalive on %s' % (self.keepalive,))
-
- def req(self, req):
- # open connection to trpc server
- s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
- s.connect(self.socket_fname)
- # send request
- s.send(simplejson.dumps(req))
- s.send('\n')
- # read response header
- hdr_data = s.recv(8192)
- first_newline = hdr_data.index('\n')
- header = hdr_data[:first_newline]
- data = hdr_data[first_newline+1:]
- hdr = simplejson.loads(header)
- hdr_len = hdr['len']
- if hdr.has_key('sha1'):
- hdr_sha1 = base64.b64decode(hdr['sha1'])
- spool = [data]
- spool_sha = sha.new(data)
- # spool response
- while True:
- data = s.recv(8192)
- if data:
- spool.append(data)
- spool_sha.update(data)
- else:
- break
- else:
- spool = [data]
- # spool response
- while True:
- data = s.recv(8192)
- if data:
- spool.append(data)
- else:
- break
- s.close()
- # decode response
- resp = ''.join(spool)
- spool = None
- assert hdr_len == len(resp), str((hdr_len, len(resp), repr(resp)))
- if hdr.has_key('sha1'):
- data_sha1 = spool_sha.digest()
- spool = spool_sha = None
- assert hdr_sha1 == data_sha1, str((base32.b2a(hdr_sha1), base32.b2a(data_sha1)))
- #else:
- #print 'warning, server provided no sha1 to check'
- return resp
-
- def call(self, methodname, *args):
- res = self.req(['call', methodname, marshal(args)])
-
- result = simplejson.loads(res)
- if not result or len(result) < 2:
- raise TypeError('malformed response %r' % (result,))
- if result[0] == 'error':
- if result[1] == 'errno':
- raise TFSIOError(result[2], result[3])
- else:
- raise RPCError(*(result[1:])) # error, exception / error, failure
- elif result[0] == 'result':
- return unmarshal(result[1])
- else:
- raise TypeError('unknown response type %r' % (result[0],))
-
- def shutdown(self):
- log('shutdown() closing keepalive %s' % (self.keepalive,))
- self.keepalive.close()
-
-# (cut-n-pasted here due to an ImportError / some py2app linkage issues)
-#from twisted.scripts._twistd_unix import daemonize
-def daemonize():
- # See http://www.erlenstar.demon.co.uk/unix/faq_toc.html#TOC16
- if os.fork(): # launch child and...
- os._exit(0) # kill off parent
- os.setsid()
- if os.fork(): # launch child and...
- os._exit(0) # kill off parent again.
- os.umask(077)
- null=os.open('/dev/null', os.O_RDWR)
- for i in range(3):
- try:
- os.dup2(null, i)
- except OSError, e:
- if e.errno != errno.EBADF:
- raise
- os.close(null)
-
-def main(argv):
- log("main(%s)" % (argv,))
-
- # check for version or help options (no args == help)
- if not argv:
- argv = ['--help']
- if len(argv) == 1 and argv[0] in ['-h', '--help']:
- config = TahoeFuseOptions()
- print >> sys.stderr, config
- print >> sys.stderr, 'fuse usage follows:'
- if len(argv) == 1 and argv[0] in ['-h', '--help', '--version']:
- launch_tahoe_fuse(TahoeFuseLocal, None, argv)
- return -2
-
- # parse command line options
- config = TahoeFuseOptions()
- try:
- #print 'parsing', argv
- config.parseOptions(argv)
- except usage.error, e:
- print config
- print e
- return -1
-
- # check for which alias or uri is specified
- if config['alias']:
- alias = config['alias']
- #print 'looking for aliases in', config['node-directory']
- aliases = get_aliases(os.path.expanduser(config['node-directory']))
- if alias not in aliases:
- raise usage.error('Alias %r not found' % (alias,))
- root_uri = aliases[alias]
- root_name = alias
- elif config['root-uri']:
- root_uri = config['root-uri']
- root_name = 'uri_' + base32.b2a(tagged_hash('root_name', root_uri))[:12]
- # test the uri for structural validity:
- try:
- DirectoryURI.init_from_string(root_uri)
- except:
- raise usage.error('root-uri must be a valid directory uri (not %r)' % (root_uri,))
- else:
- raise usage.error('At least one of --alias or --root-uri must be specified')
-
- nodedir = config['node-directory']
- nodeurl = config['node-url']
- if not nodeurl:
- nodeurl = getnodeurl(nodedir)
-
- # allocate socket
- socket_dir = os.path.join(os.path.expanduser(nodedir), "tfuse.sockets")
- socket_path = os.path.join(socket_dir, root_name)
- if len(socket_path) > 103:
- # try googling AF_UNIX and sun_len for some taste of why this oddity exists.
- raise OSError(errno.ENAMETOOLONG, 'socket path too long (%s)' % (socket_path,))
-
- fileutil.make_dirs(socket_dir, 0700)
- if os.path.exists(socket_path):
- log('socket exists')
- if config['server-shutdown']:
- log('calling shutdown')
- trpc = TRPC(socket_path)
- result = trpc.shutdown()
- log('result: %r' % (result,))
- log('called shutdown')
- return
- else:
- raise OSError(errno.EEXIST, 'fuse already running (%r exists)' % (socket_path,))
- elif config['server-shutdown']:
- raise OSError(errno.ENOTCONN, '--server-shutdown specified, but server not running')
-
- if not os.path.exists(config.mountpoint):
- raise OSError(errno.ENOENT, 'No such file or directory: "%s"' % (config.mountpoint,))
-
- global _tfs
- #
- # Standalone ("no-split")
- #
- if config['no-split']:
- reopen_logfile('tfuse.%s.unsplit.log' % (root_name,))
- log('\n'+(24*'_')+'init (unsplit)'+(24*'_')+'\n')
-
- cache_timeout = float(config['cache-timeout'])
- tfs = TFS(nodedir, nodeurl, root_uri, cache_timeout, async=False)
- #print tfs.pprint()
-
- # make tfs instance accessible to print_tree() for dbg
- _tfs = tfs
-
- args = [ '-o'+opt for opt in config.fuse_options ] + [config.mountpoint]
- launch_tahoe_fuse(TahoeFuseLocal, tfs, args)
-
- #
- # Server
- #
- elif config['server']:
- reopen_logfile('tfuse.%s.server.log' % (root_name,))
- log('\n'+(24*'_')+'init (server)'+(24*'_')+'\n')
-
- log('daemonizing')
- daemonize()
-
- try:
- cache_timeout = float(config['cache-timeout'])
- tfs = TFS(nodedir, nodeurl, root_uri, cache_timeout, async=True)
- #print tfs.pprint()
-
- # make tfs instance accessible to print_tree() for dbg
- _tfs = tfs
-
- log('launching tfs server')
- tfuse = TahoeFuseBase(tfs)
- tfs_server = TFSServer(socket_path, tfuse)
- tfs_server.run()
- log('tfs server ran, exiting')
- except:
- log('exception: ' + traceback.format_exc())
-
- #
- # Client
- #
- else:
- reopen_logfile('tfuse.%s.client.log' % (root_name,))
- log('\n'+(24*'_')+'init (client)'+(24*'_')+'\n')
-
- server_args = [sys.executable, sys.argv[0], '--server'] + argv
- if 'Allmydata.app/Contents/MacOS' in sys.executable:
- # in this case blackmatch is the 'fuse' subcommand of the 'tahoe' executable
- # otherwise we assume blackmatch is being run from source
- server_args.insert(2, 'fuse')
- #print 'launching server:', server_args
- server = subprocess.Popen(server_args)
- waiting_since = time.time()
- wait_at_most = 8
- while not os.path.exists(socket_path):
- log('waiting for appearance of %r' % (socket_path,))
- time.sleep(1)
- if time.time() - waiting_since > wait_at_most:
- log('%r did not appear within %ss' % (socket_path, wait_at_most))
- raise IOError(2, 'no socket %s' % (socket_path,))
- #print 'launched server'
- trpc = TRPC(socket_path)
-
-
- args = [ '-o'+opt for opt in config.fuse_options ] + [config.mountpoint]
- launch_tahoe_fuse(TahoeFuseShim, trpc, args)
-
-
-if __name__ == '__main__':
- sys.exit(main(sys.argv[1:]))
+++ /dev/null
-#! /usr/bin/env python
-'''
-Unit and system tests for tahoe-fuse.
-'''
-
-# Note: It's always a SetupFailure, not a TestFailure if a webapi
-# operation fails, because this does not indicate a fuse interface
-# failure.
-
-# TODO: Unmount after tests regardless of failure or success!
-
-# TODO: Test mismatches between tahoe and fuse/posix. What about nodes
-# with crazy names ('\0', unicode, '/', '..')? Huuuuge files?
-# Huuuuge directories... As tahoe approaches production quality, it'd
-# be nice if the fuse interface did so also by hardening against such cases.
-
-# FIXME: Only create / launch necessary nodes. Do we still need an introducer and three nodes?
-
-# FIXME: This framework might be replaceable with twisted.trial,
-# especially the "layer" design, which is a bit cumbersome when
-# using recursion to manage multiple clients.
-
-# FIXME: Identify all race conditions (hint: starting clients, versus
-# using the grid fs).
-
-import sys, os, shutil, unittest, subprocess
-import tempfile, re, time, random, httplib, urllib
-#import traceback
-
-from twisted.python import usage
-
-if sys.platform.startswith('darwin'):
- UNMOUNT_CMD = ['umount']
-else:
- # linux, and until we hear otherwise, all other platforms with fuse, by assumption
- UNMOUNT_CMD = ['fusermount', '-u']
-
-# Import fuse implementations:
-#FuseDir = os.path.join('.', 'contrib', 'fuse')
-#if not os.path.isdir(FuseDir):
-# raise SystemExit('''
-#Could not find directory "%s". Please run this script from the tahoe
-#source base directory.
-#''' % (FuseDir,))
-FuseDir = '.'
-
-
-### Load each implementation
-sys.path.append(os.path.join(FuseDir, 'impl_a'))
-import tahoe_fuse as impl_a
-sys.path.append(os.path.join(FuseDir, 'impl_b'))
-import pyfuse.tahoe as impl_b
-sys.path.append(os.path.join(FuseDir, 'impl_c'))
-import blackmatch as impl_c
-
-### config info about each impl, including which make sense to run
-implementations = {
- 'impl_a': dict(module=impl_a,
- mount_args=['--basedir', '%(nodedir)s', '%(mountpath)s', ],
- mount_wait=True,
- suites=['read', ]),
- 'impl_b': dict(module=impl_b,
- todo=True,
- mount_args=['--basedir', '%(nodedir)s', '%(mountpath)s', ],
- mount_wait=False,
- suites=['read', ]),
- 'impl_c': dict(module=impl_c,
- mount_args=['--cache-timeout', '0', '--root-uri', '%(root-uri)s',
- '--node-directory', '%(nodedir)s', '%(mountpath)s', ],
- mount_wait=True,
- suites=['read', 'write', ]),
- 'impl_c_no_split': dict(module=impl_c,
- mount_args=['--cache-timeout', '0', '--root-uri', '%(root-uri)s',
- '--no-split',
- '--node-directory', '%(nodedir)s', '%(mountpath)s', ],
- mount_wait=True,
- suites=['read', 'write', ]),
- }
-
-if sys.platform == 'darwin':
- del implementations['impl_a']
- del implementations['impl_b']
-
-default_catch_up_pause = 0
-if sys.platform == 'linux2':
- default_catch_up_pause = 2
-
-class FuseTestsOptions(usage.Options):
- optParameters = [
- ["test-type", None, "both",
- "Type of test to run; unit, system or both"
- ],
- ["implementations", None, "all",
- "Comma separated list of implementations to test, or 'all'"
- ],
- ["suites", None, "all",
- "Comma separated list of test suites to run, or 'all'"
- ],
- ["tests", None, None,
- "Comma separated list of specific tests to run"
- ],
- ["path-to-tahoe", None, "../../bin/tahoe",
- "Which 'tahoe' script to use to create test nodes"],
- ["tmp-dir", None, "/tmp",
- "Where the test should create temporary files"],
- # Note; this is '/tmp' because on leopard, tempfile.mkdtemp creates
- # directories in a location which leads paths to exceed what macfuse
- # can handle without leaking un-umount-able fuse processes.
- ["catch-up-pause", None, str(default_catch_up_pause),
- "Pause between tahoe operations and fuse tests thereon"],
- ]
- optFlags = [
- ["debug-wait", None,
- "Causes the test system to pause at various points, to facilitate debugging"],
- ["web-open", None,
- "Opens a web browser to the web ui at the start of each impl's tests"],
- ["no-cleanup", False,
- "Prevents the cleanup of the working directories, to allow analysis thereof"],
- ]
-
- def postOptions(self):
- if self['suites'] == 'all':
- self.suites = ['read', 'write']
- # [ ] todo: deduce this from looking for test_ in dir(self)
- else:
- self.suites = map(str.strip, self['suites'].split(','))
- if self['implementations'] == 'all':
- self.implementations = implementations.keys()
- else:
- self.implementations = map(str.strip, self['implementations'].split(','))
- if self['tests']:
- self.tests = map(str.strip, self['tests'].split(','))
- else:
- self.tests = None
- self.catch_up_pause = float(self['catch-up-pause'])
-
-### Main flow control:
-def main(args):
- config = FuseTestsOptions()
- config.parseOptions(args[1:])
-
- target = 'all'
- if len(args) > 1:
- target = args.pop(1)
-
- test_type = config['test-type']
- if test_type not in ('both', 'unit', 'system'):
- raise usage.error('test-type %r not supported' % (test_type,))
-
- if test_type in ('both', 'unit'):
- run_unit_tests([args[0]])
-
- if test_type in ('both', 'system'):
- return run_system_test(config)
-
-
-def run_unit_tests(argv):
- print 'Running Unit Tests.'
- try:
- unittest.main(argv=argv)
- except SystemExit, se:
- pass
- print 'Unit Tests complete.\n'
-
-
-def run_system_test(config):
- return SystemTest(config).run()
-
-def drepr(obj):
- r = repr(obj)
- if len(r) > 200:
- return '%s ... %s [%d]' % (r[:100], r[-100:], len(r))
- else:
- return r
-
-### System Testing:
-class SystemTest (object):
- def __init__(self, config):
- self.config = config
-
- # These members represent test state:
- self.cliexec = None
- self.testroot = None
-
- # This test state is specific to the first client:
- self.port = None
- self.clientbase = None
-
- ## Top-level flow control:
- # These "*_layer" methods call each other in a linear fashion, using
- # exception unwinding to do cleanup properly. Each "layer" invokes
- # a deeper layer, and each layer does its own cleanup upon exit.
-
- def run(self):
- print '\n*** Setting up system tests.'
- try:
- results = self.init_cli_layer()
- print '\n*** System Tests complete:'
- total_failures = todo_failures = 0
- for result in results:
- impl_name, failures, total = result
- if implementations[impl_name].get('todo'):
- todo_failures += failures
- else:
- total_failures += failures
- print 'Implementation %s: %d failed out of %d.' % result
- if total_failures:
- print '%s total failures, %s todo' % (total_failures, todo_failures)
- return 1
- else:
- return 0
- except SetupFailure, sfail:
- print
- print sfail
- print '\n*** System Tests were not successfully completed.'
- return 1
-
- def maybe_wait(self, msg='waiting', or_if_webopen=False):
- if self.config['debug-wait'] or or_if_webopen and self.config['web-open']:
- print msg
- raw_input()
-
- def maybe_webopen(self, where=None):
- if self.config['web-open']:
- import webbrowser
- url = self.weburl
- if where is not None:
- url += urllib.quote(where)
- webbrowser.open(url)
-
- def maybe_pause(self):
- time.sleep(self.config.catch_up_pause)
-
- def init_cli_layer(self):
- '''This layer finds the appropriate tahoe executable.'''
- #self.cliexec = os.path.join('.', 'bin', 'tahoe')
- self.cliexec = self.config['path-to-tahoe']
- version = self.run_tahoe('--version')
- print 'Using %r with version:\n%s' % (self.cliexec, version.rstrip())
-
- return self.create_testroot_layer()
-
- def create_testroot_layer(self):
- print 'Creating test base directory.'
- #self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_')
- #self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_', dir='/tmp/')
- tmpdir = self.config['tmp-dir']
- if tmpdir:
- self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_', dir=tmpdir)
- else:
- self.testroot = tempfile.mkdtemp(prefix='tahoe_fuse_test_')
- try:
- return self.launch_introducer_layer()
- finally:
- if not self.config['no-cleanup']:
- print 'Cleaning up test root directory.'
- try:
- shutil.rmtree(self.testroot)
- except Exception, e:
- print 'Exception removing test root directory: %r' % (self.testroot, )
- print 'Ignoring cleanup exception: %r' % (e,)
- else:
- print 'Leaving test root directory: %r' % (self.testroot, )
-
-
- def launch_introducer_layer(self):
- print 'Launching introducer.'
- introbase = os.path.join(self.testroot, 'introducer')
-
- # NOTE: We assume if tahoe exits with non-zero status, no separate
- # tahoe child process is still running.
- createoutput = self.run_tahoe('create-introducer', '--basedir', introbase)
-
- self.check_tahoe_output(createoutput, ExpectedCreationOutput, introbase)
-
- startoutput = self.run_tahoe('start', '--basedir', introbase)
- try:
- self.check_tahoe_output(startoutput, ExpectedStartOutput, introbase)
-
- return self.launch_clients_layer(introbase)
-
- finally:
- print 'Stopping introducer node.'
- self.stop_node(introbase)
-
- def set_tahoe_option(self, base, key, value):
- import re
-
- filename = os.path.join(base, 'tahoe.cfg')
- content = open(filename).read()
- content = re.sub('%s = (.+)' % key, '%s = %s' % (key, value), content)
- open(filename, 'w').write(content)
-
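set_tahoe_option above edits tahoe.cfg with a bare re.sub, where both the key and the value are spliced into the pattern and replacement unescaped — a value containing a backslash or a `\1`-style group reference would be misinterpreted. A hedged sketch of the same substitution done safely, using re.escape on the key and a callable replacement (set_option is my name, not the removed script's):

```python
import re

def set_option(content, key, value):
    # substitute the value for "key = <anything>"; a callable replacement
    # keeps backslashes and group references in value from being interpreted
    return re.sub(r'%s = (.+)' % re.escape(key),
                  lambda m: '%s = %s' % (key, value),
                  content)
```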
- TotalClientsNeeded = 3
- def launch_clients_layer(self, introbase, clientnum = 0):
- if clientnum >= self.TotalClientsNeeded:
- self.maybe_wait('waiting (launched clients)')
- ret = self.create_test_dirnode_layer()
- self.maybe_wait('waiting (ran tests)', or_if_webopen=True)
- return ret
-
- tmpl = 'Launching client %d of %d.'
- print tmpl % (clientnum,
- self.TotalClientsNeeded)
-
- base = os.path.join(self.testroot, 'client_%d' % (clientnum,))
-
- output = self.run_tahoe('create-node', '--basedir', base)
- self.check_tahoe_output(output, ExpectedCreationOutput, base)
-
- if clientnum == 0:
- # The first client is special:
- self.clientbase = base
- self.port = random.randrange(1024, 2**15)
-
- self.set_tahoe_option(base, 'web.port', 'tcp:%d:interface=127.0.0.1' % self.port)
-
- self.weburl = "http://127.0.0.1:%d/" % (self.port,)
- print self.weburl
- else:
- self.set_tahoe_option(base, 'web.port', '')
-
- introfurl = os.path.join(introbase, 'introducer.furl')
-
- furl = open(introfurl).read().strip()
- self.set_tahoe_option(base, 'introducer.furl', furl)
-
- # NOTE: We assume if tahoe exits with non-zero status, no separate
- # tahoe child process is still running.
- startoutput = self.run_tahoe('start', '--basedir', base)
- try:
- self.check_tahoe_output(startoutput, ExpectedStartOutput, base)
-
- return self.launch_clients_layer(introbase, clientnum+1)
-
- finally:
- print 'Stopping client node %d.' % (clientnum,)
- self.stop_node(base)
-
- def create_test_dirnode_layer(self):
- print 'Creating test dirnode.'
-
- cap = self.create_dirnode()
-
- f = open(os.path.join(self.clientbase, 'private', 'root_dir.cap'), 'w')
- f.write(cap)
- f.close()
-
- return self.mount_fuse_layer(cap)
-
- def mount_fuse_layer(self, root_uri):
- mpbase = os.path.join(self.testroot, 'mountpoint')
- os.mkdir(mpbase)
- results = []
-
- if self.config['debug-wait']:
- ImplProcessManager.debug_wait = True
-
- #for name, kwargs in implementations.items():
- for name in self.config.implementations:
- kwargs = implementations[name]
- #print 'instantiating %s: %r' % (name, kwargs)
- implprocmgr = ImplProcessManager(name, **kwargs)
- print '\n*** Testing impl: %r' % (implprocmgr.name)
- implprocmgr.configure(self.clientbase, mpbase)
- implprocmgr.mount()
- try:
- failures, total = self.run_test_layer(root_uri, implprocmgr)
- result = (implprocmgr.name, failures, total)
- tmpl = '\n*** Test Results implementation %s: %d failed out of %d.'
- print tmpl % result
- results.append(result)
- finally:
- implprocmgr.umount()
- return results
-
- def run_test_layer(self, root_uri, iman):
- self.maybe_webopen('uri/'+root_uri)
- failures = 0
- testnum = 0
- numtests = 0
- if self.config.tests:
- tests = self.config.tests
- else:
- tests = list(set(self.config.suites).intersection(set(iman.suites)))
- self.maybe_wait('waiting (about to run tests)')
- for test in tests:
- testnames = [n for n in sorted(dir(self)) if n.startswith('test_'+test)]
- numtests += len(testnames)
- print 'running %s %r tests' % (len(testnames), test,)
- for testname in testnames:
- testnum += 1
- print '\n*** Running test #%d: %s' % (testnum, testname)
- try:
- testcap = self.create_dirnode()
- dirname = '%s_%s' % (iman.name, testname)
- self.attach_node(root_uri, testcap, dirname)
- method = getattr(self, testname)
- method(testcap, testdir = os.path.join(iman.mountpath, dirname))
- print 'Test succeeded.'
- except TestFailure, f:
- print f
- #print traceback.format_exc()
- failures += 1
- except:
- print 'Error in test code... Cleaning up.'
- raise
- return (failures, numtests)
-
- # Tests:
- def test_read_directory_existence(self, testcap, testdir):
- if not wrap_os_error(os.path.isdir, testdir):
- raise TestFailure('Attached test directory not found: %r', testdir)
-
- def test_read_empty_directory_listing(self, testcap, testdir):
- listing = wrap_os_error(os.listdir, testdir)
- if listing:
- raise TestFailure('Expected empty directory, found: %r', listing)
-
- def test_read_directory_listing(self, testcap, testdir):
- names = []
- filesizes = {}
-
- for i in range(3):
- fname = 'file_%d' % (i,)
- names.append(fname)
- body = 'Hello World #%d!' % (i,)
- filesizes[fname] = len(body)
-
- cap = self.webapi_call('PUT', '/uri', body)
- self.attach_node(testcap, cap, fname)
-
- dname = 'dir_%d' % (i,)
- names.append(dname)
-
- cap = self.create_dirnode()
- self.attach_node(testcap, cap, dname)
-
- names.sort()
-
- listing = wrap_os_error(os.listdir, testdir)
- listing.sort()
-
- if listing != names:
- tmpl = 'Expected directory list containing %r but fuse gave %r'
- raise TestFailure(tmpl, names, listing)
-
- for file, size in filesizes.items():
- st = wrap_os_error(os.stat, os.path.join(testdir, file))
- if st.st_size != size:
- tmpl = 'Expected %r size of %r but fuse returned %r'
- raise TestFailure(tmpl, file, size, st.st_size)
-
- def test_read_file_contents(self, testcap, testdir):
- name = 'hw.txt'
- body = 'Hello World!'
-
- cap = self.webapi_call('PUT', '/uri', body)
- self.attach_node(testcap, cap, name)
-
- path = os.path.join(testdir, name)
- try:
- found = open(path, 'r').read()
- except Exception, err:
- tmpl = 'Could not read file contents of %r: %r'
- raise TestFailure(tmpl, path, err)
-
- if found != body:
- tmpl = 'Expected file contents %r but found %r'
- raise TestFailure(tmpl, body, found)
-
- def test_read_in_random_order(self, testcap, testdir):
- sz = 2**20
- bs = 2**10
- assert(sz % bs == 0)
- name = 'random_read_order'
- body = os.urandom(sz)
-
- cap = self.webapi_call('PUT', '/uri', body)
- self.attach_node(testcap, cap, name)
-
- # XXX this should also do a test where sz%bs != 0, so that it correctly tests
- # the edge case where the last read is a 'short' block
- path = os.path.join(testdir, name)
- try:
- fsize = os.path.getsize(path)
- if fsize != len(body):
- tmpl = 'Expected file size %s but found %s'
- raise TestFailure(tmpl, len(body), fsize)
- except Exception, err:
- tmpl = 'Could not read file size for %r: %r'
- raise TestFailure(tmpl, path, err)
-
- try:
- f = open(path, 'r')
- posns = range(0,sz,bs)
- random.shuffle(posns)
- data = [None] * (sz/bs)
- for p in posns:
- f.seek(p)
- data[p/bs] = f.read(bs)
- found = ''.join(data)
- except Exception, err:
- tmpl = 'Could not read file %r: %r'
- raise TestFailure(tmpl, path, err)
-
- if found != body:
- tmpl = 'Expected file contents %s but found %s'
- raise TestFailure(tmpl, drepr(body), drepr(found))
-
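test_read_in_random_order above exercises the FUSE layer by reading the 1 KiB blocks of a 1 MiB file in shuffled order and reassembling them for comparison. The access pattern reduces to this sketch (the function name is mine; the original drove it through a mounted path rather than a file object):

```python
import io
import random

def read_in_random_order(f, size, block):
    # seek to every block-aligned offset in shuffled order and
    # reassemble the chunks into their original positions
    assert size % block == 0
    positions = list(range(0, size, block))
    random.shuffle(positions)
    chunks = [None] * (size // block)
    for p in positions:
        f.seek(p)
        chunks[p // block] = f.read(block)
    return b''.join(chunks)
```

As the FIXME in the original notes, a thorough test would also cover `size % block != 0`, where the final read returns a short block.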
- def get_file(self, dircap, path):
- body = self.webapi_call('GET', '/uri/%s/%s' % (dircap, path))
- return body
-
- def test_write_tiny_file(self, testcap, testdir):
- self._write_test_linear(testcap, testdir, name='tiny.junk', bs=2**9, sz=2**9)
-
- def test_write_linear_small_writes(self, testcap, testdir):
- self._write_test_linear(testcap, testdir, name='large_linear.junk', bs=2**9, sz=2**20)
-
- def test_write_linear_large_writes(self, testcap, testdir):
- # at least on the mac, large io block sizes are reduced to 64k writes through fuse
- self._write_test_linear(testcap, testdir, name='small_linear.junk', bs=2**18, sz=2**20)
-
- def _write_test_linear(self, testcap, testdir, name, bs, sz):
- body = os.urandom(sz)
- try:
- path = os.path.join(testdir, name)
- f = file(path, 'w')
- except Exception, err:
- tmpl = 'Could not open file for write at %r: %r'
- raise TestFailure(tmpl, path, err)
- try:
- for posn in range(0,sz,bs):
- f.write(body[posn:posn+bs])
- f.close()
- except Exception, err:
- tmpl = 'Could not write to file %r: %r'
- raise TestFailure(tmpl, path, err)
-
- self.maybe_pause()
- self._check_write(testcap, name, body)
-
- def _check_write(self, testcap, name, expected_body):
- uploaded_body = self.get_file(testcap, name)
- if uploaded_body != expected_body:
- tmpl = 'Expected file contents %s but found %s'
- raise TestFailure(tmpl, drepr(expected_body), drepr(uploaded_body))
-
- def test_write_overlapping_small_writes(self, testcap, testdir):
- self._write_test_overlap(testcap, testdir, name='large_overlap', bs=2**9, sz=2**20)
-
- def test_write_overlapping_large_writes(self, testcap, testdir):
- self._write_test_overlap(testcap, testdir, name='small_overlap', bs=2**18, sz=2**20)
-
- def _write_test_overlap(self, testcap, testdir, name, bs, sz):
- body = os.urandom(sz)
- try:
- path = os.path.join(testdir, name)
- f = file(path, 'w')
- except Exception, err:
- tmpl = 'Could not open file for write at %r: %r'
- raise TestFailure(tmpl, path, err)
- try:
- for posn in range(0,sz,bs):
- start = max(0, posn-bs)
- end = min(sz, posn+bs)
- f.seek(start)
- f.write(body[start:end])
- f.close()
- except Exception, err:
- tmpl = 'Could not write to file %r: %r'
- raise TestFailure(tmpl, path, err)
-
- self.maybe_pause()
- self._check_write(testcap, name, body)
-
-
- def test_write_random_scatter(self, testcap, testdir):
- sz = 2**20
- name = 'random_scatter'
- body = os.urandom(sz)
-
- def rsize(sz=sz):
- return min(int(random.paretovariate(.25)), sz/12)
-
- # first chop up whole file into random sized chunks
- slices = []
- posn = 0
- while posn < sz:
- size = rsize()
- slices.append( (posn, body[posn:posn+size]) )
- posn += size
- random.shuffle(slices) # and randomise their order
-
- try:
- path = os.path.join(testdir, name)
- f = file(path, 'w')
- except Exception, err:
- tmpl = 'Could not open file for write at %r: %r'
- raise TestFailure(tmpl, path, err)
- try:
- # write all slices: we hence know entire file is ultimately written
- # write random excerpts: this provides for mixed and varied overlaps
- for posn,slice in slices:
- f.seek(posn)
- f.write(slice)
- rposn = random.randint(0,sz)
- f.seek(rposn)
- f.write(body[rposn:rposn+rsize()])
- f.close()
- except Exception, err:
- tmpl = 'Could not write to file %r: %r'
- raise TestFailure(tmpl, path, err)
-
- self.maybe_pause()
- self._check_write(testcap, name, body)
-
- def test_write_partial_overwrite(self, testcap, testdir):
- name = 'partial_overwrite'
- body = '_'*132
- overwrite = '^'*8
- position = 26
-
- def write_file(path, mode, contents, position=None):
- try:
- f = file(path, mode)
- if position is not None:
- f.seek(position)
- f.write(contents)
- f.close()
- except Exception, err:
- tmpl = 'Could not write to file %r: %r'
- raise TestFailure(tmpl, path, err)
-
- def read_file(path):
- try:
- f = file(path, 'rb')
- contents = f.read()
- f.close()
- except Exception, err:
- tmpl = 'Could not read file %r: %r'
- raise TestFailure(tmpl, path, err)
- return contents
-
- path = os.path.join(testdir, name)
- #write_file(path, 'w', body)
-
- cap = self.webapi_call('PUT', '/uri', body)
- self.attach_node(testcap, cap, name)
- self.maybe_pause()
-
- contents = read_file(path)
- if contents != body:
- raise TestFailure('File contents mismatch (%r) %r v.s. %r', path, contents, body)
-
- write_file(path, 'r+', overwrite, position)
- contents = read_file(path)
- expected = body[:position] + overwrite + body[position+len(overwrite):]
- if contents != expected:
- raise TestFailure('File contents mismatch (%r) %r v.s. %r', path, contents, expected)
-
-
- # Utilities:
- def run_tahoe(self, *args):
- realargs = ('tahoe',) + args
- status, output = gather_output(realargs, executable=self.cliexec)
- if status != 0:
- tmpl = 'The tahoe cli exited with nonzero status.\n'
- tmpl += 'Executable: %r\n'
- tmpl += 'Command arguments: %r\n'
- tmpl += 'Exit status: %r\n'
- tmpl += 'Output:\n%s\n[End of tahoe output.]\n'
- raise SetupFailure(tmpl,
- self.cliexec,
- realargs,
- status,
- output)
- return output
-
- def check_tahoe_output(self, output, expected, expdir):
- ignorable_lines = map(re.compile, [
- '.*site-packages/zope\.interface.*\.egg/zope/__init__.py:3: UserWarning: Module twisted was already imported from .*egg is being added to sys.path',
- ' import pkg_resources',
- ])
- def ignore_line(line):
- for ignorable_line in ignorable_lines:
- if ignorable_line.match(line):
- return True
- else:
- return False
- output = '\n'.join( [ line
- for line in output.split('\n')+['']
- #if line not in ignorable_lines ] )
- if not ignore_line(line) ] )
- m = re.match(expected, output, re.M)
- if m is None:
- tmpl = 'The output of tahoe did not match the expectation:\n'
- tmpl += 'Expected regex: %s\n'
- tmpl += 'Actual output: %r\n'
- self.warn(tmpl, expected, output)
-
- elif expdir != m.group('path'):
- tmpl = 'The output of tahoe refers to an unexpected directory:\n'
- tmpl += 'Expected directory: %r\n'
- tmpl += 'Actual directory: %r\n'
- self.warn(tmpl, expdir, m.group(1))
-
- def stop_node(self, basedir):
- try:
- self.run_tahoe('stop', '--basedir', basedir)
- except Exception, e:
- print 'Failed to stop tahoe node.'
- print 'Ignoring cleanup exception:'
- # Indent the exception description:
- desc = str(e).rstrip()
- print ' ' + desc.replace('\n', '\n ')
-
- def webapi_call(self, method, path, body=None, **options):
- if options:
- path = path + '?' + ('&'.join(['%s=%s' % kv for kv in options.items()]))
-
- conn = httplib.HTTPConnection('127.0.0.1', self.port)
- conn.request(method, path, body = body)
- resp = conn.getresponse()
-
- if resp.status != 200:
- tmpl = 'A webapi operation failed.\n'
- tmpl += 'Request: %r %r\n'
- tmpl += 'Body:\n%s\n'
- tmpl += 'Response:\nStatus %r\nBody:\n%s'
- raise SetupFailure(tmpl,
- method, path,
- body or '',
- resp.status, body)
-
- return resp.read()
-
- def create_dirnode(self):
- return self.webapi_call('PUT', '/uri', t='mkdir').strip()
-
- def attach_node(self, dircap, childcap, childname):
- body = self.webapi_call('PUT',
- '/uri/%s/%s' % (dircap, childname),
- body = childcap,
- t = 'uri',
- replace = 'false')
- assert body.strip() == childcap, `body, dircap, childcap, childname`
-
- def polling_operation(self, operation, polldesc, timeout = 10.0, pollinterval = 0.2):
- totaltime = timeout # Fudging for edge-case SetupFailure description...
-
- totalattempts = int(timeout / pollinterval)
-
- starttime = time.time()
- for attempt in range(totalattempts):
- opstart = time.time()
-
- try:
- result = operation()
- except KeyboardInterrupt, e:
- raise
- except Exception, e:
- result = False
-
- totaltime = time.time() - starttime
-
- if result is not False:
- #tmpl = '(Polling took over %.2f seconds.)'
- #print tmpl % (totaltime,)
- return result
-
- elif totaltime > timeout:
- break
-
- else:
- opdelay = time.time() - opstart
- realinterval = max(0., pollinterval - opdelay)
-
- #tmpl = '(Poll attempt %d failed after %.2f seconds, sleeping %.2f seconds.)'
- #print tmpl % (attempt+1, opdelay, realinterval)
- time.sleep(realinterval)
-
-
- tmpl = 'Timeout while polling for: %s\n'
- tmpl += 'Waited %.2f seconds (%d polls).'
- raise SetupFailure(tmpl, polldesc, totaltime, attempt+1)
-
- def warn(self, tmpl, *args):
- print ('Test Warning: ' + tmpl) % args
-
-
-# SystemTest Exceptions:
-class Failure (Exception):
- def __init__(self, tmpl, *args):
- msg = self.Prefix + (tmpl % args)
- Exception.__init__(self, msg)
-
-class SetupFailure (Failure):
- Prefix = 'Setup Failure - The test framework encountered an error:\n'
-
-class TestFailure (Failure):
- Prefix = 'TestFailure: '
-
-
-### Unit Tests:
-class Impl_A_UnitTests (unittest.TestCase):
- '''Tests small stand-alone functions.'''
- def test_canonicalize_cap(self):
- iopairs = [('http://127.0.0.1:3456/uri/URI:DIR2:yar9nnzsho6czczieeesc65sry:upp1pmypwxits3w9izkszgo1zbdnsyk3nm6h7e19s7os7s6yhh9y',
- 'URI:DIR2:yar9nnzsho6czczieeesc65sry:upp1pmypwxits3w9izkszgo1zbdnsyk3nm6h7e19s7os7s6yhh9y'),
- ('http://127.0.0.1:3456/uri/URI%3ACHK%3Ak7ktp1qr7szmt98s1y3ha61d9w%3A8tiy8drttp65u79pjn7hs31po83e514zifdejidyeo1ee8nsqfyy%3A3%3A12%3A242?filename=welcome.html',
- 'URI:CHK:k7ktp1qr7szmt98s1y3ha61d9w:8tiy8drttp65u79pjn7hs31po83e514zifdejidyeo1ee8nsqfyy:3:12:242?filename=welcome.html')]
-
- for input, output in iopairs:
- result = impl_a.canonicalize_cap(input)
- self.failUnlessEqual(output, result, 'input == %r' % (input,))
-
-
-
-### Misc:
-class ImplProcessManager(object):
- debug_wait = False
-
- def __init__(self, name, module, mount_args, mount_wait, suites, todo=False):
- self.name = name
- self.module = module
- self.script = module.__file__
- self.mount_args = mount_args
- self.mount_wait = mount_wait
- self.suites = suites
- self.todo = todo
-
- def maybe_wait(self, msg='waiting'):
- if self.debug_wait:
- print msg
- raw_input()
-
- def configure(self, client_nodedir, mountpoint):
- self.client_nodedir = client_nodedir
- self.mountpath = os.path.join(mountpoint, self.name)
- os.mkdir(self.mountpath)
-
- def mount(self):
- print 'Mounting implementation: %s (%s)' % (self.name, self.script)
-
- rootdirfile = os.path.join(self.client_nodedir, 'private', 'root_dir.cap')
- root_uri = file(rootdirfile, 'r').read().strip()
- fields = {'mountpath': self.mountpath,
- 'nodedir': self.client_nodedir,
- 'root-uri': root_uri,
- }
- args = ['python', self.script] + [ arg%fields for arg in self.mount_args ]
- print ' '.join(args)
- self.maybe_wait('waiting (about to launch fuse)')
-
- if self.mount_wait:
- exitcode, output = gather_output(args)
- if exitcode != 0:
- tmpl = '%r failed to launch:\n'
- tmpl += 'Exit Status: %r\n'
- tmpl += 'Output:\n%s\n'
- raise SetupFailure(tmpl, self.script, exitcode, output)
- else:
- self.proc = subprocess.Popen(args)
-
- def umount(self):
- print 'Unmounting implementation: %s' % (self.name,)
- args = UNMOUNT_CMD + [self.mountpath]
- print args
- self.maybe_wait('waiting (unmount)')
- #print os.system('ls -l '+self.mountpath)
- ec, out = gather_output(args)
- if ec != 0 or out:
- tmpl = '%r failed to unmount:\n' % (' '.join(UNMOUNT_CMD),)
- tmpl += 'Arguments: %r\n'
- tmpl += 'Exit Status: %r\n'
- tmpl += 'Output:\n%s\n'
- raise SetupFailure(tmpl, args, ec, out)
-
-
-def gather_output(*args, **kwargs):
- '''
- This expects the child does not require input and that it closes
- stdout/err eventually.
- '''
- p = subprocess.Popen(stdout = subprocess.PIPE,
- stderr = subprocess.STDOUT,
- *args,
- **kwargs)
- output = p.stdout.read()
- exitcode = p.wait()
- return (exitcode, output)
-
-
-def wrap_os_error(meth, *args):
- try:
- return meth(*args)
- except os.error, e:
- raise TestFailure('%s', e)
-
-
-ExpectedCreationOutput = r'(introducer|client) created in (?P<path>.*?)\n'
-ExpectedStartOutput = r'(.*\n)*STARTING (?P<path>.*?)\n(introducer|client) node probably started'
-
-
-if __name__ == '__main__':
- sys.exit(main(sys.argv))
this License.
------- end TGPPL1 licence
-The files mac/fuse.py and mac/fuseparts/subbedopts.py are licensed under
-the GNU Lesser General Public Licence. In addition, on 2009-09-21 Csaba
-Henk granted permission for those files to be under the same terms as
-Tahoe-LAFS itself.
-
-See /usr/share/common-licenses/GPL for a copy of the GNU General Public
-License, and /usr/share/common-licenses/LGPL for the GNU Lesser General Public
-License.
-
-The file src/allmydata/util/figleaf.py is licensed under the BSD licence.
-
------- begin BSD licence
Copyright (c) <YEAR>, <OWNER>
All rights reserved.
+++ /dev/null
-^/home/warner/stuff/python/twisted/Twisted/
-^/var/lib