From: Brian Warner Date: Sat, 25 Aug 2007 22:36:33 +0000 (-0700) Subject: remove foolscap source from our tree, users should grab a tarball from our http:... X-Git-Url: https://git.rkrishnan.org/pf/content/en/seg/class-simplejson.JSONDecoder-index.html?a=commitdiff_plain;h=31cf4badad327d42fabe0f81d5845992a9f77456;p=tahoe-lafs%2Ftahoe-lafs.git remove foolscap source from our tree, users should grab a tarball from our http://allmydata.org/trac/tahoe/wiki/Dependencies page, or from the upstream http://foolscap.lothar.com/ home page --- diff --git a/src/foolscap/ChangeLog b/src/foolscap/ChangeLog deleted file mode 100644 index 9c887ab9..00000000 --- a/src/foolscap/ChangeLog +++ /dev/null @@ -1,2182 +0,0 @@ -2007-08-07 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.1.5 - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same - -2007-08-07 Brian Warner - - * NEWS: update for the upcoming release - - * foolscap/pb.py (Tub.registerNameLookupHandler): new function to - augment Tub.registerReference(). This allows names to be looked up - at request time, rather than requiring all Referenceables be - pre-registered with registerReference(). The chief use of this - would be for FURLs which point at objects that live on disk in - some persistent state until they are needed. Closes #6. - (Tub.unregisterNameLookupHandler): allow handlers to be removed - (Tub.getReferenceForName): use the handler during lookup - * foolscap/test/test_tub.py (NameLookup): test it - -2007-07-27 Brian Warner - - * foolscap/referenceable.py (LocalReferenceable): implement an - adapter that allows code to do IRemoteReference(t).callRemote(...) - and have it work for both RemoteReferences and local - Referenceables. You might want to do this if you're getting back - introductions to a variety of remote Referenceables, some of which - might actually be on your local system, and you want to treat all - of them the same way.
Local Referenceables will be wrapped with a - class that implements callRemote() and makes it behave like an - actual remote callRemote() would. Closes ticket #1. - * foolscap/test/test_reference.py (LocalReference): test it - -2007-07-26 Brian Warner - - * foolscap/call.py (AnswerUnslicer.receiveChild): accept a - ready_deferred, to accommodate Gifts in return values. Closes #5. - (AnswerUnslicer.receiveClose): .. and don't fire the response - until any such Gifts resolve - * foolscap/test/test_gifts.py (Gifts.testReturn): test it - (Gifts.testReturnInContainer): same - (Bad.testReturn_swissnum): and test the failure case too - - * foolscap/test/test_pb.py (TestAnswer.testAccept1): fix a test - which wasn't calling start() properly and was broken by that change - (TestAnswer.testAccept2): same - - * foolscap/test/test_gifts.py (Bad.setUp): disable these tests when - we don't have crypto, since TubIDs are not mangleable in the same - way without crypto. - - * foolscap/slicer.py (BaseUnslicer.receiveChild): new convention: - Unslicers should accumulate their children's ready_deferreds into - an AsyncAND, and pass it to the parent. If something goes wrong, - the ready_deferred should errback, which will abandon the method - call that contains it. - * foolscap/slicers/dict.py (DictUnslicer.receiveClose): same - * foolscap/slicers/tuple.py (TupleUnslicer.receiveClose): same - (TupleUnslicer.complete): same - * foolscap/slicers/set.py (SetUnslicer.receiveClose): same - * foolscap/slicers/list.py (ListUnslicer.receiveClose): same - * foolscap/call.py (CallUnslicer.receiveClose): same - - * foolscap/referenceable.py (TheirReferenceUnslicer.receiveClose): - use our ready_deferred to signal whether the gift resolves - correctly or not. If it fails, errback ready_deferred (to prevent - the message from being delivered without the resolved gift), but - callback obj_deferred with a placeholder to avoid causing too much - distress to the container.
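The ready_deferred/AsyncAND convention described above can be sketched with a small self-contained analog. A minimal one-shot callback object stands in for a Twisted Deferred here; this illustrates the control-flow idea only and is not the actual foolscap.util.AsyncAND implementation.

```python
# Sketch of the AsyncAND control-flow idea: an aggregate that becomes
# "ready" only when every child is ready, and fails as soon as any
# child fails. A tiny one-shot object stands in for a Twisted Deferred.

class OneShot:
    """Minimal stand-in for a Deferred: fires once, remembers its result."""
    def __init__(self):
        self.fired = False
        self.result = None
        self._listeners = []

    def add_listener(self, cb):
        if self.fired:
            cb(self.result)
        else:
            self._listeners.append(cb)

    def fire(self, result=None):
        self.fired, self.result = True, result
        for cb in self._listeners:
            cb(result)


class AsyncAND(OneShot):
    """Ready when all children are ready; fails on the first child failure."""
    def __init__(self, children):
        super().__init__()
        self._remaining = len(children)
        self._failed = False
        if self._remaining == 0:
            self.fire(True)
        for child in children:
            child.add_listener(self._child_fired)

    def _child_fired(self, result):
        if isinstance(result, Exception):
            # One gift failed to resolve: abandon the whole delivery.
            if not self._failed and not self.fired:
                self._failed = True
                self.fire(result)
            return
        self._remaining -= 1
        if self._remaining == 0 and not self._failed:
            self.fire(True)
```

In the scheme above, a container unslicer hands such an aggregate up to its parent, and the broker withholds the method call until it fires (or abandons the call if it fails).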
- - * foolscap/broker.py (PBRootUnslicer.receiveChild): accept - ready_deferred in the InboundDelivery, stash both of them in the - broker. - (Broker.scheduleCall): rewrite inbound delivery handling: use a - self._call_is_running flag to prevent concurrent deliveries, and - wait for the ready_deferred before delivering the top-most - message. If the ready_deferred errbacks, that gets routed to - self.callFailed so the caller hears about the problem. This closes - ticket #2. - - * foolscap/call.py (InboundDelivery): remove whenRunnable, relying - upon the ready_deferred to let the Broker know when the message - can be delivered. - (ArgumentUnslicer): significant cleanup, using ready_deferred. - Remove isReady and whenReady. - - * foolscap/test/test_gifts.py (Base): factor setup code out - (Base.createCharacters): registerReference(tubname), for debugging - (Bad): add a bunch of tests to make sure that gifts which fail to - resolve (for various reasons) will inform the caller about the - problem, via an errback on the original callRemote()'s Deferred. - -2007-07-25 Brian Warner - - * foolscap/util.py (AsyncAND): new utility class, which is like - DeferredList but is specifically for control flow rather than data - flow. - * foolscap/test/test_util.py: test it - - * foolscap/call.py (CopiedFailure.setCopyableState): set .type to - a class that behaves (as least as far as reflect.qual() is - concerned) just like the original exception class. This improves - the behavior of derived Failure objects, as well as trial's - handling of CopiedFailures that get handed to log.err(). - CopiedFailures are now a bit more like actual Failures. See ticket - #4 (http://foolscap.lothar.com/trac/ticket/4) for more details. - (CopiedFailureSlicer): make sure that CopiedFailures can be - serialized, so that A-calls-B-calls-C can return a failure all - the way back. 
- * foolscap/test/test_call.py (TestCall.testCopiedFailure): test it - * foolscap/test/test_copyable.py: update to match, now we must - compare reflect.qual(f.type) against some extension classname, - rather than just f.type. - * foolscap/test/test_pb.py: same - * foolscap/test/common.py: same - -2007-07-15 Brian Warner - - * foolscap/test/test_interfaces.py (TestInterface.testStack): - don't look for a '/' in the stacktrace, since it won't be there - under windows. Thanks to 'strank'. Closes Twisted#2731. - -2007-06-29 Brian Warner - - * foolscap/__init__.py: bump revision to 0.1.4+ while between releases - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same - -2007-05-14 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.1.4 - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same, also - remove a bunch of old between-release version numbers - -2007-05-14 Brian Warner - - * NEWS: update for the upcoming release - - * doc/using-foolscap.xhtml: rename from doc/using-pb.xhtml - - * doc/using-pb.xhtml: replace all uses of 'PB URL' with 'FURL' - - * foolscap/pb.py (Tub.getReference): if getReference() is called - before Tub.startService(), queue the request until startup. - (Tub.connectTo): same for connectTo(). - (Tub.startService): launch pending getReference() and connectTo() - requests. These are all fired with eventual-sends. - * foolscap/reconnector.py (Reconnector): don't automatically start - the Reconnector in __init__, rather wait for the Tub to start it. - * foolscap/test/test_tub.py (QueuedStartup): test it - * doc/using-pb.xhtml: update docs to match - - * foolscap/test/test_call.py (TestCall.testCall1): replace an - arbitrary delay with a polling loop, to make the test more - reliable under load - - * foolscap/referenceable.py (SturdyRef.asLiveRef): remove a method - that was never used, didn't work, and is of dubious utility - anyways.
- (_AsLiveRef): remove this too - - * misc/testutils/figleaf.py (CodeTracer.start): remove leftover - debug logging - - * foolscap/remoteinterface.py (RemoteInterfaceConstraint): accept - gifts too: allow sending of RemoteReferences on the outbound side, - and accept their-reference sequences on the inbound side. - * foolscap/test/test_gifts.py (Gifts.test_constraint): test it - * foolscap/test/test_schema.py (Interfaces.test_remotereference): - update test, since now we allow RemoteReferences to be sent on the - outbound side - - * foolscap/remoteinterface.py (getRemoteInterface): improve the - error message reported when a Referenceable class implements - multiple RemoteInterfaces - - * foolscap/remoteinterface.py (RemoteMethodSchema.initFromMethod): - properly handle methods like 'def foo(nodefault)' that are missing - *all* default values. Previously this resulted in an unhelpful - exception (since typeList==None), now it gives a sensible - InvalidRemoteInterface exception. - * foolscap/test/test_schema.py (Arguments.test_bad_arguments): - test it - -2007-05-11 Brian Warner - - * foolscap/slicers/set.py (FrozenSetSlicer): finally acknowledge - our dependence on python2.4 or newer, by using the built-in 'set' - and 'frozenset' types by default. We'll serialize the old sets.Set - and sets.ImmutableSet too, but they'll emerge as a set/frozenset. - This will cause problems for code that was written to be - compatible with python2.3 (by using sets.Set) and wasn't changed - when moved to 2.4, if it tries to mingle sets.Set with the data - coming out of Foolscap. Unfortunate, but doing it this way - preserves both sanity and good behavior for modern 2.4-or-later - apps. - (SetUnslicer): fix handling of children that were unreferenceable - during construction, fix handling of children that are not ready - for use (i.e. gifts). - (FrozenSetUnslicer): base this off of TupleUnslicer, since - previously the cycle-handling logic was completely broken. 
I'm not - entirely sure this is necessary, since I think the contents of - sets must be transitively immutable (or at least transitively - hashable), but it is good to review and clean it up anyways. - * foolscap/slicers/allslicers.py: match name change - - * foolscap/slicers/tuple.py (TupleUnslicer.receiveClose): fix - handling of unready children (i.e. gifts), previously gifts inside - containers were completely broken. - * foolscap/slicers/list.py (ListUnslicer.receiveClose): same - * foolscap/slicers/dict.py (DictUnslicer.receiveClose): same - - * foolscap/call.py: add debug log messages (disabled) - - * foolscap/referenceable.py (TheirReferenceUnslicer.receiveClose): - gifts must declare themselves 'unready' until the RemoteReference - resolves, since we might be inside a container of some sort. - Without this fix, methods would be invoked too early, before the - RemoteReference was really available. - - * foolscap/test/test_banana.py (ThereAndBackAgain.test_set): match - new set/sets.Set behavior - (ThereAndBackAgain.test_cycles_1): test some of the cycles - (ThereAndBackAgain.test_cycles_3): add (disabled) test for - checking cycles that involve sets. I think these tests are - non-sensical, since sets can't really participate in the sorts of - cycles we worry about, but I left the (disabled) test code in - place in case it becomes useful again. - - * foolscap/test/test_gifts.py (Gifts.testContainers): validate - that gifts can appear in all sorts of containers successfully.
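The unready-children fix in this entry can be illustrated with a simplified stand-in (hypothetical code, not the real TupleUnslicer): an immutable tuple cannot be constructed until every child slot, including a late-resolving gift, has been filled, so construction and waiter notification are deferred until the last slot lands.

```python
# Sketch of deferred tuple construction: slots for unready children
# (gifts) are tracked, and the real tuple is only built, and any
# waiters notified, once the final child resolves.

class PendingTuple:
    def __init__(self, size):
        self._slots = [None] * size
        self._unready = set(range(size))
        self._callbacks = []
        self.value = None      # the finished tuple, once ready

    def set_child(self, index, obj):
        self._slots[index] = obj
        self._unready.discard(index)
        if not self._unready and self.value is None:
            self.value = tuple(self._slots)
            for cb in self._callbacks:
                cb(self.value)

    def when_ready(self, cb):
        if self.value is not None:
            cb(self.value)
        else:
            self._callbacks.append(cb)
```

Without this kind of deferral, a method call would be invoked with a half-built container, which is exactly the "invoked too early" failure the entry describes.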
- -2007-05-11 Brian Warner - - * foolscap/__init__.py: bump revision to 0.1.3+ while between releases - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same - -2007-05-02 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.1.3 - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same - -2007-05-02 Brian Warner - - * MANIFEST.in: include some recently-added files to the source - tarball - - * NEWS: update for the upcoming release - - * foolscap/reconnector.py (Reconnector._failed): simplify - log/no-log logic - - * foolscap/slicers/unicode.py (UnicodeConstraint): add a new - constraint that only accepts unicode objects. It isn't complete: - I've forgotten how the innards of Constraints work, and as a - result this one is too permissive: it will probably accept too - many tokens over the wire before raising a Violation (although the - post-receive just-before-the-method-is-called check should still - be enforced, so application code shouldn't notice the issue). - * foolscap/test/test_schema.py (ConformTest.testUnicode): test it - (CreateTest.testMakeConstraint): check the typemap too - * foolscap/test/test_call.py (TestCall.testMegaSchema): test in a call - * foolscap/test/common.py: same - - * foolscap/constraint.py (ByteStringConstraint): rename - StringConstraint to ByteStringConstraint, to more accurately - describe its function. This constraint will *not* accept unicode - objects. - * foolscap/call.py, foolscap/copyable.py, foolscap/referenceable.py: - * foolscap/slicers/vocab.py: same - - * foolscap/schema.py (AnyStringConstraint): add a new constraint - to accept either bytestrings or unicode objects. I don't think it - actually works yet, particularly when used inside containers. - (constraintMap): map 'str' to ByteStringConstraint for now. Maybe - someday it should be mapped to AnyStringConstraint, but not today. - Map 'unicode' to UnicodeConstraint. 
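The bytestring/unicode constraint split described above can be pictured with a modern-Python analog (bytes/str standing in for the 2.x str/unicode pair; these are simplified stand-ins, not the actual foolscap.constraint classes).

```python
# Simplified constraint analog: byte strings and unicode get distinct
# constraints, and AnyStringConstraint accepts either kind.

class Violation(Exception):
    """Raised when an object does not satisfy a constraint."""

class ByteStringConstraint:
    def checkObject(self, obj):
        if not isinstance(obj, bytes):
            raise Violation("expected a byte string, got %r" % (obj,))

class UnicodeConstraint:
    def checkObject(self, obj):
        if not isinstance(obj, str):
            raise Violation("expected a unicode string, got %r" % (obj,))

class AnyStringConstraint:
    def checkObject(self, obj):
        if not isinstance(obj, (bytes, str)):
            raise Violation("expected bytes or unicode, got %r" % (obj,))

# The constraintMap idea: shorthand types resolve to full constraints.
constraintMap = {bytes: ByteStringConstraint(), str: UnicodeConstraint()}
```

The mapping mirrors the decision recorded above: the plain-string shorthand stays strict (bytes only), while unicode gets its own constraint rather than being silently accepted everywhere.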
- - - * foolscap/pb.py (Tub.getReference): assert that the Tub is - already running, either because someone called Tub.startService(), - or because we've been attached (with tub.setServiceParent) to a - running service. This requirement appeared with the - connector-tracking code, and I hope to relax it at some - point (such that any pre-startService getReferences will be queued - and serviced when the Tub is finally started), but for this - release it is a requirement to start the service before trying to - use it. - (Tub.connectTo): same - * doc/using-pb.xhtml: document it - * doc/listings/pb1client.py: update example to match - * doc/listings/pb2client.py: update example to match - * doc/listings/pb3client.py: update example to match - - * foolscap/pb.py (Tub.connectorFinished): if, for some reason, - we're removing the same connector twice, log and ignore rather - than explode. I can't find a code path that would allow this, but - I *have* seen it occur in practice, and the results aren't pretty. - Since the whole connection-tracking thing is really for the - benefit of unit tests anyways (who want to know when - Tub.stopService is done), I think it's more important to keep - application code running. - - * foolscap/negotiate.py (TubConnector.shutdown): clear out - self.remainingLocations too, in case it helps to shut things down - faster. Add some comments. - - * foolscap/negotiate.py (Negotiation): improve error-message - delivery, by keeping track of what state the receiver is in (i.e. - whether we should send them an HTTP error block, an rfc822-style - error-block, or a banana ERROR token). - (Negotiation.switchToBanana): empty self.buffer, to make sure that - any extra data is passed entirely to the new Banana protocol and - none of it gets passed back to ourselves - (Negotiation.dataReceived): same, only recurse if there's something - still in self.buffer. 
In other situations we recurse here because - we might have somehow received data for two separate phases in a - single packet. - - * foolscap/banana.py (Banana.sendError): rather than explode when - trying to send an overly-long error message, just truncate it. - -2007-04-30 Brian Warner - - * foolscap/broker.py (Broker.notifyOnDisconnect): if the - RemoteReference is already dead, notify the callback right away. - Previously we would never notify them, which was a problem. - (Broker.dontNotifyOnDisconnect): be tolerant of attempts to - unregister callbacks that have already fired. I think this makes it - easier to write correct code, but on the other hand it loses the - assertion feedback if somebody tries to unregister something that - was never registered in the first place. - * foolscap/test/test_call.py (TestCall.testNotifyOnDisconnect): - test this new tolerance - (TestCall.testNotifyOnDisconnect_unregister): same - (TestCall.testNotifyOnDisconnect_already): test that a handler - fires when the reference was already broken - - * foolscap/call.py (InboundDelivery.logFailure): don't use - f.getTraceback() on string exceptions: twisted explodes - (FailureSlicer.getStateToCopy): same - * foolscap/test/test_call.py (TestCall.testFailStringException): - skip the test on python2.5, since string exceptions are deprecated - anyways and I don't want the warning message to clutter the test - logs - - * doc/using-pb.xhtml (RemoteInterfaces): document the fact that - the default name is *not* fully-qualified, necessitating the use - of __remote_name__ to distinguish between foo.RIBar and baz.RIBar - * foolscap/remoteinterface.py: same - - * foolscap/call.py (FailureSlicer.getStateToCopy): handle string - exceptions without exploding, annoying as they are.
- * foolscap/test/test_call.py (TestCall.testFail4): test them - -2007-04-27 Brian Warner - - * foolscap/broker.py (Broker.freeYourReference._ignore_loss): - change the way we ignore DeadReferenceError and friends, since - f.trap is not suitable for direct use as an errback - - * foolscap/referenceable.py (SturdyRef.__init__): log the repr of - the unparseable FURL, rather than just the str, in case there are - weird control characters in it - - * foolscap/banana.py (Banana.handleData): rewrite the typebyte - scanning loop, to remove the redundant pos<64 check. Also, if we - get an overlong prefix, log it so we can figure out what's going - wrong. - * foolscap/test/test_banana.py: update to match - - * foolscap/negotiate.py (Negotiation.dataReceived): if a - non-NegotiationError exception occurs, log it, since it indicates - a foolscap coding failure rather than some disagreement with the - remote end. Log it with 'log.msg' for now, since some of the unit - tests seem to trigger startTLS errors that flunk tests which - should normally pass. I suspect some problems with error handling - in twisted's TLS implementation, but I'll have to investigate it - later. Eventually this will turn into a log.err. - - * foolscap/pb.py (Tub.keepaliveTimeout): set the default keepalive - timer to 4 minutes. This means that at most 8 minutes will go by - without any traffic at all, which should be a reasonable value to - keep NAT table entries alive. PINGs are only sent if no other - traffic was received, and they are only one byte long, so the - traffic overhead should be minimal. Note that we are not turning - on disconnectTimeout by default: if you want quietly broken - connections to be disconnected before TCP notices a problem you'll - need to do tub.setOption("disconnectTimeout", 10*60) or something. - - * foolscap/pb.py: remove an unused import - - * foolscap/pb.py (Tub.generateSwissnumber): always use os.urandom - to generate the unguessable identifier. 
Previously we used either - PyCrypto or fell back to the stdlib 'random' module (which of - course isn't very random at all). I did it this way originally to - provide compatibility with python2.3 (which lacked os.urandom): - now that we require python2.4 or newer, os.urandom is a far better - source (it uses /dev/random or equivalent). - * doc/using-pb.xhtml: don't mention PyCrypto now that we aren't - using it at all. - - * foolscap/negotiate.py (Negotiation.minVersion): bump both min - and max version to '3', since we've added PING and PONG tokens - that weren't present before. It would be feasible to accommodate v2 - peers (by adding a Banana flag that refrains from ever sending - PINGs), but there aren't enough 0.1.2 installations present to - make this seem like a good idea just now. - (Negotiation.maxVersion): same - (Negotiation.evaluateNegotiationVersion3): same - (Negotiation.acceptDecisionVersion3): same - * foolscap/test/test_negotiate.py (Future): same - - * foolscap/banana.py (Banana): add keepalives and idle-disconnect. - The first timeout value says that if we haven't received any data - for this long, poke the other side by sending them a PING message. - The other end is obligated to respond with a PONG (both PING and - PONG are otherwise ignored). If we still haven't heard anything - from them by the time the second timeout expires, we drop the - connection. - (Banana.dataReceived): if we're using keepalives, update the - dataLastReceivedAt timestamp on every inbound message. - (Banana.sendPING, sendPONG): new messages and handlers. Both are - ignored, and serve only to update dataLastReceivedAt. - * foolscap/tokens.py: add PING and PONG tokens - * doc/specifications/banana.xhtml: document PING and PONG - * foolscap/broker.py (Broker.__init__): add keepaliveTimeout and - disconnectTimeout arguments. Both default to 'None' to disable - keepalives and disconnects.
- * foolscap/negotiate.py (Negotiation.switchToBanana): copy - timeouts from the Tub into the new Banana/Broker instance - * foolscap/pb.py (Tub.setOption): accept 'keepaliveTimeout' and - 'disconnectTimeout' options to enable this stuff. - * foolscap/test/test_keepalive.py: test it - - * foolscap/pb.py (Tub.brokerClass): parameterize the kind of - Broker that this Tub will create, to make certain unit tests - easier to write (allowing them to substitute a custom Broker - subclass). - * foolscap/negotiate.py (Negotiation.brokerClass): same - (Negotiation.initClient): capture the brokerClass here for clients - (Negotiation.handlePLAINTEXTServer): and here for listeners - (Negotiation.switchToBanana): use it - -2007-04-26 Brian Warner - - * README (DEPENDENCIES, INSTALLATION): add docs - -2007-04-16 Brian Warner - - * foolscap/remoteinterface.py - (RemoteInterfaceConstraint.checkObject): string-format the object - inside a tuple, to avoid an annoying logging failure when the - object in question is actually a tuple - - * foolscap/test/test_gifts.py (ignoreConnectionDone): trap both - ConnectionDone and ConnectionLost, since it appears that windows - signals ConnectionLost. Hopefully this will make the unit tests - pass under windows. - - * foolscap/banana.py (Banana.handleData): when the token prefix is - too long, log and emit the repr of the prefix string, so somebody - can figure out where it came from. - * foolscap/test/test_banana.py (InboundByteStream.testString): - update to match - -2007-04-13 Brian Warner - - * foolscap/copyable.py (CopyableSlicer.slice): set self.streamable - before yielding any tokens, otherwise contained elements that use - streaming will trigger an exception. Many thanks to - Ricky (iacovou-AT-gmail.com) for trying out advanced features of - Foolscap and discovering this problem, I would never have stumbled - over this one on my own. TODO: we still need unit tests to - exercise this sort of thing on a regular basis. 
- (Copyable2): same thing - - * foolscap/schema.py (_tupleConstraintMaker): redefine what tuples - mean in constraint specifications. They used to indicate an - alternative: (int,str) meant accept either an int *or* a string. - Now tuples indicate actual tuples, so (int,str) means a 2-element - tuple in which the first element is an int, and the second is a - string. I don't know what I was thinking back then. If you really - want to use alternatives, use schema.ChoiceOf instead. - * foolscap/test/test_schema.py (CreateTest.testMakeConstraint): - test that tuples mean tuples - - * foolscap/reconnector.py (Reconnector._failed): the old f.trap() - could sometimes cause the reconnector to stop trying forever. - Remove that. Thanks to Rob Kinninmont for finding the problem. Add - new code to log the failure if f.check() indicates that it is a - NegotiationError, since that's the sort of weird thing that users - will probably want to see. - * foolscap/test/test_reconnector.py: add lots of new tests - - * misc/testutils: add tools to do figleaf-based code-coverage - checks while running unit tests. We have 89.2% coverage! Use - 'make test-figleaf figleaf-output' to see the results. 
- * Makefile: new targets for figleaf - (test): enable 'make test TEST=foolscap.test.test_call' to work - (test-figleaf): same - - * foolscap/__init__.py: bump revision to 0.1.2+ while between releases - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same - -2007-04-04 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.1.2 - * misc/{sid|sarge|dapper|edgy|feisty}/debian/changelog: same - - * NEWS: update for new release - -2007-04-04 Brian Warner - - * misc/feisty/debian/*: add debian packaging support for the - Ubuntu 'feisty' distribution - * Makefile: and a way to invoke it - * misc/edgy/debian/*: same for the 'edgy' distribution - * MANIFEST.in: include the edgy/feisty files in the source tarball - - * foolscap/test/test_call.py (TestCall.testMegaSchema): add a new - test to exercise lots of constraint code - * foolscap/test/common.py: support code for it - * foolscap/slicers/set.py (SetUnslicer.setConstraint): fix bugs - discovered as a result - (SetConstraint.__init__): same - - * foolscap/__init__.py: bump revision to 0.1.1+ while between releases - * misc/{sid|sarge|dapper}/debian/changelog: same - -2007-04-03 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.1.1 - * misc/{sid|sarge|dapper}/debian/changelog: same - - * NEWS: update for new release - -2007-04-03 Brian Warner - - * foolscap/negotiate.py (Negotiation): bump both minVersion and - maxVersion to 2, indicating that this release is not compatible - with 0.1.0, since the reqID=0 change will cause the first method - call in either direction (probably a getYourReferenceByName) to - never receive a response. The handler functions were rearranged a - bit too. 
- * foolscap/test/test_negotiate.py (Future): update to match - - * NEWS: get ready for release - - * foolscap/test/test_pb.py (TestCallable.testLogLocalFailure): - validate that Tub.setOption("logLocalFailures") actually works - (TestCallable.testLogRemoteFailure): same - - * foolscap/remoteinterface.py (UnconstrainedMethod): add a - "constraint" that can be used to mark a method as accepting - anything and returning anything. This might be useful if you have - RemoteInterface for most of your application, but there are still - one or two methods which should not enforce a schema of any sort. - This mostly defeats the purpose of schemas in the first place, but - offering UnconstrainedMethod means developers can make the - schema-or-not decision differently for individual methods, rather - than for a whole class at a time. - * foolscap/constraint.py (IRemoteMethodConstraint): document the - requirements on IRemoteMethodConstraint-providing classes, now that - there are two of them. - * foolscap/test/test_call.py (TestCall.testUnconstrainedMethod): - test it - * foolscap/test/common.py: add some support code for the test - - * foolscap/referenceable.py - (RemoteReferenceTracker._handleRefLost): refrain from sending - decref messages with count=0 - - * foolscap/negotiate.py (TubConnectorClientFactory.__repr__): - include both the origin and the target of the factory - - * foolscap/test/*.py (tearDown): ensure that all tests use the - now-standard stopService+flushEventualQueue teardown procedure, to - avoid trial complaints about leftover timers and selectables. - - * foolscap/test/test_pb.py (GoodEnoughTub): when crypto is not - available, skip some tests that really require it. Modify others - to not really require it.
- * foolscap/test/test_crypto.py: same - * foolscap/test/test_gifts.py: same - * foolscap/test/test_loopback.py: same - * foolscap/test/test_negotiate.py: same - - * foolscap/test/test_*.py (localhost): replace all use of - "localhost" with "127.0.0.1" to avoid invoking the address - resolver, which sometimes leaves a cleanup timer running. I think - the root problem is there's no clean way to interrupt a connection - attempt which is still in the address resolution phase. You can stop - it, but there's no way to wait for the resolver's cleanup timer to - finish, which is what we'd need to make Trial happy. - tcp.BaseClient.resolveAddress does not keep a handle to the - resolver, so failIfNotConnected cannot halt its timer. - * foolscap/test/test_zz_resolve.py: removed this test - - * foolscap/crypto.py (_ssl): import SSL goo in a different way to - appease pyflakes - - * all: fix some pyflakes warnings by checking for the - importability of foolscap.crypto in a different way - - * foolscap/pb.py (Tub.stopService): when shutting down the Tub, - make sure all outstanding connections are shut down as well. By - the time stopService's deferred fires, all of our TCP transports - should have had their 'connectionLost' methods fired. This is - specifically to help unit tests that use Trial, which insists upon - having a clean reactor between tests. With this change, test - suites should use a tearDown() method that looks like: 'd = - tub.stopService(); d.addCallback(flushEventualQueue); return d', - and trial shouldn't complain about foolscap selectables or timers - being left over. - (Tub.stopService): also, since Tubs are not currently restartable, - modify some entry points at shutdown to make sure nobody gets - confused about why their getReference() doesn't work anymore.
Be - aware that at some point soon, we'll start enforcing the rule that - the Tub must be started before you can get any connections out of - it, at which point getReference() will queue requests until - startService() is called. The idea is that the Tub will not use - the network at all unless it is running. - * foolscap/broker.py: drop the connection when shutdown() is called - * foolscap/negotiate.py (Negotiate): rearrange error reporting and - connection shutdown. Now errors are stashed and loseConnection() - is called, but the errors are not reported to negotiationFailed() - until connectionLost() is fired (which will be after any remaining - data gets sent out over the wire). - (TubConnector): the TubConnector reports success once the first - connection has passed negotiation, but now lives until all of the - connections are finally closed. It then informs the Tub that it is - done, so the Tub can forget about it (and possibly notify - stopService that it can finally complete). - - * foolscap/observer.py (OneShotObserverList): eventual-send-using - event distributor, following the pattern espoused by Mark Miller's - "Concurrency Among Strangers" paper. Many thanks to AllMyData.com - for contributing this class.
- * foolscap/test/test_observer.py: tests for it - -2007-03-22 Brian Warner - - * foolscap/constraint.py (StringConstraint): add a regexp= argument - * foolscap/test/test_schema.py (ConformTest.testString): test it - - * foolscap/test/test_banana.py (TestBananaMixin.shouldDropConnection): - fix a pyflakes warning - * foolscap/call.py: same, don't fall back to plain StringIO if - cStringIO is unavailable - * foolscap/debug.py: same - * foolscap/storage.py: same - - * foolscap/slicers/list.py (ListConstraint): add a minLength= - argument, fix maxLength=None - * foolscap/test/test_schema.py (ConformTest.testList): test it - - * foolscap/constraint.py (StringConstraint): add a minLength= - argument - * foolscap/test/test_schema.py (ConformTest.testString): test it - - * foolscap/slicers/set.py (BuiltinFrozenSetSlicer): add slicer for - the builtin 'frozenset' type that appeared in python2.4 - (SetConstraint): provide a constraint for sets - * foolscap/schema.py (SetOf): add an alias - * foolscap/test/test_schema.py (ConformTest.testSet): test it - -2007-03-20 Brian Warner - - * foolscap/banana.py (Banana.outgoingVocabTableWasReplaced): remove - verbose debug message, not really needed anymore - - * foolscap/ipb.py (IRemoteReference.callRemoteOnly): new method to - invoke a remote method without waiting for a response. Useful for - certain messages where we really don't care whether the far end - receives them or not. - * foolscap/referenceable.py (RemoteReference.callRemoteOnly): - implement it - (TheirReferenceUnslicer.ackGift): use it - * foolscap/broker.py (Broker.initBroker): use reqID=0 to mean "we - don't want a response". Note that this is a compatibility barrier: - older endpoints which use reqID=0 for the first message will not - get a response. All subsequent messages will be ok, though. 
- (Broker._callFinished): don't send a response if reqID=0 - (Broker.callFailed): don't send an error if reqID=0 - * foolscap/call.py (InboundDelivery.logFailure): fix arg logging - (CallUnslicer.receiveChild): don't create an activeLocalCalls - entry if reqID=0 - * foolscap/test/test_call.py (TestCallOnly.testCallOnly): test it - (TestCall._testFailWrongReturnLocal_1): update expectations, - now that reqIDs start at 1 instead of 0 - * foolscap/test/common.py (TargetMixin.poll): new support code - - * foolscap/referenceable.py: add Referenceable to - schema.constraintMap, so that RemoteInterfaces can use 'return - Referenceable' to indicate that they return a Referenceable of any - sort. This is like using 'return RIFoo' to indicate that the - method returns a Referenceable that implements RIFoo, but without - the specific interface requirement. - * foolscap/remoteinterface.py (RemoteInterfaceConstraint): support - this by skipping the interface check if self.interface=None - * foolscap/test/test_schema.py (CreateTest): test it - * foolscap/test/test_interfaces.py (Types): update test to match, - since the error messages changed - * foolscap/test/common.py: more test support changes - -2007-03-19 Brian Warner - - * foolscap/ipb.py (IRemoteReference): new interface .. - * foolscap/referenceable.py (RemoteReferenceOnly): .. implemented here - * foolscap/remoteinterface.py - (RemoteInterfaceConstraint.checkObject): remove a circular import - by using IRemoteReference to detect RemoteReference instances, - rather than using isinstance(). - * foolscap/test/test_schema.py (Interfaces): test it - - * everything: massive Constraint refactoring. Primitive - constraints (StringConstraint, IntegerConstraint, etc) are now in - foolscap/constraint.py, while opentype-specific constraints like - ListConstraint and BooleanConstraint are in the same module that - defines the associated Slicer. 
Remote method constraints are in - remoteinterface.py and copyable.py, FailureConstraint is in - call.py . A new foolscap/constraint.py module contains the base - classes but is careful not to import much else. foolscap/schema.py - contains a reference to all constraints, so that user code can get - at them conveniently. Tests were updated to import from the new - places. Some circular imports were resolved. zope.interface - adaptation has been used to assist with the conversion from the - "shorthand" forms of constraint specification into the full form - (i.e. converting x=str into x=StringConstraint()), specifically - IConstraint(shorthand) will return a Constraint instance. - - * foolscap/__init__.py: bump revision to 0.1.0+ while between releases - * misc/{sid|sarge|dapper}/debian/changelog: same - -2007-03-15 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.1.0 - * misc/{sid|sarge|dapper}/debian/changelog: same - - * README: update for new release - * NEWS: update for new release - -2007-02-16 Brian Warner - - * foolscap/eventual.py (_SimpleCallQueue._turn): retire all - pending eventual-send messages before returning control to the - reactor, rather than doing exactly one event per reactor turn. - This seems likely to help avoid starvation, as we now finish as - much work as possible before accepting IO (which might cause more - work to be added to our queue), and probably makes the interaction - between eventual-send and DelayedCalls a bit more consistent. - Thanks to Rob Kinninmont for the suggestion. - -2007-02-08 Brian Warner - - * foolscap/test/test_pb.py: move TestCall out to.. - * foolscap/test/test_call.py: .. a new test file - - * foolscap/test/test_negotiate.py (BaseMixin.tearDown): add a - 100ms stall between shutting down all the Tubs and actually - finishing the test. 
This seems to be enough to stop the occasional - test failures that probably occur because TCP connections that - we've dropped haven't finished signalling the other end (also in - our process) that they've been closed. - -2007-01-30 Brian Warner - - * foolscap/pb.py (Tub): add certFile= argument, to allow the Tub - to manage its own certificates. This argument provides a filename - where the Tub should read or write its certificate. If the file - exists, the Tub will read the certificate data from there. If not, - the Tub will generate a new certificate and write it to the file. - * foolscap/test/test_tub.py: test it - * doc/using-pb.xhtml: document certFile= - * doc/listings/pb2server.py: use certFile= in the example - -2007-01-24 Brian Warner - - * foolscap/crypto.py (MyOptions._makeContext.alwaysValidate): add - code to ignore two additional OpenSSL certificate validation - errors: X509_V_ERR_CERT_NOT_YET_VALID (9) and - X509_V_ERR_CERT_HAS_EXPIRED (10). Foolscap uses certificates very - differently than web sites, and it is exceedingly common to start - using a cert mere seconds after creating it. If there is any - significant clock skew between the two systems, then insisting - that the cert's "valid after X" time is actually in the past will - cause a lot of false errors. - -2007-01-22 Brian Warner - - * .darcs-boringfile: ignore files that are generated by distutils - when we make a source release (dist/*) and when making a debian - package (build/* and the debian install directory). 
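The certFile= behavior described above follows a simple read-or-create pattern. A minimal sketch, assuming a hypothetical `generate` callable standing in for Foolscap's actual certificate generation:

```python
import os

def load_or_create_certificate(certfile, generate):
    """Read certificate data from certfile if it exists; otherwise
    generate a new certificate and persist it there. This mirrors the
    pattern behind Tub(certFile=...); both names here are hypothetical,
    not Foolscap's real API."""
    if os.path.exists(certfile):
        with open(certfile, "rb") as f:
            return f.read()
    data = generate()
    with open(certfile, "wb") as f:
        f.write(data)
    return data
```

Because the file is only written when absent, restarting the process keeps the same TubID instead of minting a new identity each run.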
- - * foolscap/__init__.py: bump revision to 0.0.7+ while between releases - * misc/{sid|sarge|dapper}/debian/changelog: same - -2007-01-16 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.0.7 - * misc/{sid|sarge|dapper}/debian/changelog: same - - * NEWS: update for 0.0.7 - -2007-01-16 Brian Warner - - * foolscap/pb.py (Tub.getBrokerForTubRef): special-case an attempt - to connect to a tub with an ID equal to our own, by attaching a - Broker to a special LoopbackTransport that delivers serialized - data directly to a peer without going through a socket. - * foolscap/broker.py (LoopbackTransport): same - (Broker.setTub): refactor some code out of negotiate.py - * foolscap/negotiate.py (Negotiation.switchToBanana): same - (Negotiation.loopbackDecision): new method to determine params for - a loopback connection - * foolscap/test/test_loopback.py: enable all tests, add a check to - make sure we can connect to ourselves twice - - * foolscap/referenceable.py (RemoteReferenceTracker.getRef): the - weakref this holds may have become stale, so check that we both - have self.ref *and* that self.ref() is not None to decide whether - we must re-create the RemoteReference. This fixes a bug in which - two calls to Tub.getReference() for the same URL would result in - the second call getting None. - (RemoteReferenceTracker._handleRefLost): only send a decref - message if we haven't already re-created the RemoteReference - * foolscap/test/test_pb.py (TestService.testConnect3): modify this - test to validate the 'call Tub.getReference() twice' bug is fixed - -2007-01-15 Brian Warner - - * foolscap/test/test_loopback.py (ConnectToSelf): Add tests to - validate that we can connect to our own Tub properly. This test - does not yet pass for authenticated Tubs: the negotiation hangs - until the 30 second timeout is reached. 
To fix this requires - special-casing such connections to use a different kind of Broker, - one that wires transport.write to eventual(rcvr.dataReceived) and - skips negotiation completely. - -2007-01-10 Brian Warner - - * doc/using-pb.xhtml: fix some "pb" references to mention - "Foolscap" instead - * doc/schema.xhtml: same - -2007-01-09 Brian Warner - - * foolscap/pb.py (Listener.removeTub): disownServiceParent is not - guaranteed to return a Deferred, so don't try to make removeTub do - so either. I saw a failure related to this, but was unable to - analyze it well enough to reproduce it or write a test case. - (Tub.stopListeningOn): tolerate removeTub returning synchronously - (Tub.stopService): same - -2007-01-04 Brian Warner - - * foolscap/negotiate.py (Negotiation.dataReceived): when sending - an error message to the far end inside the decision block, make - sure the error text itself has no newlines, since that would break - the format of the block, and probably cause all sorts of - confusion. - * foolscap/ipb.py (IRemotelyCallable.doRemoteCall): remote calls now - accept positional args - -2007-01-04 Brian Warner - - * foolscap/__init__.py: bump revision to 0.0.6+ while between - releases - * misc/{sid|sarge|dapper}/debian/changelog: same - -2006-12-18 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.0.6 - * misc/{sid|sarge|dapper}/debian/changelog: same - -2006-12-18 Brian Warner - - * misc/{sid|sarge|dapper}/debian/rules: include copyable.xhtml - - * NEWS: update for 0.0.6 - - * foolscap/negotiate.py (Negotiation): Send less data. When - sending a range (both for the fundamental banana negotiation - version and for the initial vocab table index), send it in a - single line with "0 1" rather than two separate min and max lines. - This brings the hello message down to about 105 bytes and improves - the benefit of using a negotiated initial-vocab-table-range rather - than an early (but post-negotiation) SET-VOCAB banana sequence.
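The one-line range format just described, and the "pick the highest value both sides support" rule it feeds into, can be sketched with two hypothetical helpers (the real negotiation works on HTTP-header-style key/value blocks):

```python
def parse_range(value):
    """Parse a negotiation range sent as a single line, e.g. '0 1',
    into an inclusive (min, max) pair. Hypothetical helper mirroring
    the min/max-in-one-line format described above."""
    lo, hi = value.split()
    return (int(lo), int(hi))

def best_common_value(ours, theirs):
    """Highest value lying inside both inclusive ranges, or None if
    the ranges do not overlap (i.e. negotiation must fail)."""
    lo = max(ours[0], theirs[0])
    hi = min(ours[1], theirs[1])
    return hi if hi >= lo else None
```

Sending one "lo hi" line instead of two separate min/max lines is what shaves the hello message down toward the quoted ~105 bytes.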
- - * foolscap/schema.py: add RemoteInterfaceConstraints. This works - by declaring an argument as, e.g., RIFoo, which means that this - argument must be passed a RemoteReference that is connected to a - remote Referenceable which implements RIFoo. This works as a - return value constraint too. - (Constraint.checkObject): add inbound= - argument to this method, so RemoteInterfaceConstraint can work - properly - (InterfaceConstraint): split this into local and remote forms - (LocalInterfaceConstraint): only check real local objects, not - RemoteReferences. This isn't really useful yet, but eventually - schemas will turn into proper local Guards and then it will be. - (RemoteInterfaceConstraint): only check RemoteReferences. The - check performed must be different on inbound and outbound (since - we'll see a RemoteReference when inbound=True, and a Referenceable - when inbound=False). - (makeConstraint): distinguish between Interfaces and - RemoteInterfaces, so we can figure out whether to use - LocalInterfaceConstraint or RemoteInterfaceConstraint - (callable): get rid of this, the functionality has been absorbed - into RemoteMethodSchema.initFromMethod - * foolscap/broker.py (Broker._doCall): use inbound= argument - (Broker._callFinished): same - * foolscap/referenceable.py (RemoteReference._callRemote): same - * foolscap/slicer.py (ReferenceUnslicer.receiveChild): same - * foolscap/test/test_schema.py: same - * foolscap/test/test_interfaces.py: rearrange, add tests for - RemoteInterfaces. - (LocalTypes): there are tests for Interfaces too, but without local - Guards they're disabled for now. - * foolscap/test/common.py: refactoring - - * foolscap/schema.py (makeConstraint): map None to Nothing(), - which only accepts None. This is pretty handy for methods which - are always supposed to return None. - - * foolscap/schema.py (RemoteMethodSchema.checkResults): don't - annotate any Violations here.. 
leave that up to the caller - * foolscap/broker.py (Broker._callFinished): update the annotation - * foolscap/test/test_pb.py: update to match - - * foolscap/tokens.py (Violation.prependLocation): add new methods - to Violations for easier annotation of where they occurred - (Violation.appendLocation): same - (Violation.__str__): remove the "at" from the location text - * foolscap/test/test_pb.py: update to match - - * foolscap/broker.py (Broker._callFinished): if the outbound - return value violates the schema, annotate the Violation to - indicate the object and method that was responsible. - -2006-12-15 Brian Warner - - * foolscap/call.py (CopiedFailure): clean up a bit, make it match - the current Failure class better - - * doc/using-pb.xhtml: document positional arguments - - * foolscap/call.py: pass both positional and keyword arguments to - remote methods. Previously only keyword arguments were accepted. - This is a pretty far-reaching change, and introduces a - compatibility barrier. - (ArgumentSlicer): send both positional args and kwargs in a - separate container - (CallSlicer): move arg-sending out of CallSlicer - (InboundDelivery.isRunnable): make InboundDelivery itself - responsible for determining when it is runnable, instead of - leaving that up to the CallUnslicer. The InboundDelivery is always - referenceable, making the resulting object delivery simpler. - (ArgumentUnslicer): move arg-receiving out of CallUnslicer. All of - the schema-checking takes place here. Simplify the are-we-ready - tests. - (CallUnslicer): most of the code has moved out. The (call) - sequence is now ('call', reqID, objID, methname, (args)). 
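The new (call) sequence shape, ('call', reqID, objID, methname, (args)), with the argument container carrying both positional and keyword arguments, can be sketched as follows. The tuple/dict encoding here is a hypothetical simplification; real Foolscap serializes these as banana tokens with schema checking:

```python
def make_call_sequence(req_id, obj_id, methname, args, kwargs):
    """Build a sketch of the (call) wire sequence described above.
    The final element stands in for the ArgumentSlicer's container,
    which now carries posargs and kwargs together."""
    return ("call", req_id, obj_id, methname, (tuple(args), dict(kwargs)))

def deliver(call_seq, target):
    """Unpack a call sequence and invoke the named method on target,
    passing both positional and keyword arguments."""
    tag, req_id, obj_id, methname, (args, kwargs) = call_seq
    assert tag == "call"
    return getattr(target, methname)(*args, **kwargs)
```

This is the compatibility barrier the entry mentions: an older peer expecting kwargs-only calls cannot parse the combined argument container.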
- * foolscap/broker.py (Broker.scheduleCall): simplify, allow posargs - * foolscap/referenceable.py (Referenceable.doRemoteCall): deliver - posargs to the target method as well as kwargs - (RemoteReference): same, stop trying to pre-map posargs into kwargs, - no longer require a RemoteInterface to use posargs - * foolscap/vocab.py (vocab_v1): add 'arguments' to the v1 vocab - list. This is a compatibility barrier, and changes like this are - only allowed between releases. Once 0.0.6 is out we should leave - the v1 list alone and make any additions to v2 instead. - * foolscap/schema.py (RemoteMethodSchema): allow posargs, deal - correctly with a mixture of posargs and kwargs - * foolscap/test/test_schema.py (Arguments): test the - RemoteMethodSchema class - * foolscap/test/test_pb.py (TestCall.testCall1a): new tests of - posargs and mixed posargs/kwargs - (TestService.tearDown): use flushEventualQueue for cleanup - * foolscap/test/test_interfaces.py (TestInterface): change a few - things to match RemoteMethodSchema's new interfaces - - * foolscap/eventual.py (flushEventualQueue): allow this method to - accept a single argument, which it ignores. This enables it to be - used easily as a Deferred callback/errback, such as in a Trial - tearDown method. The recommended usage is: d = clean_stuff(); - d.addBoth(flushEventualQueue); return d - -2006-12-11 Brian Warner - - * foolscap/vocab.py: add code to negotiate an initial set of words - with which to pre-fill the VOCAB token list. Each side proposes a - range and they use the highest common index (and they exchange a - short hash of the list itself to guard against disagreements). - This serves to compress the protocol traffic by maybe 50% over the - longer run.
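The short-hash guard mentioned above (exchanged so the two sides can detect a disagreement about a preset vocab table's contents) can be sketched like this. Both helper names are hypothetical; the real code lives in foolscap/vocab.py:

```python
import hashlib

def vocab_hash(words):
    """Short hash of a vocab word list. Each side advertises this for
    its copy of the preset table, so a mismatch is caught during
    negotiation instead of silently corrupting the token stream."""
    joined = b"\x00".join(w.encode("utf-8") for w in words)
    return hashlib.sha1(joined).hexdigest()[:8]

def tables_agree(my_table, their_hash):
    """True if the peer's advertised hash matches our copy of the table."""
    return vocab_hash(my_table) == their_hash
```

A few hash bytes are enough here: the goal is detecting accidental divergence between releases, not resisting an adversary.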
- * foolscap/negotiate.py: send the 'initial-vocab-table-min' and - '-max' keys in the offer, and 'initial-vocab-table-index' in the - decision (and in the Banana params) - * foolscap/broker.py (Broker.__init__): populate the table - * foolscap/banana.py (Banana.populateVocabTable): new method - - * foolscap/test/test_banana.py (Sliceable.testAdapter): todo items - - * foolscap/referenceable.py - (RemoteReferenceOnly.notifyOnDisconnect): document this method. - * foolscap/broker.py (Broker.shutdown): cancel all disconnect - watchers upon shutdown - * foolscap/pb.py (Tub.stopService): same - - * foolscap/negotiate.py (Negotiation.evaluateHello): if we spot an - <=0.0.5 peer, mention that fact in our error message, to - distinguish this case from some completely non-Foolscapish - protocol trying to talk to us. - -2006-12-10 Brian Warner - - * foolscap/negotiate.py (TubConnectorClientFactory.__repr__): - annotate the string form to include which Tub we're connecting to. - This makes the default factory's "BlahFactory Starting" log - messages more interesting to look at. - * foolscap/referenceable.py (TubRef.getTubID): support method - (NoAuthTubRef.getTubID): same - -2006-12-01 Brian Warner - - * foolscap/referenceable.py (RemoteReference.callRemote): use - defer.maybeDeferred to rearrange and simplify. Clarify the - comments about the various phases of commitment. - - * foolscap/call.py (AnswerUnslicer.checkToken): when re-raising an - exception, use bareword 'raise' rather than explicitly re-raising - the same exception instance with 'raise v'. Both forms get the - right instance, but the latter loses the earlier stack trace. 
- * foolscap/schema.py (RemoteMethodSchema.checkResults): same - (RemoteMethodSchema.checkAllArgs): same - * foolscap/referenceable.py (RemoteReference.callRemote): same - - * foolscap/test/test_interfaces.py (TestInterface.testStack): new - test to verify that the Failure you get when you violate outbound - method arguments actually includes the call to callRemote. - - * foolscap/schema.py (StringConstraint.checkObject): make the Violation - message more useful - (InterfaceConstraint.checkObject): same, by printing the repr() of the - object that didn't meet the constraint. I'm not sure if this could be - considered to leak sensitive information or not. - (ClassConstraint.checkObject): same - (RemoteMethodSchema.checkAllArgs): record which argument caused the - problem in the Violation - * foolscap/referenceable.py (RemoteReference.callRemote): add - RemoteInterface and method name to the Violation when a caller - violates their outbound constraint - - * foolscap/tokens.py (Violation.setLocation,getLocation): make it - easier to modify an existing location value - - * foolscap/test/test_interfaces.py (TestInterface.testFail): verify - that RemoteFailures pass a StringConstraint schema - - * foolscap/test/test_copyable.py: remove unused imports, from pyflakes - * foolscap/test/test_pb.py: same - * foolscap/test/test_reconnector.py: same - * foolscap/test/test_registration.py: same - - * foolscap/test/test_interfaces.py: split the RemoteInterface - tests out to a separate file - * foolscap/test/test_pb.py: split them from here - * foolscap/test/common.py: factor out some common utility classes - -2006-11-30 Brian Warner - - * foolscap/negotiate.py (Negotiation.dataReceived): when sending - an error block, set banana-decision-version to '1' so the - recipient knows that it's safe to interpret the 'error' key. - Thanks to Rob Kinninmont for the catch. 
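The bare-'raise' change described above is worth a small illustration. A hypothetical handler stands in for the Violation-annotation step; the point is that re-raising with bare `raise` preserves the original traceback, whereas (in the Python of that era) `raise v` restarted the trace at the re-raise site:

```python
def annotate_and_reraise(handler):
    """Catch an exception, let a handler annotate it (e.g. prepend a
    location to a Violation), then re-raise with bare 'raise' so the
    earlier stack trace survives intact."""
    try:
        raise ValueError("original failure")
    except ValueError:
        handler()   # annotation hook; hypothetical stand-in
        raise       # bare raise: same instance, original traceback
```

The same instance propagates either way; only the recorded stack differs, which is exactly what matters when debugging a schema Violation.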
- -2006-11-27 Brian Warner - - * foolscap/negotiate.py (Negotiation._evaluateNegotiationVersion1): - ignore extra keys in the offer, since a real v2 (and beyond) offer - will have all sorts of extra keys. - * foolscap/test/test_negotiate.py (NegotiationV2): test it by - putting extra keys in the offer - -2006-11-26 Brian Warner - - * foolscap/negotiate.py (Negotiation): change negotiation - protocol: now each end sends a minVersion/maxVersion pair, using - banana-negotiation-min-version and banana-negotiation-max-version, - indicating that they can handle all versions between those - numbers, inclusive. The deciding end finds the highest version - number that fits in the ranges of both ends, and includes it in - the banana-decision-version key of the decision block. This is an - incompatible protocol change, but should make it easier (i.e. - possible) to have compatible protocol changes in the future. - Thanks to Zooko for suggesting this approach. - (Negotiation.evaluateNegotiationVersion1): each negotiation - version gets its own methods - (Negotiation.acceptDecisionVersion1): same - (TubConnectorClientFactory.buildProtocol): allow the Tub to make - us use other Negotiation classes, for testing - * foolscap/pb.py (Listener.__init__): same, use the class from the - Tub that first caused the Listener to be created - * foolscap/broker.py (Broker.__init__): record the - banana-decision-version value, so tests can check it - * foolscap/test/test_negotiate.py (Future): test it - -2006-11-17 Brian Warner - - * foolscap/pb.py: remove unused and dodgy urlparse stuff - - * doc/using-pb.xhtml: move and expand the section on Copyable and - other pass-by-copy things into a new file - * doc/copyable.xhtml: new document. Thanks to Ricky Iacovou for - the registerCopier examples.
- * doc/listings/copyable-{receive|send}.py: new examples - * doc/stylesheet.css, doc/stylesheet-unprocessed.css - * doc/template.tpl: docs utilities - * Makefile: add 'make docs' target - - * foolscap/__init__.py: export registerCopier and - registerRemoteCopyFactory - - * foolscap/copyable.py (Copyable): The new preferred Copyable - usage is to have a class-level attribute named "typeToCopy" which - holds the unique string. This must match the class-level - "copytype" attribute of the corresponding RemoteCopy class. - Copyable subclasses (or ICopyable adapters) may still implement - getTypeToCopy(), but the default just returns self.typeToCopy . - Most significantly, we no longer automatically use the - fully-qualified classname: instead we *require* that the class - definition include "typeToCopy". Feel free to use any stable and - globally-unique string here. - (RemoteCopyClass): Require that RemoteCopy subclasses set their - "copytype" attribute, and use it for auto-registration. These - subclasses can still use "copytype=None" to inhibit - auto-registration. They no longer auto-register with the - fully-qualified classname. - * foolscap/referenceable.py (SturdyRef): match this change - * foolscap/test/test_copyable.py: same - -2006-11-16 Brian Warner - - * foolscap/negotiate.py (Negotiation.dataReceived): include the - error message in the '500 Internal Server Error' string. - (Negotiation.handlePLAINTEXTClient): include the full '500 - Internal Server Error' string in the reported exception. These two - changes make it easier to spot mismatched TubIDs. Thanks to Rob - Kinninmont for the suggestion. 
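The typeToCopy/copytype pairing described above can be sketched with a minimal auto-registering metaclass. This is a hypothetical stand-in for foolscap/copyable.py's RemoteCopyClass machinery, not its actual code:

```python
COPYABLE_REGISTRY = {}  # typeToCopy string -> RemoteCopy class

class RemoteCopyMeta(type):
    """Auto-register RemoteCopy subclasses by their class-level
    'copytype' attribute, as described above. copytype=None inhibits
    registration (used by the abstract base)."""
    def __init__(cls, name, bases, ns):
        super().__init__(name, bases, ns)
        copytype = ns.get("copytype")
        if copytype is not None:
            COPYABLE_REGISTRY[copytype] = cls

class RemoteCopy(metaclass=RemoteCopyMeta):
    copytype = None  # base class does not register itself

class FruitCopy(RemoteCopy):
    # must equal the sending Copyable's typeToCopy string; any stable,
    # globally-unique string will do (this one is made up)
    copytype = "example.com/fruit-v1"
```

Requiring an explicit string, rather than deriving one from the fully-qualified classname, is what lets the sender and receiver rename or relocate their classes independently.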
- -2006-11-14 Brian Warner - - * foolscap/__init__.py: bump revision to 0.0.5+ while between - releases - * misc/{sid|sarge|dapper}/debian/changelog: same - -2006-11-04 Brian Warner - - * NEWS: update for 0.0.5 - * foolscap/__init__.py: release Foolscap-0.0.5 - * misc/{sid|sarge|dapper}/debian/changelog: same - * MANIFEST.in: add debian packaging files to source tarball - -2006-11-01 Brian Warner - - * foolscap/pb.py (Tub.setOption): new API to set options. Added - logRemoteFailures and logLocalFailures, which cause failed - callRemotes to be sent to the twisted log via log.msg . The - defaults are False, which means that failures are only reported - through the caller's Deferred.errback . - - Setting logRemoteFailures to True means that the client's log will - contain a record of every callRemote that it sent to someone else - that failed on the far side. This can be implemented on a - per-callRemote basis by just doing d.addErrback(log.err) - everywhere, but often there are reasons (like debugging) for - logging failures that are completely independent of the desired - error-handling path. These log messages have a REMOTE: prefix to - make it very clear that the stack trace being shown is *not* - occurring on the local system, but rather on some remote one. - - Setting logLocalFailures to True means that the server's log will - contain a record of every callRemote that someone sent to it which - failed on that server. This cannot be implemented with - addErrbacks, since normally the server does not care about the - methods it is running for other people's benefit. This option is - purely for debugging purposes. These log messages have a LOCAL: - prefix to make it clear that the stack trace is happening locally, - but on behalf of some remote caller. 
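The setOption() behavior described above (failure logging off by default, with REMOTE:/LOCAL: prefixes marking where the stack trace actually occurred) can be sketched with a hypothetical minimal stand-in for the Tub's option handling:

```python
class TubOptions:
    """Sketch of the logRemoteFailures/logLocalFailures options.
    Hypothetical class; the real logic is spread across Tub,
    PendingRequest.fail, and Broker.callFailed."""
    KNOWN = ("logRemoteFailures", "logLocalFailures")

    def __init__(self):
        # both default to False: failures reach only the caller's errback
        self.options = {name: False for name in self.KNOWN}

    def setOption(self, name, value):
        if name not in self.KNOWN:
            raise KeyError("unknown option: %s" % name)
        self.options[name] = value

    def format_failure(self, failure_text, remote):
        """Return the log line that would be emitted, or None when the
        relevant option is disabled. The prefix shows whether the
        traceback happened on the far end or locally on our behalf."""
        opt = "logRemoteFailures" if remote else "logLocalFailures"
        if not self.options[opt]:
            return None
        prefix = "REMOTE: " if remote else "LOCAL: "
        return prefix + failure_text
```

The prefixes matter because a remote traceback pasted into a local log otherwise looks exactly like a local crash.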
- - * foolscap/call.py (PendingRequest.fail): improve the logging, - make it conditional on logRemoteFailures, add the REMOTE: prefix - (InboundDelivery): put more information into the InboundDelivery, - move logLocalFailures logging into it - (CallUnslicer.receiveClose): put the .runnable flag on the - InboundDelivery object instead of on the CallUnslicer - - * foolscap/broker.py (Broker): pass the InboundDelivery around - instead of the CallUnslicer that it points to. - (Broker.callFailed): Add logLocalFailures checking here. - - - * foolscap/reconnector.py: oops, add missing import that would break - any actual reconnection attempts - -2006-10-31 Brian Warner - - * misc/sarge/debian/control: add sarge packaging - * misc/dapper/debian/control: update dependencies, add Recommends - on pyopenssl - * misc/sid/debian/control: same - * Makefile: add 'debian-sarge' target - - * misc/dapper/debian: move debian packaging up a level - * misc/sid/debian: same - * Makefile: same - - * foolscap/__init__.py (__version__): bump to 0.0.4+ while between - releases - * misc/debs/sid/debian/changelog: same - * misc/debs/dapper/debian/changelog: same - -2006-10-26 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.0.4 - * misc/debs/sid/debian/changelog: same - * misc/debs/dapper/debian/changelog: same - -2006-10-26 Brian Warner - - * setup.py: fix project URL - - * MANIFEST.in: include misc/debs/* in the source tarball - - * NEWS: update for 0.0.4 - - * foolscap/test/test_reconnector.py: verify that the Reconnector's - callbacks are properly interleaved with any notifyOnDisconnect - callbacks the user might have registered. A Reconnector cb that - uses notifyOnDisconnect should see a strictly-ordered sequence of - connect, disconnect, connect, disconnect. - -2006-10-25 Brian Warner - - * foolscap/referenceable.py - (RemoteReferenceOnly.notifyOnDisconnect): accept args/kwargs to - pass to the callback. 
Return a marker that can be passed to - dontNotifyOnDisconnect() to de-register the callback. - * foolscap/broker.py (Broker.notifyOnDisconnect): same - (Broker.connectionLost): fire notifyOnDisconnect callbacks in a - separate turn, using eventually(), so that problems or - side-effects in one call cannot affect other calls or the - connectionLost process - * foolscap/test/test_pb.py (TestCall.testDisconnect4): test it - - * foolscap/pb.py (Tub.registerReference): undo that, make - registerReference *always* create a strongref to the target, but - split some of the work out to an internal function which makes the - weakrefs. Tub.registerReference() is the API that application code - uses to publish an object (make it reachable) *and* have the Tub - keep it alive for you. I'm not sure I can think of a use case for - making it reachable but *not* wanting the Tub to keep it alive. If - you want to make it reachable but still ephemeral, just pass it - over the wire. - (Tub._assignName): new method to make weakrefs and assign names. - (Tub.getOrCreateURLForReference): renamed from getURLForReference. - Changed to assign a name if possible and one didn't already exist. - BEHAVIOR CHANGE: This causes *all* objects passed over the wire, - whether explicitly registered or just implicitly passed along, to - be shareable as gifts (assuming the Tub is reachable and has a - location, of course). - * foolscap/referenceable.py (ReferenceableTracker.getURL): update - - * foolscap/test/test_registration.py (Registration.testWeak): use - _assignName instead of registerReference - * foolscap/test/test_gifts.py (Gifts.testOrdering): test it - -2006-10-25 Brian Warner - - * foolscap/pb.py (Tub.registerReference): add a strong= argument - which means the Tub should keep the registered object alive. If - strong=False, the tub uses a weakref, so that when the application - and all remote peers forget about the object, the Tub will too. 
- strong= defaults to True to match the previous behavior, but this - might change in the future, and/or it might become a property to - be set on the Tub. - * foolscap/test/test_registration.py: new tests for it - * foolscap/test/test_pb.py (TestService.testStatic): disable this - test, since static data (like tuples) are not weakreffable. The - registration of static data is an outstanding issue. - - * foolscap/pb.py (Tub.connectTo): provide a new method, sets up a - repeating connection to a given url (with randomized exponential - backoff) that will keep firing a callback each time a new - connection is made. This is the foolscap equivalent of - ReconnectingClientFactory, and is the repeating form of - getReference(). Thanks to AllMyData.com for sponsoring this work. - * foolscap/reconnector.py (Reconnector): implement it here - * foolscap/test/test_reconnector.py: test it - - * doc/using-pb.xhtml: update to reflect that we now have secure - PBURLs and TubIDs, and that methods are delivered in-order (at - least within a Tub-to-Tub connection) even in the face of gifts. - - * misc/debs/dapper/debian/rules (binary-indep): remove obsolete - reference to the old python-twisted-pb2 package - - * foolscap/referenceable.py (YourReferenceSlicer.slice): assert - that we actually have a URL to give out, since otherwise the error - will confusingly show up on the far end (as a Violation). This - occurs when we (as Alice) try to introduce Carol to a Bob that was - not explicitly registered in Bob's Tub, such that Bob does not - have a URL to give out. - - * foolscap/pb.py (Tub): tubID is no longer a parameter to Tub, - since it is always computed from the certificate - (UnauthenticatedTub): but it *is* a parameter here, since there - is no certificate - - * foolscap/broker.py (Broker.getMyReferenceByCLID): relax the - assertion to (int,long), since eventually clids will overrun a - 31-bit integer. Thanks to Rob Kinninmont for the catch. 
- (Broker.remote_decref): same - -2006-10-10 Brian Warner - - * misc/debs: add some debian packaging, separate directories for - sid and dapper because sid has pycentral and dapper is still in - the versioned-python-package era - * Makefile: simple Makefile to remind me how to create .debs - -2006-10-05 Brian Warner - - * foolscap/__init__.py: bump to 0.0.3+ while between releases - -2006-10-05 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.0.3 - * NEWS: update for 0.0.3 release - -2006-10-05 Brian Warner - - * foolscap/test/test_gifts.py (Gifts): split out the Introduction - tests from test_pb.py - (Gifts.testOrdering): test the ordering of messages around a gift. - Doing [send(1), send(2, carol), send(3)] should result in Bob seeing - [1, (2,carol), 3] in that order. Before the recent ordering fix, - the presence of the gift would delay message delivery, resulting in - something like [1, 3, (2,carol)] - * foolscap/test/test_pb.py: same - - * foolscap/call.py (CallUnslicer): fix ordering of message - delivery in the face of Gifts. Each inbound method call gets - unserialized into an InboundDelivery/CallUnslicer pair, which gets - put on a queue. Messages get pulled off the queue in order, but - only when the head of the queue is ready (i.e. all of its - arguments are available, which means any pending Gifts have been - retrieved). - (InboundDelivery): same - (CallUnslicer.describe): stop losing useful information - * foolscap/broker.py (Broker.doNextCall): add inboundDeliveryQueue - to implement all this - * foolscap/test/test_pb.py (TestCall.testFailWrongArgsRemote1): - match the change to CallUnslicer.describe - - * foolscap/referenceable.py (TheirReferenceUnslicer.receiveClose): - don't bother returning ready_deferred, since we're returning an - unreferenceable Deferred anyway. - - * foolscap/test/test_pb.py (Test3Way): put off the check that - Alice's gift table is empty until we're sure she's received the - 'decgift' message. 
Add a note about a race condition that we have - to work around in a weird way to avoid spurious test failures - until I implement sendOnly (aka callRemoteOnly). - -2006-10-04 Brian Warner - - * foolscap/test/test_banana.py (ThereAndBackAgain.testIdentity): - use an actual tuple. Obviously I wasn't thinking when I first - wrote this and tried to use "(x)" to construct a one-item tuple. - -2006-10-02 Brian Warner - - * everything: fix most of the pyflakes warnings. Some of the - remaining ones are actual bugs where I need to finish implementing - something. - - * foolscap/slicers/*.py: move most Slicers/Unslicers out to separate - files, since slicer.py was way too big - * foolscap/slicers/allslicers.py: new module to pull them all in. - banana.py imports this to make sure all the auto-registration hooks - get triggered. - * everything: rearrange imports to match - * setup.py: add new sub-package - -2006-10-01 Brian Warner - - * foolscap/slicer.py: rearrange the internals, putting the - corresponding Slicer and Unslicer for each type next to each other - - * foolscap/slicer.py: move all "unsafe" Slicers and Unslicers out to - storage.py where it belongs - * foolscap/storage.py: same - * foolscap/test/test_banana.py: fix some imports to match - * foolscap/test/test_pb.py: same - - * foolscap/slicer.py (ReplaceVocabSlicer): clean up VOCAB - handling: add the ('add-vocab') sequence to incrementally add to - the receiving end's incomingVocabulary table, fix the race - condition that would have caused problems for strings that were - serialized after the setOutgoingVocabulary() call was made but - before the ('set-vocab') sequence was actually emitted. Lay the - groundwork for adaptive tokenization and negotiated vocab table - presets. Other classes involved are AddVocabSlicer, - AddVocabUnslicer, and ReplaceVocabUnslicer. - (BananaUnslicerRegistry): handle the add-vocab and set-vocab - sequences with a registry rather than special-casing them. 
- * foolscap/storage.py (UnsafeRootUnslicer): same, add the - BananaUnslicerRegistry - - * foolscap/banana.py (setOutgoingVocabulary): make it safe - to call this function at any time, as it merely schedules - an update. Change the signature to accept a list of strings - that should be tokenized rather than expecting the caller to - choose the index values as well. - (addToOutgoingVocabulary): new function to tokenize a single - string, also safe to call at any time - (outgoingVocabTableWasReplaced): - (allocateEntryInOutgoingVocabTable): - (outgoingVocabTableWasAmended): new functions for use by the - Slicers that are sending the 'set-vocab' and 'add-vocab' sequences - (Banana.maybeVocabizeString): reserve a place for adaptive - tokenizing - * foolscap/test/test_banana.py: match the changes - - - * foolscap/broker.py: s/topRegistry/topRegistries/, since it is - actually a list of Registries. Same for openRegistry and - openRegistries - * foolscap/slicer.py: same - * foolscap/storage.py: same - * foolscap/test/test_banana.py: same - - * foolscap/slicer.py (BuiltinSetSlicer): use a different test to - look for python < 2.4, one which doesn't make pyflakes complain - about using __builtins__ - -2006-09-30 Brian Warner - - * foolscap/promise.py (Promise): implement a simpler syntax, at - the encouragement of Zooko and others: now p.foo(args) does an - eventual-send. This is a simpler form of send(p).foo(args). Added - _then and _except methods to do simple callback/errback handling. - You can still do send() and sendOnly() on either immediate values - or Promises: this shortcut only helps with send() on a Promise. - You can still do when() on a Promise, which is more flexible - because it returns a Deferred. The new syntax gives you a more - dataflow-ish style of coding, which might be confusing in some - ways but can also make the overall code much easier to read.
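The p.foo(args) eventual-send syntax described in the Promise entry above can be sketched like this. This is not foolscap's promise.py: the scheduling uses a plain queue instead of the Twisted reactor, and send()/sendOnly()/when() are omitted; only the _then/_except names follow the entry.

```python
# Minimal sketch of eventual-send Promises (assumption: a simple FIFO
# queue stands in for the reactor's event loop).
import collections

_queue = collections.deque()

def eventually(f):
    """Schedule f() to run on a later 'turn' (stand-in for the reactor)."""
    _queue.append(f)

def flushEventualQueue():
    """Run queued turns until none remain, as the unit tests would do."""
    while _queue:
        _queue.popleft()()

class Promise:
    def __init__(self, target=None):
        self._target = target
        self._callbacks = []
        self._errbacks = []

    def _then(self, cb):
        self._callbacks.append(cb)
        return self

    def _except(self, eb):
        self._errbacks.append(eb)
        return self

    def __getattr__(self, name):
        # p.foo(args) schedules target.foo(args) for a later turn and
        # immediately returns a new Promise for the eventual result.
        def remote_call(*args, **kwargs):
            result = Promise()
            def deliver():
                try:
                    value = getattr(self._target, name)(*args, **kwargs)
                except Exception as e:
                    for eb in result._errbacks:
                        eb(e)
                else:
                    result._target = value
                    for cb in result._callbacks:
                        cb(value)
            eventually(deliver)
            return result
        return remote_call

# dataflow-ish usage: nothing runs until the queue is flushed
class Counter:
    def __init__(self):
        self.n = 0
    def add(self, k):
        self.n += k
        return self.n

results = []
p = Promise(Counter())
p.add(3)._then(results.append)
assert results == []        # still pending
flushEventualQueue()
assert results == [3]       # delivered on a later turn
```

The point of the shortcut is visible in the last few lines: the call site reads like a normal method call, but delivery is deferred to a later turn, and the returned Promise can be chained immediately.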
- * foolscap/test/test_promise.py: update tests - - * foolscap/test/common.py (HelperTarget.remote_defer): replace - callLater(0) with fireEventually() - * foolscap/test/test_banana.py (ErrorfulSlicer.next): same - (EncodeFailureTest.tearDown): use flushEventualQueue() for cleanup - - * foolscap/crypto.py (CertificateError): In Twisted >2.5, this - exception is defined in twisted.internet.error, and it is - sometimes raised by the SSL transport (in getPeerCertificate), and - we need to catch it. In older versions, we define it ourselves - even though it will never be raised, so that the code which - catches it doesn't have to have weird conditionals. - * foolscap/negotiate.py (Negotiation.handleENCRYPTED): catch the - CertificateError exception (which indicates that we have an - encrypted but unauthenticated connection: the other end did not - supply a certificate). In older versions of twisted's SSL code, - this was just indicated by having getPeerCertificate() return - None. - - * foolscap/test/test_negotiate.py: re-enable all negotiation tests - - * foolscap/pb.py (UnauthenticatedTub): change the API and docs to - refer to "Unauthenticated" tubs rather than "Unencrypted" ones, - since that's really the choice you get to make. We use encrypted - connections whenever possible; what you get to control is whether - we use keys to provide secure identification and introduction. 
- * foolscap/__init__.py: same, export UnauthenticatedTub instead of - UnencryptedTub - * foolscap/negotiate.py: same - * foolscap/referenceable.py: same - * foolscap/test/test_negotiate.py: same - * doc/listings/pb1server.py: update examples to match - * doc/using-pb.xhtml: same - -2006-09-26 Brian Warner - - * foolscap/pb.py (Tub): rename PBService to Tub, make it always - be encrypted - (UnencryptedTub): new class for unencrypted tubs - * all: fix everything else (code, docs, tests) to match - * foolscap/ipb.py (ITub): new interface to mark a Tub - -2006-09-24 Brian Warner - - * foolscap/referenceable.py (RemoteReferenceTracker._refLost): now - that we have eventually(), use it to avoid the ugly bug-inducing - indeterminacies that result from weakref callbacks being fired in - the middle of other operations. - - * foolscap/promise.py (Promise._resolve): I think I figured out - chained Promises. In the process, I made it illegal to call - _break after the Promise has already been resolved. This also - means that _resolve() can only be called once. We'll figure - out breakable Far references later. - * foolscap/test/test_promise.py (Chained): tests for them - - * foolscap/broker.py (Broker.getRemoteInterfaceByName): fix a bunch - of typos caught by pyflakes. Left a couple of ones in there that I - haven't figured out how to fix yet. - * foolscap/slicer.py (InstanceUnslicer.receiveChild): same - * foolscap/schema.py (RemoteMethodSchema.initFromMethod): same - * foolscap/pb.py (Listener.addTub): same - * foolscap/debug.py (TokenBanana.reportReceiveError): same - * foolscap/copyable.py: same - * foolscap/test/common.py: same - * foolscap/test/test_pb.py (TestReferenceable.NOTtestRemoteRef1): - same - - * foolscap/eventual.py: move eventual-send handling out to a - separate file. This module now provides eventually(cb), - d=fireEventually(), and d=flushEventualQueue() (for use by - unit tests, not user code). 
- * foolscap/negotiate.py: update to match - * foolscap/test/common.py: same - * foolscap/test/test_pb.py: same - * foolscap/test/test_eventual.py: new tests for eventually() - * foolscap/promise.py: rework Promise handling, now it behaves - like I want it to (although chained Promises aren't working yet) - * foolscap/test/test_promise.py: rework tests - -2006-09-16 Brian Warner - - * foolscap/crypto.py: fall back to using our own sslverify.py if - Twisted doesn't provide one (i.e. Twisted-2.4.x). - * foolscap/sslverify.py: copy from the Divmod tree - -2006-09-14 Brian Warner - - * foolscap/banana.py: remove #! line from non-script - * foolscap/remoteinterface.py: same - * foolscap/tokens.py: same - * foolscap/test/test_schema.py: same - - * foolscap/__init__.py: bump to 0.0.2+ while between releases - -2006-09-14 Brian Warner - - * foolscap/__init__.py: release Foolscap-0.0.2 - -2006-09-14 Brian Warner - - * doc/using-pb.xhtml: update pb3 example to match current usage, show - an example of using encrypted Tubs - * doc/listings/pb3calculator.py: same - * doc/listings/pb3user.py: same - - * foolscap/__init__.py: rearrange the API: now 'import foolscap' - is the preferred entry point, rather than 'from foolscap import pb'. - * foolscap/pb.py: stop importing things just to make them available - to people who import foolscap.pb - * all: same, update docs, examples, tests - - * all: rename newpb to 'Foolscap' - * setup.py: fix packages= to get tests too - - -2006-05-15 Brian Warner - - * test_zz_resolve.py: rename test file, I'd like it to sit at the end - of the tests rather than at the beginning. This is to investigate - ticket #1390. - - * test_negotiate.py (Crossfire): oops, a cut-and-paste error - resulted in two CrossfireReverse tests and zero Crossfire tests. - Fixed this to enable the possibly-never-run real CrossfireReverse - test case.
- (top): disable all negotiation tests unless NEWPB_TEST_NEGOTIATION - is set in the environment, since they are sensitive to system load - and the intermittent buildbot failures are annoying. - -2006-05-05 Brian Warner - - * release-twisted: add 'pb' subproject - * twisted/python/dist.py: same - * twisted/pb/__init__.py: set version to 0.0.1 - * twisted/pb/topfiles/setup.py: fix subproject name, set version 0.0.1 - -2006-04-29 Brian Warner - - * topfiles/README, topfiles/NEWS: prepare for 0.0.1 release - * setup.py: fix up description, project name - * test_ZZresolve.py: add some instrumentation to try and debug the - occasional all-connection-related-tests-fail problem, which I - suspect involves the threadpool being broken. - -2006-02-28 Brian Warner - - * sslverify.py: update to latest version (r5075) from Vertex SVN, - to fix a problem reported on OS-X with python2.4 . Removed the - test-case-name tag to prevent troubles with buildbot on systems - that don't also have vertex installed. I need to find a better - solution for this in the long run: I don't want newpb to depend - upon Vertex, but I also don't want to duplicate code. - -2006-02-27 Brian Warner - - * debug.py (encodeTokens): return a Deferred rather than use - deferredResult - -2006-02-02 Brian Warner - - * test_negotiate.py: skip pb-vs-web tests when we don't have - twisted.web, thanks to for the patch. - -2006-01-26 Brian Warner - - * test/test_banana.py (ErrorfulSlicer.next): don't use callLater() - with non-zero timeout - * test/test_promise.py (TestPromise.test2): same - * test/common.py (HelperTarget.remote_defer): same - -2006-01-25 Brian Warner - - * copyable.py: refactor ICopyable and IRemoteCopy to make it - possible to register adapters for third-party classes. - (RemoteCopy): allow RemoteCopy to auto-register with the - fully-qualified classname. 
This is only useful if you inherit from - both pb.Copyable and pb.RemoteCopy at the same time, otherwise the - sender and receiver will be using different names so they won't - match up. - - * broker.py (PBRootUnslicer.open): now that registerRemoteCopy is - done purely in terms of Unslicers, remove all the special-case code - that handled IRemoteCopy - (PBRootSlicer.slicerForObject): since zope.interface won't do - transitive adaptation, manually handle the - ThirdPartyClass -> ICopyable -> ISlicer case - - * test/test_copyable.py: clean up, improve comments - (MyRemoteCopy3Unslicer): update to match new RemoteCopyUnslicer - behavior. This needs to be documented and made easier. Also switch - from pb.registerRemoteCopy to registerRemoteCopyUnslicerFactory, - which is a mouthful. - (Registration): split this out, update to match new debug tables - in copyable.py - (Adaptation): test ICopyable adapters - -2006-01-23 Brian Warner - - * common.py: remove missing test_gift from the test-case-name tag, - not sure how that got in there - - * test/test_copyable.py: split Copyable tests out of test_pb.py - * test/common.py: factor out some more common test utility pieces - * copyable.py: add suitable test-case-name tag - - * base32.py: rename Base32.py to base32.py, to match Twisted - naming conventions - * crypto.py: same - * pb.py: same - -2006-01-02 Brian Warner - - * negotiate.py (eventually): add glyph's eventual-send operator, - based upon a queue cranked by callLater(0). - (flushEventualQueue): provide a way to flush that queue, so tests - know when to finish. - * test/test_pb.py: switch to negotiate.eventually - * test/__init__.py: add test-case-name tag - -2005-12-31 Brian Warner - - * test_gift.py (TestOrderedGifts.testGift): verify that the - presence of a gift (a third-party reference) in the arguments of a - method does not cause that method to be run out-of-order. Marked - TODO because at the moment they *are* run out-of-order. 
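The in-order delivery requirement tested above can be sketched as a head-of-queue gate: inbound calls queue up in arrival order, and a call is delivered only once it reaches the head of the queue AND all of its arguments (e.g. a pending gift) are available. The class and method names here are illustrative, not foolscap's actual InboundDelivery/Broker API.

```python
# Sketch of ordered delivery in the presence of gifts (assumption:
# PENDING marks a not-yet-resolved third-party reference argument).
import collections

PENDING = object()        # placeholder for a not-yet-resolved gift

class InboundDelivery:
    def __init__(self, methname, args):
        self.methname = methname
        self.args = list(args)

    def ready(self):
        return PENDING not in self.args

class DeliveryQueue:
    def __init__(self):
        self.queue = collections.deque()
        self.delivered = []        # stands in for actually running methods

    def add(self, delivery):
        self.queue.append(delivery)
        self.flush()

    def gift_resolved(self, delivery, value):
        # called when the third-party reference finally arrives
        delivery.args = [value if a is PENDING else a for a in delivery.args]
        self.flush()

    def flush(self):
        # deliver strictly in order; stall at the first not-ready head
        while self.queue and self.queue[0].ready():
            d = self.queue.popleft()
            self.delivered.append((d.methname, tuple(d.args)))

# reproduce the ordering example: [send(1), send(2, carol), send(3)]
q = DeliveryQueue()
q.add(InboundDelivery("msg", [1]))
gift = InboundDelivery("msg", [2, PENDING])   # carol not yet retrieved
q.add(gift)
q.add(InboundDelivery("msg", [3]))
# so far only the first call was delivered; 3 waits behind the gift
q.gift_resolved(gift, "carol")
# now the delivery order is [1, (2, carol), 3]
```

Without the gate, resolving the gift lazily would let message 3 jump ahead, which is exactly the [1, 3, (2, carol)] misordering described above.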
- * common.py (RIHelper.append): new method - - * referenceable.py (TheirReferenceUnslicer.ackGift): ignore errors - that involve losing the connection, since if these happen, the - giver will decref the gift reference anyway. This removes some - spurious log.errs and makes the unit tests happier. - -2005-12-30 Brian Warner - - * test_negotiate.py (Versus.testVersusHTTPServerEncrypted): stall - for a second after the test completes, to give the HTTP server a - moment to tear down its socket. Otherwise trial flunks the test - because of the lingering socket. I don't care for the arbitrary - 1.0-second delay, but twisted.web doesn't give me any convenient - way to wait for it to shut down. (this test was only failing under - the gtk2 reactor, but I think this was an unlucky timing thing). - (Versus.testVersusHTTPServerUnencrypted): same - - * negotiate.py (eventually): add an eventual-send operator - (Negotiation.negotiationFailed): fire connector.negotiationFailed - through eventually(), to give us a chance to loseConnection - beforehand. This helps the unit tests clean up better. - - * negotiation.py (eventually): change the eventual-send operator - to (ab)use reactor.callFromThread instead of callLater(0). exarkun - warned me, but I didn't listen: callLater(0) does not guarantee - relative ordering of sequentially-scheduled calls, and the windows - reactors in fact execute them in random order. Obviously I'd like - the reactor to provide a clearly-defined method for this purpose. - * test_pb.py (eventually): same - (Loopback.write): same. It was the reordering of these _write - calls that was breaking the unit tests on windows so badly. - (Loopback.loseConnection): same - - -2005-12-29 Brian Warner - - * test_pb.py (Loopback): fix plan-coordination bug by deferring - all write() and loseConnection() calls until the next reactor - turn, using reactor.callLater(0) as an 'eventual send' operator. 
- This avoids an infinite-mutual-recursion hang that confuses - certain test failures. Tests which use this Loopback must call - flush() and wait on the returned Deferred before finishing. - (TargetMixin): do proper setup/teardown of Loopback - (TestCall.testDisconnect2): use proper CONNECTION_LOST exception - (TestCall.testDisconnect3): same - (TestReferenceable.testArgs1): rename some tests - (TestReferenceable.testArgs2): test sending shared objects in - multiple arguments of a single method call - (TestReferenceable.testAnswer1): test shared objects in the return - value of a method call - (TestReferenceable.testAnswer2): another test for return values - - * call.py (CallUnslicer): inherit from ScopedUnslicer, so - arguments that reference shared objects will accurately reproduce - the object graph - (AnswerUnslicer): same, for answers that have shared objects - (ErrorUnslicer): same, just in case serialized Failures do too - * slicer.py (ImmutableSetSlicer): set trackReferences=False, since - immutable objects are never shared, so don't require reference - tracking - - * banana.py (Banana.sendError): do loseConnection() in sendError - rather than inside dataReceived. - -2005-12-26 Brian Warner - - * slicer.py (ScopedSlicer.registerReference): track references - with a (obj,refid) pair instead of just refid. This ensures that - the object being tracked stays alive until the scope is retired, - preventing some ugly bugs that result from dead object id() values - being reused. These bugs would only happen if the object graph - changes during serialization (which you aren't supposed to do), - but this is a cheap fix that limits the damage that could happen. - In particular, it should fix a test failure on the OS-X buildslave - that results from a unit test that is violating this - object-graph-shouldn't-change prohibition. - - * banana.py (StorageBanana): refactor storage-related things, - moving them from banana.py and slicer.py into the new storage.py.
- This includes UnsafeRootSlicer, StorageRootSlicer, - UnsafeRootUnslicer, and StorageRootUnslicer. Also provide a simple - serialize()/unserialize() pair in twisted.pb.storage, which will - be the primary interface for simple pickle.dumps()-like - serialization. - -2005-12-24 Brian Warner - - * slicer.py: remove #!, add test-case-name - (SetSlicer): define this unconditionally, now that python2.2 is no - longer supported. - (BuiltinSetSlicer): just like SetSlicer, used when there is a builtin - 'set' type (python2.4 and higher) - (ImmutableSetSlicer): define this unconditionally - (SetUnslicer): same - (ImmutableSetUnslicer): same - - * test_banana.py (TestBananaMixin.looptest): make it easier to - test roundtrip encode/decode pairs that don't *quite* re-create - the original object - (TestBananaMixin.loop): clear the token stream for each test - (ThereAndBackAgain.test_set): verify that python2.4's builtin - 'set' type is serialized as a sets.Set - - * all: drop python2.2 compatibility, now that Twisted no longer - supports it - -2005-12-22 Brian Warner - - * pb.py (Listener.getPortnum): more python2.2 fixes, str in str - (PBService.__init__): same, bool issues - * test/test_banana.py: same, use failUnlessSubstring - * test/test_negotiate.py: same - * test/test_pb.py: same - * negotiate.py: same, str in str stuff - - * broker.py: don't import itertools, for python2.2 compatibility - * sslverify.py: same - -2005-12-20 Brian Warner - - * test/test_banana.py: remove all remaining uses of - deferredResult/deferredError - * test/test_pb.py: same - -2005-12-09 Brian Warner - - * pb.py (PBService.__init__): switch to SHA-1 for TubID digests - * negotiate.py (Negotiation.evaluateHello): same - * crypto.py (digest32): same - -2005-12-08 Brian Warner - - * pb.py (PBService): allow all Tubs to share the same RandomPool - -2005-10-10 Brian Warner - - * lots: overhaul negotiation, add lots of new tests. 
Implement - shared Listeners, correct handling of both encrypted and - non-encrypted Tubs, follow multiple locationHints correctly. More - docs, update encrypted-tub examples to match new usage. - -2005-09-15 Brian Warner - - * test_pb.py: remove some uses of deferredResult/deferredError - -2005-09-14 Brian Warner - - * pb.py (PBService.generateSwissnumber): use PyCrypto RNG if - available, otherwise use the stdlib 'random' module. Create a - 160-bit swissnumber by default; this can be changed by the - NAMEBITS class attribute. - (PBService.__init__): use a random 32-bit number as a TubID when - we aren't using crypto and an SSL certificate - * Base32.py: copy module from the waterken.org Web-Calculus - python implementation - * test/test_crypto.py (TestService.getRef): let it register a - random swissnumber instead of a well-known name - - - * crypto.py: Implement encrypted PB connections, so PB-URLs are - closer to being secure capabilities. This file contains utility - functions. - * sslverify.py: some pyOpenSSL wrappers, copied from Divmod's - Vertex/vertex/sslverify.py - - * test/test_crypto.py: test case for encrypted connections - - * pb.py (PBServerFactory.buildProtocol): accommodate missing tubID; - this needs to be re-thought when I do the "what if we aren't using - crypto" pass. - (PBServerFactory.clientConnectionMade): get the remote_tubid from - a .theirTubID attribute, not the negotiated connection parameters, - which won't include tub IDs anyway - (PBClientFactory.buildProtocol): if we're using crypto, tell the - other side we want an encrypted connection - (PBService.__init__): add useCrypto= parameter, currently defaults - to False. This should switch to =True soon. - (PBService.createCertificate): if useCrypto=True, create an SSL - certificate for the Tub.
- - * ipb.py (DeadReferenceError): actually define it somewhere - - * broker.py (Broker.handleNegotiation_v1): cleanup, make the - different negotiation-parameter dictionaries distinct, track the - ['my-tub-id'] field of each end more carefully. Start a TLS - session when both ends want it. - (Broker.startTLS): method to actually start the TLS session. This - is called on both sides (client and server); the t.i.ssl - subclasses figure out which is which and inform SSL appropriately. - (Broker.acceptNegotiation): Make a PB-specific form. Start TLS if - the server tells us to. When the second (encrypted) negotiation - block arrives, verify that the TubID we're looking for matches - both what they claim and what their SSL certificate contains. - (Broker.freeYourReference): ignore DeadReferenceErrors too - - * banana.py (Banana.__init__): each instance must have its own - copy of self.negotiationOffer, rather than setting it at the class - level - (Banana.negotiationDataReceived): let both handleNegotiation() and - acceptNegotiation() return a 'done' flag; if False, the - negotiation is re-started - (Banana.handleNegotiation): make handleNegotiation_v1 responsible - for setting self.negotiationResults - (Banana.handleNegotiation_v1): same - (Banana.acceptNegotiation): same - -2005-09-09 Brian Warner - - * broker.py: big sanity-cleanup of RemoteInterface usage. Only - allow a single RemoteInterface on any given pb.Referenceable. - rr.callRemote() now only takes a string-form method name, so - the rr.callRemote(RIFoo['bar'], *args) form is gone, and the one - RemoteInterface associated with the RemoteReference (if available) - will be checked. Tub.getReference() no longer takes an interface - name: you request an object, and then later find out what it - implements (rather than specifying your expectations ahead of - time). Gifts (i.e. 'their-reference' sequences) no longer have an - interfacename;
that is left up to the actual owner of the - reference, who will provide it in the 'my-reference' sequence. - * call.py, pb.py, referenceable.py, remoteinterface.py: same - * test/test_pb.py: update to match, still needs some cleanup - -2005-09-08 Brian Warner - - * setup.py, twisted/pb/topfiles: add "PB" sub-project - - * banana.py (Banana.sendFailed): oops, loseConnection() doesn't - take an argument - - * copyable.py (RemoteCopyClass): make it possible to disable - auto-registration of RemoteCopy classes - * test/test_pb.py (TestCopyable.testRegistration): test it - - * referenceable.py (CallableSlicer): make it possible to publish - callables (bound methods in particular) as secure capabilities. - They are handled very much like pb.Referenceable, but with a - negative CLID number and a slightly different callRemote() - codepath. - * broker.py (Broker.getTrackerForMyCall): same - (Broker.getTrackerForYourReference): same, use a - RemoteMethodReferenceTracker for negative CLID values - (Broker.doCall): callables are distinguished by having a - methodname of 'None', and are dispatched differently - * call.py (CallUnslicer.checkToken): accept INT/NEG for the object - ID (the CLID), but not string (leftover from old scheme) - (CallUnslicer.receiveChild): handle negative CLIDs specially - * test/test_pb.py (TestCallable): tests for it all - (TestService.getRef): refactor - (TestService.testStatic): verify that we can register static data - too, at least stuff that can be hashed. We need to decide whether - it would be useful to publish non-hashable static data too. - -2005-09-05 Brian Warner - - * pb.py (PBService): move to using tubIDs as the primary identity - key for a Tub, replacing the baseURL with a .location attribute. - Look up references by name instead of by URL, and start using - SturdyRefs locally instead of URLs whenever possible. 
- (PBService.getReference): accept either a SturdyRef or a URL - (RemoteTub.__init__): take a list of locationHints instead of a - single location. The try-all-of-them code is not yet written, nor - is the optional redirect-following. - (RemoteTub.getReference): change the inter-Tub protocol to pass a - name over the wire instead of a full URL. The Broker is already - connected to a specific Tub (multiple Tubs sharing the same port - will require separate Brokers), and by this point the location - hints have already served their purpose, so the name is the only - appropriate thing left to send. - - * broker.py (RIBroker.getReferenceByName): match that change to - the inter-Tub protocol: pass name over the wire, not URL - (Broker.getYourReferenceByName): same - (Broker.remote_getReferenceByName): same - - * referenceable.py (RemoteReferenceOnly): replace getURL with - getSturdyRef, since the SturdyRef can be stringified into a URL if - necessary - (SturdyRef): new class. When these are sent over the wire, they - appear at the far end as an identical SturdyRef; if you want them - to appear as a live reference, send sr.asLiveRef() instead. - - * test/test_pb.py (TestService.testRegister): match changes - (Test3Way.setUp): same - (HelperTarget.__init__): add some debugging annotations - * test/test_sturdyref.py: new test - - * doc/pb/using-pb.xhtml: update to match new usage, explain PB - URLs and secure identifiers - * doc/pb/listings/pb1server.py: same - * doc/pb/listings/pb1client.py: same - * doc/pb/listings/pb2calculator.py: same - * doc/pb/listings/pb2user.py: same - -2005-05-12 Brian Warner - - * doc/pb/using-pb.xhtml: document RemoteInterface, Constraints, - most of Copyable (still need examples), Introductions (third-party - references). 
- * doc/pb/listings/pb2calculator.py, pb2user.py: demonstrate - bidirectional references, using service.Application - -2005-05-10 Brian Warner - - * broker.py (Broker.freeYourReference): also ignore ConnectionLost - errors - * doc/pb/listings/pb1client.py, pb1server.py: use reactor.run() - * doc/pb/using-pb.xhtml: add shell output for examples - - * doc/pb/using-pb.xhtml: started writing usage docs - - * banana.py (Banana.dataReceived): add .connectionAbandoned, don't - accept inbound data if it has been set. I don't trust - .loseConnection to work right away, and sending multiple - negotiation error messages is bad. - (Banana.negotiationDataReceived): split out negotiation stuff to a - separate method. Improve failure-reporting code to make sure we - either report a problem with a negotiation block, or with an ERROR - token, not both, and not with multiple ERROR tokens. Catch errors - in the upper-level bananaVersionNegotiated() call. Make sure we - only send a response if we're the server. Report negotiation errors - with NegotiationError, not BananaError. - (Banana.reportReceiveError): rearrange a bit, accept a Failure - object. Don't do transport.loseConnection here; do it in whatever - calls reportReceiveError - * debug.py (TokenBanana.reportReceiveError): match signature change - (TokenStorageBanana.reportReceiveError): same - * test/test_banana.py: match changes - * tokens.py (NegotiationError): new exception - - * broker.py (Broker.handleNegotiation_v1): use the negotiation - block to exchange TubIDs. - (Broker.connectionFailed): tell the factory if negotiation failed - (Broker.freeYourReference): ignore lost-connection errors, call - freeYourReferenceTracker even if the connection was lost, since - in that case the reference has gone away anyway. - (Broker.freeYourReferenceTracker): don't explode if the keys were - already deleted, since .connectionLost will clear everything - before the decref-ack mechanism gets a chance to delete them.
- * referenceable.py (RemoteReferenceTracker.__repr__): stringify - these with more useful information. - * pb.py (PBServerFactory.buildProtocol): copy .debugBanana flag - into the new Broker (both .debugSend and .debugReceive) - (PBServerFactory.clientConnectionMade): survive a missing TubID - (PBClientFactory.negotiationFailed): notify all onConnect watchers - -2005-05-08 Brian Warner - - * test_pb.py (TestService): test the use of PBService without - RemoteInterfaces too - -2005-05-04 Brian Warner - - * broker.py (Broker): add tables to track gifts (third-party - references) - (PBOpenRegistry): add their-reference entry - (RIBroker.decgift): new method to release pending gifts - * call.py (PendingRequest): add some debugging hints - (CallUnslicer): accept deferred arguments, don't invoke the method - until all arguments are available - * pb.py (PBService.listenOn): return the Service, for testing - (PBService.generateUnguessableName): at least make them unique, - if not actually unguessable - (top): remove old URL code, all is now PBService - * referenceable.py (RemoteReferenceOnly.__repr__): include the - URL, if available - (RemoteReference.callRemote): set .methodName on the - PendingRequest, to make debugging easier - (YourReferenceSlicer.slice): handle third-party references - (TheirReferenceUnslicer): accept third-party references - * schema.py (Nothing): a constraint which only accepts None - * test/test_pb.py (Test3Way): validate third-party reference gifts - -2005-04-28 Brian Warner - - * tokens.py (IReferenceable): move to flavors.py - * flavors.py (IReferenceable): add it, mark Referenceable as - implementing it. - * pb.py (PBServerFactory): make root= optional - (PBService): new class. In the future, all PB uses will go through - this service, rather than using factories and connectTCPs directly. - The service uses urlparse to map PB URLs to target hosts. 
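The urlparse-based mapping of PB URLs onto target hosts mentioned above can be illustrated like this. The "pb://tubid@host:port/name" shape is an assumption based on the surrounding entries (later foolscap FURLs look similar but are not identical), and the modern urllib.parse spelling is used here rather than the Python 2 urlparse module of the era.

```python
# Hypothetical sketch of splitting a PB URL into connection parameters.
# The URL shape is an assumption, not a definitive wire format.
from urllib.parse import urlparse

def parse_pb_url(url):
    u = urlparse(url)
    if u.scheme != "pb":
        raise ValueError("not a PB URL: %r" % url)
    tub_id = u.username              # the part before '@'
    host = u.hostname                # where to connect
    port = u.port
    name = u.path.lstrip("/")        # the object's registered name
    return tub_id, host, port, name
```

urlparse does the netloc splitting (username/hostname/port) for free, which is presumably why the ChangeLog entry reaches for it instead of hand-rolled string slicing.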
- * test_pb.py (TestService): start adding tests for PBService - -2005-04-26 Brian Warner - - * banana.py: add preliminary newpb connection negotiation - * test_banana.py: start on tests for negotiation, at least verify - that newpb-newpb works, and that newpb-http and http-newpb fail. - -2005-04-16 Brian Warner - - * banana.py (Banana.handleData): handle -2**31 properly - * test_banana.py (ThereAndBackAgain.test_bigint): test it properly - - * flavors.py: python2.2 compatibility: __future__.generators - * pb.py: same - * schema.py (TupleConstraint.maxSize): don't use sum() - (AttributeDictConstraint.maxSize): same - (makeConstraint): in 2.2, 'bool' is a function, not a type, and - there is no types.BooleanType - * slicer.py: __future__.generators, and the 'sets' module might not - be available - (SetSlicer): only define it if 'sets' is available - (SetUnslicer): same - * test_banana.py: __future__.generators, 'sets' might not exist, - (EncodeFailureTest.failUnlessIn): 2.2 can't do 'str in str', only - 'char in str', so use str.find() instead - (InboundByteStream2.testConstrainedBool): skip bool constraints - unless we have a real BooleanType - (ThereAndBackAgain.test_set): skip sets unless they're supported - * test_schema.py (ConformTest.testBool): skip on 2.2 - (CreateTest.testMakeConstraint): same - * test_pb.py: __future__.generators, use str.find() - - * test_banana.py (DecodeTest.test_ref2): accommodate python2.4, - which doesn't try to be quite as clever as python2.3 when - comparing complex object graphs with == - (DecodeTest.test_ref5): same. Do the comparison by hand. - (DecodeTest.test_ref6): same, big gnarly validation phase - - * test_pb.py (TestReferenceUnslicer.testNoInterfaces): update to - new signature for receiveClose() - (TestReferenceUnslicer.testInterfaces): same - (TestCall.testFail1): deferredError doesn't seem to like - CopiedFailure all that much. Use trial's return-a-deferred - support instead.
- (MyRemoteCopy3Unslicer.receiveClose): same - (TestCall.testFail2): same - (TestCall.testFail3): same - (TestFactory): clean up both server and client sockets, to avoid - the "unclean reactor" warning from trial - (Test3Way.tearDown): clean up client sockets - - * tokens.py (receiveClose): fix documentation - - * pb.py (CopiedFailure): make CopiedFailure old-style, since you - can't raise new-style instances as exceptions, and CopiedFailure - may have its .trap() method invoked, which does 'raise self'. - (CopiedFailure.__str__): make it clear that this is a - CopiedFailure, not a normal Failure. - (callRemoteURL_TCP): Add a _gotReferenceCallback argument, to - allow test cases to clean up their client connections. - - * flavors.py (RemoteCopyOldStyle): add an old-style base class, so - CopiedFailure can be old-style. Make RemoteCopy a new-style - derivative. - - * test_banana.py (DecodeTest.test_instance): fix the - manually-constructed class names to reflect their new location in - the tree (test_banana to twisted.pb.test.test_banana) - (EncodeFailureTest.test_instance_unsafe): same - - * twisted/pb/*: move newpb from Sandbox/warner into the 'newpb' - branch, distributed out in twisted/pb/ and doc/pb/ - * twisted/pb: add __init__.py files to make it a real module - * twisted/pb/test/test_*.py: fix up import statements - -2005-03-22 Brian Warner - - * flavors.py: implement new signature - * pb.py: same - * test_pb.py: same - - * test_banana.py (BrokenDictUnslicer.receiveClose): new signature - (ErrorfulUnslicer.receiveChild): same - (ErrorfulUnslicer.receiveClose): same - (FailingUnslicer.receiveChild): same - - * slicer.py: implement new receiveChild/receiveClose signature. - Require that ready_deferred == None for now. 
- (ListUnslicer.receiveChild): put "placeholder" in the list instead - of the Deferred - (TupleUnslicer.start): change the way we keep track of - not-yet-constructable tuples, using a counter of unreferenceable - children instead of counting the Deferred placeholders in the list - (TupleUnslicer.receiveChild): put "placeholder" in the list - instead of the Deferred - - * banana.py (Banana.reportReceiveError): when debugging, log the - exception in a way that doesn't cause trial to think the test - failed. - (Banana.handleToken): implement new receiveChild signature - (Banana.handleClose): same - * debug.py (LoggingBananaMixin.handleToken): same - - * tokens.py (IUnslicer.receiveChild): new signature for - receiveClose and receiveChild, they now pass a pair of (obj, - ready_deferred), where obj is still object-or-deferred, but - ready_deferred is non-None when the object will not be ready to - use until some other event takes place (like a "slow" global - reference is established). - -# Local Variables: -# add-log-time-format: add-log-iso8601-time-string -# End: diff --git a/src/foolscap/MANIFEST.in b/src/foolscap/MANIFEST.in deleted file mode 100644 index 026d37fa..00000000 --- a/src/foolscap/MANIFEST.in +++ /dev/null @@ -1,12 +0,0 @@ -include ChangeLog MANIFEST.in NEWS -include doc/*.txt doc/*.xhtml doc/*.css doc/template.tpl -include doc/listings/*.py -include doc/specifications/*.xhtml -include Makefile -include misc/dapper/debian/* -include misc/edgy/debian/* -include misc/feisty/debian/* -include misc/sid/debian/* -include misc/sarge/debian/* -include misc/testutils/* -include misc/testutils/twisted/plugins/* diff --git a/src/foolscap/Makefile b/src/foolscap/Makefile deleted file mode 100644 index 99496904..00000000 --- a/src/foolscap/Makefile +++ /dev/null @@ -1,56 +0,0 @@ - -.PHONY: build test debian-sid debian-dapper debian-feisty debian-sarge -.PHONY: debian-edgy - -build: - python setup.py build - -TEST=foolscap -test: - trial $(TEST) - -test-figleaf: - 
rm -f .figleaf - PYTHONPATH=misc/testutils trial --reporter=bwverbose-figleaf $(TEST) - -figleaf-output: - rm -rf coverage-html - PYTHONPATH=misc/testutils python misc/testutils/figleaf2html -d coverage-html -r . - @echo "now point your browser at coverage-html/index.html" - -debian-sid: - rm -f debian - ln -s misc/sid/debian debian - chmod a+x debian/rules - debuild -uc -us - -debian-dapper: - rm -f debian - ln -s misc/dapper/debian debian - chmod a+x debian/rules - debuild -uc -us - -debian-edgy: - rm -f debian - ln -s misc/edgy/debian debian - chmod a+x debian/rules - debuild -uc -us - -debian-feisty: - rm -f debian - ln -s misc/feisty/debian debian - chmod a+x debian/rules - debuild -uc -us - -debian-sarge: - rm -f debian - ln -s misc/sarge/debian debian - chmod a+x debian/rules - debuild -uc -us - -DOC_TEMPLATE=doc/template.tpl -docs: - lore -p --config template=$(DOC_TEMPLATE) --config ext=.html \ - `find doc -name '*.xhtml'` - - diff --git a/src/foolscap/NEWS b/src/foolscap/NEWS deleted file mode 100644 index 839c6de9..00000000 --- a/src/foolscap/NEWS +++ /dev/null @@ -1,898 +0,0 @@ -User visible changes in Foolscap (aka newpb/pb2). -*- outline -*- - -* Release 0.1.5 (07 Aug 2007) - -** Compatibility - -This release is fully compatible with 0.1.4 and 0.1.3 . - -** CopiedFailure improvements - -When a remote method call fails, the calling side gets back a CopiedFailure -instance. These instances now behave slightly more like the (local) Failure -objects that they are intended to mirror, in that .type now behaves much like -the original class. This should allow trial tests which result in a -CopiedFailure to be logged without exploding. In addition, chained failures -(where A calls B, and B calls C, and C fails, so C's Failure is eventually -returned back to A) should work correctly now. - -** Gift improvements - -Gifts inside return values should properly stall the delivery of the response -until the gift is resolved. 
Gifts in all sorts of containers should work -properly now. Gifts which cannot be resolved successfully (either because the -hosting Tub cannot be reached, or because the name cannot be found) will now -cause a proper error rather than hanging forever. Unresolvable gifts in -method arguments will cause the message to not be delivered and an error to -be returned to the caller. Unresolvable gifts in method return values will -cause the caller to receive an error. - -** IRemoteReference() adapter - -The IRemoteReference() interface now has an adapter from Referenceable which -creates a wrapper that enables the use of callRemote() and other -IRemoteReference methods on a local object. - -The situation where this might be useful is when you have a central -introducer and a bunch of clients, and the clients are introducing themselves -to each other (to create a fully-connected mesh), and the introductions are -using live references (i.e. Gifts), then when a specific client learns about -itself from the introducer, that client will receive a local object instead -of a RemoteReference. Each client will wind up with n-1 RemoteReferences and -a single local object. - -This adapter allows the client to treat all these introductions as equal. A -client that wishes to send a message to everyone it's been introduced to -(including itself) can use: - - for i in introductions: - IRemoteReference(i).callRemote("hello", args) - -In the future, if we implement coercing Guards (instead of -compliance-asserting Constraints), then IRemoteReference will be useful as a -guard on methods that want to insure that they can do callRemote (and -notifyOnDisconnect, etc) on their argument. - -** Tub.registerNameLookupHandler - -This method allows a one-argument name-lookup callable to be attached to the -Tub. 
This augments the table maintained by Tub.registerReference, allowing -Referenceables to be created on the fly, or persisted/retrieved on disk -instead of requiring all of them to be generated and registered at startup. - - -* Release 0.1.4 (14 May 2007) - -** Compatibility - -This release is fully compatible with 0.1.3 . - -** getReference/connectTo can be called before Tub.startService() - -The Tub.startService changes that were suggested in the 0.1.3 release notes -have been implemented. Calling getReference() or connectTo() before the Tub -has been started is now allowed, however no action will take place until the -Tub is running. Don't forget to start the Tub, or you'll be left wondering -why your Deferred or callback is never fired. (A log message is emitted when -these calls are made before the Tub is started, in the hopes of helping -developers find this mistake faster). - -** constraint improvements - -The RIFoo -style constraint now accepts gifts (third-party references). This -also means that using RIFoo on the outbound side will accept either a -Referenceable that implements the given RemoteInterface or a RemoteReference -that points to a Referenceable that implements the given RemoteInterface. -There is a situation (sending a RemoteReference back to its owner) that will -pass the outbound constraint but be rejected by the inbound constraint on the -other end. It remains to be seen how this will be fixed. - -** foolscap now deserializes into python2.4-native 'set' and 'frozenset' types - -Since Foolscap is dependent upon python2.4 or newer anyways, it now -unconditionally creates built-in 'set' and 'frozenset' instances when -deserializing 'set'/'immutable-set' banana sequences. The pre-python2.4 -'sets' module has non-built-in set classes named sets.Set and -sets.ImmutableSet, and these are serialized just like the built-in forms. 
- -Unfortunately this means that Set and ImmutableSet will not survive a -round-trip: they'll be turned into set and frozenset, respectively. Worse -yet, 'set' and 'sets.Set' are not entirely compatible. This may cause a -problem for older applications that were written to be compatible with both -python-2.3 and python-2.4 (by using sets.Set/sets.ImmutableSet), for which -the compatibility code is still in place (i.e. they are not using -set/frozenset). These applications may experience problems when set objects -that traverse the wire via Foolscap are brought into close proximity with set -objects that remained local. This is unfortunate, but it's the cleanest way -to support modern applications that use the native types exclusively. - -** bug fixes - -Gifts inside containers (lists, tuples, dicts, sets) were broken: the target -method was frequently invoked before the gift had properly resolved into a -RemoteReference. Constraints involving gifts inside containers were broken -too. The constraints may be too loose right now, but I don't think they -should cause false negatives. - -The unused SturdyRef.asLiveRef method was removed, since it didn't work -anyways. - -** terminology shift: FURL - -The preferred name for the sort of URL that you get back from -registerReference (and hand to getReference or connectTo) has changed from -"PB URL" to "FURL" (short for Foolscap URL). They still start with 'pb:', -however. Documentation is slowly being changed to use this term. - - -* Release 0.1.3 (02 May 2007) - -** Incompatibility Warning - -The 'keepalive' feature described below adds a new pair of banana tokens, -PING and PONG, which introduces a compatibility break between 0.1.2 and 0.1.3 -. Older versions would throw an error upon receipt of a PING token, so the -version-negotiation mechanism is used to prevent banana-v2 (0.1.2) peers from -connecting to banana-v3 (0.1.3+) peers. 
Our negotiation mechanism would make -it possible to detect the older (v2) peer and refrain from using PINGs, but -that has not been done for this release. - -** Tubs must be running before use - -Tubs are twisted.application.service.Service instances, and as such have a -clear distinction between "running" and "not running" states. Tubs are -started by calling startService(), or by attaching them to a running service, -or by starting the service that they are already attached to. The design rule -in operation here is that Tubs are not allowed to perform network IO until -they are running. - -This rule was not enforced completely in 0.1.2, and calls to -getReference()/connectTo() that occurred before the Tub was started would -proceed normally (initiating a TCP connection, etc). Starting with 0.1.3, -this rule *is* enforced. For now, that means that you must start the Tub -before calling either of these methods, or you'll get an exception. In a -future release, that may be changed to allow these early calls, and queue or -otherwise defer the network IO until the Tub is eventually started. (the -biggest issue is how to warn users who forget to start the Tub, since in the -face of such a bug the getReference will simply never complete). - -** Keepalives - -Tubs now keep track of how long a connection has been idle, and will send a -few bytes (a PING of the other end) if no other traffic has been seen for -roughly 4 to 8 minutes. This serves two purposes. The first is to convince an -intervening NAT box that the connection is still in use, to prevent it from -discarding the connection's table entry, since that would block any further -traffic. The second is to accelerate the detection of such blocked -connections, specifically to reduce the size of a window of buggy behavior in -Foolscap's duplicate-connection detection/suppression code. - -This problem arises when client A (behind a low-end NAT box) connects to -server B, perhaps using connectTo(). 
The first connection works fine, and is used for a while. Then, for whatever reason, A and B are silent for a long time (perhaps as short as 20 minutes, depending upon the NAT box). During this silence, A's NAT box thinks the connection is no longer in use and drops the address-translation table entry. Now suppose that A suddenly decides to talk to B. If the NAT box creates a new entry (with a new outbound port number), the packets that arrive on B will be rejected, since they do not match any existing TCP connections. A sees these rejected packets, breaks the TCP connection, and the Reconnector initiates a new connection. Meanwhile, B has no idea that anything has gone wrong. When the second connection reaches B, it thinks this is a duplicate connection from A, and that it already has a perfectly functional (albeit quiet) connection for that TubID, so it rejects the connection during the negotiation phase. A sees this rejection and schedules a new attempt, which ends in the same result. This has the potential to prevent hosts behind NAT boxes from ever reconnecting to the other end, at least until the program at the far end is restarted, or it happens to try to send some traffic of its own.

The same problem can occur if a laptop is abruptly shut down, or unplugged from the network, then moved to a different network. Similar problems have been seen with virtual machine instances that were suspended and moved to a different network.

The longer-term fix for this is a deep change to the way duplicate connections (and cross-connect race conditions) are handled. The keepalives, however, mean that both sides are continually checking to see that the connection is still usable, enabling TCP to break the connection once the keepalives go unacknowledged for a certain amount of time. The default keepalive timer is 4 minutes, and due to the way it is implemented this means that no more than 8 minutes will pass without some traffic being sent.
TCP tends to time out connections after perhaps 15 minutes of unacknowledged traffic, which means that the window of unconnectability is probably reduced from infinity down to about 25 minutes.

The keepalive-sending timer defaults to 4 minutes, and can be changed by calling tub.setOption("keepaliveTimeout", seconds).

In addition, an explicit disconnect timer can be enabled, which tells Foolscap to drop the connection unless traffic has been seen within some minimum span of time. This timer can be set by calling tub.setOption("disconnectTimeout", seconds). Obviously it should be set to a higher value than the keepaliveTimeout. This will close connections faster than TCP will. Both TCP disconnects and the ones triggered by this disconnectTimeout run the risk of false negatives, of course, in the face of unreliable networks.

** New constraints

When a tuple appears in a method constraint specification, it now maps to an actual TupleOf constraint. Previously they mapped to a ChoiceOf constraint. In practice, TupleOf appears to be much more useful, and thus better deserving of the shortcut.

For example, a method defined as follows:

  def get_employee(idnumber=int):
      return (str, int, int) # (name, room_number, age)

can only return a three-element tuple, in which the first element is a string (specifically it conforms to a default StringConstraint), and the second two elements are ints (which conform to a default IntegerConstraint, which means they fit in a 32-bit signed twos-complement value).

To specify a constraint that can accept alternatives, use ChoiceOf:

  def get_record(key=str):
      """Return the record (a string) if it is present, or None if
      it is not present."""
      return ChoiceOf(str, None)

UnicodeConstraint has been added, with minLength=, maxLength=, and regexp= arguments.
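The length/regexp checking that these string constraints perform can be sketched with a toy checker. This is an illustration only, not foolscap.schema's actual implementation: the class name, the Violation stand-in, and the use of re.match here are all assumptions.

```python
import re

class Violation(Exception):
    """Stand-in for foolscap's schema-violation error."""

class ToyStringConstraint:
    # Toy model of a string constraint taking minLength=, maxLength=,
    # and regexp= arguments; NOT foolscap.schema's real class.
    def __init__(self, minLength=0, maxLength=None, regexp=None):
        self.minLength = minLength
        self.maxLength = maxLength
        # accept either a pattern string or a precompiled pattern
        self.regexp = re.compile(regexp) if isinstance(regexp, str) else regexp

    def checkObject(self, obj):
        if not isinstance(obj, str):
            raise Violation("not a string")
        if len(obj) < self.minLength:
            raise Violation("too short")
        if self.maxLength is not None and len(obj) > self.maxLength:
            raise Violation("too long")
        if self.regexp is not None and not self.regexp.match(obj):
            raise Violation("does not match regexp")

# a 20-character hex identifier, for example
HexId = ToyStringConstraint(minLength=20, maxLength=20,
                            regexp=r'[\dA-Fa-f]+')
HexId.checkObject("DEADBEEFdeadbeef0123")   # accepted: 20 hex digits
```

The real constraints also inspect tokens as they arrive over the wire (see the "Bugs not fixed" caveat below); this sketch only shows the post-hoc object check.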
- -The previous StringConstraint has been renamed to ByteStringConstraint (for -accuracy), and it is defined to *only* accept string objects (not unicode -objects). 'StringConstraint' itself remains equivalent to -ByteStringConstraint for now, but in the future it may be redefined to be a -constraint that accepts both bytestrings and unicode objects. To accomplish -the bytestring-or-unicode constraint now, you might try -schema.AnyStringConstraint, but it has not been fully tested, and might not -work at all. - -** Bugfixes - -Errors during negotiation were sometimes delivered in the wrong format, -resulting in a "token prefix is limited to 64 bytes" error message. Several -error messages (including that one) have been improved to give developers a -better chance of determining where the actual problem lies. - -RemoteReference.notifyOnDisconnect was buggy when called on a reference that -was already broken: it failed to fire the callback. Now it fires the callback -soon (using an eventual-send). This should remove a race condition from -connectTo+notifyOnDisconnect sequences and allow them to operate reliably. -notifyOnDisconnect() is now tolerant of attempts to remove something twice, -which should make it easier to use safely. - -Remote methods which raise string exceptions should no longer cause Foolscap -to explode. These sorts of exceptions are deprecated, of course, and you -shouldn't use them, but at least they won't break Foolscap. - -The Reconnector class (accessed by tub.connectTo) was not correctly -reconnecting in certain cases (which appeared to be particularly common on -windows). This should be fixed now. - -CopyableSlicer did not work inside containers when streaming was enabled. -Thanks to iacovou-AT-gmail.com for spotting this one. 
- -** Bugs not fixed - -Some bugs were identified and characterized but *not* fixed in this release - -*** RemoteInterfaces aren't defaulting to fully-qualified classnames - -When defining a RemoteInterface, you can specify its name with -__remote_name__, or you can allow it to use the default name. Unfortunately, -the default name is only the *local* name of the class, not the -fully-qualified name, which means that if you have an RIFoo in two different -.py files, they will wind up with the same name (which will cause an error on -import, since all RemoteInterfaces known to a Foolscap-using program must -have unique names). - -It turns out that it is rather difficult to determine the fully-qualified -name of the RemoteInterface class early enough to be helpful. The workaround -is to always add a __remote_name__ to your RemoteInterface classes. The -recommendation is to use a globally-unique string, like a URI that includes -your organization's DNS name. - -*** Constraints aren't constraining inbound tokens well enough - -Constraints (and the RemoteInterfaces they live inside) serve three purposes. -The primary one is as documentation, describing how remotely-accessible -objects behave. The second purpose is to enforce that documentation, by -inspecting arguments (and return values) before invoking the method, as a -form of precondition checking. The third is to mitigate denial-of-service -attacks, in which an attacker sends so much data (or carefully crafted data) -that the receiving program runs out of memory or stack space. - -It looks like several constraints are not correctly paying attention to the -tokens as they arrive over the wire, such that the third purpose is not being -achieved. Hopefully this will be fixed in a later release. Application code -can be unaware of this change, since the constraints are still being applied -to inbound arguments before they are passed to the method. 
Continue to use -RemoteInterfaces as usual, just be aware that you are not yet protected -against certain DoS attacks. - -** Use os.urandom instead of falling back to pycrypto - -Once upon a time, when Foolscap was compatible with python2.3 (which lacks -os.urandom), we would try to use PyCrypto's random-number-generation routines -when creating unguessable object identifiers (aka "SwissNumbers"). Now that -we require python2.4 or later, this fallback has been removed, eliminating -the last reference to pycrypto within the Foolscap source tree. - - -* Release 0.1.2 (04 Apr 2007) - -** Bugfixes - -Yesterday's release had a bug in the new SetConstraint which rendered it -completely unusable. This has been fixed, along with some new tests. - -** More debian packaging - -Some control scripts were added to make it easier to create debian packages -for the Ubuntu 'edgy' and 'feisty' distributions. - - -* Release 0.1.1 (03 Apr 2007) - -** Incompatibility Warning - -Because of the technique used to implement callRemoteOnly() (specifically the -commandeering of reqID=0), this release is not compatible with the previous -release. The protocol negotiation version numbers have been bumped to avoid -confusion, meaning that 0.1.0 Tubs will refuse to connect to 0.1.1 Tubs, and -vice versa. Be aware that the errors reported when this occurs may not be -ideal, in particular I think the "reconnector" (tub.connectTo) might not log -this sort of connection failure in a very useful way. - -** changes to Constraints - -Method specifications inside RemoteInterfaces can now accept or return -'Referenceable' to indicate that they will accept a Referenceable of any -sort. Likewise, they can use something like 'RIFoo' to indicate that they -want a Referenceable or RemoteReference that implements RIFoo. 
Note that this -restriction does not quite nail down the directionality: in particular there -is not yet a way to specify that the method will only accept a Referenceable -and not a RemoteReference. I'm waiting to see if such a thing is actually -useful before implementing it. As an example: - -class RIUser(RemoteInterface): - def get_age(): - return int - -class RIUserListing(RemoteInterface): - def get_user(name=str): - """Get the User object for a given name.""" - return RIUser - -In addition, several constraints have been enhanced. StringConstraint and -ListConstraint now accept a minLength= argument, and StringConstraint also -takes a regular expression to apply to the string it inspects (the regexp can -either be passed as a string or as the output of re.compile()). There is a -new SetConstraint object, with 'SetOf' as a short alias. Some examples: - -HexIdConstraint = StringConstraint(minLength=20, maxLength=20, - regexp=r'[\dA-Fa-f]+') -class RITable(RemoteInterface): - def get_users_by_id(id=HexIdConstraint): - """Get a set of User objects; all will have the same ID number.""" - return SetOf(RIUser, maxLength=200) - -These constraints should be imported from foolscap.schema . Once the -constraint interface is stabilized and documented, these classes will -probably be moved into foolscap/__init__.py so that you can just do 'from -foolscap import SetOf', etc. - -*** UnconstrainedMethod - -To disable schema checking for a specific method, use UnconstrainedMethod in -the RemoteInterface definition: - -from foolscap.remoteinterface import UnconstrainedMethod - -class RIUse(RemoteInterface): - def set_phone_number(area_code=int, number=int): - return bool - set_arbitrary_data = UnconstrainedMethod - -The schema-checking code will allow any sorts of arguments through to this -remote method, and allow any return value. This is like schema.Any(), but for -entire methods instead of just specific values. 
Obviously, using this defeats the whole purpose of schema checking, but in some circumstances it might be preferable to allow one or two unconstrained methods rather than leaving the entire class unconstrained (by not declaring a RemoteInterface at all).

*** internal schema implementation changes

Constraints underwent a massive internal refactoring in this release, to avoid a number of messy circular imports. The new way to convert a "shorthand" description (like 'str') into an actual constraint object (like StringConstraint) is to adapt it to IConstraint.

In addition, all constraints were moved closer to their associated slicer/unslicer definitions. For example, SetConstraint is defined in foolscap.slicers.set, right next to SetSlicer and SetUnslicer. The constraints for basic tokens (like lists and ints) live in foolscap.constraint .

** callRemoteOnly

A new "fire and forget" API was added to tell Foolscap that you want to send a message to the remote end, but do not care when or even whether it arrives. These messages are guaranteed not to fire an errback if the connection is already lost (DeadReferenceError) or if the connection is lost before the message is delivered or the response comes back (ConnectionLost). At present, this no-error philosophy is so strong that even schema Violation exceptions are suppressed, and the callRemoteOnly() method always returns None instead of a Deferred. This last part might change in the future.

This is most useful for messages that are tightly coupled to the connection itself, such that if the connection is lost, then it won't matter whether the message was received or not. If the only state that the message modifies is both scoped to the connection (i.e. not used anywhere else in the receiving application) and only affects *inbound* data, then callRemoteOnly might be useful.
It may involve less error-checking code on the sender's side, and it may involve fewer round trips (since no response will be generated when the message is delivered).

As a contrived example, a message which informs the far end that all subsequent messages on this connection will be sent entirely in uppercase (such that the recipient should apply some sort of filter to them) would be suitable for callRemoteOnly. The sender does not need to know exactly when the message has been received, since Foolscap guarantees that all subsequently sent messages will be delivered *after* the 'SetUpperCase' message. And the sender does not need to know whether the connection was lost before or after the receipt of the message, since the establishment of a new connection will reset this 'uppercase' flag back to some known initial-contact state.

  rref.callRemoteOnly("set_uppercase", True) # returns None!

This method is intended to parallel the 'deliverOnly' method used in E's CapTP protocol. It is also used (or will be used) in some internal Foolscap messages to reduce unnecessary network traffic.

** new Slicers: builtin set/frozenset

Code has been added to allow Foolscap to handle the built-in 'set' and 'frozenset' types that were introduced in python-2.4 . The wire protocol does not distinguish between 'set' and 'sets.Set', nor between 'frozenset' and 'sets.ImmutableSet'.

For the sake of compatibility, everything that comes out of the deserializer uses the pre-2.4 'sets' module. Unfortunately that means that a 'set' sent into a Foolscap connection will come back out as a 'sets.Set'. 'set' and 'sets.Set' are not entirely interoperable, and concise things like 'added = new_things - old_things' will not work if the objects are of different types (but note that things like 'added = new_things.difference(old_things)' *do* work).

The current workaround is for remote methods to coerce everything to a locally-preferred form before use.
Better solutions to this are still being sought. The most promising approach is for Foolscap to unconditionally deserialize to the builtin types on python >= 2.4, but then an application which works fine on 2.3 (by using sets.Set) will fail when moved to 2.4 .

** Tub.stopService now indicates full connection shutdown, helping Trial tests

Like all twisted.application.service.MultiService instances, the Tub.stopService() method returns a Deferred that indicates when shutdown has finished. Previously, this Deferred could fire a bit early, when network connections were still trying to deliver the last bits of data. This caused problems with the Trial unit test framework, which insists upon having a clean reactor between tests.

Trial test writers who use Foolscap should include the following sequence in their twisted.trial.unittest.TestCase.tearDown() methods:

  def tearDown(self):
      from foolscap.eventual import flushEventualQueue
      d = tub.stopService()
      d.addCallback(flushEventualQueue)
      return d

This will insure that all network activity is complete, and that all message deliveries thus triggered have been retired. This activity includes any outbound connections that were initiated (but not completed, or finished negotiating), as well as any listening sockets.

The only remaining problem I've seen so far is with reactor.resolve(), which is used to translate DNS names into addresses, and has a window during which you can shut down the Tub and it will leave a cleanup timer lying around. The only solution I've found is to avoid using DNS names in URLs. Of course for real applications this does not matter: it only makes a difference in Trial unit tests which are making heavy use of short-lived Tubs and connections.


* Release 0.1.0 (15 Mar 2007)

** usability improvements

*** Tubs now have a certFile= argument

A certFile= argument has been added to the Tub constructor to allow the Tub to manage its own certificates.
This argument provides a filename where the Tub should read or write its certificate. If the file exists, the Tub will read the certificate data from there. If not, the Tub will generate a new certificate and write it to the file.

The idea is that you can point certFile= at a persistent location on disk, perhaps in the application's configuration or preferences subdirectory, and then not need to distinguish between the first time the Tub has been created and later invocations. This allows the Tub's identity (derived from the certificate) to remain stable from one invocation to the next. The related problem of how to make (unguessable) object names persistent from one program run to the next is still outstanding, but I expect to implement something similar in the future (some sort of file to which object names are written and read later).

certFile= is meant to be used somewhat like this:

  where = os.path.expanduser("~/.myapp.cert")
  t = Tub(certFile=where)
  t.registerReference(obj) # ...

*** All eventual-sends are retired on each reactor tick, not just one.

Applications which make extensive use of the eventual-send operations (in foolscap.eventual) will probably run more smoothly now. In previous releases, the _SimpleCallQueue class would only execute a single eventual-send call per tick, then take care of all pending IO (and any pending timers) before servicing the next eventual-send. This could probably lead to starvation, as those eventual-sends might generate more work (and cause more network IO), which could cause the event queue to grow without bound. The new approach finishes as much eventual-send work as possible before accepting any IO. Any new eventual-sends which are queued during the current tick will be put off until the next tick, but everything which was queued before the current tick will be retired in the current tick.
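The retirement policy described above can be modeled with a toy queue. This is a sketch for illustration only, not Foolscap's actual _SimpleCallQueue code; the class and method names here are invented.

```python
from collections import deque

class ToyEventualQueue:
    """Toy model of the new retirement policy (NOT foolscap's real
    _SimpleCallQueue): each tick retires every call queued before the
    tick began; calls queued *during* the tick wait for the next tick."""
    def __init__(self):
        self._calls = deque()

    def eventually(self, cb, *args, **kwargs):
        self._calls.append((cb, args, kwargs))

    def turn(self):
        # snapshot the current length: anything appended while we run
        # belongs to the next tick, not this one
        for _ in range(len(self._calls)):
            cb, args, kwargs = self._calls.popleft()
            cb(*args, **kwargs)

q = ToyEventualQueue()
log = []

def first():
    log.append("first")
    q.eventually(second)   # queued mid-tick: deferred to the next turn

def second():
    log.append("second")

q.eventually(first)
q.turn()
ticks_after_one_turn = list(log)   # ["first"]
q.turn()
# log is now ["first", "second"]
```

The snapshot in turn() is what distinguishes the new behavior from the old one-call-per-tick scheme: everything already queued drains in one pass, but mid-tick additions cannot starve the reactor's IO.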
** bug fixes

*** Tub certificates can now be used the moment they are created

In previous releases, Tubs were only willing to accept SSL certificates that were created before the moment of checking. If two systems A and B had unsynchronized clocks, and a Foolscap-using application on A was run for the first time to connect to B (thus creating a new SSL certificate), system B might reject the certificate because it looks like it comes from the future.

This problem is endemic in systems which attempt to use the passage of time as a form of revocation. For now at least, to resolve the practical problem of certificates generated on demand and used by systems with unsynchronized clocks, Foolscap does not use certificate lifetimes, and will ignore timestamps on the certificates it examines.


* Release 0.0.7 (16 Jan 2007)

** bug fixes

*** Tubs can now connect to themselves

In previous releases, Tubs were unable to connect to themselves: the following code would fail (the negotiation would never complete, so the connection attempt would eventually time out after about 30 seconds):

  url = mytub.registerReference(target)
  d = mytub.getReference(url)

In release 0.0.7, this has been fixed by catching this case and making it use a special loopback transport (which serializes all messages but does not send them over a wire). There may still be problems with this code, in particular connection shutdown is not completely tested and producer/consumer code is completely untested.

*** Tubs can now getReference() the same URL multiple times

A bug was present in the RemoteReference-unslicing code which caused the following code to fail:

  d = mytub.getReference(url)
  d.addCallback(lambda ref: mytub.getReference(url))

In particular, the second call to getReference() would return None rather than the RemoteReference it was supposed to.

This bug has been fixed.
If the previous RemoteReference is still alive, it will be returned by the subsequent getReference() call. If it has been garbage-collected, a new one will be created.

*** minor fixes

Negotiation errors (such as having incompatible versions of Foolscap on either end of the wire) may be reported more usefully.

In certain circumstances, disconnecting the Tub service from a parent service might have caused an exception before. It might behave better now.


* Release 0.0.6 (18 Dec 2006)

** INCOMPATIBLE PROTOCOL CHANGES

Version 0.0.6 will not interoperate with versions 0.0.5 or earlier, because of changes to the negotiation process and the method-calling portion of the main wire protocol. (you were warned :-). There are still more incompatible changes to come in future versions as the feature set and protocol stabilizes. Make sure you can upgrade both ends of the wire until a protocol freeze has been declared.

*** Negotiation versions now specify a range, instead of a single number

The two ends of a connection will agree to use the highest mutually-supported version. This approach should make it much easier to maintain backwards compatibility in the future.

*** Negotiation now includes an initial VOCAB table

One of the outputs of connection negotiation is the initial table of VOCAB tokens to use for abbreviating commonly-used strings into short tokens (usually just 2 bytes). Both ends have the ability to modify this table at any time, but by setting the initial table during negotiation we save some protocol traffic. VOCAB-izing common strings (like 'list' and 'dict') has the potential to compress wire traffic by maybe 50%.

*** remote methods now accept both positional and keyword arguments

Previously you had to use a RemoteInterface specification to be able to pass positional arguments into callRemote().
(the RemoteInterface schema was used -to convert the positional arguments into keyword arguments before sending -them over the wire). In 0.0.6 you can pass both posargs and kwargs over the -wire, and the remote end will pass them directly to the target method. When -schemas are in effect, the arguments you send will be mapped to the method's -named parameters in the same left-to-right way that python does it. This -should make it easier to port oldpb code to use Foolscap, since you don't -have to rewrite everything to use kwargs exclusively. - -** Schemas now allow =None and =RIFoo - -You can use 'None' in a method schema to indicate that the argument or return -value must be None. This is useful for methods that always return None. You -can also require that the argument be a RemoteReference that provides a -particular RemoteInterface. For example: - -class RIUser(RemoteInterface): - def get_age(): - return int - def delete(): - return None - -class RIUserDatabase(RemoteInterface): - def get_user(username=str): - return RIUser - -Note that these remote interface specifications are parsed at import time, so -any names they refer to must be defined before they get used (hence placing -RIUserDatabase before RIUser would fail). Hopefully we'll figure out a way to -fix this in the future. - -** Violations are now annotated better, might keep more stack-trace information - -** Copyable improvements - -The Copyable documentation has been split out to docs/copyable.xhtml and -somewhat expanded. - -The new preferred Copyable usage is to have a class-level attribute named -"typeToCopy" which holds the unique string. This must match the class-level -"copytype" attribute of the corresponding RemoteCopy class. Copyable -subclasses (or ICopyable adapters) may still implement getTypeToCopy(), but -the default just returns self.typeToCopy . 
Most significantly, we no longer
automatically use the fully-qualified classname: instead we *require* that
the class definition include "typeToCopy". Feel free to use any stable and
globally-unique string here, like a URI in a namespace that you control, or
the fully-qualified package/module/classname of the Copyable subclass.

The RemoteCopy subclass must set the 'copytype' attribute, as it is used for
auto-registration. Subclasses can set copytype=None to inhibit
auto-registration.


* Release 0.0.5 (04 Nov 2006)

** add Tub.setOption, add logRemoteFailures and logLocalFailures

These options control whether we log exceptions (to the standard twisted log)
that occur on other systems in response to messages that we've sent, and that
occur on our system in response to messages that we've received
(respectively). These may be useful while developing a distributed
application. All such log messages have each line of the stack trace prefixed
by REMOTE: or LOCAL: to make it clear where the exception is happening.

** add sarge packaging, improve dependencies for sid and dapper .debs

** fix typo that prevented Reconnector from actually reconnecting


* Release 0.0.4 (26 Oct 2006)

** API Changes

*** notifyOnDisconnect() takes args/kwargs

RemoteReference.notifyOnDisconnect(), which registers a callback to be fired
when the connection to this RemoteReference is lost, now accepts args and
kwargs to be passed to the callback function. Without this, application code
needed to use inner functions or bound methods to close over any additional
state it wanted to get into the disconnect handler.

notifyOnDisconnect() returns a "marker", an opaque value that should be
passed into the corresponding dontNotifyOnDisconnect() function to deregister
the callback. (previously dontNotifyOnDisconnect just took the same argument
as notifyOnDisconnect).
For example:

  class Foo:
      def _disconnect(self, who, reason):
          print "%s left us, because of %s" % (who, reason)
      def connect(self, url, why):
          d = self.tub.getReference(url)
          def _connected(rref):
              self.rref = rref
              m = rref.notifyOnDisconnect(self._disconnect, url, reason=why)
              self.marker = m
          d.addCallback(_connected)
      def stop_caring(self):
          self.rref.dontNotifyOnDisconnect(self.marker)

*** Reconnector / Tub.connectTo()

There is a new connection API for applications that want to connect to a
target and to reconnect to it if/when that connection is lost. This is like
ReconnectingClientFactory, but at a higher layer. You give it a URL to
connect to, and a callback (plus args/kwargs) that should be called each time
a connection is established. Your callback should use notifyOnDisconnect() to
find out when it is disconnected. Reconnection attempts use exponential
backoff to limit the retry rate, and you can shut off reconnection attempts
when you no longer want to maintain a connection.

Use it something like this:

  class Foo:
      def __init__(self, tub, url):
          self.tub = tub
          self.reconnector = tub.connectTo(url, self._connected, "arg")
      def _connected(self, rref, arg):
          print "connected"
          assert arg == "arg"
          self.rref = rref
          self.rref.callRemote("hello")
          self.rref.notifyOnDisconnect(self._disconnected, "blag")
      def _disconnected(self, blag):
          print "disconnected"
          assert blag == "blag"
          self.rref = None
      def shutdown(self):
          self.reconnector.stopConnecting()

Code which uses this pattern will see "connected" events strictly interleaved
with "disconnected" events (i.e. it will never see two "connected" events in
a row, nor two "disconnected" events).

The basic idea is that each time your _connected() method is called, it
should re-initialize all your state by making method calls to the remote
side.
When the connection is lost, all that state goes away (since you have
no way to know what is happening until you reconnect).

** Behavioral Changes

*** All Referenceable objects are now implicitly "giftable"

In 0.0.3, for a Referenceable to be "giftable" (i.e. useable as the payload
of an introduction), two conditions had to be satisfied. #1: the object must
be published through a Tub with Tub.registerReference(obj). #2: that Tub must
have a location set (with Tub.setLocation). Once those conditions were met,
if the object was sent over a wire from this Tub to another one, the
recipient of the corresponding RemoteReference could pass it on to a third
party. Another side effect of calling registerReference() is that the Tub
retains a strongref to the object, keeping it alive (with respect to gc)
until either the Tub is shut down or the object is explicitly de-registered
with unregisterReference().

Starting in 0.0.4, the first condition has been removed. All objects which
pass through a setLocation'ed Tub will be usable as gifts. This makes it much
more convenient to use third-party references.

Note that the Tub will *not* retain a strongref to these objects (merely a
weakref), so such objects might disappear before the recipient has had a
chance to claim them. The lifecycle of gifts is a subject of much research.
The hope is that, for reasonably punctual recipients, the gift will be kept
alive until they claim it. The whole gift/introduction mechanism is likely to
change in the near future, so this lifetime issue will be revisited in a
later release.

** Build Changes

The source tree now has some support for making debian-style packages (for
both sid and dapper). 'make debian-sid' and 'make debian-dapper' ought to
create a .deb package.
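The strict interleaving of "connected" and "disconnected" events promised by
the Reconnector can be modeled as a tiny state machine. This is a toy sketch
of the guarantee, not Foolscap's Reconnector code, and every name here is
invented for illustration:

```python
class ReconnectorModel:
    """Toy model: 'connected' and 'disconnected' events strictly alternate,
    never two of the same kind in a row."""
    def __init__(self, observer):
        self.observer = observer
        self.connected = False

    def connection_made(self):
        # Reconnector never reports a second connection before a loss.
        assert not self.connected, "two 'connected' events in a row"
        self.connected = True
        self.observer("connected")

    def connection_lost(self):
        assert self.connected, "two 'disconnected' events in a row"
        self.connected = False
        self.observer("disconnected")

events = []
m = ReconnectorModel(events.append)
m.connection_made()
m.connection_lost()
m.connection_made()
print(events)  # ['connected', 'disconnected', 'connected']
```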
* Release 0.0.3 (05 Oct 2006)

** API Changes

The primary entry point for Foolscap is now the "Tub":

  import foolscap
  t = foolscap.Tub()
  d = t.getReference(pburl)
  d.addCallback(self.gotReference)
  ...

The old "PBService" name is gone, use "Tub" instead. There are now separate
classes for "Tub" and "UnauthenticatedTub", rather than using an "encrypted="
argument. Tubs always use encryption if available: the difference between the
two classes is whether this Tub should use a public key for its identity or
not. Note that you always need encryption to connect to an authenticated Tub.
So install pyopenssl, really.

** eventual send operators

Foolscap now provides 'eventually' and 'fireEventually', to implement the
"eventual send" operator advocated by Mark Miller's "Concurrency Among
Strangers" paper (http://www.erights.org/talks/promises/index.html).
eventually(cb, *args, **kwargs) runs the given call in a later reactor turn.
fireEventually(value=None) returns a Deferred that will be fired (with
'value') in a later turn. These behave a lot like reactor.callLater(0,..),
except that Twisted doesn't actually promise that a pair of callLater(0)s
will be fired in the right order (they usually do under unix, but they
frequently don't under windows). Foolscap's eventually() *does* make this
guarantee. In addition, there is a flushEventualQueue() that is useful for
unit tests; it returns a Deferred that will only fire when the entire queue
is empty. As long as your code only uses eventually() (and not callLater(0)),
putting the following in your trial test cases should keep everything nice
and clean:

  def tearDown(self):
      return foolscap.flushEventualQueue()

** Promises

An initial implementation of Promises is in foolscap.promise for
experimentation. Only "Near" Promises are implemented so far (promises which
resolve to a local object).
Eventually Foolscap will offer "Far" Promises as
well, and you will be able to invoke remote method calls through Promises as
well as RemoteReferences. See foolscap/test/test_promise.py for some hints.

** Bug Fixes

Messages containing "Gifts" (third-party references) are now delivered in the
correct order. In previous versions, the presence of these references could
delay delivery of the containing message, causing methods to be executed out
of order.

The VOCAB-manipulating code used to have nasty race conditions, which should
be all fixed now. This would be more important if we actually used the
VOCAB-manipulating code yet, but we don't.

Lots of internal reorganization (put all slicers in a subpackage), not really
user-visible.

Updated to work with recent Twisted HEAD, specifically changes to sslverify.
This release of Foolscap ought to work with the upcoming Twisted-2.5 .

** Incompatible protocol changes

There are now separate add-vocab and set-vocab sequences, which add a single
new VOCAB token and replace the entire table, respectively. These replace the
previous 'vocab' sequence which behaved like set-vocab does now. This would
be an incompatible protocol change, except that previous versions never sent
the vocab sequence anyway. This version doesn't send either vocab-changing
sequence either, but when we finally do start using it, it'll be ready.

* Release 0.0.2 (14 Sep 2006)

Renamed to "Foolscap", extracted from underneath the Twisted package,
consolidated the API to allow a simple 'import foolscap'. No new features or
bug fixes relative to pb2-0.0.1 .


* Release 0.0.1 (29 Apr 2006)

First release! All basic features are in place. The wire protocol will almost
certainly change at some point, so compatibility with future versions is not
guaranteed.
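The ordering guarantee that eventually() makes (and that a pair of
reactor.callLater(0) calls does not) can be pictured with a plain FIFO queue.
This is a toy model of the semantics described above, not Foolscap's
implementation; the class and method names are invented:

```python
from collections import deque

class EventualQueue:
    """Toy model of an eventual-send queue: callbacks never run
    synchronously, and they run in exactly the order they were added."""
    def __init__(self):
        self._queue = deque()

    def eventually(self, cb, *args, **kwargs):
        # Only schedule the call; it runs in a strictly later "turn".
        self._queue.append((cb, args, kwargs))

    def flush(self):
        # Drain the queue, including callbacks queued while draining,
        # mirroring what flushEventualQueue() waits for in unit tests.
        while self._queue:
            cb, args, kwargs = self._queue.popleft()
            cb(*args, **kwargs)

q = EventualQueue()
log = []
q.eventually(log.append, "first")
q.eventually(log.append, "second")
print(log)  # [] -- nothing has run yet
q.flush()
print(log)  # ['first', 'second'] -- FIFO order is guaranteed
```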
diff --git a/src/foolscap/README b/src/foolscap/README deleted file mode 100644 index 56ac5f38..00000000 --- a/src/foolscap/README +++ /dev/null @@ -1,82 +0,0 @@ - Foolscap - (aka newpb, aka pb2) - -This is a ground-up rewrite of Perspective Broker, which itself is Twisted's -native RPC/RMI protocol (Remote Procedure Call / Remote Method Invocation). -If you have control of both ends of the wire, and are thus not constrained to -use some other protocol like HTTP/XMLRPC/CORBA/etc, you might consider using -Foolscap. - -Fundamentally, Foolscap allows you to make a python object in one process -available to code in other processes, which means you can invoke its methods -remotely. This includes a data serialization layer to convey the object -graphs for the arguments and the eventual response, and an object reference -system to keep track of which objects you are connecting to. It uses a -capability-based security model, such that once you create a non-public -object, it is only accessible to clients to whom you've given the -(unguessable) PB-URL. You can of course publish world-visible objects that -have well-known PB-URLs. - -Full documentation and examples are in the doc/ directory. - -DEPENDENCIES: - - * Python 2.4 or later - * Twisted 2.4.0 or later - * PyOpenSSL (tested against 0.6) - - -INSTALLATION: - - To install foolscap into your system's normal python library directory, just - run the following (you will probably have to do this as root): - - python setup.py install - - You can also just add the foolscap source tree to your PYTHONPATH, since - there are no compile steps or .so/.dll files involved. - - -COMPATIBILITY: - - Foolscap is still under development. The wire protocol is almost certainly - going to change in the near future, so forward compatibility between - versions is *NOT* yet guaranteed. Do not use Foolscap if you do not have - continuing control over both ends of the wire. 
Foolscap is not yet suitable - for widespread deployment: for production applications please continue to - use oldpb (in twisted.spread). - - Foolscap has a built-in version-negotiation mechanism that allows the two - processes to determine how to best communicate with each other. The two ends - will agree upon the highest mutually-supported version for all their - traffic. If they do not have any versions in common, the connection will - fail with a NegotiationError. - - Certain releases of Foolscap will remain compatible with earlier releases. - Please check the NEWS file for announcements of compatibility-breaking - changes in any given release. - - -NAMING: - - The established version of PB that has been around for years is referred to - here as "oldpb". The new version contained in this release is known as - "Foolscap", but at various points of its development was known as "newpb" or - "pb2". The release tarballs are named "foolscap-x.y.z". The python module - name is "foolscap" . These names are still in flux. At some point in the - future, we may come up with a suitably clever and confusing name that will - replace any or all of these. - - A "foolscap" is a size of paper, probably measuring 17 by 13.5 inches. A - twisted foolscap of paper makes a good fool's cap. Also, "cap" makes me - think of capabilities, and Foolscap is a protocol to implement a distributed - object-capabilities model in python. - -AUTHOR: - - Brian Warner is responsible for this thing. Please discuss it on the - twisted-python list. - - The wiki page at contains - pointers to the latest release, as well as documentation and other - resources. diff --git a/src/foolscap/doc/copyable.xhtml b/src/foolscap/doc/copyable.xhtml deleted file mode 100644 index 2a9a7db2..00000000 --- a/src/foolscap/doc/copyable.xhtml +++ /dev/null @@ -1,236 +0,0 @@ - - -Using Pass-By-Copy in Foolscap - - - - -

Using Pass-By-Copy in Foolscap

- - -

Certain objects (including subclasses of foolscap.Copyable and things for which an ICopyable adapter has been -registered) are serialized using copy-by-value semantics. Each such object is -serialized as a (copytype, state) pair of values. On the receiving end, the -"copytype" is looked up in a table to find a suitable deserializer. The -"state" information is passed to this deserializer to create a new instance -that corresponds to the original. Note that the sending and receiving ends -are under no obligation to use the same class on each side: it is fairly -common for the remote form of an object to have different methods than the -original instance.

- -

Copy-by-value (as opposed to copy-by-reference) means that the remote -representation of an object leads an independent existence, unconnected to -the original. Sending the same object multiple times will result in separate -independent copies. Sending the result of a pass-by-copy operation back to -the original sender will, at best, result in the sender holding two separate -objects containing similar state (and at worst will not work at all: not all -RemoteCopies are themselves Copyable).

- -

More complex copy semantics can be accomplished by writing custom Slicer -code. For example, to get an object that is copied by value the first time it -traverses the wire, and then copied by reference all later times, you will -need to write a Slicer/Unslicer pair to implement this functionality. -Likewise the oldpb Cacheable class would need to be implemented -with a custom Slicer/Unslicer pair.

- -

Copyable

- -

The easiest way to send your own classes over the wire is to use -Copyable. On the sending side, this requires two things: your -class must inherit from foolscap.Copyable, and it -must define an attribute named typeToCopy with a unique string. -This copytype string is shared between both sides, so it is a good idea to -use a stable and globally unique value: perhaps a URL rooted in a namespace -that you control, or a UUID, or perhaps the fully-qualified -package+module+class name of the class being serialized. Any string will do, -as long as it matches the one used on the receiving side.

- -

The object being sent is asked to provide a state dictionary by calling -its getStateToCopy method. The default -implementation of getStateToCopy will simply return -self.__dict__. You can override getStateToCopy to -control what pieces of the source object get copied to the target. In -particular, you may want to override getStateToCopy if there is -any portion of the object's state that should not be sent over the -wire: references to objects that can not or should not be serialized, or -things that are private to the application. It is common practice to create -an empty dictionary in this method and then copy items into it.

- -
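As a sketch of that practice, a Copyable subclass might copy only the safe
items into a fresh dictionary and leave private or unserializable attributes
behind. The foolscap.Copyable base class is stubbed out here so the snippet
stands alone (in real code you would inherit from the real class), and all
class and attribute names are invented:

```python
class Copyable:          # stand-in for foolscap.Copyable
    pass

class UserRecord(Copyable):
    # shared copytype string; must match the receiving side
    typeToCopy = "example.com/copyable/user-record/v1"

    def __init__(self, name, age, db_handle):
        self.name = name
        self.age = age
        self._db_handle = db_handle   # private, must not cross the wire

    def getStateToCopy(self):
        # Start from an empty dict and copy items in, rather than
        # exposing all of self.__dict__.
        state = {}
        state["name"] = self.name
        state["age"] = self.age
        return state

u = UserRecord("alice", 30, db_handle=object())
print(u.getStateToCopy())  # {'name': 'alice', 'age': 30}
```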

On the receiving side, you must register the copytype and provide a -function to deserialize the state dictionary back into an instance. For each -Copyable subclass you will create a corresponding RemoteCopy subclass. There are three -requirements which must be fulfilled by this subclass:

- -
  1. copytype: Each RemoteCopy needs a copytype attribute which contains
     the same string as the corresponding Copyable's typeToCopy attribute.
     (metaclass magic is used to auto-register the RemoteCopy class in the
     global copytype-to-RemoteCopy table when the class is defined. You can
     also use foolscap.registerRemoteCopy to manually register a class).

  2. __init__: The RemoteCopy subclass must have an __init__ method that
     takes no arguments. When the receiving side is creating the incoming
     object, it starts by creating a new instance of the correct RemoteCopy
     subclass, and at this point it has no arguments to work with. Later,
     once the instance is created, it will call setCopyableState to populate
     it.

  3. setCopyableState: Your RemoteCopy subclass must define a method named
     setCopyableState. This method will be called with the state dictionary
     that came out of getStateToCopy on the sending side, and is expected to
     set any necessary internal state.

Note that RemoteCopy is a new-style class: if you want your -copies to be old-style classes, inherit from RemoteCopyOldStyle -and manually register the copytype-to-subclass mapping with -registerRemoteCopy.

- -copyable-send.py -copyable-receive.py - - -
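The three receiving-side requirements can be sketched together in one class.
The RemoteCopy base class, its auto-registering metaclass, and the global
copytype table are all stubbed out here so the snippet stands alone; only the
attribute and method names follow the convention described above:

```python
copytype_registry = {}   # stand-in for foolscap's global copytype table

class RemoteCopy:        # stand-in for foolscap.RemoteCopy
    pass

class RemoteUserRecord(RemoteCopy):
    # 1. copytype: must equal the sender's typeToCopy string
    copytype = "example.com/copyable/user-record/v1"

    # 2. __init__ takes no arguments: the receiver builds the instance
    #    first and fills in state afterwards
    def __init__(self):
        self.name = None
        self.age = None

    # 3. setCopyableState receives the sender's state dictionary
    def setCopyableState(self, state):
        self.name = state["name"]
        self.age = state["age"]

# done by metaclass magic (or registerRemoteCopy) in foolscap:
copytype_registry[RemoteUserRecord.copytype] = RemoteUserRecord

def receive(copytype, state):
    # what the receiver does with an incoming (copytype, state) pair
    obj = copytype_registry[copytype]()
    obj.setCopyableState(state)
    return obj

r = receive("example.com/copyable/user-record/v1",
            {"name": "alice", "age": 30})
print(r.name, r.age)  # alice 30
```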

Registering Copiers to serialize third-party classes

- -

If you wish to serialize instances of third-party classes that are out of -your control (or you simply want to avoid subclassing), you can register a -Copier to provide serialization mechanisms for those instances.

- -

There are plenty of cases where it is difficult to arrange for all of the -data you send over the wire to be in the form of Copyable -subclasses. For example, you might have a codebase that produces a -deeply-nested data structure that contains instances of pre-existing classes. -Those classes are written by other people, and do not happen to inherit from -Copyable. Without Copiers, you would have to traverse the whole -structure, locate all instances of these non-Copyable classes, -and wrap them in some new Copyable subclass. Registering a -Copier for the third-party class is much easier.

- - -

The registerCopier function is used to provide a "copier" for any given
class. This copier is a function that accepts an instance of the given class,
and returns a (copytype, state) tuple. For example (many thanks to Ricky
Iacovou for the xmlrpclib.DateTime example), the xmlrpclib module provides a
DateTime class, and you might have a data structure that includes some
instances of them:

- -
-import xmlrpclib
-from foolscap import registerCopier
-
-def copy_DateTime(xd):
-    return ("_xmlrpclib_DateTime", {"value": xd.value})
-
-registerCopier(xmlrpclib.DateTime, copy_DateTime)
-
- -

This ensures that any xmlrpclib.DateTime that is encountered while
serializing arguments or return values will be serialized with a copytype of
"_xmlrpclib_DateTime" and a state dictionary containing the single "value"
key. Even DateTime instances that appear arbitrarily deep inside nested data
structures will be serialized this way. For example, a method argument might
be a dictionary, one of its values might be a list, and that list could
contain a DateTime instance.

- -

To deserialize this object, the receiving side needs to register a -corresponding deserializer. registerRemoteCopyFactory is the -receiving-side parallel to registerCopier. It associates a -copytype with a function that will receive a state dictionary and is expected -to return a fully-formed instance. For example:

- -
-import xmlrpclib
-from foolscap import registerRemoteCopyFactory
-
-def make_DateTime(state):
-    return xmlrpclib.DateTime(state["value"])
-
-registerRemoteCopyFactory("_xmlrpclib_DateTime", make_DateTime)
-
- -

Note that the "_xmlrpclib_DateTime" copytype must be the same for -both the copier and the RemoteCopyFactory, otherwise the receiving side will -be unable to locate the correct deserializer.

- -
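The copier/factory pair can be exercised end-to-end without a wire by
standing in for the two registries by hand. The registry dictionaries and the
trip-simulation below are invented for illustration; only copy_DateTime,
make_DateTime, and the shapes of the registration calls match the text above.
(Python 3 spells the module xmlrpc.client rather than xmlrpclib.)

```python
from xmlrpc.client import DateTime   # Python 3 name for xmlrpclib

copiers = {}     # class -> copier function (stand-in registry)
factories = {}   # copytype -> deserializer (stand-in registry)

def registerCopier(klass, copier):
    copiers[klass] = copier

def registerRemoteCopyFactory(copytype, factory):
    factories[copytype] = factory

def copy_DateTime(xd):
    return ("_xmlrpclib_DateTime", {"value": xd.value})

def make_DateTime(state):
    return DateTime(state["value"])

registerCopier(DateTime, copy_DateTime)
registerRemoteCopyFactory("_xmlrpclib_DateTime", make_DateTime)

# simulate one trip over the wire: copier on the way out,
# factory keyed by copytype on the way back in
original = DateTime("20070825T22:36:33")
copytype, state = copiers[type(original)](original)
restored = factories[copytype](state)
print(restored.value == original.value)  # True
```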

It is perfectly reasonable to include both of these function/registration -pairs in the same module, and import it in the code on both sides of the -wire. The examples describe the sending and receiving sides separately to -emphasize the fact that the recipient may be running completely different -code than the sender.

- - -

Registering ICopyable adapters

- -

A slightly more generalized way to teach Foolscap about third-party
classes is to register an ICopyable adapter for them, using the usual
(i.e. zope.interface) adapter-registration mechanism. Objects that provide
ICopyable need to implement two methods: getTypeToCopy (which returns the
copytype), and getStateToCopy (which returns the state dictionary). Any
object which can be adapted to ICopyable can be serialized this way.

- -

On the receiving side, the copytype is looked up in the CopyableRegistry to
find a corresponding UnslicerFactory. The registerRemoteCopyUnslicerFactory
function accepts two arguments: the copytype, and the unslicer factory to
use. This unslicer factory is simply a function that takes no arguments and
returns a new Unslicer. Each time an inbound message with the matching
copytype is received, this unslicer factory is invoked to create an Unslicer
that will be responsible for the single instance described in the message.
This Unslicer must implement an interface described in the Unslicer
specifications.

- -

Registering ISlicer adapters

- -

The most generalized way to serialize classes is to register a whole
ISlicer adapter for them. The ISlicer gets complete control over
serialization: it can stall the production of tokens by implementing a slice
method that yields Deferreds instead of basic objects. It can also interact
with other objects while the target is being serialized. As an extreme
example, if you had a service that wanted to migrate an open HTTP connection
from one process to another, the ISlicer could communicate with a front-end
load-balancing box to redirect the connection to the new host. In this case,
the slicer could theoretically tell the load-balancer to pause the connection
and assign it a rendezvous number, then serialize this rendezvous number as a
form of "claim check" to the target process. The IUnslicer on the receiving
end could open a new listening port, then use the claim check to tell the
load-balancer to direct the connection to this new port. Likewise two
services running on the same host could conspire to pass open file
descriptors over a Foolscap connection (via an auxiliary unix-domain socket)
through suitable magic in the ISlicer and IUnslicer on each end.

- -

The Slicers and Unslicers are described in more detail in the specifications.

- -

Note that a Copyable with a copytype of "foo" is serialized -as the following token stream: OPEN, "copyable", "foo", [state dictionary..], -CLOSE. Any ISlicer adapter which wishes to match what -Copyable does needs to include the extra "copyable" opentype -string first.

- -

Also note that using a custom Slicer introduces an opportunity to violate -serialization coherency. Copyable and Copiers transform the -original object into a state dictionary in one swell foop, not allowing any -other code to get control (and possibly mutate the object's state). If your -custom Slicer allows other code to get control during serialization, then the -object's state might be changed, and thus the serialized state dictionary -could wind up looking pretty weird.

- - - diff --git a/src/foolscap/doc/jobs.txt b/src/foolscap/doc/jobs.txt deleted file mode 100644 index d671b466..00000000 --- a/src/foolscap/doc/jobs.txt +++ /dev/null @@ -1,619 +0,0 @@ --*- outline -*- - -Reasonably independent newpb sub-tasks that need doing. Most important come -first. - -* decide on a version negotiation scheme - -Should be able to telnet into a PB server and find out that it is a PB -server. Pointing a PB client at an HTTP server (or an HTTP client at a PB -server) should result in an error, not a timeout. Implement in -banana.Banana.connectionMade(). - -desiderata: - - negotiation should take place with regular banana sequences: don't invent a - new protocol that is only used at the start of the connection - - Banana should be useable one-way, for storage or high-latency RPC (the mnet - folks want to create a method call, serialize it to a string, then encrypt - and forward it on to other nodes, sometimes storing it in relays along the - way if a node is offline for a few days). It should be easy for the layer - above Banana to feed it the results of what its negotiation would have been - (if it had actually used an interactive connection to its peer). Feeding the - same results to both sides should have them proceed as if they'd agreed to - those results. - - negotiation should be flexible enough to be extended but still allow old - code to talk with new code. Magically predict every conceivable extension - and provide for it from the very first release :). - -There are many levels to banana, all of which could be useful targets of -negotiation: - - which basic tokens are in use? Is there a BOOLEAN token? a NONE token? Can - it accept a LONGINT token or is the target limited to 32-bit integers? - - are there any variations in the basic Banana protocol being used? Could the - smaller-scope OPEN-counter decision be deferred until after the first - release and handled later with a compatibility negotiation flag? 
 - What "base" OPEN sequences are known? 'unicode'? 'boolean'? 'dict'? This
   is an overlap between expressing the capabilities of the host language,
   the Banana implementation, and the needs of the application. How about
   'instance', probably only used for StorageBanana?

 - What "top-level" OPEN sequences are known? PB stuff (like 'call', and
   'your-reference')? Are there any variations or versions that need to be
   known? We may add new functionality in the future; it might be useful for
   one end to know whether this functionality is available or not. (the PB
   'call' sequence could some day take numeric argument names to convey
   positional parameters, a 'reference' sequence could take a string to
   indicate globally-visible PB URLs, it could become possible to pass
   target.remote_foo directly to a peer and have a callable RemoteMethod
   object pop out the other side).

 - What "application-level" sequences are available? (Which RemoteInterface
   classes are known and valid in 'call' sequences? Which RemoteCopy names
   are valid for targets of the 'copy' sequence?). This is not necessarily
   within the realm of Banana negotiation, but applications may need to
   negotiate this sort of thing, and any disagreements will be manifested
   when Banana starts raising Violations, so it may be useful to include it
   in the Banana-level negotiation.

On the other hand, negotiation is only useful if one side is prepared to
accommodate a peer which cannot do some of the things it would prefer to use,
or if it wants to know about the incapabilities so it can report a useful
failure rather than have an obscure protocol-level error message pop up an
hour later. So negotiation isn't the only goal: simple capability awareness
is a useful lesser goal.

It kind of makes sense for the first object of a stream to be a negotiation
blob.
We could make a new 'version' opentype, and declare that the contents -will be something simple and forever-after-parseable (like a dict, with heavy -constraints on the keys and values, all strings emitted in full). - -DONE, at least the framework is in place. Uses HTTP-style header-block -exchange instead of banana sequences, with client-sends-first and -server-decides. This correctly handles PB-vs-HTTP, but requires a timeout to -detect oldpb clients vs newpb servers. No actual feature negotiation is -performed yet, because we still only have the one version of the code. - -* connection initiation - -** define PB URLs - -[newcred is the most important part of this, the URL stuff can wait] - -A URL defines an endpoint: a pb.Referenceable, with methods. Somewhere along -the way it defines a transport (tcp+host+port, or unix+path) and an object -reference (pathname). It might also define a RemoteInterface, or that might -be put off until we actually invoke a method. - - URL = f("pb:", host, port, pathname) - d = pb.callRemoteURL(URL, ifacename, methodname, args) - -probably give an actual RemoteInterface instead of just its name - -a pb.RemoteReference claims to provide access to zero-or-more -RemoteInterfaces. You may choose which one you want to use when invoking -callRemote. - -TODO: decide upon a syntax for URLs that refer to non-TCP transports - pb+foo://stuff, pby://stuff (for yURL-style self-authenticating names) - -TODO: write the URL parser, implementing pb.getRemoteURL and pb.callRemoteURL - DONE: use a Tub/PBService instead - -TODO: decide upon a calling convention for callRemote when specifying which -RemoteInterface is being used. - - -DONE, PB-URL is the way to go. -** more URLs - -relative URLs (those without a host part) refer to objects on the same -Broker. Absolute URLs (those with a host part) refer to objects on other -Brokers. - -SKIP, interesting but not really useful - -** build/port pb.login: newcred for newpb - -Leave cred work for Glyph. 
- - has some enhanced PB cred stuff (challenge/response, pb.Copyable -credentials, etc). - -URL = pb.parseURL("pb://lothar.com:8789/users/warner/services/petmail", - IAuthorization) -URL = doFullLogin(URL, "warner", "x8yzzy") -URL.callRemote(methodname, args) - -NOTDONE - -* constrain ReferenceUnslicer properly - -The schema can use a ReferenceConstraint to indicate that the object must be -a RemoteReference, and can also require that the remote object be capable of -handling a particular Interface. - -This needs to be implemented. slicer.ReferenceUnslicer must somehow actually -ask the constraint about the incoming tokens. - -An outstanding question is "what counts". The general idea is that -RemoteReferences come over the wire as a connection-scoped ID number and an -optional list of Interface names (strings and version numbers). In this case -it is the far end which asserts that its object can implement any given -Interface, and the receiving end just checks to see if the schema-imposed -required Interface is in the list. - -This becomes more interesting when applied to local objects, or if a -constraint is created which asserts that its object is *something* (maybe a -RemoteReference, maybe a RemoteCopy) which implements a given Interface. In -this case, the incoming object could be an actual instance, but the class -name must be looked up in the unjellyableRegistry (and the class located, and -the __implements__ list consulted) before any of the object's tokens are -accepted. - -* security TODOs: - -** size constraints on the set-vocab sequence - -* implement schema.maxSize() - -In newpb, schemas serve two purposes: - - a) make programs safer by reducing the surprises that can appear in their - arguments (i.e. factoring out argument-checking in a useful way) - - b) remove memory-consumption DoS attacks by putting an upper bound on the - memory consumed by any particular message. 
- 
-Each schema has a pair of methods named maxSize() and maxDepth() which 
-provide this upper bound. While the schema is in effect (say, during the 
-receipt of a particular named argument to a remotely-invokable method), at 
-most X bytes and Y slicer frames will be in use before either the object is 
-accepted and processed or the schema notes the violation and the object is 
-rejected (whereupon the temporary storage is released and all further bytes 
-in the rejected object are simply discarded). Strictly speaking, the number 
-returned by maxSize() is the largest string on the wire which has not yet 
-been rejected as violating the constraint, but it is also a reasonable 
-metric to describe how much internal storage must be used while processing 
-it. (To achieve greater accuracy would involve knowing exactly how large 
-each Python type is; not a sensible thing to attempt). 
- 
-The idea is that someone who is worried about an attacker throwing a really 
-long string or an infinitely-nested list at them can ask the schema just what 
-exactly their current exposure is. The tradeoff between flexibility ("accept 
-any object whatsoever here") and exposure to DoS attack is then user-visible 
-and thus user-selectable. 
- 
-To implement maxSize() for a basic schema (like a string), you simply need 
-to look at banana.xhtml and see how basic tokens are encoded (you will also 
-need to look at banana.py and see how deserialization is actually 
-implemented). For a schema.StringConstraint(32) (which accepts strings <= 32 
-characters in length), the largest serialized form that has not yet been 
-either accepted or rejected is: 
- 
- 64 bytes (header indicating 0x000000..0020 with lots of leading zeros) 
- + 1 byte (STRING token) 
- + 32 bytes (string contents) 
- = 97 
- 
-If the header indicates a conforming length (<=32) then just after the 32nd 
-byte is received, the string object is created and handed up the stack, so 
-the temporary storage tops out at 97. 
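The accounting above can be restated mechanically. This is a sketch assuming the 64-byte worst-case length header described in banana.xhtml; the function name is ours, not foolscap's:

```python
# Mechanical restatement of the maxSize accounting for a string constraint:
# worst-case length header, plus the STRING type byte, plus the largest
# payload the constraint will accept before handing the string up the stack.

HEADER_MAX = 64   # largest length header that has not yet been rejected

def string_constraint_max_size(max_length):
    # header + STRING token byte + constrained string contents
    return HEADER_MAX + 1 + max_length

assert string_constraint_max_size(32) == 97
```

The max_length=0 case gives 65, which is exactly the exposure in the million-character-spam scenario below: header plus token byte, with no payload yet accepted.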
If someone is trying to spam us with a -million-character string, the serialized form would look like: - - 64 bytes (header indicating 1-million in hex, with leading zeros) -+ 1 byte (STRING token) -= 65 - -at which point the receive parser would check the constraint, decide that -1000000 > 32, and reject the remainder of the object. - -So (with the exception of pass/fail maxSize values, see below), the following -should hold true: - - schema.StringConstraint(32).maxSize() == 97 - -Now, schemas which represent containers have size limits that are the sum of -their contents, plus some overhead (and a stack level) for the container -itself. For example, a list of two small integers is represented in newbanana -as: - - OPEN(list) - INT - INT - CLOSE() - -which really looks like: - - opencount-OPEN - len-STRING-"list" - value-INT - value-INT - opencount-CLOSE - -This sequence takes at most: - - opencount-OPEN: 64+1 - len-STRING-"list": 64+1+1000 (opentypes are confined to be <= 1k long) - value-INT: 64+1 - value-INT: 64+1 - opencount-CLOSE: 64+1 - -or 5*(64+1)+1000 = 1325, or rather: - - 3*(64+1)+1000 + N*(IntConstraint().maxSize()) - -So ListConstraint.maxSize is computed by doing some math involving the -.maxSize value of the objects that go into it (the ListConstraint.constraint -attribute). This suggests a recursive algorithm. If any constraint is -unbounded (say a ListConstraint with no limit on the length of the list), -then maxSize() raises UnboundedSchema to indicate that there is no limit on -the size of a conforming string. Clearly, if any constraint is found to -include itself, UnboundedSchema must also be raised. - -This is a loose upper bound. 
For example, one non-conforming input string -would be: - - opencount-OPEN: 64+1 - len-STRING-"x"*1000: 64+1+1000 - -The entire string would be accepted before checking to see which opentypes -were valid: the ListConstraint only accepts the "list" opentype and would -reject this string immediately after the 1000th "x" was received. So a -tighter upper bound would be 2*65+1000 = 1130. - -In general, the bound is computed by walking through the deserialization -process and identifying the largest string that could make it past the -validity checks. There may be later checks that will reject the string, but -if it has not yet been rejected, then it still represents exposure for a -memory consumption DoS. - -** pass/fail sizes - -I started to think that it was necessary to have each constraint provide two -maxSize numbers: one of the largest sequence that could possibly be accepted -as valid, and a second which was the largest sequence that could be still -undecided. This would provide a more accurate upper bound because most -containers will respond to an invalid object by abandoning the rest of the -container: i.e. if the current active constraint is: - - ListConstraint(StringConstraint(32), maxLength=30) - -then the first thing that doesn't match the string constraint (say an -instance, or a number, or a 33-character string) will cause the ListUnslicer -to go into discard-everything mode. This makes a significant difference when -the per-item constraint allows opentypes, because the OPEN type (a string) is -constrained to 1k bytes. The item constraint probably imposes a much smaller -limit on the set of actual strings that would be accepted, so no -kilobyte-long opentype will possibly make it past that constraint. That means -there can only be one outstanding invalid object. So the worst case (maximal -length) string that has not yet been rejected would be something like: - - OPEN(list) - validthing [0] - validthing [1] - ... 
- validthing [n-1] 
- long-invalid-thing 
- 
-because if the long-invalid thing had been received earlier, the entire list 
-would have been abandoned. 
- 
-This suggests that the calculation for ListConstraint.maxSize() really needs 
-to be like 
- overhead 
- +(len-1)*itemConstraint.maxSize(valid) 
- +(1)*itemConstraint.maxSize(invalid) 
- 
-I'm still not sure about this. I think it provides a significantly tighter 
-upper bound. The deserialization process itself does not try to achieve the 
-absolute minimal exposure (i.e., the opentype checker could take the set of 
-all known-valid open types, compute the maximum length, and then impose a 
-StringConstraint with that length instead of 1000), because it is, in 
-general, an inefficient hassle. There is a tradeoff between computational 
-efficiency and removing the slack in the maxSize bound, both in the 
-deserialization process (where the memory is actually consumed) and in 
-maxSize (where we estimate how much memory could be consumed). 
- 
-Anyway, maxSize() and maxDepth() (which is easier: containers add 1 to the 
-maximum of the maxDepth values of their possible children) need to be 
-implemented for all the Constraint classes. There are some tests (disabled) 
-in test_schema.py for this code: those tests assert specific values for 
-maxSize. Those values are probably wrong, so they must be updated to match 
-however maxSize actually works. 
- 
-* decide upon what the "Shared" constraint should mean 
- 
-The idea of this one was to avoid some vulnerabilities by rejecting arbitrary 
-object graphs. Fundamentally Banana can represent most anything (just like 
-pickle), including objects that refer to each other in exciting loops and 
-whorls. There are two problems with this: it is hard to enforce a schema that 
-allows cycles in the object graph (indeed it is tricky to even describe one), 
-and the shared references could be used to temporarily violate a schema. 
- 
-I think these might be fixable (the sample case is where one tuple is 
-referenced in two different places, each with a different constraint, but the 
-tuple is incomplete until some higher-level node in the graph has become 
-referenceable, so [maybe] the schema can't be enforced until somewhat after 
-the object has actually finished arriving). 
- 
-However, Banana is aimed at two different use-cases. One is kind of a 
-replacement for pickle, where the goal is to allow arbitrary object graphs to 
-be serialized but have more control over the process (in particular we still 
-have an unjellyableRegistry to prevent arbitrary constructors from being 
-executed during deserialization). In this mode, a larger set of Unslicers is 
-available (for modules, bound methods, etc), and schemas may still be useful 
-but are not enforced by default. 
- 
-PB will use the other mode, where the set of conveyable objects is much 
-smaller, and security is the primary goal (including putting limits on 
-resource consumption). Schemas are enforced by default, and all constraints 
-default to sensible size limits (strings to 1k, lists to [currently] 30 
-items). Because complex object graphs are not commonly transported across 
-process boundaries, the default is to not allow any Copyable object to be 
-referenced multiple times in the same serialization stream. The default is to 
-reject both cycles and shared references in the object graph, allowing only 
-strict trees, making life easier (and safer) for the remote methods which are 
-being given this object tree. 
- 
-The "Shared" constraint is intended as a way to turn off this default 
-strictness and allow the object to be referenced multiple times. The 
-outstanding question is what this should really mean: must it be marked as 
-such on all places where it could be referenced, what is the scope of the 
-multiple-reference region (per-method-call, per-connection?), and finally 
-what should be done when the limit is violated. 
Currently Unslicers see an -Error object which they can respond to any way they please: the default -containers abandon the rest of their contents and hand an Error to their -parent, the MethodCallUnslicer returns an exception to the caller, etc. With -shared references, the first recipient sees a valid object, while the second -and later recipient sees an error. - - -* figure out Deferred errors for immutable containers - -Somewhat related to the previous one. The now-classic example of an immutable -container which cannot be created right away is the object created by this -sequence: - - t = ([],) - t[0].append((t,)) - -This serializes into (with implicit reference numbers on the left): - -[0] OPEN(tuple) -[1] OPEN(list) -[2] OPEN(tuple) -[3] OPEN(reference #0) - CLOSE - CLOSE - CLOSE - -In newbanana, the second TupleUnslicer cannot return a fully-formed tuple to -its parent (the ListUnslicer), because that tuple cannot be created until the -contents are all referenceable, and that cannot happen until the first -TupleUnslicer has completed. So the second TupleUnslicer returns a Deferred -instead of a tuple, and the ListUnslicer adds a callback which updates the -list's item when the tuple is complete. - -The problem here is that of error handling. In general, if an exception is -raised (perhaps a protocol error, perhaps a schema violation) while an -Unslicer is active, that Unslicer is abandoned (all its remaining tokens are -discarded) and the parent gets an Error object. (the parent may give up too.. -the basic Unslicers all behave this way, so any exception will cause -everything up to the RootUnslicer to go boom, and the RootUnslicer has the -option of dropping the connection altogether). When the error is noticed, the -Unslicer stack is queried to figure out what path was taken from the root of -the object graph to the site that had an error. 
This is really useful when 
-trying to figure out which exact object caused a SchemaViolation: rather than 
-being told a call trace or a description of the *object* which had a problem, 
-you get a description of the path to that object (the same series of 
-dereferences you'd use to print the object: obj.children[12].peer.foo.bar). 
- 
-When references are allowed, these exceptions could occur after the original 
-object has been received, when that Deferred fires. There are two problems: 
-one is that the error path is now misleading, the other is that it might not 
-have been possible to enforce a schema because the object was incomplete. 
- 
-The most important thing is to make sure that an exception that occurs while 
-the Deferred is being fired is caught properly and flunks the object just as 
-if the problem were caught synchronously. This may involve discarding an 
-otherwise complete object graph and blaming the problem on a node much closer 
-to the root than the one which really caused the failure. 
- 
-* adaptive VOCAB compression 
- 
-We want to let banana figure out a good set of strings to compress on its 
-own. In Banana.sendToken, keep a list of the last N strings that had to be 
-sent in full (i.e. they weren't in the table). If the string being sent 
-appears more than M times in that table, before we send the token, emit an 
-ADDVOCAB sequence, add a vocab entry for it, then send a numeric VOCAB token 
-instead of the string. 
- 
-Make sure the vocab mapping is not used until the ADDVOCAB sequence has been 
-queued. Sending it inline should take care of this, but if for some reason we 
-need to push it on the top-level object queue, we need to make sure the vocab 
-table is not updated until it gets serialized. Queuing a VocabUpdate object, 
-which updates the table when it gets serialized, would take care of this. The 
-advantage of doing it inline is that later strings in the same object graph 
-would benefit from the mapping. 
The disadvantage is that the receiving 
-Unslicers must be prepared to deal with ADDVOCAB sequences at any time (so 
-really they have to be stripped out). This disadvantage goes away if ADDVOCAB 
-is a token instead of a sequence. 
- 
-Reasonable starting values for N and M might be 30 and 3. 
- 
-* write oldbanana compatibility code? 
- 
-An oldbanana peer can be detected because the server side sends its dialect 
-list from connectionMade, and oldbanana lists are sent with OLDLIST tokens 
-(the explicit-length kind). 
- 
- 
-* add .describe methods to all Slicers 
- 
-This involves setting an attribute between each yield call, to indicate what 
-part is about to be serialized. 
- 
- 
-* serialize remotely-callable methods? 
- 
-It might be useful to be able to do something like: 
- 
- class Watcher(pb.Referenceable): 
- def remote_foo(self, args): blah 
- 
- w = Watcher() 
- ref.callRemote("subscribe", w.remote_foo) 
- 
-That would involve looking up the method and its parent object, reversing 
-the remote_*->* transformation, then sending a sequence which contained both 
-the object's RemoteReference and the appropriate method name. 
- 
-It might also be useful to generalize this: passing a lambda expression to 
-the remote end could stash the callable in a local table and send a Callable 
-Reference to the other side. I can smell a good general-purpose object 
-classification framework here, but I haven't quite been able to nail it down 
-exactly. 
- 
-* testing 
- 
-** finish testing of LONGINT/LONGNEG 
- 
-test_banana.InboundByteStream.testConstrainedInt needs implementation 
- 
-** thoroughly test failure-handling at all points of in/out serialization 
- 
-places where BananaError or Violation might be raised 
- 
-sending side: 
- Slicer creation (schema pre-validation? no): no no 
- pre-validation is done before sending the object, Broker.callFinished, 
- RemoteReference.doCall 
- slicer creation is done in newSlicerFor 
- 
- .slice (called in pushSlicer) ? 
- .slice.next raising Violation 
- .slice.next returning Deferrable when streaming isn't allowed 
- .sendToken (non-primitive token, can't happen) 
- .newSlicerFor (no ISlicer adapter) 
- top.childAborted 
- 
-receiving side: 
- long header (>64 bytes) 
- checkToken (top.openerCheckToken) 
- checkToken (top.checkToken) 
- typebyte == LIST (oldbanana) 
- bad VOCAB key 
- too-long vocab key 
- bad FLOAT encoding 
- top.receiveClose 
- top.finish 
- top.reportViolation 
- oldtop.finish (in from handleViolation) 
- top.doOpen 
- top.start 
-plus all of these when discardCount != 0 
-OPENOPEN 
- 
-send-side uses: 
- f = top.reportViolation(f) 
-receive-side should use it too (instead of f.raiseException) 
- 
-** test failure-handling during callRemote argument serialization 
- 
-** implement/test some streaming Slicers 
- 
-** test producer Banana 
- 
-* profiling/optimization 
- 
-Several areas where I suspect performance issues but am unwilling to fix 
-them before having proof that there is a problem: 
- 
-** Banana.produce 
- 
-This is the main loop which creates outbound tokens. It is called once at 
-connectionMade() (after version negotiation) and thereafter is fired as the 
-result of a Deferred whose callback is triggered by a new item being pushed 
-on the output queue. It runs until the output queue is empty, or the 
-production process is paused (by a consumer who is full), or streaming is 
-enabled and one of the Slicers wants to pause. 
- 
-Each pass through the loop pushes a single token into the transport, 
-resulting in a number of short writes. We can do better than this by telling 
-the transport to buffer the individual writes and calling a flush() method 
-when we leave the loop. I think Itamar's new cprotocol work provides this 
-sort of hook, but it would be nice if there were a generalized Transport 
-interface so that Protocols could promise their transports that they will 
-use flush() when they've stopped writing for a little while. 
- 
-Also, I want to be able to move produce() into C code. 
This means defining a -CSlicer in addition to the cprotocol stuff before. The goal is to be able to -slice a large tree of basic objects (lists, tuples, dicts, strings) without -surfacing into Python code at all, only coming "up for air" when we hit an -object type that we don't recognize as having a CSlicer available. - -** Banana.handleData - -The receive-tokenization process wants to be moved into C code. It's -definitely on the critical path, but it's ugly because it has to keep -calling into python code to handle each extracted token. Maybe there is a -way to have fast C code peek through the incoming buffers for token -boundaries, then give a list of offsets and lengths to the python code. The -b128 conversion should also happen in C. The data shouldn't be pulled out of -the input buffer until we've decided to accept it (i.e. the -memory-consumption guarantees that the schemas provide do not take any -transport-level buffering into account, and doing cprotocol tokenization -would represent memory that an attacker can make us spend without triggering -a schema violation). Itamar's CLineReceiver is a good example: you tokenize -a big buffer as much as you can, pass the tokens upstairs to Python code, -then hand the leftover tail to the next read() call. The tokenizer always -works on the concatenation of two buffers: the tail of the previous read() -and the complete contents of the current one. - -** Unslicer.doOpen delegation - -Unslicers form a stack, and each Unslicer gets to exert control over the way -that its descendents are deserialized. Most don't bother, they just delegate -the control methods up to the RootUnslicer. For example, doOpen() takes an -opentype and may return a new Unslicer to handle the new OPEN sequence. Most -of the time, each Unslicer delegates doOpen() to their parent, all the way -up the stack to the RootUnslicer who actually performs the UnslicerRegistry -lookup. - -This provides an optimization point. 
In general, the Unslicer knows ahead of 
-time whether it cares to be involved in these methods or not (i.e. whether 
-it wants to pay attention to its children/descendants or not). So instead of 
-delegating all the time, we could just have a separate Opener stack. 
-Unslicers that care would be pushed on the Opener stack at the same time 
-they are pushed on the regular unslicer stack, likewise removed. The 
-doOpen() method would only be invoked on the top-most Opener, removing a lot 
-of method calls. (I think the math is something like turning 
-avg(treedepth)*avg(nodes) into avg(nodes)). 
- 
-There are some other methods that are delegated in this way. open() is 
-related to doOpen(). setObject()/getObject() keep track of references to 
-shared objects and are typically only intercepted by a second-level object 
-which defines a "serialization scope" (like a single remote method call), as 
-well as connection-wide references (like pb.Referenceables) tracked by the 
-PBRootUnslicer. These would also be targets for optimization. 
- 
-The fundamental reason for this optimization is that most Unslicers don't 
-care about these methods. There are far more uses of doOpen() (one per 
-object node) than there are changes to the desired behavior of doOpen(). 
- 
-** CUnslicer 
- 
-Like CSlicer, the unslicing process wants to be able to be implemented (for 
-built-in objects) entirely in C. This means a CUnslicer "object" (a struct 
-full of function pointers), a table accessible from C that maps opentypes to 
-both CUnslicers and regular python-based Unslicers, and a CProtocol 
-tokenization code fed by a CTransport. It should be possible for the 
-python->C transition to occur in the reactor when it calls ctransport.doRead 
-and then not come back up to Python until Banana.receivedObject(), 
-at least for built-in types like dicts and strings. 
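The receive-side tokenization loop that the CUnslicer section wants to push into C can be sketched in Python. This assumes banana-style framing as described earlier (a length header of base-128 digits with the high bit clear, least-significant digit first, terminated by a type byte with the high bit set); the STRING value and the function are illustrative, and payload consumption after the header is omitted:

```python
# Sketch of the buffer-scanning idea: extract (typebyte, header) pairs
# from a buffer, and hand the unconsumed tail back for the next read(),
# in the style of the CLineReceiver approach described above.

STRING = 0x82  # hypothetical type-byte value for this sketch

def tokenize(buf):
    """Return (tokens, tail): tokens is a list of
    (typebyte, header_value, offset_after_typebyte) tuples, tail is the
    incomplete remainder to prepend to the next read()."""
    tokens = []
    pos = 0
    while True:
        header = 0
        shift = 0
        i = pos
        # accumulate b128 header digits (high bit clear, LSB first)
        while i < len(buf) and not (buf[i] & 0x80):
            header |= (buf[i] & 0x7F) << shift
            shift += 7
            i += 1
        if i >= len(buf):
            # incomplete token: keep the tail for the next read()
            return tokens, buf[pos:]
        typebyte = buf[i]  # high bit set terminates the header
        tokens.append((typebyte, header, i + 1))
        pos = i + 1

# header "5" followed by a STRING type byte
tokens, tail = tokenize(bytes([5, STRING]))
```

A C version would do exactly this scan over the raw buffer and only surface into Python with the offset list, which is the memory-accounting point made above: nothing is copied out of the input buffer until a token is accepted.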
diff --git a/src/foolscap/doc/listings/copyable-receive.py b/src/foolscap/doc/listings/copyable-receive.py deleted file mode 100644 index f43f79e9..00000000 --- a/src/foolscap/doc/listings/copyable-receive.py +++ /dev/null @@ -1,41 +0,0 @@ -#! /usr/bin/python - -import sys -from twisted.internet import reactor -from foolscap import RemoteCopy, Tub - -# the receiving side defines the RemoteCopy -class RemoteUserRecord(RemoteCopy): - copytype = "unique-string-UserRecord" # this matches the sender - - def __init__(self): - # note: our __init__ must take no arguments - pass - - def setCopyableState(self, d): - self.name = d['name'] - self.age = d['age'] - self.shoe_size = "they wouldn't tell us" - - def display(self): - print "Name:", self.name - print "Age:", self.age - print "Shoe Size:", self.shoe_size - -def getRecord(rref, name): - d = rref.callRemote("getuser", name=name) - def _gotRecord(r): - # r is an instance of RemoteUserRecord - r.display() - reactor.stop() - d.addCallback(_gotRecord) - - -from foolscap import Tub -tub = Tub() -tub.startService() - -d = tub.getReference(sys.argv[1]) -d.addCallback(getRecord, "alice") - -reactor.run() diff --git a/src/foolscap/doc/listings/copyable-send.py b/src/foolscap/doc/listings/copyable-send.py deleted file mode 100644 index 55bb2c62..00000000 --- a/src/foolscap/doc/listings/copyable-send.py +++ /dev/null @@ -1,42 +0,0 @@ -#! 
/usr/bin/python - -from twisted.internet import reactor -from foolscap import Copyable, Referenceable, Tub - -# the sending side defines the Copyable - -class UserRecord(Copyable): - # this class uses the default Copyable behavior - typeToCopy = "unique-string-UserRecord" - - def __init__(self, name, age, shoe_size): - self.name = name - self.age = age - self.shoe_size = shoe_size # this is a secret - - def getStateToCopy(self): - d = {} - d['name'] = self.name - d['age'] = self.age - # don't tell anyone our shoe size - return d - -class Database(Referenceable): - def __init__(self): - self.users = {} - def addUser(self, name, age, shoe_size): - self.users[name] = UserRecord(name, age, shoe_size) - def remote_getuser(self, name): - return self.users[name] - -db = Database() -db.addUser("alice", 34, 8) -db.addUser("bob", 25, 9) - -tub = Tub() -tub.listenOn("tcp:12345") -tub.setLocation("localhost:12345") -url = tub.registerReference(db, "database") -print "the database is at:", url -tub.startService() -reactor.run() diff --git a/src/foolscap/doc/listings/pb1client.py b/src/foolscap/doc/listings/pb1client.py deleted file mode 100644 index 2d129b1f..00000000 --- a/src/foolscap/doc/listings/pb1client.py +++ /dev/null @@ -1,31 +0,0 @@ -#! 
/usr/bin/python - -from twisted.internet import reactor -from foolscap import Tub - -def gotError1(why): - print "unable to get the RemoteReference:", why - reactor.stop() - -def gotError2(why): - print "unable to invoke the remote method:", why - reactor.stop() - -def gotReference(remote): - print "got a RemoteReference" - print "asking it to add 1+2" - d = remote.callRemote("add", a=1, b=2) - d.addCallbacks(gotAnswer, gotError2) - -def gotAnswer(answer): - print "the answer is", answer - reactor.stop() - -tub = Tub() -tub.startService() -d = tub.getReference("pbu://localhost:12345/math-service") -d.addCallbacks(gotReference, gotError1) - -reactor.run() - - diff --git a/src/foolscap/doc/listings/pb1server.py b/src/foolscap/doc/listings/pb1server.py deleted file mode 100644 index 25d86f22..00000000 --- a/src/foolscap/doc/listings/pb1server.py +++ /dev/null @@ -1,20 +0,0 @@ -#! /usr/bin/python - -from twisted.internet import reactor -from foolscap import Referenceable, UnauthenticatedTub - -class MathServer(Referenceable): - def remote_add(self, a, b): - return a+b - def remote_subtract(self, a, b): - return a-b - -myserver = MathServer() -tub = UnauthenticatedTub() -tub.listenOn("tcp:12345") -tub.setLocation("localhost:12345") -url = tub.registerReference(myserver, "math-service") -print "the object is available at:", url - -tub.startService() -reactor.run() diff --git a/src/foolscap/doc/listings/pb2client.py b/src/foolscap/doc/listings/pb2client.py deleted file mode 100644 index 5a70f82c..00000000 --- a/src/foolscap/doc/listings/pb2client.py +++ /dev/null @@ -1,36 +0,0 @@ -#! 
/usr/bin/python - -import sys -from twisted.internet import reactor -from foolscap import Tub - -def gotError1(why): - print "unable to get the RemoteReference:", why - reactor.stop() - -def gotError2(why): - print "unable to invoke the remote method:", why - reactor.stop() - -def gotReference(remote): - print "got a RemoteReference" - print "asking it to add 1+2" - d = remote.callRemote("add", a=1, b=2) - d.addCallbacks(gotAnswer, gotError2) - -def gotAnswer(answer): - print "the answer is", answer - reactor.stop() - -if len(sys.argv) < 2: - print "Usage: pb2client.py URL" - sys.exit(1) -url = sys.argv[1] -tub = Tub() -tub.startService() -d = tub.getReference(url) -d.addCallbacks(gotReference, gotError1) - -reactor.run() - - diff --git a/src/foolscap/doc/listings/pb2server.py b/src/foolscap/doc/listings/pb2server.py deleted file mode 100644 index 96627aeb..00000000 --- a/src/foolscap/doc/listings/pb2server.py +++ /dev/null @@ -1,20 +0,0 @@ -#! /usr/bin/python - -from twisted.internet import reactor -from foolscap import Referenceable, Tub - -class MathServer(Referenceable): - def remote_add(self, a, b): - return a+b - def remote_subtract(self, a, b): - return a-b - -myserver = MathServer() -tub = Tub(certFile="pb2server.pem") -tub.listenOn("tcp:12345") -tub.setLocation("localhost:12345") -url = tub.registerReference(myserver, "math-service") -print "the object is available at:", url - -tub.startService() -reactor.run() diff --git a/src/foolscap/doc/listings/pb3calculator.py b/src/foolscap/doc/listings/pb3calculator.py deleted file mode 100644 index ebfbf747..00000000 --- a/src/foolscap/doc/listings/pb3calculator.py +++ /dev/null @@ -1,44 +0,0 @@ -#! 
/usr/bin/python - -from twisted.application import service -from twisted.internet import reactor -from foolscap import Referenceable, Tub - -class Calculator(Referenceable): - def __init__(self): - self.stack = [] - self.observers = [] - def remote_addObserver(self, observer): - self.observers.append(observer) - def log(self, msg): - for o in self.observers: - o.callRemote("event", msg=msg) - def remote_removeObserver(self, observer): - self.observers.remove(observer) - - def remote_push(self, num): - self.log("push(%d)" % num) - self.stack.append(num) - def remote_add(self): - self.log("add") - arg1, arg2 = self.stack.pop(), self.stack.pop() - self.stack.append(arg1 + arg2) - def remote_subtract(self): - self.log("subtract") - arg1, arg2 = self.stack.pop(), self.stack.pop() - self.stack.append(arg2 - arg1) - def remote_pop(self): - self.log("pop") - return self.stack.pop() - -tub = Tub() -tub.listenOn("tcp:12345") -tub.setLocation("localhost:12345") -url = tub.registerReference(Calculator(), "calculator") -print "the object is available at:", url - -application = service.Application("pb2calculator") -tub.setServiceParent(application) - -if __name__ == '__main__': - raise RuntimeError("please run this as 'twistd -noy pb3calculator.py'") diff --git a/src/foolscap/doc/listings/pb3user.py b/src/foolscap/doc/listings/pb3user.py deleted file mode 100644 index 58593f7d..00000000 --- a/src/foolscap/doc/listings/pb3user.py +++ /dev/null @@ -1,34 +0,0 @@ -#! 
/usr/bin/python - -import sys -from twisted.internet import reactor -from foolscap import Referenceable, Tub - -class Observer(Referenceable): - def remote_event(self, msg): - print "event:", msg - -def printResult(number): - print "the result is", number -def gotError(err): - print "got an error:", err -def gotRemote(remote): - o = Observer() - d = remote.callRemote("addObserver", observer=o) - d.addCallback(lambda res: remote.callRemote("push", num=2)) - d.addCallback(lambda res: remote.callRemote("push", num=3)) - d.addCallback(lambda res: remote.callRemote("add")) - d.addCallback(lambda res: remote.callRemote("pop")) - d.addCallback(printResult) - d.addCallback(lambda res: remote.callRemote("removeObserver", observer=o)) - d.addErrback(gotError) - d.addCallback(lambda res: reactor.stop()) - return d - -url = sys.argv[1] -tub = Tub() -tub.startService() -d = tub.getReference(url) -d.addCallback(gotRemote) - -reactor.run() diff --git a/src/foolscap/doc/schema.xhtml b/src/foolscap/doc/schema.xhtml deleted file mode 100644 index 5887dce2..00000000 --- a/src/foolscap/doc/schema.xhtml +++ /dev/null @@ -1,474 +0,0 @@ - - -Foolscap Schemas - - - - -

Foolscap Schemas

- -NOTE! This is all preliminary and is more an exercise in semiconscious -protocol design than anything else. Do not believe this document. This -sentence is lying. So there. - - -

Existing Constraint classes

- - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - - -
class nameshortcut
Any()accept anything
StringConstraint(maxLength=1000)strstring of up to maxLength characters (maxLength=None means - unlimited), or a VOCAB sequence of any length
IntegerConstraint(maxBytes=-1) (shortcut: int): integer. maxBytes=-1 means s_int32_t, =N means LONGINT which can be expressed in N or fewer bytes (i.e. abs(num) < 2**(8*maxBytes)), =None means unlimited. NOTE: shortcut 'long' is like shortcut 'int' but maxBytes=1024.
NumberConstraint(maxBytes=1024) (shortcut: float): integer or float. Integers are limited by maxBytes as in IntegerConstraint; floats are fixed size.
BooleanConstraint(value=None) (shortcut: bool): True or False. If value=True, only accepts True. If value=False, only accepts False. NOTE: value= is a very silly parameter.
InterfaceConstraint(iface) (shortcut: any Interface subclass): TODO. UNSAFE. Accepts an instance which claims to implement the given Interface.
ClassConstraint (shortcut: any old-style class object): TODO. UNSAFE. Accepts an instance which claims to be of the given class name.
PolyConstraint(*alternatives) (shortcut: a tuple like (alt1, alt2)): Accepts any object which obeys at least one of the alternative constraints provided. Implements a logical OR of the given constraints. Also known as ChoiceOf.
TupleConstraint(*elemConstraints): Accepts a tuple of fixed length whose elements obey the given constraints. Also known as TupleOf.
ListConstraint(elemConstraint, maxLength=30): Accepts a list of up to maxLength items, each of which obeys the element constraint provided. Also known as ListOf.
DictConstraint(keyConstraint, valueConstraint, maxKeys=30): Accepts a dictionary of up to maxKeys items. Each key must obey keyConstraint and each value must obey valueConstraint. Also known as DictOf.
AttributeDictConstraint(*attrTuples, **kwargs): Constrains dictionaries used to describe instance attributes, as used by RemoteCopy. Each attrTuple is a pair of (attrname, constraint), used to constrain individual named attributes. kwargs['attributes'] provides the same control. kwargs['ignoreUnknown'] is a boolean flag which indicates that unknown attributes in inbound state should simply be dropped. kwargs['acceptUnknown'] indicates that unknown attributes should be accepted into the instance state dictionary.
RemoteMethodSchema(method=None, _response=None, __options=[], **kwargs): Constrains the arguments and return value of a single remotely-invokable method. If method= is provided, the inspect module is used to extract constraints from the method itself (positional arguments are not allowed, the default values of keyword arguments provide constraints for each argument, and the results of running the method provide the return-value constraint). Otherwise, most kwargs items provide constraints for method arguments, and _response provides a constraint for the return value. __options and additional kwargs keys provide neato whiz-bang future expansion possibilities.
Shared(constraint, refLimit=None): TODO. Allows objects with refcounts no greater than refLimit (=None means unlimited). Wraps another constraint, which the object must obey. refLimit=1 rejects shared objects.
Optional(constraint, default): TODO. Can be used to tag Copyable attributes or (maybe) method arguments. Wraps another constraint. If an object is provided, it must obey the constraint. If not provided, the default value will be given in its place.
FailureConstraint(): Constrains the contents of a CopiedFailure.
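To make the table concrete, here is a toy rendition of two of these constraints plus the shortcut convention (a sketch only; the class names mirror the table but the implementations are illustrative, not foolscap's actual code):

```python
# Toy versions of IntegerConstraint and ListConstraint, plus the shortcut
# convention where a bare 'int' stands for IntegerConstraint(maxBytes=-1).
# Illustrative sketch, not foolscap's real implementation.

class IntegerConstraint:
    def __init__(self, maxBytes=-1):
        self.maxBytes = maxBytes

    def checkObject(self, obj):
        if not isinstance(obj, int):
            raise ValueError("not an integer")
        if self.maxBytes is None:
            return                      # unlimited
        # maxBytes=-1 means s_int32_t, =N means abs(num) < 2**(8*N)
        limit = 2 ** 31 if self.maxBytes == -1 else 2 ** (8 * self.maxBytes)
        if abs(obj) >= limit:
            raise ValueError("integer out of range")

class ListConstraint:
    def __init__(self, elemConstraint, maxLength=30):
        self.elemConstraint = elemConstraint
        self.maxLength = maxLength

    def checkObject(self, obj):
        if not isinstance(obj, list) or len(obj) > self.maxLength:
            raise ValueError("not a list of up to %d items" % self.maxLength)
        for item in obj:
            self.elemConstraint.checkObject(item)

def makeConstraint(shortcut):
    if shortcut is int:                 # shortcut 'int'
        return IntegerConstraint(maxBytes=-1)
    return shortcut                     # already a constraint object

ListConstraint(makeConstraint(int), maxLength=3).checkObject([1, 2, 3])
```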
- - -
-
-"""
-
-RemoteReference objects should all be tagged with interfaces that they
-implement, which point to representations of the method schemas. When a
-remote method is called, Foolscap should look up the appropriate method and
-serialize the argument list accordingly.
-
-We plan to eliminate positional arguments, so local RemoteReferences use
-their schema to convert callRemote calls with positional arguments to
-all-keyword arguments before serialization.
-
-Conversion to the appropriate version interface should be handled at the
-application level.  Eventually, with careful use of twisted.python.context,
-we might be able to provide automated tools for helping application authors
-automatically convert interface calls and isolate version-conversion code,
-but that is probably pretty hard.
-
-"""
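The positional-to-keyword conversion described above can be sketched with the stdlib inspect module. The names here (toKeywordArgs, Adder) are invented for the illustration; foolscap's real RemoteMethodSchema also applies per-argument constraints:

```python
import inspect

# Sketch: treat a remote method's signature as its schema, and convert
# positional arguments into all-keyword arguments before serialization.

def toKeywordArgs(method, args, kwargs):
    names = inspect.getfullargspec(method).args[1:]  # skip 'self'
    converted = dict(kwargs)
    for name, value in zip(names, args):
        if name in converted:
            raise TypeError("duplicate argument: %s" % name)
        converted[name] = value
    return converted

class Adder:
    def remote_add(self, a, b):
        return a + b

# positional (1, 2) becomes {'a': 1, 'b': 2} before hitting the wire
converted = toKeywordArgs(Adder.remote_add, (1, 2), {})
```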
-
-
-class Attributes:
-    def __init__(self,*a,**k):
-        pass
-
-X = Attributes(
-    ('hello', str),
-    ('goodbye', int),
-
-    # Allow the possibility of multiple or circular references.  The default
-    # is to make multiple copies to avoid making the serializer do extra
-    # work.
-    ('next', Shared(Narf)),
-
-    ('asdf', ListOf(Narf, maxLength=30)),
-    ('fdsa', (Narf, String(maxLength=5), int)),
-    ('qqqq', DictOf(str, Narf, maxKeys=30)),
-    ('larg', AttributeDict(('A', int),
-                           ('X', Number),
-                           ('Z', float))),
-    Optional("foo", str),
-    Optional("bar", str, default=None),
-    ignoreUnknown=True,
-    )
-
-X = Attributes(
-    Optional("foo", str),  # this form and attributes= can be used at once
-    attributes={ 'hello': str,     # this form doesn't allow Optional()
-                 'goodbye': int,
-               },
-    ignoreUnknown=True)
-
-class Narf(Remoteable):
-    # step 1
-    __schema__ = X
-    # step 2 (possibly - this loses information)
-    class schema:
-        hello = str
-        goodbye = int
-        class add:
-            x = Number
-            y = Number
-            __return__ = Copy(Number)
-
-        class getRemoteThingy:
-            fooID = Arg(WhateverID, default=None)
-            barID = Arg(WhateverID, default=None)
-            __return__ = Reference(Narf)
-
-    # step 3 - this is the only example that shows argument order, which we
-    # _do_ need in order to translate positional arguments to callRemote, so
-    # don't take the nested-classes example too seriously.
-
-    schema = """
-    int add (int a, int b)
-    """
-
-    # Since the above schema could also be used for Formless, or possibly for
-    # World (for state) we can also do:
-
-    class remote_schema:
-        """blah blah
-        """
-
-    # You could even subclass that from the other one...
-
-    class remote_schema(schema):
-        dontUse = 'hello', 'goodbye'
-
-
-    def remote_add(self, x, y):
-        return x + y
-
-    def rejuvenate(self, deadPlayer):
-        return Reference(deadPlayer.bringToLife())
-
-    # "Remoteable" is a new concept - objects which may be method-published
-    # remotely _or_ copied remotely.  The schema objects support both method
-    # / interface definitions and state definitions, so which one gets used
-    # can be defined by the sending side as to whether it sends a
-    # Copy(theRemoteable) or Reference(theRemoteable)
-
-    # (also, with methods that are explicitly published by a schema there is
-    # no longer technically any security need for the remote_ prefix, which,
-    # based on past experience can be kind of annoying if you want to
-    # provide the same methods locally and remotely)
-
-    # outstanding design choice - Referenceable and Copyable are subclasses
-    # of Remoteable, but should they restrict the possibility of sending it
-    # the other way or not?
-
-    def getPlayerInfo(self, playerID):
-        return CopyOf(self.players[playerID])
-
-    def getRemoteThingy(self, fooID, barID):
-        return ReferenceTo(self.players[selfPlayerID])
-
-
-class RemoteNarf(Remoted):
-    __schema__ = X
-    # or, example of a difference between local and remote
-    class schema:
-        class getRemoteThingy:
-            __return__ = Reference(RemoteNarf)
-        class movementUpdate:
-            posX = int
-            posY = int
-
-            # No return value
-            __return__ = None
-
-            # Don't wait for the answer
-            __wait__ = False
-
-            # Feel free to send this over UDP
-            __reliable__ = False
-
-            # but send in order!
-            __ordered__ = True
-
-            # use priority queue / stream 3
-            __priority__ = 3
-
-            # allow full serialization of failures
-            __failure__ = FullFailure
-
-            # default: trivial failures, or str or int
-            __failure__ = ErrorMessage
-
-            # These options may imply different method names - e.g. '__wait__ =
-            # False' might imply that you can't use callRemote, you have to
-            # call 'sendRemote' instead... __reliable__ = False might be
-            # 'callRemoteUnreliable' (longer method name to make it less
-            # convenient to call by accident...)
-
-
-## (and yes, donovan, we know that TypedInterface exists and we are going to
-## use it.  we're just screwing around with other syntaxes to see what about PB
-## might be different.)
-
-Common banana sequences:
-
-A reference to a remote object.
-   (On the sending side: Referenceable or ReferenceTo(aRemoteable)
-    On the receiving side: RemoteReference)
-('remote', reference-id, interface, version, interface, version, ...)
-
-
-A call to a remote method:
-('fastcall', request-id, reference-id,
- method-name, 'arg-name', arg1, 'arg-name', arg2)
-
-A call to a remote method with extra spicy metadata:
-('call', request-id, reference-id, interface,
- version, method-name, 'arg-name', arg1, 'arg-name', arg2)
-
-Special hack: request-id of 0 means 'do not answer this call, do not
-acknowledge', etc.
-
-Answer to a method call:
-('answer', request-id, response)
-('error', request-id, response)
-
-Decrement a reference incremented by 'remote' command:
-('decref', reference-id)
-
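As a concrete illustration of the field ordering, the sequences above can be modeled as plain Python tuples (the helper names below are hypothetical; the real wire format is banana tokens, not tuples):

```python
# Build the 'fastcall', 'answer' and 'error' sequences described above as
# tuples, purely to show field ordering. Hypothetical helpers, not foolscap.

def fastcall(request_id, reference_id, method_name, **kwargs):
    seq = ['fastcall', request_id, reference_id, method_name]
    for name, value in sorted(kwargs.items()):
        seq.extend([name, value])       # 'arg-name', arg pairs
    return tuple(seq)

def answer(request_id, response):
    return ('answer', request_id, response)

def error(request_id, response):
    return ('error', request_id, response)

msg = fastcall(1, 42, 'add', a=1, b=2)
# ('fastcall', 1, 42, 'add', 'a', 1, 'b', 2)
```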
-Broker currently has 9 proto_ methods:
-
-_version(vnum): accept a version number, compare to ours, reject if different
-
-_didNotUnderstand(command): log command, maybe drop connection
-
-_message(reqID, objID, message, answerRequired, netArgs, netKw):
-_cachemessage (like _message but finds objID with self.cachedLocallyAs instead
-               of self.localObjectForID, used by RemoteCacheMethod and
-               RemoteCacheObserver)
- look up objID, invoke it with .remoteMessageReceived(message, args),
- send "answer"(reqID, results)
-
-_answer(reqID, results): look up self.waitingForAnswers[reqID] and fire
-                         callback with results
-
-_error(reqID, failure): lookup waitingForAnswers, fire errback
-
-_decref(objID): dec refcount of self.localObjects[objID]. Sent in
-                RemoteReference.__del__
-
-_decache(objID): dec refcount of self.remotelyCachedObjects[objID]
-
-_uncache(objID): remove obj from self.locallyCachedObjects[objID]
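The name-based dispatch implied by these proto_ methods can be sketched as follows (a toy stand-in for Broker, with a plain callable in place of a Deferred):

```python
# Toy Broker-style dispatcher: incoming commands are routed to proto_
# methods by name, and unknown commands fall through to didNotUnderstand
# handling. Illustrative sketch, not the real Broker class.

class MiniBroker:
    def __init__(self):
        self.waitingForAnswers = {}   # reqID -> callback

    def proto_answer(self, reqID, results):
        callback = self.waitingForAnswers.pop(reqID)
        callback(results)             # really deferred.callback(results)

    def proto_decref(self, objID):
        pass                          # would decrement a local refcount

    def dispatch(self, command, *args):
        method = getattr(self, 'proto_' + command, None)
        if method is None:
            raise ValueError('didNotUnderstand: %s' % command)
        return method(*args)

results = []
b = MiniBroker()
b.waitingForAnswers[1] = results.append
b.dispatch('answer', 1, 'hello')      # results becomes ['hello']
```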
-
-
- -

stuff

- -

A RemoteReference/RemoteCopy (called a Remote for now) has a schema attached to it. remote.callRemote(methodname, *args) does schema.getMethodSchema(methodname) to obtain a MethodConstraint that describes the individual method. This MethodConstraint (or MethodSchema) has other attributes which are used by either end: what arguments are allowed and/or expected, calling conventions (synchronous, in-order, priority, etc), and how the return value should be constrained.

- -

To use the Remote like a RemoteCopy ...

- -
-
-Remote:
- .methods
- .attributes
- .getMethodSchema(methodname) -> MethodConstraint
- .getAttributeSchema(attrname) -> a Constraint
-
-XPCOM idl specifies methods and attributes (readonly, readwrite). A Remote
-object which represented a distant XPCOM object would have a Schema that is
-created by parsing the IDL. Its callRemote would do the appropriate
-marshalling. Issue1: XPCOM lets methods have in/out/inout parameters.. these
-must be detected and a wrapper generated. Issue2: what about attribute
-set/get operations? Could add setRemote and getRemote for these.
-
----
-
-Some of the schema questions come down to how PBRootSlicer should deal with
-instances. The question is whether to treat the instance as a Referenceable
-(copy-by-reference: create and transmit a reference number, which will be
-turned into a RemoteReference on the other end), or as a Copyable
-(copy-by-value: collect some instance state and send it as an instance).
-This decision could be made by looking at what the instance inherits from:
-
-  if isinstance(obj, pb.Referenceable):
-      sendByReference(obj)
-  elif isinstance(obj, pb.Copyable):
-      sendByValue(obj)
-  else:
-      raise InsecureJellyError
-
-or by what it can be adapted to:
-
-  r = IReferenceable(obj, None)
-  if r:
-      sendByReference(r)
-  else:
-      r = ICopyable(obj, None)
-      if r:
-          sendByValue(r)
-      else:
-          raise InsecureJellyError
-
-The decision could also be influenced by the sending schema currently in
-effect. Side A invokes a method on side B. A knows of a schema which states
-that the 'foo' argument of this method should be a CopyableSpam, so it
-requires that the object be adaptable to ICopyableSpam (which is then
-copied by value) and tries to comply when serializing that argument. B will enforce its
-own schema. When B returns a result to A, the method-result schema on B's
-side can influence how the return value is handled.
-
-For bonus points, it may be possible to send the object as a combination of
-these two. That may get pretty hard to follow, though.
-
-
- -

adapters and Referenceable/Copyable

- -

One possibility: rather than using a SlicerRegistry, use Adapters. The ISliceable interface has one method: getSlicer(). Slicer.py would register adapters for basic types (list, dict, etc) that would just return an appropriate ListSlicer, etc. Instances which would have been pb.Copyable subclasses in oldpb can still inherit from pb.Copyable, which now implements ISliceable and produces a Slicer (opentype='instance') that calls getStateToCopy() (although the subclass-__implements__ handling is now more important than before). pb.Referenceable implements ISlicer and produces a Slicer (opentype='reference'?) which (possibly) registers itself in the broker and then sends the reference number (along with a schema if necessary (and the other end wants them)).

- -

Classes are also welcome to implement ISlicer themselves and produce whatever clever (streaming?) Slicer objects they like.

- -

On the receiving side, we still need a registry to provide reasonable security. There are two registries. The first is the RootUnslicer.openRegistry, which maps OPEN types to Unslicer factories. It is used in doOpen().

- -

The second registry should map opentype=instance class names to something which can handle the instance contents. Should this be a replacement Unslicer?

- - diff --git a/src/foolscap/doc/specifications/banana.xhtml b/src/foolscap/doc/specifications/banana.xhtml deleted file mode 100644 index e778d60d..00000000 --- a/src/foolscap/doc/specifications/banana.xhtml +++ /dev/null @@ -1,1379 +0,0 @@ - - -The Banana Protocol - - - - -

The Banana Protocol

- -NOTE! This is all preliminary and is more an exercise in semiconscious -protocol design than anything else. Do not believe this document. This -sentence is lying. So there. - -

Banana tokens

- -

At the lowest layer, the wire transport takes the form of Tokens. These all take the shape of header/type-byte/body.

- -
    -
  • Header: zero or more bytes, all of which have the high bit clear (they range in value from 0 to 127). They form a little-endian base-128 number, so 1 is represented as 0x01, 128 is represented as 0x00 0x01, 130 as 0x02 0x01, etc. 0 can be represented by any string of 0x00 bytes, including an empty string. The maximum legal header length is 64 bytes, so it has a maximum value of 2**(64*7)-1. Not all tokens have headers.
  • - -
  • Type Byte: the high bit is set to distinguish it from the header bytes that precede it (it has a value from 128 to 255). The Type Byte determines how to interpret both the header and the body. All valid type bytes are listed below.
  • - -
  • Body: zero or more arbitrary bytes; the length is specified by the header. Not all tokens have bodies.
  • -
- -
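The header encoding can be pinned down with a few lines of Python (standalone helpers with assumed names, not foolscap's tokenizer):

```python
# Little-endian base-128 header bytes, as described above: every header
# byte has the high bit clear, and 0 may encode as the empty string.

def encode_header(n):
    if n < 0:
        raise ValueError("headers encode non-negative numbers only")
    out = bytearray()
    while n:
        out.append(n & 0x7f)   # low 7 bits first, high bit always clear
        n >>= 7
    return bytes(out)          # 0 encodes as the empty string

def decode_header(data):
    n = 0
    for i, b in enumerate(data):
        n |= (b & 0x7f) << (7 * i)
    return n

# matches the examples in the text:
assert encode_header(1) == b'\x01'
assert encode_header(128) == b'\x00\x01'
assert encode_header(130) == b'\x02\x01'
```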

Tokens are described below as [header-TOKEN-body], where either header or body may be empty. For example, [len-LIST-empty] indicates that the length is put into the header, LIST is the token being used, and the body is empty.

- -

The possible Token types are:

- -
    -
  • - 0x80: LIST (old): [len-LIST-empty] - -

    This token marks the beginning of a list with LEN elements. It acts as - the open parenthesis, and the matching close parenthesis is - implicit, based upon the length of the list. It will be followed by LEN - things, which may be tokens like INTs or STRINGS, or which may be - sublists. Banana keeps a list stack to handle nested sublists.

    - -

    This token (and the notion of length-prefixed lists in general) is from - oldbanana. In newbanana it is only used during the initial dialect - negotiation (so that oldbanana peers can be detected). Newbanana requires - that LIST(old) tokens be followed exclusively by strings and have a rather - limited allowable length (say, 640 dialects long).

    -
  • - -
  • - 0x81: INT: [value-INT-empty] - -

    This token defines a single positive integer. The protocol defines its range as [0, 2**31), so the largest legal value is 2**31-1. The recipient is responsible for choosing an appropriate local type to hold the number. For Python, if the value represented by the incoming base-128 digits grows larger than a regular Python IntType can accommodate, the receiving system will use a LongType or a BigNum as necessary.

    - -

    Anything larger than this range must be sent with a LONGINT token - instead.

    - -

    (oldbanana compatibility note: a python implementation can accept - anything in the range [0, 2**448), limited by the 64-byte maximum header - size).

    - -

    The range was chosen to allow INT values to always fit in C's s_int32_t - type, so an implementation that doesn't have a useful bignum type can - simply reject LONGINT tokens.

    -
  • - -
  • - 0x82: STRING [len-STRING-chars] - -

    This token defines a string. To be precise, it defines a sequence of - bytes. The length is a base-128-encoded integer. The type byte is followed - by LEN bytes of data which make up the string. LEN is required to be - shorter than 640k: this is intended to reduce the amount of memory that - can be consumed on the receiving end before user code gets to decide - whether to accept the data or not.

    -
  • - -
  • - 0x83: NEG: [value-NEG-empty] - -

    This token defines a negative integer. It is identical to the - INT tag except that the results are negated before storage. - The range is defined as [-2**31, 0), again to make an implementation using - s_int32_t easier. Any numbers smaller (more negative) than this range must - be sent with a LONGNEG token.

    - -

    Implementations should be tolerant when receiving a negative zero - and turn it into a 0, even though they should not send such things.

    - -

    Note that NEG can represent a number (-2**31) whose absolute value - (2**31) is one larger than the greatest number that INT can represent - (2**31-1).

    -
  • - -
  • - 0x84: FLOAT [empty-FLOAT-value] - -

    This token defines a floating-point number. There is no header, and the - type byte is followed by 8 bytes which are a 64-bit IEEE double, as - defined by struct.pack("!d", num).

    -
  • - -
  • -

    0x85: OLDLONGINT: [value-OLDLONGINT-empty]

    -

    0x86: OLDLONGNEG: [value-OLDLONGNEG-empty]

    - -

    These were used by oldbanana to represent large numbers. Their size was - limited by the number of bytes in the header (max 64), so they can - represent [0, 2**448).

    -
  • - -
  • - 0x87: VOCAB: [index-VOCAB-empty] - -

    This defines a tokenized string. Banana keeps a mapping of common - strings, each one is assigned a small integer. These strings can be sent - compressed as a two-byte (index, VOCAB) sequence. They are delivered to - Jelly as plain strings with no indication that they were compressed for - transit.

    - -

    The strings in this mapping are populated by the sender when it sends a special vocab OPEN sequence. The intention is that this mapping will be sent just once when the connection is first established, but a sufficiently ambitious sender could use this to implement adaptive forward compression.

    -
  • - -
  • -

    0x88: OPEN: [[num]-OPEN-empty]

    -

    0x89: CLOSE: [[num]-CLOSE-empty]

    - -

    These tokens are the newbanana parenthesis markers. They carry an - optional number in their header: if present, the number counts the - appearance of OPEN tokens in the stream, starting at 0 for the first OPEN - used for a given connection and incrementing by 1 for each subsequent - OPEN. The matching CLOSE token must contain an identical number. These - numbers are solely for debugging and may be omitted. They may be removed - from the protocol once development has been completed.

    - -

    In contrast to oldbanana (with the LIST token), newbanana does not use - length-prefixed lists. Instead it relies upon the Banana layer to track - OPEN/CLOSE tokens.

    - -

    OPEN markers are followed by the Open Index tuple: one or more - tokens to indicate what kind of new sub-expression is being started. The - first token must be a string (either STRING or VOCAB), the rest may be - strings or other primitive tokens. The recipient decides when the Open - Index has finished and the body has begun.

    -
  • - -
  • -

    0x8A: ABORT: [[num]-ABORT-empty]

    - -

    This token indicates that something has gone wrong on the sender side, and that the resulting object must not be handed upwards in the unslicer stack. It may be impossible or inconvenient for the sender to stop sending the tokens associated with the unfortunate object, so the receiver must be prepared to silently drop all further tokens up to the matching CLOSE marker. The CLOSE token must always follow eventually: this is just a courtesy notice.

    - -

    The number, if present, will be the same one used by the OPEN - token.

    -
  • - -
  • -

    0x8B: LONGINT: [len-LONGINT-bytes]

    -

    0x8C: LONGNEG: [len-LONGNEG-bytes]

    - -

    These are processed like STRING tokens, but the bytes form a base-256 - encoded number, most-significant-byte first (note that this may require - several passes and some intermediate storage). The size is (barely) limited - by the length field, so the theoretical range is [0, 2**(2**(64*7)-1)-1), - but the receiver can impose whatever length limit they wish.

    - -

    LONGNEG is handled exactly like LONGINT but the number is negated - first.

    -
  • - -
  • -

    0x8D: ERROR [len-ERROR-chars]

    - -

    This token defines a string of ASCII characters which hold an error - message. When a severe protocol violation occurs, the offended side will - emit an ERROR token and then close the transport. The side which receives - the ERROR token should put the message in a developer-readable logfile and - close the transport as well.

    - -

    The ERROR token is formatted exactly like the STRING token, except that - it is defined to be encoded in ASCII (the STRING token does not claim to - be encoded in any particular character set, nor does it necessarily - represent human-readable characters).

    - -

    The ERROR token is limited to 1000 characters.

    -
  • - -
  • -

    0x8E: PING [[num]-PING-empty]

    -

    0x8F: PONG [[num]-PONG-empty]

    - -

    These tokens have no semantic value, but are used to implement - connection timeouts and keepalives. When one side receives a PING message, - it should immediately queue a PONG message on the return stream. The - optional number can be used to associate a PONG with the PING that prompted - it: if present, it must be duplicated in the response.

    - -

    Other than generating a PONG, these tokens are ignored by both ends. - They are not delivered to higher levels. They may appear in the middle of - an OPEN sequence without affecting it.

    - -

    The intended use is that each side is configured with two timers: the - idle timer and the disconnect timer. The idle timer specifies how long the - inbound connection is allowed to remain quiet before poking it. If no data - has been received for this long, a PING is sent to provoke some kind of - traffic. The disconnect timer specifies how long the inbound connection is - allowed to remain quiet before concluding that the other end is dead and - thus terminating the connection.

    -
  • - -

    These messages can also be used to estimate the connection's round-trip - time (including the depth of the transmit/receive queues at either end). - Just send a PING with a unique number, and measure the time until the - corresponding PONG is seen.

    - -
- -

TODO: Add TRUE, FALSE, and NONE tokens. (maybe? These are currently handled as OPEN sequences.)
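Tying the token descriptions together, here is a minimal encoder for a few of the types above, using the byte values listed (a sketch only; the function names are assumptions, and real banana additionally enforces length limits, VOCAB tables, and OPEN/CLOSE framing):

```python
import struct

# Encode INT (0x81), NEG (0x83), STRING (0x82), FLOAT (0x84) and
# LONGINT/LONGNEG (0x8b/0x8c) tokens per the layouts described above.

def header(n):
    out = bytearray()
    while n:                   # little-endian base-128, high bit clear
        out.append(n & 0x7f)
        n >>= 7
    return bytes(out)

def encode_int(n):
    if 0 <= n < 2 ** 31:
        return header(n) + b'\x81'             # [value-INT-empty]
    if -2 ** 31 <= n < 0:
        return header(-n) + b'\x83'            # [value-NEG-empty]
    body = abs(n).to_bytes((abs(n).bit_length() + 7) // 8, 'big')
    tag = b'\x8b' if n > 0 else b'\x8c'        # LONGINT / LONGNEG
    return header(len(body)) + tag + body      # base-256, MSB first

def encode_string(s):
    return header(len(s)) + b'\x82' + s        # [len-STRING-chars]

def encode_float(x):
    return b'\x84' + struct.pack('!d', x)      # no header, 8-byte IEEE

assert encode_int(1) == b'\x01\x81'
assert encode_int(2 ** 31) == b'\x04\x8b\x80\x00\x00\x00'
```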

- - -

Serialization

- -

When serializing an object, it is useful to view it as a directed graph. The root object is the one you start with; any objects it refers to are children of that root. Those children may point back to other objects that have already been serialized, or which will be serialized later.

- -

Banana, like pickle and other serialization schemes, does a depth-first traversal of this graph. Serialization is begun on each node before going down into the child nodes. Banana tracks previously-handled nodes and replaces them with numbered reference tokens to break loops in the graph.
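The cycle-breaking idea can be sketched in a few lines: a depth-first walk that numbers each container on first visit and emits a reference marker on later visits (illustrative only; banana actually derives the reference numbers from OPEN counts):

```python
# Depth-first serialization with numbered reference markers to break
# loops, as described above. Toy model: only lists are containers, and
# the output is nested tuples rather than banana tokens.

def serialize(obj, seen=None, counter=None):
    if seen is None:
        seen, counter = {}, [0]
    if isinstance(obj, list):
        if id(obj) in seen:
            return ('reference', seen[id(obj)])
        seen[id(obj)] = counter[0]     # number this node on first visit
        counter[0] += 1
        return ('list', [serialize(c, seen, counter) for c in obj])
    return obj                         # leaves pass through unchanged

loop = [1, 2]
loop.append(loop)                      # a cyclic structure
tokens = serialize(loop)
# ('list', [1, 2, ('reference', 0)])
```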

- - -

Banana Slicers

- -

A Banana Slicer is responsible for serializing a single user object: it slices that object into a series of smaller pieces, either fundamental Banana tokens or other Sliceable objects. On the receiving end, there is a corresponding Banana Unslicer which accepts the incoming tokens and re-creates the user object. There are different kinds of Slicers and Unslicers for lists, tuples, dictionaries, etc. Classes can provide their own Slicers if they want more control over the serialization process.

- -

In general, there is a Slicer object for each act of serialization of a given object (although this is not strictly necessary). This allows the Slicer to contain state about the serialization process, which enables producer/consumer-style pauses and slicer-controlled streaming serialization. The entire context is stored in a small tuple (which includes the Slicer), so it can be set aside for a while. In the future, this will allow interleaved serialization of multiple objects (doing context switching on the wire), to do things like priority queues and avoid head-of-line blocking.

- -

The most common pattern is to have the Slicer be the ISlicer Adapter for the object, in which case it gets a new Slicer each time it is serialized. Classes which do not need to store a lot of state can have a single Slicer per serialized object, presumably through some adapter tricks. It is also valid to have the serialized object be its own Slicer.

- -

The Slicer has other duties (described below), but the main one is to implement the slice method, which should return a sequence or an iterable which yields the Open Index Tokens, followed by the body tokens. (Note that the Slicer should not include the OPEN or CLOSE tokens: those are supplied by the SendBanana wrapping code.) Any item which is a fundamental type (int, string, float) will be sent as a banana token; anything else will be handled by recursion (with a new Slicer).

- -

Most subclasses of BaseSlicer implement a companion method named sliceBody, which supplies just the body tokens. (This makes the code a bit easier to follow.) sliceBody is usually just a return [token, token], or a series of yield statements, one per token. However, classes which wish to have more control over the process can implement sliceBody or even slice differently.

- - - -
-class ThingySlicer(slicer.BaseSlicer):
-    opentype = ('thingy',)
-    trackReferences = True
-
-    def sliceBody(self, streamable, banana):
-        return [self.obj.attr1, self.obj.attr2]
-
- -

If attr1 and attr2 are integers, the preceding Slicer would -create a token sequence like: OPEN STRING(thingy) 13 16 CLOSE. If -attr2 were actually another Thingy instance, it might produce OPEN -STRING(thingy) 13 OPEN STRING(thingy) 19 18 CLOSE CLOSE.

- -

Doing this with a generator gives the same basic results but avoids the -temporary buffer, which can be important when sending large amounts of data. -The following Slicer could be combined with a concatenating Unslicer to -implement the old FilePager class without the extra round-trip -inefficiencies.

- -
-class DemandSlicer(slicer.BaseSlicer):
-    opentype = ('demandy',)
-    trackReferences = True
-
-    def sliceBody(self, streamable, banana):
-        f = open("data", "r")
-        while True:
-            chunk = f.read(2048)
-            if not chunk:
-                break
-            yield chunk
-        f.close()
-
- -

The SendBanana code controls the pacing: if the transport is full, it has -the option of pausing the generator until the receiving end has caught up. -It also has the option of pulling tokens out of the Slicer anyway, and -buffering them in memory. This may be necessary to achieve serialization -coherency, discussed below.

- -

If the streamable flag is set, then the slicer gets to control the pacing too: it is allowed to yield a Deferred where it would normally provide a regular token. This tells Banana that serialization needs to wait for a while (perhaps we are streaming data from another source which has run dry, or we are trying to implement some kind of rate limiting). Banana will wait until the Deferred fires before attempting to retrieve another token. If the streamable flag is not set, then a parent Slicer has decided that it is unwilling to allow streaming (perhaps it needs to serialize a coherent state, and a pause for streaming would allow that state to change before it was completely serialized). The Slicer is not allowed to return a Deferred when streaming is disabled.

- -
-class URLGetterSlicer(slicer.BaseSlicer):
-    opentype = ('urldata',)
-    trackReferences = True
-
-    def gotPage(self, page):
-        self.page = page
-
-    def sliceBody(self, streamable, banana):
-        yield self.url
-        d = web.client.getPage(self.url)
-        d.addCallback(self.gotPage)
-        yield d
-        # here we hover in limbo until it fires
-        yield self.page
-
- -

(the code is a bit kludgy because generators have no way to pass data -back out of the yield statement).

- -

The Slicer can also raise a Violation exception, in which case the slicer will be aborted: no further tokens will be pulled from it. This causes an ABORT token to be sent over the wire, followed immediately by a CLOSE token. The dead Slicer's parent is notified with a childAborted method, then the Banana continues to extract tokens from the parent as if the child had finished normally. (TODO: we need a convenient way for the parent to indicate that it wishes to give up too, such as raising a Violation from within childAborted.)

- - -

Serialization Coherency

- -

Streaming serialization means the object is serialized a little bit at a -time, never consuming too much memory at once. The tradeoff is that, by -doing other useful work inbetween, our object may change state while it is -being serialized. In oldbanana this process was uninterruptible, so -coherency was not an issue. In newbanana it is optional. Some objects may -have more trouble with this than others, so Banana provides Slicers with a -means to influence the process.

- -

Banana makes certain promises about what takes place between successive -yield statements, when the Slicer gives up control to Banana. The -most conservative approach is to:

- -
    -
  • disable the RootSlicer's streamable flag to tell all Slicers - that they should not return Deferreds: this avoids loss of control due - to child Slicers giving it away
  • - -
  • set the SendBanana policy to buffer data in memory rather than do a - .pauseProducing: this removes pauses due to the output channel filling - up
  • - -
  • return a list from slice (or sliceBody) - instead of using a generator: this fixes the object contents at a single - point in time. (you can also create a list at the beginning of that - routine and then yield pieces of it, which has exactly the same - effect)
  • -
- -
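The difference made by the third point can be shown without any foolscap machinery at all: a generator-based sliceBody observes mutations made between yields, while a snapshotting one does not (the class and method names here are invented for the demonstration):

```python
# Coherency in miniature: the streaming slicer sees state changes made
# while it is paused at a yield; the snapshot slicer fixes the contents
# at a single point in time by copying them into a list first.

class Thing:
    def __init__(self):
        self.items = [1, 2, 3]

    def slice_streaming(self):
        for item in self.items:          # re-reads live state each step
            yield item

    def slice_snapshot(self):
        for item in list(self.items):    # state captured up front
            yield item

t = Thing()
gen = t.slice_streaming()
out = [next(gen)]
t.items[1] = 99                          # mutate mid-serialization
out.append(next(gen))                    # observes the mutation: [1, 99]

t2 = Thing()
gen2 = t2.slice_snapshot()
out2 = [next(gen2)]
t2.items[1] = 99
out2.append(next(gen2))                  # unaffected: [1, 2]
```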

Slicers aren't supposed to do anything which changes the state observed by other Slicers: if this is really the case then it is safe to use a generator. A parent Slicer which yields a non-primitive object will give up control to the child Slicer needed to handle that object, but that child should do its business and finish quickly, so there should be no way for the parent object's state to change in the meantime.

- -

If the SendBanana is allowed to give up control (.pauseProducing), then arbitrary code will get to run in between yield calls, possibly changing the state being accessed by those yields. Likewise child Slicers might give up control, threatening the coherency of one of their parents. Slicers can invoke banana.inhibitStreaming() (TODO: need a better name) to inhibit streaming, which will cause all child serialization to occur immediately, buffering as much data in memory as necessary to complete the operation without giving up control.

- -

Coherency issues are a new area for Banana, so expect new tools and -techniques to be developed which allow the programmer to make sensible -tradeoffs.

- - - -

The Slicer Stack

- - - -

The serialization context is stored in a SendBanana object, which -is one of the two halves of the Banana object (a subclass of Protocol). This -holds a stack of Banana Slicers, one per object currently being serialized -(i.e. one per node in the path from the root object to the object currently -being serialized).

- -

For example, suppose a class instance is being serialized, and this class -chose to use a dictionary to hold its instance state. That dictionary holds -a list of numbers in one of its values. While the list of numbers is being -serialized, the Slicer Stack would hold: the RootSlicer, an InstanceSlicer, -a DictSlicer, and finally a ListSlicer.

- -

The stack is used to determine two things:

- -
    -
  • How to handle a child object: which Slicer should be used, or if a - Violation should be raised
  • -
  • How to track object references, to break cycles in the object graph
  • -
- -

When a new object needs to be sent, it is first submitted to the top-most -Slicer (to its slicerForObject method), which is responsible -for either returning a suitable Slicer or raising a Violation exception (if -the object is rejected by a security policy). Most Slicers will just -delegate this method up to the RootSlicer, but Slicers which wish to pass -judgement upon enclosed objects (or modify the Slicer selected) can do -something else. Unserializable objects will raise an exception here.

- -

Once the new Slicer is obtained, the OPEN token is emitted, which -provides the openID number (just an implicit count of how many OPEN -tokens have been sent over the wire). This is where we break cycles in the -object graph: before serializing the object, we record a reference to it -(the openID), and any time we encounter the object again, we send the -reference number instead of a new copy. This reference number is tracked in -the SlicerStack, by handing the number/object pair to the top-most Slicer's -registerReference method. Most Slicers will delegate this up to -the RootSlicer, but again they can perform additional registrations or -consume the request entirely. This is used in PB to provide scoped -references, where (for example) a list should be sent twice if -it occurs in two separate method calls. In this case the CallSlicer (which -sits above the PBRootSlicer) does its own registration.
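The openID bookkeeping can be sketched as follows (the token shapes are hypothetical; in the real code the number/object pair travels through registerReference up the stack). Registering the container *before* recursing into its children is what breaks cycles.

```python
# Sketch of openID-based cycle breaking: record a reference to each
# container before serializing it; on re-encounter, emit a reference
# token instead of recursing forever.
import itertools

def slice_obj(obj, references, counter):
    if id(obj) in references:
        return ("reference", references[id(obj)])     # seen before: send openID
    if isinstance(obj, list):
        open_id = next(counter)                       # implicit OPEN count
        references[id(obj)] = open_id                 # register before children
        return ("open", open_id,
                [slice_obj(x, references, counter) for x in obj])
    return ("leaf", obj)

a = [1]
a.append(a)                                           # a cyclic list
tokens = slice_obj(a, {}, itertools.count())
print(tokens)  # -> ('open', 0, [('leaf', 1), ('reference', 0)])
```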

- -

The slicerForObject process is responsible for catching the -second time the object is sent. It looks in the same mapping created by -registerReference and returns a ReferenceSlicer -instead of the usual one.

- -

The RootSlicer, which sits at the bottom of the stack, is a -special case. It is never pushed or popped, and implements most of the -policy for the whole Banana process. The RootSlicer can also be interpreted -as a root object, if you imagine that any given user object being -serialized is somehow a child of the overall serialization context. In PB, -for example, the root object would be related to the connection and needs to -track things like which remotely-invokable objects are available.

- -

The default RootSlicer implements the following behavior:

- -
    -
  • Allow all objects that can be serialized to be sent
  • - -
  • Use its .slicerTable to get a Slicer for an object. If - that fails, adapt the object to ISlicer
  • - -
  • Record object references in its .references dict
  • -
- -

The RootSlicer class only does safe serialization: -basic types and whatever you've registered an ISlicer adapter for. The -TrustingRootSlicer uses that .slicerTable mapping to serialize -unsafe things (arbitrary instances, classes, etc), which is suitable for -local storage instead of network communication (i.e. when you want to use -banana as a pickle replacement).

- -

TODO: The idea is to let other serialization contexts do other things.
-For example, the final tokens could go to the parent Slicer for handling
-instead of straight to the Protocol, which would provide more control over
-turning the tokens into bytes and sending over a wire, saving to a file,
-etc.

- -

Finally, the stack can be queried to find out what path leads from the -root object to the one currently being serialized. If something goes wrong -in the serialization process (an exception is thrown), this path can make it -much easier to find out when the trouble happened, as opposed to -merely where. Knowing that the .oops method of your FooObject failed -during serialization isn't very useful when you have 500 FooObjects inside -your data structure and you need to know whether it was -bar.thisfoo or bar.thatfoo which caused the -problem. To this end, each Slicer has a .describe method which -is supposed to return a short string that explains how to get to the child -node currently being processed. When an error occurs, these strings are -concatenated together and put into the failure object.
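The .describe concatenation can be shown with the FooObject example above (FakeSlicer is a stand-in for real Slicer classes):

```python
# Toy of the .describe path: each Slicer on the stack reports how to
# reach its current child; concatenating the strings locates the failure.

class FakeSlicer:
    def __init__(self, desc):
        self.desc = desc
    def describe(self):
        return self.desc

stack = [FakeSlicer("<root>"), FakeSlicer(".bar"),
         FakeSlicer(".thisfoo"), FakeSlicer(".oops()")]
where = "".join(s.describe() for s in stack)
print(where)   # -> <root>.bar.thisfoo.oops()
```

Now the failure says it was bar.thisfoo (not bar.thatfoo) whose .oops method failed.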

- - - -

Deserialization

- -

The other half of the Banana class is the ReceiveBanana, -which accepts incoming tokens and turns them into objects. It is organized -just like the SendBanana, with a stack of Banana -Unslicer objects, each of which assembles tokens or child objects into a -larger one. Each Unslicer receives the tokens emitted by the matching Slicer -on the sending side. The whole stack is used to create new Unslicers, -enforce restrictions upon what objects will be accepted, and manage object -references.

- -

Each Unslicer accepts tokens and assembles them into an object of some
-sort, which it passes up to its parent Unslicer. Eventually a finished object
-is given to the RootUnslicer, which decides what to do with it.
-When the Banana is being used for data storage (like pickle), the root will
-just deliver the object to the caller. When Banana is used in PB, the actual
-work is done by some intermediate objects like the
-CallUnslicer, which is responsible for a single method
-invocation.

- -

The ReceiveBanana itself is responsible for pulling
-well-formed tokens off the incoming data stream, tracking OPEN and CLOSE
-tokens, maintaining synchronization with the transmitted token stream, and
-discarding tokens when the receiving Unslicers have rejected one of the
-inbound objects. Unslicer methods may raise Violation exceptions: these are
-caught by the Unbanana and cause the object currently being unserialized to
-fail: its parent gets an UnbananaFailure instead of the dict or list or
-instance that it would normally have received.

- -

OPEN tokens are followed by a short list of tokens called the -opentype to indicate what kind of object is being started. This is -looked up in the UnbananaRegistry just like object types are looked up in -the BananaRegistry (TODO: need sensible adapter-based registration scheme -for unslicing). The new Unslicer is pushed onto the stack.

- -

ABORT tokens indicate that something went wrong on the sending -side and that the current object is to be aborted. It causes the receiver to -discard all tokens until the CLOSE token which closes the current node. This -is implemented with a simple counter of how many levels of discarding we -have left to do.

- -

CLOSE tokens finish the current node. The Unslicer will pass its -completed object up to the receiveChild method of its parent.

- -

Open Index tokens: the Opentype

- -

OPEN tokens are followed by an arbitrary list of other tokens which are -used to determine which UnslicerFactory should be invoked to create the new -Unslicer. Basic Python types are designated with a simple string, like (OPEN -list) or (OPEN dict), but instances are serialized with two -strings (OPEN instance classname), and various exotic PB -objects like method calls may involve a list of strings and numbers (OPEN -call reqID objID methodname). The unbanana code works with the -unslicer stack to apply constraints to these indexing tokens and finally -obtain the new Unslicer when enough indexing tokens have been received.

- -

The reason for assembling this opentype list before creating the -Unslicer (instead of using a generic InstanceUnslicer which switches -behavior depending upon its first received token) is to support classes or -PB methods which wish to push custom Unslicers to handle their -deserialization process. For example, a class could push a -StreamingFileUnslicer that accepts a series of string tokens and appends -their contents to a file on disk. This Unslicer could reduce memory -consumption (by only holding one chunk at a time) and update some kind of -progress indicator as the data arrives. This particular feature was provided -by the old StringPager utility, but custom Unslicers offer more flexibility -and better efficiency (no additional round-trips).

- -

(note: none of this affects the serialization side: those Slicers emit -both their indexing tokens and their state tokens. It is only the receiving -side where the index tokens are handled by a different piece of code than -the content tokens).

- -

In yet greater detail:

- -
    - -
  • Each OPEN sequence is divided into an Index phase and a - Contents phase. The first one (or two or three) tokens are the - Index Tokens and the rest are the Body Tokens. The sequence ends with a - CLOSE token.
  • - -
  • Banana.inOpen is a boolean which indicates that we are in the Index - Phase. It is set to True when the OPEN token is received and returns to - False after the new Unslicer has been pushed.
  • - -
  • Banana.opentype is a list of Index Tokens that are being accumulated. - It is cleared each time .inOpen is set to True. The tuple form of opentype - is passed to Slicer.doOpen, Constraint.checkOpentype, and used as a key in - the RootSlicer.openRegistry dictionary. Each Unslicer type is indexed by - an opentype tuple.
  • - -
- -

If .inOpen is True, each new token type will be passed (through -Banana.getLimit and top.openerCheckToken) to the opener's .openerCheckToken -method, along with the current opentype tuple. The opener gets to decide if -the token is acceptable (possibly raising a Violation exception). Note that -the opener does not maintain state about what phase the decoding process is -in, so it may want to condition its response upon the length of the -opentype.

- -

After each index token is complete, it is appended to .opentype, then the -list is passed (through Banana.handleOpen, top.doOpen, and top.open) to the -opener's .open method. This can either return an Unslicer (which will finish -the index phase: all further tokens will be sent to the new Unslicer), -return None (to continue the index phase), raise a Violation (which causes -an UnbananaFailure to be passed to the current top unslicer), or raise -another exception (which causes the connection to be abandoned).
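The Unslicer/None/Violation contract of the open method can be modeled with a plain dict registry keyed by opentype tuples. (Factory names are illustrative, and ValueError stands in for the real Violation exception.)

```python
# Sketch of opentype lookup: a full opentype selects an UnslicerFactory,
# a proper prefix of a known opentype means "more index tokens, please",
# and anything else is a Violation.

registry = {
    ("list",): "ListUnslicer",
    ("instance", "MyClass"): "MyClassUnslicer",
}

def open_(opentype):
    ot = tuple(opentype)
    if ot in registry:
        return registry[ot]      # enough index tokens: push this Unslicer
    if any(key[:len(ot)] == ot for key in registry):
        return None              # still a prefix: stay in the Index Phase
    raise ValueError("Violation: unknown opentype %r" % (ot,))

print(open_(["instance"]))             # -> None (waiting for the classname)
print(open_(["instance", "MyClass"]))  # -> MyClassUnslicer
```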

- -

Unslicer Lifecycle

- -

Each Unslicer has access to the following attributes:

- -
    -
  • .parent: This is set by the ReceiveBanana before - .start is invoked, and provides a reference to the Unslicer - responsible for the containing object. You can follow .parent - all the way up the object graph to the single RootUnslicer - object for this connection. It is appropriate to invoke - openerCheckToken and open on your parent.
  • - -
  • .protocol: This is set by the ReceiveBanana before - .start is invoked, and provides access to the Banana object - which maintains the connection on which this object is being received. It - is appropriate to examine the .debugReceive attribute on the - protocol. It is also appropriate to invoke .setObject on it - to register references for shared containers (like lists).
  • - -
  • .openCount: This is set by the ReceiveBanana before
-    .start is invoked, and contains the optional OPEN-count for
-    this object, an implicit sequence number incremented for each OPEN token
-    seen on the wire. During protocol development and testing the OPEN tokens
-    may include an explicit OPEN-count value, but usually it is left out of
-    the packet. If present, it is used by Banana.handleClose to assert that
-    the CLOSE token is associated with the right OPEN token. Unslicers will
-    not normally have a use for it.
  • - -
  • .count: This is provided as the count argument to - .start, and contains the object counter for this - object. This is incremented for each new object which is created by the - receive Banana code. This is similar to (but not always the same as) the - OPEN-count. Containers should call self.protocol.setObject to - register a Deferred during start, then call it again in - receiveClose with the real (finished) object. It is sometimes - also included in a debug message.
  • - -
  • .broker: PB objects are given .broker, which is exactly - equal to the .protocol attribute. The synonym exists because it makes - several PB routines easier to read.
  • - -
- -

Each Unslicer handles a single OPEN sequence, which starts with an -OPEN token and ends with a CLOSE token.

- - -

Creation

- -

Acceptance of the OPEN token simply sets a flag to indicate that we are -in the Index Phase. (The OPEN token might not be accepted: it is submitted -to checkToken for approval first, as described below). During the Index -Phase, all tokens are appended to the current opentype list and -handed as a tuple to the top-most Unslicer's doOpen method. -This method can do one of the following things:

- -
    -
  • Return a new Unslicer object. It does this when there are enough index - tokens to specify a new Unslicer. The new child is pushed on top of the - Unslicer stack (Banana.receiveStack) and initialized by calling the - start method described below. This ends the Index Phase.
  • - -
  • Return None. This indicates that more index tokens are required. The - Banana protocol object simply remains in the Index Phase and continues to - accumulate index tokens.
  • - -
  • Raise a Violation. If the open type is unrecognized, then a Violation - is a good way to indicate it.
  • -
- -

When a new Unslicer object is pushed on the top of the stack, it has its -.start method called, in which it has an opportunity to create -whatever internal state is necessary to record the incoming content tokens. -Each created object will have a separate Unslicer instance. The start method -can run normally, or raise a Violation exception.

- -

.start is distinct from the Unslicer's constructor function -to minimize the parameter-passing requirements for doOpen() and friends. It -is also conceivable that keeping arguments out of __init__ -would make it easier to use adapters in this context, although it is not -clear why that might be useful on the Unslicing side. TODO: consider merging -.start into the constructor.

- -

This Unslicer is responsible for all incoming tokens until either 1: it -pushes a new one on the stack, or 2: it receives a CLOSE token.

- - -

checkToken

- -

Each token starts with a length sequence, up to 64 bytes which are turned
-into an integer. This is followed by a single type byte, distinguished from
-the length bytes by having the high bit set (the type byte is always 0x80 or
-greater). When the typebyte is received, the topmost Unslicer is asked about
-its suitability by calling the .checkToken method. (note that
-CLOSE and ABORT tokens are always legal, and are not submitted to
-checkToken). Both the typebyte and the header's numeric value are passed to
-this method, which is expected to do one of the following:

- -
    -
  • Return None to indicate that the token and the header value are - acceptable.
  • - -
  • Raise a Violation exception to reject the token or the - header value. This will cause the remainder of the current OPEN sequence - to be discarded (all tokens through the matching CLOSE token). Unslicers - should raise this if their constraints will not accept the incoming - object: for example a constraint which is expecting a series of integers - can accept INT/NEG/LONGINT/LONGNEG tokens and reject - OPEN/STRING/VOCAB/FLOAT tokens. They should also raise this if the header - indicates, e.g., a STRING which is longer than the constraint is willing - to accept, or a LONGINT/LONGNEG which is too large. The topmost Unslicer - (the same one which raised Violation) will receive (through its - .receiveChild method) an UnbananaFailure object which - encapsulates the reason for the rejection
  • -
- -
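The integers-only constraint described above might look like the following checkToken sketch (the token names are stand-ins for the real token type bytes):

```python
# Sketch of a checkToken for an "integers only" constraint: accept the
# numeric token types, reject everything else via a Violation. Token
# names here are illustrative stand-ins for the real token bytes.

INT, NEG, LONGINT, LONGNEG = "INT", "NEG", "LONGINT", "LONGNEG"
OPEN, STRING, VOCAB, FLOAT = "OPEN", "STRING", "VOCAB", "FLOAT"

class Violation(Exception):
    """Raised to reject a token; the enclosing OPEN sequence is discarded."""

def check_token_integers_only(typebyte, size, max_digits=64):
    # accept only the numeric token types...
    if typebyte not in (INT, NEG, LONGINT, LONGNEG):
        raise Violation("unacceptable token type: %s" % typebyte)
    # ...and reject LONGINT/LONGNEG whose header says they are too large
    if typebyte in (LONGINT, LONGNEG) and size > max_digits:
        raise Violation("integer too large: %d bytes" % size)
    return None  # None means: token and header value are acceptable
```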

If the token sequence is in the index phase (i.e. it is just after -an OPEN token and a new Unslicer has not yet been pushed), then instead of -.checkToken the top unslicer is sent -.openerCheckToken. This method behaves just like checkToken, -but in addition to the type byte it is also given the opentype list (which -is built out of all the index tokens received during this index phase).
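The wire framing described in this section (length digits below 0x80, then a type byte at 0x80 or above) can be modeled as a small encoder/decoder pair. The specific type-byte value below is hypothetical, not the real foolscap assignment.

```python
# Toy of the token framing: a length header of base-128 digits (each
# below 0x80) terminated by a type byte with the high bit set.

STRING_TYPE = 0x82  # hypothetical type byte (high bit set)

def encode_header(length, typebyte):
    digits = []
    while True:
        digits.append(length & 0x7F)  # low 7 bits first
        length >>= 7
        if not length:
            break
    return bytes(digits) + bytes([typebyte])

def decode_header(data):
    value, shift = 0, 0
    for b in data:
        if b >= 0x80:                 # high bit set: this is the type byte
            return value, b
        value |= b << shift           # accumulate another base-128 digit
        shift += 7

print(decode_header(encode_header(300, STRING_TYPE)))  # -> (300, 130)
```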

- -

receiveChild

- -

If the type byte is accepted, and the size limit is obeyed, then the rest
-of the token is read and a finished (primitive) object is created: a string
-or number (TODO: maybe add boolean and None). This object is handed to the
-topmost Unslicer's .receiveChild method, where again it has
-a few options:

- -
    -
  • Run normally: if the object is acceptable, it should append or record - it somehow.
  • - -
  • Raise Violation, just like checkToken.
  • - -
  • invoke self.abort, which does - protocol.abandonUnslicer
  • -
- -

If the child is handed an UnbananaFailure object, and it wishes to pass -it upwards to its parent, then self.abort is the appropriate -thing to do. Raising a Violation will accomplish the same thing, but with a -new UnbananaFailure that describes the exception raised here instead of the -one raised by a child object. It is bad to both call abort and -raise an exception.

- -

Finishing

- -

When the CLOSE token arrives, the Unslicer will have its -.receiveClose method called. This is expected to do:

- -
    -
  • Return an object: this object is the finished result of the - deserialization process. It will be passed to .receiveChild - of the parent Unslicer.
  • - -
  • Return a Deferred: this indicates that the object cannot be created - yet (tuples that contain references to an enclosing tuple, for example). - The Deferred will be fired (with the object) when it completes.
  • - -
  • Raise Violation
  • -
- -

After receiveClose has finished, the child is told to clean up by calling -its .finish method. This can complete normally or raise a -Violation.

- -

Then, the old top-most Unslicer is popped from the stack and discarded. -Its parent is now the new top-most Unslicer, and the newly-unserialized -object is given to it with the .receiveChild method. Note that -this method is used to deliver both primitive objects (from raw tokens) -and composite objects (from other Unslicers).

- - -

Error Handling

- -

Schemas are enforced by Constraint objects which are given an opportunity -to pass judgement on each incoming token. When they do not like something -they are given, they respond by raising a Violation exception. -The Violation exception is sometimes created with an argument that describes -the reason for the rejection, but frequently it is just a bare exception. -Most Violations are raised by the checkOpentype and -checkObject methods of the various classes in -schema.py.

- -

Violations which occur in an Unslicer can be confined to a single -sub-tree of the object graph. The object being deserialized (and all of its -children) is abandoned, and all remaining tokens for that object are -discarded. However, the parent object (to which the abandoned object would -have been given) gets to decide what happens next: it can either fail -itself, or absorb the failure (much like an exception handler can choose to -re-raise the exception or eat it).

- -

When a Violation occurs, it is wrapped in an UnbananaFailure
-object (just like Deferreds wrap exceptions in Failure objects). The
-UnbananaFailure behaves like a regular
-twisted.python.failure.Failure object, except that it has an
-attribute named .where which indicates the object-graph pathname
-where the problem occurred.

- -

The Unslicer which caused the Violation is given a chance to do cleanup
-or error-reporting by invoking its reportViolation method. It
-is given the UnbananaFailure so it can modify or copy it. The default
-implementation simply returns the UnbananaFailure it was given, but it is
-also allowed to return a different one. It must
-return an UnbananaFailure: it cannot ignore the Violation by returning None.
-This method should not raise any exceptions: doing so will cause the
-connection to be dropped.

- -

The UnbananaFailure returned by reportViolation is passed up -the Unslicer stack in lieu of an actual object. Most Unslicers have code in -their receiveChild methods to detect an UnbananaFailure and -trigger an abort (propagateUnbananaFailures), which causes all -further tokens of the sub-tree to be discarded. The connection is not -dropped. Unslicers which partition their children's sub-graphs (like the -PBRootUnslicer, for which each child is a separate operation) can simply -ignore the UnbananaFailure, or respond to it by sending an error message to -the other end.

- -

Other exceptions may occur during deserialization. These indicate coding -errors or severe protocol violations and cause the connection to be dropped -(they are not caught by the Banana code and thus propagate all the way up to -the reactor, which drops the socket). The exception is logged on the local -side with log.err, but the remote end will not be told any -reason for the disconnection. The banana code uses the BananaError exception -to indicate protocol violations, but others may be encountered.

- -

The Banana object can also choose to respond to Violations by terminating -the connection. For example, the .hangupOnLengthViolation flag -causes string-too-long violations to be raised directly instead of being -handled, which will cause the connection to be dropped (as it occurs in the -dataReceived method).

- - -

Example

- -

The serialized form of ["foo",(1,2)] is the -following token sequence: OPEN STRING(list) STRING(foo) OPEN STRING(tuple) -INT(1) INT(2) CLOSE CLOSE. In practice, the STRING(list) would really be -something like VOCAB(7), likewise the STRING(tuple) might be VOCAB(8). Here -we walk through how this sequence is processed.

- -

The initial Unslicer stack consists of the single RootUnslicer -rootun.

- -
-OPEN
-  rootun.checkToken(OPEN) : must not raise Violation
-  enter index phase
-
-VOCAB(7)  (equivalent to STRING(list))
-  rootun.openerCheckToken(VOCAB, ()) : must not raise Violation
-  VOCAB token is looked up in .incomingVocabulary, turned into "list"
-  rootun.doOpen(("list",)) : looks in UnslicerRegistry, returns ListUnslicer
-  exit index phase
-  the ListUnslicer is pushed on the stack
-  listun.start()
-
-STRING(foo)
-  listun.checkToken(STRING, 3) : must return None
-  string is assembled
-  listun.receiveChild("foo") : appends to list
-
-OPEN
-  listun.checkToken(OPEN) : must not raise Violation
-  enter index phase
-
-VOCAB(8)  (equivalent to STRING(tuple))
-  listun.openerCheckToken(VOCAB, ()) : must not raise Violation
-  VOCAB token is looked up, turned into "tuple"
-  listun.doOpen(("tuple",)) : delegates through:
-                                 BaseUnslicer.open
-                                 self.opener (usually the RootUnslicer)
-                                 self.opener.open(("tuple",))
-                              returns TupleUnslicer
-  exit index phase
-  TupleUnslicer is pushed on the stack
-  tupleun.start()
-
-INT(1)
-  tupleun.checkToken(INT) : must not raise Violation
-  integer is assembled
-  tupleun.receiveChild(1) : appends to list
-
-INT(2)
-  tupleun.checkToken(INT) : must not raise Violation
-  integer is assembled
-  tupleun.receiveChild(2) : appends to list
-
-CLOSE
-  tupleun.receiveClose() : creates and returns the tuple (1,2)
-                           (could also return a Deferred)
-  TupleUnslicer is popped from the stack and discarded
-  listun.receiveChild((1,2))
-
-CLOSE
-  listun.receiveClose() : creates and returns the list ["foo", (1,2)]
-  ListUnslicer is popped from the stack and discarded
-  rootun.receiveChild(["foo", (1,2)])
-
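The walkthrough above can be compressed into a runnable toy. This is heavily simplified (no index phase, no VOCAB table: the opentype rides along with the OPEN token), but it shows the push/receiveChild/receiveClose/pop cycle.

```python
# Toy Unslicer stack that rebuilds ["foo", (1, 2)] from its token stream.

class ListUnslicer:
    def __init__(self):
        self.obj = []
    def receiveChild(self, child):
        self.obj.append(child)
    def receiveClose(self):
        return self.obj

class TupleUnslicer(ListUnslicer):
    def receiveClose(self):
        return tuple(self.obj)

factories = {"list": ListUnslicer, "tuple": TupleUnslicer}

def unslice(tokens):
    stack = [ListUnslicer()]                 # stand-in for the RootUnslicer
    for tok in tokens:
        if tok[0] == "OPEN":
            stack.append(factories[tok[1]]())  # push a new Unslicer
        elif tok[0] == "CLOSE":
            obj = stack.pop().receiveClose()
            stack[-1].receiveChild(obj)        # hand finished child to parent
        else:
            stack[-1].receiveChild(tok[1])     # primitive token
    return stack[0].obj[0]

tokens = [("OPEN", "list"), ("STRING", "foo"),
          ("OPEN", "tuple"), ("INT", 1), ("INT", 2),
          ("CLOSE",), ("CLOSE",)]
print(unslice(tokens))   # -> ['foo', (1, 2)]
```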
- - -

Other Issues

- - -

Deferred Object Recreation: The Trouble With Tuples

- -

Types and classes are roughly classified into containers and -non-containers. The containers are further divided into mutable and -immutable. Some examples of immutable containers are tuples and bound -methods. Lists and dicts are mutable containers. Ints and strings are -non-containers. Non-containers are always leaf nodes in the object -graph.

- -

During unserialization, objects are in one of three states: uncreated, -referenceable (but not complete), and complete. Only mutable containers can -be referenceable but not complete: immutable containers have no intermediate -referenceable state.

- -

Mutable containers (like lists) are referenceable but not complete during -traversal of their child nodes. This means those children can reference the -list without trouble.

- -

Immutable containers (like tuples) present challenges when unserializing. -The object cannot be created until all its components are referenceable. -While it is guaranteed that these component objects will be complete before -the graph traversal exits the current node, the child nodes are allowed to -reference the current node during that traversal. The classic example is the -graph created by the following Python fragment:

- -
-a = ([],)
-a[0].append((a,))
-
- -

To handle these cases, the TupleUnslicer installs a Deferred into the -object table when it begins unserializing (in the .start method). When the -tuple is finally complete, the object table is updated and the Deferred is -fired with the new tuple.
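That Deferred dance can be sketched without Twisted (Placeholder is a stand-in for a real Deferred, and the object-table indices are illustrative):

```python
# Sketch of deferred tuple creation: install a placeholder in the object
# table at .start time, then fire it with the finished tuple at
# .receiveClose time so waiting parents get updated.

class Placeholder:
    def __init__(self):
        self.callbacks = []
    def addCallback(self, cb):
        self.callbacks.append(cb)
    def fire(self, value):
        for cb in self.callbacks:
            cb(value)

table = {}
table[0] = Placeholder()              # .start: register the placeholder

results = []
table[0].addCallback(results.append)  # a parent waiting for the tuple

finished = ("foo", (1, 2))
table[0].fire(finished)               # .receiveClose: tuple is complete
table[0] = finished                   # update the object table
print(results[0])                     # -> ('foo', (1, 2))
```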

- -

Containers (both mutable and immutable) are required to pay attention to -the types of their incoming children and notice when they receive Deferreds -instead of normal objects. These containers are not complete (in the sense -described above) until those Deferreds have been replaced with referenceable -objects. When the container receives the Deferred, it should attach a -callback to it which will perform the replacement. In addition, immutable -containers should check after each update to see if all the Deferreds have -been cleared, and if so, complete their own object (and fire their own -Deferreds so any containers they are a child of may be updated -and/or completed).

- -

TODO: it would be really handy to have the RootUnslicer do Deferred
-Accounting: each time a Deferred is installed instead of a real object, add
-its graph-path to a list. When the Deferred fires and the object becomes
-available, remove it. If deserialization completes and there are still
-Deferreds hanging around, flag an error that points to the culprits instead
-of returning a broken object.

- -

Security Model

- -

Having the whole Slicer stack get a chance to pass judgement on the
-outbound object is very flexible. Optimizations are possible because most
-Slicers don't care: perhaps a separate stack for the
-ones that want to participate, or a chained delegation function. The
-important thing is to make sure that exception cases don't leave a
-taster stranded on the stack when the object that put it there has
-gone away.

- -

On the receiving side, the top Unslicer gets to make a decision about the -token before its body has arrived (limiting memory exposure to no more than -65 bytes). In addition, each Unslicer receives component tokens one at a -time. This lets you catch the dangerous data before it gets turned into an -object. However, tokens are a pretty low-level place to do security checks. -It might be more useful to have some kind of instance taster stack, -with tasters that are asked specifically about (class,state) pairs and -whether they should be turned into objects or not.

- -

Because the Unslicers receive their data one token at a time, things like -InstanceUnslicer can perform security checks one attribute at a time. -traits-style attribute constraints (see the Chaco project or the -PyCon-2003 presentation for details) can be implemented by having a -per-class dictionary of tests that attribute values must pass before they -will be accepted. The instance will only be created if all attributes fit -the constraints. The idea is to catch violations before any code is run on -the receiving side. Typical checks would be things like .foo must be a -number, .bar must not be an instance, .baz must implement the -IBazzer interface.

- -

TODO: the rest of this section is somewhat out of date.

- -

Using the stack instead of a single Taster object means that the rules -can be changed depending upon the context of the object being processed. A -class that is valid as the first argument to a method call may not be valid -as the second argument, or inside a list provided as the first argument. The -PBMethodArgumentsUnslicer could change the way its .taste method behaves as -its state machine progresses through the argument list.

- -

There are several different ways to implement this Taster stack:

- -
    -
  • Each object in the Unslicer stack gets to raise an exception if it
-    doesn't like what it sees: unanimous consent is required to let the token or
-    object pass
  • - -
  • The top-most unslicer is asked, and it has the option of asking the
-    next slice down. It might not, allowing local "I'm sure this is safe"
-    classes to override higher-level paranoia.
  • - -
  • Unslicer objects may add and remove Taster objects on a separate - stack. This is undoubtedly faster but must be done carefully to make sure - Tasters and Unslicers stay in sync.
  • -
- -

Of course, all this holds true for the sending side as well. A Slicer -could enforce a policy that no objects of type Foo will be sent while it is -on the stack.

- -

It is anticipated that something like the current Jellyable/Unjellyable -classes will be created to offer control over the Slicer/Unslicers used to -handle instance of that class.

- -

One eventual goal is to allow PB to implement E-like argument -constraints.

- - -

Streaming Slices

- -

The big change from the old Jelly scheme is that now -serialization/unserialization is done in a more streaming format. Individual -tokens are the basic unit of information. The basic tokens are just numbers -and strings: anything more complicated (starting at lists) involves -composites of other tokens.

- -

Producer/Consumer-oriented serialization means that large objects which -can't fit into the socket buffers should not consume lots of memory, sitting -around in a serialized state with nowhere to go. This must be balanced -against the confusion caused by time-distributed serialization. PB method -calls must retain their current in-order execution, and it must not be -possible to interleave serialized state (big mess). One interesting -possibility is to allow multiple parallel SlicerStacks, with a -context-switch token to let the receiving end know when they should switch -to a different UnslicerStack. This would allow cleanly interleaved streams -at the token level. Head-of-line blocking is when a large request -prevents a smaller (quicker) one from getting through: grocery stores -attempt to relieve this frustration by grouping customers together by -expected service time (the express lane). Parallel stacks would allow the -sender to establish policies on immediacy versus minimizing context -switches.

- -

CBanana, CBananaRun, RunBananaRun

- -

Another goal of the Jelly+Banana->JustBanana change is the hope of -writing Slicers and Unslicers in C. The CBanana module should have C objects -(structs with function pointers) that can be looked up in a registry table -and run to turn python objects into tokens and vice versa. This ought to be -faster than running python code to implement the slices, at the cost of less -flexibility. It would be nice if the resulting tokens could be sent directly -to the socket at the C level without surfacing into python; barring this it -is probably a good idea to accumulate the tokens into a large buffer so the -code can do a few large writes instead of a gazillion small ones.

- -

It ought to be possible to mix C and Python slices here: if the C code -doesn't find the slice in the table, it can fall back to calling a python -method that does a lookup in an extensible registry.

- -

Beyond Banana

- -

Random notes and wild speculations: take everything beyond here with -two grains of salt

- -

Oldbanana usage

- -

The oldbanana usage model has the layer above banana written in one of -two ways. The simple form is to use the banana.encode and banana.decode functions to turn an object into a -bytestream. This is used by twisted.spread.publish . The more flexible model -is to subclass Banana. The largest example of this technique is, of course, -twisted.spread.pb.Broker, but others which use it are twisted.trial.remote -and twisted.scripts.conch (which appears to use it over unix-domain -sockets).

- -

Banana itself is a Protocol. The Banana subclass would generally override -the expressionReceived method, which receives s-expressions -(lists of lists). These are processed to figure out what method should be -called, etc (processing which only has to deal with strings, numbers, and -lists). Then the serialized arguments are sent through Unjelly to produce -actual objects.

- -

On output, the subclass usually calls self.sendEncoded with -some set of objects. In the case of PB, the arguments to the remote method -are turned into s-expressions with jelly, then combined with the method -meta-data (object ID, method name, etc), then the whole request is sent to -sendEncoded.

- -

Newbanana

- -

Newbanana moves the Jelly functionality into a stack of Banana Slicers, and the lowest-level token-to-bytestream conversion into the new Banana object. Instead of overriding expressionReceived, users could push a different root Unslicer to get more control over the receive process.

Currently, Slicers call Banana.sendOpen/sendToken/sendClose/sendAbort, which then creates bytes and does transport.write.

To move this into C, the transport should get to call CUnbanana.receiveToken. There should be CBananaUnslicers. Probably a parent.addMe(self) instead of banana.stack.append(self), maybe addMeC for the C unslicer.

The Banana object is a Protocol, and has a dataReceived method. (maybe in some C form, data could move directly from a CTransport to a CProtocol). It parses tokens and hands them to its Unslicer stack. The root Unslicer is probably created at connectionEstablished time. Subclasses of Banana could use different RootUnslicer objects, or the users might be responsible for setting up the root unslicer.

The Banana object is also created with a RootSlicer. Banana.writeToken serializes the token and does transport.write. (a C form could have CSlicer objects which hand tokens to a little CBanana which then hands bytes off to a CTransport).

Doing the bytestream-to-Token conversion in C loses a lot of utility when the conversion is done one token at a time. It made more sense when a whole mess of s-lists were converted at once.

All Slicers currently have a Banana pointer.. maybe they should have a transport pointer instead? The Banana pointer is needed to get to the top of the stack.

want to be able to unserialize lists/tuples/dicts/strings/ints (basic types) without surfacing into python. want to deliver the completed object to a python function.

- -

Streaming Methods

- -

It would be neat if a PB method could indicate that it would like to -receive its arguments in a streaming fashion. This would involve calling the -method early (as soon as the objectID and method name were known), then -somehow feeding objects to it as they arrive. The object could return a -handler or consumer sub-object which would be fed as tokens arrive over the -wire. This consumer should have a way to enforce a constraint on its -input.

- -

This consumer object sounds a lot like an Unslicer, so maybe the method schema should indicate that the method would like to be called right away so it can return an Unslicer to be pushed on the stack. That Unslicer could do whatever it wanted with the incoming tokens, and could enforce constraints with the usual checkToken/doOpen/receiveChild/receiveClose methods.

- -

On the sending side, it would be neat to let a callRemote() invocation -provide a Producer or a generator that will supply data as the network -buffer becomes available. This could involve pushing a Slicer. Slicers are -generators.

- - - -

Common token sequences

- -

Any given Banana instance has a way to map objects to the Open Index tuples needed to represent them, and a similar map from such tuples to incoming object factories. These maps give rise to various classes of objects, depending upon how widespread any particular object type is. A List is a fairly common type of object, something you would expect to find implemented in pretty much any high-level language, so you would expect a Banana implementation in that language to be capable of accepting an (OPEN, 'list') sequence. However, a Failure object (found in twisted.python.failure, providing an asynchronous-friendly way of reporting python exceptions) is both Python- and Twisted-specific. Is it reasonable for one program to emit an (OPEN, 'failure') sequence and expect another speaker of the generic Banana protocol to understand it?

- -

This level of compatibility is (somewhat arbitrarily) named dialect -compatibility. The set of acceptable sequences will depend upon many -things: the language in which the program at each end of the wire is -implemented, the nature of the higher-level software that is using Banana at -that moment (PB is one such layer), and application-specific registrations -that have been performed by the time the sequence is received (the set of -pb.Copyable sequences that can be received without error will -depend upon which RemoteCopyable class definitions and -registerRemoteCopy calls have been made).

- -

Ideally, when two Banana instances first establish a connection, they -will go through a negotiation phase where they come to an agreement on what -will be sent across the wire. There are two goals to this negotiation:

- -
    -
  1. least-surprise: if one side cannot handle a construct which the other side might emit at some point in the future, it would be nice to know about it up front rather than encountering a Violation or connection-dropping BananaError later down the line. This could be described as the strong-typing argument. It is important to note that different arguments (both for and against strong typing) may exist when talking about remote interfaces rather than local ones.

  2. adaptability: if one side cannot handle a newer construct, it may be possible for the other side to back down to some simpler variation without too much loss of data.
- -

Dialect negotiation is very much still an active area of development.

- - -

Base Python Types

- -

The basic python types are considered safe: the code which is -invoked by their receipt is well-understood and there is no way to cause -unsafe behavior during unserialization. Resource consumption attacks are -mitigated by Constraints imposed by the receiving schema.

- -

Note that the OPEN(dict) slicer is implemented with code that sorts the -list of keys before serializing them. It does this to provide deterministic -behavior and make testing easier.

- - - - - - - - - - - - - - - - - -
IntType, LongType (small+)INT(value)
IntType, LongType (small-)NEG(value)
IntType, LongType (large+)LONGINT(value)
IntType, LongType (large-)LONGNEG(value)
FloatTypeFLOAT(value)
StringTypeSTRING(value)
StringType (tokenized)VOCAB(tokennum)
UnicodeTypeOPEN(unicode) STRING(str.encode('UTF-8')) CLOSE
ListTypeOPEN(list) elem.. CLOSE
TupleTypeOPEN(tuple) elem.. CLOSE
DictType, DictionaryTypeOPEN(dict) (key,value).. CLOSE
NoneTypeOPEN(none) CLOSE
BooleanTypeOPEN(boolean) INT(0/1) CLOSE
- -
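The table above can be read as a recipe for a tiny slicer. The following is an illustrative sketch only (the real Banana encoder emits a compact byte encoding, not Python tuples, and the function name slice_value is made up here); it also demonstrates the sorted-key behavior of the dict slicer described above:

```python
# Illustrative sketch only: token names follow the table above, but the
# real Banana encoder emits a compact byte encoding, not Python tuples.
def slice_value(obj):
    """Yield (token, argument) pairs for a few of the basic types."""
    if isinstance(obj, bool):
        yield ("OPEN", "boolean")
        yield ("INT", int(obj))
        yield ("CLOSE", None)
    elif isinstance(obj, int):
        yield ("INT", obj) if obj >= 0 else ("NEG", -obj)
    elif isinstance(obj, str):
        yield ("STRING", obj)
    elif isinstance(obj, list):
        yield ("OPEN", "list")
        for elem in obj:
            yield from slice_value(elem)
        yield ("CLOSE", None)
    elif isinstance(obj, dict):
        yield ("OPEN", "dict")
        for key in sorted(obj):  # keys are sorted for deterministic output
            yield from slice_value(key)
            yield from slice_value(obj[key])
        yield ("CLOSE", None)
    else:
        raise KeyError("no slicer for %r" % type(obj))

tokens = list(slice_value({"b": 2, "a": 1}))
# [('OPEN', 'dict'), ('STRING', 'a'), ('INT', 1),
#  ('STRING', 'b'), ('INT', 2), ('CLOSE', None)]
```

Note how the unsorted input dict still produces keys in the order a, b, which is what makes testing against a fixed token stream practical.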

Extended (unsafe) Python Types

- -

To serialize arbitrary python object graphs (including instances) -requires that we allow more types in. This begins to get dangerous: with -complex graphs of inter-dependent objects, instances may need to be used (by -referencing objects) before they are fully initialized. A schema can be used -to make assertions about what object types live where, but in general the -contents of those objects are difficult to constrain.

- -

For this reason, these types should only be used in places where you -trust the creator of the serialized stream (the same places where you would -be willing to use the standard Pickle module). Saving application state to -disk and reading it back at startup time is one example.

- - - - - - - - - - - - -
Extended (unsafe) Python Types
InstanceTypeOPEN(instance) STRING(reflect.qual(class)) - (attr,value).. CLOSE
ModuleTypeOPEN(module) STRING(__name__) CLOSE
ClassTypeOPEN(class) STRING(reflect.qual(class)) CLOSE
MethodTypeOPEN(method) STRING(__name__) im_self im_class CLOSE
FunctionTypeOPEN(function) STRING(module.__name__) CLOSE
- - -

PB Sequences

- -

See the PB document for details.

- - -

Unhandled types

- -

The following types are not handled by any slicer, and will raise a KeyError if one is referenced by an object being sliced. This technically imposes a limit upon the kinds of objects that can be serialized, even by an unsafe serializer, but in practice it is not really an issue, as many of these objects have no meaning outside the program invocation which created them.

- -
    -
  • - types that might be nice to have
  • -
  • ComplexType
  • -
  • SliceType
  • -
  • TypeType
  • -
  • XRangeType
  • - -
  • - types that aren't really that useful
  • -
  • BufferType
  • -
  • BuiltinFunctionType
  • -
  • BuiltinMethodType
  • -
  • CodeType
  • -
  • DictProxyType
  • -
  • EllipsisType
  • -
  • NotImplementedType
  • -
  • UnboundMethodType
  • - -
  • - types that are meaningless outside the creator
  • -
  • TracebackType
  • -
  • FileType
  • -
  • FrameType
  • -
  • GeneratorType
  • -
  • LambdaType
  • -
- -

Unhandled (but don't worry about it) types

- -

ObjectType is the root class of all other types. All objects -are known by some other type in addition to ObjectType, so the -fact that it is not handled explicitly does not matter.

- -

StringTypes is simply a list of StringType and -UnicodeType, so it does not need to be explicitly handled -either.

- -

Internal types

- -

The following sequences are internal.

- -

The OPEN(vocab) sequence is used to update the forward compression -token-to-string table used by the VOCAB token. It is followed by a series of -number/string pairs. All numbers that appear in VOCAB tokens must be -associated with a string by appearing in the most recent OPEN(vocab) -sequence.

- - - - -
internal types
vocab dictOPEN(vocab) (num,string).. CLOSE
- - diff --git a/src/foolscap/doc/specifications/pb.xhtml b/src/foolscap/doc/specifications/pb.xhtml deleted file mode 100644 index 1e177d21..00000000 --- a/src/foolscap/doc/specifications/pb.xhtml +++ /dev/null @@ -1,1017 +0,0 @@ - - -NewPB - - - - -

NewPB

- -

This document describes the new PB protocol. This is a layer on top of Banana which provides remote object access (method -invocation and instance transfer).

- -

Fundamentally, PB is about one side keeping a -RemoteReference to the other side's Referenceable. The -Referenceable has some methods that can be invoked remotely: functionality -it is offering to remote callers. Those callers hold RemoteReferences which -point to it. The RemoteReference object offers a way to invoke those methods -(generally through the callRemote method).

- -

There are plenty of other details, starting with how the RemoteReference -is obtained, and how arguments and return values are communicated.

- -

For the purposes of this document, we will designate the side that holds -the actual Referenceable object as local, and the side -that holds the proxy RemoteReference object as remote. -This distinction is only meaningful with respect to a single -RemoteReference/Referenceable pair. One program may hold Referenceable -A and RemoteReference B, paired with another that holds -RemoteReference A and Referenceable B. Once initialization is -complete, PB is a symmetric protocol.

- -

It is helpful to think of PB as providing a wire or pipe that connects -two programs. Objects are put into this pipe at one end, and something -related to the object comes out the other end. These two objects are said to -correspond to each other. Basic types (like lists and dictionaries) are -handled by Banana, but more complex types (like instances) are treated -specially, so that most of the time there is a native form (as -present on the local side) that goes into the pipe, and a remote form that -comes out.

- -

Initialization

- -

The PB session begins with some feature negotiation and (generally) the -receipt of a VocabularyDict. Usually this takes place over an interactive -transport, like a TCP connection, but newpb can also be used in a more -batched message-oriented mode, as long as both the creator of the method -call request and its eventual consumer are in agreement about their shared -state (at least, this is the intention.. there are still pieces that need to -be implemented to make this possible).

- -

The local side keeps a table which provides a bidirectional mapping -between Referenceable objects and a connection-local -object-ID number. This table begins with a single object called the -Root, which is implicitly given ID number 0. Everything else is -bootstrapped through this object. For the typical PB Broker, this root -object performs cred authentication and returns other Referenceables as the -cred Avatar.

- -

The remote side has a collection of RemoteReference objects, each of which knows the object-ID of the corresponding Referenceable, as well as the Broker which provides the connection to the other Broker. The remote side must do reference-tracking of these RemoteReferences, because as long as one remains alive, the local-side Broker must maintain a reference to the original Referenceable.

- -
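A minimal sketch of that bidirectional table follows. The class and method names are invented for illustration; the real Broker also handles reference-counting and releasing entries when the remote side drops its RemoteReference:

```python
class ReferenceTable:
    """Hypothetical bidirectional mapping between Referenceables and
    their connection-local object-IDs. The Root is implicitly ID 0."""
    def __init__(self, root):
        self.by_id = {0: root}       # object-ID -> Referenceable
        self.by_obj = {id(root): 0}  # Referenceable -> object-ID
        self.next_id = 1

    def register(self, obj):
        # by_id holds a strong reference, keeping the Referenceable
        # alive while the remote side might still point at it
        key = id(obj)
        if key not in self.by_obj:
            self.by_obj[key] = self.next_id
            self.by_id[self.next_id] = obj
            self.next_id += 1
        return self.by_obj[key]

    def lookup(self, objid):
        return self.by_id[objid]

root = object()
table = ReferenceTable(root)
obj = object()
assert table.register(obj) == table.register(obj) == 1  # stable ID
assert table.lookup(0) is root
```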

Method Calls

- -

The remote side invokes a remote method by calling -ref.callRemote() on its RemoteReference. This starts by -validating the method name and arguments against a Schema (described -below). It then creates a new Request object which will live until the method -call has either completed successfully or failed due to an exception -(including the connection being lost). callRemote returns a -Deferred, which does not fire until the request is finished.

- -

It then sends a call banana sequence over the wire. This -sequence indicates the request ID (used to match the request with the -resulting answer or error response), the object ID -of the Referenceable being targeted, a string to indicate the name of the -method being invoked, and the arguments to be passed into the method.

- -
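The shape of these sequences can be pictured as plain Python data. This rendering is illustrative only: the real wire format is a Banana token stream, and the exact field order shown here is an assumption, not the normative encoding.

```python
request_id = 7  # matches the eventual answer/error to this request
object_id = 0   # connection-local ID of the target Referenceable
call_seq = ("call", request_id, object_id, "add", {"a": 1, "b": 2})

# the response (whichever kind) reuses the same request-ID:
answer_seq = ("answer", request_id, 3)
error_seq = ("error", request_id, "description of the Failure")
```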

All arguments are passed by name (i.e. keyword arguments instead of -positional parameters). Each argument is subject to the argument -transformation described below.

- -

The local side receives the call sequence, uses the object-ID to look up the Referenceable, finds the desired method, then applies the method's schema to the incoming arguments. If they are acceptable, it invokes the method. A normal return value is sent back immediately in an answer sequence (subject to the same transformation as the inbound arguments). If the method returns a Deferred, the answer will be sent back when the Deferred fires. If the method raises an exception (or the Deferred does an errback), the resulting Failure is sent back in an error sequence. Both the answer and the error start with the request-ID so they can be used to complete the Request object waiting on the remote side.

- -

The original Deferred (the one produced by callRemote) is -finally callbacked with the results of the method (or errbacked with a -Failure or RemoteFailure object).

- - -

Example

- -

This code runs on the local side: the one with the -pb.Referenceable which will respond to a remote invocation.

- -
-class Responder(pb.Referenceable):
-    def remote_add(self, a, b):
-        return a+b
-
- -

and the following code runs on the remote side (the one which holds -a pb.RemoteReference):

- -
-def gotAnswer(results):
-    print results
-
-d = rr.callRemote("add", a=1, b=2)
-d.addCallback(gotAnswer)
-
- -

Note that the arguments are passed as named parameters: oldpb used both -positional parameters and named (keyword) arguments, but newpb prefers just -the keyword arguments. TODO: newpb will probably convert positional -parameters to keyword arguments (based upon the schema) before sending them -to the remote side.

- - -

Using RemoteInterfaces

- -

To nail down the types being sent across the wire, you can use a -RemoteInterface to define the methods that are implemented by -any particular pb.Referenceable:

- -
-class RIAdding(pb.RemoteInterface):
-    def add(a=int, b=int): return int
-
-class Responder(pb.Referenceable):
-    implements(RIAdding)
-    def remote_add(self, a, b):
-        return a+b
-
-# and on the remote side:
-d = rr.callRemote(RIAdding['add'], a=1, b=2)
-d.addCallback(gotAnswer)
-
- -

In this example, the RIAdding remote interface defines a single -method add, which accepts two integer parameters and returns an -integer. This method (technically a classmethod) is used instead of the -string form of the method name. What does this get us?

- -
    -
  • The calling side will pre-check its arguments against the constraints - that it believes to be imposed by the remote side. It will raise a - Violation rather than send parameters that it thinks will be rejected.
  • - -
  • The receiving side will enforce the constraints, causing the method - call to errback (with a Violation) if they are not met. This means the code - in remote_add does not need to worry about what strange types - it might be given, such as two strings, or two lists.
  • - -
  • The receiving side will pre-check its return argument before sending it - back. If the method returns a string, it will cause a Violation exception - to be raised. The caller will get this Violation as an errback instead of - whatever (illegal) value the remote method computed.
  • - -
  • The calling side will enforce the return-value constraint (raising a Violation if it is not met). This means the calling side (in this case the gotAnswer callback function) does not need to worry about what strange type the remote method returns.
  • -
- -

You can use either technique: with RemoteInterfaces or without. To get the -type-checking benefits, you must use them. If you do not, PB cannot protect -you against memory consumption attacks.

- - -

RemoteInterfaces

- -

RemoteInterfaces are passed by name. Each side of a PB connection has a -table which maps names to RemoteInterfaces (subclasses of -pb.RemoteInterface). Metaclass magic is used to add an entry to -this table each time you define a RemoteInterface subclass, using the -__remote_name__ attribute (or reflect.qual() if that is not -set).

- -

Each Referenceable that goes over the wire is accompanied by -the list of RemoteInterfaces which it claims to implement. On the receiving -side, these RemoteInterface names are looked up in the table and mapped to -actual (local) RemoteInterface classes.

- -

TODO: it might be interesting to serialize the RemoteInterface class and -ship it over the wire, rather than assuming both sides have a copy (and that -they agree). However, if one side does not have a copy, it is unlikely that -it will be able to do anything very meaningful with the remote end.

- -

The syntax of RemoteInterface is still in flux. The basic idea is that -each method of the RemoteInterface defines a remotely invokable method, -something that will exist with a remote_ prefix on any -pb.Referenceables which claim to implement it.

- -

Those methods are defined with a number of named parameters. The default -value of each parameter is something which can be turned into a -Constraint according to the rules of schema.makeConstraint . -This means you can use things like (int, str, str) to mean a -tuple of exactly those three types.

- -
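The tuple shorthand can be illustrated with a toy checker. This is a hypothetical stand-in for what schema.makeConstraint produces, not foolscap's implementation: a tuple of types is taken to mean a tuple of exactly those types, in order.

```python
# Toy stand-in for the tuple-of-types shorthand (hypothetical, not the
# real schema.makeConstraint): a tuple constraint matches a tuple of
# exactly those element types, in order.
def check(constraint, value):
    if isinstance(constraint, tuple):
        return (isinstance(value, tuple)
                and len(value) == len(constraint)
                and all(check(c, v) for c, v in zip(constraint, value)))
    return isinstance(value, constraint)

assert check((int, str, str), (42, "a", "b"))
assert not check((int, str, str), (42, "a"))       # wrong length
assert not check((int, str, str), ["x", "y", "z"]) # not a tuple
```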

Note that the methods of the RemoteInterface do not list self as a parameter. As the zope.interface documentation points out, self is an implementation detail, and does not belong in the interface specification. Another way to think about it is that, when you write the code which calls a method in this interface, you don't include self in the arguments you provide, therefore it should not appear in the public documentation of those methods.

- -

The method is required to return a value which can be handled by -schema.makeConstraint: this constraint is then applied to the return value of -the remote method.

- -

Other attributes of the method (perhaps added by decorators of some sort) -will, some day, be able to specify specialized behavior of the method. The -brainstorming sessions have come up with the following ideas:

- -
    -
  • .wait=False: don't wait for an answer
  • -
  • .reliable=False: feel free to send this over UDP
  • -
  • .ordered=True: but enforce order between successive remote calls
  • -
  • .priority=3: use priority queue / stream #3
  • -
  • .failure=Full: allow/expect full Failure contents (stack frames)
  • -
  • .failure=ErrorMessage: only allow/expect truncated CopiedFailures
  • -
- -

We are also considering how to merge the RemoteInterface with other useful -interface specifications, in particular zope.interface and -formless.TypedInterface .

- - -

Argument Transformation

- -

To understand this section, it may be useful to review the Banana documentation on serializing object graphs. -Also note that method arguments and method return values are handled -identically.

- -

Basic types (lists, tuples, dictionaries) are serialized and unserialized -as you would expect: the resulting object would (if it existed in the -sender's address space) compare as equal (but of course not -identical, because the objects will exist at different memory -locations).

- -
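A round-trip over the wire behaves like a deep copy for these basic types, which a quick sketch (using copy.deepcopy as a stand-in for serialization plus unserialization) makes concrete:

```python
import copy

sent = [1, 2, {"key": "value"}]
received = copy.deepcopy(sent)  # stands in for a trip through Banana

assert received == sent              # equal in value...
assert received is not sent          # ...but a distinct object
assert received[2] is not sent[2]    # nested containers are new too
```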

Shared References, Serialization Scope

- -

Shared references to the same object are handled correctly. Banana is -responsible for noticing that a sharable object has been serialized before -(or at least has begun serialization) and inserts reference markers so that -the object graph can be reconstructed. This introduces the concept of -serialization scope: the boundaries beyond which shared references are not -maintained.

- -

For PB, serialization is scoped to the method call. If an object is -referenced by two arguments to the same method call, that method will see -two references to the same object. If those arguments are containers of some -form, which (eventually) hold a reference to the same object, the object -graph will be preserved. For example:

- -
-class Caller:
-    def start(self):
-        obj = [1, 2, 3]
-        self.remote.callRemote("both", obj, obj)
-        self.remote.callRemote("deeper", ["a", obj], (4, 5, obj))
-
-class Called(pb.Referenceable):
-    def remote_both(self, arg1, arg2):
-        assert arg1 is arg2
-        assert arg1 == [1,2,3]
-    def remote_deeper(self, listarg, tuplearg):
-        ref1 = listarg[1]
-        ref2 = tuplearg[2]
-        assert ref1 is ref2
-        assert ref1 == [1,2,3]
-
- -

But if the remote-side object is referenced in two distinct remote method -invocations, the local-side methods will see two separate objects. For -example:

- -
-class Caller:
-    def start(self):
-        self.obj = [1, 2, 3]
-        d = self.remote.callRemote("one", self.obj)
-        d.addCallback(self.next)
-    def next(self, res):
-        self.remote.callRemote("two", self.obj)
-
-class Called(pb.Referenceable):
-    def remote_one(self, ref1):
-        assert ref1 == [1,2,3]
-        self.ref1 = ref1
-
-    def remote_two(self, ref2):
-        assert ref2 == [1,2,3]
-        assert self.ref1 is not ref2 # not the same object
-
- -

You can think of the method call itself being a node in the object graph, with the method arguments as its children. The method call node is picked up and the resulting sub-tree is serialized with no knowledge of anything outside the sub-tree. (This isn't quite true: for some objects, serialization is scoped to the connection as a whole. Referenceables and RemoteReferences are like this.)

- -

The value returned by a method call is serialized by itself, without -reference to the arguments that were given to the method. If a remote method -is called with a list, and the method returns its argument unchanged, the -caller will get back a deep copy of the list it passed in.

- -

Referenceables, RemoteReferences

- -

Referenceables are transformed into RemoteReferences when they are sent over the wire. As one side traverses the object graph of the method arguments (or the return value), each Referenceable object it encounters is serialized with a my-reference sequence that includes the object-ID number. When the other side is unserializing the token stream, it creates a RemoteReference object, or uses one that already exists.

- -

Likewise, if an argument (or return value) contains a RemoteReference, and it is being sent back to the Broker that holds the original Referenceable, then it will be turned back into that Referenceable when it arrives. In this case, the caller of a remote method which returns its argument unchanged will see a result that is identical to what it passed in:

- -
-class Target(pb.Referenceable):
-    pass
-
-class Caller:
-    def start(self):
-        self.obj = Target()
-        d = self.remote.callRemote("echo", self.obj)
-        d.addCallback(self.next)
-    def next(self, res):
-        assert res is self.obj
-
-class Called(pb.Referenceable):
-    def remote_echo(self, arg):
-        # arg is a RemoteReference to a Target() instance 
-        return arg
-
- -

These references have a serialization scope which extends across the entire connection. As long as two method calls share the same Broker instance (which generally means they share the same TCP socket), they will both serialize Referenceables into identical RemoteReferences. This also means that both sides do reference-counting to ensure that the Referenceable doesn't get garbage-collected while a remote system holds a RemoteReference that points to it.

- -

In the future, there may be other classes which behave this way. In -particular, Referenceable and Callable may be distinct -qualities.

- - -

Copyable, RemoteCopy

- -

Some objects can be marked to indicate that they should be copied bodily -each time they traverse the wire (pass-by-value instead of -pass-by-reference). Classes which inherit from pb.Copyable are -passed by value. Their getTypeToCopy and -getStateToCopy methods are used to assemble the data that will -be serialized. These methods default to plain old reflect.qual -(which provides the fully-qualified name of the class) and the instance's -attribute __dict__. You can override these to provide a -different (or smaller) set of state attributes to the remote end.

- - -
-class Source(pb.Copyable):
-    def getStateToCopy(self):
-        state = self.__dict__.copy()
-        del state['private']
-        state['children'] = []
-        return state
-
- -

Rather than subclass pb.Copyable, you can also implement the -flavors.ICopyable interface:

- -
-from twisted.python import reflect
-
-class Source2:
-    implements(flavors.ICopyable)
-    def getTypeToCopy(self):
-        return reflect.qual(self.__class__)
-    def getStateToCopy(self):
-        return self.__dict__
-
- -

.. or register an ICopyable adapter. Using the adapter allows you to -define serialization behavior for third-party classes that are out of your -control (ones which you cannot rewrite to inherit from -pb.Copyable).

- -
-class Source3:
-    pass
-
-class Source3Copier:
-    implements(flavors.ICopyable)
-
-    def getTypeToCopy(self):
-        return 'foo.Source3'
-    def getStateToCopy(self):
-        orig = self.original
-        d = { 'foo': orig.foo, 'bar': orig.bar }
-        return d
-
-registerAdapter(Source3Copier, Source3, flavors.ICopyable)
-
- - -

On the other end of the wire, the receiving side must register a -RemoteCopy subclass under the same name as returned by the -sender's getTypeToCopy value. This subclass is used as a factory -to create instances that correspond to the original Copyable. -The registration can either take place explicitly (with -pb.registerRemoteCopy), or automatically (by setting the -copytype attribute in the class definition).

- -

The default RemoteCopy behavior simply sets the instance's -__dict__ to the incoming state, which may be plenty if you are -willing to let outsiders arbitrarily manipulate your object state. If so, and -you believe both peers are importing the same source file, it is enough to -create and register the RemoteCopy at the same time you create -the Copyable:

- -
-class Source(pb.Copyable):
-    def getStateToCopy(self):
-        state = self.__dict__.copy()
-        del state['private']
-        state['children'] = []
-        return state
-class Remote(pb.RemoteCopy):
-    copytype = reflect.qual(Source)
-
- -

You can do something special with the incoming object state by overriding -the setCopyableState method. This may allow you to do some -sanity-checking on the state before trusting it.

- -
-class Remote(pb.RemoteCopy):
-    def setCopyableState(self, state):
-        state['count'] = 0
-        self.__dict__ = state
-        self.total = self.one + self.two
-
-# show explicit registration, instead of using 'copytype' class attribute
-pb.registerRemoteCopy(reflect.qual(Source), Remote)
-
- -

You can also set a constraint on the inbound -object state, which provides a way to enforce some type checking on the state -components as they arrive. This protects against resource-consumption attacks -where someone sends you a zillion-byte string as part of the object's -state.

- -
-class Remote(pb.RemoteCopy):
-    stateSchema = schema.AttributeDictConstraint(('foo', int),
-                                                 ('bar', str))
-
- -

In this example, the object will only accept two attributes: foo -(which must be a number), and bar (which must be a string shorter than -the default limit of 1000 characters). Various classes from the -schema module can be used to construct more complicated -constraints.

- - - -

Slicers, ISlicer

- -

Each object gets Sliced into a stream of tokens as it goes over the wire: Referenceable and Copyable are merely special cases. These classes have Slicers which implement specific behaviors when the serialization process is asked to send their instances to the remote side. You can implement your own Slicers to take complete control over the serialization process. The most useful reason to take advantage of this feature is to implement streaming slicers, which can minimize in-memory buffering by only producing Banana tokens on demand as space opens up in the transport.

- -

Banana Slicers are documented in detail in the Banana documentation. Once you create a Slicer class, -you will want to register it, letting Banana know that this Slicer is -useful for conveying certain types of objects across the wire. The registry -maps a type to a Slicer class (which is really a slicer factory), and is -implemented by registering the slicer as a regular adapter for the -ISlicer interface. For example, lists are serialized by the -ListSlicer class, so ListSlicer is registered as -the slicer for the list type:

- -
-class ListSlicer(BaseSlicer):
-    opentype = ("list",)
-    slices = list
-
- -

Slicer registration can be either explicit or implicit. In this example, -an implicit registration is used: by setting the slices attribute to -the list type, the BaseSlicer's metaclass automatically -registers the mapping from list to ListSlicer.

- -

To explicitly register a slicer, just leave slices set to None (to disable auto-registration), and then register the slicer manually.

- -
-class TupleSlicer(BaseSlicer):
-    opentype = ("tuple",)
-    slices = None
-    ...
-registerAdapter(TupleSlicer, tuple, pb.ISlicer)
-
- -

As with ICopyable, registering an ISlicer adapter allows you to define -exactly how you wish to serialize third-party classes which you do not get to -modify.

- - -

Unslicers

- -

On the other side of the wire, the incoming token stream is handed to an -Unslicer, which is responsible for turning the set of tokens -into a single finished object. They are also responsible for enforcing limits -on the types and sizes of the tokens that make up the stream. Unslicers are -also described in greater detail in the Banana -docs.

- -

As with Slicers, Unslicers need to be registered to be useful. This registry maps opentypes to Unslicer classes (i.e. factories which can produce an unslicer instance each time the given opentype appears in the token stream). Therefore it maps tuples to subclasses of BaseUnslicer.

- -

Again, this registry can be either implicit or explicit. If the Unslicer has a non-None class attribute named opentype, then it is automatically registered. If it does not have this attribute (or if it is set to None), then no registration is performed, and the Unslicer must be manually registered:

- -
-class MyUnslicer(BaseUnslicer):
-    ...
-
-pb.registerUnslicer(('myopentype',), MyUnslicer)
-
- -

Also remember that this registry is global, and that you cannot register two Unslicers for the same opentype (you'll get an exception at class-definition time, which will probably result in an ImportError).

- - -

Slicer/Unslicer Example

- -

The simplest kind of slicer has a sliceBody method (a generator) which yields a series of tokens. To demonstrate how to build a useful Slicer, we'll write one that can send large strings across the wire in pieces. Banana can send arbitrarily long strings in a single token, but each token must be handed to the transport layer in an indivisible chunk, and anything that doesn't fit in the transmit buffers will be stored in RAM until some space frees up in the socket. Practically speaking, this means that anything larger than maybe 50kb will spend a lot of time in memory, increasing the RAM footprint for no good reason.

- -

Because of this, it is useful to be able to send large amounts of data in smaller pieces, and let the remote end reassemble them. The following Slicer is registered to handle all open files (perhaps not the best idea), and simply emits the contents in 10kb chunks.

- -

(Readers familiar with oldpb will notice that this Slicer/Unslicer pair provides similar functionality to the old FilePager class. The biggest improvement is that newpb can accomplish this without the extra round-trip per chunk. The downside is that, unless you enable streaming in your Broker, no other methods can be invoked while the file is being transmitted. The upside of the downside is that this lets you retain in-order execution of remote methods, and that you don't have to worry about changes to the contents of the file causing corrupt data to be sent over the wire. The other upside of the downside is that, if you enable streaming, you can do whatever other processing you wish between data chunks.)

- -
-class BigFileSlicer(BaseSlicer):
-    opentype = ("bigfile",)
-    slices = types.FileType
-    CHUNKSIZE = 10000
-
-    def sliceBody(self, streamable, banana):
-        while 1:
-            chunk = self.obj.read(self.CHUNKSIZE)
-            if not chunk:
-                return
-            yield chunk
-
- -

To receive this, you would use the following minimal Unslicer at the other end. Note that this Unslicer does not do as much as it could in the way of constraint enforcement: an attacker could easily make you consume as much memory as they wished by simply sending you a never-ending series of chunks.

- -
-class BigFileUnslicer(LeafUnslicer):
-    opentype = ("bigfile",)
-
-    def __init__(self):
-        self.chunks = []
-
-    def checkToken(self, typebyte, size):
-        if typebyte != tokens.STRING:
-            raise BananaError("BigFileUnslicer only accepts strings")
-
-    def receiveChild(self, obj):
-        self.chunks.append(obj)
-
-    def receiveClose(self):
-        return "".join(self.chunks)
-
- -

The opentype attribute causes this Unslicer to be implicitly registered to handle any incoming sequences with an index tuple of ("bigfile",), so each time BigFileSlicer is used, a BigFileUnslicer will be created to handle the results.
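To make the pairing concrete, here is a small foolscap-free simulation of the flow described above: the chunks yielded by a sliceBody-style generator become the STRING tokens that the receiving unslicer reassembles. The Banana framing itself (OPEN/CLOSE, token headers) is elided, and bytes are used in place of the old Python 2 strings, so this is a sketch of the mechanism rather than real wire code.

```python
from io import BytesIO

CHUNKSIZE = 10000

def slice_body(f):
    # mirrors BigFileSlicer.sliceBody: emit the file in CHUNKSIZE pieces
    while True:
        chunk = f.read(CHUNKSIZE)
        if not chunk:
            return
        yield chunk

class MiniBigFileUnslicer:
    # mirrors BigFileUnslicer: accumulate chunks, join them on close
    def __init__(self):
        self.chunks = []
    def receiveChild(self, obj):
        self.chunks.append(obj)
    def receiveClose(self):
        return b"".join(self.chunks)

data = b"x" * 25000                  # spans three chunks
unslicer = MiniBigFileUnslicer()
for token in slice_body(BytesIO(data)):
    unslicer.receiveChild(token)
assert unslicer.receiveClose() == data
```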

- -

A more complete example would want to write the file chunks to disk as they arrive, or process them incrementally. It might also want to have some way to limit the overall size of the file, perhaps by having the first chunk be an integer with the promised file size. In this case, the example might look like this somewhat contrived (and somewhat insecure) Unslicer:

- -
-class SomewhatLargeFileUnslicer(LeafUnslicer):
-    opentype = ("bigfile",)
-
-    def __init__(self):
-        self.fileSize = None
-        self.size = 0
-        self.output = open("/tmp/bigfile.txt", "w")
-
-    def checkToken(self, typebyte, size):
-        if self.fileSize is None:
-            if typebyte != tokens.INT:
-                raise BananaError("fileSize must be an INT")
-        else:
-            if typebyte != tokens.STRING:
-                raise BananaError("SomewhatLargeFileUnslicer only accepts strings")
-            if self.size + size > self.fileSize:
-                raise BananaError("size limit exceeded")
-
-    def receiveChild(self, obj):
-        if self.fileSize is None:
-            self.fileSize = obj
-            # decide if self.fileSize is too big, raise error to refuse it
-        else:
-            self.output.write(obj)
-            self.size += len(obj)
-
-    def receiveClose(self):
-        self.output.close()
-        return open("/tmp/bigfile.txt", "r")
-
- -

This constrained SomewhatLargeFileUnslicer uses the fact that each STRING token comes with a size, which can be used to enforce the promised file size that was provided in the first token. The data is streamed to a disk file as it arrives, so no more than CHUNKSIZE of memory is required at any given time.
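The size check can be simulated outside foolscap as well. The sketch below (hypothetical class and exception names, no real Banana tokens) shows the essential point of checkToken(): an over-budget STRING token is rejected using only its advertised size, before any of its body is accepted.

```python
class SizeLimitError(Exception):
    """Stands in for BananaError in this sketch."""

class MiniSizedUnslicer:
    def __init__(self):
        self.fileSize = None   # promised total, from the first (INT) token
        self.size = 0          # bytes accepted so far
        self.chunks = []
    def checkToken(self, size):
        # reject a chunk by its advertised size, before reading its body
        if self.fileSize is not None and self.size + size > self.fileSize:
            raise SizeLimitError("size limit exceeded")
    def receiveChild(self, obj):
        if self.fileSize is None:
            self.fileSize = obj
        else:
            self.chunks.append(obj)
            self.size += len(obj)

u = MiniSizedUnslicer()
u.receiveChild(10)               # promised size: 10 bytes
u.checkToken(6); u.receiveChild(b"x" * 6)
try:
    u.checkToken(6)              # 6 + 6 > 10: rejected up front
    rejected = False
except SizeLimitError:
    rejected = True
assert rejected
```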

- - -

Streaming Slicers

- -

TODO: add example

- -

The following slicer will, when the broker allows streaming, yield the CPU to other reactor events that want processing time. (This technique becomes somewhat inefficient if there is nothing else contending for CPU time; if this matters, you might want to use something which sends N chunks before yielding, or yields only when some other known service announces that it wants CPU time, etc.)

- -
-class BigFileSlicer(BaseSlicer):
-    opentype = ("bigfile",)
-    slices = types.FileType
-    CHUNKSIZE = 10000
-
-    def sliceBody(self, streamable, banana):
-        while 1:
-            chunk = self.obj.read(self.CHUNKSIZE)
-            if not chunk:
-                return
-            yield chunk
-            if streamable:
-                d = defer.Deferred()
-                reactor.callLater(0, d.callback, None)
-                yield d
-
- -

The next example will deliver data as it becomes available from a hypothetical slow process.

- -
-class OutputSlicer(BaseSlicer):
-    opentype = ("output",)
-
-    def sliceBody(self, streamable, banana):
-        assert streamable # requires it
-        while 1:
-            if self.process.finished():
-                return
-            chunk = self.process.read(self.CHUNKSIZE)
-            if not chunk:
-                d = self.process.waitUntilDataIsReady()
-                yield d
-            else:
-                yield chunk
-
- -

Streamability is required in this example because otherwise the Slicer would be obliged to provide chunks non-stop until the object had been completely serialized: if the process cannot deliver data, the Slicer has no way to block until it becomes ready. Prohibiting streamability is done to ensure coherency of serialized state, and the only way to guarantee this is to not let any non-Banana methods get CPU time until the object has been fully processed.

- -

Streaming Unslicers

- -

On the receiving side, the Unslicer can be made streamable too. This is considerably easier than on the sending side, because there are fewer concerns about state coherency.

- -

A streaming Unslicer is merely one that delivers some data directly from the receiveChild method, rather than accumulating it until the receiveClose method. The SomewhatLargeFileUnslicer example from above is actually a streaming Unslicer. Nothing special needs to be done.

- -

On the other hand, it can be tricky to know where exactly to deliver the data being streamed. The streamed object is probably part of a larger structure (like a method call), where the higher-level attribute can be used to determine which object or method should be called with the incoming data as it arrives. The current Banana model is that each completed object (as returned by the child's receiveClose method) is handed to the parent's receiveChild method. The parent can do whatever it wants with the results. To make streaming Unslicers more useful, the parent should be able to set up a target for the data at the time the child Unslicer is created.

- -

More work is needed in this area to figure out how this functionality should be exposed.

- - -

Arbitrary Instances are NOT serialized

- -

Arbitrary instances (that is, anything which does not have an ISlicer adapter) are not serialized. If an argument to a remote method contains one, you will get a Violation exception when you attempt to serialize it (i.e., the Deferred that you get from callRemote will errback with a Failure that contains a Violation exception). If the return value contains one, the Violation will be logged on the local side, and the remote caller will see an error just as if your method had raised a Violation itself.

- -

There are two reasons for this. The first is a security precaution: you must explicitly mark the classes that are willing to reveal their contents to the world. This reduces the chance of leaking sensitive information.

- -

The second is that it is not actually meaningful to send the contents of an arbitrary object. The recipient only gets the class name and a dictionary with the object's state. Which class should it use to create the corresponding object? It could attempt to import one based upon the classname (the approach pickle uses), but that would give a remote attacker unrestricted access to classes which could do absolutely anything: very dangerous.

- -

Both ends must be willing to transport the object. The sending side expresses this by marking the class (subclassing Copyable, or registering an ISlicer adapter). The receiving side must register the class as well, by doing registerUnslicer or using the opentype attribute in a suitable Unslicer subclass definition.

- - -

PB Sequences

- -

There are several Banana sequences which are used to support the RPC mechanisms of Perspective Broker. These are in addition to the usual ones listed in the Banana docs.

- -

Top-Level Sequences

- -

These sequences only appear at the top-level (never inside another object).

- - - - - - - - - - - - - - - -
PB (method call) Sequences
method call (callRemote): OPEN(call) INT(request-id) INT/STR(your-reference-id) STRING(interfacename) STRING(methodname) (STRING(argname),argvalue).. CLOSE
method response (success): OPEN(answer) INT(request-id) value CLOSE
method response (exception): OPEN(error) INT(request-id) value CLOSE
RemoteReference.__del__: OPEN(decref) INT(your-reference-id) CLOSE
- -
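The shape of the method-call sequence can be sketched as a flat list of token labels. This is illustrative only: it mimics the order of tokens in the "method call" row of the table, not foolscap's actual binary encoding, and the helper name is made up for this example.

```python
def call_sequence(request_id, your_reference_id, interface, method, kwargs):
    """Render the 'call' sequence as human-readable token labels."""
    tokens = ["OPEN(call)",
              "INT(%d)" % request_id,
              "INT(%d)" % your_reference_id,
              "STRING(%s)" % interface,
              "STRING(%s)" % method]
    # each argument is an (argname, argvalue) pair; sorted for determinism
    for argname, argvalue in sorted(kwargs.items()):
        tokens.append("STRING(%s)" % argname)
        tokens.append(repr(argvalue))
    tokens.append("CLOSE")
    return tokens

seq = call_sequence(1, 42, "RIMath", "add", {"a": 1, "b": 2})
assert seq[0] == "OPEN(call)" and seq[-1] == "CLOSE"
```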

Internal Sequences

- -

The following sequences are used to serialize PB-specific objects. They never appear at the top-level, but only as the argument value or return value (or somewhere inside them).

- - - - - - - - - - - - -
PB (internal object) Sequences
pb.Referenceable: OPEN(my-reference) INT(clid) [OPEN(list) InterfaceList.. CLOSE] CLOSE
pb.RemoteReference: OPEN(your-reference) INT/STR(clid) CLOSE
pb.Copyable: OPEN(copyable) STRING(reflect.qual(class)) (attr,value).. CLOSE
- -

The first time a pb.Referenceable is sent, the second object is an InterfaceList, which is a list of interfacename strings, and therefore constrainable by a schema of ListOf(str) with some appropriate maximum-length restrictions. This InterfaceList describes all the Interfaces that the corresponding pb.Referenceable implements. The receiver uses this list to look up local Interfaces (and therefore Schemas) to attach to the pb.RemoteReference. This is how method schemas are checked on the sender side.

- -

This implies that Interfaces must be registered, just as classes are for pb.Copyable. TODO: what happens if an unknown Interface is received?

- -

Classes which wish to be passed by value should either inherit from pb.Copyable or have an ICopyable adapter registered for them. On the receiving side, the registerRemoteCopy function must be used to register a factory, which can be a pb.RemoteCopy subclass or something else which implements IRemoteCopy.

- -

Failure objects are sent as a pb.Copyable with a class name of twisted.python.failure.Failure.

- -

Implementation notes

- -

Outgoing Referenceables

- -

The side which holds the Referenceable uses a ReferenceableSlicer to serialize it. Each Referenceable is tracked with a process-unique ID (abbreviated puid). As the name implies, this number refers to a specific object within a given process: it is scoped to the process (and is never sent to another process), but it spans multiple PB connections (any given object will have the same puid regardless of which connection is referring to it). The puid is an integer, normally obtained with id(obj), but you can override the object's processUniqueID method to use something else (this might be useful for objects that are really proxies for something else). Any two objects with the same puid are serialized identically.
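The puid rule can be sketched without foolscap. The class names below are hypothetical; the only point carried over from the text is that the default puid is id(obj), and a proxy can override processUniqueID() so that two proxies for the same underlying object serialize identically.

```python
class Trackable:
    # default rule: the puid is the object's own id
    def processUniqueID(self):
        return id(self)

class Proxy(Trackable):
    """A stand-in for an object that proxies something else."""
    def __init__(self, target):
        self.target = target
    def processUniqueID(self):
        # identify by the proxied object, not the proxy wrapper,
        # so all proxies for one target share a puid
        return id(self.target)

real = object()
p1, p2 = Proxy(real), Proxy(real)
assert p1.processUniqueID() == p2.processUniqueID()  # same puid
```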

- -

All Referenceables sent over the wire (as arguments or return values for remote methods) are given a connection-local ID (clid) which is scoped to one end of the connection. The Referenceable is serialized with this number, using a banana sequence of (OPEN "my-reference" clid). The remote peer (the side that holds the RemoteReference) knows the Referenceable by the clid sent to represent it. These are small integers. From a security point of view, any object sent across the wire (and thus given a clid) is forever accessible to the remote end (or at least until the connection is dropped).

- -

The sending side uses the Broker.clids dict to map puid to clid. It uses the Broker.localObjects dict to map clid to Referenceable. The reference from .localObjects also has the side-effect of making sure the Referenceable doesn't go out of scope while the remote end holds a reference.

- -

Broker.currentLocalID is used as a counter to create clid values.
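A toy model of these three structures (not Broker's real code; the class and method names are invented for illustration) ties the last two paragraphs together: clids maps puid to clid, localObjects maps clid back to the object while keeping it alive, and currentLocalID hands out the next clid.

```python
class MiniBroker:
    def __init__(self):
        self.clids = {}          # puid -> clid
        self.localObjects = {}   # clid -> Referenceable (strong ref keeps it alive)
        self.currentLocalID = 0  # counter for new clids

    def putCLID(self, obj):
        puid = id(obj)           # stand-in for obj.processUniqueID()
        if puid not in self.clids:
            self.currentLocalID += 1
            self.clids[puid] = self.currentLocalID
            self.localObjects[self.currentLocalID] = obj
        return self.clids[puid]

b = MiniBroker()
obj = object()
clid = b.putCLID(obj)
assert b.putCLID(obj) == clid        # same object -> same clid
assert b.localObjects[clid] is obj   # strong reference pins the object
```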

- - -

RemoteReference

- -

In response to the incoming my-reference sequence, the receiving side creates a RemoteReference that remembers its Broker and the clid value. The RemoteReference is stashed in the Broker.remoteReferences weakref dictionary (which maps from clid to RemoteReference), to make sure that a single Referenceable is always turned into the same RemoteReference. Note that this is not infallible: if the recipient forgets about the RemoteReference, PB will too. But if they really do forget about it, then they won't be able to tell that the replacement is not the same as the original (unless they do something crazy like remembering the id(obj) of the old object and checking to see if it is the same as that of the new one; but id(obj) is only unique among live objects anyway, and the replacement will have a different clid). And note that I think there is a race condition here, in which the reference is sent over the wire at the same time the other end forgets about it.
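The weakref behavior described above can be sketched with the standard library alone (FakeRemoteReference is an invented stand-in): a WeakValueDictionary maps clid to the wrapper, so a live wrapper is reused, but once the recipient drops it, a later lookup misses and a new, distinct wrapper must be created.

```python
import gc
import weakref

class FakeRemoteReference:
    """Stand-in for RemoteReference; remembers only its clid."""
    def __init__(self, clid):
        self.clid = clid

# maps clid -> wrapper, but does not keep the wrapper alive
remoteReferences = weakref.WeakValueDictionary()

rref = remoteReferences.setdefault(5, FakeRemoteReference(5))
assert remoteReferences.get(5) is rref   # reused while the holder keeps it

del rref                                 # recipient forgets the reference
gc.collect()                             # (CPython would also drop it promptly)
assert remoteReferences.get(5) is None   # PB forgot it too
```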

- -

This RemoteReference is where the .callRemote method lives. When used to invoke remote methods, the clid is used as the second token of a call sequence. In this context, the clid is a your-reference: it refers to the recipient's .localObjects table. The Referenceable-holder's my-reference-id is sent back to them as the your-reference-id argument of the call sequence.

- -

The RemoteReference isn't always used to invoke remote methods: it could appear in an argument or a return value instead: the goal is to have the Referenceable-holder see their same Referenceable come back to them. In this case, the clid is used in a (OPEN "your-reference" clid) sequence. The Referenceable-holder looks up the clid in their .localObjects table and puts the result in the method argument or return value.

- - - -

URL References

- -

In addition to the implicitly-created numerically-indexed Referenceable instances (kept in the Broker's .localObjects dict), there are explicitly-registered string-indexed Referenceables kept in the PBServerFactory's localObjects dictionary. This table is used to publish objects to the outside world. These objects are the targets of the pb.getRemoteURL and pb.callRemoteURL functions.

- -

To access these, a URLRemoteReference must be created that refers to a string clid instead of a numeric one. This is a simple subclass of RemoteReference: it behaves exactly the same. The URLRemoteReference is created manually by pb.getRemoteURL, rather than being generated automatically upon the receipt of a my-reference sequence. It also assumes a list of RemoteInterface names (which are usually provided by the holder of the Referenceable).

- -

To invoke methods on a URL-indexed object, a string token is used as the clid in the your-reference-id argument of a call sequence.

- -

In addition, the clid of a your-reference sequence can be a string to use URL-indexed objects as arguments or return values of method invocations. This allows one side to send a URLRemoteReference to the other and have it turn into the matching Referenceable when it arrives. Of course, if it is invalid, the method call that tried to send it will fail.

- -

Note that these URLRemoteReference objects will not survive a roundtrip like regular RemoteReferences do. The URLRemoteReference turns into a Referenceable, but the Referenceable will turn into a regular numeric (implicit) RemoteReference when it comes back. This may change in the future as the URL-based referencing scheme is developed. It might also become possible for string clids to appear in my-reference sequences, giving Referenceable-holders the ability to publish URL references explicitly.

- -

It might also become possible to have these URLs point to other servers. In this case, a remote sequence will probably be used, rather than the my-reference sequence used for implicit references.

- -

Note that these URL-endpoints are per-Factory, so they are shared between multiple connections (the implicitly-created references are only available on the connection that created them). The PBServerFactory is created with a root object, which is a URL-endpoint with a clid of an empty string.

- - - - - - diff --git a/src/foolscap/doc/stylesheet-unprocessed.css b/src/foolscap/doc/stylesheet-unprocessed.css deleted file mode 100644 index e4a62cc1..00000000 --- a/src/foolscap/doc/stylesheet-unprocessed.css +++ /dev/null @@ -1,20 +0,0 @@ - -span.footnote { - vertical-align: super; - font-size: small; -} - -span.footnote:before -{ - content: "[Footnote: "; -} - -span.footnote:after -{ - content: "]"; -} - -div.note:before -{ - content: "Note: "; -} diff --git a/src/foolscap/doc/stylesheet.css b/src/foolscap/doc/stylesheet.css deleted file mode 100644 index c82fe2ec..00000000 --- a/src/foolscap/doc/stylesheet.css +++ /dev/null @@ -1,180 +0,0 @@ - -body -{ - margin-left: 2em; - margin-right: 2em; - border: 0px; - padding: 0px; - font-family: sans-serif; - } - -.done { color: #005500; background-color: #99ff99 } -.notdone { color: #550000; background-color: #ff9999;} - -pre -{ - padding: 1em; - font-family: Neep Alt, Courier New, Courier; - font-size: 12pt; - border: thin black solid; -} - -.boxed -{ - padding: 1em; - border: thin black solid; -} - -.shell -{ - background-color: #ffffdd; -} - -.python -{ - background-color: #dddddd; -} - -.htmlsource -{ - background-color: #dddddd; -} - -.py-prototype -{ - background-color: #ddddff; -} - - -.python-interpreter -{ - background-color: #ddddff; -} - -.doit -{ - border: thin blue dashed ; - background-color: #0ef -} - -.py-src-comment -{ - color: #1111CC -} - -.py-src-keyword -{ - color: #3333CC; - font-weight: bold; -} - -.py-src-parameter -{ - color: #000066; - font-weight: bold; -} - -.py-src-identifier -{ - color: #CC0000 -} - -.py-src-string -{ - - color: #115511 -} - -.py-src-endmarker -{ - display: block; /* IE hack; prevents following line from being sucked into the py-listing box. 
*/ -} - -.py-listing, .html-listing, .listing -{ - margin: 1ex; - border: thin solid black; - background-color: #eee; -} - -.py-listing pre, .html-listing pre, .listing pre -{ - margin: 0px; - border: none; - border-bottom: thin solid black; -} - -.py-listing .python -{ - margin-top: 0; - margin-bottom: 0; - border: none; - border-bottom: thin solid black; - } - -.html-listing .htmlsource -{ - margin-top: 0; - margin-bottom: 0; - border: none; - border-bottom: thin solid black; - } - -.caption -{ - text-align: center; - padding-top: 0.5em; - padding-bottom: 0.5em; -} - -.filename -{ - font-style: italic; - } - -.manhole-output -{ - color: blue; -} - -hr -{ - display: inline; - } - -ul -{ - padding: 0px; - margin: 0px; - margin-left: 1em; - padding-left: 1em; - border-left: 1em; - } - -li -{ - padding: 2px; - } - -dt -{ - font-weight: bold; - margin-left: 1ex; - } - -dd -{ - margin-bottom: 1em; - } - -div.note -{ - background-color: #FFFFCC; - margin-top: 1ex; - margin-left: 5%; - margin-right: 5%; - padding-top: 1ex; - padding-left: 5%; - padding-right: 5%; - border: thin black solid; -} diff --git a/src/foolscap/doc/template.tpl b/src/foolscap/doc/template.tpl deleted file mode 100644 index 62c88670..00000000 --- a/src/foolscap/doc/template.tpl +++ /dev/null @@ -1,23 +0,0 @@ - - - - - -Twisted Documentation: - - - - -

-
-
- -
- -

Index

- Version: - - - diff --git a/src/foolscap/doc/todo.txt b/src/foolscap/doc/todo.txt deleted file mode 100644 index 14dc1608..00000000 --- a/src/foolscap/doc/todo.txt +++ /dev/null @@ -1,1304 +0,0 @@ --*- outline -*- - -non-independent things left to do on newpb. These require deeper magic or -can not otherwise be done casually. Many of these involve fundamental -protocol issues, and therefore need to be decided sooner rather than later. - -* summary -** protocol issues -*** negotiation -*** VOCABADD/DEL/SET sequences -*** remove 'copy' prefix from RemoteCopy type sequences? -*** smaller scope for OPEN-counter reference numbers? -** implementation issues -*** cred -*** oldbanana compatibility -*** Copyable/RemoteCopy default to __getstate__ or self.__dict__ ? -*** RIFoo['bar'] vs RIFoo.bar (should RemoteInterface inherit from Interface?) -*** constrain ReferenceUnslicer -*** serialize target.remote_foo usefully - -* decide whether to accept positional args in non-constrained methods - -DEFERRED until after 2.0 - warner: that would be awesome but let's do it _later_ - -This is really a backwards-source-compatibility issue. In newpb, the -preferred way of invoking callRemote() is with kwargs exclusively: glyph's -felt positional arguments are more fragile. If the client has a -RemoteInterface, then they can convert any positional arguments into keyword -arguments before sending the request. - -The question is what to do when the client is not using a RemoteInterface. -Until recently, callRemote("bar") would try to find a matching RI. I changed -that to have callRemote("bar") never use an RI, and instead you would use -callRemote(RIFoo['bar']) to indicate that you want argument-checking. - -That makes positional arguments problematic in more situations than they were -before. The decision to be made is if the OPEN(call) sequence should provide -a way to convey positional args to the server (probably with numeric "names" -in the (argname, argvalue) tuples). 
If we do this, the server (which always -has the RemoteInterface) can do the positional-to-keyword mapping. But -putting this in the protocol will oblige other implementations to handle them -too. - -* change the method-call syntax to include an interfacename -DONE - -Scope the method name to the interface. This implies (I think) one of two -things: - - callRemote() must take a RemoteInterface argument - - each RemoteReference handles just a single Interface - -Probably the latter, maybe have the RR keep both default RI and a list of -all implemented ones, then adapting the RR to a new RI can be a simple copy -(and change of the default one) if the Referenceable knows about the RI. -Otherwise something on the local side will need to adapt one RI to another. -Need to handle reference-counting/DECREF properly for these shared RRs. - -From glyph: - - callRemote(methname, **args) # searches RIs - callRemoteInterface(remoteinterface, methname, **args) # single RI - - getRemoteURL(url, *interfaces) - - URL-RRefs should turn into the original Referenceable (in args/results) - (map through the factory's table upon receipt) - - URL-RRefs will not survive round trips. leave reference exchange for later. - (like def remote_foo(): return GlobalReference(self) ) - - move method-invocation code into pb.Referenceable (or IReferenceable - adapter). Continue using remote_ prefix for now, but make it a property of - that code so it can change easily. - ok, for today I'm just going to stick with remote_foo() as a - low-budget decorator, so the current restrictions are 1: subclass - pb.Referenceable, 2: implements() a RemoteInterface with method named "foo", - 3: implement a remote_foo method - and #1 will probably go away within a week or two, to be replaced by - #1a: subclass pb.Referenceable OR #1b: register an IReferenceable adapter - - try serializing with ISliceable first, then try IReferenceable. 
The - IReferenceable adapter must implements() some RemoteInterfaces and gets - serialized with a MyReferenceSlicer. - -http://svn.twistedmatrix.com/cvs/trunk/pynfo/admin.py?view=markup&rev=44&root=pynfo - -** use the methods of the RemoteInterface as the "method name" -DONE (provisional), using RIFoo['add'] - - rr.callRemote(RIFoo.add, **args) - -Nice and concise. However, #twisted doesn't like it, adding/using arbitrary -attributes of Interfaces is not clean (think about IFoo.implements colliding -with RIFoo.something). - - rr.callRemote(RIFoo['add'], **args) - RIFoo(rr).callRemote('add', **args) - adaptation, or narrowing? - - glyph: I'm adding callRemote(RIFoo.bar, **args) to newpb right now - wow. - seemed like a simpler interface than callRemoteInterface("RIFoo", -"bar", **args) - warner: Does this mean that IPerspective can be parameterized now? - warner: bad idea - warner: Zope hates you! - warner: zope interfaces don't support that syntax - zi does support multi-adapter syntax - but i don't really know what that is - warner: callRemote(RIFoo.getDescriptionFor("bar"), *a, **k) - glyph: yeah, I fake it. In RemoteInterfaceClass, I remove those -attributes, call InterfaceClass, and then put them all back in - warner: don't add 'em as attributes - warner: just fix the result of __getitem__ to add a slot actually -refer back to the interface - radix: the problem is that IFoo['bar'] doesn't point back to IFoo - warner: even better, make them callable :-) - glyph: IFoo['bar'].interface == 'IFoo' - RIFoo['bar']('hello') - glyph: I was thinking of doing that in a later version of -RemoteInterface - exarkun: >>> type(IFoo['bar'].interface) - - right - 'IFoo' - Just look through all the defined interfaces for ones with matching -names - exarkun: ... e.g. 
*NOT* __main__.IFoo - exarkun: AAAA you die - hee hee -* warner struggles to keep up with his thoughts and those of people around him -* glyph realizes he has been given the power to whine - glyph: ok, so with RemoteInterface.__getitem__, you could still do -rr.callRemote(RIFoo.bar, **kw), right? - was your objection to the interface or to the implementation? - I really don't think you should add attributes to the interface - ok - I need to stash a table of method schemas somewhere - just make __getitem__ return better type of object - and ideally if this is generic we can get it into upstream - Is there a reason Method.interface isn't a fully qualified name? - not necessarily - I have commit access to zope.interface - if you have any features you want added, post to -interface-dev@zope.org mailing list - and if Jim Fulton is ok with them I can add them for you - hmm - does using RIFoo.bar to designate a remote method seem reasonable? - I could always adapt it to something inside callRemote - something PB-specific, that is - but that adapter would have to be able to pull a few attributes off -the method (name, schema, reference to the enclosing RemoteInterface) - and we're really talking about __getattr__ here, not __getitem__, -right? - for x.y yes - no, I don't think that's a good idea - interfaces have all kinds od methods on them already, for -introspection purposes - namespace clashes are the suck - unless RIFoo isn't really an Interface - hm - how about if it were a wrapper around a regular Interface? - yeah, RemoteInterfaces are kind of a special case - RIFoo(IFoo, publishedMethods=['doThis', 'doThat']) - s/RIFoo/RIFoo = RemoteInterface(/ - I'm confused. Why should you have to specify which methods are -published? - SECURITY! 
- not actually necessary though, no - and may be overkill - the only reason I have it derive from Interface is so that we can do -neat adapter tricks in the future - that's not contradictory - RIFoo(x) would still be able to do magic - you wouldn't be able to check if an object provides RIFoo, though - which kinda sucks - but in any case I am against RIFoo.bar - pity, it makes the callRemote syntax very clean - hm - So how come it's a RemoteInterface and not an Interface, anyway? - I mean, how come that needs to be done explicitly. Can't you just -write a serializer for Interface itself? - -* warner goes to figure out where the RemoteInterface discussion went after he - got distracted - maybe I should make RemoteInterface a totally separate class and just -implement a couple of Interface-like methods - cause rr.callRemote(IFoo.bar, a=1) just feels so clean - warner: why not IFoo(rr).bar(a=1) ? - hmm, also a possibility - well - IFoo(rr).callRemote('bar') - or RIFoo, or whatever - hold on, what does rr inherit from? 
- RemoteReference - it's a RemoteReference - then why not IFoo(rr) / - I'm keeping a strong distinction between local interfaces and remote -ones - ah, oka.y - warner: right, you can still do RIFoo - ILocal(a).meth(args) is an immediate function call - in that case, I prefer rr.callRemote(IFoo.bar, a=1) - .meth( is definitely bad, we need callRemote - rr.callRemote("meth", args) returns a deferred - radix: I don't like from foo import IFoo, RIFoo - you probably wouldn't have both an IFoo and an RIFoo - warner: well, look at it this way: IFoo(rr).callRemote('foo') still -makes it obvious that IFoo isn't local - warner: you could implement RemoteReferen.__conform__ to implement it - radix: I'm thinking of providing some kind of other class that would -allow .meth() to work (without the callRemote), but it wouldn't be the default - plus, IFoo(rr) is how you use interfaces normally, and callRemote is -how you make remote calls normally, so it seems that's the best way to do -interfaces + PB - hmm - in that case the object returned by IFoo(rr) is just rr with a tag -that sets the "default interface name" - right - and callRemote(methname) looks in that default interface before -looking anywhere else - for some reason I want to get rid of the stringyness of the method -name - and the original syntax (callRemoteInterface('RIFoo', 'methname', -args)) felt too verbose - warner: well, isn't that what your optional .meth thing is for? - yes, I don't like that either - using callRemote(RIFoo.bar, args) means I can just switch on the -_name= argument being either a string or a (whatever) that's contained in a -RemoteInterface - a lot of it comes down to how adapters would be most useful when -dealing with remote objects - and to what extent remote interfaces should be interchangeable with -local ones - good point. I have never had a use case where I wanted to adapt a -remote object, I don't think - however, I have had use cases to send interfaces across the wire - e.g. 
having a parameterized portal.login() interface - that'll be different, just callRemote('foo', RIFoo) - yeah. - the current issue is whether to pass them by reference or by value - eugh - Can you explain it without using those words? :) - hmm - Do you mean, Referenceable style vs Copyable style? - at the moment, when you send a Referenceable across the wire, the -id-number is accompanied with a list of strings that designate which -RemoteInterfaces the original claims to provide - the receiving end looks up each string in a local table, and -populates the RemoteReference with a list of RemoteInterface classes - the table is populated by metaclass magic that runs when a 'class -RIFoo(RemoteInterface)' definition is complete - ok - so a RemoteInterface is simply serialized as its qual(), right? - so as long as both sides include the same RIFoo definition, they'll -wind up with compatible remote interfaces, defining the same method names, -same method schemas, etc - effectively - you can't just send a RemoteInterface across the wire right now, but -it would be easy to add - the places where they are used (sending a Referenceable across the -wire) all special case them - ok, and you're considering actually writing a serializer for them that -sends all the information to totally reconstruct it on the other side without -having the definition - yes - or having some kind of debug method which gives you that - I'd say, do it the way you're doing it now until someone comes up with -a use case for actually sending it... - right - the only case I can come up with is some sort of generic object -browser debug tool - everything else turns into a form of version negotiation which is -better handled elsewhere - hmm - so RIFoo(rr).callRemote('bar', **kw) - I guess that's not too ugly - That's my vote.
:) - one thing it lacks is the ability to cleanly state that if 'bar' -doesn't exist in RIFoo then it should signal an error - whereas callRemote(RIFoo.bar, **kw) would give you an AttributeError -before callRemote ever got called - i.e. "make it impossible to express the incorrect usage" - mmmh - warner: but you _can_ check it immediately when it's called - in the direction I was heading, callRemote(str) would just send the -method request and let the far end deal with it, no schema-checking involved - warner: which, 99% of the time, is effectively the same time as -IFoo.bar would happen - whereas callRemote(RIFoo.bar) would indicate that you want schema -checking - yeah, true - hm. - (that last feature is what allowed callRemote and callRemoteInterface -to be merged) - or, I could say that the normal RemoteReference is "untyped" and does -not do schema checking - but adapting one to a RemoteInterface results in a -TypedRemoteReference which does do schema checking - and which refuses to be invoked with method names that are not in the -schema - warner: we-ell - warner: doing method existence checking is cool - warner: but I think tying any further "schema checking" to adaptation -is a bad idea - yeah, that's my hunch too - which is why I'd rather not use adapters to express the scope of the -method name (which RemoteInterface it is supposed to be a part of) - warner: well, I don't think tying it to callRemote(RIFoo.methName) -would be a good idea just the same - hm - so that leaves rr.callRemote(RIFoo['add']) and -rr.callRemoteInterface(RIFoo, 'add') - OTOH, I'm inclined to think schema checking should happen by default - It's just a matter of where it's parameterized - yeah, it's just that the "default" case (rr.callRemote('name')) needs -to work when there aren't any RemoteInterfaces declared - warner: oh - but if we want to encourage people to use the schemas, then we need -to make that case simple and concise -* radix goes over the issue in his head again
- Yes, I think I still have the same position. - which one? :) - IFoo(rr).callRemote("foo"); which would do schema checking because -schema checking is on by default when it's possible - using an adaptation-like construct to declare a scope of the method -name that comes later - well, it _is_ adaptation, I think. - Adaptation always has plugged in behavior, we're just adding a bit -more :) - heh - it is a narrowing of capability - hmm, how do you mean? - rr.callRemote("foo") will do the same thing - but rr.callRemote("foo") can be used without the remote interfaces - I think I lost you. - if rr has any RIs defined, it will try to use them (and therefore -complain if "foo" does not exist in any of them, or if the schema is violated) - Oh. That's strange. - So it's really quite different from how interfaces regularly work... - yeah - except that if you were feeling clever you could use them the normal -way - Well, my inclination is to make them work as similarly as possible. - "I have a remote reference to something that implements RIFoo, but I -want to use it in some other way" - s/possible/practical/ - then IBar(rr) or RIBar(rr) would wrap rr in something that knows how -to translate Bar methods into RIFoo remote methods - Maybe it's not practical to make them very similar. - I see. - -rr.callRemote(RIFoo.add, **kw) -rr.callRemote(RIFoo['add'], **kw) -RIFoo(rr).callRemote('add', **kw) - -I like the second one. Normal Interfaces behave like a dict, so IFoo['add'] -gets you the method-describing object (z.i.i.Method). My RemoteInterfaces -don't do that right now (because I remove the attributes before handing the -RI to z.i), but I could probably fix that. I could either add attributes to -the Method or hook __getitem__ to return something other than a Method -(maybe a RemoteMethodSchema). - -Those Method objects have a .getSignatureInfo() which provides almost -everything I need to construct the RemoteMethodSchema. 
Perhaps I should -post-process Methods rather than pre-process the RemoteInterface. I can't -tell how to use the return value trick, and it looks like the function may -be discarded entirely once the Method is created, so this approach may not -work. - -On the server side (Referenceable), subclassing Interface is nice because it -provides adapters and implements() queries. - -On the client side (RemoteReference), subclassing Interface is a hassle: I -don't think adapters are as useful, but getting at a method (as an attribute -of the RI) is important. We have to bypass most of Interface to parse the -method definitions differently. - -* create UnslicerRegistry, registerUnslicer -DONE (PROVISIONAL), flat registry (therefore problematic for len(opentype)>1) - -consider adopting the existing collection API (getChild, putChild) for this, -or maybe allow registerUnslicer() to take a callable which behaves kind of -like a twisted.web isLeaf=1 resource (stop walking the tree, give all index -tokens to the isLeaf=1 node) - -also some APIs to get a list of everything in the registry - -* use metaclass to auto-register RemoteCopy classes -DONE - -** use metaclass to auto-register Unslicer classes -DONE - -** and maybe Slicer classes too -DONE with name 'slices', perhaps change to 'slicerForClasses'? - - class FailureSlicer(slicer.BaseSlicer): - classname = "twisted.python.failure.Failure" - slicerForClasses = (failure.Failure,) # triggers auto-register - -** various registry approaches -DONE - -There are currently three kinds of registries used in banana/newpb: - - RemoteInterface <-> interface name - class/type -> Slicer (-> opentype) -> Unslicer (-> class/type) - Copyable subclass -> copyable-opentype -> RemoteCopy subclass - -There are two basic approaches to representing the mappings that these -registries implement. 
The first is implicit, where the local objects are -subclassed from Sliceable or Copyable or RemoteInterface and have attributes -to define the wire-side strings that represent them. On the receiving side, -we make extensive use of metaclasses to perform automatic registration -(taking names from class attributes and mapping them to the factory or -RemoteInterface used to create the remote version). - -The second approach is explicit, where pb.registerRemoteInterface, -pb.registerRemoteCopy, and pb.registerUnslicer are used to establish the -receiving-side mapping. There isn't a clean way to do it explicitly on the -sending side, since we already have instances whose classes can give us -whatever information we want. - -The advantage of implicit is simplicity: no more questions about why my -pb.RemoteCopy is giving "not unserializable" errors. The mere act of -importing a module is enough to let PB create instances of its classes. - -The advantage of doing it explicitly is to remind the user about the -existence of those maps, because the set of factory classes in the receiving -map is precisely equal to the user's exposure (from a security point of -view). See the E paper on secure-serialization for some useful concepts. - -A disadvantage of implicit is that you can't quite be sure what, exactly, -you're exposed to: the registrations take place all over the place. - -To make explicit not so painful, we can use quotient's .wsv files -(whitespace-separated values) which map from class to string and back again. -The file could list fully-qualified classname, wire-side string, and -receiving factory class on each line. The Broker (or rather the RootSlicer -and RootUnslicer) would be given a set of .wsv files to define their -mapping. It would get all the registrations at once (instead of having them -scattered about). They could also demand-load the receive-side factory -classes. - -For now, go implicit.
Put off the decision until we have some more -experience with using newpb. - -* move from VocabSlicer sequence to ADDVOCAB/DELVOCAB tokens - -Requires a .wantVocabString flag in the parser, which is kind of icky but -fixes the annoying asymmetry between set (vocab sequence) and get (VOCAB -token). Might want a CLEARVOCAB token too. - -On second thought, this won't work. There isn't room for both a vocab number -and a variable-length string in a single token. It must be an open sequence. -However, it could be an add/del/set-vocab sequence, allowing the vocab to be -modified incrementally. - -** VOCABize interface/method names - -One possibility is to make a list of all strings used by all known -RemoteInterfaces and all their methods, then send it at broker connection -time as the initial vocab map. A better one (maybe) is to somehow track what -we send and add a word to the vocab once we've sent it more than three -times. - -Maybe vocabize the pairs, as "ri/name1","ri/name2", etc, or maybe do them -separately. Should do some handwaving math to figure out which is better. - -* nail down some useful schema syntaxes - -This has two parts: parsing something like a __schema__ class attribute (see -the sketches in schema.xhtml) into a tree of FooConstraint objects, and -deciding how to retrieve schemas at runtime from things like the object being -serialized or the object being called from afar. To be most useful, the -syntax needs to mesh nicely (read "is identical to") things like formless and -(maybe?) atop or whatever has replaced the high-density highly-structured -save-to-disk scheme that twisted.world used to do. - -Some lingering questions in this area: - - When an object has a remotely-invokable method, where does the appropriate - MethodConstraint come from? Some possibilities: - - an attribute of the method itself: obj.method.__schema__ - - from inside a __schema__ attribute of the object's class - - from inside a __schema__ attribute of an Interface (which?) 
that the object - implements - - Likewise, when a caller holding a RemoteReference invokes a method on it, it - would be nice to enforce a schema on the arguments they are sending to the - far end ("be conservative in what you send"). Where should this schema come - from? It is likely that the sender only knows an Interface for their - RemoteReference. - - When PB determines that an object wants to be copied by value instead of by - reference (pb.Copyable subclass, Copyable(obj), schema says so), where - should it find a schema to define what exactly gets copied over? A class - attribute of the object's class would make sense: most objects would do - this, some could override jellyFor to get more control, and others could - override something else to push a new Slicer on the stack and do streaming - serialization. Whatever the approach, it needs to be paralleled by the - receiving side's unjellyableRegistry. - -* RemoteInterface instances should have an "RI-" prefix instead of "I-" - -DONE - -* merge my RemoteInterface syntax with zope.interface's - -I hacked up a syntax for how method definitions are parsed in -RemoteInterface objects. That syntax isn't compatible with the one -zope.interface uses for local methods, so I just delete them from the -attribute dictionary to avoid causing z.i indigestion. It would be nice if -they were compatible so I didn't have to do that. This basically translates -into identifying the nifty extra flags (like priority classes, no-response) -that we want on these methods and finding a z.i-compatible way to implement -them. It also means thinking of SOAP/XML-RPC schemas and having a syntax -that can represent everything at once. 
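The metaclass trick described above (pull the method stubs out of the class body before zope.interface chokes on them, and keep their parsed signatures around) can be sketched in miniature. This is a hypothetical illustration, not foolscap's actual code: `RemoteInterfaceMeta`, `__schemas__`, and `REMOTE_INTERFACE_REGISTRY` are invented names, and a real implementation records constraint trees rather than bare argument-name tuples.

```python
import inspect

# Invented registry name for illustration; foolscap's real registry and
# schema objects are more elaborate.
REMOTE_INTERFACE_REGISTRY = {}

class RemoteInterfaceMeta(type):
    def __new__(mcs, name, bases, ns):
        schemas = {}
        for attr, value in list(ns.items()):
            if callable(value) and not attr.startswith('_'):
                # stand-in "schema": just the argument names
                schemas[attr] = tuple(inspect.signature(value).parameters)
                del ns[attr]  # remove the stub before the class is built
        cls = super().__new__(mcs, name, bases, ns)
        cls.__schemas__ = schemas
        if name != 'RemoteInterface':
            # registration happens at import time, as a side effect of
            # completing the class definition
            REMOTE_INTERFACE_REGISTRY[name] = cls
        return cls

class RemoteInterface(metaclass=RemoteInterfaceMeta):
    pass

class RIFoo(RemoteInterface):
    def add(a, b): pass
    def bar(x): pass
```

Merely importing the module that defines RIFoo populates the registry, which is exactly the implicit-registration behavior weighed against explicit register calls earlier.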
- - -* use adapters to enable pass-by-reference or pass-by-value - -It should be possible to pass a reference with variable forms: - - rr.callRemote("foo", 1, Reference(obj)) - rr.callRemote("bar", 2, Copy(obj)) - -This should probably adapt the object to IReferenceable or ICopyable, which -are like ISliceable except they can pass the object by reference or by -value. The slicing process should be: - - look up the type() in a table: this handles all basic types - else adapt the object to ISliceable, use the result - else raise an Unsliceable exception - (and point the user to the docs on how to fix it) - -The adapter returned by IReferenceable or ICopyable should implement -ISliceable, so no further adaptation will be done. - -* remove 'copy' prefix from remotecopy banana type names? - - warner: did we ever finish our conversation on the usefulness of the -(copy foo blah) namespace rather than just (foo blah)? - glyph: no, I don't think we did - warner: do you still have (copy foo blah)? - glyph: yup - so far, it seems to make some things easier - glyph: the sender can subclass pb.Copyable and not write any new -code, while the receiver can write an Unslicer and do a registerRemoteCopy - glyph: instead of the sender writing a whole slicer and the receiver -registering at the top-level - warner: aah - glyph: although the fact that it's easier that way may be an artifact -of my sucky registration scheme - warner: so the advantage is in avoiding registration of each new -unslicer token? - warner: yes. 
I'm thinking that a metaclass will handily remove the -need for extra junk in the protocol ;) - well, the real reason is my phobia about namespace purity, of course - warner: That's what the dots are for - but ease of dispatch is also important - warner: I'm concerned about it because I consider my use of the same -idiom in the first version of PB to be a serious wart -* warner nods - I will put together a list of my reasoning - warner: I think it's likely that PB implementors in other languages -are going to want to introduce new standard "builtin" types; our "builtins" -shouldn't be limited to python's provided data structures - glyph: wait - ok - glyph: are you talking of banana types - glyph: or really PB - in which case (copy blah blah) is a non-builtin type, while -(type-foo) is a builtin type - warner: plus, our namespaces are already quite well separated, I can -tell you I will never be declaring new types outside of quotient.* and -twisted.* :) - moshez: this is mostly banana (or what used to be jelly, really) - warner: my inclination is to standardize by convention - warner: *.* is a non-builtin type, [~.] is a builtin - glyph: ? - sorry [^.]* - my regular expressions and shell globs are totally confused but you -know what I mean - moshez: yes - glyph: hrm - glyph: you're making crazy anime faces - glyph: why do we need any non-Python builtin types - moshez: because I want to destroy SOAP, and doing that means working -with people I don't like - moshez: outside of python - glyph: I meant, "what specific types" - I'd appreciate a blog on that - -* have Copyable/RemoteCopy default to __getstate__/__setstate__? - -At the moment, the default implementations of getStateToCopy() and -setCopyableState() get and set __dict__ directly. Should the default instead -be to call __getstate__() or __setstate__()? - -* make slicer/unslicers for pb.RemoteInterfaces - -exarkun's use case requires these Interfaces to be passable by reference -(i.e. by name). 
It would also be interesting to let them be passed (and -requested!) by value, so you can ask a remote peer exactly what their -objects will respond to (the method names, the argument values, the return -value). This also requires that constraints be serializable. - -do this, should be referenceable (round-trip should return the same object), -should use the same registration lookup that RemoteReference(interfacelist) -uses - -* investigate decref/Referenceable race - -Any object that includes some state when it is first sent across the wire -needs more thought. The far end could drop the last reference (at time t=1) -while a method is still pending that wants to send back the same object. If -the method finishes at time t=2 but the decref isn't received until t=3, the -object will be sent across the wire without the state, and the far end will -receive it for the "first" time without that associated state. - -This kind of conserve-bandwidth optimization may be a bad idea. Or there -might be a reasonable way to deal with it (maybe request the state if it -wasn't sent and the recipient needs it, and delay delivery of the object -until the state arrives). - -DONE, the RemoteReference is held until the decref has been acked. As long as -the methods are executed in-order, this will prevent the race. TODO: -third-party references (and other things that can cause out-of-order -execution) could mess this up. - -* sketch out how to implement glyph's crazy non-compressed sexpr encoding - -* consider a smaller scope for OPEN-counter reference numbers - -For newpb, we moved to implicit reference numbers (counting OPEN tags -instead of putting a number in the OPEN tag) because we didn't want to burn -so much bandwidth: it isn't feasible to predict whether your object will -need to be referenced in the future, so you always have to be prepared to -reference it, so we always burn the memory to keep track of them (generally -in a ScopedSlicer subclass). 
If we used explicit refids then we'd have to -burn the bandwidth too. - -The sorta-problem is that these numbers will grow without bound as long as -the connection remains open. After a few hours of sending 100-byte objects -over a 100MB connection, you'll hit 1G-references and will have to start -sending them as LONGINT tokens, which is annoying and slightly verbose (say -3 or 4 bytes of number instead of 1 or 2). You never keep track of that many -actual objects, because the references do not outlive their parent -ScopedSlicer. - -The fact that the references themselves are scoped to the ScopedSlicer -suggests that the reference numbers could be too. Each ScopedSlicer would -track the number of OPEN tokens emitted (actually the number of -slicerForObject calls made, except you'd want to use a different method to -make sure that children who return a Slicer themselves don't corrupt the -OPEN count). - -This requires careful synchronization between the ScopedSlicers on one end -and the ScopedUnslicers on the other. I suspect it would be slightly -fragile. - -One sorta-benefit would be that a somewhat human-readable sexpr-based -encoding would be even more human readable if the reference numbers stayed -small (you could visually correlate objects and references more easily). The -ScopedSlicer's open-parenthesis could be represented with a curly brace or -something, then the refNN number would refer to the NN'th left-paren from -the last left-brace. It would also make it clear that the recipient will not -care about objects outside that scope. - -* implement the FDSlicer - -Over a unix socket, you can pass fds. exarkun had a presentation at PyCon04 -describing the use of this to implement live application upgrade. I think -that we could make a simple FDSlicer to hide the complexity of the -out-of-band part of the communication. 
- -# sendmsg/recvmsg are assumed to come from a platform extension module -# (they are not in the stdlib of this era); imports of struct, socket, -# SCM_RIGHTS, and twisted's unix transport classes are likewise assumed. -class Server(unix.Server): - def sendFileDescriptors(self, fileno, data="Filler"): - """ - @param fileno: An iterable of the file descriptors to pass. - """ - payload = struct.pack("%di" % len(fileno), *fileno) - r = sendmsg(self.fileno(), data, 0, (socket.SOL_SOCKET, SCM_RIGHTS, payload)) - return r - -class Client(unix.Client): - def doRead(self): - if not self.connected: - return - try: - msg, flags, ancillary = recvmsg(self.fileno()) - except Exception: - log.msg('recvmsg():') - log.err() - else: - buf = ancillary[0][2] - fds = [] - while buf: - fd, buf = buf[:4], buf[4:] - fds.append(struct.unpack("i", fd)[0]) - try: - self.protocol.fileDescriptorsReceived(fds) - except Exception: - log.msg('protocol.fileDescriptorsReceived') - log.err() - return unix.Client.doRead(self) - -* implement AsyncDeferred returns - -dash wanted to implement a TransferrableReference object with a scheme that -would require creating a new connection (to a third-party Broker) during -ReferenceUnslicer.receiveClose. This would cause the object deserialization -to be asynchronous. - -At the moment, Unslicers can return a Deferred from their receiveClose -method. This is used by immutable containers (like tuples) to indicate that -their object cannot be created yet. Other containers know to watch for these -Deferreds and add a callback which will update their own entries -appropriately. The implicit requirement is that all these Deferreds fire -before the top-level parent object (usually a CallUnslicer) finishes. This -allows for circular references involving immutable containers to be resolved -into the final object graph before the target method is invoked. - -To accommodate Deferreds which will fire at arbitrary points in the future, -it would be useful to create a marker subclass named AsyncDeferred.
If an -unslicer returns such an object, the container parent starts by treating it -like a regular Deferred, but it also knows that its object is not -"complete", and therefore returns an AsyncDeferred of its own. When the -child completes, the parent can complete, etc. The difference between the -two types: Deferred means that the object will be complete before the -top-level parent is finished, AsyncDeferred makes no claims about when the -object will be finished. - -CallUnslicer would know that if any of its arguments are Deferreds or -AsyncDeferreds then it needs to hold off on the broker.doCall until all those -Deferreds have fired. Top-level objects are not required to differentiate -between the two types, because they do not return an object to an enclosing -parent (the CallUnslicer is a child of the RootUnslicer, but it always -returns None). - -Other issues: we'll need a schema to let you say whether you'll accept these -late-bound objects or not (because if you do accept them, you won't be able -to impose the same sorts of type-checks as you would on immediate objects). -Also this will impact the in-order-invocation promises of PB method calls, -so we may need to implement the "it is ok to run this asynchronously" flag -first, then require that TransferrableReference objects are only passed to -methods with the flag set. - -Also, it may not be necessary to have a marker subclass of Deferred: perhaps -_any_ Deferred which arrives from a child is an indication that the object -will not be available until an unknown time in the future, and obligates the -parent to return another Deferred upwards (even though their object could be -created synchronously). Or, it might be better to implement this some other -way, perhaps separating "here is my object" from "here is a Deferred that -will fire when my object is complete", like a call to -parent.addDependency(self.deferred) or something.
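The two-flavor rule can be illustrated with toy stand-ins (these are not Twisted's classes; `Deferred` here is a minimal callback holder, and `unslice_list` is an invented stand-in for a container unslicer): a container with a pending child must itself return a placeholder, and that placeholder is an AsyncDeferred whenever any pending child is one.

```python
class Deferred:
    """Minimal stand-in for Twisted's Deferred (illustration only)."""
    def __init__(self):
        self.callbacks, self.result, self.fired = [], None, False
    def addCallback(self, f):
        if self.fired:
            f(self.result)
        else:
            self.callbacks.append(f)
        return self
    def callback(self, result):
        self.fired, self.result = True, result
        for f in self.callbacks:
            f(result)

class AsyncDeferred(Deferred):
    """Marker: the value may arrive at an arbitrary point in the future."""

def unslice_list(children):
    """Toy container unslicer: resolve pending children, propagating the
    strongest 'not ready yet' marker upward."""
    pending = [c for c in children if isinstance(c, Deferred)]
    if not pending:
        return list(children)
    out = AsyncDeferred() if any(isinstance(c, AsyncDeferred)
                                 for c in pending) else Deferred()
    obj = list(children)
    remaining = [len(pending)]
    for c in pending:
        idx = obj.index(c)
        def fill(value, idx=idx):
            obj[idx] = value
            remaining[0] -= 1
            if remaining[0] == 0:
                out.callback(obj)  # the container is now complete
        c.addCallback(fill)
    return out
```

A CallUnslicer built on this rule would hold off on broker.doCall until the returned placeholder fires.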
- -DONE, needs testing - -* TransferrableReference - -class MyThing(pb.Referenceable): pass -r1 = MyThing() -r2 = Facet(r1) -g1 = Global(r1) -class MyGlobalThing(pb.GloballyReferenceable): pass -g2 = MyGlobalThing() -g3 = Facet(g2) - -broker.setLocation("pb://hostname.com:8044") - -rem.callRemote("m1", r1) # limited to just this connection -rem.callRemote("m2", Global(r1)) # can be published -g3 = Global(r1) -rem.callRemote("m3", g1) # can also be published.. -g1.revoke() # but since we remember it, it can be revoked too -g1.restrict() # and, as a Facet, we can revoke some functionality but not all - -rem.callRemote("m1", g2) # can be published - -E tarball: jsrc/net/captp/tables/NearGiftTable - -issues: - 1: when A sends a reference on B to C, C's messages to the object - referenced must arrive after any messages A sent before the reference forks - - in particular, if A does: - B.callRemote("1", hugestring) - B.callRemote("2_makeYourSelfSecure", args) - C.callRemote("3_transfer", B) - - and C does B.callRemote("4_breakIntoYou") as soon as it gets the reference, - then the A->B queue looks like (1, 2), and the A->C queue looks like (3). - The transfer message can be fast, and the resulting 4 message could be - delivered to B before the A->B queue manages to deliver 2. 
- - 2: an object which gets passed through multiple external brokers and - eventually comes home must be recognized as a local object - - 3: Copyables that contain RemoteReferences must be passable between hosts - -E cannot do all three of these at once -http://www.erights.org/elib/distrib/captp/WormholeOp.html - -I think that it's ok to tell people who want this guarantee to explicitly -serialize it like this: - - B.callRemote("1", hugestring) - d = B.callRemote("2_makeYourSelfSecure", args) - d.addCallback(lambda res: C.callRemote("3_transfer", B)) - -Note that E might not require that method calls even have a return value, so -they might not have had a convenient way to express this enforced -serialization. - -** more thoughts - -To enforce the partial-ordering, you could do the equivalent of: - A: - B.callRemote("1", hugestring) - B.callRemote("2_makeYourSelfSecure", args) - nonce = makeNonce() - B.callRemote("makeYourSelfAvailableAs", nonce) - C.callRemote("3_transfer", (nonce, B.name)) - C: - B.callRemote("4_breakIntoYou") - -C uses the nonce when it connects to B. It knows the name of the reference, -so it can compare it against some other reference to the same thing, but it -can't actually use that name alone to get access. - -When the connection request arrives at B, it sees B.name (which is also -unguessable), so that gives it reason to believe that it should queue C's -request (that it isn't just a DoS attack). It queues it until it sees A's -request to makeYourSelfAvailableAs with the matching nonce. Once that -happens, it can provide the reference back to C. - -This implies that C won't be able to send *any* messages to B until that -handshake has completed. It might be desirable to avoid the extra round-trip -this would require.
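The queue-until-the-nonce-matches behavior described above might look like this in miniature (`GiftTable` and its method names are invented for illustration; the real exchange would ride on PB messages):

```python
class GiftTable:
    """Toy sketch of B's gift bookkeeping: redemption requests from C are
    queued until A's makeYourSelfAvailableAs arrives with a matching nonce."""

    def __init__(self):
        self.available = {}   # nonce -> referenced object
        self.waiting = {}     # nonce -> callbacks queued by early redeemers

    def make_available(self, nonce, obj):
        # A -> B: "makeYourSelfAvailableAs(nonce)"
        self.available[nonce] = obj
        for cb in self.waiting.pop(nonce, []):
            cb(obj)

    def redeem(self, nonce, cb):
        # C -> B: connection request carrying the nonce
        if nonce in self.available:
            cb(self.available[nonce])
        else:
            self.waiting.setdefault(nonce, []).append(cb)
```

Because C's callback only runs once the matching nonce arrives, C cannot send any messages through the reference until the handshake completes, which is exactly the extra round-trip the paragraph above worries about.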
- -** more thoughts - - url = PBServerFactory.registerReference(ref, name=None) - creates human-readable URLs or random identifiers - -the factory keeps a bidirectional mapping of names and Referenceables - -when a Referenceable gets serialized, if the factory's table doesn't have a -name for it, the factory creates a random one. This entry in the table is -kept alive by two things: - - a live reference by one of the factory's Brokers - an entry in a Broker's "gift table" - -When a RemoteReference gets serialized (and it doesn't point back to the -receiving Broker, and thus get turned into a your-reference sequence), - - A->C: "I'm going to send somebody a reference to you, incref your - gift table", C->A: roger that, here's a gift nonce - A->B: "here's Carol's reference: URL plus nonce" - B->C: "I want a liveref to your 'Carol' object, here's my ticket - (nonce)", C->B: "ok, ticket redeemed, here's your liveref" - -once more, without nonces: - A->C: "I'm going to send somebody a reference to you, incref your - gift table", C->A: roger that - A->B: "here's Carol's reference: URL" - B->C: "I want a liveref to your 'Carol' object", C->B: "ok, here's your - liveref" - -really: - on A: c.vat.callRemote("giftYourReference", c).addCallback(step2) - c is serialized as (your-reference, clid) - on C: vat.remote_giftYourReference(which): self.table[which] += 1; return - on A: step2: b.introduce(c) - c is serialized as (their-reference, url) - on B: deserialization sees their-reference - newvat = makeConnection(URL) - newvat.callRemote("redeemGift", URL).addCallback(step3) - on C: vat.remote_redeemGift(URL): - ref = self.urls[URL]; self.table[ref] -= 1; return ref - ref is serialized as (my-reference, clid) - on B: step3(c): b.remote_introduce(c) - -problem: if alice sends a thousand copies, that means these 5 messages are -each sent a thousand times. The makeConnection is cached, but the rest are -not.
We don't remember that we've already made this gift before, that the -other end probably still has it. Hm, but we also don't know that they didn't -lose it already. - -** ok, a plan: - -concern 1: objects must be kept alive as long as there is a RemoteReference -to them. - -concern 2: we should be able to tell when an object is being sent for the -first time, to add metadata (interface list, public URL) that would be -expensive to add to every occurrence. - - each (my-reference) sent over the wire increases the broker's refcount on - both ends. - - the receiving Broker retains a weakref to the RemoteReference, and retains a - copy of the metadata necessary to create it in the clid table (basically the - entire contents of the RemoteReference). When the weakref expires, it marks - the clid entry as "pending-free", and sends a decref(clid,N) to the other - Broker. The decref is actually sent with broker.callRemote("decref", clid, - N), so it can be acked. - - the sending broker gets the decref and reduces its count by N. If another - reference was sent recently, this count may not drop all the way to zero, - indicating there is a reference "in flight" and the far end should be ready - to deal with it (by making a new RemoteReference with the same properties as - the old one). If the resulting count != 0, it returns False to indicate that - this was not the last decref message for the clid. If the count == 0, it - returns True, since it is the last decref, and removes the entry from its - table. Once remote_decref returns True, the clid is retired. - - the receiving broker receives the ack from the decref. If the ack says - last==True, the clid table entry is freed. If it says last==False, then - there should have been another (my-reference) received before the ack, so - the refcount should be non-zero.
- - message sequence: - - A-> : (my-reference clid metadata) [A.myrefs[clid].refcount++ = 1] - A-> : (my-reference clid) [A.myrefs[clid].refcount++ = 2] - ->B: receives my-ref, creates RR, B.yourrefs[clid].refcount++ = 1 - ->B: receives my-ref, B.yourrefs[clid].refcount++ = 2 - : time passes, B sees the reference go away - <-B: d=brokerA.callRemote("decref", clid, B.yourrefs[clid].refcount) - B.yourrefs[clid].refcount = 0; d.addCallback(B.checkref, clid) - A-> : (my-reference clid) [A.myrefs[clid].refcount++ = 3] - A<- : receives decref, A.myrefs[clid].refcount -= 2, now =1, returns False - ->B: receives my-ref, re-creates RR, B.yourrefs[clid].refcount++ = 1 - ->B: receives ack(False), B.checkref asserts refcount != 0 - : time passes, B sees the reference go away again - <-B: d=brokerA.callRemote("decref", clid, B.yourrefs[clid].refcount) - B.yourrefs[clid].refcount = 0; d.addCallback(B.checkref, clid) - A<- : receives decref, A.myrefs[clid].refcount -= 1, now =0, returns True - del A.myrefs[clid] - ->B: receives ack(True), B.checkref asserts refcount==0 - del B.yourrefs[clid] - -B retains the RemoteReference data until it receives confirmation from A. -Therefore whenever A sends a reference that doesn't already exist in the clid -table, it is sending it to a B that doesn't know about that reference, so it -needs to send the metadata. - -concern 3: in the three-party exchange, Carol must be kept alive until Bob -has established a reference to her, even if Alice drops her carol-reference -immediately after sending the introduction to Bob. - -(my-reference, clid, [interfaces, public URL]) -(your-reference, clid) -(their-reference, URL) - -Serializing a their-reference causes an entry to be placed in the Broker's -.theirrefs[URL] table. Each time a their-reference is sent, the entry's -refcount is incremented. - -Receiving a their-reference may initiate a PB connection to the target, -followed by a getNamedReference request. 
When this completes (or if the -reference was already available), the recipient sends a decgift message to -the sender. This message includes a count, so multiple instances of the same -gift can be acked as a group. - -The .theirrefs entry retains a reference to the sender's RemoteReference, so -it cannot go away until the gift is acked. - -DONE, gifts are implemented, we punted on partial-ordering - -*** security, DoS - -Bob can force Alice to hold on to a reference to Carol, as long as both -connections are open, by never acknowledging the gift. - -Alice can cause Bob to open up TCP connections to arbitrary hosts and ports, -by sending third-party references to him, although the only protocol those -connections will speak is PB. - -Using yURLs and StartTLS should be enough to secure and authenticate the -connections. - -*** partial-ordering - -If we need it, the gift (their-reference message) can include a nonce, Alice -sends a makeYourSelfAvailableAs message to Carol with the nonce, and Bob must -do a new getReference with the nonce. - -Kragen came up with a good use-case for partial-ordering: - A: - B.callRemote("updateDocument", bigDocument) - C.callRemote("pleaseReviewLatest", B) - C: - B.callRemote("getLatestDocument") - - -* PBService / Tub - -Really, PB wants to be a Service, since third-party references mean it will -need to make connections to arbitrary targets, and it may want to re-use -those connections. - - s = pb.PBService() - s.listenOn(strport) # provides URL base - swissURL = s.registerReference(ref) # creates unguessable name - publicURL = s.registerReference(ref, "name") # human-readable name - s.unregister(URL) # also revokes all clids - s.unregisterReference(ref) - d = s.getReference(URL) # Deferred which fires with the RemoteReference - d = s.shutdown() # close all servers and client connections - -DONE, this makes things quite clean - -* promise pipelining - -Even without third-party references, we can do E-style promise pipelining. 
- - hmm. subclass of Deferred that represents a Promise, can be - serialized if it's being sent to the same broker as the RemoteReference it was - generated for - warner: hmmm. how's that help us? - oh, pipelining? - maybe a flag on the callRemote to say that "yeah, I want a - DeferredPromise out of you, but I'm only going to include it as an argument to - another method call I'm sending you, so don't bother sending *me* the result" - aah - yeah - that sounds like a reasonable approach - that would actually work - dash: do you know if E makes any attempt to handle >2 vats in their - pipelining implementation? seems to me it could turn into a large network - optimization problem pretty quickly - warner: Mmm - hmm - I do not think you have to - so you have: t1=a.callRemote("foo",args1); - t2=t1.callRemote("bar",args2), where callRemote returns a Promise, which is a - special kind of Deferred that remembers the Broker its answer will eventually - come from. If args2 consists of entirely immediate things (no Promises) or - Promises that are coming from the same broker as t1 uses, then the "bar" call - is eligible for pipelining and gets sent to the remote broker - in the resulting newpb banana sequence, the clid of the target method - is replaced by another kind of clid, which means "the answer you're going to - send to method call #N", where N comes from t1 - mmm yep - using that new I-can't-unserialize-this-yet hook we added, the second - call sequence doesn't finish unserializing until the first call finishes and - sends the answer. Sending answer #N fires the hook's deferred. 
- that triggers the invocation of the second method - yay - hm, of course that totally blows away the idea of using a Constraint - on the arguments to the second method - because you don't even know what the object is until after the - arguments have arrived - but - well - the first method has a schema, which includes a return constraint - okay you can't fail synchronously - so you *can* assert that, whatever the object will be, it obeys that - constraint - but you can return a failure like everybody else - and since the constraint specifies an Interface, then the Interface - plus mehtod name is enough to come up with an argument constraint - so you can still enforce one - this is kind of cool - the big advantage of pipelining is that you can have a lot of - composable primitives on your remote interfaces rather than having to smush - them together into things that are efficient to call remotely - hm, yeah, as long as all the arguments are either immediate or - reference something on the recipient - as soon as a third party enters the equation, you have to decide - whether to wait for the arguments to resolve locally or if it might be faster - to throw them at someone else - that's where the network-optimization thing I mentioned before comes - into play - mmm - you send messages to A and to B, once you get both results you want - to send the pair to C to do something with them - spin me an example scenario - Hmm - if all three are close to each other, and you're far from all of - them, it makes more sense to tell C about A and B - how _does_ E handle that - or maybe tell A and B about C, tell them "when you get done, send - your results to C, who will be waiting for them" - warner: yeah, i think that the right thing to do is to wait for them to - resolve locally - assuming that C can talk to A and B is bad - no it isn't - well, depends on whether you live in this world or not :) - warner: if you want other behaviour then you should have to set it up - explicitly, 
i think - I'm not even sure how you would describe that sort of thing. It'd be - like routing protocols, you assign a cost to each link and hope some magical - omniscient entity can pick an optimal solution - -** revealing intentions - - Now suppose I say "B.your_fired(C.revoke_his_rights())", or such. - A->C: sell all my stock. A->B: declare bankruptcy - -If B has access to C, and the promises are pipelined, then B has a window -during which they know something's about to happen, and they still have full -access to C, so they can do evil. - -Zooko tried to explain the concern to MarkM years ago, but didn't have a -clear example of the problem. The thing is, B can do evil all the time, -you're just trying to revoke their capability *before* they get wind of your -intentions. Keeping intentions secret is hard, much harder than limiting -someone's capabilities. It's kind of the trailing edge of the capability, as -opposed to the leading edge. - -Zooko feels the language needs clear support for expressing how the -synchronization needs to take place, and which domain it needs to happen in. - -* web-calculus integration - -Tyler pointed out that it is vital for a node to be able to grant limited -access to some held object. Specifically, Alice may want to give Bob a -reference not to Carol as a whole, but to just a specific Carol.remote_foo -method (and not to any other methods that Alice might be allowed to invoke). -I had been thinking of using RemoteInterfaces to indicate method subsets, -something like this: - - bob.callRemote("introduce", Facet(self, RIMinimal)) - -but Tyler thinks that this is too coarse-grained and not likely to encourage -the right kinds of security decisions. In his web-calculus, recipients can -grant third-parties access to individual bound methods. 
- - bob.callRemote("introduce", carol.getMethod("howdy")) - -If I understand it correctly, his approach makes Referenceables into a -copy-by-value object that is represented by a dictionary which maps method -names to these RemoteMethod objects, so there is no actual -callRemote(methname) method. Instead you do something like: - - rr = tub.getReference(url) - d = rr['introduce'].call(args) - -These RemoteMethod objects are top-level, so unguessable URLs must be -generated for them when they are sent, and they must be reference-counted. It -must not be possible to get from the bound method to the (unrestricted) -referenced object. - -TODO: how does the web-calculus maintain reference counts for these? It feels -like there would be an awful lot of messages being thrown around. - -To implement this, we'll need: - - banana sequences for bound methods - ('my-method', clid, url) - ('your-method', clid) - ('their-method', url, RI+methname?) - syntax to carve a single method out of a local Referenceable - A: self.doFoo (only if we get rid of remote_) - B: self.remote_doFoo - C: self.getMethod("doFoo") - D: self.getMethod(RIFoo['doFoo']) - leaning towards C or D - syntax to carve a single method out of a RemoteReference - A: rr.doFoo - B: rr.getMethod('doFoo') - C: rr.getMethod(RIFoo['doFoo']) - D: rr['doFoo'] - E: rr[RIFoo['doFoo']] - leaning towards B or C - decide whether to do getMethod early or late - early means ('my-reference') includes a big dict of my-method values - and a whole bunch of DECREFs when that dict goes away - late means there is a remote_tub.getMethod(your-ref, methname) call - and an extra round-trip to retrieve them - dash thinks late is better - -We could say that the 'my-reference' sequence for any RemoteInterface-enabled -Referenceable will include a dictionary of bound methods. The receiving end -will just stash the whole thing. 
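A minimal sketch of the bound-method facet idea (get_method and RemoteMethod are hypothetical names from the notes above; the real web-calculus wiring, URL generation, and reference counting are omitted):

```python
class RemoteMethod:
    """Wraps exactly one bound method of a referenceable. Holding a
    RemoteMethod grants no access to the object's other methods."""
    def __init__(self, obj, methname):
        # capture only the single bound method, not the whole object
        self._call = getattr(obj, "remote_" + methname)

    def call(self, *args, **kwargs):
        return self._call(*args, **kwargs)

def get_method(obj, methname):
    return RemoteMethod(obj, methname)

class Carol:
    def remote_howdy(self, name):
        return "howdy, " + name
    def remote_revoke(self):
        return "not reachable through the howdy facet"
```

Bob would then receive something like get_method(carol, "howdy"), which he can .call() but cannot widen into full access to carol.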
- -* do implicit "doFoo" -> RIFoo["doFoo"] conversion - -I want rr.callRemote("doFoo", args) to take advantage of a RemoteInterface, -if one is available. RemoteInterfaces aren't supposed to be overlapping (at -least not among RemoteInterfaces that are shared by a single Referenceable), -so there shouldn't be any ambiguity. If there is, we can raise an error. - -* accept Deferreds as arguments? - - bob.callRemote("introduce", target=self.tub.getReference(pburl)) - or - bob.callRemote("introduce", carol.getMethod("doFoo")) - instead of - carol.getMethod("doFoo").addCallback(lambda r: bob.callRemote("introduce", r)) - -If one of the top-level arguments to callRemote is a Deferred, don't send the -method request until all the arguments resolve. If any of the arguments -errback, the callRemote will fail with some new exception (that can contain a -reference to the argument's exception). - -however, this would mean the method would be invoked out-of-order w.r.t. an -immediately-following bob.callRemote - -put this off until we get some actual experience. - -* batch decrefs? - -If we implement the copy-by-value Referenceable idea, then a single gc may -result in dozens of simultaneous decrefs. It would be nice to reduce the -traffic generated by that. - -* promise pipelining - -Promise(Deferred).__getattr__ - -DoS prevention techniques in CapIDL (MarkM) - -pb://key@ip,host,[ipv6],localhost,[/unix]/swissnumber -tubs for lifetime management -separate listener object, share tubs between listeners - distinguish by key number - - actually, why bother with separate keys? Why allow the outside world to - distinguish between these sub-Tubs? Use them purely for lifetime management, - not security properties. That means a name->published-object table for each - SubTub, maybe a hierarchy of them, and the parent-most Tub gets the - Listeners. Incoming getReferenceByURL requests require a lookup in all Tubs - that descend from the one attached to that listener. 
- -So one decision is whether to have implicitly-published objects have a name -that lasts forever (well, until the Tub is destroyed), or if they should be -reference-counted. If they are reference counted, then outstanding Gifts need -to maintain a reference, and the gift must be turned into a live -RemoteReference right away. It has bearing on how/if we implement SturdyRefs, -so I need to read more about them in the E docs. - -Hrm, and creating new Tubs from within a remote_foo method.. to make that -useful, you'd need to have a way to ask for the Tub through which you were -being invoked. hrm. - -* creating new Tubs - -Tyler suggests using Tubs for namespace management. Tubs can share TCP -listening ports, but MarkS recommends giving them all separate keys (which -means separate SSL sessions, so separate TCP connections). Bill Frantz -discourages using a hierarchy of Tubs, says it's not the sort of thing you -want to be locked into. - -That means I'll need a separate Listener object, where the rule is that the -last Tub to be stopped makes the Listener stop too.. probably abuse the -Service interface in some wacky way to pull this off. - -Creating a new Tub.. how to conveniently create it with the same Listeners as -the current one? If the method that's creating the Tub is receiving a -reference, the Tub can be an attribute of the inbound RemoteReference. If -not, that's trickier.. the _tub= argument may still be a useful way to go. -Once you've got a source tub, then tub.newTub() should create a new one with -the same Listeners as the source (but otherwise unassociated with it). - -Once you have the new Tub, registering an object in it should return -something that can be directly serialized into a gift. 
- -class Target(pb.Referenceable): - def remote_startGame(self, player_black, player_white): - tub = player_black.tub.newTub() - game = self.createGame() - gameref = tub.register(game) - game.setPlayer("black", tub.something(player_black)) - game.setPlayer("white", tub.something(player_white)) - return gameref - -Hmm. So, create a SturdyRef class, which remembers the tubid (key), list of -location hints, and object name. These have a url() method that renders out a -URL string, and a compare method which compares the tubid and object name but -ignores the location hints. Serializing a SturdyRef creates a their-reference -sequence. Tub.register takes an object (and maybe a name) and returns a -SturdyRef. Tub.getReference takes either a URL or a SturdyRef. -RemoteReferences should have a .getSturdyRef method. - -Actually, I think SturdyRefs should be serialized as Copyables, and create -SturdyRefs on the other side. The new-tub sequence should be: - - create new tub, using the Listener from an existing tub - register the objects in the new tub, obtaining a SturdyRef - send/return SendLiveRef(sturdyref) to the far side - SendLiveRef is a wrapper that causes a their-reference sequence to be sent. - The alternative is to obtain an actual live reference (via - player_black.tub.getReference(sturdyref) first), then send that, but it's - kind of a waste if you don't actually want to use the liveref yourself. - -Note that it becomes necessary to provide for local references here: ones in -different Tubs which happen to share a Listener. These can use real TCP -connections (unless the Listener hint is only valid from the outside world). -It might be possible to use some tricks cut out some of the network overhead, -but I suspect there are reasons why you wouldn't actually want to do that. 
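The SturdyRef described in these notes might look like the following (a hedged sketch of the proposal, not foolscap's eventual implementation; the URL layout follows the pb://key@hints/name form mentioned earlier):

```python
class SturdyRef:
    """Remembers the tubid, a list of location hints, and an object name.
    Identity is (tubid, name); the hints are advisory and are ignored
    when comparing two refs."""
    def __init__(self, tubid, location_hints, name):
        self.tubid = tubid
        self.location_hints = list(location_hints)
        self.name = name

    def url(self):
        # render back out to a URL string
        return "pb://%s@%s/%s" % (self.tubid,
                                  ",".join(self.location_hints),
                                  self.name)

    def __eq__(self, other):
        # compare tubid and object name, ignore the location hints
        return (self.tubid, self.name) == (other.tubid, other.name)

    def __hash__(self):
        return hash((self.tubid, self.name))
```

Two refs with the same tubid and name but different hints compare equal, which is the property the compare method above is meant to provide.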
diff --git a/src/foolscap/doc/use-cases.txt b/src/foolscap/doc/use-cases.txt deleted file mode 100644 index 98484ce1..00000000 --- a/src/foolscap/doc/use-cases.txt +++ /dev/null @@ -1,115 +0,0 @@ -This file contains a collection of pb wishlist items, things it would be nice -to have in newpb. - - -* Log in, specifying desired interfaces - -The server can provide several different interfaces, each of which inherit from -pb.IPerspective. The client can specify which of these interfaces that it -desires. - -This change requires Jellyable interfaces, which in turn requires being able to -"register jelliers sanely" (exarkun 2004-05-10). - -An example, in oldpb lingo: - - # client - factory = PBClientFactory() - reactor.connectTCP('localhost', portNum, factory) - d = factory.login(creds, mind, IBusiness) # <-- API change - d.addCallbacks(connected, disaster) - - # server - class IBusiness(pb.IPerspective): - def perspective_foo(self, bar): - "Does something" - - class Business(pb.Avatar): - __implements__ = (IBusinessInterface, pb.Avatar.__implements__) - def perspective_foo(self, bar): - return bar - - class Finance(pb.Avatar): - def perspective_cash(self): - """do cash""" - - class BizRealm: - __implements__ = portal.IRealm - def requestAvatar(self, avatarId, mind, *interfaces): - if IBusiness in interfaces: - return IBusiness, Business(avatarId, mind), lambda : None - elif pb.IPerspective in interfaces: - return pb.IPerspective, Finance(avatarId), lambda : None - else: - raise NotImplementedError - - -* data schemas in Zope3 - -http://svn.zope.org/Zope3/trunk/src/zope/schema/README.txt?rev=13888&view=auto - -* objects that are both Referenceable and Copyable - -warner - -I have a music player which can be controlled remotely via PB. There are -server-side objects (corresponding to songs or albums) which contain both -public attributes (song name, artist name) and private state (pathname to the -local .ogg file, whether or not it is present in the cache). 
-
-These objects may be sent to the remote end (the client) in response to
-either a "what are you playing right now" query, or a "tell me about all of
-your music" query. When they are sent down, the remote end should get an
-object which contains the public attributes.
-
-If the remote end sends that object back (in a "please play this song"
-method), the local end (the server) should get back a reference to the
-original Song or Album object.
-
-This requires that the original object be serialized with both some public
-state and a reference ID. The remote end must create a representation that
-contains both pieces. That representation will be serialized with just the
-reference ID.
-
-Ideally this should be as easy to express as marking the source object as
-implementing both pb.Referenceable and pb.Copyable, and the receiving object
-as both a pb.RemoteReference and a pb.RemoteCopy.
-
-Without this capability, my workaround is to manually assign a sequential
-integer to each of these referenceable objects, then send a dict of the
-public attributes and the index number. The recipient sends back the whole
-dict, and the server end only pays attention to the .index attribute.
-
-Note that I don't care about doing .callRemote on the remote object. This is
-a case where it might make sense to split pb.Referenceable into two pieces,
-one that talks about referenceability and the other that talks about
-callability (pb.Callable?).
-
-* both Callable and Copyable
-
-buildbot: remote version of BuildStatus. When a build starts, the
-buildmaster sends the current build to all status clients. It would be handy
-for them to get some static data (name, number, reason, changes) about the
-build at that time, plus a reference that can be used to query it again
-later (through callRemote). This can be done manually, but requires knowing
-all the places where a BuildStatus might be sent over the wire and wrapping
-them.
I suppose it could be done with a Slicer/Unslicer pair:
-
-class CCSlicer:
-    def slice(self, obj):
-        yield obj
-        yield obj.getName()
-        yield obj.getNumber()
-        yield obj.getReason()
-        yield obj.getChanges()
-
-class CCUnslicer:
-    def receiveChild(self, obj):
-        if self.state == 0: self.obj = makeRemoteReference(obj); self.state += 1; return
-        if self.state == 1: self.obj.name = obj; self.state += 1; return
-        if self.state == 2: self.obj.number = obj; self.state += 1; return
-        if self.state == 3: self.obj.reason = obj; self.state += 1; return
-        if self.state == 4: self.obj.changes = obj; self.state += 1; return
-
-plus some glue to make sure the object gets added to the per-Broker
-references list: this makes sure the object is not sent (in full) twice, and
-that the receiving side keeps a reference to the slaved version.
-
diff --git a/src/foolscap/doc/using-foolscap.xhtml b/src/foolscap/doc/using-foolscap.xhtml
deleted file mode 100644
index ade5a60d..00000000
--- a/src/foolscap/doc/using-foolscap.xhtml
+++ /dev/null
@@ -1,978 +0,0 @@
- - -Introduction to Foolscap - - - - -

Introduction to Foolscap

- -

Introduction

- -

Suppose you find yourself in control of both ends of the wire: you have -two programs that need to talk to each other, and you get to use any protocol -you want. If you can think of your problem in terms of objects that need to -make method calls on each other, then chances are good that you can use the -Foolscap protocol rather than trying to shoehorn your needs into something -like HTTP, or implementing yet another RPC mechanism.

- -

Foolscap is based upon a few central concepts:

- -
    - -
  • serialization: taking fairly arbitrary objects and types, - turning them into a chunk of bytes, sending them over a wire, then - reconstituting them on the other end. By keeping careful track of object - ids, the serialized objects can contain references to other objects and the - remote copy will still be useful.
  • - -
  • remote method calls: doing something to a local proxy and - causing a method to get run on a distant object. The local proxy is called - a RemoteReference, - and you do something by running its .callRemote method. - The distant object is called a Referenceable, and it has methods like - remote_foo that will be invoked.
  • - -
- -

Foolscap is the descendant of Perspective Broker (which lived in the -twisted.spread package). For many years it was known as "newpb". A lot of the -API still has the name "PB" in it somewhere. These will probably go away -sooner or later.

- -

A "foolscap" is a size of paper, probably measuring 17 by 13.5 inches. A -twisted foolscap of paper makes a good fool's cap. Also, "cap" makes me think -of capabilities, and Foolscap is a protocol to implement a distributed -object-capabilities model in python.

- - -

Getting Started

- -

Any Foolscap application has at least two sides: one which hosts a remotely-callable object, and another which calls (remotely) the methods of that object. We'll start with a simple example that demonstrates both ends. Later, we'll add more features like RemoteInterface declarations, and transferring object references.

- -

The most common way to make an object with remotely-callable methods is to subclass Referenceable. Let's create a simple server which does basic arithmetic. You might use such a service to perform difficult mathematical operations, like addition, on a remote machine which is faster and more capable than your own (although really, if your client machine is too slow to perform this kind of math, it is probably too slow to run python or use a network, so you should seriously consider a hardware upgrade).

- -
-from foolscap import Referenceable
-
-class MathServer(Referenceable):
-    def remote_add(self, a, b):
-        return a+b
-    def remote_subtract(self, a, b):
-        return a-b
-    def remote_sum(self, args):
-        total = 0
-        for a in args: total += a
-        return total
-
-myserver = MathServer()
-
- -

On the other end of the wire (which you might call the client -side), the code will have a RemoteReference to this object. The -RemoteReference has a method named callRemote which you -will use to invoke the method. It always returns a Deferred, which will fire -with the result of the method. Assuming you've already acquired the -RemoteReference, you would invoke the method like this:

- -
-def gotAnswer(result):
-    print "result is", result
-def gotError(err):
-    print "error:", err
-d = remote.callRemote("add", 1, 2)
-d.addCallbacks(gotAnswer, gotError)
-
- -

Ok, now how do you acquire that RemoteReference? How do you make the Referenceable available to the outside world? For this, we'll need to discuss the Tub, and the concept of a FURL (a Foolscap URL).

- -

Tubs: The Foolscap Service

- -

The Tub is the container that you use to publish Referenceables, and is the middle-man you use to access Referenceables on other systems. It is known as the Tub, since it provides similar naming and identification properties as the E language's Vat (although Tubs do not provide quite the same insulation against other objects as E's Vats do; in this sense, Tubs are leaky Vats). If you want to make a Referenceable available to the world, you create a Tub, tell it to listen on a TCP port, and then register the Referenceable with it under a name of your choosing. If you want to access a remote Referenceable, you create a Tub and ask it to acquire a RemoteReference using that same name.

- -

The Tub is a Twisted Service subclass, so you use it in -the same way: once you've created one, you attach it to a parent Service or -Application object. Once the top-level Application object has been started, -the Tub will start listening on any network ports you've requested. When the -Tub is shut down, it will stop listening and drop any connections it had -established since last startup. If you have no parent to attach it to, you -can use startService and stopService on the Tub -directly.

- -

Note that no network activity will occur until the Tub's -startService method has been called. This means that any -getReference or connectTo requests that occur -before the Tub is started will be deferred until startup. If the program -forgets to start the Tub, these requests will never be serviced. A message to -this effect is added to the twistd.log file to help developers discover this -kind of problem.

- -

Making your Tub remotely accessible

- -

To make any of your Referenceables available, you must make your Tub available. There are three parts: give it an identity, have it listen on a port, and tell it the protocol/hostname/portnumber at which that port is accessible to the outside world.

- -

In general, the Tub will generate its own identity, the TubID, by -creating an SSL public key certificate and hashing it into a suitably-long -random-looking string. This is the primary identifier of the Tub: everything -else is just a location hint that suggests how the Tub might be -reached. The fact that the TubID is tied to the public key allows FURLs to -be secure references (meaning that no third party can cause you to -connect to the wrong reference). You can also create a Tub with a -pre-existing certificate, which is how Tubs can retain a persistent identity -over multiple executions.

- -

You can also create an UnauthenticatedTub, which has an empty -TubID. Hosting and connecting to unauthenticated Tubs do not require the -pyOpenSSL library, but do not provide privacy, authentication, connection -redirection, or shared listening ports. The FURLs that point to -unauthenticated Tubs have a distinct form (starting with pbu: -instead of pb:) to make sure they are not mistaken for -authenticated Tubs. Foolscap uses authenticated Tubs by default.

- -

Having the Tub listen on a TCP port is as simple as calling listenOn with a strports-formatted port specification -string. The simplest such string would be tcp:12345, to listen on port -12345 on all interfaces. Using tcp:12345:interface=127.0.0.1 would -cause it to only listen on the localhost interface, making it available only -to other processes on the same host. The strports module -provides many other possibilities.

- -

The Tub needs to be told how it can be reached, so it knows what host and -port to put into the URLs it creates. This location is simply a string in the -format host:port, using the host name by which that TCP port you've -just opened can be reached. Foolscap cannot, in general, guess what this name -is, especially if there are NAT boxes or port-forwarding devices in the way. -If your machine is reachable directly over the internet as -myhost.example.com, then you could use something like this:

- -
-from foolscap import Tub
-
-tub = Tub()
-tub.listenOn("tcp:12345")  # start listening on TCP port 12345
-tub.setLocation("myhost.example.com:12345")
-
- -

Registering the Referenceable

- -

Once the Tub has a Listener and a location, you can publish your -Referenceable to the entire world by picking a name and -registering it:

- -
-url = tub.registerReference(myserver, "math-service")
-
- -

This returns the FURL for your Referenceable. Remote -systems will use this URL to access your newly-published object. The -registration just maps a per-Tub name to the Referenceable: -technically the same Referenceable could be published multiple -times, under different names, or even be published by multiple Tubs in the -same application. But in general, each program will have exactly one Tub, and -each object will be registered under only one name.

- -

In this example (if we pretend the generated TubID was ABCD), the -URL returned by registerReference would be -"pb://ABCD@myhost.example.com:12345/math-service".

- -

If you do not provide a name, a random (and unguessable) name will be -generated for you. This is useful when you want to give access to your -Referenceable to someone specific, but do not want to make it -possible for someone else to acquire it by guessing the name.

- -
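Such an unguessable name is essentially a swiss number; generating one could look like this (make_swissnumber is an illustrative helper, and foolscap's actual name encoding may differ):

```python
import base64
import os

def make_swissnumber():
    """Derive an unguessable object name from 160 bits of OS randomness.
    Illustrative sketch only: foolscap's own encoding may differ."""
    raw = os.urandom(20)  # 160 random bits -> exactly 32 base32 chars
    return base64.b32encode(raw).decode("ascii").lower()
```

The strength of the name rests entirely on the quality and quantity of the randomness, so a cryptographic source like os.urandom is essential.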

To use an unauthenticated Tub instead, you would do the following:

-
-from foolscap import UnauthenticatedTub
-
-tub = UnauthenticatedTub()
-tub.listenOn("tcp:12345")  # start listening on TCP port 12345
-tub.setLocation("myhost.example.com:12345")
-url = tub.registerReference(myserver, "math-service")
-
- -

In this case, the URL would be "pbu://myhost.example.com:12345/math-service". The deterministic nature of this form makes it slightly easier to throw together quick-and-dirty Foolscap applications, since you only need to hard-code the target host and port into the client side program. However, any serious application should just use the default authenticated form and use a full URL as its starting point. Note that the URL can come from anywhere: typed in by the user, retrieved from a web page, or hardcoded into the application.

- -
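Taking either FURL form apart needs only a few string operations; here is a sketch (parse_furl is an illustrative helper, not part of the foolscap API):

```python
def parse_furl(furl):
    """Split a FURL into (tubid, location_hints, name). Handles the
    authenticated pb:// form and the unauthenticated pbu:// form, whose
    tubid is empty. Sketch only: foolscap's own parser is more thorough."""
    if furl.startswith("pb://"):
        # authenticated: the TubID precedes the '@'
        tubid, rest = furl[len("pb://"):].split("@", 1)
    elif furl.startswith("pbu://"):
        # unauthenticated: no TubID at all
        tubid, rest = None, furl[len("pbu://"):]
    else:
        raise ValueError("not a FURL: %r" % (furl,))
    hints, name = rest.split("/", 1)
    return tubid, hints.split(","), name
```

Note how everything except the TubID and the name is just a comma-separated list of location hints.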

Using a persistent certificate

- -

The Tub uses a TLS public-key certificate as the base of all its -cryptographic operations. If you don't give it one when you create the Tub, -it will generate a brand-new one.

- -

The TubID is simply the hash of this certificate, so if you are writing an application that should have a stable long-term identity, you will need to ensure that the Tub uses the same certificate every time your app starts. The easiest way to do this is to pass the certFile= argument into your Tub() constructor call. This argument provides a filename where you want the Tub to store its certificate. The first time the Tub is started (when this file does not exist), the Tub will generate a new certificate and store it here. On subsequent invocations, the Tub will read the earlier certificate from this location. Make sure this filename points to a writable location, and that you pass the same filename to Tub() each time.

- -

Retrieving a RemoteReference

- -

On the client side, you also need to create a Tub, although you -don't need to perform the (listenOn, setLocation, -registerReference) sequence unless you are also publishing -Referenceables to the world. To acquire a reference to somebody -else's object, just use getReference:

- -
-from foolscap import Tub
-
-tub = Tub()
-tub.startService()
-d = tub.getReference("pb://ABCD@myhost.example.com:12345/math-service")
-def gotReference(remote):
-    print "Got the RemoteReference:", remote
-def gotError(err):
-    print "error:", err
-d.addCallbacks(gotReference, gotError)
-
- -

getReference returns a Deferred which will fire with a RemoteReference that is connected to the remote Referenceable named by the URL. It will use an existing connection, if one is available, and it will return an existing RemoteReference, if one has already been acquired.

- -

Since getReference requests are queued until the Tub starts, -the following will work too. But don't forget to call -tub.startService() eventually, otherwise your program will hang -forever.

- -
-from foolscap import Tub
-
-tub = Tub()
-d = tub.getReference("pb://ABCD@myhost.example.com:12345/math-service")
-def gotReference(remote):
-    print "Got the RemoteReference:", remote
-def gotError(err):
-    print "error:", err
-d.addCallbacks(gotReference, gotError)
-tub.startService()
-
- - -

Complete example

- -

Here are two programs, one implementing the server side of our -remote-addition protocol, the other behaving as a client. This first example -uses an unauthenticated Tub so you don't have to manually copy a URL from the -server to the client. Both of these are standalone programs (you just run -them), but normally you would create an Application object and pass the -file to twistd -noy. An example of that usage will be provided -later.

- -pb1server.py - -pb1client.py - -
-% doc/listings/pb1server.py
-the object is available at: pbu://localhost:12345/math-service
-
- -
-% doc/listings/pb1client.py
-got a RemoteReference
-asking it to add 1+2
-the answer is 3
-%
-
- -

The second example uses authenticated Tubs. When running this example, you -must copy the URL printed by the server and provide it as an argument to the -client.

- -pb2server.py - -pb2client.py - -
-% doc/listings/pb2server.py
-the object is available at: pb://abcd123@localhost:12345/math-service
-
- -
-% doc/listings/pb2client.py pb://abcd123@localhost:12345/math-service
-got a RemoteReference
-asking it to add 1+2
-the answer is 3
-%
-
- - -

FURLs

- -

In Foolscap, each world-accessible Referenceable has one or more URLs -which are secure, where we use the capability-security definition of -the term, meaning those URLs have the following properties:

- -
    -
  • The only way to acquire the URL is either to get it from someone else - who already has it, or to be the person who published it in the first - place.
  • - -
  • Only the original creator of the URL gets to determine which Referenceable it will connect to. If your tub.getReference(url) call succeeds, the Referenceable you will be connected to will be the right one.
  • -
- -

To accomplish the first goal, FURLs must be unguessable. You can register -the reference with a human-readable name if your intention is to make it -available to the world, but in general you will let -tub.registerReference generate a random name for you, preserving -the unguessability property.

- -

To accomplish the second goal, the cryptographically-secure TubID is used -as the primary identifier, and the location hints are just that: -hints. If DNS has been subverted to point the hostname at a different -machine, or if a man-in-the-middle attack causes you to connect to the wrong -box, the TubID will not match the remote end, and the connection will be -dropped. These attacks can cause a denial-of-service, but they cannot cause -you to mistakenly connect to the wrong target.

- -

Obviously this second property only holds if you use SSL. If you choose to -use unauthenticated Tubs, all security properties are lost.

- -

The format of a FURL, like pb://abcd123@example.com:5901,backup.example.com:8800/math-server, is as follows (note that the FURL uses the same format as an HTTPSY URL):

- -
    -
  1. The literal string pb://
  2. -
  3. The TubID (as a base32-encoded hash of the SSL certificate)
  4. -
  5. A literal @ sign
  6. - -
  7. A comma-separated list of location hints. Each is one of the - following: -
      -
    • TCP over IPv4 via DNS: HOSTNAME:PORTNUM
    • -
    • TCP over IPv4 without DNS: A.B.C.D:PORTNUM
    • -
    • TCP over IPv6: (TODO; maybe tcp6:HOSTNAME:PORTNUM?)
    • -
    • TCP over IPv6 w/o DNS: (TODO; maybe tcp6:[X:Y::Z]:PORTNUM?)
    • -
    • Unix-domain socket: (TODO)
    • -
    - - Each location hint is attempted in turn. Servers can return a - redirect, which will cause the client to insert the provided - redirect targets into the hint list and start trying them before continuing - with the original list.
  8. - -
  9. A literal / character
  10. -
  11. The reference's name
  12. -
- -

(Unix-domain sockets are represented with only a single location hint, in -the format pb://ABCD@unix/path/to/socket/NAME, but this needs -some work)

- -

FURLs for unauthenticated Tubs, like -pbu://example.com:8700/math-server, are formatted as -follows:

- -
    -
  1. The literal string pbu://
  2. -
  3. A single location hint
  4. -
  5. A literal / character
  6. -
  7. The reference's name
  8. -
- -
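Both FURL formats can be picked apart with a few lines of string handling. The sketch below is illustrative plain Python, not a function Foolscap provides; it assumes simple HOST:PORT location hints and ignores the still-undecided IPv6 and unix-socket forms.

```python
def parse_furl(furl):
    """Split a FURL into (tubid, hints, name).

    For authenticated FURLs (pb://TUBID@HINT,HINT/NAME) tubid is the
    base32 TubID string; for unauthenticated FURLs (pbu://HINT/NAME)
    tubid is None. Each hint is returned as a (host, port) tuple.
    Sketch only: IPv6 and unix-socket hints are not handled.
    """
    if furl.startswith("pb://"):
        rest = furl[len("pb://"):]
        tubid, rest = rest.split("@", 1)
    elif furl.startswith("pbu://"):
        rest = furl[len("pbu://"):]
        tubid = None
    else:
        raise ValueError("not a FURL: %r" % (furl,))
    hints, name = rest.split("/", 1)
    parsed = []
    for hint in hints.split(","):
        host, port = hint.rsplit(":", 1)
        parsed.append((host, int(port)))
    return tubid, parsed, name
```

Feeding it the example FURL from above yields the TubID "abcd123", the two hints ("example.com", 5901) and ("backup.example.com", 8800), and the name "math-server".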

Clients vs Servers, Names and Capabilities

- -

It is worthwhile to point out that Foolscap is a symmetric protocol. -Referenceable instances can live on either side of a wire, and -the only difference between client and server is who publishes -the object and who initiates the network connection.

- -

In any Foolscap-using system, the very first object exchanged must be acquired with a tub.getReference(url) call (in fact, the very first object exchanged is a special implicit RemoteReference to the remote Tub itself, which implements an internal protocol that includes a method named remote_getReference; the tub.getReference(url) call is turned into one step that connects to the remote Tub, and a second step which invokes remotetub.callRemote("getReference", refname) on the result), which means the object must have been published with a call to tub.registerReference(ref, name). After that, other objects can be passed as an argument to (or a return value from) a remotely-invoked method of that first object. Any suitable Referenceable object that is passed over the wire will appear on the other side as a corresponding RemoteReference. It is not necessary to registerReference something to let it pass over the wire.

- -

The converse of this property is thus: if you do not registerReference a particular Referenceable, and you do not give it to anyone else (by passing it in an argument to somebody's remote method, or returning it from one of your own), then nobody else will be able to get access to that Referenceable. This property means the Referenceable is a capability, as holding a corresponding RemoteReference gives someone a power that they cannot acquire in any other way (of course, the Foolscap connections must be secured with SSL, otherwise an eavesdropper or man-in-the-middle could get access, and the registered name must be unguessable, or someone else could acquire a reference, but both of these are the default).

- -

In the following example, the first program creates an RPN-style -Calculator object which responds to push, pop, -add, and subtract messages from the user. The user can also -register an Observer, to which the Calculator sends an -event message each time something happens to the calculator's -state. When you consider the Calculator object, the first -program is the server and the second program is the client. When you think -about the Observer object, the first program is a client and the -second program is the server. It also happens that the first program is -listening on a socket, while the second program initiated a network -connection to the first. It also happens that the first program -published an object under some well-known name, while the second program has -not published any objects. These are all independent properties.
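Before looking at the networked listings, the protocol between the two objects can be sketched locally. The class and method names below mirror the description above, but this is a hypothetical plain-Python stand-in with direct method calls where the real programs would use callRemote.

```python
class Observer:
    """Stands in for the remote Observer; records 'event' messages."""
    def __init__(self):
        self.events = []
    def remote_event(self, msg):
        self.events.append(msg)

class Calculator:
    """RPN calculator that notifies registered observers of each action."""
    def __init__(self):
        self.stack = []
        self.observers = []
    def remote_addObserver(self, observer):
        self.observers.append(observer)
    def send_event(self, msg):
        for o in self.observers:
            o.remote_event(msg)  # a real server would use o.callRemote("event", ...)
    def remote_push(self, num):
        self.stack.append(num)
        self.send_event("push(%d)" % num)
    def remote_add(self):
        arg1, arg2 = self.stack.pop(), self.stack.pop()
        self.stack.append(arg1 + arg2)
        self.send_event("add")
    def remote_subtract(self):
        arg1, arg2 = self.stack.pop(), self.stack.pop()
        self.stack.append(arg2 - arg1)
        self.send_event("subtract")
    def remote_pop(self):
        self.send_event("pop")
        return self.stack.pop()
```

Driving this sketch with push(2), push(3), add, pop produces the same event sequence and final result of 5 that appear in the transcript below.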

- -

Also note that the Calculator side of the example is implemented using an Application object, which is the way you'd normally build a real-world application. You therefore use twistd to launch the program. The User side is written with the same reactor.run() style as the earlier example.

- -

The server registers the Calculator instance and prints the FURL at which it is listening. You need to pass this FURL to the client program so it knows how to contact the server. If you have a modern version of Twisted (2.5 or later) and the right encryption libraries installed, you'll get an authenticated Tub (for which the FURL will start with "pb:" and will be fairly long). If you don't, you'll get an unauthenticated Tub (with a relatively short FURL that starts with "pbu:").

- -pb3calculator.py - -pb3user.py - -
-% twistd -noy doc/listings/pb3calculator.py 
-15:46 PDT [-] Log opened.
-15:46 PDT [-] twistd 2.4.0 (/usr/bin/python 2.4.4) starting up
-15:46 PDT [-] reactor class: twisted.internet.selectreactor.SelectReactor
-15:46 PDT [-] Loading doc/listings/pb3calculator.py...
-15:46 PDT [-] the object is available at:
-              pb://5ojw4cv4u4d5cenxxekjukrogzytnhop@localhost:12345/calculator
-15:46 PDT [-] Loaded.
-15:46 PDT [-] foolscap.pb.Listener starting on 12345
-15:46 PDT [-] Starting factory <Listener at 0x4869c0f4 on tcp:12345
-              with tubs None>
-
- -
-% doc/listings/pb3user.py \
-   pb://5ojw4cv4u4d5cenxxekjukrogzytnhop@localhost:12345/calculator
-event: push(2)
-event: push(3)
-event: add
-event: pop
-the result is 5
-%
-
- - -

Invoking Methods, Method Arguments

- -

As you've probably already guessed, all the methods with names that begin -with remote_ will be available to anyone who manages to acquire -a corresponding RemoteReference. remote_foo matches -a ref.callRemote("foo"), etc. This name lookup can be changed by -overriding Referenceable (or, perhaps more usefully, -implementing an IRemotelyCallable adapter).

- -

The arguments of a remote method may be passed as either positional -parameters (foo(1,2)), or as keyword args -(foo(a=1,b=2)), or a mixture of both. The usual python rules -about not duplicating parameters apply.

- -

You can pass all sorts of normal objects to a remote method: strings, numbers, tuples, lists, and dictionaries. The serialization of these objects is handled by Banana, which knows how to convey arbitrary object graphs over the wire. Things like containers which contain multiple references to the same object, and recursive references (cycles in the object graph), are all handled correctly (you may not want to accept shared objects in your method arguments, as it could lead to surprising behavior depending upon how you have written your method; the Shared constraint will let you express this, and is described in the Constraints section of this document).
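A toy model of how shared and recursive references can be handled: keep a memo table keyed by object identity, and emit a reference token the second time an object is met instead of recursing forever. This is only a sketch of the general technique; Banana's real OPEN/CLOSE token stream and slicer machinery are more involved.

```python
def flatten(obj, memo=None):
    """Turn nested lists (possibly shared or cyclic) into a token list.

    Each list becomes ("OPEN", label) ... "CLOSE"; a list that has
    already been seen is emitted as ("REF", label), so cycles terminate
    and shared containers are serialized only once. Sketch only.
    """
    if memo is None:
        memo = {}
    if isinstance(obj, list):
        if id(obj) in memo:
            return [("REF", memo[id(obj)])]
        label = len(memo)
        memo[id(obj)] = label
        tokens = [("OPEN", label)]
        for child in obj:
            tokens.extend(flatten(child, memo))
        tokens.append("CLOSE")
        return tokens
    return [obj]
```

A self-referencing list such as a = [1]; a.append(a) flattens to a finite token sequence, with the cycle represented by a single ("REF", 0) token.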

- -

Passing instances is handled specially. Foolscap will not send anything over the wire that it does not know how to serialize, and (unlike the standard pickle module) it will not make assumptions about how to handle classes that have not been explicitly marked as serializable. This is for security, both for the sender (making sure you don't pass anything over the wire that you didn't intend to let out of your security perimeter), and for the recipient (making sure outsiders aren't allowed to create arbitrary instances inside your memory space, and therefore letting them run somewhat arbitrary code inside your perimeter).

- -

Sending Referenceables is straightforward: they always appear -as a corresponding RemoteReference on the other side. You can -send the same Referenceable as many times as you like, and it -will always show up as the same RemoteReference instance. A -distributed reference count is maintained, so as long as the remote side -hasn't forgotten about the RemoteReference, the original -Referenceable will be kept alive.

- -

Sending a RemoteReference falls into two categories. If you send a RemoteReference back to the Tub that you got it from, the recipient will see the original Referenceable. If you send it to some other Tub, the recipient will (eventually) see a RemoteReference of its own. This second feature is called an introduction, and has a few additional requirements: see the Introductions section of this document for details.

- -

Sending instances of other classes requires that you tell Banana how they should be serialized. Referenceable is good for copy-by-reference semantics. (In fact, if all you want is referenceability, and not callability, you can use OnlyReferenceable. Strictly speaking, Referenceable is both Referenceable, meaning it is sent over the wire using pass-by-reference semantics and survives a round trip, and Callable, meaning you can invoke remote methods on it. Referenceable should really be named Callable, but the existing name has a lot of historical weight behind it.) For copy-by-value semantics, the easiest route is to subclass Copyable. See the Copyable section for details. Note that you can also register an ICopyable adapter on third-party classes to avoid subclassing. You will need to register the Copyable's name on the receiving end too, otherwise Banana will not know how to unserialize the incoming data stream.

- -

When returning a value from a remote method, you can do all these things, -plus two more. If you raise an exception, the caller's Deferred will have the -errback fired instead of the callback, with a CopiedFailure instance that describes what went -wrong. The CopiedFailure is not quite as useful as a local Failure object would be: to -send it over the wire, some contents are replaced with strings, and the -actual Exception object (f.value) is replaced with its string -representation. But you can still use it to find out what went wrong. The -CopiedFailure may reveal more information about the internals of -your program than you want: you can set the unsafeTracebacks -flag on the Tub to limit outgoing CopiedFailures to contain only -the exception type (and none of the stack trace information that would reveal -lines of your source code to the remote end).

- -

The other alternative is for your method to return a Deferred. If this happens, the caller will not actually get a response until you fire that Deferred. This is useful when the remote operation being requested cannot complete right away. The caller's Deferred will fire with whatever value you eventually fire your own Deferred with. If your Deferred is errbacked, their Deferred will be errbacked with a CopiedFailure.

- - -

Constraints and RemoteInterfaces

- -

One major feature introduced by Foolscap (relative to oldpb) is the -serialization Constraint. -This lets you place limits on what kind of data you are willing to accept, -which enables safer distributed programming. Typically python uses duck -typing, wherein you usually just throw some arguments at the method and -see what happens. When you are less sure of the origin of those arguments, -you may want to be more circumspect. Enforcing type checking at the boundary -between your code and the outside world may make it safer to use duck typing -inside those boundaries. The type specifications also form a convenient -remote API reference you can publish for prospective clients of your -remotely-invokable service.

- -

In addition, these Constraints are enforced on each token as it arrives -over the wire. This means that you can calculate a (small) upper bound on how -much received data your program will store before it decides to hang up on -the violator, minimizing your exposure to DoS attacks that involve sending -random junk at you.

- -

There are three pieces you need to know about: Tokens, Constraints, and -RemoteInterfaces.

- -

Tokens

- -

The fundamental unit of serialization is the Banana Token. These are -thoroughly documented in the Banana -Specification, but what you need to know here is that each piece of -non-container data, like a string or a number, is represented by a single -token. Containers (like lists and dictionaries) are represented by a special -OPEN token, followed by tokens for everything that is in the container, -followed by the CLOSE token. Everything Banana does is in terms of these -nested OPEN/stuff/stuff/CLOSE sequences of tokens.

- -

Each token consists of a header, a type byte, and an optional body. The -header is always a base-128 number with a maximum of 64 digits, and the type -byte is always a single byte. The body (if present) has a length dictated by -the magnitude of the header.

- -

The length-first token format means that the receiving system never has to -accept more than 65 bytes before it knows the type and size of the token, at -which point it can make a decision about accepting or rejecting the rest of -it.
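The header arithmetic can be sketched in a few lines. This mirrors the int2b128/b1282int helpers in foolscap/banana.py, rewritten here to pass digit lists around so the sketch is self-contained.

```python
def int2b128(n):
    """Encode a non-negative integer as little-endian base-128 digits."""
    if n == 0:
        return [0]
    assert n > 0, "can only encode non-negative integers"
    digits = []
    while n:
        digits.append(n & 0x7f)  # low 7 bits first: little-endian
        n >>= 7
    return digits

def b1282int(digits):
    """Decode a little-endian base-128 digit list back into an integer."""
    n = 0
    for place, digit in enumerate(digits):
        n += digit * (128 ** place)
    return n
```

Since every header digit is below 0x80, a 64-digit header occupies at most 64 bytes; adding the single type byte gives the 65-byte bound mentioned above.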

- -

Constraints

- -

The foolscap.schema module has a variety of Constraint classes that can be applied to incoming data. Most of them correspond to typical Python types; e.g. ListOf matches a list, with a certain maximum length, and a child Constraint that gets applied to the contents of the list. You can nest Constraints in this way to describe the shape of the object graph that you are willing to accept.

- -

At any given time, the receiving Banana protocol has a single Constraint object that it enforces against the inbound data stream (to be precise, each Unslicer on the receive stack has a Constraint, and the idea is that all of them get to pass judgement on the inbound token; a useful syntax to describe this sort of thing is still being worked out).
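To make nested constraints concrete, here is a hypothetical stand-in that checks completed values rather than the token stream. The real classes in foolscap.schema judge each token as it arrives; the class names below only echo theirs.

```python
class Violation(Exception):
    """Raised when a value does not satisfy a constraint (stand-in)."""

class IntegerConstraint:
    def check(self, obj):
        if not isinstance(obj, int):
            raise Violation("%r is not an integer" % (obj,))

class ListOf:
    """Accept a list up to maxLength, applying a child constraint to items."""
    def __init__(self, child, maxLength=30):
        self.child = child
        self.maxLength = maxLength
    def check(self, obj):
        if not isinstance(obj, list):
            raise Violation("%r is not a list" % (obj,))
        if len(obj) > self.maxLength:
            raise Violation("list exceeds maxLength=%d" % self.maxLength)
        for item in obj:
            self.child.check(item)  # recurse: constraints nest
```

Here ListOf(ListOf(IntegerConstraint())) accepts a list of lists of integers and rejects anything else, illustrating how nesting describes the acceptable object-graph shape.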

- -

RemoteInterfaces

- -

The RemoteInterface is how you describe your constraints. You can provide a constraint for each argument of each method, as well as one for the return value. You can also specify additional flags on the methods. The convention (which is actually enforced by the code) is to name RemoteInterface objects with an RI prefix, like RIFoo.

- -

RemoteInterfaces are created and used a lot like the usual -zope.interface-style Interface. They look like -class definitions, inheriting from RemoteInterface. For each -method, the default value of each argument is used to create a -Constraint for that argument. Basic types (int, -str, bool) are converted into a -Constraint subclass (IntegerConstraint, StringConstraint, BooleanConstraint). You can also use -instances of other Constraint subclasses, like ListOf and DictOf. This Constraint will be -enforced against the value for the given argument. Unless you specify -otherwise, remote callers must match all the Constraints you -specify, all arguments listed in the RemoteInterface must be present, and no -arguments outside that list will be accepted.

- -

Note that, like zope.interface, these methods should not include -self in their argument list. This is because you are -documenting how other people invoke your methods. self -is an implementation detail. RemoteInterface will complain if -you forget.

- -

The methods in a RemoteInterface should return a -single value with the same format as the default arguments: either a basic -type (int, str, etc) or a Constraint -subclass. This Constraint is enforced on the return value of the -method. If you are calling a method in somebody else's process, the argument -constraints will be applied as a courtesy (be conservative in what you -send), and the return value constraint will be applied to prevent the -server from doing evil things to you. If you are running a method on behalf -of a remote client, the argument constraints will be enforced to protect -you, while the return value constraint will be applied as a -courtesy.

- -

Attempting to send a value that does not satisfy the Constraint will -result in a Violation exception -being raised.

- -

You can also specify methods by defining attributes of the same name in the RemoteInterface object. Each attribute value should be an instance of RemoteMethodSchema (although technically it can be any object which implements the IRemoteMethodConstraint interface). This approach is more flexible: there are some constraints that are not easy to express with the default-argument syntax, and this is the only way to set per-method flags. Note that all such method-defining attributes must be set in the RemoteInterface body itself, rather than being set on it after the fact (i.e. RIFoo.doBar = stuff). This is required because the RemoteInterface metaclass magic processes all of these attributes only once, immediately after the RemoteInterface body has been evaluated.

- -

The RemoteInterface class has a name. Normally this is the (short) classname (it would be RIFoo.__class__.__name__, if RemoteInterfaces were actually classes, which they're not). You can override this name by setting a special __remote_name__ attribute on the RemoteInterface (again, in the body). This name is important because it is externally visible: all RemoteReferences that point at your Referenceables will remember the names of the RemoteInterfaces they implement. This is what enables the type-checking to be performed on both ends of the wire.

- -

In the future, this ought to default to the fully-qualified classname (like package.module.RIFoo), so that two RemoteInterfaces with the same name in different modules can co-exist. In the current release, these two RemoteInterfaces will collide (and provoke an import-time error message complaining about the duplicate name). As a result, if you have such classes (e.g. foo.RIBar and baz.RIBar), you must use __remote_name__ to distinguish them (by naming one of them something other than RIBar) to avoid this error. Hopefully this will be improved in a future version, but it looks like a difficult change to implement, so the standing recommendation is to use __remote_name__ on all your RemoteInterfaces, and set it to a suitably unique string (like a URI).

- -

Here's an example:

- -
-from foolscap import RemoteInterface, schema
-
-class RIMath(RemoteInterface):
-    __remote_name__ = "RIMath.using-foolscap.docs.foolscap.twistedmatrix.com"
-    def add(a=int, b=int):
-        return int
-    # declare it with an attribute instead of a function definition
-    subtract = schema.RemoteMethodSchema(a=int, b=int, _response=int)
-    def sum(args=schema.ListOf(int)):
-        return int
-
- - -

Using RemoteInterface

- -

To declare that your Referenceable responds to a particular -RemoteInterface, use the normal implements() -annotation:

- -
-class MathServer(foolscap.Referenceable):
-    implements(RIMath)
-
-    def remote_add(self, a, b):
-        return a+b
-    def remote_subtract(self, a, b):
-        return a-b
-    def remote_sum(self, args):
-        total = 0
-        for a in args: total += a
-        return total
-
- -

To enforce constraints everywhere, both sides will need to know about the RemoteInterface, and both must know it by the same name. It is a good idea to put the RemoteInterface in a common file that is imported into the programs running on both sides. It is up to you to make sure that both sides agree on the interface. Future versions of Foolscap may implement some sort of checksum-verification or Interface-serialization as a failsafe, but fundamentally the RemoteInterface that you are using defines what your program is prepared to handle. There is no difference between an old client accidentally using a different version of the RemoteInterface and a malicious attacker actively trying to confuse your code. The only promise that Foolscap can make is that the constraints you provide in the RemoteInterface will be faithfully applied to the incoming data stream, so that you don't need to do the type checking yourself inside the method.

- -

When making a remote method call, you use the RemoteInterface -to identify the method instead of a string. This scopes the method name to -the RemoteInterface:

- -
-d = remote.callRemote(RIMath["add"], a=1, b=2)
-# or
-d = remote.callRemote(RIMath["add"], 1, 2)
-
- -

Pass-By-Copy

- -

You can pass (nearly) arbitrary instances over the wire. Foolscap knows -how to serialize all of Python's native data types already: numbers, strings, -unicode strings, booleans, lists, tuples, dictionaries, sets, and the None -object. You can teach it how to serialize instances of other types too. -Foolscap will not serialize (or deserialize) any class that you haven't -taught it about, both for security and because it refuses the temptation to -guess your intentions about how these unknown classes ought to be -serialized.

- -

The simplest possible way to pass things by copy is demonstrated in the -following code fragment:

- -
-from foolscap import Copyable, RemoteCopy
-
-class MyPassByCopy(Copyable, RemoteCopy):
-    typeToCopy = copytype = "MyPassByCopy"
-    def __init__(self):
-        # RemoteCopy subclasses may not accept any __init__ arguments
-        pass
-    def setCopyableState(self, state):
-        self.__dict__ = state
-
- -

If the code on both sides of the wire imports this class, then any instances of MyPassByCopy that are present in the arguments of a remote method call (or returned as the result of a remote method call) will be serialized and reconstituted into an equivalent instance on the other side.

- -

For more complicated things to do with pass-by-copy, see the documentation -on Copyable. This explains the difference between -Copyable and RemoteCopy, how to control the -serialization and deserialization process, and how to arrange for -serialization of third-party classes that are not subclasses of -Copyable.

- - -

Third-party References

- -

Another new feature of Foolscap is the ability to send -RemoteReferences to third parties. The classic scenario for this -is illustrated by the three-party -Granovetter diagram. One party (Alice) has RemoteReferences to two other -objects named Bob and Carol. She wants to share her reference to Carol with -Bob, by including it in a message she sends to Bob (i.e. by using it as an -argument when she invokes one of Bob's remote methods). The Foolscap code for -doing this would look like:

- -
-bobref.callRemote("foo", intro=carolref)
-
- -

When Bob receives this message (i.e. when his remote_foo method is invoked), he will discover that he's holding a fully-functional RemoteReference to the object named Carol (and if everyone involved is using authenticated Tubs, then Foolscap offers a guarantee, in the cryptographic sense, that Bob will wind up with a reference to the same object that Alice intended; the authenticated FURLs prevent DNS-spoofing and man-in-the-middle attacks). He can start using this RemoteReference right away:

- -
-class Bob(foolscap.Referenceable):
-    def remote_foo(self, intro):
-        self.carol = intro
-        self.carol.callRemote("howdy", msg="Pleased to meet you", you=intro)
-        return self.carol
-
- -

If Bob sends this RemoteReference back to Alice, her method -will see the same RemoteReference that she sent to Bob. In this -example, Bob sends the reference by returning it from the original -remote_foo method call, but he could almost as easily send it in -a separate method call.

- -
-class Alice(foolscap.Referenceable):
-    def start(self, carol):
-        self.carol = carol
-        d = self.bob.callRemote("foo", intro=carol)
-        d.addCallback(self.didFoo)
-    def didFoo(self, result):
-        assert result is self.carol  # this will be true
-
- -

Moreover, if Bob sends it back to Carol (completing the -three-party round trip), Carol will see it as her original -Referenceable.

- -
-class Carol(foolscap.Referenceable):
-    def remote_howdy(self, msg, you):
-        assert you is self  # this will be true
-
- -

In addition to this, in the four-party introduction sequence as used by -the Grant -Matcher Puzzle, when a Referenceable is sent to the same destination -through multiple paths, the recipient will receive the same -RemoteReference object from both sides.

- -

For a RemoteReference to be transferable to third parties in this fashion, the original Referenceable must live in a Tub which has a working listening port, and an established base URL. It is not necessary for the Referenceable to have been published with registerReference first: if it is sent over the wire before a name has been associated with it, it will be registered under a new random and unguessable name. The RemoteReference will contain the resulting URL, enabling it to be sent to third parties.

- -

When this introduction is made, the receiving system must establish a connection with the Tub that holds the original Referenceable, and acquire its own RemoteReference. These steps must take place before the remote method can be invoked, and other method calls might arrive before they do. All subsequent method calls are queued until the one that involved the introduction is performed. Foolscap guarantees (by default) that the messages sent to a given Referenceable will be delivered in the order they were sent. In the future there may be options to relax this guarantee, in exchange for higher performance, reduced memory consumption, multiple priority queues, limited latency, or other features. There might even be an option to turn off introductions altogether.

- -

Also note that enabling this capability means any of your communication -peers can make you create TCP connections to hosts and port numbers of their -choosing. The fact that those connections can only speak the Foolscap -protocol may reduce the security risk presented, but it still lets other -people be annoying.

- - - diff --git a/src/foolscap/foolscap/__init__.py b/src/foolscap/foolscap/__init__.py deleted file mode 100644 index b3d41909..00000000 --- a/src/foolscap/foolscap/__init__.py +++ /dev/null @@ -1,29 +0,0 @@ -"""Foolscap""" - -__version__ = "0.1.5" - -# here are the primary entry points -from foolscap.pb import Tub, UnauthenticatedTub, getRemoteURL_TCP - -# names we import so that others can reach them as foolscap.foo -from foolscap.remoteinterface import RemoteInterface -from foolscap.referenceable import Referenceable, SturdyRef -from foolscap.copyable import Copyable, RemoteCopy, registerRemoteCopy -from foolscap.copyable import registerCopier, registerRemoteCopyFactory -from foolscap.ipb import DeadReferenceError -from foolscap.tokens import BananaError -from foolscap import schema # necessary for the adapter_hooks side-effect -# TODO: Violation? - -# hush pyflakes -_unused = [ - Tub, UnauthenticatedTub, getRemoteURL_TCP, - RemoteInterface, - Referenceable, SturdyRef, - Copyable, RemoteCopy, registerRemoteCopy, - registerCopier, registerRemoteCopyFactory, - DeadReferenceError, - BananaError, - schema, - ] -del _unused diff --git a/src/foolscap/foolscap/banana.py b/src/foolscap/foolscap/banana.py deleted file mode 100644 index 917f2265..00000000 --- a/src/foolscap/foolscap/banana.py +++ /dev/null @@ -1,1178 +0,0 @@ - -import types, struct, sets, time - -from twisted.internet import protocol, defer, reactor -from twisted.python.failure import Failure -from twisted.python import log - -# make sure to import allslicers, so they all get registered. Even if the -# need for RootSlicer/etc goes away, do the import here anyway. 
-from foolscap.slicers.allslicers import RootSlicer, RootUnslicer -from foolscap.slicers.allslicers import ReplaceVocabSlicer, AddVocabSlicer - -import tokens -from tokens import SIZE_LIMIT, STRING, LIST, INT, NEG, \ - LONGINT, LONGNEG, VOCAB, FLOAT, OPEN, CLOSE, ABORT, ERROR, \ - PING, PONG, \ - BananaError, BananaFailure, Violation - -EPSILON = 0.1 - -def int2b128(integer, stream): - if integer == 0: - stream(chr(0)) - return - assert integer > 0, "can only encode positive integers" - while integer: - stream(chr(integer & 0x7f)) - integer = integer >> 7 - -def b1282int(st): - # NOTE that this is little-endian - oneHundredAndTwentyEight = 128 - i = 0 - place = 0 - for char in st: - num = ord(char) - i = i + (num * (oneHundredAndTwentyEight ** place)) - place = place + 1 - return i - -# long_to_bytes and bytes_to_long taken from PyCrypto: Crypto/Util/number.py - -def long_to_bytes(n, blocksize=0): - """long_to_bytes(n:long, blocksize:int) : string - Convert a long integer to a byte string. - - If optional blocksize is given and greater than zero, pad the front of - the byte string with binary zeros so that the length is a multiple of - blocksize. - """ - # after much testing, this algorithm was deemed to be the fastest - s = '' - n = long(n) - pack = struct.pack - while n > 0: - s = pack('>I', n & 0xffffffffL) + s - n = n >> 32 - # strip off leading zeros - for i in range(len(s)): - if s[i] != '\000': - break - else: - # only happens when n == 0 - s = '\000' - i = 0 - s = s[i:] - # add back some pad bytes. this could be done more efficiently w.r.t. the - # de-padding being done above, but sigh... - if blocksize > 0 and len(s) % blocksize: - s = (blocksize - len(s) % blocksize) * '\000' + s - return s - -def bytes_to_long(s): - """bytes_to_long(string) : long - Convert a byte string to a long integer. - - This is (essentially) the inverse of long_to_bytes(). 
- """ - acc = 0L - unpack = struct.unpack - length = len(s) - if length % 4: - extra = (4 - length % 4) - s = '\000' * extra + s - length = length + extra - for i in range(0, length, 4): - acc = (acc << 32) + unpack('>I', s[i:i+4])[0] - return acc - -HIGH_BIT_SET = chr(0x80) - - - -# Banana is a big class. It is split up into three sections: sending, -# receiving, and connection setup. These used to be separate classes, but -# the __init__ functions got too weird. - -class Banana(protocol.Protocol): - - def __init__(self, features={}): - """ - @param features: a dictionary of negotiated connection features - """ - self.initSend() - self.initReceive() - - def populateVocabTable(self, vocabStrings): - """ - I expect a list of strings. I will populate my initial vocab - table (both inbound and outbound) with this list. - - It is not safe to use this method once anything has been serialized - onto the wire. This method can only be used to set up the initial - vocab table based upon a negotiated set of common words. The - 'initial-vocab-table-index' parameter is used to decide upon the - contents of this table. 
- """ - - out_vocabDict = dict(zip(vocabStrings, range(len(vocabStrings)))) - self.outgoingVocabTableWasReplaced(out_vocabDict) - - in_vocabDict = dict(zip(range(len(vocabStrings)), vocabStrings)) - self.replaceIncomingVocabulary(in_vocabDict) - - ### connection setup - - def connectionMade(self): - if self.debugSend: - print "Banana.connectionMade" - if self.keepaliveTimeout is not None: - self.dataLastReceivedAt = time.time() - t = reactor.callLater(self.keepaliveTimeout + EPSILON, - self.keepaliveTimerFired) - self.keepaliveTimer = t - self.useKeepalives = True - if self.disconnectTimeout is not None: - self.dataLastReceivedAt = time.time() - t = reactor.callLater(self.disconnectTimeout + EPSILON, - self.disconnectTimerFired) - self.disconnectTimer = t - self.useKeepalives = True - # prime the pump - self.produce() - - def connectionLost(self, why): - if self.disconnectTimer: - self.disconnectTimer.cancel() - self.disconnectTimer = None - if self.keepaliveTimer: - self.keepaliveTimer.cancel() - self.keepaliveTimer = None - protocol.Protocol.connectionLost(self, why) - - ### SendBanana - # called by .send() - # calls transport.write() and transport.loseConnection() - - slicerClass = RootSlicer - paused = False - streamable = True # this is only checked during __init__ - debugSend = False - - def initSend(self): - self.rootSlicer = self.slicerClass(self) - self.rootSlicer.allowStreaming(self.streamable) - assert tokens.ISlicer.providedBy(self.rootSlicer) - assert tokens.IRootSlicer.providedBy(self.rootSlicer) - - itr = self.rootSlicer.slice() - next = iter(itr).next - top = (self.rootSlicer, next, None) - self.slicerStack = [top] - - self.openCount = 0 - self.outgoingVocabulary = {} - self.nextAvailableOutgoingVocabularyIndex = 0 - self.pendingVocabAdditions = sets.Set() - - def send(self, obj): - if self.debugSend: print "Banana.send(%s)" % obj - return self.rootSlicer.send(obj) - - def produce(self, dummy=None): - # optimize: cache 'next' because we get many 
more tokens than stack - # pushes/pops - while self.slicerStack and not self.paused: - if self.debugSend: print "produce.loop" - try: - slicer, next, openID = self.slicerStack[-1] - obj = next() - if self.debugSend: print " produce.obj=%s" % (obj,) - if isinstance(obj, defer.Deferred): - for s,n,o in self.slicerStack: - if not s.streamable: - raise Violation("parent not streamable") - obj.addCallback(self.produce) - obj.addErrback(self.sendFailed) # what could cause this? - # this is the primary exit point - break - elif type(obj) in (int, long, float, str): - # sendToken raises a BananaError for weird tokens - self.sendToken(obj) - else: - # newSlicerFor raises a Violation for unsendable types - # pushSlicer calls .slice, which can raise Violation - try: - slicer = self.newSlicerFor(obj) - self.pushSlicer(slicer, obj) - except Violation, v: - # pushSlicer is arranged such that the pushing of - # the Slicer and the sending of the OPEN happen - # together: either both occur or neither occur. In - # addition, there is nothing past the OPEN/push - # which can cause an exception. - - # Therefore, if an exception was raised, we know - # that the OPEN has not been sent (so we don't have - # to send an ABORT), and that the new Unslicer has - # not been pushed (so we don't have to pop one from - # the stack) - - f = BananaFailure() - if self.debugSend: - print " violation in newSlicerFor:", f - - self.handleSendViolation(f, - doPop=False, sendAbort=False) - - except StopIteration: - if self.debugSend: print "StopIteration" - self.popSlicer() - - except Violation, v: - # Violations that occur because of Constraints are caught - # before the Slicer is pushed. A Violation that is caught - # here was raised inside .next(), or .streamable wasn't - # obeyed. The Slicer should now be abandoned. 
- if self.debugSend: print " violation in .next:", v - - f = BananaFailure() - self.handleSendViolation(f, doPop=True, sendAbort=True) - - except: - print "exception in produce" - self.sendFailed(Failure()) - # there is no point to raising this again. The Deferreds are - # all errbacked in sendFailed(). This function was called - # inside a Deferred which errbacks to sendFailed(), and - # we've already called that once. The connection will be - # dropped by sendFailed(), and the error is logged, so there - # is nothing left to do. - return - - assert self.slicerStack # should never be empty - - def handleSendViolation(self, f, doPop, sendAbort): - f.value.setLocation(self.describeSend()) - - while True: - top = self.slicerStack[-1][0] - - if self.debugSend: - print " handleSendViolation.loop, top=%s" % top - - # should we send an ABORT? Only if an OPEN has been sent, which - # happens in pushSlicer (if at all). - if sendAbort: - lastOpenID = self.slicerStack[-1][2] - if lastOpenID is not None: - if self.debugSend: - print " sending ABORT(%s)" % lastOpenID - self.sendAbort(lastOpenID) - - # should we pop the Slicer? yes - if doPop: - if self.debugSend: print " popping %s" % top - self.popSlicer() - if not self.slicerStack: - if self.debugSend: print "RootSlicer died!" - raise BananaError("Hey! You killed the RootSlicer!") - top = self.slicerStack[-1][0] - - # now inform the parent. 
If they also give up, we will - # loop, popping more Slicers off the stack until the - # RootSlicer ignores the error - - if self.debugSend: - print " notifying parent", top - f = top.childAborted(f) - - if f: - doPop = True - sendAbort = True - continue - else: - break - - - # the parent wants to forge ahead - - def newSlicerFor(self, obj): - if tokens.ISlicer.providedBy(obj): - return obj - topSlicer = self.slicerStack[-1][0] - # slicerForObject could raise a Violation, for unserializeable types - return topSlicer.slicerForObject(obj) - - def pushSlicer(self, slicer, obj): - if self.debugSend: print "push", slicer - assert len(self.slicerStack) < 10000 # failsafe - - # if this method raises a Violation, it means that .slice failed, - # and neither the OPEN nor the stack-push has occurred - - topSlicer = self.slicerStack[-1][0] - slicer.parent = topSlicer - - # we start the Slicer (by getting its iterator) first, so that if it - # fails we can refrain from sending the OPEN (hence we do not have - # to send an ABORT and CLOSE, which simplifies the send logic - # considerably). slicer.slice is the only place where a Violation - # can be raised: it is caught and passed cleanly to the parent. If - # it happens anywhere else, or if any other exception is raised, the - # connection will be dropped. - - # the downside to this approach is that .slice happens before - # .registerReference, so any late-validation being done in .slice - # will not be able to detect the fact that this object has already - # begun serialization. Validation performed in .next is ok. - - # also note that if .slice is a generator, any exception it raises - # will not occur until .next is called, which happens *after* the - # slicer has been pushed. This check is only useful for .slice - # methods which are *not* generators. 
-
-        itr = slicer.slice(topSlicer.streamable, self)
-        next = iter(itr).next
-
-        # we are now committed to sending the OPEN token, meaning that
-        # failures after this point will cause an ABORT/CLOSE to be sent
-
-        openID = None
-        if slicer.sendOpen:
-            openID = self.sendOpen()
-            if slicer.trackReferences:
-                topSlicer.registerReference(openID, obj)
-            # note that the only reason to hold on to the openID here is for
-            # the debug/optional copy in the CLOSE token. Consider ripping
-            # this code out if we decide to stop sending that copy.
-
-        slicertuple = (slicer, next, openID)
-        self.slicerStack.append(slicertuple)
-
-    def popSlicer(self):
-        slicer, next, openID = self.slicerStack.pop()
-        if openID is not None:
-            self.sendClose(openID)
-        if self.debugSend: print "pop", slicer
-
-    def describeSend(self):
-        where = []
-        for i in self.slicerStack:
-            try:
-                piece = i[0].describe()
-            except:
-                log.msg("Banana.describeSend")
-                log.err()
-                piece = "???"
-            where.append(piece)
-        return ".".join(where)
-
-    def setOutgoingVocabulary(self, vocabStrings):
-        """Schedule a replacement of the outbound VOCAB table.
-
-        Higher-level code may call this at any time with a list of strings.
-        Immediately after the replacement has occurred, the outbound VOCAB
-        table will contain all of the strings in vocabStrings and nothing
-        else. This table tells the token-sending code which strings to
-        abbreviate with short integers in a VOCAB token.
-
-        This function can be called at any time (even while the protocol is
-        in the middle of serializing and transmitting some other object)
-        because it merely schedules a replacement to occur at some point in
-        the future. A special marker (the ReplaceVocabSlicer) is placed in
-        the outbound queue, and the table replacement will only happen after
-        all the items ahead of that marker have been serialized. At the same
-        time the table is replaced, a (set-vocab..) sequence will be
-        serialized towards the far end.
This ensures that we set our outbound
-        table at the same 'time' as the far end starts using it.
-        """
-        # build a VOCAB message, send it, then set our outgoingVocabulary
-        # dictionary to start using the new table
-        assert isinstance(vocabStrings, (list, tuple))
-        for s in vocabStrings:
-            assert isinstance(s, str)
-        vocabDict = dict(zip(vocabStrings, range(len(vocabStrings))))
-        s = ReplaceVocabSlicer(vocabDict)
-        # the ReplaceVocabSlicer does some magic to ensure the VOCAB message
-        # does not use vocab tokens itself. This would be legal (sort of a
-        # differential compression), but confusing. It accomplishes this by
-        # clearing our self.outgoingVocabulary dict when it begins to be
-        # serialized.
-        self.send(s)
-
-        # likewise, when it finishes, the ReplaceVocabSlicer replaces our
-        # self.outgoingVocabulary dict when it has finished sending the
-        # strings. It is important that this occur in the serialization code,
-        # or somewhen very close to it, because otherwise there could be a
-        # race condition that could result in some strings being vocabized
-        # with the wrong keys.
-
-    def addToOutgoingVocabulary(self, value):
-        """Schedule 'value' for addition to the outbound VOCAB table.
-
-        This may be called at any time. If the string is already scheduled
-        for addition, or if it is already in the VOCAB table, it will be
-        ignored. (TODO: does this introduce an annoying-but-not-fatal race
-        condition?) The string will not actually be added to the table until
-        the outbound serialization queue has been serviced.
-        """
-        assert isinstance(value, str)
-        if value in self.outgoingVocabulary:
-            return
-        if value in self.pendingVocabAdditions:
-            return
-        self.pendingVocabAdditions.add(value)
-        s = AddVocabSlicer(value)
-        self.send(s)
-
-    def outgoingVocabTableWasReplaced(self, newTable):
-        # this is called by the ReplaceVocabSlicer to manipulate our table.
-        # It must certainly *not* be called by higher-level user code.
-        self.outgoingVocabulary = newTable
-        if newTable:
-            maxIndex = max(newTable.values()) + 1
-            self.nextAvailableOutgoingVocabularyIndex = maxIndex
-        else:
-            self.nextAvailableOutgoingVocabularyIndex = 0
-
-    def allocateEntryInOutgoingVocabTable(self, string):
-        assert string not in self.outgoingVocabulary
-        # TODO: a softer failure mode for this assert is to re-send the
-        # existing key. To make sure that really happens, though, we have to
-        # remove it from the vocab table, otherwise we'll tokenize the
-        # string. If we can ensure that, then this failure mode would waste
-        # time and network but would otherwise be harmless.
-        #
-        # return self.outgoingVocabulary[string]
-
-        self.pendingVocabAdditions.remove(string)
-        index = self.nextAvailableOutgoingVocabularyIndex
-        self.nextAvailableOutgoingVocabularyIndex = index + 1
-        return index
-
-    def outgoingVocabTableWasAmended(self, index, string):
-        self.outgoingVocabulary[string] = index
-
-    # these methods define how we emit low-level tokens
-
-    def sendPING(self, number=0):
-        if number:
-            int2b128(number, self.transport.write)
-        self.transport.write(PING)
-
-    def sendPONG(self, number):
-        if number:
-            int2b128(number, self.transport.write)
-        self.transport.write(PONG)
-
-    def sendOpen(self):
-        openID = self.openCount
-        self.openCount += 1
-        int2b128(openID, self.transport.write)
-        self.transport.write(OPEN)
-        return openID
-
-    def sendToken(self, obj):
-        write = self.transport.write
-        if isinstance(obj, types.IntType) or isinstance(obj, types.LongType):
-            if obj >= 2**31:
-                s = long_to_bytes(obj)
-                int2b128(len(s), write)
-                write(LONGINT)
-                write(s)
-            elif obj >= 0:
-                int2b128(obj, write)
-                write(INT)
-            elif -obj > 2**31: # NEG is [-2**31, 0)
-                s = long_to_bytes(-obj)
-                int2b128(len(s), write)
-                write(LONGNEG)
-                write(s)
-            else:
-                int2b128(-obj, write)
-                write(NEG)
-        elif isinstance(obj, types.FloatType):
-            write(FLOAT)
-            write(struct.pack("!d", obj))
-        elif isinstance(obj, types.StringType):
-            if
self.outgoingVocabulary.has_key(obj): - symbolID = self.outgoingVocabulary[obj] - int2b128(symbolID, write) - write(VOCAB) - else: - self.maybeVocabizeString(obj) - int2b128(len(obj), write) - write(STRING) - write(obj) - else: - raise BananaError, "could not send object: %s" % repr(obj) - - def maybeVocabizeString(self, string): - # TODO: keep track of the last 30 strings we've send in full. If this - # string appears more than 3 times on that list, create a vocab item - # for it. Make sure we don't start using the vocab number until the - # ADDVOCAB token has been queued. - if False: - self.addToOutgoingVocabulary(string) - - def sendClose(self, openID): - int2b128(openID, self.transport.write) - self.transport.write(CLOSE) - - def sendAbort(self, count=0): - int2b128(count, self.transport.write) - self.transport.write(ABORT) - - def sendError(self, msg): - if len(msg) > SIZE_LIMIT: - msg = msg[:SIZE_LIMIT-10] + "..." - int2b128(len(msg), self.transport.write) - self.transport.write(ERROR) - self.transport.write(msg) - # now you should drop the connection - self.transport.loseConnection() - - def sendFailed(self, f): - # call this if an exception is raised in transmission. The Failure - # will be logged and the connection will be dropped. This is - # suitable for use as an errback handler. 
- print "SendBanana.sendFailed:", f - log.msg("Sendfailed.sendfailed") - log.err(f) - try: - if self.transport: - self.transport.loseConnection() - except: - print "exception during transport.loseConnection" - log.err() - try: - self.rootSlicer.connectionLost(f) - except: - print "exception during rootSlicer.connectionLost" - log.err() - - ### ReceiveBanana - # called with dataReceived() - # calls self.receivedObject() - - unslicerClass = RootUnslicer - debugReceive = False - logViolations = False - logReceiveErrors = True - useKeepalives = False - keepaliveTimeout = None - keepaliveTimer = None - disconnectTimeout = None - disconnectTimer = None - - def initReceive(self): - self.rootUnslicer = self.unslicerClass() - self.rootUnslicer.protocol = self - self.receiveStack = [self.rootUnslicer] - self.objectCounter = 0 - self.objects = {} - - self.inOpen = False # set during the Index Phase of an OPEN sequence - self.opentype = [] # accumulates Index Tokens - - # to pre-negotiate, set the negotiation parameters and set - # self.negotiated to True. It might instead make sense to fill - # self.buffer with the inbound negotiation block. 
- self.negotiated = False - self.connectionAbandoned = False - self.buffer = '' - - self.incomingVocabulary = {} - self.skipBytes = 0 # used to discard a single long token - self.discardCount = 0 # used to discard non-primitive objects - self.exploded = None # last-ditch error catcher - - def printStack(self, verbose=0): - print "STACK:" - for s in self.receiveStack: - if verbose: - d = s.__dict__.copy() - del d['protocol'] - print " %s: %s" % (s, d) - else: - print " %s" % s - - def setObject(self, count, obj): - for i in range(len(self.receiveStack)-1, -1, -1): - self.receiveStack[i].setObject(count, obj) - - def getObject(self, count): - for i in range(len(self.receiveStack)-1, -1, -1): - obj = self.receiveStack[i].getObject(count) - if obj is not None: - return obj - raise ValueError, "dangling reference '%d'" % count - - - def replaceIncomingVocabulary(self, vocabDict): - # maps small integer to string, should be called in response to a - # OPEN(set-vocab) sequence. - self.incomingVocabulary = vocabDict - - def addIncomingVocabulary(self, key, value): - # called in response to an OPEN(add-vocab) sequence - self.incomingVocabulary[key] = value - - def dataReceived(self, chunk): - if self.connectionAbandoned: - return - if self.useKeepalives: - self.dataLastReceivedAt = time.time() - try: - self.handleData(chunk) - except Exception, e: - if isinstance(e, BananaError): - # only reveal the reason if it is a protocol error - e.where = self.describeReceive() - msg = str(e) # send them the text of the error - else: - msg = ("exception while processing data, more " - "information in the logfiles") - if not self.logReceiveErrors: - msg += ", except that self.logReceiveErrors=False" - msg += ", sucks to be you" - self.sendError(msg) - self.reportReceiveError(Failure()) - self.connectionAbandoned = True - - def keepaliveTimerFired(self): - self.keepaliveTimer = None - age = time.time() - self.dataLastReceivedAt - if age > self.keepaliveTimeout: - # the connection looks 
idle, so let's provoke a response
-            self.sendPING()
-        # we restart the timer in either case
-        t = reactor.callLater(self.keepaliveTimeout + EPSILON,
-                              self.keepaliveTimerFired)
-        self.keepaliveTimer = t
-
-    def disconnectTimerFired(self):
-        self.disconnectTimer = None
-        age = time.time() - self.dataLastReceivedAt
-        if age > self.disconnectTimeout:
-            # the connection looks dead, so drop it
-            log.msg("disconnectTimeout, no data for %d seconds" % age)
-            self.connectionTimedOut()
-            # we assume that connectionTimedOut() will actually drop the
-            # connection, so we don't restart the timer. TODO: this might not
-            # be the right thing to do, perhaps we should restart it
-            # unconditionally.
-        else:
-            # we're still ok, so restart the timer
-            t = reactor.callLater(self.disconnectTimeout + EPSILON,
-                                  self.disconnectTimerFired)
-            self.disconnectTimer = t
-
-    def connectionTimedOut(self):
-        # this is to be implemented by higher-level code. It ought to log a
-        # suitable message and then drop the connection.
-        pass
-
-    def reportReceiveError(self, f):
-        # tests can override this to stash the failure somewhere else. Tests
-        # which intentionally cause an error set self.logReceiveErrors=False
-        # so that the log.err doesn't flunk the test.
-        log.msg("Banana.reportReceiveError: an error occurred during receive")
-        if self.logReceiveErrors:
-            log.err(f)
-        if self.debugReceive:
-            # trial watches log.err and treats it as a failure, so log the
-            # exception in a way that doesn't make trial flunk the test
-            log.msg(f.getBriefTraceback())
-
-
-    def handleData(self, chunk):
-        # buffer, assemble into tokens
-        # call self.receiveToken(token) with each
-        if self.skipBytes:
-            if len(chunk) < self.skipBytes:
-                # skip the whole chunk
-                self.skipBytes -= len(chunk)
-                return
-            # skip part of the chunk, and stop skipping
-            chunk = chunk[self.skipBytes:]
-            self.skipBytes = 0
-        buffer = self.buffer + chunk
-
-        # Loop through the available input data, extracting one token per
-        # pass.
-
-        while buffer:
-            assert self.buffer != buffer, \
-                   ("Banana.handleData: no progress made: %s" %
-                    (repr(buffer),))
-            self.buffer = buffer
-            pos = 0
-
-            for ch in buffer:
-                if ch >= HIGH_BIT_SET:
-                    break
-                pos = pos + 1
-                if pos > 64:
-                    # drop the connection. We log more of the buffer, but not
-                    # all of it, to make it harder for someone to spam our
-                    # logs.
-                    raise BananaError("token prefix is limited to 64 bytes: "
-                                      "but got %r" % (buffer[:200],))
-            else:
-                # we've run out of buffer without seeing the high bit, which
-                # means we're still waiting for the header to finish
-                return
-            assert pos <= 64
-
-            # At this point, the header and type byte have been received.
-            # The body may or may not be complete.
-
-            typebyte = buffer[pos]
-            if pos:
-                header = b1282int(buffer[:pos])
-            else:
-                header = 0
-
-            # rejected is set as soon as a violation is detected. It
-            # indicates that this single token will be rejected.
-
-            rejected = False
-            if self.discardCount:
-                rejected = True
-
-            wasInOpen = self.inOpen
-            if typebyte == OPEN:
-                if self.inOpen:
-                    raise BananaError("OPEN token followed by OPEN")
-                self.inOpen = True
-                # the inOpen flag is set as soon as the OPEN token is
-                # witnessed (even if it gets rejected later), because it
-                # means that there is a new sequence starting that must be
-                # handled somehow (either discarded or given to a new
-                # Unslicer).
-
-                # The inOpen flag is cleared when the Index Phase ends.
-                # There are two possibilities: 1) a new Unslicer is pushed,
-                # and tokens are delivered to it normally. 2) a Violation
-                # was raised, and the tokens must be discarded
-                # (self.discardCount++).
*any* True->False transition of - # self.inOpen must be accompanied by exactly one increment - # of self.discardCount - - # determine if this token will be accepted, and if so, how large - # it is allowed to be (for STRING and LONGINT/LONGNEG) - - if ((not rejected) and - (typebyte not in (PING, PONG, ABORT, CLOSE, ERROR))): - # PING, PONG, ABORT, CLOSE, and ERROR are always legal. All - # others (including OPEN) can be rejected by the schema: for - # example, a list of integers would reject STRING, VOCAB, and - # OPEN because none of those will produce integers. If the - # unslicer's .checkToken rejects the tokentype, its - # .receiveChild will immediately get an Failure - try: - # the purpose here is to limit the memory consumed by - # the body of a STRING, OPEN, LONGINT, or LONGNEG token - # (i.e., the size of a primitive type). If the sender - # wants to feed us more data than we want to accept, the - # checkToken() method should raise a Violation. This - # will never be called with ABORT or CLOSE types. - top = self.receiveStack[-1] - if wasInOpen: - top.openerCheckToken(typebyte, header, self.opentype) - else: - top.checkToken(typebyte, header) - except Violation, v: - rejected = True - f = BananaFailure() - if wasInOpen: - methname = "openerCheckToken" - else: - methname = "checkToken" - self.handleViolation(f, methname, inOpen=self.inOpen) - self.inOpen = False - - if typebyte == ERROR and header > SIZE_LIMIT: - # someone is trying to spam us with an ERROR token. Drop - # them with extreme prejudice. - raise BananaError("oversized ERROR token") - - rest = buffer[pos+1:] - - # determine what kind of token it is. Each clause finishes in - # one of four ways: - # - # raise BananaError: the protocol was violated so badly there is - # nothing to do for it but hang up abruptly - # - # return: if the token is not yet complete (need more data) - # - # continue: if the token is complete but no object (for - # handleToken) was produced, e.g. 
OPEN, CLOSE, ABORT - # - # obj=foo: the token is complete and an object was produced - # - # note that if rejected==True, the object is dropped instead of - # being passed up to the current Unslicer - - if typebyte == OPEN: - buffer = rest - self.inboundOpenCount = header - if rejected: - if self.debugReceive: - print "DROP (OPEN)" - if self.inOpen: - # we are discarding everything at the old level, so - # discard everything in the new level too - self.discardCount += 1 - if self.debugReceive: - print "++discardCount (OPEN), now %d" \ - % self.discardCount - self.inOpen = False - else: - # the checkToken handleViolation has already started - # discarding this new sequence, we don't have to - pass - else: - self.inOpen = True - self.opentype = [] - continue - - elif typebyte == CLOSE: - buffer = rest - count = header - if self.discardCount: - self.discardCount -= 1 - if self.debugReceive: - print "--discardCount (CLOSE), now %d" \ - % self.discardCount - else: - self.handleClose(count) - continue - - elif typebyte == ABORT: - buffer = rest - count = header - # TODO: this isn't really a Violation, but we need something - # to describe it. It does behave identically to what happens - # when receiveChild raises a Violation. The .handleViolation - # will pop the now-useless Unslicer and start discarding - # tokens just as if the Unslicer had made the decision. - if rejected: - if self.debugReceive: - print "DROP (ABORT)" - # I'm ignoring you, LALALALALA. - # - # In particular, do not deliver a second Violation - # because of the ABORT that we're supposed to be - # ignoring because of a first Violation that happened - # earlier. 
- continue - try: - # slightly silly way to do it, but nice and uniform - raise Violation("ABORT received") - except Violation: - f = BananaFailure() - self.handleViolation(f, "receive-abort") - continue - - elif typebyte == ERROR: - strlen = header - if len(rest) >= strlen: - # the whole string is available - buffer = rest[strlen:] - obj = rest[:strlen] - # handleError must drop the connection - self.handleError(obj) - return - else: - return # there is more to come - - elif typebyte == LIST: - raise BananaError("oldbanana peer detected, " + - "compatibility code not yet written") - #listStack.append((header, [])) - #buffer = rest - - elif typebyte == STRING: - strlen = header - if len(rest) >= strlen: - # the whole string is available - buffer = rest[strlen:] - obj = rest[:strlen] - # although it might be rejected - else: - # there is more to come - if rejected: - # drop all we have and note how much more should be - # dropped - if self.debugReceive: - print "DROPPED some string bits" - self.skipBytes = strlen - len(rest) - self.buffer = "" - return - - elif typebyte == INT: - buffer = rest - obj = int(header) - elif typebyte == NEG: - buffer = rest - # -2**31 is too large for a positive int, so go through - # LongType first - obj = int(-long(header)) - elif typebyte == LONGINT or typebyte == LONGNEG: - strlen = header - if len(rest) >= strlen: - # the whole number is available - buffer = rest[strlen:] - obj = bytes_to_long(rest[:strlen]) - if typebyte == LONGNEG: - obj = -obj - # although it might be rejected - else: - # there is more to come - if rejected: - # drop all we have and note how much more should be - # dropped - self.skipBytes = strlen - len(rest) - self.buffer = "" - return - - elif typebyte == VOCAB: - buffer = rest - obj = self.incomingVocabulary[header] - # TODO: bail if expanded string is too big - # this actually means doing self.checkToken(VOCAB, len(obj)) - # but we have to make sure we handle the rejection properly - - elif typebyte == 
FLOAT: - if len(rest) >= 8: - buffer = rest[8:] - obj = struct.unpack("!d", rest[:8])[0] - else: - # this case is easier than STRING, because it is only 8 - # bytes. We don't bother skipping anything. - return - - elif typebyte == PING: - buffer = rest - self.sendPONG(header) - continue # otherwise ignored - - elif typebyte == PONG: - buffer = rest - continue # otherwise ignored - - else: - raise BananaError("Invalid Type Byte 0x%x" % ord(typebyte)) - - if not rejected: - if self.inOpen: - self.handleOpen(self.inboundOpenCount, obj) - # handleOpen might push a new unslicer and clear - # .inOpen, or leave .inOpen true and append the object - # to .indexOpen - else: - self.handleToken(obj) - else: - if self.debugReceive: - print "DROP", type(obj), obj - pass # drop the object - - # while loop ends here - - self.buffer = '' - - - def handleOpen(self, openCount, indexToken): - self.opentype.append(indexToken) - opentype = tuple(self.opentype) - if self.debugReceive: - print "handleOpen(%d,%s)" % (openCount, indexToken) - objectCount = self.objectCounter - top = self.receiveStack[-1] - try: - # obtain a new Unslicer to handle the object - child = top.doOpen(opentype) - if not child: - if self.debugReceive: - print " doOpen wants more index tokens" - return # they want more index tokens, leave .inOpen=True - if self.debugReceive: - print " opened[%d] with %s" % (openCount, child) - except Violation, v: - # must discard the rest of the child object. 
There is no new - # unslicer pushed yet, so we don't use abandonUnslicer - self.inOpen = False - f = BananaFailure() - self.handleViolation(f, "doOpen", inOpen=True) - return - - assert tokens.IUnslicer.providedBy(child), "child is %s" % child - self.objectCounter += 1 - self.inOpen = False - child.protocol = self - child.openCount = openCount - child.parent = top - self.receiveStack.append(child) - try: - child.start(objectCount) - except Violation, v: - # the child is now on top, so use abandonUnslicer to discard the - # rest of the child - f = BananaFailure() - # notifies the new child - self.handleViolation(f, "start") - - def handleToken(self, token, ready_deferred=None): - top = self.receiveStack[-1] - if self.debugReceive: print "handleToken(%s)" % (token,) - if ready_deferred: - assert isinstance(ready_deferred, defer.Deferred) - try: - top.receiveChild(token, ready_deferred) - except Violation, v: - # this is how the child says "I've been contaminated". We don't - # pop them automatically: if they want that, they should return - # back the failure in their reportViolation method. - f = BananaFailure() - self.handleViolation(f, "receiveChild") - - def handleClose(self, closeCount): - if self.debugReceive: - print "handleClose(%d)" % closeCount - if self.receiveStack[-1].openCount != closeCount: - raise BananaError("lost sync, got CLOSE(%d) but expecting %s" \ - % (closeCount, self.receiveStack[-1].openCount)) - - child = self.receiveStack[-1] # don't pop yet: describe() needs it - - try: - obj, ready_deferred = child.receiveClose() - except Violation, v: - # the child is contaminated. However, they're finished, so we - # don't have to discard anything. Just give an Failure to the - # parent instead of the object they would have returned. 
- f = BananaFailure() - self.handleViolation(f, "receiveClose", inClose=True) - return - if self.debugReceive: print "receiveClose returned", obj - - try: - child.finish() - except Violation, v: - # .finish could raise a Violation if an object that references - # the child is just now deciding that they don't like it - # (perhaps their TupleConstraint couldn't be asserted until the - # tuple was complete and referenceable). In this case, the child - # has produced a valid object, but an earlier (incomplete) - # object is not valid. So we treat this as if this child itself - # raised the Violation. The .where attribute will point to this - # child, which is the node that caused somebody problems, but - # will be marked , which indicates that it wasn't the - # child itself which raised the Violation. TODO: not true - # - # TODO: it would be more useful if the UF could also point to - # the completing object (the one which raised Violation). - - f = BananaFailure() - self.handleViolation(f, "finish", inClose=True) - return - - self.receiveStack.pop() - - # now deliver the object to the parent - self.handleToken(obj, ready_deferred) - - def handleViolation(self, f, methname, inOpen=False, inClose=False): - """An Unslicer has decided to give up, or we have given up on it - (because we received an ABORT token). - """ - - where = self.describeReceive() - f.value.setLocation(where) - - if self.debugReceive: - print " handleViolation-%s (inOpen=%s, inClose=%s): %s" \ - % (methname, inOpen, inClose, f) - - assert isinstance(f, BananaFailure) - - if self.logViolations: - log.msg("Violation in %s at %s" % (methname, where)) - log.err(f) - - if inOpen: - self.discardCount += 1 - if self.debugReceive: - print " ++discardCount (inOpen), now %d" % self.discardCount - - while True: - # tell the parent that their child is dead. This is useful for - # things like PB, which may want to errback the current request. 
- if self.debugReceive: - print " reportViolation to %s" % self.receiveStack[-1] - f = self.receiveStack[-1].reportViolation(f) - if not f: - # they absorbed the failure - if self.debugReceive: - print " buck stopped, error absorbed" - break - - # the old top wants to propagate it upwards - if self.debugReceive: - print " popping %s" % self.receiveStack[-1] - if not inClose: - self.discardCount += 1 - if self.debugReceive: - print " ++discardCount (pop, not inClose), now %d" \ - % self.discardCount - inClose = False - - old = self.receiveStack.pop() - - try: - # TODO: if handleClose encountered a Violation in .finish, - # we will end up calling it a second time - old.finish() # ?? - except Violation: - pass # they've already failed once - - if not self.receiveStack: - # now there's nobody left to create new Unslicers, so we - # must drop the connection - why = "Oh my god, you killed the RootUnslicer! " + \ - "You bastard!!" - raise BananaError(why) - - # now we loop until someone absorbs the failure - - - def handleError(self, msg): - log.msg("got banana ERROR from remote side: %s" % msg) - self.transport.loseConnection(BananaError("remote error: %s" % msg)) - - - def describeReceive(self): - where = [] - for i in self.receiveStack: - try: - piece = i.describe() - except: - piece = "???" - #raise - where.append(piece) - return ".".join(where) - - def receivedObject(self, obj): - """Decoded objects are delivered here, unless you use a RootUnslicer - variant which does something else in its .childFinished method. 
- """ - raise NotImplementedError - - def reportViolation(self, why): - return why - diff --git a/src/foolscap/foolscap/base32.py b/src/foolscap/foolscap/base32.py deleted file mode 100644 index b122c0e8..00000000 --- a/src/foolscap/foolscap/base32.py +++ /dev/null @@ -1,25 +0,0 @@ - -# copied from the waterken.org Web-Calculus python implementation - -def encode(bytes): - chars = "" - buffer = 0; - n = 0; - for b in bytes: - buffer = buffer << 8 - buffer = buffer | ord(b) - n = n + 8 - while n >= 5: - chars = chars + _encode((buffer >> (n - 5)) & 0x1F) - n = n - 5; - buffer = buffer & 0x1F # To quiet any warning from << operator - if n > 0: - buffer = buffer << (5 - n) - chars = chars + _encode(buffer & 0x1F) - return chars - -def _encode(v): - if v < 26: - return chr(ord('a') + v) - else: - return chr(ord('2') + (v - 26)) diff --git a/src/foolscap/foolscap/broker.py b/src/foolscap/foolscap/broker.py deleted file mode 100644 index 4da9ebc3..00000000 --- a/src/foolscap/foolscap/broker.py +++ /dev/null @@ -1,662 +0,0 @@ - - -# This module is responsible for the per-connection Broker object - -import types -from itertools import count - -from zope.interface import implements -from twisted.python import log -from twisted.internet import defer, error -from twisted.internet import interfaces as twinterfaces -from twisted.internet.protocol import connectionDone - -from foolscap import banana, tokens, ipb, vocab -from foolscap import call, slicer, referenceable, copyable, remoteinterface -from foolscap.constraint import Any -from foolscap.tokens import Violation, BananaError -from foolscap.ipb import DeadReferenceError -from foolscap.slicers.root import RootSlicer, RootUnslicer -from foolscap.eventual import eventually - - -PBTopRegistry = { - ("call",): call.CallUnslicer, - ("answer",): call.AnswerUnslicer, - ("error",): call.ErrorUnslicer, - } - -PBOpenRegistry = { - ('arguments',): call.ArgumentUnslicer, - ('my-reference',): referenceable.ReferenceUnslicer, - 
('your-reference',): referenceable.YourReferenceUnslicer, - ('their-reference',): referenceable.TheirReferenceUnslicer, - # ('copyable', classname) is handled inline, through the CopyableRegistry - } - -class PBRootUnslicer(RootUnslicer): - # topRegistries defines what objects are allowed at the top-level - topRegistries = [PBTopRegistry] - # openRegistries defines what objects are allowed at the second level and - # below - openRegistries = [slicer.UnslicerRegistry, PBOpenRegistry] - logViolations = False - - def checkToken(self, typebyte, size): - if typebyte != tokens.OPEN: - raise BananaError("top-level must be OPEN") - - def openerCheckToken(self, typebyte, size, opentype): - if typebyte == tokens.STRING: - if len(opentype) == 0: - if size > self.maxIndexLength: - why = "first opentype STRING token is too long, %d>%d" % \ - (size, self.maxIndexLength) - raise Violation(why) - if opentype == ("copyable",): - # TODO: this is silly, of course (should pre-compute maxlen) - maxlen = reduce(max, - [len(cname) \ - for cname in copyable.CopyableRegistry.keys()] - ) - if size > maxlen: - why = "copyable-classname token is too long, %d>%d" % \ - (size, maxlen) - raise Violation(why) - elif typebyte == tokens.VOCAB: - return - else: - # TODO: hack for testing - raise Violation("index token 0x%02x not STRING or VOCAB" % \ - ord(typebyte)) - raise BananaError("index token 0x%02x not STRING or VOCAB" % \ - ord(typebyte)) - - def open(self, opentype): - # used for lower-level objects, delegated up from childunslicer.open - assert len(self.protocol.receiveStack) > 1 - if opentype[0] == 'copyable': - if len(opentype) > 1: - classname = opentype[1] - try: - factory = copyable.CopyableRegistry[classname] - except KeyError: - raise Violation("unknown RemoteCopy class '%s'" \ - % classname) - child = factory() - child.broker = self.broker - return child - else: - return None # still need classname - for reg in self.openRegistries: - opener = reg.get(opentype) - if opener is not 
None: - child = opener() - break - else: - raise Violation("unknown OPEN type %s" % (opentype,)) - child.broker = self.broker - return child - - def doOpen(self, opentype): - child = RootUnslicer.doOpen(self, opentype) - if child: - child.broker = self.broker - return child - - def reportViolation(self, f): - if self.logViolations: - print "hey, something failed:", f - return None # absorb the failure - - def receiveChild(self, token, ready_deferred): - if isinstance(token, call.InboundDelivery): - self.broker.scheduleCall(token, ready_deferred) - - - -class PBRootSlicer(RootSlicer): - slicerTable = {types.MethodType: referenceable.CallableSlicer, - types.FunctionType: referenceable.CallableSlicer, - } - def registerReference(self, refid, obj): - assert 0 - - def slicerForObject(self, obj): - # zope.interface doesn't do transitive adaptation, which is a shame - # because we want to let people register ICopyable adapters for - # third-party code, and there is an ICopyable->ISlicer adapter - # defined in copyable.py, but z.i won't do the transitive - # ThirdPartyClass -> ICopyable -> ISlicer - # so instead we manually do it here - s = tokens.ISlicer(obj, None) - if s: - return s - copier = copyable.ICopyable(obj, None) - if copier: - s = tokens.ISlicer(copier) - return s - return RootSlicer.slicerForObject(self, obj) - - -class RIBroker(remoteinterface.RemoteInterface): - def getReferenceByName(name=str): - """If I have published an object by that name, return a reference to - it.""" - # return Remote(interface=any) - return Any() - def decref(clid=int, count=int): - """Release some references to my-reference 'clid'. I will return an - ack when the operation has completed.""" - return None - def decgift(giftID=int, count=int): - """Release some reference to a their-reference 'giftID' that was - sent earlier.""" - return None - - -class Broker(banana.Banana, referenceable.Referenceable): - """I manage a connection to a remote Broker. 
- - @ivar tub: the L{Tub} which contains us - @ivar yourReferenceByCLID: maps your CLID to a RemoteReferenceData - #@ivar yourReferenceByName: maps a per-Tub name to a RemoteReferenceData - @ivar yourReferenceByURL: maps a global URL to a RemoteReferenceData - - """ - - implements(RIBroker) - slicerClass = PBRootSlicer - unslicerClass = PBRootUnslicer - unsafeTracebacks = True - requireSchema = False - disconnected = False - factory = None - tub = None - remote_broker = None - startingTLS = False - startedTLS = False - - def __init__(self, params={}, - keepaliveTimeout=None, disconnectTimeout=None): - banana.Banana.__init__(self, params) - self.keepaliveTimeout = keepaliveTimeout - self.disconnectTimeout = disconnectTimeout - self._banana_decision_version = params.get("banana-decision-version") - vocab_table_index = params.get('initial-vocab-table-index') - if vocab_table_index: - table = vocab.INITIAL_VOCAB_TABLES[vocab_table_index] - self.populateVocabTable(table) - self.initBroker() - - def initBroker(self): - self.rootSlicer.broker = self - self.rootUnslicer.broker = self - - # tracking Referenceables - # sending side uses these - self.nextCLID = count(1).next # 0 is for the broker - self.myReferenceByPUID = {} # maps ref.processUniqueID to a tracker - self.myReferenceByCLID = {} # maps CLID to a tracker - # receiving side uses these - self.yourReferenceByCLID = {} - self.yourReferenceByURL = {} - - # tracking Gifts - self.nextGiftID = count().next - self.myGifts = {} # maps (broker,clid) to (rref, giftID, count) - self.myGiftsByGiftID = {} # maps giftID to (broker,clid) - - # remote calls - # sending side uses these - self.nextReqID = count(1).next # 0 means "we don't want a response" - self.waitingForAnswers = {} # we wait for the other side to answer - self.disconnectWatchers = [] - # receiving side uses these - self.inboundDeliveryQueue = [] - self._call_is_running = False - self.activeLocalCalls = {} # the other side wants an answer from us - - def 
setTub(self, tub): - assert ipb.ITub.providedBy(tub) - self.tub = tub - self.unsafeTracebacks = tub.unsafeTracebacks - if tub.debugBanana: - self.debugSend = True - self.debugReceive = True - - def connectionMade(self): - banana.Banana.connectionMade(self) - # create the remote_broker object. We don't use the usual - # reference-counting mechanism here, because this is a synthetic - # object that lives forever. - tracker = referenceable.RemoteReferenceTracker(self, 0, None, - "RIBroker") - self.remote_broker = referenceable.RemoteReference(tracker) - - # connectionTimedOut is called in response to the Banana layer detecting - # the lack of connection activity - - def connectionTimedOut(self): - self.shutdown() - - def shutdown(self): - self.disconnectWatchers = [] - self.transport.loseConnection() - - def connectionLost(self, why): - self.disconnected = True - self.remote_broker = None - self.abandonAllRequests(why) - # TODO: why reset all the tables to something useable? There may be - # outstanding RemoteReferences that point to us, but I don't see why - # that requires all these empty dictionaries. - self.myReferenceByPUID = {} - self.myReferenceByCLID = {} - self.yourReferenceByCLID = {} - self.yourReferenceByURL = {} - self.myGifts = {} - self.myGiftsByGiftID = {} - for (cb,args,kwargs) in self.disconnectWatchers: - eventually(cb, *args, **kwargs) - self.disconnectWatchers = [] - banana.Banana.connectionLost(self, why) - if self.tub: - # TODO: remove the conditional. 
It is only here to accommodate - # some tests: test_pb.TestCall.testDisconnect[123] - self.tub.brokerDetached(self, why) - - def notifyOnDisconnect(self, callback, *args, **kwargs): - marker = (callback, args, kwargs) - if self.disconnected: - eventually(callback, *args, **kwargs) - else: - self.disconnectWatchers.append(marker) - return marker - def dontNotifyOnDisconnect(self, marker): - if self.disconnected: - return - # be tolerant of attempts to unregister a callback that has already - # fired. I think it is hard to write safe code without this - # tolerance. - - # TODO: on the other hand, I'm not sure this is the best policy, - # since you lose the feedback that tells you about - # unregistering-the-wrong-thing bugs. We need to look at the way that - # register/unregister gets used and see if there is a way to retain - # the typechecking that results from insisting that you can only - # remove something that was still in the list. - if marker in self.disconnectWatchers: - self.disconnectWatchers.remove(marker) - - # methods to handle RemoteInterfaces - def getRemoteInterfaceByName(self, name): - return remoteinterface.RemoteInterfaceRegistry[name] - - # methods to send my Referenceables to the other side - - def getTrackerForMyReference(self, puid, obj): - tracker = self.myReferenceByPUID.get(puid) - if not tracker: - # need to add one - clid = self.nextCLID() - tracker = referenceable.ReferenceableTracker(self.tub, - obj, puid, clid) - self.myReferenceByPUID[puid] = tracker - self.myReferenceByCLID[clid] = tracker - return tracker - - def getTrackerForMyCall(self, puid, obj): - # just like getTrackerForMyReference, but with a negative clid - tracker = self.myReferenceByPUID.get(puid) - if not tracker: - # need to add one - clid = self.nextCLID() - clid = -clid - tracker = referenceable.ReferenceableTracker(self.tub, - obj, puid, clid) - self.myReferenceByPUID[puid] = tracker - self.myReferenceByCLID[clid] = tracker - return tracker - - # methods to handle 
inbound 'my-reference' sequences - - def getTrackerForYourReference(self, clid, interfaceName=None, url=None): - """The far end holds a Referenceable and has just sent us a reference - to it (expressed as a small integer). If this is a new reference, - they will give us an interface name too, and possibly a global URL - for it. Obtain a RemoteReference object (creating it if necessary) to - give to the local recipient. - - The sender remembers that we hold a reference to their object. When - our RemoteReference goes away, we send a decref message to them, so - they can possibly free their object. """ - - assert type(interfaceName) is str or interfaceName is None - if url is not None: - assert type(url) is str - tracker = self.yourReferenceByCLID.get(clid) - if not tracker: - # TODO: translate interfaceNames to RemoteInterfaces - if clid >= 0: - trackerclass = referenceable.RemoteReferenceTracker - else: - trackerclass = referenceable.RemoteMethodReferenceTracker - tracker = trackerclass(self, clid, url, interfaceName) - self.yourReferenceByCLID[clid] = tracker - if url: - self.yourReferenceByURL[url] = tracker - return tracker - - def freeYourReference(self, tracker, count): - # this is called when the RemoteReference is deleted - if not self.remote_broker: # tests do not set this up - self.freeYourReferenceTracker(None, tracker) - return - try: - rb = self.remote_broker - # TODO: do we want callRemoteOnly here? is there a way we can - # avoid wanting to know when the decref has completed? Only if we - # send the interface list and URL on every occurrence of the - # my-reference sequence. Either A) we use callRemote("decref") - # and wait until the ack to free the tracker, or B) we use - # callRemoteOnly("decref") and free the tracker right away. 
In - # case B, the far end has no way to know that we've just freed - # the tracker and will therefore forget about everything they - # told us (including the interface list), so they cannot - # accurately do anything special on the "first" send of this - # reference. Which means that if we do B, we must either send - # that extra information on every my-reference sequence, or do - # without it, or make it optional, or retrieve it separately, or - # something. - - # rb.callRemoteOnly("decref", clid=tracker.clid, count=count) - # self.freeYourReferenceTracker('bogus', tracker) - # return - - d = rb.callRemote("decref", clid=tracker.clid, count=count) - # if the connection was lost before we can get an ack, we're - # tearing this down anyway - def _ignore_loss(f): - f.trap(DeadReferenceError, - error.ConnectionLost, - error.ConnectionDone) - return None - d.addErrback(_ignore_loss) - # once the ack comes back, or if we know we'll never get one, - # release the tracker - d.addCallback(self.freeYourReferenceTracker, tracker) - except: - log.msg("failure during freeRemoteReference") - log.err() - - def freeYourReferenceTracker(self, res, tracker): - if tracker.received_count != 0: - return - if self.yourReferenceByCLID.has_key(tracker.clid): - del self.yourReferenceByCLID[tracker.clid] - if tracker.url and self.yourReferenceByURL.has_key(tracker.url): - del self.yourReferenceByURL[tracker.url] - - - # methods to handle inbound 'your-reference' sequences - - def getMyReferenceByCLID(self, clid): - """clid is the connection-local ID of the Referenceable the other - end is trying to invoke or point to. If it is a number, they want an - implicitly-created per-connection object that we sent to them at - some point in the past. If it is a string, they want an object that - was registered with our Factory. 
- """ - - obj = None - assert isinstance(clid, (int, long)) - if clid == 0: - return self - return self.myReferenceByCLID[clid].obj - # obj = IReferenceable(obj) - # assert isinstance(obj, pb.Referenceable) - # obj needs .getMethodSchema, which needs .getArgConstraint - - def remote_decref(self, clid, count): - # invoked when the other side sends us a decref message - assert isinstance(clid, (int, long)) - assert clid != 0 - tracker = self.myReferenceByCLID[clid] - done = tracker.decref(count) - if done: - del self.myReferenceByPUID[tracker.puid] - del self.myReferenceByCLID[clid] - - # methods to send RemoteReference 'gifts' to third-parties - - def makeGift(self, rref): - # return the giftid - broker, clid = rref.tracker.broker, rref.tracker.clid - i = (broker, clid) - old = self.myGifts.get(i) - if old: - rref, giftID, count = old - self.myGifts[i] = (rref, giftID, count+1) - else: - giftID = self.nextGiftID() - self.myGiftsByGiftID[giftID] = i - self.myGifts[i] = (rref, giftID, 1) - return giftID - - def remote_decgift(self, giftID, count): - broker, clid = self.myGiftsByGiftID[giftID] - rref, giftID, gift_count = self.myGifts[(broker, clid)] - gift_count -= count - if gift_count == 0: - del self.myGiftsByGiftID[giftID] - del self.myGifts[(broker, clid)] - else: - self.myGifts[(broker, clid)] = (rref, giftID, gift_count) - - # methods to deal with URLs - - def getYourReferenceByName(self, name): - d = self.remote_broker.callRemote("getReferenceByName", name=name) - return d - - def remote_getReferenceByName(self, name): - return self.tub.getReferenceForName(name) - - # remote-method-invocation methods, calling side, invoked by - # RemoteReference.callRemote and CallSlicer - - def newRequestID(self): - if self.disconnected: - raise DeadReferenceError("Calling Stale Broker") - return self.nextReqID() - - def addRequest(self, req): - req.broker = self - self.waitingForAnswers[req.reqID] = req - - def removeRequest(self, req): - del 
self.waitingForAnswers[req.reqID] - - def getRequest(self, reqID): - # invoked by AnswerUnslicer and ErrorUnslicer - try: - return self.waitingForAnswers[reqID] - except KeyError: - raise Violation("non-existent reqID '%d'" % reqID) - - def abandonAllRequests(self, why): - for req in self.waitingForAnswers.values(): - req.fail(why) - self.waitingForAnswers = {} - - # target-side, invoked by CallUnslicer - - def getRemoteInterfaceByName(self, riname): - # this lives in the broker because it ought to be per-connection - return remoteinterface.RemoteInterfaceRegistry[riname] - - def getSchemaForMethod(self, rifaces, methodname): - # this lives in the Broker so it can override the resolution order, - # not that overlapping RemoteInterfaces should be allowed to happen - # all that often - for ri in rifaces: - m = ri.get(methodname) - if m: - return m - return None - - def scheduleCall(self, delivery, ready_deferred): - self.inboundDeliveryQueue.append( (delivery,ready_deferred) ) - eventually(self.doNextCall) - - def doNextCall(self): - if self._call_is_running: - return - if not self.inboundDeliveryQueue: - return - delivery, ready_deferred = self.inboundDeliveryQueue.pop(0) - self._call_is_running = True - if not ready_deferred: - ready_deferred = defer.succeed(None) - d = ready_deferred - d.addCallback(lambda res: self._doCall(delivery)) - d.addCallback(self._callFinished, delivery) - d.addErrback(self.callFailed, delivery.reqID, delivery) - def _done(res): - self._call_is_running = False - eventually(self.doNextCall) - d.addBoth(_done) - return None - - def _doCall(self, delivery): - obj = delivery.obj - args = delivery.allargs.args - kwargs = delivery.allargs.kwargs - for i in args + kwargs.values(): - assert not isinstance(i, defer.Deferred) - - if delivery.methodSchema: - # we asked about each argument on the way in, but ask again so - # they can look for missing arguments. TODO: see if we can remove - # the redundant per-argument checks. 
- delivery.methodSchema.checkAllArgs(args, kwargs, True) - - # interesting case: if the method completes successfully, but - # our schema prohibits us from sending the result (perhaps the - # method returned an int but the schema insists upon a string). - # TODO: move the return-value schema check into - # Referenceable.doRemoteCall, so the exception's traceback will be - # attached to the object that caused it - if delivery.methodname is None: - assert callable(obj) - return obj(*args, **kwargs) - else: - obj = ipb.IRemotelyCallable(obj) - return obj.doRemoteCall(delivery.methodname, args, kwargs) - - - def _callFinished(self, res, delivery): - reqID = delivery.reqID - if reqID == 0: - return - methodSchema = delivery.methodSchema - assert self.activeLocalCalls[reqID] - if methodSchema: - try: - methodSchema.checkResults(res, False) # may raise Violation - except Violation, v: - v.prependLocation("in return value of %s.%s" % - (delivery.obj, methodSchema.name)) - raise - - answer = call.AnswerSlicer(reqID, res) - # once the answer has started transmitting, any exceptions must be - # logged and dropped, and not turned into an Error to be sent. - try: - self.send(answer) - # TODO: .send should return a Deferred that fires when the last - # byte has been queued, and we should delete the local note then - except: - log.err() - del self.activeLocalCalls[reqID] - - def callFailed(self, f, reqID, delivery=None): - # this may be called either when an inbound schema is violated, or - # when the method is run and raises an exception. If a Violation is - # raised after we receive the reqID but before we've actually invoked - # the method, we are called by CallUnslicer.reportViolation and don't - # get a delivery= argument. 
- if delivery: - if (self.tub and self.tub.logLocalFailures) or not self.tub: - # the 'not self.tub' case is for unit tests - delivery.logFailure(f) - if reqID != 0: - assert self.activeLocalCalls[reqID] - self.send(call.ErrorSlicer(reqID, f)) - del self.activeLocalCalls[reqID] - -# this loopback stuff is based upon twisted.protocols.loopback, except that -# we use it for real, not just for testing. The IConsumer stuff hasn't been -# tested at all. - -class _LoopbackAddress(object): - implements(twinterfaces.IAddress) - -class LoopbackTransport(object): - # we always create these in pairs, with .peer pointing at each other - implements(twinterfaces.ITransport, twinterfaces.IConsumer) - - producer = None - - def __init__(self): - self.connected = True - def setPeer(self, peer): - self.peer = peer - - def write(self, bytes): - eventually(self.peer.dataReceived, bytes) - def writeSequence(self, iovec): - self.write(''.join(iovec)) - - def dataReceived(self, data): - if self.connected: - self.protocol.dataReceived(data) - - def loseConnection(self, _connDone=connectionDone): - if not self.connected: - return - self.connected = False - eventually(self.peer.connectionLost, _connDone) - eventually(self.protocol.connectionLost, _connDone) - def connectionLost(self, reason): - if not self.connected: - return - self.connected = False - self.protocol.connectionLost(reason) - - def getPeer(self): - return _LoopbackAddress() - def getHost(self): - return _LoopbackAddress() - - # IConsumer - def registerProducer(self, producer, streaming): - assert self.producer is None - self.producer = producer - self.streamingProducer = streaming - self._pollProducer() - - def unregisterProducer(self): - assert self.producer is not None - self.producer = None - - def _pollProducer(self): - if self.producer is not None and not self.streamingProducer: - self.producer.resumeProducing() - - -import debug -class LoggingBroker(debug.LoggingBananaMixin, Broker): - pass - diff --git 
a/src/foolscap/foolscap/call.py b/src/foolscap/foolscap/call.py deleted file mode 100644 index e675a7b8..00000000 --- a/src/foolscap/foolscap/call.py +++ /dev/null @@ -1,858 +0,0 @@ - -from twisted.python import failure, log, reflect -from twisted.internet import defer - -from foolscap import copyable, slicer, tokens -from foolscap.copyable import AttributeDictConstraint -from foolscap.constraint import ByteStringConstraint -from foolscap.slicers.list import ListConstraint -from tokens import BananaError, Violation -from foolscap.util import AsyncAND - - -class FailureConstraint(AttributeDictConstraint): - opentypes = [("copyable", "twisted.python.failure.Failure")] - name = "FailureConstraint" - klass = failure.Failure - - def __init__(self): - attrs = [('type', ByteStringConstraint(200)), - ('value', ByteStringConstraint(1000)), - ('traceback', ByteStringConstraint(2000)), - ('parents', ListConstraint(ByteStringConstraint(200))), - ] - AttributeDictConstraint.__init__(self, *attrs) - - def checkObject(self, obj, inbound): - if not isinstance(obj, self.klass): - raise Violation("is not an instance of %s" % self.klass) - - -class PendingRequest(object): - # this object is a local representation of a message we have sent to - # someone else, that will be executed on their end. 
- active = True - methodName = None # for debugging - - def __init__(self, reqID, rref=None): - self.reqID = reqID - self.rref = rref # keep it alive - self.broker = None # if set, the broker knows about us - self.deferred = defer.Deferred() - self.constraint = None # this constrains the results - - def setConstraint(self, constraint): - self.constraint = constraint - - def complete(self, res): - if self.broker: - self.broker.removeRequest(self) - if self.active: - self.active = False - self.deferred.callback(res) - else: - log.msg("PendingRequest.complete called on an inactive request") - - def fail(self, why): - if self.active: - if self.broker: - self.broker.removeRequest(self) - self.active = False - self.failure = why - if (self.broker and - self.broker.tub and - self.broker.tub.logRemoteFailures): - log.msg("an outbound callRemote (that we sent to someone " - "else) failed on the far end") - log.msg(" reqID=%d, rref=%s, methname=%s" - % (self.reqID, self.rref, self.methodName)) - stack = why.getTraceback() - # TODO: include the first few letters of the remote tubID in - # this REMOTE tag - stack = "REMOTE: " + stack.replace("\n", "\nREMOTE: ") - log.msg(" the failure was:") - log.msg(stack) - self.deferred.errback(why) - else: - log.msg("multiple failures") - log.msg("first one was:", self.failure) - log.msg("this one was:", why) - log.err("multiple failures indicate a problem") - -class ArgumentSlicer(slicer.ScopedSlicer): - opentype = ('arguments',) - - def __init__(self, args, kwargs): - slicer.ScopedSlicer.__init__(self, None) - self.args = args - self.kwargs = kwargs - self.which = "" - - def sliceBody(self, streamable, banana): - yield len(self.args) - for i,arg in enumerate(self.args): - self.which = "arg[%d]" % i - yield arg - keys = self.kwargs.keys() - keys.sort() - for argname in keys: - self.which = "arg[%s]" % argname - yield argname - yield self.kwargs[argname] - - def describe(self): - return "<%s>" % self.which - - -class 
CallSlicer(slicer.ScopedSlicer): - opentype = ('call',) - - def __init__(self, reqID, clid, methodname, args, kwargs): - slicer.ScopedSlicer.__init__(self, None) - self.reqID = reqID - self.clid = clid - self.methodname = methodname - self.args = args - self.kwargs = kwargs - - def sliceBody(self, streamable, banana): - yield self.reqID - yield self.clid - yield self.methodname - yield ArgumentSlicer(self.args, self.kwargs) - - def describe(self): - return "<call-%s-%s-%s>" % (self.reqID, self.clid, self.methodname) - -class InboundDelivery: - """An inbound message that has not yet been delivered. - - This is created when a 'call' sequence has finished being received. The - Broker will add it to a queue. The delivery at the head of the queue is - serviced when all of its arguments have been resolved. - - The only way that the arguments might not all be available is if one of - the Unslicers which created them has provided a 'ready_deferred' along - with the prospective object. The only standard Unslicer which does this - is the TheirReferenceUnslicer, which handles introductions. (custom - Unslicers might also provide a ready_deferred, for example a URL - slicer/unslicer pair for which the receiving end fetches the target of - the URL as its value, or a UnixFD slicer/unslicer that had to wait for a - side-channel unix-domain socket to finish transferring control over the - FD to the recipient before being ready). - - Most Unslicers refuse to accept unready objects as their children (most - implementations of receiveChild() do 'assert ready_deferred is None'). - The CallUnslicer is fairly unique in not rejecting such objects. - - We do require, however, that all of the arguments be at least - referenceable.
This is not generally a problem: the only time an - unslicer's receiveChild() can get a non-referenceable object (represented - by a Deferred) is if that unslicer is participating in a reference cycle - that has not yet completed, and CallUnslicers only live at the top level, - above any cycles. - """ - - def __init__(self, reqID, obj, - interface, methodname, methodSchema, - allargs): - self.reqID = reqID - self.obj = obj - self.interface = interface - self.methodname = methodname - self.methodSchema = methodSchema - self.allargs = allargs - - def logFailure(self, f): - # called if tub.logLocalFailures is True - log.msg("an inbound callRemote that we executed (on behalf of " - "someone else) failed") - log.msg(" reqID=%d, rref=%s, methname=%s" % - (self.reqID, self.obj, self.methodname)) - log.msg(" args=%s" % (self.allargs.args,)) - log.msg(" kwargs=%s" % (self.allargs.kwargs,)) - if isinstance(f.type, str): - stack = "getTraceback() not available for string exceptions\n" - else: - stack = f.getTraceback() - # TODO: trim stack to everything below Broker._doCall - stack = "LOCAL: " + stack.replace("\n", "\nLOCAL: ") - log.msg(" the failure was:") - log.msg(stack) - -class ArgumentUnslicer(slicer.ScopedUnslicer): - methodSchema = None - debug = False - - def setConstraint(self, methodSchema): - self.methodSchema = methodSchema - - def start(self, count): - if self.debug: - log.msg("%s.start: %s" % (self, count)) - self.numargs = None - self.args = [] - self.kwargs = {} - self.argname = None - self.argConstraint = None - self.num_unreferenceable_children = 0 - self._all_children_are_referenceable_d = None - self._ready_deferreds = [] - self.closed = False - - def checkToken(self, typebyte, size): - if self.numargs is None: - # waiting for positional-arg count - if typebyte != tokens.INT: - raise BananaError("posarg count must be an INT") - return - if len(self.args) < self.numargs: - # waiting for a positional arg - if self.argConstraint: - 
self.argConstraint.checkToken(typebyte, size) - return - if self.argname is None: - # waiting for the name of a keyword arg - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("kwarg name must be a STRING") - # TODO: limit to longest argument name of the method? - return - # waiting for the value of a kwarg - if self.argConstraint: - self.argConstraint.checkToken(typebyte, size) - - def doOpen(self, opentype): - if self.argConstraint: - self.argConstraint.checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - if self.argConstraint: - unslicer.setConstraint(self.argConstraint) - return unslicer - - def receiveChild(self, token, ready_deferred=None): - if self.debug: - log.msg("%s.receiveChild: %s %s %s %s %s args=%s kwargs=%s" % - (self, self.closed, self.num_unreferenceable_children, - len(self._ready_deferreds), token, ready_deferred, - self.args, self.kwargs)) - if self.numargs is None: - # this token is the number of positional arguments - assert isinstance(token, int) - assert ready_deferred is None - self.numargs = token - if self.numargs: - ms = self.methodSchema - if ms: - accept, self.argConstraint = \ - ms.getPositionalArgConstraint(0) - assert accept - return - - if len(self.args) < self.numargs: - # this token is a positional argument - argvalue = token - argpos = len(self.args) - self.args.append(argvalue) - if isinstance(argvalue, defer.Deferred): - # this may occur if the child is a gift which has not - # resolved yet. 
- self.num_unreferenceable_children += 1 - argvalue.addCallback(self.updateChild, argpos) - if ready_deferred: - if self.debug: - log.msg("%s.receiveChild got an unready posarg" % self) - self._ready_deferreds.append(ready_deferred) - if len(self.args) < self.numargs: - # more to come - ms = self.methodSchema - if ms: - nextargnum = len(self.args) - accept, self.argConstraint = \ - ms.getPositionalArgConstraint(nextargnum) - assert accept - return - - if self.argname is None: - # this token is the name of a keyword argument - assert ready_deferred is None - self.argname = token - # if the argname is invalid, this may raise Violation - ms = self.methodSchema - if ms: - accept, self.argConstraint = \ - ms.getKeywordArgConstraint(self.argname, - self.numargs, - self.kwargs.keys()) - assert accept - return - - # this token is the value of a keyword argument - argvalue = token - self.kwargs[self.argname] = argvalue - if isinstance(argvalue, defer.Deferred): - self.num_unreferenceable_children += 1 - argvalue.addCallback(self.updateChild, self.argname) - if ready_deferred: - if self.debug: - log.msg("%s.receiveChild got an unready kwarg" % self) - self._ready_deferreds.append(ready_deferred) - self.argname = None - return - - def updateChild(self, obj, which): - # one of our arguments has just now become referenceable. Normal - # types can't trigger this (since the arguments to a method form a - # top-level serialization domain), but special Unslicers might. For - # example, the Gift unslicer will eventually provide us with a - # RemoteReference, but for now all we get is a Deferred as a - # placeholder. 
- - if self.debug: - log.msg("%s.updateChild, [%s] became referenceable: %s" % - (self, which, obj)) - if isinstance(which, int): - self.args[which] = obj - else: - self.kwargs[which] = obj - self.num_unreferenceable_children -= 1 - if self.num_unreferenceable_children == 0: - if self._all_children_are_referenceable_d: - self._all_children_are_referenceable_d.callback(None) - return obj - - - def receiveClose(self): - if self.debug: - log.msg("%s.receiveClose: %s %s %s" % - (self, self.closed, self.num_unreferenceable_children, - len(self._ready_deferreds))) - if (self.numargs is None or - len(self.args) < self.numargs or - self.argname is not None): - raise BananaError("'arguments' sequence ended too early") - self.closed = True - dl = [] - if self.num_unreferenceable_children: - d = self._all_children_are_referenceable_d = defer.Deferred() - dl.append(d) - dl.extend(self._ready_deferreds) - ready_deferred = None - if dl: - ready_deferred = AsyncAND(dl) - return self, ready_deferred - - def describe(self): - s = " 0: - self.broker.callFailed(f, self.reqID) - return f # give up our sequence - - def receiveChild(self, token, ready_deferred=None): - assert not isinstance(token, defer.Deferred) - if self.debug: - log.msg("%s.receiveChild [s%d]: %s" % - (self, self.stage, repr(token))) - - if self.stage == 0: # reqID - # we don't yet know which reqID to send any failure to - assert ready_deferred is None - self.reqID = token - self.stage = 1 - if self.reqID != 0: - assert self.reqID not in self.broker.activeLocalCalls - self.broker.activeLocalCalls[self.reqID] = self - return - - if self.stage == 1: # objID - # this might raise an exception if objID is invalid - assert ready_deferred is None - self.objID = token - self.obj = self.broker.getMyReferenceByCLID(token) - #iface = self.broker.getRemoteInterfaceByName(token) - if self.objID < 0: - self.interface = None - else: - self.interface = self.obj.getInterface() - self.stage = 2 - return - - if self.stage == 2: # 
methodname - # validate the methodname, get the schema. This may raise an - # exception for unknown methods - - # must find the schema, using the interfaces - - # TODO: getSchema should probably be in an adapter instead of in - # a pb.Referenceable base class. Old-style (unconstrained) - # flavors.Referenceable should be adapted to something which - # always returns None - - # TODO: make this faster. A likely optimization is to take a - # tuple of components.getInterfaces(obj) and use it as a cache - # key. It would be even faster to use obj.__class__, but that - # would probably violate the expectation that instances can - # define their own __implements__ (independently from their - # class). If this expectation were to go away, a quick - # obj.__class__ -> RemoteReferenceSchema cache could be built. - - assert ready_deferred is None - self.stage = 3 - - if self.objID < 0: - # the target is a bound method, ignore the methodname - self.methodSchema = getattr(self.obj, "methodSchema", None) - self.methodname = None # TODO: give it something useful - if self.broker.requireSchema and not self.methodSchema: - why = "This broker does not accept unconstrained " + \ - "method calls" - raise Violation(why) - return - - self.methodname = token - - if self.interface: - # they are calling an interface+method pair - ms = self.interface.get(self.methodname) - if not ms: - why = "method '%s' not defined in %s" % \ - (self.methodname, self.interface.__remote_name__) - raise Violation(why) - self.methodSchema = ms - - return - - if self.stage == 3: # arguments - assert isinstance(token, ArgumentUnslicer) - self.allargs = token - # queue the message. It will not be executed until all the - # arguments are ready. The .args list and .kwargs dict may change - # before then. 
- if ready_deferred: - self._ready_deferreds.append(ready_deferred) - self.stage = 4 - return - - def receiveClose(self): - if self.stage != 4: - raise BananaError("'call' sequence ended too early") - # time to create the InboundDelivery object so we can queue it - delivery = InboundDelivery(self.reqID, self.obj, - self.interface, self.methodname, - self.methodSchema, - self.allargs) - ready_deferred = None - if self._ready_deferreds: - ready_deferred = AsyncAND(self._ready_deferreds) - return delivery, ready_deferred - - def describe(self): - s = "= 1: - s += " reqID=%d" % self.reqID - if self.stage >= 2: - s += " obj=%s" % (self.obj,) - ifacename = "[none]" - if self.interface: - ifacename = self.interface.__remote_name__ - s += " iface=%s" % ifacename - if self.stage >= 3: - s += " methodname=%s" % self.methodname - s += ">" - return s - - -class AnswerSlicer(slicer.ScopedSlicer): - opentype = ('answer',) - - def __init__(self, reqID, results): - assert reqID != 0 - slicer.ScopedSlicer.__init__(self, None) - self.reqID = reqID - self.results = results - - def sliceBody(self, streamable, banana): - yield self.reqID - yield self.results - - def describe(self): - return "" % self.reqID - -class AnswerUnslicer(slicer.ScopedUnslicer): - request = None - resultConstraint = None - haveResults = False - - def start(self, count): - slicer.ScopedUnslicer.start(self, count) - self._ready_deferreds = [] - self._child_deferred = None - - def checkToken(self, typebyte, size): - if self.request is None: - if typebyte != tokens.INT: - raise BananaError("request ID must be an INT") - elif not self.haveResults: - if self.resultConstraint: - try: - self.resultConstraint.checkToken(typebyte, size) - except Violation, v: - # improve the error message - if v.args: - # this += gives me a TypeError "object doesn't - # support item assignment", which confuses me - #v.args[0] += " in inbound method results" - why = v.args[0] + " in inbound method results" - v.args = why, - else: - v.args 
= ("in inbound method results",) - raise # this will errback the request - else: - raise BananaError("stop sending me stuff!") - - def doOpen(self, opentype): - if self.resultConstraint: - self.resultConstraint.checkOpentype(opentype) - # TODO: improve the error message - unslicer = self.open(opentype) - if unslicer: - if self.resultConstraint: - unslicer.setConstraint(self.resultConstraint) - return unslicer - - def receiveChild(self, token, ready_deferred=None): - if self.request == None: - assert not isinstance(token, defer.Deferred) - assert ready_deferred is None - reqID = token - # may raise Violation for bad reqIDs - self.request = self.broker.getRequest(reqID) - self.resultConstraint = self.request.constraint - else: - if isinstance(token, defer.Deferred): - self._child_deferred = token - else: - self._child_deferred = defer.succeed(token) - if ready_deferred: - self._ready_deferreds.append(ready_deferred) - self.haveResults = True - - def reportViolation(self, f): - # if the Violation was received after we got the reqID, we can tell - # the broker it was an error - if self.request != None: - self.request.fail(f) - return f # give up our sequence - - def receiveClose(self): - # three things must happen before our request is complete: - # receiveClose has occurred - # the receiveChild object deferred (if any) has fired - # ready_deferred has finished - # If ready_deferred errbacks, provide its failure object to the - # request. If not, provide the request with whatever receiveChild - # got. 
- - if not self._child_deferred: - raise BananaError("Answer didn't include an answer") - - if self._ready_deferreds: - d = AsyncAND(self._ready_deferreds) - else: - d = defer.succeed(None) - - def _ready(res): - return self._child_deferred - d.addCallback(_ready) - - def _done(res): - self.request.complete(res) - def _fail(f): - self.request.fail(f) - d.addCallbacks(_done, _fail) - - return None, None - - def describe(self): - if self.request: - return "Answer(req=%s)" % self.request.reqID - return "Answer(req=?)" - - - -class ErrorSlicer(slicer.ScopedSlicer): - opentype = ('error',) - - def __init__(self, reqID, f): - slicer.ScopedSlicer.__init__(self, None) - assert isinstance(f, failure.Failure) - self.reqID = reqID - self.f = f - - def sliceBody(self, streamable, banana): - yield self.reqID - yield self.f - - def describe(self): - return "" % self.reqID - -class ErrorUnslicer(slicer.ScopedUnslicer): - request = None - fConstraint = FailureConstraint() - gotFailure = False - - def checkToken(self, typebyte, size): - if self.request == None: - if typebyte != tokens.INT: - raise BananaError("request ID must be an INT") - elif not self.gotFailure: - self.fConstraint.checkToken(typebyte, size) - else: - raise BananaError("stop sending me stuff!") - - def doOpen(self, opentype): - self.fConstraint.checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - unslicer.setConstraint(self.fConstraint) - return unslicer - - def reportViolation(self, f): - # a failure while receiving the failure. A bit daft, really. 
- if self.request != None: - self.request.fail(f) - return f # give up our sequence - - def receiveChild(self, token, ready_deferred=None): - assert not isinstance(token, defer.Deferred) - assert ready_deferred is None - if self.request == None: - reqID = token - # may raise BananaError for bad reqIDs - self.request = self.broker.getRequest(reqID) - else: - self.failure = token - self.gotFailure = True - - def receiveClose(self): - self.request.fail(self.failure) - return None, None - - def describe(self): - if self.request is None: - return "" - return "" % self.request.reqID - - -# failures are sent as Copyables -class FailureSlicer(slicer.BaseSlicer): - slices = failure.Failure - classname = "twisted.python.failure.Failure" - - def slice(self, streamable, banana): - self.streamable = streamable - yield 'copyable' - yield self.classname - state = self.getStateToCopy(self.obj, banana) - for k,v in state.iteritems(): - yield k - yield v - def describe(self): - return "<%s>" % self.classname - - def getStateToCopy(self, obj, broker): - #state = obj.__dict__.copy() - #state['tb'] = None - #state['frames'] = [] - #state['stack'] = [] - - state = {} - # string exceptions show up as obj.value == None and - # isinstance(obj.type, str). Normal exceptions show up as obj.value - # == text and obj.type == exception class. We need to make sure we - # can handle both. - if isinstance(obj.value, failure.Failure): - # TODO: how can this happen? 
I got rid of failure2Copyable, so - # if this case is possible, something needs to replace it - raise RuntimeError("not implemented yet") - #state['value'] = failure2Copyable(obj.value, banana.unsafeTracebacks) - elif isinstance(obj.type, str): - state['value'] = str(obj.value) - state['type'] = obj.type # a string - else: - state['value'] = str(obj.value) # Exception instance - state['type'] = reflect.qual(obj.type) # Exception class - - if broker.unsafeTracebacks: - if isinstance(obj.type, str): - stack = "getTraceback() not available for string exceptions\n" - else: - stack = obj.getTraceback() - state['traceback'] = stack - # TODO: provide something with globals and locals and HTML and - # all that cool stuff - else: - state['traceback'] = 'Traceback unavailable\n' - if len(state['traceback']) > 1900: - state['traceback'] = (state['traceback'][:1900] + - "\n\n-- TRACEBACK TRUNCATED --\n") - state['parents'] = obj.parents - return state - -class CopiedFailure(failure.Failure, copyable.RemoteCopyOldStyle): - # this is a RemoteCopyOldStyle because you can't raise new-style - # instances as exceptions. - - """I am a shadow of some remote Failure instance. I contain less - information than the original did. - - You can still extract a (brief) printable traceback from me. My .parents - attribute is a list of strings describing the class of the exception - that I contain, just like the real Failure had, so my trap() and check() - methods work fine. My .type and .value attributes are string - representations of the original exception class and exception instance, - respectively. The most significant effect is that you cannot access - f.value.args, and should instead just use f.value . - - My .frames and .stack attributes are empty, although this may change in - the future (and with the cooperation of the sender). 
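To make the docstring above concrete: it is the `.parents` list of class-name strings that lets trap()/check()-style matching work on the receiving side without the real exception classes. An illustrative stand-in (simplified, not the actual CopiedFailure implementation):

```python
# Illustrative stand-in for a CopiedFailure: .type and .value are plain
# strings, and .parents (qualified names of the exception class and its
# bases) is what check()-style matching consults.
def qual(cls):
    return "%s.%s" % (cls.__module__, cls.__name__)

class MiniCopiedFailure:
    def __init__(self, typename, value, parents):
        self.type = typename      # e.g. "builtins.KeyError" (a string)
        self.value = value        # string representation of the instance
        self.parents = parents

    def check(self, *errorTypes):
        for et in errorTypes:
            if qual(et) in self.parents:
                return et
        return None

f = MiniCopiedFailure("builtins.KeyError", "'x'",
                      ["builtins.KeyError", "builtins.LookupError",
                       "builtins.Exception"])
f.check(LookupError)   # matches via the parents list, not the class
```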
- """ - - nonCyclic = True - stateSchema = FailureConstraint() - - def __init__(self): - copyable.RemoteCopyOldStyle.__init__(self) - - def setCopyableState(self, state): - #self.__dict__.update(state) - self.__dict__ = state - # state includes: type, value, traceback, parents - #self.type = state['type'] - #self.value = state['value'] - #self.traceback = state['traceback'] - #self.parents = state['parents'] - self.tb = None - self.frames = [] - self.stack = [] - - # MAYBE: for native exception types, be willing to wire up a - # reference to the real exception class. For other exception types, - # our .type attribute will be a string, which (from a Failure's point - # of view) looks as if someone raised an old-style string exception. - # This is here so that trial will properly render a CopiedFailure - # that comes out of a test case (since it unconditionally does - # reflect.qual(f.type) - - # ACTUALLY: replace self.type with a class that looks a lot like the - # original exception class (meaning that reflect.qual() will return - # the same string for this as for the original). If someone calls our - # .trap method, resulting in a new Failure with contents copied from - # this one, then the new Failure.printTraceback will attempt to use - # reflect.qual() on our self.type, so it needs to be a class instead - # of a string. 
- - assert isinstance(self.type, str) - typepieces = self.type.split(".") - class ExceptionLikeString: - pass - self.type = ExceptionLikeString - self.type.__module__ = ".".join(typepieces[:-1]) - self.type.__name__ = typepieces[-1] - - def __str__(self): - return "[CopiedFailure instance: %s]" % self.getBriefTraceback() - - pickled = 1 - def printTraceback(self, file=None, elideFrameworkCode=0, - detail='default'): - if file is None: file = log.logerr - file.write("Traceback from remote host -- ") - file.write(self.traceback) - -copyable.registerRemoteCopy(FailureSlicer.classname, CopiedFailure) - -class CopiedFailureSlicer(FailureSlicer): - # A calls B. B calls C. C fails and sends a Failure to B. B gets a - # CopiedFailure and sends it to A. A should get a CopiedFailure too. This - # class lives on B and slicers the CopiedFailure as it is sent to A. - slices = CopiedFailure - - def getStateToCopy(self, obj, broker): - state = {} - for k in ('value', 'type', 'parents'): - state[k] = getattr(obj, k) - if broker.unsafeTracebacks: - state['traceback'] = obj.traceback - else: - state['traceback'] = "Traceback unavailable\n" - if not isinstance(state['type'], str): - state['type'] = reflect.qual(state['type']) # Exception class - return state diff --git a/src/foolscap/foolscap/constraint.py b/src/foolscap/foolscap/constraint.py deleted file mode 100644 index e8c0b360..00000000 --- a/src/foolscap/foolscap/constraint.py +++ /dev/null @@ -1,356 +0,0 @@ - -# This provides a base for the various Constraint subclasses to use. Those -# Constraint subclasses live next to the slicers. It also contains -# Constraints for primitive types (int, str). - -# This imports foolscap.tokens, but no other Foolscap modules. 
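The ExceptionLikeString trick used in CopiedFailure.setCopyableState above can be sketched in isolation: rebuild, from a dotted type name, a class whose `__module__` and `__name__` make reflect.qual()-style rendering reproduce the original string. The helper name is invented for illustration:

```python
# Turn a dotted name back into a class that renders as that name.
def class_like_string(typename):
    pieces = typename.split(".")
    class ExceptionLikeString:
        pass
    ExceptionLikeString.__module__ = ".".join(pieces[:-1])
    ExceptionLikeString.__name__ = pieces[-1]
    return ExceptionLikeString

cls = class_like_string("socket.error")
# reflect.qual() joins __module__ and __name__ with a dot:
rendered = "%s.%s" % (cls.__module__, cls.__name__)
```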
- -import re -from zope.interface import implements, Interface - -from foolscap.tokens import Violation, BananaError, SIZE_LIMIT, \ - STRING, LIST, INT, NEG, LONGINT, LONGNEG, VOCAB, FLOAT, OPEN, \ - tokenNames - -everythingTaster = { - # he likes everything - STRING: SIZE_LIMIT, - LIST: None, - INT: None, - NEG: None, - LONGINT: SIZE_LIMIT, - LONGNEG: SIZE_LIMIT, - VOCAB: None, - FLOAT: None, - OPEN: None, - } -openTaster = { - OPEN: None, - } -nothingTaster = {} - -class UnboundedSchema(Exception): - pass - -class IConstraint(Interface): - pass -class IRemoteMethodConstraint(IConstraint): - def getPositionalArgConstraint(argnum): - """Return the constraint for posargs[argnum]. This is called on - inbound methods when receiving positional arguments. This returns a - tuple of (accept, constraint), where accept=False means the argument - should be rejected immediately, regardless of what type it might be.""" - def getKeywordArgConstraint(argname, num_posargs=0, previous_kwargs=[]): - """Return the constraint for kwargs[argname]. The other arguments are - used to handle mixed positional and keyword arguments. Returns a - tuple of (accept, constraint).""" - - def checkAllArgs(args, kwargs, inbound): - """Submit all argument values for checking. When inbound=True, this - is called after the arguments have been deserialized, but before the - method is invoked. When inbound=False, this is called just inside - callRemote(), as soon as the target object (and hence the remote - method constraint) is located. - - This should either raise Violation or return None.""" - pass - def getResponseConstraint(): - """Return an IConstraint-providing object to enforce the response - constraint. This is called on outbound method calls so that when the - response starts to come back, we can start enforcing the appropriate - constraint right away.""" - def checkResults(results, inbound): - """Inspect the results of invoking a method call. 
inbound=False is - used on the side that hosts the Referenceable, just after the target - method has provided a value. inbound=True is used on the - RemoteReference side, just after it has finished deserializing the - response. - - This should either raise Violation or return None.""" - -class Constraint: - """ - Each __schema__ attribute is turned into an instance of this class, and - is eventually given to the unserializer (the 'Unslicer') to enforce as - the tokens are arriving off the wire. - """ - - implements(IConstraint) - - taster = everythingTaster - """the Taster is a dict that specifies which basic token types are - accepted. The keys are typebytes like INT and STRING, while the - values are size limits: the body portion of the token must not be - longer than LIMIT bytes. - """ - - strictTaster = False - """If strictTaster is True, taste violations are raised as BananaErrors - (indicating a protocol error) rather than a mere Violation. - """ - - opentypes = None - """opentypes is a list of currently acceptable OPEN token types. None - indicates that all types are accepted. An empty list indicates that no - OPEN tokens are accepted. - """ - - name = None - """Used to describe the Constraint in a Violation error message""" - - def checkToken(self, typebyte, size): - """Check the token type. 
Raise an exception if it is not accepted - right now, or if the body-length limit is exceeded.""" - - limit = self.taster.get(typebyte, "not in list") - if limit == "not in list": - if self.strictTaster: - raise BananaError("invalid token type: %s" % - tokenNames[typebyte]) - else: - raise Violation("%s token rejected by %s" % \ - (tokenNames[typebyte], self.name)) - if limit and size > limit: - raise Violation("token too large: %d>%d" % (size, limit)) - - def setNumberTaster(self, maxValue): - self.taster = {INT: None, - NEG: None, - LONGINT: None, # TODO - LONGNEG: None, - FLOAT: None, - } - def checkOpentype(self, opentype): - """Check the OPEN type (the tuple of Index Tokens). Raise an - exception if it is not accepted. - """ - - if self.opentypes == None: - return - - for o in self.opentypes: - if len(o) == len(opentype): - if o == opentype: - return - if len(o) > len(opentype): - # we might have a partial match: they haven't flunked yet - if opentype == o[:len(opentype)]: - return # still in the running - print "opentype %s, self.opentypes %s" % (opentype, self.opentypes) - raise Violation, "unacceptable OPEN type '%s'" % (opentype,) - - def checkObject(self, obj, inbound): - """Validate an existing object. Usually objects are validated as - their tokens come off the wire, but pre-existing objects may be - added to containers if a REFERENCE token arrives which points to - them. The older objects were validated as they arrived (by a - different schema), but now they must be re-validated by the new - schema. - - A more naive form of validation would just accept the entire object - tree into memory and then run checkObject() on the result. This - validation is too late: it is vulnerable to both DoS and - made-you-run-code attacks. - - If inbound=True, this object is arriving over the wire. If - inbound=False, this is being called to validate an existing object - before it is sent over the wire.
This is done as a courtesy to the - remote end, and to improve debuggability. - - Most constraints can use the same checker for both inbound and - outbound objects. - """ - # this default form passes everything - return - - def maxSize(self, seen=None): - """ - I help a caller determine how much memory could be consumed by the - input stream while my constraint is in effect. - - My constraint will be enforced against the bytes that arrive over - the wire. Eventually I will either accept the incoming bytes and my - Unslicer will provide an object to its parent (including any - subobjects), or I will raise a Violation exception which will kick - my Unslicer into 'discard' mode. - - I define maxSizeAccept as the maximum number of bytes that will be - received before the stream is accepted as valid. maxSizeReject is - the maximum that will be received before a Violation is raised. The - max of the two provides an upper bound on single objects. For - container objects, the upper bound is probably (n-1)*accept + - reject, because there can only be one outstanding - about-to-be-rejected object at any time. - - I return (maxSizeAccept, maxSizeReject). - - I raise an UnboundedSchema exception if there is no bound. - """ - raise UnboundedSchema - - def maxDepth(self): - """I return the greatest number Slicer objects that might exist on - the SlicerStack (or Unslicers on the UnslicerStack) while processing - an object which conforms to this constraint. This is effectively the - maximum depth of the object tree. I raise UnboundedSchema if there is - no bound. 
- """ - raise UnboundedSchema - - COUNTERBYTES = 64 # max size of opencount - - def OPENBYTES(self, dummy): - # an OPEN,type,CLOSE sequence could consume: - # 64 (header) - # 1 (OPEN) - # 64 (header) - # 1 (STRING) - # 1000 (value) - # or - # 64 (header) - # 1 (VOCAB) - # 64 (header) - # 1 (CLOSE) - # for a total of 65+1065+65 = 1195 - return self.COUNTERBYTES+1 + 64+1+1000 + self.COUNTERBYTES+1 - -class OpenerConstraint(Constraint): - taster = openTaster - -class Any(Constraint): - pass # accept everything - -# constraints which describe individual banana tokens - -class ByteStringConstraint(Constraint): - opentypes = [] # redundant, as taster doesn't accept OPEN - name = "ByteStringConstraint" - - def __init__(self, maxLength=1000, minLength=0, regexp=None): - self.maxLength = maxLength - self.minLength = minLength - # regexp can either be a string or a compiled SRE_Match object.. - # re.compile appears to notice SRE_Match objects and pass them - # through unchanged. - self.regexp = None - if regexp: - self.regexp = re.compile(regexp) - self.taster = {STRING: self.maxLength, - VOCAB: None} - - def checkObject(self, obj, inbound): - if not isinstance(obj, str): - raise Violation("not a bytestring") - if self.maxLength != None and len(obj) > self.maxLength: - raise Violation("string too long (%d > %d)" % - (len(obj), self.maxLength)) - if len(obj) < self.minLength: - raise Violation("string too short (%d < %d)" % - (len(obj), self.minLength)) - if self.regexp: - if not self.regexp.search(obj): - raise Violation("regexp failed to match") - - def maxSize(self, seen=None): - if self.maxLength == None: - raise UnboundedSchema - return 64+1+self.maxLength - def maxDepth(self, seen=None): - return 1 - -class IntegerConstraint(Constraint): - opentypes = [] # redundant - # taster set in __init__ - name = "IntegerConstraint" - - def __init__(self, maxBytes=-1): - # -1 means s_int32_t: INT/NEG instead of INT/NEG/LONGINT/LONGNEG - # None means unlimited - assert maxBytes == 
-1 or maxBytes == None or maxBytes >= 4 - self.maxBytes = maxBytes - self.taster = {INT: None, NEG: None} - if maxBytes != -1: - self.taster[LONGINT] = maxBytes - self.taster[LONGNEG] = maxBytes - - def checkObject(self, obj, inbound): - if not isinstance(obj, (int, long)): - raise Violation("not a number") - if self.maxBytes == -1: - if obj >= 2**31 or obj < -2**31: - raise Violation("number too large") - elif self.maxBytes != None: - if abs(obj) >= 2**(8*self.maxBytes): - raise Violation("number too large") - - def maxSize(self, seen=None): - if self.maxBytes == None: - raise UnboundedSchema - if self.maxBytes == -1: - return 64+1 - return 64+1+self.maxBytes - def maxDepth(self, seen=None): - return 1 - -class NumberConstraint(IntegerConstraint): - name = "NumberConstraint" - - def __init__(self, maxBytes=1024): - assert maxBytes != -1 # not valid here - IntegerConstraint.__init__(self, maxBytes) - self.taster[FLOAT] = None - - def checkObject(self, obj, inbound): - if isinstance(obj, float): - return - IntegerConstraint.checkObject(self, obj, inbound) - - def maxSize(self, seen=None): - # floats are packed into 8 bytes, so the shortest FLOAT token is - # 64+1+8 - intsize = IntegerConstraint.maxSize(self, seen) - return max(64+1+8, intsize) - def maxDepth(self, seen=None): - return 1 - - - -#TODO -class Shared(Constraint): - name = "Shared" - - def __init__(self, constraint, refLimit=None): - self.constraint = IConstraint(constraint) - self.refLimit = refLimit - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxSize(seen) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxDepth(seen) - -#TODO: might be better implemented with a .optional flag -class Optional(Constraint): - name = "Optional" - - def __init__(self, constraint, default): - 
self.constraint = IConstraint(constraint) - self.default = default - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxSize(seen) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return self.constraint.maxDepth(seen) diff --git a/src/foolscap/foolscap/copyable.py b/src/foolscap/foolscap/copyable.py deleted file mode 100644 index 70bcbb32..00000000 --- a/src/foolscap/foolscap/copyable.py +++ /dev/null @@ -1,434 +0,0 @@ -# -*- test-case-name: foolscap.test.test_copyable -*- - -# this module is responsible for all copy-by-value objects - -from zope.interface import interface, implements -from twisted.python import reflect, log -from twisted.python.components import registerAdapter -from twisted.internet import defer - -import slicer, tokens -from tokens import BananaError, Violation -from foolscap.constraint import OpenerConstraint, IConstraint, \ - ByteStringConstraint, UnboundedSchema, Optional - -Interface = interface.Interface - -############################################################ -# the first half of this file is sending/serialization - -class ICopyable(Interface): - """I represent an object which is passed-by-value across PB connections. - """ - - def getTypeToCopy(): - """Return a string which names the class. This string must match the - one that gets registered at the receiving end. This is typically a - URL of some sort, in a namespace which you control.""" - def getStateToCopy(): - """Return a state dictionary (with plain-string keys) which will be - serialized and sent to the remote end. 
This state object will be - given to the receiving object's setCopyableState method.""" - -class Copyable(object): - implements(ICopyable) - # you *must* set 'typeToCopy' - - def getTypeToCopy(self): - try: - copytype = self.typeToCopy - except AttributeError: - raise RuntimeError("Copyable subclasses must specify 'typeToCopy'") - return copytype - def getStateToCopy(self): - return self.__dict__ - -class CopyableSlicer(slicer.BaseSlicer): - """I handle ICopyable objects (things which are copied by value).""" - def slice(self, streamable, banana): - self.streamable = streamable - yield 'copyable' - copytype = self.obj.getTypeToCopy() - assert isinstance(copytype, str) - yield copytype - state = self.obj.getStateToCopy() - for k,v in state.iteritems(): - yield k - yield v - def describe(self): - return "<%s>" % self.obj.getTypeToCopy() -registerAdapter(CopyableSlicer, ICopyable, tokens.ISlicer) - - -class Copyable2(slicer.BaseSlicer): - # I am my own Slicer. This has more methods than you'd usually want in a - # base class, but if you can't register an Adapter for a whole class - # hierarchy then you may have to use it. - def getTypeToCopy(self): - return reflect.qual(self.__class__) - def getStateToCopy(self): - return self.__dict__ - def slice(self, streamable, banana): - self.streamable = streamable - yield 'instance' - yield self.getTypeToCopy() - yield self.getStateToCopy() - def describe(self): - return "<%s>" % self.getTypeToCopy() - -#registerRemoteCopy(typename, factory) -#registerUnslicer(typename, factory) - -def registerCopier(klass, copier): - """This is a shortcut for arranging to serialize third-party clases. - 'copier' must be a callable which accepts an instance of the class you - want to serialize, and returns a tuple of (typename, state_dictionary). - If it returns a typename of None, the original class's fully-qualified - classname is used. 
- """ - klassname = reflect.qual(klass) - class _CopierAdapter: - implements(ICopyable) - def __init__(self, original): - self.nameToCopy, self.state = copier(original) - if self.nameToCopy is None: - self.nameToCopy = klassname - def getTypeToCopy(self): - return self.nameToCopy - def getStateToCopy(self): - return self.state - registerAdapter(_CopierAdapter, klass, ICopyable) - -############################################################ -# beyond here is the receiving/deserialization side - -class RemoteCopyUnslicer(slicer.BaseUnslicer): - attrname = None - attrConstraint = None - - def __init__(self, factory, stateSchema): - self.factory = factory - self.schema = stateSchema - - def start(self, count): - self.d = {} - self.count = count - self.deferred = defer.Deferred() - self.protocol.setObject(count, self.deferred) - - def checkToken(self, typebyte, size): - if self.attrname == None: - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("RemoteCopyUnslicer keys must be STRINGs") - else: - if self.attrConstraint: - self.attrConstraint.checkToken(typebyte, size) - - def doOpen(self, opentype): - if self.attrConstraint: - self.attrConstraint.checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - if self.attrConstraint: - unslicer.setConstraint(self.attrConstraint) - return unslicer - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, defer.Deferred) - assert ready_deferred is None - if self.attrname == None: - attrname = obj - if self.d.has_key(attrname): - raise BananaError("duplicate attribute name '%s'" % attrname) - s = self.schema - if s: - accept, self.attrConstraint = s.getAttrConstraint(attrname) - assert accept - self.attrname = attrname - else: - if isinstance(obj, defer.Deferred): - # TODO: this is an artificial restriction, and it might - # be possible to remove it, but I need to think through - # it carefully first - raise BananaError("unreferenceable object in attribute") - 
self.setAttribute(self.attrname, obj) - self.attrname = None - self.attrConstraint = None - - def setAttribute(self, name, value): - self.d[name] = value - - def receiveClose(self): - try: - obj = self.factory(self.d) - except: - log.msg("%s.receiveClose: problem in factory %s" % - (self.__class__.__name__, self.factory)) - log.err() - raise - self.protocol.setObject(self.count, obj) - self.deferred.callback(obj) - return obj, None - - def describe(self): - if self.classname == None: - return "" - me = "<%s>" % self.classname - if self.attrname is None: - return "%s.attrname??" % me - else: - return "%s.%s" % (me, self.attrname) - - -class NonCyclicRemoteCopyUnslicer(RemoteCopyUnslicer): - # The Deferred used in RemoteCopyUnslicer (used in case the RemoteCopy - # is participating in a reference cycle, say 'obj.foo = obj') makes it - # unsuitable for holding Failures (which cannot be passed through - # Deferred.callback). Use this class for Failures. It cannot handle - # reference cycles (they will cause a KeyError when the reference is - # followed). - - def start(self, count): - self.d = {} - self.count = count - self.gettingAttrname = True - - def receiveClose(self): - obj = self.factory(self.d) - return obj, None - - -class IRemoteCopy(Interface): - """This interface defines what a RemoteCopy class must do. RemoteCopy - subclasses are used as factories to create objects that correspond to - Copyables sent over the wire. - - Note that the constructor of an IRemoteCopy class will be called without - any arguments. - """ - - def setCopyableState(statedict): - """I accept an attribute dictionary name/value pairs and use it to - set my internal state. - - Some of the values may be Deferreds, which are placeholders for the - as-yet-unreferenceable object which will eventually go there. If you - receive a Deferred, you are responsible for adding a callback to - update the attribute when it fires. 
[note: - RemoteCopyUnslicer.receiveChild currently has a restriction which - prevents this from happening, but that may go away in the future] - - Some of the objects referenced by the attribute values may have - Deferreds in them (e.g. containers which reference recursive tuples). - Such containers are responsible for updating their own state when - those Deferreds fire, but until that point their state is still - subject to change. Therefore you must be careful about how much state - inspection you perform within this method.""" - - stateSchema = interface.Attribute("""I return an AttributeDictConstraint - object which places restrictions on incoming attribute values. These - restrictions are enforced as the tokens are received, before the state is - passed to setCopyableState.""") - - -# This maps typename to an Unslicer factory -CopyableRegistry = {} -def registerRemoteCopyUnslicerFactory(typename, unslicerfactory, - registry=None): - """Tell PB that unslicerfactory can be used to handle Copyable objects - that provide a getTypeToCopy name of 'typename'. 'unslicerfactory' must - be a callable which takes no arguments and returns an object which - provides IUnslicer. - """ - assert callable(unslicerfactory) - # in addition, it must produce a tokens.IUnslicer . This is safe to do - # because Unslicers don't do anything significant when they are created. - test_unslicer = unslicerfactory() - assert tokens.IUnslicer.providedBy(test_unslicer) - assert type(typename) is str - - if registry == None: - registry = CopyableRegistry - assert not registry.has_key(typename) - registry[typename] = unslicerfactory - -# this keeps track of everything submitted to registerRemoteCopyFactory -debug_CopyableFactories = {} -def registerRemoteCopyFactory(typename, factory, stateSchema=None, - cyclic=True, registry=None): - """Tell PB that 'factory' can be used to handle Copyable objects that - provide a getTypeToCopy name of 'typename'. 
'factory' must be a callable - which accepts a state dictionary and returns a fully-formed instance. - - 'cyclic' is a boolean, which should be set to False to avoid using a - Deferred to provide the resulting RemoteCopy instance. This is needed to - deserialize Failures (or instances which inherit from one, like - CopiedFailure). In exchange for this, it cannot handle reference cycles. - """ - assert callable(factory) - debug_CopyableFactories[typename] = (factory, stateSchema, cyclic) - if cyclic: - def _RemoteCopyUnslicerFactory(): - return RemoteCopyUnslicer(factory, stateSchema) - registerRemoteCopyUnslicerFactory(typename, - _RemoteCopyUnslicerFactory, - registry) - else: - def _RemoteCopyUnslicerFactoryNonCyclic(): - return NonCyclicRemoteCopyUnslicer(factory, stateSchema) - registerRemoteCopyUnslicerFactory(typename, - _RemoteCopyUnslicerFactoryNonCyclic, - registry) - -# this keeps track of everything submitted to registerRemoteCopy, which may -# be useful when you're wondering what's been auto-registered by the -# RemoteCopy metaclass magic -debug_RemoteCopyClasses = {} -def registerRemoteCopy(typename, remote_copy_class, registry=None): - """Tell PB that remote_copy_class is the appropriate RemoteCopy class to - use when deserializing a Copyable sequence that is tagged with - 'typename'. 'remote_copy_class' should be a RemoteCopy subclass or - implement the same interface, which means its constructor takes no - arguments and it has a setCopyableState(state) method to actually set the - instance's state after initialization. It must also have a nonCyclic - attribute. 
- """ - assert IRemoteCopy.implementedBy(remote_copy_class) - assert type(typename) is str - - debug_RemoteCopyClasses[typename] = remote_copy_class - def _RemoteCopyFactory(state): - obj = remote_copy_class() - obj.setCopyableState(state) - return obj - - registerRemoteCopyFactory(typename, _RemoteCopyFactory, - remote_copy_class.stateSchema, - not remote_copy_class.nonCyclic, - registry) - -class RemoteCopyClass(type): - # auto-register RemoteCopy classes - def __init__(self, name, bases, dict): - type.__init__(self, name, bases, dict) - # don't try to register RemoteCopy itself - if name == "RemoteCopy" and _RemoteCopyBase in bases: - #print "not auto-registering %s %s" % (name, bases) - return - if "copytype" not in dict: - # TODO: provide a file/line-number for the class - raise RuntimeError("RemoteCopy subclass %s must specify 'copytype'" - % name) - copytype = dict['copytype'] - if copytype: - registry = dict.get('copyableRegistry', None) - registerRemoteCopy(copytype, self, registry) - -class _RemoteCopyBase: - - implements(IRemoteCopy) - - stateSchema = None # always a class attribute - nonCyclic = False - - def __init__(self): - # the constructor will always be called without arguments - pass - - def setCopyableState(self, state): - self.__dict__ = state - -class RemoteCopyOldStyle(_RemoteCopyBase): - # note that these will not auto-register for you, because old-style - # classes do not do metaclass magic - copytype = None - -class RemoteCopy(_RemoteCopyBase, object): - # Set 'copytype' to a unique string that is shared between the - # sender-side Copyable and the receiver-side RemoteCopy. This RemoteCopy - # subclass will be auto-registered using the 'copytype' name. Set - # copytype to None to disable auto-registration. - - __metaclass__ = RemoteCopyClass - pass - - -class AttributeDictConstraint(OpenerConstraint): - """This is a constraint for dictionaries that are used for attributes. 
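The RemoteCopyClass metaclass above auto-registers every subclass that declares a truthy `copytype`. A simplified modern-Python re-sketch of that registration pattern (assumption: the registry is modeled as a plain module-level dict, and the base class opts out by setting `copytype = None`, as RemoteCopyOldStyle does):

```python
# Minimal sketch of metaclass-driven auto-registration, as done by
# foolscap's RemoteCopyClass: subclasses with a truthy 'copytype' are
# recorded under that name at class-definition time.

registry = {}

class AutoRegister(type):
    def __init__(cls, name, bases, namespace):
        super().__init__(name, bases, namespace)
        copytype = namespace.get("copytype")
        if copytype:  # None/absent disables auto-registration
            registry[copytype] = cls

class RemoteCopyBase(metaclass=AutoRegister):
    copytype = None  # the base class itself is not registered

class FooCopy(RemoteCopyBase):
    copytype = "example.Foo"  # hypothetical wire tag shared with the sender
```

After the class statement runs, `registry["example.Foo"]` is `FooCopy`, without any explicit registration call.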
- All keys are short strings, and each value has a separate constraint. - It could be used to describe instance state, but could also be used - to constraint arbitrary dictionaries with string keys. - - Some special constraints are legal here: Optional. - """ - opentypes = [("attrdict",)] - name = "AttributeDictConstraint" - - def __init__(self, *attrTuples, **kwargs): - self.ignoreUnknown = kwargs.get('ignoreUnknown', False) - self.acceptUnknown = kwargs.get('acceptUnknown', False) - self.keys = {} - for name, constraint in (list(attrTuples) + - kwargs.get('attributes', {}).items()): - assert name not in self.keys.keys() - self.keys[name] = IConstraint(constraint) - - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - total = self.OPENBYTES("attributedict") - for name, constraint in self.keys.iteritems(): - total += ByteStringConstraint(len(name)).maxSize(seen) - total += constraint.maxSize(seen[:]) - return total - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - # all the attribute names are 1-deep, so the min depth of the dict - # items is 1. 
The other "1" is for the AttributeDict container itself - return 1 + reduce(max, [c.maxDepth(seen[:]) - for c in self.itervalues()], 1) - - def getAttrConstraint(self, attrname): - c = self.keys.get(attrname) - if c: - if isinstance(c, Optional): - c = c.constraint - return (True, c) - # unknown attribute - if self.ignoreUnknown: - return (False, None) - if self.acceptUnknown: - return (True, None) - raise Violation("unknown attribute '%s'" % attrname) - - def checkObject(self, obj, inbound): - if type(obj) != type({}): - raise Violation, "'%s' (%s) is not a Dictionary" % (obj, - type(obj)) - allkeys = self.keys.keys() - for k in obj.keys(): - try: - constraint = self.keys[k] - allkeys.remove(k) - except KeyError: - if not self.ignoreUnknown: - raise Violation, "key '%s' not in schema" % k - else: - # hmm. kind of a soft violation. allow it for now. - pass - else: - constraint.checkObject(obj[k], inbound) - - for k in allkeys[:]: - if isinstance(self.keys[k], Optional): - allkeys.remove(k) - if allkeys: - raise Violation("object is missing required keys: %s" % \ - ",".join(allkeys)) - diff --git a/src/foolscap/foolscap/crypto.py b/src/foolscap/foolscap/crypto.py deleted file mode 100644 index 6ffaaf4a..00000000 --- a/src/foolscap/foolscap/crypto.py +++ /dev/null @@ -1,96 +0,0 @@ -# -*- test-case-name: foolscap.test.test_crypto -*- - -available = False # hack to deal with half-broken imports in python <2.4 - -from OpenSSL import SSL - -# we try to use ssl support classes from Twisted, if it is new enough. If -# not, we pull them from a local copy of sslverify. The funny '_ssl' import -# stuff is used to appease pyflakes, which otherwise complains that we're -# redefining an imported name. -from twisted.internet import ssl -if hasattr(ssl, "DistinguishedName"): - # Twisted-2.5 will contain these names - _ssl = ssl - CertificateOptions = ssl.CertificateOptions -else: - # but it hasn't been released yet (as of 16-Sep-2006). 
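The checkObject() logic that follows can be summarized as: every incoming key must be known to the schema (unless unknowns are ignored), each value must satisfy its per-key constraint, and only keys wrapped in Optional may be absent. A hedged standalone sketch, with `Violation`/`Optional` as stand-ins for the foolscap classes and each constraint modeled as a callable that raises on bad values:

```python
# Sketch of AttributeDictConstraint.checkObject()-style validation.

class Violation(Exception):
    pass

class Optional:
    def __init__(self, constraint):
        self.constraint = constraint

def check_attrdict(obj, schema, ignore_unknown=False):
    if not isinstance(obj, dict):
        raise Violation("not a dictionary")
    missing = set(schema)
    for k, v in obj.items():
        if k not in schema:
            if not ignore_unknown:
                raise Violation("key %r not in schema" % k)
            continue  # soft violation: tolerate the unknown key
        missing.discard(k)
        c = schema[k]
        checker = c.constraint if isinstance(c, Optional) else c
        checker(v)
    # only non-Optional keys are actually required
    required = [k for k in missing if not isinstance(schema[k], Optional)]
    if required:
        raise Violation("missing required keys: " + ",".join(sorted(required)))

def must_be_int(v):
    if not isinstance(v, int):
        raise Violation("not an int")

schema = {"x": must_be_int, "label": Optional(lambda v: None)}
check_attrdict({"x": 1}, schema)  # ok: 'label' is Optional and may be absent
```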
Without them, we - # cannot use any encrypted Tubs. We fall back to using a private copy of - # sslverify.py, copied from the Divmod tree. - import sslverify - _ssl = sslverify - from sslverify import OpenSSLCertificateOptions as CertificateOptions -DistinguishedName = _ssl.DistinguishedName -KeyPair = _ssl.KeyPair -Certificate = _ssl.Certificate -PrivateCertificate = _ssl.PrivateCertificate - -from twisted.internet import error - -if hasattr(error, "CertificateError"): - # Twisted-2.4 contains this, and it is used by twisted.internet.ssl - CertificateError = error.CertificateError -else: - class CertificateError(Exception): - """ - We did not find a certificate where we expected to find one. - """ - - -from foolscap import base32 - -peerFromTransport = Certificate.peerFromTransport - -class MyOptions(CertificateOptions): - def _makeContext(self): - ctx = CertificateOptions._makeContext(self) - def alwaysValidate(conn, cert, errno, depth, preverify_ok): - # This function is called to validate the certificate received by - # the other end. OpenSSL calls it multiple times, each time it - # see something funny, to ask if it should proceed. - - # We do not care about certificate authorities or revocation - # lists, we just want to know that the certificate has a valid - # signature and follow the chain back to one which is - # self-signed. The TubID will be the digest of one of these - # certificates. We need to protect against forged signatures, but - # not the usual SSL concerns about invalid CAs or revoked - # certificates. - - # these constants are from openssl-0.9.7g/crypto/x509/x509_vfy.h - # and do not appear to be exposed by pyopenssl. Ick. TODO. We - # could just always return '1' here (ignoring all errors), but I - # think that would ignore forged signatures too, which would - # obviously be a security hole. 
- things_are_ok = (0, # X509_V_OK - 9, # X509_V_ERR_CERT_NOT_YET_VALID - 10, # X509_V_ERR_CERT_HAS_EXPIRED - 18, # X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT - 19, # X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN - ) - if errno in things_are_ok: - return 1 - # TODO: log the details of the error, because otherwise they get - # lost in the PyOpenSSL exception that will eventually be raised - # (possibly OpenSSL.SSL.Error: certificate verify failed) - - # I think that X509_V_ERR_CERT_SIGNATURE_FAILURE is the most - # obvious sign of hostile attack. - return 0 - - # VERIFY_PEER means we ask the the other end for their certificate. - # not adding VERIFY_FAIL_IF_NO_PEER_CERT means it's ok if they don't - # give us one (i.e. if an anonymous client connects to an - # authenticated server). I don't know what VERIFY_CLIENT_ONCE does. - ctx.set_verify(SSL.VERIFY_PEER | - #SSL.VERIFY_FAIL_IF_NO_PEER_CERT | - SSL.VERIFY_CLIENT_ONCE, - alwaysValidate) - return ctx - -def digest32(colondigest): - digest = "".join([chr(int(c,16)) for c in colondigest.split(":")]) - digest = base32.encode(digest) - return digest - -available = True diff --git a/src/foolscap/foolscap/debug.py b/src/foolscap/foolscap/debug.py deleted file mode 100644 index c74732dd..00000000 --- a/src/foolscap/foolscap/debug.py +++ /dev/null @@ -1,209 +0,0 @@ - -# miscellaneous helper classes for debugging and testing, not needed for -# normal use - -from cStringIO import StringIO -from foolscap import banana, tokens, storage -Banana = banana.Banana -StorageBanana = storage.StorageBanana -from foolscap.slicers.root import RootSlicer - -class LoggingBananaMixin: - # this variant prints a log of tokens sent and received, if you set the - # .doLog attribute to a string (like 'tx' or 'rx') - doLog = None - - ### send logging - - def sendOpen(self): - if self.doLog: print "[%s] OPEN(%d)" % (self.doLog, self.openCount) - return Banana.sendOpen(self) - def sendToken(self, obj): - if self.doLog: - if type(obj) == str: - print "[%s] 
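The digest32() helper above turns the colon-separated hex digest that OpenSSL reports (e.g. `"1a:2b:..."`) into the base32 string used as a TubID. A sketch of the same conversion, with the caveat that foolscap ships its own base32 module; here the stdlib RFC 4648 encoder, lowercased and with padding stripped, stands in for it:

```python
# Sketch of crypto.digest32(): colon-separated hex -> raw bytes -> base32.
import base64

def digest32(colondigest):
    raw = bytes(int(part, 16) for part in colondigest.split(":"))
    # assumption: foolscap's base32.encode() is approximated here by the
    # stdlib encoder, lowercased and unpadded
    return base64.b32encode(raw).decode("ascii").lower().rstrip("=")
```

For example, a five-byte all-zero digest encodes to eight `a` characters.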
\"%s\"" % (self.doLog, obj) - elif type(obj) == int: - print "[%s] %s" % (self.doLog, obj) - else: - print "[%s] ?%s?" % (self.doLog, obj) - return Banana.sendToken(self, obj) - def sendClose(self, openID): - if self.doLog: print "[%s] CLOSE(%d)" % (self.doLog, openID) - return Banana.sendClose(self, openID) - def sendAbort(self, count): - if self.doLog: print "[%s] ABORT(%d)" % (self.doLog, count) - return Banana.sendAbort(self, count) - - - ### receive logging - - def rxStackSummary(self): - return ",".join([s.__class__.__name__ for s in self.receiveStack]) - - def handleOpen(self, openCount, indexToken): - if self.doLog: - stack = self.rxStackSummary() - print "[%s] got OPEN(%d,%s) %s" % \ - (self.doLog, openCount, indexToken, stack) - return Banana.handleOpen(self, openCount, indexToken) - def handleToken(self, token, ready_deferred=None): - if self.doLog: - if type(token) == str: - s = '"%s"' % token - elif type(token) == int: - s = '%s' % token - elif isinstance(token, tokens.BananaFailure): - s = 'UF:%s' % (token,) - else: - s = '?%s?' 
% (token,) - print "[%s] got %s %s" % (self.doLog, s, self.rxStackSummary()) - return Banana.handleToken(self, token, ready_deferred) - def handleClose(self, closeCount): - if self.doLog: - stack = self.rxStackSummary() - print "[%s] got CLOSE(%d): %s" % (self.doLog, closeCount, stack) - return Banana.handleClose(self, closeCount) - -class LoggingBanana(LoggingBananaMixin, Banana): - pass - -class LoggingStorageBanana(LoggingBananaMixin, StorageBanana): - pass - -class TokenTransport: - disconnectReason = None - def loseConnection(self): - pass - -class TokenBananaMixin: - # this class accumulates tokens instead of turning them into bytes - - def __init__(self): - self.tokens = [] - self.connectionMade() - self.transport = TokenTransport() - - def sendOpen(self): - openID = self.openCount - self.openCount += 1 - self.sendToken(("OPEN", openID)) - return openID - - def sendToken(self, token): - #print token - self.tokens.append(token) - - def sendClose(self, openID): - self.sendToken(("CLOSE", openID)) - - def sendAbort(self, count=0): - self.sendToken(("ABORT",)) - - def sendError(self, msg): - #print "TokenBanana.sendError(%s)" % msg - pass - - def getTokens(self): - self.produce() - assert(len(self.slicerStack) == 1) - assert(isinstance(self.slicerStack[0][0], RootSlicer)) - return self.tokens - - # TokenReceiveBanana - - def processTokens(self, tokens): - self.object = None - for t in tokens: - self.receiveToken(t) - return self.object - - def receiveToken(self, token): - # insert artificial tokens into receiveData. Once upon a time this - # worked by directly calling the commented-out functions, but schema - # checking and abandonUnslicer made that unfeasible. 
- - #if self.debug: - # print "receiveToken(%s)" % (token,) - - if type(token) == type(()): - if token[0] == "OPEN": - count = token[1] - assert count < 128 - b = ( chr(count) + tokens.OPEN ) - self.injectData(b) - #self.handleOpen(count, opentype) - elif token[0] == "CLOSE": - count = token[1] - assert count < 128 - b = chr(count) + tokens.CLOSE - self.injectData(b) - #self.handleClose(count) - elif token[0] == "ABORT": - if len(token) == 2: - count = token[1] - else: - count = 0 - assert count < 128 - b = chr(count) + tokens.ABORT - self.injectData(b) - #self.handleAbort(count) - elif type(token) == int: - assert 0 <= token < 128 - b = chr(token) + tokens.INT - self.injectData(b) - elif type(token) == str: - assert len(token) < 128 - b = chr(len(token)) + tokens.STRING + token - self.injectData(b) - else: - raise NotImplementedError, "hey, this is just a quick hack" - - def injectData(self, data): - if not self.transport.disconnectReason: - self.dataReceived(data) - - def receivedObject(self, obj): - self.object = obj - - def reportViolation(self, why): - self.violation = why - -class TokenBanana(TokenBananaMixin, Banana): - def __init__(self): - Banana.__init__(self) - TokenBananaMixin.__init__(self) - - def reportReceiveError(self, f): - Banana.reportReceiveError(self, f) - self.transport.disconnectReason = tokens.BananaFailure() - -class TokenStorageBanana(TokenBananaMixin, StorageBanana): - def __init__(self): - StorageBanana.__init__(self) - TokenBananaMixin.__init__(self) - - def reportReceiveError(self, f): - StorageBanana.reportReceiveError(self, f) - self.transport.disconnectReason = tokens.BananaFailure() - -def encodeTokens(obj, debug=0): - b = TokenStorageBanana() - b.debug = debug - d = b.send(obj) - d.addCallback(lambda res: b.tokens) - return d -def decodeTokens(tokens, debug=0): - b = TokenStorageBanana() - b.debug = debug - obj = b.processTokens(tokens) - return obj - -def encode(obj): - b = LoggingStorageBanana() - b.transport = StringIO() - 
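The receiveToken() helper above injects artificial tokens by hand-encoding them into Banana's wire framing: a single count/length byte followed by a type byte (and, for strings, the payload). An illustrative re-encoding of that framing — note the type-byte values below are made up for the sketch; the real constants live in foolscap.tokens:

```python
# Sketch of the byte framing used by TokenBananaMixin.receiveToken().
# These type-byte values are hypothetical placeholders, not foolscap's.
OPEN, CLOSE, INT, STRING = b"\x88", b"\x89", b"\x81", b"\x82"

def encode_token(token):
    if isinstance(token, tuple) and token[0] == "OPEN":
        return bytes([token[1]]) + OPEN      # count byte, then OPEN marker
    if isinstance(token, tuple) and token[0] == "CLOSE":
        return bytes([token[1]]) + CLOSE
    if isinstance(token, int):
        assert 0 <= token < 128              # small ints only, as in the source
        return bytes([token]) + INT
    if isinstance(token, str):
        assert len(token) < 128
        return bytes([len(token)]) + STRING + token.encode("ascii")
    raise NotImplementedError(token)
```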
b.send(obj) - return b.transport.getvalue() -def decode(string): - b = LoggingStorageBanana() - b.dataReceived(string) - return b.object diff --git a/src/foolscap/foolscap/eventual.py b/src/foolscap/foolscap/eventual.py deleted file mode 100644 index 0e349edc..00000000 --- a/src/foolscap/foolscap/eventual.py +++ /dev/null @@ -1,78 +0,0 @@ -# -*- test-case-name: foolscap.test.test_eventual -*- - -from twisted.internet import reactor, defer -from twisted.python import log - -class _SimpleCallQueue: - # XXX TODO: merge epsilon.cooperator in, and make this more complete. - def __init__(self): - self._events = [] - self._flushObservers = [] - self._timer = None - - def append(self, cb, args, kwargs): - self._events.append((cb, args, kwargs)) - if not self._timer: - self._timer = reactor.callLater(0, self._turn) - - def _turn(self): - self._timer = None - # flush all the messages that are currently in the queue. If anything - # gets added to the queue while we're doing this, those events will - # be put off until the next turn. - events, self._events = self._events, [] - for cb, args, kwargs in events: - try: - cb(*args, **kwargs) - except: - log.err() - if self._events and not self._timer: - self._timer = reactor.callLater(0, self._turn) - if not self._events: - observers, self._flushObservers = self._flushObservers, [] - for o in observers: - o.callback(None) - - def flush(self): - """Return a Deferred that will fire (with None) when the call queue - is completely empty.""" - if not self._events: - return defer.succeed(None) - d = defer.Deferred() - self._flushObservers.append(d) - return d - - -_theSimpleQueue = _SimpleCallQueue() - -def eventually(cb, *args, **kwargs): - """This is the eventual-send operation, used as a plan-coordination - primitive. The callable will be invoked (with args and kwargs) in a later - reactor turn. Doing 'eventually(a); eventually(b)' guarantees that a will - be called before b. 
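The _SimpleCallQueue that follows is the core of foolscap's eventual-send primitive: calls are appended to a queue and run in order on a later turn, and anything queued during a turn waits for the next one, so `eventually(a); eventually(b)` guarantees a before b. A minimal single-threaded sketch without the reactor (the real code schedules each turn with `reactor.callLater(0, ...)`; here `turn()` is invoked explicitly):

```python
# Sketch of the eventual-send queue, minus Twisted's reactor scheduling.

class EventualQueue:
    def __init__(self):
        self._events = []

    def eventually(self, cb, *args, **kwargs):
        self._events.append((cb, args, kwargs))

    def turn(self):
        # flush only what was queued before this turn started; anything
        # appended while running waits for the next turn
        events, self._events = self._events, []
        for cb, args, kwargs in events:
            try:
                cb(*args, **kwargs)
            except Exception:
                pass  # the real code logs the failure with log.err()

q = EventualQueue()
order = []
q.eventually(order.append, "a")
q.eventually(order.append, "b")
q.turn()
```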
- - Any exceptions that occur in the callable will be logged with log.err(). - If you really want to ignore them, be sure to provide a callable that - catches those exceptions. - - This function returns None. If you care to know when the callable was - run, be sure to provide a callable that notifies somebody. - """ - _theSimpleQueue.append(cb, args, kwargs) - - -def fireEventually(value=None): - """This returns a Deferred which will fire in a later reactor turn, after - the current call stack has been completed, and after all other deferreds - previously scheduled with callEventually(). - """ - d = defer.Deferred() - eventually(d.callback, value) - return d - -def flushEventualQueue(_ignored=None): - """This returns a Deferred which fires when the eventual-send queue is - finally empty. This is useful to wait upon as the last step of a Trial - test method. - """ - return _theSimpleQueue.flush() diff --git a/src/foolscap/foolscap/ipb.py b/src/foolscap/foolscap/ipb.py deleted file mode 100644 index 22a98c0d..00000000 --- a/src/foolscap/foolscap/ipb.py +++ /dev/null @@ -1,107 +0,0 @@ - - -from zope.interface import interface -Interface = interface.Interface - -# TODO: move these here -from foolscap.tokens import ISlicer, IRootSlicer, IUnslicer -_ignored = [ISlicer, IRootSlicer, IUnslicer] # hush pyflakes - -class DeadReferenceError(Exception): - """The RemoteReference is dead, Jim.""" - - -class IReferenceable(Interface): - """This object is remotely referenceable. This means it is represented to - remote systems as an opaque identifier, and that round-trips preserve - identity. - """ - - def processUniqueID(): - """Return a unique identifier (scoped to the process containing the - Referenceable). Most objects can just use C{id(self)}, but objects - which should be indistinguishable to a remote system may want - multiple objects to map to the same PUID.""" - -class IRemotelyCallable(Interface): - """This object is remotely callable. 
This means it defines some remote_* - methods and may have a schema which describes how those methods may be - invoked. - """ - - def getInterfaceNames(): - """Return a list of RemoteInterface names to which this object knows - how to respond.""" - - def doRemoteCall(methodname, args, kwargs): - """Invoke the given remote method. This method may raise an - exception, return normally, or return a Deferred.""" - -class ITub(Interface): - """This marks a Tub.""" - -class IRemoteReference(Interface): - """This marks a RemoteReference.""" - - def notifyOnDisconnect(callback, *args, **kwargs): - """Register a callback to run when we lose this connection. - - The callback will be invoked with whatever extra arguments you - provide to this function. For example:: - - def my_callback(name, number): - print name, number+4 - cookie = rref.notifyOnDisconnect(my_callback, 'bob', number=3) - - This function returns an opaque cookie. If you want to cancel the - notification, pass this same cookie back to dontNotifyOnDisconnect:: - - rref.dontNotifyOnDisconnect(cookie) - - Note that if the Tub is shutdown (via stopService), all - notifyOnDisconnect handlers are cancelled. - """ - - def dontNotifyOnDisconnect(cookie): - """Deregister a callback that was registered with notifyOnDisconnect. - """ - - def callRemote(name, *args, **kwargs): - """Invoke a method on the remote object with which I am associated. - - I always return a Deferred. This will fire with the results of the - method when and if the remote end finishes. 
It will errback if any of - the following things occur:: - - the arguments do not match the schema I believe is in use by the - far end (causes a Violation exception) - - the connection to the far end has been lost (DeadReferenceError) - - the arguments are not accepted by the schema in use by the far end - (Violation) - - the method executed by the far end raises an exception (arbitrary) - - the return value of the remote method is not accepted by the schema - in use by the far end (Violation) - - the connection is lost before the response is returned - (ConnectionLost) - - the return value is not accepted by the schema I believe is in use - by the far end (Violation) - """ - - def callRemoteOnly(name, *args, **kwargs): - """Invoke a method on the remote object with which I am associated. - - This form is for one-way messages that do not require results or even - acknowledgement of completion. I do not wait for the method to finish - executing. The remote end will be instructed to not send any - response. There is no way to know whether the method was successfully - delivered or not. - - I always return None. 
- """ - diff --git a/src/foolscap/foolscap/negotiate.py b/src/foolscap/foolscap/negotiate.py deleted file mode 100644 index f4a3e95b..00000000 --- a/src/foolscap/foolscap/negotiate.py +++ /dev/null @@ -1,1202 +0,0 @@ -# -*- test-case-name: foolscap.test.test_negotiate -*- - -from twisted.python import log -from twisted.python.failure import Failure -from twisted.internet import protocol, reactor - -from foolscap import broker, referenceable, vocab -from foolscap.eventual import eventually -from foolscap.tokens import SIZE_LIMIT, ERROR, \ - BananaError, NegotiationError, RemoteNegotiationError -from foolscap.banana import int2b128 - -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - -def isSubstring(small, big): - assert type(small) is str and type(big) is str - return small in big - -def best_overlap(my_min, my_max, your_min, your_max, name): - """Find the highest integer which is in both ranges (inclusive). - Raise NegotiationError (using 'name' in the error message) if there - is no overlap.""" - best = min(my_max, your_max) - if best < my_min: - raise NegotiationError("I can't handle %s %d" % (name, best)) - if best < your_min: - raise NegotiationError("You can't handle %s %d" % (name, best)) - return best - -def check_inrange(my_min, my_max, decision, name): - if decision < my_min or decision > my_max: - raise NegotiationError("I can't handle %s %d" % (name, decision)) - -# negotiation phases -PLAINTEXT, ENCRYPTED, DECIDING, BANANA, ABANDONED = range(5) - - -# version number history: -# 1 (0.1.0): offer includes initial-vocab-table-range, -# decision includes initial-vocab-table-index -# 2 (0.1.1): no changes to offer or decision -# reqID=0 was commandeered for use by callRemoteOnly() -# 3 (0.1.3): added PING and PONG tokens - -class Negotiation(protocol.Protocol): - """This is the first protocol to speak over the wire. 
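The best_overlap() helper defined just below is the whole of foolscap's version negotiation: pick the highest integer common to both inclusive ranges, or fail with a message naming the incompatible parameter. A standalone copy showing the three outcomes:

```python
# Standalone copy of negotiate.best_overlap(): highest integer in the
# intersection of [my_min, my_max] and [your_min, your_max].

class NegotiationError(Exception):
    pass

def best_overlap(my_min, my_max, your_min, your_max, name):
    best = min(my_max, your_max)
    if best < my_min:
        raise NegotiationError("I can't handle %s %d" % (name, best))
    if best < your_min:
        raise NegotiationError("You can't handle %s %d" % (name, best))
    return best
```

With both ends offering `banana-negotiation-range: 3 3` (the 0.1.5 default), the only possible decision is 3.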
It is responsible - for negotiating the connection parameters, then switching the connection - over to the actual Banana protocol. This removes all the details of - negotiation from Banana, and makes it easier to use a more complex scheme - (including a STARTTLS transition) in PB. - - Negotiation consists of three phases. In the PLAINTEXT phase, the client - side (i.e. the one which initiated the connection) sends an - HTTP-compatible GET request for the target Tub ID. This request includes - an Connection: Upgrade header. The GET request serves a couple of - purposes: if a PB client is accidentally pointed at an HTTP server, it - will trigger a sensible 404 Error instead of getting confused. A regular - HTTP server can be used to send back a 303 Redirect, allowing Apache (or - whatever) to be used as a redirection server. - - After sending the GET request, the client waits for the server to send a - 101 Switching Protocols command, then starts the TLS session. It may also - receive a 303 Redirect command, in which case it drops the connection and - tries again with the new target. - - In the PLAINTEXT phase, the server side (i.e. the one which accepted the - connection) waits for the client's GET request, extracts the TubID from - the first line, consults the local Listener object to locate the - appropriate Tub (and its certificate), sends back a 101 Switching - Protocols response, then starts the TLS session with the Tub's - certificate. If the Listener reports that the requested Tub is listening - elsewhere, the server sends back a 303 Redirect instead, and then drops - the connection. - - By the end of the PLAINTEXT phase, both ends know which Tub they are - using (self.tub has been set). - - Both sides send a Hello Block upon entering the ENCRYPTED phase, which in - practice means just after starting the TLS session. 
The Hello block - contains the negotiation offer, as a series of Key: Value lines separated - by \\r\\n delimiters and terminated by a blank line. Upon receiving the - other end's Hello block, each side switches to the DECIDING phase, and - then evaluates the received Hello message. - - Each side compares TubIDs, and the side with the lexicographically higher - value becomes the Master. (If, for some reason, one side does not claim a - TubID, its value is treated as None, which always compares *less* than - any actual TubID, so the non-TubID side will probably not be the Master. - Any possible ties are resolved by having the server side be the master). - Both sides know the other's TubID, so both sides know whether they are - the Master or not. - - The Master has two jobs to do. The first is that it compares the - negotiation offer against its own capabilities, and comes to a decision - about what the connection parameters shall be. It may decide that the two - sides are not compatible, in which case it will abandon the connection. - The second job is to decide whether to continue to use the connection at - all: if the Master already has a connection to the other Tub, it will - drop this new one. This decision must be made by the Master (as opposed - to the Server) because it is possible for both Tubs to connect to each - other simultaneously, and this design avoids a race condition that could - otherwise drop *both* connections. - - If the Master decides to continue with the connection, it sends the - Decision block to the non-master side. It then swaps out the Negotiation - protocol for a new Banana protocol instance that has been created with - the same parameters that were used to create the Decision block. - - The non-master side is waiting in the DECIDING phase for this block. Upon - receiving it, the non-master side evaluates the connection parameters and - either drops the connection or swaps in a new Banana protocol instance - with the same parameters. 
At this point, negotiation is complete and the - Negotiation instances are dropped. - - - @ivar negotationOffer: a dict which describes what we will offer to the - far side. Each key/value pair will be put into a rfc822-style header and - sent from the client to the server when the connection is established. On - the server side, handleNegotiation() uses negotationOffer to indicate - what we are locally capable of. - - Subclasses may influence the negotiation process by modifying this - dictionary before connectionMade() is called. - - @ivar negotiationResults: a dict which describes what the two ends have - agreed upon. This is computed by the server, stored locally, and sent - down to the client. The client receives it and stores it without - modification (server chooses). - - In general, the negotiationResults are the same on both sides of the same - connection. However there may be certain parameters which are sent as - part of the negotiation block (the PB TubID, for example) which will not. 
- - """ - - myTubID = None - tub = None - theirTubID = None - - receive_phase = PLAINTEXT # we are expecting this - send_phase = PLAINTEXT # the other end is expecting this - encrypted = False - - doNegotiation = True - debugNegotiation = False - forceNegotiation = None - - minVersion = 3 - maxVersion = 3 - - brokerClass = broker.Broker - - initialVocabTableRange = vocab.getVocabRange() - - SERVER_TIMEOUT = 60 # you have 60 seconds to complete negotiation, or else - negotiationTimer = None - - def __init__(self): - for i in range(self.minVersion, self.maxVersion+1): - assert hasattr(self, "evaluateNegotiationVersion%d" % i), i - assert hasattr(self, "acceptDecisionVersion%d" % i), i - assert isinstance(self.initialVocabTableRange, tuple) - self.negotiationOffer = { - "banana-negotiation-range": "%d %d" % (self.minVersion, - self.maxVersion), - "initial-vocab-table-range": "%d %d" % self.initialVocabTableRange, - } - # TODO: for testing purposes, it might be useful to be able to add - # some keys to this offer - if self.forceNegotiation is not None: - # TODO: decide how forcing should work. Maybe forceNegotiation - # should be a dict of keys or something. distinguish between - # offer and decision. - self.negotiationOffer['negotiation-forced'] = "True" - self.buffer = "" - self.options = {} - # to trigger specific race conditions during unit tests, it is useful - # to allow certain operations to be stalled for a moment. - # self.options will contain a key like debug_slow_connectionMade to - # indicate that there should be a 1 second delay between the real - # connectionMade and the time our self.connectionMade() method is - # invoked. To support this, the first time connectionMade() is - # invoked, self.debugTimers['connectionMade'] is set to a 1s - # DelayedCall, which fires self.debug_fireTimer('connectionMade', - # callable, *args). 
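The master-selection rule described above — lexicographically higher TubID wins, a missing TubID (None) always loses, and ties go to the server side — can be sketched as a small predicate. This is a hypothetical helper written for illustration, not foolscap API; note the None cases are handled explicitly because modern Python refuses to order None against a string:

```python
# Sketch of the Master-selection rule from the negotiation design notes.

def i_am_master(my_tubid, their_tubid, i_am_server):
    if my_tubid == their_tubid:
        return i_am_server        # ties are resolved in favor of the server
    if my_tubid is None:
        return False              # no TubID compares less than any real one
    if their_tubid is None:
        return True
    return my_tubid > their_tubid # lexicographically higher TubID is Master
```

Both sides know both TubIDs after the Hello exchange, so both evaluate this to the same answer without further traffic.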
That will set self.debugTimers['connectionMade'] - # to None, so the condition is not fired again, then invoke the - # actual connectionMade method. When the connection is lost, all - # remaining timers will be canceled. - self.debugTimers = {} - - # if anything goes wrong during negotiation (version mismatch, - # malformed headers, assertion checks), we stash the Failure in this - # attribute and then drop the connection. For client-side - # connections, we notify our parent TubConnector when the - # connectionLost() message is finally delivered. - self.failureReason = None - - def initClient(self, connector, targetHost): - # clients do connectTCP and speak first with a GET - self.isClient = True - self.tub = connector.tub - self.brokerClass = self.tub.brokerClass - self.myTubID = self.tub.tubID - self.connector = connector - self.target = connector.target - self.targetHost = targetHost - self.wantEncryption = bool(self.target.encrypted - or self.tub.myCertificate) - self.options = self.tub.options.copy() - - def initServer(self, listener): - # servers do listenTCP and respond to the GET - self.isClient = False - self.listener = listener - self.options = self.listener.options.copy() - # the broker class is set when we find out which Tub we should use - - def parseLines(self, header): - lines = header.split("\r\n") - block = {} - for line in lines: - colon = line.index(":") - key = line[:colon].lower() - value = line[colon+1:].lstrip() - block[key] = value - return block - - def sendBlock(self, block): - keys = block.keys() - keys.sort() - for k in keys: - self.transport.write("%s: %s\r\n" % (k.lower(), block[k])) - self.transport.write("\r\n") # end block - - def debug_doTimer(self, name, timeout, call, *args): - if (self.options.has_key("debug_slow_%s" % name) and - not self.debugTimers.has_key(name)): - log.msg("debug_doTimer(%s)" % name) - t = reactor.callLater(timeout, self.debug_fireTimer, name) - self.debugTimers[name] = (t, [(call, args)]) - cb = 
self.options["debug_slow_%s" % name] - if cb is not None and cb is not True: - cb() - return True - return False - - def debug_addTimerCallback(self, name, call, *args): - if self.debugTimers.get(name): - self.debugTimers[name][1].append((call, args)) - return True - return False - - def debug_forceTimer(self, name): - if self.debugTimers.get(name): - self.debugTimers[name][0].cancel() - self.debug_fireTimer(name) - - def debug_forceAllTimers(self): - for name in self.debugTimers: - if self.debugTimers.get(name): - self.debugTimers[name][0].cancel() - self.debug_fireTimer(name) - - def debug_cancelAllTimers(self): - for name in self.debugTimers: - if self.debugTimers.get(name): - self.debugTimers[name][0].cancel() - self.debugTimers[name] = None - - def debug_fireTimer(self, name): - calls = self.debugTimers[name][1] - self.debugTimers[name] = None - for call,args in calls: - call(*args) - - def connectionMade(self): - # once connected, this Negotiation instance must either invoke - # self.switchToBanana or self.negotiationFailed, to insure that the - # TubConnector (if any) gets told about the results of the connection - # attempt. - - if self.doNegotiation: - if self.isClient: - self.connectionMadeClient() - else: - self.connectionMadeServer() - else: - self.switchToBanana({}) - - def connectionMadeClient(self): - assert self.receive_phase == PLAINTEXT - # the client needs to send the HTTP-compatible tubid GET, - # along with the TLS upgrade request - self.sendPlaintextClient() - # now we wait for the TLS Upgrade acceptance to come back - - def sendPlaintextClient(self): - # we want an encrypted connection if the Tub at either end uses - # encryption. We might not get it, though. Declaring whether or not - # we are using an encrypted Tub is separate, and expressed in our - # Hello block. 
- req = [] - if self.target.encrypted: - if self.debugNegotiation: - log.msg("sendPlaintextClient: GET for tubID %s" - % self.target.tubID) - req.append("GET /id/%s HTTP/1.1" % self.target.tubID) - else: - if self.debugNegotiation: - log.msg("sendPlaintextClient: GET for no tubID") - req.append("GET /id/ HTTP/1.1") - req.append("Host: %s" % self.targetHost) - if self.debugNegotiation: - log.msg("sendPlaintextClient: wantEncryption=%s" % - self.wantEncryption) - if self.wantEncryption: - req.append("Upgrade: TLS/1.0") - else: - req.append("Upgrade: PB/1.0") - req.append("Connection: Upgrade") - self.transport.write("\r\n".join(req)) - self.transport.write("\r\n\r\n") - # the next thing the other end expects to see is the encrypted phase - self.send_phase = ENCRYPTED - - def connectionMadeServer(self): - # the server just waits for the GET message to arrive, but set up the - # server timeout first - if self.debug_doTimer("connectionMade", 1, self.connectionMade): - return - timeout = self.options.get('server_timeout', self.SERVER_TIMEOUT) - if timeout: - # oldpb clients will hit this case. 
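`sendPlaintextClient()` above assembles an HTTP-compatible opening request: a `GET /id/<tubID>` line, a `Host:` header, and an `Upgrade:` header that requests either TLS or plain PB. A self-contained sketch of that request builder (the function name is illustrative, not foolscap's):

```python
def build_plaintext_get(tub_id, target_host, want_encryption):
    """Assemble the client's plaintext opening request, as
    sendPlaintextClient does. tub_id may be None for an
    unauthenticated (non-encrypted) target."""
    req = ["GET /id/%s HTTP/1.1" % (tub_id or ""),
           "Host: %s" % target_host,
           "Upgrade: TLS/1.0" if want_encryption else "Upgrade: PB/1.0",
           "Connection: Upgrade"]
    return "\r\n".join(req) + "\r\n\r\n"
```

Framing the request as HTTP lets a server answer confused web browsers sensibly, while a real Tub replies `101 Switching Protocols` and moves to the encrypted phase.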
- self.negotiationTimer = reactor.callLater(timeout, - self.negotiationTimedOut) - - def sendError(self, why): - pass # TODO - - def negotiationTimedOut(self): - del self.negotiationTimer - why = Failure(NegotiationError("negotiation timeout")) - self.sendError(why) - self.failureReason = why - self.transport.loseConnection() - - def stopNegotiationTimer(self): - if self.negotiationTimer: - self.negotiationTimer.cancel() - del self.negotiationTimer - - def dataReceived(self, chunk): - if self.debugNegotiation: - log.msg("dataReceived(isClient=%s,phase=%s,options=%s): '%s'" - % (self.isClient, self.receive_phase, self.options, chunk)) - if self.receive_phase == ABANDONED: - return - - self.buffer += chunk - - if self.debug_addTimerCallback("connectionMade", - self.dataReceived, ''): - return - - try: - # we accumulate a header block for each phase - if len(self.buffer) > 4096: - raise BananaError("Header too long") - eoh = self.buffer.find('\r\n\r\n') - if eoh == -1: - return - header, self.buffer = self.buffer[:eoh], self.buffer[eoh+4:] - if self.receive_phase == PLAINTEXT: - if self.isClient: - self.handlePLAINTEXTClient(header) - else: - self.handlePLAINTEXTServer(header) - elif self.receive_phase == ENCRYPTED: - self.handleENCRYPTED(header) - elif self.receive_phase == DECIDING: - self.handleDECIDING(header) - else: - assert 0, "should not get here" - # there might be some leftover data for the next phase. - # self.buffer will be emptied when we switchToBanana, so in that - # case we won't call the wrong dataReceived. 
- if self.buffer: - self.dataReceived("") - - except Exception, e: - why = Failure() - if self.debugNegotiation: - log.msg("negotiation had exception: %s" % why) - if isinstance(e, RemoteNegotiationError): - pass # they've already hung up - else: - # there's a chance we can provide a little bit more information - # to the other end before we hang up on them - if isinstance(e, NegotiationError): - errmsg = str(e) - else: - log.msg("negotiation had internal error:") - log.msg(why) - errmsg = "internal server error, see logs" - errmsg = errmsg.replace("\n", " ").replace("\r", " ") - if self.send_phase == PLAINTEXT: - resp = ("HTTP/1.1 500 Internal Server Error: %s\r\n\r\n" - % errmsg) - self.transport.write(resp) - elif self.send_phase in (ENCRYPTED, DECIDING): - block = {'banana-decision-version': 1, - 'error': errmsg, - } - self.sendBlock(block) - elif self.send_phase == BANANA: - self.sendBananaError(errmsg) - - self.failureReason = why - self.transport.loseConnection() - return - - def sendBananaError(self, msg): - if len(msg) > SIZE_LIMIT: - msg = msg[:SIZE_LIMIT-10] + "..." 
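The `dataReceived()` loop above accumulates bytes and peels off one header block per `\r\n\r\n` delimiter, enforcing a 4096-byte limit. A minimal sketch of just that framing, separated from the phase dispatch (class and method names are illustrative):

```python
class BlockFramer:
    """Accumulate bytes; emit one header block per '\r\n\r\n' seen,
    keeping any trailing partial data buffered for the next call."""
    MAX = 4096

    def __init__(self):
        self.buffer = ""
        self.blocks = []

    def data_received(self, chunk):
        self.buffer += chunk
        while True:
            if len(self.buffer) > self.MAX:
                raise ValueError("Header too long")
            eoh = self.buffer.find("\r\n\r\n")
            if eoh == -1:
                return                      # wait for more data
            header, self.buffer = self.buffer[:eoh], self.buffer[eoh + 4:]
            self.blocks.append(header)
```

The real code achieves the "leftover data for the next phase" behavior by recursively calling `dataReceived("")`; the `while` loop here is the equivalent without recursion.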
- int2b128(len(msg), self.transport.write) - self.transport.write(ERROR) - self.transport.write(msg) - # now you should drop the connection - - def connectionLost(self, reason): - # force connectionMade to happen, so connectionLost can occur - # normally - self.debug_forceTimer("connectionMade") - # cancel the other slowdown timers, since they all involve sending - # data, and the connection is no longer available - self.debug_cancelAllTimers() - for k,t in self.debugTimers.items(): - if t: - t[0].cancel() - self.debugTimers[k] = None - if self.isClient: - l = self.tub.options.get("debug_gatherPhases") - if l is not None: - l.append(self.receive_phase) - if not self.failureReason: - self.failureReason = reason - self.negotiationFailed() - - def handlePLAINTEXTServer(self, header): - # the client sends us a GET message - lines = header.split("\r\n") - if not lines[0].startswith("GET "): - raise BananaError("not right") - command, url, version = lines[0].split() - if not url.startswith("/id/"): - # probably a web browser - raise BananaError("not right") - targetTubID = url[4:] - if self.debugNegotiation: - log.msg("handlePLAINTEXTServer: targetTubID='%s'" % targetTubID) - if targetTubID == "": - targetTubID = None - if isSubstring("Upgrade: TLS/1.0\r\n", header): - wantEncrypted = True - else: - wantEncrypted = False - if self.debugNegotiation: - log.msg("handlePLAINTEXTServer: wantEncrypted=%s" % wantEncrypted) - # we ignore the rest of the lines - - if wantEncrypted and not crypto_available: - # this is a confused client, or a bad URL: if we don't have - # crypto, we couldn't have created a pb:// URL. - log.msg("Negotiate.handlePLAINTEXTServer: client wants " - "encryption for TubID=%s but we have no crypto, " - "hanging up on them" % targetTubID) - # we could just not offer the encryption, but they won't be happy - # with the results, since they wanted to connect to a specific - # TubID. 
- raise NegotiationError("crypto not available") - - if wantEncrypted and targetTubID is None: - # we wouldn't know which certificate to use, so don't use - # encryption at all, even though the client wants to. TODO: if it - # is possible to do startTLS on the server side without a server - # certificate, do that. It might be possible to do some sort of - # ephemeral non-signed certificate. - wantEncrypted = False - - if targetTubID is not None and not wantEncrypted: - raise NegotiationError("secure Tubs require encryption") - - # now that we know which Tub the client wants to connect to, either - # send a Redirect, or start the ENCRYPTED phase - - tub, redirect = self.listener.lookupTubID(targetTubID) - if tub: - self.tub = tub # our tub - self.options.update(self.tub.options) - self.brokerClass = self.tub.brokerClass - self.myTubID = tub.tubID - self.sendPlaintextServerAndStartENCRYPTED(wantEncrypted) - elif redirect: - self.sendRedirect(redirect) - else: - raise NegotiationError("unknown TubID %s" % targetTubID) - - def sendPlaintextServerAndStartENCRYPTED(self, encrypted): - # this is invoked on the server side - if self.debug_doTimer("sendPlaintextServer", 1, - self.sendPlaintextServerAndStartENCRYPTED, - encrypted): - return - if encrypted: - resp = "\r\n".join(["HTTP/1.1 101 Switching Protocols", - "Upgrade: TLS/1.0, PB/1.0", - "Connection: Upgrade", - ]) - else: - # TODO: see if this makes sense, I haven't read the HTTP spec - resp = "\r\n".join(["HTTP/1.1 101 Switching Protocols", - "Upgrade: PB/1.0", - "Connection: Upgrade", - ]) - self.transport.write(resp) - self.transport.write("\r\n\r\n") - # the next thing they expect is the encrypted block - self.send_phase = ENCRYPTED - self.startENCRYPTED(encrypted) - - def sendRedirect(self, redirect): - # this is invoked on the server side - # send the redirect message, then close the connection. make sure the - # data gets flushed, though. 
- raise NotImplementedError # TODO - - def handlePLAINTEXTClient(self, header): - if self.debugNegotiation: - log.msg("handlePLAINTEXTClient: header='%s'" % header) - lines = header.split("\r\n") - tokens = lines[0].split() - # TODO: accept a 303 redirect - if tokens[1] != "101": - raise BananaError("not right, got '%s', " - "expected 101 Switching Protocols" - % lines[0]) - isEncrypted = isSubstring("Upgrade: TLS/1.0", header) - if not isEncrypted: - # the connection is not encrypted, so don't claim a TubID - self.myTubID = None - # we ignore everything else - - # now we upgrade to TLS - self.startENCRYPTED(isEncrypted) - # and wait for their Hello to arrive - - def startENCRYPTED(self, encrypted): - # this is invoked on both sides. We move to the "ENCRYPTED" phase, - # which might actually involve a TLS-encrypted session if that's what - # the client wanted, but if it isn't then we just "upgrade" to - # nothing and change modes. - if self.debugNegotiation: - log.msg("startENCRYPTED(isClient=%s, encrypted=%s)" % - (self.isClient, encrypted)) - if encrypted: - self.startTLS(self.tub.myCertificate) - self.encrypted = encrypted - # TODO: can startTLS trigger dataReceived? - self.receive_phase = ENCRYPTED - self.sendHello() - - def sendHello(self): - """This is called on both sides as soon as the encrypted connection - is established. This causes a negotiation block to be sent to the - other side as an offer.""" - if self.debug_doTimer("sendHello", 1, self.sendHello): - return - - hello = self.negotiationOffer.copy() - - if self.myTubID: - # this indicates which identity we wish to claim. This is the - # hash of the certificate we're using, or one of its parents. If - # we aren't using an encrypted connection, don't claim any - # identity. 
- hello['my-tub-id'] = self.myTubID - - if self.debugNegotiation: - log.msg("Negotiate.sendHello (isClient=%s): %s" % - (self.isClient, hello)) - self.sendBlock(hello) - - - def handleENCRYPTED(self, header): - # both ends have sent a Hello message - if self.debug_addTimerCallback("sendHello", - self.handleENCRYPTED, header): - return - self.theirCertificate = None - if self.encrypted: - # we should be encrypted now - # get the peer's certificate, if any - try: - them = crypto.peerFromTransport(self.transport) - if them and them.original: - self.theirCertificate = them - except crypto.CertificateError: - pass - - hello = self.parseLines(header) - if hello.has_key("error"): - raise RemoteNegotiationError(hello["error"]) - self.evaluateHello(hello) - - def evaluateHello(self, offer): - """Evaluate the HELLO message sent by the other side. We compare - TubIDs, and the higher value becomes the 'master' and makes the - negotiation decisions. - - This method returns a tuple of DECISION,PARAMS. There are a few - different possibilities:: - - - We are the master, we make a negotiation decision: DECISION is - the block of data to send back to the non-master side, PARAMS are - the connection parameters we will use ourselves. - - - We are the master, we can't accomodate their request: raise - NegotiationError - - - We are not the master: DECISION is None - """ - - if self.debugNegotiation: - log.msg("evaluateHello(isClient=%s): offer=%s" % (self.isClient, - offer,)) - if not offer.has_key('banana-negotiation-range'): - if offer.has_key('banana-negotiation-version'): - msg = ("Peer is speaking foolscap-0.0.5 or earlier, " - "which is not compatible with this version. 
" - "Please upgrade the peer.") - raise NegotiationError(msg) - raise NegotiationError("No valid banana-negotiation sequence seen") - min_s, max_s = offer['banana-negotiation-range'].split() - theirMinVer = int(min_s) - theirMaxVer = int(max_s) - # best_overlap() might raise a NegotiationError - best = best_overlap(self.minVersion, self.maxVersion, - theirMinVer, theirMaxVer, - "banana version") - - negfunc = getattr(self, "evaluateNegotiationVersion%d" % best) - self.decision_version = best - return negfunc(offer) - - def evaluateNegotiationVersion1(self, offer): - forced = False - f = offer.get('negotiation-forced', None) - if f and f.lower() == "true": - forced = True - # 'forced' means the client is on a one-way link (or is really - # stubborn) and has already made up its mind about the connection - # parameters. If we are unable to handle exactly what they have - # offered, we must hang up. - assert not forced # TODO: implement - - - # glyph says: look at Juice, it does rfc822 parsing, startTLS, - # switch-to-other-protocol, etc. grep for retrieveConnection in q2q. - - # TODO: oh, if we see an HTTP client, send a good HTTP error like - # "protocol not supported", or maybe even an HTML page that explains - # what a PB server is - - # there are four distinct dicts here: - # self.negotiationOffer: what we want - # clientOffer: what they sent to us, the client's requests. 
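`evaluateHello()` above parses each side's `banana-negotiation-range` and asks `best_overlap()` for the highest version both ranges share. A plausible sketch of that helper under the behavior described here (the real foolscap helper raises NegotiationError; ValueError stands in for it):

```python
def best_overlap(my_min, my_max, their_min, their_max, name):
    """Pick the highest version inside both [min, max] ranges, or fail
    the negotiation if the ranges do not overlap."""
    best = min(my_max, their_max)
    if best < max(my_min, their_min):
        raise ValueError("no overlapping %s" % name)
    return best
```

The same helper is reused later for the `initial-vocab-table-range`, which is why both are expressed as "min max" pairs in the offer.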
- # serverOffer: what we send to them, the server's decision - # self.negotiationResults: the negotiated settings - # - # [my-tub-id] is not present in self.negotiationResults - # the server's tubID is in [my-tub-id] for both self.negotiationOffer - # and serverOffer - # the client's tubID is in [my-tub-id] for clientOffer - - myTubID = self.myTubID - - theirTubID = offer.get("my-tub-id") - if self.theirCertificate is None: - # no client certificate - if theirTubID is not None: - # this is where a poor MitM attack is detected, one which - # doesn't even pretend to encrypt the connection - raise BananaError("you must use a certificate to claim a " - "TubID") - else: - # verify that their claimed TubID matches their SSL certificate. - # TODO: handle chains - digest = crypto.digest32(self.theirCertificate.digest("sha1")) - if digest != theirTubID: - # this is where a good MitM attack is detected, one which - # encrypts the connection but which of course uses the wrong - # certificate - raise BananaError("TubID mismatch") - - if theirTubID: - theirTubRef = referenceable.TubRef(theirTubID) - else: - theirTubRef = None # unauthenticated - self.theirTubRef = theirTubRef # for use by non-master side, later - - if self.isClient and self.target.encrypted: - # verify that we connected to the Tub we expected to. If we - # weren't trying to connect to an encrypted tub, then don't - # bother checking.. we just accept whoever we managed to connect - # to. - if theirTubRef != self.target: - # TODO: how (if at all) should this error message be - # communicated to the other side? 
- raise BananaError("connected to the wrong Tub") - - if myTubID is None and theirTubID is None: - iAmTheMaster = not self.isClient - elif myTubID is None: - iAmTheMaster = False - elif theirTubID is None: - iAmTheMaster = True - else: - # this is the most common case - iAmTheMaster = myTubID > theirTubID - - if self.debugNegotiation: - log.msg("iAmTheMaster: %s" % iAmTheMaster) - - decision, params = None, None - - if iAmTheMaster: - # we get to decide everything. The other side is now waiting for - # a decision block. - self.send_phase = DECIDING - decision = {} - # combine their 'offer' and our own self.negotiationOffer to come - # up with a 'decision' to be sent back to the other end, and the - # 'params' to be used on our connection - - # first, do we continue with this connection? we might - # have an existing connection for this particular tub - if theirTubRef and theirTubRef in self.tub.brokers: - # there is an existing connection, so drop this one - if self.debugNegotiation: - log.msg(" abandoning the connection: we already have one") - raise NegotiationError("Duplicate connection") - - # what initial vocab set should we use? - theirVocabRange_s = offer.get("initial-vocab-table-range", "0 0") - theirVocabRange = theirVocabRange_s.split() - theirVocabMin = int(theirVocabRange[0]) - theirVocabMax = int(theirVocabRange[1]) - vocab_index = best_overlap( - self.initialVocabTableRange[0], - self.initialVocabTableRange[1], - theirVocabMin, theirVocabMax, - "initial vocab set") - vocab_hash = vocab.hashVocabTable(vocab_index) - decision['initial-vocab-table-index'] = "%d %s" % (vocab_index, - vocab_hash) - decision['banana-decision-version'] = str(self.decision_version) - - # v1: handle vocab table index - params = { 'banana-decision-version': self.decision_version, - 'initial-vocab-table-index': vocab_index, - } - - else: - # otherwise, the other side gets to decide. The next thing they - # expect to hear from us is banana. 
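The master-election rule above can be isolated into a few lines: the lexicographically higher TubID wins, an authenticated side beats an unauthenticated one, and when neither side has a TubID the server decides. A sketch (function name is illustrative):

```python
def i_am_the_master(my_tub_id, their_tub_id, is_client):
    """Decide which end makes the negotiation decision, per the
    tie-break rules in evaluateNegotiationVersion1."""
    if my_tub_id is None and their_tub_id is None:
        return not is_client          # both unauthenticated: server decides
    if my_tub_id is None:
        return False                  # only they have an identity
    if their_tub_id is None:
        return True                   # only we have an identity
    return my_tub_id > their_tub_id   # the common case
```

Because both ends evaluate the same rule with the roles swapped, exactly one side concludes it is the master, which is what lets the master safely drop duplicate connections.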
- self.send_phase = BANANA - - - if iAmTheMaster: - # I am the master, so I send the decision - if self.debugNegotiation: - log.msg("Negotiation.sendDecision: %s" % decision) - # now we send the decision and switch to Banana. they might hang - # up. - self.sendDecision(decision, params) - else: - # I am not the master, I receive the decision - self.receive_phase = DECIDING - - def evaluateNegotiationVersion2(self, offer): - # version 2 changes the meaning of reqID=0 in a 'call' sequence, to - # support the implementation of callRemoteOnly. No other protocol - # changes were made, and no changes were made to the offer or - # decision blocks. - return self.evaluateNegotiationVersion1(offer) - - def evaluateNegotiationVersion3(self, offer): - # version 3 adds PING and PONG tokens, to enable keepalives and - # idle-disconnect. No other protocol changes were made, and no - # changes were made to the offer or decision blocks. - return self.evaluateNegotiationVersion1(offer) - - def sendDecision(self, decision, params): - if self.debug_doTimer("sendDecision", 1, - self.sendDecision, decision, params): - return - if self.debug_addTimerCallback("sendHello", - self.sendDecision, decision, params): - return - self.sendBlock(decision) - self.send_phase = BANANA - self.switchToBanana(params) - - def handleDECIDING(self, header): - # this gets called on the non-master side - if self.debugNegotiation: - log.msg("handleDECIDING(isClient=%s): %s" % (self.isClient, - header)) - if self.debug_doTimer("handleDECIDING", 1, - self.handleDECIDING, header): - # for testing purposes, wait a moment before accepting the - # decision. This insures that we trigger the "Duplicate - # Broker" condition. 
NOTE: This will interact badly with the - # "there might be some leftover data for the next phase" call - # in dataReceived - return - decision = self.parseLines(header) - params = self.acceptDecision(decision) - self.switchToBanana(params) - - def acceptDecision(self, decision): - """This is called on the client end when it receives the results of - the negotiation from the server. The client must accept this decision - (and return the connection parameters dict), or raise - NegotiationError to hang up.negotiationResults.""" - if self.debugNegotiation: - log.msg("Banana.acceptDecision: got %s" % decision) - - version = decision.get('banana-decision-version') - if not version: - raise NegotiationError("No banana-decision-version value") - acceptfunc = getattr(self, "acceptDecisionVersion%d" % int(version)) - if not acceptfunc: - raise NegotiationError("I cannot handle banana-decision-version " - "value of %d" % int(version)) - return acceptfunc(decision) - - def acceptDecisionVersion1(self, decision): - if decision.has_key("error"): - error = decision["error"] - raise RemoteNegotiationError("Banana negotiation failed: %s" - % error) - - # parse the decision here, create the connection parameters dict - ver = int(decision['banana-decision-version']) - vocab_index_string = decision.get('initial-vocab-table-index') - if vocab_index_string: - vocab_index, vocab_hash = vocab_index_string.split() - vocab_index = int(vocab_index) - else: - vocab_index = 0 - check_inrange(self.initialVocabTableRange[0], - self.initialVocabTableRange[1], - vocab_index, "initial vocab table index") - our_hash = vocab.hashVocabTable(vocab_index) - if vocab_index > 0 and our_hash != vocab_hash: - msg = ("Our hash for vocab-table-index %d (%s) does not match " - "your hash (%s)" % (vocab_index, our_hash, vocab_hash)) - raise NegotiationError(msg) - params = { 'banana-decision-version': ver, - 'initial-vocab-table-index': vocab_index, - } - return params - - def acceptDecisionVersion2(self, 
decision): - # this only affects the interpretation of reqID=0, so we can use the - # same accept function - return self.acceptDecisionVersion1(decision) - - def acceptDecisionVersion3(self, decision): - # this adds PING and PONG tokens, so we can use the same accept - # function - return self.acceptDecisionVersion1(decision) - - def loopbackDecision(self): - # if we were talking to ourselves, what negotiation decision would we - # reach? This is used for loopback connections - max_vocab = self.initialVocabTableRange[1] - params = { 'banana-decision-version': self.maxVersion, - 'initial-vocab-table-index': max_vocab, - } - return params - - def startTLS(self, cert): - # the TLS connection (according to glyph) is "ready" immediately, but - # really the negotiation is going on behind the scenes (OpenSSL is - # trying a little too hard to be transparent). I think you have to - # write some bytes to trigger the negotiation. getPeerCertificate() - # can't be called until you receive some bytes, so grab it when a - # negotiation block arrives that claims to have an authenticated - # TubID. - - # Instead of this: - # opts = self.tub.myCertificate.options() - # We use the MyOptions class to fix up the verify stuff: we request a - # certificate from the client, but do not verify it against a list of - # root CAs - if self.debugNegotiation: - log.msg("startTLS, client=%s" % self.isClient) - kwargs = {} - if cert: - kwargs['privateKey'] = cert.privateKey.original - kwargs['certificate'] = cert.original - opts = crypto.MyOptions(**kwargs) - - self.transport.startTLS(opts) - - def switchToBanana(self, params): - # switch over to the new protocol (a Broker instance). This - # Negotiation protocol goes away after this point. 
- - if self.debugNegotiation: - log.msg("Negotiate.switchToBanana(isClient=%s)" % self.isClient) - log.msg(" params: %s" % (params,)) - - self.stopNegotiationTimer() - - b = self.brokerClass(params, - self.tub.keepaliveTimeout, - self.tub.disconnectTimeout, - ) - b.factory = self.factory # not used for PB code - b.setTub(self.tub) - self.transport.protocol = b - b.makeConnection(self.transport) - buf, self.buffer = self.buffer, "" # empty our buffer, just in case - b.dataReceived(buf) # and hand it to the new protocol - - # if we were created as a client, we'll have a TubConnector. Let them - # know that this connection has succeeded, so they can stop any other - # connection attempts still in progress. - if self.isClient: - self.connector.negotiationComplete(self.factory) - - # finally let our Tub know that they can start using the new Broker. - # This will wake up anyone who initiated an outbound connection. - if self.isClient: - theirTubRef = self.target - else: - theirTubRef = self.theirTubRef - self.tub.brokerAttached(theirTubRef, b, self.isClient) - - def negotiationFailed(self): - reason = self.failureReason - if self.debugNegotiation: - # TODO: consider logging this unconditionally.. it shouldn't - # happen very often, but if it does, it may take a long time to - # track down - log.msg("Negotiation.negotiationFailed: %s" % reason) - self.stopNegotiationTimer() - if self.receive_phase != ABANDONED and self.isClient: - eventually(self.connector.negotiationFailed, self.factory, reason) - self.receive_phase = ABANDONED - cb = self.options.get("debug_negotiationFailed_cb") - if cb: - # note that this gets called with a NegotiationError, not a - # Failure - eventually(cb, reason) - -# TODO: make sure code that examines self.receive_phase handles ABANDONED - -class TubConnectorClientFactory(protocol.ClientFactory, object): - # this is for internal use only. 
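The handoff in `switchToBanana()` above has two essential moves: repoint the transport at the new protocol, then replay any buffered bytes so nothing that arrived early is lost. A minimal sketch with stand-in classes (FakeTransport and FakeBroker are test doubles, not foolscap or Twisted classes):

```python
class FakeTransport:
    """Stand-in for a Twisted transport; only the .protocol slot matters."""
    protocol = None

class FakeBroker:
    """Stand-in for the Broker that takes over after negotiation."""
    def __init__(self):
        self.received = ""
    def makeConnection(self, transport):
        self.transport = transport
    def dataReceived(self, data):
        self.received += data

def switch_to_banana(transport, negotiation_buffer, broker):
    """Repoint the transport at the new protocol, then hand it any
    bytes that arrived before the switch. Returns the (now empty)
    negotiation buffer."""
    transport.protocol = broker
    broker.makeConnection(transport)
    broker.dataReceived(negotiation_buffer)   # replay leftovers
    return ""                                 # old buffer is emptied
```

Ordering matters: the buffer must be emptied before replay in the real code, so that a reentrant `dataReceived` on the old protocol cannot deliver the same bytes twice.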
Application code should use - # Tub.getReference(url) - - def __init__(self, tc, host): - self.tc = tc # the TubConnector - self.host = host - - def __repr__(self): - # make it clear which remote Tub we're trying to connect to - base = object.__repr__(self) - at = base.find(" at ") - if at == -1: - # our annotation isn't really important, so don't fail just - # because we guessed the default __repr__ incorrectly - return base - if self.tc.tub.tubID: - origin = self.tc.tub.tubID[:8] - else: - origin = "" - if self.tc.target.getTubID(): - target = self.tc.target.getTubID()[:8] - else: - target = "" - return base[:at] + " [from %s]" % origin + " [to %s]" % target + base[at:] - - def startedConnecting(self, connector): - self.connector = connector - - def disconnect(self): - self.connector.disconnect() - - def buildProtocol(self, addr): - proto = self.tc.tub.negotiationClass() # this is usually Negotiation - proto.initClient(self.tc, self.host) - proto.factory = self - return proto - - def clientConnectionFailed(self, connector, reason): - self.tc.clientConnectionFailed(self, reason) - - -class TubConnector: - """I am used to make an outbound connection. I am given a target TubID - and a list of locationHints, and I try all of them until I establish a - Broker connected to the target. I will consider redirections returned - along the way. The first hint that yields a connected Broker will stop - the search. If targetTubID is None, we are going to make an unencrypted - connection. - - This is a single-use object. The connection attempt begins as soon as my - connect() method is called. - - I live until all but one of the TCP connections I initiated have finished - closing down. This means that connection establishment attempts in - progress are cancelled, and established connections (the ones which did - *not* complete negotiation before the winning connection) have called - their connectionLost() methods. 
- - @param locationHints: the list of 'host:port' hints where the remote tub - can be found. - """ - - failureReason = None - CONNECTION_TIMEOUT = 60 - timer = None - - def __init__(self, parent, tubref): - self.tub = parent - self.target = tubref - self.remainingLocations = self.target.getLocations() - # attemptedLocations keeps track of where we've already tried to - # connect, so we don't try them twice. - self.attemptedLocations = [] - - # pendingConnections contains a (PBClientFactory -> Connector) map - # for pairs where connectTCP has started, but negotiation has not yet - # completed. We keep track of these so we can shut them down when we - # stop connecting (either because one of the connections succeeded, - # or because someone told us to give up). - self.pendingConnections = {} - - def connect(self): - """Begin the connection process. This should only be called once. - This will either result in the successful Negotiation object invoking - the parent Tub's brokerAttached() method, our us calling the Tub's - connectionFailed() method.""" - self.tub.connectorStarted(self) - timeout = self.tub.options.get('connect_timeout', - self.CONNECTION_TIMEOUT) - self.timer = reactor.callLater(timeout, self.connectionTimedOut) - self.active = True - self.connectToAll() - - def stopConnectionTimer(self): - if self.timer: - self.timer.cancel() - del self.timer - - def shutdown(self): - self.active = False - self.remainingLocations = [] - self.stopConnectionTimer() - for c in self.pendingConnections.values(): - c.disconnect() - # as each disconnect() finishes, it will either trigger our - # clientConnectionFailed or our negotiationFailed methods, both of - # which will trigger checkForIdle, and the last such message will - # invoke self.tub.connectorFinished() - - def connectToAll(self): - while self.remainingLocations: - location = self.remainingLocations.pop() - if location in self.attemptedLocations: - continue - self.attemptedLocations.append(location) - host, port 
= location.split(":") - port = int(port) - f = TubConnectorClientFactory(self, host) - c = reactor.connectTCP(host, port, f) - self.pendingConnections[f] = c - # the tcp.Connector that we get back from reactor.connectTCP will - # retain a reference to the transport that it creates, so we can - # use it to disconnect the established (but not yet negotiated) - # connection - if self.tub.options.get("debug_stall_second_connection"): - # for unit tests, hold off on making the second connection - # for a moment. This allows the first connection to get to a - # known state. - reactor.callLater(0.1, self.connectToAll) - return - self.checkForFailure() - - def connectionTimedOut(self): - self.timer = None - why = "no connection established within client timeout" - self.failureReason = Failure(NegotiationError(why)) - self.shutdown() - self.failed() - - def clientConnectionFailed(self, factory, reason): - # this is called if some individual TCP connection cannot be - # established - if not self.failureReason: - self.failureReason = reason - del self.pendingConnections[factory] - self.checkForFailure() - self.checkForIdle() - - def redirectReceived(self, newLocation): - # the redirected connection will disconnect soon, which will trigger - # negotiationFailed(), so we don't have to do a - # del self.pendingConnections[factory] - self.remainingLocations.append(newLocation) - self.connectToAll() - - def negotiationFailed(self, factory, reason): - # this is called if protocol negotiation cannot be established, or if - # the connection is closed for any reason prior to switching to the - # Banana protocol - assert isinstance(reason, Failure), \ - "Hey, %s isn't a Failure" % (reason,) - if (not self.failureReason or - isinstance(reason, NegotiationError)): - # don't let mundane things like ConnectionFailed override the - # actually significant ones like NegotiationError - self.failureReason = reason - del self.pendingConnections[factory] - self.checkForFailure() - 
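`connectToAll()` above walks the remaining location hints, skips any already attempted, and splits each `host:port` hint for `connectTCP`. The bookkeeping, minus the reactor calls, can be sketched as (the helper name is invented for illustration):

```python
def next_locations(remaining, attempted):
    """Drain 'remaining' hints, deduplicating against 'attempted', and
    return (host, port) pairs ready for connectTCP."""
    targets = []
    while remaining:
        location = remaining.pop()
        if location in attempted:
            continue
        attempted.append(location)
        host, port = location.split(":")
        targets.append((host, int(port)))
    return targets
```

The deduplication matters because `redirectReceived()` can append hints back onto the list, and a redirect loop must not cause repeated connection attempts to the same address.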
self.checkForIdle() - - def negotiationComplete(self, factory): - # 'factory' has just completed negotiation, so abandon all the other - # connection attempts - self.active = False - if self.timer: - self.timer.cancel() - self.timer = None - del self.pendingConnections[factory] # this one succeeded - for f in self.pendingConnections.keys(): # abandon the others - # for connections that are not yet established, this will trigger - # clientConnectionFailed. For connections that are established - # (and exchanging negotiation messages), this does - # loseConnection() and will thus trigger negotiationFailed. - f.disconnect() - self.checkForIdle() - - def checkForFailure(self): - if not self.active: - return - if self.remainingLocations: - return - if self.pendingConnections: - return - # we have no more options, so the connection attempt will fail. The - # getBrokerForTubRef may have succeeded, however, if the other side - # tried to connect to us at exactly the same time, they were the - # master, they established their connection first (but the final - # decision is still in flight), and they hung up on our connection - # because they felt it was a duplicate. So, if self.failureReason - # indicates a duplicate connection, do not signal a failure here. We - # leave the connection timer in place in case they lied about having - # a duplicate connection ready to go. 
- if (self.failureReason.check(RemoteNegotiationError) and - isSubstring(self.failureReason.value.args[0], - "Duplicate connection")): - log.msg("TubConnector.checkForFailure: connection attempt " - "failed because the other end decided ours was a " - "duplicate connection, so we won't signal the " - "failure here") - return - self.failed() - - def failed(self): - self.stopConnectionTimer() - self.active = False - self.tub.connectionFailed(self.target, self.failureReason) - - def checkForIdle(self): - if self.remainingLocations: - return - if self.pendingConnections: - return - # we have no more outstanding connections (either in progress or in - # negotiation), so this connector is finished. - self.tub.connectorFinished(self) diff --git a/src/foolscap/foolscap/observer.py b/src/foolscap/foolscap/observer.py deleted file mode 100644 index 52f80666..00000000 --- a/src/foolscap/foolscap/observer.py +++ /dev/null @@ -1,50 +0,0 @@ -# -*- test-case-name: foolscap.test_observer -*- - -# many thanks to AllMyData for contributing the initial version of this code - -from twisted.internet import defer -from foolscap import eventual - -class OneShotObserverList: - """A one-shot event distributor. - - Subscribers can get a Deferred that will fire with the results of the - event once it finally occurs. The caller does not need to know whether - the event has happened yet or not: they get a Deferred in either case. - - The Deferreds returned to subscribers are guaranteed to not fire in the - current reactor turn; instead, eventually() is used to fire them in a - later turn. Look at Mark Miller's 'Concurrency Among Strangers' paper on - erights.org for a description of why this property is useful. 
- - I can only be fired once.""" - - def __init__(self): - self._fired = False - self._result = None - self._watchers = [] - self.__repr__ = self._unfired_repr - - def _unfired_repr(self): - return "<OneShotObserverList [%s]>" % (self._watchers, ) - - def _fired_repr(self): - return "<OneShotObserverList -fired- %s>" % (self._result, ) - - def whenFired(self): - if self._fired: - return eventual.fireEventually(self._result) - d = defer.Deferred() - self._watchers.append(d) - return d - - def fire(self, result): - assert not self._fired - self._fired = True - self._result = result - - for w in self._watchers: - eventual.eventually(w.callback, result) - del self._watchers - self.__repr__ = self._fired_repr - diff --git a/src/foolscap/foolscap/pb.py b/src/foolscap/foolscap/pb.py deleted file mode 100644 index e40056e0..00000000 --- a/src/foolscap/foolscap/pb.py +++ /dev/null @@ -1,797 +0,0 @@ -# -*- test-case-name: foolscap.test.test_pb -*- - -import os.path, weakref -from zope.interface import implements -from twisted.internet import defer, protocol -from twisted.application import service, strports -from twisted.python import log - -from foolscap import ipb, base32, negotiate, broker, observer, eventual -from foolscap.referenceable import SturdyRef -from foolscap.tokens import PBError, BananaError -from foolscap.reconnector import Reconnector -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - - -Listeners = [] -class Listener(protocol.ServerFactory): - """I am responsible for a single listening port, which may connect to - multiple Tubs. I have a strports-based Service, which I will attach as a - child of one of my Tubs. If that Tub disconnects, I will reparent the - Service to a remaining one. - - Unauthenticated Tubs use a TubID of 'None'.
There may be at most one such - Tub attached to any given Listener.""" - - # this also serves as the ServerFactory - - def __init__(self, port, options={}, - negotiationClass=negotiate.Negotiation): - """ - @type port: string - @param port: a L{twisted.application.strports} -style description. - """ - name, args, kw = strports.parse(port, None) - assert name in ("TCP", "UNIX") # TODO: IPv6 - self.port = port - self.options = options - self.negotiationClass = negotiationClass - self.parentTub = None - self.tubs = {} - self.redirects = {} - self.s = strports.service(port, self) - Listeners.append(self) - - def getPortnum(self): - """When this Listener was created with a strport string of '0' or - 'tcp:0' (meaning 'please allocate me something'), and if the Listener - is active (it is attached to a Tub which is in the 'running' state), - this method will return the port number that was allocated. This is - useful for the following pattern:: - - t = Tub() - l = t.listenOn('tcp:0') - t.setLocation('localhost:%d' % l.getPortnum()) - """ - - assert self.s.running - name, args, kw = strports.parse(self.port, None) - assert name in ("TCP",) - return self.s._port.getHost().port - - def __repr__(self): - if self.tubs: - return "<Listener at 0x%x on %s with tubs %s>" % ( - abs(id(self)), - self.port, - ",".join([str(k) for k in self.tubs.keys()])) - return "<Listener at 0x%x on %s with no tubs>" % (abs(id(self)), - self.port) - - def addTub(self, tub): - if tub.tubID in self.tubs: - if tub.tubID is None: - raise RuntimeError("This Listener (on %s) already has an " - "unauthenticated Tub, you cannot add a " - "second one" % self.port) - raise RuntimeError("This Listener (on %s) is already connected " - "to TubID '%s'" % (self.port, tub.tubID)) - self.tubs[tub.tubID] = tub - if self.parentTub is None: - self.parentTub = tub - self.s.setServiceParent(self.parentTub) - - def removeTub(self, tub): - # this might return a Deferred, since the removal might cause the - # Listener to shut down. It might also return None.
- del self.tubs[tub.tubID] - if self.parentTub is tub: - # we need to switch to a new one - tubs = self.tubs.values() - if tubs: - self.parentTub = tubs[0] - # TODO: I want to do this without first doing - # disownServiceParent, so the port remains listening. Can we - # do this? It looks like setServiceParent does - # disownServiceParent first, so it may glitch. - self.s.setServiceParent(self.parentTub) - else: - # no more tubs, this Listener will go away now - d = self.s.disownServiceParent() - Listeners.remove(self) - return d - return None - - def getService(self): - return self.s - - def addRedirect(self, tubID, location): - assert tubID is not None # unauthenticated Tubs don't get redirects - self.redirects[tubID] = location - def removeRedirect(self, tubID): - del self.redirects[tubID] - - def buildProtocol(self, addr): - """Return a Broker attached to me (as the service provider). - """ - proto = self.negotiationClass() - proto.initServer(self) - proto.factory = self - return proto - - def lookupTubID(self, tubID): - return self.tubs.get(tubID), self.redirects.get(tubID) - - -class Tub(service.MultiService): - """I am a presence in the PB universe, also known as a Tub. - - I am a Service (in the twisted.application.service.Service sense), - so you either need to call my startService() method before using me, - or setServiceParent() me to a running service. - - This is the primary entry point for all PB-using applications, both - clients and servers. - - I am known to the outside world by a base URL, which may include - authentication information (a yURL). This is my 'TubID'. - - I contain Referenceables, and manage RemoteReferences to Referenceables - that live in other Tubs. - - - @param certData: if provided, use it as a certificate rather than - generating a new one. This is a PEM-encoded - private/public keypair, as returned by Tub.getCertData() - - @param certFile: if provided, the Tub will store its certificate in - this file. 
If the file does not exist when the Tub is - created, the Tub will generate a new certificate and - store it here. If the file does exist, the certificate - will be loaded from this file. - - The simplest way to use the Tub is to choose a long-term - location for the certificate, use certFile= to tell the - Tub about it, and then let the Tub manage its own - certificate. - - You may provide certData, or certFile, (or neither), but - not both. - - @param options: a dictionary of options that can influence connection - negotiation. Currently defined keys are: - - debug_slow: if True, wait half a second between - each negotiation response - - @ivar brokers: maps TubIDs to L{Broker} instances - - @ivar listeners: maps strport to TCPServer service - - @ivar referenceToName: maps Referenceable to a name - @ivar nameToReference: maps name to Referenceable - - """ - implements(ipb.ITub) - - unsafeTracebacks = True # TODO: better way to enable this - logLocalFailures = False - logRemoteFailures = False - debugBanana = False - NAMEBITS = 160 # length of swissnumber for each reference - TUBIDBITS = 16 # length of non-crypto tubID - encrypted = True - negotiationClass = negotiate.Negotiation - brokerClass = broker.Broker - keepaliveTimeout = 4*60 # ping when connection has been idle this long - disconnectTimeout = None # disconnect after this much idle time - - def __init__(self, certData=None, certFile=None, options={}): - service.MultiService.__init__(self) - self.setup(options) - if certFile: - self.setupEncryptionFile(certFile) - else: - self.setupEncryption(certData) - - def setupEncryptionFile(self, certFile): - if os.path.exists(certFile): - certData = open(certFile, "rb").read() - self.setupEncryption(certData) - else: - self.setupEncryption(None) - f = open(certFile, "wb") - f.write(self.getCertData()) - f.close() - - def setupEncryption(self, certData): - if not crypto_available: - raise RuntimeError("crypto for PB is not available, " - "try importing 
foolscap.crypto and see " - "what happens") - if certData: - cert = crypto.PrivateCertificate.loadPEM(certData) - else: - cert = self.createCertificate() - self.myCertificate = cert - self.tubID = crypto.digest32(cert.digest("sha1")) - - def setup(self, options): - self.options = options - self.listeners = [] - self.locationHints = [] - - # local Referenceables - self.nameToReference = weakref.WeakValueDictionary() - self.referenceToName = weakref.WeakKeyDictionary() - self.strongReferences = [] - self.nameLookupHandlers = [] - - # remote stuff. Most of these use a TubRef (or NoAuthTubRef) as a - # dictionary key - self.tubConnectors = {} # maps TubRef to a TubConnector - self.waitingForBrokers = {} # maps TubRef to list of Deferreds - self.brokers = {} # maps TubRef to a Broker that connects to them - self.unauthenticatedBrokers = [] # inbound Brokers without TubRefs - self.reconnectors = [] - - self._allBrokersAreDisconnected = observer.OneShotObserverList() - self._activeConnectors = [] - self._allConnectorsAreFinished = observer.OneShotObserverList() - - self._pending_getReferences = [] # list of (d, furl) pairs - - def setOption(self, name, value): - if name == "logLocalFailures": - # log (with log.err) any exceptions that occur during the - # execution of a local Referenceable's method, which is invoked - # on behalf of a remote caller. These exceptions are reported to - # the remote caller through their callRemote's Deferred as usual: - # this option enables logging on the callee's side (i.e. our - # side) as well. - # - # TODO: This does not yet include Violations which were raised - # because the inbound callRemote had arguments that didn't meet - # our specifications. But it should. - self.logLocalFailures = value - elif name == "logRemoteFailures": - # log (with log.err) any exceptions that occur during the - # execution of a remote Referenceable's method, invoked on behalf - # of a local RemoteReference.callRemote(). 
These exceptions are - # reported to our local caller through the usual Deferred.errback - # mechanism: this enables logging on the caller's side (i.e. our - # side) as well. - self.logRemoteFailures = value - elif name == "keepaliveTimeout": - self.keepaliveTimeout = value - elif name == "disconnectTimeout": - self.disconnectTimeout = value - else: - raise KeyError("unknown option name '%s'" % name) - - def createCertificate(self): - # this is copied from test_sslverify.py - dn = crypto.DistinguishedName(commonName="newpb_thingy") - keypair = crypto.KeyPair.generate() - req = keypair.certificateRequest(dn) - certData = keypair.signCertificateRequest(dn, req, - lambda dn: True, - 132) - cert = keypair.newCertificate(certData) - #opts = cert.options() - # 'opts' can be given to reactor.listenSSL, or to transport.startTLS - - return cert - - def getCertData(self): - # the string returned by this method can be used as the certData= - # argument to create a new Tub with the same identity. TODO: actually - # test this, I don't know if dump/keypair.newCertificate is the right - # pair of methods. - return self.myCertificate.dumpPEM() - - def setLocation(self, *hints): - """Tell this service what its location is: a host:port description of - how to reach it from the outside world. You need to use this because - the Tub can't do it without help. If you do a - C{s.listenOn('tcp:1234')}, and the host is known as - C{foo.example.com}, then it would be appropriate to do:: - - s.setLocation('foo.example.com:1234') - - You must set the location before you can register any references. - - Encrypted Tubs can have multiple location hints, just provide - multiple arguments. Unauthenticated Tubs can only have one location.""" - - if not self.encrypted and len(hints) > 1: - raise PBError("Unauthenticated tubs may only have one " - "location hint") - self.locationHints = hints - - def listenOn(self, what, options={}): - """Start listening for connections. 
- - @type what: string or Listener instance - @param what: a L{twisted.application.strports} -style description, - or a L{Listener} instance returned by a previous call to - listenOn. - @param options: a dictionary of options that can influence connection - negotiation before the target Tub has been determined - - @return: The Listener object that was created. This can be used to - stop listening later on, to have another Tub listen on the same port, - and to figure out which port was allocated when you used a strports - specification of 'tcp:0'. """ - - if type(what) is str: - l = Listener(what, options, self.negotiationClass) - else: - assert not options - l = what - assert l not in self.listeners - l.addTub(self) - self.listeners.append(l) - return l - - def stopListeningOn(self, l): - # this returns a Deferred when the port is shut down - self.listeners.remove(l) - d = defer.maybeDeferred(l.removeTub, self) - return d - - def getListeners(self): - """Return the set of Listener objects that allow the outside world to - connect to this Tub.""" - return self.listeners[:] - - def clone(self): - """Return a new Tub (with a different ID), listening on the same - ports as this one.""" - if self.encrypted: - t = Tub() - else: - t = UnauthenticatedTub() - for l in self.listeners: - t.listenOn(l) - return t - - def connectorStarted(self, c): - assert self.running - self._activeConnectors.append(c) - def connectorFinished(self, c): - if c not in self._activeConnectors: - # TODO: I've seen this happen, but I can't figure out how it - # could possibly happen. Log and ignore rather than exploding - # when we try to do .remove, since this whole connector-tracking - # thing is mainly for the benefit of the unit tests (applications - # which never shut down a Tub aren't going to care), and it is - # more important to let application code run normally than to - # force an error here. 
- log.msg("Tub.connectorFinished: WEIRD, %s is not in %s" - % (c, self._activeConnectors)) - return - self._activeConnectors.remove(c) - if not self.running and not self._activeConnectors: - self._allConnectorsAreFinished.fire(self) - - def startService(self): - service.MultiService.startService(self) - for d,sturdy in self._pending_getReferences: - d1 = eventual.fireEventually(sturdy) - d1.addCallback(self.getReference) - d1.addBoth(lambda res, d=d: d.callback(res)) - del self._pending_getReferences - for rc in self.reconnectors: - eventual.eventually(rc.startConnecting, self) - - def _tubsAreNotRestartable(self): - raise RuntimeError("Sorry, but Tubs cannot be restarted.") - def _tubHasBeenShutDown(self): - raise RuntimeError("Sorry, but this Tub has been shut down.") - - def stopService(self): - # note that once you stopService a Tub, I cannot be restarted. (at - # least this code is not designed to make that possible.. it might be - # doable in the future). - assert self.running - self.startService = self._tubsAreNotRestartable - self.getReference = self._tubHasBeenShutDown - self.connectTo = self._tubHasBeenShutDown - dl = [] - for rc in self.reconnectors: - rc.stopConnecting() - del self.reconnectors - for l in self.listeners: - # TODO: rethink this, what I want is for stopService to cause all - # Listeners to shut down, but I'm not sure this is the right way - # to do it. 
- d = l.removeTub(self) - if isinstance(d, defer.Deferred): - dl.append(d) - dl.append(service.MultiService.stopService(self)) - - if self._activeConnectors: - dl.append(self._allConnectorsAreFinished.whenFired()) - for c in self._activeConnectors: - c.shutdown() - - if self.brokers or self.unauthenticatedBrokers: - dl.append(self._allBrokersAreDisconnected.whenFired()) - for b in self.brokers.values(): - b.shutdown() - for b in self.unauthenticatedBrokers: - b.shutdown() - - return defer.DeferredList(dl) - - def generateSwissnumber(self, bits): - bytes = os.urandom(bits/8) - return base32.encode(bytes) - - def buildURL(self, name): - if self.encrypted: - # TODO: IPv6 dotted-quad addresses have colons, but need to have - # host:port - hints = ",".join(self.locationHints) - return "pb://" + self.tubID + "@" + hints + "/" + name - return "pbu://" + self.locationHints[0] + "/" + name - - def registerReference(self, ref, name=None): - """Make a Referenceable available to the outside world. A URL is - returned which can be used to access this object. This registration - will remain in effect (and the Tub will retain a reference to the - object to keep it meaningful) until explicitly unregistered, or the - Tub is shut down. - - @type name: string (optional) - @param name: if provided, the object will be registered with this - name. If not, a random (unguessable) string will be - used. - - @rtype: string - @return: the URL which points to this object. This URL can be passed - to Tub.getReference() in any Tub on any host which can reach this - one. 
- """ - - if not self.locationHints: - raise RuntimeError("you must setLocation() before " - "you can registerReference()") - name = self._assignName(ref, name) - assert name - if ref not in self.strongReferences: - self.strongReferences.append(ref) - return self.buildURL(name) - - # this is called by either registerReference or by - # getOrCreateURLForReference - def _assignName(self, ref, preferred_name=None): - """Make a Referenceable available to the outside world, but do not - retain a strong reference to it. If we must create a new name, use - preferred_name. If that is None, use a random unguessable name. - """ - if not self.locationHints: - # without a location, there is no point in giving it a name - return None - if self.referenceToName.has_key(ref): - return self.referenceToName[ref] - name = preferred_name - if not name: - name = self.generateSwissnumber(self.NAMEBITS) - self.referenceToName[ref] = name - self.nameToReference[name] = ref - return name - - def getReferenceForName(self, name): - if name in self.nameToReference: - return self.nameToReference[name] - for lookup in self.nameLookupHandlers: - ref = lookup(name) - if ref: - if ref not in self.referenceToName: - self.referenceToName[ref] = name - return ref - raise KeyError("unable to find reference for name '%s'" % (name,)) - - def getReferenceForURL(self, url): - # TODO: who should this be used by? 
- sturdy = SturdyRef(url) - assert sturdy.tubID == self.tubID - return self.getReferenceForName(sturdy.name) - - def getOrCreateURLForReference(self, ref): - """Return the global URL for the reference, if there is one, or None - if there is not.""" - name = self._assignName(ref) - if name: - return self.buildURL(name) - return None - - def revokeReference(self, ref): - # TODO - pass - - def unregisterURL(self, url): - sturdy = SturdyRef(url) - name = sturdy.name - ref = self.nameToReference[name] - del self.nameToReference[name] - del self.referenceToName[ref] - self.revokeReference(ref) - - def unregisterReference(self, ref): - name = self.referenceToName[ref] - url = self.buildURL(name) - sturdy = SturdyRef(url) - name = sturdy.name - del self.nameToReference[name] - del self.referenceToName[ref] - if ref in self.strongReferences: - self.strongReferences.remove(ref) - self.revokeReference(ref) - - def registerNameLookupHandler(self, lookup): - """Add a function to help convert names to Referenceables. - - When remote systems pass a FURL to their Tub.getReference(), our Tub - will be asked to locate a Referenceable for the name inside that - furl. The normal mechanism for this is to look at the table - maintained by registerReference() and unregisterReference(). If the - name does not exist in that table, other 'lookup handler' functions - are given a chance. Each lookup handler is asked in turn, and the - first which returns a non-None value wins. - - This may be useful for cases where the furl represents an object that - lives on disk, or is generated on demand: rather than creating all - possible Referenceables at startup, the lookup handler can create or - retrieve the objects only when someone asks for them. - - Note that constructing the FURLs of these objects may be non-trivial. 
- It is safe to create an object, use tub.registerReference in one - invocation of a program to obtain (and publish) the furl, parse the - furl to extract the name, save the contents of the object on disk, - then in a later invocation of the program use a lookup handler to - retrieve the object from disk. This approach means the objects that - are created in a given invocation stick around (inside - tub.strongReferences) for the rest of that invocation. An alternative - approach is to create the object but *not* use tub.registerReference, - but in that case you have to construct the FURL yourself, and the Tub - does not currently provide any support for doing this robustly. - - @param lookup: a callable which accepts a name (as a string) and - returns either a Referenceable or None. Note that - these strings should not contain a slash, a question - mark, or an ampersand, as these are reserved in the - FURL for later expansion (to add parameters beyond the - object name) - """ - self.nameLookupHandlers.append(lookup) - - def unregisterNameLookupHandler(self, lookup): - self.nameLookupHandlers.remove(lookup) - - def getReference(self, sturdyOrURL): - """Acquire a RemoteReference for the given SturdyRef/URL. - - The Tub must be running (i.e. Tub.startService()) when this is - invoked. Future releases may relax this requirement. 
- - @return: a Deferred that fires with the RemoteReference - """ - - if isinstance(sturdyOrURL, SturdyRef): - sturdy = sturdyOrURL - else: - sturdy = SturdyRef(sturdyOrURL) - # pb->pb: ok, requires crypto - # pbu->pb: ok, requires crypto - # pbu->pbu: ok - # pb->pbu: ok, requires crypto - if sturdy.encrypted and not crypto_available: - e = BananaError("crypto for PB is not available, " - "we cannot handle encrypted PB-URLs like %s" - % sturdy.getURL()) - return defer.fail(e) - - if not self.running: - # queue their request for service once the Tub actually starts - log.msg("Tub.getReference(%s) queued until Tub.startService called" - % sturdy) - d = defer.Deferred() - self._pending_getReferences.append((d, sturdy)) - return d - - name = sturdy.name - d = self.getBrokerForTubRef(sturdy.getTubRef()) - d.addCallback(lambda b: b.getYourReferenceByName(name)) - return d - - def connectTo(self, _sturdyOrURL, _cb, *args, **kwargs): - """Establish (and maintain) a connection to a given PBURL. - - I establish a connection to the PBURL and run a callback to inform - the caller about the newly-available RemoteReference. If the - connection is lost, I schedule a reconnection attempt for the near - future. If that one fails, I keep trying at longer and longer - intervals (exponential backoff). - - I accept a callback which will be fired each time a connection - attempt succeeds. This callback is run with the new RemoteReference - and any additional args/kwargs provided to me. The callback should - then use rref.notifyOnDisconnect() to get a message when the - connection goes away. At some point after it goes away, the - Reconnector will reconnect. - - The Tub must be running (i.e. Tub.startService()) when this is - invoked. Future releases may relax this requirement. - - I return a Reconnector object. When you no longer want to maintain - this connection, call the stopConnecting() method on the Reconnector. 
- I promise to not invoke your callback after you've called - stopConnecting(), even if there was already a connection attempt in - progress. If you had an active connection before calling - stopConnecting(), you will still have access to it, until it breaks - on its own. (I will not attempt to break existing connections, I will - merely stop trying to create new ones). All my Reconnector objects - will be shut down when the Tub is stopped. - - Usage:: - - def _got_ref(rref, arg1, arg2): - rref.callRemote('hello again') - # etc - rc = tub.connectTo(furl, _got_ref, 'arg1', 'arg2') - ... - rc.stopConnecting() # later - """ - - rc = Reconnector(_sturdyOrURL, _cb, args, kwargs) - if self.running: - rc.startConnecting(self) - else: - log.msg("Tub.connectTo(%s) queued until Tub.startService called" - % _sturdyOrURL) - self.reconnectors.append(rc) - return rc - - # _removeReconnector is called by the Reconnector - def _removeReconnector(self, rc): - self.reconnectors.remove(rc) - - def getBrokerForTubRef(self, tubref): - if tubref in self.brokers: - return defer.succeed(self.brokers[tubref]) - if tubref.getTubID() == self.tubID: - b = self._createLoopbackBroker(tubref) - # _createLoopbackBroker will call brokerAttached, which will add - # it to self.brokers - # TODO: stash this in self.brokers, so we don't create multiples - return defer.succeed(b) - - d = defer.Deferred() - if tubref not in self.waitingForBrokers: - self.waitingForBrokers[tubref] = [] - self.waitingForBrokers[tubref].append(d) - - if tubref not in self.tubConnectors: - # the TubConnector will call our brokerAttached when it finishes - # negotiation, which will fire waitingForBrokers[tubref]. 
- c = negotiate.TubConnector(self, tubref) - self.tubConnectors[tubref] = c - c.connect() - - return d - - def _createLoopbackBroker(self, tubref): - t1,t2 = broker.LoopbackTransport(), broker.LoopbackTransport() - t1.setPeer(t2); t2.setPeer(t1) - n = negotiate.Negotiation() - params = n.loopbackDecision() - b1,b2 = self.brokerClass(params), self.brokerClass(params) - # we treat b1 as "our" broker, and b2 as "theirs", and we pretend - # that b2 has just connected to us. We keep track of b1, and b2 keeps - # track of us. - b1.setTub(self) - b2.setTub(self) - t1.protocol = b1; t2.protocol = b2 - b1.makeConnection(t1); b2.makeConnection(t2) - self.brokerAttached(tubref, b1, False) - return b1 - - def connectionFailed(self, tubref, why): - # we previously initiated an outbound TubConnector to this tubref, but - # it was unable to establish a connection. 'why' is the most useful - # Failure that occurred (i.e. it is a NegotiationError if we made it - # that far, otherwise it's a ConnectionFailed). - - if tubref in self.tubConnectors: - del self.tubConnectors[tubref] - if tubref in self.brokers: - # oh, but fortunately an inbound connection must have succeeded. - # Nevermind. - return - - # inform hopeful Broker-waiters that they aren't getting one - if tubref in self.waitingForBrokers: - waiting = self.waitingForBrokers[tubref] - del self.waitingForBrokers[tubref] - for d in waiting: - d.errback(why) - - def brokerAttached(self, tubref, broker, isClient): - if not tubref: - # this is an inbound connection from an unauthenticated Tub - assert not isClient - # we just track it so we can disconnect it later - self.unauthenticatedBrokers.append(broker) - return - - if tubref in self.tubConnectors: - # we initiated an outbound connection to this tubref - if not isClient: - # however, the connection we got was from an inbound - # connection. 
The completed (inbound) connection wins, so - # abandon the outbound TubConnector - self.tubConnectors[tubref].shutdown() - - # we don't need the TubConnector any more - del self.tubConnectors[tubref] - - if tubref in self.brokers: - # oops, this shouldn't happen but it isn't fatal. Raise - # BananaError so the Negotiation will drop the connection - raise BananaError("unexpected duplicate connection") - self.brokers[tubref] = broker - - # now inform everyone who's been waiting on it - if tubref in self.waitingForBrokers: - waiting = self.waitingForBrokers[tubref] - del self.waitingForBrokers[tubref] - for d in waiting: - d.callback(broker) - - def brokerDetached(self, broker, why): - # the Broker will have already severed all active references - for tubref in self.brokers.keys(): - if self.brokers[tubref] is broker: - del self.brokers[tubref] - if broker in self.unauthenticatedBrokers: - self.unauthenticatedBrokers.remove(broker) - # if the Tub has already shut down, we may need to notify observers - # who are waiting for all of our connections to finish shutting down - if (not self.running - and not self.brokers - and not self.unauthenticatedBrokers): - self._allBrokersAreDisconnected.fire(self) - -class UnauthenticatedTub(Tub): - encrypted = False - - """ - @type tubID: string - @ivar tubID: a global identifier for this Tub, possibly including - authentication information, hash of SSL certificate - """ - - def __init__(self, tubID=None, options={}): - service.MultiService.__init__(self) - self.setup(options) - self.myCertificate = None - assert not tubID # not yet - self.tubID = tubID - - -def getRemoteURL_TCP(host, port, pathname, *interfaces): - url = "pb://%s:%d/%s" % (host, port, pathname) - if crypto_available: - s = Tub() - else: - s = UnauthenticatedTub() - d = s.getReference(url, interfaces) - return d diff --git a/src/foolscap/foolscap/promise.py b/src/foolscap/foolscap/promise.py deleted file mode 100644 index 7b1914c2..00000000 --- 
a/src/foolscap/foolscap/promise.py +++ /dev/null @@ -1,283 +0,0 @@ -# -*- test-case-name: foolscap.test.test_promise -*- - -from twisted.python import util -from twisted.python.failure import Failure -from twisted.internet import defer -from foolscap.eventual import eventually - -id = util.unsignedID - -EVENTUAL, CHAINED, NEAR, BROKEN = range(4) - -class UsageError(Exception): - """Raised when you do something inappropriate to a Promise.""" - -def _ignore(results): - pass - - -class Promise: - """I am a promise of a future result. I am a lot like a Deferred, except - that my promised result is usually an instance. I make it possible to - schedule method invocations on this future instance, returning Promises - for the results. - - Promises are always in one of three states: Eventual, Fulfilled, and - Broken. (see http://www.erights.org/elib/concurrency/refmech.html for a - pretty picture). They start as Eventual, meaning we do not yet know - whether they will resolve or not. In this state, method invocations are - queued. Eventually the Promise will be 'resolved' into either the - Fulfilled or the Broken state. Fulfilled means that the promise contains - a live object to which methods can be dispatched synchronously. Broken - promises are incapable of invoking methods: they all result in Failure. - - Method invocation is always asynchronous: it always returns a Promise. - - The only thing you can do with a promise 'p1' is to perform an - eventual-send on it, like so:: - - sendOnly(p1).foo(args) # ignores the result - p2 = send(p1).bar(args) # creates a Promise for the result - p2 = p1.bar(args) # same as send(p1).bar(args) - - Or wait for it to resolve, using one of the following:: - - d = when(p); d.addCallback(cb) # provides a Deferred - p._then(cb, *args, **kwargs) # like when(p).addCallback(cb,*a,**kw) - p._except(cb, *args, **kwargs) # like when(p).addErrback(cb,*a,**kw) - - The _then and _except forms return the same Promise. 
You can set up - chains of calls that will be invoked in the future, using a dataflow - style, like this:: - - p = getPromiseForServer() - d = p.getDatabase('db1') - r = d.getRecord(name) - def _print(record): - print 'the record says', record - def _oops(failure): - print 'something failed:', failure - r._then(_print) - r._except(_oops) - - Or all collapsed in one sequence like:: - - getPromiseForServer().getDatabase('db1').getRecord(name)._then(_print) - - The eventual-send will eventually invoke the method foo(args) on the - promise's resolution. This will return a new Promise for the results of - that method call. - """ - - # all our internal methods are private, to avoid a confusing lack of an - # error message if someone tries to make a synchronous method call on us - # with a name that happens to match an internal one. - - _state = EVENTUAL - _useDataflowStyle = True # enables p.foo(args) - - def __init__(self): - self._watchers = [] - self._pendingMethods = [] # list of (methname, args, kwargs, p) - - # _then and _except are our only public methods. All other access is - # through normal (not underscore-prefixed) attribute names, which - # indicate names of methods on the target object that should be called - # later. - def _then(self, cb, *args, **kwargs): - d = self._wait_for_resolution() - d.addCallback(cb, *args, **kwargs) - d.addErrback(lambda ignore: None) - return self - - def _except(self, cb, *args, **kwargs): - d = self._wait_for_resolution() - d.addErrback(cb, *args, **kwargs) - return self - - # everything beyond here is private to this module - - def __repr__(self): - return "" % id(self) - - def __getattr__(self, name): - if not self._useDataflowStyle: - raise AttributeError("no such attribute %s" % name) - def newmethod(*args, **kwargs): - return self._send(name, args, kwargs) - return newmethod - - # _send and _sendOnly are used by send() and sendOnly(). _send is also - # used by regular attribute access. 
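The three-state lifecycle and message queueing described in the docstring above can be sketched in a few lines. This is NOT foolscap's Promise (which routes every delivery through Twisted's eventually() and returns Promises for results); it is a deliberately synchronous stand-in, with the invented name MiniPromise, just to illustrate how queued calls are flushed when the promise resolves to either a near target or a broken failure:

```python
# Minimal illustration of the EVENTUAL -> NEAR/BROKEN state machine.
# Unlike foolscap, delivery here happens synchronously inside resolve().
EVENTUAL, NEAR, BROKEN = range(3)

class MiniPromise:
    def __init__(self):
        self._state = EVENTUAL
        self._target = None
        self._pending = []   # queued (methname, args, kwargs, result) tuples

    def send(self, methname, *args, **kwargs):
        """Eventual-send: queue the call if unresolved, else invoke it."""
        result = []          # stand-in for the Promise-of-the-result
        if self._state == EVENTUAL:
            self._pending.append((methname, args, kwargs, result))
        else:
            result.append(self._invoke(methname, args, kwargs))
        return result

    def _invoke(self, methname, args, kwargs):
        if self._state == BROKEN:
            return self._target              # every call yields the failure
        return getattr(self._target, methname)(*args, **kwargs)

    def resolve(self, target):
        assert self._state == EVENTUAL, "may not be resolved twice"
        self._target = target
        self._state = BROKEN if isinstance(target, Exception) else NEAR
        for methname, args, kwargs, result in self._pending:
            result.append(self._invoke(methname, args, kwargs))
        del self._pending[:]
```

Calls made before resolution are queued; once resolve() fires, the queued `upper` call below runs against the real target, and calls on a broken promise yield the failure instead of a result.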
- - def _send(self, methname, args, kwargs): - """Return a Promise (for the result of the call) when the call is - eventually made. The call is guaranteed to not fire in this turn.""" - # this is called by send() - p, resolver = makePromise() - if self._state in (EVENTUAL, CHAINED): - self._pendingMethods.append((methname, args, kwargs, resolver)) - else: - eventually(self._deliver, methname, args, kwargs, resolver) - return p - - def _sendOnly(self, methname, args, kwargs): - """Send a message like _send, but discard the result.""" - # this is called by sendOnly() - if self._state in (EVENTUAL, CHAINED): - self._pendingMethods.append((methname, args, kwargs, _ignore)) - else: - eventually(self._deliver, methname, args, kwargs, _ignore) - - # _wait_for_resolution is used by when(), as well as _then and _except - - def _wait_for_resolution(self): - """Return a Deferred that will fire (with whatever was passed to - _resolve) when this Promise moves to a RESOLVED state (either NEAR or - BROKEN).""" - # this is called by when() - if self._state in (EVENTUAL, CHAINED): - d = defer.Deferred() - self._watchers.append(d) - return d - if self._state == NEAR: - return defer.succeed(self._target) - # self._state == BROKEN - return defer.fail(self._target) - - # _resolve is our resolver method, and is handed out by makePromise() - - def _resolve(self, target_or_failure): - """Resolve this Promise to refer to the given target. If called with - a Failure, the Promise is now BROKEN. _resolve may only be called - once.""" - # E splits this method into two pieces resolve(result) and - # smash(problem). It is easier for us to keep them in one piece, - # because d.addBoth(p._resolve) is convenient. 
- if self._state != EVENTUAL: - raise UsageError("Promises may not be resolved multiple times") - self._resolve2(target_or_failure) - - # the remaining methods are internal, for use by this class only - - def _resolve2(self, target_or_failure): - # we may be called with a Promise, an immediate value, or a Failure - if isinstance(target_or_failure, Promise): - self._state = CHAINED - when(target_or_failure).addBoth(self._resolve2) - return - if isinstance(target_or_failure, Failure): - self._break(target_or_failure) - return - self._target = target_or_failure - self._deliver_queued_messages() - self._state = NEAR - - def _break(self, failure): - # TODO: think about what you do to break a resolved promise. Once the - # Promise is in the NEAR state, it can't be broken, but eventually - # we're going to have a FAR state, which *can* be broken. - """Put this Promise in the BROKEN state.""" - if not isinstance(failure, Failure): - raise UsageError("Promises must be broken with a Failure") - if self._state == BROKEN: - raise UsageError("Broken Promises may not be re-broken") - self._target = failure - if self._state in (EVENTUAL, CHAINED): - self._deliver_queued_messages() - self._state = BROKEN - - def _invoke_method(self, name, args, kwargs): - if isinstance(self._target, Failure): - return self._target - method = getattr(self._target, name) - res = method(*args, **kwargs) - return res - - def _deliverOneMethod(self, methname, args, kwargs): - method = getattr(self._target, methname) - return method(*args, **kwargs) - - def _deliver(self, methname, args, kwargs, resolver): - # the resolver will be fired with both success and Failure - t = self._target - if isinstance(t, Promise): - resolver(t._send(methname, args, kwargs)) - elif isinstance(t, Failure): - resolver(t) - else: - d = defer.maybeDeferred(self._deliverOneMethod, - methname, args, kwargs) - d.addBoth(resolver) - - def _deliver_queued_messages(self): - for (methname, args, kwargs, resolver) in
self._pendingMethods: - eventually(self._deliver, methname, args, kwargs, resolver) - del self._pendingMethods - # Q: what are the partial-ordering semantics between queued messages - # and when() clauses that are waiting on this Promise to be resolved? - for d in self._watchers: - eventually(d.callback, self._target) - del self._watchers - -def resolvedPromise(resolution): - p = Promise() - p._resolve(resolution) - return p - -def makePromise(): - p = Promise() - return p, p._resolve - - -class _MethodGetterWrapper: - def __init__(self, callback): - self.cb = [callback] - - def __getattr__(self, name): - if name.startswith("_"): - raise AttributeError("method %s is probably private" % name) - cb = self.cb[0] # avoid bound-methodizing - def newmethod(*args, **kwargs): - return cb(name, args, kwargs) - return newmethod - - -def send(o): - """Make an eventual-send call on object C{o}. Use this as follows: - - p = send(o).foo(args) - - C{o} can either be a Promise or an immediate value. The arguments can - either be promises or immediate values. - - send() always returns a Promise, and the o.foo(args) method invocation - always takes place in a later reactor turn. - - Many thanks to Mark Miller for suggesting this syntax to me. - """ - if isinstance(o, Promise): - return _MethodGetterWrapper(o._send) - p = resolvedPromise(o) - return _MethodGetterWrapper(p._send) - -def sendOnly(o): - """Make an eventual-send call on object C{o}, and ignore the results. - """ - - if isinstance(o, Promise): - return _MethodGetterWrapper(o._sendOnly) - # this is a little bit heavyweight for a simple eventually(), but it - # makes the code simpler - p = resolvedPromise(o) - return _MethodGetterWrapper(p._sendOnly) - - -def when(p): - """Turn a Promise into a Deferred that will fire with the enclosed object - when it is ready. Use this when you actually need to schedule something - to happen in a synchronous fashion. 
Most of the time, you can just invoke - methods on the Promise as if it were immediately available.""" - - assert isinstance(p, Promise) - return p._wait_for_resolution() diff --git a/src/foolscap/foolscap/reconnector.py b/src/foolscap/foolscap/reconnector.py deleted file mode 100644 index eace104e..00000000 --- a/src/foolscap/foolscap/reconnector.py +++ /dev/null @@ -1,124 +0,0 @@ -# -*- test-case-name: foolscap.test.test_reconnector -*- - -import random -from twisted.internet import reactor -from twisted.python import log -from foolscap.tokens import NegotiationError, RemoteNegotiationError - -class Reconnector: - """Establish (and maintain) a connection to a given PBURL. - - I establish a connection to the PBURL and run a callback to inform the - caller about the newly-available RemoteReference. If the connection is - lost, I schedule a reconnection attempt for the near future. If that one - fails, I keep trying at longer and longer intervals (exponential - backoff). - - My constructor accepts a callback which will be fired each time a - connection attempt succeeds. This callback is run with the new - RemoteReference and any additional args/kwargs provided to me. The - callback should then use rref.notifyOnDisconnect() to get a message when - the connection goes away. At some point after it goes away, the - Reconnector will reconnect. - - When you no longer want to maintain this connection, call my - stopConnecting() method. I promise to not invoke your callback after - you've called stopConnecting(), even if there was already a connection - attempt in progress. If you had an active connection before calling - stopConnecting(), you will still have access to it, until it breaks on - its own. (I will not attempt to break existing connections, I will merely - stop trying to create new ones). 
- """ - - # adapted from twisted.internet.protocol.ReconnectingClientFactory - maxDelay = 3600 - initialDelay = 1.0 - # Note: These highly sensitive factors have been precisely measured by - # the National Institute of Science and Technology. Take extreme care - # in altering them, or you may damage your Internet! - factor = 2.7182818284590451 # (math.e) - # Phi = 1.6180339887498948 # (Phi is acceptable for use as a - # factor if e is too large for your application.) - jitter = 0.11962656492 # molar Planck constant times c, Joule meter/mole - verbose = False - - def __init__(self, url, cb, args, kwargs): - self._url = url - self._active = False - self._observer = (cb, args, kwargs) - self._delay = self.initialDelay - self._retries = 0 - self._timer = None - self._tub = None - - def startConnecting(self, tub): - self._tub = tub - if self.verbose: - log.msg("Reconnector starting for %s" % self._url) - self._active = True - self._connect() - - def stopConnecting(self): - if self.verbose: - log.msg("Reconnector stopping for %s" % self._url) - self._active = False - if self._timer: - self._timer.cancel() - self._timer = False - if self._tub: - self._tub._removeReconnector(self) - - def _connect(self): - d = self._tub.getReference(self._url) - d.addCallbacks(self._connected, self._failed) - - def _connected(self, rref): - if not self._active: - return - rref.notifyOnDisconnect(self._disconnected) - cb, args, kwargs = self._observer - cb(rref, *args, **kwargs) - - def _failed(self, f): - # I'd like to quietly handle "normal" problems (basically TCP - # failures and NegotiationErrors that result from the target either - # not speaking Foolscap or not hosting the Tub that we want), but not - # hide coding errors or version mismatches. - log_it = self.verbose - - # log certain unusual errors, even without self.verbose, to help - # people figure out why their reconnectors aren't connecting, since - # the usual getReference errback chain isn't active. 
This doesn't - # include ConnectError (which is a parent class of - # ConnectionRefusedError), so it won't fire if we just had a bad - # host/port, since the way we use connection hints will provoke that - # all the time. - if f.check(RemoteNegotiationError, NegotiationError): - log_it = True - if log_it: - log.msg("Reconnector._failed (furl=%s): %s" % (self._url, f)) - if not self._active: - return - self._delay = min(self._delay * self.factor, self.maxDelay) - if self.jitter: - self._delay = random.normalvariate(self._delay, - self._delay * self.jitter) - self._retry() - - def _disconnected(self): - self._delay = self.initialDelay - self._retries = 0 - self._retry() - - def _retry(self): - if not self._active: - return - if self.verbose: - log.msg("Reconnector scheduling retry in %ds for %s" % - (self._delay, self._url)) - self._timer = reactor.callLater(self._delay, self._timer_expired) - - def _timer_expired(self): - self._timer = None - self._connect() - diff --git a/src/foolscap/foolscap/referenceable.py b/src/foolscap/foolscap/referenceable.py deleted file mode 100644 index 8c2e06a8..00000000 --- a/src/foolscap/foolscap/referenceable.py +++ /dev/null @@ -1,831 +0,0 @@ -# -*- test-case-name: foolscap.test.test_sturdyref -*- - -# this module is responsible for sending and receiving OnlyReferenceable and -# Referenceable (callable) objects. 
All details of actually invoking methods -# live in call.py - -import weakref - -from zope.interface import interface -from zope.interface import implements -from twisted.python.components import registerAdapter -Interface = interface.Interface -from twisted.internet import defer -from twisted.python import failure, log - -from foolscap import ipb, slicer, tokens, call -BananaError = tokens.BananaError -Violation = tokens.Violation -from foolscap.constraint import IConstraint, ByteStringConstraint -from foolscap.remoteinterface import getRemoteInterface, \ - getRemoteInterfaceByName, RemoteInterfaceConstraint -from foolscap.schema import constraintMap -from foolscap.copyable import Copyable, RemoteCopy -from foolscap.eventual import eventually, fireEventually - -class OnlyReferenceable(object): - implements(ipb.IReferenceable) - - def processUniqueID(self): - return id(self) - -class Referenceable(OnlyReferenceable): - implements(ipb.IReferenceable, ipb.IRemotelyCallable) - _interface = None - _interfaceName = None - - # TODO: this code wants to be in an adapter, not a base class. Also, it - # would be nice to cache this across the class: if every instance has the - # same interfaces, they will have the same values of _interface and - # _interfaceName, and it feels silly to store this data separately for - # each instance. Perhaps we could compare the instance's interface list - # with that of the class and only recompute this stuff if they differ. 
- - def getInterface(self): - if not self._interface: - self._interface = getRemoteInterface(self) - if self._interface: - self._interfaceName = self._interface.__remote_name__ - else: - self._interfaceName = None - return self._interface - - def getInterfaceName(self): - self.getInterface() - return self._interfaceName - - def doRemoteCall(self, methodname, args, kwargs): - meth = getattr(self, "remote_%s" % methodname) - res = meth(*args, **kwargs) - return res - -constraintMap[Referenceable] = RemoteInterfaceConstraint(None) - -class ReferenceableTracker: - """I hold the data which tracks a local Referenceable that is in use by - a remote Broker. - - @ivar obj: the actual object - @ivar refcount: the number of times this reference has been sent to the - remote end, minus the number of DECREF messages which it - has sent back. When it goes to zero, the remote end has - forgotten the RemoteReference, and is prepared to forget - the RemoteReferenceData as soon as the DECREF message is - acknowledged. - @ivar clid: the connection-local ID used to represent this object on the - wire. - """ - - def __init__(self, tub, obj, puid, clid): - self.tub = tub - self.obj = obj - self.clid = clid - self.puid = puid - self.refcount = 0 - - def send(self): - """Increment the refcount. - @return: True if this is the first transmission of the reference. - """ - self.refcount += 1 - if self.refcount == 1: - return True - - def getURL(self): - if self.tub: - return self.tub.getOrCreateURLForReference(self.obj) - return None - - def decref(self, count): - """Call this in response to a DECREF message from the other end. - @return: True if the refcount went to zero, meaning this clid should - be retired.
- """ - assert self.refcount >= count, "decref(%d) but refcount was %d" % (count, self.refcount) - self.refcount -= count - if self.refcount == 0: - return True - return False - -# TODO: rather than subclassing Referenceable, ReferenceableSlicer should be -# registered to use for anything which provides any RemoteInterface - -class ReferenceableSlicer(slicer.BaseSlicer): - """I handle pb.Referenceable objects (things with remotely invokable - methods, which are copied by reference). - """ - opentype = ('my-reference',) - - def sliceBody(self, streamable, broker): - puid = ipb.IReferenceable(self.obj).processUniqueID() - tracker = broker.getTrackerForMyReference(puid, self.obj) - yield tracker.clid - firstTime = tracker.send() - if firstTime: - # this is the first time the Referenceable has crossed this wire. - # In addition to the clid, send the interface name (if any), and - # any URL this reference might be known by - iname = ipb.IRemotelyCallable(self.obj).getInterfaceName() - if iname: - yield iname - else: - yield "" - url = tracker.getURL() - if url: - yield url - -registerAdapter(ReferenceableSlicer, Referenceable, ipb.ISlicer) - -class CallableSlicer(slicer.BaseSlicer): - """Bound methods are serialized as my-reference sequences with negative - clid values.""" - opentype = ('my-reference',) - - def sliceBody(self, streamable, broker): - # TODO: consider this requirement, maybe based upon a Tub flag - # assert ipb.ISlicer(self.obj.im_self) - # or maybe even isinstance(self.obj.im_self, Referenceable) - puid = id(self.obj) - tracker = broker.getTrackerForMyCall(puid, self.obj) - yield tracker.clid - firstTime = tracker.send() - if firstTime: - # this is the first time the Call has crossed this wire. 
In - # addition to the clid, send the schema name and any URL this - # reference might be known by - schema = self.getSchema() - if schema: - yield schema - else: - yield "" - url = tracker.getURL() - if url: - yield url - - def getSchema(self): - return None # TODO: not quite ready yet - # callables which are actually bound methods of a pb.Referenceable - # can use the schema from that - s = ipb.IReferenceable(self.obj.im_self, None) - if s: - return s.getSchemaForMethodNamed(self.obj.im_func.__name__) - # both bound methods and raw callables can also use a .schema - # attribute - return getattr(self.obj, "schema", None) - - -# The CallableSlicer is activated through PBRootSlicer.slicerTable, because a -# StorageBanana might want to stick with the old MethodSlicer/FunctionSlicer -# for these types -#registerAdapter(CallableSlicer, types.MethodType, ipb.ISlicer) - - -class ReferenceUnslicer(slicer.BaseUnslicer): - """I turn an incoming 'my-reference' sequence into a RemoteReference or a - RemoteMethodReference.""" - state = 0 - clid = None - interfaceName = None - url = None - inameConstraint = ByteStringConstraint(200) # TODO: only known RI names? 
- urlConstraint = ByteStringConstraint(200) - - def checkToken(self, typebyte, size): - if self.state == 0: - if typebyte not in (tokens.INT, tokens.NEG): - raise BananaError("reference ID must be an INT or NEG") - elif self.state == 1: - self.inameConstraint.checkToken(typebyte, size) - elif self.state == 2: - self.urlConstraint.checkToken(typebyte, size) - else: - raise Violation("too many parameters in my-reference") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, defer.Deferred) - assert ready_deferred is None - if self.state == 0: - self.clid = obj - self.state = 1 - elif self.state == 1: - # must be the interface name - self.interfaceName = obj - if obj == "": - self.interfaceName = None - self.state = 2 - elif self.state == 2: - # URL - self.url = obj - self.state = 3 - else: - raise BananaError("Too many my-reference parameters") - - def receiveClose(self): - if self.clid is None: - raise BananaError("sequence ended too early") - tracker = self.broker.getTrackerForYourReference(self.clid, - self.interfaceName, - self.url) - return tracker.getRef(), None - - def describe(self): - if self.clid is None: - return "" - return "" % self.clid - - - -class RemoteReferenceTracker: - """I hold the data necessary to locate (or create) a RemoteReference. - - @ivar url: the target Referenceable's global URL - @ivar broker: the Broker which holds this RemoteReference - @ivar clid: for that Broker, the your-reference CLID for the - RemoteReference - @ivar interfaceName: the name of a RemoteInterface object that the - RemoteReference claims to implement - @ivar interface: our version of a RemoteInterface object that corresponds - to interfaceName - @ivar received_count: the number of times the remote end has sent us this - object. We must send back decref() calls to match.
- @ivar ref: a weakref to the RemoteReference itself - """ - - def __init__(self, parent, clid, url, interfaceName): - self.broker = parent - self.clid = clid - # TODO: the remote end sends us a global URL, when really it should - # probably send us a per-Tub name, which we can then concatenate to - # their TubID if/when we pass it on to others. By accepting a full - # URL, we give them the ability to sort-of spoof others. We could - # check that url.startswith(broker.remoteTub.baseURL), but the Right - # Way is to just not have them send the base part in the first place. - # I haven't yet made this change because I'm not yet positive it - # would work.. how exactly does the base url get sent, anyway? What - # about Tubs visible through multiple names? - self.url = url - self.interfaceName = interfaceName - self.interface = getRemoteInterfaceByName(interfaceName) - self.received_count = 0 - self.ref = None - - def __repr__(self): - s = "" % (self.clid, self.url) - return s - - def getRef(self): - """Return the actual RemoteReference that we hold, creating it if - necessary. This is called when we receive a my-reference sequence - from the remote end, so we must increment our received_count.""" - # self.ref might be None (if we haven't created it yet), or it might - # be a dead weakref (if it has been released but our _handleRefLost - # hasn't fired yet). In either case we need to make a new - # RemoteReference. - if self.ref is None or self.ref() is None: - ref = RemoteReference(self) - self.ref = weakref.ref(ref, self._refLost) - self.received_count += 1 - return self.ref() - - def _refLost(self, wref): - # don't do anything right now, we could be in the middle of all sorts - # of weird code. both __del__ and weakref callbacks can fire at any - # time. Almost as bad as threads.. - - # instead, do stuff later.
- eventually(self._handleRefLost) - - def _handleRefLost(self): - if self.ref() is None: - count, self.received_count = self.received_count, 0 - if count == 0: - return - self.broker.freeYourReference(self, count) - # otherwise our RemoteReference is actually still alive, resurrected - # between the call to _refLost and the eventual call to - # _handleRefLost. In this case, don't decref anything. - - -class RemoteReferenceOnly(object): - implements(ipb.IRemoteReference) - - def __init__(self, tracker): - """@param tracker: the RemoteReferenceTracker which points to us""" - self.tracker = tracker - - def getSturdyRef(self): - return self.tracker.sturdy - - def notifyOnDisconnect(self, callback, *args, **kwargs): - """Register a callback to run when we lose this connection. - - The callback will be invoked with whatever extra arguments you - provide to this function. For example:: - - def my_callback(name, number): - print name, number+4 - cookie = rref.notifyOnDisconnect(my_callback, 'bob', number=3) - - This function returns an opaque cookie. If you want to cancel the - notification, pass this same cookie back to dontNotifyOnDisconnect:: - - rref.dontNotifyOnDisconnect(cookie) - - Note that if the Tub is shutdown (via stopService), all - notifyOnDisconnect handlers are cancelled. - """ - - # return a cookie (really the (cb,args,kwargs) tuple) that they must - # use to deregister - marker = self.tracker.broker.notifyOnDisconnect(callback, - *args, **kwargs) - return marker - def dontNotifyOnDisconnect(self, marker): - self.tracker.broker.dontNotifyOnDisconnect(marker) - - def __repr__(self): - r = "<%s at 0x%x" % (self.__class__.__name__, abs(id(self))) - if self.tracker.url: - r += " [%s]" % self.tracker.url - r += ">" - return r - -class RemoteReference(RemoteReferenceOnly): - def callRemote(self, _name, *args, **kwargs): - # Note: for consistency, *all* failures are reported asynchronously. 
- return defer.maybeDeferred(self._callRemote, _name, *args, **kwargs) - - def callRemoteOnly(self, _name, *args, **kwargs): - # the remote end will not send us a response. The only error cases - # are arguments that don't match the schema, or broken invariants. In - # particular, DeadReferenceError will be silently consumed. - d = defer.maybeDeferred(self._callRemote, _name, _callOnly=True, - *args, **kwargs) - return None - - def _callRemote(self, _name, *args, **kwargs): - req = None - broker = self.tracker.broker - - # remember that "none" is not a valid constraint, so we use it to - # mean "not set by the caller", which means we fall back to whatever - # the RemoteInterface says. Using None would mean an AnyConstraint, - # which is not the same thing. - methodConstraintOverride = kwargs.get("_methodConstraint", "none") - resultConstraint = kwargs.get("_resultConstraint", "none") - useSchema = kwargs.get("_useSchema", True) - callOnly = kwargs.get("_callOnly", False) - - if "_methodConstraint" in kwargs: - del kwargs["_methodConstraint"] - if "_resultConstraint" in kwargs: - del kwargs["_resultConstraint"] - if "_useSchema" in kwargs: - del kwargs["_useSchema"] - if "_callOnly" in kwargs: - del kwargs["_callOnly"] - - if callOnly: - if broker.disconnected: - # DeadReferenceError is silently consumed - return - reqID = 0 - else: - # newRequestID() could fail with a DeadReferenceError - reqID = broker.newRequestID() - - # in this clause, we validate the outbound arguments against our - # notion of what the other end will accept (the RemoteInterface) - req = call.PendingRequest(reqID, self) - - # first, figure out which method they want to invoke - (interfaceName, - methodName, - methodSchema) = self._getMethodInfo(_name) - - req.methodName = methodName # for debugging - if methodConstraintOverride != "none": - methodSchema = methodConstraintOverride - - if useSchema and methodSchema: - # check args against the arg constraint. 
This could fail if - # any arguments are of the wrong type - try: - methodSchema.checkAllArgs(args, kwargs, False) - except Violation, v: - v.setLocation("%s.%s(%s)" % (interfaceName, methodName, - v.getLocation())) - raise - - # the Interface gets to constrain the return value too, so - # make a note of it to use later - req.setConstraint(methodSchema.getResponseConstraint()) - - # if the caller specified a _resultConstraint, that overrides - # the schema's one - if resultConstraint != "none": - # overrides schema - req.setConstraint(IConstraint(resultConstraint)) - - clid = self.tracker.clid - slicer = call.CallSlicer(reqID, clid, methodName, args, kwargs) - - # up to this point, we are not committed to sending anything to the - # far end. The various phases of commitment are: - - # 1: once we tell our broker about the PendingRequest, we must - # promise to retire it eventually. Specifically, if we encounter an - # error before we give responsibility to the connection, we must - # retire it ourselves. - - # 2: once we start sending the CallSlicer to the other end (in - # particular, once they receive the reqID), they might send us a - # response, so we must be prepared to handle that. Giving the - # PendingRequest to the broker arranges for this to happen. - - # So all failures which occur before these commitment events are - # entirely local: stale broker, bad method name, bad arguments. If - # anything raises an exception before this point, the PendingRequest - # is abandoned, and our maybeDeferred wrapper returns a failing - # Deferred. - - # commitment point 1.
We assume that if this call raises an - # exception, the broker will be sure to not track the dead - # PendingRequest - if not callOnly: - broker.addRequest(req) - # if callOnly, the PendingRequest will never know about the - # broker, and will therefore never ask to be removed from it - - # TODO: there is a decidability problem here: if the reqID made - # it through, the other end will send us an answer (possibly an - # error if the remaining slices were aborted). If not, we will - # not get an answer. To decide whether we should remove our - # broker.waitingForAnswers[] entry, we need to know how far the - # slicing process made it. - - try: - # commitment point 2 - d = broker.send(slicer) - # d will fire when the last argument has been serialized. It will - # errback if the arguments (or any of their children) could not - # be serialized. We need to catch this case and errback the - # caller. - - # if we got here, we have been able to start serializing the - # arguments. If serialization fails, the PendingRequest needs to - # be flunked (because we aren't guaranteed that the far end will - # do it). 
- - d.addErrback(req.fail) - - except: - req.fail(failure.Failure()) - - # the remote end could send back an error response for many reasons: - # bad method name - # bad argument types (violated their schema) - # exception during method execution - # method result violated the results schema - # something else could occur to cause an errback: - # connection lost before response completely received - # exception during deserialization of the response - # [but only if it occurs after the reqID is received] - # method result violated our results schema - # if none of those occurred, the callback will be run - - return req.deferred - - def _getMethodInfo(self, name): - assert type(name) is str - interfaceName = None - methodName = name - methodSchema = None - - iface = self.tracker.interface - if iface: - interfaceName = iface.__remote_name__ - try: - methodSchema = iface[name] - except KeyError: - raise Violation("%s(%s) does not offer %s" % \ - (interfaceName, self, name)) - return interfaceName, methodName, methodSchema - - -class RemoteMethodReferenceTracker(RemoteReferenceTracker): - def getRef(self): - if self.ref is None: - ref = RemoteMethodReference(self) - self.ref = weakref.ref(ref, self._refLost) - self.received_count += 1 - return self.ref() - -class RemoteMethodReference(RemoteReference): - def callRemote(self, *args, **kwargs): - # TODO: I suspect it would safer to use something other than - # 'callRemote' here. 
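_callRemote() above smuggles per-call options through reserved underscore-prefixed keyword arguments, and uses the string "none" (not None) as the "caller did not say" sentinel, because None itself would mean an AnyConstraint. A sketch of that extraction pattern, with an invented helper name:

```python
# Sketch of how _callRemote() separates control kwargs from method kwargs.
def split_control_kwargs(kwargs):
    """Pop foolscap-style _-prefixed control options out of a kwargs dict,
    leaving only the arguments destined for the remote method."""
    control = {
        "_methodConstraint": kwargs.pop("_methodConstraint", "none"),
        "_resultConstraint": kwargs.pop("_resultConstraint", "none"),
        "_useSchema": kwargs.pop("_useSchema", True),
        "_callOnly": kwargs.pop("_callOnly", False),
    }
    return control, kwargs
```

The cost of this convention is that remote methods can never legitimately take parameters with these reserved names; the benefit is that callRemote() keeps a single flat signature.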
- # TODO: this probably needs a very different implementation - - # there is no schema support yet, so we can't convert positional args - # into keyword args - assert args == () - return RemoteReference.callRemote(self, "", *args, **kwargs) - - def _getMethodInfo(self, name): - interfaceName = None - methodName = "" - methodSchema = None - return interfaceName, methodName, methodSchema - -class LocalReferenceable: - implements(ipb.IRemoteReference) - def __init__(self, original): - self.original = original - - def notifyOnDisconnect(self, callback, *args, **kwargs): - # local objects never disconnect - return None - def dontNotifyOnDisconnect(self, marker): - pass - - def callRemote(self, methname, *args, **kwargs): - def _try(ignored): - meth = getattr(self.original, "remote_" + methname) - return meth(*args, **kwargs) - d = fireEventually() - d.addCallback(_try) - return d - - def callRemoteOnly(self, methname, *args, **kwargs): - d = self.callRemote(methname, *args, **kwargs) - d.addErrback(lambda f: None) - return None - -registerAdapter(LocalReferenceable, ipb.IReferenceable, ipb.IRemoteReference) - - - -class YourReferenceSlicer(slicer.BaseSlicer): - """I handle pb.RemoteReference objects (being sent back home to the - original pb.Referenceable-holder) - """ - - def slice(self, streamable, broker): - self.streamable = streamable - tracker = self.obj.tracker - if tracker.broker == broker: - # sending back to home broker - yield 'your-reference' - yield tracker.clid - else: - # sending somewhere else - assert isinstance(tracker.url, str) - giftID = broker.makeGift(self.obj) - yield 'their-reference' - yield giftID - yield tracker.url - - def describe(self): - return "" % self.obj.tracker.clid - -registerAdapter(YourReferenceSlicer, RemoteReference, ipb.ISlicer) - -class YourReferenceUnslicer(slicer.LeafUnslicer): - """I accept incoming (integer) your-reference sequences and try to turn - them back into the original Referenceable. 
I also accept (string) - your-reference sequences and try to turn them into a published - Referenceable that they did not have access to before.""" - clid = None - - def checkToken(self, typebyte, size): - if typebyte != tokens.INT: - raise BananaError("your-reference ID must be an INT") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, defer.Deferred) - assert ready_deferred is None - self.clid = obj - - def receiveClose(self): - if self.clid is None: - raise BananaError("sequence ended too early") - obj = self.broker.getMyReferenceByCLID(self.clid) - if not obj: - raise Violation("unknown clid '%s'" % self.clid) - return obj, None - - def describe(self): - return "<your-ref-%s>" % self.obj.refID - - -class TheirReferenceUnslicer(slicer.LeafUnslicer): - """I accept gifts of third-party references. This is turned into a live - reference upon receipt.""" - # (their-reference, giftID, URL) - state = 0 - giftID = None - url = None - urlConstraint = ByteStringConstraint(200) - - def checkToken(self, typebyte, size): - if self.state == 0: - if typebyte != tokens.INT: - raise BananaError("their-reference giftID must be an INT") - elif self.state == 1: - self.urlConstraint.checkToken(typebyte, size) - else: - raise Violation("too many parameters in their-reference") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, defer.Deferred) - assert ready_deferred is None - if self.state == 0: - self.giftID = obj - self.state = 1 - elif self.state == 1: - # URL - self.url = obj - self.state = 2 - else: - raise BananaError("Too many their-reference parameters") - - def receiveClose(self): - if self.giftID is None or self.url is None: - raise BananaError("sequence ended too early") - d = self.broker.tub.getReference(self.url) - d.addBoth(self.ackGift) - # we return a Deferred that will fire with the RemoteReference when - # it becomes available. The RemoteReference is not even referenceable - # until then.
In addition, we provide a ready_deferred, since any - # mutable container which holds the gift will be referenceable early - # but the message delivery must still wait for the getReference to - # complete. See to it that we fire the object deferred before we fire - # the ready_deferred. - - obj_deferred = defer.Deferred() - ready_deferred = defer.Deferred() - - def _ready(rref): - obj_deferred.callback(rref) - ready_deferred.callback(rref) - def _failed(f): - # if an error in getReference() occurs, log it locally (with - # priority UNUSUAL), because this end might need to diagnose some - # connection or networking problems. - log.msg("gift (%s) failed to resolve: %s" % (self.url, f)) - # deliver a placeholder object to the container, but signal the - # ready_deferred that we've failed. This will bubble up to the - # enclosing InboundDelivery, and when it gets to the top of the - # queue, it will be flunked. - obj_deferred.callback("Place holder for a Gift which failed to " - "resolve: %s" % f) - ready_deferred.errback(f) - d.addCallbacks(_ready, _failed) - - return obj_deferred, ready_deferred - - def ackGift(self, rref): - rb = self.broker.remote_broker - # if we lose the connection, they'll decref the gift anyway - rb.callRemoteOnly("decgift", giftID=self.giftID, count=1) - return rref - - def describe(self): - if self.giftID is None: - return "<gift-?>" - return "<gift-%d>" % self.giftID - -class SturdyRef(Copyable, RemoteCopy): - """I am a pointer to a Referenceable that lives in some (probably remote) - Tub. This pointer is long-lived, however you cannot send messages with it - directly. To use it, you must ask your Tub to turn it into a - RemoteReference with tub.getReference(sturdyref). - - The SturdyRef is associated with a URL: you can create a SturdyRef out of - a URL that you obtain from some other source, and you can ask the - SturdyRef for its URL.
- - SturdyRefs are serialized by copying their URL, and create an identical - SturdyRef on the receiving side.""" - - typeToCopy = copytype = "foolscap.SturdyRef" - - encrypted = False - tubID = None - location = None - locationHints = [] - name = None - - def __init__(self, url=None): - if url: - # pb://key@{ip:port,host:port,[ipv6]:port}[/unix]/swissnumber - # i.e. pb://tubID@{locationHints..}/name - # - # it can live at any one of a variety of network-accessible - # locations, or at a single UNIX-domain socket. - # - # there is also an unauthenticated form, which is indexed by the - # single locationHint, because it does not have a TubID - - if url.startswith("pb://"): - self.encrypted = True - url = url[len("pb://"):] - slash = url.rfind("/") - self.name = url[slash+1:] - at = url.find("@") - if at != -1: - self.tubID = url[:at] - self.locationHints = url[at+1:slash].split(",") - elif url.startswith("pbu://"): - self.encrypted = False - url = url[len("pbu://"):] - slash = url.rfind("/") - self.name = url[slash+1:] - self.tubID = None - self.location = url[:slash] - else: - raise ValueError("unknown FURL prefix in %r" % (url,)) - - def getTubRef(self): - if self.encrypted: - return TubRef(self.tubID, self.locationHints) - return NoAuthTubRef(self.location) - - def getURL(self): - if self.encrypted: - return ("pb://" + self.tubID + "@" + - ",".join(self.locationHints) + - "/" + self.name) - return "pbu://" + self.location + "/" + self.name - - def __str__(self): - return self.getURL() - - def _distinguishers(self): - """Two SturdyRefs are equivalent if they point to the same object. - SturdyRefs to encrypted Tubs only pay attention to the TubID and the - reference name. SturdyRefs to unauthenticated Tubs must use the - location hint instead of the (missing) TubID. 
This method makes it - easier to compare a pair of SturdyRefs.""" - if self.encrypted: - return (True, self.tubID, self.name) - return (False, self.location, self.name) - - def __hash__(self): - return hash(self._distinguishers()) - def __cmp__(self, them): - return (cmp(type(self), type(them)) or - cmp(self.__class__, them.__class__) or - cmp(self._distinguishers(), them._distinguishers())) - - -class TubRef: - """This is a little helper class which provides a comparable identifier - for Tubs. TubRefs can be used as keys in dictionaries that track - connections to remote Tubs.""" - encrypted = True - - def __init__(self, tubID, locationHints=None): - self.tubID = tubID - self.locationHints = locationHints - - def getLocations(self): - return self.locationHints - - def getTubID(self): - return self.tubID - - def __str__(self): - return "pb://" + self.tubID - - def _distinguishers(self): - """This serves the same purpose as SturdyRef._distinguishers.""" - return (self.tubID,) - - def __hash__(self): - return hash(self._distinguishers()) - def __cmp__(self, them): - return (cmp(type(self), type(them)) or - cmp(self.__class__, them.__class__) or - cmp(self._distinguishers(), them._distinguishers())) - -class NoAuthTubRef(TubRef): - # this is only used on outbound connections - encrypted = False - - def __init__(self, location): - self.location = location - - def getLocations(self): - return [self.location] - - def getTubID(self): - return "" - - def __str__(self): - return "pbu://" + self.location - - def _distinguishers(self): - """This serves the same purpose as SturdyRef._distinguishers.""" - return (self.location,) diff --git a/src/foolscap/foolscap/remoteinterface.py b/src/foolscap/foolscap/remoteinterface.py deleted file mode 100644 index f3fbdc09..00000000 --- a/src/foolscap/foolscap/remoteinterface.py +++ /dev/null @@ -1,429 +0,0 @@ - -import types, inspect -from zope.interface import interface, providedBy, implements -from foolscap.constraint import 
Constraint, OpenerConstraint, nothingTaster, \ - IConstraint, UnboundedSchema, IRemoteMethodConstraint, Optional, Any -from foolscap.tokens import Violation, InvalidRemoteInterface -from foolscap.schema import addToConstraintTypeMap -from foolscap import ipb - -class RemoteInterfaceClass(interface.InterfaceClass): - """This metaclass lets RemoteInterfaces be a lot like Interfaces. The - methods are parsed differently (PB needs more information from them than - z.i extracts, and the methods can be specified with a RemoteMethodSchema - directly). - - RemoteInterfaces can accept the following additional attribute:: - - __remote_name__: can be set to a string to specify the globally-unique - name for this interface. This should be a URL in a - namespace you administer. If not set, defaults to the - short classname. - - RIFoo.names() returns the list of remote method names. - - RIFoo['bar'] is still used to get information about method 'bar', however - it returns a RemoteMethodSchema instead of a z.i Method instance. - - """ - - def __init__(self, iname, bases=(), attrs=None, __module__=None): - if attrs is None: - interface.InterfaceClass.__init__(self, iname, bases, attrs, - __module__) - return - - # parse (and remove) the attributes that make this a RemoteInterface - try: - rname, remote_attrs = self._parseRemoteInterface(iname, attrs) - except: - raise - - # now let the normal InterfaceClass do its thing - interface.InterfaceClass.__init__(self, iname, bases, attrs, - __module__) - - # now add all the remote methods that InterfaceClass would have - # complained about. This is really gross, and it really makes me - # question why we're bothering to inherit from z.i.Interface at all. I - # will probably stop doing that soon, and just have our own - # meta-class, but I want to make sure you can still do - # 'implements(RIFoo)' from within a class definition.
- - a = getattr(self, "_InterfaceClass__attrs") # the ickiest part - a.update(remote_attrs) - self.__remote_name__ = rname - - # finally, auto-register the interface - try: - registerRemoteInterface(self, rname) - except: - raise - - def _parseRemoteInterface(self, iname, attrs): - remote_attrs = {} - - remote_name = attrs.get("__remote_name__", iname) - - # and see if there is a __remote_name__ . We delete it because - # InterfaceClass doesn't like arbitrary attributes - if attrs.has_key("__remote_name__"): - del attrs["__remote_name__"] - - # determine all remotely-callable methods - names = [name for name in attrs.keys() - if ((type(attrs[name]) == types.FunctionType and - not name.startswith("_")) or - IConstraint.providedBy(attrs[name]))] - - # turn them into constraints. Tag each of them with their name and - # the RemoteInterface they came from. - for name in names: - m = attrs[name] - if not IConstraint.providedBy(m): - m = RemoteMethodSchema(method=m) - m.name = name - m.interface = self - remote_attrs[name] = m - # delete the methods, so zope's InterfaceClass doesn't see them. - # Particularly necessary for things defined with IConstraints. - del attrs[name] - - return remote_name, remote_attrs - -RemoteInterface = RemoteInterfaceClass("RemoteInterface", - __module__="pb.flavors") - - - -def getRemoteInterface(obj): - """Get the (one) RemoteInterface supported by the object, or None.""" - interfaces = list(providedBy(obj)) - # TODO: versioned Interfaces! - ilist = [] - for i in interfaces: - if isinstance(i, RemoteInterfaceClass): - if i not in ilist: - ilist.append(i) - assert len(ilist) <= 1, ("don't use multiple RemoteInterfaces! 
%s uses %s" - % (obj, ilist)) - if ilist: - return ilist[0] - return None - -class DuplicateRemoteInterfaceError(Exception): - pass - -RemoteInterfaceRegistry = {} -def registerRemoteInterface(iface, name=None): - if not name: - name = iface.__remote_name__ - assert isinstance(iface, RemoteInterfaceClass) - if RemoteInterfaceRegistry.has_key(name): - old = RemoteInterfaceRegistry[name] - msg = "remote interface %s was registered with the same name (%s) as %s, please use __remote_name__ to provide a unique name" % (old, name, iface) - raise DuplicateRemoteInterfaceError(msg) - RemoteInterfaceRegistry[name] = iface - -def getRemoteInterfaceByName(iname): - return RemoteInterfaceRegistry.get(iname) - - - -class RemoteMethodSchema: - """ - This is a constraint for a single remotely-invokable method. It gets to - require, deny, or impose further constraints upon a set of named - arguments. - - This constraint is created by using keyword arguments with the same - names as the target method's arguments. Two special names are used: - - __ignoreUnknown__: if True, unexpected argument names are silently - dropped. (note that this makes the schema unbounded) - - __acceptUnknown__: if True, unexpected argument names are always - accepted without a constraint (which also makes this schema unbounded) - - The remotely-accessible object's .getMethodSchema() method may return one - of these objects.
- """ - - implements(IRemoteMethodConstraint) - - taster = {} # this should not be used as a top-level constraint - opentypes = [] # overkill - ignoreUnknown = False - acceptUnknown = False - - name = None # method name, set when the RemoteInterface is parsed - interface = None # points to the RemoteInterface which defines the method - - # under development - def __init__(self, method=None, _response=None, __options=[], **kwargs): - if method: - self.initFromMethod(method) - return - self.argumentNames = [] - self.argConstraints = {} - self.required = [] - self.responseConstraint = None - # __response in the argslist gets treated specially, I think it is - # mangled into _RemoteMethodSchema__response or something. When I - # change it to use _response instead, it works. - if _response: - self.responseConstraint = IConstraint(_response) - self.options = {} # return, wait, reliable, etc - - if kwargs.has_key("__ignoreUnknown__"): - self.ignoreUnknown = kwargs["__ignoreUnknown__"] - del kwargs["__ignoreUnknown__"] - if kwargs.has_key("__acceptUnknown__"): - self.acceptUnknown = kwargs["__acceptUnknown__"] - del kwargs["__acceptUnknown__"] - - for argname, constraint in kwargs.items(): - self.argumentNames.append(argname) - constraint = IConstraint(constraint) - self.argConstraints[argname] = constraint - if not isinstance(constraint, Optional): - self.required.append(argname) - - def initFromMethod(self, method): - # call this with the Interface's prototype method: the one that has - # argument constraints expressed as default arguments, and which - # does nothing but returns the appropriate return type - - names, _, _, typeList = inspect.getargspec(method) - if names and names[0] == 'self': - why = "RemoteInterface methods should not have 'self' in their argument list" - raise InvalidRemoteInterface(why) - if not names: - typeList = [] - # 'def foo(oops)' results in typeList==None - if typeList is None or len(names) != len(typeList): - # TODO: relax this, use 
schema=Any for the args that don't have - # default values. This would make: - # def foo(a, b=int): return None - # equivalent to: - # def foo(a=Any, b=int): return None - why = "RemoteInterface methods must have default values for all their arguments" - raise InvalidRemoteInterface(why) - self.argumentNames = names - self.argConstraints = {} - self.required = [] - for i in range(len(names)): - argname = names[i] - constraint = typeList[i] - if not isinstance(constraint, Optional): - self.required.append(argname) - self.argConstraints[argname] = IConstraint(constraint) - - # call the method, its 'return' value is the return constraint - self.responseConstraint = IConstraint(method()) - self.options = {} # return, wait, reliable, etc - - - def getPositionalArgConstraint(self, argnum): - if argnum >= len(self.argumentNames): - raise Violation("too many positional arguments: %d >= %d" % - (argnum, len(self.argumentNames))) - argname = self.argumentNames[argnum] - c = self.argConstraints.get(argname) - assert c - if isinstance(c, Optional): - c = c.constraint - return (True, c) - - def getKeywordArgConstraint(self, argname, - num_posargs=0, previous_kwargs=[]): - previous_args = self.argumentNames[:num_posargs] - for pkw in previous_kwargs: - assert pkw not in previous_args - previous_args.append(pkw) - if argname in previous_args: - raise Violation("got multiple values for keyword argument '%s'" - % (argname,)) - c = self.argConstraints.get(argname) - if c: - if isinstance(c, Optional): - c = c.constraint - return (True, c) - # what do we do with unknown arguments? 
- if self.ignoreUnknown: - return (False, None) - if self.acceptUnknown: - return (True, None) - raise Violation("unknown argument '%s'" % argname) - - def getResponseConstraint(self): - return self.responseConstraint - - def checkAllArgs(self, args, kwargs, inbound): - # first we map the positional arguments - allargs = {} - if len(args) > len(self.argumentNames): - raise Violation("method takes %d positional arguments (%d given)" - % (len(self.argumentNames), len(args))) - for i,argvalue in enumerate(args): - allargs[self.argumentNames[i]] = argvalue - for argname,argvalue in kwargs.items(): - if argname in allargs: - raise Violation("got multiple values for keyword argument '%s'" - % (argname,)) - allargs[argname] = argvalue - - for argname, argvalue in allargs.items(): - accept, constraint = self.getKeywordArgConstraint(argname) - if not accept: - # this argument will be ignored by the far end. TODO: emit a - # warning - pass - try: - constraint.checkObject(argvalue, inbound) - except Violation, v: - v.setLocation("%s=" % argname) - raise - - for argname in self.required: - if argname not in allargs: - raise Violation("missing required argument '%s'" % argname) - - def checkResults(self, results, inbound): - if self.responseConstraint: - # this might raise a Violation. The caller will annotate its - # location appropriately: they have more information than we do. - self.responseConstraint.checkObject(results, inbound) - - def maxSize(self, seen=None): - if self.acceptUnknown: - raise UnboundedSchema # there is no limit on that thing - if self.ignoreUnknown: - # for now, we ignore unknown arguments by accepting the object - # and then throwing it away. This makes us vulnerable to the - # memory consumed by that object. TODO: in the CallUnslicer, - # arrange to discard the ignored object instead of receiving it. - # When this is done, ignoreUnknown will not cause the schema to - # be unbounded and this clause should be removed. 
- raise UnboundedSchema - # TODO: implement the rest of maxSize, just like a dictionary - raise NotImplementedError - -class UnconstrainedMethod: - """I am a method constraint that accepts any arguments and any return - value. - - To use this, assign it to a method name in a RemoteInterface:: - - class RIFoo(RemoteInterface): - def constrained_method(foo=int, bar=str): # this one is constrained - return str - not_method = UnconstrainedMethod() # this one is not - """ - implements(IRemoteMethodConstraint) - - def getPositionalArgConstraint(self, argnum): - return (True, Any()) - def getKeywordArgConstraint(self, argname, num_posargs=0, - previous_kwargs=[]): - return (True, Any()) - def checkAllArgs(self, args, kwargs, inbound): - pass # accept everything - def getResponseConstraint(self): - return Any() - def checkResults(self, results, inbound): - pass # accept everything - - -class LocalInterfaceConstraint(Constraint): - """This constraint accepts any (local) instance which implements the - given local Interface. - """ - - # TODO: maybe accept RemoteCopy instances - # TODO: accept inbound your-references, if the local object they map to - # implements the interface - - # TODO: do we need an string-to-Interface map just like we have a - # classname-to-class/factory map? - taster = nothingTaster - opentypes = [] - name = "LocalInterfaceConstraint" - - def __init__(self, interface): - self.interface = interface - def checkObject(self, obj, inbound): - # TODO: maybe try to get an adapter instead? - if not self.interface.providedBy(obj): - raise Violation("'%s' does not provide interface %s" - % (obj, self.interface)) - -class RemoteInterfaceConstraint(OpenerConstraint): - """This constraint accepts any RemoteReference that claims to be - associated with a remote Referenceable that implements the given - RemoteInterface. If 'interface' is None, just assert that it is a - RemoteReference at all. 
- - On the inbound side, this will only accept a suitably-implementing - RemoteReference, or a gift that resolves to such a RemoteReference. On - the outbound side, this will accept either a Referenceable or a - RemoteReference (which might be a your-reference or a their-reference). - - Sending your-references will result in the recipient getting a local - Referenceable, which will not pass the constraint. TODO: think about if - we want this behavior or not. - """ - - opentypes = [("my-reference",), ("their-reference",)] - name = "RemoteInterfaceConstraint" - - def __init__(self, interface): - self.interface = interface - def checkObject(self, obj, inbound): - if inbound: - # this ought to be a RemoteReference that claims to be associated - # with a remote Referenceable that implements the desired - # interface. - if not ipb.IRemoteReference.providedBy(obj): - raise Violation("'%s' does not provide RemoteInterface %s, " - "and doesn't even look like a RemoteReference" - % (obj, self.interface)) - if not self.interface: - return - iface = obj.tracker.interface - # TODO: this test probably doesn't handle subclasses of - # RemoteInterface, which might be useful (if it even works) - if not iface or iface != self.interface: - raise Violation("'%s' does not provide RemoteInterface %s" - % (obj, self.interface)) - else: - # this ought to be a Referenceable which implements the desired - # interface. Or, it might be a RemoteReference which points to - # one. - if ipb.IRemoteReference.providedBy(obj): - # it's a RemoteReference - if not self.interface: - return - iface = obj.tracker.interface - if not iface or iface != self.interface: - raise Violation("'%s' does not provide RemoteInterface %s" - % (obj, self.interface)) - return - if not ipb.IReferenceable.providedBy(obj): - # TODO: maybe distinguish between OnlyReferenceable and - # Referenceable? which is more useful here? 
- raise Violation("'%s' is not a Referenceable" % (obj,)) - if self.interface and not self.interface.providedBy(obj): - raise Violation("'%s' does not provide RemoteInterface %s" - % (obj, self.interface)) - -def _makeConstraint(t): - # This will be called for both local interfaces (IFoo) and remote - # interfaces (RIFoo), so we have to distinguish between them. The late - # import is to deal with a circular reference between this module and - # remoteinterface.py - if isinstance(t, RemoteInterfaceClass): - return RemoteInterfaceConstraint(t) - return LocalInterfaceConstraint(t) - -addToConstraintTypeMap(interface.InterfaceClass, _makeConstraint) diff --git a/src/foolscap/foolscap/schema.py b/src/foolscap/foolscap/schema.py deleted file mode 100644 index 727ad133..00000000 --- a/src/foolscap/foolscap/schema.py +++ /dev/null @@ -1,198 +0,0 @@ - -# This module contains all user-visible Constraint subclasses, for -# convenience by user code which is defining RemoteInterfaces. The primitive -# ones are defined in constraint.py, while the constraints associated with -# specific open sequences (list, unicode, etc) are defined in the related -# slicer/list.py module, etc. A few are defined here. - -# It also defines the constraintMap and constraintTypeMap, used when -# constructing constraints out of the convenience shorthand. This is used -# when processing the methods defined in a RemoteInterface (such that a -# default argument like x=int gets turned into an IntegerConstraint). New -# slicers that want to add to these mappings can use addToConstraintTypeMap -# or manipulate constraintMap directly. - -# this imports slicers and constraints.py, but is not allowed to import any -# other Foolscap modules, to avoid import cycles. 
- -""" -primitive constraints: - - types.StringType: string with maxLength=1k - - String(maxLength=1000): string with arbitrary maxLength - - types.BooleanType: boolean - - types.IntType: integer that fits in s_int32_t - - types.LongType: integer with abs(num) < 2**8192 (fits in 1024 bytes) - - Int(maxBytes=1024): integer with arbitrary maxValue=2**(8*maxBytes) - - types.FloatType: number - - Number(maxBytes=1024): float or integer with maxBytes - - interface: instance which implements (or adapts to) the Interface - - class: instance of the class or a subclass - - # unicode? types? none? - -container constraints: - - TupleOf(constraint1, constraint2..): fixed size, per-element constraint - - ListOf(constraint, maxLength=30): all elements obey constraint - - DictOf(keyconstraint, valueconstraint): keys and values obey constraints - - AttributeDict(*attrTuples, ignoreUnknown=False): - - attrTuples are (name, constraint) - - ignoreUnknown=True means that received attribute names which aren't - listed in attrTuples should be ignored instead of raising an - UnknownAttrName exception - -composite constraints: - - tuple: alternatives: must obey one of the different constraints - -modifiers: - - Shared(constraint, refLimit=None): object may be referenced multiple times - within the serialization domain (question: which domain?). All - constraints default to refLimit=1, and a MultiplyReferenced exception - is raised as soon as the reference count goes above the limit. - refLimit=None means no limit is enforced. - - Optional(name, constraint, default=None): key is not required. If not - provided and default is None, key/attribute will not be created - Only valid inside DictOf and AttributeDict. 
- - -""" - -from foolscap.tokens import Violation, UnknownSchemaType - -# make constraints available in a single location -from foolscap.constraint import Constraint, Any, ByteStringConstraint, \ - IntegerConstraint, NumberConstraint, \ - UnboundedSchema, IConstraint, Optional, Shared -from foolscap.slicers.unicode import UnicodeConstraint -from foolscap.slicers.bool import BooleanConstraint -from foolscap.slicers.dict import DictConstraint -from foolscap.slicers.list import ListConstraint -from foolscap.slicers.set import SetConstraint -from foolscap.slicers.tuple import TupleConstraint -from foolscap.slicers.none import Nothing -# we don't import RemoteMethodSchema from remoteinterface.py, because -# remoteinterface.py needs to import us (for addToConstraintTypeMap) -ignored = [Constraint, Any, ByteStringConstraint, UnicodeConstraint, - IntegerConstraint, NumberConstraint, BooleanConstraint, - DictConstraint, ListConstraint, SetConstraint, TupleConstraint, - Nothing, Optional, Shared, - ] # hush pyflakes - -# convenience shortcuts - -TupleOf = TupleConstraint -ListOf = ListConstraint -DictOf = DictConstraint -SetOf = SetConstraint - - -# note: using PolyConstraint (aka ChoiceOf) for inbound tasting is probably -# not fully vetted. One of the issues would be with something like -# ListOf(ChoiceOf(TupleOf(stuff), SetOf(stuff))). The ListUnslicer, when -# handling an inbound Tuple, will do -# TupleUnslicer.setConstraint(polyconstraint), since that's all it really -# knows about, and the TupleUnslicer will then try to look inside the -# polyconstraint for attributes that talk about tuples, and might fail. 
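[Editor's illustration] The ChoiceOf behavior discussed in the comment above — an object is accepted if any one of the alternative constraints accepts it — can be sketched standalone. This is a hedged illustration using toy classes; the names below stand in for Foolscap's real Constraint/Violation machinery and are not its API:

```python
# Toy stand-ins for foolscap's Violation and constraint classes, just to
# illustrate the "first alternative that accepts wins" rule of ChoiceOf.

class Violation(Exception):
    pass

class TypeConstraint:
    """Accepts only instances of a single type (illustrative)."""
    def __init__(self, typ):
        self.typ = typ
    def checkObject(self, obj):
        if not isinstance(obj, self.typ):
            raise Violation("%r is not a %s" % (obj, self.typ.__name__))

class ChoiceOf:
    """Accepts an object if any one of the alternative constraints does."""
    def __init__(self, *alternatives):
        self.alternatives = alternatives
    def checkObject(self, obj):
        for c in self.alternatives:
            try:
                c.checkObject(obj)
                return  # this alternative accepted the object
            except Violation:
                pass    # try the next alternative
        raise Violation("%r does not satisfy any alternative" % (obj,))

int_or_str = ChoiceOf(TypeConstraint(int), TypeConstraint(str))
int_or_str.checkObject(5)     # accepted
int_or_str.checkObject("hi")  # accepted
try:
    int_or_str.checkObject(3.5)
except Violation:
    print("3.5 rejected")     # neither alternative matched
```

The inbound-tasting caveat in the comment follows from this shape: a container unslicer sees only the top-level ChoiceOf, not which alternative will eventually match.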
- -class PolyConstraint(Constraint): - name = "PolyConstraint" - - def __init__(self, *alternatives): - self.alternatives = [IConstraint(a) for a in alternatives] - self.alternatives = tuple(self.alternatives) - # TODO: taster/opentypes should be a union of the alternatives' - - def checkObject(self, obj, inbound): - ok = False - for c in self.alternatives: - try: - c.checkObject(obj, inbound) - ok = True - except Violation: - pass - if not ok: - raise Violation("does not satisfy any of %s" \ - % (self.alternatives,)) - - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - # TODO: if the PolyConstraint contains itself directly, the effect - # is a nop. If a descendent contains the ancestor PolyConstraint, - # then I think it's unbounded.. must draw this out - raise UnboundedSchema # recursion - seen.append(self) - return reduce(max, [c.maxSize(seen[:]) - for c in self.alternatives]) - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return reduce(max, [c.maxDepth(seen[:]) for c in self.alternatives]) - -ChoiceOf = PolyConstraint - -def AnyStringConstraint(*args, **kwargs): - return ChoiceOf(ByteStringConstraint(*args, **kwargs), - UnicodeConstraint(*args, **kwargs)) - -# keep the old meaning, for now. Eventually StringConstraint should become an -# AnyStringConstraint -StringConstraint = ByteStringConstraint - -constraintMap = { - str: ByteStringConstraint(), - unicode: UnicodeConstraint(), - bool: BooleanConstraint(), - int: IntegerConstraint(), - long: IntegerConstraint(maxBytes=1024), - float: NumberConstraint(), - None: Nothing(), - } - -# This module provides a function named addToConstraintTypeMap() which helps -# to resolve some import cycles. 
- -constraintTypeMap = [] -def addToConstraintTypeMap(typ, constraintMaker): - constraintTypeMap.insert(0, (typ, constraintMaker)) - -def _tupleConstraintMaker(t): - return TupleConstraint(*t) -addToConstraintTypeMap(tuple, _tupleConstraintMaker) - -# this function transforms the simple syntax (as used in RemoteInterface -# method definitions) into Constraint instances. This function is registered -# as a zope.interface adapter hook, so that once we've been loaded, other -# code can just do IConstraint(stuff) and expect it to work. - -def adapt_obj_to_iconstraint(iface, t): - if iface is not IConstraint: - return None - assert not IConstraint.providedBy(t) # not sure about this - - c = constraintMap.get(t, None) - if c: - return c - - for (typ, constraintMaker) in constraintTypeMap: - if isinstance(t, typ): - c = constraintMaker(t) - if c: - return c - - # RIFoo means accept either a Referenceable that implements RIFoo, or a - # RemoteReference that points to just such a Referenceable. This is - # hooked in by remoteinterface.py, when it calls addToConstraintTypeMap - - # we are the only way to make constraints - raise UnknownSchemaType("can't make constraint from '%s' (%s)" % - (t, type(t))) - -from zope.interface.interface import adapter_hooks -adapter_hooks.append(adapt_obj_to_iconstraint) - - -# how to accept "([(ref0" ? -# X = "TupleOf(ListOf(TupleOf(" * infinity -# ok, so you can't write a constraint that accepts it. I'm ok with that. 
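[Editor's illustration] The shorthand resolution at the end of schema.py above works in two stages: exact values (like the type object `int`) resolve through a direct map, then instances (like a tuple of sub-shorthands) fall through to an ordered list of (type, maker) entries. A minimal standalone sketch of that lookup, with plain strings standing in for real constraint objects (all names here are illustrative, not Foolscap's API):

```python
# Sketch of the two-stage shorthand lookup mirroring constraintMap and
# constraintTypeMap. "Constraints" are plain strings for illustration.

class UnknownSchemaType(Exception):
    pass

constraint_map = {int: "IntegerConstraint", str: "ByteStringConstraint"}
constraint_type_map = []

def add_to_constraint_type_map(typ, maker):
    # newest registrations are consulted first, as in the original
    constraint_type_map.insert(0, (typ, maker))

add_to_constraint_type_map(tuple, lambda t: "TupleConstraint%r" % (t,))

def make_constraint(t):
    # stage 1: exact-value lookup (e.g. the `int` type object itself)
    c = constraint_map.get(t)
    if c is not None:
        return c
    # stage 2: instance lookup (e.g. a tuple of sub-shorthands)
    for typ, maker in constraint_type_map:
        if isinstance(t, typ):
            return maker(t)
    raise UnknownSchemaType("can't make constraint from %r" % (t,))

print(make_constraint(int))         # IntegerConstraint
print(make_constraint((int, str)))  # a TupleConstraint over (int, str)
```

The real module wires `make_constraint`'s equivalent into zope.interface's `adapter_hooks`, which is why `IConstraint(stuff)` works once the module is imported.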
diff --git a/src/foolscap/foolscap/slicer.py b/src/foolscap/foolscap/slicer.py deleted file mode 100644 index b36569ab..00000000 --- a/src/foolscap/foolscap/slicer.py +++ /dev/null @@ -1,316 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from twisted.python.components import registerAdapter -from twisted.python import log -from zope.interface import implements -from twisted.internet.defer import Deferred -import tokens -from tokens import Violation, BananaError - - -class SlicerClass(type): - # auto-register Slicers - def __init__(self, name, bases, dict): - type.__init__(self, name, bases, dict) - typ = dict.get('slices') - #reg = dict.get('slicerRegistry') - if typ: - registerAdapter(self, typ, tokens.ISlicer) - - -class BaseSlicer: - __metaclass__ = SlicerClass - implements(tokens.ISlicer) - - slices = None - - parent = None - sendOpen = True - opentype = () - trackReferences = False - - def __init__(self, obj): - # this simplifies Slicers which are adapters - self.obj = obj - - def registerReference(self, refid, obj): - # optimize: most Slicers will delegate this up to the Root - return self.parent.registerReference(refid, obj) - def slicerForObject(self, obj): - # optimize: most Slicers will delegate this up to the Root - return self.parent.slicerForObject(obj) - def slice(self, streamable, banana): - # this is what makes us ISlicer - self.streamable = streamable - assert self.opentype - for o in self.opentype: - yield o - for t in self.sliceBody(streamable, banana): - yield t - def sliceBody(self, streamable, banana): - raise NotImplementedError - def childAborted(self, f): - return f - - def describe(self): - return "??" - - -class ScopedSlicer(BaseSlicer): - """This Slicer provides a containing scope for referenceable things like - lists. 
The same list will not be serialized twice within this scope, but - it will not survive outside it.""" - - def __init__(self, obj): - BaseSlicer.__init__(self, obj) - self.references = {} # maps id(obj) -> (obj,refid) - - def registerReference(self, refid, obj): - # keep references here, not in the actual PBRootSlicer - - # This use of id(obj) requires a bit of explanation. We are making - # the assumption that the object graph remains unmodified until - # serialization is complete. In particular, we assume that all the - # objects in it remain alive, and no new objects are added to it, - # until serialization is complete. id(obj) is only unique for live - # objects: once the object is garbage-collected, a new object may be - # created with the same id(obj) value. - # - # The concern is that a custom Slicer will call something that - # mutates the object graph before it has finished being serialized. - # This might be one which calls some user-level function during - # Slicing, or one which uses a Deferred to put off serialization for - # a while, creating an opportunity for some other code to get - # control. - - # The specific concern is that if, in the middle of serialization, an - # object that was already serialized is gc'ed, and a new object is - # created and attached to a portion of the object graph that hasn't - # been serialized yet, and if the new object gets the same id(obj) as - # the dead object, then we could be tricked into sending the - # reference number of the old (dead) object. On the receiving end, - # this would result in a mangled object graph. - - # User code isn't supposed to allow the object graph to change during - # serialization, so this mangling "should not happen" under normal - # circumstances. However, as a reasonably cheap way to mitigate the - # worst sort of mangling when user code *does* mess up, - # self.references maps from id(obj) to a tuple of (obj,refid) instead - # of just the refid. 
This insures that the object will stay alive - # until the ScopedSlicer dies, guaranteeing that we won't get - # duplicate id(obj) values. If user code mutates the object graph - # during serialization we might still get inconsistent results, but - # they'll be the ordinary kind of inconsistent results (snapshots of - # different branches of the object graph at different points in time) - # rather than the blatantly wrong mangling that would occur with - # re-used id(obj) values. - - self.references[id(obj)] = (obj,refid) - - def slicerForObject(self, obj): - # check for an object which was sent previously or has at least - # started sending - obj_refid = self.references.get(id(obj), None) - if obj_refid is not None: - # we've started to send this object already, so just include a - # reference to it - return ReferenceSlicer(obj_refid[1]) - # otherwise go upstream so we can serialize the object completely - return self.parent.slicerForObject(obj) - -UnslicerRegistry = {} -BananaUnslicerRegistry = {} - -def registerUnslicer(opentype, factory, registry=None): - if registry is None: - registry = UnslicerRegistry - assert not registry.has_key(opentype) - registry[opentype] = factory - -class UnslicerClass(type): - # auto-register Unslicers - def __init__(self, name, bases, dict): - type.__init__(self, name, bases, dict) - opentype = dict.get('opentype') - reg = dict.get('unslicerRegistry') - if opentype: - registerUnslicer(opentype, self, reg) - -class BaseUnslicer: - __metaclass__ = UnslicerClass - opentype = None - implements(tokens.IUnslicer) - - def __init__(self): - pass - - def describe(self): - return "??" 
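The comment above argues that storing `(obj, refid)` tuples, rather than bare refids, pins each serialized object alive so its `id()` cannot be recycled mid-serialization. That property is easy to demonstrate with a weakref; this is an illustrative sketch, not Foolscap code.

```python
# Demonstrate that keeping (obj, refid) in the references dict pins
# the object alive, so id(obj) cannot be reused while the scope lives.
import weakref

class Node(object):
    pass

references = {}               # id(obj) -> (obj, refid), as in ScopedSlicer
obj = Node()
w = weakref.ref(obj)
references[id(obj)] = (obj, 0)
del obj                       # the caller drops its reference...
# ...but the (obj, refid) tuple still holds the object, so its id()
# cannot be recycled for a new object while serialization is running
assert w() is not None
references.clear()            # scope ends: the object may now be collected
```

Note the assertion relies only on the dict holding a strong reference; once the ScopedSlicer (here, the dict) is cleared, the object becomes collectable again.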
- - def setConstraint(self, constraint): - pass - - def start(self, count): - pass - - def checkToken(self, typebyte, size): - return # no restrictions - - def openerCheckToken(self, typebyte, size, opentype): - return self.parent.openerCheckToken(typebyte, size, opentype) - - def open(self, opentype): - """Return an IUnslicer object based upon the 'opentype' tuple. - Subclasses that wish to change the way opentypes are mapped to - Unslicers can do so by changing this behavior. - - This method does not apply constraints, it only serves to map - opentype into Unslicer. Most subclasses will implement this by - delegating the request to their parent (and thus, eventually, to the - RootUnslicer), and will set the new child's .opener attribute so - that they can do the same. Subclasses that wish to change the way - opentypes are mapped to Unslicers can do so by changing this - behavior.""" - - return self.parent.open(opentype) - - def doOpen(self, opentype): - """Return an IUnslicer object based upon the 'opentype' tuple. This - object will receive all tokens destined for the subnode. - - If you want to enforce a constraint, you must override this method - and do two things: make sure your constraint accepts the opentype, - and set a per-item constraint on the new child unslicer. - - This method gets the IUnslicer from our .open() method. That might - return None instead of a child unslicer if the they want a - multi-token opentype tuple, so be sure to check for Noneness before - adding a per-item constraint. - """ - - return self.open(opentype) - - def receiveChild(self, obj, ready_deferred=None): - """Unslicers for containers should accumulate their children's - ready_deferreds, then combine them in an AsyncAND when receiveClose() - happens, and return the AsyncAND as the ready_deferreds half of the - receiveClose() return value. 
- """ - pass - - def reportViolation(self, why): - return why - - def receiveClose(self): - raise NotImplementedError - - def finish(self): - pass - - - def setObject(self, counter, obj): - """To pass references to previously-sent objects, the [OPEN, - 'reference', number, CLOSE] sequence is used. The numbers are - generated implicitly by the sending Banana, counting from 0 for the - object described by the very first OPEN sent over the wire, - incrementing for each subsequent one. The objects themselves are - stored in any/all Unslicers who cares to. Generally this is the - RootUnslicer, but child slices could do it too if they wished. - """ - # TODO: examine how abandoned child objects could mess up this - # counter - pass - - def getObject(self, counter): - """'None' means 'ask our parent instead'. - """ - return None - - def explode(self, failure): - """If something goes wrong in a Deferred callback, it may be too late - to reject the token and to normal error handling. I haven't figured - out how to do sensible error-handling in this situation. This method - exists to make sure that the exception shows up *somewhere*. If this - is called, it is also likely that a placeholder (probably a Deferred) - will be left in the unserialized object graph about to be handed to - the RootUnslicer. - """ - - # RootUnslicer pays attention to this .exploded attribute and refuses - # to deliver anything if it is set. But PBRootUnslicer ignores it. - # TODO: clean this up, and write some unit tests to trigger it (by - # violating schemas?) - log.msg("BaseUnslicer.explode: %s" % failure) - self.protocol.exploded = failure - -class ScopedUnslicer(BaseUnslicer): - """This Unslicer provides a containing scope for referenceable things - like lists. 
It corresponds to the ScopedSlicer base class.""" - - def __init__(self): - BaseUnslicer.__init__(self) - self.references = {} - - def setObject(self, counter, obj): - if self.protocol.debugReceive: - print "setObject(%s): %s{%s}" % (counter, obj, id(obj)) - self.references[counter] = obj - - def getObject(self, counter): - obj = self.references.get(counter) - if self.protocol.debugReceive: - print "getObject(%s) -> %s{%s}" % (counter, obj, id(obj)) - return obj - - -class LeafUnslicer(BaseUnslicer): - # inherit from this to reject any child nodes - - # .checkToken in LeafUnslicer subclasses should reject OPEN tokens - - def doOpen(self, opentype): - raise Violation("'%s' does not accept sub-objects" % self) - - -# References are special enough to put here instead of slicers/ - -class ReferenceSlicer(BaseSlicer): - # this is created explicitly, not as an adapter - opentype = ('reference',) - trackReferences = False - - def __init__(self, refid): - assert type(refid) is int - self.refid = refid - def sliceBody(self, streamable, banana): - yield self.refid - -class ReferenceUnslicer(LeafUnslicer): - opentype = ('reference',) - - constraint = None - finished = False - - def setConstraint(self, constraint): - self.constraint = constraint - - def checkToken(self, typebyte,size): - if typebyte != tokens.INT: - raise BananaError("ReferenceUnslicer only accepts INTs") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.finished: - raise BananaError("ReferenceUnslicer only accepts one int") - self.obj = self.protocol.getObject(obj) - self.finished = True - # assert that this conforms to the constraint - if self.constraint: - self.constraint.checkObject(self.obj, True) - # TODO: it might be a Deferred, but we should know enough about the - # incoming value to check the constraint. This requires a subclass - # of Deferred which can give us the metadata. 
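The `setObject`/`getObject` docstrings above describe the implicit numbering that `('reference', n)` tokens index into: the very first OPEN sent over the wire is object 0, and each subsequent OPEN increments the counter. A minimal sketch of that bookkeeping, with illustrative names rather than Foolscap's API:

```python
# Sketch of the implicit OPEN counter behind reference tokens: each
# top-level OPEN gets the next number, and a later ('reference', n)
# token resolves to the object stored under n.
objects = {}
counter = 0

def set_object(obj):
    """Record the object built by the next OPEN sequence."""
    global counter
    objects[counter] = obj
    counter += 1

def get_object(n):
    """Resolve a ('reference', n) token to the already-built object."""
    return objects[n]

shared = ["shared"]
set_object(shared)            # OPEN #0 built this list
set_object({"k": 1})          # OPEN #1
assert get_object(0) is shared
```

Because `get_object(0)` returns the very same list object, a graph that mentions `shared` twice deserializes with both mentions aliased, matching the sender's structure.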
- - def receiveClose(self): - return self.obj, None diff --git a/src/foolscap/foolscap/slicers/__init__.py b/src/foolscap/foolscap/slicers/__init__.py deleted file mode 100644 index e69de29b..00000000 diff --git a/src/foolscap/foolscap/slicers/allslicers.py b/src/foolscap/foolscap/slicers/allslicers.py deleted file mode 100644 index afb3b470..00000000 --- a/src/foolscap/foolscap/slicers/allslicers.py +++ /dev/null @@ -1,36 +0,0 @@ - -######################## Slicers+Unslicers - -# note that Slicing is always easier than Unslicing, because Unslicing -# is the side where you are dealing with the danger - -from foolscap.slicers.none import NoneSlicer, NoneUnslicer -from foolscap.slicers.bool import BooleanSlicer, BooleanUnslicer -from foolscap.slicers.unicode import UnicodeSlicer, UnicodeUnslicer -from foolscap.slicers.list import ListSlicer, ListUnslicer -from foolscap.slicers.tuple import TupleSlicer, TupleUnslicer -from foolscap.slicers.set import SetSlicer, SetUnslicer -from foolscap.slicers.set import FrozenSetSlicer, FrozenSetUnslicer -#from foolscap.slicers.set import BuiltinSetSlicer -from foolscap.slicers.dict import DictSlicer, DictUnslicer, OrderedDictSlicer -from foolscap.slicers.vocab import ReplaceVocabSlicer, ReplaceVocabUnslicer -from foolscap.slicers.vocab import ReplaceVocabularyTable, AddToVocabularyTable -from foolscap.slicers.vocab import AddVocabSlicer, AddVocabUnslicer -from foolscap.slicers.root import RootSlicer, RootUnslicer - -# appease pyflakes -unused = [ - NoneSlicer, NoneUnslicer, - BooleanSlicer, BooleanUnslicer, - UnicodeSlicer, UnicodeUnslicer, - ListSlicer, ListUnslicer, - TupleSlicer, TupleUnslicer, - SetSlicer, SetUnslicer, - FrozenSetSlicer, FrozenSetUnslicer, - #from foolscap.slicers.set import BuiltinSetSlicer - DictSlicer, DictUnslicer, OrderedDictSlicer, - ReplaceVocabSlicer, ReplaceVocabUnslicer, - ReplaceVocabularyTable, AddToVocabularyTable, - AddVocabSlicer, AddVocabUnslicer, - RootSlicer, RootUnslicer, - ] diff --git 
a/src/foolscap/foolscap/slicers/bool.py b/src/foolscap/foolscap/slicers/bool.py deleted file mode 100644 index 9726e760..00000000 --- a/src/foolscap/foolscap/slicers/bool.py +++ /dev/null @@ -1,80 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from twisted.python.components import registerAdapter -from twisted.internet.defer import Deferred -from foolscap import tokens -from foolscap.tokens import Violation, BananaError -from foolscap.slicer import BaseSlicer, LeafUnslicer -from foolscap.constraint import OpenerConstraint, IntegerConstraint, Any - -class BooleanSlicer(BaseSlicer): - opentype = ('boolean',) - trackReferences = False - def sliceBody(self, streamable, banana): - if self.obj: - yield 1 - else: - yield 0 -registerAdapter(BooleanSlicer, bool, tokens.ISlicer) - -class BooleanUnslicer(LeafUnslicer): - opentype = ('boolean',) - - value = None - constraint = None - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, BooleanConstraint) - self.constraint = constraint - - def checkToken(self, typebyte, size): - if typebyte != tokens.INT: - raise BananaError("BooleanUnslicer only accepts an INT token") - if self.value != None: - raise BananaError("BooleanUnslicer only accepts one token") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - assert type(obj) == int - if self.constraint: - if self.constraint.value != None: - if bool(obj) != self.constraint.value: - raise Violation("This boolean can only be %s" % \ - self.constraint.value) - self.value = bool(obj) - - def receiveClose(self): - return self.value, None - - def describe(self): - return "" - -class BooleanConstraint(OpenerConstraint): - strictTaster = True - opentypes = [("boolean",)] - _myint = IntegerConstraint() - name = "BooleanConstraint" - - def __init__(self, value=None): - # self.value is a joke. 
This allows you to use a schema of - # BooleanConstraint(True) which only accepts 'True'. I cannot - # imagine a possible use for this, but it made me laugh. - self.value = value - - def checkObject(self, obj, inbound): - if type(obj) != bool: - raise Violation("not a bool") - if self.value != None: - if obj != self.value: - raise Violation("not %s" % self.value) - - def maxSize(self, seen=None): - if not seen: seen = [] - return self.OPENBYTES("boolean") + self._myint.maxSize(seen) - def maxDepth(self, seen=None): - if not seen: seen = [] - return 1+self._myint.maxDepth(seen) - diff --git a/src/foolscap/foolscap/slicers/dict.py b/src/foolscap/foolscap/slicers/dict.py deleted file mode 100644 index b5207c1d..00000000 --- a/src/foolscap/foolscap/slicers/dict.py +++ /dev/null @@ -1,167 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from twisted.python import log -from twisted.internet.defer import Deferred -from foolscap.tokens import Violation, BananaError -from foolscap.slicer import BaseSlicer, BaseUnslicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint -from foolscap.util import AsyncAND - -class DictSlicer(BaseSlicer): - opentype = ('dict',) - trackReferences = True - slices = None - def sliceBody(self, streamable, banana): - for key,value in self.obj.items(): - yield key - yield value - -class DictUnslicer(BaseUnslicer): - opentype = ('dict',) - - gettingKey = True - keyConstraint = None - valueConstraint = None - maxKeys = None - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, DictConstraint) - self.keyConstraint = constraint.keyConstraint - self.valueConstraint = constraint.valueConstraint - self.maxKeys = constraint.maxKeys - - def start(self, count): - self.d = {} - self.protocol.setObject(count, self.d) - self.key = None - self._ready_deferreds = [] - - def checkToken(self, typebyte, size): - if self.maxKeys != None: - if len(self.d) >= 
self.maxKeys: - raise Violation("the dict is full") - if self.gettingKey: - if self.keyConstraint: - self.keyConstraint.checkToken(typebyte, size) - else: - if self.valueConstraint: - self.valueConstraint.checkToken(typebyte, size) - - def doOpen(self, opentype): - if self.maxKeys != None: - if len(self.d) >= self.maxKeys: - raise Violation("the dict is full") - if self.gettingKey: - if self.keyConstraint: - self.keyConstraint.checkOpentype(opentype) - else: - if self.valueConstraint: - self.valueConstraint.checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - if self.gettingKey: - if self.keyConstraint: - unslicer.setConstraint(self.keyConstraint) - else: - if self.valueConstraint: - unslicer.setConstraint(self.valueConstraint) - return unslicer - - def update(self, value, key): - # this is run as a Deferred callback, hence the backwards arguments - self.d[key] = value - - def receiveChild(self, obj, ready_deferred=None): - if ready_deferred: - self._ready_deferreds.append(ready_deferred) - if self.gettingKey: - self.receiveKey(obj) - else: - self.receiveValue(obj) - self.gettingKey = not self.gettingKey - - def receiveKey(self, key): - # I don't think it is legal (in python) to use an incomplete object - # as a dictionary key, because you must have all the contents to - # hash it. 
Someone could fake up a token stream to hit this case, - # however: OPEN(dict), OPEN(tuple), OPEN(reference), 0, CLOSE, CLOSE, - # "value", CLOSE - if isinstance(key, Deferred): - raise BananaError("incomplete object as dictionary key") - try: - if self.d.has_key(key): - raise BananaError("duplicate key '%s'" % key) - except TypeError: - raise BananaError("unhashable key '%s'" % key) - self.key = key - - def receiveValue(self, value): - if isinstance(value, Deferred): - value.addCallback(self.update, self.key) - value.addErrback(log.err) - self.d[self.key] = value # placeholder - - def receiveClose(self): - ready_deferred = None - if self._ready_deferreds: - ready_deferred = AsyncAND(self._ready_deferreds) - return self.d, ready_deferred - - def describe(self): - if self.gettingKey: - return "{}" - else: - return "{}[%s]" % self.key - - -class OrderedDictSlicer(DictSlicer): - slices = dict - def sliceBody(self, streamable, banana): - keys = self.obj.keys() - keys.sort() - for key in keys: - value = self.obj[key] - yield key - yield value - - -class DictConstraint(OpenerConstraint): - opentypes = [("dict",)] - name = "DictConstraint" - - def __init__(self, keyConstraint, valueConstraint, maxKeys=30): - self.keyConstraint = IConstraint(keyConstraint) - self.valueConstraint = IConstraint(valueConstraint) - self.maxKeys = maxKeys - def checkObject(self, obj, inbound): - if not isinstance(obj, dict): - raise Violation, "'%s' (%s) is not a Dictionary" % (obj, - type(obj)) - if self.maxKeys != None and len(obj) > self.maxKeys: - raise Violation, "Dict keys=%d > maxKeys=%d" % (len(obj), - self.maxKeys) - for key, value in obj.iteritems(): - self.keyConstraint.checkObject(key, inbound) - self.valueConstraint.checkObject(value, inbound) - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - if self.maxKeys == None: - raise UnboundedSchema - keySize = self.keyConstraint.maxSize(seen[:]) - valueSize 
= self.valueConstraint.maxSize(seen[:]) - return self.OPENBYTES("dict") + self.maxKeys * (keySize + valueSize) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - keyDepth = self.keyConstraint.maxDepth(seen[:]) - valueDepth = self.valueConstraint.maxDepth(seen[:]) - return 1 + max(keyDepth, valueDepth) - - diff --git a/src/foolscap/foolscap/slicers/list.py b/src/foolscap/foolscap/slicers/list.py deleted file mode 100644 index 357a1059..00000000 --- a/src/foolscap/foolscap/slicers/list.py +++ /dev/null @@ -1,154 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from twisted.python import log -from twisted.internet.defer import Deferred -from foolscap.tokens import Violation -from foolscap.slicer import BaseSlicer, BaseUnslicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema, IConstraint -from foolscap.util import AsyncAND - - -class ListSlicer(BaseSlicer): - opentype = ("list",) - trackReferences = True - slices = list - - def sliceBody(self, streamable, banana): - for i in self.obj: - yield i - -class ListUnslicer(BaseUnslicer): - opentype = ("list",) - - maxLength = None - itemConstraint = None - debug = False - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, ListConstraint) - self.maxLength = constraint.maxLength - self.itemConstraint = constraint.constraint - - def start(self, count): - #self.opener = foo # could replace it if we wanted to - self.list = [] - self.count = count - if self.debug: - log.msg("%s[%d].start with %s" % (self, self.count, self.list)) - self.protocol.setObject(count, self.list) - self._ready_deferreds = [] - - def checkToken(self, typebyte, size): - if self.maxLength != None and len(self.list) >= self.maxLength: - # list is full, no more tokens accepted - # this is hit if the max+1 item is a primitive type - raise Violation("the list is full") - if 
self.itemConstraint: - self.itemConstraint.checkToken(typebyte, size) - - def doOpen(self, opentype): - # decide whether the given object type is acceptable here. Raise a - # Violation exception if not, otherwise give it to our opener (which - # will normally be the RootUnslicer). Apply a constraint to the new - # unslicer. - if self.maxLength != None and len(self.list) >= self.maxLength: - # this is hit if the max+1 item is a non-primitive type - raise Violation("the list is full") - if self.itemConstraint: - self.itemConstraint.checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - if self.itemConstraint: - unslicer.setConstraint(self.itemConstraint) - return unslicer - - def update(self, obj, index): - # obj has already passed typechecking - if self.debug: - log.msg("%s[%d].update: [%d]=%s" % (self, self.count, index, obj)) - assert isinstance(index, int) - self.list[index] = obj - return obj - - def receiveChild(self, obj, ready_deferred=None): - if ready_deferred: - self._ready_deferreds.append(ready_deferred) - if self.debug: - log.msg("%s[%d].receiveChild(%s)" % (self, self.count, obj)) - # obj could be a primitive type, a Deferred, or a complex type like - # those returned from an InstanceUnslicer. However, the individual - # object has already been through the schema validation process. The - # only remaining question is whether the larger schema will accept - # it. 
- if self.maxLength != None and len(self.list) >= self.maxLength: - # this is redundant - # (if it were a non-primitive one, it would be caught in doOpen) - # (if it were a primitive one, it would be caught in checkToken) - raise Violation("the list is full") - if isinstance(obj, Deferred): - if self.debug: - log.msg(" adding my update[%d] to %s" % (len(self.list), obj)) - obj.addCallback(self.update, len(self.list)) - obj.addErrback(self.printErr) - placeholder = "list placeholder for arg[%d], rd=%s" % \ - (len(self.list), ready_deferred) - self.list.append(placeholder) - else: - self.list.append(obj) - - def printErr(self, why): - print "ERR!" - print why.getBriefTraceback() - log.err(why) - - def receiveClose(self): - ready_deferred = None - if self._ready_deferreds: - ready_deferred = AsyncAND(self._ready_deferreds) - return self.list, ready_deferred - - def describe(self): - return "[%d]" % len(self.list) - - -class ListConstraint(OpenerConstraint): - """The object must be a list of objects, with a given maximum length. To - accept lists of any length, use maxLength=None (but you will get a - UnboundedSchema warning). 
All member objects must obey the given - constraint.""" - - opentypes = [("list",)] - name = "ListConstraint" - - def __init__(self, constraint, maxLength=30, minLength=0): - self.constraint = IConstraint(constraint) - self.maxLength = maxLength - self.minLength = minLength - - def checkObject(self, obj, inbound): - if not isinstance(obj, list): - raise Violation("not a list") - if self.maxLength is not None and len(obj) > self.maxLength: - raise Violation("list too long") - if len(obj) < self.minLength: - raise Violation("list too short") - for o in obj: - self.constraint.checkObject(o, inbound) - - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - if self.maxLength == None: - raise UnboundedSchema - return (self.OPENBYTES("list") + - self.maxLength * self.constraint.maxSize(seen)) - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return 1 + self.constraint.maxDepth(seen) diff --git a/src/foolscap/foolscap/slicers/none.py b/src/foolscap/foolscap/slicers/none.py deleted file mode 100644 index 2f431a7d..00000000 --- a/src/foolscap/foolscap/slicers/none.py +++ /dev/null @@ -1,41 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from foolscap.tokens import Violation, BananaError -from foolscap.slicer import BaseSlicer, LeafUnslicer -from foolscap.constraint import OpenerConstraint - - -class NoneSlicer(BaseSlicer): - opentype = ('none',) - trackReferences = False - slices = type(None) - def sliceBody(self, streamable, banana): - # hmm, we need an empty generator. 
I think a sequence is the only way - # to accomplish this, other than 'if 0: yield' or something silly - return [] - -class NoneUnslicer(LeafUnslicer): - opentype = ('none',) - - def checkToken(self, typebyte, size): - raise BananaError("NoneUnslicer does not accept any tokens") - def receiveClose(self): - return None, None - - -class Nothing(OpenerConstraint): - """Accept only 'None'.""" - strictTaster = True - opentypes = [("none",)] - name = "Nothing" - - def checkObject(self, obj, inbound): - if obj is not None: - raise Violation("'%s' is not None" % (obj,)) - def maxSize(self, seen=None): - if not seen: seen = [] - return self.OPENBYTES("none") - def maxDepth(self, seen=None): - if not seen: seen = [] - return 1 - diff --git a/src/foolscap/foolscap/slicers/root.py b/src/foolscap/foolscap/slicers/root.py deleted file mode 100644 index 7000e6db..00000000 --- a/src/foolscap/foolscap/slicers/root.py +++ /dev/null @@ -1,211 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -import types -from zope.interface import implements -from twisted.internet.defer import Deferred -from foolscap import tokens -from foolscap.tokens import Violation, BananaError -from foolscap.slicer import BaseUnslicer -from foolscap.slicer import UnslicerRegistry, BananaUnslicerRegistry -from foolscap.slicers.vocab import ReplaceVocabularyTable, AddToVocabularyTable - -class RootSlicer: - implements(tokens.ISlicer, tokens.IRootSlicer) - - streamableInGeneral = True - producingDeferred = None - objectSentDeferred = None - slicerTable = {} - debug = False - - def __init__(self, protocol): - self.protocol = protocol - self.sendQueue = [] - - def allowStreaming(self, streamable): - self.streamableInGeneral = streamable - - def registerReference(self, refid, obj): - pass - - def slicerForObject(self, obj): - # could use a table here if you think it'd be faster than an - # adapter lookup - if self.debug: print "slicerForObject(%s)" % type(obj) - # do the adapter lookup first, so that 
registered adapters override - # UnsafeSlicerTable's InstanceSlicer - slicer = tokens.ISlicer(obj, None) - if slicer: - if self.debug: print "got ISlicer", slicer - return slicer - slicerFactory = self.slicerTable.get(type(obj)) - if slicerFactory: - if self.debug: print " got slicerFactory", slicerFactory - return slicerFactory(obj) - if issubclass(type(obj), types.InstanceType): - name = str(obj.__class__) - else: - name = str(type(obj)) - if self.debug: print "cannot serialize %s (%s)" % (obj, name) - raise Violation("cannot serialize %s (%s)" % (obj, name)) - - def slice(self): - return self - def __iter__(self): - return self # we are our own iterator - def next(self): - if self.objectSentDeferred: - self.objectSentDeferred.callback(None) - self.objectSentDeferred = None - if self.sendQueue: - (obj, self.objectSentDeferred) = self.sendQueue.pop() - self.streamable = self.streamableInGeneral - return obj - if self.protocol.debugSend: - print "LAST BAG" - self.producingDeferred = Deferred() - self.streamable = True - return self.producingDeferred - - def childAborted(self, f): - assert self.objectSentDeferred - self.objectSentDeferred.errback(f) - self.objectSentDeferred = None - return None - - def send(self, obj): - # obj can also be a Slicer, say, a CallSlicer. We return a Deferred - # which fires when the object has been fully serialized. 
- idle = (len(self.protocol.slicerStack) == 1) and not self.sendQueue - objectSentDeferred = Deferred() - self.sendQueue.append((obj, objectSentDeferred)) - if idle: - # wake up - if self.protocol.debugSend: - print " waking up to send" - if self.producingDeferred: - d = self.producingDeferred - self.producingDeferred = None - # TODO: consider reactor.callLater(0, d.callback, None) - # I'm not sure it's actually necessary, though - d.callback(None) - return objectSentDeferred - - def describe(self): - return "" - - def connectionLost(self, why): - # abandon everything we wanted to send - if self.objectSentDeferred: - self.objectSentDeferred.errback(why) - self.objectSentDeferred = None - for obj, d in self.sendQueue: - d.errback(why) - self.sendQueue = [] - - - -class RootUnslicer(BaseUnslicer): - # topRegistries is used for top-level objects - topRegistries = [UnslicerRegistry, BananaUnslicerRegistry] - # openRegistries is used for everything at lower levels - openRegistries = [UnslicerRegistry] - constraint = None - openCount = None - - def __init__(self): - self.objects = {} - keys = [] - for r in self.topRegistries + self.openRegistries: - for k in r.keys(): - keys.append(len(k[0])) - self.maxIndexLength = reduce(max, keys) - - def start(self, count): - pass - - def setConstraint(self, constraint): - # this constraints top-level objects. E.g., if this is an - # IntegerConstraint, then only integers will be accepted. 
- self.constraint = constraint - - def checkToken(self, typebyte, size): - if self.constraint: - self.constraint.checkToken(typebyte, size) - - def openerCheckToken(self, typebyte, size, opentype): - if typebyte == tokens.STRING: - if size > self.maxIndexLength: - why = "STRING token is too long, %d>%d" % \ - (size, self.maxIndexLength) - raise Violation(why) - elif typebyte == tokens.VOCAB: - return - else: - # TODO: hack for testing - raise Violation("index token 0x%02x not STRING or VOCAB" % \ - ord(typebyte)) - raise BananaError("index token 0x%02x not STRING or VOCAB" % \ - ord(typebyte)) - - def open(self, opentype): - # called (by delegation) by the top Unslicer on the stack, regardless - # of what kind of unslicer it is. This is only used for "internal" - # objects: non-top-level nodes - assert len(self.protocol.receiveStack) > 1 - for reg in self.openRegistries: - opener = reg.get(opentype) - if opener is not None: - child = opener() - return child - else: - raise Violation("unknown OPEN type %s" % (opentype,)) - - def doOpen(self, opentype): - # this is only called for top-level objects - assert len(self.protocol.receiveStack) == 1 - if self.constraint: - self.constraint.checkOpentype(opentype) - for reg in self.topRegistries: - opener = reg.get(opentype) - if opener is not None: - child = opener() - break - else: - raise Violation("unknown top-level OPEN type %s" % (opentype,)) - - if self.constraint: - child.setConstraint(self.constraint) - return child - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.protocol.debugReceive: - print "RootUnslicer.receiveChild(%s)" % (obj,) - self.objects = {} - if obj in (ReplaceVocabularyTable, AddToVocabularyTable): - # the unslicer has already changed the vocab table - return - if self.protocol.exploded: - print "protocol exploded, can't deliver object" - print self.protocol.exploded - 
self.protocol.receivedObject(self.protocol.exploded) - return - self.protocol.receivedObject(obj) # give finished object to Banana - - def receiveClose(self): - raise BananaError("top-level should never receive CLOSE tokens") - - def reportViolation(self, why): - return self.protocol.reportViolation(why) - - def describe(self): - return "" - - def setObject(self, counter, obj): - pass - - def getObject(self, counter): - return None - diff --git a/src/foolscap/foolscap/slicers/set.py b/src/foolscap/foolscap/slicers/set.py deleted file mode 100644 index 85a469b6..00000000 --- a/src/foolscap/foolscap/slicers/set.py +++ /dev/null @@ -1,213 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -import sets -from twisted.internet import defer -from twisted.python import log -from foolscap.slicers.list import ListSlicer -from foolscap.slicers.tuple import TupleUnslicer -from foolscap.slicer import BaseUnslicer -from foolscap.tokens import Violation -from foolscap.constraint import OpenerConstraint, UnboundedSchema, Any, \ - IConstraint -from foolscap.util import AsyncAND - -class SetSlicer(ListSlicer): - opentype = ("set",) - trackReferences = True - slices = set - - def sliceBody(self, streamable, banana): - for i in self.obj: - yield i - -class FrozenSetSlicer(SetSlicer): - opentype = ("immutable-set",) - trackReferences = False - slices = frozenset - -# python2.4 has a builtin 'set' type, which is mutable, and we require -# python2.4 or newer. Code which was written to be compatible with python2.3, -# however, may use the 'sets' module. We will serialize old sets.Set and -# sets.ImmutableSet the same as we serialize new set and frozenset. -# Unfortunately this means that these objects will be deserialized as modern -# 'set' and 'frozenset' objects, which are not entirely compatible. Therefore -# code that is compatible with python2.3 might not work with foolscap. 
- -class OldSetSlicer(SetSlicer): - slices = sets.Set -class OldImmutableSetSlicer(FrozenSetSlicer): - slices = sets.ImmutableSet - -class _Placeholder: - pass - -class SetUnslicer(BaseUnslicer): - # this is a lot like a list, but sufficiently different to make it not - # worth subclassing - opentype = ("set",) - - debug = False - maxLength = None - itemConstraint = None - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, SetConstraint) - self.maxLength = constraint.maxLength - self.itemConstraint = constraint.constraint - - def start(self, count): - #self.opener = foo # could replace it if we wanted to - self.set = set() - self.count = count - if self.debug: - log.msg("%s[%d].start with %s" % (self, self.count, self.set)) - self.protocol.setObject(count, self.set) - self._ready_deferreds = [] - - def checkToken(self, typebyte, size): - if self.maxLength != None and len(self.set) >= self.maxLength: - # list is full, no more tokens accepted - # this is hit if the max+1 item is a primitive type - raise Violation("the set is full") - if self.itemConstraint: - self.itemConstraint.checkToken(typebyte, size) - - def doOpen(self, opentype): - # decide whether the given object type is acceptable here. Raise a - # Violation exception if not, otherwise give it to our opener (which - # will normally be the RootUnslicer). Apply a constraint to the new - # unslicer. 
- if self.maxLength != None and len(self.set) >= self.maxLength: - # this is hit if the max+1 item is a non-primitive type - raise Violation("the set is full") - if self.itemConstraint: - self.itemConstraint.checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - if self.itemConstraint: - unslicer.setConstraint(self.itemConstraint) - return unslicer - - def update(self, obj, placeholder): - # obj has already passed typechecking - if self.debug: - log.msg("%s[%d].update: [%s]=%s" % (self, self.count, - placeholder, obj)) - self.set.remove(placeholder) - self.set.add(obj) - return obj - - def receiveChild(self, obj, ready_deferred=None): - if ready_deferred: - self._ready_deferreds.append(ready_deferred) - if self.debug: - log.msg("%s[%d].receiveChild(%s)" % (self, self.count, obj)) - # obj could be a primitive type, a Deferred, or a complex type like - # those returned from an InstanceUnslicer. However, the individual - # object has already been through the schema validation process. The - # only remaining question is whether the larger schema will accept - # it. - if self.maxLength != None and len(self.set) >= self.maxLength: - # this is redundant - # (if it were a non-primitive one, it would be caught in doOpen) - # (if it were a primitive one, it would be caught in checkToken) - raise Violation("the set is full") - if isinstance(obj, defer.Deferred): - if self.debug: - log.msg(" adding my update[%d] to %s" % (len(self.set), obj)) - # note: the placeholder isn't strictly necessary, but it will - # help debugging to see a _Placeholder sitting in the set when it - # shouldn't rather than seeing a set that is smaller than it - # ought to be. If a remote method ever sees a _Placeholder, then - # something inside Foolscap has broken. - placeholder = _Placeholder() - obj.addCallback(self.update, placeholder) - obj.addErrback(self.printErr) - self.set.add(placeholder) - else: - self.set.add(obj) - - def printErr(self, why): - print "ERR!" 
- print why.getBriefTraceback() - log.err(why) - - def receiveClose(self): - ready_deferred = None - if self._ready_deferreds: - ready_deferred = AsyncAND(self._ready_deferreds) - return self.set, ready_deferred - -class FrozenSetUnslicer(TupleUnslicer): - opentype = ("immutable-set",) - - def receiveClose(self): - obj_or_deferred, ready_deferred = TupleUnslicer.receiveClose(self) - if isinstance(obj_or_deferred, defer.Deferred): - def _convert(the_tuple): - return frozenset(the_tuple) - obj_or_deferred.addCallback(_convert) - else: - obj_or_deferred = frozenset(obj_or_deferred) - return obj_or_deferred, ready_deferred - - -class SetConstraint(OpenerConstraint): - """The object must be a Set of some sort, with a given maximum size. To - accept sets of any size, use maxLength=None. All member objects must obey - the given constraint. By default this will accept both mutable and - immutable sets, if you want to require a particular type, set mutable= to - either True or False. - """ - - # TODO: if mutable!=None, we won't throw out the wrong set type soon - # enough. We need to override checkOpenType to accomplish this. 
- opentypes = [("set",), ("immutable-set",)] - name = "SetConstraint" - - mutable_set_types = (set, sets.Set) - immutable_set_types = (frozenset, sets.ImmutableSet) - all_set_types = mutable_set_types + immutable_set_types - - def __init__(self, constraint, maxLength=30, mutable=None): - self.constraint = IConstraint(constraint) - self.maxLength = maxLength - self.mutable = mutable - - def checkObject(self, obj, inbound): - if not isinstance(obj, self.all_set_types): - raise Violation("not a set") - if (self.mutable == True and - not isinstance(obj, self.mutable_set_types)): - raise Violation("obj is a set, but not a mutable one") - if (self.mutable == False and - not isinstance(obj, self.immutable_set_types)): - raise Violation("obj is a set, but not an immutable one") - if self.maxLength is not None and len(obj) > self.maxLength: - raise Violation("set is too large") - if self.constraint: - for o in obj: - self.constraint.checkObject(o, inbound) - - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - if self.maxLength == None: - raise UnboundedSchema - if not self.constraint: - raise UnboundedSchema - return (self.OPENBYTES("immutable-set") + - self.maxLength * self.constraint.maxSize(seen)) - - def maxDepth(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - if not self.constraint: - raise UnboundedSchema - seen.append(self) - return 1 + self.constraint.maxDepth(seen) diff --git a/src/foolscap/foolscap/slicers/tuple.py b/src/foolscap/foolscap/slicers/tuple.py deleted file mode 100644 index 3c85aefe..00000000 --- a/src/foolscap/foolscap/slicers/tuple.py +++ /dev/null @@ -1,155 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from twisted.internet.defer import Deferred -from foolscap.tokens import Violation -from foolscap.slicer import BaseUnslicer -from foolscap.slicers.list import ListSlicer -from foolscap.constraint 
import OpenerConstraint, Any, UnboundedSchema, IConstraint -from foolscap.util import AsyncAND - - -class TupleSlicer(ListSlicer): - opentype = ("tuple",) - slices = tuple - -class TupleUnslicer(BaseUnslicer): - opentype = ("tuple",) - - debug = False - constraints = None - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, TupleConstraint) - self.constraints = constraint.constraints - - def start(self, count): - self.list = [] - # indices of .list which are unfilled because of children that could - # not yet be referenced - self.num_unreferenceable_children = 0 - self.count = count - if self.debug: - print "%s[%d].start with %s" % (self, self.count, self.list) - self.finished = False - self.deferred = Deferred() - self.protocol.setObject(count, self.deferred) - self._ready_deferreds = [] - - def checkToken(self, typebyte, size): - if self.constraints == None: - return - if len(self.list) >= len(self.constraints): - raise Violation("the tuple is full") - self.constraints[len(self.list)].checkToken(typebyte, size) - - def doOpen(self, opentype): - where = len(self.list) - if self.constraints != None: - if where >= len(self.constraints): - raise Violation("the tuple is full") - self.constraints[where].checkOpentype(opentype) - unslicer = self.open(opentype) - if unslicer: - if self.constraints != None: - unslicer.setConstraint(self.constraints[where]) - return unslicer - - def update(self, obj, index): - if self.debug: - print "%s[%d].update: [%d]=%s" % (self, self.count, index, obj) - self.list[index] = obj - self.num_unreferenceable_children -= 1 - if self.finished: - self.checkComplete() - return obj - - def receiveChild(self, obj, ready_deferred=None): - if ready_deferred: - self._ready_deferreds.append(ready_deferred) - if isinstance(obj, Deferred): - obj.addCallback(self.update, len(self.list)) - obj.addErrback(self.explode) - self.num_unreferenceable_children += 1 - self.list.append("placeholder") 
- else: - self.list.append(obj) - - def checkComplete(self): - if self.debug: - print "%s[%d].checkComplete: %d pending" % \ - (self, self.count, self.num_unreferenceable_children) - if self.num_unreferenceable_children: - # not finished yet, we'll fire our Deferred when we are - if self.debug: - print " not finished yet" - return - - # list is now complete. We can finish. - return self.complete() - - def complete(self): - ready_deferred = None - if self._ready_deferreds: - ready_deferred = AsyncAND(self._ready_deferreds) - - t = tuple(self.list) - if self.debug: - print " finished! tuple:%s{%s}" % (t, id(t)) - self.protocol.setObject(self.count, t) - self.deferred.callback(t) - return t, ready_deferred - - def receiveClose(self): - if self.debug: - print "%s[%d].receiveClose" % (self, self.count) - self.finished = 1 - - if self.num_unreferenceable_children: - # not finished yet, we'll fire our Deferred when we are - if self.debug: - print " not finished yet" - ready_deferred = None - if self._ready_deferreds: - ready_deferred = AsyncAND(self._ready_deferreds) - return self.deferred, ready_deferred - - # the list is already complete - return self.complete() - - def describe(self): - return "[%d]" % len(self.list) - - -class TupleConstraint(OpenerConstraint): - opentypes = [("tuple",)] - name = "TupleConstraint" - - def __init__(self, *elemConstraints): - self.constraints = [IConstraint(e) for e in elemConstraints] - def checkObject(self, obj, inbound): - if not isinstance(obj, tuple): - raise Violation("not a tuple") - if len(obj) != len(self.constraints): - raise Violation("wrong size tuple") - for i in range(len(self.constraints)): - self.constraints[i].checkObject(obj[i], inbound) - def maxSize(self, seen=None): - if not seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - total = self.OPENBYTES("tuple") - for c in self.constraints: - total += c.maxSize(seen[:]) - return total - - def maxDepth(self, seen=None): - if not 
seen: seen = [] - if self in seen: - raise UnboundedSchema # recursion - seen.append(self) - return 1 + reduce(max, [c.maxDepth(seen[:]) - for c in self.constraints]) - diff --git a/src/foolscap/foolscap/slicers/unicode.py b/src/foolscap/foolscap/slicers/unicode.py deleted file mode 100644 index 94d79e3f..00000000 --- a/src/foolscap/foolscap/slicers/unicode.py +++ /dev/null @@ -1,93 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -import re -from twisted.internet.defer import Deferred -from foolscap.tokens import BananaError, STRING, VOCAB, Violation -from foolscap.slicer import BaseSlicer, LeafUnslicer -from foolscap.constraint import OpenerConstraint, Any, UnboundedSchema - -class UnicodeSlicer(BaseSlicer): - opentype = ("unicode",) - slices = unicode - def sliceBody(self, streamable, banana): - yield self.obj.encode("UTF-8") - -class UnicodeUnslicer(LeafUnslicer): - # accept a UTF-8 encoded string - opentype = ("unicode",) - string = None - constraint = None - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, UnicodeConstraint) - self.constraint = constraint - - def checkToken(self, typebyte, size): - if typebyte not in (STRING, VOCAB): - raise BananaError("UnicodeUnslicer only accepts strings") - #if self.constraint: - # self.constraint.checkToken(typebyte, size) - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.string != None: - raise BananaError("already received a string") - self.string = unicode(obj, "UTF-8") - - def receiveClose(self): - return self.string, None - def describe(self): - return "" - -class UnicodeConstraint(OpenerConstraint): - """The object must be a unicode object. 
The maxLength and minLength - parameters restrict the number of characters (code points, *not* bytes) - that may be present in the object, which means that the on-wire (UTF-8) - representation may take up to 6 times as many bytes as characters. - """ - - strictTaster = True - opentypes = [("unicode",)] - name = "UnicodeConstraint" - - def __init__(self, maxLength=1000, minLength=0, regexp=None): - self.maxLength = maxLength - self.minLength = minLength - # allow VOCAB in case the Banana-level tokenizer decides to tokenize - # the UTF-8 encoded body of a unicode object, since this is just as - # likely as tokenizing regular bytestrings. TODO: this is disabled - # because it doesn't currently work.. once I remember how Constraints - # work, I'll fix this. The current version is too permissive of - # tokens. - #self.taster = {STRING: 6*self.maxLength, - # VOCAB: None} - # regexp can either be a string or a compiled SRE_Match object.. - # re.compile appears to notice SRE_Match objects and pass them - # through unchanged. 
- self.regexp = None - if regexp: - self.regexp = re.compile(regexp) - - def checkObject(self, obj, inbound): - if not isinstance(obj, unicode): - raise Violation("not a String") - if self.maxLength != None and len(obj) > self.maxLength: - raise Violation("string too long (%d > %d)" % - (len(obj), self.maxLength)) - if len(obj) < self.minLength: - raise Violation("string too short (%d < %d)" % - (len(obj), self.minLength)) - if self.regexp: - if not self.regexp.search(obj): - raise Violation("regexp failed to match") - - def maxSize(self, seen=None): - if self.maxLength == None: - raise UnboundedSchema - return self.OPENBYTES("unicode") + self.maxLength * 6 - - def maxDepth(self, seen=None): - return 1+1 diff --git a/src/foolscap/foolscap/slicers/vocab.py b/src/foolscap/foolscap/slicers/vocab.py deleted file mode 100644 index e4d2a44c..00000000 --- a/src/foolscap/foolscap/slicers/vocab.py +++ /dev/null @@ -1,184 +0,0 @@ -# -*- test-case-name: foolscap.test.test_banana -*- - -from twisted.internet.defer import Deferred -from foolscap.constraint import Any, ByteStringConstraint -from foolscap.tokens import Violation, BananaError, INT, STRING -from foolscap.slicer import BaseSlicer, BaseUnslicer, LeafUnslicer -from foolscap.slicer import BananaUnslicerRegistry - -class ReplaceVocabularyTable: - pass - -class AddToVocabularyTable: - pass - -class ReplaceVocabSlicer(BaseSlicer): - # this works somewhat like a dictionary - opentype = ('set-vocab',) - trackReferences = False - - def slice(self, streamable, banana): - # we need to implement slice() (instead of merely sliceBody) so we - # can get control at the beginning and end of serialization. It also - # gives us access to the Banana protocol object, so we can manipulate - # their outgoingVocabulary table. - self.streamable = streamable - self.start(banana) - for o in self.opentype: - yield o - # the vocabDict maps strings to index numbers. The far end needs the - # opposite mapping, from index numbers to strings. 
We perform the - # flip here at the sending end. - stringToIndex = self.obj - indexToString = dict([(stringToIndex[s],s) for s in stringToIndex]) - assert len(stringToIndex) == len(indexToString) # catch duplicates - indices = indexToString.keys() - indices.sort() - for index in indices: - string = indexToString[index] - yield index - yield string - self.finish(banana) - - def start(self, banana): - # this marks the transition point between the old vocabulary dict and - # the new one, so now is the time we should empty the dict. - banana.outgoingVocabTableWasReplaced({}) - - def finish(self, banana): - # now we replace the vocab dict - banana.outgoingVocabTableWasReplaced(self.obj) - -class ReplaceVocabUnslicer(LeafUnslicer): - """Much like DictUnslicer, but keys must be numbers, and values must be - strings. This is used to set the entire vocab table at once. To add - individual tokens, use AddVocabUnslicer by sending an (add-vocab num - string) sequence.""" - opentype = ('set-vocab',) - unslicerRegistry = BananaUnslicerRegistry - maxKeys = None - valueConstraint = ByteStringConstraint(100) - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, ByteStringConstraint) - self.valueConstraint = constraint - - def start(self, count): - self.d = {} - self.key = None - - def checkToken(self, typebyte, size): - if self.maxKeys is not None and len(self.d) >= self.maxKeys: - raise Violation("the table is full") - if self.key is None: - if typebyte != INT: - raise BananaError("VocabUnslicer only accepts INT keys") - else: - if typebyte != STRING: - raise BananaError("VocabUnslicer only accepts STRING values") - if self.valueConstraint: - self.valueConstraint.checkToken(typebyte, size) - - def receiveChild(self, token, ready_deferred=None): - assert not isinstance(token, Deferred) - assert ready_deferred is None - if self.key is None: - if self.d.has_key(token): - raise BananaError("duplicate key '%s'" % token) - 
self.key = token - else: - self.d[self.key] = token - self.key = None - - def receiveClose(self): - if self.key is not None: - raise BananaError("sequence ended early: got key but not value") - # now is the time we replace our protocol's vocab table - self.protocol.replaceIncomingVocabulary(self.d) - return ReplaceVocabularyTable, None - - def describe(self): - if self.key is not None: - return "[%s]" % self.key - else: - return "" - - -class AddVocabSlicer(BaseSlicer): - opentype = ('add-vocab',) - trackReferences = False - - def __init__(self, value): - assert isinstance(value, str) - self.value = value - - def slice(self, streamable, banana): - # we need to implement slice() (instead of merely sliceBody) so we - # can get control at the beginning and end of serialization. It also - # gives us access to the Banana protocol object, so we can manipulate - # their outgoingVocabulary table. - self.streamable = streamable - self.start(banana) - for o in self.opentype: - yield o - yield self.index - yield self.value - self.finish(banana) - - def start(self, banana): - # this marks the transition point between the old vocabulary dict and - # the new one, so now is the time we should decide upon the key. It - # is important that we *do not* add it to the dict yet, otherwise - # we'll send (add-vocab NN [VOCAB#NN]), which is kind of pointless. 
- index = banana.allocateEntryInOutgoingVocabTable(self.value) - self.index = index - - def finish(self, banana): - banana.outgoingVocabTableWasAmended(self.index, self.value) - -class AddVocabUnslicer(BaseUnslicer): - # (add-vocab num string): self.vocab[num] = string - opentype = ('add-vocab',) - unslicerRegistry = BananaUnslicerRegistry - index = None - value = None - valueConstraint = ByteStringConstraint(100) - - def setConstraint(self, constraint): - if isinstance(constraint, Any): - return - assert isinstance(constraint, ByteStringConstraint) - self.valueConstraint = constraint - - def checkToken(self, typebyte, size): - if self.index is None: - if typebyte != INT: - raise BananaError("Vocab key must be an INT") - elif self.value is None: - if typebyte != STRING: - raise BananaError("Vocab value must be a STRING") - if self.valueConstraint: - self.valueConstraint.checkToken(typebyte, size) - else: - raise Violation("add-vocab only accepts two values") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.index is None: - self.index = obj - else: - self.value = obj - - def receiveClose(self): - if self.index is None or self.value is None: - raise BananaError("sequence ended too early") - self.protocol.addIncomingVocabulary(self.index, self.value) - return AddToVocabularyTable, None - - def describe(self): - if self.index is not None: - return "[%d]" % self.index - return "" diff --git a/src/foolscap/foolscap/sslverify.py b/src/foolscap/foolscap/sslverify.py deleted file mode 100644 index 2cb7e7d3..00000000 --- a/src/foolscap/foolscap/sslverify.py +++ /dev/null @@ -1,674 +0,0 @@ -# Copyright 2005 Divmod, Inc. 
See LICENSE file for details - -import itertools, md5 -from OpenSSL import SSL, crypto - -from twisted.python import reflect -from twisted.internet.defer import Deferred - -# Private - shared between all ServerContextFactories, counts up to -# provide a unique session id for each context -_sessionCounter = itertools.count().next - -class _SSLApplicationData(object): - def __init__(self): - self.problems = [] - -class VerifyError(Exception): - """Could not verify something that was supposed to be signed. - """ - -class PeerVerifyError(VerifyError): - """The peer rejected our verify error. - """ - -class OpenSSLVerifyError(VerifyError): - - _errorCodes = {0: ('X509_V_OK', 'ok', 'the operation was successful. >'), - 2: ('X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT', - 'unable to get issuer certificate', - "the issuer certificate could not be found', 'this occurs if the issuer certificate of an untrusted certificate cannot be found."), - 3: ('X509_V_ERR_UNABLE_TO_GET_CRL', - ' unable to get certificate CRL', - 'the CRL of a certificate could not be found. Unused.'), - 4: ('X509_V_ERR_UNABLE_TO_DECRYPT_CERT_SIGNATURE', - "unable to decrypt certificate's signature", - 'the certificate signature could not be decrypted. This means that the actual signature value could not be determined rather than it not match- ing the expected value, this is only meaningful for RSA keys.'), - 5: ('X509_V_ERR_UNABLE_TO_DECRYPT_CRL_SIGNATURE', - "unable to decrypt CRL's signature", - "the CRL signature could not be decrypted', 'this means that the actual signature value could not be determined rather than it not matching the expected value. 
Unused."), - 6: ('X509_V_ERR_UNABLE_TO_DECODE_ISSUER_PUBLIC_KEY', - 'unable to decode issuer', - 'public key the public key in the certificate SubjectPublicKeyInfo could not be read.'), - 7: ('X509_V_ERR_CERT_SIGNATURE_FAILURE', - 'certificate signature failure', - 'the signature of the certificate is invalid.'), - 8: ('X509_V_ERR_CRL_SIGNATURE_FAILURE', - 'CRL signature failure', - 'the signature of the certificate is invalid. Unused.'), - 9: ('X509_V_ERR_CERT_NOT_YET_VALID', - 'certificate is not yet valid', - "the certificate is not yet valid', 'the notBefore date is after the cur- rent time."), - 10: ('X509_V_ERR_CERT_HAS_EXPIRED', - 'certificate has expired', - "the certificate has expired', 'that is the notAfter date is before the current time."), - 11: ('X509_V_ERR_CRL_NOT_YET_VALID', - 'CRL is not yet valid', - 'the CRL is not yet valid. Unused.'), - 12: ('X509_V_ERR_CRL_HAS_EXPIRED', - 'CRL has expired', - 'the CRL has expired. Unused.'), - 13: ('X509_V_ERR_ERROR_IN_CERT_NOT_BEFORE_FIELD', - "format error in certificate's", - 'notBefore field the certificate notBefore field contains an invalid time.'), - 14: ('X509_V_ERR_ERROR_IN_CERT_NOT_AFTER_FIELD', - "format error in certificate's", - 'notAfter field the certificate notAfter field contains an invalid time.'), - 15: ('X509_V_ERR_ERROR_IN_CRL_LAST_UPDATE_FIELD', - "format error in CRL's lastUpdate field", - 'the CRL lastUpdate field contains an invalid time. Unused.'), - 16: ('X509_V_ERR_ERROR_IN_CRL_NEXT_UPDATE_FIELD', - "format error in CRL's nextUpdate field", - 'the CRL nextUpdate field contains an invalid time. Unused.'), - 17: ('X509_V_ERR_OUT_OF_MEM', - 'out of memory', - 'an error occurred trying to allocate memory. 
This should never happen.'), - 18: ('X509_V_ERR_DEPTH_ZERO_SELF_SIGNED_CERT', - 'self signed certificate', - 'the passed certificate is self signed and the same certificate cannot be found in the list of trusted certificates.'), - 19: ('X509_V_ERR_SELF_SIGNED_CERT_IN_CHAIN', - 'self signed certificate in certificate chain', - 'the certificate chain could be built up using the untrusted certificates but the root could not be found locally.'), - 20: ('X509_V_ERR_UNABLE_TO_GET_ISSUER_CERT_LOCALLY', - 'unable to get local issuer', - 'certificate the issuer certificate of a locally looked up certificate could not be found. This normally means the list of trusted certificates is not com- plete.'), - 21: ('X509_V_ERR_UNABLE_TO_VERIFY_LEAF_SIGNATURE', - 'unable to verify the first', - 'certificate no signatures could be verified because the chain contains only one cer- tificate and it is not self signed.'), - 22: ('X509_V_ERR_CERT_CHAIN_TOO_LONG', - 'certificate chain too long', - 'the certificate chain length is greater than the supplied maximum depth. Unused.'), - 23: ('X509_V_ERR_CERT_REVOKED', - 'certificate revoked', - 'the certificate has been revoked. Unused.'), - 24: ('X509_V_ERR_INVALID_CA', - 'invalid CA certificate', - 'a CA certificate is invalid. 
Either it is not a CA or its extensions are not consistent with the supplied purpose.'), - 25: ('X509_V_ERR_PATH_LENGTH_EXCEEDED', - 'path length constraint exceeded', - 'the basicConstraints pathlength parameter has been exceeded.'), - 26: ('X509_V_ERR_INVALID_PURPOSE', - 'unsupported certificate purpose', - 'the supplied certificate cannot be used for the specified purpose.'), - 27: ('X509_V_ERR_CERT_UNTRUSTED', - 'certificate not trusted', - 'the root CA is not marked as trusted for the specified purpose.'), - 28: ('X509_V_ERR_CERT_REJECTED', - 'certificate rejected', - 'the root CA is marked to reject the specified purpose.'), - 29: ('X509_V_ERR_SUBJECT_ISSUER_MISMATCH', - 'subject issuer mismatch', - 'the current candidate issuer certificate was rejected because its sub- ject name did not match the issuer name of the current certificate. Only displayed when the -issuer_checks option is set.'), - 30: ('X509_V_ERR_AKID_SKID_MISMATCH', - 'authority and subject key identifier mismatch', - 'the current candidate issuer certificate was rejected because its sub- ject key identifier was present and did not match the authority key identifier current certificate. Only displayed when the -issuer_checks option is set.'), - 31: ('X509_V_ERR_AKID_ISSUER_SERIAL_MISMATCH', - 'authority and issuer serial number mismatch', - 'the current candidate issuer certificate was rejected because its issuer name and serial number was present and did not match the authority key identifier of the current certificate. Only displayed when the -issuer_checks option is set.'), - 32: ('X509_V_ERR_KEYUSAGE_NO_CERTSIGN', - 'key usage does not include certificate', - 'signing the current candidate issuer certificate was rejected because its keyUsage extension does not permit certificate signing.'), - 50: ('X509_V_ERR_APPLICATION_VERIFICATION', - 'application verification failure', - 'an application specific error. 
Unused.')} - - - def __init__(self, cert, errno, depth): - VerifyError.__init__(self, cert, errno, depth) - self.cert = cert - self.errno = errno - self.depth = depth - - def __repr__(self): - x = self._errorCodes.get(self.errno) - if x is not None: - name, short, long = x - return 'Peer Certificate Verification Failed: %s (error code: %d)' % ( - long, self.errno - ) - - __str__ = __repr__ - -_x509namecrap = [ - ['CN', 'commonName'], - ['O', 'organizationName'], - ['OU', 'organizationalUnitName'], - ['L', 'localityName'], - ['ST', 'stateOrProvinceName'], - ['C', 'countryName'], - [ 'emailAddress']] - -_x509names = {} - - -for abbrevs in _x509namecrap: - for abbrev in abbrevs: - _x509names[abbrev] = abbrevs[0] - -class DistinguishedName(dict): - __slots__ = () - def __init__(self, **kw): - for k, v in kw.iteritems(): - setattr(self, k, v) - - def _copyFrom(self, x509name): - d = {} - for name in _x509names: - value = getattr(x509name, name, None) - if value is not None: - setattr(self, name, value) - - def _copyInto(self, x509name): - for k, v in self.iteritems(): - setattr(x509name, k, v) - - def __repr__(self): - return '<DN %s>' % (dict.__repr__(self)[1:-1]) - - def __getattr__(self, attr): - return self[_x509names[attr]] - - def __setattr__(self, attr, value): - assert type(attr) is str - if not attr in _x509names: - raise AttributeError("%s is not a valid OpenSSL X509 name field" % (attr,)) - realAttr = _x509names[attr] - value = value.encode('ascii') - assert type(value) is str - self[realAttr] = value - - def inspect(self): - l = [] - from formless.annotate import nameToLabel - lablen = 0 - for kp in _x509namecrap: - k = kp[-1] - label = nameToLabel(k) - lablen = max(len(label), lablen) - l.append((label, getattr(self, k))) - lablen += 2 - for n, (label, attr) in enumerate(l): - l[n] = (label.rjust(lablen)+': '+ attr) - return '\n'.join(l) - -DN = DistinguishedName - -class CertBase: - def __init__(self, original): - self.original = original - - def _copyName(self,
suffix): - dn = DistinguishedName() - dn._copyFrom(getattr(self.original, 'get_'+suffix)()) - return dn - - def getSubject(self): - return self._copyName('subject') - - - -def problemsFromTransport(tpt): - """Return a list of L{OpenSSLVerifyError}s given a Twisted transport object. - """ - return tpt.getHandle().get_context().get_app_data().problems - -class Certificate(CertBase): - def __repr__(self): - return '<%s Subject=%s Issuer=%s>' % (self.__class__.__name__, - self.getSubject().commonName, - self.getIssuer().commonName) - - def __eq__(self, other): - if isinstance(other, Certificate): - return self.dump() == other.dump() - return False - - def __ne__(self, other): - return not self.__eq__(other) - - def load(Class, requestData, format=crypto.FILETYPE_ASN1, args=()): - return Class(crypto.load_certificate(format, requestData), *args) - load = classmethod(load) - _load = load - - def dumpPEM(self): - """Dump both public and private parts of a private certificate to PEM-format - data - """ - return self.dump(crypto.FILETYPE_PEM) - - def loadPEM(Class, data): - """Load both private and public parts of a private certificate from a chunk of - PEM-format data. 
- """ - return Class.load(data, crypto.FILETYPE_PEM) - loadPEM = classmethod(loadPEM) - - def peerFromTransport(Class, transport): - return Class(transport.getHandle().get_peer_certificate()) - peerFromTransport = classmethod(peerFromTransport) - - def hostFromTransport(Class, transport): - return Class(transport.getHandle().get_host_certificate()) - hostFromTransport = classmethod(hostFromTransport) - - def getPublicKey(self): - return PublicKey(self.original.get_pubkey()) - - def dump(self, format=crypto.FILETYPE_ASN1): - return crypto.dump_certificate(format, self.original) - - def serialNumber(self): - return self.original.get_serial_number() - - def digest(self, method='md5'): - return self.original.digest(method) - - def _inspect(self): - return '\n'.join(['Certificate For Subject:', - self.getSubject().inspect(), - '\nIssuer:', - self.getIssuer().inspect(), - '\nSerial Number: %d' % self.serialNumber(), - 'Digest: %s' % self.digest()]) - - def inspect(self): - return '\n'.join(self._inspect(), self.getPublicKey().inspect()) - - def getIssuer(self): - return self._copyName('issuer') - - def options(self, *authorities): - raise NotImplementedError('Possible, but doubtful we need this yet') - -class CertificateRequest(CertBase): - def load(Class, requestData, requestFormat=crypto.FILETYPE_ASN1): - req = crypto.load_certificate_request(requestFormat, requestData) - dn = DistinguishedName() - dn._copyFrom(req.get_subject()) - if not req.verify(req.get_pubkey()): - raise VerifyError("Can't verify that request for %r is self-signed." 
% (dn,)) - return Class(req) - load = classmethod(load) - - def dump(self, format=crypto.FILETYPE_ASN1): - return crypto.dump_certificate_request(format, self.original) - -class PrivateCertificate(Certificate): - def __repr__(self): - return Certificate.__repr__(self) + ' with ' + repr(self.privateKey) - - def _setPrivateKey(self, privateKey): - if not privateKey.matches(self.getPublicKey()): - raise VerifyError( - "Sanity check failed: Your certificate was not properly signed.") - self.privateKey = privateKey - return self - - def newCertificate(self, newCertData, format=crypto.FILETYPE_ASN1): - return self.load(newCertData, self.privateKey, format) - - def load(Class, data, privateKey, format=crypto.FILETYPE_ASN1): - return Class._load(data, format)._setPrivateKey(privateKey) - load = classmethod(load) - - def inspect(self): - return '\n'.join([Certificate._inspect(self), - self.privateKey.inspect()]) - - def dumpPEM(self): - """Dump both public and private parts of a private certificate to PEM-format - data - """ - return self.dump(crypto.FILETYPE_PEM) + self.privateKey.dump(crypto.FILETYPE_PEM) - - def loadPEM(Class, data): - """Load both private and public parts of a private certificate from a chunk of - PEM-format data. 
- """ - return Class.load(data, KeyPair.load(data, crypto.FILETYPE_PEM), - crypto.FILETYPE_PEM) - - loadPEM = classmethod(loadPEM) - - def fromCertificateAndKeyPair(Class, certificateInstance, privateKey): - privcert = Class(certificateInstance.original) - return privcert._setPrivateKey(privateKey) - fromCertificateAndKeyPair = classmethod(fromCertificateAndKeyPair) - - def options(self, *authorities): - options = dict(privateKey=self.privateKey.original, - certificate=self.original) - if authorities: - options.update(dict(verify=True, - requireCertificate=True, - caCerts=[auth.original for auth in authorities])) - return OpenSSLCertificateOptions(**options) - - def certificateRequest(self, format=crypto.FILETYPE_ASN1, - digestAlgorithm='md5'): - return self.privateKey.certificateRequest( - self.getSubject(), - format, - digestAlgorithm) - - def signCertificateRequest(self, - requestData, - verifyDNCallback, - serialNumber, - requestFormat=crypto.FILETYPE_ASN1, - certificateFormat=crypto.FILETYPE_ASN1): - issuer = self.getSubject() - return self.privateKey.signCertificateRequest( - issuer, - requestData, - verifyDNCallback, - serialNumber, - requestFormat, - certificateFormat) - - - def signRequestObject(self, certificateRequest, serialNumber, - secondsToExpiry=60 * 60 * 24 * 365, # One year - digestAlgorithm='md5'): - return self.privateKey.signRequestObject(self.getSubject(), - certificateRequest, - serialNumber, - secondsToExpiry, - digestAlgorithm) - -class PublicKey: - def __init__(self, osslpkey): - self.original = osslpkey - req1 = crypto.X509Req() - req1.set_pubkey(osslpkey) - self._emptyReq = crypto.dump_certificate_request(crypto.FILETYPE_ASN1, req1) - - def matches(self, otherKey): - return self._emptyReq == otherKey._emptyReq - -# O OG OMG OMFG PYOPENSSL SUCKS SO BAD -# def verifyCertificate(self, certificate): -# """returns None, or raises a VerifyError exception if the certificate could not -# be verified. 
-# """ -# if not certificate.original.verify(self.original): -# raise VerifyError("We didn't sign that certificate.") - - def __repr__(self): - return '<%s %s>' % (self.__class__.__name__, self.keyHash()) - - def keyHash(self): - """MD5 hex digest of signature on an empty certificate request with this key. - """ - return md5.md5(self._emptyReq).hexdigest() - - def inspect(self): - return 'Public Key with Hash: %s' % (self.keyHash(),) - - -class KeyPair(PublicKey): - - def load(Class, data, format=crypto.FILETYPE_ASN1): - return Class(crypto.load_privatekey(format, data)) - load = classmethod(load) - - def dump(self, format=crypto.FILETYPE_ASN1): - return crypto.dump_privatekey(format, self.original) - - def __getstate__(self): - return self.dump() - - def __setstate__(self, state): - self.__init__(crypto.load_privatekey(crypto.FILETYPE_ASN1, state)) - - def inspect(self): - t = self.original.type() - if t == crypto.TYPE_RSA: - ts = 'RSA' - elif t == crypto.TYPE_DSA: - ts = 'DSA' - else: - ts = '(Unknown Type!)' - L = (self.original.bits(), ts, self.keyHash()) - return '%s-bit %s Key Pair with Hash: %s' % L - - def generate(Class, kind=crypto.TYPE_RSA, size=1024): - pkey = crypto.PKey() - pkey.generate_key(kind, size) - return Class(pkey) - - def newCertificate(self, newCertData, format=crypto.FILETYPE_ASN1): - return PrivateCertificate.load(newCertData, self, format) - - generate = classmethod(generate) - - def requestObject(self, distinguishedName, digestAlgorithm='md5'): - req = crypto.X509Req() - req.set_pubkey(self.original) - distinguishedName._copyInto(req.get_subject()) - req.sign(self.original, digestAlgorithm) - return CertificateRequest(req) - - def certificateRequest(self, distinguishedName, - format=crypto.FILETYPE_ASN1, - digestAlgorithm='md5'): - """Create a certificate request signed with this key. - - @return: a string, formatted according to the 'format' argument. 
- """ - return self.requestObject(distinguishedName, digestAlgorithm).dump(format) - - - - def signCertificateRequest(self, - issuerDistinguishedName, - requestData, - verifyDNCallback, - serialNumber, - requestFormat=crypto.FILETYPE_ASN1, - certificateFormat=crypto.FILETYPE_ASN1, - secondsToExpiry=60 * 60 * 24 * 365, # One year - digestAlgorithm='md5'): - """Given a blob of certificate request data and a certificate authority's - DistinguishedName, return a blob of signed certificate data. - - If verifyDNCallback returns a Deferred, I will return a Deferred which - fires the data when that Deferred has completed. - """ - hlreq = CertificateRequest.load(requestData, requestFormat) - - dn = hlreq.getSubject() - vval = verifyDNCallback(dn) - - def verified(value): - if not value: - raise VerifyError("DN callback %r rejected request DN %r" % (verifyDNCallback, dn)) - return self.signRequestObject(issuerDistinguishedName, hlreq, - serialNumber, secondsToExpiry, digestAlgorithm).dump(certificateFormat) - - if isinstance(vval, Deferred): - return vval.addCallback(verified) - else: - return verified(vval) - - - def signRequestObject(self, - issuerDistinguishedName, - requestObject, - serialNumber, - secondsToExpiry=60 * 60 * 24 * 365, # One year - digestAlgorithm='md5'): - """ - Sign a CertificateRequest instance, returning a Certificate instance. 
- """ - req = requestObject.original - dn = requestObject.getSubject() - cert = crypto.X509() - issuerDistinguishedName._copyInto(cert.get_issuer()) - cert.set_subject(req.get_subject()) - cert.set_pubkey(req.get_pubkey()) - cert.gmtime_adj_notBefore(0) - cert.gmtime_adj_notAfter(secondsToExpiry) - cert.set_serial_number(serialNumber) - cert.sign(self.original, digestAlgorithm) - return Certificate(cert) - - def selfSignedCert(self, serialNumber, **kw): - dn = DN(**kw) - return PrivateCertificate.fromCertificateAndKeyPair( - self.signRequestObject(dn, self.requestObject(dn), serialNumber), - self) - - -class OpenSSLCertificateOptions(object): - """A factory for SSL context objects, for server SSL connections. - """ - - _context = None - # Older versions of PyOpenSSL didn't provide OP_ALL. Fudge it here, just in case. - _OP_ALL = getattr(SSL, 'OP_ALL', 0x0000FFFF) - - method = SSL.TLSv1_METHOD - - def __init__(self, - privateKey=None, - certificate=None, - method=None, - verify=False, - caCerts=None, - verifyDepth=9, - requireCertificate=True, - verifyOnce=True, - enableSingleUseKeys=True, - enableSessions=True, - fixBrokenPeers=False): - """ - Create an OpenSSL context SSL connection context factory. - - @param privateKey: A PKey object holding the private key. - - @param certificate: An X509 object holding the certificate. - - @param method: The SSL protocol to use, one of SSLv23_METHOD, - SSLv2_METHOD, SSLv3_METHOD, TLSv1_METHOD. Defaults to TLSv1_METHOD. - - @param verify: If True, verify certificates received from the peer and - fail the handshake if verification fails. Otherwise, allow anonymous - sessions and sessions with certificates which fail validation. By - default this is False. - - @param caCerts: List of certificate authority certificates to - send to the client when requesting a certificate. Only used if verify - is True, and if verify is True, either this must be specified or - caCertsFile must be given. 
Since verify is False by default, - this is None by default. - - @param verifyDepth: Depth in certificate chain down to which to verify. - If unspecified, use the underlying default (9). - - @param requireCertificate: If True, do not allow anonymous sessions. - - @param verifyOnce: If True, do not re-verify the certificate - on session resumption. - - @param enableSingleUseKeys: If True, generate a new key whenever - ephemeral DH parameters are used to prevent small subgroup attacks. - - @param enableSessions: If True, set a session ID on each context. This - allows a shortened handshake to be used when a known client reconnects. - - @param fixBrokenPeers: If True, enable various non-spec protocol fixes - for broken SSL implementations. This should be entirely safe, - according to the OpenSSL documentation, but YMMV. This option is now - off by default, because it causes problems with connections between - peers using OpenSSL 0.9.8a. - """ - - assert (privateKey is None) == (certificate is None), "Specify neither or both of privateKey and certificate" - self.privateKey = privateKey - self.certificate = certificate - if method is not None: - self.method = method - - self.verify = verify - assert ((verify and caCerts) or - (not verify)), "Specify client CA certificate information if and only if enabling certificate verification" - - self.caCerts = caCerts - self.verifyDepth = verifyDepth - self.requireCertificate = requireCertificate - self.verifyOnce = verifyOnce - self.enableSingleUseKeys = enableSingleUseKeys - self.enableSessions = enableSessions - self.fixBrokenPeers = fixBrokenPeers - - def __getstate__(self): - d = super(OpenSSLCertificateOptions, self).__getstate__() - try: - del d['context'] - except KeyError: - pass - return d - - def getContext(self): - """Return a SSL.Context object. 
- """ - if self._context is None: - self._context = self._makeContext() - return self._context - - def _makeContext(self): - ctx = SSL.Context(self.method) - ctx.set_app_data(_SSLApplicationData()) - - if self.certificate is not None and self.privateKey is not None: - ctx.use_certificate(self.certificate) - ctx.use_privatekey(self.privateKey) - # Sanity check - ctx.check_privatekey() - - verifyFlags = SSL.VERIFY_NONE - if self.verify: - verifyFlags = SSL.VERIFY_PEER - if self.requireCertificate: - verifyFlags |= SSL.VERIFY_FAIL_IF_NO_PEER_CERT - if self.verifyOnce: - verifyFlags |= SSL.VERIFY_CLIENT_ONCE - if self.caCerts: - store = ctx.get_cert_store() - for cert in self.caCerts: - store.add_cert(cert) - - def _trackVerificationProblems(conn,cert,errno,depth,preverify_ok): - # retcode is the answer OpenSSL's default verifier would have - # given, had we allowed it to run. - if not preverify_ok: - ctx.get_app_data().problems.append(OpenSSLVerifyError(cert, errno, depth)) - return preverify_ok - ctx.set_verify(verifyFlags, _trackVerificationProblems) - - if self.verifyDepth is not None: - ctx.set_verify_depth(self.verifyDepth) - - if self.enableSingleUseKeys: - ctx.set_options(SSL.OP_SINGLE_DH_USE) - - if self.fixBrokenPeers: - ctx.set_options(self._OP_ALL) - - if self.enableSessions: - sessionName = md5.md5("%s-%d" % (reflect.qual(self.__class__), _sessionCounter())).hexdigest() - ctx.set_session_id(sessionName) - - return ctx diff --git a/src/foolscap/foolscap/storage.py b/src/foolscap/foolscap/storage.py deleted file mode 100644 index 02186856..00000000 --- a/src/foolscap/foolscap/storage.py +++ /dev/null @@ -1,419 +0,0 @@ - -""" -storage.py: support for using Banana as if it were pickle - -This includes functions for serializing to and from strings, instead of a -network socket. It also has support for serializing 'unsafe' objects, -specifically classes, modules, functions, and instances of arbitrary classes. 
-These are 'unsafe' because to recreate the object on the deserializing end, -we must be willing to execute code of the sender's choosing (i.e. the -constructor of whatever package.module.class names they send us). It is -unwise to do this unless you are willing to allow your internal state to be -compromised by the author of the serialized data you're unpacking. - -This functionality is isolated here because it is never used for data coming -over network connections. -""" - -from cStringIO import StringIO -import types -from new import instance, instancemethod -from pickle import whichmodule # used by FunctionSlicer - -from foolscap import slicer, banana, tokens -from foolscap.tokens import BananaError -from twisted.internet.defer import Deferred -from twisted.python import reflect -from foolscap.slicers.dict import OrderedDictSlicer -from foolscap.slicers.root import RootSlicer, RootUnslicer - - -################## Slicers for "unsafe" things - -# Extended types, not generally safe. The UnsafeRootSlicer checks for these -# with a separate table. - -def getInstanceState(inst): - """Utility function to default to 'normal' state rules in serialization. 
- """ - if hasattr(inst, "__getstate__"): - state = inst.__getstate__() - else: - state = inst.__dict__ - return state - -class InstanceSlicer(OrderedDictSlicer): - opentype = ('instance',) - trackReferences = True - - def sliceBody(self, streamable, banana): - yield reflect.qual(self.obj.__class__) # really a second index token - self.obj = getInstanceState(self.obj) - for t in OrderedDictSlicer.sliceBody(self, streamable, banana): - yield t - -class ModuleSlicer(slicer.BaseSlicer): - opentype = ('module',) - trackReferences = True - - def sliceBody(self, streamable, banana): - yield self.obj.__name__ - -class ClassSlicer(slicer.BaseSlicer): - opentype = ('class',) - trackReferences = True - - def sliceBody(self, streamable, banana): - yield reflect.qual(self.obj) - -class MethodSlicer(slicer.BaseSlicer): - opentype = ('method',) - trackReferences = True - - def sliceBody(self, streamable, banana): - yield self.obj.im_func.__name__ - yield self.obj.im_self - yield self.obj.im_class - -class FunctionSlicer(slicer.BaseSlicer): - opentype = ('function',) - trackReferences = True - - def sliceBody(self, streamable, banana): - name = self.obj.__name__ - fullname = str(whichmodule(self.obj, self.obj.__name__)) + '.' 
+ name - yield fullname - -UnsafeSlicerTable = {} -UnsafeSlicerTable.update({ - types.InstanceType: InstanceSlicer, - types.ModuleType: ModuleSlicer, - types.ClassType: ClassSlicer, - types.MethodType: MethodSlicer, - types.FunctionType: FunctionSlicer, - #types.TypeType: NewstyleClassSlicer, - # ???: NewstyleInstanceSlicer, # pickle uses obj.__reduce__ to help - # http://docs.python.org/lib/node68.html - }) - - - - -class UnsafeRootSlicer(RootSlicer): - slicerTable = UnsafeSlicerTable - -class StorageRootSlicer(UnsafeRootSlicer): - # some pieces taken from ScopedSlicer - def __init__(self, protocol): - UnsafeRootSlicer.__init__(self, protocol) - self.references = {} - - def registerReference(self, refid, obj): - self.references[id(obj)] = (obj,refid) - - def slicerForObject(self, obj): - # check for an object which was sent previously or has at least - # started sending - obj_refid = self.references.get(id(obj), None) - if obj_refid is not None: - return slicer.ReferenceSlicer(obj_refid[1]) - # otherwise go upstream - return UnsafeRootSlicer.slicerForObject(self, obj) - - -################## Unslicers for "unsafe" things - -def setInstanceState(inst, state): - """Utility function to default to 'normal' state rules in unserialization. - """ - if hasattr(inst, "__setstate__"): - inst.__setstate__(state) - else: - inst.__dict__ = state - return inst - -class Dummy: - def __repr__(self): - return "<Dummy %s>" % self.__dict__ - def __cmp__(self, other): - if not type(other) == type(self): - return -1 - return cmp(self.__dict__, other.__dict__) - -UnsafeUnslicerRegistry = {} - -class InstanceUnslicer(slicer.BaseUnslicer): - # this is an unsafe unslicer: an attacker could induce you to create - # instances of arbitrary classes with arbitrary attributes: VERY - # DANGEROUS! - opentype = ('instance',) - unslicerRegistry = UnsafeUnslicerRegistry - - # danger: instances are mutable containers. If an attribute value is not - # yet available, __dict__ will hold a Deferred until it is.
Other - # objects might be created and use our object before this is fixed. - # TODO: address this. Note that InstanceUnslicers aren't used in PB - # (where we have pb.Referenceable and pb.Copyable which have schema - # constraints and could have different restrictions like not being - # allowed to participate in reference loops). - - def start(self, count): - self.d = {} - self.count = count - self.classname = None - self.attrname = None - self.deferred = Deferred() - self.protocol.setObject(count, self.deferred) - - def checkToken(self, typebyte, size): - if self.classname is None: - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("InstanceUnslicer classname must be string") - elif self.attrname is None: - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("InstanceUnslicer keys must be STRINGs") - - def receiveChild(self, obj, ready_deferred=None): - assert ready_deferred is None - if self.classname is None: - self.classname = obj - self.attrname = None - elif self.attrname is None: - self.attrname = obj - else: - if isinstance(obj, Deferred): - # TODO: this is an artificial restriction, and it might - # be possible to remove it, but I need to think through - # it carefully first - raise BananaError("unreferenceable object in attribute") - if self.d.has_key(self.attrname): - raise BananaError("duplicate attribute name '%s'" % - self.attrname) - self.setAttribute(self.attrname, obj) - self.attrname = None - - def setAttribute(self, name, value): - self.d[name] = value - - def receiveClose(self): - # you could attempt to do some value-checking here, but there would - # probably still be holes - - #obj = Dummy() - klass = reflect.namedObject(self.classname) - assert type(klass) == types.ClassType # TODO: new-style classes - obj = instance(klass, {}) - - setInstanceState(obj, self.d) - - self.protocol.setObject(self.count, obj) - self.deferred.callback(obj) - return obj, None - - def describe(self): - if self.classname is 
None: - return "" - me = "<%s>" % self.classname - if self.attrname is None: - return "%s.attrname??" % me - else: - return "%s.%s" % (me, self.attrname) - -class ModuleUnslicer(slicer.LeafUnslicer): - opentype = ('module',) - unslicerRegistry = UnsafeUnslicerRegistry - - finished = False - - def checkToken(self, typebyte, size): - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("ModuleUnslicer only accepts strings") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.finished: - raise BananaError("ModuleUnslicer only accepts one string") - self.finished = True - # TODO: taste here! - mod = __import__(obj, {}, {}, "x") - self.mod = mod - - def receiveClose(self): - if not self.finished: - raise BananaError("ModuleUnslicer requires a string") - return self.mod, None - -class ClassUnslicer(slicer.LeafUnslicer): - opentype = ('class',) - unslicerRegistry = UnsafeUnslicerRegistry - - finished = False - - def checkToken(self, typebyte, size): - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("ClassUnslicer only accepts strings") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.finished: - raise BananaError("ClassUnslicer only accepts one string") - self.finished = True - # TODO: taste here! 
- self.klass = reflect.namedObject(obj) - - def receiveClose(self): - if not self.finished: - raise BananaError("ClassUnslicer requires a string") - return self.klass, None - -class MethodUnslicer(slicer.BaseUnslicer): - opentype = ('method',) - unslicerRegistry = UnsafeUnslicerRegistry - - state = 0 - im_func = None - im_self = None - im_class = None - - # self.state: - # 0: expecting a string with the method name - # 1: expecting an instance (or None for unbound methods) - # 2: expecting a class - - def checkToken(self, typebyte, size): - if self.state == 0: - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("MethodUnslicer methodname must be a string") - elif self.state == 1: - if typebyte != tokens.OPEN: - raise BananaError("MethodUnslicer instance must be OPEN") - elif self.state == 2: - if typebyte != tokens.OPEN: - raise BananaError("MethodUnslicer class must be an OPEN") - - def doOpen(self, opentype): - # check the opentype - if self.state == 1: - if opentype[0] not in ("instance", "none"): - raise BananaError("MethodUnslicer instance must be " + - "instance or None") - elif self.state == 2: - if opentype[0] != "class": - raise BananaError("MethodUnslicer class must be a class") - unslicer = self.open(opentype) - # TODO: apply constraint - return unslicer - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.state == 0: - self.im_func = obj - self.state = 1 - elif self.state == 1: - assert type(obj) in (types.InstanceType, types.NoneType) - self.im_self = obj - self.state = 2 - elif self.state == 2: - assert type(obj) == types.ClassType # TODO: new-style classes? 
- self.im_class = obj - self.state = 3 - else: - raise BananaError("MethodUnslicer only accepts three objects") - - def receiveClose(self): - if self.state != 3: - raise BananaError("MethodUnslicer requires three objects") - if self.im_self is None: - meth = getattr(self.im_class, self.im_func) - # getattr gives us an unbound method - return meth, None - # TODO: late-available instances - #if isinstance(self.im_self, NotKnown): - # im = _InstanceMethod(self.im_name, self.im_self, self.im_class) - # return im - meth = self.im_class.__dict__[self.im_func] - # whereas __dict__ gives us a function - im = instancemethod(meth, self.im_self, self.im_class) - return im, None - - -class FunctionUnslicer(slicer.LeafUnslicer): - opentype = ('function',) - unslicerRegistry = UnsafeUnslicerRegistry - - finished = False - - def checkToken(self, typebyte, size): - if typebyte not in (tokens.STRING, tokens.VOCAB): - raise BananaError("FunctionUnslicer only accepts strings") - - def receiveChild(self, obj, ready_deferred=None): - assert not isinstance(obj, Deferred) - assert ready_deferred is None - if self.finished: - raise BananaError("FunctionUnslicer only accepts one string") - self.finished = True - # TODO: taste here! - self.func = reflect.namedObject(obj) - - def receiveClose(self): - if not self.finished: - raise BananaError("FunctionUnslicer requires a string") - return self.func, None - - -class UnsafeRootUnslicer(RootUnslicer): - topRegistries = [slicer.UnslicerRegistry, - slicer.BananaUnslicerRegistry, - UnsafeUnslicerRegistry] - openRegistries = [slicer.UnslicerRegistry, - UnsafeUnslicerRegistry] - -class StorageRootUnslicer(UnsafeRootUnslicer, slicer.ScopedUnslicer): - # This version tracks references for the entire lifetime of the - # protocol. It is most appropriate for single-use purposes, such as a - # replacement for Pickle. 
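The unslicers in this file turn class, module, and function names back into live objects with twisted.python.reflect.namedObject, which is exactly why they are "unsafe": the incoming name chooses what gets imported. A rough standalone equivalent of that lookup (a sketch using importlib; namedObject itself handles dotted names similarly):

```python
import importlib

def named_object(name):
    # Resolve a dotted "package.module.attr" name to a live object,
    # roughly what twisted.python.reflect.namedObject does for the
    # unslicers above. Importing whatever module the name says is
    # the "unsafe" part: the module's top-level code runs on import.
    modname, _, attr = name.rpartition(".")
    return getattr(importlib.import_module(modname), attr)
```

Because anything importable can be named, a hostile peer gets to pick the module whose top-level code runs, hence the VERY DANGEROUS warnings above.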
- - def __init__(self): - slicer.ScopedUnslicer.__init__(self) - UnsafeRootUnslicer.__init__(self) - - def setObject(self, counter, obj): - return slicer.ScopedUnslicer.setObject(self, counter, obj) - def getObject(self, counter): - return slicer.ScopedUnslicer.getObject(self, counter) - - -################## The unsafe form of Banana that uses these (Un)Slicers - - -class StorageBanana(banana.Banana): - # this is "unsafe", in that it will do import() and create instances of - # arbitrary classes. It is also scoped at the root, so each - # StorageBanana should be used only once. - slicerClass = StorageRootSlicer - unslicerClass = StorageRootUnslicer - - # it also stashes top-level objects in .obj, so you can retrieve them - # later - def receivedObject(self, obj): - self.object = obj - -def serialize(obj): - """Serialize an object graph into a sequence of bytes. Returns a Deferred - that fires with the sequence of bytes.""" - b = StorageBanana() - b.transport = StringIO() - d = b.send(obj) - d.addCallback(lambda res: b.transport.getvalue()) - return d - -def unserialize(str): - """Unserialize a sequence of bytes back into an object graph.""" - b = StorageBanana() - b.dataReceived(str) - return b.object - diff --git a/src/foolscap/foolscap/test/__init__.py b/src/foolscap/foolscap/test/__init__.py deleted file mode 100644 index f4b24142..00000000 --- a/src/foolscap/foolscap/test/__init__.py +++ /dev/null @@ -1,2 +0,0 @@ -# -*- test-case-name: foolscap.test -*- -"""foolscap tests""" diff --git a/src/foolscap/foolscap/test/common.py b/src/foolscap/foolscap/test/common.py deleted file mode 100644 index d31c7aa8..00000000 --- a/src/foolscap/foolscap/test/common.py +++ /dev/null @@ -1,338 +0,0 @@ -# -*- test-case-name: foolscap.test.test_pb -*- - -import re -from zope.interface import implements, implementsOnly, implementedBy, Interface -from twisted.python import log -from twisted.internet import defer, reactor -from foolscap import broker -from foolscap import 
Referenceable, RemoteInterface -from foolscap.eventual import eventually, fireEventually, flushEventualQueue -from foolscap.remoteinterface import getRemoteInterface, RemoteMethodSchema, \ - UnconstrainedMethod -from foolscap.schema import Any, SetOf, DictOf, ListOf, TupleOf, \ - NumberConstraint, ByteStringConstraint, IntegerConstraint, \ - UnicodeConstraint - -from twisted.python import failure -from twisted.internet.main import CONNECTION_DONE - -def getRemoteInterfaceName(obj): - i = getRemoteInterface(obj) - return i.__remote_name__ - -class Loopback: - # The transport's promise is that write() can be treated as a - # synchronous, isolated function call: specifically, the Protocol's - # dataReceived() and connectionLost() methods shall not be called during - # a call to write(). - - connected = True - def write(self, data): - eventually(self._write, data) - - def _write(self, data): - if not self.connected: - return - try: - # isolate exceptions: if one occurred on a regular TCP transport, - # they would hang up, so duplicate that here. 
- self.peer.dataReceived(data) - except: - f = failure.Failure() - log.err(f) - print "Loopback.write exception:", f - self.loseConnection(f) - - def loseConnection(self, why=failure.Failure(CONNECTION_DONE)): - if self.connected: - self.connected = False - # this one is slightly weird because 'why' is a Failure - eventually(self._loseConnection, why) - - def _loseConnection(self, why): - self.protocol.connectionLost(why) - self.peer.connectionLost(why) - - def flush(self): - self.connected = False - return fireEventually() - -Digits = re.compile("\d*") -MegaSchema1 = DictOf(str, - ListOf(TupleOf(SetOf(int, maxLength=10, mutable=True), - str, bool, int, long, float, None, - UnicodeConstraint(), - ByteStringConstraint(), - Any(), NumberConstraint(), - IntegerConstraint(), - ByteStringConstraint(maxLength=100, - minLength=90, - regexp="\w+"), - ByteStringConstraint(regexp=Digits), - ), - maxLength=20), - maxKeys=5) -# containers should convert their arguments into schemas -MegaSchema2 = TupleOf(SetOf(int), - ListOf(int), - DictOf(int, str), - ) - - -class RIHelper(RemoteInterface): - def set(obj=Any()): return bool - def set2(obj1=Any(), obj2=Any()): return bool - def append(obj=Any()): return Any() - def get(): return Any() - def echo(obj=Any()): return Any() - def defer(obj=Any()): return Any() - def hang(): return Any() - # test one of everything - def megaschema(obj1=MegaSchema1, obj2=MegaSchema2): return None - -class HelperTarget(Referenceable): - implements(RIHelper) - d = None - def __init__(self, name="unnamed"): - self.name = name - def __repr__(self): - return "<HelperTarget %s>" % self.name - def waitfor(self): - self.d = defer.Deferred() - return self.d - - def remote_set(self, obj): - self.obj = obj - if self.d: - self.d.callback(obj) - return True - def remote_set2(self, obj1, obj2): - self.obj1 = obj1 - self.obj2 = obj2 - return True - - def remote_append(self, obj): - self.calls.append(obj) - - def remote_get(self): - return self.obj - - def remote_echo(self, obj): -
self.obj = obj - return obj - - def remote_defer(self, obj): - return fireEventually(obj) - - def remote_hang(self): - self.d = defer.Deferred() - return self.d - - def remote_megaschema(self, obj1, obj2): - self.obj1 = obj1 - self.obj2 = obj2 - return None - - -class TargetMixin: - - def setUp(self): - self.loopbacks = [] - - def setupBrokers(self): - - self.targetBroker = broker.LoggingBroker() - self.callingBroker = broker.LoggingBroker() - - t1 = Loopback() - t1.peer = self.callingBroker - t1.protocol = self.targetBroker - self.targetBroker.transport = t1 - self.loopbacks.append(t1) - - t2 = Loopback() - t2.peer = self.targetBroker - t2.protocol = self.callingBroker - self.callingBroker.transport = t2 - self.loopbacks.append(t2) - - self.targetBroker.connectionMade() - self.callingBroker.connectionMade() - - def tearDown(self): - # returns a Deferred which fires when the Loopbacks are drained - dl = [l.flush() for l in self.loopbacks] - d = defer.DeferredList(dl) - d.addCallback(flushEventualQueue) - return d - - def setupTarget(self, target, txInterfaces=False): - # txInterfaces controls what interfaces the sender uses - # False: sender doesn't know about any interfaces - # True: sender gets the actual interface list from the target - # (list): sender uses an artificial interface list - puid = target.processUniqueID() - tracker = self.targetBroker.getTrackerForMyReference(puid, target) - tracker.send() - clid = tracker.clid - if txInterfaces: - iname = getRemoteInterfaceName(target) - else: - iname = None - rtracker = self.callingBroker.getTrackerForYourReference(clid, iname) - rr = rtracker.getRef() - return rr, target - - def stall(self, res, timeout): - d = defer.Deferred() - reactor.callLater(timeout, d.callback, res) - return d - - def poll(self, check_f, pollinterval=0.01): - # Return a Deferred, then call check_f periodically until it returns - # True, at which point the Deferred will fire.. If check_f raises an - # exception, the Deferred will errback. 
- d = defer.maybeDeferred(self._poll, None, check_f, pollinterval) - return d - - def _poll(self, res, check_f, pollinterval): - if check_f(): - return True - d = defer.Deferred() - d.addCallback(self._poll, check_f, pollinterval) - reactor.callLater(pollinterval, d.callback, None) - return d - - - -class RIMyTarget(RemoteInterface): - # method constraints can be declared directly: - add1 = RemoteMethodSchema(_response=int, a=int, b=int) - free = UnconstrainedMethod() - - # or through their function definitions: - def add(a=int, b=int): return int - #add = schema.callable(add) # the metaclass makes this unnecessary - # but it could be used for adding options or something - def join(a=str, b=str, c=int): return str - def getName(): return str - disputed = RemoteMethodSchema(_response=int, a=int) - def fail(): return str # actually raises an exception - def failstring(): return str # raises a string exception - -class RIMyTarget2(RemoteInterface): - __remote_name__ = "RIMyTargetInterface2" - sub = RemoteMethodSchema(_response=int, a=int, b=int) - -# For some tests, we want the two sides of the connection to disagree about -# the contents of the RemoteInterface they are using. This is remarkably -# difficult to accomplish within a single process. We do it by creating -# something that behaves just barely enough like a RemoteInterface to work. 
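The workaround described in the comment above rests on a plain fact about Python: an instance of a dict subclass can carry arbitrary attributes alongside its mapping entries, which is all a fake needs to stand in for a RemoteInterface. A minimal standalone illustration of the pattern (the names here are illustrative only, not from the real test suite):

```python
class FakeInterface(dict):
    # A dict subclass gets a per-instance __dict__, so it can hold
    # attributes (like __remote_name__) as well as mapping entries
    # (like per-method schemas). That is the whole trick.
    pass

fake = FakeInterface()
fake.__remote_name__ = "RISomeInterface"  # illustrative name
fake["somemethod"] = "stand-in for a RemoteMethodSchema"
```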
-class FakeTarget(dict): - pass -RIMyTarget3 = FakeTarget() -RIMyTarget3.__remote_name__ = RIMyTarget.__remote_name__ - -RIMyTarget3['disputed'] = RemoteMethodSchema(_response=int, a=str) -RIMyTarget3['disputed'].name = "disputed" -RIMyTarget3['disputed'].interface = RIMyTarget3 - -RIMyTarget3['disputed2'] = RemoteMethodSchema(_response=str, a=int) -RIMyTarget3['disputed2'].name = "disputed" -RIMyTarget3['disputed2'].interface = RIMyTarget3 - -RIMyTarget3['sub'] = RemoteMethodSchema(_response=int, a=int, b=int) -RIMyTarget3['sub'].name = "sub" -RIMyTarget3['sub'].interface = RIMyTarget3 - -class Target(Referenceable): - implements(RIMyTarget) - - def __init__(self, name=None): - self.calls = [] - self.name = name - def getMethodSchema(self, methodname): - return None - def remote_add(self, a, b): - self.calls.append((a,b)) - return a+b - remote_add1 = remote_add - def remote_free(self, *args, **kwargs): - self.calls.append((args, kwargs)) - return "bird" - def remote_getName(self): - return self.name - def remote_disputed(self, a): - return 24 - def remote_fail(self): - raise ValueError("you asked me to fail") - def remote_fail_remotely(self, target): - return target.callRemote("fail") - - def remote_failstring(self): - raise "string exceptions are annoying" - -class TargetWithoutInterfaces(Target): - # undeclare the RIMyTarget interface - implementsOnly(implementedBy(Referenceable)) - -class BrokenTarget(Referenceable): - implements(RIMyTarget) - - def remote_add(self, a, b): - return "error" - - -class IFoo(Interface): - # non-remote Interface - pass - -class Foo(Referenceable): - implements(IFoo) - -class RIDummy(RemoteInterface): - pass - -class RITypes(RemoteInterface): - def returns_none(work=bool): return None - def takes_remoteinterface(a=RIDummy): return str - def returns_remoteinterface(work=int): return RIDummy - def takes_interface(a=IFoo): return str - def returns_interface(work=bool): return IFoo - -class DummyTarget(Referenceable): - 
implements(RIDummy) - -class TypesTarget(Referenceable): - implements(RITypes) - - def remote_returns_none(self, work): - if work: - return None - return "not None" - - def remote_takes_remoteinterface(self, a): - # TODO: really, I want to just be able to say: - # if RIDummy.providedBy(a): - iface = a.tracker.interface - if iface and iface == RIDummy: - return "good" - raise RuntimeError("my argument (%s) should provide RIDummy, " - "but doesn't" % a) - - def remote_returns_remoteinterface(self, work): - if work == 1: - return DummyTarget() - if work == -1: - return TypesTarget() - return 15 - - def remote_takes_interface(self, a): - if IFoo.providedBy(a): - return "good" - raise RuntimeError("my argument (%s) should provide IFoo, but doesn't" % a) - - def remote_returns_interface(self, work): - if work: - return Foo() - return "not implementor of IFoo" diff --git a/src/foolscap/foolscap/test/test_banana.py b/src/foolscap/foolscap/test/test_banana.py deleted file mode 100644 index 2c95c3f7..00000000 --- a/src/foolscap/foolscap/test/test_banana.py +++ /dev/null @@ -1,1891 +0,0 @@ - -from twisted.trial import unittest -from twisted.python import reflect -from twisted.python.components import registerAdapter -from twisted.internet import defer - -from foolscap.tokens import ISlicer, Violation, BananaError -from foolscap.tokens import BananaFailure -from foolscap import slicer, schema, tokens, debug, storage -from foolscap.eventual import fireEventually, flushEventualQueue -from foolscap.slicers.allslicers import RootSlicer, DictUnslicer, TupleUnslicer -from foolscap.constraint import IConstraint - -import StringIO -import sets - -#log.startLogging(sys.stderr) - -# some utility functions to manually assemble bytestreams - -def tOPEN(count): - return ("OPEN", count) -def tCLOSE(count): - return ("CLOSE", count) -tABORT = ("ABORT",) - -def bOPEN(opentype, count): - assert count < 128 - return chr(count) + "\x88" + chr(len(opentype)) + "\x82" + opentype -def 
bCLOSE(count): - assert count < 128 - return chr(count) + "\x89" -def bINT(num): - if num >=0: - assert num < 128 - return chr(num) + "\x81" - num = -num - assert num < 128 - return chr(num) + "\x83" -def bSTR(str): - assert len(str) < 128 - return chr(len(str)) + "\x82" + str -def bERROR(str): - assert len(str) < 128 - return chr(len(str)) + "\x8d" + str -def bABORT(count): - assert count < 128 - return chr(count) + "\x8A" -# DecodeTest (24): turns tokens into objects, tests objects and UFs -# EncodeTest (13): turns objects/instance into tokens, tests tokens -# FailedInstanceTests (2): 1:turn instances into tokens and fail, 2:reverse - -# ByteStream (3): turn object into bytestream, test bytestream -# InboundByteStream (14): turn bytestream into object, check object -# with or without constraints -# ThereAndBackAgain (20): encode then decode object, check object - -# VocabTest1 (2): test setOutgoingVocabulary and an inbound Vocab sequence -# VocabTest2 (1): send object, test bytestream w/vocab-encoding -# Sliceable (2): turn instance into tokens (with ISliceable, test tokens - -class TokenBanana(debug.TokenStorageBanana): - """this Banana formats tokens as strings, numbers, and ('OPEN',) tuples - instead of bytes. 
Used for testing purposes.""" - - def testSlice(self, obj): - assert len(self.slicerStack) == 1 - assert isinstance(self.slicerStack[0][0], RootSlicer) - self.tokens = [] - d = self.send(obj) - d.addCallback(self._testSlice_1) - return d - def _testSlice_1(self, res): - assert len(self.slicerStack) == 1 - assert not self.rootSlicer.sendQueue - assert isinstance(self.slicerStack[0][0], RootSlicer) - return self.tokens - - def __del__(self): - assert not self.rootSlicer.sendQueue - -class UnbananaTestMixin: - def setUp(self): - self.hangup = False - self.banana = TokenBanana() - def tearDown(self): - if not self.hangup: - self.failUnless(len(self.banana.receiveStack) == 1) - self.failUnless(isinstance(self.banana.receiveStack[0], - storage.StorageRootUnslicer)) - - def do(self, tokens): - self.banana.object = None - self.banana.violation = None - self.banana.transport.disconnectReason = None - self.failUnless(len(self.banana.receiveStack) == 1) - self.failUnless(isinstance(self.banana.receiveStack[0], - storage.StorageRootUnslicer)) - obj = self.banana.processTokens(tokens) - return obj - - def shouldFail(self, tokens): - obj = self.do(tokens) - self.failUnless(obj is None, "object was produced: %s" % obj) - self.failUnless(self.banana.violation, "didn't fail, ret=%s" % obj) - self.failIf(self.banana.transport.disconnectReason, - "connection was dropped: %s" % \ - self.banana.transport.disconnectReason) - return self.banana.violation - - def shouldDropConnection(self, tokens): - self.banana.logReceiveErrors = False - obj = self.do(tokens) - self.failUnless(obj is None, "object was produced: %s" % obj) - self.failIf(self.banana.violation) - f = self.banana.transport.disconnectReason - self.failUnless(f, "didn't fail, ret=%s" % obj) - if not isinstance(f, BananaFailure): - self.fail("disconnectReason wasn't a Failure: %s" % f) - if not f.check(BananaError): - self.fail("wrong exception type: %s" % f) - self.hangup = True # to stop the tearDown check - return f - - - 
def failIfBananaFailure(self, res): - if isinstance(res, BananaFailure): - # something went wrong - print "There was a failure while Unbananaing '%s':" % res.where - print res.getTraceback() - self.fail("BananaFailure") - - def checkBananaFailure(self, res, where, failtype=None): - print res - self.failUnless(isinstance(res, BananaFailure)) - if failtype: - self.failUnless(res.failure, - "No Failure object in BananaFailure") - if not res.check(failtype): - print "Wrong exception (wanted '%s'):" % failtype - print res.getTraceback() - self.fail("Wrong exception (wanted '%s'):" % failtype) - self.failUnlessEqual(res.where, where) - self.banana.object = None # to stop the tearDown check TODO ?? - -class TestBanana(debug.LoggingStorageBanana): - #doLog = "rx" - - def receivedObject(self, obj): - self.object = obj - - def reportViolation(self, why): - self.violation = why - - def reportReceiveError(self, f): - debug.LoggingStorageBanana.reportReceiveError(self, f) - self.transport.disconnectReason = BananaFailure() - -class TestTransport(StringIO.StringIO): - disconnectReason = None - def loseConnection(self): - pass - -class _None: pass - -class TestBananaMixin: - def setUp(self): - self.makeBanana() - - def makeBanana(self): - self.banana = TestBanana() - self.banana.transport = TestTransport() - self.banana.connectionMade() - - def encode(self, obj): - d = self.banana.send(obj) - d.addCallback(lambda res: self.banana.transport.getvalue()) - return d - - def clearOutput(self): - self.banana.transport = TestTransport() - - def decode(self, stream): - self.banana.object = None - self.banana.violation = None - self.banana.dataReceived(stream) - return self.banana.object - - def shouldDecode(self, stream): - obj = self.decode(stream) - self.failIf(self.banana.violation) - self.failIf(self.banana.transport.disconnectReason) - self.failUnlessEqual(len(self.banana.receiveStack), 1) - return obj - - def shouldFail(self, stream): - obj = self.decode(stream) - 
self.failUnless(obj is None, - "obj was '%s', not None" % (obj,)) - self.failIf(self.banana.transport.disconnectReason, - "connection was dropped: %s" % \ - self.banana.transport.disconnectReason) - self.failUnlessEqual(len(self.banana.receiveStack), 1) - f = self.banana.violation - if not f: - self.fail("didn't fail") - if not isinstance(f, BananaFailure): - self.fail("violation wasn't a BananaFailure: %s" % f) - if not f.check(Violation): - self.fail("wrong exception type: %s" % f) - return f - - def shouldDropConnection(self, stream): - self.banana.logReceiveErrors = False # trial hooks log.err - obj = self.decode(stream) - self.failUnless(obj is None, - "decode worked! got '%s', expected dropConnection" \ - % obj) - # the receiveStack is allowed to be non-empty here, since we've - # dropped the connection anyway - f = self.banana.transport.disconnectReason - if not f: - self.fail("didn't fail") - if not isinstance(f, BananaFailure): - self.fail("disconnectReason wasn't a Failure: %s" % f) - if not f.check(BananaError): - self.fail("wrong exception type: %s" % f) - self.makeBanana() # need a new one, we squished the last one - return f - - def wantEqual(self, got, wanted): - if got != wanted: - print - print "wanted: '%s'" % wanted, repr(wanted) - print "got : '%s'" % got, repr(got) - self.fail("did not get expected string") - - def loop(self, obj): - self.clearOutput() - d = self.encode(obj) - d.addCallback(self.shouldDecode) - return d - - def looptest(self, obj, newvalue=_None): - if newvalue is _None: - newvalue = obj - d = self.loop(obj) - d.addCallback(self._looptest_1, newvalue) - return d - def _looptest_1(self, obj2, newvalue): - self.failUnlessEqual(obj2, newvalue) - self.failUnlessEqual(type(obj2), type(newvalue)) - -def join(*args): - return "".join(args) - - - -class BrokenDictUnslicer(DictUnslicer): - dieInFinish = 0 - - def receiveKey(self, key): - if key == "die": - raise Violation("aaagh") - if key == "please_die_in_finish": - self.dieInFinish = 
1 - DictUnslicer.receiveKey(self, key) - - def receiveValue(self, value): - if value == "die": - raise Violation("aaaaaaaaargh") - DictUnslicer.receiveValue(self, value) - - def receiveClose(self): - if self.dieInFinish: - raise Violation("dead in receiveClose()") - DictUnslicer.receiveClose(self) - return None, None - -class ReallyBrokenDictUnslicer(DictUnslicer): - def start(self, count): - raise Violation("dead in start") - - -class DecodeTest(UnbananaTestMixin, unittest.TestCase): - def setUp(self): - UnbananaTestMixin.setUp(self) - self.banana.logReceiveErrors = False - d ={ ('dict1',): BrokenDictUnslicer, - ('dict2',): ReallyBrokenDictUnslicer, - } - self.banana.rootUnslicer.topRegistries.insert(0, d) - self.banana.rootUnslicer.openRegistries.insert(0, d) - - def test_simple_list(self): - "simple list" - res = self.do([tOPEN(0),'list',1,2,3,"a","b",tCLOSE(0)]) - self.failUnlessEqual(res, [1,2,3,'a','b']) - - def test_aborted_list(self): - "aborted list" - f = self.shouldFail([tOPEN(0),'list', 1, tABORT, tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1]") - self.failUnlessEqual(f.value.args[0], "ABORT received") - - def test_aborted_list2(self): - "aborted list2" - f = self.shouldFail([tOPEN(0),'list', 1, tABORT, - tOPEN(1),'list', 2, 3, tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1]") - self.failUnlessEqual(f.value.args[0], "ABORT received") - - def test_aborted_list3(self): - "aborted list3" - f = self.shouldFail([tOPEN(0),'list', 1, - tOPEN(1),'list', 2, 3, 4, - tOPEN(2),'list', 5, 6, tABORT, tCLOSE(2), - tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1].[3].[2]") - self.failUnlessEqual(f.value.args[0], "ABORT received") - - def test_nested_list(self): - 
"nested list" - res = self.do([tOPEN(0),'list',1,2, - tOPEN(1),'list',3,4,tCLOSE(1), - tCLOSE(0)]) - self.failUnlessEqual(res, [1,2,[3,4]]) - - def test_list_with_tuple(self): - "list with tuple" - res = self.do([tOPEN(0),'list',1,2, - tOPEN(1),'tuple',3,4,tCLOSE(1), - tCLOSE(0)]) - self.failUnlessEqual(res, [1,2,(3,4)]) - - def test_dict(self): - "dict" - res = self.do([tOPEN(0),'dict',"a",1,"b",2,tCLOSE(0)]) - self.failUnlessEqual(res, {'a':1, 'b':2}) - - def test_dict_with_duplicate_keys(self): - "dict with duplicate keys" - f = self.shouldDropConnection([tOPEN(0),'dict', - "a",1,"a",2, - tCLOSE(0)]) - self.failUnlessEqual(f.value.where, ".{}") - self.failUnlessEqual(f.value.args[0], "duplicate key 'a'") - - def test_dict_with_list(self): - "dict with list" - res = self.do([tOPEN(0),'dict', - "a",1, - "b", tOPEN(1),'list', 2, 3, tCLOSE(1), - tCLOSE(0)]) - self.failUnlessEqual(res, {'a':1, 'b':[2,3]}) - - def test_dict_with_tuple_as_key(self): - "dict with tuple as key" - res = self.do([tOPEN(0),'dict', - tOPEN(1),'tuple', 1, 2, tCLOSE(1), "a", - tCLOSE(0)]) - self.failUnlessEqual(res, {(1,2):'a'}) - - def test_dict_with_mutable_key(self): - "dict with mutable key" - f = self.shouldDropConnection([tOPEN(0),'dict', - tOPEN(1),'list', 1, 2, tCLOSE(1), "a", - tCLOSE(0)]) - self.failUnlessEqual(f.value.where, ".{}") - self.failUnlessEqual(f.value.args[0], "unhashable key '[1, 2]'") - - def test_instance(self): - "instance" - f1 = Foo(); f1.a = 1; f1.b = [2,3] - f2 = Bar(); f2.d = 4; f1.c = f2 - res = self.do([tOPEN(0),'instance', "foolscap.test.test_banana.Foo", - "a", 1, - "b", tOPEN(1),'list', 2, 3, tCLOSE(1), - "c", tOPEN(2),'instance', - "foolscap.test.test_banana.Bar", - "d", 4, - tCLOSE(2), - tCLOSE(0)]) - self.failUnlessEqual(res, f1) - - def test_instance_bad1(self): - "subinstance with numeric classname" - tokens = [tOPEN(0),'instance', "Foo", - "a", 1, - "b", tOPEN(1),'list', 2, 3, tCLOSE(1), - "c", - tOPEN(2),'instance', 37, "d", 4, tCLOSE(2), - tCLOSE(0)] 
- f = self.shouldDropConnection(tokens) - self.failUnlessEqual(f.value.where, "..c.") - self.failUnlessEqual(f.value.args[0], - "InstanceUnslicer classname must be string") - - def test_instance_bad2(self): - "subinstance with numeric attribute name" - tokens = [tOPEN(0),'instance', "Foo", - "a", 1, - "b", tOPEN(1),'list', 2, 3, tCLOSE(1), - "c", - tOPEN(2),'instance', - "Bar", 37, 4, - tCLOSE(2), - tCLOSE(0)] - f = self.shouldDropConnection(tokens) - self.failUnlessEqual(f.value.where, - "..c..attrname??") - self.failUnlessEqual(f.value.args[0], - "InstanceUnslicer keys must be STRINGs") - - def test_instance_unsafe1(self): - - "instances when instances aren't allowed" - self.banana.rootUnslicer.topRegistries = [slicer.UnslicerRegistry] - self.banana.rootUnslicer.openRegistries = [slicer.UnslicerRegistry] - - tokens = [tOPEN(0),'instance', "Foo", - "a", 1, - "b", tOPEN(1),'list', 2, 3, tCLOSE(1), - "c", - tOPEN(2),'instance', - "Bar", 37, 4, - tCLOSE(2), - tCLOSE(0)] - f = self.shouldFail(tokens) - self.failUnlessEqual(f.value.where, "") - self.failUnlessEqual(f.value.args[0], - "unknown top-level OPEN type ('instance',)") - - def test_ref1(self): - res = self.do([tOPEN(0),'list', - tOPEN(1),'list', 1, 2, tCLOSE(1), - tOPEN(2),'reference', 1, tCLOSE(2), - tCLOSE(0)]) - self.failIfBananaFailure(res) - self.failUnlessEqual(res, [[1,2], [1,2]]) - self.failUnlessIdentical(res[0], res[1]) - - def test_ref2(self): - res = self.do([tOPEN(0),'list', - tOPEN(1),'list', 1, 2, tCLOSE(1), - tOPEN(2),'reference', 0, tCLOSE(2), - tCLOSE(0)]) - self.failIfBananaFailure(res) - wanted = [[1,2]] - wanted.append(wanted) - # python2.3 is clever and can do - # self.failUnlessEqual(res, wanted) - # python2.4 is not, so we do it by hand - self.failUnlessEqual(len(res), len(wanted)) - self.failUnlessEqual(res[0], wanted[0]) - self.failUnlessIdentical(res, res[1]) - - def test_ref3(self): - res = self.do([tOPEN(0),'list', - tOPEN(1),'tuple', 1, 2, tCLOSE(1), - tOPEN(2),'reference', 1, 
tCLOSE(2), - tCLOSE(0)]) - self.failIfBananaFailure(res) - wanted = [(1,2)] - wanted.append(wanted[0]) - self.failUnlessEqual(res, wanted) - self.failUnlessIdentical(res[0], res[1]) - - def test_ref4(self): - res = self.do([tOPEN(0),'list', - tOPEN(1),'dict', "a", 1, tCLOSE(1), - tOPEN(2),'reference', 1, tCLOSE(2), - tCLOSE(0)]) - self.failIfBananaFailure(res) - wanted = [{"a":1}] - wanted.append(wanted[0]) - self.failUnlessEqual(res, wanted) - self.failUnlessIdentical(res[0], res[1]) - - def test_ref5(self): - # The Droste Effect: a list that contains itself - res = self.do([tOPEN(0),'list', - 5, - 6, - tOPEN(1),'reference', 0, tCLOSE(1), - 7, - tCLOSE(0)]) - self.failIfBananaFailure(res) - wanted = [5,6] - wanted.append(wanted) - wanted.append(7) - #self.failUnlessEqual(res, wanted) - self.failUnlessEqual(len(res), len(wanted)) - self.failUnlessEqual(res[0:2], wanted[0:2]) - self.failUnlessIdentical(res[2], res) - self.failUnlessEqual(res[3], wanted[3]) - - def test_ref6(self): - # everybody's favorite "([(ref0" test case. A tuple of a list of a - # tuple of the original tuple. 
Such cycles must always have a - # mutable container in them somewhere, or they couldn't be - # constructed, but the resulting object involves a lot of deferred - # results because the mutable list is the *only* object that can - # be created without dependencies - res = self.do([tOPEN(0),'tuple', - tOPEN(1),'list', - tOPEN(2),'tuple', - tOPEN(3),'reference', 0, tCLOSE(3), - tCLOSE(2), - tCLOSE(1), - tCLOSE(0)]) - self.failIfBananaFailure(res) - wanted = ([],) - wanted[0].append((wanted,)) - #self.failUnlessEqual(res, wanted) - self.failUnless(type(res) is tuple) - self.failUnless(len(res) == 1) - self.failUnless(type(res[0]) is list) - self.failUnless(len(res[0]) == 1) - self.failUnless(type(res[0][0]) is tuple) - self.failUnless(len(res[0][0]) == 1) - self.failUnlessIdentical(res[0][0][0], res) - - # TODO: need a test where tuple[0] and [1] are deferred, but - # tuple[0] becomes available before tuple[2] is inserted. Not sure - # this is possible, but it would improve test coverage in - # TupleUnslicer - - def test_failed_dict1(self): - # dies during open because of bad opentype - f = self.shouldFail([tOPEN(0),'list', 1, - tOPEN(1),"bad", - "a", 2, - "b", 3, - tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1]") - self.failUnlessEqual(f.value.args[0], "unknown OPEN type ('bad',)") - - def test_failed_dict2(self): - # dies during start - f = self.shouldFail([tOPEN(0),'list', 1, - tOPEN(1),'dict2', "a", 2, "b", 3, tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1].{}") - self.failUnlessEqual(f.value.args[0], "dead in start") - - def test_failed_dict3(self): - # dies during key - f = self.shouldFail([tOPEN(0),'list', 1, - tOPEN(1),'dict1', "a", 2, "die", tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - 
self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1].{}") - self.failUnlessEqual(f.value.args[0], "aaagh") - - res = self.do([tOPEN(2),'list', 3, 4, tCLOSE(2)]) - self.failUnlessEqual(res, [3,4]) - - def test_failed_dict4(self): - # dies during value - f = self.shouldFail([tOPEN(0),'list', 1, - tOPEN(1),'dict1', - "a", 2, - "b", "die", - tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1].{}[b]") - self.failUnlessEqual(f.value.args[0], "aaaaaaaaargh") - - def test_failed_dict5(self): - # dies during finish - f = self.shouldFail([tOPEN(0),'list', 1, - tOPEN(1),'dict1', - "a", 2, - "please_die_in_finish", 3, - tCLOSE(1), - tCLOSE(0)]) - self.failUnless(isinstance(f, BananaFailure)) - self.failUnless(f.check(Violation)) - self.failUnlessEqual(f.value.where, ".[1].{}") - self.failUnlessEqual(f.value.args[0], "dead in receiveClose()") - -class Bar: - def __cmp__(self, them): - if not type(them) == type(self): - return -1 - return cmp((self.__class__, self.__dict__), - (them.__class__, them.__dict__)) -class Foo(Bar): - pass - -class EncodeTest(unittest.TestCase): - def setUp(self): - self.banana = TokenBanana() - def do(self, obj): - return self.banana.testSlice(obj) - def tearDown(self): - self.failUnless(len(self.banana.slicerStack) == 1) - self.failUnless(isinstance(self.banana.slicerStack[0][0], RootSlicer)) - - def testList(self): - d = self.do([1,2]) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', 1, 2, tCLOSE(0)]) - return d - - def testTuple(self): - d = self.do((1,2)) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'tuple', 1, 2, tCLOSE(0)]) - return d - - def testNestedList(self): - d = self.do([1,2,[3,4]]) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', 1, 2, - tOPEN(1),'list', 3, 4, tCLOSE(1), - tCLOSE(0)]) - return d - - def testNestedList2(self): - d = self.do([1,2,(3,4,[5, "hi"])]) - 
d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', 1, 2, - tOPEN(1),'tuple', 3, 4, - tOPEN(2),'list', 5, "hi", tCLOSE(2), - tCLOSE(1), - tCLOSE(0)]) - return d - - def testDict(self): - d = self.do({'a': 1, 'b': 2}) - d.addCallback(lambda res: - self.failUnless( - res == [tOPEN(0),'dict', 'a', 1, 'b', 2, tCLOSE(0)] or - res == [tOPEN(0),'dict', 'b', 2, 'a', 1, tCLOSE(0)])) - return d - - def test_ref1(self): - l = [1,2] - obj = [l,l] - d = self.do(obj) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', - tOPEN(1),'list', 1, 2, tCLOSE(1), - tOPEN(2),'reference', 1, tCLOSE(2), - tCLOSE(0)]) - return d - - def test_ref2(self): - obj = [[1,2]] - obj.append(obj) - d = self.do(obj) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', - tOPEN(1),'list', 1, 2, tCLOSE(1), - tOPEN(2),'reference', 0, tCLOSE(2), - tCLOSE(0)]) - return d - - def test_ref3(self): - obj = [(1,2)] - obj.append(obj[0]) - d = self.do(obj) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', - tOPEN(1),'tuple', 1, 2, tCLOSE(1), - tOPEN(2),'reference', 1, tCLOSE(2), - tCLOSE(0)]) - return d - - def test_ref4(self): - obj = [{"a":1}] - obj.append(obj[0]) - d = self.do(obj) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'list', - tOPEN(1),'dict', "a", 1, tCLOSE(1), - tOPEN(2),'reference', 1, tCLOSE(2), - tCLOSE(0)]) - return d - - def test_ref6(self): - # everybody's favorite "([(ref0" test case. 
- obj = ([],) - obj[0].append((obj,)) - d = self.do(obj) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'tuple', - tOPEN(1),'list', - tOPEN(2),'tuple', - tOPEN(3),'reference', 0, tCLOSE(3), - tCLOSE(2), - tCLOSE(1), - tCLOSE(0)]) - return d - - def test_refdict1(self): - # a dictionary with a value that isn't available right away - d0 = {1: "a"} - t = (d0,) - d0[2] = t - d = self.do(d0) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'dict', - 1, "a", - 2, tOPEN(1),'tuple', - tOPEN(2),'reference', 0, tCLOSE(2), - tCLOSE(1), - tCLOSE(0)]) - return d - - def test_instance_one(self): - obj = Bar() - obj.a = 1 - classname = reflect.qual(Bar) - d = self.do(obj) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'instance', classname, "a", 1, tCLOSE(0)]) - return d - - def test_instance_two(self): - f1 = Foo(); f1.a = 1; f1.b = [2,3] - f2 = Bar(); f2.d = 4; f1.c = f2 - fooname = reflect.qual(Foo) - barname = reflect.qual(Bar) - # needs OrderedDictSlicer for the test to work - d = self.do(f1) - d.addCallback(self.failUnlessEqual, - [tOPEN(0),'instance', fooname, - "a", 1, - "b", tOPEN(1),'list', 2, 3, tCLOSE(1), - "c", - tOPEN(2),'instance', barname, - "d", 4, - tCLOSE(2), - tCLOSE(0)]) - - - -class ErrorfulSlicer(slicer.BaseSlicer): - def __init__(self, mode, shouldSucceed, ignoreChildDeath=False): - self.mode = mode - self.items = [1] - self.items.append(mode) - self.items.append(3) - #if mode not in ('success', 'deferred-good'): - if not shouldSucceed: - self.items.append("unreached") - self.counter = -1 - self.childDied = False - self.ignoreChildDeath = ignoreChildDeath - - def __iter__(self): - return self - def slice(self, streamable, banana): - self.streamable = streamable - if self.mode == "slice": - raise Violation("slice failed") - return self - - def next(self): - self.counter += 1 - if not self.items: - raise StopIteration - obj = self.items.pop(0) - if obj == "next": - raise Violation("next failed") - if obj == "deferred-good": - return 
fireEventually(None) - if obj == "deferred-bad": - d = defer.Deferred() - # the Banana should bail, so don't bother with the timer - return d - if obj == "newSlicerFor": - unserializable = open("unserializable.txt", "w") - # Hah! Serialize that! - return unserializable - if obj == "unreached": - print "error: slicer.next called after it should have stopped" - return obj - - def childAborted(self, v): - self.childDied = True - if self.ignoreChildDeath: - return None - return v - - def describe(self): - return "ErrorfulSlicer[%d]" % self.counter - -# Slicer creation (schema pre-validation?) -# .slice (called in pushSlicer) ? -# .slice.next raising Violation -# .slice.next returning Deferred when streaming isn't allowed -# .sendToken (non-primitive token, can't happen) -# .newSlicerFor (no ISlicer adapter) -# top.childAborted - -class EncodeFailureTest(unittest.TestCase): - def setUp(self): - self.banana = TokenBanana() - - def tearDown(self): - return flushEventualQueue() - - def send(self, obj): - d = self.banana.send(obj) - d.addCallback(lambda res: self.banana.tokens) - return d - - def testSuccess1(self): - # make sure the test slicer works correctly - s = ErrorfulSlicer("success", True) - d = self.send(s) - d.addCallback(self.failUnlessEqual, - [('OPEN', 0), 1, 'success', 3, ('CLOSE', 0)]) - return d - - def testSuccessStreaming(self): - # success - s = ErrorfulSlicer("deferred-good", True) - d = self.send(s) - d.addCallback(self.failUnlessEqual, - [('OPEN', 0), 1, 3, ('CLOSE', 0)]) - return d - - def test1(self): - # failure during .slice (called from pushSlicer) - s = ErrorfulSlicer("slice", False) - d = self.send(s) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._test1_1) - return d - def _test1_1(self, e): - e.trap(Violation) - self.failUnlessEqual(e.value.where, "") - self.failUnlessEqual(e.value.args, ("slice failed",)) - self.failUnlessEqual(self.banana.tokens, []) - - def test2(self): - # .slice.next raising Violation - s = 
ErrorfulSlicer("next", False) - d = self.send(s) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._test2_1) - return d - def _test2_1(self, e): - e.trap(Violation) - self.failUnlessEqual(e.value.where, ".ErrorfulSlicer[1]") - self.failUnlessEqual(e.value.args, ("next failed",)) - self.failUnlessEqual(self.banana.tokens, - [('OPEN', 0), 1, ('ABORT',), ('CLOSE', 0)]) - - def test3(self): - # .slice.next returning Deferred when streaming isn't allowed - self.banana.rootSlicer.allowStreaming(False) - s = ErrorfulSlicer("deferred-bad", False) - d = self.send(s) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._test3_1) - return d - def _test3_1(self, e): - e.trap(Violation) - self.failUnlessEqual(e.value.where, ".ErrorfulSlicer[1]") - self.failUnlessEqual(e.value.args, ("parent not streamable",)) - self.failUnlessEqual(self.banana.tokens, - [('OPEN', 0), 1, ('ABORT',), ('CLOSE', 0)]) - - def test4(self): - # .newSlicerFor (no ISlicer adapter), parent propagates upwards - s = ErrorfulSlicer("newSlicerFor", False) - d = self.send(s) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._test4_1, errbackArgs=(s,)) - return d - def _test4_1(self, e, s): - e.trap(Violation) - self.failUnlessEqual(e.value.where, ".ErrorfulSlicer[1]") - self.failUnlessSubstring("cannot serialize ") - why = e.value.args[0] - self.failUnless( - why.startswith("cannot serialize " - "64 bytes) -# checkToken (top.openerCheckToken) -# checkToken (top.checkToken) -# typebyte == LIST (oldbanana) -# bad VOCAB key # TODO -# too-long vocab key -# bad FLOAT encoding # I don't there is such a thing -# top.receiveClose -# top.finish -# top.reportViolation -# oldtop.finish (in from handleViolation) -# top.doOpen -# top.start -# plus all of these when discardCount != 0 - -class ErrorfulUnslicer(slicer.BaseUnslicer): - debug = False - - def doOpen(self, opentype): - if self.mode == "doOpen": - raise Violation("boom") - return 
slicer.BaseUnslicer.doOpen(self, opentype) - - def start(self, count): - self.mode = self.protocol.mode - self.ignoreChildDeath = self.protocol.ignoreChildDeath - if self.debug: - print "ErrorfulUnslicer.start, mode=%s" % self.mode - self.list = [] - if self.mode == "start": - raise Violation("boom") - - def openerCheckToken(self, typebyte, size, opentype): - if self.debug: - print "ErrorfulUnslicer.openerCheckToken(%s)" \ - % tokens.tokenNames[typebyte] - if self.mode == "openerCheckToken": - raise Violation("boom") - return slicer.BaseUnslicer.openerCheckToken(self, typebyte, - size, opentype) - def checkToken(self, typebyte, size): - if self.debug: - print "ErrorfulUnslicer.checkToken(%s)" \ - % tokens.tokenNames[typebyte] - if self.mode == "checkToken": - raise Violation("boom") - if self.mode == "checkToken-OPEN" and typebyte == tokens.OPEN: - raise Violation("boom") - return slicer.BaseUnslicer.checkToken(self, typebyte, size) - - def receiveChild(self, obj, ready_deferred=None): - if self.debug: print "ErrorfulUnslicer.receiveChild", obj - if self.mode == "receiveChild": - raise Violation("boom") - self.list.append(obj) - - def reportViolation(self, why): - if self.ignoreChildDeath: - return None - return why - - def receiveClose(self): - if self.debug: print "ErrorfulUnslicer.receiveClose" - if self.protocol.mode == "receiveClose": - raise Violation("boom") - return self.list, None - - def finish(self): - if self.debug: print "ErrorfulUnslicer.receiveClose" - if self.protocol.mode == "finish": - raise Violation("boom") - - def describe(self): - return "errorful" - -class FailingUnslicer(TupleUnslicer): - def receiveChild(self, obj, ready_deferred=None): - if self.protocol.mode != "success": - raise Violation("foom") - return TupleUnslicer.receiveChild(self, obj, ready_deferred) - def describe(self): - return "failing" - -class DecodeFailureTest(TestBananaMixin, unittest.TestCase): - listStream = join(bOPEN("errorful", 0), bINT(1), bINT(2), bCLOSE(0)) - 
nestedStream = join(bOPEN("errorful", 0), bINT(1), - bOPEN("list", 1), bINT(2), bINT(3), bCLOSE(1), - bCLOSE(0)) - nestedStream2 = join(bOPEN("failing", 0), bSTR("a"), - bOPEN("errorful", 1), bINT(1), - bOPEN("list", 2), bINT(2), bINT(3), bCLOSE(2), - bCLOSE(1), - bSTR("b"), - bCLOSE(0), - ) - abortStream = join(bOPEN("errorful", 0), bINT(1), - bOPEN("list", 1), - bINT(2), bABORT(1), bINT(3), - bCLOSE(1), - bCLOSE(0)) - - def setUp(self): - TestBananaMixin.setUp(self) - d = {('errorful',): ErrorfulUnslicer, - ('failing',): FailingUnslicer, - } - self.banana.rootUnslicer.topRegistries.insert(0, d) - self.banana.rootUnslicer.openRegistries.insert(0, d) - self.banana.ignoreChildDeath = False - - def testSuccess1(self): - self.banana.mode = "success" - o = self.shouldDecode(self.listStream) - self.failUnlessEqual(o, [1,2]) - o = self.shouldDecode(self.nestedStream) - self.failUnlessEqual(o, [1,[2,3]]) - o = self.shouldDecode(self.nestedStream2) - self.failUnlessEqual(o, ("a",[1,[2,3]],"b")) - - def testLongHeader(self): - # would be a string but the header is too long - s = "\x01" * 66 + "\x82" + "stupidly long string" - f = self.shouldDropConnection(s) - self.failUnless(f.value.args[0].startswith("token prefix is limited to 64 bytes")) - - def testLongHeader2(self): - # bad string while discarding - s = "\x01" * 66 + "\x82" + "stupidly long string" - s = bOPEN("errorful",0) + bINT(1) + s + bINT(2) + bCLOSE(0) - self.banana.mode = "start" - f = self.shouldDropConnection(s) - self.failUnless(f.value.args[0].startswith("token prefix is limited to 64 bytes")) - - def testCheckToken1(self): - # violation raised in top.openerCheckToken - self.banana.mode = "openerCheckToken" - f = self.shouldFail(self.nestedStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testCheckToken2(self): - # violation raised in top.openerCheckToken, but the error is - # absorbed - self.banana.mode = 
"openerCheckToken" - self.banana.ignoreChildDeath = True - o = self.shouldDecode(self.nestedStream) - self.failUnlessEqual(o, [1]) - self.testSuccess1() - - def testCheckToken3(self): - # violation raised in top.checkToken - self.banana.mode = "checkToken" - f = self.shouldFail(self.listStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testCheckToken4(self): - # violation raised in top.checkToken, but only for the OPEN that - # starts the nested list. The error is absorbed. - self.banana.mode = "checkToken-OPEN" - self.banana.ignoreChildDeath = True - o = self.shouldDecode(self.nestedStream) - self.failUnlessEqual(o, [1]) - self.testSuccess1() - - def testCheckToken5(self): - # violation raised in top.checkToken, while discarding - self.banana.mode = "checkToken" - #self.banana.debugReceive=True - f = self.shouldFail(self.nestedStream2) - self.failUnlessEqual(f.value.where, ".failing") - self.failUnlessEqual(f.value.args[0], "foom") - self.testSuccess1() - - def testReceiveChild1(self): - self.banana.mode = "receiveChild" - f = self.shouldFail(self.listStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testReceiveChild2(self): - self.banana.mode = "receiveChild" - f = self.shouldFail(self.nestedStream2) - self.failUnlessEqual(f.value.where, ".failing") - self.failUnlessEqual(f.value.args[0], "foom") - self.testSuccess1() - - def testReceiveChild3(self): - self.banana.mode = "receiveChild" - # the ABORT should be ignored, since it is in the middle of a - # sequence which is being ignored. One possible bug is that the - # ABORT delivers a second Violation. In this test, we only record - # the last Violation, so we'll catch that case. 
- f = self.shouldFail(self.abortStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - # (the other Violation would be at 'root', of type 'ABORT received') - self.testSuccess1() - - def testReceiveClose1(self): - self.banana.mode = "receiveClose" - f = self.shouldFail(self.listStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testReceiveClose2(self): - self.banana.mode = "receiveClose" - f = self.shouldFail(self.nestedStream2) - self.failUnlessEqual(f.value.where, ".failing") - self.failUnlessEqual(f.value.args[0], "foom") - self.testSuccess1() - - def testFinish1(self): - self.banana.mode = "finish" - f = self.shouldFail(self.listStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testFinish2(self): - self.banana.mode = "finish" - f = self.shouldFail(self.nestedStream2) - self.failUnlessEqual(f.value.where, ".failing") - self.failUnlessEqual(f.value.args[0], "foom") - self.testSuccess1() - - def testStart1(self): - self.banana.mode = "start" - f = self.shouldFail(self.listStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testStart2(self): - self.banana.mode = "start" - f = self.shouldFail(self.nestedStream2) - self.failUnlessEqual(f.value.where, ".failing") - self.failUnlessEqual(f.value.args[0], "foom") - self.testSuccess1() - - def testDoOpen1(self): - self.banana.mode = "doOpen" - f = self.shouldFail(self.nestedStream) - self.failUnlessEqual(f.value.where, ".errorful") - self.failUnlessEqual(f.value.args[0], "boom") - self.testSuccess1() - - def testDoOpen2(self): - self.banana.mode = "doOpen" - f = self.shouldFail(self.nestedStream2) - self.failUnlessEqual(f.value.where, ".failing") - self.failUnlessEqual(f.value.args[0], "foom") - self.testSuccess1() - 
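The `listStream`/`nestedStream` byte strings exercised by the failure-mode tests above are assembled from small helpers (`bOPEN`, `bCLOSE`, `bABORT`, `bINT`, `bSTR`, `join`) defined earlier in this test module, outside this hunk. For context, here is a hedged reconstruction of those helpers; the token type bytes follow foolscap's `tokens.py` (INT=0x81, STRING=0x82, NEG=0x83, OPEN=0x88, CLOSE=0x89, ABORT=0x8A), and the `listStream` definition at the end is an assumption inferred from `testSuccess1`, not copied from the original file:

```python
# Hedged reconstruction of the stream-building helpers used by these
# tests (the originals are defined elsewhere in test_banana.py).

def _prefix(n):
    # token header: base-128, little-endian, one 7-bit digit per byte
    digits = []
    while True:
        digits.append(n & 0x7F)
        n >>= 7
        if n == 0:
            break
    return bytes(digits)

def bINT(n):
    # 0x81 = INT (non-negative), 0x83 = NEG; header carries the magnitude
    return _prefix(abs(n)) + (b"\x81" if n >= 0 else b"\x83")

def bSTR(s):
    # 0x82 = STRING: length header, type byte, then the payload bytes
    return _prefix(len(s)) + b"\x82" + s.encode("ascii")

def bOPEN(opentype, count):
    # 0x88 = OPEN: sequence counter, then the opentype as a STRING token
    return _prefix(count) + b"\x88" + bSTR(opentype)

def bCLOSE(count):
    return _prefix(count) + b"\x89"   # 0x89 = CLOSE

def bABORT(count):
    return _prefix(count) + b"\x8A"   # 0x8A = ABORT

def join(*chunks):
    return b"".join(chunks)

# assumed shape of self.listStream, inferred from testSuccess1 above:
# OPEN(errorful) INT(1) INT(2) CLOSE decodes to [1, 2]
listStream = join(bOPEN("errorful", 0), bINT(1), bINT(2), bCLOSE(0))
```

With these helpers, the nested streams in the tests are just concatenations of OPEN/CLOSE pairs whose counters match up, which is why a mismatched or aborted counter is enough to trigger the Violations being tested.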
-class ByteStream(TestBananaMixin, unittest.TestCase): - - def test_list(self): - obj = [1,2] - expected = join(bOPEN("list", 0), - bINT(1), bINT(2), - bCLOSE(0), - ) - d = self.encode(obj) - d.addCallback(self.wantEqual, expected) - return d - - def test_ref6(self): - # everybody's favorite "([(ref0" test case. - obj = ([],) - obj[0].append((obj,)) - - expected = join(bOPEN("tuple",0), - bOPEN("list",1), - bOPEN("tuple",2), - bOPEN("reference",3), - bINT(0), - bCLOSE(3), - bCLOSE(2), - bCLOSE(1), - bCLOSE(0)) - d = self.encode(obj) - d.addCallback(self.wantEqual, expected) - return d - - def test_two(self): - f1 = Foo(); f1.a = 1; f1.b = [2,3] - f2 = Bar(); f2.d = 4; f1.c = f2 - fooname = reflect.qual(Foo) - barname = reflect.qual(Bar) - # needs OrderedDictSlicer for the test to work - - expected = join(bOPEN("instance",0), bSTR(fooname), - bSTR("a"), bINT(1), - bSTR("b"), - bOPEN("list",1), - bINT(2), bINT(3), - bCLOSE(1), - bSTR("c"), - bOPEN("instance",2), bSTR(barname), - bSTR("d"), bINT(4), - bCLOSE(2), - bCLOSE(0)) - d = self.encode(f1) - d.addCallback(self.wantEqual, expected) - return d - -class InboundByteStream(TestBananaMixin, unittest.TestCase): - - def check(self, obj, stream): - # use a new Banana for each check - self.makeBanana() - obj2 = self.shouldDecode(stream) - self.failUnlessEqual(obj, obj2) - - def testInt(self): - self.check(1, "\x01\x81") - self.check(130, "\x02\x01\x81") - self.check(-1, "\x01\x83") - self.check(-130, "\x02\x01\x83") - self.check(0, bINT(0)) - self.check(1, bINT(1)) - self.check(127, bINT(127)) - self.check(-1, bINT(-1)) - self.check(-127, bINT(-127)) - - def testLong(self): - self.check(258L, "\x02\x85\x01\x02") # TODO: 0x85 for LONGINT?? - self.check(-258L, "\x02\x86\x01\x02") # TODO: 0x85 for LONGINT?? 
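The `testLong` byte strings above ("\x02\x85\x01\x02" for 258L) show the LONGINT (0x85) and LONGNEG (0x86) encodings: the base-128 header gives a byte count rather than a value, and the payload is the magnitude in big-endian bytes. A hedged sketch of an encoder consistent with those strings; `bLONG` and `_prefix` are hypothetical names, not helpers from the original module:

```python
def _prefix(n):
    # token header: base-128, little-endian, one 7-bit digit per byte
    digits = []
    while True:
        digits.append(n & 0x7F)
        n >>= 7
        if n == 0:
            break
    return bytes(digits)

def bLONG(n):
    # hypothetical helper: 0x85 = LONGINT, 0x86 = LONGNEG.
    # The header carries the payload's byte length; the payload is the
    # magnitude as big-endian bytes, e.g. 258 -> b"\x01\x02".
    neg = n < 0
    mag = -n if neg else n
    body = b""
    while mag:
        body = bytes([mag & 0xFF]) + body
        mag >>= 8
    return _prefix(len(body)) + (b"\x86" if neg else b"\x85") + body
```

This matches both testLong cases: `bLONG(258)` yields "\x02\x85\x01\x02" and `bLONG(-258)` yields "\x02\x86\x01\x02".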
- - def testString(self): - self.check("", "\x82") - self.check("", "\x00\x82") - self.check("", "\x00\x00\x82") - self.check("", "\x00" * 64 + "\x82") - - f = self.shouldDropConnection("\x00" * 65) - self.failUnlessEqual(f.value.where, "") - self.failUnless(f.value.args[0].startswith("token prefix is limited to 64 bytes")) - f = self.shouldDropConnection("\x00" * 65 + "\x82") - self.failUnlessEqual(f.value.where, "") - self.failUnless(f.value.args[0].startswith("token prefix is limited to 64 bytes")) - - self.check("a", "\x01\x82a") - self.check("b"*130, "\x02\x01\x82" + "b"*130 + "extra") - self.check("c"*1025, "\x01\x08\x82" + "c" * 1025 + "extra") - self.check("fluuber", bSTR("fluuber")) - - - def testList(self): - self.check([1,2], - join(bOPEN('list',1), - bINT(1), bINT(2), - bCLOSE(1))) - self.check([1,"b"], - join(bOPEN('list',1), bINT(1), - "\x01\x82b", - bCLOSE(1))) - self.check([1,2,[3,4]], - join(bOPEN('list',1), bINT(1), bINT(2), - bOPEN('list',2), bINT(3), bINT(4), - bCLOSE(2), - bCLOSE(1))) - - def testTuple(self): - self.check((1,2), - join(bOPEN('tuple',1), bINT(1), bINT(2), - bCLOSE(1))) - - def testDict(self): - self.check({1:"a", 2:["b","c"]}, - join(bOPEN('dict',1), - bINT(1), bSTR("a"), - bINT(2), bOPEN('list',2), - bSTR("b"), bSTR("c"), - bCLOSE(2), - bCLOSE(1))) - - def TRUE(self): - return join(bOPEN("boolean",2), bINT(1), bCLOSE(2)) - def FALSE(self): - return join(bOPEN("boolean",2), bINT(0), bCLOSE(2)) - - def testBool(self): - self.check(True, self.TRUE()) - -class InboundByteStream2(TestBananaMixin, unittest.TestCase): - - def setConstraints(self, constraint, childConstraint): - if constraint: - constraint = IConstraint(constraint) - self.banana.receiveStack[-1].constraint = constraint - - if childConstraint: - childConstraint = IConstraint(childConstraint) - self.banana.receiveStack[-1].childConstraint = childConstraint - - def conform2(self, stream, obj, constraint=None, childConstraint=None): - self.setConstraints(constraint, 
childConstraint) - obj2 = self.shouldDecode(stream) - self.failUnlessEqual(obj, obj2) - - def violate2(self, stream, where, constraint=None, childConstraint=None): - self.setConstraints(constraint, childConstraint) - f = self.shouldFail(stream) - self.failUnlessEqual(f.value.where, where) - self.failUnlessEqual(len(self.banana.receiveStack), 1) - - def testConstrainedInt(self): - pass # TODO: after implementing new LONGINT token - - def testConstrainedString(self): - self.conform2("\x82", "", - schema.StringConstraint(10)) - self.conform2("\x0a\x82" + "a"*10 + "extra", "a"*10, - schema.StringConstraint(10)) - self.violate2("\x0b\x82" + "a"*11 + "extra", - "", - schema.StringConstraint(10)) - - def NOTtestFoo(self): - if 0: - a100 = chr(100) + "\x82" + "a"*100 - b100 = chr(100) + "\x82" + "b"*100 - self.violate2(join(bOPEN('list',1), - bOPEN('list',2), a100, b100, bCLOSE(2), - bCLOSE(1)), - ".[0].[0]", - schema.ListOf( - schema.ListOf(schema.StringConstraint(99), 2), 2)) - - def OPENweird(count, weird): - return chr(count) + "\x88" + weird - - self.violate2(join(bOPEN('list',1), - bOPEN('list',2), - OPENweird(3, bINT(64)), - bINT(1), bINT(2), bCLOSE(3), - bCLOSE(2), - bCLOSE(1)), - ".[0].[0]", None) - - - - def testConstrainedList(self): - self.conform2(join(bOPEN('list',1), bINT(1), bINT(2), - bCLOSE(1)), - [1,2], - schema.ListOf(int)) - self.violate2(join(bOPEN('list',1), bINT(1), "\x01\x82b", - bCLOSE(1)), - ".[1]", - schema.ListOf(int)) - self.conform2(join(bOPEN('list',1), - bINT(1), bINT(2), bINT(3), - bCLOSE(1)), - [1,2,3], - schema.ListOf(int, maxLength=3)) - self.violate2(join(bOPEN('list',1), - bINT(1), bINT(2), bINT(3), bINT(4), - bCLOSE(1)), - ".[3]", - schema.ListOf(int, maxLength=3)) - a100 = chr(100) + "\x82" + "a"*100 - b100 = chr(100) + "\x82" + "b"*100 - self.conform2(join(bOPEN('list',1), a100, b100, bCLOSE(1)), - ["a"*100, "b"*100], - schema.ListOf(schema.StringConstraint(100), 2)) - self.violate2(join(bOPEN('list',1), a100, b100, bCLOSE(1)), - 
".[0]", - schema.ListOf(schema.StringConstraint(99), 2)) - self.violate2(join(bOPEN('list',1), a100, b100, a100, bCLOSE(1)), - ".[2]", - schema.ListOf(schema.StringConstraint(100), 2)) - - self.conform2(join(bOPEN('list',1), - bOPEN('list',2), - bINT(11), bINT(12), - bCLOSE(2), - bOPEN('list',3), - bINT(21), bINT(22), bINT(23), - bCLOSE(3), - bCLOSE(1)), - [[11,12], [21, 22, 23]], - schema.ListOf(schema.ListOf(int, maxLength=3))) - - self.violate2(join(bOPEN('list',1), - bOPEN('list',2), - bINT(11), bINT(12), - bCLOSE(2), - bOPEN('list',3), - bINT(21), bINT(22), bINT(23), - bCLOSE(3), - bCLOSE(1)), - ".[1].[2]", - schema.ListOf(schema.ListOf(int, maxLength=2))) - - def testConstrainedTuple(self): - self.conform2(join(bOPEN('tuple',1), bINT(1), bINT(2), - bCLOSE(1)), - (1,2), - schema.TupleOf(int, int)) - self.violate2(join(bOPEN('tuple',1), - bINT(1), bINT(2), bINT(3), - bCLOSE(1)), - ".[2]", - schema.TupleOf(int, int)) - self.violate2(join(bOPEN('tuple',1), - bINT(1), bSTR("not a number"), - bCLOSE(1)), - ".[1]", - schema.TupleOf(int, int)) - self.conform2(join(bOPEN('tuple',1), - bINT(1), bSTR("twine"), - bCLOSE(1)), - (1, "twine"), - schema.TupleOf(int, str)) - self.conform2(join(bOPEN('tuple',1), - bINT(1), - bOPEN('list',2), - bINT(1), bINT(2), bINT(3), - bCLOSE(2), - bCLOSE(1)), - (1, [1,2,3]), - schema.TupleOf(int, schema.ListOf(int))) - self.conform2(join(bOPEN('tuple',1), - bINT(1), - bOPEN('list',2), - bOPEN('list',3), bINT(2), bCLOSE(3), - bOPEN('list',4), bINT(3), bCLOSE(4), - bCLOSE(2), - bCLOSE(1)), - (1, [[2], [3]]), - schema.TupleOf(int, schema.ListOf(schema.ListOf(int)))) - self.violate2(join(bOPEN('tuple',1), - bINT(1), - bOPEN('list',2), - bOPEN('list',3), - bSTR("nan"), - bCLOSE(3), - bOPEN('list',4), bINT(3), bCLOSE(4), - bCLOSE(2), - bCLOSE(1)), - ".[1].[0].[0]", - schema.TupleOf(int, schema.ListOf(schema.ListOf(int)))) - - def testConstrainedDict(self): - self.conform2(join(bOPEN('dict',1), - bINT(1), bSTR("a"), - bINT(2), bSTR("b"), - 
bINT(3), bSTR("c"), - bCLOSE(1)), - {1:"a", 2:"b", 3:"c"}, - schema.DictOf(int, str)) - self.conform2(join(bOPEN('dict',1), - bINT(1), bSTR("a"), - bINT(2), bSTR("b"), - bINT(3), bSTR("c"), - bCLOSE(1)), - {1:"a", 2:"b", 3:"c"}, - schema.DictOf(int, str, maxKeys=3)) - self.violate2(join(bOPEN('dict',1), - bINT(1), bSTR("a"), - bINT(2), bINT(10), - bINT(3), bSTR("c"), - bCLOSE(1)), - ".{}[2]", - schema.DictOf(int, str)) - self.violate2(join(bOPEN('dict',1), - bINT(1), bSTR("a"), - bINT(2), bSTR("b"), - bINT(3), bSTR("c"), - bCLOSE(1)), - ".{}", - schema.DictOf(int, str, maxKeys=2)) - - def TRUE(self): - return join(bOPEN("boolean",2), bINT(1), bCLOSE(2)) - def FALSE(self): - return join(bOPEN("boolean",2), bINT(0), bCLOSE(2)) - - def testConstrainedBool(self): - self.conform2(self.TRUE(), - True, - bool) - self.conform2(self.TRUE(), - True, - schema.BooleanConstraint()) - self.conform2(self.FALSE(), - False, - schema.BooleanConstraint()) - - # booleans have ints, not strings. To do otherwise is a protocol - # error, not a schema Violation. 
- f = self.shouldDropConnection(join(bOPEN("boolean",1), - bSTR("vrai"), - bCLOSE(1))) - self.failUnlessEqual(f.value.args[0], - "BooleanUnslicer only accepts an INT token") - - # but true/false is a constraint, and is reported with Violation - self.violate2(self.TRUE(), - ".", - schema.BooleanConstraint(False)) - self.violate2(self.FALSE(), - ".", - schema.BooleanConstraint(True)) - - -class A: - """ - dummy class - """ - def amethod(self): - pass - def __cmp__(self, other): - if not type(other) == type(self): - return -1 - return cmp(self.__dict__, other.__dict__) - -class B(object): - # new-style class - def amethod(self): - pass - def __cmp__(self, other): - if not type(other) == type(self): - return -1 - return cmp(self.__dict__, other.__dict__) - -def afunc(self): - pass - -class ThereAndBackAgain(TestBananaMixin, unittest.TestCase): - - def test_int(self): - d = self.looptest(42) - d.addCallback(lambda res: self.looptest(-47)) - return d - - def test_bigint(self): - # some of these are small enough to fit in an INT - d = self.looptest(int(2**31-1)) # most positive representable number - d.addCallback(lambda res: self.looptest(long(2**31+0))) - d.addCallback(lambda res: self.looptest(long(2**31+1))) - - d.addCallback(lambda res: self.looptest(long(-2**31-1))) - # the following is the most negative representable number - d.addCallback(lambda res: self.looptest(int(-2**31+0))) - d.addCallback(lambda res: self.looptest(int(-2**31+1))) - - d.addCallback(lambda res: self.looptest(long(2**100))) - d.addCallback(lambda res: self.looptest(long(-2**100))) - d.addCallback(lambda res: self.looptest(long(2**1000))) - d.addCallback(lambda res: self.looptest(long(-2**1000))) - return d - - def test_string(self): - return self.looptest("biggles") - def test_unicode(self): - return self.looptest(u"biggles\u1234") - def test_list(self): - return self.looptest([1,2]) - def test_tuple(self): - return self.looptest((1,2)) - def test_set(self): - d = self.looptest(set([1,2])) - 
d.addCallback(lambda res: self.looptest(frozenset([1,2]))) - # and verify that old sets turn into modern ones, which is - # unfortunate but at least consistent - d.addCallback(lambda res: self.looptest(sets.Set([1,2]), set([1,2]))) - d.addCallback(lambda res: self.looptest(sets.ImmutableSet([1,2]), - frozenset([1,2]))) - return d - - def test_bool(self): - d = self.looptest(True) - d.addCallback(lambda res: self.looptest(False)) - return d - def test_float(self): - return self.looptest(20.3) - def test_none(self): - d = self.loop(None) - d.addCallback(lambda n2: self.failUnless(n2 is None)) - return d - def test_dict(self): - return self.looptest({'a':1}) - - def test_func(self): - return self.looptest(afunc) - def test_module(self): - return self.looptest(unittest) - def test_instance(self): - a = A() - return self.looptest(a) - def test_instance_newstyle(self): - raise unittest.SkipTest("new-style classes still broken") - b = B() - return self.looptest(b) - - def test_class(self): - return self.looptest(A) - def test_class_newstyle(self): - raise unittest.SkipTest("new-style classes still broken") - return self.looptest(B) - - def test_boundMethod(self): - a = A() - m1 = a.amethod - d = self.loop(m1) - d.addCallback(self._test_boundMethod_1, m1) - return d - def _test_boundMethod_1(self, m2, m1): - self.failUnlessEqual(m1.im_class, m2.im_class) - self.failUnlessEqual(m1.im_self, m2.im_self) - self.failUnlessEqual(m1.im_func, m2.im_func) - - def test_boundMethod_newstyle(self): - raise unittest.SkipTest("new-style classes still broken") - b = B() - m1 = b.amethod - d = self.loop(m1) - d.addCallback(self._test_boundMethod_newstyle, m1) - return d - def _test_boundMethod_newstyle(self, m2, m1): - self.failUnlessEqual(m1.im_class, m2.im_class) - self.failUnlessEqual(m1.im_self, m2.im_self) - self.failUnlessEqual(m1.im_func, m2.im_func) - - def test_classMethod(self): - return self.looptest(A.amethod) - - def test_classMethod_newstyle(self): - raise 
unittest.SkipTest("new-style classes still broken") - return self.looptest(B.amethod) - - # some stuff from test_newjelly - def testIdentity(self): - # test to make sure that objects retain identity properly - x = [] - y = (x,) - x.append(y) - x.append(y) - self.assertIdentical(x[0], x[1]) - self.assertIdentical(x[0][0], x) - d = self.encode(x) - d.addCallback(self.shouldDecode) - d.addCallback(self._testIdentity_1) - return d - def _testIdentity_1(self, z): - self.assertIdentical(z[0], z[1]) - self.assertIdentical(z[0][0], z) - - def testUnicode(self): - x = [unicode('blah')] - d = self.loop(x) - d.addCallback(self._testUnicode_1, x) - return d - def _testUnicode_1(self, y, x): - self.assertEquals(x, y) - self.assertEquals(type(x[0]), type(y[0])) - - def testStressReferences(self): - reref = [] - toplevelTuple = ({'list': reref}, reref) - reref.append(toplevelTuple) - d = self.loop(toplevelTuple) - d.addCallback(self._testStressReferences_1) - return d - def _testStressReferences_1(self, z): - self.assertIdentical(z[0]['list'], z[1]) - self.assertIdentical(z[0]['list'][0], z) - - def test_cycles_1(self): - # a list that contains a tuple that can't be referenced yet - a = [] - t1 = (a,) - t2 = (t1,) - a.append(t2) - d = self.loop(t1) - d.addCallback(lambda z: self.assertIdentical(z[0][0][0], z)) - return d - - def test_cycles_2(self): - # a dict that contains a tuple that can't be referenced yet. - a = {} - t1 = (a,) - t2 = (t1,) - a['foo'] = t2 - d = self.loop(t1) - d.addCallback(lambda z: self.assertIdentical(z[0]['foo'][0], z)) - return d - - def test_cycles_3(self): - # sets seem to be transitively immutable: any mutable contents would - # be unhashable, and sets can only contain hashable objects. - # Therefore sets cannot participate in cycles the way that tuples - # can. - - # a set that contains a tuple that can't be referenced yet. 
You can't - # actually create this in python, because you can only create a set - # out of hashable objects, and sets aren't hashable, and a tuple that - # contains a set is not hashable. - a = set() - t1 = (a,) - t2 = (t1,) - a.add(t2) - d = self.loop(t1) - d.addCallback(lambda z: self.assertIdentical(list(z[0])[0][0], z)) - - # a list that contains a frozenset that can't be referenced yet - a = [] - t1 = frozenset([a]) - t2 = frozenset([t1]) - a.append(t2) - d = self.loop(t1) - d.addCallback(lambda z: - self.assertIdentical(list(list(z)[0][0])[0], z)) - - # a dict that contains a frozenset that can't be referenced yet. - a = {} - t1 = frozenset([a]) - t2 = frozenset([t1]) - a['foo'] = t2 - d = self.loop(t1) - d.addCallback(lambda z: - self.assertIdentical(list(list(z)[0]['foo'])[0], z)) - - # a set that contains a frozenset that can't be referenced yet. - a = set() - t1 = frozenset([a]) - t2 = frozenset([t1]) - a.add(t2) - d = self.loop(t1) - d.addCallback(lambda z: self.assertIdentical(list(list(list(z)[0])[0])[0], z)) - return d - del test_cycles_3 - - - -class VocabTest1(unittest.TestCase): - def test_incoming1(self): - b = TokenBanana() - vdict = {1: 'list', 2: 'tuple', 3: 'dict'} - keys = vdict.keys() - keys.sort() - setVdict = [tOPEN(0),'set-vocab'] - for k in keys: - setVdict.append(k) - setVdict.append(vdict[k]) - setVdict.append(tCLOSE(0)) - b.processTokens(setVdict) - # banana should now know this vocabulary - self.failUnlessEqual(b.incomingVocabulary, vdict) - - def test_outgoing(self): - b = TokenBanana() - strings = ["list", "tuple", "dict"] - vdict = {0: 'list', 1: 'tuple', 2: 'dict'} - keys = vdict.keys() - keys.sort() - setVdict = [tOPEN(0),'set-vocab'] - for k in keys: - setVdict.append(k) - setVdict.append(vdict[k]) - setVdict.append(tCLOSE(0)) - b.setOutgoingVocabulary(strings) - vocabTokens = b.getTokens() - self.failUnlessEqual(vocabTokens, setVdict) - -class VocabTest2(TestBananaMixin, unittest.TestCase): - def vbOPEN(self, count, opentype): 
- num = self.invdict[opentype] - return chr(count) + "\x88" + chr(num) + "\x87" - - def test_loop(self): - strings = ["list", "tuple", "dict"] - vdict = {0: 'list', 1: 'tuple', 2: 'dict'} - self.invdict = dict(zip(vdict.values(), vdict.keys())) - - self.banana.setOutgoingVocabulary(strings) - # this next check only happens to work because there is nothing to - # keep serialization from completing synchronously. If Banana - # acquires some eventual-sends, this test might need to be rewritten. - self.failUnlessEqual(self.banana.outgoingVocabulary, self.invdict) - self.shouldDecode(self.banana.transport.getvalue()) - self.failUnlessEqual(self.banana.incomingVocabulary, vdict) - self.clearOutput() - - vbOPEN = self.vbOPEN - expected = "".join([vbOPEN(1,"list"), - vbOPEN(2,"tuple"), - vbOPEN(3,"dict"), - bSTR('a'), bINT(1), - bCLOSE(3), - bCLOSE(2), - bCLOSE(1)]) - d = self.encode([({'a':1},)]) - d.addCallback(self.wantEqual, expected) - return d - - -class SliceableByItself(slicer.BaseSlicer): - def __init__(self, value): - self.value = value - def slice(self, streamable, banana): - self.streamable = streamable - # this is our "instance state" - yield {"value": self.value} - -class CouldBeSliceable: - def __init__(self, value): - self.value = value - -class _AndICanHelp(slicer.BaseSlicer): - def slice(self, streamable, banana): - self.streamable = streamable - yield {"value": self.obj.value} -registerAdapter(_AndICanHelp, CouldBeSliceable, ISlicer) - -class Sliceable(unittest.TestCase): - def setUp(self): - self.banana = TokenBanana() - def do(self, obj): - return self.banana.testSlice(obj) - def tearDown(self): - self.failUnless(len(self.banana.slicerStack) == 1) - self.failUnless(isinstance(self.banana.slicerStack[0][0], RootSlicer)) - - def testDirect(self): - # the object is its own Slicer - i = SliceableByItself(42) - d = self.do(i) - d.addCallback(self.failUnlessEqual, - [tOPEN(0), - tOPEN(1), "dict", "value", 42, tCLOSE(1), - tCLOSE(0)]) - return d - - def 
testAdapter(self): - # the adapter is the Slicer - i = CouldBeSliceable(43) - d = self.do(i) - d.addCallback(self.failUnlessEqual, - [tOPEN(0), - tOPEN(1), "dict", "value", 43, tCLOSE(1), - tCLOSE(0)]) - return d - - - -# TODO: vocab test: -# send a bunch of strings -# send an object that stalls -# send some more strings -# set the Vocab table to tokenize some of those strings -# send yet more strings -# unstall serialization, let everything flow through, verify diff --git a/src/foolscap/foolscap/test/test_call.py b/src/foolscap/foolscap/test/test_call.py deleted file mode 100644 index 745f9ed5..00000000 --- a/src/foolscap/foolscap/test/test_call.py +++ /dev/null @@ -1,444 +0,0 @@ - -import gc -import re -import sets -import sys - -if False: - from twisted.python import log - log.startLogging(sys.stderr) - -from twisted.python import failure, log -from twisted.trial import unittest -from twisted.internet.main import CONNECTION_LOST - -from foolscap.tokens import Violation -from foolscap.eventual import flushEventualQueue -from foolscap.test.common import HelperTarget, TargetMixin -from foolscap.test.common import RIMyTarget, Target, TargetWithoutInterfaces, \ - BrokenTarget - -class Unsendable: - pass - - -class TestCall(TargetMixin, unittest.TestCase): - def setUp(self): - TargetMixin.setUp(self) - self.setupBrokers() - - def testCall1(self): - # this is done without interfaces - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("add", a=1, b=2) - d.addCallback(lambda res: self.failUnlessEqual(res, 3)) - d.addCallback(lambda res: self.failUnlessEqual(target.calls, [(1,2)])) - d.addCallback(self._testCall1_1, rr) - return d - def _testCall1_1(self, res, rr): - # the caller still holds the RemoteReference - self.failUnless(self.callingBroker.yourReferenceByCLID.has_key(1)) - - # release the RemoteReference. This does two things: 1) the - # callingBroker will forget about it. 
2) they will send a decref to - # the targetBroker so *they* can forget about it. - del rr # this fires a DecRef - gc.collect() # make sure - - # we need to give it a moment to deliver the DecRef message and act - # on it. Poll until the caller has received it. - def _check(): - if self.callingBroker.yourReferenceByCLID.has_key(1): - return False - return True - d = self.poll(_check) - d.addCallback(self._testCall1_2) - return d - def _testCall1_2(self, res): - self.failIf(self.callingBroker.yourReferenceByCLID.has_key(1)) - self.failIf(self.targetBroker.myReferenceByCLID.has_key(1)) - - def testCall1a(self): - # no interfaces, but use positional args - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("add", 1, 2) - d.addCallback(lambda res: self.failUnlessEqual(res, 3)) - d.addCallback(lambda res: self.failUnlessEqual(target.calls, [(1,2)])) - return d - testCall1a.timeout = 2 - - def testCall1b(self): - # no interfaces, use both positional and keyword args - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("add", 1, b=2) - d.addCallback(lambda res: self.failUnlessEqual(res, 3)) - d.addCallback(lambda res: self.failUnlessEqual(target.calls, [(1,2)])) - return d - testCall1b.timeout = 2 - - def testFail1(self): - # this is done without interfaces - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("fail") - self.failIf(target.calls) - d.addBoth(self._testFail1_1) - return d - testFail1.timeout = 2 - def _testFail1_1(self, f): - # f should be a CopiedFailure - self.failUnless(isinstance(f, failure.Failure), - "Hey, we didn't fail: %s" % f) - self.failUnless(f.check(ValueError), - "wrong exception type: %s" % f) - self.failUnlessSubstring("you asked me to fail", f.value) - - def testFail2(self): - # this is done without interfaces - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("add", a=1, b=2, c=3) - # add() does not take a 'c' argument, so we get a 
TypeError here - self.failIf(target.calls) - d.addBoth(self._testFail2_1) - return d - testFail2.timeout = 2 - def _testFail2_1(self, f): - self.failUnless(isinstance(f, failure.Failure), - "Hey, we didn't fail: %s" % f) - self.failUnless(f.check(TypeError), - "wrong exception type: %s" % f.type) - self.failUnlessSubstring("remote_add() got an unexpected keyword " - "argument 'c'", f.value) - - def testFail3(self): - # this is done without interfaces - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("bogus", a=1, b=2) - # the target does not have a .bogus method, so we get an AttributeError - self.failIf(target.calls) - d.addBoth(self._testFail3_1) - return d - testFail3.timeout = 2 - def _testFail3_1(self, f): - self.failUnless(isinstance(f, failure.Failure), - "Hey, we didn't fail: %s" % f) - self.failUnless(f.check(AttributeError), - "wrong exception type: %s" % f.type) - self.failUnlessSubstring("TargetWithoutInterfaces", str(f)) - self.failUnlessSubstring(" has no attribute 'remote_bogus'", str(f)) - - def testFailStringException(self): - # make sure we handle string exceptions correctly - if sys.version_info >= (2,5): - log.msg("skipping test: string exceptions are deprecated in 2.5") - return - rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("failstring") - self.failIf(target.calls) - d.addBoth(self._testFailStringException_1) - return d - def _testFailStringException_1(self, f): - # f should be a CopiedFailure - self.failUnless(isinstance(f, failure.Failure), - "Hey, we didn't fail: %s" % f) - self.failUnless(f.check("string exceptions are annoying"), - "wrong exception type: %s" % f) - - def testCopiedFailure(self): - # A calls B, who calls C. C fails. B gets a CopiedFailure and reports - # it back to A. What does A get?
- rr, target = self.setupTarget(TargetWithoutInterfaces()) - d = rr.callRemote("fail_remotely", target) - def _check(f): - # f should be a CopiedFailure - self.failUnless(isinstance(f, failure.Failure), - "Hey, we didn't fail: %s" % f) - self.failUnless(f.check(ValueError), - "wrong exception type: %s" % f) - self.failUnlessSubstring("you asked me to fail", f.value) - d.addBoth(_check) - return d - - def testCall2(self): - # server end uses an interface this time, but not the client end - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("add", a=3, b=4, _useSchema=False) - # the schema is enforced upon receipt - d.addCallback(lambda res: self.failUnlessEqual(res, 7)) - return d - testCall2.timeout = 2 - - def testCall3(self): - # use interface on both sides - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote('add', 3, 4) # enforces schemas - d.addCallback(lambda res: self.failUnlessEqual(res, 7)) - return d - testCall3.timeout = 2 - - def testCall4(self): - # call through a manually-defined RemoteMethodSchema - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("add", 3, 4, _methodConstraint=RIMyTarget['add1']) - d.addCallback(lambda res: self.failUnlessEqual(res, 7)) - return d - testCall4.timeout = 2 - - def testMegaSchema(self): - # try to exercise all our constraints at once - rr, target = self.setupTarget(HelperTarget()) - t = (sets.Set([1, 2, 3]), - "str", True, 12, 12L, 19.3, None, - u"unicode", - "bytestring", - "any", 14.3, - 15, - "a"*95, - "1234567890", - ) - obj1 = {"key": [t]} - obj2 = (sets.Set([1,2,3]), [1,2,3], {1:"two"}) - d = rr.callRemote("megaschema", obj1, obj2) - d.addCallback(lambda res: self.failUnlessEqual(res, None)) - return d - - def testUnconstrainedMethod(self): - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote('free', 3, 4, x="boo") - def _check(res): - self.failUnlessEqual(res, "bird") - self.failUnlessEqual(target.calls, [((3,4), {"x": "boo"})]) - 
d.addCallback(_check) - return d - - def testFailWrongMethodLocal(self): - # the caller knows that this method does not really exist - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("bogus") # RIMyTarget doesn't implement .bogus() - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongMethodLocal_1) - return d - testFailWrongMethodLocal.timeout = 2 - def _testFailWrongMethodLocal_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnless(re.search(r'RIMyTarget\(.*\) does not offer bogus', - str(f))) - - def testFailWrongMethodRemote(self): - # if the target doesn't specify any remote interfaces, then the - # calling side shouldn't try to do any checking. The problem is - # caught on the target side. - rr, target = self.setupTarget(Target(), False) - d = rr.callRemote("bogus") # RIMyTarget doesn't implement .bogus() - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongMethodRemote_1) - return d - testFailWrongMethodRemote.timeout = 2 - def _testFailWrongMethodRemote_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnlessSubstring("method 'bogus' not defined in RIMyTarget", - str(f)) - - def testFailWrongMethodRemote2(self): - # call a method which doesn't actually exist. 
The sender thinks - # they're ok but the recipient catches the violation - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("bogus", _useSchema=False) - # RIMyTarget2 has a 'sub' method, but RIMyTarget (the real interface) - # does not - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongMethodRemote2_1) - d.addCallback(lambda res: self.failIf(target.calls)) - return d - testFailWrongMethodRemote2.timeout = 2 - def _testFailWrongMethodRemote2_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnless(re.search(r'RIMyTarget\(.*\) does not offer bogus', - str(f))) - - def testFailWrongArgsLocal1(self): - # we violate the interface (extra arg), and the sender should catch it - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("add", a=1, b=2, c=3) - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongArgsLocal1_1) - d.addCallback(lambda res: self.failIf(target.calls)) - return d - testFailWrongArgsLocal1.timeout = 2 - def _testFailWrongArgsLocal1_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnlessSubstring("unknown argument 'c'", str(f.value)) - - def testFailWrongArgsLocal2(self): - # we violate the interface (bad arg), and the sender should catch it - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("add", a=1, b="two") - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongArgsLocal2_1) - d.addCallback(lambda res: self.failIf(target.calls)) - return d - testFailWrongArgsLocal2.timeout = 2 - def _testFailWrongArgsLocal2_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnlessSubstring("not a number", str(f.value)) - - def testFailWrongArgsRemote1(self): - # the sender thinks they're ok but the recipient catches the - # violation - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("add", a=1, b="foo", _useSchema=False) - d.addCallbacks(lambda res: self.fail("should have failed"), - 
self._testFailWrongArgsRemote1_1) - d.addCallbacks(lambda res: self.failIf(target.calls)) - return d - testFailWrongArgsRemote1.timeout = 2 - def _testFailWrongArgsRemote1_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnlessSubstring("STRING token rejected by IntegerConstraint", - f.value) - self.failUnlessSubstring(".", f.value) - - def testFailWrongReturnRemote(self): - rr, target = self.setupTarget(BrokenTarget(), True) - d = rr.callRemote("add", 3, 4) # violates return constraint - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongReturnRemote_1) - return d - testFailWrongReturnRemote.timeout = 2 - def _testFailWrongReturnRemote_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnlessSubstring("in return value of .add", f.value) - self.failUnlessSubstring("not a number", f.value) - - def testFailWrongReturnLocal(self): - # the target returns a value which violates our _resultConstraint - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote("add", a=1, b=2, _resultConstraint=str) - # The target returns an int, which matches the schema they're using, - # so they think they're ok. We've overridden our expectations to - # require a string. 
- d.addCallbacks(lambda res: self.fail("should have failed"), - self._testFailWrongReturnLocal_1) - # the method should have been run - d.addCallback(lambda res: self.failUnless(target.calls)) - return d - testFailWrongReturnLocal.timeout = 2 - def _testFailWrongReturnLocal_1(self, f): - self.failUnless(f.check(Violation)) - self.failUnlessSubstring("INT token rejected by ByteStringConstraint", - str(f)) - self.failUnlessSubstring("in inbound method results", str(f)) - self.failUnlessSubstring(".Answer(req=1)", str(f)) - - - - def testDefer(self): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("defer", obj=12) - d.addCallback(lambda res: self.failUnlessEqual(res, 12)) - return d - testDefer.timeout = 2 - - def testDisconnect_during_call(self): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("hang") - e = RuntimeError("lost connection") - rr.tracker.broker.transport.loseConnection(e) - d.addCallbacks(lambda res: self.fail("should have failed"), - lambda why: why.trap(RuntimeError) and None) - return d - - def disconnected(self, *args, **kwargs): - self.lost = 1 - self.lost_args = (args, kwargs) - - def testNotifyOnDisconnect(self): - rr, target = self.setupTarget(HelperTarget()) - self.lost = 0 - rr.notifyOnDisconnect(self.disconnected) - rr.tracker.broker.transport.loseConnection(CONNECTION_LOST) - d = flushEventualQueue() - def _check(res): - self.failUnless(self.lost) - self.failUnlessEqual(self.lost_args, ((),{})) - # it should be safe to unregister now, even though the callback - # has already fired, since dontNotifyOnDisconnect is tolerant - rr.dontNotifyOnDisconnect(self.disconnected) - d.addCallback(_check) - return d - - def testNotifyOnDisconnect_unregister(self): - rr, target = self.setupTarget(HelperTarget()) - self.lost = 0 - m = rr.notifyOnDisconnect(self.disconnected) - rr.dontNotifyOnDisconnect(m) - # dontNotifyOnDisconnect is supposed to be tolerant of duplicate - # unregisters, because otherwise it is hard 
to avoid race conditions. - # Validate that we can unregister something multiple times. - rr.dontNotifyOnDisconnect(m) - rr.tracker.broker.transport.loseConnection(CONNECTION_LOST) - d = flushEventualQueue() - d.addCallback(lambda res: self.failIf(self.lost)) - return d - - def testNotifyOnDisconnect_args(self): - rr, target = self.setupTarget(HelperTarget()) - self.lost = 0 - rr.notifyOnDisconnect(self.disconnected, "arg", foo="kwarg") - rr.tracker.broker.transport.loseConnection(CONNECTION_LOST) - d = flushEventualQueue() - def _check(res): - self.failUnless(self.lost) - self.failUnlessEqual(self.lost_args, (("arg",), - {"foo": "kwarg"})) - d.addCallback(_check) - return d - - def testNotifyOnDisconnect_already(self): - # make sure notifyOnDisconnect works even if the reference was already - # broken - rr, target = self.setupTarget(HelperTarget()) - self.lost = 0 - rr.tracker.broker.transport.loseConnection(CONNECTION_LOST) - d = flushEventualQueue() - d.addCallback(lambda res: rr.notifyOnDisconnect(self.disconnected)) - d.addCallback(lambda res: flushEventualQueue()) - def _check(res): - self.failUnless(self.lost, "disconnect handler not run") - self.failUnlessEqual(self.lost_args, ((),{})) - d.addCallback(_check) - return d - - def testUnsendable(self): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set", obj=Unsendable()) - d.addCallbacks(lambda res: self.fail("should have failed"), - self._testUnsendable_1) - return d - testUnsendable.timeout = 2 - def _testUnsendable_1(self, why): - self.failUnless(why.check(Violation)) - self.failUnlessSubstring("cannot serialize", why.value.args[0]) - - -class TestCallOnly(TargetMixin, unittest.TestCase): - def setUp(self): - TargetMixin.setUp(self) - self.setupBrokers() - - def testCallOnly(self): - rr, target = self.setupTarget(TargetWithoutInterfaces()) - ret = rr.callRemoteOnly("add", a=1, b=2) - self.failUnlessIdentical(ret, None) - # since we don't have a Deferred to wait upon, we just have to 
poll - # for the call to take place. It should happen pretty quickly. - def _check(): - if target.calls: - self.failUnlessEqual(target.calls, [(1,2)]) - return True - return False - d = self.poll(_check) - return d - testCallOnly.timeout = 2 diff --git a/src/foolscap/foolscap/test/test_copyable.py b/src/foolscap/foolscap/test/test_copyable.py deleted file mode 100644 index 5830e598..00000000 --- a/src/foolscap/foolscap/test/test_copyable.py +++ /dev/null @@ -1,294 +0,0 @@ - -from twisted.trial import unittest -from twisted.python import components, failure, reflect -from foolscap.test.common import TargetMixin, HelperTarget - -from foolscap import copyable, tokens -from foolscap import Copyable, RemoteCopy -from foolscap.tokens import Violation - -# MyCopyable1 is the basic Copyable/RemoteCopy pair, using auto-registration. - -class MyCopyable1(Copyable): - typeToCopy = "foolscap.test_copyable.MyCopyable1" - pass -class MyRemoteCopy1(RemoteCopy): - copytype = MyCopyable1.typeToCopy - pass -#registerRemoteCopy(MyCopyable1.typeToCopy, MyRemoteCopy1) - -# MyCopyable2 overrides the various Copyable/RemoteCopy methods. 
It -# also sets 'copytype' to auto-register with a matching name - -class MyCopyable2(Copyable): - def getTypeToCopy(self): - return "MyCopyable2name" - def getStateToCopy(self): - return {"a": 1, "b": self.b} -class MyRemoteCopy2(RemoteCopy): - copytype = "MyCopyable2name" - def setCopyableState(self, state): - self.c = 1 - self.d = state["b"] - -# MyCopyable3 uses a custom Slicer and a custom Unslicer - -class MyCopyable3: - def getAlternateCopyableState(self): - return {"e": 2} - -class MyCopyable3Slicer(copyable.CopyableSlicer): - def slice(self, streamable, banana): - yield 'copyable' - yield "MyCopyable3name" - state = self.obj.getAlternateCopyableState() - for k,v in state.iteritems(): - yield k - yield v - -class MyRemoteCopy3: - pass -class MyRemoteCopy3Unslicer(copyable.RemoteCopyUnslicer): - def __init__(self): - self.schema = None - def factory(self, state): - obj = MyRemoteCopy3() - obj.__dict__ = state - return obj - def receiveClose(self): - obj,d = copyable.RemoteCopyUnslicer.receiveClose(self) - obj.f = "yes" - return obj, d - -# register MyCopyable3Slicer as an ISlicer adapter for MyCopyable3, so we -# can verify that it overrides the inherited CopyableSlicer behavior. We -# also register an Unslicer to create the results. 
-components.registerAdapter(MyCopyable3Slicer, MyCopyable3, tokens.ISlicer) -copyable.registerRemoteCopyUnslicerFactory("MyCopyable3name", - MyRemoteCopy3Unslicer) - - -# MyCopyable4 uses auto-registration, and adds a stateSchema - -class MyCopyable4(Copyable): - typeToCopy = "foolscap.test_copyable.MyCopyable4" - pass -class MyRemoteCopy4(RemoteCopy): - copytype = MyCopyable4.typeToCopy - stateSchema = copyable.AttributeDictConstraint(('foo', int), - ('bar', str)) - pass - -# MyCopyable5 disables auto-registration - -class MyRemoteCopy5(RemoteCopy): - copytype = None # disable auto-registration - - -class Copyable(TargetMixin, unittest.TestCase): - - def setUp(self): - TargetMixin.setUp(self) - self.setupBrokers() - if 0: - print - self.callingBroker.doLog = "TX" - self.targetBroker.doLog = " rx" - - def send(self, arg): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set", obj=arg) - d.addCallback(self.failUnless) - # some of these tests require that we return a Failure object, so we - # have to wrap this in a tuple to survive the Deferred. 
- d.addCallback(lambda res: (target.obj,)) - return d - - def testCopy0(self): - d = self.send(1) - d.addCallback(self.failUnlessEqual, (1,)) - return d - - def testFailure1(self): - self.callingBroker.unsafeTracebacks = True - try: - raise RuntimeError("message here") - except: - f0 = failure.Failure() - d = self.send(f0) - d.addCallback(self._testFailure1_1) - return d - def _testFailure1_1(self, (f,)): - #print "CopiedFailure is:", f - #print f.__dict__ - self.failUnlessEqual(reflect.qual(f.type), "exceptions.RuntimeError") - self.failUnless(f.check, RuntimeError) - self.failUnlessEqual(f.value, "message here") - self.failUnlessEqual(f.frames, []) - self.failUnlessEqual(f.tb, None) - self.failUnlessEqual(f.stack, []) - # there should be a traceback - self.failUnless(f.traceback.find("raise RuntimeError") != -1, - "no 'raise RuntimeError' in '%s'" % (f.traceback,)) - - def testFailure2(self): - self.callingBroker.unsafeTracebacks = False - try: - raise RuntimeError("message here") - except: - f0 = failure.Failure() - d = self.send(f0) - d.addCallback(self._testFailure2_1) - return d - def _testFailure2_1(self, (f,)): - #print "CopiedFailure is:", f - #print f.__dict__ - self.failUnlessEqual(reflect.qual(f.type), "exceptions.RuntimeError") - self.failUnless(f.check, RuntimeError) - self.failUnlessEqual(f.value, "message here") - self.failUnlessEqual(f.frames, []) - self.failUnlessEqual(f.tb, None) - self.failUnlessEqual(f.stack, []) - # there should not be a traceback - self.failUnlessEqual(f.traceback, "Traceback unavailable\n") - - def testCopy1(self): - obj = MyCopyable1() # just copies the dict - obj.a = 12 - obj.b = "foo" - d = self.send(obj) - d.addCallback(self._testCopy1_1) - return d - def _testCopy1_1(self, (res,)): - self.failUnless(isinstance(res, MyRemoteCopy1)) - self.failUnlessEqual(res.a, 12) - self.failUnlessEqual(res.b, "foo") - - def testCopy2(self): - obj = MyCopyable2() # has a custom getStateToCopy - obj.a = 12 # ignored - obj.b = "foo" - d = 
self.send(obj) - d.addCallback(self._testCopy2_1) - return d - def _testCopy2_1(self, (res,)): - self.failUnless(isinstance(res, MyRemoteCopy2)) - self.failUnlessEqual(res.c, 1) - self.failUnlessEqual(res.d, "foo") - self.failIf(hasattr(res, "a")) - - def testCopy3(self): - obj = MyCopyable3() # has a custom Slicer - obj.a = 12 # ignored - obj.b = "foo" # ignored - d = self.send(obj) - d.addCallback(self._testCopy3_1) - return d - def _testCopy3_1(self, (res,)): - self.failUnless(isinstance(res, MyRemoteCopy3)) - self.failUnlessEqual(res.e, 2) - self.failUnlessEqual(res.f, "yes") - self.failIf(hasattr(res, "a")) - - def testCopy4(self): - obj = MyCopyable4() - obj.foo = 12 - obj.bar = "bar" - d = self.send(obj) - d.addCallback(self._testCopy4_1, obj) - return d - def _testCopy4_1(self, (res,), obj): - self.failUnless(isinstance(res, MyRemoteCopy4)) - self.failUnlessEqual(res.foo, 12) - self.failUnlessEqual(res.bar, "bar") - - obj.bad = "unwanted attribute" - d = self.send(obj) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._testCopy4_2, errbackArgs=(obj,)) - return d - def _testCopy4_2(self, why, obj): - why.trap(Violation) - self.failUnlessSubstring("unknown attribute 'bad'", str(why)) - del obj.bad - - obj.foo = "not a number" - d = self.send(obj) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._testCopy4_3, errbackArgs=(obj,)) - return d - def _testCopy4_3(self, why, obj): - why.trap(Violation) - self.failUnlessSubstring("STRING token rejected by IntegerConstraint", - str(why)) - - obj.foo = 12 - obj.bar = "very long " * 1000 - d = self.send(obj) - d.addCallbacks(lambda res: self.fail("this was supposed to fail"), - self._testCopy4_4) - return d - def _testCopy4_4(self, why): - why.trap(Violation) - self.failUnlessSubstring("token too large", str(why)) - -class Registration(unittest.TestCase): - def testRegistration(self): - rc_classes = copyable.debug_RemoteCopyClasses - copyable_classes = 
rc_classes.values() - self.failUnless(MyRemoteCopy1 in copyable_classes) - self.failUnless(MyRemoteCopy2 in copyable_classes) - self.failUnlessIdentical(rc_classes["MyCopyable2name"], - MyRemoteCopy2) - self.failIf(MyRemoteCopy5 in copyable_classes) - - -############## -# verify that ICopyable adapters are actually usable - - -class TheThirdPartyClassThatIWantToCopy: - def __init__(self, a, b): - self.a = a - self.b = b - -def copy_ThirdPartyClass(orig): - return "TheThirdPartyClassThatIWantToCopy_name", orig.__dict__ -copyable.registerCopier(TheThirdPartyClassThatIWantToCopy, - copy_ThirdPartyClass) - -def make_ThirdPartyClass(state): - # unpack the state into constructor arguments - a = state['a']; b = state['b'] - # now create the object with the constructor - return TheThirdPartyClassThatIWantToCopy(a, b) -copyable.registerRemoteCopyFactory("TheThirdPartyClassThatIWantToCopy_name", - make_ThirdPartyClass) - -class Adaptation(TargetMixin, unittest.TestCase): - def setUp(self): - TargetMixin.setUp(self) - self.setupBrokers() - if 0: - print - self.callingBroker.doLog = "TX" - self.targetBroker.doLog = " rx" - def send(self, arg): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set", obj=arg) - d.addCallback(self.failUnless) - # some of these tests require that we return a Failure object, so we - # have to wrap this in a tuple to survive the Deferred. 
- d.addCallback(lambda res: (target.obj,)) - return d - - def testAdaptation(self): - obj = TheThirdPartyClassThatIWantToCopy(45, 91) - d = self.send(obj) - d.addCallback(self._testAdaptation_1) - return d - def _testAdaptation_1(self, (res,)): - self.failUnless(isinstance(res, TheThirdPartyClassThatIWantToCopy)) - self.failUnlessEqual(res.a, 45) - self.failUnlessEqual(res.b, 91) - diff --git a/src/foolscap/foolscap/test/test_crypto.py b/src/foolscap/foolscap/test/test_crypto.py deleted file mode 100644 index acf2b8ce..00000000 --- a/src/foolscap/foolscap/test/test_crypto.py +++ /dev/null @@ -1,197 +0,0 @@ - -import re -from twisted.trial import unittest - -from zope.interface import implements -from twisted.internet import defer -from foolscap import pb -from foolscap import RemoteInterface, Referenceable, Tub -from foolscap.remoteinterface import RemoteMethodSchema -from foolscap.eventual import flushEventualQueue - -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - -class RIMyCryptoTarget(RemoteInterface): - # method constraints can be declared directly: - add1 = RemoteMethodSchema(_response=int, a=int, b=int) - - # or through their function definitions: - def add(a=int, b=int): return int - #add = schema.callable(add) # the metaclass makes this unnecessary - # but it could be used for adding options or something - def join(a=str, b=str, c=int): return str - def getName(): return str - -class Target(Referenceable): - implements(RIMyCryptoTarget) - - def __init__(self, name=None): - self.calls = [] - self.name = name - def getMethodSchema(self, methodname): - return None - def remote_add(self, a, b): - self.calls.append((a,b)) - return a+b - remote_add1 = remote_add - def remote_getName(self): - return self.name - def remote_disputed(self, a): - return 24 - def remote_fail(self): - raise ValueError("you asked me to fail") - -class UsefulMixin: - num_services = 2 - def setUp(self): - if 
not crypto_available: - raise unittest.SkipTest("crypto not available") - self.services = [] - for i in range(self.num_services): - s = Tub() - s.startService() - self.services.append(s) - - def tearDown(self): - d = defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(self._tearDown_1) - return d - def _tearDown_1(self, res): - self.failIf(pb.Listeners) - return flushEventualQueue() - -class TestPersist(UsefulMixin, unittest.TestCase): - num_services = 2 - def testPersist(self): - t1 = Target() - s1,s2 = self.services - l1 = s1.listenOn("0") - port = l1.getPortnum() - s1.setLocation("127.0.0.1:%d" % port) - public_url = s1.registerReference(t1, "name") - self.failUnless(public_url.startswith("pb:")) - d = defer.maybeDeferred(s1.stopService) - d.addCallback(self._testPersist_1, s1, s2, t1, public_url, port) - return d - testPersist.timeout = 5 - def _testPersist_1(self, res, s1, s2, t1, public_url, port): - self.services.remove(s1) - s3 = Tub(certData=s1.getCertData()) - s3.startService() - self.services.append(s3) - t2 = Target() - l3 = s3.listenOn("0") - newport = l3.getPortnum() - s3.setLocation("127.0.0.1:%d" % newport) - s3.registerReference(t2, "name") - # now patch the URL to replace the port number - newurl = re.sub(":%d/" % port, ":%d/" % newport, public_url) - d = s2.getReference(newurl) - d.addCallback(lambda rr: rr.callRemote("add", a=1, b=2)) - d.addCallback(self.failUnlessEqual, 3) - d.addCallback(self._testPersist_2, t1, t2) - return d - def _testPersist_2(self, res, t1, t2): - self.failUnlessEqual(t1.calls, []) - self.failUnlessEqual(t2.calls, [(1,2)]) - - -class TestListeners(UsefulMixin, unittest.TestCase): - num_services = 3 - - def testListenOn(self): - s1 = self.services[0] - l = s1.listenOn("0") - self.failUnless(isinstance(l, pb.Listener)) - self.failUnlessEqual(len(s1.getListeners()), 1) - self.failUnlessEqual(len(pb.Listeners), 1) - s1.stopListeningOn(l) - self.failUnlessEqual(len(s1.getListeners()), 0) - 
self.failUnlessEqual(len(pb.Listeners), 0) - - - def testGetPort1(self): - s1,s2,s3 = self.services - s1.listenOn("0") - listeners = s1.getListeners() - self.failUnlessEqual(len(listeners), 1) - portnum = listeners[0].getPortnum() - self.failUnless(portnum) # not 0, not None, must be *something* - - def testGetPort2(self): - s1,s2,s3 = self.services - s1.listenOn("0") - listeners = s1.getListeners() - self.failUnlessEqual(len(listeners), 1) - portnum = listeners[0].getPortnum() - self.failUnless(portnum) # not 0, not None, must be *something* - s1.listenOn("0") # listen on a second port too - l2 = s1.getListeners() - self.failUnlessEqual(len(l2), 2) - self.failIfEqual(l2[0].getPortnum(), l2[1].getPortnum()) - - s2.listenOn(l2[0]) - l3 = s2.getListeners() - self.failUnlessIdentical(l2[0], l3[0]) - self.failUnlessEqual(l2[0].getPortnum(), l3[0].getPortnum()) - - def testShared(self): - s1,s2,s3 = self.services - # s1 and s2 will share a Listener - l1 = s1.listenOn("tcp:0:interface=127.0.0.1") - s1.setLocation("127.0.0.1:%d" % l1.getPortnum()) - s2.listenOn(l1) - s2.setLocation("127.0.0.1:%d" % l1.getPortnum()) - - t1 = Target("one") - t2 = Target("two") - self.targets = [t1,t2] - url1 = s1.registerReference(t1, "target") - url2 = s2.registerReference(t2, "target") - self.urls = [url1, url2] - - d = s3.getReference(url1) - d.addCallback(lambda ref: ref.callRemote('add', a=1, b=1)) - d.addCallback(lambda res: s3.getReference(url2)) - d.addCallback(lambda ref: ref.callRemote('add', a=2, b=2)) - d.addCallback(self._testShared_1) - return d - testShared.timeout = 5 - def _testShared_1(self, res): - t1,t2 = self.targets - self.failUnlessEqual(t1.calls, [(1,1)]) - self.failUnlessEqual(t2.calls, [(2,2)]) - - def testSharedTransfer(self): - s1,s2,s3 = self.services - # s1 and s2 will share a Listener - l1 = s1.listenOn("tcp:0:interface=127.0.0.1") - s1.setLocation("127.0.0.1:%d" % l1.getPortnum()) - s2.listenOn(l1) - s2.setLocation("127.0.0.1:%d" % l1.getPortnum()) - 
self.failUnless(l1.parentTub is s1) - s1.stopListeningOn(l1) - self.failUnless(l1.parentTub is s2) - s3.listenOn(l1) - self.failUnless(l1.parentTub is s2) - d = s2.stopService() - d.addCallback(self._testSharedTransfer_1, l1, s2, s3) - return d - testSharedTransfer.timeout = 5 - def _testSharedTransfer_1(self, res, l1, s2, s3): - self.services.remove(s2) - self.failUnless(l1.parentTub is s3) - - def testClone(self): - s1,s2,s3 = self.services - l1 = s1.listenOn("tcp:0:interface=127.0.0.1") - s1.setLocation("127.0.0.1:%d" % l1.getPortnum()) - s4 = s1.clone() - s4.startService() - self.services.append(s4) - self.failUnlessEqual(s1.getListeners(), s4.getListeners()) diff --git a/src/foolscap/foolscap/test/test_eventual.py b/src/foolscap/foolscap/test/test_eventual.py deleted file mode 100644 index 0ad420e8..00000000 --- a/src/foolscap/foolscap/test/test_eventual.py +++ /dev/null @@ -1,42 +0,0 @@ - -from twisted.trial import unittest - -from foolscap.eventual import eventually, fireEventually, flushEventualQueue - -class TestEventual(unittest.TestCase): - - def tearDown(self): - return flushEventualQueue() - - def testSend(self): - results = [] - eventually(results.append, 1) - self.failIf(results) - def _check(): - self.failUnlessEqual(results, [1]) - eventually(_check) - def _check2(): - self.failUnlessEqual(results, [1,2]) - eventually(results.append, 2) - eventually(_check2) - - def testFlush(self): - results = [] - eventually(results.append, 1) - eventually(results.append, 2) - d = flushEventualQueue() - def _check(res): - self.failUnlessEqual(results, [1,2]) - d.addCallback(_check) - return d - - def testFire(self): - results = [] - fireEventually(1).addCallback(results.append) - fireEventually(2).addCallback(results.append) - self.failIf(results) - def _check(res): - self.failUnlessEqual(results, [1,2]) - d = flushEventualQueue() - d.addCallback(_check) - return d diff --git a/src/foolscap/foolscap/test/test_gifts.py b/src/foolscap/foolscap/test/test_gifts.py 
deleted file mode 100644 index 02a394ec..00000000 --- a/src/foolscap/foolscap/test/test_gifts.py +++ /dev/null @@ -1,555 +0,0 @@ - -from zope.interface import implements -from twisted.trial import unittest -from twisted.internet import defer, protocol, reactor -from twisted.internet.error import ConnectionDone, ConnectionLost, \ - ConnectionRefusedError -from twisted.python import failure -from foolscap import Tub, UnauthenticatedTub, RemoteInterface, Referenceable -from foolscap.referenceable import RemoteReference, SturdyRef -from foolscap.test.common import HelperTarget, RIHelper -from foolscap.eventual import flushEventualQueue -from foolscap.tokens import BananaError, NegotiationError - -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - -# we use authenticated tubs if possible. If crypto is not available, fall -# back to unauthenticated ones -GoodEnoughTub = UnauthenticatedTub -if crypto_available: - GoodEnoughTub = Tub - -def ignoreConnectionDone(f): - f.trap(ConnectionDone, ConnectionLost) - return None - -class RIConstrainedHelper(RemoteInterface): - def set(obj=RIHelper): return None - - -class ConstrainedHelper(Referenceable): - implements(RIConstrainedHelper) - - def __init__(self, name="unnamed"): - self.name = name - - def remote_set(self, obj): - self.obj = obj - -class Base: - - debug = False - - def setUp(self): - self.services = [GoodEnoughTub() for i in range(4)] - self.tubA, self.tubB, self.tubC, self.tubD = self.services - for s in self.services: - s.startService() - l = s.listenOn("tcp:0:interface=127.0.0.1") - s.setLocation("127.0.0.1:%d" % l.getPortnum()) - - def tearDown(self): - d = defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(flushEventualQueue) - return d - - def createCharacters(self): - self.alice = HelperTarget("alice") - self.bob = HelperTarget("bob") - self.bob_url = self.tubB.registerReference(self.bob, "bob") - self.carol 
= HelperTarget("carol") - self.carol_url = self.tubC.registerReference(self.carol, "carol") - # cindy is Carol's little sister. She doesn't have a phone, but - # Carol might talk about her anyway. - self.cindy = HelperTarget("cindy") - # more sisters. Alice knows them, and she introduces Bob to them. - self.charlene = HelperTarget("charlene") - self.christine = HelperTarget("christine") - self.clarisse = HelperTarget("clarisse") - self.colette = HelperTarget("colette") - self.courtney = HelperTarget("courtney") - self.dave = HelperTarget("dave") - self.dave_url = self.tubD.registerReference(self.dave, "dave") - - def createInitialReferences(self): - # we must start by giving Alice a reference to both Bob and Carol. - if self.debug: print "Alice gets Bob" - d = self.tubA.getReference(self.bob_url) - def _aliceGotBob(abob): - if self.debug: print "Alice got bob" - self.abob = abob # Alice's reference to Bob - if self.debug: print "Alice gets carol" - d = self.tubA.getReference(self.carol_url) - return d - d.addCallback(_aliceGotBob) - def _aliceGotCarol(acarol): - if self.debug: print "Alice got carol" - self.acarol = acarol # Alice's reference to Carol - d = self.tubB.getReference(self.dave_url) - return d - d.addCallback(_aliceGotCarol) - def _bobGotDave(bdave): - self.bdave = bdave - d.addCallback(_bobGotDave) - return d - - def createMoreReferences(self): - # give Alice references to Carol's sisters - dl = [] - - url = self.tubC.registerReference(self.charlene, "charlene") - d = self.tubA.getReference(url) - def _got_charlene(rref): - self.acharlene = rref - d.addCallback(_got_charlene) - dl.append(d) - - url = self.tubC.registerReference(self.christine, "christine") - d = self.tubA.getReference(url) - def _got_christine(rref): - self.achristine = rref - d.addCallback(_got_christine) - dl.append(d) - - url = self.tubC.registerReference(self.clarisse, "clarisse") - d = self.tubA.getReference(url) - def _got_clarisse(rref): - self.aclarisse = rref - 
d.addCallback(_got_clarisse) - dl.append(d) - - url = self.tubC.registerReference(self.colette, "colette") - d = self.tubA.getReference(url) - def _got_colette(rref): - self.acolette = rref - d.addCallback(_got_colette) - dl.append(d) - - url = self.tubC.registerReference(self.courtney, "courtney") - d = self.tubA.getReference(url) - def _got_courtney(rref): - self.acourtney = rref - d.addCallback(_got_courtney) - dl.append(d) - - return defer.DeferredList(dl) - - def shouldFail(self, res, expected_failure, which, substring=None): - # attach this with: - # d = something() - # d.addBoth(self.shouldFail, IndexError, "something") - # the 'which' string helps to identify which call to shouldFail was - # triggered, since certain versions of Twisted don't display this - # very well. - - if isinstance(res, failure.Failure): - res.trap(expected_failure) - if substring: - self.failUnless(substring in str(res), - "substring '%s' not in '%s'" - % (substring, str(res))) - else: - self.fail("%s was supposed to raise %s, not get '%s'" % - (which, expected_failure, res)) - -class Gifts(Base, unittest.TestCase): - # Here we test the three-party introduction process as depicted in the - # classic Granovetter diagram. Alice has a reference to Bob and another - # one to Carol. Alice wants to give her Carol-reference to Bob, by - # including it as the argument to a method she invokes on her - # Bob-reference. - - def testGift(self): - #defer.setDebugging(True) - self.createCharacters() - d = self.createInitialReferences() - def _introduce(res): - d2 = self.bob.waitfor() - if self.debug: print "Alice introduces Carol to Bob" - # send the gift. This might not get acked by the time the test is - # done and everything is torn down, so explicitly silence any - # ConnectionDone error that might result. When we get - # callRemoteOnly(), use that instead. 
- d3 = self.abob.callRemote("set", obj=(self.alice, self.acarol)) - d3.addErrback(ignoreConnectionDone) - return d2 # this fires with the gift that bob got - d.addCallback(_introduce) - def _bobGotCarol((balice,bcarol)): - if self.debug: print "Bob got Carol" - self.bcarol = bcarol - if self.debug: print "Bob says something to Carol" - d2 = self.carol.waitfor() - # handle ConnectionDone as described before - d3 = self.bcarol.callRemote("set", obj=12) - d3.addErrback(ignoreConnectionDone) - return d2 - d.addCallback(_bobGotCarol) - def _carolCalled(res): - if self.debug: print "Carol heard from Bob" - self.failUnlessEqual(res, 12) - d.addCallback(_carolCalled) - return d - - def testImplicitGift(self): - # in this test, Carol was registered in her Tub (using - # registerReference), but Cindy was not. Alice is given a reference - # to Carol, then uses that to get a reference to Cindy. Then Alice - # sends a message to Bob and includes a reference to Cindy. The test - # here is that we can make gifts out of references that were not - # passed to registerReference explicitly. - - #defer.setDebugging(True) - self.createCharacters() - # the message from Alice to Bob will include a reference to Cindy - d = self.createInitialReferences() - def _tell_alice_about_cindy(res): - self.carol.obj = self.cindy - cindy_d = self.acarol.callRemote("get") - return cindy_d - d.addCallback(_tell_alice_about_cindy) - def _introduce(a_cindy): - # alice now has references to carol (self.acarol) and cindy - # (a_cindy). She sends both of them (plus a reference to herself) - # to bob. - d2 = self.bob.waitfor() - if self.debug: print "Alice introduces Carol to Bob" - # send the gift. This might not get acked by the time the test is - # done and everything is torn down, so explicitly silence any - # ConnectionDone error that might result. When we get - # callRemoteOnly(), use that instead. 
- d3 = self.abob.callRemote("set", obj=(self.alice, - self.acarol, - a_cindy)) - d3.addErrback(ignoreConnectionDone) - return d2 # this fires with the gift that bob got - d.addCallback(_introduce) - def _bobGotCarol((b_alice,b_carol,b_cindy)): - if self.debug: print "Bob got Carol" - self.failUnless(b_alice) - self.failUnless(b_carol) - self.failUnless(b_cindy) - self.bcarol = b_carol - if self.debug: print "Bob says something to Carol" - d2 = self.carol.waitfor() - if self.debug: print "Bob says something to Cindy" - d3 = self.cindy.waitfor() - - # handle ConnectionDone as described before - d4 = b_carol.callRemote("set", obj=4) - d4.addErrback(ignoreConnectionDone) - d5 = b_cindy.callRemote("set", obj=5) - d5.addErrback(ignoreConnectionDone) - return defer.DeferredList([d2,d3]) - d.addCallback(_bobGotCarol) - def _carolAndCindyCalled(res): - if self.debug: print "Carol heard from Bob" - ((carol_s, carol_result), (cindy_s, cindy_result)) = res - self.failUnless(carol_s) - self.failUnless(cindy_s) - self.failUnlessEqual(carol_result, 4) - self.failUnlessEqual(cindy_result, 5) - d.addCallback(_carolAndCindyCalled) - return d - - # test gifts in return values too - - def testReturn(self): - self.createCharacters() - d = self.createInitialReferences() - def _introduce(res): - self.bob.obj = self.bdave - return self.abob.callRemote("get") - d.addCallback(_introduce) - def _check(adave): - # this ought to be a RemoteReference to dave, usable by alice - self.failUnless(isinstance(adave, RemoteReference)) - return adave.callRemote("set", 12) - d.addCallback(_check) - def _check2(res): - self.failUnlessEqual(self.dave.obj, 12) - d.addCallback(_check2) - return d - - def testReturnInContainer(self): - self.createCharacters() - d = self.createInitialReferences() - def _introduce(res): - self.bob.obj = {"foo": [(set([self.bdave]),)]} - return self.abob.callRemote("get") - d.addCallback(_introduce) - def _check(obj): - adave = list(obj["foo"][0][0])[0] - # this ought to be a 
RemoteReference to dave, usable by alice - self.failUnless(isinstance(adave, RemoteReference)) - return adave.callRemote("set", 12) - d.addCallback(_check) - def _check2(res): - self.failUnlessEqual(self.dave.obj, 12) - d.addCallback(_check2) - return d - - def testOrdering(self): - self.createCharacters() - self.bob.calls = [] - d = self.createInitialReferences() - def _introduce(res): - # we send three messages to Bob. The second one contains the - # reference to Carol. - dl = [] - dl.append(self.abob.callRemote("append", obj=1)) - dl.append(self.abob.callRemote("append", obj=self.acarol)) - dl.append(self.abob.callRemote("append", obj=3)) - return defer.DeferredList(dl) - d.addCallback(_introduce) - def _checkBob(res): - # this runs after all three messages have been acked by Bob - self.failUnlessEqual(len(self.bob.calls), 3) - self.failUnlessEqual(self.bob.calls[0], 1) - self.failUnless(isinstance(self.bob.calls[1], RemoteReference)) - self.failUnlessEqual(self.bob.calls[2], 3) - d.addCallback(_checkBob) - return d - - def testContainers(self): - self.createCharacters() - self.bob.calls = [] - d = self.createInitialReferences() - d.addCallback(lambda res: self.createMoreReferences()) - def _introduce(res): - # we send several messages to Bob, each of which has a container - # with a gift inside it. This exercises the ready_deferred - # handling inside containers. 
- dl = [] - cr = self.abob.callRemote - dl.append(cr("append", set([self.acharlene]))) - dl.append(cr("append", frozenset([self.achristine]))) - dl.append(cr("append", [self.aclarisse])) - dl.append(cr("append", obj=(self.acolette,))) - dl.append(cr("append", {'a': self.acourtney})) - # TODO: pass a gift as an attribute of a Copyable - return defer.DeferredList(dl) - d.addCallback(_introduce) - def _checkBob(res): - # this runs after all five messages have been acked by Bob - self.failUnlessEqual(len(self.bob.calls), 5) - - bcharlene = self.bob.calls.pop(0) - self.failUnless(isinstance(bcharlene, set)) - self.failUnlessEqual(len(bcharlene), 1) - self.failUnless(isinstance(list(bcharlene)[0], RemoteReference)) - - bchristine = self.bob.calls.pop(0) - self.failUnless(isinstance(bchristine, frozenset)) - self.failUnlessEqual(len(bchristine), 1) - self.failUnless(isinstance(list(bchristine)[0], RemoteReference)) - - bclarisse = self.bob.calls.pop(0) - self.failUnless(isinstance(bclarisse, list)) - self.failUnlessEqual(len(bclarisse), 1) - self.failUnless(isinstance(bclarisse[0], RemoteReference)) - - bcolette = self.bob.calls.pop(0) - self.failUnless(isinstance(bcolette, tuple)) - self.failUnlessEqual(len(bcolette), 1) - self.failUnless(isinstance(bcolette[0], RemoteReference)) - - bcourtney = self.bob.calls.pop(0) - self.failUnless(isinstance(bcourtney, dict)) - self.failUnlessEqual(len(bcourtney), 1) - self.failUnless(isinstance(bcourtney['a'], RemoteReference)) - - d.addCallback(_checkBob) - return d - - def create_constrained_characters(self): - self.alice = HelperTarget("alice") - self.bob = ConstrainedHelper("bob") - self.bob_url = self.tubB.registerReference(self.bob, "bob") - self.carol = HelperTarget("carol") - self.carol_url = self.tubC.registerReference(self.carol, "carol") - self.dave = HelperTarget("dave") - self.dave_url = self.tubD.registerReference(self.dave, "dave") - - def test_constraint(self): - self.create_constrained_characters() - self.bob.calls
= [] - d = self.createInitialReferences() - def _introduce(res): - return self.abob.callRemote("set", self.acarol) - d.addCallback(_introduce) - def _checkBob(res): - self.failUnless(isinstance(self.bob.obj, RemoteReference)) - d.addCallback(_checkBob) - return d - - - - # this was used to verify that alice's reference to carol (self.acarol) - # appeared in alice's gift table at the right time, to make sure that the - # RemoteReference is kept alive while the gift is in transit. The whole - # introduction pattern is going to change soon, so it has been disabled - # until I figure out what the new scheme ought to be asserting. - - def OFF_bobGotCarol(self, (balice,bcarol)): - if self.debug: print "Bob got Carol" - # Bob has received the gift - self.bcarol = bcarol - - # wait for alice to receive bob's 'decgift' sequence, which was sent - # by now (it is sent after bob receives the gift but before the - # gift-bearing message is delivered). To make sure alice has received - # it, send a message back along the same path. - def _check_alice(res): - if self.debug: print "Alice should have the decgift" - # alice's gift table should be empty - brokerAB = self.abob.tracker.broker - self.failUnlessEqual(brokerAB.myGifts, {}) - self.failUnlessEqual(brokerAB.myGiftsByGiftID, {}) - d1 = self.alice.waitfor() - d1.addCallback(_check_alice) - # the ack from this message doesn't always make it back by the time - # we end the test and hang up the connection. That connectionLost - # causes the deferred that this returns to errback, triggering an - # error, so we must be sure to discard any error from it. TODO: turn - # this into balice.callRemoteOnly("set", 39), which will have the - # same semantics from our point of view (but in addition it will tell - # the recipient to not bother sending a response).
- balice.callRemote("set", 39).addErrback(lambda ignored: None) - - if self.debug: print "Bob says something to Carol" - d2 = self.carol.waitfor() - d = self.bcarol.callRemote("set", obj=12) - d.addCallback(lambda res: d2) - d.addCallback(self._carolCalled) - d.addCallback(lambda res: d1) - return d - - -class Bad(Base, unittest.TestCase): - - # if the recipient cannot claim their gift, the caller should see an - # errback. - - def setUp(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - Base.setUp(self) - - def test_swissnum(self): - self.createCharacters() - d = self.createInitialReferences() - d.addCallback(lambda res: self.tubA.getReference(self.dave_url)) - def _introduce(adave): - # now break the gift to ensure that Bob is unable to claim it. - # The first way to do this is to simply mangle the swissnum, - # which will result in a failure in remote_getReferenceByName. - # NOTE: this will have to change when we modify the way gifts are - # referenced, since tracker.url is scheduled to go away. - r = SturdyRef(adave.tracker.url) - r.name += ".MANGLED" - adave.tracker.url = r.getURL() - return self.acarol.callRemote("set", adave) - d.addCallback(_introduce) - d.addBoth(self.shouldFail, KeyError, "Bad.test_swissnum") - # make sure we can still talk to Carol, though - d.addCallback(lambda res: self.acarol.callRemote("set", 14)) - d.addCallback(lambda res: self.failUnlessEqual(self.carol.obj, 14)) - return d - test_swissnum.timeout = 10 - - def test_tubid(self): - self.createCharacters() - d = self.createInitialReferences() - d.addCallback(lambda res: self.tubA.getReference(self.dave_url)) - def _introduce(adave): - # The second way is to mangle the tubid, which will result in a - # failure during negotiation. NOTE: this will have to change when - # we modify the way gifts are referenced, since tracker.url is - # scheduled to go away.
- r = SturdyRef(adave.tracker.url) - r.tubID += ".MANGLED" - adave.tracker.url = r.getURL() - return self.acarol.callRemote("set", adave) - d.addCallback(_introduce) - d.addBoth(self.shouldFail, BananaError, "Bad.test_tubid", - "unknown TubID") - return d - test_tubid.timeout = 10 - - def test_location(self): - self.createCharacters() - d = self.createInitialReferences() - d.addCallback(lambda res: self.tubA.getReference(self.dave_url)) - def _introduce(adave): - # The third way is to mangle the location hints, which will - # result in a failure during negotiation as it attempts to - # establish a TCP connection. - r = SturdyRef(adave.tracker.url) - # highly unlikely that there's anything listening on this port - r.locationHints = ["127.0.0.47:1"] - adave.tracker.url = r.getURL() - return self.acarol.callRemote("set", adave) - d.addCallback(_introduce) - d.addBoth(self.shouldFail, ConnectionRefusedError, "Bad.test_location") - return d - test_location.timeout = 10 - - def test_hang(self): - f = protocol.Factory() - f.protocol = protocol.Protocol # ignores all input - p = reactor.listenTCP(0, f, interface="127.0.0.1") - self.createCharacters() - d = self.createInitialReferences() - d.addCallback(lambda res: self.tubA.getReference(self.dave_url)) - def _introduce(adave): - # The next form of mangling is to connect to a port which never - # responds, which could happen if a firewall were silently - # dropping the TCP packets. We can't accurately simulate this - # case, but we can connect to a port which accepts the connection - # and then stays silent. This should trigger the overall - # connection timeout. 
- r = SturdyRef(adave.tracker.url) - r.locationHints = ["127.0.0.1:%d" % p.getHost().port] - adave.tracker.url = r.getURL() - self.tubD.options['connect_timeout'] = 2 - return self.acarol.callRemote("set", adave) - d.addCallback(_introduce) - d.addBoth(self.shouldFail, NegotiationError, "Bad.test_hang", - "no connection established within client timeout") - def _stop_listening(res): - d1 = p.stopListening() - def _done_listening(x): - return res - d1.addCallback(_done_listening) - return d1 - d.addBoth(_stop_listening) - return d - test_hang.timeout = 10 - - - def testReturn_swissnum(self): - self.createCharacters() - d = self.createInitialReferences() - def _introduce(res): - # now break the gift to ensure that Alice is unable to claim it. - # The first way to do this is to simply mangle the swissnum, - # which will result in a failure in remote_getReferenceByName. - # NOTE: this will have to change when we modify the way gifts are - # referenced, since tracker.url is scheduled to go away.
- r = SturdyRef(self.bdave.tracker.url) - r.name += ".MANGLED" - self.bdave.tracker.url = r.getURL() - self.bob.obj = self.bdave - return self.abob.callRemote("get") - d.addCallback(_introduce) - d.addBoth(self.shouldFail, KeyError, "Bad.testReturn_swissnum") - # make sure we can still talk to Bob, though - d.addCallback(lambda res: self.abob.callRemote("set", 14)) - d.addCallback(lambda res: self.failUnlessEqual(self.bob.obj, 14)) - return d - testReturn_swissnum.timeout = 10 diff --git a/src/foolscap/foolscap/test/test_interfaces.py b/src/foolscap/foolscap/test/test_interfaces.py deleted file mode 100644 index 29c3d810..00000000 --- a/src/foolscap/foolscap/test/test_interfaces.py +++ /dev/null @@ -1,297 +0,0 @@ -# -*- test-case-name: foolscap.test.test_interfaces -*- - -from zope.interface import implementsOnly -from twisted.trial import unittest - -from foolscap import schema, remoteinterface -from foolscap import RemoteInterface -from foolscap.remoteinterface import getRemoteInterface, RemoteMethodSchema -from foolscap.remoteinterface import RemoteInterfaceRegistry -from foolscap.tokens import Violation -from foolscap.referenceable import RemoteReference - -from foolscap.test.common import TargetMixin -from foolscap.test.common import getRemoteInterfaceName, Target, RIMyTarget, \ - RIMyTarget2, TargetWithoutInterfaces, IFoo, Foo, TypesTarget, RIDummy, \ - DummyTarget - - -class Target2(Target): - implementsOnly(IFoo, RIMyTarget2) - -class TestInterface(TargetMixin, unittest.TestCase): - - def testTypes(self): - self.failUnless(isinstance(RIMyTarget, - remoteinterface.RemoteInterfaceClass)) - self.failUnless(isinstance(RIMyTarget2, - remoteinterface.RemoteInterfaceClass)) - - def testRegister(self): - reg = RemoteInterfaceRegistry - self.failUnlessEqual(reg["RIMyTarget"], RIMyTarget) - self.failUnlessEqual(reg["RIMyTargetInterface2"], RIMyTarget2) - - def testDuplicateRegistry(self): - try: - class RIMyTarget(RemoteInterface): - def foo(bar=int): return int - 
except remoteinterface.DuplicateRemoteInterfaceError: - pass - else: - self.fail("duplicate registration not caught") - - def testInterface1(self): - # verify that we extract the right interfaces from a local object. - # also check that the registry stuff works. - self.setupBrokers() - rr, target = self.setupTarget(Target()) - iface = getRemoteInterface(target) - self.failUnlessEqual(iface, RIMyTarget) - iname = getRemoteInterfaceName(target) - self.failUnlessEqual(iname, "RIMyTarget") - self.failUnlessIdentical(RemoteInterfaceRegistry["RIMyTarget"], - RIMyTarget) - - rr, target = self.setupTarget(Target2()) - iname = getRemoteInterfaceName(target) - self.failUnlessEqual(iname, "RIMyTargetInterface2") - self.failUnlessIdentical(\ - RemoteInterfaceRegistry["RIMyTargetInterface2"], RIMyTarget2) - - - def testInterface2(self): - # verify that RemoteInterfaces have the right attributes - t = Target() - iface = getRemoteInterface(t) - self.failUnlessEqual(iface, RIMyTarget) - - # 'add' is defined with 'def' - s1 = RIMyTarget['add'] - self.failUnless(isinstance(s1, RemoteMethodSchema)) - ok, s2 = s1.getKeywordArgConstraint("a") - self.failUnless(ok) - self.failUnless(isinstance(s2, schema.IntegerConstraint)) - self.failUnless(s2.checkObject(12, False) == None) - self.failUnlessRaises(schema.Violation, - s2.checkObject, "string", False) - s3 = s1.getResponseConstraint() - self.failUnless(isinstance(s3, schema.IntegerConstraint)) - - # 'add1' is defined as a class attribute - s1 = RIMyTarget['add1'] - self.failUnless(isinstance(s1, RemoteMethodSchema)) - ok, s2 = s1.getKeywordArgConstraint("a") - self.failUnless(ok) - self.failUnless(isinstance(s2, schema.IntegerConstraint)) - self.failUnless(s2.checkObject(12, False) == None) - self.failUnlessRaises(schema.Violation, - s2.checkObject, "string", False) - s3 = s1.getResponseConstraint() - self.failUnless(isinstance(s3, schema.IntegerConstraint)) - - s1 = RIMyTarget['join'] - 
self.failUnless(isinstance(s1.getKeywordArgConstraint("a")[1], - schema.StringConstraint)) - self.failUnless(isinstance(s1.getKeywordArgConstraint("c")[1], - schema.IntegerConstraint)) - s3 = RIMyTarget['join'].getResponseConstraint() - self.failUnless(isinstance(s3, schema.StringConstraint)) - - s1 = RIMyTarget['disputed'] - self.failUnless(isinstance(s1.getKeywordArgConstraint("a")[1], - schema.IntegerConstraint)) - s3 = s1.getResponseConstraint() - self.failUnless(isinstance(s3, schema.IntegerConstraint)) - - - def testInterface3(self): - t = TargetWithoutInterfaces() - iface = getRemoteInterface(t) - self.failIf(iface) - - def testStack(self): - # when you violate your outbound schema, the Failure you get should - # have a stack trace that includes the actual callRemote invocation - self.setupBrokers() - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote('add', "not a number", "oops") - def _check_failure(f): - s = f.getTraceback().split("\n") - for i in range(len(s)): - line = s[i] - #print line - if ("test_interfaces.py" in line - and i+1 < len(s) - and "rr.callRemote" in s[i+1]): - return # all good - print "failure looked like this:" - print f - self.fail("didn't see invocation of callRemote in stacktrace") - d.addCallbacks(lambda res: self.fail("hey, this was supposed to fail"), - _check_failure) - return d - -class Types(TargetMixin, unittest.TestCase): - def setUp(self): - TargetMixin.setUp(self) - self.setupBrokers() - - def deferredShouldFail(self, d, ftype=None, checker=None): - if not ftype and not checker: - d.addCallbacks(lambda res: - self.fail("hey, this was supposed to fail"), - lambda f: None) - elif ftype and not checker: - d.addCallbacks(lambda res: - self.fail("hey, this was supposed to fail"), - lambda f: f.trap(ftype) or None) - else: - d.addCallbacks(lambda res: - self.fail("hey, this was supposed to fail"), - checker) - - def testCall(self): - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote('add', 3, 4) # 
enforces schemas - d.addCallback(lambda res: self.failUnlessEqual(res, 7)) - return d - - def testFail(self): - # make sure exceptions (and thus CopiedFailures) pass a schema check - rr, target = self.setupTarget(Target(), True) - d = rr.callRemote('fail') - self.deferredShouldFail(d, ftype=ValueError) - return d - - def testNoneGood(self): - rr, target = self.setupTarget(TypesTarget(), True) - d = rr.callRemote('returns_none', True) - d.addCallback(lambda res: self.failUnlessEqual(res, None)) - return d - - def testNoneBad(self): - rr, target = self.setupTarget(TypesTarget(), True) - d = rr.callRemote('returns_none', False) - def _check_failure(f): - f.trap(Violation) - self.failUnlessIn("(in return value of .returns_none", str(f)) - self.failUnlessIn("'not None' is not None", str(f)) - self.deferredShouldFail(d, checker=_check_failure) - return d - - def testTakesRemoteInterfaceGood(self): - rr, target = self.setupTarget(TypesTarget(), True) - d = rr.callRemote('takes_remoteinterface', DummyTarget()) - d.addCallback(lambda res: self.failUnlessEqual(res, "good")) - return d - - def testTakesRemoteInterfaceBad(self): - rr, target = self.setupTarget(TypesTarget(), True) - # takes_remoteinterface is specified to accept an RIDummy - d = rr.callRemote('takes_remoteinterface', 12) - def _check_failure(f): - f.trap(Violation) - self.failUnlessIn("RITypes.takes_remoteinterface(a=))", str(f)) - self.failUnlessIn("'12' is not a Referenceable", str(f)) - self.deferredShouldFail(d, checker=_check_failure) - return d - - def testTakesRemoteInterfaceBad2(self): - rr, target = self.setupTarget(TypesTarget(), True) - # takes_remoteinterface is specified to accept an RIDummy - d = rr.callRemote('takes_remoteinterface', TypesTarget()) - def _check_failure(f): - f.trap(Violation) - self.failUnlessIn("RITypes.takes_remoteinterface(a=))", str(f)) - self.failUnlessIn(" does not provide RemoteInterface ", str(f)) - self.failUnlessIn("foolscap.test.common.RIDummy", str(f)) - 
self.deferredShouldFail(d, checker=_check_failure) - return d - - def failUnlessRemoteProvides(self, obj, riface): - # TODO: really, I want to just be able to say: - # self.failUnless(RIDummy.providedBy(res)) - iface = obj.tracker.interface - # TODO: this test probably doesn't handle subclasses of - # RemoteInterface, which might be useful (if it even works) - if not iface or iface != riface: - self.fail("%s does not provide RemoteInterface %s" % (obj, riface)) - - def testReturnsRemoteInterfaceGood(self): - rr, target = self.setupTarget(TypesTarget(), True) - d = rr.callRemote('returns_remoteinterface', 1) - def _check(res): - self.failUnless(isinstance(res, RemoteReference)) - #self.failUnless(RIDummy.providedBy(res)) - self.failUnlessRemoteProvides(res, RIDummy) - d.addCallback(_check) - return d - - def testReturnsRemoteInterfaceBad(self): - rr, target = self.setupTarget(TypesTarget(), True) - # returns_remoteinterface is specified to return an RIDummy - d = rr.callRemote('returns_remoteinterface', 0) - def _check_failure(f): - f.trap(Violation) - self.failUnlessIn("(in return value of .returns_remoteinterface)", str(f)) - self.failUnlessIn("'15' is not a Referenceable", str(f)) - self.deferredShouldFail(d, checker=_check_failure) - return d - - def testReturnsRemoteInterfaceBad2(self): - rr, target = self.setupTarget(TypesTarget(), True) - # returns_remoteinterface is specified to return an RIDummy - d = rr.callRemote('returns_remoteinterface', -1) - def _check_failure(f): - f.trap(Violation) - self.failUnlessIn("(in return value of .returns_remoteinterface)", str(f)) - self.failUnlessIn(" 4, - "b.pings=%d, b.pongs=%d" % (b.pings, b.pongs)) - # and the connection should still be alive and usable - return rref.callRemote("add", 1, 2) - d.addCallback(_count_pings) - def _check_add(res): - self.failUnlessEqual(res, 3) - d.addCallback(_check_add) - - return d - - def do_testDisconnect(self, which): - # establish a connection with a very short disconnect timeout, 
so it - # will be abandoned. We only set this on one side, since either the - # initiating side or the receiving side should be able to timeout the - # connection. Because we don't set keepaliveTimeout, there will be no - # keepalives, so if we don't use the connection for 0.5 seconds, it - # will be dropped. - self.services[which].setOption("disconnectTimeout", 0.5) - - d = self.getRef() - d.addCallback(self.stall, 2) - def _check_ref(rref): - d2 = rref.callRemote("add", 1, 2) - def _check(res): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(DeadReferenceError)) - d2.addBoth(_check) - return d2 - d.addCallback(_check_ref) - - return d - - def testDisconnect0(self): - return self.do_testDisconnect(0) - def testDisconnect1(self): - return self.do_testDisconnect(1) - - def do_testNoDisconnect(self, which): - # establish a connection with a short disconnect timeout, but an even - # shorter keepalive timeout, so the connection should stay alive. We - # only provide the keepalives on one side, but enforce the disconnect - # timeout on both: just one side doing keepalives should keep the - # whole connection alive. 
- self.services[which].setOption("keepaliveTimeout", 0.1) - self.services[0].setOption("disconnectTimeout", 1.0) - self.services[1].setOption("disconnectTimeout", 1.0) - - d = self.getRef() - d.addCallback(self.stall, 2) - def _check(rref): - # the connection should still be alive - return rref.callRemote("add", 1, 2) - d.addCallback(_check) - def _check_add(res): - self.failUnlessEqual(res, 3) - d.addCallback(_check_add) - - return d - - def testNoDisconnect0(self): - return self.do_testNoDisconnect(0) - def testNoDisconnect1(self): - return self.do_testNoDisconnect(1) diff --git a/src/foolscap/foolscap/test/test_loopback.py b/src/foolscap/foolscap/test/test_loopback.py deleted file mode 100644 index dd5da2b7..00000000 --- a/src/foolscap/foolscap/test/test_loopback.py +++ /dev/null @@ -1,82 +0,0 @@ - -from twisted.trial import unittest -from twisted.internet import defer -import foolscap -from foolscap.test.common import HelperTarget -from foolscap.eventual import flushEventualQueue - -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - - -class ConnectToSelf(unittest.TestCase): - - def setUp(self): - self.services = [] - - def requireCrypto(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - - def startTub(self, tub): - self.services = [tub] - for s in self.services: - s.startService() - l = s.listenOn("tcp:0:interface=127.0.0.1") - s.setLocation("127.0.0.1:%d" % l.getPortnum()) - - def tearDown(self): - d = defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(flushEventualQueue) - return d - - def testConnectUnauthenticated(self): - tub = foolscap.UnauthenticatedTub() - self.startTub(tub) - target = HelperTarget("bob") - target.obj = "unset" - url = tub.registerReference(target) - # can we connect to a reference on our own Tub? 
- d = tub.getReference(url) - def _connected(ref): - return ref.callRemote("set", 12) - d.addCallback(_connected) - def _check(res): - self.failUnlessEqual(target.obj, 12) - d.addCallback(_check) - - def _connect_again(res): - target.obj = None - return tub.getReference(url) - d.addCallback(_connect_again) - d.addCallback(_connected) - d.addCallback(_check) - - return d - - def testConnectAuthenticated(self): - self.requireCrypto() - tub = foolscap.Tub() - self.startTub(tub) - target = HelperTarget("bob") - target.obj = "unset" - url = tub.registerReference(target) - # can we connect to a reference on our own Tub? - d = tub.getReference(url) - def _connected(ref): - return ref.callRemote("set", 12) - d.addCallback(_connected) - def _check(res): - self.failUnlessEqual(target.obj, 12) - d.addCallback(_check) - def _connect_again(res): - target.obj = None - return tub.getReference(url) - d.addCallback(_connect_again) - d.addCallback(_connected) - d.addCallback(_check) - return d diff --git a/src/foolscap/foolscap/test/test_negotiate.py b/src/foolscap/foolscap/test/test_negotiate.py deleted file mode 100644 index e8b15aeb..00000000 --- a/src/foolscap/foolscap/test/test_negotiate.py +++ /dev/null @@ -1,930 +0,0 @@ - -from twisted.trial import unittest - -from twisted.internet import protocol, defer, reactor -from twisted.application import internet -from foolscap import pb, negotiate, tokens -from foolscap import Referenceable, Tub, UnauthenticatedTub, BananaError -from foolscap.eventual import flushEventualQueue -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - -# we use authenticated tubs if possible. 
If crypto is not available, fall -# back to unauthenticated ones -GoodEnoughTub = UnauthenticatedTub -if crypto_available: - GoodEnoughTub = Tub - -# this is tubID 3hemthez7rvgvyhjx2n5kdj7mcyar3yt -certData_low = \ -"""-----BEGIN CERTIFICATE----- -MIIBnjCCAQcCAgCEMA0GCSqGSIb3DQEBBAUAMBcxFTATBgNVBAMUDG5ld3BiX3Ro -aW5neTAeFw0wNjExMjYxODUxMTBaFw0wNzExMjYxODUxMTBaMBcxFTATBgNVBAMU -DG5ld3BiX3RoaW5neTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEA1DuK9NoF -fiSreA8rVqYPAjNiUqFelAAYPgnJR92Jry1J/dPA3ieNcCazbjVeKUFjd6+C30XR -APhajsAJFiJdnmgrtVILNrpZDC/vISKQoAmoT9hP/cMqFm8vmUG/+AXO76q63vfH -UmabBVDNTlM8FJpbm9M26cFMrH45G840gA0CAwEAATANBgkqhkiG9w0BAQQFAAOB -gQBCtjgBbF/s4w/16Y15lkTAO0xt8ZbtrvcsFPGTXeporonejnNaJ/aDbJt8Y6nY -ypJ4+LTT3UQwwvqX5xEuJmFhmXGsghRGypbU7Zxw6QZRppBRqz8xMS+y82mMZRQp -ezP+BiTvnoWXzDEP1233oYuELVgOVnHsj+rC017Ykfd7fw== ------END CERTIFICATE----- ------BEGIN RSA PRIVATE KEY----- -MIICXQIBAAKBgQDUO4r02gV+JKt4DytWpg8CM2JSoV6UABg+CclH3YmvLUn908De -J41wJrNuNV4pQWN3r4LfRdEA+FqOwAkWIl2eaCu1Ugs2ulkML+8hIpCgCahP2E/9 -wyoWby+ZQb/4Bc7vqrre98dSZpsFUM1OUzwUmlub0zbpwUysfjkbzjSADQIDAQAB -AoGBAIvxTykw8dpBt8cMyZjzGoZq93Rg74pLnbCap1x52iXmiRmUHWLfVcYT3tDW -4+X0NfBfjL5IvQ4UtTHXsqYjtvJfXWazYYa4INv5wKDBCd5a7s1YQ8R7mnhlBbRd -nqZ6RpGuQbd3gTGZCkUdbHPSqdCPAjryH9mtWoQZIepcIcoJAkEA77gjO+MPID6v -K6lf8SuFXHDOpaNOAiMlxVnmyQYQoF0PRVSpKOQf83An7R0S/jN3C7eZ6fPbZcyK -SFVktHhYwwJBAOKlgndbSkVzkQCMcuErGZT1AxHNNHSaDo8X3C47UbP3nf60SkxI -boqmpuPvEPUB9iPQdiNZGDU04+FUhe5Vtu8CQHDQHXS/hIzOMy2/BfG/Y4F/bSCy -W7HRzKK1jlCoVAbEBL3B++HMieTMsV17Q0bx/WI8Q2jAZE3iFmm4Fi6APHUCQCMi -5Yb7cBg0QlaDb4vY0q51DXTFC0zIVVl5qXjBWXk8+hFygdIxqHF2RIkxlr9k/nOu -7aGtPkOBX5KfN+QrBaECQQCltPE9YjFoqPezfyvGZoWAKb8bWzo958U3uVBnCw2f -Fs8AQDgI/9gOUXxXno51xQSdCnJLQJ8lThRUa6M7/F1B ------END RSA PRIVATE KEY----- -""" - -# this is tubID 6cxxohyb5ysw6ftpwprbzffxrghbfopm -certData_high = \ -"""-----BEGIN CERTIFICATE----- -MIIBnjCCAQcCAgCEMA0GCSqGSIb3DQEBBAUAMBcxFTATBgNVBAMUDG5ld3BiX3Ro -aW5neTAeFw0wNjExMjYxODUxNDFaFw0wNzExMjYxODUxNDFaMBcxFTATBgNVBAMU 
-DG5ld3BiX3RoaW5neTCBnzANBgkqhkiG9w0BAQEFAAOBjQAwgYkCgYEArfrebvt3 -8FE3kKoscY2J/8A4J6CUUUiM7/gl00UvGvvjfdaWbsj4w0o8W2tE0X8Zce3dScSl -D6qVXy6AEc4Flqs0q02w9uNzcdDY6LF3NiK0Lq+JP4OjJeImUBe8wUU0RQxqf/oA -GhgHEZhTp6aAdxBXZFOVDloiW6iqrKH/thcCAwEAATANBgkqhkiG9w0BAQQFAAOB -gQBXi+edp3iz07wxcRztvXtTAjY/9gUwlfa6qSTg/cGqbF0OPa+sISBOFRnnC8qM -ENexlkpiiD4Oyj+UtO5g2CMz0E62cTJTqz6PfexnmKIGwYjq5wZ2tzOrB9AmAzLv -TQQ9CdcKBXLd2GCToh8hBvjyyFwj+yTSbq+VKLMFkBY8Rg== ------END CERTIFICATE----- ------BEGIN RSA PRIVATE KEY----- -MIICXgIBAAKBgQCt+t5u+3fwUTeQqixxjYn/wDgnoJRRSIzv+CXTRS8a++N91pZu -yPjDSjxba0TRfxlx7d1JxKUPqpVfLoARzgWWqzSrTbD243Nx0NjosXc2IrQur4k/ -g6Ml4iZQF7zBRTRFDGp/+gAaGAcRmFOnpoB3EFdkU5UOWiJbqKqsof+2FwIDAQAB -AoGBAKrU3Vp+Y2u+Y+ARqKgrQai1tq36eAhEQ9dRgtqrYTCOyvcCIR5RCirAFvnx -H1bSBUsgNBw+EZGLfzZBs5FICaUjBOQYBYzfxux6+jlGvdl7idfHs7zogyEYBqye -0VkwzZ0mVXM2ujOD/z/ANkdEn2fGj/VwAYDlfvlyNZMckHp5AkEA5sc1VG3snWmG -lz4967MMzJ7XNpZcTvLEspjpH7hFbnXUHIQ4wPYOP7dhnVvKX1FiOQ8+zXVYDDGB -SK1ABzpc+wJBAMD+imwAhHNBbOb3cPYzOz6XRZaetvep3GfE2wKr1HXP8wchNXWj -Ijq6fJinwPlDugHaeNnfb+Dydd+YEiDTSJUCQDGCk2Jlotmyhfl0lPw4EYrkmO9R -GsSlOKXIQFtZwSuNg9AKXdKn9y6cPQjxZF1GrHfpWWPixNz40e+xm4bxcnkCQQCs -+zkspqYQ/CJVPpHkSnUem83GvAl5IKmp5Nr8oPD0i+fjixN0ljyW8RG+bhXcFaVC -BgTuG4QW1ptqRs5w14+lAkEAuAisTPUDsoUczywyoBbcFo3SVpFPNeumEXrj4MD/ -uP+TxgBi/hNYaR18mTbKD4mzVSjqyEeRC/emV3xUpUrdqg== ------END RSA PRIVATE KEY----- -""" - -class Target(Referenceable): - def __init__(self): - self.calls = 0 - def remote_call(self): - self.calls += 1 - - -def getPage(url): - """This is a variant of the standard twisted.web.client.getPage, which is - smart enough to shut off its connection when its done (even if it fails). 
- """ - from twisted.web import client - scheme, host, port, path = client._parse(url) - factory = client.HTTPClientFactory(url) - c = reactor.connectTCP(host, port, factory) - def shutdown(res, c): - c.disconnect() - return res - factory.deferred.addBoth(shutdown, c) - return factory.deferred - -class OneTimeDeferred(defer.Deferred): - def callback(self, res): - if self.called: - return - return defer.Deferred.callback(self, res) - -class BaseMixin: - - def requireCrypto(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - - def setUp(self): - self.connections = [] - self.servers = [] - self.services = [] - - def tearDown(self): - for c in self.connections: - if c.transport: - c.transport.loseConnection() - dl = [] - for s in self.servers: - dl.append(defer.maybeDeferred(s.stopListening)) - for s in self.services: - dl.append(defer.maybeDeferred(s.stopService)) - d = defer.DeferredList(dl) - d.addCallback(self._checkListeners) - d.addCallback(flushEventualQueue) - return d - def _checkListeners(self, res): - self.failIf(pb.Listeners) - - def stall(self, res, timeout): - d = defer.Deferred() - reactor.callLater(timeout, d.callback, res) - return d - - def makeServer(self, authenticated, options={}, listenerOptions={}): - if authenticated: - self.tub = tub = Tub(options=options) - else: - self.tub = tub = UnauthenticatedTub(options=options) - tub.startService() - self.services.append(tub) - l = tub.listenOn("tcp:0", listenerOptions) - tub.setLocation("127.0.0.1:%d" % l.getPortnum()) - self.target = Target() - return tub.registerReference(self.target), l.getPortnum() - - def makeSpecificServer(self, certData, - negotiationClass=negotiate.Negotiation): - self.tub = tub = Tub(certData=certData) - tub.negotiationClass = negotiationClass - tub.startService() - self.services.append(tub) - l = tub.listenOn("tcp:0") - tub.setLocation("127.0.0.1:%d" % l.getPortnum()) - self.target = Target() - return tub.registerReference(self.target), 
l.getPortnum() - - def makeNullServer(self): - f = protocol.Factory() - f.protocol = protocol.Protocol # discards everything - s = internet.TCPServer(0, f) - s.startService() - self.services.append(s) - portnum = s._port.getHost().port - return portnum - - def makeHTTPServer(self): - try: - from twisted.web import server, resource, static - except ImportError: - raise unittest.SkipTest('this test needs twisted.web') - root = resource.Resource() - root.putChild("", static.Data("hello\n", "text/plain")) - s = internet.TCPServer(0, server.Site(root)) - s.startService() - self.services.append(s) - portnum = s._port.getHost().port - return portnum - - def connectClient(self, portnum): - tub = UnauthenticatedTub() - tub.startService() - self.services.append(tub) - d = tub.getReference("pb://127.0.0.1:%d/hello" % portnum) - return d - - def connectHTTPClient(self, portnum): - return getPage("http://127.0.0.1:%d/foo" % portnum) - -class Basic(BaseMixin, unittest.TestCase): - - def testOptions(self): - url, portnum = self.makeServer(False, {'opt': 12}) - self.failUnlessEqual(self.tub.options['opt'], 12) - - def testAuthenticated(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - url, portnum = self.makeServer(True) - client = Tub() - client.startService() - self.services.append(client) - d = client.getReference(url) - return d - testAuthenticated.timeout = 10 - - def testUnauthenticated(self): - url, portnum = self.makeServer(False) - client = UnauthenticatedTub() - client.startService() - self.services.append(client) - d = client.getReference(url) - return d - testUnauthenticated.timeout = 10 - - def testHalfAuthenticated1(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - url, portnum = self.makeServer(True) - client = UnauthenticatedTub() - client.startService() - self.services.append(client) - d = client.getReference(url) - return d - testHalfAuthenticated1.timeout = 10 - - def 
testHalfAuthenticated2(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - url, portnum = self.makeServer(False) - client = Tub() - client.startService() - self.services.append(client) - d = client.getReference(url) - return d - testHalfAuthenticated2.timeout = 10 - -class Versus(BaseMixin, unittest.TestCase): - - def testVersusHTTPServerAuthenticated(self): - if not crypto_available: - raise unittest.SkipTest("crypto not available") - portnum = self.makeHTTPServer() - client = Tub() - client.startService() - self.services.append(client) - url = "pb://1234@127.0.0.1:%d/target" % portnum - d = client.getReference(url) - d.addCallbacks(lambda res: self.fail("this is supposed to fail"), - lambda f: f.trap(BananaError)) - # the HTTP server needs a moment to notice that the connection has - # gone away. Without this, trial flunks the test because of the - # leftover HTTP server socket. - d.addCallback(self.stall, 1) - return d - testVersusHTTPServerAuthenticated.timeout = 10 - - def testVersusHTTPServerUnauthenticated(self): - portnum = self.makeHTTPServer() - client = UnauthenticatedTub() - client.startService() - self.services.append(client) - url = "pbu://127.0.0.1:%d/target" % portnum - d = client.getReference(url) - d.addCallbacks(lambda res: self.fail("this is supposed to fail"), - lambda f: f.trap(BananaError)) - d.addCallback(self.stall, 1) # same reason as above - return d - testVersusHTTPServerUnauthenticated.timeout = 10 - - def testVersusHTTPClientUnauthenticated(self): - try: - from twisted.web import error - except ImportError: - raise unittest.SkipTest('this test needs twisted.web') - url, portnum = self.makeServer(False) - d = self.connectHTTPClient(portnum) - d.addCallbacks(lambda res: self.fail("this is supposed to fail"), - lambda f: f.trap(error.Error)) - return d - testVersusHTTPClientUnauthenticated.timeout = 10 - - def testVersusHTTPClientAuthenticated(self): - if not crypto_available: - raise 
unittest.SkipTest("crypto not available") - try: - from twisted.web import error - except ImportError: - raise unittest.SkipTest('this test needs twisted.web') - url, portnum = self.makeServer(True) - d = self.connectHTTPClient(portnum) - d.addCallbacks(lambda res: self.fail("this is supposed to fail"), - lambda f: f.trap(error.Error)) - return d - testVersusHTTPClientAuthenticated.timeout = 10 - - def testNoConnection(self): - url, portnum = self.makeServer(False) - d = self.tub.stopService() - d.addCallback(self._testNoConnection_1, url) - return d - testNoConnection.timeout = 10 - def _testNoConnection_1(self, res, url): - self.services.remove(self.tub) - client = UnauthenticatedTub() - client.startService() - self.services.append(client) - d = client.getReference(url) - d.addCallbacks(lambda res: self.fail("this is supposed to fail"), - self._testNoConnection_fail) - return d - def _testNoConnection_fail(self, why): - from twisted.internet import error - self.failUnless(why.check(error.ConnectionRefusedError)) - - def testClientTimeout(self): - portnum = self.makeNullServer() - # lower the connection timeout to 1 second - client = UnauthenticatedTub(options={'connect_timeout': 1}) - client.startService() - self.services.append(client) - url = "pbu://127.0.0.1:%d/target" % portnum - d = client.getReference(url) - d.addCallbacks(lambda res: self.fail("hey! this is supposed to fail"), - lambda f: f.trap(tokens.NegotiationError)) - return d - testClientTimeout.timeout = 10 - - def testServerTimeout(self): - # lower the connection timeout to 1 second - - # the debug callback gets fired each time Negotiate.negotiationFailed - # is fired, which happens twice (once for the timeout, once for the - # resulting connectionLost), so we have to make sure the Deferred is - # only fired once.
- d = OneTimeDeferred() - options = {'server_timeout': 1, - 'debug_negotiationFailed_cb': d.callback - } - url, portnum = self.makeServer(False, listenerOptions=options) - f = protocol.ClientFactory() - f.protocol = protocol.Protocol # discards everything - s = internet.TCPClient("127.0.0.1", portnum, f) - s.startService() - self.services.append(s) - d.addCallbacks(lambda res: self.fail("hey! this is supposed to fail"), - self._testServerTimeout_1) - return d - testServerTimeout.timeout = 10 - def _testServerTimeout_1(self, f): - self.failUnless(f.check(tokens.NegotiationError)) - self.failUnlessEqual(f.value.args[0], "negotiation timeout") - - -class Parallel(BaseMixin, unittest.TestCase): - # testParallel*: listen on two separate ports, set up a URL with both - # ports in the locationHints field, then connect. PB is supposed to - # connect to both ports at the same time, using whichever one completes - # negotiation first. The other connection is supposed to be dropped - # silently. - - # the cases we need to cover are enumerated by the possible states that - # connection[1] can be in when connection[0] (the winning connection) - # completes negotiation. 
Those states are: - # 1: connectTCP initiated and failed - # 2: connectTCP initiated, but not yet established - # 3: connection established, but still in the PLAINTEXT phase - # (sent GET, waiting for the 101 Switching Protocols) - # 4: still in ENCRYPTED phase: sent Hello, waiting for their Hello - # 5: in DECIDING phase (non-master), waiting for their decision - # - - def makeServers(self, tubopts={}, lo1={}, lo2={}): - self.requireCrypto() - self.tub = tub = Tub(options=tubopts) - tub.startService() - self.services.append(tub) - l1 = tub.listenOn("tcp:0", lo1) - l2 = tub.listenOn("tcp:0", lo2) - self.p1, self.p2 = l1.getPortnum(), l2.getPortnum() - tub.setLocation("127.0.0.1:%d" % l1.getPortnum(), - "127.0.0.1:%d" % l2.getPortnum()) - self.target = Target() - return tub.registerReference(self.target) - - def connect(self, url, authenticated=True): - self.clientPhases = [] - opts = {"debug_stall_second_connection": True, - "debug_gatherPhases": self.clientPhases} - if authenticated: - self.client = client = Tub(options=opts) - else: - self.client = client = UnauthenticatedTub(options=opts) - client.startService() - self.services.append(client) - d = client.getReference(url) - return d - - def checkConnectedToFirstListener(self, rr, targetPhases): - # verify that we connected to the first listener, and not the second - self.failUnlessEqual(rr.tracker.broker.transport.getPeer().port, - self.p1) - # then pause a moment for the other connection to finish giving up - d = self.stall(rr, 0.5) - # and verify that we finished during the phase that we meant to test - d.addCallback(lambda res: - self.failUnlessEqual(self.clientPhases, targetPhases, - "negotiation was abandoned in " - "the wrong phase")) - return d - - def test1(self): - # in this test, we stop listening on the second port, so the second - # connection will terminate with an ECONNREFUSED before the first one - # completes. 
We also slow down the first port so we're sure to - # recognize the failed second connection before starting negotiation - # on the first. - url = self.makeServers(lo1={'debug_slow_connectionMade': True}) - d = self.tub.stopListeningOn(self.tub.getListeners()[1]) - d.addCallback(self._test1_1, url) - return d - def _test1_1(self, res, url): - d = self.connect(url) - d.addCallback(self.checkConnectedToFirstListener, []) - #d.addCallback(self.stall, 1) - return d - test1.timeout = 10 - - def test2(self): - # slow down the second listener so that the first one is used. The - # second listener will be connected but it will not respond to - # negotiation for a moment, allowing the first connection to - # complete. - url = self.makeServers(lo2={'debug_slow_connectionMade': True}) - d = self.connect(url) - d.addCallback(self.checkConnectedToFirstListener, - [negotiate.PLAINTEXT]) - #d.addCallback(self.stall, 1) - return d - test2.timeout = 10 - - def test3(self): - # have the second listener stall just before it does - # sendPlaintextServer(). This insures the second connection will be - # waiting in the PLAINTEXT phase when the first connection completes. - url = self.makeServers(lo2={'debug_slow_sendPlaintextServer': True}) - d = self.connect(url) - d.addCallback(self.checkConnectedToFirstListener, - [negotiate.PLAINTEXT]) - return d - test3.timeout = 10 - - def test4(self): - # stall the second listener just before it sends the Hello. - # This insures the second connection will be waiting in the ENCRYPTED - # phase when the first connection completes. - url = self.makeServers(lo2={'debug_slow_sendHello': True}) - d = self.connect(url) - d.addCallback(self.checkConnectedToFirstListener, - [negotiate.ENCRYPTED]) - #d.addCallback(self.stall, 1) - return d - test4.timeout = 10 - - def test5(self): - # stall the second listener just before it sends the decision. 
This - # insures the second connection will be waiting in the DECIDING phase - # when the first connection completes. - - # note: this requires that the listener winds up as the master. We - # force this by connecting from an unauthenticated Tub. - url = self.makeServers(lo2={'debug_slow_sendDecision': True}) - d = self.connect(url, authenticated=False) - d.addCallback(self.checkConnectedToFirstListener, - [negotiate.DECIDING]) - return d - test5.timeout = 10 - - -class CrossfireMixin(BaseMixin): - # testSimultaneous*: similar to Parallel, but connection[0] is initiated - # in the opposite direction. This is the case when two Tubs initiate - # connections to each other at the same time. - tub1IsMaster = False - - def makeServers(self, t1opts={}, t2opts={}, lo1={}, lo2={}, - tubAauthenticated=True, tubBauthenticated=True): - if tubAauthenticated or tubBauthenticated: - self.requireCrypto() - # first we create two Tubs - if tubAauthenticated: - a = Tub(options=t1opts) - else: - a = UnauthenticatedTub(options=t1opts) - if tubBauthenticated: - b = Tub(options=t2opts) - else: - b = UnauthenticatedTub(options=t2opts) - - # then we figure out which one will be the master, and call it tub1 - if a.tubID > b.tubID: - # a is the master - tub1,tub2 = a,b - else: - tub1,tub2 = b,a - if not self.tub1IsMaster: - tub1,tub2 = tub2,tub1 - self.tub1 = tub1 - self.tub2 = tub2 - - # now fix up the options and everything else - self.tub1phases = [] - t1opts['debug_gatherPhases'] = self.tub1phases - tub1.options = t1opts - self.tub2phases = [] - t2opts['debug_gatherPhases'] = self.tub2phases - tub2.options = t2opts - - # connection[0], the winning connection, will be from tub1 to tub2 - - tub1.startService() - self.services.append(tub1) - l1 = tub1.listenOn("tcp:0", lo1) - tub1.setLocation("127.0.0.1:%d" % l1.getPortnum()) - self.target1 = Target() - self.url1 = tub1.registerReference(self.target1) - - # connection[1], the abandoned connection, will be from tub2 to tub1 - 
tub2.startService() - self.services.append(tub2) - l2 = tub2.listenOn("tcp:0", lo2) - tub2.setLocation("127.0.0.1:%d" % l2.getPortnum()) - self.target2 = Target() - self.url2 = tub2.registerReference(self.target2) - - def connect(self): - # initiate connection[1] from tub2 to tub1, which will stall (but the - # actual getReference will eventually succeed once the - # reverse-direction connection is established) - d1 = self.tub2.getReference(self.url1) - # give it a moment to get to the point where it stalls - d = self.stall(None, 0.1) - d.addCallback(self._connect, d1) - return d, d1 - def _connect(self, res, d1): - # now initiate connection[0], from tub1 to tub2 - d2 = self.tub1.getReference(self.url2) - return d2 - - def checkConnectedViaReverse(self, rref, targetPhases): - # assert that connection[0] (from tub1 to tub2) is actually in use. - # This connection uses a per-client allocated port number for the - # tub1 side, and the tub2 Listener's port for the tub2 side. - # Therefore tub1's Broker (as used by its RemoteReference) will have - # a far-end port number that should match tub2's Listener. - self.failUnlessEqual(rref.tracker.broker.transport.getPeer().port, - self.tub2.getListeners()[0].getPortnum()) - # in addition, connection[1] should have been abandoned during a - # specific phase. - self.failUnlessEqual(self.tub2phases, targetPhases) - - -class CrossfireReverse(CrossfireMixin, unittest.TestCase): - # just like the following Crossfire except that tub2 is the master, just - # in case it makes a difference somewhere - tub1IsMaster = False - - def test1(self): - # in this test, tub2 isn't listening at all. 
So not only will - # connection[1] fail, the tub2.getReference that uses it will fail - # too (whereas in all other tests, connection[1] is abandoned but - # tub2.getReference succeeds) - self.makeServers(lo1={'debug_slow_connectionMade': True}) - d = self.tub2.stopListeningOn(self.tub2.getListeners()[0]) - d.addCallback(self._test1_1) - return d - - def _test1_1(self, res): - d,d1 = self.connect() - d.addCallbacks(lambda res: self.fail("hey! this is supposed to fail"), - self._test1_2, errbackArgs=(d1,)) - return d - def _test1_2(self, why, d1): - from twisted.internet import error - self.failUnless(why.check(error.ConnectionRefusedError)) - # but now the other getReference should succeed - return d1 - test1.timeout = 10 - - def test2(self): - self.makeServers(lo1={'debug_slow_connectionMade': True}) - d,d1 = self.connect() - d.addCallback(self.checkConnectedViaReverse, [negotiate.PLAINTEXT]) - d.addCallback(lambda res: d1) # other getReference should work too - return d - test2.timeout = 10 - - def test3(self): - self.makeServers(lo1={'debug_slow_sendPlaintextServer': True}) - d,d1 = self.connect() - d.addCallback(self.checkConnectedViaReverse, [negotiate.PLAINTEXT]) - d.addCallback(lambda res: d1) # other getReference should work too - return d - test3.timeout = 10 - - def test4(self): - self.makeServers(lo1={'debug_slow_sendHello': True}) - d,d1 = self.connect() - d.addCallback(self.checkConnectedViaReverse, [negotiate.ENCRYPTED]) - d.addCallback(lambda res: d1) # other getReference should work too - return d - test4.timeout = 10 - -class Crossfire(CrossfireReverse): - tub1IsMaster = True - - def test5(self): - # this is the only test where we rely upon the fact that - # makeServers() always puts the higher-numbered Tub (which will be - # the master) in self.tub1 - - # connection[1] (the abandoned connection) is started from tub2 to - # tub1. 
It connects, begins negotiation (tub1 is the master), but - # then is stalled because we've added the debug_slow_sendDecision - # flag to tub1's Listener. That allows connection[0] to begin from - # tub1 to tub2, which is *not* stalled (because we added the slowdown - # flag to the Listener's options, not tub1.options), so it completes - # normally. When connection[1] is unpaused and hits switchToBanana, - # it discovers that it already has a Broker in place, and the - # connection is abandoned. - - self.makeServers(lo1={'debug_slow_sendDecision': True}) - d,d1 = self.connect() - d.addCallback(self.checkConnectedViaReverse, [negotiate.DECIDING]) - d.addCallback(lambda res: d1) # other getReference should work too - return d - test5.timeout = 10 - -# TODO: some of these tests cause the TLS connection to be abandoned, and it -# looks like TLS sockets don't shut down very cleanly. I see connectionLost -# getting called with the following error (instead of a normal ConnectionDone -# exception): -# 2005/10/10 19:56 PDT [Negotiation,0,127.0.0.1] -# Negotiation.negotiationFailed: [Failure instance: Traceback: -# exceptions.AttributeError: TLSConnection instance has no attribute 'socket' -# twisted/internet/tcp.py:402:connectionLost -# twisted/pb/negotiate.py:366:connectionLost -# twisted/pb/negotiate.py:205:debug_forceTimer -# twisted/pb/negotiate.py:223:debug_fireTimer -# --- --- -# twisted/pb/negotiate.py:324:dataReceived -# twisted/pb/negotiate.py:432:handlePLAINTEXTServer -# twisted/pb/negotiate.py:457:sendPlaintextServerAndStartENCRYPTED -# twisted/pb/negotiate.py:494:startENCRYPTED -# twisted/pb/negotiate.py:768:startTLS -# twisted/internet/tcp.py:693:startTLS -# twisted/internet/tcp.py:314:startTLS -# ] -# -# specifically, I saw this happen for CrossfireReverse.test2, Parallel.test2 - -# other tests don't do quite what I want: closing a connection (say, due to a -# duplicate broker) should send a sensible error message to the other side, -# rather than triggering a 
low-level protocol error. - - -class Existing(CrossfireMixin, unittest.TestCase): - - def checkNumBrokers(self, res, expected, dummy): - if type(expected) not in (tuple,list): - expected = [expected] - self.failUnless(len(self.tub1.brokers) + - len(self.tub1.unauthenticatedBrokers) in expected) - self.failUnless(len(self.tub2.brokers) + - len(self.tub2.unauthenticatedBrokers) in expected) - - def testAuthenticated(self): - # When two authenticated Tubs connect, that connection should be used - # in the reverse connection too - self.makeServers() - d = self.tub1.getReference(self.url2) - d.addCallback(self._testAuthenticated_1) - return d - def _testAuthenticated_1(self, r12): - # this should use the existing connection - d = self.tub2.getReference(self.url1) - d.addCallback(self.checkNumBrokers, 1, (r12,)) - return d - - def testUnauthenticated(self): - # But when two non-authenticated Tubs connect, they don't get to - # share connections. - self.makeServers(tubAauthenticated=False, tubBauthenticated=False) - # the non-authenticated Tub gets a tubID of None, so it becomes tub2. - # We want to verify that connections are not shared regardless of - # which direction is authenticated. In this test, the first - # connection - d = self.tub1.getReference(self.url2) - d.addCallback(self._testUnauthenticated_1) - return d - def _testUnauthenticated_1(self, r12): - # this should *not* use the existing connection - d = self.tub2.getReference(self.url1) - d.addCallback(self.checkNumBrokers, 2, (r12,)) - return d - - def testHalfAuthenticated1(self): - # When an authenticated Tub connects to a non-authenticated Tub, the - # reverse connection *is* allowed to share the connection (although, - # due to what I think are limitations in SSL, it probably won't) - self.makeServers(tubAauthenticated=True, tubBauthenticated=False) - # The non-authenticated Tub gets a tubID of None, so it becomes tub2. - # Therefore this is the authenticated-to-non-authenticated - # connection. 
- d = self.tub1.getReference(self.url2) - d.addCallback(self._testHalfAuthenticated1_1) - return d - def _testHalfAuthenticated1_1(self, r12): - d = self.tub2.getReference(self.url1) - d.addCallback(self.checkNumBrokers, (1,2), (r12,)) - return d - - def testHalfAuthenticated2(self): - # On the other hand, when a non-authenticated Tub connects to an - # authenticated Tub, the reverse connection is forbidden (because the - # non-authenticated Tub's identity is based upon its Listener's - # location) - self.makeServers(tubAauthenticated=True, tubBauthenticated=False) - # The non-authenticated Tub gets a tubID of None, so it becomes tub2. - # Therefore this is the authenticated-to-non-authenticated - # connection. - d = self.tub2.getReference(self.url1) - d.addCallback(self._testHalfAuthenticated2_1) - return d - def _testHalfAuthenticated2_1(self, r21): - d = self.tub1.getReference(self.url2) - d.addCallback(self.checkNumBrokers, 2, (r21,)) - return d - -# this test will have to change when the regular Negotiation starts using -# different decision blocks. The version numbers must be updated each time -# the negotiation version is changed. 
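The decision rule those version numbers feed — each peer advertises an inclusive [minVersion, maxVersion] range, and the deciding side picks the highest version both peers understand, failing negotiation when the ranges are disjoint — can be sketched as follows (a simplified model of the rule the Future/TooFarInFuture tests exercise, not foolscap's actual negotiation code):

```python
# Simplified model of foolscap's banana-version negotiation: the
# master picks the highest version inside the overlap of both
# advertised [min, max] ranges, or fails if there is no overlap.

class NegotiationError(Exception):
    pass

def decide_version(my_range, their_range):
    my_min, my_max = my_range
    their_min, their_max = their_range
    best = min(my_max, their_max)
    if best < max(my_min, their_min):
        raise NegotiationError("no mutually-supported version")
    return best

# like testFuture1: a [1..3] peer and a [1..4] peer agree on 3
assert decide_version((1, 3), (1, 4)) == 3
```

A peer that only understands an unhandled future version (like NegotiationVbigOnly's [4..4] against a stock [1..3] peer) hits the disjoint-range branch, which mirrors the NegotiationError the testTooFarInFuture* cases trap.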
-assert negotiate.Negotiation.maxVersion == 3 -MAX_HANDLED_VERSION = negotiate.Negotiation.maxVersion -UNHANDLED_VERSION = 4 -class NegotiationVbig(negotiate.Negotiation): - maxVersion = UNHANDLED_VERSION - def __init__(self): - negotiate.Negotiation.__init__(self) - self.negotiationOffer["extra"] = "new value" - def evaluateNegotiationVersion4(self, offer): - # just like v1, but different - return self.evaluateNegotiationVersion1(offer) - def acceptDecisionVersion4(self, decision): - return self.acceptDecisionVersion1(decision) - -class NegotiationVbigOnly(NegotiationVbig): - minVersion = UNHANDLED_VERSION - -class Future(BaseMixin, unittest.TestCase): - def testFuture1(self): - # when a peer that understands version=[1] connects to a peer - # that understands version=[1,2], they should pick version=1 - - # the listening Tub will have the higher tubID, and thus make the - # negotiation decision - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_high) - # the client - client = Tub(certData=certData_low) - client.negotiationClass = NegotiationVbig - client.startService() - self.services.append(client) - d = client.getReference(url) - def _check_version(rref): - ver = rref.tracker.broker._banana_decision_version - self.failUnlessEqual(ver, MAX_HANDLED_VERSION) - d.addCallback(_check_version) - return d - testFuture1.timeout = 10 - - def testFuture2(self): - # same as before, but the connecting Tub will have the higher tubID, - # and thus make the negotiation decision - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_low) - # the client - client = Tub(certData=certData_high) - client.negotiationClass = NegotiationVbig - client.startService() - self.services.append(client) - d = client.getReference(url) - def _check_version(rref): - ver = rref.tracker.broker._banana_decision_version - self.failUnlessEqual(ver, MAX_HANDLED_VERSION) - d.addCallback(_check_version) - return d - testFuture2.timeout = 10 - - def 
testFuture3(self): - # same as testFuture1, but it is the listening server that - # understands [1,2] - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_high, NegotiationVbig) - client = Tub(certData=certData_low) - client.startService() - self.services.append(client) - d = client.getReference(url) - def _check_version(rref): - ver = rref.tracker.broker._banana_decision_version - self.failUnlessEqual(ver, MAX_HANDLED_VERSION) - d.addCallback(_check_version) - return d - testFuture3.timeout = 10 - - def testFuture4(self): - # same as testFuture2, but it is the listening server that - # understands [1,2] - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_low, NegotiationVbig) - # the client - client = Tub(certData=certData_high) - client.startService() - self.services.append(client) - d = client.getReference(url) - def _check_version(rref): - ver = rref.tracker.broker._banana_decision_version - self.failUnlessEqual(ver, MAX_HANDLED_VERSION) - d.addCallback(_check_version) - return d - testFuture4.timeout = 10 - - def testTooFarInFuture1(self): - # when a peer that understands version=[1] connects to a peer - # that only understands version=[2], they should fail to negotiate - - # the listening Tub will have the higher tubID, and thus make the - # negotiation decision - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_high) - # the client - client = Tub(certData=certData_low) - client.negotiationClass = NegotiationVbigOnly - client.startService() - self.services.append(client) - d = client.getReference(url) - def _oops_succeeded(rref): - self.fail("hey! 
this is supposed to fail") - def _check_failure(f): - f.trap(tokens.NegotiationError) - d.addCallbacks(_oops_succeeded, _check_failure) - return d - testTooFarInFuture1.timeout = 10 - - def testTooFarInFuture2(self): - # same as before, but the connecting Tub will have the higher tubID, - # and thus make the negotiation decision - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_low) - client = Tub(certData=certData_high) - client.negotiationClass = NegotiationVbigOnly - client.startService() - self.services.append(client) - d = client.getReference(url) - def _oops_succeeded(rref): - self.fail("hey! this is supposed to fail") - def _check_failure(f): - f.trap(tokens.NegotiationError) - d.addCallbacks(_oops_succeeded, _check_failure) - return d - testTooFarInFuture2.timeout = 10 - - def testTooFarInFuture3(self): - # same as testTooFarInFuture1, but it is the listening server which - # only understands [2] - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_high, - NegotiationVbigOnly) - client = Tub(certData=certData_low) - client.startService() - self.services.append(client) - d = client.getReference(url) - def _oops_succeeded(rref): - self.fail("hey! this is supposed to fail") - def _check_failure(f): - f.trap(tokens.NegotiationError) - d.addCallbacks(_oops_succeeded, _check_failure) - return d - testTooFarInFuture3.timeout = 10 - - def testTooFarInFuture4(self): - # same as testTooFarInFuture2, but it is the listening server which - # only understands [2] - self.requireCrypto() - url, portnum = self.makeSpecificServer(certData_low, - NegotiationVbigOnly) - client = Tub(certData=certData_high) - client.startService() - self.services.append(client) - d = client.getReference(url) - def _oops_succeeded(rref): - self.fail("hey! 
this is supposed to fail") - def _check_failure(f): - f.trap(tokens.NegotiationError) - d.addCallbacks(_oops_succeeded, _check_failure) - return d - testTooFarInFuture4.timeout = 10 - -# disable all tests unless NEWPB_TEST_NEGOTIATION is set in the environment. -# The negotiation tests are sensitive to system load, and the intermittent -# failures are really annoying. The 'right' solution to this involves -# completely rearchitecting connection establishment, to provide debug/test -# hooks to get control in between the various phases. It also requires -# creating a loopback connection type (as a peer of TCP) which has -# deterministic timing behavior. - -#import os -if False: #not os.environ.get("NEWPB_TEST_NEGOTIATION"): - del Basic - del Versus - del Parallel - del CrossfireReverse - del Crossfire - del Existing diff --git a/src/foolscap/foolscap/test/test_observer.py b/src/foolscap/foolscap/test/test_observer.py deleted file mode 100644 index 6d6dab6a..00000000 --- a/src/foolscap/foolscap/test/test_observer.py +++ /dev/null @@ -1,23 +0,0 @@ -# -*- test-case-name: foolscap.test.test_observer -*- - -from twisted.trial import unittest -from twisted.internet import defer -from foolscap import observer - -class Observer(unittest.TestCase): - def test_oneshot(self): - ol = observer.OneShotObserverList() - rep = repr(ol) - d1 = ol.whenFired() - d2 = ol.whenFired() - def _addmore(res): - self.failUnlessEqual(res, "result") - d3 = ol.whenFired() - d3.addCallback(self.failUnlessEqual, "result") - return d3 - d1.addCallback(_addmore) - ol.fire("result") - rep = repr(ol) - d4 = ol.whenFired() - dl = defer.DeferredList([d1,d2,d4]) - return dl diff --git a/src/foolscap/foolscap/test/test_pb.py b/src/foolscap/foolscap/test/test_pb.py deleted file mode 100644 index 4d69a554..00000000 --- a/src/foolscap/foolscap/test/test_pb.py +++ /dev/null @@ -1,705 +0,0 @@ -# -*- test-case-name: foolscap.test.test_pb -*- - -import re - -if False: - import sys - from twisted.python import log - 
log.startLogging(sys.stderr) - -from twisted.python import failure, log, reflect -from twisted.internet import defer -from twisted.trial import unittest - -from foolscap import tokens, referenceable -from foolscap import Tub, UnauthenticatedTub -from foolscap import getRemoteURL_TCP -from foolscap.tokens import BananaError, Violation, INT, STRING, OPEN -from foolscap.tokens import BananaFailure -from foolscap import broker, call -from foolscap.constraint import IConstraint - -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - -# we use authenticated tubs if possible. If crypto is not available, fall -# back to unauthenticated ones -GoodEnoughTub = UnauthenticatedTub -if crypto_available: - GoodEnoughTub = Tub - -from foolscap.test.common import HelperTarget, RIHelper, TargetMixin -from foolscap.eventual import flushEventualQueue - -from foolscap.test.common import Target, TargetWithoutInterfaces - - -class TestRequest(call.PendingRequest): - def __init__(self, reqID, rref=None): - self.answers = [] - call.PendingRequest.__init__(self, reqID, rref) - def complete(self, res): - self.answers.append((True, res)) - def fail(self, why): - self.answers.append((False, why)) - -class TestReferenceUnslicer(unittest.TestCase): - # OPEN(reference), INT(refid), [STR(interfacename), INT(version)]... 
CLOSE - def setUp(self): - self.broker = broker.Broker() - - def tearDown(self): - return flushEventualQueue() - - def newUnslicer(self): - unslicer = referenceable.ReferenceUnslicer() - unslicer.broker = self.broker - unslicer.opener = self.broker.rootUnslicer - return unslicer - - def testReject(self): - u = self.newUnslicer() - self.failUnlessRaises(BananaError, u.checkToken, STRING, 10) - u = self.newUnslicer() - self.failUnlessRaises(BananaError, u.checkToken, OPEN, 0) - - def testNoInterfaces(self): - u = self.newUnslicer() - u.checkToken(INT, 0) - u.receiveChild(12) - rr1,rr1d = u.receiveClose() - self.failUnless(rr1d is None) - rr2 = self.broker.getTrackerForYourReference(12).getRef() - self.failUnless(rr2) - self.failUnless(isinstance(rr2, referenceable.RemoteReference)) - self.failUnlessEqual(rr2.tracker.broker, self.broker) - self.failUnlessEqual(rr2.tracker.clid, 12) - self.failUnlessEqual(rr2.tracker.interfaceName, None) - - def testInterfaces(self): - u = self.newUnslicer() - u.checkToken(INT, 0) - u.receiveChild(12) - u.receiveChild("IBar") - rr1,rr1d = u.receiveClose() - self.failUnless(rr1d is None) - rr2 = self.broker.getTrackerForYourReference(12).getRef() - self.failUnless(rr2) - self.failUnlessIdentical(rr1, rr2) - self.failUnless(isinstance(rr2, referenceable.RemoteReference)) - self.failUnlessEqual(rr2.tracker.broker, self.broker) - self.failUnlessEqual(rr2.tracker.clid, 12) - self.failUnlessEqual(rr2.tracker.interfaceName, "IBar") - -class TestAnswer(unittest.TestCase): - # OPEN(answer), INT(reqID), [answer], CLOSE - def setUp(self): - self.broker = broker.Broker() - - def tearDown(self): - return flushEventualQueue() - - def newUnslicer(self): - unslicer = call.AnswerUnslicer() - unslicer.broker = self.broker - unslicer.opener = self.broker.rootUnslicer - unslicer.protocol = self.broker - return unslicer - - def makeRequest(self): - req = call.PendingRequest(defer.Deferred()) - - def testAccept1(self): - req = TestRequest(12) - 
self.broker.addRequest(req) - u = self.newUnslicer() - u.start(0) - u.checkToken(INT, 0) - u.receiveChild(12) # causes broker.getRequest - u.checkToken(STRING, 8) - u.receiveChild("results") - self.failIf(req.answers) - u.receiveClose() # causes broker.gotAnswer - self.failUnlessEqual(req.answers, [(True, "results")]) - - def testAccept2(self): - req = TestRequest(12) - req.setConstraint(IConstraint(str)) - self.broker.addRequest(req) - u = self.newUnslicer() - u.start(0) - u.checkToken(INT, 0) - u.receiveChild(12) # causes broker.getRequest - u.checkToken(STRING, 15) - u.receiveChild("results") - self.failIf(req.answers) - u.receiveClose() # causes broker.gotAnswer - self.failUnlessEqual(req.answers, [(True, "results")]) - - - def testReject1(self): - # answer a non-existent request - req = TestRequest(12) - self.broker.addRequest(req) - u = self.newUnslicer() - u.checkToken(INT, 0) - self.failUnlessRaises(Violation, u.receiveChild, 13) - - def testReject2(self): - # answer a request with a result that violates the constraint - req = TestRequest(12) - req.setConstraint(IConstraint(int)) - self.broker.addRequest(req) - u = self.newUnslicer() - u.checkToken(INT, 0) - u.receiveChild(12) - self.failUnlessRaises(Violation, u.checkToken, STRING, 42) - # this does not yet errback the request - self.failIf(req.answers) - # it gets errbacked when banana reports the violation - v = Violation("icky") - v.setLocation("here") - u.reportViolation(BananaFailure(v)) - self.failUnlessEqual(len(req.answers), 1) - err = req.answers[0] - self.failIf(err[0]) - f = err[1] - self.failUnless(f.check(Violation)) - - - -class TestReferenceable(TargetMixin, unittest.TestCase): - # test how a Referenceable gets transformed into a RemoteReference as it - # crosses the wire, then verify that it gets transformed back into the - # original Referenceable when it comes back. Also test how shared - # references to the same object are handled. 
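The identity guarantee the TestReferenceable cases verify — sending the same Referenceable twice yields the *same* RemoteReference on the far side, keyed by a connection-local ID (clid) — can be sketched like this (hypothetical class names, not foolscap's real Broker API):

```python
# Sketch of per-connection reference tracking: the sending side
# assigns each Referenceable a connection-local clid exactly once,
# and the receiving side caches one RemoteReference per clid, so
# repeated sends deserialize to the identical proxy object.
# Hypothetical names, not foolscap's actual classes.

class RemoteReference:
    def __init__(self, clid):
        self.clid = clid

class Broker:
    def __init__(self):
        self._clids = {}      # id(obj) -> clid   (sending side)
        self._trackers = {}   # clid -> RemoteReference (receiving side)
        self._next_clid = 1

    def put(self, obj):
        # reuse the clid if this object has been sent before
        key = id(obj)
        if key not in self._clids:
            self._clids[key] = self._next_clid
            self._next_clid += 1
        return self._clids[key]

    def get(self, clid):
        # one RemoteReference per clid, shared by every lookup
        if clid not in self._trackers:
            self._trackers[clid] = RemoteReference(clid)
        return self._trackers[clid]

broker = Broker()
target = object()
clid = broker.put(target)
assert broker.put(target) == clid        # same object, same clid
assert broker.get(clid) is broker.get(clid)  # identical RemoteReference
```

This is why testRef2 can assert `res1 is res2` across two separate sends: the clid, not the serialized bytes, is what identifies the reference on the wire.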
- - def setUp(self): - TargetMixin.setUp(self) - self.setupBrokers() - if 0: - print - self.callingBroker.doLog = "TX" - self.targetBroker.doLog = " rx" - - def send(self, arg): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set", obj=arg) - d.addCallback(self.failUnless) - d.addCallback(lambda res: target.obj) - return d - - def send2(self, arg1, arg2): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set2", obj1=arg1, obj2=arg2) - d.addCallback(self.failUnless) - d.addCallback(lambda res: (target.obj1, target.obj2)) - return d - - def echo(self, arg): - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("echo", obj=arg) - return d - - def testRef1(self): - # Referenceables turn into RemoteReferences - r = Target() - d = self.send(r) - d.addCallback(self._testRef1_1, r) - return d - def _testRef1_1(self, res, r): - t = res.tracker - self.failUnless(isinstance(res, referenceable.RemoteReference)) - self.failUnlessEqual(t.broker, self.targetBroker) - self.failUnless(type(t.clid) is int) - self.failUnless(self.callingBroker.getMyReferenceByCLID(t.clid) is r) - self.failUnlessEqual(t.interfaceName, 'RIMyTarget') - - def testRef2(self): - # sending a Referenceable over the wire multiple times should result - # in equivalent RemoteReferences - r = Target() - d = self.send(r) - d.addCallback(self._testRef2_1, r) - return d - def _testRef2_1(self, res1, r): - d = self.send(r) - d.addCallback(self._testRef2_2, res1) - return d - def _testRef2_2(self, res2, res1): - self.failUnless(res1 == res2) - self.failUnless(res1 is res2) # newpb does this, oldpb didn't - - def testRef3(self): - # sending the same Referenceable in multiple arguments should result - # in equivalent RRs - r = Target() - d = self.send2(r, r) - d.addCallback(self._testRef3_1) - return d - def _testRef3_1(self, (res1, res2)): - self.failUnless(res1 == res2) - self.failUnless(res1 is res2) - - def testRef4(self): - # sending the same Referenceable in 
multiple calls will result in - # equivalent RRs - r = Target() - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set", obj=r) - d.addCallback(self._testRef4_1, rr, r, target) - return d - def _testRef4_1(self, res, rr, r, target): - res1 = target.obj - d = rr.callRemote("set", obj=r) - d.addCallback(self._testRef4_2, target, res1) - return d - def _testRef4_2(self, res, target, res1): - res2 = target.obj - self.failUnless(res1 == res2) - self.failUnless(res1 is res2) - - def testRef5(self): - # those RemoteReferences can be used to invoke methods on the sender. - # 'r' lives on side A. The anonymous target lives on side B. From - # side A we invoke B.set(r), and we get the matching RemoteReference - # 'rr' which lives on side B. Then we use 'rr' to invoke r.getName - # from side A. - r = Target() - r.name = "ernie" - d = self.send(r) - d.addCallback(lambda rr: rr.callRemote("getName")) - d.addCallback(self.failUnlessEqual, "ernie") - return d - - def testRef6(self): - # Referenceables survive round-trips - r = Target() - d = self.echo(r) - d.addCallback(self.failUnlessIdentical, r) - return d - -## def NOTtestRemoteRef1(self): -## # known URLRemoteReferences turn into Referenceables -## root = Target() -## rr, target = self.setupTarget(HelperTarget()) -## self.targetBroker.factory = pb.PBServerFactory(root) -## urlRRef = self.callingBroker.remoteReferenceForName("", []) -## # urlRRef points at root -## d = rr.callRemote("set", obj=urlRRef) -## self.failUnless(dr(d)) - -## self.failUnlessIdentical(target.obj, root) - -## def NOTtestRemoteRef2(self): -## # unknown URLRemoteReferences are errors -## root = Target() -## rr, target = self.setupTarget(HelperTarget()) -## self.targetBroker.factory = pb.PBServerFactory(root) -## urlRRef = self.callingBroker.remoteReferenceForName("bogus", []) -## # urlRRef points at nothing -## d = rr.callRemote("set", obj=urlRRef) -## f = de(d) -## #print f -## #self.failUnlessEqual(f.type, tokens.Violation) -## 
self.failUnlessEqual(type(f.value), str) -## self.failUnless(f.value.find("unknown clid 'bogus'") != -1) - - def testArgs1(self): - # sending the same non-Referenceable object in multiple calls results - # in distinct objects, because the serialization scope is bounded by - # each method call - r = [1,2] - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set", obj=r) - d.addCallback(self._testArgs1_1, rr, r, target) - # TODO: also make sure the original list goes out of scope once the - # method call has finished, to guard against a leaky - # reference-tracking implementation. - return d - def _testArgs1_1(self, res, rr, r, target): - res1 = target.obj - d = rr.callRemote("set", obj=r) - d.addCallback(self._testArgs1_2, target, res1) - return d - def _testArgs1_2(self, res, target, res1): - res2 = target.obj - self.failUnless(res1 == res2) - self.failIf(res1 is res2) - - def testArgs2(self): - # but sending them as multiple arguments of the *same* method call - # results in identical objects - r = [1,2] - rr, target = self.setupTarget(HelperTarget()) - d = rr.callRemote("set2", obj1=r, obj2=r) - d.addCallback(self._testArgs2_1, rr, target) - return d - def _testArgs2_1(self, res, rr, target): - self.failUnlessIdentical(target.obj1, target.obj2) - - def testAnswer1(self): - # also, shared objects in a return value should be shared - r = [1,2] - rr, target = self.setupTarget(HelperTarget()) - target.obj = (r,r) - d = rr.callRemote("get") - d.addCallback(lambda res: self.failUnlessIdentical(res[0], res[1])) - return d - - def testAnswer2(self): - # but objects returned by separate method calls should be distinct - rr, target = self.setupTarget(HelperTarget()) - r = [1,2] - target.obj = r - d = rr.callRemote("get") - d.addCallback(self._testAnswer2_1, rr, target) - return d - def _testAnswer2_1(self, res1, rr, target): - d = rr.callRemote("get") - d.addCallback(self._testAnswer2_2, res1) - return d - def _testAnswer2_2(self, res2, res1): - 
self.failUnless(res1 == res2) - self.failIf(res1 is res2) - - -class TestFactory(unittest.TestCase): - def setUp(self): - self.client = None - self.server = None - - def gotReference(self, ref): - self.client = ref - - def tearDown(self): - if self.client: - self.client.broker.transport.loseConnection() - if self.server: - d = self.server.stopListening() - else: - d = defer.succeed(None) - d.addCallback(flushEventualQueue) - return d - -class TestCallable(unittest.TestCase): - def setUp(self): - self.services = [GoodEnoughTub(), GoodEnoughTub()] - self.tubA, self.tubB = self.services - for s in self.services: - s.startService() - l = s.listenOn("tcp:0:interface=127.0.0.1") - s.setLocation("127.0.0.1:%d" % l.getPortnum()) - self._log_observers_to_remove = [] - - def addLogObserver(self, observer): - log.addObserver(observer) - self._log_observers_to_remove.append(observer) - - def tearDown(self): - for lo in self._log_observers_to_remove: - log.removeObserver(lo) - d = defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(flushEventualQueue) - return d - - def testLogLocalFailure(self): - self.tubB.setOption("logLocalFailures", True) - target = Target() - logs = [] - self.addLogObserver(logs.append) - url = self.tubB.registerReference(target) - d = self.tubA.getReference(url) - d.addCallback(lambda rref: rref.callRemote("fail")) - # this will cause some text to be logged with log.msg. TODO: capture - # this text and look at it more closely. 
-        def _check(res):
-            self.failUnless(isinstance(res, failure.Failure))
-            res.trap(ValueError)
-            messages = [l['message'][0] for l in logs]
-            text = "\n".join(messages)
-            self.failUnless("an inbound callRemote that we executed (on behalf of someone else) failed\n" in text)
-            self.failUnless(("\n reqID=2, rref=<%s>, methname=fail\n"
-                             % url) in text)
-            #self.failUnless("\n args=[]\n" in text) # TODO: log these too
-            #self.failUnless("\n kwargs={}\n" in text)
-            self.failUnless("\nREMOTE: Traceback from remote host -- Traceback (most recent call last):\n"
-                            in text)
-            self.failUnless("\nREMOTE: exceptions.ValueError: you asked me to fail\n" in text)
-        d.addBoth(_check)
-        return d
-
-    def testBoundMethod(self):
-        target = Target()
-        meth_url = self.tubB.registerReference(target.remote_add)
-        d = self.tubA.getReference(meth_url)
-        d.addCallback(self._testBoundMethod_1)
-        return d
-    testBoundMethod.timeout = 5
-    def _testBoundMethod_1(self, ref):
-        self.failUnless(isinstance(ref, referenceable.RemoteMethodReference))
-        #self.failUnlessEqual(ref.getSchemaName(),
-        #                     RIMyTarget.__remote_name__ + "/remote_add")
-        d = ref.callRemote(a=1, b=2)
-        d.addCallback(lambda res: self.failUnlessEqual(res, 3))
-        return d
-
-    def testFunction(self):
-        l = []
-        # we need a keyword arg here
-        def append(what):
-            l.append(what)
-        func_url = self.tubB.registerReference(append)
-        d = self.tubA.getReference(func_url)
-        d.addCallback(self._testFunction_1, l)
-        return d
-    testFunction.timeout = 5
-    def _testFunction_1(self, ref, l):
-        self.failUnless(isinstance(ref, referenceable.RemoteMethodReference))
-        d = ref.callRemote(what=12)
-        d.addCallback(lambda res: self.failUnlessEqual(l, [12]))
-        return d
-
-
-class TestService(unittest.TestCase):
-    def setUp(self):
-        self.services = [GoodEnoughTub()]
-        self.services[0].startService()
-
-    def tearDown(self):
-        d = defer.DeferredList([s.stopService() for s in self.services])
-        d.addCallback(flushEventualQueue)
-        return d
-
-    def testRegister(self):
-        s =
self.services[0] - l = s.listenOn("tcp:0:interface=127.0.0.1") - s.setLocation("127.0.0.1:%d" % l.getPortnum()) - t1 = Target() - public_url = s.registerReference(t1, "target") - if crypto_available: - self.failUnless(public_url.startswith("pb://")) - self.failUnless(public_url.endswith("@127.0.0.1:%d/target" - % l.getPortnum())) - else: - self.failUnlessEqual(public_url, - "pbu://127.0.0.1:%d/target" - % l.getPortnum()) - self.failUnlessEqual(s.registerReference(t1, "target"), public_url) - self.failUnlessIdentical(s.getReferenceForURL(public_url), t1) - t2 = Target() - private_url = s.registerReference(t2) - self.failUnlessEqual(s.registerReference(t2), private_url) - self.failUnlessIdentical(s.getReferenceForURL(private_url), t2) - - s.unregisterURL(public_url) - self.failUnlessRaises(KeyError, s.getReferenceForURL, public_url) - - s.unregisterReference(t2) - self.failUnlessRaises(KeyError, s.getReferenceForURL, private_url) - - # TODO: check what happens when you register the same referenceable - # under multiple URLs - - def getRef(self, target): - self.services.append(GoodEnoughTub()) - s1 = self.services[0] - s2 = self.services[1] - s2.startService() - l = s1.listenOn("tcp:0:interface=127.0.0.1") - s1.setLocation("127.0.0.1:%d" % l.getPortnum()) - public_url = s1.registerReference(target, "target") - self.public_url = public_url - d = s2.getReference(public_url) - return d - - def testConnect1(self): - t1 = TargetWithoutInterfaces() - d = self.getRef(t1) - d.addCallback(lambda ref: ref.callRemote('add', a=2, b=3)) - d.addCallback(self._testConnect1, t1) - return d - testConnect1.timeout = 5 - def _testConnect1(self, res, t1): - self.failUnlessEqual(t1.calls, [(2,3)]) - self.failUnlessEqual(res, 5) - - def testConnect2(self): - t1 = Target() - d = self.getRef(t1) - d.addCallback(lambda ref: ref.callRemote('add', a=2, b=3)) - d.addCallback(self._testConnect2, t1) - return d - testConnect2.timeout = 5 - def _testConnect2(self, res, t1): - 
self.failUnlessEqual(t1.calls, [(2,3)]) - self.failUnlessEqual(res, 5) - - - def testConnect3(self): - # test that we can get the reference multiple times - t1 = Target() - d = self.getRef(t1) - d.addCallback(lambda ref: ref.callRemote('add', a=2, b=3)) - def _check(res): - self.failUnlessEqual(t1.calls, [(2,3)]) - self.failUnlessEqual(res, 5) - t1.calls = [] - d.addCallback(_check) - d.addCallback(lambda res: - self.services[1].getReference(self.public_url)) - d.addCallback(lambda ref: ref.callRemote('add', a=5, b=6)) - def _check2(res): - self.failUnlessEqual(t1.calls, [(5,6)]) - self.failUnlessEqual(res, 11) - d.addCallback(_check2) - return d - testConnect3.timeout = 5 - - def TODO_testStatic(self): - # make sure we can register static data too, at least hashable ones - t1 = (1,2,3) - d = self.getRef(t1) - d.addCallback(lambda ref: self.failUnlessEqual(ref, (1,2,3))) - return d - #testStatic.timeout = 2 - - def testBadMethod(self): - t1 = Target() - d = self.getRef(t1) - d.addCallback(lambda ref: ref.callRemote('missing', a=2, b=3)) - d.addCallbacks(self._testBadMethod_cb, self._testBadMethod_eb) - return d - testBadMethod.timeout = 5 - def _testBadMethod_cb(self, res): - self.fail("method wasn't supposed to work") - def _testBadMethod_eb(self, f): - #self.failUnlessEqual(f.type, 'foolscap.tokens.Violation') - self.failUnlessEqual(f.type, Violation) - self.failUnless(re.search(r'RIMyTarget\(.*\) does not offer missing', - str(f))) - - def testBadMethod2(self): - t1 = TargetWithoutInterfaces() - d = self.getRef(t1) - d.addCallback(lambda ref: ref.callRemote('missing', a=2, b=3)) - d.addCallbacks(self._testBadMethod_cb, self._testBadMethod2_eb) - return d - testBadMethod2.timeout = 5 - def _testBadMethod2_eb(self, f): - self.failUnlessEqual(reflect.qual(f.type), 'exceptions.AttributeError') - self.failUnlessSubstring("TargetWithoutInterfaces", f.value) - self.failUnlessSubstring(" has no attribute 'remote_missing'", f.value) - - -class ThreeWayHelper: - passed = 
False - - def start(self): - d = getRemoteURL_TCP("127.0.0.1", self.portnum1, "", RIHelper) - d.addCallback(self.step2) - d.addErrback(self.err) - return d - - def step2(self, remote1): - # .remote1 is our RRef to server1's "t1" HelperTarget - self.clients.append(remote1) - self.remote1 = remote1 - d = getRemoteURL_TCP("127.0.0.1", self.portnum2, "", RIHelper) - d.addCallback(self.step3) - return d - - def step3(self, remote2): - # and .remote2 is our RRef to server2's "t2" helper target - self.clients.append(remote2) - self.remote2 = remote2 - # sending a RemoteReference back to its source should be ok - d = self.remote1.callRemote("set", obj=self.remote1) - d.addCallback(self.step4) - return d - - def step4(self, res): - assert self.target1.obj is self.target1 - # but sending one to someone else is not - d = self.remote2.callRemote("set", obj=self.remote1) - d.addCallback(self.step5_callback) - d.addErrback(self.step5_errback) - return d - - def step5_callback(self, res): - why = unittest.FailTest("sending a 3rd-party reference did not fail") - self.err(failure.Failure(why)) - return None - - def step5_errback(self, why): - bad = None - if why.type != tokens.Violation: - bad = "%s failure should be a Violation" % why.type - elif why.value.args[0].find("RemoteReferences can only be sent back to their home Broker") == -1: - bad = "wrong error message: '%s'" % why.value.args[0] - if bad: - why = unittest.FailTest(bad) - self.passed = failure.Failure(why) - else: - self.passed = True - - def err(self, why): - self.passed = why - - -# TODO: -# when the Violation is remote, it is reported in a CopiedFailure, which -# means f.type is a string. When it is local, it is reported in a Failure, -# and f.type is the tokens.Violation class. I'm not sure how I feel about -# these being different. - -# TODO: tests to port from oldpb suite -# testTooManyRefs: sending pb.MAX_BROKER_REFS across the wire should die -# testFactoryCopy? 
- -# tests which aren't relevant right now but which might be once we port the -# corresponding functionality: -# -# testObserve, testCache (pb.Cacheable) -# testViewPoint -# testPublishable (spread.publish??) -# SpreadUtilTestCase (spread.util) -# NewCredTestCase - -# tests which aren't relevant and aren't like to ever be -# -# PagingTestCase -# ConnectionTestCase (oldcred) -# NSPTestCase diff --git a/src/foolscap/foolscap/test/test_promise.py b/src/foolscap/foolscap/test/test_promise.py deleted file mode 100644 index 53cbaf76..00000000 --- a/src/foolscap/foolscap/test/test_promise.py +++ /dev/null @@ -1,254 +0,0 @@ - -from twisted.trial import unittest - -from twisted.python.failure import Failure -from foolscap.promise import makePromise, send, sendOnly, when, UsageError -from foolscap.eventual import flushEventualQueue, fireEventually - -class KaboomError(Exception): - pass - -class Target: - def __init__(self): - self.calls = [] - def one(self, a): - self.calls.append(("one", a)) - return a+1 - def two(self, a, b=2, **kwargs): - self.calls.append(("two", a, b, kwargs)) - def fail(self, arg): - raise KaboomError("kaboom!") - -class Counter: - def __init__(self, count=0): - self.count = count - def add(self, value): - self.count += value - return self - -class Send(unittest.TestCase): - - def tearDown(self): - return flushEventualQueue() - - def testBasic(self): - p,r = makePromise() - def _check(res, *args, **kwargs): - self.failUnlessEqual(res, 1) - self.failUnlessEqual(args, ("one",)) - self.failUnlessEqual(kwargs, {"two": 2}) - p2 = p._then(_check, "one", two=2) - self.failUnlessIdentical(p2, p) - r(1) - - def testBasicFailure(self): - p,r = makePromise() - def _check(res, *args, **kwargs): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(KaboomError)) - self.failUnlessEqual(args, ("one",)) - self.failUnlessEqual(kwargs, {"two": 2}) - p2 = p._except(_check, "one", two=2) - self.failUnlessIdentical(p2, p) - 
r(Failure(KaboomError("oops"))) - - def testSend(self): - t = Target() - p = send(t).one(1) - self.failIf(t.calls) - def _check(res): - self.failUnlessEqual(res, 2) - self.failUnlessEqual(t.calls, [("one", 1)]) - p._then(_check) - when(p).addCallback(_check) # check it twice to test both syntaxes - - def testOrdering(self): - t = Target() - p1 = send(t).one(1) - p2 = send(t).two(3, k="extra") - self.failIf(t.calls) - def _check1(res): - # we can't check t.calls here: the when() clause is not - # guaranteed to fire before the second send. - self.failUnlessEqual(res, 2) - when(p1).addCallback(_check1) - def _check2(res): - self.failUnlessEqual(res, None) - when(p2).addCallback(_check2) - def _check3(res): - self.failUnlessEqual(t.calls, [("one", 1), - ("two", 3, 2, {"k": "extra"}), - ]) - fireEventually().addCallback(_check3) - - def testFailure(self): - t = Target() - p1 = send(t).fail(0) - def _check(res): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(KaboomError)) - p1._then(lambda res: self.fail("we were supposed to fail")) - p1._except(_check) - when(p1).addBoth(_check) - - def testBadName(self): - t = Target() - p1 = send(t).missing(0) - def _check(res): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(AttributeError)) - when(p1).addBoth(_check) - - def testDisableDataflowStyle(self): - p,r = makePromise() - p._useDataflowStyle = False - def wrong(p): - p.one(12) - self.failUnlessRaises(AttributeError, wrong, p) - - def testNoMultipleResolution(self): - p,r = makePromise() - r(3) - self.failUnlessRaises(UsageError, r, 4) - - def testResolveBefore(self): - t = Target() - p,r = makePromise() - r(t) - p = send(p).one(2) - def _check(res): - self.failUnlessEqual(res, 3) - when(p).addCallback(_check) - - def testResolveAfter(self): - t = Target() - p,r = makePromise() - p = send(p).one(2) - def _check(res): - self.failUnlessEqual(res, 3) - when(p).addCallback(_check) - r(t) - - def testResolveFailure(self): - t = 
Target() - p,r = makePromise() - p = send(p).one(2) - def _check(res): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(KaboomError)) - when(p).addBoth(_check) - f = Failure(KaboomError("oops")) - r(f) - -class Call(unittest.TestCase): - def tearDown(self): - return flushEventualQueue() - - def testResolveBefore(self): - t = Target() - p1,r = makePromise() - r(t) - p2 = p1.one(2) - def _check(res): - self.failUnlessEqual(res, 3) - p2._then(_check) - - def testResolveAfter(self): - t = Target() - p1,r = makePromise() - p2 = p1.one(2) - def _check(res): - self.failUnlessEqual(res, 3) - p2._then(_check) - r(t) - - def testResolveFailure(self): - t = Target() - p1,r = makePromise() - p2 = p1.one(2) - def _check(res): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(KaboomError)) - p2._then(lambda res: self.fail("this was supposed to fail")) - p2._except(_check) - f = Failure(KaboomError("oops")) - r(f) - -class SendOnly(unittest.TestCase): - def testNear(self): - t = Target() - sendOnly(t).one(1) - self.failIf(t.calls) - def _check(res): - self.failUnlessEqual(t.calls, [("one", 1)]) - d = flushEventualQueue() - d.addCallback(_check) - return d - - def testResolveBefore(self): - t = Target() - p,r = makePromise() - r(t) - sendOnly(p).one(1) - d = flushEventualQueue() - def _check(res): - self.failUnlessEqual(t.calls, [("one", 1)]) - d.addCallback(_check) - return d - - def testResolveAfter(self): - t = Target() - p,r = makePromise() - sendOnly(p).one(1) - r(t) - d = flushEventualQueue() - def _check(res): - self.failUnlessEqual(t.calls, [("one", 1)]) - d.addCallback(_check) - return d - -class Chained(unittest.TestCase): - def tearDown(self): - return flushEventualQueue() - - def testResolveToAPromise(self): - p1,r1 = makePromise() - p2,r2 = makePromise() - def _check(res): - self.failUnlessEqual(res, 1) - p1._then(_check) - r1(p2) - def _continue(res): - r2(1) - flushEventualQueue().addCallback(_continue) - return when(p1) 
- - def testResolveToABrokenPromise(self): - p1,r1 = makePromise() - p2,r2 = makePromise() - r1(p2) - def _continue(res): - r2(Failure(KaboomError("foom"))) - flushEventualQueue().addCallback(_continue) - def _check2(res): - self.failUnless(isinstance(res, Failure)) - self.failUnless(res.check(KaboomError)) - d = when(p1) - d.addBoth(_check2) - return d - - def testChained1(self): - p1,r = makePromise() - p2 = p1.add(2) - p3 = p2.add(3) - def _check(c): - self.failUnlessEqual(c.count, 5) - p3._then(_check) - r(Counter(0)) - - def testChained2(self): - p1,r = makePromise() - def _check(c, expected): - self.failUnlessEqual(c.count, expected) - p1.add(2).add(3)._then(_check, 6) - r(Counter(1)) diff --git a/src/foolscap/foolscap/test/test_reconnector.py b/src/foolscap/foolscap/test/test_reconnector.py deleted file mode 100644 index 1da9dc61..00000000 --- a/src/foolscap/foolscap/test/test_reconnector.py +++ /dev/null @@ -1,210 +0,0 @@ -# -*- test-case-name: foolscap.test.test_reconnector -*- - -from twisted.trial import unittest -from foolscap import UnauthenticatedTub -from foolscap.test.common import HelperTarget -from twisted.internet.main import CONNECTION_LOST -from twisted.internet import defer, reactor -from foolscap.eventual import eventually, flushEventualQueue -from foolscap import negotiate - -class AlwaysFailNegotiation(negotiate.Negotiation): - def evaluateHello(self, offer): - raise negotiate.NegotiationError("I always fail") - -class Reconnector(unittest.TestCase): - - def setUp(self): - self.services = [UnauthenticatedTub(), UnauthenticatedTub()] - self.tubA, self.tubB = self.services - for s in self.services: - s.startService() - l = s.listenOn("tcp:0:interface=127.0.0.1") - s.setLocation("127.0.0.1:%d" % l.getPortnum()) - - def tearDown(self): - d = defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(flushEventualQueue) - return d - - - def test_try(self): - self.count = 0 - self.attached = False - self.done = defer.Deferred() 
- target = HelperTarget("bob") - url = self.tubB.registerReference(target) - rc = self.tubA.connectTo(url, self._got_ref, "arg", kw="kwarg") - # at least make sure the stopConnecting method is present, even if we - # don't have a real test for it yet - self.failUnless(rc.stopConnecting) - return self.done - - def _got_ref(self, rref, arg, kw): - self.failUnlessEqual(self.attached, False) - self.attached = True - self.failUnlessEqual(arg, "arg") - self.failUnlessEqual(kw, "kwarg") - self.count += 1 - rref.notifyOnDisconnect(self._disconnected, self.count) - if self.count < 2: - # forcibly disconnect it - eventually(rref.tracker.broker.transport.loseConnection, - CONNECTION_LOST) - else: - self.done.callback("done") - - def _disconnected(self, count): - self.failUnlessEqual(self.attached, True) - self.failUnlessEqual(count, self.count) - self.attached = False - - def _connected(self, ref, notifiers, accumulate): - accumulate.append(ref) - if notifiers: - notifiers.pop(0).callback(ref) - - def stall(self, timeout, res=None): - d = defer.Deferred() - reactor.callLater(timeout, d.callback, res) - return d - - def test_retry(self): - tubC = UnauthenticatedTub() - connects = [] - target = HelperTarget("bob") - url = self.tubB.registerReference(target, "target") - portb = self.tubB.getListeners()[0].getPortnum() - d1 = defer.Deferred() - notifiers = [d1] - self.services.remove(self.tubB) - d = self.tubB.stopService() - def _start_connecting(res): - # this will fail, since tubB is not listening anymore - self.rc = self.tubA.connectTo(url, self._connected, - notifiers, connects) - # give it a few tries, then start tubC listening on the same port - # that tubB used to, which should allow the connection to - # complete (since they're both UnauthenticatedTubs) - return self.stall(2) - d.addCallback(_start_connecting) - def _start_tubC(res): - self.failUnlessEqual(len(connects), 0) - self.services.append(tubC) - tubC.startService() - tubC.listenOn("tcp:%d:interface=127.0.0.1" % 
portb) - tubC.setLocation("127.0.0.1:%d" % portb) - url2 = tubC.registerReference(target, "target") - assert url2 == url - return d1 - d.addCallback(_start_tubC) - def _connected(res): - self.failUnlessEqual(len(connects), 1) - self.rc.stopConnecting() - d.addCallback(_connected) - return d - - def test_negotiate_fails_and_retry(self): - connects = [] - target = HelperTarget("bob") - url = self.tubB.registerReference(target, "target") - l = self.tubB.getListeners()[0] - l.negotiationClass = AlwaysFailNegotiation - portb = l.getPortnum() - d1 = defer.Deferred() - notifiers = [d1] - self.rc = self.tubA.connectTo(url, self._connected, - notifiers, connects) - d = self.stall(2) - def _failed_a_few_times(res): - # the reconnector should have failed once or twice, since the - # negotiation would always fail. - self.failUnlessEqual(len(connects), 0) - # Now we fix tubB. We only touched the Listener, so re-doing the - # listenOn should clear it. - return self.tubB.stopListeningOn(l) - d.addCallback(_failed_a_few_times) - def _stopped(res): - self.tubB.listenOn("tcp:%d:interface=127.0.0.1" % portb) - # the next time the reconnector tries, it should succeed - return d1 - d.addCallback(_stopped) - def _connected(res): - self.failUnlessEqual(len(connects), 1) - self.rc.stopConnecting() - d.addCallback(_connected) - return d - - def test_lose_and_retry(self): - tubC = UnauthenticatedTub() - connects = [] - d1 = defer.Deferred() - d2 = defer.Deferred() - notifiers = [d1, d2] - target = HelperTarget("bob") - url = self.tubB.registerReference(target, "target") - portb = self.tubB.getListeners()[0].getPortnum() - self.rc = self.tubA.connectTo(url, self._connected, - notifiers, connects) - def _connected_first(res): - # we are now connected to tubB. Shut it down to force a - # disconnect. 
- self.services.remove(self.tubB) - d = self.tubB.stopService() - return d - d1.addCallback(_connected_first) - def _wait(res): - # wait a few seconds to give the Reconnector a chance to try and - # fail a few times - return self.stall(2) - d1.addCallback(_wait) - def _start_tubC(res): - # now start tubC listening on the same port that tubB used to, - # which should allow the connection to complete (since they're - # both UnauthenticatedTubs) - self.services.append(tubC) - tubC.startService() - tubC.listenOn("tcp:%d:interface=127.0.0.1" % portb) - tubC.setLocation("127.0.0.1:%d" % portb) - url2 = tubC.registerReference(target, "target") - assert url2 == url - # this will fire when the second connection has been made - return d2 - d1.addCallback(_start_tubC) - def _connected(res): - self.failUnlessEqual(len(connects), 2) - self.rc.stopConnecting() - d1.addCallback(_connected) - return d1 - - def test_stop_trying(self): - connects = [] - target = HelperTarget("bob") - url = self.tubB.registerReference(target, "target") - d1 = defer.Deferred() - self.services.remove(self.tubB) - d = self.tubB.stopService() - def _start_connecting(res): - # this will fail, since tubB is not listening anymore - self.rc = self.tubA.connectTo(url, self._connected, d1, connects) - self.rc.verbose = True # get better code coverage - # give it a few tries, then tell it to stop trying - return self.stall(2) - d.addCallback(_start_connecting) - def _stop_trying(res): - self.failUnlessEqual(len(connects), 0) - # this stopConnecting occurs while the reconnector's timer is - # active - self.rc.stopConnecting() - d.addCallback(_stop_trying) - # if it keeps trying, we'll see a dirty reactor - return d - -# another test: determine the target url early, but don't actually register -# the reference yet. Start the reconnector, let it fail once, then register -# the reference and make sure the retry succeeds. 
This will distinguish -# between connection/negotiation failures and object-lookup failures, both of -# which ought to be handled by Reconnector. I suspect the object-lookup -# failures are not yet. - -# test that Tub shutdown really stops all Reconnectors diff --git a/src/foolscap/foolscap/test/test_reference.py b/src/foolscap/foolscap/test/test_reference.py deleted file mode 100644 index 930f593c..00000000 --- a/src/foolscap/foolscap/test/test_reference.py +++ /dev/null @@ -1,71 +0,0 @@ - -from zope.interface import implements -from twisted.trial import unittest -from twisted.python import failure -from foolscap.ipb import IRemoteReference -from foolscap.test.common import HelperTarget, Target -from foolscap.eventual import flushEventualQueue - -class Remote: - implements(IRemoteReference) - pass - - -class LocalReference(unittest.TestCase): - def tearDown(self): - return flushEventualQueue() - - def ignored(self): - pass - - def test_remoteReference(self): - r = Remote() - rref = IRemoteReference(r) - self.failUnlessIdentical(r, rref) - - def test_callRemote(self): - t = HelperTarget() - t.obj = None - rref = IRemoteReference(t) - marker = rref.notifyOnDisconnect(self.ignored, "args", kwargs="foo") - rref.dontNotifyOnDisconnect(marker) - d = rref.callRemote("set", 12) - # the callRemote should be put behind an eventual-send - self.failUnlessEqual(t.obj, None) - def _check(res): - self.failUnlessEqual(t.obj, 12) - self.failUnlessEqual(res, True) - d.addCallback(_check) - return d - - def test_callRemoteOnly(self): - t = HelperTarget() - t.obj = None - rref = IRemoteReference(t) - rc = rref.callRemoteOnly("set", 12) - self.failUnlessEqual(rc, None) - - def shouldFail(self, res, expected_failure, which, substring=None): - # attach this with: - # d = something() - # d.addBoth(self.shouldFail, IndexError, "something") - # the 'which' string helps to identify which call to shouldFail was - # triggered, since certain versions of Twisted don't display this - # very 
well. - - if isinstance(res, failure.Failure): - res.trap(expected_failure) - if substring: - self.failUnless(substring in str(res), - "substring '%s' not in '%s'" - % (substring, str(res))) - else: - self.fail("%s was supposed to raise %s, not get '%s'" % - (which, expected_failure, res)) - - def test_fail(self): - t = Target() - d = IRemoteReference(t).callRemote("fail") - d.addBoth(self.shouldFail, ValueError, "test_fail", - "you asked me to fail") - return d diff --git a/src/foolscap/foolscap/test/test_registration.py b/src/foolscap/foolscap/test/test_registration.py deleted file mode 100644 index 4fb78e5b..00000000 --- a/src/foolscap/foolscap/test/test_registration.py +++ /dev/null @@ -1,52 +0,0 @@ -# -*- test-case-name: foolscap.test.test_registration -*- - -from twisted.trial import unittest - -import weakref, gc -from foolscap import UnauthenticatedTub -from foolscap.test.common import HelperTarget - -class Registration(unittest.TestCase): - def testStrong(self): - t1 = HelperTarget() - tub = UnauthenticatedTub() - tub.setLocation("bogus:1234567") - u1 = tub.registerReference(t1) - results = [] - w1 = weakref.ref(t1, results.append) - del t1 - gc.collect() - # t1 should still be alive - self.failUnless(w1()) - self.failUnlessEqual(results, []) - tub.unregisterReference(w1()) - gc.collect() - # now it should be dead - self.failIf(w1()) - self.failUnlessEqual(len(results), 1) - - def testWeak(self): - t1 = HelperTarget() - tub = UnauthenticatedTub() - tub.setLocation("bogus:1234567") - name = tub._assignName(t1) - url = tub.buildURL(name) - results = [] - w1 = weakref.ref(t1, results.append) - del t1 - gc.collect() - # t1 should be dead - self.failIf(w1()) - self.failUnlessEqual(len(results), 1) - - def TODO_testNonweakrefable(self): - # what happens when we register a non-Referenceable? 
We don't really - # need this yet, but as registerReference() becomes more generalized - # into just plain register(), we'll want to provide references to - # Copyables and ordinary data structures too. Let's just test that - # this doesn't cause an error. - target = [] - tub = UnauthenticatedTub() - tub.setLocation("bogus:1234567") - url = tub.registerReference(target) - diff --git a/src/foolscap/foolscap/test/test_schema.py b/src/foolscap/foolscap/test/test_schema.py deleted file mode 100644 index 0c052e78..00000000 --- a/src/foolscap/foolscap/test/test_schema.py +++ /dev/null @@ -1,537 +0,0 @@ - -import sets, re -from twisted.trial import unittest -from foolscap import schema, copyable -from foolscap.tokens import Violation, InvalidRemoteInterface -from foolscap.constraint import IConstraint -from foolscap.remoteinterface import RemoteMethodSchema, \ - RemoteInterfaceConstraint, LocalInterfaceConstraint -from foolscap.referenceable import RemoteReferenceTracker, \ - RemoteReference, Referenceable -from foolscap.test import common - -have_builtin_set = False -try: - set - have_builtin_set = True -except NameError: - pass # oh well - -class Dummy: - pass - -HEADER = 64 -INTSIZE = HEADER+1 -STR10 = HEADER+1+10 - -class ConformTest(unittest.TestCase): - """This tests how Constraints are asserted on outbound objects (where the - object already exists). Inbound constraints are checked in - test_banana.InboundByteStream in the various testConstrainedFoo methods. 
- """ - def conforms(self, c, obj): - c.checkObject(obj, False) - def violates(self, c, obj): - self.assertRaises(schema.Violation, c.checkObject, obj, False) - def assertSize(self, c, maxsize): - return - self.assertEquals(c.maxSize(), maxsize) - def assertDepth(self, c, maxdepth): - self.assertEquals(c.maxDepth(), maxdepth) - def assertUnboundedSize(self, c): - self.assertRaises(schema.UnboundedSchema, c.maxSize) - def assertUnboundedDepth(self, c): - self.assertRaises(schema.UnboundedSchema, c.maxDepth) - - def testAny(self): - c = schema.Constraint() - self.assertUnboundedSize(c) - self.assertUnboundedDepth(c) - - def testInteger(self): - # s_int32_t - c = schema.IntegerConstraint() - self.assertSize(c, INTSIZE) - self.assertDepth(c, 1) - self.conforms(c, 123) - self.violates(c, 2**64) - self.conforms(c, 0) - self.conforms(c, 2**31-1) - self.violates(c, 2**31) - self.conforms(c, -2**31) - self.violates(c, -2**31-1) - self.violates(c, "123") - self.violates(c, Dummy()) - self.violates(c, None) - - def testLargeInteger(self): - c = schema.IntegerConstraint(64) - self.assertSize(c, INTSIZE+64) - self.assertDepth(c, 1) - self.conforms(c, 123) - self.violates(c, "123") - self.violates(c, None) - self.conforms(c, 2**512-1) - self.violates(c, 2**512) - self.conforms(c, -2**512+1) - self.violates(c, -2**512) - - def testByteString(self): - c = schema.ByteStringConstraint(10) - self.assertSize(c, STR10) - self.assertSize(c, STR10) # twice to test seen=[] logic - self.assertDepth(c, 1) - self.conforms(c, "I'm short") - self.violates(c, "I am too long") - self.conforms(c, "a" * 10) - self.violates(c, "a" * 11) - self.violates(c, 123) - self.violates(c, Dummy()) - self.violates(c, None) - - c2 = schema.ByteStringConstraint(15, 10) - self.violates(c2, "too short") - self.conforms(c2, "long enough") - self.violates(c2, "this is too long") - self.violates(c2, u"I am unicode") - - c3 = schema.ByteStringConstraint(regexp="needle") - self.violates(c3, "no present") - 
self.conforms(c3, "needle in a haystack") - c4 = schema.ByteStringConstraint(regexp="[abc]+") - self.violates(c4, "spelled entirely without those letters") - self.conforms(c4, "add better cases") - c5 = schema.ByteStringConstraint(regexp=re.compile("\d+\s\w+")) - self.conforms(c5, ": 123 boo") - self.violates(c5, "more than 1 spaces") - self.violates(c5, "letters first 123") - - def testString(self): - # this test will change once the definition of "StringConstraint" - # changes. For now, we assert that StringConstraint is the same as - # ByteStringConstraint. - - c = schema.StringConstraint(20) - self.conforms(c, "I'm short") - self.violates(c, u"I am unicode") - - def testUnicode(self): - c = schema.UnicodeConstraint(10) - #self.assertSize(c, USTR10) - #self.assertSize(c, USTR10) # twice to test seen=[] logic - self.assertDepth(c, 2) - self.violates(c, "I'm a bytestring") - self.conforms(c, u"I'm short") - self.violates(c, u"I am too long") - self.conforms(c, u"a" * 10) - self.violates(c, u"a" * 11) - self.violates(c, 123) - self.violates(c, Dummy()) - self.violates(c, None) - - c2 = schema.UnicodeConstraint(15, 10) - self.violates(c2, "I'm a bytestring") - self.violates(c2, u"too short") - self.conforms(c2, u"long enough") - self.violates(c2, u"this is too long") - - c3 = schema.UnicodeConstraint(regexp="needle") - self.violates(c3, "I'm a bytestring") - self.violates(c3, u"no present") - self.conforms(c3, u"needle in a haystack") - c4 = schema.UnicodeConstraint(regexp="[abc]+") - self.violates(c4, "I'm a bytestring") - self.violates(c4, u"spelled entirely without those letters") - self.conforms(c4, u"add better cases") - c5 = schema.UnicodeConstraint(regexp=re.compile("\d+\s\w+")) - self.violates(c5, "I'm a bytestring") - self.conforms(c5, u": 123 boo") - self.violates(c5, u"more than 1 spaces") - self.violates(c5, u"letters first 123") - - def testBool(self): - c = schema.BooleanConstraint() - self.assertSize(c, 147) - self.assertDepth(c, 2) - self.conforms(c, 
False) - self.conforms(c, True) - self.violates(c, 0) - self.violates(c, 1) - self.violates(c, "vrai") - self.violates(c, Dummy()) - self.violates(c, None) - - def testPoly(self): - c = schema.PolyConstraint(schema.ByteStringConstraint(100), - schema.IntegerConstraint()) - self.assertSize(c, 165) - self.assertDepth(c, 1) - - def testTuple(self): - c = schema.TupleConstraint(schema.ByteStringConstraint(10), - schema.ByteStringConstraint(100), - schema.IntegerConstraint() ) - self.conforms(c, ("hi", "there buddy, you're number", 1)) - self.violates(c, "nope") - self.violates(c, ("string", "string", "NaN")) - self.violates(c, ("string that is too long", "string", 1)) - self.violates(c, ["Are tuples", "and lists the same?", 0]) - self.assertSize(c, 72+75+165+73) - self.assertDepth(c, 2) - - def testNestedTuple(self): - inner = schema.TupleConstraint(schema.ByteStringConstraint(10), - schema.IntegerConstraint()) - self.assertSize(inner, 72+75+73) - self.assertDepth(inner, 2) - outer = schema.TupleConstraint(schema.ByteStringConstraint(100), - inner) - self.assertSize(outer, 72+165 + 72+75+73) - self.assertDepth(outer, 3) - - self.conforms(inner, ("hi", 2)) - self.conforms(outer, ("long string here", ("short", 3))) - self.violates(outer, (("long string here", ("short", 3, "extra")))) - self.violates(outer, (("long string here", ("too long string", 3)))) - - outer2 = schema.TupleConstraint(inner, inner) - self.assertSize(outer2, 72+ 2*(72+75+73)) - self.assertDepth(outer2, 3) - self.conforms(outer2, (("hi", 1), ("there", 2)) ) - self.violates(outer2, ("hi", 1, "flat", 2) ) - - def testUnbounded(self): - big = schema.ByteStringConstraint(None) - self.assertUnboundedSize(big) - self.assertDepth(big, 1) - self.conforms(big, "blah blah blah blah blah" * 1024) - self.violates(big, 123) - - bag = schema.TupleConstraint(schema.IntegerConstraint(), - big) - self.assertUnboundedSize(bag) - self.assertDepth(bag, 2) - - polybag = schema.PolyConstraint(schema.IntegerConstraint(), - 
bag) - self.assertUnboundedSize(polybag) - self.assertDepth(polybag, 2) - - def testRecursion(self): - # we have to fiddle with PolyConstraint's innards - value = schema.ChoiceOf(schema.ByteStringConstraint(), - schema.IntegerConstraint(), - # will add 'value' here - ) - self.assertSize(value, 1065) - self.assertDepth(value, 1) - self.conforms(value, "key") - self.conforms(value, 123) - self.violates(value, []) - - mapping = schema.TupleConstraint(schema.ByteStringConstraint(10), - value) - self.assertSize(mapping, 72+75+1065) - self.assertDepth(mapping, 2) - self.conforms(mapping, ("name", "key")) - self.conforms(mapping, ("name", 123)) - value.alternatives = value.alternatives + (mapping,) - - self.assertUnboundedSize(value) - self.assertUnboundedDepth(value) - self.assertUnboundedSize(mapping) - self.assertUnboundedDepth(mapping) - - # but note that the constraint can still be applied - self.conforms(mapping, ("name", 123)) - self.conforms(mapping, ("name", "key")) - self.conforms(mapping, ("name", ("key", "value"))) - self.conforms(mapping, ("name", ("key", 123))) - self.violates(mapping, ("name", ("key", []))) - l = [] - l.append(l) - self.violates(mapping, ("name", l)) - - def testList(self): - l = schema.ListOf(schema.ByteStringConstraint(10)) - self.assertSize(l, 71 + 30*75) - self.assertDepth(l, 2) - self.conforms(l, ["one", "two", "three"]) - self.violates(l, ("can't", "fool", "me")) - self.violates(l, ["but", "perspicacity", "is too long"]) - self.violates(l, [0, "numbers", "allowed"]) - self.conforms(l, ["short", "sweet"]) - - l2 = schema.ListOf(schema.ByteStringConstraint(10), 3) - self.assertSize(l2, 71 + 3*75) - self.assertDepth(l2, 2) - self.conforms(l2, ["the number", "shall be", "three"]) - self.violates(l2, ["five", "is", "...", "right", "out"]) - - l3 = schema.ListOf(schema.ByteStringConstraint(10), None) - self.assertUnboundedSize(l3) - self.assertDepth(l3, 2) - self.conforms(l3, ["long"] * 35) - self.violates(l3, ["number", 1, "rule", "is", 0, 
"numbers"]) - - l4 = schema.ListOf(schema.ByteStringConstraint(10), 3, 3) - self.conforms(l4, ["three", "is", "good"]) - self.violates(l4, ["but", "four", "is", "bad"]) - self.violates(l4, ["two", "too"]) - - def testSet(self): - l = schema.SetOf(schema.IntegerConstraint(), 3) - self.assertDepth(l, 2) - self.conforms(l, sets.Set([])) - self.conforms(l, sets.Set([1])) - self.conforms(l, sets.Set([1,2,3])) - self.violates(l, sets.Set([1,2,3,4])) - self.violates(l, sets.Set(["not a number"])) - self.conforms(l, sets.ImmutableSet([])) - self.conforms(l, sets.ImmutableSet([1])) - self.conforms(l, sets.ImmutableSet([1,2,3])) - self.violates(l, sets.ImmutableSet([1,2,3,4])) - self.violates(l, sets.ImmutableSet(["not a number"])) - if have_builtin_set: - self.conforms(l, set([])) - self.conforms(l, set([1])) - self.conforms(l, set([1,2,3])) - self.violates(l, set([1,2,3,4])) - self.violates(l, set(["not a number"])) - self.conforms(l, frozenset([])) - self.conforms(l, frozenset([1])) - self.conforms(l, frozenset([1,2,3])) - self.violates(l, frozenset([1,2,3,4])) - self.violates(l, frozenset(["not a number"])) - - l = schema.SetOf(schema.IntegerConstraint(), 3, True) - self.conforms(l, sets.Set([])) - self.conforms(l, sets.Set([1])) - self.conforms(l, sets.Set([1,2,3])) - self.violates(l, sets.Set([1,2,3,4])) - self.violates(l, sets.Set(["not a number"])) - self.violates(l, sets.ImmutableSet([])) - self.violates(l, sets.ImmutableSet([1])) - self.violates(l, sets.ImmutableSet([1,2,3])) - self.violates(l, sets.ImmutableSet([1,2,3,4])) - self.violates(l, sets.ImmutableSet(["not a number"])) - if have_builtin_set: - self.conforms(l, set([])) - self.conforms(l, set([1])) - self.conforms(l, set([1,2,3])) - self.violates(l, set([1,2,3,4])) - self.violates(l, set(["not a number"])) - self.violates(l, frozenset([])) - self.violates(l, frozenset([1])) - self.violates(l, frozenset([1,2,3])) - self.violates(l, frozenset([1,2,3,4])) - self.violates(l, frozenset(["not a number"])) - - l 
= schema.SetOf(schema.IntegerConstraint(), 3, False) - self.violates(l, sets.Set([])) - self.violates(l, sets.Set([1])) - self.violates(l, sets.Set([1,2,3])) - self.violates(l, sets.Set([1,2,3,4])) - self.violates(l, sets.Set(["not a number"])) - self.conforms(l, sets.ImmutableSet([])) - self.conforms(l, sets.ImmutableSet([1])) - self.conforms(l, sets.ImmutableSet([1,2,3])) - self.violates(l, sets.ImmutableSet([1,2,3,4])) - self.violates(l, sets.ImmutableSet(["not a number"])) - if have_builtin_set: - self.violates(l, set([])) - self.violates(l, set([1])) - self.violates(l, set([1,2,3])) - self.violates(l, set([1,2,3,4])) - self.violates(l, set(["not a number"])) - self.conforms(l, frozenset([])) - self.conforms(l, frozenset([1])) - self.conforms(l, frozenset([1,2,3])) - self.violates(l, frozenset([1,2,3,4])) - self.violates(l, frozenset(["not a number"])) - - - def testDict(self): - d = schema.DictOf(schema.ByteStringConstraint(10), - schema.IntegerConstraint(), - maxKeys=4) - - self.assertDepth(d, 2) - self.conforms(d, {"a": 1, "b": 2}) - self.conforms(d, {"foo": 123, "bar": 345, "blah": 456, "yar": 789}) - self.violates(d, None) - self.violates(d, 12) - self.violates(d, ["nope"]) - self.violates(d, ("nice", "try")) - self.violates(d, {1:2, 3:4}) - self.violates(d, {"a": "b"}) - self.violates(d, {"a": 1, "b": 2, "c": 3, "d": 4, "toomuch": 5}) - - def testAttrDict(self): - d = copyable.AttributeDictConstraint(('a', int), ('b', str)) - self.conforms(d, {"a": 1, "b": "string"}) - self.violates(d, {"a": 1, "b": 2}) - self.violates(d, {"a": 1, "b": "string", "c": "is a crowd"}) - - d = copyable.AttributeDictConstraint(('a', int), ('b', str), - ignoreUnknown=True) - self.conforms(d, {"a": 1, "b": "string"}) - self.violates(d, {"a": 1, "b": 2}) - self.conforms(d, {"a": 1, "b": "string", "c": "is a crowd"}) - - d = copyable.AttributeDictConstraint(attributes={"a": int, "b": str}) - self.conforms(d, {"a": 1, "b": "string"}) - self.violates(d, {"a": 1, "b": 2}) - 
self.violates(d, {"a": 1, "b": "string", "c": "is a crowd"}) - - -class CreateTest(unittest.TestCase): - def check(self, obj, expected): - self.failUnless(isinstance(obj, expected)) - - def testMakeConstraint(self): - make = IConstraint - c = make(int) - self.check(c, schema.IntegerConstraint) - self.failUnlessEqual(c.maxBytes, -1) - - c = make(str) - self.check(c, schema.ByteStringConstraint) - self.failUnlessEqual(c.maxLength, 1000) - - c = make(unicode) - self.check(c, schema.UnicodeConstraint) - self.failUnlessEqual(c.maxLength, 1000) - - self.check(make(bool), schema.BooleanConstraint) - self.check(make(float), schema.NumberConstraint) - - self.check(make(schema.NumberConstraint()), schema.NumberConstraint) - c = make((int, str)) - self.check(c, schema.TupleConstraint) - self.check(c.constraints[0], schema.IntegerConstraint) - self.check(c.constraints[1], schema.ByteStringConstraint) - - c = make(common.RIHelper) - self.check(c, RemoteInterfaceConstraint) - self.failUnlessEqual(c.interface, common.RIHelper) - - c = make(common.IFoo) - self.check(c, LocalInterfaceConstraint) - self.failUnlessEqual(c.interface, common.IFoo) - - c = make(Referenceable) - self.check(c, RemoteInterfaceConstraint) - self.failUnlessEqual(c.interface, None) - - -class Arguments(unittest.TestCase): - def test_arguments(self): - def foo(a=int, b=bool, c=int): return str - r = RemoteMethodSchema(method=foo) - getpos = r.getPositionalArgConstraint - getkw = r.getKeywordArgConstraint - self.failUnless(isinstance(getpos(0)[1], schema.IntegerConstraint)) - self.failUnless(isinstance(getpos(1)[1], schema.BooleanConstraint)) - self.failUnless(isinstance(getpos(2)[1], schema.IntegerConstraint)) - - self.failUnless(isinstance(getkw("a")[1], schema.IntegerConstraint)) - self.failUnless(isinstance(getkw("b")[1], schema.BooleanConstraint)) - self.failUnless(isinstance(getkw("c")[1], schema.IntegerConstraint)) - - self.failUnless(isinstance(r.getResponseConstraint(), - schema.ByteStringConstraint)) 
- - self.failUnless(isinstance(getkw("c", 1, [])[1], - schema.IntegerConstraint)) - self.failUnlessRaises(schema.Violation, getkw, "a", 1, []) - self.failUnlessRaises(schema.Violation, getkw, "b", 1, ["b"]) - self.failUnlessRaises(schema.Violation, getkw, "a", 2, []) - self.failUnless(isinstance(getkw("c", 2, [])[1], - schema.IntegerConstraint)) - self.failUnless(isinstance(getkw("c", 0, ["a", "b"])[1], - schema.IntegerConstraint)) - - try: - r.checkAllArgs((1,True,2), {}, False) - r.checkAllArgs((), {"a":1, "b":False, "c":2}, False) - r.checkAllArgs((1,), {"b":False, "c":2}, False) - r.checkAllArgs((1,True), {"c":3}, False) - r.checkResults("good", False) - except schema.Violation: - self.fail("that shouldn't have raised a Violation") - self.failUnlessRaises(schema.Violation, # 2 is not bool - r.checkAllArgs, (1,2,3), {}, False) - self.failUnlessRaises(schema.Violation, # too many - r.checkAllArgs, (1,True,3,4), {}, False) - self.failUnlessRaises(schema.Violation, # double "a" - r.checkAllArgs, (1,), {"a":1, "b":True, "c": 3}, - False) - self.failUnlessRaises(schema.Violation, # missing required "b" - r.checkAllArgs, (1,), {"c": 3}, False) - self.failUnlessRaises(schema.Violation, # missing required "a" - r.checkAllArgs, (), {"b":True, "c": 3}, False) - self.failUnlessRaises(schema.Violation, - r.checkResults, 12, False) - - def test_bad_arguments(self): - def foo(nodefault): return str - self.failUnlessRaises(InvalidRemoteInterface, - RemoteMethodSchema, method=foo) - def bar(nodefault, a=int): return str - self.failUnlessRaises(InvalidRemoteInterface, - RemoteMethodSchema, method=bar) - - -class Interfaces(unittest.TestCase): - def check_inbound(self, obj, constraint): - try: - constraint.checkObject(obj, True) - except Violation, f: - self.fail("constraint was violated: %s" % f) - - def check_outbound(self, obj, constraint): - try: - constraint.checkObject(obj, False) - except Violation, f: - self.fail("constraint was violated: %s" % f) - - def 
violates_inbound(self, obj, constraint): - try: - constraint.checkObject(obj, True) - except Violation, f: - return - self.fail("constraint wasn't violated") - - def violates_outbound(self, obj, constraint): - try: - constraint.checkObject(obj, False) - except Violation, f: - return - self.fail("constraint wasn't violated") - - def test_referenceable(self): - h = common.HelperTarget() - c1 = RemoteInterfaceConstraint(common.RIHelper) - c2 = RemoteInterfaceConstraint(common.RIMyTarget) - self.violates_inbound("bogus", c1) - self.violates_outbound("bogus", c1) - self.check_outbound(h, c1) - self.violates_inbound(h, c1) - self.violates_inbound(h, c2) - self.violates_outbound(h, c2) - - def test_remotereference(self): - # we need to create a fake RemoteReference here - parent, clid, url = None, 0, "" - interfaceName = common.RIHelper.__remote_name__ - tracker = RemoteReferenceTracker(parent, clid, url, interfaceName) - rr = RemoteReference(tracker) - - c1 = RemoteInterfaceConstraint(common.RIHelper) - self.check_inbound(rr, c1) - self.check_outbound(rr, c1) # gift - - c2 = RemoteInterfaceConstraint(common.RIMyTarget) - self.violates_inbound(rr, c2) - self.violates_outbound(rr, c2) diff --git a/src/foolscap/foolscap/test/test_sturdyref.py b/src/foolscap/foolscap/test/test_sturdyref.py deleted file mode 100644 index 9343366d..00000000 --- a/src/foolscap/foolscap/test/test_sturdyref.py +++ /dev/null @@ -1,30 +0,0 @@ -from twisted.trial import unittest - -from foolscap import referenceable - -class URL(unittest.TestCase): - def testURL(self): - sr = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name") - self.failUnlessEqual(sr.tubID, "1234") - self.failUnlessEqual(sr.locationHints, ["127.0.0.1:9900"]) - self.failUnlessEqual(sr.name, "name") - - def testCompare(self): - sr1 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name") - sr2 = referenceable.SturdyRef("pb://1234@127.0.0.1:9999/name") - # only tubID and name matter - self.failUnlessEqual(sr1, sr2) - sr1 = 
referenceable.SturdyRef("pb://9999@127.0.0.1:9900/name") - sr2 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name") - self.failIfEqual(sr1, sr2) - sr1 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name1") - sr2 = referenceable.SturdyRef("pb://1234@127.0.0.1:9900/name2") - self.failIfEqual(sr1, sr2) - - def testLocationHints(self): - url = "pb://ABCD@127.0.0.1:9900,remote:8899/name" - sr = referenceable.SturdyRef(url) - self.failUnlessEqual(sr.tubID, "ABCD") - self.failUnlessEqual(sr.locationHints, ["127.0.0.1:9900", - "remote:8899"]) - self.failUnlessEqual(sr.name, "name") diff --git a/src/foolscap/foolscap/test/test_tub.py b/src/foolscap/foolscap/test/test_tub.py deleted file mode 100644 index 9bd6f73e..00000000 --- a/src/foolscap/foolscap/test/test_tub.py +++ /dev/null @@ -1,209 +0,0 @@ -# -*- test-case-name: foolscap.test.test_tub -*- - -import os.path -from twisted.trial import unittest -from twisted.internet import defer - -crypto_available = False -try: - from foolscap import crypto - crypto_available = crypto.available -except ImportError: - pass - -from foolscap import Tub, UnauthenticatedTub, SturdyRef, Referenceable -from foolscap.referenceable import RemoteReference -from foolscap.eventual import eventually, flushEventualQueue -from foolscap.test.common import HelperTarget, TargetMixin - -# we use authenticated tubs if possible. 
If crypto is not available, fall -# back to unauthenticated ones -GoodEnoughTub = UnauthenticatedTub -if crypto_available: - GoodEnoughTub = Tub - -class TestCertFile(unittest.TestCase): - def test_generate(self): - t = Tub() - certdata = t.getCertData() - self.failUnless("BEGIN CERTIFICATE" in certdata) - self.failUnless("BEGIN RSA PRIVATE KEY" in certdata) - - def test_certdata(self): - t1 = Tub() - data1 = t1.getCertData() - t2 = Tub(certData=data1) - data2 = t2.getCertData() - self.failUnless(data1 == data2) - - def test_certfile(self): - fn = "test_tub.TestCertFile.certfile" - t1 = Tub(certFile=fn) - self.failUnless(os.path.exists(fn)) - data1 = t1.getCertData() - - t2 = Tub(certFile=fn) - data2 = t2.getCertData() - self.failUnless(data1 == data2) - -if not crypto_available: - del TestCertFile - -class QueuedStartup(TargetMixin, unittest.TestCase): - # calling getReference and connectTo before the Tub has started should - # put off network activity until the Tub is started. - - def setUp(self): - TargetMixin.setUp(self) - self.tubB = GoodEnoughTub() - self.services = [self.tubB] - for s in self.services: - s.startService() - l = s.listenOn("tcp:0:interface=127.0.0.1") - s.setLocation("127.0.0.1:%d" % l.getPortnum()) - - self.barry = HelperTarget("barry") - self.barry_url = self.tubB.registerReference(self.barry) - - self.bill = HelperTarget("bill") - self.bill_url = self.tubB.registerReference(self.bill) - - self.bob = HelperTarget("bob") - self.bob_url = self.tubB.registerReference(self.bob) - - def tearDown(self): - d = TargetMixin.tearDown(self) - def _more(res): - return defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(_more) - d.addCallback(flushEventualQueue) - return d - - def test_queued_getref(self): - t1 = GoodEnoughTub() - d1 = t1.getReference(self.barry_url) - d2 = t1.getReference(self.bill_url) - def _check(res): - ((barry_success, barry_rref), - (bill_success, bill_rref)) = res - self.failUnless(barry_success) - 
self.failUnless(bill_success) - self.failUnless(isinstance(barry_rref, RemoteReference)) - self.failUnless(isinstance(bill_rref, RemoteReference)) - self.failIf(barry_rref == bill_rref) - dl = defer.DeferredList([d1, d2]) - dl.addCallback(_check) - self.services.append(t1) - eventually(t1.startService) - return dl - - def test_queued_reconnector(self): - t1 = GoodEnoughTub() - bill_connections = [] - barry_connections = [] - t1.connectTo(self.bill_url, bill_connections.append) - t1.connectTo(self.barry_url, barry_connections.append) - def _check(): - if len(bill_connections) >= 1 and len(barry_connections) >= 1: - return True - return False - d = self.poll(_check) - def _validate(res): - self.failUnless(isinstance(bill_connections[0], RemoteReference)) - self.failUnless(isinstance(barry_connections[0], RemoteReference)) - self.failIf(bill_connections[0] == barry_connections[0]) - d.addCallback(_validate) - self.services.append(t1) - eventually(t1.startService) - return d - - -class NameLookup(TargetMixin, unittest.TestCase): - - # test registerNameLookupHandler - - def setUp(self): - TargetMixin.setUp(self) - self.tubA, self.tubB = [GoodEnoughTub(), GoodEnoughTub()] - self.services = [self.tubA, self.tubB] - self.tubA.startService() - self.tubB.startService() - l = self.tubB.listenOn("tcp:0:interface=127.0.0.1") - self.tubB.setLocation("127.0.0.1:%d" % l.getPortnum()) - self.url_on_b = self.tubB.registerReference(Referenceable()) - self.lookups = [] - self.lookups2 = [] - self.names = {} - self.names2 = {} - - def tearDown(self): - d = TargetMixin.tearDown(self) - def _more(res): - return defer.DeferredList([s.stopService() for s in self.services]) - d.addCallback(_more) - d.addCallback(flushEventualQueue) - return d - - def lookup(self, name): - self.lookups.append(name) - return self.names.get(name, None) - - def lookup2(self, name): - self.lookups2.append(name) - return self.names2.get(name, None) - - def testNameLookup(self): - t1 = HelperTarget() - t2 =
HelperTarget() - self.names["foo"] = t1 - self.names2["bar"] = t2 - self.names2["baz"] = t2 - self.tubB.registerNameLookupHandler(self.lookup) - self.tubB.registerNameLookupHandler(self.lookup2) - # hack up a new furl pointing at the same tub but with a name that - # hasn't been registered. - s = SturdyRef(self.url_on_b) - s.name = "foo" - - d = self.tubA.getReference(s) - - def _check(res): - self.failUnless(isinstance(res, RemoteReference)) - self.failUnlessEqual(self.lookups, ["foo"]) - # the first lookup should short-circuit the process - self.failUnlessEqual(self.lookups2, []) - self.lookups = []; self.lookups2 = [] - s.name = "bar" - return self.tubA.getReference(s) - d.addCallback(_check) - - def _check2(res): - self.failUnless(isinstance(res, RemoteReference)) - # if the first lookup fails, the second handler should be asked - self.failUnlessEqual(self.lookups, ["bar"]) - self.failUnlessEqual(self.lookups2, ["bar"]) - self.lookups = []; self.lookups2 = [] - # make sure that loopbacks use this too - return self.tubB.getReference(s) - d.addCallback(_check2) - - def _check3(res): - self.failUnless(isinstance(res, RemoteReference)) - self.failUnlessEqual(self.lookups, ["bar"]) - self.failUnlessEqual(self.lookups2, ["bar"]) - self.lookups = []; self.lookups2 = [] - # and make sure we can de-register handlers - self.tubB.unregisterNameLookupHandler(self.lookup) - s.name = "baz" - return self.tubA.getReference(s) - d.addCallback(_check3) - - def _check4(res): - self.failUnless(isinstance(res, RemoteReference)) - self.failUnlessEqual(self.lookups, []) - self.failUnlessEqual(self.lookups2, ["baz"]) - self.lookups = []; self.lookups2 = [] - d.addCallback(_check4) - - return d - diff --git a/src/foolscap/foolscap/test/test_util.py b/src/foolscap/foolscap/test/test_util.py deleted file mode 100644 index c98c976c..00000000 --- a/src/foolscap/foolscap/test/test_util.py +++ /dev/null @@ -1,92 +0,0 @@ - -from twisted.trial import unittest -from twisted.internet import 
defer -from twisted.python import failure -from foolscap import util, eventual - - -class AsyncAND(unittest.TestCase): - def setUp(self): - self.fired = False - self.failed = False - - def callback(self, res): - self.fired = True - def errback(self, res): - self.failed = True - - def attach(self, d): - d.addCallbacks(self.callback, self.errback) - return d - - def shouldNotFire(self, ignored=None): - self.failIf(self.fired) - self.failIf(self.failed) - def shouldFire(self, ignored=None): - self.failUnless(self.fired) - self.failIf(self.failed) - def shouldFail(self, ignored=None): - self.failUnless(self.failed) - self.failIf(self.fired) - - def tearDown(self): - return eventual.flushEventualQueue() - - def test_empty(self): - self.attach(util.AsyncAND([])) - self.shouldFire() - - def test_simple(self): - d1 = eventual.fireEventually(None) - a = util.AsyncAND([d1]) - self.attach(a) - a.addBoth(self.shouldFire) - return a - - def test_two(self): - d1 = defer.Deferred() - d2 = defer.Deferred() - self.attach(util.AsyncAND([d1, d2])) - self.shouldNotFire() - d1.callback(1) - self.shouldNotFire() - d2.callback(2) - self.shouldFire() - - def test_one_failure_1(self): - d1 = defer.Deferred() - d2 = defer.Deferred() - self.attach(util.AsyncAND([d1, d2])) - self.shouldNotFire() - d1.callback(1) - self.shouldNotFire() - d2.errback(RuntimeError()) - self.shouldFail() - - def test_one_failure_2(self): - d1 = defer.Deferred() - d2 = defer.Deferred() - self.attach(util.AsyncAND([d1, d2])) - self.shouldNotFire() - d1.errback(RuntimeError()) - self.shouldFail() - d2.callback(1) - self.shouldFail() - - def test_two_failure(self): - d1 = defer.Deferred() - d2 = defer.Deferred() - self.attach(util.AsyncAND([d1, d2])) - def _should_fire(res): - self.failIf(isinstance(res, failure.Failure)) - def _should_fail(f): - self.failUnless(isinstance(f, failure.Failure)) - d1.addBoth(_should_fire) - d2.addBoth(_should_fail) - self.shouldNotFire() - d1.errback(RuntimeError()) - self.shouldFail() 
- d2.errback(RuntimeError()) - self.shouldFail() - - diff --git a/src/foolscap/foolscap/tokens.py b/src/foolscap/foolscap/tokens.py deleted file mode 100644 index 46bc9010..00000000 --- a/src/foolscap/foolscap/tokens.py +++ /dev/null @@ -1,369 +0,0 @@ - -from twisted.python.failure import Failure -from zope.interface import Attribute, Interface - -# delimiter characters. -LIST = chr(0x80) # old -INT = chr(0x81) -STRING = chr(0x82) -NEG = chr(0x83) -FLOAT = chr(0x84) -# "optional" -- these might be refused by a low-level implementation. -LONGINT = chr(0x85) # old -LONGNEG = chr(0x86) # old -# really optional; this is part of the 'pb' vocabulary -VOCAB = chr(0x87) -# newbanana tokens -OPEN = chr(0x88) -CLOSE = chr(0x89) -ABORT = chr(0x8A) -ERROR = chr(0x8D) -PING = chr(0x8E) -PONG = chr(0x8F) - -tokenNames = { - LIST: "LIST", - INT: "INT", - STRING: "STRING", - NEG: "NEG", - FLOAT: "FLOAT", - LONGINT: "LONGINT", - LONGNEG: "LONGNEG", - VOCAB: "VOCAB", - OPEN: "OPEN", - CLOSE: "CLOSE", - ABORT: "ABORT", - ERROR: "ERROR", - PING: "PING", - PONG: "PONG", - } - -SIZE_LIMIT = 1000 # default limit on the body length of long tokens (STRING, - # LONGINT, LONGNEG, ERROR) - -class InvalidRemoteInterface(Exception): - pass -class UnknownSchemaType(Exception): - pass - -class Violation(Exception): - """This exception is raised in response to a schema violation. It - indicates that the incoming token stream has violated a constraint - imposed by the recipient. The current Unslicer is abandoned and the - error is propagated upwards to the enclosing Unslicer parent by - providing a BananaFailure object to the parent's .receiveChild method. - All remaining tokens for the current Unslicer are to be dropped.
- """ - - """.where: this string describes which node of the object graph was - being handled when the exception took place.""" - where = "" - - def setLocation(self, where): - self.where = where - def getLocation(self): - return self.where - def prependLocation(self, prefix): - if self.where: - self.where = prefix + " " + self.where - else: - self.where = prefix - def appendLocation(self, suffix): - if self.where: - self.where = self.where + " " + suffix - else: - self.where = suffix - - def __str__(self): - if self.where: - return "Violation (%s): %s" % (self.where, self.args) - else: - return "Violation: %s" % (self.args,) - - -class BananaError(Exception): - """This exception is raised in response to a fundamental protocol - violation. The connection should be dropped immediately. - - .where is an optional string that describes the node of the object graph - where the failure was noticed. - """ - where = None - - def __str__(self): - if self.where: - return "BananaError(in %s): %s" % (self.where, self.args) - else: - return "BananaError: %s" % (self.args,) - -class NegotiationError(Exception): - pass - -class RemoteNegotiationError(Exception): - """The other end hung up on us because they had a NegotiationError on - their side.""" - pass - -class PBError(Exception): - pass - -class BananaFailure(Failure): - """This is a marker subclass of Failure, to let Unslicer.receiveChild - distinguish between an unserialized Failure instance and a a failure in - a child Unslicer""" - pass - - - -class ISlicer(Interface): - """I know how to slice objects into tokens.""" - - sendOpen = Attribute(\ -"""True if an OPEN/CLOSE token pair should be sent around the Slicer's body -tokens. Only special-purpose Slicers (like the RootSlicer) should use False. -""") - - trackReferences = Attribute(\ -"""True if the object we slice is referenceable: i.e. it is useful or -necessary to send multiple copies as a single instance and a bunch of -References, rather than as separate copies. 
Instances are referenceable, as -are mutable containers like lists.""") - - streamable = Attribute(\ -"""True if children of this object are allowed to use Deferreds to stall -production of new tokens. This must be set in slice() before yielding each -child object, and affects that child and all descendants. Streaming is only -allowed if the parent also allows streaming: if slice() is called with -streamable=False, then self.streamable must be False too. It can be changed -from within the slice() generator at any time as long as this restriction is -obeyed. - -This attribute is read when each child Slicer is started.""") - - - def slice(streamable, banana): - """Return an iterator which provides Index Tokens and the Body - Tokens of the object's serialized form. This is frequently - implemented with a generator (i.e. 'yield' appears in the body of - this function). Do not yield the OPEN or the CLOSE token, those will - be handled elsewhere. - - If a Violation exception is raised, slicing will cease. An ABORT - token followed by a CLOSE token will be emitted. - - If 'streamable' is True, the iterator may yield a Deferred to - indicate that slicing should wait until the Deferred is fired. If - the Deferred is errbacked, the connection will be dropped. TODO: it - should be possible to errback with a Violation.""" - - def registerReference(refid, obj): - """Register the relationship between 'refid' (a number taken from - the cumulative count of OPEN tokens sent over our connection: 0 is - the object described by the very first OPEN sent over the wire) and - the object. If the object is sent a second time, a Reference may be - used in its place. - - Slicers usually delegate this function upwards to the RootSlicer, but - it can be handled at any level to allow local scoping of references - (they might only be valid within a single RPC invocation, for - example). - - This method is *not* allowed to raise a Violation, as that will mess - up the transmit logic.
If it raises any other exception, the - connection will be dropped.""" - - def childAborted(f): - """Notify the Slicer that one of its child slicers (as produced by - its .slice iterator) has caused an error. If the slicer got started, - it has now emitted an ABORT token and terminated its token stream. - If it did not get started (usually because the child object was - unserializable), there has not yet been any trace of the object in - the token stream. - - The corresponding Unslicer (receiving this token stream) will get a - BananaFailure and is likely to ignore any remaining tokens from us, - so it may be reasonable for the parent Slicer to give up as well. - - If the Slicer wishes to abandon its own sequence, it should simply - return the failure object passed in. If it wants to absorb the - error, it should return None.""" - - def slicerForObject(obj): - """Get a new Slicer for some child object. Slicers usually delegate - this method up to the RootSlicer. References are handled by - producing a ReferenceSlicer here. These references can have various - scopes. - - If something on the stack does not want the object to be sent, it can - raise a Violation exception. This is the 'taster' function.""" - - def describe(): - """Return a short string describing where in the object tree this - slicer is sitting, relative to its parent. These strings are - obtained from every slicer in the stack, and joined to describe - where any problems occurred.""" - -class IRootSlicer(Interface): - def allowStreaming(streamable): - """Specify whether or not child Slicers will be allowed to stream.""" - def connectionLost(why): - """Called when the transport is closed. The RootSlicer may choose to - abandon objects being sent here.""" - -class IUnslicer(Interface): - # .parent - - # start/receiveChild/receiveClose/finish are - # the main "here are some tokens, make an object out of them" entry - # points used by Unbanana.
- - # start/receiveChild can call self.protocol.abandonUnslicer(failure, - # self) to tell the protocol that the unslicer has given up on life and - # all its remaining tokens should be discarded. The failure will be - # given to the late unslicer's parent in lieu of the object normally - # returned by receiveClose. - - # start/receiveChild/receiveClose/finish may raise a Violation - # exception, which tells the protocol that this object is contaminated - # and should be abandoned. A BananaFailure will be passed to its - # parent. - - # Note, however, that it is not valid to both call abandonUnslicer *and* - # raise a Violation. That would discard too much. - - def setConstraint(constraint): - """Add a constraint for this unslicer. The unslicer will enforce - this constraint upon all incoming data. The constraint must be of an - appropriate type (a ListUnslicer will only accept a ListConstraint, - etc.). It must not be None. To leave us unconstrained, do not call - this method. - - If this method is not called, the Unslicer will accept any valid - banana as input, which probably means there is no limit on the - number of bytes it will accept (and therefore on the memory it could - be made to consume) before it finally accepts or rejects the input. - """ - - def start(count): - """Called to initialize the new slice. The 'count' argument is the - reference id: if this object might be shared (and therefore the - target of a 'reference' token), it should call - self.protocol.setObject(count, obj) with the object being created. - If this object is not available yet (tuples), it should save a - Deferred there instead. - """ - - def checkToken(typebyte, size): - """Check to see if the given token is acceptable (does it conform to - the constraint?). It will not be asked about ABORT or CLOSE tokens, - but it *will* be asked about OPEN. It should enforce a length limit - for long tokens (STRING and LONGINT/LONGNEG types). If STRING is - acceptable, then VOCAB should be too.
It should return None if the - token and the size are acceptable. Should raise Violation if the - schema indicates the token is not acceptable. Should raise - BananaError if the type byte violates the basic Banana protocol. (if - no schema is in effect, this should never raise Violation, but might - still raise BananaError). - """ - - def openerCheckToken(typebyte, size, opentype): - """'typebyte' is the type of an incoming index token. 'size' is the - value of the header associated with this typebyte. 'opentype' is a list - of open tokens that we've received so far, not including the one - that this token hopes to create. - - This method should ask the current opener if this index token is - acceptable, and is used in lieu of checkToken() when the receiver is - in the index phase. Usually implemented by calling - self.opener.openerCheckToken, thus delegating the question to the - RootUnslicer. - """ - - def doOpen(opentype): - """opentype is a tuple. Return None if more index tokens are - required. Check to see if this kind of child object conforms to the - constraint, raise Violation if not. Create a new Unslicer (usually - by delegating to self.parent.doOpen, up to the RootUnslicer). Set a - constraint on the child unslicer, if any. - """ - - def receiveChild(childobject, - ready_deferred): - """'childobject' is being handed to this unslicer. It may be a - primitive type (number or string), or a composite type produced by - another Unslicer. It might also be a Deferred, which indicates that - the actual object is not ready (perhaps a tuple with an element that - is not yet referenceable), in which case you should add a callback - to it that will fill in the appropriate object later. This callback - is required to return the object when it is done, so multiple such - callbacks can be chained. The childobject/ready_deferred argument - pair is taken directly from the output of receiveClose().
If - ready_deferred is non-None, you should return a dependent Deferred - from your own receiveClose method.""" - - def reportViolation(bf): - """You have received an error instead of a child object. If you wish - to give up and propagate the error upwards, return the BananaFailure - object you were just given. To absorb the error and keep going with - your sequence, return None.""" - - def receiveClose(): - """Called when the Close token is received. Returns a tuple of - (object/referenceable-deferred, complete-deferred), or a - BananaFailure if something went wrong. There are four potential - cases:: - - (obj, None): the object is complete and ready to go - (d1, None): the object cannot be referenced yet, probably - because it is an immutable container, and one of its - children cannot be referenced yet. The deferred will - fire by the time the cycle has been fully deserialized, - with the object as its argument. - (obj, d2): the object can be referenced, but it is not yet - complete, probably because some component of it is - 'slow' (see below). The Deferred will fire (with an - argument of None) when the object is ready to be used. - It is not guaranteed to fire by the time the enclosing - top-level object has finished deserializing. - (d1, d2): the object cannot yet be referenced, and even if it could - be, it would not yet be ready for use. Any potential users - should wait until both deferreds fire before using it. - - The first deferred (d1) is guaranteed to fire before the top-most - enclosing object (a CallUnslicer, for PB methods) is closed. (if it - does not fire, that indicates a broken cycle). It is present to - handle cycles that include immutable containers, like tuples. - Mutable containers *must* return a reference to an object (even if - it is not yet ready to be used, because it contains placeholders to - tuples that have not yet been created), otherwise those cycles - cannot be broken and the object graph will not be reconstructable.
- - The second (d2) has no such guarantees about when it will fire. It - indicates a dependence upon 'slow' external events. The first use - case for such 'slow' objects is a globally-referenceable object - which requires a new Broker connection before it can be used, so the - Deferred will not fire until a TCP connection has been established - and the first stages of PB negotiation have been completed. - - If necessary, unbanana.setObject should be called, then the Deferred - created in start() should be fired with the new object.""" - - def finish(): - """Called when the unslicer is popped off the stack. This is called - even if the pop is because of an exception. The unslicer should - perform cleanup, including firing the Deferred with a - BananaFailure if the object it is creating could not be created. - - TODO: can receiveClose and finish be merged? Or should the child - object be returned from finish() instead of receiveClose? - """ - - def describe(): - """Return a short string describing where in the object tree this - unslicer is sitting, relative to its parent. These strings are - obtained from every unslicer in the stack, and joined to describe - where any problems occurred.""" - - def where(): - """This returns a string that describes the location of this - unslicer, starting at the root of the object tree.""" diff --git a/src/foolscap/foolscap/util.py b/src/foolscap/foolscap/util.py deleted file mode 100644 index 367d7086..00000000 --- a/src/foolscap/foolscap/util.py +++ /dev/null @@ -1,52 +0,0 @@ - -from twisted.internet import defer - - -class AsyncAND(defer.Deferred): - """Like DeferredList, but results are discarded and failures handled - in a more convenient fashion. - - Create me with a list of Deferreds. I will fire my callback (with None) - if and when all of my component Deferreds fire successfully. I will fire - my errback when and if any of my component Deferreds errbacks, in which - case I will absorb the failure.
If a second Deferred errbacks, I will not - absorb that failure. - - This means that you can put a bunch of Deferreds together into an - AsyncAND and then forget about them. If all succeed, the AsyncAND will - fire. If one fails, that Failure will be propagated to the AsyncAND. If - multiple ones fail, the first Failure will go to the AsyncAND and the - rest will be left unhandled (and therefore logged). - """ - - def __init__(self, deferredList): - defer.Deferred.__init__(self) - - if not deferredList: - self.callback(None) - return - - self.remaining = len(deferredList) - self._fired = False - - for d in deferredList: - d.addCallbacks(self._cbDeferred, self._cbDeferred, - callbackArgs=(True,), errbackArgs=(False,)) - - def _cbDeferred(self, result, succeeded): - self.remaining -= 1 - if succeeded: - if not self._fired and self.remaining == 0: - # the last input has fired. We fire. - self._fired = True - self.callback(None) - return - else: - if not self._fired: - # the first Failure is carried into our output - self._fired = True - self.errback(result) - return None - else: - # second and later Failures are not absorbed - return result diff --git a/src/foolscap/foolscap/vocab.py b/src/foolscap/foolscap/vocab.py deleted file mode 100644 index 26e1bc7a..00000000 --- a/src/foolscap/foolscap/vocab.py +++ /dev/null @@ -1,34 +0,0 @@ - -import sha - -# here is the list of initial vocab tables. If the two ends negotiate to use -# initial-vocab-table-index N, then both sides will start with the words from -# INITIAL_VOCAB_TABLES[N] for their VOCABized tokens.
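The AsyncAND contract deleted above (fire once when every input succeeds, deliver only the first failure, leave later failures unabsorbed) can be illustrated without Twisted. This is a stand-alone sketch for clarity only: the name AsyncAndSketch and its plain-callback interface are invented for the example and are not part of foolscap or Twisted.

```python
class AsyncAndSketch:
    """Plain-Python illustration of the AsyncAND semantics: call
    on_success() once when every input succeeds, hand the first
    failure to on_failure() and absorb it, and return any later
    failures to the caller unabsorbed."""

    def __init__(self, count, on_success, on_failure):
        self.remaining = count
        self.fired = False
        self.on_success = on_success
        self.on_failure = on_failure

    def input_succeeded(self):
        self.remaining -= 1
        if not self.fired and self.remaining == 0:
            self.fired = True        # the last input has fired: we fire
            self.on_success()

    def input_failed(self, failure):
        if not self.fired:
            self.fired = True        # the first failure goes to our output
            self.on_failure(failure)
            return None              # ...and is absorbed
        return failure               # later failures are not absorbed

# all-success case: fires exactly once, after the final input
log = []
a = AsyncAndSketch(2, lambda: log.append("done"), log.append)
a.input_succeeded()
a.input_succeeded()

# failure case: first failure is delivered, the second comes back unabsorbed
b = AsyncAndSketch(2, lambda: log.append("done"), log.append)
b.input_failed("err1")
leftover = b.input_failed("err2")
```

In the real class the "unabsorbed" return value is what lets Twisted log second and later Failures instead of silently dropping them.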
- -vocab_v0 = [] -vocab_v1 = [ # all opentypes used in 0.0.6 - "none", "boolean", "reference", - "dict", "list", "tuple", "set", "immutable-set", - "unicode", "set-vocab", "add-vocab", - "call", "arguments", "answer", "error", - "my-reference", "your-reference", "their-reference", "copyable", - # these are only used by storage.py - "instance", "module", "class", "method", "function", - # I'm not sure this one is actually used anywhere, but the first 127 of - # these are basically free. - "attrdict", - ] -INITIAL_VOCAB_TABLES = { 0: vocab_v0, 1: vocab_v1 } - -# to ensure both sides agree on the actual words, we can hash the vocab table -# into a short string. This is included in the negotiation decision and -# compared by the receiving side. - -def hashVocabTable(table_index): - data = "\x00".join(INITIAL_VOCAB_TABLES[table_index]) - digest = sha.new(data).hexdigest() - return digest[:4] - -def getVocabRange(): - keys = INITIAL_VOCAB_TABLES.keys() - return min(keys), max(keys) diff --git a/src/foolscap/misc/dapper/debian/changelog b/src/foolscap/misc/dapper/debian/changelog deleted file mode 100644 index d9b00653..00000000 --- a/src/foolscap/misc/dapper/debian/changelog +++ /dev/null @@ -1,71 +0,0 @@ -foolscap (0.1.5) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 07 Aug 2007 17:47:53 -0700 - -foolscap (0.1.4) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 14 May 2007 22:37:04 -0700 - -foolscap (0.1.3) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 2 May 2007 14:55:49 -0700 - -foolscap (0.1.2) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 4 Apr 2007 12:32:46 -0700 - -foolscap (0.1.1) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 3 Apr 2007 20:48:07 -0700 - -foolscap (0.1.0) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 15 Mar 2007 16:56:16 -0700 - -foolscap (0.0.7) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 16 Jan 2007 12:03:00 -0800 -
-foolscap (0.0.6) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 18 Dec 2006 12:10:51 -0800 - -foolscap (0.0.5) unstable; urgency=low - - * new release - - -- Brian Warner Sat, 4 Nov 2006 23:20:46 -0800 - -foolscap (0.0.4) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 26 Oct 2006 00:46:30 -0700 - -foolscap (0.0.3) unstable; urgency=low - - * new upstream release, put debian packaging in the tree - - -- Brian Warner Tue, 10 Oct 2006 19:16:13 -0700 - -foolscap (0.0.2) unstable; urgency=low - - * New upstream release of an experimental package - - -- Brian Warner Thu, 27 Jul 2006 17:40:15 -0700 diff --git a/src/foolscap/misc/dapper/debian/compat b/src/foolscap/misc/dapper/debian/compat deleted file mode 100644 index b8626c4c..00000000 --- a/src/foolscap/misc/dapper/debian/compat +++ /dev/null @@ -1 +0,0 @@ -4 diff --git a/src/foolscap/misc/dapper/debian/control b/src/foolscap/misc/dapper/debian/control deleted file mode 100644 index b16e0181..00000000 --- a/src/foolscap/misc/dapper/debian/control +++ /dev/null @@ -1,21 +0,0 @@ -Source: foolscap -Section: python -Priority: optional -Maintainer: Brian Warner -Build-Depends: debhelper (>> 4.1.68), python2.4-dev, python2.4-twisted, cdbs -Standards-Version: 3.7.2 - -Package: python-foolscap -Architecture: all -Depends: python (>= 2.4), python (<< 2.5), python2.4-foolscap -Description: An object-capability -based RPC system for Twisted Python - This is a dummy package that only depends on python2.4-foolscap - -Package: python2.4-foolscap -Architecture: all -Depends: python2.4, python2.4-twisted-core -Recommends: python2.4-twisted-names, python2.4-pyopenssl -Description: An object-capability -based RPC system for Twisted Python - Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining - capability-based security, secure references, flexible serialization, and - technology to mitigate resource-consumption attacks. 
diff --git a/src/foolscap/misc/dapper/debian/copyright b/src/foolscap/misc/dapper/debian/copyright deleted file mode 100644 index b68b8960..00000000 --- a/src/foolscap/misc/dapper/debian/copyright +++ /dev/null @@ -1,30 +0,0 @@ -This package was debianized by Brian Warner - -It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/ - -Copyright (c) 2006 -Brian Warner - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -Copyright Exceptions: - -No exceptions are listed in the upstream source. diff --git a/src/foolscap/misc/dapper/debian/rules b/src/foolscap/misc/dapper/debian/rules deleted file mode 100644 index 49822e6e..00000000 --- a/src/foolscap/misc/dapper/debian/rules +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/make -f -# Sample debian/rules that uses debhelper. -# GNU copyright 1997 to 1999 by Joey Hess. - -# Uncomment this to turn on verbose mode. -#export DH_VERBOSE=1 - -# This is the debhelper compatibility version to use.
-export DH_COMPAT=4 - -build: build-stamp -build-stamp: - dh_testdir - - ## Build for all python versions - python2.4 setup.py build - - touch build-stamp - -clean: - dh_testdir - dh_testroot - rm -f build-stamp - rm -rf build - find . -name '*.pyc' |xargs -r rm - dh_clean - -install: build - dh_testdir - dh_testroot - dh_clean -k - dh_installdirs - - ## Python 2.4 - python2.4 setup.py build - python2.4 setup.py install --prefix=debian/python2.4-foolscap/usr - - -# Build architecture-independent files here. -binary-indep: build install - dh_testdir - dh_testroot - dh_installdocs -i -A NEWS README - dh_installdocs ChangeLog doc/jobs.txt doc/todo.txt doc/use-cases.txt doc/using-foolscap.xhtml doc/copyable.xhtml doc/listings doc/specifications - dh_installchangelogs -i - dh_compress -i -X.py - dh_fixperms - dh_python - dh_installdeb - dh_gencontrol - dh_md5sums - dh_builddeb - -binary-arch: -# nothing to do - -binary: binary-indep -.PHONY: build clean binary-indep binary-arch binary install - - - diff --git a/src/foolscap/misc/dapper/debian/watch b/src/foolscap/misc/dapper/debian/watch deleted file mode 100644 index 8b21b100..00000000 --- a/src/foolscap/misc/dapper/debian/watch +++ /dev/null @@ -1,2 +0,0 @@ -version=3 -http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/ diff --git a/src/foolscap/misc/edgy/debian/changelog b/src/foolscap/misc/edgy/debian/changelog deleted file mode 100644 index d9b00653..00000000 --- a/src/foolscap/misc/edgy/debian/changelog +++ /dev/null @@ -1,71 +0,0 @@ -foolscap (0.1.5) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 07 Aug 2007 17:47:53 -0700 - -foolscap (0.1.4) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 14 May 2007 22:37:04 -0700 - -foolscap (0.1.3) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 2 May 2007 14:55:49 -0700 - -foolscap (0.1.2) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 4 Apr 2007 12:32:46 -0700 - -foolscap (0.1.1) unstable; urgency=low - 
- * new release - - -- Brian Warner Tue, 3 Apr 2007 20:48:07 -0700 - -foolscap (0.1.0) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 15 Mar 2007 16:56:16 -0700 - -foolscap (0.0.7) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 16 Jan 2007 12:03:00 -0800 - -foolscap (0.0.6) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 18 Dec 2006 12:10:51 -0800 - -foolscap (0.0.5) unstable; urgency=low - - * new release - - -- Brian Warner Sat, 4 Nov 2006 23:20:46 -0800 - -foolscap (0.0.4) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 26 Oct 2006 00:46:30 -0700 - -foolscap (0.0.3) unstable; urgency=low - - * new upstream release, put debian packaging in the tree - - -- Brian Warner Tue, 10 Oct 2006 19:16:13 -0700 - -foolscap (0.0.2) unstable; urgency=low - - * New upstream release of an experimental package - - -- Brian Warner Thu, 27 Jul 2006 17:40:15 -0700 diff --git a/src/foolscap/misc/edgy/debian/compat b/src/foolscap/misc/edgy/debian/compat deleted file mode 100644 index 7ed6ff82..00000000 --- a/src/foolscap/misc/edgy/debian/compat +++ /dev/null @@ -1 +0,0 @@ -5 diff --git a/src/foolscap/misc/edgy/debian/control b/src/foolscap/misc/edgy/debian/control deleted file mode 100644 index a5a4809d..00000000 --- a/src/foolscap/misc/edgy/debian/control +++ /dev/null @@ -1,17 +0,0 @@ -Source: foolscap -Section: python -Priority: optional -Maintainer: Brian Warner -Build-Depends: debhelper (>= 5.0.37.3), cdbs (>= 0.4.43), python-central (>= 0.5), python-all-dev, python-twisted-core -XS-Python-Version: all -Standards-Version: 3.7.2 - -Package: python-foolscap -Architecture: all -Depends: ${python:Depends}, python-twisted-core -Recommends: python-twisted-names, python-pyopenssl -XB-Python-Version: ${python:Versions} -Description: An object-capability -based RPC system for Twisted Python - Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining - capability-based security, secure references, 
flexible serialization, and - technology to mitigate resource-consumption attacks. diff --git a/src/foolscap/misc/edgy/debian/copyright b/src/foolscap/misc/edgy/debian/copyright deleted file mode 100644 index b68b8960..00000000 --- a/src/foolscap/misc/edgy/debian/copyright +++ /dev/null @@ -1,30 +0,0 @@ -This package was debianized by Brian Warner - -It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/ - -Copyright (c) 2006 -Brian Warner - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -Copyright Exceptions: - -No exceptions are listed in the upstream source. 
diff --git a/src/foolscap/misc/edgy/debian/pycompat b/src/foolscap/misc/edgy/debian/pycompat deleted file mode 100644 index 0cfbf088..00000000 --- a/src/foolscap/misc/edgy/debian/pycompat +++ /dev/null @@ -1 +0,0 @@ -2 diff --git a/src/foolscap/misc/edgy/debian/rules b/src/foolscap/misc/edgy/debian/rules deleted file mode 100644 index c15f6905..00000000 --- a/src/foolscap/misc/edgy/debian/rules +++ /dev/null @@ -1,15 +0,0 @@ -#! /usr/bin/make -f -# Uncomment this to turn on verbose mode. -#export DH_VERBOSE=1 - -DEB_PYTHON_SYSTEM=pycentral - -include /usr/share/cdbs/1/rules/debhelper.mk -include /usr/share/cdbs/1/class/python-distutils.mk - - -install/python-foolscap:: - dh_installdocs doc/jobs.txt doc/todo.txt doc/use-cases.txt doc/using-foolscap.xhtml doc/copyable.xhtml doc/listings doc/specifications - -clean:: - -rm -rf build diff --git a/src/foolscap/misc/edgy/debian/watch b/src/foolscap/misc/edgy/debian/watch deleted file mode 100644 index 8b21b100..00000000 --- a/src/foolscap/misc/edgy/debian/watch +++ /dev/null @@ -1,2 +0,0 @@ -version=3 -http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/ diff --git a/src/foolscap/misc/feisty/debian/changelog b/src/foolscap/misc/feisty/debian/changelog deleted file mode 100644 index d9b00653..00000000 --- a/src/foolscap/misc/feisty/debian/changelog +++ /dev/null @@ -1,71 +0,0 @@ -foolscap (0.1.5) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 07 Aug 2007 17:47:53 -0700 - -foolscap (0.1.4) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 14 May 2007 22:37:04 -0700 - -foolscap (0.1.3) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 2 May 2007 14:55:49 -0700 - -foolscap (0.1.2) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 4 Apr 2007 12:32:46 -0700 - -foolscap (0.1.1) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 3 Apr 2007 20:48:07 -0700 - -foolscap (0.1.0) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 15 Mar 2007 
16:56:16 -0700 - -foolscap (0.0.7) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 16 Jan 2007 12:03:00 -0800 - -foolscap (0.0.6) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 18 Dec 2006 12:10:51 -0800 - -foolscap (0.0.5) unstable; urgency=low - - * new release - - -- Brian Warner Sat, 4 Nov 2006 23:20:46 -0800 - -foolscap (0.0.4) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 26 Oct 2006 00:46:30 -0700 - -foolscap (0.0.3) unstable; urgency=low - - * new upstream release, put debian packaging in the tree - - -- Brian Warner Tue, 10 Oct 2006 19:16:13 -0700 - -foolscap (0.0.2) unstable; urgency=low - - * New upstream release of an experimental package - - -- Brian Warner Thu, 27 Jul 2006 17:40:15 -0700 diff --git a/src/foolscap/misc/feisty/debian/compat b/src/foolscap/misc/feisty/debian/compat deleted file mode 100644 index 7ed6ff82..00000000 --- a/src/foolscap/misc/feisty/debian/compat +++ /dev/null @@ -1 +0,0 @@ -5 diff --git a/src/foolscap/misc/feisty/debian/control b/src/foolscap/misc/feisty/debian/control deleted file mode 100644 index 40dd6fcb..00000000 --- a/src/foolscap/misc/feisty/debian/control +++ /dev/null @@ -1,17 +0,0 @@ -Source: foolscap -Section: python -Priority: optional -Maintainer: Brian Warner -Build-Depends: debhelper (>= 5.0.38), cdbs (>= 0.4.43), python-central (>= 0.5), python-all-dev, python-twisted-core -XS-Python-Version: all -Standards-Version: 3.7.2 - -Package: python-foolscap -Architecture: all -Depends: ${python:Depends}, python-twisted-core -Recommends: python-twisted-names, python-pyopenssl -XB-Python-Version: ${python:Versions} -Description: An object-capability -based RPC system for Twisted Python - Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining - capability-based security, secure references, flexible serialization, and - technology to mitigate resource-consumption attacks. 
diff --git a/src/foolscap/misc/feisty/debian/copyright b/src/foolscap/misc/feisty/debian/copyright deleted file mode 100644 index b68b8960..00000000 --- a/src/foolscap/misc/feisty/debian/copyright +++ /dev/null @@ -1,30 +0,0 @@ -This package was debianized by Brian Warner - -It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/ - -Copyright (c) 2006 -Brian Warner - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -Copyright Exceptions: - -No exceptions are listed in the upstream source. diff --git a/src/foolscap/misc/feisty/debian/pycompat b/src/foolscap/misc/feisty/debian/pycompat deleted file mode 100644 index 0cfbf088..00000000 --- a/src/foolscap/misc/feisty/debian/pycompat +++ /dev/null @@ -1 +0,0 @@ -2 diff --git a/src/foolscap/misc/feisty/debian/rules b/src/foolscap/misc/feisty/debian/rules deleted file mode 100644 index c15f6905..00000000 --- a/src/foolscap/misc/feisty/debian/rules +++ /dev/null @@ -1,15 +0,0 @@ -#! 
/usr/bin/make -f -# Uncomment this to turn on verbose mode. -#export DH_VERBOSE=1 - -DEB_PYTHON_SYSTEM=pycentral - -include /usr/share/cdbs/1/rules/debhelper.mk -include /usr/share/cdbs/1/class/python-distutils.mk - - -install/python-foolscap:: - dh_installdocs doc/jobs.txt doc/todo.txt doc/use-cases.txt doc/using-foolscap.xhtml doc/copyable.xhtml doc/listings doc/specifications - -clean:: - -rm -rf build diff --git a/src/foolscap/misc/feisty/debian/watch b/src/foolscap/misc/feisty/debian/watch deleted file mode 100644 index 8b21b100..00000000 --- a/src/foolscap/misc/feisty/debian/watch +++ /dev/null @@ -1,2 +0,0 @@ -version=3 -http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/ diff --git a/src/foolscap/misc/sarge/debian/changelog b/src/foolscap/misc/sarge/debian/changelog deleted file mode 100644 index d9b00653..00000000 --- a/src/foolscap/misc/sarge/debian/changelog +++ /dev/null @@ -1,71 +0,0 @@ -foolscap (0.1.5) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 07 Aug 2007 17:47:53 -0700 - -foolscap (0.1.4) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 14 May 2007 22:37:04 -0700 - -foolscap (0.1.3) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 2 May 2007 14:55:49 -0700 - -foolscap (0.1.2) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 4 Apr 2007 12:32:46 -0700 - -foolscap (0.1.1) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 3 Apr 2007 20:48:07 -0700 - -foolscap (0.1.0) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 15 Mar 2007 16:56:16 -0700 - -foolscap (0.0.7) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 16 Jan 2007 12:03:00 -0800 - -foolscap (0.0.6) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 18 Dec 2006 12:10:51 -0800 - -foolscap (0.0.5) unstable; urgency=low - - * new release - - -- Brian Warner Sat, 4 Nov 2006 23:20:46 -0800 - -foolscap (0.0.4) unstable; urgency=low - - * new release - - -- Brian Warner 
Thu, 26 Oct 2006 00:46:30 -0700 - -foolscap (0.0.3) unstable; urgency=low - - * new upstream release, put debian packaging in the tree - - -- Brian Warner Tue, 10 Oct 2006 19:16:13 -0700 - -foolscap (0.0.2) unstable; urgency=low - - * New upstream release of an experimental package - - -- Brian Warner Thu, 27 Jul 2006 17:40:15 -0700 diff --git a/src/foolscap/misc/sarge/debian/compat b/src/foolscap/misc/sarge/debian/compat deleted file mode 100644 index b8626c4c..00000000 --- a/src/foolscap/misc/sarge/debian/compat +++ /dev/null @@ -1 +0,0 @@ -4 diff --git a/src/foolscap/misc/sarge/debian/control b/src/foolscap/misc/sarge/debian/control deleted file mode 100644 index 78dc02ab..00000000 --- a/src/foolscap/misc/sarge/debian/control +++ /dev/null @@ -1,21 +0,0 @@ -Source: foolscap -Section: python -Priority: optional -Maintainer: Brian Warner -Build-Depends: debhelper (>> 4.1.68), python2.4-dev, python2.4-twisted, cdbs -Standards-Version: 3.7.2 - -Package: python-foolscap -Architecture: all -Depends: python (>= 2.4), python (<< 2.5), python2.4-foolscap -Description: An object-capability -based RPC system for Twisted Python - This is a dummy package that only depends on python2.4-foolscap - -Package: python2.4-foolscap -Architecture: all -Depends: python2.4, python2.4-twisted -Recommends: python2.4-pyopenssl -Description: An object-capability -based RPC system for Twisted Python - Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining - capability-based security, secure references, flexible serialization, and - technology to mitigate resource-consumption attacks. 
diff --git a/src/foolscap/misc/sarge/debian/copyright b/src/foolscap/misc/sarge/debian/copyright deleted file mode 100644 index b68b8960..00000000 --- a/src/foolscap/misc/sarge/debian/copyright +++ /dev/null @@ -1,30 +0,0 @@ -This package was debianized by Brian Warner - -It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/ - -Copyright (c) 2006 -Brian Warner - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -Copyright Exceptions: - -No exceptions are listed in the upstream source. diff --git a/src/foolscap/misc/sarge/debian/rules b/src/foolscap/misc/sarge/debian/rules deleted file mode 100644 index 49822e6e..00000000 --- a/src/foolscap/misc/sarge/debian/rules +++ /dev/null @@ -1,61 +0,0 @@ -#!/usr/bin/make -f -# Sample debian/rules that uses debhelper. -# GNU copyright 1997 to 1999 by Joey Hess. - -# Uncomment this to turn on verbose mode. -#export DH_VERBOSE=1 - -# This is the debhelper compatibility version to use.
-export DH_COMPAT=4 - -build: build-stamp -build-stamp: - dh_testdir - - ## Build for all python versions - python2.4 setup.py build - - touch build-stamp - -clean: - dh_testdir - dh_testroot - rm -f build-stamp - rm -rf build - find . -name '*.pyc' |xargs -r rm - dh_clean - -install: build - dh_testdir - dh_testroot - dh_clean -k - dh_installdirs - - ## Python 2.4 - python2.4 setup.py build - python2.4 setup.py install --prefix=debian/python2.4-foolscap/usr - - -# Build architecture-independent files here. -binary-indep: build install - dh_testdir - dh_testroot - dh_installdocs -i -A NEWS README - dh_installdocs ChangeLog doc/jobs.txt doc/todo.txt doc/use-cases.txt doc/using-foolscap.xhtml doc/copyable.xhtml doc/listings doc/specifications - dh_installchangelogs -i - dh_compress -i -X.py - dh_fixperms - dh_python - dh_installdeb - dh_gencontrol - dh_md5sums - dh_builddeb - -binary-arch: -# nothing to do - -binary: binary-indep -.PHONY: build clean binary-indep binary-arch binary install - - - diff --git a/src/foolscap/misc/sarge/debian/watch b/src/foolscap/misc/sarge/debian/watch deleted file mode 100644 index 8b21b100..00000000 --- a/src/foolscap/misc/sarge/debian/watch +++ /dev/null @@ -1,2 +0,0 @@ -version=3 -http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/ diff --git a/src/foolscap/misc/sid/debian/changelog b/src/foolscap/misc/sid/debian/changelog deleted file mode 100644 index d9b00653..00000000 --- a/src/foolscap/misc/sid/debian/changelog +++ /dev/null @@ -1,71 +0,0 @@ -foolscap (0.1.5) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 07 Aug 2007 17:47:53 -0700 - -foolscap (0.1.4) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 14 May 2007 22:37:04 -0700 - -foolscap (0.1.3) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 2 May 2007 14:55:49 -0700 - -foolscap (0.1.2) unstable; urgency=low - - * new release - - -- Brian Warner Wed, 4 Apr 2007 12:32:46 -0700 - -foolscap (0.1.1) unstable; urgency=low - - * 
new release - - -- Brian Warner Tue, 3 Apr 2007 20:48:07 -0700 - -foolscap (0.1.0) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 15 Mar 2007 16:56:16 -0700 - -foolscap (0.0.7) unstable; urgency=low - - * new release - - -- Brian Warner Tue, 16 Jan 2007 12:03:00 -0800 - -foolscap (0.0.6) unstable; urgency=low - - * new release - - -- Brian Warner Mon, 18 Dec 2006 12:10:51 -0800 - -foolscap (0.0.5) unstable; urgency=low - - * new release - - -- Brian Warner Sat, 4 Nov 2006 23:20:46 -0800 - -foolscap (0.0.4) unstable; urgency=low - - * new release - - -- Brian Warner Thu, 26 Oct 2006 00:46:30 -0700 - -foolscap (0.0.3) unstable; urgency=low - - * new upstream release, put debian packaging in the tree - - -- Brian Warner Tue, 10 Oct 2006 19:16:13 -0700 - -foolscap (0.0.2) unstable; urgency=low - - * New upstream release of an experimental package - - -- Brian Warner Thu, 27 Jul 2006 17:40:15 -0700 diff --git a/src/foolscap/misc/sid/debian/compat b/src/foolscap/misc/sid/debian/compat deleted file mode 100644 index b8626c4c..00000000 --- a/src/foolscap/misc/sid/debian/compat +++ /dev/null @@ -1 +0,0 @@ -4 diff --git a/src/foolscap/misc/sid/debian/control b/src/foolscap/misc/sid/debian/control deleted file mode 100644 index e4125e35..00000000 --- a/src/foolscap/misc/sid/debian/control +++ /dev/null @@ -1,18 +0,0 @@ -Source: foolscap -Section: python -Priority: optional -Maintainer: Brian Warner -Build-Depends: debhelper (>= 5.0.37.2), cdbs (>= 0.4.43), python-central (>= 0.5), python, python-dev -Build-Depends-Indep: python-twisted-core -XS-Python-Version: all -Standards-Version: 3.7.2 - -Package: python-foolscap -Architecture: all -Depends: ${python:Depends}, python-twisted-core -Recommends: python-twisted-names, python-pyopenssl -XB-Python-Version: ${python:Versions} -Description: An object-capability -based RPC system for Twisted Python - Foolscap (aka "newpb") contains a new RPC system for Twisted Python, combining - capability-based security, secure 
references, flexible serialization, and - technology to mitigate resource-consumption attacks. diff --git a/src/foolscap/misc/sid/debian/copyright b/src/foolscap/misc/sid/debian/copyright deleted file mode 100644 index b68b8960..00000000 --- a/src/foolscap/misc/sid/debian/copyright +++ /dev/null @@ -1,30 +0,0 @@ -This package was debianized by Brian Warner - -It was downloaded from http://www.twistedmatrix.com/~warner/Foolscap/ - -Copyright (c) 2006 -Brian Warner - -Permission is hereby granted, free of charge, to any person obtaining -a copy of this software and associated documentation files (the -"Software"), to deal in the Software without restriction, including -without limitation the rights to use, copy, modify, merge, publish, -distribute, sublicense, and/or sell copies of the Software, and to -permit persons to whom the Software is furnished to do so, subject to -the following conditions: - -The above copyright notice and this permission notice shall be -included in all copies or substantial portions of the Software. - -THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, -EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF -MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND -NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE -LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION -OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION -WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. - - -Copyright Exceptions: - -No exceptions are listed in the upstream source. 
diff --git a/src/foolscap/misc/sid/debian/pycompat b/src/foolscap/misc/sid/debian/pycompat deleted file mode 100644 index 0cfbf088..00000000 --- a/src/foolscap/misc/sid/debian/pycompat +++ /dev/null @@ -1 +0,0 @@ -2 diff --git a/src/foolscap/misc/sid/debian/rules b/src/foolscap/misc/sid/debian/rules deleted file mode 100644 index c15f6905..00000000 --- a/src/foolscap/misc/sid/debian/rules +++ /dev/null @@ -1,15 +0,0 @@ -#! /usr/bin/make -f -# Uncomment this to turn on verbose mode. -#export DH_VERBOSE=1 - -DEB_PYTHON_SYSTEM=pycentral - -include /usr/share/cdbs/1/rules/debhelper.mk -include /usr/share/cdbs/1/class/python-distutils.mk - - -install/python-foolscap:: - dh_installdocs doc/jobs.txt doc/todo.txt doc/use-cases.txt doc/using-foolscap.xhtml doc/copyable.xhtml doc/listings doc/specifications - -clean:: - -rm -rf build diff --git a/src/foolscap/misc/sid/debian/watch b/src/foolscap/misc/sid/debian/watch deleted file mode 100644 index 8b21b100..00000000 --- a/src/foolscap/misc/sid/debian/watch +++ /dev/null @@ -1,2 +0,0 @@ -version=3 -http://twistedmatrix.com/~warner/Foolscap/ ([\d\.]+)/ diff --git a/src/foolscap/misc/testutils/figleaf.py b/src/foolscap/misc/testutils/figleaf.py deleted file mode 100644 index 03d8c08d..00000000 --- a/src/foolscap/misc/testutils/figleaf.py +++ /dev/null @@ -1,400 +0,0 @@ -#! /usr/bin/env python -""" -figleaf is another tool to trace code coverage (yes, in Python ;). - -figleaf uses the sys.settrace hook to record which statements are -executed by the CPython interpreter; this record can then be saved -into a file, or otherwise communicated back to a reporting script. - -figleaf differs from the gold standard of coverage tools -('coverage.py') in several ways. First and foremost, figleaf uses the -same criterion for "interesting" lines of code as the sys.settrace -function, which obviates some of the complexity in coverage.py (but -does mean that your "loc" count goes down). 
Second, figleaf does not -record code executed in the Python standard library, which results in -a significant speedup. And third, the format in which the coverage -format is saved is very simple and easy to work with. - -You might want to use figleaf if you're recording coverage from -multiple types of tests and need to aggregate the coverage in -interesting ways, and/or control when coverage is recorded. -coverage.py is a better choice for command-line execution, and its -reporting is a fair bit nicer. - -Command line usage: :: - - figleaf.py - -The figleaf output is saved into the file '.figleaf', which is an -*aggregate* of coverage reports from all figleaf runs from this -directory. '.figleaf' contains a pickled dictionary of sets; the keys -are source code filenames, and the sets contain all line numbers -executed by the Python interpreter. See the docs or command-line -programs in bin/ for more information. - -High level API: :: - - * ``start(ignore_lib=True)`` -- start recording code coverage. - * ``stop()`` -- stop recording code coverage. - * ``get_trace_obj()`` -- return the (singleton) trace object. - * ``get_info()`` -- get the coverage dictionary - -Classes & functions worth knowing about, i.e. a lower level API: - - * ``get_lines(fp)`` -- return the set of interesting lines in the fp. - * ``combine_coverage(d1, d2)`` -- combine coverage info from two dicts. - * ``read_coverage(filename)`` -- load the coverage dictionary - * ``write_coverage(filename)`` -- write the coverage out. - * ``annotate_coverage(...)`` -- annotate a Python file with its coverage info. - -Known problems: - - -- module docstrings are *covered* but not found. - -AUTHOR: C. Titus Brown, titus@idyll.org - -'figleaf' is Copyright (C) 2006. It will be released under the BSD license. -""" -import sys -import os -import threading -from cPickle import dump, load - -### import builtin sets if in > 2.4, otherwise use 'sets' module. 
-# we require 2.4 or later -assert set - - -from token import tok_name, NEWLINE, STRING, INDENT, DEDENT, COLON -import parser, types, symbol - -def get_token_name(x): - """ - Utility to help pretty-print AST symbols/Python tokens. - """ - if symbol.sym_name.has_key(x): - return symbol.sym_name[x] - return tok_name.get(x, '-') - -class LineGrabber: - """ - Count 'interesting' lines of Python in source files, where - 'interesting' is defined as 'lines that could possibly be - executed'. - - @CTB this badly needs to be refactored... once I have automated - tests ;) - """ - def __init__(self, fp): - """ - Count lines of code in 'fp'. - """ - self.lines = set() - - self.ast = parser.suite(fp.read()) - self.tree = parser.ast2tuple(self.ast, True) - - self.find_terminal_nodes(self.tree) - - def find_terminal_nodes(self, tup): - """ - Recursively eat an AST in tuple form, finding the first line - number for "interesting" code. - """ - (sym, rest) = tup[0], tup[1:] - - line_nos = set() - if type(rest[0]) == types.TupleType: ### node - - for x in rest: - token_line_no = self.find_terminal_nodes(x) - if token_line_no is not None: - line_nos.add(token_line_no) - - if symbol.sym_name[sym] in ('stmt', 'suite', 'lambdef', - 'except_clause') and line_nos: - # store the line number that this statement started at - self.lines.add(min(line_nos)) - elif symbol.sym_name[sym] in ('if_stmt',): - # add all lines under this - self.lines.update(line_nos) - elif symbol.sym_name[sym] in ('global_stmt',): # IGNORE - return - else: - if line_nos: - return min(line_nos) - - else: ### leaf - if sym not in (NEWLINE, STRING, INDENT, DEDENT, COLON) and \ - tup[1] != 'else': - return tup[2] - return None - - def pretty_print(self, tup=None, indent=0): - """ - Pretty print the AST. 
- """ - if tup is None: - tup = self.tree - - s = tup[1] - - if type(s) == types.TupleType: - print ' '*indent, get_token_name(tup[0]) - for x in tup[1:]: - self.pretty_print(x, indent+1) - else: - print ' '*indent, get_token_name(tup[0]), tup[1:] - -def get_lines(fp): - """ - Return the set of interesting lines in the source code read from - this file handle. - """ - l = LineGrabber(fp) - return l.lines - -class CodeTracer: - """ - Basic code coverage tracking, using sys.settrace. - """ - def __init__(self, ignore_prefixes=[]): - self.c = {} - self.started = False - self.ignore_prefixes = ignore_prefixes - - def start(self): - """ - Start recording. - """ - if not self.started: - self.started = True - - sys.settrace(self.g) - if hasattr(threading, 'settrace'): - threading.settrace(self.g) - - def stop(self): - if self.started: - sys.settrace(None) - if hasattr(threading, 'settrace'): - threading.settrace(None) - - self.started = False - - def g(self, f, e, a): - """ - global trace function. - """ - if e is 'call': - for p in self.ignore_prefixes: - if f.f_code.co_filename.startswith(p): - return - - return self.t - - def t(self, f, e, a): - """ - local trace function. - """ - - if e is 'line': - self.c[(f.f_code.co_filename, f.f_lineno)] = 1 - return self.t - - def clear(self): - """ - wipe out coverage info - """ - - self.c = {} - - def gather_files(self): - """ - Return the dictionary of lines of executed code; the dict - contains items (k, v), where 'k' is the filename and 'v' - is a set of line numbers. - """ - files = {} - for (filename, line) in self.c.keys(): - d = files.get(filename, set()) - d.add(line) - files[filename] = d - - return files - -def combine_coverage(d1, d2): - """ - Given two coverage dictionaries, combine the recorded coverage - and return a new dictionary. 
- """ - keys = set(d1.keys()) - keys.update(set(d2.keys())) - - new_d = {} - for k in keys: - v = d1.get(k, set()) - v2 = d2.get(k, set()) - - s = set(v) - s.update(v2) - new_d[k] = s - - return new_d - -def write_coverage(filename, combine=True): - """ - Write the current coverage info out to the given filename. If - 'combine' is false, destroy any previously recorded coverage info. - """ - if _t is None: - return - - d = _t.gather_files() - - # combine? - if combine: - old = {} - fp = None - try: - fp = open(filename) - except IOError: - pass - - if fp: - old = load(fp) - fp.close() - d = combine_coverage(d, old) - - # ok, save. - outfp = open(filename, 'w') - try: - dump(d, outfp) - finally: - outfp.close() - -def read_coverage(filename): - """ - Read a coverage dictionary in from the given file. - """ - fp = open(filename) - try: - d = load(fp) - finally: - fp.close() - - return d - -def annotate_coverage(in_fp, out_fp, covered, all_lines, - mark_possible_lines=False): - """ - A simple example coverage annotator that outputs text. - """ - for i, line in enumerate(in_fp): - i = i + 1 - - if i in covered: - symbol = '>' - elif i in all_lines: - symbol = '!' - else: - symbol = ' ' - - symbol2 = '' - if mark_possible_lines: - symbol2 = ' ' - if i in all_lines: - symbol2 = '-' - - out_fp.write('%s%s %s' % (symbol, symbol2, line,)) - -####################### - -# -# singleton functions/top-level API -# - -_t = None - -def start(ignore_python_lib=True, ignore_prefixes=[]): - """ - Start tracing code coverage. If 'ignore_python_lib' is True, - ignore all files that live below the same directory as the 'os' - module. - """ - global _t - if _t is None: - ignore_prefixes = ignore_prefixes[:] - if ignore_python_lib: - ignore_prefixes.append(os.path.realpath(os.path.dirname(os.__file__))) - _t = CodeTracer(ignore_prefixes) - - _t.start() - -def stop(): - """ - Stop tracing code coverage. 
- """ - global _t - if _t is not None: - _t.stop() - -def get_trace_obj(): - """ - Return the (singleton) trace object, if it exists. - """ - return _t - -def get_info(): - """ - Get the coverage dictionary from the trace object. - """ - if _t: - return _t.gather_files() - -############# - -def display_ast(): - l = LineGrabber(open(sys.argv[1])) - l.pretty_print() - -def main(): - """ - Execute the given Python file with coverage, making it look like it is - __main__. - """ - ignore_pylibs = False - - def print_help(): - print 'Usage: figleaf [-i] ' - print '' - print 'Options:' - print ' -i Ignore Python standard libraries when calculating coverage' - - args = sys.argv[1:] - - if len(args) < 1: - print_help() - raise SystemExit() - elif len(args) > 2 and args[0] == '-i': - ignore_pylibs = True - - ## Make sure to strip off the -i or --ignore-python-libs option if it exists - args = args[1:] - - ## Reset system args so that the subsequently exec'd file can read from sys.argv - sys.argv = args - - sys.path[0] = os.path.dirname(args[0]) - - cwd = os.getcwd() - - start(ignore_pylibs) # START code coverage - - import __main__ - try: - execfile(args[0], __main__.__dict__) - finally: - stop() # STOP code coverage - - write_coverage(os.path.join(cwd, '.figleaf')) diff --git a/src/foolscap/misc/testutils/figleaf2html b/src/foolscap/misc/testutils/figleaf2html deleted file mode 100644 index 68524669..00000000 --- a/src/foolscap/misc/testutils/figleaf2html +++ /dev/null @@ -1,3 +0,0 @@ -#! /usr/bin/env python -import figleaf_htmlizer -figleaf_htmlizer.main() diff --git a/src/foolscap/misc/testutils/figleaf_htmlizer.py b/src/foolscap/misc/testutils/figleaf_htmlizer.py deleted file mode 100644 index 9b009373..00000000 --- a/src/foolscap/misc/testutils/figleaf_htmlizer.py +++ /dev/null @@ -1,272 +0,0 @@ -#! 
/usr/bin/env python -import sys -import figleaf -import os -import re - -from optparse import OptionParser - -import logging -logging.basicConfig(level=logging.DEBUG) - -logger = logging.getLogger('figleaf.htmlizer') - -def read_exclude_patterns(f): - if not f: - return [] - exclude_patterns = [] - - fp = open(f) - for line in fp: - line = line.rstrip() - if line and not line.startswith('#'): - pattern = re.compile(line) - exclude_patterns.append(pattern) - - return exclude_patterns - -def report_as_html(coverage, directory, exclude_patterns=[], root=None): - ### now, output. - - keys = coverage.keys() - info_dict = {} - for k in keys: - skip = False - for pattern in exclude_patterns: - if pattern.search(k): - logger.debug('SKIPPING %s -- matches exclusion pattern' % k) - skip = True - break - - if skip: - continue - - if k.endswith('figleaf.py'): - continue - - display_filename = k - if root: - if not k.startswith(root): - continue - display_filename = k[len(root):] - assert not display_filename.startswith("/") - assert display_filename.endswith(".py") - display_filename = display_filename[:-3] # trim .py - display_filename = display_filename.replace("/", ".") - - if not k.startswith("/"): - continue - - try: - pyfile = open(k) -# print 'opened', k - except IOError: - logger.warning('CANNOT OPEN: %s' % k) - continue - - try: - lines = figleaf.get_lines(pyfile) - except KeyboardInterrupt: - raise - except Exception, e: - pyfile.close() - logger.warning('ERROR: %s %s' % (k, str(e))) - continue - - # ok, got all the info. now annotate file ==> html. 
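`read_exclude_patterns` above compiles one regex per non-blank, non-comment line of the exclude file, and `report_as_html` skips any coverage key matching one of them. A minimal sketch of that filtering (taking a string instead of a filename, for illustration; the pattern and file names here are hypothetical):

```python
import re

def read_exclude_patterns(text):
    """Compile one regex per non-blank, non-comment line, mirroring the
    htmlizer's exclude-file format (string input is an illustration)."""
    patterns = []
    for line in text.splitlines():
        line = line.rstrip()
        if line and not line.startswith('#'):
            patterns.append(re.compile(line))
    return patterns

patterns = read_exclude_patterns("# skip generated tests\n.*/test_.*\\.py\n")
files = ['/src/foo.py', '/src/test_foo.py']
kept = [f for f in files if not any(p.search(f) for p in patterns)]
```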
-		covered = coverage[k]
-		n_covered = n_lines = 0
-
-		pyfile = open(k)
-		output = []
-		for i, line in enumerate(pyfile):
-			is_covered = False
-			is_line = False
-
-			i += 1
-
-			if i in covered:
-				is_covered = True
-
-				n_covered += 1
-				n_lines += 1
-			elif i in lines:
-				is_line = True
-
-				n_lines += 1
-
-			color = 'black'
-			if is_covered:
-				color = 'green'
-			elif is_line:
-				color = 'red'
-
-			line = escape_html(line.rstrip())
-			output.append('<font color="%s">%4d. %s</font>' % (color, i, line.rstrip()))
-
-		try:
-			pcnt = n_covered * 100. / n_lines
-		except ZeroDivisionError:
-			pcnt = 100
-		info_dict[k] = (n_lines, n_covered, pcnt, display_filename)
-
-		html_outfile = make_html_filename(display_filename)
-		html_outfp = open(os.path.join(directory, html_outfile), 'w')
-		html_outfp.write('source file: %s<br>\n' % (k,))
-		html_outfp.write('file stats: %d lines, %d executed: %.1f%% covered\n' % (n_lines, n_covered, pcnt))
-
-		html_outfp.write('<pre>\n')
-		html_outfp.write("\n".join(output))
-		html_outfp.close()
-
-	### print a summary, too.
-
-	info_dict_items = info_dict.items()
-
-	def sort_by_pcnt(a, b):
-		a = a[1][2]
-		b = b[1][2]
-
-		return -cmp(a,b)
-	info_dict_items.sort(sort_by_pcnt)
-
-	summary_lines = sum([ v[0] for (k, v) in info_dict_items])
-	summary_cover = sum([ v[1] for (k, v) in info_dict_items])
-
-	summary_pcnt = 100
-	if summary_lines:
-		summary_pcnt = float(summary_cover) * 100. / float(summary_lines)
-
-
-	pcnts = [ float(v[1]) * 100. / float(v[0]) for (k, v) in info_dict_items if v[0] ]
-	pcnt_90 = [ x for x in pcnts if x >= 90 ]
-	pcnt_75 = [ x for x in pcnts if x >= 75 ]
-	pcnt_50 = [ x for x in pcnts if x >= 50 ]
-
-        stats_fp = open('%s/stats.out' % (directory,), 'w')
-        stats_fp.write("total files: %d\n" % len(pcnts))
-        stats_fp.write("total source lines: %d\n" % summary_lines)
-        stats_fp.write("total covered lines: %d\n" % summary_cover)
-        stats_fp.write("total coverage percentage: %.1f\n" % summary_pcnt)
-        stats_fp.close()
-
-        ## index.html
-	index_fp = open('%s/index.html' % (directory,), 'w')
-        # summary info
-	index_fp.write('<title>figleaf code coverage report</title>\n')
-	index_fp.write('<h3>Summary</h3> %d files total: %d files > '
-		       '90%%, %d files > 75%%, %d files > 50%%<p>'
-		       % (len(pcnts), len(pcnt_90),
-			  len(pcnt_75), len(pcnt_50)))
-	# sorted by percentage covered
-	index_fp.write('<h3>Sorted by Coverage Percentage</h3>\n')
-	index_fp.write('<table border=1><tr><th>Filename</th><th># lines</th>'
-		       '<th># covered</th><th>% covered</th>'
-		       '</tr>\n')
-	index_fp.write('<tr><td>totals:</td><td>%d</td>'
-		       '<td>%d</td><td>%.1f%%</td>'
-		       '</tr>\n'
-		       % (summary_lines, summary_cover, summary_pcnt,))
-
-	for filename, stuff in info_dict_items:
-		(n_lines, n_covered, percent_covered, display_filename) = stuff
-		html_outfile = make_html_filename(display_filename)
-
-		index_fp.write('<tr><td><a href="%s">%s</a></td><td>%d</td><td>%d</td><td>%.1f</td>'
-			       '</tr>\n'
-			       % (html_outfile, display_filename, n_lines,
-				  n_covered, percent_covered,))
-
-	index_fp.write('</table>\n')
-
-	# sorted by module name
-	index_fp.write('<h3>Sorted by Module Name (alphabetical)</h3>\n')
-	info_dict_items.sort()
-	index_fp.write('<table border=1><tr><th>Filename</th><th># lines</th>'
-		       '<th># covered</th><th>% covered</th>'
-		       '</tr>\n')
-
-	for filename, stuff in info_dict_items:
-		(n_lines, n_covered, percent_covered, display_filename) = stuff
-		html_outfile = make_html_filename(display_filename)
-
-		index_fp.write('<tr><td><a href="%s">%s</a></td><td>%d</td><td>%d</td><td>%.1f</td>'
-			       '</tr>\n'
-			       % (html_outfile, display_filename, n_lines,
-				  n_covered, percent_covered,))
-
-	index_fp.write('</table>\n')
-
-	index_fp.close()
-
-	logger.info('reported on %d file(s) total\n' % len(info_dict))
-	return len(info_dict)
-
-def prepare_reportdir(dirname='html'):
-	try:
-		os.mkdir(dirname)
-	except OSError:			# already exists
-		pass
-
-def make_html_filename(orig):
-	return orig + ".html"
-
-def escape_html(s):
-	s = s.replace("&", "&amp;")
-	s = s.replace("<", "&lt;")
-	s = s.replace(">", "&gt;")
-	s = s.replace('"', "&quot;")
-	return s
-
-def main():
-	###
-
-	option_parser = OptionParser()
-
-	option_parser.add_option('-x', '--exclude-patterns', action="store",
-				 dest="exclude_patterns_file",
-				 help="file containing regexp patterns to exclude")
-
-	option_parser.add_option('-d', '--output-directory', action='store',
-				 dest="output_dir",
-				 default = "html",
-				 help="directory for HTML output")
-	option_parser.add_option('-r', '--root', action="store",
-				 dest="root",
-				 default=None,
-				 help="only pay attention to modules under this directory")
-
-	option_parser.add_option('-q', '--quiet', action='store_true', dest='quiet', help='Suppress all but error messages')
-
-	(options, args) = option_parser.parse_args()
-
-	if options.quiet:
-		logging.disable(logging.DEBUG)
-
-	if options.root:
-		options.root = os.path.abspath(options.root)
-		if options.root[-1] != "/":
-			options.root = options.root + "/"
-
-	### load
-
-	if not args:
-		args = ['.figleaf']
-
-	coverage = {}
-	for filename in args:
-		logger.debug("loading coverage info from '%s'\n" % (filename,))
-		d = figleaf.read_coverage(filename)
-		coverage = figleaf.combine_coverage(coverage, d)
-
-	if not coverage:
-		logger.warning('EXITING -- no coverage info!\n')
-		sys.exit(-1)
-
-	### make directory
-	prepare_reportdir(options.output_dir)
-	report_as_html(coverage, options.output_dir,
-		       read_exclude_patterns(options.exclude_patterns_file),
-		       options.root)
-
diff --git a/src/foolscap/misc/testutils/trial_figleaf.py b/src/foolscap/misc/testutils/trial_figleaf.py deleted file mode 100644 index 7bc96b0d..00000000 ---
a/src/foolscap/misc/testutils/trial_figleaf.py +++ /dev/null @@ -1,125 +0,0 @@ - -"""A Trial IReporter plugin that gathers figleaf code-coverage information. - -Once this plugin is installed, trial can be invoked with one of two new ---reporter options: - - trial --reporter=verbose-figleaf ARGS - trial --reporter-bwverbose-figleaf ARGS - -Once such a test run has finished, there will be a .figleaf file in the -top-level directory. This file can be turned into a directory of .html files -(with index.html as the starting point) by running: - - figleaf2html -d OUTPUTDIR [-x EXCLUDEFILE] - -Figleaf thinks of everyting in terms of absolute filenames rather than -modules. The EXCLUDEFILE may be necessary to keep it from providing reports -on non-Code-Under-Test files that live in unusual locations. In particular, -if you use extra PYTHONPATH arguments to point at some alternate version of -an upstream library (like Twisted), or if something like debian's -python-support puts symlinks to .py files in sys.path but not the .py files -themselves, figleaf will present coverage information on both of these. The -EXCLUDEFILE option might help to inhibit these. - -Other figleaf problems: - - the annotated code files are written to BASENAME(file).html, which results - in collisions between similarly-named source files. - - The line-wise coverage information isn't quite right. Blank lines are - counted as unreached code, lambdas aren't quite right, and some multiline - comments (docstrings?) aren't quite right. - -""" - -from twisted.trial.reporter import TreeReporter, VerboseTextReporter - -# These plugins are registered via twisted/plugins/figleaf_trial.py . See -# the notes there for an explanation of how that works. - - - -# Reporters don't really get told about the suite starting and stopping. - -# The Reporter class is imported before the test classes are. - -# The test classes are imported before the Reporter is created. 
To get -# control earlier than that requires modifying twisted/scripts/trial.py . - -# Then Reporter.__init__ is called. - -# Then tests run, calling things like write() and addSuccess(). Each test is -# framed by a startTest/stopTest call. - -# Then the results are emitted, calling things like printErrors, -# printSummary, and wasSuccessful. - -# So for code-coverage (not including import), start in __init__ and finish -# in printSummary. To include import, we have to start in our own import and -# finish in printSummary. - -import figleaf -figleaf.start() - - -class FigleafReporter(TreeReporter): - def __init__(self, *args, **kwargs): - TreeReporter.__init__(self, *args, **kwargs) - - def printSummary(self): - figleaf.stop() - figleaf.write_coverage(".figleaf") - print "Figleaf results written to .figleaf" - return TreeReporter.printSummary(self) - -class FigleafTextReporter(VerboseTextReporter): - def __init__(self, *args, **kwargs): - VerboseTextReporter.__init__(self, *args, **kwargs) - - def printSummary(self): - figleaf.stop() - figleaf.write_coverage(".figleaf") - print "Figleaf results written to .figleaf" - return VerboseTextReporter.printSummary(self) - -class not_FigleafReporter(object): - # this class, used as a reporter on a fully-passing test suite, doesn't - # trigger exceptions. So it is a guide to what methods are invoked on a - # Reporter. 
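`not_FigleafReporter` delegates every reporter method to a real `TreeReporter` while printing when each is hit, as a way of discovering which methods trial actually invokes. The same discover-the-invoked-surface trick can be sketched generically with `__getattr__` (a hypothetical `MethodSpy`, not part of figleaf or Twisted; it records call names instead of printing):

```python
class MethodSpy:
    """Wrap any object and record which methods get called on it --
    a generic variant of the explicit delegation above."""

    def __init__(self, target):
        self._target = target
        self.calls = []          # method names, in invocation order

    def __getattr__(self, name):
        # Only reached for attributes not set in __init__, so _target
        # and calls are looked up normally without recursion.
        attr = getattr(self._target, name)
        if callable(attr):
            def wrapper(*args, **kwargs):
                self.calls.append(name)
                return attr(*args, **kwargs)
            return wrapper
        return attr

spy = MethodSpy([])
spy.append(1)
spy.append(2)
```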
- def __init__(self, *args, **kwargs): - print "FIGLEAF HERE" - self.r = TreeReporter(*args, **kwargs) - self.shouldStop = self.r.shouldStop - self.separator = self.r.separator - self.testsRun = self.r.testsRun - self._starting2 = False - - def write(self, *args): - if not self._starting2: - self._starting2 = True - print "FIRST WRITE" - return self.r.write(*args) - - def startTest(self, *args, **kwargs): - return self.r.startTest(*args, **kwargs) - - def stopTest(self, *args, **kwargs): - return self.r.stopTest(*args, **kwargs) - - def addSuccess(self, *args, **kwargs): - return self.r.addSuccess(*args, **kwargs) - - def printErrors(self, *args, **kwargs): - return self.r.printErrors(*args, **kwargs) - - def writeln(self, *args, **kwargs): - return self.r.writeln(*args, **kwargs) - - def printSummary(self, *args, **kwargs): - print "PRINT SUMMARY" - return self.r.printSummary(*args, **kwargs) - - def wasSuccessful(self, *args, **kwargs): - return self.r.wasSuccessful(*args, **kwargs) - diff --git a/src/foolscap/misc/testutils/twisted/plugins/figleaf_trial_plugin.py b/src/foolscap/misc/testutils/twisted/plugins/figleaf_trial_plugin.py deleted file mode 100644 index ee94984e..00000000 --- a/src/foolscap/misc/testutils/twisted/plugins/figleaf_trial_plugin.py +++ /dev/null @@ -1,47 +0,0 @@ - -from zope.interface import implements -from twisted.trial.itrial import IReporter -from twisted.plugin import IPlugin - -# register a plugin that can create our FigleafReporter. The reporter itself -# lives in a separate place - -# note that this .py file is *not* in a package: there is no __init__.py in -# our parent directory. This is important, because otherwise ours would fight -# with Twisted's. When trial looks for plugins, it merely executes all the -# *.py files it finds in and twisted/plugins/ subdirectories of anything on -# sys.path . 
The namespace that results from executing these .py files is -# examined for instances which provide both IPlugin and the target interface -# (in this case, trial is looking for IReporter instances). Each such -# instance tells the application how to create a plugin by naming the module -# and class that should be instantiated. - -# When installing our package via setup.py, arrange for this file to be -# installed to the system-wide twisted/plugins/ directory. - -class _Reporter(object): - implements(IPlugin, IReporter) - - def __init__(self, name, module, description, longOpt, shortOpt, klass): - self.name = name - self.module = module - self.description = description - self.longOpt = longOpt - self.shortOpt = shortOpt - self.klass = klass - - -fig = _Reporter("Figleaf Code-Coverage Reporter", - "trial_figleaf", - description="verbose color output (with figleaf coverage)", - longOpt="verbose-figleaf", - shortOpt="f", - klass="FigleafReporter") - -bwfig = _Reporter("Figleaf Code-Coverage Reporter (colorless)", - "trial_figleaf", - description="Colorless verbose output (with figleaf coverage)", - longOpt="bwverbose-figleaf", - shortOpt=None, - klass="FigleafTextReporter") - diff --git a/src/foolscap/setup.py b/src/foolscap/setup.py deleted file mode 100644 index e734e946..00000000 --- a/src/foolscap/setup.py +++ /dev/null @@ -1,23 +0,0 @@ -#!/usr/bin/python - -import sys -from distutils.core import setup -from foolscap import __version__ - -if __name__ == '__main__': - setup( - name="foolscap", - version=__version__, - description="Foolscap contains an RPC protocol for Twisted.", - author="Brian Warner", - author_email="warner@twistedmatrix.com", - url="http://twistedmatrix.com/trac/wiki/FoolsCap", - license="MIT", - long_description="""\ -Foolscap (aka newpb) is a new version of Twisted's native RPC protocol, known -as 'Perspective Broker'. This allows an object in one process to be used by -code in a distant process. 
This module provides data marshaling, a remote -object reference system, and a capability-based security model. -""", - packages=["foolscap", "foolscap/slicers", "foolscap/test"], - )