From: Brian Warner <warner@allmydata.com>
Date: Sat, 10 Jan 2009 02:52:22 +0000 (-0700)
Subject: storage.py : replace 4294967295 with 2**32-1: Python does constant folding; I measure...
X-Git-Tag: allmydata-tahoe-1.3.0~216
X-Git-Url: https://git.rkrishnan.org/%5B/%5D%20/uri/vdrive/flags/?a=commitdiff_plain;h=167742c2b3d477a66bed8b7e852308bda88ffca1;p=tahoe-lafs%2Ftahoe-lafs.git

storage.py : replace 4294967295 with 2**32-1: Python does constant folding; I measured this expression as taking 50ns, versus the 400ns for the call to min(), or the 9us required for the 'assert not os.path.exists' syscall
---
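A minimal sketch (not part of the patch; the function name is illustrative) of the constant-folding claim: CPython's peephole optimizer evaluates 2**32-1 at compile time, which the dis module makes visible:

    import dis

    def saturated_length(max_size):
        # 2**32-1 is folded to the constant 4294967295 at compile time;
        # only the min() call remains as runtime work.
        return min(2**32-1, max_size)

    dis.dis(saturated_length)
    # The disassembly shows LOAD_CONST ... (4294967295) with no
    # exponentiation or subtraction bytecode, so the rewritten literal
    # costs the same ~50ns as the hard-coded number while reading more
    # clearly.
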

diff --git a/src/allmydata/storage.py b/src/allmydata/storage.py
index f1ce2ece..0a15a123 100644
--- a/src/allmydata/storage.py
+++ b/src/allmydata/storage.py
@@ -112,15 +112,16 @@ class ShareFile:
             assert not os.path.exists(self.home)
             fileutil.make_dirs(os.path.dirname(self.home))
             f = open(self.home, 'wb')
-            # The second field -- share data length -- is no longer used as of Tahoe v1.3.0, but
-            # we continue to write it in there in case someone downgrades a storage server from
-            # >= Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one server to another,
-            # etc.  We do saturation -- a share data length larger than what can fit into the
-            # field is marked as the largest length that can fit into the field.  That way, even
-            # if this does happen, the old < v1.3.0 server will still allow clients to read the
-            # first part of the share. The largest size that will fit in this 4-byte field is
-            # 2**32-1, or 4294967295.
-            f.write(struct.pack(">LLL", 1, min(4294967295, max_size), 0))
+            # The second field -- the four-byte share data length -- is no
+            # longer used as of Tahoe v1.3.0, but we continue to write it in
+            # there in case someone downgrades a storage server from >=
+            # Tahoe-1.3.0 to < Tahoe-1.3.0, or moves a share file from one
+            # server to another, etc. We do saturation -- a share data length
+            # larger than 2**32-1 (what can fit into the field) is marked as
+            # the largest length that can fit into the field. That way, even
+            # if this does happen, the old < v1.3.0 server will still allow
+            # clients to read the first part of the share.
+            f.write(struct.pack(">LLL", 1, min(2**32-1, max_size), 0))
             f.close()
             self._lease_offset = max_size + 0x0c
             self._num_leases = 0
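
For reference, a standalone sketch of the saturating header write described in the new comment (assuming, per the surrounding code, that the three big-endian ">LLL" fields are the version, the share data length, and the lease count; pack_share_header is an illustrative name, not Tahoe's):

    import struct

    def pack_share_header(max_size):
        # Version 1, share data length saturated at 2**32-1 (the largest
        # value that fits in a four-byte unsigned field), zero leases.
        return struct.pack(">LLL", 1, min(2**32-1, max_size), 0)

    # A share larger than 4 GiB saturates the length field; a small one
    # round-trips unchanged.
    assert struct.unpack(">LLL", pack_share_header(2**40))[1] == 2**32-1
    assert struct.unpack(">LLL", pack_share_header(1000))[1] == 1000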