Gerrit review comments on neutron/agent/securitygroups_rpc.py
(patch set 1: commit 1ad3c2135d1dcaea57c0a7f0110915b547021c28;
patch set 2: commit e17866d7a7d8b220da48e63814da84bd044bff6d)

Thread on line 144, patch set 1:

    def _apply_port_filter(self, device_ids, update_filter=False):
        step = common_constants.AGENT_RES_PROCESSING_STEP
        devices_consistent = threading.Event()
        devices_consistent.clear()
        self._devices_consistent = devices_consistent

Oleg Bondarev (obondarev) — 2020-07-29 06:27

  why need redefine?

Charles Farquhar (cfarquhar) — 2020-07-29 14:53

  Hi Oleg, and thanks for the feedback.

  My understanding is that we can have multiple threads in
  _apply_port_filter() at the same time. A previous iteration of this fix
  used a single instance-scoped global lock, but I believe that would lead
  to another thread releasing the lock early and thus just move the race
  condition to a new location.

  My approach to solving this was to maintain a local lock for each thread
  that enters _apply_port_filter() (line 144) and then track the most
  recent thread's lock in the instance-scoped variable (line 64).

  We end up with what is effectively a queue of locks, where
  _security_group_updated() associates itself with the most recently
  enqueued lock and blocks until it is released. Locks are then dequeued
  when:

  a) a new lock is created and the current lock had nothing waiting on it, or

  b) all threads waiting for the lock return from _security_group_updated().

  This allows threads in _security_group_updated() to block for the
  shortest possible amount of time. They only need to block until the most
  recent call to _apply_port_filter() returns, instead of waiting until all
  threads have returned.

  Please let me know if you see something that I have misunderstood or if
  this needs further clarification.

Oleg Bondarev (obondarev) — 2020-07-29 15:57

  > We end up with what is effectively a queue of locks where
  > _security_group_updated() associates itself with the most recently
  > enqueued lock and blocks until it is released.

  But what if another thread starts a new _apply_port_filter() while
  _security_group_updated() is waiting for the event (from the previous
  _apply_port_filter())? Do we now need to wait for this new
  devices_consistent event?

Oleg Bondarev (obondarev) — 2020-07-29 16:04

  I'm thinking about synchronizing the _apply_port_filter() and
  _security_group_updated() methods with read and write locks from
  lockutils.ReaderWriterLock(). For example, it's used in
  https://github.com/openstack/neutron/blob/master/neutron/agent/dhcp/agent.py.
  Did you consider that?

Charles Farquhar (cfarquhar) — 2020-07-30 14:52

  I don't think we will need to wait. My intent was to have
  _security_group_updated() wait on the most recent lock (stored in
  _devices_consistent) at the time it was called.

  If another thread starts a new _apply_port_filter() before the previous
  _apply_port_filter() has completed, the _devices_consistent variable will
  be updated, but that should not affect any threads that were already
  waiting in _security_group_updated(). Those waiting threads will still
  have their local reference to the threading.Event() instance that was
  most recent at the time the thread started waiting.

  Hopefully the output below demonstrates what I'm trying to describe.
  latest_lock represents the instance-scoped _devices_consistent variable
  on L64, and wait_on_this_lock represents the instance we're waiting on
  in L210.

  Note that wait_on_this_lock still points to the original address
  (0x7fc2d89cb130) even after latest_lock is replaced with a new Event()
  instance (0x7fc2d8689850):

  --- 8< ---
  # python

  >>> import threading
  >>> latest_lock = threading.Event()
  >>> hex(id(latest_lock))
  '0x7fc2d89cb130'
  >>> wait_on_this_lock = latest_lock
  >>> hex(id(wait_on_this_lock))
  '0x7fc2d89cb130'
  >>> latest_lock = threading.Event()
  >>> hex(id(latest_lock))
  '0x7fc2d8689850'
  >>> hex(id(wait_on_this_lock))
  '0x7fc2d89cb130'
  >>> quit()
  --- 8< ---

Charles Farquhar (cfarquhar) — 2020-07-30 14:52

  My earliest attempts at a fix were with lockutils, but at that time I
  was unable to make it fit the problem. I noticed the DHCP agent and some
  other components using threading.Event(), which seemed like a better
  fit, so I took that route instead.

  If there is a strong preference for lockutils, please let me know and I
  can take another look now that I have a better understanding of how the
  bug can be fixed.

Oleg Bondarev (obondarev) — 2020-07-30 15:45

  Yep, I got your idea, thanks. Is _apply_port_filter() for the port
  always called first in the failure scenario? Isn't it possible that for
  some port _security_group_updated() is called first but the race still
  happens because of threads switching (after we did wait())?

Charles Farquhar (cfarquhar) — 2020-07-30 17:00

  > Is _apply_port_filter() for the port always called first in failure
  > scenario?

  In all of the failure scenarios I've traced call-by-call (measured in
  tens), _apply_port_filter() has always been called first.

  More specifically, the failure only occurs when _security_group_updated()
  calls self.firewall.ports.values() in L211 AFTER _apply_port_filter()
  calls security_group_info_for_devices() in L153 to retrieve the remote
  secgroup membership list but BEFORE it returns from prepare_port_filter
  on line L175. When this happens, _security_group_updated() doesn't find
  a relevant port to apply the secgroup membership update to in L211-213,
  so it returns without doing any further work.

  To put it more succinctly: port_update steps #11-#22 in the diagram [1]
  are the critical section. If security_groups_member_updated step #3
  occurs during that critical section, the membership update is lost.

  > Isn't it possible that for some port _security_group_updated() is
  > called first but the race still happens because of threads switching
  > (after we did wait())?

  I don't believe so. _security_group_updated() is called with a list of
  secgroup ids. It then consults self.firewall.ports.values() to see if
  the hypervisor has any ports that reference the secgroup ids. If we
  assume a given secgroup membership update is, or will soon be, relevant
  to a port on the hypervisor, there are two possible outcomes:

  a) The relevant port is NOT yet present in self.firewall.ports.values().
  This is normal and happens when we have not yet received a port_update
  event. When it does arrive, we will get the secgroup details when
  _apply_port_filter() calls security_group_info_for_devices() and then
  configure ipsets appropriately.

  b) The relevant port IS already present in self.firewall.ports.values().
  This is normal and happens when the port_update event was previously
  processed. The port is added to self.devices_to_refilter and will be
  processed in step #8 [1] on the next CommonAgentLoop.daemon_loop()
  iteration.

  This debug output includes both scenarios and may provide some
  additional clarity:

  --- 8< ---

  # Agent restart
  Jul 30 14:08:54 bug/1887405: in SecurityGroupAgentRpc.init(). Created initial instance-scoped lock 0x7f1b08849780 in an unlocked state
  Jul 30 14:08:54 bug/1887405: in _apply_port_filter for {'tape1d201bf-ad', 'tap9ae9ab15-cf', 'tap9d51114e-20', 'tap014339c3-27'}. Created lock 0x7f1b087c7c18. Replaced 0x7f1b08849780 as most recent.
  Jul 30 14:09:04 bug/1887405: in _apply_port_filter for {'tape1d201bf-ad', 'tap9ae9ab15-cf', 'tap9d51114e-20', 'tap014339c3-27'}. Releasing lock 0x7f1b087c7c18.

  # _security_group_updated() called before _apply_port_filter()
  Jul 30 14:09:08 bug/1887405: in _security_group_updated for {'111face9-83ed-4786-8df8-b4510b08e4a1'}. 0x7f1b087c7c18 is unlocked. Proceed without wait
  Jul 30 14:09:08 bug/1887405: in _security_group_updated for {'097c5fda-f53f-46ca-ae7c-14b3c9447954'}. 0x7f1b087c7c18 is unlocked. Proceed without wait
  Jul 30 14:09:15 bug/1887405: in _apply_port_filter for {'tap3833ec51-75', 'tap9c3f7763-40'}. Created lock 0x7f1b087c7940. Replaced 0x7f1b087c7c18 as most recent.
  Jul 30 14:09:17 bug/1887405: in _security_group_updated for {'133cc1f9-5d18-4a2e-8a7f-f1cd50bb3ba3'}. 0x7f1b087c7940 is locked. Let's wait.
  Jul 30 14:09:26 bug/1887405: in _apply_port_filter for {'tap3833ec51-75', 'tap9c3f7763-40'}. Releasing lock 0x7f1b087c7940.
  Jul 30 14:09:26 bug/1887405: in _security_group_updated for {'133cc1f9-5d18-4a2e-8a7f-f1cd50bb3ba3'}. Finished waiting on 0x7f1b087c7940
  Jul 30 14:09:28 bug/1887405: in _apply_port_filter for {'tap9c3f7763-40'}. Created lock 0x7f1b0873fac8. Replaced 0x7f1b087c7940 as most recent.
  Jul 30 14:09:38 bug/1887405: in _apply_port_filter for {'tap9c3f7763-40'}. Releasing lock 0x7f1b0873fac8.

  # _security_group_updated() called after _apply_port_filter()
  Jul 30 16:53:58 bug/1887405: in _security_group_updated for {'133cc1f9-5d18-4a2e-8a7f-f1cd50bb3ba3'}. 0x7f1b0873fac8 is unlocked. Proceed without wait
  Jul 30 16:54:00 bug/1887405: in _apply_port_filter for {'tap9c3f7763-40'}. Created lock 0x7f1b087c70b8. Replaced 0x7f1b0873fac8 as most recent.
  Jul 30 16:54:10 bug/1887405: in _apply_port_filter for {'tap9c3f7763-40'}. Releasing lock 0x7f1b087c70b8.

  --- 8< ---

  [1] https://user-images.githubusercontent.com/1253665/87317744-0a75b180-c4ed-11ea-9bad-085019c0f954.png

Oleg Bondarev (obondarev) — 2020-07-31 06:03

  Charles, thanks for such a thorough analysis! I think this just deserves
  a brief comment before #144, similar to the one at #208-209.

Thread on line 145, patch set 1:

Oleg Bondarev (obondarev) — 2020-07-29 15:57

  nit: no need, as the event flag is False after init

Charles Farquhar (cfarquhar) — 2020-07-30 14:52

  Good catch. I'll fix this.

Thread on line 181, patch set 2:

                    self.firewall.prepare_port_filter(device)
            self.firewall.process_trusted_ports(trusted_devices)

        devices_consistent.set()

Oleg Bondarev (obondarev) — 2020-08-04 07:17

  Consider placing it in a 'finally' block to make sure set() is always
  called. An alternative is to specify some timeout for wait() at #213.

Charles Farquhar (cfarquhar) — 2020-08-06 16:02

  Thanks for your patience and coaching on this patch.

  I spent a considerable amount of time trying to get this working with
  threading.Event without success. It seems like there was some
  interference between this class and signal handlers in the OVS agent
  code, which resulted in the process not terminating properly.
  Surprisingly, try..finally did not work to force the event state to
  change, so the waiting threads hung indefinitely (or at least beyond the
  timeout configured in the test case).

  Configuring a timeout on the waiter's side did work, but it had to be
  very short (<5s), as the existing test case was already consuming the
  majority of the 60 second default. This approach seemed hacky and,
  without a bunch of sampling from production environments, I was not able
  to come up with a timeout value that wasn't completely arbitrary.

  The threading.Condition class does not seem to cause the same problem
  with signal handlers. oslo_concurrency's ReaderWriterLock uses it under
  the hood, so I re-implemented using that, and FirewallMigrationTestCase
  now passes. I'll push the new patch set shortly.
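Editor's note: the per-call Event handoff discussed in this thread can be sketched in isolation as below. This is a minimal illustration of the pattern, not the actual neutron code; the class and method names (Agent, apply_port_filter, security_group_updated) and the sleep standing in for the firewall work are hypothetical.

```python
import threading
import time


class Agent:
    def __init__(self):
        # Start with an already-set Event so updaters that arrive before
        # any filter work never block.
        self._devices_consistent = threading.Event()
        self._devices_consistent.set()

    def apply_port_filter(self, device_ids):
        # Each call gets its own Event (flag starts False) and publishes it
        # as the most recent one on the instance.
        devices_consistent = threading.Event()
        self._devices_consistent = devices_consistent
        try:
            time.sleep(0.2)  # stand-in for the real firewall programming
        finally:
            devices_consistent.set()  # always release waiters

    def security_group_updated(self, sg_ids):
        # Snapshot the most recent Event. A later apply_port_filter() may
        # replace self._devices_consistent, but this local reference still
        # points at the Event that was current when we started waiting.
        wait_on = self._devices_consistent
        wait_on.wait()
        return sorted(sg_ids)
```

Because the updater waits only on its snapshot, it blocks until the most recent in-flight apply_port_filter() finishes, not until every thread has drained, which is the "queue of locks" behavior described above.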
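Editor's note: the alternative Oleg suggests, and the direction the final patch set took, is a reader/writer lock; oslo_concurrency's ReaderWriterLock is built on threading.Condition. A stdlib-only sketch of that primitive, under the assumption of no reader/writer fairness policy (the real oslo_concurrency/fasteners implementation differs), might look like:

```python
import threading


class SimpleRWLock:
    """Toy reader/writer lock on threading.Condition (illustrative only)."""

    def __init__(self):
        self._cond = threading.Condition()
        self._readers = 0
        self._writer = False

    def acquire_read(self):
        with self._cond:
            while self._writer:          # readers wait out any writer
                self._cond.wait()
            self._readers += 1

    def release_read(self):
        with self._cond:
            self._readers -= 1
            if self._readers == 0:
                self._cond.notify_all()  # last reader wakes a writer

    def acquire_write(self):
        with self._cond:
            while self._writer or self._readers:
                self._cond.wait()
            self._writer = True

    def release_write(self):
        with self._cond:
            self._writer = False
            self._cond.notify_all()
```

Mapped onto the discussion, _apply_port_filter() would hold the write side while reprogramming the firewall, and _security_group_updated() would take the read side, so updates cannot observe the firewall mid-rebuild.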
