Review comments on cinder/volume/flows/manager/create_volume.py (patch set 1, line 799, commit 6d9301e48d5ce4437ca5bc9e9b1140af366f047b)

Context, lines 796-802:

                    NotifyVolumeActionTask(db, "create.start"),
                    CreateVolumeFromSpecTask(db, driver),
                    CreateVolumeOnFinishTask(db, "create.end"),
                    flows.UnlockResourcesTask())

    # Now load (but do not run) the flow using the provided initial data.
    return taskflow.engines.load(volume_flow, store=create_what)

Accela Zhao (accelazh), 2015-08-06 12:08:
The code between lock and unlock creates a risky window. With active-active HA:

c-vol1 acquires lock => c-vol1 enters the window and writes data => c-vol1 is detected to be down (possibly a network issue) => c-vol2 takes over the lock => c-vol1 is still in the window, writing data => c-vol2 writes too => the data is corrupted.

John Griffith (john-griffith), 2015-08-11 13:52:
?? Not following your comments at all here. The fact is the ONLY thing HA is giving you here is management/API access to the backend device; that's it. It's completely removed from the data path.

For data-path HA we have things like multi-path iSCSI, and perhaps at some point we'll get all sorts of crazy and do multi-c-vol with clustered LVM and multi-path iSCSI.

Accela Zhao (accelazh), 2015-08-25 09:45:
Thanks, John. My concern is that if we implement active-active HA c-vol with locking like: lock -> do something -> unlock, it is possible that node 1 is doing something but is detected to be dead (e.g. a network issue). Then node 2 takes the lock and starts to do something too. Both node 1 and node 2 are doing things at the same time, which may cause a race condition.

That's what the LockResourceTask and UnlockResourceTask in volume_flow do.

Gorka Eguileor (Gorka), 2015-08-26 11:26:
That's why you need fencing, and you should have a timeout for the locks that is greater than the time it takes for the STONITH to take place.

And you could even implement auto-fencing in the nodes, because you can detect when you have lost connection to the DLM.

Anyway, this patch doesn't change anything from current behavior, and since we have decided to remove all locks from the manager this may not even be relevant anymore.
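The race Accela describes and the fencing Gorka calls for can be sketched with fencing tokens: the lock manager hands out a monotonically increasing token on each acquire, and the storage layer rejects writes carrying a token older than the newest one it has seen, so a holder that was only presumed dead cannot corrupt data when it wakes up. This is a minimal in-memory sketch; `FencedLock` and `Store` are illustrative stand-ins invented here, not Cinder, taskflow, or DLM APIs.

```python
class FencedLock:
    """Lock manager that issues a new fencing token on every acquire."""
    def __init__(self):
        self.token = 0

    def acquire(self):
        self.token += 1          # each takeover gets a strictly newer token
        return self.token


class Store:
    """Storage layer that fences off stale lock holders."""
    def __init__(self):
        self.max_token = 0
        self.data = None

    def write(self, token, value):
        if token < self.max_token:
            # a newer holder has already written: reject the stale write
            raise RuntimeError("stale holder fenced off")
        self.max_token = token
        self.data = value


lock, store = FencedLock(), Store()

t1 = lock.acquire()               # c-vol1 acquires the lock (token 1)
store.write(t1, "from-c-vol1")    # c-vol1 writes inside its window
t2 = lock.acquire()               # c-vol1 presumed dead; c-vol2 takes over (token 2)
store.write(t2, "from-c-vol2")    # c-vol2 writes with the newer token

try:
    store.write(t1, "late write")  # c-vol1 was only slow, not dead
except RuntimeError:
    pass                           # the stale write is rejected, no corruption

assert store.data == "from-c-vol2"
```

The plain "lock -> do something -> unlock" scheme has no equivalent of the token check, which is why a lock timeout alone is not enough: the storage side must also be able to tell an old holder from the current one (or the old holder must be fenced with STONITH before its lock can be handed over).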
