)]}'
{"/COMMIT_MSG":[{"author":{"_account_id":6962,"name":"Kashyap Chamarthy","email":"kchamart@redhat.com","username":"kashyapc"},"change_message_id":"1725862bb29a50a2ecee9ce18a23926b35526c26","unresolved":true,"context_lines":[{"line_number":13,"context_line":"manager would juts call self._shutdown_instance()."},{"line_number":14,"context_line":"_shutdown_instance() does not result in a graceful instance power off."},{"line_number":15,"context_line":"At least for libvirt it\u0027ll eventually call destroy() on the domain."},{"line_number":16,"context_line":"destroy() does the equivalent of pulling the plug. What we want is to"},{"line_number":17,"context_line":"poweroff() the domain. To achieve that, this patch makes the compute"},{"line_number":18,"context_line":"manager call self._power_off_instance() before calling"},{"line_number":19,"context_line":"self._shutdown_instance(). self._power_off_instance() ends up calling"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"448cd0ed_3b8a6ba5","line":16,"updated":"2021-09-10 16:43:00.000000000","message":"Just a small note on libvirt\u0027s destroy() behavior, it doesn\u0027t quite pull the plug right away:\n\nWhen you destroy(), first libvirt sends a SIGTERM to the guest, and waits for 10 seconds to allow it (i.e. the QEMU process) to gracefully clean up the resources.  If the guest still \"refuses to die\", _then_ libvirt sends the more lethal SIGKILL, and waits for 30 seconds (it used to be 5 seconds) for it to take effect.  
Once SIGKILL takes effect, that\u0027s the point of no return, \"sorry, I\u0027ve pulled the plug.\"","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"1cf93001cecb983f802e6d1d4147a33af4b67986","unresolved":true,"context_lines":[{"line_number":13,"context_line":"manager would juts call self._shutdown_instance()."},{"line_number":14,"context_line":"_shutdown_instance() does not result in a graceful instance power off."},{"line_number":15,"context_line":"At least for libvirt it\u0027ll eventually call destroy() on the domain."},{"line_number":16,"context_line":"destroy() does the equivalent of pulling the plug. What we want is to"},{"line_number":17,"context_line":"poweroff() the domain. To achieve that, this patch makes the compute"},{"line_number":18,"context_line":"manager call self._power_off_instance() before calling"},{"line_number":19,"context_line":"self._shutdown_instance(). self._power_off_instance() ends up calling"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"684f26be_e4ac53eb","line":16,"in_reply_to":"448cd0ed_3b8a6ba5","updated":"2021-09-10 17:07:57.000000000","message":"Good points - so what does qemu do when it receives the TERM, and then the KILL? Does it attempt to ACPI shutdown the guest? 
Also, if this is what destroy() does, what\u0027s the point of poweroff()?","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"381cf6845df137059f300eef1ccc7886fa425ada","unresolved":true,"context_lines":[{"line_number":13,"context_line":"manager would juts call self._shutdown_instance()."},{"line_number":14,"context_line":"_shutdown_instance() does not result in a graceful instance power off."},{"line_number":15,"context_line":"At least for libvirt it\u0027ll eventually call destroy() on the domain."},{"line_number":16,"context_line":"destroy() does the equivalent of pulling the plug. What we want is to"},{"line_number":17,"context_line":"poweroff() the domain. To achieve that, this patch makes the compute"},{"line_number":18,"context_line":"manager call self._power_off_instance() before calling"},{"line_number":19,"context_line":"self._shutdown_instance(). self._power_off_instance() ends up calling"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"a14479ac_f5162ab0","line":16,"in_reply_to":"684f26be_e4ac53eb","updated":"2021-09-13 10:10:37.000000000","message":"We likely called poweroff to allow a longer timeout than the 5 seconds that destroy used to do.\n\nIf a process gets SIGKILL while it\u0027s processing SIGTERM, the kill typically interrupts the graceful shutdown and it gets killed forcefully. 
So it would be like doing a real shutdown and the power going out midway through.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":7166,"name":"Sylvain Bauza","email":"sbauza@redhat.com","username":"sbauza"},"change_message_id":"9dfdf468327e9739b75b28c1b787e6d3930e24bb","unresolved":true,"context_lines":[{"line_number":6,"context_line":""},{"line_number":7,"context_line":"Gracefully power off guest on instance delete"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"We could have a situation where an instance with volumes attached is"},{"line_number":10,"context_line":"being deleted, but the volumes need to outlive the instance and remain"},{"line_number":11,"context_line":"uncorrupted. In such a case, we want the guest to gracefully power off"},{"line_number":12,"context_line":"in order to flush all its IO to the volume(s). Previously, the compute"},{"line_number":13,"context_line":"manager would just call self._shutdown_instance()."},{"line_number":14,"context_line":"_shutdown_instance() does not result in a graceful instance power off."},{"line_number":15,"context_line":"At least for libvirt it\u0027ll eventually call destroy() on the domain."},{"line_number":16,"context_line":"destroy() does the equivalent of pulling the plug. What we want is to"},{"line_number":17,"context_line":"poweroff() the domain. To achieve that, this patch makes the compute"},{"line_number":18,"context_line":"manager call self._power_off_instance() before calling"},{"line_number":19,"context_line":"self._shutdown_instance(). 
self._power_off_instance() ends up calling"},{"line_number":20,"context_line":"poweroff() on the domain, and achieves the graceful shutdown that we"},{"line_number":21,"context_line":"want."},{"line_number":22,"context_line":""},{"line_number":23,"context_line":"This change has a couple of side-effects."},{"line_number":24,"context_line":""}],"source_content_type":"text/x-gerrit-commit-message","patch_set":3,"id":"4baaa835_8d1b90cd","line":21,"range":{"start_line":9,"start_character":0,"end_line":21,"end_character":5},"updated":"2021-09-29 16:19:56.000000000","message":"This is a behavioural change for operators. Previously, we were literally destroying the instance without waiting, while now we propose to wait until the guest gracefully shuts down.\n\nThere could be some impacts we could be missing. At least a blueprint, and a relnote here, are important for communicating this change.","commit_id":"dd4324f6735892c05b7b355bd3fe50684e06b610"}],"nova/compute/manager.py":[{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"1e42a737cfe83a757f0c9ba8dc76e0914518e02b","unresolved":true,"context_lines":[{"line_number":2833,"context_line":"    def _get_power_off_values(self, instance, clean_shutdown):"},{"line_number":2834,"context_line":"        \"\"\"Get the timing configuration for powering down this instance.\"\"\""},{"line_number":2835,"context_line":"        if clean_shutdown:"},{"line_number":2836,"context_line":"            timeout \u003d compute_utils.get_value_from_system_metadata(instance,"},{"line_number":2837,"context_line":"                          key\u003d\u0027image_os_shutdown_timeout\u0027, type\u003dint,"},{"line_number":2838,"context_line":"                          default\u003dCONF.shutdown_timeout)"},{"line_number":2839,"context_line":"            retry_interval \u003d CONF.compute.shutdown_retry_interval"},{"line_number":2840,"context_line":"        
else:"},{"line_number":2841,"context_line":"            timeout \u003d 0"}],"source_content_type":"text/x-python","patch_set":1,"id":"e0ca679c_a80020d9","line":2838,"range":{"start_line":2836,"start_character":6,"end_line":2838,"end_character":56},"updated":"2021-09-10 16:57:07.000000000","message":"This is an illegal image extra spec.\nIt is not included in the image properties ovo, so it has not been legal to serialise or use in nova for many years.\nThat is why this is being looked up from the system metadata directly and converted to an int.\n\nCan you add a TODO to either remove this or add it to the ovo definition?\nhttps://github.com/openstack/nova/blob/e276184892250922c96c38415eabbaf7507f7e17/nova/objects/image_meta.py#L158\n\nWe will want to backport this change so we can\u0027t make that change here, but a TODO would be nice.\n\nLooking at code search, this actually appears to be the only use of compute_utils.get_value_from_system_metadata in the code base, so we might also be able to remove that entirely.\n\nhttps://codesearch.opendev.org/?q\u003dget_value_from_system_metadata\u0026i\u003dnope\u0026literal\u003dnope\u0026files\u003d\u0026excludeFiles\u003d\u0026repos\u003d\n\nThis is a relatively nice low-hanging-fruit bug for someone to address later.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"0048328ca1bc2ab4f5a6de533fbfdb54a91a0d7b","unresolved":true,"context_lines":[{"line_number":2833,"context_line":"    def _get_power_off_values(self, instance, clean_shutdown):"},{"line_number":2834,"context_line":"        \"\"\"Get the timing configuration for powering down this instance.\"\"\""},{"line_number":2835,"context_line":"        if clean_shutdown:"},{"line_number":2836,"context_line":"            timeout \u003d compute_utils.get_value_from_system_metadata(instance,"},{"line_number":2837,"context_line":"                          
key\u003d\u0027image_os_shutdown_timeout\u0027, type\u003dint,"},{"line_number":2838,"context_line":"                          default\u003dCONF.shutdown_timeout)"},{"line_number":2839,"context_line":"            retry_interval \u003d CONF.compute.shutdown_retry_interval"},{"line_number":2840,"context_line":"        else:"},{"line_number":2841,"context_line":"            timeout \u003d 0"}],"source_content_type":"text/x-python","patch_set":1,"id":"42feb72c_1cf32063","line":2838,"range":{"start_line":2836,"start_character":6,"end_line":2838,"end_character":56},"in_reply_to":"e0ca679c_a80020d9","updated":"2021-09-24 17:58:08.000000000","message":"Done as a follow-up WIP for now.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"1e42a737cfe83a757f0c9ba8dc76e0914518e02b","unresolved":true,"context_lines":[{"line_number":2875,"context_line":"                    self.host, action\u003dfields.NotificationAction.SHUTDOWN,"},{"line_number":2876,"context_line":"                    phase\u003dfields.NotificationPhase.START, bdms\u003dbdms)"},{"line_number":2877,"context_line":""},{"line_number":2878,"context_line":"        network_info \u003d instance.get_network_info()"},{"line_number":2879,"context_line":""},{"line_number":2880,"context_line":"        # NOTE(arnaudmorin) to avoid nova destroying the instance without"},{"line_number":2881,"context_line":"        # unplugging the interface, refresh network_info if it is empty."},{"line_number":2882,"context_line":"        if not network_info:"},{"line_number":2883,"context_line":"            network_info \u003d self.network_api.get_instance_nw_info("},{"line_number":2884,"context_line":"                context, instance)"},{"line_number":2885,"context_line":""},{"line_number":2886,"context_line":"        # NOTE(vish) get bdms before destroying the 
instance"},{"line_number":2887,"context_line":"        vol_bdms \u003d [bdm for bdm in bdms if bdm.is_volume]"}],"source_content_type":"text/x-python","patch_set":1,"id":"a265b273_8a60f43a","line":2884,"range":{"start_line":2878,"start_character":0,"end_line":2884,"end_character":34},"updated":"2021-09-10 16:57:07.000000000","message":"If we call power_off_instance below we might be able to avoid this here.\nBut it\u0027s probably good to leave it for now.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"1e42a737cfe83a757f0c9ba8dc76e0914518e02b","unresolved":true,"context_lines":[{"line_number":2894,"context_line":"            LOG.debug(\u0027Start destroying the instance on the hypervisor.\u0027,"},{"line_number":2895,"context_line":"                      instance\u003dinstance)"},{"line_number":2896,"context_line":"            with timeutils.StopWatch() as timer:"},{"line_number":2897,"context_line":"                self.driver.destroy(context, instance, network_info,"},{"line_number":2898,"context_line":"                                    block_device_info)"},{"line_number":2899,"context_line":"            LOG.info(\u0027Took %0.2f seconds to destroy the instance on the \u0027"},{"line_number":2900,"context_line":"                     \u0027hypervisor.\u0027, timer.elapsed(), instance\u003dinstance)"}],"source_content_type":"text/x-python","patch_set":1,"id":"c482db8c_2a488d3e","line":2897,"range":{"start_line":2897,"start_character":28,"end_line":2897,"end_character":35},"updated":"2021-09-10 16:57:07.000000000","message":"I think we might want to just replace this with\n\nself._power_off_instance instead.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":8864,"name":"Artom 
Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"0048328ca1bc2ab4f5a6de533fbfdb54a91a0d7b","unresolved":true,"context_lines":[{"line_number":2894,"context_line":"            LOG.debug(\u0027Start destroying the instance on the hypervisor.\u0027,"},{"line_number":2895,"context_line":"                      instance\u003dinstance)"},{"line_number":2896,"context_line":"            with timeutils.StopWatch() as timer:"},{"line_number":2897,"context_line":"                self.driver.destroy(context, instance, network_info,"},{"line_number":2898,"context_line":"                                    block_device_info)"},{"line_number":2899,"context_line":"            LOG.info(\u0027Took %0.2f seconds to destroy the instance on the \u0027"},{"line_number":2900,"context_line":"                     \u0027hypervisor.\u0027, timer.elapsed(), instance\u003dinstance)"}],"source_content_type":"text/x-python","patch_set":1,"id":"3315fa34_1132d52c","line":2897,"range":{"start_line":2897,"start_character":28,"end_line":2897,"end_character":35},"in_reply_to":"c482db8c_2a488d3e","updated":"2021-09-24 17:58:08.000000000","message":"No, I think it makes sense to keep them separate. The methods are named badly, but they do different things.\n\nWhen you don\u0027t care about the guest OS and want to save time, you _shutdown_instance() to just get rid of it.\n\nWhen you have to care about the guest, you _power_off_instance() gracefully first.\n\nIn *libvirt*, driver.power_off() happens to call driver.destroy() eventually, so _power_off_instance() will also destroy it from the hypervisor, but we cannot make that assumption for all drivers. 
We need the two separate methods in the compute manager.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"1e42a737cfe83a757f0c9ba8dc76e0914518e02b","unresolved":true,"context_lines":[{"line_number":3025,"context_line":"        # on the domain. destroy() does the equivalent of pulling the plug."},{"line_number":3026,"context_line":"        # What we want is to poweroff() the domain, and calling"},{"line_number":3027,"context_line":"        # _power_off_instance() here does that."},{"line_number":3028,"context_line":"        self._power_off_instance(instant, clean_shutdown\u003dTrue)"},{"line_number":3029,"context_line":"        self._shutdown_instance(context, instance, bdms)"},{"line_number":3030,"context_line":""},{"line_number":3031,"context_line":"        # NOTE(vish): We have already deleted the instance, so we have"}],"source_content_type":"text/x-python","patch_set":1,"id":"d39dbef1_3f9bb62d","line":3028,"range":{"start_line":3028,"start_character":6,"end_line":3028,"end_character":62},"updated":"2021-09-10 16:57:07.000000000","message":"Instead of doing this here, which will likely work fine, I would suggest we move it to inside _shutdown_instance.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"0048328ca1bc2ab4f5a6de533fbfdb54a91a0d7b","unresolved":true,"context_lines":[{"line_number":3025,"context_line":"        # on the domain. 
destroy() does the equivalent of pulling the plug."},{"line_number":3026,"context_line":"        # What we want is to poweroff() the domain, and calling"},{"line_number":3027,"context_line":"        # _power_off_instance() here does that."},{"line_number":3028,"context_line":"        self._power_off_instance(instant, clean_shutdown\u003dTrue)"},{"line_number":3029,"context_line":"        self._shutdown_instance(context, instance, bdms)"},{"line_number":3030,"context_line":""},{"line_number":3031,"context_line":"        # NOTE(vish): We have already deleted the instance, so we have"}],"source_content_type":"text/x-python","patch_set":1,"id":"e473b915_b49bf6b8","line":3028,"range":{"start_line":3028,"start_character":6,"end_line":3028,"end_character":62},"in_reply_to":"d39dbef1_3f9bb62d","updated":"2021-09-24 17:58:08.000000000","message":"I\u0027m not convinced, because wasting time waiting for the guest to power off gracefully when we don\u0027t care isn\u0027t an improvement. As I said in my comment above, I think these should remain separate calls.","commit_id":"66ea95ed0be3c92719e44d1bb791ba092b377cb1"}]}
