)]}'
{"/COMMIT_MSG":[{"author":{"_account_id":17685,"name":"Elod Illes","email":"elod.illes@est.tech","username":"elod.illes"},"change_message_id":"9baa5880b2fcb824b03f301ea1a8d272c0a4cb9a","unresolved":false,"context_lines":[{"line_number":11,"context_line":"NOTE(artom) Backporting this makes the whole stack on top of this pass"},{"line_number":12,"context_line":"functional and unit tests without any stable-only modifications."},{"line_number":13,"context_line":"Otherwise we\u0027d have to refactor nova/virt/libvirt/cpu/__init__.py to"},{"line_number":14,"context_line":"use the new per-driver API objects."},{"line_number":15,"context_line":""},{"line_number":16,"context_line":"Relates to blueprint libvirt-cpu-state-mgmt"},{"line_number":17,"context_line":""}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"7f2ee522_80e91bad","line":14,"updated":"2024-04-04 09:27:20.000000000","message":"Thanks for the explanation. As I see it, the whole feature [1] was merged in 2023.1; only this follow-up patch was merged in 2023.2. Considering this, and the reduced need to refactor the backports, I agree that it is reasonable to backport this together with the bug fix to make everything a clean cherry-pick.\nLGTM.\n\n[1] https://review.opendev.org/q/topic:bp/libvirt-cpu-state-mgmt","commit_id":"bec8f47c06c52aa8c8d76ed140d8f6a641bad8c7"}],"/PATCHSET_LEVEL":[{"author":{"_account_id":9708,"name":"Balazs Gibizer","display_name":"gibi","email":"gibizer@gmail.com","username":"gibi"},"change_message_id":"6dd967dcdca4f1a6baa6ded07c4d7df068339938","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"4431ad34_c40f5a0c","updated":"2024-04-03 08:04:29.000000000","message":"clean cherry-pick","commit_id":"bec8f47c06c52aa8c8d76ed140d8f6a641bad8c7"},{"author":{"_account_id":9708,"name":"Balazs Gibizer","display_name":"gibi","email":"gibizer@gmail.com","username":"gibi"},"change_message_id":"b8806860848c6b1178ea45d04ba1e62c03160f33","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"a2156754_9e1084cc","updated":"2024-04-05 10:02:37.000000000","message":"recheck multiple failures\nopenstacksdk-functional-devstack \n```\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:161 :   SUBNETPOOL_V6_ID\u003d0486d12d-d62e-4991-91cc-75cd421bb997\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   is_provider_network\n+ functions-common:is_provider_network:2284 :   \u0027[\u0027 \u0027\u0027 \u003d\u003d True \u0027]\u0027\n+ functions-common:is_provider_network:2287 :   return 1\n++ lib/neutron_plugins/services/l3:create_neutron_initial_network:196 :   oscwrap --os-cloud devstack --os-region RegionOne network create private -f value -c id\nError while executing command: HttpException: 503, Unable to create the network. No tenant network is available for allocation.\n++ functions-common:oscwrap:2475            :   return 1\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:196 :   NET_ID\u003d\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:1 :   exit_trap\n```\n\ntempest-integrated-compute \n```\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:161 :   SUBNETPOOL_V6_ID\u003db5aa2293-bb54-4e90-a4a6-32164dc28f52\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:166 :   is_provider_network\n+ functions-common:is_provider_network:2284 :   \u0027[\u0027 \u0027\u0027 \u003d\u003d True \u0027]\u0027\n+ functions-common:is_provider_network:2287 :   return 1\n++ lib/neutron_plugins/services/l3:create_neutron_initial_network:196 :   oscwrap --os-cloud devstack --os-region RegionOne network create private -f value -c id\nError while executing command: HttpException: 503, Unable to create the network. No tenant network is available for allocation.\n++ functions-common:oscwrap:2475            :   return 1\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:196 :   NET_ID\u003d\n+ lib/neutron_plugins/services/l3:create_neutron_initial_network:1 :   exit_trap\n```\n\ntempest-integrated-compute-ubuntu-focal \n```\n+ functions-common:real_install_package:1446 :   apt_get install targetcli-fb\n+ functions-common:apt_get:1226            :   sudo DEBIAN_FRONTEND\u003dnoninteractive http_proxy\u003d https_proxy\u003d no_proxy\u003d apt-get --option Dpkg::Options::\u003d--force-confold --assume-yes install targetcli-fb\nE: Could not get lock /var/lib/dpkg/lock-frontend. It is held by process 61090 (apt-get)\nE: Unable to acquire the dpkg frontend lock (/var/lib/dpkg/lock-frontend), is another process using it?\n+ functions-common:apt_get:1               :   exit_trap\n```","commit_id":"bec8f47c06c52aa8c8d76ed140d8f6a641bad8c7"},{"author":{"_account_id":9708,"name":"Balazs Gibizer","display_name":"gibi","email":"gibizer@gmail.com","username":"gibi"},"change_message_id":"e1b67a07de341a0f70a890ff4c9d3e68d7ed6c41","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"a9ba2a69_b6d5cd56","updated":"2024-04-04 13:27:02.000000000","message":"recheck one tempest test failed due to vif plug timeout\n\n```\nApr 04 10:22:55.746533 np0037223552 nova-compute[74438]: WARNING nova.compute.manager [None req-4ece570d-9689-4651-8bc1-d8e1e053990d tempest-MultipleCreateTestJSON-1233069602 tempest-MultipleCreateTestJSON-1233069602-project-member] [instance: d08cf330-e698-4ba0-9b96-d8be4ac7b31f] Timeout waiting for [\u0027network-vif-plugged-1bbdc310-d800-48b7-bc42-85d41a2414dd\u0027] for instance with vm_state building and task_state spawning. Event states are: network-vif-plugged-1bbdc310-d800-48b7-bc42-85d41a2414dd: timed out after 300.06 seconds: eventlet.timeout.Timeout: 300 seconds\n```","commit_id":"bec8f47c06c52aa8c8d76ed140d8f6a641bad8c7"}]}
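The blob above is a response from Gerrit's comments REST endpoint, keyed by file path (`/COMMIT_MSG`, `/PATCHSET_LEVEL`), with the leading `)]}'` being Gerrit's XSSI-protection prefix that must be stripped before JSON decoding. A minimal sketch of consuming such a payload, assuming a response body already fetched as text (the function names here are illustrative, not part of any Gerrit client library):

```python
import json

GERRIT_XSSI_PREFIX = ")]}'"

def parse_gerrit_response(raw: str) -> dict:
    """Strip Gerrit's XSSI-protection prefix, then decode the JSON body."""
    if raw.startswith(GERRIT_XSSI_PREFIX):
        raw = raw[len(GERRIT_XSSI_PREFIX):]
    return json.loads(raw)

def summarize_comments(comments_by_file: dict):
    """Yield (path, author, message) for each comment in a comments response."""
    for path, comments in comments_by_file.items():
        for comment in comments:
            author = comment.get("author", {}).get("name", "")
            yield path, author, comment.get("message", "")

if __name__ == "__main__":
    # Tiny sample in the same shape as the response above.
    raw = ")]}'\n" + json.dumps(
        {"/PATCHSET_LEVEL": [{"author": {"name": "gibi"},
                              "message": "clean cherry-pick"}]}
    )
    for path, author, message in summarize_comments(parse_gerrit_response(raw)):
        print(f"{path}: {author}: {message}")
```

Note that patchset-level comments have no `line` or `context_lines` content, while inline comments (like the `/COMMIT_MSG` one above) carry `line` and surrounding `context_lines`.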
