{"/PATCHSET_LEVEL":[
{"author":{"_account_id":14826,"name":"Mark Goddard","email":"markgoddard86@gmail.com","username":"mgoddard"},"change_message_id":"0bbb9f7f9478a1eb3f132d0f2eea16d47cbc2aa9","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"ac159082_47cb9dc0","updated":"2023-11-03 11:51:20.000000000","message":"We\u0027re hitting this while migrating systems to EL9 or Jammy. Would very much appreciate a fix!","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"a1fdeed56346faed0e2e0ebd08e9987125eb3b4b","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"5c67b6b5_d4ebb9d6","updated":"2023-10-17 15:03:39.000000000","message":"i am generally ok with this backport and prefer it to the alternative approach proposed in https://review.opendev.org/c/openstack/nova/+/898326\n\ni know gibi had some concerns with backporting this previously, so i have added them to the review.\n\nfor added context, we have backported this downstream to wallaby, as the cgroups implementation changed between rhel 8 and rhel 9. without this change it would have prevented live migration during the upgrade to wallaby.\n\ndownstream we assessed that the upgrade impact was outweighed by the cost of forcing all vms with more than 9 cpus to be cold migrated/resized.\nupstream the calculus may be slightly different, but i think the operational cost warrants the backport.","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":28048,"name":"Will Szumski","email":"will@stackhpc.com","username":"jovial"},"change_message_id":"1482289cb04db8d0504aefd6b96b5901106f5194","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"9e7971f7_09180aaa","updated":"2023-11-01 15:47:31.000000000","message":"recheck","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":16137,"name":"Tobias Urdin","email":"tobias.urdin@binero.com","username":"tobasco"},"change_message_id":"2281a8261e4e63009bb2034f05504baae00266fe","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"1cdb02bf_d72cad70","in_reply_to":"01cc78c5_85282fe9","updated":"2023-10-18 07:33:07.000000000","message":"this relieves the immediate issue this imposes on live migration, but is it totally out of the picture to add back the logic of prioritizing larger instances, even as a workaround?","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":16137,"name":"Tobias Urdin","email":"tobias.urdin@binero.com","username":"tobasco"},"change_message_id":"678f055d187c48c7c0c2ae1e24f27427fea1e955","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"5b342395_5d227639","in_reply_to":"0ac2f24e_580c1878","updated":"2023-10-18 18:09:39.000000000","message":"can you elaborate on \"larger instances have priority by default\"? if that\u0027s the case, then why was this here in the first place?","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"467fef7f3663f17ba37f9f0217ac3070694a7309","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"0ac2f24e_580c1878","in_reply_to":"1cdb02bf_d72cad70","updated":"2023-10-18 11:41:41.000000000","message":"larger instances have priority by default if we don\u0027t set cpu_shares at all for any instance.\n\nthere is only an issue if you mix guests that request cpu_shares with guests that have none.\n\nfrom a downstream perspective, using virsh to modify the instance xml would make the vm unsupported. that is why we backported this downstream, as we could not support using virsh to modify the guest. from an upstream perspective, any modification of the guest xml would make the instance state \"unsupported\", in that only nova is allowed to modify the xml. unsupported is in quotes because upstream support vs paid product support are very different things.\n\nnova assumes that you will never modify the xml or any files created by nova/libvirt out of band, and all unsupported means is that if you do that and encounter a bug as a result, then unless that bug also happens without your modification, it is not a valid nova bug.\n\nin most cases, upstream or downstream, a hard reboot of the instance is enough to resolve the support status of the modified guest.\n\nin general i would like to deprecate and remove all quota extra specs from nova and instead have a qos api, like neutron has, that can be associated with an instance dynamically, separately from the flavor. we could perhaps allow flavors to reference a default/required qos policy as well, but the qos-type extra specs are a bad fit for our current flavor model.\n\nthey are something that could be changed at runtime in most cases, and it would be nice to be able to do that without having to resize.\nsince live resize is generally a much harder feature to implement, i think having a dedicated qos api for instances would give a better long-term experience, as it would allow us to add features such as normalisation across cgroup versions or drivers, since we could actually build an abstraction rather than maintaining compatibility with the existing raw passthrough semantics.","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"f3608a3dc5cf76abeb906f44950f5901ba5f20f3","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"be5f203d_f917fc4e","in_reply_to":"5b342395_5d227639","updated":"2023-10-18 19:00:39.000000000","message":"my understanding is that nova originally supported setting shares via extra specs, or didn\u0027t set them at all and left the defaulting to libvirt.\n\nhttps://bugs.launchpad.net/nova/+bug/1383377 was filed, and\nhttps://review.opendev.org/c/openstack/nova/+/129690 was introduced as a result to address the problematic behavior of mixing vms that explicitly request cpu shares with those that don\u0027t. to address that, it introduced the behavior of multiplying the number of cpus by a large constant.\n\nlooking at the code before the change\nhttps://review.opendev.org/c/openstack/nova/+/129690/7/nova/virt/libvirt/driver.py#b3774\n\nthe cpu shares were only set by nova if you used the flavor extra specs.\n\nso what it was actually trying to fix was mixing flavors that have cpu shares set in the flavor extra specs with flavors that don\u0027t on the same host.\n\nthat is very different from what the bug title or the commit implies.\n\ni think it\u0027s reasonable to say that any given host should either have cpu.shares set on no vms or on all vms on that host, the same as we require for numa and non-numa vms.\n\nlonger term, i would also consider it reasonable to remove the shares configuration entirely, again possibly replacing it with a qos api at a later date.","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":16137,"name":"Tobias Urdin","email":"tobias.urdin@binero.com","username":"tobasco"},"change_message_id":"e840f40c570274b017f91b9ae1f609658103a03f","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"01cc78c5_85282fe9","in_reply_to":"5c67b6b5_d4ebb9d6","updated":"2023-10-18 07:22:54.000000000","message":"they don\u0027t have to be migrated though; the python bindings or virsh can be used to update cpu_shares.\n\ndid you have any discussions internally about the above, and about it causing domains to be tainted, like for the live migration announce_self issue?","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":16137,"name":"Tobias Urdin","email":"tobias.urdin@binero.com","username":"tobasco"},"change_message_id":"533994b84a73e0b80a9bff814f826008ec75f4f8","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":1,"id":"51a1b659_d4c7eab5","in_reply_to":"be5f203d_f917fc4e","updated":"2023-10-18 19:13:07.000000000","message":"hm, thanks for that history. a shame that there are no more in-depth details in [1] or the patch there, but then removing it should be safe if we consider fixing all instances to have the default value again (by removing cpu_shares, basically)\n\n[1] https://bugs.launchpad.net/nova/+bug/1383377","commit_id":"888e837bb71464cd1c2179964ac3e853ac18db52"},
{"author":{"_account_id":32755,"name":"Christian Rohmann","email":"christian.rohmann@inovex.de","username":"frittentheke"},"change_message_id":"a8bcb03980fa6113af65cab99e5a752dac648767","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"622e8917_9068fb4e","updated":"2023-11-22 09:45:59.000000000","message":"recheck","commit_id":"0a6b57a9a24a0936383aaf444c690772aacc3245"},
{"author":{"_account_id":17685,"name":"Elod Illes","email":"elod.illes@est.tech","username":"elod.illes"},"change_message_id":"726884e68eac9162b23eb46316362f7bc25c53b8","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"ad2630e2_8660e805","updated":"2023-11-22 14:20:01.000000000","message":"recheck - nova-emulation job is now removed:\n\nhttps://review.opendev.org/c/openstack/nova/+/901604","commit_id":"0a6b57a9a24a0936383aaf444c690772aacc3245"}
]}
