)]}'
{".zuul.yaml":[
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"82b02b758bbb367475fcab2953e607821ca33092","unresolved":true,"context_lines":[{"line_number":184,"context_line":"      the \"iptables_hybrid\" securitygroup firewall driver, aka \"hybrid plug\"."},{"line_number":185,"context_line":"      The external events interactions between Nova and Neutron in these"},{"line_number":186,"context_line":"      situations has historically been fragile. This job exercises them."},{"line_number":187,"context_line":"      This job also tests live migration when cpu_shared_set is not defined."},{"line_number":188,"context_line":"    irrelevant-files: \u0026nova-base-irrelevant-files"},{"line_number":189,"context_line":"      - ^api-.*$"},{"line_number":190,"context_line":"      - ^(test-|)requirements.txt$"}],"source_content_type":"text/x-yaml","patch_set":2,"id":"c6f18566_ef44075d","line":187,"updated":"2024-03-20 21:23:44.000000000","message":"I could invert this, by the way.\n\nUse the hybrid plug job to test NUMA-aware live migration and keep the Ceph live migration job testing the totally vanilla case.\n\nWe get the same coverage in either case.","commit_id":"77b2ef0a36251ad7a705c0789ca49a5eb1b5f0b9"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"1756ddf330210f8316c0ef6457a6320b5fc31b20","unresolved":false,"context_lines":[{"line_number":184,"context_line":"      the \"iptables_hybrid\" securitygroup firewall driver, aka \"hybrid plug\"."},{"line_number":185,"context_line":"      The external events interactions between Nova and Neutron in these"},{"line_number":186,"context_line":"      situations has historically been fragile. This job exercises them."},{"line_number":187,"context_line":"      This job also tests live migration when cpu_shared_set is not defined."},{"line_number":188,"context_line":"    irrelevant-files: \u0026nova-base-irrelevant-files"},{"line_number":189,"context_line":"      - ^api-.*$"},{"line_number":190,"context_line":"      - ^(test-|)requirements.txt$"}],"source_content_type":"text/x-yaml","patch_set":2,"id":"272e73d7_a09a0f52","line":187,"in_reply_to":"c6f18566_ef44075d","updated":"2024-09-02 22:05:21.000000000","message":"Acknowledged","commit_id":"77b2ef0a36251ad7a705c0789ca49a5eb1b5f0b9"},
{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"43c61ca22239f755c8cf7a8ae0374bbc9391c201","unresolved":true,"context_lines":[{"line_number":315,"context_line":"              # updated properly. addtionally in this job we want to test that"},{"line_number":316,"context_line":"              # for guests with a numa topology to ensure the numa topology is"},{"line_number":317,"context_line":"              # updated properly."},{"line_number":318,"context_line":"              cpu_shared_set: \"0-5\""},{"line_number":319,"context_line":"    group-vars:"},{"line_number":320,"context_line":"      subnode:"},{"line_number":321,"context_line":"        devstack_local_conf:"}],"source_content_type":"text/x-yaml","patch_set":2,"id":"0fa374bc_ebcdbe7d","line":318,"updated":"2024-03-20 20:01:35.000000000","message":"OK, so because you\u0027re setting a page size on the flavor, but no CPU policy, we expect the resulting VMs to be pinned to the cpu_shared_set via the \u003ccputune\u003e\u003cvcpupin [...] /\u003e mechanism (and not the \u003cvcpu cpuset\u003d[...] /\u003e mechanism that a VM without a NUMA topology would use for the same thing).\n\nThe thing is, even if the cpu_shared_sets are different and we expect an updated XML, we have no way of making sure it happened. IIRC, before we implemented NUMA live migration, the live migration didn\u0027t outright fail, it just silently pinned to the wrong CPUs.\n\nI wonder if enabling power management could help - at least that way, if we don\u0027t update the XML, there\u0027s a decent chance (and we could probably make it 100% deterministic by hacking the config correctly) that we land on a powered-down CPU and fail the migration entirely.","commit_id":"77b2ef0a36251ad7a705c0789ca49a5eb1b5f0b9"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"82b02b758bbb367475fcab2953e607821ca33092","unresolved":true,"context_lines":[{"line_number":315,"context_line":"              # updated properly. addtionally in this job we want to test that"},{"line_number":316,"context_line":"              # for guests with a numa topology to ensure the numa topology is"},{"line_number":317,"context_line":"              # updated properly."},{"line_number":318,"context_line":"              cpu_shared_set: \"0-5\""},{"line_number":319,"context_line":"    group-vars:"},{"line_number":320,"context_line":"      subnode:"},{"line_number":321,"context_line":"        devstack_local_conf:"}],"source_content_type":"text/x-yaml","patch_set":2,"id":"a15930e6_55ce31d6","line":318,"in_reply_to":"0fa374bc_ebcdbe7d","updated":"2024-03-20 21:23:44.000000000","message":"You understand the intent correctly.\n\nYou are correct that there is currently nothing that asserts the XML is updated, but this at least exercises the code path and we can manually inspect the logs to confirm that it works.\n\nIf people are comfortable with the idea of adding the whitebox tempest plugin to these jobs we could enable a specific test to validate the XML update. For now I think it\u0027s better just to have the code running, even if we don\u0027t assert the XML is updated directly, and just rely on live migration working as a proxy.\n\nIf we are OK with that then we can see how to make this more useful in a later patch.","commit_id":"77b2ef0a36251ad7a705c0789ca49a5eb1b5f0b9"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"1756ddf330210f8316c0ef6457a6320b5fc31b20","unresolved":false,"context_lines":[{"line_number":315,"context_line":"              # updated properly. addtionally in this job we want to test that"},{"line_number":316,"context_line":"              # for guests with a numa topology to ensure the numa topology is"},{"line_number":317,"context_line":"              # updated properly."},{"line_number":318,"context_line":"              cpu_shared_set: \"0-5\""},{"line_number":319,"context_line":"    group-vars:"},{"line_number":320,"context_line":"      subnode:"},{"line_number":321,"context_line":"        devstack_local_conf:"}],"source_content_type":"text/x-yaml","patch_set":2,"id":"b5bddf0e_0df4aa54","line":318,"in_reply_to":"a15930e6_55ce31d6","updated":"2024-09-02 22:05:21.000000000","message":"Acknowledged","commit_id":"77b2ef0a36251ad7a705c0789ca49a5eb1b5f0b9"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"5f01b46616164cceabe38cdb52587ae5823067fe","unresolved":true,"context_lines":[{"line_number":327,"context_line":"              # properly. note that live migration is also tested in other"},{"line_number":328,"context_line":"              # jobs which will not have cpu_shared_set defined at all."},{"line_number":329,"context_line":"                cpu_shared_set: \"2-7\""},{"line_number":330,"context_line":"    pre-run: playbooks/nova-live-migration-ceph/use-numa-aware-memory.yaml"},{"line_number":331,"context_line":"    post-run: playbooks/nova-live-migration/post-run.yaml"},{"line_number":332,"context_line":""},{"line_number":333,"context_line":"- job:"}],"source_content_type":"text/x-yaml","patch_set":2,"id":"1725665c_329dc831","line":330,"range":{"start_line":330,"start_character":23,"end_line":330,"end_character":47},"updated":"2024-03-20 21:31:18.000000000","message":"nova-live-migration-ceph should be nova-live-migration.\nFixed in v3.","commit_id":"77b2ef0a36251ad7a705c0789ca49a5eb1b5f0b9"}
],"/PATCHSET_LEVEL":[
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"756efecebeda00979197f308f3fad6d9903f581e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"73dd7429_d0bd2d7f","updated":"2024-05-27 17:02:14.000000000","message":"recheck\n\nThat\u0027s very odd: local.sh failed because the default flavors 42 and 84 created in tempest did not exist when local.sh was run.\n\n2024-05-27 14:02:51.117 | Running user script /opt/stack/devstack/local.sh\n2024-05-27 14:02:51.119 | + ./stack.sh:main:1491                     :   /opt/stack/devstack/local.sh\n2024-05-27 14:02:52.010 | No Flavor found for 42\n2024-05-27 14:02:52.010 | \n2024-05-27 14:02:52.811 | No Flavor found for 84\n2024-05-27 14:02:52.811 | \n2024-05-27 14:02:52.821 | ++ ./stack.sh:main:1491   \n\nThat happens here:\nhttps://github.com/openstack/devstack/blob/master/stack.sh#L1485-L1492\nwhich is well after tempest is installed.\n\n42 and 84 are the correct flavor IDs to use for the default flavors for tempest:\nhttps://github.com/openstack/devstack/blob/master/lib/tempest#L294-L306\n\nDEFAULT_INSTANCE_TYPE is not defined:\nhttps://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_19c/913842/5/check/nova-live-migration-ceph/19c4593/controller/logs/_.localrc_auto.txt\n\nbut m1.nano (42) is not defined in https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_19c/913842/5/check/nova-live-migration-ceph/19c4593/controller/logs/devstacklog.txt\n\nThis looks like it\u0027s unrelated.","commit_id":"94480afae0299104d290e2df32d409d381ca8c45"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"da983c59650a62605427bf2d1165659441bc1828","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"bf42c11f_5e85a5bf","updated":"2024-06-05 10:04:29.000000000","message":"recheck post failure","commit_id":"c41e5a537c76eb2271a35b91722323f8640b66af"},
{"author":{"_account_id":7166,"name":"Sylvain Bauza","email":"sbauza@redhat.com","username":"sbauza"},"change_message_id":"2ecab932c8f27a015adbb2da169237dbdc90e195","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"744aa3d8_0d4ad317","updated":"2024-08-01 14:45:37.000000000","message":"recheck unrelated post failure","commit_id":"c41e5a537c76eb2271a35b91722323f8640b66af"},
{"author":{"_account_id":9708,"name":"Balazs Gibizer","display_name":"gibi","email":"gibizer@gmail.com","username":"gibi"},"change_message_id":"4100943a67a81fd3f14a38e958a803a9a9d55f8b","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":7,"id":"b7f17ed5_1797ecb3","updated":"2024-09-02 07:54:48.000000000","message":"The job failure is relevant:\n```\nRunning user script /opt/stack/devstack/local.sh\n+ ./stack.sh:main:1482                     :   /opt/stack/devstack/local.sh\nNo Flavor found for 42\n\nNo Flavor found for 84\n```","commit_id":"ee83622fdbce1114bd755045039ffb408987f494"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"6592a37cc57518e22ababa6cb90084fb09535586","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":7,"id":"1c6cb64f_927334af","in_reply_to":"b7f17ed5_1797ecb3","updated":"2024-09-02 11:24:47.000000000","message":"Ya, so I think at some point the default flavor IDs changed.\n\nI\u0027m just going to revert this to the other way I used to do this and create 2 new flavors instead of updating the default ones.","commit_id":"ee83622fdbce1114bd755045039ffb408987f494"},
{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9173cbf5befc368660e4dcf1f3476b42f8d9dded","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":9,"id":"ee7e862a_c1fd1b8c","updated":"2024-09-13 14:25:06.000000000","message":"We recently had a regression with shared storage live migration, which is not tested in our CI.\n\nBecause of that I think I\u0027m going to change tack and enable the NUMA testing in the nova-ovs-hybrid-plug job instead, as I can move that to using Nova on NFS without impacting the test coverage that job is providing.\n\nThat way we can have:\nnova-live-migration testing live migration of non-NUMA VMs with cpu_shared_set,\nnova-live-migration-ceph testing non-NUMA VMs without cpu_shared_set defined, and\nnova-ovs-hybrid-plug testing NUMA live migration with shared storage.\n\nThat will mean no new jobs are required and we still get to add coverage for both NUMA live migration and shared storage. The recent bug requires both to be enabled to trigger, so it will also test that.","commit_id":"76f24ba4d0fdafc13449dceeb9d1535b1d2dda29"}
]}
