)]}'
{"/PATCHSET_LEVEL":[{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"8945637936afac4c6248ccb1fb61458d140124f4","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":7,"id":"6e0a9b53_0830137c","updated":"2024-06-14 17:13:48.000000000","message":"I\u0027m not sure why CI fails here, but I think before trying to debug that and grappling with the complexity of the code you\u0027ve written, we need to simplify the approach first.\n\nFor one thing, relying on an instance landing on the correct host in order to assert trait correctness can give false positives. It\u0027s entirely possible that, if the custom trait reporting doesn\u0027t work because of a bug in the test, the instance still happens to land on that host. I\u0027d prefer that we extend Tempest\u0027s placement client in tempest/lib/services/placement/placement_client.py to be able to query traits, see the api-ref (https://docs.openstack.org/api-ref/placement/) to understand the API for that.\n\nFor another, I see a few possibilities to create the provider.yaml file:\n\n1. Have the test code write it directly to the compute host(s) using Paramiko\u0027s SFTPClient (https://docs.paramiko.org/en/latest/api/sftp.html#paramiko.sftp_client.SFTPClient). Tempest\u0027s SSH client will need to be extended to support that.\n\n2. Use something like `echo {contents} \u003e provider.yaml` in a ssh_client.exec_command() call. I\u0027m not sure whether the redirection will work to write the file.\n\n3. Have the deployment write the file (via an ansible `template` task in the pre.yaml playbook, for example), and communicate the necessary information about what was written to the test code via either config options or in the nodes.yaml file. 
Downstream, we\u0027d need a different system to write the file, but that\u0027s almost an advantage, as it would test the provider.yaml deployment code in the edpm-ansible (I\u0027m assuming this exists?).\n\nI don\u0027t necessarily have a preference, hence tagging Sean and James for their opinion.","commit_id":"4effa18c5f50404ffe6a934069188c824783de61"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"587446a11c84cabbb553f2ee08d735478907155f","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":9,"id":"ea6d87a8_6e633d13","updated":"2024-06-21 04:32:14.000000000","message":"I mean, shall we not show this msg if we know that we do not have a valid host because of traits? As here, in the other case, the test passed (which was expected), but we still have the same msg along with the trait msg.\nhttps://zuul.opendev.org/t/openstack/build/3a36486f78cd4784ae5342ea936ac17e/log/controller/logs/screen-n-sch.txt#5313-5315","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"cfd394fd78036d5ca66683c1e4f6e7e8731b93e0","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":9,"id":"10f52cf0_f3365389","updated":"2024-06-21 01:47:42.000000000","message":"recheck no valid host","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"4f8087efdf5cc689884d6a5f32b36f4b133e05c9","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":9,"id":"9ad04236_b9ce236d","updated":"2024-06-21 04:29:40.000000000","message":"recheck no valid host should not come\nwe are not getting a clear msg in the logs that it\u0027s because of a trait or any other reason,\n\njust a generic msg:\n`Got no allocation candidates from the Placement API. 
This could be due to insufficient resources or a temporary occurrence as compute nodes start up.`\n\nhttps://zuul.opendev.org/t/openstack/build/3a36486f78cd4784ae5342ea936ac17e/log/controller/logs/screen-n-sch.txt#5316-5323","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9993ea10cc678501d59d0ec86c8bd83c8f6be819","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":9,"id":"878e8afe_8622dfc4","in_reply_to":"5d7f07b7_766f9c73","updated":"2024-06-21 10:44:07.000000000","message":"I think this is coming from two different places.\nThe specific error gets converted to a no valid host error because that is what we want to show to end users at the API level.\n\nThe traits detail is not necessarily something that end users should see. From a security point of view, many public clouds would consider it a security issue if we told the end user (not admins) that the failure was because of a trait request, as that could be used to probe the environment for capabilities and/or vulnerabilities.","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"3dc3d6c604e39f6ab6930f3a9c7a33a29c3a03b9","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":9,"id":"79f8f064_bbbe3330","in_reply_to":"631da514_8d71c217","updated":"2024-06-21 10:31:19.000000000","message":"Nova is doing the right thing in terms of logging, from my perspective.\n\nNegative tests can cause error messages, so knowing why we got a no valid host is generally very important.\n\nOperators would not be particularly happy if we didn\u0027t tell them it was because the trait that was requested was undefined.","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":34860,"name":"Amit 
Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"da86dee887f8dd1273e8f94874f1b5d0e2d6d7b2","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":9,"id":"5d7f07b7_766f9c73","in_reply_to":"79f8f064_bbbe3330","updated":"2024-06-21 10:36:37.000000000","message":"ack, but what I meant is:\nif we know it\u0027s because of traits (or other reasons), just tell/log only that reason instead of the generic one every time.\n`Got no allocation candidates from the Placement API. This could be due to insufficient resources or a temporary occurrence as compute nodes start up.`\n\nand use this only when we do not know the exact reason.","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"5d3a2a1eb8dff72c7f73e6eb7fa8b0ce86b1f453","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":9,"id":"631da514_8d71c217","in_reply_to":"ea6d87a8_6e633d13","updated":"2024-06-21 04:32:32.000000000","message":"unresolve this","commit_id":"09d83ef52987303dcb5702a8ce3678bee7cb3082"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":11,"id":"316f7f7c_b9f4c0ee","updated":"2024-08-23 18:50:33.000000000","message":"Overall fine, just a bunch of small things inline.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"21d80fe1b34a10b15bd41b47197e5a4eae1f6023","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":12,"id":"436c1d58_e34c30eb","updated":"2024-09-04 08:36:29.000000000","message":"recheck taking extra time to add trait in RP\ntempest.lib.exceptions.TimeoutException: Request 
timed out\nDetails: Failed to add trait in resource provider,within the required time: (196 s)","commit_id":"e078922b038b6f3f0b8986a4c5e2f2c5bee40cda"}],"whitebox_tempest_plugin/api/compute/test_provider_yaml.py":[{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":39,"context_line":""},{"line_number":40,"context_line":"    def setUp(self):"},{"line_number":41,"context_line":"        super(TestProviderYamlViaTraits, self).setUp()"},{"line_number":42,"context_line":"        self.rp_cl \u003d self.os_admin.resource_providers_client"},{"line_number":43,"context_line":"        prs \u003d self.rp_cl.list_resource_providers()[\u0027resource_providers\u0027]"},{"line_number":44,"context_line":""},{"line_number":45,"context_line":"        self.host1, self.host1_id \u003d prs[0][\u0027name\u0027], prs[0][\u0027uuid\u0027]"}],"source_content_type":"text/x-python","patch_set":11,"id":"236da4c5_51542a0e","line":42,"updated":"2024-08-23 18:50:33.000000000","message":"Please don\u0027t alias the admin client to a `self.` non-admin alias. 
We\u0027ve been bitten in the past before, and now just prefer always using the admin client explicitly when necessary.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":39,"context_line":""},{"line_number":40,"context_line":"    def setUp(self):"},{"line_number":41,"context_line":"        super(TestProviderYamlViaTraits, self).setUp()"},{"line_number":42,"context_line":"        self.rp_cl \u003d self.os_admin.resource_providers_client"},{"line_number":43,"context_line":"        prs \u003d self.rp_cl.list_resource_providers()[\u0027resource_providers\u0027]"},{"line_number":44,"context_line":""},{"line_number":45,"context_line":"        self.host1, self.host1_id \u003d prs[0][\u0027name\u0027], prs[0][\u0027uuid\u0027]"}],"source_content_type":"text/x-python","patch_set":11,"id":"ebe623d9_e0e88107","line":42,"in_reply_to":"236da4c5_51542a0e","updated":"2024-09-04 06:03:08.000000000","message":"Done","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":40,"context_line":"    def setUp(self):"},{"line_number":41,"context_line":"        super(TestProviderYamlViaTraits, self).setUp()"},{"line_number":42,"context_line":"        self.rp_cl \u003d self.os_admin.resource_providers_client"},{"line_number":43,"context_line":"        prs \u003d self.rp_cl.list_resource_providers()[\u0027resource_providers\u0027]"},{"line_number":44,"context_line":""},{"line_number":45,"context_line":"        self.host1, self.host1_id \u003d prs[0][\u0027name\u0027], prs[0][\u0027uuid\u0027]"},{"line_number":46,"context_line":"        self.host2, 
self.host2_id \u003d prs[1][\u0027name\u0027], prs[1][\u0027uuid\u0027]"}],"source_content_type":"text/x-python","patch_set":11,"id":"22656b8f_c53af65b","line":43,"updated":"2024-08-23 18:50:33.000000000","message":"`s/prs/rps/`?","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":40,"context_line":"    def setUp(self):"},{"line_number":41,"context_line":"        super(TestProviderYamlViaTraits, self).setUp()"},{"line_number":42,"context_line":"        self.rp_cl \u003d self.os_admin.resource_providers_client"},{"line_number":43,"context_line":"        prs \u003d self.rp_cl.list_resource_providers()[\u0027resource_providers\u0027]"},{"line_number":44,"context_line":""},{"line_number":45,"context_line":"        self.host1, self.host1_id \u003d prs[0][\u0027name\u0027], prs[0][\u0027uuid\u0027]"},{"line_number":46,"context_line":"        self.host2, self.host2_id \u003d prs[1][\u0027name\u0027], prs[1][\u0027uuid\u0027]"}],"source_content_type":"text/x-python","patch_set":11,"id":"79efa666_357f95ea","line":43,"in_reply_to":"22656b8f_c53af65b","updated":"2024-09-04 06:03:08.000000000","message":"Done","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":43,"context_line":"        prs \u003d self.rp_cl.list_resource_providers()[\u0027resource_providers\u0027]"},{"line_number":44,"context_line":""},{"line_number":45,"context_line":"        self.host1, self.host1_id \u003d prs[0][\u0027name\u0027], prs[0][\u0027uuid\u0027]"},{"line_number":46,"context_line":"        self.host2, self.host2_id \u003d prs[1][\u0027name\u0027], 
prs[1][\u0027uuid\u0027]"},{"line_number":47,"context_line":"        # /etc/nova/provider_config is a default location"},{"line_number":48,"context_line":"        # saving provider.yaml to any other location and updating"},{"line_number":49,"context_line":"        # the path to nova.conf or nova-cpu.conf should work."}],"source_content_type":"text/x-python","patch_set":11,"id":"cdceed69_31d7badd","line":46,"updated":"2024-08-23 18:50:33.000000000","message":"This assumes at least two hosts, so you need to skip if CONF.min_compute_hosts \u003c 2","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":43,"context_line":"        prs \u003d self.rp_cl.list_resource_providers()[\u0027resource_providers\u0027]"},{"line_number":44,"context_line":""},{"line_number":45,"context_line":"        self.host1, self.host1_id \u003d prs[0][\u0027name\u0027], prs[0][\u0027uuid\u0027]"},{"line_number":46,"context_line":"        self.host2, self.host2_id \u003d prs[1][\u0027name\u0027], prs[1][\u0027uuid\u0027]"},{"line_number":47,"context_line":"        # /etc/nova/provider_config is a default location"},{"line_number":48,"context_line":"        # saving provider.yaml to any other location and updating"},{"line_number":49,"context_line":"        # the path to nova.conf or nova-cpu.conf should work."}],"source_content_type":"text/x-python","patch_set":11,"id":"556abf4c_c46e3512","line":46,"in_reply_to":"cdceed69_31d7badd","updated":"2024-09-04 06:03:08.000000000","message":"Done","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":48,"context_line":"  
      # saving provider.yaml to any other location and updating"},{"line_number":49,"context_line":"        # the path to nova.conf or nova-cpu.conf should work."},{"line_number":50,"context_line":"        # but it do not work as of now."},{"line_number":51,"context_line":"        self.provider_config_location \u003d \"/etc/nova/provider_config\""},{"line_number":52,"context_line":"        self.trait1 \u003d \"CUSTOM_WB_HOST_2\""},{"line_number":53,"context_line":"        self.trait2 \u003d \"CUSTOM_WB_HOST_1\""},{"line_number":54,"context_line":"        self.trait3 \u003d \"CUSTOM_WB_HOST_3\""}],"source_content_type":"text/x-python","patch_set":11,"id":"290039eb_d7c6429a","line":51,"updated":"2024-08-23 18:50:33.000000000","message":"This probably needs to be a config option, for different installers that might have different locations for this.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":49,"context_line":"        # the path to nova.conf or nova-cpu.conf should work."},{"line_number":50,"context_line":"        # but it do not work as of now."},{"line_number":51,"context_line":"        self.provider_config_location \u003d \"/etc/nova/provider_config\""},{"line_number":52,"context_line":"        self.trait1 \u003d \"CUSTOM_WB_HOST_2\""},{"line_number":53,"context_line":"        self.trait2 \u003d \"CUSTOM_WB_HOST_1\""},{"line_number":54,"context_line":"        self.trait3 \u003d \"CUSTOM_WB_HOST_3\""},{"line_number":55,"context_line":""}],"source_content_type":"text/x-python","patch_set":11,"id":"c8fb0fbb_3822b79a","line":52,"updated":"2024-08-23 18:50:33.000000000","message":"You don\u0027t need these in self, just use the strings inline whenever needed. Also the fact that you do trait1 \u003d host2 is... 
really confusing.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":54,"context_line":"        self.trait3 \u003d \"CUSTOM_WB_HOST_3\""},{"line_number":55,"context_line":""},{"line_number":56,"context_line":"    def traits_list(self):"},{"line_number":57,"context_line":"        return self.os_admin.placement_client.list_traits()[\u0027traits\u0027]"},{"line_number":58,"context_line":""},{"line_number":59,"context_line":"    def assert_trait_present_in_rp(self, trait, provider):"},{"line_number":60,"context_line":"        \"\"\"Verify if created rait is added in resource provider or not."}],"source_content_type":"text/x-python","patch_set":11,"id":"7fdbf332_6010f305","line":57,"updated":"2024-08-23 18:50:33.000000000","message":"nit: do we really need this one line wrapper method?","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":true,"context_lines":[{"line_number":54,"context_line":"        self.trait3 \u003d \"CUSTOM_WB_HOST_3\""},{"line_number":55,"context_line":""},{"line_number":56,"context_line":"    def traits_list(self):"},{"line_number":57,"context_line":"        return self.os_admin.placement_client.list_traits()[\u0027traits\u0027]"},{"line_number":58,"context_line":""},{"line_number":59,"context_line":"    def assert_trait_present_in_rp(self, trait, provider):"},{"line_number":60,"context_line":"        \"\"\"Verify if created rait is added in resource provider or not."}],"source_content_type":"text/x-python","patch_set":11,"id":"c447cdbc_140cb885","line":57,"in_reply_to":"7fdbf332_6010f305","updated":"2024-09-04 06:03:08.000000000","message":"it\u0027s a big 
line used in 3 places, wrapping it makes it pretty and easy to read.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":56,"context_line":"    def traits_list(self):"},{"line_number":57,"context_line":"        return self.os_admin.placement_client.list_traits()[\u0027traits\u0027]"},{"line_number":58,"context_line":""},{"line_number":59,"context_line":"    def assert_trait_present_in_rp(self, trait, provider):"},{"line_number":60,"context_line":"        \"\"\"Verify if created rait is added in resource provider or not."},{"line_number":61,"context_line":"        \"\"\""},{"line_number":62,"context_line":"        self.assertIn(trait, self.traits_list(), \"trait did not created\")"}],"source_content_type":"text/x-python","patch_set":11,"id":"50bcdba5_f7769def","line":59,"updated":"2024-08-23 18:50:33.000000000","message":"nit: ditto, I\u0027m not sure we\u0027re gaining much with this two line wrapper","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":true,"context_lines":[{"line_number":56,"context_line":"    def traits_list(self):"},{"line_number":57,"context_line":"        return self.os_admin.placement_client.list_traits()[\u0027traits\u0027]"},{"line_number":58,"context_line":""},{"line_number":59,"context_line":"    def assert_trait_present_in_rp(self, trait, provider):"},{"line_number":60,"context_line":"        \"\"\"Verify if created rait is added in resource provider or not."},{"line_number":61,"context_line":"        \"\"\""},{"line_number":62,"context_line":"        self.assertIn(trait, self.traits_list(), \"trait did not 
created\")"}],"source_content_type":"text/x-python","patch_set":11,"id":"25ca5a34_7f5403d3","line":59,"in_reply_to":"50bcdba5_f7769def","updated":"2024-09-04 06:03:08.000000000","message":"same as above, those 2 lines are big, and used in 3 places","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":62,"context_line":"        self.assertIn(trait, self.traits_list(), \"trait did not created\")"},{"line_number":63,"context_line":"        waiters.wait_for_trait_add_in_rp(self.rp_cl, trait, provider)"},{"line_number":64,"context_line":""},{"line_number":65,"context_line":"    def _restart_nova_compute_service(self, host_name):"},{"line_number":66,"context_line":"        host \u003d clients.ServiceManager(host_name, \u0027nova-compute\u0027)"},{"line_number":67,"context_line":"        host.restart()"},{"line_number":68,"context_line":""}],"source_content_type":"text/x-python","patch_set":11,"id":"701440a9_b3d320ef","line":65,"updated":"2024-08-23 18:50:33.000000000","message":"nit: and same here","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":62,"context_line":"        self.assertIn(trait, self.traits_list(), \"trait did not created\")"},{"line_number":63,"context_line":"        waiters.wait_for_trait_add_in_rp(self.rp_cl, trait, provider)"},{"line_number":64,"context_line":""},{"line_number":65,"context_line":"    def _restart_nova_compute_service(self, host_name):"},{"line_number":66,"context_line":"        host \u003d clients.ServiceManager(host_name, \u0027nova-compute\u0027)"},{"line_number":67,"context_line":"        
host.restart()"},{"line_number":68,"context_line":""}],"source_content_type":"text/x-python","patch_set":11,"id":"19ca2937_eb512fe3","line":65,"in_reply_to":"701440a9_b3d320ef","updated":"2024-09-04 06:03:08.000000000","message":"Done","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":97,"context_line":"        cmd \u003d f\"\"\""},{"line_number":98,"context_line":"        echo \"{template}\" \u003e {self.provider_config_location}/provider.yaml\"\"\""},{"line_number":99,"context_line":"        ssh_client.execute(cmd.strip())"},{"line_number":100,"context_line":"        self._restart_nova_compute_service(target_host)"},{"line_number":101,"context_line":""},{"line_number":102,"context_line":"    def _create_custom_trait_flavor(self, trait):"},{"line_number":103,"context_line":"        \"\"\"Creates a flavor that has given trait"}],"source_content_type":"text/x-python","patch_set":11,"id":"0cce542c_22b188af","line":100,"updated":"2024-08-23 18:50:33.000000000","message":"I\u0027d rather you move the restart out of this method, otherwise its name is misleading since it only talks about creating provider.yaml.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":97,"context_line":"        cmd \u003d f\"\"\""},{"line_number":98,"context_line":"        echo \"{template}\" \u003e {self.provider_config_location}/provider.yaml\"\"\""},{"line_number":99,"context_line":"        ssh_client.execute(cmd.strip())"},{"line_number":100,"context_line":"        
self._restart_nova_compute_service(target_host)"},{"line_number":101,"context_line":""},{"line_number":102,"context_line":"    def _create_custom_trait_flavor(self, trait):"},{"line_number":103,"context_line":"        \"\"\"Creates a flavor that has given trait"}],"source_content_type":"text/x-python","patch_set":11,"id":"be2cfdfe_ad11e4df","line":100,"in_reply_to":"0cce542c_22b188af","updated":"2024-09-04 06:03:08.000000000","message":"Acknowledged","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":108,"context_line":""},{"line_number":109,"context_line":"    def test_valid_trait_with_provider_yaml(self):"},{"line_number":110,"context_line":"        # create guest with trait for host1"},{"line_number":111,"context_line":"        self._create_provider_yaml_at_target_host(self.trait1, self.host2)"},{"line_number":112,"context_line":"        self.assert_trait_present_in_rp(self.trait1, self.host2_id)"},{"line_number":113,"context_line":""},{"line_number":114,"context_line":"        flavor1_id \u003d self._create_custom_trait_flavor(self.trait1)[\u0027id\u0027]"}],"source_content_type":"text/x-python","patch_set":11,"id":"68eca563_2dfc25e1","line":111,"updated":"2024-08-23 18:50:33.000000000","message":"I\u0027d rather you do the host1 and host2 thing differently, the way you\u0027ve done it, mixing trait1 and host2, and defining the hosts way above there, it\u0027s confusing. 
You can find a host (either with `list_compute_hosts()` or via placement like you do above), give it a trait, boot a VM onto it, find a different host with `get_host_other_than()` in tempest, give it a trait, boot on it, etc.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"}],"whitebox_tempest_plugin/common/waiters.py":[{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":55,"context_line":"            \u0027complete, within the required time: (%s s)\u0027 % timeout)"},{"line_number":56,"context_line":""},{"line_number":57,"context_line":""},{"line_number":58,"context_line":"def wait_for_trait_add_in_rp(rp_cl, trait, provider, timeout\u003d10):"},{"line_number":59,"context_line":"    for _ in range(5):"},{"line_number":60,"context_line":"        traits \u003d rp_cl.list_resource_provider_traits(provider)[\u0027traits\u0027]"},{"line_number":61,"context_line":"        if trait in traits:"}],"source_content_type":"text/x-python","patch_set":11,"id":"7f9531f6_743188a5","line":58,"updated":"2024-08-23 18:50:33.000000000","message":"nit: `def wait_for_resource_provider_trait(rp_client, provider, trait):`\n\nI hate that Tempest itself calls it resource_provider_client as opposed to placement_client, but it is what it is.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":55,"context_line":"            \u0027complete, within the required time: (%s s)\u0027 % timeout)"},{"line_number":56,"context_line":""},{"line_number":57,"context_line":""},{"line_number":58,"context_line":"def wait_for_trait_add_in_rp(rp_cl, trait, provider, timeout\u003d10):"},{"line_number":59,"context_line":"    for 
_ in range(5):"},{"line_number":60,"context_line":"        traits \u003d rp_cl.list_resource_provider_traits(provider)[\u0027traits\u0027]"},{"line_number":61,"context_line":"        if trait in traits:"}],"source_content_type":"text/x-python","patch_set":11,"id":"ab9edeb7_a578a710","line":58,"in_reply_to":"7f9531f6_743188a5","updated":"2024-09-04 06:03:08.000000000","message":"Done","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":8864,"name":"Artom Lifshitz","email":"notartom@gmail.com","username":"artom"},"change_message_id":"292b38918699f593cf29ef7b26efe8fdd1ba7c27","unresolved":true,"context_lines":[{"line_number":56,"context_line":""},{"line_number":57,"context_line":""},{"line_number":58,"context_line":"def wait_for_trait_add_in_rp(rp_cl, trait, provider, timeout\u003d10):"},{"line_number":59,"context_line":"    for _ in range(5):"},{"line_number":60,"context_line":"        traits \u003d rp_cl.list_resource_provider_traits(provider)[\u0027traits\u0027]"},{"line_number":61,"context_line":"        if trait in traits:"},{"line_number":62,"context_line":"            return True"}],"source_content_type":"text/x-python","patch_set":11,"id":"f9162957_dc466629","line":59,"updated":"2024-08-23 18:50:33.000000000","message":"Please use `CONF.compute.build_interval` and `CONF.compute.build_timeout`, which also means attempting until the timeout is reached, rather than a fixed number of attempts.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"},{"author":{"_account_id":34860,"name":"Amit Uniyal","email":"auniyal@redhat.com","username":"auniyal"},"change_message_id":"e7aebc40c1f0e08b2e8181daf31ac56c4317a66a","unresolved":false,"context_lines":[{"line_number":56,"context_line":""},{"line_number":57,"context_line":""},{"line_number":58,"context_line":"def wait_for_trait_add_in_rp(rp_cl, trait, provider, timeout\u003d10):"},{"line_number":59,"context_line":"    for _ in range(5):"},{"line_number":60,"context_line":"        traits 
\u003d rp_cl.list_resource_provider_traits(provider)[\u0027traits\u0027]"},{"line_number":61,"context_line":"        if trait in traits:"},{"line_number":62,"context_line":"            return True"}],"source_content_type":"text/x-python","patch_set":11,"id":"bca8a541_cc857ee1","line":59,"in_reply_to":"f9162957_dc466629","updated":"2024-09-04 06:03:08.000000000","message":"having CONF directly here was very much needed, Done.\n\nThe timeout in the other function can be updated too.","commit_id":"7ec3f752cc8202697e9451ce552d44d22c1e57ae"}]}
