)]}'
{"/PATCHSET_LEVEL":[{"author":{"_account_id":13252,"name":"Dr. Jens Harbott","display_name":"Jens Harbott (frickler)","email":"frickler@offenerstapel.de","username":"jrosenboom"},"change_message_id":"504011ccd8f331ad3f1e48c42653884bc6a9ffd2","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"254953c8_e4088292","updated":"2023-08-03 17:15:49.000000000","message":"Do we really need yet another variable for this? I would suggest simply always setting those values.\n\nAlso, if the defaults don\u0027t seem to work well, is it worth considering some adaptation in cinder and nova?","commit_id":"69e45f6ab65d01fe9214aae80062a8b8254f076a"},{"author":{"_account_id":4393,"name":"Dan Smith","email":"dms@danplanet.com","username":"danms"},"change_message_id":"aca741c42dde923da9fd33d3f783eca90bbdc42d","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"602efaa7_f101c437","in_reply_to":"0ad180a6_5ff52b80","updated":"2023-08-03 17:43:20.000000000","message":"I would set it on the base tempest job so basically all of our jobs use the larger timeout. If we don\u0027t care about developers running with the default, then setting it always is easier and I\u0027ll do that.","commit_id":"69e45f6ab65d01fe9214aae80062a8b8254f076a"},{"author":{"_account_id":4393,"name":"Dan Smith","email":"dms@danplanet.com","username":"danms"},"change_message_id":"e5d715e83ff5497fe960c1edf69087222348fe6f","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"dfcfce6b_cb643486","in_reply_to":"254953c8_e4088292","updated":"2023-08-03 17:20:16.000000000","message":"No, I\u0027m happy to just set it, but I expected pushback for doing so. People do have to increase this number for large and busy deployments, but certainly not always. 
And increasing the default makes a real deployment worse in general because we don\u0027t notice that a compute service is dead and keep sending things to it for the rest of the interval, so it becomes a serious magnet. The scheduler continues to see the dead compute as a good place to send new builds, so more go there for as long as it takes to realize it\u0027s a bad idea.\n\nSo I expected the argument for making it a variable in devstack is that the average devstack deployment should use the service defaults as much as possible, and a single or dual-node devstack for development definitely can handle the defaults. It\u0027s really the over-taxed CI workers that need the exception.\n\nIt\u0027s easier for me to just bump it always, so if that\u0027s the preference, I\u0027m happy to simplify this.","commit_id":"69e45f6ab65d01fe9214aae80062a8b8254f076a"},{"author":{"_account_id":8556,"name":"Ghanshyam Maan","display_name":"Ghanshyam Maan","email":"gmaan.os14@gmail.com","username":"ghanshyam"},"change_message_id":"4b26da9373dcae8e46bb9a4d6e395faee1ef5073","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"0ad180a6_5ff52b80","in_reply_to":"dfcfce6b_cb643486","updated":"2023-08-03 17:41:31.000000000","message":"I agree that changing the default on the service side can create a backward-incompatible change and we need to plan it in a better way. I also agree that the current default of 10 sec seems too aggressive for monitoring the service.\n\nOn requiring a new var to set this, I am ok either way. But I do not see which jobs would change it beyond what we are setting in the base devstack job, so just hard coding it here and changing it directly if needed is ok. 
If we find that any such job needs a different value, then we can think of adding it as a var.","commit_id":"69e45f6ab65d01fe9214aae80062a8b8254f076a"},{"author":{"_account_id":5314,"name":"Brian Rosmaita","email":"rosmaita.fossdev@gmail.com","username":"brian-rosmaita"},"change_message_id":"ca429085c3af1482911037012d5614805bc8890d","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"42ec0512_e7f0608b","updated":"2023-08-04 12:40:03.000000000","message":"I agree with the reasoning in Dan\u0027s commit message.","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"},{"author":{"_account_id":13252,"name":"Dr. Jens Harbott","display_name":"Jens Harbott (frickler)","email":"frickler@offenerstapel.de","username":"jrosenboom"},"change_message_id":"6356ff4a0aa7e3006de7df663dfae54b515c365e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"cd4bee48_08a38024","updated":"2023-08-04 16:06:11.000000000","message":"I would still simplify this even more, but I\u0027ll leave that up to you.\n\nNot workflowing because of the dependency, to avoid endless gate-failure cycles.","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"},{"author":{"_account_id":4393,"name":"Dan Smith","email":"dms@danplanet.com","username":"danms"},"change_message_id":"100753033ead66089d650bd63cc80d6367abf148","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"dc64609a_9a51dee9","updated":"2023-08-08 00:43:42.000000000","message":"recheck tempest fix merged","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"},{"author":{"_account_id":4393,"name":"Dan Smith","email":"dms@danplanet.com","username":"danms"},"change_message_id":"88385018c83bed6fb287ae3d112f5e7b7217d167","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"c42c30e0_f4c4237e","updated":"2023-08-07 13:33:49.000000000","message":"recheck unrelated glance disk format 
error","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"},{"author":{"_account_id":8556,"name":"Ghanshyam Maan","display_name":"Ghanshyam Maan","email":"gmaan.os14@gmail.com","username":"ghanshyam"},"change_message_id":"186c2194cd2946ea5f6ee137efca6be3a6dbce29","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"be0b39d2_38255327","updated":"2023-08-04 01:08:42.000000000","message":"thanks, lgtm","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"},{"author":{"_account_id":22873,"name":"Martin Kopec","email":"mkopec@redhat.com","username":"mkopec"},"change_message_id":"cbbbe6dad50f92313095b1a2f126a0fd80093934","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"1ffa0ae0_ea32cd62","updated":"2023-08-07 09:26:37.000000000","message":"workflowing, the depends-on is merged and the gate seems to be stable now ... let\u0027s take advantage of that while it lasts","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"}],"lib/cinder":[{"author":{"_account_id":13252,"name":"Dr. 
Jens Harbott","display_name":"Jens Harbott (frickler)","email":"frickler@offenerstapel.de","username":"jrosenboom"},"change_message_id":"6356ff4a0aa7e3006de7df663dfae54b515c365e","unresolved":true,"context_lines":[{"line_number":330,"context_line":"    # details and example failures."},{"line_number":331,"context_line":"    iniset $CINDER_CONF DEFAULT rpc_response_timeout 120"},{"line_number":332,"context_line":""},{"line_number":333,"context_line":"    iniset $CINDER_CONF DEFAULT report_interval $CINDER_SERVICE_REPORT_INTERVAL"},{"line_number":334,"context_line":"    iniset $CINDER_CONF DEFAULT service_down_time $(($CINDER_SERVICE_REPORT_INTERVAL * 6))"},{"line_number":335,"context_line":""},{"line_number":336,"context_line":"    if is_service_enabled c-vol \u0026\u0026 [[ -n \"$CINDER_ENABLED_BACKENDS\" ]]; then"}],"source_content_type":"application/x-shellscript","patch_set":3,"id":"326a2236_15919f0d","line":333,"updated":"2023-08-04 16:06:11.000000000","message":"I would not even use the variable; just set the value here, like for rpc_response_timeout above. That way the comment would also be in the correct context.","commit_id":"3832ff52b4445324b58a5da123ef4e3880df1591"}]}
