{"/PATCHSET_LEVEL":[{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"01f9ff11ca841bcbd711fe8462350644b4035712","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"49329e9c_4a0182a0","updated":"2023-03-08 01:45:51.000000000","message":"The one possible failure I see here that would need to be verified is, what happens if two DHCP agents try to configure this IP at the exact same time - will they both fail DAD? I don\u0027t remember what the IPv6 RFC says about it, but if both are still in tentative state and see the other advertisement I think they\u0027d both fail.\n\nI realize this is just a stopgap fix until something better is done but just thinking out loud.","commit_id":"6959e0787bd5a8f423b51b922f2a526c0ac2518c"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"4c8f2f9b96b89c098d832a39765db33fcef3460a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"8d6b12d1_133677db","in_reply_to":"49329e9c_4a0182a0","updated":"2023-03-08 12:56:06.000000000","message":"Good questions. I am sure I have not yet fully grasped the RFC with all its consequences: https://www.rfc-editor.org/rfc/rfc4862#section-5.4\n\nHowever, I ran an experiment:\n\n- Raised /proc/sys/net/ipv6/neigh/IFACE/retrans_time to 2000 (ms).\n- Prepared commands to configure the same link-local v6 address on two Linux kernel interfaces on the same link.\n- \"Hit Enter\" on the prepared commands as simultaneously as I could, by sending input to two screen windows instead of doing this manually.\n\nBased on this (I never saw both fail) I\u0027m quite sure that simply starting interface configuration twice with a delay less than retrans_time is not sufficient to make both DAD fail. 
Other timing conditions of course still could cause both to fail, but I believe we can exclude the big one.","commit_id":"6959e0787bd5a8f423b51b922f2a526c0ac2518c"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"5b88d2aca9eb89a17bcca78209bd711c2abd0d79","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"3641d2c4_22316cdf","in_reply_to":"8d6b12d1_133677db","updated":"2023-03-08 14:55:40.000000000","message":"I could not get it to fail either, so I guess we could treat it as an extreme edge case.\n\nThe other hack would be to set accept_dad\u003d0 on the interface, I\u0027m not sure what impact that would have - i.e. could an http stream actually work correctly?","commit_id":"6959e0787bd5a8f423b51b922f2a526c0ac2518c"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"c6e18b554e329a7b66c5fafd732ba6da0921defc","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"12a2c379_11e99ae8","updated":"2023-03-09 15:16:45.000000000","message":"I believe metadata haproxy fails to start when the v6 metadata address is in dadfailed state even if we suppressed the exception. And we use the same haproxy to listen on ipv4 and ipv6. There\u0027s no error message in the logs. 
My first guess is that it cannot bind on an interface in the tentative state, but I need to investigate.","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"f4231e276e7de289666d8322c59749003d0559e1","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"f99a1141_2c2da010","updated":"2023-03-08 23:40:24.000000000","message":"I think this change would be fine until (I hope) we can fix it better. And we\u0027ve documented the failure case.","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":16688,"name":"Rodolfo Alonso","email":"ralonsoh@redhat.com","username":"rodolfo-alonso-hernandez"},"change_message_id":"a90cd1c9612e054fde99e1d0c5bd7f16073357ce","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"b9c37a23_2c97c97a","updated":"2023-03-09 10:13:08.000000000","message":"This patch deserves a reno with a comment in \"issues\" explaining the current problem and in \"other\", stating that we don\u0027t provide (for now), HA for metadata","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"56d60a1b025f973f62c138794aaf946aac96d20e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"4a220c43_e2aff444","in_reply_to":"12a2c379_11e99ae8","updated":"2023-03-09 15:21:52.000000000","message":"Or I\u0027m just catching the exception at the wrong place.","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian 
Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e467551d7758c3d755e49529adf63323d65adc8d","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":4,"id":"ae81e6b2_7572faca","updated":"2023-03-10 17:39:39.000000000","message":"I pushed a new patch for this since the fullstack job is unstable without this change from what I can tell.","commit_id":"d67564a9ac5bf7d2d77f93de44abf5308db2566a"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"ccf61823de7d637c2624348cac32c238cc73ef96","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"b276f0fb_067efe95","updated":"2023-03-10 22:18:14.000000000","message":"Argh, this doesn\u0027t fix the fullstack job as I thought :(","commit_id":"fff1f8c20f82b53b49b83693a0f031c645f7b9f2"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"dab108245600ece6a41f5caf9fab5ce4f8220abe","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"16ab8e48_e6c79f52","updated":"2023-03-11 15:09:03.000000000","message":"recheck fullstack test failure","commit_id":"fff1f8c20f82b53b49b83693a0f031c645f7b9f2"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"19f44769f930951d567f78ef8accd302e72a87de","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":7,"id":"54a815e6_6df2009b","updated":"2023-03-13 12:26:44.000000000","message":"Thanks Brian for the changes. 
Added some more docs, including a release note.","commit_id":"c0f21394ec756eb0183d4d595c15950ea0f2dc39"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"af9cbf17dbc18cb1e24491835563acfc34568adb","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":7,"id":"ec877d1b_7fde5d90","in_reply_to":"54a815e6_6df2009b","updated":"2023-03-13 14:18:45.000000000","message":"Hi Bence. Yes, I pushed an update because it seemed to fix the fullstack issue, but then a re-check showed it didn\u0027t so it could have waited. Either way it\u0027s one step closer...","commit_id":"c0f21394ec756eb0183d4d595c15950ea0f2dc39"},{"author":{"_account_id":8313,"name":"Lajos Katona","display_name":"lajoskatona","email":"katonalala@gmail.com","username":"elajkat","status":"Ericsson Software Technology"},"change_message_id":"eb02d64272c638957980cd58346eab424cfbe18b","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"145bd43c_4554b214","updated":"2023-03-23 15:30:37.000000000","message":"recheck\nopenstack-tox-cover post_failure","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e8a0f9e86b8b1b4ed06c40b18aaf190f2184ef3a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"67b526a6_6b7e4c73","updated":"2023-03-23 13:43:37.000000000","message":"recheck tox-cover POST_FAILURE","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"3e6335db8555152bf1741ce9a81a14355104542c","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"6e2bb7b2_e0a46580","updated":"2023-03-20 22:09:28.000000000","message":"recheck tox-cover 
POST_FAILURE","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"435872dfa16693cad9cfb840c8999707c86e30a7","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":12,"id":"d2a9329b_dcb465f9","updated":"2023-04-04 13:09:08.000000000","message":"Finally I got to testing this, and it worked.\n\nI scheduled a network on two DHCP agents. One got dadfailed, of course. I stopped the other agent and deleted the metadata v6 address, as if the host went offline. When I restarted the other DHCP agent, it was able to properly configure the v6 metadata address as expected.\n","commit_id":"e8b03e55968e386af44090349521de8ad5cec20f"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"90a12cfa0e21443b4539cae4d509a7a1c1e47bcd","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":12,"id":"a697dd92_4495d7db","updated":"2023-04-04 13:42:28.000000000","message":"Had to rebase as well.","commit_id":"e8b03e55968e386af44090349521de8ad5cec20f"},{"author":{"_account_id":16688,"name":"Rodolfo Alonso","email":"ralonsoh@redhat.com","username":"rodolfo-alonso-hernandez"},"change_message_id":"f2b04dee1c8151e775663c2cb7beccd97b60624a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"0cf6091a_39d2d577","updated":"2023-04-17 10:28:21.000000000","message":"BTW, this is a candidate to be backported, for sure (but not the FUP n-lib patch)","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":16688,"name":"Rodolfo 
Alonso","email":"ralonsoh@redhat.com","username":"rodolfo-alonso-hernandez"},"change_message_id":"661c3b59d44a01719a5e22cbf13845699301dd56","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"a5670cd0_25bf8d02","updated":"2023-04-17 10:26:44.000000000","message":"Good documentation, thanks for fixing this bug.","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"6fa38a46688cf76c15536378756e6763c77de883","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"1c151f38_06f00f58","updated":"2023-04-05 09:45:31.000000000","message":"I think this is ready to merge. Thanks Brian for all the help!","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"e6ac3e9ad3bd7ddb4cdc7b8a70d51c6c24f21fab","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"29dfa74e_37220f82","updated":"2023-04-21 07:52:20.000000000","message":"recheck The failure of test_metadata_proxy_respawned could be related to this change, however the same test has passed multiple times in the gate before and 6 out of 6 times locally. 
So the cause is either environmental or this test is unstable, failing with low frequency.","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"c586a69fc9f8fe50f49d9456d51cec8bc9a2c5b8","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"c3be9214_a866b6a2","updated":"2023-04-24 00:09:12.000000000","message":"recheck devstack install issues","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":16688,"name":"Rodolfo Alonso","email":"ralonsoh@redhat.com","username":"rodolfo-alonso-hernandez"},"change_message_id":"ae8619c3a09d5414ce7832a12b221551805946f0","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"dafaafab_20527bcf","updated":"2023-04-17 15:58:28.000000000","message":"recheck functional","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e03aa21b608193df7cb5099d1b57b80e0c38b68e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"3bb298a1_eaa28d0f","updated":"2023-04-26 03:38:40.000000000","message":"recheck gate fixes merged","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":7730,"name":"Sahid Orentino Ferdjaoui","email":"sahid.ferdjaoui@industrialdiscipline.com","username":"sahid"},"change_message_id":"26eace2973355db72f4d8a057bbc1fd60baac58e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"c289e2e6_f11476b2","updated":"2023-04-23 15:54:56.000000000","message":"recheck grenade","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 
(+DST)"},"change_message_id":"31d1b0337a4c0771f3f799856375e740853ac578","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"132d0389_f8c6f308","updated":"2023-04-19 13:19:05.000000000","message":"recheck https://bugs.launchpad.net/neutron/+bug/2015065","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"d8c25754ea27719ad59ee14a9a46f106cf36570e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"f6868f0e_cb6aee6b","updated":"2023-04-18 10:50:22.000000000","message":"recheck https://bugs.launchpad.net/neutron/+bug/2015065\n\nI will start backporting this.","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"ea548cd5e12e3a57032ec5b1e0411e94896e94ba","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"c21fd605_6da5746a","updated":"2023-04-25 11:59:21.000000000","message":"recheck https://lists.openstack.org/pipermail/openstack-discuss/2023-April/033464.html","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"1f77c6aa720a0b794e4192edd81b4b65fa734884","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"f961451b_4d2fabe6","updated":"2023-04-20 08:51:52.000000000","message":"recheck image download problem","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":1131,"name":"Brian 
Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"0e5a3a4a0b5f502b8908c51d166a78d98b6f723f","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"d1cf4eb3_c35657c1","updated":"2023-04-04 16:28:57.000000000","message":"recheck ovn-rally-task timeout","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"c2add8e96fc8169d7f1f9438a56a68cb04b7c40a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"72ffb749_15c16231","updated":"2023-04-20 12:04:40.000000000","message":"recheck ubuntu package repo problem","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":8313,"name":"Lajos Katona","display_name":"lajoskatona","email":"katonalala@gmail.com","username":"elajkat","status":"Ericsson Software Technology"},"change_message_id":"acb43de1de1ab331f9a01ec6cec41229bbd58cc0","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":13,"id":"e7ea2f81_a00fd64c","updated":"2023-04-14 11:05:12.000000000","message":"thanks for working on this","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"}],"doc/source/admin/config-dhcp-ha.rst":[{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"ddeeb706e0e72dddbc58c713f3bd8737da33b2cf","unresolved":true,"context_lines":[{"line_number":461,"context_line":"Even when you have multiple DHCP agents, an arbitrary one (where the metadata"},{"line_number":462,"context_line":"IPs are not in dadfailed status) will serve all metadata requests. 
When that"},{"line_number":463,"context_line":"metadata service instance becomes unreachable there is no failover."},{"line_number":464,"context_line":"As far as we can tell the kernel exposes the dadfailed status only for IPv6."},{"line_number":465,"context_line":""},{"line_number":466,"context_line":"Disabling and removing an agent"},{"line_number":467,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"}],"source_content_type":"text/x-rst","patch_set":3,"id":"a4651e7c_5b3424ac","line":464,"updated":"2023-03-08 15:38:14.000000000","message":"What Brian found makes me question what I wrote about v4 here:\n\nhttps://meetings.opendev.org/irclogs/%23openstack-neutron/%23openstack-neutron.2023-03-08.log.html#t2023-03-08T15:05:45","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e467551d7758c3d755e49529adf63323d65adc8d","unresolved":false,"context_lines":[{"line_number":461,"context_line":"Even when you have multiple DHCP agents, an arbitrary one (where the metadata"},{"line_number":462,"context_line":"IPs are not in dadfailed status) will serve all metadata requests. 
When that"},{"line_number":463,"context_line":"metadata service instance becomes unreachable there is no failover."},{"line_number":464,"context_line":"As far as we can tell the kernel exposes the dadfailed status only for IPv6."},{"line_number":465,"context_line":""},{"line_number":466,"context_line":"Disabling and removing an agent"},{"line_number":467,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"}],"source_content_type":"text/x-rst","patch_set":3,"id":"af67490b_1f4bf663","line":464,"in_reply_to":"355262bc_9c95aa0b","updated":"2023-03-10 17:39:39.000000000","message":"I updated the doc based on that, hopefully it\u0027s better?","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"f4231e276e7de289666d8322c59749003d0559e1","unresolved":true,"context_lines":[{"line_number":461,"context_line":"Even when you have multiple DHCP agents, an arbitrary one (where the metadata"},{"line_number":462,"context_line":"IPs are not in dadfailed status) will serve all metadata requests. When that"},{"line_number":463,"context_line":"metadata service instance becomes unreachable there is no failover."},{"line_number":464,"context_line":"As far as we can tell the kernel exposes the dadfailed status only for IPv6."},{"line_number":465,"context_line":""},{"line_number":466,"context_line":"Disabling and removing an agent"},{"line_number":467,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"}],"source_content_type":"text/x-rst","patch_set":3,"id":"ea0f2e1e_b4492472","line":464,"in_reply_to":"a4651e7c_5b3424ac","updated":"2023-03-08 23:40:24.000000000","message":"I think, based on your cut/paste on irc, that each DHCP agent will inject a route for the IPv4 metadata address, so it would be HA assuming you can reach the DHCP IP. 
For IPv6 it definitely won\u0027t be HA as only one namespace will have a valid address.","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"c6e18b554e329a7b66c5fafd732ba6da0921defc","unresolved":true,"context_lines":[{"line_number":461,"context_line":"Even when you have multiple DHCP agents, an arbitrary one (where the metadata"},{"line_number":462,"context_line":"IPs are not in dadfailed status) will serve all metadata requests. When that"},{"line_number":463,"context_line":"metadata service instance becomes unreachable there is no failover."},{"line_number":464,"context_line":"As far as we can tell the kernel exposes the dadfailed status only for IPv6."},{"line_number":465,"context_line":""},{"line_number":466,"context_line":"Disabling and removing an agent"},{"line_number":467,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"}],"source_content_type":"text/x-rst","patch_set":3,"id":"355262bc_9c95aa0b","line":464,"in_reply_to":"ea0f2e1e_b4492472","updated":"2023-03-09 15:16:45.000000000","message":"My newest findings say we have some recovery of ipv4 isolated metadata when the dhcp lease expires and we get a new lease from another dhcp server:\n\nhttps://bugs.launchpad.net/neutron/+bug/1953165/comments/24","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"60a7eaf811136e59903220886d4a5fe1d7ebc39c","unresolved":false,"context_lines":[{"line_number":465,"context_line":"See `RFC 4862 \u003chttps://www.rfc-editor.org/rfc/rfc4862#section-5.4\u003e`_ for"},{"line_number":466,"context_line":"details on the DAD process."},{"line_number":467,"context_line":""},{"line_number":468,"context_line":"For this reason, even when you have multiple DHCP agents, an 
arbitrary one "},{"line_number":469,"context_line":"(where the metadata IPv6 address is not in `dadfailed` state) will serve all"},{"line_number":470,"context_line":"metadata requests over IPv6. When that metadata service instance becomes"},{"line_number":471,"context_line":"unreachable there is no failover and the service will become unreachable."}],"source_content_type":"text/x-rst","patch_set":5,"id":"c7634306_d4c2ed22","line":468,"range":{"start_line":468,"start_character":74,"end_line":468,"end_character":75},"updated":"2023-03-10 20:25:39.000000000","message":"I\u0027ll fix the trailing space :(","commit_id":"115d61bc8e68c8cb8def51ba4db3ee565f264b03"},{"author":{"_account_id":11975,"name":"Slawek Kaplonski","email":"skaplons@redhat.com","username":"slaweq"},"change_message_id":"deeb3abcae89aa4df375475c7a30ca303b8dc6e5","unresolved":true,"context_lines":[{"line_number":458,"context_line":"metadata IPv4 address (`169.254.169.254`) via its own IP address, so it will"},{"line_number":459,"context_line":"be reachable as long as the DHCP service is available at that IP address."},{"line_number":460,"context_line":"This also means that recovery after a failure is tied to the renewal of the"},{"line_number":461,"context_line":"DHCP lease."},{"line_number":462,"context_line":""},{"line_number":463,"context_line":"With IPv6, the well known metadata IPv6 address (`fe80::a9fe:a9fe`) is used,"},{"line_number":464,"context_line":"but directly configured in the DHCP agent network namespace."}],"source_content_type":"text/x-rst","patch_set":9,"id":"9ff110c9_dddea9f7","line":461,"updated":"2023-03-16 16:27:42.000000000","message":"I\u0027m not really sure this is 100% true for IPv4. It\u0027s true that the metadata IP will be configured in the qdhcp namespace by each DHCP agent which hosts the network, but the VM will get a route to 169.254.169.254 from only one of them, so if the metadata service on that node is down, the metadata service for that VM will not be available. 
Am I missing something here?","commit_id":"688e6800b88e48b46a4bd61fc0f1273649fcb801"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"cb950f9396242a3181d075baa0a91fcd5e821b8d","unresolved":false,"context_lines":[{"line_number":458,"context_line":"metadata IPv4 address (`169.254.169.254`) via its own IP address, so it will"},{"line_number":459,"context_line":"be reachable as long as the DHCP service is available at that IP address."},{"line_number":460,"context_line":"This also means that recovery after a failure is tied to the renewal of the"},{"line_number":461,"context_line":"DHCP lease."},{"line_number":462,"context_line":""},{"line_number":463,"context_line":"With IPv6, the well known metadata IPv6 address (`fe80::a9fe:a9fe`) is used,"},{"line_number":464,"context_line":"but directly configured in the DHCP agent network namespace."}],"source_content_type":"text/x-rst","patch_set":9,"id":"c8538858_80eb2a81","line":461,"in_reply_to":"196cccab_0d1239b4","updated":"2023-03-20 19:53:32.000000000","message":"I\u0027ll add a small addition at the end of this sentence to make it clearer:\n\n\"... 
since that route will only change if the DHCP server for a VM\nchanges.\"","commit_id":"688e6800b88e48b46a4bd61fc0f1273649fcb801"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"a0340d07a01fbbc36aefc5ae61312dcf3ae7e918","unresolved":true,"context_lines":[{"line_number":458,"context_line":"metadata IPv4 address (`169.254.169.254`) via its own IP address, so it will"},{"line_number":459,"context_line":"be reachable as long as the DHCP service is available at that IP address."},{"line_number":460,"context_line":"This also means that recovery after a failure is tied to the renewal of the"},{"line_number":461,"context_line":"DHCP lease."},{"line_number":462,"context_line":""},{"line_number":463,"context_line":"With IPv6, the well known metadata IPv6 address (`fe80::a9fe:a9fe`) is used,"},{"line_number":464,"context_line":"but directly configured in the DHCP agent network namespace."}],"source_content_type":"text/x-rst","patch_set":9,"id":"196cccab_0d1239b4","line":461,"in_reply_to":"9ff110c9_dddea9f7","updated":"2023-03-17 14:25:02.000000000","message":"I think we are saying the same thing? I\u0027ll explain through output and maybe that will help us write this part better.\n\nWhen you boot a VM on an isolated network, the DHCP reply will have an extra route for metadata that the DHCP client will configure. For example:\n\n$ ip r g 169.254.169.254\n169.254.169.254 via 10.0.0.66 dev eth0  src 10.0.0.71\n\n10.0.0.66 here is the DHCP server IP.\n\nIf I have a second DHCP server, another VM could boot and get:\n\n$ ip r g 169.254.169.254\n169.254.169.254 via 10.0.0.67 dev eth0  src 10.0.0.72\n\nIf that second DHCP server goes offline, that second VM can\u0027t reach metadata since it\u0027s tied to the DHCP server IP. 
But on reboot it would get a lease from an online server, and \"fix\" itself:\n\n$ ip r g 169.254.169.254\n169.254.169.254 via 10.0.0.66 dev eth0  src 10.0.0.72\n\nThat\u0027s what we are trying to explain.\n\nInterestingly, if there was no route installed, it would \"just work\", even if multiple DHCP servers responded with ARP for the metadata IP, as there is no IPv4 DAD to stop the configuration of the IP.\n\nAnyways, if you have any suggestions we can update things. Thanks.","commit_id":"688e6800b88e48b46a4bd61fc0f1273649fcb801"}],"neutron/agent/dhcp/agent.py":[{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"01f9ff11ca841bcbd711fe8462350644b4035712","unresolved":true,"context_lines":[{"line_number":829,"context_line":"            dad_text \u003d ("},{"line_number":830,"context_line":"                \u0027Failure waiting for address fe80::a9fe:a9fe to become ready: \u0027"},{"line_number":831,"context_line":"                \u0027Duplicate address detected\u0027"},{"line_number":832,"context_line":"            )"},{"line_number":833,"context_line":"            if dad_text in str(exc):"},{"line_number":834,"context_line":"                LOG.info("},{"line_number":835,"context_line":"                    \u0027Suppressing error on network %s: %s\u0027,"}],"source_content_type":"text/x-python","patch_set":1,"id":"25af8bc4_1e6b2c65","line":832,"updated":"2023-03-08 01:45:51.000000000","message":"I have another thought. Since wait_until_address_ready() knows what the failure actually was, create a new exception called DadFailed(), then raise it in this case. 
That way we don\u0027t have to try and grok the string.","commit_id":"6959e0787bd5a8f423b51b922f2a526c0ac2518c"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"4c8f2f9b96b89c098d832a39765db33fcef3460a","unresolved":false,"context_lines":[{"line_number":829,"context_line":"            dad_text \u003d ("},{"line_number":830,"context_line":"                \u0027Failure waiting for address fe80::a9fe:a9fe to become ready: \u0027"},{"line_number":831,"context_line":"                \u0027Duplicate address detected\u0027"},{"line_number":832,"context_line":"            )"},{"line_number":833,"context_line":"            if dad_text in str(exc):"},{"line_number":834,"context_line":"                LOG.info("},{"line_number":835,"context_line":"                    \u0027Suppressing error on network %s: %s\u0027,"}],"source_content_type":"text/x-python","patch_set":1,"id":"f953cf36_2cc22321","line":832,"in_reply_to":"25af8bc4_1e6b2c65","updated":"2023-03-08 12:56:06.000000000","message":"Done","commit_id":"6959e0787bd5a8f423b51b922f2a526c0ac2518c"},{"author":{"_account_id":16688,"name":"Rodolfo Alonso","email":"ralonsoh@redhat.com","username":"rodolfo-alonso-hernandez"},"change_message_id":"a90cd1c9612e054fde99e1d0c5bd7f16073357ce","unresolved":true,"context_lines":[{"line_number":825,"context_line":"                self.conf,"},{"line_number":826,"context_line":"                bind_address\u003dconstants.METADATA_V4_IP,"},{"line_number":827,"context_line":"                **kwargs)"},{"line_number":828,"context_line":"        except ip_lib.DADFailed as exc:"},{"line_number":829,"context_line":"            LOG.info("},{"line_number":830,"context_line":"                \u0027Suppressing error on network %s: %s\u0027, network.id, 
str(exc))"},{"line_number":831,"context_line":""}],"source_content_type":"text/x-python","patch_set":3,"id":"4083321b_32afbf38","line":828,"range":{"start_line":828,"start_character":8,"end_line":828,"end_character":39},"updated":"2023-03-09 10:13:08.000000000","message":"It looks weird to catch this exception here. Why not in the \"spawn_monitored_metadata_proxy\" method?","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e467551d7758c3d755e49529adf63323d65adc8d","unresolved":false,"context_lines":[{"line_number":825,"context_line":"                self.conf,"},{"line_number":826,"context_line":"                bind_address\u003dconstants.METADATA_V4_IP,"},{"line_number":827,"context_line":"                **kwargs)"},{"line_number":828,"context_line":"        except ip_lib.DADFailed as exc:"},{"line_number":829,"context_line":"            LOG.info("},{"line_number":830,"context_line":"                \u0027Suppressing error on network %s: %s\u0027, network.id, str(exc))"},{"line_number":831,"context_line":""}],"source_content_type":"text/x-python","patch_set":3,"id":"01839861_c5b1ee16","line":828,"range":{"start_line":828,"start_character":8,"end_line":828,"end_character":39},"in_reply_to":"4083321b_32afbf38","updated":"2023-03-10 17:39:39.000000000","message":"Done","commit_id":"b73701e6059ba9872178ee62fd0217d3d93903d3"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"60a7eaf811136e59903220886d4a5fe1d7ebc39c","unresolved":false,"context_lines":[{"line_number":823,"context_line":"            self.conf,"},{"line_number":824,"context_line":"            bind_address\u003dconstants.METADATA_V4_IP,"},{"line_number":825,"context_line":"            **kwargs)"},{"line_number":826,"context_line":""},{"line_number":827,"context_line":"    def 
disable_isolated_metadata_proxy(self, network):"},{"line_number":828,"context_line":"        if (self.conf.enable_metadata_network and"},{"line_number":829,"context_line":"                network.id in self._metadata_routers):"}],"source_content_type":"text/x-python","patch_set":5,"id":"e3699112_095f11d4","line":826,"updated":"2023-03-10 20:25:39.000000000","message":"I\u0027ll remove this change as well.","commit_id":"115d61bc8e68c8cb8def51ba4db3ee565f264b03"}],"neutron/agent/linux/ip_lib.py":[{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"5b88d2aca9eb89a17bcca78209bd711c2abd0d79","unresolved":true,"context_lines":[{"line_number":104,"context_line":""},{"line_number":105,"context_line":"class DADFailed(AddressNotReady):"},{"line_number":106,"context_line":"    message \u003d _(\"Failure waiting for address %(address)s to \""},{"line_number":107,"context_line":"                \"become ready: %(reason)s\")"},{"line_number":108,"context_line":""},{"line_number":109,"context_line":""},{"line_number":110,"context_line":"InvalidArgument \u003d privileged.InvalidArgument"}],"source_content_type":"text/x-python","patch_set":2,"id":"b2a2eaea_cdb21e04","line":107,"updated":"2023-03-08 14:55:40.000000000","message":"Is putting \u0027pass\u0027 enough here since you didn\u0027t change the message?","commit_id":"8d881db7512ec0d90dcabcaeeda941912c6c4681"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e467551d7758c3d755e49529adf63323d65adc8d","unresolved":false,"context_lines":[{"line_number":104,"context_line":""},{"line_number":105,"context_line":"class DADFailed(AddressNotReady):"},{"line_number":106,"context_line":"    message \u003d _(\"Failure waiting for address %(address)s to \""},{"line_number":107,"context_line":"                \"become ready: 
%(reason)s\")"},{"line_number":108,"context_line":""},{"line_number":109,"context_line":""},{"line_number":110,"context_line":"InvalidArgument \u003d privileged.InvalidArgument"}],"source_content_type":"text/x-python","patch_set":2,"id":"5f48fb07_afd433e3","line":107,"in_reply_to":"b2a2eaea_cdb21e04","updated":"2023-03-10 17:39:39.000000000","message":"Done","commit_id":"8d881db7512ec0d90dcabcaeeda941912c6c4681"}],"neutron/agent/metadata/driver.py":[{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"c7d316998cec695f8ffb0251df6b1532ed64d718","unresolved":true,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                return"},{"line_number":256,"context_line":"        pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"28b66361_e49b880a","line":254,"updated":"2023-03-21 00:22:07.000000000","message":"So here I suppose one thing we could do is to delete \u0027bind_address_v6\u0027 by calling:\n\n  ip_lib.delete_ip_address(bind_address_v6, bind_interface, namespace\u003dns_name)\n \nBut I think that will fail since we defined the cidr as /64 (incorrectly), and that method will compute it to /128 so we\u0027ll get:\n\n$ sudo ip a d fe80::a9fe:a9fe/128 dev tap999\nRTNETLINK answers: Cannot assign requested address\n\nOn a side note I was starting to look at the DHCP schedule, to see if we could just use that to move the IP to another 
agent.","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e8a0f9e86b8b1b4ed06c40b18aaf190f2184ef3a","unresolved":true,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                return"},{"line_number":256,"context_line":"        pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"deb63e8a_12f9b0d1","line":254,"in_reply_to":"011ae5b3_5d7bf8e7","updated":"2023-03-23 13:43:37.000000000","message":"Right. I don\u0027t think we have to delete it, I just thought of one (untested) scenario where it might help.\n\nIf we have \u003e1 dhcp agent for a network and the one that has the metadata address goes offline, when we reschedule things will that ever trigger the others to try and add the metadata IP again? If it does then we don\u0027t want the address there since it will prevent it from getting re-added and working. 
I haven\u0027t tested as I only have a single node devstack here.\n\nIn the end we should just fix it a better way of course.","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"e500a3c34f23181fbe58d0bc14d7816254a00614","unresolved":true,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                return"},{"line_number":256,"context_line":"        pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"ba7fc1aa_3a3a07d4","line":254,"in_reply_to":"0148a9f8_8ab027ee","updated":"2023-03-29 03:15:16.000000000","message":"Hi Bence, will push the update tomorrow after the vPTG, ran out of time today.","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"b1bd1a4a6e4a5afdbfd059349a48de6990a3351d","unresolved":true,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                return"},{"line_number":256,"context_line":"        
pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"bf66d210_943c8c39","line":254,"in_reply_to":"28b66361_e49b880a","updated":"2023-03-21 23:37:11.000000000","message":"So I made a small change to delete the address on DAD failure, let me know if you want me to push it, or maybe I can wait until we can get a better fix.","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"88ceab71a6343b7b82754be3f268280870fc5384","unresolved":false,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                return"},{"line_number":256,"context_line":"        pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"8e119412_cc357646","line":254,"in_reply_to":"ba7fc1aa_3a3a07d4","updated":"2023-03-29 20:24:38.000000000","message":"Done","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"f1abbd68702d2c32a2bee17f2f0332ae6cffae89","unresolved":true,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single 
agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                return"},{"line_number":256,"context_line":"        pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"011ae5b3_5d7bf8e7","line":254,"in_reply_to":"bf66d210_943c8c39","updated":"2023-03-23 10:14:04.000000000","message":"Sorry for the slow response.\n\nI\u0027m slightly inclined not to delete the dadfailed address, but I\u0027m not strongly against it.\n\n1) I believe that the presence of the dadfailed address correctly represents the design limitation we have currently. And it\u0027s better to show it, as it has consequences. If we delete it, the problem becomes somewhat hidden, harder to detect.\n\n2) AFAIU the RFC, an address that\u0027s in tentative state (and one that stays in tentative state indefinitely because of dadfailed) is not really configured on the interface, therefore I don\u0027t expect further side effect, just because we didn\u0027t delete it.","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"6c9c459cd9bfa3b7e21723defa313f835248fd37","unresolved":true,"context_lines":[{"line_number":251,"context_line":"                # configured this metadata address, so all requests will"},{"line_number":252,"context_line":"                # be via that single agent."},{"line_number":253,"context_line":"                LOG.info(\u0027Suppressing error on network %s: %s\u0027,"},{"line_number":254,"context_line":"                         network_id, str(exc))"},{"line_number":255,"context_line":"                
return"},{"line_number":256,"context_line":"        pm.enable()"},{"line_number":257,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":10,"id":"0148a9f8_8ab027ee","line":254,"in_reply_to":"deb63e8a_12f9b0d1","updated":"2023-03-28 10:55:43.000000000","message":"I see your point. I was also entertaining the idea of removing the dadfailed address just before readding it from init_l3(). But that code is quite generic and this would not properly fit there. So you convinced me that it\u0027s better to delete the dadfailed address here. Please push the patch you have.","commit_id":"e8aff94dce85e9ebb7f964f82e0ee6af02f0c49b"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"435872dfa16693cad9cfb840c8999707c86e30a7","unresolved":true,"context_lines":[{"line_number":264,"context_line":"                                             namespace\u003dns_name)"},{"line_number":265,"context_line":"                except Exception as exc:"},{"line_number":266,"context_line":"                    # do not re-raise a delete failure, just log"},{"line_number":267,"context_line":"                    LOG.debug(\u0027Address deletion failure: %s\u0027, str(exc))"},{"line_number":268,"context_line":"                return"},{"line_number":269,"context_line":"        pm.enable()"},{"line_number":270,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":12,"id":"f39d056a_0b9538ec","line":267,"range":{"start_line":267,"start_character":24,"end_line":267,"end_character":29},"updated":"2023-04-04 13:09:08.000000000","message":"nit: This could be a warning or an error.","commit_id":"e8b03e55968e386af44090349521de8ad5cec20f"},{"author":{"_account_id":1131,"name":"Brian 
Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"90a12cfa0e21443b4539cae4d509a7a1c1e47bcd","unresolved":false,"context_lines":[{"line_number":264,"context_line":"                                             namespace\u003dns_name)"},{"line_number":265,"context_line":"                except Exception as exc:"},{"line_number":266,"context_line":"                    # do not re-raise a delete failure, just log"},{"line_number":267,"context_line":"                    LOG.debug(\u0027Address deletion failure: %s\u0027, str(exc))"},{"line_number":268,"context_line":"                return"},{"line_number":269,"context_line":"        pm.enable()"},{"line_number":270,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":12,"id":"b2dbf21b_0a0312ca","line":267,"range":{"start_line":267,"start_character":24,"end_line":267,"end_character":29},"in_reply_to":"6ac361af_fe4a2c16","updated":"2023-04-04 13:42:28.000000000","message":"Done","commit_id":"e8b03e55968e386af44090349521de8ad5cec20f"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"7b8c24d7d2178785706af718885337ed6ac770ce","unresolved":true,"context_lines":[{"line_number":264,"context_line":"                                             namespace\u003dns_name)"},{"line_number":265,"context_line":"                except Exception as exc:"},{"line_number":266,"context_line":"                    # do not re-raise a delete failure, just log"},{"line_number":267,"context_line":"                    LOG.debug(\u0027Address deletion failure: %s\u0027, str(exc))"},{"line_number":268,"context_line":"                return"},{"line_number":269,"context_line":"        pm.enable()"},{"line_number":270,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, 
pm)"}],"source_content_type":"text/x-python","patch_set":12,"id":"a7b1d06d_a9f45351","line":267,"range":{"start_line":267,"start_character":24,"end_line":267,"end_character":29},"in_reply_to":"6ac361af_fe4a2c16","updated":"2023-04-04 13:19:02.000000000","message":"Makes perfect sense.","commit_id":"e8b03e55968e386af44090349521de8ad5cec20f"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"19ea8f6667647d931f62e5bf00bec3717f9361bc","unresolved":true,"context_lines":[{"line_number":264,"context_line":"                                             namespace\u003dns_name)"},{"line_number":265,"context_line":"                except Exception as exc:"},{"line_number":266,"context_line":"                    # do not re-raise a delete failure, just log"},{"line_number":267,"context_line":"                    LOG.debug(\u0027Address deletion failure: %s\u0027, str(exc))"},{"line_number":268,"context_line":"                return"},{"line_number":269,"context_line":"        pm.enable()"},{"line_number":270,"context_line":"        monitor.register(uuid, METADATA_SERVICE_NAME, pm)"}],"source_content_type":"text/x-python","patch_set":12,"id":"6ac361af_fe4a2c16","line":267,"range":{"start_line":267,"start_character":24,"end_line":267,"end_character":29},"in_reply_to":"f39d056a_0b9538ec","updated":"2023-04-04 13:15:17.000000000","message":"I could raise it to info, don\u0027t think it should be higher than the dadfailed message, unless that should be higher?","commit_id":"e8b03e55968e386af44090349521de8ad5cec20f"}],"neutron/common/_constants.py":[{"author":{"_account_id":8313,"name":"Lajos Katona","display_name":"lajoskatona","email":"katonalala@gmail.com","username":"elajkat","status":"Ericsson Software Technology"},"change_message_id":"acb43de1de1ab331f9a01ec6cec41229bbd58cc0","unresolved":true,"context_lines":[{"line_number":86,"context_line":""},{"line_number":87,"context_line":"# The lowest binding 
index for L3 agents and DHCP agents."},{"line_number":88,"context_line":"LOWEST_AGENT_BINDING_INDEX \u003d 1"},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"# Neutron-lib defines this with a /64 but it should be /128"},{"line_number":91,"context_line":"METADATA_V6_CIDR \u003d constants.METADATA_V6_IP + \u0027/128\u0027"}],"source_content_type":"text/x-python","patch_set":13,"id":"780f22ae_88a3a40b","line":91,"range":{"start_line":89,"start_character":0,"end_line":91,"end_character":52},"updated":"2023-04-14 11:05:12.000000000","message":"Could you propose a FUP for this please?\nAs I see (https://codesearch.openstack.org/?q\u003dMETADATA_V6_CIDR\u0026i\u003dnope\u0026literal\u003dnope\u0026files\u003d\u0026excludeFiles\u003d\u0026repos\u003d ) all occurrences of the /64 CIDR are changed by this patch to the /128 CIDR","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"4ced2c10f41f067f4ef211cf436048c4ed33b534","unresolved":false,"context_lines":[{"line_number":86,"context_line":""},{"line_number":87,"context_line":"# The lowest binding index for L3 agents and DHCP agents."},{"line_number":88,"context_line":"LOWEST_AGENT_BINDING_INDEX \u003d 1"},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"# Neutron-lib defines this with a /64 but it should be /128"},{"line_number":91,"context_line":"METADATA_V6_CIDR \u003d constants.METADATA_V6_IP + \u0027/128\u0027"}],"source_content_type":"text/x-python","patch_set":13,"id":"d9bd3998_3f382ef8","line":91,"range":{"start_line":89,"start_character":0,"end_line":91,"end_character":52},"in_reply_to":"780f22ae_88a3a40b","updated":"2023-04-17
07:39:41.000000000","message":"https://review.opendev.org/c/openstack/neutron-lib/+/880588","commit_id":"2aee961ab6942ab59aeacdc93d918c8c19023041"}],"releasenotes/notes/bug-1953165-6e848ea2c0398f56.yaml":[{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"af9cbf17dbc18cb1e24491835563acfc34568adb","unresolved":true,"context_lines":[{"line_number":9,"context_line":"    a partial fix this is no longer the case. It still signals a design"},{"line_number":10,"context_line":"    limitation of the isolated metadata service, that affects the high"},{"line_number":11,"context_line":"    availability of the isolated metadata service."},{"line_number":12,"context_line":"other:"},{"line_number":13,"context_line":"  - |"},{"line_number":14,"context_line":"    As discovered in `bug 1953165"},{"line_number":15,"context_line":"    \u003chttps://bugs.launchpad.net/neutron/+bug/1953165\u003e`_ the high availability"}],"source_content_type":"text/x-yaml","patch_set":8,"id":"b9ca201e_c6f5b685","line":12,"updated":"2023-03-13 14:18:45.000000000","message":"Just a comment that this will make two notes in different sections, so if we want them together it\u0027s best to just use a single one and have a second paragraph.","commit_id":"9bb8acad52e59417ef8dce0d84c85043f0e9950e"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"c5f5b0e415621074b7aed26228188356296637aa","unresolved":false,"context_lines":[{"line_number":9,"context_line":"    a partial fix this is no longer the case.
It still signals a design"},{"line_number":10,"context_line":"    limitation of the isolated metadata service, that affects the high"},{"line_number":11,"context_line":"    availability of the isolated metadata service."},{"line_number":12,"context_line":"other:"},{"line_number":13,"context_line":"  - |"},{"line_number":14,"context_line":"    As discovered in `bug 1953165"},{"line_number":15,"context_line":"    \u003chttps://bugs.launchpad.net/neutron/+bug/1953165\u003e`_ the high availability"}],"source_content_type":"text/x-yaml","patch_set":8,"id":"3bd575a8_c41bad15","line":12,"in_reply_to":"5a5bcf4b_098c313e","updated":"2023-03-20 19:53:50.000000000","message":"Done","commit_id":"9bb8acad52e59417ef8dce0d84c85043f0e9950e"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"dfa3f92a8a0a1668187dc75f52d48747ddd889b9","unresolved":true,"context_lines":[{"line_number":9,"context_line":"    a partial fix this is no longer the case. It still signals a design"},{"line_number":10,"context_line":"    limitation of the isolated metadata service, that affects the high"},{"line_number":11,"context_line":"    availability of the isolated metadata service."},{"line_number":12,"context_line":"other:"},{"line_number":13,"context_line":"  - |"},{"line_number":14,"context_line":"    As discovered in `bug 1953165"},{"line_number":15,"context_line":"    \u003chttps://bugs.launchpad.net/neutron/+bug/1953165\u003e`_ the high availability"}],"source_content_type":"text/x-yaml","patch_set":8,"id":"5a5bcf4b_098c313e","line":12,"in_reply_to":"b9ca201e_c6f5b685","updated":"2023-03-14 14:07:47.000000000","message":"I was just trying to do what Rodolfo suggested earlier. 
Either way it\u0027s good for me.","commit_id":"9bb8acad52e59417ef8dce0d84c85043f0e9950e"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"af9cbf17dbc18cb1e24491835563acfc34568adb","unresolved":true,"context_lines":[{"line_number":15,"context_line":"    \u003chttps://bugs.launchpad.net/neutron/+bug/1953165\u003e`_ the high availability"},{"line_number":16,"context_line":"    of metadata service on isolated networks is limited or non-existent."},{"line_number":17,"context_line":"    IPv4 metadata is redundant when the DHCP agent managing it is redundant,"},{"line_number":18,"context_line":"    but recovery is tied to the renewal of the dhcp lease, making most"},{"line_number":19,"context_line":"    recoveries very slow. IPv6 metadata is not redundant at all. Until a"},{"line_number":20,"context_line":"    redesign of the isolated metadata service, there are no better deployment"},{"line_number":21,"context_line":"    options."}],"source_content_type":"text/x-yaml","patch_set":8,"id":"9cb03b10_3881adc7","line":18,"range":{"start_line":18,"start_character":47,"end_line":18,"end_character":51},"updated":"2023-03-13 14:18:45.000000000","message":"s/DHCP\n\nto be consistent","commit_id":"9bb8acad52e59417ef8dce0d84c85043f0e9950e"},{"author":{"_account_id":15554,"name":"Bence Romsics","email":"bence.romsics@gmail.com","username":"ebenrom","status":"working for Ericsson, UTC+1 (+DST)"},"change_message_id":"dfa3f92a8a0a1668187dc75f52d48747ddd889b9","unresolved":false,"context_lines":[{"line_number":15,"context_line":"    \u003chttps://bugs.launchpad.net/neutron/+bug/1953165\u003e`_ the high availability"},{"line_number":16,"context_line":"    of metadata service on isolated networks is limited or non-existent."},{"line_number":17,"context_line":"    IPv4 metadata is redundant when the DHCP agent managing it is redundant,"},{"line_number":18,"context_line":"    but recovery is tied to the renewal 
of the dhcp lease, making most"},{"line_number":19,"context_line":"    recoveries very slow. IPv6 metadata is not redundant at all. Until a"},{"line_number":20,"context_line":"    redesign of the isolated metadata service, there are no better deployment"},{"line_number":21,"context_line":"    options."}],"source_content_type":"text/x-yaml","patch_set":8,"id":"15e8e19b_4d1a29c9","line":18,"range":{"start_line":18,"start_character":47,"end_line":18,"end_character":51},"in_reply_to":"9cb03b10_3881adc7","updated":"2023-03-14 14:07:47.000000000","message":"Done","commit_id":"9bb8acad52e59417ef8dce0d84c85043f0e9950e"},{"author":{"_account_id":11975,"name":"Slawek Kaplonski","email":"skaplons@redhat.com","username":"slaweq"},"change_message_id":"deeb3abcae89aa4df375475c7a30ca303b8dc6e5","unresolved":true,"context_lines":[{"line_number":18,"context_line":"    but recovery is tied to the renewal of the DHCP lease, making most"},{"line_number":19,"context_line":"    recoveries very slow. IPv6 metadata is not redundant at all. Until a"},{"line_number":20,"context_line":"    redesign of the isolated metadata service, there are no better deployment"},{"line_number":21,"context_line":"    options."}],"source_content_type":"text/x-yaml","patch_set":9,"id":"1a8039fc_8ab585ad","line":21,"updated":"2023-03-16 16:27:42.000000000","message":"I know that I\u0027m nit picker but IMHO this would be better in \"known issues\" section :)","commit_id":"688e6800b88e48b46a4bd61fc0f1273649fcb801"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"a0340d07a01fbbc36aefc5ae61312dcf3ae7e918","unresolved":true,"context_lines":[{"line_number":18,"context_line":"    but recovery is tied to the renewal of the DHCP lease, making most"},{"line_number":19,"context_line":"    recoveries very slow. IPv6 metadata is not redundant at all. 
Until a"},{"line_number":20,"context_line":"    redesign of the isolated metadata service, there are no better deployment"},{"line_number":21,"context_line":"    options."}],"source_content_type":"text/x-yaml","patch_set":9,"id":"e1e53a0c_0628291d","line":21,"in_reply_to":"1a8039fc_8ab585ad","updated":"2023-03-17 14:25:02.000000000","message":"Right, it might be this belongs in \"issues\" and the above in \"fixes\" along with the bug # ? I know Bence is out today so I\u0027ll see if I can clean it up.","commit_id":"688e6800b88e48b46a4bd61fc0f1273649fcb801"},{"author":{"_account_id":1131,"name":"Brian Haley","email":"haleyb.dev@gmail.com","username":"brian-haley"},"change_message_id":"cb950f9396242a3181d075baa0a91fcd5e821b8d","unresolved":false,"context_lines":[{"line_number":18,"context_line":"    but recovery is tied to the renewal of the DHCP lease, making most"},{"line_number":19,"context_line":"    recoveries very slow. IPv6 metadata is not redundant at all. Until a"},{"line_number":20,"context_line":"    redesign of the isolated metadata service, there are no better deployment"},{"line_number":21,"context_line":"    options."}],"source_content_type":"text/x-yaml","patch_set":9,"id":"6fd73359_95436a0b","line":21,"in_reply_to":"e1e53a0c_0628291d","updated":"2023-03-20 19:53:32.000000000","message":"I tried to clean things up and put it into a single section.","commit_id":"688e6800b88e48b46a4bd61fc0f1273649fcb801"}]}
