{"/COMMIT_MSG":[{"author":{"_account_id":29122,"name":"Raghavendra Tilay","email":"raghavendra-uddhav.tilay@hpe.com","username":"raghavendrat"},"change_message_id":"dd1bafe6f9321dd08f418748768c5fb7eb31cf49","unresolved":true,"context_lines":[{"line_number":4,"context_line":"Commit:     melanie witt \u003cmelwittt@gmail.com\u003e"},{"line_number":5,"context_line":"CommitDate: 2022-10-06 18:30:26 +0000"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"NFS update volume attachment format during volume snapshot"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"During a NFS volume snapshot of an attached volume, a QCOW2 snapshot is"},{"line_number":10,"context_line":"created and is made the active volume for the instance. The associated"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":5,"id":"700695bb_deffb7b1","line":7,"updated":"2022-10-07 10:57:19.000000000","message":"nit: First line to be limited to 50 chars\n\nhttps://wiki.openstack.org/wiki/GitCommitMessages#Summary_of_Git_commit_message_structure","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"}],"/PATCHSET_LEVEL":[{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"bdf03f3787c5a195cc5273fed058c2469801c840","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"47a66f6d_c7d51d56","updated":"2022-09-14 04:59:29.000000000","message":"Hm, test_volume_extend_when_volume_has_snapshot failed in devstack-plugin-nfs-tempest-full 😞 Will try to figure out what I\u0027ve done wrong.","commit_id":"dc3b780323220343cc09e8ca767d10d6fd9d576d"},{"author":{"_account_id":4690,"name":"melanie 
witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"83b19cfed1a2c1ebcca04d04f81fb0c4590a15e5","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"f39234b3_abba1318","in_reply_to":"44c4af7c_c68649e9","updated":"2022-09-15 00:13:08.000000000","message":"(later) Thank you for the link. I found that it was not related to that bug and it was just a mistake I did 😆\n\nI think I see why it fails ... the extend will resize the backing file (which is raw) but because I\u0027m setting the volume admin metadata to qcow2, that format is being passed to resize (-f qcow2) which is not correct for the backing file and it fails.\n\n(later) I did some more testing locally and realized the volume format should not be changed and should remain \"raw\" and that only the volume attachment format should be updated for the snapshot. I had gotten confused because of the code in cinder/api/v3/attachments.py that was overwriting my \"qcow2\" update with the metadata from the volume, which is \"raw\".\n\nSo I\u0027ve tried a new PS to see if this is closer to the right thing.","commit_id":"dc3b780323220343cc09e8ca767d10d6fd9d576d"},{"author":{"_account_id":4523,"name":"Eric Harney","email":"eharney@redhat.com","username":"eharney"},"change_message_id":"a5fd6ea447535976f8ac3317f112feac6726cdde","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":1,"id":"44c4af7c_c68649e9","in_reply_to":"47a66f6d_c7d51d56","updated":"2022-09-14 15:36:01.000000000","message":"I wonder if it\u0027s related to\n    https://bugs.launchpad.net/cinder/+bug/1903319","commit_id":"dc3b780323220343cc09e8ca767d10d6fd9d576d"},{"author":{"_account_id":20813,"name":"Sofia 
Enriquez","email":"lsofia.enriquez@gmail.com","username":"enriquetaso"},"change_message_id":"3897e9c68137847e0604811ae80905422b62d72e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"75e64891_fe1bdd94","updated":"2022-09-16 20:59:44.000000000","message":"I run cinder-tempest-plugin manually and it passed! \n\n\n:-1: i think we may need a release note (let me know if you need help with it)","commit_id":"88f846b10799f330e9005605c749d762d9db013a"},{"author":{"_account_id":20813,"name":"Sofia Enriquez","email":"lsofia.enriquez@gmail.com","username":"enriquetaso"},"change_message_id":"6348dffa5d9ad97377884495b0c51f96637373f6","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"f5fe5eb1_0e37f090","updated":"2022-09-16 17:03:33.000000000","message":"Thanks for working on this, I\u0027m waiting for cinder-tempest-plugin test to finish and i\u0027ll upgrade my vote.\n\nThe attachments has been updated and i\u0027m able to connect to the instance without problems. 
\n\n```\ndevstack$ openstack server start 20c179fe-a8f0-452b-a18a-\ndevstack$ openstack --os-volume-api-version 3.27 volume attachment show a981ad3c-9964-4ca2-891f-2d8cb8a7f0e2 -f json\n/usr/lib/python3/dist-packages/secretstorage/dhcrypto.py:15: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead\n  from cryptography.utils import int_from_bytes\n/usr/lib/python3/dist-packages/secretstorage/util.py:19: CryptographyDeprecationWarning: int_from_bytes is deprecated, use int.from_bytes instead\n  from cryptography.utils import int_from_bytes\n{\n  \"ID\": \"a981ad3c-9964-4ca2-891f-2d8cb8a7f0e2\",\n  \"Volume ID\": \"f5661395-957d-4670-98f0-b60eafd18671\",\n  \"Instance ID\": \"20c179fe-a8f0-452b-a18a-f9529f8f8328\",\n  \"Status\": \"attached\",\n  \"Attach Mode\": \"rw\",\n  \"Attached At\": \"2022-09-16T16:45:21.000000\",\n  \"Detached At\": \"\",\n  \"Properties\": {\n    \"export\": \"localhost:/srv/nfs1\",\n    \"name\": \"volume-f5661395-957d-4670-98f0-b60eafd18671.af1a9d2b-a45e-466a-886e-e6706b3895a3\",\n    \"options\": null,\n    \"format\": \"qcow2\",\n    \"qos_specs\": null,\n    \"access_mode\": \"rw\",\n    \"encrypted\": false,\n    \"cacheable\": false,\n    \"driver_volume_type\": \"nfs\",\n    \"mount_point_base\": \"/opt/stack/data/cinder/mnt\",\n    \"attachment_id\": \"a981ad3c-9964-4ca2-891f-2d8cb8a7f0e2\"\n  }\n}\n```\n","commit_id":"88f846b10799f330e9005605c749d762d9db013a"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"2697f790be775a8941b73cc0154e21750074b0f2","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"783df487_a134ed3d","updated":"2022-09-16 04:51:47.000000000","message":"recheck kernel panic","commit_id":"88f846b10799f330e9005605c749d762d9db013a"},{"author":{"_account_id":4690,"name":"melanie 
witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"0f7a8caa4431ad458505e4f12186c89ed95004c2","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":3,"id":"953fd465_9a8c34f1","in_reply_to":"75e64891_fe1bdd94","updated":"2022-09-17 00:07:58.000000000","message":"Sure, I\u0027ll add a release note.","commit_id":"88f846b10799f330e9005605c749d762d9db013a"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"a2f6508fd44dcf5a393cb6c864ad16102ae3d87c","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":4,"id":"201ce0cd_4c3e840b","updated":"2022-09-19 18:47:50.000000000","message":"Hi, Rajat, thanks for the review! Question inline.","commit_id":"f8ce79c8dfb510711d2940b0e9609186657b41ae"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"b2ce2a71492da32a1c1e048c71b8f39b28eda4ca","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":4,"id":"ed8a73e7_382314f0","updated":"2022-09-22 02:30:44.000000000","message":"I\u0027m trying a nova patch here too:\n\nhttps://review.opendev.org/c/openstack/nova/+/858836\n\nNot sure whether it\u0027s an acceptable way to handle NFS though.","commit_id":"f8ce79c8dfb510711d2940b0e9609186657b41ae"},{"author":{"_account_id":27615,"name":"Rajat Dhasmana","email":"rajatdhasmana@gmail.com","username":"whoami-rajat"},"change_message_id":"c06a427beffc24d006e8aedc51bcd3615c1c13de","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":4,"id":"ce00ca49_f8664d82","updated":"2022-09-19 08:52:14.000000000","message":"The format information is preserved in the volume admin metadata since cinder isn\u0027t smart enough to check if the volume is raw or qcow2. 
the qemu-img commands auto detect format so if a qcow2 file is written on a raw volume, cinder thinks it\u0027s a qcow2 volume hence we need to manually keep track of the volume format.","commit_id":"f8ce79c8dfb510711d2940b0e9609186657b41ae"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"13fbf9200e7c8b2f6b7d6b439c4c3c7e15c1ba78","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"7e377bac_d8758ab6","updated":"2023-04-26 03:18:04.000000000","message":"I started looking at this again today and find that the online snapshot delete is intentionally leaving the snapshot file behind (example: volume-cfba133d-b092-45d8-b229-1727415e5b7d.c2564f8f-7865-4a08-896f-0d69d7c9fbec) and using it as the active file with format qcow2. So from this perspective in the current state this patch \"works\" because it seems like the snapshot doesn\u0027t actually get deleted when you \u0027openstack volume snapshot delete\u0027 it?\n\nAn example test I\u0027m doing is boot an instance from a volume, \u0027openstack server image create\u0027 a snapshot of the instance, do a hard reboot, instance boots fine, delete the snapshot \u0027openstack volume snapshot delete\u0027 and the hard reboot the instance, the instance boots fine.","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":30615,"name":"Tushar Trambak Gite","email":"tushargite96@gmail.com","username":"tushargite96"},"change_message_id":"923f81b6a70a36a13765497397257c02a1926c36","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"06cd177c_80843ebe","updated":"2023-07-05 03:45:27.000000000","message":"LGTM","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":29122,"name":"Raghavendra 
Tilay","email":"raghavendra-uddhav.tilay@hpe.com","username":"raghavendrat"},"change_message_id":"dd1bafe6f9321dd08f418748768c5fb7eb31cf49","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"7e755f09_b79d8936","updated":"2022-10-07 10:57:19.000000000","message":"Minor comment inline\n","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"4eab4045a969baa3463d1f14a079f3fbdd8964a8","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"643adc92_0d70f030","updated":"2023-07-20 18:46:10.000000000","message":"check experimental","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":4523,"name":"Eric Harney","email":"eharney@redhat.com","username":"eharney"},"change_message_id":"ed06e9cda7e88ba813a1132e0ea3d2fda80b2777","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"3013926b_80beaf4b","updated":"2024-10-24 14:36:13.000000000","message":"recheck\n\nget new logs","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"df22fee67471094e28c98b2639e517b676df0f2a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"802992d4_c5ef3680","updated":"2022-10-06 19:54:05.000000000","message":"recheck bug 1991962","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":13425,"name":"Simon Dodsley","email":"simon@purestorage.com","username":"sdodsley"},"change_message_id":"b3d37567ead0ee124530ae69b42da12bfe7c1b5a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"f54e865d_801bad10","updated":"2022-10-06 19:46:04.000000000","message":"run Pure Storage 
CI","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":10459,"name":"Luigi Toscano","email":"ltoscano@redhat.com","username":"ltoscano"},"change_message_id":"6cf13898b8c9a18f09a78388f39d4aa09c210e5a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"b5e26ab0_47e348af","updated":"2025-01-15 13:47:47.000000000","message":"Should this be backported to all open branches? This way it would be possible to have a working tempest test for this issue (see https://review.opendev.org/c/openstack/tempest/+/939329).","commit_id":"56abc9d5b9a07ca31791e385d93ca5b56b6bd74e"},{"author":{"_account_id":5314,"name":"Brian Rosmaita","email":"rosmaita.fossdev@gmail.com","username":"brian-rosmaita"},"change_message_id":"5a5455cb0d41162fe7ba90131b66f146a15367ec","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"ef00f65c_8a989145","updated":"2024-12-18 16:11:00.000000000","message":"Tested patch locally, couldn\u0027t find any issues.  
Zuul is still working on this patch set, but devstack-plugin-nfs-tempest-full and devstack-plugin-nfs-tempest-full-fips have both passed.","commit_id":"56abc9d5b9a07ca31791e385d93ca5b56b6bd74e"},{"author":{"_account_id":5314,"name":"Brian Rosmaita","email":"rosmaita.fossdev@gmail.com","username":"brian-rosmaita"},"change_message_id":"67a02f6867f4e080778aa7d3af58e44a83c0a9f8","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"d72f82bc_cb4d39b4","updated":"2024-12-18 14:09:25.000000000","message":"recheck devstack-plugin-nfs-tempest-full - post failure","commit_id":"56abc9d5b9a07ca31791e385d93ca5b56b6bd74e"},{"author":{"_account_id":10459,"name":"Luigi Toscano","email":"ltoscano@redhat.com","username":"ltoscano"},"change_message_id":"cdb1dd247af5554ba0e29a615beba4501f742520","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"bb4d3a85_6b1f1d19","in_reply_to":"b5e26ab0_47e348af","updated":"2025-01-15 14:15:34.000000000","message":"I\u0027ve proposed the backports.","commit_id":"56abc9d5b9a07ca31791e385d93ca5b56b6bd74e"}],"cinder/api/v3/attachments.py":[{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"83b19cfed1a2c1ebcca04d04f81fb0c4590a15e5","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    
@wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"fcc178fe_015dae45","line":66,"updated":"2022-09-15 00:13:08.000000000","message":"Full disclosure: I don\u0027t know what this setting of connection_info from volume admin metadata is for, so I tried to change it as minimally as possible.","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"},{"author":{"_account_id":27615,"name":"Rajat Dhasmana","email":"rajatdhasmana@gmail.com","username":"whoami-rajat"},"change_message_id":"c06a427beffc24d006e8aedc51bcd3615c1c13de","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    @wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"74390114_1f44ffa3","line":66,"in_reply_to":"25763c48_f49f1809","updated":"2022-09-19 08:52:14.000000000","message":"This shouldn\u0027t be changed, if we\u0027re expecting a format change in volume then the admin metadata also needs to be updated. 
see my comments in remotefs.py file\n\nSofia: This is specific for file system type drivers so RBD won\u0027t be running this code path.","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"f08429cbbb548f5982d0bf52be25c9914094533b","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    @wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"31653417_9947629f","line":66,"in_reply_to":"2abd2249_bd1a0fa9","updated":"2022-09-21 02:37:14.000000000","message":"Right, on the nova side we store the connection_info which AFAICT is only available from the attachment info. We use the connection_info to generate the XML for the instance:\n\nhttps://github.com/openstack/nova/blob/1025c9879341d44db33c4cc501435364dd185a9e/nova/virt/libvirt/volume/nfs.py#L29\n\nCurrently, the connection_info for the attachment says format \"raw\" after the volume snapshot. Is that accurate?\n\nSorry, I don\u0027t know how the snapshots work and why the snapshot would be made the active file for the instance. 
And why the connection_info would not be updated at the same time the snapshot is made the active file.","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"f5ef79ceb59255dd0ae2fdf5f98f717646757541","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    @wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"cdffe824_a310abcf","line":66,"in_reply_to":"31653417_9947629f","updated":"2024-10-29 02:49:56.000000000","message":"Adding another comment here as I have learned a lot more about volumes and storage since my last comment.\n\nIMHO it seems correct for Nova to use the attachment connection_info for generating disk format in the guest XML. 
If the connection_info represents the current state of the connection, then if the active file is a snapshot, the connected disk has become qcow2 and the attachment should reflect that.","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"},{"author":{"_account_id":27615,"name":"Rajat Dhasmana","email":"rajatdhasmana@gmail.com","username":"whoami-rajat"},"change_message_id":"3cf048fef123726db5b84f3465451dc2e3e47747","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    @wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"2abd2249_bd1a0fa9","line":66,"in_reply_to":"3bf15946_2af3bef6","updated":"2022-09-20 07:27:52.000000000","message":"In PS1, you are updating the format irrespective of the volume is attached or not. if you only update the format when volume is attached then it should not fail.\nBut i think the problem is bigger than this. Nova is using attachment information to track the snapshot format (which is the active file from nova side) but on cinder side, we have attachment records associated with the volume and not snapshot. 
I think this requires some discussion between both nova and cinder teams.","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"a2f6508fd44dcf5a393cb6c864ad16102ae3d87c","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    @wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"3bf15946_2af3bef6","line":66,"in_reply_to":"74390114_1f44ffa3","updated":"2022-09-19 18:47:50.000000000","message":"Hm, OK. 
So back in PS1, I made the change by updating the volume admin metadata, but it caused NFS volume extend tests to fail [1][2]:\n\n ERROR cinder.volume.manager Traceback (most recent call last):\n ERROR cinder.volume.manager   File \"/opt/stack/cinder/cinder/volume/manager.py\", line 2952, in extend_volume\n ERROR cinder.volume.manager     self.driver.extend_volume(volume, new_size)\n ERROR cinder.volume.manager   File \"/opt/stack/cinder/cinder/volume/drivers/nfs.py\", line 393, in extend_volume\n ERROR cinder.volume.manager     image_utils.resize_image(path, new_size,\n ERROR cinder.volume.manager   File \"/opt/stack/cinder/cinder/image/image_utils.py\", line 464, in resize_image\n ERROR cinder.volume.manager     utils.execute(*cmd, run_as_root\u003drun_as_root)\n ERROR cinder.volume.manager   File \"/opt/stack/cinder/cinder/utils.py\", line 174, in execute\n ERROR cinder.volume.manager     return processutils.execute(*cmd, **kwargs)\n ERROR cinder.volume.manager   File \"/usr/local/lib/python3.8/dist-packages/oslo_concurrency/processutils.py\", line 438, in execute\n ERROR cinder.volume.manager     raise ProcessExecutionError(exit_code\u003d_returncode,\n ERROR cinder.volume.manager oslo_concurrency.processutils.ProcessExecutionError: Unexpected error while running command.\n ERROR cinder.volume.manager Command: sudo cinder-rootwrap /etc/cinder/rootwrap.conf qemu-img resize -f qcow2 /opt/stack/data/cinder/mnt/896fb15da6036b68a917322e72ebfe57/volume-c0f347c6-c404-4316-8463-b51ea217b5e4 2G\n ERROR cinder.volume.manager Exit code: 1\n ERROR cinder.volume.manager Stdout: \u0027\u0027\n ERROR cinder.volume.manager Stderr: \"qemu-img: Could not open \u0027/opt/stack/data/cinder/mnt/896fb15da6036b68a917322e72ebfe57/volume-c0f347c6-c404-4316-8463-b51ea217b5e4\u0027: Image is not in qcow2 format\\n\"\n ERROR cinder.volume.manager\n\nThe resize wants to resize the raw backing file but if the volume admin metadata says qcow2, it will fail \"Image is not in qcow2 
format\".\n\nSo I did a new PS leaving the volume admin metadata as raw.\n\nWhat is the correct thing to do here?\n\n[1] https://zuul.opendev.org/t/openstack/build/568488797ebb4d45a40efcff6027a030/log/controller/logs/screen-c-vol.txt#2964\n[2] https://review.opendev.org/c/openstack/cinder/+/857528/1#message-83b19cfed1a2c1ebcca04d04f81fb0c4590a15e5","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"},{"author":{"_account_id":20813,"name":"Sofia Enriquez","email":"lsofia.enriquez@gmail.com","username":"enriquetaso"},"change_message_id":"3897e9c68137847e0604811ae80905422b62d72e","unresolved":true,"context_lines":[{"line_number":63,"context_line":"            # already been set."},{"line_number":64,"context_line":"            if \u0027format\u0027 not in attachment.connection_info:"},{"line_number":65,"context_line":"                attachment.connection_info[\u0027format\u0027] \u003d ("},{"line_number":66,"context_line":"                    volume.admin_metadata[\u0027format\u0027])"},{"line_number":67,"context_line":"        return attachment_views.ViewBuilder.detail(attachment)"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"    @wsgi.Controller.api_version(mv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":2,"id":"25763c48_f49f1809","line":66,"in_reply_to":"fcc178fe_015dae45","updated":"2022-09-16 20:59:44.000000000","message":"I wonder why this hasn\u0027t popped out as a problem before (i.e RBD driver)","commit_id":"0a33c9ae46e4eb5e279393004c96460ce4cbe7ec"}],"cinder/tests/unit/api/v3/test_attachments.py":[{"author":{"_account_id":20813,"name":"Sofia Enriquez","email":"lsofia.enriquez@gmail.com","username":"enriquetaso"},"change_message_id":"3897e9c68137847e0604811ae80905422b62d72e","unresolved":true,"context_lines":[{"line_number":355,"context_line":"    @ddt.data({}, {\u0027format\u0027: \u0027qcow2\u0027})"},{"line_number":356,"context_line":"    def test_get_attachment_format(self, 
connection_info):"},{"line_number":357,"context_line":"        # Set the volume format so we can verify attachment connection_info"},{"line_number":358,"context_line":"        volume \u003d copy.deepcopy(self.volume1)"},{"line_number":359,"context_line":"        volume.admin_metadata[\u0027format\u0027] \u003d \u0027raw\u0027"},{"line_number":360,"context_line":"        volume.save()"},{"line_number":361,"context_line":"        # Attachment has no connection_info yet"},{"line_number":362,"context_line":"        attachment \u003d copy.deepcopy(self.attachment1)"},{"line_number":363,"context_line":"        attachment.connection_info \u003d connection_info"},{"line_number":364,"context_line":"        attachment.save()"},{"line_number":365,"context_line":""},{"line_number":366,"context_line":"        url \u003d \u0027/v3/%s/attachments%s\u0027 % (fake.PROJECT_ID, attachment.id)"},{"line_number":367,"context_line":"        req \u003d fakes.HTTPRequest.blank(url, version\u003dmv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":3,"id":"b6ccdbc1_358a45ac","line":364,"range":{"start_line":358,"start_character":0,"end_line":364,"end_character":25},"updated":"2022-09-16 20:59:44.000000000","message":":-1: Using copy looks like working fine here. However, it seems to mix a little the way cinder likes to test stuff. 
Maybe we can just add a new volume self.volume3 to setUp(self), add it to _cleanup() as well, and then add the new configuration to _create_volume().\n\nLet\u0027s see what others think about this.","commit_id":"88f846b10799f330e9005605c749d762d9db013a"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"f5ef79ceb59255dd0ae2fdf5f98f717646757541","unresolved":false,"context_lines":[{"line_number":355,"context_line":"    @ddt.data({}, {\u0027format\u0027: \u0027qcow2\u0027})"},{"line_number":356,"context_line":"    def test_get_attachment_format(self, connection_info):"},{"line_number":357,"context_line":"        # Set the volume format so we can verify attachment connection_info"},{"line_number":358,"context_line":"        volume \u003d copy.deepcopy(self.volume1)"},{"line_number":359,"context_line":"        volume.admin_metadata[\u0027format\u0027] \u003d \u0027raw\u0027"},{"line_number":360,"context_line":"        volume.save()"},{"line_number":361,"context_line":"        # Attachment has no connection_info yet"},{"line_number":362,"context_line":"        attachment \u003d copy.deepcopy(self.attachment1)"},{"line_number":363,"context_line":"        attachment.connection_info \u003d connection_info"},{"line_number":364,"context_line":"        attachment.save()"},{"line_number":365,"context_line":""},{"line_number":366,"context_line":"        url \u003d \u0027/v3/%s/attachments%s\u0027 % (fake.PROJECT_ID, attachment.id)"},{"line_number":367,"context_line":"        req \u003d fakes.HTTPRequest.blank(url, version\u003dmv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":3,"id":"1c6b7650_0aedd1a6","line":364,"range":{"start_line":358,"start_character":0,"end_line":364,"end_character":25},"in_reply_to":"b6982751_d1c77572","updated":"2024-10-29 02:49:56.000000000","message":"Just noting this comment no longer applies in newer PS because this test is no longer 
part of the change.","commit_id":"88f846b10799f330e9005605c749d762d9db013a"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"0f7a8caa4431ad458505e4f12186c89ed95004c2","unresolved":true,"context_lines":[{"line_number":355,"context_line":"    @ddt.data({}, {\u0027format\u0027: \u0027qcow2\u0027})"},{"line_number":356,"context_line":"    def test_get_attachment_format(self, connection_info):"},{"line_number":357,"context_line":"        # Set the volume format so we can verify attachment connection_info"},{"line_number":358,"context_line":"        volume \u003d copy.deepcopy(self.volume1)"},{"line_number":359,"context_line":"        volume.admin_metadata[\u0027format\u0027] \u003d \u0027raw\u0027"},{"line_number":360,"context_line":"        volume.save()"},{"line_number":361,"context_line":"        # Attachment has no connection_info yet"},{"line_number":362,"context_line":"        attachment \u003d copy.deepcopy(self.attachment1)"},{"line_number":363,"context_line":"        attachment.connection_info \u003d connection_info"},{"line_number":364,"context_line":"        attachment.save()"},{"line_number":365,"context_line":""},{"line_number":366,"context_line":"        url \u003d \u0027/v3/%s/attachments%s\u0027 % (fake.PROJECT_ID, attachment.id)"},{"line_number":367,"context_line":"        req \u003d fakes.HTTPRequest.blank(url, version\u003dmv.NEW_ATTACH)"}],"source_content_type":"text/x-python","patch_set":3,"id":"b6982751_d1c77572","line":364,"range":{"start_line":358,"start_character":0,"end_line":364,"end_character":25},"in_reply_to":"b6ccdbc1_358a45ac","updated":"2022-09-17 00:07:58.000000000","message":"That\u0027s definitely fair, I\u0027m happy to change it.\n\nI also realized if I was gonna do this, I should have used the obj_clone() method in o.vo anyway 
:P","commit_id":"88f846b10799f330e9005605c749d762d9db013a"}],"cinder/volume/drivers/remotefs.py":[{"author":{"_account_id":27615,"name":"Rajat Dhasmana","email":"rajatdhasmana@gmail.com","username":"whoami-rajat"},"change_message_id":"c06a427beffc24d006e8aedc51bcd3615c1c13de","unresolved":true,"context_lines":[{"line_number":1721,"context_line":"            # Update reference in the only attachment (no multi-attach support)"},{"line_number":1722,"context_line":"            attachment \u003d snapshot.volume.volume_attachment[0]"},{"line_number":1723,"context_line":"            attachment.connection_info[\u0027name\u0027] \u003d active"},{"line_number":1724,"context_line":"            attachment.connection_info[\u0027format\u0027] \u003d \u0027qcow2\u0027"},{"line_number":1725,"context_line":"            # Let OVO know it has been updated"},{"line_number":1726,"context_line":"            attachment.connection_info \u003d attachment.connection_info"},{"line_number":1727,"context_line":"            attachment.save()"}],"source_content_type":"text/x-python","patch_set":4,"id":"3aab84d7_490814b8","line":1724,"range":{"start_line":1724,"start_character":12,"end_line":1724,"end_character":58},"updated":"2022-09-19 08:52:14.000000000","message":"if this is always the case i.e. 
creating a snapshot of an attached volume causes the volume format to change to qcow2, then we also need to update the volume admin metadata here\n\n    snapshot.volume.admin_metadata[\u0027format\u0027] \u003d \u0027qcow2\u0027\n    with snapshot.volume.obj_as_admin():\n        snapshot.volume.save()","commit_id":"f8ce79c8dfb510711d2940b0e9609186657b41ae"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"f5ef79ceb59255dd0ae2fdf5f98f717646757541","unresolved":false,"context_lines":[{"line_number":1721,"context_line":"            # Update reference in the only attachment (no multi-attach support)"},{"line_number":1722,"context_line":"            attachment \u003d snapshot.volume.volume_attachment[0]"},{"line_number":1723,"context_line":"            attachment.connection_info[\u0027name\u0027] \u003d active"},{"line_number":1724,"context_line":"            attachment.connection_info[\u0027format\u0027] \u003d \u0027qcow2\u0027"},{"line_number":1725,"context_line":"            # Let OVO know it has been updated"},{"line_number":1726,"context_line":"            attachment.connection_info \u003d attachment.connection_info"},{"line_number":1727,"context_line":"            attachment.save()"}],"source_content_type":"text/x-python","patch_set":4,"id":"105c2246_85c1e0e5","line":1724,"range":{"start_line":1724,"start_character":12,"end_line":1724,"end_character":58},"in_reply_to":"3aab84d7_490814b8","updated":"2024-10-29 02:49:56.000000000","message":"Done","commit_id":"f8ce79c8dfb510711d2940b0e9609186657b41ae"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"d3dfc435069e7ec45b20a161aa470559cd783ccc","unresolved":true,"context_lines":[{"line_number":1863,"context_line":"        if update_format:"},{"line_number":1864,"context_line":"            
snapshot.volume.admin_metadata[\u0027format\u0027] \u003d \u0027qcow2\u0027"},{"line_number":1865,"context_line":"            with snapshot.volume.obj_as_admin():"},{"line_number":1866,"context_line":"                snapshot.volume.save()"},{"line_number":1867,"context_line":""},{"line_number":1868,"context_line":"        # Write info file updated above"},{"line_number":1869,"context_line":"        self._write_info_file(info_path, snap_info)"}],"source_content_type":"text/x-python","patch_set":5,"id":"507b24c6_7b32e94b","line":1866,"updated":"2022-10-21 18:53:24.000000000","message":"We discussed this patch at the PTG and determined it needs a bit more work.\n\nAlongside updating the volume format after a snapshot, code needs to be added that will update the format back to \u0027raw\u0027 after the last snapshot is deleted online. Currently the format is only updated to \u0027qcow2\u0027 when appropriate.\n\nWe agreed Rajat will add that code as SME. And I am also happy to attempt it if Rajat prefers.","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":4523,"name":"Eric Harney","email":"eharney@redhat.com","username":"eharney"},"change_message_id":"2d820eeeb183c679d9ae108bdf6dfebfda2508ae","unresolved":true,"context_lines":[{"line_number":1863,"context_line":"        if update_format:"},{"line_number":1864,"context_line":"            snapshot.volume.admin_metadata[\u0027format\u0027] \u003d \u0027qcow2\u0027"},{"line_number":1865,"context_line":"            with snapshot.volume.obj_as_admin():"},{"line_number":1866,"context_line":"                snapshot.volume.save()"},{"line_number":1867,"context_line":""},{"line_number":1868,"context_line":"        # Write info file updated above"},{"line_number":1869,"context_line":"        self._write_info_file(info_path, snap_info)"}],"source_content_type":"text/x-python","patch_set":5,"id":"0b3b200b_99993a47","line":1866,"in_reply_to":"1ab40229_3cf31395","updated":"2024-12-04 
18:21:56.000000000","message":"This sounds correct to me.\n\nThe base file format will change in the event that we support blockcommit of the last snapshot file in the future (which we should do for performance reasons), but we can deal with that case later.","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"},{"author":{"_account_id":4690,"name":"melanie witt","display_name":"melwitt","email":"melwittt@gmail.com","username":"melwitt"},"change_message_id":"f5ef79ceb59255dd0ae2fdf5f98f717646757541","unresolved":true,"context_lines":[{"line_number":1863,"context_line":"        if update_format:"},{"line_number":1864,"context_line":"            snapshot.volume.admin_metadata[\u0027format\u0027] \u003d \u0027qcow2\u0027"},{"line_number":1865,"context_line":"            with snapshot.volume.obj_as_admin():"},{"line_number":1866,"context_line":"                snapshot.volume.save()"},{"line_number":1867,"context_line":""},{"line_number":1868,"context_line":"        # Write info file updated above"},{"line_number":1869,"context_line":"        self._write_info_file(info_path, snap_info)"}],"source_content_type":"text/x-python","patch_set":5,"id":"1ab40229_3cf31395","line":1866,"in_reply_to":"507b24c6_7b32e94b","updated":"2024-10-29 02:49:56.000000000","message":"Adding a comment here to update the current status of this patch.\n\nAFAICT we should not add code to update the attachment format after the last snapshot is deleted. Each time a snapshot is deleted, Cinder calls Nova to pass the delete_info and Nova uses it to do a blockRebase of the disk. The blockRebase will not change the disk format -- it will remain qcow2, even after the last snapshot is deleted. (In the case of deletion of the last snapshot, the disk will be qcow2 except with no backing file. The disk is _not_ converted to raw at any point).\n\nBased on this, I think it is correct to leave the attachment format as qcow2 even after the last snapshot is deleted. 
Please correct me if I am wrong.","commit_id":"a02c47ffa3ecb230eb8cf13a9f33a6bfc0dd4d6b"}]}
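The `attachment.connection_info = attachment.connection_info` idiom in the `remotefs.py` hunk above ("Let OVO know it has been updated") exists because oslo.versionedobjects only tracks field *assignments*, not in-place mutation of a dict field. A minimal standalone sketch of the update pattern discussed in the thread, where `FakeOVO` and `activate_snapshot` are hypothetical stand-ins for Cinder's o.vo models (the real driver additionally saves the volume inside an `obj_as_admin()` context):

```python
# Hypothetical stand-in for an oslo.versionedobjects model: only attribute
# *assignment* marks a field dirty, which is the behavior the driver code
# in the review is working around.
class FakeOVO:
    def __init__(self, **fields):
        object.__setattr__(self, "_dirty", set())
        for name, value in fields.items():
            object.__setattr__(self, name, value)  # initial load: not dirty

    def __setattr__(self, name, value):
        self._dirty.add(name)  # assignment is what the change tracker sees
        object.__setattr__(self, name, value)

    def save(self):
        """Pretend-persist; return which fields would have been written."""
        saved = sorted(self._dirty)
        self._dirty.clear()
        return saved


def activate_snapshot(attachment, volume, active_file):
    """Point the single attachment at the new active qcow2 file and record
    the format in the volume's admin metadata (simplified sketch)."""
    attachment.connection_info["name"] = active_file
    attachment.connection_info["format"] = "qcow2"
    # The in-place dict mutation above is invisible to the change tracker,
    # so re-assign the field to itself to mark it dirty before saving.
    attachment.connection_info = attachment.connection_info
    volume.admin_metadata["format"] = "qcow2"
    volume.admin_metadata = volume.admin_metadata
    return attachment.save(), volume.save()
```

With fresh objects, `activate_snapshot` reports `connection_info` and `admin_metadata` as the dirty fields; drop the two self-assignment lines and `save()` would report nothing to write, which is the bug the inline comment guards against.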

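melwitt's status comment argues that after the last snapshot is deleted online, the rebased disk stays qcow2 (just without a backing file) and is never converted back to raw. That behavior can be reproduced with `qemu-img` directly; this is an illustration only, since Nova actually goes through libvirt's blockRebase API, and the file names here are made up. The script skips quietly when `qemu-img` is not installed:

```python
# Reproduce the qcow2 chain behavior described in the review: create a raw
# base "volume", snapshot it into a qcow2 overlay, then rebase the overlay
# onto no backing file (the analogue of deleting the last snapshot).
import os
import shutil
import subprocess
import tempfile


def demo_last_snapshot_delete():
    """Return `qemu-img info` output for the overlay after rebasing away
    its backing file, or None when qemu-img is unavailable."""
    if shutil.which("qemu-img") is None:
        return None
    workdir = tempfile.mkdtemp()
    base = os.path.join(workdir, "volume-1")
    overlay = os.path.join(workdir, "volume-1.snap1")
    subprocess.run(["qemu-img", "create", "-f", "raw", base, "64M"],
                   check=True, capture_output=True)
    subprocess.run(["qemu-img", "create", "-f", "qcow2",
                    "-b", base, "-F", "raw", overlay],
                   check=True, capture_output=True)
    # Rebase onto an empty backing file: the overlay becomes standalone,
    # but its format is still qcow2 -- it is not converted to raw.
    subprocess.run(["qemu-img", "rebase", "-b", "", "-f", "qcow2", overlay],
                   check=True, capture_output=True)
    out = subprocess.run(["qemu-img", "info", overlay],
                         check=True, capture_output=True, text=True)
    return out.stdout


info = demo_last_snapshot_delete()
if info is not None:
    print(info)  # reports "file format: qcow2" and no backing file line
```

This matches the conclusion in the thread: leaving the attachment format as qcow2 after the last snapshot is deleted is consistent with what the rebase actually produces on disk.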