)]}'
{"/PATCHSET_LEVEL":[{"author":{"_account_id":36080,"name":"Erkin Mussurmankulov","display_name":"Eric","email":"mangust404@gmail.com","username":"mongoose404","status":"PS Cloud services employee"},"change_message_id":"0f82f572490e18963e34fbe230545207b1754265","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":4,"id":"328f6f80_0574fafc","updated":"2026-04-28 19:15:08.000000000","message":"This should also fix issue mentioned here: https://review.opendev.org/c/openstack/trove/+/985262, but with a different approach.","commit_id":"44710535eca0234b85f15652d5c7ae17a72e2dbb"},{"author":{"_account_id":26285,"name":"wu.chunyang","email":"wchy1001@gmail.com","username":"wu.chunyang"},"change_message_id":"a07f88fae7a5263ac0623adb360bde2a584faacd","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"60a68e56_bc191bd7","updated":"2026-04-30 01:38:48.000000000","message":"The tests passed, which means these changes work correctly.\nHowever, with this update, once the operator upgrades to this version,\nit will need to rebuild all backup images — this could make the upgrade process less smooth.","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"},{"author":{"_account_id":36080,"name":"Erkin Mussurmankulov","display_name":"Eric","email":"mangust404@gmail.com","username":"mongoose404","status":"PS Cloud services employee"},"change_message_id":"fc4a45fed9343e4ebe4685840ab2e76243e4a843","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":5,"id":"83ccd34a_6a4804ab","in_reply_to":"48f5bcab_dcbb2c20","updated":"2026-04-30 03:36:50.000000000","message":"I\u0027m also have a crazy idea, which will, in addition, resolve the problem of compatibility with old datastores:\n\nInstead of building a backup image for each datastore version, we can use the SAME image as the database itself for backups: they all already have the required backup utility, with the correct version.\nFor transferring the backup stream 
outside of the db_backup container, we can use:\n- named pipes (mounted from the host system)\n- unix socket files (mounted from the host system)\n- network sockets\n\nWe need to research which way will work best. I don\u0027t see a fundamental problem that would make it impossible.\n\nThose backup images are a cornerstone of Trove, and getting rid of them would make Trove more versatile.\n\n@hiwkby@yahoo.com @ministry.96.nd@gmail.com @wchy1001@gmail.com we need the expertise of the core team on this topic.","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"},{"author":{"_account_id":36080,"name":"Erkin Mussurmankulov","display_name":"Eric","email":"mangust404@gmail.com","username":"mongoose404","status":"PS Cloud services employee"},"change_message_id":"b3707f89a6ed9fd02571b9eb0220764f9fcc2ef5","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":5,"id":"48f5bcab_dcbb2c20","in_reply_to":"60a68e56_bc191bd7","updated":"2026-04-30 02:42:26.000000000","message":"Yes, you\u0027re right.\nI can see at least two options here for operators:\n 1. Tell operators to keep old versions of the backup images with the rescope logic for compatibility with old guestagents; set new names for the backup images that are built during the release upgrade; backup_image is set inside trove-guestagent.conf and will remain the same until rebuild.\n 2. Add additional code for compatibility, which will use the old rescope method if `--os-auth-url` is provided for the backup container.\n \n I\u0027m not sure which one is better. 
Adding an additional caveat for the upgrade process is a bad thing, but keeping a lot of legacy code is not a good thing either.","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"},{"author":{"_account_id":36080,"name":"Erkin Mussurmankulov","display_name":"Eric","email":"mangust404@gmail.com","username":"mongoose404","status":"PS Cloud services employee"},"change_message_id":"e32bc6f523dec0d2ca50c80acce2cc9b56f68132","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":5,"id":"8213409e_ac5143b9","in_reply_to":"7056e7ab_643ac23a","updated":"2026-05-06 19:15:51.000000000","message":"Hello, Bo. Thank you for the comment.\n\nThere seems to be a misunderstanding regarding this patch. Below is a precise comparison of the current behavior and the behavior after applying the patch which I did manually just now.\n\nTest input: a database instance created in a user project by a Tempest scenario.\n\n__Current behavior__ (master, without this patch)\n - When creating a backup with project member credentials (user context: member in the user\u0027s project), the backup is stored in the user\u0027s project. The `database_backups` Swift container is created in the user\u0027s namespace.\n - When creating a backup with admin credentials (admin context: admin in the admin project), the backup is stored in the admin project. The `database_backups` Swift container is created in the admin\u0027s namespace.\n\n__Behavior with this patch applied__\n - When creating a backup with project member credentials (user context: member in the user\u0027s project), the backup is stored in the user\u0027s project. The `database_backups` Swift container is created in the user\u0027s namespace.\n - When creating a backup with admin credentials (admin context: admin in the admin project), the backup is stored in the admin project. 
The `database_backups` Swift container is created in the admin\u0027s namespace.\n\n__Conclusion__\n\nThis patch does not change the existing behavior. It only removes an unnecessary implicit token rescoping step.\n\nAdditionally, it becomes safe to remove credentials from `trove-guestagent.conf`. Keeping service account credentials there poses a huge security risk if they are compromised.\n\n__Regarding your opinions:__\n\n1. My opinion is: no, backups should NOT be transparent to all users. Backups are private data. Accessing other users\u0027 backups should be forbidden. A user may access only their own backups.\n2. Yes, we can add more features, including integration with S3.\nIn our company, we already have a prototype, and we plan to propose it upstream in the near future.\nFor implementing this, as I see it, we should extend the functionality of the existing backup strategy entity. Right now, you can only specify a Swift container name. We should add a \"type\" property, where users may choose, for example, between \"swift\" and \"s3\".\nFor the S3 backup strategy type, you should also provide a credentials reference from Barbican.\nThis will allow adding more backup strategies in the future.\n3. Neither the current logic nor this patch affects this.\n4. This point is related to my suggestion about using database images as backup images. 
I think it is better to move this conversation somewhere else, for example to this [Etherpad](https://etherpad.opendev.org/p/Trove_Hibiscus_cycle_roadmap) (I already sent you the link via email).","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"},{"author":{"_account_id":28691,"name":"Bo Tran","email":"ministry.96.nd@gmail.com","username":"ministry"},"change_message_id":"3ce0c5e2fb9080d2d6c9c8c3c969c634d8dccde6","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":5,"id":"7056e7ab_643ac23a","in_reply_to":"83ccd34a_6a4804ab","updated":"2026-05-06 02:44:02.000000000","message":"I think we need an agreement about this topic and the next features for Trove.\n\nBTW, if we use a pre-auth token for the Swift client, it seems to break the logic of Trove. Let\u0027s think about backups afterward. Should all backups be in the same project (aka same owner)? Is that really what we want?\n\nSo, my opinions are:\n1. Backups should be transparent to all users.\n2. With the current (old) logic, can we add more features such as integrating with S3 (like AWS, Ceph RadosGW, ...)?\n3. Users should be able to download and view their backups easily.\n4. We should build a backup image for each datastore version — this was my previous idea because of some issues:\n   - We need better control over library conflicts.\n   - It\u0027s not transparent; if we change some logic, the latest image may not work as expected, which could cause various problems for new backups of all datastores.\n\nBTW, in my production environment, I use S3 (Ceph RadosGW) as the backend storage for backups. It lets my clients download, view, ... 
the backups directly.\nWe require our clients to use the S3 service and pay based on backup size: the more backups they store, the more they pay.","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"},{"author":{"_account_id":36080,"name":"Erkin Mussurmankulov","display_name":"Eric","email":"mangust404@gmail.com","username":"mongoose404","status":"PS Cloud services employee"},"change_message_id":"000676693bd184cfdd8e69d9435e2a82bf15e8f2","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"6aaa9415_ac873980","updated":"2026-04-30 07:43:27.000000000","message":"With the latest patchset, backward compatibility is working.\n\nThe sequence I used in my DevStack:\n1. switch to the branch with this patchset applied\n2. build and push the backup Docker images\n3. systemctl restart devstack@tr-tmgr.service devstack@tr-cond.service apache2\n4. run backup tests - OK\n5. switch back to master\n\n*emulate a backup run on old guestagent, new control plane (do not restart services), new backup images:*\n6. run backup tests - OK\n\n*emulate a backup run on old guestagent, old control plane, new backup images:*\n7. systemctl restart devstack@tr-tmgr.service devstack@tr-cond.service apache2\n8. 
run backup tests - OK","commit_id":"2d64d5d187f4dc48d268cfbf1022dbf6b5099f30"}],"trove/guestagent/datastore/service.py":[{"author":{"_account_id":26285,"name":"wu.chunyang","email":"wchy1001@gmail.com","username":"wu.chunyang"},"change_message_id":"a07f88fae7a5263ac0623adb360bde2a584faacd","unresolved":true,"context_lines":[{"line_number":489,"context_line":"        return cfg.get_configuration_property(\u0027backup_strategy\u0027)"},{"line_number":490,"context_line":""},{"line_number":491,"context_line":"    def create_backup(self, context, backup_info, volumes_mapping\u003d{},"},{"line_number":492,"context_line":"                      need_dbuser\u003dTrue, extra_params\u003d\u0027\u0027, is_local\u003dFalse):"},{"line_number":493,"context_line":"        storage_driver \u003d backup_info.get("},{"line_number":494,"context_line":"            \u0027storage_driver\u0027, CONF.storage_strategy)"},{"line_number":495,"context_line":"        backup_driver \u003d self.get_backup_strategy()"}],"source_content_type":"text/x-python","patch_set":5,"id":"abf4f304_450e2193","line":492,"range":{"start_line":492,"start_character":57,"end_line":492,"end_character":72},"updated":"2026-04-30 01:38:48.000000000","message":"It looks like this parameter is not used ?","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"},{"author":{"_account_id":36080,"name":"Erkin Mussurmankulov","display_name":"Eric","email":"mangust404@gmail.com","username":"mongoose404","status":"PS Cloud services employee"},"change_message_id":"b3707f89a6ed9fd02571b9eb0220764f9fcc2ef5","unresolved":false,"context_lines":[{"line_number":489,"context_line":"        return cfg.get_configuration_property(\u0027backup_strategy\u0027)"},{"line_number":490,"context_line":""},{"line_number":491,"context_line":"    def create_backup(self, context, backup_info, volumes_mapping\u003d{},"},{"line_number":492,"context_line":"                      need_dbuser\u003dTrue, extra_params\u003d\u0027\u0027, 
is_local\u003dFalse):"},{"line_number":493,"context_line":"        storage_driver \u003d backup_info.get("},{"line_number":494,"context_line":"            \u0027storage_driver\u0027, CONF.storage_strategy)"},{"line_number":495,"context_line":"        backup_driver \u003d self.get_backup_strategy()"}],"source_content_type":"text/x-python","patch_set":5,"id":"f23c598e_fc9fc4cf","line":492,"range":{"start_line":492,"start_character":57,"end_line":492,"end_character":72},"in_reply_to":"abf4f304_450e2193","updated":"2026-04-30 02:42:26.000000000","message":"Acknowledged, it\u0027s some leftover from the draft","commit_id":"e56f80a158e027803ae61f29feb0da168ab45fcd"}]}
