)]}'
{"/PATCHSET_LEVEL":[{"author":{"_account_id":9816,"name":"Takashi Kajinami","email":"kajinamit@oss.nttdata.com","username":"kajinamit"},"change_message_id":"7e23b2ec2dc2f02051ae461d1c896543242d3c39","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"1c335e19_51a1ea18","updated":"2025-09-29 05:11:38.000000000","message":"While I understand the point, I\u0027m unsure if replacing all units at this moment is really beneficial. Maybe we can add a note to the doc and it can be enough?","commit_id":"86f6436391ed22545773ddf7e9cbc4d2bef59d0d"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"3afa4b01c5904b5f98567c7e19a32d25a96babb2","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"f86e95f0_21834a1d","in_reply_to":"1c335e19_51a1ea18","updated":"2025-09-29 06:07:45.000000000","message":"I don\u0027t think many people would notice or take care to make sure their usage of the metrics is correct if we just updated the docs to say that the output units are incorrect.\n\nThe units for metrics provided by Ceilometer samples and in Gnocchi samples should be correct since they can be relied on for conversion, monitoring or billing purposes downstream. A 2.4% difference on account of expected units is enough to cause large discrepancies when trying to use it where megabytes/gigabytes are expected, or reconciling it with metrics that are actually in megabytes/gigabytes (e.g. 
backend storage metrics).","commit_id":"86f6436391ed22545773ddf7e9cbc4d2bef59d0d"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"f6911a55f235292c54cc3f8fe784210ba5a381bd","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"83eb4bc7_5c3c15af","in_reply_to":"3f613bd0_187d4b61","updated":"2025-10-01 23:14:37.000000000","message":"Thanks for your comments and testing, Jaromír. I have updated the release note to expand on the expected changes to the metrics as a result of the upgrade, for both Gnocchi and Prometheus.","commit_id":"86f6436391ed22545773ddf7e9cbc4d2bef59d0d"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"6d17abd2912b1c89f631e7e01ffc3bfba23c0d4e","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"1ec0f2f9_28ff5f0a","in_reply_to":"83eb4bc7_5c3c15af","updated":"2025-10-01 23:20:04.000000000","message":"I should also clarify that the only metrics affected are the storage and memory-related metrics noted in the commit message, and nothing else. CPU usage would not be affected by this change (though that does not necessarily reduce the significance of it).","commit_id":"86f6436391ed22545773ddf7e9cbc4d2bef59d0d"},{"author":{"_account_id":34975,"name":"Jaromír Wysoglad","email":"jwysogla@redhat.com","username":"jwysogla"},"change_message_id":"b8bbc80d9f1c6347ed919f96a303a7449c3ec2e2","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":5,"id":"3f613bd0_187d4b61","in_reply_to":"f86e95f0_21834a1d","updated":"2025-10-01 19:50:10.000000000","message":"There is a possibility that this could create issues for users of ceilometer. I don\u0027t know what kind of automation somebody could build on top of the metrics. 
I don\u0027t know about Gnocchi, but in Prometheus\u0027s case, this will create a new metric series. That will probably be visible in Grafana if somebody is running dashboards. The fact that something happened can also be seen when using the observabilityclient to retrieve metrics. I\u0027ve done a similar modification to the ceilometer_image_size metric to test this: https://paste.opendev.org/show/bbjqU77yyeHzIb4ojzFB/ I changed the unit from `B` to `C`. For some period of time (I haven\u0027t actually measured how long), I got the old and the new series at once, because Prometheus treated them as different metrics and the old ones were still valid when the new ones appeared. This could cause issues if somebody wanted to automatically work with the output further. I can imagine somebody doing some sort of computation using PromQL and there would be a potential to disrupt that as well. And if somebody tries to compute a value across a bigger amount of time (average CPU usage over a month maybe), then this could be an issue for a long time.\n\nThat being said, I agree that this should be addressed in some way. At the very least by noting it in the documentation as suggested by Takashi. But maybe merging this wouldn\u0027t be that bad? There is a note of the change in the release note at least. And if we merge it, then I guess doing it in a SLURP release cycle makes sense.","commit_id":"86f6436391ed22545773ddf7e9cbc4d2bef59d0d"},{"author":{"_account_id":9816,"name":"Takashi Kajinami","email":"kajinamit@oss.nttdata.com","username":"kajinamit"},"change_message_id":"8598e3557db32fa040a4c10d5a34e3264cb71b23","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"1d610674_50886107","updated":"2025-10-02 13:15:32.000000000","message":"My point is that if we know ceilometer uses G as GiB and M as MiB consistently, then we can consider that when building calculation logic for billing.\nWe can add this to the documentation to make users aware of this. 
This may not provide a very good experience but has no impact on existing users, who may otherwise be surprised by this change.","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"160eaa632b3aca946342b753b38d25b1ea2a78d1","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"31be6200_3d515941","in_reply_to":"1d610674_50886107","updated":"2025-10-02 18:28:47.000000000","message":"There are issues with this:\n\n* Some metrics appear to actually return gigabytes and not gigabytes, such as the volume.provider series of metrics. I\u0027ve left those unmodified. What would we do about these?\n* I still don\u0027t think simply adding documentation about the units is enough for existing (or even new) users to notice the discrepancy in the units, especially because it\u0027s been so long since they have been added.","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"35ade5499a0436e8b355464f5b5dbf214dc4e23c","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"b9a3ce9e_08c5e3a1","in_reply_to":"2707c5c2_b2be4159","updated":"2025-10-06 18:54:34.000000000","message":"So while we would need to make sure the upgrade release note lets people know their queries would need to handle the time period where the old and new metrics overlap, I\u0027m not sure this is a big problem. For dashboard use, while it would be visible I doubt it would cause trouble for anyone; anyone using CloudKitty and doing unit conversion might need to take care but it could be done by using e.g. 
`\u003cagg\u003e by (instance, unit) (...)`.\n\nIt\u0027s also worth noting that the metrics with the old and new samples won\u0027t really have any samples generated in parallel; the metric with the new unit \"replaces\" the old one, so this is only really visible when you\u0027re doing a query for a window that spans the time where the unit change happens.","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"f685ed732253168b2a38cfc7d4b30d25f7974c26","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"e48a2afb_8622d8cc","in_reply_to":"31be6200_3d515941","updated":"2025-10-02 18:31:20.000000000","message":"Correction: In the first point, I meant to say \"Some metrics appear to actually return gigabytes and not **gibibytes**, such as the volume.provider series of metrics\".","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":9816,"name":"Takashi Kajinami","email":"kajinamit@oss.nttdata.com","username":"kajinamit"},"change_message_id":"73deedfac6f07c2d250c08e665b82802161c7994","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":6,"id":"7a090224_7995f98f","in_reply_to":"62b658f5_d1d74ed6","updated":"2025-10-24 16:53:34.000000000","message":"Thanks!","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"4adbb92bcf7ca9d9894ba12aee1c0618e890784d","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"cfc2635f_6502f385","in_reply_to":"8872e12a_cbaa9028","updated":"2025-10-23 18:28:53.000000000","message":"Thanks for double-checking the details on the backend storage pools. 
I had assumed those numbers were in gigabytes (GB) based on [1], which is why I didn\u0027t update them.\n\nTo give some other use cases, it looks like other commonly used drivers such as RBD [1] are in gibibytes (GiB) as well.\n\nBased on this I\u0027d be inclined to change the units for the backend pool metrics to GiB as well. I\u0027ll submit a bug to Cinder upstream for the incorrect docs so they can handle it however they like (I suspect it\u0027ll just be a docs change since changing the API will be too difficult there, but they may want to check that all drivers report stats in the same units).\n\n[1]: https://github.com/openstack/cinder/blob/d02171164bdd702b12b59888b744d172f30d712d/cinder/volume/drivers/rbd.py#L752","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":9816,"name":"Takashi Kajinami","email":"kajinamit@oss.nttdata.com","username":"kajinamit"},"change_message_id":"bd3bec5e8987c3962df6a0db0008e7ac5e96b837","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"8872e12a_cbaa9028","in_reply_to":"b9a3ce9e_08c5e3a1","updated":"2025-10-23 15:42:40.000000000","message":"If there are any metrics actually representing GB, not GiB, then I agree we should fix the unit.\n\nSo looking at the latest change I see that all volume provider pool capacity meters are in GB? 
I see that the Cinder API reference mentions that _gb properties are all in GB[1] but looking at the actual driver code I see some drivers (for example the rbd driver[2]) actually report GiB.\n\nDo I misunderstand something?\n\nI may not block the overall \"fix\" if the other cores agree with it, but I think we should not merge an incomplete fix, to avoid fixing the same thing again and again.\n\n[1] https://docs.openstack.org/api-ref/block-storage/v3/?expanded\u003dlist-backups-with-detail-detail,list-all-back-end-storage-pools-detail#id430\n\n[2] https://github.com/openstack/cinder/blob/d02171164bdd702b12b59888b744d172f30d712d/cinder/volume/drivers/rsd.py#L370-L373\n    https://github.com/openstack/cinder/blob/d02171164bdd702b12b59888b744d172f30d712d/cinder/volume/drivers/solidfire.py#L2174\n    https://github.com/openstack/cinder/blob/d02171164bdd702b12b59888b744d172f30d712d/cinder/volume/drivers/pure.py#L1431-L1432","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"44c3c7d1633c8134f032d958c73ab1d76521de21","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"62b658f5_d1d74ed6","in_reply_to":"cfc2635f_6502f385","updated":"2025-10-23 19:08:30.000000000","message":"Updated, and bug created on the Cinder side.","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"390e372f479642887560767de8929b82cca4b7ae","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":6,"id":"2707c5c2_b2be4159","in_reply_to":"e48a2afb_8622d8cc","updated":"2025-10-06 18:42:43.000000000","message":"I\u0027m not sure how Prometheus queries for CloudKitty etc are designed. 
But for what it\u0027s worth, if you have a production deployment of Ceilometer publishing to Prometheus with multiple Ceilometer Notification Agents, I believe having multiple metrics represent a single resource, each with a different `instance` field, is a normal occurrence that needs to be taken into account when making PromQL queries/dashboards.\n\nIn [Jaromír\u0027s example](https://paste.opendev.org/show/bbjqU77yyeHzIb4ojzFB), if you really wanted one metric per resource, the query made wouldn\u0027t be enough. You would need to aggregate by resource ID in some fashion (in this case, perhaps `max without (instance) (ceilometer_image_size{job\u003d\"ceilometer\"})`).","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":13177,"name":"Emma Foley","email":"efoley@redhat.com","username":"emma-l-foley"},"change_message_id":"e0991ab92f9cb39e2082885cf76f903a8c1f4b41","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"e3ed9b3a_82f909a4","updated":"2025-10-07 11:12:11.000000000","message":"I have a counter proposal: Add the correct units (as you have done), but also re-add the old units so that it doesn\u0027t break anything.\nIf it\u0027s not too computationally intensive, the values could be properly converted before publishing. This would give accurate values for both MiB/GiB and MB/GB.","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":32968,"name":"Juan Larriba","email":"jlarriba@redhat.com","username":"jlarriba"},"change_message_id":"277db814084ee2653bdaef75b8c416c9d3b10651","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"1aee0719_f72acbe9","updated":"2025-10-07 07:28:48.000000000","message":"I wasn\u0027t convinced about this change. 
While it is true that the metric unit is deceptive, it has been like this for 8+ years so we don\u0027t know what this could break.\n\nHowever, in my opinion, the changes made after review comments from Takashi and Jaromir, including the great explanation in the release notes, are enough for users to take into account that the new unit is coming. It is not like we are changing the actual metric values from megabytes to mebibytes, it is just informing the users correctly.\n\nSo I vote that we merge it.","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":34975,"name":"Jaromír Wysoglad","email":"jwysogla@redhat.com","username":"jwysogla"},"change_message_id":"208da0f855bafd31dd23fef2edac3a9108c0da3c","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"56434ea5_61790942","updated":"2025-10-13 12:51:12.000000000","message":"Regarding Prometheus (I don\u0027t know much about Gnocchi, unfortunately) some potential issues were pointed out, but I don\u0027t think they should be too bad. The change and what\u0027s expected to happen in Prometheus / Gnocchi is pretty well described in the release note. Overall, this fixes an issue. It makes the metrics more correct and, as pointed out in one of the comments, the difference could be significant in some situations. I\u0027m for merging this, but I saw @kajinamit@oss.nttdata.com had some comments. 
WDYT, can we +W?","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"d2e5279fb8bbee48ccf3fcd6fb482e5b33a7d1c5","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"d5b0a517_55133e18","in_reply_to":"56434ea5_61790942","updated":"2025-10-20 20:04:12.000000000","message":"Hi @kajinamit@oss.nttdata.com, do you have any other feedback you\u0027d like to give for this patch?\n\nIf not, @jlarriba@redhat.com perhaps this might be ready for merging?","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":34975,"name":"Jaromír Wysoglad","email":"jwysogla@redhat.com","username":"jwysogla"},"change_message_id":"906a7d4bb38b964ca9590683911627c1c91bbacf","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"47b89ef9_828b4118","in_reply_to":"cbbc3478_64d53576","updated":"2025-10-08 06:39:10.000000000","message":"Also, regarding Prometheus and dashboards, this would make the issues I pointed out earlier bigger and more noticeable. Looking at the dashboards we provide with ceilometer + prometheus, I\u0027m pretty sure we\u0027d end up showing both values at once - so we\u0027d have two of the same lines just offset a little. 
So the current proposal seems better to me.","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"e73a7f01554e0ef4c2ba84f6f9777842ca52956f","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"f75b1f7f_77763c6a","in_reply_to":"e3ed9b3a_82f909a4","updated":"2025-10-07 16:54:59.000000000","message":"Hi Emma, thanks for the proposal.\n\nI understand the intent of what you\u0027re proposing, and Ceilometer actually used to offer a feature that allowed this to be done in the configuration (transformers) many years ago. Transformers were removed a long time ago due to many problems with how it worked, in favour of using aggregation and transformations in the storage backends (at the time only Gnocchi was supported).\n\nWithout transformers I\u0027m afraid this would not be practical for the following reasons:\n\n* It would require multiple samples to be published for the same resource metric (with different units), which might confuse any downstream consumers. 
Preferably they would be published under different metric names.\n* Gnocchi only allows one metric to be set per resource metric (and can\u0027t easily be changed by Ceilometer once created), so ideally we commit to only one or the other (in this case MiB/GiB, the correct ones).","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"9646c114f014d65c6045239df5a0ef39b7ccd727","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":8,"id":"cbbc3478_64d53576","in_reply_to":"f75b1f7f_77763c6a","updated":"2025-10-07 17:02:46.000000000","message":"Correction: Gnocchi allows only one **unit** to be set per resource metric (you can create as many metrics with unique names as you like).","commit_id":"0c75fbd79a054c538c84e738d9ef7126b2fe9203"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"1079faacc2617e711c719c302b752a45dda37458","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"60355cd0_730de924","updated":"2025-11-02 20:59:55.000000000","message":"Hi @jwysogla@redhat.com, are you happy for this patch to be merged in its current state?","commit_id":"447b6dcdc826b554c5028ded3e7aed9f52b16370"},{"author":{"_account_id":32968,"name":"Juan Larriba","email":"jlarriba@redhat.com","username":"jlarriba"},"change_message_id":"140d059abde41982beb4eedbd20269c941417048","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"1669c9f8_6932cfd0","updated":"2025-10-27 09:29:55.000000000","message":"I agree that this change will provide better information to users (realistically, I always thought that the Ceilometer metrics were in Gigabytes or Megabytes) with little impact on current 
users.","commit_id":"447b6dcdc826b554c5028ded3e7aed9f52b16370"},{"author":{"_account_id":9816,"name":"Takashi Kajinami","email":"kajinamit@oss.nttdata.com","username":"kajinamit"},"change_message_id":"73deedfac6f07c2d250c08e665b82802161c7994","unresolved":true,"context_lines":[],"source_content_type":"","patch_set":10,"id":"cd2f45d0_8d0cd29c","updated":"2025-10-24 16:53:34.000000000","message":"So the original problem was eventually narrowed down to the \"bad\" old decision to publish GiB values with unit \"GB\". So now we know that the approach (using GB instead of GiB) is used commonly for all metrics, and I\u0027m not sure if this is worth fixing given the upgrade impact.\n\nHowever, as I said earlier, if the other cores agree we should fix the wrong unit then I won\u0027t block. Feel free to vote +A.","commit_id":"447b6dcdc826b554c5028ded3e7aed9f52b16370"},{"author":{"_account_id":34975,"name":"Jaromír Wysoglad","email":"jwysogla@redhat.com","username":"jwysogla"},"change_message_id":"26d8580cce6087ed56cba4cb1569fcf67a226999","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"0bb5c7bf_433a90c2","in_reply_to":"60355cd0_730de924","updated":"2025-11-03 14:34:30.000000000","message":"Sorry for the delay. After what Takashi found (the difference between what unit is being used in the service code vs its documentation), I wanted to go through the code of the other service to double-check this isn\u0027t just reacting to a mistake in the documentation. 
I\u0027m not familiar with all the other service code, but doing some searches, this seems to be correct, so I\u0027m fine to merge it.","commit_id":"447b6dcdc826b554c5028ded3e7aed9f52b16370"}],"releasenotes/notes/fix-size-metric-units-e6028b4b4fc3e6aa.yaml":[{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"df6dc10fcd380d2555eb8ea5e9182e89cd025f50","unresolved":true,"context_lines":[{"line_number":19,"context_line":""},{"line_number":20,"context_line":"    * In Gnocchi, newly created metrics will set ``unit`` to the newer values."},{"line_number":21,"context_line":"      Existing metrics on existing resources, however, will not have their"},{"line_number":22,"context_line":"      unit updated automatically. They will need to be changed manually,"},{"line_number":23,"context_line":"      if required."},{"line_number":24,"context_line":"    * In Prometheus, the ``unit`` label will change for the above metrics,"},{"line_number":25,"context_line":"      causing Prometheus to treat them as separate metrics (though with"}],"source_content_type":"text/x-yaml","patch_set":6,"id":"f8dcdf9a_e11cd3e8","line":22,"updated":"2025-10-01 23:21:09.000000000","message":"Should we consider adding code to automatically update units, or would this be too disruptive?","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"},{"author":{"_account_id":36393,"name":"Callum Dickinson","email":"callum.dickinson@catalystcloud.nz","username":"Callum027","status":"Catalyst Cloud"},"change_message_id":"5b97ec3925b042655a4f463b60c49ebc4605c465","unresolved":false,"context_lines":[{"line_number":19,"context_line":""},{"line_number":20,"context_line":"    * In Gnocchi, newly created metrics will set ``unit`` to the newer values."},{"line_number":21,"context_line":"      Existing metrics on existing resources, however, will not have their"},{"line_number":22,"context_line":"      
unit updated automatically. They will need to be changed manually,"},{"line_number":23,"context_line":"      if required."},{"line_number":24,"context_line":"    * In Prometheus, the ``unit`` label will change for the above metrics,"},{"line_number":25,"context_line":"      causing Prometheus to treat them as separate metrics (though with"}],"source_content_type":"text/x-yaml","patch_set":6,"id":"f49d0518_1cb769de","line":22,"in_reply_to":"f8dcdf9a_e11cd3e8","updated":"2025-10-05 21:23:37.000000000","message":"Nevermind. There is no realistic way to implement this if the existing code doesn\u0027t update the unit, because actively changing it would require querying metric objects, checking the unit, and then sending individual requests to update it.","commit_id":"b7bed8bedf4ee29742b112b7895c4a5abb775594"}]}
