)]}'
{"/COMMIT_MSG":[{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":4,"context_line":"Commit:     Jianjian Huo \u003cjhuo@nvidia.com\u003e"},{"line_number":5,"context_line":"CommitDate: 2024-02-28 11:31:50 -0800"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"proxy: use cooperative tokens to coalesce updating shard range requests into backend"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"The cost of memcache misses could be deadly. For example, when"},{"line_number":10,"context_line":"updating shard range cache query miss, PUT requests would have to"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":9,"id":"4d67b8a4_53aadbeb","line":7,"updated":"2024-03-15 16:01:16.000000000","message":"since we\u0027re trying to create a \"generic\" interface in the pre-req patch it might be beneficial for the first patch to introduce its consumption to show it can be used (trivially?) in multiple contexts.\n\nDid you investigate how hard it would be to add this to listing-shard-ranges as well?","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":4,"context_line":"Commit:     Jianjian Huo \u003cjhuo@nvidia.com\u003e"},{"line_number":5,"context_line":"CommitDate: 2024-02-28 11:31:50 -0800"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"proxy: use cooperative tokens to coalesce updating shard range requests into backend"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"The cost of memcache misses could be deadly. 
For example, when"},{"line_number":10,"context_line":"updating shard range cache query miss, PUT requests would have to"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":9,"id":"1e54dc1b_f9f83c7d","line":7,"in_reply_to":"4d67b8a4_53aadbeb","updated":"2024-03-20 20:39:24.000000000","message":"the logic flow of listing shard range path is very similar to updating shard range path. I did look into it too, should be easy to apply the generic cooperative token to that path.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":14,"context_line":"lot of 500/503 errors."},{"line_number":15,"context_line":""},{"line_number":16,"context_line":"We have seen cache misses frequently to updating shard range cache"},{"line_number":17,"context_line":"in production, due to Memcached out-of-memory and cache evictions."},{"line_number":18,"context_line":"To cope with those kind of situations, a memcached based cooperative"},{"line_number":19,"context_line":"token mechanism can be added into proxy-server to coalesce lots of"},{"line_number":20,"context_line":"in-flight backend requests into a few: when updating shard range"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":9,"id":"7600c4da_5262c6bc","line":17,"updated":"2024-03-15 16:01:16.000000000","message":"right, this is the problem as I understand it - basically the question is \"why wasn\u0027t cache skipping sufficient\" - I think we\u0027re still learning to what extent the \"cold-start\" problem may benefit from cooperative cache filling.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian 
Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":14,"context_line":"lot of 500/503 errors."},{"line_number":15,"context_line":""},{"line_number":16,"context_line":"We have seen cache misses frequently to updating shard range cache"},{"line_number":17,"context_line":"in production, due to Memcached out-of-memory and cache evictions."},{"line_number":18,"context_line":"To cope with those kind of situations, a memcached based cooperative"},{"line_number":19,"context_line":"token mechanism can be added into proxy-server to coalesce lots of"},{"line_number":20,"context_line":"in-flight backend requests into a few: when updating shard range"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":9,"id":"26c21a25_287214d1","line":17,"in_reply_to":"7600c4da_5262c6bc","updated":"2024-08-05 19:34:02.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":23,"context_line":"from backend container servers. And the following cache miss"},{"line_number":24,"context_line":"requests will wait for cache filling to finish, instead of all"},{"line_number":25,"context_line":"querying the backend container servers. 
This will prevent a flood"},{"line_number":26,"context_line":"of backend requests to overload container servers."},{"line_number":27,"context_line":""},{"line_number":28,"context_line":"Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":9,"id":"c34fffc7_ea920a92","line":26,"updated":"2024-03-15 16:01:16.000000000","message":"mostly we see container-servers get overloaded because of premature cache eviction - we might want to decide if we want to describe this feature as \"protecting the container servers\" or \"protecting memcache\"","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":23,"context_line":"from backend container servers. And the following cache miss"},{"line_number":24,"context_line":"requests will wait for cache filling to finish, instead of all"},{"line_number":25,"context_line":"querying the backend container servers. 
This will prevent a flood"},{"line_number":26,"context_line":"of backend requests to overload container servers."},{"line_number":27,"context_line":""},{"line_number":28,"context_line":"Change-Id: I38c11b7aae8c4112bb3d671fa96012ab0c44d5a2"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":9,"id":"52deec4e_983ab434","line":26,"in_reply_to":"c34fffc7_ea920a92","updated":"2024-08-05 19:34:02.000000000","message":"it will help protect both container and memcache servers; have updated the commit message.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"5fd322ed546bbf3260f518eff4abba6da4cc4a8d","unresolved":true,"context_lines":[{"line_number":4,"context_line":"Commit:     Jianjian Huo \u003cjhuo@nvidia.com\u003e"},{"line_number":5,"context_line":"CommitDate: 2024-07-11 15:31:31 -0700"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"proxy: use cooperative tokens to coalesce updating shard range requests into backend"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"The cost of memcache misses could be deadly. 
For example, when"},{"line_number":10,"context_line":"updating shard range cache query miss, PUT requests would have to"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":25,"id":"c5ebcbc7_f89995cf","line":7,"range":{"start_line":7,"start_character":42,"end_line":7,"end_character":50},"updated":"2024-07-23 00:33:19.000000000","message":"Do you think we\u0027ll want to do something similar for listing shard ranges as a follow-up?","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":4,"context_line":"Commit:     Jianjian Huo \u003cjhuo@nvidia.com\u003e"},{"line_number":5,"context_line":"CommitDate: 2024-07-11 15:31:31 -0700"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"proxy: use cooperative tokens to coalesce updating shard range requests into backend"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"The cost of memcache misses could be deadly. 
For example, when"},{"line_number":10,"context_line":"updating shard range cache query miss, PUT requests would have to"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":25,"id":"1dc4a405_e4107e92","line":7,"range":{"start_line":7,"start_character":42,"end_line":7,"end_character":50},"in_reply_to":"c5ebcbc7_f89995cf","updated":"2024-08-05 19:34:02.000000000","message":"yes, if updating shard range works with cooperative token on production, listing shard ranges will be the next.","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":31,"context_line":""},{"line_number":32,"context_line":"Drive-by fix: when memcache is not available, object controller will"},{"line_number":33,"context_line":"only need to retrieve a specific shard range from the container server"},{"line_number":34,"context_line":"to send the update request to."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"Co-Authored-By: Clay Gerrard \u003cclay.gerrard@gmail.com\u003e"},{"line_number":37,"context_line":"Co-Authored-By: Tim Burke \u003ctim.burke@gmail.com\u003e"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":45,"id":"87d7b132_72233786","line":34,"updated":"2025-05-13 22:06:08.000000000","message":"nice call out!\n\nagree, \"memcache is not available\" is very niche; perhaps ONLY relevant for testing/dev - as such, I agree it would be helpful to break it out to a separate change: it was merely a side-effect improvement of the cleanup refactoring - perfectly reasonable as a \"drive-by fix\".  
Good judgement.","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":31,"context_line":""},{"line_number":32,"context_line":"Drive-by fix: when memcache is not available, object controller will"},{"line_number":33,"context_line":"only need to retrieve a specific shard range from the container server"},{"line_number":34,"context_line":"to send the update request to."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"Co-Authored-By: Clay Gerrard \u003cclay.gerrard@gmail.com\u003e"},{"line_number":37,"context_line":"Co-Authored-By: Tim Burke \u003ctim.burke@gmail.com\u003e"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":45,"id":"39607c2e_97d801c3","line":34,"in_reply_to":"87d7b132_72233786","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":31,"context_line":""},{"line_number":32,"context_line":"Drive-by fix: when memcache is not available, object controller will"},{"line_number":33,"context_line":"only need to retrieve a specific shard range from the container server"},{"line_number":34,"context_line":"to send the update request to."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"UpgradeImpact: some of existing shard range cache metrics would have"},{"line_number":37,"context_line":"backend request status_int appended in the end, for example:"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":56,"id":"cd9b6c57_9c68df6e","line":34,"updated":"2025-09-25 
22:24:36.000000000","message":"I assume the previous behavior was fetching the whole shards list and storing it in infocache - so this is probably better.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":31,"context_line":""},{"line_number":32,"context_line":"Drive-by fix: when memcache is not available, object controller will"},{"line_number":33,"context_line":"only need to retrieve a specific shard range from the container server"},{"line_number":34,"context_line":"to send the update request to."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"UpgradeImpact: some of existing shard range cache metrics would have"},{"line_number":37,"context_line":"backend request status_int appended in the end, for example:"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":56,"id":"48f0fa17_051e093b","line":34,"in_reply_to":"cd9b6c57_9c68df6e","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":38,"context_line":"  \u0027object.shard_updating.cache.set\u0027 would become"},{"line_number":39,"context_line":"        \u0027object.shard_updating.cache.set.200\u0027, and"},{"line_number":40,"context_line":"  \u0027object.shard_updating.cache.set_error\u0027 would become"},{"line_number":41,"context_line":"        \u0027object.shard_updating.cache.set_error.200\u0027"},{"line_number":42,"context_line":""},{"line_number":43,"context_line":"Co-Authored-By: Clay Gerrard 
\u003cclay.gerrard@gmail.com\u003e"},{"line_number":44,"context_line":"Co-Authored-By: Tim Burke \u003ctim.burke@gmail.com\u003e"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":56,"id":"e55afa28_9b57b6e9","line":41,"updated":"2025-09-25 22:24:36.000000000","message":"But... WHY?  Isn\u0027t a 200 response implied when you\u0027re doing a memcache set?  What other response could result in trying to set a value in memcache?","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":38,"context_line":"  \u0027object.shard_updating.cache.set\u0027 would become"},{"line_number":39,"context_line":"        \u0027object.shard_updating.cache.set.200\u0027, and"},{"line_number":40,"context_line":"  \u0027object.shard_updating.cache.set_error\u0027 would become"},{"line_number":41,"context_line":"        \u0027object.shard_updating.cache.set_error.200\u0027"},{"line_number":42,"context_line":""},{"line_number":43,"context_line":"Co-Authored-By: Clay Gerrard \u003cclay.gerrard@gmail.com\u003e"},{"line_number":44,"context_line":"Co-Authored-By: Tim Burke \u003ctim.burke@gmail.com\u003e"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":56,"id":"e5c32260_b4ff2935","line":41,"in_reply_to":"e55afa28_9b57b6e9","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"d81db38e58975873ba863b00885143bf9942bd42","unresolved":false,"context_lines":[{"line_number":38,"context_line":"  \u0027object.shard_updating.cache.set\u0027 would become"},{"line_number":39,"context_line":"        \u0027object.shard_updating.cache.set.200\u0027, 
and"},{"line_number":40,"context_line":"  \u0027object.shard_updating.cache.set_error\u0027 would become"},{"line_number":41,"context_line":"        \u0027object.shard_updating.cache.set_error.200\u0027"},{"line_number":42,"context_line":""},{"line_number":43,"context_line":"Co-Authored-By: Clay Gerrard \u003cclay.gerrard@gmail.com\u003e"},{"line_number":44,"context_line":"Co-Authored-By: Tim Burke \u003ctim.burke@gmail.com\u003e"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":56,"id":"3a407d52_0540237a","line":41,"in_reply_to":"e5c32260_b4ff2935","updated":"2025-09-29 19:57:52.000000000","message":"whoo hoo!  no more UpgradeImpact!  Better swift by default!","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"}],"/PATCHSET_LEVEL":[{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"aa38a88b_9a9255bd","updated":"2024-02-15 12:53:19.000000000","message":"@Jianjian thanks for trying out the decomposed approach.\n\nI\u0027m not quite convinced that the generic ``CooperativeCachePopulator`` class is a necessary compromise:\n\n- we pass it a function which gets from backend, parses and validates the response body, and has exactly the data structure we want in the obj controller i.e. 
NamespaceBoundList, but that cannot be returned because the helper class wants the data type that should be written to memcache (and also forces the same into infocache).\n\n- so the caller has to construct the NamespaceBoundList again.\n\nWe could work around that by perhaps passing the ``CooperativeCachePopulator`` an object that has a backend get method, and that object could stash the data type that the caller wants...but it\u0027s all getting very complicated for the sake of a generic class that has only one use case.\n\nIMHO a better compromise would be to make the generic helper class/function specific to fetching *namespaces*. It could re-use the existing set_namespaces_in_cache function for example, and return a NamespaceBoundList.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":2,"id":"7629b2a1_2b41b70e","updated":"2024-02-20 05:24:03.000000000","message":"thanks!","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"d698eb927d6d7a169168158277203f19bd319e28","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":9,"id":"337b4d0b_8cc26455","updated":"2024-03-15 16:37:10.000000000","message":"I wasn\u0027t convinced that populate_cache_with_cooperative_token is helping, so\nI tried out an alternative approach here https://review.opendev.org/c/openstack/swift/+/913425","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":9,"id":"61fe565d_f9675bb6","updated":"2024-03-15 16:01:16.000000000","message":"ok, well this change doesn\u0027t really have much in the way of tests either.\n\nAnd *maybe* if the `populate_cache_with_cooperative_token` helper was well tested independently we wouldn\u0027t so much have to write tests that \"start up multiple puts and make sure they serialize as expected\" as much as mostly only expect the new tests in this change to cover the expected valid returns from the helper.\n\nThe helper returns a three-tuple; I\u0027m not sure exactly how many combinations of valid returns there would be - but presumably there\u0027d at least be tests for the various backend response codes assuming we DO have to make the fetch; maybe somehow some of the existing tests already enumerate those and mostly fall into the \"go ahead and do the fetch\" case because the stub memcache never has the cooperative_token key larger than num_tokens.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":10,"id":"eec76994_0b4aca28","updated":"2024-03-20 20:39:24.000000000","message":"thanks for the review!","commit_id":"384d58f7b8d451262673998bb5e69ded980a94b5"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":15,"id":"f80fd687_3cb7b848","updated":"2024-04-22 15:06:46.000000000","message":"sorry, I didn\u0027t see this update until this morning - I have 
some other patches I also need to look at; I\u0027m not sure we can get everything ready to carry today - we might have to triage.  There\u0027s always another release coming up next week or the week after!","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"7d62748a0214fd0e6037e4b24de687f776d83aa1","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":15,"id":"0aa2c171_fa3a162c","updated":"2024-04-22 17:38:22.000000000","message":"thanks for the reviews!","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":16,"id":"3226a3c2_22fdb54c","updated":"2024-04-24 14:02:33.000000000","message":"I mostly wanted to satisfy myself that the default path is equivalent to master, which it appears to be :)\n\nWatch out for the erroneous config parsing - this could blow up on proxy restart!","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":16,"id":"56c9dd66_e7e88ec6","updated":"2024-04-23 01:43:15.000000000","message":"I\u0027m reasonably confident this change doesn\u0027t do anything different than master unless you turn it on - which is a pretty low bar but probably good enough to carry.\n\nThe behavior when it *is* turned on is pretty complex and we may need more experience with the telemetry under non trivial load to understand if we\u0027re making the right trade offs with complexity/correctness.  
Hopefully this telemetry will evolve/improve to include in particular stats about how long requests are waiting on memcache to get populated before the implementation reaches its final form.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":17,"id":"86dd3dd8_35827562","updated":"2024-04-30 05:35:34.000000000","message":"thanks for the reviews and help!\nwill add more metrics in next patch.","commit_id":"cada2ab51b71a88f249f71b12d4d4a28ae3bc32a"},{"author":{"_account_id":7233,"name":"Matthew Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"56b5b1e2ff47b72471aca443f66f45dd8838c440","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":23,"id":"8d37bed6_a5a2d7d6","updated":"2024-07-10 07:41:34.000000000","message":"Looks great! We probably need to mention that the token wait time is 10x the wait interval or something. Because it isn\u0027t mentioned and isn\u0027t obvious, and would be good to know... 
or maybe made configurable?","commit_id":"0d72817c9dc7ae50ab4db73bdfe2093972cac7c0"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"325dd989caa00abf1fb5b22f27f487a778d8c905","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":30,"id":"dab0fbe6_27d1a746","updated":"2024-09-26 04:53:40.000000000","message":"Thanks all for the reviews!","commit_id":"01bf2f6fd030ee8285a6b1137432ba83af818884"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"5819602b6dcdf3347a96c1cd6073fbb5be0aa35a","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":31,"id":"31a1b219_09f07147","updated":"2024-11-06 17:12:59.000000000","message":"recheck\nunrelated and known test failure.","commit_id":"b34ad0fa3b0e0e8813fe277a0edb75dff8585151"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"b77645c85c5aaf01a53ec4fcdfded6e6e62160fc","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":32,"id":"e3f515c9_45143c4e","updated":"2025-02-05 19:11:09.000000000","message":"these diffs are looking a lot tighter than the last time I remember looking at them.  Aside from some nits probably the place I should spend my time is with the tests - maybe revert the change and see how they fail.","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"7eed807b1f405d68b9db73718516c864f4a77163","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":38,"id":"911a66fe_225d210c","updated":"2025-04-29 20:46:53.000000000","message":"So to test this, I wanted to restart memcache aggressively during some `swift-bench` runs. 
Like,\n```\nwhile sleep 1 \u0026\u0026 sudo systemctl restart memcached; do true; done\n```\naggressive. Cherry-picking https://review.opendev.org/c/openstack/swift/+/861271 definitely helped by making auth tokens no longer dependent on memcache. I also needed to remember how to disable memcache error-limiting (`error_suppression_interval \u003d 0` in `[filter:cache]`) and manually sharded a container.\n\nBaseline run (memcache happy, 1000 PUTs, 10 concurrency): 4-9 log lines for `Caching updating shards for ...`\n\nWith memcache thrashing (still 1000 PUTs, 10 concurrency): 93-94 log lines\n\nAfter configuring `namespace_cache_use_token \u003d true`, while still having memcache thrashing: 49-52 log lines\n\nOK, maybe I was thrashing memcache a little _too_ aggressively. Repeating with a 10s sleep between restarts (so 2-3 restarts over a run):\n\n`namespace_cache_use_token` off: 24-25 log lines\n\n`namespace_cache_use_token` on: 8-9 log lines\n\nAnd once I stop thrashing memcache (but still doing one restart before the run), it\u0027s down to 3-4. 
Gonna dig through the code more this afternoon, but this all looks like a definite win to me.\n\nI wonder if we could apply something similar to account/container info requests to help with those sorts of thundering herds, too...","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":39,"id":"d5dc0370_30ecc56d","updated":"2025-05-05 21:42:58.000000000","message":"I think the OOP design is much better, I think the DirectCachePopulator should just be a degenerate case of the CooperativeCachePopulator.\n\nBut I think the pre-req patch should change and that will have knock on effects here.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":40,"id":"061f710a_51e48876","updated":"2025-05-09 16:29:52.000000000","message":"Thanks a lot for the reviews!","commit_id":"aba7bd32f1830cc17430f95d20db10ba29dfdd52"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":45,"id":"aa3310c7_fe45f007","updated":"2025-05-13 22:06:08.000000000","message":"I think my only reservations here have to do with stats:\n\nI don\u0027t think we should change `object.shard_updating.cache.set` to `object.shard_updating.cache.set.200` - nor do I think we *need* to?\n\nI don\u0027t know how we avoid changing `object.shard_updating.cache.miss.200` to `object.shard_updating.cache.miss` i.e. 
a miss ends up being a hit after a cooperative token sleep/retry without a status/backend_request - maybe we could add a new \"status\" for `object.shard_updating.cache.miss.with_token`???","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":49,"id":"24cc4d69_7d4296ff","updated":"2025-05-30 22:35:41.000000000","message":"Thanks a lot for the reviews!","commit_id":"34a6ea0058d3db7c47430831474b8780cc256d35"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":56,"id":"34ea8207_7e89ca1e","updated":"2025-09-25 22:24:36.000000000","message":"this is a really exciting optimization, on master w/ a sharded db:\n\n```\nvagrant@saio:~$ swift-manage-shard-ranges /srv/node2/sdb2/containers/95/16a/17e6398409dff2cc1b88046b6486a16a/17e6398409dff2cc1b88046b6486a16a.db show | grep name\nLoaded db broker for AUTH_test/sharded-tst\nExisting shard ranges:\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-0\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-1\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-2\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-3\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-4\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-5\",\n    \"name\": 
\".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-6\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-7\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-8\",\n    \"name\": \".shards_AUTH_test/sharded-tst-de9eb8cfdffca157a359f26e0cd05f46-1758834159.76211-9\",\n```\n\n... I\u0027ve configured my proxy to allow better concurrency:\n\n```\nvagrant@saio:~$ head /etc/swift/proxy-server/default.conf-template \n[DEFAULT]\nworkers \u003d 1\nmemcache_max_connections \u003d 100\n```\n\nI reset memcache \u0026 syslog and check my assumptions:\n\n```\nvagrant@saio:~$ sudo truncate --size 0 /var/log/syslog \u0026\u0026 sudo systemctl restart memcached.service \u0026\u0026 swift stat sharded-tst \u0026\u0026 grep -v STDERR /var/log/syslog | grep GET | grep \"states\u003dupdating\" | wc -l\n               Account: AUTH_test\n             Container: sharded-tst\n               Objects: 100\n                 Bytes: 190\n              Read ACL:\n             Write ACL:\n               Sync To:\n              Sync Key:\n          Content-Type: text/plain; charset\u003dutf-8\n           X-Timestamp: 1758834046.17187\n         Last-Modified: Thu, 25 Sep 2025 21:13:49 GMT\n         Accept-Ranges: bytes\n      X-Storage-Policy: default\n                  Vary: Accept\n            X-Trans-Id: tx438f5fdaea8f441f936fe-0068d5b235\nX-Openstack-Request-Id: tx438f5fdaea8f441f936fe-0068d5b235\n0\n\n```\n\nthen make a bunch of concurrent PUTs\n\n```\nvagrant@saio:~$ eval $(swift auth); for obj_name in $(seq -f \"obj-%04g\" 0 99); do curl -H \"x-auth-token: $OS_AUTH_TOKEN\" http://saio:8080/v1/AUTH_test/sharded-tst/${obj_name} -XPUT --data \"${obj_name}\" \u0026 done; time wait\n...\n```\n\nwith even a *moderate* delay in the container server:\n\n```\ndiff --git a/swift/container/server.py b/swift/container/server.py\nindex 1bd0f5b62..9060d2ed4 100644\n--- 
a/swift/container/server.py\n+++ b/swift/container/server.py\n@@ -19,7 +19,7 @@ import sys\n import time\n import traceback\n \n-from eventlet import Timeout\n+from eventlet import Timeout, sleep\n \n from urllib.parse import quote\n \n@@ -784,6 +784,7 @@ class ContainerController(BaseStorageServer):\n         :param out_content_type: content type as a string.\n         :returns: an instance of :class:`swift.common.swob.Response`\n         \"\"\"\n+        sleep(0.5)\n         override_deleted \u003d info and config_true_value(\n             req.headers.get(\u0027x-backend-override-deleted\u0027, False))\n         resp_headers \u003d gen_resp_headers(\n\n```\n\nI see that ALL of these PUT requests miss the initial updating-shards cache object and storm to the backend:\n\n```\nreal\t0m3.366s\nuser\t0m0.241s\nsys\t0m0.203s\nvagrant@saio:~$ grep -v STDERR /var/log/syslog | grep GET | grep \"states\u003dupdating\" | wc -l\n100\n```\n\n... whereas with THIS patch and the same/default config \u0026 slow container server shard range responses:\n\n```\nvagrant@saio:~$ swift-init restart proxy\nSignal proxy-server  pid: 13532  signal: Signals.SIGTERM\nSignal proxy-server  pid: 13533  signal: Signals.SIGTERM\nproxy-server (13532) appears to have stopped\nproxy-server (13533) appears to have stopped\nWARNING: Unable to modify max process limit.  
Running as non-root?\nStarting proxy-server...(/etc/swift/proxy-server/proxy-noauth.conf.d)\nStarting proxy-server...(/etc/swift/proxy-server/proxy-server.conf.d)\nvagrant@saio:~$ sudo truncate --size 0 /var/log/syslog \u0026\u0026 sudo systemctl restart memcached.service \u0026\u0026 swift stat sharded-tst \u0026\u0026 grep -v STDERR /var/log/syslog | grep GET | grep \"states\u003dupdating\" | wc -l\n               Account: AUTH_test\n             Container: sharded-tst\n               Objects: 100\n                 Bytes: 190\n              Read ACL:\n             Write ACL:\n               Sync To:\n              Sync Key:\n          Content-Type: text/plain; charset\u003dutf-8\n           X-Timestamp: 1758834046.17187\n         Last-Modified: Thu, 25 Sep 2025 21:13:49 GMT\n         Accept-Ranges: bytes\n      X-Storage-Policy: default\n                  Vary: Accept\n            X-Trans-Id: tx9d81a6ca26094f5892ade-0068d5b335\nX-Openstack-Request-Id: tx9d81a6ca26094f5892ade-0068d5b335\n0\nvagrant@saio:~$ eval $(swift auth); for obj_name in $(seq -f \"obj-%04g\" 0 99); do curl -H \"x-auth-token: $OS_AUTH_TOKEN\" http://saio:8080/v1/AUTH_test/sharded-tst/${obj_name} -XPUT --data \"${obj_name}\" \u0026 done; time wait\n...\n\nreal\t0m2.727s\nuser\t0m0.252s\nsys\t0m0.205s\nvagrant@saio:~$ grep -v STDERR /var/log/syslog | grep GET | grep \"states\u003dupdating\" | wc -l\n3\n```\n\nI get *exactly* the expected num_token backend requests AND it\u0027s faster over all (majority of requests avoid the slow backend request)\n\nFurther, the benefit is not limited to artificially slow shard-listing responses (although with 10K shards 500ms is not that un-realistic) - even with a default small/fast shard resp on a vsaio I see 40-60 backend requests on master while with this change a single proxy worker only makes 3 (or sometimes one? 
[1]) backend updating request!\n\nI really don\u0027t see anything in *this* patch that should prevent us maintaining/improving this awesome and useful optimization going forward:\n\n962315: sq? some notes from review | https://review.opendev.org/c/openstack/swift/+/962315\n\n1. !? if you increase proxy workers you always get the expected 3 even with fast resp, so I assume it\u0027s more memcache serialization going on somehow although I couldn\u0027t find it, and stats weren\u0027t really helpful:\n\n```\nswift_token{account\u003d\"AUTH_test\", container\u003d\"sharded-tst\", event\u003d\"backend_reqs\", host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", resource\u003d\"shard_updating\", set_cache_state\u003d\"set\", status\u003d\"200\", token\u003d\"with_token\"}\n1\ncontainer_shard_ranges_cache{host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", method\u003d\"shard_updating\", metric\u003d\"hit\", service\u003d\"proxy-server\", target\u003d\"object\"}\n99\ncontainer_shard_ranges_cache{host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", method\u003d\"shard_updating\", metric\u003d\"miss\", service\u003d\"proxy-server\", status\u003d\"200\", target\u003d\"object\"}\n1\n```\n\nI know it *sounds* crazy, but I swear testing with my sq- branch resulted in the expected 3 backend requests:\n\n```\nswift_token{account\u003d\"AUTH_test\", container\u003d\"sharded-tst\", event\u003d\"backend_reqs\", host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", resource\u003d\"shard_updating\", set_cache_state\u003d\"set\", status\u003d\"200\", token\u003d\"with_token\"}\n3\nswift_token{account\u003d\"AUTH_test\", container\u003d\"sharded-tst\", event\u003d\"cache_served\", host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", lack_retries\u003d\"False\", resource\u003d\"shard_updating\", token\u003d\"no_token\"}\n46\ncontainer_shard_ranges_cache{host\u003d\"proxy\", 
instance\u003d\"localhost:9100\", job\u003d\"saio\", method\u003d\"shard_updating\", metric\u003d\"hit\", service\u003d\"proxy-server\", target\u003d\"object\"}\n51\ncontainer_shard_ranges_cache{host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", method\u003d\"shard_updating\", metric\u003d\"miss\", service\u003d\"proxy-server\", status\u003d\"200\", target\u003d\"object\"}\n3\ncontainer_shard_ranges_cache{host\u003d\"proxy\", instance\u003d\"localhost:9100\", job\u003d\"saio\", method\u003d\"shard_updating\", metric\u003d\"miss\", service\u003d\"proxy-server\", target\u003d\"object\"}\n46\n```","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"d81db38e58975873ba863b00885143bf9942bd42","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":58,"id":"4ef7ea6b_e1dda1f4","updated":"2025-09-29 19:57:52.000000000","message":"Now that the UpgradeImpact is cleared I can\u0027t think of any operator that wouldn\u0027t want us to merge this marked improvement to scaling request load for sharded containers.\n\nI think the tests are in a good place for helping maintain this use-case for the CooperativeCachePopulator, but I found one was pretty flakey:\n\n```\nfor i in {1..10}; do pytest swift/test/unit/proxy/controllers/test_obj.py -k test_get_backend_updating_shard_concurrent_reqs_with_failures; if [ $? 
-ne 0 ]; then break; fi; done\n```\n\nprobably mostly my fault for offering a \"squash\" diff that wasn\u0027t very good/useful.\n\nI think we should fix the test before we merge.\n\nI\u0027d like a second opinion on the ValueError in set_namespaces_in_cache - I think it\u0027d be justifiable to pull that out into an optional follow-on since it makes no difference to in-tree code outside of the ONE new test that calls it with updating shards key to assert we blow up.","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":58,"id":"fc74811d_b0afdbb5","updated":"2025-09-29 18:14:34.000000000","message":"thanks a lot for the reviews and help!","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":7233,"name":"Matthew Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"dc52b27629969fd001d64b4acc2a5a7d5704605d","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":59,"id":"8a3c15d6_f72ffac0","updated":"2025-09-30 05:28:35.000000000","message":"Looking great, score for just some inline questions and a doc fix.","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":59,"id":"049be46d_3e5a3c3a","updated":"2025-09-30 16:01:56.000000000","message":"Thanks for fixing those tests Jian!","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"2b5a4ed68644305640d186f4f58b6055847c1339","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":59,"id":"61691d28_77c090dc","updated":"2025-09-30 17:41:34.000000000","message":"linking a few follow-ups post merge.","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"2b5a4ed68644305640d186f4f58b6055847c1339","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":59,"id":"b6090c73_312fe7c7","in_reply_to":"52dc5878_eca71d23","updated":"2025-09-30 17:41:34.000000000","message":"I think the sample configs could be improved if we wanted:\n\n962608: doc: specify seconds in proxy-server.conf-sample | https://review.opendev.org/c/openstack/swift/+/962608","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":false,"context_lines":[],"source_content_type":"","patch_set":59,"id":"52dc5878_eca71d23","in_reply_to":"8a3c15d6_f72ffac0","updated":"2025-09-30 16:01:56.000000000","message":"\u003e just some inline questions\n\n\u003e However, because of the distributed nature of authors and reviewers it’s imperative that you try your best to answer your own questions as part of your review.\n\nhttps://docs.openstack.org/swift/latest/contributor/review_guidelines.html#leave-comments\n\n\u003e but also consider that if you were able to recognize the intent of the statement\n\nhttps://docs.openstack.org/swift/latest/contributor/review_guidelines.html#documentation\n\n\u003e Looking great\n\nMatt, I\u0027m sorry I couldn\u0027t tell for sure from your comment - can you please clarify: why did you -1 this change?\n\n\u003e A negative 
score means that to the best of your abilities you have not been able to your satisfaction, to justify the value of a change against the cost of its deficiencies and risks\n\nhttps://docs.openstack.org/swift/latest/contributor/review_guidelines.html#scoring","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"}],"etc/proxy-server.conf-sample":[{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":true,"context_lines":[{"line_number":167,"context_line":"# Whether to use cooperative token on updating namespace cache to coalesce the"},{"line_number":168,"context_line":"# requests which fetch updating namespaces from the backend and set them in"},{"line_number":169,"context_line":"# memcached."},{"line_number":170,"context_line":"# namespace_cache_use_token \u003d False"},{"line_number":171,"context_line":"#"},{"line_number":172,"context_line":"# For cooperative token enabled on updating namespace cache, when requests"},{"line_number":173,"context_line":"# which didn\u0027t acquired a token and are waiting for other requests to fill in"}],"source_content_type":"application/octet-stream","patch_set":16,"id":"25599116_d722dae6","line":170,"updated":"2024-04-24 14:02:33.000000000","message":"Note: with this patchset we\u0027d need to use 0 or 1 to void the float() cast blowing up in the proxy config parsing","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":167,"context_line":"# Whether to use cooperative token on updating namespace cache to coalesce the"},{"line_number":168,"context_line":"# requests which fetch updating namespaces from the backend and set them in"},{"line_number":169,"context_line":"# 
memcached."},{"line_number":170,"context_line":"# namespace_cache_use_token \u003d False"},{"line_number":171,"context_line":"#"},{"line_number":172,"context_line":"# For cooperative token enabled on updating namespace cache, when requests"},{"line_number":173,"context_line":"# which didn\u0027t acquired a token and are waiting for other requests to fill in"}],"source_content_type":"application/octet-stream","patch_set":16,"id":"5008a3e2_ef955f01","line":170,"in_reply_to":"25599116_d722dae6","updated":"2024-04-30 05:35:34.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":7233,"name":"Matthew Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"56b5b1e2ff47b72471aca443f66f45dd8838c440","unresolved":true,"context_lines":[{"line_number":171,"context_line":"#"},{"line_number":172,"context_line":"# For cooperative token enabled on updating namespace cache, when requests"},{"line_number":173,"context_line":"# which didn\u0027t acquired a token and are waiting for other requests to fill in"},{"line_number":174,"context_line":"# the cache, they will use the below config value (in seconds) as the first"},{"line_number":175,"context_line":"# interval to sleep and retry getting data from cache, suggest to be set as the"},{"line_number":176,"context_line":"# average time spent on getting updating namespaces from the backend."},{"line_number":177,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":178,"context_line":"#"}],"source_content_type":"application/octet-stream","patch_set":23,"id":"be37e410_06210fb1","line":175,"range":{"start_line":174,"start_character":66,"end_line":175,"end_character":19},"updated":"2024-07-10 07:41:34.000000000","message":"We probably need to mention that the total time waiting for a response is 10x this value or that it\u0027ll wait and try 10 intervals of this 
time.","commit_id":"0d72817c9dc7ae50ab4db73bdfe2093972cac7c0"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b1dd0802ecd6eef0129fd3ee87435c2def7b2512","unresolved":false,"context_lines":[{"line_number":171,"context_line":"#"},{"line_number":172,"context_line":"# For cooperative token enabled on updating namespace cache, when requests"},{"line_number":173,"context_line":"# which didn\u0027t acquired a token and are waiting for other requests to fill in"},{"line_number":174,"context_line":"# the cache, they will use the below config value (in seconds) as the first"},{"line_number":175,"context_line":"# interval to sleep and retry getting data from cache, suggest to be set as the"},{"line_number":176,"context_line":"# average time spent on getting updating namespaces from the backend."},{"line_number":177,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":178,"context_line":"#"}],"source_content_type":"application/octet-stream","patch_set":23,"id":"d2d66e4c_6edce2db","line":175,"range":{"start_line":174,"start_character":66,"end_line":175,"end_character":19},"in_reply_to":"be37e410_06210fb1","updated":"2024-07-11 00:26:56.000000000","message":"Done","commit_id":"0d72817c9dc7ae50ab4db73bdfe2093972cac7c0"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"8d78962738f1dfdaf0dcb834d5c8f88908da0d5d","unresolved":true,"context_lines":[{"line_number":210,"context_line":"# Whether to use cooperative token on updating namespace cache to coalesce the"},{"line_number":211,"context_line":"# requests which fetch updating namespaces from the backend and set them in"},{"line_number":212,"context_line":"# memcached."},{"line_number":213,"context_line":"# namespace_cache_use_token \u003d False"},{"line_number":214,"context_line":"#"},{"line_number":215,"context_line":"# For cooperative token enabled on updating 
namespace cache, when requests"},{"line_number":216,"context_line":"# which didn\u0027t acquired a token and are waiting for other requests to fill in"}],"source_content_type":"application/octet-stream","patch_set":38,"id":"96f99bd4_445f632b","line":213,"updated":"2025-04-30 20:11:37.000000000","message":"We\u0027ve been running with this on in prod for a while now, and it seems to be nothing but good. Should we default to the new behavior?","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":false,"context_lines":[{"line_number":210,"context_line":"# Whether to use cooperative token on updating namespace cache to coalesce the"},{"line_number":211,"context_line":"# requests which fetch updating namespaces from the backend and set them in"},{"line_number":212,"context_line":"# memcached."},{"line_number":213,"context_line":"# namespace_cache_use_token \u003d False"},{"line_number":214,"context_line":"#"},{"line_number":215,"context_line":"# For cooperative token enabled on updating namespace cache, when requests"},{"line_number":216,"context_line":"# which didn\u0027t acquired a token and are waiting for other requests to fill in"}],"source_content_type":"application/octet-stream","patch_set":38,"id":"8ef5c449_1805ea3f","line":213,"in_reply_to":"96f99bd4_445f632b","updated":"2025-05-03 02:51:37.000000000","message":"Acknowledged","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"8d78962738f1dfdaf0dcb834d5c8f88908da0d5d","unresolved":true,"context_lines":[{"line_number":221,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# Number of cooperative tokens per each token 
session."},{"line_number":224,"context_line":"# namespace_cache_tokens_per_session \u003d 3"},{"line_number":225,"context_line":"#"},{"line_number":226,"context_line":"# object_chunk_size \u003d 65536"},{"line_number":227,"context_line":"# client_chunk_size \u003d 65536"}],"source_content_type":"application/octet-stream","patch_set":38,"id":"1221eeeb_8bfe079f","line":224,"updated":"2025-04-30 20:11:37.000000000","message":"Are we still using these defaults in prod? That\u0027d be a definite vote of confidence on these being *good* defaults.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":false,"context_lines":[{"line_number":221,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# Number of cooperative tokens per each token session."},{"line_number":224,"context_line":"# namespace_cache_tokens_per_session \u003d 3"},{"line_number":225,"context_line":"#"},{"line_number":226,"context_line":"# object_chunk_size \u003d 65536"},{"line_number":227,"context_line":"# client_chunk_size \u003d 65536"}],"source_content_type":"application/octet-stream","patch_set":38,"id":"d1d92235_adac0769","line":224,"in_reply_to":"1221eeeb_8bfe079f","updated":"2025-05-03 02:51:37.000000000","message":"good point, I got them updated.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"dcf796a3770999844b9fd146eeff99d5a38d757b","unresolved":true,"context_lines":[{"line_number":218,"context_line":"# interval to sleep and retry getting data from cache, suggest to be set as the"},{"line_number":219,"context_line":"# average time spent on getting updating namespaces from the backend. 
And the"},{"line_number":220,"context_line":"# cooperative token session will be 10 times of this interval config value."},{"line_number":221,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# Number of cooperative tokens per each token session."},{"line_number":224,"context_line":"# namespace_cache_tokens_per_session \u003d 3"}],"source_content_type":"application/octet-stream","patch_set":39,"id":"2c86e222_b1b3c305","line":221,"updated":"2025-05-05 20:42:46.000000000","message":"0.3 now, yeah?","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":218,"context_line":"# interval to sleep and retry getting data from cache, suggest to be set as the"},{"line_number":219,"context_line":"# average time spent on getting updating namespaces from the backend. And the"},{"line_number":220,"context_line":"# cooperative token session will be 10 times of this interval config value."},{"line_number":221,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# Number of cooperative tokens per each token session."},{"line_number":224,"context_line":"# namespace_cache_tokens_per_session \u003d 3"}],"source_content_type":"application/octet-stream","patch_set":39,"id":"f4156810_795d4a4f","line":221,"updated":"2025-05-05 21:42:58.000000000","message":"\u003e the cooperative token session will be 10 times of this interval config value\n\nThis might be taking the \"less config options is better\" a bit too far; i don\u0027t see any reason why \"how long to wait for memcache\" should be related to \"how frequently do I poll memcache\"\n\n... 
the first depends on the backend response time, the second depends on how many proxy workers you have all trying to fetch the key while waiting.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":218,"context_line":"# interval to sleep and retry getting data from cache, suggest to be set as the"},{"line_number":219,"context_line":"# average time spent on getting updating namespaces from the backend. And the"},{"line_number":220,"context_line":"# cooperative token session will be 10 times of this interval config value."},{"line_number":221,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# Number of cooperative tokens per each token session."},{"line_number":224,"context_line":"# namespace_cache_tokens_per_session \u003d 3"}],"source_content_type":"application/octet-stream","patch_set":39,"id":"94d587a2_a121206e","line":221,"in_reply_to":"2c86e222_b1b3c305","updated":"2025-05-09 16:29:52.000000000","message":"Acknowledged","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":218,"context_line":"# interval to sleep and retry getting data from cache, suggest to be set as the"},{"line_number":219,"context_line":"# average time spent on getting updating namespaces from the backend. 
And the"},{"line_number":220,"context_line":"# cooperative token session will be 10 times of this interval config value."},{"line_number":221,"context_line":"# namespace_cache_token_retry_interval \u003d 0.1"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# Number of cooperative tokens per each token session."},{"line_number":224,"context_line":"# namespace_cache_tokens_per_session \u003d 3"}],"source_content_type":"application/octet-stream","patch_set":39,"id":"1cc6ed80_5e3f3b95","line":221,"in_reply_to":"f4156810_795d4a4f","updated":"2025-05-09 16:29:52.000000000","message":"``token_ttl``, the time-to-live of the cooperative token when set in memcache, basically defines the typical worst-case time that a token request would need to fetch the data (e.g. shard ranges) from backend (backend is not down but very busy), currently configured as 10 times of ``namespace_avg_backend_fetch_time``  for the sake of simplicity. Yes, this is the config on \"how long to wait for memcache\".\n\nThe current implementation doesn\u0027t have the config on \"how frequently do I poll memcache\" yet, also for the sake of simplicity, ``_sleep_and_retry_memcache`` will only have 3 retries max within the whole ``token_ttl``: the 1st retry happens after ``1.5 x namespace_avg_backend_fetch_time``, the 2nd retry after another ``3 x namespace_avg_backend_fetch_time``, and then the 3rd retry after another ``6 x namespace_avg_backend_fetch_time``, so the total is about the same as ``token_ttl``. 
The current exponential backoff algorithm is written with the given condition ``token_ttl\u003d10*namespace_avg_backend_fetch_time`` and only 3 retries maximum.\n\nAs discussed offline, we will explore adding new options to tune \"how frequently do I poll memcache\" if needed.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":218,"context_line":"# intervals for the retries when requests didn\u0027t acquired a token and are"},{"line_number":219,"context_line":"# waiting for other requests to fill in the cache; and a cooperative token"},{"line_number":220,"context_line":"# session (`token_ttl`) will be 10 times of this value."},{"line_number":221,"context_line":"# namespace_avg_backend_fetch_time \u003d 0.3"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# object_chunk_size \u003d 65536"},{"line_number":224,"context_line":"# client_chunk_size \u003d 65536"}],"source_content_type":"application/octet-stream","patch_set":45,"id":"657268fb_7828705c","line":221,"updated":"2025-05-13 22:06:08.000000000","message":"these defaults LGTM!  
upgrade swift - better updating-shard-caching!","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":218,"context_line":"# intervals for the retries when requests didn\u0027t acquired a token and are"},{"line_number":219,"context_line":"# waiting for other requests to fill in the cache; and a cooperative token"},{"line_number":220,"context_line":"# session (`token_ttl`) will be 10 times of this value."},{"line_number":221,"context_line":"# namespace_avg_backend_fetch_time \u003d 0.3"},{"line_number":222,"context_line":"#"},{"line_number":223,"context_line":"# object_chunk_size \u003d 65536"},{"line_number":224,"context_line":"# client_chunk_size \u003d 65536"}],"source_content_type":"application/octet-stream","patch_set":45,"id":"dd2c67a7_a0db0365","line":221,"in_reply_to":"657268fb_7828705c","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"c11a8355b5e234efd9a786d00a9fa5eb5e0ea60b","unresolved":true,"context_lines":[{"line_number":215,"context_line":"#"},{"line_number":216,"context_line":"# The average time spent on getting updating namespaces from the container"},{"line_number":217,"context_line":"# servers, this will be used as basic unit for cooperative token to figure out"},{"line_number":218,"context_line":"# intervals for the retries when requests didn\u0027t acquired a token and are"},{"line_number":219,"context_line":"# waiting for other requests to fill in the cache; and a cooperative token"},{"line_number":220,"context_line":"# session (`token_ttl`) will be 10 times of this value."},{"line_number":221,"context_line":"# namespace_avg_backend_fetch_time 
\u003d 0.3"}],"source_content_type":"application/octet-stream","patch_set":47,"id":"2c5b9915_e0eb3b2b","line":218,"range":{"start_line":218,"start_character":49,"end_line":218,"end_character":57},"updated":"2025-05-23 23:33:21.000000000","message":"nit: \"acquire\"","commit_id":"0c13fe770277ceb7cd49d7f70a710f20ad8f1c9b"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":215,"context_line":"#"},{"line_number":216,"context_line":"# The average time spent on getting updating namespaces from the container"},{"line_number":217,"context_line":"# servers, this will be used as basic unit for cooperative token to figure out"},{"line_number":218,"context_line":"# intervals for the retries when requests didn\u0027t acquired a token and are"},{"line_number":219,"context_line":"# waiting for other requests to fill in the cache; and a cooperative token"},{"line_number":220,"context_line":"# session (`token_ttl`) will be 10 times of this value."},{"line_number":221,"context_line":"# namespace_avg_backend_fetch_time \u003d 0.3"}],"source_content_type":"application/octet-stream","patch_set":47,"id":"2d4a18f5_1e254c28","line":218,"range":{"start_line":218,"start_character":49,"end_line":218,"end_character":57},"in_reply_to":"2c5b9915_e0eb3b2b","updated":"2025-05-30 22:35:41.000000000","message":"Done","commit_id":"0c13fe770277ceb7cd49d7f70a710f20ad8f1c9b"},{"author":{"_account_id":7233,"name":"Matthew Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"dc52b27629969fd001d64b4acc2a5a7d5704605d","unresolved":true,"context_lines":[{"line_number":213,"context_line":"# usage of cooperative token and directly talk to the backend and memcache."},{"line_number":214,"context_line":"# namespace_cache_tokens_per_session \u003d 
3"},{"line_number":215,"context_line":"#"},{"line_number":216,"context_line":"# The average time spent on getting updating namespaces from the container"},{"line_number":217,"context_line":"# servers, this will be used as basic unit for cooperative token to figure out"},{"line_number":218,"context_line":"# intervals for the retries when requests didn\u0027t acquire a token and are"},{"line_number":219,"context_line":"# waiting for other requests to fill in the cache; and a cooperative token"}],"source_content_type":"application/octet-stream","patch_set":59,"id":"3884fb1f_18424ffd","line":216,"range":{"start_line":216,"start_character":14,"end_line":216,"end_character":24},"updated":"2025-09-30 05:28:35.000000000","message":"in seconds?","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":true,"context_lines":[{"line_number":213,"context_line":"# usage of cooperative token and directly talk to the backend and memcache."},{"line_number":214,"context_line":"# namespace_cache_tokens_per_session \u003d 3"},{"line_number":215,"context_line":"#"},{"line_number":216,"context_line":"# The average time spent on getting updating namespaces from the container"},{"line_number":217,"context_line":"# servers, this will be used as basic unit for cooperative token to figure out"},{"line_number":218,"context_line":"# intervals for the retries when requests didn\u0027t acquire a token and are"},{"line_number":219,"context_line":"# waiting for other requests to fill in the cache; and a cooperative token"}],"source_content_type":"application/octet-stream","patch_set":59,"id":"c0f0e266_016c515b","line":216,"range":{"start_line":216,"start_character":14,"end_line":216,"end_character":24},"in_reply_to":"3884fb1f_18424ffd","updated":"2025-09-30 16:01:56.000000000","message":"Are the proposed docs 
WRONG?  Do they say milliseconds and the correct unit was seconds?","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":true,"context_lines":[{"line_number":235,"context_line":"# Defaults to node_timeout, should be overridden if node_timeout is set to a"},{"line_number":236,"context_line":"# high number to prevent client timeouts from firing before the proxy server"},{"line_number":237,"context_line":"# has a chance to retry."},{"line_number":238,"context_line":"# recoverable_node_timeout \u003d node_timeout"},{"line_number":239,"context_line":"#"},{"line_number":240,"context_line":"# conn_timeout \u003d 0.5"},{"line_number":241,"context_line":"#"}],"source_content_type":"application/octet-stream","patch_set":59,"id":"c6cd26ee_4891c993","line":238,"updated":"2025-09-30 16:01:56.000000000","message":"how do you know this is in seconds?  
Is it just because nearly all of swift\u0027s timing tunables are relatively consistently specified in seconds?","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"}],"swift/common/utils/__init__.py":[{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":2682,"context_line":"                # memcache set successful, it can remove all cooperative tokens"},{"line_number":2683,"context_line":"                #  of this token session."},{"line_number":2684,"context_line":"                self._memcache.delete(self._token_key)"},{"line_number":2685,"context_line":"                self.done_reqs_with_token \u003d True"},{"line_number":2686,"context_line":"        else:"},{"line_number":2687,"context_line":"            # No token acquired, it means that there are requests in-flight"},{"line_number":2688,"context_line":"            # which will fetch data form the backend servers and update them in"}],"source_content_type":"text/x-python","patch_set":16,"id":"a17a7e1f_f742b826","line":2685,"updated":"2024-04-23 01:43:15.000000000","message":"ok, so only token winners will delete the memcache _token_key and set this flag.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":2682,"context_line":"                # memcache set successful, it can remove all cooperative tokens"},{"line_number":2683,"context_line":"                #  of this token session."},{"line_number":2684,"context_line":"                self._memcache.delete(self._token_key)"},{"line_number":2685,"context_line":"                self.done_reqs_with_token \u003d True"},{"line_number":2686,"context_line":"        
else:"},{"line_number":2687,"context_line":"            # No token acquired, it means that there are requests in-flight"},{"line_number":2688,"context_line":"            # which will fetch data form the backend servers and update them in"}],"source_content_type":"text/x-python","patch_set":16,"id":"48bbcd9b_33c8830d","line":2685,"in_reply_to":"a17a7e1f_f742b826","updated":"2024-04-30 05:35:34.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":2690,"context_line":"            data \u003d self._sleep_and_retry_memcache()"},{"line_number":2691,"context_line":"            if not data:"},{"line_number":2692,"context_line":"                # Still no cache data fetched."},{"line_number":2693,"context_line":"                data \u003d self.query_backend_and_set_cache()"},{"line_number":2694,"context_line":""},{"line_number":2695,"context_line":"        return data"},{"line_number":2696,"context_line":""}],"source_content_type":"text/x-python","patch_set":16,"id":"d6a3e5cd_33fc5b0a","line":2693,"updated":"2024-04-23 01:43:15.000000000","message":"token losers will eventually give up, fetch from the backend and set in memcache for other losers... 
but they don\u0027t delete the _token_key and they don\u0027t set the flag.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":2690,"context_line":"            data \u003d self._sleep_and_retry_memcache()"},{"line_number":2691,"context_line":"            if not data:"},{"line_number":2692,"context_line":"                # Still no cache data fetched."},{"line_number":2693,"context_line":"                data \u003d self.query_backend_and_set_cache()"},{"line_number":2694,"context_line":""},{"line_number":2695,"context_line":"        return data"},{"line_number":2696,"context_line":""}],"source_content_type":"text/x-python","patch_set":16,"id":"495ceef6_8917ac55","line":2693,"in_reply_to":"d6a3e5cd_33fc5b0a","updated":"2024-04-30 05:35:34.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"c11a8355b5e234efd9a786d00a9fa5eb5e0ea60b","unresolved":true,"context_lines":[{"line_number":108,"context_line":"    non_negative_float,"},{"line_number":109,"context_line":"    non_negative_int,"},{"line_number":110,"context_line":"    config_positive_int_value,"},{"line_number":111,"context_line":"    config_positive_float_value,"},{"line_number":112,"context_line":"    config_float_value,"},{"line_number":113,"context_line":"    config_auto_int_value,"},{"line_number":114,"context_line":"    config_percent_value,"}],"source_content_type":"text/x-python","patch_set":47,"id":"11527d71_ab0a735d","line":111,"updated":"2025-05-23 23:33:21.000000000","message":"Do we want people to continue importing from `swift.common.utils`, or should they be moving to import from `swift.common.utils.config` directly? 
I thought we mainly included these imports so we don\u0027t break existing 3rd party code that was *already* doing things like `from swift.common.utils import config_positive_int_value` -- IDK that we need to do it for new functions we add.","commit_id":"0c13fe770277ceb7cd49d7f70a710f20ad8f1c9b"},{"author":{"_account_id":7233,"name":"Matthew Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"dc52b27629969fd001d64b4acc2a5a7d5704605d","unresolved":true,"context_lines":[{"line_number":108,"context_line":"    non_negative_float,"},{"line_number":109,"context_line":"    non_negative_int,"},{"line_number":110,"context_line":"    config_positive_int_value,"},{"line_number":111,"context_line":"    config_positive_float_value,"},{"line_number":112,"context_line":"    config_float_value,"},{"line_number":113,"context_line":"    config_auto_int_value,"},{"line_number":114,"context_line":"    config_percent_value,"}],"source_content_type":"text/x-python","patch_set":47,"id":"888125d6_ea5ff775","line":111,"in_reply_to":"11527d71_ab0a735d","updated":"2025-09-30 05:28:35.000000000","message":"Yeah that\u0027s true, this was for 3rd party and backwards compat as we refactored utils.\nDevs should now be using:\n\n```\nfrom swift.common.utils.config import config_positive_float_value\n```\n\nI think what Tim mentioned is a good line to draw.","commit_id":"0c13fe770277ceb7cd49d7f70a710f20ad8f1c9b"}],"swift/common/utils/__init__.py":[{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":2682,"context_line":"                # memcache set successful, it can remove all cooperative tokens"},{"line_number":2683,"context_line":"                #  of this token session."},{"line_number":2684,"context_line":"                self._memcache.delete(self._token_key)"},{"line_number":2685,"context_line":"                self.done_reqs_with_token \u003d True"},{"line_number":2686,"context_line":"        
config_float_value,"},{"line_number":113,"context_line":"    config_auto_int_value,"},{"line_number":114,"context_line":"    config_percent_value,"}],"source_content_type":"text/x-python","patch_set":47,"id":"fd3a42ca_9af39018","line":111,"in_reply_to":"11527d71_ab0a735d","updated":"2025-09-25 22:24:36.000000000","message":"this alias imports in common.utils were definitely for backwards compat\n\n... in some cases it might be more work to refactor/split-up import sites to use `from swift.common.utils.config import ...`, but maybe it\u0027d be a worthy clean-up:\n\n```\ndiff --git a/swift/common/utils/__init__.py b/swift/common/utils/__init__.py\nindex a4e970ae8..565748b49 100644\n--- a/swift/common/utils/__init__.py\n+++ b/swift/common/utils/__init__.py\n@@ -108,7 +108,6 @@ from swift.common.utils.config import ( # noqa\n     non_negative_float,\n     non_negative_int,\n     config_positive_int_value,\n-    config_positive_float_value,\n     config_float_value,\n     config_auto_int_value,\n     config_percent_value,\ndiff --git a/swift/proxy/server.py b/swift/proxy/server.py\nindex 250dd29b2..a8d7f3288 100644\n--- a/swift/proxy/server.py\n+++ b/swift/proxy/server.py\n@@ -33,11 +33,13 @@ from swift.common.storage_policy import POLICIES\n from swift.common.ring import Ring\n from swift.common.error_limiter import ErrorLimiter\n from swift.common.utils import Watchdog, get_logger, \\\n-    get_remote_client, split_path, config_true_value, generate_trans_id, \\\n-    affinity_key_function, affinity_locality_predicate, list_from_csv, \\\n-    parse_prefixed_conf, config_auto_int_value, node_to_string, \\\n-    config_request_node_count_value, config_percent_value, cap_length, \\\n-    parse_options, non_negative_int, config_positive_float_value\n+    get_remote_client, split_path, generate_trans_id, \\\n+    list_from_csv, node_to_string, cap_length, \\\n+    parse_options\n+from swift.common.utils.config import config_positive_float_value, \\\n+    config_true_value, 
affinity_key_function, affinity_locality_predicate, \\\n+    parse_prefixed_conf, config_auto_int_value, config_request_node_count_value, \\\n+    config_percent_value, non_negative_int\n from swift.common.registry import register_swift_info\n from swift.common.constraints import check_utf8, valid_api_version\n from swift.common.statsd_client import get_labeled_statsd_client\n```\n\n^cursor seemed to handle this pretty well (sans the line length issue) - definitely not blocking.","commit_id":"0c13fe770277ceb7cd49d7f70a710f20ad8f1c9b"}],"swift/common/utils/config.py":[{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":86,"context_line":"def config_positive_float_value(value):"},{"line_number":87,"context_line":"    \"\"\""},{"line_number":88,"context_line":"    Returns positive float value if it can be cast by float() and it\u0027s an"},{"line_number":89,"context_line":"    float \u003e 0. (not including zero) Raises ValueError otherwise."},{"line_number":90,"context_line":"    \"\"\""},{"line_number":91,"context_line":"    try:"},{"line_number":92,"context_line":"        result \u003d float(value)"}],"source_content_type":"text/x-python","patch_set":56,"id":"7d9299e1_626b666a","line":89,"updated":"2025-09-25 22:24:36.000000000","message":"it\u0027s *a float","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":86,"context_line":"def config_positive_float_value(value):"},{"line_number":87,"context_line":"    \"\"\""},{"line_number":88,"context_line":"    Returns positive float value if it can be cast by float() and it\u0027s an"},{"line_number":89,"context_line":"    float \u003e 0. 
(not including zero) Raises ValueError otherwise."},{"line_number":90,"context_line":"    \"\"\""},{"line_number":91,"context_line":"    try:"},{"line_number":92,"context_line":"        result \u003d float(value)"}],"source_content_type":"text/x-python","patch_set":56,"id":"9f76fe44_e4580db7","line":89,"in_reply_to":"7d9299e1_626b666a","updated":"2025-09-29 18:14:34.000000000","message":"Done","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"}],"swift/proxy/controllers/base.py":[{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":891,"context_line":"    return info, cache_state"},{"line_number":892,"context_line":""},{"line_number":893,"context_line":""},{"line_number":894,"context_line":"def format_namespace_bounds(bounds):"},{"line_number":895,"context_line":"    \"\"\""},{"line_number":896,"context_line":"    This function formats the namespaces bounds for py2, after namespaces"},{"line_number":897,"context_line":"    bounds are read out of memcached."}],"source_content_type":"text/x-python","patch_set":2,"id":"a23fff3c_8e61fab0","line":894,"updated":"2024-02-15 12:53:19.000000000","message":"nit: encode_namespace_bounds ??","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[{"line_number":891,"context_line":"    return info, cache_state"},{"line_number":892,"context_line":""},{"line_number":893,"context_line":""},{"line_number":894,"context_line":"def format_namespace_bounds(bounds):"},{"line_number":895,"context_line":"    \"\"\""},{"line_number":896,"context_line":"    This function formats the namespaces bounds for py2, after namespaces"},{"line_number":897,"context_line":"    bounds are 
read out of memcached."}],"source_content_type":"text/x-python","patch_set":2,"id":"607a7f3e_a9c3e73b","line":894,"in_reply_to":"a23fff3c_8e61fab0","updated":"2024-02-20 05:24:03.000000000","message":"Done","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":900,"context_line":"    :returns: the formatted bounds"},{"line_number":901,"context_line":"    \"\"\""},{"line_number":902,"context_line":"    if not bounds:"},{"line_number":903,"context_line":"        return None"},{"line_number":904,"context_line":""},{"line_number":905,"context_line":"    if six.PY2:"},{"line_number":906,"context_line":"        # json.loads() in memcache.get will convert json \u0027string\u0027 to"}],"source_content_type":"text/x-python","patch_set":2,"id":"3cf9bbce_e1e1ecf1","line":903,"updated":"2024-02-15 12:53:19.000000000","message":"I guess the function probably never gets called with bounds \u003d [], but is it deliberate that an empty list is converted to None?\n\nThese lines seem unnecessary anyway, could be just:\n\n``if bounds and six.PY2``","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[{"line_number":900,"context_line":"    :returns: the formatted bounds"},{"line_number":901,"context_line":"    \"\"\""},{"line_number":902,"context_line":"    if not bounds:"},{"line_number":903,"context_line":"        return None"},{"line_number":904,"context_line":""},{"line_number":905,"context_line":"    if six.PY2:"},{"line_number":906,"context_line":"        # json.loads() in memcache.get will convert json \u0027string\u0027 
to"}],"source_content_type":"text/x-python","patch_set":2,"id":"049bcc9e_a563c34b","line":903,"in_reply_to":"3cf9bbce_e1e1ecf1","updated":"2024-02-20 05:24:03.000000000","message":"Done","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":925,"context_line":"    \"\"\""},{"line_number":926,"context_line":"    # try get namespaces from infocache first"},{"line_number":927,"context_line":"    infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":928,"context_line":"    bounds \u003d infocache.get(cache_key)"},{"line_number":929,"context_line":"    if bounds:"},{"line_number":930,"context_line":"        return NamespaceBoundList(bounds), \u0027infocache_hit\u0027"},{"line_number":931,"context_line":""}],"source_content_type":"text/x-python","patch_set":2,"id":"12fc16f3_41b43913","line":928,"updated":"2024-02-15 12:53:19.000000000","message":"ok, so the type of the infocache value has to change to raw bounds because the new coop helper forces the same type to be written to memcache and infocache...I guess that\u0027s an acceptable compromise","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[{"line_number":925,"context_line":"    \"\"\""},{"line_number":926,"context_line":"    # try get namespaces from infocache first"},{"line_number":927,"context_line":"    infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":928,"context_line":"    bounds \u003d infocache.get(cache_key)"},{"line_number":929,"context_line":"    if bounds:"},{"line_number":930,"context_line":"        return 
NamespaceBoundList(bounds), \u0027infocache_hit\u0027"},{"line_number":931,"context_line":""}],"source_content_type":"text/x-python","patch_set":2,"id":"6ddd52a4_b32de3cb","line":928,"in_reply_to":"12fc16f3_41b43913","updated":"2024-02-20 05:24:03.000000000","message":"Acknowledged","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":945,"context_line":"    ns_bound_list \u003d None"},{"line_number":946,"context_line":"    if bounds:"},{"line_number":947,"context_line":"        ns_bound_list \u003d NamespaceBoundList(format_namespace_bounds(bounds))"},{"line_number":948,"context_line":"        infocache[cache_key] \u003d ns_bound_list"},{"line_number":949,"context_line":"    return ns_bound_list, cache_state"},{"line_number":950,"context_line":""},{"line_number":951,"context_line":""}],"source_content_type":"text/x-python","patch_set":2,"id":"0c8a02f7_b6984ca7","line":948,"updated":"2024-02-15 12:53:19.000000000","message":"but the type held in infocache changed right? 
shouldn\u0027t this now be bounds?","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[{"line_number":945,"context_line":"    ns_bound_list \u003d None"},{"line_number":946,"context_line":"    if bounds:"},{"line_number":947,"context_line":"        ns_bound_list \u003d NamespaceBoundList(format_namespace_bounds(bounds))"},{"line_number":948,"context_line":"        infocache[cache_key] \u003d ns_bound_list"},{"line_number":949,"context_line":"    return ns_bound_list, cache_state"},{"line_number":950,"context_line":""},{"line_number":951,"context_line":""}],"source_content_type":"text/x-python","patch_set":2,"id":"08fa05e6_fe486a9b","line":948,"in_reply_to":"0c8a02f7_b6984ca7","updated":"2024-02-20 05:24:03.000000000","message":"Acknowledged","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":72,"context_line":"DEFAULT_RECHECK_UPDATING_SHARD_RANGES \u003d 3600  # seconds"},{"line_number":73,"context_line":"DEFAULT_RECHECK_LISTING_SHARD_RANGES \u003d 600  # seconds"},{"line_number":74,"context_line":"DEFAULT_SHARD_RANGES_CACHE_TOKEN_TTL \u003d 3  # seconds"},{"line_number":75,"context_line":"DEFAULT_SHARD_RANGES_CACHE_TOKEN_SLEEP_INTERVAL \u003d 0.05  # seconds"},{"line_number":76,"context_line":""},{"line_number":77,"context_line":""},{"line_number":78,"context_line":"def update_headers(response, headers):"}],"source_content_type":"text/x-python","patch_set":9,"id":"321b0f3e_6c7c142c","line":75,"updated":"2024-03-15 16:01:16.000000000","message":"i wonder if I should be surprised these are defined here instead of server.py\n\nthey\u0027re not 
USED here?","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":72,"context_line":"DEFAULT_RECHECK_UPDATING_SHARD_RANGES \u003d 3600  # seconds"},{"line_number":73,"context_line":"DEFAULT_RECHECK_LISTING_SHARD_RANGES \u003d 600  # seconds"},{"line_number":74,"context_line":"DEFAULT_SHARD_RANGES_CACHE_TOKEN_TTL \u003d 3  # seconds"},{"line_number":75,"context_line":"DEFAULT_SHARD_RANGES_CACHE_TOKEN_SLEEP_INTERVAL \u003d 0.05  # seconds"},{"line_number":76,"context_line":""},{"line_number":77,"context_line":""},{"line_number":78,"context_line":"def update_headers(response, headers):"}],"source_content_type":"text/x-python","patch_set":9,"id":"5c2abc5c_5af373c0","line":75,"in_reply_to":"321b0f3e_6c7c142c","updated":"2024-03-20 20:39:24.000000000","message":"I thought maybe it\u0027s better to put them into those DEFAULT_**** section together. 
I have moved new defaults to server.py where it\u0027s used.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":945,"context_line":"    return ns_bound_list, cache_state"},{"line_number":946,"context_line":""},{"line_number":947,"context_line":""},{"line_number":948,"context_line":"def set_namespaces_in_cache(req, cache_key, ns_bound_list, time):"},{"line_number":949,"context_line":"    \"\"\""},{"line_number":950,"context_line":"    Set a list of namespace bounds in infocache and memcache."},{"line_number":951,"context_line":""}],"source_content_type":"text/x-python","patch_set":9,"id":"d9e42711_648c870c","line":948,"updated":"2024-03-15 16:01:16.000000000","message":"AFAIK this function is no longer used in proxy.controllers.obj\n\napparently it\u0027s still used in controllers.container (for listings?)","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":945,"context_line":"    return ns_bound_list, cache_state"},{"line_number":946,"context_line":""},{"line_number":947,"context_line":""},{"line_number":948,"context_line":"def set_namespaces_in_cache(req, cache_key, ns_bound_list, time):"},{"line_number":949,"context_line":"    \"\"\""},{"line_number":950,"context_line":"    Set a list of namespace bounds in infocache and memcache."},{"line_number":951,"context_line":""}],"source_content_type":"text/x-python","patch_set":9,"id":"53b9842b_5a0fd091","line":948,"in_reply_to":"d9e42711_648c870c","updated":"2024-03-20 20:39:24.000000000","message":"yes, I will deprecate this function after we switch controllers.container to use 
cooperative token.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":965,"context_line":"        except MemcacheConnectionError:"},{"line_number":966,"context_line":"            cache_state \u003d \u0027set_error\u0027"},{"line_number":967,"context_line":"        else:"},{"line_number":968,"context_line":"            cache_state \u003d \u0027set\u0027"},{"line_number":969,"context_line":"    else:"},{"line_number":970,"context_line":"        cache_state \u003d \u0027disabled\u0027"},{"line_number":971,"context_line":"    return cache_state"}],"source_content_type":"text/x-python","patch_set":9,"id":"2dad5bc3_a22aea6a","line":968,"updated":"2024-03-15 16:01:16.000000000","message":"here this interface choose to handle the exception and return the cache_state string explicitly.\n\nFWIW I prefer that over returning an exception instance.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":965,"context_line":"        except MemcacheConnectionError:"},{"line_number":966,"context_line":"            cache_state \u003d \u0027set_error\u0027"},{"line_number":967,"context_line":"        else:"},{"line_number":968,"context_line":"            cache_state \u003d \u0027set\u0027"},{"line_number":969,"context_line":"    else:"},{"line_number":970,"context_line":"        cache_state \u003d \u0027disabled\u0027"},{"line_number":971,"context_line":"    return cache_state"}],"source_content_type":"text/x-python","patch_set":9,"id":"0f077c91_97316da4","line":968,"in_reply_to":"2dad5bc3_a22aea6a","updated":"2024-03-20 
20:39:24.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":true,"context_lines":[{"line_number":816,"context_line":"    :param  op_type: the name of the operation type, includes \u0027shard_listing\u0027,"},{"line_number":817,"context_line":"              \u0027shard_updating\u0027, and etc."},{"line_number":818,"context_line":"    \"\"\""},{"line_number":819,"context_line":"    if cache_populator.token_request_done:"},{"line_number":820,"context_line":"        logger.increment(\u0027token.%s.done_token_reqs\u0027 % op_type)"},{"line_number":821,"context_line":"    if cache_populator.req_served_from_cache:"},{"line_number":822,"context_line":"        logger.increment(\u0027token.%s.cache_served_reqs\u0027 % op_type)"}],"source_content_type":"text/x-python","patch_set":15,"id":"ed0c3eb7_ec4e7b3c","line":819,"updated":"2024-04-22 15:06:46.000000000","message":"this name isn\u0027t intuitive to me","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":816,"context_line":"    :param  op_type: the name of the operation type, includes \u0027shard_listing\u0027,"},{"line_number":817,"context_line":"              \u0027shard_updating\u0027, and etc."},{"line_number":818,"context_line":"    \"\"\""},{"line_number":819,"context_line":"    if cache_populator.token_request_done:"},{"line_number":820,"context_line":"        logger.increment(\u0027token.%s.done_token_reqs\u0027 % op_type)"},{"line_number":821,"context_line":"    if cache_populator.req_served_from_cache:"},{"line_number":822,"context_line":"        
logger.increment(\u0027token.%s.cache_served_reqs\u0027 % op_type)"}],"source_content_type":"text/x-python","patch_set":15,"id":"1a9a1b20_1e8f8b90","line":819,"in_reply_to":"6da55b0f_3b3c57b1","updated":"2024-05-03 05:51:16.000000000","message":"Acknowledged","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"7d62748a0214fd0e6037e4b24de687f776d83aa1","unresolved":true,"context_lines":[{"line_number":816,"context_line":"    :param  op_type: the name of the operation type, includes \u0027shard_listing\u0027,"},{"line_number":817,"context_line":"              \u0027shard_updating\u0027, and etc."},{"line_number":818,"context_line":"    \"\"\""},{"line_number":819,"context_line":"    if cache_populator.token_request_done:"},{"line_number":820,"context_line":"        logger.increment(\u0027token.%s.done_token_reqs\u0027 % op_type)"},{"line_number":821,"context_line":"    if cache_populator.req_served_from_cache:"},{"line_number":822,"context_line":"        logger.increment(\u0027token.%s.cache_served_reqs\u0027 % op_type)"}],"source_content_type":"text/x-python","patch_set":15,"id":"6da55b0f_3b3c57b1","line":819,"in_reply_to":"ed0c3eb7_ec4e7b3c","updated":"2024-04-22 17:38:22.000000000","message":"it\u0027s the number of requests that acquired a token and finished all operations. 
I renamed it to ``done_reqs_with_token``, hopefully it\u0027s more intuitive.","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":true,"context_lines":[{"line_number":821,"context_line":"    if cache_populator.req_served_from_cache:"},{"line_number":822,"context_line":"        logger.increment(\u0027token.%s.cache_served_reqs\u0027 % op_type)"},{"line_number":823,"context_line":"    else:"},{"line_number":824,"context_line":"        logger.increment(\u0027token.%s.backend_reqs\u0027 % op_type)"},{"line_number":825,"context_line":""},{"line_number":826,"context_line":""},{"line_number":827,"context_line":"def _get_info_from_memcache(app, env, account, container\u003dNone):"}],"source_content_type":"text/x-python","patch_set":15,"id":"4a386a41_28b64e3b","line":824,"updated":"2024-04-22 15:06:46.000000000","message":"is this the combination of all backend requests - regardless of whether they were from token winners or cooperative ttl timeouts?\n\nI think it would be nice to see a rate of token winners - to verify that we are in fact limiting requests to the backend successfully.\n\nBut it\u0027s super important I think we get a *clear* signal on *any* requests to the backend that \"fall out\" of the \"waiting of memcache loop\" - that failure mode is the major concern with the approach in the current implementation IMHO.","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"7d62748a0214fd0e6037e4b24de687f776d83aa1","unresolved":true,"context_lines":[{"line_number":821,"context_line":"    if cache_populator.req_served_from_cache:"},{"line_number":822,"context_line":"        logger.increment(\u0027token.%s.cache_served_reqs\u0027 % 
op_type)"},{"line_number":823,"context_line":"    else:"},{"line_number":824,"context_line":"        logger.increment(\u0027token.%s.backend_reqs\u0027 % op_type)"},{"line_number":825,"context_line":""},{"line_number":826,"context_line":""},{"line_number":827,"context_line":"def _get_info_from_memcache(app, env, account, container\u003dNone):"}],"source_content_type":"text/x-python","patch_set":15,"id":"e100f6d5_71ba31f7","line":824,"in_reply_to":"4a386a41_28b64e3b","updated":"2024-04-22 17:38:22.000000000","message":"yes, ``token.shard_updating.backend_reqs`` includes all issued backend requests, including token winner who need to fetch data from backend and requests w/o token but token ttl timeouts then need to go to the backend as well.\n\nAnd ``token.shard_updating.done_reqs_with_token`` will exactly show the rate of token winners.\n\nOn ``a clear signal on any requests to the backend that \"fall out\" of the \"waiting of memcache loop\"``, it equals to ``token.shard_updating.backend_reqs - token.shard_updating.done_reqs_with_token``. 
Since it can be deduced from the other two stats, I didn\u0027t create a dedicated one for it, but I can add it if it\u0027s more intuitive.","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":821,"context_line":"    if cache_populator.req_served_from_cache:"},{"line_number":822,"context_line":"        logger.increment(\u0027token.%s.cache_served_reqs\u0027 % op_type)"},{"line_number":823,"context_line":"    else:"},{"line_number":824,"context_line":"        logger.increment(\u0027token.%s.backend_reqs\u0027 % op_type)"},{"line_number":825,"context_line":""},{"line_number":826,"context_line":""},{"line_number":827,"context_line":"def _get_info_from_memcache(app, env, account, container\u003dNone):"}],"source_content_type":"text/x-python","patch_set":15,"id":"c19837c6_e0eea0fb","line":824,"in_reply_to":"e100f6d5_71ba31f7","updated":"2024-05-03 05:51:16.000000000","message":"Done","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":947,"context_line":"    bounds \u003d None"},{"line_number":948,"context_line":"    if ns_bound_list:"},{"line_number":949,"context_line":"        bounds \u003d ns_bound_list.bounds"},{"line_number":950,"context_line":"    return bounds"},{"line_number":951,"context_line":""},{"line_number":952,"context_line":""},{"line_number":953,"context_line":"def get_namespaces_from_cache(req, cache_key, skip_chance):"}],"source_content_type":"text/x-python","patch_set":16,"id":"d00ad647_7b28c686","line":950,"updated":"2024-04-23 01:43:15.000000000","message":"these are nice little helpers; docstrings help - 
kudos!","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":947,"context_line":"    bounds \u003d None"},{"line_number":948,"context_line":"    if ns_bound_list:"},{"line_number":949,"context_line":"        bounds \u003d ns_bound_list.bounds"},{"line_number":950,"context_line":"    return bounds"},{"line_number":951,"context_line":""},{"line_number":952,"context_line":""},{"line_number":953,"context_line":"def get_namespaces_from_cache(req, cache_key, skip_chance):"}],"source_content_type":"text/x-python","patch_set":16,"id":"38c5353a_41aa15fc","line":950,"in_reply_to":"d00ad647_7b28c686","updated":"2024-04-30 05:35:34.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":7233,"name":"Matthew Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"56b5b1e2ff47b72471aca443f66f45dd8838c440","unresolved":false,"context_lines":[{"line_number":1010,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":1011,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":1012,"context_line":"    if memcache and ns_bound_list:"},{"line_number":1013,"context_line":"        bounds \u003d namespace_list_to_bounds(ns_bound_list)"},{"line_number":1014,"context_line":"        try:"},{"line_number":1015,"context_line":"            memcache.set(cache_key, bounds, time\u003dtime, raise_on_error\u003dTrue)"},{"line_number":1016,"context_line":"        except MemcacheConnectionError:"}],"source_content_type":"text/x-python","patch_set":23,"id":"9b98196f_4bf4892f","line":1013,"updated":"2024-07-10 07:41:34.000000000","message":"Because of the `if ns_bound_list` can\u0027t we just use:\n\n    bounds \u003d ns_bound_list.bounds\n \nOr 
just the try:\n\n    try:\n        memcache.set(cache_key, ns_bound_list.bounds, time\u003dtime, raise_on_error\u003dTrue)\n\nLike it used to be?\n\nEDIT: oh this is used as the encoder the populator uses! Got it!","commit_id":"0d72817c9dc7ae50ab4db73bdfe2093972cac7c0"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"5fd322ed546bbf3260f518eff4abba6da4cc4a8d","unresolved":true,"context_lines":[{"line_number":833,"context_line":"    else:"},{"line_number":834,"context_line":"        if resp:"},{"line_number":835,"context_line":"            logger.increment(\u0027token.%s.backend_reqs.no_token.%d\u0027 %"},{"line_number":836,"context_line":"                             (op_type, resp.status_int))"},{"line_number":837,"context_line":""},{"line_number":838,"context_line":""},{"line_number":839,"context_line":"def _get_info_from_memcache(app, env, account, container\u003dNone):"}],"source_content_type":"text/x-python","patch_set":25,"id":"62092b35_ae2c4247","line":836,"updated":"2024-07-23 00:33:19.000000000","message":"I find these kinds of stat breakdowns most useful when I can mentally come up with something like a [Sankey diagram](https://en.wikipedia.org/wiki/Sankey_diagram) for them. 
So every call into `record_cooperative_token_metrics`, for example, would go into one (and ideally, *only one*) stat bucket.\n\nLooking through this, though, I can\u0027t tell whether 1, 2, or 0 counters will get incremented for any given call 😕","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":833,"context_line":"    else:"},{"line_number":834,"context_line":"        if resp:"},{"line_number":835,"context_line":"            logger.increment(\u0027token.%s.backend_reqs.no_token.%d\u0027 %"},{"line_number":836,"context_line":"                             (op_type, resp.status_int))"},{"line_number":837,"context_line":""},{"line_number":838,"context_line":""},{"line_number":839,"context_line":"def _get_info_from_memcache(app, env, account, container\u003dNone):"}],"source_content_type":"text/x-python","patch_set":25,"id":"ea2f0c9b_976b8577","line":836,"in_reply_to":"62092b35_ae2c4247","updated":"2024-08-05 19:34:02.000000000","message":"The concept of Sankey diagram is interesting. I think this function has the complete Sankey as below, plus two more stats on top of that.\n\nall cache miss requests \u003d ``token.%s.cache_served_reqs`` + ``token.%s.backend_reqs.with_token.%d`` + ``token.%s.backend_reqs.no_token.%d``\n\ntwo additional stats: ``token.%s.done_token_reqs`` is the request that got a token \u0026 got 200 from the backend \u0026 set data into cache; ``token.%s.lack_retries`` is the request that didn\u0027t get enough retries before exiting the waiting period. These two stats are used to monitor two kinds of requests on top of the whole Sankey diagram, for example, ``token.%s.lack_retries`` could be ``cache_served_reqs`` or ``backend_reqs.no_token.%d``. 
I will add more comments on this.","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"b77645c85c5aaf01a53ec4fcdfded6e6e62160fc","unresolved":true,"context_lines":[{"line_number":799,"context_line":"                server_type, op_type, cache_state))"},{"line_number":800,"context_line":""},{"line_number":801,"context_line":""},{"line_number":802,"context_line":"def record_cooperative_token_metrics(logger, cache_populator, op_type):"},{"line_number":803,"context_line":"    \"\"\""},{"line_number":804,"context_line":"    Record related metrics after a cooperative token request has finished."},{"line_number":805,"context_line":""}],"source_content_type":"text/x-python","patch_set":32,"id":"8d12604e_ea211847","line":802,"updated":"2025-02-05 19:11:09.000000000","message":"if this is only ever used in proxy.controllers.obj I think it would look better to define it in that module instead of importing it from here?","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"dee9181e290b573de3239cc03759eb5b0da5fe21","unresolved":false,"context_lines":[{"line_number":799,"context_line":"                server_type, op_type, cache_state))"},{"line_number":800,"context_line":""},{"line_number":801,"context_line":""},{"line_number":802,"context_line":"def record_cooperative_token_metrics(logger, cache_populator, op_type):"},{"line_number":803,"context_line":"    \"\"\""},{"line_number":804,"context_line":"    Record related metrics after a cooperative token request has finished."},{"line_number":805,"context_line":""}],"source_content_type":"text/x-python","patch_set":32,"id":"ee1d9305_06b2f30b","line":802,"in_reply_to":"8d12604e_ea211847","updated":"2025-03-05 
18:34:04.000000000","message":"Done","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"dcf796a3770999844b9fd146eeff99d5a38d757b","unresolved":false,"context_lines":[{"line_number":920,"context_line":"        cache_state \u003d \u0027error\u0027"},{"line_number":921,"context_line":""},{"line_number":922,"context_line":"    ns_bound_list \u003d namespace_bounds_to_list(bounds)"},{"line_number":923,"context_line":"    if ns_bound_list:"},{"line_number":924,"context_line":"        infocache[cache_key] \u003d ns_bound_list"},{"line_number":925,"context_line":"    return ns_bound_list, cache_state"},{"line_number":926,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"6a7e4554_ae9a3b5a","line":923,"updated":"2025-05-05 20:42:46.000000000","message":"Oh, cool -- so by implementing `__len__` we *also* got a reasonable `__bool__`!","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":947,"context_line":"        else:"},{"line_number":948,"context_line":"            cache_state \u003d \u0027set\u0027"},{"line_number":949,"context_line":"    else:"},{"line_number":950,"context_line":"        cache_state \u003d \u0027disabled\u0027"},{"line_number":951,"context_line":"    return cache_state"},{"line_number":952,"context_line":""},{"line_number":953,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"973497fc_0603618d","line":950,"updated":"2025-05-05 21:42:58.000000000","message":"the `ns_bounds_list \u003d None` case doesn\u0027t seem obviously related to `cache_state \u003d 
disabled`","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":947,"context_line":"        else:"},{"line_number":948,"context_line":"            cache_state \u003d \u0027set\u0027"},{"line_number":949,"context_line":"    else:"},{"line_number":950,"context_line":"        cache_state \u003d \u0027disabled\u0027"},{"line_number":951,"context_line":"    return cache_state"},{"line_number":952,"context_line":""},{"line_number":953,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"d7955ec1_4cf61b28","line":950,"in_reply_to":"973497fc_0603618d","updated":"2025-05-09 16:29:52.000000000","message":"the caller path (the listing shard ranges) won\u0027t have ``ns_bound_list`` being ``None`` or empty, I removed the condition of ``and ns_bound_list`` and also added comments.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":881,"context_line":"    # then try get them from memcache"},{"line_number":882,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":883,"context_line":"    if not memcache:"},{"line_number":884,"context_line":"        return None, \u0027disabled\u0027"},{"line_number":885,"context_line":"    if skip_chance and random.random() \u003c skip_chance:"},{"line_number":886,"context_line":"        return None, \u0027skip\u0027"},{"line_number":887,"context_line":"    try:"}],"source_content_type":"text/x-python","patch_set":45,"id":"d0ee20d5_b998b041","side":"PARENT","line":884,"updated":"2025-05-13 22:06:08.000000000","message":"b/c the interface for 
`cache_from_env` *can* return None, I think it would have been reasonable to leave this test in ... even tho we never *currently* call it w/o already checking for `cache_from_env` we may want to refactor the code later and as a general helper method it\u0027s not so obvious that any user MUST perform this check before calling this method.","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":881,"context_line":"    # then try get them from memcache"},{"line_number":882,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":883,"context_line":"    if not memcache:"},{"line_number":884,"context_line":"        return None, \u0027disabled\u0027"},{"line_number":885,"context_line":"    if skip_chance and random.random() \u003c skip_chance:"},{"line_number":886,"context_line":"        return None, \u0027skip\u0027"},{"line_number":887,"context_line":"    try:"}],"source_content_type":"text/x-python","patch_set":45,"id":"f71a18d7_9e47fb55","side":"PARENT","line":884,"in_reply_to":"d0ee20d5_b998b041","updated":"2025-05-30 22:35:41.000000000","message":"Done","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":937,"context_line":"    infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":938,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":939,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":940,"context_line":"    if memcache:"},{"line_number":941,"context_line":"        bounds \u003d 
namespace_list_to_bounds(ns_bound_list)"},{"line_number":942,"context_line":"        try:"},{"line_number":943,"context_line":"            memcache.set(cache_key, bounds, time\u003dtime, raise_on_error\u003dTrue)"}],"source_content_type":"text/x-python","patch_set":45,"id":"d859ca33_044c2daf","line":940,"updated":"2025-05-13 22:06:08.000000000","message":"e.g. I think we call `set_namespace_in_cache` more \"unconditionally\" b/c we want to get the infocache set even if memcache isn\u0027t available; although I suppose if memcache isn\u0027t available we\u0027d never fetch the full namespace-bounds list or use `get_namespaces_from_cache` to retrieve it from infocache - so maybe it\u0027s not actually *useful* ...\n\nstill it prevents the calling code from having to think \"did I remember to check `cache_from_env`\"???","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"e2f1cdabad03ce3d614fa13b28624dbb521b7d68","unresolved":false,"context_lines":[{"line_number":937,"context_line":"    infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":938,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":939,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":940,"context_line":"    if memcache:"},{"line_number":941,"context_line":"        bounds \u003d namespace_list_to_bounds(ns_bound_list)"},{"line_number":942,"context_line":"        try:"},{"line_number":943,"context_line":"            memcache.set(cache_key, bounds, time\u003dtime, raise_on_error\u003dTrue)"}],"source_content_type":"text/x-python","patch_set":45,"id":"d4270167_037676eb","line":940,"in_reply_to":"92d7217f_6aa18b36","updated":"2025-09-23 05:01:47.000000000","message":"Done","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian 
Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":true,"context_lines":[{"line_number":937,"context_line":"    infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":938,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":939,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":940,"context_line":"    if memcache:"},{"line_number":941,"context_line":"        bounds \u003d namespace_list_to_bounds(ns_bound_list)"},{"line_number":942,"context_line":"        try:"},{"line_number":943,"context_line":"            memcache.set(cache_key, bounds, time\u003dtime, raise_on_error\u003dTrue)"}],"source_content_type":"text/x-python","patch_set":45,"id":"92d7217f_6aa18b36","line":940,"in_reply_to":"d859ca33_044c2daf","updated":"2025-05-30 22:35:41.000000000","message":"this path is used for listing namespace as well, and ``set_namespaces_in_cache`` will be called even if memcache isn\u0027t available, see https://review.opendev.org/c/openstack/swift/+/908969/49/swift/proxy/controllers/container.py#261","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":881,"context_line":"    # then try get them from memcache"},{"line_number":882,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":883,"context_line":"    if not memcache:"},{"line_number":884,"context_line":"        return None, \u0027disabled\u0027"},{"line_number":885,"context_line":"    if skip_chance and random.random() \u003c skip_chance:"},{"line_number":886,"context_line":"        return None, \u0027skip\u0027"},{"line_number":887,"context_line":"    
try:"}],"source_content_type":"text/x-python","patch_set":56,"id":"6cd6994e_2a7ed756","side":"PARENT","line":884,"updated":"2025-09-25 22:24:36.000000000","message":"this is part of the drive-by - this method no longer needs to support cache disabled as we\u0027ll never try to \"get_namespaces_FROM_CACHE\" when the cache is disabled.","commit_id":"b74296ef8a4902726852bae1a0e80eb15061efa8"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":881,"context_line":"    # then try get them from memcache"},{"line_number":882,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":883,"context_line":"    if not memcache:"},{"line_number":884,"context_line":"        return None, \u0027disabled\u0027"},{"line_number":885,"context_line":"    if skip_chance and random.random() \u003c skip_chance:"},{"line_number":886,"context_line":"        return None, \u0027skip\u0027"},{"line_number":887,"context_line":"    try:"}],"source_content_type":"text/x-python","patch_set":56,"id":"471d1768_a1623372","side":"PARENT","line":884,"in_reply_to":"6cd6994e_2a7ed756","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"b74296ef8a4902726852bae1a0e80eb15061efa8"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":false,"context_lines":[{"line_number":912,"context_line":"    infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":913,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":914,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":915,"context_line":"    if memcache and ns_bound_list:"},{"line_number":916,"context_line":"        
try:"},{"line_number":917,"context_line":"            memcache.set(cache_key, ns_bound_list.bounds, time\u003dtime,"},{"line_number":918,"context_line":"                         raise_on_error\u003dTrue)"}],"source_content_type":"text/x-python","patch_set":56,"id":"24053cdf_907058cb","side":"PARENT","line":915,"updated":"2025-09-25 22:24:36.000000000","message":"are we worried about setting None in cache?  I don\u0027t think I am.","commit_id":"b74296ef8a4902726852bae1a0e80eb15061efa8"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":866,"context_line":""},{"line_number":867,"context_line":"    :param  bounds: a list of namespaces bounds(tuple of lower and name)."},{"line_number":868,"context_line":"    :returns: the object instance of ``NamespaceBoundList``; None if ``bounds``"},{"line_number":869,"context_line":"        is None or empty."},{"line_number":870,"context_line":"    \"\"\""},{"line_number":871,"context_line":"    ns_bound_list \u003d None"},{"line_number":872,"context_line":"    if bounds:"}],"source_content_type":"text/x-python","patch_set":56,"id":"7e46adb1_461363ff","line":869,"updated":"2025-09-25 22:24:36.000000000","message":"given the function name ends in `_to_list` it seems weird to return None","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":866,"context_line":""},{"line_number":867,"context_line":"    :param  bounds: a list of namespaces bounds(tuple of lower and name)."},{"line_number":868,"context_line":"    :returns: the object instance of ``NamespaceBoundList``; None if ``bounds``"},{"line_number":869,"context_line":"        is None or 
empty."},{"line_number":870,"context_line":"    \"\"\""},{"line_number":871,"context_line":"    ns_bound_list \u003d None"},{"line_number":872,"context_line":"    if bounds:"}],"source_content_type":"text/x-python","patch_set":56,"id":"e0a98fab_1f2d64da","line":869,"in_reply_to":"7e46adb1_461363ff","updated":"2025-09-29 18:14:34.000000000","message":"yeah, because ``get_namespaces_from_cache`` could return None or Empty and it calls ``namespace_bounds_to_list``.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":869,"context_line":"        is None or empty."},{"line_number":870,"context_line":"    \"\"\""},{"line_number":871,"context_line":"    ns_bound_list \u003d None"},{"line_number":872,"context_line":"    if bounds:"},{"line_number":873,"context_line":"        ns_bound_list \u003d NamespaceBoundList(bounds)"},{"line_number":874,"context_line":"    return ns_bound_list"},{"line_number":875,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"1e7b3995_de9e2734","line":872,"updated":"2025-09-25 22:24:36.000000000","message":"`set_namespaces_in_cache` doc string specifically calls out bounds \"must be not None or empty\" - but this helper seems robust to empty values.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":869,"context_line":"        is None or empty."},{"line_number":870,"context_line":"    \"\"\""},{"line_number":871,"context_line":"    ns_bound_list \u003d None"},{"line_number":872,"context_line":"    if bounds:"},{"line_number":873,"context_line":"        ns_bound_list \u003d 
NamespaceBoundList(bounds)"},{"line_number":874,"context_line":"    return ns_bound_list"},{"line_number":875,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"1c6fe15b_0a84e01b","line":872,"in_reply_to":"1e7b3995_de9e2734","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":885,"context_line":"    bounds \u003d None"},{"line_number":886,"context_line":"    if ns_bound_list:"},{"line_number":887,"context_line":"        bounds \u003d ns_bound_list.bounds"},{"line_number":888,"context_line":"    return bounds"},{"line_number":889,"context_line":""},{"line_number":890,"context_line":""},{"line_number":891,"context_line":"def get_namespaces_from_cache(req, cache_key, skip_chance):"}],"source_content_type":"text/x-python","patch_set":56,"id":"950af315_1ba84e7b","line":888,"updated":"2025-09-25 22:24:36.000000000","message":"both of these new methods are tested indirectly through controller.test_base","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":885,"context_line":"    bounds \u003d None"},{"line_number":886,"context_line":"    if ns_bound_list:"},{"line_number":887,"context_line":"        bounds \u003d ns_bound_list.bounds"},{"line_number":888,"context_line":"    return bounds"},{"line_number":889,"context_line":""},{"line_number":890,"context_line":""},{"line_number":891,"context_line":"def get_namespaces_from_cache(req, cache_key, 
skip_chance):"}],"source_content_type":"text/x-python","patch_set":56,"id":"1b812e2f_04409da2","line":888,"in_reply_to":"950af315_1ba84e7b","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":907,"context_line":"        return ns_bound_list, \u0027infocache_hit\u0027"},{"line_number":908,"context_line":""},{"line_number":909,"context_line":"    # then try get them from memcache"},{"line_number":910,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":911,"context_line":"    if skip_chance and random.random() \u003c skip_chance:"},{"line_number":912,"context_line":"        return None, \u0027skip\u0027"},{"line_number":913,"context_line":"    try:"}],"source_content_type":"text/x-python","patch_set":56,"id":"c350236f_805af2a8","line":910,"updated":"2025-09-25 22:24:36.000000000","message":"the \"allow_none\" kwarg doesn\u0027t seem to be strictly enforced\n\n```\ndef item_from_env(env, item_name, allow_none\u003dFalse):\n    \"\"\"\n    Get a value from the wsgi environment\n\n    :param env: wsgi environment dict\n    :param item_name: name of item to get\n\n    :returns: the value from the environment\n    \"\"\"\n    item \u003d env.get(item_name, None)\n    if item is None and not allow_none:\n        logging.error(\"ERROR: %s could not be found in env!\", item_name)\n    return item\n\n\ndef cache_from_env(env, allow_none\u003dFalse):\n    \"\"\"\n    Get memcache connection pool from the environment (which had been\n    previously set by the memcache middleware\n\n    :param env: wsgi environment dict\n\n    :returns: swift.common.memcached.MemcacheRing from environment\n    \"\"\"\n    return item_from_env(env, \u0027swift.cache\u0027, 
allow_none)\n```\n\n... anyway it would now be an AttributeError for someone to call this method without setting memcache - AFAIK no one does.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":907,"context_line":"        return ns_bound_list, \u0027infocache_hit\u0027"},{"line_number":908,"context_line":""},{"line_number":909,"context_line":"    # then try get them from memcache"},{"line_number":910,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":911,"context_line":"    if skip_chance and random.random() \u003c skip_chance:"},{"line_number":912,"context_line":"        return None, \u0027skip\u0027"},{"line_number":913,"context_line":"    try:"}],"source_content_type":"text/x-python","patch_set":56,"id":"57dcfc0d_e89ecbfd","line":910,"in_reply_to":"c350236f_805af2a8","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":919,"context_line":""},{"line_number":920,"context_line":"    ns_bound_list \u003d namespace_bounds_to_list(bounds)"},{"line_number":921,"context_line":"    if ns_bound_list:"},{"line_number":922,"context_line":"        infocache[cache_key] \u003d ns_bound_list"},{"line_number":923,"context_line":"    return ns_bound_list, cache_state"},{"line_number":924,"context_line":""},{"line_number":925,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"8322f62f_a4b919e1","line":922,"updated":"2025-09-25 22:24:36.000000000","message":"I see little reason to guard here - we\u0027ll check memcache 
again anyway whether this key is None or not-set.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":919,"context_line":""},{"line_number":920,"context_line":"    ns_bound_list \u003d namespace_bounds_to_list(bounds)"},{"line_number":921,"context_line":"    if ns_bound_list:"},{"line_number":922,"context_line":"        infocache[cache_key] \u003d ns_bound_list"},{"line_number":923,"context_line":"    return ns_bound_list, cache_state"},{"line_number":924,"context_line":""},{"line_number":925,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"a3c94026_ebe350bf","line":922,"in_reply_to":"8322f62f_a4b919e1","updated":"2025-09-29 18:14:34.000000000","message":"Done","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":930,"context_line":"    :param req: a :class:`swift.common.swob.Request` object."},{"line_number":931,"context_line":"    :param cache_key: the cache key for both infocache and memcache."},{"line_number":932,"context_line":"    :param ns_bound_list: a :class:`swift.common.utils.NamespaceBoundList`;"},{"line_number":933,"context_line":"                          must be not None or empty."},{"line_number":934,"context_line":"    :param time: how long the namespaces should remain in memcache."},{"line_number":935,"context_line":"    :return: the cache_state."},{"line_number":936,"context_line":"    \"\"\""}],"source_content_type":"text/x-python","patch_set":56,"id":"cb1f325b_13c4e722","line":933,"updated":"2025-09-25 22:24:36.000000000","message":"\"must be not None or empty\" - what does this 
mean?\n\n`must be ((not None) or empty)`\n`must be (not (None or empty))`\n`(must be not None) or (empty)`\n\nI think it means \"must NOT be None nor empty\" - but I don\u0027t think it\u0027d be a big problem if someone circumvented this expectation - we\u0027d just put None in memcache; which is a valid return value from `namespace_bounds_to_list`\n\nI tried to let the listing namespace population code call this with a falsy ns_bound_list\n\n```\ndiff --git a/swift/proxy/controllers/container.py b/swift/proxy/controllers/container.py\nindex 7fc43f3da..1b5fefbd3 100644\n--- a/swift/proxy/controllers/container.py\n+++ b/swift/proxy/controllers/container.py\n@@ -255,9 +255,7 @@ class ContainerController(Controller):\n             # receive back \u0027x-backend-override-shard-name-filter\u003dtrue\u0027 if\n             # the sharding state is \u0027sharded\u0027, but check them both\n             # anyway...\n-            if (namespaces and\n-                    sharding_state \u003d\u003d \u0027sharded\u0027 and\n-                    complete_listing):\n+            if sharding_state \u003d\u003d \u0027sharded\u0027 and complete_listing:\n                 namespaces \u003d self._set_listing_namespaces_in_cache(\n                     req, namespaces)\n                 namespaces \u003d self._filter_complete_listing(req, namespaces)\n```\n\nthe problem isn\u0027t *this* method; it\u0027s that controller\u0027s OWN handling of namespaces not being robust to `NamespaceBoundList.parse`:\n\n```\n  File \"/home/vagrant/swift/swift/proxy/controllers/container.py\", line 201, in _set_listing_namespaces_in_cache\n    return ns_bound_list.get_namespaces()\n```\n\nWe need to either raise ValueError or return a NamespaceBoundList - all this \"if namespace is None\" stuff in the caller is annoying and very confusing!\n```\n    @classmethod\n    def parse(cls, namespaces):\n        \"\"\"\n        Create a NamespaceBoundList object by parsing a list of Namespaces or\n        
shard ranges and only storing the compact bounds list.\n\n        Each Namespace in the given list of ``namespaces`` provides the next\n        [lower bound, name] list to append to the NamespaceBoundList. The\n        given ``namespaces`` should be contiguous because the\n        NamespaceBoundList only stores lower bounds; if ``namespaces`` has\n        overlaps then at least one of the overlapping namespaces may be\n        ignored; similarly, gaps between namespaces are not represented in the\n        NamespaceBoundList.\n\n        :param namespaces: A list of Namespace instances. The list should be\n            ordered by namespace bounds.\n        :return: a NamespaceBoundList.\n        \"\"\"\n        if not namespaces:\n            return None\n```\n\nRegardless this doc-string update is not helpful and should be removed.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":false,"context_lines":[{"line_number":930,"context_line":"    :param req: a :class:`swift.common.swob.Request` object."},{"line_number":931,"context_line":"    :param cache_key: the cache key for both infocache and memcache."},{"line_number":932,"context_line":"    :param ns_bound_list: a :class:`swift.common.utils.NamespaceBoundList`;"},{"line_number":933,"context_line":"                          must be not None or empty."},{"line_number":934,"context_line":"    :param time: how long the namespaces should remain in memcache."},{"line_number":935,"context_line":"    :return: the cache_state."},{"line_number":936,"context_line":"    \"\"\""}],"source_content_type":"text/x-python","patch_set":56,"id":"0f4cd475_b55aef8a","line":933,"in_reply_to":"70dc90cf_1edf6d7c","updated":"2025-09-30 
16:01:56.000000000","message":"Done","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":true,"context_lines":[{"line_number":930,"context_line":"    :param req: a :class:`swift.common.swob.Request` object."},{"line_number":931,"context_line":"    :param cache_key: the cache key for both infocache and memcache."},{"line_number":932,"context_line":"    :param ns_bound_list: a :class:`swift.common.utils.NamespaceBoundList`;"},{"line_number":933,"context_line":"                          must be not None or empty."},{"line_number":934,"context_line":"    :param time: how long the namespaces should remain in memcache."},{"line_number":935,"context_line":"    :return: the cache_state."},{"line_number":936,"context_line":"    \"\"\""}],"source_content_type":"text/x-python","patch_set":56,"id":"70dc90cf_1edf6d7c","line":933,"in_reply_to":"cb1f325b_13c4e722","updated":"2025-09-29 18:14:34.000000000","message":"squashed the changes in https://review.opendev.org/c/openstack/swift/+/962315","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":938,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":939,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":940,"context_line":"    if memcache:"},{"line_number":941,"context_line":"        bounds \u003d namespace_list_to_bounds(ns_bound_list)"},{"line_number":942,"context_line":"        try:"},{"line_number":943,"context_line":"            memcache.set(cache_key, bounds, time\u003dtime, raise_on_error\u003dTrue)"},{"line_number":944,"context_line":"        except 
MemcacheConnectionError:"}],"source_content_type":"text/x-python","patch_set":56,"id":"fd123de0_9e8bed91","line":941,"updated":"2025-09-25 22:24:36.000000000","message":"if I add a NameError here `if not bounds: asdf` - everything test.unit.proxy still seems to pass.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":false,"context_lines":[{"line_number":938,"context_line":"    infocache[cache_key] \u003d ns_bound_list"},{"line_number":939,"context_line":"    memcache \u003d cache_from_env(req.environ, True)"},{"line_number":940,"context_line":"    if memcache:"},{"line_number":941,"context_line":"        bounds \u003d namespace_list_to_bounds(ns_bound_list)"},{"line_number":942,"context_line":"        try:"},{"line_number":943,"context_line":"            memcache.set(cache_key, bounds, time\u003dtime, raise_on_error\u003dTrue)"},{"line_number":944,"context_line":"        except MemcacheConnectionError:"}],"source_content_type":"text/x-python","patch_set":56,"id":"ddd8ac23_96eb1b5b","line":941,"in_reply_to":"fd123de0_9e8bed91","updated":"2025-09-30 16:01:56.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":946,"context_line":"        else:"},{"line_number":947,"context_line":"            cache_state \u003d \u0027set\u0027"},{"line_number":948,"context_line":"    else:"},{"line_number":949,"context_line":"        cache_state \u003d \u0027disabled\u0027"},{"line_number":950,"context_line":"    return 
cache_state"},{"line_number":951,"context_line":""},{"line_number":952,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"fb711c84_3ea43ea1","line":949,"updated":"2025-09-25 22:24:36.000000000","message":"It seems weird that the `set_` function still supports \"cache_state \u003d disabled\" while the `get_` function does NOT - but this has to do with the client listing requests on sharded containers caching the root shard bounds.\n\nIn proxy.controller.container _GET_auto currently guards against calling get_listing_namespaces_from_cache when not memcache - but it WILL call set_namespaces_in_cache\n\nSo this Drive-by is \"incomplete\" - but seems to acknowledge the current usage of get_namespaces even w/o cleaning up the unrelated listing_namespace set path.  I still think removing complexity in get_namespaces to acknowledge the current usage is a step in the right direction.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":946,"context_line":"        else:"},{"line_number":947,"context_line":"            cache_state \u003d \u0027set\u0027"},{"line_number":948,"context_line":"    else:"},{"line_number":949,"context_line":"        cache_state \u003d \u0027disabled\u0027"},{"line_number":950,"context_line":"    return cache_state"},{"line_number":951,"context_line":""},{"line_number":952,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"09cd9ec9_e7091d36","line":949,"in_reply_to":"fb711c84_3ea43ea1","updated":"2025-09-29 18:14:34.000000000","message":"Good point to add those related comments in https://review.opendev.org/c/openstack/swift/+/962315; I got them squashed.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":7233,"name":"Matthew 
Oliver","email":"matt@oliver.net.au","username":"mattoliverau"},"change_message_id":"dc52b27629969fd001d64b4acc2a5a7d5704605d","unresolved":true,"context_lines":[{"line_number":929,"context_line":"    :param req: a :class:`swift.common.swob.Request` object."},{"line_number":930,"context_line":"    :param cache_key: the cache key for both infocache and memcache."},{"line_number":931,"context_line":"    :param ns_bound_list: a :class:`swift.common.utils.NamespaceBoundList`;"},{"line_number":932,"context_line":"                          should NOT be None nor empty."},{"line_number":933,"context_line":"    :param time: how long the namespaces should remain in memcache."},{"line_number":934,"context_line":"    :return: the cache_state."},{"line_number":935,"context_line":"    \"\"\""}],"source_content_type":"text/x-python","patch_set":59,"id":"939ffca4_84e47616","line":932,"range":{"start_line":932,"start_character":26,"end_line":932,"end_character":54},"updated":"2025-09-30 05:28:35.000000000","message":"So looking at all references it seems we never send in a falsey value for ns_bounds_list. So I guess this is ok. But we seem to have lost the ability to deal with one if we do, and I\u0027m not sure what would happen if one did slip through.\nIt is a \"should NOT\" not a \"must NOT\" so does that mean it could be possible (or have I read too many RFCs lately :P)\n\nIf this is supposed to be MUST NOT, then maybe an assert or something. Otherwise, it\u0027s a shame we don\u0027t seem to be handling the empty/None case any more. Although I believe `namespace_list_to_bounds` does handle an empty ns_bound_list... so maybe we do.. 
and if so why add this to the doc string?","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5bd4951d9773d3301e67fc11a7d0492c316e4556","unresolved":true,"context_lines":[{"line_number":929,"context_line":"    :param req: a :class:`swift.common.swob.Request` object."},{"line_number":930,"context_line":"    :param cache_key: the cache key for both infocache and memcache."},{"line_number":931,"context_line":"    :param ns_bound_list: a :class:`swift.common.utils.NamespaceBoundList`;"},{"line_number":932,"context_line":"                          should NOT be None nor empty."},{"line_number":933,"context_line":"    :param time: how long the namespaces should remain in memcache."},{"line_number":934,"context_line":"    :return: the cache_state."},{"line_number":935,"context_line":"    \"\"\""}],"source_content_type":"text/x-python","patch_set":59,"id":"0d420a11_54914bc8","line":932,"range":{"start_line":932,"start_character":26,"end_line":932,"end_character":54},"in_reply_to":"939ffca4_84e47616","updated":"2025-09-30 16:01:56.000000000","message":"\u003e why add this to the doc string?\n\nno one knows!\n\nhttps://review.opendev.org/c/openstack/swift/+/908969/comment/cb1f325b_13c4e722/\nhttps://review.opendev.org/c/openstack/swift/+/908969/comment/fd123de0_9e8bed91/","commit_id":"d9883d083409baac3db44e1db14bf3c79a75f411"}],"swift/proxy/controllers/obj.py":[{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":349,"context_line":"        if namespaces:"},{"line_number":350,"context_line":"            # only store the list of namespace lower bounds and names into"},{"line_number":351,"context_line":"            # infocache and 
memcache."},{"line_number":352,"context_line":"            ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":353,"context_line":"            data \u003d ns_bound_list.bounds"},{"line_number":354,"context_line":"        return data, backend_response"},{"line_number":355,"context_line":""}],"source_content_type":"text/x-python","patch_set":2,"id":"765cb2b6_dbee777c","line":352,"updated":"2024-02-15 12:53:19.000000000","message":"so here we construct a NamespaceBoundList but don\u0027t return it","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[{"line_number":349,"context_line":"        if namespaces:"},{"line_number":350,"context_line":"            # only store the list of namespace lower bounds and names into"},{"line_number":351,"context_line":"            # infocache and memcache."},{"line_number":352,"context_line":"            ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":353,"context_line":"            data \u003d ns_bound_list.bounds"},{"line_number":354,"context_line":"        return data, backend_response"},{"line_number":355,"context_line":""}],"source_content_type":"text/x-python","patch_set":2,"id":"99e77412_ab083cdb","line":352,"in_reply_to":"765cb2b6_dbee777c","updated":"2024-02-20 05:24:03.000000000","message":"Acknowledged","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":393,"context_line":"                do_fetch_backend,"},{"line_number":394,"context_line":"                self.app.shard_ranges_cache_token_ttl,"},{"line_number":395,"context_line":"                
self.app.shard_ranges_cache_token_sleep_interval)"},{"line_number":396,"context_line":"            bounds \u003d cache_token.fetch_backend_with_token()"},{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"}],"source_content_type":"text/x-python","patch_set":2,"id":"7fb677f0_3922391f","line":396,"updated":"2024-02-15 12:53:19.000000000","message":"AttributeError: \u0027CooperativeCachePopulator\u0027 object has no attribute \u0027fetch_backend_with_token\u0027","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":false,"context_lines":[{"line_number":393,"context_line":"                do_fetch_backend,"},{"line_number":394,"context_line":"                self.app.shard_ranges_cache_token_ttl,"},{"line_number":395,"context_line":"                self.app.shard_ranges_cache_token_sleep_interval)"},{"line_number":396,"context_line":"            bounds \u003d cache_token.fetch_backend_with_token()"},{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"}],"source_content_type":"text/x-python","patch_set":2,"id":"8ca2e5e3_7812edf6","line":396,"in_reply_to":"7fb677f0_3922391f","updated":"2024-02-20 05:24:03.000000000","message":"Acknowledged","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair 
Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":395,"context_line":"                self.app.shard_ranges_cache_token_sleep_interval)"},{"line_number":396,"context_line":"            bounds \u003d cache_token.fetch_backend_with_token()"},{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":2,"id":"cdf913d8_db565328","line":398,"updated":"2024-02-15 12:53:19.000000000","message":"here we construct a NamespaceBoundList again ??","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":395,"context_line":"                self.app.shard_ranges_cache_token_sleep_interval)"},{"line_number":396,"context_line":"            bounds \u003d cache_token.fetch_backend_with_token()"},{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":2,"id":"183dc01f_bb117469","line":398,"in_reply_to":"736399f2_17a99f93","updated":"2024-03-20 20:39:24.000000000","message":"I 
have added an encoder/decoder interface to the CooperativeCachePopulator class to eliminate the cost of NamespaceBoundList construction.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":true,"context_lines":[{"line_number":395,"context_line":"                self.app.shard_ranges_cache_token_sleep_interval)"},{"line_number":396,"context_line":"            bounds \u003d cache_token.fetch_backend_with_token()"},{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":2,"id":"e259ae71_7ac5072a","line":398,"in_reply_to":"cdf913d8_db565328","updated":"2024-02-20 05:24:03.000000000","message":"this should be very lightweight, since a NamespaceBoundList object is just a thin wrapper around the ``bounds`` list. 
I\u0027d like the ``populate_cache_with_cooperative_token`` function to return data in the same format as the data stored in memcached; one reason is that this makes the relationship between the returned ``data`` and the cached data easier to understand, another reason is that this should be a universal interface for all other use cases of the cooperative token too.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":395,"context_line":"                self.app.shard_ranges_cache_token_sleep_interval)"},{"line_number":396,"context_line":"            bounds \u003d cache_token.fetch_backend_with_token()"},{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":2,"id":"736399f2_17a99f93","line":398,"in_reply_to":"e259ae71_7ac5072a","updated":"2024-03-15 16:01:16.000000000","message":"please don\u0027t dismiss/trivialize the cost of encode_namespace_bounds - these bounds lists are *huge* (10\u0027s of K of objects, megs of strings) and roundtripping through encode/de-serialize is *expensive*\n\nSurely we can at least agree that *ideally* we\u0027d only encode once, or [de]serialize when going to memcache.  An interface that makes it *convenient* to behave efficiently would be closer to the *ideal* even if it was (slightly?) 
more complex.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("},{"line_number":402,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":403,"context_line":"                    cache_token.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":2,"id":"6dc8515b_cc3dae12","line":400,"updated":"2024-02-15 12:53:19.000000000","message":"hmmm, so cache_token is holding this state but it is only of interest to the caller.\n\nI wondered why the helper class doesn\u0027t emit the cache metrics, but maybe that\u0027s not part of the \u0027generic\u0027 part, in which case should the set_cache_state strings be part of the helper?\n\nIn the parent review, I discuss how the helper interface could be:\n\n- helper function will write data to caller supplied infocache dict\n- if data was found in memcache, helper returns None\n- if data was fetched from the backend, helper returns response\n- if memcache set fails, helper raises an exception","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3afaa86ee37f7773f4302fa0e58e4e5fe4906cf8","unresolved":true,"context_lines":[{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"      
          ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("},{"line_number":402,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":403,"context_line":"                    cache_token.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":2,"id":"e29c16a5_e0180360","line":400,"in_reply_to":"6dc8515b_cc3dae12","updated":"2024-02-20 05:24:03.000000000","message":"Thanks for the suggestions. I converted the ``CooperativeCacheFetcher`` to be a single function which returns a tuple of (data, backend_response); \"data\" is the value of the data fetched from either memcached or the backend, \"backend_response\" is the response returned from the backend, or None if the data is fetched from memcached.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":false,"context_lines":[{"line_number":397,"context_line":"            if bounds:"},{"line_number":398,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":399,"context_line":"                    format_namespace_bounds(bounds))"},{"line_number":400,"context_line":"            if cache_token.set_cache_state:"},{"line_number":401,"context_line":"                record_cache_op_metrics("},{"line_number":402,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":403,"context_line":"                    cache_token.set_cache_state, 
None)"}],"source_content_type":"text/x-python","patch_set":2,"id":"9dc5ccba_23687adf","line":400,"in_reply_to":"e29c16a5_e0180360","updated":"2024-03-15 16:01:16.000000000","message":"Acknowledged","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"ff146041ae375fbb80c48e4c2e1cec2a14ed0ff9","unresolved":true,"context_lines":[{"line_number":406,"context_line":"                        \u0027Cached updating shards for %s (%d shards)\u0027,"},{"line_number":407,"context_line":"                        cache_key, len(bounds))"},{"line_number":408,"context_line":"            if cache_token.req_served_from_cache:"},{"line_number":409,"context_line":"                self.logger.info("},{"line_number":410,"context_line":"                    \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":411,"context_line":"                    \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":412,"context_line":"                    \u0027token for %s\u0027, len(bounds), cache_key)"}],"source_content_type":"text/x-python","patch_set":2,"id":"e6d2dcb5_7dc188a5","line":409,"updated":"2024-02-15 12:53:19.000000000","message":"I don\u0027t think we\u0027ll want this info level log - it will be on most/many requests I think?\n\nOn the other hand, it may be very interesting to have a metric that counts how many backend requests were avoided by coop token.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":false,"context_lines":[{"line_number":406,"context_line":"                        \u0027Cached updating shards for %s (%d shards)\u0027,"},{"line_number":407,"context_line":"                        
cache_key, len(bounds))"},{"line_number":408,"context_line":"            if cache_token.req_served_from_cache:"},{"line_number":409,"context_line":"                self.logger.info("},{"line_number":410,"context_line":"                    \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":411,"context_line":"                    \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":412,"context_line":"                    \u0027token for %s\u0027, len(bounds), cache_key)"}],"source_content_type":"text/x-python","patch_set":2,"id":"baa371d0_6ea84099","line":409,"in_reply_to":"e6d2dcb5_7dc188a5","updated":"2024-03-15 16:01:16.000000000","message":"the code moved and may not happen on *every* request but the sentiment of \"useful as a metric\" is still valid.","commit_id":"917650355da354b5674013c143744aad2ddc4ec9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":376,"context_line":""},{"line_number":377,"context_line":"        # caching is enabled, try to get from caches"},{"line_number":378,"context_line":"        response \u003d None"},{"line_number":379,"context_line":"        set_cache_state \u003d None"},{"line_number":380,"context_line":"        cache_key \u003d get_cache_key(account, container, shard\u003d\u0027updating\u0027)"},{"line_number":381,"context_line":"        skip_chance \u003d self.app.container_updating_shard_ranges_skip_cache"},{"line_number":382,"context_line":"        ns_bound_list, get_cache_state \u003d get_namespaces_from_cache("}],"source_content_type":"text/x-python","patch_set":9,"id":"677c087a_6844189f","line":379,"updated":"2024-03-15 16:01:16.000000000","message":"this variable is not used in this scope when we get ns_bound_list from get_namespaces_from_cache\n\nI think that would be more obvious if 
you initialize it w/i the scope that it\u0027s used.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":376,"context_line":""},{"line_number":377,"context_line":"        # caching is enabled, try to get from caches"},{"line_number":378,"context_line":"        response \u003d None"},{"line_number":379,"context_line":"        set_cache_state \u003d None"},{"line_number":380,"context_line":"        cache_key \u003d get_cache_key(account, container, shard\u003d\u0027updating\u0027)"},{"line_number":381,"context_line":"        skip_chance \u003d self.app.container_updating_shard_ranges_skip_cache"},{"line_number":382,"context_line":"        ns_bound_list, get_cache_state \u003d get_namespaces_from_cache("}],"source_content_type":"text/x-python","patch_set":9,"id":"828e2f8a_5dc2db09","line":379,"in_reply_to":"677c087a_6844189f","updated":"2024-03-20 20:39:24.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"d698eb927d6d7a169168158277203f19bd319e28","unresolved":true,"context_lines":[{"line_number":380,"context_line":"        cache_key \u003d get_cache_key(account, container, shard\u003d\u0027updating\u0027)"},{"line_number":381,"context_line":"        skip_chance \u003d self.app.container_updating_shard_ranges_skip_cache"},{"line_number":382,"context_line":"        ns_bound_list, get_cache_state \u003d get_namespaces_from_cache("},{"line_number":383,"context_line":"            req, cache_key, skip_chance)"},{"line_number":384,"context_line":"        if not ns_bound_list:"},{"line_number":385,"context_line":"            # namespaces not found in either infocache or memcache or 
cache"},{"line_number":386,"context_line":"            # skipping, so pull full set of updating shard ranges from the"}],"source_content_type":"text/x-python","patch_set":9,"id":"b6b6218c_8f3c7f53","line":383,"updated":"2024-03-15 16:37:10.000000000","message":"do we want to somehow keep track of the original get_cache_state even if we end up hitting cache during the sleeper retries?","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":false,"context_lines":[{"line_number":380,"context_line":"        cache_key \u003d get_cache_key(account, container, shard\u003d\u0027updating\u0027)"},{"line_number":381,"context_line":"        skip_chance \u003d self.app.container_updating_shard_ranges_skip_cache"},{"line_number":382,"context_line":"        ns_bound_list, get_cache_state \u003d get_namespaces_from_cache("},{"line_number":383,"context_line":"            req, cache_key, skip_chance)"},{"line_number":384,"context_line":"        if not ns_bound_list:"},{"line_number":385,"context_line":"            # namespaces not found in either infocache or memcache or cache"},{"line_number":386,"context_line":"            # skipping, so pull full set of updating shard ranges from the"}],"source_content_type":"text/x-python","patch_set":9,"id":"46c3d9a8_cf6a456b","line":383,"in_reply_to":"0f4db951_88fd924e","updated":"2024-04-24 14:02:33.000000000","message":"Done","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":380,"context_line":"        cache_key \u003d get_cache_key(account, container, shard\u003d\u0027updating\u0027)"},{"line_number":381,"context_line":"        
skip_chance \u003d self.app.container_updating_shard_ranges_skip_cache"},{"line_number":382,"context_line":"        ns_bound_list, get_cache_state \u003d get_namespaces_from_cache("},{"line_number":383,"context_line":"            req, cache_key, skip_chance)"},{"line_number":384,"context_line":"        if not ns_bound_list:"},{"line_number":385,"context_line":"            # namespaces not found in either infocache or memcache or cache"},{"line_number":386,"context_line":"            # skipping, so pull full set of updating shard ranges from the"}],"source_content_type":"text/x-python","patch_set":9,"id":"0f4db951_88fd924e","line":383,"in_reply_to":"b6b6218c_8f3c7f53","updated":"2024-04-23 01:43:15.000000000","message":"IIUC this version does this.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"d698eb927d6d7a169168158277203f19bd319e28","unresolved":true,"context_lines":[{"line_number":389,"context_line":"            memcache \u003d cache_from_env(req.environ, True)"},{"line_number":390,"context_line":"            do_fetch_backend \u003d partial("},{"line_number":391,"context_line":"                self._cache_token_fetch_backend, req, account, container)"},{"line_number":392,"context_line":"            bounds, response, exc \u003d populate_cache_with_cooperative_token("},{"line_number":393,"context_line":"                infocache, memcache,"},{"line_number":394,"context_line":"                cache_key, self.app.recheck_updating_shard_ranges,"},{"line_number":395,"context_line":"                do_fetch_backend,"}],"source_content_type":"text/x-python","patch_set":9,"id":"fc307f2c_10cd2a36","line":392,"updated":"2024-03-15 16:37:10.000000000","message":"I think I may have suggested out-of-band experimenting with returning an exc as part of the tuple - I can\u0027t remember why, and now it is written it feels \"wrong\" 😞\n\nI 
still feel that populate_cache_with_cooperative_token is trying to do too much and as a result the interface to it is bloated. I\u0027m not convinced that we need to hand so much off to another \"generic\" function that is called once at the moment (I know we anticipate it being reused - but as it is reusing it means duplicating all this error handling).","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":false,"context_lines":[{"line_number":389,"context_line":"            memcache \u003d cache_from_env(req.environ, True)"},{"line_number":390,"context_line":"            do_fetch_backend \u003d partial("},{"line_number":391,"context_line":"                self._cache_token_fetch_backend, req, account, container)"},{"line_number":392,"context_line":"            bounds, response, exc \u003d populate_cache_with_cooperative_token("},{"line_number":393,"context_line":"                infocache, memcache,"},{"line_number":394,"context_line":"                cache_key, self.app.recheck_updating_shard_ranges,"},{"line_number":395,"context_line":"                do_fetch_backend,"}],"source_content_type":"text/x-python","patch_set":9,"id":"e2b01188_1e700af1","line":392,"in_reply_to":"fc307f2c_10cd2a36","updated":"2024-04-24 14:02:33.000000000","message":"Done","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":398,"context_line":"            if isinstance(exc, MemcacheSetConnectionError):"},{"line_number":399,"context_line":"                set_cache_state \u003d \u0027set_error\u0027"},{"line_number":400,"context_line":"            elif isinstance(exc, 
MemcacheIncrConnectionError):"},{"line_number":401,"context_line":"                set_cache_state \u003d \u0027incr_error\u0027"},{"line_number":402,"context_line":"            else:"},{"line_number":403,"context_line":"                if memcache and bounds and response:"},{"line_number":404,"context_line":"                    set_cache_state \u003d \u0027set\u0027"}],"source_content_type":"text/x-python","patch_set":9,"id":"53f370f1_06d1879c","line":401,"updated":"2024-03-15 16:01:16.000000000","message":"AFAIK this is a new metric value passed to record_cache_op_metrics - it\u0027s not obvious to me it\u0027s entirely relevant to fetching/setting specifically \"shard_updating\" and may be independently useful as an inherent telemetry of the effectiveness of \"populate_cache_with_cooperative_token\" - obviously if we\u0027re getting errors trying to increment memcache we\u0027d see a coorelation of more shard_updating miss backend requests.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":398,"context_line":"            if isinstance(exc, MemcacheSetConnectionError):"},{"line_number":399,"context_line":"                set_cache_state \u003d \u0027set_error\u0027"},{"line_number":400,"context_line":"            elif isinstance(exc, MemcacheIncrConnectionError):"},{"line_number":401,"context_line":"                set_cache_state \u003d \u0027incr_error\u0027"},{"line_number":402,"context_line":"            else:"},{"line_number":403,"context_line":"                if memcache and bounds and response:"},{"line_number":404,"context_line":"                    set_cache_state \u003d \u0027set\u0027"}],"source_content_type":"text/x-python","patch_set":9,"id":"02564b43_87cf2bb5","line":401,"in_reply_to":"53f370f1_06d1879c","updated":"2024-03-20 
20:39:24.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":401,"context_line":"                set_cache_state \u003d \u0027incr_error\u0027"},{"line_number":402,"context_line":"            else:"},{"line_number":403,"context_line":"                if memcache and bounds and response:"},{"line_number":404,"context_line":"                    set_cache_state \u003d \u0027set\u0027"},{"line_number":405,"context_line":"            if bounds:"},{"line_number":406,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":407,"context_line":"                    encode_namespace_bounds(bounds))"}],"source_content_type":"text/x-python","patch_set":9,"id":"5a52e1d8_fbc8d105","line":404,"updated":"2024-03-15 16:01:16.000000000","message":"the interface of populate_cache_with_cooperative_token seems to expect that callers need to be able to consume the MemcacheError; but leaves them to \"infer\" that a successful response has \"set\" the value in memcache.\n\nCompare this to the existing set_namespaces_in_cache (no longer used in this module) where the value of set_cache_state was always returned explicitly.\n\nPerhaps it would just be better for \"populate_cache_with_cooperative_token\" to return the \"set_cache_state\" string.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":401,"context_line":"                set_cache_state \u003d \u0027incr_error\u0027"},{"line_number":402,"context_line":"            else:"},{"line_number":403,"context_line":"                if 
memcache and bounds and response:"},{"line_number":404,"context_line":"                    set_cache_state \u003d \u0027set\u0027"},{"line_number":405,"context_line":"            if bounds:"},{"line_number":406,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":407,"context_line":"                    encode_namespace_bounds(bounds))"}],"source_content_type":"text/x-python","patch_set":9,"id":"7824c5ae_76e06f83","line":404,"in_reply_to":"5a52e1d8_fbc8d105","updated":"2024-03-20 20:39:24.000000000","message":"refactored the CooperativeCachePopulator to handle exception internally which is similar to set_namespaces_in_cache.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":406,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":407,"context_line":"                    encode_namespace_bounds(bounds))"},{"line_number":408,"context_line":"                if not response:"},{"line_number":409,"context_line":"                    self.logger.info("},{"line_number":410,"context_line":"                        \u0027Retrieved updating shards (%d shards) from cache \u0027"},{"line_number":411,"context_line":"                        \u0027instead of backend due to request coalescing by \u0027"},{"line_number":412,"context_line":"                        \u0027cooperative token for %s\u0027, len(bounds), cache_key)"}],"source_content_type":"text/x-python","patch_set":9,"id":"a3a92de6_7e054aca","line":409,"updated":"2024-03-15 16:01:16.000000000","message":"I think (hope?) this info message is only produced if we a) miss memcache and then b) hit memcache because we waited.\n\nThis indeed would be useful as a metric; but may be useful in testing as (debug?) 
log message.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":406,"context_line":"                ns_bound_list \u003d NamespaceBoundList("},{"line_number":407,"context_line":"                    encode_namespace_bounds(bounds))"},{"line_number":408,"context_line":"                if not response:"},{"line_number":409,"context_line":"                    self.logger.info("},{"line_number":410,"context_line":"                        \u0027Retrieved updating shards (%d shards) from cache \u0027"},{"line_number":411,"context_line":"                        \u0027instead of backend due to request coalescing by \u0027"},{"line_number":412,"context_line":"                        \u0027cooperative token for %s\u0027, len(bounds), cache_key)"}],"source_content_type":"text/x-python","patch_set":9,"id":"2ff52e26_8ee4b671","line":409,"in_reply_to":"a3a92de6_7e054aca","updated":"2024-03-20 20:39:24.000000000","message":"In latest patchset, I used ``cache_populator.req_served_from_cache`` to guarantee that a) miss memcache and then AND b) hit memcache because we waited.\nAlso, changed logging to debug mode.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"d698eb927d6d7a169168158277203f19bd319e28","unresolved":true,"context_lines":[{"line_number":409,"context_line":"                    self.logger.info("},{"line_number":410,"context_line":"                        \u0027Retrieved updating shards (%d shards) from cache \u0027"},{"line_number":411,"context_line":"                        \u0027instead of backend due to request coalescing by \u0027"},{"line_number":412,"context_line":"                        \u0027cooperative token 
for %s\u0027, len(bounds), cache_key)"},{"line_number":413,"context_line":"            if set_cache_state:"},{"line_number":414,"context_line":"                record_cache_op_metrics("},{"line_number":415,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":9,"id":"d187acfe_e0fb50d6","line":412,"updated":"2024-03-15 16:37:10.000000000","message":"this is the happy path, IIUC, so we would not want an info level log here. Maybe a metric though.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":false,"context_lines":[{"line_number":409,"context_line":"                    self.logger.info("},{"line_number":410,"context_line":"                        \u0027Retrieved updating shards (%d shards) from cache \u0027"},{"line_number":411,"context_line":"                        \u0027instead of backend due to request coalescing by \u0027"},{"line_number":412,"context_line":"                        \u0027cooperative token for %s\u0027, len(bounds), cache_key)"},{"line_number":413,"context_line":"            if set_cache_state:"},{"line_number":414,"context_line":"                record_cache_op_metrics("},{"line_number":415,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":9,"id":"514ac489_a1a191b9","line":412,"in_reply_to":"d187acfe_e0fb50d6","updated":"2024-04-24 14:02:33.000000000","message":"Done","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":410,"context_line":"                        \u0027Retrieved updating shards (%d shards) from cache \u0027"},{"line_number":411,"context_line":"                        \u0027instead of backend due to request coalescing by \u0027"},{"line_number":412,"context_line":"                        \u0027cooperative token for %s\u0027, len(bounds), cache_key)"},{"line_number":413,"context_line":"            if set_cache_state:"},{"line_number":414,"context_line":"                record_cache_op_metrics("},{"line_number":415,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":416,"context_line":"                    set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":9,"id":"a0967114_ba0810dd","line":413,"updated":"2024-03-15 16:01:16.000000000","message":"this condition basically means we didn\u0027t try to set the value in memcache\n\neither there is no memcache configured (previously set_cache_state would be \"disabled\") or we ended up not having to set the value because we got it from memcache after a sleep w/o having to fetch from the backend.\n\nWe really want a metric for \"missed originally; but slept for N and found in memcache\" - something we could sum to mean \"total time sleeping waiting on someone else to fill memcache\" which we could chart against \"total time sleeping waiting on someone else to fill memcache; but we still ended up having to fetch from the backend anyway\"\n\nHopefully the first one is small and the later is zero.","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":false,"context_lines":[{"line_number":410,"context_line":"                        \u0027Retrieved updating shards (%d shards) from cache \u0027"},{"line_number":411,"context_line":"                        \u0027instead of backend due to request coalescing by \u0027"},{"line_number":412,"context_line":"                        \u0027cooperative token for %s\u0027, len(bounds), cache_key)"},{"line_number":413,"context_line":"            if set_cache_state:"},{"line_number":414,"context_line":"                record_cache_op_metrics("},{"line_number":415,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":416,"context_line":"                    set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":9,"id":"b0290b30_b568cced","line":413,"in_reply_to":"a0967114_ba0810dd","updated":"2024-04-22 15:06:46.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"94e8d8d9db6adcfb2c0136f1bd9444500ef36a2d","unresolved":true,"context_lines":[{"line_number":386,"context_line":"                infocache, memcache, cache_key,"},{"line_number":387,"context_line":"                self.app.recheck_updating_shard_ranges, do_fetch_backend,"},{"line_number":388,"context_line":"                self.app.shard_ranges_cache_token_retry_interval,"},{"line_number":389,"context_line":"                namespace_list_to_bounds, namespace_bounds_to_list)"},{"line_number":390,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":391,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":392,"context_line":"                
record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":14,"id":"a080649e_5e4595c9","line":389,"updated":"2024-04-19 19:44:41.000000000","message":"it doesn\u0027t look like this new behavior is \"opt-in\" - so carrying this patch to \"test in staging\" is the same as \"testing it in prod\"","commit_id":"0f4f6e1386bf214bb2268ed495745899d959ea92"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cf713d9d97a2bf0347ff2b050dd5cd9bce9fb280","unresolved":false,"context_lines":[{"line_number":386,"context_line":"                infocache, memcache, cache_key,"},{"line_number":387,"context_line":"                self.app.recheck_updating_shard_ranges, do_fetch_backend,"},{"line_number":388,"context_line":"                self.app.shard_ranges_cache_token_retry_interval,"},{"line_number":389,"context_line":"                namespace_list_to_bounds, namespace_bounds_to_list)"},{"line_number":390,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":391,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":392,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":14,"id":"b6948391_67679601","line":389,"in_reply_to":"a080649e_5e4595c9","updated":"2024-04-22 05:08:13.000000000","message":"have added an option \"namespace_cache_use_token\" in proxy server config to make token usage opt-in, it\u0027s off by default.","commit_id":"0f4f6e1386bf214bb2268ed495745899d959ea92"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":true,"context_lines":[{"line_number":378,"context_line":"                self.logger.info("},{"line_number":379,"context_line":"                    \u0027Caching updating shards for %s (%d 
shards)\u0027,"},{"line_number":380,"context_line":"                    cache_key, len(namespaces))"},{"line_number":381,"context_line":"        return ns_bound_list, response"},{"line_number":382,"context_line":""},{"line_number":383,"context_line":"    def _populate_updating_namespaces_cooperatively(self, req, account,"},{"line_number":384,"context_line":"                                                    container, cache_key):"}],"source_content_type":"text/x-python","patch_set":15,"id":"db5ced03_0d6470bf","line":381,"updated":"2024-04-22 15:06:46.000000000","message":"this looks like a pretty clean extraction of the original non-cooperative behavior","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"7d62748a0214fd0e6037e4b24de687f776d83aa1","unresolved":false,"context_lines":[{"line_number":378,"context_line":"                self.logger.info("},{"line_number":379,"context_line":"                    \u0027Caching updating shards for %s (%d shards)\u0027,"},{"line_number":380,"context_line":"                    cache_key, len(namespaces))"},{"line_number":381,"context_line":"        return ns_bound_list, response"},{"line_number":382,"context_line":""},{"line_number":383,"context_line":"    def _populate_updating_namespaces_cooperatively(self, req, account,"},{"line_number":384,"context_line":"                                                    container, cache_key):"}],"source_content_type":"text/x-python","patch_set":15,"id":"ae75d81c_a06371e0","line":381,"in_reply_to":"db5ced03_0d6470bf","updated":"2024-04-22 17:38:22.000000000","message":"Acknowledged","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":true,"context_lines":[{"line_number":418,"context_line":"            self.logger.debug("},{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        response \u003d cache_populator.backend_response"},{"line_number":423,"context_line":"        return ns_bound_list, response"},{"line_number":424,"context_line":""}],"source_content_type":"text/x-python","patch_set":15,"id":"2cbbfbe5_a7d5681e","line":421,"updated":"2024-04-22 15:06:46.000000000","message":"we\u0027re not going to see any debug log messages from prod - do we have metrics on this?","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"7d62748a0214fd0e6037e4b24de687f776d83aa1","unresolved":false,"context_lines":[{"line_number":418,"context_line":"            self.logger.debug("},{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        response \u003d cache_populator.backend_response"},{"line_number":423,"context_line":"        return ns_bound_list, 
response"},{"line_number":424,"context_line":""}],"source_content_type":"text/x-python","patch_set":15,"id":"a3e5cbac_f77df360","line":421,"in_reply_to":"2cbbfbe5_a7d5681e","updated":"2024-04-22 17:38:22.000000000","message":"yes, it\u0027s ``token.shard_updating.cache_served_reqs``","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":376,"context_line":"                        cache_key, len(namespaces))"},{"line_number":377,"context_line":"        record_cache_op_metrics("},{"line_number":378,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":379,"context_line":"            get_cache_state, response)"},{"line_number":380,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":16,"id":"6ce46d2f_634cf37d","side":"PARENT","line":379,"updated":"2024-04-23 01:43:15.000000000","message":"it seems like we already always `record_cache_op_metrics` ... 
sometimes with `response \u003d None` (on cache hit)\n\nI think that\u0027s fine, in our metrics we\u0027ll sometimes see metric\u003d\"miss\" with status\u003d\"200\" but with metric\u003d\"set\" there\u0027s never a status\u003d (cause I guess we can\u0027t set if the status wasn\u0027t a success!)","commit_id":"4dd49346f9f2e84b7f83f84cbf9db231c08997ae"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":376,"context_line":"                        cache_key, len(namespaces))"},{"line_number":377,"context_line":"        record_cache_op_metrics("},{"line_number":378,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":379,"context_line":"            get_cache_state, response)"},{"line_number":380,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":16,"id":"5b7296e4_5a7dcee2","side":"PARENT","line":379,"in_reply_to":"6ce46d2f_634cf37d","updated":"2024-05-03 05:51:16.000000000","message":"Acknowledged","commit_id":"4dd49346f9f2e84b7f83f84cbf9db231c08997ae"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":true,"context_lines":[{"line_number":348,"context_line":"            namespaces) if namespaces else None"},{"line_number":349,"context_line":"        return ns_bound_list, backend_response"},{"line_number":350,"context_line":""},{"line_number":351,"context_line":"    def _populate_updating_namespaces(self, req, account,"},{"line_number":352,"context_line":"                                      
container, cache_key):"},{"line_number":353,"context_line":"        \"\"\""},{"line_number":354,"context_line":"        Fetch all updating namespaces from backend and set it into memcache."}],"source_content_type":"text/x-python","patch_set":16,"id":"694071c5_8ba1a8ad","line":351,"updated":"2024-04-24 14:02:33.000000000","message":"+1 this is like-for-like with the code path on master, and is used when self.app.namespace_cache_use_token is False\n\nI broke this path and a bunch of tests failed on master which gives me confidence that this is still the default.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":348,"context_line":"            namespaces) if namespaces else None"},{"line_number":349,"context_line":"        return ns_bound_list, backend_response"},{"line_number":350,"context_line":""},{"line_number":351,"context_line":"    def _populate_updating_namespaces(self, req, account,"},{"line_number":352,"context_line":"                                      container, cache_key):"},{"line_number":353,"context_line":"        \"\"\""},{"line_number":354,"context_line":"        Fetch all updating namespaces from backend and set it into memcache."}],"source_content_type":"text/x-python","patch_set":16,"id":"b9d56e56_a54a3ab8","line":351,"in_reply_to":"694071c5_8ba1a8ad","updated":"2024-04-30 05:35:34.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":363,"context_line":"        \"\"\""},{"line_number":364,"context_line":"        ns_bound_list \u003d 
None"},{"line_number":365,"context_line":"        namespaces, response \u003d self._get_updating_namespaces("},{"line_number":366,"context_line":"            req, account, container)"},{"line_number":367,"context_line":"        if namespaces:"},{"line_number":368,"context_line":"            # only store the list of namespace lower bounds and names into"},{"line_number":369,"context_line":"            # infocache and memcache."}],"source_content_type":"text/x-python","patch_set":16,"id":"264455ac_b9b4c7be","line":366,"updated":"2024-04-23 01:43:15.000000000","message":"ok, so *this* request path always makes a backend request and gets a backend response (maybe the backend response was an error/503 but I don\u0027t think it\u0027s ever just \"None\")","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":363,"context_line":"        \"\"\""},{"line_number":364,"context_line":"        ns_bound_list \u003d None"},{"line_number":365,"context_line":"        namespaces, response \u003d self._get_updating_namespaces("},{"line_number":366,"context_line":"            req, account, container)"},{"line_number":367,"context_line":"        if namespaces:"},{"line_number":368,"context_line":"            # only store the list of namespace lower bounds and names into"},{"line_number":369,"context_line":"            # infocache and memcache."}],"source_content_type":"text/x-python","patch_set":16,"id":"f866a84f_c34478fb","line":366,"in_reply_to":"264455ac_b9b4c7be","updated":"2024-05-03 05:51:16.000000000","message":"in the function to parse the backend response, it will return ``None`` if response.status_int is not 
success.\nhttps://github.com/NVIDIA/swift/blob/master/swift/proxy/controllers/base.py#L2454-L2458","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":367,"context_line":"        if namespaces:"},{"line_number":368,"context_line":"            # only store the list of namespace lower bounds and names into"},{"line_number":369,"context_line":"            # infocache and memcache."},{"line_number":370,"context_line":"            ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":371,"context_line":"            set_cache_state \u003d set_namespaces_in_cache("},{"line_number":372,"context_line":"                req, cache_key, ns_bound_list,"},{"line_number":373,"context_line":"                self.app.recheck_updating_shard_ranges)"}],"source_content_type":"text/x-python","patch_set":16,"id":"66f3b92f_1220df58","line":370,"updated":"2024-04-23 01:43:15.000000000","message":"it\u0027s probable this was extracted to be \"as true to existing code as possible\" - but I think at some point it could re-use `_cache_token_fetch_backend`; they seem to be doing the same thing.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":367,"context_line":"        if namespaces:"},{"line_number":368,"context_line":"            # only store the list of namespace lower bounds and names into"},{"line_number":369,"context_line":"            # infocache and memcache."},{"line_number":370,"context_line":"            ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":371,"context_line":"            
set_cache_state \u003d set_namespaces_in_cache("},{"line_number":372,"context_line":"                req, cache_key, ns_bound_list,"},{"line_number":373,"context_line":"                self.app.recheck_updating_shard_ranges)"}],"source_content_type":"text/x-python","patch_set":16,"id":"52f4f3cb_97d79acb","line":370,"in_reply_to":"66f3b92f_1220df58","updated":"2024-05-03 05:51:16.000000000","message":"Done","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":377,"context_line":"            if set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":378,"context_line":"                self.logger.info("},{"line_number":379,"context_line":"                    \u0027Caching updating shards for %s (%d shards)\u0027,"},{"line_number":380,"context_line":"                    cache_key, len(namespaces))"},{"line_number":381,"context_line":"        return ns_bound_list, response"},{"line_number":382,"context_line":""},{"line_number":383,"context_line":"    def _populate_updating_namespaces_cooperatively(self, req, account,"}],"source_content_type":"text/x-python","patch_set":16,"id":"8b7d9655_3bf13aba","line":380,"updated":"2024-04-23 01:43:15.000000000","message":"maybe `set_cache_state \u003d\u003d \u0027set\u0027` business logic could also be extracted so we know we always get the same telemetry on \"cache_op_metrics\" regardless of `namespace_cache_use_token`","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":377,"context_line":"            if set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":378,"context_line":"   
             self.logger.info("},{"line_number":379,"context_line":"                    \u0027Caching updating shards for %s (%d shards)\u0027,"},{"line_number":380,"context_line":"                    cache_key, len(namespaces))"},{"line_number":381,"context_line":"        return ns_bound_list, response"},{"line_number":382,"context_line":""},{"line_number":383,"context_line":"    def _populate_updating_namespaces_cooperatively(self, req, account,"}],"source_content_type":"text/x-python","patch_set":16,"id":"3ccfa57d_66ecf7f4","line":380,"in_reply_to":"8b7d9655_3bf13aba","updated":"2024-08-05 19:34:02.000000000","message":"ACK. the new iteration already gets the same telemetry on \"cache_op_metrics\" regardless of using token or not.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":407,"context_line":"        if cache_populator.set_cache_state:"},{"line_number":408,"context_line":"            record_cache_op_metrics("},{"line_number":409,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":410,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":411,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":412,"context_line":"                self.logger.info("},{"line_number":413,"context_line":"                    \u0027Cached updating shards for %s (%d shards)\u0027,"}],"source_content_type":"text/x-python","patch_set":16,"id":"8ccfc07b_3a881d3f","line":410,"updated":"2024-04-23 01:43:15.000000000","message":"So the existing call to `record_cache_op_metrics` under the set path also always passes in resp\u003dNone, and not the response from `_get_updating_namespaces`\n\n        # the 
cases of cache_state is memcache miss, error, skip, force_skip\n        # or disabled.\n        if resp:\n            logger.increment(\u0027%s.%s.cache.%s.%d\u0027 % (\n                server_type, op_type, cache_state, resp.status_int))\n        else:\n            # In some situation, we choose not to lookup backend after cache\n            # miss.\n            logger.increment(\u0027%s.%s.cache.%s\u0027 % (\n                server_type, op_type, cache_state))\n\n^ I don\u0027t see \"set\" in those comments, but we definitely get some `ss_container_shard_ranges_cache` metrics in prometheus with `state\u003dhit|skip|miss|set|error` for `method\u003d\"shard_updating\"`\n\nOn a quick read of cooperative populator I\u0027m seeing at least \"set_error\" and \"inc_error\" getting added to that list.  And even tho `set_namespace_in_cache` looks willing to return a `set_error` I don\u0027t actually see ANY (??) of those MemcacheConnectionError metrics in our prod telemetry.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":407,"context_line":"        if cache_populator.set_cache_state:"},{"line_number":408,"context_line":"            record_cache_op_metrics("},{"line_number":409,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":410,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":411,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":412,"context_line":"                self.logger.info("},{"line_number":413,"context_line":"                    \u0027Cached updating shards for %s (%d 
shards)\u0027,"}],"source_content_type":"text/x-python","patch_set":16,"id":"a722f14f_4835df98","line":410,"in_reply_to":"8ccfc07b_3a881d3f","updated":"2024-05-03 05:51:16.000000000","message":"yes, when using ``record_cache_op_metrics`` to record cache set operations, no ``response`` is needed. backend ``response.status_int`` will only be needed when recording cache miss/skip operations.\n\nwhen ``set_cache_state\u003d\u003d\"set\"``, ``record_cache_op_metrics`` will fill in ``set`` into the metric name.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":411,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":412,"context_line":"                self.logger.info("},{"line_number":413,"context_line":"                    \u0027Cached updating shards for %s (%d shards)\u0027,"},{"line_number":414,"context_line":"                    cache_key, len(ns_bound_list))"},{"line_number":415,"context_line":"        record_cooperative_token_metrics("},{"line_number":416,"context_line":"            self.logger, cache_populator, \u0027shard_updating\u0027)"},{"line_number":417,"context_line":"        if cache_populator.req_served_from_cache:"}],"source_content_type":"text/x-python","patch_set":16,"id":"6c212aca_acaf7037","line":414,"updated":"2024-04-23 01:43:15.000000000","message":"looks like we always get this info log line when we set the cache - which matches my experience when watching expirer/internal-client log lines recently\n\nwe might expect after turning on cooperative token the instances of this log message go down, particularly at peak load, since set_cache_state is initialized to 
None.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":411,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":412,"context_line":"                self.logger.info("},{"line_number":413,"context_line":"                    \u0027Cached updating shards for %s (%d shards)\u0027,"},{"line_number":414,"context_line":"                    cache_key, len(ns_bound_list))"},{"line_number":415,"context_line":"        record_cooperative_token_metrics("},{"line_number":416,"context_line":"            self.logger, cache_populator, \u0027shard_updating\u0027)"},{"line_number":417,"context_line":"        if cache_populator.req_served_from_cache:"}],"source_content_type":"text/x-python","patch_set":16,"id":"d5103ef6_890028a6","line":414,"in_reply_to":"6c212aca_acaf7037","updated":"2024-05-03 05:51:16.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        response \u003d cache_populator.backend_response"},{"line_number":423,"context_line":"        return ns_bound_list, response"},{"line_number":424,"context_line":""},{"line_number":425,"context_line":"    def 
_get_update_shard(self, req, account, container, obj):"}],"source_content_type":"text/x-python","patch_set":16,"id":"9177c7ef_6c78942e","line":422,"updated":"2024-04-23 01:43:15.000000000","message":"I don\u0027t know what the cache_populator does when it doesn\u0027t have to make a backend request because it was a token loser and one of the token winners succeeded.\n\nIt looks like backend_response is initialized to None","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        response \u003d cache_populator.backend_response"},{"line_number":423,"context_line":"        return ns_bound_list, response"},{"line_number":424,"context_line":""},{"line_number":425,"context_line":"    def _get_update_shard(self, req, account, container, obj):"}],"source_content_type":"text/x-python","patch_set":16,"id":"05f782de_a88aec9f","line":422,"in_reply_to":"9177c7ef_6c78942e","updated":"2024-05-03 05:51:16.000000000","message":"in that case, ``backend_response`` will be ``None``, and the only place which will consume that ``response`` is record_cache_op_metrics() which accepts ``response\u003d\u003dNone``","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":true,"context_lines":[{"line_number":420,"context_line":"     
           \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        response \u003d cache_populator.backend_response"},{"line_number":423,"context_line":"        return ns_bound_list, response"},{"line_number":424,"context_line":""},{"line_number":425,"context_line":"    def _get_update_shard(self, req, account, container, obj):"},{"line_number":426,"context_line":"        \"\"\""}],"source_content_type":"text/x-python","patch_set":16,"id":"4caa87ba_03b56596","line":423,"updated":"2024-04-24 14:02:33.000000000","message":"I broke this path by changing to \n\n```return None, response```\n\nbut no tests failed.\n\n```\n\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d 9211 passed, 1 skipped, 21 warnings in 282.19s (0:04:42) \u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\n_________________________________________________________________________ summary _________________________________________________________________________\n  py38: commands succeeded\n  congratulations :)\n/home/vagrant/swift\nvagrant@vagrant:~/swift$ git diff\ndiff --git a/swift/proxy/controllers/obj.py b/swift/proxy/controllers/obj.py\nindex a25aa755b..5b7bfa314 100644\n--- a/swift/proxy/controllers/obj.py\n+++ b/swift/proxy/controllers/obj.py\n@@ -420,7 +420,7 @@ class BaseObjectController(Controller):\n                 \u0027of backend due to request coalescing by cooperative \u0027\n                 \u0027token for %s\u0027, len(ns_bound_list), cache_key)\n         response \u003d cache_populator.backend_response\n-        return ns_bound_list, response\n+        return None, response\n\n     def _get_update_shard(self, req, account, container, obj):\n         \"\"\"\n         ```\n\nAFAICT the new tests in test_server.py verify the actions within this method but not the eventual outcomes.\n\nI 
hope we have some existing tests that should pass with namespace_cache_use_token \u003d [False|True] (i.e. they expect a backend request) so we can validate that either code path is ultimately ok.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"325dd989caa00abf1fb5b22f27f487a778d8c905","unresolved":false,"context_lines":[{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        response \u003d cache_populator.backend_response"},{"line_number":423,"context_line":"        return ns_bound_list, response"},{"line_number":424,"context_line":""},{"line_number":425,"context_line":"    def _get_update_shard(self, req, account, container, obj):"},{"line_number":426,"context_line":"        \"\"\""}],"source_content_type":"text/x-python","patch_set":16,"id":"04619f16_e2d3486a","line":423,"in_reply_to":"4caa87ba_03b56596","updated":"2024-09-26 04:53:40.000000000","message":"Good point! I have modified my newly added test cases which set namespace_cache_use_token \u003d True. 
Now 4 out of 5 will return the below ``AssertionError`` if you change that ``return ns_bound_list, response`` to ``return None, response`` again.\n\n```\nswift/test/unit/proxy/test_server.py:5197: in do_test\n    self._check_request(request, **expectations)\nswift/test/unit/proxy/test_server.py:4281: in _check_request\n    self.assertEqual(req[\u0027headers\u0027][k], v,\nE   AssertionError: None !\u003d \u0027.shards_a/c_shard\u0027 : Expected .shards_a/c_shard but got None for key X-Backend-Quoted-Container-Path\n\n\nFAILED swift/test/unit/proxy/test_server.py::TestReplicatedObjectController::test_get_backend_updating_shard_with_cooperative_token_acquired - AssertionError: None !\u003d \u0027.shards_a/c_shard\u0027 : Expected .shards_a/c_shard but got None for key X-Backend-Quoted-Container-...\nFAILED swift/test/unit/proxy/test_server.py::TestReplicatedObjectController::test_get_backend_updating_shard_with_cooperative_token_timeout - AssertionError: None !\u003d \u0027.shards_a/c_shard\u0027 : Expected .shards_a/c_shard but got None for key X-Backend-Quoted-Container-...\nFAILED swift/test/unit/proxy/test_server.py::TestReplicatedObjectController::test_get_backend_updating_shard_wo_cooperative_token_acquired - AssertionError: None !\u003d \u0027.shards_a/c_shard\u0027 : Expected .shards_a/c_shard but got None for key X-Backend-Quoted-Container-...\nFAILED swift/test/unit/proxy/test_server.py::TestReplicatedObjectController::test_get_backend_updating_shard_wo_token_lack_retries - AssertionError: None !\u003d \u0027.shards_a/c_shard\u0027 : Expected .shards_a/c_shard but got None for key X-Backend-Quoted-Container-...\n```","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":464,"context_line":"                    req, account, container, 
cache_key)"},{"line_number":465,"context_line":"        record_cache_op_metrics("},{"line_number":466,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":467,"context_line":"            get_cache_state, response)"},{"line_number":468,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":469,"context_line":""},{"line_number":470,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":16,"id":"e5fe10ea_f759aacc","line":467,"updated":"2024-04-23 01:43:15.000000000","message":"so in the case where the backend-response is completed successfully by a token winner we\u0027ll log it here as a miss|skip.200 and we ALSO get new token.shard_updating.done_token_req and token.shard_updating.backend_reqs counters\n\nif we get a miss|skip and then the cooperative token forces the request to wait and we eventually get it from cache (the behavior we want to see) instead of a miss.200 we\u0027ll just get a miss metric with no status and a new token.shard_updating.cache_served_reqs\n\nif we get a miss|skip and then the cooperative token forces the request to wait and we eventually hit a cooperative token timeout (the worst possible scenario!) 
we\u0027ll see a miss.200 sort of like always, but also a new token.shard_updating.backend_reqs counter which we have to deduce is BAD because it\u0027s larger than token.shard_updating.done_token_req\n\nWe also don\u0027t get any telemetry on how long a request spent in a loop waiting on memcache.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"271441aeb5ef4d22c194557e7861c9adcb23f46c","unresolved":false,"context_lines":[{"line_number":464,"context_line":"                    req, account, container, cache_key)"},{"line_number":465,"context_line":"        record_cache_op_metrics("},{"line_number":466,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":467,"context_line":"            get_cache_state, response)"},{"line_number":468,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":469,"context_line":""},{"line_number":470,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":16,"id":"65177a13_65bff7f0","line":467,"in_reply_to":"c7ec248d_cf5e7054","updated":"2024-09-25 16:10:58.000000000","message":"Done","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":true,"context_lines":[{"line_number":464,"context_line":"                    req, account, container, cache_key)"},{"line_number":465,"context_line":"        record_cache_op_metrics("},{"line_number":466,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":467,"context_line":"            get_cache_state, response)"},{"line_number":468,"context_line":"        return 
ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":469,"context_line":""},{"line_number":470,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":16,"id":"c7ec248d_cf5e7054","line":467,"in_reply_to":"e5fe10ea_f759aacc","updated":"2024-05-03 05:51:16.000000000","message":"yes, those ``miss.200`` or ``skip.200`` or ``miss`` metrics are the results of the first query of the shard range cache, and will be recorded independently as before (also nothing changes when the token is disabled). I have also added status codes to token-related metrics for easy monitoring.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"5fd322ed546bbf3260f518eff4abba6da4cc4a8d","unresolved":true,"context_lines":[{"line_number":348,"context_line":"        namespaces, backend_response \u003d self._do_get_updating_namespaces("},{"line_number":349,"context_line":"            req, account, container)"},{"line_number":350,"context_line":"        ns_bound_list \u003d NamespaceBoundList.parse("},{"line_number":351,"context_line":"            namespaces) if namespaces else None"},{"line_number":352,"context_line":"        return ns_bound_list, backend_response"},{"line_number":353,"context_line":""},{"line_number":354,"context_line":"    def _populate_updating_namespaces(self, req, account,"}],"source_content_type":"text/x-python","patch_set":25,"id":"500b5290_d34b4983","line":351,"range":{"start_line":351,"start_character":23,"end_line":351,"end_character":47},"updated":"2024-07-23 00:33:19.000000000","message":"Should be good to always call `parse` -- it\u0027s got a\n```\nif not namespaces:\n    return None\n```\nearly on.","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":34930,"name":"Jianjian 
Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":348,"context_line":"        namespaces, backend_response \u003d self._do_get_updating_namespaces("},{"line_number":349,"context_line":"            req, account, container)"},{"line_number":350,"context_line":"        ns_bound_list \u003d NamespaceBoundList.parse("},{"line_number":351,"context_line":"            namespaces) if namespaces else None"},{"line_number":352,"context_line":"        return ns_bound_list, backend_response"},{"line_number":353,"context_line":""},{"line_number":354,"context_line":"    def _populate_updating_namespaces(self, req, account,"}],"source_content_type":"text/x-python","patch_set":25,"id":"1a6cbce9_74e7db54","line":351,"range":{"start_line":351,"start_character":23,"end_line":351,"end_character":47},"in_reply_to":"500b5290_d34b4983","updated":"2024-08-05 19:34:02.000000000","message":"Done","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"5fd322ed546bbf3260f518eff4abba6da4cc4a8d","unresolved":true,"context_lines":[{"line_number":460,"context_line":"                    response,"},{"line_number":461,"context_line":"                ) \u003d self._populate_updating_namespaces_cooperatively("},{"line_number":462,"context_line":"                    req, account, container, cache_key"},{"line_number":463,"context_line":"                )"},{"line_number":464,"context_line":"            else:"},{"line_number":465,"context_line":"                ns_bound_list, response \u003d self._populate_updating_namespaces("},{"line_number":466,"context_line":"                    req, account, container, cache_key)"}],"source_content_type":"text/x-python","patch_set":25,"id":"48972899_7f7ef5b4","line":463,"updated":"2024-07-23 00:33:19.000000000","message":"That 
line-length limit is a bear, isn\u0027t it?\n\nYou\u0027ve already done the work to ensure signatures are compatible; would something like\n```\nif self.app.namespace_cache_use_token:\n    populate_func \u003d self._populate_updating_namespaces_cooperatively\nelse:\n    populate_func \u003d self._populate_updating_namespaces\n\nns_bound_list, response \u003d populate_func(\n    req, account, container, cache_key)\n```\nmaybe help?","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"779f8c823e2d90f22c8679e56a1bd909f83d4f9a","unresolved":false,"context_lines":[{"line_number":460,"context_line":"                    response,"},{"line_number":461,"context_line":"                ) \u003d self._populate_updating_namespaces_cooperatively("},{"line_number":462,"context_line":"                    req, account, container, cache_key"},{"line_number":463,"context_line":"                )"},{"line_number":464,"context_line":"            else:"},{"line_number":465,"context_line":"                ns_bound_list, response \u003d self._populate_updating_namespaces("},{"line_number":466,"context_line":"                    req, account, container, cache_key)"}],"source_content_type":"text/x-python","patch_set":25,"id":"d3d7592d_02f1acc5","line":463,"in_reply_to":"48972899_7f7ef5b4","updated":"2024-08-05 19:34:02.000000000","message":"Nice change! 
yeah, those two functions have the exact same signature.","commit_id":"24c4cb68b3037de4ba90e827bd1e7b69660a7353"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"b77645c85c5aaf01a53ec4fcdfded6e6e62160fc","unresolved":true,"context_lines":[{"line_number":406,"context_line":"            record_cache_op_metrics("},{"line_number":407,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":408,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":409,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":410,"context_line":"                message \u003d \"Caching updating shards for %s (%d shards)\" % ("},{"line_number":411,"context_line":"                    cache_key, len(ns_bound_list))"},{"line_number":412,"context_line":"                if cache_populator.token_acquired:"}],"source_content_type":"text/x-python","patch_set":32,"id":"f9c72009_522668ae","line":409,"updated":"2025-02-05 19:11:09.000000000","message":"I don\u0027t love this being stringly-typed\n\nIdeally there\u0027d be some way to hide this from the caller as an internal implementation detail - but that might be difficult if we want to support these legacy metrics (unless each CooperativeCachePopulator was its own context-specific subclass that could just instrument whatever metrics belong in that context)\n\n... however if this sort of \"what was hidden from me behind the fetch_data abstraction\" HAS to bleed out of the abstraction - maybe an enum would help?  
Either way it seems like an interface that will require a lot of documentation if we want to write code that can use this interface correctly and consistently.","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":false,"context_lines":[{"line_number":406,"context_line":"            record_cache_op_metrics("},{"line_number":407,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":408,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":409,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":410,"context_line":"                message \u003d \"Caching updating shards for %s (%d shards)\" % ("},{"line_number":411,"context_line":"                    cache_key, len(ns_bound_list))"},{"line_number":412,"context_line":"                if cache_populator.token_acquired:"}],"source_content_type":"text/x-python","patch_set":32,"id":"c39c9779_ed9461d5","line":409,"in_reply_to":"6c19ffe6_c303298e","updated":"2025-05-05 21:42:58.000000000","message":"ok, we added a TODO to try an enum.\n\nthe main bleed through is:\n\n```\n            if cache_populator.set_cache_state:\n                record_cache_op_metrics(\n                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,\n                    cache_populator.set_cache_state, None)\n                if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:\n                    # TODO: use enum to unify \u0027set_cache_state\u0027 in existing\n                    # \u0027set_namespaces_in_cache\u0027 and CooperativeCachePopulator.\n\n```\n\n... 
and it still looks just as gross.","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"dee9181e290b573de3239cc03759eb5b0da5fe21","unresolved":true,"context_lines":[{"line_number":406,"context_line":"            record_cache_op_metrics("},{"line_number":407,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":408,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":409,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":410,"context_line":"                message \u003d \"Caching updating shards for %s (%d shards)\" % ("},{"line_number":411,"context_line":"                    cache_key, len(ns_bound_list))"},{"line_number":412,"context_line":"                if cache_populator.token_acquired:"}],"source_content_type":"text/x-python","patch_set":32,"id":"6c19ffe6_c303298e","line":409,"in_reply_to":"f9c72009_522668ae","updated":"2025-03-05 18:34:04.000000000","message":"will have a follow-up patch to refactor the existing legacy metrics and convert all usages of ``set_cache_state`` to enum.","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"b77645c85c5aaf01a53ec4fcdfded6e6e62160fc","unresolved":true,"context_lines":[{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        return ns_bound_list, 
cache_populator.backend_response"},{"line_number":423,"context_line":""},{"line_number":424,"context_line":"    def _get_update_shard(self, req, account, container, obj):"},{"line_number":425,"context_line":"        \"\"\""}],"source_content_type":"text/x-python","patch_set":32,"id":"2d60b980_b1054c4f","line":422,"updated":"2025-02-05 19:11:09.000000000","message":"is `cache_populator.backend_response \u003d\u003d None` another way to ask `if cache_populator.set_cache_state` \n\nis there a \"one true interface\" for asking a cache populator if it got the value from memcache or the backend?  Is there a situation where you might have gotten a backend response but failed to set the value in memcache?  Or would that be `set_cache_state \u003d\u003d \u0027error\u0027` or some other string?","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"dee9181e290b573de3239cc03759eb5b0da5fe21","unresolved":true,"context_lines":[{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        return ns_bound_list, cache_populator.backend_response"},{"line_number":423,"context_line":""},{"line_number":424,"context_line":"    def _get_update_shard(self, req, account, container, obj):"},{"line_number":425,"context_line":"        \"\"\""}],"source_content_type":"text/x-python","patch_set":32,"id":"84a6dd8d_4041598f","line":422,"in_reply_to":"2d60b980_b1054c4f","updated":"2025-03-05 18:34:04.000000000","message":"will hide ``backend_response`` within the sub-class with the follow-up patch to refactor existing legacy 
metrics.","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        return ns_bound_list, cache_populator.backend_response"},{"line_number":423,"context_line":""},{"line_number":424,"context_line":"    def _get_update_shard(self, req, account, container, obj):"},{"line_number":425,"context_line":"        \"\"\""}],"source_content_type":"text/x-python","patch_set":32,"id":"c65efe2f_09118a96","line":422,"in_reply_to":"84a6dd8d_4041598f","updated":"2025-05-05 21:42:58.000000000","message":"in the refactored path we pass the backend_resp through w/o ever looking at it; that\u0027s nice.\n\n```\n        response \u003d None\n        if not ns_bound_list:\n            ...\n            # TODO: convert existing usages of response to just status code.\n            response \u003d cache_populator.backend_resp\n\n        record_cache_op_metrics(\n            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,\n            get_cache_state, response)\n```\n\n... 
so that\u0027s nice.\n\nThe only time we have to muck with `set_cache_state` is to figure out some weird log message:\n\n```\n            if cache_populator.set_cache_state:\n                record_cache_op_metrics(\n                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,\n                    cache_populator.set_cache_state, None)\n                if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:\n```\n\n... which is still pretty gross, but probably required if we want to keep that log message every time we set the value in the cache.\n\nWhat I don\u0027t understand is why we don\u0027t want to move the `record_cache_op_metrics` call for the `set_cache_state` into `CooperativeCachePopulator`","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":419,"context_line":"                \u0027Retrieved updating shards (%d shards) from cache instead \u0027"},{"line_number":420,"context_line":"                \u0027of backend due to request coalescing by cooperative \u0027"},{"line_number":421,"context_line":"                \u0027token for %s\u0027, len(ns_bound_list), cache_key)"},{"line_number":422,"context_line":"        return ns_bound_list, cache_populator.backend_response"},{"line_number":423,"context_line":""},{"line_number":424,"context_line":"    def _get_update_shard(self, req, account, container, obj):"},{"line_number":425,"context_line":"        \"\"\""}],"source_content_type":"text/x-python","patch_set":32,"id":"f05a55f1_fb18a26f","line":422,"in_reply_to":"c65efe2f_09118a96","updated":"2025-05-09 16:29:52.000000000","message":"I would like to keep this log line; when prod runs into cache misses, it is very helpful for debugging, like which container was having lots of shards, and how 
frequently the number of shards has changed over time.\n\n```\n                if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:\n                    message \u003d \"Caching updating shards for %s (%d shards)\" % (\n                        cache_key, len(ns_bound_list))\n                    if cache_populator.token_acquired:\n                        message +\u003d \" with a finished token\"\n                    self.logger.info(message)\n```","commit_id":"3b2b8859917b8aad03423f082f2f6a7c7b48ea9d"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"f32b36bb0830daf7a4dc35e3a3a5aeeb5f2ff5c6","unresolved":false,"context_lines":[{"line_number":175,"context_line":""},{"line_number":176,"context_line":"        def __init__(self, ctrl, logger, account, container, req, cache_key):"},{"line_number":177,"context_line":"            infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":178,"context_line":"            memcache \u003d cache_from_env(req.environ, True)"},{"line_number":179,"context_line":"            cache_ttl \u003d ctrl.app.recheck_updating_shard_ranges"},{"line_number":180,"context_line":"            retry_interval \u003d ctrl.app.namespace_cache_token_retry_interval"},{"line_number":181,"context_line":"            num_tokens \u003d ctrl.app.namespace_cache_tokens_per_session"}],"source_content_type":"text/x-python","patch_set":38,"id":"d8c8a30f_68b9c92f","line":178,"range":{"start_line":178,"start_character":51,"end_line":178,"end_character":55},"updated":"2025-04-29 22:05:40.000000000","message":"OK, that\u0027s `allow_none` -- what happens if there\u0027s no cache middleware? 
Ah, we bail out early in `fetch_data`.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"f32b36bb0830daf7a4dc35e3a3a5aeeb5f2ff5c6","unresolved":true,"context_lines":[{"line_number":355,"context_line":"            or None if the update should go back to the root"},{"line_number":356,"context_line":"        \"\"\""},{"line_number":357,"context_line":"        # legacy behavior requests container server for includes\u003dobj"},{"line_number":358,"context_line":"        namespaces, response \u003d self._do_get_updating_namespaces("},{"line_number":359,"context_line":"            req, account, container, includes\u003dobj)"},{"line_number":360,"context_line":"        record_cache_op_metrics("},{"line_number":361,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":38,"id":"e6d6ccb3_964a8232","line":358,"updated":"2025-04-29 22:05:40.000000000","message":"Can this just use a `NamespaceBoundList`? I don\u0027t love the extra layer in `_get_backend_updating_namespaces`, especially when we need to add another to bifurcate calls to `_populate_updating_namespaces(_cooperatively)`. 
It feels like the sort of thing where we could probably just do the `NamespaceBoundList.parse` in `(_do)_get_updating_namespaces` (or _maybe_ even [base] `Controller._parse_namespaces`)...\n\nI guess we\u0027d need to implement a `NamespaceBoundList.__getitem__`:\n```\ndiff --git a/swift/common/utils/__init__.py b/swift/common/utils/__init__.py\nindex 9404b8dd1..67fc96627 100644\n--- a/swift/common/utils/__init__.py\n+++ b/swift/common/utils/__init__.py\n@@ -3924,6 +3924,12 @@ class NamespaceBoundList(object):\n         \"\"\"\n         return len(self.bounds)\n \n+    def __getitem__(self, index_or_slice):\n+        result \u003d self.bounds[index_or_slice]\n+        if isinstance(index_or_slice, slice):\n+            result \u003d NamespaceBoundList(result)\n+        return result\n+\n     @classmethod\n     def parse(cls, namespaces):\n         \"\"\"\n```\n\n...and maybe instantiate a `Namespace` object like `return Namespace(namespaces[0][1], \u0027\u0027, \u0027\u0027) if namespaces else None`... or we just return the account/container. 
(`_get_update_target` doesn\u0027t actually need any more than that -- and that\u0027s the only caller of `_get_update_shard`, right?)","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":true,"context_lines":[{"line_number":355,"context_line":"            or None if the update should go back to the root"},{"line_number":356,"context_line":"        \"\"\""},{"line_number":357,"context_line":"        # legacy behavior requests container server for includes\u003dobj"},{"line_number":358,"context_line":"        namespaces, response \u003d self._do_get_updating_namespaces("},{"line_number":359,"context_line":"            req, account, container, includes\u003dobj)"},{"line_number":360,"context_line":"        record_cache_op_metrics("},{"line_number":361,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":38,"id":"f1c2f321_da4c773e","line":358,"in_reply_to":"e6d6ccb3_964a8232","updated":"2025-05-03 02:51:37.000000000","message":"``_get_update_target`` only needs account/container/name; even if we generate a new namespace out of NamespaceBoundList[0], that would work since the ``bound`` entry has ``name``. 
but I feel it becomes a little harder to understand why the namespace--\u003eboundlist--\u003enamespace conversions are needed here: https://review.opendev.org/c/openstack/swift/+/948570/1/swift/proxy/controllers/obj.py#365\n\nsince we still have some users running sharded containers without memcache and IMHO those few extra small functions help with code readability, maybe we can keep them until we rip out ``_get_update_shard_caching_disabled`` one day?","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":false,"context_lines":[{"line_number":355,"context_line":"            or None if the update should go back to the root"},{"line_number":356,"context_line":"        \"\"\""},{"line_number":357,"context_line":"        # legacy behavior requests container server for includes\u003dobj"},{"line_number":358,"context_line":"        namespaces, response \u003d self._do_get_updating_namespaces("},{"line_number":359,"context_line":"            req, account, container, includes\u003dobj)"},{"line_number":360,"context_line":"        record_cache_op_metrics("},{"line_number":361,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":38,"id":"f1259563_73ad3b9c","line":358,"in_reply_to":"f1c2f321_da4c773e","updated":"2025-05-05 21:42:58.000000000","message":"\u003e Can this just use a NamespaceBoundList\n\u003e I guess we\u0027d need to implement a NamespaceBoundList.__getitem__:\n\nI don\u0027t understand why we\u0027d want to do all that?\n\nhttps://review.opendev.org/c/openstack/swift/+/948570/1/swift/common/utils/__init__.py\n\nthis weird `legacy behavior requests container server for includes\u003dobj` behavior should be completely disconnected from all the \"get shard ranges LIST from 
cache\" behavior (cooperative or not)\n\nI like the idea of getting rid of `_get_update_shard_caching_disabled` and making that path synonymous with the `if not memcache` code (which should maybe be the same as \"direct-to-async-sharded-but-not-udpate-target\" code...","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"dcf796a3770999844b9fd146eeff99d5a38d757b","unresolved":false,"context_lines":[{"line_number":355,"context_line":"            or None if the update should go back to the root"},{"line_number":356,"context_line":"        \"\"\""},{"line_number":357,"context_line":"        # legacy behavior requests container server for includes\u003dobj"},{"line_number":358,"context_line":"        namespaces, response \u003d self._do_get_updating_namespaces("},{"line_number":359,"context_line":"            req, account, container, includes\u003dobj)"},{"line_number":360,"context_line":"        record_cache_op_metrics("},{"line_number":361,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":38,"id":"e68d0ba2_b0e8245d","line":358,"in_reply_to":"f1c2f321_da4c773e","updated":"2025-05-05 20:42:46.000000000","message":"Yeah, we can sort this out later.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"73848be180f7599714e75cde33169c80fa1d57b7","unresolved":true,"context_lines":[{"line_number":376,"context_line":"        namespaces, backend_response \u003d self._do_get_updating_namespaces("},{"line_number":377,"context_line":"            req, account, container)"},{"line_number":378,"context_line":"        ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":379,"context_line":"        return ns_bound_list, 
backend_response"},{"line_number":380,"context_line":""},{"line_number":381,"context_line":"    def _populate_updating_namespaces(self, req, account,"},{"line_number":382,"context_line":"                                      container, cache_key):"}],"source_content_type":"text/x-python","patch_set":38,"id":"9a268776_0058e5de","line":379,"updated":"2025-04-30 18:26:28.000000000","message":"Pulled this layer out in https://review.opendev.org/c/openstack/swift/+/948570","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":false,"context_lines":[{"line_number":376,"context_line":"        namespaces, backend_response \u003d self._do_get_updating_namespaces("},{"line_number":377,"context_line":"            req, account, container)"},{"line_number":378,"context_line":"        ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":379,"context_line":"        return ns_bound_list, backend_response"},{"line_number":380,"context_line":""},{"line_number":381,"context_line":"    def _populate_updating_namespaces(self, req, account,"},{"line_number":382,"context_line":"                                      container, cache_key):"}],"source_content_type":"text/x-python","patch_set":38,"id":"793de2bc_76f10a62","line":379,"in_reply_to":"0c0df170_1593c42c","updated":"2025-05-05 21:42:58.000000000","message":"Acknowledged","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"dcf796a3770999844b9fd146eeff99d5a38d757b","unresolved":false,"context_lines":[{"line_number":376,"context_line":"        namespaces, backend_response \u003d self._do_get_updating_namespaces("},{"line_number":377,"context_line":"            req, account, 
container)"},{"line_number":378,"context_line":"        ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":379,"context_line":"        return ns_bound_list, backend_response"},{"line_number":380,"context_line":""},{"line_number":381,"context_line":"    def _populate_updating_namespaces(self, req, account,"},{"line_number":382,"context_line":"                                      container, cache_key):"}],"source_content_type":"text/x-python","patch_set":38,"id":"b55c435f_6490c9ad","line":379,"in_reply_to":"0c0df170_1593c42c","updated":"2025-05-05 20:42:46.000000000","message":"Acknowledged","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":true,"context_lines":[{"line_number":376,"context_line":"        namespaces, backend_response \u003d self._do_get_updating_namespaces("},{"line_number":377,"context_line":"            req, account, container)"},{"line_number":378,"context_line":"        ns_bound_list \u003d NamespaceBoundList.parse(namespaces)"},{"line_number":379,"context_line":"        return ns_bound_list, backend_response"},{"line_number":380,"context_line":""},{"line_number":381,"context_line":"    def _populate_updating_namespaces(self, req, account,"},{"line_number":382,"context_line":"                                      container, cache_key):"}],"source_content_type":"text/x-python","patch_set":38,"id":"0c0df170_1593c42c","line":379,"in_reply_to":"9a268776_0058e5de","updated":"2025-05-03 02:51:37.000000000","message":"see comment at https://review.opendev.org/c/openstack/swift/+/948570/1/swift/proxy/controllers/obj.py#365","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim 
Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"f32b36bb0830daf7a4dc35e3a3a5aeeb5f2ff5c6","unresolved":true,"context_lines":[{"line_number":391,"context_line":"            instance of :class:`swift.common.utils.NamespaceBoundList`,"},{"line_number":392,"context_line":"            response is the backend response."},{"line_number":393,"context_line":"        \"\"\""},{"line_number":394,"context_line":"        ns_bound_list, response \u003d self._get_backend_updating_namespaces("},{"line_number":395,"context_line":"            req, account, container)"},{"line_number":396,"context_line":"        if ns_bound_list:"},{"line_number":397,"context_line":"            # only store the list of namespace lower bounds and names into"}],"source_content_type":"text/x-python","patch_set":38,"id":"157a2c30_917bd19e","line":394,"updated":"2025-04-29 22:05:40.000000000","message":"Right; so all this is just extracted from the old `_get_update_shard`, only the `NamespaceBoundList.parse` has been hoisted into `_get_backend_updating_namespaces` which we\u0027re using instead of `_get_updating_namespaces`.\n\nI wonder a little, though, if it\u0027d be better to stuff this into a `NamespaceCachePopulator`-like class that _isn\u0027t_ cooperative. Then we could let `self.app.namespace_cache_use_token` dictate which implementation to use... 
I can play with that idea a little.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"73848be180f7599714e75cde33169c80fa1d57b7","unresolved":true,"context_lines":[{"line_number":391,"context_line":"            instance of :class:`swift.common.utils.NamespaceBoundList`,"},{"line_number":392,"context_line":"            response is the backend response."},{"line_number":393,"context_line":"        \"\"\""},{"line_number":394,"context_line":"        ns_bound_list, response \u003d self._get_backend_updating_namespaces("},{"line_number":395,"context_line":"            req, account, container)"},{"line_number":396,"context_line":"        if ns_bound_list:"},{"line_number":397,"context_line":"            # only store the list of namespace lower bounds and names into"}],"source_content_type":"text/x-python","patch_set":38,"id":"468ef188_1dd4b32d","line":394,"in_reply_to":"157a2c30_917bd19e","updated":"2025-04-30 18:26:28.000000000","message":"Gave it a try in https://review.opendev.org/c/openstack/swift/+/948572 -- see what you think.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":false,"context_lines":[{"line_number":391,"context_line":"            instance of :class:`swift.common.utils.NamespaceBoundList`,"},{"line_number":392,"context_line":"            response is the backend response."},{"line_number":393,"context_line":"        \"\"\""},{"line_number":394,"context_line":"        ns_bound_list, response \u003d self._get_backend_updating_namespaces("},{"line_number":395,"context_line":"            req, account, container)"},{"line_number":396,"context_line":"        if ns_bound_list:"},{"line_number":397,"context_line":"            # only store the list of 
namespace lower bounds and names into"}],"source_content_type":"text/x-python","patch_set":38,"id":"e0add107_694c24d7","line":394,"in_reply_to":"468ef188_1dd4b32d","updated":"2025-05-03 02:51:37.000000000","message":"Love this idea and change, got it squashed, thanks!","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"f32b36bb0830daf7a4dc35e3a3a5aeeb5f2ff5c6","unresolved":true,"context_lines":[{"line_number":422,"context_line":"            instance of :class:`swift.common.utils.NamespaceBoundList`,"},{"line_number":423,"context_line":"            response is the backend response."},{"line_number":424,"context_line":"        \"\"\""},{"line_number":425,"context_line":"        cache_populator \u003d self.NamespaceCachePopulator("},{"line_number":426,"context_line":"            self, self.logger, account, container, req, cache_key)"},{"line_number":427,"context_line":"        ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":428,"context_line":"        if cache_populator.set_cache_state:"}],"source_content_type":"text/x-python","patch_set":38,"id":"1892ae63_14f1d773","line":425,"range":{"start_line":425,"start_character":26,"end_line":425,"end_character":54},"updated":"2025-04-29 22:05:40.000000000","message":"That\u0027s a bit of a funny way to do it... 
did we _need_ to nest the cache-populator class inside the controller?","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"73848be180f7599714e75cde33169c80fa1d57b7","unresolved":true,"context_lines":[{"line_number":422,"context_line":"            instance of :class:`swift.common.utils.NamespaceBoundList`,"},{"line_number":423,"context_line":"            response is the backend response."},{"line_number":424,"context_line":"        \"\"\""},{"line_number":425,"context_line":"        cache_populator \u003d self.NamespaceCachePopulator("},{"line_number":426,"context_line":"            self, self.logger, account, container, req, cache_key)"},{"line_number":427,"context_line":"        ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":428,"context_line":"        if cache_populator.set_cache_state:"}],"source_content_type":"text/x-python","patch_set":38,"id":"c31887ac_be04cb7a","line":425,"range":{"start_line":425,"start_character":26,"end_line":425,"end_character":54},"in_reply_to":"1892ae63_14f1d773","updated":"2025-04-30 18:26:28.000000000","message":"https://review.opendev.org/c/openstack/swift/+/948571","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":false,"context_lines":[{"line_number":422,"context_line":"            instance of :class:`swift.common.utils.NamespaceBoundList`,"},{"line_number":423,"context_line":"            response is the backend response."},{"line_number":424,"context_line":"        \"\"\""},{"line_number":425,"context_line":"        cache_populator \u003d self.NamespaceCachePopulator("},{"line_number":426,"context_line":"            self, self.logger, account, container, req, cache_key)"},{"line_number":427,"context_line":"     
   ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":428,"context_line":"        if cache_populator.set_cache_state:"}],"source_content_type":"text/x-python","patch_set":38,"id":"46e0539c_15af591f","line":425,"range":{"start_line":425,"start_character":26,"end_line":425,"end_character":54},"in_reply_to":"c31887ac_be04cb7a","updated":"2025-05-03 02:51:37.000000000","message":"Done","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"f32b36bb0830daf7a4dc35e3a3a5aeeb5f2ff5c6","unresolved":true,"context_lines":[{"line_number":430,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":431,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":432,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":433,"context_line":"                # TODO: use enum to unify \u0027set_cache_state\u0027 in existing"},{"line_number":434,"context_line":"                # \u0027set_namespaces_in_cache\u0027 and CooperativeCachePopulator."},{"line_number":435,"context_line":"                message \u003d \"Caching updating shards for %s (%d shards)\" % ("},{"line_number":436,"context_line":"                    cache_key, len(ns_bound_list))"}],"source_content_type":"text/x-python","patch_set":38,"id":"7e4f6865_3487edf7","line":433,"range":{"start_line":433,"start_character":18,"end_line":433,"end_character":32},"updated":"2025-04-29 22:05:40.000000000","message":"Now that we\u0027ve dropped py2, we might want to revisit this idea.","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":34930,"name":"Jianjian 
Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"3c2307ea3dc2fcb84cb22fb3897a896842b58e15","unresolved":false,"context_lines":[{"line_number":430,"context_line":"                self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":431,"context_line":"                cache_populator.set_cache_state, None)"},{"line_number":432,"context_line":"            if cache_populator.set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":433,"context_line":"                # TODO: use enum to unify \u0027set_cache_state\u0027 in existing"},{"line_number":434,"context_line":"                # \u0027set_namespaces_in_cache\u0027 and CooperativeCachePopulator."},{"line_number":435,"context_line":"                message \u003d \"Caching updating shards for %s (%d shards)\" % ("},{"line_number":436,"context_line":"                    cache_key, len(ns_bound_list))"}],"source_content_type":"text/x-python","patch_set":38,"id":"ee96cc7b_7679b0d3","line":433,"range":{"start_line":433,"start_character":18,"end_line":433,"end_character":32},"in_reply_to":"7e4f6865_3487edf7","updated":"2025-05-03 02:51:37.000000000","message":"Acknowledged","commit_id":"3136ea74d3bc9a03b0553fec387cc1411e4e80a9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":376,"context_line":"                        cache_key, len(namespaces))"},{"line_number":377,"context_line":"        record_cache_op_metrics("},{"line_number":378,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":379,"context_line":"            get_cache_state, response)"},{"line_number":380,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"  
  def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":39,"id":"1fec84f3_07d33dae","side":"PARENT","line":379,"updated":"2025-05-05 21:42:58.000000000","message":"I guess the `record_cache_op_metrics` was always called at least once with `get_cache_state` - and sometimes also w/ `set_cache_state`\n\nI don\u0027t love that we continue to call `get_cache_state` with the original cache value even if we don\u0027t end up having to make a backend request.  But I\u0027m not exactly sure how we should be instrumenting that... miss,miss,miss,miss,hit!\n\nWe probably DO want to see more memcache misses if our `retry_interval` is too low.","commit_id":"16d6894d66acef49f21b5783e22a1d545e24f7fd"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":376,"context_line":"                        cache_key, len(namespaces))"},{"line_number":377,"context_line":"        record_cache_op_metrics("},{"line_number":378,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":379,"context_line":"            get_cache_state, response)"},{"line_number":380,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":39,"id":"8e79b186_42d22aaa","side":"PARENT","line":379,"in_reply_to":"1fec84f3_07d33dae","updated":"2025-05-09 16:29:52.000000000","message":"when calling ``get_cache_state`` with the original cache value, ``response`` code will be appended to the stat and then we can tell the difference:\n``object.shard_updating.cache.miss`` get cache operation ran into cache miss but was able to be served out of cache later 
with use of cooperative token.\n``object.shard_updating.cache.miss.200`` get cache operation ran into cache miss and then got the data needed from backend.","commit_id":"16d6894d66acef49f21b5783e22a1d545e24f7fd"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":184,"context_line":"            self.set_cache_state \u003d set_namespaces_in_cache("},{"line_number":185,"context_line":"                self.req, self.cache_key, ns_bound_list,"},{"line_number":186,"context_line":"                self.ctrl.app.recheck_updating_shard_ranges)"},{"line_number":187,"context_line":"        return ns_bound_list"},{"line_number":188,"context_line":""},{"line_number":189,"context_line":""},{"line_number":190,"context_line":"class CooperativeNamespaceCachePopulator(CooperativeCachePopulator):"}],"source_content_type":"text/x-python","patch_set":39,"id":"fd9253fc_eb18f40f","line":187,"updated":"2025-05-05 21:42:58.000000000","message":"this class seems fine, but I would sort of expect there to be a configuration of the CooperativeCachePopulator that spelled the braindead \"go direct to backend every time and set the result in cache\" case somewhat clearly.\n\nI don\u0027t mind that this method is straightforward, it just seems like it would duplicate some of the logic from the CooperativeCachePopulator method.\n\nedit: turns out the num_token \u003d 0 case is cursed\n\n948833: num_token \u003d 0 is go slow button | https://review.opendev.org/c/openstack/swift/+/948833\n\nif we fix that I think we\u0027ll have a clear way for a cooperativepopulator to spell \"direct to backend\"","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian 
Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":184,"context_line":"            self.set_cache_state \u003d set_namespaces_in_cache("},{"line_number":185,"context_line":"                self.req, self.cache_key, ns_bound_list,"},{"line_number":186,"context_line":"                self.ctrl.app.recheck_updating_shard_ranges)"},{"line_number":187,"context_line":"        return ns_bound_list"},{"line_number":188,"context_line":""},{"line_number":189,"context_line":""},{"line_number":190,"context_line":"class CooperativeNamespaceCachePopulator(CooperativeCachePopulator):"}],"source_content_type":"text/x-python","patch_set":39,"id":"294360ff_f7001761","line":187,"in_reply_to":"fd9253fc_eb18f40f","updated":"2025-05-09 16:29:52.000000000","message":"I had patch 948833 squashed with changes, thanks for the help!\nhttps://review.opendev.org/c/openstack/swift/+/948833","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":198,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":199,"context_line":"        cache_ttl \u003d ctrl.app.recheck_updating_shard_ranges"},{"line_number":200,"context_line":"        retry_interval \u003d ctrl.app.namespace_cache_token_retry_interval"},{"line_number":201,"context_line":"        num_tokens \u003d ctrl.app.namespace_cache_tokens_per_session"},{"line_number":202,"context_line":"        super().__init__("},{"line_number":203,"context_line":"            logger, \u0027shard_updating\u0027,"},{"line_number":204,"context_line":"            infocache, memcache, cache_key, 
cache_ttl,"}],"source_content_type":"text/x-python","patch_set":39,"id":"ee5c5877_4137cb1e","line":201,"updated":"2025-05-05 21:42:58.000000000","message":"something like namespace_cache_tokens_per_session \u003d\u003d inf should result in in \"let everyone go straight to the backed\" yeah?  I guess there\u0027s no avoiding the \"check with memcache to see if we won the token\" code path even if the answer will always be \"yes\"\n\nedit: turns out the num_token \u003d 0 case is cursed\n\n948833: num_token \u003d 0 is go slow button | https://review.opendev.org/c/openstack/swift/+/948833\n\nif we fix that I think we\u0027ll have a clear way for a cooperativepopulator to spell \"direct to backend\"","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":198,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":199,"context_line":"        cache_ttl \u003d ctrl.app.recheck_updating_shard_ranges"},{"line_number":200,"context_line":"        retry_interval \u003d ctrl.app.namespace_cache_token_retry_interval"},{"line_number":201,"context_line":"        num_tokens \u003d ctrl.app.namespace_cache_tokens_per_session"},{"line_number":202,"context_line":"        super().__init__("},{"line_number":203,"context_line":"            logger, \u0027shard_updating\u0027,"},{"line_number":204,"context_line":"            infocache, memcache, cache_key, cache_ttl,"}],"source_content_type":"text/x-python","patch_set":39,"id":"eeb34a9c_74b73d49","line":201,"in_reply_to":"ee5c5877_4137cb1e","updated":"2025-05-09 16:29:52.000000000","message":"Done","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":370,"context_line":"    def _get_update_shard_caching_disabled(self, req, account, container, obj):"},{"line_number":371,"context_line":"        \"\"\""},{"line_number":372,"context_line":"        Fetch all updating shard ranges for the given root container when"},{"line_number":373,"context_line":"        all caching is disabled."},{"line_number":374,"context_line":""},{"line_number":375,"context_line":"        :param req: original Request instance."},{"line_number":376,"context_line":"        :param account: account from which shard ranges should be fetched."}],"source_content_type":"text/x-python","patch_set":39,"id":"53c2fd3a_7c8222e0","line":373,"updated":"2025-05-05 21:42:58.000000000","message":"\u003e Fetch *all* updating shard ranges \n\nIs this even true?  This code path uses `includes\u003dobj` and returns `namespaces[0]`","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":370,"context_line":"    def _get_update_shard_caching_disabled(self, req, account, container, obj):"},{"line_number":371,"context_line":"        \"\"\""},{"line_number":372,"context_line":"        Fetch all updating shard ranges for the given root container when"},{"line_number":373,"context_line":"        all caching is disabled."},{"line_number":374,"context_line":""},{"line_number":375,"context_line":"        :param req: original Request instance."},{"line_number":376,"context_line":"        :param account: account from which shard ranges should be 
fetched."}],"source_content_type":"text/x-python","patch_set":39,"id":"8500bc51_075cf687","line":373,"in_reply_to":"53c2fd3a_7c8222e0","updated":"2025-05-09 16:29:52.000000000","message":"Done","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":439,"context_line":"                else DirectNamespaceCachePopulator"},{"line_number":440,"context_line":"            )"},{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":39,"id":"e7376758_0fa05803","line":442,"updated":"2025-05-05 21:42:58.000000000","message":"args like `self, self.logger` look a little strange - esp given that we already look inside of `ctrl.app` in the constructor to pull other stuff off `self.app` and reform the args to the super class.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":439,"context_line":"                else DirectNamespaceCachePopulator"},{"line_number":440,"context_line":"            )"},{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, 
cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("}],"source_content_type":"text/x-python","patch_set":39,"id":"d4b4a6a6_0c1c1c2e","line":442,"in_reply_to":"e7376758_0fa05803","updated":"2025-05-09 16:29:52.000000000","message":"Done","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"dcf796a3770999844b9fd146eeff99d5a38d757b","unresolved":true,"context_lines":[{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("},{"line_number":446,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":447,"context_line":"                    cache_populator.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":39,"id":"a4950c8a_16daf7fe","line":444,"updated":"2025-05-05 20:42:46.000000000","message":"OK, so one of `set`, `set_error`, or `inc_error`... I\u0027m trying to figure out whether that means we\u0027ll get more or fewer stats than when we were looking at `namespaces`... or how this stat relates to the ones coming out of `CooperativeCachePopulator`\n\nAre we trying to measure the number of requests? Number of responses? 
Definitely more than the number of cache `set`s...","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("},{"line_number":446,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":447,"context_line":"                    cache_populator.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":39,"id":"a95d4ae9_960ca814","line":444,"updated":"2025-05-05 21:42:58.000000000","message":"it seems like extra indent here is to support the new case where the cooperative-cache-populator was able to get us the goods w/o having to go to the backend.  
So we might not want to emit a `record_cache_op_metric(cache_state\u003dset_cache_state)`\n\nIt seems like this would be a good deal cleaner if we let the cooperative-cache-populator handle the `record_cache_state` call in the set case for us.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("},{"line_number":446,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":447,"context_line":"                    cache_populator.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":39,"id":"d6e76ad3_59775014","line":444,"in_reply_to":"90a3f533_92a83656","updated":"2025-05-13 22:06:08.000000000","message":"\u003e trying to figure out whether that means we\u0027ll get more or fewer stats\n\nfewer backend requests \u003d\u003e fewer memcache sets\n\nboth of those are good things.\n\nthis existing set-cache-op metric clearly knows that \"every backend fetch \u003d\u003d one memcache set\" and that\u0027s still true.  
I don\u0027t know what the new token metrics are trying to measure.\n\n\u003e migrate those stats\n\n\"add additional stats\" - we can\u0027t remove `object.shard_updating.cache.set` - we could however move it into the cache-populator since that object is going to be doing the setting and has organized to emit that stat less since we\u0027re doing fewer backend requests and memcache sets.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":true,"context_lines":[{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("},{"line_number":446,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":447,"context_line":"                    cache_populator.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":39,"id":"90a3f533_92a83656","line":444,"in_reply_to":"a4950c8a_16daf7fe","updated":"2025-05-09 16:29:52.000000000","message":"Those stats are general cache set stats with regarding to updating shard ranges or listing shard ranges. Eventually, for updating shard range ``record_cache_op_metrics`` on cache set, we should be able to migrate those stats from ``object.updating_shard.set...`` to ``token.updating_shard.set...``, probably I should do those within this patch for upstream landing purpose? 
it\u0027s going to cause some changes to current prod Grafana panels though.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":false,"context_lines":[{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("},{"line_number":446,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":447,"context_line":"                    cache_populator.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":39,"id":"4a09d0c5_0f02c551","line":444,"in_reply_to":"a95d4ae9_960ca814","updated":"2025-05-09 16:29:52.000000000","message":"or to migrate those stats from ``object.updating_shard.set...`` to ``token.updating_shard.set...``, will continue to have this discussion in other comments.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"e2f1cdabad03ce3d614fa13b28624dbb521b7d68","unresolved":false,"context_lines":[{"line_number":441,"context_line":"            cache_populator \u003d cache_populator_cls("},{"line_number":442,"context_line":"                self, self.logger, account, container, req, cache_key)"},{"line_number":443,"context_line":"            ns_bound_list \u003d cache_populator.fetch_data()"},{"line_number":444,"context_line":"            if 
cache_populator.set_cache_state:"},{"line_number":445,"context_line":"                record_cache_op_metrics("},{"line_number":446,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":447,"context_line":"                    cache_populator.set_cache_state, None)"}],"source_content_type":"text/x-python","patch_set":39,"id":"7d22cbe7_8438a431","line":444,"in_reply_to":"d6e76ad3_59775014","updated":"2025-09-23 05:01:47.000000000","message":"per discussions in other comments, this has been added into commit messages.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":452,"context_line":"                        cache_key, len(ns_bound_list))"},{"line_number":453,"context_line":"                    if cache_populator.token_acquired:"},{"line_number":454,"context_line":"                        message +\u003d \" with a finished token\""},{"line_number":455,"context_line":"                    self.logger.info(message)"},{"line_number":456,"context_line":"            # TODO: convert existing usages of response to just status code."},{"line_number":457,"context_line":"            response \u003d cache_populator.backend_resp"},{"line_number":458,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"a50638fe_414932f8","line":455,"updated":"2025-05-05 21:42:58.000000000","message":"ok, so existing info log message everytime we cache a shard range; it makes sense to me this should live outside the cooperative-cache-populator until we delete it (which we can probably do in a follow-up; esp if we have good metrics)\n\n... 
but the `record_cache_op` for the `set_cache_state` should move into `CooperativePoopulator`","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"e2f1cdabad03ce3d614fa13b28624dbb521b7d68","unresolved":false,"context_lines":[{"line_number":452,"context_line":"                        cache_key, len(ns_bound_list))"},{"line_number":453,"context_line":"                    if cache_populator.token_acquired:"},{"line_number":454,"context_line":"                        message +\u003d \" with a finished token\""},{"line_number":455,"context_line":"                    self.logger.info(message)"},{"line_number":456,"context_line":"            # TODO: convert existing usages of response to just status code."},{"line_number":457,"context_line":"            response \u003d cache_populator.backend_resp"},{"line_number":458,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"c0ac5855_4148f285","line":455,"in_reply_to":"78bc16d6_7ab73528","updated":"2025-09-23 05:01:47.000000000","message":"Done","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"659b61cb7e26a48889ac96d6f02f48f780b3d150","unresolved":true,"context_lines":[{"line_number":452,"context_line":"                        cache_key, len(ns_bound_list))"},{"line_number":453,"context_line":"                    if cache_populator.token_acquired:"},{"line_number":454,"context_line":"                        message +\u003d \" with a finished token\""},{"line_number":455,"context_line":"                    self.logger.info(message)"},{"line_number":456,"context_line":"            # TODO: convert existing usages of response to just status code."},{"line_number":457,"context_line":"            response \u003d 
cache_populator.backend_resp"},{"line_number":458,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"d4410aa9_0b7b6d35","line":455,"in_reply_to":"a50638fe_414932f8","updated":"2025-05-09 16:29:52.000000000","message":"I had an iteration yesterday to move ``record_cache_op_metrics`` into ``CooperativeCachePopulator`` (https://review.opendev.org/c/openstack/swift/+/890174/47), but then I feel it\u0027s better for ``record_cache_op_metrics`` to still stay here.\n\nIn part ``record_cache_op_metrics`` is used to emit the general cache operation stats (get and set) for  different cache usages(including updating shard ranges), those stats are related to cooperative token but more like the overall caching stats, and we have good Grafana panels to show those stats from prod; and also there are other usages within proxy controllers which call ``record_cache_op_metrics``. So it looks cleaner if ``CooperativeCachePopulator`` itself only emits token related stats.\n\nEventually, for updating shard range ``record_cache_op_metrics`` on cache set, we should be able to migrate those stats from ``object.updating_shard.set...`` to ``token.updating_shard.set...``, but I wonder probably we can attack that in the follow-on patches.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":452,"context_line":"                        cache_key, len(ns_bound_list))"},{"line_number":453,"context_line":"                    if cache_populator.token_acquired:"},{"line_number":454,"context_line":"                        message +\u003d \" with a finished token\""},{"line_number":455,"context_line":"                    self.logger.info(message)"},{"line_number":456,"context_line":"            # TODO: convert existing usages of response to just 
status code."},{"line_number":457,"context_line":"            response \u003d cache_populator.backend_resp"},{"line_number":458,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"f579e924_fe793372","line":455,"in_reply_to":"d4410aa9_0b7b6d35","updated":"2025-05-13 22:06:08.000000000","message":"\u003e So it looks cleaner if CooperativeCachePopulator itself only emits token related stats.\n\nwe definitively disagree about that; I find the `cache_populator.set_cache_state` horrifying - it\u0027s really not much of an abstraction if you have to be able to interrogate what it\u0027s doing under the hood sufficiently to provide deep instrumentation from the outside.\n\nIt\u0027s setting the cache, replacing an existing call - if it can move the memcache-set into the abstraction: it can move the cache-set-metrics.\n\n... and I think that would look cleaner.\n\n\u003e we should be able to migrate those stats \n\nI don\u0027t really think of stats that way.  Once we merge a stat to master we don\u0027t really know when someone outside of nvidia will have built a dashboard that we might break if we decided the metrics would be \"better\" if we did them differently. \n\nIt\u0027s why we still have legacy counters like:\n\n`proxy_request_timing.object.200`\n\n... even after we added storage policy:\n\n`proxy_request_timing.object.0.200`\n\n... 
sure someone *could* \"migrate\" their graphs to `sum(proxy_reqeust_timing.object.*.200)` and get everything they can from the original metric as a sum over the new metric that has more info - but since we can\u0027t control when/if they do that we have to keep emitting the old metric with less information.\n\nThis problem goes away with labeled metrics b/c new labels are additive - if we want to \"split\" a metric to have more information we can do that without breaking any existing graphs.\n\nSince we\u0027ve already got `record_cache_op_metrics` on master - it\u0027s not going away - regardless of if moves into the CooperativePopulator.  If we eventually want cooperative-token-cache metrics to be a superset of normal pre-existing legacy-style cache metrics it might be useful to keep them closer together:\n\n```\n# legacy metrics\nrecord_cache_op()\n# richer multi-dimension extensible metrics\nself.stats.increment(\u0027cache\u0027, labels\u003d{event\u003d\u0027set\u0027, resource\u003d\u0027shard_updating\u0027, status\u003d200, token\u003d\u0027yes\u0027})\n```","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"c11a8355b5e234efd9a786d00a9fa5eb5e0ea60b","unresolved":true,"context_lines":[{"line_number":452,"context_line":"                        cache_key, len(ns_bound_list))"},{"line_number":453,"context_line":"                    if cache_populator.token_acquired:"},{"line_number":454,"context_line":"                        message +\u003d \" with a finished token\""},{"line_number":455,"context_line":"                    self.logger.info(message)"},{"line_number":456,"context_line":"            # TODO: convert existing usages of response to just status code."},{"line_number":457,"context_line":"            response \u003d 
cache_populator.backend_resp"},{"line_number":458,"context_line":""}],"source_content_type":"text/x-python","patch_set":39,"id":"78bc16d6_7ab73528","line":455,"in_reply_to":"f579e924_fe793372","updated":"2025-05-23 23:33:21.000000000","message":"\u003e Once we merge a stat to master we don\u0027t really know when someone outside of nvidia will have built a dashboard that we might break if we decided the metrics would be \"better\" if we did them differently.\n\nI\u0027ve gotta push back on this. It\u0027s unreasonable to expect that all dashboards built ten years ago should still function exactly as they had. *Especially* dashboards for an area of active development and investment (such as shard-range caching). We need to give ourselves freedom enough to get rid of stats when we find we now have better ones to capture the state that we\u0027re actually interested in.\n\n\u003e It\u0027s why we still have legacy counters like:\n\u003e ```\n\u003e proxy_request_timing.object.200\n\u003e ```\n\u003e ... even after we added storage policy:\n\u003e ```\n\u003e proxy_request_timing.object.0.200\n\u003e ```\n\nWe can\u0027t keep holding this up as some sacred cow -- the only reason we haven\u0027t gotten rid of the old one is that no one\u0027s pushed for it, so we haven\u0027t messaged any metrics removal in release notes and we haven\u0027t written the patch. 
Want me to start that messaging?\n\nMy worry is that if we *did* hold to \"we must not break stats\" like we do \"we must not break clients\" we\u0027d eventually either\n\n- emit so many stats that no one can remember which ones are the \"good ones\" any more or\n- be incredibly hesitant to add *any* new stat\n\nor both, none of which seem like healthy places.\n\n\u003e since we can\u0027t control when/if they [\"migrate\" their graphs] we have to keep emitting the old metric with less information.\n\nWe also don\u0027t have any control over when/if ops migrate configs when we deprecate/remove config options, but that doesn\u0027t mean we can\u0027t do it.\n\nBut this maybe starts to point at where we need some serious investment. No matter what we do, we can\u0027t control when ops update graphs -- but we **can** at least start making concrete, opinionated recommendations about what kind of graphs ops should be looking at to determine cluster health, publishing some graph definitions for them, and *updating the recommendations as we find them to be out of date*. 
The ideal (for me, in the long run) would be that ops find ways to integrate our recommendations directly, so shortly after they upgrade swift they upgrade their graphs, too.\n\nI want something like our `etc/` directory full of sample configs, but full of graph definitions.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":369,"context_line":"                    self.app.recheck_updating_shard_ranges)"},{"line_number":370,"context_line":"                record_cache_op_metrics("},{"line_number":371,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":372,"context_line":"                    set_cache_state, None)"},{"line_number":373,"context_line":"                if set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":374,"context_line":"                    self.logger.info("},{"line_number":375,"context_line":"                        \u0027Caching updating shards for %s (%d shards)\u0027,"}],"source_content_type":"text/x-python","patch_set":45,"id":"73eaeb47_c3664653","side":"PARENT","line":372,"updated":"2025-05-13 22:06:08.000000000","message":"previously we always recorded the `set_cache_state` with `resp\u003dNone` i.e. 
the set metric has never included the status - which was *fine* b/c we always recorded the resp as part of `get_cache_state`","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":369,"context_line":"                    self.app.recheck_updating_shard_ranges)"},{"line_number":370,"context_line":"                record_cache_op_metrics("},{"line_number":371,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":372,"context_line":"                    set_cache_state, None)"},{"line_number":373,"context_line":"                if set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":374,"context_line":"                    self.logger.info("},{"line_number":375,"context_line":"                        \u0027Caching updating shards for %s (%d shards)\u0027,"}],"source_content_type":"text/x-python","patch_set":45,"id":"b09fa3de_d3c263bc","side":"PARENT","line":372,"in_reply_to":"73eaeb47_c3664653","updated":"2025-05-30 22:35:41.000000000","message":"comment on older version code.","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":376,"context_line":"                        cache_key, len(namespaces))"},{"line_number":377,"context_line":"        record_cache_op_metrics("},{"line_number":378,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":379,"context_line":"            get_cache_state, response)"},{"line_number":380,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else 
None"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":45,"id":"2272db6c_0c1b53ee","side":"PARENT","line":379,"updated":"2025-05-13 22:06:08.000000000","message":"FWIW it seems we always recorded the `get_cache_state` *after* we (maybe) recorded the `set_cache_state` - when there was no request the response value here was always None.","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":376,"context_line":"                        cache_key, len(namespaces))"},{"line_number":377,"context_line":"        record_cache_op_metrics("},{"line_number":378,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":379,"context_line":"            get_cache_state, response)"},{"line_number":380,"context_line":"        return ns_bound_list.get_namespace(obj) if ns_bound_list else None"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    def _get_update_target(self, req, container_info):"}],"source_content_type":"text/x-python","patch_set":45,"id":"0860a2b4_0413785f","side":"PARENT","line":379,"in_reply_to":"2272db6c_0c1b53ee","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":false,"context_lines":[{"line_number":168,"context_line":"    container cooperatively using cooperative token and memcached."},{"line_number":169,"context_line":"    
\"\"\""},{"line_number":170,"context_line":""},{"line_number":171,"context_line":"    def __init__(self, ctrl, account, container, req, cache_key):"},{"line_number":172,"context_line":"        infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":173,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":174,"context_line":"        cache_ttl \u003d ctrl.app.recheck_updating_shard_ranges"}],"source_content_type":"text/x-python","patch_set":45,"id":"eb935a3d_829daa0e","line":171,"updated":"2025-05-13 22:06:08.000000000","message":"this signature looks better IMHO","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":394,"context_line":"            or None if the update should go back to the root"},{"line_number":395,"context_line":"        \"\"\""},{"line_number":396,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":397,"context_line":"        if not self.app.recheck_updating_shard_ranges or not memcache:"},{"line_number":398,"context_line":"            # caching is disabled"},{"line_number":399,"context_line":"            return self._get_update_shard_caching_disabled("},{"line_number":400,"context_line":"                req, account, container, obj)"}],"source_content_type":"text/x-python","patch_set":45,"id":"ffa5631c_b6e06a06","line":397,"updated":"2025-05-13 22:06:08.000000000","message":"oic, this is why the `_get_update_shard` test is \"fixed\" to produce the ?includes\u003dobj query.  
I think this behavior makes the most sense for the memcache disabled case.","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":394,"context_line":"            or None if the update should go back to the root"},{"line_number":395,"context_line":"        \"\"\""},{"line_number":396,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":397,"context_line":"        if not self.app.recheck_updating_shard_ranges or not memcache:"},{"line_number":398,"context_line":"            # caching is disabled"},{"line_number":399,"context_line":"            return self._get_update_shard_caching_disabled("},{"line_number":400,"context_line":"                req, account, container, obj)"}],"source_content_type":"text/x-python","patch_set":45,"id":"dd7b6190_e2b7ce16","line":397,"in_reply_to":"ffa5631c_b6e06a06","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":369,"context_line":"                    self.app.recheck_updating_shard_ranges)"},{"line_number":370,"context_line":"                record_cache_op_metrics("},{"line_number":371,"context_line":"                    self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"},{"line_number":372,"context_line":"                    set_cache_state, None)"},{"line_number":373,"context_line":"                if set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":374,"context_line":"                    self.logger.info("},{"line_number":375,"context_line":"                   
     \u0027Caching updating shards for %s (%d shards)\u0027,"}],"source_content_type":"text/x-python","patch_set":56,"id":"cac11017_68de7f6b","side":"PARENT","line":372,"updated":"2025-09-25 22:24:36.000000000","message":"OMG!?  so we could *trivially* just NOT change the existing legacy metrics by just continuing to pass None here instead of backend_resp!?\n\n```\ndiff --git a/swift/proxy/controllers/obj.py b/swift/proxy/controllers/obj.py\nindex 3df23cffd..7ad937bc6 100644\n--- a/swift/proxy/controllers/obj.py\n+++ b/swift/proxy/controllers/obj.py\n@@ -422,9 +422,7 @@ class BaseObjectController(Controller):\n                 # record the general cache set metrics.\n                 record_cache_op_metrics(\n                     self.logger, self.server_type.lower(), \u0027shard_updating\u0027,\n-                    cache_populator.set_cache_state,\n-                    cache_populator.backend_resp\n-                )\n+                    cache_populator.set_cache_state, None)\n                 # TODO: use enum to unify \u0027set_cache_state\u0027 in existing\n                 # \u0027set_namespaces_in_cache\u0027 and CooperativeCachePopulator, and\n                 # convert existing usages of response to just status code.\n```\n\nfollowed by a whole bunch of:\n\n```\n-                \u0027object.shard_updating.cache.set_error.200\u0027: 1\n+                \u0027object.shard_updating.cache.set_error\u0027: 1\n```","commit_id":"b74296ef8a4902726852bae1a0e80eb15061efa8"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":369,"context_line":"                    self.app.recheck_updating_shard_ranges)"},{"line_number":370,"context_line":"                record_cache_op_metrics("},{"line_number":371,"context_line":"                    self.logger, self.server_type.lower(), 
\u0027shard_updating\u0027,"},{"line_number":372,"context_line":"                    set_cache_state, None)"},{"line_number":373,"context_line":"                if set_cache_state \u003d\u003d \u0027set\u0027:"},{"line_number":374,"context_line":"                    self.logger.info("},{"line_number":375,"context_line":"                        \u0027Caching updating shards for %s (%d shards)\u0027,"}],"source_content_type":"text/x-python","patch_set":56,"id":"9a1fef2b_77e6ee9e","side":"PARENT","line":372,"in_reply_to":"cac11017_68de7f6b","updated":"2025-09-29 18:14:34.000000000","message":"Done","commit_id":"b74296ef8a4902726852bae1a0e80eb15061efa8"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":171,"context_line":"    def __init__(self, ctrl, account, container, req, cache_key):"},{"line_number":172,"context_line":"        infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":173,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":174,"context_line":"        cache_ttl \u003d ctrl.app.recheck_updating_shard_ranges"},{"line_number":175,"context_line":"        avg_backend_fetch_time \u003d ctrl.app.namespace_avg_backend_fetch_time"},{"line_number":176,"context_line":"        num_tokens \u003d ctrl.app.namespace_cache_tokens_per_session"},{"line_number":177,"context_line":"        labels \u003d {"}],"source_content_type":"text/x-python","patch_set":56,"id":"9aac2bfa_d853b96e","line":174,"updated":"2025-09-25 22:24:36.000000000","message":"I had to remind myself the difference between cache_ttl and token_ttl\n\nthis is how long the updating-shard-ranges cache object will live in memcache.\n\ntoken_ttl gets made up inside the super class - callers don\u0027t have directly control, as described int he example config 
it\u0027s always one order of magnitude larger than the avg_backend_fetch_time","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":171,"context_line":"    def __init__(self, ctrl, account, container, req, cache_key):"},{"line_number":172,"context_line":"        infocache \u003d req.environ.setdefault(\u0027swift.infocache\u0027, {})"},{"line_number":173,"context_line":"        memcache \u003d cache_from_env(req.environ, True)"},{"line_number":174,"context_line":"        cache_ttl \u003d ctrl.app.recheck_updating_shard_ranges"},{"line_number":175,"context_line":"        avg_backend_fetch_time \u003d ctrl.app.namespace_avg_backend_fetch_time"},{"line_number":176,"context_line":"        num_tokens \u003d ctrl.app.namespace_cache_tokens_per_session"},{"line_number":177,"context_line":"        labels \u003d {"}],"source_content_type":"text/x-python","patch_set":56,"id":"a6c777a9_68c90fdd","line":174,"in_reply_to":"9aac2bfa_d853b96e","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":361,"context_line":"            or None if the update should go back to the root"},{"line_number":362,"context_line":"        \"\"\""},{"line_number":363,"context_line":"        # legacy behavior requests container server for includes\u003dobj"},{"line_number":364,"context_line":"        namespaces, response \u003d self._do_get_updating_namespaces("},{"line_number":365,"context_line":"            req, account, container, includes\u003dobj)"},{"line_number":366,"context_line":"        
record_cache_op_metrics("},{"line_number":367,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":56,"id":"d178f2a2_a8b0381e","line":364,"updated":"2025-09-25 22:24:36.000000000","message":"if I\u0027m reading correctly - down in _get_listing_namespaces_from_backend we\u0027ll call `_set_listing_namespaces_in_cache` but only for \"complete_listing\"\n\nbut that only happens if you go through the container controller via `_GET_auto`\n\nthe _do_get_updating_namespaces path uses a pre-auth\u0027d request and never sets the cache.","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":361,"context_line":"            or None if the update should go back to the root"},{"line_number":362,"context_line":"        \"\"\""},{"line_number":363,"context_line":"        # legacy behavior requests container server for includes\u003dobj"},{"line_number":364,"context_line":"        namespaces, response \u003d self._do_get_updating_namespaces("},{"line_number":365,"context_line":"            req, account, container, includes\u003dobj)"},{"line_number":366,"context_line":"        record_cache_op_metrics("},{"line_number":367,"context_line":"            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,"}],"source_content_type":"text/x-python","patch_set":56,"id":"d781ddd2_3da3ddab","line":364,"in_reply_to":"d178f2a2_a8b0381e","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"}],"swift/proxy/server.py":[{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":44,"context_line":"from swift.proxy.controllers.base import get_container_info, \\"},{"line_number":45,"context_line":"    DEFAULT_RECHECK_CONTAINER_EXISTENCE, DEFAULT_RECHECK_ACCOUNT_EXISTENCE, \\"},{"line_number":46,"context_line":"    DEFAULT_RECHECK_UPDATING_SHARD_RANGES, \\"},{"line_number":47,"context_line":"    DEFAULT_RECHECK_LISTING_SHARD_RANGES, \\"},{"line_number":48,"context_line":"    DEFAULT_SHARD_RANGES_CACHE_TOKEN_TTL, \\"},{"line_number":49,"context_line":"    DEFAULT_SHARD_RANGES_CACHE_TOKEN_SLEEP_INTERVAL"},{"line_number":50,"context_line":"from swift.common.swob import HTTPBadRequest, HTTPForbidden, \\"}],"source_content_type":"text/x-python","patch_set":9,"id":"51a58d32_9b5ce75c","line":47,"updated":"2024-03-15 16:01:16.000000000","message":"are these other constants also only used in this module?","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":44,"context_line":"from swift.proxy.controllers.base import get_container_info, \\"},{"line_number":45,"context_line":"    DEFAULT_RECHECK_CONTAINER_EXISTENCE, DEFAULT_RECHECK_ACCOUNT_EXISTENCE, \\"},{"line_number":46,"context_line":"    DEFAULT_RECHECK_UPDATING_SHARD_RANGES, \\"},{"line_number":47,"context_line":"    DEFAULT_RECHECK_LISTING_SHARD_RANGES, \\"},{"line_number":48,"context_line":"    DEFAULT_SHARD_RANGES_CACHE_TOKEN_TTL, \\"},{"line_number":49,"context_line":"    DEFAULT_SHARD_RANGES_CACHE_TOKEN_SLEEP_INTERVAL"},{"line_number":50,"context_line":"from swift.common.swob import HTTPBadRequest, HTTPForbidden, 
\\"}],"source_content_type":"text/x-python","patch_set":9,"id":"c311d755_3d867f4b","line":47,"in_reply_to":"51a58d32_9b5ce75c","updated":"2024-03-20 20:39:24.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":254,"context_line":"                         DEFAULT_SHARD_RANGES_CACHE_TOKEN_TTL))"},{"line_number":255,"context_line":"        self.shard_ranges_cache_token_sleep_interval \u003d \\"},{"line_number":256,"context_line":"            float(conf.get(\u0027shard_ranges_cache_token_sleep_interval\u0027,"},{"line_number":257,"context_line":"                  DEFAULT_SHARD_RANGES_CACHE_TOKEN_SLEEP_INTERVAL))"},{"line_number":258,"context_line":"        self.allow_account_management \u003d \\"},{"line_number":259,"context_line":"            config_true_value(conf.get(\u0027allow_account_management\u0027, \u0027no\u0027))"},{"line_number":260,"context_line":"        self.container_ring \u003d container_ring or Ring(swift_dir,"}],"source_content_type":"text/x-python","patch_set":9,"id":"c519e0e2_077fc330","line":257,"updated":"2024-03-15 16:01:16.000000000","message":"these new options should trigger an update in etc/proxy-server.conf-sample","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":false,"context_lines":[{"line_number":254,"context_line":"                         DEFAULT_SHARD_RANGES_CACHE_TOKEN_TTL))"},{"line_number":255,"context_line":"        self.shard_ranges_cache_token_sleep_interval \u003d \\"},{"line_number":256,"context_line":"            
float(conf.get(\u0027shard_ranges_cache_token_sleep_interval\u0027,"},{"line_number":257,"context_line":"                  DEFAULT_SHARD_RANGES_CACHE_TOKEN_SLEEP_INTERVAL))"},{"line_number":258,"context_line":"        self.allow_account_management \u003d \\"},{"line_number":259,"context_line":"            config_true_value(conf.get(\u0027allow_account_management\u0027, \u0027no\u0027))"},{"line_number":260,"context_line":"        self.container_ring \u003d container_ring or Ring(swift_dir,"}],"source_content_type":"text/x-python","patch_set":9,"id":"303b7cef_53633545","line":257,"in_reply_to":"c519e0e2_077fc330","updated":"2024-04-22 15:06:46.000000000","message":"Acknowledged","commit_id":"9e87701a33803cd293db30ea0e1b8949c3db7c6c"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b2eb439090dbb7d3383c43b9c7da3fd49922d38","unresolved":true,"context_lines":[{"line_number":254,"context_line":"                  DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":255,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":256,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":257,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"},{"line_number":258,"context_line":"        self.allow_account_management \u003d \\"},{"line_number":259,"context_line":"            config_true_value(conf.get(\u0027allow_account_management\u0027, \u0027no\u0027))"},{"line_number":260,"context_line":"        self.container_ring \u003d container_ring or Ring(swift_dir,"}],"source_content_type":"text/x-python","patch_set":15,"id":"3c48d4fa_a075971d","line":257,"updated":"2024-04-22 15:06:46.000000000","message":"these new options should trigger an update in 
etc/proxy-server.conf-sample","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"7d62748a0214fd0e6037e4b24de687f776d83aa1","unresolved":false,"context_lines":[{"line_number":254,"context_line":"                  DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":255,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":256,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":257,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"},{"line_number":258,"context_line":"        self.allow_account_management \u003d \\"},{"line_number":259,"context_line":"            config_true_value(conf.get(\u0027allow_account_management\u0027, \u0027no\u0027))"},{"line_number":260,"context_line":"        self.container_ring \u003d container_ring or Ring(swift_dir,"}],"source_content_type":"text/x-python","patch_set":15,"id":"7435b699_d5e278b8","line":257,"in_reply_to":"3c48d4fa_a075971d","updated":"2024-04-22 17:38:22.000000000","message":"Done","commit_id":"41c519ab9349a00bfaf9f7750f7b82643ac0e634"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":250,"context_line":"        self.account_existence_skip_cache \u003d config_percent_value("},{"line_number":251,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":252,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":253,"context_line":"            float(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":254,"context_line":"                  DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":255,"context_line":"        
self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":256,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"}],"source_content_type":"text/x-python","patch_set":16,"id":"fbb67984_dcfd37f1","line":253,"updated":"2024-04-23 01:43:15.000000000","message":"should float be config_true_value?","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":250,"context_line":"        self.account_existence_skip_cache \u003d config_percent_value("},{"line_number":251,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":252,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":253,"context_line":"            float(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":254,"context_line":"                  DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":255,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":256,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"}],"source_content_type":"text/x-python","patch_set":16,"id":"705c6fc8_35a3b87f","line":253,"in_reply_to":"30de84e8_df4782d8","updated":"2024-04-30 05:35:34.000000000","message":"Done","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":7847,"name":"Alistair Coles","email":"alistairncoles@gmail.com","username":"acoles"},"change_message_id":"e976ddc798ae986d063250eaad0916b1d0108793","unresolved":true,"context_lines":[{"line_number":250,"context_line":"        self.account_existence_skip_cache \u003d config_percent_value("},{"line_number":251,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 
0))"},{"line_number":252,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":253,"context_line":"            float(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":254,"context_line":"                  DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":255,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":256,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"}],"source_content_type":"text/x-python","patch_set":16,"id":"30de84e8_df4782d8","line":253,"in_reply_to":"fbb67984_dcfd37f1","updated":"2024-04-24 14:02:33.000000000","message":"eek! \"True\"/\"False\" will blow up, \"0\" or \"1\" will be ok\n\n(note that conf values are *strings* when read from file)\n\n```\ndiff --git a/test/unit/proxy/test_server.py b/test/unit/proxy/test_server.py\nindex 7790c3361..c1bb64b1e 100644\n--- a/test/unit/proxy/test_server.py\n+++ b/test/unit/proxy/test_server.py\n@@ -628,6 +628,22 @@ class TestProxyServerConfiguration(unittest.TestCase):\n         self.assertEqual(app.container_listing_shard_ranges_skip_cache, 0.0001)\n         self.assertEqual(app.container_updating_shard_ranges_skip_cache, 0.001)\n\n+    def test_namespace_cache_use_token_options(self):\n+        # check default options\n+        app \u003d self._make_app({})\n+        self.assertEqual(0.1, app.namespace_cache_token_retry_interval)\n+        self.assertFalse(app.namespace_cache_use_token)\n+\n+        app \u003d self._make_app({\u0027namespace_cache_use_token\u0027: \u0027False\u0027,\n+                              \u0027namespace_cache_token_retry_interval\u0027: \u00270.2\u0027})\n+        self.assertEqual(0.2, app.namespace_cache_token_retry_interval)\n+        self.assertFalse(app.namespace_cache_use_token)\n+\n+        app \u003d self._make_app({\u0027namespace_cache_use_token\u0027: \u0027True\u0027,\n+                              
\u0027namespace_cache_token_retry_interval\u0027: \u00270.3\u0027})\n+        self.assertEqual(0.3, app.namespace_cache_token_retry_interval)\n+        self.assertFalse(app.namespace_cache_use_token)\n+\n\n @patch_policies([StoragePolicy(0, \u0027zero\u0027, True, object_ring\u003dFakeRing())])\n class TestProxyServer(unittest.TestCase):\n```","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"a301cfcb0becff62151fd99f8c62877c74b2af78","unresolved":true,"context_lines":[{"line_number":254,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":255,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":256,"context_line":"            config_true_value(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":257,"context_line":"                                       DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":258,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":259,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":260,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"}],"source_content_type":"text/x-python","patch_set":30,"id":"f4828322_87cc1fbf","line":257,"updated":"2024-10-17 22:10:21.000000000","message":"Could we ditch this config option and replace it with something like\n```\nself.namespace_cache_use_token \u003d (self.namespace_cache_token_retry_interval \u003e 0)\n```\n? 
Or just check `if self.namespace_cache_token_retry_interval \u003e 0:` at the appropriate spot and get rid of the attr entirely.","commit_id":"01bf2f6fd030ee8285a6b1137432ba83af818884"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"35dc4bc3921befb4cb049b7d1dfe6c30e0f5079a","unresolved":false,"context_lines":[{"line_number":254,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":255,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":256,"context_line":"            config_true_value(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":257,"context_line":"                                       DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":258,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":259,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":260,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"}],"source_content_type":"text/x-python","patch_set":30,"id":"48e19cb3_092f4f8b","line":257,"in_reply_to":"8fa8b2af_a3845465","updated":"2025-05-06 05:22:35.000000000","message":"thanks @tburke@nvidia.com and @clay.gerrard@gmail.com for pointing this out, ``retry_interval \u003d 0`` is not what we want, I will create a new helper function to make sure it\u0027s a positive float number.\n\nIt\u0027s good to reduce three options down to two, I will use ``num_token # any \u003e 1 means \"on\"`` and ``retry_interval # any \u003e\u003d zero is fine``.\n\nPer offline discussion, ``token_ttl`` is not needed to be configurable for now, added more comments.","commit_id":"01bf2f6fd030ee8285a6b1137432ba83af818884"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"3ac5b97eb694bd7419d416cd4d22be40d9525d57","unresolved":true,"context_lines":[{"line_number":254,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":255,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":256,"context_line":"            config_true_value(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":257,"context_line":"                                       DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":258,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":259,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":260,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"}],"source_content_type":"text/x-python","patch_set":30,"id":"8fa8b2af_a3845465","line":257,"in_reply_to":"dda5f22b_4f470d54","updated":"2025-05-05 21:42:58.000000000","message":"\u003e Do we actually want these to be able to be cached indefinitely?\n\nWAT!?\n\nedit: so the problem with `use_token \u003d true; interval \u003d 0` is it ALSO means the `token_ttl \u003d 0` which is non-sense.  
I think these options are bad and we should fix them before we merge.\n\nI think a different three options would be better:\n\n```\nnum_token # any \u003e 1 means \"on\"\nretry_interval # any \u003e\u003d zero is fine\ntoken_ttl # any value \u003e 0 means what you expect, we can document ttl\u003d0 is weird.\n```\n\nThen we don\u0027t need to explicitly configure on, we just set `num_token \u003d 0` as the default which means \"always use cooperative-cache-populator, but if the cache_key is not in cache or we want to skip - then we\u0027re going to go direct to the backend w/o having to check for our token number\"","commit_id":"01bf2f6fd030ee8285a6b1137432ba83af818884"},{"author":{"_account_id":15343,"name":"Tim Burke","email":"tburke@nvidia.com","username":"tburke"},"change_message_id":"dcf796a3770999844b9fd146eeff99d5a38d757b","unresolved":true,"context_lines":[{"line_number":254,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":255,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":256,"context_line":"            config_true_value(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":257,"context_line":"                                       DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":258,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":259,"context_line":"            float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":260,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"}],"source_content_type":"text/x-python","patch_set":30,"id":"a802e5ee_9a21e10f","line":257,"in_reply_to":"dda5f22b_4f470d54","updated":"2025-05-05 20:42:46.000000000","message":"We do it reasonably often as a way to minimize our config footprint -- especially if there\u0027s some reasonable value for a continuous config option that would essentially mean \"off\". 
See, for example,\n\n- `error_suppression_interval`\n- `*_skip_cache_pct`\n\nWe even do it sometimes when the \"off\" value isn\u0027t so reasonable:\n\n- `config_reload_interval`\n- `cooperative_period`\n\n(I suppose `concurrent_gets` would be a decent counter-example, though -- `concurrency_timeout \u003e\u003d node_timeout` could reasonably easily be used to mean `concurrent_gets \u003d false`...)\n\nOTOH, maybe it doesn\u0027t matter as much if we\u0027re defaulting to enabling the new feature...\n\n---\n\nOK, so experimentally, if I configure\n```\nnamespace_cache_use_token \u003d true\nnamespace_cache_token_retry_interval \u003d 0\n```\nand restart memcache and proxy-server, I see... the first request populates cache, and all others take from it! I always forget about how setting a ttl of 0 for memcache means \"keep it forever\"... but I\u0027m not sure how obvious that outcome would be to operators. Do we actually *want* these to be able to be cached indefinitely? And unlike with `recheck_*_existence` (which also has this \"0 means cache indefinitely\" behavior), there isn\u0027t any good way to purge it from the cache (e.g., by doing a POST to the container).","commit_id":"01bf2f6fd030ee8285a6b1137432ba83af818884"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"f333292d1c46e5b5a47e14f815aa5d9ae52b68a9","unresolved":true,"context_lines":[{"line_number":254,"context_line":"            conf.get(\u0027account_existence_skip_cache_pct\u0027, 0))"},{"line_number":255,"context_line":"        self.namespace_cache_use_token \u003d \\"},{"line_number":256,"context_line":"            config_true_value(conf.get(\u0027namespace_cache_use_token\u0027,"},{"line_number":257,"context_line":"                                       DEFAULT_NAMESPACE_CACHE_USE_TOKEN))"},{"line_number":258,"context_line":"        self.namespace_cache_token_retry_interval \u003d \\"},{"line_number":259,"context_line":"        
    float(conf.get(\u0027namespace_cache_token_retry_interval\u0027,"},{"line_number":260,"context_line":"                  DEFAULT_NAMESPACE_CACHE_TOKEN_RETRY_INTERVAL))"}],"source_content_type":"text/x-python","patch_set":30,"id":"dda5f22b_4f470d54","line":257,"in_reply_to":"f4828322_87cc1fbf","updated":"2024-10-24 04:31:54.000000000","message":"Yes, it\u0027s doable, but I wonder if it might be a bit hacky. what do you think?","commit_id":"01bf2f6fd030ee8285a6b1137432ba83af818884"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":54,"context_line":"from swift.obj import expirer"},{"line_number":55,"context_line":""},{"line_number":56,"context_line":"DEFAULT_NAMESPACE_AVG_BACKEND_FETCH_TIME \u003d 0.3  # seconds"},{"line_number":57,"context_line":"DEFAULT_NAMESPACE_CACHE_TOKENS_PER_SESSION \u003d 3  # 3 tokens per session"},{"line_number":58,"context_line":""},{"line_number":59,"context_line":""},{"line_number":60,"context_line":"# List of entry points for mandatory middlewares."}],"source_content_type":"text/x-python","patch_set":45,"id":"0275d427_c76f1803","line":57,"updated":"2025-05-13 22:06:08.000000000","message":"on by default baby!!!","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":54,"context_line":"from swift.obj import expirer"},{"line_number":55,"context_line":""},{"line_number":56,"context_line":"DEFAULT_NAMESPACE_AVG_BACKEND_FETCH_TIME \u003d 0.3  # seconds"},{"line_number":57,"context_line":"DEFAULT_NAMESPACE_CACHE_TOKENS_PER_SESSION \u003d 3  # 3 tokens per 
session"},{"line_number":58,"context_line":""},{"line_number":59,"context_line":""},{"line_number":60,"context_line":"# List of entry points for mandatory middlewares."}],"source_content_type":"text/x-python","patch_set":45,"id":"10c5cdee_fad57683","line":57,"in_reply_to":"0275d427_c76f1803","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"}],"test/unit/proxy/controllers/test_base.py":[{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":203,"context_line":"    def test_get_namespaces_from_cache_disabled(self):"},{"line_number":204,"context_line":"        cache_key \u003d \u0027shard-updating-v2/a/c/\u0027"},{"line_number":205,"context_line":"        req \u003d Request.blank(\u0027a/c\u0027)"},{"line_number":206,"context_line":"        actual \u003d get_namespaces_from_cache(req, cache_key, 0)"},{"line_number":207,"context_line":"        self.assertEqual((None, \u0027disabled\u0027), actual)"},{"line_number":208,"context_line":""},{"line_number":209,"context_line":"    def test_get_namespaces_from_cache_miss(self):"}],"source_content_type":"text/x-python","patch_set":39,"id":"74a1367e_b0b455de","line":206,"updated":"2025-05-13 22:06:08.000000000","message":"this very small unit doesn\u0027t need to be tested this way b/c it\u0027s never used this way.\n\nYou can still write data to a sharded container with cache disabled - you just get a direct-to-container-includes\u003dobj query for every PUT.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":203,"context_line":"    def 
test_get_namespaces_from_cache_disabled(self):"},{"line_number":204,"context_line":"        cache_key \u003d \u0027shard-updating-v2/a/c/\u0027"},{"line_number":205,"context_line":"        req \u003d Request.blank(\u0027a/c\u0027)"},{"line_number":206,"context_line":"        actual \u003d get_namespaces_from_cache(req, cache_key, 0)"},{"line_number":207,"context_line":"        self.assertEqual((None, \u0027disabled\u0027), actual)"},{"line_number":208,"context_line":""},{"line_number":209,"context_line":"    def test_get_namespaces_from_cache_miss(self):"}],"source_content_type":"text/x-python","patch_set":39,"id":"79819aeb_2876aaf2","line":206,"in_reply_to":"74a1367e_b0b455de","updated":"2025-05-30 22:35:41.000000000","message":"test case removed with regard to changes for other comments.","commit_id":"65d0b089553de97b003b9956853d25dcb3590903"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":false,"context_lines":[{"line_number":204,"context_line":"        cache_key \u003d \u0027shard-updating-v2/a/c/\u0027"},{"line_number":205,"context_line":"        req \u003d Request.blank(\u0027a/c\u0027)"},{"line_number":206,"context_line":"        actual \u003d get_namespaces_from_cache(req, cache_key, 0)"},{"line_number":207,"context_line":"        self.assertEqual((None, \u0027disabled\u0027), actual)"},{"line_number":208,"context_line":""},{"line_number":209,"context_line":"    def test_get_namespaces_from_cache_miss(self):"},{"line_number":210,"context_line":"        cache_key \u003d \u0027shard-updating-v2/a/c/\u0027"}],"source_content_type":"text/x-python","patch_set":56,"id":"c67c2010_ea791edf","side":"PARENT","line":207,"updated":"2025-09-25 22:24:36.000000000","message":"good riddance!","commit_id":"b74296ef8a4902726852bae1a0e80eb15061efa8"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":271,"context_line":"        ns_bound_list \u003d NamespaceBoundList([[\u0027\u0027, \u0027sr1\u0027], [\u0027k\u0027, \u0027sr2\u0027]])"},{"line_number":272,"context_line":"        req \u003d Request.blank(\u0027a/c\u0027)"},{"line_number":273,"context_line":"        actual \u003d set_namespaces_in_cache(req, cache_key, ns_bound_list, 123)"},{"line_number":274,"context_line":"        self.assertEqual(\u0027disabled\u0027, actual)"},{"line_number":275,"context_line":"        self.assertEqual({cache_key: ns_bound_list},"},{"line_number":276,"context_line":"                         req.environ[\u0027swift.infocache\u0027])"},{"line_number":277,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"c612438c_e88d62a9","line":274,"updated":"2025-09-25 22:24:36.000000000","message":"I think the drive-by is an improvement despite the inconsistency between get/set b/c I think as a maintainer I don\u0027t want to have to think about the \"well, what sharding about when cache is disabled\" behavior because in prod you ALWAYS have cache enabled or else sharding doesn\u0027t really \"work\" (it just overloads your metadata servers if there\u0027s any kind of respectable client request rate)","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":271,"context_line":"        ns_bound_list \u003d NamespaceBoundList([[\u0027\u0027, \u0027sr1\u0027], [\u0027k\u0027, \u0027sr2\u0027]])"},{"line_number":272,"context_line":"        req \u003d Request.blank(\u0027a/c\u0027)"},{"line_number":273,"context_line":"        actual \u003d set_namespaces_in_cache(req, cache_key, 
ns_bound_list, 123)"},{"line_number":274,"context_line":"        self.assertEqual(\u0027disabled\u0027, actual)"},{"line_number":275,"context_line":"        self.assertEqual({cache_key: ns_bound_list},"},{"line_number":276,"context_line":"                         req.environ[\u0027swift.infocache\u0027])"},{"line_number":277,"context_line":""}],"source_content_type":"text/x-python","patch_set":56,"id":"a557dab5_fd24a0db","line":274,"in_reply_to":"c612438c_e88d62a9","updated":"2025-09-29 18:14:34.000000000","message":"Acknowledged","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"d81db38e58975873ba863b00885143bf9942bd42","unresolved":true,"context_lines":[{"line_number":316,"context_line":"            set_namespaces_in_cache(req, cache_key, ns_bound_list, 123)"},{"line_number":317,"context_line":"        self.assertIn(\u0027shard-updating cache should use \u0027"},{"line_number":318,"context_line":"                      \u0027CooperativeNamespaceCachePopulator\u0027,"},{"line_number":319,"context_line":"                      str(cm.exception))"},{"line_number":320,"context_line":""},{"line_number":321,"context_line":"    def test_get_info_zero_recheck(self):"},{"line_number":322,"context_line":"        mock_cache \u003d mock.Mock()"}],"source_content_type":"text/x-python","patch_set":58,"id":"3e774c59_6b246eaa","line":319,"updated":"2025-09-29 19:57:52.000000000","message":"Hopefully folks agree this is a reasonable behavior to merge; I could imagine some out-of-tree code might be surprised by this new behavior from `set_namespaces_in_cache` - I put the ValueError in there mostly to debug my understanding that we don\u0027t use that method for updating ranges anymore (at least not in a tested use-case 
in-tree)","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"}],"test/unit/proxy/controllers/test_obj.py":[{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":8035,"context_line":"        self.assertEqual("},{"line_number":8036,"context_line":"            [\u0027format\u003djson\u0027, \u0027includes\u003d\u0027 + quote(self.item), \u0027states\u003dupdating\u0027],"},{"line_number":8037,"context_line":"            params"},{"line_number":8038,"context_line":"        )"},{"line_number":8039,"context_line":"        captured_hdrs \u003d captured[1][\u0027headers\u0027]"},{"line_number":8040,"context_line":"        self.assertEqual(\u0027shard\u0027, captured_hdrs.get(\u0027X-Backend-Record-Type\u0027))"},{"line_number":8041,"context_line":"        self.assertEqual(\u0027namespace\u0027,"}],"source_content_type":"text/x-python","patch_set":45,"id":"9ac8f274_02d70e46","line":8038,"updated":"2025-05-13 22:06:08.000000000","message":"that\u0027s actually quite nice\n\nthis seems to have been \"fixed\" as a side-effect of falling into the \"updating_shard_recheck_existence\" handling when memcache is disabled.\n\nIt\u0027s called out as drive-by in the commit and I think that\u0027s sufficient.  
KUDOS.","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":8035,"context_line":"        self.assertEqual("},{"line_number":8036,"context_line":"            [\u0027format\u003djson\u0027, \u0027includes\u003d\u0027 + quote(self.item), \u0027states\u003dupdating\u0027],"},{"line_number":8037,"context_line":"            params"},{"line_number":8038,"context_line":"        )"},{"line_number":8039,"context_line":"        captured_hdrs \u003d captured[1][\u0027headers\u0027]"},{"line_number":8040,"context_line":"        self.assertEqual(\u0027shard\u0027, captured_hdrs.get(\u0027X-Backend-Record-Type\u0027))"},{"line_number":8041,"context_line":"        self.assertEqual(\u0027namespace\u0027,"}],"source_content_type":"text/x-python","patch_set":45,"id":"442a6037_28436d73","line":8038,"in_reply_to":"9ac8f274_02d70e46","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"d81db38e58975873ba863b00885143bf9942bd42","unresolved":true,"context_lines":[{"line_number":8241,"context_line":"    def setUp(self):"},{"line_number":8242,"context_line":"        super(TestCooperativeToken, self).setUp()"},{"line_number":8243,"context_line":"        # Import needed modules from test_server.py context"},{"line_number":8244,"context_line":"        from test.debug_logger import debug_labeled_statsd_client"},{"line_number":8245,"context_line":""},{"line_number":8246,"context_line":"        conf \u003d {"},{"line_number":8247,"context_line":"            \u0027log_statsd_host\u0027: 
\u0027host\u0027,"}],"source_content_type":"text/x-python","patch_set":58,"id":"3762eb49_9a11e25e","line":8244,"updated":"2025-09-29 19:57:52.000000000","message":"no good reason for this import to be lazy - I only noticed after I pushed the sq commit: AI slop; sorry.","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"2b5a4ed68644305640d186f4f58b6055847c1339","unresolved":false,"context_lines":[{"line_number":8241,"context_line":"    def setUp(self):"},{"line_number":8242,"context_line":"        super(TestCooperativeToken, self).setUp()"},{"line_number":8243,"context_line":"        # Import needed modules from test_server.py context"},{"line_number":8244,"context_line":"        from test.debug_logger import debug_labeled_statsd_client"},{"line_number":8245,"context_line":""},{"line_number":8246,"context_line":"        conf \u003d {"},{"line_number":8247,"context_line":"            \u0027log_statsd_host\u0027: \u0027host\u0027,"}],"source_content_type":"text/x-python","patch_set":58,"id":"1fb45dee_0be2ca0e","line":8244,"in_reply_to":"3762eb49_9a11e25e","updated":"2025-09-30 17:41:34.000000000","message":"962609: test: move import to top of file | https://review.opendev.org/c/openstack/swift/+/962609","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"d81db38e58975873ba863b00885143bf9942bd42","unresolved":true,"context_lines":[{"line_number":8980,"context_line":"        cache_key \u003d \u0027shard-updating-v2/a/c\u0027"},{"line_number":8981,"context_line":"        # TODO when NONE of the 3 token winners succeed, there\u0027s a LOT of"},{"line_number":8982,"context_line":"        # backend requests!"},{"line_number":8983,"context_line":"        failures \u003d random.randint(1, 
4)"},{"line_number":8984,"context_line":"        failures_injected \u003d 0"},{"line_number":8985,"context_line":""},{"line_number":8986,"context_line":"        def delayed_fetch_backend(self):"}],"source_content_type":"text/x-python","patch_set":58,"id":"49e94375_bc305f1b","line":8983,"updated":"2025-09-29 19:57:52.000000000","message":"I don\u0027t understand:\n\n```\n\u003e\u003e\u003e random.randint(1, 4)\n1\n\u003e\u003e\u003e random.randint(1, 4)\n4\n\u003e\u003e\u003e random.randint(1, 4)\n1\n\u003e\u003e\u003e random.randint(1, 4)\n2\n```\n\nfor me this test as written is not reliable:\n\n```\nfor i in {1..10}; do pytest swift/test/unit/proxy/controllers/test_obj.py -k test_get_backend_updating_shard_concurrent_reqs_with_failures; if [ $? -ne 0 ]; then break; fi; done\n```\n\nthis works better OMM\n\n```\ndiff --git a/test/unit/proxy/controllers/test_obj.py b/test/unit/proxy/controllers/test_obj.py\nindex 5a61815bc..ff533a83f 100644\n--- a/test/unit/proxy/controllers/test_obj.py\n+++ b/test/unit/proxy/controllers/test_obj.py\n@@ -8980,7 +8980,7 @@ class TestCooperativeToken(BaseObjectControllerMixin, unittest.TestCase):\n         cache_key \u003d \u0027shard-updating-v2/a/c\u0027\n         # TODO when NONE of the 3 token winners succeed, there\u0027s a LOT of\n         # backend requests!\n-        failures \u003d random.randint(1, 4)\n+        failures \u003d random.randint(1, 2)\n         failures_injected \u003d 0\n \n         def delayed_fetch_backend(self):\n@@ -9032,7 +9032,6 @@ class TestCooperativeToken(BaseObjectControllerMixin, unittest.TestCase):\n             pool.waitall()\n \n         stats \u003d self.app.logger.statsd_client.get_stats_counts()\n-        \"\"\"\n         expected \u003d {\n             \u0027account.info.cache.miss.200\u0027: num_processes,\n             \u0027account.info.infocache.hit\u0027: num_processes,\n@@ -9044,7 +9043,6 @@ class TestCooperativeToken(BaseObjectControllerMixin, unittest.TestCase):\n             
\u0027object.shard_updating.cache.miss\u0027: num_processes - 3,\n         }\n         self.assertEqual(expected, stats)\n-        \"\"\"\n \n         stats \u003d self.app.statsd.get_labeled_stats_counts()\n         self.assertEqual({\n```","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"2b5a4ed68644305640d186f4f58b6055847c1339","unresolved":false,"context_lines":[{"line_number":8980,"context_line":"        cache_key \u003d \u0027shard-updating-v2/a/c\u0027"},{"line_number":8981,"context_line":"        # TODO when NONE of the 3 token winners succeed, there\u0027s a LOT of"},{"line_number":8982,"context_line":"        # backend requests!"},{"line_number":8983,"context_line":"        failures \u003d random.randint(1, 4)"},{"line_number":8984,"context_line":"        failures_injected \u003d 0"},{"line_number":8985,"context_line":""},{"line_number":8986,"context_line":"        def delayed_fetch_backend(self):"}],"source_content_type":"text/x-python","patch_set":58,"id":"27e8e0a5_1f23c9f2","line":8983,"in_reply_to":"49e94375_bc305f1b","updated":"2025-09-30 17:41:34.000000000","message":"Done","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"d81db38e58975873ba863b00885143bf9942bd42","unresolved":true,"context_lines":[{"line_number":9044,"context_line":"            \u0027object.shard_updating.cache.miss\u0027: num_processes - 3,"},{"line_number":9045,"context_line":"        }"},{"line_number":9046,"context_line":"        self.assertEqual(expected, stats)"},{"line_number":9047,"context_line":"        \"\"\""},{"line_number":9048,"context_line":""},{"line_number":9049,"context_line":"        stats \u003d self.app.statsd.get_labeled_stats_counts()"},{"line_number":9050,"context_line":"        
self.assertEqual({"}],"source_content_type":"text/x-python","patch_set":58,"id":"b90588f2_978c54c7","line":9047,"updated":"2025-09-29 19:57:52.000000000","message":"wait, what are we doing here - did I just comment this for debugging?","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"2b5a4ed68644305640d186f4f58b6055847c1339","unresolved":false,"context_lines":[{"line_number":9044,"context_line":"            \u0027object.shard_updating.cache.miss\u0027: num_processes - 3,"},{"line_number":9045,"context_line":"        }"},{"line_number":9046,"context_line":"        self.assertEqual(expected, stats)"},{"line_number":9047,"context_line":"        \"\"\""},{"line_number":9048,"context_line":""},{"line_number":9049,"context_line":"        stats \u003d self.app.statsd.get_labeled_stats_counts()"},{"line_number":9050,"context_line":"        self.assertEqual({"}],"source_content_type":"text/x-python","patch_set":58,"id":"e5b8c93a_85bf216f","line":9047,"in_reply_to":"b90588f2_978c54c7","updated":"2025-09-30 17:41:34.000000000","message":"Done","commit_id":"b1e8c7c3e03f5d435695571a0bac30c3348e3eb5"}],"test/unit/proxy/test_server.py":[{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"e74e14c0667c191e8b073842577da9280d7e7a60","unresolved":true,"context_lines":[{"line_number":4910,"context_line":"                              \u0027account.info.infocache.hit\u0027: 2,"},{"line_number":4911,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":4912,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":4913,"context_line":"                              \u0027object.shard_updating.cache.skip\u0027: 1,"},{"line_number":4914,"context_line":"                              
\u0027object.shard_updating.cache.set_error\u0027: 1},"},{"line_number":4915,"context_line":"                             stats)"},{"line_number":4916,"context_line":"            # verify statsd prefix is not mutated"}],"source_content_type":"text/x-python","patch_set":6,"id":"2c20325f_91600252","line":4913,"updated":"2024-02-26 06:29:35.000000000","message":"will fix this after interface is decided.\n\nshould just simply use 200 when ``get_cache_state\u003d\u003dskip`` and response is None due to set error exception.\n```        record_cache_op_metrics(\n            self.logger, self.server_type.lower(), \u0027shard_updating\u0027,\n            get_cache_state, response)```","commit_id":"98003eb099a8ad9bd061e75d7fa4dc49e2b07305"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":true,"context_lines":[{"line_number":4910,"context_line":"                              \u0027account.info.infocache.hit\u0027: 2,"},{"line_number":4911,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":4912,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":4913,"context_line":"                              \u0027object.shard_updating.cache.skip\u0027: 1,"},{"line_number":4914,"context_line":"                              \u0027object.shard_updating.cache.set_error\u0027: 1},"},{"line_number":4915,"context_line":"                             stats)"},{"line_number":4916,"context_line":"            # verify statsd prefix is not mutated"}],"source_content_type":"text/x-python","patch_set":6,"id":"499e440f_1a66d492","line":4913,"in_reply_to":"2c20325f_91600252","updated":"2024-03-15 16:01:16.000000000","message":"I\u0027m not sure I\u0027m reading this diff correctly; it looks like the existing skip metric doesn\u0027t include the 
backend-response status_int (I thought they did?)","commit_id":"98003eb099a8ad9bd061e75d7fa4dc49e2b07305"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"cd691043fa85bff09ca04ad5d2d950847cb601b5","unresolved":false,"context_lines":[{"line_number":4910,"context_line":"                              \u0027account.info.infocache.hit\u0027: 2,"},{"line_number":4911,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":4912,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":4913,"context_line":"                              \u0027object.shard_updating.cache.skip\u0027: 1,"},{"line_number":4914,"context_line":"                              \u0027object.shard_updating.cache.set_error\u0027: 1},"},{"line_number":4915,"context_line":"                             stats)"},{"line_number":4916,"context_line":"            # verify statsd prefix is not mutated"}],"source_content_type":"text/x-python","patch_set":6,"id":"0b5b71d2_4273da3e","line":4913,"in_reply_to":"499e440f_1a66d492","updated":"2024-03-20 20:39:24.000000000","message":"existing skip metric does include the backend-response status_int. 
That issue was related to a previous implementation, not relevant anymore.","commit_id":"98003eb099a8ad9bd061e75d7fa4dc49e2b07305"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"e74e14c0667c191e8b073842577da9280d7e7a60","unresolved":true,"context_lines":[{"line_number":4933,"context_line":"                        \u0027Host\u0027: \u0027localhost:80\u0027,"},{"line_number":4934,"context_line":"                        \u0027Referer\u0027: \u0027%s http://localhost/v1/a/c/o\u0027 % method,"},{"line_number":4935,"context_line":"                        \u0027X-Backend-Storage-Policy-Index\u0027: \u00271\u0027,"},{"line_number":4936,"context_line":"                        # \u0027X-Backend-Quoted-Container-Path\u0027: shard_ranges[1].name"},{"line_number":4937,"context_line":"                    },"},{"line_number":4938,"context_line":"                }"},{"line_number":4939,"context_line":"                self._check_request(backend_request, **expectations)"}],"source_content_type":"text/x-python","patch_set":6,"id":"5c0822d7_4a1646f2","line":4936,"updated":"2024-02-26 06:29:35.000000000","message":"still need to debug this change","commit_id":"98003eb099a8ad9bd061e75d7fa4dc49e2b07305"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"7f2e28ec5958831a968574f848fcfebb2ac58987","unresolved":false,"context_lines":[{"line_number":4933,"context_line":"                        \u0027Host\u0027: \u0027localhost:80\u0027,"},{"line_number":4934,"context_line":"                        \u0027Referer\u0027: \u0027%s http://localhost/v1/a/c/o\u0027 % method,"},{"line_number":4935,"context_line":"                        \u0027X-Backend-Storage-Policy-Index\u0027: \u00271\u0027,"},{"line_number":4936,"context_line":"                        # \u0027X-Backend-Quoted-Container-Path\u0027: 
shard_ranges[1].name"},{"line_number":4937,"context_line":"                    },"},{"line_number":4938,"context_line":"                }"},{"line_number":4939,"context_line":"                self._check_request(backend_request, **expectations)"}],"source_content_type":"text/x-python","patch_set":6,"id":"74180654_d397da53","line":4936,"in_reply_to":"5c0822d7_4a1646f2","updated":"2024-03-15 16:01:16.000000000","message":"the gate only seems to be failing on py2; OMM this test module passes.","commit_id":"98003eb099a8ad9bd061e75d7fa4dc49e2b07305"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":5038,"context_line":"    @patch_policies(["},{"line_number":5039,"context_line":"        StoragePolicy(0, \u0027zero\u0027, is_default\u003dTrue, object_ring\u003dFakeRing()),"},{"line_number":5040,"context_line":"        StoragePolicy(1, \u0027one\u0027, object_ring\u003dFakeRing()),"},{"line_number":5041,"context_line":"    ])"},{"line_number":5042,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_acquired(self):"},{"line_number":5043,"context_line":"        # verify that the request to get updating shard from the container"},{"line_number":5044,"context_line":"        # backend works with cooperative token acquired."}],"source_content_type":"text/x-python","patch_set":16,"id":"cd575c73_c67d2802","line":5041,"updated":"2024-04-23 01:43:15.000000000","message":"as best I can tell this is in the `TestReplicatedObjectController` TestCase.  
I\u0027m not sure what we\u0027re trying to accomplish with patching policies, or if it\u0027s strictly a good thing that we only test this behavior on replicated policies (not that I assume it would make a difference in proxy.controllers.obj)\n\nI think you could probably move these tests to `BaseTestObjectController` but in all fairness I don\u0027t actually understand how `TestECObjectController` actually ends up with `self.policy \u003d self.ec_policy \u003d POLICIES[3]`","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"0cb30e977017313d387a42106e5a4fa19dbceabf","unresolved":false,"context_lines":[{"line_number":5038,"context_line":"    @patch_policies(["},{"line_number":5039,"context_line":"        StoragePolicy(0, \u0027zero\u0027, is_default\u003dTrue, object_ring\u003dFakeRing()),"},{"line_number":5040,"context_line":"        StoragePolicy(1, \u0027one\u0027, object_ring\u003dFakeRing()),"},{"line_number":5041,"context_line":"    ])"},{"line_number":5042,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_acquired(self):"},{"line_number":5043,"context_line":"        # verify that the request to get updating shard from the container"},{"line_number":5044,"context_line":"        # backend works with cooperative token acquired."}],"source_content_type":"text/x-python","patch_set":16,"id":"b4ed602e_9b2792c0","line":5041,"in_reply_to":"cd575c73_c67d2802","updated":"2024-09-25 16:36:52.000000000","message":"I might have been wrong that these tests can move without additional test infra investment; so probably this isn\u0027t a blocker.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":5065,"context_line":"                # Preset \u0027token_key\u0027 to be value of 1, then make this request"},{"line_number":5066,"context_line":"                # getting updating shard to be the second to acquire a token."},{"line_number":5067,"context_line":"                # Otherwise, it would be the first."},{"line_number":5068,"context_line":"                req.environ[\u0027swift.cache\u0027].incr(token_key)"},{"line_number":5069,"context_line":""},{"line_number":5070,"context_line":"            # we want the container_info response to say policy index of 1 and"},{"line_number":5071,"context_line":"            # sharding state"}],"source_content_type":"text/x-python","patch_set":16,"id":"def07792_5f940d4a","line":5068,"updated":"2024-04-23 01:43:15.000000000","message":"oh ok, I had trouble with this comment.  \"Preset \u0027token_key\u0027 to be value of 1 ... 
Otherwise, it would be the first\" meaning half the time this test is testing the \"I\u0027m the first one to request this token\" case and the rest of the time it\u0027s testing \"I\u0027m the second one to request this token\" - but the behavior in either case is the same; you win - go to the backend.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":5065,"context_line":"                # Preset \u0027token_key\u0027 to be value of 1, then make this request"},{"line_number":5066,"context_line":"                # getting updating shard to be the second to acquire a token."},{"line_number":5067,"context_line":"                # Otherwise, it would be the first."},{"line_number":5068,"context_line":"                req.environ[\u0027swift.cache\u0027].incr(token_key)"},{"line_number":5069,"context_line":""},{"line_number":5070,"context_line":"            # we want the container_info response to say policy index of 1 and"},{"line_number":5071,"context_line":"            # sharding state"}],"source_content_type":"text/x-python","patch_set":16,"id":"2ac6277d_c5746415","line":5068,"in_reply_to":"def07792_5f940d4a","updated":"2024-04-30 05:35:34.000000000","message":"ACK. 
I modified the wording to make it more clear.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":5098,"context_line":"                              \u0027object.shard_updating.cache.miss.200\u0027: 1,"},{"line_number":5099,"context_line":"                              \u0027object.shard_updating.cache.set\u0027: 1,"},{"line_number":5100,"context_line":"                              \u0027token.shard_updating.backend_reqs\u0027: 1,"},{"line_number":5101,"context_line":"                              \u0027token.shard_updating.done_token_reqs\u0027: 1},"},{"line_number":5102,"context_line":"                             stats)"},{"line_number":5103,"context_line":"            self.assertEqual([], self.app.logger.log_dict[\u0027set_statsd_prefix\u0027])"},{"line_number":5104,"context_line":"            info_lines \u003d self.logger.get_lines_for_level(\u0027info\u0027)"}],"source_content_type":"text/x-python","patch_set":16,"id":"a96625eb_dfd6d16e","line":5101,"updated":"2024-04-23 01:43:15.000000000","message":"these are the happy path stats, a miss.200 + set and new token.backend_reqs and token.done_token_reqs","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":5098,"context_line":"                              \u0027object.shard_updating.cache.miss.200\u0027: 1,"},{"line_number":5099,"context_line":"                              \u0027object.shard_updating.cache.set\u0027: 1,"},{"line_number":5100,"context_line":"                              \u0027token.shard_updating.backend_reqs\u0027: 
1,"},{"line_number":5101,"context_line":"                              \u0027token.shard_updating.done_token_reqs\u0027: 1},"},{"line_number":5102,"context_line":"                             stats)"},{"line_number":5103,"context_line":"            self.assertEqual([], self.app.logger.log_dict[\u0027set_statsd_prefix\u0027])"},{"line_number":5104,"context_line":"            info_lines \u003d self.logger.get_lines_for_level(\u0027info\u0027)"}],"source_content_type":"text/x-python","patch_set":16,"id":"95ffea7d_6de8f64e","line":5101,"in_reply_to":"a96625eb_dfd6d16e","updated":"2024-05-03 05:51:16.000000000","message":"Acknowledged","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":5145,"context_line":"        # backend will be served out of memcached when other requests have"},{"line_number":5146,"context_line":"        # grabbed all available cooperative tokens."},{"line_number":5147,"context_line":"        # reset the router post patch_policies"},{"line_number":5148,"context_line":"        conf \u003d {\u0027namespace_cache_use_token\u0027: True}"},{"line_number":5149,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5150,"context_line":"            conf,"},{"line_number":5151,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":16,"id":"e0c8f2d4_8cf8af9c","line":5148,"updated":"2024-04-23 01:43:15.000000000","message":"OMM this test takes 4.89s.  
It\u0027s significantly smaller if I decrease the value of token_retry_interval.\n\nComparatively the `test_get_backend_updating_shard_with_cooperative_token_acquired` only takes 1.8s when run by itself.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"91683d17796b106573db8976013204c6d619fe61","unresolved":false,"context_lines":[{"line_number":5145,"context_line":"        # backend will be served out of memcached when other requests have"},{"line_number":5146,"context_line":"        # grabbed all available cooperative tokens."},{"line_number":5147,"context_line":"        # reset the router post patch_policies"},{"line_number":5148,"context_line":"        conf \u003d {\u0027namespace_cache_use_token\u0027: True}"},{"line_number":5149,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5150,"context_line":"            conf,"},{"line_number":5151,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":16,"id":"1499dc7e_4c9210df","line":5148,"in_reply_to":"e0c8f2d4_8cf8af9c","updated":"2024-04-30 05:35:34.000000000","message":"I added config of ``\u0027namespace_cache_token_retry_interval\u0027: 0.005`` to this test, now it takes ~0.42s on a few runs.","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"382e2ac0eddfd3eeb9c76438e530f5c7618d3920","unresolved":true,"context_lines":[{"line_number":5214,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":5215,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":5216,"context_line":"                              \u0027object.shard_updating.cache.miss\u0027: 
1,"},{"line_number":5217,"context_line":"                              \u0027token.shard_updating.cache_served_reqs\u0027: 1},"},{"line_number":5218,"context_line":"                             stats)"},{"line_number":5219,"context_line":"            self.assertEqual([], self.app.logger.log_dict[\u0027set_statsd_prefix\u0027])"},{"line_number":5220,"context_line":"            debug_lines \u003d self.logger.get_lines_for_level(\u0027debug\u0027)"}],"source_content_type":"text/x-python","patch_set":16,"id":"b083bd4c_108da412","line":5217,"updated":"2024-04-23 01:43:15.000000000","message":"these are the happy path stats - a miss with no status and cache_served_reqs","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"84240d7cf4cf2d50ffaa7a06493d64c4ad741191","unresolved":false,"context_lines":[{"line_number":5214,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":5215,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":5216,"context_line":"                              \u0027object.shard_updating.cache.miss\u0027: 1,"},{"line_number":5217,"context_line":"                              \u0027token.shard_updating.cache_served_reqs\u0027: 1},"},{"line_number":5218,"context_line":"                             stats)"},{"line_number":5219,"context_line":"            self.assertEqual([], self.app.logger.log_dict[\u0027set_statsd_prefix\u0027])"},{"line_number":5220,"context_line":"            debug_lines \u003d self.logger.get_lines_for_level(\u0027debug\u0027)"}],"source_content_type":"text/x-python","patch_set":16,"id":"8fad1501_fa767ea9","line":5217,"in_reply_to":"b083bd4c_108da412","updated":"2024-05-03 05:51:16.000000000","message":"Done","commit_id":"245bf557b533100e44970f36bbe17dd8dc2f8baa"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":4412,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":4413,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":4414,"context_line":"                              \u0027object.shard_updating.cache.miss.200\u0027: 1,"},{"line_number":4415,"context_line":"                              \u0027object.shard_updating.cache.set\u0027: 1},"},{"line_number":4416,"context_line":"                             stats)"},{"line_number":4417,"context_line":"            self.assertEqual([], self.app.logger.log_dict[\u0027set_statsd_prefix\u0027])"},{"line_number":4418,"context_line":"            info_lines \u003d self.logger.get_lines_for_level(\u0027info\u0027)"}],"source_content_type":"text/x-python","patch_set":45,"id":"11a48633_c440c60b","side":"PARENT","line":4415,"updated":"2025-05-13 22:06:08.000000000","message":"oh interesting - I don\u0027t think we can remove this counter?  Or maybe we\u0027re more lenient if the change we\u0027re forcing on ops is only to add a `.*` to their graph definition?  This problem doesn\u0027t exist for labeled metrics.  
It\u0027s going to be so much better to maintain swift once we agree we\u0027re not going to add any more legacy metrics anymore.","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":false,"context_lines":[{"line_number":4412,"context_line":"                              \u0027container.info.cache.miss.200\u0027: 1,"},{"line_number":4413,"context_line":"                              \u0027container.info.infocache.hit\u0027: 1,"},{"line_number":4414,"context_line":"                              \u0027object.shard_updating.cache.miss.200\u0027: 1,"},{"line_number":4415,"context_line":"                              \u0027object.shard_updating.cache.set\u0027: 1},"},{"line_number":4416,"context_line":"                             stats)"},{"line_number":4417,"context_line":"            self.assertEqual([], self.app.logger.log_dict[\u0027set_statsd_prefix\u0027])"},{"line_number":4418,"context_line":"            info_lines \u003d self.logger.get_lines_for_level(\u0027info\u0027)"}],"source_content_type":"text/x-python","patch_set":45,"id":"5bb2ec4b_4ea18a0c","side":"PARENT","line":4415,"in_reply_to":"11a48633_c440c60b","updated":"2025-09-25 22:24:36.000000000","message":"I don\u0027t remember when or why but at some point we decided we\u0027re ok with a change to legacy metrics.","commit_id":"b5fd2a25492ff3421e6110948bff8a3c005deda9"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":4606,"context_line":"                \u0027shard-updating-v2/a/c\u0027:"},{"line_number":4607,"context_line":"                NamespaceBoundList.parse(shard_ranges)}"},{"line_number":4608,"context_line":"            req \u003d 
Request.blank(\u0027/v1/a/c/o\u0027,"},{"line_number":4609,"context_line":"                                {\u0027swift.cache\u0027: cache,"},{"line_number":4610,"context_line":"                                 \u0027swift.infocache\u0027: infocache},"},{"line_number":4611,"context_line":"                                method\u003dmethod, body\u003d\u0027\u0027,"},{"line_number":4612,"context_line":"                                headers\u003d{\u0027Content-Type\u0027: \u0027text/plain\u0027})"}],"source_content_type":"text/x-python","patch_set":45,"id":"507cd7fb_0fc459a8","line":4609,"updated":"2025-05-13 22:06:08.000000000","message":"I don\u0027t understand why this changed - what was wrong with cache disabled?  Does info cache still work?","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":4606,"context_line":"                \u0027shard-updating-v2/a/c\u0027:"},{"line_number":4607,"context_line":"                NamespaceBoundList.parse(shard_ranges)}"},{"line_number":4608,"context_line":"            req \u003d Request.blank(\u0027/v1/a/c/o\u0027,"},{"line_number":4609,"context_line":"                                {\u0027swift.cache\u0027: cache,"},{"line_number":4610,"context_line":"                                 \u0027swift.infocache\u0027: infocache},"},{"line_number":4611,"context_line":"                                method\u003dmethod, body\u003d\u0027\u0027,"},{"line_number":4612,"context_line":"                                headers\u003d{\u0027Content-Type\u0027: \u0027text/plain\u0027})"}],"source_content_type":"text/x-python","patch_set":45,"id":"69ccf8b0_7141a7bd","line":4609,"in_reply_to":"507cd7fb_0fc459a8","updated":"2025-05-30 22:35:41.000000000","message":"comments on older version code, not related 
anymore.","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5041,"context_line":"        do_test(\u0027PUT\u0027, \u0027sharded\u0027)"},{"line_number":5042,"context_line":""},{"line_number":5043,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_configs(self):"},{"line_number":5044,"context_line":"        conf \u003d {}"},{"line_number":5045,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5046,"context_line":"            conf,"},{"line_number":5047,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":45,"id":"d8e90910_4d70c449","line":5044,"updated":"2025-05-13 22:06:08.000000000","message":"FWIW this seems to be in `TestReplicatedObjectController` but AFAIK has nothing to do with replicated objects per-se - I would have preferred a new TestCase in `proxy.controller.test_obj` since this file is already enormous.\n\n```\nvagrant@saio:~$ wc -l swift/test/unit/proxy/controllers/test_obj.py\n8232 swift/test/unit/proxy/controllers/test_obj.py\nvagrant@saio:~$ wc -l swift/test/unit/proxy/test_server.py \n13176 swift/test/unit/proxy/test_server.py\n```","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"2b5a4ed68644305640d186f4f58b6055847c1339","unresolved":false,"context_lines":[{"line_number":5041,"context_line":"        do_test(\u0027PUT\u0027, \u0027sharded\u0027)"},{"line_number":5042,"context_line":""},{"line_number":5043,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_configs(self):"},{"line_number":5044,"context_line":"        conf \u003d 
{}"},{"line_number":5045,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5046,"context_line":"            conf,"},{"line_number":5047,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":45,"id":"89f679cf_fdaef5d5","line":5044,"in_reply_to":"d8e90910_4d70c449","updated":"2025-09-30 17:41:34.000000000","message":"Done","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5223,"context_line":"        token_key \u003d \"_cache_token/%s\" % cache_key"},{"line_number":5224,"context_line":""},{"line_number":5225,"context_line":"        def do_test(method, sharding_state):"},{"line_number":5226,"context_line":"            retries \u003d [0]"},{"line_number":5227,"context_line":""},{"line_number":5228,"context_line":"            class CustomizedFakeCache(FakeMemcache):"},{"line_number":5229,"context_line":"                def get(self, key, raise_on_error\u003dFalse):"}],"source_content_type":"text/x-python","patch_set":45,"id":"4cf3976c_cb946294","line":5226,"updated":"2025-05-13 22:06:08.000000000","message":"better with nonlocal","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":5223,"context_line":"        token_key \u003d \"_cache_token/%s\" % cache_key"},{"line_number":5224,"context_line":""},{"line_number":5225,"context_line":"        def do_test(method, sharding_state):"},{"line_number":5226,"context_line":"            retries \u003d [0]"},{"line_number":5227,"context_line":""},{"line_number":5228,"context_line":"            class 
CustomizedFakeCache(FakeMemcache)"},{"line_number":5229,"context_line":"                def get(self, key, raise_on_error\u003dFalse):"}],"source_content_type":"text/x-python","patch_set":45,"id":"1ea90f24_7bd0ea2b","line":5226,"in_reply_to":"4cf3976c_cb946294","updated":"2025-05-30 22:35:41.000000000","message":"Done","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5363,"context_line":"                    retries[0] +\u003d 1"},{"line_number":5364,"context_line":"                    if retries[0] \u003c\u003d 2:"},{"line_number":5365,"context_line":"                        return super(CustomizedFakeCache, self).get("},{"line_number":5366,"context_line":"                            \"NOT_EXISTED_YET\")"},{"line_number":5367,"context_line":"                    else:"},{"line_number":5368,"context_line":"                        return super(CustomizedFakeCache, self).get(key)"},{"line_number":5369,"context_line":""}],"source_content_type":"text/x-python","patch_set":45,"id":"136fa231_207da1d3","line":5366,"updated":"2025-05-13 22:06:08.000000000","message":"I would expect there to only be ONE retry because the `time` says after the first sleep of `0.005 * 1.5` we\u0027re already `1s` later which is well past the `0.005 * 10` cutoff so we get ONLY the \"at least one retry\"\n\nok, I got it - unlike `test_fetch_data_req_lacks_enough_retries` this stub has to fake *initial* memcache get that happens outside of the cooperative-cache-populator\n\nretries\u003d1 get before cache populator \u003d\u003e miss\nretries\u003d2 first get after initial sleep \u0026 retry memcache\nretries\u003d3 2nd get after time.time says we slept too long but we\u0027ll try again one last time 
anyway","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":5363,"context_line":"                    retries[0] +\u003d 1"},{"line_number":5364,"context_line":"                    if retries[0] \u003c\u003d 2:"},{"line_number":5365,"context_line":"                        return super(CustomizedFakeCache, self).get("},{"line_number":5366,"context_line":"                            \"NOT_EXISTED_YET\")"},{"line_number":5367,"context_line":"                    else:"},{"line_number":5368,"context_line":"                        return super(CustomizedFakeCache, self).get(key)"},{"line_number":5369,"context_line":""}],"source_content_type":"text/x-python","patch_set":45,"id":"2df16b8a_530a6faa","line":5366,"in_reply_to":"136fa231_207da1d3","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5400,"context_line":""},{"line_number":5401,"context_line":"            with mocked_http_conn(*status_codes, headers\u003dresp_headers,"},{"line_number":5402,"context_line":"                                  body\u003dbody) as fake_conn:"},{"line_number":5403,"context_line":"                with mock.patch(\u0027time.time\u0027, ) as mock_time:"},{"line_number":5404,"context_line":"                    mock_time.side_effect \u003d itertools.count(4000.99, 1.0)"},{"line_number":5405,"context_line":"                    resp \u003d 
req.get_response(self.app)"},{"line_number":5406,"context_line":""}],"source_content_type":"text/x-python","patch_set":45,"id":"ceb962b2_f21da4b2","line":5403,"updated":"2025-05-13 22:06:08.000000000","message":"so the mocking of time here is the only difference between this test and `test_get_backend_updating_shard_wo_cooperative_token_acquired`\n\nI think we\u0027re mocking time too deep and ending up in the hub:\n\n```\n(Pdb) mock_time.mock_calls\n[call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call(),\n call()]\n ```","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":5400,"context_line":""},{"line_number":5401,"context_line":"            with mocked_http_conn(*status_codes, headers\u003dresp_headers,"},{"line_number":5402,"context_line":"                                  body\u003dbody) as fake_conn:"},{"line_number":5403,"context_line":"                with mock.patch(\u0027time.time\u0027, ) as mock_time:"},{"line_number":5404,"context_line":"                    mock_time.side_effect \u003d itertools.count(4000.99, 1.0)"},{"line_number":5405,"context_line":"                    resp \u003d req.get_response(self.app)"},{"line_number":5406,"context_line":""}],"source_content_type":"text/x-python","patch_set":45,"id":"508f40e1_0b0f5a22","line":5403,"in_reply_to":"ceb962b2_f21da4b2","updated":"2025-05-30 22:35:41.000000000","message":"Done","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay 
Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5401,"context_line":"            with mocked_http_conn(*status_codes, headers\u003dresp_headers,"},{"line_number":5402,"context_line":"                                  body\u003dbody) as fake_conn:"},{"line_number":5403,"context_line":"                with mock.patch(\u0027time.time\u0027, ) as mock_time:"},{"line_number":5404,"context_line":"                    mock_time.side_effect \u003d itertools.count(4000.99, 1.0)"},{"line_number":5405,"context_line":"                    resp \u003d req.get_response(self.app)"},{"line_number":5406,"context_line":""},{"line_number":5407,"context_line":"            self.assertEqual(resp.status_int, 202)"}],"source_content_type":"text/x-python","patch_set":45,"id":"8f68e8cb_e914d999","line":5404,"updated":"2025-05-13 22:06:08.000000000","message":"so the 1s increment here is much larger than any multiple of `namespace_avg_backend_fetch_time\u003d0.005`","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":5401,"context_line":"            with mocked_http_conn(*status_codes, headers\u003dresp_headers,"},{"line_number":5402,"context_line":"                                  body\u003dbody) as fake_conn:"},{"line_number":5403,"context_line":"                with mock.patch(\u0027time.time\u0027, ) as mock_time:"},{"line_number":5404,"context_line":"                    mock_time.side_effect \u003d itertools.count(4000.99, 1.0)"},{"line_number":5405,"context_line":"                    resp \u003d req.get_response(self.app)"},{"line_number":5406,"context_line":""},{"line_number":5407,"context_line":"            self.assertEqual(resp.status_int, 
202)"}],"source_content_type":"text/x-python","patch_set":45,"id":"3fc1a774_fd0fb3c3","line":5404,"in_reply_to":"8f68e8cb_e914d999","updated":"2025-05-30 22:35:41.000000000","message":"Acknowledged","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5404,"context_line":"                    mock_time.side_effect \u003d itertools.count(4000.99, 1.0)"},{"line_number":5405,"context_line":"                    resp \u003d req.get_response(self.app)"},{"line_number":5406,"context_line":""},{"line_number":5407,"context_line":"            self.assertEqual(resp.status_int, 202)"},{"line_number":5408,"context_line":"            stats \u003d self.app.logger.statsd_client.get_stats_counts()"},{"line_number":5409,"context_line":"            self.assertEqual({\u0027account.info.cache.miss.200\u0027: 1,"},{"line_number":5410,"context_line":"                              \u0027account.info.infocache.hit\u0027: 1,"}],"source_content_type":"text/x-python","patch_set":45,"id":"81e55d10_819ad9b4","line":5407,"updated":"2025-05-13 22:06:08.000000000","message":"I\u0027m getting `assertEqual(3, retries)` here\n\n```\ndiff --git a/test/unit/proxy/test_server.py b/test/unit/proxy/test_server.py\nindex 091a58bb7..3851f4c2f 100644\n--- a/test/unit/proxy/test_server.py\n+++ b/test/unit/proxy/test_server.py\n@@ -5353,15 +5353,16 @@ class TestReplicatedObjectController(\n         token_key \u003d \"_cache_token/%s\" % cache_key\n \n         def do_test(method, sharding_state):\n-            retries \u003d [0]\n+            retries \u003d 0\n \n             class CustomizedFakeCache(FakeMemcache):\n                 def get(self, key, raise_on_error\u003dFalse):\n                     if key !\u003d cache_key:\n                         return super(CustomizedFakeCache, 
self).get(key)\n \n-                    retries[0] +\u003d 1\n-                    if retries[0] \u003c\u003d 2:\n+                    nonlocal retries\n+                    retries +\u003d 1\n+                    if retries \u003c\u003d 2:\n                         return super(CustomizedFakeCache, self).get(\n                             \"NOT_EXISTED_YET\")\n                     else:\n@@ -5404,6 +5405,7 @@ class TestReplicatedObjectController(\n                     mock_time.side_effect \u003d itertools.count(4000.99, 1.0)\n                     resp \u003d req.get_response(self.app)\n \n+            self.assertEqual(2, retries)\n             self.assertEqual(resp.status_int, 202)\n             stats \u003d self.app.logger.statsd_client.get_stats_counts()\n             self.assertEqual({\u0027account.info.cache.miss.200\u0027: 1,\n```\n\nit\u0027s important to not get `retries` here mixed up with a `retry` in the cooperative-populator; i.e. there\u0027s 3 \"get attempts\", one before we get into cooperative populator followed by the initial sleep/get and one final \"retry\" after we \"wake up late\"","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":false,"context_lines":[{"line_number":5404,"context_line":"                    mock_time.side_effect \u003d itertools.count(4000.99, 1.0)"},{"line_number":5405,"context_line":"                    resp \u003d req.get_response(self.app)"},{"line_number":5406,"context_line":""},{"line_number":5407,"context_line":"            self.assertEqual(resp.status_int, 202)"},{"line_number":5408,"context_line":"            stats \u003d self.app.logger.statsd_client.get_stats_counts()"},{"line_number":5409,"context_line":"            self.assertEqual({\u0027account.info.cache.miss.200\u0027: 1,"},{"line_number":5410,"context_line":"                     
         \u0027account.info.infocache.hit\u0027: 1,"}],"source_content_type":"text/x-python","patch_set":45,"id":"07912384_30763374","line":5407,"in_reply_to":"81e55d10_819ad9b4","updated":"2025-05-30 22:35:41.000000000","message":"Done","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"5b732a1765cb3de3b56ac2c4fb2ead5ffa05328d","unresolved":true,"context_lines":[{"line_number":5468,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_timeout(self):"},{"line_number":5469,"context_line":"        # verify that the request to get updating shard from the container"},{"line_number":5470,"context_line":"        # backend works with cooperative token timeout."},{"line_number":5471,"context_line":"        conf \u003d {\u0027namespace_avg_backend_fetch_time\u0027: 0.001}"},{"line_number":5472,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5473,"context_line":"            conf,"},{"line_number":5474,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":45,"id":"ac62a441_4d8a33fb","line":5471,"updated":"2025-05-13 22:06:08.000000000","message":"i think it\u0027d be better to mock sleep than use a very very small sleep","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"e2f1cdabad03ce3d614fa13b28624dbb521b7d68","unresolved":false,"context_lines":[{"line_number":5468,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_timeout(self):"},{"line_number":5469,"context_line":"        # verify that the request to get updating shard from the container"},{"line_number":5470,"context_line":"        # backend works with cooperative token timeout."},{"line_number":5471,"context_line":"        
conf \u003d {\u0027namespace_avg_backend_fetch_time\u0027: 0.001}"},{"line_number":5472,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5473,"context_line":"            conf,"},{"line_number":5474,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":45,"id":"379697a8_9900f157","line":5471,"in_reply_to":"7ead455b_b72c87d4","updated":"2025-09-23 05:01:47.000000000","message":"Done","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"b3ee26af7f3a45a961bdbcf2ef0808a6606b32b9","unresolved":true,"context_lines":[{"line_number":5468,"context_line":"    def test_get_backend_updating_shard_with_cooperative_token_timeout(self):"},{"line_number":5469,"context_line":"        # verify that the request to get updating shard from the container"},{"line_number":5470,"context_line":"        # backend works with cooperative token timeout."},{"line_number":5471,"context_line":"        conf \u003d {\u0027namespace_avg_backend_fetch_time\u0027: 0.001}"},{"line_number":5472,"context_line":"        self.app \u003d proxy_server.Application("},{"line_number":5473,"context_line":"            conf,"},{"line_number":5474,"context_line":"            logger\u003dself.logger,"}],"source_content_type":"text/x-python","patch_set":45,"id":"7ead455b_b72c87d4","line":5471,"in_reply_to":"ac62a441_4d8a33fb","updated":"2025-05-30 22:35:41.000000000","message":"with the sleeps within cooperative token and proxy server uses the greenthreads and eventlet hubs, I\u0027d like to use real small sleeps to get the test case running closer to the prod, I am not quite sure the behavior for the sleeps with mocked time would be same as sleeps with real time. 
this is also why I use a lot of real sleep in the base patch test cases, only use mock when necessary.","commit_id":"1b10e5043d8dc95e710cfc0c79d6caeb6d7b1899"},{"author":{"_account_id":1179,"name":"Clay Gerrard","email":"clay.gerrard@gmail.com","username":"clay-gerrard"},"change_message_id":"34890a8a035287cb6533a97801528f37e247cf61","unresolved":true,"context_lines":[{"line_number":4278,"context_line":"        # caller can ignore leading path parts"},{"line_number":4279,"context_line":"        self.assertTrue(req[\u0027path\u0027].endswith(path),"},{"line_number":4280,"context_line":"                        \u0027expected path to end with %s, it was %s\u0027 % ("},{"line_number":4281,"context_line":"                            path, req[\u0027path\u0027]))"},{"line_number":4282,"context_line":"        headers \u003d headers or {}"},{"line_number":4283,"context_line":"        # caller can ignore some headers"},{"line_number":4284,"context_line":"        for k, v in headers.items():"}],"source_content_type":"text/x-python","patch_set":56,"id":"ad093ce8_7481c6c5","line":4281,"updated":"2025-09-25 22:24:36.000000000","message":"\"assert path endswith\" is a weird helper!?","commit_id":"e96c9149410bf1d3d08c152bcaeb9f9af89d67ef"},{"author":{"_account_id":34930,"name":"Jianjian Huo","email":"jhuo@nvidia.com","username":"jhuo"},"change_message_id":"230caa450da26de30e5b1de971df156ecb8b1b4c","unresolved":false,"context_lines":[{"line_number":4278,"context_line":"        # caller can ignore leading path parts"},{"line_number":4279,"context_line":"        self.assertTrue(req[\u0027path\u0027].endswith(path),"},{"line_number":4280,"context_line":"                        \u0027expected path to end with %s, it was %s\u0027 % ("},{"line_number":4281,"context_line":"                            path, req[\u0027path\u0027]))"},{"line_number":4282,"context_line":"        headers \u003d headers or {}"},{"line_number":4283,"context_line":"        # caller can ignore some 
Jianjian Huo: Acknowledged

Clay Gerrard (patch set 56, line 4433):
> 'object.shard_updating.cache.miss.200': 1,
> 'object.shard_updating.cache.set.200': 1

```
- 'object.shard_updating.cache.set': 1
+ 'object.shard_updating.cache.set.200': 1
```

this *looks* dumb - we're changing a legacy metric (just `shard_updating.cache.set` ???) to include a new status key that HAS to be *almost always* 2xx, right? Could we ever increment set on a 404? Why can't we just NOT do this?

Jianjian Huo: Done
Clay Gerrard (patch set 56, line 5155):
> self.app.obj_controller_router = proxy_server.ObjectControllerRouter()

why are we doing this?

Jianjian Huo: I probably copied this over from another similar test case, thanks for getting this cleaned up!

Clay Gerrard (patch set 56, line 5157):
> self.app.recheck_updating_shard_ranges = 3600

I think it'd be more obvious if this config value was part of the config dict used to construct the app. Is this not the default? Do we assert on or care about this value anywhere?

Jianjian Huo: Acknowledged
Clay Gerrard (patch set 56, line 5274):
> self.assertEqual(container_headers, expected)

I find handling the x-container-host header separately confusing - since the test disables device/node shuffling it's all deterministic - sda goes with sda:

```
            for (i, device), request in zip(enumerate(['sda', 'sdb', 'sdc']),
                                            backend_requests[3:]):
                expectations = {
                    'method': method,
                    'path': f'/{device}/0/a/c/o',
                    'headers': {
                        'X-Container-Partition': '0',
                        'Host': 'localhost:80',
                        'Referer': '%s http://localhost/v1/a/c/o' % method,
                        'X-Backend-Storage-Policy-Index': '1',
                        'X-Backend-Quoted-Container-Path': shard_ranges[1].name,
                        'X-Container-Device': device,
                        'X-Container-Host': '10.0.0.%d:100%d' % (i, i),
                    },
                }
                self._check_request(request, **expectations)
```

Jianjian Huo: Nice, this is much easier to read.
Clay Gerrard (patch set 56, line 5318):
> if retries <= 2:

maybe more clear as `retries < 3` so it's more obviously coupled with the retries assertion below?

Jianjian Huo: Done

Clay Gerrard (patch set 56, line 5352):
> req.environ['swift.cache'].incr(token_key, 3)

any value `>=` `namespace_cache_tokens_per_session` seems to work for this test - 3 happens to `==` the default value.

Jianjian Huo: Acknowledged
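The preset-counter trick above works because of how tokens are handed out: each request does an atomic memcache incr on the token key, and only the first few increments (up to `namespace_cache_tokens_per_session`, default 3 per the comment above) win a token and go to the backend. A stdlib-only sketch of that scheme (`FakeCache` and `try_acquire_token` are illustrative names, not Swift's actual API):

```python
class FakeCache(dict):
    # Minimal stand-in for a memcache client: incr() is atomic on a
    # real memcached server, which is what makes this scheme safe
    # across many concurrent proxy workers.
    def incr(self, key, delta=1):
        self[key] = self.get(key, 0) + delta
        return self[key]

def try_acquire_token(cache, token_key, tokens_per_session=3):
    # Every caller bumps the shared counter; only the first
    # ``tokens_per_session`` callers get a token and hit the backend -
    # everyone else sleeps and re-polls memcache for the cached value.
    return cache.incr(token_key) <= tokens_per_session

cache = FakeCache()
winners = [try_acquire_token(cache, 'token/a/c') for _ in range(100)]
assert winners.count(True) == 3      # 3 requests go to the backend
assert winners.count(False) == 97    # 97 wait for the cache to fill

# Presetting the counter to 3 (as the test does with incr(token_key, 3))
# guarantees the request under test can never win a token.
cache2 = FakeCache()
cache2.incr('token/a/c', 3)
assert try_acquire_token(cache2, 'token/a/c') is False
```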
Clay Gerrard (patch set 56, line 5358):
> resp = req.get_response(self.app)

if you patch eventlet.sleep here to no-op you can use the default avg_backend_fetch_time w/o making the test any slower

Jianjian Huo: Done

Clay Gerrard (patch set 56, line 5360):
> self.assertEqual(3, retries)

I can't seem to push this higher than 4 w/o getting some extra/unexpected backend requests - I assumed it was because of cache_token_ttl, but even w/ a longer avg_backend_fetch_time, after 4 memcache attempts it's always going to the backend.

oh... that's hard coded into the base CooperativeCachePopulator:

```
        The first retry is 1.5 times of the ``avg_backend_fetch_time``, the
        second is 3 times, and the third is 6 times of it, so total is 10.5
        times of the ``avg_backend_fetch_time``. This roughly equals the
        ``token_ttl`` which is 10 times of the ``avg_backend_fetch_time``.
```

I thought the "extra" was for the lack_retries handling - but actually what's going on is we're calling memcache.get once outside of the populator and then it tries 3 more times.

Jianjian Huo: Acknowledged
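The quoted docstring describes a doubling backoff seeded at 1.5x the average backend fetch time; a quick sketch checking the arithmetic behind the 10.5x-vs-10x observation (pure illustration of the numbers quoted above, using the 300ms default mentioned later in the review):

```python
def retry_waits(avg_backend_fetch_time, retries=3):
    # The first wait is 1.5x the average backend fetch time and each
    # subsequent wait doubles: 1.5x, 3x, 6x of the average.
    return [avg_backend_fetch_time * 1.5 * (2 ** i) for i in range(retries)]

avg_ms = 300  # default average backend fetch time, in milliseconds
waits = retry_waits(avg_ms)
assert waits == [450.0, 900.0, 1800.0]
# The three sleeps total 10.5x the average, just past the token TTL of
# 10x, so a waiter that exhausts its retries wakes up right around the
# time the token-winner's token expires and falls back to the backend.
assert sum(waits) == 10.5 * avg_ms
```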
Clay Gerrard (patch set 56, line 5506):
> with mock.patch('swift.proxy.controllers.obj.time.time', ) \
>         as mock_time:

this indenting is weird `mock.patch('swift.proxy.controllers.obj.time.time', )` ???

Jianjian Huo: Done

Clay Gerrard (patch set 56, line 5507):
> mock_time.side_effect = itertools.count(4000.99, 1.0)

wow, when I finally get into the `cur_time < cutoff_time` block in a debugger this has counted all the way to

```
(Pdb) cur_time
4021.99
(Pdb) cutoff_time
4021.04
```
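The `side_effect = itertools.count(...)` trick above makes every call to the patched `time.time()` advance the clock by a fixed step, which is why the debugger sees it counted well past the cutoff. A minimal stdlib sketch of the pattern (the loop and its name are illustrative, not Swift's code; here we patch stdlib `time.time` rather than Swift's module path):

```python
import itertools
import time
from unittest import mock

def calls_until(cutoff):
    # Burn time.time() calls until the mocked clock passes ``cutoff``;
    # each call to the patched time.time() advances it by 1.0.
    calls = 0
    while time.time() < cutoff:
        calls += 1
    return calls

with mock.patch('time.time') as mock_time:
    mock_time.side_effect = itertools.count(4000.99, 1.0)
    n = calls_until(4021.04)
# The clock returned 4000.99, 4001.99, ..., 4020.99 (all < cutoff),
# then 4021.99 ended the loop: 21 in-loop iterations.
assert n == 21
```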
Jianjian Huo: Acknowledged

Clay Gerrard (patch set 56, line 5510):
> self.assertEqual(3, retries)    # 2 normal and 1 forced.

I found this comment confusing - there's two *attempts* and 1 forced:

1) we try to get from cache (this happens before you even get INTO the cache populator)
2) because it's a miss, and we lose the token, the coop-populator enters the while loop and sleeps, then gets (num_waits = 1)
3) the very next time back through the while loop we're already after cutoff_time and we do one last forced memcache.get

Clay Gerrard (patch set 56, line 5605):
> # all fail forever - just like real memcache!

ROFL!

Jianjian Huo: Acknowledged
Clay Gerrard (patch set 56, line 5736):
> conf = {'namespace_cache_use_token': 'True'}

this test tends to be pretty slow, ~1500ms

I assumed that was b/c of the default 300ms avg_backend_fetch_time - but it seems like the dominating factor is `num_processes = 100`

Clay Gerrard (patch set 56, line 5748):
> eventlet.sleep(0.2)

I can push this *quite* low (0.0001) and it doesn't really affect the test run-time - as long as the other threads can talk to memcache before this responds to the token winner, they'll go to sleep and retry memcache after.

Clay Gerrard (patch set 56, line 5846):
> failures = random.randint(1, 2)

I can't seem to push this up to 3 failures w/o introducing some errors?

The error seems to be that instead of

```
            'object.shard_updating.cache.miss.503': failures,
            'object.shard_updating.cache.miss.200': 3 - failures,
            'object.shard_updating.cache.miss': num_processes - 3,
            'object.shard_updating.cache.set': 3 - failures,
```

... I only have

```
            'object.shard_updating.cache.miss.503': failures,
            'object.shard_updating.cache.miss.200': 3 - failures,
            'object.shard_updating.cache.set': 97,
```

ah, because of course if all the token winners error out EVERYONE will storm the backend! What an absolutely terrible failure mode!

Jianjian Huo: Acknowledged
