)]}'
{"/COMMIT_MSG":[
{"author":{"_account_id":28022,"name":"Bharat Kunwar","email":"brtknr@bath.edu","username":"brtknr"},"change_message_id":"d25d36307f23ea4440106ad569b8639db1f9b212","unresolved":false,"context_lines":[{"line_number":7,"context_line":"Fix nginx getting OOM killed"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"* Set requests.memory\u003d128MiB  for the nginx-ingress-controller pod"},{"line_number":10,"context_line":"* QoS from Guaranteed to Burstable"},{"line_number":11,"context_line":"* Set priority class so that pods can take priority on a node that"},{"line_number":12,"context_line":"might have No CPU taint."},{"line_number":13,"context_line":""}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"3fa7e38b_4a47377c","line":10,"range":{"start_line":10,"start_character":2,"end_line":10,"end_character":34},"updated":"2019-11-26 11:42:11.000000000","message":"How does this translate?","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":29425,"name":"Diogo Guerra","email":"diogo.filipe.tomas.guerra@cern.ch","username":"dioguerra"},"change_message_id":"e0abcb6ca3ea48e17d17da98ae2669f8d13fa16b","unresolved":false,"context_lines":[{"line_number":7,"context_line":"Fix nginx getting OOM killed"},{"line_number":8,"context_line":""},{"line_number":9,"context_line":"* Set requests.memory\u003d128MiB  for the nginx-ingress-controller pod"},{"line_number":10,"context_line":"* QoS from Guaranteed to Burstable"},{"line_number":11,"context_line":"* Set priority class so that pods can take priority on a node that"},{"line_number":12,"context_line":"might have No CPU taint."},{"line_number":13,"context_line":""}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"3fa7e38b_53d84e23","line":10,"range":{"start_line":10,"start_character":2,"end_line":10,"end_character":34},"in_reply_to":"3fa7e38b_4a47377c","updated":"2019-11-26 14:20:32.000000000","message":"The problem here is that a Guaranteed pod always gets capped by the limit value, whereas a Burstable pod can take the shared budget from all the burstable pods (or the remaining resources of the node).\n\nAlthough this itself has other problems, where the application itself can starve the nginx proxy.\n\nIn this case, I still think it kinda makes sense, because if the application is under load you cannot serve more requests anyway.\nThe best solution here for your application would be to scale up the nginx-ingress-controller pods and spawn them on another node (in this case, even if the node is full, the priorityClass will take action and evict something).\n\nIn the end, if there are still pods to spawn, the CA should kick in.","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":28022,"name":"Bharat Kunwar","email":"brtknr@bath.edu","username":"brtknr"},"change_message_id":"881e0f916ab49e81f745300745a28eaa3b9a4f9f","unresolved":false,"context_lines":[{"line_number":8,"context_line":""},{"line_number":9,"context_line":"* Set requests.memory\u003d256MiB  for the nginx-ingress-controller pod"},{"line_number":10,"context_line":"We decided to leave limits open as this will allow to support most"},{"line_number":11,"context_line":"of the generic use cases. "},{"line_number":12,"context_line":"* QoS from Guaranteed to Burstable"},{"line_number":13,"context_line":"This will make that application and ingress starve each other,"},{"line_number":14,"context_line":"both fighting for node resources for an optimal usage of CPU"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":2,"id":"3fa7e38b_67e125cf","line":11,"range":{"start_line":11,"start_character":25,"end_line":11,"end_character":26},"updated":"2019-12-12 15:42:47.000000000","message":"space","commit_id":"83b38d949f009d09b368745a414a14a52b40facf"}
],
"magnum/drivers/common/templates/kubernetes/helm/ingress-nginx.sh":[
{"author":{"_account_id":20498,"name":"Spyros Trigazis","email":"spyridon.trigazis@cern.ch","username":"strigazi"},"change_message_id":"9ff9fc2801e4903531540695289009d223a03226","unresolved":false,"context_lines":[{"line_number":105,"context_line":"      minAvailable: 1"},{"line_number":106,"context_line":"      resources:"},{"line_number":107,"context_line":"        requests:"},{"line_number":108,"context_line":"          cpu: 100m"},{"line_number":109,"context_line":"          memory: 128Mi"},{"line_number":110,"context_line":"      autoscaling:"},{"line_number":111,"context_line":"        enabled: false"},{"line_number":112,"context_line":"      customTemplate:"}],"source_content_type":"text/x-sh","patch_set":1,"id":"3fa7e38b_d181ccfb","line":109,"range":{"start_line":108,"start_character":0,"end_line":109,"end_character":23},"updated":"2019-12-11 10:51:16.000000000","message":"I would make this 200m and 256Mi. These values are just from experience, exposing a single custom TCP port.\n\nOther than that, happy to take it.","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":29425,"name":"Diogo Guerra","email":"diogo.filipe.tomas.guerra@cern.ch","username":"dioguerra"},"change_message_id":"623a329e174f1be2224263e7279eeb620157d03e","unresolved":false,"context_lines":[{"line_number":105,"context_line":"      minAvailable: 1"},{"line_number":106,"context_line":"      resources:"},{"line_number":107,"context_line":"        requests:"},{"line_number":108,"context_line":"          cpu: 100m"},{"line_number":109,"context_line":"          memory: 128Mi"},{"line_number":110,"context_line":"      autoscaling:"},{"line_number":111,"context_line":"        enabled: false"},{"line_number":112,"context_line":"      customTemplate:"}],"source_content_type":"text/x-sh","patch_set":1,"id":"3fa7e38b_49f765d5","line":109,"range":{"start_line":108,"start_character":0,"end_line":109,"end_character":23},"in_reply_to":"3fa7e38b_294a2923","updated":"2019-12-12 10:17:56.000000000","message":"Yes, I will update with your suggested values.","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":20498,"name":"Spyros Trigazis","email":"spyridon.trigazis@cern.ch","username":"strigazi"},"change_message_id":"e36ae14e041682e230c2bcf781f14a51e78ad3dd","unresolved":false,"context_lines":[{"line_number":105,"context_line":"      minAvailable: 1"},{"line_number":106,"context_line":"      resources:"},{"line_number":107,"context_line":"        requests:"},{"line_number":108,"context_line":"          cpu: 100m"},{"line_number":109,"context_line":"          memory: 128Mi"},{"line_number":110,"context_line":"      autoscaling:"},{"line_number":111,"context_line":"        enabled: false"},{"line_number":112,"context_line":"      customTemplate:"}],"source_content_type":"text/x-sh","patch_set":1,"id":"3fa7e38b_294a2923","line":109,"range":{"start_line":108,"start_character":0,"end_line":109,"end_character":23},"in_reply_to":"3fa7e38b_57dcec6f","updated":"2019-12-12 10:15:04.000000000","message":"How do you want to proceed? Will you update with slightly higher requests and we (core reviewers) approve the patch?","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":20498,"name":"Spyros Trigazis","email":"spyridon.trigazis@cern.ch","username":"strigazi"},"change_message_id":"9cf18415df6612189c7fcd3876b89a87e6b7fb88","unresolved":false,"context_lines":[{"line_number":105,"context_line":"      minAvailable: 1"},{"line_number":106,"context_line":"      resources:"},{"line_number":107,"context_line":"        requests:"},{"line_number":108,"context_line":"          cpu: 100m"},{"line_number":109,"context_line":"          memory: 128Mi"},{"line_number":110,"context_line":"      autoscaling:"},{"line_number":111,"context_line":"        enabled: false"},{"line_number":112,"context_line":"      customTemplate:"}],"source_content_type":"text/x-sh","patch_set":1,"id":"3fa7e38b_57dcec6f","line":109,"range":{"start_line":108,"start_character":0,"end_line":109,"end_character":23},"in_reply_to":"3fa7e38b_973584dd","updated":"2019-12-11 12:52:18.000000000","message":"Let\u0027s avoid the limits so that users can fine-tune the values on their own. With requests, we guarantee that what we deploy works for basic/simple use cases.","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":29425,"name":"Diogo Guerra","email":"diogo.filipe.tomas.guerra@cern.ch","username":"dioguerra"},"change_message_id":"6bc0ad53ee418de112be3dd2e79cce23ad088ded","unresolved":false,"context_lines":[{"line_number":105,"context_line":"      minAvailable: 1"},{"line_number":106,"context_line":"      resources:"},{"line_number":107,"context_line":"        requests:"},{"line_number":108,"context_line":"          cpu: 100m"},{"line_number":109,"context_line":"          memory: 128Mi"},{"line_number":110,"context_line":"      autoscaling:"},{"line_number":111,"context_line":"        enabled: false"},{"line_number":112,"context_line":"      customTemplate:"}],"source_content_type":"text/x-sh","patch_set":1,"id":"3fa7e38b_973584dd","line":109,"range":{"start_line":108,"start_character":0,"end_line":109,"end_character":23},"in_reply_to":"3fa7e38b_d181ccfb","updated":"2019-12-11 12:43:43.000000000","message":"I never tested with concurrent TCP connections, but I tested to a max of 400 req/s.\n\nnginx data:\nhttps://imgur.com/wdXASDq\n\nhttps://imgur.com/GyE7pfU\n\nI noticed that for CPU the consumption is usually 1 mCPU/req.\n\nI thought again about this, and we can increase the CPU to your recommended values.\n\nI would also set limits, but higher (not the same, so as not to get Guaranteed resources):\n\n      resources:\n        limits:\n          cpu: 500m\n          memory: 512Mi\n        requests:\n          cpu: 200m\n          memory: 256Mi\n\nSo the nginx pod will take resources from the application if the node is not CPU saturated. If nginx and the app run on the same node, they will starve each other, which is OK.\n\nSure, take it.","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"},
{"author":{"_account_id":20498,"name":"Spyros Trigazis","email":"spyridon.trigazis@cern.ch","username":"strigazi"},"change_message_id":"9ff9fc2801e4903531540695289009d223a03226","unresolved":false,"context_lines":[{"line_number":160,"context_line":"            release: prometheus-operator"},{"line_number":161,"context_line":"          namespace: kube-system"},{"line_number":162,"context_line":"      lifecycle: {}"},{"line_number":163,"context_line":"      priorityClassName: \"system-node-critical\""},{"line_number":164,"context_line":"    revisionHistoryLimit: 10"},{"line_number":165,"context_line":"    defaultBackend:"},{"line_number":166,"context_line":"      enabled: true"}],"source_content_type":"text/x-sh","patch_set":1,"id":"3fa7e38b_516ddc64","line":163,"updated":"2019-12-11 10:51:16.000000000","message":"+1","commit_id":"8bdfd169ffee7322c99ffb7321dc136febd4908b"}
]}
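For reference, the outcome the thread converges on (requests raised to 200m/256Mi, limits deliberately left unset so the pod lands in the Burstable QoS class; recorded in the patch set 2 commit message) can be sketched as the following fragment of the quoted Helm values — a sketch only, since the surrounding keys and indentation are assumed from the context lines above, not from the final patch:

```yaml
      resources:
        requests:
          cpu: 200m
          memory: 256Mi
      # No limits set: with requests only, the pod is Burstable rather
      # than Guaranteed, so it can borrow spare node capacity under load;
      # operators who want a cap can add their own limits.
```

Combined with `priorityClassName: "system-node-critical"`, this lets the controller both burst above its request on an uncontended node and preempt lower-priority pods when the node is full.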
