)]}'
{"doc/source/ideas/teapot/compute.rst":[{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":26,"context_line":"example if it is only used for batch jobs, without requiring it to be recreated"},{"line_number":27,"context_line":"from scratch each time. Since the management cluster also runs on bare metal,"},{"line_number":28,"context_line":"the tenant pods could also be isolated from each other and from the rest of the"},{"line_number":29,"context_line":"system using Kata."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":".. _teapot-compute-metal3:"},{"line_number":32,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_bc91fe9b","line":29,"range":{"start_line":29,"start_character":17,"end_line":29,"end_character":18},"updated":"2020-02-28 16:00:22.000000000","message":"and/or security policies.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":79,"context_line":"is backed by a Machine object in another cluster (the management cluster). This"},{"line_number":80,"context_line":"might prove useful for centralised management clusters in general, not just"},{"line_number":81,"context_line":"Teapot. We would have no choice but to name this component"},{"line_number":82,"context_line":"cluster-api-provider-cluster-api."},{"line_number":83,"context_line":""},{"line_number":84,"context_line":"Cluster API does not yet have support for running the tenant control plane in"},{"line_number":85,"context_line":"containers. Tools like Gardener_ do, but are not yet well integrated with the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_3fcc20bb","line":82,"updated":"2020-02-28 16:00:22.000000000","message":"nit: cluster-api-machine-provider-sync(er) ? :p","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":109,"context_line":"involved in provisioning a bare-metal host (15 minutes is not unusual, due in"},{"line_number":110,"context_line":"large part to running hardware tests including checking increasingly massive"},{"line_number":111,"context_line":"amounts of RAM). The situation is even worse when needing to deprovision a host"},{"line_number":112,"context_line":"from one tenant before giving it to another tenant, since that requires"},{"line_number":113,"context_line":"cleaning the local disks."},{"line_number":114,"context_line":""},{"line_number":115,"context_line":".. _teapot-compute-scheduling:"},{"line_number":116,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_c38f4921","line":113,"range":{"start_line":112,"start_character":52,"end_line":113,"end_character":25},"updated":"2020-02-27 20:23:41.000000000","message":"This goes to seconds when using encrypted storage hardware. You can simply zero out the encryption keys in the RAID or drive controllers. This is typically called \"instant secure erase\" by the hardware vendors and are an implementation of the \"SANITIZE\" SCSI command.\nSome details from T13 for SATA: http://t13.org/Documents/UploadedDocuments/docs2009/e07197r7-T13_Sanitize_Command_Proposal.pdf","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":109,"context_line":"involved in provisioning a bare-metal host (15 minutes is not unusual, due in"},{"line_number":110,"context_line":"large part to running hardware tests including checking increasingly massive"},{"line_number":111,"context_line":"amounts of RAM). The situation is even worse when needing to deprovision a host"},{"line_number":112,"context_line":"from one tenant before giving it to another tenant, since that requires"},{"line_number":113,"context_line":"cleaning the local disks."},{"line_number":114,"context_line":""},{"line_number":115,"context_line":".. _teapot-compute-scheduling:"},{"line_number":116,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_b1fdaeb2","line":113,"range":{"start_line":112,"start_character":52,"end_line":113,"end_character":25},"in_reply_to":"1fa4df85_c38f4921","updated":"2020-02-28 05:03:02.000000000","message":"Thanks, that\u0027s good info.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":11,"context_line":"cluster. Furthermore, it allows users to manage all components of an"},{"line_number":12,"context_line":"application -- both those that run in containers and those that need a"},{"line_number":13,"context_line":"traditional VM -- from the same Kubernetes control plane (using KubeVirt)."},{"line_number":14,"context_line":"Finally, it eliminates the complexity of needing to virtualise access to"},{"line_number":15,"context_line":"specialist hardware such as :abbr:`GPGPU (general-purpose GPU)`\\ s or FPGAs,"},{"line_number":16,"context_line":"while still allowing the capability to be used by different tenants at"},{"line_number":17,"context_line":"different times."},{"line_number":18,"context_line":""},{"line_number":19,"context_line":"However, the *master* nodes of tenant cluster will run in containers on the"},{"line_number":20,"context_line":"management cluster (or some other centrally-managed cluster). This makes it"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_4e8be7c1","line":17,"range":{"start_line":14,"start_character":0,"end_line":17,"end_character":16},"updated":"2020-03-09 20:18:44.000000000","message":"Except compute provisioned resources that are direct bare metal as consumption of an entire node don\u0027t have to deal with this.\n\nGranted, very broad strokes are being painted with, but many of these decisions come down to how a business deploys infrastructure to meet their business needs or how a business wants to make money, as well as how they want to spend it. I guess I would feel more comfortable if apples were being compared to apples, and oranges to oranges, not an apple to an orange depending on the operator or installer chosen path. Hopefully this makes sense.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":11,"context_line":"cluster. Furthermore, it allows users to manage all components of an"},{"line_number":12,"context_line":"application -- both those that run in containers and those that need a"},{"line_number":13,"context_line":"traditional VM -- from the same Kubernetes control plane (using KubeVirt)."},{"line_number":14,"context_line":"Finally, it eliminates the complexity of needing to virtualise access to"},{"line_number":15,"context_line":"specialist hardware such as :abbr:`GPGPU (general-purpose GPU)`\\ s or FPGAs,"},{"line_number":16,"context_line":"while still allowing the capability to be used by different tenants at"},{"line_number":17,"context_line":"different times."},{"line_number":18,"context_line":""},{"line_number":19,"context_line":"However, the *master* nodes of tenant cluster will run in containers on the"},{"line_number":20,"context_line":"management cluster (or some other centrally-managed cluster). This makes it"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_0b8b8fc2","line":17,"range":{"start_line":14,"start_character":0,"end_line":17,"end_character":16},"in_reply_to":"1fa4df85_4e8be7c1","updated":"2020-03-10 18:56:43.000000000","message":"IIUC you\u0027re saying that users could just use Nova+Ironic and run Kubernetes on it to solve the same problem?\nI guess I consider this section more of a \"why this design can actually support almost all use cases despite not having VMs at all\", not like \"why OpenStack is terrible and you should never use it\" or whatever. It\u0027s comparing to alternate designs for Teapot, not really to OpenStack.\n\nIn an apples-to-apples comparison to OpenStack, I think the benefit is \"makes it simple to do what you need, by doing only what you need\".","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":16,"context_line":"while still allowing the capability to be used by different tenants at"},{"line_number":17,"context_line":"different times."},{"line_number":18,"context_line":""},{"line_number":19,"context_line":"However, the *master* nodes of tenant cluster will run in containers on the"},{"line_number":20,"context_line":"management cluster (or some other centrally-managed cluster). This makes it"},{"line_number":21,"context_line":"easy and cost-effective to provide high availability of cluster control planes,"},{"line_number":22,"context_line":"by not sacrificing large numbers of hosts to this purpose or requiring"},{"line_number":23,"context_line":"workloads to run on master nodes. It also makes it possible to optionally"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_2e8c6bd5","line":20,"range":{"start_line":19,"start_character":9,"end_line":20,"end_character":60},"updated":"2020-03-09 20:18:44.000000000","message":"This feels extremely hand-wavey to me, and I wonder if this is actually representative of users and their chosen architectures?\n\nAlso, wouldn\u0027t security mechanisms be required to help make it more difficult for container escape?","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"5fc8d5dd1fb1f26b42863ddd87dc9f31935d4f76","unresolved":false,"context_lines":[{"line_number":16,"context_line":"while still allowing the capability to be used by different tenants at"},{"line_number":17,"context_line":"different times."},{"line_number":18,"context_line":""},{"line_number":19,"context_line":"However, the *master* nodes of tenant cluster will run in containers on the"},{"line_number":20,"context_line":"management cluster (or some other centrally-managed cluster). This makes it"},{"line_number":21,"context_line":"easy and cost-effective to provide high availability of cluster control planes,"},{"line_number":22,"context_line":"by not sacrificing large numbers of hosts to this purpose or requiring"},{"line_number":23,"context_line":"workloads to run on master nodes. It also makes it possible to optionally"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_ef0b8c28","line":20,"range":{"start_line":19,"start_character":9,"end_line":20,"end_character":60},"in_reply_to":"1fa4df85_00ce0e62","updated":"2020-03-11 16:19:43.000000000","message":"Yeah. I believe having one cluster for managing the control planes of other, close by, user focused clusters is pretty common now.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":16,"context_line":"while still allowing the capability to be used by different tenants at"},{"line_number":17,"context_line":"different times."},{"line_number":18,"context_line":""},{"line_number":19,"context_line":"However, the *master* nodes of tenant cluster will run in containers on the"},{"line_number":20,"context_line":"management cluster (or some other centrally-managed cluster). This makes it"},{"line_number":21,"context_line":"easy and cost-effective to provide high availability of cluster control planes,"},{"line_number":22,"context_line":"by not sacrificing large numbers of hosts to this purpose or requiring"},{"line_number":23,"context_line":"workloads to run on master nodes. It also makes it possible to optionally"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_00ce0e62","line":20,"range":{"start_line":19,"start_character":9,"end_line":20,"end_character":60},"in_reply_to":"1fa4df85_2e8c6bd5","updated":"2020-03-10 18:56:43.000000000","message":"The economics of GKE/EKS/AKS suggest that they must be doing something like this (either in containers or small VMs). There are entire projects like https://gardener.cloud/ devoted to doing control planes in containers and other work in the pipeline like https://github.com/openshift/hypershift-toolkit - at this point I think it\u0027s more or less a mainstream assumption that in the near future any management system for multiple Kubernetes clusters will work like this.\n\nSecurity mechanisms will absolutely be needed, although exact solutions probably depend on your level of paranoia + whether you are operating it as a fully-managed service. Options include hypervisor isolation (Kata), restricting tenant management planes to a subset of nodes that are locked down (e.g. no connection/route to the provisioning network), or even putting them in a separate cluster.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":25,"context_line":"relatively cheap to scale a cluster to zero when it has nothing to do, for"},{"line_number":26,"context_line":"example if it is only used for batch jobs, without requiring it to be recreated"},{"line_number":27,"context_line":"from scratch each time. Since the management cluster also runs on bare metal,"},{"line_number":28,"context_line":"the tenant pods could also be isolated from each other and from the rest of the"},{"line_number":29,"context_line":"system using Kata, in addition to regular security policies."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":".. _teapot-compute-metal3:"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_2ea54b47","line":28,"range":{"start_line":28,"start_character":16,"end_line":28,"end_character":21},"updated":"2020-03-09 20:18:44.000000000","message":"I\u0027d make this shall instead of could because for it to be adoptable, cross-tenant security needs to be strong.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"5fc8d5dd1fb1f26b42863ddd87dc9f31935d4f76","unresolved":false,"context_lines":[{"line_number":25,"context_line":"relatively cheap to scale a cluster to zero when it has nothing to do, for"},{"line_number":26,"context_line":"example if it is only used for batch jobs, without requiring it to be recreated"},{"line_number":27,"context_line":"from scratch each time. Since the management cluster also runs on bare metal,"},{"line_number":28,"context_line":"the tenant pods could also be isolated from each other and from the rest of the"},{"line_number":29,"context_line":"system using Kata, in addition to regular security policies."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":".. _teapot-compute-metal3:"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_a2336bf6","line":28,"range":{"start_line":28,"start_character":16,"end_line":28,"end_character":21},"in_reply_to":"1fa4df85_2ea54b47","updated":"2020-03-11 16:19:43.000000000","message":"There are other isolation solutions then just kata. network isolation via NetworkPolicy and/or service mesh\u0027s like Istio. Maybe it should be \u0027should\u0027 and leave the details for later?","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":108,"context_line":"One significant challenge posed by bare-metal is the extremely high latency"},{"line_number":109,"context_line":"involved in provisioning a bare-metal host (15 minutes is not unusual, due in"},{"line_number":110,"context_line":"large part to running hardware tests including checking increasingly massive"},{"line_number":111,"context_line":"amounts of RAM). The situation is even worse when needing to deprovision a host"},{"line_number":112,"context_line":"from one tenant before giving it to another tenant, since that requires"},{"line_number":113,"context_line":"cleaning the local disks, though this extra overhead can be essentially"},{"line_number":114,"context_line":"eliminated if the disk is encrypted (in which case only the keys need be"},{"line_number":115,"context_line":"erased)."},{"line_number":116,"context_line":""},{"line_number":117,"context_line":".. _teapot-compute-scheduling:"},{"line_number":118,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_4e2767cb","line":115,"range":{"start_line":111,"start_character":17,"end_line":115,"end_character":8},"updated":"2020-03-09 20:18:44.000000000","message":"I think things like Ceph would toss a wrench into this because (a) the disk needs to be scrubbed clean for data safety reasons (b) a user has to articulate that is what is required instead of the keys rotated on the disk (with actual disk hardware encryption, or are we talking purely about software encryption?\n\nAnother question is also what about all of those storage backends/subsystems that don\u0027t allow key rotation on the disk because there is no concept of such capabilities to the block device?","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":108,"context_line":"One significant challenge posed by bare-metal is the extremely high latency"},{"line_number":109,"context_line":"involved in provisioning a bare-metal host (15 minutes is not unusual, due in"},{"line_number":110,"context_line":"large part to running hardware tests including checking increasingly massive"},{"line_number":111,"context_line":"amounts of RAM). The situation is even worse when needing to deprovision a host"},{"line_number":112,"context_line":"from one tenant before giving it to another tenant, since that requires"},{"line_number":113,"context_line":"cleaning the local disks, though this extra overhead can be essentially"},{"line_number":114,"context_line":"eliminated if the disk is encrypted (in which case only the keys need be"},{"line_number":115,"context_line":"erased)."},{"line_number":116,"context_line":""},{"line_number":117,"context_line":".. _teapot-compute-scheduling:"},{"line_number":118,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_400c0693","line":115,"range":{"start_line":111,"start_character":17,"end_line":115,"end_character":8},"in_reply_to":"1fa4df85_4e2767cb","updated":"2020-03-10 18:56:43.000000000","message":"The part about disk encryption was added after comments from Michael on earlier patchsets - he included a lot more detail. AIUI yes, what he was talking about requires hardware support, but at least there is a (theoretical) option to eliminate the latency penalty of cleaning when this is available.\n\nWhether pure-software based full disk encryption could be an option in theory (obviously nothing like support for that exists in Ironic now)... I don\u0027t know, but it seems at least plausible.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"}],"doc/source/ideas/teapot/dns.rst":[{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"ff9f7a1f19b3cb05cd4fb2738ddcbadaa1b3ef32","unresolved":false,"context_lines":[{"line_number":4,"context_line":"Project Teapot must provide a trusted way for DNS information generated by the"},{"line_number":5,"context_line":"(untrusted) tenant clusters to be propagated out to the network."},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"Each tenant cluster requires at least 2 DNS records -- one for the control"},{"line_number":8,"context_line":"plane, and a wildcard for any applications. These would usually be subdomains"},{"line_number":9,"context_line":"of a zone delegated to the Teapot for this purpose. Teapot would be responsible"},{"line_number":10,"context_line":"for rolling up these records and making them available over DNS."},{"line_number":11,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_3d5d15e9","line":8,"range":{"start_line":7,"start_character":55,"end_line":8,"end_character":6},"updated":"2020-02-27 23:08:52.000000000","message":"Is this really required? The compute section makes it sound like the tenant can do whatever they want with the baremetal box. VMs, etc.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":4,"context_line":"Project Teapot must provide a trusted way for DNS information generated by the"},{"line_number":5,"context_line":"(untrusted) tenant clusters to be propagated out to the network."},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"Each tenant cluster requires at least 2 DNS records -- one for the control"},{"line_number":8,"context_line":"plane, and a wildcard for any applications. These would usually be subdomains"},{"line_number":9,"context_line":"of a zone delegated to the Teapot for this purpose. Teapot would be responsible"},{"line_number":10,"context_line":"for rolling up these records and making them available over DNS."},{"line_number":11,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a69256f5","line":8,"range":{"start_line":7,"start_character":55,"end_line":8,"end_character":6},"in_reply_to":"1fa4df85_3d5d15e9","updated":"2020-02-28 05:03:02.000000000","message":"Users will always get a control plane (running in pods in the management cluster), and if they want to be able to reach it without typing IP addresses (which they do) they\u0027ll need a DNS record.\n\nIt\u0027d be nice if they could run anything (I\u0027d like for them to be able to run SLURM one day, for example), and I think there are surprisingly few technical barriers to that, but it\u0027s not an explicit goal for now.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":85,"context_line":"largely used to implement RPC patterns (``call``, not ``cast``), and might be"},{"line_number":86,"context_line":"amenable to being swapped for a json-rpc interface in the same way as is done"},{"line_number":87,"context_line":"in Ironic for Metal³."},{"line_number":88,"context_line":""},{"line_number":89,"context_line":".. _ExternalDNS: https://github.com/kubernetes-sigs/external-dns#readme"},{"line_number":90,"context_line":".. _Designate: https://docs.openstack.org/designate/"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_03c6c149","line":88,"updated":"2020-02-27 18:01:44.000000000","message":"Another option:\nuse externalDNS on the tenant cluster using the coredns backend or the native sites dns plugins.\n\nUse a validating webhook to ensure they can only edit their own dns entries. This is easy to write.\n\nAnd have a simple syncer container running on the tenant mgmt cluster that pulls externalDNS records from the tenant cluster and syncs the objects to the management cluster\u0027s tenant namespace. Auth then is k8s to k8s.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":6928,"name":"Ben Nemec","email":"openstack@nemebean.com","username":"bnemec"},"change_message_id":"7c138bca1b48b37838cde8bf644504e0b3cab89c","unresolved":false,"context_lines":[{"line_number":9,"context_line":"of a zone delegated to the Teapot for this purpose. Teapot would be responsible"},{"line_number":10,"context_line":"for rolling up these records and making them available over DNS."},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"Since Teapot will be responsible for allocation floating IP addresses, it will"},{"line_number":13,"context_line":"also need to be responsible for advertising reverse DNS records for those"},{"line_number":14,"context_line":"floating IPs."},{"line_number":15,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_75db941a","line":12,"range":{"start_line":12,"start_character":48,"end_line":12,"end_character":69},"updated":"2020-02-28 23:11:18.000000000","message":"I see a bunch of comments elsewhere about ignoring floating IPs. I assume this still applies to whatever external address method we are using?\n\nAlso, nit/allocation/allocating/","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"7ce66105825ce6dd73c0b371e5c459e314c2865a","unresolved":false,"context_lines":[{"line_number":74,"context_line":"might prove brittle."},{"line_number":75,"context_line":""},{"line_number":76,"context_line":"Again, additional work might need to be done to export the wildcard DNS records"},{"line_number":77,"context_line":"for the tenant workloads and would be needed for reverse DNS records."},{"line_number":78,"context_line":""},{"line_number":79,"context_line":".. _teapot-dns-designate:"},{"line_number":80,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_03a533b5","line":77,"updated":"2020-03-03 16:30:51.000000000","message":"This may integrate well with cert-manager allowing for getting wildcard certificates to go along with the dns records via lets-encrypt or from any of their other backends.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":6928,"name":"Ben Nemec","email":"openstack@nemebean.com","username":"bnemec"},"change_message_id":"7c138bca1b48b37838cde8bf644504e0b3cab89c","unresolved":false,"context_lines":[{"line_number":94,"context_line":"However, this is tightly coupled to Neutron. If Neutron is used in Teapot it"},{"line_number":95,"context_line":"should be as an implementation detail only, so other services like Designate"},{"line_number":96,"context_line":"should not rely on integrating with it. Therefore additional work would be"},{"line_number":97,"context_line":"required here to support reverse DNS."},{"line_number":98,"context_line":""},{"line_number":99,"context_line":"Ideally the back-end in the opinionated configuration would be CoreDNS, due to"},{"line_number":100,"context_line":"its status in the Kubernetes community (it is used for the *internal* DNS and"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_35b99ccb","line":97,"updated":"2020-02-28 23:11:18.000000000","message":"I know you say below that we don\u0027t need designate-sink, but maybe this is a use case for it? I haven\u0027t looked into it much since Graham told me most people don\u0027t need it...except those implementing crazy ideas. :-)","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"51d7ba93325a94ae378f6fca9d8c60891873ad6e","unresolved":false,"context_lines":[{"line_number":94,"context_line":"However, this is tightly coupled to Neutron. If Neutron is used in Teapot it"},{"line_number":95,"context_line":"should be as an implementation detail only, so other services like Designate"},{"line_number":96,"context_line":"should not rely on integrating with it. Therefore additional work would be"},{"line_number":97,"context_line":"required here to support reverse DNS."},{"line_number":98,"context_line":""},{"line_number":99,"context_line":"Ideally the back-end in the opinionated configuration would be CoreDNS, due to"},{"line_number":100,"context_line":"its status in the Kubernetes community (it is used for the *internal* DNS and"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_d8239d04","line":97,"in_reply_to":"1fa4df85_35b99ccb","updated":"2020-02-29 02:35:34.000000000","message":"IIUC it\u0027s a thing that listens on the oslo.messaging bus (Ceilometer-style) for events and then pokes records into Designate. That specific implementation sounds like the last thing we want :D but that architectural pattern could be exactly what we want.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":94,"context_line":"However, this is tightly coupled to Neutron. If Neutron is used in Teapot it"},{"line_number":95,"context_line":"should be as an implementation detail only, so other services like Designate"},{"line_number":96,"context_line":"should not rely on integrating with it. Therefore additional work would be"},{"line_number":97,"context_line":"required here to support reverse DNS."},{"line_number":98,"context_line":""},{"line_number":99,"context_line":"Ideally the back-end in the opinionated configuration would be CoreDNS, due to"},{"line_number":100,"context_line":"its status in the Kubernetes community (it is used for the *internal* DNS and"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_d5ad7f9b","line":97,"in_reply_to":"1fa4df85_d8239d04","updated":"2020-03-03 16:56:01.000000000","message":"Yeah - I would prefer we use the real API if we do use Designate. Sink is useful for edge cases, but something like this could easily sit in the k8s reconciliation loop to add records as needed.\n\nWe also have a reverse dns API plugin point that we could use (there is 2 places we can set PTR records - one looks up the floating IPs in neutron, which is pluggable, so we could re-use that logic)","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":6928,"name":"Ben Nemec","email":"openstack@nemebean.com","username":"bnemec"},"change_message_id":"7c138bca1b48b37838cde8bf644504e0b3cab89c","unresolved":false,"context_lines":[{"line_number":99,"context_line":"Ideally the back-end in the opinionated configuration would be CoreDNS, due to"},{"line_number":100,"context_line":"its status in the Kubernetes community (it is used for the *internal* DNS and"},{"line_number":101,"context_line":"is a CNCF project). However, there is currently no CoreDNS back-end for"},{"line_number":102,"context_line":"Designate."},{"line_number":103,"context_line":""},{"line_number":104,"context_line":"The Designate Sink component would not be required, but the rest of Designate"},{"line_number":105,"context_line":"is also built around RabbitMQ, which is highly undesirable. However, it is"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_d53a8832","line":102,"updated":"2020-02-28 23:11:18.000000000","message":"CoreDNS has plugins for all of the other public cloud DNS services, maybe we turn it around and have CoreDNS get its records from Designate rather than Designate sending them to CoreDNS?","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"51d7ba93325a94ae378f6fca9d8c60891873ad6e","unresolved":false,"context_lines":[{"line_number":99,"context_line":"Ideally the back-end in the opinionated configuration would be CoreDNS, due to"},{"line_number":100,"context_line":"its status in the Kubernetes community (it is used for the *internal* DNS and"},{"line_number":101,"context_line":"is a CNCF project). However, there is currently no CoreDNS back-end for"},{"line_number":102,"context_line":"Designate."},{"line_number":103,"context_line":""},{"line_number":104,"context_line":"The Designate Sink component would not be required, but the rest of Designate"},{"line_number":105,"context_line":"is also built around RabbitMQ, which is highly undesirable. However, it is"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_d8515d64","line":102,"in_reply_to":"1fa4df85_d53a8832","updated":"2020-02-29 02:35:34.000000000","message":"Huh, interesting. That would definitely benefit OpenStack users as well.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":99,"context_line":"Ideally the back-end in the opinionated configuration would be CoreDNS, due to"},{"line_number":100,"context_line":"its status in the Kubernetes community (it is used for the *internal* DNS and"},{"line_number":101,"context_line":"is a CNCF project). However, there is currently no CoreDNS back-end for"},{"line_number":102,"context_line":"Designate."},{"line_number":103,"context_line":""},{"line_number":104,"context_line":"The Designate Sink component would not be required, but the rest of Designate"},{"line_number":105,"context_line":"is also built around RabbitMQ, which is highly undesirable. However, it is"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_75960b4f","line":102,"in_reply_to":"1fa4df85_d8515d64","updated":"2020-03-03 16:56:01.000000000","message":":+1:","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":103,"context_line":"Designate. An alternative to writing one would be to write a Designate plugin"},{"line_number":104,"context_line":"for CoreDNS -- similar plugins exist for other clouds already. The latter would"},{"line_number":105,"context_line":"provide the most benefit to OpenStack users, since theoretically tenants could"},{"line_number":106,"context_line":"make use of it even if CoreDNS is not chosen as the back-end by their OpenStack"},{"line_number":107,"context_line":"cloud\u0027s administrators."},{"line_number":108,"context_line":""},{"line_number":109,"context_line":"The Designate Sink component would not be required, but the rest of Designate"},{"line_number":110,"context_line":"is also built around RabbitMQ, which is highly undesirable. However, it is"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_9955a7fc","line":107,"range":{"start_line":106,"start_character":70,"end_line":107,"end_character":23},"updated":"2020-03-09 20:18:44.000000000","message":"I\u0027m unsure this cloud can be called this based on the way some of this is written, perhaps their \"Teapot cloud administrator\u0027s\"?","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":103,"context_line":"Designate. An alternative to writing one would be to write a Designate plugin"},{"line_number":104,"context_line":"for CoreDNS -- similar plugins exist for other clouds already. The latter would"},{"line_number":105,"context_line":"provide the most benefit to OpenStack users, since theoretically tenants could"},{"line_number":106,"context_line":"make use of it even if CoreDNS is not chosen as the back-end by their OpenStack"},{"line_number":107,"context_line":"cloud\u0027s administrators."},{"line_number":108,"context_line":""},{"line_number":109,"context_line":"The Designate Sink component would not be required, but the rest of Designate"},{"line_number":110,"context_line":"is also built around RabbitMQ, which is highly undesirable. However, it is"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_74b788f6","line":107,"range":{"start_line":106,"start_character":70,"end_line":107,"end_character":23},"in_reply_to":"1fa4df85_9955a7fc","updated":"2020-03-10 18:56:43.000000000","message":"This is a comment about OpenStack specifically, since ideally we want whatever work we have to do to benefit both Teapot and OpenStack users.\nIn this case, Teapot users don\u0027t care whether we implement CoreDNS functionality in Designate or Designate functionality in CoreDNS, because in both cases it\u0027s internal to the Teapot cloud. But OpenStack users benefit more from implementing Designate functionality in CoreDNS because they can use it even if their OpenStack administrator doesn\u0027t.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"}],"doc/source/ideas/teapot/idm.rst":[{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":22,"context_line":""},{"line_number":23,"context_line":"Credentials for these purposes should be regularly rotated and narrowly"},{"line_number":24,"context_line":"authorised, to limit both the scope and duration of any compromise."},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"Authenticating From Above"},{"line_number":27,"context_line":"-------------------------"},{"line_number":28,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_0336c1e7","line":25,"updated":"2020-02-27 18:01:44.000000000","message":"A few options at this level...\n\nSimplest is this:\nhttps://kubernetes.io/docs/tasks/configure-pod-container/configure-service-account/#service-account-token-volume-projection\n\nspiffe would be the other option I can think of. both istio and spire support it:\nhttps://spiffe.io/","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"ff9f7a1f19b3cb05cd4fb2738ddcbadaa1b3ef32","unresolved":false,"context_lines":[{"line_number":52,"context_line":"Keystone_ is currently the only game in town for providing identity management"},{"line_number":53,"context_line":"for OpenStack services that are candidates for being included to provide some"},{"line_number":54,"context_line":"multi-tenant functionality in Teapot, such as :ref:`Manila"},{"line_number":55,"context_line":"\u003cteapot-storage-manila\u003e` and :ref:`Designate \u003cteapot-dns-designate\u003e`. Therefore"},{"line_number":56,"context_line":"using Keystone for all identity management on the management cluster would not"},{"line_number":57,"context_line":"only not increase complexity of the deployment, it would actually minimise it."},{"line_number":58,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_7dbf0dd0","line":55,"range":{"start_line":55,"start_character":35,"end_line":55,"end_character":44},"updated":"2020-02-27 23:08:52.000000000","message":"Same for Octavia, though keystone can be disabled if necessary.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":52,"context_line":"Keystone_ is currently the only game in town for providing identity management"},{"line_number":53,"context_line":"for OpenStack services that are candidates for being included to provide some"},{"line_number":54,"context_line":"multi-tenant functionality in Teapot, such as :ref:`Manila"},{"line_number":55,"context_line":"\u003cteapot-storage-manila\u003e` and :ref:`Designate \u003cteapot-dns-designate\u003e`. Therefore"},{"line_number":56,"context_line":"using Keystone for all identity management on the management cluster would not"},{"line_number":57,"context_line":"only not increase complexity of the deployment, it would actually minimise it."},{"line_number":58,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_915432cc","line":55,"range":{"start_line":55,"start_character":35,"end_line":55,"end_character":44},"in_reply_to":"1fa4df85_7dbf0dd0","updated":"2020-02-28 05:03:02.000000000","message":"Yep, also Cinder. This is not an exhaustive list.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":70,"context_line":"OpenStack-native API (similar to Magnum) for those who want it."},{"line_number":71,"context_line":""},{"line_number":72,"context_line":"Keystone also features quota management capabilities that could be reused to"},{"line_number":73,"context_line":"manage tenant quotas_."},{"line_number":74,"context_line":""},{"line_number":75,"context_line":"While there are generally significant impedance mismatches between the"},{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_034b2159","line":73,"updated":"2020-02-27 17:40:10.000000000","message":"You might be interested in this experiment I\u0027ve been working on https://github.com/cmurphy/keyhook","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":70,"context_line":"OpenStack-native API (similar to Magnum) for those who want it."},{"line_number":71,"context_line":""},{"line_number":72,"context_line":"Keystone also features quota management capabilities that could be reused to"},{"line_number":73,"context_line":"manage tenant quotas_."},{"line_number":74,"context_line":""},{"line_number":75,"context_line":"While there are generally significant impedance mismatches between the"},{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_d1738a7f","line":73,"in_reply_to":"1fa4df85_034b2159","updated":"2020-02-28 05:03:02.000000000","message":"That is very interesting indeed :)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":73,"context_line":"manage tenant quotas_."},{"line_number":74,"context_line":""},{"line_number":75,"context_line":"While there are generally significant impedance mismatches between the"},{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"},{"line_number":77,"context_line":"start and can prescribe custom policy models that mitigate the mismatch."},{"line_number":78,"context_line":"(Ongoing changes to default policies will likely smooth over these kinds of"},{"line_number":79,"context_line":"issues in regular OpenStack clouds also.)"},{"line_number":80,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_636535e1","line":77,"range":{"start_line":76,"start_character":69,"end_line":77,"end_character":5},"updated":"2020-02-27 17:40:10.000000000","message":"what I wouldn\u0027t give","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"},{"line_number":77,"context_line":"start and can prescribe custom policy models that mitigate the mismatch."},{"line_number":78,"context_line":"(Ongoing changes to default policies will likely smooth over these kinds of"},{"line_number":79,"context_line":"issues in regular OpenStack clouds also.)"},{"line_number":80,"context_line":""},{"line_number":81,"context_line":"Keystone Application Credentials allow users to create (potentially)"},{"line_number":82,"context_line":"short-lived credentials that an application can use to authenticate without the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_235f3d0f","line":79,"updated":"2020-02-27 17:40:10.000000000","message":"Could maybe wrangle https://www.openpolicyagent.org/ to act as some kind of bridge here","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"32eaafe0a70623fc2bb304872657b1b8cf70e4b4","unresolved":false,"context_lines":[{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"},{"line_number":77,"context_line":"start and can prescribe custom policy models that mitigate the mismatch."},{"line_number":78,"context_line":"(Ongoing changes to default policies will likely smooth over these kinds of"},{"line_number":79,"context_line":"issues in regular OpenStack clouds also.)"},{"line_number":80,"context_line":""},{"line_number":81,"context_line":"Keystone Application Credentials allow users to create (potentially)"},{"line_number":82,"context_line":"short-lived credentials that an application can use to authenticate without the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a3380d3d","line":79,"in_reply_to":"1fa4df85_235f3d0f","updated":"2020-02-27 18:06:13.000000000","message":"++","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"},{"line_number":77,"context_line":"start and can prescribe custom policy models that mitigate the mismatch."},{"line_number":78,"context_line":"(Ongoing changes to default policies will likely smooth over these kinds of"},{"line_number":79,"context_line":"issues in regular OpenStack clouds also.)"},{"line_number":80,"context_line":""},{"line_number":81,"context_line":"Keystone Application Credentials allow users to create (potentially)"},{"line_number":82,"context_line":"short-lived credentials that an application can use to authenticate without the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_51055ac1","line":79,"in_reply_to":"1fa4df85_235f3d0f","updated":"2020-02-28 05:03:02.000000000","message":"This seems a bit hand-wavy and I am not qualified to interpret ;)\n\nI\u0027m happy to add something if you\u0027d like to say more about it.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":6928,"name":"Ben Nemec","email":"openstack@nemebean.com","username":"bnemec"},"change_message_id":"7c138bca1b48b37838cde8bf644504e0b3cab89c","unresolved":false,"context_lines":[{"line_number":76,"context_line":"Kubernetes and Keystone models of authorisation, Project Teapot is a fresh"},{"line_number":77,"context_line":"start and can prescribe custom policy models that mitigate the mismatch."},{"line_number":78,"context_line":"(Ongoing changes to default policies will likely smooth over these kinds of"},{"line_number":79,"context_line":"issues in regular OpenStack clouds also.)"},{"line_number":80,"context_line":""},{"line_number":81,"context_line":"Keystone Application Credentials allow users to create (potentially)"},{"line_number":82,"context_line":"short-lived credentials that an application can use to authenticate without the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_55c0d850","line":79,"in_reply_to":"1fa4df85_51055ac1","updated":"2020-02-28 23:11:18.000000000","message":"We had a simple PoC that was intended to allow integration with OPA[0], but nobody considered it important enough to pursue at the time. :-/\n\nNot sure if that helps or not, but it seemed like it might be relevant.\n\n0: https://review.opendev.org/#/c/658675/","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":84,"context_line":"access to a wide range of unrelated corporate services) anywhere. Credentials"},{"line_number":85,"context_line":"provided to tenant clusters should be exclusively of this type, limited to the"},{"line_number":86,"context_line":"purpose assigned (e.g. credentials intended for accessing storage can only be"},{"line_number":87,"context_line":"used to access storage), and regularly rotated out and expired."},{"line_number":88,"context_line":""},{"line_number":89,"context_line":".. _teapot-idm-dex:"},{"line_number":90,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_23465d51","line":87,"updated":"2020-02-27 18:01:44.000000000","message":"keystone\u0027s sql database is a drawback to the option. Its not light a requirement. This is probably true of any openstack component in the stack though.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":84,"context_line":"access to a wide range of unrelated corporate services) anywhere. Credentials"},{"line_number":85,"context_line":"provided to tenant clusters should be exclusively of this type, limited to the"},{"line_number":86,"context_line":"purpose assigned (e.g. credentials intended for accessing storage can only be"},{"line_number":87,"context_line":"used to access storage), and regularly rotated out and expired."},{"line_number":88,"context_line":""},{"line_number":89,"context_line":".. _teapot-idm-dex:"},{"line_number":90,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_bec7c012","line":87,"in_reply_to":"1fa4df85_23465d51","updated":"2020-02-28 05:03:02.000000000","message":"We have a DB in Metal³ and it\u0027s not that bad. For Keystone it will need to be backed by persistent storage (e.g. a small Ceph cluster managed by Rook), but as you say the same applies as soon as we choose to use any OpenStack service at all, so I don\u0027t think Keystone adds to the problem.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":85,"context_line":"provided to tenant clusters should be exclusively of this type, limited to the"},{"line_number":86,"context_line":"purpose assigned (e.g. credentials intended for accessing storage can only be"},{"line_number":87,"context_line":"used to access storage), and regularly rotated out and expired."},{"line_number":88,"context_line":""},{"line_number":89,"context_line":".. _teapot-idm-dex:"},{"line_number":90,"context_line":""},{"line_number":91,"context_line":"Dex"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_83a5d106","line":88,"updated":"2020-02-27 17:40:10.000000000","message":"++","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":99,"context_line":"Keystone supports OpenID Connect as a federated identity provider, so it could"},{"line_number":100,"context_line":"still be used as the authorisation mechanism for services such as Manila and"},{"line_number":101,"context_line":"Designate using Dex as the source of truth. However, this inevitably adds"},{"line_number":102,"context_line":"additional moving parts. In general Keystone has difficultly handling"},{"line_number":103,"context_line":"revocation of federated users; since both components are under the same control"},{"line_number":104,"context_line":"in this case, some integration could be built to handle this better."},{"line_number":105,"context_line":""},{"line_number":106,"context_line":".. _teapot-idm-keycloak:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_837e3169","line":103,"range":{"start_line":102,"start_character":25,"end_line":103,"end_character":30},"updated":"2020-02-27 17:40:10.000000000","message":"specifics? Mostly these are bugs or under active development (https://specs.openstack.org/openstack/keystone-specs/specs/keystone/ussuri/expiring-group-memberships.html) so it\u0027s unfair to make this broad generalization.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":99,"context_line":"Keystone supports OpenID Connect as a federated identity provider, so it could"},{"line_number":100,"context_line":"still be used as the authorisation mechanism for services such as Manila and"},{"line_number":101,"context_line":"Designate using Dex as the source of truth. However, this inevitably adds"},{"line_number":102,"context_line":"additional moving parts. In general Keystone has difficultly handling"},{"line_number":103,"context_line":"revocation of federated users; since both components are under the same control"},{"line_number":104,"context_line":"in this case, some integration could be built to handle this better."},{"line_number":105,"context_line":""},{"line_number":106,"context_line":".. _teapot-idm-keycloak:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_d1f86ac5","line":103,"range":{"start_line":102,"start_character":25,"end_line":103,"end_character":30},"in_reply_to":"1fa4df85_837e3169","updated":"2020-02-28 05:03:02.000000000","message":"I\u0027ll check out that spec.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"f00080d99ef40603e21774b2188daea6736507e4","unresolved":false,"context_lines":[{"line_number":99,"context_line":"Keystone supports OpenID Connect as a federated identity provider, so it could"},{"line_number":100,"context_line":"still be used as the authorisation mechanism for services such as Manila and"},{"line_number":101,"context_line":"Designate using Dex as the source of truth. However, this inevitably adds"},{"line_number":102,"context_line":"additional moving parts. In general Keystone has difficultly handling"},{"line_number":103,"context_line":"revocation of federated users; since both components are under the same control"},{"line_number":104,"context_line":"in this case, some integration could be built to handle this better."},{"line_number":105,"context_line":""},{"line_number":106,"context_line":".. _teapot-idm-keycloak:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_12b6f6da","line":103,"range":{"start_line":102,"start_character":25,"end_line":103,"end_character":30},"in_reply_to":"1fa4df85_d1f86ac5","updated":"2020-02-28 21:59:46.000000000","message":"I think it\u0027s still an issue, because IIUC application credentials will expire if you don\u0027t log in regularly using your main credentials? Also \"Since Keystone doesn’t have access to the external identity provider to get notified when a users permissions are revoked, there will be a lag\".\n\nBut in our case Keystone *could* get notified because we control both ends of the federation, so we could eliminate the lag and we wouldn\u0027t need to have the expiry problem provided we config the backend not to expire (or have a ridiculously long expiry time). This is what the next line is referring to.\n\nNow that I\u0027ve paged this info back in to my brain, I\u0027ll make the text less vague.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":102,"context_line":"additional moving parts. In general Keystone has difficultly handling"},{"line_number":103,"context_line":"revocation of federated users; since both components are under the same control"},{"line_number":104,"context_line":"in this case, some integration could be built to handle this better."},{"line_number":105,"context_line":""},{"line_number":106,"context_line":".. _teapot-idm-keycloak:"},{"line_number":107,"context_line":""},{"line_number":108,"context_line":"Keycloak"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_436a79b3","line":105,"updated":"2020-02-27 18:01:44.000000000","message":"This is the stateless\u0027ish option, as it can store state directly in k8s objects. So it can be much lighter weight on admins of the system. It does authn well.\n\nIt does not have any functionality around authz though. So depending on the needs it may not be sufficient.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":110,"context_line":""},{"line_number":111,"context_line":"Keycloak_ is a more full-featured identity management service that shares all"},{"line_number":112,"context_line":"of the advantages and disadvantages of Dex in this application, but appears to"},{"line_number":113,"context_line":"be more complex to deploy."},{"line_number":114,"context_line":""},{"line_number":115,"context_line":"Keystone could federate to Keycloak as an identity management provider using"},{"line_number":116,"context_line":"either OpenID Connect or SAML."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_e39e05b7","line":113,"updated":"2020-02-27 18:01:44.000000000","message":"I\u0027ve used all 3, and have thus far had the most luck with Keycloak of late. Thou to date its been the most manual to install. My hope is most of the complexity of Keycloak will be hidden soon becuase of this: https://github.com/keycloak/keycloak-operator\nit will have kubernetes native api for managing the clients/scopes/realms/etc so automating it will be significantly easier soon.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":114,"context_line":""},{"line_number":115,"context_line":"Keystone could federate to Keycloak as an identity management provider using"},{"line_number":116,"context_line":"either OpenID Connect or SAML."},{"line_number":117,"context_line":""},{"line_number":118,"context_line":""},{"line_number":119,"context_line":".. _Keystone: https://docs.openstack.org/keystone/"},{"line_number":120,"context_line":".. _OpenID Connect: https://openid.net/connect/"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_030321ba","line":117,"updated":"2020-02-27 18:01:44.000000000","message":"I wonder if the keystone client pipeline plugin to the rest services could be replaced with an openid connect plugin so rather then needing all of keycloak, you could stub something in for the other openstack services.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":114,"context_line":""},{"line_number":115,"context_line":"Keystone could federate to Keycloak as an identity management provider using"},{"line_number":116,"context_line":"either OpenID Connect or SAML."},{"line_number":117,"context_line":""},{"line_number":118,"context_line":""},{"line_number":119,"context_line":".. _Keystone: https://docs.openstack.org/keystone/"},{"line_number":120,"context_line":".. _OpenID Connect: https://openid.net/connect/"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_feabf819","line":117,"in_reply_to":"1fa4df85_030321ba","updated":"2020-02-28 05:03:02.000000000","message":"This is a good point that should at least be mentioned as an option. I\u0027ll think about that a bit more.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":1,"context_line":"Teapot Identity Management"},{"line_number":2,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":3,"context_line":""},{"line_number":4,"context_line":"Teapot need not, and should not, impose any particular identity management"},{"line_number":5,"context_line":"system for tenant clusters. These are the clusters that applications and"},{"line_number":6,"context_line":"application developers/operators will routinely interact with, and the choice"},{"line_number":7,"context_line":"of identity management providers is completely up to the administrators of"},{"line_number":8,"context_line":"those clusters."},{"line_number":9,"context_line":""},{"line_number":10,"context_line":"Identity management in Teapot itself (i.e. the management cluster) is needed"},{"line_number":11,"context_line":"for two different purposes. While not strictly necessary, it would be"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_957ce71f","line":8,"range":{"start_line":4,"start_character":0,"end_line":8,"end_character":15},"updated":"2020-03-03 16:56:01.000000000","message":"We should not Impose a system, but possibly providing some built in system may make this easier for deployers.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":54,"context_line":"multi-tenant functionality in Teapot, such as :ref:`Manila"},{"line_number":55,"context_line":"\u003cteapot-storage-manila\u003e` and :ref:`Designate \u003cteapot-dns-designate\u003e`. Therefore"},{"line_number":56,"context_line":"using Keystone for all identity management on the management cluster would not"},{"line_number":57,"context_line":"only not increase complexity of the deployment, it would actually minimise it."},{"line_number":58,"context_line":""},{"line_number":59,"context_line":"An authorisation webhook for Kubernetes that uses Keystone is available in"},{"line_number":60,"context_line":"cloud-provider-openstack_. In general, OAuth seems to be preferred to webhooks"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_69d73d25","line":57,"range":{"start_line":57,"start_character":48,"end_line":57,"end_character":78},"updated":"2020-02-29 16:46:56.000000000","message":"+1","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":54,"context_line":"multi-tenant functionality in Teapot, such as :ref:`Manila"},{"line_number":55,"context_line":"\u003cteapot-storage-manila\u003e` and :ref:`Designate \u003cteapot-dns-designate\u003e`. Therefore"},{"line_number":56,"context_line":"using Keystone for all identity management on the management cluster would not"},{"line_number":57,"context_line":"only not increase complexity of the deployment, it would actually minimise it."},{"line_number":58,"context_line":""},{"line_number":59,"context_line":"An authorisation webhook for Kubernetes that uses Keystone is available in"},{"line_number":60,"context_line":"cloud-provider-openstack_. In general, OAuth seems to be preferred to webhooks"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_f58bdb1b","line":57,"range":{"start_line":57,"start_character":48,"end_line":57,"end_character":78},"in_reply_to":"1fa4df85_69d73d25","updated":"2020-03-03 16:56:01.000000000","message":"My preference for this would be keystone alright, for this reason.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"}],"doc/source/ideas/teapot/index.rst":[{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":8,"context_line":"on."},{"line_number":9,"context_line":""},{"line_number":10,"context_line":"When OpenStack was first designed, 10 years ago, the cloud computing landscape"},{"line_number":11,"context_line":"was a very different place. In the intervening period, OpenStack has amassed an"},{"line_number":12,"context_line":"enormous installed base of many thousands of users who all depend on it"},{"line_number":13,"context_line":"remaining essentially the same service, with backward-compatible APIs. If we"},{"line_number":14,"context_line":"designed an open source cloud platform without those restrictions and looking"},{"line_number":15,"context_line":"ahead to the 2020s, knowing everything we know today, what might it look like?"},{"line_number":16,"context_line":"And how could we build it without starting from scratch, but using existing"},{"line_number":17,"context_line":"open source technologies where possible? Project Teapot is one answer to these"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_396a0ab3","line":14,"range":{"start_line":11,"start_character":55,"end_line":14,"end_character":65},"updated":"2020-02-27 20:23:41.000000000","message":"Are these really restrictions that block the ability to do \"Teapot\" in the context of OpenStack?\nJust like we have multiple services for containers and compute today (nova, ironic, zun, magnum, etc.) could this not be just a new service(s) that extends OpenStack?\nThere is the \"core\" service concept where \"every OpenStack cloud has nova\", but that concept could change right? (I am talking about the trademark/branding requirements and not the base services list)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"671ec37aa02571934fa392741a2f633855ab3afd","unresolved":false,"context_lines":[{"line_number":8,"context_line":"on."},{"line_number":9,"context_line":""},{"line_number":10,"context_line":"When OpenStack was first designed, 10 years ago, the cloud computing landscape"},{"line_number":11,"context_line":"was a very different place. In the intervening period, OpenStack has amassed an"},{"line_number":12,"context_line":"enormous installed base of many thousands of users who all depend on it"},{"line_number":13,"context_line":"remaining essentially the same service, with backward-compatible APIs. If we"},{"line_number":14,"context_line":"designed an open source cloud platform without those restrictions and looking"},{"line_number":15,"context_line":"ahead to the 2020s, knowing everything we know today, what might it look like?"},{"line_number":16,"context_line":"And how could we build it without starting from scratch, but using existing"},{"line_number":17,"context_line":"open source technologies where possible? Project Teapot is one answer to these"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_59510662","line":14,"range":{"start_line":11,"start_character":55,"end_line":14,"end_character":65},"in_reply_to":"1fa4df85_396a0ab3","updated":"2020-02-27 21:31:12.000000000","message":"For OpenStack there still is. But not everything under the umbrella is OpenStack anymore. Kata for example. Is Project Teapot its own thing in the same way Kata is? I\u0027m thinking so.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":8,"context_line":"on."},{"line_number":9,"context_line":""},{"line_number":10,"context_line":"When OpenStack was first designed, 10 years ago, the cloud computing landscape"},{"line_number":11,"context_line":"was a very different place. In the intervening period, OpenStack has amassed an"},{"line_number":12,"context_line":"enormous installed base of many thousands of users who all depend on it"},{"line_number":13,"context_line":"remaining essentially the same service, with backward-compatible APIs. If we"},{"line_number":14,"context_line":"designed an open source cloud platform without those restrictions and looking"},{"line_number":15,"context_line":"ahead to the 2020s, knowing everything we know today, what might it look like?"},{"line_number":16,"context_line":"And how could we build it without starting from scratch, but using existing"},{"line_number":17,"context_line":"open source technologies where possible? Project Teapot is one answer to these"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_79288233","line":14,"range":{"start_line":11,"start_character":55,"end_line":14,"end_character":65},"in_reply_to":"1fa4df85_396a0ab3","updated":"2020-02-28 05:03:02.000000000","message":"TBH I think that worrying about branding and governance and stuff is just getting way ahead of ourselves. At the moment I\u0027m only interested in figuring out whether there\u0027s a good idea in here somewhere.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":36,"context_line":"is to be ubiquitous; Teapot\u0027s is narrower. In the 2020s, Kubernetes will be"},{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_281ddabc","line":39,"range":{"start_line":39,"start_character":16,"end_line":39,"end_character":44},"updated":"2020-02-27 17:40:10.000000000","message":"I think the kubernetes community is not fully decided that multi-tenancy is out of scope https://github.com/kubernetes/community/tree/master/wg-multitenancy https://github.com/kubernetes-sigs/multi-tenancy","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":36,"context_line":"is to be ubiquitous; Teapot\u0027s is narrower. In the 2020s, Kubernetes will be"},{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_26a50605","line":39,"range":{"start_line":39,"start_character":16,"end_line":39,"end_character":44},"in_reply_to":"1fa4df85_281ddabc","updated":"2020-02-28 05:03:02.000000000","message":"I think they mainly decided to repurpose the term to describe a thing k8s could actually do. (The multitenancy SIG does talk about hard multitenancy, but they are nowhere close to doing anything about it afaict.) In reality they decided when they did the initial design IMHO. Multi-tenancy, like security, is not a feature you can just slather on top. It has to be baked in to the design and in k8s it wasn\u0027t.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""},{"line_number":43,"context_line":"In addition to mediating access to a physical data center, another important"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_19b4ee34","line":40,"range":{"start_line":40,"start_character":61,"end_line":40,"end_character":72},"updated":"2020-02-27 20:23:41.000000000","message":"I think we should have a section that defines this.\nI.e. It\u0027s an API, with \"these\" reference implementations.\nOr something similar. As I read through I feel the scope is unclear. Some are APIs and integration points, some are reference implementations, etc.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""},{"line_number":43,"context_line":"In addition to mediating access to a physical data center, another important"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_3d875512","line":40,"range":{"start_line":40,"start_character":61,"end_line":40,"end_character":72},"in_reply_to":"1fa4df85_19b4ee34","updated":"2020-02-28 05:03:02.000000000","message":"This is very good feedback.\n\nI think of it as kind of an integration project more than anything. It\u0027ll take stuff that exists as much as possible, fill in the gaps where necessary, and bundle everything up in a consumable way (and test it!). I\u0027m not sure how best to express that.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":45,"context_line":"service). Teapot itself can be used to provide a managed service -- Kubernetes"},{"line_number":46,"context_line":"(though it could equally be configured to provide fully user-controlled tenant"},{"line_number":47,"context_line":"clusters). A secondary goal is to make Teapot a platform that cloud providers"},{"line_number":48,"context_line":"could use to offer other kinds of managed service as well. Teapot is an easier"},{"line_number":49,"context_line":"base than OpenStack on which to deploy such services because it is itself based"},{"line_number":50,"context_line":"on Kubernetes."},{"line_number":51,"context_line":""},{"line_number":52,"context_line":"Non-Goals"},{"line_number":53,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_599346a4","line":50,"range":{"start_line":48,"start_character":72,"end_line":50,"end_character":14},"updated":"2020-02-27 20:23:41.000000000","message":"I don\u0027t buy this. Using a base of kubernetes does not mean anything is \"easier\" in my opinion. Plus most kubernetes deployments rely on the underlying cloud platform, like OpenStack, for some of the deployment \"hard parts\".","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"671ec37aa02571934fa392741a2f633855ab3afd","unresolved":false,"context_lines":[{"line_number":45,"context_line":"service). Teapot itself can be used to provide a managed service -- Kubernetes"},{"line_number":46,"context_line":"(though it could equally be configured to provide fully user-controlled tenant"},{"line_number":47,"context_line":"clusters). A secondary goal is to make Teapot a platform that cloud providers"},{"line_number":48,"context_line":"could use to offer other kinds of managed service as well. Teapot is an easier"},{"line_number":49,"context_line":"base than OpenStack on which to deploy such services because it is itself based"},{"line_number":50,"context_line":"on Kubernetes."},{"line_number":51,"context_line":""},{"line_number":52,"context_line":"Non-Goals"},{"line_number":53,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_f94f92ba","line":50,"range":{"start_line":48,"start_character":72,"end_line":50,"end_character":14},"in_reply_to":"1fa4df85_599346a4","updated":"2020-02-27 21:31:12.000000000","message":"Deploying managed services such as say, a mysql instance, on top of Kubernets is significantly easier then doing so on top of OpenStack. If you deploy teapot, then you have a k8s. I think that\u0027s the argument, and I agree with it.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":45,"context_line":"service). Teapot itself can be used to provide a managed service -- Kubernetes"},{"line_number":46,"context_line":"(though it could equally be configured to provide fully user-controlled tenant"},{"line_number":47,"context_line":"clusters). A secondary goal is to make Teapot a platform that cloud providers"},{"line_number":48,"context_line":"could use to offer other kinds of managed service as well. Teapot is an easier"},{"line_number":49,"context_line":"base than OpenStack on which to deploy such services because it is itself based"},{"line_number":50,"context_line":"on Kubernetes."},{"line_number":51,"context_line":""},{"line_number":52,"context_line":"Non-Goals"},{"line_number":53,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fd6e9d94","line":50,"range":{"start_line":48,"start_character":72,"end_line":50,"end_character":14},"in_reply_to":"1fa4df85_f94f92ba","updated":"2020-02-28 05:03:02.000000000","message":"What Kevin said. It\u0027s *much* easier.\n\n* You have the operator-sdk right there to manage it. In OpenStack you either have to use Heat (Magnum, Sahara) which is a very very poor substitute for various reasons, or you have to cobble together something out of MySQL and RabbitMQ (Trove). And in either event you have to run the thing on Nova. That\u0027s why e.g. Trove is dead even though it\u0027s the most-wanted feature of every public cloud (and they want it because users want it).\n* There\u0027s an excellent chance that somebody else will write and maintain the operator for you and all you need is some glue.\n* You may even be able to write the glue only once and connect it to arbitrary operators (so the same very small service could theoretically provide equivalents of Trove, Sahara, Cue, and a hundred other things).","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":57,"context_line":"are large enough to be able to utilise at least one (and usually more than one)"},{"line_number":58,"context_line":"entire bare-metal server, because managing virtual machines is not a goal."},{"line_number":59,"context_line":""},{"line_number":60,"context_line":"Smaller deployments that nevertheless require hard multi-tenancy would be"},{"line_number":61,"context_line":"better off with OpenStack."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Smaller deployments that do not require hard multi-tenancy would be better off"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_c8c76612","line":60,"range":{"start_line":60,"start_character":46,"end_line":60,"end_character":64},"updated":"2020-02-27 17:40:10.000000000","message":"Are you stealing this term from wg-multitenancy? Would be good to redefine it here.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":57,"context_line":"are large enough to be able to utilise at least one (and usually more than one)"},{"line_number":58,"context_line":"entire bare-metal server, because managing virtual machines is not a goal."},{"line_number":59,"context_line":""},{"line_number":60,"context_line":"Smaller deployments that nevertheless require hard multi-tenancy would be"},{"line_number":61,"context_line":"better off with OpenStack."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Smaller deployments that do not require hard multi-tenancy would be better off"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a66336dc","line":60,"range":{"start_line":60,"start_character":46,"end_line":60,"end_character":64},"in_reply_to":"1fa4df85_c8c76612","updated":"2020-02-28 05:03:02.000000000","message":"I guess I had just internalised it. I like their definition though.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":36,"context_line":"is to be ubiquitous; Teapot\u0027s is narrower. In the 2020s, Kubernetes will be"},{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_c9f991c8","line":39,"range":{"start_line":39,"start_character":31,"end_line":39,"end_character":44},"updated":"2020-02-29 16:46:56.000000000","message":"\"hard\" multi-tenancy with a definition or pointer to same?  Or: infrastructure multi-tenancy?  Not sure what is right here, K8s has an active multi-tenancy workgroup and Kubecon always has presentations on multi-tenancy, but it ain\u0027t what we understand as multi-tenancy coming from OpenStack. So I\u0027m very glad that you lead with this point!  But it may be worth spelling this out a bit for readers who come from a non-IAAS background.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":36,"context_line":"is to be ubiquitous; Teapot\u0027s is narrower. In the 2020s, Kubernetes will be"},{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_35aad370","line":39,"range":{"start_line":39,"start_character":31,"end_line":39,"end_character":44},"in_reply_to":"1fa4df85_6da3b69f","updated":"2020-03-03 16:56:01.000000000","message":"Yeah, I think expanding on the definition (including the Top of Rack switch isolation / non trust of hosts post depolyment / storage isolation) would be good.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"229445ef999152ea8be705d7ed289d6d74cb81b6","unresolved":false,"context_lines":[{"line_number":36,"context_line":"is to be ubiquitous; Teapot\u0027s is narrower. In the 2020s, Kubernetes will be"},{"line_number":37,"context_line":"ubiquitous. However, Kubernetes\u0027 separation of responsibilities with the"},{"line_number":38,"context_line":"underlying cloud mean that some important capabilities are considered out of"},{"line_number":39,"context_line":"scope for it -- most obviously multi-tenancy. Teapot\u0027s primary mission is to"},{"line_number":40,"context_line":"fill those gaps with an open source solution, by providing a cloud layer to"},{"line_number":41,"context_line":"manage a physical data center beneath Kubernetes."},{"line_number":42,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_6da3b69f","line":39,"range":{"start_line":39,"start_character":31,"end_line":39,"end_character":44},"in_reply_to":"1fa4df85_c9f991c8","updated":"2020-03-02 23:17:47.000000000","message":"I added a short definition in line 60 after earlier feedback, but this is the first place it\u0027s mentioned so probably a good idea to move it up here.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":47,"context_line":"clusters). A secondary goal is to make Teapot a platform that cloud providers"},{"line_number":48,"context_line":"could use to offer other kinds of managed service as well. Teapot is an easier"},{"line_number":49,"context_line":"base than OpenStack on which to deploy such services because it is itself based"},{"line_number":50,"context_line":"on Kubernetes."},{"line_number":51,"context_line":""},{"line_number":52,"context_line":"Non-Goals"},{"line_number":53,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_a9fe95ab","line":50,"range":{"start_line":50,"start_character":13,"end_line":50,"end_character":14},"updated":"2020-02-29 16:46:56.000000000","message":"where end users can leverage its operator framework to deploy and maintain the service and where they are not required to boot up a VM and be its system administrator just in order to use the service.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":66,"context_line":"entire bare-metal server, because managing virtual machines is not a goal."},{"line_number":67,"context_line":""},{"line_number":68,"context_line":"Smaller deployments that nevertheless require hard multi-tenancy (that is to"},{"line_number":69,"context_line":"say, zero trust required between tenants) would be better off with OpenStack."},{"line_number":70,"context_line":""},{"line_number":71,"context_line":"Smaller deployments that do not require hard multi-tenancy would be better off"},{"line_number":72,"context_line":"running a single standalone Kubernetes cluster."}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_ce86f766","line":69,"range":{"start_line":69,"start_character":0,"end_line":69,"end_character":77},"updated":"2020-03-09 20:18:44.000000000","message":"It would be good to highlight why for context sharing.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":66,"context_line":"entire bare-metal server, because managing virtual machines is not a goal."},{"line_number":67,"context_line":""},{"line_number":68,"context_line":"Smaller deployments that nevertheless require hard multi-tenancy (that is to"},{"line_number":69,"context_line":"say, zero trust required between tenants) would be better off with OpenStack."},{"line_number":70,"context_line":""},{"line_number":71,"context_line":"Smaller deployments that do not require hard multi-tenancy would be better off"},{"line_number":72,"context_line":"running a single standalone Kubernetes cluster."}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_f4be18ac","line":69,"range":{"start_line":69,"start_character":0,"end_line":69,"end_character":77},"in_reply_to":"1fa4df85_ce86f766","updated":"2020-03-10 18:56:43.000000000","message":"It\u0027s sort-of explained by the previous paragraph - if a sizable proportion of tenants are too small to make use of an entire baremetal machine, then utilisation will be too low to make economic sense. That could be made more explicit though.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"5fc8d5dd1fb1f26b42863ddd87dc9f31935d4f76","unresolved":false,"context_lines":[{"line_number":66,"context_line":"entire bare-metal server, because managing virtual machines is not a goal."},{"line_number":67,"context_line":""},{"line_number":68,"context_line":"Smaller deployments that nevertheless require hard multi-tenancy (that is to"},{"line_number":69,"context_line":"say, zero trust required between tenants) would be better off with OpenStack."},{"line_number":70,"context_line":""},{"line_number":71,"context_line":"Smaller deployments that do not require hard multi-tenancy would be better off"},{"line_number":72,"context_line":"running a single standalone Kubernetes cluster."}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_0246ff22","line":69,"range":{"start_line":69,"start_character":0,"end_line":69,"end_character":77},"in_reply_to":"1fa4df85_f4be18ac","updated":"2020-03-11 16:19:43.000000000","message":"As proposed, I agree project teapot would not handle this use case sufficiently.\n\nWith a little tweaking, and the addition of kubevirt deployed in the mgmt cluster, it may actually work... Maybe we don\u0027t cover that at the moment. It may just complicate things.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":103,"context_line":"Teapot is designed to be radically simpler than OpenStack to :doc:`install"},{"line_number":104,"context_line":"\u003cinstallation\u003e` and operate. By running on the same technology stack as the"},{"line_number":105,"context_line":"tenant clusters it deploys, it allows a common set of skills to be applied to"},{"line_number":106,"context_line":"the operation of both applications and the underlying infrastructure. By"},{"line_number":107,"context_line":"eschewing direct management of virtualisation it avoids having to shoehorn"},{"line_number":108,"context_line":"bare-metal management into a virtualisation context or vice-versa, and"},{"line_number":109,"context_line":"eliminates entire layers of networking abstractions."},{"line_number":110,"context_line":""},{"line_number":111,"context_line":"At the same time, Teapot should be able to :doc:`interoperate with OpenStack"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_0e6c0f9a","line":108,"range":{"start_line":106,"start_character":69,"end_line":108,"end_character":65},"updated":"2020-03-09 20:18:44.000000000","message":"So this is a direct call out to the compute layer and BMaaS layer. It would be good to delineate the cases instead of assemble them together and indicate that direct usage shall be the same as installation. We know from experience with TripleO, this is not the case because the substrate layer needed always comes with particular architectural requirements, and if we\u0027re to avoid the mistakes of the past, we need to ensure we\u0027re more prescriptive or better at delineation to help enable self-customization.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":103,"context_line":"Teapot is designed to be radically simpler than OpenStack to :doc:`install"},{"line_number":104,"context_line":"\u003cinstallation\u003e` and operate. By running on the same technology stack as the"},{"line_number":105,"context_line":"tenant clusters it deploys, it allows a common set of skills to be applied to"},{"line_number":106,"context_line":"the operation of both applications and the underlying infrastructure. By"},{"line_number":107,"context_line":"eschewing direct management of virtualisation it avoids having to shoehorn"},{"line_number":108,"context_line":"bare-metal management into a virtualisation context or vice-versa, and"},{"line_number":109,"context_line":"eliminates entire layers of networking abstractions."},{"line_number":110,"context_line":""},{"line_number":111,"context_line":"At the same time, Teapot should be able to :doc:`interoperate with OpenStack"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_46265e6a","line":108,"range":{"start_line":106,"start_character":69,"end_line":108,"end_character":65},"in_reply_to":"1fa4df85_0e6c0f9a","updated":"2020-03-10 18:56:43.000000000","message":"The two big lessons for me from TripleO are these:\n\n1. People don\u0027t want to manage their data centre like it\u0027s a bunch of resources in the cloud. They care very deeply about the individual names and IP addresses of each and every ******* server, because they\u0027re physical things in a rack somewhere with blinkenlights that you occasionally have to go find. Nor is the data centre an effectively-infinite pool of resources from which you get to select some; it\u0027s very much a fixed pool and you have to manage all of it. The orchestration capabilities in OpenStack are completely inadequate to the task of managing this.\n2. TripleO as originally conceived would have used the undercloud only as a bootstrap before eventually pivoting to have the cloud manage itself. If you never get to this point (which TripleO never did) then you end up maintaining double the complexity while losing a lot of the benefits of reusing the same services (e.g. HA).\n\nI think Teapot addresses these pretty well:\n\n1. The orchestration model in Kubernetes is much more powerful (though this makes it easier to shoot yourself in the foot). It manages resources individually instead of in groups, so each one can be a special snowflake. There\u0027s some rough edges because k8s wasn\u0027t really designed to run on baremetal, but Metal³ will have to deal with them regardless of whether we do Teapot.\n2. The installation really can be just bootstrapped, rather than running a second copy in parallel. Both tenant-assigned machines and management cluster machines are managed through the Cluster API in the same way.\n\nI\u0027d be interested to hear what other lessons you learned from TripleO and whether or not you think they\u0027re adequately addressed here.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"5fc8d5dd1fb1f26b42863ddd87dc9f31935d4f76","unresolved":false,"context_lines":[{"line_number":103,"context_line":"Teapot is designed to be radically simpler than OpenStack to :doc:`install"},{"line_number":104,"context_line":"\u003cinstallation\u003e` and operate. By running on the same technology stack as the"},{"line_number":105,"context_line":"tenant clusters it deploys, it allows a common set of skills to be applied to"},{"line_number":106,"context_line":"the operation of both applications and the underlying infrastructure. By"},{"line_number":107,"context_line":"eschewing direct management of virtualisation it avoids having to shoehorn"},{"line_number":108,"context_line":"bare-metal management into a virtualisation context or vice-versa, and"},{"line_number":109,"context_line":"eliminates entire layers of networking abstractions."},{"line_number":110,"context_line":""},{"line_number":111,"context_line":"At the same time, Teapot should be able to :doc:`interoperate with OpenStack"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_a548555c","line":108,"range":{"start_line":106,"start_character":69,"end_line":108,"end_character":65},"in_reply_to":"1fa4df85_46265e6a","updated":"2020-03-11 16:19:43.000000000","message":"I\u0027d lighten lesson 1 a little and say, while people would want to manage their datacenter like its a bunch of resources in the cloud, the messiness/reality of physical machines gets in the way and those details end up being important after all. IF a solid/reliable way could be found to manage them like a cloud, that would be great. This is one of the things I think K8s does relatively well. By being able to target workload to nodes via labels, you can land particular drivers on particular types of nodes. Deal with some nodes having different types of storage then others, and other such uglies that comes from physical systems. You can remove some of the hard abstractions that happen if you tried just cramming bare metal through a virtualization abstraction. In k8s\u0027s api, its not all or nothing like a vm abstraction forces you through. Individual types of holes can be poked in the abstraction that is a pod as needed (hostNetwork\u003dtrue, hostPID\u003dtrue, hostPath volumes, etc. But in the end, the whole cluster still kind of acts as a cloud. single pane of glass to manage everything.\n\nAs far as I know, tripleo never actually met its goal of seeding either. You should be able to use a minimal operating system to install the final operating system. You never were really able to use tripleo to install tripleo.\nrunning minikube+cluster-api+driver on a laptop should be enough to deploy the management cluster and pivot management to the management cluster, fulfilling the seeding process. The management cluster can then also manage other clusters in a Kubernetes as a Service way.\n\nI strongly agree that the orchestration system in Kubernetes enables this use case where TripeO was hamstrung by lack of functionality in OpenStack\u0027s orchestration capabilities. So Teapot would not have the same issues TripeO had.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"}],"doc/source/ideas/teapot/installation.rst":[{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":18,"context_line":"franca*. There should be a single official installer and third parties are"},{"line_number":19,"context_line":"encouraged to add extensions and customisations by adding Resources and"},{"line_number":20,"context_line":"Operators through the Kubernetes API."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Implementation Options"},{"line_number":23,"context_line":"----------------------"},{"line_number":24,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1a2052b0","line":21,"updated":"2020-02-28 16:00:22.000000000","message":"The simpler, the better.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":5,"context_line":"cluster (the management cluster). This is a great advantage for ease of"},{"line_number":6,"context_line":"installation, because Kubernetes is renowned for its simplicity in"},{"line_number":7,"context_line":"bootstrapping. Many, many (perhaps too many) tools already exist for"},{"line_number":8,"context_line":"bootstrapping a Kubernetes cluster, so there is no need to reinvent them."},{"line_number":9,"context_line":""},{"line_number":10,"context_line":"However, Teapot is designed to be the system that provides cloud services to"},{"line_number":11,"context_line":"bare-metal Kubernetes clusters, and while it is possible to run the management"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_35f17396","line":8,"range":{"start_line":8,"start_character":36,"end_line":8,"end_character":73},"updated":"2020-03-03 16:56:01.000000000","message":"\\o/","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"}],"doc/source/ideas/teapot/load-balancing.rst":[{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":2,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":3,"context_line":""},{"line_number":4,"context_line":"Load balancers are one of the things that Kubernetes expects to be provided by"},{"line_number":5,"context_line":"the underlying cloud. No multi-tenant bare-metal solutions for this exist, so"},{"line_number":6,"context_line":"project Teapot would need to provide one. Ideally an external load balancer"},{"line_number":7,"context_line":"would act as an abstraction over what could be either a tenant-specific"},{"line_number":8,"context_line":"software reverse proxy or multi-tenant-safe access to a hardware (or virtual)"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_7e41480b","line":5,"range":{"start_line":5,"start_character":22,"end_line":5,"end_character":73},"updated":"2020-02-27 20:23:41.000000000","message":"I disagree.\nhttps://clouddocs.f5networks.net/containers/v1/drafts/kctlr-multi-tenancy.html\nI will say that none of them are \"great\" solutions and scale well.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":2,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":3,"context_line":""},{"line_number":4,"context_line":"Load balancers are one of the things that Kubernetes expects to be provided by"},{"line_number":5,"context_line":"the underlying cloud. No multi-tenant bare-metal solutions for this exist, so"},{"line_number":6,"context_line":"project Teapot would need to provide one. Ideally an external load balancer"},{"line_number":7,"context_line":"would act as an abstraction over what could be either a tenant-specific"},{"line_number":8,"context_line":"software reverse proxy or multi-tenant-safe access to a hardware (or virtual)"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_79d162cd","line":5,"range":{"start_line":5,"start_character":22,"end_line":5,"end_character":73},"in_reply_to":"1fa4df85_7e41480b","updated":"2020-02-28 05:03:02.000000000","message":"That\u0027s specific to one vendor though. It\u0027s not something that\u0027s abstracted by Kubernetes.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":5,"context_line":"the underlying cloud. No multi-tenant bare-metal solutions for this exist, so"},{"line_number":6,"context_line":"project Teapot would need to provide one. Ideally an external load balancer"},{"line_number":7,"context_line":"would act as an abstraction over what could be either a tenant-specific"},{"line_number":8,"context_line":"software reverse proxy or multi-tenant-safe access to a hardware (or virtual)"},{"line_number":9,"context_line":"load balancer."},{"line_number":10,"context_line":""},{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1ebb740e","line":8,"range":{"start_line":8,"start_character":9,"end_line":8,"end_character":22},"updated":"2020-02-27 20:23:41.000000000","message":"reverse proxies and load balancers are two very different things. Let\u0027s not confuse that here.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":5,"context_line":"the underlying cloud. No multi-tenant bare-metal solutions for this exist, so"},{"line_number":6,"context_line":"project Teapot would need to provide one. Ideally an external load balancer"},{"line_number":7,"context_line":"would act as an abstraction over what could be either a tenant-specific"},{"line_number":8,"context_line":"software reverse proxy or multi-tenant-safe access to a hardware (or virtual)"},{"line_number":9,"context_line":"load balancer."},{"line_number":10,"context_line":""},{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_5984a6b1","line":8,"range":{"start_line":8,"start_character":9,"end_line":8,"end_character":22},"in_reply_to":"1fa4df85_1ebb740e","updated":"2020-02-28 05:03:02.000000000","message":"I was trying to think of a generic term for projects like HAProxy and nginx, which obviously are both. I agree this is not helping clarity though.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":10,"context_line":""},{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"},{"line_number":12,"context_line":"Kubernetes. The first is to create a Service_ with type |LoadBalancer|_. This"},{"line_number":13,"context_line":"is the older way of doing things but is still useful for low-level plumbing,"},{"line_number":14,"context_line":"and may be required for non-HTTP(S) protocols. The preferred (though nominally"},{"line_number":15,"context_line":"beta) way is to create an Ingress_. The Ingress API allows for more"},{"line_number":16,"context_line":"sophisticated control (such as adding TLS termination), and can allow multiple"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_ba91fe9b","line":13,"range":{"start_line":13,"start_character":7,"end_line":13,"end_character":16},"updated":"2020-02-28 16:00:22.000000000","message":"nit: older? Not sure it\u0027s the right term.\nIt\u0027s still useful today (and not only for low-level plumbing), and is also base for ingress controllers.\n\nIt\u0027s not meant to be the same layer, we shouldn\u0027t confuse those. Let\u0027s remove this wrong connotation.\n\nUnless I am blatently mistaken, but then you have to teach me :D","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":10,"context_line":""},{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"},{"line_number":12,"context_line":"Kubernetes. The first is to create a Service_ with type |LoadBalancer|_. This"},{"line_number":13,"context_line":"is the older way of doing things but is still useful for low-level plumbing,"},{"line_number":14,"context_line":"and may be required for non-HTTP(S) protocols. The preferred (though nominally"},{"line_number":15,"context_line":"beta) way is to create an Ingress_. The Ingress API allows for more"},{"line_number":16,"context_line":"sophisticated control (such as adding TLS termination), and can allow multiple"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_359f6104","line":13,"range":{"start_line":13,"start_character":7,"end_line":13,"end_character":16},"in_reply_to":"1fa4df85_ba91fe9b","updated":"2020-02-28 18:24:32.000000000","message":"Here\u0027s where I\u0027m going with this...\n\nIf you have a public-facing HTTP service, there is literally no reason to use a LoadBalancer service for it. This won\u0027t do SSL for you (unless you use some proprietary annotation that only AWS supports) and in the cloud it chews through load balancers (hence $$) like crazy because you get one for every service rather than combining them all into one. Before Ingress existed, there was a reason to do this (hence, the \"older\" way).\n\nIf you have a non-HTTP service then you might be forced to use this (although in practice probably not).\n\n\u003e It\u0027s not meant to be the same layer\n\nIsn\u0027t this the same thing as saying it\u0027s useful for low(er)-level plumbing?\n\nI don\u0027t think either of us is mistaken (or at least, we\u0027re not thinking inconsistent things ;), it\u0027s a matter of finding the right words to communicate it. Totally open to suggestions.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"},{"line_number":12,"context_line":"Kubernetes. The first is to create a Service_ with type |LoadBalancer|_. This"},{"line_number":13,"context_line":"is the older way of doing things but is still useful for low-level plumbing,"},{"line_number":14,"context_line":"and may be required for non-HTTP(S) protocols. The preferred (though nominally"},{"line_number":15,"context_line":"beta) way is to create an Ingress_. The Ingress API allows for more"},{"line_number":16,"context_line":"sophisticated control (such as adding TLS termination), and can allow multiple"},{"line_number":17,"context_line":"services to share a single external load balancer (including across different"},{"line_number":18,"context_line":"DNS names), and hence a single IP address."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_de087cda","line":15,"range":{"start_line":14,"start_character":61,"end_line":15,"end_character":5},"updated":"2020-02-27 20:23:41.000000000","message":"I would not say that Ingress controllers are \"beta\" any longer. Maybe two years ago.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"},{"line_number":12,"context_line":"Kubernetes. The first is to create a Service_ with type |LoadBalancer|_. This"},{"line_number":13,"context_line":"is the older way of doing things but is still useful for low-level plumbing,"},{"line_number":14,"context_line":"and may be required for non-HTTP(S) protocols. The preferred (though nominally"},{"line_number":15,"context_line":"beta) way is to create an Ingress_. The Ingress API allows for more"},{"line_number":16,"context_line":"sophisticated control (such as adding TLS termination), and can allow multiple"},{"line_number":17,"context_line":"services to share a single external load balancer (including across different"},{"line_number":18,"context_line":"DNS names), and hence a single IP address."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_39044a4a","line":15,"range":{"start_line":14,"start_character":61,"end_line":15,"end_character":5},"in_reply_to":"1fa4df85_de087cda","updated":"2020-02-28 05:03:02.000000000","message":"Check the top of https://kubernetes.io/docs/concepts/services-networking/ingress/\n\nThings stay in beta for a long time in k8s-land. But I agree with you in principle and that\u0027s why I said \"nominally\".","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"671ec37aa02571934fa392741a2f633855ab3afd","unresolved":false,"context_lines":[{"line_number":11,"context_line":"There are two ways for an application to request an external load balancer in"},{"line_number":12,"context_line":"Kubernetes. The first is to create a Service_ with type |LoadBalancer|_. This"},{"line_number":13,"context_line":"is the older way of doing things but is still useful for low-level plumbing,"},{"line_number":14,"context_line":"and may be required for non-HTTP(S) protocols. The preferred (though nominally"},{"line_number":15,"context_line":"beta) way is to create an Ingress_. The Ingress API allows for more"},{"line_number":16,"context_line":"sophisticated control (such as adding TLS termination), and can allow multiple"},{"line_number":17,"context_line":"services to share a single external load balancer (including across different"},{"line_number":18,"context_line":"DNS names), and hence a single IP address."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_39b20a96","line":15,"range":{"start_line":14,"start_character":61,"end_line":15,"end_character":5},"in_reply_to":"1fa4df85_de087cda","updated":"2020-02-27 21:31:12.000000000","message":"The api still is beta. they are working on replacing the ingress api with a new api. Though not clear how many years that will take.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":15,"context_line":"beta) way is to create an Ingress_. The Ingress API allows for more"},{"line_number":16,"context_line":"sophisticated control (such as adding TLS termination), and can allow multiple"},{"line_number":17,"context_line":"services to share a single external load balancer (including across different"},{"line_number":18,"context_line":"DNS names), and hence a single IP address."},{"line_number":19,"context_line":""},{"line_number":20,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":21,"context_line":"external load balancers, including TLS termination, using the underlying"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_bed34038","line":18,"updated":"2020-02-27 20:23:41.000000000","message":"One way to help understand the difference between a \"service\" load balancer and an \"ingress\" load balancer is to think about it in terms of how they were implemented originally.\nA \"service\" load balancer is essentially a NAT with some basic load balancing features.\nAn \"ingress\" load balancer is intended for edge or external load balancing, where you need full L7 load balancing capabilities.\nFeatures for both vary greatly depending on the underlying provider.\nOctavia targets the use case that translates best to an \"ingress\" (full L7 capabilities) type in kubernetes, though it is being used for \"service\" style use cases (for example in OpenShift).","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":26,"context_line":"When using a Service of type |LoadBalancer| (rather than an Ingress), there is"},{"line_number":27,"context_line":"no standardised way of requesting TLS termination (some cloud providers permit"},{"line_number":28,"context_line":"it using an annotation), so supporting this use case is not a high priority."},{"line_number":29,"context_line":"The |LoadBalancer| Service type in general should be supported, however (though"},{"line_number":30,"context_line":"there are existing Kubernetes offerings where it is not)."},{"line_number":31,"context_line":""},{"line_number":32,"context_line":"Implementation options"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_dae15a2c","line":29,"range":{"start_line":29,"start_character":71,"end_line":29,"end_character":72},"updated":"2020-02-28 16:00:22.000000000","message":"Yes, that sounds fair. Teapot should provide both, IMO.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":46,"context_line":"provides the high-availability aspects of load balancing, not actual balancing."},{"line_number":47,"context_line":"All incoming traffic is directed to a single node; from there kubeproxy"},{"line_number":48,"context_line":"distributes it to the services that handle it. However, should the node die,"},{"line_number":49,"context_line":"traffic rapidly fails over to another node."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"This form of load balancing does not support offloading TLS termination,"},{"line_number":52,"context_line":"results in large amounts of East-West traffic, and consumes resources from the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_5a2b2a2f","line":49,"range":{"start_line":49,"start_character":42,"end_line":49,"end_character":43},"updated":"2020-02-28 16:00:22.000000000","message":"Was it a single node by cluster? I thought it was a single node by service. Which means effectively multiple nodes, which sounds less scary that \"all the incoming traffic is directed to a single node\". I should double check, but maybe rewriting it to \"all the incoming traffic for a service is directed to a single node\"","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":46,"context_line":"provides the high-availability aspects of load balancing, not actual balancing."},{"line_number":47,"context_line":"All incoming traffic is directed to a single node; from there kubeproxy"},{"line_number":48,"context_line":"distributes it to the services that handle it. However, should the node die,"},{"line_number":49,"context_line":"traffic rapidly fails over to another node."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"This form of load balancing does not support offloading TLS termination,"},{"line_number":52,"context_line":"results in large amounts of East-West traffic, and consumes resources from the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_15ad8530","line":49,"range":{"start_line":49,"start_character":42,"end_line":49,"end_character":43},"in_reply_to":"1fa4df85_5a2b2a2f","updated":"2020-02-28 18:24:32.000000000","message":"I intended it in the sense of per-service, but you\u0027re right that this is open to misinterpretation. Will fix, thanks.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":49,"context_line":"traffic rapidly fails over to another node."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"This form of load balancing does not support offloading TLS termination,"},{"line_number":52,"context_line":"results in large amounts of East-West traffic, and consumes resources from the"},{"line_number":53,"context_line":"guest cluster."},{"line_number":54,"context_line":""},{"line_number":55,"context_line":"Tenants could decide to use this unilaterally (i.e. without the involvement of"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_5ae00afb","line":52,"range":{"start_line":52,"start_character":28,"end_line":52,"end_character":37},"updated":"2020-02-28 16:00:22.000000000","message":"Not sure why you have \"large amount of east west\", do you mean the ARP/NDP, the node that implements the service lb type to the pods, or something else? Please keep in mind there is a \"local\" service policy, next to the \"Cluster\" service policy, to reduce it, if necessary.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":49,"context_line":"traffic rapidly fails over to another node."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"This form of load balancing does not support offloading TLS termination,"},{"line_number":52,"context_line":"results in large amounts of East-West traffic, and consumes resources from the"},{"line_number":53,"context_line":"guest cluster."},{"line_number":54,"context_line":""},{"line_number":55,"context_line":"Tenants could decide to use this unilaterally (i.e. without the involvement of"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_953415fd","line":52,"range":{"start_line":52,"start_character":28,"end_line":52,"end_character":37},"in_reply_to":"1fa4df85_5ae00afb","updated":"2020-02-28 18:24:32.000000000","message":"All traffic comes in to a node from the, uh, let\u0027s say North but then has to go out again E/W to get to the service that handles it. Whereas if you e.g. load balance at layer 3 or above then traffic comes in once and arrives straight to the node that will handle it. So the presence of kubeproxy in the path in the L2 solution results in extra E-W traffic.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":50,"context_line":""},{"line_number":51,"context_line":"This form of load balancing does not support offloading TLS termination,"},{"line_number":52,"context_line":"results in large amounts of East-West traffic, and consumes resources from the"},{"line_number":53,"context_line":"guest cluster."},{"line_number":54,"context_line":""},{"line_number":55,"context_line":"Tenants could decide to use this unilaterally (i.e. without the involvement of"},{"line_number":56,"context_line":"the management cluster or its administrators). However, using MetalLB restricts"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_7abce6db","line":53,"range":{"start_line":53,"start_character":0,"end_line":53,"end_character":13},"updated":"2020-02-28 16:00:22.000000000","message":"how much are we talking about here? I am not sure this really has a lot of importance, or rather: this is important to have working ARP/NDP anyway...","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":50,"context_line":""},{"line_number":51,"context_line":"This form of load balancing does not support offloading TLS termination,"},{"line_number":52,"context_line":"results in large amounts of East-West traffic, and consumes resources from the"},{"line_number":53,"context_line":"guest cluster."},{"line_number":54,"context_line":""},{"line_number":55,"context_line":"Tenants could decide to use this unilaterally (i.e. without the involvement of"},{"line_number":56,"context_line":"the management cluster or its administrators). However, using MetalLB restricts"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_751ab97d","line":53,"range":{"start_line":53,"start_character":0,"end_line":53,"end_character":13},"in_reply_to":"1fa4df85_7abce6db","updated":"2020-02-28 18:24:32.000000000","message":"Well, you have to run kubeproxy in your cluster. Although I gather these days most people use the implementation based on thousands of iptables rules rather than an actual proxy server. So that\u0027s better, but still has a cost.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":54,"context_line":""},{"line_number":55,"context_line":"Tenants could decide to use this unilaterally (i.e. without the involvement of"},{"line_number":56,"context_line":"the management cluster or its administrators). However, using MetalLB restricts"},{"line_number":57,"context_line":"the choice of CNI plugins -- for example it does not work with OVN. A"},{"line_number":58,"context_line":"pre-requisite to use it would be that all tenant machines share a layer 2"},{"line_number":59,"context_line":"broadcast domain, which may be undesirable in larger clouds. This may be an"},{"line_number":60,"context_line":"acceptable solution for Services in some cases though."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1e44d4b6","line":57,"range":{"start_line":57,"start_character":63,"end_line":57,"end_character":66},"updated":"2020-02-27 20:23:41.000000000","message":"Or neutron DVR is some cases.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":64,"context_line":"MetalLB (Layer 3) on management cluster"},{"line_number":65,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":66,"context_line":""},{"line_number":67,"context_line":"The layer 3 form of MetalLB_ load balancing provides true load balancing, but"},{"line_number":68,"context_line":"requires control over the network hardware in the form of advertising routes"},{"line_number":69,"context_line":"via BGP. Since tenant clusters are not trusted to do this, it would have to run"},{"line_number":70,"context_line":"in the management cluster. There would need to be an API in the management"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_7ee708e6","line":67,"range":{"start_line":67,"start_character":58,"end_line":67,"end_character":72},"updated":"2020-02-27 20:23:41.000000000","message":"But no L7 capabilities including TLS offload as mentioned below.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":65,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":66,"context_line":""},{"line_number":67,"context_line":"The layer 3 form of MetalLB_ load balancing provides true load balancing, but"},{"line_number":68,"context_line":"requires control over the network hardware in the form of advertising routes"},{"line_number":69,"context_line":"via BGP. Since tenant clusters are not trusted to do this, it would have to run"},{"line_number":70,"context_line":"in the management cluster. There would need to be an API in the management"},{"line_number":71,"context_line":"cluster to vet requests and pass them on to MetalLB, and a"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_9e38c43d","line":68,"range":{"start_line":68,"start_character":26,"end_line":68,"end_character":42},"updated":"2020-02-27 20:23:41.000000000","message":"It also requires fairly high end and specific network hardware. The gear must support ECMP and, to provide persistence, must support consistent hashing in the ECMP implementation. Many of these issues are documented on the metallb \"BGP mode\" page.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":66,"context_line":""},{"line_number":67,"context_line":"The layer 3 form of MetalLB_ load balancing provides true load balancing, but"},{"line_number":68,"context_line":"requires control over the network hardware in the form of advertising routes"},{"line_number":69,"context_line":"via BGP. Since tenant clusters are not trusted to do this, it would have to run"},{"line_number":70,"context_line":"in the management cluster. There would need to be an API in the management"},{"line_number":71,"context_line":"cluster to vet requests and pass them on to MetalLB, and a"},{"line_number":72,"context_line":"cloud-provider-teapot plugin that tenants could optionally install to connect"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_5e3a4c34","line":69,"range":{"start_line":69,"start_character":4,"end_line":69,"end_character":7},"updated":"2020-02-27 20:23:41.000000000","message":"Slightly side note, but this method that metallb is using with BGP is the same method in the Octavia L3 Active/Active specification.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"671ec37aa02571934fa392741a2f633855ab3afd","unresolved":false,"context_lines":[{"line_number":82,"context_line":"While the network cannot trust BGP announcements from tenants, in principle the"},{"line_number":83,"context_line":"management cluster could have a component that listens to such announcements on"},{"line_number":84,"context_line":"the tenant V(x)LANs, drops any that refer to networks not allocated to the"},{"line_number":85,"context_line":"tenant, and rebroadcasts the legitimate ones to the network hardware."},{"line_number":86,"context_line":""},{"line_number":87,"context_line":"This would allow tenant networks to choose to make use of MetalLB in its Layer"},{"line_number":88,"context_line":"3 mode, providing actual traffic balancing as well as making it possible to"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_f92eb2a3","line":85,"updated":"2020-02-27 21:31:12.000000000","message":"I have never played with it, but ran into this project a while back. It may be useful to this use case:\nhttps://github.com/Exa-Networks/exabgp","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"ff9f7a1f19b3cb05cd4fb2738ddcbadaa1b3ef32","unresolved":false,"context_lines":[{"line_number":82,"context_line":"While the network cannot trust BGP announcements from tenants, in principle the"},{"line_number":83,"context_line":"management cluster could have a component that listens to such announcements on"},{"line_number":84,"context_line":"the tenant V(x)LANs, drops any that refer to networks not allocated to the"},{"line_number":85,"context_line":"tenant, and rebroadcasts the legitimate ones to the network hardware."},{"line_number":86,"context_line":""},{"line_number":87,"context_line":"This would allow tenant networks to choose to make use of MetalLB in its Layer"},{"line_number":88,"context_line":"3 mode, providing actual traffic balancing as well as making it possible to"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fd677d50","line":85,"in_reply_to":"1fa4df85_f92eb2a3","updated":"2020-02-27 23:08:52.000000000","message":"Yes, the proposed l3 active/active patches for Octavia use exabgp. Also note, that openstack/os-ken also provides BGP speaker code.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":102,"context_line":"overlays used with OpenStack, and thus might be a common choice for those"},{"line_number":103,"context_line":"wanting to integrate workloads running in OpenStack and Kubernetes together."},{"line_number":104,"context_line":""},{"line_number":105,"context_line":"A new OVN-based network load balancer in the vein of MetalLB might provide"},{"line_number":106,"context_line":"additional options might provide more options for this group."},{"line_number":107,"context_line":""},{"line_number":108,"context_line":".. _teapot-load-balancing-ingress-api:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1e1db4bc","line":105,"range":{"start_line":105,"start_character":6,"end_line":105,"end_character":37},"updated":"2020-02-27 20:23:41.000000000","message":"There is an existing OVN provider driver for Octavia (https://opendev.org/openstack/ovn-octavia-provider).\nHowever, OVN had very limited load balancing capabilities:\nhttps://docs.openstack.org/octavia/latest/user/feature-classification/index.html","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":129,"context_line":"component would need to be integrated with :ref:`floating IP assignment"},{"line_number":130,"context_line":"\u003cteapot-networking-external\u003e`."},{"line_number":131,"context_line":""},{"line_number":132,"context_line":"There are already controllers for several types of software load balancers (the"},{"line_number":133,"context_line":"nginx controller is even officially supported by the Kubernetes project), as"},{"line_number":134,"context_line":"well as multiple hardware load balancers. The ecosystem around this API is"},{"line_number":135,"context_line":"likely to have continued growth. This is also likely to be the site of future"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_3e6d1032","line":132,"range":{"start_line":132,"start_character":0,"end_line":132,"end_character":74},"updated":"2020-02-27 20:23:41.000000000","message":"For example, Octavia","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":129,"context_line":"component would need to be integrated with :ref:`floating IP assignment"},{"line_number":130,"context_line":"\u003cteapot-networking-external\u003e`."},{"line_number":131,"context_line":""},{"line_number":132,"context_line":"There are already controllers for several types of software load balancers (the"},{"line_number":133,"context_line":"nginx controller is even officially supported by the Kubernetes project), as"},{"line_number":134,"context_line":"well as multiple hardware load balancers. The ecosystem around this API is"},{"line_number":135,"context_line":"likely to have continued growth. This is also likely to be the site of future"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_597266e5","line":132,"range":{"start_line":132,"start_character":0,"end_line":132,"end_character":74},"in_reply_to":"1fa4df85_3e6d1032","updated":"2020-02-28 05:03:02.000000000","message":"Thanks, that\u0027s important to note for interop with OpenStack clouds.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":138,"context_line":""},{"line_number":139,"context_line":"In general, Ingress controllers are not expected to support non-HTTP(S)"},{"line_number":140,"context_line":"protocols, so it\u0027s not necessarily possible to implement the |LoadBalancer|"},{"line_number":141,"context_line":"Service type with an arbitrary plugin. However, the nginx Ingress controller"},{"line_number":142,"context_line":"has support for arbitrary `TCP and UDP services"},{"line_number":143,"context_line":"\u003chttps://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/\u003e`_,"},{"line_number":144,"context_line":"so the API would be able to provide for either type."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fe4f78ba","line":141,"range":{"start_line":141,"start_character":52,"end_line":141,"end_character":57},"updated":"2020-02-27 20:23:41.000000000","message":"Many implementations use annotations to get around this limitation.\nCitrix for example: https://developer-docs.citrix.com/projects/citrix-k8s-ingress-controller/en/latest/how-to/tcp-udp-ingress/","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":138,"context_line":""},{"line_number":139,"context_line":"In general, Ingress controllers are not expected to support non-HTTP(S)"},{"line_number":140,"context_line":"protocols, so it\u0027s not necessarily possible to implement the |LoadBalancer|"},{"line_number":141,"context_line":"Service type with an arbitrary plugin. However, the nginx Ingress controller"},{"line_number":142,"context_line":"has support for arbitrary `TCP and UDP services"},{"line_number":143,"context_line":"\u003chttps://kubernetes.github.io/ingress-nginx/user-guide/exposing-tcp-udp-services/\u003e`_,"},{"line_number":144,"context_line":"so the API would be able to provide for either type."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_9970dede","line":141,"range":{"start_line":141,"start_character":52,"end_line":141,"end_character":57},"in_reply_to":"1fa4df85_fe4f78ba","updated":"2020-02-28 05:03:02.000000000","message":"The most important thing here is that we know there\u0027s at least one way to do it, so it\u0027s not required to build *two* things.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":151,"context_line":"Build a new custom API"},{"line_number":152,"context_line":"~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":153,"context_line":""},{"line_number":154,"context_line":"A new service running on the management cluster would provide an API through"},{"line_number":155,"context_line":"which tenants could request a load balancer. An implementation of this API"},{"line_number":156,"context_line":"would provide a pure-software load balancer running in containers in the"},{"line_number":157,"context_line":"management cluster (or some other centrally-controlled cluster). As in the case"},{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_7ed148cd","line":155,"range":{"start_line":154,"start_character":0,"end_line":155,"end_character":44},"updated":"2020-02-27 20:23:41.000000000","message":"How would this be different than the existing cloud-controller-manager implementations?\nAgain, are we proposing a plugin? cloud provider? or new \"OpenStack style\" API?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":151,"context_line":"Build a new custom API"},{"line_number":152,"context_line":"~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":153,"context_line":""},{"line_number":154,"context_line":"A new service running on the management cluster would provide an API through"},{"line_number":155,"context_line":"which tenants could request a load balancer. An implementation of this API"},{"line_number":156,"context_line":"would provide a pure-software load balancer running in containers in the"},{"line_number":157,"context_line":"management cluster (or some other centrally-controlled cluster). As in the case"},{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_3e60b001","line":155,"range":{"start_line":154,"start_character":65,"end_line":155,"end_character":43},"updated":"2020-02-27 20:23:41.000000000","message":"Like Octavia provides? grin","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":153,"context_line":""},{"line_number":154,"context_line":"A new service running on the management cluster would provide an API through"},{"line_number":155,"context_line":"which tenants could request a load balancer. An implementation of this API"},{"line_number":156,"context_line":"would provide a pure-software load balancer running in containers in the"},{"line_number":157,"context_line":"management cluster (or some other centrally-controlled cluster). As in the case"},{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"},{"line_number":159,"context_line":"provide high-availability of the load balancers."},{"line_number":160,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_9e49a48c","line":157,"range":{"start_line":156,"start_character":16,"end_line":157,"end_character":18},"updated":"2020-02-27 20:23:41.000000000","message":"This is the challenging space in the container scheduling world today. How does this scale (you don\u0027t want to limit the number of hosts that can run a load balancing engine limit the limitation neutron-lbaas had)? What will schedule and provision these load balancing resources (k8s can\u0027t be used due to limitations)?\n\nInfrastructure service scheduling is actually an interesting problem for k8s. Currently it\u0027s pushed to the cloud providers. There is opportunity here to bring value.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":153,"context_line":""},{"line_number":154,"context_line":"A new service running on the management cluster would provide an API through"},{"line_number":155,"context_line":"which tenants could request a load balancer. An implementation of this API"},{"line_number":156,"context_line":"would provide a pure-software load balancer running in containers in the"},{"line_number":157,"context_line":"management cluster (or some other centrally-controlled cluster). As in the case"},{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"},{"line_number":159,"context_line":"provide high-availability of the load balancers."},{"line_number":160,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_bd17c545","line":157,"range":{"start_line":156,"start_character":16,"end_line":157,"end_character":18},"in_reply_to":"1fa4df85_9e49a48c","updated":"2020-02-28 05:03:02.000000000","message":"I don\u0027t agree that k8s \"can\u0027t be used\". Ingress controllers like the nginx one use k8s to provision the resources.\n\nWhat Teapot really brings to the table here is that it can scale itself out by grabbing more hardware when needed.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":153,"context_line":""},{"line_number":154,"context_line":"A new service running on the management cluster would provide an API through"},{"line_number":155,"context_line":"which tenants could request a load balancer. An implementation of this API"},{"line_number":156,"context_line":"would provide a pure-software load balancer running in containers in the"},{"line_number":157,"context_line":"management cluster (or some other centrally-controlled cluster). As in the case"},{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"},{"line_number":159,"context_line":"provide high-availability of the load balancers."},{"line_number":160,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_dac07a2d","line":157,"range":{"start_line":156,"start_character":16,"end_line":157,"end_character":18},"in_reply_to":"1fa4df85_bd17c545","updated":"2020-02-28 16:00:22.000000000","message":"I think you\u0027re not in disagreement. You\u0027re just talking about two different sides of the same coin, the user, and the provider. I agree that k8s can be used from a user perspective by asking an ingress. On a provider (infrastructure) perspective, there is indeed an opportunity to bring an abstraction layer on top of either software, or hardware solutions.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"},{"line_number":159,"context_line":"provide high-availability of the load balancers."},{"line_number":160,"context_line":""},{"line_number":161,"context_line":"The API would be designed such that alternate implementations of the controller"},{"line_number":162,"context_line":"could be created for various load balancing hardware. Ideally one would take"},{"line_number":163,"context_line":"the form of a shim to the existing cloud-provider API for load balancers, so"},{"line_number":164,"context_line":"that existing plugins could be used. This would include"},{"line_number":165,"context_line":"cloud-provider-openstack, for the case where Teapot is installed alongside an"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fe59b859","line":162,"range":{"start_line":161,"start_character":0,"end_line":162,"end_character":53},"updated":"2020-02-27 20:23:41.000000000","message":"Like Octavia provides? grin","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":158,"context_line":"of an Ingress-based controller, a network load balancer would likely be used to"},{"line_number":159,"context_line":"provide high-availability of the load balancers."},{"line_number":160,"context_line":""},{"line_number":161,"context_line":"The API would be designed such that alternate implementations of the controller"},{"line_number":162,"context_line":"could be created for various load balancing hardware. Ideally one would take"},{"line_number":163,"context_line":"the form of a shim to the existing cloud-provider API for load balancers, so"},{"line_number":164,"context_line":"that existing plugins could be used. This would include"},{"line_number":165,"context_line":"cloud-provider-openstack, for the case where Teapot is installed alongside an"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_b1304eea","line":162,"range":{"start_line":161,"start_character":0,"end_line":162,"end_character":53},"in_reply_to":"1fa4df85_fe59b859","updated":"2020-02-28 05:03:02.000000000","message":"Yes, as noted on line 202 ;)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":177,"context_line":"Build a new Ingress controller"},{"line_number":178,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":179,"context_line":""},{"line_number":180,"context_line":"A Teapot Ingress controller would proxy requests for an Ingress to the API in"},{"line_number":181,"context_line":"the management cluster. It would likely be responsible for syncing the"},{"line_number":182,"context_line":"EndpointSlices to the API as well."},{"line_number":183,"context_line":""},{"line_number":184,"context_line":".. _teapot-load-balancing-cloud-provider:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_9e3e84d8","line":181,"range":{"start_line":180,"start_character":0,"end_line":181,"end_character":23},"updated":"2020-02-27 20:23:41.000000000","message":"I\u0027m not sure I follow this. Is this a load balancer of load balancer APIs endpoints?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":177,"context_line":"Build a new Ingress controller"},{"line_number":178,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":179,"context_line":""},{"line_number":180,"context_line":"A Teapot Ingress controller would proxy requests for an Ingress to the API in"},{"line_number":181,"context_line":"the management cluster. It would likely be responsible for syncing the"},{"line_number":182,"context_line":"EndpointSlices to the API as well."},{"line_number":183,"context_line":""},{"line_number":184,"context_line":".. _teapot-load-balancing-cloud-provider:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1d0ff9e1","line":181,"range":{"start_line":180,"start_character":0,"end_line":181,"end_character":23},"in_reply_to":"1fa4df85_9e3e84d8","updated":"2020-02-28 05:03:02.000000000","message":"\u003e I\u0027m not sure I follow this. Is this a load balancer of load\n \u003e balancer APIs endpoints?\n\nNo, it\u0027s the thing so that when you create an Ingress resource in the tenant cluster, it goes and hits the API that creates a load balancer in the management cluster.\n\nIf the API is Octavia then this already exists, but if there\u0027s a new k8s-native API (the custom or Ingress-based APIs above) we\u0027d have to build something.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"671ec37aa02571934fa392741a2f633855ab3afd","unresolved":false,"context_lines":[{"line_number":177,"context_line":"Build a new Ingress controller"},{"line_number":178,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":179,"context_line":""},{"line_number":180,"context_line":"A Teapot Ingress controller would proxy requests for an Ingress to the API in"},{"line_number":181,"context_line":"the management cluster. It would likely be responsible for syncing the"},{"line_number":182,"context_line":"EndpointSlices to the API as well."},{"line_number":183,"context_line":""},{"line_number":184,"context_line":".. _teapot-load-balancing-cloud-provider:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_dd6961be","line":181,"range":{"start_line":180,"start_character":0,"end_line":181,"end_character":23},"in_reply_to":"1fa4df85_9e3e84d8","updated":"2020-02-27 21:31:12.000000000","message":"I like this option a lot. Let the admin configure the management cluster with whatever ingress/svc type\u003dloadbalancer driver they want (metallb+nginx-ingress for example) and then the tenants clusters can route through it.\n\nIts not a teapot ingress controller as its a k8s2k8s ingress controller","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":186,"context_line":"Build a new cloud-provider"},{"line_number":187,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":188,"context_line":""},{"line_number":189,"context_line":"A cloud-provider-teapot plugin that tenants could optionally install would"},{"line_number":190,"context_line":"allow them to make use of the API in the management cluster to configure"},{"line_number":191,"context_line":"Services of type |LoadBalancer|."},{"line_number":192,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_3e2510c3","line":189,"range":{"start_line":189,"start_character":2,"end_line":189,"end_character":30},"updated":"2020-02-27 20:23:41.000000000","message":"Isn\u0027t this what the existing cloud providers in k8s provide?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":186,"context_line":"Build a new cloud-provider"},{"line_number":187,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":188,"context_line":""},{"line_number":189,"context_line":"A cloud-provider-teapot plugin that tenants could optionally install would"},{"line_number":190,"context_line":"allow them to make use of the API in the management cluster to configure"},{"line_number":191,"context_line":"Services of type |LoadBalancer|."},{"line_number":192,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_7d506d0b","line":189,"range":{"start_line":189,"start_character":2,"end_line":189,"end_character":30},"in_reply_to":"1fa4df85_3e2510c3","updated":"2020-02-28 05:03:02.000000000","message":"Yes, this is just saying that if we have a new k8s-native API then we\u0027d need a new cloud provider to talk to it.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b198a72f32c5d7ceafcf40ef1d565e2f545db173","unresolved":false,"context_lines":[{"line_number":203,"context_line":"layer over hardware load balancer APIs, with a software-based driver for those"},{"line_number":204,"context_line":"wanting a pure-software solution."},{"line_number":205,"context_line":""},{"line_number":206,"context_line":"In practice, however, there are no drivers for any hardware load balancers."},{"line_number":207,"context_line":"Several drivers existed for the earlier Neutron LBaaS v2 API, but vendors had"},{"line_number":208,"context_line":"largely moved on to Kubernetes by the time it was replaced by Octavia."},{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_2d3c0cc7","line":206,"range":{"start_line":206,"start_character":0,"end_line":206,"end_character":75},"updated":"2020-02-27 15:35:44.000000000","message":"This is incorrect. There are drivers for A10, Radware, and vmware NSX published.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":203,"context_line":"layer over hardware load balancer APIs, with a software-based driver for those"},{"line_number":204,"context_line":"wanting a pure-software solution."},{"line_number":205,"context_line":""},{"line_number":206,"context_line":"In practice, however, there are no drivers for any hardware load balancers."},{"line_number":207,"context_line":"Several drivers existed for the earlier Neutron LBaaS v2 API, but vendors had"},{"line_number":208,"context_line":"largely moved on to Kubernetes by the time it was replaced by Octavia."},{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_51d23a5e","line":206,"range":{"start_line":206,"start_character":0,"end_line":206,"end_character":75},"in_reply_to":"1fa4df85_2d3c0cc7","updated":"2020-02-28 05:03:02.000000000","message":"Ack, will fix. Thanks for the correction.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b198a72f32c5d7ceafcf40ef1d565e2f545db173","unresolved":false,"context_lines":[{"line_number":206,"context_line":"In practice, however, there are no drivers for any hardware load balancers."},{"line_number":207,"context_line":"Several drivers existed for the earlier Neutron LBaaS v2 API, but vendors had"},{"line_number":208,"context_line":"largely moved on to Kubernetes by the time it was replaced by Octavia."},{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"},{"line_number":210,"context_line":"OpenStack Nova, which will not be present in Teapot."},{"line_number":211,"context_line":""},{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_6df3e40b","line":210,"range":{"start_line":209,"start_character":0,"end_line":210,"end_character":52},"updated":"2020-02-27 15:35:44.000000000","message":"The Octavia \"Amphora\" driver is implemented with \"compute\" abstracted to a driver. Currently, merged code, only nova is supported, but we have posted a functional LXC/LXD version as a proof of concept.\nSo, the Octavia amphora driver is loosely coupled to nova.\nThis is why we call them Amphora, they can be bare metal, VMs, containers, etc.\nSadly, design issues in the container scheduling systems, for example kubernetes, has limited the ability to use them for load balancing in a way that is reliable.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b838f7d0bce2ad2ec36dfad5a4982a11da178301","unresolved":false,"context_lines":[{"line_number":206,"context_line":"In practice, however, there are no drivers for any hardware load balancers."},{"line_number":207,"context_line":"Several drivers existed for the earlier Neutron LBaaS v2 API, but vendors had"},{"line_number":208,"context_line":"largely moved on to Kubernetes by the time it was replaced by Octavia."},{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"},{"line_number":210,"context_line":"OpenStack Nova, which will not be present in Teapot."},{"line_number":211,"context_line":""},{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_b06d9f99","line":210,"range":{"start_line":209,"start_character":0,"end_line":210,"end_character":52},"in_reply_to":"1fa4df85_1a237207","updated":"2020-02-28 17:40:04.000000000","message":"Yes, this was described below in a later patch (these take a while to write up, lol).\n\nFor scheduling in particular, the \"simple\" statement is that you cannot stop k8s from interrupting user network flows. There is a pile of detail behind that simple statement, but when you go down all of the paths(and api tricks), this is where you land.\nIf you are doing anything beyond simple short lived HTTP follows (though even these are changing with HTTP/2 and 3), having pods preempted in the middle of a user network flow is not a good thing and a support nightmare.\nAs mentioned below, there is effort to get these issues fixed (pod priority and preemption for example), but the community is pushing back on the full solution in those as \"out of scope\".\nThis is why almost all current load balancing solutions for k8s live outside the k8s scheduler today. Either cloud provided, or bolt-ons like envoy, or rely on outside gear like metallb, vendors, etc.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":206,"context_line":"In practice, however, there are no drivers for any hardware load balancers."},{"line_number":207,"context_line":"Several drivers existed for the earlier Neutron LBaaS v2 API, but vendors had"},{"line_number":208,"context_line":"largely moved on to Kubernetes by the time it was replaced by Octavia."},{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"},{"line_number":210,"context_line":"OpenStack Nova, which will not be present in Teapot."},{"line_number":211,"context_line":""},{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1a237207","line":210,"range":{"start_line":209,"start_character":0,"end_line":210,"end_character":52},"in_reply_to":"1fa4df85_6df3e40b","updated":"2020-02-28 16:00:22.000000000","message":"@johnsom: What\u0027s the problem with scheduling? For me, it sounds like these containers or nodes used for software load-balancing should always run an all nodes, making it an installer problem, or a k8s daemonset and hostNetworking (is that a problem, assuming this only runs in the master cluster?). Is that what you\u0027re explaining below, or is there something else?\n\nFor the abstraction layer/API, I don\u0027t see the problem of running this in k8s either.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":206,"context_line":"In practice, however, there are no drivers for any hardware load balancers."},{"line_number":207,"context_line":"Several drivers existed for the earlier Neutron LBaaS v2 API, but vendors had"},{"line_number":208,"context_line":"largely moved on to Kubernetes by the time it was replaced by Octavia."},{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"},{"line_number":210,"context_line":"OpenStack Nova, which will not be present in Teapot."},{"line_number":211,"context_line":""},{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_71d5f642","line":210,"range":{"start_line":209,"start_character":0,"end_line":210,"end_character":52},"in_reply_to":"1fa4df85_6df3e40b","updated":"2020-02-28 05:03:02.000000000","message":"Will fix.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b198a72f32c5d7ceafcf40ef1d565e2f545db173","unresolved":false,"context_lines":[{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"},{"line_number":210,"context_line":"OpenStack Nova, which will not be present in Teapot."},{"line_number":211,"context_line":""},{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"},{"line_number":214,"context_line":"all -- Teapot cannot allow other components of the system to become dependent"},{"line_number":215,"context_line":"on it. Given those constraints, Octavia does not appear to be a viable"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_ad379cb0","line":212,"range":{"start_line":212,"start_character":0,"end_line":212,"end_character":70},"updated":"2020-02-27 15:35:44.000000000","message":"This is not true either. Again, the amphora driver for Octavia has a networking driver model. Currently we have only released a \"neutron\" driver, but the code supports alternate networking providers via our networking abstraction layer.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":209,"context_line":"Furthermore, the pure-software driver (Amphora) is highly dependent on"},{"line_number":210,"context_line":"OpenStack Nova, which will not be present in Teapot."},{"line_number":211,"context_line":""},{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"},{"line_number":214,"context_line":"all -- Teapot cannot allow other components of the system to become dependent"},{"line_number":215,"context_line":"on it. Given those constraints, Octavia does not appear to be a viable"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_11ccc2ba","line":212,"range":{"start_line":212,"start_character":0,"end_line":212,"end_character":70},"in_reply_to":"1fa4df85_ad379cb0","updated":"2020-02-28 05:03:02.000000000","message":"Will fix.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b198a72f32c5d7ceafcf40ef1d565e2f545db173","unresolved":false,"context_lines":[{"line_number":212,"context_line":"Finally, all of Octavia is tightly integrated with Neutron networking. Since we"},{"line_number":213,"context_line":"want to make use of Neutron only as a replaceable implementation detail -- if at"},{"line_number":214,"context_line":"all -- Teapot cannot allow other components of the system to become dependent"},{"line_number":215,"context_line":"on it. Given those constraints, Octavia does not appear to be a viable"},{"line_number":216,"context_line":"implementation option."},{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_cd63f8b0","line":216,"range":{"start_line":215,"start_character":32,"end_line":216,"end_character":22},"updated":"2020-02-27 15:35:44.000000000","message":"I would note, Octavia is already used as an ingress load balancer for Kubernetes in production at a number of sites. It also powers parts of Red Hat OpenShift when deployed on OpenStack.\nJust an (old) example: https://superuser.openstack.org/articles/guide-octavia-ingress-controller-for-kubernetes/","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"ff9f7a1f19b3cb05cd4fb2738ddcbadaa1b3ef32","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_00d8f27c","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"updated":"2020-02-27 23:08:52.000000000","message":"So there are a number of issues we have run into attempting to use kubernetes as the container scheduler for amphora (or any load balancing engine really). This has led those that do \"support\" load balancing engines with kuberentes to either deploy them outside the kubernetes environment (see the vendor docs as I\u0027m not going to call them out here) or they accept that network flows will be interrupted or incur outages.\n\n1. kubernetes networking typically layers on levels of encapsulation and NAT, all of which significantly impacts performance for network flows.\n2. The kubernetes scheduler can and will terminate pods for rescheduling. This will interrupt user network flows for excessive amounts of time and cause user visible traffic errors. There are attempts to address this with the new pod priority and preemption settings, but they still do not guarantee user network flows would not be interrupted.\n3. The current CNI options are not \"VIP\" aware and do not handle IP addresses that can migrate between pods well. This limits the ability to do transparent network flow failovers (I.e. inside the TCP retry window) using techniques like VRRP.\n4. The typical container usage is per-process, which complicates load balancer engines that use multiple processes with unix domain sockets for management, HA, and status monitoring.\n5. Health monitoring and reacting to those events is not fast enough for infrastructure networking HA.\n\n\nNone of these are impossible, kubernetes just isn\u0027t targeting the infrastructure use case yet.\n\nIt\u0027s not clear to me if the best answer is to build out the \"outside\" k8s use case, maybe with an independent scheduler or if it is within the scope of k8s to do the development work to resolve these issues.\nThe trend towards developing out the cloud providers to provide these infrastructure services (including load balancing) points towards these services continuing to be outside the k8s cluster.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b198a72f32c5d7ceafcf40ef1d565e2f545db173","unresolved":false,"context_lines":[{"line_number":215,"context_line":"on it. Given those constraints, Octavia does not appear to be a viable"},{"line_number":216,"context_line":"implementation option."},{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_6d0f64e1","line":220,"range":{"start_line":218,"start_character":0,"end_line":220,"end_character":24},"updated":"2020-02-27 15:35:44.000000000","message":"So there are some major problems with kubernetes scheduling for Octavia. This has blocked Octavia using it as well as the other load balancing vendors. I will detail those in another comment post as I need to post the above corrections as soon as possible.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"af98cfa7ada3bd384af8283842a3dec18d8e1edb","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a059de5a","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"in_reply_to":"1fa4df85_00d8f27c","updated":"2020-02-28 00:34:11.000000000","message":"\u003e 1. kubernetes networking typically layers on levels of encapsulation and NAT, all of which significantly impacts performance for network flows.\n\nWhen direct networking is needed, often the option is to use hostNetworking: true on your pods.\n\n\u003e 2. The kubernetes scheduler can and will terminate pods for rescheduling. This will interrupt user network flows for excessive amounts of time and cause user visible traffic errors. There are attempts to address this with the new pod priority and preemption settings, but they still do not guarantee user network flows would not be interrupted.\n\nI\u0027ve used preStop hooks in the past to ensure proper stream draining before the pod is allowed to completely terminate.\n\n\u003e 3. The current CNI options are not \"VIP\" aware and do not handle IP addresses that can migrate between pods well. This limits the ability to do transparent network flow failovers (I.e. inside the TCP retry window) using techniques like VRRP.\n\nThis also can be done with hostNetworking: true I think.\n\n\u003e 4. The typical container usage is per-process, which complicates load balancer engines that use multiple processes with unix domain sockets for management, HA, and status monitoring.\n\nThis is why Pods are the unit of scheduling and not Containers. You can pass unix domain sockets between containers within a pod easily. HA is done with multiple pods but still possible. Status monitoring is often done as a separate sidecar to enable picking the status monitor you wish to use. (prometheus, collectd, etc)\n\nIf you still really want a monolithic single container with multiple processes, that still does work too. Its just frowned upon.\n\n5. Health monitoring and reacting to those events is not fast enough for infrastructure networking HA.\n\nI\u0027ve run keepalived hostNetwork: true for a while. It runs just as well in Kubernetes as installed on the bare host. It was one of the first things we containerized for kolla-kubernetes.\n\nI think everything you want to do can be done without too much effort. It does require using some of the less common features of Kubernetes though, so easy to overlook. One of the common complaints about Kubernetes is the api is huge. But its huge as users have needed to use every feature that\u0027s there. Often for use cases like above. :)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b3aadffe3054bff3ea227034deb7fac6abd32ba3","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_76113e52","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"in_reply_to":"1fa4df85_1084b3b0","updated":"2020-03-03 01:02:32.000000000","message":"Actually, no, with the current Octavia you can lose a VM and maintain the in-progress flow.\n\nYour comment, \"Given that it is taking over the world\" I think you have a typo, \"taking down the world\" is probably what you meant. grin\n\nAgain, I will mention that most current implementations for the infrastructure level in k8s deployments push the infrastructure needs outside the scheduler. This is especially true of load balancers and includes the examples you provided above.\n\nI am not saying this cannot be solved or blocks the wider vision, I\u0027m just bringing my experience to the conversation. Frankly I really want these issues fixed so we can be done with it. lol\n\nThe reality is that most people that start the \"load balancers should be running in kubernetes\" have never tried it or deployed a sizable k8s cluster in production. We spend hours talking through it at PTGs and end up with \"we can do it today, but it will be unreliable and slow providing a poor user experience\", \"We need to fix these issues in kubernetes\", or \"It just doesn\u0027t make sense to run infrastructure components inside kubernetes\".","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"de4a2f86fc929f13e14827e209e8bab66173f549","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_b9a1e748","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"in_reply_to":"1fa4df85_76113e52","updated":"2020-03-03 05:33:23.000000000","message":"I\u0027m still confused about the issue here, but I want to learn more.\n\nAt the beginning you implied that the issue was that Kubernetes might kill your pod. But now you say that HAProxy in Octavia can maintain flows in the face of that... so clearly the problem isn\u0027t that pods might be killed.\n\nReading your first comment again, it sounds like the actual problem is that it\u0027s too slow to fail over? I can see how that could be a potential issue, but it seems to me that it\u0027s more an issue that Kubernetes doesn\u0027t solve all of the problems for you (as one might naively hope it would), rather than it prevents you solving them at all.\n\nLike, sure Kubernetes\u0027s health monitoring is too slow, but Nova gives you nothing! So use whatever you use with Nova instead of relying on Kubernetes to do it.\n\nIn the worst case you could stick the same stuff that\u0027s currently in your Amphora into a VM Image and run it in KubeVirt with host networking. It\u0027s extremely difficult to believe this is worse than Nova, because it\u0027s virtually the same thing.\n\nI read through some of your old etherpads like https://etherpad.openstack.org/p/octavia-ptg-rocky and https://etherpad.openstack.org/p/ptq-queens-octavia-kuryr and I get that Kubernetes is not a magic bullet for simplifying Octavia within OpenStack. But the context here is different. If it\u0027s theoretically possible to run a software load balancer in this specific scenario (baremetal cluster, separate cluster from tenant workloads, no Nova or Neutron) then we\u0027re happy.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b838f7d0bce2ad2ec36dfad5a4982a11da178301","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_f097978f","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"in_reply_to":"1fa4df85_a059de5a","updated":"2020-02-28 17:40:04.000000000","message":"Those are obvious answers that everyone has, but they don\u0027t actually address the problems and/or make it worse. We have spent a good chuck of time and multiple PTGs talking about this.\nNone of this is insurmountable as I said, but the current state of k8s layers on the problems, workarounds and fragile hacks to get around it. \n1. Host networking does not magically solve this problem, it pushes the complexity to another layer and you no longer play nice with the rest of the cluster. Same with nodeport. This is why there are attempts at workarounds, like trunking (see kuryr). host/node/etc. all assume you control down to the hardware on each node and ToR switch.\n2. The k8s schedule can and does override the preStop. This is advisory, not mandatory: https://kubernetes.io/docs/concepts/workloads/pods/pod/#termination-of-pods\n3. Same comment as 1. In fact, one well known deployment has to allocate double wide subnets to work around this problem.\n4. We know very well what \"pods\" and \"sidecars\" are. lol. As I said, this system complicates high performing infrastructure systems. Yes, it either has to be done as multiple pods that must be coordinated together (which yes, can communicate over exposed domain sockets). HA via side cars is not a \"replacement\" by any means. We need to transition live network flows in less than the TCP retry window. Today we do this in around one second, soon less. With tuning (which again makes a lot of assumptions about the level of control over the host, k8s deployment, etc. you are lucky to hit 40 seconds using side cars and k8s health monitoring. Remember, this isn\u0027t just starting a \"netcat\" web server, it requires ports to be attached, etc.\n5. See above.\n\nThere are a bunch of folks in the network load balancing community trying to influence these APIs, schedulers, and network implementations. The recent addition of the Pod Priority settings is an example of that, but sadly the community would not go far enough to make it completely solve the problem.\n\nOn the surface, \"containers\" are an excellent solution for load balancing engines, but when you get into the actual details it becomes clear that certain schedulers/cluster systems are not ready for this type of workload yet.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"25f6fc8b722ebaf27d60d4f1e464a688466f5b25","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_8bc7e896","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"in_reply_to":"1fa4df85_b9a1e748","updated":"2020-03-03 16:54:11.000000000","message":"My point was if Kubernetes doesn\u0027t give you functionality to do something within its featureset, most of the time you can turn off most of Kubernete\u0027s control and do it yourself. Is that extra work compared to K8s magically supporting every use case? Of course. But unlike other systems such as Nova, you can escape several layers of abstraction that may get in your way if you need to solve something yourself. This means, within the cluster, it is still possible to solve classes of problems that in other systems could be difficult to impossible.\n\nMy assertion is, if you can get a bare metal Linux box to solve the load balancing problem, you can do it in a Kubernetes system. With the majority of namespaces shut off, Kubernetes just becomes a package manager of sorts. A way to help you schedule a program onto a machine but minimally involved once it starts. This is still valuable for sysadmins to help maintain the workload over something like yum/apt/ansible. Versioning of software, dependency isolation, single control plane to manage workload, etc. In my mind the argument to push it to an external set of nodes doesn\u0027t make sense.\n\nIs it ideal? No. Is it still worth doing? absolutely. Could it get better in the future? I hope so. While not at the rate I would like, every release gets a little better. :)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":217,"context_line":""},{"line_number":218,"context_line":"A more promising avenue might be integration in the other direction -- using a"},{"line_number":219,"context_line":"Kubernetes-based service as a driver for Octavia when Teapot is co-installed"},{"line_number":220,"context_line":"with an OpenStack cloud."},{"line_number":221,"context_line":""},{"line_number":222,"context_line":".. |LoadBalancer| replace:: ``LoadBalancer``"},{"line_number":223,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1084b3b0","line":220,"range":{"start_line":220,"start_character":18,"end_line":220,"end_character":24},"in_reply_to":"1fa4df85_f097978f","updated":"2020-02-28 18:24:32.000000000","message":"\u003e all assume you control down to the hardware on each node and ToR\n \u003e switch.\n\nI have good news for you :)\n\n \u003e On the surface, \"containers\" are an excellent solution for load\n \u003e balancing engines, but when you get into the actual details it\n \u003e becomes clear that certain schedulers/cluster systems are not ready\n \u003e for this type of workload yet.\n\nIf the hypervisor running your Amphora VM dies then you\u0027ll also lose in-progress flows. Isn\u0027t the main difference with Kubernetes that it makes it explicit that this might happen and clients will have to deal with it? Given that it is taking over the world apparently without meeting our high standards, perhaps it is our standards that are wrong?\n\nYou can easily run all the load balancer pods on nodes that are tagged so that no other workloads get scheduled to them. (With more difficulty, you could even run them in a whole separate cluster.) Kubernetes does reserve the right to evict your pod just for the fun of it, but in practice it\u0027s not going to. Who knows, maybe someday someone will implement a load balancer running in KubeVirt so it gets live-migrated around.\n\nI\u0027m just struggling to see this as an obstacle to getting started.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":26,"context_line":"When using a Service of type |LoadBalancer| (rather than an Ingress), there is"},{"line_number":27,"context_line":"no standardised way of requesting TLS termination (some cloud providers permit"},{"line_number":28,"context_line":"it using an annotation), so supporting this use case is not a high priority."},{"line_number":29,"context_line":"The |LoadBalancer| Service type in general should be supported, however (though"},{"line_number":30,"context_line":"there are existing Kubernetes offerings where it is not)."},{"line_number":31,"context_line":""},{"line_number":32,"context_line":"Implementation options"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_d5101fab","line":29,"range":{"start_line":29,"start_character":0,"end_line":29,"end_character":62},"updated":"2020-03-03 16:56:01.000000000","message":"Yeah, we do need [Loadbalancer] support for the beginning, if only to expose Ingresses :)","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":118,"context_line":"A new API in the management cluster would receive requests in a form similar to"},{"line_number":119,"context_line":"an Ingress resource, sanitise them, and then proxy them to an Ingress"},{"line_number":120,"context_line":"controller running in the management cluster (or some other"},{"line_number":121,"context_line":"centrally-controlled cluster). In fact, it is possible the \u0027API\u0027 could be as"},{"line_number":122,"context_line":"simple as using the existing Ingress API in a namespace with a validating"},{"line_number":123,"context_line":"webhook."},{"line_number":124,"context_line":""},{"line_number":125,"context_line":"The most challenging part of this would be coaxing the Ingress controllers on"},{"line_number":126,"context_line":"the load balancing cluster to target services in a different cluster (the"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_15f79761","line":123,"range":{"start_line":121,"start_character":31,"end_line":123,"end_character":8},"updated":"2020-03-03 16:56:01.000000000","message":"Yeah, a teapot-ingress-controller could make a lot of this section easier","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"}],"doc/source/ideas/teapot/networking.rst":[{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":6,"context_line":"network itself must be the guarantor of multi-tenancy, with only untrusted"},{"line_number":7,"context_line":"components running on tenant machines. (Trusted components can still run within"},{"line_number":8,"context_line":"the management cluster.)"},{"line_number":9,"context_line":""},{"line_number":10,"context_line":".. _teapot-networking-multi-tenancy:"},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"Multi-tenant Network Model"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_e3a4c5bb","line":9,"updated":"2020-02-27 18:01:44.000000000","message":"This solution is kind of different then wg-multitenancy has been working towards, so its interesting to see.\n\nThis type of multitenancy is closer to \"kubernetes as a service\" then what the wg-multitenancy defines. Not making a judgement call here.\n\nIt may leave room for the two groups to collaborate further without stomping on each others toes.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"79c8f97b5b4a491295c5e4d3b18cd8a75ca34efb","unresolved":false,"context_lines":[{"line_number":6,"context_line":"network itself must be the guarantor of multi-tenancy, with only untrusted"},{"line_number":7,"context_line":"components running on tenant machines. (Trusted components can still run within"},{"line_number":8,"context_line":"the management cluster.)"},{"line_number":9,"context_line":""},{"line_number":10,"context_line":".. _teapot-networking-multi-tenancy:"},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"Multi-tenant Network Model"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_ebad3c9b","line":9,"in_reply_to":"1fa4df85_66621e41","updated":"2020-03-03 17:34:31.000000000","message":"where I see the biggest collaboration opportunity, wg-multitenancy works to define clearly what the various forms of multitenancy are, and work on common api\u0027s. I think we should work with them to ensure project teapot\u0027s kind of kubernetes as a service model is defined clearly in their docs so everyone can come to a clear understanding on when to use which multitenancy solution, as well as share as much as possible of the user facing api and tooling around it. This will make it substantially easier on users to go from one form of multitenancy to a project teapot system.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":6,"context_line":"network itself must be the guarantor of multi-tenancy, with only untrusted"},{"line_number":7,"context_line":"components running on tenant machines. (Trusted components can still run within"},{"line_number":8,"context_line":"the management cluster.)"},{"line_number":9,"context_line":""},{"line_number":10,"context_line":".. _teapot-networking-multi-tenancy:"},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"Multi-tenant Network Model"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_66621e41","line":9,"in_reply_to":"1fa4df85_e3a4c5bb","updated":"2020-02-28 05:03:02.000000000","message":"\u003e This solution is kind of different then wg-multitenancy has been\n \u003e working towards, so its interesting to see.\n\nYes, I see that WG\u0027s responsibility as multitenancy (in all its forms) within k8s itself; Teapot is coming at it from the layer below and providing a baremetal cloud that takes care of multitenancy from the outside in the same way that other clouds do. It just happens to do it using k8s itself because it\u0027s 2020.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a41966391a4e6231c8261d0c1fef1e7c9c6265c4","unresolved":false,"context_lines":[{"line_number":6,"context_line":"network itself must be the guarantor of multi-tenancy, with only untrusted"},{"line_number":7,"context_line":"components running on tenant machines. (Trusted components can still run within"},{"line_number":8,"context_line":"the management cluster.)"},{"line_number":9,"context_line":""},{"line_number":10,"context_line":".. _teapot-networking-multi-tenancy:"},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"Multi-tenant Network Model"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_c39509e6","line":9,"in_reply_to":"1fa4df85_e3a4c5bb","updated":"2020-02-27 18:16:10.000000000","message":"One more option. If done with just one flat network, so long as you add a requirement that tenant clusters need to support NetworkPolicies to be secure, you can use that mechanism to enforce some cross tenant security.\n\nFor example, I typically setup network policies per namespace that only allows communcations in that namespace and with kube-system but no other namespace:\n\nhttps://github.com/pnnl-miscscripts/miscscripts/blob/master/charts/charts/tenant-namespace/values.yaml#L22-L37\nand \nhttps://github.com/pnnl-miscscripts/miscscripts/blob/master/charts/charts/tenant-namespace/templates/simple-restricted-networkpolicy.yaml\n\nFor some folks, this may be a much lighter weight solution and good enough?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":12,"context_line":"Multi-tenant Network Model"},{"line_number":13,"context_line":"--------------------------"},{"line_number":14,"context_line":""},{"line_number":15,"context_line":"Support for VLANs and VxLAN is ubiquitous in modern data center network"},{"line_number":16,"context_line":"hardware, so this will be the basis for Teapot\u0027s networking. Each tenant will"},{"line_number":17,"context_line":"be assigned one or more V(x)LANs. (Separate failure domains will likely also"},{"line_number":18,"context_line":"have separate broadcast domains.) As machines are assigned to the tenant, the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_5e2c7598","line":15,"range":{"start_line":15,"start_character":22,"end_line":15,"end_character":27},"updated":"2020-02-29 16:46:56.000000000","message":"(or similar overlay network technologies like Geneve)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case VTEP-capable edge switches and a VTEP-capable router will be"},{"line_number":24,"context_line":"required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fe2201a4","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":57},"updated":"2020-02-29 16:46:56.000000000","message":"And even larger deployments may use routed spine-and-leaf topologies?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case VTEP-capable edge switches and a VTEP-capable router will be"},{"line_number":24,"context_line":"required."},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"This design frees the tenant clusters from being forced to use a particular"},{"line_number":27,"context_line":":abbr:`CNI (Container Network Interface)` plugin. Tenants are free to select a"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a3bb2d70","line":24,"range":{"start_line":22,"start_character":0,"end_line":24,"end_character":9},"updated":"2020-02-27 20:23:41.000000000","message":"So this implies that ToR will de-encapsulate the frames.\nIs the proposal to provide un-encapsulated networks directly to the hardware NICs or to provide a \"trunk\" interface to the hardware NICs?\nPersonally I think there is value in providing trunk ports from the ToR to the bare metal NIC(s) to allow flexible/dynamic tenant networking.\nFor example, the ToR would VTEP de-encapsulate the VXLAN overlay and present the tenant a list of VLANs that are available on their bare metal NIC(s). Bonus points if they can configure which VLAN tags are exposed on which NIC and if one VLAN is untagged.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case VTEP-capable edge switches and a VTEP-capable router will be"},{"line_number":24,"context_line":"required."},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"This design frees the tenant clusters from being forced to use a particular"},{"line_number":27,"context_line":":abbr:`CNI (Container Network Interface)` plugin. Tenants are free to select a"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_66377eb2","line":24,"range":{"start_line":22,"start_character":0,"end_line":24,"end_character":9},"in_reply_to":"1fa4df85_a3bb2d70","updated":"2020-02-28 05:03:02.000000000","message":"\u003e So this implies that ToR will de-encapsulate the frames.\n\nCorrect.\n\n \u003e Is the proposal to provide un-encapsulated networks directly to the\n \u003e hardware NICs or to provide a \"trunk\" interface to the hardware\n \u003e NICs?\n\nThat sounds like it should be technically possible. But given that each tenant is in its own isolated network anyway, I\u0027m not sure I see the value. What do you think the use case would be for this?\n\nThe only one I can think of is to limit broadcast domains, but I\u0027d expect that the management cluster would split up tenant networks to limit broadcast domains anyway.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case VTEP-capable edge switches and a VTEP-capable router will be"},{"line_number":24,"context_line":"required."},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"This design frees the tenant clusters from being forced to use a particular"},{"line_number":27,"context_line":":abbr:`CNI (Container Network Interface)` plugin. Tenants are free to select a"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_5f031c74","line":24,"range":{"start_line":22,"start_character":0,"end_line":24,"end_character":9},"in_reply_to":"1fa4df85_a3bb2d70","updated":"2020-02-28 16:00:22.000000000","message":"For me, the value would be to have a networking component, which connects on the switches, and configure what can pass on the physical port. It means that it can indeed encap/decapsulate the packet if necessary, and/or ensure that the right VLANs are allowed for tenant x, by configuring the port appropriately. This would allow a simple compute configuration (and its connection to the storage), by allowing trunk ports (+lacp for redundancy for example).\n\nHowever, I think that this could come as an evolution of teapot.\n\nAlso, \"small\" and \"large\" deployments depends on vlan pruning and the amount of tenants, so it really depends on the use case :) Though, for me, the simpler the better.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":38,"context_line":"clusters, for several reasons:"},{"line_number":39,"context_line":""},{"line_number":40,"context_line":"* There is a performance overhead to encapsulating the packets on the"},{"line_number":41,"context_line":"  hypervisor, and it also limits the ability to apply some performance"},{"line_number":42,"context_line":"  optimisations (such as using SR-IOV to provide direct access to the NICs from"},{"line_number":43,"context_line":"  the VMs by virtualising the PCIe bus)."},{"line_number":44,"context_line":"* The extra overhead in each packet can cause fragmentation, and reduces the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_e32e05f9","line":41,"range":{"start_line":41,"start_character":2,"end_line":41,"end_character":12},"updated":"2020-02-27 20:23:41.000000000","message":"This may not be the right word here. Per the teapot compute vision, we don\u0027t know if they are running a hypervisor or just containers inside the bare metal host.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":38,"context_line":"clusters, for several reasons:"},{"line_number":39,"context_line":""},{"line_number":40,"context_line":"* There is a performance overhead to encapsulating the packets on the"},{"line_number":41,"context_line":"  hypervisor, and it also limits the ability to apply some performance"},{"line_number":42,"context_line":"  optimisations (such as using SR-IOV to provide direct access to the NICs from"},{"line_number":43,"context_line":"  the VMs by virtualising the PCIe bus)."},{"line_number":44,"context_line":"* The extra overhead in each packet can cause fragmentation, and reduces the"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_465da2e7","line":41,"range":{"start_line":41,"start_character":2,"end_line":41,"end_character":12},"in_reply_to":"1fa4df85_e32e05f9","updated":"2020-02-28 05:03:02.000000000","message":"This is specifically talking about running k8s in VMs (e.g. on top of OpenStack, where we have Kuryr to avoid this double encapsulation).","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":43,"context_line":"  the VMs by virtualising the PCIe bus)."},{"line_number":44,"context_line":"* The extra overhead in each packet can cause fragmentation, and reduces the"},{"line_number":45,"context_line":"  bandwidth available at the edge."},{"line_number":46,"context_line":"* Broadcast, multicast and unknown unicast traffic is flooded to all possible"},{"line_number":47,"context_line":"  endpoints in the overlay network; doing this at multiple layers can increase"},{"line_number":48,"context_line":"  network load."},{"line_number":49,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_0349e1ce","line":46,"range":{"start_line":46,"start_character":13,"end_line":46,"end_character":22},"updated":"2020-02-27 20:23:41.000000000","message":"This is not true of all overlay implementations.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":54,"context_line":"* Encapsulation headers are carried only within the core of the network, where"},{"line_number":55,"context_line":"  bandwidth is less scarce and frame sizes can be adjusted to prevent"},{"line_number":56,"context_line":"  fragmentation."},{"line_number":57,"context_line":"* CNI plugins don\u0027t generally make significant use of broadcast or multicast."},{"line_number":58,"context_line":""},{"line_number":59,"context_line":".. _teapot-networking-provisioning:"},{"line_number":60,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_9f1754a4","line":57,"range":{"start_line":57,"start_character":14,"end_line":57,"end_character":76},"updated":"2020-02-28 16:00:22.000000000","message":"I don\u0027t think this is a correct assumption. I would not make this a general truth. I am not sure this sentence brings anything, so maybe worth removing?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"79c8f97b5b4a491295c5e4d3b18cd8a75ca34efb","unresolved":false,"context_lines":[{"line_number":60,"context_line":""},{"line_number":61,"context_line":"Provisioning Network"},{"line_number":62,"context_line":"--------------------"},{"line_number":63,"context_line":""},{"line_number":64,"context_line":"Generally bare-metal machines will need at least one interface connected to a"},{"line_number":65,"context_line":"provisioning network in order to boot using :abbr:`PXE (Pre-boot execution"},{"line_number":66,"context_line":"environment)`. Typically the provisioning network is required to be an untagged"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_2b6a74ed","line":63,"updated":"2020-03-03 17:34:31.000000000","message":"Kubernete\u0027s model of having everything flat has worked out pretty well for it. They usually try to solve the security aspect of it at a higher level then raw networking by assuming everything is untrusted at the network level.\nI like the approach. Istio is currently rather difficult to install but they are working on an operator that may make that manageable. Maybe Istio may be part of the solution rather than vlans/vxlans.\n\nBut specifically for provisioning, maybe we can flatten this out too. I\u0027ve wondered for a while, is there anything we could do to make it safe to provision over the same network as the rest of the traffic goes on. What this looks like exactly, I\u0027m not exactly sure. Some rough ideas for starting points:\nipxe on a readonly thumbdrive with an https ca pubkey for booting over https and a u2f key if there isn\u0027t a tpm to establish node identity?\nBoth devices are rather cheep these days. Guessing \u003c $25 per node. Probably better yet in bulk. You may be able to even do it with just a rasperry pi zero... $7. :)\n\nYeah, this takes a bit more effort to set up initially, but may be better in the long run then trying to keep a pure pxe provisioning network safe?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":66,"context_line":"environment)`. Typically the provisioning network is required to be an untagged"},{"line_number":67,"context_line":"VLAN."},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"PXE can be avoided by provisioning using virtual media (where the BMC attaches"},{"line_number":70,"context_line":"a virtual disk containing the boot image to the host\u0027s USB), but hardware"},{"line_number":71,"context_line":"support for doing this from Ironic is uneven (though rapidly improving). In"},{"line_number":72,"context_line":"addition, the Ironic agent typically communicates over this network for"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_e366052a","line":69,"range":{"start_line":69,"start_character":41,"end_line":69,"end_character":54},"updated":"2020-02-27 20:23:41.000000000","message":"These are typically painfully slow as well.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"79c8f97b5b4a491295c5e4d3b18cd8a75ca34efb","unresolved":false,"context_lines":[{"line_number":66,"context_line":"environment)`. Typically the provisioning network is required to be an untagged"},{"line_number":67,"context_line":"VLAN."},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"PXE can be avoided by provisioning using virtual media (where the BMC attaches"},{"line_number":70,"context_line":"a virtual disk containing the boot image to the host\u0027s USB), but hardware"},{"line_number":71,"context_line":"support for doing this from Ironic is uneven (though rapidly improving). In"},{"line_number":72,"context_line":"addition, the Ironic agent typically communicates over this network for"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_cb96403f","line":69,"range":{"start_line":69,"start_character":41,"end_line":69,"end_character":54},"in_reply_to":"1fa4df85_e366052a","updated":"2020-03-03 17:34:31.000000000","message":"As are most bios\u0027s boot time. :)\n\nI wonder if a hybrid approach may be better. use virtual media to boot ipxe or the like, then netboot the rest over https.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":78,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":79,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":80,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":81,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":82,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":83,"context_line":"the baremetal provisioner."},{"line_number":84,"context_line":""},{"line_number":85,"context_line":"The :abbr:`BMC (Baseboard management controller)`\\ s will be connected to a"},{"line_number":86,"context_line":"separate network that is reachable only from the management cluster."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_833b91e9","line":83,"range":{"start_line":81,"start_character":6,"end_line":83,"end_character":26},"updated":"2020-02-27 20:23:41.000000000","message":"Right, in my previous implementations(pre-OpenStack), after the clean process completes the host is migrated to the provisioning network via the ToR configuration for the appropriate port that has PXE available on it. I also switched the BIOS configuration to allow PXE on clean, and disable it when the hardware is transitioned to the user. However there are use cases where it may be \"ok\" for the user to provision their own image via PXE.\nI think this \"teapot\" spec should address how an image gets installed on the bare metal host and where the \"managed\" delineation is in this environment. This has ramifications for how a user would get console access for example.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":78,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":79,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":80,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":81,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":82,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":83,"context_line":"the baremetal provisioner."},{"line_number":84,"context_line":""},{"line_number":85,"context_line":"The :abbr:`BMC (Baseboard management controller)`\\ s will be connected to a"},{"line_number":86,"context_line":"separate network that is reachable only from the management cluster."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_46d4c2bd","line":83,"range":{"start_line":81,"start_character":6,"end_line":83,"end_character":26},"in_reply_to":"1fa4df85_833b91e9","updated":"2020-02-28 05:03:02.000000000","message":"I think your last point is covered in the compute doc. Essentially metal³ runs in the management cluster, so tenants can specify an image to boot but they don\u0027t get any control over the actual booting.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":121,"context_line":""},{"line_number":122,"context_line":"The ``LoadBalancer`` Service type uses an external :doc:`load balancer"},{"line_number":123,"context_line":"\u003cload-balancing\u003e` as a front end. Traffic from the load balancer is directed"},{"line_number":124,"context_line":"to a ``NodePort`` service within the tenant cluster."},{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_2303bd11","line":124,"range":{"start_line":124,"start_character":5,"end_line":124,"end_character":17},"updated":"2020-02-27 20:23:41.000000000","message":"This resolves some of the NAT issues with kubernetes, but can cause problems with scaling, migration, and the \"service discovery\" concepts of k8s.\nIdeally we would introduce a better way.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":124,"context_line":"to a ``NodePort`` service within the tenant cluster."},{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_de02bc09","line":127,"range":{"start_line":127,"start_character":61,"end_line":127,"end_character":66},"updated":"2020-02-27 20:23:41.000000000","message":"cluster?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":124,"context_line":"to a ``NodePort`` service within the tenant cluster."},{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_86315aa5","line":127,"range":{"start_line":127,"start_character":61,"end_line":127,"end_character":66},"in_reply_to":"1fa4df85_de02bc09","updated":"2020-02-28 05:03:02.000000000","message":"This is talking about managed kubernetes services like EKS; they have an Ingress controller that allows you to set up ELB in the cloud.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_de795c93","line":128,"range":{"start_line":128,"start_character":19,"end_line":128,"end_character":44},"updated":"2020-02-27 20:23:41.000000000","message":"Most cases this is handled by the tenant application deployment tool as floating IPs typically live in a different place in the cluster than the load balancer element.\nThis is less than ideal and something I hope we can improve on here. Especially if we can remove the implementation of floating IPs by using NAT.\nThis was a design principal in Octavia, that we do not require floating IPs for public addresses, you can use a public IP directly on the VIP.\nA limitation in OpenStack is you cannot \"reserve\" public IP addresses for tenants without using something like a floating IP. Again, room for improvement here.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1e185473","line":128,"range":{"start_line":128,"start_character":45,"end_line":128,"end_character":57},"updated":"2020-02-27 20:23:41.000000000","message":"We should account for not needing floating IPs as well. They are a crutch in some cases, especially in IPv6 deployments.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_bf07b01c","line":128,"range":{"start_line":128,"start_character":45,"end_line":128,"end_character":57},"in_reply_to":"1fa4df85_1e185473","updated":"2020-02-28 16:00:22.000000000","message":"I agree here. Let\u0027s ignore floating IPs. It\u0027s 2020.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"b3aadffe3054bff3ea227034deb7fac6abd32ba3","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_56b0c243","line":128,"range":{"start_line":128,"start_character":19,"end_line":128,"end_character":44},"in_reply_to":"1fa4df85_502b0b2f","updated":"2020-03-03 01:02:32.000000000","message":"Yeah, so in the context of OpenStack (and more, but...) a floating IP is a NAT element that sits somewhere in the cloud and does address translation (the NAT part).\n\nMaybe to help you with context, you cannot create an IPv6 floating IP in OpenStack. There is no such thing, mostly because it doesn\u0027t make a lot of sense.\n\nIPv4 or IPv6 VIPs that can be assigned to different resources is very valuable in \"clouds\".\n\nCurrent kubernetes relies heavily on NAT and the \"floating IP\" concept (or even floating DNS? lol). This is a performance problem, management headache, and has led to many workarounds (kuryr for example).\n\nMy point with my comments is we should avoid NAT in the design and improve IP management through reservations.\n\nIf you must have protocol conversion(not by default), then yes, push it to the edge. Load balancers, such as Octavia(grin) are a good choice for this.\n\nWe should not force tenants to use NAT (floating IPs) to gain a public address.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"de4a2f86fc929f13e14827e209e8bab66173f549","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_b916c754","line":128,"range":{"start_line":128,"start_character":19,"end_line":128,"end_character":44},"in_reply_to":"1fa4df85_56b0c243","updated":"2020-03-03 05:33:23.000000000","message":"OK, thanks, that helps.\n\nI think we are all on the same page - NAT sucks and we want to avoid it altogether by giving everyone publicly-routable IPv6 addresses.\n\nWe will still need a way to sell the tenant a public IPv4 address, but we only need to allow them to attach it to their load balancer, and at least in this doc we shouldn\u0027t call it a floating IP even though that\u0027s how clouds have typically referred to it when they sell you a public IPv4 address, because that\u0027s associated with a specific implementation (NAT).","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"79c8f97b5b4a491295c5e4d3b18cd8a75ca34efb","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_26a86b8b","line":128,"range":{"start_line":128,"start_character":19,"end_line":128,"end_character":44},"in_reply_to":"1fa4df85_b916c754","updated":"2020-03-03 17:34:31.000000000","message":"NAT solves one thing I think often gets forgotten.\n\nIn cloud technologies, you want to be able to separate stateful from that which is stateless.\n\nIn networking, stateful often shows up in two places. Either in DNS or in IP address.\n\nIf you have control of DNS, its best to just update your DNS records to point at the new IP address. But DNS sometimes is slow to update and often out of the control of those maintaining the workload. So its often required to keep an IP stable while moving around the workload. In IPv4, NAT often can be part of the solution to that problem.\n\nCan that solution work for IPv6 too? Absolutely. So I think its still probably valuable.\n\nThe other option I think is Mobile IPv6, but I\u0027m not sure its required that all IPv6 stacks support it. (Please correct me if I\u0027m wrong.) So maybe supporting NAT\u0027ed IPv6 still makes sense.\n\nNot speaking to what Teapot should choose specifically, but I hear a lot of bad things about NAT, but rarely do I hear someone speak up about addressing this issue.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fa121658","line":128,"range":{"start_line":128,"start_character":19,"end_line":128,"end_character":44},"in_reply_to":"1fa4df85_de795c93","updated":"2020-02-28 16:00:22.000000000","message":"again in agreement. Let\u0027s not count nat into this story. I know it cood look like it\u0027s removing a big chunk of the user base, but effectively it\u0027s not. IPv6 is a reality, and we have the ingress as abstration for v4.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":125,"context_line":""},{"line_number":126,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":127,"context_line":"load balancing (including TLS termination) in the underlying cloud for HTTP(S)"},{"line_number":128,"context_line":"traffic, including automatically configuring floating IPs. If Teapot provided"},{"line_number":129,"context_line":":ref:`such an Ingress controller \u003cteapot-load-balancing-ingress-controller\u003e`,"},{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_502b0b2f","line":128,"range":{"start_line":128,"start_character":19,"end_line":128,"end_character":44},"in_reply_to":"1fa4df85_fa121658","updated":"2020-02-28 18:24:32.000000000","message":"I\u0027d like to get some clarification on what we\u0027re saying here, because cloud networking is not my strong suit. (Before you ask, it\u0027s \"arguing on the Internet\", obviously.).\n\nWhat lines 130-131 are saying is that we could totally say that tenant clusters are IPv6 only - i.e. NodePort service types only get IPv6 addresses - and that you can only get IPv4 addresses by creating a LoadBalancer Service or an Ingress. (I\u0027m personally in favour of this plan BTW.) I think this is the same as what JP is saying?\n\nBut load balancers will also need to be able to get IP (v4 and v6) addresses. For HA they will need to be able to, uh, float between different instances of the load balancer, and in fact a user might sometimes want to move one to a completely different load balancer. Naively one might think of these IPs that float as \"floating IPs\", but perhaps that\u0027s not technically correct?\n\nIt\u0027s likely that this stuff is not explained clearly enough because I\u0027m not enough of an expert to cut through the fog of terminology (and also because info about floating IPs is awkwardly spread between this page, the load balancing one, and the dns one, since it touches all three). So rewrites are invited :)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"},{"line_number":132,"context_line":"could be confined to the Ingress controller."},{"line_number":133,"context_line":""},{"line_number":134,"context_line":"Implementation Options"},{"line_number":135,"context_line":"----------------------"},{"line_number":136,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_ba357edc","line":133,"updated":"2020-02-28 16:00:22.000000000","message":"I think this implementation of an ingress would be very dependent on what\u0027s running on the final compute node, and therefore the CNI plugin will have an impact, don\u0027t you think?\n\nShould we establish a first series of recommendations?\nIf that\u0027s the case, shouldn\u0027t we think of simplifying further by taking a stance on what\u0027s required?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"830d47dfd06a583076c623dbfc6079ddcabe485c","unresolved":false,"context_lines":[{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"},{"line_number":132,"context_line":"could be confined to the Ingress controller."},{"line_number":133,"context_line":""},{"line_number":134,"context_line":"Implementation Options"},{"line_number":135,"context_line":"----------------------"},{"line_number":136,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_ae5b7bf3","line":133,"in_reply_to":"1fa4df85_13481c5a","updated":"2020-03-09 18:57:56.000000000","message":"I am not focusing on which service the ingress connects to (as this should be okay using the kubernetes networking model by default).\n\nI am focusing on how the ingress itself is working. Ingress are generally services themselves, to be properly exposed.\nAll your text is valid.\n\nThis triggers a few questions.\nWhere does the ingress run? Inside the tenant cluster? Inside the management cluster (how many ingresses per tenant?)? From a user perspective, I shouldn\u0027t configure my k8s cluster to deploy an nginx ingress with whatever service type to be able to use ingress resources. It should \"just work\". So it means that the management cluster should deal with providing an ingress controller. Which means having an opinion. That is fine for me, I just want it to be clear. (Or we say \"it\u0027s out of scope).\n\nIf teapot has an official ingress, should we suppose it will run in the tenant cluster itself or not? it doesn\u0027t look clear to me. If it runs inside the tenant cluster, aren\u0027t you afraid of support cases of breakages due to \u003cingress controller\u003e not working with \u003cnetwork implemented in the tenant using cni\u003e. \n\nIt\u0027s late for me, so maybe these are just mumblings :)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"d2419b7222d8d30ebef5eec0bb4e2aea01077448","unresolved":false,"context_lines":[{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"},{"line_number":132,"context_line":"could be confined to the Ingress controller."},{"line_number":133,"context_line":""},{"line_number":134,"context_line":"Implementation Options"},{"line_number":135,"context_line":"----------------------"},{"line_number":136,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_eef5f357","line":133,"in_reply_to":"1fa4df85_ae5b7bf3","updated":"2020-03-09 19:30:03.000000000","message":"I think it\u0027s clearer if you read the whole load balancer page in conjunction with this than it is from this paragraph on its own. In this model you would have an Ingress controller in the tenant that simply proxies the data to a tenant-specific namespace in the management cluster, where *another* Ingress controller (could be nginx, Octavia, HW load balancer...) will do the actual load balancing. Virtual IPs would be handled at this level. The load balancer would connect to the tenant service directly over IPv6.\n\nSo short answer to your question is, it runs in the management cluster. Number of Ingresses per tenant will need to be controlled by quota.\n\nAlternatively, of course users are always free to run the nginx Ingress controller or something similar inside their own cluster, but it\u0027s up to them to make sure it works with whatever networking they have chosen.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"229445ef999152ea8be705d7ed289d6d74cb81b6","unresolved":false,"context_lines":[{"line_number":130,"context_line":"it might be a viable option to not support floating IPs (or even IPv4 at all)"},{"line_number":131,"context_line":"for the ``NodePort`` service type, so that the implementation of floating IPs"},{"line_number":132,"context_line":"could be confined to the Ingress controller."},{"line_number":133,"context_line":""},{"line_number":134,"context_line":"Implementation Options"},{"line_number":135,"context_line":"----------------------"},{"line_number":136,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_13481c5a","line":133,"in_reply_to":"1fa4df85_ba357edc","updated":"2020-03-02 23:17:47.000000000","message":"I missed this comment the first time around.\n\nAFAIK it\u0027s not dependent on the CNI plugin? An Ingress targets one or more Services (presumably of type NodePort), so if the CNI plugin allows the load balancer to connect to that service then it will work (and if it doesn\u0027t then nothing will work). Can you clarify what you mean?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":142,"context_line":"A good long-term implementation strategy might be to use ansible-networking to"},{"line_number":143,"context_line":"directly configure the top-of-rack switches. This would be driven by a"},{"line_number":144,"context_line":"Kubernetes controller running in the management cluster operating on a set of"},{"line_number":145,"context_line":"CRDs. The ansible-networking project supports a wide variety of hardware"},{"line_number":146,"context_line":"already. A minimal proof of concept for this controller `exists"},{"line_number":147,"context_line":"\u003chttps://github.com/bcrochet/physical-switch-operator\u003e`_."},{"line_number":148,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1eb4d434","line":145,"range":{"start_line":145,"start_character":0,"end_line":145,"end_character":4},"updated":"2020-02-27 20:23:41.000000000","message":"CustomResourceDefinition (CRD)\nWe should expand the acronyms throughout this document. There are a lot used, some that are domain specific.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":142,"context_line":"A good long-term implementation strategy might be to use ansible-networking to"},{"line_number":143,"context_line":"directly configure the top-of-rack switches. This would be driven by a"},{"line_number":144,"context_line":"Kubernetes controller running in the management cluster operating on a set of"},{"line_number":145,"context_line":"CRDs. The ansible-networking project supports a wide variety of hardware"},{"line_number":146,"context_line":"already. A minimal proof of concept for this controller `exists"},{"line_number":147,"context_line":"\u003chttps://github.com/bcrochet/physical-switch-operator\u003e`_."},{"line_number":148,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_061d6a0e","line":145,"range":{"start_line":145,"start_character":0,"end_line":145,"end_character":4},"in_reply_to":"1fa4df85_1eb4d434","updated":"2020-02-28 05:03:02.000000000","message":"Yep good catch. I got better at defining TLAs over time, but this doc was the first one I wrote.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"aecc4250e02c80d8029d8854cf5f281738228674","unresolved":false,"context_lines":[{"line_number":171,"context_line":"integration point for e.g. :ref:`Octavia \u003cteapot-load-balancing-octavia\u003e` to"},{"line_number":172,"context_line":"provide an abstraction over hardware load balancers."},{"line_number":173,"context_line":""},{"line_number":174,"context_line":"The abstraction point would be the k8s CRDs -- different controllers could be"},{"line_number":175,"context_line":"chosen to manage CRs (and those might in turn make use of additional non-public"},{"line_number":176,"context_line":"CRDs), but we would not attempt to build controllers with multiple plugin"},{"line_number":177,"context_line":"points that could lead to ballooning complexity."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_ded5dc44","line":174,"range":{"start_line":174,"start_character":35,"end_line":174,"end_character":43},"updated":"2020-02-27 20:23:41.000000000","message":"This slightly confuses me. Is \"teapot\" a specification for a new \"OpenStack style\" API on top of kubernetes clusters? or is it plugin extensions to a k8s API?\nMany of these plugin extensions already exist, like metal^3.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":171,"context_line":"integration point for e.g. :ref:`Octavia \u003cteapot-load-balancing-octavia\u003e` to"},{"line_number":172,"context_line":"provide an abstraction over hardware load balancers."},{"line_number":173,"context_line":""},{"line_number":174,"context_line":"The abstraction point would be the k8s CRDs -- different controllers could be"},{"line_number":175,"context_line":"chosen to manage CRs (and those might in turn make use of additional non-public"},{"line_number":176,"context_line":"CRDs), but we would not attempt to build controllers with multiple plugin"},{"line_number":177,"context_line":"points that could lead to ballooning complexity."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fa69b6bd","line":174,"range":{"start_line":174,"start_character":35,"end_line":174,"end_character":43},"in_reply_to":"1fa4df85_99d45ebf","updated":"2020-02-28 16:00:22.000000000","message":"that\u0027s my understanding. Isn\u0027t that awesome that we reuse instead of rebuild? :)","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"671ec37aa02571934fa392741a2f633855ab3afd","unresolved":false,"context_lines":[{"line_number":171,"context_line":"integration point for e.g. :ref:`Octavia \u003cteapot-load-balancing-octavia\u003e` to"},{"line_number":172,"context_line":"provide an abstraction over hardware load balancers."},{"line_number":173,"context_line":""},{"line_number":174,"context_line":"The abstraction point would be the k8s CRDs -- different controllers could be"},{"line_number":175,"context_line":"chosen to manage CRs (and those might in turn make use of additional non-public"},{"line_number":176,"context_line":"CRDs), but we would not attempt to build controllers with multiple plugin"},{"line_number":177,"context_line":"points that could lead to ballooning complexity."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_99d45ebf","line":174,"range":{"start_line":174,"start_character":35,"end_line":174,"end_character":43},"in_reply_to":"1fa4df85_ded5dc44","updated":"2020-02-27 21:31:12.000000000","message":"My reading of it is that project teapot is a reference implementation solving the problems raised in this doc by some custom code, some api\u0027s and a lot of existing projects integrated together?","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"a4247659178b3de30368d4b8318d921090fa2efa","unresolved":false,"context_lines":[{"line_number":174,"context_line":"The abstraction point would be the k8s CRDs -- different controllers could be"},{"line_number":175,"context_line":"chosen to manage CRs (and those might in turn make use of additional non-public"},{"line_number":176,"context_line":"CRDs), but we would not attempt to build controllers with multiple plugin"},{"line_number":177,"context_line":"points that could lead to ballooning complexity."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a3aecd95","line":177,"updated":"2020-02-27 18:01:44.000000000","message":"+1 to making a k8s native api. This allows switching out implementations like you mention much easier.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":174,"context_line":"The abstraction point would be the k8s CRDs -- different controllers could be"},{"line_number":175,"context_line":"chosen to manage CRs (and those might in turn make use of additional non-public"},{"line_number":176,"context_line":"CRDs), but we would not attempt to build controllers with multiple plugin"},{"line_number":177,"context_line":"points that could lead to ballooning complexity."}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_9a79a294","line":177,"in_reply_to":"1fa4df85_a3aecd95","updated":"2020-02-28 16:00:22.000000000","message":"agreed","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":12,"context_line":"Multi-tenant Network Model"},{"line_number":13,"context_line":"--------------------------"},{"line_number":14,"context_line":""},{"line_number":15,"context_line":"Support for VLANs and VxLAN is ubiquitous in modern data center network"},{"line_number":16,"context_line":"hardware, so this will be the basis for Teapot\u0027s networking. Each tenant will"},{"line_number":17,"context_line":"be assigned one or more V(x)LANs. (Separate failure domains will likely also"},{"line_number":18,"context_line":"have separate broadcast domains.) As machines are assigned to the tenant, the"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_1e26fdb8","line":15,"range":{"start_line":15,"start_character":22,"end_line":15,"end_character":27},"updated":"2020-02-29 16:46:56.000000000","message":"(or similar overlay networking solutions like Geneve) ?","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"d13cad7b33b48599077f5598747c545d605d4164","unresolved":false,"context_lines":[{"line_number":12,"context_line":"Multi-tenant Network Model"},{"line_number":13,"context_line":"--------------------------"},{"line_number":14,"context_line":""},{"line_number":15,"context_line":"Support for VLANs and VxLAN is ubiquitous in modern data center network"},{"line_number":16,"context_line":"hardware, so this will be the basis for Teapot\u0027s networking. Each tenant will"},{"line_number":17,"context_line":"be assigned one or more V(x)LANs. (Separate failure domains will likely also"},{"line_number":18,"context_line":"have separate broadcast domains.) As machines are assigned to the tenant, the"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_0df79561","line":15,"range":{"start_line":15,"start_character":22,"end_line":15,"end_character":27},"in_reply_to":"1fa4df85_0d0a4289","updated":"2020-03-03 11:14:41.000000000","message":"Didn\u0027t know that, thanks.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"229445ef999152ea8be705d7ed289d6d74cb81b6","unresolved":false,"context_lines":[{"line_number":12,"context_line":"Multi-tenant Network Model"},{"line_number":13,"context_line":"--------------------------"},{"line_number":14,"context_line":""},{"line_number":15,"context_line":"Support for VLANs and VxLAN is ubiquitous in modern data center network"},{"line_number":16,"context_line":"hardware, so this will be the basis for Teapot\u0027s networking. Each tenant will"},{"line_number":17,"context_line":"be assigned one or more V(x)LANs. (Separate failure domains will likely also"},{"line_number":18,"context_line":"have separate broadcast domains.) As machines are assigned to the tenant, the"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_0d0a4289","line":15,"range":{"start_line":15,"start_character":22,"end_line":15,"end_character":27},"in_reply_to":"1fa4df85_1e26fdb8","updated":"2020-03-02 23:17:47.000000000","message":"AFAIK there\u0027s ~no hw support for Geneve, so it\u0027s a non-starter for bare-metal. Even OVN relies on VxLAN for bare-metal.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case :abbr:`VTEP (VxLAN Tunnel EndPoint)`-capable edge switches and a"},{"line_number":24,"context_line":"VTEP-capable router will be required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_be3c89c8","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":56},"updated":"2020-02-29 16:46:56.000000000","message":"And even larger deployments may need routed spine-and-leaf topology solutions?","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"b49f0b8d0a91035ecc505443c8fa08fc8795ce03","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case :abbr:`VTEP (VxLAN Tunnel EndPoint)`-capable edge switches and a"},{"line_number":24,"context_line":"VTEP-capable router will be required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_cbe400e3","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":56},"in_reply_to":"1fa4df85_8b3da8c5","updated":"2020-03-03 17:08:59.000000000","message":"My intent was just not to preclude, not to advocate spine-leaf as a starting point.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"229445ef999152ea8be705d7ed289d6d74cb81b6","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case :abbr:`VTEP (VxLAN Tunnel EndPoint)`-capable edge switches and a"},{"line_number":24,"context_line":"VTEP-capable router will be required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_cfbe5a64","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":56},"in_reply_to":"1fa4df85_be3c89c8","updated":"2020-03-02 23:17:47.000000000","message":"We discussed the possibility of enforcing multi-tenancy through routing alone, but came to the conclusion it is not feasible. Graham can probably explain more convincingly, so I will refrain from making something up ;)\n\nThat said, as noted on line 16-18, in a large deployment, not all of a tenant\u0027s machines would necessarily end up in the same VxLAN; so at that point it would be a routed network.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"b23c912b30dfea2d9a59166cb1aadcb859cb69f9","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case :abbr:`VTEP (VxLAN Tunnel EndPoint)`-capable edge switches and a"},{"line_number":24,"context_line":"VTEP-capable router will be required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_ab5e6441","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":56},"in_reply_to":"1fa4df85_cbe400e3","updated":"2020-03-03 17:19:40.000000000","message":"Yeap - I was just being extra clear so others can see this discussion as well :)","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"d13cad7b33b48599077f5598747c545d605d4164","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case :abbr:`VTEP (VxLAN Tunnel EndPoint)`-capable edge switches and a"},{"line_number":24,"context_line":"VTEP-capable router will be required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_e829c789","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":56},"in_reply_to":"1fa4df85_cfbe5a64","updated":"2020-03-03 11:14:41.000000000","message":"Well IIUC routed spine-leaf data centre topology doesn\u0027t enforce multi-tenancy by routing alone.  vxlan-overlays still encapsulate layer 2 frames in UDP datagrams, but this is scoped to a \"leaf\" (e.g. rack) and layer 3 subnet-supernet routing is used to stitch these leaves together.  But I\u0027ll defer to Graham or Dan or others who know more about this than I do.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":19,"context_line":"Teapot controller will connect each to a private virtual network also assigned"},{"line_number":20,"context_line":"to the tenant."},{"line_number":21,"context_line":""},{"line_number":22,"context_line":"Small deployments can just use VLANs. Larger deployments may need VxLAN, and in"},{"line_number":23,"context_line":"this case :abbr:`VTEP (VxLAN Tunnel EndPoint)`-capable edge switches and a"},{"line_number":24,"context_line":"VTEP-capable router will be required."},{"line_number":25,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_8b3da8c5","line":22,"range":{"start_line":22,"start_character":38,"end_line":22,"end_character":56},"in_reply_to":"1fa4df85_e829c789","updated":"2020-03-03 16:56:01.000000000","message":"It all comes down to scale. Spine and Leaf may be needed in very large deployments, but it can add a lot of complexity (especially early on for us).\n\nWhen I was thinking about this, I was taking 100 nodes as the upper limit of nodes in a single k8s cluster (I am aware there is \"docs\" that say you can have more, but from experience, it ends up requiring manual tuning to really work after that). In this case scheduling nodes closer together (physically) can reduce the load on the east - west DC network fabric.\n\nOf course, we shouldn\u0027t do anything to preclude this in the future, but the core of this network proposal is \"the server gets an untagged network port(s) to send traffic both east west and north south\" which means as long as the spine \u0026 leaf allows each node to talk to the others over that port, we are OK.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":127,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":128,"context_line":"load balancing (including :abbr:`TLS (Transport Layer Security)` termination)"},{"line_number":129,"context_line":"in the underlying cloud for HTTP(S) traffic, including automatically"},{"line_number":130,"context_line":"configuring floating IPs. If Teapot provided :ref:`such an Ingress controller"},{"line_number":131,"context_line":"\u003cteapot-load-balancing-ingress-controller\u003e`, it might be a viable option to not"},{"line_number":132,"context_line":"support floating IPs (or even IPv4 at all) for the ``NodePort`` service type,"},{"line_number":133,"context_line":"so that the implementation of floating IPs could be confined to the :ref:`load"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_fe576101","line":130,"range":{"start_line":130,"start_character":11,"end_line":130,"end_character":24},"updated":"2020-02-29 16:46:56.000000000","message":"Do you want two different terms for floating IPs in the OpenStack NAT sense and for the virtual IP presented by a load balancer?","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":127,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":128,"context_line":"load balancing (including :abbr:`TLS (Transport Layer Security)` termination)"},{"line_number":129,"context_line":"in the underlying cloud for HTTP(S) traffic, including automatically"},{"line_number":130,"context_line":"configuring floating IPs. If Teapot provided :ref:`such an Ingress controller"},{"line_number":131,"context_line":"\u003cteapot-load-balancing-ingress-controller\u003e`, it might be a viable option to not"},{"line_number":132,"context_line":"support floating IPs (or even IPv4 at all) for the ``NodePort`` service type,"},{"line_number":133,"context_line":"so that the implementation of floating IPs could be confined to the :ref:`load"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_0b4f786b","line":130,"range":{"start_line":130,"start_character":11,"end_line":130,"end_character":24},"in_reply_to":"1fa4df85_cdeaeaaa","updated":"2020-03-03 16:56:01.000000000","message":"Yeah, not sure what to call them - something like \"externally routable IP addresses\" ? \"Public IPs\"?","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"229445ef999152ea8be705d7ed289d6d74cb81b6","unresolved":false,"context_lines":[{"line_number":127,"context_line":"Most managed Kubernetes services provide an Ingress controller that can set up"},{"line_number":128,"context_line":"load balancing (including :abbr:`TLS (Transport Layer Security)` termination)"},{"line_number":129,"context_line":"in the underlying cloud for HTTP(S) traffic, including automatically"},{"line_number":130,"context_line":"configuring floating IPs. If Teapot provided :ref:`such an Ingress controller"},{"line_number":131,"context_line":"\u003cteapot-load-balancing-ingress-controller\u003e`, it might be a viable option to not"},{"line_number":132,"context_line":"support floating IPs (or even IPv4 at all) for the ``NodePort`` service type,"},{"line_number":133,"context_line":"so that the implementation of floating IPs could be confined to the :ref:`load"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_cdeaeaaa","line":130,"range":{"start_line":130,"start_character":11,"end_line":130,"end_character":24},"in_reply_to":"1fa4df85_fe576101","updated":"2020-03-02 23:17:47.000000000","message":"Judging by comments on the previous patch set, apparently I do :D","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":173,"context_line":"integration point for e.g. :ref:`Octavia \u003cteapot-load-balancing-octavia\u003e` to"},{"line_number":174,"context_line":"provide an abstraction over hardware load balancers."},{"line_number":175,"context_line":""},{"line_number":176,"context_line":"The abstraction point would be the Kubernetes CRDs -- different controllers"},{"line_number":177,"context_line":"could be chosen to manage custom resources (and those might in turn make use of"},{"line_number":178,"context_line":"additional non-public CRDs), but we would not attempt to build controllers with"},{"line_number":179,"context_line":"multiple plugin points that could lead to ballooning complexity."}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_eba77c0a","line":179,"range":{"start_line":176,"start_character":0,"end_line":179,"end_character":64},"updated":"2020-03-03 16:56:01.000000000","message":"+100","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":69,"context_line":"PXE can be avoided by provisioning using virtual media (where the BMC attaches"},{"line_number":70,"context_line":"a virtual disk containing the boot image to the host\u0027s USB), but hardware"},{"line_number":71,"context_line":"support for doing this from Ironic is uneven (though rapidly improving) and it"},{"line_number":72,"context_line":"is considerably slower than PXE. In addition, the Ironic agent typically"},{"line_number":73,"context_line":"communicates over this network for purposes such as introspection of hosts or"},{"line_number":74,"context_line":"cleaning of disks."},{"line_number":75,"context_line":""},{"line_number":76,"context_line":"For the purpose of PXE booting, hosts could be left permanently connected to"},{"line_number":77,"context_line":"the provisioning network provided they are isolated from each other (e.g. using"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_d9079fc9","line":74,"range":{"start_line":72,"start_character":33,"end_line":74,"end_character":18},"updated":"2020-03-09 20:18:44.000000000","message":"Always, and all actions.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":77,"context_line":"the provisioning network provided they are isolated from each other (e.g. using"},{"line_number":78,"context_line":"private VLANs). This would have the downside that the main network interface of"},{"line_number":79,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":80,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":81,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":82,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":83,"context_line":"membership of this network will have to be changed dynamically in concert with"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_39fcb3b3","line":80,"range":{"start_line":80,"start_character":21,"end_line":80,"end_character":52},"updated":"2020-03-09 20:18:44.000000000","message":"Ironic is actively working on this, although with PXE the first call would always be unauthenticated beyond the ask for \"what is the uuid of the machine matching the MAC addresses I have\". Virtual media of course would be more secure.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":79,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":80,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":81,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":82,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":83,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":84,"context_line":"the baremetal provisioner."},{"line_number":85,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_391593e5","line":82,"range":{"start_line":82,"start_character":11,"end_line":82,"end_character":16},"updated":"2020-03-09 20:18:44.000000000","message":"s/could/should/ although as we\u0027ve learned in OpenStack, people are super uncomfortable with switch automation.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"5fc8d5dd1fb1f26b42863ddd87dc9f31935d4f76","unresolved":false,"context_lines":[{"line_number":79,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":80,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":81,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":82,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":83,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":84,"context_line":"the baremetal provisioner."},{"line_number":85,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_45cf21b0","line":82,"range":{"start_line":82,"start_character":11,"end_line":82,"end_character":16},"in_reply_to":"1fa4df85_067946c6","updated":"2020-03-11 16:19:43.000000000","message":"Network Teams are still typically siloed away from Host Teams. So getting the Network Team to give you enough credentials/access/etc to automate switch management is often a tough sell in a lot of organizations.\n\nOne of the lessons I learned from K8s networking though, is sometimes its better to consider solving issues higher up in the network layer cake then at lower levels. Contrast Neutron like l2 software defined networking and K8s\u0027s NetworkPolicy and Service Meshes. You can solve a bunch of problems at the l2 layer such as what can talk to what, but you could also solve it higher with NetworkPolicy/ServiceMesh. Its becoming increasingly rare when I actually need a virtual l2 network and NetworkPolicy/ServiceMeshes provide a lot more functionality with less operator complexity like trying to get the Network Team to give access to switches.\n\nI still wonder, is a dedicated network for provisioning really the best solution or can we get creative and come up with something higher up in the network stack that works just as well without the complexity at the Network layer where siloing makes things harder to solve?","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":79,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":80,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":81,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":82,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":83,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":84,"context_line":"the baremetal provisioner."},{"line_number":85,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_067946c6","line":82,"range":{"start_line":82,"start_character":11,"end_line":82,"end_character":16},"in_reply_to":"1fa4df85_391593e5","updated":"2020-03-10 18:56:43.000000000","message":"\"separate network\" meaning separate to the provisioning network. So what this is saying is that you could leave the (untagged) provisioning network always connected, but have IPA talk on a separate (tagged) network that only gets connected when the host is not provisioned. Or you could just keep it on the provisioning network and only connect that when the host is not provisioned. So either way, it doesn\u0027t allow the baremetal-operator+Ironic subsystem to not care about networking configuration.\n\n\nI hadn\u0027t heard that people are uncomfortable with switch automation, that\u0027s interesting. Do you have a sense of what in particular bothers them? I tend to be a bit skeptical of stuff like this because people have a tendency to suddenly not care when an exciting new product comes along that conflicts with their long-held but mostly baseless prejudice against doing some thing.\n\nFor example, when we first did Heat, it was a fundamental design tenet that we couldn\u0027t use ssh to provision a user\u0027s server. Totally unacceptable! People didn\u0027t even enable sshd and they never would because it was contrary to their immutable security policies. So we hacked together an agent-based thing (because as we all know, agents are always perfectly secure, except for sshd of course which is extremely dangerous). This was after I moved to OpenStack from another dead end project, where I was brought in after they\u0027d had an intern write an agent that pulled stuff without any particular provision for security and ran it as root... because I guess security policies said no sshd but they didn\u0027t say anything about agents with no security provisions at all that were written by random interns.\n\nAnyway, then Ansible came along and everybody loved it and wanted to know why Heat couldn\u0027t be more like that.\n\nSo yeah, interested to hear specific objections and think about how we can overcome them. But let\u0027s not hold up progress unless they\u0027re actually right :)","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7883a114617cabf675722891916407334c8f4c37","unresolved":false,"context_lines":[{"line_number":79,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":80,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":81,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":82,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":83,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":84,"context_line":"the baremetal provisioner."},{"line_number":85,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_fb726549","line":82,"range":{"start_line":82,"start_character":11,"end_line":82,"end_character":16},"in_reply_to":"1fa4df85_45cf21b0","updated":"2020-03-11 21:45:01.000000000","message":"\u003e Network Teams are still typically siloed away from Host Teams. So\n \u003e getting the Network Team to give you enough credentials/access/etc\n \u003e to automate switch management is often a tough sell in a lot of\n \u003e organizations.\n\nI really fear for anyone who is trying to build a private cloud while having two separate teams warring for control over the components *within a rack*.\n\n \u003e I still wonder, is a dedicated network for provisioning really the\n \u003e best solution or can we get creative and come up with something\n \u003e higher up in the network stack that works just as well without the\n \u003e complexity at the Network layer where siloing makes things harder\n \u003e to solve?\n\nI do like the idea of booting a minimal image with unique creds over virtualmedia, then using that to chain-load over HTTPS or something like that.\n\nThat said, the best we can hope to achieve by tweaking the provisioning is to allow the networking part and the provisioning part to operate independently of each other. The whole of Teapot\u0027s networking is predicated on being able to control the ToR switch, and that can\u0027t be fixed by adding additional layers because tenants want to be able to add their own layers and that\u0027s too many layers.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":9237,"name":"Kevin Fox","email":"kevin@efox.cc","username":"kfox1111"},"change_message_id":"d2d6e9196c94bb9411d610a7f860dff1943b4df1","unresolved":false,"context_lines":[{"line_number":79,"context_line":"the tenant worker would have to appear on a tagged VLAN. However, the Ironic"},{"line_number":80,"context_line":"agent\u0027s access to the Ironic APIs is unauthenticated, and therefore not safe to"},{"line_number":81,"context_line":"be carried over networks that have hosts allocated to tenants connected to"},{"line_number":82,"context_line":"them. This could occur over a separate network, but in any event hosts\u0027"},{"line_number":83,"context_line":"membership of this network will have to be changed dynamically in concert with"},{"line_number":84,"context_line":"the baremetal provisioner."},{"line_number":85,"context_line":""}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_a3d11b32","line":82,"range":{"start_line":82,"start_character":11,"end_line":82,"end_character":16},"in_reply_to":"1fa4df85_fb726549","updated":"2020-03-12 15:52:07.000000000","message":"\u003e I really fear for anyone who is trying to build a private cloud\n \u003e while having two separate teams warring for control over the\n \u003e components *within a rack*.\n\n\nYup. Sorry, it happens. :/\n\nFortunately things like vxlan exist.\n\n \n \u003e I do like the idea of booting a minimal image with unique creds\n \u003e over virtualmedia, then using that to chain-load over HTTPS or\n \u003e something like that.\n \u003e \n \u003e That said, the best we can hope to achieve by tweaking the\n \u003e provisioning is to allow the networking part and the provisioning\n \u003e part to operate independently of each other. The whole of Teapot\u0027s\n \u003e networking is predicated on being able to control the ToR switch,\n \u003e and that can\u0027t be fixed by adding additional layers because tenants\n \u003e want to be able to add their own layers and that\u0027s too many layers.\n\nI think it could very well be too constraining to potential project teapot users to require ToR switch control. Please do consider alternate options for when it is not possible. I think it may very well be possible to solve without ToR control. We just need to get a little creative.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":6994,"name":"Michael Chapman","email":"woppin@gmail.com","username":"michaeltchapman"},"change_message_id":"68b51ffc944036fc51c336aa736dc1163f3edbef","unresolved":false,"context_lines":[{"line_number":164,"context_line":".. _teapot-networking-neutron:"},{"line_number":165,"context_line":""},{"line_number":166,"context_line":"OpenStack Neutron"},{"line_number":167,"context_line":"~~~~~~~~~~~~~~~~~"},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"A good short-term option might be to use a cut-down Neutron installation as an"},{"line_number":170,"context_line":"implementation detail to manage the network. Using only the baremetal port"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_de746000","line":167,"updated":"2020-03-10 02:33:36.000000000","message":"FYI We have a neutron ml2 driver that uses ansible networking to do vlan segregation at the port level for each baremetal instance, though it\u0027s still fairly new. https://opendev.org/x/networking-ansible\n\nvxlan support is something we\u0027ll look at once we have hardware that could support it, I expect.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":164,"context_line":".. _teapot-networking-neutron:"},{"line_number":165,"context_line":""},{"line_number":166,"context_line":"OpenStack Neutron"},{"line_number":167,"context_line":"~~~~~~~~~~~~~~~~~"},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"A good short-term option might be to use a cut-down Neutron installation as an"},{"line_number":170,"context_line":"implementation detail to manage the network. Using only the baremetal port"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_46b09edd","line":167,"in_reply_to":"1fa4df85_de746000","updated":"2020-03-10 18:56:43.000000000","message":"Yep, that\u0027s what I had in mind for \"Using only the baremetal port types\". I should probably have mentioned that explicitly though.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"}],"doc/source/ideas/teapot/openstack-integration.rst":[{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"ff9f7a1f19b3cb05cd4fb2738ddcbadaa1b3ef32","unresolved":false,"context_lines":[{"line_number":2,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":3,"context_line":""},{"line_number":4,"context_line":"Many potential users of Teapot have large existing OpenStack deployments."},{"line_number":5,"context_line":"Teapot is not intended to be a wholesale replacement for OpenStack -- it does"},{"line_number":6,"context_line":"not deal with virtualisation at all, in fact -- so it is important that the two"},{"line_number":7,"context_line":"complement each other."},{"line_number":8,"context_line":""},{"line_number":9,"context_line":".. _teapot-openstack-managed-services:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1d84d90b","line":6,"range":{"start_line":5,"start_character":70,"end_line":6,"end_character":35},"updated":"2020-02-27 23:08:52.000000000","message":"This contradicts the statement \"Tenants can choose to use a container hypervisor (such as Kata) to further sandbox applications, traditional VMs (such as those managed by KubeVirt or OpenStack Nova), or both side-by-side in the same cluster.\" in the first paragraph of the \"compute\" page.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":2,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":3,"context_line":""},{"line_number":4,"context_line":"Many potential users of Teapot have large existing OpenStack deployments."},{"line_number":5,"context_line":"Teapot is not intended to be a wholesale replacement for OpenStack -- it does"},{"line_number":6,"context_line":"not deal with virtualisation at all, in fact -- so it is important that the two"},{"line_number":7,"context_line":"complement each other."},{"line_number":8,"context_line":""},{"line_number":9,"context_line":".. _teapot-openstack-managed-services:"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_86c15ae7","line":6,"range":{"start_line":5,"start_character":70,"end_line":6,"end_character":35},"in_reply_to":"1fa4df85_1d84d90b","updated":"2020-02-28 05:03:02.000000000","message":"I don\u0027t think it does. Configuring to use Kata, or running KubeVirt can be done entirely inside the tenant cluster, and Teapot itself need have no knowledge of this nor deal with it in any way.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11628,"name":"Michael Johnson","email":"johnsomor@gmail.com","username":"johnsom"},"change_message_id":"ff9f7a1f19b3cb05cd4fb2738ddcbadaa1b3ef32","unresolved":false,"context_lines":[{"line_number":24,"context_line":"could benefit users of either cloud type even absent the other."},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"Teapot\u0027s :ref:`load balancing API \u003cteapot-load-balancing-ingress-api\u003e` would"},{"line_number":27,"context_line":"arguably already be a managed service. :ref:`Octavia"},{"line_number":28,"context_line":"\u003cteapot-load-balancing-octavia\u003e` could possibly use it as a back-end as a first"},{"line_number":29,"context_line":"example."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":".. _teapot-openstack-side-by-side:"},{"line_number":32,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_fdf45d89","line":29,"range":{"start_line":27,"start_character":39,"end_line":29,"end_character":8},"updated":"2020-02-27 23:08:52.000000000","message":"Not sure I follow how an API (Octavia), using another API (teapot), to access a load balancing service would be useful.\nEspecially when there is already kubernetes integration with Octavia.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":24,"context_line":"could benefit users of either cloud type even absent the other."},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"Teapot\u0027s :ref:`load balancing API \u003cteapot-load-balancing-ingress-api\u003e` would"},{"line_number":27,"context_line":"arguably already be a managed service. :ref:`Octavia"},{"line_number":28,"context_line":"\u003cteapot-load-balancing-octavia\u003e` could possibly use it as a back-end as a first"},{"line_number":29,"context_line":"example."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":".. _teapot-openstack-side-by-side:"},{"line_number":32,"context_line":""}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_06ad6a30","line":29,"range":{"start_line":27,"start_character":39,"end_line":29,"end_character":8},"in_reply_to":"1fa4df85_fdf45d89","updated":"2020-02-28 05:03:02.000000000","message":"It\u0027s a case where the integration could go in either direction. So you might want the OpenStack cloud to contain the load balancers and have both clouds use them, or you might want the Teapot cloud to contain the load balancers and have both clouds use them.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":8482,"name":"Colleen Murphy","email":"colleen@gazlene.net","username":"krinkle"},"change_message_id":"a8b425dd7a9ebe57c60232ba449fc32b8561181b","unresolved":false,"context_lines":[{"line_number":93,"context_line":"There is a second use case, for running small OpenStack installations (similar"},{"line_number":94,"context_line":"to StarlingX) within a tenant. In these cases, the tenant OpenStack would still"},{"line_number":95,"context_line":"need to access storage from the Teapot cloud. This could possibly be achieved"},{"line_number":96,"context_line":"by federating the tenant Keystone to Teapot\u0027s Keystone and using hierarchical"},{"line_number":97,"context_line":"multi-tenancy so that projects in the tenant Keystone are actually sub-projects"},{"line_number":98,"context_line":"of the tenant\u0027s project in the Teapot Keystone. (The long-dead `Trio2o"},{"line_number":99,"context_line":"\u003chttps://opendev.org/x/trio2o#trio2o\u003e`_ project also offered a potential"},{"line_number":100,"context_line":"solution in the form of an API proxy, but probably not one worth resurrecting.)"},{"line_number":101,"context_line":"Use of an overlay network (e.g. OVN) would be required, since the tenant would"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_631d350e","line":98,"range":{"start_line":96,"start_character":65,"end_line":98,"end_character":46},"updated":"2020-02-27 17:40:10.000000000","message":"with some complicated mapping rules i think this could be a thing","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"cd2c4c3b0a59d5efd03fb4fa158091983bbb1846","unresolved":false,"context_lines":[{"line_number":93,"context_line":"There is a second use case, for running small OpenStack installations (similar"},{"line_number":94,"context_line":"to StarlingX) within a tenant. In these cases, the tenant OpenStack would still"},{"line_number":95,"context_line":"need to access storage from the Teapot cloud. This could possibly be achieved"},{"line_number":96,"context_line":"by federating the tenant Keystone to Teapot\u0027s Keystone and using hierarchical"},{"line_number":97,"context_line":"multi-tenancy so that projects in the tenant Keystone are actually sub-projects"},{"line_number":98,"context_line":"of the tenant\u0027s project in the Teapot Keystone. (The long-dead `Trio2o"},{"line_number":99,"context_line":"\u003chttps://opendev.org/x/trio2o#trio2o\u003e`_ project also offered a potential"},{"line_number":100,"context_line":"solution in the form of an API proxy, but probably not one worth resurrecting.)"},{"line_number":101,"context_line":"Use of an overlay network (e.g. OVN) would be required, since the tenant would"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_a6ab361b","line":98,"range":{"start_line":96,"start_character":65,"end_line":98,"end_character":46},"in_reply_to":"1fa4df85_631d350e","updated":"2020-02-28 05:03:02.000000000","message":"\"Colleen says this could totally be a thing.\"","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":54,"context_line":""},{"line_number":55,"context_line":"However, the ideal for this type of deployment would be to allow servers to be"},{"line_number":56,"context_line":"dynamically moved between the OpenStack and Teapot clouds. Sharing inventory"},{"line_number":57,"context_line":"with OpenStack\u0027s Ironic might be simple enough -- if Metal³ was configured to"},{"line_number":58,"context_line":"use the OpenStack cloud\u0027s Ironic then a small component could claim hosts in"},{"line_number":59,"context_line":"OpenStack Placement and create corresponding BareMetalHost objects in Teapot."},{"line_number":60,"context_line":"Both clouds would end up manipulating the top-of-rack switch configuration for"},{"line_number":61,"context_line":"a host, but presumably only at different times."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Switching hosts between acting as OpenStack compute nodes and being available"},{"line_number":64,"context_line":"to Teapot tenants would be more complex, since it would require interaction"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_f9c13b55","line":61,"range":{"start_line":57,"start_character":50,"end_line":61,"end_character":47},"updated":"2020-03-09 20:18:44.000000000","message":"Claiming the host in placement wouldn\u0027t even really be required. If somehow there is a collision, the virt driver will reject the request because the node has already been picked up and the status will be resynced. I guess for much larger clouds, a claim is a good idea.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":54,"context_line":""},{"line_number":55,"context_line":"However, the ideal for this type of deployment would be to allow servers to be"},{"line_number":56,"context_line":"dynamically moved between the OpenStack and Teapot clouds. Sharing inventory"},{"line_number":57,"context_line":"with OpenStack\u0027s Ironic might be simple enough -- if Metal³ was configured to"},{"line_number":58,"context_line":"use the OpenStack cloud\u0027s Ironic then a small component could claim hosts in"},{"line_number":59,"context_line":"OpenStack Placement and create corresponding BareMetalHost objects in Teapot."},{"line_number":60,"context_line":"Both clouds would end up manipulating the top-of-rack switch configuration for"},{"line_number":61,"context_line":"a host, but presumably only at different times."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Switching hosts between acting as OpenStack compute nodes and being available"},{"line_number":64,"context_line":"to Teapot tenants would be more complex, since it would require interaction"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_94c004e3","line":61,"range":{"start_line":57,"start_character":50,"end_line":61,"end_character":47},"in_reply_to":"1fa4df85_f9c13b55","updated":"2020-03-10 18:56:43.000000000","message":"Hmm, good point. Does Ironic report to placement after a server has been provisioned? I just kind of assumed that was left up to Nova.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"}],"doc/source/ideas/teapot/storage.rst":[{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"34cbd129b53047752e97338c2889f1e222a523b7","unresolved":false,"context_lines":[{"line_number":7,"context_line":"possible for tenants to make use of hyperconverged storage inside their own"},{"line_number":8,"context_line":"cluster (for example, using Rook_), this usually makes sense only for clusters"},{"line_number":9,"context_line":"that are essentially fixed. To take advantage of the highly dynamic environment"},{"line_number":10,"context_line":"offered by a cloud like Teapot, a shared storage pool is needed."},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"To efficiently run hyperconverged storage -- that is to say, both compute and"},{"line_number":13,"context_line":"storage workloads on the same hosts -- requires a somewhat specialised choice"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_1fe28402","line":10,"updated":"2020-02-28 16:00:22.000000000","message":"nit:\nI think if you clarify this by rephrasing it differently:\n- \"hyperconverged storage\" happens inside the compute cluster, and therefore is out of scope for teapot (you run what you want within the cluster)\n- \"shared storage pool\" is in the scope of teapot.\n\nThat\u0027s basically what you said here, for the correct reasons explained in this paragraph.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"7b916e60b5c328343046f7330f45448c1515f45e","unresolved":false,"context_lines":[{"line_number":7,"context_line":"possible for tenants to make use of hyperconverged storage inside their own"},{"line_number":8,"context_line":"cluster (for example, using Rook_), this usually makes sense only for clusters"},{"line_number":9,"context_line":"that are essentially fixed. To take advantage of the highly dynamic environment"},{"line_number":10,"context_line":"offered by a cloud like Teapot, a shared storage pool is needed."},{"line_number":11,"context_line":""},{"line_number":12,"context_line":"To efficiently run hyperconverged storage -- that is to say, both compute and"},{"line_number":13,"context_line":"storage workloads on the same hosts -- requires a somewhat specialised choice"}],"source_content_type":"text/x-rst","patch_set":1,"id":"1fa4df85_30cc4fbb","line":10,"in_reply_to":"1fa4df85_1fe28402","updated":"2020-02-28 18:24:32.000000000","message":"Thanks, always good to hear where things could be clearer. I\u0027ll try to rework this a bit.","commit_id":"363f5f0062efdbb55398abfe42bcd5f9eab7fadb"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":8,"context_line":"Tenants can always choose to use hyperconverged storage -- that is to say, both"},{"line_number":9,"context_line":"compute and storage workloads on the same hosts -- without involvement or"},{"line_number":10,"context_line":"permission from Teapot. (For example, by using Rook_.) However, this usually"},{"line_number":11,"context_line":"makes sense only for clusters that are essentially fixed. Hyperconverged"},{"line_number":12,"context_line":"storage has the effect of tightly coupling storage to the size of the cluster."},{"line_number":13,"context_line":"Tenants with disproportionately large amounts of data but modest compute needs"},{"line_number":14,"context_line":"(and sometimes vice-versa) would not be served efficiently. Changing the size"},{"line_number":15,"context_line":"of the cluster results in rebalancing of storage, so this is not suitable for"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_e3726ee3","line":12,"range":{"start_line":11,"start_character":58,"end_line":12,"end_character":78},"updated":"2020-02-29 16:46:56.000000000","message":"Yeah, you can\u0027t independently scale storage and compute resources.\n\nAlso, with hyperconverged storage the life cycle of the compute and storage in a cluster are tightly coupled.  You can\u0027t rent compute for a couple days, free it up, and come back two weeks later and pick up the storage it was using.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":54,"context_line":"applications where multiple pods are writing to the same filesystem in"},{"line_number":55,"context_line":"parallel."},{"line_number":56,"context_line":""},{"line_number":57,"context_line":"Manila\u0027s architecture is relatively simple already. It would be helpful if the"},{"line_number":58,"context_line":"dependency on RabbitMQ could be removed (to be replaced with e.g. json-rpc in"},{"line_number":59,"context_line":"the same way that Ironic has in Metal³), but this would require more"},{"line_number":60,"context_line":"investigation. An Operator for deploying and managing Manila on Kubernetes is"},{"line_number":61,"context_line":"under development."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Manila already exists in"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_6350be33","line":60,"range":{"start_line":57,"start_character":0,"end_line":60,"end_character":14},"updated":"2020-02-29 16:46:56.000000000","message":"+1","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":67,"context_line":""},{"line_number":68,"context_line":"OpenStack Cinder"},{"line_number":69,"context_line":"~~~~~~~~~~~~~~~~"},{"line_number":70,"context_line":""},{"line_number":71,"context_line":"Cinder is more limited than Manila in the sense that it can provide only \u0027RWO\u0027"},{"line_number":72,"context_line":"(Read/Write One) access to persistent storage. Since most Kubernetes storage is"},{"line_number":73,"context_line":"file-based, it also involves a translation from file to block storage. However,"},{"line_number":74,"context_line":"this type of storage may be useful for some use cases. Kubernetes does provide"},{"line_number":75,"context_line":"for block storage volumes, and these are used by KubeVirt in particular to"},{"line_number":76,"context_line":"provide persistent storage for VMs. So this is likely to be a common use case."},{"line_number":77,"context_line":""},{"line_number":78,"context_line":"Much of the complexity in Cinder is linked to the need to provide agents"},{"line_number":79,"context_line":"running on Nova compute hosts. Since Teapot is a baremetal-only service, only"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_2318867f","line":76,"range":{"start_line":70,"start_character":0,"end_line":76,"end_character":35},"updated":"2020-02-29 16:46:56.000000000","message":"Perhaps something along these lines:\n\nCinder is more limited than Manila in the sense that it can only provide \u0027RWO\u0027 (Read/Write One) access to persistent storage to most applications.  Kubernetes volume mounts are generally file based -- Kubernetes creates its own local file system on block devices if it does not discover one there already.  That said, Kubernetes recently added *raw* block mode support, which does support RWX mode for specialized applications that can work with block device offsets rather than file system paths.  Kubevirt in particular is expected to use raw block mode persistent volumes, so this is likely to be a common use case.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":67,"context_line":""},{"line_number":68,"context_line":"OpenStack Cinder"},{"line_number":69,"context_line":"~~~~~~~~~~~~~~~~"},{"line_number":70,"context_line":""},{"line_number":71,"context_line":"Cinder is more limited than Manila in the sense that it can provide only \u0027RWO\u0027"},{"line_number":72,"context_line":"(Read/Write One) access to persistent storage. Since most Kubernetes storage is"},{"line_number":73,"context_line":"file-based, it also involves a translation from file to block storage. However,"},{"line_number":74,"context_line":"this type of storage may be useful for some use cases. Kubernetes does provide"},{"line_number":75,"context_line":"for block storage volumes, and these are used by KubeVirt in particular to"},{"line_number":76,"context_line":"provide persistent storage for VMs. So this is likely to be a common use case."},{"line_number":77,"context_line":""},{"line_number":78,"context_line":"Much of the complexity in Cinder is linked to the need to provide agents"},{"line_number":79,"context_line":"running on Nova compute hosts. Since Teapot is a baremetal-only service, only"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_0bebf812","line":76,"range":{"start_line":70,"start_character":0,"end_line":76,"end_character":35},"in_reply_to":"1fa4df85_2318867f","updated":"2020-03-03 16:56:01.000000000","message":"As K8s has support for supplying multiple storage types to clusters, I do think including both is useful.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"94e0bd3307d59c90569a70f8c7fcd4d64966389a","unresolved":false,"context_lines":[{"line_number":78,"context_line":"Much of the complexity in Cinder is linked to the need to provide agents"},{"line_number":79,"context_line":"running on Nova compute hosts. Since Teapot is a baremetal-only service, only"},{"line_number":80,"context_line":"the parts of Cinder needed to provide storage to Ironic servers are required."},{"line_number":81,"context_line":"Unfortunately, Cinder is quite heavily dependent on RabbitMQ. However, there"},{"line_number":82,"context_line":"may be scope for simplification through further work with the Cinder community."},{"line_number":83,"context_line":""},{"line_number":84,"context_line":"Cinder has a dependency on Barbican for supporting encrypted volumes. Encrypted"},{"line_number":85,"context_line":"volume support is not required but would be nice to have. In the short term it"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_83085a2c","line":82,"range":{"start_line":81,"start_character":70,"end_line":82,"end_character":78},"updated":"2020-02-29 16:46:56.000000000","message":"If you remove the nova interaction and the Cinder backup service, Cinder and Manila have similer architectures: an API service, a Scheduler Service, and a Volume/Share service with aync messaging among these via rabbitmq and resource state persistence in a relational database.  I\u0027d expect that the work to leverage an administratively owned relational database running in K8s and the work to replace rabbitmq with json-rpc would be very similar for Cinder and Manila.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":84,"context_line":"Cinder has a dependency on Barbican for supporting encrypted volumes. Encrypted"},{"line_number":85,"context_line":"volume support is not required but would be nice to have. In the short term it"},{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_0b39b8a2","line":87,"range":{"start_line":87,"start_character":31,"end_line":87,"end_character":49},"updated":"2020-03-03 16:56:01.000000000","message":"Conceptually, this is a good idea, unfortunately right now, with the current implementations, it is not. (secrets are stored un encrypted in etcd, when they get mounted to a Pod, they can exist on disk on the node unencrypted, etc). \n\nPotentially adding a vault or exposing a multi tenant barbican could help, but that is extra work.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"4c4d7133c9ace20983a78891ce3d67e8c2518d5a","unresolved":false,"context_lines":[{"line_number":84,"context_line":"Cinder has a dependency on Barbican for supporting encrypted volumes. Encrypted"},{"line_number":85,"context_line":"volume support is not required but would be nice to have. In the short term it"},{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_48a8118c","line":87,"range":{"start_line":87,"start_character":31,"end_line":87,"end_character":49},"in_reply_to":"1fa4df85_0633ef15","updated":"2020-03-03 21:42:45.000000000","message":"Yeah, was just reading up on that. Agree that it doesn\u0027t help very much if the keys are stored in plaintext in the same DB as the encrypted data.\n\nAll of the existing KMS providers are for public clouds afaict?\nYou\u0027d think that Vault would be one of the first ones implemented, but I can only find an archived repo from Oracle with a single commit (code dump). And all of the instructions for running Vault in k8s at all end up using a cloud\u0027s key management service for unsealing. It\u0027s turtles all the way down.\n\nI\u0027m starting to wonder if we need a separate key management doc, since there literally doesn\u0027t seem to be another way to protect secrets without going to public cloud.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"c965935c1352ee9321a66cc21defb631a9058c3f","unresolved":false,"context_lines":[{"line_number":84,"context_line":"Cinder has a dependency on Barbican for supporting encrypted volumes. Encrypted"},{"line_number":85,"context_line":"volume support is not required but would be nice to have. In the short term it"},{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_260dcba0","line":87,"range":{"start_line":87,"start_character":31,"end_line":87,"end_character":49},"in_reply_to":"1fa4df85_0b39b8a2","updated":"2020-03-03 17:42:36.000000000","message":"Secrets don\u0027t have to be stored unencrypted: https://kubernetes.io/docs/tasks/administer-cluster/encrypt-data/\n\nAnd they don\u0027t have to be mounted in a Pod. In fact we wouldn\u0027t want that, because the secret belongs to the tenant and not to the Cinder service. So we\u0027d use the user\u0027s token passed to Castellan, and it would make a call to the k8s API, authed via Keystone using that token, instead of using Barbican as the backend.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"c7e61de405dac95c06f340177b7b08512612d3ac","unresolved":false,"context_lines":[{"line_number":84,"context_line":"Cinder has a dependency on Barbican for supporting encrypted volumes. Encrypted"},{"line_number":85,"context_line":"volume support is not required but would be nice to have. In the short term it"},{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_0633ef15","line":87,"range":{"start_line":87,"start_character":31,"end_line":87,"end_character":49},"in_reply_to":"1fa4df85_260dcba0","updated":"2020-03-03 17:54:46.000000000","message":"yeah - to be safe [2] you have to run a KMS provider[1] (otherwise the encrypted secrets are encrypted with a key that is also on the disk), at which point you are running barbican / vault / $HSM anyway \n\n1 - https://kubernetes.io/docs/tasks/administer-cluster/kms-provider/\n\n2 - from the page you linked :\n\n\u003e Storing the raw encryption key in the EncryptionConfig only moderately improves your security posture, compared to no encryption. Please use kms provider for additional security. By default, the identity provider is used to protect secrets in etcd, which provides no encryption. EncryptionConfiguration was introduced to encrypt secrets locally, with a locally managed key.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"adc6d2ad6335e022f9aa0c6b76595d5c56528f17","unresolved":false,"context_lines":[{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"},{"line_number":91,"context_line":"cloud-provider-openstack_."},{"line_number":92,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_e187bc0e","line":89,"updated":"2020-03-02 10:44:03.000000000","message":"It occurs to me that we may need a brief discussion of Object storage, especially given the emergence of Object Bucket Claims [1] alongside Kubernetes Persistent Volume Claims.  Some observations:\n\n  * Apps running in K8s may use OBCs alongside PVCs\n  * These may be fulfilled by Object services running\n     - hyperconverged using storage native to physical compute nodes in the cluser\n     - provided by the cloud, like Manila or Cinder\n     - from a foreign cloud\n  * Reasons for having a local cloud-provider option\n     - save bandwidth/cost/latency vs foreign cloud\n     - scale independently of compute vs hyperconverged\n     - independent life-cycle vs hyperconverged: can persist objects even when the compute cluster is spun down so the stored objects can be used later after spinning it back up\n     - straightforward object sharing across tenant k8s clusters (just share required secrets)\n  * local cloud-provider supplied Object storage service:\n    - can be classical Swift or a service (e.g. Ceph RGW) that emulates Swift\n    - may use S3 API rather than classical Swift API for ease of OBC integration\n    - is integrated with Keystone multitenancy: different tenant K8s clusters get different client credentials for getting and putting objects to the cloud provider object service and objects/buckets within the service are completely segregated by tenant ownership.\n     \n[1] https://rook.io/docs/rook/v1.1/ceph-object-bucket-claim.html [Note that although OBCs are coming out of Rook, they are not conceived as Rook-specific]","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":9003,"name":"Tom Barron","email":"tpb@dyncloud.net","username":"tbarron"},"change_message_id":"87db7e297e6e0c1b4fb3c6ea09f3e8a49bf5c7ea","unresolved":false,"context_lines":[{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"},{"line_number":91,"context_line":"cloud-provider-openstack_."},{"line_number":92,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_e8bf875e","line":89,"in_reply_to":"1fa4df85_e187bc0e","updated":"2020-03-03 12:07:26.000000000","message":"The OBC KEP draft: https://github.com/kubernetes/enhancements/pull/1383/files","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":8099,"name":"Graham Hayes","email":"gr@ham.ie","username":"graham"},"change_message_id":"ded9be44ebf55be82048b22ea7fddaf26e1430f5","unresolved":false,"context_lines":[{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"},{"line_number":91,"context_line":"cloud-provider-openstack_."},{"line_number":92,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_ebf53cee","line":89,"in_reply_to":"1fa4df85_e8bf875e","updated":"2020-03-03 16:56:01.000000000","message":"OBC seems interesting - we should defintely track it for teapot","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":17068,"name":"Jean-Philippe Evrard","email":"openstack@a.spamming.party","username":"evrardjp"},"change_message_id":"d5ba9ccc3a1564a9c0e410f8edf0e89e03975aea","unresolved":false,"context_lines":[{"line_number":86,"context_line":"could be obtained by deploying Barbican; in the long term it might be better to"},{"line_number":87,"context_line":"adapt Cinder to be able to use Kubernetes Secrets (perhaps via another key"},{"line_number":88,"context_line":"manager back-end to Castellan)."},{"line_number":89,"context_line":""},{"line_number":90,"context_line":"A :abbr:`CSI (Container Storage Interface)` plugin for Cinder already exists in"},{"line_number":91,"context_line":"cloud-provider-openstack_."},{"line_number":92,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"1fa4df85_ae237ba6","line":89,"in_reply_to":"1fa4df85_ebf53cee","updated":"2020-03-09 18:41:32.000000000","message":"Agreed.","commit_id":"1f9515b9037633b4abb8acf4dc29ed4befb7b745"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":84,"context_line":"expected to make use of raw block mode persistent volumes for backing virtual"},{"line_number":85,"context_line":"machines, so this is likely to be a common use case."},{"line_number":86,"context_line":""},{"line_number":87,"context_line":"Much of the complexity in Cinder is linked to the need to provide agents"},{"line_number":88,"context_line":"running on Nova compute hosts. Since Teapot is a baremetal-only service, only"},{"line_number":89,"context_line":"the parts of Cinder needed to provide storage to Ironic servers are required."},{"line_number":90,"context_line":"Unfortunately, Cinder is quite heavily dependent on RabbitMQ. However, there"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_1976973f","line":87,"range":{"start_line":87,"start_character":50,"end_line":87,"end_character":72},"updated":"2020-03-09 20:18:44.000000000","message":"It would be good to clarify what the agents are doing, if they are doing multipath io handling with a backend SAN, then they may still be required depending on the SAN type. Example being SANs that can handle \"RWX\", but only from only from a single controller or controller path at a time in order to address block address locking concurrency issues.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":11655,"name":"Julia Kreger","email":"juliaashleykreger@gmail.com","username":"jkreger","status":"Flying to the moon with a Jetpack!"},"change_message_id":"7cab13b32a25d76b7d30188bc36ea1a020d165c3","unresolved":false,"context_lines":[{"line_number":85,"context_line":"machines, so this is likely to be a common use case."},{"line_number":86,"context_line":""},{"line_number":87,"context_line":"Much of the complexity in Cinder is linked to the need to provide agents"},{"line_number":88,"context_line":"running on Nova compute hosts. Since Teapot is a baremetal-only service, only"},{"line_number":89,"context_line":"the parts of Cinder needed to provide storage to Ironic servers are required."},{"line_number":90,"context_line":"Unfortunately, Cinder is quite heavily dependent on RabbitMQ. However, there"},{"line_number":91,"context_line":"may be scope for simplification through further work with the Cinder community."},{"line_number":92,"context_line":"The remaining portions of Cinder are architecturally very similar to Manila, so"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_f98ffb26","line":89,"range":{"start_line":88,"start_character":31,"end_line":89,"end_character":77},"updated":"2020-03-09 20:18:44.000000000","message":"I\u0027m not sure I understand what is being said here. Because while there is BFV functionality, it doesn\u0027t sound like it is being used thus the mention of ironic seems confusing.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"},{"author":{"_account_id":4257,"name":"Zane Bitter","email":"zbitter@redhat.com","username":"zaneb"},"change_message_id":"ab1560da13c4ed154c74706bbe252f2e99dd90d7","unresolved":false,"context_lines":[{"line_number":85,"context_line":"machines, so this is likely to be a common use case."},{"line_number":86,"context_line":""},{"line_number":87,"context_line":"Much of the complexity in Cinder is linked to the need to provide agents"},{"line_number":88,"context_line":"running on Nova compute hosts. Since Teapot is a baremetal-only service, only"},{"line_number":89,"context_line":"the parts of Cinder needed to provide storage to Ironic servers are required."},{"line_number":90,"context_line":"Unfortunately, Cinder is quite heavily dependent on RabbitMQ. However, there"},{"line_number":91,"context_line":"may be scope for simplification through further work with the Cinder community."},{"line_number":92,"context_line":"The remaining portions of Cinder are architecturally very similar to Manila, so"}],"source_content_type":"text/x-rst","patch_set":4,"id":"1fa4df85_8b621fad","line":89,"range":{"start_line":88,"start_character":31,"end_line":89,"end_character":77},"in_reply_to":"1fa4df85_f98ffb26","updated":"2020-03-10 18:56:43.000000000","message":"TBH my knowledge of Cinder architecture is pretty minimal. My understanding was that the code path for dealing with baremetal servers can be considerably simpler than for VMs. (Complexity doesn\u0027t go away, but it gets punted to the CSI driver.) This may be incorrect though.","commit_id":"e77cfc3208b5c65f8031a0cadbe65f116419a193"}]}
