)]}'
{"/COMMIT_MSG":[{"author":{"_account_id":6962,"name":"Kashyap Chamarthy","email":"kchamart@redhat.com","username":"kashyapc"},"change_message_id":"7be61b361dba9206f3abf45120a9f0730838c8c1","unresolved":false,"context_lines":[{"line_number":11,"context_line":"   https://review.openstack.org/#/q/topic:bp/amd-sev-libvirt-support"},{"line_number":12,"context_line":""},{"line_number":13,"context_line":"but unfortunately did not quite reach completion in time for the Stein"},{"line_number":14,"context_line":"release.  So re-submit the spec for approval for Stein, according to"},{"line_number":15,"context_line":"the process described at:"},{"line_number":16,"context_line":""},{"line_number":17,"context_line":"   https://specs.openstack.org/openstack/nova-specs/readme.html#previously-approved-specifications"}],"source_content_type":"text/x-gerrit-commit-message","patch_set":1,"id":"5fc1f717_6709c92a","line":14,"range":{"start_line":14,"start_character":49,"end_line":14,"end_character":54},"updated":"2019-03-08 12:33:12.000000000","message":"s/Stein/Train/ :-)","commit_id":"9bf96f8b6449e439098bd41daae16d53873a5b68"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"406e0ef9a80e90a4085880d6e25b2f0cf2cd0afe","unresolved":false,"context_lines":[{"line_number":23,"context_line":"  \u003cmemoryBacking\u003e. 
It is changed because, as pointed out in the"},{"line_number":24,"context_line":"  review, we cannot use \u003chard_limit\u003e;"},{"line_number":25,"context_line":""},{"line_number":26,"context_line":"- Incorporate a lot of feedback and new information which has arisen"},{"line_number":27,"context_line":"  regarding memory locking and accounting, both in the review and"},{"line_number":28,"context_line":"  elsewhere (mailing lists etc.)."},{"line_number":29,"context_line":""}],"source_content_type":"text/x-gerrit-commit-message","patch_set":11,"id":"ffb9cba7_e8b59def","line":26,"range":{"start_line":26,"start_character":36,"end_line":26,"end_character":68},"updated":"2019-04-23 15:47:14.000000000","message":"In the words of The Dude... \"new shit has come to light, man\"","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"}],"specs/train/approved/amd-sev-libvirt-support.rst":[{"author":{"_account_id":1779,"name":"Daniel Berrange","email":"berrange@redhat.com","username":"berrange"},"change_message_id":"10a008e7b19dd4186d64e8e47e054904160d61f7","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_9df1346e","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"updated":"2019-03-14 17:07:04.000000000","message":"From the libvirt POV, I would advise 
extreme caution at trying to calculate \u0026 apply any kind of \"hard limit\" for an individual QEMU process.\n\nWhen libvirt first added cgroups hard limit support we attempted to enforce a hard limit over all QEMU instances by default to provide protection for the host against a memory DoS by QEMU(s).\n\nWe went through many different algorithms for approximating the expected worst-case QEMU memory usage under normal operating conditions. We were never successful in this endeavour, and as a result had a constant flow of bug reports from users about their VMs being killed by the hard limit during normal usage.\n\nThe final most generous algorithm we applied was\n\n   (1 + k) * (domain memory + total video memory) + (32MB for cache per each disk) + F\n\n    where k \u003d 0.5 and F \u003d 400MB. \n\nOn a 1 GB guest this gave a hard limit of 1.9 GB and users still saw the VMs being killed.\n\nThe proposed algorithm in this spec is much less generous so I would expect it to result in many dead VMs.\n\nAfter discussions with QEMU maintainers we came to the conclusion that it is impractical to determine a worst case memory usage for QEMU as there are far too many moving variables which can influence it. Thus we removed the default hard limit for libvirt \u0026 changed the docs to recommend against using it.\n\nEven if you do determine an accurate upper limit, it is so large that it is not practical to use it to protect the host from denial of service.\n\ne.g. a 1 GB guest requires a 2 GB hard limit. If the host has 16 GB of RAM, and the host OS needs 2 GB spare, then to protect the host you would only have 14 GB for VM usage. \n\nYou have to assume all guests simultaneously use the worst case, which means in your 14 GB of RAM you can only run 7 VMs with 1 GB allocation each. 
If you try to run more than 7 VMs, then the hard limit will not reliably protect the host from memory denial of service.\n\nUltimately this means the idea of protecting the host using per-VM hard limits is a fundamentally flawed approach. A more viable approach is to put a hard limit around all VMs as a group, while leaving each individual VM unrestricted.\n\nIf we assume a systemd based OS (which is basically any modern Linux), then all VMs get placed into a cgroup that is below the /machine.slice top level group. Thus it is possible to place a hard limit on the /machine.slice cgroup and this will apply to the cumulative usage of all VMs. Each VM can thus be given a large/unlimited hard limit which will avoid unexpected death of any single VM from the OOM Killer while still protecting the host OS in general.\n\nNB, the main caveat is that this will not protect one VM from denial-of-service caused by a 2nd VM. We can\u0027t predict which VM the kernel will choose to kill when one causes the overall limit to be exceeded. It should protect the host OS though assuming a sensible amount of RAM is reserved for host OS usage. 
This kind of global memory limit is potentially useful as a Nova deployment rule even without the SEV feature, as the current nova reserved host memory is not enforced except in the nova scheduler \u0026 as described the QEMU VM can use much, much more than Nova may account for.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"3a88e7cd3c762e55710bcba53ddbb9d1f4e45c54","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"3fce034c_65468552","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"3fce034c_3614434e","updated":"2019-04-16 23:34:20.000000000","message":"Got it, thanks!  Would the failure mode be different in the SEV case if we use \u003clocked/\u003e as the latest patch set proposes?  Am I right in thinking this would cause the memory to be pinned in advance via mlockall(2) thereby failing or succeeding at guest launch, rather than triggering the OOM killer later?\n\nI presume that SEV guests would also need to mlock and encrypt any memory associated with storage threads such as RBD, or VNC / SPICE etc. - hopefully Brijesh or Jim can chip in here.  
In the scenarios where extra memory needs to be allocated at post-launch, I\u0027m guessing the SEV support code will invoke the KVM_MEMORY_ENCRYPT_REG_REGION ioctl on allocation.  I see qemu/target/i386/sev.c has:\n\nstatic struct RAMBlockNotifier sev_ram_notifier \u003d {\n    .ram_block_added \u003d sev_ram_block_added,\n\nand sev_ram_block_added() does:\n\n    r \u003d kvm_vm_ioctl(kvm_state, KVM_MEMORY_ENCRYPT_REG_REGION, \u0026range);\n\nwhich appears to back this up - is that right?  I also see that os_mlock in qemu/os-posix.c uses the MCL_FUTURE flag:\n\n    ret \u003d mlockall(MCL_CURRENT | MCL_FUTURE);\n\nwhich IIUC takes care of ensuring that any pages mapped post-launch will also get pinned.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":1779,"name":"Daniel Berrange","email":"berrange@redhat.com","username":"berrange"},"change_message_id":"3e0a56e312079389652a85eba8910ecff51cc5ac","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"3fce034c_3614434e","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"3fce034c_369f6304","updated":"2019-04-16 11:00:07.000000000","message":"This predated SEV by many years, it was just regular QEMU or KVM guests.\n\nWhat happens is that QEMU tries to allocate more 
memory than the cgroup limit allows. The kernel then runs the OOM killer on processes within that cgroup, which picks QEMU for termination as it\u0027s the largest (\u0026 usually the only) process.\n\nThe key problem is that there are memory allocations done for a wide variety of supporting tasks, and the memory required varies *tremendously*. \n\nFor example, with RBD based storage QEMU starts one thread per RBD storage server. There can be anywhere from 2 to 200 RBD storage servers depending on how the admin has configured their RBD storage. So you have memory for an essentially arbitrary number of thread stacks. Then you have various allocations made in the process when handling in-flight I/O which varies depending on I/O patterns. Then if the user has enabled VNC or SPICE then there are memory allocations for processing framebuffer updates.\n\nBy the time you add up all these potential allocations, the hard limit is so large that it is effectively useless at preventing a denial of service. \n\nThat\u0027s why I think the only practical way to protect the host OS from memory DoS is to place limits in cgroups at the top level for the aggregated usage across all VMs.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":1779,"name":"Daniel Berrange","email":"berrange@redhat.com","username":"berrange"},"change_message_id":"e96e259d914b64200f854a56b490d556f3da4cf1","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, 
and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"3fce034c_0390d242","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"3fce034c_65468552","updated":"2019-04-17 08:25:22.000000000","message":"There are two types of memory allocations in QEMU. \n\nMemory regions to be exposed to the guest (main RAM, device ROMs, firmware ROMs, video RAM, etc). Main RAM \u0026 video RAM won\u0027t be paged in by default unless QEMU is told to pre-allocate, or use huge pages, or mlock. Device ROMs, firmware ROMs will be populated by QEMU immediately so will always be resident.\n\nThen there is all the memory allocation done by QEMU for its own data structures for handling the emulation. Some of this happens at startup, and some of this happens on the fly in response to guest or host events. e.g. VNC/Spice client memory is allocated when a client connects. RBD disk thread memory may be allocated at startup, or may be allocated in response to I/O operations, or may be allocated later if a disk is hotplugged on the fly.\n\nIOW even with mlock you are liable to have the guest killed at any time when running if you set too low a hard limit. 
If you set it to a value large enough to prevent accidental kills, then it becomes largely useless at preventing DoS.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"fc8c33bd1da35da86d03776b95453e9c3725b742","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"3fce034c_369f6304","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"3fce034c_b9659b47","updated":"2019-04-16 10:51:24.000000000","message":"@Daniel I\u0027ve thought about this some more, and have a few questions regarding what you wrote:\n\n \u003e The final most generous algorithm we applied was\n \u003e \n \u003e (1 + k) * (domain memory + total video memory) + (32MB for cache per each disk) + F\n \u003e \n \u003e where k \u003d 0.5 and F \u003d 400MB.\n \u003e \n \u003e On a 1 GB guest this gave a hard limit of 1.9 GB and users still saw the VMs being killed.\n\nFirstly I wanted to double-check that here you are talking about QEMU in general, not just the SEV case - right?\n\nSecondly what were the VMs being killed by - a QEMU crash because it tried to allocate more than the rlimit, or the OOM killer, or something else?\n\nThirdly, if the memory usage 
was routinely exceeding what is listed in the table immediately above these review comments, that implies that the table is missing some entries.  Please could you point out what it\u0027s missing?\n\nThanks a lot!","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"76fe41a2962fe0e9cf4fffe7073f15ec8dadd3fe","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_d5bdfe02","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"5fc1f717_3bf55984","updated":"2019-03-19 18:54:49.000000000","message":"so just to keep track of this\nwhen you set a hard_limit in libvirt\nit calls https://linux.die.net/man/2/setrlimit\nwith RLIMIT_MEMLOCK here \nhttps://github.com/libvirt/libvirt/blob/b6aacfc435ce3b7e2665ddf5a422d2153bca88b8/src/util/virprocess.c#L766\n\nthe man page for RLIMIT_MEMLOCK states\n\n\"The maximum number of bytes of memory that may be locked into RAM. In effect this limit is rounded down to the nearest multiple of the system page size. This limit affects mlock(2) and mlockall(2) and the mmap(2) MAP_LOCKED operation. 
Since Linux 2.6.9 it also affects the shmctl(2) SHM_LOCK operation, where it sets a maximum on the total bytes in shared memory segments (see shmget(2)) that may be locked by the real user ID of the calling process. The shmctl(2) SHM_LOCK locks are accounted for separately from the per-process memory locks established by mlock(2), mlockall(2), and mmap(2) MAP_LOCKED; a process can lock bytes up to this limit in each of these two categories. In Linux kernels before 2.6.9, this limit controlled the amount of memory that could be locked by a privileged process. Since Linux 2.6.9, no limits are placed on the amount of memory that a privileged process may lock, and this limit instead governs the amount of memory that an unprivileged process may lock.\"\n\nso RLIMIT_MEMLOCK just limits the ram that an unprivileged process can lock in ram with mlock https://linux.die.net/man/2/mlock\n\nif we generate the locked element in the memory backing\nlibvirt passes -realtime mlock\u003don to qemu which \ncalls os_mlock \nwhich on posix does mlockall(MCL_CURRENT | MCL_FUTURE);\nhere https://github.com/qemu/qemu/blob/dafd95053611aa14dda40266857608d12ddce658/os-posix.c#L356\n\n\nso back to https://linux.die.net/man/2/mlock\ni think what is really needed to pin the memory is the locked element and based on \nhttps://libvirt.org/git/?p\u003dlibvirt.git;a\u003dblob;f\u003dsrc/qemu/qemu_domain.c;h\u003dba3fff607a93533b9b47956cc2cfa70237e7c041;hb\u003dHEAD#l10049\n\nqemuDomainGetMemLockLimitBytes will just return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED if we have set locked so there is no reason to set a hard_limit as it will be ignored.\n\n\nanyway i may be wrong but looking at the syscall and what libvirt is actually doing i dont think hard_limit is correct here and we should just be using the memory_backing locked element to pin the memory without setting a hard_limit.\n\nthat is potentially unsafe in general but i believe it is safer if combined with hugepages because it\u0027s only the qemu 
allocated memory that can grow as the guest memory is preallocated as hugepages without oversubscription.\n \nwe support setting the locked element already without a hard_limit when you enable realtime instances and i think distros should consider the cgroup change in general outside of nova for the reasons dan provided.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":1779,"name":"Daniel Berrange","email":"berrange@redhat.com","username":"berrange"},"change_message_id":"962d6a42261f8a7b6fa24081767cc679edd3cd01","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_3bf55984","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"5fc1f717_45b085de","updated":"2019-03-19 15:26:21.000000000","message":"With the current impl of SEV in QEMU, I\u0027m told it is *not* sufficient to pin guest RAM. It is also necessary to ensure ROMs, UEFI pflash and video RAM are pinned too.  Using huge pages only enables guest RAM to be pinned, and there\u0027s no fine grained control in QEMU to allow the other memory chunks to use huge pages or fine grained pinning. So effectively pinning the whole of QEMU RAM is the only option right now. 
\n\nThere has been talk about enhancements to reduce the requirement to merely pinning guest RAM and UEFI pflash, but there\u0027s no patches for that yet, nor any timescale.\n\nIt is none the less beneficial to use huge pages for performance, but it still needs memory pinned too.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"ed7814d3802189d74f13990741413eeb67ae3b14","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"3fce034c_b9659b47","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"5fc1f717_952c564b","updated":"2019-04-15 14:43:54.000000000","message":"Thanks a lot for all the very helpful discussion.  I\u0027ve tried to capture all this in patch set 6.  
Feedback very welcome.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"d8e294315d91cf4aa99819bde8c1c14e7b36c897","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_45b085de","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"5fc1f717_9df1346e","updated":"2019-03-19 13:53:58.000000000","message":"so i brought this up last cycle and given your concern i would like to re-raise requiring hugepages for this feature.\n\nif we use hugepages we do not need to set the mem limit\nand we resolve the issue with the OOM killer as we already track hugepages per numa node and have strict accounting for them in the resource tracker.\n\nif we use hugepages i do not believe there is a vector for any dos attack between vms and it also simplifies the code.\n\nthe main disadvantage is without artom\u0027s numa migration feature live migration will not work properly. 
that restriction will be removed in train so i dont think that is a concern.\n\nsimilarly while using hugepages will prevent oversubscription, for sev to work the memory needs to be memlocked so it is not swapped, so oversubscription was not possible anyway.\n\nusing hugepages will mean we correctly track the instance memory use in the resource tracker as non-oversubscribed hugepages, eliminating the accounting issue we would otherwise have with sev guests using 4k pages.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"b2a0285ec5a687cf782c23bfc735b09f033e95ae","unresolved":false,"context_lines":[{"line_number":164,"context_line":"    It is also recommended to include an additional padding of at"},{"line_number":165,"context_line":"    least 256KB for safety, since ROM sizes can occasionally change."},{"line_number":166,"context_line":"    For example the total of 10832KB required here for ROMs / ACPI"},{"line_number":167,"context_line":"    tables should be rounded up to 16MB."},{"line_number":168,"context_line":""},{"line_number":169,"context_line":"    The first two values are expected to commonly vary per VM, and"},{"line_number":170,"context_line":"    are already accounted for dynamically by the placement service."}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_952c564b","line":167,"range":{"start_line":167,"start_character":39,"end_line":167,"end_character":40},"in_reply_to":"5fc1f717_d5bdfe02","updated":"2019-03-19 19:01:35.000000000","message":"oh and based on those findings i think \nhttps://github.com/AMDESE/AMDSEV/blob/master/xmls/sample.xml\nis incorrect as i dont think there is any guarantee that the guest memory is pinned \n\nbut based on \nhttps://github.com/libvirt/libvirt/blob/0ec6343a069b21178d4580688a8380dbb6d76620/docs/news-2013.html.in#L1526\nlibvirt also sets the RLIMIT_MEMLOCK when you lock the memory and 
qemuDomainGetMemLockLimitBytes will return VIR_DOMAIN_MEMORY_PARAM_UNLIMITED which translates to RLIM_INFINITY when setrlimit is called.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":1779,"name":"Daniel Berrange","email":"berrange@redhat.com","username":"berrange"},"change_message_id":"45b19a98aa11936d2266153cc87564511cd42ef8","unresolved":false,"context_lines":[{"line_number":524,"context_line":"Finally, a cloud administrator will need to define SEV-enabled flavors"},{"line_number":525,"context_line":"as described above, unless it is sufficient for users to define"},{"line_number":526,"context_line":"SEV-enabled images."},{"line_number":527,"context_line":""},{"line_number":528,"context_line":"Developer impact"},{"line_number":529,"context_line":"----------------"},{"line_number":530,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_43a43dba","line":527,"updated":"2019-03-15 11:40:35.000000000","message":"Something which I don\u0027t see mentioned in the doc is that there is a limit on the number of VMs that can use SEV. I believe for EPYC this limit is 15 (https://www.redhat.com/archives/libvir-list/2019-January/msg00652.html)\n\nFor hosts with large RAM sizes, I could easily see this SEV limit being reached before the default memory/CPU overcommit limit is hit. 
This is something admins should be aware of when planning capacity.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"ed7814d3802189d74f13990741413eeb67ae3b14","unresolved":false,"context_lines":[{"line_number":524,"context_line":"Finally, a cloud administrator will need to define SEV-enabled flavors"},{"line_number":525,"context_line":"as described above, unless it is sufficient for users to define"},{"line_number":526,"context_line":"SEV-enabled images."},{"line_number":527,"context_line":""},{"line_number":528,"context_line":"Developer impact"},{"line_number":529,"context_line":"----------------"},{"line_number":530,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"3fce034c_795f3372","line":527,"in_reply_to":"5fc1f717_1a2de0a7","updated":"2019-04-15 14:43:54.000000000","message":"Done","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"d8e294315d91cf4aa99819bde8c1c14e7b36c897","unresolved":false,"context_lines":[{"line_number":524,"context_line":"Finally, a cloud administrator will need to define SEV-enabled flavors"},{"line_number":525,"context_line":"as described above, unless it is sufficient for users to define"},{"line_number":526,"context_line":"SEV-enabled images."},{"line_number":527,"context_line":""},{"line_number":528,"context_line":"Developer impact"},{"line_number":529,"context_line":"----------------"},{"line_number":530,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_458165c3","line":527,"in_reply_to":"5fc1f717_43a43dba","updated":"2019-03-19 13:53:58.000000000","message":"this was not raised last cycle but this should be easily resolvable. 
we simply need to create a new sev resource class and create a new inventory on the compute node resource provider in placement.\n\nideally we would discover this limit from libvirt or sysfs but if it is not auto discoverable we can add a new config option to set this value in the libvirt section of nova.conf\n\nif sev will be supported in multiple virt drivers in the future we can move the config option to the compute section.\n\nso to state the above more clearly i propose modifying the virt driver to discover the limit programmatically or from a config, and create an inventory of resource class sev_context on the compute node resource provider.\n\nthe sev enabled guest would then request 1 allocation of the sev_context resource class in its flavor.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":1779,"name":"Daniel Berrange","email":"berrange@redhat.com","username":"berrange"},"change_message_id":"962d6a42261f8a7b6fa24081767cc679edd3cd01","unresolved":false,"context_lines":[{"line_number":524,"context_line":"Finally, a cloud administrator will need to define SEV-enabled flavors"},{"line_number":525,"context_line":"as described above, unless it is sufficient for users to define"},{"line_number":526,"context_line":"SEV-enabled images."},{"line_number":527,"context_line":""},{"line_number":528,"context_line":"Developer impact"},{"line_number":529,"context_line":"----------------"},{"line_number":530,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_fb4351ae","line":527,"in_reply_to":"5fc1f717_458165c3","updated":"2019-03-19 15:26:21.000000000","message":"It is planned to enhance libvirt to report the guest limit in the domain capabilities XML. 
It will also require an enhancement to QEMU, which means QEMU \u003e\u003d 4.1 is the earliest we\u0027d get this.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"4d0824c6dc3ce07f9052a83fa61aef6ee00923c8","unresolved":false,"context_lines":[{"line_number":524,"context_line":"Finally, a cloud administrator will need to define SEV-enabled flavors"},{"line_number":525,"context_line":"as described above, unless it is sufficient for users to define"},{"line_number":526,"context_line":"SEV-enabled images."},{"line_number":527,"context_line":""},{"line_number":528,"context_line":"Developer impact"},{"line_number":529,"context_line":"----------------"},{"line_number":530,"context_line":""}],"source_content_type":"text/x-rst","patch_set":2,"id":"5fc1f717_1a2de0a7","line":527,"in_reply_to":"5fc1f717_fb4351ae","updated":"2019-03-20 13:49:30.000000000","message":"i was discussing this downstream and it was pointed out i did not capture my reasoning for suggesting a conf option instead of waiting for qemu 4.1\n\nwhile long term reading the value from qemu via libvirt is likely the best approach, given the timing i think a nova config option in the libvirt section would be appropriate.\n\nqemu has roughly a 4 month planning/release cycle. qemu 4.0 is targeting 2019-04-16 and assuming 4.1 is 4 months after that, so 2019-08-16, that will be well after the train spec freeze and i dont see how we would approve a spec with a dependency\non a qemu version that will not yet be released.\n\nWe have blocked several features on this point\nin the past, specifically we have rejected specs that depend on unreleased versions of software. given that sev is already available in the market and there is a non-invasive alternative e.g. 
a config option  i dont think qemu\u003e\u003d4.1 should be a hard requirement but i totlaly agree that when this can be deiscovered programaticlly we shoudl do so and remove the config option.\n\nhonestly we could go either way but i would personally be fine with the config option and leaving a todo in the code swap to using qemu when 4.1 is avialble and remove the config when our min qemu passes 4.1.","commit_id":"a3456cda878da568d77064d33b903944377b3a90"},{"author":{"_account_id":24938,"name":"Bryan Stephenson","email":"bryan.stephenson@suse.com","username":"bryans"},"change_message_id":"73a09225752df4502580ceb942f173563229e91f","unresolved":false,"context_lines":[{"line_number":198,"context_line":""},{"line_number":199,"context_line":"  #define SEV_POLICY_NORM \\"},{"line_number":200,"context_line":"      ((SEV_POLICY)(SEV_POLICY_NODBG|SEV_POLICY_NOKS| \\"},{"line_number":201,"context_line":"        SEV_POLICY_ES|SEV_POLICY_DOMAIN|SEV_POLICY_SEV))"},{"line_number":202,"context_line":""},{"line_number":203,"context_line":"which equates to ``0x0037``.  In the future, when support is added to"},{"line_number":204,"context_line":"QEMU and libvirt, this will permit live migration to other machines in"}],"source_content_type":"text/x-rst","patch_set":4,"id":"5fc1f717_54b89973","line":201,"updated":"2019-04-08 23:43:47.000000000","message":"We should remove SEV_POLICY_ES from here, and change the \"0x0037\" just below to \"0x0033\". SEV_ES is not ready upstream. 
Also remove the text \" and uses SEV-ES\" 3 lines below, and remove the last sentence of the paragraph below (\"If the upstream support for SEV-ES does not arrive in time\nthen SEV-ES will be not be included in the policy.\")","commit_id":"b14d6ac2f4d9f32f85bbc8dd96b68eacf8ee647c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"ed7814d3802189d74f13990741413eeb67ae3b14","unresolved":false,"context_lines":[{"line_number":198,"context_line":""},{"line_number":199,"context_line":"  #define SEV_POLICY_NORM \\"},{"line_number":200,"context_line":"      ((SEV_POLICY)(SEV_POLICY_NODBG|SEV_POLICY_NOKS| \\"},{"line_number":201,"context_line":"        SEV_POLICY_ES|SEV_POLICY_DOMAIN|SEV_POLICY_SEV))"},{"line_number":202,"context_line":""},{"line_number":203,"context_line":"which equates to ``0x0037``.  In the future, when support is added to"},{"line_number":204,"context_line":"QEMU and libvirt, this will permit live migration to other machines in"}],"source_content_type":"text/x-rst","patch_set":4,"id":"3fce034c_798653de","line":201,"in_reply_to":"5fc1f717_54b89973","updated":"2019-04-15 14:43:54.000000000","message":"Done","commit_id":"b14d6ac2f4d9f32f85bbc8dd96b68eacf8ee647c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"102805a62e70852d95f33a0f5742fa15e0de7ea8","unresolved":false,"context_lines":[{"line_number":94,"context_line":"  ``HW_CPU_AMD_SEV`` (which is `already in os-traits"},{"line_number":95,"context_line":"  \u003chttps://docs.openstack.org/os-traits/latest/reference/index.html#amd-sev\u003e`_"},{"line_number":96,"context_line":"  `since 0.11.0 \u003chttps://review.openstack.org/635608\u003e`_) on compute"},{"line_number":97,"context_line":"  hosts whenever SEV functionality is automatically detected compute"},{"line_number":98,"context_line":"  host based on the above logic.  
The driver can provide this trait"},{"line_number":99,"context_line":"  using `the new driver capabilities to traits mechanism"},{"line_number":100,"context_line":"  \u003chttps://docs.openstack.org/nova/stein/admin/configuration/schedulers.html#compute-capabilities-as-traits\u003e`_"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_7dacdf33","line":97,"range":{"start_line":97,"start_character":61,"end_line":97,"end_character":68},"updated":"2019-04-16 20:48:00.000000000","message":"on the compute?","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"3a88e7cd3c762e55710bcba53ddbb9d1f4e45c54","unresolved":false,"context_lines":[{"line_number":94,"context_line":"  ``HW_CPU_AMD_SEV`` (which is `already in os-traits"},{"line_number":95,"context_line":"  \u003chttps://docs.openstack.org/os-traits/latest/reference/index.html#amd-sev\u003e`_"},{"line_number":96,"context_line":"  `since 0.11.0 \u003chttps://review.openstack.org/635608\u003e`_) on compute"},{"line_number":97,"context_line":"  hosts whenever SEV functionality is automatically detected compute"},{"line_number":98,"context_line":"  host based on the above logic.  
The driver can provide this trait"},{"line_number":99,"context_line":"  using `the new driver capabilities to traits mechanism"},{"line_number":100,"context_line":"  \u003chttps://docs.openstack.org/nova/stein/admin/configuration/schedulers.html#compute-capabilities-as-traits\u003e`_"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_926df533","line":97,"range":{"start_line":97,"start_character":61,"end_line":97,"end_character":68},"in_reply_to":"3fce034c_7dacdf33","updated":"2019-04-16 23:34:20.000000000","message":"Done","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"102805a62e70852d95f33a0f5742fa15e0de7ea8","unresolved":false,"context_lines":[{"line_number":120,"context_line":"    value specified by ``libvirt.hw_machine_type`` in ``nova.conf``"},{"line_number":121,"context_line":"    (`which is not set by default"},{"line_number":122,"context_line":"    \u003chttps://docs.openstack.org/nova/rocky/configuration/config.html#libvirt.hw_machine_type\u003e`_),"},{"line_number":123,"context_line":"    then an exception should be raised so that the build fails."},{"line_number":124,"context_line":""},{"line_number":125,"context_line":"  - The ``iommu`` attribute is ``on`` for all virtio devices.  
Despite"},{"line_number":126,"context_line":"    the name, this does not require the guest or host to have an IOMMU"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_3d4df799","line":123,"updated":"2019-04-16 20:48:00.000000000","message":"This is all greek to me, but is any of this possible to convert into traits so we can filter at placement time rather than failing the build late?","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"3a88e7cd3c762e55710bcba53ddbb9d1f4e45c54","unresolved":false,"context_lines":[{"line_number":120,"context_line":"    value specified by ``libvirt.hw_machine_type`` in ``nova.conf``"},{"line_number":121,"context_line":"    (`which is not set by default"},{"line_number":122,"context_line":"    \u003chttps://docs.openstack.org/nova/rocky/configuration/config.html#libvirt.hw_machine_type\u003e`_),"},{"line_number":123,"context_line":"    then an exception should be raised so that the build fails."},{"line_number":124,"context_line":""},{"line_number":125,"context_line":"  - The ``iommu`` attribute is ``on`` for all virtio devices.  Despite"},{"line_number":126,"context_line":"    the name, this does not require the guest or host to have an IOMMU"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_a56f8d90","line":123,"in_reply_to":"3fce034c_3d4df799","updated":"2019-04-16 23:34:20.000000000","message":"You mean e.g. providing a Q35 trait on hosts which can accommodate Q35 machine types?  That\u0027s a nice and very interesting idea and I\u0027d definitely look to Kashyap for guidance on that.  
One challenge is that it\u0027s not a binary yes/no per machine type, because the domain capability is actually per (architecture, machine_type, hypervisor_type) tuple:\n\nhttps://review.openstack.org/#/c/633855/12/nova/virt/libvirt/host.py@699\n\nAssuming only one hypervisor type per host would allow us to reduce this to per (architecture, machine_type), which would mean traits which might look something like HW_VIRT_x86_64_Q35.  However there are typically *tons* of machine types (e.g. 40ish on the SEV host I\u0027m looking at right now), so this could be a can of worms.  Having said that, limiting it to the canonical ones only might be more manageable:\n\n      \u003cmachine canonical\u003d\u0027pc-i440fx-2.11\u0027 maxCpus\u003d\u0027255\u0027\u003epc\u003c/machine\u003e\n      \u003cmachine canonical\u003d\u0027pc-q35-2.11\u0027 maxCpus\u003d\u0027288\u0027\u003eq35\u003c/machine\u003e\n\nMaybe we could do something similar for UEFI?  Again need Kashyap\u0027s take on this.  I do like the idea at first sight, although I would definitely worry if we made this work a dependency of this spec, or in scope within it, since:\n\n- That would quite significantly inflate the scope of this spec, or at least delay it due to the dependency.\n\n- All modern x86 KVM hypervisors will almost certainly support Q35 anyway. \n\n- Even if there were a few which didn\u0027t, I don\u0027t see failure at launch-time as a deal-breaker.\n\n- Maybe the same is true of UEFI boot, which has been around quite a while.\n\n- It might be possible to incrementally add traits for machine types separately from a UEFI trait; even if for a period we only track one of the two, that would still catch *some* missing requirements, if not all, at placement time rather than launch time.  
This is obviously preferable to catching none.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"9e2e40c9004abb373d60925f8452d6d2596481d1","unresolved":false,"context_lines":[{"line_number":120,"context_line":"    value specified by ``libvirt.hw_machine_type`` in ``nova.conf``"},{"line_number":121,"context_line":"    (`which is not set by default"},{"line_number":122,"context_line":"    \u003chttps://docs.openstack.org/nova/rocky/configuration/config.html#libvirt.hw_machine_type\u003e`_),"},{"line_number":123,"context_line":"    then an exception should be raised so that the build fails."},{"line_number":124,"context_line":""},{"line_number":125,"context_line":"  - The ``iommu`` attribute is ``on`` for all virtio devices.  Despite"},{"line_number":126,"context_line":"    the name, this does not require the guest or host to have an IOMMU"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_f0f3ff71","line":123,"in_reply_to":"3fce034c_892063b2","updated":"2019-04-17 17:03:27.000000000","message":"Thanks, that summarizes what I suspected: could be done, but would be messy and complicated and not worth it.\n\nAlso, could always be done later if we change our minds.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":6962,"name":"Kashyap Chamarthy","email":"kchamart@redhat.com","username":"kashyapc"},"change_message_id":"c1a115e962c5f8b05aa7223ca61bbc5ab4a3f267","unresolved":false,"context_lines":[{"line_number":120,"context_line":"    value specified by ``libvirt.hw_machine_type`` in ``nova.conf``"},{"line_number":121,"context_line":"    (`which is not set by default"},{"line_number":122,"context_line":"    \u003chttps://docs.openstack.org/nova/rocky/configuration/config.html#libvirt.hw_machine_type\u003e`_),"},{"line_number":123,"context_line":"    then an exception should be raised so that the 
build fails."},{"line_number":124,"context_line":""},{"line_number":125,"context_line":"  - The ``iommu`` attribute is ``on`` for all virtio devices.  Despite"},{"line_number":126,"context_line":"    the name, this does not require the guest or host to have an IOMMU"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_892063b2","line":123,"in_reply_to":"3fce034c_a56f8d90","updated":"2019-04-17 11:17:34.000000000","message":"(I should first admit upfront that my knowledge of \u0027traits\u0027 is largely theoretical from reading the code.  Still learning and exploring.)\n\nI, too, find the idea of failing earlier (at placement time) appealing.  That said, in this case, I agree with Adam\u0027s analysis here:\n\n- Making machine types into \u0027traits\u0027 can get really unwieldy.  As Adam mentions, indeed there are tons of them across architectures.  (Aside: I am seeing the \"it becomes unwieldy\" problem elsewhere today in the Nova code, with CPU features as traits.  From a cursory audit, the other day I noticed that we\u0027re missing some important CPU features as traits, e.g. PCID, that protect from the performance degradation caused by the fixes for the infamous \"Meltdown\" flaw [1].  I have a TODO to add that and other missing \u0027traits\u0027 in Nova.)\n\n- Adam: even if we limit to the \"canonical\" machine types (and indeed, as you show in the example, there is one \u0027canonical\u0027 machine type for `pc`, another for `q35`), they will change when you upgrade QEMU.  So we need to take care there, to do timely tracking of new machine types.\n\n- I think we should beware of \"scope creep\" indeed.  If further research and understanding leads us to the path of \u0027traits\u0027, we can do that later on.\n\n- And yes, most Linux distributions \"that matter\" do have the Q35 machine type.  
(Always bear in mind: Q35 is nearly 11 years old, so it\u0027s not even _that_ modern :-))\n\n\n[1] https://kashyapc.fedorapeople.org/Reducing-OpenStack-Guest-Perf-Impact-from-Meltdown.txt","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"369e998f059ef0938726a40693a996a9ec5c33f2","unresolved":false,"context_lines":[{"line_number":120,"context_line":"    value specified by ``libvirt.hw_machine_type`` in ``nova.conf``"},{"line_number":121,"context_line":"    (`which is not set by default"},{"line_number":122,"context_line":"    \u003chttps://docs.openstack.org/nova/rocky/configuration/config.html#libvirt.hw_machine_type\u003e`_),"},{"line_number":123,"context_line":"    then an exception should be raised so that the build fails."},{"line_number":124,"context_line":""},{"line_number":125,"context_line":"  - The ``iommu`` attribute is ``on`` for all virtio devices.  Despite"},{"line_number":126,"context_line":"    the name, this does not require the guest or host to have an IOMMU"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_b4dc76e4","line":123,"in_reply_to":"3fce034c_f0f3ff71","updated":"2019-04-18 15:35:29.000000000","message":"Yep thanks.  
I\u0027ve added an item to the Limitations section covering this issue and the idea of future work to address it.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"102805a62e70852d95f33a0f5742fa15e0de7ea8","unresolved":false,"context_lines":[{"line_number":183,"context_line":"- On EPYC machines, the number of SEV guests allowed to run is"},{"line_number":184,"context_line":"  `limited to 15"},{"line_number":185,"context_line":"  \u003chttps://www.redhat.com/archives/libvir-list/2019-January/msg00652.html\u003e`_."},{"line_number":186,"context_line":"  In order to account for this limitation, add a new ``sev_context``"},{"line_number":187,"context_line":"  resource class representing the discrete number of slots available"},{"line_number":188,"context_line":"  on each SEV compute host, and create a corresponding inventory on"},{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  
Introduce"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_fd3befe7","line":186,"range":{"start_line":186,"start_character":55,"end_line":186,"end_character":66},"updated":"2019-04-16 20:48:00.000000000","message":"capitalize this","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"3a88e7cd3c762e55710bcba53ddbb9d1f4e45c54","unresolved":false,"context_lines":[{"line_number":183,"context_line":"- On EPYC machines, the number of SEV guests allowed to run is"},{"line_number":184,"context_line":"  `limited to 15"},{"line_number":185,"context_line":"  \u003chttps://www.redhat.com/archives/libvir-list/2019-January/msg00652.html\u003e`_."},{"line_number":186,"context_line":"  In order to account for this limitation, add a new ``sev_context``"},{"line_number":187,"context_line":"  resource class representing the discrete number of slots available"},{"line_number":188,"context_line":"  on each SEV compute host, and create a corresponding inventory on"},{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  Introduce"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_6569a589","line":186,"range":{"start_line":186,"start_character":55,"end_line":186,"end_character":66},"in_reply_to":"3fce034c_fd3befe7","updated":"2019-04-16 23:34:20.000000000","message":"Done","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"102805a62e70852d95f33a0f5742fa15e0de7ea8","unresolved":false,"context_lines":[{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  
Introduce"},{"line_number":190,"context_line":"  a new config option in the ``[libvirt]`` section of ``nova.conf`` to"},{"line_number":191,"context_line":"  set the size of this inventory on SEV hosts.  SEV-enabled guests"},{"line_number":192,"context_line":"  would then request one allocation of the ``sev_context`` resource"},{"line_number":193,"context_line":"  class via their flavor or image properties."},{"line_number":194,"context_line":""},{"line_number":195,"context_line":"  It is planned to enhance libvirt in the future to report the SEV"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_0f77a2a2","line":192,"range":{"start_line":192,"start_character":13,"end_line":192,"end_character":35},"updated":"2019-04-16 20:48:00.000000000","message":"Sounds like this config value will have to have a default that\u0027s effectively unlimited. Otherwise you\u0027re effectively partitioning your cloud into \"hosts with limited SEV contexts\" and \"hosts without a limit\".\n\nAnd given that, you should really add this part of the resource request under the covers based on SEV being requested.\n\nBut whether it\u0027s manual or automatic, you might as well not bother with the trait, because it\u0027s redundant.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"3a88e7cd3c762e55710bcba53ddbb9d1f4e45c54","unresolved":false,"context_lines":[{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  Introduce"},{"line_number":190,"context_line":"  a new config option in the ``[libvirt]`` section of ``nova.conf`` to"},{"line_number":191,"context_line":"  set the size of this inventory on SEV hosts.  
SEV-enabled guests"},{"line_number":192,"context_line":"  would then request one allocation of the ``sev_context`` resource"},{"line_number":193,"context_line":"  class via their flavor or image properties."},{"line_number":194,"context_line":""},{"line_number":195,"context_line":"  It is planned to enhance libvirt in the future to report the SEV"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_85a3719e","line":192,"range":{"start_line":192,"start_character":13,"end_line":192,"end_character":35},"in_reply_to":"3fce034c_0f77a2a2","updated":"2019-04-16 23:34:20.000000000","message":"There are maybe two potential config values we could have here.  The first and most obvious would be set per compute host, representing the inventory of the SEV_CONTEXT resource for that host.  This would default to 0, for non-SEV hosts.  The operator would need to set it to a non-zero limit on each SEV host.\n\nSecondly, there could be another config value set centrally somewhere, representing a default non-zero limit for hosts where SEV is automatically detected.  So for example if all your SEV-capable hosts were EPYC machines with the same maximum of 15 SEV guests, you could set that to 15 in one place and then rely on the automatic SEV detection code proposed in https://review.openstack.org/#/c/633855/ to set the SEV_CONTEXT inventory for that host to 15, without having to set it manually on each host.\n\nWe definitely need the first.  The second seems more like icing on the cake, and I\u0027m not sure where it would live.  If for a period the only SEV machines on the market all have the same limit of 15 (or whatever) then the second could default to that.\n\nAlthough if we don\u0027t have the second, it reduces the SEV detection code I\u0027ve proposed from being essential into more of a safety check, defending against an operator accidentally setting the first config value to non-zero on a non-SEV host.  
That said it\u0027s still worth having long-term, since later we expect to obtain the limit programmatically as noted immediately below.\n\nBut yes, this does suggest that HW_CPU_AMD_SEV may become redundant :-(  If I understand you correctly, you are implying that rather than creating SEV flavors with extra specs containing trait:HW_CPU_AMD_SEV\u003drequired, they could contain resources:SEV_CONTEXT\u003d1.  And similarly for image properties.  Is that what you had in mind?","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"ef0ad5b86f45c4bce11cbb6567ce554618a56a09","unresolved":false,"context_lines":[{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  Introduce"},{"line_number":190,"context_line":"  a new config option in the ``[libvirt]`` section of ``nova.conf`` to"},{"line_number":191,"context_line":"  set the size of this inventory on SEV hosts.  
SEV-enabled guests"},{"line_number":192,"context_line":"  would then request one allocation of the ``sev_context`` resource"},{"line_number":193,"context_line":"  class via their flavor or image properties."},{"line_number":194,"context_line":""},{"line_number":195,"context_line":"  It is planned to enhance libvirt in the future to report the SEV"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_e8e461e2","line":192,"range":{"start_line":192,"start_character":13,"end_line":192,"end_character":35},"in_reply_to":"3fce034c_349de6bb","updated":"2019-04-18 17:55:47.000000000","message":"\u003e But currently auto-detection\n \u003e is impossible and anyway this feels like discussion over\n \u003e implementation details, so maybe it doesn\u0027t need to happen in this\n \u003e spec?\n\nSure.\n\n \u003e \u003e However, we don\u0027t necessarily need the extra spec or image meta\n \u003e to be in placement-ese (especially because I don\u0027t think we yet\n \u003e have code that consumes *resources* from image meta - traits yes,\n \u003e resources no). Instead it\u0027s in intuitive nova-ese (encrypt_my_memory\u003dTrue)\n \u003e and we translate that to resources:SEV_CONTEXT\u003d1 under the covers.\n \u003e \n \u003e I\u0027m a bit confused here - are you saying that resources:SEV_CONTEXT\u003d1\n \u003e wouldn\u0027t work out of the box?  Because https://docs.openstack.org/nova/latest/user/flavors.html\n \u003e seems to explicitly suggest otherwise.\n\nThat would work out of the box in a *flavor*; image meta currently only supports *traits*, not resources, so further work would be required there.\n\n \u003e 1. 
\"Boot my VM with AMD SEV enabled, because I don\u0027t trust Intel\u0027s\n \u003e equivalent\" - or vice-versa, of course ;-) I don\u0027t mean to imply\n \u003e that there would necessarily be any reason to trust one\n \u003e implementation over the other, but operators should be free to\n \u003e impose their own judgements.\n\nFair, but I would prefer this to be handled with a separate trait:\n\n resources:ENCRYPTED_MEM\u003d1,\n trait:HW_CPU_AMD\u003drequired\n\nrather than combining the (to me, anyway) distinct concepts of \"I want to use AMD procs\" and \"I want my memory encrypted\".","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"369e998f059ef0938726a40693a996a9ec5c33f2","unresolved":false,"context_lines":[{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  Introduce"},{"line_number":190,"context_line":"  a new config option in the ``[libvirt]`` section of ``nova.conf`` to"},{"line_number":191,"context_line":"  set the size of this inventory on SEV hosts.  SEV-enabled guests"},{"line_number":192,"context_line":"  would then request one allocation of the ``sev_context`` resource"},{"line_number":193,"context_line":"  class via their flavor or image properties."},{"line_number":194,"context_line":""},{"line_number":195,"context_line":"  It is planned to enhance libvirt in the future to report the SEV"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_349de6bb","line":192,"range":{"start_line":192,"start_character":13,"end_line":192,"end_character":35},"in_reply_to":"3fce034c_6baac4d2","updated":"2019-04-18 15:35:29.000000000","message":"\u003e \u003e The first and most obvious would be set per compute host, representing the inventory of the SEV_CONTEXT resource for that host.  This would default to 0, for non-SEV hosts.  
The operator would need to set it to a non-zero limit on each SEV host.\n \u003e \n \u003e Mm, but how does that work in a future world where we detect the number of SEV_CONTEXTs automatically?\n\nWe\u0027d change the code accordingly, e.g. so that if auto-detection succeeds and yields a different value to the config option, it would trigger an error (perhaps even refuse to start nova-compute), and if it succeeded but yielded the same value, it would trigger a deprecation warning.\n\n \u003e \u003e Secondly, there could be another config value set centrally somewhere, representing a default non-zero limit for hosts where SEV is automatically detected.\n \u003e \n \u003e This sounds icky, just because of the \"set centrally\" part. How do we communicate that down to the computes?\n\nI don\u0027t know yet ;-)  I\u0027m not necessarily proposing this, just raising the idea.\n\n \u003e \u003e without having to set it manually on each host.\n \u003e \n \u003e I think the stance on this is generally: this stuff is set by deployment tools, so it\u0027s no biggie to blast a duplicated value to all the computes.\n\nGood point, yeah that makes more sense than handling centrally in nova.\n\n \u003e What about one conf value, on the compute, that indicates the max number of SEV_CONTEXTs to configure *when SEV enablement is detected*.\n \u003e \n \u003e - 0 means disable SEV (no SEV_CONTEXT inventory), even if the host is capable of it.\n \u003e - None (unset) or -1 means \"detect\". On systems where detection is not possible, that maps to \"effectively unlimited\" (MAXINT or something). Unless the host isn\u0027t SEV capable, and then it maps to 0 (no SEV_CONTEXT).\n \u003e - Any other int, we use that exact value as the SEV_CONTEXT inventory amount. Again, only if the host is SEV capable; otherwise 0.\n\nSure that sounds like a reasonable approach (albeit not handling the mismatch case mentioned above).  
But currently auto-detection is impossible and anyway this feels like discussion over implementation details, so maybe it doesn\u0027t need to happen in this spec?\n\n \u003e \u003e But yes, this does suggest that HW_CPU_AMD_SEV may become redundant :-(  If I understand you correctly, you are implying that rather than creating SEV flavors with extra specs containing trait:HW_CPU_AMD_SEV\u003drequired, they could contain resources:SEV_CONTEXT\u003d1.  And similarly for image properties.  Is that what you had in mind?\n \u003e \n \u003e Yes, exactly.\n\nCool.\n\n \u003e However, we don\u0027t necessarily need the extra spec or image meta to be in placement-ese (especially because I don\u0027t think we yet have code that consumes *resources* from image meta - traits yes, resources no). Instead it\u0027s in intuitive nova-ese (encrypt_my_memory\u003dTrue) and we translate that to resources:SEV_CONTEXT\u003d1 under the covers.\n\nI\u0027m a bit confused here - are you saying that resources:SEV_CONTEXT\u003d1 wouldn\u0027t work out of the box?  Because https://docs.openstack.org/nova/latest/user/flavors.html seems to explicitly suggest otherwise.\n\nHaving said that, I do like your idea of offering a more human-friendly way of specifying the SEV requirement.\n\n \u003e For that matter, can we rename SEV_CONTEXT to something more generic that other architectures can use when that becomes available? IIRC, Intel is working on something equivalent to this.\n\nHmm, good point.  I think there are separate use cases which are both equally valid:\n\n1. \"Boot my VM with AMD SEV enabled, because I don\u0027t trust Intel\u0027s equivalent\" - or vice-versa, of course ;-) I don\u0027t mean to imply that there would necessarily be any reason to trust one implementation over the other, but operators should be free to impose their own judgements.\n\n2. 
\"Boot my VM with encrypted memory, somehow - I don't care whether you launch it on an AMD or Intel compute node.\"\n\nMy inclination would be to ensure that the low-level resource inventory correctly distinguishes between different technologies, and this could handle the first use case.  For example, we could have a resource class for each technology (AMD_SEV_CONTEXT, INTEL_MKTME_CONTEXT, ARM_WHATEVER, ...).\n\nBut we could also offer a higher-level interface to handle the second use case, e.g. via your high-level encrypt_my_memory=True idea.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"9e2e40c9004abb373d60925f8452d6d2596481d1","unresolved":false,"context_lines":[{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  Introduce"},{"line_number":190,"context_line":"  a new config option in the ``[libvirt]`` section of ``nova.conf`` to"},{"line_number":191,"context_line":"  set the size of this inventory on SEV hosts.  SEV-enabled guests"},{"line_number":192,"context_line":"  would then request one allocation of the ``sev_context`` resource"},{"line_number":193,"context_line":"  class via their flavor or image properties."},{"line_number":194,"context_line":""},{"line_number":195,"context_line":"  It is planned to enhance libvirt in the future to report the SEV"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_6baac4d2","line":192,"range":{"start_line":192,"start_character":13,"end_line":192,"end_character":35},"in_reply_to":"3fce034c_85a3719e","updated":"2019-04-17 17:03:27.000000000","message":"> The first and most obvious would be set per compute host,\n > representing the inventory of the SEV_CONTEXT resource for that\n > host.  This would default to 0, for non-SEV hosts.  The operator\n > would need to set it to a non-zero limit on each SEV host.\n\nMm, but how does that work in a future world where we detect the number of SEV_CONTEXTs automatically?\n\n > Secondly, there could be another config value set centrally\n > somewhere, representing a default non-zero limit for hosts where\n > SEV is automatically detected.\n\nThis sounds icky, just because of the \"set centrally\" part. How do we communicate that down to the computes?\n\n > without\n > having to set it manually on each host.\n\nI think the stance on this is generally: this stuff is set by deployment tools, so it's no biggie to blast a duplicated value to all the computes.\n\nWhat about one conf value, on the compute, that indicates the max number of SEV_CONTEXTs to configure *when SEV enablement is detected*.\n\n- 0 means disable SEV (no SEV_CONTEXT inventory), even if the host is capable of it.\n- None (unset) or -1 means \"detect\". On systems where detection is not possible, that maps to \"effectively unlimited\" (MAXINT or something). Unless the host isn't SEV capable, and then it maps to 0 (no SEV_CONTEXT).\n- Any other int, we use that exact value as the SEV_CONTEXT inventory amount. Again, only if the host is SEV capable; otherwise 0.\n\n > But yes, this does suggest that HW_CPU_AMD_SEV may become redundant\n > :-(  If I understand you correctly, you are implying that rather\n > than creating SEV flavors with extra specs containing\n > trait:HW_CPU_AMD_SEV=required, they could contain resources:SEV_CONTEXT=1.\n >  And similarly for image properties.  Is that what you had in mind?\n\nYes, exactly.\n\nHowever, we don't necessarily need the extra spec or image meta to be in placement-ese (especially because I don't think we yet have code that consumes *resources* from image meta - traits yes, resources no). Instead it's in intuitive nova-ese (encrypt_my_memory=True) and we translate that to resources:SEV_CONTEXT=1 under the covers.\n\nFor that matter, can we rename SEV_CONTEXT to something more generic that other architectures can use when that becomes available? IIRC, Intel is working on something equivalent to this.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"2998589ad62070be247518f08c81d3643f8556e5","unresolved":false,"context_lines":[{"line_number":189,"context_line":"  the compute host resource provider in the placement API.  Introduce"},{"line_number":190,"context_line":"  a new config option in the ``[libvirt]`` section of ``nova.conf`` to"},{"line_number":191,"context_line":"  set the size of this inventory on SEV hosts.  SEV-enabled guests"},{"line_number":192,"context_line":"  would then request one allocation of the ``sev_context`` resource"},{"line_number":193,"context_line":"  class via their flavor or image properties."},{"line_number":194,"context_line":""},{"line_number":195,"context_line":"  It is planned to enhance libvirt in the future to report the SEV"}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_0e97723c","line":192,"range":{"start_line":192,"start_character":13,"end_line":192,"end_character":35},"in_reply_to":"3fce034c_e8e461e2","updated":"2019-04-18 23:20:52.000000000","message":"> > But currently auto-detection\n > > is impossible and anyway this feels like discussion over\n > > implementation details, so maybe it doesn't need to happen in\n > this\n > > spec?\n > \n > Sure.\n > \n > > > However, we don't necessarily need the extra spec or image meta\n > > to be in placement-ese (especially because I don't think we yet\n > > have code that consumes *resources* from image meta - traits yes,\n > > resources no). Instead it's in intuitive nova-ese\n > (encrypt_my_memory=True)\n > > and we translate that to resources:SEV_CONTEXT=1 under the\n > covers.\n > >\n > > I'm a bit confused here - are you saying that resources:SEV_CONTEXT=1\n > > wouldn't work out of the box?  Because https://docs.openstack.org/nova/latest/user/flavors.html\n > > seems to explicitly suggest otherwise.\n > \n > That would work out of the box in a *flavor*; image meta currently\n > only supports *traits*, not resources, so further work would be\n > required there.\n\nAhhh got it - I read too quickly before.  Next patchset covers this.\n\n > > 1. \"Boot my VM with AMD SEV enabled, because I don't trust\n > Intel's\n > > equivalent\" - or vice-versa, of course ;-) I don't mean to imply\n > > that there would necessarily be any reason to trust one\n > > implementation over the other, but operators should be free to\n > > impose their own judgements.\n > \n > Fair, but I would prefer this to be handled with a separate trait:\n > \n > resources:ENCRYPTED_MEM=1,\n > trait:HW_CPU_AMD=required\n > \n > rather than combining the (to me, anyway) distinct concepts of \"I\n > want to use AMD procs\" and \"I want my memory encrypted\".\n\nNext patchset covers this, based on our subsequent discussion on IRC after you wrote that.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"102805a62e70852d95f33a0f5742fa15e0de7ea8","unresolved":false,"context_lines":[{"line_number":206,"context_line":"The sum of the work described above could also mean that images with"},{"line_number":207,"context_line":"the property 
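Eric's three-way conf-value semantics in the comment above (0 disables SEV even on capable hosts; None or -1 means "detect", falling back to effectively unlimited; any other int is the exact inventory amount) can be sketched in Python. This is an illustrative sketch only; the function name and the MAXINT stand-in are assumptions, not merged Nova code.

```python
# Illustrative sketch of the proposed conf-value semantics for the
# SEV_CONTEXT inventory; all names here are hypothetical, not Nova code.
MAXINT = 2**31 - 1  # stand-in for an "effectively unlimited" inventory


def sev_context_inventory(conf_value, host_is_sev_capable,
                          detected_limit=None):
    """Map the [libvirt] config value to a SEV_CONTEXT inventory amount."""
    if not host_is_sev_capable:
        return 0          # host isn't SEV capable: never any inventory
    if conf_value == 0:
        return 0          # operator explicitly disabled SEV
    if conf_value is None or conf_value == -1:
        # "detect": use the discovered limit if there is one, otherwise
        # effectively unlimited (placement needs a finite total)
        return detected_limit if detected_limit is not None else MAXINT
    return conf_value     # exact operator-specified amount


assert sev_context_inventory(None, False) == 0
assert sev_context_inventory(0, True) == 0
assert sev_context_inventory(None, True) == MAXINT
assert sev_context_inventory(None, True, detected_limit=15) == 15
assert sev_context_inventory(32, True) == 32
```

Note the ordering: the capability check comes first, so every other branch only applies to SEV-capable hosts, exactly as the comment specifies ("Again, only if the host is SEV capable; otherwise 0").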
``trait:HW_CPU_AMD_SEV\u003drequired`` would similarly affect"},{"line_number":208,"context_line":"the process of launching instances."},{"line_number":209,"context_line":""},{"line_number":210,"context_line":"SEV launch-time configuration"},{"line_number":211,"context_line":"-----------------------------"},{"line_number":212,"context_line":""}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_4fb92a2c","line":209,"updated":"2019-04-16 20:48:00.000000000","message":"NB: I started skimming here.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"102805a62e70852d95f33a0f5742fa15e0de7ea8","unresolved":false,"context_lines":[{"line_number":432,"context_line":""},{"line_number":433,"context_line":"Alternatives"},{"line_number":434,"context_line":"------------"},{"line_number":435,"context_line":""},{"line_number":436,"context_line":"These alternatives were originally proposed for `the previous version"},{"line_number":437,"context_line":"of this spec accepted for Stein"},{"line_number":438,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html\u003e`_."}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_cf7e7a6a","line":435,"updated":"2019-04-16 20:48:00.000000000","message":"I did read these to see whether \"just use the resource class\" had been proposed and rejected already.\n\nIf that alternative is rejected, please list it and the reasoning here.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric 
Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"9e2e40c9004abb373d60925f8452d6d2596481d1","unresolved":false,"context_lines":[{"line_number":432,"context_line":""},{"line_number":433,"context_line":"Alternatives"},{"line_number":434,"context_line":"------------"},{"line_number":435,"context_line":""},{"line_number":436,"context_line":"These alternatives were originally proposed for `the previous version"},{"line_number":437,"context_line":"of this spec accepted for Stein"},{"line_number":438,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html\u003e`_."}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_ebec742a","line":435,"in_reply_to":"3fce034c_1215859c","updated":"2019-04-17 17:03:27.000000000","message":"It really doesn\u0027t make sense until the concept of a limited number of SEV contexts comes into play. Which afaict first happened at PS6 based on comments from Dan at PS2.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"3a88e7cd3c762e55710bcba53ddbb9d1f4e45c54","unresolved":false,"context_lines":[{"line_number":432,"context_line":""},{"line_number":433,"context_line":"Alternatives"},{"line_number":434,"context_line":"------------"},{"line_number":435,"context_line":""},{"line_number":436,"context_line":"These alternatives were originally proposed for `the previous version"},{"line_number":437,"context_line":"of this spec accepted for Stein"},{"line_number":438,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html\u003e`_."}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_1215859c","line":435,"in_reply_to":"3fce034c_cf7e7a6a","updated":"2019-04-16 23:34:20.000000000","message":"Nope, the idea of an SEV resource class was never 
proposed until Sean mentioned it recently in the context of managing the SEV guest limit.  I guess the idea didn\u0027t occur to anyone until then.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"369e998f059ef0938726a40693a996a9ec5c33f2","unresolved":false,"context_lines":[{"line_number":432,"context_line":""},{"line_number":433,"context_line":"Alternatives"},{"line_number":434,"context_line":"------------"},{"line_number":435,"context_line":""},{"line_number":436,"context_line":"These alternatives were originally proposed for `the previous version"},{"line_number":437,"context_line":"of this spec accepted for Stein"},{"line_number":438,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html\u003e`_."}],"source_content_type":"text/x-rst","patch_set":6,"id":"3fce034c_513e848c","line":435,"in_reply_to":"3fce034c_ebec742a","updated":"2019-04-18 15:35:29.000000000","message":"Right.","commit_id":"5819d99d9c5753e4293751b9e6e90f5cf4f925b3"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"ef0ad5b86f45c4bce11cbb6567ce554618a56a09","unresolved":false,"context_lines":[{"line_number":95,"context_line":"  which represents the number of SEV guests a compute host can"},{"line_number":96,"context_line":"  accommodate (zero for hosts which do not support SEV)."},{"line_number":97,"context_line":""},{"line_number":98,"context_line":"- Remove the ``HW_CPU_AMD_SEV`` trait from os-traits.  
It has been"},{"line_number":99,"context_line":"  present `since 0.11.0"},{"line_number":100,"context_line":"  \u003chttps://docs.openstack.org/os-traits/latest/reference/index.html#amd-sev\u003e`_,"},{"line_number":101,"context_line":"  and `was added in the Stein cycle"}],"source_content_type":"text/x-rst","patch_set":10,"id":"3fce034c_a8d2d9fe","line":98,"range":{"start_line":98,"start_character":2,"end_line":98,"end_character":52},"updated":"2019-04-18 17:55:47.000000000","message":"Nope. It\u0027s regrettable to have \"garbage\" in os-traits, but we can\u0027t remove it.","commit_id":"1e3a26b82cf731aa3fded15a3ebcd4d3dc05178b"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"2998589ad62070be247518f08c81d3643f8556e5","unresolved":false,"context_lines":[{"line_number":95,"context_line":"  which represents the number of SEV guests a compute host can"},{"line_number":96,"context_line":"  accommodate (zero for hosts which do not support SEV)."},{"line_number":97,"context_line":""},{"line_number":98,"context_line":"- Remove the ``HW_CPU_AMD_SEV`` trait from os-traits.  
It has been"},{"line_number":99,"context_line":"  present `since 0.11.0"},{"line_number":100,"context_line":"  \u003chttps://docs.openstack.org/os-traits/latest/reference/index.html#amd-sev\u003e`_,"},{"line_number":101,"context_line":"  and `was added in the Stein cycle"}],"source_content_type":"text/x-rst","patch_set":10,"id":"3fce034c_cea40aa9","line":98,"range":{"start_line":98,"start_character":2,"end_line":98,"end_character":52},"in_reply_to":"3fce034c_a8d2d9fe","updated":"2019-04-18 23:20:52.000000000","message":"Done","commit_id":"1e3a26b82cf731aa3fded15a3ebcd4d3dc05178b"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"ef0ad5b86f45c4bce11cbb6567ce554618a56a09","unresolved":false,"context_lines":[{"line_number":379,"context_line":"  ``virtio-scsi`` or SATA for the boot disk works as expected, as does"},{"line_number":380,"context_line":"  ``virtio-blk`` for non-boot disks."},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"- Operators will initially be required to manually specify the upper"},{"line_number":383,"context_line":"  limit of SEV guests for each compute host, via the new config option"},{"line_number":384,"context_line":"  proposed above.  This is a short-term workaround to the current lack"},{"line_number":385,"context_line":"  of mechanism for programmatically discovering the SEV guest limit."},{"line_number":386,"context_line":""}],"source_content_type":"text/x-rst","patch_set":10,"id":"3fce034c_c8540556","line":383,"range":{"start_line":382,"start_character":30,"end_line":383,"end_character":7},"updated":"2019-04-18 17:55:47.000000000","message":"I won\u0027t block on this, but I think I still prefer the default to be None or -1 to indicate \"detect\", where the default \"detected\" value is \"unlimited\". 
This means you\u0027re not forcing the operator to\n\n- make any conf changes to take advantage of SEV out of the box - all they need to do is set the \"encrypt_my_memory\u003dTrue\" extra-spec/image-meta-prop. Late failures may occur if the host\u0027s real capacity is exceeded; if they get to that point they can set the value.\n- make conf changes once autodetection is introduced - It Just Works.","commit_id":"1e3a26b82cf731aa3fded15a3ebcd4d3dc05178b"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"2998589ad62070be247518f08c81d3643f8556e5","unresolved":false,"context_lines":[{"line_number":379,"context_line":"  ``virtio-scsi`` or SATA for the boot disk works as expected, as does"},{"line_number":380,"context_line":"  ``virtio-blk`` for non-boot disks."},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"- Operators will initially be required to manually specify the upper"},{"line_number":383,"context_line":"  limit of SEV guests for each compute host, via the new config option"},{"line_number":384,"context_line":"  proposed above.  This is a short-term workaround to the current lack"},{"line_number":385,"context_line":"  of mechanism for programmatically discovering the SEV guest limit."},{"line_number":386,"context_line":""}],"source_content_type":"text/x-rst","patch_set":10,"id":"3fce034c_aea73ea7","line":383,"range":{"start_line":382,"start_character":30,"end_line":383,"end_character":7},"in_reply_to":"3fce034c_c8540556","updated":"2019-04-18 23:20:52.000000000","message":"Ohhh I get what you were saying now, thanks.  Yeah that makes sense - fixed.","commit_id":"1e3a26b82cf731aa3fded15a3ebcd4d3dc05178b"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"ef0ad5b86f45c4bce11cbb6567ce554618a56a09","unresolved":false,"context_lines":[{"line_number":674,"context_line":"   introduced above.  
It should also take into account the results of"},{"line_number":675,"context_line":"   the SEV detection code."},{"line_number":676,"context_line":""},{"line_number":677,"context_line":"#. Remove the ``HW_CPU_AMD_SEV`` trait from os-traits."},{"line_number":678,"context_line":""},{"line_number":679,"context_line":"#. Add a new ``nova.virt.libvirt.LibvirtConfigGuestSEVLaunchSecurity`` class"},{"line_number":680,"context_line":"   to describe the ``\u003claunchSecurity/\u003e`` element."}],"source_content_type":"text/x-rst","patch_set":10,"id":"3fce034c_48325561","line":677,"range":{"start_line":677,"start_character":3,"end_line":677,"end_character":54},"updated":"2019-04-18 17:55:47.000000000","message":"X","commit_id":"1e3a26b82cf731aa3fded15a3ebcd4d3dc05178b"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"2998589ad62070be247518f08c81d3643f8556e5","unresolved":false,"context_lines":[{"line_number":674,"context_line":"   introduced above.  It should also take into account the results of"},{"line_number":675,"context_line":"   the SEV detection code."},{"line_number":676,"context_line":""},{"line_number":677,"context_line":"#. Remove the ``HW_CPU_AMD_SEV`` trait from os-traits."},{"line_number":678,"context_line":""},{"line_number":679,"context_line":"#. 
Add a new ``nova.virt.libvirt.LibvirtConfigGuestSEVLaunchSecurity`` class"},{"line_number":680,"context_line":"   to describe the ``\u003claunchSecurity/\u003e`` element."}],"source_content_type":"text/x-rst","patch_set":10,"id":"3fce034c_4e081a91","line":677,"range":{"start_line":677,"start_character":3,"end_line":677,"end_character":54},"in_reply_to":"3fce034c_48325561","updated":"2019-04-18 23:20:52.000000000","message":"Done","commit_id":"1e3a26b82cf731aa3fded15a3ebcd4d3dc05178b"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"406e0ef9a80e90a4085880d6e25b2f0cf2cd0afe","unresolved":false,"context_lines":[{"line_number":111,"context_line":"  capability as a discretely quantifiable resource rather than as a"},{"line_number":112,"context_line":"  binary feature, therefore the ``SEV_CONTEXT`` resource class will"},{"line_number":113,"context_line":"  supersede it as the mechanism for indicating when an SEV context is"},{"line_number":114,"context_line":"  required."},{"line_number":115,"context_line":""},{"line_number":116,"context_line":"  It is regrettable to have to back-pedal on this element of the"},{"line_number":117,"context_line":"  design; however nothing used the ``HW_CPU_AMD_SEV`` trait yet so it"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_08ecb1c5","line":114,"updated":"2019-04-23 15:47:14.000000000","message":"ack.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"44ed33b71b101e740d9b2ce7b492ce43340c79fe","unresolved":false,"context_lines":[{"line_number":144,"context_line":"     AMD SEV and Intel `MKTME`_."},{"line_number":145,"context_line":""},{"line_number":146,"context_line":"     However, any implementations of those use cases are outside the"},{"line_number":147,"context_line":"     scope of this 
spec."},{"line_number":148,"context_line":""},{"line_number":149,"context_line":"- Make the libvirt driver `update the ProviderTree object"},{"line_number":150,"context_line":"  \u003chttps://docs.openstack.org/nova/latest/reference/update-provider-tree.html\u003e`_"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_4424e90d","line":147,"updated":"2019-04-22 21:22:11.000000000","message":"nice","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"406e0ef9a80e90a4085880d6e25b2f0cf2cd0afe","unresolved":false,"context_lines":[{"line_number":156,"context_line":"  Since it is not currently possible to obtain this limit"},{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don\u0027t impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_2862d53f","line":159,"range":{"start_line":159,"start_character":63,"end_line":159,"end_character":69},"updated":"2019-04-23 15:47:14.000000000","message":"We use -1 to indicate \"unlimited\" in a couple other CONF options.\n\nPlacement doesn\u0027t support unlimited inventory. You need to create inventory in the ProviderTree with some positive integer. 
So, how do you expect to translate this \"unlimited\" limit to an actual, well, limit?\n\nI\u0027d prefer this to be \"None\" because it better indicates \"automatically determine the inventory\" and not \"there is an unlimited inventory of these things\" (which, as mentioned above, doesn\u0027t make sense to placement, which understands an inventory to mean a finite amount of some resource to be consumed).","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"23cd7a345a63d259851fe9792c42664346641258","unresolved":false,"context_lines":[{"line_number":156,"context_line":"  Since it is not currently possible to obtain this limit"},{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don\u0027t impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_e90effe3","line":159,"range":{"start_line":159,"start_character":63,"end_line":159,"end_character":69},"in_reply_to":"ffb9cba7_2862d53f","updated":"2019-04-23 17:59:30.000000000","message":"Cool, so \"None\" in the conf option, meaning \"detect\"; and it will just happen to be the case that, absent any other detectable limit, the driver will use an inventory amount of MAXINT or something similarly unlimited-ish. 
That work?","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"1fc5e94e23ed75f9d9aee3471c99ef81bc0985bb","unresolved":false,"context_lines":[{"line_number":156,"context_line":"  Since it is not currently possible to obtain this limit"},{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don\u0027t impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_be0fdc2c","line":159,"range":{"start_line":159,"start_character":63,"end_line":159,"end_character":69},"in_reply_to":"ffb9cba7_49a62bc4","updated":"2019-04-23 23:00:30.000000000","message":"Done","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"7f7db86d62d6469fb136b2cb4d0fa45a9c884841","unresolved":false,"context_lines":[{"line_number":156,"context_line":"  Since it is not currently possible to obtain this limit"},{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  
This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don't impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_1e1aa87c","line":159,"range":{"start_line":159,"start_character":63,"end_line":159,"end_character":69},"in_reply_to":"ffb9cba7_49a62bc4","updated":"2019-04-23 21:23:16.000000000","message":"I'm pretty sure we use MAXINT in other places for unlimited in placement too, and None works for me too.\nI think -1 is also fine, but if you think None is clearer that's cool; I think they are about the same.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"bbfc5324725658f5ac3866fdb16bcc267d77e832","unresolved":false,"context_lines":[{"line_number":156,"context_line":"  Since it is not currently possible to obtain this limit"},{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don't impose any limit\".  
This would have two"},{"line_number":162,"context_line":"  benefits:"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_49a62bc4","line":159,"range":{"start_line":159,"start_character":63,"end_line":159,"end_character":69},"in_reply_to":"ffb9cba7_e90effe3","updated":"2019-04-23 18:05:33.000000000","message":"\u003e Cool, so \"None\" in the conf option, meaning \"detect\"; and it will\n \u003e just happen to be the case that, absent any other detectable limit,\n \u003e the driver will use an inventory amount of MAXINT or something\n \u003e similarly unlimited-ish. That work?\n\nYeah, that works for me.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"406e0ef9a80e90a4085880d6e25b2f0cf2cd0afe","unresolved":false,"context_lines":[{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don\u0027t impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"},{"line_number":163,"context_line":""}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_08191184","line":160,"range":{"start_line":160,"start_character":53,"end_line":160,"end_character":58},"updated":"2019-04-23 15:47:14.000000000","message":"it\u0027s not a limit. 
it's an inventory amount :) small but important distinction.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"7f7db86d62d6469fb136b2cb4d0fa45a9c884841","unresolved":false,"context_lines":[{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don't impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"},{"line_number":163,"context_line":""}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_3e1f6c6d","line":160,"range":{"start_line":160,"start_character":53,"end_line":160,"end_character":58},"in_reply_to":"ffb9cba7_08191184","updated":"2019-04-23 21:23:16.000000000","message":"Well, limit in this case is the hardware limit.\n\nThat said, we would be reflecting that hardware limitation in placement as an inventory amount, so you are also right.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"1fc5e94e23ed75f9d9aee3471c99ef81bc0985bb","unresolved":false,"context_lines":[{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  
This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don\u0027t impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"},{"line_number":163,"context_line":""}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_912449c8","line":160,"range":{"start_line":160,"start_character":53,"end_line":160,"end_character":58},"in_reply_to":"ffb9cba7_3e1f6c6d","updated":"2019-04-23 23:00:30.000000000","message":"As Sean says it\u0027s a limit in the sense that you can\u0027t use more of something than what\u0027s available ;-)  But yeah, it\u0027s not a quota or anything like that.\n\nBut then in the cases where the inventory size is not automatically detectable *and* has not been specified via a config option, should the implementation reflect that via an absence of inventory for that resource class, or by setting the inventory amount to MAXINT?  I saw the discussion about this on IRC:\n\nhttp://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-04-23.log.html#t2019-04-23T17:55:57\n\nbut it didn\u0027t seem to reach any clear conclusion about the best way of implementing this particular case.  You said that using MAXINT \"doesn\u0027t really make sense to me\" so I\u0027d welcome alternative suggestions for how to handle it.  
It\u0027s probably more of an implementation detail rather than something we need to cover in the spec, but as the person most likely to have to implement it I\u0027d really appreciate guidance ;-)","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"775ed5b840f9d9b7277d9e0d380cca7ad6992c71","unresolved":false,"context_lines":[{"line_number":157,"context_line":"  programmatically, introduce a new config option in the ``[libvirt]``"},{"line_number":158,"context_line":"  section of ``nova.conf`` to set the size of this inventory for each"},{"line_number":159,"context_line":"  SEV-capable compute host.  This would default to ``None`` or ``-1``"},{"line_number":160,"context_line":"  with a forward-looking meaning of \"auto-detect the limit, or if this"},{"line_number":161,"context_line":"  is not possible, don\u0027t impose any limit\".  This would have two"},{"line_number":162,"context_line":"  benefits:"},{"line_number":163,"context_line":""}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_b40a9b23","line":160,"range":{"start_line":160,"start_character":53,"end_line":160,"end_character":58},"in_reply_to":"ffb9cba7_912449c8","updated":"2019-04-23 23:29:03.000000000","message":"See above @159 where Jay and I agreed that MAXINT from the driver is fine as the inventory value sent by the driver when the conf value is None and the limit is not discoverable.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"44ed33b71b101e740d9b2ce7b492ce43340c79fe","unresolved":false,"context_lines":[{"line_number":259,"context_line":"the same would not work out of the box with image properties, since"},{"line_number":260,"context_line":"they currently only support traits, not resources.  
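To make the MAXINT agreement above concrete: placement only understands finite positive inventory totals, so when the conf value is None and the hardware limit is not discoverable, the driver would report a record along these lines. A hedged sketch; the helper name and the choice of `sys.maxsize` are assumptions, though the inventory fields (total, reserved, min_unit, max_unit, step_size, allocation_ratio) are the standard placement ones.

```python
import sys


def sev_inventory_record(total):
    """Build a placement-style inventory dict for SEV_CONTEXT (sketch)."""
    # Placement requires a finite positive 'total', hence a MAXINT-ish
    # value rather than any literal notion of "unlimited".
    return {
        'SEV_CONTEXT': {
            'total': total,
            'reserved': 0,
            'min_unit': 1,
            'max_unit': total,
            'step_size': 1,
            'allocation_ratio': 1.0,
        },
    }


# Conf value None and no detectable hardware limit:
inv = sev_inventory_record(sys.maxsize)
```

The absence-of-inventory alternative Adam raises would instead omit the `SEV_CONTEXT` key entirely, which is what makes a host unschedulable for SEV guests.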
So further work"},{"line_number":261,"context_line":"would be required in order to support requiring SEV via image"},{"line_number":262,"context_line":"properties."},{"line_number":263,"context_line":""},{"line_number":264,"context_line":"SEV launch-time configuration"},{"line_number":265,"context_line":"-----------------------------"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_04fa7141","line":262,"updated":"2019-04-22 21:22:11.000000000","message":"But we also talked about defining the extra spec in a non-placement-ese way (e.g. encrypt_my_memory\u003dtrue) which would be translated into placement-ese (?resources\u003dSEV_CONTEXT:1) by the scheduler. If we did that, the same key/val could be used in image properties, and translated the same way.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"5e22fedb653a93bdbcf4197ae37c0a3fdd8da044","unresolved":false,"context_lines":[{"line_number":259,"context_line":"the same would not work out of the box with image properties, since"},{"line_number":260,"context_line":"they currently only support traits, not resources.  So further work"},{"line_number":261,"context_line":"would be required in order to support requiring SEV via image"},{"line_number":262,"context_line":"properties."},{"line_number":263,"context_line":""},{"line_number":264,"context_line":"SEV launch-time configuration"},{"line_number":265,"context_line":"-----------------------------"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_da52e367","line":262,"in_reply_to":"ffb9cba7_04fa7141","updated":"2019-04-23 15:29:38.000000000","message":"I\u0027m fine with that, although I don\u0027t know what mechanism would be best for implementing it.  Sorry for the newbie questions:  Would it need to be done within an existing scheduler filter?  Or a new one?  Or somewhere else?  
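The "nova-ese to placement-ese" translation Eric describes above, where a user-facing key such as `encrypt_my_memory=True` is rewritten into `resources:SEV_CONTEXT=1` before scheduling, might look like the following sketch. The key name comes from the discussion itself; the helper is hypothetical, not an implemented Nova API. Because both sources feed the same translation, image properties would need no separate resources support.

```python
# Hypothetical sketch of translating a user-facing flag into a
# placement-ese resource request; not an implemented Nova API.
def translate_mem_encryption(extra_specs, image_props):
    """Return placement-style resource amounts for a boot request."""
    wants_encryption = any(
        str(src.get('encrypt_my_memory', '')).lower() == 'true'
        for src in (extra_specs, image_props)
    )
    # Same rewrite whether the flag came from the flavor extra spec
    # or the image property.
    return {'SEV_CONTEXT': 1} if wants_encryption else {}


assert translate_mem_encryption({'encrypt_my_memory': 'True'}, {}) == \
    {'SEV_CONTEXT': 1}
assert translate_mem_encryption({}, {'encrypt_my_memory': 'true'}) == \
    {'SEV_CONTEXT': 1}
assert translate_mem_encryption({}, {}) == {}
```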
And would the code also handle the image properties, or would that need to be done elsewhere?","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"1fc5e94e23ed75f9d9aee3471c99ef81bc0985bb","unresolved":false,"context_lines":[{"line_number":259,"context_line":"the same would not work out of the box with image properties, since"},{"line_number":260,"context_line":"they currently only support traits, not resources.  So further work"},{"line_number":261,"context_line":"would be required in order to support requiring SEV via image"},{"line_number":262,"context_line":"properties."},{"line_number":263,"context_line":""},{"line_number":264,"context_line":"SEV launch-time configuration"},{"line_number":265,"context_line":"-----------------------------"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_d113a165","line":262,"in_reply_to":"ffb9cba7_0e7279e6","updated":"2019-04-23 23:00:30.000000000","message":"Done","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"700c7c884d7334353d70a7ec742c941d769e5aa7","unresolved":false,"context_lines":[{"line_number":259,"context_line":"the same would not work out of the box with image properties, since"},{"line_number":260,"context_line":"they currently only support traits, not resources.  
So further work"},{"line_number":261,"context_line":"would be required in order to support requiring SEV via image"},{"line_number":262,"context_line":"properties."},{"line_number":263,"context_line":""},{"line_number":264,"context_line":"SEV launch-time configuration"},{"line_number":265,"context_line":"-----------------------------"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_0e7279e6","line":262,"in_reply_to":"ffb9cba7_da52e367","updated":"2019-04-23 17:53:41.000000000","message":"Discussed this in IRC: http://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-04-23.log.html#t2019-04-23T16:55:43-2","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"44ed33b71b101e740d9b2ce7b492ce43340c79fe","unresolved":false,"context_lines":[{"line_number":538,"context_line":"hardware-agnostic feature-oriented way, for example"},{"line_number":539,"context_line":"``MEM_ENCRYPTED_CONTEXT``.  This could accommodate support for similar"},{"line_number":540,"context_line":"functionality from Intel (e.g. `MKTME`_) and other vendors in the"},{"line_number":541,"context_line":"future, via a unified user interface."},{"line_number":542,"context_line":""},{"line_number":543,"context_line":"Some `alternative approaches were originally proposed"},{"line_number":544,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#alternatives\u003e`_"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_24da15e9","line":541,"updated":"2019-04-22 21:22:11.000000000","message":"Why did you decide not to do this? 
I still think it would be better in the long run.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"700c7c884d7334353d70a7ec742c941d769e5aa7","unresolved":false,"context_lines":[{"line_number":538,"context_line":"hardware-agnostic feature-oriented way, for example"},{"line_number":539,"context_line":"``MEM_ENCRYPTED_CONTEXT``.  This could accommodate support for similar"},{"line_number":540,"context_line":"functionality from Intel (e.g. `MKTME`_) and other vendors in the"},{"line_number":541,"context_line":"future, via a unified user interface."},{"line_number":542,"context_line":""},{"line_number":543,"context_line":"Some `alternative approaches were originally proposed"},{"line_number":544,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#alternatives\u003e`_"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_c9945b8e","line":541,"in_reply_to":"ffb9cba7_0814b1f4","updated":"2019-04-23 17:53:41.000000000","message":"\u003e Are you saying you want something like encrypt_memory\u003dtrue *and*\n \u003e the MEM_ENCRYPTED_CONTEXT resource class?  I thought one\n \u003e vendor-agnostic mechanism would be enough?\n\nThe former is the user-facing thing; the latter is what it translates to under the covers, as part of the placement GET /a_c request.\n\nIt seems we\u0027re pretty well settled on doing the former. So this discussion is about making the resource class generic (non-AMD-/SEV-specific).\n\n \u003e One thing that concerns me about this is that it hardcodes an\n \u003e assumption that all memory encryption mechanisms from other vendors\n \u003e impose a limit which is measured in guest \"slots\".  
I don\u0027t know\n \u003e about MKTME but it might not impose any guest limit, or even if it\n \u003e did, it might not be around the number of guests but some other\n \u003e related resource.  And it could be different again for ARM or\n \u003e whoever else jumps on the bandwagon.\n\nAgree, but:\n- If it is quantitative at all, it doesn\u0027t matter if it\u0027s around the number of guests or anything else; as long as it can be represented in an inventory using total/min_unit/max_unit/step_size/allocation_ratio/reserved, it\u0027ll work fine. It\u0027ll be up to the driver to determine what that inventory should look like and what it\u0027s based on, knowing that the request will always be for 1 unit.\n- If there\u0027s not a limit, the architecture you\u0027ve got in place (to default to \"unlimited\" based on the conf option) works fine too.\n\nHowever...\n\n \u003e \u003e My inclination would be to ensure that the low-level resource\n \u003e \u003e inventory correctly distinguishes between different technologies,\n \u003e \u003e and this could handle the first use case.  For example, we could\n \u003e \u003e have a resource class for each technology (AMD_SEV_CONTEXT,\n \u003e \u003e INTEL_MKTME_CONTEXT, ARM_WHATEVER, ...).\n \u003e \n \u003e It seems more in keeping with standard design principles if the\n \u003e lower layers (in this case the resource classes) are more\n \u003e implementation-specific, and the higher layers (encrypt_memory\u003dtrue)\n \u003e are where the abstraction / generalisation lives.\n\n...if we go with this, and make the resource class AMD_SEV-specific, and then the (let\u0027s say) INTEL_MKTME implementation comes along, we would add a resource class INTEL_MKTME_CONTEXT and libvirt would be exposing inventory of one or the other of those two. Now how do you translate encrypt_my_memory\u003dtrue? 
We have no way of saying \"give me N units of any one of these resource classes [X, Y, ...]\".\n\nOTOH, if we make the resource class generic, then encrypt_my_memory\u003dtrue always translates to resources\u003dMEM_ENC_CONTEXT:1, so my flavor can land on either one of those hosts. And if I want to narrow down to a specific one of those technologies, that\u0027s where I use the trait.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"5e22fedb653a93bdbcf4197ae37c0a3fdd8da044","unresolved":false,"context_lines":[{"line_number":538,"context_line":"hardware-agnostic feature-oriented way, for example"},{"line_number":539,"context_line":"``MEM_ENCRYPTED_CONTEXT``.  This could accommodate support for similar"},{"line_number":540,"context_line":"functionality from Intel (e.g. `MKTME`_) and other vendors in the"},{"line_number":541,"context_line":"future, via a unified user interface."},{"line_number":542,"context_line":""},{"line_number":543,"context_line":"Some `alternative approaches were originally proposed"},{"line_number":544,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#alternatives\u003e`_"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_0814b1f4","line":541,"in_reply_to":"ffb9cba7_24da15e9","updated":"2019-04-23 15:29:38.000000000","message":"Are you saying you want something like encrypt_memory\u003dtrue *and* the MEM_ENCRYPTED_CONTEXT resource class?  I thought one vendor-agnostic mechanism would be enough?\n\nOne thing that concerns me about this is that it hardcodes an assumption that all memory encryption mechanisms from other vendors impose a limit which is measured in guest \"slots\".  
I don\u0027t know about MKTME but it might not impose any guest limit, or even if it did, it might not be around the number of guests but some other related resource.  And it could be different again for ARM or whoever else jumps on the bandwagon.\n\nSo while I think it\u0027s fine to abstract out the functionality at the UI level via encrypt_memory\u003dtrue or similar, I\u0027m nervous of generalising our approach to low-level inventory before we actually know that the generalisation is actually valid for different vendor\u0027s implementations.\n\nThat\u0027s why in https://review.opendev.org/#/c/641994/6/specs/train/approved/amd-sev-libvirt-support.rst@192 I wrote:\n\n \u003e My inclination would be to ensure that the low-level resource\n \u003e inventory correctly distinguishes between different technologies,\n \u003e and this could handle the first use case.  For example, we could\n \u003e have a resource class for each technology (AMD_SEV_CONTEXT,\n \u003e INTEL_MKTME_CONTEXT, ARM_WHATEVER, ...).\n\nIt seems more in keeping with standard design principles if the lower layers (in this case the resource classes) are more implementation-specific, and the higher layers (encrypt_memory\u003dtrue) are where the abstraction / generalisation lives.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"1fc5e94e23ed75f9d9aee3471c99ef81bc0985bb","unresolved":false,"context_lines":[{"line_number":538,"context_line":"hardware-agnostic feature-oriented way, for example"},{"line_number":539,"context_line":"``MEM_ENCRYPTED_CONTEXT``.  This could accommodate support for similar"},{"line_number":540,"context_line":"functionality from Intel (e.g. 
`MKTME`_) and other vendors in the"},{"line_number":541,"context_line":"future, via a unified user interface."},{"line_number":542,"context_line":""},{"line_number":543,"context_line":"Some `alternative approaches were originally proposed"},{"line_number":544,"context_line":"\u003chttps://specs.openstack.org/openstack/nova-specs/specs/stein/approved/amd-sev-libvirt-support.html#alternatives\u003e`_"}],"source_content_type":"text/x-rst","patch_set":11,"id":"ffb9cba7_1bd2ba5d","line":541,"in_reply_to":"ffb9cba7_c9945b8e","updated":"2019-04-23 23:00:30.000000000","message":"FTR, discussion continued here:\n\nhttp://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-04-23.log.html#t2019-04-23T17:59:30\n\nThe above assumes that requests involving MKTME machines would also involve 1 unit of inventory per guest, so we\u0027ll run into problems if that\u0027s not the case.  But then you\u0027ve highlighted challenges with using vendor-specific resource classes too, so either way there are issues.  I\u0027m just going to defer to your (much) greater experience here.","commit_id":"93c25dbb3f1a0d214f9eb3a8cab6742a1dec220c"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"f56c2569df4dd939817b8bd1b6af1939fee680b9","unresolved":false,"context_lines":[{"line_number":262,"context_line":"  extra specs and image properties, which when set to ``true`` would"},{"line_number":263,"context_line":"  be translated behind the scenes into"},{"line_number":264,"context_line":"  ``resources:MEM_ENCRYPTION_CONTEXT\u003d1`` which would be added to the"},{"line_number":265,"context_line":"  flavor extra specs in the ``RequestSpec`` object.  
(This change to"},{"line_number":266,"context_line":"  the flavor would only affect this launch context and not be"},{"line_number":267,"context_line":"  persisted to the database.)"},{"line_number":268,"context_line":""},{"line_number":269,"context_line":"  Implementing this new parameter, which hides the implementation of"},{"line_number":270,"context_line":"  the resource inventory and allocation behind an abstraction, has"}],"source_content_type":"text/x-rst","patch_set":12,"id":"ffb9cba7_39745868","line":267,"range":{"start_line":265,"start_character":52,"end_line":267,"end_character":29},"updated":"2019-04-24 11:28:59.000000000","message":"Maybe I missed a conversation somewhere, but why wouldn\u0027t we want the resources:MEM_ENCRYPTION_CONTEXT\u003d1 extra spec to persist? On rebuild and resize/migrate, wouldn\u0027t we want to continue requiring memory encryption for the instance?","commit_id":"4671f102654c989db7f9911fc206e80b7aa2db2e"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"66b38e51ec6378c0f9221ab82dd01c52fcafc09b","unresolved":false,"context_lines":[{"line_number":262,"context_line":"  extra specs and image properties, which when set to ``true`` would"},{"line_number":263,"context_line":"  be translated behind the scenes into"},{"line_number":264,"context_line":"  ``resources:MEM_ENCRYPTION_CONTEXT\u003d1`` which would be added to the"},{"line_number":265,"context_line":"  flavor extra specs in the ``RequestSpec`` object.  
(This change to"},{"line_number":266,"context_line":"  the flavor would only affect this launch context and not be"},{"line_number":267,"context_line":"  persisted to the database.)"},{"line_number":268,"context_line":""},{"line_number":269,"context_line":"  Implementing this new parameter, which hides the implementation of"},{"line_number":270,"context_line":"  the resource inventory and allocation behind an abstraction, has"}],"source_content_type":"text/x-rst","patch_set":12,"id":"ffb9cba7_9a1c476b","line":267,"range":{"start_line":265,"start_character":52,"end_line":267,"end_character":29},"in_reply_to":"ffb9cba7_00230c07","updated":"2019-04-24 20:28:06.000000000","message":"At least for my part, I was just pointing out that the code path around request_filter.py would not persist any changes you make to the flavor. The debate around whether that\u0027s a requirement (or even a good thing) is mostly beyond my ken.","commit_id":"4671f102654c989db7f9911fc206e80b7aa2db2e"},{"author":{"_account_id":2394,"name":"Adam Spiers","email":"aspiers@suse.com","username":"adam.spiers"},"change_message_id":"aa0239e7361d7f752e0b27e681003dafbf4bf761","unresolved":false,"context_lines":[{"line_number":262,"context_line":"  extra specs and image properties, which when set to ``true`` would"},{"line_number":263,"context_line":"  be translated behind the scenes into"},{"line_number":264,"context_line":"  ``resources:MEM_ENCRYPTION_CONTEXT\u003d1`` which would be added to the"},{"line_number":265,"context_line":"  flavor extra specs in the ``RequestSpec`` object.  
(This change to"},{"line_number":266,"context_line":"  the flavor would only affect this launch context and not be"},{"line_number":267,"context_line":"  persisted to the database.)"},{"line_number":268,"context_line":""},{"line_number":269,"context_line":"  Implementing this new parameter, which hides the implementation of"},{"line_number":270,"context_line":"  the resource inventory and allocation behind an abstraction, has"}],"source_content_type":"text/x-rst","patch_set":12,"id":"ffb9cba7_00230c07","line":267,"range":{"start_line":265,"start_character":52,"end_line":267,"end_character":29},"in_reply_to":"ffb9cba7_3289cdc9","updated":"2019-04-24 16:56:33.000000000","message":"I inferred that it shouldn\u0027t be persisted based on this:\n\nhttp://eavesdrop.openstack.org/irclogs/%23openstack-nova/%23openstack-nova.2019-04-23.log.html#t2019-04-23T20:30:16\n\nbut maybe I misunderstood something.  \n\nYeah, resize could be a concern if things change significantly - to take an extreme example, \"resizing\" to Intel MKTME), but a milder example might be a change in SEV policy.","commit_id":"4671f102654c989db7f9911fc206e80b7aa2db2e"},{"author":{"_account_id":14070,"name":"Eric Fried","email":"openstack@fried.cc","username":"efried"},"change_message_id":"8a16122dd84f0b48b158be47dcd5bb9f87210899","unresolved":false,"context_lines":[{"line_number":262,"context_line":"  extra specs and image properties, which when set to ``true`` would"},{"line_number":263,"context_line":"  be translated behind the scenes into"},{"line_number":264,"context_line":"  ``resources:MEM_ENCRYPTION_CONTEXT\u003d1`` which would be added to the"},{"line_number":265,"context_line":"  flavor extra specs in the ``RequestSpec`` object.  
(This change to"},{"line_number":266,"context_line":"  the flavor would only affect this launch context and not be"},{"line_number":267,"context_line":"  persisted to the database.)"},{"line_number":268,"context_line":""},{"line_number":269,"context_line":"  Implementing this new parameter, which hides the implementation of"},{"line_number":270,"context_line":"  the resource inventory and allocation behind an abstraction, has"}],"source_content_type":"text/x-rst","patch_set":12,"id":"ffb9cba7_3289cdc9","line":267,"range":{"start_line":265,"start_character":52,"end_line":267,"end_character":29},"in_reply_to":"ffb9cba7_39745868","updated":"2019-04-24 14:12:10.000000000","message":"The discussion (still ongoing) is around the multiattach filter [1], specifically [2], and may not be super relevant in this context.\n\nI would expect the hw:mem_encryption flag, which *would* be persisted since it\u0027s part of the original flavor, to be reinterpreted on the move operation. So we wouldn\u0027t lose memory encryption on a rebuild. (Not sure what happens on a resize if the new flavor requests a different SEV status than the original...)\n\n[1] https://review.opendev.org/#/c/645316/\n[2] https://review.opendev.org/#/c/645316/2/nova/compute/api.py@1085","commit_id":"4671f102654c989db7f9911fc206e80b7aa2db2e"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"f56c2569df4dd939817b8bd1b6af1939fee680b9","unresolved":false,"context_lines":[{"line_number":569,"context_line":"*or* Intel MKTME hardware.  
Therefore there would be no way to"},{"line_number":570,"context_line":"translate the vendor-agnostic ``hw:mem_encryption\u003dtrue`` extra spec"},{"line_number":571,"context_line":"parameter or image property into an extra spec parameter which would"},{"line_number":572,"context_line":"achieve the desired effect."},{"line_number":573,"context_line":""},{"line_number":574,"context_line":"Some fundamentally different `approaches to SEV were originally"},{"line_number":575,"context_line":"proposed"}],"source_content_type":"text/x-rst","patch_set":12,"id":"ffb9cba7_f90240b4","line":572,"updated":"2019-04-24 11:28:59.000000000","message":"++","commit_id":"4671f102654c989db7f9911fc206e80b7aa2db2e"}]}

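The inventory-sizing agreement from the @159/@160 comments above (report MAXINT as the driver's inventory total when the conf value is ``None`` and the limit is not discoverable) might look something like this sketch. The function name and the use of ``sys.maxsize`` as a stand-in for whatever MAXINT the driver would actually report are assumptions for illustration; only the ``None``/``-1`` semantics and the MAXINT fallback come from the review.

```python
# Illustrative sketch of the agreed inventory sizing: the operator's
# [libvirt] conf value wins if set; otherwise use the auto-detected
# hardware limit; otherwise report MAXINT so no artificial limit is
# imposed.  sys.maxsize stands in for the driver's MAXINT constant.
import sys

MAXINT = sys.maxsize


def mem_encryption_inventory_total(conf_value, detected_limit):
    """Pick the inventory total for the memory-encryption resource."""
    if conf_value not in (None, -1):
        return conf_value        # explicit operator override
    if detected_limit is not None:
        return detected_limit    # auto-detected hardware limit
    return MAXINT                # limit not discoverable: "unlimited"
```

As Eric notes in the thread, since each guest always requests exactly 1 unit, the driver is free to decide what this total is based on, using the standard total/min_unit/max_unit/step_size/allocation_ratio/reserved inventory fields.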