{"specs/train/approved/separate-vcpu-into-different-priority-pool.rst":[{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"c710524e882d77eeb80ff2fb52a3020ea3a13e23","unresolved":false,"context_lines":[{"line_number":5,"context_line":" http://creativecommons.org/licenses/by/3.0/legalcode"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":8,"context_line":"Separate the vCPUs into different pool based on priority"},{"line_number":9,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":10,"context_line":""},{"line_number":11,"context_line":"https://blueprints.launchpad.net/nova/+spec/separate-vcpu-into-different-priority-pool"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_f8aac581","line":8,"range":{"start_line":8,"start_character":0,"end_line":8,"end_character":56},"updated":"2019-04-09 16:20:20.000000000","message":"before i get too far into this i think this needs to be viewed in combination with https://review.openstack.org/#/c/651024/ RMD Plugin: Energy Efficiency using CPU Core P-State control\n\ni\u0027ll cross post this comment there too, but both specs are related to managing the performance of guest cpus and both are being proposed by different parts of intel, so it would be good to make sure 
there is alignment, and hopefully they are also compatible with https://review.openstack.org/#/c/555081/\nwhich proposes standardized cpu tracking in placement.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":5,"context_line":" http://creativecommons.org/licenses/by/3.0/legalcode"},{"line_number":6,"context_line":""},{"line_number":7,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":8,"context_line":"Separate the vCPUs into different pool based on priority"},{"line_number":9,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":10,"context_line":""},{"line_number":11,"context_line":"https://blueprints.launchpad.net/nova/+spec/separate-vcpu-into-different-priority-pool"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_e38a0819","line":8,"range":{"start_line":8,"start_character":0,"end_line":8,"end_character":56},"in_reply_to":"5fc1f717_f8aac581","updated":"2019-04-10 06:25:29.000000000","message":"We already get alignment internally before we submit to the community.\n\nThis spec aims at a static and simple way to enable the operator to configure the CPU priority (or traits, which is more generic), whatever the 
technology behind this is (software or hardware).\n\nThe RMD energy spec is more specific to the dynamic way: RMD tunes the configuration and performance underneath Nova, and it is specific to P-State and Intel BPF.\n\nI will clarify this in the alternatives section.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"9e3b02dbd35aa14694dd732c55180cb57218e472","unresolved":false,"context_lines":[{"line_number":12,"context_line":""},{"line_number":13,"context_line":"Linux Kernel supports scaling CPU frequency up or down, originally for the"},{"line_number":14,"context_line":"purpose of saving power, which could be used to create CPU pools of different"},{"line_number":15,"context_line":"performance priority. This benefits a lot for some typical cloud scenario"},{"line_number":16,"context_line":"which expects a stable or sustainable high-performance service quality for the"},{"line_number":17,"context_line":"vital workload."},{"line_number":18,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_4f47764d","line":15,"range":{"start_line":15,"start_character":51,"end_line":15,"end_character":73},"updated":"2019-04-10 13:30:21.000000000","message":"we have very different ideas on what \"typical cloud scenario\" means :)","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"07d004f68dad13c01e8532c0cde0f94545955245","unresolved":false,"context_lines":[{"line_number":12,"context_line":""},{"line_number":13,"context_line":"Linux Kernel supports scaling CPU frequency up or down, originally for the"},{"line_number":14,"context_line":"purpose of saving power, which could be used to create CPU pools of different"},{"line_number":15,"context_line":"performance priority. 
This benefits a lot for some typical cloud scenario"},{"line_number":16,"context_line":"which expects a stable or sustainable high-performance service quality for the"},{"line_number":17,"context_line":"vital workload."},{"line_number":18,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"3fce034c_cc03440f","line":15,"range":{"start_line":15,"start_character":51,"end_line":15,"end_character":73},"in_reply_to":"5fc1f717_4f47764d","updated":"2019-04-11 10:24:35.000000000","message":"embarrassed. It is just a cloud scenario.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":7,"name":"Jay Pipes","email":"jaypipes@gmail.com","username":"jaypipes"},"change_message_id":"9e3b02dbd35aa14694dd732c55180cb57218e472","unresolved":false,"context_lines":[{"line_number":28,"context_line":"by `/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_min_freq` and `"},{"line_number":29,"context_line":"`/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_max_freq`."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":"Also echo host CPU is capable of selecting its own frequency scaling governor."},{"line_number":32,"context_line":"For example, the `performance` governor runs the CPU at the maximum frequency,"},{"line_number":33,"context_line":"the `on-demand` governor scales the frequency dynamically according to current"},{"line_number":34,"context_line":"load to save power but with some extra latency when switching the power state."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_8f98febf","line":31,"range":{"start_line":31,"start_character":5,"end_line":31,"end_character":9},"updated":"2019-04-10 
13:30:21.000000000","message":"s/echo/each","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"07d004f68dad13c01e8532c0cde0f94545955245","unresolved":false,"context_lines":[{"line_number":28,"context_line":"by `/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_min_freq` and `"},{"line_number":29,"context_line":"`/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_max_freq`."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":"Also echo host CPU is capable of selecting its own frequency scaling governor."},{"line_number":32,"context_line":"For example, the `performance` governor runs the CPU at the maximum frequency,"},{"line_number":33,"context_line":"the `on-demand` governor scales the frequency dynamically according to current"},{"line_number":34,"context_line":"load to save power but with some extra latency when switching the power state."}],"source_content_type":"text/x-rst","patch_set":3,"id":"3fce034c_270f812f","line":31,"range":{"start_line":31,"start_character":5,"end_line":31,"end_character":9},"in_reply_to":"5fc1f717_8f98febf","updated":"2019-04-11 10:24:35.000000000","message":"My fault, thanks.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":22,"context_line":"Problem description"},{"line_number":23,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":24,"context_line":""},{"line_number":25,"context_line":"Linux provides ways [1]_ [2]_ to tune the priority on each CPU."},{"line_number":26,"context_line":""},{"line_number":27,"context_line":"For example, the max frequency and min 
frequency for a CPU can be configured"},{"line_number":28,"context_line":"by `/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_min_freq` and `"},{"line_number":29,"context_line":"`/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_max_freq`."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":"Also echo host CPU is capable of selecting its own frequency scaling governor."},{"line_number":32,"context_line":"For example, the `performance` governor runs the CPU at the maximum frequency,"},{"line_number":33,"context_line":"the `on-demand` governor scales the frequency dynamically according to current"},{"line_number":34,"context_line":"load to save power but with some extra latency when switching the power state."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"This provides the operator with more possibility to arrange CPU resource."},{"line_number":37,"context_line":"Running low priority workloads on lower performance CPUs will consume less"},{"line_number":38,"context_line":"power and save power headroom, then the high priority workloads on the same"},{"line_number":39,"context_line":"host could obtain higher and sustainable performance by running on higher"},{"line_number":40,"context_line":"priority CPUs."},{"line_number":41,"context_line":""},{"line_number":42,"context_line":"Use Cases"},{"line_number":43,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_f80de58e","line":40,"range":{"start_line":25,"start_character":0,"end_line":40,"end_character":14},"updated":"2019-04-09 17:32:32.000000000","message":"so this does not state a problem, it just states that apis exist that allow you to set the frequency scaling of cpus or to set the linux or hardware governor. 
\n\nsetting the cpu frequency per core was added in haswell by the way; previously it was set for all cores on the package.\n\nyou do not state a problem here.\n\nby the way, intel is also proposing to use RMD to dynamically adjust the pstates (cpu frequency) of cores allocated to guests\nbased on telemetry governed by a policy set via the flavor.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"c18cb79aa1327bcd7159af32cdf20831b2ab937e","unresolved":false,"context_lines":[{"line_number":22,"context_line":"Problem description"},{"line_number":23,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":24,"context_line":""},{"line_number":25,"context_line":"Linux provides ways [1]_ [2]_ to tune the priority on each CPU."},{"line_number":26,"context_line":""},{"line_number":27,"context_line":"For example, the max frequency and min frequency for a CPU can be configured"},{"line_number":28,"context_line":"by `/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_min_freq` and `"},{"line_number":29,"context_line":"`/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_max_freq`."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":"Also echo host CPU is capable of selecting its own frequency scaling governor."},{"line_number":32,"context_line":"For example, the `performance` governor runs the CPU at the maximum frequency,"},{"line_number":33,"context_line":"the `on-demand` governor scales the frequency dynamically according to current"},{"line_number":34,"context_line":"load to save power but with some extra latency when switching the power state."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"This provides the operator with more possibility to arrange CPU resource."},{"line_number":37,"context_line":"Running low 
priority workloads on lower performance CPUs will consume less"},{"line_number":38,"context_line":"power and save power headroom, then the high priority workloads on the same"},{"line_number":39,"context_line":"host could obtain higher and sustainable performance by running on higher"},{"line_number":40,"context_line":"priority CPUs."},{"line_number":41,"context_line":""},{"line_number":42,"context_line":"Use Cases"},{"line_number":43,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"3fce034c_67826952","line":40,"range":{"start_line":25,"start_character":0,"end_line":40,"end_character":14},"in_reply_to":"5fc1f717_f80de58e","updated":"2019-04-11 10:35:27.000000000","message":"We\u0027ll refine this part. One problem we have seen (as stated in the reply to Jay\u0027s comments) is:\n\nThe Linux per-CPU frequency scaling functionality and its underlying technologies are more and more expected by the infrastructure provider (that is, the physical server provider inside a data center/cloud company) from the perspective of reducing the number of physical machine types in the whole cloud. 
\n\nWhile Nova does not provide a way to change the machine type through software configuration, what we propose in this spec, in particular, is changing the CPU frequency to change the machine type, simplifying the selection of physical server machine types.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":22,"context_line":"Problem description"},{"line_number":23,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":24,"context_line":""},{"line_number":25,"context_line":"Linux provides ways [1]_ [2]_ to tune the priority on each CPU."},{"line_number":26,"context_line":""},{"line_number":27,"context_line":"For example, the max frequency and min frequency for a CPU can be configured"},{"line_number":28,"context_line":"by `/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_min_freq` and `"},{"line_number":29,"context_line":"`/sys/devices/system/cpu/cpu[id]/cpufreq/scaling_max_freq`."},{"line_number":30,"context_line":""},{"line_number":31,"context_line":"Also echo host CPU is capable of selecting its own frequency scaling governor."},{"line_number":32,"context_line":"For example, the `performance` governor runs the CPU at the maximum frequency,"},{"line_number":33,"context_line":"the `on-demand` governor scales the frequency dynamically according to current"},{"line_number":34,"context_line":"load to save power but with some extra latency when switching the power state."},{"line_number":35,"context_line":""},{"line_number":36,"context_line":"This provides the operator with more possibility to arrange CPU resource."},{"line_number":37,"context_line":"Running low priority workloads on lower performance CPUs will consume 
less"},{"line_number":38,"context_line":"power and save power headroom, then the high priority workloads on the same"},{"line_number":39,"context_line":"host could obtain higher and sustainable performance by running on higher"},{"line_number":40,"context_line":"priority CPUs."},{"line_number":41,"context_line":""},{"line_number":42,"context_line":"Use Cases"},{"line_number":43,"context_line":"---------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_436f5c5f","line":40,"range":{"start_line":25,"start_character":0,"end_line":40,"end_character":14},"in_reply_to":"5fc1f717_f80de58e","updated":"2019-04-10 06:25:29.000000000","message":"we are trying to state that there are many ways of tuning the CPU, but currently in nova we have no way to describe the differences between cores, so that is the problem. we will clarify that.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":48,"context_line":"  required to be guaranteed. These high priority workloads could be deployed"},{"line_number":49,"context_line":"  on the high priority CPUs."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"* For multiple workloads deployment scenario, the whole CPU utilization is not"},{"line_number":52,"context_line":"  that high, and the general power requirement is saving power. 
While there"},{"line_number":53,"context_line":"  exists some high priority workloads which are expected to run on higher"},{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_f882c5c6","line":51,"range":{"start_line":51,"start_character":2,"end_line":51,"end_character":44},"updated":"2019-04-09 17:32:32.000000000","message":"this is the default expectation in a cloud.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":48,"context_line":"  required to be guaranteed. These high priority workloads could be deployed"},{"line_number":49,"context_line":"  on the high priority CPUs."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"* For multiple workloads deployment scenario, the whole CPU utilization is not"},{"line_number":52,"context_line":"  that high, and the general power requirement is saving power. While there"},{"line_number":53,"context_line":"  exists some high priority workloads which are expected to run on higher"},{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_92efcc1b","line":51,"range":{"start_line":51,"start_character":2,"end_line":51,"end_character":44},"in_reply_to":"5fc1f717_f882c5c6","updated":"2019-04-10 10:17:31.000000000","message":"Got it, probably this needs a refinement or removal, let me think. \n\nIn this paragraph I am trying to describe the requirement that some workloads want to keep a high CPU frequency by using the \u0027performance\u0027 governor, while most other workloads are using the \u0027power-save\u0027 governor. 
The \u0027performance\u0027 governor tries to maintain the CPU at the top frequency and will not incur the extra switching latency, since it does not attempt to change the P-state frequently even when the CPU utilization is low.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":48,"context_line":"  required to be guaranteed. These high priority workloads could be deployed"},{"line_number":49,"context_line":"  on the high priority CPUs."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"* For multiple workloads deployment scenario, the whole CPU utilization is not"},{"line_number":52,"context_line":"  that high, and the general power requirement is saving power. While there"},{"line_number":53,"context_line":"  exists some high priority workloads which are expected to run on higher"},{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_38660d53","line":52,"range":{"start_line":51,"start_character":46,"end_line":52,"end_character":62},"updated":"2019-04-09 17:32:32.000000000","message":"that is deployment specific. for an NFV cloud that probably is not the case; for an hpc cloud it is definitely not the case.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":48,"context_line":"  required to be guaranteed. 
These high priority workloads could be deployed"},{"line_number":49,"context_line":"  on the high priority CPUs."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"* For multiple workloads deployment scenario, the whole CPU utilization is not"},{"line_number":52,"context_line":"  that high, and the general power requirement is saving power. While there"},{"line_number":53,"context_line":"  exists some high priority workloads which are expected to run on higher"},{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_f71de272","line":52,"range":{"start_line":51,"start_character":46,"end_line":52,"end_character":62},"in_reply_to":"5fc1f717_38660d53","updated":"2019-04-10 10:17:31.000000000","message":"Yes, we would like to limit this \u0027use case\u0027 to particular scenarios, not all multi-deployment scenarios.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":49,"context_line":"  on the high priority CPUs."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"* For multiple workloads deployment scenario, the whole CPU utilization is not"},{"line_number":52,"context_line":"  that high, and the general power requirement is saving power. 
While there"},{"line_number":53,"context_line":"  exists some high priority workloads which are expected to run on higher"},{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."},{"line_number":56,"context_line":""},{"line_number":57,"context_line":"* The NFV user has a workload which needs one or two higher frequency CPUs"},{"line_number":58,"context_line":"  for important tasks, and another few tasks which can be running on lower"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_bef425d0","line":55,"range":{"start_line":52,"start_character":64,"end_line":55,"end_character":14},"updated":"2019-04-09 17:32:32.000000000","message":"this is not a use case.\n\ni think the use case you are trying to convey is that if you have a mixed set of workloads, some will require higher frequency cpus than others, to ensure better power saving","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":49,"context_line":"  on the high priority CPUs."},{"line_number":50,"context_line":""},{"line_number":51,"context_line":"* For multiple workloads deployment scenario, the whole CPU utilization is not"},{"line_number":52,"context_line":"  that high, and the general power requirement is saving power. 
While there"},{"line_number":53,"context_line":"  exists some high priority workloads which are expected to run on higher"},{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."},{"line_number":56,"context_line":""},{"line_number":57,"context_line":"* The NFV user has a workload which needs one or two higher frequency CPUs"},{"line_number":58,"context_line":"  for important tasks, and another few tasks which can be running on lower"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_833a2431","line":55,"range":{"start_line":52,"start_character":64,"end_line":55,"end_character":14},"in_reply_to":"5fc1f717_bef425d0","updated":"2019-04-10 06:25:29.000000000","message":"For the higher frequency cpus case, I want to point to the first use case I wrote at line 45.\n\nWe have seen some cases where the customer wants a fixed frequency; they don\u0027t want the latency of raising the cpu frequency when the workload becomes higher under some kind of power save policy.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."},{"line_number":56,"context_line":""},{"line_number":57,"context_line":"* The NFV user has a workload which needs one or two higher frequency CPUs"},{"line_number":58,"context_line":"  for important tasks, and another few tasks which can be running on lower"},{"line_number":59,"context_line":"  frequency CPUs. 
An example of such a workload is Open VSwitch (OVS) where"},{"line_number":60,"context_line":"  the OVS threads using DPDK are high priority tasks and serves as the"},{"line_number":61,"context_line":"  switching layer between other VNFs."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Proposed change"},{"line_number":64,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_588f59cc","line":61,"range":{"start_line":57,"start_character":2,"end_line":61,"end_character":37},"updated":"2019-04-09 17:32:32.000000000","message":"if you are referring to deploying ovs-dpdk or dpdk based apps on the host, this can already be done today without any modification to openstack.\n\nin fact when deploying ovs-dpdk you are expected and advised to remove the pmd cores from the vcpu_pin_set and cpu_shared_set so that guests do not interfere with the vswitch.\n\nas the vswitch is statically pinned, it is simple to tune those cores for higher performance via systemd or other tools like tuned profiles.\n\n\nif you are referring to deploying ovs-dpdk or a dpdk based vnf in the guest, then why would you use this feature over marking the dpdk pmd cores as realtime cores?","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."},{"line_number":56,"context_line":""},{"line_number":57,"context_line":"* The NFV user has a workload which needs one or two higher frequency CPUs"},{"line_number":58,"context_line":"  for important tasks, and another few tasks which can be running on 
lower"},{"line_number":59,"context_line":"  frequency CPUs. An example of such a workload is Open VSwitch (OVS) where"},{"line_number":60,"context_line":"  the OVS threads using DPDK are high priority tasks and serves as the"},{"line_number":61,"context_line":"  switching layer between other VNFs."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Proposed change"},{"line_number":64,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_c3826cc7","line":61,"range":{"start_line":57,"start_character":2,"end_line":61,"end_character":37},"in_reply_to":"5fc1f717_588f59cc","updated":"2019-04-10 06:25:29.000000000","message":"I\u0027m referring to deploying ovs-dpdk/vnf in the guest. So for the realtime cores, I can still tune the frequency/freq governor on those cores, right? \n\nTo be honest, I\u0027m not familiar with the nfv case, correct me on anything :)","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"791d5b0089372fc37db6f291c515237022e65a3f","unresolved":false,"context_lines":[{"line_number":54,"context_line":"  frequency CPUs continuously without the latency of switching power state by"},{"line_number":55,"context_line":"  the governor."},{"line_number":56,"context_line":""},{"line_number":57,"context_line":"* The NFV user has a workload which needs one or two higher frequency CPUs"},{"line_number":58,"context_line":"  for important tasks, and another few tasks which can be running on lower"},{"line_number":59,"context_line":"  frequency CPUs. 
An example of such a workload is Open VSwitch (OVS) where"},{"line_number":60,"context_line":"  the OVS threads using DPDK are high priority tasks and serves as the"},{"line_number":61,"context_line":"  switching layer between other VNFs."},{"line_number":62,"context_line":""},{"line_number":63,"context_line":"Proposed change"},{"line_number":64,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_979b3e15","line":61,"range":{"start_line":57,"start_character":2,"end_line":61,"end_character":37},"in_reply_to":"5fc1f717_c3826cc7","updated":"2019-04-10 09:39:16.000000000","message":"yes that is true, you can certainly do both.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":67,"context_line":"------------------------------------------"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"To enable the administrator to specify the priority of each vCPU in the flavor,"},{"line_number":70,"context_line":"propose to introduce a new extra specs ``hw:cpus.[Traits] \u003d cpuset string``,"},{"line_number":71,"context_line":"where the ``[Traits]`` could be customized to the CPU priority string, it also"},{"line_number":72,"context_line":"takes the standardized Traits defined in ``os_traits`` [3]_."},{"line_number":73,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_9825c1c4","line":70,"range":{"start_line":70,"start_character":40,"end_line":70,"end_character":75},"updated":"2019-04-09 17:32:32.000000000","message":"i don\u0027t think this is a good idea in general.\n\ncan you provide an example of how this will be translated into a placement 
request.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":67,"context_line":"------------------------------------------"},{"line_number":68,"context_line":""},{"line_number":69,"context_line":"To enable the administrator to specify the priority of each vCPU in the flavor,"},{"line_number":70,"context_line":"propose to introduce a new extra specs ``hw:cpus.[Traits] \u003d cpuset string``,"},{"line_number":71,"context_line":"where the ``[Traits]`` could be customized to the CPU priority string, it also"},{"line_number":72,"context_line":"takes the standardized Traits defined in ``os_traits`` [3]_."},{"line_number":73,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_a3588089","line":70,"range":{"start_line":70,"start_character":40,"end_line":70,"end_character":75},"in_reply_to":"5fc1f717_9825c1c4","updated":"2019-04-10 06:25:29.000000000","message":"actually, we have a whole section to explain the translation at line 202 :)","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":68,"context_line":""},{"line_number":69,"context_line":"To enable the administrator to specify the priority of each vCPU in the flavor,"},{"line_number":70,"context_line":"propose to introduce a new extra specs ``hw:cpus.[Traits] \u003d cpuset string``,"},{"line_number":71,"context_line":"where the ``[Traits]`` could be customized to the CPU priority string, it also"},{"line_number":72,"context_line":"takes the standardized Traits defined in ``os_traits`` [3]_."},{"line_number":73,"context_line":""},{"line_number":74,"context_line":"This 
new extra specs is binding to the existed NUMA related extra specs. That"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_ec5e8c11","line":71,"range":{"start_line":71,"start_character":32,"end_line":71,"end_character":69},"updated":"2019-04-09 14:19:29.000000000","message":"\u0027could be the custom traits, it also can take the standardized traits defined in `os-traits`\u0027","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":68,"context_line":""},{"line_number":69,"context_line":"To enable the administrator to specify the priority of each vCPU in the flavor,"},{"line_number":70,"context_line":"propose to introduce a new extra specs ``hw:cpus.[Traits] \u003d cpuset string``,"},{"line_number":71,"context_line":"where the ``[Traits]`` could be customized to the CPU priority string, it also"},{"line_number":72,"context_line":"takes the standardized Traits defined in ``os_traits`` [3]_."},{"line_number":73,"context_line":""},{"line_number":74,"context_line":"This new extra specs is binding to the existed NUMA related extra specs. 
That"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_b240c807","line":71,"range":{"start_line":71,"start_character":32,"end_line":71,"end_character":69},"in_reply_to":"5fc1f717_ec5e8c11","updated":"2019-04-10 10:17:31.000000000","message":"Got, will be refined.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":70,"context_line":"propose to introduce a new extra specs ``hw:cpus.[Traits] \u003d cpuset string``,"},{"line_number":71,"context_line":"where the ``[Traits]`` could be customized to the CPU priority string, it also"},{"line_number":72,"context_line":"takes the standardized Traits defined in ``os_traits`` [3]_."},{"line_number":73,"context_line":""},{"line_number":74,"context_line":"This new extra specs is binding to the existed NUMA related extra specs. That"},{"line_number":75,"context_line":"means only the guest which has NUMA topology is allowed to set the priority of"},{"line_number":76,"context_line":"vCPUs. 
This is due to the guest NUMA topology is the only place to describe the"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_cc32b0a7","line":73,"updated":"2019-04-09 14:19:29.000000000","message":"I think we can just mention \u0027HW_CPU_HIGH_PRIORITY\u0027 and \u0027HW_CPU_LOW_PRIORITY\u0027 will be the standard traits at here.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":70,"context_line":"propose to introduce a new extra specs ``hw:cpus.[Traits] \u003d cpuset string``,"},{"line_number":71,"context_line":"where the ``[Traits]`` could be customized to the CPU priority string, it also"},{"line_number":72,"context_line":"takes the standardized Traits defined in ``os_traits`` [3]_."},{"line_number":73,"context_line":""},{"line_number":74,"context_line":"This new extra specs is binding to the existed NUMA related extra specs. That"},{"line_number":75,"context_line":"means only the guest which has NUMA topology is allowed to set the priority of"},{"line_number":76,"context_line":"vCPUs. This is due to the guest NUMA topology is the only place to describe the"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_126afc7f","line":73,"in_reply_to":"5fc1f717_cc32b0a7","updated":"2019-04-10 10:17:31.000000000","message":"That\u0027s our plan to standardize \u0027HW_CPU_HIGH_PRIORITY\u0027 and \u0027HW_CPU_LOW_PRIORITY\u0027. 
I\u0027ll mention it here in making update.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":83,"context_line":"      memory_mb\u003d512"},{"line_number":84,"context_line":"      extra_specs:"},{"line_number":85,"context_line":"        hw:numa_nodes\u003d1"},{"line_number":86,"context_line":"        hw:cpus.HW_CPU_HIGH_PRIORITY\u003d0-3"},{"line_number":87,"context_line":"        hw:cpus.HW_CPU_LOW_PRIORITY\u003d8-11"},{"line_number":88,"context_line":""},{"line_number":89,"context_line":"This flavor has 12 vCPUs and 1 NUMA node, the first 4 vCPUs are high priority,"},{"line_number":90,"context_line":"and the last 4 vCPUs are low priority, the rest of vCPUs are just normal"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_5b32f3f3","line":87,"range":{"start_line":86,"start_character":8,"end_line":87,"end_character":40},"updated":"2019-04-09 17:32:32.000000000","message":"so this is not a standard extra_spec with a well defiend key but rather a templated extra_specs.\n\ne.g. 
the dirver would have to accept hw:cpu.\u003cany string that is availd in a extraspec\u003e\u003d\u003ca mask liek the realtime mask\u003e","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":83,"context_line":"      memory_mb\u003d512"},{"line_number":84,"context_line":"      extra_specs:"},{"line_number":85,"context_line":"        hw:numa_nodes\u003d1"},{"line_number":86,"context_line":"        hw:cpus.HW_CPU_HIGH_PRIORITY\u003d0-3"},{"line_number":87,"context_line":"        hw:cpus.HW_CPU_LOW_PRIORITY\u003d8-11"},{"line_number":88,"context_line":""},{"line_number":89,"context_line":"This flavor has 12 vCPUs and 1 NUMA node, the first 4 vCPUs are high priority,"},{"line_number":90,"context_line":"and the last 4 vCPUs are low priority, the rest of vCPUs are just normal"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_8351c464","line":87,"range":{"start_line":86,"start_character":8,"end_line":87,"end_character":40},"in_reply_to":"5fc1f717_5b32f3f3","updated":"2019-04-10 06:25:29.000000000","message":"we already have same examples: \u0027traits:[TRAIT]\u003drequired/forbidden\u0027 and \u0027resources:[RC]\u003dn\u0027","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":91,"context_line":"vCPUs without an explicit priority."},{"line_number":92,"context_line":""},{"line_number":93,"context_line":"Since \u0027cpu_policy\u0027 is not explicitly specified, it will take the default"},{"line_number":94,"context_line":"\u0027share\u0027 CPU policy, in contrast with \u0027dedicated\u0027 CPU policy that vCPU 
will"},{"line_number":95,"context_line":"be assigned to a particular host CPU, each of the vCPU will float across a"},{"line_number":96,"context_line":"range of host CPUs."},{"line_number":97,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_ecc44c62","line":94,"range":{"start_line":94,"start_character":1,"end_line":94,"end_character":6},"updated":"2019-04-09 14:19:29.000000000","message":"nit, s/share/shared/","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":91,"context_line":"vCPUs without an explicit priority."},{"line_number":92,"context_line":""},{"line_number":93,"context_line":"Since \u0027cpu_policy\u0027 is not explicitly specified, it will take the default"},{"line_number":94,"context_line":"\u0027share\u0027 CPU policy, in contrast with \u0027dedicated\u0027 CPU policy that vCPU will"},{"line_number":95,"context_line":"be assigned to a particular host CPU, each of the vCPU will float across a"},{"line_number":96,"context_line":"range of host CPUs."},{"line_number":97,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_920b4cd3","line":94,"range":{"start_line":94,"start_character":1,"end_line":94,"end_character":6},"in_reply_to":"5fc1f717_ecc44c62","updated":"2019-04-10 10:17:31.000000000","message":"Got","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":90,"context_line":"and the last 4 vCPUs are low priority, the rest of vCPUs are just normal"},{"line_number":91,"context_line":"vCPUs without an explicit 
priority."},{"line_number":92,"context_line":""},{"line_number":93,"context_line":"Since \u0027cpu_policy\u0027 is not explicitly specified, it will take the default"},{"line_number":94,"context_line":"\u0027share\u0027 CPU policy, in contrast with \u0027dedicated\u0027 CPU policy that vCPU will"},{"line_number":95,"context_line":"be assigned to a particular host CPU, each of the vCPU will float across a"},{"line_number":96,"context_line":"range of host CPUs."},{"line_number":97,"context_line":""},{"line_number":98,"context_line":"In this flavor the CPU policy is not specified, it will take the default"},{"line_number":99,"context_line":"\u0027shared\u0027 CPU policy. The instance vCPU will float across host CPUs. With this"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_4c91e051","line":96,"range":{"start_line":93,"start_character":0,"end_line":96,"end_character":19},"updated":"2019-04-09 14:19:29.000000000","message":"I feel we needn\u0027t this paragraph, it sounds like the explain how the cpu_policy works.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":90,"context_line":"and the last 4 vCPUs are low priority, the rest of vCPUs are just normal"},{"line_number":91,"context_line":"vCPUs without an explicit priority."},{"line_number":92,"context_line":""},{"line_number":93,"context_line":"Since \u0027cpu_policy\u0027 is not explicitly specified, it will take the default"},{"line_number":94,"context_line":"\u0027share\u0027 CPU policy, in contrast with \u0027dedicated\u0027 CPU policy that vCPU will"},{"line_number":95,"context_line":"be assigned to a particular host CPU, each of the vCPU will float across a"},{"line_number":96,"context_line":"range of host 
CPUs."},{"line_number":97,"context_line":""},{"line_number":98,"context_line":"In this flavor the CPU policy is not specified, it will take the default"},{"line_number":99,"context_line":"\u0027shared\u0027 CPU policy. The instance vCPU will float across host CPUs. With this"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_43dc3ce0","line":96,"range":{"start_line":93,"start_character":0,"end_line":96,"end_character":19},"in_reply_to":"5fc1f717_3b0927dc","updated":"2019-04-10 06:25:29.000000000","message":"Just clarify. if the VM doesn\u0027t specify cpu_policy and numa topo. It will float its vcpus on all the host pcpus. For this case, we won\u0027t support to allow specify the vcpu priority.\n\nWe support the cpu_policy is \u0027shared\u0027 and the VM at least has one NUMA node. This is also the case you are talking about.\n\nAnd the logic you said it is the same with our proposal. I want to say it is very complex. I have PoC, we use the existed NUMATopology objs to track the usages of each range. 
And a little code tuning to enable we pinning the VCPUs to a range of priority PCPUs not all the PCPUs in the whole NUMA node.\n\nYou will see the detail in the data model change section and the implementation section.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":90,"context_line":"and the last 4 vCPUs are low priority, the rest of vCPUs are just normal"},{"line_number":91,"context_line":"vCPUs without an explicit priority."},{"line_number":92,"context_line":""},{"line_number":93,"context_line":"Since \u0027cpu_policy\u0027 is not explicitly specified, it will take the default"},{"line_number":94,"context_line":"\u0027share\u0027 CPU policy, in contrast with \u0027dedicated\u0027 CPU policy that vCPU will"},{"line_number":95,"context_line":"be assigned to a particular host CPU, each of the vCPU will float across a"},{"line_number":96,"context_line":"range of host CPUs."},{"line_number":97,"context_line":""},{"line_number":98,"context_line":"In this flavor the CPU policy is not specified, it will take the default"},{"line_number":99,"context_line":"\u0027shared\u0027 CPU policy. The instance vCPU will float across host CPUs. With this"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_3b0927dc","line":96,"range":{"start_line":93,"start_character":0,"end_line":96,"end_character":19},"in_reply_to":"5fc1f717_4c91e051","updated":"2019-04-09 17:32:32.000000000","message":"actully i was going to point this out in the flavor above. 
but since this is here.\n\nif you have hw:cpu_policy\u003dshare or if its not set then that means the vm if floating.\n\nin the case of the flavor above we have a single numa node explicitly so the guest will be partially pinned to float within a host numa node.\n\nsince the cpu frequencies are manged per host cpu (note not per process) if we have a vm with \nhw:cpus.HW_CPU_HIGH_PRIORITY\u003d0-3\nhw:cpus.HW_CPU_LOW_PRIORITY\u003d8-11\n\nwe would have to pin the first 4 guest vCPU to float over the high priortiy numa local subset on the host\nwe woudl have to ping the last 4 guest vCPU to float over the lower priort numa local subset on the host. \nand finally the mindel 4 cores would just be floating over the numa node cores that are listed in the vcpu_pin_set.\n\nthat seam unresonably complex.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":100,"context_line":"proposal, the prioritized vCPU will float across the host CPUs that have the"},{"line_number":101,"context_line":"same priority. 
In the example above, it requires the first 4 vCPUs to float"},{"line_number":102,"context_line":"across host CPUs that has a priority of ``HW_CPU_HIGH_PRIORITY``, and the last"},{"line_number":103,"context_line":"4 vCPUs float across host CPUs of ``HW_CPU_LOW_PRIORITY``, the remaining vCPUs"},{"line_number":104,"context_line":"are required to run across host CPUs that do not have an explicit CPU priority."},{"line_number":105,"context_line":"All of those pCPUs are in single NUMA node, since the flavor is asking the"},{"line_number":106,"context_line":"guest only has one NUMA node."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_4c9fa04f","line":103,"range":{"start_line":103,"start_character":30,"end_line":103,"end_character":31},"updated":"2019-04-09 14:19:29.000000000","message":"^ `that has a priority of ```","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":100,"context_line":"proposal, the prioritized vCPU will float across the host CPUs that have the"},{"line_number":101,"context_line":"same priority. 
In the example above, it requires the first 4 vCPUs to float"},{"line_number":102,"context_line":"across host CPUs that has a priority of ``HW_CPU_HIGH_PRIORITY``, and the last"},{"line_number":103,"context_line":"4 vCPUs float across host CPUs of ``HW_CPU_LOW_PRIORITY``, the remaining vCPUs"},{"line_number":104,"context_line":"are required to run across host CPUs that do not have an explicit CPU priority."},{"line_number":105,"context_line":"All of those pCPUs are in single NUMA node, since the flavor is asking the"},{"line_number":106,"context_line":"guest only has one NUMA node."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_52f964c5","line":103,"range":{"start_line":103,"start_character":30,"end_line":103,"end_character":31},"in_reply_to":"5fc1f717_4c9fa04f","updated":"2019-04-10 10:17:31.000000000","message":"Will refine.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":137,"context_line":"where the ``cpu_set_traits`` is a ``StrOpt`` type string and using a blank"},{"line_number":138,"context_line":"space to separate CPUs of different priority in it. Here is an example::"},{"line_number":139,"context_line":""},{"line_number":140,"context_line":"   cpu_set_traits\u003d\u0027HW_CPU_HIGH_PRIORITY:0-7,9 HW_CPU_LOW_PRIORITY:40-47^41\u0027"},{"line_number":141,"context_line":""},{"line_number":142,"context_line":"The operator can tune the CPU by Linux sysfs or any tools, and then fill the"},{"line_number":143,"context_line":"CPU priorities in the config ``cpu_set_traits``. 
Those logics are expected to"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_1ba22baf","line":140,"range":{"start_line":140,"start_character":3,"end_line":140,"end_character":17},"updated":"2019-04-09 17:32:32.000000000","message":"im not sure if this scheam will be reusable for other cpu traits. in general most of the time if an operator wants to advertise a trait for the cpu it would apply to all corese and requireing the mask could be combersum.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":137,"context_line":"where the ``cpu_set_traits`` is a ``StrOpt`` type string and using a blank"},{"line_number":138,"context_line":"space to separate CPUs of different priority in it. Here is an example::"},{"line_number":139,"context_line":""},{"line_number":140,"context_line":"   cpu_set_traits\u003d\u0027HW_CPU_HIGH_PRIORITY:0-7,9 HW_CPU_LOW_PRIORITY:40-47^41\u0027"},{"line_number":141,"context_line":""},{"line_number":142,"context_line":"The operator can tune the CPU by Linux sysfs or any tools, and then fill the"},{"line_number":143,"context_line":"CPU priorities in the config ``cpu_set_traits``. Those logics are expected to"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_23fa5042","line":140,"range":{"start_line":140,"start_character":3,"end_line":140,"end_character":17},"in_reply_to":"5fc1f717_1ba22baf","updated":"2019-04-10 06:25:29.000000000","message":"yes, that is true. Actually, in the beginning, I named it as cpu_set_priorities. But actually, we can set any traits in this configuration, so I decided to make the name more generic. 
Make some possibility in the future.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":139,"context_line":""},{"line_number":140,"context_line":"   cpu_set_traits\u003d\u0027HW_CPU_HIGH_PRIORITY:0-7,9 HW_CPU_LOW_PRIORITY:40-47^41\u0027"},{"line_number":141,"context_line":""},{"line_number":142,"context_line":"The operator can tune the CPU by Linux sysfs or any tools, and then fill the"},{"line_number":143,"context_line":"CPU priorities in the config ``cpu_set_traits``. Those logics are expected to"},{"line_number":144,"context_line":"be done by the deployment tools or scripts."},{"line_number":145,"context_line":""},{"line_number":146,"context_line":"Nova should track prioritized CPU resource based on the config"},{"line_number":147,"context_line":"``cpu_set_traits``."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_bb90b74f","line":144,"range":{"start_line":142,"start_character":0,"end_line":144,"end_character":43},"updated":"2019-04-09 17:32:32.000000000","message":"so nova is not going to do the turning or validate it in anyway?","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":139,"context_line":""},{"line_number":140,"context_line":"   cpu_set_traits\u003d\u0027HW_CPU_HIGH_PRIORITY:0-7,9 HW_CPU_LOW_PRIORITY:40-47^41\u0027"},{"line_number":141,"context_line":""},{"line_number":142,"context_line":"The operator can tune the CPU by Linux sysfs or any tools, and then fill the"},{"line_number":143,"context_line":"CPU priorities in the config ``cpu_set_traits``. 
Those logics are expected to"},{"line_number":144,"context_line":"be done by the deployment tools or scripts."},{"line_number":145,"context_line":""},{"line_number":146,"context_line":"Nova should track prioritized CPU resource based on the config"},{"line_number":147,"context_line":"``cpu_set_traits``."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_0399541d","line":144,"range":{"start_line":142,"start_character":0,"end_line":144,"end_character":43},"in_reply_to":"5fc1f717_bb90b74f","updated":"2019-04-10 06:25:29.000000000","message":"yes, I don\u0027t want the nova depending on any specific tool or hardware feature.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":146,"context_line":"Nova should track prioritized CPU resource based on the config"},{"line_number":147,"context_line":"``cpu_set_traits``."},{"line_number":148,"context_line":""},{"line_number":149,"context_line":"The reconfiguration is only allowed when there is no workload on the host."},{"line_number":150,"context_line":"Otherwise, the nova-compute restart operation will lead to a failure at the"},{"line_number":151,"context_line":"startup checking stage."},{"line_number":152,"context_line":""},{"line_number":153,"context_line":"Prioritized CPU resource tracking"},{"line_number":154,"context_line":"---------------------------------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_0cbe78ad","line":151,"range":{"start_line":149,"start_character":0,"end_line":151,"end_character":23},"updated":"2019-04-09 14:19:29.000000000","message":"We said this in the deployer impact section also.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean 
mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":146,"context_line":"Nova should track prioritized CPU resource based on the config"},{"line_number":147,"context_line":"``cpu_set_traits``."},{"line_number":148,"context_line":""},{"line_number":149,"context_line":"The reconfiguration is only allowed when there is no workload on the host."},{"line_number":150,"context_line":"Otherwise, the nova-compute restart operation will lead to a failure at the"},{"line_number":151,"context_line":"startup checking stage."},{"line_number":152,"context_line":""},{"line_number":153,"context_line":"Prioritized CPU resource tracking"},{"line_number":154,"context_line":"---------------------------------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_5ba99379","line":151,"range":{"start_line":149,"start_character":0,"end_line":151,"end_character":23},"in_reply_to":"5fc1f717_0cbe78ad","updated":"2019-04-09 17:32:32.000000000","message":"i get why you want to make this restriciton but that is a fairly major upgrade impact as it mean you cannot enable this feature after an inplace upgade.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":146,"context_line":"Nova should track prioritized CPU resource based on the config"},{"line_number":147,"context_line":"``cpu_set_traits``."},{"line_number":148,"context_line":""},{"line_number":149,"context_line":"The reconfiguration is only allowed when there is no workload on the host."},{"line_number":150,"context_line":"Otherwise, the nova-compute restart operation will lead to a failure at the"},{"line_number":151,"context_line":"startup checking 
stage."},{"line_number":152,"context_line":""},{"line_number":153,"context_line":"Prioritized CPU resource tracking"},{"line_number":154,"context_line":"---------------------------------"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_638938c8","line":151,"range":{"start_line":149,"start_character":0,"end_line":151,"end_character":23},"in_reply_to":"5fc1f717_5ba99379","updated":"2019-04-10 06:25:29.000000000","message":"ok, we can make it in upgrade impact section.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":154,"context_line":"---------------------------------"},{"line_number":155,"context_line":""},{"line_number":156,"context_line":"Nova should report those host prioritized CPU resources as different resource"},{"line_number":157,"context_line":"providers with a distinct specific trait. Using the legacy NUMA topology"},{"line_number":158,"context_line":"object in the Nova database to track prioritized CPU to implement the NUMA"},{"line_number":159,"context_line":"affinity."},{"line_number":160,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_9b229b51","line":157,"range":{"start_line":157,"start_character":52,"end_line":157,"end_character":58},"updated":"2019-04-09 17:32:32.000000000","message":"legacy implies its going away. it wont be removed in the future as it will be need for asignemnt/availablity tracking. 
however it wont be need for capacity.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":154,"context_line":"---------------------------------"},{"line_number":155,"context_line":""},{"line_number":156,"context_line":"Nova should report those host prioritized CPU resources as different resource"},{"line_number":157,"context_line":"providers with a distinct specific trait. Using the legacy NUMA topology"},{"line_number":158,"context_line":"object in the Nova database to track prioritized CPU to implement the NUMA"},{"line_number":159,"context_line":"affinity."},{"line_number":160,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_92426c67","line":157,"range":{"start_line":157,"start_character":52,"end_line":157,"end_character":58},"in_reply_to":"5fc1f717_9b229b51","updated":"2019-04-10 10:17:31.000000000","message":"It probably to be fine to remove this word.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":154,"context_line":"---------------------------------"},{"line_number":155,"context_line":""},{"line_number":156,"context_line":"Nova should report those host prioritized CPU resources as different resource"},{"line_number":157,"context_line":"providers with a distinct specific trait. 
Using the legacy NUMA topology"},{"line_number":158,"context_line":"object in the Nova database to track prioritized CPU to implement the NUMA"},{"line_number":159,"context_line":"affinity."},{"line_number":160,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_238fb0cd","line":157,"range":{"start_line":157,"start_character":52,"end_line":157,"end_character":58},"in_reply_to":"5fc1f717_9b229b51","updated":"2019-04-10 06:25:29.000000000","message":"yea, in the end, we probably just leave the part of those data model change about the priority cpu pinning tracking.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":165,"context_line":"  Config:"},{"line_number":166,"context_line":"      cpu_set_traits \u003d HW_CPU_HIGH_PRIORITY:0-7 HW_CPU_LOW_PRIORITY:40-47"},{"line_number":167,"context_line":""},{"line_number":168,"context_line":"  RP tree:"},{"line_number":169,"context_line":"  * ComputeNode RP"},{"line_number":170,"context_line":"      Inventory:"},{"line_number":171,"context_line":"        resource\u003dVCPU, total\u003d32, allocation_ratio\u003d8"},{"line_number":172,"context_line":"        resource\u003dMEMORY_GB, total\u003d1024"},{"line_number":173,"context_line":"        resource\u003dDISK_GB, total\u003d1024"},{"line_number":174,"context_line":"    * Child RP1"},{"line_number":175,"context_line":"        Inventories:"},{"line_number":176,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":177,"context_line":"        traits: HW_CPU_HIGH_PRIORITY"},{"line_number":178,"context_line":"    * Child RP2"},{"line_number":179,"context_line":"        Inventories:"},{"line_number":180,"context_line":"          resource \u003d VCPU, total \u003d 8, 
allocation_ratio \u003d 8"},{"line_number":181,"context_line":"        traits: HW_CPU_LOW_PRIORITY"},{"line_number":182,"context_line":""},{"line_number":183,"context_line":"The `NUMATopologyCell` object will be extended to track prioritized CPUs,"},{"line_number":184,"context_line":"which will be described in the `Data Model Impact` section."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_3b2fe7f6","line":181,"range":{"start_line":168,"start_character":2,"end_line":181,"end_character":35},"updated":"2019-04-09 17:32:32.000000000","message":"this has a large upgrade impact.\n\nexisting flavors that request more than 32 vCPUs in this case will not be able to schedule to this host.\n\nthat is because all resources in a single request group (numbered or unnumbered) must be from the same resource provider.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"19cf424285ad9f86778d415ce496f27e222291bb","unresolved":false,"context_lines":[{"line_number":165,"context_line":"  Config:"},{"line_number":166,"context_line":"      cpu_set_traits \u003d HW_CPU_HIGH_PRIORITY:0-7 HW_CPU_LOW_PRIORITY:40-47"},{"line_number":167,"context_line":""},{"line_number":168,"context_line":"  RP tree:"},{"line_number":169,"context_line":"  * ComputeNode RP"},{"line_number":170,"context_line":"      Inventory:"},{"line_number":171,"context_line":"        resource\u003dVCPU, total\u003d32, allocation_ratio\u003d8"},{"line_number":172,"context_line":"        resource\u003dMEMORY_GB, total\u003d1024"},{"line_number":173,"context_line":"        resource\u003dDISK_GB, total\u003d1024"},{"line_number":174,"context_line":"    * Child RP1"},{"line_number":175,"context_line":"        Inventories:"},{"line_number":176,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":177,"context_line":"        traits: 
HW_CPU_HIGH_PRIORITY"},{"line_number":178,"context_line":"    * Child RP2"},{"line_number":179,"context_line":"        Inventories:"},{"line_number":180,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":181,"context_line":"        traits: HW_CPU_LOW_PRIORITY"},{"line_number":182,"context_line":""},{"line_number":183,"context_line":"The `NUMATopologyCell` object will be extended to track prioritized CPUs,"},{"line_number":184,"context_line":"which will be described in the `Data Model Impact` section."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_cc1e145e","line":181,"range":{"start_line":168,"start_character":2,"end_line":181,"end_character":35},"in_reply_to":"5fc1f717_12ff5cab","updated":"2019-04-10 12:40:46.000000000","message":"good point. I should clarify the definition of the floating instance this spec is talking about.\n\nFor the instance without any NUMA topo, we won\u0027t support specifying the cpu priority.\n\nThe instance with a NUMA topo and shared cpu policy is the floating case we talk about in this spec.\n\nFor vcpu_pin_set and cpu_share_set, I would say we just follow the existing behavior. 
we keep floating on the cpus just like the current cpus which the instances with the shared cpu policy float on.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":165,"context_line":"  Config:"},{"line_number":166,"context_line":"      cpu_set_traits \u003d HW_CPU_HIGH_PRIORITY:0-7 HW_CPU_LOW_PRIORITY:40-47"},{"line_number":167,"context_line":""},{"line_number":168,"context_line":"  RP tree:"},{"line_number":169,"context_line":"  * ComputeNode RP"},{"line_number":170,"context_line":"      Inventory:"},{"line_number":171,"context_line":"        resource\u003dVCPU, total\u003d32, allocation_ratio\u003d8"},{"line_number":172,"context_line":"        resource\u003dMEMORY_GB, total\u003d1024"},{"line_number":173,"context_line":"        resource\u003dDISK_GB, total\u003d1024"},{"line_number":174,"context_line":"    * Child RP1"},{"line_number":175,"context_line":"        Inventories:"},{"line_number":176,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":177,"context_line":"        traits: HW_CPU_HIGH_PRIORITY"},{"line_number":178,"context_line":"    * Child RP2"},{"line_number":179,"context_line":"        Inventories:"},{"line_number":180,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":181,"context_line":"        traits: HW_CPU_LOW_PRIORITY"},{"line_number":182,"context_line":""},{"line_number":183,"context_line":"The `NUMATopologyCell` object will be extended to track prioritized CPUs,"},{"line_number":184,"context_line":"which will be described in the `Data Model Impact` 
section."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_4378bcbf","line":181,"range":{"start_line":168,"start_character":2,"end_line":181,"end_character":35},"in_reply_to":"5fc1f717_3b2fe7f6","updated":"2019-04-10 06:25:29.000000000","message":"I guess the operator will know he changed the config of this host with different priority cpus; it means he changed the inventory, so some flavors\u0027 requirements won\u0027t match this host anymore.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"ecdae54c0676c8205cf11e9f19ee8eac8b281d82","unresolved":false,"context_lines":[{"line_number":165,"context_line":"  Config:"},{"line_number":166,"context_line":"      cpu_set_traits \u003d HW_CPU_HIGH_PRIORITY:0-7 HW_CPU_LOW_PRIORITY:40-47"},{"line_number":167,"context_line":""},{"line_number":168,"context_line":"  RP tree:"},{"line_number":169,"context_line":"  * ComputeNode RP"},{"line_number":170,"context_line":"      Inventory:"},{"line_number":171,"context_line":"        resource\u003dVCPU, total\u003d32, allocation_ratio\u003d8"},{"line_number":172,"context_line":"        resource\u003dMEMORY_GB, total\u003d1024"},{"line_number":173,"context_line":"        resource\u003dDISK_GB, total\u003d1024"},{"line_number":174,"context_line":"    * Child RP1"},{"line_number":175,"context_line":"        Inventories:"},{"line_number":176,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":177,"context_line":"        traits: HW_CPU_HIGH_PRIORITY"},{"line_number":178,"context_line":"    * Child RP2"},{"line_number":179,"context_line":"        Inventories:"},{"line_number":180,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":181,"context_line":"        traits: 
HW_CPU_LOW_PRIORITY"},{"line_number":182,"context_line":""},{"line_number":183,"context_line":"The `NUMATopologyCell` object will be extended to track prioritized CPUs,"},{"line_number":184,"context_line":"which will be described in the `Data Model Impact` section."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_57ccf668","line":181,"range":{"start_line":168,"start_character":2,"end_line":181,"end_character":35},"in_reply_to":"5fc1f717_4378bcbf","updated":"2019-04-10 08:58:03.000000000","message":"I think I understood your question wrong here as well.\n\nIf the flavor is asking for 32 vcpus without a NUMA topo, just as the deployer impact says, I suggest separating this kind of workload into a different host aggregate. the operator won\u0027t want this VM\u0027s VCPUs floating on all the pCPUs on the host. It will affect the high priority vCPU performance.\n\nIf the flavor is asking for 32 vcpus with one NUMA node and not asking for any priority vcpus, we will translate it as \"?resources1\u003dVCPU:32,MEMORY_MB:....\", so it will be available on this host, since the compute node rp has enough vCPUs, and VCPU and MEMORY_MB are both in this rp.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"791d5b0089372fc37db6f291c515237022e65a3f","unresolved":false,"context_lines":[{"line_number":165,"context_line":"  Config:"},{"line_number":166,"context_line":"      cpu_set_traits \u003d HW_CPU_HIGH_PRIORITY:0-7 HW_CPU_LOW_PRIORITY:40-47"},{"line_number":167,"context_line":""},{"line_number":168,"context_line":"  RP tree:"},{"line_number":169,"context_line":"  * ComputeNode RP"},{"line_number":170,"context_line":"      Inventory:"},{"line_number":171,"context_line":"        resource\u003dVCPU, total\u003d32, allocation_ratio\u003d8"},{"line_number":172,"context_line":"        resource\u003dMEMORY_GB, 
total\u003d1024"},{"line_number":173,"context_line":"        resource\u003dDISK_GB, total\u003d1024"},{"line_number":174,"context_line":"    * Child RP1"},{"line_number":175,"context_line":"        Inventories:"},{"line_number":176,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":177,"context_line":"        traits: HW_CPU_HIGH_PRIORITY"},{"line_number":178,"context_line":"    * Child RP2"},{"line_number":179,"context_line":"        Inventories:"},{"line_number":180,"context_line":"          resource \u003d VCPU, total \u003d 8, allocation_ratio \u003d 8"},{"line_number":181,"context_line":"        traits: HW_CPU_LOW_PRIORITY"},{"line_number":182,"context_line":""},{"line_number":183,"context_line":"The `NUMATopologyCell` object will be extended to track prioritized CPUs,"},{"line_number":184,"context_line":"which will be described in the `Data Model Impact` section."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_12ff5cab","line":181,"range":{"start_line":168,"start_character":2,"end_line":181,"end_character":35},"in_reply_to":"5fc1f717_57ccf668","updated":"2019-04-10 09:39:16.000000000","message":"i replied a little later to this suggestion too but i think we need to redefine what a floating instance is.\n\nat a minimum i think a floating instance should float over the vcpu_pin_set or cpu_share_set.\n\nif we have high and low priority cpu sets then those should be excluded.\n\nsomewhere later in the spec around line 380 ish i suggest using forbidden traits to prevent shared guests with no priority request from being allocated from the priority RP.\n\nif we do that we don\u0027t need to have the operators partition their cloud for different types of floating guests.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean 
mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"617648928b76b662e4799470360c606d5ad6a6f2","unresolved":false,"context_lines":[{"line_number":187,"context_line":"change the structure of resource provider tree. There will be some vision"},{"line_number":188,"context_line":"based on those proposals."},{"line_number":189,"context_line":""},{"line_number":190,"context_line":"* `Proposes NUMA topology with RPs` [4]_: The child resource provider of"},{"line_number":191,"context_line":"  priority vCPUs\u0027s parent resource provider should be NUMA node resource"},{"line_number":192,"context_line":"  provider. The vCPU without priority will be in the NUMA node resource"},{"line_number":193,"context_line":"  provider just like the specification described. But there will be a gap to"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_feb5cded","line":190,"range":{"start_line":190,"start_character":3,"end_line":190,"end_character":34},"updated":"2019-04-09 17:39:00.000000000","message":"this might also be a dependency for this spec but not necessarily a hard dependency.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":187,"context_line":"change the structure of resource provider tree. There will be some vision"},{"line_number":188,"context_line":"based on those proposals."},{"line_number":189,"context_line":""},{"line_number":190,"context_line":"* `Proposes NUMA topology with RPs` [4]_: The child resource provider of"},{"line_number":191,"context_line":"  priority vCPUs\u0027s parent resource provider should be NUMA node resource"},{"line_number":192,"context_line":"  provider. 
The vCPU without priority will be in the NUMA node resource"},{"line_number":193,"context_line":"  provider just like the specification described. But there will be a gap to"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_a342c066","line":190,"range":{"start_line":190,"start_character":3,"end_line":190,"end_character":34},"in_reply_to":"5fc1f717_feb5cded","updated":"2019-04-10 06:25:29.000000000","message":"yea","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"617648928b76b662e4799470360c606d5ad6a6f2","unresolved":false,"context_lines":[{"line_number":194,"context_line":"  describe the affinity between the prioritized vCPU and NUMA node with"},{"line_number":195,"context_line":"  placement request [5]_."},{"line_number":196,"context_line":""},{"line_number":197,"context_line":"* `Standardize CPU resource tracking` [6]_: This pending specification is"},{"line_number":198,"context_line":"  proposing the changes to let the pCPUs and vCPUs of one compute node to be"},{"line_number":199,"context_line":"  tracked in different RPs. 
We would like to divide the pCPU RP and vCPU RP"},{"line_number":200,"context_line":"  into prioritized CPUs later if this feature is implemented."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_fedced45","line":197,"range":{"start_line":197,"start_character":3,"end_line":197,"end_character":43},"updated":"2019-04-09 17:39:00.000000000","message":"i think this needs to be listed as a dependency for this spec.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"4ef5deccd29778d9a9b93cba4e2e690a773b977a","unresolved":false,"context_lines":[{"line_number":194,"context_line":"  describe the affinity between the prioritized vCPU and NUMA node with"},{"line_number":195,"context_line":"  placement request [5]_."},{"line_number":196,"context_line":""},{"line_number":197,"context_line":"* `Standardize CPU resource tracking` [6]_: This pending specification is"},{"line_number":198,"context_line":"  proposing the changes to let the pCPUs and vCPUs of one compute node to be"},{"line_number":199,"context_line":"  tracked in different RPs. We would like to divide the pCPU RP and vCPU RP"},{"line_number":200,"context_line":"  into prioritized CPUs later if this feature is implemented."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_cf7fe6dc","line":197,"range":{"start_line":197,"start_character":3,"end_line":197,"end_character":43},"in_reply_to":"5fc1f717_b2276863","updated":"2019-04-10 13:55:51.000000000","message":"i think this one should be a hard dependency but the numa spec i don\u0027t think is. 
but that is really a scoping question.\n\nthere will certainly be some dependencies when it comes to the reshapes of the cpu resources and i think we need to manage that so that they don\u0027t conflict.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":194,"context_line":"  describe the affinity between the prioritized vCPU and NUMA node with"},{"line_number":195,"context_line":"  placement request [5]_."},{"line_number":196,"context_line":""},{"line_number":197,"context_line":"* `Standardize CPU resource tracking` [6]_: This pending specification is"},{"line_number":198,"context_line":"  proposing the 
changes to let the pCPUs and vCPUs of one compute node to be"},{"line_number":199,"context_line":"  tracked in different RPs. We would like to divide the pCPU RP and vCPU RP"},{"line_number":200,"context_line":"  into prioritized CPUs later if this feature is implemented."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_b2276863","line":197,"range":{"start_line":197,"start_character":3,"end_line":197,"end_character":43},"in_reply_to":"5fc1f717_fedced45","updated":"2019-04-10 10:17:31.000000000","message":"It is not a hard dependency, right?","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"617648928b76b662e4799470360c606d5ad6a6f2","unresolved":false,"context_lines":[{"line_number":199,"context_line":"  tracked in different RPs. We would like to divide the pCPU RP and vCPU RP"},{"line_number":200,"context_line":"  into prioritized CPUs later if this feature is implemented."},{"line_number":201,"context_line":""},{"line_number":202,"context_line":"Scheduler and Placement request translation from flavor"},{"line_number":203,"context_line":"-------------------------------------------------------"},{"line_number":204,"context_line":""},{"line_number":205,"context_line":"The allocation request for prioritized CPU will be reflected in placement"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_7e131d02","line":202,"range":{"start_line":202,"start_character":0,"end_line":202,"end_character":55},"updated":"2019-04-09 17:39:00.000000000","message":"by the way even if the host doesn\u0027t have instances currently you will still need to do a reshape of the resources from the compute node resource provider to the child resource providers.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex 
Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":199,"context_line":"  tracked in different RPs. We would like to divide the pCPU RP and vCPU RP"},{"line_number":200,"context_line":"  into prioritized CPUs later if this feature is implemented."},{"line_number":201,"context_line":""},{"line_number":202,"context_line":"Scheduler and Placement request translation from flavor"},{"line_number":203,"context_line":"-------------------------------------------------------"},{"line_number":204,"context_line":""},{"line_number":205,"context_line":"The allocation request for prioritized CPU will be reflected in placement"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_63605802","line":202,"range":{"start_line":202,"start_character":0,"end_line":202,"end_character":55},"in_reply_to":"5fc1f717_7e131d02","updated":"2019-04-10 06:25:29.000000000","message":"oh, right, we should mention this in the upgrade section.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":220,"context_line":"at least 4 free unprioritized CPUs and at least 512MB free memory, the"},{"line_number":221,"context_line":"placement request would be::"},{"line_number":222,"context_line":""},{"line_number":223,"context_line":"  GET /allocation_candidates?resources1\u003dVCPU:4,MEMORY_MB:512"},{"line_number":224,"context_line":"                          \u0026resource2\u003dVCPU:4\u0026required2\u003dHW_CPU_HIGH_PRIORITY"},{"line_number":225,"context_line":"                          
\u0026resource3\u003dVCPU:4\u0026required3\u003dHW_CPU_LOW_PRIORITY"},{"line_number":226,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_6c7e1ce4","line":223,"range":{"start_line":223,"start_character":29,"end_line":223,"end_character":60},"updated":"2019-04-09 14:19:29.000000000","message":"We should mention that we have to put the non-priority vCPU and memory into the same numbered request group. It ensures we get the non-priority vCPU by ensuring vCPU and Memory are coming from the same RP.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":220,"context_line":"at least 4 free unprioritized CPUs and at least 512MB free memory, the"},{"line_number":221,"context_line":"placement request would be::"},{"line_number":222,"context_line":""},{"line_number":223,"context_line":"  GET /allocation_candidates?resources1\u003dVCPU:4,MEMORY_MB:512"},{"line_number":224,"context_line":"                          \u0026resource2\u003dVCPU:4\u0026required2\u003dHW_CPU_HIGH_PRIORITY"},{"line_number":225,"context_line":"                          \u0026resource3\u003dVCPU:4\u0026required3\u003dHW_CPU_LOW_PRIORITY"},{"line_number":226,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_12ed1cef","line":223,"range":{"start_line":223,"start_character":29,"end_line":223,"end_character":60},"in_reply_to":"5fc1f717_6c7e1ce4","updated":"2019-04-10 10:17:31.000000000","message":"Got.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":220,"context_line":"at least 4 free 
unprioritized CPUs and at least 512MB free memory, the"},{"line_number":221,"context_line":"placement request would be::"},{"line_number":222,"context_line":""},{"line_number":223,"context_line":"  GET /allocation_candidates?resources1\u003dVCPU:4,MEMORY_MB:512"},{"line_number":224,"context_line":"                          \u0026resource2\u003dVCPU:4\u0026required2\u003dHW_CPU_HIGH_PRIORITY"},{"line_number":225,"context_line":"                          \u0026resource3\u003dVCPU:4\u0026required3\u003dHW_CPU_LOW_PRIORITY"},{"line_number":226,"context_line":""},{"line_number":227,"context_line":"For the instance requested in ``Example 2``, it is trying to build an instance"},{"line_number":228,"context_line":"with 2 NUMA nodes, first NUMA node has 4 high priority vCPUs, the second NUMA"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_9b04db4b","line":225,"range":{"start_line":223,"start_character":29,"end_line":225,"end_character":73},"updated":"2019-04-09 17:32:32.000000000","message":"you missed the group_policy.\n\nas per https://developer.openstack.org/api-ref/placement/?expanded\u003dlist-allocation-candidates-detail\n\n\"When more than one resourcesN query parameter is supplied, group_policy is required to indicate how the groups should interact. With group_policy\u003dnone, separate groupings - numbered or unnumbered - may or may not be satisfied by the same provider. 
With group_policy\u003disolate, numbered groups are guaranteed to be satisfied by different providers - though there may still be overlap with the unnumbered group.\"\n\nthe problem with group_policy is that it is global so using anything other than group_policy\u003dnone can break other features.\n\ngroup_policy\u003disolate might look like it makes sense for numa however if you used group_policy\u003disolate and the vm wanted two cinder volumes from the same sharing resource provider or two neutron ports with bandwidth requests then the vm would not boot.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":220,"context_line":"at least 4 free unprioritized CPUs and at least 512MB free memory, the"},{"line_number":221,"context_line":"placement request would be::"},{"line_number":222,"context_line":""},{"line_number":223,"context_line":"  GET /allocation_candidates?resources1\u003dVCPU:4,MEMORY_MB:512"},{"line_number":224,"context_line":"                          \u0026resource2\u003dVCPU:4\u0026required2\u003dHW_CPU_HIGH_PRIORITY"},{"line_number":225,"context_line":"                          \u0026resource3\u003dVCPU:4\u0026required3\u003dHW_CPU_LOW_PRIORITY"},{"line_number":226,"context_line":""},{"line_number":227,"context_line":"For the instance requested in ``Example 2``, it is trying to build an instance"},{"line_number":228,"context_line":"with 2 NUMA nodes, first NUMA node has 4 high priority vCPUs, the second NUMA"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_2356d02b","line":225,"range":{"start_line":223,"start_character":29,"end_line":225,"end_character":73},"in_reply_to":"5fc1f717_9b04db4b","updated":"2019-04-10 06:25:29.000000000","message":"yea, I should mention that. 
Although the value of group_policy here isn\u0027t important, both values work.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"9d901fe17f69d4eed75483b34f224390c2f59c5b","unresolved":false,"context_lines":[{"line_number":242,"context_line":"Requesting other resource with prioritized CPU"},{"line_number":243,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":244,"context_line":""},{"line_number":245,"context_line":"The prioritized CPU introduced in this proposal will not affect the way to"},{"line_number":246,"context_line":"request other resources, for example GPU, for an instance."},{"line_number":247,"context_line":""},{"line_number":248,"context_line":"Here is the example for building instance with VGPU along with prioritized"},{"line_number":249,"context_line":"vCPU. Given the instance flavor::"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_db6903f8","line":246,"range":{"start_line":245,"start_character":0,"end_line":246,"end_character":58},"updated":"2019-04-09 17:32:32.000000000","message":"no it will, if we start modeling cpus and ram under a numa node it will have a significant impact on the query.\n\nif you use anything other than group_policy\u003dnone it will also break other resources as it\u0027s a global setting.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"19cf424285ad9f86778d415ce496f27e222291bb","unresolved":false,"context_lines":[{"line_number":242,"context_line":"Requesting other resource with prioritized CPU"},{"line_number":243,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":244,"context_line":""},{"line_number":245,"context_line":"The prioritized CPU introduced in this proposal will not affect the way 
to"},{"line_number":246,"context_line":"request other resources, for example GPU, for an instance."},{"line_number":247,"context_line":""},{"line_number":248,"context_line":"Here is the example for building instance with VGPU along with prioritized"},{"line_number":249,"context_line":"vCPU. Given the instance flavor::"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_ac9e2851","line":246,"range":{"start_line":245,"start_character":0,"end_line":246,"end_character":58},"in_reply_to":"5fc1f717_52af24e2","updated":"2019-04-10 12:40:46.000000000","message":"thanks for helping me understand those interesting cases, good luck. this doesn\u0027t depend on \u0027group_policy\u003disolate\u0027, but yes, it will break soon when we make the placement request more complex; I see the importance of needing more complex affinity.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"791d5b0089372fc37db6f291c515237022e65a3f","unresolved":false,"context_lines":[{"line_number":242,"context_line":"Requesting other resource with prioritized CPU"},{"line_number":243,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":244,"context_line":""},{"line_number":245,"context_line":"The prioritized CPU introduced in this proposal will not affect the way to"},{"line_number":246,"context_line":"request other resources, for example GPU, for an instance."},{"line_number":247,"context_line":""},{"line_number":248,"context_line":"Here is the example for building instance with VGPU along with prioritized"},{"line_number":249,"context_line":"vCPU. 
Given the instance flavor::"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_52af24e2","line":246,"range":{"start_line":245,"start_character":0,"end_line":246,"end_character":58},"in_reply_to":"5fc1f717_57af16e2","updated":"2019-04-10 09:39:16.000000000","message":"am yes so there are two examples that break today if you use group_policy\u003disolate.\n\nfirst is i have 1 cinder backend and i therefore have 1 sharing resource provider. if you use group_policy\u003disolate you cannot boot a vm with two cinder volumes as each volume would be its own numbered request group and with the isolate policy both can\u0027t come from the same RP.\n\n\nthe second case is with bandwidth based scheduling. \nif you use group_policy\u003disolate you cannot have 2 network interfaces from the same RP that request bandwidth. that means\nif you are using bandwidth based scheduling with ovs for example you are limited to 1 port with a bandwidth request\n\nso without the ability to specify relationships between resource groups the global isolate policy is too aggressive and will break lots of use cases; as a result the only safe default is \ngroup_policy\u003dnone.\n\ni personally would like to see this syntax extended to something like this \n\nGET /allocation_candidates?resources1\u003dVCPU:4,MEMORY_MB:512\n                          \u0026resource2\u003dVCPU:4\u0026required2\u003dHW_CPU_HIGH_PRIORITY\n                          \u0026resource3\u003dVCPU:4\u0026required3\u003dHW_CPU_LOW_PRIORITY\n                          \u0026group_policy\u003dnone;isolate:2,3;\n\nso that would say the default group policy is none but the policy for resource2 and resource3 is isolate.\n\nthat is not needed in the example above because the traits will be sufficient but that would allow us to specify the policy in a backwards compatible way.\n\n\nthe new syntax would be \n\"group_policy\u003d\" \u003cglobal default none or isolate\u003e \";\" \u003cpolicy\u003e \":\" \u003ccomma separated list of group 
numbers\u003e \";\"\n\n\nso semi colons deplimts each of the policies and colon delimits the policy name form the list of groups its applies too.\nor explained as a regex\n\n\"group_policy\u003d(none|isolate)?(;)?(((none|isolate)):([0-9])(,[0-9])+(;)?)+\"\n\nits slightly sad that  ^ is clearer to me then writing it in english.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"4d435953eca8ab1a1f48714663623e45184b7869","unresolved":false,"context_lines":[{"line_number":242,"context_line":"Requesting other resource with prioritized CPU"},{"line_number":243,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":244,"context_line":""},{"line_number":245,"context_line":"The prioritized CPU introduced in this proposal will not affect the way to"},{"line_number":246,"context_line":"request other resources, for example GPU, for an instance."},{"line_number":247,"context_line":""},{"line_number":248,"context_line":"Here is the example for building instance with VGPU along with prioritized"},{"line_number":249,"context_line":"vCPU. Given the instance flavor::"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_57af16e2","line":246,"range":{"start_line":245,"start_character":0,"end_line":246,"end_character":58},"in_reply_to":"5fc1f717_6325b8c1","updated":"2019-04-10 08:36:49.000000000","message":"Sorry, I read the question again. I may not understand correctly. 
Could you give an example for which case it can be broken?","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"791d5b0089372fc37db6f291c515237022e65a3f","unresolved":false,"context_lines":[{"line_number":242,"context_line":"Requesting other resource with prioritized CPU"},{"line_number":243,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"},{"line_number":244,"context_line":""},{"line_number":245,"context_line":"The prioritized CPU introduced in this proposal will not affect the way to"},{"line_number":246,"context_line":"request other resources, for example GPU, for an instance."},{"line_number":247,"context_line":""},{"line_number":248,"context_line":"Here is the example for building instance with VGPU along with prioritized"},{"line_number":249,"context_line":"vCPU. Given the instance flavor::"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_17198e78","line":246,"range":{"start_line":245,"start_character":0,"end_line":246,"end_character":58},"in_reply_to":"5fc1f717_6325b8c1","updated":"2019-04-10 09:39:16.000000000","message":"yes that gap is one we have been talking about on an off over the last two cycle.  for me its the main blocker to doing numa in placement at this point as we can certenly model numa in placement now but the allocation_candatis api syntax does not allow us to be expressive enough to then claim resouces propperly as we cant express the relationships between groups properly. 
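The extended ``group_policy`` grammar sketched above is a review-time proposal, not part of the placement API. As a minimal sketch of how it could be parsed (the function name is hypothetical, and unlike the strict regex above it also accepts a clause naming a single group):

```python
import re

# Clause of the *proposed* extended syntax: "<policy>:<group>[,<group>...]"
_CLAUSE = re.compile(r'^(none|isolate):(\d+(?:,\d+)*)$')


def parse_group_policy(value):
    """Parse e.g. 'none;isolate:2,3' -> ('none', {2: 'isolate', 3: 'isolate'}).

    Returns the global policy and a mapping of numbered request group to
    its per-group policy override.
    """
    global_policy = 'none'  # proposed backwards-compatible default
    per_group = {}
    for part in filter(None, value.split(';')):
        if part in ('none', 'isolate'):
            global_policy = part  # bare policy word sets the global default
            continue
        m = _CLAUSE.match(part)
        if not m:
            raise ValueError('bad group_policy clause: %s' % part)
        policy, groups = m.groups()
        for g in groups.split(','):
            per_group[int(g)] = policy
    return global_policy, per_group
```

For example, ``parse_group_policy('none;isolate:2,3')`` returns ``('none', {2: 'isolate', 3: 'isolate'})``, while the legacy forms ``'none'`` and ``'isolate'`` parse unchanged.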
* Alex Xu (2019-04-10 06:25): Yes; Eric also pointed out that there is a gap
  in placement: we need a way to describe affinity between request groups.
  https://review.openstack.org/#/c/650476/

* sean mooney (2019-04-10 09:39): Yes, that gap is one we have been talking
  about on and off over the last two cycles. For me it is the main blocker to
  doing NUMA in placement at this point: we can certainly model NUMA in
  placement now, but the allocation_candidates API syntax is not expressive
  enough to then claim resources properly, because we cannot express the
  relationships between groups. I'll take a look at this spec :) thanks for
  the link.

Thread on lines 263-266 (the generated placement request)::

  GET /allocation_candidates?resources=VGPU:1
                            &resources1=VCPU:4,MEMORY_MB:512
                            &resource2=VCPU:4&required2=HW_CPU_HIGH_PRIORITY
                            &resource3=VCPU:4&required3=HW_CPU_LOW_PRIORITY

* sean mooney (2019-04-09 17:32): If we are modeling NUMA in placement this
  would have to change, as we would need 6 CPUs from each NUMA node (12 vCPUs
  / 2 NUMA nodes), so it would actually need to be written like this::

    GET /allocation_candidates?resources=VGPU:1
                              &resources1=VCPU:2,MEMORY_MB:256
                              &resources2=VCPU:2,MEMORY_MB:256
                              &resource2=VCPU:4&required2=HW_CPU_HIGH_PRIORITY
                              &resource3=VCPU:4&required3=HW_CPU_LOW_PRIORITY

  assuming we model NUMA in placement. This also ignores the fact that we can
  choose how many guest CPUs are attached to each virtual NUMA node, so the
  ``*_PRIORITY`` groups may have to be split. For example, with this flavor::

    flavor:
      vcpus=12
      memory_mb=512
      extra_specs:
        hw:numa_nodes=4
        hw:cpu_policy=dedicated
        hw:cpus.HW_CPU_HIGH_PRIORITY=0-3
        hw:cpus.HW_CPU_LOW_PRIORITY=6-9
        resources:VGPU=1

  we would have 4 NUMA nodes with 3 CPUs per NUMA node, so it would need to
  look something like this::

    GET /allocation_candidates?resources=VGPU:1
                              &resources1=VCPU:1,MEMORY_MB:128
                              &resources2=VCPU:1,MEMORY_MB:128
                              &resources1=VCPU:1,MEMORY_MB:128
                              &resources2=VCPU:1,MEMORY_MB:128
                              &resource3=VCPU:3&required3=HW_CPU_HIGH_PRIORITY
                              &resource4=VCPU:1&required4=HW_CPU_HIGH_PRIORITY
                              &resource5=VCPU:4&required5=HW_CPU_LOW_PRIORITY
                              &resource6=VCPU:4&required6=HW_CPU_LOW_PRIORITY

* Alex Xu (2019-04-10 06:25): For your second case, the request should be::

    GET /allocation_candidates?resources1=MEMORY:128
                              &resources2=VCPU:2,MEMORY:128
                              &resources3=MEMORY:128
                              &resources4=VCPU:2,MEMORY:128
                              &resources5=VCPU:3&required5=HW_CPU_HIGH_PRIORITY
                              &resources6=VCPU:1&required6=HW_CPU_HIGH_PRIORITY
                              &resources7=VCPU:3&required7=HW_CPU_LOW_PRIORITY
                              &resources8=VCPU:1&required8=HW_CPU_LOW_PRIORITY

  Request groups 1 and 5 are for NUMA0, 2 and 6 for NUMA1, 3 and 7 for NUMA2,
  and 4 and 8 for NUMA3. But there is no way to describe the affinity
  between, say, groups 1 and 5; that is the placement gap I mentioned. Do you
  want me to describe the details of the case where we have NUMA in
  placement? I didn't make it the default proposal since the spec for NUMA in
  placement hasn't merged yet, although I feel that spec will end up with
  higher priority in the end.

* sean mooney (2019-04-10 09:39): Sorry, yes, you are correct; I messed that
  up when I was copy-pasting it and should have just written it by hand :)

  The point I wanted to illustrate with the second example is that this will
  in fact interact with other resource requests: if we add NUMA or hugepages
  to placement, the code that generates the request groups will need to be
  NUMA/hugepage-aware. A proposal I have mentioned to a number of people is
  to have a set of transformers, possibly implemented as prefilters, one per
  resource class, each taking a placement request as input and returning a
  new placement request as output, so that we can keep this code modular by
  chaining the transformers. I think that could help with this complexity.
  Perhaps prefilters are the wrong place in some cases, as it might be
  virt-driver specific, but in any case I think we will need a modular
  approach to building placement requests to deal with the complexity
  involved.
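The per-NUMA split in Alex's corrected request can be sketched in code. This is purely illustrative, not nova code: ``build_request_groups`` is a hypothetical helper, it assumes vCPUs and memory divide evenly across guest NUMA nodes, that guest CPU ids are assigned to nodes in order, and it uses the standard ``MEMORY_MB`` resource class.

```python
def build_request_groups(vcpus, memory_mb, numa_nodes, priority_cpusets):
    """Sketch the numbered request groups for a flavor with prioritized CPUs.

    priority_cpusets maps a trait name to the set of guest CPU ids carrying
    it, e.g. {'HW_CPU_HIGH_PRIORITY': {0, 1, 2, 3}}.
    """
    per_node_cpus = vcpus // numa_nodes
    per_node_mem = memory_mb // numa_nodes
    prioritized = set()
    for cpus in priority_cpusets.values():
        prioritized |= cpus

    plain = []  # count of untraited vCPUs per NUMA node
    prio = []   # (trait, count) pairs per NUMA node, in node order
    for node in range(numa_nodes):
        node_cpus = set(range(node * per_node_cpus, (node + 1) * per_node_cpus))
        plain.append(len(node_cpus - prioritized))
        for trait, cpus in sorted(priority_cpusets.items()):
            if node_cpus & cpus:
                prio.append((trait, len(node_cpus & cpus)))

    groups, n = [], 1
    for count in plain:  # one memory (plus plain vCPU) group per node
        res = ('VCPU:%d,MEMORY_MB:%d' % (count, per_node_mem)
               if count else 'MEMORY_MB:%d' % per_node_mem)
        groups.append('resources%d=%s' % (n, res))
        n += 1
    for trait, count in prio:  # one trait-qualified group per (node, trait)
        groups.append('resources%d=VCPU:%d&required%d=%s' % (n, count, n, trait))
        n += 1
    return groups
```

Run against the flavor in the thread (12 vCPUs, 512 MB, 4 NUMA nodes, high-priority CPUs 0-3, low-priority CPUs 6-9), this reproduces Alex's eight groups: groups 1-4 carry memory and plain vCPUs per node, groups 5-8 carry the trait-qualified vCPUs. It also makes sean's point concrete: the group-building code has to know the NUMA layout.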
Thread on line 295 (Alternatives, the extra specs paragraph):

* sean mooney (2019-04-09 17:32): delete

* Huaqiang (2019-04-12 12:06): Got it; it should be "specification" instead
  of "extra specs".

Thread on line 310 (the ``cpu_priorities`` field of ``NUMACell``)::

  @base.NovaObjectRegistry.register
  class NUMACell(base.NovaObject):
      fields = {
          'cpu_priorities': ListOfObjectsField('CPUPriority', nullable=True)
          ....
          }

* Alex Xu (2019-04-09 14:19): s/cpu_priorities/cpu_traits/? Since we name the
  config option ``cpuset_traits``.

* Huaqiang (2019-04-12 12:06): My understanding is that "priority" in the
  following lines will all be replaced, right?

Thread on line 317 (``class CPUPriority(base.NovaObject)``):

* sean mooney (2019-04-10 09:39): I think this is missing the resource
  provider UUID. We will want a simple way to associate RPs in the allocation
  candidates/summaries with the ``CPUPriority`` object, and I think we should
  use the RP UUID for that correlation.
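Folding together the two review suggestions (the s/priority/trait/ rename and sean's resource provider UUID), the object's shape might look like the following plain-Python sketch. It is not nova code: in nova this would be a versioned ``NovaObject`` with o.vo fields, and every name here is provisional.

```python
from dataclasses import dataclass


# Provisional sketch of the per-trait CPU group object under review.
# Assumes the s/priority/trait/ rename and adds the rp_uuid field suggested
# for correlating the group with its placement resource provider.
@dataclass(frozen=True)
class CPUTraitGroup:
    trait: str           # e.g. 'HW_CPU_HIGH_PRIORITY'
    cpuset: frozenset    # host CPU ids carrying this trait
    cpu_usage: int = 0   # usage counter for the 'shared' cpu_policy case
    rp_uuid: str = ''    # resource provider UUID (sean's suggestion)
```

A host ``NUMACell`` would then hold a list of these, one per trait configured on that cell.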
* Alex Xu (2019-04-10 12:40): This is also for the case where NUMA is not in
  placement yet. This object will be a field of the host's ``NUMACell``
  object, so we will know which NUMA cell we are in. When NUMA is in
  placement we will probably need that in the host ``NUMACell`` object too,
  but I think that will be in the scope of the NUMA-in-placement spec.

Thread on line 319 (the ``priority`` field):

* Alex Xu (2019-04-09 14:19): Ditto, s/priority/trait/.

* Huaqiang (2019-04-12 12:06): Will be replaced.

Thread on line 321 (the ``cpu_usage`` field):

* sean mooney (2019-04-09 17:32): What kind of usage is this tracking?
* Alex Xu (2019-04-10 06:25): This is for the "shared" cpu_policy case: we
  need to count the CPU usage and apply the allocation ratio. We currently
  track the usage of CPUs in the host NUMA cell object as well.

* sean mooney (2019-04-10 09:39): Would that usage not be tracked by
  placement? I'm not sure why it needs to be in the host NUMA cell info. I'm
  not saying there is no reason to have it; it's just not clear whether this
  is actually useful or where it will be used.

* Alex Xu (2019-04-10 12:40): This is used for the case where NUMA is not in
  placement yet. Placement only knows that the host has enough vCPUs of a
  given priority; it still doesn't know whether a specific NUMA node has
  enough priority vCPUs, so we still need the existing NUMA topology affinity
  filter for that. As your earlier comment said, once we have NUMA in
  placement we won't need this field anymore.

* sean mooney (2019-04-10 13:55): Ah, OK, so this may not be needed if the
  other specs land, but it is needed without them. Got it.

Thread on lines 345-349 (REST API impact: the new extra spec
``hw:cpus.[Traits] = 'cpuset_string'`` and "There is no microversion
required."):

* sean mooney (2019-04-09 17:32): We do not consider extra specs to be API
  changes.

* Huaqiang (2019-04-10 10:17): I'll remove it.

Thread on line 369 (Performance Impact: "None"):

* sean mooney (2019-04-09 17:32): There will be significant additional work
  in the NUMATopology filter and in the virt driver to calculate the
  appropriate XML; effectively, all instances with priorities set will be
  partially pinned. There will also be significant additional work for live
  migration. It can reuse most of the work in the NUMA live migration spec,
  but more data will need to be passed from the destination to the source
  node.
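For reference, the ``cpuset_string`` value in the proposed extra spec is the usual range-list form (e.g. ``'0-3,6-9'``). A minimal standalone sketch of the expected parsing, assuming only comma-separated ids and inclusive ranges (nova's own cpuset parsing in ``nova.virt.hardware`` is richer, e.g. it also supports exclusions, which are omitted here):

```python
def parse_cpuset(spec):
    """Parse a cpuset string like '0-3,6-9' into a set of host CPU ids."""
    cpus = set()
    for chunk in spec.split(','):
        chunk = chunk.strip()
        if '-' in chunk:  # inclusive range, e.g. '0-3'
            start, end = chunk.split('-', 1)
            cpus.update(range(int(start), int(end) + 1))
        elif chunk:       # single CPU id, e.g. '5'
            cpus.add(int(chunk))
    return cpus
```

So ``hw:cpus.HW_CPU_HIGH_PRIORITY='0-3'`` from the flavor examples above would tag guest CPUs 0 through 3 with the high-priority trait.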
* Alex Xu (2019-04-10 06:25): Yeah, I will add something about that, but the
  performance should be OK.

* sean mooney (2019-04-10 09:39): I think the performance will be OK, but
  there will be an impact. The performance of booting VMs with CPU pinning is
  acceptable, and this will be no worse.

Thread on line 374 (Other deployer impact: "Separate hosts for instances for
'shared' and 'dedicated' vCPUs"):

* sean mooney (2019-04-09 17:39): You already have to do that today, but we
  are hoping to remove that requirement this cycle.

* Huaqiang (2019-04-10 10:17): Got it, but I think I wrote the wrong title
  for the following paragraphs. What the next lines are trying to say is that
  a VM from a "shared & prioritized CPU" flavor and a VM from a "shared &
  non-prioritized CPU" flavor should be kept separate and not deployed on the
  same host. The vCPUs of a "shared & non-prioritized CPU" flavor VM float
  over all host CPUs, so they would "pollute" the host CPUs tagged with a
  priority; once a prioritized CPU is processing a prioritized workload, it
  should not be "polluted" by other workloads.

Thread on lines 380-388 (the two flavor examples, e.g. flavor 1 for vCPUs
floating across all host CPUs)::

  flavor:
    vcpus=12
    memory_mb=512
    extra_specs:
      hw:cpu_policy=shared

* sean mooney (2019-04-10 09:39): you know if we proceed with this the
placement request corresponding to this should proably have two forbiden traits\n\nGET /allocation_candidates?\u0026resources1\u003dVCPU:12,MEMORY_MB:512\n\u0026forbiden1\u003dHW_CPU_HIGH_PRIORITY\u0026forbiden1\u003dHW_CPU_LOW_PRIORITY","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"19cf424285ad9f86778d415ce496f27e222291bb","unresolved":false,"context_lines":[{"line_number":379,"context_line":""},{"line_number":380,"context_line":"flavor 1: for vCPU float across all host CPUs::"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    flavor:"},{"line_number":383,"context_line":"      vcpus\u003d12"},{"line_number":384,"context_line":"      memory_mb\u003d512"},{"line_number":385,"context_line":"      extra_specs:"},{"line_number":386,"context_line":"        hw:cpu_policy\u003dshared"},{"line_number":387,"context_line":""},{"line_number":388,"context_line":"flavor 2: for vCPU float across a list of prioritized host CPUs::"},{"line_number":389,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_cc0b74d2","line":386,"range":{"start_line":382,"start_character":4,"end_line":386,"end_character":28},"in_reply_to":"5fc1f717_97801ef9","updated":"2019-04-10 12:40:46.000000000","message":"yes, but If we begin to allow more than the traits just for high and low priority, and allow the custom traits, that will be a problem.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"4ef5deccd29778d9a9b93cba4e2e690a773b977a","unresolved":false,"context_lines":[{"line_number":379,"context_line":""},{"line_number":380,"context_line":"flavor 1: for vCPU float across all host CPUs::"},{"line_number":381,"context_line":""},{"line_number":382,"context_line":"    
flavor:"},{"line_number":383,"context_line":"      vcpus\u003d12"},{"line_number":384,"context_line":"      memory_mb\u003d512"},{"line_number":385,"context_line":"      extra_specs:"},{"line_number":386,"context_line":"        hw:cpu_policy\u003dshared"},{"line_number":387,"context_line":""},{"line_number":388,"context_line":"flavor 2: for vCPU float across a list of prioritized host CPUs::"},{"line_number":389,"context_line":""}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_4260adcf","line":386,"range":{"start_line":382,"start_character":4,"end_line":386,"end_character":28},"in_reply_to":"5fc1f717_cc0b74d2","updated":"2019-04-10 13:55:51.000000000","message":"yes that is true so the question really is shoudl we allow custom tratits or a finite set.\n\ni dont know the answer to that.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"791d5b0089372fc37db6f291c515237022e65a3f","unresolved":false,"context_lines":[{"line_number":399,"context_line":"will expect the high priority pCPU only be allocated to the high priority"},{"line_number":400,"context_line":"workload."},{"line_number":401,"context_line":""},{"line_number":402,"context_line":"So we should suggest the deployer separates these two kinds of instances"},{"line_number":403,"context_line":"into different host groups through forbidden trait [7]_  or host aggregates."},{"line_number":404,"context_line":""},{"line_number":405,"context_line":"Do not alert CPU priority on compute node that having instance"},{"line_number":406,"context_line":"~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_57f956c5","line":403,"range":{"start_line":402,"start_character":0,"end_line":403,"end_character":76},"updated":"2019-04-10 09:39:16.000000000","message":"this directly goes againt on of the 
usecause of tracking pcpus in placement. i.e. the abilty to have shared vms and pinned vms on the same host. one porposal i need to write up is\ngoing forward i want all vms with\nhw:cpu_policy\u003dshared too float over the cores in the vcpu_pin_set or possible cpu_share_set depending on the cpu standardisation spec.\n\nif we implement this spec as well then we can tweek that behavior so that they wont float over the high or low priorty pools.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"6b75cdcb82fc0737913caad465f2c26e60b91911","unresolved":false,"context_lines":[{"line_number":426,"context_line":"Bind prioritized vCPU to corresponding host CPUs"},{"line_number":427,"context_line":"-------------------------------------------------"},{"line_number":428,"context_line":""},{"line_number":429,"context_line":"If instance CPU  policy is \u0027shared\u0027, each vCPU will float across"},{"line_number":430,"context_line":"all host CPUs that has the same CPU priority. 
If the instance CPU"},{"line_number":431,"context_line":"policy is \u0027dedicated\u0027 then each vCPU will be assigned to a particular host"},{"line_number":432,"context_line":"CPU, in the way of 1:1 pinning, that has the same CPU priority."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_8c14a84e","line":429,"range":{"start_line":429,"start_character":15,"end_line":429,"end_character":17},"updated":"2019-04-09 14:19:29.000000000","message":"nit, extra blank space","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":30209,"name":"Huaqiang","email":"huaqiang.wang@intel.com","username":"Huaqiang.Wang"},"change_message_id":"bd347e8bbe11ef80b513f620bb5eaed54386d0f2","unresolved":false,"context_lines":[{"line_number":426,"context_line":"Bind prioritized vCPU to corresponding host CPUs"},{"line_number":427,"context_line":"-------------------------------------------------"},{"line_number":428,"context_line":""},{"line_number":429,"context_line":"If instance CPU  policy is \u0027shared\u0027, each vCPU will float across"},{"line_number":430,"context_line":"all host CPUs that has the same CPU priority. 
If the instance CPU"},{"line_number":431,"context_line":"policy is \u0027dedicated\u0027 then each vCPU will be assigned to a particular host"},{"line_number":432,"context_line":"CPU, in the way of 1:1 pinning, that has the same CPU priority."}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_f2665029","line":429,"range":{"start_line":429,"start_character":15,"end_line":429,"end_character":17},"in_reply_to":"5fc1f717_8c14a84e","updated":"2019-04-10 10:17:31.000000000","message":"Got.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"3923624059445dd2e16bf0c000c23a1edd7f9bef","unresolved":false,"context_lines":[{"line_number":500,"context_line":"Testing"},{"line_number":501,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":502,"context_line":""},{"line_number":503,"context_line":"The unittests and functional tests will be required."},{"line_number":504,"context_line":""},{"line_number":505,"context_line":"Documentation Impact"},{"line_number":506,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_fe34ed4a","line":503,"range":{"start_line":503,"start_character":0,"end_line":503,"end_character":52},"updated":"2019-04-09 17:45:05.000000000","message":"i think we need thirdpart or frist part ci too.\n\nwe might be able to get addiqute test coverage with fuctional test but this is perhaps the most complex part of novas code base and there are defietly edgecase that are not coverd by this spec yet that make me concerned.\n\nfor example how doe this interact with hw:cpu_theads_policy\u003dshare|isolate? 
hw:emulator_threads\u003dshare|isolate?\nhw:realtime_mask\u003d1-8^5?\nhw:numa_cpu.0\u003d3 hw:numa_cpu.1\u003d1\n\nthere are a lot of things the effect assigiment of host cpus to guest vcpus that i am not sure have been fully tought about and are missing form the sepc.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"4ef5deccd29778d9a9b93cba4e2e690a773b977a","unresolved":false,"context_lines":[{"line_number":500,"context_line":"Testing"},{"line_number":501,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":502,"context_line":""},{"line_number":503,"context_line":"The unittests and functional tests will be required."},{"line_number":504,"context_line":""},{"line_number":505,"context_line":"Documentation Impact"},{"line_number":506,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_2251e10e","line":503,"range":{"start_line":503,"start_character":0,"end_line":503,"end_character":52},"in_reply_to":"5fc1f717_0cd41c1f","updated":"2019-04-10 13:55:51.000000000","message":"today we dont have upstream tempest ci that validates pinning today. we have functional test that check the xml.\n\nin the intel nfv ci there was a seperate tempest plug that sshed into the host and chneck the xml\n\nthe intel nfv ci nolonger runs on nova and only runs on neutron to do ovs-dpdk testign currently\n\nredhat has some downstream testing where we are enableding testing using a whitebox tempest plugin.\n\nwe (artom) hwas started the porcess of upstreaming that\nhere https://github.com/openstack/whitebox-tempest-plugin/\n\nit currently does not work out of the box with devstack but it can be made to work with devstack. 
it does some slightly hacking things like ssh in to the host and restart nova compute to force some configurations.\n\nit has some pinning test and we have more that will be added later \n\nhttps://github.com/openstack/whitebox-tempest-plugin/blob/master/whitebox_tempest_plugin/api/compute/test_cpu_pinning.py\n\nthis is a general topic for the ptg but im hoping we can improve this situation in train.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"bf048c7cf4852359387a7a0c336ddb36ab823a0e","unresolved":false,"context_lines":[{"line_number":500,"context_line":"Testing"},{"line_number":501,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":502,"context_line":""},{"line_number":503,"context_line":"The unittests and functional tests will be required."},{"line_number":504,"context_line":""},{"line_number":505,"context_line":"Documentation Impact"},{"line_number":506,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"3fce034c_ccbb440c","line":503,"range":{"start_line":503,"start_character":0,"end_line":503,"end_character":52},"in_reply_to":"5fc1f717_2251e10e","updated":"2019-04-11 09:35:14.000000000","message":"Thanks for explain all of those for me!","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":11604,"name":"sean mooney","email":"smooney@redhat.com","username":"sean-k-mooney"},"change_message_id":"791d5b0089372fc37db6f291c515237022e65a3f","unresolved":false,"context_lines":[{"line_number":500,"context_line":"Testing"},{"line_number":501,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":502,"context_line":""},{"line_number":503,"context_line":"The unittests and functional tests will be 
required."},{"line_number":504,"context_line":""},{"line_number":505,"context_line":"Documentation Impact"},{"line_number":506,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_57fab674","line":503,"range":{"start_line":503,"start_character":0,"end_line":503,"end_character":52},"in_reply_to":"5fc1f717_49b0db40","updated":"2019-04-10 09:39:16.000000000","message":"we can\u0027t use first party tempest test as this feature needs nested virt. you cannot do any form of per core cpu pinning with qemu unless you use kvm or the mttcg backend. we dont currently support mttcg so therefor we need nested virt to be able to use kvm.\n\nmaybe i should just go implement mttcg support.\nit should actually be pretty minimal to do and then we can test this in the upstream gate assuming we have a new enough qemu available.\n\ni think all of the more advanced flavor extra specs can be made work with this too but my main concern will be proving to ourselves that the code does what we think it does.\n\nin principal there is no reason we can get addiquet coverage with unit and functional test so this is not a blocker for me.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"19cf424285ad9f86778d415ce496f27e222291bb","unresolved":false,"context_lines":[{"line_number":500,"context_line":"Testing"},{"line_number":501,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":502,"context_line":""},{"line_number":503,"context_line":"The unittests and functional tests will be required."},{"line_number":504,"context_line":""},{"line_number":505,"context_line":"Documentation 
Impact"},{"line_number":506,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_0cd41c1f","line":503,"range":{"start_line":503,"start_character":0,"end_line":503,"end_character":52},"in_reply_to":"5fc1f717_57fab674","updated":"2019-04-10 12:40:46.000000000","message":"thanks, I probably see the hard point. one question, how can we check the pining is right in the CI? checking the libvirt xml?","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"},{"author":{"_account_id":5754,"name":"Alex Xu","email":"hejie.xu@intel.com","username":"xuhj"},"change_message_id":"5df2e1c380cd0043b64510730d20135ee4eb2d88","unresolved":false,"context_lines":[{"line_number":500,"context_line":"Testing"},{"line_number":501,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d"},{"line_number":502,"context_line":""},{"line_number":503,"context_line":"The unittests and functional tests will be required."},{"line_number":504,"context_line":""},{"line_number":505,"context_line":"Documentation Impact"},{"line_number":506,"context_line":"\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d\u003d"}],"source_content_type":"text/x-rst","patch_set":3,"id":"5fc1f717_49b0db40","line":503,"range":{"start_line":503,"start_character":0,"end_line":503,"end_character":52},"in_reply_to":"5fc1f717_fe34ed4a","updated":"2019-04-10 06:25:29.000000000","message":"For the CI, I\u0027m thinking that we can just use first part CI, since this feature doesn\u0027t depend on any specific hardware feature. \n\nFor the cpu_threads_policy and emulator_threads, I did think about them, but forget to write it down. \n\nFor the cpu_threads_policy, there needn\u0027t something special, it will work as expected. When using the shared cpu policy, it is just the way we count the available cpus. 
When using the dedicated policy, we just not allow to pin to the slibing cpu. Those should co-existed with existed NUMA affinity filter logic. I will add design about them.\n\nFor the emulator threads policy, I plan to follow the current logic, doesn\u0027t change anything. For shared policy, it should be ok, it\u0027s just floating on the CONF.compute.cpu_shared_set. For the isolated policy, I want to just pick one with the current logic. So it\u0027s maybe the high priority or low priority cpu, it depends on the vcpu priority in the instance numa node. For the next step, I want to enable to specify the cpu priority for emulator threads. Just doesn\u0027t want to increase the complexity for this spec. But I can do it in this spec if we feel this logic is strange.\n\nFor the realtime. yes, I forget it. But after check the code, I think we needn\u0027t any special thing for it. We just make it using realtime schedule policy for specific vcpus, and the user can free to set those vcpus priority.\n\n\nFor hw:numa_cpu.0\u003d3 hw:numa_cpu.1\u003d1, I don\u0027t there is different. let us, the hw:cpu.HW_CPU_HIGH_PRIORITY\u003d2,3. Then we can know the cpu \u00270,1,2\u0027 in NUMA0 and cpu \u00273\u0027 in NUMA1, and then \u00272\u0027 in NUMA0 and \u00273\u0027 in NUMA1 are high priority\n\nWe will clarify them in the next version.","commit_id":"abd2ba916dfe93e61d24d7f652fcf99db2223e11"}]}
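Alex's question about verifying pinning by checking the libvirt XML can be sketched as follows. This is only an illustration of the kind of check a whitebox test performs: it parses a domain XML (as returned by ``virsh dumpxml <instance>``) and maps each guest vCPU to its host cpuset. The XML snippet and cpuset values below are invented for the example, not taken from the spec.

```python
# Illustrative sketch: verify vCPU pinning from libvirt domain XML.
# The DOMAIN_XML below is a made-up example of the <cputune> section
# that "virsh dumpxml" would return for a pinned instance.
import xml.etree.ElementTree as ET

DOMAIN_XML = """
<domain type='kvm'>
  <cputune>
    <vcpupin vcpu='0' cpuset='2'/>
    <vcpupin vcpu='1' cpuset='3'/>
  </cputune>
</domain>
"""

def vcpu_pinning(domain_xml):
    """Map each guest vCPU number to the host cpuset it is pinned to."""
    root = ET.fromstring(domain_xml)
    return {
        int(pin.get("vcpu")): pin.get("cpuset")
        for pin in root.findall("./cputune/vcpupin")
    }

pinning = vcpu_pinning(DOMAIN_XML)
# For a "dedicated" guest we expect strict 1:1 pinning: each vCPU maps
# to a single host CPU, never a range ("2-5") or list ("2,3").
assert all("," not in c and "-" not in c for c in pinning.values())
```

A real whitebox test would additionally ssh into the compute host to fetch the XML and cross-check the cpusets against the host's priority pools, as described in the thread.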

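Sean's forbidden-trait placement sketch in the thread uses an informal spelling; in the actual placement API, forbidden traits are expressed by prefixing ``!`` inside the ``required`` query parameter (supported from microversion 1.22). A minimal sketch of building such a query string, assuming the ``HW_CPU_HIGH_PRIORITY`` and ``HW_CPU_LOW_PRIORITY`` traits proposed in this spec exist:

```python
# Hedged sketch: build the GET /allocation_candidates query discussed in
# the thread, using the "!TRAIT" forbidden-trait syntax of the placement
# API (microversion >= 1.22). The trait names are the ones proposed in
# this spec, not existing os-traits constants.
from urllib.parse import urlencode

def allocation_candidates_query(resources, forbidden_traits):
    """Return the query string for a GET /allocation_candidates request."""
    params = {
        # Resource classes and amounts, e.g. VCPU:12,MEMORY_MB:512.
        "resources": ",".join(f"{rc}:{amt}" for rc, amt in resources.items()),
        # A leading "!" marks a trait as forbidden rather than required.
        "required": ",".join(f"!{t}" for t in forbidden_traits),
    }
    return urlencode(params, safe=",:!")

query = allocation_candidates_query(
    {"VCPU": 12, "MEMORY_MB": 512},
    ["HW_CPU_HIGH_PRIORITY", "HW_CPU_LOW_PRIORITY"],
)
# query == "resources=VCPU:12,MEMORY_MB:512"
#          "&required=!HW_CPU_HIGH_PRIORITY,!HW_CPU_LOW_PRIORITY"
```

Sean's original sketch used the granular ``resources1``/``forbidden1`` suffixed form; the unsuffixed form above is the simpler equivalent for a single request group.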