File: nova/privsep/libvirt.py

Patch set 3, line 74 (unresolved). Context:

    71      :param instance_domain: Libvirt domain ID for the VM to which the
    72                              disk is attached.
    73      """
    74      processutils.execute('virsh', 'blockpull', '--domain', instance_domain,
    75                           '--path', disk_img_path, '--wait')

Doug Szumski (DougSzumski), 2025-02-27:

This is non-trivial, despite the backing file being tiny (as it is for ephemeral volumes). Testing has shown that with a 2 TB ephemeral disk, blockpull can take ~20 minutes to complete. The time required seems to scale roughly linearly with the disk size. This was on solid-state storage on a fast hypervisor.

An alternative could be to trigger it via `nova-manage`.

File: nova/virt/libvirt/driver.py

Patch set 2, line 11701 (unresolved). Context:

    11698              # badly.
    11699              # This also fixes the issue of when the backing file
    11700              # already exists at the destination, as it will be equally
    11701              # wrong in that case too.
    11702              if (info['backing_file'] and
    11703                      info['backing_file'].startswith('ephemeral') and
    11704                          remove_ephemeral_backing_files):

Dan Smith (danms), 2025-02-07:

I must be missing how you can go from running with a backing file to running without one and not have a problem. If all the blocks have been written to, then I get that it would go unnoticed, but if they have not, then this will introduce corruption as well, no? Also (and maybe I just need to read further down), if the backing file is expected to be present and isn't, then qemu is going to complain, right?

All this is to say that when we discussed some related issues recently, several of us came to the conclusion that we should maybe stop basing the ephemerals on a backing file at instance creation time at all. Wouldn't it be better (or at least more consistent) to do that? I know it won't fix existing instances by definition, but we could combine that change with one to flatten all the ephemerals at the next service start or something, right?

Doug Szumski (DougSzumski), 2025-02-12, in reply to the above:

Many thanks for taking a look.

I found this specific bit of the libvirt documentation helpful in explaining the approach here [1]. The code modified in this patch runs on the destination hypervisor. I've tested it (on Yoga) and live-migrations complete, but having looked again, I think it needs to set the VIR_MIGRATE_NON_SHARED_DISK flag (--copy-storage-all) for the ephemeral volumes specifically. That way, the entire chain will get copied into the standalone image on the destination.

I agree about removing the backing file in general. Sean M mentioned he might get around to that this cycle.

As for flattening images on service start, that would work too. The best way I've come up with is to run `virsh blockpull` on the instance layer of the volume. It takes some time for a large volume, so perhaps triggering it via `nova-manage` would be a better option?

Compared to the approach in this patch, `virsh blockpull` has the disadvantage that if it goes wrong, it could corrupt the instance. On the other hand, it will also fix cold migration, which isn't fixed by the approach in this patch.

[1] https://libvirt.org/migration.html#migration-of-vms-using-non-shared-images-for-disks

Doug Szumski (DougSzumski), 2025-02-27, in reply to the above:

I hit a dead end with setting VIR_MIGRATE_NON_SHARED_DISK: when Nova calls libvirt via migrateToURI3, it specifies a list of migrate disks and a single flags field. The flags apply to all disks, so you can't easily set VIR_MIGRATE_NON_SHARED_DISK for the ephemeral disk only.

I've reworked the patch to fix cold migration and new VMs. I haven't figured out the best place to run `virsh blockpull` yet for live migration. Any feedback on the general approach would be great before going further. There is more code relating to the backing file that can be stripped out, but I've left that for now to keep it simple.
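The condition quoted from driver.py reduces to a small predicate. A paraphrased sketch for reference (the function name `should_drop_backing_file` is ours, not Nova's):

```python
def should_drop_backing_file(info, remove_ephemeral_backing_files):
    """Mirror of the quoted driver.py condition (illustrative only).

    Act only on disks whose backing file is an ephemeral image, and only
    when removal of ephemeral backing files has been enabled.
    """
    backing = info.get('backing_file')
    return bool(backing and
                backing.startswith('ephemeral') and
                remove_ephemeral_backing_files)
```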
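The flatten-then-verify flow discussed in the thread (run `virsh blockpull` on the instance layer, then confirm the image is standalone before treating it as such) could be sketched as below. This is a hypothetical standalone helper, not the patch's privsep code: `virsh blockpull --wait` and `qemu-img info --output=json` are real commands with these options, but the helper names `flatten_ephemeral` and `has_backing_file` are invented for illustration.

```python
import json
import subprocess


def has_backing_file(qemu_img_info_json):
    """True if 'qemu-img info --output=json' output reports a backing file."""
    return 'backing-filename' in json.loads(qemu_img_info_json)


def flatten_ephemeral(instance_domain, disk_img_path):
    """Hypothetical helper mirroring the review's 'virsh blockpull' idea."""
    # Merge the whole backing chain into the active layer. --wait blocks
    # until the pull completes, which per the review can take ~20 minutes
    # for a 2 TB ephemeral disk.
    subprocess.run(['virsh', 'blockpull', '--domain', instance_domain,
                    '--path', disk_img_path, '--wait'], check=True)
    # Verify the image really is standalone now; migrating an image that
    # still references a backing file it won't find would corrupt it.
    out = subprocess.run(['qemu-img', 'info', '--output=json', disk_img_path],
                         check=True, capture_output=True, text=True).stdout
    if has_backing_file(out):
        raise RuntimeError(f'{disk_img_path} still has a backing file')
```

Because the pull is slow and failure mid-way is possible, driving this from `nova-manage` (as suggested above) rather than implicitly at service start would keep the risky step operator-initiated.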
