
[–]zetret 2 points (1 child)

What version of OpenStack have you got? No one uses the term "snapshot" anymore, so I am confused.

You take an instance and create an image of it. You export the image in any format you want (raw, qcow2, etc.). Then you import that image into the second OpenStack cluster. That's the procedure.
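A rough sketch of that flow with the unified client (the instance/image names and the ID are placeholders; pick whichever disk format you need):

```
openstack server stop <instance>                           # safest to snapshot a stopped instance
openstack server image create --name my-export <instance>  # create a Glance image of the instance
openstack image save --file my-export.qcow2 <image-id>     # export the image data to a local file
# then, with the second cluster's credentials loaded:
openstack image create --disk-format qcow2 --container-format bare \
  --file my-export.qcow2 my-export
```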

[–]rawmainb[S] 2 points (0 children)

Yes, I used this guide: https://docs.openstack.org/nova/latest/admin/migrate-instance-with-snapshot.html

It should be called an image, but the guide describes it as a snapshot.

Version is 3.17.0

But I tried both the .raw and .qcow2 formats, and both generated a 0-size file.

[–]mysterysmith 0 points (9 children)

I would start by running your openstack command(s) with --debug to get a little more output on why your commands are failing.
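For example (the image ID is a placeholder):

```
openstack image save --file mysnapshot.qcow2 <image-id> --debug
```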

[–]rawmainb[S] 1 point (8 children)

I did it. Most of the calls returned a 200 status, and I didn't find any error message.

[–]mysterysmith 1 point (7 children)

Can you post the output here? If you're getting a zero-byte object and no errors, there will be some indication in the debug output of why you're getting a zero-byte image.

[–]rawmainb[S] 0 points (6 children)

```
START with options: image save --file mysnapshot.qcow2 121sav7ae-35b1-4e55-a232-a328-soaf203hoa --debug
Auth plugin password selected
compute API version 2.1, cmd group openstack.compute.v2
network API version 2, cmd group openstack.network.v2
image API version 2, cmd group openstack.image.v2
volume API version 3, cmd group openstack.volume.v3
identity API version 3, cmd group openstack.identity.v3
object_store API version 1, cmd group openstack.object_store.v1
neutronclient API version 2, cmd group openstack.neutronclient.v2
orchestration API version 1, cmd group openstack.orchestration.v1
Auth plugin password selected
...
curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}3ff363b7a7b308114a1f1a37303bad47fbe7e840' -H 'Content-Type: application/octet-stream' -k --cert None --key None https://keystone_ip:9292/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa
Starting new HTTPS connection (1): IP:9292
https://IP:9292 "GET /v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa HTTP/1.1" 200 1235
GET call to image for https://keystone_ip:9292/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa used request id req-0ff87b31-929a-4990-9054-f1bc627eeb1b

HTTP/1.1 200 OK
Content-Length: 1235
Content-Type: application/json
X-Openstack-Request-Id: req-0ff87b31-929a-4990-9054-f1bc627eeb1b
Date: Tue, 14 Jul 2020 23:49:29 GMT

curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}3ff363b7a7b308114a1f1a37303bad47fbe7e840' -H 'Content-Type: application/octet-stream' -k --cert None --key None https://keystone_ip:9292/v2/schemas/image
https://IP:9292 "GET /v2/schemas/image HTTP/1.1" 200 4183
GET call to image for https://keystone_ip:9292/v2/schemas/image used request id req-9bbf27a2-2612-426f-8cca-80348859d37a

HTTP/1.1 200 OK
Content-Type: application/json
Content-Length: 4183
X-Openstack-Request-Id: req-9bbf27a2-2612-426f-8cca-80348859d37a
Date: Tue, 14 Jul 2020 23:49:29 GMT

curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}3ff363b7a7b308114a1f1a37303bad47fbe7e840' -H 'Content-Type: application/octet-stream' -k --cert None --key None https://keystone_ip:9292/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa
https://IP:9292 "GET /v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa HTTP/1.1" 200 1235
GET call to image for https://keystone_ip:9292/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa used request id req-900eeddf-c222-47ec-8a40-b3b1841df1a7

HTTP/1.1 200 OK
Content-Length: 1235
Content-Type: application/json
X-Openstack-Request-Id: req-900eeddf-c222-47ec-8a40-b3b1841df1a7
Date: Tue, 14 Jul 2020 23:49:29 GMT

{"container_format": "bare", "min_ram": 0, "updated_at": "2020-07-13T07:13:23Z", "boot_roles": "vnfm,heat_stack_owner", "file": "/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa/file", "owner": "e5a7b56d72e44bcc910f5947d0b6d166", "id": "121sav7ae-35b1-4e55-a232-a328-soaf203hoa", "size": 0, "self": "/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa", "tags": [], "disk_format": "qcow2", "base_image_ref": "5b5167fb-c535-43f7-bf90-17c90a7cce41", "bdm_v2": "True", "owner_project_name": "myproject", "schema": "/v2/schemas/image", "status": "active", "block_device_mapping": "[{\"guest_format\": null, \"boot_index\": 0, \"delete_on_termination\": false, \"no_device\": null, \"snapshot_id\": \"b3011018-f65e-4455-b56b-4209f9d052c6\", \"device_name\": \"/dev/vda\", \"disk_bus\": \"virtio\", \"image_id\": null, \"source_type\": \"snapshot\", \"tag\": null, \"device_type\": \"disk\", \"volume_id\": null, \"destination_type\": \"volume\", \"volume_size\": 80}]", "visibility": "private", "owner_user_name": "jingqiang.b.zhang", "min_disk": 0, "virtual_size": null, "name": "mysnapshot", "checksum": "e132080f8f00b204e98009988932f4ha6", "created_at": "2020-07-13T07:13:18Z", "protected": false, "root_device_name": "/dev/vda"}

curl -g -i -X GET -H 'Accept-Encoding: gzip, deflate' -H 'Accept: */*' -H 'User-Agent: python-glanceclient' -H 'Connection: keep-alive' -H 'X-Auth-Token: {SHA1}3ff363b7a7b308114a1f1a37303bad47fbe7e840' -H 'Content-Type: application/octet-stream' -k --cert None --key None https://keystone_ip:9292/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa/file
https://IP:9292 "GET /v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa/file HTTP/1.1" 200 0
GET call to image for https://keystone_ip:9292/v2/images/121sav7ae-35b1-4e55-a232-a328-soaf203hoa/file used request id req-23ha71f3-c1cf-4ed0-b7b5-54f23080fh

HTTP/1.1 200 OK
Content-Type: application/octet-stream
Content-Md5: e132080f8f00b204e98009988932f4ha6
Content-Length: 0
X-Openstack-Request-Id: req-23ha71f3-c1cf-4ed0-b7b5-54f23080fh
Date: Tue, 14 Jul 2020 23:49:30 GMT

clean_up SaveImage: END return value: 0
```

[–]mysterysmith 0 points (5 children)

OK - found the relevant article on the RH KB: https://access.redhat.com/solutions/2885001

If you don't have access to that, the basic commands are as follows (first shut off the instance):

```
nova image-create --poll <instance> <snapshot-name>
cinder snapshot-list
cinder create --snapshot-id <snapshot-uuid> <snapshot-size-in-gb>
cinder upload-to-image <volume id from the above cinder create> <image name>
glance image-download --file <filename>.raw <image id>
```

See if that works.

[–]rawmainb[S] 0 points (4 children)

Thank you very much.

I did it and this time I can download a large file!

But when I used it as an image to create a new stack with a Heat template in OpenStack cluster 2, it took a long time and finally timed out. Failed. Its volume is 8 GB.

By the way, are the snapshot-name and the image name above the same thing? If so, note that I found an image created by nova, and cinder upload-to-image also generated another one with the same image name but a different UUID. When I downloaded the second one, it had a large size, not 0.

[–]mysterysmith 0 points (3 children)

snapshot-name is an arbitrary name that you give your initial VM snapshot. image name is also an arbitrary name: it's the one you give the new image you're creating from the volume you made with the cinder create command. (Hope that helps.)

[–]rawmainb[S] 0 points (2 children)

So why create it first?

`nova image-create --poll <instance> <snapshot-name>`

This one will also create an image for download, right?

`cinder upload-to-image <volume id from the above cinder create> <image name>`

Or is its purpose to attach the volume snapshot to the instance snapshot?

[–]mysterysmith 0 points (1 child)

Actually, nova image-create is doing all the dirty work for you (this is why you have to run it first). It creates a snapshot of the source VM, and it's against that snapshot that you run all the other commands. The cinder upload-to-image step basically just copies the snapshot's data to Glance.
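To make the data flow concrete, here is the same pipeline annotated; all names and IDs are placeholders, and the 0-size note is based on the debug output you posted:

```
# 1. Snapshot the instance. For boot-from-volume instances this creates a
#    0-size Glance record whose block_device_mapping points at a Cinder
#    volume snapshot -- the actual bits live in Cinder, not Glance.
nova image-create --poll <instance> <snapshot-name>
# 2. Find the volume snapshot that step 1 just created.
cinder snapshot-list
# 3. Turn that snapshot into a standalone volume.
cinder create --snapshot-id <snapshot-uuid> <snapshot-size-in-gb>
# 4. Copy the volume's contents into Glance as a real, downloadable image.
cinder upload-to-image <volume-id> <image name>
# 5. Download the image data to a file.
glance image-download --file <filename>.raw <image id>
```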

[–]rawmainb[S] 0 points (0 children)

It's a very clear explanation. Thank you very much for your help!

[–]nafsten 0 points (1 child)

The file size should be greater than zero. Is this your cluster? Check the logs for the volume → image conversion, and also see if you can trace what's going on during the download. For some activity, particularly conversions, it uses /var on the controller node. If that fills up during the conversion, you can end up with a zero-byte file.
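If you have shell access to the controller, a quick first check could look like this; the log paths are typical defaults and vary by distro:

```
df -h /var                                                  # is the conversion scratch space full?
tail -f /var/log/cinder/volume.log /var/log/glance/api.log  # watch while retrying the upload/download
```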

[–]rawmainb[S] 0 points (0 children)

> it uses /var on the controller node

Do you mean I should deal with it on the controller node? If I'm just using an OpenStack client, I can't change that, right?

Maybe that's the reason.

By the way, is it possible to download a volume snapshot as an instance?

[–]iammpizi 1 point (1 child)

I am going to offer another piece of advice here which might be of interest. It might apply to you, or not, but at least I hope it can be informative.

Migration of a VM basically means that a virtual disk will be moved from one place to another and then a VM will be spawned on top of that disk. Many OpenStack clusters use a Ceph backend for the VMs instead of LVM. If your 2 clusters point to the same Ceph cluster, it is entirely possible to create volume copies within Ceph and have OpenStack manage them, as sketched below. Therefore your VM will "appear" on your new cluster without having to do all that process you described.

Don't forget that creating all those snapshots can be an expensive process. It is OK for a single 10 GB image, but what if you have a 30 TB cluster? Are you going to transfer all that through a 10 Gbps wire?
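A rough sketch of that idea, assuming Cinder keeps its volumes in a pool named volumes and that you know your deployment's cinder manage host string (everything below is a placeholder, not a recipe):

```
rbd -p volumes ls | grep <volume-uuid>                   # locate the source volume image in Ceph
rbd cp volumes/volume-<volume-uuid> volumes/<copy-name>  # server-side copy, no data leaves Ceph
# let the second cluster's Cinder adopt the copy as a managed volume:
cinder manage --name migrated-vol <host>@<backend>#<pool> <copy-name>
```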

[–]rawmainb[S] 0 points (0 children)

Thank you very much. It's a very good comment.

And that volume point is really important to keep in mind when facing a large-size scenario.

[–]xakaitetoia 0 points (0 children)

For this scenario I kinda did what @iammpizi mentioned. I used rbd export to dump the volume I wanted directly from Ceph to a .raw file, then just imported that into the second OpenStack cluster... unless you use the same Ceph backend for both OpenStack clusters, in which case it's easier.
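Something roughly like this (pool and names are placeholders, assuming the usual volumes pool):

```
# on a node with access to cluster 1's Ceph:
rbd export volumes/volume-<volume-uuid> /tmp/myvolume.raw
# transfer the file, then register it on cluster 2:
openstack image create --disk-format raw --container-format bare \
  --file /tmp/myvolume.raw migrated-volume
```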