
vm-bhyve ignoring -s when using cloud images? #535

Open
duncan-bayne opened this issue Nov 25, 2023 · 7 comments · May be fixed by #536

Comments

@duncan-bayne

Hi :) Thanks for vm-bhyve, it's been very helpful.

I'm creating a VM with the following invocation:

vm create -s 128G -t dev-linux -C -i ubuntu-23.04-server-cloudimg-amd64.img -k key.pub foo
vm start foo

However, when I SSH into the VM, I see that the root filesystem is only 3.3GiB:

~ $ ssh ubuntu@foo
ubuntu@foo:~$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  924K  1.6G   1% /run
/dev/vda1       3.3G  1.6G  1.7G  49% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.6G  4.0K  1.6G   1% /run/user/1000

I'm fairly sure I'm doing something wrong here, because I'd have expected either (a) the disk size to be 128GiB, or (b) some sort of error or warning to be produced if the -s option was being intentionally ignored.

@mateuszkwiatkowski
Collaborator

Could you please paste the output of fdisk -l on the guest?

@duncan-bayne
Author

Sure!

$ sudo fdisk -l
Disk /dev/loop0: 73.9 MiB, 77492224 bytes, 151352 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop1: 152.13 MiB, 159514624 bytes, 311552 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop2: 40.86 MiB, 42840064 bytes, 83672 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/loop3: 152.06 MiB, 159444992 bytes, 311416 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 512 bytes
I/O size (minimum/optimal): 512 bytes / 512 bytes


Disk /dev/vda: 3.5 GiB, 3758096384 bytes, 7340032 sectors
Units: sectors of 1 * 512 = 512 bytes
Sector size (logical/physical): 512 bytes / 131072 bytes
I/O size (minimum/optimal): 131072 bytes / 131072 bytes
Disklabel type: gpt
Disk identifier: 388B6A34-40E0-42E3-AAF4-9B640502DF2C

Device      Start     End Sectors  Size Type
/dev/vda1  227328 7339998 7112671  3.4G Linux filesystem
/dev/vda14   2048   10239    8192    4M BIOS boot
/dev/vda15  10240  227327  217088  106M EFI System

Partition table entries are not in disk order.

Also, in case it's helpful, the contents of the dev-linux template:

loader="grub"
cpu=16
memory=16G
network0_type="virtio-net"
network0_switch="public"
disk0_type="virtio-blk"
disk0_name="disk0.img"
disk0_size=128G

@mateuszkwiatkowski
Collaborator

So the disk inside the guest is also 3.5 GiB. I'm assuming that's the original size of Ubuntu's cloud image.
Let's check how it looks on the hypervisor. If your datastore is on ZFS, please run:

zfs list | grep foo
qemu-img info /dev/zvol/path_to_zvol_disk0

If you're using files:

qemu-img info <path_to_disk_file>

@duncan-bayne
Author

Yup, it doesn't seem to have been resized. I'm using files:

# qemu-img info disk0.img 
image: disk0.img
file format: raw
virtual size: 3.5 GiB (3758096384 bytes)
disk size: 3.5 GiB
Child node '/file':
    filename: disk0.img
    protocol type: file
    file length: 3.5 GiB (3758096384 bytes)
    disk size: 3.5 GiB

@mateuszkwiatkowski
Collaborator

mateuszkwiatkowski commented Nov 29, 2023

So it seems that initially the file is created with the proper size by a truncate -s <size> command (in core::create):

kwiat@utgard:~ $ truncate -s 128G testfile.img
kwiat@utgard:~ $ du -hs testfile.img
512B	testfile.img
kwiat@utgard:~ $ qemu-img info testfile.img
image: testfile.img
file format: raw
virtual size: 128 GiB (137438953472 bytes)
disk size: 512 B

But later it's shrunk back down to the size of the source cloud image by the qemu-img dd command (in core::write_img):

$ qemu-img dd -O raw if=/vm/.img/jammy-server-cloudimg-amd64.img of=./testfile.img
$ qemu-img info testfile.img
image: testfile.img
file format: raw
virtual size: 2.2 GiB (2361393152 bytes)
disk size: 646 MiB
$ qemu-img info /vm/.img/jammy-server-cloudimg-amd64.img
image: /vm/.img/jammy-server-cloudimg-amd64.img
file format: qcow2
virtual size: 2.2 GiB (2361393152 bytes)
disk size: 646 MiB
cluster_size: 65536

The ultimate fix would be to issue qemu-img resize after the image is written to the base file. As a workaround, you can run qemu-img resize manually after the VM is created. If the cloud image has a filesystem resize service enabled, it should automatically grow the partition during the first boot.
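For example, the manual workaround would look roughly like this (a sketch only; the disk path is an assumption based on a default file-backed datastore, so adjust it to your layout):

# on the host, after vm create but before the first boot
qemu-img resize -f raw /vm/foo/disk0.img 128G
vm start foo

On Ubuntu cloud images, cloud-init's growpart and resizefs modules should then expand the root partition and filesystem automatically on first boot.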

@mateuszkwiatkowski
Collaborator

@duncan-bayne I prepared a PR with a fix: #536.
Could you please test it on your machine? Thanks!
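If it helps, one way to try the branch (a rough sketch; pull/536/head is GitHub's standard read-only ref for pull requests, and the library path assumes the FreeBSD port's default install location):

git clone https://github.com/churchers/vm-bhyve.git && cd vm-bhyve
git fetch origin pull/536/head:pr-536 && git checkout pr-536
# overwrite the installed library files with the patched ones
cp lib/* /usr/local/lib/vm-bhyve/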

@duncan-bayne
Author

That seems to have fixed it, thanks!

On the host, after creation:

# qemu-img info disk0.img 
image: disk0.img
file format: raw
virtual size: 128 GiB (137438953472 bytes)
disk size: 3.58 GiB
Child node '/file':
    filename: disk0.img
    protocol type: file
    file length: 128 GiB (137438953472 bytes)
    disk size: 3.58 GiB

On the guest, after the first boot and (I presume) the automatic resize:

$ df -h
Filesystem      Size  Used Avail Use% Mounted on
tmpfs           1.6G  932K  1.6G   1% /run
/dev/vda1       124G  1.6G  123G   2% /
tmpfs           7.9G     0  7.9G   0% /dev/shm
tmpfs           5.0M     0  5.0M   0% /run/lock
/dev/vda15      105M  6.1M   99M   6% /boot/efi
tmpfs           1.6G  4.0K  1.6G   1% /run/user/1000

Thank you 🎉
