Use virtiofsd for sharing file system data with host
Switch over to using virtiofsd for sharing file system data with the
host. virtiofs is a file system designed for the needs of virtual
machine environments. That is in contrast to 9P, which we currently use
for sharing data with the host, but which is first and foremost a
network file system.
9P is problematic if for no other reason than that it lacks proper
support for the "open-unlink-fstat idiom", in which files are unlinked
and later referenced via file descriptor (see danobi#83). virtiofs does
not have this problem.
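
For illustration, a minimal sketch of that idiom (the path here is made
up):

  use std::fs::{self, File};
  use std::io;

  // Create and open a file, then remove its directory entry while
  // keeping the descriptor open.
  fn open_unlink_fstat() -> io::Result<u64> {
      let path = "/mnt/shared/scratch.tmp";
      let file = File::create(path)?;
      fs::remove_file(path)?;
      // fstat(2) through the open descriptor: this succeeds on local
      // file systems and on virtiofs, but can fail over 9P.
      Ok(file.metadata()?.len())
  }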

This change replaces usage of 9P with that of virtiofs. In order to
work, virtiofs needs a user space server. The current state-of-the-art
implementation (virtiofsd) is written in Rust, and so we interface
directly with the library. Most of this code is extracted straight from
virtiofsd, as it is a lot of boilerplate. An alternative approach would
be to install the binary via distribution packages or from crates.io,
but availability (and discovery) can be a bit of a challenge. Note that
this now means that both libcap-ng and libseccomp need to be installed.
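
For context, a rough sketch of how a VM gets wired up to a virtiofs
share on the QEMU side (see also the virtio-fs howtos linked below);
socket path, tag, and memory size are made up, and this is not vmtest's
actual invocation:

  use std::io;
  use std::process::{Child, Command};

  // The vhost-user-fs device talks to virtiofsd over a vhost-user
  // socket and requires guest memory to be shared with the server,
  // hence the memory-backend-memfd object.
  fn spawn_qemu_with_virtiofs() -> io::Result<Child> {
      Command::new("qemu-system-x86_64")
          .args([
              "-chardev", "socket,id=vfsd,path=/tmp/vhost-fs.sock",
              "-device", "vhost-user-fs-pci,chardev=vfsd,tag=shared",
              "-object", "memory-backend-memfd,id=mem,size=4G,share=on",
              "-numa", "node,memdev=mem",
              "-m", "4G",
          ])
          .spawn()
  }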

I benchmarked both the current master and this version with a
bare-bones custom kernel:
  Benchmark 1: target/release/vmtest -k bzImage-9p 'echo test'
    Time (mean ± σ):      1.316 s ±  0.087 s    [User: 0.462 s, System: 1.104 s]
    Range (min … max):    1.232 s …  1.463 s    10 runs

  Benchmark 1: target/release/vmtest -k bzImage-virtiofsd 'echo test'
    Time (mean ± σ):      1.244 s ±  0.011 s    [User: 0.307 s, System: 0.358 s]
    Range (min … max):    1.227 s …  1.260 s    10 runs

So it seems there is a ~0.07s speed-up, on average (and significantly
less system time being used). This is great, but I suspect a more
pronounced speed advantage will be visible when working with large
files, where virtiofs is said to significantly outperform 9P (typically
>2x from what I understand, but I have not done any benchmarks of that
nature).

A few other notes:
- we rely solely on guest-level read-only mounts to enforce read-only
  state (see the mount sketch after this list). The virtiofsd
  recommended way is to use read-only bind mounts [0], but doing so
  would require root.
- we are not using DAX, because it is still incomplete and apparently
  requires building QEMU from source. In any event, it should not
  change anything functionally and would be solely a performance
  improvement.
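
As a sketch of such a guest-level read-only mount (tag and mount point
are illustrative):

  use std::io;
  use std::process::{Command, ExitStatus};

  // Inside the guest, a virtiofs share is mounted by its tag; passing
  // "ro" is what enforces the read-only state at the guest level.
  fn mount_shared_read_only() -> io::Result<ExitStatus> {
      Command::new("mount")
          .args(["-t", "virtiofs", "-o", "ro", "shared", "/mnt/shared"])
          .status()
  }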

I have adjusted the configs, but because I don't have Docker handy I
can't really create those kernels. CI seems incapable of producing the
artifacts without doing a full-blown release dance. No idea what empty
is about, really. I suspect the test failures we see are because the
kernel lacks virtiofs support?

Some additional resources worth keeping around:
  - https://virtio-fs.gitlab.io/howto-boot.html
  - https://virtio-fs.gitlab.io/howto-qemu.html

[0] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/README.md?ref_type=heads#faq
[1] https://gitlab.com/virtio-fs/virtiofsd/-/blob/main/src/main.rs?ref_type=heads#L1242

Closes: danobi#16
Closes: danobi#83

Signed-off-by: Daniel Müller <deso@posteo.net>
d-e-s-o committed Aug 27, 2024
1 parent 7c87449 commit f382adb
Showing 12 changed files with 1,054 additions and 71 deletions.
.github/workflows/rust.yml (8 changes: 4 additions & 4 deletions)
@@ -29,14 +29,14 @@ jobs:
           override: true
           components: rustfmt, clippy
 
-      - name: Build
-        run: make
-
       - name: Install test deps
         run: |
           sudo apt-get update
           # Virtualization deps
-          sudo apt-get install -y qemu-system-x86-64 qemu-guest-agent qemu-utils ovmf
+          sudo apt-get install -y qemu-system-x86-64 qemu-guest-agent qemu-utils ovmf libcap-ng-dev libseccomp2
+      - name: Build
+        run: make
+
       - name: Cache test assets
         uses: actions/cache@v3
