The goal of these experiments is to evaluate the performance of various DPDK-based communication mechanisms when transmitting data between different container-based VMs running on the same physical host (in this case, performance can be automatically and transparently improved by using shared memory buffers).
All the experiments have been performed on Linux-based systems (Ubuntu is a good choice). After installing your favourite Linux distribution, remember to add "hugepagesz=1G hugepages=8 intel_iommu=on" to the kernel boot options by properly modifying grub.cfg.
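On Ubuntu, for example, this can be done by editing /etc/default/grub and regenerating the GRUB configuration (a minimal sketch; the exact file and update command can differ on other distributions):

    # In /etc/default/grub, append the hugepage and IOMMU options to the
    # default kernel command line:
    GRUB_CMDLINE_LINUX_DEFAULT="quiet splash hugepagesz=1G hugepages=8 intel_iommu=on"

    # Regenerate grub.cfg and reboot:
    sudo update-grub
    sudo reboot

    # After rebooting, check that the options took effect:
    cat /proc/cmdline
    grep HugePages /proc/meminfo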
The performance is evaluated by running DPDK-based applications in two different containers: one application sends packets at a specified rate, and the other one receives the packets, measuring the received packet rate. The two applications use the virtio-user DPDK driver, and the two containers are connected by some kind of virtual bridge (or virtual switch) that uses the DPDK vhost driver to provide virtio ports to the clients.
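As an example, the sender and the receiver can be two testpmd instances using virtio-user ports (just a sketch: the /tmp/sock0 and /tmp/sock1 socket paths, the core lists and the file prefixes are arbitrary choices, and the virtual switch must already be running and listening on those sockets):

    # Sender container: generate packets on a virtio-user port attached
    # to the vhost-user socket /tmp/sock0
    testpmd -l 0-1 -n 4 --no-pci --file-prefix=tx \
            --vdev=virtio_user0,path=/tmp/sock0 -- -i --forward-mode=txonly
    testpmd> start

    # Receiver container: count the packets arriving on the other port
    testpmd -l 2-3 -n 4 --no-pci --file-prefix=rx \
            --vdev=virtio_user0,path=/tmp/sock1 -- -i --forward-mode=rxonly
    testpmd> start
    testpmd> show port stats all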
The experimental testbed is similar to the one proposed in a DPDK howto, but uses lxc instead of Docker. The two containers have been connected by using DPDK's testpmd application or fd.io's vpp daemon.
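When testpmd is used as the virtual switch, it can be started on the host with two vhost ports and left in its default io forwarding mode, which simply moves packets between the two ports (again a sketch; in older DPDK releases the vdev is called eth_vhost instead of net_vhost):

    # Host: testpmd acting as a simple switch between two vhost-user ports
    testpmd -l 4-6 -n 4 --socket-mem 1024 --no-pci --file-prefix=host \
            --vdev='net_vhost0,iface=/tmp/sock0' \
            --vdev='net_vhost1,iface=/tmp/sock1' -- -i
    testpmd> start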
First of all, you need to build the experimental testbed, then you can follow the instructions to run the experiments.
To run the experiments using vpp as a virtual switch, build it according to these instructions.
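Once vpp is running, two vhost-user interfaces can be created and cross-connected at the vpp CLI along these lines (a sketch based on older VPP releases; the CLI syntax and the interface names can change between versions, so check the built-in help):

    vpp# create vhost-user socket /tmp/sock0 server
    vpp# create vhost-user socket /tmp/sock1 server
    vpp# set interface state VirtualEthernet0/0/0 up
    vpp# set interface state VirtualEthernet0/0/1 up
    vpp# set interface l2 xconnect VirtualEthernet0/0/0 VirtualEthernet0/0/1
    vpp# set interface l2 xconnect VirtualEthernet0/0/1 VirtualEthernet0/0/0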
The experiments should be repeated using different alternatives to connect the containers. For example, try snabbswitch, or openvswitch with DPDK support (using vhost-user).
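For example, openvswitch built with DPDK support can provide the two vhost-user ports along these lines (a sketch; OVS creates the socket files in its run directory, typically /var/run/openvswitch):

    # Create a bridge using the DPDK (netdev) datapath and attach two
    # vhost-user ports to it
    ovs-vsctl add-br br0 -- set bridge br0 datapath_type=netdev
    ovs-vsctl add-port br0 vhost-user0 -- set Interface vhost-user0 type=dpdkvhostuser
    ovs-vsctl add-port br0 vhost-user1 -- set Interface vhost-user1 type=dpdkvhostuser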
Finally, compare these results with some non-DPDK-based solutions, in which packets traverse the in-kernel network stack.
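For example, a kernel-only baseline connects the containers with a Linux bridge and veth pairs (a sketch; the bridge and interface names are arbitrary, and <pid1>/<pid2> are placeholders for the PIDs of processes running inside the two containers):

    # Create a kernel bridge and two veth pairs, attach one end of each
    # pair to the bridge and move the other end into a container
    ip link add br0 type bridge
    ip link add veth0 type veth peer name veth0c
    ip link add veth1 type veth peer name veth1c
    ip link set veth0 master br0
    ip link set veth1 master br0
    ip link set veth0c netns <pid1>
    ip link set veth1c netns <pid2>
    ip link set br0 up
    ip link set veth0 up
    ip link set veth1 up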