SR-IOV vs Virtio, brief
SR-IOV vs Virtio: Most virtualization deployments use virtio, which relies on a virtual switch/bridge on the host OS to forward traffic between VMs and to the outside world. The physical NIC is emulated as a vNIC, and every packet involves the host OS kernel (software) and CPU/RAM (hardware). Line rate, or near line rate, cannot be achieved because each packet must pass through the software switch, and that costs CPU cycles. This software-based sharing adds overhead to every I/O operation because of the emulation layer. SR-IOV (which is hardware dependent) improves traffic forwarding by up to 60%: it gives the guest OS direct memory access to the physical NIC through a Virtual Function (VF), so the guest sees what looks like a directly attached physical NIC. Forwarding is handled by the NIC hardware, with no role for the host kernel or vswitch; no software switching means no extra CPU usage and less burden on the RAM. The biggest advantage is that the forwarding logic runs in the NIC's own silicon (an FPGA or ASIC). Think of SR-IOV vs virtio the way you would think of hardware-assisted Intel VT vs a hypervisor implemented purely in software.
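To make the VF mechanism above concrete, here is a minimal sketch of how a Linux host exposes SR-IOV capability through sysfs. It assumes the standard kernel interface (sriov_totalvfs / sriov_numvfs files under the NIC's device directory); the interface name in the usage note is illustrative, and the function name is my own.

```python
import os

def sriov_vf_counts(device_dir):
    """Read SR-IOV VF counts from a NIC's sysfs device directory.

    On Linux an SR-IOV-capable NIC exposes two files under
    /sys/class/net/<iface>/device/:
      - sriov_totalvfs: max Virtual Functions the hardware supports
      - sriov_numvfs:   how many VFs are currently enabled
    Returns a (total, enabled) tuple.
    """
    def read_int(name):
        # Each sysfs file holds a single decimal number.
        with open(os.path.join(device_dir, name)) as f:
            return int(f.read().strip())

    return read_int("sriov_totalvfs"), read_int("sriov_numvfs")
```

For example, sriov_vf_counts("/sys/class/net/eth0/device") would report the VF capacity of eth0 (if it supports SR-IOV at all). Enabling VFs is done by writing a count into sriov_numvfs as root; each VF then appears as its own PCI device that can be passed through to a guest.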
SR-IOV cards not only improve traffic forwarding and give near bare-metal performance, they also implement many advanced features such as VLAN tagging and CoS marking on chip, which again improves performance. Because the traffic is forwarded in NIC hardware, SR-IOV is very useful for delay-sensitive traffic like voice. A single SR-IOV-enabled NIC can commonly serve on the order of 48 VMs, but this varies from vendor to vendor and by NIC model.
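The on-chip VLAN tagging mentioned above is typically configured per VF from the host with iproute2 (ip link set <pf> vf <n> vlan <id> qos <prio>), after which the NIC inserts the tag in hardware and the guest never sees it. A small sketch that builds such a command line; the helper name and interface names are illustrative, not part of any standard API:

```python
def vf_vlan_cmd(pf_iface, vf_index, vlan_id, qos=0):
    """Build the iproute2 command that pins a VLAN tag (and 802.1p
    priority via 'qos') onto one Virtual Function of the given
    Physical Function interface. Returned as an argv list suitable
    for subprocess.run()."""
    return ["ip", "link", "set", pf_iface,
            "vf", str(vf_index),
            "vlan", str(vlan_id),
            "qos", str(qos)]
```

For example, vf_vlan_cmd("eth0", 0, 100, qos=5) yields the command "ip link set eth0 vf 0 vlan 100 qos 5", which must be run as root on the host; the guest attached to VF 0 then sends and receives untagged frames while the NIC tags them with VLAN 100 and priority 5 in hardware.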