Debian version: 7.4
Kernel: 3.2.0
gcc: 4.7.2
During installation I discovered that sudo was missing, so if you don't have it, install it first: apt-get install sudo
Give the virtual machine three NICs, and don't use NAT mode, otherwise the test program cannot send or receive packets and every statistic stays at 0.
In Debian, bring up only one NIC with a real address, used for ssh logins; leave the other two for DPDK to play with.
First set the environment variables:
export RTE_SDK=`pwd`
export RTE_TARGET=x86_64-native-linuxapp-gcc
Enter the source directory and run:
root@debian:~/code/dpdk-1.8.0# ./tools/setup.sh
------------------------------------------------------------------------------
 RTE_SDK exported as /root/code/dpdk-1.8.0
------------------------------------------------------------------------------
----------------------------------------------------------
 Step 1: Select the DPDK environment to build
----------------------------------------------------------
[1] i686-native-linuxapp-gcc
[2] i686-native-linuxapp-icc
[3] ppc_64-power8-linuxapp-gcc
[4] x86_64-ivshmem-linuxapp-gcc
[5] x86_64-ivshmem-linuxapp-icc
[6] x86_64-native-bsdapp-clang
[7] x86_64-native-bsdapp-gcc
[8] x86_64-native-linuxapp-clang
[9] x86_64-native-linuxapp-gcc
[10] x86_64-native-linuxapp-icc
----------------------------------------------------------
 Step 2: Setup linuxapp environment
----------------------------------------------------------
[11] Insert IGB UIO module
[12] Insert VFIO module
[13] Insert KNI module
[14] Setup hugepage mappings for non-NUMA systems
[15] Setup hugepage mappings for NUMA systems
[16] Display current Ethernet device settings
[17] Bind Ethernet device to IGB UIO module
[18] Bind Ethernet device to VFIO module
[19] Setup VFIO permissions
----------------------------------------------------------
 Step 3: Run test application for linuxapp environment
----------------------------------------------------------
[20] Run test application ($RTE_TARGET/app/test)
[21] Run testpmd application in interactive mode ($RTE_TARGET/app/testpmd)
----------------------------------------------------------
 Step 4: Other tools
----------------------------------------------------------
[22] List hugepage info from /proc/meminfo
----------------------------------------------------------
 Step 5: Uninstall and system cleanup
----------------------------------------------------------
[23] Uninstall all targets
[24] Unbind NICs from IGB UIO driver
[25] Remove IGB UIO module
[26] Remove VFIO module
[27] Remove KNI module
[28] Remove hugepage mappings
[29] Exit Script

Option:
I'm on a 64-bit system, so I chose [9] to start the build, and immediately hit this error:
/lib/modules/`uname -r`/build: no such file or directory
Even after creating that directory by hand, the build still fails with: No targets specified and no makefile found. That's because build is normally not a directory but a symlink pointing to the matching kernel header directory under /usr/src. So just create the build symlink manually, pointing at /usr/src/linux-headers-`uname -r`/.
If the kernel headers are not installed, fetch the ones matching your kernel: apt-get install linux-headers-`uname -r`
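The fix above can be scripted; a minimal sketch, assuming the matching linux-headers package is already installed in the usual Debian location:

```shell
#!/bin/sh
# Recreate the /lib/modules/<ver>/build symlink that out-of-tree module builds expect.
KVER=$(uname -r)
HDRS="/usr/src/linux-headers-$KVER"
LINK="/lib/modules/$KVER/build"
if [ -d "$HDRS" ] && [ ! -e "$LINK" ]; then
    ln -s "$HDRS" "$LINK"
fi
# Show where the link points, or complain if the headers are still missing
ls -l "$LINK" 2>/dev/null || echo "no build link yet - install linux-headers-$KVER"
```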
After that comes loading the kernel modules, allocating hugepages, binding the NICs, and so on.
Here choose [11], [14], [17]. Hugepages can be set to 128. For the NICs you need to enter the PCIe address, e.g. 0000:02:05.0; the on-screen guidance is clear enough that you can pick the NICs by following the prompts.
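As a sanity check on that hugepage figure: 128 pages at the default 2 MB page size pin 256 MB of guest RAM, which is why it fits a small VM:

```shell
PAGES=128      # value entered at prompt [14]
PAGE_KB=2048   # 2 MB hugepages, as in hugepages-2048kB under /sys
echo "$((PAGES * PAGE_KB / 1024)) MB reserved"
# prints: 256 MB reserved
```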
Choose [21] to start the test application. I only have two cores, so when asked for the bitmask of cores I entered 3.
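That value is a hexadecimal coremask in which bit i enables lcore i, so 3 (binary 11) selects cores 0 and 1. A small sketch of how such a mask decodes:

```shell
MASK=3   # same value given at the prompt: binary 11
for i in 0 1 2 3; do
    if [ $(( (MASK >> i) & 1 )) -eq 1 ]; then
        echo "lcore $i enabled"
    fi
done
# prints: lcore 0 enabled
#         lcore 1 enabled
```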
But after start it keeps printing this error over and over:
EAL: Error reading from file descriptor
This seems to be caused by VMware emulating PCIe INTx interrupts poorly.
Modify the source:
diff --git a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
index d1ca26e..c46a00f 100644
--- a/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
+++ b/lib/librte_eal/linuxapp/igb_uio/igb_uio.c
@@ -505,14 +505,11 @@ igbuio_pci_probe(struct pci_dev *dev, const struct pci_device_id *id)
 	}
 	/* fall back to INTX */
 	case RTE_INTR_MODE_LEGACY:
-		if (pci_intx_mask_supported(dev)) {
-			dev_dbg(&dev->dev, "using INTX");
-			udev->info.irq_flags = IRQF_SHARED;
-			udev->info.irq = dev->irq;
-			udev->mode = RTE_INTR_MODE_LEGACY;
-			break;
-		}
-		dev_notice(&dev->dev, "PCI INTX mask not supported\n");
+		dev_dbg(&dev->dev, "using INTX");
+		udev->info.irq_flags = IRQF_SHARED;
+		udev->info.irq = dev->irq;
+		udev->mode = RTE_INTR_MODE_LEGACY;
+		break;
 	/* fall back to no IRQ */
 	case RTE_INTR_MODE_NONE:
 		udev->mode = RTE_INTR_MODE_NONE;
That's not the only change: after this edit pci_intx_mask_supported() is no longer used, which still breaks the build (DPDK treats warnings as errors), so the definition of that function has to be removed from the header compat.h as well...
After rebuilding, everything works:
testpmd> start
  io packet forwarding - CRC stripping disabled - packets/burst=32
  nb forwarding cores=1 - nb forwarding ports=2
  RX queues=1 - RX desc=128 - RX free threshold=32
  RX threshold registers: pthresh=8 hthresh=8 wthresh=0
  TX queues=1 - TX desc=512 - TX free threshold=0
  TX threshold registers: pthresh=32 hthresh=0 wthresh=0
  TX RS bit threshold=0 - TXQ flags=0x0
testpmd> stop
Telling cores to stop...
Waiting for lcores to finish...

  ---------------------- Forward statistics for port 0  ----------------------
  RX-packets: 829923         RX-dropped: 0             RX-total: 829923
  TX-packets: 829856         TX-dropped: 0             TX-total: 829856
  ----------------------------------------------------------------------------

  ---------------------- Forward statistics for port 1  ----------------------
  RX-packets: 829915         RX-dropped: 0             RX-total: 829915
  TX-packets: 829856         TX-dropped: 0             TX-total: 829856
  ----------------------------------------------------------------------------

  +++++++++++++++ Accumulated forward statistics for all ports+++++++++++++++
  RX-packets: 1659838        RX-dropped: 0             RX-total: 1659838
  TX-packets: 1659712        TX-dropped: 0             TX-total: 1659712
  ++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++++

Done.
That said, as soon as I run start, both CPUs are pegged at 100% and the VM becomes very sluggish; even typing stop takes a while to register.
The interactive setup flow is hard to automate, so the same steps can be put into a script.
First build and install DPDK:
make install T=x86_64-native-linuxapp-gcc
The following commands can go into a script; set the PCI addresses according to your own system:
echo 128 > /sys/kernel/mm/hugepages/hugepages-2048kB/nr_hugepages
mount -t hugetlbfs nodev /mnt/huge
modprobe uio
insmod x86_64-native-linuxapp-gcc/kmod/igb_uio.ko
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:05.0
./tools/dpdk_nic_bind.py -b igb_uio 0000:02:06.0
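Two caveats worth noting: the mount step assumes /mnt/huge already exists (run mkdir -p /mnt/huge first otherwise), and the hugepage allocation is easy to verify afterwards with a generic check:

```shell
# HugePages_Total should read 128 after the echo in the script above
grep -i '^HugePages' /proc/meminfo
```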
Run the test application:
./x86_64-native-linuxapp-gcc/app/testpmd -c 0x3 -n 2 -- -i