Posted by: Adam Bedell
Added: Sep 5, 2013 5:38 AM
I have done a fair bit of testing and experimenting to maximize KVM VM
performance on ZFS. Hopefully someone finds the below useful. Source: http://scientificlinuxforum.org/index.php?showtopic=2410
Guests/benchmarks used:
WinXP Pro + CrystalDiskMark, 1 pass at 1000MB
SL 6.4 + GNOME Disk Utility (palimpsest) R/W disk benchmark function

Each guest uses a file-backed sparse raw disk located on an NFS-mounted directory ("/foo"). The NFS mount is served from the targeted storage server's disk subsystem and filesystem over dual 10GbE twin-ax copper links. The connection is bonded using Linux bonding (mode=4 miimon=250 xmit_hash_policy=layer2+3). NFS server export options were (rw,async,no_root_squash). The NFS client was left with default mount options. Only NFSv3 was used; NFSv4 was disabled on the storage server via "RPCNFSDARGS = -N 4".
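
A minimal sketch of how those bonding and export settings translate into configuration files, assuming RHEL6-style ifcfg files and hypothetical interface names (eth4/eth5); only the quoted option values come from the post itself:

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BONDING_OPTS="mode=4 miimon=250 xmit_hash_policy=layer2+3"
BOOTPROTO=none
ONBOOT=yes
MTU=9000    # jumbo frames, per the *NOTES* below

# /etc/sysconfig/network-scripts/ifcfg-eth4 (repeat for eth5)
DEVICE=eth4
MASTER=bond0
SLAVE=yes
ONBOOT=yes

# /etc/exports on the storage server
/foo  *(rw,async,no_root_squash)

# /etc/sysconfig/nfs -- disable NFSv4, leaving NFSv3 only
RPCNFSDARGS="-N 4"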

*NOTES*
ZFS, ext4, MD+LVM, or BTRFS disk subsystem configuration details are specified in each test configuration below.
Low-latency Intel SSD models were used in all cases where SSDs are involved.
When ext4 is used on top of a zvol with default mount options, the journal/metadata writes are synchronous and are therefore offloaded to the ZIL.
Jumbo frames were enabled on all 10GbE interfaces.
The following network tuning was applied to optimize for 10GbE:
net.ipv4.tcp_wmem = 4096 65536 16777216
net.ipv4.tcp_rmem = 4096 87380 16777216
net.core.wmem_max = 16777216
net.core.rmem_max = 16777216
net.core.wmem_default = 65536
net.core.rmem_default = 87380
net.core.netdev_max_backlog = 30000
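
These are runtime sysctls; standard usage (not from the original post) to apply them immediately and persist them across reboots:

sysctl -w net.core.rmem_max=16777216    # apply a single setting on the fly
vi /etc/sysctl.conf                     # append the settings listed above
sysctl -p                               # reload /etc/sysctl.conf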

WinXP qemu-kvm command line:
/usr/libexec/qemu-kvm -name winxpfoo -M rhel6.4.0 -cpu qemu64,-svm \
  -enable-kvm -m 3072 -smp 2,sockets=1,cores=2,threads=1 -nodefconfig -nodefaults \
  -drive file=/foo/winxp.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,werror=stop,rerror=stop,aio=threads \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk0,id=virtio-disk0 \
  -spice port=5900,addr=3.57.109.210,disable-ticketing -k en-us -vga qxl \
  -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 &

SL 6.4 qemu-kvm command line:
/usr/libexec/qemu-kvm -name sl64 -M rhel6.4.0 -cpu qemu64,-svm -enable-kvm \
  -m 3072 -smp 2,sockets=1,cores=2,threads=1 -nodefconfig -nodefaults \
  -drive file=/vmstore/foo/SL64hd0.img,if=none,id=drive-virtio-disk0,format=raw,cache=none,werror=stop,rerror=stop,aio=threads \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x5,drive=drive-virtio-disk0,id=virtio-disk0,bootindex=2 \
  -drive file=/vmstore/foo/SL64hd1.img,if=none,id=drive-virtio-disk1,format=raw,cache=none,werror=stop,rerror=stop,aio=threads \
  -device virtio-blk-pci,scsi=off,bus=pci.0,addr=0x6,drive=drive-virtio-disk1,id=virtio-disk1 \
  -spice port=5900,addr=3.57.109.210,disable-ticketing -k en-us -vga qxl \
  -global qxl-vga.ram_size=67108864 -global qxl-vga.vram_size=67108864 &

-- Test Config #1 --

Storage Server: Sunfire x4140
RAM: 32GB
CPU: 2x AMD Opteron 2439SE
HBA: LSI 3801
Disks: 8x 146GB 10K 2.5" SAS
Disk Config: LSI HW IME Raid + Linux LVM
Filesystem: LVM+ext4
Network: Intel x520DA2 dual-port 10GbE
OS: SL 6.4

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Filesystem: LSI HW Raid + ext4
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Benchmark Results (MB/s):
**** WinXP (read / write) ****
seq: 295.4 189.7
512k: 298.8 188.9
4K: 14.75 12.99
4K QD32: 16.16 14.42

**** SL 6.4 (MB/s) ****
Min Read: 654.3
Max Read: 704.8
Avg Read: 682.5
Min Write: 233.6
Max Write: 551.1
Avg Write: 465.8
Avg Access: 0.5ms

-- End Test Config #1 --

-- Test Config #2 --

Storage Server: Sunfire x4270m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 24x 300GB 10K 2.5" SAS
Disks: 4x 128GB Intel SATA SSD
Disk Config: ZFS Raid10 + Raid 0 ZIL + Raid 0 L2ARC
Filesystem: ZFS ZVOL+ext4
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/ZFS Configuration Details/Commands:
zpool create das0 mirror <dev> <dev> mirror <dev> <dev>... log <ssd> <ssd> cache <ssd> <ssd>
zfs create -s -V 100G -o volblocksize=64K das0/foo
mkfs.ext4 -L foo /dev/das0/foo
mount /dev/das0/foo /zvol/foo -o noatime
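
For concreteness, the same sequence with hypothetical device names; only the pool/zvol names and options come from the post, and the real pool would use twelve mirror pairs for the 24 SAS disks rather than the two shown here:

# Illustrative pool: two mirror pairs, two striped SSD log vdevs (Raid 0 ZIL),
# and two SSD cache devices (L2ARC).
zpool create das0 \
  mirror /dev/sdb /dev/sdc \
  mirror /dev/sdd /dev/sde \
  log /dev/sdz /dev/sdaa \
  cache /dev/sdab /dev/sdac
# Sparse 100G zvol with 64K volblocksize, then ext4 on top of it:
zfs create -s -V 100G -o volblocksize=64K das0/foo
mkfs.ext4 -L foo /dev/das0/foo
mount /dev/das0/foo /zvol/foo -o noatime

Per the ZIL note above, the synchronous ext4 journal traffic should land on the log devices; that can be confirmed under load by watching them with:

zpool iostat -v das0 1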

**** XP (read / write, MB/s) ****
seq: 315.4 273.4
512k: 302.4 263.0
4K: 12.94 8.688
4K QD32: 14.45 13.13

**** SL 6.4 (MB/s) ****
Min Read: 630.0
Max Read: 698.2
Avg Read: 662.8
Min Write: 357.6
Max Write: 1.1 GB/s
Avg Write: 895.3
Avg Access: 0.5ms

-- End Test Config #2 --

-- Test Config #3 --

Storage Server: Sunfire x4270m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 24x 300GB 10K 2.5" SAS
Disks: 4x 128GB Intel SATA SSD
Disk Config: ZFS Raid10 + Raid 0 ZIL + Raid 0 L2ARC
Filesystem: ZFS
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/ZFS Configuration Details/Commands:
zpool create das0 mirror <dev> <dev> mirror <dev> <dev>... log <ssd> <ssd> cache <ssd> <ssd>
zfs create das0/foo

**** XP (read / write, MB/s) ****
seq: 111.4 57.36
512k: 95.64 54.33
4K: 11.80 2.878
4K QD32: 11.31 3.095

-- End Test Config #3 --

-- Test Config #4 --

Storage Server: Sunfire x4170
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 4x 200GB Intel SATA SSD
Disks: 2x 100GB Intel SATA SSD
Disk Config: ZFS Raid10 + ZIL + L2ARC
Filesystem: ZFS ZVOL+ext4
Network: Intel x520DA2 dual-port 10GbE
OS: Fedora 18

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/ZFS Configuration Details/Commands:
zpool create das0 mirror <dev> <dev> mirror <dev> <dev> log <ssd> cache <ssd>
zfs create -s -V 100G -o volblocksize=64K das0/foo
mkfs.ext4 -L foo /dev/das0/foo
mount /dev/das0/foo /zvol/foo -o noatime

**** XP (read / write, MB/s) ****
seq: 196.5 200.2
512k: 195.2 188.8
4K: 9.665 7.043
4K QD32: 12.02 8.140

**** SL 6.4 (MB/s) ****
Min Read: 423.7
Max Read: 575.3
Avg Read: 526.7
Min Write: 44.4
Max Write: 668.8
Avg Write: 540.5
Avg Access: 1.4ms

-- End Test Config #4 --

-- Test Config #5 --

Storage Server: Sunfire x4170
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 4x 200GB Intel SATA SSD
Disks: 2x 100GB Intel SATA SSD
Disk Config: ZFS Raid10 + ZIL + L2ARC
Filesystem: ZFS
Network: Intel x520DA2 dual-port 10GbE
OS: Fedora 18

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/ZFS Configuration Details/Commands:
zpool create das0 mirror <dev> <dev> mirror <dev> <dev> log <ssd> cache <ssd>
zfs create das0/foo

**** XP (read / write, MB/s) ****
seq: 152.5 126.2
512k: 147.4 121.8
4K: 9.189 7.481
4K QD32: 11.90 8.003

**** SL 6.4 (MB/s) ****
Min Read: 409.3
Max Read: 564.4
Avg Read: 511.6
Min Write: 237.1
Max Write: 658.0
Avg Write: 555.3
Avg Access: 2.9ms

-- End Test Config #5 --

-- Test Config #6 --

Storage Server: Sunfire x4170
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 4x 200GB Intel SATA SSD
Disks: 2x 100GB Intel SATA SSD
Disk Config: Linux MD Raid10 + LVM + ext4
Filesystem: ext4
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/ext4 Configuration Details/Commands:
4-disk SSD Linux MD Raid 10, 1MB stripe size
100GB LV (foo) from VG0 via PV MD device (ext4 data)

2-disk SSD Linux MD Raid 0, 1MB stripe size
1GB LV (journaldev) from VG1 via PV MD device (ext4 journal/metadata)

ext4 filesystem on lv-foo with an external journal on lv-journaldev (data=ordered with external journal), mounted at /vg0/foo (mount options rw,noatime,journal_checksum,journal_async_commit)
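
The post describes this layout rather than the exact commands; a sketch of one way to build it, with hypothetical device names:

# 4-disk SSD MD Raid 10 for data, 2-disk SSD MD Raid 0 for the journal (1MB chunk)
mdadm --create /dev/md0 --level=10 --raid-devices=4 --chunk=1024 /dev/sda /dev/sdb /dev/sdc /dev/sdd
mdadm --create /dev/md1 --level=0 --raid-devices=2 --chunk=1024 /dev/sde /dev/sdf

# LVM: one VG per MD PV, then the data and journal LVs
pvcreate /dev/md0 /dev/md1
vgcreate vg0 /dev/md0
vgcreate vg1 /dev/md1
lvcreate -L 100G -n foo vg0
lvcreate -L 1G -n journaldev vg1

# ext4 with its journal on the external SSD Raid 0 device
mke2fs -O journal_dev /dev/vg1/journaldev
mkfs.ext4 -L foo -J device=/dev/vg1/journaldev /dev/vg0/foo
mount /dev/vg0/foo /vg0/foo -o rw,noatime,journal_checksum,journal_async_commit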

**** XP (read / write, MB/s) ****
seq: 251.3 203.9
512k: 249.0 217.7
4K: 10.12 9.012
4K QD32: 13.18 11.54

**** SL 6.4 (MB/s) ****
Min Read: 523.3
Max Read: 663.0
Avg Read: 606.4
Min Write: 239.9
Max Write: 656.0
Avg Write: 563.8
Avg Access: 0.8ms

-- End Test Config #6 --

-- Test Config #7 --

Storage Server: Sunfire x4170
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 4x 200GB Intel SATA SSD
Disks: 2x 100GB Intel SATA SSD
Disk Config: BTRFS Raid10 data + Raid0 metadata
Filesystem: BTRFS
Network: Intel x520DA2 dual-port 10GbE
OS: Fedora 18

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/BTRFS Configuration Details/Commands:
mkfs.btrfs -L foo -m raid0 <ssd> <ssd> -d raid10 <ssd> <ssd> <ssd> <ssd>
btrfs subvolume create /das0/foo
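
Note that mkfs.btrfs applies the -m and -d profiles across all member devices rather than pinning particular devices to metadata or data, so the filesystem is created over all six SSDs in one invocation; with hypothetical device names:

mkfs.btrfs -L foo -m raid0 -d raid10 /dev/sda /dev/sdb /dev/sdc /dev/sdd /dev/sde /dev/sdf
mount LABEL=foo /das0
btrfs subvolume create /das0/foo
btrfs filesystem df /das0    # verify the Data=RAID10 / Metadata=RAID0 profiles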

**** XP (read / write, MB/s) ****
seq: 195.4 180.0
512k: 180.8 174.5
4K: 9.898 7.183
4K QD32: 11.87 8.248

**** SL 6.4 (MB/s) ****
Min Read: 465.4
Max Read: 554.6
Avg Read: 514.1
Min Write: 450.5
Max Write: 670.3
Avg Write: 525.3
Avg Access: 0.8ms

-- End Test Config #7 --

-- Test Config #8 --

Storage Server: Sunfire x4170
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 4x 200GB Intel SATA SSD
Disks: 2x 100GB Intel SATA SSD
Disk Config: BTRFS Raid10 data + Raid0 metadata
Filesystem: BTRFS
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

KVM/NFS Client: Sunfire X4170m2
RAM: 72GB
CPU: 2x Intel E5540
HBA: LSI 9211-8i
Disks: 2x 146GB 10K 2.5" SAS
Disk Config: LSI HW IS Raid
Network: Intel x520DA2 dual-port 10GbE
OS: OEL 6.4

Filesystem/BTRFS Configuration Details/Commands (same as Test Config #7):
mkfs.btrfs -L foo -m raid0 <ssd> <ssd> -d raid10 <ssd> <ssd> <ssd> <ssd>
btrfs subvolume create /das0/foo

**** XP (read / write, MB/s) ****
seq: 232.1 123.4
512k: 243.0 213.6
4K: 9.393 9.170
4K QD32: 12.04 10.74

**** SL 6.4 (MB/s) ****
Min Read: 508.0
Max Read: 652.4
Avg Read: 594.4
Min Write: 409.6
Max Write: 670.4
Avg Write: 525.8
Avg Access: 0.5ms

-- End Test Config #8 --