RAID 0/1/50/60 benchmarks
Storage & Snapshots Manager hands-on
Qtier 2.0
10G network read/write tests
Global SSD Cache hands-on
Qboost - the NAS's CCleaner
QVHelper add-on hands-on
Video Station 360 VR playback test
VJBOD hands-on
Direct phone-to-NAS browsing and backup
Cinema28 multi-room AV hands-on
TS-1079 PRO: a towering presence |
Before the TS-1685 launched, the TS-1079 was, I believe, the largest tower NAS in the QNAP Turbo NAS lineup. It has been on the market for over five years now. By sheer coincidence I picked up this new-old-stock unit this year, but after buying it I just left it sitting around. Play with NAS long enough and it turns into antique collecting: other people buy newer and newer models, while I keep going backwards. Just last week I also fished out a TS-231+.
The box is really big and very well protected, since it ships inside an additional outer carton; it took quite some effort to pull the unit out.

This guy is a cardboard-box addict, and all he does is get in the way.


The TS-1079 Pro is a 10-bay 3.5" NAS. The chassis is the same as the TS-879 Pro; the only difference is the TS-1079's two extra bays at the bottom.

This time the goal is to test the RAID 50/60 support introduced in QTS 4.3.4, which is the main reason for dragging this unit out.

This is my first time using this HDD tray, and its design is excellent: a metal tray with good heat dissipation, plus a lock to prevent accidental removal.

The little model in the back has fallen asleep.

The rear of the TS-1079 Pro.

I bought this unit precisely because its PCIe slot can take a 10G NIC.

Dual 12cm fans, normally spinning at around 1000rpm; it is actually fairly quiet.

Was this area originally designed to take two power supplies?

HDD bay numbering, plus the HDD tray Lock & Unlock instructions.

This...

This is the only reasonably professional model shot.

Still eating...

At launch this unit reportedly listed for about NT$100,000 diskless — frightening.

The built-in 350W power supply has plenty of headroom. It is a proprietary unit, though.

Even the flagship of its day shipped with only 2GB RAM; nowadays you would want at least 4GB*2 to be comfortable.

This is a several-generations-old product with an Intel socket 1155 i3 CPU; see the official site for the full specs.
TS-1079 Pro official specifications
The main point of bringing this unit out is to test RAID 50/60 and the many other features newly added in QTS 4.3.4 Beta.
Note: in the end I did fit this NAS with a Mellanox 10G NIC.

QTS 4.3.4 on the TS-1079 PRO |
On first boot, the login screen showed QTS 4.0.x — it felt like stepping back in time. As for installing QTS 4.3.4 beta, I'll skip the steps: just download the QTS firmware from the official site and update manually. I was initially worried that jumping straight from QTS 4.0 to 4.3.4 would span too many releases, but that turned out to be needless worry.
I moved the two drives from the earlier TS-231+ ARM NAS test into the TS-1079 PRO. It booted normally and all the original data was intact, so swapping disks between ARM and x86 NAS models still works fine.

The TS-1079 PRO uses an i3-2120 CPU (socket 1155); this screenshot is already on QTS 4.3.4.

This is my highest-end TS NAS; I have never seen this many temperature sensors before. It has already been upgraded to 4GB DDR3 RAM*2.

The familiar QTS 4.3 web UI.

Qboost is a new app in QTS 4.3.4; it feels like a cross between the Windows Task Manager and CCleaner.

Since I also want to test Qtier, I installed SSDs as well. Interestingly, with the SSD in Tray #10 the system could not create an SSD cache; it turns out the SSDs must go in Tray #7/8.

And here is why: with the SSD in Tray #7/8, it benchmarks at up to 485MB/s.

In other slots the measured speed is lower, about 376MB/s. My guess is that Tray #7/8 sit on the system's native SATA ports.

Since I plan to build RAID 50, I took the system's hint that there is no need to rush into creating a RAID here.

QTS 4.3.4 adds a Help Center.

I don't have that many spare drives, so I just managed to fill the bays with WD 4TB Red and Toshiba 3TB desktop drives. Tray #7/8 hold the SSDs.

Configured as RAID 50.

This screen explains how to expand capacity later. RAID 50 is simply two RAID 5 sub-arrays striped together with RAID 0.
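The arithmetic of that layout is easy to sanity-check. Below is a back-of-the-envelope helper (my own sketch, nothing from QTS) for the usable capacity of a RAID 50, assuming equal-sized disks; with mixed sizes, as in this build, the smallest disk governs:

```python
def raid50_usable_tb(disks, disks_per_subarray, disk_tb):
    """Usable capacity of RAID 50: the disks are split into equal RAID 5
    sub-arrays, each sub-array gives up one disk's capacity to parity,
    and the sub-arrays are then striped together (RAID 0)."""
    assert disks % disks_per_subarray == 0 and disks_per_subarray >= 3
    subarrays = disks // disks_per_subarray
    return subarrays * (disks_per_subarray - 1) * disk_tb

# e.g. 6 disks as two 3-disk RAID 5 sub-arrays; treat all as 3TB
# (the smallest drive in a mixed WD 4TB / Toshiba 3TB set)
print(raid50_usable_tb(6, 3, 3))  # -> 12
```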

The new QTS 4.3.4 web UI adds a lot of graphical hints. First, create a Storage Pool.

Out of habit I carve out 200GB first as the system volume.


QTS 4.3.4 adds a priority setting for RAID resync/rebuild.

The NAS is now busy with its first RAID resync. While waiting for it to finish, you can get other things done, such as installing your usual apps and creating shared folders.
Note: the array above mixes WD Red NAS drives and Toshiba desktop drives. QTS 4.3.4 provides a disk speed test, and the 7200rpm Toshiba desktop drives are still somewhat faster than the 5xxx rpm NAS drives.
I still prefer the traditional chassis design |
The TS-1079 PRO's chassis design is still the better one.
*Metal body and HDD trays; a more premium feel.
*The HDD tray design cools well, with a generous mesh up front so air flows straight in from the front and out the back.
*The right side of the chassis also has a fine mesh for ventilation.
*The HDD trays lack a traditional key lock, but their two-stage latch locks against accidental removal.
Inserting and removing the TS-1079's HDD trays is easier than with the metal trays of the TS 2/4/6-bay models. This chassis is no longer made, but it truly is a QNAP classic.
As for the PCIe slot, I originally debated between a 10G NIC and a discrete GPU for transcoding. But this unit's power supply is proprietary, with no spare connector for GPU power, so a 10G NIC is the more practical choice.
RAID 0/1/50/60 performance benchmarks |
The main subject this time is the RAID 50/60 support newly added in QTS 4.3.4 beta, with RAID 0/1 tested alongside for comparison.
Three kinds of drives are used below: WD 4TB Red NAS drives, Toshiba 3TB desktop drives, and Micron 275GB SSD*2.

QTS has a built-in performance test, handy for checking disk read performance.
SSD*2 in RAID 0, measured with fio: sequential read/write comes close to 1000MB/s.
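The fio.conf itself isn't shown in this post. Judging from the job lines in the output below (four groups: 1M sequential read/write and 4K random read/write, libaio, iodepth 16, 16GB files, 60-second random runs), it probably looked roughly like this — a reconstruction, not the actual file:

```ini
[global]
ioengine=libaio
direct=1
iodepth=16
size=16g
runtime=60

[read]
rw=read
bs=1m
stonewall

[write]
rw=write
bs=1m
stonewall

[randread_libaio]
rw=randread
bs=4k
stonewall

[randwrite_libaio]
rw=randwrite
bs=4k
stonewall
```

The stonewall on each job is what makes the four jobs run one after another as separate reporting groups (g=0 through g=3) in the output.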
[admin@TS1079PRO ssdraid0]# fio fio.conf
read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
randread_libaio: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
randwrite_libaio: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 4 processes
read: Laying out IO file(s) (1 file(s) / 16384MB)
write: Laying out IO file(s) (1 file(s) / 16384MB)
randread_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
randwrite_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 1 (f=1): [_(3),w(1)] [81.1% done] [0KB/143.5MB/0KB /s] [0/36.8K/0 iops] [eta 00m:36s]
read: (groupid=0, jobs=1): err= 0: pid=4205: Tue Dec 12 15:53:52 2017
read : io=16384MB, bw=1013.2MB/s, iops=1013, runt= 16171msec
slat (usec): min=106, max=1037, avg=176.70, stdev=19.19
clat (usec): min=4195, max=28983, avg=15608.36, stdev=769.34
lat (usec): min=4492, max=29154, avg=15785.32, stdev=765.40
clat percentiles (usec):
| 1.00th=[14656], 5.00th=[15296], 10.00th=[15424], 20.00th=[15424],
| 30.00th=[15424], 40.00th=[15424], 50.00th=[15424], 60.00th=[15424],
| 70.00th=[15424], 80.00th=[15424], 90.00th=[16064], 95.00th=[17024],
| 99.00th=[18560], 99.50th=[19072], 99.90th=[20352], 99.95th=[21376],
| 99.99th=[28032]
bw (KB /s): min=1003520, max=1052672, per=100.00%, avg=1038335.72, stdev=10033.41
lat (msec) : 10=0.20%, 20=99.65%, 50=0.15%
cpu : usr=0.41%, sys=18.33%, ctx=16410, majf=0, minf=4105
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=4896: Tue Dec 12 15:53:52 2017
write: io=16384MB, bw=991.54MB/s, iops=991, runt= 16524msec
slat (usec): min=140, max=14746, avg=221.46, stdev=257.73
clat (msec): min=2, max=33, avg=15.91, stdev= 1.65
lat (msec): min=3, max=33, avg=16.13, stdev= 1.63
clat percentiles (usec):
| 1.00th=[12096], 5.00th=[15424], 10.00th=[15424], 20.00th=[15424],
| 30.00th=[15424], 40.00th=[15424], 50.00th=[15552], 60.00th=[15552],
| 70.00th=[15552], 80.00th=[15808], 90.00th=[17280], 95.00th=[19328],
| 99.00th=[21632], 99.50th=[23680], 99.90th=[27776], 99.95th=[28544],
| 99.99th=[31360]
bw (KB /s): min=979913, max=1042432, per=100.00%, avg=1016290.69, stdev=10468.70
lat (msec) : 4=0.13%, 10=0.54%, 20=95.60%, 50=3.72%
cpu : usr=7.05%, sys=15.20%, ctx=16202, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=16384/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randread_libaio: (groupid=2, jobs=1): err= 0: pid=6664: Tue Dec 12 15:53:52 2017
read : io=14999MB, bw=255971KB/s, iops=63992, runt= 60001msec
slat (usec): min=6, max=6109, avg=13.23, stdev= 7.91
clat (usec): min=75, max=6610, avg=235.23, stdev=66.50
lat (usec): min=88, max=6623, avg=248.65, stdev=67.38
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 161], 10.00th=[ 173], 20.00th=[ 187],
| 30.00th=[ 199], 40.00th=[ 211], 50.00th=[ 223], 60.00th=[ 237],
| 70.00th=[ 253], 80.00th=[ 274], 90.00th=[ 318], 95.00th=[ 358],
| 99.00th=[ 438], 99.50th=[ 478], 99.90th=[ 580], 99.95th=[ 628],
| 99.99th=[ 1128]
bw (KB /s): min=226552, max=291256, per=100.00%, avg=255999.87, stdev=11541.36
lat (usec) : 100=0.01%, 250=68.19%, 500=31.47%, 750=0.32%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
cpu : usr=10.23%, sys=88.59%, ctx=37361, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=3839634/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randwrite_libaio: (groupid=3, jobs=1): err= 0: pid=9812: Tue Dec 12 15:53:52 2017
write: io=10265MB, bw=175192KB/s, iops=43797, runt= 60001msec
slat (usec): min=6, max=208611, avg=20.48, stdev=725.65
clat (usec): min=27, max=209575, avg=343.51, stdev=2833.41
lat (usec): min=42, max=209592, avg=364.16, stdev=2925.82
clat percentiles (usec):
| 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 249],
| 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 262],
| 70.00th=[ 270], 80.00th=[ 314], 90.00th=[ 370], 95.00th=[ 410],
| 99.00th=[ 490], 99.50th=[ 708], 99.90th=[ 3888], 99.95th=[ 9920],
| 99.99th=[154624]
bw (KB /s): min=112840, max=244208, per=100.00%, avg=175624.47, stdev=34455.40
lat (usec) : 50=0.02%, 100=0.03%, 250=27.75%, 500=71.24%, 750=0.49%
lat (usec) : 1000=0.14%
lat (msec) : 2=0.14%, 4=0.10%, 10=0.05%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.03%
cpu : usr=8.15%, sys=73.44%, ctx=57108, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=2627917/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: io=16384MB, aggrb=1013.2MB/s, minb=1013.2MB/s, maxb=1013.2MB/s, mint=16171msec, maxt=16171msec
Run status group 1 (all jobs):
WRITE: io=16384MB, aggrb=991.54MB/s, minb=991.54MB/s, maxb=991.54MB/s, mint=16524msec, maxt=16524msec
Run status group 2 (all jobs):
READ: io=14999MB, aggrb=255971KB/s, minb=255971KB/s, maxb=255971KB/s, mint=60001msec, maxt=60001msec
Run status group 3 (all jobs):
WRITE: io=10265MB, aggrb=175191KB/s, minb=175191KB/s, maxb=175191KB/s, mint=60001msec, maxt=60001msec
Disk stats (read/write):
dm-17: ios=3872410/3184242, merge=0/0, ticks=1170902/24737366, in_queue=25909651, util=98.50%, aggrios=3872410/3198032, aggrmerge=0/0, aggrticks=1167705/25069778, aggrin_queue=26247294, aggrutil=98.46%
dm-16: ios=3872410/3198032, merge=0/0, ticks=1167705/25069778, in_queue=26247294, util=98.46%, aggrios=3872410/3198032, aggrmerge=0/0, aggrticks=1130053/25019392, aggrin_queue=26168363, aggrutil=98.17%
dm-14: ios=3872410/3198032, merge=0/0, ticks=1130053/25019392, in_queue=26168363, util=98.17%, aggrios=968190/799574, aggrmerge=0/0, aggrticks=282431/6254502, aggrin_queue=6538253, aggrutil=98.09%
dm-10: ios=351/0, merge=0/0, ticks=3337/0, in_queue=3337, util=2.16%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd3: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-12: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-13: ios=3872410/3198296, merge=0/0, ticks=1126388/25018009, in_queue=26149677, util=98.09%
RAID 1 (SSD*2) read/write performance: sequential read about 1000MB/s, sequential write about 4XXMB/s.
Note: this indeed reflects QTS RAID 1's parallel-read behavior, which is why reads reach the doubled, combined speed of both SSDs.
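fio 2.x prints some of these bandwidths in KB/s (really KiB/s), so it is worth converting them to check against the rough MB/s figures quoted above; a one-liner suffices:

```python
def fio_kbps_to_mbps(kbps):
    """fio 2.x 'KB/s' is KiB/s; convert to MiB/s."""
    return kbps / 1024

# RAID 1 sequential write from the run above: 479185KB/s
print(round(fio_kbps_to_mbps(479185)))  # -> 468, i.e. the "4XX MB/s" above
# RAID 0 4K random read from the earlier run: 255971KB/s
print(round(fio_kbps_to_mbps(255971)))  # -> 250
```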
[admin@TS1079PRO ssdraid1]# fio fio.conf
read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
randread_libaio: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
randwrite_libaio: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)] [75.9% done] [0KB/103.9MB/0KB /s] [0/26.6K/0 iops] [eta 00m:55s]
read: (groupid=0, jobs=1): err= 0: pid=15535: Tue Dec 12 16:41:21 2017
read : io=16384MB, bw=1002.9MB/s, iops=1002, runt= 16350msec
slat (usec): min=100, max=9873, avg=174.11, stdev=99.06
clat (msec): min=2, max=29, avg=15.79, stdev= 2.19
lat (msec): min=2, max=29, avg=15.96, stdev= 2.19
clat percentiles (usec):
| 1.00th=[11328], 5.00th=[12352], 10.00th=[13376], 20.00th=[13888],
| 30.00th=[14912], 40.00th=[15424], 50.00th=[15552], 60.00th=[16192],
| 70.00th=[17024], 80.00th=[17536], 90.00th=[18304], 95.00th=[19328],
| 99.00th=[21120], 99.50th=[21888], 99.90th=[25216], 99.95th=[26240],
| 99.99th=[29312]
bw (KB /s): min=978944, max=1038336, per=100.00%, avg=1026810.75, stdev=11934.30
lat (msec) : 4=0.10%, 10=0.52%, 20=97.06%, 50=2.31%
cpu : usr=0.43%, sys=17.74%, ctx=14863, majf=0, minf=4105
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=16836: Tue Dec 12 16:41:21 2017
write: io=16384MB, bw=479185KB/s, iops=467, runt= 35012msec
slat (usec): min=135, max=32772, avg=234.41, stdev=552.85
clat (msec): min=3, max=256, avg=33.95, stdev=11.93
lat (msec): min=7, max=256, avg=34.18, stdev=11.91
clat percentiles (msec):
| 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32],
| 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33],
| 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 39],
| 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 149], 99.95th=[ 255],
| 99.99th=[ 258]
bw (KB /s): min=147439, max=529373, per=100.00%, avg=479361.16, stdev=77138.75
lat (msec) : 4=0.01%, 10=0.12%, 20=0.21%, 50=97.69%, 100=0.59%
lat (msec) : 250=1.29%, 500=0.10%
cpu : usr=3.37%, sys=7.54%, ctx=16396, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=16384/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randread_libaio: (groupid=2, jobs=1): err= 0: pid=18397: Tue Dec 12 16:41:21 2017
read : io=13428MB, bw=229163KB/s, iops=57290, runt= 60001msec
slat (usec): min=7, max=8530, avg=13.13, stdev=10.76
clat (usec): min=55, max=9096, avg=264.52, stdev=111.25
lat (usec): min=71, max=9112, avg=277.85, stdev=111.98
clat percentiles (usec):
| 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 157], 20.00th=[ 179],
| 30.00th=[ 199], 40.00th=[ 221], 50.00th=[ 243], 60.00th=[ 266],
| 70.00th=[ 298], 80.00th=[ 338], 90.00th=[ 398], 95.00th=[ 462],
| 99.00th=[ 604], 99.50th=[ 660], 99.90th=[ 796], 99.95th=[ 876],
| 99.99th=[ 1880]
bw (KB /s): min=167552, max=241008, per=99.99%, avg=229150.25, stdev=12138.58
lat (usec) : 100=0.01%, 250=53.15%, 500=43.61%, 750=3.08%, 1000=0.13%
lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%
cpu : usr=11.37%, sys=78.06%, ctx=346987, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=3437507/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randwrite_libaio: (groupid=3, jobs=1): err= 0: pid=21793: Tue Dec 12 16:41:21 2017
write: io=8521.4MB, bw=145426KB/s, iops=36356, runt= 60002msec
slat (usec): min=8, max=357422, avg=22.47, stdev=757.68
clat (usec): min=181, max=357892, avg=415.97, stdev=3005.23
lat (usec): min=215, max=357919, avg=438.66, stdev=3101.59
clat percentiles (usec):
| 1.00th=[ 223], 5.00th=[ 294], 10.00th=[ 302], 20.00th=[ 322],
| 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 346],
| 70.00th=[ 350], 80.00th=[ 358], 90.00th=[ 394], 95.00th=[ 540],
| 99.00th=[ 1608], 99.50th=[ 1960], 99.90th=[ 2928], 99.95th=[ 5152],
| 99.99th=[240640]
bw (KB /s): min=41448, max=239192, per=100.00%, avg=146259.06, stdev=36538.72
lat (usec) : 250=2.69%, 500=91.34%, 750=2.36%, 1000=0.82%
lat (msec) : 2=2.31%, 4=0.39%, 10=0.05%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%
cpu : usr=6.89%, sys=75.82%, ctx=11510, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=2181466/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: io=16384MB, aggrb=1002.9MB/s, minb=1002.9MB/s, maxb=1002.9MB/s, mint=16350msec, maxt=16350msec
Run status group 1 (all jobs):
WRITE: io=16384MB, aggrb=479184KB/s, minb=479184KB/s, maxb=479184KB/s, mint=35012msec, maxt=35012msec
Run status group 2 (all jobs):
READ: io=13428MB, aggrb=229163KB/s, minb=229163KB/s, maxb=229163KB/s, mint=60001msec, maxt=60001msec
Run status group 3 (all jobs):
WRITE: io=8521.4MB, aggrb=145426KB/s, minb=145426KB/s, maxb=145426KB/s, mint=60002msec, maxt=60002msec
Disk stats (read/write):
dm-17: ios=3470279/2339896, merge=0/0, ticks=1290470/17304069, in_queue=18595822, util=99.39%, aggrios=3470279/2342436, aggrmerge=0/0, aggrticks=1287492/17303505, aggrin_queue=18592139, aggrutil=99.36%
dm-16: ios=3470279/2342436, merge=0/0, ticks=1287492/17303505, in_queue=18592139, util=99.36%, aggrios=3470279/2342436, aggrmerge=0/0, aggrticks=1245222/17264742, aggrin_queue=18511438, aggrutil=98.97%
dm-14: ios=3470279/2342436, merge=0/0, ticks=1245222/17264742, in_queue=18511438, util=98.97%, aggrios=867630/585632, aggrmerge=0/0, aggrticks=311463/4315711, aggrin_queue=4627487, aggrutil=98.78%
dm-10: ios=244/0, merge=0/0, ticks=3911/0, in_queue=3911, util=2.27%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd3: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-12: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-13: ios=3470279/2342528, merge=0/0, ticks=1241944/17262847, in_queue=18506038, util=98.78%
RAID 50 read/write performance (HDD*6): sequential R/W is roughly 400MB/s and 200MB/s.
[admin@TS1079PRO video]# fio fio.conf
read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
randread_libaio: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
randwrite_libaio: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 4 processes
read: Laying out IO file(s) (1 file(s) / 16384MB)
write: Laying out IO file(s) (1 file(s) / 16384MB)
randread_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
randwrite_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 1 (f=1): [_(3),w(1)] [58.1% done] [0KB/1822KB/0KB /s] [0/455/0 iops] [eta 02m:40s]
read: (groupid=0, jobs=1): err= 0: pid=4107: Tue Dec 12 16:06:05 2017
read : io=16384MB, bw=411650KB/s, iops=402, runt= 40756msec
slat (usec): min=87, max=1721, avg=158.56, stdev=29.43
clat (msec): min=1, max=1965, avg=39.63, stdev=78.51
lat (msec): min=1, max=1965, avg=39.79, stdev=78.51
clat percentiles (usec):
| 1.00th=[ 1336], 5.00th=[ 1352], 10.00th=[ 1688], 20.00th=[ 2512],
| 30.00th=[ 5536], 40.00th=[ 9024], 50.00th=[13888], 60.00th=[18560],
| 70.00th=[31616], 80.00th=[52992], 90.00th=[113152], 95.00th=[171008],
| 99.00th=[292864], 99.50th=[403456], 99.90th=[962560], 99.95th=[1122304],
| 99.99th=[1957888]
bw (KB /s): min=238933, max=699665, per=100.00%, avg=412064.33, stdev=127104.82
lat (msec) : 2=13.16%, 4=12.74%, 10=16.66%, 20=19.12%, 50=17.29%
lat (msec) : 100=8.78%, 250=10.59%, 500=1.35%, 750=0.12%, 1000=0.13%
lat (msec) : 2000=0.07%
cpu : usr=0.24%, sys=6.47%, ctx=15502, majf=0, minf=4106
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=6076: Tue Dec 12 16:06:05 2017
write: io=12514MB, bw=212274KB/s, iops=207, runt= 60367msec
slat (usec): min=114, max=105657, avg=436.72, stdev=1263.12
clat (msec): min=10, max=514, avg=76.59, stdev=49.83
lat (msec): min=10, max=514, avg=77.03, stdev=49.88
clat percentiles (msec):
| 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53],
| 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71],
| 70.00th=[ 76], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 135],
| 99.00th=[ 371], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 490],
| 99.99th=[ 498]
bw (KB /s): min=55296, max=273884, per=100.00%, avg=214156.38, stdev=47996.38
lat (msec) : 20=0.05%, 50=14.06%, 100=73.69%, 250=10.45%, 500=1.75%
lat (msec) : 750=0.01%
cpu : usr=1.48%, sys=7.29%, ctx=6317, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=12514/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randread_libaio: (groupid=2, jobs=1): err= 0: pid=9296: Tue Dec 12 16:06:05 2017
read : io=236056KB, bw=3931.1KB/s, iops=982, runt= 60036msec
slat (usec): min=5, max=638, avg=24.54, stdev= 8.12
clat (usec): min=43, max=166745, avg=16248.29, stdev=13723.49
lat (usec): min=57, max=166766, avg=16273.15, stdev=13723.22
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 7],
| 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15],
| 70.00th=[ 18], 80.00th=[ 23], 90.00th=[ 33], 95.00th=[ 44],
| 99.00th=[ 72], 99.50th=[ 83], 99.90th=[ 108], 99.95th=[ 117],
| 99.99th=[ 139]
bw (KB /s): min= 2698, max= 4464, per=100.00%, avg=3939.48, stdev=431.20
lat (usec) : 50=0.01%, 100=0.16%, 250=0.07%, 500=0.05%, 750=0.03%
lat (usec) : 1000=0.02%
lat (msec) : 2=0.53%, 4=5.65%, 10=31.26%, 20=37.51%, 50=21.32%
lat (msec) : 100=3.22%, 250=0.16%
cpu : usr=0.52%, sys=2.77%, ctx=57495, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=59014/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randwrite_libaio: (groupid=3, jobs=1): err= 0: pid=11303: Tue Dec 12 16:06:05 2017
write: io=110836KB, bw=1844.7KB/s, iops=461, runt= 60086msec
slat (usec): min=6, max=308389, avg=72.50, stdev=2599.21
clat (usec): min=137, max=722814, avg=34615.40, stdev=41635.21
lat (usec): min=171, max=722829, avg=34688.28, stdev=41756.12
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 11],
| 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 26],
| 70.00th=[ 35], 80.00th=[ 49], 90.00th=[ 77], 95.00th=[ 110],
| 99.00th=[ 221], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 388],
| 99.99th=[ 570]
bw (KB /s): min= 517, max= 2597, per=100.00%, avg=1846.87, stdev=455.75
lat (usec) : 250=0.09%, 500=0.10%, 750=0.09%, 1000=0.07%
lat (msec) : 2=0.11%, 4=1.53%, 10=15.81%, 20=31.81%, 50=30.97%
lat (msec) : 100=13.51%, 250=5.24%, 500=0.63%, 750=0.03%
cpu : usr=0.24%, sys=1.74%, ctx=24470, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=27709/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
Run status group 0 (all jobs):
READ: io=16384MB, aggrb=411650KB/s, minb=411650KB/s, maxb=411650KB/s, mint=40756msec, maxt=40756msec
Run status group 1 (all jobs):
WRITE: io=12514MB, aggrb=212273KB/s, minb=212273KB/s, maxb=212273KB/s, mint=60367msec, maxt=60367msec
Run status group 2 (all jobs):
READ: io=236056KB, aggrb=3931KB/s, minb=3931KB/s, maxb=3931KB/s, mint=60036msec, maxt=60036msec
Run status group 3 (all jobs):
WRITE: io=110836KB, aggrb=1844KB/s, minb=1844KB/s, maxb=1844KB/s, mint=60086msec, maxt=60086msec
Disk stats (read/write):
dm-9: ios=91788/55387, merge=0/0, ticks=1740226/2762764, in_queue=4504234, util=99.74%, aggrios=91788/55742, aggrmerge=0/0, aggrticks=1740107/2776783, aggrin_queue=4517153, aggrutil=99.73%
dm-8: ios=91788/55742, merge=0/0, ticks=1740107/2776783, in_queue=4517153, util=99.73%, aggrios=92882/56921, aggrmerge=0/0, aggrticks=1740117/2928367, aggrin_queue=4668927, aggrutil=99.73%
dm-4: ios=92882/56921, merge=0/0, ticks=1740117/2928367, in_queue=4668927, util=99.73%, aggrios=23324/14275, aggrmerge=0/0, aggrticks=436704/740563, aggrin_queue=1177404, aggrutil=99.74%
dm-0: ios=416/0, merge=0/0, ticks=6838/0, in_queue=6838, util=2.70%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-3: ios=92882/57101, merge=0/0, ticks=1739979/2962253, in_queue=4702778, util=99.74%
According to QNAP, RAID 50/60 is mainly about strengthening data protection, with RAID 50 positioned for high-frequency backup servers.
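That protection claim is easy to make concrete. A small sketch (my own illustration, with hypothetical disk numbering) enumerating two-disk failures on this 6-disk RAID 50 shows that, unlike a single RAID 5, it survives a second failure as long as the two dead disks land in different sub-arrays:

```python
from itertools import combinations

def raid50_survives(failed, subarrays):
    """RAID 50 stays online as long as no RAID 5 sub-array loses 2+ disks."""
    return all(len(set(failed) & sa) <= 1 for sa in subarrays)

# Two 3-disk RAID 5 sub-arrays, disks numbered 0-5 (hypothetical layout)
subarrays = [{0, 1, 2}, {3, 4, 5}]
pairs = list(combinations(range(6), 2))
ok = sum(raid50_survives(p, subarrays) for p in pairs)
print(f"{ok} of {len(pairs)} two-disk failures survivable")  # -> 9 of 15
```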
Summary: RAID 0/1 performance was as expected, and RAID 1 did demonstrate QTS's ability to read from the mirrored disks in parallel for doubled read speed, but RAID 50 performance seems less than ideal.