Old Gear Unboxing: QNAP TS-1079 PRO & QTS 4.3.4

This QNAP TS-1079 PRO has been sitting in its box for several months now, parked in a corner while the cats treat it as a jungle gym. Time to finally power it on, and go straight to the QTS 4.3.4 Beta.


RAID 0/1/50/60 benchmarks
Storage & Snapshots hands-on
Qtier 2.0
10G network read/write tests
Global SSD cache tests
Qboost - the CCleaner of the NAS
QVHelper add-on hands-on
Video Station 360 VR playback tests
VJBOD tests
Direct phone-to-NAS browsing and backup
Cinema28 multi-room media tests


The imposing bulk of the TS-1079 PRO


Before the TS-1685 arrived, the TS-1079 was, I believe, the biggest tower model in the QNAP Turbo NAS line. It has been on the market for over five years; through a twist of fate I picked up this new-old-stock unit this year, and then it just sat there. Play with NAS long enough and it turns into antique collecting: other people keep buying newer, I keep buying older. Just last week I also fished out a TS-231+.

The box really is huge and very well protected, with an extra outer carton around it; getting it out took some effort.

This guy is a cardboard-box addict; he only makes things harder.


The TS-1079 Pro is a 10-bay 3.5" NAS. The chassis is identical to the TS-879 Pro; the only difference is the two extra bays at the bottom of the TS-1079.

The main reason for dragging this unit out is to test the RAID 50/60 support introduced in QTS 4.3.4.

This is my first time using this HDD tray, and the design is excellent: a metal caddy that dissipates heat well, plus a latch lock to prevent accidental removal.

The little model in the back has fallen asleep.

The rear of the TS-1079 Pro.

I bought this unit precisely for the PCIe slot, which can take a 10G NIC.

Dual 12cm fans, normally running around 1000rpm; it is actually fairly quiet.

Was this layout originally designed to take two power supplies?

HDD bay numbering, plus the HDD tray Lock & Unlock instructions.

This...

This is the only reasonably professional model shot.

And now he's chewing on it...

When it launched, the diskless unit reportedly went for NT$100,000. Frightening.

The built-in 350W power supply has plenty of muscle, but it is a proprietary unit.

Even as the flagship of its day it came with only 2GB of RAM; these days you need at least 4GB*2 to get by.


This model is several generations old now, with an Intel socket 1155 i3 CPU; see the official site for the spec details.
TS-1079 Pro official specifications

The main point of bringing this unit out is to test the newly added RAID 50/60 and the many other features in the QTS 4.3.4 Beta.

Note: this NAS did eventually get a Mellanox 10G NIC.


QTS 4.3.4 on the TS-1079 PRO

On first boot, QTS reported itself as version 4.0.x, which felt like stepping back in time. As for installing the QTS 4.3.4 beta, I will skip the steps: just download the QTS firmware from the official site and apply it as a manual update. I was a little worried that jumping straight from QTS 4.0 to 4.3.4 would span too many releases, but that turned out to be needless.

I moved the two drives from my earlier TS-231+ ARM NAS tests into the TS-1079 PRO: it booted normally and all the original data was intact, so swapping drives between ARM and x86 NAS models still works fine.

The TS-1079 PRO runs an i3-2120 CPU (socket 1155); this screenshot is already on QTS 4.3.4.

This is my highest-end TS NAS yet; I have never seen this many temperature sensors before. The RAM has already been upgraded to 4GB DDR3 * 2.

The familiar QTS 4.3 web UI.

Qboost is a new app in QTS 4.3.4; it feels like a mix of the Windows Task Manager and CCleaner.

Since Qtier is also on the test list, I installed SSDs as well. Interestingly, with the SSD in Tray #10 the system could not create an SSD cache; it turns out the SSDs have to go into Tray #7/8.

And here is why: installed in Tray #7/8, the SSD benchmarks at up to 485MB/s.

In the other slots it measures slower, about 376MB/s. My guess is that Tray #7/8 are wired to the system's native SATA ports.
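
To double-check this outside the QTS GUI, a quick raw sequential-read test over SSH also works. A minimal sketch, assuming the SSD shows up as /dev/sdg (a hypothetical device name; confirm with fdisk -l first):

# Read 4GB straight off the device, bypassing the page cache.
# /dev/sdg is a placeholder - verify the device name before running!
dd if=/dev/sdg of=/dev/null bs=1M count=4096 iflag=direct

If the dd build on the NAS does not support iflag=direct, drop the flag and use a count well above the installed RAM size to dilute caching effects.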

Getting ready to build the RAID 50/60; the system advises there is no need to rush into creating the RAID right away.

QTS 4.3.4 adds a Help Center.

I do not have that many drives on hand, so I barely filled the bays with WD 4TB Red and Toshiba 3TB desktop drives. Tray #7/8 are kept for the SSDs.

Configured as RAID 50.

This screen explains how to expand capacity later. RAID 50 is simply two RAID 5 sub-arrays striped together as RAID 0.
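
As a rough sanity check on the numbers (illustrative only; array capacity is governed by the smallest member disk, here 3TB):

6 disks -> two RAID 5 sub-arrays of 3 disks each
per sub-array: (3 - 1) x 3TB = 6TB usable
RAID 50 total: 2 x 6TB = 12TB usable
fault tolerance: up to one failed disk per sub-array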

The new QTS 4.3.4 web UI adds many graphical hints. First, create the Storage Pool.

Out of habit I carve out 200GB first as a dedicated system volume.


QTS 4.3.4 adds a priority setting for RAID resync/rebuild.
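
My assumption is that this GUI knob maps onto the standard Linux md rebuild speed limits, which can also be inspected over SSH (the GUI setting may well do more than this):

cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# e.g. raise the floor so the rebuild is not starved by foreground I/O
echo 50000 > /proc/sys/dev/raid/speed_limit_min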

The NAS is now busy with its first RAID resync. While it grinds away you can take care of other things, such as installing your usual apps and creating shared folders.

Note: the array above mixes WD Red NAS drives with Toshiba desktop drives. QTS 4.3.4 provides a speed test, and the 7200rpm Toshiba desktop drives are indeed somewhat faster than the 5xxx rpm NAS drives.


I still prefer the traditional chassis design

The TS-1079 PRO's mechanical design really holds up:

*Metal body and HDD trays; the build quality feels better.
*The HDD tray design cools well: the front is full of mesh so air flows straight in from the front and exhausts out the back.
*The right side of the chassis also uses a fine mesh for ventilation.
*The HDD trays have no traditional key, but the two-stage latch locks against accidental removal.

Inserting and removing the TS-1079's HDD trays is easier than with the metal caddies on the 2/4/6-bay TS models. This chassis is no longer made, but it truly is a QNAP classic.

As for the PCIe slot, I originally debated between a 10G NIC and a discrete GPU for transcoding. But since the power supply is proprietary, with no spare connector for GPU power, the 10G NIC is the more practical choice.




RAID 0/1/50/60 Performance Tests

The main goal is to test the RAID 50/60 newly added in the QTS 4.3.4 beta, with RAID 0/1 measured as well for comparison.

Three kinds of drives are used, as shown below: WD 4TB Red NAS drives, Toshiba 3TB desktop drives, and Micron 275GB SSD * 2.

QTS has a built-in benchmark that makes it easy to check drive read performance.

SSD * 2 in RAID 0, measured with fio: sequential R/W comes close to 1000MB/s.
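
The fio.conf itself is not shown here, but it can be reconstructed roughly from the output below: four stonewalled groups, libaio, iodepth 16, 1M sequential and 4K random jobs over a 16GB file. A sketch under those assumptions (the directory path is a placeholder for the volume under test, and direct=1 is assumed):

[global]
ioengine=libaio
iodepth=16
direct=1
size=16g
# placeholder path - point this at the volume being tested
directory=/share/test

[read]
rw=read
bs=1M
stonewall

[write]
rw=write
bs=1M
stonewall

[randread_libaio]
rw=randread
bs=4K
runtime=60
stonewall

[randwrite_libaio]
rw=randwrite
bs=4K
runtime=60
stonewall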

[admin@TS1079PRO ssdraid0]# fio fio.conf
read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
randread_libaio: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
randwrite_libaio: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 4 processes
read: Laying out IO file(s) (1 file(s) / 16384MB)
write: Laying out IO file(s) (1 file(s) / 16384MB)
randread_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
randwrite_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 1 (f=1): [_(3),w(1)] [81.1% done] [0KB/143.5MB/0KB /s] [0/36.8K/0 iops] [eta 00m:36s]
read: (groupid=0, jobs=1): err= 0: pid=4205: Tue Dec 12 15:53:52 2017
read : io=16384MB, bw=1013.2MB/s, iops=1013, runt= 16171msec
slat (usec): min=106, max=1037, avg=176.70, stdev=19.19
clat (usec): min=4195, max=28983, avg=15608.36, stdev=769.34
lat (usec): min=4492, max=29154, avg=15785.32, stdev=765.40
clat percentiles (usec):
| 1.00th=[14656], 5.00th=[15296], 10.00th=[15424], 20.00th=[15424],
| 30.00th=[15424], 40.00th=[15424], 50.00th=[15424], 60.00th=[15424],
| 70.00th=[15424], 80.00th=[15424], 90.00th=[16064], 95.00th=[17024],
| 99.00th=[18560], 99.50th=[19072], 99.90th=[20352], 99.95th=[21376],
| 99.99th=[28032]
bw (KB /s): min=1003520, max=1052672, per=100.00%, avg=1038335.72, stdev=10033.41
lat (msec) : 10=0.20%, 20=99.65%, 50=0.15%
cpu : usr=0.41%, sys=18.33%, ctx=16410, majf=0, minf=4105
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=4896: Tue Dec 12 15:53:52 2017
write: io=16384MB, bw=991.54MB/s, iops=991, runt= 16524msec
slat (usec): min=140, max=14746, avg=221.46, stdev=257.73
clat (msec): min=2, max=33, avg=15.91, stdev= 1.65
lat (msec): min=3, max=33, avg=16.13, stdev= 1.63
clat percentiles (usec):
| 1.00th=[12096], 5.00th=[15424], 10.00th=[15424], 20.00th=[15424],
| 30.00th=[15424], 40.00th=[15424], 50.00th=[15552], 60.00th=[15552],
| 70.00th=[15552], 80.00th=[15808], 90.00th=[17280], 95.00th=[19328],
| 99.00th=[21632], 99.50th=[23680], 99.90th=[27776], 99.95th=[28544],
| 99.99th=[31360]
bw (KB /s): min=979913, max=1042432, per=100.00%, avg=1016290.69, stdev=10468.70
lat (msec) : 4=0.13%, 10=0.54%, 20=95.60%, 50=3.72%
cpu : usr=7.05%, sys=15.20%, ctx=16202, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=16384/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randread_libaio: (groupid=2, jobs=1): err= 0: pid=6664: Tue Dec 12 15:53:52 2017
read : io=14999MB, bw=255971KB/s, iops=63992, runt= 60001msec
slat (usec): min=6, max=6109, avg=13.23, stdev= 7.91
clat (usec): min=75, max=6610, avg=235.23, stdev=66.50
lat (usec): min=88, max=6623, avg=248.65, stdev=67.38
clat percentiles (usec):
| 1.00th=[ 141], 5.00th=[ 161], 10.00th=[ 173], 20.00th=[ 187],
| 30.00th=[ 199], 40.00th=[ 211], 50.00th=[ 223], 60.00th=[ 237],
| 70.00th=[ 253], 80.00th=[ 274], 90.00th=[ 318], 95.00th=[ 358],
| 99.00th=[ 438], 99.50th=[ 478], 99.90th=[ 580], 99.95th=[ 628],
| 99.99th=[ 1128]
bw (KB /s): min=226552, max=291256, per=100.00%, avg=255999.87, stdev=11541.36
lat (usec) : 100=0.01%, 250=68.19%, 500=31.47%, 750=0.32%, 1000=0.01%
lat (msec) : 2=0.01%, 4=0.01%, 10=0.01%
cpu : usr=10.23%, sys=88.59%, ctx=37361, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=3839634/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randwrite_libaio: (groupid=3, jobs=1): err= 0: pid=9812: Tue Dec 12 15:53:52 2017
write: io=10265MB, bw=175192KB/s, iops=43797, runt= 60001msec
slat (usec): min=6, max=208611, avg=20.48, stdev=725.65
clat (usec): min=27, max=209575, avg=343.51, stdev=2833.41
lat (usec): min=42, max=209592, avg=364.16, stdev=2925.82
clat percentiles (usec):
| 1.00th=[ 233], 5.00th=[ 243], 10.00th=[ 245], 20.00th=[ 249],
| 30.00th=[ 251], 40.00th=[ 253], 50.00th=[ 255], 60.00th=[ 262],
| 70.00th=[ 270], 80.00th=[ 314], 90.00th=[ 370], 95.00th=[ 410],
| 99.00th=[ 490], 99.50th=[ 708], 99.90th=[ 3888], 99.95th=[ 9920],
| 99.99th=[154624]
bw (KB /s): min=112840, max=244208, per=100.00%, avg=175624.47, stdev=34455.40
lat (usec) : 50=0.02%, 100=0.03%, 250=27.75%, 500=71.24%, 750=0.49%
lat (usec) : 1000=0.14%
lat (msec) : 2=0.14%, 4=0.10%, 10=0.05%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.03%
cpu : usr=8.15%, sys=73.44%, ctx=57108, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=2627917/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=1013.2MB/s, minb=1013.2MB/s, maxb=1013.2MB/s, mint=16171msec, maxt=16171msec

Run status group 1 (all jobs):
WRITE: io=16384MB, aggrb=991.54MB/s, minb=991.54MB/s, maxb=991.54MB/s, mint=16524msec, maxt=16524msec

Run status group 2 (all jobs):
READ: io=14999MB, aggrb=255971KB/s, minb=255971KB/s, maxb=255971KB/s, mint=60001msec, maxt=60001msec

Run status group 3 (all jobs):
WRITE: io=10265MB, aggrb=175191KB/s, minb=175191KB/s, maxb=175191KB/s, mint=60001msec, maxt=60001msec

Disk stats (read/write):
dm-17: ios=3872410/3184242, merge=0/0, ticks=1170902/24737366, in_queue=25909651, util=98.50%, aggrios=3872410/3198032, aggrmerge=0/0, aggrticks=1167705/25069778, aggrin_queue=26247294, aggrutil=98.46%
dm-16: ios=3872410/3198032, merge=0/0, ticks=1167705/25069778, in_queue=26247294, util=98.46%, aggrios=3872410/3198032, aggrmerge=0/0, aggrticks=1130053/25019392, aggrin_queue=26168363, aggrutil=98.17%
dm-14: ios=3872410/3198032, merge=0/0, ticks=1130053/25019392, in_queue=26168363, util=98.17%, aggrios=968190/799574, aggrmerge=0/0, aggrticks=282431/6254502, aggrin_queue=6538253, aggrutil=98.09%
dm-10: ios=351/0, merge=0/0, ticks=3337/0, in_queue=3337, util=2.16%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd3: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-12: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-13: ios=3872410/3198296, merge=0/0, ticks=1126388/25018009, in_queue=26149677, util=98.09%



RAID 1 (SSD*2) read/write performance: sequential read about 1000MB/s, sequential write about 4xxMB/s.
Note: this confirms that QTS RAID 1 reads from both members in parallel, which is why reads reach double the speed of a single SSD.
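
One way to see the parallel reads at the block level (standard Linux sysfs, so it should apply to QTS's md-based RAID; the device names are hypothetical):

# column 1 of /sys/block/<dev>/stat counts completed reads
cat /sys/block/sdg/stat /sys/block/sdh/stat    # snapshot before
# ... run the fio sequential read job ...
cat /sys/block/sdg/stat /sys/block/sdh/stat    # snapshot after
# if both members' read counters grow by similar amounts,
# reads are indeed being served from both SSDs in parallel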

[admin@TS1079PRO ssdraid1]# fio fio.conf
read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
randread_libaio: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
randwrite_libaio: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)] [75.9% done] [0KB/103.9MB/0KB /s] [0/26.6K/0 iops] [eta 00m:55s]
read: (groupid=0, jobs=1): err= 0: pid=15535: Tue Dec 12 16:41:21 2017
read : io=16384MB, bw=1002.9MB/s, iops=1002, runt= 16350msec
slat (usec): min=100, max=9873, avg=174.11, stdev=99.06
clat (msec): min=2, max=29, avg=15.79, stdev= 2.19
lat (msec): min=2, max=29, avg=15.96, stdev= 2.19
clat percentiles (usec):
| 1.00th=[11328], 5.00th=[12352], 10.00th=[13376], 20.00th=[13888],
| 30.00th=[14912], 40.00th=[15424], 50.00th=[15552], 60.00th=[16192],
| 70.00th=[17024], 80.00th=[17536], 90.00th=[18304], 95.00th=[19328],
| 99.00th=[21120], 99.50th=[21888], 99.90th=[25216], 99.95th=[26240],
| 99.99th=[29312]
bw (KB /s): min=978944, max=1038336, per=100.00%, avg=1026810.75, stdev=11934.30
lat (msec) : 4=0.10%, 10=0.52%, 20=97.06%, 50=2.31%
cpu : usr=0.43%, sys=17.74%, ctx=14863, majf=0, minf=4105
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=16836: Tue Dec 12 16:41:21 2017
write: io=16384MB, bw=479185KB/s, iops=467, runt= 35012msec
slat (usec): min=135, max=32772, avg=234.41, stdev=552.85
clat (msec): min=3, max=256, avg=33.95, stdev=11.93
lat (msec): min=7, max=256, avg=34.18, stdev=11.91
clat percentiles (msec):
| 1.00th=[ 31], 5.00th=[ 32], 10.00th=[ 32], 20.00th=[ 32],
| 30.00th=[ 32], 40.00th=[ 32], 50.00th=[ 33], 60.00th=[ 33],
| 70.00th=[ 33], 80.00th=[ 33], 90.00th=[ 37], 95.00th=[ 39],
| 99.00th=[ 109], 99.50th=[ 115], 99.90th=[ 149], 99.95th=[ 255],
| 99.99th=[ 258]
bw (KB /s): min=147439, max=529373, per=100.00%, avg=479361.16, stdev=77138.75
lat (msec) : 4=0.01%, 10=0.12%, 20=0.21%, 50=97.69%, 100=0.59%
lat (msec) : 250=1.29%, 500=0.10%
cpu : usr=3.37%, sys=7.54%, ctx=16396, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=16384/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randread_libaio: (groupid=2, jobs=1): err= 0: pid=18397: Tue Dec 12 16:41:21 2017
read : io=13428MB, bw=229163KB/s, iops=57290, runt= 60001msec
slat (usec): min=7, max=8530, avg=13.13, stdev=10.76
clat (usec): min=55, max=9096, avg=264.52, stdev=111.25
lat (usec): min=71, max=9112, avg=277.85, stdev=111.98
clat percentiles (usec):
| 1.00th=[ 131], 5.00th=[ 143], 10.00th=[ 157], 20.00th=[ 179],
| 30.00th=[ 199], 40.00th=[ 221], 50.00th=[ 243], 60.00th=[ 266],
| 70.00th=[ 298], 80.00th=[ 338], 90.00th=[ 398], 95.00th=[ 462],
| 99.00th=[ 604], 99.50th=[ 660], 99.90th=[ 796], 99.95th=[ 876],
| 99.99th=[ 1880]
bw (KB /s): min=167552, max=241008, per=99.99%, avg=229150.25, stdev=12138.58
lat (usec) : 100=0.01%, 250=53.15%, 500=43.61%, 750=3.08%, 1000=0.13%
lat (msec) : 2=0.02%, 4=0.01%, 10=0.01%
cpu : usr=11.37%, sys=78.06%, ctx=346987, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=3437507/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randwrite_libaio: (groupid=3, jobs=1): err= 0: pid=21793: Tue Dec 12 16:41:21 2017
write: io=8521.4MB, bw=145426KB/s, iops=36356, runt= 60002msec
slat (usec): min=8, max=357422, avg=22.47, stdev=757.68
clat (usec): min=181, max=357892, avg=415.97, stdev=3005.23
lat (usec): min=215, max=357919, avg=438.66, stdev=3101.59
clat percentiles (usec):
| 1.00th=[ 223], 5.00th=[ 294], 10.00th=[ 302], 20.00th=[ 322],
| 30.00th=[ 330], 40.00th=[ 334], 50.00th=[ 342], 60.00th=[ 346],
| 70.00th=[ 350], 80.00th=[ 358], 90.00th=[ 394], 95.00th=[ 540],
| 99.00th=[ 1608], 99.50th=[ 1960], 99.90th=[ 2928], 99.95th=[ 5152],
| 99.99th=[240640]
bw (KB /s): min=41448, max=239192, per=100.00%, avg=146259.06, stdev=36538.72
lat (usec) : 250=2.69%, 500=91.34%, 750=2.36%, 1000=0.82%
lat (msec) : 2=2.31%, 4=0.39%, 10=0.05%, 20=0.01%, 50=0.01%
lat (msec) : 100=0.01%, 250=0.01%, 500=0.01%
cpu : usr=6.89%, sys=75.82%, ctx=11510, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=2181466/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=1002.9MB/s, minb=1002.9MB/s, maxb=1002.9MB/s, mint=16350msec, maxt=16350msec

Run status group 1 (all jobs):
WRITE: io=16384MB, aggrb=479184KB/s, minb=479184KB/s, maxb=479184KB/s, mint=35012msec, maxt=35012msec

Run status group 2 (all jobs):
READ: io=13428MB, aggrb=229163KB/s, minb=229163KB/s, maxb=229163KB/s, mint=60001msec, maxt=60001msec

Run status group 3 (all jobs):
WRITE: io=8521.4MB, aggrb=145426KB/s, minb=145426KB/s, maxb=145426KB/s, mint=60002msec, maxt=60002msec

Disk stats (read/write):
dm-17: ios=3470279/2339896, merge=0/0, ticks=1290470/17304069, in_queue=18595822, util=99.39%, aggrios=3470279/2342436, aggrmerge=0/0, aggrticks=1287492/17303505, aggrin_queue=18592139, aggrutil=99.36%
dm-16: ios=3470279/2342436, merge=0/0, ticks=1287492/17303505, in_queue=18592139, util=99.36%, aggrios=3470279/2342436, aggrmerge=0/0, aggrticks=1245222/17264742, aggrin_queue=18511438, aggrutil=98.97%
dm-14: ios=3470279/2342436, merge=0/0, ticks=1245222/17264742, in_queue=18511438, util=98.97%, aggrios=867630/585632, aggrmerge=0/0, aggrticks=311463/4315711, aggrin_queue=4627487, aggrutil=98.78%
dm-10: ios=244/0, merge=0/0, ticks=3911/0, in_queue=3911, util=2.27%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd3: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-11: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-12: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-13: ios=3470279/2342528, merge=0/0, ticks=1241944/17262847, in_queue=18506038, util=98.78%



RAID 50 read/write performance (HDD*6): sequential R/W is roughly 400MB/s and 200MB/s.

[admin@TS1079PRO video]# fio fio.conf
read: (g=0): rw=read, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
write: (g=1): rw=write, bs=1M-1M/1M-1M/1M-1M, ioengine=libaio, iodepth=16
randread_libaio: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
randwrite_libaio: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=16
fio-2.2.10
Starting 4 processes
read: Laying out IO file(s) (1 file(s) / 16384MB)
write: Laying out IO file(s) (1 file(s) / 16384MB)
randread_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
randwrite_libaio: Laying out IO file(s) (1 file(s) / 16384MB)
Jobs: 1 (f=1): [_(3),w(1)] [58.1% done] [0KB/1822KB/0KB /s] [0/455/0 iops] [eta 02m:40s]
read: (groupid=0, jobs=1): err= 0: pid=4107: Tue Dec 12 16:06:05 2017
read : io=16384MB, bw=411650KB/s, iops=402, runt= 40756msec
slat (usec): min=87, max=1721, avg=158.56, stdev=29.43
clat (msec): min=1, max=1965, avg=39.63, stdev=78.51
lat (msec): min=1, max=1965, avg=39.79, stdev=78.51
clat percentiles (usec):
| 1.00th=[ 1336], 5.00th=[ 1352], 10.00th=[ 1688], 20.00th=[ 2512],
| 30.00th=[ 5536], 40.00th=[ 9024], 50.00th=[13888], 60.00th=[18560],
| 70.00th=[31616], 80.00th=[52992], 90.00th=[113152], 95.00th=[171008],
| 99.00th=[292864], 99.50th=[403456], 99.90th=[962560], 99.95th=[1122304],
| 99.99th=[1957888]
bw (KB /s): min=238933, max=699665, per=100.00%, avg=412064.33, stdev=127104.82
lat (msec) : 2=13.16%, 4=12.74%, 10=16.66%, 20=19.12%, 50=17.29%
lat (msec) : 100=8.78%, 250=10.59%, 500=1.35%, 750=0.12%, 1000=0.13%
lat (msec) : 2000=0.07%
cpu : usr=0.24%, sys=6.47%, ctx=15502, majf=0, minf=4106
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=16384/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
write: (groupid=1, jobs=1): err= 0: pid=6076: Tue Dec 12 16:06:05 2017
write: io=12514MB, bw=212274KB/s, iops=207, runt= 60367msec
slat (usec): min=114, max=105657, avg=436.72, stdev=1263.12
clat (msec): min=10, max=514, avg=76.59, stdev=49.83
lat (msec): min=10, max=514, avg=77.03, stdev=49.88
clat percentiles (msec):
| 1.00th=[ 34], 5.00th=[ 43], 10.00th=[ 47], 20.00th=[ 53],
| 30.00th=[ 58], 40.00th=[ 62], 50.00th=[ 66], 60.00th=[ 71],
| 70.00th=[ 76], 80.00th=[ 86], 90.00th=[ 108], 95.00th=[ 135],
| 99.00th=[ 371], 99.50th=[ 433], 99.90th=[ 469], 99.95th=[ 490],
| 99.99th=[ 498]
bw (KB /s): min=55296, max=273884, per=100.00%, avg=214156.38, stdev=47996.38
lat (msec) : 20=0.05%, 50=14.06%, 100=73.69%, 250=10.45%, 500=1.75%
lat (msec) : 750=0.01%
cpu : usr=1.48%, sys=7.29%, ctx=6317, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=12514/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randread_libaio: (groupid=2, jobs=1): err= 0: pid=9296: Tue Dec 12 16:06:05 2017
read : io=236056KB, bw=3931.1KB/s, iops=982, runt= 60036msec
slat (usec): min=5, max=638, avg=24.54, stdev= 8.12
clat (usec): min=43, max=166745, avg=16248.29, stdev=13723.49
lat (usec): min=57, max=166766, avg=16273.15, stdev=13723.22
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 4], 10.00th=[ 5], 20.00th=[ 7],
| 30.00th=[ 9], 40.00th=[ 11], 50.00th=[ 13], 60.00th=[ 15],
| 70.00th=[ 18], 80.00th=[ 23], 90.00th=[ 33], 95.00th=[ 44],
| 99.00th=[ 72], 99.50th=[ 83], 99.90th=[ 108], 99.95th=[ 117],
| 99.99th=[ 139]
bw (KB /s): min= 2698, max= 4464, per=100.00%, avg=3939.48, stdev=431.20
lat (usec) : 50=0.01%, 100=0.16%, 250=0.07%, 500=0.05%, 750=0.03%
lat (usec) : 1000=0.02%
lat (msec) : 2=0.53%, 4=5.65%, 10=31.26%, 20=37.51%, 50=21.32%
lat (msec) : 100=3.22%, 250=0.16%
cpu : usr=0.52%, sys=2.77%, ctx=57495, majf=0, minf=24
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=100.0%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=59014/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16
randwrite_libaio: (groupid=3, jobs=1): err= 0: pid=11303: Tue Dec 12 16:06:05 2017
write: io=110836KB, bw=1844.7KB/s, iops=461, runt= 60086msec
slat (usec): min=6, max=308389, avg=72.50, stdev=2599.21
clat (usec): min=137, max=722814, avg=34615.40, stdev=41635.21
lat (usec): min=171, max=722829, avg=34688.28, stdev=41756.12
clat percentiles (msec):
| 1.00th=[ 3], 5.00th=[ 6], 10.00th=[ 8], 20.00th=[ 11],
| 30.00th=[ 14], 40.00th=[ 17], 50.00th=[ 21], 60.00th=[ 26],
| 70.00th=[ 35], 80.00th=[ 49], 90.00th=[ 77], 95.00th=[ 110],
| 99.00th=[ 221], 99.50th=[ 269], 99.90th=[ 343], 99.95th=[ 388],
| 99.99th=[ 570]
bw (KB /s): min= 517, max= 2597, per=100.00%, avg=1846.87, stdev=455.75
lat (usec) : 250=0.09%, 500=0.10%, 750=0.09%, 1000=0.07%
lat (msec) : 2=0.11%, 4=1.53%, 10=15.81%, 20=31.81%, 50=30.97%
lat (msec) : 100=13.51%, 250=5.24%, 500=0.63%, 750=0.03%
cpu : usr=0.24%, sys=1.74%, ctx=24470, majf=0, minf=9
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=99.9%, 32=0.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=27709/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=16

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=411650KB/s, minb=411650KB/s, maxb=411650KB/s, mint=40756msec, maxt=40756msec

Run status group 1 (all jobs):
WRITE: io=12514MB, aggrb=212273KB/s, minb=212273KB/s, maxb=212273KB/s, mint=60367msec, maxt=60367msec

Run status group 2 (all jobs):
READ: io=236056KB, aggrb=3931KB/s, minb=3931KB/s, maxb=3931KB/s, mint=60036msec, maxt=60036msec

Run status group 3 (all jobs):
WRITE: io=110836KB, aggrb=1844KB/s, minb=1844KB/s, maxb=1844KB/s, mint=60086msec, maxt=60086msec

Disk stats (read/write):
dm-9: ios=91788/55387, merge=0/0, ticks=1740226/2762764, in_queue=4504234, util=99.74%, aggrios=91788/55742, aggrmerge=0/0, aggrticks=1740107/2776783, aggrin_queue=4517153, aggrutil=99.73%
dm-8: ios=91788/55742, merge=0/0, ticks=1740107/2776783, in_queue=4517153, util=99.73%, aggrios=92882/56921, aggrmerge=0/0, aggrticks=1740117/2928367, aggrin_queue=4668927, aggrutil=99.73%
dm-4: ios=92882/56921, merge=0/0, ticks=1740117/2928367, in_queue=4668927, util=99.73%, aggrios=23324/14275, aggrmerge=0/0, aggrticks=436704/740563, aggrin_queue=1177404, aggrutil=99.74%
dm-0: ios=416/0, merge=0/0, ticks=6838/0, in_queue=6838, util=2.70%, aggrios=0/0, aggrmerge=0/0, aggrticks=0/0, aggrin_queue=0, aggrutil=0.00%
drbd1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
drbd2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-1: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-2: ios=0/0, merge=0/0, ticks=0/0, in_queue=0, util=0.00%
dm-3: ios=92882/57101, merge=0/0, ticks=1739979/2962253, in_queue=4702778, util=99.74%


According to QNAP's site, RAID 50/60 is mainly about strengthening data protection, with RAID 50 pitched at servers doing frequent backups.

In short: RAID 0/1 performance is as expected, and RAID 1 indeed shows QTS reading both disks in parallel for double throughput, but RAID 50 performance looks underwhelming.
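
Pulling the fio aggregates together for comparison:

Config            Seq read    Seq write   4K rand read   4K rand write
RAID 0  (SSD*2)   1013MB/s    992MB/s     63992 IOPS     43797 IOPS
RAID 1  (SSD*2)   1003MB/s    479MB/s     57290 IOPS     36356 IOPS
RAID 50 (HDD*6)   ~400MB/s    ~210MB/s    982 IOPS       461 IOPS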

High-end stuff!



Storage & Snapshots Hands-On


The "Storage & Snapshots" web UI in QTS 4.3.4 has been expanded enormously, with a huge number of functions. It takes some getting used to at first.


The two RAID 5 sub-arrays the system created for the earlier RAID 50 test. This is RAID group 1.


And this is RAID group 2. QTS decides by itself how to split the six drives into groups; there does not seem to be a way to choose.


The new version adds an adjustable resync/rebuild priority.


The settings in Storage & Snapshots are extremely fine-grained. No wonder people call QTS an engineer's favorite, if you have a taste for control.


Detailed snapshot tuning options.


One thing about the QTS web UI I am still not used to: the same function can be reached from several places. For example, an iSCSI LUN can be created here...


...and also from here. Good or bad? On the bright side there are multiple routes to the same destination; on the downside the system feels considerably more complex.


Next is the guaranteed snapshot space: it notes that when the storage pool needs more room, snapshots may be removed.


This page also covers the snapshot retention policy.



This is really where each vendor's design philosophy differs. QTS leans toward autonomous, self-managing behavior: to keep the system running, if space runs short at the moment something needs to be stored, it goes after the snapshot area, deleting old snapshots to free up room. It is a "live in the moment" philosophy. That differs from the traditional approach, where the system raises an alert when space runs low and the network admin or IT staff manually decides whether to delete old snapshots and files, or to expand the NAS.

Which approach is better? That is in the eye of the beholder.

The new QTS still shows no sign of a GFS backup retention policy (or is it hiding somewhere?). Users who have worked with tape tend to be fond of the classic GFS scheme, because the design concept is so clear: you state how many years, months, or weeks your backups or snapshots are kept, and that definition could not be more explicit. A typical schedule is sketched below.
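
For anyone who has not run into GFS (Grandfather-Father-Son) before, a typical retention schedule looks something like this (purely illustrative numbers):

Son          daily backup     keep 7    (one week of dailies)
Father       weekly backup    keep 4    (one month of weeklies)
Grandfather  monthly backup   keep 12   (one year of monthlies)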



A pctine post - have to follow this one.
The RAM in this unit is super hard to get at,

and it feels like user upgrades were never intended.

Great to finally see you bring it out and use it.

Looking forward to your test results.
i1537 wrote:
The RAM in this unit is super hard to get at,
and it feels like user upgrades were never intended.
...(snip)


QNAP's parent company IEI started out in industrial PCs, and cable-tying the RAM modules is a common industrial-PC practice. It is indeed harder to remove; I just cut the tie with diagonal pliers, though the tight space inside the NAS chassis makes it fiddly.

Borrowing a fellow m01 member's earlier unboxing photo.

The early chassis designs really were built better.
I love those early metal trays.
10G NIC Throughput

Client: Windows 10, i5-6400, Mellanox ConnectX-2 10G NIC
NAS: Mellanox ConnectX-3 10G NIC
Switch: Aruba 10G switch
MTU=1500

1M 100% read, 0% random


1M 100% write, 0% random


Bottom line: SAMBA large-file reads hit about 600MB/s and writes about 900MB/s.
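
To separate raw network throughput from SMB and disk overhead, a plain TCP test helps. A sketch using iperf3, assuming it is available on both ends (e.g. installed via Entware on the NAS; that availability is an assumption):

# on the NAS
iperf3 -s
# on the Windows client: 4 parallel streams for 30 seconds
iperf3 -c <nas-ip> -P 4 -t 30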
SSD Cache Acceleration Tests


Micron 275GB SSD * 2 installed in Tray #7/8 of the TS-1079 Pro, set up as a RAID 0 read-only cache.


Sticking with the defaults is recommended: accelerate random I/O.


QTS's SSD cache is designed as a global cache: one SSD cache can serve multiple volumes or iSCSI LUNs at once.


At the start the SSD cache hit rate is 0%.



How the SSD cache works: data read from the NAS that meets the conditions set above (random I/O smaller than 1MB) is promoted into the cache, so as more frequently-read data accumulates, the SSD cache hit rate keeps climbing.



Flash cache status.

[~] # cat /proc/flashcache/CG0/flashcache_stats
raw_reads: 1218670
raw_writes: 41299
raw_read_hits: 2160
raw_potential_read_hits: 2160
raw_write_hits: 8204
raw_potential_write_hits: 8204
reads: 109798
writes: 425
read_hits: 0
potential_read_hits: 0
read_hit_percent: 0
potential_read_hit_percent: 0
write_hits: 253
potential_write_hits: 253
write_hit_percent: 59
potential_write_hit_percent: 59
replacement: 0
write_replacement: 0
write_invalidates: 39
read_invalidates: 60
direct: 1167873
fbc_busy: 0
fbc_inval: 0
defer_bio: 0
zero_sized_bio: 0
pending_enqueues: 84
pending_inval: 64
tier_ssd_inval: 0
no_room: 0
disk_reads: 1216519
disk_writes: 41171
ssd_reads: 2160
ssd_writes: 9682
uncached_reads: 1215169
uncached_writes: 32967
uncached_IO_requeue: 0
uncached_sequential_reads: 47294
uncached_sequential_writes: 32965
total_blocks: 499712
cached_blocks: 302
[~] #
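
The hit-rate fields are just ratios of the raw counters above, e.g.:

write_hit_percent = write_hits / writes = 253 / 425  ≈ 59%
read_hit_percent  = read_hits  / reads  = 0 / 109798 =  0%

The read hit rate is still 0 presumably because nearly all reads so far were sequential (see uncached_sequential_reads) and thus excluded by the random-I/O-only policy set earlier.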
Qboost

Qboost is roughly the CCleaner of a QTS NAS: one-click cleanup of the NAS recycle bins, one-click optimization of RAM usage, and finally scheduled starting and stopping of selected apps.

Qboost main menu


One-click junk file cleanup.


Application schedule.


Program schedule.


One-click junk file cleanup.


Still, the better practice is to schedule the system to empty the recycle bins automatically.



This app is most useful on NAS models with little RAM. If your NAS has plenty of memory, whether Qboost is installed makes little difference. The application scheduler in particular is an advanced feature; it is essentially the same idea as killing running programs from the Windows Task Manager, so use it with care.