I'll take on ten of them :-) Synology DS1621+ AMD-CPU NAS unboxing and hands-on

Marking this thread, looking forward to the follow-up tests...
Did the OP fall asleep? He's gone quiet.
DS1621+ performance test

Four Synology SAT5200 enterprise SATA SSDs (480 GB each) were used to build a RAID 5 array.

Synology SAT5200 SATA SSD official product page

fio was used to measure the NAS's internal disk read/write performance.
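
The job file synology2.conf is not included in the post; based on the job names, block sizes, I/O patterns, ioengine and queue depth visible in the output below, it probably looks roughly like the sketch here. The file size, runtimes and the direct=1 flag are assumptions, not taken from the original.

[global]
# settings shared by all four jobs, matching the parameters shown in the fio output
ioengine=libaio
direct=1
iodepth=32
numjobs=1

[seq-read]
# 64 KiB sequential read; stonewall puts each job in its own reporting group
rw=read
bs=64k
size=16g
stonewall

[seq-write]
rw=write
bs=64k
size=16g
stonewall

[rand-read]
# 4 KiB random read, capped at 30 seconds
rw=randread
bs=4k
size=16g
runtime=30
time_based
stonewall

[rand-write]
rw=randwrite
bs=4k
size=16g
runtime=30
time_based
stonewall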


root@ds1621plus:/volume1/download# fio synology2.conf
seq-read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=32
seq-write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=32
rand-read: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
rand-write: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.13
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)] [76.6% done] [0KB/171.5MB/0KB /s] [0/43.9K/0 iops] [eta 00m:34s]

seq-read: (groupid=0, jobs=1): err= 0: pid=5504: Fri Oct 9 09:31:25 2020
read : io=16384MB, bw=1742.7MB/s, iops=27881, runt= 9402msec
slat (usec): min=11, max=189, avg=18.47, stdev= 5.13
clat percentiles (usec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=2.95%, sys=56.98%, ctx=118351, majf=0, minf=523
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

seq-write: (groupid=1, jobs=1): err= 0: pid=5544: Fri Oct 9 09:31:25 2020
write: io=1141.8MB, bw=1020.3MB/s, iops=16296, runt= 1119msec
slat (usec): min=13, max=562, avg=30.12, stdev=15.04
clat (usec): min=648, max=3288, avg=1928.89, stdev=266.48
lat (usec): min=668, max=3335, avg=1959.59, stdev=268.33
clat percentiles (usec):
| 1.00th=[ 1192], 5.00th=[ 1480], 10.00th=[ 1592], 20.00th=[ 1720],
| 30.00th=[ 1800], 40.00th=[ 1880], 50.00th=[ 1944], 60.00th=[ 2008],
| 70.00th=[ 2064], 80.00th=[ 2160], 90.00th=[ 2256], 95.00th=[ 2320],
| 99.00th=[ 2480], 99.50th=[ 2576], 99.90th=[ 2800], 99.95th=[ 3024],
| 99.99th=[ 3280]
bw (KB /s): min=1041920, max=1044992, per=99.88%, avg=1043456.00, stdev=2172.23
lat (usec) : 750=0.02%, 1000=0.22%
lat (msec) : 2=58.70%, 4=41.23%
cpu : usr=3.22%, sys=49.11%, ctx=5134, majf=0, minf=2
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=1437.3%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=18236/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

rand-read: (groupid=2, jobs=1): err= 0: pid=5601: Fri Oct 9 09:31:25 2020
read : io=10210MB, bw=425913KB/s, iops=106477, runt= 24548msec
slat (usec): min=3, max=98, avg= 6.96, stdev= 2.33
clat (usec): min=47, max=1490, avg=291.97, stdev=17.93
lat (usec): min=52, max=1497, avg=299.28, stdev=18.12
clat percentiles (usec):
| 1.00th=[ 255], 5.00th=[ 266], 10.00th=[ 270], 20.00th=[ 278],
| 30.00th=[ 282], 40.00th=[ 286], 50.00th=[ 290], 60.00th=[ 298],
| 70.00th=[ 302], 80.00th=[ 306], 90.00th=[ 314], 95.00th=[ 322],
| 99.00th=[ 334], 99.50th=[ 342], 99.90th=[ 382], 99.95th=[ 410],
| 99.99th=[ 474]
bw (KB /s): min=423312, max=427904, per=100.00%, avg=425912.82, stdev=1154.85
lat (usec) : 50=0.01%, 100=0.01%, 250=0.39%, 500=99.60%, 750=0.01%
lat (msec) : 2=0.01%
cpu : usr=14.59%, sys=85.42%, ctx=34, majf=0, minf=1
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=160.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=2613800/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

rand-write: (groupid=3, jobs=1): err= 0: pid=5743: Fri Oct 9 09:31:25 2020
write: io=4956.9MB, bw=169189KB/s, iops=42296, runt= 30001msec
slat (usec): min=9, max=2431, avg=20.20, stdev= 7.98
clat (usec): min=51, max=10520, avg=734.23, stdev=171.84
lat (usec): min=67, max=10539, avg=754.70, stdev=173.77
clat percentiles (usec):
| 1.00th=[ 418], 5.00th=[ 620], 10.00th=[ 636], 20.00th=[ 660],
| 30.00th=[ 676], 40.00th=[ 684], 50.00th=[ 700], 60.00th=[ 708],
| 70.00th=[ 724], 80.00th=[ 748], 90.00th=[ 916], 95.00th=[ 1080],
| 99.00th=[ 1336], 99.50th=[ 1560], 99.90th=[ 2192], 99.95th=[ 2288],
| 99.99th=[ 2512]
bw (KB /s): min=124368, max=180760, per=100.00%, avg=169231.32, stdev=10627.03
lat (usec) : 100=0.03%, 250=0.45%, 500=0.83%, 750=78.89%, 1000=12.26%
lat (msec) : 2=7.31%, 4=0.23%, 10=0.01%, 20=0.01%
cpu : usr=8.72%, sys=84.23%, ctx=108583, majf=0, minf=2
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=148.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=1268930/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=1742.7MB/s, minb=1742.7MB/s, maxb=1742.7MB/s, mint=9402msec, maxt=9402msec

Run status group 1 (all jobs):
WRITE: io=1141.8MB, aggrb=1020.3MB/s, minb=1020.3MB/s, maxb=1020.3MB/s, mint=1119msec, maxt=1119msec

Run status group 2 (all jobs):
READ: io=10210MB, aggrb=425913KB/s, minb=425913KB/s, maxb=425913KB/s, mint=24548msec, maxt=24548msec

Run status group 3 (all jobs):
WRITE: io=4956.9MB, aggrb=169189KB/s, minb=169189KB/s, maxb=169189KB/s, mint=30001msec, maxt=30001msec


Roughly interpreted: sequential read is about 1743 MB/s and sequential write about 1020 MB/s.
4K read IOPS = 106477, 4K write IOPS = 42296

This first run measures the NAS's actual internal disk performance; a follow-up test over the network card will come later. A 10 GbE card carries roughly 1,000 MB/s of payload (10 Gbit/s ≈ 1,250 MB/s raw, minus protocol overhead), so based on the numbers above, four SSDs in RAID 5 are already enough to run a 10 GbE link at full speed. Adding more disks to the RAID would push performance higher still.
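
When the follow-up network test happens, a quick way to sanity-check the raw 10 GbE link, separate from any file-copy benchmark, is iperf3 between a client and the NAS. A minimal sketch, assuming iperf3 is installed on both ends (it is not part of DSM out of the box) and that 192.168.1.10 stands in for the NAS's 10 GbE address:

# on the NAS (server side)
iperf3 -s

# on the client: 4 parallel TCP streams for 30 seconds
iperf3 -c 192.168.1.10 -P 4 -t 30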
FB: Pctine
Synology DS1621xs+ performance test

The RAID set built from the four SATA SSDs in the DS1621+ was moved directly to the DS1621xs+ NAS.

Moving the drives from an old NAS straight into a new one, keeping every folder, file, account and permission setting, is something many people do, whether because the old NAS failed or because they bought a new model. Synology NAS handles this kind of system migration very well, and it is fast: the whole move took less than ten minutes, and the system could go straight back online.
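
Before rerunning fio on the DS1621xs+, it is worth confirming that the migrated RAID 5 set assembled cleanly on the new unit. A minimal check from an SSH shell, using the standard Linux md status file that DSM exposes:

# shows every md array, its member disks, and whether it is clean or resyncing
cat /proc/mdstat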

root@ds1621xs:/volume1/download# fio synology2.conf
seq-read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=32
seq-write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=32
rand-read: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
rand-write: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.13
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)] [84.1% done] [0KB/170.8MB/0KB /s] [0/43.8K/0 iops] [eta 00m:18s]
seq-read: (groupid=0, jobs=1): err= 0: pid=21793: Fri Oct 9 10:06:40 2020
read : io=16384MB, bw=2061.7MB/s, iops=32986, runt= 7947msec
slat (usec): min=9, max=200, avg=15.50, stdev= 4.49
clat percentiles (usec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=4.17%, sys=53.78%, ctx=144491, majf=0, minf=522
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

seq-write: (groupid=1, jobs=1): err= 0: pid=21807: Fri Oct 9 10:06:40 2020
write: io=2031.6MB, bw=978514KB/s, iops=15274, runt= 2126msec
slat (usec): min=12, max=318, avg=26.20, stdev=12.50
clat (usec): min=1007, max=4143, avg=2063.57, stdev=269.46
lat (usec): min=1034, max=4180, avg=2093.28, stdev=270.65
clat percentiles (usec):
| 1.00th=[ 1416], 5.00th=[ 1608], 10.00th=[ 1704], 20.00th=[ 1832],
| 30.00th=[ 1928], 40.00th=[ 2008], 50.00th=[ 2064], 60.00th=[ 2160],
| 70.00th=[ 2224], 80.00th=[ 2288], 90.00th=[ 2384], 95.00th=[ 2480],
| 99.00th=[ 2672], 99.50th=[ 2736], 99.90th=[ 2960], 99.95th=[ 3184],
| 99.99th=[ 3568]
bw (KB /s): min=974336, max=982528, per=100.00%, avg=978880.00, stdev=3390.59
lat (msec) : 2=39.50%, 4=60.59%, 10=0.01%
cpu : usr=2.82%, sys=46.31%, ctx=8611, majf=0, minf=2
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=807.1%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=32474/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

rand-read: (groupid=2, jobs=1): err= 0: pid=21814: Fri Oct 9 10:06:40 2020
read : io=6088.9MB, bw=701587KB/s, iops=175393, runt= 8887msec
slat (usec): min=3, max=64, avg= 4.98, stdev= 0.85
clat (usec): min=34, max=741, avg=177.11, stdev= 8.26
lat (usec): min=39, max=744, avg=182.15, stdev= 8.22
clat percentiles (usec):
| 1.00th=[ 159], 5.00th=[ 165], 10.00th=[ 169], 20.00th=[ 171],
| 30.00th=[ 173], 40.00th=[ 175], 50.00th=[ 177], 60.00th=[ 179],
| 70.00th=[ 181], 80.00th=[ 183], 90.00th=[ 187], 95.00th=[ 191],
| 99.00th=[ 201], 99.50th=[ 203], 99.90th=[ 215], 99.95th=[ 221],
| 99.99th=[ 255]
bw (KB /s): min=697057, max=701968, per=99.77%, avg=700005.94, stdev=1437.35
lat (usec) : 50=0.01%, 100=0.01%, 250=99.98%, 500=0.01%, 750=0.01%
cpu : usr=10.56%, sys=89.41%, ctx=17, majf=0, minf=1
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=269.1%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=1558720/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

rand-write: (groupid=3, jobs=1): err= 0: pid=21820: Fri Oct 9 10:06:40 2020
write: io=5899.6MB, bw=201365KB/s, iops=50340, runt= 30001msec
slat (usec): min=8, max=812, avg=16.75, stdev= 7.67
clat (usec): min=43, max=7826, avg=617.84, stdev=218.09
lat (usec): min=55, max=7843, avg=634.82, stdev=221.62
clat percentiles (usec):
| 1.00th=[ 251], 5.00th=[ 366], 10.00th=[ 410], 20.00th=[ 462],
| 30.00th=[ 490], 40.00th=[ 510], 50.00th=[ 540], 60.00th=[ 620],
| 70.00th=[ 764], 80.00th=[ 812], 90.00th=[ 860], 95.00th=[ 932],
| 99.00th=[ 1336], 99.50th=[ 1576], 99.90th=[ 2160], 99.95th=[ 2256],
| 99.99th=[ 2480]
bw (KB /s): min=136720, max=258736, per=100.00%, avg=202396.31, stdev=42782.55
lat (usec) : 50=0.01%, 100=0.20%, 250=0.80%, 500=34.17%, 750=33.29%
lat (usec) : 1000=29.80%
lat (msec) : 2=1.50%, 4=0.24%, 10=0.01%
cpu : usr=6.37%, sys=87.07%, ctx=99406, majf=0, minf=2
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=150.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=1510255/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=2061.7MB/s, minb=2061.7MB/s, maxb=2061.7MB/s, mint=7947msec, maxt=7947msec

Run status group 1 (all jobs):
WRITE: io=2031.6MB, aggrb=978513KB/s, minb=978513KB/s, maxb=978513KB/s, mint=2126msec, maxt=2126msec

Run status group 2 (all jobs):
READ: io=6088.9MB, aggrb=701587KB/s, minb=701587KB/s, maxb=701587KB/s, mint=8887msec, maxt=8887msec

Run status group 3 (all jobs):
WRITE: io=5899.6MB, aggrb=201364KB/s, minb=201364KB/s, maxb=201364KB/s, mint=30001msec, maxt=30001msec


You can see that the Synology DS1621xs+ really does outperform the DS1621+ on reads, which goes some way toward justifying its price. It also comes with a five-year warranty, two years more than the DS1621+'s three years, plus next-business-day replacement unit service.

Details on the Synology SRS replacement service are on the official site:
https://www.synology.com/zh-tw/solution/SRS
FB: Pctine
Synology DS1618+ NAS performance test

The DS1621+'s predecessor is the DS1618+, the model I use at home. My experience with it over the past few years has been excellent; it has dutifully run non-stop, year round, for years.

Today I moved the same RAID set over to the DS1618+ and ran the test. (Synology SSD x4, RAID 5)


root@ds1621xs:/volume1/download# uname -a
Linux ds1618plus 4.4.59+ #25426 SMP PREEMPT Wed Jul 8 03:18:11 CST 2020 x86_64 GNU/Linux synology_denverton_1618+


root@ds1621xs:/volume1/download# fio synology2.conf
seq-read: (g=0): rw=read, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=32
seq-write: (g=1): rw=write, bs=64K-64K/64K-64K/64K-64K, ioengine=libaio, iodepth=32
rand-read: (g=2): rw=randread, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
rand-write: (g=3): rw=randwrite, bs=4K-4K/4K-4K/4K-4K, ioengine=libaio, iodepth=32
fio-2.13
Starting 4 processes
Jobs: 1 (f=1): [_(3),w(1)] [72.1% done] [0KB/80864KB/0KB /s] [0/20.3K/0 iops] [eta 00m:48s]

seq-read: (groupid=0, jobs=1): err= 0: pid=20459: Fri Oct 9 14:00:15 2020
read : io=16384MB, bw=1467.8MB/s, iops=23483, runt= 11163msec
slat (usec): min=21, max=2950, avg=38.84, stdev=12.68
clat percentiles (usec):
| 1.00th=[ 0], 5.00th=[ 0], 10.00th=[ 0], 20.00th=[ 0],
| 30.00th=[ 0], 40.00th=[ 0], 50.00th=[ 0], 60.00th=[ 0],
| 70.00th=[ 0], 80.00th=[ 0], 90.00th=[ 0], 95.00th=[ 0],
| 99.00th=[ 0], 99.50th=[ 0], 99.90th=[ 0], 99.95th=[ 0],
| 99.99th=[ 0]
cpu : usr=7.25%, sys=91.79%, ctx=5547, majf=0, minf=523
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=100.0%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=262144/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

seq-write: (groupid=1, jobs=1): err= 0: pid=20478: Fri Oct 9 14:00:15 2020
write: io=5227.2MB, bw=743930KB/s, iops=11619, runt= 7195msec
slat (usec): min=26, max=7557, avg=59.14, stdev=60.38
clat (usec): min=1086, max=10870, avg=2692.31, stdev=746.09
lat (usec): min=1131, max=10950, avg=2751.23, stdev=753.12
clat percentiles (usec):
| 1.00th=[ 1464], 5.00th=[ 1672], 10.00th=[ 1848], 20.00th=[ 2096],
| 30.00th=[ 2288], 40.00th=[ 2448], 50.00th=[ 2608], 60.00th=[ 2800],
| 70.00th=[ 3024], 80.00th=[ 3248], 90.00th=[ 3536], 95.00th=[ 3824],
| 99.00th=[ 4832], 99.50th=[ 5664], 99.90th=[ 8640], 99.95th=[ 9536],
| 99.99th=[10560]
bw (KB /s): min=657152, max=796359, per=99.86%, avg=742862.36, stdev=40611.70
lat (msec) : 2=16.14%, 4=80.54%, 10=3.33%, 20=0.02%
cpu : usr=4.10%, sys=59.10%, ctx=14912, majf=0, minf=2
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=313.5%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=83603/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

rand-read: (groupid=2, jobs=1): err= 0: pid=20495: Fri Oct 9 14:00:15 2020
read : io=4653.6MB, bw=158836KB/s, iops=39708, runt= 30001msec
slat (usec): min=10, max=2267, avg=19.69, stdev= 7.98
clat (usec): min=41, max=2882, avg=782.04, stdev=59.59
lat (usec): min=56, max=2902, avg=802.08, stdev=61.82
clat percentiles (usec):
| 1.00th=[ 732], 5.00th=[ 740], 10.00th=[ 740], 20.00th=[ 748],
| 30.00th=[ 756], 40.00th=[ 756], 50.00th=[ 764], 60.00th=[ 772],
| 70.00th=[ 788], 80.00th=[ 796], 90.00th=[ 884], 95.00th=[ 900],
| 99.00th=[ 916], 99.50th=[ 940], 99.90th=[ 1208], 99.95th=[ 1512],
| 99.99th=[ 2224]
bw (KB /s): min=135452, max=167769, per=99.31%, avg=157742.64, stdev=7054.32
lat (usec) : 50=0.01%, 100=0.01%, 250=0.01%, 500=0.01%, 750=26.16%
lat (usec) : 1000=73.48%
lat (msec) : 2=0.34%, 4=0.02%
cpu : usr=13.00%, sys=83.42%, ctx=138451, majf=0, minf=1
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=151.7%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=1191282/w=0/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

rand-write: (groupid=3, jobs=1): err= 0: pid=20533: Fri Oct 9 14:00:15 2020
write: io=2931.3MB, bw=100051KB/s, iops=25011, runt= 30001msec
slat (usec): min=23, max=6969, avg=35.54, stdev=16.86
clat (usec): min=118, max=11119, avg=1241.45, stdev=311.07
lat (usec): min=149, max=11146, avg=1277.10, stdev=316.50
clat percentiles (usec):
| 1.00th=[ 692], 5.00th=[ 884], 10.00th=[ 972], 20.00th=[ 1004],
| 30.00th=[ 1048], 40.00th=[ 1096], 50.00th=[ 1176], 60.00th=[ 1256],
| 70.00th=[ 1448], 80.00th=[ 1512], 90.00th=[ 1560], 95.00th=[ 1592],
| 99.00th=[ 2096], 99.50th=[ 2256], 99.90th=[ 2928], 99.95th=[ 5344],
| 99.99th=[ 7584]
bw (KB /s): min=79160, max=122472, per=100.00%, avg=100357.19, stdev=16588.32
lat (usec) : 250=0.03%, 500=0.12%, 750=1.63%, 1000=15.02%
lat (msec) : 2=81.89%, 4=1.24%, 10=0.07%, 20=0.01%
cpu : usr=10.29%, sys=86.40%, ctx=49930, majf=0, minf=2
IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=148.3%, >=64=0.0%
submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.1%, 64=0.0%, >=64=0.0%
issued : total=r=0/w=750375/d=0, short=r=0/w=0/d=0, drop=r=0/w=0/d=0
latency : target=0, window=0, percentile=100.00%, depth=32

Run status group 0 (all jobs):
READ: io=16384MB, aggrb=1467.8MB/s, minb=1467.8MB/s, maxb=1467.8MB/s, mint=11163msec, maxt=11163msec

Run status group 1 (all jobs):
WRITE: io=5227.2MB, aggrb=743929KB/s, minb=743929KB/s, maxb=743929KB/s, mint=7195msec, maxt=7195msec

Run status group 2 (all jobs):
READ: io=4653.6MB, aggrb=158836KB/s, minb=158836KB/s, maxb=158836KB/s, mint=30001msec, maxt=30001msec

Run status group 3 (all jobs):
WRITE: io=2931.3MB, aggrb=100050KB/s, minb=100050KB/s, maxb=100050KB/s, mint=30001msec, maxt=30001msec


Seq Read = 1468 MB/s, Seq Write = 726 MB/s
4K read IOPS = 39708, 4K write IOPS = 25011

Looking at the overall numbers, the next-generation DS1621+ is clearly a big step up.
FB: Pctine
To make the comparison easier, here are the fio results for the three NAS units side by side:

Model        Seq read     Seq write    4K read IOPS   4K write IOPS
DS1621xs+    2062 MB/s     956 MB/s       175,393         50,340
DS1621+      1743 MB/s    1020 MB/s       106,477         42,296
DS1618+      1468 MB/s     726 MB/s        39,708         25,011

All three NAS units were tested with the same RAID set: Synology SATA SSD 480 GB x4, RAID 5, no SSD cache.

The DS1621xs+ performs best, the DS1621+ comes second, and the previous-generation DS1618+ ranks third. The jump from the DS1618+ to its successor, the DS1621+, is especially large.
FB: Pctine
Can the Synology E10M20-T1 M.2 + 10GbE combo card be used in the DS1621+?

A member on HKEPC asked about this, so I'll share the result here as well.

The DS1621+ already has built-in M.2 NVMe slots, so in practice there is little reason to use this combo card, but I tested it anyway. Confirmed: the DS1621+ cannot recognize the card.
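
For the record, one way to see at the OS level whether a card is even enumerated is to list the PCI devices the kernel knows about from an SSH shell. This is plain Linux sysfs, nothing Synology-specific; if the slot or firmware does not expose the card, its 10GbE and NVMe functions simply never show up here:

# each enumerated PCI function appears as a bus address (e.g. 0000:01:00.0)
ls /sys/bus/pci/devices/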
FB: Pctine
This unit: a DS1819+ plus an E10M20-T1 = $35,500 (8-bay + 2x M.2 + 10G)
versus a DS1621+ plus an E10G18-T1 = $31,500 (6-bay + 2x M.2 + 10G)
Comparing the two, which would you recommend buying?
Tom_Hwang wrote:
What the DS1621+ uses... (snipped)


This is what the official website says:

                          DS1621+             DS1621xs+
Maximum memory capacity   32 GB (16 GB x 2)   32 GB (16 GB x 2)
jer168 wrote:
A DS1819+ plus an E10M20-T1 = $35,500 (8-bay + 2x M.2 + 10G)
versus a DS1621+ plus an E10G18-T1 = $31,500 (6-bay + 2x M.2 + 10G)
Comparing the two, which would you recommend buying?


I'd recommend the DS1621+; its performance is far better than the DS1819+'s. As for 6 bays versus 8 bays, only consider the DS1819+ if a 6-bay NAS cannot meet your capacity needs.

One more point: the E10M20-T1 is a 10G + SSD-cache combo card designed for the previous generation of NAS units. Newer models such as the DS1621+ mostly have M.2 slots built in, which means the E10M20-T1 cannot be used in those new machines and has no real role going forward.
FB: Pctine