Any HC1/HC2 setup based on the new N1 hardware?

Zbe
Posts: 7
Joined: Mon Feb 05, 2018 5:54 pm
languages_spoken: english, swedish
ODROIDs: no one yet.. ;)
Has thanked: 0
Been thanked: 0
Contact:

Any HC1/HC2 setup based on the new N1 hardware?

Post by Zbe »

Will there be any HC1/HC2 setup based on the new N1 hardware? The HC1/HC2 have such a slick design, but the USB3-to-SATA bridge is a bit "boring".

You only need ONE USB3 port and the NIC... the N1 is now overloaded with a bunch of I/Os. Of course no fan, just passive cooling.

Oh, and perhaps an HC1.2 and HC2.2 with the same slick design and support for 2 drives :) a simple storage chassis. No hotswap or other stuff that adds space. Just a slick and small design.

The board could perhaps come in 2GB or 4GB RAM versions. I could mount the hardware myself, just a good kit... choosing:
a) 2GB or 4GB RAM
b) the .2 design or not, i.e. support for 1 or 2 drives.

fvolk
Posts: 592
Joined: Sun Jun 05, 2016 11:04 pm
languages_spoken: english
ODROIDs: C2, C4, H2
Has thanked: 0
Been thanked: 68 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by fvolk »

Hmm... for a compact NAS design:

1) power supply/board power routing able to handle 2x 5TB disks (staggered spin-up possible?)
2) a mini-tower design for good airflow - board and the 2 disks vertically oriented
3) a large passive heatsink for the CPU, no noisy fan
4) please always include a proper-quality power Y-cable with the N1 that does not catch fire
(see https://www.youtube.com/watch?v=TataDaUNEFc or https://www.youtube.com/watch?v=fAyy_WOSdVc)

...get a backpack-size mini-NAS of 10TB where you don't have hardcoded backdoor passwords
(e.g. https://www.bleepingcomputer.com/news/s ... rd-drives/)

;-)

Zbe
Posts: 7
Joined: Mon Feb 05, 2018 5:54 pm
languages_spoken: english, swedish
ODROIDs: no one yet.. ;)
Has thanked: 0
Been thanked: 0
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by Zbe »

Yea, a good two (2) 3.5" disk NAS would be great! :)

But the one-disk-per-server approach for a micro cloud cluster is what I really like. The performance of native SATA, and perhaps 4GB RAM, would make it a great build: you add one disk per NAS controller, and together they all form one cloud cluster.

fvolk
Posts: 592
Joined: Sun Jun 05, 2016 11:04 pm
languages_spoken: english
ODROIDs: C2, C4, H2
Has thanked: 0
Been thanked: 68 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by fvolk »

I actually thought of a design with 2x 2.5" 5TB disks, not 3.5".
2x 3.5" is not backpack size :-)

MimCom
Posts: 50
Joined: Sun Mar 12, 2017 3:24 am
languages_spoken: english
ODROIDs: C2, XU4Q
Has thanked: 5 times
Been thanked: 0
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by MimCom »

A dual-drive config would make mirrors easy. An HC config would really benefit from a second GigE port in a distributed storage model like Ceph. Even better would be to include 802.3at PoE (802.3af could work for a single drive). Add some fan power/control like the CS2 has.

Physically, I would like to see a way of stacking horizontally for rackmount. Think rack ears with a row of 80mm fans.

Zbe
Posts: 7
Joined: Mon Feb 05, 2018 5:54 pm
languages_spoken: english, swedish
ODROIDs: no one yet.. ;)
Has thanked: 0
Been thanked: 0
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by Zbe »

MimCom wrote:Dual drive config would make mirrors easy.
Yea, the dual-disk config for a mirroring setup is nice. With 2.5" HDDs that makes a really good small home NAS. Or use all the disks as JBOD and add data security with a cloud setup, mirroring data across a cluster. I prefer the latter, but as a small setup. :)

MimCom wrote: HC config would really benefit from a second GigE port in a distributed storage model like ceph.
Really? Isn't that overkill? At some point the hardware gets too expensive. I think 1 GigE port is a good balance between performance and cost. :roll:
MimCom wrote: Even better would be to include 802.3at PoE (802.3af could work for single drive). Add some fan power/control like the CS2 has.
Fan, yes... but PoE+? That is 25W; is that really enough? For a one-disk setup perhaps, but not for 2! :?
MimCom wrote: Physically, I would like to see a way of stacking horizontally for rackmount. Think rack ears with a row of 80mm fans.
:) yes..

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

Zbe wrote:The HC1/HC2 has such slick design, but the USB3-SATA bridge is a bit "boring"
Boring? Is this a performance metric?

I've got an N1 here and a 2.5" disk (7.2k HGST) next to it. Let's benchmark! Same disk, same settings, once connected to a SATA port and once to a USB3 port with a 'boring' USB3-SATA bridge in between:

Code: Select all

    4k random IOPS write/read    sequential in MB/s write/read
X           301 / 170                      96 / 101
Y           311 / 168                      95 / 101
(IOPS are IO operations per second at 4KB blocksize, sequential transfers are made with 16MB blocksize)

What is X and what is Y? Benchmark used was 'iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2' and full results below:

Code: Select all

X attached SATA                                               random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    20581    23404    27007    27142      680     1203
          102400      16    67872    72744    71260    73004     2647     5222
          102400     512    94530    96070    99963   101863    43844    45023
          102400    1024    91048    95331    94700   103645    60637    51321
          102400   16384    97060    94794    99418   102616    98464    92941

Y attached SATA                                               random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    17729    18396    21824    21847      673     1244
          102400      16    59357    63370    59221    61650     2642     5457
          102400     512    95256    93899   100749   101837    43381    45185
          102400    1024    92423    93860   100883   103643    60642    55242
          102400   16384    96049    94808    99346   102585    98492    91496
Again: what is X and what is Y? :)

Small hint: with active PCIe power management the X numbers are lower.
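
For reference, on kernels built with PCIe ASPM support this power management policy can usually be inspected and switched via sysfs -- a minimal sketch, assuming the N1's 4.4 kernel exposes the standard interface (the exact knob on the shipped image may differ):

Code: Select all

# show the available ASPM policies (the active one is shown in brackets)
cat /sys/module/pcie_aspm/parameters/policy
# disable ASPM savings for maximum SATA throughput
echo performance > /sys/module/pcie_aspm/parameters/policy
# re-enable power saving (lower idle consumption, slightly lower IOPS)
echo powersave > /sys/module/pcie_aspm/parameters/policy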

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

Zbe wrote:And the performance with native SATA
Impossible on the N1 since the SoC does not provide native SATA capabilities (Hardkernel evaluated two more SoCs, and if they had chosen the RTD1295 we could now talk about 2 native SATA ports).

So what happened? Just like on the HC1 or HC2, Hardkernel had to choose a high-speed connection to attach a SATA controller. On the N1 they chose a PCIe-attached ASM1061 controller (host bus: PCIe 2.x Gen1/Gen2), while on HC1/HC2 they had to use USB3/SuperSpeed/UAS instead (an interface/protocol called 'boring' by some folks for reasons unknown to me).

Let's compare some possible implementations for SoCs lacking native SATA capabilities:

Code: Select all

      SoC        high speed bus     controller    high speed bus     disk
  Exynos 5422       USB3/UAS          JMS578        SATA III        a HGST
Rockchip RK3399    PCIe Gen2         ASM1061        SATA III        a HGST
Rockchip RK3399     USB3/UAS          JMS567        SATA III        a HGST
Rockchip RK3399     USB3/UAS         ASM1153        SATA III        a HGST
Below the results for N1 comparing JMS567 and ASM1061 (for the latter with and without PCIe powermanagement):

Code: Select all

PCIe attached SATA, powersave settings                        random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    17502    18096    19914    20093      674     1266
          102400      16    56566    61144    57304    58187     2623     5452
          102400     512    91692    94618   100661   101849    43380    43410
          102400    1024    89045    93166   100796   103631    60345    58911
          102400   16384    92511    93276    99291   102603    98468    92834

PCIe attached SATA, performance settings                      random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    20581    23404    27007    27142      680     1203
          102400      16    67872    72744    71260    73004     2647     5222
          102400     512    94530    96070    99963   101863    43844    45023
          102400    1024    91048    95331    94700   103645    60637    51321
          102400   16384    97060    94794    99418   102616    98464    92941

USB3/UAS attached SATA via JMS567                             random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       4    17729    18396    21824    21847      673     1244
          102400      16    59357    63370    59221    61650     2642     5457
          102400     512    95256    93899   100749   101837    43381    45185
          102400    1024    92423    93860   100883   103643    60642    55242
          102400   16384    96049    94808    99346   102585    98492    91496
It's pretty easy to realize that with spinning rust (HDDs) it doesn't matter which high-speed bus is used between SoC and SATA controller (USB3 or PCIe). But USB3 requires more interrupts to be processed, and the choice of SATA controller also matters (the JMS578 used on HC1 and HC2 is great; the majority of USB-to-SATA bridge chips in external disk enclosures are unfortunately crap).

TL;DR: there is no native SATA on the N1, the chosen implementation (PCIe-attached ASM1061) is a good one, and now let's have fun watching users do silly things (ignoring the 2 USB3 ports, playing RAID-1 with the SATA ports and so on).
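
As an aside, a quick way to check which transport/driver a disk ended up on and to eyeball the interrupt overhead just mentioned -- a minimal sketch with standard Linux tools (the exact interrupt names are board-specific and just an assumption here):

Code: Select all

# per-device driver: 'uas' means UASP is active, 'usb-storage' means plain BOT
lsusb -t
# kernel messages confirming UAS attach (or a fallback to usb-storage)
dmesg | grep -i uas
# rough comparison of interrupt load on the USB3 (xhci) vs PCIe/AHCI paths
grep -i -E 'xhci|ahci|pcie' /proc/interrupts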

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

tkaiser wrote:if they would have chosen RTD1295 we could now talk about 2 native SATA ports
The RTD1295 block diagram I found shows the following:
[Image: RTD1295 block diagram]

It indicates there is only one (1) "native" SATA port, and that it is shared with something (not sure what SGMII is). The diagram also shows that it's only 4 "little" cores rather than the 4 "little" + 2 "big" found on the RK3399.

[edit]
The internet says that SGMII (Serial Gigabit Media Independent Interface) is a connection bus between Ethernet MACs and PHYs defined by Cisco Systems:
https://en.wikipedia.org/wiki/Media-ind ... _interface

So the "native" SATA shares bandwidth with the network interface.

odroid
Site Admin
Posts: 37229
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English, Korean
ODROIDs: ODROID
Has thanked: 1723 times
Been thanked: 1120 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by odroid »

As far as I know, the old QNAP TS-228 used a USB 3.0 hub with two ASM1153E chips to implement two SATA interfaces.
That seems to be the main reason why its RAID-1 performance was not good.
But I have no idea how the new TS-228A model implements its two SATA interfaces.
Its RAID-1 performance looks much better than the old TS-228's.

Anyway, is there anybody who can test RAID-1 performance with mdadm on the N1?
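
A minimal sketch of such a test, assuming the two disks show up as /dev/sda and /dev/sdb on the SATA ports (device names and mount point are just placeholders):

Code: Select all

# create a 2-disk mdadm RAID-1, put ext4 on it and run the usual iozone test
mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
mkdir -p /mnt/raid1
mount /dev/md0 /mnt/raid1
cd /mnt/raid1
iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2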

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:
tkaiser wrote:if they would have chosen RTD1295 we could now talk about 2 native SATA ports
The RTD1295 ... So the "native" SATA shares bandwidth with the network interface.
Why? RTD1295/RTD1296 have not only an internal GbE MAC but also a real GbE PHY, so neither RGMII nor SGMII is used here (usually RGMII and/or SGMII is combined with an external GbE PHY). Hardkernel said they evaluated the RTD1295, but I'm pretty sure they would've chosen the dual-SATA RTD1296 for real products... if the software support situation with RealTek weren't as horrible as it is :)

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

odroid wrote:As far as I know, the old QNAP TS-228 used a USB 3.0 hub with two ASM1153E chips to implement two SATA interfaces.
The TS-228 is based on the old RTD1195 (dual-core A7, media player SoC), while the newer TS-228A uses the NAS/router/media SoC RTD1296 (2 x SATA, made for the job).

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

tkaiser wrote:I'm pretty sure they would've chosen the dual-SATA RTD1296 for real products.
The RTD1296 is still only 4x A53 cores (vs 4x A53 + 2x A72) and 1080p (vs 4K) from what I could find on the internet. It also seems to be limited to 2GB (vs 4GB) of RAM.

Since any NAS use would be bottlenecked by the 1Gbps network interface, the discussion of "native" (integrated) vs. external SATA is largely just academic.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

odroid wrote:Anyway, is there any body who can test the RAID-1 performance with mdadm on the N1?
Not me, since this is just an insane waste of disks (especially when done for the wrong reason: there are still an awful lot of people playing RAID because they're concerned about data 'protection', but all the redundant RAID modes try to provide is data availability). And if I wanted to use mdraid to implement something as useless as RAID-1, I would clearly do a RAID-10 with two disks instead.

https://forum.openmediavault.org/index. ... post146935 (since OMV forum seems to be down all the time here the archived link: https://archive.fo/8Pf34 -- post #12 there)
Last edited by tkaiser on Mon Feb 26, 2018 3:46 am, edited 1 time in total.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:Since any NAS use would be bottlenecked by the 1Gbps network interface, the discussion of "native" (integrated) vs. external SATA is largely just academic.
Sure. I only tried to explain that a SoC/CPU that lacks native SATA capabilities needs to attach an external SATA controller in some way. With HDDs, the type of high-speed interface used between SoC/CPU and SATA controller doesn't matter (that much). USB3 SuperSpeed is as fast as PCIe here since HDDs are the bottleneck. But the specific SATA controller matters regardless of the interface between SoC and controller, and Hardkernel's choice of the PCIe-attached ASM1061 on the N1 is a good one, just as the choice of the USB3-attached JMS578 on HC1 and HC2 is. The type of interconnect and protocols simply doesn't matter (that much) for the 'HDD use case'.

And mentioning one of the other SoCs Hardkernel evaluated for the N1 was obviously a mistake since it's only misleading (RealTek's software 'offerings' prevent their SoCs from being used on ODROIDs).

odroid
Site Admin
Posts: 37229
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English, Korean
ODROIDs: ODROID
Has thanked: 1723 times
Been thanked: 1120 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by odroid »

When we evaluated the RTD129x, there were no proper DRM/Mali video drivers for Linux. I don't think there will ever be any Linux-friendly video driver support.
The RTD129x's computing power was obviously much weaker than the RK3399's. Its benchmark results were even slightly lower than the two-year-old ODROID-C2's.
But I have to agree the RTD1296 is a very good solution for building a two-bay headless network storage device.

BTW, according to the official TS-228A product description, they clearly use the RTD1295 instead of the RTD1296.
So it is still uncertain how they implemented the SATA interface for two HDDs.
Too far off topic?

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

Well, really way too off-topic but... anyway ;)

If QNAP really took the RTD1295 for a 2018 design, then most probably they also had to use an ASM1061 for the 2-bay device. But as you said: Linux software support is lacking, and to be honest I would always prefer Marvell Armada SoCs for headless NAS use cases (the RealTeks are pretty capable wrt HDMI and video).

I've got boards with the Armada 38x and 37x0 here and they perform great as NAS even if they look underpowered on paper. The secret is that the AP (application processor) doesn't do all the IO and network work itself; that is off-loaded to one or more CPs (communication processors). And there's the Armada 7K/8K with 10GbE support and, in the meantime, somewhat decent mainline kernel support (Marvell contracted Bootlin, formerly known as Free Electrons, for this).

Looking forward to getting my hands on this box later this year to run my own distro on it: https://www.anandtech.com/show/12314/as ... h-10gbaset

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

odroid wrote:Anyway, is there any body who can test the RAID-1 performance with mdadm on the N1?
Again no, since RAID-1 with mdadm is stupid these days -- reasons why: https://webcache.googleusercontent.com/ ... of-raid-1/+

Please keep in mind that 2.5" HDDs become really slow as they fill up, due to ZBR (zone bit recording): https://forum.armbian.com/topic/1925-so ... ment=15319

So I created an mdraid RAID-10 consisting of a fast EVO840 on one of the SATA ports and a not-so-fast other Samsung on a USB3 port in a JMS567 enclosure:

Code: Select all

# create a 2-disk mdraid RAID-10 (far2 layout), then partition and format it
mdadm --create /dev/md127 --level=10 --raid-devices=2 --layout=f2 /dev/sdb /dev/sdc
parted -s /dev/md127 mklabel gpt
parted -s /dev/md127 unit s mkpart primary ext4 8192 100%
mkfs.ext4 /dev/md127p1
Resync performance of a totally idle array was 200 MB/s, but as soon as there is some activity (e.g. creating an ext4 filesystem while the RAID is resyncing) this drops to as low as 5 MB/s. With SSDs, not ultra-slow spinning rust!
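
A minimal sketch for watching and tuning the resync, assuming the array is /dev/md127 as above (the 200000 KB/s default ceiling is the one showing up in the dmesg output further down):

Code: Select all

# progress and current resync speed
cat /proc/mdstat
# kernel limits for resync throughput (KB/s)
cat /proc/sys/dev/raid/speed_limit_min
cat /proc/sys/dev/raid/speed_limit_max
# raise the ceiling, e.g. for SSD arrays
echo 800000 > /proc/sys/dev/raid/speed_limit_max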

The usual iozone test 'iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2' and another two to test for sequential transfer speeds with 400MB and 4GB test filesize:

Code: Select all

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       1     5942     6578    12767    12662     6615     6521
          102400       4    20862    24385    47022    47109    22445    24110
          102400      16    68855    78156   127019   124559    72826    76336
          102400     512   131043   130370   273195   309899   272703   123206
          102400    1024   130641   131077   463478   572274   537660   123972
          102400   16384   130865   127481   543470   681928   671503   125641
          409600   16384   127074   130486   644131   654339
         4096000   16384   130572   131099   689674   695537
For reference, single-'disk' numbers with identical settings: https://forum.armbian.com/topic/6496-od ... eview-yet/

Now let's repeat the above, this time using an mdraid RAID-10 consisting of the EVO840 and an EVO750, both on the SATA ports. Resync performance seemed to be slightly lower in the beginning but then settled again at ~200 MB/s: https://pastebin.com/0vgfAMNT

Corresponding dmesg output:

Code: Select all

[  226.650404] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
[ 1896.045225] md: using maximum available idle IO bandwidth (but not more than 200000 KB/sec) for resync.
Again iozone performance:

Code: Select all

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       1     9449    11078    21011    20950     8760    10969
          102400       4    32765    39365    68556    70403    32598    38214
          102400      16    85458    96512   163262   168154    97952    94709
          102400     512   169538   172728   302758   299981   273237   162098
          102400    1024   174524   177137   323528   322983   318133   172623
          102400   16384   188257   186566   351960   363919   365077   185081
          409600   16384   119328   127074   342490   328722
         4096000   16384   119420   120454   327141   329713
So operational performance is fine (sequential writes bottleneck somewhere around 120-130 MB/s, which is OK since that's well above the GbE bottleneck; sequential read performance is good anyway; random IO performance with the SATA-only setup is superior, but as soon as you use spinning rust that's irrelevant anyway since HDDs are way too slow for this).

Now testing again, but this time with one of the two reasonable choices when wasting one disk for redundancy: again the mdraid RAID-10 device, but this time with a btrfs on top (for data integrity, and for data protection when doing snapshots all day long):

Code: Select all

                                                              random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       1     3763     3037   223922   657666   447085     2521
          102400       4    27674    33646    42521    43289    26435    35954
          102400      16    69758    80059    90165    91577    69693    75640
          102400     512   138534   144257   138421   142427   138831   144227
          102400    1024   149075   149322   146503   148795   148224   146601
          102400   16384   152691   153621   334315   344131   345739   151952
          409600   16384   152944   151900   338289   343261
         4096000   16384   145200   126609   343853   347311
(the other reasonable choice would be ZFS with a zmirror -- or, when using 4 disks, one zmirror made out of the two USB3 disks and another out of the two SATA-attached disks, thrown into a single pool to add up sequential and also random IO performance)
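
For reference, the btrfs-on-top-of-mdraid variant tested above can be set up roughly like this -- a minimal sketch reusing the /dev/md127p1 partition created earlier (mount point and subvolume name are placeholders):

Code: Select all

# replace the ext4 from the earlier test with btrfs on the same RAID-10 partition
mkfs.btrfs -f /dev/md127p1
mkdir -p /mnt/raid10
mount /dev/md127p1 /mnt/raid10
# keep data in a subvolume so it can be snapshotted later
btrfs subvolume create /mnt/raid10/data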
Last edited by tkaiser on Mon Feb 26, 2018 3:01 pm, edited 1 time in total.

odroid
Site Admin
Posts: 37229
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English, Korean
ODROIDs: ODROID
Has thanked: 1723 times
Been thanked: 1120 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by odroid »

Thank you for the deep and detailed test.
It is really clear now why we should choose RAID-10 instead of RAID-1.

memeka
Posts: 4420
Joined: Mon May 20, 2013 10:22 am
languages_spoken: english
ODROIDs: XU rev2 + eMMC + UART
U3 + eMMC + IO Shield + UART
Has thanked: 2 times
Been thanked: 60 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by memeka »

what's a raid 10 with 2 drives? :O

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

memeka wrote:what's a raid 10 with 2 drives? :O
Something invented in this century instead of the last? The only reasonable choice with 2.5" disks when trying to use mdraid in search of 'redundancy'? Again: https://webcache.googleusercontent.com/ ... of-raid-1/+

It's really a mess that so many people choose stupid RAID-1 setups, especially for the wrong reasons (thinking about data protection, but that's impossible with this mdraid mode; it only tries to provide data availability, and there's neither data protection nor data integrity involved). If only half of the people relying on stupid RAID modes implemented backup instead, there would be way fewer data losses. 'Backup' with modern approaches (stuff invented in this century and not the last) is as easy as creating snapshots automatically and sending them to another disk or even another device. What do SBC users do instead? Waste an entire disk on this RAID-1 or even RAID-5 BS. :(

crossover
Posts: 113
Joined: Wed Jul 22, 2015 2:23 pm
languages_spoken: english
ODROIDs: XU4, C1+, C2, USB-IO, HC2, Tinkering kits
Has thanked: 0
Been thanked: 0
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crossover »

Yeah! Daily or weekly snapshot backups are much more reliable. That was the main reason why I bought two HC2 units: one for the office and the other for home. They share weekly snapshots with each other.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

memeka wrote:what's a raid 10 with 2 drives? :O
Maybe this helps explain the mdraid concept better (it's already linked from my link above, but an awful lot of people don't want to spend a few minutes on any knowledge issue any more): https://marc.info/?l=linux-raid&m=141051176728630&w=2

That's important with HDDs since those are faster on the outer tracks than on the inner ones. People only testing in 'benchmarking gone wrong' mode with empty disks won't realize this. Again: ZBR (zone bit recording) looks like this with 2.5" HDDs:
[Image: ZBR transfer rate graph of a 2.5" HDD]

(Especially) with 2.5" HDDs the mdraid code will keep read speeds above 80 MB/s even with completely full disks, while write performance will be slightly lower over the whole capacity. With a 'traditional' RAID-1 you'll see read/write performance being nice in the beginning and then constantly degrading once you start to use your disks and put data on them. So mdraid's RAID-10 mode with two disks compensates for that and does some magic in the background to improve performance while still providing 100% redundancy.

But it's still the wrong concept since it provides no data integrity and no data protection. At least with a btrfs on top of it you get data integrity. And by skipping those stupid RAID attempts entirely and using two independent btrfs filesystems on two disks, mounting one with 'compress-force=zlib', then letting snapshots be created on the first disk and sent to the 2nd using 'btrfs send|receive', you get real data protection and integrity.
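
A minimal sketch of that snapshot/replication workflow, assuming the two disks are /dev/sda1 and /dev/sdb1; which of the two gets 'compress-force=zlib' is a matter of choice, and all names are placeholders:

Code: Select all

# two independent btrfs filesystems, no RAID involved
mkfs.btrfs /dev/sda1
mkfs.btrfs /dev/sdb1
mkdir -p /mnt/data /mnt/backup
mount /dev/sda1 /mnt/data
mount -o compress-force=zlib /dev/sdb1 /mnt/backup
# work happens inside a subvolume so it can be snapshotted
btrfs subvolume create /mnt/data/share
# periodically: read-only snapshot on the first disk, replicated to the second
SNAP=share-$(date +%Y%m%d-%H%M)
btrfs subvolume snapshot -r /mnt/data/share /mnt/data/$SNAP
btrfs send /mnt/data/$SNAP | btrfs receive /mnt/backup
# (later runs can use 'btrfs send -p <previous snapshot>' for incremental transfers)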

But I already know it's useless: users will still use the stupid RAID-1 modes, thinking 'my data is safe', not set up any kind of monitoring, and then most probably lose some or all of their data after a while because they used the wrong concept.

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

odroid wrote:Anyway, is there any body who can test the RAID-1 performance with mdadm on the N1?
I ordered another 2.5" 1TB WD Black HDD and a 12V 4A power supply so I can provide some test data.

[edit]
Beyond "benchmarks", I have no idea how to saturate the read/write bandwidth. Suggestions are welcome.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:I ordered another 2.5" 1TB WD Black HDD and a 12V 4A power supply so I can provide some test data.
Those 2.5" HDDs show a spin-up consumption peak of slightly above 1A on the 5V rail. The RK3399 is not able to consume more than 10W even with the heaviest CPU-intensive loads. Hardkernel's 12V/2A PSU is good for 2.5A peak consumption, so we're talking about a 30W peak power budget suitable for at least 3 such 2.5" HDDs under worst-case conditions, or 4 x 2.5" disks in normal situations.

Wrt HDD 'bandwidth': it has been known for decades how HDDs work (ZBR), but people still only play 'benchmarking gone wrong' with these devices. If you buy a 1 TB HDD you might want to use the full capacity, right? So why test only with the disk empty (where sequential performance is up to twice as fast)? It's easy to test with either fallocate or by partitioning the HDD appropriately. I would assume a full WD Black Mobile won't exceed 60 MB/s on the inner tracks?
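
One way to do that, as a minimal sketch: either carve out a partition that only covers the slow inner region of a hypothetical /dev/sda and benchmark that, or pre-fill an existing filesystem with fallocate before testing (device, sizes and paths are assumptions):

Code: Select all

# partition covering only the last 20% of the disk (the inner tracks)
parted -s /dev/sda mklabel gpt
parted -s /dev/sda mkpart primary ext4 80% 100%
mkfs.ext4 /dev/sda1
mkdir -p /mnt/inner
mount /dev/sda1 /mnt/inner
cd /mnt/inner && iozone -e -I -a -s 100M -r 16384k -i 0 -i 1

# alternative: reserve most of the capacity first, then benchmark what is left
fallocate -l 800G /mnt/disk/filler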

If WD Mobile disks behave like WD desktop drives (no idea, I did not find any h2benchw graphs/numbers), then performance on the inner tracks is even worse than usual, since the desktop drives show less than 50% of the outer-track performance:
[Images: h2benchw transfer rate graphs of WD desktop HDDs, outer vs. inner tracks]

memeka
Posts: 4420
Joined: Mon May 20, 2013 10:22 am
languages_spoken: english
ODROIDs: XU rev2 + eMMC + UART
U3 + eMMC + IO Shield + UART
Has thanked: 2 times
Been thanked: 60 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by memeka »

@tkaiser
I see. This solves my confusion. But saying RAID 1 is crap and RAID 10 is the solution is wrong without specifying *mdadm*,
because this is really a limitation of, and a workaround for, mdadm.
I am running RAID 1 (not mdadm) and I do get 2x read speeds. And I use it for availability and protection - I want to be able to continue working after a drive crashes, w/o restoring a backup; and I want it to fix itself (after I change the drive) easily while I'm not there (eg gone home). This is all within the realm of what RAID 1 is supposed to do, and does :)

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:Beyond "benchmarks", I have no idea how to saturate the read/write bandwidth. Suggestions are welcome.
What about the simple real-world 'NAS use case'? Users are excited about Gigabit Ethernet and NAS throughput numbers that exceed 100 MB/s. Take your two 2.5" HDDs, create a stupid RAID-1 and enjoy read performance dropping down to maybe even half of what's possible. That requires testing the actual use case, of course: using disks to store data rather than leaving them empty. Only empty disks show nice sequential RAID-1 numbers; disks that are used to store data become slower the more capacity is used. But for whatever reason the majority of 'benchmarks' out there test irrelevant stuff (an empty disk) that has no relation to reality (disks being used to store data).

Why am I talking about read performance above when write performance with a stupid RAID-1 is also affected? Because filesystem buffers exist, and with default settings you might be able to write up to 3GB to an N1 NAS at full speed (above 100 MB/s) before the kernel decides to flush filesystem buffers to disk. If Hardkernel decides to do an 'N1 light' with just 2GB RAM, then we're talking about ~1.5GB that can be pushed at full network interface speed before performance drops down to the IO 'performance' one layer below.
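
The buffered-write behaviour described above is governed by the kernel's dirty-page thresholds, which default to percentages of RAM. A minimal sketch for inspecting them and capping them to fixed byte values so benchmarks hit the disk sooner (the example values are arbitrary):

Code: Select all

# current thresholds (percent of RAM unless the *_bytes variants are set)
sysctl vm.dirty_ratio vm.dirty_background_ratio
# cap the buffer: start background flushing at 64 MB, throttle writers at 256 MB of dirty data
sysctl -w vm.dirty_background_bytes=67108864
sysctl -w vm.dirty_bytes=268435456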

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

memeka wrote:But saying raid 1 is crap and raid 10 is the solution is wrong, without specifying *mdadm*.
Neither of the two is an appropriate 'solution' for what the average user will misuse the concept for. It's only that if I use mdraid and want some redundancy for whatever reason, I'd better choose mdraid's raid-10 mode, since its performance is not as crappy.

It's 2018 and users still don't understand that RAID is not backup. RAID-1 / RAID-10 provide neither data protection nor data integrity. There are better options now, like ZFS/zmirror or btrfs' own RAID-1 implementation. Then the 'wasted' disk not only provides increased availability but also data integrity and self-healing in case silent bit rot occurs (a real problem the average user is not even aware of, since they're focused on anachronistic concepts from the last century and, with inappropriate filesystems, always notice bit rot / data corruption way too late).
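
For the integrity/self-healing part: checksummed filesystems verify data on every read, and a periodic scrub walks the whole filesystem; with a btrfs RAID-1 a detected bad copy can be repaired from the good mirror. A minimal sketch (mount point is a placeholder):

Code: Select all

# verify all data and metadata checksums in the background
btrfs scrub start /mnt/data
# check progress and the number of checksum errors found/corrected
btrfs scrub status /mnt/data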

And with snapshots, plus another disk/device that the snapshots are sent to, data protection is also in place.

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

tkaiser wrote:Users are excited about Gigabit Ethernet and NAS throughput numbers that exceed 100MB/s.
Those numbers are going to be more affected by MTU (jumbo frames) and protocol (SMB2/3, SFTP, NFS) than by SATA. A single WD Black "benches" at 150 MB/s on the N1 when I tested it.

See also: http://rickardnobel.se/actual-throughpu ... -ethernet/

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:
tkaiser wrote:Users are excited about Gigabit Ethernet and NAS throughput numbers that exceed 100MB/s.
Those numbers are going to be more affected by MTU (jumbo frames) and protocol (SMB2/3, SFTP, NFS) than SATA.
Well, SFTP is a good example of the 'wrong protocol' since, without taking care which cipher is used, it will easily be bottlenecked by single-threaded CPU performance on a slow SBC anyway. Jumbo frames... nope. It's about parallelism to get Windows Explorer or macOS Finder to display something above 100 MB/s: https://www.helios.de/web/EN/support/TI/157.html
crashoverride wrote:A single WD Black "benches" at 150MBs on N1 when I tested it.
So that's the 'I don't want to store any data on my HDD' use case? I don't understand this use case, since if I buy an HDD I will for sure put data on it. And to get an idea of real-world performance I would need to adjust benchmarks that only test irrelevant stuff by default (that's the hdparm-and-dd-on-empty-disks BS everyone uses). So how's performance when you fill the WD Black to 80% (easy: partition it accordingly)?
That's the funniest 'explanation' I've ever come across of why jumbo frames could matter. It only looks at the payload-to-frame-size ratio without even mentioning the stuff that really matters:
* the number of interrupts/packets to process (that's the reason we went with jumbo frames decade(s) ago: the CPU cores in our servers back then were not as fast as those on today's SBCs)
* round-trip times / latency

We only use jumbo frames in 10GbE/40GbE backbones any more and nowhere else, since they're not worth the hassle, especially when a lot of mobile users connect wirelessly and need to access the same resources.

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

tkaiser wrote:So that's the 'I don't want to store any data on my HDD' use case?
It's the "I am not benchmarking a filesystem" use case.

[edit]
Why is the distinction important? I use various filesystems for different needs. Each will perform differently. Knowing how the drive itself performs gives an indication of how any given filesystem will perform (assuming knowledge of the filesystem). The bare-drive benchmark is the more useful metric for me. It may not be for others. I have no interest in debating benchmark dogma.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

memeka wrote:i want to be able to continue working after a drive crashes, w/o restoring a backup; and I want it to fix itself (after I change the drive) easily while I’m not there (eg gone home). This is all within the realm of raid 1 is supposed to do, and does :)
The majority of mdraid RAID-1 users are replacing backup with RAID, which couldn't be more wrong (easy to test: sudo rm -rf / -- don't do this if you only have RAID and no tested backups; all your data will be gone).

What I'm missing is data integrity, since it's 2018 and we get this for free simply by no longer using anachronistic approaches. None of the 'traditional' RAID-1 implementations deal well (or at all) with data mismatches between the two disks. Even if you run a check (and, with newer kernels, a repair at the same time), mdraid will not complain but merely report that there might be mismatches:

Code: Select all

echo 'check' > /sys/block/md127/md/sync_action
watch -n 5 cat /proc/mdstat
(the only exception: when there are bad blocks on a disk -- only then will mdraid take action, mark the bad block and try to use the data from the other disk). How often does your RAID-1 scrub itself, since it's not mdraid based? Every month? More often? Never?
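
For completeness, the result of such a check ends up in a sysfs counter -- a minimal sketch, again assuming the array is md127:

Code: Select all

echo check > /sys/block/md127/md/sync_action
cat /proc/mdstat                         # progress of the check
cat /sys/block/md127/md/mismatch_cnt     # non-zero = the mirrors disagree somewhere,
                                         # but md cannot tell which copy is the correct one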

To provide some numbers again, I created a btrfs RAID-1 out of an EVO840 and an EVO750. First with both disks on the SATA ports, then with the EVO750 moved into a JMS567 enclosure:

Code: Select all

BTRFS RAID1 SATA/SATA                                         random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       1     4248     3054   316705   643786   435420     2561
          102400       4    29349    33493    44383    44319    27021    34245
          102400      16    71317    81191    91958    92268    69416    78098
          102400     512   140607   162521   142464   146034   142852   160634
          102400    1024   153953   160024   145013   147115   146065   161042
          102400   16384   163937   160917   349460   358109   352623   166767
          409600   16384   160593   164919   349347   347243
         4096000   16384   155710   131588   351616   350538

BTRFS RAID1 SATA/USB3                                         random    random
              kB  reclen    write  rewrite    read    reread    read     write
          102400       1     3291     2505   174818   638842   417773     1982
          102400       4    21145    26348    29102    29112    20330    25879
          102400      16    66375    73479    71782    72045    57449    72088
          102400     512   183396   205107   148973   140237   134086   211841
          102400    1024   204088   255647   176437   177949   169514   149661
          102400   16384   229663   252407   362106   376795   378064   239237
          409600   16384   237860   262379   347353   352964
         4096000   16384   213188   134933   372032   372795
This RAID-1 variant provides data integrity since it's a 'checksummed' filesystem. Some observations look interesting (e.g. slower sequential performance when both disks are behind the PCIe bus) but are more or less irrelevant for real-world scenarios. It should be noted, though, that when reading from this RAID-1 the kernel chooses only one disk to read data and metadata (checksums) from. So btrfs' own RAID-1 implementation would be outperformed by an mdraid RAID-10 with btrfs on top, at least with HDDs (with SSDs it obviously doesn't matter).

But looking at btrfs performance and features is always problematic, since almost all btrfs code lives inside the kernel and the situation with a mainline kernel (which most NAS-centric RK3399 devices will probably run later on) might look totally different compared to Rockchip's 4.4 LTS now.
Last edited by tkaiser on Mon Feb 26, 2018 8:09 pm, edited 1 time in total.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:
tkaiser wrote:So that's the 'I don't want to store any data on my HDD' use case?
Its the "I am not benchmarking a filesystem" use case.
Again funny. I would call this total ignorance of how every HDD out there works (again: way SLOWER on the inner tracks regardless of which filesystem is used). Discussing hdparm BS numbers is really just a waste of time. But also fun, so let's continue :)

Let's 'not benchmark a filesystem' 6 times :lol:

Code: Select all

/dev/sda:
 Timing buffered disk reads: 1066 MB in  3.00 seconds = 354.80 MB/sec

/dev/sda:
 Timing buffered disk reads: 944 MB in  3.00 seconds = 314.25 MB/sec

/dev/sda:
 Timing buffered disk reads: 916 MB in  3.00 seconds = 305.24 MB/sec
What's the right value? Again:

Code: Select all

/dev/sda:
 Timing buffered disk reads: 956 MB in  3.00 seconds = 318.16 MB/sec

/dev/sda:
 Timing buffered disk reads: 944 MB in  3.00 seconds = 314.44 MB/sec

/dev/sda:
 Timing buffered disk reads: 946 MB in  3.01 seconds = 314.72 MB/sec
What's the difference? The first time the 'benchmark' ran on cpu4 (a big core), the next time on cpu0. And now the funniest part: if I add a filesystem and, instead of an inappropriate tool called hdparm, choose an appropriate benchmark tool where I can adjust what really matters (block sizes), then the impossible happens: 390 MB/sec. Does adding a filesystem magically increase the device's performance? Nope, it's just that hdparm for whatever reason uses a block size too small to read data efficiently, while a good benchmark tool clearly shows this dependency since it allows specifying the block size(s) to test with.

TL;DR: "I am not benchmarking a filesystem" is BS, and the same goes for all hdparm and dd 'benchmark' numbers you find on the net unless at least the program version (for hdparm) and the whole command line (for dd) are also given -- to see which bs parameter dd has been called with. bs=1 count=1048576, bs=1K count=1024 and bs=1M count=1 will produce totally different results, while only bs=16M count=100 or larger produces numbers that say anything about maximum sequential throughput. With hdparm and its 128K default block size you can't do much other than throw the numbers away.
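
To illustrate the block-size dependency, a minimal sketch of raw-device reads with GNU dd (hypothetical /dev/sda; iflag=direct bypasses the page cache so the numbers reflect the device, not RAM; counts adjusted so each run reads a comparable amount of data):

Code: Select all

dd if=/dev/sda of=/dev/null bs=1K  count=1048576 iflag=direct   # tiny blocks: mostly measures request overhead
dd if=/dev/sda of=/dev/null bs=1M  count=1024    iflag=direct   # better, still not the maximum
dd if=/dev/sda of=/dev/null bs=16M count=100     iflag=direct   # large blocks: approaches real sequential throughput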

BTW: I do benchmark filesystems, since it matters depending on the use case. And one of the many pretty obvious observations when benchmarking filesystems on the same hardware is that the filesystem simply doesn't matter when it comes to HDDs and their sequential performance drop. I'm already curious how you want to 'benchmark' RAID-1 performance with 2 pieces of spinning rust using your 'I don't benchmark the software stack' approach (which is honestly all you're testing when looking at RAID-1 'performance') :)

BTW: I set /proc/sys/dev/raid/speed_limit_max to 800000 (~781 MB/s) to see how fast a RAID-10 can be resynced on the N1 with 2 SSDs: ~400 MB/s was achieved, but of course this will be bottlenecked by disk performance later when HDDs and not SSDs are used. At least the N1 is capable of handling a bunch of fast disks.

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

tkaiser wrote:I would call this total ignorance how every HDD out there works
I feel like we have done this dance before. :lol:

Trust me. I know how an HDD works. Again, I have no interest in debating this minutia. It's a waste of everyone's time here. Should that ever change, I will create an account on the Armbian forums and post a diatribe about how "I am right and everyone else is wrong or stupid or both".

Until that time, there is a sufficient amount of information about benchmarking already stated. I trust that everyone here is adult enough to make up their own minds and/or perform their own experiments to determine what practices are best suited for them. Odroid forum members do not need anyone to "police" their thoughts.

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

crashoverride wrote:Again, I have no interest in debating this minutia.
The goal of the discussion here is in the "debug party" context. This means that all I care about is the fastest performance number (sequential read). This number tells me whether the N1 is operating within expected tolerance. Write speed and random I/O are irrelevant because they do not tell me anything meaningful about the PCIe+SATA interface. The hardware that I am interested in testing is the N1, not the HDD or SSD. The RAID performance number is expected to tell me whether dual SATA is operating within expected tolerance by exercising both ports simultaneously and predictably.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

crashoverride wrote:This means that all I care about is the fastest performance number (sequential read).
It gets funnier and funnier. You're trying to check for the 'fastest performance numbers' by buying slow 2.5" HDDs that bottleneck your 'test'? You need a bunch of fast SSDs known to exceed all interface bottlenecks that might be present on the N1 (SSDs known to exceed 500 MB/s for both read and write). Unfortunately I don't have that kind of hardware lying around (mine is only fast enough for reads), but the tests you seem to be interested in have long been finished and valid conclusions can be drawn since the individual limitations are well known: https://forum.armbian.com/topic/6496-od ... ment=49400

We know about the USB3 'per port' limitation (~395 MB/s), we know about the SATA 'per port' limitation (~385 MB/s), and we know for sure that there's a 'per port group' limitation applying to both SATA ports since the ASM1061 is the bottleneck here. What we don't know is whether the same applies to both USB3 ports, or whether it's possible to exceed 400 MB/s with 2 very fast USB3 disks. Random IO matters too of course, depending on the use case (and the SATA ports with disabled PCIe power management are the better choice, even if USB3 looks better when only staring at those laughably stupid maximum sequential read numbers).

This 'discussion' is more or less an exercise in ignorance and has in the meantime turned into your usual 'us vs. them' game :lol:

In the context of the latest 'discussion' here it's not about any N1 bottleneck but about HDD behaviour and why RAID-1 cannot provide superior performance compared to mdraid's RAID-10. By using appropriate benchmark tools that reflect real-world usage it's pretty easy to understand this.

Back on topic (crappy mdraid RAID-1 performance and not-so-well-known N1 bottlenecks): I found another 500 GB 2.5" HDD lying around and have now attached one 500GB disk to each SATA port. One is a Samsung SpinPoint M7 and the other is an Apple-branded HGST rotating slightly faster. SMART output for both: http://ix.io/PLM

With RAID-1 a resync starts on the outer tracks and then walks over the whole capacity to the inner tracks, where everything gets slower and slower. It started at a resync rate of just above 70 MB/s (vs. ~400 MB/s with my SSDs before):

Code: Select all

root@odroid:~# cat /proc/mdstat
Personalities : [linear] [multipath] [raid0] [raid1] [raid6] [raid5] [raid4] [raid10]
md127 : active raid1 sda[1] sdb[0]
      488255488 blocks super 1.2 [2/2] [UU]
      [>....................]  resync =  0.4% (2316224/488255488) finish=111.8min speed=72382K/sec
      bitmap: 4/4 pages [16KB], 65536KB chunk

unused devices: <none>
As expected it gets slower and slower as the process moves from the outer tracks to the inner ones (the same will happen later when accessing data on the RAID-1, depending on the capacity used -- that's why choosing mdraid's raid-10 instead is the better idea when you're interested in both redundancy and performance):

Code: Select all

      [==>..................]  resync = 12.8% (62607744/488255488) finish=104.4min speed=67918K/sec
      [=====>...............]  resync = 26.9% (131793920/488255488) finish=92.1min speed=64494K/sec
      [========>............]  resync = 42.5% (207593792/488255488) finish=78.5min speed=59541K/sec
Stopping now since both test and 'discussion' are a useless waste of time :)

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

tkaiser wrote:And you try to check for 'fastest performance numbers' by buying slow 2.5" HDDs that bottleneck your 'test'?
The N1 is (at the time of writing) a $110 ARM development board. I am not attempting nor expecting to replace a $1000 server with it.

Multiple people were provided with N1 samples to test. The selection of participants represented a diversity of "use cases". My tests are suited to my anticipated uses.

The dual 2.5" HDD scenario is a realistic configuration that many others will also use. The WD Blacks are also not "cheap crap" drives that would produce abnormally deficient test results. The point to emphasize again is that the intent is not to "squeeze every possible last bit per second" out of a bus or storage device (others are testing that). It is "does the N1 perform as expected with this configuration?" If it does not, then it should be investigated. If it does, then this item is "crossed off the list" and forgotten. My only interest in SATA is that the Linux USB-UAS headaches are no longer an issue. Performance is secondary to correctness.

In conclusion, we all have N1 samples because we use them differently. Your goals are not necessarily my goals and vice versa. It's only an 'us vs. them' game if you continue to tell "us" that we are all using our N1 devices "wrong" because we are not using them like "them".

Conduct your tests. Share your data. The constant condemnation is best left to the Armbian forums. It's really OK to just "agree to disagree". Diversity of opinions and uses (even if "wrong") is the key here. Otherwise, HardKernel could have saved a lot of time and money and just sent a single N1 to @tkaiser.

elatllat
Posts: 1858
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2, C4, N2+, HC4
Has thanked: 59 times
Been thanked: 132 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by elatllat »

crashoverride wrote:...My only interest in SATA is that the Linux USB-UAS headaches are no longer an issue...
I also plan to use my N1 with a bunch of USB spinners. I had to disable UAS on the XU4 to get SMART working.
Because the N1 uses an older kernel (4.4 vs 4.14) but a newer smartctl (6.6 vs 6.5), I guess I'll test that...

The first thing that happened was that the N1 crashed as soon as I turned on the drives. I upgraded, rebooted, connected the debugger, and it did not crash, so on with the testing. And yes, (predictably) I still have to disable UAS to use SMART:

Code: Select all


N1> uname -r
4.4.112-16
N1> ls -1 /dev/sd* 
/dev/sda
/dev/sda1
/dev/sda2
/dev/sdb
/dev/sdb1
/dev/sdb2
N1> smartctl -a /dev/sda
smartctl 6.6 2016-05-31 r4324 [aarch64-linux-4.4.112-16] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

/dev/sda: Unknown USB bridge [0x0bc2:0xab38 (0x100)]
Please specify device type with the -d option.

Use smartctl -h to get a usage summary

N1> smartctl -a -d sat /dev/sda
smartctl 6.6 2016-05-31 r4324 [aarch64-linux-4.4.112-16] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

Read Device Identity failed: scsi error unsupported field in scsi command

A mandatory SMART command failed: exiting. To continue, add one or more '-T permissive' options.
N1> echo 0bc2:3312:u,0bc2:ab38:u > /sys/module/usb_storage/parameters/quirks
N1> smartctl -a /dev/sda
smartctl 6.6 2016-05-31 r4324 [aarch64-linux-4.4.112-16] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

/dev/sda: Unknown USB bridge [0x0bc2:0xab38 (0x100)]
Please specify device type with the -d option.

Use smartctl -h to get a usage summary

N1> smartctl -a -d sat /dev/sda
smartctl 6.6 2016-05-31 r4324 [aarch64-linux-4.4.112-16] (local build)
Copyright (C) 2002-16, Bruce Allen, Christian Franke, www.smartmontools.org

=== START OF INFORMATION SECTION ===
Device Model:     ST8000DM004-2CX188
Serial Number:    WCT06ZYV
LU WWN Device Id: 5 000c50 0ab3ad4e6
Firmware Version: 0001
User Capacity:    8,001,563,222,016 bytes [8.00 TB]
Sector Sizes:     512 bytes logical, 4096 bytes physical
Rotation Rate:    5425 rpm
Device is:        Not in smartctl database [for details use: -P showall]
ATA Version is:   ACS-3 T13/2161-D revision 5
SATA Version is:  SATA 3.1, 6.0 Gb/s (current: 3.0 Gb/s)
Local Time is:    Mon Feb 26 22:05:33 2018 UTC
SMART support is: Available - device has SMART capability.
SMART support is: Enabled

=== START OF READ SMART DATA SECTION ===
SMART overall-health self-assessment test result: PASSED

General SMART Values:
Offline data collection status:  (0x82)	Offline data collection activity
					was completed without error.
					Auto Offline Data Collection: Enabled.
Self-test execution status:      (   0)	The previous self-test routine completed
					without error or no self-test has ever 
					been run.
Total time to complete Offline 
data collection: 		(    0) seconds.
Offline data collection
capabilities: 			 (0x7b) SMART execute Offline immediate.
					Auto Offline data collection on/off support.
					Suspend Offline collection upon new
					command.
					Offline surface scan supported.
					Self-test supported.
					Conveyance Self-test supported.
					Selective Self-test supported.
SMART capabilities:            (0x0003)	Saves SMART data before entering
					power-saving mode.
					Supports SMART auto save timer.
Error logging capability:        (0x01)	Error logging supported.
					General Purpose Logging supported.
Short self-test routine 
recommended polling time: 	 (   1) minutes.
Extended self-test routine
recommended polling time: 	 ( 963) minutes.
Conveyance self-test routine
recommended polling time: 	 (   2) minutes.
SCT capabilities: 	       (0x30a5)	SCT Status supported.
					SCT Data Table supported.

SMART Attributes Data Structure revision number: 10
Vendor Specific SMART Attributes with Thresholds:
ID# ATTRIBUTE_NAME          FLAG     VALUE WORST THRESH TYPE      UPDATED  WHEN_FAILED RAW_VALUE
  1 Raw_Read_Error_Rate     0x000f   068   065   006    Pre-fail  Always       -       5779524
  3 Spin_Up_Time            0x0003   095   095   000    Pre-fail  Always       -       0
  4 Start_Stop_Count        0x0032   100   100   020    Old_age   Always       -       6
  5 Reallocated_Sector_Ct   0x0033   100   100   010    Pre-fail  Always       -       0
  7 Seek_Error_Rate         0x000f   100   253   045    Pre-fail  Always       -       2736
  9 Power_On_Hours          0x0032   100   100   000    Old_age   Always       -       0 (32 7 0)
 10 Spin_Retry_Count        0x0013   100   100   097    Pre-fail  Always       -       0
 12 Power_Cycle_Count       0x0032   100   100   020    Old_age   Always       -       6
183 Runtime_Bad_Block       0x0032   100   100   000    Old_age   Always       -       0
184 End-to-End_Error        0x0032   100   100   099    Old_age   Always       -       0
187 Reported_Uncorrect      0x0032   100   100   000    Old_age   Always       -       0
188 Command_Timeout         0x0032   100   100   000    Old_age   Always       -       0
189 High_Fly_Writes         0x003a   100   100   000    Old_age   Always       -       0
190 Airflow_Temperature_Cel 0x0022   074   074   040    Old_age   Always       -       26 (Min/Max 26/26)
191 G-Sense_Error_Rate      0x0032   100   100   000    Old_age   Always       -       0
192 Power-Off_Retract_Count 0x0032   100   100   000    Old_age   Always       -       5
193 Load_Cycle_Count        0x0032   100   100   000    Old_age   Always       -       6
194 Temperature_Celsius     0x0022   026   040   000    Old_age   Always       -       26 (0 23 0 0 0)
195 Hardware_ECC_Recovered  0x001a   068   065   000    Old_age   Always       -       5779524
197 Current_Pending_Sector  0x0012   100   100   000    Old_age   Always       -       0
198 Offline_Uncorrectable   0x0010   100   100   000    Old_age   Offline      -       0
199 UDMA_CRC_Error_Count    0x003e   200   200   000    Old_age   Always       -       0
240 Head_Flying_Hours       0x0000   100   253   000    Old_age   Offline      -       0 (134 224 0)
241 Total_LBAs_Written      0x0000   100   253   000    Old_age   Offline      -       1925097
242 Total_LBAs_Read         0x0000   100   253   000    Old_age   Offline      -       3854427

SMART Error Log Version: 1
No Errors Logged

SMART Self-test log structure revision number 1
No self-tests have been logged.  [To run self-tests, use: smartctl -t]

SMART Selective self-test log data structure revision number 1
 SPAN  MIN_LBA  MAX_LBA  CURRENT_TEST_STATUS
    1        0        0  Not_testing
    2        0        0  Not_testing
    3        0        0  Not_testing
    4        0        0  Not_testing
    5        0        0  Not_testing
Selective self-test flags (0x0):
  After scanning selected spans, do NOT read-scan remainder of disk.
If Selective self-test is pending on power-up, resume after 0 minute delay.
Also, predictably, RAID 0 doubles disk IO:

Code: Select all

#die chirp die: lower thermal trip point 0 to 20°C so the fan stays on instead of cycling on/off
echo 20000 > /sys/devices/virtual/thermal/thermal_zone0/trip_point_0_temp

#tools
apt install iozone3 mdadm lvm2


# ext4 test
umount /dev/sda2
parted -s /dev/sda mklabel GPT
parted -s /dev/sda mkpart primary 0% 100%
mkfs.ext4 /dev/sda1
mkdir /media/a
mount /dev/sda1 /media/a
#
umount /dev/sdb2
parted -s /dev/sdb mklabel GPT
parted -s /dev/sdb mkpart primary 0% 100%
mkfs.ext4 /dev/sdb1
mkdir /media/b
mount /dev/sdb1 /media/b
dd if=/dev/urandom of=/media/a/8GB bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 714.17 s, 11.7 MB/s
sync
echo 3 > /proc/sys/vm/drop_caches;
dd if=/media/a/8GB of=/dev/null bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 66.84 s, 126 MB/s
sync
echo 3 > /proc/sys/vm/drop_caches;
dd if=/media/a/8GB of=/media/b/8GB bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 107.437 s, 78.1 MB/s


#mdadm test
/media/a
/media/b
mdadm --create --verbose /dev/md0 --level=0 --raid-devices=2 /dev/sda /dev/sdb
mkfs.ext4 /dev/md0
mount /dev/md0 /media/a
dd if=/dev/urandom of=/media/a/8GB bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 713.147 s, 11.8 MB/s
sync
echo 3 > /proc/sys/vm/drop_caches;
dd if=/media/a/8GB of=/dev/null bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 32.7441 s, 256 MB/s

iozone /media/a/iozone
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.429 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Mon Feb 26 23:35:38 2018

	Command line used: iozone /media/a/iozone
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4    94151   371292   724325   681782   492790   294264   499553    339533    503890   262566   400291   636894   870552

iozone test complete.

iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 /media/a/iozone
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.429 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Mon Feb 26 23:37:07 2018

	Include fsync in write timing
	O_DIRECT feature enabled
	Auto Mode
	File size set to 102400 kB
	Record Size 4 kB
	Record Size 16 kB
	Record Size 512 kB
	Record Size 1024 kB
	Record Size 16384 kB
	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 /media/a/iozone
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    17798    25435    23123    22837    18944    15379                                                          
          102400      16    41028    47256    64696    65311    58925    27742                                                          
          102400     512    47283    43894   225596   225781   214621    28995                                                          
          102400    1024    41300    48878   241992   244695   232694    23774                                                          
          102400   16384    42883    48129   272953   276410   272909    35300                                                          

iozone test complete.

#cleanup
umount /dev/md0
mdadm --stop /dev/md0
mdadm --remove /dev/md0
mdadm --zero-superblock /dev/sda
mdadm --zero-superblock /dev/sdb


#lvm
dd if=/dev/zero of=/dev/sda bs=10M count=1
dd if=/dev/zero of=/dev/sdb bs=10M count=1
#edit /etc/lvm/lvm.conf
#	global_filter = [ "a|^/dev/sd*|", "r|/dev/mmcblk1rpmb|" ]
pvcreate /dev/sda
pvcreate /dev/sdb
vgcreate vg1 /dev/sda /dev/sdb
lvcreate --extents 100%FREE --type raid0 --name lv1 vg1
mkfs.ext4 /dev/mapper/vg1-lv1
mount /dev/mapper/vg1-lv1 /media/a
dd if=/dev/urandom of=/media/a/8GB bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 712.748 s, 11.8 MB/s
sync
echo 3 > /proc/sys/vm/drop_caches;
dd if=/media/a/8GB of=/dev/null bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 31.3478 s, 268 MB/s	
iozone /media/a/iozone
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.429 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Tue Feb 27 00:36:58 2018

	Command line used: iozone /media/a/iozone
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4   150850   289230   709019   771143   581846   409918   460852    337505    525082   222035   355790   636894   768934

iozone test complete.
iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 /media/a/iozone
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.429 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Tue Feb 27 00:37:11 2018

	Include fsync in write timing
	O_DIRECT feature enabled
	Auto Mode
	File size set to 102400 kB
	Record Size 4 kB
	Record Size 16 kB
	Record Size 512 kB
	Record Size 1024 kB
	Record Size 16384 kB
	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 /media/a/iozone
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    14751    23124    23201    22269    19856    16940                                                          
          102400      16    44283    43104    69305    69045    57577    25740                                                          
          102400     512    45545    42772   234054   235051   222939    48512                                                          
          102400    1024    48950    49071   245967   248551   238788    47939                                                          
          102400   16384    49708    50164   278977   279438   277012    50135                                                          

iozone test complete.

#cleanup
umount /media/a
vgchange -an vg1

#btrfs
mkfs.btrfs -f -d raid0 /dev/sda /dev/sdb
mount /dev/sda /media/a
dd if=/dev/urandom of=/media/a/8GB bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 705.341 s, 11.9 MB/s
sync
echo 3 > /proc/sys/vm/drop_caches;
dd if=/media/a/8GB of=/dev/null bs=10M count=800
	800+0 records in
	800+0 records out
	8388608000 bytes (8.4 GB, 7.8 GiB) copied, 32.7757 s, 256 MB/s
iozone /media/a/iozone

	Iozone: Performance Test of File I/O
	        Version $Revision: 3.429 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Tue Feb 27 01:04:40 2018

	Command line used: iozone /media/a/iozone
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
             512       4   646286  1118011  2253852  3029721  2796910  1306409  1711407   1600443   2055390  1193175   373423   645703   711133

iozone test complete.
iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 /media/a/iozone
	Iozone: Performance Test of File I/O
	        Version $Revision: 3.429 $
		Compiled for 64 bit mode.
		Build: linux 

	Contributors:William Norcott, Don Capps, Isom Crawford, Kirby Collins
	             Al Slater, Scott Rhine, Mike Wisner, Ken Goss
	             Steve Landherr, Brad Smith, Mark Kelly, Dr. Alain CYR,
	             Randy Dunlap, Mark Montague, Dan Million, Gavin Brebner,
	             Jean-Marc Zucconi, Jeff Blomberg, Benny Halevy, Dave Boone,
	             Erik Habbinga, Kris Strecker, Walter Wong, Joshua Root,
	             Fabrice Bacchella, Zhenghua Xue, Qin Li, Darren Sawyer,
	             Vangel Bojaxhi, Ben England, Vikentsi Lapa.

	Run began: Tue Feb 27 01:05:20 2018

	Include fsync in write timing
	O_DIRECT feature enabled
	Auto Mode
	File size set to 102400 kB
	Record Size 4 kB
	Record Size 16 kB
	Record Size 512 kB
	Record Size 1024 kB
	Record Size 16384 kB
	Command line used: iozone -e -I -a -s 100M -r 4k -r 16k -r 512k -r 1024k -r 16384k -i 0 -i 1 -i 2 /media/a/iozone
	Output is in kBytes/sec
	Time Resolution = 0.000001 seconds.
	Processor cache size set to 1024 kBytes.
	Processor cache line size set to 32 bytes.
	File stride size set to 17 * record size.
                                                              random    random     bkwd    record    stride                                    
              kB  reclen    write  rewrite    read    reread    read     write     read   rewrite      read   fwrite frewrite    fread  freread
          102400       4    21864    25682    22808    23097    21411    19575                                                          
          102400      16    44830    43563    67467    67460    66283    39495                                                          
          102400     512    45217    39932   236080   239781   228882    45869                                                          
          102400    1024    46781    39146   228191   256778   247185    48293                                                          
          102400   16384    49667    50075   293038   292482   289833    50081                                                          

iozone test complete.
#cleanup
umount /media/a
LVM (268 MB/s) was a bit quicker than mdadm and btrfs (both 256 MB/s), but with an N of 1 that could just be noise or a sub-optimal block size.
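
If I rerun this without the /dev/urandom bottleneck I'd probably use something like fio, which generates its own test data instead of burning CPU on the PRNG (untested sketch, same mount point as above):

Code: Select all

apt install fio
# sequential write then read, 8 GiB, O_DIRECT so the page cache does not inflate the numbers
fio --name=seqwrite --directory=/media/a --rw=write --bs=1M --size=8G --direct=1
fio --name=seqread --directory=/media/a --rw=read --bs=1M --size=8G --direct=1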

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

elatllat wrote:I had to disable UAS on the XU4 to get "SMART" working.
You had 'a problem' with kernel 4.14, not with the 'XU4'. The same patch that affects SMART readouts in upstream 4.14 is also in RK's 4.4 kernel: viewtopic.php?f=153&t=30246

In UAS mode on Linux, SAT ATA pass-through is now disabled if the vendor ID is 0x0bc2, since the majority of Seagate USB3 disk products are broken here. Unfortunately this breaks SMART functionality, as already mentioned in the smartmontools FAQ: https://www.smartmontools.org/wiki/USB

You might be able to query SMART data by temporarily allowing SAT ATA pass-through (setting no quirk and later switching back to the 't' flag):

Code: Select all

echo '0bc2:3312,0bc2:ab38' > /sys/module/usb_storage/parameters/quirks
smartctl -a -d sat /dev/sda
echo '0bc2:3312:t,0bc2:ab38:t' > /sys/module/usb_storage/parameters/quirks
Not tested, and this is my last post here; I try to collect possibly useful info somewhere else, e.g. https://forum.armbian.com/topic/6496-od ... ment-50088

BTW:
elatllat wrote:Also predictably RAID 0 doubles disk IO
Well, you showed that you managed to create various sorts of stripe implementations (mdraid, LVM, btrfs) that are all limited to horribly low sequential write performance (not exceeding ~50 MB/s). If your dd write 'benchmark' using /dev/urandom was not bottlenecked by the CPU (pseudo random number generator), then this would be a nice improvement by a factor of 4 (and the disks should be considered garbage). But usually a disk that can read at 126 MB/s should be able to write at at least 100 MB/s. In other words: your RAID 0 shows much lower write performance than the individual disks, and it would take some active benchmarking to figure out why (UAS blacklisting? XHCI host controller issues? USB ports bottlenecking each other?), but unfortunately vendor communities hate active benchmarking :)

http://www.brendangregg.com/activebenchmarking.html
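
A first active-benchmarking step here would be ruling out the pseudo random number generator before blaming the disks, and watching the disks and CPU cores while a test runs (untested sketch):

Code: Select all

# how fast can the CPU produce pseudo random data at all?
dd if=/dev/urandom of=/dev/null bs=10M count=800
# per-disk utilisation and waits while a benchmark is running
apt install sysstat
iostat -xm 5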

elatllat
Posts: 1858
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2, C4, N2+, HC4
Has thanked: 59 times
Been thanked: 132 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by elatllat »

tkaiser;
Changing quirks requires the USB device to be replugged, which reduces the usefulness of temporary measures,
but you are right; there may be a better way than disabling UAS completely just to get SMART data.
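
(Instead of physically replugging it should be possible to unbind/rebind the device from sysfs after changing the quirks; untested, and '4-1' below is only a placeholder for the real bus id:)

Code: Select all

# the bus id of the enclosure can be found under /sys/bus/usb/devices/
echo '4-1' > /sys/bus/usb/drivers/usb/unbind
echo '4-1' > /sys/bus/usb/drivers/usb/bind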

I thought you would be happy that I used your recommended iozone command, happy enough to ignore the obvious limitations (urandom, spinning disks) and focus on the advantages (no possible compression or cache speedup, many TB on the cheap). Had you recommended some active benchmarking command I would have used it too.

Well I could use LVMCache to boost those numbers when I get around to ordering more SSDs.
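
For reference, a minimal lvmcache sketch (untested, assuming a third device /dev/sdc as the SSD and the vg1/lv1 volume from the tests above):

Code: Select all

pvcreate /dev/sdc
vgextend vg1 /dev/sdc
# carve a cache pool out of the SSD and attach it to the existing LV
lvcreate --type cache-pool -L 100G -n cpool vg1 /dev/sdc
lvconvert --type cache --cachepool vg1/cpool vg1/lv1
# detach again later, flushing dirty blocks back to the spinning disks
lvconvert --uncache vg1/lv1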

Does someone have a link explaining why eMMCs have an mmcblk?rpmb? (I guess eMMCs are not common enough for LVM to blacklist them by default.)

crashoverride
Posts: 5315
Joined: Tue Dec 30, 2014 8:42 pm
languages_spoken: english
ODROIDs: C1
Has thanked: 0
Been thanked: 433 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by crashoverride »

elatllat wrote:someone have a link explaining why eMMCs have a mmcblk?rpmb ? (I guess eMMCs are not common enough for LVM to blacklist them by default)
It's part of the eMMC specification and a special protected area of the storage device.
https://lwn.net/Articles/682276/
Replay Protected Memory Block (RPMB)
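
It is not readable like a normal partition; access goes through authenticated frames, e.g. via mmc-utils (untested sketch, the device name depends on how the eMMC enumerates):

Code: Select all

apt install mmc-utils
ls -l /dev/mmcblk*                       # the rpmb device shows up next to boot0/boot1
mmc rpmb read-counter /dev/mmcblk1rpmb   # read the RPMB write counter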

elatllat
Posts: 1858
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2, C4, N2+, HC4
Has thanked: 59 times
Been thanked: 132 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by elatllat »

Thanks.

ASword
Posts: 218
Joined: Fri Aug 04, 2017 12:48 pm
languages_spoken: english
ODROIDs: XU4, HC1, 2x N2
Has thanked: 14 times
Been thanked: 6 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by ASword »

I certainly didn't expect such a wide-ranging and informative thread when I clicked on it. Thanks guys, this is going to be useful in the future. Virtually all the discussion is centered around using these devices as file-servers though. I almost hesitate to ask, but does anyone have any experience (and benchmarks) with running databases on these systems? I'm using my HC-1 as a time-series database & compute server, and looking forward to the N-1 for more compute muscle, more memory, 64-bit address space, and dual SSDs.


As for the new N1 hardware, my main ask is for a modified HC-1 case that can hold the fanless N1 and 2x 2.5" SATA SSDs. Ideally I could just slide the two drives in and have them click into place like for the HC-1.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

ASword wrote:I almost hesitate to ask, but does anyone have any experience (and benchmarks) with running databases on these systems?
Well, as far as IO is concerned, databases need high random IO performance. You'll find some numbers in my 'ODROID N1 not a review' thread in the Armbian forum (which seems to be down right now). It's important to switch the PCIe power management setting from powersave to performance to get really decent performance with small block sizes, and if your databases are hosted on a filesystem it's of course essential that filesystem block size and database record size match properly. When you use SSDs (the only reasonable choice, of course), partition alignment is also important for good performance.

Then you should pin the database threads to the two big cores and you're done. For such workloads currently the RK3399 shows the best 'bang for the buck' ratio if it's about ARM. Marvell Armada 7K/8K might provide much better performance in this area (quad core A72 and really performant IO) but to my knowledge the cheapest you could currently get is this here: https://www.solid-run.com/product/macch ... e-shot-4g/
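
For the two knobs above, roughly (untested; /usr/sbin/mysqld is only an example binary, and on RK3399 the two A72 cores are usually cpu4/cpu5):

Code: Select all

# PCIe link power management: powersave -> performance
echo performance > /sys/module/pcie_aspm/parameters/policy
# pin the database process to the big cores
taskset -c 4,5 /usr/sbin/mysqld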

ASword
Posts: 218
Joined: Fri Aug 04, 2017 12:48 pm
languages_spoken: english
ODROIDs: XU4, HC1, 2x N2
Has thanked: 14 times
Been thanked: 6 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by ASword »

Well db performance in my case isn’t paramount and I will have 3-4 other compute processes that will be running at equal or higher priority. But that’s not really relevant here — I’m mostly interested in the recommended setup across two SSDs (there will be a separate backup NAS).

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

ASword wrote:the recommended setup across two SSDs
What's that? Which recommendation?

ASword
Posts: 218
Joined: Fri Aug 04, 2017 12:48 pm
languages_spoken: english
ODROIDs: XU4, HC1, 2x N2
Has thanked: 14 times
Been thanked: 6 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by ASword »

tkaiser wrote:
ASword wrote:the recommended setup across two SSDs
What's that? Which recommendation?
Given an N-1 with 2 SATA SSDs, how best to configure w/ BTRFS and compression turned on? The discussion of RAID types above just highlights how much I don’t know and it’s not where I want to spend time learning the nitty gritty.

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

ASword wrote:Given an N-1 with 2 SATA SSDs, how best to configure w/ BTRFS and compression turned on?
With databases in mind you usually want neither btrfs nor compression. Please do a simple web search for 'database cow btrfs' -- btrfs implements copy-on-write (CoW), which you usually do not want with databases due to the way they update their contents on disk. With CoW disabled, checksumming no longer works either, and compression with 'hot data' like databases has its own downsides. I would most probably put databases on ext4 (without LVM and snapshots below, since that instantly halves random IO performance), but since I love data integrity, databases with important stuff are only hosted on systems with ECC DRAM anyway.
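
If btrfs gets used anyway, CoW can at least be switched off for the database directory, which also disables checksumming and compression for those files (untested sketch, the path is just an example):

Code: Select all

mkdir -p /media/a/db
chattr +C /media/a/db    # nodatacow; must be set while the directory is still empty
lsattr -d /media/a/db    # should list the 'C' attribute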

Wrt 2 SSDs on the N1... while the ODROID community is busy doing funny things around the SATA controller on the board, there's something that needs real attention: checking behaviour with 2 SSDs, NCQ and TRIM: viewtopic.php?f=149&t=30307 (but I've given up on this; dealing with the 'vendor community syndrome' is not worth the time or effort)
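
For anyone who wants to check at least the TRIM side themselves, an untested sketch:

Code: Select all

apt install hdparm
lsblk --discard /dev/sda             # non-zero DISC-GRAN/DISC-MAX means discard is advertised
hdparm -I /dev/sda | grep -i trim    # what the drive itself reports
fstrim -v /media/a                   # try an actual discard on a mounted filesystem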

Just as a reminder for people thinking about RAID-1: in my opinion this is a horrible idea in general (do backups instead of RAID), but when done with identical SSDs it gets stupid. SSDs die for totally different reasons than HDDs. This has to be taken into account, since SSD life expectancy is predictable when you buy good SSDs and not cheap crap. Those good SSDs know about the wear happening inside and expose it through SMART. So you simply monitor the relevant SMART attribute and replace the drive once the value indicates it's soon over.

SSDs die due to electrical/hardware reasons (which will most probably affect both SSDs at the same time when they're connected to both SATA ports) or funny firmware failures. We've seen stuff like SSDs becoming read-only or losing all their data after a certain amount of data has been written or after a certain number of power-on hours. So combining the same SSD model in a RAID-1, especially when running exactly the same firmware revision, will give you zero availability since both will fail at (almost) the same time. And there's no data protection or data integrity with RAID-1 anyway, even if an awful lot of users believe the opposite for no reason.
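
Monitoring that wear attribute is a one-liner; the attribute names differ per vendor, so the grep pattern below is only an example:

Code: Select all

smartctl -A /dev/sda | grep -i -E 'wear_leveling|wearout|percent_lifetime|remaining'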

tkaiser
Posts: 673
Joined: Mon Nov 09, 2015 12:30 am
languages_spoken: english
ODROIDs: C1+, C2, XU4, HC1
Has thanked: 0
Been thanked: 5 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by tkaiser »

Addendum: (almost) all of the btrfs code lives inside the kernel. While 4.4 is recent enough to consider the shipped btrfs 'stable', anything that really relies on filesystem features or performance may have improved a lot in the meantime. So for such use cases I would always switch to a mainline kernel and drop the idea of using the vendor/BSP kernel.
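
(Quick check of what is actually running, since kernel and btrfs-progs versions can drift apart:)

Code: Select all

uname -r         # kernel side, where the actual btrfs implementation lives
btrfs version    # userspace btrfs-progs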

elatllat
Posts: 1858
Joined: Tue Sep 01, 2015 8:54 am
languages_spoken: english
ODROIDs: XU4, N1, N2, C4, N2+, HC4
Has thanked: 59 times
Been thanked: 132 times
Contact:

Re: Any HC1/HC2 setup based on the new N1 hardware?

Post by elatllat »

Yeah, btrfs had data-loss bugs when I tried it on 4.9, and the chance of RAID helping is small. LVM without snapshots is helpful if you expect to keep adding disks to your volume or want to use cheap spinning disks with a fast solid-state cache.
