[HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

[HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

Although I'm using Mellanox InfiniBand for my local Intranet(*), where an H2 can communicate at 10 to 14 Gbps depending on direction, there are two other ways to do 10G with the Odroid H2 if this is what you want to achieve.

(*) see https://magazine.odroid.com/article/net ... odroid-h2/ and viewtopic.php?f=172&t=38711. Yes, this is quite exotic (at first), but it works very well and it is cheaper than the current RJ-45 10GBase-T solutions you can find (as of this writing).

1. The RJ-45 10GBase-T way: use an M.2 NVMe M-key to PCIe x4 adapter(**) and plug in an RJ-45 10G PCIe card. See https://www.newegg.com/p/pl?d=10+Gbe+PC ... sdeptsrh=1 and https://www.amazon.com/s?k=10+Gbe+PCIe+card. This will cost you from $75 to $250+ depending on brand, features and number of ports.

(**) see https://www.ebay.com/itm/PCIe-x4-3-0-PC ... 4398886092

WARNING: not all PCIe cards will work with the PCIe 2.0 x4 available on the H2. Your mileage may vary. Dig into the Odroid forums to leverage the experience of other users.

At this point, if you only have one computer to connect to your H2, just connect the two directly with a CAT6A cable (max length 100 m = 330 ft) and you're done (see the sketch below). If you want to connect many more computers, you'll need a switch, and it will cost you $$$ (from about $800 at best up to $2,500+ depending on the number of ports and capabilities). Common brands are slowly climbing the 1+ G mountain. Models with 2.5GBase-T, 5GBase-T and 10GBase-T ports are appearing each month. Expensive. IMHO: wait for the no-brand models from China, South Korea and Taiwan. Most of them will work just as well... because most of them are exactly the same hardware minus the brand sticker.
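
For the direct-connection case, each side just needs a static IP in the same subnet. A minimal sketch, assuming Ubuntu and a 10G interface showing up as enp1s0 (your interface name will differ, check with ip link):

Code: Select all

# On the H2 (not persistent across reboots; use nmtui or netplan for that)
sudo ip addr add 172.16.25.69/16 dev enp1s0
sudo ip link set enp1s0 up

# On the other computer: same subnet, different host address
sudo ip addr add 172.16.25.36/16 dev enp1s0
sudo ip link set enp1s0 up

# Verify
ping -c 3 172.16.25.36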

If you are planning to have a mix of 2.5, 5 and 10G devices for the next few years, RJ-45 10GBase-T is the way to go for minimizing hassles and headaches.

2. A more exotic way is to choose an SFP+ solution.

Cards: You can still find perfectly functional 10G SFP+ NICs on eBay for less than $20. Example: https://www.ebay.com/itm/SolarFlare-SFN ... 3129025454. The SolarFlare SFN5122F was one of the 10 GbE networking workhorses in data centers years ago. There are more recent models if you want to spend more than $20 (but still much less than $75, depending on the features you need). The big data centers today are standardizing on 100 or 200 GbE. From their point of view, 10G is like an antique from the Merovingian period. But what is trash for some is treasure for others (the others being us here). There is a cornucopia of used 10, 25, 40 GbE or InfiniBand 56 Gbps networking hardware on eBay. From our point of view, this is the Golden Age of reusing this hardware for individual purposes. Yes, there is at least something trickling down in our dear US of A...

SFP+ cables: A 3 m SFP+ DAC (Direct Attach Cable) copper cable can be found cheap on eBay too. See for example: https://www.ebay.com/sch/i.html?_from=R ... =0&_sop=15

WARNING: 10 GbE DAC cables are not all equal. A given brand may or may not work with particular models of NICs. The Molex SFP-H10Gb-Cu3M works fine with the SolarFlare SFN5122F. Note: did the first cable I bought work with the card? Nope! Then I did my homework. The 2nd cable I bought works AOK :-)
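
Before ordering a second cable, it can help to see what the NIC actually detects in its SFP+ cage. A sketch using ethtool's module EEPROM dump (not all NICs/drivers support it, in which case the command just errors out):

Code: Select all

# Dump the SFP+ module/DAC EEPROM: vendor, part number, cable type/length
sudo ethtool -m enp1s0f1np1

# The kernel log is another place where SFP+/module issues show up
sudo dmesg | grep -i -e sfp -e module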

Switches: There is also a plethora of 10G SFP+ switches on eBay, starting from $70 and going up to $600 and more. Do your homework: check how many ports are actually 10G SFP+; the cheap ones will often have 2 or 4 x 10G SFP+ ports plus 24 or 48 x 1G ports. They were used to connect a corporate department's computers to the backbone of the corporate Intranet, or racks to the backbone network in a data center.

WARNING: Another issue to deal with when going the SFP+ route is the fact that the various forms of 2.5G and 5G networking (i.e. BASE-T) are relatively recent, and the "old" used 10G hardware you can buy on eBay knows nothing about these protocols. In other words, it will NOT negotiate 2.5G or 5G, making it impossible to connect a 2.5G or 5G device.
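
You can see this limitation directly with ethtool: an old SFP+ NIC will only list 1G and 10G link modes, with no 2500/5000 entries (compare with the full ethtool output later in this post):

Code: Select all

ethtool enp1s0f1np1 | grep -A2 'Supported link modes'
#	Supported link modes:   1000baseT/Full
#	                        10000baseT/Full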

I did not want to spend too much time finding the perfect "old" switch, so I settled on a very recent one from MikroTik (a Latvian company well known in the networking business).

MikroTik CRS317-1G-16S+ Product Page
https://mikrotik.com/product/crs317_1g_16s_rm
MSRP: $399.00

Review from STH
https://www.servethehome.com/mikrotik-c ... rm-review/

On Amazon: $349.90
https://www.amazon.com/Cloud-Router-Swi ... B0747TC9DB

On NewEgg: $369.00
https://www.newegg.com/mikrotik-crs317- ... 002R-000D9

On eBay: $328.76 + $22.00 shipping
https://www.ebay.com/itm/Mikrotik-CRS31 ... 3710103713

I was lucky to stumble on an auction for a new one, which I got for $275 + shipping + tax = $313.66.
ebay-bidding.png
Hands-on
See the H2 and switch in the picture shown below.
H2-and-SFP+-switch.png
See the SolarFlare SFN5122F connected to the H2 via the M.2 / PCIe adapter cable in the picture shown below:
SolarFlare SFN5122F.png
The SFP+ cable connects the SolarFlare SFN5122F to the first port of the switch:
switch ports 1 and 2 in use.png
Note the red CAT6A cable connected to the SFP+ switch. This "magic" is possible thanks to a small piece of hardware (a transceiver) which converts SFP+ signals to RJ-45 signals and back. This thing was also bought on eBay. See https://www.ebay.com/itm/10G-SFP-RJ45-C ... 4214937752. There are multiple brands and models of such transceivers. For a general overview, see this article and the related reviews from STH: https://www.servethehome.com/sfp-to-10g ... ers-guide/

With everything set up, one can see the connections in the switch's web-based admin:
admin-1.png
CONTINUED IN NEXT POST

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

CONTINUING FROM PREVIOUS POST.

In the screenshot shown below, the first device is the H2 with the SolarFlare SFN5122F, and the second device is a PC whose motherboard has an onboard Aquantia 10G Ethernet.
admin-2.png
Here is a detailed view of the Ipolex transceiver allowing the Aquantia AQC-107 10GBase-T to talk with the SFP+ switch:
ipolex-1.png
ipolex-2.png
Let's now see how the SolarFlare SFN5122F shows up on the H2 with lspci:

Code: Select all

domih@h2a:~$ lspci
00:00.0 Host bridge: Intel Corporation Device 31f0 (rev 03)
.../...
01:00.0 Ethernet controller: Solarflare Communications SFC9020 10G Ethernet Controller
01:00.1 Ethernet controller: Solarflare Communications SFC9020 10G Ethernet Controller
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
03:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8111/8168/8411 PCI Express Gigabit Ethernet Controller (rev 0c)
Let's check whether or not the card uses all 4 lanes (it does, see the LnkSta line):

Code: Select all

domih@h2a:~$ sudo lspci -vv -s 01:00.0 | grep Width 
		LnkCap:	Port #0, Speed 5GT/s, Width x8, ASPM L0s L1, Exit Latency L0s <512ns, L1 <64us
		LnkSta:	Speed 5GT/s, Width x4, TrErr- Train- SlotClk+ DLActive- BWMgmt- ABWMgmt-
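
As a sanity check, the numbers add up: PCIe 2.0 runs at 5 GT/s per lane with 8b/10b encoding, so each lane carries 5 x 8/10 = 4 Gbit/s of payload, and the x4 link therefore offers 16 Gbit/s per direction, comfortably above what a single 10G port needs.
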
After configuring the NIC IP settings with nmtui, let's see the result (see the enp1s0f1np1 section):

Code: Select all

domih@h2a:~$ ip a
1: lo: <LOOPBACK,UP,LOWER_UP> mtu 65536 qdisc noqueue state UNKNOWN group default qlen 1000
    link/loopback 00:00:00:00:00:00 brd 00:00:00:00:00:00
    inet 127.0.0.1/8 scope host lo
       valid_lft forever preferred_lft forever
    inet6 ::1/128 scope host 
       valid_lft forever preferred_lft forever
2: enp2s0: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc fq_codel state UP group default qlen 1000
    link/ether 00:1e:06:45:0d:47 brd ff:ff:ff:ff:ff:ff
    inet 192.168.1.70/24 brd 192.168.1.255 scope global noprefixroute enp2s0
       valid_lft forever preferred_lft forever
    inet6 fe80::7938:96d2:adee:503b/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
3: enp3s0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc fq_codel state DOWN group default qlen 1000
    link/ether 00:1e:06:45:0d:48 brd ff:ff:ff:ff:ff:ff
4: enp1s0f0np0: <NO-CARRIER,BROADCAST,MULTICAST,UP> mtu 1500 qdisc mq state DOWN group default qlen 1000
    link/ether 00:0f:53:07:89:80 brd ff:ff:ff:ff:ff:ff
5: enp1s0f1np1: <BROADCAST,MULTICAST,UP,LOWER_UP> mtu 1500 qdisc mq state UP group default qlen 1000
    link/ether 00:0f:53:07:89:81 brd ff:ff:ff:ff:ff:ff
    inet 172.16.25.69/16 brd 172.16.255.255 scope global noprefixroute enp1s0f1np1
       valid_lft forever preferred_lft forever
    inet6 fe80::af67:8d14:c279:c9da/64 scope link noprefixroute 
       valid_lft forever preferred_lft forever
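
For the record, the same static IP configuration can be scripted with nmcli instead of the interactive nmtui; a sketch (the connection name "10g" is arbitrary):

Code: Select all

sudo nmcli con add type ethernet ifname enp1s0f1np1 con-name 10g \
    ipv4.method manual ipv4.addresses 172.16.25.69/16
sudo nmcli con up 10g
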
Is the H2 "seeing" the switch? It does:

Code: Select all

domih@h2a:~$ ping 172.16.25.1
PING 172.16.25.1 (172.16.25.1) 56(84) bytes of data.
64 bytes from 172.16.25.1: icmp_seq=1 ttl=255 time=0.188 ms
64 bytes from 172.16.25.1: icmp_seq=2 ttl=255 time=0.141 ms
64 bytes from 172.16.25.1: icmp_seq=3 ttl=255 time=0.326 ms
64 bytes from 172.16.25.1: icmp_seq=4 ttl=255 time=0.142 ms
64 bytes from 172.16.25.1: icmp_seq=5 ttl=255 time=0.335 ms
^C
--- 172.16.25.1 ping statistics ---
5 packets transmitted, 5 received, 0% packet loss, time 4102ms
rtt min/avg/max/mdev = 0.141/0.226/0.335/0.087 ms
Is the card configured for 10G with its link up? It is:

Code: Select all

domih@h2a:~$ ethtool enp1s0f1np1
Settings for enp1s0f1np1:
	Supported ports: [ FIBRE ]
	Supported link modes:   1000baseT/Full 
	                        10000baseT/Full 
	Supported pause frame use: Symmetric Receive-only
	Supports auto-negotiation: No
	Supported FEC modes: Not reported
	Advertised link modes:  Not reported
	Advertised pause frame use: No
	Advertised auto-negotiation: No
	Advertised FEC modes: Not reported
	Link partner advertised link modes:  Not reported
	Link partner advertised pause frame use: No
	Link partner advertised auto-negotiation: No
	Link partner advertised FEC modes: Not reported
	Speed: 10000Mb/s
	Duplex: Full
	Port: FIBRE
	PHYAD: 255
	Transceiver: internal
	Auto-negotiation: off
Cannot get wake-on-lan settings: Operation not permitted
	Current message level: 0x000020f7 (8439)
			       drv probe link ifdown ifup rx_err tx_err hw
	Link detected: yes
Is the H2 "seeing" the PC with the Aquantia? It does:

Code: Select all

domih@h2a:~$ ping 172.16.25.36
PING 172.16.25.36 (172.16.25.36) 56(84) bytes of data.
64 bytes from 172.16.25.36: icmp_seq=1 ttl=64 time=0.534 ms
64 bytes from 172.16.25.36: icmp_seq=2 ttl=64 time=0.296 ms
64 bytes from 172.16.25.36: icmp_seq=3 ttl=64 time=0.250 ms
64 bytes from 172.16.25.36: icmp_seq=4 ttl=64 time=0.264 ms
64 bytes from 172.16.25.36: icmp_seq=5 ttl=64 time=0.299 ms
64 bytes from 172.16.25.36: icmp_seq=6 ttl=64 time=0.259 ms
64 bytes from 172.16.25.36: icmp_seq=7 ttl=64 time=0.254 ms
64 bytes from 172.16.25.36: icmp_seq=8 ttl=64 time=0.326 ms
^C
--- 172.16.25.36 ping statistics ---
8 packets transmitted, 8 received, 0% packet loss, time 7158ms
rtt min/avg/max/mdev = 0.250/0.310/0.534/0.089 ms
It looks like everything is OK. Let's measure the throughput with iperf3:

Code: Select all

domih@h2a:~$ iperf3 -c 172.16.25.36 --bind 172.16.25.69
Connecting to host 172.16.25.36, port 5201
[  4] local 172.16.25.69 port 43329 connected to 172.16.25.36 port 5201
[ ID] Interval           Transfer     Bandwidth       Retr  Cwnd
[  4]   0.00-1.00   sec  1.09 GBytes  9.33 Gbits/sec    0   1.31 MBytes       
[  4]   1.00-2.00   sec  1.09 GBytes  9.33 Gbits/sec    0   1.41 MBytes       
[  4]   2.00-3.00   sec  1.09 GBytes  9.32 Gbits/sec    0   1.41 MBytes       
[  4]   3.00-4.00   sec  1.09 GBytes  9.33 Gbits/sec    0   1.41 MBytes       
[  4]   4.00-5.00   sec  1.08 GBytes  9.30 Gbits/sec    0   1.60 MBytes       
[  4]   5.00-6.00   sec  1.08 GBytes  9.31 Gbits/sec    0   1.60 MBytes       
[  4]   6.00-7.00   sec  1.09 GBytes  9.33 Gbits/sec    0   2.16 MBytes       
[  4]   7.00-8.00   sec  1.07 GBytes  9.22 Gbits/sec    1   1.21 MBytes       
[  4]   8.00-9.00   sec  1.06 GBytes  9.15 Gbits/sec    1   1.28 MBytes       
[  4]   9.00-10.00  sec  1.08 GBytes  9.26 Gbits/sec    0   1.34 MBytes       
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  10.8 GBytes  9.29 Gbits/sec    2             sender
[  4]   0.00-10.00  sec  10.8 GBytes  9.28 Gbits/sec                  receiver

iperf Done.
domih@h2a:~$ iperf3 -c 172.16.25.36 --bind 172.16.25.69 -R
Connecting to host 172.16.25.36, port 5201
Reverse mode, remote host 172.16.25.36 is sending
[  4] local 172.16.25.69 port 37157 connected to 172.16.25.36 port 5201
[ ID] Interval           Transfer     Bandwidth
[  4]   0.00-1.00   sec  1.09 GBytes  9.39 Gbits/sec                  
[  4]   1.00-2.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   2.00-3.00   sec  1.09 GBytes  9.40 Gbits/sec                  
[  4]   3.00-4.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   4.00-5.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   5.00-6.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   6.00-7.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   7.00-8.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   8.00-9.00   sec  1.10 GBytes  9.41 Gbits/sec                  
[  4]   9.00-10.00  sec  1.10 GBytes  9.41 Gbits/sec                  
- - - - - - - - - - - - - - - - - - - - - - - - -
[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec  408             sender
[  4]   0.00-10.00  sec  11.0 GBytes  9.41 Gbits/sec                  receiver

iperf Done.
Yeah! We get a respectable 9.28 Gbits/sec in one direction and 9.41 Gbits/sec in the other.

However, note the high number of segment retries (408) in the reverse test (-R parameter). I will have to do my homework on this one to mitigate the issue. It does not seem to affect the throughput much, though.
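
If you want to experiment with reducing the retries, the usual first step on 10G links is to raise the kernel's TCP buffer limits on both ends and re-run iperf3. A sketch with commonly suggested values (assumptions, not gospel; persist them in /etc/sysctl.conf if they help):

Code: Select all

sudo sysctl -w net.core.rmem_max=67108864
sudo sysctl -w net.core.wmem_max=67108864
sudo sysctl -w net.ipv4.tcp_rmem="4096 87380 67108864"
sudo sysctl -w net.ipv4.tcp_wmem="4096 65536 67108864"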

CONCLUSION: if you are NOT planning to use 2.5G and 5G devices and want to use 10G as soon as possible, an SFP+ switch and a mix of SFP+ and RJ-45 NICs might be the solution you are looking for.

With SFP+, it will cost you about $20 (card) + $15 (3 m DAC cable) + $360 / 16 (switch) = $57.50 per port, or $50 (transceiver) + $15 (CAT6A cable) + $360 / 16 = $87.50 per port (not counting the price of the 10GBase-T NIC when it is onboard a motherboard). This means that you want to go with an SFP+ NIC when the PC does not already have 10G. All you need is a PCIe 2.0 x8 or higher slot (x4 is OK, as shown above). Expect the price of the SFP+ to RJ-45 transceivers to go down.

Other considerations:
- SFP+ DAC cables are cheap enough at 3 m, 5 m or 7 m. For much longer distances, you'll have to go with a fiber cable and corresponding transceivers. In this case, the 100 m max length possible with CAT6A makes the RJ-45 solution better in terms of cost.
- For casual usage, plugging an RJ-45 NIC into a PCIe slot is a simple thing to do (plug and play). The consumer products are easy to use; you will not have to perform tweaking in most cases. Old data center cards based on SFP+ provide many options; the SolarFlare Server Adapter User Guide is 393 pages long. But for the hands-on in this post, I did not even read it: the card's drivers are part of the Linux kernel and I did not need to change settings on the card beyond setting the IP address and mask. It was also plug and play. So an SFP+ solution could be more complex to set up, but it does not have to be. You only need to start tweaking when you do specialized things like VMware servicing or esoteric networking configurations. In the RJ-45 world, it would be the same with cards costing > $200.

Do your homework by comparing this price per port to a full RJ-45 solution. The RJ-45 10G NICs are expensive and the switches even more so. But then again, if you are planning to have a mix of 2.5G, 5G and 10G devices, you will want to use RJ-45 all along and ignore SFP+.

To conclude: which is better for 10G networking? As you might expect, it depends!

HTH.

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by mad_ady »

Great networking overview.
Regarding
However, note the high number of segment retries (408) in the reverse test (-R parameter). I will have to do my homework on this one to mitigate the issue. It does not seem to affect the throughput much, though.
I think packet drops and retransmits are normal and part of TCP's exponential backoff congestion avoidance algorithm. Simply put, TCP will always try (as long as the transmission window allows it) to send more data on a connection until there is packet loss / a need for retransmission. When that happens (based on the current implementation of the backoff algorithm), it will reduce its throughput and ramp it up again slowly until another loss happens, and so on. Unlike UDP, which will happily congest every link.
So packet loss is a normal part of the life of a TCP session. You can see it on 1G iperf tests too.
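
For the curious, you can check (and swap) the congestion control algorithm the kernel applies, then compare iperf3 retransmit counts; a sketch (bbr is only available if your kernel ships the tcp_bbr module):

Code: Select all

sysctl net.ipv4.tcp_congestion_control
sysctl net.ipv4.tcp_available_congestion_control
sudo sysctl -w net.ipv4.tcp_congestion_control=bbr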

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

mad_ady wrote:
Wed Jun 10, 2020 3:09 pm
Great networking overview.
Regarding
However, note the high number of segment retries (408) in the reverse test (-R parameter). I will have to do my homework on this one to mitigate the issue. It does not seem to affect the throughput much, though.
I think packet drops and retransmits are normal and part of TCP's exponential backoff congestion avoidance algorithm. Simply put, TCP will always try (as long as the transmission window allows it) to send more data on a connection until there is packet loss / a need for retransmission. When that happens (based on the current implementation of the backoff algorithm), it will reduce its throughput and ramp it up again slowly until another loss happens, and so on. Unlike UDP, which will happily congest every link.
So packet loss is a normal part of the life of a TCP session. You can see it on 1G iperf tests too.
I will have to read more about it because I'm really used to NOT seeing segment retries. With InfiniBand, I only saw them when testing a damaged cable. A handful is acceptable, but several hundred means a potential issue, like for instance what I saw when testing the 2.5 GbE USB adapters. But once again, I'll have to read about it; contrary to what it may seem, I am definitely NOT a networking guy ;-)

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by kingtut »

@domih Can the Solarflare, or even an RJ45 10 GbE NIC, be powered by the onboard Odroid H2 SATA power connecting to the M.2 to PCIe breakout adapter?

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

kingtut wrote:
Sun Aug 23, 2020 5:06 am
@domih Can the Solarflare, or even an RJ45 10 GbE NIC, be powered by the onboard Odroid H2 SATA power connecting to the M.2 to PCIe breakout adapter?
I don't know. The cards I used during testing were SolarFlare S5122-R66; the 5122 is stated to "typically" use 4.9 W. What "typical" means beats me.

solarflare-5122.png
So the next question is to find out the max power the H2/H2+ SATA III power connector can deliver. If OK, then test at idle and at full speed with iperf3.

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

kingtut wrote:
Sun Aug 23, 2020 5:06 am
@domih Can the Solarflare, or even an RJ45 10 GbE NIC, be powered by the onboard Odroid H2 SATA power connecting to the M.2 to PCIe breakout adapter?
To give a more precise answer: yes, you can power a Solarflare 10 GbE NIC from the H2/H2+ SATA power connectors.

I just tried it. Of the two regular Hardkernel SATA power cables connecting to the H2/H2+ board, one is forked with a split extension cable to power the HDD and SSD. I connected the M.2 adapter's floppy-style female to SATA male power cable to the other Hardkernel SATA power cable on the H2/H2+ board. Works fine: I ran multiple Samba tests, no issue. The 5 W NIC does not overload the H2+ 12 V rail.

CONCLUSION: The 15V/4A power brick provides 60W; as long as the H2/H2+ board + disk(s) + network card together stay at, let's say, <= 50W, you're fine. Otherwise, if necessary, upgrade to a 19V 90W power brick as described in other threads.

For a 10 GbE RJ-45 NIC, I don't know, I don't have one. I guess it is OK AS LONG AS the rule stated in the CONCLUSION is respected.
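
As a rough worked example with illustrative numbers (assumptions, your parts will differ): an H2+ board under load at ~15 W + two SATA drives at ~10 W + the 5 W NIC comes to ~30 W, well within the 60 W budget of the 15V/4A brick.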

Re: Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

A timid first salvo at democratizing 10GbE at home:

TP-Link's Cheap 5-port and 8-port 10GbE Switches Now Available
https://www.tomshardware.com/news/tp-li ... gbe-switch

IMHO, the "cheap" in the title is candid:
- $275 MSRP for the 5 ports.
- Projected $450 for the 8 ports.

Product pages
https://www.tp-link.com/en/home-network ... /#overview
https://www.tp-link.com/en/home-network ... tl-sx1008/

In the article, the comparison with the ASUS XG-U2008 is pointless because that one is a 2x10GbE + 8x1GbE switch, and there is an endless series of switches offering such a mix.

IMHO, as of this writing, the cheapest solution is still to look for used 10Gb switches on eBay and choose the one best adapted to your needs. Example: the Aruba Networks S3500-24P 24-port PoE Gigabit Switch (S3500-4x10G) at $135, with 24 x 1GbE RJ-45 + 4 x 10GbE SFP+ ports, coupled with used SolarFlare cards and cheap SFP+ cables for the 4 lucky PCs... or H2+ :-) See https://www.ebay.com/sch/i.html?_nkw=10gb+switch

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

Inside the MikroTik CRS317-1G-16S+ and silencing the two fans

The two fans inside the switch can be noisy, so I replaced them with two 12V Noctua NF-A4x20. Mission accomplished: the switch is now quasi-silent even with the two fans at full speed.

The construction of the MikroTik CRS317-1G-16S+ is of pretty good quality and servicing is very easy: 5 screws and you can slide the top cover open; my rating is 10/10. The fan plugs are standard :-)

Internal view of the switch

01-inside-the-switch.png

The noisy stock fans

02-the-noisy-beasts.png

Noctua replacement
Using the rubber fasteners to minimize noise from resonance.

03-noctua-1.png
04-noctua-2.png

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

Other 10GbE card opportunities

If you are not a fan of the Marvell/Aquantia chipset, there are used Intel-based cards (X540, RJ-45) at sub-$100 on eBay, see https://www.ebay.com/sch/i.html?_nkw=in ... 45&_sop=15

Same thing for Intel-based SFP+ cards (X520), again on eBay, see https://www.ebay.com/sch/i.html?_nkw=in ... fp&_sop=15

WARNING
Intel-based NICs on eBay: make sure you identify the counterfeit ones, see https://www.bing.com/search?q=counterfeit+intel+10g+nic. Note that these used cards are already out of warranty, so if a user says they might be counterfeit but work fine, you might not care anyway...

Again, if you go with SFP+, the used SolarFlare cards are inexpensive and all over the place.

Note: drivers for all these cards are probably already present in upstream Linux kernels. Same for Windows via updates.
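
To check that a driver is present before buying, you can query the module shipped with your kernel; a sketch (sfc is the Solarflare driver, ixgbe the Intel X520/X540 one):

Code: Select all

modinfo sfc   | head -n 3
modinfo ixgbe | head -n 3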

Last but not least: the 10G NICs on eBay probably ran for quite a few years (a decade+?) in a data center, so re-pasting the heat sink is a good idea.

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by elatllat »

I just had a nearby lightning strike induce voltage in an Ethernet cable and take out ports on some gear. The Odroids were all fine as they are not grounded, but it's making me consider optical or metal-shielded RJ45...

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by rooted »

elatllat wrote:I just had a nearby lightning strike induce voltage in an Ethernet cable and take out ports on some gear. The Odroids were all fine as they are not grounded, but it's making me consider optical or metal-shielded RJ45...
Yeah, I had a nearby lightning strike take out the Ethernet on one of my XU4s; I have to use USB Ethernet now :/

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

Nothing stops capitalism (read: greed). The SolarFlare SFN5122F Dual Port 10GbE cards I got for less than $20 on eBay a year and a half ago are now floating around eBay at $35.

I guess the IC shortage of 2021 traveled back in time to affect the manufacturing of these cards in the 2000s *&^%$#@! :D

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

You want to have 10 GbE on your computer but you are out of PCIe slots for a 10 GbE PCIe card?

If you have a free NVMe PCIe Gen 3 x2 or x4 slot, you can use this new thing:

https://www.techradar.com/news/this-tin ... ur-mini-pc
https://www.innodisk.com/en/products/em ... /egpl-t101

Do not expect a nice low price. Regular "consumer" 10 GbE PCIe cards range from $100 to $500++.

While the card is compatible with PCIe Gen 2, you won't get 10 Gbps out of a Gen 2 link, maybe 7.5 Gbps at best.
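
The arithmetic, assuming the card only gets a Gen 2 x2 link: 5 GT/s x 8/10 encoding x 2 lanes = 8 Gbit/s raw, and after PCIe protocol overhead roughly 7 to 7.5 Gbit/s usable.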

Reminder: a 1 GbE or 2.5 GbE chipset can get "warm"; a 10 GbE one will get quite a bit hotter.

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

More new 10 GbE hardware that can auto-negotiate down to 2.5 and 5 GbE, at relatively affordable pricing:

TP-Link JetStream 8-Port 10GE SFP+ L2+ Managed Switch
https://www.tp-link.com/us/business-net ... l-sx3008f/
https://www.newegg.com/p/0E6-002W-007R1 $249

QNAP QHora-301W AX & 10Gbe
https://nascompares.com/2020/11/12/qnap ... er-review/
https://www.newegg.com/qnap-qhora-301w- ... 6833831028 $329

Dual 10GbE router supports WiFi 6 (AX3600)
https://www.cnx-software.com/2021/12/23 ... -6-ax3600/
https://www.acelink.com.tw/products_det ... br-6889ax/ $???

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by rooted »


I'm interested in this router and hope the price isn't crazy

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

For H2, H2+, H3, H3+ owners, there is a new way to do 10 GbE RJ-45 (but still consuming the M.2 slot):

M.2 10GbE network card sells for $86
https://www.cnx-software.com/2023/12/01 ... work-card/

The price already changed to $100.58: https://www.aliexpress.us/item/3256806027140300.html

So do a little bit of searching, and you'll find: https://www.aliexpress.us/item/3256805824691996.html or https://www.aliexpress.us/item/3256805741355936.html

Or: https://www.aliexpress.us/item/3256805550382200.html (if you need a longer cable).

This thingy can also be used in a desktop PC with a free M.2 PCIe Gen 3 x4 slot, thus not consuming a PCIe x16 or x8 slot. Your mileage may vary depending on the mobo.

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by sonix_flex »

Dear domih, dear odroid owners, dear odroid,

I bought the mentioned M.2 10GbE network card to use it with my H3+:

https://www.aliexpress.us/item/3256805741355936.html

and tried to make it work the whole night.

I flashed back to the regular BIOS 1.15 because I used the Odroid netcard before.

I also tried disabling the PCIE Clocks0 Gating option, but the card does not appear in the BIOS Advanced tab.

When I put the Odroid netcard back in, the card appears immediately.

Has someone already got such a card to work? Please help me and give me advice.

Thanks!

SoniX

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

sonix_flex wrote:
Wed Jan 03, 2024 9:54 am
Dear domih, dear odroid owners, dear odroid,

I bought the mentioned M.2 10GbE network card to use it with my H3+:

https://www.aliexpress.us/item/3256805741355936.html

and tried to make it work the whole night.

I flashed back to the regular BIOS 1.15 because I used the Odroid netcard before.

I also tried disabling the PCIE Clocks0 Gating option, but the card does not appear in the BIOS Advanced tab.

When I put the Odroid netcard back in, the card appears immediately.

Has someone already got such a card to work? Please help me and give me advice.

Thanks!

SoniX
Windows or Linux?
If Linux, what does lspci show?
If Windows, 10 or 11? What does the device tree panel show?

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by sonix_flex »

Linux :-)
I couldn't identify any new card.

Code: Select all

Linux pve 6.5.11-7-pve #1 SMP PREEMPT_DYNAMIC PMX 6.5.11-7 (2023-12-05T09:44Z) x86_64

The programs included with the Debian GNU/Linux system are free software;
the exact distribution terms for each program are described in the
individual files in /usr/share/doc/*/copyright.

Debian GNU/Linux comes with ABSOLUTELY NO WARRANTY, to the extent
permitted by applicable law.
Last login: Wed Jan  3 00:24:25 CET 2024 on pts/0
root@pve:~# lspci
00:00.0 Host bridge: Intel Corporation Device 4e28
00:02.0 VGA compatible controller: Intel Corporation JasperLake [UHD Graphics] (rev 01)
00:04.0 Signal processing controller: Intel Corporation Dynamic Tuning service
00:08.0 System peripheral: Intel Corporation Device 4e11
00:14.0 USB controller: Intel Corporation Device 4ded (rev 01)
00:14.2 RAM memory: Intel Corporation Device 4def (rev 01)
00:15.0 Serial bus controller: Intel Corporation Serial IO I2C Host Controller (rev 01)
00:15.1 Serial bus controller: Intel Corporation Serial IO I2C Host Controller (rev 01)
00:16.0 Communication controller: Intel Corporation Management Engine Interface (rev 01)
00:17.0 SATA controller: Intel Corporation Device 4dd3 (rev 01)
00:1a.0 SD Host controller: Intel Corporation Device 4dc4 (rev 01)
00:1c.0 PCI bridge: Intel Corporation Device 4db8 (rev 01)
00:1c.1 PCI bridge: Intel Corporation Device 4db9 (rev 01)
00:1f.0 ISA bridge: Intel Corporation Device 4d87 (rev 01)
00:1f.3 Audio device: Intel Corporation Jasper Lake HD Audio (rev 01)
00:1f.4 SMBus: Intel Corporation Jasper Lake SMBus (rev 01)
00:1f.5 Serial bus controller: Intel Corporation Jasper Lake SPI Controller (rev 01)
01:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
02:00.0 Ethernet controller: Realtek Semiconductor Co., Ltd. RTL8125 2.5GbE Controller (rev 05)
root@pve:~#

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

sonix_flex wrote:
Wed Jan 03, 2024 4:29 pm
Linux :-)
I couldn't identify any new card.
Yeah, that sucks. When PCIe cards do not work with the H2+, H3, H3+, it is usually due to the card's firmware not respecting the PCIe protocol correctly.

Suggestions
1. Could you try this M.2/PCIe adapter in a PC? Just to test that the card actually works. If so, run:

Code: Select all

lspci
Identify the line for the card. Then:

Code: Select all

lspci -vv -s <n.:n.n> | grep Width 
where <n:n.n> is the bus number, device number, and function number of your card (as shown in the lspci output). Post the result here.

2. After booting your H3+, could you run:

Code: Select all

dmesg > ~/dmesg-report.txt
And post the file here? You can also look into it and see if there are a few lines referring to the PCIe card connected to the M.2 slot (see the grep sketch below).

3. Which power supply do you use to power the H3+? Voltage and Watts?
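
For step 2, a quick filter usually surfaces the relevant lines; a sketch (the patterns are just examples, adjust to your card):

Code: Select all

grep -iE 'pcie|aquantia|atlantic' ~/dmesg-report.txt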

HTH

Dominique

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by domih »

4. Check the power consumption (Watts) of the NVMe adapter compared to the max power delivery (Watts) of the H3+ M.2 slot. It could be that the adapter does not even initialize due to lack of power. 10 GbE consumes much more than 1 GbE or 2.5 GbE.

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by sonix_flex »

Hi Dominique,

I finally managed to get the card to work. I use the original 15 V / 4 A power supply (https://www.hardkernel.com/shop/15v-4a- ... y-eu-plug/).
I removed the 2 SATA SSDs and let the system boot from the 128 GB eMMC card into Ubuntu 23.10.1.

I got full 10 GbE support. After a shutdown, I tried the system again with the 2 SATA SSDs, and the card is now working. Don't ask me why or how, but it is working now!

Idle power consumption was 18.81 W with Ubuntu and 22.53 W with Windows 10.
The maximum power consumption with 2 SSDs + eMMC + M.2 10GbE network card was 27.1 W while running a 4K / 60 Hz video on YouTube with the display refresh rate set to 60 Hz in the Ubuntu settings.
All figures are with the performance mode settings in the BIOS.

The card gets quite hot, up to 81 °C.

Thank you for your help! I am happy with the card!

SoniX

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

Hooray, I managed to get ~25Gbit/s of raw network speed and ~20Gbit/s of reads over the network from an NVMe drive on my Odroid H3+ build!

Starting point
Odroid H3+
Corsair Vengeance 64GB RAM (2x32)
2x SATA Samsung 870 QVO drives
Kingston Fury Renegade in the M.2 slot

Power draw is ~17W on boot and ~11W at idle. A quick powertop --auto-tune run drops it to ~7-8W.

Aspiration/Problem
My main requirement was to build a 10Gbit/s+ home network that draws a minimal amount of power, provides maximum uptime for my data management layer, and runs/manages a number of containerized services (e.g. K3s).
I also want to wake up my more powerful PCs from time to time, feed them with data fast (read from the M.2 NVMe SSD, write to the network), and let them hibernate until the next wake-up.

Solution

I settled on 2-port 40Gbit/s QSFP+ ASPM-enabled network cards connecting a maximum of 3 computers with DAC cables (up to 7 meters) in a ring. Realistically, I just don't have more computers that require networking faster than 1/2.5Gbit/s; that is enough even for Wi-Fi 7 to feed laptops/tablets/phones over the radio range. Unless of course your house is equipped with a 10Gbit/s Internet uplink, which is not a requirement for me, at least for the next 5 years (I'm switching from a 100Mbit/s to a 1Gbit/s uplink, and I'm not sure I'll need more than 2-2.5Gbit/s in 5 years).
So my new 40Gbit/s network works alongside the 1/2.5Gbit/s connectivity all computers (and other peripherals) have to the Internet via the provider's router and a simple low-power switch.

My main 24/7 server is Odroid H3+ with 64GB RAM.
Devices tree (number of dashes indicates position in the tree):
- 2x SATA: Samsung 870 QVO SATA SSD 4TB - 2x $150 https://www.samsung.com/nl/memory-stora ... -77q4t0bw/
- M.2: CERRXIAN M.2 Key-M to PCIE X4 slot riser card - $10 https://www.amazon.nl/dp/B09JC2JG75?psc ... ct_details
-- ASM2812-based PCIe 3.0 x4 to 2x M.2 Key-M switch card - $50 https://www.aliexpress.com/item/1005005576318321.html
--- Kingston Fury Renegade 2TB - $130 https://www.kingston.com/en/ssd/gaming/ ... rd%2F2000g
--- CERRXIAN M.2 Key-M to PCIE X4 slot riser card - $10 https://www.amazon.nl/dp/B09JC2JG75?psc ... ct_details
---- Intel XL710-QDA2 - $130 https://www.ebay.com/itm/115608501119

IMPORTANT: a CPU fan blowing on the ASM2812 PCIe switch is an absolute must! Otherwise, the system booted from the NVMe SSD connected to the ASM2812 switch hangs just a few minutes after start, even while idle.

To get the power cabling right, I also used 2x SATA-to-2x-SATA power splitters, feeding the 2 SATA drives from one native Odroid SATA power cable, and the 2 CERRXIAN M.2 cards from the other one.

To test the network speed, I connected this build over a QSFP+ 3m DAC cable to my main PC with another Intel XL710-QDA2.
NB: I used iperf 3.16 on the Odroid, as it uses multiple threads to fully utilize multiple CPU cores, and iperf 3.7 on the desktop.
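
For reference, the tests below were run with invocations of roughly this shape (a sketch; exact options varied per test, and <odroid-ip> stands for the Odroid's QSFP+ address):

Code: Select all

# Server on the Odroid (iperf 3.16, multi-threaded)
iperf3 -s
# Raw network test from the desktop, 4 parallel streams
iperf3 -c <odroid-ip> -P 4
# Same, but with the server storing/serving a file (-F) instead of generated data
iperf3 -s -F /mnt/tmpfs/test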

Simple iperf3 server test yields 25.6Gbit/s between hosts:

Code: Select all

[ ID] Interval           Transfer     Bitrate
[  5]   0.00-10.00  sec  6.89 GBytes  5.92 Gbits/sec
[  8]   0.00-10.00  sec  7.72 GBytes  6.63 Gbits/sec
[ 10]   0.00-10.00  sec  7.62 GBytes  6.54 Gbits/sec
[ 12]   0.00-10.00  sec  7.56 GBytes  6.49 Gbits/sec
[SUM]   0.00-10.00  sec  29.8 GBytes  25.6 Gbits/sec
While iperf3 server writing to a RAM/tmpfs file - 11.8Gbit/s:

Code: Select all

[ ID] Interval           Transfer     Bitrate
        Sent 3.36 GByte / 3.59 GByte (93%) of /mnt/tmpfs/test
[  5]   0.00-10.00  sec  3.36 GBytes  2.89 Gbits/sec                  receiver
        Sent 3.40 GByte / 3.59 GByte (94%) of /mnt/tmpfs/test
[  9]   0.00-10.00  sec  3.40 GBytes  2.92 Gbits/sec                  receiver
        Sent 3.37 GByte / 3.59 GByte (93%) of /mnt/tmpfs/test
[ 12]   0.00-10.00  sec  3.37 GBytes  2.90 Gbits/sec                  receiver
        Sent 3.59 GByte / 3.59 GByte (100%) of /mnt/tmpfs/test
[ 15]   0.00-10.00  sec  3.59 GBytes  3.08 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  13.7 GBytes  11.8 Gbits/sec                  receiver
And an iperf3 server writing to the NVMe Kingston drive located on the same ASM2812 switch as the Intel NIC - 15.5Gbit/s (30% faster than to tmpfs!!!):

Code: Select all

[ ID] Interval           Transfer     Bitrate
        Sent 4.53 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[  5]   0.00-10.00  sec  4.53 GBytes  3.89 Gbits/sec                  receiver
        Sent 4.53 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[  9]   0.00-10.00  sec  4.53 GBytes  3.89 Gbits/sec                  receiver
        Sent 4.53 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[ 12]   0.00-10.00  sec  4.53 GBytes  3.89 Gbits/sec                  receiver
        Sent 4.45 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[ 15]   0.00-10.00  sec  4.45 GBytes  3.83 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  18.0 GBytes  15.5 Gbits/sec                  receiver

In client mode, iperf3 3.16 running on Odroid in 4 threads gave slightly different, but still very impressive results.

A simple iperf3 client test yields 20.8Gbit/s between hosts:

Code: Select all

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  5.96 GBytes  5.12 Gbits/sec  7142             sender
[  5]   0.00-10.03  sec  5.96 GBytes  5.10 Gbits/sec                  receiver
[  7]   0.00-10.00  sec  6.21 GBytes  5.34 Gbits/sec  6998             sender
[  7]   0.00-10.03  sec  6.21 GBytes  5.32 Gbits/sec                  receiver
[  9]   0.00-10.00  sec  6.13 GBytes  5.27 Gbits/sec  5714             sender
[  9]   0.00-10.03  sec  6.13 GBytes  5.25 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  6.02 GBytes  5.17 Gbits/sec  6521             sender
[ 11]   0.00-10.03  sec  6.02 GBytes  5.16 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  24.3 GBytes  20.9 Gbits/sec  26375             sender
[SUM]   0.00-10.03  sec  24.3 GBytes  20.8 Gbits/sec                  receiver
While iperf3 client reading from a RAM/tmpfs file - 15.2Gbit/s:

Code: Select all

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-2.00   sec   927 MBytes  3.88 Gbits/sec  1387             sender
        Sent  927 MByte / 3.59 GByte (25%) of /mnt/tmpfs/test
[  5]   0.00-2.03   sec   927 MBytes  3.82 Gbits/sec                  receiver
[  8]   0.00-2.00   sec   920 MBytes  3.86 Gbits/sec  945             sender
        Sent  920 MByte / 3.59 GByte (25%) of /mnt/tmpfs/test
[  8]   0.00-2.03   sec   920 MBytes  3.80 Gbits/sec                  receiver
[ 11]   0.00-2.00   sec   920 MBytes  3.85 Gbits/sec  1778             sender
        Sent  920 MByte / 3.59 GByte (25%) of /mnt/tmpfs/test
[ 11]   0.00-2.03   sec   920 MBytes  3.79 Gbits/sec                  receiver
[ 14]   0.00-2.00   sec   910 MBytes  3.81 Gbits/sec  1423             sender
        Sent  910 MByte / 3.59 GByte (24%) of /mnt/tmpfs/test
[ 14]   0.00-2.03   sec   910 MBytes  3.75 Gbits/sec                  receiver
[SUM]   0.00-2.00   sec  3.59 GBytes  15.4 Gbits/sec  5533             sender
[SUM]   0.00-2.03   sec  3.59 GBytes  15.2 Gbits/sec                  receiver
And an iperf3 client reading from the NVMe Kingston drive located on the same ASM2812 switch as the Intel NIC - 18Gbit/s (still 14% faster than from RAM/tmpfs):

Code: Select all

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-10.00  sec  5.26 GBytes  4.51 Gbits/sec  1769             sender
        Sent 5.26 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[  5]   0.00-10.03  sec  5.26 GBytes  4.50 Gbits/sec                  receiver
[  8]   0.00-10.00  sec  5.26 GBytes  4.52 Gbits/sec  1846             sender
        Sent 5.26 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[  8]   0.00-10.03  sec  5.26 GBytes  4.50 Gbits/sec                  receiver
[ 11]   0.00-10.00  sec  5.26 GBytes  4.52 Gbits/sec  1502             sender
        Sent 5.26 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[ 11]   0.00-10.03  sec  5.26 GBytes  4.50 Gbits/sec                  receiver
[ 14]   0.00-10.00  sec  5.26 GBytes  4.52 Gbits/sec  1768             sender
        Sent 5.26 GByte / 0.00 Byte (100%) of /dev/nvme0n1p3
[ 14]   0.00-10.03  sec  5.26 GBytes  4.50 Gbits/sec                  receiver
[SUM]   0.00-10.00  sec  21.0 GBytes  18.1 Gbits/sec  6885             sender
[SUM]   0.00-10.03  sec  21.0 GBytes  18.0 Gbits/sec                  receiver

Running iperf3 for about an hour while occasionally writing with dd to the NVMe drive connected to the same ASM2812 switch as the 40Gbit/s Intel NIC yields interesting/impressive numbers - ~14Gbit/s transfer over the network and ~3.3GByte/s NVMe drive I/O:

Code: Select all

[  5] 3260.00-3261.00 sec  2.46 GBytes  21.1 Gbits/sec
[  5] 3261.00-3262.00 sec  1.68 GBytes  14.4 Gbits/sec
[  5] 3262.00-3263.00 sec  2.54 GBytes  21.8 Gbits/sec
[  5] 3263.00-3264.00 sec  2.00 GBytes  17.2 Gbits/sec
[  5] 3264.00-3265.00 sec  1.82 GBytes  15.6 Gbits/sec
[  5] 3265.00-3266.00 sec  1.95 GBytes  16.8 Gbits/sec
[  5] 3266.00-3267.00 sec  1.70 GBytes  14.6 Gbits/sec
[  5] 3267.00-3268.00 sec  1.73 GBytes  14.8 Gbits/sec
[  5] 3268.00-3269.00 sec  1.32 GBytes  11.3 Gbits/sec
[  5] 3269.00-3270.00 sec  1.67 GBytes  14.3 Gbits/sec
[  5] 3270.00-3271.00 sec  1.33 GBytes  11.4 Gbits/sec
[  5] 3271.00-3272.00 sec   931 MBytes  7.81 Gbits/sec
[  5] 3272.00-3273.00 sec  2.35 GBytes  20.2 Gbits/sec
[  5] 3273.00-3274.00 sec  1.59 GBytes  13.6 Gbits/sec
[  5] 3274.00-3275.00 sec  1.60 GBytes  13.7 Gbits/sec
[  5] 3275.00-3276.00 sec  1.73 GBytes  14.8 Gbits/sec
[  5] 3276.00-3277.00 sec  1.82 GBytes  15.7 Gbits/sec
[  5] 3277.00-3278.00 sec  1.27 GBytes  10.9 Gbits/sec
[  5] 3278.00-3279.00 sec  1.75 GBytes  15.0 Gbits/sec
[  5] 3279.00-3280.00 sec  1.49 GBytes  12.8 Gbits/sec
[  5] 3280.00-3281.00 sec  1.70 GBytes  14.6 Gbits/sec
[  5] 3281.00-3282.00 sec  1.46 GBytes  12.5 Gbits/sec
[  5] 3282.00-3283.00 sec  2.69 GBytes  23.1 Gbits/sec

dd if=/dev/zero of=/dev/nvme0n1p3 bs=4K seek=0 count=262144 oflag=direct 8,42706 s, 127 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=4K seek=262144 count=262144 oflag=direct 8,57663 s, 125 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=4K seek=524288 count=262144 oflag=direct 8,4648 s, 127 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=4K seek=786432 count=262144 oflag=direct 8,52811 s, 126 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=16K seek=0 count=65536 oflag=direct 3,56112 s, 302 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=16K seek=65536 count=65536 oflag=direct 3,50248 s, 307 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=16K seek=131072 count=65536 oflag=direct 3,51835 s, 305 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=16K seek=196608 count=65536 oflag=direct 3,46807 s, 310 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=64K seek=0 count=16384 oflag=direct 1,56049 s, 688 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=64K seek=16384 count=16384 oflag=direct 1,51169 s, 710 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=64K seek=32768 count=16384 oflag=direct 1,57322 s, 683 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=64K seek=49152 count=16384 oflag=direct 1,6637 s, 645 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=256K seek=0 count=4096 oflag=direct 1,28899 s, 833 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=256K seek=4096 count=4096 oflag=direct 1,31005 s, 820 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=256K seek=8192 count=4096 oflag=direct 1,27795 s, 840 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=256K seek=12288 count=4096 oflag=direct 1,26718 s, 847 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=1024K seek=0 count=1024 oflag=direct 1,28472 s, 836 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=1024K seek=1024 count=1024 oflag=direct 1,30667 s, 822 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=1024K seek=2048 count=1024 oflag=direct 1,3044 s, 823 MB/s
dd if=/dev/zero of=/dev/nvme0n1p3 bs=1024K seek=3072 count=1024 oflag=direct 1,31361 s, 817 MB/s

Power usage:
- Boot: 27-30W
- Load: 27-28W (iperf3 server for 1H)
- Idle: 15.5W, 13.1W after powertop --auto-tune. The ASM2812 PCIe switch alone probably draws 4-6W, while the Intel XL710-QDA2 NIC draws 1-3W.

So far the system runs stable (uptime 5h), with no errors in dmesg, even after running an iperf3 server for 1 hour and occasionally running disk write tests with dd.

Journey

First, I started looking at an SFP+-based network with an 8-port switch and the possibility of running optics between floors in my house.
But I quickly realized that 10Gbit/s is not enough to utilize the NVMe bandwidth (which I intended to serve data from over a high-speed Ethernet link), and the power consumption would be quite high. So I challenged myself to go beyond 10Gbit/s and build a somewhat "hyper-converged low-power home data center" :). Ideally with 40-56Gbit/s InfiniBand+Ethernet+RoCE networking, to keep all possibilities open for scientific/engineering exploration.

I started by navigating the market of used multi-gig DC network cards.
Long story short: ASPM support in most DC NICs is shit, idle power usage in most cases is above 10W, and the heat emission requires significant airflow.
After a lot of research, I decided not to go with InfiniBand cards (e.g. Mellanox ConnectX-3), as those do not support ASPM, would idle at 10W alone (not letting the CPU change its own power state), and would just heat the space. The drawback: Mellanox is basically the only producer of widely available, cheap, used IB/RoCE 40/56Gbit/s DC NICs - you can get a 2-port QSFP+ ConnectX-3 Pro MT27520 NIC for as low as $25 a piece.
ASPM-supporting DC NICs are rare, and even fewer draw less than 8-10W at idle without producing too much heat.
Nonetheless, I found one: Intel XL710-based NICs, with a max power consumption of 4W utilizing both QSFP+ ports (on DAC)!

So I settled on a 40Gbit/s QSFP+ interconnect between 3 PCs: the Odroid serving as SATA-3 & NVMe data storage over the QSFP+ network, and the other two being a Mini-ITX X570-based PC and (in prospect) a 4x4-8840U ASRock Industrial-based Nano-ITX PC.
Going beyond that would require expensive and power-hungry QSFP+ switches. It's worth saying that if you only need 10Gbit/s networking and have more than 3 devices to connect, then the $100 TP-Link 8-port 10Gbit/s SFP+ TL-ST5008F v2 switch (Layer 2-4) is a great choice: https://www.tp-link.com.cn/product_1649 ... cation#tag

We're halfway through by now.

Now, cabling. I wanted to use M.2-to-PCIe-3.0-x4 ADT-Link cables for a custom compact enclosure (200x110x80mm => 1.76L), but no combination of cables worked well enough to deem it stable for use.
I tried 4 different ADT-Link PCIe 3.0/4.0 cables to connect the ASM2812 to the Odroid's M.2 slot and the XL710 to the ASM2812, in different combinations:
- R42UR 20cm
- R42UL-4.0 20cm (PCIe 4.0)
- R42UL 5cm
- R42DL 10cm

I found no success with any combination of ADT-Link cables, and some success when combining cables with CERRXIAN adapters, though it wasn't stable enough to settle on.
No shenanigans with BIOS settings helped me. I also removed the Odroid mobo battery (and disconnected the power cord from the socket) to avoid caching any device state between power cycles.

To my surprise, 2x CERRXIAN M.2 to PCIe x4 adapters were the only combination that worked for me to extend the Odroid H3+ with an extra M.2 slot via the ASM2812 PCIe 3.0 switch and get a more or less stable system.
As a drawback, I ended up building a case with a slightly bigger form factor - 190x117x117mm (2.6L vs the projected 1.76L).

Here are the results (minus the R42UL 5cm & R42DL 10cm cables, which I received later and did not put through the whole testing cycle - a quick test revealed the same problem as with the other two 20cm ADT-Link cables):

Code: Select all

LEGEND
---
corsair - 4TB NVMe PCIe 3.0 M.2 - https://www.corsair.com/us/en/p/data-storage/cssd-f4000gbmp400/mp400-4tb-nvme-pcie-m-2-ssd-cssd-f4000gbmp400
kingston - 2TB NVMe PCIe 4.0 M.2 - https://www.kingston.com/en/ssd/gaming/kingston-fury-renegade-nvme-m2-ssd?partnum=sfyrd%2F2000g
asm - ASM2812 PCIe switch PCIe 3.0 x4 - https://www.aliexpress.com/item/1005005576318321.html
mlx - Mellanox 341A OCP nic via OCP to PCIe 3.0 x4 adapter - https://nl.aliexpress.com/item/1005004300131350.html
intel - Intel (Cisco) XL710-QDA2 PCIe 3.0 x8 - https://www.ebay.com/itm/115608501119 
# Odroid M.2 slot in Auto/Gen3 mode (via Odroid BIOS)
pciex4 - CERRXIAN M.2 to PCIe x4 adapter
pciev3 - ADT-Link R42UR cable adapter
pciev4 - ADT-Link R42UL-4.0 cable adapter
# Odroid M.2 slot in Gen2 mode (via Odroid BIOS)
pciex4:2 - CERRXIAN M.2 to PCIe x4 adapter
pciev3:2 - ADT-Link R42UR cable adapter
pciev4:2 - ADT-Link R42UL-4.0 cable adapter

TESTS
---
pciex4/asm
  - kingston/corsair - no disk in OS (corsair)
pciex4/asm/pciex4
  + kingston/mlx - fast disk speed (940-3350MB/s)
  + kingston/intel - fast disk speed (920-3350MB/s)
pciex4/asm/pciev3 - nvme2 led is brighter
  - corsair/mlx - no disk in OS (corsair)
  + kingston/mlx - fast disk speed (920-3200MB/s)
pciex4/asm/pciev4
  - kingston/mlx - fast disk speed (920-3200MB/s), OS hangs

pciex4:2/asm/pciex4
  ? kingston/mlx
pciex4:2/asm/pciev3 - nvme2 led is brighter
  - kingston/mlx - OS hangs
pciex4:2/asm/pciev4 - nvme2 led is brighter
  - kingston/mlx - errors on boot (could not locate request tag 0x0), OS hangs


pciev3/asm
  - kingston - disk disappear in bios, disk not bootable
  - corsair - no disk in OS (corsair)
  - kingston/corsair - no disk in OS (corsair), OS hangs
pciev3/asm/pciex4
  - kingston/mlx - disk not bootable, hangs on bios
pciev3/asm/pciev4
  - kingston/mlx - hangs on bios

pciev3:2/asm
  + kingston - avg disk speed (820-1650MB/s)
  - kingston/corsair - no disk in OS (corsair)
pciev3:2/asm/pciex4
  - kingston/mlx - errors on boot (nvme nvme0: 0x0 genctr mismatch got 0x0 expected 0x1), (could not locate request tag 0x0)
pciev3:2/asm/pciev4
  - kingston/mlx - errors on boot (could not locate request tag 0x0)


pciev4/asm
pciev4/asm/pciex4
  - kingston/mlx - slow disk speed (64-200MB/s)
pciev4/asm/pciev3
  ? kingston/mlx

pciev4:2/asm
pciev4:2/asm/pciex4
  + kingston/mlx - avg disk speed (820-1650MB/s)
pciev4:2/asm/pciev3
  ? kingston/mlx
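
For context, disk-speed figures like the ones above can be reproduced with a plain direct-I/O sequential read. A minimal sketch, assuming the drive shows up as /dev/nvme0n1 (a placeholder - this is the kind of test, not the exact command from my notes):

Code: Select all

# Sequential read straight off the NVMe device, bypassing the page cache
sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=8192 iflag=direct status=progress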
I verified on my Mini-ITX PC that the ADT-Link cables work fine on a desktop mobo, so the problem is likely on the Odroid H3+'s side. I have no idea whether it's the ability to support arbitrary PCIe devices, a "power supply over M.2" issue, or anything else. However, I noticed that the two ASM2812 LEDs glow with different intensities (independently of each other) depending on which types and combinations of ADT-Link cables are used.
It's also worth noting that the CERRXIAN M.2 to PCIe x4 adapters have additional components on their PCBs (look at the Amazon image - there are parts labeled C1, C2, C3, C4, R1, which I presume are capacitors and resistors, but I'm not an electrical engineer...), while the ADT-Link cables have nothing else on their PCBs.

My main PC is an AMD Ryzen 3700X with 64 GB RAM on an X570-based motherboard: https://www.gigabyte.com/nl/Motherboard ... -rev-10#kf.
Running iperf3 between two Intel XL710 NICs connected to the main PC over 2 different M.2 slots (one of which is via a PCIe x16 to PCIe x8 + 2x M.2 non-bifurcation adapter in the x16 mobo slot bifurcated at x8/x4/x4) shows numbers close to what's expected (slightly higher than I'd expect, though).
Here are the results of using the ADT-Link cables on my desktop PC and running 1-3 iperf 3.7 server and client instances in parallel (this was before I found 3.16; the parallel instances make sure the load is spread over multiple CPU cores):
1: 22.1 Gbit/s
2: 36.5 Gbit/s: 2*(17.3-18.5 Gbit/s)
3: 36.2 Gbit/s: 3*(11.2-12.5 Gbit/s)
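
The parallel-instances trick with the old single-threaded iperf 3.7 is just several server/client pairs on different ports. A minimal sketch with placeholder IP and ports, not my exact invocations:

Code: Select all

# Server side: one iperf3 daemon per port
for p in 5201 5202 5203; do iperf3 -s -p $p -D; done
# Client side: one client per server port, all running concurrently
for p in 5201 5202 5203; do iperf3 -c 192.168.1.10 -p $p -t 30 & done; wait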

Happy ending

We're almost done now.
But the system was not stable enough - 3 minutes after boot it would lose touch with the NVMe drive on the ASM2812 switch.
It turns out that adding the stock Odroid H3+ KKSB case CPU fan to the game, blowing towards the ASM2812 chip, is a game-changer!
Even after 2-hour-long iperf3 tests between computers, occasional NVMe tests with dd don't show any signs to worry about.

Even though I'm not able to get a network bitrate close to the one I saw when connecting my QSFP+ NICs on the PC (where, even with iperf3 3.7, raw network speed peaks at 36 Gbit/s), 60% utilization of the PCIe 3.0 x4 bandwidth looks good enough to wrap up. Perhaps the Odroid's Intel N6005 CPU has the biggest impact on the overall speed reduction here - the ps command on the Odroid reported ~350-400% CPU usage during the test where iperf3 3.16 read data from the NVMe drive and sent it over the QSFP+ NIC to another PC in 4 parallel threads.

Right now I'm enjoying ~20 Gbit/s (2.5 GB/s) of data reads on my desktop over the network from the NVMe drive on my Odroid, as observed in the iperf 3.16 output over 30 minutes, with not a single error in dmesg!
If all goes well in the next few days, we can consider this build somewhat successful! :)


I hope the research I've put in and shared here will help more people looking to get the max out of their Odroid H3+!

Feedback to the Hardkernel team: it'd be great to see the next Odroid (H4?) have 2x M.2 Key-M slots, to avoid using PCIe switches in these kinds of situations. At this point I only see the ASRockInd 4x4-8840U & 4x4-8640U (and maybe some NUCs) as capable of challenging Mini-ITX mobos in this market segment: that board has 2x SATA-3 and 2x M.2 Key-M slots - a perfect combo for my use case.


Problems encountered

If anyone can tell what I'm doing wrong from the issues below, I'd greatly appreciate the input.
If not - oh well, it seems to be working, so I'd better do something else (and more enjoyable) now than keep trying to figure it out :)

1. The OS loses the NVMe drive connected to the ASM2812 when the switch gets hot. Without a fan blowing on the ASM2812, the errors below appear in dmesg.
If anyone knows what all this means, I'd greatly appreciate any input.

Code: Select all

[  425.073491] could not locate request for tag 0x0
[  425.073520] nvme nvme0: invalid id 0 completed on queue 2
[  454.773735] could not locate request for tag 0x0
[  454.773745] nvme nvme0: invalid id 0 completed on queue 3
[  454.773747] could not locate request for tag 0x0
[  454.773749] nvme nvme0: invalid id 0 completed on queue 3
[  454.773755] nvme nvme0: I/O 453 (I/O Cmd) QID 2 timeout, aborting
[  454.774049] nvme nvme0: Abort status: 0x0
[  474.740989] nvme nvme0: I/O 128 (I/O Cmd) QID 3 timeout, aborting
[  474.741043] nvme nvme0: I/O 129 (I/O Cmd) QID 3 timeout, aborting
[  474.746795] nvme nvme0: Abort status: 0x0
[  474.746811] nvme nvme0: Abort status: 0x0
[  484.980595] nvme nvme0: I/O 453 QID 2 timeout, reset controller
[  546.403513] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x9)
[  546.403543] nvme nvme0: invalid id 0 completed on queue 0
[  546.403549] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x9)
[  546.403557] nvme nvme0: invalid id 0 completed on queue 0
[  546.403561] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x9)
[  546.403568] nvme nvme0: invalid id 0 completed on queue 0
[  546.403572] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x9)
[  546.403577] nvme nvme0: invalid id 0 completed on queue 0
[  546.442265] nvme nvme0: Shutdown timeout set to 10 seconds
[  546.443509] nvme nvme0: 4/0/0 default/read/poll queues
[  563.811467] could not locate request for tag 0x0
[  563.811512] nvme nvme0: invalid id 0 completed on queue 1
[  569.699403] could not locate request for tag 0x0
[  569.699434] nvme nvme0: invalid id 0 completed on queue 1
[  569.699443] could not locate request for tag 0x0
[  569.699449] nvme nvme0: invalid id 0 completed on queue 1
[  575.587361] could not locate request for tag 0x0
[  575.587390] nvme nvme0: invalid id 0 completed on queue 4
[  579.939445] could not locate request for tag 0x0
[  579.939475] nvme nvme0: invalid id 0 completed on queue 2
[  579.939485] could not locate request for tag 0x0
[  579.939491] nvme nvme0: invalid id 0 completed on queue 2
[  579.939495] could not locate request for tag 0x0
[  579.939499] nvme nvme0: invalid id 0 completed on queue 2
[  579.939503] could not locate request for tag 0x0
[  579.939507] nvme nvme0: invalid id 0 completed on queue 2
[  579.939510] could not locate request for tag 0x0
[  579.939514] nvme nvme0: invalid id 0 completed on queue 2
[  579.939517] could not locate request for tag 0x0
[  579.939521] nvme nvme0: invalid id 0 completed on queue 2
[  579.939525] could not locate request for tag 0x0
[  579.939528] nvme nvme0: invalid id 0 completed on queue 2
[  579.939532] could not locate request for tag 0x0
[  579.939535] nvme nvme0: invalid id 0 completed on queue 2
[  579.939539] could not locate request for tag 0x0
[  579.939542] nvme nvme0: invalid id 0 completed on queue 2
[  579.939546] could not locate request for tag 0x0
[  579.939549] nvme nvme0: invalid id 0 completed on queue 2
[  579.939553] could not locate request for tag 0x0
[  579.939556] nvme nvme0: invalid id 0 completed on queue 2
[  579.939560] could not locate request for tag 0x0
[  579.939563] nvme nvme0: invalid id 0 completed on queue 2
[  579.939566] could not locate request for tag 0x0
[  579.939570] nvme nvme0: invalid id 0 completed on queue 2
[  579.939573] could not locate request for tag 0x0
[  579.939577] nvme nvme0: invalid id 0 completed on queue 2
[  579.939580] could not locate request for tag 0x0
[  579.939584] nvme nvme0: invalid id 0 completed on queue 2
[  579.939587] could not locate request for tag 0x0
[  579.939591] nvme nvme0: invalid id 0 completed on queue 2
[  579.939594] could not locate request for tag 0x0
[  579.939598] nvme nvme0: invalid id 0 completed on queue 2
[  579.939601] could not locate request for tag 0x0
[  579.939605] nvme nvme0: invalid id 0 completed on queue 2
[  579.939608] could not locate request for tag 0x0
[  579.939612] nvme nvme0: invalid id 0 completed on queue 2
[  592.210904] could not locate request for tag 0x0
[  592.210934] nvme nvme0: invalid id 0 completed on queue 4
[  592.210943] could not locate request for tag 0x0
[  592.210949] nvme nvme0: invalid id 0 completed on queue 4
[  592.210953] could not locate request for tag 0x0
[  592.210958] nvme nvme0: invalid id 0 completed on queue 4
[  592.210961] could not locate request for tag 0x0
[  592.210965] nvme nvme0: invalid id 0 completed on queue 4
[  592.210969] could not locate request for tag 0x0
[  592.210973] nvme nvme0: invalid id 0 completed on queue 4
[  592.210976] could not locate request for tag 0x0
[  592.210980] nvme nvme0: invalid id 0 completed on queue 4
[  592.210983] could not locate request for tag 0x0
[  592.210987] nvme nvme0: invalid id 0 completed on queue 4
[  592.210990] could not locate request for tag 0x0
[  592.210994] nvme nvme0: invalid id 0 completed on queue 4
[  592.210997] could not locate request for tag 0x0
[  592.211001] nvme nvme0: invalid id 0 completed on queue 4
[  592.211004] could not locate request for tag 0x0
[  592.211008] nvme nvme0: invalid id 0 completed on queue 4
[  592.211011] could not locate request for tag 0x0
[  592.211015] nvme nvme0: invalid id 0 completed on queue 4
[  592.211018] could not locate request for tag 0x0
[  592.211022] nvme nvme0: invalid id 0 completed on queue 4
[  592.211025] could not locate request for tag 0x0
[  592.211029] nvme nvme0: invalid id 0 completed on queue 4
[  592.211032] could not locate request for tag 0x0
[  592.211036] nvme nvme0: invalid id 0 completed on queue 4
[  592.211039] could not locate request for tag 0x0
[  592.211043] nvme nvme0: invalid id 0 completed on queue 4
[  592.211046] could not locate request for tag 0x0
[  592.211050] nvme nvme0: invalid id 0 completed on queue 4
[  592.211053] could not locate request for tag 0x0
[  592.211057] nvme nvme0: invalid id 0 completed on queue 4
[  592.211060] could not locate request for tag 0x0
[  592.211064] nvme nvme0: invalid id 0 completed on queue 4
[  592.211067] could not locate request for tag 0x0
[  592.211071] nvme nvme0: invalid id 0 completed on queue 4
[  592.211074] could not locate request for tag 0x0
[  592.211078] nvme nvme0: invalid id 0 completed on queue 4
[  592.211081] could not locate request for tag 0x0
[  592.211085] nvme nvme0: invalid id 0 completed on queue 4
[  592.211088] could not locate request for tag 0x0
[  592.211092] nvme nvme0: invalid id 0 completed on queue 4
[  592.211111] nvme nvme0: I/O 713 (I/O Cmd) QID 1 timeout, aborting
[  592.211416] nvme nvme0: Abort status: 0x0
[  599.166745] nvme nvme0: I/O 832 (I/O Cmd) QID 1 timeout, aborting
[  599.166789] nvme nvme0: I/O 833 (I/O Cmd) QID 1 timeout, aborting
[  599.167069] nvme nvme0: Abort status: 0x0
[  599.167179] nvme nvme0: Abort status: 0x0
[  604.530705] nvme nvme0: I/O 460 (I/O Cmd) QID 4 timeout, aborting
[  604.531018] nvme nvme0: Abort status: 0x0
[  607.346680] nvme nvme0: I/O 454 (I/O Cmd) QID 2 timeout, aborting
[  607.346724] nvme nvme0: I/O 456 (I/O Cmd) QID 2 timeout, aborting
[  607.346740] nvme nvme0: I/O 457 (I/O Cmd) QID 2 timeout, aborting
[  607.346750] nvme nvme0: I/O 458 (I/O Cmd) QID 2 timeout, aborting
[  607.346979] nvme nvme0: Abort status: 0x0
[  607.347122] nvme nvme0: Abort status: 0x0
[  607.347323] nvme nvme0: Abort status: 0x0
[  607.347496] nvme nvme0: Abort status: 0x0
[  615.282291] nvme nvme0: I/O 461 (I/O Cmd) QID 4 timeout, aborting
[  615.282310] nvme nvme0: I/O 462 (I/O Cmd) QID 4 timeout, aborting
[  615.282314] nvme nvme0: I/O 463 (I/O Cmd) QID 4 timeout, aborting
[  615.282317] nvme nvme0: I/O 464 (I/O Cmd) QID 4 timeout, aborting
[  615.282596] nvme nvme0: Abort status: 0x0
[  615.282805] nvme nvme0: Abort status: 0x0
[  615.282979] nvme nvme0: Abort status: 0x0
[  615.283022] nvme nvme0: Abort status: 0x0
[  622.450352] nvme nvme0: I/O 713 QID 1 timeout, reset controller
[  622.490421] nvme nvme0: Shutdown timeout set to 10 seconds
[  622.492489] nvme nvme0: 4/0/0 default/read/poll queues
[  627.658349] could not locate request for tag 0x0
[  627.658360] could not locate request for tag 0x0
[  627.658373] nvme nvme0: invalid id 0 completed on queue 1
[  627.658379] nvme nvme0: invalid id 0 completed on queue 3
[  627.658384] could not locate request for tag 0x0
[  627.658389] nvme nvme0: invalid id 0 completed on queue 1
[  627.658394] could not locate request for tag 0x0
[  627.658402] nvme nvme0: invalid id 0 completed on queue 1
[  627.658406] could not locate request for tag 0x0
[  627.658410] nvme nvme0: invalid id 0 completed on queue 1
[  627.658414] could not locate request for tag 0x0
[  627.658418] nvme nvme0: invalid id 0 completed on queue 1
[  627.658422] could not locate request for tag 0x0
[  627.658426] nvme nvme0: invalid id 0 completed on queue 1
[  627.658429] could not locate request for tag 0x0
[  627.658433] nvme nvme0: invalid id 0 completed on queue 1
[  629.787331] could not locate request for tag 0x0
[  629.787390] nvme nvme0: invalid id 0 completed on queue 3
[  638.050284] could not locate request for tag 0x0
[  638.050327] nvme nvme0: invalid id 0 completed on queue 1
[  638.050336] could not locate request for tag 0x0
[  638.050342] nvme nvme0: invalid id 0 completed on queue 1
[  657.777626] nvme nvme0: I/O 714 (I/O Cmd) QID 1 timeout, aborting
[  657.777645] could not locate request for tag 0x0
[  657.777663] nvme nvme0: invalid id 0 completed on queue 3
[  657.777670] could not locate request for tag 0x0
[  657.777673] nvme nvme0: I/O 715 (I/O Cmd) QID 1 timeout, aborting
[  657.777677] nvme nvme0: invalid id 0 completed on queue 3
[  657.777690] nvme nvme0: I/O 716 (I/O Cmd) QID 1 timeout, aborting
[  657.777701] nvme nvme0: I/O 717 (I/O Cmd) QID 1 timeout, aborting
[  657.777890] nvme nvme0: Abort status: 0x0
[  657.778125] nvme nvme0: Abort status: 0x0
[  657.778298] nvme nvme0: Abort status: 0x0
[  657.778460] nvme nvme0: Abort status: 0x0
[  658.193644] nvme nvme0: I/O 134 (I/O Cmd) QID 3 timeout, aborting
[  662.897495] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x1)
[  662.897524] nvme nvme0: invalid id 0 completed on queue 0
[  662.897545] nvme nvme0: I/O 721 (I/O Cmd) QID 1 timeout, aborting
[  662.897570] nvme nvme0: I/O 722 (I/O Cmd) QID 1 timeout, aborting
[  662.897841] nvme nvme0: Abort status: 0x0
[  662.898012] nvme nvme0: Abort status: 0x0
[  673.393249] nvme nvme0: I/O 136 (I/O Cmd) QID 3 timeout, aborting
[  673.393331] nvme nvme0: I/O 137 (I/O Cmd) QID 3 timeout, aborting
[  687.984887] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x1)
[  687.984916] nvme nvme0: invalid id 0 completed on queue 0
[  687.984923] nvme nvme0: request 0x0 genctr mismatch (got 0x0 expected 0x1)
[  687.984930] nvme nvme0: invalid id 0 completed on queue 0
[  687.984951] nvme nvme0: I/O 714 QID 1 timeout, reset controller
[  688.021641] nvme0n1: I/O Cmd(0x2) @ LBA 35342800, 8 blocks, I/O Error (sct 0x3 / sc 0x71)
[  688.021674] I/O error, dev nvme0n1, sector 35342800 op 0x0:(READ) flags 0x80700 phys_seg 1 prio class 2
[  688.021697] nvme0n1: I/O Cmd(0x2) @ LBA 35342840, 24 blocks, I/O Error (sct 0x3 / sc 0x71)
[  688.021707] I/O error, dev nvme0n1, sector 35342840 op 0x0:(READ) flags 0x80700 phys_seg 3 prio class 2
[  688.021770] nvme nvme0: Abort status: 0x371
[  688.021779] nvme nvme0: Abort status: 0x371
[  688.021784] nvme nvme0: Abort status: 0x371
[  688.040206] nvme nvme0: Shutdown timeout set to 10 seconds
[  688.042376] nvme nvme0: 4/0/0 default/read/poll queues

2. ADT-Link cables on the Odroid H3+ didn't work for me with the ASM2812, although another H3+ user reported that the R42SF ADT-Link cable did work for him when building a 14-drive NAS with an ASM2812 switch and ASM1166 & JMB585 M.2-to-SATA adapters: viewtopic.php?p=379882#p379882
Switching to Gen2, changing ASPM settings, or disabling devices in the BIOS didn't help.
The system randomly froze (even in the BIOS) or didn't boot at all.
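
For anyone retracing these steps from the OS side as well: Linux exposes the ASPM policy at runtime, independent of the BIOS toggles. A sketch of the standard knobs (not something that fixed anything here, just where to look):

Code: Select all

# Show the current ASPM policy:
cat /sys/module/pcie_aspm/parameters/policy
# Select a policy for testing (default | performance | powersave | powersupersave):
echo performance | sudo tee /sys/module/pcie_aspm/parameters/policy
# Or disable ASPM entirely for a test boot: add pcie_aspm=off to
# GRUB_CMDLINE_LINUX_DEFAULT in /etc/default/grub, then run sudo update-grub.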


3. The Corsair MP400 4TB M.2 PCIe 3.0 drive connected to the 2nd slot on the ASM2812 (while the 1st is occupied by the Kingston Fury 2TB) is not seen by the OS (Ubuntu 23.04). It is seen in the Odroid BIOS when the M.2 slot is forced to Gen2 (iirc), but not in the OS. Even when it's the only NVMe drive connected to the ASM2812, the OS can't see it.
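
For anyone debugging the same symptom, the OS-side view can be checked with standard tools (nothing board-specific assumed here):

Code: Select all

# Show the PCIe topology as a tree - the ASM2812 downstream ports and any
# device enumerated behind them should appear here:
lspci -tv
# List the NVMe namespaces the kernel actually bound a driver to:
sudo nvme list    # from the nvme-cli package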
Last edited by user1234 on Tue Mar 26, 2024 7:40 am, edited 3 times in total.
These users thanked the author user1234 for the post (total 2):
domih (Mon Mar 25, 2024 3:23 am) • odroid (Mon Mar 25, 2024 9:08 am)

User avatar
mad_ady
Posts: 11988
Joined: Wed Jul 15, 2015 5:00 pm
languages_spoken: english
ODROIDs: XU4 (HC1, HC2), C1+, C2, C4 (HC4), N1, N2, N2L, H2, H3+, Go, Go Advance, M1, M1S
Location: Bucharest, Romania
Has thanked: 662 times
Been thanked: 1280 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by mad_ady »

We'd love to see some pictures of how you've set up all those daughter cards!
These users thanked the author mad_ady for the post (total 2):
domih (Mon Mar 25, 2024 5:07 am) • user1234 (Mon Mar 25, 2024 7:00 pm)

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

Below are some photos for those curious about what the internals look like.
It's simple, and you don't really have many spatial options - the only choice is which M.2 slot on the ASM2812 switch you put the NIC in.
So you fit all the components into one another and voila!
The only thing missing to "close the cover" is a thinner fan (a Noctua 40x40x10) - the one in the pictures doesn't fit under the cover (it's around 15 mm thick), and I only realized it was necessary once the case was already made.

I had to build the case myself, using a Dremel rotary tool, a drill, an M3 thread-tapping tool, and M3 standoffs and screws I bought on marketplaces.
Don't mind the general quality and the abundance of screw holes - until my thread-tapping tool arrived, I couldn't help myself and used whatever means I had to attach my custom sheet cover.
And of course some mistakes were made, since this is my first metalwork since my school days 24 years ago :)


[7 photos of the build]

User avatar
odroid
Site Admin
Posts: 42297
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English, Korean
ODROIDs: ODROID
Has thanked: 3648 times
Been thanked: 2016 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by odroid »

Wow~~ I love the fully metallic frames and covers.
it looks gorgeous and something industrial.
These users thanked the author odroid for the post:
user1234 (Mon Mar 25, 2024 5:47 pm)

User avatar
rooted
Posts: 10612
Joined: Fri Dec 19, 2014 9:12 am
languages_spoken: english
Location: Gulf of Mexico, US
Has thanked: 821 times
Been thanked: 733 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by rooted »

Looks great, like the old pill style microphones.
These users thanked the author rooted for the post:
user1234 (Tue Mar 26, 2024 7:24 am)

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

I just upgraded my desktop to Ubuntu 24.04 LTS (dev release) to get iperf 3.16 on the other end of the network, and I now see far fewer TCP retransmits.
From ~5K/sec on each thread down to ~5/sec (except for the 1st second of the test):

Code: Select all

[ ID] Interval           Transfer     Bitrate         Retr  Cwnd
[  5]   0.00-1.00   sec   568 MBytes  4.76 Gbits/sec   11    329 KBytes
[  8]   0.00-1.00   sec   568 MBytes  4.76 Gbits/sec   17    355 KBytes
[ 11]   0.00-1.00   sec   568 MBytes  4.76 Gbits/sec   56    454 KBytes
[ 14]   0.00-1.00   sec   568 MBytes  4.76 Gbits/sec   89    445 KBytes
[SUM]   0.00-1.00   sec  2.22 GBytes  19.0 Gbits/sec  173
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   1.00-2.00   sec   560 MBytes  4.69 Gbits/sec    2    329 KBytes
[  8]   1.00-2.00   sec   560 MBytes  4.69 Gbits/sec    4    366 KBytes
[ 11]   1.00-2.00   sec   560 MBytes  4.70 Gbits/sec    7    454 KBytes
[ 14]   1.00-2.00   sec   559 MBytes  4.69 Gbits/sec    0    445 KBytes
[SUM]   1.00-2.00   sec  2.19 GBytes  18.8 Gbits/sec   13
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   2.00-3.00   sec   560 MBytes  4.70 Gbits/sec    1    331 KBytes
[  8]   2.00-3.00   sec   561 MBytes  4.70 Gbits/sec    0    386 KBytes
[ 11]   2.00-3.00   sec   561 MBytes  4.70 Gbits/sec    0    454 KBytes
[ 14]   2.00-3.00   sec   561 MBytes  4.70 Gbits/sec    0    445 KBytes
[SUM]   2.00-3.00   sec  2.19 GBytes  18.8 Gbits/sec    1
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   3.00-4.00   sec   577 MBytes  4.84 Gbits/sec    0    331 KBytes
[  8]   3.00-4.00   sec   577 MBytes  4.84 Gbits/sec   92    386 KBytes
[ 11]   3.00-4.00   sec   577 MBytes  4.84 Gbits/sec    6    454 KBytes
[ 14]   3.00-4.00   sec   577 MBytes  4.84 Gbits/sec   43    445 KBytes
[SUM]   3.00-4.00   sec  2.25 GBytes  19.4 Gbits/sec  141
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   4.00-5.00   sec   577 MBytes  4.84 Gbits/sec    0    373 KBytes
[  8]   4.00-5.00   sec   577 MBytes  4.84 Gbits/sec    4    390 KBytes
[ 11]   4.00-5.00   sec   577 MBytes  4.84 Gbits/sec    0    454 KBytes
[ 14]   4.00-5.00   sec   577 MBytes  4.84 Gbits/sec    0    445 KBytes
[SUM]   4.00-5.00   sec  2.25 GBytes  19.4 Gbits/sec    4
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   5.00-6.00   sec   572 MBytes  4.80 Gbits/sec    0    386 KBytes
[  8]   5.00-6.00   sec   572 MBytes  4.80 Gbits/sec    0    390 KBytes
[ 11]   5.00-6.00   sec   572 MBytes  4.80 Gbits/sec    3    454 KBytes
[ 14]   5.00-6.00   sec   572 MBytes  4.80 Gbits/sec    0    445 KBytes
[SUM]   5.00-6.00   sec  2.23 GBytes  19.2 Gbits/sec    3
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   6.00-7.00   sec   562 MBytes  4.71 Gbits/sec    0    386 KBytes
[  8]   6.00-7.00   sec   562 MBytes  4.72 Gbits/sec    0    390 KBytes
[ 11]   6.00-7.00   sec   562 MBytes  4.71 Gbits/sec    1    454 KBytes
[ 14]   6.00-7.00   sec   562 MBytes  4.71 Gbits/sec    0    445 KBytes
[SUM]   6.00-7.00   sec  2.20 GBytes  18.9 Gbits/sec    1
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   7.00-8.00   sec   564 MBytes  4.73 Gbits/sec    5    386 KBytes
[  8]   7.00-8.00   sec   564 MBytes  4.73 Gbits/sec    1    390 KBytes
[ 11]   7.00-8.00   sec   564 MBytes  4.73 Gbits/sec    0    457 KBytes
[ 14]   7.00-8.00   sec   564 MBytes  4.73 Gbits/sec    6    445 KBytes
[SUM]   7.00-8.00   sec  2.20 GBytes  18.9 Gbits/sec   12
- - - - - - - - - - - - - - - - - - - - - - - - -
[  5]   8.00-9.00   sec   564 MBytes  4.73 Gbits/sec    6    386 KBytes
[  8]   8.00-9.00   sec   564 MBytes  4.73 Gbits/sec    0    390 KBytes
[ 11]   8.00-9.00   sec   564 MBytes  4.73 Gbits/sec    0    460 KBytes
[ 14]   8.00-9.00   sec   564 MBytes  4.73 Gbits/sec    0    445 KBytes
[SUM]   8.00-9.00   sec  2.20 GBytes  18.9 Gbits/sec    6
- - - - - - - - - - - - - - - - - - - - - - - - -
So the build is working almost perfectly now. The bottleneck is the Odroid H3+'s Intel N6005 CPU being 400% busy.
I'll see in the next few days whether I can squeeze out even more network speed by tweaking the NIC and OS.
My first shot will be to raise the CPU performance settings in the BIOS and see if that helps.
I want to get to ~26 Gbit/s = 3.3 GB/s (as reported by a 4-thread dd against the NVMe drive), and I'm only ~30% below that target.
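
The 4-thread dd baseline is just four concurrent direct-I/O readers; a minimal sketch of one way to run it (the device path and offsets are placeholders, not the exact commands I used):

Code: Select all

# Four parallel readers at different 4 GiB offsets of the same device
for i in 0 1 2 3; do
  sudo dd if=/dev/nvme0n1 of=/dev/null bs=1M count=4096 \
       skip=$((i*4096)) iflag=direct &
done
wait   # the aggregate MB/s is the sum of the four reported rates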

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

Wow, I managed to get 27.1Gbit/s of raw network speed!
And 23.1-24.5Gbit/s reading/writing from/to NVMe on Odroid over the network.

I didn't change CPU performance settings in BIOS.

Instead, I changed the NIC MTU to jumbo frames and ran a series of iperf3 tests where the Odroid H3+ acts as a server or client reading/writing the NVMe drive, while the desktop (Ryzen 3700X) acts as the client or server respectively but doesn't use its local drives. The columns named -P, -R, and -Z are the corresponding iperf3 flags used with the client commands (on either the Odroid or the desktop).

Interestingly, -Z (zero-copy) on the client side of iperf3 shows better numbers when the NVMe is specified on the server (with the -F flag).
Here is a matrix of test results showing data rates and CPU usage depending on which options are used in all the client-server iperf3 tests I ran:
[Image: matrix of iperf3 test results - data rates and CPU usage per flag combination]

With numbers like this I don't really need to look further. 75-85% utilization of the theoretical PCIe 3.0 x4 link on the Odroid, and hitting >90% of my target exchange rate of 3.3 GB/s with the NVMe on the Odroid over the network, are signs that things work as expected. 8-)
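
For reference, the whole recipe boils down to a handful of commands. A sketch with placeholder interface and host names (enp1s0, odroid) rather than my exact setup:

Code: Select all

# Jumbo frames on both ends (the MTU must match across the link):
sudo ip link set dev enp1s0 mtu 9000
# Server on the Odroid; -F ties the transfer to an NVMe-backed file:
iperf3 -s -F /mnt/nvme/testfile
# Client on the desktop: 4 parallel streams, zero-copy sends:
iperf3 -c odroid -P 4 -Z -t 60
# Reverse direction (server sends, client receives):
iperf3 -c odroid -P 4 -R -t 60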
These users thanked the author user1234 for the post:
domih (Tue Mar 26, 2024 6:53 am)

User avatar
odroid
Site Admin
Posts: 42297
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English, Korean
ODROIDs: ODROID
Has thanked: 3648 times
Been thanked: 2016 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by odroid »

Glad to hear that you're very close to your target speed. :o
BTW, I am curious about the kernel version of Ubuntu 24.04 nightly build you are currently running. Is it 6.8?

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

On Odroid H3+, my first tests (from 1st post) were on Ubuntu 23.04 Desktop - kernel 6.5.
Then I updated to Ubuntu 24.04 devel release (with do-release-upgrade -d) on kernel 6.8.

On the desktop PC (Ryzen 3700X) it was Ubuntu 20.04 with kernels 5.15 & 5.18 up until yesterday, when I realized that the TCP retransmits happen largely because of the single-threaded iperf 3.7 on the PC.
Multi-threaded iperf 3.16 is only packaged starting with Ubuntu 24.04 (if you don't want to compile anything), unless you install a newer kernel and libc6 on your own.
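
If you do want to build it yourself, iperf3 uses a standard autotools build from the upstream esnet repository; a minimal sketch (assumes a compiler and build tools are already installed):

Code: Select all

git clone https://github.com/esnet/iperf.git
cd iperf
./configure && make
sudo make install
sudo ldconfig       # refresh the linker cache so libiperf.so is found
iperf3 --version    # should now report 3.16 or newer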
Last edited by user1234 on Tue Mar 26, 2024 6:37 pm, edited 2 times in total.

User avatar
odroid
Site Admin
Posts: 42297
Joined: Fri Feb 22, 2013 11:14 pm
languages_spoken: English, Korean
ODROIDs: ODROID
Has thanked: 3648 times
Been thanked: 2016 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by odroid »

Thank you for the answer and iperf related information.

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

You're welcome!
Hoping to see Odroid H4 soon with 2x M.2 Key-M slots (or maybe even 1x M.2 Key-M + 1x PCIe x8) ;)
One way or another, I'm very satisfied with the Odroid H3+'s performance and extensibility, despite the problems with the ADT-Link cables (it's a pity they don't really work on the Odroid).
Well done, HardKernel team!

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

This ASM2812 switch came with just a few atoms of thermal paste (or is it just dust?) X)
A good reminder for everyone to check the thermal transfer to the heatsink, if one is present on a card's chip.
After adding some Arctic MX-4 to the chip under the stock heatsink, the system seems stable without a fan - no errors in dmesg and no TCP retransmits over about an hour of reading data from the NVMe drive and passing it via the XL710 NIC to the network at ~23.6 Gbit/s.

I also noticed that the back side of the ASM2812 switch PCB gets very hot during the test. So I'll have to add another heatsink on the back of the PCB, on top of a silicone thermal pad for heat transfer.
I might also need to enlarge the holes in the case sheet for better natural convection. And the fan option is still on the table for hot weather.

Photos before and after removing the stock "thermal paste" (or whatever it is) - you can see on the paper napkin the amount of paste the vendor put on the chip:

[2 photos: the chip before and after repasting]

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

This reminds me of how I recently struggled to detach my Ryzen 3700X CPU from the stock AMD Wraith Prism cooler - both came out of the locked AM4 socket as one piece X)
I had to apply thermal paste remover to the corners of the contact surface, which probably helped by 5%; the rest was a knife and some brute force to finally separate one from the other.
No components were harmed, nonetheless...
These users thanked the author user1234 for the post:
odroid (Wed Mar 27, 2024 7:06 pm)

user1234
Posts: 18
Joined: Sun Dec 10, 2023 9:51 pm
languages_spoken: english
ODROIDs: H3 plus
Has thanked: 7 times
Been thanked: 4 times
Contact:

Re: [HOWTO] Yet another exotic way to do 10G networking with the Odroid H2

Post by user1234 »

Oh boy, I believe I've finally confirmed that the ASM2812 switch's VRM (if I understood right, the area with the 2 components marked "1R0" at the edge of the PCB) overheats quickly.
When I ran the tests above, I didn't have the case cover on, so the internals of the H3+ got enough free airflow and no errors appeared in dmesg.

As soon as I put the case cover on (right after a successful 30-minute test with no errors in dmesg), the errors I mentioned in the "Problems encountered" section (the OS losing the NVMe drive) started appearing 3-5 minutes later. Cutting big enough (>8 mm) holes in the case cover right near the overheating PCB components for hot-air exhaust, plus a 40x10 mm PWM Noctua fan with an LNA (to reduce the spinning speed) blowing hot air off both sides of the ASM2812 PCB, gave excellent results - case cover on, tests running successfully, and no errors in dmesg throughout the night.

Today I'll put a heatsink on the back of the ASM2812 switch PCB and see if the fan is still necessary. But something tells me I'll need to keep both - especially if the weather hits >35C this summer.
