Page 4 of 4

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Aug 25, 2019 11:17 am
by brad
RomaT wrote:
Sun Aug 25, 2019 10:51 am
d) Ubuntu 18.04.02 LTS amd64 operating system, kernel 4.18.0-25-generic x86_64
This version is no longer maintained, and 4.19.x LTS does not have the needed updates; I think you should go to a 5.2.x release kernel for best performance with the Aquantia at the moment.

Your partitions should be good. Did you use parted to create the GPT and the partitions?
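
For reference, creating the GPT and a single data partition with parted looks roughly like this (just a sketch; /dev/sdX is a placeholder for the target drive):

Code: Select all

# Create a GPT label and one ext4 partition spanning the whole drive
sudo parted /dev/sdX mklabel gpt
sudo parted /dev/sdX mkpart primary ext4 0% 100%
sudo mkfs.ext4 /dev/sdX1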

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Aug 25, 2019 11:25 am
by RomaT
brad wrote:
Sun Aug 25, 2019 11:17 am
did you use parted to create the GPT and the partitions?
In the utility integrated into the desktop version; I don't know what it's called.
.
Снимок экрана от 2019-08-25 08-02-54.png
.
Yes, the desktop version, with a graphical interface, because this NAS is also a multimedia player,
.
screen01.jpg
.
therefore it is made as small as possible, as quiet as possible, with an acrylic cover and an LCD.
brad wrote:
Sun Aug 25, 2019 11:17 am
I think you should go 5.2.x release kernel for best performance with Aquantia at the moment.
I specifically stopped the upgrades, because when upgrading to new kernels the built-in Hotspot utility does not work well; before this upgrade it works fine.
So I returned to the old version and disabled updates.
.
screen02.jpg

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Aug 25, 2019 12:14 pm
by brad
RomaT wrote:
Sun Aug 25, 2019 11:25 am
in the utility integrated into the desktop version, I don't know what it's called
gparted is good :)
RomaT wrote:
Sun Aug 25, 2019 11:25 am
So I returned to the old version and disabled updates.
Fair enough, it's nice to stick with what works, and Ubuntu doesn't support 5.x properly unless it's 19.04 Disco Dingo. The performance in your tests is better than I originally imagined, so stick with it I say :)

It's now going to extremes (maybe silly levels, and no idea if it could easily work), but the last addition to maximise disk-to-10GbE transfer might be adding a 3rd SATA port. The PCIe x4 2.0 slot has 20 GT/s of bandwidth and is encoded using 8b/10b, leaving about 16 Gbps for any connected devices. The 10GbE card will consume 10 Gbps (to give 8 Gbps with its own 8b/10b network encoding), so it will probably want 3 PCIe lanes (3 PCIe lanes is 15 GT/s before the encoding). That almost leaves the 4th lane free for a SATA port, if it were possible.
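
Rough back-of-the-envelope numbers for that (just a sketch, assuming nominal rates and counting only the 8b/10b line code as overhead):

Code: Select all

# PCIe 2.0 = 5 GT/s per lane; 8b/10b leaves 8/10 of that as usable bandwidth
echo "x4 slot usable:  $(echo '4 * 5 * 8 / 10' | bc) Gbps"        # 16 Gbps
echo "x3 lanes usable: $(echo '3 * 5 * 8 / 10' | bc) Gbps"        # 12 Gbps
echo "10GbE need after its own 8b/10b: $(echo '10 * 8 / 10' | bc) Gbps"  # 8 Gbps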

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Aug 25, 2019 12:23 pm
by RomaT
brad wrote:
Sun Aug 25, 2019 12:14 pm
gparted is good :)
No, another built-in utility; I'm not sure what it's called, see the screenshot above.
Its label in Russian translates as "Disks".
GParted I know very well :) and I also use it.
Honestly, I don't remember what I did in GParted and what in the built-in utility; I used both. :D
brad wrote:
Sun Aug 25, 2019 12:14 pm
Fair enough its nice to stick with what works
Yes, I adhere to this principle:
copy the system disk image (eMMC), try updates, and if something goes wrong, roll back.

Code: Select all

dd if=/dev/mmcblk0 of=/path/backup.img bs=1M
This works fine ;) even with the system running, on the fly.
To restore, boot from a live CD:

Code: Select all

dd if=/path/backup.img of=/dev/mmcblk0 bs=1M
Then reboot, booting from eMMC.
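
A variation of the same idea (just a sketch, keeping the /path placeholder from above): compress the image on the fly and restore it the same way.

Code: Select all

# Backup with on-the-fly compression
sudo dd if=/dev/mmcblk0 bs=1M status=progress | gzip -c > /path/backup.img.gz
# Restore (from the live CD), decompressing on the fly
zcat /path/backup.img.gz | sudo dd of=/dev/mmcblk0 bs=1M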

Re: Add-on to H2 - M2 to pciE slot

Posted: Tue Aug 27, 2019 3:19 am
by Mullcom
Hey!

I found a great article on 10Gbit.

https://darksideclouds.wordpress.com/20 ... y-to-hell/



Re: Add-on to H2 - M2 to pciE slot

Posted: Thu Aug 29, 2019 1:40 pm
by domih
Exploring means of network TCP/IP optimizations

Short story
1. Scaling Governor (powersave vs. performance) has no influence on bandwidth.
2. Setting IRQ Affinity for manually spreading the driver interrupts on different cores has no influence on bandwidth.

Conclusion
a) Modern Linux kernels do a pretty good job throttling up and down modern CPU core frequencies (*)
b) Modern IRQ Balance daemons do a pretty good job at moving interrupts from core to core (*)

What was leading-edge expert tuning 10 years ago is now simply built-in.

(*) At least on the H2 I tested.

---

Long Story
Note: unless you "want" to see patterns in the numbers, these are IMHO all the same results within statistical imprecision. To me, an optimization starts in double digits. I did not witness any double-digit change during this testing.

Scaling Governor
Reference: https://community.mellanox.com/s/articl ... governor-x

Code: Select all

cat /sys/devices/system/cpu/cpu*/cpufreq/scaling_governor
powersave
powersave
powersave
powersave

# Switch to performance

echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor &&\
echo performance | sudo tee /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor &&\
echo performance | sudo tee /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor &&\
echo performance | sudo tee /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor
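
# Equivalent shortcut (a sketch, not what I actually typed): loop over the four cores
for g in /sys/devices/system/cpu/cpu[0-3]/cpufreq/scaling_governor; do
    echo performance | sudo tee "$g"
done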

# Test 3 x 60 sec iperf3 benchmarking with scaling_governor set to performance
# Looping cable on the dual ConnectX-2 (IN and OUT are summed)
# I only display the average.
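# For reference (an assumption - the exact invocation was not posted), each run
# was along the lines of `iperf3 -s` on one end and `iperf3 -c <peer-ip> -t 60`
# on the other.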

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   187 GBytes  26.7 Gbits/sec    0             sender
[  5]   0.00-60.03  sec   187 GBytes  26.7 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   176 GBytes  25.2 Gbits/sec    0             sender
[  5]   0.00-60.04  sec   176 GBytes  25.2 Gbits/sec                  receiver


[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   182 GBytes  26.0 Gbits/sec    0             sender
[  5]   0.00-60.04  sec   182 GBytes  26.0 Gbits/sec                  receiver

# Switch back to powersave

echo powersave | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor &&\
echo powersave | sudo tee /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor &&\
echo powersave | sudo tee /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor &&\
echo powersave | sudo tee /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   177 GBytes  25.4 Gbits/sec    0             sender
[  5]   0.00-60.03  sec   177 GBytes  25.4 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   178 GBytes  25.4 Gbits/sec    0             sender
[  5]   0.00-60.03  sec   178 GBytes  25.4 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   173 GBytes  24.8 Gbits/sec    0             sender
[  5]   0.00-60.03  sec   173 GBytes  24.8 Gbits/sec                  receiver

# Switch back to performance

echo performance | sudo tee /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor &&\
echo performance | sudo tee /sys/devices/system/cpu/cpu1/cpufreq/scaling_governor &&\
echo performance | sudo tee /sys/devices/system/cpu/cpu2/cpufreq/scaling_governor &&\
echo performance | sudo tee /sys/devices/system/cpu/cpu3/cpufreq/scaling_governor

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   179 GBytes  25.6 Gbits/sec    0             sender
[  5]   0.00-60.04  sec   179 GBytes  25.5 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   171 GBytes  24.5 Gbits/sec    0             sender
[  5]   0.00-60.04  sec   171 GBytes  24.5 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec   182 GBytes  26.0 Gbits/sec    0             sender
[  5]   0.00-60.03  sec   182 GBytes  26.0 Gbits/sec                  receiver

# Stop testing with looping cable. Exchanges between the H2 and an i5 9600K PC

# i5 --> h2 (with scaling_governor set to performance)

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  99.1 GBytes  14.2 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  99.1 GBytes  14.2 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  97.8 GBytes  14.0 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  97.8 GBytes  14.0 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  99.9 GBytes  14.3 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  99.9 GBytes  14.3 Gbits/sec                  receiver

# i5 --> h2 (with scaling_governor set to powersave)

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  96.2 GBytes  13.8 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  96.2 GBytes  13.8 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  99.0 GBytes  14.2 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  99.0 GBytes  14.2 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec   100 GBytes  14.3 Gbits/sec    0             sender
[  4]   0.00-60.00  sec   100 GBytes  14.3 Gbits/sec                  receiver

h2 (with scaling_governor set to performance) --> i5

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  77.2 GBytes  11.1 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  77.2 GBytes  11.1 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  77.6 GBytes  11.1 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  77.6 GBytes  11.1 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  77.2 GBytes  11.1 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  77.2 GBytes  11.1 Gbits/sec                  receiver

h2 (with scaling_governor set to powersave) --> i5

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  77.8 GBytes  11.1 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  77.8 GBytes  11.1 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  77.8 GBytes  11.1 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  77.8 GBytes  11.1 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bitrate         Retr
[  5]   0.00-60.00  sec  77.6 GBytes  11.1 Gbits/sec    0             sender
[  5]   0.00-60.00  sec  77.6 GBytes  11.1 Gbits/sec                  receiver

IRQ Affinity
Reference: https://community.mellanox.com/s/articl ... affinity-x

Code: Select all

# The ConnectX-2 card is handled by the mlx4 driver.
# Find out which interrupts the driver uses

domih@h2a:~$ cat /proc/interrupts | grep mlx4
 130:        430        524        904       9280  IR-PCI-MSI 524288-edge      mlx4-async@pci:0000:01:00.0
 131:     764439    1889590       2524   12932327  IR-PCI-MSI 524289-edge      mlx4-1@0000:01:00.0
 132:          0   12661183         36     979471  IR-PCI-MSI 524290-edge      mlx4-2@0000:01:00.0
 133:          0          8         25         11  IR-PCI-MSI 524291-edge      mlx4-3@0000:01:00.0
 134:         25         10         17         16  IR-PCI-MSI 524292-edge      mlx4-4@0000:01:00.0
 135:          0          0          0          0  IR-PCI-MSI 524293-edge      mlx4-5@0000:01:00.0
 136:          0          0          0          0  IR-PCI-MSI 524294-edge      mlx4-6@0000:01:00.0
 137:          0          0          0          0  IR-PCI-MSI 524295-edge      mlx4-7@0000:01:00.0
 138:          0          0          0          0  IR-PCI-MSI 524296-edge      mlx4-8@0000:01:00.0

# Look at the CPU mask for each of these interrupts

echo "130: " && cat /proc/irq/130/smp_affinity &&\
echo "131: " && cat /proc/irq/131/smp_affinity &&\
echo "132: " && cat /proc/irq/132/smp_affinity &&\
echo "133: " && cat /proc/irq/133/smp_affinity &&\
echo "134: " && cat /proc/irq/134/smp_affinity &&\
echo "135: " && cat /proc/irq/135/smp_affinity &&\
echo "136: " && cat /proc/irq/136/smp_affinity &&\
echo "137: " && cat /proc/irq/137/smp_affinity &&\
echo "138: " && cat /proc/irq/138/smp_affinity

# The result varies because the IRQ balance daemon is constantly redirecting the interrupts to other cores.
# The Celeron J4105 has 4 cores, so the mask values can be 1, 2, 4 and 8 for core 0, 1, 2 and 3.
# Check the values from time to time and you'll see different values.

# Let's try manual "optimization"...
# Stop the irqbalance daemon

sudo systemctl stop irqbalance

# Spread the most active interrupts across the cores

echo 1 | sudo tee /proc/irq/130/smp_affinity &&\
echo 2 | sudo tee /proc/irq/131/smp_affinity &&\
echo 4 | sudo tee /proc/irq/132/smp_affinity &&\
echo 8 | sudo tee /proc/irq/133/smp_affinity &&\
echo 8 | sudo tee /proc/irq/134/smp_affinity &&\
echo 8 | sudo tee /proc/irq/135/smp_affinity &&\
echo 8 | sudo tee /proc/irq/136/smp_affinity &&\
echo 8 | sudo tee /proc/irq/137/smp_affinity

# From there, you can check the values from time to time, they no longer change.
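
# When done experimenting (a side note, not part of the test run above), the
# balancer can be restarted so interrupts get spread automatically again:
#
#     sudo systemctl start irqbalance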

# Combine the IRQ affinity settings with the two scaling_governor values:

i5 --> h2 (with scaling_governor set to powersave)

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  99.1 GBytes  14.2 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  99.1 GBytes  14.2 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec   100 GBytes  14.3 Gbits/sec    0             sender
[  4]   0.00-60.00  sec   100 GBytes  14.3 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  99.7 GBytes  14.3 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  99.7 GBytes  14.3 Gbits/sec                  receiver

i5 --> h2 (with scaling_governor set to performance)

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  98.7 GBytes  14.1 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  98.7 GBytes  14.1 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec   100 GBytes  14.3 Gbits/sec    0             sender
[  4]   0.00-60.00  sec   100 GBytes  14.3 Gbits/sec                  receiver

[ ID] Interval           Transfer     Bandwidth       Retr
[  4]   0.00-60.00  sec  99.7 GBytes  14.3 Gbits/sec    0             sender
[  4]   0.00-60.00  sec  99.7 GBytes  14.3 Gbits/sec                  receiver

CONCLUSION: See the conclusion at the top of this post.
Note: all these tests were made with iperf3, so I was testing TCP/IP, not RDMA itself. So the conclusion should apply "as is" to 10GbE Ethernet cards too.

Next in line, playing with the TCP settings:
Reference: https://www.slashroot.in/linux-network- ... ing-sysctl
Reference: http://www.linux-admins.net/2010/09/lin ... uning.html

Code: Select all

domih@h2a:~$ sudo sysctl -a | wc -l
1306
domih@h2a:~$ sudo sysctl -a | grep tcp | wc -l
72
domih@h2a:~$ sudo sysctl -a | grep enp2s0 | wc -l
108
domih@h2a:~$ sudo sysctl -a | grep ibp1s0 | wc -l
216
Oh my...
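
One way to keep track while experimenting (a sketch, not something I ran for the numbers above) is to snapshot the relevant settings before and after changes and diff them:

Code: Select all

sudo sysctl -a 2>/dev/null | grep -E '^net\.(core|ipv4\.tcp)' | sort > tcp-before.txt
# ...apply the changes, then:
sudo sysctl -a 2>/dev/null | grep -E '^net\.(core|ipv4\.tcp)' | sort > tcp-after.txt
diff tcp-before.txt tcp-after.txt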

Re: Add-on to H2 - M2 to pciE slot

Posted: Thu Aug 29, 2019 5:46 pm
by Mullcom
Please make some checks on CPU load.

It's interesting how the difference affects CPU use.

Re: Add-on to H2 - M2 to pciE slot

Posted: Fri Aug 30, 2019 5:31 am
by domih
Mullcom wrote:
Thu Aug 29, 2019 5:46 pm
Please make some checks on CPU load.

It's interesting how the difference affects CPU use.
What do you mean? I don't understand. Please rephrase. TIA!

Re: Add-on to H2 - M2 to pciE slot

Posted: Fri Aug 30, 2019 5:48 am
by Mullcom
domih wrote:
Mullcom wrote:
Thu Aug 29, 2019 5:46 pm
Please make some checks on CPU load.

It's interesting how the difference affects CPU use.
What do you mean? I don't understand. Please rephrase. TIA!
What I mean is that it's interesting how the H2's CPU is affected when the H2 is using its network card.

Scenario: when transferring files over SMB, the CPU needs to work to handle the transfer. What's interesting is, if it has to handle 100% of the bandwidth, how much CPU is in use?

Now, it should be 0% when you are using iperf. But other tests like SMB or NFS...





Re: Add-on to H2 - M2 to pciE slot

Posted: Fri Aug 30, 2019 7:53 am
by domih
<<...Now, it should be 0% when you are using iperf. But other tests like SMB or NFS...>>

No, iperf3 uses the IP stack (just like Samba or NFS). On my side, when running iperf3, I see the cores handling the interrupts jump to 80~90% load.

I was using System Monitor to watch the cores' activity during testing.
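
A command-line alternative (just a sketch of what one could use instead) is to watch per-core load with mpstat from the sysstat package while iperf3 is running:

Code: Select all

sudo apt install sysstat
# Per-core CPU utilization, refreshed every second
mpstat -P ALL 1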

Re: Add-on to H2 - M2 to pciE slot

Posted: Fri Aug 30, 2019 8:39 am
by domih
Exploring means of network TCP/IP optimizations (part two)

As mentioned at the end of my previous post about optimizations, I tried a few "optimizations" of the TCP/IP settings using the information from these two URLs:

Next in line, playing with the TCP settings:
Reference: https://www.slashroot.in/linux-network- ... ing-sysctl
Reference: http://www.linux-admins.net/2010/09/lin ... uning.html

Short Story
a) The information on these two pages is close to 10 years old and relates to advances in the 2.x kernels.
b) Modern kernel TCP auto-tuning basically does all the work for you automatically.

Again, what was leading-edge expert tuning 10 years ago is now simply built-in. I did not witness any double-digit change during testing. As a matter of fact, I did not witness anything in particular. I felt like the cavalry arriving after the battle had ended.

Conclusion
1) Half-empty glass view: rhaaaaahhhh!
2) Half-full glass view: the system engineers working on the Linux Kernel are doing a fracking good job. Kudos!

Long Story

Code: Select all

# Let's read the current values we would like to change (mostly buffers sizes)

sudo sysctl net.core.rmem_max
sudo sysctl net.core.wmem_max
sudo sysctl net.core.rmem_default
sudo sysctl net.core.wmem_default
sudo sysctl net.ipv4.tcp_rmem
sudo sysctl net.ipv4.tcp_wmem
sudo sysctl net.ipv4.tcp_mem
sudo sysctl net.core.netdev_max_backlog
# These must be set to one (already the case)
sudo sysctl net.ipv4.tcp_window_scaling
sudo sysctl net.ipv4.tcp_timestamps
sudo sysctl net.ipv4.tcp_sack
# On H2
ifconfig ibp1s0 | grep txqueuelen
# On i5
ifconfig ib0 | grep txqueuelen

# H2 current values
#------------------

net.core.rmem_max = 212992
net.core.wmem_max = 212992
net.core.rmem_default = 212992
net.core.wmem_default = 212992
net.ipv4.tcp_rmem = 4096	131072	6291456
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_mem = 381435	508582	762870
net.core.netdev_max_backlog = 1000 
fs.file-nr = 6144	0	3259887
unspec 80-00-02-08-FE-80-00-00-00-00-00-00-00-00-00-00  txqueuelen 256  (UNSPEC)


#i5 current values
#-----------------

net.core.rmem_max = 212992
net.core.wmem_max = 212992
net.core.rmem_default = 212992
net.core.wmem_default = 212992
net.ipv4.tcp_rmem = 4096	131072	6291456
net.ipv4.tcp_wmem = 4096	16384	4194304
net.ipv4.tcp_mem = 379560	506081	759120
net.core.netdev_max_backlog = 1000 
fs.file-nr = 2304	0	3243836
unspec 80-00-02-08-FE-80-00-00-00-00-00-00-00-00-00-00  txqueuelen 256  (UNSPEC)


# New values for testing
#-----------------------

# On both H2 and i5
sudo sysctl -w net.core.rmem_max=33554432
sudo sysctl -w net.core.wmem_max=33554432
sudo sysctl -w net.core.rmem_default=33554432
sudo sysctl -w net.core.wmem_default=33554432
sudo sysctl -w net.ipv4.tcp_rmem='4096 131072 33554432'
sudo sysctl -w net.ipv4.tcp_wmem='4096 65536 33554432'
sudo sysctl -w net.ipv4.tcp_mem='33554432 33554432 33554432'
sudo sysctl -w net.core.netdev_max_backlog=30000
sudo sysctl -w net.ipv4.route.flush=1
sudo sysctl -p
echo 10000000 | sudo tee /proc/sys/fs/file-max
# On H2
sudo ifconfig ibp1s0 txqueuelen 5000
# On i5
sudo ifconfig ib0 txqueuelen 5000
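
# Side notes (a sketch, not applied during these tests):
# - sysctl -w changes do not survive a reboot; to persist them they could go
#   into e.g. /etc/sysctl.d/99-net-tuning.conf and be reloaded with:
#       sudo sysctl --system
# - where ifconfig is deprecated, the txqueuelen can be set with iproute2:
#       sudo ip link set dev ibp1s0 txqueuelen 5000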

# Results
#--------

Not worth showing. They were identical with only some differences (in both directions) well below statistical variations.

# Conclusion
#-----------

See conclusion at the top of this post.
Note (again): all these tests were made with iperf3, so I was testing TCP/IP, not RDMA itself. So the conclusion should apply "as is" to 10GbE Ethernet cards too.

Next in line:
1. Just in case, go through https://access.redhat.com/documentation ... uide/index.
2. Try to find equivalent docs for Debian and/or Ubuntu.

This will conclude the research into finding optimizations. From there, I'll finish writing the article for Odroid Magazine. The major optimization is therefore going to be the choice between sync and async to get the disk I/O slowness out of the picture.

@RomaT: for your cards, you might still have the 8b/10b vs 64b/66b question as a source of optimization. On another forum, a member said he/she was able to reach 9+ Gb/s with a similar card and a RockPro64, so either he/she was running 64b/66b or he/she was showing off and BSing the credulous readers. After all, you can't trust everything you read on the Internet.

Re: Add-on to H2 - M2 to pciE slot

Posted: Fri Aug 30, 2019 12:40 pm
by RomaT
I wonder why he needs a 10 Gbit/s network card on a RockPro64:
nothing else can be connected, there is nowhere left to attach devices
if he took the PCIe slot for the network card, so can the processor even keep up?
Or SATA connected through USB 3.0? Still not clear: the USB 3.0 limit is 5 Gbps, so why does he need 8+ Gb/s?
For the sake of theoretical speeds, or what? What good is it if you can't use it?
I at least have a ramdisk (up to 28 GB), so I can quickly dump something there and then move it to the disk array in the background.
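
For reference, such a ramdisk can be created with tmpfs; a rough sketch (the mount point and size here are just examples, not my exact setup):

Code: Select all

sudo mkdir -p /mnt/ramdisk
sudo mount -t tmpfs -o size=28G tmpfs /mnt/ramdisk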

Re: Add-on to H2 - M2 to pciE slot

Posted: Sat Aug 31, 2019 12:16 am
by domih
RomaT wrote:
Fri Aug 30, 2019 12:40 pm
I wonder why he needs a 10 Gbit/s network card on a RockPro64:
nothing else can be connected, there is nowhere left to attach devices
if he took the PCIe slot for the network card, so can the processor even keep up?
Or SATA connected through USB 3.0? Still not clear: the USB 3.0 limit is 5 Gbps, so why does he need 8+ Gb/s?
For the sake of theoretical speeds, or what? What good is it if you can't use it?
I at least have a ramdisk (up to 28 GB), so I can quickly dump something there and then move it to the disk array in the background.
I guess they did it just for the sake of it (see https://forum.pine64.org/showthread.php?tid=6964&page=2 and https://twitter.com/armbian/status/1161515847124488198).
The ones actually showing console iperf3 extracts to support their claim have numbers in the same range as yours.

And yes, once they plug a NIC into the PCIe 2.0 x4 slot, the only storage option becomes USB 3 drives :lol: That makes you love the H2, with its two SATA ports, to save the day!

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Sep 01, 2019 1:14 pm
by RomaT
domih wrote:
Sat Aug 31, 2019 12:16 am
you love the H2 with its two SATA ports
I would not say that I love it, but I respect it for the 32 GB RAM and the I2C bus; otherwise it's mediocre.
I would love the H2 if a second M.2 slot (two PCIe lanes) appeared instead of the dual 1GbE NICs.
I would use it for a SATA controller - then it would be possible to talk about loving an H2 with six SATA ports and 10GbE.
And if additional 1GbE ports were required, USB is enough for that.
Or, if you don't need a lot of SATA, the second M.2 slot (two PCIe lanes) could take a controller with at least six 1GbE ports.
That would really reveal all the possibilities of the H2 (a second M.2 slot with two PCIe lanes, instead of the dual 1GbE NICs).
In addition, four more USB ports were taken away; it would not hurt to bring them back, at least on the GPIO header, or as two quad USB connectors instead of the two dual-USB-plus-Ethernet connectors.
In general, Hardkernel spent this year poorly: instead of creating a wow effect, they put fans of their products into a stupor several times.

Re: Add-on to H2 - M2 to pciE slot

Posted: Mon Sep 23, 2019 7:02 am
by domih
According to this article [but don't trust everything you read on the internet...], setting the MTU to 9014 on all the NICs involved should yield better performance with the Aquantia AQC107:

https://sviko.com/t/aquantia-aqtion-aqc ... erfaces/32
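
On the Linux side, setting it would look roughly like this (a sketch; enp1s0 is a placeholder for the actual interface, and I have not tried this myself on the AQC107):

Code: Select all

# Temporary, until reboot
sudo ip link set dev enp1s0 mtu 9014
ip link show enp1s0 | grep mtu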

Re: Add-on to H2 - M2 to pciE slot

Posted: Mon Sep 23, 2019 3:35 pm
by RomaT
Maximum performance: on the Windows client side always 9014, on the server (H2) side "auto", i.e. set to 1500, yet a ping of 8972 passes from both sides.
I tried MTU 9014 on the server (H2); a ping of 8972 still passes, but real throughput is reduced...
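
For reference, the "ping passes 8972" check looks roughly like this (the address is just an example; 8972 bytes of payload + 28 bytes of IP/ICMP headers corresponds to a 9000-byte IP MTU):

Code: Select all

# Linux side: 8972-byte payload with "don't fragment" set
ping -M do -s 8972 192.168.1.100
# Windows side equivalent
ping -f -l 8972 192.168.1.100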

Re: Add-on to H2 - M2 to pciE slot

Posted: Thu Sep 26, 2019 10:48 am
by newregister
Is it possible to use compex wireless pcie 1.1 module with m.2 interface?

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Sep 29, 2019 8:46 am
by domih
newregister wrote:
Thu Sep 26, 2019 10:48 am
Is it possible to use compex wireless pcie 1.1 module with m.2 interface?
Do you have the URL of the product or support page describing what a "compex wireless pcie 1.1 module" is?

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Sep 29, 2019 1:33 pm
by RomaT
I assume that this is a board with a miniPCIe interface,
because if it were something else, it would have been worded differently.
If it were a PCIe card, there would be no word "module";
"module" suggests miniPCIe or M.2 A/E keying,
but if it were M.2, there would be no word "pcie";
accordingly, we can conclude that this is a module with a miniPCIe interface.
.
minipcie.jpg
.
It will not work directly;
I have not seen an adapter from M.2 M-key to miniPCIe,
but in theory it is possible through two adapters - M.2 M to PCIe and PCIe to miniPCIe.
However, that is much more expensive than buying a new USB wireless adapter.

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 06, 2019 2:43 pm
by domih
RomaT wrote:
Mon Sep 23, 2019 3:35 pm
Maximum performance: on the Windows client side always 9014, on the server (H2) side "auto", i.e. set to 1500, yet a ping of 8972 passes from both sides.
I tried MTU 9014 on the server (H2); a ping of 8972 still passes, but real throughput is reduced...
See thread viewtopic.php?f=171&p=269873; with these new facts I'm convinced you will in the end be able to reach 9.4 Gb/s, there is no reason why you wouldn't. It looks like it might not be an 8b/10b story after all. So you can definitely "reopen the case".

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 06, 2019 3:07 pm
by RomaT
I assembled another NAS server on an H370-I motherboard with an Intel G5400, installing the same network card; the only difference is PCIe version 3.0.
I got this result (on the left is the PC Z390-I, on the right is the NAS server H370-I):
.
netspeed01.jpg
.
netspeed03.jpg
.
.
By the way, the speed with the LVM2 array has become faster (more than 1 GByte per second)
.
netspeed.png
.
configuration (two M.2 M with PCIe x4 v3.0, one M.2 E with PCIe x2 v3.0 and I2C bus, one PCI-E x16 v3.0, four SATA 6G):
system2_h370i.png
.
.
In such a chassis (FRACTAL DESIGN Node 304):
.
case.jpg

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 06, 2019 4:17 pm
by domih
RomaT wrote:
Sun Oct 06, 2019 3:07 pm
I assembled another NAS server on an H370-I motherboard with an Intel G5400, installing the same network card; the only difference is PCIe version 3.0.
I got this result (on the left is the PC Z390-I, on the right is the NAS server H370-I):
.
netspeed01.jpg
.
netspeed03.jpg
.
.
By the way, the speed with the LVM2 array has become faster (more than 1 GByte per second)
.
netspeed.png
.
configuration (two M.2 M with PCIe x4 v3.0, one M.2 E with PCIe x2 v3.0 and I2C bus, one PCI-E x16 v3.0, four SATA 6G):
system_h370i.png
.
.
In such a chassis (FRACTAL DESIGN Node 304):
.
case.jpg

Rhaaahhhh, you're still short of what you should get! There is something that does not compute somewhere. @igorpec gets 9.40 Gb/sec with his onboard Aquantia chipset and an ASUS XG-C100C, the same chipset as your card if I recollect correctly.

In addition, I just ran a "pure Ethernet" 10G test by configuring my Mellanox cards as Ethernet cards instead of InfiniBand, and iperf3 reports 9.4 Gb/sec in one direction and 9.35 Gb/sec in the other (with min=9.01 and max=9.43). Yes, my hardware is quite different, but Ethernet is Ethernet and no RDMA is involved here. I believe there is something (subtle or obvious) that eludes us, and that you should get ~9.4 Gb/sec in all cases.

By the way, the speed with the LVM2 array has become faster (more than 1 GByte per second)

Good for you! I still have not setup mine, too busy with work and other things.

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 06, 2019 4:30 pm
by Mullcom
Sorry I have not commented for a while, but I have no time... A lot going on at work and no free time at home :( 3 kids and one more on the way, and that makes my girlfriend a crazy monster right now.

But I still have some plans to get my place in the basement ready for the future.

If the tests don't make sense between network cards, could it be a lack of quality in the network card? What I mean by that is that the manufacturer says it can handle 10Gbit, but in a real-world environment it can't, because it does not have the best hardware/chip setup on the card.

//TM



Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 06, 2019 5:22 pm
by domih
<<...If the tests don't make sense between network cards, could it be a lack of quality in the network card?...>>

I thought about that too, but more along the lines of "what if there are fake models from China?" It would not be the first time; there were plenty of fake Intel network cards on eBay at some point. But fake models of sub-$100 cards, I'm not sure it would make sense; the targeted Intel NICs were in the several-hundred-dollar range.

In regard to the ASUS XG-C100C, there is a reviewer on Amazon who posted iperf3 results in the 9.4 Gb/sec range, see https://www.amazon.com/ASUS-XG-C100C-Ne ... B072N84DG6, that's the 1st review. I went through the reviews; nobody complains about being stuck in the 8 Gb/sec range. In the other thread @MiguelA0145 is using an ASUS XG-C100C and is stuck around 8 Gb/sec, while @igorpec with the same card gets 9.40 Gb/sec. Nobody is complaining about an 8 Gb/sec issue on NewEgg either (but no iperf3 references there).

Still looking for online reviewers for the other card.

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 06, 2019 5:52 pm
by domih
@RomaT:

I could not find the LR-LINK 6880BT on Amazon. On NewEgg (https://www.newegg.com/lr-link-lrec6880 ... 00JP-00029) it shows no activity. It's only on AliExpress, and there the reviews are rather limited (read: empty).

However, see https://www.aliexpress.com/item/33036697781.html, the 1st review says: "...Be aware, the heatsink on the delivered card is much smaller than on the pictures! Tests will need to prove if the chip does not overheat under heavy load...".

I'm scraping the bottom of the barrel for ideas here:

(1) You might want to compare your card with the photos from http://www.lr-link.com/products/LREC6880BT.html, mainly the back and front views http://www.lr-link.com/Upfiles/Prod_X/L ... ased)4.jpg and http://www.lr-link.com/Upfiles/Prod_X/L ... ased)1.jpg . Just in case...

(2) Otherwise, possible thermal issue? Fans usually drop the temp of a chip with a heat sink by up to 30 degrees. If it were a thermal issue, just put a fan on top of the heat sink and try. Again just in case...

Re: Add-on to H2 - M2 to pciE slot

Posted: Sat Oct 12, 2019 1:00 pm
by RomaT
domih wrote:
Sun Oct 06, 2019 5:22 pm
there is a reviewer who posted iperf3 results in the 9.4Gb/sec range
Depending on how you measure, it is even possible to obtain 9.9 Gb/s :D despite the fact that nothing has changed...
.
netspeed07.jpg
netspeed07.jpg (47 KiB) Viewed 727 times
.
Yes, the heat sink is smaller than in the photo, but the manufacturer itself replaced it.
If you go from the manufacturer's website and click the link at the top right to AliExpress,
you get to the same reseller I bought from.
02s.jpg
03s.jpg

.
No problem with temperature, +35°C, and the chip's power is not that high (4.7 W); the smaller heat sink copes well.
.
I think if there is a problem, it is in the kernel build, the driver and the fine tuning.
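
If digging in that direction, the driver/firmware in use and the negotiated PCIe link can be checked roughly like this (interface name and PCI address are placeholders):

Code: Select all

# Driver and firmware version of the 10GbE interface
ethtool -i enp1s0
# Negotiated PCIe link speed/width of the card
sudo lspci -vv -s 01:00.0 | grep LnkSta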

Re: Add-on to H2 - M2 to pciE slot

Posted: Sun Oct 13, 2019 10:41 am
by domih
So that's two more variables eliminated from the equation: you got genuine products and it's not a thermal issue.