load average at idle is 1.0


Post by brad »

This issue was mentioned by others earlier: the load average at idle is 1.0 with the standard image.

Code: Select all

root@odroid:~# cat /proc/2009/status
Name:   vdec-core
Umask:  0000
State:  D (disk sleep)
Tgid:   2009
Ngid:   0
Pid:    2009
PPid:   2
TracerPid:      0
Uid:    0       0       0       0
Gid:    0       0       0       0
FDSize: 64
Groups:
NStgid: 2009
NSpid:  2009
NSpgid: 0
NSsid:  0
Threads:        1
SigQ:   0/12813
SigPnd: 0000000000000000
ShdPnd: 0000000000000000
SigBlk: 0000000000000000
SigIgn: ffffffffffffbfff
SigCgt: 0000000000004000
CapInh: 0000000000000000
CapPrm: 0000003fffffffff
CapEff: 0000003fffffffff
CapBnd: 0000003fffffffff
CapAmb: 0000000000000000
Seccomp:        0
Speculation_Store_Bypass:       unknown
Cpus_allowed:   3f
Cpus_allowed_list:      0-5
Mems_allowed:   1
Mems_allowed_list:      0
voluntary_ctxt_switches:        299336
nonvoluntary_ctxt_switches:     4
root@odroid:~# cat /proc/2009/stack
[<ffffff80090867ec>] __switch_to+0x9c/0xc0
[<ffffff8001cb54f0>] vdec_core_thread+0x330/0x638 [decoder_common]
[<ffffff80090cd4c4>] kthread+0x10c/0x110
[<ffffff8009083950>] ret_from_fork+0x10/0x40
[<ffffffffffffffff>] 0xffffffffffffffff
The main vdec core thread waits indefinitely in disk sleep, but I could not find it attached to any file descriptors. I suspect it might be the debug filesystem? I'm going to compile a kernel shortly, so I will enable a bit more debugging.
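
For anyone wanting to check their own board, this is roughly how I am hunting for it (a quick sketch; the pgrep pattern assumes the thread is named vdec-core as above). Tasks in uninterruptible sleep (state D) count towards the load average just like runnable tasks, even if they use almost no CPU.

Code: Select all

# list tasks currently sitting in uninterruptible sleep (state D)
ps -eo state,pid,comm | awk '$1 == "D"'

# kernel stack of the suspect thread (same /proc/<pid>/stack as above)
cat /proc/$(pgrep vdec-core)/stack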


Re: load average at idle is 1.0

Post by crashoverride »

The task is here:
https://github.com/hardkernel/linux/blo ... 2039-L2231

It appears to decrement a semaphore, sleep for 10ms to 20ms, increment the semaphore, and repeat when no video is playing. Worst case is activity 100 times a second (10ms sleeps); best case is activity 50 times a second (20ms sleeps).
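
A rough way to confirm the wakeup rate (a sketch only; 2009 is the PID from the post above, and voluntary_ctxt_switches is the standard procfs counter) is to sample the counter over one second:

Code: Select all

# estimate vdec-core wakeups per second from its voluntary context switches
a=$(awk '/^voluntary_ctxt_switches/ {print $2}' /proc/2009/status)
sleep 1
b=$(awk '/^voluntary_ctxt_switches/ {print $2}' /proc/2009/status)
echo "$((b - a)) wakeups/sec"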


Re: load average at idle is 1.0

Post by rooted »

For now you can do this to stop the kernel thread.

Code: Select all

rmmod amvdec_vp9 amvdec_vc1 amvdec_real amvdec_mmpeg4 amvdec_mpeg4 amvdec_mpeg12 amvdec_mmjpeg amvdec_mjpeg amvdec_h265 amvdec_h264mvc amvdec_mh264 amvdec_h264 amvdec_avs stream_input decoder_common


Re: load average at idle is 1.0

Post by campbell »

I was able to stop it by adding "fdt rm /vdec" to boot.ini, but it is worth mentioning that there was no measurable difference in power consumption with or without this change.
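
For anyone else trying this, roughly where the line goes (a sketch only; the surrounding command and variable name are placeholders, and the stock boot.ini differs between image releases): it has to run after the dtb has been loaded and selected with "fdt addr", and before the kernel is booted.

Code: Select all

# excerpt from boot.ini on the boot partition (placeholder variable name)
fdt addr ${dtb_loadaddr}
fdt rm /vdec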


Re: load average at idle is 1.0

Post by brad »

I did a bit more looking at this issue; some observations:

- vdec-core is doing a large amount of context switching while stuck in disk sleep
- IRQ 38 (vsync) & IRQ 46 (rdma) appear to be locked in sync, and the majority of IRQ work is done on CPU0 & CPU2
- CPU4 (1st little core) is doing more work than I expect at idle (but not IRQs) - not sure what this is yet

I will try to make a monitoring script (rough sketch below) to check and monitor:
- CPU affinities of kernel processes
- CPU affinities of interrupts & IRQ status
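
A first rough cut of that script (the procfs/sysfs paths are standard; the output format is just a placeholder):

Code: Select all

#!/bin/bash
# dump the CPU affinity of every task, then the per-IRQ affinities and counters
for pid in /proc/[0-9]*; do
    comm=$(cat "$pid/comm" 2>/dev/null)
    cpus=$(awk '/^Cpus_allowed_list/ {print $2}' "$pid/status" 2>/dev/null)
    echo "${pid##*/} $comm cpus=$cpus"
done
grep . /proc/irq/*/smp_affinity_list 2>/dev/null
cat /proc/interrupts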

I am also building a realtime kernel for the N2 which has extensive scheduler debugging included. For the moment I need to complete more work to make the Amlogic drivers realtime friendly (changing the spinlock methods in the drivers). My initial big hurdles here are the Amlogic Meson UART, the Amlogic IRQ modifications, and the Amlogic debugger.


Re: load average at idle is 1.0

Post by crashoverride »

I do not believe there is anything 'wrong' here. I looked into this while researching other issues. The term "disk sleep" is a misnomer. It's actually "uninterruptible sleep" (see the "man ps" docs). The thread does spend a lot of time (10ms to 20ms) sleeping without waiting for anything:
https://github.com/hardkernel/linux/blo ... ec.c#L2224

Code: Select all

	usleep_range(1000, 2000);
The code decrements a semaphore and increments a semaphore. Both are 'non trivial' operations using a spinlock:
https://elixir.bootlin.com/linux/v4.9.1 ... hore.c#L75
https://elixir.bootlin.com/linux/v4.9.1 ... ore.c#L178


Re: load average at idle is 1.0

Post by brad »

crashoverride wrote:
Sun Feb 24, 2019 9:19 am
I do not believe there is anything 'wrong' here. I looked into this while researching other issues. The term "disk sleep" is a misnomer. It's actually "uninterruptible sleep" (see the "man ps" docs). The thread does spend a lot of time (10ms to 20ms) sleeping without waiting for anything:
https://github.com/hardkernel/linux/blo ... ec.c#L2224

Code: Select all

	usleep_range(1000, 2000);
The code decrements a semaphore and increments a semaphore. Both are 'non trivial' operations using a spinlock:
https://elixir.bootlin.com/linux/v4.9.1 ... hore.c#L75
https://elixir.bootlin.com/linux/v4.9.1 ... ore.c#L178
Thanks crashoverride. I have a suspicion that the problem is a little deeper than the vdec process. A sleep would imply that the process sleeps and waits for the scheduler to wake it (it should not consume CPU while waiting), and this is probably what we see (the vdec process reporting 2-6% CPU). My theory is that the scheduler itself is having some trouble: it is much more complex to schedule across 2 different CPU domains and also manage/account for power across the 2 domains. The Amlogic kernel has significant changes to the scheduler for 4.9 to support multiple CPU domains, Android-specific changes, and patches to the scheduler from upstream (4.9+ fixes ported back).

I did plan to enable scheduler debug, stall, and lockup detection within the kernel to investigate further. I will continue with this if I have no luck with my realtime kernel porting, as the realtime kernel already has significant tracing for the scheduler. Either way, for my realtime kernel to support vdec properly I will need to ensure it does not do an uninterruptible sleep for that long with IRQs disabled.
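
As a sanity check before rebuilding, the running kernel's options can be inspected first (assuming CONFIG_IKCONFIG_PROC is enabled so /proc/config.gz exists; the symbol names are the standard upstream ones, and whether every one of them is present in the HK 4.9 tree is an assumption):

Code: Select all

# check which scheduler / stall / lockup debug options the current kernel has
zcat /proc/config.gz | grep -E 'CONFIG_SCHED_DEBUG|CONFIG_SCHEDSTATS|CONFIG_DETECT_HUNG_TASK|LOCKUP_DETECTOR'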


Re: load average at idle is 1.0

Post by crashoverride »

brad wrote:
Sun Feb 24, 2019 9:55 am
(vdec process reporting 2-6% cpu)
The reported number appears high because the cpu governor reduces the clocks. This can be verified by setting the governor to performance:

Code: Select all

cpufreq-set -c 0-1 -g performance
cpufreq-set -c 2-5 -g performance
The results show this (0.3%) instead:

Code: Select all

2028 root      rt   0       0      0      0 D   0.3  0.0   9:12.48 vdec-core


Re: load average at idle is 1.0

Post by brad »

Yes, I just tested on the standard kernel again: increasing the clock speed / changing the governor reduces the reported percentage of CPU used by the vdec process at idle, but it does not change the load of "1.00", which is related to at least one of the modules rooted mentioned earlier. This seems to confirm a scheduling issue, so I looked a little further and decided to trace the scheduler and interrupts at idle.

It is interesting that the scheduler is having a hard time updating the local CPU load for our friend vdec-core. I notice we were also using tickless idle by default; I might disable that and retry.

Code: Select all

      vdec-core-2010  [000] d..3  4724.222201: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.222203: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.222204: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.222205: sched_avg_update <-cpu_load_update
          <idle>-0     [000] d..2  4724.224197: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2  4724.224198: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2  4724.224201: irq_exit <-__handle_domain_irq
          <idle>-0     [000] .n.2  4724.224203: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [000] d..2  4724.226206: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2  4724.226208: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2  4724.226211: irq_exit <-__handle_domain_irq
          <idle>-0     [000] dn.3  4724.226212: sched_avg_update <-cpu_load_update
          <idle>-0     [000] .n.2  4724.226214: sched_ttwu_pending <-cpu_startup_entry
       vdec-core-2010  [000] d..3  4724.226219: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.226222: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.226223: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.226224: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.226225: sched_avg_update <-cpu_load_update
          <idle>-0     [000] d..2  4724.228217: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2  4724.228218: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2  4724.228221: irq_exit <-__handle_domain_irq
          <idle>-0     [000] .n.2  4724.228223: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [000] d..2  4724.230226: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2  4724.230228: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2  4724.230231: irq_exit <-__handle_domain_irq
          <idle>-0     [000] dn.3  4724.230232: sched_avg_update <-cpu_load_update
          <idle>-0     [000] .n.2  4724.230234: sched_ttwu_pending <-cpu_startup_entry
       vdec-core-2010  [000] d..3  4724.230239: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.230241: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.230243: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.230244: sched_avg_update <-cpu_load_update
       vdec-core-2010  [000] d..3  4724.230246: sched_avg_update <-cpu_load_update
          <idle>-0     [002] d..2  4724.231990: irq_enter <-__handle_domain_irq
          <idle>-0     [002] d.h2  4724.231991: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [002] d.h2  4724.231996: irq_exit <-__handle_domain_irq
          <idle>-0     [000] d..2  4724.231996: irq_enter <-scheduler_ipi
          <idle>-0     [000] d.h2  4724.231997: sched_ttwu_pending <-scheduler_ipi
          <idle>-0     [000] dnh2  4724.231999: irq_exit <-scheduler_ipi
          <idle>-0     [000] .n.2  4724.232001: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [000] d..2  4724.232237: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2  4724.232238: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2  4724.232241: irq_exit <-__handle_domain_irq


Re: load average at idle is 1.0

Post by brad »

Disabling tickless idle stops vdec-core from holding interrupts disabled, but the load issue is still present (the trace below now shows mostly the idle process).

Code: Select all

     ksoftirqd/0-3     [000] d.s2   546.978351: irq_enter <-scheduler_ipi
     ksoftirqd/0-3     [000] d.H2   546.978352: sched_ttwu_pending <-scheduler_ipi
     ksoftirqd/0-3     [000] d.H2   546.978368: irq_exit <-scheduler_ipi
          <idle>-0     [000] d..2   546.979005: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2   546.979007: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] d.h3   546.979012: irq_may_run <-handle_fasteoi_irq
          <idle>-0     [000] d.h2   546.979034: irq_exit <-__handle_domain_irq
          <idle>-0     [000] d..2   546.979227: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2   546.979229: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] d.h3   546.979232: irq_may_run <-handle_fasteoi_irq
          <idle>-0     [000] dnh2   546.979351: irq_exit <-__handle_domain_irq
          <idle>-0     [000] dn.2   546.979352: irq_enter <-__handle_domain_irq
          <idle>-0     [000] dnh2   546.979354: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh3   546.979357: irq_may_run <-handle_fasteoi_irq
          <idle>-0     [000] dnh2   546.979388: irq_exit <-__handle_domain_irq
          <idle>-0     [000] .n.2   546.979390: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [000] d..2   546.980278: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2   546.980280: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2   546.980295: irq_exit <-__handle_domain_irq
          <idle>-0     [000] .n.2   546.980297: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [001] d..2   546.982195: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d..2   546.982195: irq_enter <-__handle_domain_irq
          <idle>-0     [001] d.h2   546.982197: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] d.h2   546.982197: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [003] d..2   546.982199: irq_enter <-__handle_domain_irq
          <idle>-0     [005] d..2   546.982200: irq_enter <-__handle_domain_irq
          <idle>-0     [002] d..2   546.982200: irq_enter <-__handle_domain_irq
          <idle>-0     [004] d..2   546.982200: irq_enter <-__handle_domain_irq
          <idle>-0     [001] d.h3   546.982203: sched_avg_update <-cpu_load_update_active
          <idle>-0     [003] d.h2   546.982206: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [005] d.h2   546.982207: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [002] d.h2   546.982207: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [004] d.h2   546.982208: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [001] d.h2   546.982209: irq_exit <-__handle_domain_irq
          <idle>-0     [000] d.h3   546.982211: sched_avg_update <-cpu_load_update_active
          <idle>-0     [000] dnh2   546.982219: irq_exit <-__handle_domain_irq
          <idle>-0     [000] dns3   546.982229: irq_enter <-handle_IPI
          <idle>-0     [005] d.h3   546.982230: sched_avg_update <-cpu_load_update_active
          <idle>-0     [004] d.h3   546.982231: sched_avg_update <-cpu_load_update_active
          <idle>-0     [002] d.h3   546.982231: sched_avg_update <-cpu_load_update_active
          <idle>-0     [000] dnH3   546.982231: irq_work <-irq_work_run_list
          <idle>-0     [003] d.h3   546.982231: sched_avg_update <-cpu_load_update_active
          <idle>-0     [005] d.h2   546.982246: irq_exit <-__handle_domain_irq
          <idle>-0     [004] d.h2   546.982247: irq_exit <-__handle_domain_irq
          <idle>-0     [002] d.h2   546.982248: irq_exit <-__handle_domain_irq
          <idle>-0     [003] d.h2   546.982248: irq_exit <-__handle_domain_irq
          <idle>-0     [000] dnH3   546.982253: irq_exit <-handle_IPI
          <idle>-0     [000] .n.2   546.982306: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [000] d..2   546.984313: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2   546.984314: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [000] dnh2   546.984322: irq_exit <-__handle_domain_irq
          <idle>-0     [000] .n.2   546.984324: sched_ttwu_pending <-cpu_startup_entry
          <idle>-0     [001] d..2   546.986195: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d..2   546.986195: irq_enter <-__handle_domain_irq
          <idle>-0     [000] d.h2   546.986197: irq_find_mapping <-__handle_domain_irq
          <idle>-0     [001] d.h2   546.986197: irq_find_mapping <-__handle_domain_irq
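
(For reference, and not necessarily how it was done here: on a kernel built with NO_HZ_IDLE, dyntick-idle can also be switched off at boot with the nohz=off parameter, e.g. appended to the kernel command line in boot.ini. The line below is only a sketch.)

Code: Select all

# boot.ini sketch: disable dyntick-idle without rebuilding the kernel
setenv bootargs "${bootargs} nohz=off"
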
My initial debug script for tracing the scheduler at idle is as follows:

Code: Select all

# stop tracing, then trace only scheduler and IRQ entry points with the function tracer
echo 0 >/sys/kernel/debug/tracing/tracing_on
echo 'sched_*' 'irq_*' > /sys/kernel/debug/tracing/set_ftrace_filter
echo function >/sys/kernel/debug/tracing/current_tracer

# capture 3 seconds of activity, then stop
echo 1 >/sys/kernel/debug/tracing/tracing_on
sleep 3
echo 0 >/sys/kernel/debug/tracing/tracing_on

# view the result
less /sys/kernel/debug/tracing/trace


Re: load average at idle is 1.0

Post by campbell »

Need to make sure the remedy for the "problem" doesn't end up costing way more power :) Turning on the performance governor in order to decrease the cpu load is a classic example


Re: load average at idle is 1.0

Post by brad »

campbell wrote:
Mon Feb 25, 2019 10:45 am
Need to make sure the remedy for the "problem" doesn't end up costing way more power :) Turning on the performance governor in order to decrease the cpu load is a classic example
Power consumption for something like this is actually a very interesting topic, and I have no real idea of the correct answer. I would like to think a lower CPU frequency results in lower power.

- Potentially more power is used by the CPU cores at higher frequencies, due to the higher clock rates and switching overhead.
- Potentially more power is used servicing interrupts at lower frequencies, as they are held for longer because the CPU and scheduler are slower to execute and have a lower resolution.

I'm going to try a "tickless build" (and some more debugging) to see if I can identify the issue.


Re: load average at idle is 1.0

Post by brad »

This issue gets a little stranger. On a basic mainline kernel with example patches from Baylibre I still have a load avg of 1 at idle.

Code: Select all

odroid@odroid:/boot$ uname -a
Linux odroid 5.0.0-rc6 #2 SMP PREEMPT Sat Mar 2 23:25:40 UTC 2019 aarch64 aarch64 aarch64 GNU/Linux
odroid@odroid:/boot$ lsmod
Module                  Size  Used by
cpufreq_powersave      16384  0
cpufreq_conservative    16384  0
meson_dw_hdmi          20480  0
meson_drm              53248  2 meson_dw_hdmi
dw_hdmi                32768  1 meson_dw_hdmi
drm_kms_helper        180224  5 meson_dw_hdmi,meson_drm,dw_hdmi
drm                   434176  5 meson_dw_hdmi,meson_drm,drm_kms_helper,dw_hdmi
crct10dif_ce           16384  1
meson_canvas           16384  1 meson_drm
drm_panel_orientation_quirks    20480  1 drm
nvmem_meson_efuse      16384  0
ip_tables              32768  0
x_tables               36864  1 ip_tables
ipv6                  393216  22
odroid@odroid:/boot$ uptime
 07:25:04 up  8:20,  1 user,  load average: 1.00, 1.00, 1.00
odroid@odroid:/boot$
There is no vdec core here, but Ethernet is broken and constantly trying to attach to the PHY, so maybe that is the cause in this instance.

Code: Select all

top - 07:33:55 up  8:29,  1 user,  load average: 1.00, 1.00, 1.00
Tasks: 106 total,   1 running,  55 sleeping,   0 stopped,   0 zombie
%Cpu(s):  0.0 us,  0.1 sy,  0.0 ni, 99.9 id,  0.0 wa,  0.0 hi,  0.0 si,  0.0 st
KiB Mem :  3794076 total,  3441516 free,    78356 used,   274204 buff/cache
KiB Swap:        0 total,        0 free,        0 used.  3638488 avail Mem

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU %MEM     TIME+ COMMAND
 3646 odroid    20   0    6028   2940   2360 R   0.3  0.1   0:00.07 top
    1 root      20   0  160496   7592   5764 S   0.0  0.2   0:03.31 systemd
    2 root      20   0       0      0      0 S   0.0  0.0   0:00.01 kthreadd
    3 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_gp
    4 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 rcu_par_gp
    8 root       0 -20       0      0      0 I   0.0  0.0   0:00.00 mm_percpu_+
    9 root      20   0       0      0      0 S   0.0  0.0   0:00.14 ksoftirqd/0
   10 root      20   0       0      0      0 I   0.0  0.0   0:00.15 rcu_preempt
   11 root      rt   0       0      0      0 S   0.0  0.0   0:00.00 migration/0
   12 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/0
   13 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/1
   14 root      rt   0       0      0      0 S   0.0  0.0   0:00.00 migration/1
   15 root      20   0       0      0      0 S   0.0  0.0   0:00.07 ksoftirqd/1
   18 root      20   0       0      0      0 S   0.0  0.0   0:00.00 cpuhp/2
   19 root      rt   0       0      0      0 S   0.0  0.0   0:00.00 migration/2
   20 root      20   0       0      0      0 S   0.0  0.0   0:00.01 ksoftirqd/2
   21 root      20   0       0      0      0 I   0.0  0.0   0:00.24 kworker/2:+


Re: load average at idle is 1.0

Post by campbell »

brad wrote:
Sun Mar 03, 2019 5:19 pm
This issue gets a little stranger. On a basic mainline kernel with example patches from Baylibre I still have a load avg of 1 at idle.
Conversely, if I boot Arch Linux using the Hardkernel 4.9 kernel I get a loadavg of zero. Same kernel and modules, same dtb, different userspace.


Re: load average at idle is 1.0

Post by OverSun »

campbell wrote:
Mon Mar 04, 2019 1:55 am
brad wrote:
Sun Mar 03, 2019 5:19 pm
This issue gets a little stranger. On a basic mainline kernel with example patches from Baylibre I still have a load avg of 1 at idle.
Conversely, if I boot Arch Linux using the Hardkernel 4.9 kernel I get a loadavg of zero. Same kernel and modules, same dtb, different userspace.
Media modules are not loaded by default.
Load all the media modules needed for decoding, and one of them will spawn that ridiculous thread that does nothing in a very quick cycle.


Re: load average at idle is 1.0

Post by brad »

brad wrote:
Sun Mar 03, 2019 5:19 pm
This issue gets a little stranger. On a basic mainline kernel with example patches from Baylibre I still have a load avg of 1 at idle.
[...]
There is no vdec core here, but Ethernet is broken and constantly trying to attach to the PHY, so maybe that is the cause in this instance.
The issue I had seen in the mainline version is indeed related to the Ethernet PHY. I tracked it back to the [kworker/4:2+pm] process, which is essentially the link between the STMMAC Ethernet driver and the Realtek PHY. It is in disk sleep, constantly trying to find its PHY.

Code: Select all

odroid@odroid:~/mainline/linux-amlogic-v5.1-g12b$ ps -e v | grep 1601
 1601 ?        D      2:23      0     0     0     0  0.0 [kworker/4:2+pm]
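
The same /proc stack trick from earlier confirms what it is blocked on (1601 is the PID from the ps output above; the dmesg grep is just a rough filter):

Code: Select all

cat /proc/1601/stack
dmesg | grep -iE 'stmmac|phy'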


Re: load average at idle is 1.0

Post by elatllat »

vdec-core is still consuming too much time on ubuntu-18.04.3-4.9-minimal-odroid-n2-20190806.img.xz, second only to systemd.


Re: load average at idle is 1.0

Post by skycrack »

Hello,
I waited a long time for my ODROID N2. It has replaced my Raspberry Pi 3 as my home-automation station.
Now I see that the device load in the idle state is over 1.
I can't see a solution in this thread, or I don't understand it.
It would be nice to find some support suggestions here.

Installed on a 32 GB MMC (red dot)
Ubuntu 18.04.3 LTS
Linux odroid 4.9.190-62 #1 SMP PREEMPT Tue Sep 10 01:00:59 -03 2019 aarch64 aarch64 aarch64 GNU/Linux

top - 16:55:18 up 10:13, 1 user, load average: 2,04, 2,08, 2,32

No processes are significantly busy or have a high CPU load.

Best regards, Rene


Re: load average at idle is 1.0

Post by mad_ady »

The load of 1 is caused by a module that implements video decoding. As far as we can tell it's normal, and it's a fake load: the kernel thread just sits in uninterruptible sleep, which Linux counts towards the load average even though it uses almost no CPU.


Re: load average at idle is 1.0

Post by skycrack »

Thank you for the comment. Can I verify this by unloading the module? Greetings, Rene


Re: load average at idle is 1.0

Post by mad_ady »

You can try to blacklist vdec_core
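
Roughly something like this (a sketch only; the module names are taken from rooted's rmmod list earlier in the thread, and a plain blacklist entry only stops alias-based autoloading, so adjust it to whatever lsmod actually shows on your image):

Code: Select all

# /etc/modprobe.d/blacklist-vdec.conf
blacklist decoder_common
blacklist stream_input
blacklist amvdec_h264
blacklist amvdec_h265
blacklist amvdec_vp9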


Re: load average at idle is 1.0

Post by skycrack »

Hello,
this module is currently not loaded. My load is still around 2.

root@odroid:~# rmmod vdec_core
rmmod: ERROR: Module vdec_core is not currently loaded


Re: load average at idle is 1.0

Post by darkalfie »

Any updates on this?

My load was 2.x.

I've added "fdt rm /vdec" to boot.ini and vdec_core, rebooted, and no longer see vdec_core, but the load is still at 1.0.

I've blacklisted amvdec_vp9 amvdec_vc1 amvdec_real amvdec_mmpeg4 amvdec_mpeg4 amvdec_mpeg12 amvdec_mmjpeg amvdec_mjpeg amvdec_h265 amvdec_h264mvc amvdec_mh264 amvdec_h264 amvdec_avs stream_input decoder_common.

Still the same, at 1.0.


Re: load average at idle is 1.0

Post by ejolson »

More about the N2 load average is discussed in this thread:

viewtopic.php?f=181&t=37912

It would be great, in my opinion, if the kernel drivers responsible for the high load average were fixed.


Re: load average at idle is 1.0

Post by mad_ady »

Sadly those drivers are made by amlogic... So fat chance...


Re: load average at idle is 1.0

Post by ejolson »

mad_ady wrote:
Wed Apr 01, 2020 1:37 pm
Sadly those drivers are made by amlogic... So fat chance...
I see no reason to believe the software engineers at amlogic are incompetent to the point of not being able to write a device driver that avoids increasing the load level to one or even two. It is strange, however, to imagine how the present drivers passed internal quality assurance without anyone noticing the increased load level.

If any of those developers happen to read the present thread and subsequently find themselves with extra time due to a quarantine, please consider fixing this load level problem.


Re: load average at idle is 1.0

Post by mad_ady »

I think quality is not as important as shipping the product. My LG Android TV bought last year has a constant load of 28 when idle! And it's not a 32 core system...


Re: load average at idle is 1.0

Post by rooted »

mad_ady wrote:I think quality is not as important as shipping the product. My LG Android TV bought last year has a constant load of 28 when idle! And it's not a 32 core system...
My LeEco ATV fares a bit better: it idles at a load of 3, but it also had the most powerful SoC available when it was new and is still pretty great.

You really can't pay attention to the load value on Android any longer; my phone is at a load of 5 as I type this message.


Re: load average at idle is 1.0

Post by mad_ady »

Indeed. My xiaomi phone shows 0.00 across the board. Smells like a coverup.


Re: load average at idle is 1.0

Post by rooted »

Long ago, with my first Android device (the original Motorola Droid), I developed a kernel which would reach a 0.00 idle load. It was mostly done by tweaking the interactive CPU governor and base frequency; you can't usually reach a 0 load average if CPU states aren't utilized properly.
