
How to detect if a process was killed by cgroup due to exceeding a limit?

Asked on Server Fault on November 27, 2021

I have a global cgroup defined in /etc/cgconfig.conf that limits the amount of memory. Every time a user runs a command, I prepend cgexec to place the process and its children in the controlled group. Every now and then the limit kicks in and the user's process is killed.
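
For illustration, the setup looks roughly like this (the group name limited and the 512M limit are placeholders, not my real values):

    group limited {
        memory {
            memory.limit_in_bytes = 512M;
        }
    }

    cgexec -g memory:limited usercommand args...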

If the exit code is non-zero, how do I tell whether the process failed because of its own internal logic or was killed by the cgroup mechanism?

It’s running in user space, so I’d like to avoid parsing /var/log/syslog.

2 Answers

/var/log/kern.log will tell you that. In this example it logs the death of a process running inside Docker's cgroups, which are themselves nested inside LXC's cgroups.

Jan 21 12:32:59 server-hostname kernel: [5808332.413137] oom_reaper: reaped process 32190 (python), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
Jan 21 17:28:27 server-hostname kernel: [5826415.492483] python invoked oom-killer: gfp_mask=0x6200ca(GFP_HIGHUSER_MOVABLE), nodemask=(null), order=0, oom_score_adj=0
Jan 21 17:28:27 server-hostname kernel: [5826415.492484] python cpuset=0efdc3f815c9d3755525ecfa5bbd40829a5724d4aafb39aedb909f7af23e2d3a mems_allowed=0-1
Jan 21 17:28:27 server-hostname kernel: [5826415.492489] CPU: 9 PID: 16369 Comm: python Tainted: P           O      4.18.0-11-generic #12-Ubuntu
Jan 21 17:28:27 server-hostname kernel: [5826415.492490] Hardware name: SuperHardware SYS-112999LX-LC2-S18/X176D7U-CXL, BIOS 2.9d 04/12/2018
Jan 21 17:28:27 server-hostname kernel: [5826415.492491] Call Trace:
Jan 21 17:28:27 server-hostname kernel: [5826415.492500]  dump_stack+0x63/0x83
Jan 21 17:28:27 server-hostname kernel: [5826415.492504]  dump_header+0x71/0x278
Jan 21 17:28:27 server-hostname kernel: [5826415.492506]  oom_kill_process.cold.26+0xb/0x386
Jan 21 17:28:27 server-hostname kernel: [5826415.492507]  out_of_memory+0x1ba/0x4b0
Jan 21 17:28:27 server-hostname kernel: [5826415.492511]  mem_cgroup_out_of_memory+0x4b/0x80
Jan 21 17:28:27 server-hostname kernel: [5826415.492513]  mem_cgroup_oom_synchronize+0x31d/0x350
Jan 21 17:28:27 server-hostname kernel: [5826415.492514]  ? mem_cgroup_swappiness_read+0x40/0x40
Jan 21 17:28:27 server-hostname kernel: [5826415.492516]  pagefault_out_of_memory+0x36/0x7b
Jan 21 17:28:27 server-hostname kernel: [5826415.492521]  mm_fault_error+0x8c/0x150
Jan 21 17:28:27 server-hostname kernel: [5826415.492525]  ? handle_mm_fault+0xe1/0x210
Jan 21 17:28:27 server-hostname kernel: [5826415.492527]  __do_page_fault+0x4a1/0x4d0
Jan 21 17:28:27 server-hostname kernel: [5826415.492528]  do_page_fault+0x2e/0xe0
Jan 21 17:28:27 server-hostname kernel: [5826415.492531]  ? page_fault+0x8/0x30
Jan 21 17:28:27 server-hostname kernel: [5826415.492532]  page_fault+0x1e/0x30
Jan 21 17:28:27 server-hostname kernel: [5826415.492534] RIP: 0033:0x4a9180
Jan 21 17:28:27 server-hostname kernel: [5826415.492534] Code: Bad RIP value.
Jan 21 17:28:27 server-hostname kernel: [5826415.492539] RSP: 002b:00007ffffed1c2d8 EFLAGS: 00010246
Jan 21 17:28:27 server-hostname kernel: [5826415.492540] RAX: 00007f894175fc30 RBX: 00000000019e20b8 RCX: 0000000000000002
Jan 21 17:28:27 server-hostname kernel: [5826415.492541] RDX: 00007f894a35a330 RSI: 0000000000000001 RDI: 00007f894a35a350
Jan 21 17:28:27 server-hostname kernel: [5826415.492541] RBP: 0000000001ce72a0 R08: 00000000008f9920 R09: 0000000000000000
Jan 21 17:28:27 server-hostname kernel: [5826415.492542] R10: 00007f8942840d40 R11: 0000000000000000 R12: 00007f894175fc30
Jan 21 17:28:27 server-hostname kernel: [5826415.492542] R13: 00007f894a35a350 R14: 00000000008f9920 R15: 0000000001ce74a0
Jan 21 17:28:27 server-hostname kernel: [5826415.492543] Task in /lxc/lxcname/docker/0efdc3f815c9d3755525ecfa5bbd40829a5724d4aafb39aedb909f7af23e2d3a killed as a result of limit of /lxc/lxcname/docker/0efdc3f815c9d3755525ecfa5bbd40829a5724d4aafb39aedb909f7af23e2d3a
Jan 21 17:28:27 server-hostname kernel: [5826415.492548] memory: usage 6291456kB, limit 6291456kB, failcnt 96125
Jan 21 17:28:27 server-hostname kernel: [5826415.492549] memory+swap: usage 6291456kB, limit 12582912kB, failcnt 0
Jan 21 17:28:27 server-hostname kernel: [5826415.492549] kmem: usage 17728kB, limit 9007199254740988kB, failcnt 0
Jan 21 17:28:27 server-hostname kernel: [5826415.492550] Memory cgroup stats for /lxc/lxcname/docker/0efdc3f815c9d3755525ecfa5bbd40829a5724d4aafb39aedb909f7af23e2d3a: cache:0KB rss:6272240KB rss_huge:0KB shmem:132KB mapped_file:0KB dirty:0KB writeback:0KB swap:0KB inactive_anon:64KB active_anon:6273580KB inactive_file:0KB active_file:0KB unevictable:0KB
Jan 21 17:28:27 server-hostname kernel: [5826415.492558] [ pid ]   uid  tgid total_vm      rss pgtables_bytes swapents oom_score_adj name
Jan 21 17:28:27 server-hostname kernel: [5826415.492687] [16369]  5000 16369  1580516  1568393 12701696        0             0 python
Jan 21 17:28:27 server-hostname kernel: [5826415.492873] Memory cgroup out of memory: Kill process 16369 (python) score 999 or sacrifice child
Jan 21 17:28:27 server-hostname kernel: [5826415.502126] Killed process 16369 (python) total-vm:6322064kB, anon-rss:6273396kB, file-rss:176kB, shmem-rss:0kB
Jan 21 17:28:27 server-hostname kernel: [5826415.767023] oom_reaper: reaped process 16369 (python), now anon-rss:0kB, file-rss:0kB, shmem-rss:0kB
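
If you want to avoid parsing logs entirely, the memory controller itself exposes an OOM-kill counter you can read from user space. On cgroup v1, the group's memory.oom_control file gains an oom_kill field on kernels 4.13 and later; on cgroup v2, the memory.events file carries oom and oom_kill counters. A minimal sketch in Python, assuming cgroup v1 and a group named limited (the group name, mount path, and usercommand are assumptions; adjust to your setup):

    import subprocess

    # assumed v1 path; on cgroup v2 read <group>/memory.events instead
    OOM_CONTROL = "/sys/fs/cgroup/memory/limited/memory.oom_control"

    def oom_kill_count():
        # memory.oom_control holds lines like "oom_kill_disable 0",
        # "under_oom 0" and, on kernels >= 4.13, "oom_kill <n>"
        with open(OOM_CONTROL) as f:
            for line in f:
                key, _, value = line.partition(" ")
                if key == "oom_kill":
                    return int(value)
        return 0  # field absent on older kernels

    before = oom_kill_count()
    result = subprocess.run(["cgexec", "-g", "memory:limited", "usercommand"])
    after = oom_kill_count()

    if result.returncode != 0 and after > before:
        print("the cgroup OOM killer fired while the command was running")

Note that the counter is per-group, not per-process, so a concurrent kill of another task in the same group would look the same.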

Answered by styrofoam fly on November 27, 2021

A few years ago I ran a series of experiments to answer this same question, and they showed that the killed process always has exit code 137, i.e. 128 + 9, where 128 is the offset POSIX shells add to the signal number when a process is terminated by a signal and 9 is the signal number of SIGKILL (kill signal). Unfortunately, I couldn't find a way to confirm from the exit code alone that it really was a SIGKILL and not just the process itself calling exit(137) / return 137;
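
One refinement to that caveat: if your wrapper is the direct parent of the process, the raw wait status does distinguish the two cases. A shell's $? collapses death-by-signal into 128 + N, but waitpid() reports signal deaths and normal exits differently, and Python's subprocess surfaces a signal death as a negative return code. A small sketch (the cgexec invocation and group name are illustrative):

    import signal
    import subprocess

    # group name "limited" is illustrative; use your actual group
    result = subprocess.run(["cgexec", "-g", "memory:limited", "usercommand"])

    if result.returncode == -signal.SIGKILL:
        # the child was terminated by SIGKILL; subprocess reports
        # signal deaths as -signum, so this cannot be exit(137)
        print("killed by SIGKILL")
    elif result.returncode == 137:
        # the child itself called exit(137) -- a normal exit
        print("exited normally with status 137")

SIGKILL alone could still come from an unrelated kill -9; combining this with the cgroup's oom_kill counter (see the other answer) pins it down further.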

Answered by Vlad Frolov on November 27, 2021
