The Admin UI is basically just another SQL client that uses the _sql HTTP endpoint to run queries and display the results in the UI. So even though you are not actively querying in the console, the Admin UI still sends quite a few SQL statements to the cluster.
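You can issue the same kind of calls yourself against the _sql endpoint. A minimal sketch, assuming the default HTTP port 4200 and with <node-ip> as a placeholder you replace with a real address:

$ curl -sS -H 'Content-Type: application/json' -d "{\"stmt\": \"SELECT name, process['open_file_descriptors'] FROM sys.nodes ORDER BY name\"}" 'http://<node-ip>:4200/_sql'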
Are you using any sort of loadbalancer / reverse proxy?
The amount of data does not matter too much; what matters is the number of Lucene segments, which is only loosely related to the amount of data.
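If you want to check how many Lucene segments each node currently holds, a query along these lines should work on CrateDB versions that expose the sys.segments table (a sketch, run e.g. in the Admin UI console):

SELECT node['name'] AS node_name, count(*) AS segment_count
FROM sys.segments
GROUP BY node['name'];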
There’s an HAProxy load balancer, but for this development cluster we connected directly to the nodes most of the time. For example, all of the web-based UI connections were directly made to the IP address of a particular node, not via a load balancer.
I made a similar experiment this morning:
Every few seconds, check the value of process['open_file_descriptors'] for node 01 and see that it stays constant at 2902 (a shell sketch for polling this follows below).
Connect via Chrome directly to the web-based administrative UI running on node 01.
Start clicking on some items on the web UI to generate some SQL queries to the back-end CrateDB server.
Observe that the value of process['open_file_descriptors'] quickly increases to 2944.
Exit Chrome browser.
The value of process['open_file_descriptors'] drops, but not by much: it goes down to 2942 and stays at that value.
The same situation applies to process['open_file_descriptors'] on all 3 nodes. They all increase, on and on…
Meanwhile, on our production cluster, I do something similar: connect directly to the first node with Chrome, click around, exit. And process['open_file_descriptors'] stays around 500.
Therefore, it is almost like the following: if I keep connecting with Chrome to one of the nodes on our development cluster, clicking on a few items, exiting, and repeating this, the value of process['open_file_descriptors'] will keep increasing until I start getting the /usr/share/crate/lib/site/index.html: Too many open files error again.
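One way to poll that value from the shell every few seconds, as a rough sketch (assuming the default HTTP port 4200; 'node-01' and <node-01-ip> are placeholders for the actual node name and address):

$ while true; do curl -sS -H 'Content-Type: application/json' -d "{\"stmt\": \"SELECT process['open_file_descriptors'] FROM sys.nodes WHERE name = 'node-01'\"}" 'http://<node-01-ip>:4200/_sql'; echo; sleep 5; done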
What I’m trying to understand:
why does the value of process['open_file_descriptors'] continuously increase?
I mean, normally, I’d expect it to stabilize around some value on that cluster that’s not doing much at all. I’d expect that number to increase when I connect with Chrome, but then decrease after I exit Chrome.
Maybe you have some insight? Or some method to debug this strange case further?
I decided to look into the most recent file descriptors and see if there are obvious differences between production cluster node 01, and development cluster node 01.
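A quick way to do that is to list the newest descriptors of the CrateDB java process and group them by target; a sketch (replace <crate-pid> with the PID of the java process):

$ sudo ls -lt /proc/<crate-pid>/fd | head -20    # newest descriptors first
$ sudo ls -l /proc/<crate-pid>/fd | awk '{print $NF}' | sort | uniq -c | sort -rn | head    # count descriptors by target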
Another difference between the production cluster and development cluster:
I installed sshfs on development cluster nodes around 17 December, but I didn’t configure anything for sshfs, didn’t create a network mount, etc.
Development cluster, node 01:
$ dpkg -l | egrep "(fuse)|(sshf)"
ii fuse3 3.10.3-2 amd64 Filesystem in Userspace (3.x version)
ii libfuse2:amd64 2.9.9-5 amd64 Filesystem in Userspace (library)
ii libfuse3-3:amd64 3.10.3-2 amd64 Filesystem in Userspace (library) (3.x version)
ii sshfs 3.7.1+repack-2 amd64 filesystem client based on SSH File Transfer Protocol
Production cluster, node 01:
$ dpkg -l | egrep "(fuse)|(sshf)"
ii libfuse2:amd64 2.9.9-1+deb10u1 amd64 Filesystem in Userspace (library)
Could it be that this fuse3 package and its associated programs are creating thousands of file descriptors in the fd directory of the java process? Because:
On the development cluster (sshfs installed, but not configured or running), node 01:
Another experiment, this time monitoring cgroup / fuse related file descriptors for java process as I run Chrome, click on a few items on the web-based administrative UI, and exit Chrome:
$ while true; do sudo ls -lt /proc/585/fd | egrep "(cgroup)|(fuse)" | wc -l ; sleep 3; done
2708
2712 <-- Start Chrome
2712 <-- ...
2716 <-- ...
2716
2720
2724 <-- click on UI items, generate SQL to the back-end
2724 <-- ...
2728 <-- ...
2728
2732
2736
2736
2740
2748
2752 <-- Exit Chrome
2752 <-- it's like once those file descriptors are created, they are always there
2752 ...
2752 ...
2752 ...
2752
2752
2752
2752 <-- but why? Some kind of resource leakage?
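A variation of the same command shows where the newest of those descriptors actually point, rather than just counting them:

$ sudo ls -lt /proc/585/fd | egrep "(cgroup)|(fuse)" | head -5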
The core DB team did a quick analysis but did not find an obvious cause for this. I created a card on our board for us to try to reproduce this behaviour and dig deeper.
Thanks for reporting this and looking into it so far!
And then let’s trace that Java process and its “forks”:
$ sudo strace --decode-fds=all --trace-path=/sys/fs/cgroup --follow-forks --attach 1127
strace: Process 1127 attached with 116 threads
[pid 1615] lstat("/sys/fs/cgroup", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0 <-- As soon as I open the web UI I start to see these kinds of entries, and they stop as soon as I close it
[pid 1615] openat(AT_FDCWD, "/sys/fs/cgroup", O_RDONLY) = 865</sys/fs/cgroup>
[pid 1615] dup(865</sys/fs/cgroup>) = 938</sys/fs/cgroup>
[pid 1615] fstat(865</sys/fs/cgroup>, {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
[pid 1615] fcntl(865</sys/fs/cgroup>, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 1615] fcntl(865</sys/fs/cgroup>, F_SETFD, FD_CLOEXEC) = 0
[pid 1615] getdents64(865</sys/fs/cgroup>, 0x7f4f5025f690 /* 30 entries */, 32768) = 1144
[pid 1303] lstat("/sys/fs/cgroup", {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
[pid 1303] openat(AT_FDCWD, "/sys/fs/cgroup", O_RDONLY) = 942</sys/fs/cgroup>
[pid 1303] dup(942</sys/fs/cgroup>) = 943</sys/fs/cgroup>
[pid 1303] fstat(942</sys/fs/cgroup>, {st_mode=S_IFDIR|0555, st_size=0, ...}) = 0
[pid 1303] fcntl(942</sys/fs/cgroup>, F_GETFL) = 0x8000 (flags O_RDONLY|O_LARGEFILE)
[pid 1303] fcntl(942</sys/fs/cgroup>, F_SETFD, FD_CLOEXEC) = 0
[pid 1303] getdents64(942</sys/fs/cgroup>, 0x7f4ea439a950 /* 30 entries */, 32768) = 1144 <-- At this point I close the web UI
^Cstrace: Process 1127 detached <-- I manually exit strace, because after I close the web UI, nothing more happens with respect to /sys/fs/cgroup file descriptors
strace: Process 1133 detached
strace: Process 1134 detached
[... the same "strace: Process <pid> detached" line repeats for the remaining threads ...]
strace: Process 2733 detached
We came across this issue again and could finally pin it down to a bug in CrateDB in combination with distributions using cgroup v2. The fix is already merged and has been released to the testing channels with version 4.8.4. I expect a 5.0.2 to follow soon.
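For anyone who wants to check whether a host is running on cgroup v2 (the unified hierarchy) and is therefore potentially affected, a quick general Linux check is the filesystem type mounted at /sys/fs/cgroup:

$ stat -fc %T /sys/fs/cgroup
cgroup2fs    <-- cgroup v2; a cgroup v1/hybrid setup typically reports tmpfs here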