High disk usage on 2.18.2

I’m not yet sure whether this is just a coincidence, and I’m also not sure what the reason is, but both of our Speckle servers (production server and test server) are filling up their hard drives like crazy. I had the test server increased from 64 to 128 GB today … now it is full again.


Hard drive issues like this would indicate that someone (or a script) is sending a lot of data. Is it a disk associated with postgres or blob storage (minio)?
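
If it helps to narrow that down, something like this should show which mount and which named volume is growing (a rough sketch; it assumes a default Docker installation with named volumes under /var/lib/docker/volumes):

df -h
sudo du -sh /var/lib/docker/volumes/* | sort -rh | head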

Iain


The test server has everything on the machine; the production server has everything on managed services. I will take another look at it tomorrow. It seems to be running again after cleaning/resizing.


Talking about the same dev server @AlexHofbeck was mentioning… after increasing the drive size from 64 to 128 GB it’s full again. After digging a little, this might help with locating the problem:

Other sources say that “misconfigured Docker containers may lead to a big overlay2”.
Might this issue be Speckle-related? We never experienced this before the latest Speckle Server update.
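
To check whether overlay2 (or something next to it, like the container logs) is really what is growing, a rough sketch assuming Docker’s default data root under /var/lib/docker:

sudo du -sh /var/lib/docker/overlay2
sudo du -sh /var/lib/docker/containers
docker system df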

It could also be that Docker has a bug … or that we missed something in the docker-compose file that was updated by Speckle.

Just for the record… I have downgraded frontend-2 and speckle-server to version 2.18.1 for now, and it seems to work so far (let’s see).
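
For reference, rolling back like this is roughly the following (a sketch, assuming the image tags have been pinned to 2.18.1 in docker-compose.yml first; use docker-compose instead of docker compose depending on your setup):

docker compose pull
docker compose up -d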

But yes, the reason might be a different one…

Hey @samberger and @AlexHofbeck

As a potential cause of this, could you upgrade to the faulty version and take a look at the amount and type of data stored in the Redis instance? Since the data is cleaned up after upgrading, I’m suspecting it could be a caching issue of some sort.
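
A few quick checks that should cover that (assuming the Redis container from the standard docker-compose setup is called speckle_redis_1; adjust the name to your setup):

docker exec -it speckle_redis_1 redis-cli info memory
docker exec -it speckle_redis_1 redis-cli dbsize
docker exec -it speckle_redis_1 redis-cli --bigkeys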

Thanks,
Gergo


Thanks for reporting this.

Could you provide the output of docker version?
And the partial output of the following command (we just need the Local Volumes space usage section, it’ll print a lot of other verbose output which we don’t need):

docker system df -v
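
If the full output is too long to paste, something like this should print just that section (the section names below are the headers docker prints):

docker system df -v | sed -n '/Local Volumes space usage/,/Build cache usage/p'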

Thank you!


Hey @gjedlicska and @iainsproat! :slight_smile:

So, the quick part first. @iainsproat here is the docker version output:

[screenshot: docker version output]

The docker system df -v output is probably only interesting for the “faulty” version, right? As @gjedlicska mentioned? And also only in case the disk is filling up again.

This is for the currently running 2.18.1:
[screenshot: docker system df -v output on 2.18.1]

Since I currently need the dev server to work, I can do the upgrade again later and see… I will let you know!

Thanks!


Thank you. Yes, it’s the high disk usage that we’re interested in. Hopefully it will show which volume is filling up.

Iain

This issue may also be caused by the application writing to disk within the running container, for example to write log messages to a file.

When running docker system df -v, can you also provide the output of the Containers space usage section?

If you notice a container with a high volume size, you can view what files have been added or amended in each container as follows:

docker container diff ${container_name_or_id}
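
To pick a candidate first, the writable-layer size per container can be checked, and the diff output can be filtered to added (A) and changed (C) files, roughly like this:

docker ps --size
docker container diff ${container_name_or_id} | grep -E '^(A|C)'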

Hope this helps,

Iain

Here is the output, Iain:

Volumes as of now … nothing changed here

VOLUME NAME                  LINKS     SIZE
mysql_my-db                  1         219.9MB
speckle_pgadmin-data         1         1.633GB
speckle_redis_insight-data   1         1.73MB

Overview of Docker sizes

and this is the overview of the containers:

CONTAINER ID   IMAGE                                      COMMAND                  CREATED        STATUS        PORTS                                              NAMES                          SIZE
315ce14dba1d   speckle/speckle-preview-service:2          "tini -- node bin/www"   24 hours ago   Up 24 hours                                                      speckle_preview-service_1      -1B (virtual 891MB)
dece2b07cf56   speckle/speckle-server:2.18.2              "node bin/www"           24 hours ago   Up 24 hours                                                      speckle_speckle-server_1       -1B (virtual 351MB)
cd2575871c39   redislabs/redisinsight:latest              "./docker-entry.sh n…"   24 hours ago   Up 24 hours   5000/tcp, 0.0.0.0:8001->8001/tcp                   speckle_redis_insight_1        -1B (virtual 251MB)
ed639e5fe581   minio/minio                                "/usr/bin/docker-ent…"   24 hours ago   Up 24 hours   127.0.0.1:9000->9000/tcp, 0.0.0.0:9001->9001/tcp   speckle_minio_1                -1B (virtual 152MB)
635d3cecd76b   redis:6.0-alpine                           "docker-entrypoint.s…"   24 hours ago   Up 24 hours   127.0.0.1:6379->6379/tcp                           speckle_redis_1                -1B (virtual 29.5MB)
0380a0362a0b   speckle/speckle-webhook-service:2          "tini -- /nodejs/bin…"   24 hours ago   Up 24 hours                                                      speckle_webhook-service_1      -1B (virtual 178MB)
38e61e6f1220   speckle/speckle-frontend-2:2.18.2          "/tini -- /nodejs/bi…"   24 hours ago   Up 24 hours   8080/tcp                                           speckle_speckle-frontend-2_1   -1B (virtual 195MB)
8c80218c0302   speckle/speckle-fileimport-service:2       "tini -- /nodejs/bin…"   24 hours ago   Up 24 hours                                                      speckle_fileimport-service_1   -1B (virtual 426MB)
eda979b178e6   postgres:14.5-alpine                       "docker-entrypoint.s…"   24 hours ago   Up 24 hours   0.0.0.0:5432->5432/tcp                             speckle_postgres_1             -1B (virtual 216MB)
df7e36c1c921   dpage/pgadmin4                             "/entrypoint.sh"         24 hours ago   Up 24 hours   443/tcp, 0.0.0.0:16543->80/tcp                     speckle_pgadmin_1              -1B (virtual 471MB)
6c008b2a497c   speckle/speckle-docker-compose-ingress:2   "/docker-entrypoint.…"   24 hours ago   Up 24 hours   80/tcp, 0.0.0.0:81->8080/tcp                       speckle_speckle-ingress_1      -1B (virtual 187MB)
b90670236fd6   mysql:5.7                                  "docker-entrypoint.s…"   6 months ago   Up 2 days     0.0.0.0:3306->3306/tcp, 33060/tcp                  mysql_db_1                     -1B (virtual 581MB)

Nothing here that would explain why the disk is full. Or does something look big enough to be worth running a diff on?

The Linux filesystem looks like this:
“home” in this case contains, among other things, the Docker volumes of the databases, as the docker-compose.yml for the test server is stored there.

108G    /var
14G     /home
2.2G    /usr
265M    /run
111M    /boot
11M     /core
7.2M    /etc
68K     /tmp
68K     /root
24K     /snap
16K     /opt
16K     /lost+found
8.0K    /mnt
4.0K    /srv
4.0K    /media
0       /sys
0       /sbin
0       /proc
0       /libx32
0       /lib64
0       /lib32
0       /lib
0       /dev
0       /bin

→ the log file responsible for the disk usage
101G -rw-r----- 1 root root 101G Feb 18 13:56 38e61e6f1220ec67eefdff5ab22de9941c727478021b53e8673df4401af9a285-json.log
4.0K drwx------ 2 root root 4.0K Feb 17 13:32 checkpoints
4.0K -rw------- 1 root root 3.7K Feb 17 13:32 config.v2.json
4.0K -rw------- 1 root root 1.5K Feb 17 13:32 hostconfig.json
4.0K -rw-r--r-- 1 root root 13 Feb 17 13:32 hostname
4.0K -rw-r--r-- 1 root root 174 Feb 17 13:32 hosts
4.0K drwx--x--- 2 root root 4.0K Feb 17 13:32 mounts
4.0K -rw-r--r-- 1 root root 114 Feb 17 13:32 resolv.conf
4.0K -rw-r--r-- 1 root root 71 Feb 17 13:32 resolv.conf.hash

→ the corresponding container
38e61e6f1220ec67eefdff5ab22de9941c727478021b53e8673df4401af9a285 speckle/speckle-frontend-2:2.18.2 "/tini -- /nodejs/bin/node ./server/index.mjs" 24 hours ago Up 24 hours 8080/tcp speckle_speckle-frontend-2_1
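
For reference, log files like this can be tracked down with something along these lines (it assumes Docker’s default json-file log driver and data root under /var/lib/docker; the container ID is the one from the listing above):

sudo du -ah /var/lib/docker/containers | sort -rh | head
# map the ID from the log file's directory back to a container name:
docker ps --no-trunc | grep 38e61e6f1220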

In case we need to dig further, let us know …


It could also be that we missed some update to the docker-compose file or the environment variables going from 2.18.1 to 2.18.2 … we will double-check that tomorrow.
Steffen only pinned the server containers to 2.18.2 … usually we run the containers with the :2 tag so that we pick up the latest official release by default.
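
A quick way to double-check which image and tag each service is actually running (plain docker ps formatting, nothing Speckle-specific):

docker ps --format 'table {{.Names}}\t{{.Image}}'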

Great work in identifying the root cause! Docker is not properly configured to manage large log files.

To prevent this from occurring, you should configure Docker’s logging driver with a suitable driver and a maximum log size: Configure logging drivers | Docker Docs
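
As a minimal sketch, the json-file driver can be capped globally via /etc/docker/daemon.json (back up any existing file first; the size values below are only examples, and containers have to be recreated before the new limits apply):

sudo tee /etc/docker/daemon.json <<'EOF'
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "100m",
    "max-file": "5"
  }
}
EOF
sudo systemctl restart docker

The same max-size and max-file options can also be set per service under the logging key in docker-compose.yml.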

It is noticeable now because we recently increased the verbosity of the log messages from Frontend 2, to better help with identifying and debugging problems. An additional option is to reduce the verbosity of the logs generated by Frontend 2 by setting the LOG_LEVEL environment variable. You may wish to set it to warn or error instead of the default info, though the downside is that this may make debugging problems more difficult in the future.
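
For example, after adding LOG_LEVEL=warn to the environment of the frontend-2 service in docker-compose.yml and recreating the container, it can be confirmed like this (container name as in the list above):

docker exec speckle_speckle-frontend-2_1 env | grep LOG_LEVEL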

Iain


Thanks for leading us in the right direction, Iain. We have modified it and will see if this works out!

Best,
Alex
