File uploads - does S3 need to be publicly accessible?

Hello.

I’ve been trying to get a new Speckle server running for a while now, with both Kubernetes and Docker Compose on a VM. Since I am deploying to Azure, I have a choice between s3proxy and minio, and both have been causing various issues.

I had been working with a prerelease of version 2.25.8, but have upgraded to the released version now.

With the released version of 2.25.8, I am able to set both an S3 URL and a public S3 URL. If I don't set the public URL, my browser tries to upload to the internal name from the Docker network (e.g. http://minio:9000). Does this mean that Speckle depends on a publicly accessible S3(-compatible) service? I can't just run it in the cluster and upload files through the frontend?

Ideally the S3-compatible blob storage would be publicly accessible as it provides a performance improvement when uploading files.

However, if the S3-compatible blob storage is not publicly accessible, the frontend requires the environment variable NUXT_PUBLIC_FF_LEGACY_FILE_IMPORTS_ENABLED to be set to true. This will cause the frontend to proxy all file uploads via the Speckle server.

If the S3_PUBLIC_ENDPOINT environment variable is left blank/unset, it defaults to the value from S3_ENDPOINT. So S3_PUBLIC_ENDPOINT does not need to be set.
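For reference, the two setups described above might look like this in a compose file. This is a sketch only: the service names (`speckle-server`, `speckle-frontend-2`) and endpoint URLs are illustrative, so check the official compose file in the repository for the exact names your version expects.

```yaml
# Option A: S3 is reachable from the browser - clients upload directly.
speckle-server:
  environment:
    S3_ENDPOINT: 'http://minio:9000'             # internal docker-network name
    S3_PUBLIC_ENDPOINT: 'https://s3.example.com' # browser-reachable URL
    # If S3_PUBLIC_ENDPOINT is blank/unset, it falls back to S3_ENDPOINT.

# Option B: S3 stays private - proxy uploads via the Speckle server.
speckle-frontend-2:
  environment:
    NUXT_PUBLIC_FF_LEGACY_FILE_IMPORTS_ENABLED: 'true'
```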

Iain


Thank you so much for the pointers. I got it to work with the legacy imports (for now).

I was (perhaps foolishly) updating a docker-compose file that had worked with an older version (based on the docs, I believe). It took quite a bit of adding variables to get everything up and running, and I also found I had to disable the health check for the speckle-server component before everything started up correctly. (I replaced the test with console.log('dummy').)

Before doing this, preview-service and fileimport-service would not start and I would simply get errors in the frontend but nothing in the logs, which is always confusing! But at least I got there in the end.

Thanks for confirming it now works :partying_spockle:

As you’ve discovered, it’s important to keep the docker compose file up to date and in-sync with the application’s requirements.

It sounds as if the health check in your docker compose file still needs to be updated; you can find the latest example in our documentation and another in our repository.

Hope this helps,

Iain


Hi Iain, I think the health check in the example is buggy? At least, running it directly (in the server container) produces a type error, because process.exit wants a number, not a boolean. A sly ? 1 : 0 appears to fix it. Shell output attached.

niklas@speckle-dev-server-newest-version:/opt/speckle$ docker compose exec -it speckle-server /nodejs/bin/node -e "console.log('test')"
test
niklas@speckle-dev-server-newest-version:/opt/speckle$ docker compose exec -it speckle-server /nodejs/bin/node -e "try { require('node:http').request({headers: {'Content-Type': 'application/json'}, port:3000, hostname:'0.0.0.0', path:'/readiness', method: 'GET', timeout: 2000 }, (res) => { body = ''; res.on('data', (chunk) => {body += chunk;}); res.on('end', () => {process.exit(res.statusCode != 200 || body.toLowerCase().includes('error'));}); }).end(); } catch { process.exit(1); }"
node:internal/errors:540
      throw error;
      ^

TypeError [ERR_INVALID_ARG_TYPE]: The "code" argument must be of type number. Received type boolean (false)
    at process.set [as exitCode] (node:internal/bootstrap/node:122:9)
    at process.exit (node:internal/process/per_thread:189:24)
    at IncomingMessage.<anonymous> ([eval]:1:262)
    at IncomingMessage.emit (node:events:530:35)
    at endReadableNT (node:internal/streams/readable:1698:12)
    at process.processTicksAndRejections (node:internal/process/task_queues:90:21) {
  code: 'ERR_INVALID_ARG_TYPE'
}

Node.js v22.17.0

niklas@speckle-dev-server-newest-version:/opt/speckle$ docker compose exec -it speckle-server /nodejs/bin/node -e "try { require('node:http').request({headers: {'Content-Type': 'application/json'}, port:3000, hostname:'0.0.0.0', path:'/readiness', method: 'GET', timeout: 2000 }, (res) => { body = ''; res.on('data', (chunk) => {body += chunk;}); res.on('end', () => {process.exit(res.statusCode != 200 || body.toLowerCase().includes('error') ? 1 : 0);}); }).end(); } catch { process.exit(1); }"
niklas@speckle-dev-server-newest-version:/opt/speckle$ echo $?
0

I.e., my full health check in my docker-compose.yml is:

    healthcheck:
      test: ["CMD", '/nodejs/bin/node', '-e', "try { require('node:http').request({headers: {'Content-Type': 'application/json'}, port:3000, hostname:'127.0.0.1', path:'/readiness', method: 'GET', timeout: 2000 }, (res) => { body = ''; res.on('data', (chunk) => {body += chunk;}); res.on('end', () => {process.exit(res.statusCode != 200 || body.toLowerCase().includes('error') ? 1 : 0);}); }).end(); } catch { process.exit(1); }"]
      interval: 10s
      timeout: 10s
      retries: 3

I can send a PR on Github if you like.

PS: To be clear, this is also a surprise to me, and if I run Node on my workstation, process.exit(true) works as expected. So I don’t know exactly what’s going on.
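To make the fix easier to see outside the one-liner, here is the exit-code logic factored into a small function. This is just a readability sketch of the same check, not the official health check; the function name `toExitCode` is mine. (One hedged guess at the workstation discrepancy: newer Node versions enforce that process.exit receives a number, where older ones silently coerced booleans, so the behavior may depend on the Node version involved.)

```javascript
// Exit-code logic from the health-check one-liner, made explicit.
// process.exit() expects a number: 0 = healthy, non-zero = unhealthy.
// Passing the raw boolean is what triggers ERR_INVALID_ARG_TYPE.
function toExitCode(statusCode, body) {
  const healthy = statusCode === 200 && !body.toLowerCase().includes('error');
  return healthy ? 0 : 1; // the `? 1 : 0` fix, as an explicit number
}
```

In the one-liner, the `res.on('end', ...)` handler then becomes `process.exit(toExitCode(res.statusCode, body))`.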

Thanks for catching this bug! Please do make a Pull Request, contributions are always welcome!

Iain