Issues with Uploading Large IFC Files and Encountering Various Errors

Hello, Speckle Community,

I’ve been working with a self-deployed Speckle Server and am facing some challenges with large IFC file uploads and imports using the file import service package within a Docker setup. I hope to find guidance or solutions from the community regarding these issues. Below are the details of the problems I’ve encountered:

The following is the Docker Compose file I’m using:

version: '2.3'
services:
  ####
  # Speckle Server dependencies
  #######
  postgres:
    image: 'postgres:14.5-alpine'
    restart: always
    environment:
      POSTGRES_DB: speckle
      POSTGRES_USER: speckle
      POSTGRES_PASSWORD: speckle
    volumes:
      - postgres-data:/var/lib/postgresql/data/
    healthcheck:
      # the -U user has to match the POSTGRES_USER value
      test: ["CMD-SHELL", "pg_isready -U speckle"]
      interval: 5s
      timeout: 5s
      retries: 30

  redis:
    image: 'redis:7-alpine'
    restart: always
    volumes:
      - redis-data:/data
    ports:
      - '127.0.0.1:6379:6379'
    healthcheck:
      test: [ "CMD", "redis-cli", "--raw", "incr", "ping" ]
      interval: 5s
      timeout: 5s
      retries: 30

  minio:
    image: 'minio/minio'
    command: server /data --console-address ":9001"
    restart: always
    volumes:
      - minio-data:/data
    healthcheck:
      test: ["CMD", "mc", "ready", "local"]
      interval: 5s
      timeout: 5s
      retries: 5

  ####
  # Speckle Server
  #######
  speckle-ingress:
    image: speckle/speckle-docker-compose-ingress:latest 
    restart: always
    ports: []  
    environment:
      FILE_SIZE_LIMIT_MB: '2000'
      NGINX_ENVSUBST_OUTPUT_DIR: '/etc/nginx'
    labels:
      - "traefik.enable=true"
        #TODO: replace `example.com` with your domain. This should just be the domain, and do not include the protocol (http/https).
      - "traefik.http.routers.speckle-ingress.rule=Host(`speckle.schullerco.net`)"
      - "traefik.http.routers.speckle-ingress.entrypoints=websecure"
      - "traefik.http.routers.speckle-ingress.tls.certresolver=myresolver"
      - "traefik.http.services.speckle-ingress.loadbalancer.server.port=8080"  

  speckle-frontend-2:
    image: speckle/speckle-frontend-2:latest 
    restart: always
    environment:
      NUXT_PUBLIC_SERVER_NAME: 'local'
      # TODO: Change NUXT_PUBLIC_API_ORIGIN to the URL of the speckle server, as accessed from the network. This is the same value as should be used for the CANONICAL_URL in the server section below.
      NUXT_PUBLIC_API_ORIGIN: 'https://speckle.schullerco.net'
      NUXT_PUBLIC_BACKEND_API_ORIGIN: 'http://speckle-server:3000'

  speckle-server:
    image: speckle/speckle-server:latest 
    restart: always
    healthcheck:
      test: ["CMD", "node", "-e", "try { require('node:http').request({headers: {'Content-Type': 'application/json'}, port:3000, hostname:'127.0.0.1', path:'/graphql?query={serverInfo{version}}', method: 'GET', timeout: 2000 }, (res) => { body = ''; res.on('data', (chunk) => {body += chunk;}); res.on('end', () => {process.exit(res.statusCode != 200 || body.toLowerCase().includes('error'));}); }).end(); } catch { process.exit(1); }"]
      interval: 10s
      timeout: 3s
      retries: 30

    depends_on:
      postgres:
        condition: service_healthy
      redis:
        condition: service_healthy
      minio:
        condition: service_healthy
    environment:
      # TODO: Change this to the URL of the speckle server, as accessed from the network
      CANONICAL_URL: 'https://speckle.schullerco.net'
      SPECKLE_AUTOMATE_URL: 'https://speckle.schullerco.net:3030'

      REDIS_URL: 'redis://redis'

      S3_ENDPOINT: 'http://minio:9000'
      S3_ACCESS_KEY: 'minioadmin'
      S3_SECRET_KEY: 'minioadmin'
      S3_BUCKET: 'speckle-server'
      S3_CREATE_BUCKET: 'true'

      FILE_SIZE_LIMIT_MB: 2000

      # TODO: Change this to a unique secret for this server
      SESSION_SECRET: 'MyS3cureSess!onKey12345'

      STRATEGY_LOCAL: 'true'
      DEBUG: 'speckle:*'

      POSTGRES_URL: 'postgres'
      POSTGRES_USER: 'speckle'
      POSTGRES_PASSWORD: 'speckle'
      POSTGRES_DB: 'speckle'
      ENABLE_MP: 'false'

      USE_FRONTEND_2: 'true'
      # TODO: Change this to the URL of the speckle server, as accessed from the network
      FRONTEND_ORIGIN: 'https://speckle.schullerco.net'


  reverse-proxy:
    image: traefik:v2.10
    command:
      - "--providers.docker=true"
      - "--providers.docker.exposedbydefault=false"
      - "--entrypoints.websecure.address=:443"
      - "--certificatesresolvers.myresolver.acme.tlschallenge=true"
      # To use Let's Encrypt staging server instead of production, uncomment the following line
      #- "--certificatesresolvers.myresolver.acme.caserver=https://acme-staging-v02.api.letsencrypt.org/directory"
      # TODO: replace `{your@example.com}` with your actual email
      - "--certificatesresolvers.myresolver.acme.email=vaddivamsi524@gmail.com"
      - "--certificatesresolvers.myresolver.acme.storage=/letsencrypt/acme.json"
      # To enable the Traefik web UI (enabled by --api.insecure=true); this is not recommended as it will expose the Traefik dashboard to the internet
      #- "--api.insecure=true"

    ports:
      # The HTTPS port (required for Traefik to listen to HTTPS requests)
      - "443:443"
      # The Traefik Web UI port if enabled by --api.insecure=true
      - "8080:8080"
    volumes:
      - "./letsencrypt:/letsencrypt"
      # So that Traefik can listen to the Docker events
      - "/var/run/docker.sock:/var/run/docker.sock:ro"    

  preview-service:
    image: speckle/speckle-preview-service:latest 
    restart: always
    depends_on:
      speckle-server:
        condition: service_healthy
    mem_limit: '1000m'
    memswap_limit: '1000m'
    environment:
      DEBUG: 'preview-service:*'
      PG_CONNECTION_STRING: 'postgres://speckle:speckle@postgres/speckle'

  webhook-service:
    image: speckle/speckle-webhook-service:latest 
    restart: always
    depends_on:
      speckle-server:
        condition: service_healthy
    environment:
      DEBUG: 'webhook-service:*'
      PG_CONNECTION_STRING: 'postgres://speckle:speckle@postgres/speckle'
      WAIT_HOSTS: postgres:5432

  fileimport-service:
    image: speckle/speckle-fileimport-service:latest 
    restart: always
    depends_on:
      speckle-server:
        condition: service_healthy
    environment:
      
      DEBUG: 'fileimport-service:*'
      PG_CONNECTION_STRING: 'postgres://speckle:speckle@postgres/speckle'
      WAIT_HOSTS: postgres:5432

      S3_ENDPOINT: 'http://minio:9000'
      S3_ACCESS_KEY: 'minioadmin'
      S3_SECRET_KEY: 'minioadmin'
      S3_BUCKET: 'speckle-server'

      FILE_IMPORT_TIME_LIMIT_MIN: 10

      SPECKLE_SERVER_URL: 'http://speckle-server:3000'
    

networks:
  default:
    name: speckle-server

volumes:
  postgres-data:
  redis-data:
  minio-data:

1. Timeout Error for Large IFC File Uploads Exceeding 5 Minutes

I have a scenario where uploading a large IFC file takes more than 5 minutes (over a slow internet connection), which results in a “set time limit” error, despite having configured an environment variable in my Docker Compose file to set a 10-minute time limit for the file import service.

Despite this configuration, uploads taking longer than 5 minutes fail with a time limit error. I’m looking for insights on why this might be happening and how I can ensure that the set time limit is respected by the service.

2. Errors During the Import Process of Two Different ~1GB IFC Files

In addition to the timeout issue, I’ve encountered two separate errors while attempting to import two different IFC files, each close to 1GB in size, using a high-speed internet connection. The files are uploaded in a few minutes, but the import process fails for each with different errors:

  • Error for the First IFC File: “Parser exited with code null”
  • Error for the Second IFC File: “RangeError: Set Maximum size exceeded”

These issues suggest that the import process struggles with aspects beyond just upload time, possibly related to the files’ content or size. I’ve included the Docker Compose file I’m using above for reference.

Given these challenges, I’m reaching out for any advice, workarounds, or solutions that could help address these problems. Additionally, if there are any improvements I could make to my current setup or if there’s anything I might have overlooked, I’d greatly appreciate your input.

Thank you in advance for your help and support.
Vamsi


Do these same files also fail at app.speckle.systems?

Hi @jonathon,

Thank you very much for your response. Unfortunately, I did not try to import the IFC file at app.speckle.systems, as the file size there is limited to 100 MB, whereas the file I was trying to import is about 1 GB. In my Speckle deployment I have set the maximum file size limit to 2000 MB so that I can test with larger IFC files. Just to mention, the largest IFC file I could import without any problem was close to 600 MB.

Hi Vamsi,

If you’re interested in solutions to decimate the IFC file size, reach out to me.
I’ve been working on this a lot. There are a number of aspects that can reduce a 1 GB file down by a factor of 10 (or more). It depends on where the IFC file is authored (i.e. the contents) and what you actually need when utilizing the model (i.e. is it for reference visualization only?).

I get that 1 GB IFC files are found pretty frequently at the moment, but no data set should be that large.
It would be preferable not to have to decimate the IFC file in the first place, but that would require more configuration of the export process and likely improvements by the software vendor.

I’m always hopeful that one day we will get to a scenario where BIM collaboration is a pull transaction (i.e. the recipient has read permissions on the authoring model and can configure what they want to extract) rather than the current push transaction (i.e. the model author publishes a generic model that probably contains all sorts of aspects that aren’t desired or needed). This can be emulated in the interim by decimating the larger IFC file with the process I’m suggesting.

Cheers,

Jon


There are two different processes occurring where a timeout might happen.

The first process is the upload of your file to Speckle. The Speckle server proxies that upload to the S3-compatible blob storage (which is MinIO in your Docker Compose file). Each of the components in this chain has a timeout threshold, and some of them are configurable:

your browser → traefik → speckle-ingress (nginx) → speckle-server → minio

It seems that this first process is where the 5-minute timeout error is occurring.
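
For example, if the cutoff turns out to be Traefik’s entrypoint, its responding timeouts can be raised explicitly in the reverse-proxy command section. This is only a sketch with illustrative values; I haven’t confirmed that Traefik is the component dropping your upload, and a similar adjustment may instead be needed for the nginx-based speckle-ingress or for speckle-server.

  reverse-proxy:
    command:
      # ...existing flags kept as-is...
      - "--entrypoints.websecure.address=:443"
      # Assumption: allow long-running uploads on the HTTPS entrypoint (30m is an illustrative value)
      - "--entrypoints.websecure.transport.respondingtimeouts.readtimeout=30m"
      - "--entrypoints.websecure.transport.respondingtimeouts.writetimeout=30m"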

The second process is where the file import service then starts processing the file and generating the Speckle model from it. This is the process that can be configured directly with FILE_IMPORT_TIME_LIMIT_MIN. It is this process that is responsible for the errors that you are experiencing in the second part of your question.
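
As a minimal sketch, raising that processing budget just means increasing the value you already set on the fileimport-service (the number below is illustrative; a higher limit will not by itself fix a parser that crashes or runs out of memory):

  fileimport-service:
    environment:
      # allow up to 30 minutes of processing per file (illustrative value)
      FILE_IMPORT_TIME_LIMIT_MIN: 30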

I would recommend looking at the log messages for all the components in question; the logs for traefik, speckle-ingress, and speckle-server may indicate where the timeout is occurring, and the log messages for fileimport-service may provide more clues about the error messages.
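
Assuming you are using the Docker Compose v2 CLI, something like the following will follow the relevant logs while you retry an upload and an import:

  docker compose logs -f --tail=200 reverse-proxy speckle-ingress speckle-server
  docker compose logs -f --tail=200 fileimport-service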

Iain

And a very warm welcome to the community @vamsi_vaddi! Sorry that your first experience with Speckle hasn’t been as smooth as we’d like.
If you wish, please do introduce yourself to the community over at our introductions thread; we’d love to know more about what you are using Speckle for.

Iain