Webhooks and large IFC files

Hello there. Love the work you guys are doing! I am currently working on interacting with IFC files on Speckle through secondary tools and have a couple of questions regarding this.

  1. For the new FE2 Speckle viewer, is there webhook functionality triggered by discussion posts? That would let one connect an AI to discuss the project as a whole, and possibly program a quantity extraction.

  2. In the long run we may be interested in uploading larger models, perhaps IFC files larger than 100 MB. I was wondering what the reason for the current 100 MB cap is, and whether it is possible to surpass it in any way?

Not sure if this is the best way of asking, but thanks anyways :))

  1. We’re getting there with the UI. FE2 is not a complete offering yet. However, the interaction you describe is possible in a roundabout way. FE2 is at latest.speckle.systems; the same server with FE1 is at latest.speckle.dev. All previous webhook interactions can be applied there, and FE2 actions should trigger them. AI-Powered Conversations with Speckle

:zombie:WORD OF WARNING: The “latest” :speckle: server in both its FE forms is subject to frequent testing, new developments, breaking changes, downtime, complete rebuild, tornado, zombie apocalypse and plague. It is advisable not to host production or live data there.

  2. The limit is there on the public server to protect the free-access infrastructure for all users. Your options are to self-host a Speckle server on a cloud instance or a physical machine; there are configuration options at deployment time that affect this limit. Alternatively, we can discuss hosting a server to your specification for you and your organisation. Get Started With Speckle!
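For anyone looking for where that deployment-time option lives: on a self-hosted Docker Compose deployment, the upload cap is typically set via an environment variable on the server container. The sketch below assumes the standard speckle-server compose file and the `FILE_SIZE_LIMIT_MB` variable; check the deployment docs for the server version you are running, as the exact name may differ.

```yaml
# Sketch of a docker-compose override for a self-hosted Speckle server.
# FILE_SIZE_LIMIT_MB is assumed from the speckle-server deployment
# docs; verify it against your server version before relying on it.
services:
  speckle-server:
    environment:
      # Raise the maximum file upload size from the 100 MB default.
      FILE_SIZE_LIMIT_MB: "300"
```

Note that raising the limit also increases memory and disk pressure during IFC parsing, so it is worth sizing the host accordingly.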

Great. Thank you for the quick response! :))

Hi @Isakerikstad! Just to mention that we set up a Speckle server and raised the 100 MB limit to 300 MB with some simple configuration. It worked fine.

We are doing some research on the best strategy to implement IFC + BCF support for our project.

Hi @Isakerikstad - thanks for sharing that 300 MB worked for you. I’d love to foster some more knowledge sharing and community collaboration on this!

Would you be able to share the configuration values you changed?

Is your server typically handling one large file upload at a time, or do you have multiple file uploads simultaneously? Are you running on Docker Compose or Kubernetes? What CPU and memory have you provided in your environment?

Iain