Performance issues with large models

Hello everyone,

We are testing whether Power BI combined with the Speckle viewer is suitable for our projects. The models we use are very large, above 450 MB. Since we are testing without a license, we are limited to 100 MB uploads, so we split the data into multiple IFC models below the 100 MB threshold and combine the resulting queries in Power BI to work with the full model. The performance of the viewer when filtering parts of the model is very poor and not workable.
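For context, combining the split sub-100 MB model queries is conceptually an append operation (Table.Combine in Power Query). A minimal sketch in pandas terms, which Power BI can also run as a Python script step; the table layout and column names here are illustrative assumptions, not the Speckle connector's actual schema:

```python
import pandas as pd

# Hypothetical per-model element tables, one per sub-100 MB Speckle query.
model_a = pd.DataFrame({"objectId": ["a1", "a2"], "category": ["Wall", "Door"]})
model_b = pd.DataFrame({"objectId": ["b1"], "category": ["Wall"]})

# Appending the queries: the pandas analogue of Power Query's Table.Combine.
combined = pd.concat([model_a, model_b], ignore_index=True)
print(combined)  # one row per object across all sub-models
```

Note that appending the queries only merges the tabular data; each sub-model is still loaded and rendered separately by the viewer visual.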

Does anyone have experience with large models and how to optimize the performance?
Is the performance better when running in licensed mode?
Will the performance be better when setting up a dedicated server?


Hey @Gerrit ,

Where do you export those IFCs from, Revit or Archicad? If so, we have connectors directly for those applications that you can use, and our connectors don’t have those limitations.

We recently had a Community Standup covering this. Currently, you can use Grasshopper to lighten the models so they contain only the geometry and the parameters you are interested in for Power BI. Take a look:
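The "lightening" step amounts to dropping every property except the geometry and a whitelist of parameters before sending. A minimal sketch of that filter, assuming a simple dict-based element structure (the field and parameter names are illustrative, not the actual Speckle object model):

```python
# Parameters assumed to matter in Power BI; everything else is dropped.
KEEP_PARAMS = {"FireRating", "Level"}

def lighten(element: dict) -> dict:
    """Return a copy of the element keeping only geometry + whitelisted parameters."""
    slim = {"id": element["id"], "geometry": element["geometry"]}
    slim["parameters"] = {
        k: v for k, v in element.get("parameters", {}).items() if k in KEEP_PARAMS
    }
    return slim

element = {
    "id": "w1",
    "geometry": "<mesh>",
    "parameters": {"FireRating": "60", "Comments": "long free text", "Level": "L02"},
}
print(lighten(element))
```

In Grasshopper the same idea is expressed with the Speckle components rather than a script, but the effect is identical: smaller payloads mean faster loading and filtering in the viewer visual.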


Hi @gokermu,

Thank you for your response. Since I’m collaborating with Gerrit on the matter mentioned above, I’m replying to your message.

We are currently exporting the IFC files from AutoCAD Architecture. I’ve attempted to use the AutoCAD connector, but unfortunately, the models sent from AutoCAD to Speckle are not loading in Speckle for some reason. Only smaller models seem to load successfully. Additionally, when the models do load, navigating through them doesn’t function correctly.

While using Grasshopper seems intriguing, it would add an additional step to our already lengthy workflow. We’re in the process of launching a server this week to store the models on. Would utilizing a dedicated server benefit the loading times of the Speckle viewer in Power BI?

Looking forward to your response!

If you could share an example of a stream that doesn’t load, we can check. And to confirm, are you using or

I will send you a DM with the link to the stream. I am using


We have looked over the models. The issues you’re experiencing are caused by having several thousand individual text objects. Batching for ‘native’ text has been a limitation from the beginning, as we have not implemented it yet, but it’s something we can look at and overcome in the future.

In the meantime, if you do not need to share the text objects, excluding them should improve the performance of everything else considerably.


Thank you for looking into this. The text objects you mentioned are the attribute tags for all the block references in the model. When I export without the attribute tags, the model is indeed a lot smoother, but then I lose all the specifications/data of the model.