Objective: My colleague and I are trying to send a Revit model, resulting in a stream of more than 66k objects.
Issue: this causes the viewer to crash (the browser runs out of memory).
Question: are there best-practice methods to handle this? Would submitting different building parts (e.g. structure, envelope, interiors etc.) solve this?
Speckle link: https://speckle.xyz/streams/16fcfa0189/commits/f87f37d9c6
There’s indeed a limit to the number of triangles your browser can display at once. While we’re constantly optimizing our viewer, in the meantime you can exclude any elements that might be heavy in triangles.
These are most often curved geometries or very heavy meshes imported into the project. We’ve seen this before, for instance, in very large MEP models, or in a stadium model that contained thousands of very mesh-heavy CAD seats.
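To illustrate that kind of filtering, here’s a minimal sketch in plain Python (not the Speckle connector’s actual API; the `triangle_count` field and the threshold value are assumptions) of excluding mesh-heavy elements before sending:

```python
# Hypothetical element records: in a real Revit/Speckle workflow these would
# come from the connector; `triangle_count` is an assumed pre-computed field.
MAX_TRIANGLES_PER_ELEMENT = 50_000  # arbitrary budget, tune for your model

def exclude_heavy_elements(elements, limit=MAX_TRIANGLES_PER_ELEMENT):
    """Keep only elements whose mesh stays under the triangle budget."""
    kept = [e for e in elements if e["triangle_count"] <= limit]
    dropped = [e for e in elements if e["triangle_count"] > limit]
    return kept, dropped

elements = [
    {"name": "Wall-01", "triangle_count": 1_200},
    {"name": "CAD Seat (imported)", "triangle_count": 180_000},
    {"name": "Duct run", "triangle_count": 9_500},
]

kept, dropped = exclude_heavy_elements(elements)
# The imported CAD seat is the kind of element worth excluding before sending.
```

In practice you’d compute the triangle count from each element’s mesh data, but the filtering idea is the same.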
You could also think about splitting your model by discipline or parts (as you suggest) and sending the data to different branches in Speckle.
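A minimal sketch of that split (plain Python; the `category` field and the branch-per-discipline naming are assumptions for illustration, not the connector’s actual behavior):

```python
from collections import defaultdict

def split_by_discipline(elements):
    """Group elements into per-branch payloads, e.g. 'structure', 'envelope'."""
    branches = defaultdict(list)
    for e in elements:
        branches[e["category"]].append(e)
    return dict(branches)

elements = [
    {"name": "Column-01", "category": "structure"},
    {"name": "Curtain Wall", "category": "envelope"},
    {"name": "Beam-07", "category": "structure"},
]

payloads = split_by_discipline(elements)
# Each payload would then be sent as its own commit to a branch of the same
# name, so the viewer only ever has to load one discipline at a time.
```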
Federating multiple models will be much easier in coming updates!
The main reason for “out-of-memory” issues in the frontend is that the current frontend uses the viewer’s old synchronous object-loading API. The newer iteration of the frontend, https://latest.speckle.systems/, uses the viewer’s asynchronous object-loading API, which eliminates most “out-of-memory” issues.
I tried the model you mentioned and it loads and runs perfectly fine with asynchronous loading, even on a machine with relatively little available memory.
That said, there will always be hard limits on how much data you can load in the browser (or any application, for that matter), but things would need to get ridiculously large before we hit that limit.
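To illustrate why asynchronous (streamed) loading avoids the memory spike, here’s a toy sketch, not the viewer’s actual API, that consumes objects in small batches instead of materialising the whole object list up front:

```python
def fetch_objects(n):
    """Simulate a server streaming n objects one at a time (a generator)."""
    for i in range(n):
        yield {"id": i, "payload": "..."}

def load_in_batches(stream, batch_size=1000):
    """Consume the stream in fixed-size batches, so peak memory stays
    proportional to batch_size rather than to the total object count."""
    batch, processed = [], 0
    for obj in stream:
        batch.append(obj)
        if len(batch) == batch_size:
            processed += len(batch)
            batch.clear()  # hand the batch off to the renderer, then release it
    processed += len(batch)  # flush the final partial batch
    return processed

total = load_in_batches(fetch_objects(66_000))
```

The synchronous approach is equivalent to `list(fetch_objects(66_000))` before any processing starts, which is exactly the all-at-once allocation that tips a constrained browser tab over the edge.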
@alex thanks for your reply.
Would this be by invitation only in the future as well, or are you planning to extend access?
Just invited you! We’re planning a soft friends-and-family launch soon, but in the meantime, give it a spin and let us know how it goes!
If the problem persists, we’d love to get our hands on the source model & data to do some deeper digging.
PS: we could backport async loading to fe1 if it won’t cause too much hassle (reminder to myself to discuss with alex later)