I’ve noticed a bug in the viewer: shapes & meshes are scrambled when viewing models with large root offsets. It might only affect IFC models (I’ve only tested with IFC), but the root cause seems to be in the transform pipeline of the final model’s scene graph. My guess is that the offset is baked directly into the mesh coordinates, which causes numerical issues. In earlier Speckle versions (up to 2.20.6) there were no issues like this. Has something changed?
Currently, our RTE (relative-to-eye) implementation, which is responsible for correctly rendering objects very far away from the origin, does not work properly on macOS and iOS. We’ve investigated the issue and have something of a solution which we’ll try to put into production soon.
On all other OSes the issue should not be present; at least I cannot reproduce it with the streams I have available. Could you please share your stream so we can check?
The IFC model “ifc (shifted origin) - IFC Viewer Offset Bug | Speckle” is slightly modified with a large offset. The length unit is set to “mm”; the offset is 32551900.000m & 5584000.000m. Both offset coordinates are taken from a real IFC project of an infrastructure model (which I couldn’t share). This kind of model often uses large offset coordinates (UTM CRS), since clients require them.
We’ve managed to reproduce the issue on some Intel integrated GPUs. We’ll see if we can come up with a workaround, but since this seems to be a driver quirk, there is not much we can do.
Until then, to avoid this issue, I suggest switching to the dedicated GPU on your system, if you have one available. Even if you do have a dedicated GPU, browsers will not use it by default! See this post for more details and how to change which GPU the browser uses.
The workaround fixed the problem. It’s a strange rendering bug… but I guess a lot of users have integrated GPUs. Since Windows uses the integrated GPU by default, one has to force the dedicated GPU manually.
Glad to hear you have it running properly now. Indeed, it’s very unfortunate that the Intel drivers have such quirks. We will allocate some time to try to figure out a workaround that does not involve switching GPUs. Thanks for pointing out the issue!
Unfortunately, the solution is far less glorious than one would expect. Typically, when you are faced with driver or hardware quirks like these, all you can do is find a workaround. In this case the workaround was to rewrite a set of RTE-related operations in our shaders, from (pseudocode):
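(The original before/after snippets aren’t quoted here, so the following is only a rough sketch of that kind of regrouping, assuming a typical high/low-split RTE vertex shader; names like `position_high`, `position_low`, `uViewer_high` and `uViewer_low` are placeholders, not the actual Speckle shader code.)

```glsl
// "Before" shape: one fused expression, leaving the driver free to evaluate
// the intermediate subtractions at reduced precision
vec3 rtePosition = (position_high - uViewer_high) + (position_low - uViewer_low);
```

to something along the lines of:

```glsl
// "After" shape: each large-magnitude subtraction gets its own explicitly
// highp temporary, so the intermediate results stay at full 32-bit precision
highp vec3 highDifference = position_high - uViewer_high;
highp vec3 lowDifference  = position_low  - uViewer_low;
vec3 rtePosition = highDifference + lowDifference;
```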
Our RTE implementation requires the GPU to do a number of calculations using full 32-bit floats. That in itself is fine, as any GPU can do it; however, shader compilers typically try to aggressively optimize the floating-point precision used for operations, since GPUs can work at 16-bit or even 8-bit precision, which matters a lot speed-wise. I can only speculate that somewhere in the compiler/driver, doing that initial subtraction pos.xyz - pivot.xyz forces the result into a lower floating-point precision, which breaks the final result. However, this is just speculation, as I cannot know for sure what is really going on.
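To put rough numbers on that speculation (my own illustration, not taken from the post above): the minimum guaranteed precisions in GLSL ES show why a demoted subtraction would scramble geometry at the coordinate magnitudes reported earlier in this thread.

```glsl
// Illustration only; the names mirror the ones mentioned above but are placeholders.
// GLSL ES only guarantees mediump a range of about +/-16384 with ~10 bits of
// relative precision (often implemented as a 16-bit half float), while highp
// is a full 32-bit float. A UTM-scale coordinate such as 32551900.0 cannot
// even be represented at mediump, so a subtraction demoted below highp yields garbage.
uniform highp vec3 pivot;   // large-magnitude pivot, e.g. on the order of 3.2e7
attribute highp vec3 pos;   // vertex position of a similar magnitude

void main() {
    highp vec3 relative = pos.xyz - pivot.xyz; // explicitly keep this step at 32-bit precision
    gl_Position = vec4(relative, 1.0);         // placeholder; a real shader would continue with RTE + projection
}
```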