Hi @BrettB! Welcome to our community
Interesting! I must say I haven’t been able to reproduce this issue myself, but I have fixed a couple of bugs in the update behaviour that did prevent it from succeeding.
Sadly, I’m not sure these fixes would address the issue you describe. I’ll have to do some more digging, but if you could share a report file where this happens, it may help pinpoint it faster.
I’ll keep you posted when we make a new release (soon!) so you can test that version too.
Excluding meshes
As for the `speckle_type` trick to exclude meshes, you are correct that this results in a speed increase; this is the “expected behaviour” in a sense (but we’re still planning on improving this).
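For reference, here’s a minimal Power Query (M) sketch of that trick, assuming your expanded object table is a query called `Commit Objects` and exposes a `speckle_type` column (both names are placeholders for whatever your own setup produces):

```
let
    // Placeholder name: point this at your own commit objects / Get By URL query
    Source = #"Commit Objects",
    // Keep every row except the raw mesh objects
    NoMeshes = Table.SelectRows(
        Source,
        each not Text.Contains([speckle_type], "Objects.Geometry.Mesh")
    )
in
    NoMeshes
```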
The way the Data Connector works, it will spit out a flat list of the objects contained in your commit. This means that for every “displayable object” (such as a wall, a floor, a window…) you will also get its mesh representation in the same list (usually stored in its `displayValue` property).
When the viewer loads that flat list, it will encounter the `Wall` object and load its `displayValue`, but it will also encounter the wall’s mesh as its own entry, which results in it being loaded twice.
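To illustrate with a hand-written stand-in (not real connector output), the flattened table ends up holding both the wall and the very mesh it points to as separate rows:

```
let
    // Hypothetical rows mimicking the flattened commit: the Wall references
    // mesh "def456" via displayValue, and that same mesh also shows up as
    // its own row, so the viewer draws the geometry twice.
    FlatObjects = Table.FromRecords({
        [id = "abc123", speckle_type = "Objects.BuiltElements.Wall", displayValue = "def456"],
        [id = "def456", speckle_type = "Objects.Geometry.Mesh",      displayValue = null]
    })
in
    FlatObjects
```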
Do note that if you exclude all meshes, commits from Rhino and Grasshopper (where meshes are often the actual objects rather than just display values) will appear to be missing objects as a result.
In general, it’s best to fix this in the Data Connector query and just pick whatever objects you want, or exclude whichever objects you don’t. This way you’ll also be able to benefit from some performance improvements we’ve planned on the Data Connector side.
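As a rough sketch of what “just pick whatever you want” can look like on the query side (again assuming a `speckle_type` column; the type names below are only examples):

```
let
    Source = #"Commit Objects",  // placeholder for your Data Connector query
    // Illustrative list of the categories you actually care about
    WantedTypes = {"Objects.BuiltElements.Wall", "Objects.BuiltElements.Floor"},
    // Keep a row if its speckle_type matches any of the wanted categories
    Picked = Table.SelectRows(
        Source,
        (row) => List.MatchesAny(WantedTypes, (t) => Text.Contains(row[speckle_type], t))
    )
in
    Picked
```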
Another suggested strategy would be to use the new `Structured` way of getting data from our Data Connector, which will allow you to craft your own queries, resulting in much faster loading times and better control over the final objects in the table. The only downside is that generating the final table (the one with the `Stream URL` and `Object ID` columns that the original `Get By Url` produces) is up to whoever crafts the query.
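Whatever shape your structured query returns, that last step is usually just reshaping it into those two columns; here’s a hedged sketch where `StructuredData` and the source column names `url` / `id` are placeholders for your own query’s output:

```
let
    Source = #"StructuredData",  // placeholder: your own structured query
    // Assumed source column names; rename them to the columns the viewer visual expects
    Renamed = Table.RenameColumns(Source, {{"url", "Stream URL"}, {"id", "Object ID"}}),
    Final = Table.SelectColumns(Renamed, {"Stream URL", "Object ID"})
in
    Final
```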
Hope this helps!