I’m with the Arup team, working on the v2 connector for GSA. We’ll soon be looking at v2 send speeds for large models (say 80,000-100,000 structural objects), as well as their results, which will be tables of data for each of those elements.
Here’s an early model I sent to the xyz server today (which @mishaelnuh will recognise and chuckle that it is still being used):
It’s a small model, and not many results were chosen to be sent; even so, it took around 11 minutes to send to the xyz server, so on first inspection v2 looks slower than v1.
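To make the v1/v2 comparison concrete as the work progresses, a rough throughput harness can help. A minimal sketch below; the `send_fn` callable is a hypothetical stand-in for whatever routine actually transmits the objects (e.g. a wrapper around a connector or SDK send call), not any specific Speckle API:

```python
import time

def measure_send(send_fn, objects, label="send"):
    """Time a send call and report throughput.

    send_fn: any callable taking the list of objects and transmitting
    them (hypothetical stand-in for the real send routine).
    Returns (elapsed_seconds, objects_per_second).
    """
    start = time.perf_counter()
    send_fn(objects)
    elapsed = time.perf_counter() - start
    rate = len(objects) / elapsed if elapsed > 0 else float("inf")
    print(f"{label}: {len(objects)} objects in {elapsed:.1f}s ({rate:.0f} obj/s)")
    return elapsed, rate

# Example with a dummy send, just to show the shape of the output:
elapsed, rate = measure_send(lambda objs: None, list(range(1000)), "dummy")
```

Logging objects/second rather than wall time alone makes runs on different model sizes comparable.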
Work is ongoing and there are many variables at play, but I’m flagging this as something we might focus on, and look to collaborate on, in the coming weeks.
(I imagine that @Reynold_Chan will be looking to speed up sending of large data sets to a Speckle v2 server too with his ETABS work)
Is anyone else dealing with transmitting large data sets to/from server transports for v2, or planning to?