Download Chunking

Hey everyone,

I am currently working with the Speckle API and dealing with some rather large objects. I'm therefore wondering if there is any best practice for chunking a stream pull.

var objects = client.StreamGetObjectsAsync(streamId, "").Result;
This does its job, but once the stream gets rather large, it is prone to fail.
I am thankful for any advice :slight_smile:

Best, Simon

Heya Simon! Should be rather easy with ObjectGetBulkAsync.

var stream = Client.StreamGetAsync( streamId, null ).Result;
var maxObjRequestCount = 42; // this is a magic number :) 

var objIds = stream.Objects.Select( obj => obj._id ).ToArray();

for( int i = 0; i < objIds.Length; i += maxObjRequestCount )
{
  var subPayload = objIds.Skip( i ).Take( maxObjRequestCount ).ToArray(); // assemble the payload
  var res = Client.ObjectGetBulkAsync( subPayload, "omit=displayValue" ).Result; // here are your objects!
  // log progress somewhere
  // this.Message = SNJ.JsonConvert.SerializeObject( String.Format( "{0}/{1}", i, objIds.Length ) );
}
// ... 

See how it’s applied here in the gh receiver code. It’s a bit more complex as it also leverages the local cache, but the bucketing approach is sound :slight_smile:
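
If you end up doing this in a few places, you can fold the loop into a small helper. A minimal sketch - the method name GetObjectsChunkedAsync is made up, and it assumes the bulk response carries its objects in a Resources list, so adapt it to your client setup:

// assumes: using System.Collections.Generic; using System.Linq; using System.Threading.Tasks;
public async Task<List<SpeckleObject>> GetObjectsChunkedAsync( string[] objIds, int chunkSize = 42 )
{
  var allObjects = new List<SpeckleObject>();
  for( int i = 0; i < objIds.Length; i += chunkSize )
  {
    var subPayload = objIds.Skip( i ).Take( chunkSize ).ToArray(); // next bucket of ids
    // await instead of .Result avoids blocking and surfaces exceptions per batch
    var res = await Client.ObjectGetBulkAsync( subPayload, "omit=displayValue" );
    allObjects.AddRange( res.Resources ); // assumption: the objects come back in res.Resources
  }
  return allObjects;
}

Tune chunkSize to how heavy your objects are - 42 is, as above, just a magic number :slight_smile: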

Great, this worked! My streams are now successfully chopped to pieces, and luckily reassembled upon arrival :slight_smile:

Happy it worked out!


For reference & others reading this: the chop & chunk (batching) approach is still more efficient for the kinds of workloads Speckle's doing. The server itself is HTTP/2 ready (YMMV - it's in your nginx config!), so in theory many small individual requests can be multiplexed over a single connection, etc. - the viewer/admin might be able to leverage this in the future to simplify api calls.
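
(If you're self-hosting and curious: it's usually just a flag on the listen directive in the server block of your nginx config - a hedged example, your actual block will differ:)

server {
  listen 443 ssl http2; # the http2 flag here is what enables it
  server_name speckle.example.com; # placeholder
  # ... the rest of your proxy setup stays the same
}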

Nevertheless, this article is a very thorough reference: Performance testing HTTP/1.1 vs HTTP/2 vs HTTP/2 + Server Push for REST APIs. It shows that, at the end of the day, classic batching is still better (even if you use h2's server push - if I remember correctly; I read it a few weeks ago).