Integration Idea Sanity Check

Hi, I may finally have a reason to work with Speckle and am stoked and also upset I didn’t make the connection (pun) sooner. I’ve done a first read through the developer docs and want to sanity check my understanding of what’s possible with you all:

I have a web client that composes .json representations of Grasshopper graphs. It sends them to a node.js API, and the API schedules them for parsing/execution on a Rhino Compute server. These json files are already large, and the results from Rhino quickly get larger. The client has trouble loading (or even receiving) these giant json files and adding them to a three.js scene. Salt in the wound is that there are often very few actual differences from what it already has loaded.

I realized “streaming geometry to a 3D model, specifically Rhino geometry” is something Speckle does incredibly well and took a look at the Grasshopper connectors. Forgive the sloppy understanding, but it looked like something like this could be possible:

At a glance, it looks like I can push Rhino geometry to a model stream from the C# solver service and subscribe to those changes from the client. This is great because of all the Speckle perks that come with it, but also because it solves my "model v big" problem.

My setup is a little different from what the docs seem to be written for (I don't have an open Grasshopper script, for example), so I kept reading. And this is where things started to sound too good to be true, and why I'm writing this post.

I have a few specific constraints and conditions in my setup:

  • The script.json object uses an (annoyingly) bespoke object structure
  • The .gh created from this .json must preserve ids for the components (overwriting the defaults Grasshopper assigns)
  • The result from Grasshopper can't just be a pile of geometry; I have to preserve:
    • Which geometry belongs to which component and parameter
    • Where in the Grasshopper data tree structure the geometry lives
    • Extra computed information beyond the RhinoCommon data, like bounding boxes, associated and stored as json
  • There is no persistently-running Grasshopper script; compute is scaled up/down as needed
  • The client model needs to do more than view the results; it needs to be able to associate element ids with their geometry
    • The model must reflect selection status on the client (i.e. highlight geometry if a component is selected)
    • The model must respond immediately to visibility changes if a component is hidden

So it felt like I couldn't use the out-of-the-box connectors. And then I read that connectors, kits, and transports can be created at will and interact with Speckle just fine. If I understand this correctly, Speckle doesn't care whether the json I'm sending around is a Grasshopper script or Rhino geometry, and I may be able to replace my entire stack with a Speckle-based one.

The order of events would be something like:

  • User visits site, opens a script, creates a new stream based on script.json data that exists somewhere
  • User changes the script and the client pushes custom object changes to the Speckle server
  • C# compute service (or some JS gateway service), via subscription, identifies that the graph has changed (a rough sketch of what I imagine follows this list)
  • New solution is scheduled and completed
  • C# client pushes the changed custom Rhino-like object to the user's current stream
  • Client viewer detects change and is able to intelligently stream in new geometry/remove old geometry
  • JavaScript SDK methods like v.applyFilter or v.colorBy allow me to make client-side modifications based on selection/visibility
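
From my first read of the .NET SDK docs, I imagine the subscription piece on the compute side looking roughly like this. This is guesswork on my part: Client, SubscribeCommitCreated, and OnCommitCreated appear to be real Speckle.Core.Api members, while ScheduleSolve is just my placeholder for the Rhino Compute hand-off.

using Speckle.Core.Api;
using Speckle.Core.Credentials;

var account = AccountManager.GetDefaultAccount();
var client = new Client(account);
var streamId = "the-users-stream-id"; // placeholder

// Listen for new commits on the stream that holds the graph.
client.SubscribeCommitCreated(streamId);
client.OnCommitCreated += (sender, commitInfo) => {
  // A new graph version landed: queue a solve for its root object.
  // (Field names on commitInfo are from memory.)
  ScheduleSolve(streamId, commitInfo.objectId);
};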

I do get the sense that I'm missing a few important concepts. But if even a fraction of this is accurate, I'm beyond floored and excited. More than happy to contribute any extra bits I might need that would be useful to others.

The main questions are:

  • How accurate/possible is this?
  • Where should I start, since I’m 100% new to Speckle?

I have follow-up questions about authentication and how diffing might work, and will continue reading/poking around. But my first understanding really does sound too good to be true and I wanted to check in first. Greatly appreciate any guidance. :pray:

Hey @chuck! For the most part, you're spot on. The model vvv big problem should go away via Speckle. It will be replaced with a user internet connection vvv slow problem though (a bit of a warning).

I'll ignore the script.json (big) problem for now, as it seems like it's not the biggest pain point yet. If it's in the range of <1mb, it's probably not worth optimising right now. If it gets bigger than that, making it 'stream-able' via the canonical Speckle way of doing things might be worth it though.

This can totally be done. It sounds like we'd need to write something that extracts and converts data from the Grasshopper file in the way nodepen expects it. I don't think we need a whole new kit; we can probably dynamically attach any extra missing stuff. Off the top of my head, I would structure the data as such:


// A minimal sketch using Base's dynamic props; the component accessors
// (Name, Id, Geometry) are placeholders for whatever nodepen exposes.
using Speckle.Core.Models;

var definitionResults = new Base();

foreach (var component in definition) {
  var componentData = new Base();
  // The "@" prefix makes a prop detachable; the geometry can be a new Base
  // object of its own, separating things by output port, then tree structure, etc.
  componentData["@geometry"] = component.Geometry;
  componentData["name"] = component.Name;
  componentData["id"] = component.Id;
  // Key each component by something unique, e.g. name + id.
  definitionResults[$"@{component.Name}-{component.Id}"] = componentData;
}

// TODO: send over to Speckle & create a new commit (see the sketch below)

During the sending part, we do a lot of diffing and caching, so in theory you only send the new stuff out. This is where the "deltas" happen. It's nice because, as a developer, you don't need to care :slight_smile:
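
For completeness, here's roughly what that TODO looks like (a sketch from memory; double-check the exact Operations.Send and CommitCreate signatures in the .NET SDK):

using System.Collections.Generic;
using Speckle.Core.Api;
using Speckle.Core.Credentials;
using Speckle.Core.Transports;

var account = AccountManager.GetDefaultAccount();
var client = new Client(account);
var transport = new ServerTransport(account, streamId);

// Send serialises the Base tree, diffs it against what the server already
// has, and uploads only the missing objects; it returns the root object's id.
var objectId = await Operations.Send(definitionResults, new List<ITransport> { transport });

// Point a new commit at that root object so the viewer & subscribers pick it up.
var commitId = await client.CommitCreate(new CommitCreateInput {
  streamId = streamId,
  objectId = objectId,
  branchName = "main",
  message = "new solution results"
});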

If objects have the right props associated with them, and/or the data is structured in the way nodepen needs it, it's a piece of cake now, as you've noticed too. The viewer's latest filter API works really well for this!

We do a lot of diffing and caching optimisations when we load stuff, so yes. We will probably need to optimise a bit and add a load-and-replace method on our end (we currently just add to the scene).

The other parts, subscriptions etc., are something we'd happily work through, but you definitely get your live updates this way!
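
The receiving end of a subscription is the mirror image, something like (again a sketch):

// On commit created, pull the referenced object. Receive, like Send, only
// fetches what the local cache doesn't already have.
var transport = new ServerTransport(account, streamId);
var newResults = await Operations.Receive(commitInfo.objectId, transport);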

Now finally, let's try to unpack where to get started.

For such an adventure, I'd be happy to help (and others from the team too, I'm sure!!!). If I'm not mistaken, your project is open source too, so it would be a matter of figuring out how to get us set up locally so we can poke around.

Where I would really start is figuring out how to store the solution.json in Speckle, loading it in the viewer, and running some benchmarks around that part.
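
To make that concrete: storing an arbitrary json document as Speckle objects is mostly a matter of lifting it into Base's dynamic properties. A hand-wavy sketch (real code would need to sanitise property names):

using System.IO;
using System.Linq;
using Newtonsoft.Json.Linq;
using Speckle.Core.Models;

// Recursively lift a json document into nested Base objects, so it benefits
// from Speckle's decomposition and diffing.
static object ToSpeckle(JToken token) => token switch {
  JObject obj => ToBase(obj),
  JArray arr => arr.Select(ToSpeckle).ToList(),
  JValue val => val.Value, // primitives pass through
  _ => token.ToString()
};

static Base ToBase(JObject obj) {
  var b = new Base();
  foreach (var prop in obj.Properties())
    b[prop.Name] = ToSpeckle(prop.Value);
  return b;
}

var solution = ToBase(JObject.Parse(File.ReadAllText("solution.json")));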

I'd also think a bit about the overall architecture of the app. Some questions here:

  • do you want the Speckle server to be a "background" server, i.e. not exposed to the internet, or just piggyback on an existing one? (the latter is much easier, the former is gonna cause some pain)
  • auth integration is easier than it looks, but it actually depends on the answer to the above

Ah, and another thing that comes to mind as a PS: webhooks can work as well as subscriptions; it depends a lot on the integration architecture. Anyway, getting a bit too far ahead of myself here :sweat_smile: (suffice to say, both @AlanRynne and I are excited).

Me too. This is exactly the sort of idea I've been sketching out as well.

Fantastic, fantastic, fantastic, you are all absolute heroes! Thank you @dimitrie for taking the time to spell it out.

I appreciate the offer of help. The project is open source, but the repo is also an absolute warzone. My next steps involve ripping the useful bits out into a library, though, and I'll take a first swing at Speckle during that process. Even if it's not totally working, it'll give you a more specific idea of what I'm after (and of what I'm actually understanding about Speckle). Will reach out again when I get there.

Excited to finally use this tech. Hopefully I can provide a valuable case study and some meaningful contributions back to the project.
