Grasshopper: Schema Builder (Minor) Change

Hello fellow citizens of this forum! We (@clrkng, @teocomi & me) have had a thunk (sic) re how the v2 schema builder component works in Grasshopper (this is the node behind all the SpeckleBIM components).

More accurately, it’s currently a “schema attacher”: given a base input, it outputs the base geometry with a speckle schema attached to it, which is then picked up by other converters (e.g. Revit) and transformed into what the attached schema describes. We went this way for a couple of reasons:

  • when creating a speckle wall in grasshopper, you actually won’t have all the intelligence of an actual wall (e.g., area, volume, display mesh, etc.) - so we didn’t want to set the wrong expectations.
  • when sending things out from Rhino, the behaviour was destructive in some cases (i.e., a floor from a surface), which would cause confusion when getting that stream back into Rhino (e.g., your “surface” that you sent as a floor becomes a polyline).

This made sense to us at the time; nevertheless, as @daviddekoning rightly pointed out, this also creates confusion and hampers the usability of schema-builder-created elements in workflows we want to support. Internally we went through two options:

A) Rename existing node to “SchemaTagger” and keep the old behaviour + create a new node, called “SchemaBuilder” that actually outputs the created schema element.

We don’t like this one as it would cause tons of confusion.

B) Add a right click toggle option to the existing schema builder node to allow grasshopper wizards to toggle between the two behaviours (output tagged geometry, output actual schema object).

We’re partial towards B, and if we get some thumbs up from you, we’ll make this change in the next sprint; we first wanted to hear from y’all at large, particularly @daviddekoning, as well as the other heavy users @JdB & co!


It makes sense indeed to go for option B! I think it will keep things clearer. People who need to toggle between the behaviours will find it. Maybe it would be good to mention options like these in the Grasshopper help of each component?


Absolutely :sweat_smile: (i know we’re not the best on docs yet…).


I’ve been thinking this over and I’ve got a slightly different approach for consideration! :slight_smile:

An object that gets stored in Speckle can exist in three different forms:

  1. Serialized Objects, the transport representation (i.e. pure JSON) - this is how the object is stored on the SpeckleServer, or in a local or in-memory transport.
  2. Speckle Objects, the programming environment representation: this is a 1-to-1 mapping of the transport representation, but as an object of whatever programming environment you use, e.g. a .NET object, a Python object, a Ruby object (did you know that InfoWorks, a hydraulic modelling package, uses Ruby for scripting? It’s not just SketchUp!!!)
  3. Tagged Native Objects, i.e. Rhino objects with additional properties. Every application that we write connectors for has some mechanism for adding additional data to the object, and for each application, there is some convention established for storing extra, Speckle-related properties. Revit properties, GSA sIDs, Rhino UserData, Microstation EngineeringContext, etc… all serve this purpose.
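To make the three forms concrete, here’s a rough Python sketch (class and property names are illustrative, not the actual Objects kit or Rhino API):

```python
import json

# Form 2: a Speckle Object in the programming environment - a plain class
# mirroring the transport schema.
class Wall:
    def __init__(self, base_line, height):
        self.speckle_type = "Objects.BuiltElements.Wall"
        self.baseLine = base_line
        self.height = height

wall = Wall(base_line=[[0, 0], [5, 0]], height=3.0)

# Form 1: the serialized (transport) representation - pure JSON.
serialized = json.dumps(wall.__dict__)

# Form 3: a tagged native object - host-app geometry plus extra
# Speckle-related data, here mimicked as a dict standing in for a
# Rhino object with UserData attached.
tagged_native = {
    "rhino_geometry": "Line(0,0 -> 5,0)",  # the native object itself
    "user_data": {"speckle_schema": "Wall", "height": 3.0},  # attached data
}
```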

I think a lot of confusion is due to Grasshopper being both a programming environment and part of Rhino, so it’s not always clear if people are expecting Rhino objects with data (Tagged Native Objects) or straight up speckle objects (Objects built from a Schema).

In addition, I got confused by the SchemaBuilder creating Speckle Objects, but in the format of Tagged Native Objects: i.e. an Objects.Geometry object with an Objects.BuiltElement object on the @speckleSchema property. I would expect that anytime a Tagged Native Object is converted to a Speckle Object, it will be turned into form 2. It makes sense for Rhino native objects to have speckle schema data attached, but these should always be translated directly into ‘proper’ SpeckleObjects (e.g. Rhino geometry tagged as Wall → Objects.BuiltElement.Wall), not to a Speckle.Objects.Geometry with a @speckleSchema tag.
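Here’s the distinction I mean, sketched as plain dicts (property names are illustrative):

```python
# What the Schema Builder currently emits: a geometry object carrying the
# schema as a detached tag - form 3 dressed up as a Speckle Object.
tagged_geometry = {
    "speckle_type": "Objects.Geometry.Polyline",
    "value": [0, 0, 0, 5, 0, 0],
    "@speckleSchema": {
        "speckle_type": "Objects.BuiltElements.Wall",
        "height": 3.0,
    },
}

# What a 'proper' Speckle Object would look like: the built element IS the
# object, with its geometry stored in its own properties.
proper_object = {
    "speckle_type": "Objects.BuiltElements.Wall",
    "baseLine": {"speckle_type": "Objects.Geometry.Polyline",
                 "value": [0, 0, 0, 5, 0, 0]},
    "height": 3.0,
}
```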

While I agree that a right-click toggle on the Schema Builder allows everyone to move forward, I am concerned that it will not help the users build a consistent mental model of what is going on.

One way to get clarity is to say:

  1. In Rhino, we have Rhino native objects + additional data (form 3)
  2. In Grasshopper, we have Speckle Objects (form 2)

This means that the Grasshopper component only creates Speckle Objects (no toggle for tagging) and the Rhino commands (CreateWall, etc…) only ever add data to Rhino Objects, and we can translate between the two forms with the ToNative and ToSpeckle components.

This is easy to explain, and easy to keep track of, and the Revit connector / kit doesn’t need to know anything about how we keep track of BIM data in Rhino.

Is there ever a need to create Speckle Objects that look like Tagged Native Objects?

Regarding the point about destructive sending: I’m not sure that’s a huge concern in real workflows. In most cases, you’ll be re-receiving the data after it gets processed by another program, so even if it gets sent out ‘losslessly’ by Rhino, once Revit processes it, and you send it back, the floor surface will have become a polyline anyhow.

I’m a little surprised at how long this post turned out! I hope it’s clear, and would be happy to jump on a call to discuss further. My underlying concern is to keep the mental model needed to use Speckle as simple as possible, since any complexity really slows down adoption.

Here is how these ideas would look in some workflows:

Building a BIM model in Rhino only:

  1. Model Geometry (in Rhino)
  2. Tag the geometry with Speckle Schema (in Rhino)
  3. Send to Speckle (with the Rhino connector)

Building a BIM model with Rhino geometry and GH processing

  1. Model base geometry (in Rhino).
  2. Create Speckle objects in GH (with the Schema Builder / CreateSpeckleObject component). This will create Speckle Objects.
  3. (Optional) Add custom data parameters to the SpeckleObjects.
  4. Send to Speckle with the Grasshopper sender component.

Build model in GH, bake to Rhino, then send to Revit (does anyone do this?)

  1. Create a parametric GH model.
  2. Create SpeckleObjects in GH.
  3. Run the Speckle Objects through the ToNative component.
  4. Bake to Rhino.
  5. Send to Speckle server with the Rhino connector.

This last workflow could also be enabled by overriding GH_Component.BakeGeometry in the Schema Builder / Create Speckle Object component, so that the created Speckle object would be converted to a Tagged Rhino Object and then baked (and we could skip step 3).


The main issue is that, if the objects created by the schema builder are not complete enough, they can cause problems down the line. Imagine a clash detection script: it would massively fail if the walls sent only have a baseline instead of a 3D mesh representation.
We had various feedback on this issue in v1, as a matter of fact.

I think Speckle should create Objects only when it can properly set their main properties; otherwise it should be “attaching the schema” so that when receiving in a BIM software these are properly created.
As a matter of fact, the original Schema Builder component was created as a hack to create BIM objects in Revit from GH; it was never really meant to be a tool to create full BIM objects.

I understand this is not the case for most of the 2D structural elements @daviddekoning , which are simple to generate and can be used “as they are” in various scenarios. This is why we ultimately think that a fair compromise is leaving the choice to the user, whether to attach or create.

Alternatively, we could add an attribute to the Objects’ class definitions to specify a default behaviour (eg create for 2d objects, and attach for all others), but this could confuse the users even more…

That’s interesting, I didn’t realize that the Schema Builder didn’t always create a fully fleshed out object. Given that it exists in GH (where the rhino geometry engine is available), would it be possible to generate the appropriate meshes at that time or expose the mesh property in the schema builder so that it can be set?

My concern is that this introduces a 4th form of Speckle objects for users and developers to remember and work with…but maybe we’re early enough in the project to let a thousand flowers bloom?

What do you think of users being able to set a default for this option? I imagine most people will usually use one or the other, and it would get annoying to have to toggle the option every time if you don’t like the default. Perhaps this is a reason to have two different components?

Further complicating matters is that some schema objects will never generate tagged geometry (like the structural schema, and non-geometric objects). How should they behave in “Tagged Geometry mode”?

Good points David! I’ll try to reply to some :slight_smile:

  • generating meshes for BIM objects: unfortunately this is impossible, as most of the information is contained in their Revit family definition that we don’t have access to (and not all families have boxy shapes like walls hehe)
  • I think setting a global default or having 2 separate components are both plausible options
  • for objects that don’t have a base geometry, no tagging is needed, since the output of the SchemaBuilder will most likely be a complete object

Thinking more about this, it seems that a good way ahead could be:

  • adding another attribute in Core, such as SchemaTag
  • the attribute will be set to true for all classes that are known not to output a “complete object” (eg most Revit BIM elements) and false for most structural elements
  • we’ll work out something to expose this specific behaviour on each GH component, so the user won’t have to check the docs every time
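The idea above could look something like this rough Python model (in the real kit it would be a .NET attribute on the class definitions; all names here are illustrative):

```python
# Hypothetical sketch of the proposed SchemaTag flag: each schema class
# declares whether the GH component should default to tagging geometry
# (True) or to emitting the complete object directly (False).
def schema_tag(tag_by_default):
    def decorate(cls):
        cls.schema_tag = tag_by_default
        return cls
    return decorate

@schema_tag(True)   # a Revit BIM element: GH can't compute its full geometry
class RevitWall:
    pass

@schema_tag(False)  # a structural element: complete as created
class Beam:
    pass

def component_default_output(cls):
    """What the GH component would emit by default for this schema class."""
    return "tagged geometry" if cls.schema_tag else "schema object"
```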

What do you guys think?

That makes sense about needing Revit to generate the BIM object geometry. It is definitely outside of Speckle’s scope to recreate Revit’s geometry engine!

I’d like to dive into this idea of a complete object, since it appears to be a driving concept.

First, I’d like to check that I understand what you mean by a complete object. Does the following capture it?

  1. A specified object is one that has enough information to unambiguously define it, and
  2. a computed object is one whose geometry is defined not by rules and parameters, but by a geometric object.

I’ll give an example with Revit Spaces, since we’ve been digging into them recently:

A SpeckleObject of type Space is specified if it has a name, Location (point), upper level and lower level defined. These properties are enough for a BIM engine to compute the geometry of the room in a full BIM model. A space would be computed if it also has a property that contained the geometry of the room. In fact, there might be several computed geometries: a mesh for the volume, a closed polyline to represent the boundary, etc…
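For concreteness, the two states of that Space might look like this (property names are my assumptions, not the actual Objects schema):

```python
# A *specified* Space: enough information for a BIM engine to compute its
# geometry, but no geometry stored on the object itself.
specified_space = {
    "speckle_type": "Objects.BuiltElements.Space",
    "name": "Lobby",
    "basePoint": [0.0, 0.0, 0.0],
    "level": "Level 1",
    "topLevel": "Level 2",
}

# The *computed* form carries the same specification plus derived geometry.
computed_space = dict(specified_space)
computed_space["volumeMesh"] = "Mesh(...)"    # computed by a BIM engine
computed_space["boundary"] = "Polyline(...)"  # another computed representation

def is_computed(space):
    return "volumeMesh" in space
```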

Secondly, if this does capture the idea of complete vs incomplete objects, do we need to do anything to flag an object as being incomplete or un-computed, other than not specifying the geometry property? My gut feeling is that indicating whether an object is computed or specified by changing the data structure (tagged geometry vs properties that store geometry) is confusing because it is implicit, not explicit, and also just because it introduces another way to do the same thing.

What about adding a tag called Computed to indicate that a property is intended to be computed from other properties in some environment? Clash-detection or IFC export routines could flag an error or warning if they come across elements that have been specified but not computed.


Hi @daviddekoning, we’re getting somewhere here. I think the specified vs. computed object “dichotomy” boils down to validation. E.g., as a little clash detection script: do I have a valid wall to operate on or not?

IFC, as a standard, has validation “built in”. Speckle doesn’t, and our stance is to let validation happen in the business logic, or as a separate layer (think of a quick automatic validation step). There are several ways I can see to implement this in the business logic:

  1. Based on commit source application: you can selectively run the clash detection script if data’s coming from Revit or an IFC file.
  2. Simply, based on object properties. E.g., if (!wall.whatever) throw new Exception("Descriptive error message"). This can easily replace the [Computed] attribute - you can also decide to compute that value on the spot if you can!
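The second option is just a few lines of business logic. A hedged sketch in Python (property names are hypothetical):

```python
def validate_wall(wall):
    """Business-logic validation: fail fast with a descriptive message
    before running, e.g., a clash detection script."""
    if "displayMesh" not in wall:
        raise ValueError("Wall has no display mesh - cannot run clash detection")
    return True

# An incomplete wall (baseline only) is rejected up front.
try:
    validate_wall({"speckle_type": "Objects.BuiltElements.Wall",
                   "baseLine": "Line(...)"})
    message = None
except ValueError as e:
    message = str(e)
```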

I’m usually wary of relying on .NET specific features like attributes for things too close to the core, as they don’t translate well in other languages. @teocomi’s [SchemaTag] proposal sounds good though, as it will allow us to control the behaviour of the component in Grasshopper and reduce complexity for the user. I’d see that as guiding the initial behaviour on instantiation, and I’d also allow, via right click, to swap between the two different behaviours (I’m sure some Gh hackers will want this sooner or later).

For example, structural elements will default to outputting the real elements, and our Revit walls will remain the same; and if you’re truly a Grasshopper hacker, you can dig into the docs a bit more and learn how to control the behaviour too :slight_smile:

I’ve raised an issue here, flagged it as p1 for next sprint!
