Introducing Kits 2.0

Hi all,

Just had a quick read through and it’s indeed quite a lot to digest.

@dimitrie, re licensing, my first reaction is that copylefting the Core Kit makes total sense. (Small disclaimer though, I’m no lawyer and am not completely aware of all the consequences this has)

As long as you make sure you're not limiting third parties from developing their own kits and plugins from scratch under their own licenses, I believe this shouldn't deter companies from using and contributing to the Speckle community. (And hopefully a lot of them will!)

Will definitely have a deeper look into this and the other topics as well.


@teocomi @dimitrie Is there an implicit assumption that all the kits are .NET code?

Are you planning to have an object model definition in a language-neutral format? XSD, JSON Schema, Express (god forbid)? ProtoBuf could be very nice.

Otherwise I fear the Python client (and any non-.NET client) would forever be a second-class citizen.
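
To make this concrete: a language-neutral definition of a simple type could look something like the JSON Schema below. This is purely a sketch of the idea, not an actual Speckle schema:

```json
{
  "$schema": "http://json-schema.org/draft-07/schema#",
  "title": "Point",
  "description": "Hypothetical sketch only, not an adopted Speckle schema.",
  "type": "object",
  "properties": {
    "x": { "type": "number" },
    "y": { "type": "number" },
    "z": { "type": "number" },
    "units": { "type": "string" }
  },
  "required": ["x", "y", "z"]
}
```

Clients in any language (C#, Python, JS) could then generate their classes from definitions like this, rather than treating the .NET classes as the source of truth.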

The other aspect of licensing to consider may be around redistributables,
i.e. if the kit contains any libraries from the vendor of the software being converted to/from, we may need to ensure that developers are allowed to redistribute those libraries.

Hey Steve,

Thanks for your comment. Yes, the current implementation of Kits and Core 2.0 is in .NET (more specifically .NET Standard 2.0, so that it’ll be possible to reference them from .NET Framework 4.6.1 and above and .NET Core 2.0 and above).

We have started considering an object model definition in a language-neutral format, but that will come after our alpha release. At the moment the main focus is reaching feature parity with 1.0 and getting out the new connectors and server.
We have also started discussing the Python client and SDK internally, but only preliminarily!

Re licensing, we are leaning towards keeping Core and Kits MIT, because of the distribution limitations a copyleft license would imply.

Hi, just to add my experience with copyleft licenses.
To be clear upfront: I am not against the principles of copyleft. I have contributed to open source (GPL) code bases in the past, but we have recently gained some experience with the questions around these licenses, and I think it is good that I share some of it.

We have recently been looking at copyleft licenses, supported by legal advisors and two university law professors, and I must really warn against them if you want to base anything commercial (read: proprietary or part-proprietary) on the code, or tie anything in this domain to it (like servers/platforms such as ours). As I understand it, the GPL is more 'problematic' than the LGPL. Apache licenses are a lot easier to use commercially (but are not copyleft), as is MIT (but MIT does not protect anybody, e.g. in case of liability, which can also be a commercial problem).
The first 'problem', as I understand it, is that copyleft licenses are largely untested in court (and in which court/country should they be tested?). This means nobody really has final answers: most of these licenses contain textual riddles that would need to be tested in court to settle (jurisprudence). Until that jurisprudence exists, each of these open questions is a (commercial) risk.
The second 'problem' is that anybody can sue anybody, so you cannot do what you usually do in commercial arrangements: make agreements between two (or more) specific parties. There is therefore no real way to control the risks mentioned above, which also implies a commercial risk.
The third 'problem' is that most copyleft licenses contain wording from an age when everything was simple and clear, mostly around libraries and operating systems. Now we have open source plugins running on proprietary systems (e.g. Rhino), or open source plugins running closed models (e.g. a Grasshopper model) on proprietary systems (e.g. Packhunt.io) containing proprietary components (RhinoInside/Rhino Compute) communicating over internal APIs. What is a library? What is the operating system? Are Grasshopper models data, or can they be seen as 'code'/applications? If Grasshopper models are based on open source plugins, should these models be open sourced? When are you distributing? Or conveying?

The final advice we received is to stay away from copyleft, which I find a real pity, as it would be great if open source and proprietary systems could somehow live together. One route to a solution is dual-licensing with a contributor agreement under which contributors transfer their copyright, so the code can be released both under a copyleft license for a completely open 'community' edition and under a commercial license that carries less risk for commercial use.

I hope this helps.


Breaking changes in kits are something we are currently dealing with in the GSA converter, so I would be in favour of option 'B'.

I can imagine that users might be jumping between different projects/scripts. It would be good if there was some kind of configuration component/object/file that could accompany scripts (akin to a package.json or .csproj file) so that I could ensure I am always working with the same kit version.
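
A minimal version-pinning file along these lines would do the trick (entirely hypothetical, nothing like it exists today):

```json
{
  "speckleKits": {
    "SpeckleCoreGeometry": "1.0.0",
    "SpeckleElements": "1.2.3"
  }
}
```

A client could read this alongside a script and warn me if my locally installed kit versions don't match.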


Hi folks,

In this reply, I’ll be looking at the following questions raised by the original post:

  • Should Speckle clients support multiple kits at once?
  • When sending, how should the kit be selected?
  • Should the CoreGeometry and Elements kits be merged?
  • When receiving, how should kits be selected?

Kit Selection

I agree that the current method of somewhat randomly choosing a kit to convert Native Objects (NO) into SpeckleObjects (SO) needs to be made deterministic.

My understanding of the proposal is that by limiting each user to selecting a single active SpeckleKit at a time (system-wide!), we can ensure explicit converter selection.

There are already situations where explicit conversion happens, namely in the Schema Building component in Grasshopper. No changes are required to this use case to get deterministic conversion.

To phrase this as a question: when sending a group of objects (e.g. from Rhino or Revit), should SpeckleKits be chosen on a system-wide basis, a per-stream basis, or a per-object basis?

Selecting a single Kit on a system-wide basis will lead to a very frustrating user experience. People who use different kits on different projects, or multiple kits on a single project, will spend a lot of time going into their system-wide settings to change their active kit. As Hugh mentioned above, it would be great to avoid human error by locking the kit selection in on a project or file basis.

Selecting a Kit on a per-object basis will also give a terrible user experience because, well, selecting the kit for every single object is a lot of work… (that's a lot of pop-ups!) I only mention this option so that we can dismiss it.

So it seems to make sense to select a Kit on a per-stream basis. I suggest that when a stream is created, the user be asked to select a primary Kit for conversions, and also to specify whether they want to use CoreGeometry to convert elements that the primary kit cannot convert. The UI might look something like this:

(mockup image of the proposed stream-creation UI)

This gives the following advantages:

  1. The CoreGeometry and Elements kits do not need to be merged. Basing new kits on the CoreGeometry objects is Speckle's interoperability secret sauce, and tightly coupling CoreGeometry and Elements is not ideal. (see below)
  2. This gives any kit the ability to “merge” with CoreGeometry, instead of just the Elements kit.
  3. Once a stream has been set up, the user does not need to remember what Kit was chosen.
  4. Default kits can still be chosen by the client or set on a system-wide basis, so that new users don’t have to learn about it to get started.
  5. Different kits can be chosen for different projects, or for different steps of the design process (e.g. a façade designer might want to use the Elements kit to send some objects to the architect, and the Structural kit to send them to the structural engineer).

Merging CoreGeometry and Elements

Picking up the first point, the next question is: "Should the CoreGeometry and Elements kits be merged?"

Keeping the kits separate will help the Speckle ecosystem stay away from slow, IFC-like discussions about what the Kit should be. CoreGeometry provides a common vocabulary to specify geometry for the whole Speckle ecosystem, and keeping it purely focused on geometry encourages a federated, flexible approach to object model definitions.

We might rename the existing Kits as follows:

  • CoreGeometry → Geometry
  • Elements → BIM
  • Structural → StructuralAnalysis

Some new kits we might see developed:

  • Geotechnical
  • LinearInfrastructure
  • Facade
  • EnergyModel

Having all the geometric objects in each kit inherit from the CoreGeometry classes makes it easy to create a federated model for coordination, issue tracking and clash detection, while giving smaller groups the freedom to define and refine their own object models.
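
In code, this could be as simple as each kit's classes deriving from the shared geometry types. A hypothetical sketch (all class and namespace names are made up for illustration):

```csharp
using System.Collections.Generic;

namespace CoreGeometry
{
    // Shared geometry vocabulary used by every kit.
    public class Mesh
    {
        public List<double> Vertices { get; set; } = new List<double>();
        public List<int> Faces { get; set; } = new List<int>();
    }
}

namespace StructuralAnalysis
{
    // A discipline-specific element that is still a plain Mesh to any
    // client that only understands CoreGeometry.
    public class Column : CoreGeometry.Mesh
    {
        public string SectionProfile { get; set; }
        public double DesignLoad { get; set; }
    }
}
```

A coordination tool that only knows CoreGeometry can still display and clash-detect a StructuralAnalysis.Column, because it is a Mesh.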

Choosing a kit when receiving

If we follow the suggestions above regarding sending streams, we will have two types of stream to deal with on reception:

  1. Streams where all the objects are from a given kit or CoreGeometry
  2. Streams that have been created by a script (GH, Dynamo, python, etc…) where there is no guarantee regarding the number of Kits used.

A simple and robust way to deal with all this is to expand the type definition of SpeckleObjects to specify the Kit name and version for each type. For example, a column in Speckle 1.0 has the following type:

```
type: 'Mesh/Column'
```

in Speckle 2.0, this could be:

```
type: 'CoreGeometry(1.0.0):Mesh/Elements(1.0.0):Column'
```

By specifying this information on a per-object basis, we achieve deterministic converter selection and maintain the benefits of specifying a chain of types for a given object. In fact, this gives clients the option of either inspecting the type themselves and picking a kit, or just passing the objects to SpeckleCore. Either option gives a deterministic conversion.
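
A receiving client could then resolve kits with nothing more than string parsing. A rough sketch (these names are illustrative, not the actual SpeckleCore API):

```csharp
// Parses a hypothetical 2.0 type chain such as
// "CoreGeometry(1.0.0):Mesh/Elements(1.0.0):Column"
// into (kit, version, type) triples.
public class TypeRef
{
    public string Kit;     // e.g. "Elements"
    public string Version; // e.g. "1.0.0"
    public string Name;    // e.g. "Column"
}

public static class TypeChain
{
    public static TypeRef[] Parse(string type)
    {
        var links = type.Split('/');
        var refs = new TypeRef[links.Length];
        for (int i = 0; i < links.Length; i++)
        {
            // Each link looks like "Kit(version):TypeName".
            var parts = links[i].Split(':');
            var open = parts[0].IndexOf('(');
            refs[i] = new TypeRef
            {
                Kit = parts[0].Substring(0, open),
                Version = parts[0].Substring(open + 1).TrimEnd(')'),
                Name = parts[1]
            };
        }
        return refs;
    }
}
```

A client would try the last (most specific) link first and fall back towards CoreGeometry if the corresponding kit isn't installed.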

As an added bonus, clients won't need to load kits unless they actually need them for a particular object, so bugs in one kit (cough, cough, SpeckleStructural) won't crash clients that never use that kit :).

(As an aside, though we could in theory have long type chains, I suspect that two types (a primary type and a CoreGeometry type) will be enough for almost all use cases. I would love to hear what others think.)

Summary

To summarize all this, here are the 4 questions I listed at the top, with our suggestions:

  • Should Speckle clients support multiple kits at once? Yes. Allow users to select a Kit when creating a stream.
  • When sending, how should the kit be selected? On a per-stream basis, with an optional fallback to CoreGeometry.
  • Should the CoreGeometry and Elements kits be merged? No.
  • When receiving, how should kits be selected? Specify the Kit, version and object name in each object, and select a converter on that basis.

I have framed my comments around specific questions in the hope that this makes it easier for everyone to engage with each topic. Please share your opinions!


Hey @daviddekoning, thanks a lot for the detailed response and feedback, it’s very much appreciated.

Regarding your suggestion of not merging the existing kits and keeping CoreGeometry, Elements, Structural etc. independent, we see some major issues, especially if we want to make the conversion mechanism deterministic:

  • users would not be able to send or receive streams with objects from multiple kits, e.g. from both the "BIM/Elements" and "Structural" kits.
  • while these kits theoretically complement each other, there would be no way for us to guarantee there are no overlaps or conflicts in the schema (and we have seen some of these issues in the current implementation).
  • it would add a lot of friction for end users, as they would need to know, before sending any objects, exactly in which kit those objects are defined. This would not be the case if the existing kits were merged into one.
  • users would also need to switch kits much more often than if a single kit covered all AEC disciplines.

With Kits 2.0 we're not only proposing a new way of structuring the various object models, but also a new way of using the kits. Kits will no longer complement each other, but contrast each other, so that users will only need to switch between them on rare occasions (e.g. an Arup engineer collaborating with Grimshaw on a project where Grimshaw has developed and uses their own kit).
Ideally, we want to almost hide the concept of kits from end users, leaving it only to Speckle hackers/developers and system admins.

@daviddekoning I would argue against naming anything 'BIM'; the term is too overloaded and wishy-washy. Let's be a little more explicit.
'BuildingElements' is probably a better name if the scope of that kit is to represent physical objects (e.g. a concrete beam) with additional semantically accurate data (as opposed to pure geometry plus loosely agreed data attributes).


@teocomi - That sounds a little like building a house and then re-doing the foundations later. Possible to do, but with a higher likelihood of the final solution not being the most elegant…

Hi @teocomi,

I have a few questions to help me understand and then answer better.

In the diagrams in the original post you have a yellow 'Object Model' box along with the Conversions in each kit.
Is that 'Object Model' completely distinct and separate from an 'Object Model' in another kit?

Your comment that "Kits will no longer complement each other, but contrast each other" seems to imply that you see them as distinct and totally separate object models,
i.e. that there is no way for a kit to compose itself out of lower-level object definitions defined in other kits.

IMHO - One of the downfalls of IFC is that the schema (object model) has become a monolithic thing, with no ability to create a sub-schema by composing together other, more fundamental sub-schemas. This has led to an organisational problem which slows down agreement on, and releases of, that large monolithic schema.

I have further thoughts on the relationship between object model and kits, but want to just stop at this point and check you’re (seemingly) ruling out composability.

You've proposed merging the Geometry and Elements kits into an 'Objects Kit'.
But then the diagrams and your naming-changes table refer to a 'Speckle Kit'.

So, could you explain the relationship between the 'Objects Kit' and the 'Speckle Kit'?
A related question: if they are different, what is the scope of the 'Speckle Kit'?

I tend to agree with David that the choice of kit should be on a per-stream basis.

Imagine a multi-disciplinary project where one modeller is putting together the defining geometry for downstream disciplines. They might need to switch between different kits depending on who they are sending to,
e.g. if the base Speckle Kit works for most disciplines, but the project decides to create a ProjectKit for certain circumstances/data exchanges.

And while Kits 1.0 wasn't deterministic, it was at least clearer what the discipline/semantic scope of the various kits was (Geometry, BIMElements, Structural). That's important if we want to allow the person sending the data to pick the kit to send with.

I get that you're trying to simplify things, but I wonder if that is starting to cut against the philosophy of letting end users decide what is "maximally relevant" in terms of the data to be exchanged. It feels a bit like decisions are being taken out of the end users' hands and given to the devs/admins.
Please do let me know if I’m wrong.


BuiltElements might be even better. It's been proposed to rename classes such as IfcBuildingElement along these lines to make them more acceptable to domains such as infrastructure.


Hey Steve,

Thanks for your feedback, our replies are below.

That is entirely up to a Kit creator, so it could be the same as, based on, or an entirely different Object Model.

Composability is totally still an option! All you need to do is to fork and tweak an existing Kit to do so.

We haven't finalized a name yet, but "Objects Kit" and "Speckle Kit" refer to the same thing in this post. Sorry for the confusion!

We have noticed that most of our end users don't care about things like kits or object models: they just want geometry and data to transfer from software A to software B, and we aim to make that as seamless as possible! We're planning to add an "advanced" UI for sending and receiving data, but that is currently off topic :wink:


I actually like this name :wink: Currently, the kit is called "Objects Kit", which makes for the acronym "OK", which I also like (but, to be honest, it's a stupid acronym, and I'm the only one rooting for it atm).

On composability, there are several ways to approach this:

  • (1) For "dynamic", on-the-fly composability: you can still mix and match and dynamically create your own data structures with the objects from one kit. This will be exposed in Dynamo/GH to end users, and is of course there by default for devs.
  • (2) For "strongly-typed" composability: you can totally fork an existing kit, extend it, and use that. We'd hope this leads to a PR down the line! (as @izzylys & @teocomi mentioned already).
  • (3) Inter-kit composability: this is the one that got us burnt in 1.0 on the programmatic side of things (difficult to debug, error prone, difficult to enforce "correctness", impossible tracing). This one, for the moment, is out. If we see it's really essential, it can be brought back!

Re schemas, our current thinking is that we can always generate a schema from an object instance :smiley:
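
For instance, a minimal reflection-based sketch (not actual Speckle code) could look like this:

```csharp
using System.Collections.Generic;
using System.Reflection;

// Minimal sketch: derive a flat name -> type "schema" from any object
// instance via reflection. A real implementation would recurse into
// nested objects and handle collections, dynamic members, etc.
public static class SchemaGen
{
    public static Dictionary<string, string> FromInstance(object instance)
    {
        var schema = new Dictionary<string, string>();
        var props = instance.GetType()
            .GetProperties(BindingFlags.Public | BindingFlags.Instance);
        foreach (var p in props)
            schema[p.Name] = p.PropertyType.Name;
        return schema;
    }
}
```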


@stevedowning's last paragraph resonates - it was ultimately a difficult decision, but what @izzylys said is true. It's a matter of balancing cognitive overload vs. flexibility and choice. More buttons and choices mean more dev, maintenance and documentation work, and leave the door open for a worse user experience and errors down the line.

It's all part of the struggle of delivering a usable product (something OSS projects are traditionally quite bad at, and Speckle's no exception) while keeping things open and flexible enough at the same time. PS: We don't expect to get it right in 2.0 on the first go, but I'm sure we're going to get there with your help!


Thanks for the answers.
A quick response (sorry, under the pump a bit).

My concern is that without inter-kit composability, the BuiltEnvironments kit is going to grow and grow over time, until it gets difficult to manage.
i.e. if a 'Structural' kit and a 'Geotechnical' kit wanted to rely on a common definition of base Speckle geometry (e.g. point/line/arc), they would both have to live in the same kit as that base geometry.

It feels like that could lead to a state where we have an unmanageable number of PRs and forks of that single kit?

@JonM @Greg_Schleusner - Thoughts?

Thanks @stevedowning, I agree that could be a potential downside of this approach.
But how would you go about resolving the other issues we mentioned earlier that inter-kit composability would introduce:

  • potential conflicts between kits
  • not being able to send streams with objects belonging to multiple kits
  • requiring the end user to know exactly in which kit the objects they are about to send are defined
  • having users switch/select kits every time they send something

Just a couple of thoughts regarding the different kits one might have on their machine.

I edited your quote a bit, but I think this should be one of the core requirements for any future decision :slight_smile:

I am not sure I understand how this would work in practice. Will you not be able to have a stream that mixes Arup and Grimshaw kits? What about having a custom object (BuiltElement or whatever) that nests two different kits within it? Let's say, for instance, I have an ArupColumn and I want to combine it with a GrimshawFloor, GrimshawRoom and GrimshawWall to create a CustomProjectHouse object. Maybe the latter is also sitting in a separate kit.

I totally understand that you need to solve the determinism issue, and simplifying Kits probably helps. A nice example of how this could work is the way GH deals with missing libraries: if you don't have a library required by a script, you cannot open it, and the .ghx file has enough information to know exactly which library is required. Can't streams carry that information for Kits as well? That way you would get prompts about any missing Kits for a given stream. Ideally there would also be an option to automatically fetch the right Kit/version for a stream, so that the user doesn't have to know anything about what kit they are using when receiving data. This might add a bit more work on your end and for future developers, but it would offer the best user experience.
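
Building on the versioned type strings discussed above, the receiving client could aggregate a stream's kit requirements up front. A sketch of the idea (all names hypothetical):

```csharp
using System.Collections.Generic;
using System.Linq;

// Hypothetical sketch: collect the kit requirements referenced by a
// stream's objects and report any that are not installed locally, much
// like Grasshopper reporting missing plugins when opening a .ghx file.
public static class KitCheck
{
    public static IEnumerable<string> MissingKits(
        IEnumerable<string> typeChains, // e.g. "Elements(1.0.0):Column"
        ISet<string> installedKits)     // e.g. { "Elements(1.0.0)" }
    {
        var required = typeChains
            .SelectMany(t => t.Split('/'))
            .Select(link => link.Split(':')[0]) // "Elements(1.0.0)"
            .Distinct();
        return required.Where(kit => !installedKits.Contains(kit));
    }
}
```

The client could then prompt the user to fetch the missing kits (or fetch them automatically) before attempting any conversion.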

We hear you! I think they’re all valid concerns, arguments to the contrary aside (we haven’t really seen that many kit contributors/PRs).

Stam is right in summarising our main pain points as dev productivity (debugging) and conversion "determinism", coupled with user experience: it's much easier to properly handle and support one kit at a time.

Nevertheless, this does not need to be the ultimate approach. It's the one we'll start with, as we want to kick things out in alpha; it does not block adding the 1.0 behaviour back during the beta stage and improving on it, for example with @Stam's auto kit downloader and @daviddekoning's UI suggestion.

Unverified potential code

It would mean, in the current architecture, creating a simple new "fake" kit with one universal IConverter implementation that does what 1.0 does: reflect through the existing kits and dynamically invoke their conversion methods. It should be straightforward to grandfather in!
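
Roughly something like this (a purely hypothetical sketch; "IConverter" and the ToNative naming convention here stand in for whatever the 2.0 interfaces end up being):

```csharp
using System;
using System.Linq;
using System.Reflection;

// Hypothetical sketch of a "universal" converter that mimics 1.0:
// scan all loaded kit assemblies for a static ToNative method accepting
// the object's exact type, and invoke it dynamically. (Error handling
// for unloadable assemblies is omitted.)
public class UniversalConverter
{
    public object ConvertToNative(object speckleObject)
    {
        var method = AppDomain.CurrentDomain.GetAssemblies()
            .SelectMany(a => a.GetTypes())
            .SelectMany(t => t.GetMethods(BindingFlags.Public | BindingFlags.Static))
            .FirstOrDefault(m =>
                m.Name == "ToNative" &&
                m.GetParameters().Length == 1 &&
                m.GetParameters()[0].ParameterType == speckleObject.GetType());

        // Null if no kit on this machine can convert the object.
        return method?.Invoke(null, new[] { speckleObject });
    }
}
```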

We've just learned that the less we promise, the easier it is to deliver, so I'll keep mum about this (or at least put it under Discourse's spoiler alert flag :smiley:)

PS: We’ve just posted another RFC on the Base Speckle Object in .NET. Check it out, hope you’ll like it!


Hi everyone

I'm coming to this chat (and Insiders in general) a bit late, but I'm catching up now, and I'd like some thoughts on this from another angle, if for no other reason than to help my understanding.

It seems to me that a 2.0 Kit is effectively a Speckle worldview - at any one time, you apply a Speckle worldview to processing all data coming from all streams. Rando having its own kit to handle all Speckle data is saying “operating as Rando, this is how we’ll process all the data from Speckle streams”.

Since users won’t have kits in their faces as much in the 2.0 world, we can turn the dial slightly towards the developer and introduce a couple of extra mechanisms, such as:
- Schemas could be published separately as nuget packages - with any inheritance/composition required
- Conversion libraries could also be published separately, referencing schema nuget packages at will
- Kits could reference their own hand-picked set of conversion libraries (and therefore, schemas) - along with possibly other settings/rules/filters baked into the kit to suit the worldview of the kit

The structure of the repos themselves need not follow this separation of schemas, converters and kits - as long as they publish nuget packages.
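
A kit's project file might then be little more than a curated list of package references, something like this (all package names here are made up):

```xml
<!-- Hypothetical sketch: RandoKit composing published schema and
     conversion packages into a single kit. -->
<Project Sdk="Microsoft.NET.Sdk">
  <PropertyGroup>
    <TargetFramework>netstandard2.0</TargetFramework>
  </PropertyGroup>
  <ItemGroup>
    <PackageReference Include="Speckle.Schema.CoreGeometry" Version="2.0.0" />
    <PackageReference Include="Rando.Schema.Facades" Version="1.3.0" />
    <PackageReference Include="Rando.Conversions.Revit" Version="1.3.0" />
  </ItemGroup>
</Project>
```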

So the in-house Rando developer can peruse the directories of available published schemas and conversion libraries (both external and on their own intranet) and compose them into RandoKit. Of course, they will need to handle any conflicts, e.g. ConversionA and ConversionB might use SchemaA and SchemaB, which in turn inherit from different versions of BaseSchemaX; but that's a periodic action that is only needed when Rando needs/wants to update its worldview.

The same work would be done by Speckle Systems in updating its own official Objects kit: it will be the result of reviewing the latest updates to the schemas and conversions and curating them into their own kit, which will be available to everyone happy to use it as their default.

Does anyone have any thoughts on this concept? Am I missing some key aspects here?

Cheers

Nic

Hey @nicburgers, thanks for your input!

I think you're spot on in every aspect here:

  • we totally want to turn the dial slightly towards the developer; schemas and conversion libraries will be separate nugets
  • the in-house Rando developer can compose their RandoKit as they want; it could easily be a combination of the Speckle schema (Objects) and other random stuff

:v:
