@daviddekoning I would argue against naming anything ‘BIM’: the term is too overloaded and wishy-washy. Let’s be a little more explicit.
‘BuildingElements’ is probably a better name if the scope of that kit is to represent physical objects (e.g. a concrete beam) with additional semantically accurate data (as opposed to pure geometry plus loosely agreed data attributes).
@teocomi - That sounds a little like building a house and then re-doing the foundations later. Possible to do, but with a higher likelihood of the final solution not being the most elegant…
Hi @teocomi,
I have a few questions to help me understand and then answer better.
In the diagrams in the original post you have a yellow ‘Object Model’ box along with the Conversions in each kit.
Is that ‘Object Model’ completely distinct and separate from an ‘Object Model’ in another kit?
Your comment about “Kits will no longer complement each other, but contrast each other” seems to imply that you see them as distinct and totally separate object models.
i.e. there is no way for a kit to compose itself from lower-level object definitions defined in other kits.
IMHO - one of the downfalls of IFC is that the schema (object model) has become a monolithic thing, with no ability to create a sub-schema by composing together other, more fundamental sub-schemas. This has led to an organisational problem which slows down agreement on, and releases of, that large monolithic schema.
I have further thoughts on the relationship between object models and kits, but want to stop at this point and check that you’re (seemingly) ruling out composability.
You’ve proposed merging the Geometry and Elements kit into an ‘Objects Kit’ .
But then the diagrams, and your naming-changes table, refer to ‘Speckle Kit’.
So, could you explain the relationship between the ‘Objects Kit’ and the ‘Speckle Kit’?
A related question: if they are different, what is the scope of the ‘Speckle Kit’?
I tend to agree with David that the choice of kit should be on a per-stream basis.
Imagine a multi-disciplinary project where one modeller is putting together the defining geometry for downstream disciplines. They might need to switch between different kits depending on who they are sending to.
e.g. the base Speckle Kit works for most disciplines, but the project decides to create a ProjectKit for certain circumstances/data exchanges.
And while Kits 1.0 wasn’t deterministic, it was at least clearer what the discipline/semantic scope of the various kits was (Geometry, BIMElements, Structural). And that’s important if we want to allow the person sending the data to pick the kit to send with.
I get that you’re trying to simplify things, but I wonder if that is starting to cut against the philosophy of letting end users decide what is “maximally relevant” in terms of the data to be exchanged. It feels a bit like decisions are being taken out of the end users’ hands and given to the devs/admins.
Please do let me know if I’m wrong.
BuiltElements might be even better. It’s been proposed to rename classes such as IfcBuildingElement to make it more acceptable to domains such as Infrastructure.
Hey Steve,
Thanks for your feedback; our replies are below.
That is entirely up to a Kit creator so it could be the same as, based on, or an entirely different Object Model.
Composability is totally still an option! All you need to do is to fork and tweak an existing Kit to do so.
We are referring to the same thing! We haven’t finalized a name yet, so in this post “Objects Kit” and “Speckle Kit” refer to the same kit. Sorry for the confusion.
We have noticed that most of our end users don’t care about things like kits or object models – they just want geometry and data to transfer from software A to software B, and we aim to make that as seamless as possible! We’re planning to add an “advanced” UI when sending and receiving data, but that is currently off topic.
I actually like this name! Currently, the kit is called “Objects Kit”, which makes for the acronym “OK”, which I also like (but, to be honest, it’s a stupid acronym, and I’m the only one rooting for it atm).
On composability, there are several ways to approach this:
- (1) For “dynamic”, on-the-fly composability: you can still mix and match, and dynamically create your own data structures with the objects from one kit. This will be exposed in Dynamo/GH to the end users, and of course is there by default for devs.
- (2) For “strongly-typed” composability: you can totally fork an existing kit, extend it, and use that. We’d hope this will lead to a PR down the line! (This is what @izzylys & @teocomi mentioned already.)
- (3) Inter-kit composability: this is the one that got us burnt in 1.0 on the programmatic side of things (difficult to debug, error prone, difficult to enforce “correctness”, impossible to trace). This one, for the moment, is out. If we see it’s really essential, it can be brought back!
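Option (1) can be sketched roughly like this. This is a purely illustrative Python stand-in for Speckle’s dynamic base object (the real kits are .NET); the `Base` class here and its behaviour are assumptions for the sketch, not the actual Speckle API.

```python
# Illustrative sketch only: a minimal stand-in for a dynamic base object,
# showing "mix and match" composability with objects from one kit.

class Base:
    """A bag of dynamically attached properties."""
    def __init__(self, **props):
        for name, value in props.items():
            setattr(self, name, value)

    def to_dict(self):
        # Recursively flatten nested Base objects into plain dicts.
        return {k: v.to_dict() if isinstance(v, Base) else v
                for k, v in vars(self).items()}

# Compose an ad-hoc structure on the fly, as an end user might in GH/Dynamo:
column = Base(height=3.0, profile="HEB200")
room = Base(name="Kitchen", area=12.5)
house = Base(column=column, room=room, storeys=2)

print(house.to_dict())
```

The point of the sketch is that no schema needs to pre-declare `house`: the structure exists only because the user composed it.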
Re schemas, our current thinking is that we can always generate a schema from an object instance.
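One hedged reading of “generate a schema from an object instance” is to reflect over the instance and record the type of each property. All names here (`schema_of`, `Point`, `Column`) are invented for illustration; the actual Speckle mechanism is not shown in this thread.

```python
# Sketch: derive a simple schema (property name -> type name) by walking
# an object instance, recursing into nested objects.

def schema_of(obj):
    schema = {"type": type(obj).__name__, "props": {}}
    for name, value in vars(obj).items():
        if hasattr(value, "__dict__"):
            schema["props"][name] = schema_of(value)  # nested object
        else:
            schema["props"][name] = type(value).__name__
    return schema

class Point:
    def __init__(self, x, y):
        self.x, self.y = x, y

class Column:
    def __init__(self):
        self.base = Point(0.0, 0.0)
        self.height = 3.0

print(schema_of(Column()))
```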
@stevedowning’s last paragraph resonates - it was a difficult decision ultimately, but what @izzylys said is true. It’s a matter of balancing cognitive overload vs. flexibility and choice. Too many buttons and choices mean more dev, maintenance, and documentation work, and leave the door open for a worse user experience and errors down the line.
It’s all part of the struggle of delivering a usable product (which is something that OSS projects are traditionally quite bad at, and Speckle’s no exception) while keeping things open and flexible enough at the same time. PS: We don’t expect to get it right in 2.0 on the first go, but I’m sure we’re going to get there with your help!
Thanks for the answers.
A quick response (sorry, under the pump a bit).
My concern is that, without inter-kit composability, the BuiltEnvironments kit is going to grow and grow over time, to the point where it becomes difficult to manage.
i.e. if a ‘Structural’ kit and a ‘Geotechnical’ kit wanted to rely on a common definition of base Speckle geometry (e.g. point/line/arc), then they would both have to exist in the same kit as that base Speckle geometry.
It feels like that could lead to a state where we have an unmanageable number of PRs and forks of that single kit?
@JonM @Greg_Schleusner - Thoughts?
Thanks @stevedowning, I agree that could be a potential downside of this approach.
But how would you go about resolving the other issues we mentioned earlier that inter-kit composability would introduce:
- potential conflicts between kits
- not being able to send streams with objects belonging to multiple kits
- requiring the end user to know exactly which kit the objects they are about to send are defined in
- having users switch/select which kit to use every time they send something
Just a couple of thoughts regarding the different kits one might have on their machine.
I edited your quote a bit, but I think this should be one of the core requirements for any future decision.
I am not sure I understand how this would work in practice. Will you not be able to have a stream that mixes Arup and Grimshaw kits? What about having a custom object (BuiltElement or whatever) that nests two different kits within it? Let’s say, for instance, I have an ArupColumn and I want to combine it with a GrimshawFloor, GrimshawRoom, and GrimshawWall to create a CustomProjectHouse object. Maybe the latter is also sitting in a separate Kit.
Totally understand that you need to solve the determinism issue, and simplifying Kits probably helps. A nice example, I think, is the way GH deals with missing libraries: if you don’t have a library required by the script, then you cannot open it. The .ghx file has enough information to know exactly what library is required. Can’t streams carry that information for Kits as well? That way you would get prompts about any missing Kits for a given stream. Ideally you would also allow an option to automatically fetch the right Kit/version for the stream, so the user doesn’t have to know anything about what kit they are using when receiving data. This might add a bit more work on your end and for future developers, but it would offer the best user experience.
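The GH-style idea above could be as simple as the stream recording which kits (and versions) produced it, and the receiver diffing that against what is installed locally before deserialising. Every name below (`INSTALLED_KITS`, `missing_kits`, the stream-info shape) is invented for this sketch.

```python
# Sketch: a stream carries kit requirements; the receiver checks its local
# registry and prompts (or auto-fetches) anything missing, GH-style.

INSTALLED_KITS = {"Objects": "2.0.0"}  # hypothetical local kit registry

def missing_kits(stream_info):
    """Return (name, version) pairs the stream needs but we don't have."""
    return [(name, ver) for name, ver in stream_info["kits"].items()
            if INSTALLED_KITS.get(name) != ver]

stream_info = {"id": "abc123", "kits": {"Objects": "2.0.0", "ArupKit": "1.3.1"}}
for name, version in missing_kits(stream_info):
    print(f"Stream requires {name} {version}: prompt the user or auto-fetch")
```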
We hear you! I think they’re all valid concerns, arguments to the contrary aside (we haven’t really seen that many kit contributors/PRs).
Stam is right in summarising our main pain points as dev productivity (debugging) right now & conversion “determinism”, coupled with user experience: it’s much easier to handle and support one kit at a time properly.
Nevertheless, this does not need to be the ultimate approach. It’s the one we’ll start with, as we want to kick things out in alpha; it does not block adding the same behaviour as in 1.0 back during the beta stage, and improving on it with, for example, @Stam’s auto kit downloader & @daviddekoning’s UI suggestion.
Unverified potential code
It would mean, in the current architecture, creating a simple new “fake” kit with one universal IConverter implementation that does what 1.0 did: reflect through the existing kits and dynamically invoke their conversion methods. It should be straightforward to grandfather in!
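A rough shape of that universal converter, sketched in Python (the real implementation would be a .NET IConverter; the class names and the `can_convert`/`convert` protocol here are invented): it owns no conversions itself and simply dispatches each object to the first registered kit that claims it, i.e. 1.0-style behaviour behind a single facade.

```python
# Sketch: a "fake" kit whose single converter reflects through other kits.

class UniversalConverter:
    def __init__(self, kits):
        self.kits = kits  # registered kits, each with can_convert/convert

    def convert(self, obj):
        # First kit that claims the object wins (1.0-style dispatch).
        for kit in self.kits:
            if kit.can_convert(obj):
                return kit.convert(obj)
        raise ValueError(f"No kit can convert {type(obj).__name__}")

# Two dummy kits standing in for real ones:
class GeometryKit:
    def can_convert(self, obj): return isinstance(obj, tuple)
    def convert(self, obj): return {"type": "Point", "coords": list(obj)}

class ElementsKit:
    def can_convert(self, obj): return isinstance(obj, str)
    def convert(self, obj): return {"type": "Wall", "name": obj}

uni = UniversalConverter([GeometryKit(), ElementsKit()])
print(uni.convert((0, 0, 3)))     # dispatched to GeometryKit
print(uni.convert("north wall"))  # dispatched to ElementsKit
```

The dispatch order also shows why this was hard to debug in 1.0: which kit handles an object depends on registration order, not on anything visible in the data.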
We’ve just learned that the less we promise, the easier it is to deliver, so I’ll keep mum about this (or at least put it under discourse’s spoiler alert flag).
PS: We’ve just posted another RFC on the Base Speckle Object in .NET. Check it out, hope you’ll like it!
Hi everyone
I’m coming to this chat (and Insiders in general) a bit late but I’m catching up now, and I’d like some thoughts on this from another angle, if not for any other reason than to help my understanding.
It seems to me that a 2.0 Kit is effectively a Speckle worldview - at any one time, you apply one Speckle worldview when processing all data coming from all streams. Rando having its own kit to handle all Speckle data is saying “operating as Rando, this is how we’ll process all the data from Speckle streams”.
Since users won’t have kits in their faces as much in the 2.0 world, we can turn the dial slightly towards the developer and introduce a couple of extra mechanisms, such as:
- Schemas could be published separately as nuget packages - with any inheritance/composition required
- Conversion libraries could also be published separately, referencing schema nuget packages at will
- Kits could reference their own hand-picked set of conversion libraries (and therefore, schemas) - along with possibly other settings/rules/filters baked into the kit to suit the worldview of the kit
The structure of the repos themselves need not follow this separation of schemas, converters and kits - as long as they publish nuget packages.
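One way to read that layering, with invented names throughout: schemas and conversion libraries are independent packages, and a kit is just a curated, hand-picked set of conversion libraries (and, through them, schemas). A minimal sketch:

```python
# Sketch: kits as curated compositions of conversion libraries, which in
# turn reference schema packages. All names are hypothetical.

class ConversionLibrary:
    def __init__(self, name, schemas):
        self.name = name
        self.schemas = schemas  # schema packages this library references

class Kit:
    def __init__(self, name, conversion_libs):
        self.name = name
        self.conversion_libs = conversion_libs

    def schemas(self):
        # The kit's worldview: the union of schemas its libraries pull in.
        return sorted({s for lib in self.conversion_libs for s in lib.schemas})

rhino = ConversionLibrary("Rhino.Conversions", {"BaseSchema", "GeometrySchema"})
revit = ConversionLibrary("Revit.Conversions", {"BaseSchema", "BuiltElements"})
rando = Kit("RandoKit", [rhino, revit])
print(rando.schemas())
```

The conflict Nic describes would show up exactly here: if `rhino` and `revit` referenced different versions of `BaseSchema`, curating the kit means reconciling them.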
So the in-house Rando developer can peruse the directories of available published schemas and conversion libraries (both external and on their own intranet) and compose them into a RandoKit. Of course, they will need to handle any conflicts - e.g. ConversionA and ConversionB might use SchemaA and SchemaB, which in turn inherit from different versions of BaseSchemaX - but that’s a periodic action, only needed when Rando needs/wants to update its worldview.
The same work would be done by Speckle Systems in updating its own official Objects kit - it will be the result of reviewing the latest updates to the schemas and conversions and curating them into a kit that is available to everyone happy to use it as their default.
Does anyone have any thoughts on this concept? Am I missing some key aspects here?
Cheers
Nic
Hey @nicburgers, thanks for your input!
I think you’re spot on in every aspect here:
- we totally want to turn the dial slightly towards the developer; schemas and conversion libraries will be separate nugets
- the in-house Rando developer can compose their RandoKit as they wish; it could easily be a combination of the Speckle schema (Objects) and other random stuff