I have a mesh object, which I’m updating by PUTting to `/objects/{objectId}` (e.g. to move vertices around, or to recolour vertices, or to completely rearrange geometry). When I update it, I broadcast an `update-global` message on a stream which contains that object.
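For context, the flow I have at the moment looks roughly like this (a minimal sketch; `SERVER`, the auth header and the ws message envelope are placeholders I made up, not the real SpeckleCore shapes):

```ts
// Sketch of the current flow: PUT the modified mesh, then broadcast
// `update-global` on the stream's channel. SERVER, the auth header and the
// message envelope are placeholders, not the exact Speckle wire format.
const SERVER = "https://my-speckle-server.example/api";

async function updateMesh(ws: WebSocket, streamId: string, objectId: string, patch: object) {
  // 1. Update the object in place (vertices / colors / faces live in `patch`).
  await fetch(`${SERVER}/objects/${objectId}`, {
    method: "PUT",
    headers: { "Content-Type": "application/json", Authorization: "JWT <token>" },
    body: JSON.stringify(patch),
  });

  // 2. Announce the change to everyone listening on this stream.
  ws.send(JSON.stringify({
    eventName: "broadcast",
    streamId, // the stream is the "channel"
    args: { eventType: "update-global" },
  }));
}
```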
First, is there any spec about the meaning of websocket messages?
If there isn’t a spec: Should this message cause the stream and all its objects to be reloaded? Or should it only cause the stream to be reloaded, and then only new objects to be loaded? Is there some other message I should be sending to announce an object update?
For now, I’ve patched my SpeckleViewer to reload all of a stream’s objects on `update-global`.
Kudos for pasting this in the forum!
Re specs, they don’t exist anywhere yet. There are some conventions though, namely:
- chat rooms are defined by `streamId`. Essentially, each stream is one “channel” you can subscribe to.
- broadcasting: you’ll see this function pop up in the SpeckleCore client. It sends a message to all ws clients listening to that channel.
- direct message: sends a message to a single ws client. You need to have its id to do so, though.
As an example, when you unfold the controllers pane in the viewer, the client broadcasts a ws message in that streamId channel asking for controllers. The first client to respond will then be sent `compute-request` direct messages, to which, in theory, it replies with a `compute-request-result` direct message back to the viewer.
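To make the two kinds of messages concrete, the payloads look more or less like this (field names are illustrative only; check SpeckleCore for the real envelope):

```ts
// Illustrative shapes only; the real field names live in SpeckleCore.

// Broadcast: delivered to every ws client subscribed to the stream's channel.
interface BroadcastMessage {
  eventName: "broadcast";
  streamId: string;              // which "chat room" to post in
  args: Record<string, unknown>; // free-form payload, e.g. { eventType: "update-global" }
}

// Direct message: delivered to exactly one ws client, addressed by its id.
interface DirectMessage {
  eventName: "message";
  recipientId: string;           // you need to know the target client's ws id
  args: Record<string, unknown>; // e.g. { eventType: "compute-request" }
}
```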
In your case, suffice it to say that if you update an object:
- this means its hash changes (which the server doesn’t take care of when PUT/PATCHing; hash generation so far is up to the clients & their converters),
- therefore it needs to get saved again in the database & gets a new `_id` (everything should ideally be hash based, with hashes generated server side to ensure consistency, but that’s for 2.x.x),
- therefore the stream’s `objects` array changes to include the new object `_id`,
- the viewer (I assume the online viewer, but the receiver clients actually do the same) then does a diff on the stream’s objects array and loads/unloads the objects that changed.
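Conceptually, that diff is nothing more than this (a sketch of the idea, not the actual viewer code):

```ts
// Sketch of the viewer-side diff on update: compare the old and new `objects`
// arrays of the stream and only load/unload what changed. Not the real
// SpeckleViewer implementation.
function diffStreamObjects(oldIds: string[], newIds: string[]) {
  const oldSet = new Set(oldIds);
  const newSet = new Set(newIds);
  return {
    toLoad: newIds.filter((id) => !oldSet.has(id)),   // fresh _ids -> fetch & add to the scene
    toUnload: oldIds.filter((id) => !newSet.has(id)), // dropped _ids -> remove from the scene
  };
}

// e.g. diffStreamObjects(["a", "b"], ["a", "c"]) -> { toLoad: ["c"], toUnload: ["b"] }
```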
Honestly, so far, we’ve never had to deal with object modification/generation outside a client where you control the hashing of said object too; hence why PUT & PATCH don’t regenerate the hash, as it’s assumed the client has done it.
In your specific case, a temporary & more efficient solution might be to broadcast an `object-changed` event with the object’s db id in it, and then implement that in the viewer: listen for it and just reload that object, not the whole stream. Contrary to my Slack answer, you can get away with this whether or not you change the object’s hash.
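Something along these lines (hedged sketch; apart from the `object-changed` name and the idea of shipping the db id, the envelope fields are placeholders):

```ts
// Sender side: after saving the object, announce which one changed.
// The envelope fields here are placeholders, not the exact wire format.
function announceObjectChanged(ws: WebSocket, streamId: string, objectId: string) {
  ws.send(JSON.stringify({
    eventName: "broadcast",
    streamId,
    args: { eventType: "object-changed", objectId },
  }));
}

// Viewer side: on `object-changed`, reload only that object, not the stream.
function onWsMessage(raw: string, reloadObject: (id: string) => void) {
  const msg = JSON.parse(raw);
  if (msg?.args?.eventType === "object-changed" && typeof msg.args.objectId === "string") {
    reloadObject(msg.args.objectId);
  }
}
```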
Thanks for the enlightening response!
I’d picked up a few of those conventions, but the controllers thing is new to me and pretty interesting. I’ll be having a read of the code around that later on.
I hadn’t realised that objects were supposed to be immutable, but that makes sense now. Is the hash supposed to (eventually) be used for deduplication? Or is there some other reason to have both a unique hash and a unique id?
In my case, I was just updating the `vertices` and `colors`. Given what you’ve explained, the “proper” solution for me seems to be to create a whole new object. I should be able to afford to create a new object (and delete the old one). For future thought, there is going to be a slight overhead in re-uploading the same `faces`, and a little extra latency in needing to update the stream after receiving the new object id.
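In other words, something like this (a sketch only; apart from `/objects/{objectId}`, the routes and response shapes are my assumptions about the 1.x API, not verified):

```ts
// Sketch of the "create a whole new object" flow instead of mutating in place.
// SERVER, the auth header, the extra routes and the response shapes are
// assumptions, not verified against the actual 1.x server.
const SERVER = "https://my-speckle-server.example/api";
const headers = { "Content-Type": "application/json", Authorization: "JWT <token>" };

async function replaceObject(streamId: string, oldId: string, newObject: object): Promise<string> {
  // 1. Upload the replacement (this re-uploads the unchanged faces too).
  const created = await fetch(`${SERVER}/objects`, {
    method: "POST", headers, body: JSON.stringify(newObject),
  }).then((r) => r.json());
  const newId: string = created._id; // response shape assumed

  // 2. Point the stream's objects array at the new id (the extra round-trip / latency).
  const stream = await fetch(`${SERVER}/streams/${streamId}`, { headers }).then((r) => r.json());
  const objects = (stream.objects as string[]).map((id) => (id === oldId ? newId : id));
  await fetch(`${SERVER}/streams/${streamId}`, {
    method: "PUT", headers, body: JSON.stringify({ objects }),
  });

  // 3. Clean up the old object.
  await fetch(`${SERVER}/objects/${oldId}`, { method: "DELETE", headers });
  return newId;
}
```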
For Speckle in general, would you accept PRs for (…I might be getting ahead of things here, and misunderstanding something):
- In the specs, explaining that the hash should (for v1) be generated on the client as a hash of the object?
- In the server, changing the behaviour of `PUT /objects/{objectId}` to copy the object (assigning a new id) and update the requested fields? If object modification hasn’t been a problem so far, then this shouldn’t break anything?
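Roughly what I have in mind for the second one (an Express-style sketch; the in-memory store stands in for the real Mongo collection, and none of this is the current server code):

```ts
// Sketch of the proposed PUT /objects/{objectId} behaviour: copy the stored
// object, overlay the requested fields, and save it under a new id instead of
// mutating in place. The Map stands in for the real MongoDB collection.
import express from "express";
import { randomUUID } from "crypto";

const app = express();
app.use(express.json());

const db = new Map<string, Record<string, unknown>>(); // stand-in store

app.put("/objects/:objectId", (req, res) => {
  const existing = db.get(req.params.objectId);
  if (!existing) return res.status(404).end();

  // Copy + overlay the requested fields, and mint a fresh id for the copy.
  const newId = randomUUID();
  db.set(newId, { ...existing, ...req.body, _id: newId });

  // (Recomputing the hash fields is left out of this sketch.)
  res.status(200).json({ _id: newId });
});
```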
It’s already used for deduplication (check the bulk object save middleware). The fact that there are two unique indexes serving the same purpose on that collection is good ol’ technical debt.
Before we shift to full-on hashes, we should run some collision tests on the current algos.
I see your point regarding per-object-level mods. This is something that hasn’t been optimised for yet. The behaviour you’re describing for `PUT /objects/{objectId}` is sensible though. I’m trying hard to remember whether this change might cause breaking changes somewhere, but I don’t think so (maybe a general @channel on #dev on Slack to make sure others are not relying on the current behaviour). Only remark is that you need to populate the two hash fields too, as they’re required.
The `geometryHash` is `${object.type}` + a hash of every own property of the object (not inherited ones, or db stuff, but you can ignore this for now) besides the `properties` field, and the `hash` field is basically a hash of the whole object. There’s a (bad) way to do this in here…
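In code, the convention is roughly this (a sketch of the idea, not the server’s actual helper; the digest and separator are just examples):

```ts
// Sketch of the hashing convention described above, not the actual
// implementation. The digest (md5) is just an example.
import { createHash } from "crypto";

const md5 = (s: string) => createHash("md5").update(s).digest("hex");

function computeHashes(obj: Record<string, unknown>): { hash: string; geometryHash: string } {
  // geometryHash: `${object.type}` + a hash of the object's own properties,
  // excluding the `properties` field (and db bookkeeping like _id).
  const { properties, _id, hash, geometryHash, ...geometry } = obj;
  const newGeometryHash = String(obj.type) + md5(JSON.stringify(geometry));

  // hash: basically a hash of the whole object (own props + properties).
  const newHash = md5(JSON.stringify({ ...geometry, properties }));

  return { hash: newHash, geometryHash: newGeometryHash };
}
```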
If you wanna do extra legwork for specs, be my guest! (would prioritise dev and face-to-face dev support for now, until we reach stable stableness)
Oh, neat! If I do implement the behaviour change, I’ll try to reuse the bulk object save if it makes sense.
Another potential change (which seems less likely to break things?) would be to use the hash in place of the id everywhere. Although it’s not super important, it might pave the way to removing one or the other eventually.
And the Slack discussion has been started.
Let’s wait for this for the 2.x.x work, starting I guess after summer; it’s a different thread too, I guess. If we go for projects and streams as objects too, and get all recursive, we might end up in a problematic state. I.e., a PUT on a project’s `name` prop doesn’t realistically need a hash change. Or it does, and we advertise this as an infinite history feature…