Automate - Run Order

Hi Specklers

I wanted to understand what mechanisms exist to control the order in which automations run and to chain multiple functions together.

The scenario I’m thinking of is as follows:

  • Receive a model from (Revit/ Grasshopper)
  • Automate Function 1 - Performs some data manipulation
  • Automate Function 2 - then starts and uses the data created by function 1 to check what’s changed between versions.

Although it would be possible to group the various Automate functions into one larger composite function, this seems to go against the idea of having smaller, reusable functions that can be deployed in multiple scenarios.

The options I currently see would be:

  1. Create a new model from function 1, which then triggers function 2.
  2. Control which function is run by the version authoring software type i.e. limit function 1 to Revit/ Grasshopper, function 2 to .net and save a version in between.
  3. Feed the result of the first function directly into the second without persisting the in-between step.
  4. Group the functionality into a larger function that handles 1 and 2.

The issue with Option 1 seems to be that multiple models get created and the Automate results aren't accessible in one place.
Option 2 would seem to resolve those issues, but maybe the version history isn't as clean?
Option 3 - I'm not sure this is technically possible, although I'm sure I saw some kind of pipeline preview a while ago with Automate functions chained together; possibly it didn't get out of alpha?
Option 4 - you need to create lots of different functions for the different scenarios (1, 2, 1+2, …).

Just wanted to get some wider thoughts as to the best approach.

Thanks

In other words: How do I chain Automate functions?
Short Answer: You can’t natively, but you can hack determinism using immutable model versions as events and a state model to orchestrate them.

Longer Answer (yes, I have been thinking about this for quite some time):
Alright, this one’s for the tinkerers. The ones who stare at the words stateless execution and think, “challenge accepted.”

One of the questions that keeps surfacing is how to make Speckle Automate behave like a pipeline engine, to chain multiple functions, to have ordered or conditional execution, to get Automate to behave. Right now, each Automate run is self-contained: new model version → trigger → function executes → done. No direct DAGs, no internal pub/sub, no message queue.

But Speckle’s architecture (Projects, Models, Versions) gives us just enough rope to fake it.

What follows is a hack, but a robust one. It borrows from stateless engines like Pub/Sub, AWS Step Functions, and Temporal, and maps those primitives into the Speckle universe.

The Core Trick

Inside a single project, treat:

  • Models as topics
  • Versions as events
  • JSON state objects as your queue and message bus

No streams, no branches, no external broker. Just models and versions.
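To make the "JSON state objects" concrete, here's a minimal sketch of what one version of the state model's payload might look like. Every field name here (`channels`, `artifacts`, the `pending`/`done`/`skipped` flags) is my own invention for illustration, not a Speckle convention:

```python
# One immutable snapshot of the hypothetical state model. A later version
# is a whole new object with the same shape, never an in-place edit.
state_v1 = {
    "source_version": "abc123",   # the seed event that started the cascade
    "channels": {
        "f1": "pending",          # orchestrator wants F1 to run
        "f2": "skipped",          # F2 waits until F1 flips it to pending
    },
    "artifacts": {},              # workers attach their outputs here
}

# The next version, written by F1 after it finishes:
state_v2 = {
    "source_version": "abc123",
    "channels": {"f1": "done", "f2": "pending"},
    "artifacts": {"f1": {"changed_elements": 42}},
}
```

Because each snapshot carries the full picture, any function that wakes up can decide what to do from the latest version alone.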

When a new model version (your “source”) is created, say from Revit, Rhino, or Grasshopper, it becomes the seed event. An orchestrator function detects this, consults its “state model,” and starts a cascade of controlled chaos.

The Architecture in Plain English

You’ll need three things:

  1. Source Model – The thing your designers actually commit.
  2. State Model – A lightweight JSON-only model acting as a pub/sub hub.
  3. Results Model – Where merged outputs, checks, or reports land.

Then a set of Automate functions:

  • Orchestrator – Reads the source version, writes to the state model with f1: pending, f2: skipped, etc.
  • Workers (F1, F2, F3…) – Each listens for its flag to turn pending, runs, and writes back with done.
  • Finalizer – Waits until all required flags are done, merges artifacts, and posts a single clean result version.
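A minimal sketch of that three-part logic, using plain Python dicts in place of real Speckle versions. The state-object shape and every function name here are hypothetical; a real Automate function would read and write these snapshots through the Automate SDK rather than pass dicts around:

```python
def orchestrate(source_version_id):
    """Orchestrator: seed the state model from a new source version."""
    return {
        "source_version": source_version_id,
        "channels": {"f1": "pending", "f2": "skipped"},
        "artifacts": {},
    }

def run_worker(state, name, work, then=None):
    """Worker: if our flag is pending, run and write a new snapshot."""
    if state["channels"].get(name) != "pending":
        return state  # not our turn; leave the state untouched
    channels = {**state["channels"], name: "done"}
    if then:
        channels[then] = "pending"  # e.g. F1 marks F2 as pending
    return {
        "source_version": state["source_version"],
        "channels": channels,
        "artifacts": {**state["artifacts"], name: work(state)},
    }

def finalize(state, required=("f1", "f2")):
    """Finalizer: merge artifacts once every required flag has settled."""
    if all(state["channels"].get(f) in ("done", "skipped") for f in required):
        return {"result": state["artifacts"]}
    return None  # not everyone has reported in yet
```

Note that each call returns a brand-new dict instead of mutating its input, mirroring the immutable-version rule: `run_worker(orchestrate("abc123"), "f1", lambda s: {"count": 10}, then="f2")` produces a fresh snapshot and leaves the seed state untouched.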

How It Plays Out

Source Model (version V)
   ↓ triggers
Orchestrator Function
   ↓ writes pending channels
State Model (version S1)
   ↓ triggers
  ├── Function F1  → runs, marks done, adds F2:pending
  └── Function F2  → runs, marks done
   ↓
Finalizer → sees all done, writes result
Results Model (version R1)

Each version of the state model is a deterministic snapshot. There are no mutable writes, no shared memory, no race conditions, only convergence through new versions.

Every Automate trigger just reads the latest state version, decides if it needs to run, and writes a new version reflecting its completion. If someone else wrote first, it re-reads and tries again.
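That "if someone else wrote first, re-read and try again" rule is just optimistic concurrency. Here's a toy model, where a Python list stands in for the state model's version history; a real function would compare server-side version IDs rather than list lengths:

```python
def write_with_retry(history, update, max_attempts=5):
    """Optimistically append a new state version, retrying on conflict."""
    for _ in range(max_attempts):
        read_at = len(history)           # remember which version we read
        new_state = update(history[-1])  # compute a new snapshot from latest
        if len(history) == read_at:      # no concurrent write happened
            history.append(new_state)
            return new_state
        # someone else wrote first: loop back, re-read, and try again
    raise RuntimeError("too many conflicting writers")
```

Because `update` is re-run against whatever the latest snapshot turned out to be, a late writer folds the rival's changes into its own rather than clobbering them.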

Why It Works (and Why It Shouldn’t)

This setup turns Speckle Models into a distributed event log.

  • Every version is a message.
  • Every function is a subscriber.
  • Every result is recorded.

You’re effectively building a mini workflow engine inside a Speckle project, using the same primitives that make Automate stateless and robust in the first place.

Concurrency doesn’t matter anymore.
Order doesn’t matter.
The system converges to the same end state, which is the closest thing to determinism you’ll get in a distributed cloud system.
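"Converges to the same end state" can be shown with a toy sketch. Each step below is idempotent (re-running it against a state it has already handled changes nothing), so whichever order the triggers fire in, repeated application settles on the same final state. The flag names are illustrative only:

```python
import itertools

def step_f1(state):
    """F1: run once when pending, then hand F2 its turn."""
    if state["channels"].get("f1") != "pending":
        return state  # already handled: re-running changes nothing
    return {"channels": {**state["channels"], "f1": "done", "f2": "pending"}}

def step_f2(state):
    """F2: run once when pending."""
    if state["channels"].get("f2") != "pending":
        return state
    return {"channels": {**state["channels"], "f2": "done"}}

def converge(state, steps):
    """Keep applying steps until nobody produces a new version."""
    changed = True
    while changed:
        changed = False
        for step in steps:
            new = step(state)
            if new != state:
                state, changed = new, True
    return state

initial = {"channels": {"f1": "pending", "f2": "skipped"}}
# Both trigger orders end in the identical state:
for order in itertools.permutations([step_f1, step_f2]):
    assert converge(initial, order) == {"channels": {"f1": "done", "f2": "done"}}
```

Idempotence is doing the work here: a step that fires "too early" simply no-ops and gets another chance on the next pass.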

Caveats (and Confessions)

It’s a hack. You’re using models as message buses.
It’s wasteful since each event is a new version and each version costs I/O.
Function containers may be cold, so you’ll pay the startup tax.
You’ll generate a lot of versions if you’re chatty.
But, and here’s the trade, it works.

Execution times are short enough that you won’t notice, and you gain composability: tiny reusable Automate functions that can be orchestrated in infinite ways.

The Hacker’s Mindset

You can think of this like a distributed orchestra:

  • The orchestrator is the conductor
  • The functions are the players, each responding when their bar of music comes up
  • The state model is the score being rewritten in real time

It’s deterministic chaos, encoded in JSON and powered by immutable versions.

The Future (When This Hack Becomes Canon)

If Automate ever gets native pipelines, DAGs, or chaining, it’ll probably look a lot like this: a stateful record of stateless runs. Or nothing like it.

Until then, this pattern gets you there with today’s tools, proof that Speckle’s version-based data model is flexible enough to double as an orchestration layer.

TL;DR

  • Use models as topics, versions as events, state models as queues
  • Each Automate function writes back to that state, marking progress
  • Determinism comes from convergence, not sequence
  • Everything stays within a single project
  • It’s a little ugly, a little brilliant, and entirely functional

Sometimes the most satisfying solutions come from making the system do something it was never meant to do.
Automate might be stateless, but with a little stubbornness, you can make it sing.
