I had a quick go with ChatGPT about it, and it started proposing using reflection because of all the private functions, so I think I'll wait for official guidance before trying to make something.
I got more value in testing from querying live data than from building mocks.
In the mesh density demo I did mock an automation run (it includes a hardcoded version id - bad Jonathon):
import pytest
from speckle_automate import (
    AutomationContext,
    AutomationRunData,
    AutomationStatus,
    run_function,
)
from specklepy.api.client import SpeckleClient

# assuming the demo keeps its function and inputs in main.py, as the template does
from main import FunctionInputs, automate_function


@pytest.fixture()
def fake_automation_run_data(request, test_client: SpeckleClient) -> AutomationRunData:
    """Mock the AutomationRunData that a full automation run would generate.

    test_client is an authenticated SpeckleClient, provided by a fixture in conftest.
    """
    server_url = request.config.SPECKLE_SERVER_URL

    project_id = "9c6bfd2177"
    model_id = "6193bdb540"

    function_name = "Automate Density Check"

    automation_id = crypto_random_string(10)
    automation_name = "Local Test Automation"
    automation_revision_id = crypto_random_string(10)

    # Register a stub automation on the server so the test run has something
    # to report its status against (see the helper sketch below).
    register_new_automation(
        project_id,
        model_id,
        test_client,
        automation_id,
        automation_name,
        automation_revision_id,
    )

    return AutomationRunData(
        project_id=project_id,
        model_id=model_id,
        branch_name="main",
        version_id="107527ebd2",
        speckle_server_url=server_url,
        # These ids would be available with a valid registered automation definition.
        automation_id=automation_id,
        automation_revision_id=automation_revision_id,
        automation_run_id=crypto_random_string(12),
        # These ids would be available with a valid registered function definition.
        # They can also be faked.
        function_id="12345",
        function_name=function_name,
        function_logo=None,
    )


def test_function_run(fake_automation_run_data: AutomationRunData, speckle_token: str):
    """Run an integration test for the automate function."""
    context = AutomationContext.initialize(fake_automation_run_data, speckle_token)
    automate_sdk = run_function(
        context,
        automate_function,
        FunctionInputs(density_level=1000, max_percentage_high_density_objects=0.1),
    )

    assert automate_sdk.run_status == AutomationStatus.FAILED
This published the result of the locally tested function run to the server.
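For reference, the fixture above leans on two helpers that aren't shown: crypto_random_string and register_new_automation. Here's a sketch of what they look like, modelled on the Python function template's tests; treat the exact GraphQL mutation shape as an assumption and check the template for the current version:

import secrets
import string

from gql import gql
from specklepy.api.client import SpeckleClient


def crypto_random_string(length: int) -> str:
    """Generate a random alphanumeric string to use as a fake id."""
    alphabet = string.ascii_letters + string.digits
    return "".join(secrets.choice(alphabet) for _ in range(length))


def register_new_automation(
    project_id: str,
    model_id: str,
    speckle_client: SpeckleClient,
    automation_id: str,
    automation_name: str,
    automation_revision_id: str,
) -> None:
    """Register a stub automation on the server so the test run can report
    its status against it.

    NOTE: this mirrors the helper shipped in the Python function template's
    tests; the exact mutation schema may have changed since.
    """
    query = gql(
        """
        mutation CreateAutomation(
            $projectId: String!
            $modelId: String!
            $automationName: String!
            $automationId: String!
            $automationRevisionId: String!
        ) {
            automationMutations {
                create(
                    input: {
                        projectId: $projectId
                        modelId: $modelId
                        automationName: $automationName
                        automationId: $automationId
                        automationRevisionId: $automationRevisionId
                    }
                )
            }
        }
        """
    )
    params = {
        "projectId": project_id,
        "modelId": model_id,
        "automationName": automation_name,
        "automationId": automation_id,
        "automationRevisionId": automation_revision_id,
    }
    speckle_client.httpclient.execute(query, variable_values=params)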
So the way it's currently structured in C# (and I don't know if this is by design or not, so I haven't changed it), AutomateFunction is a non-public static class:
static class AutomateFunction
{
  public static async Task Run(
    AutomationContext automationContext,
    FunctionInputs functionInputs
  )
  {
    // ...
  }
}
as is FunctionInputs:
struct FunctionInputs
{
  [Required]
  public double Spacing;
}
if it's safe to make these two components public, we're golden, as I can just do as you described. I just didn't want to make changes to the template in case it was important that these stayed non-public.
Ultimately your function code ends up deployed self-contained, so while I don't profess to be a C# expert by any means, I don't feel there is any harm in modifying that.
Knowing who wrote the template function, it's expected that it would default to non-public static as a matter of practice.
Cool, happy to do that for now. It might be worth the template function coming with a stubbed-out test function (something like the sketch below), both to emphasise testing before uploading as best practice and to ensure the template can be easily tested without modification.
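A hypothetical stub, just to illustrate the shape such a template test could take (all names here are placeholders, not part of the actual template):

import pytest


@pytest.mark.skip(reason="Template stub: wire up your inputs and a test project, then remove this skip.")
def test_automate_function():
    """Replace this with a real run, e.g. build fake_automation_run_data,
    call run_function with your FunctionInputs, and assert on run_status."""
    ...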
That is also a good idea. While we're at it, our template probably needs to strip out the default tests in both SDKs that generate E2E function listings.
This is a case of an unmerged feature. … I prepared a local testing setup for C# using NUnit in this PR.
I've added a separate project that uses the function definition from your automate function project and sets up a full e2e testing environment. The default setup creates a project, a model, an object, and every datapoint required for an automation function run. Feel free to modify this, pointing it to a selected test project if that is more ergonomic. Do note that the e2e test setup re-creates all the automation primitives on a project, so if you run this test a lot, it will report a lot of automation statuses. For this reason I suggest using a test project while developing.
Let me know if this is what you are looking for, and if not, what it is that you are missing.
Thanks @gjedlicska, that's very comprehensive… just as a sense check though, do you really want to encourage people to run tests like that on your infrastructure?
If you are referring to the spammy nature of this setup, then it's probably not the greatest.
It does create a Speckle project on each run, but otherwise there is no easy way to encapsulate the test into a fully functioning e2e setup.
We have considered this for sure. I was advocating for a setup that works out of the box, with minimal setup, even at the cost of a bit of extra produced data.
The other way around puts a bit of setup burden on the function author: you need to pre-register an automation for an already existing project, so the out-of-the-box experience is a bit more work to get going (see the sketch below).
But we’re def open to iterate on our assumptions based on your feedback.
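In practice that alternative just means the fixture reads fixed, pre-registered ids instead of creating fresh ones on every run. A rough sketch, with hypothetical environment variable names:

import os

# Point the tests at an automation you registered once by hand,
# instead of calling register_new_automation() on every run.
project_id = os.environ["SPECKLE_TEST_PROJECT_ID"]
model_id = os.environ["SPECKLE_TEST_MODEL_ID"]
automation_id = os.environ["SPECKLE_TEST_AUTOMATION_ID"]
automation_revision_id = os.environ["SPECKLE_TEST_AUTOMATION_REVISION_ID"]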