Unit tests involving specklepy

Hi @izzylys ! Thanks for the chat about specklepy and related development.

Short story: I am developing a Python package that relies on transferring data to and from a Speckle server. Specklepy works like a charm, and I am able to get it working as I wish. My current challenge is that I wish to write unit tests for the methods that involve interactions with the Speckle server. I have an Azure DevOps pipeline for automating the testing, so the tests should be able to run both locally and on Azure. What is your recommended solution for this?

The workflow in one of the methods that I wish to write tests for is roughly as follows (a rough code sketch follows the list):

  1. Get list of commits from a stream on a client.
  2. Receive an object referred to by the commit using a predefined transport.
  3. Do some calculations.
  4. Update the object and send it using a predefined transport.
  5. Commit the updated object to the stream on the client.
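
For reference, here is a rough sketch of how these steps could map to specklepy calls. The host, token, stream ID and the calculation itself are placeholders, and exact method names may differ slightly between specklepy versions:

```python
# Rough sketch of the five steps above; values in angle brackets are placeholders.
from specklepy.api import operations
from specklepy.api.client import SpeckleClient
from specklepy.transports.server import ServerTransport

client = SpeckleClient(host="speckle.example.com")  # placeholder server host
client.authenticate(token="<my-token>")             # authenticate_with_token on newer versions

stream_id = "<stream-id>"
transport = ServerTransport(client=client, stream_id=stream_id)

# 1. get the list of commits from the stream
commits = client.commit.list(stream_id)

# 2. receive the object referred to by the latest commit
obj = operations.receive(commits[0].referencedObject, remote_transport=transport)

# 3. do some calculations (domain-specific, omitted here)
obj["result"] = 42  # placeholder for the actual update

# 4. send the updated object using the transport
new_obj_id = operations.send(base=obj, transports=[transport])

# 5. commit the updated object to the stream
client.commit.create(stream_id=stream_id, object_id=new_obj_id, message="updated results")
```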

You mentioned that the SQLiteTransport could be an option here, and I think it is, but this only solves the challenges related to sending and receiving, and not the stuff related to the stream and client. I guess what I am looking for is kind of an in-memory version of the SpeckleClient that works on top of the SQLite database referred to by the SQLiteTransport and can have a stream that I can commit to. I hope this made sense :crossed_fingers:

Note that I initialize the client and transport in a context class, so replacing them with mocks could be an option.

Any suggestions on how to proceed are very welcome :slightly_smiling_face:

Hey @mortenengen!

If you want to unit test the correctness of your workflow, I would highly suggest going the route you've hinted at in your last paragraph: fake the SpeckleClient with a test-specific client implementation that, let's say, stores data on the FakeSpeckleClient instance (hurray for duck typing). It makes your tests not rely on external implementations / networking stuff, which produces quicker and more reliable unit tests.
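
A minimal sketch of what such a fake could look like, assuming the workflow only needs client.commit (all class names here are made up for illustration and are not part of specklepy):

```python
# Illustrative only: a duck-typed stand-in for SpeckleClient that keeps commits in memory.
from dataclasses import dataclass
from typing import List


@dataclass
class FakeCommit:
    referencedObject: str
    message: str = ""


class FakeCommitResource:
    """Mimics the bits of client.commit that the workflow uses."""

    def __init__(self) -> None:
        self._commits: List[FakeCommit] = []

    def list(self, stream_id: str, limit: int = 10) -> List[FakeCommit]:
        return self._commits[:limit]

    def create(self, stream_id: str, object_id: str, message: str = "", **kwargs) -> str:
        # newest first, mirroring how commit lists are usually ordered
        self._commits.insert(0, FakeCommit(referencedObject=object_id, message=message))
        return f"fake-commit-{len(self._commits)}"


class FakeSpeckleClient:
    """Same attribute shape as SpeckleClient for this workflow, but no networking."""

    def __init__(self) -> None:
        self.commit = FakeCommitResource()
```

In the tests you would then inject this into your context class in place of the real client (together with, say, an in-memory transport), so the workflow code itself stays untouched.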

Naturally you should also add some integration tests that ensure things still work when using the real SpeckleClient with the ServerTransport, but you can get the bulk of the testing done as unit tests.

Hope this was helpful

Thanks @gjedlicska for the quick response! I will have a look at your suggested solution and get back to you if I manage to pull it off or if more assistance is needed.

Hi @mortenengen and @gjedlicska,

Interesting question!
I’m wondering about a related issue and would like to hear your opinion.

We're also extensively using the specklepy package to send and receive data involved with different disciplines. We have also implemented some tests that connect to the Speckle server (note that we have a dedicated server for our own company) and send or receive some data. For those tests, I'm currently using my own token to authenticate the SpeckleClient. This isn't good practice, but I can't think of another way to authenticate on an online, non-public server and interact with streams. Do you have a better idea, and/or am I missing some testing functionality, e.g. a kind of test account/stream? Or would you advise executing the full workflow locally with a FakeSpeckleClient and a different Transport implementation, as proposed in your comment? What do you think?

Thanks in advance!

Hey @Rob,

this is an interesting topic, and I think there is no clear-cut right or wrong answer to the question. First off, whatever works for you is the most important thing, because having tests is miles better than not having tests.
Also, if you have a test user set up specifically to access only a set of streams reserved for testing, and the user token is loaded from an environment variable when run in CI, that setup is also completely valid.
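
Something like this, as a sketch; the variable name SPECKLE_TOKEN and the host are assumptions about your setup, and older specklepy versions use client.authenticate(token) instead of authenticate_with_token:

```python
import os

from specklepy.api.client import SpeckleClient

# the token only lives in the CI secret store / local environment, never in the repo
token = os.environ["SPECKLE_TOKEN"]

client = SpeckleClient(host="speckle.example.com")  # your dedicated server host
client.authenticate_with_token(token)               # client.authenticate(token) on older versions
```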

We do something similar on our end, with a few extra steps: when testing our SDKs, we create a test user on the fly.

But if the thing you are testing is the correctness of the internal calculation, and the Speckle server is “only” used for data persistence, I would prefer to create a test-specific fake implementation.
Also note that this is only my preference: I like to have as quick a feedback loop with unit tests as possible, and waiting on send and receive operations (even if they only take a second or two) introduces friction into rapid, test-focused development.

Hope this is helpful.

Hi @Rob. Great to see that you picked up the thread! I agree with @gjedlicska that a fake interface is a good approach. It involves some extra coding, but could potentially speed up the test suite. Another nice bonus is of course that you get to take a deep dive into the specklepy code to figure out the necessary ingredients of the interface :nerd_face: Unfortunately, I have had to postpone this due to other pressing topics and haven't found a proper solution myself yet. Curious to hear what you end up with though :slightly_smiling_face:

I'm also curious about @izzylys's opinion on this, but if you guys come up with a nicely reusable FakeSpeckle, we could bundle it into a specklepy.test package or something. I have a feeling this would be much appreciated down the line.

Another path to consider is to use only the in-memory transport and not send anything over the wire. It works exactly the same as if you were sending data to a Speckle server, but no network connections are needed. Unfortunately, this only works for operations.send and operations.receive and not for the GraphQL-based interactions.
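
As a sketch: passing the same MemoryTransport as both remote and local transport keeps the round trip fully in memory, and use_default_cache=False keeps send from also writing to the local SQLite cache (assuming your specklepy version exposes that flag):

```python
from specklepy.api import operations
from specklepy.objects import Base
from specklepy.transports.memory import MemoryTransport

transport = MemoryTransport()

# build a small object to round-trip
obj = Base()
obj["answer"] = 42

# send and receive entirely in memory: no server, no network
obj_id = operations.send(base=obj, transports=[transport], use_default_cache=False)
received = operations.receive(obj_id, remote_transport=transport, local_transport=transport)

assert received["answer"] == 42
```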

The path involving the memory transport is interesting, @gjedlicska. In the Python package I referred to in the original post, I am relating everything to commits on a branch on a stream. From there I get the object ID that is used by the transport. How could the memory transport fit in such a setup? Would talking about streams, branches and commits make sense at all? :face_with_raised_eyebrow:

Thanks for your answers @gjedlicska and @mortenengen!

For now, we had the token in a JSON file that is used to set up the test request to the Speckle server. It's out there for anyone to take, basically. Your note about the environment variable was interesting for that reason. I had a look in the Azure docs and found that it's possible to set encrypted variables in the Azure UI (Define variables - Azure Pipelines | Microsoft Docs) that can be referenced within an Azure YAML pipeline. This is actually a very nice solution, especially as it's desirable for us to also check the connection with our dedicated server. Moreover, our test streams are small, so the tests are quick anyway.
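
On the Python side, the tests can then just read that variable and skip the server-dependent cases when it isn't set. A rough sketch; SPECKLE_TOKEN, the host and the stream ID are our own placeholders, and note that Azure only exposes a secret variable to a step if it is mapped explicitly via env: in the pipeline YAML:

```python
import os

import pytest

from specklepy.api.client import SpeckleClient

# secret pipeline variable, mapped to an environment variable in the pipeline step
TOKEN = os.environ.get("SPECKLE_TOKEN")

requires_server = pytest.mark.skipif(TOKEN is None, reason="SPECKLE_TOKEN not set")


@requires_server
def test_can_list_commits():
    client = SpeckleClient(host="speckle.example.com")  # placeholder for our dedicated server
    client.authenticate_with_token(TOKEN)
    commits = client.commit.list("<test-stream-id>")    # placeholder test stream
    assert isinstance(commits, list)
```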

Still, the implementation of a FakeClient sounds interesting, and would definitely be of added value when our test suite grows and the committed data increases in size and complexity.
