We’ve been thinking about a testing strategy for our v2 deployment and the v2 connector development we’re planning (e.g. a GSA connector), and we’re keen to get your thoughts on a few things:
- To make it easier to set up a consistent set of automated tests for each connector, we’re interested in integrating connector-specific runners into our testing strategy. These runners would have the relevant software and required licenses installed (e.g. a Rhino runner, a GSA runner), and we’d build out a set of tests for each connector that could be executed on the relevant runner. We’re planning to start with an initial set of smoke tests that check whether a connector loads properly in its host application (see the sketch after this list); longer term, we’d like to develop more complex, closer-to-end-to-end tests that could also be automated via the runners. We’re also thinking of setting up Rhino runners first while we work out how to do the same for the other connectors. Thoughts on this approach? Does it align well with the overall testing strategy for Speckle as a whole? Ideally these tests would sit in the monorepo alongside the existing connectors, and we could extend the existing CI/CD workflow to execute them as well.
- For community-developed connectors, is there a minimum required level of testing that we should all be aiming for? If so, what should it include/cover?
- Is CircleCI the platform the team is planning to use going forward? (If it is what you plan to use long-term, we will look to set up our CI/CD with CircleCI as well.)
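To make the runner idea a bit more concrete, here’s a rough sketch of the kind of “does the connector load” smoke test we have in mind for a Rhino runner. It’s written in Python purely for illustration (the real tests would more likely sit next to the C# connector projects), and the Rhino install path, CLI flags, and the `load_connector.py` helper script are all assumptions on our part:

```python
# Sketch of a "connector loads" smoke test meant to run on a Rhino runner.
# Assumptions: Rhino is installed at RHINO_EXE, and load_connector.py is a
# hypothetical Rhino Python script that tries to load the Speckle plugin and
# exits with a non-zero code if it is missing or fails to initialise.
import subprocess
from pathlib import Path

RHINO_EXE = Path(r"C:\Program Files\Rhino 7\System\Rhino.exe")  # assumed install path
LOAD_SCRIPT = Path(__file__).parent / "load_connector.py"       # hypothetical helper


def test_speckle_connector_loads_in_rhino():
    """Launch Rhino headlessly, run the load script, and assert a clean exit."""
    result = subprocess.run(
        [
            str(RHINO_EXE),
            "/nosplash",
            f'/runscript=-_RunPythonScript "{LOAD_SCRIPT}" -_Exit',
        ],
        timeout=300,
    )
    assert result.returncode == 0, "Speckle connector failed to load in Rhino"
```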
In a slightly different vein: has anyone done any tests or benchmarking exercises comparing the performance of v1 vs v2? E.g. comparing send/receive speed, or how speed scales with the number of objects (something along the lines of the sketch below).
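If nothing like that exists yet, we could put something together ourselves. A minimal sketch of what we’re imagining on the v2 side, using specklepy (the host, token, and stream id below are placeholders, and a v1 comparison would need an equivalent script against the v1 client):

```python
# Rough v2 benchmarking sketch using specklepy: time send/receive for
# increasing object counts. Host, token, and stream id are placeholders.
import time

from specklepy.api import operations
from specklepy.api.client import SpeckleClient
from specklepy.objects import Base
from specklepy.transports.server import ServerTransport

client = SpeckleClient(host="speckle.xyz")            # placeholder server
client.authenticate_with_token("YOUR_TOKEN")          # placeholder token
transport = ServerTransport(stream_id="YOUR_STREAM_ID", client=client)

for count in (100, 1_000, 10_000):
    # Build a commit object holding `count` trivial child objects.
    commit_obj = Base()
    children = []
    for i in range(count):
        item = Base()
        item.value = i
        children.append(item)
    commit_obj["@elements"] = children  # "@" marks the list as detached children

    start = time.perf_counter()
    obj_id = operations.send(base=commit_obj, transports=[transport])
    send_s = time.perf_counter() - start

    start = time.perf_counter()
    operations.receive(obj_id=obj_id, remote_transport=transport)
    receive_s = time.perf_counter() - start

    print(f"{count:>6} objects: send {send_s:.2f}s, receive {receive_s:.2f}s")
```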
Looping in @Stam