Tags: salesforce, github-actions, salesforce-cli

Determine Minimum Tests To Be Run For Salesforce Deploy


I have set up a GitHub action that validates code changes upon a pull request. I am using the Salesforce CLI to validate (on PR) or deploy (on main merge).

The documentation gives me several options for determining the test level for this deploy: NoTestRun, RunSpecifiedTests, RunLocalTests, and RunAllTestsInOrg. I am currently using RunLocalTests, like so:

sfdx force:source:deploy -x output/package/package.xml --testlevel=RunLocalTests --checkonly

We work with some large orgs whose full test runs take quite a while to complete. I would like to use RunSpecifiedTests for validation, but I am not sure how to set up my GitHub Action to dynamically determine which tests to pull in. I haven't seen anything in the CLI docs that does this.
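For reference, the CLI does accept an explicit test list via the `--runtests` flag when the test level is RunSpecifiedTests. A sketch of the invocation (the test class names here are placeholders, not from the original post, and the command requires an authenticated org):

```shell
# Placeholder test class names; supply your own comma-separated list.
sfdx force:source:deploy -x output/package/package.xml \
  --testlevel=RunSpecifiedTests \
  --runtests=AccountServiceTest,OpportunityServiceTest \
  --checkonly
```

The hard part, as the question notes, is computing that list dynamically in CI rather than hard-coding it.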


Solution

  • There really isn't a way to do this with 100% reliability. Any change in Apex code has the potential to impact any other Apex code. A wide variety of declarative metadata changes (for example, Validation Rules, Lookup Field Filters, Processes and Flows, Workflow Rules, and schema changes) can also impact the execution of Apex code.

    If you want to reduce your test and deployment runtime, some key strategies are:

    • Ensure your tests can run in parallel, which is typically orders of magnitude faster.
    • Remove any tests that are not providing meaningful validation of your application.
    • Modularize your application into packages, which can be meaningfully tested in isolation. Then, use integration tests (whether written in Apex or in some other tooling, such as Robot Framework) to validate the interaction between new package versions.

    It's only this last strategy that gives you a real ability to establish a boundary around specific code behavior and test it in isolation, although you'll still need integration tests as well.

    At best, you can establish a naming convention that maps Apex classes to their related test classes, but per the point above, using such a strategy to limit test runs carries a very real risk of missing bugs (i.e., a validation that passes despite real defects).
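If you accept that risk, the naming-convention approach can be sketched as a small shell helper. This is a hypothetical example, not part of the original post: it assumes your test classes follow a `<ClassName>Test` convention and that changed files live under a `classes/` directory, as in a typical SFDX project layout.

```shell
# Hypothetical helper: maps a newline-separated list of changed Apex class
# paths to a comma-separated list of test classes, assuming an (unverified)
# "<ClassName>Test" naming convention.
map_tests() {
  printf '%s\n' "$1" |
    grep 'classes/.*\.cls$' |                 # keep only Apex class files
    grep -v 'Test\.cls$' |                    # drop the test classes themselves
    sed 's#.*/##; s#\.cls$##; s#$#Test#' |    # path -> <ClassName>Test
    paste -sd, -                              # join into a comma-separated list
}

# In CI, the input would typically come from something like:
#   changed=$(git diff --name-only origin/main...HEAD)
changed='force-app/main/default/classes/AccountService.cls
force-app/main/default/classes/AccountServiceTest.cls'
map_tests "$changed"    # prints: AccountServiceTest
```

The resulting list could then be passed to the deploy command as `--runtests="$(map_tests "$changed")"` alongside `--testlevel=RunSpecifiedTests` — with the caveat above that this only catches failures in the conventionally-named tests, not in code impacted indirectly.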