To properly evaluate system behavior, known inputs must produce expected outputs. This is especially critical when working with JDA WMS, where a multitude of configuration options drives specific system behavior.
Equally important is the ability to reuse those known inputs to repeat tests efficiently and accurately. In transactional systems, accomplishing this requires a mechanism to purge (clean up) test data.
Cycle includes steps to load and clean up test data using MOCA datasets.
Once a MOCA connection is established in Cycle, Local Syntax and MOCA commands can be executed directly.
The most common uses of MOCA datasets are loading and cleaning up data. The Cycle steps that perform those actions are:
I execute MOCA dataset "<DATASET_DIRECTORY_PATH>"
This step loads the CSV files in the given directory into the current MOCA connection. The path to the dataset should be relative to the Resource Directory. First, any cleanup*.msql files are run (in alphabetical order). Once all cleanup*.msql files have run, any files named load*.msql are executed. Next, data from any CSV files in the dataset (ordered alphabetically) is inserted into the appropriate database table, based on the CSV file name matching the table name. Only columns in the CSV file that match known table columns are inserted into the database table. Any failure during insertion will cause this Step to generate an error. Finally, any files named validate*.msql found in the dataset are executed.
I execute cleanup script for MOCA dataset "<DATASET_DIRECTORY_PATH>"
This step looks for a cleanup*.msql file in the specified directory and runs it, if it exists. The Step will generate an error if the script fails.
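As an illustration, a dataset directory might contain the files below. The directory, file, and table names here are hypothetical; the only requirements are the cleanup*/load*/validate* naming patterns and CSV file names that match database table names.

    datasets/my_dataset/
        cleanup_orders.msql     (run first; removes any leftover test data)
        load_orders.msql        (run after cleanup; e.g. MOCA commands that create data)
        ord.csv                 (rows inserted into the ord table; the header row names the columns)
        ord_line.csv            (rows inserted into the ord_line table)
        validate_orders.msql    (run last; confirms the data loaded as expected)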
Loading data customarily takes place prior to executing the business process. In Cycle, the data load occurs after establishing a MOCA connection and setting any MOCA environment variables.
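For example, the relevant steps might appear in this order. The wording of the connection and environment variable steps is illustrative only and should be confirmed against the Cycle steps documentation for your version; the URL, credentials, variable, and dataset path are placeholders.

    # The first two step wordings are illustrative; verify the exact syntax
    # against your Cycle steps documentation.
    Given I connect to MOCA at "http://wms-test:4500/service" as user "SUPER" with password "SUPER"
    And I set MOCA environment variable "WH_ID" to "WMD1"
    And I execute MOCA dataset "datasets/my_dataset"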
Let’s use the example of testing Wave Planning, and more specifically a new Wave Rule.
To test Wave Planning, the system must contain a waveable order with order lines. To effectively test the new rule, the order and its lines must satisfy the rule's evaluation criteria.
The first step is to build the MSQL file or files responsible for creating the orders and order lines. More than likely the MSQL will contain the MOCA commands ‘create order’ and ‘create order line’ with the necessary arguments and error handling. Using the MOCA commands enables all of the inherent validations as well as any standard or custom triggers and wrappers. While this is manually adding an order into the system, all existing configurations and business rules are followed, ensuring valid data.
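A minimal sketch of such a load script is shown below. All values are placeholders, and the exact command names, required arguments, and error handling vary by WMS version and environment, so verify them against your own system.

    /* load_orders.msql - placeholder values throughout; real scripts will
       pass many more arguments and add error handling as needed. */
    {
        create order
         where ordnum = 'TESTORD001'
           and client_id = '----'
           and wh_id = 'WMD1'
           and ordtyp = 'P'
        ;
        /* The order line carries the criteria the new wave rule evaluates. */
        create order line
         where ordnum = 'TESTORD001'
           and client_id = '----'
           and wh_id = 'WMD1'
           and ordlin = '001'
           and ordsln = '0000'
           and prtnum = 'TESTPRT001'
           and prt_client_id = '----'
           and ordqty = 10
    }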
The next step is to build the MSQL file or files responsible for cleaning up test data.
One important detail to remember when creating a cleanup file is that it is not enough to clean up only the data that was loaded (in this case, the order and order lines). You must also account for the data created as a result of executing the test. This test will potentially add records to shipment, shipment_line, ordact, dlytrn and pckbat, so cleanup for these tables is required as well.
Another important detail when building a cleanup MSQL file is that it must be constructed so that it always returns a MOCA status of 0. This is necessary because, when using the loading step, cleanup*.msql is the first script run. In addition, the Feature may fail at different points, meaning not all downstream tables will have been written to.
Not handling 'no rows found' errors in the cleanup will fail the entire test, producing a false negative.
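A sketch of such a cleanup script follows. The order number and warehouse are placeholders, and each SQL statement is followed by catch(-1403) so the MOCA 'no rows found' error is swallowed and the script completes with a 0 status. The downstream tables are keyed differently, so their delete statements need where clauses appropriate to your schema.

    /* cleanup_orders.msql - always returns status 0, even when no data exists. */
    {
        /* Downstream tables such as shipment_line, shipment, dlytrn and pckbat
           would be cleaned up here first, using where clauses that tie those
           records back to the test order. */
        [ delete from ordact where ordnum = 'TESTORD001' ] catch(-1403)
        ;
        [ delete from ord_line where ordnum = 'TESTORD001' and wh_id = 'WMD1' ] catch(-1403)
        ;
        [ delete from ord where ordnum = 'TESTORD001' and wh_id = 'WMD1' ] catch(-1403)
    }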
Now that both the load and cleanup files are created, the steps can be incorporated into the Feature file.
Depending on the intent of the Feature, the data load can occur either in a Background or in the main Scenario, prior to the business logic executing.
It is best practice to execute the cleanup script in the After Scenario to ensure that it always runs.
Below is an example Feature with all the pieces in place.
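The sketch here is illustrative: the dataset path, order details, and the wording of the connection, environment variable, and wave planning steps are placeholders that should be adapted to your environment and confirmed against the Cycle steps documentation. Only the dataset and cleanup steps are the ones described above.

    Feature: Wave Planning - New Wave Rule

        Background: Connect to MOCA and load the test order
            # Connection and environment variable step wording is illustrative;
            # verify the exact syntax against your Cycle steps documentation.
            Given I connect to MOCA at "http://wms-test:4500/service" as user "SUPER" with password "SUPER"
            And I set MOCA environment variable "WH_ID" to "WMD1"
            And I execute MOCA dataset "datasets/wave_rule_order"

        Scenario: New wave rule selects the test order
            # Placeholder steps - replace with the real steps that run wave
            # planning and validate the result for the new wave rule.
            When I run wave planning for warehouse "WMD1"
            Then I verify order "TESTORD001" was selected by the new wave rule

        Scenario: After Scenario
            # Runs whether the main Scenario passes or fails, so the
            # test data is always removed.
            Then I execute cleanup script for MOCA dataset "datasets/wave_rule_order"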
The example Feature connects to MOCA in the Background and then executes the dataset, which populates the destination instance with the required order structure.
The Feature then executes the main Scenario, validating the business process for the wave rule.
Finally, when the main Scenario completes (pass or fail), the After Scenario runs and cleans up the data introduced during the execution.