Each Bullet Point is a Learning Goal.
You will see them referenced numerically in the subsequent sections.
Review the User Manual section:
Understand and be able to implement Volume Testing concepts, including a Controller and Controller Hooks, Data Management Strategy, Data Locks, Staggered Session Instantiation, Start Test Check, End Test Check, and Code Stability in relation to Git Workflows.
A Controller is the only Feature File that is run during a Volume Test.
It contains the logic to run a volume test from start to finish and is data-driven.
Based on data fed to the controller, different processes can be executed concurrently.
```
If I verify text $process is equal to "Process A"
    Then I execute "Process A"
Elsif I verify text $process is equal to "Process B"
    Then I execute "Process B"
Elsif I verify text $process is equal to "Process C"
    Then I execute "Process C"
...
```
A Controller Hook is a discrete scenario call intentionally placed inside your controller scenario.
Controller Hooks can be placed at any point where cross-device coordination is needed, and you can then include any logic in the Controller Hook scenario that needs to occur across all devices at the same point in the process.
Some concrete examples include a Controller Hook to trigger reporting logic in the After Scenario, a Data Lock during every process, and a Staggered Session Instantiation directly before Terminal Login.
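As a minimal sketch - the hook scenario names and the loop comment below are hypothetical, and only the If / I verify / I execute step wording is taken from this guide - a controller with these hooks might be shaped like this:

```
# Controller skeleton in Cycle-like steps; hook scenario names are hypothetical.
I execute "Stagger Sessions"      # Controller Hook: Staggered Session Instantiation
I execute "Terminal Login"
I execute "Start Test Check"      # Controller Hook: wait for the test-start signal
# ...continuous while loop around the process, exited by the End Test Check...
I execute "Acquire Data Lock"     # Controller Hook: Data Lock before the process runs
If I verify text $process is equal to "Process A"
    Then I execute "Process A"
I execute "Release Data Lock"     # Controller Hook: free the data for other terminals
# After Scenario: Controller Hook that triggers reporting logic
```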
It is important to appropriately establish a data management strategy for your Volume Test.
New or bad data can cause existing logic to break due to unexpected exceptions.
Applying appropriate deadlines to data delivery, administering the data wholly or partially in advance, and running the Volume Test processes in large quantities beforehand are examples of mitigation tactics that minimize any risk introduced by the data.
Data Locks are a type of Controller Hook that occur during a specific process.
If a process grabs data dynamically, it will need to ensure that no other terminal running in parallel grabs the same data.
Therefore, before it performs any conflicting actions, Cycle will check the poldat table - a table selected since it is only queried during policy checks - for a record indicating whether the data is locked. Locking can also be done via other means, like the existence of a file within a directory structure or an external non-WMS database.
If that record is not present, Cycle will insert a record to indicate that the data is locked, preventing other terminals from using that data.
Once the process is complete, it will unlock the data by deleting or modifying the record appropriately.
While locking via the filesystem is achievable, it is not recommended as a best practice; taking advantage of the DBMS record locking function is a more reliable method.
Data Locks should use a unique data value for best results.
An example would be identifying a lodnum (a unique value in a warehouse) to transfer. The Data Lock would check the poldat.rtstr1 column for a value equal to the lodnum. If there is no match, it will create one, signaling to Cycle that the operational process can now occur. After the operational process, Cycle will then delete the record from the poldat table.
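A minimal sketch of that flow, assuming a hypothetical SQL-executing step and $lockCount variable (the poldat columns other than rtstr1 are also assumptions about the WMS schema):

```
# Data Lock sketch in Cycle-like pseudocode; SQL step wording is hypothetical.
# 1. Check whether another terminal has already locked this lodnum.
I execute SQL query "SELECT COUNT(*) FROM poldat WHERE rtstr1 = '$lodnum'" and store the result in $lockCount
If I verify text $lockCount is equal to "0"
    # 2. No lock exists: insert a record to claim the data.
    Then I execute SQL query "INSERT INTO poldat (polcod, polvar, rtstr1) VALUES ('CYCLE', 'VOLTEST-LOCK', '$lodnum')"
# ...perform the operational process against $lodnum...
# 3. Release the lock once the process completes.
I execute SQL query "DELETE FROM poldat WHERE rtstr1 = '$lodnum'"
# Note: check-then-insert is not atomic on its own; a unique key on the lock column
# (or the DBMS record locking mentioned above) keeps two terminals from both
# claiming the same lodnum.
```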
Establishing many concurrent connections simultaneously can occasionally overload the server, effectively performing an unintentional denial-of-service (DoS) attack.
For this reason, the group test CSV records should include a variable used to stagger terminal connections, and a Controller Hook referencing the variable is placed directly before the Terminal Login scenario is called.
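A minimal sketch, assuming each group test CSV record supplies a $stagger value in seconds and that a wait step is available:

```
# Staggered Session Instantiation hook in Cycle-like pseudocode.
# Each terminal's CSV record carries a different $stagger value, e.g. 0, 5, 10, ...
I wait $stagger seconds
I execute "Terminal Login"
```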
At the beginning of a Volume Test, a continuous while loop will query the poldat table to see if the record for starting the test is in a state - oftentimes a 1 value instead of a 0 value - that triggers the test.
This poldat record is manually updated by the test administrator to trigger the test start.
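A minimal sketch of the Start Test Check, assuming hypothetical SQL and loop step wording and a made-up polvar value for the flag record:

```
# Start Test Check sketch in Cycle-like pseudocode; step wording is hypothetical.
# Block until the administrator flips the poldat flag record from 0 to 1.
While I verify text $startFlag is not equal to "1"
    I execute SQL query "SELECT rtstr1 FROM poldat WHERE polvar = 'VOLTEST-START'" and store the result in $startFlag
    I wait 10 seconds
```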
The process is wrapped in a continuous while loop that will execute the process forever unless the poldat table query reveals the record to end the test is in a specific state - oftentimes a 0 instead of a 1 - that triggers the end of the test.
This poldat record is manually updated by the test administrator to trigger the end of the test.
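The End Test Check is the same polling pattern inverted - under the same hypothetical step wording, the loop keeps executing the process while the flag stays set:

```
# End Test Check sketch in Cycle-like pseudocode; step wording is hypothetical.
While I verify text $runFlag is equal to "1"
    I execute "Process Dispatch"   # the data-driven If/Elsif logic shown earlier
    I execute SQL query "SELECT rtstr1 FROM poldat WHERE polvar = 'VOLTEST-RUN'" and store the result in $runFlag
```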
Since a Volume Test is meant to run for an extended period of time and perform each process potentially thousands of times, the code base must be stable.
For this reason, a Git Workflow where only code approved by the project lead is merged into the master branch is necessary.
A discrete phase dedicated to Data Preparation enables the client and Cycle Labs to fully grasp, through discovery, the scope of managing data for this particular Volume Test and to establish firm deadlines around data delivery.
Without this phase, several possible risks are introduced to the project:
Before engineers begin building the volume test, they should architect its structure, including which Volume Testing components and concepts apply, how the selected components should be implemented, and how the data management strategy impacts the volume test implementation.
Without this phase, several possible risks are introduced to the project:
Once the data management strategy and planning phases are complete, development can begin in earnest.
Another discrete phase should be dedicated to Mock Volume Testing prior to the deadline.
Without this phase, several possible risks are introduced to the project:
This is the actual Volume Test. If all other phases have gone well, this phase should be primarily focused on handing off the volume test to the client (if appropriate), preparing reporting information, and handling any immediate issues that arise.
Volume Testing occurs with a minimum of 3 executions for a set, defined duration.
The 3 executions differ in either the number of terminals or the number of tasks expected to be performed: the 50%, 100%, and 150% performance tests.
A baseline must be identified prior to executing a volume test. This baseline will be based on the expected peak performance.
Let’s assume a Volume Test is required where a system is expected to perform 170 transactions (Process A) within a 1-hour window during normal business operations. We are informed that 200 transactions is the peak number this system will ever perform during operations.
To identify a baseline, we will use our peak number of 200 transactions. This will be our 100% performance test. 100 transactions will be a 50% performance test and 300 will be the 150% performance test.
Why 50%, 100%, and 150%?
A 100% performance test provides a baseline: the peak performance that the system will hit.
A 50% performance test provides the customer with metrics to show how their system can handle the transactions at a measured rate below baseline.
A 150% performance test can provide the customer with a glimpse into whether their system could handle a business decision that increases the number of transactions in their environment.
There are a variety of ways to track metrics for volume testing.
The client must provide a systems administrator, DBA, network analyst, or any other required SMEs to monitor customer hardware.
Cycle Labs provides post-Volume Test metrics that include transactions by operation for each test.
Some commonly compiled statistics include: