Defining test cases
You create test cases by using the Record & Generate facility:
- Select the service you want to create test cases for
- Optionally select one or more services for which you want to create stubs
- Execute the service with different inputs, e.g. in Designer or in SOAP-UI
- Stop recording and generate a test suite.
This gives you a ready-to-use test suite that you can further edit or execute in the Test Suites page, where you can:
- Give the test cases a meaningful name
- Adjust or add assertions
- Add callbacks
- Select a wrapper service for enhanced validation
- Repair failed test cases
Validating against regular expressions in expected data
You can use regular expressions in the expected data. This is convenient if a field contains e.g. a timestamp that changes with every invocation. Enclose a regular expression in forward slashes. If you don't want IwTest to interpret the regular expression, place a backslash in front of it.
```xml
<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="color2">/\w+/</value>         <!-- matches one or more word characters -->
    <value name="color1">/black|white/</value> <!-- matches 'black' or 'white' -->
    <value name="random">/\d\d\d/</value>      <!-- matches three digits -->
    <value name="random">\/\d{2}/</value>      <!-- matches exactly '/\d{2}/' -->
  </record>
</IDataXMLCoder>
```
Note: IwTest does not contain a data generation facility, nor does it contain additional facilities for doing verifications, e.g. accessing a database in order to check that an entry has been written to a table, or checking a log file. Instead, use the webMethods functionality you're already familiar with to do this. Simply wrap the service you want to write a test suite for in another service (in your test package) and execute any data generation logic and/or verification logic there. Then have IwTest validate the outputs of this wrapper service.
Manually creating a test suite
You can also create a test suite manually by creating a regular flow service that only invokes iw.test.execution.pub:execute, setting the inputs of that service (but it's highly recommended to Record & Generate test suites instead).
For the simplest test case you define:
- Test suite name
- Test case name
- Service to test
- One assertion, either inline or in a file
File paths are relative either to the containing package, or to the working directory of the IntegrationServer. Usually a service takes an input, but it doesn't have to. You cannot define inputs inline; IwTest only allows you to define inputs that are saved in a file.
If you don't define any assertion, then the test case will be marked as failed when executed.
You can execute this test suite manually from Designer (after all, it's a flow service). The results can also be checked in the Test Results page.
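An assertion file is simply a saved pipeline containing the expected values, in the same IDataXMLCoder format shown earlier. A minimal sketch (the field names are illustrative, not prescribed):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="status">OK</value>       <!-- expected literal value -->
    <value name="timestamp">/\d+/</value> <!-- regular expression for a value that changes per run -->
  </record>
</IDataXMLCoder>
```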
Running & Results
There is a special service that will execute all tests it can find: iw.test.execution.pub:run. The results of the test runs are shown in the Test Results page. Optional input parameters let you execute a subset of the test suites.
The results can also be exported in JUnit format for consumption and display by e.g. Jenkins (be sure to use the JUnit plug-in):
```sh
# Execute the test suites contained in the packages whose names end in '_Test':
curl "http://[host]:[port]/invoke/iw.test.execution.pub:run?packageFilter=.*_Test"

# Download the results in JUnit format:
curl "http://[host]:[port]/invoke/iw.test.results.pub:export?format=junit" > test-results.xml
```
Note: IwTest does not maintain a history of test runs. The Jenkins JUnit plug-in already does a marvellous job at that.
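A hedged sketch of how the two calls above might be wired into a Jenkins declarative pipeline. The stage layout is an assumption; only the curl calls and the 'junit' step (provided by the JUnit plug-in) are given:

```groovy
pipeline {
    agent any
    stages {
        stage('Run IwTest suites') {
            steps {
                // Execute all test suites in packages ending in '_Test' (host/port are placeholders).
                sh 'curl "http://[host]:[port]/invoke/iw.test.execution.pub:run?packageFilter=.*_Test"'
                // Export the results in JUnit format.
                sh 'curl "http://[host]:[port]/invoke/iw.test.results.pub:export?format=junit" > test-results.xml'
            }
        }
    }
    post {
        always {
            // The JUnit plug-in parses the report and keeps the run history.
            junit 'test-results.xml'
        }
    }
}
```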
Recording service invocations
The IwTestAgent package contains functionality to record test data as services are executed. You can record service invocations that you run manually from within Designer, but you can also do the recording in another environment, for example in an integration test or QA environment.
Note: Recording obviously has a performance impact. Be careful in high volume environments.
To start recording go to the record page:
- Click the icon in the table header to activate recording on that IntegrationServer. It should change into a blinking icon, which you can click again to stop recording.
- Click the icon to select a service. You can directly start typing the service name if you know it, or you can select a package first.
- If the service you selected calls sub-services, then you can optionally select one or more of those for recording as well in the box that appears below.
- Execute your service one or more times, either from Designer, SOAP-UI, or the external application. Notice the increasing counter.
- When done, stop recording and generate a test suite by clicking the icon. More on this in the next section.
Recording configuration
In the settings page you can control the recording behaviour:
- record.storage.location: Location where the saved pipelines of recorded service invocations are initially stored.
- record.sample.interval.ms: Minimum interval, in milliseconds, between two recordings of the same service.
- record.services.use.service.signature: Whether or not to only save the defined inputs and outputs of a service configured for recording.
- record.stubs.use.service.signature: Whether or not to only save the defined inputs and outputs of a sub-service.
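For illustration, a sketch of what these settings might look like; the values below are assumptions, not shipped defaults:

```
record.storage.location = ./packages/IwTestAgent/recordings   (any writable directory; path is illustrative)
record.sample.interval.ms = 1000                              (record the same service at most once per second)
record.services.use.service.signature = true
record.stubs.use.service.signature = true
```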
Generating test suites
Based on the data you recorded in the previous section you can now generate a test suite by clicking on the icon. This automatically stops recording.
A dialog appears with pre-filled parameters. Accept the defaults or change them to your liking.
- Target package: the suggested name is based on the generate.test.package.suffix parameter
- Test Suite Service Name: the suggested name is based on the generate.test.suite.service.infix and generate.test.suite.service.suffix parameters.
- Test Suite Name: the suggested name is the service name with the colon replaced by a dot. This convention allows for a hierarchical treatment of the test suite by e.g. Jenkins.
- Generate Mode: merge or overwrite. Appears if the Test Suite Service already exists.
When you click 'Generate' the following will happen:
- A new package to hold the test suite will be created (if it did not already exist)
- The recorded data for the service is downloaded from the IwTestAgent and saved to a folder in the target test package; by default [generate.test.package.folder] = 'testfiles', and underneath this folder another folder, [service.name.with.dots], is created.
- A test suite flow service is created in the target package. This service does nothing but call iw.test.execution.pub:execute with fully configured inputs.
To run the test suite, execute this service. It has two optional parameters:
- index: do not execute the whole suite, but only run test case [index] (one-based)
- definition-only: when set to 'true', only the parameters to iw.test.execution.pub:execute are returned
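For example, assuming a generated test suite service named acme.orders_test:processOrder (a hypothetical name), you could invoke it over HTTP using the same pattern as in 'Running & Results':

```sh
# Run only the third test case of the suite (index is one-based):
curl "http://[host]:[port]/invoke/acme.orders_test:processOrder?index=3"

# Return the suite definition without executing it:
curl "http://[host]:[port]/invoke/acme.orders_test:processOrder?definition-only=true"
```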
IwTest names the test cases with an increasing integer, starting with [00001]. It's a good idea to change these names into meaningful descriptions.
You can freely edit this flow service, rename it or move it around. If you move it to a different package then be sure to also move the corresponding test files.
You can always record additional test cases and add them to the existing test suite flow service. Be sure to set the 'mode' parameter to 'merge'.
Stubs
A stub is a replacement for a call to a service. This is handy when you want to make your test cases independent of external applications. There are four different types of stubs:
- Pipeline: Instead of calling a service, a previously saved pipeline is returned.
- Service: Instead of calling a service, another service is called. This can be a service in your test package. Note that you can optionally define a static input for this service.
- Exception: Instead of calling a service, you can throw an exception. This is handy for testing error scenarios. Specify the fully qualified exception name and message.
- Dynamic: Based on the input a matching output is returned. You can consider this as a conditional stub.
A Dynamic stub combines the capabilities of the other three.
Note: The recording facility will only create dynamic stubs.
Dynamic stubbing solves the problem that arises when a stubbed service is called multiple times by the service under test, but each time with different inputs. Depending on the input, you'd like to return a different output.
Dynamic stubbing only takes one parameter as an input: a 'stub-base-dir'. Place combinations of input/output/error pipelines in arbitrarily named sub-directories. The files themselves must be named like this:
- test-input.xml
- test-output.xml
- test-error.xml
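A sketch of what a stub-base-dir might look like; the sub-directory names are arbitrary and chosen here purely for illustration:

```
stub-base-dir/
    happy-flow/
        test-input.xml    (matched against the actual input)
        test-output.xml   (returned when the input matches)
    database-down/
        test-input.xml
        test-error.xml    (exception thrown when the input matches)
```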
During the execution of a dynamic stub the actual input is matched against the test-input.xml files.
Note: Regular expressions in those files are supported.
Note: A (functionally) empty test-input.xml file matches everything:
```xml
<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
  </record>
</IDataXMLCoder>
```
If a match is found, then either the corresponding 'test-output.xml' or 'test-error.xml' is used (don't specify both). The latter only specifies the exception class name and message (not a pipeline), which are used to create the exception to throw.
Note: The fields must be named message and (optionally) class-name:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="class-name">com.wm.lang.flow.FlowException</value>
    <value name="message">java.net.ConnectException: database unavailable</value>
  </record>
</IDataXMLCoder>
```
A note on dynamic stubbing
There is an inherent problem with recording data for sub-services that you want to stub later in your test case. When a service calls another service in a regular invoke step, webMethods passes the complete pipeline to the called service, although that service might only need a small subset to execute its task. Only when a service is called as a transformer in a MAP step does webMethods pass just the mapped parameters. Likewise, a service may return any unspecified parameters.
A generated test case uses so-called 'dynamic stubbing'. This means that a service call is replaced by a general stub service that tries to select the output based on the input. The actual input is matched (yes, you can use regular expressions) against the saved inputs, and on a match the corresponding saved output is returned. It therefore clearly makes sense to only consider (and record) the advertised inputs and outputs of a mocked service.
Now, there may be cases where this doesn't work. It usually means that the developer has been sloppy and either relies on unspecified inputs, or returns more than the advertised outputs. This is not a problem when the test cases are created in a 'development' stream, but it might be unwanted when you create test cases from live data.
This global parameter lets you control whether only the advertised inputs and outputs are recorded:
record.stubs.use.service.signature
It defaults to 'true'.
Callbacks
This is an advanced feature that allows you to capture the input or output of any service asynchronously and compare the data with what you expected. In this way you can greatly extend the scope of your test case. Suppose e.g. that your top-level service publishes a message to which another service subscribes. Normally, what goes on there is beyond the reach of a test case that directly calls a service and compares its outputs. With a callback you can get hold of that data and do assertions against it in your test case. This not only works for services executed asynchronously on the same IS, but also on a remote IS!
The Test Suite page supports configuring Callbacks:
- Open a test suite for editing in the Test Suites page
- Add a Callback to a test case by clicking the icon next to Callbacks
- Select a service by clicking the icon
- Select when the callback on the selected service should fire, either on:
- Start
- End
- Success
- Error
- Supply expected data. Note that you can get hold of this data by recording invocations of this service when you execute this test case!
Note that although IwTest does not support subscribing to (JMS) messages directly, you can still test published messages: simply create a trigger in your test package, define an empty flow service and subscribe to the JMS topic or Broker document type you want to test.
Repairing test cases
In the Test Suites page you can edit a test suite and execute individual test cases. If one of them fails, a failure icon appears next to the test suite name and the repair icon next to the test case becomes enabled. If you click it, IwTest will attempt to repair the test case: the expected results will be replaced with the actual results, keeping any regular expressions you might have defined already.
Note: The repair function does not work on inline assertions or on inline exceptions. It only acts on a pipeline file, also if it represents an exception.
Samples
IwTest provides advanced functionality for constructing, executing, and reporting on test cases for services created on Software AG's webMethods™ platform. Its main feature is the ability to record test data and generate test cases. IwTest is implemented as two IS packages. A test suite is just another flow service.
Examples are available in the IwTestSamples package.
Tutorial
A tutorial is available here.
Features
- Facility to record test data and generate test cases, including stubs
- Inline Assertions (equals, exists, greater than, etc.) on individual fields, e.g. '/books[0]/chapters[1]/title' equals 'Once upon a time'
- Assertions against a captured pipeline
- Use of regular expressions in expected results, e.g. <value name="title">/How.*/</value>
- Detailed error reporting: all validation errors are reported, not just the first one.
- Facility to repair all test cases in a suite in case the underlying service has changed
- Paths of input or output files can be specified relative to the package in which the test suite resides.
- Validation of intermediate values
- Validation of asynchronously executed services
- Automatic creation and deletion of callbacks.
- Supports a distributed architecture.
- Results available in JUnit format for easy integration with e.g. Jenkins
The ability to record service invocations allows you to quickly build a set of regression tests when you record in e.g. a QA environment.
Supported webMethods versions
IwTest supports wM 9.6 and higher (the sources have been compiled targeting Java 1.6).
Concepts
- A test suite contains one or more test cases to test different aspects of the same service. In IwTest a test suite is a regular flow service.
- A unit test covers the smallest unit of logic, e.g. a mapping or utility service.
- A system test covers a complete flow but without touching an external system
- An integration test covers the full chain from source system to target system
- A stubbed service is replaced during runtime with a fixed result, another service or an exception
- A callback service is a service that is registered to execute when some given service is executed.
You can set the test type in a test suite. There is no functionality behind it, other than that it lets you execute test suites of a certain type.
Architecture
The IwTest framework consists of two regular webMethods packages:
- IwTest: This package contains the facilities to define and run test cases. The results are gathered by this package and presented in an HTML page. You can install this package on any IntegrationServer in your environment. The services that you want to test don't have to be on the same IS.
- IwTestAgent: This package contains facilities to support stubs and callbacks. It can also record test data. Install this package on any IntegrationServer that contains logic that you want to test.
Distributed setup
IwTest supports the following execution scenarios:
- Local execution: everything resides on one IS:
- Application packages
- Test packages
- IwTest
- IwTestAgent
- Remote execution on one IS
- Distributed execution
Note: Asynchronous test cases with callbacks are supported in all three setups!
Local Recording/Execution
This is the most basic setup. In this case you do not have to set up Remote Agents or define Environments.
Remote Recording/Execution
In this scenario all application logic is executed on one remote IS.
In this case you need to:
- Enable support for the distributed setup in the UI by setting the parameter [ui.distributed.enabled] to true.
- In the Configuration page, add the participating agents. For each, you should first create a Remote Alias on the IS.
- Install IwTestAgent on the remote IS by clicking the icon.
- Optionally: Configure a Central Alias on the Remote Agent that points back to the IS that hosts IwTest. You only need to do this if you want to execute asynchronous scenarios, that is, with callbacks.
When you enable UI support for distributed recording and execution, drop-down menus will appear at the appropriate places, letting you select a remote agent.
Important: The packages with your test suites should be located on the Central IS, i.e. the one that IwTest resides on.
Distributed execution
In a distributed environment you have multiple (physical) IS's that are needed to execute your application logic.
In this case you need to set up Environments, grouping two or more Remote Agents:
- Take the same steps as needed for the 'Remote Execution' scenario.
- Group Remote Agents into Environments.
When you execute test cases or run a collection of test suites you can now define, in addition to a remote alias, the Environment. During test case execution the following rules now apply for the stubs and callbacks:
- Callbacks are set up on each Remote Agent of the specified Environment
- Stubs with scope server are defined on each Remote Agent
In the 'Run Test Suites' dialog you will now always see the drop-down menu for selecting an environment.
Note: A Remote Agent may be defined in more than one Environment.
Common Conventions
Although you can define test cases in the same package that contains your integration logic, it's a common pattern to define test cases in a separate package. You could start out with these guidelines:
- Each package will have a corresponding package with test cases. It will have the same name with e.g. `_Test` appended.
- The hierarchy within this package follows the hierarchy of the package under test, but the root folder has '_test' appended
- For unit tests, the name of the test suite should be the service under test (with the ':' replaced by a '.')
- For system tests you could follow the same pattern, or create your own naming guidelines.
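A hypothetical example of these conventions, assuming a package OrderProcessing containing a service orderProcessing.mapping:mapOrder:

```
Package under test : OrderProcessing
Test package       : OrderProcessing_Test
Root folder        : orderProcessing_test
Unit test suite    : orderProcessing.mapping.mapOrder
```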
Checking for updates
In the About page you can manually check whether a newer version of IwTest is available. The URL that is accessed is:
https://integrationwise.biz/iwtest/latest.url
Note: Should you get an error message indicating that the TLS certificate chain could not be verified, then add a Truststore to your IS containing the certificate chain of integrationwise.biz.
You can configure IwTest to automatically check for updates by setting this parameter to 'true':
ui.update.check.enabled
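If you want to see what the check fetches, you can also retrieve the URL manually, for example:

```sh
curl https://integrationwise.biz/iwtest/latest.url
```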
Usage Notes
- The Extended Parameter watt.server.ns.dependencyManager must be set to true
Troubleshooting
- 'Repair test case' does not work!
A test case fails when you execute it. The 'repair' icon appears, you press it, but afterwards the test case still fails.
Usually this means that the service under test produces different output data every time you execute it. In this case you need to manually fix the test case. Have a close look at the fields for which the validation failed. You generally have two options:
- Remove one or more lines from the expected output and tick the checkbox 'lax'
- Replace the value with a regular expression
- My callback times out!
You defined a test case with a callback. When you execute the test case, you get the message 'Callback timed out'.
This usually means that the service actually never executed during the lifetime of the test case. The first step is to verify that it did: go to Service Usage and verify.
- Test case fails with 'Could not find matching stub data'!
You generated a fresh test case with a stub, but as soon as you execute it, you get this error message.
This usually means that the service for which you created a stub is called with dynamic data, for example a generated UUID or a timestamp. Dynamic stubs provide a mechanism to return data depending on the input. If the input varies each time, you need to change the matching criteria. Usually there are two options:
- Go to the definition of the test case, locate the dynamic stub, and edit the file labeled with 'match'. You either replace the field that contains the dynamic value with a regular expression, or you remove the field altogether. Bear in mind that an empty pipeline matches everything!
- Replace the dynamic stub with a static one.
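For example, a match file in which a field holding a generated UUID (the field name is illustrative) has been replaced by a regular expression, using the slash notation described earlier:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="orderId">/[0-9a-f-]{36}/</value> <!-- any UUID-like value matches -->
  </record>
</IDataXMLCoder>
```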