Overview

IwTest provides advanced functionality for constructing, executing and reporting on test cases for services created on Software AG's webMethods™ platform. Its main feature is the ability to record test data and generate test cases. IwTest is implemented in two regular IS packages, and a test suite is just another flow service.

Examples are available in the IwTestSamples package.

Tutorial

A tutorial is available here.

Features

The ability to record service invocations allows you to quickly build a set of regression tests, for example by recording in a QA environment.

Supported webMethods versions

IwTest supports webMethods 8.2 and higher (the sources have been compiled targeting Java 1.6).

Concepts

You can set a test type in a test suite. There is no functionality behind it, other than that it lets you selectively execute test suites of a certain type.

Architecture

The IwTest framework consists of two regular webMethods packages:

  1. IwTest: the core package with the test definition, execution and results functionality.
  2. IwTestAgent: the package with the recording functionality (see Recording service invocations).

The standard setup is to have the two packages on the same IS, be it your local development IS or the central CI (Continuous Integration) IS.

Distributed setup

If you have a distributed environment with multiple IS's and your integrations span those IS's, then you can set up IwTest accordingly: IwTest remains on one central IS, while the IwTestAgent package is also installed on the remote IS's.

Important: The packages with your test suites should be located on the Central IS, i.e. the one that IwTest resides on.

Extra panels now appear in the Recording page to record test suites on the remote IS's. Likewise, in the Test Suites page you can run test suites on a remote IS.

Defining test cases

You can create a test suite manually by creating a flow service that only invokes iw.test.execution.pub:execute and setting the inputs of this service, but it is highly recommended that you Record & Generate test suites (including stubs) instead.

For the simplest test case you define:

  1. Test suite name
  2. Test case name
  3. Service to test
  4. One assertion, either inline or in a file

In addition you can define:
  1. Input data
  2. One or more stubs
  3. One or more callbacks

File paths are relative either to the containing package or to the working directory of the Integration Server. Usually a service takes an input, but it does not have to. You cannot define inputs inline; IwTest only allows inputs that are saved in a file.
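
For example, an input data file is just a saved pipeline in the same IData XML format used throughout this document; you can create one with the standard pub.flow:savePipelineToFile service or via recording. The field names below are illustrative only:

<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="orderId">12345</value>
    <value name="channel">web</value>
  </record>
</IDataXMLCoder>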

If you don't define an assertion, then the test case will be marked as failed.

You can execute your test suite manually from Designer (after all, it's a flow service). The results can be checked in the Test Results page.

Note: IwTest does not contain data generation facilities, nor does it contain additional facilities to do verifications, e.g. accessing a database to check that an entry has been written to a table, or checking a log file. Instead, use the webMethods functionality you're already familiar with to do this. Simply wrap the service you want to write a test suite for in another service (in your test package) and execute any data generation logic and/or verification logic there. Then have IwTest validate the outputs of this wrapper service.
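
As a sketch of this pattern (not the definitive approach), the wrapper below is written directly against the IS Java API; in practice a flow service or a Designer-generated Java service works just as well. The service name acme.orders:processOrder, the field names and the orderRowExists helper are hypothetical and only illustrate how to expose your own verification result as an output for IwTest to validate.

import com.wm.app.b2b.server.Service;
import com.wm.app.b2b.server.ServiceException;
import com.wm.data.IData;
import com.wm.data.IDataCursor;
import com.wm.data.IDataUtil;

public final class OrderTestWrappers {

    // Wrapper around the (hypothetical) service acme.orders:processOrder.
    // IwTest runs a test case against this wrapper and validates its outputs.
    public static void processOrderWrapper(IData pipeline) throws ServiceException {
        IDataCursor pc = pipeline.getCursor();
        String orderId = IDataUtil.getString(pc, "orderId");
        try {
            // 1. Optionally generate or prepare test data here, before the call.

            // 2. Invoke the service under test and copy the output field(s) you
            //    want IwTest to validate into the wrapper's pipeline.
            IData result = Service.doInvoke("acme.orders", "processOrder", pipeline);
            IDataCursor rc = result.getCursor();
            IDataUtil.put(pc, "status", IDataUtil.getString(rc, "status"));
            rc.destroy();

            // 3. Add your own verification result (e.g. a database or log file
            //    check) as an extra output field for IwTest to assert on.
            IDataUtil.put(pc, "orderRowWritten", String.valueOf(orderRowExists(orderId)));
        } catch (Exception e) {
            throw new ServiceException(e);
        } finally {
            pc.destroy();
        }
    }

    // Placeholder for whatever verification logic you need (JDBC lookup,
    // log file scan, ...). Returns true when the expected side effect is found.
    private static boolean orderRowExists(String orderId) {
        return orderId != null;
    }
}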

Validating against regular expressions in expected data

You can use regular expressions in the expected data. This is convenient if a field contains e.g. a timestamp that changes with every invocation. Enclose a regular expression in forward slashes:

<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="color2">/\w+/</value>
    <value name="color1">/black|white/</value>
    <value name="random">/\d\d\d/</value>
  </record>
</IDataXMLCoder>

Common Conventions

Although you can define test cases in the same package that contains your integration logic, it's a common pattern to define test cases in a separate package. You could start out with these guidelines:

  1. Create one test package per integration package and give the test packages a common suffix, e.g. '_Test', so that they can all be selected with a single packageFilter (see Running & Results).
  2. Keep the test data files inside the test package; the generator uses the 'testfiles' folder by default.

Running & Results

There is a special service that will execute all tests it can find: iw.test.execution.pub:run. The results of the test runs are shown in the Test Results page. Optional input parameters let you execute a subset of the test suites.

The results can also be exported in JUnit format for consumption and display by e.g. Jenkins (be sure to use the JUnit plug-in):

# Execute the test suites contained in the packages whose names end in '_Test':
curl http://[host]:[port]/invoke/iw.test.execution.pub:run?packageFilter=.*_Test

# Download the results in JUnit format:
curl http://[host]:[port]/invoke/iw.test.results.pub:export?format=junit > test-results.xml
    

Note: IwTest does not maintain a history of test runs. The Jenkins JUnit plug-in already does a marvellous job at that.

Stubs

A stub is a replacement for a call to a service. This is handy when you want to make your test cases independent of external applications. There are four different types of stubs:

  1. Pipeline: Instead of calling a service, a previously saved pipeline is returned.
  2. Service: Instead of calling a service, another service is called. This can be a service in your test package. Note that you can optionally define a static input for this service.
  3. Exception: Instead of calling a service, you can throw an exception. This is handy for testing error scenarios. Specify the fully qualified exception class name and message.
  4. Dynamic: Based on the input a matching output is returned. You can consider this as a conditional stub.

A Dynamic stub combines the capabilities of the other three.

Note: The recording facility will only create dynamic stubs.

Dynamic stubbing solves the problem that arises when a stubbed service is called multiple times by the service under test, each time with different inputs. Depending on the input, you'd like to return a different output.

Dynamic stubbing only takes one parameter as an input: a 'stub-base-dir'. Place combinations of input/output/error pipelines in arbitrarily named sub-directories. The files themselves must be named 'test-input.xml', 'test-output.xml' and 'test-error.xml'.
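
For example, a stub-base-dir could be laid out as follows; the sub-directory names are arbitrary, only the file names matter:

[stub-base-dir]/
  happy-flow/
    test-input.xml
    test-output.xml
  database-down/
    test-input.xml
    test-error.xml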

During the execution of a dynamic stub the actual input is matched against the test-input.xml files.

Note: Regular expressions in those files are supported.

Note: A (functionally) empty test-input.xml file matches everything:

<?xml version="1.0" encoding="UTF-8"?>

<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
  </record>
</IDataXMLCoder>
    

If a match is found, then either the corresponding 'test-output.xml' or 'test-error.xml' is used (don't specify both). The latter only specifies the exception class name and message (not a pipeline), which are used to create the exception to throw:

<?xml version="1.0" encoding="UTF-8"?>

<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="class-name">com.wm.lang.flow.FlowException</value>
    <value name="message">java.net.ConnectException: database unavailable</value>
  </record>
</IDataXMLCoder>
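
For comparison, a 'test-output.xml' is simply a saved pipeline containing the fields the stub should return; the field names below are illustrative only:

<?xml version="1.0" encoding="UTF-8"?>

<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="status">OK</value>
    <value name="rowsInserted">1</value>
  </record>
</IDataXMLCoder>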
    

Callbacks

This is an advanced feature that allows you to capture the input or output of any service asynchronously and compare the data with what you expected. In this way you can greatly extend the scope of your test case. Suppose, for example, that your top-level service publishes a message which another service subscribes to. Normally what goes on there is beyond the reach of a test case that directly calls a service and compares its outputs. With a callback you can get hold of that data and do assertions against it in your test case. This not only works for services executed asynchronously on the same IS, but also on a remote IS!

The Test Suite page supports configuring Callbacks:

  1. Open a test suite for editing in the Test Suites page
  2. Add a Callback to a test case by clicking the icon next to Callbacks
  3. Select a service by clicking the icon
  4. Select when the callback on the selected service should fire, either on:
    • Start
    • End
    • Success
    • Error
  5. Supply expected data. Note that you can get hold of this data by recording invocations of this service when you execute this test case!
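
For example, assuming the expected data is supplied in the same IData XML format used elsewhere in this document, the expected input of a hypothetical subscriber to a published order document could look like this (regular expressions can again be used for volatile fields):

<?xml version="1.0" encoding="UTF-8"?>

<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="orderId">/\d+/</value>
    <value name="status">NEW</value>
  </record>
</IDataXMLCoder>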

Note that although IwTest does not support subscribing to (JMS) messages directly, you can still test published messages: simply create a trigger in your test package that subscribes to the JMS topic or Broker document type you want to test and have it invoke an empty flow service. A callback on that empty service then gives you access to the published data.

Recording service invocations

The IwTestAgent package contains functionality to record test data as services are executed. You can record service invocations that you run manually from within Designer, but you can also do the recording in another environment, for example in an integration test or QA environment.

Note: Recording obviously has a performance impact. Be careful in high volume environments.

To start recording, go to the Recording page:

  1. Click the icon in the table header to activate recording on that Integration Server. It should change into a blinking icon, which you can click again to stop recording.
  2. Click the icon to select a service. You can directly start typing the service name if you know it, or you can select a package first.
  3. If the service you selected calls sub-services, then you can optionally select one or more of those for recording as well in the box that appears below.
  4. Execute your service one or more times, either from Designer, SOAP-UI or an external application. Notice the increasing counter.
  5. When done, stop recording and generate a test suite by clicking the icon. More on this in the next section.

Recording configuration

In the configuration page you can control the recording behaviour through a number of parameters, such as record.stubs.use.service.signature (see 'A note on dynamic stubbing' below).

Generating test suites

Based on the data you recorded in the previous section you can now generate a test suite by clicking on the icon. This automatically stops recording.

A dialog appears with pre-filled parameters. Accept the defaults or change them to your liking.
  1. Target package: the suggested name is based on the generate.test.package.suffix parameter
  2. Test Suite Service Name: the suggested name is based on the generate.test.suite.service.infix and generate.test.suite.service.suffix parameters.
  3. Test Suite Name: the suggested name is the service name with the colon replaced by a dot. This convention allows for a hierarchical treatment of the test suite by e.g. Jenkins.
  4. Generate Mode: merge or overwrite. Appears if the Test Suite Service already exists.
When you click 'Generate' the following will happen:
  1. A new package to hold the test suite will be created (if it did not already exist)
  2. The recorded data for the service is downloaded from the IwTestAgent and saved to a folder in the target test package, by default: [generate.test.package.folder] = 'testfiles'; underneath this folder another folder [service.name.with.dots] is created.
  3. A test suite flow service is created in the target package. The service does nothing more than call iw.test.execution.pub:execute with fully configured inputs.

To run the test suite, execute this service; it has two optional parameters. IwTest names the test cases with an increasing integer starting at [00001]. It's a good idea to change this and add a meaningful description.

You can freely edit this flow service, rename it or move it around. If you move it to a different package then be sure to also move the corresponding test files.

You can always record additional test cases and add them to the existing test suite flow service. Be sure to set the 'mode' parameter to 'merge'.

A note on dynamic stubbing

There is an inherent problem with recording data for sub-services that you want to stub later in your test case. When a service calls another service in a regular invoke step, webMethods passes the complete pipeline to the called service, although that service might only need a small subset to execute its task. Only when a service is called as a transformer in a MAP step does webMethods pass just the mapped parameters. Likewise, a service may return parameters that are not in its declared output signature.

A generated test case uses so-called 'dynamic stubbing'. This means that a service call is replaced by a general stub service that tries to select the output based on the input. So the actual input is *matched* (yes, you can use regular expressions) against the saved inputs and, on a match, the corresponding saved output is returned. So it clearly makes sense to only consider (and record) the advertised inputs and outputs of a stubbed service.

Now, there may be cases where this doesn't work. It usually means that the developer has been sloppy and either relies on unspecified inputs, or returns more than the advertised outputs. This is not a problem when the test cases are created in a 'development' stream, but it might be unwanted when you create test cases from live data.

This global parameter lets you control whether only the advertised inputs and outputs are recorded or not:

record.stubs.use.service.signature
It defaults to 'true'.

© IntegrationWise, 2019