Defining test cases

You create test cases by using the Record & Generate facility:

  1. Select the service you want to create test cases for
    • Optionally select one or more services for which you want to create stubs
  2. Execute the service with different inputs, e.g. directly in the UI, in Designer or in SOAP-UI
  3. Stop recording and generate a test suite.

This will give you a ready-to-use test suite that you can further edit or execute. Go to the 'Test Suites' page, where you can:

Validating against regular expressions in expected data

You can use regular expressions in the expected data. This is convenient if a field contains e.g. a timestamp that changes with every invocation. Enclose a regular expression in forward slashes. If you don't want IwTest to interpret the regular expression, place a backslash in front of it.

<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="color2">/\w+/</value>          <!-- matches one or more word characters -->
    <value name="color1">/black|white/</value>  <!-- matches 'black' or 'white' -- >
    <value name="random">/\d\d\d/</value>       <!-- matches three digits -->
    <value name="random">\/\d{2}/</value>       <!-- matches exactly '/\d{2}/' -->
  </record>
</IDataXMLCoder>

You can also use regular expressions for fields of other types as well. Suppose this is the actual result:

<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <!-- That's how a long is represented -->
    <number name="millis"           type="java.lang.Long">1234568793212</number>          
    <!-- This is a float -->
    <float  name="fraction"         type="java.lang.Float">823.443</float>                
    <!-- And this is a Date -->
    <Date   name="firstEditionDate" type="java.util.Date">Sun May 15 15:39:41 CEST 2016</Date> 
  </record>
</IDataXMLCoder>

Then you can change the type of the field and use a regular expression. IwTest will first call 'toString()' on the field before the expression is evaluated.

<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="millis"           >/\d{13}/</value>
    <value name="fraction"         >/\d{3}\.\d{3}/</value>
    <value name="fristEditionDate" >/\.*/</value>
  </record>
</IDataXMLCoder>

Additional verifications

If you need to do an additional verification step, for example accessing a database to check that an entry has been written to a table, then use the webMethods functionality you're already familiar with. Simply wrap the service you want to write a test suite for in another service (in your test package) and add any verification logic there. Then have IwTest validate the outputs of this wrapper service.
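
A rough sketch of such a wrapper (all service and field names are hypothetical):

  my_Test.wrappers:processOrderAndVerify     (flow service in your test package)
    INVOKE my.app.orders:processOrder        <- the service under test
    INVOKE a JDBC adapter or database service of your choice, reading back the row that should have been written
    MAP    copy the query result into an output field, e.g. 'rowFound'

IwTest then validates the outputs of the wrapper, including the verification field 'rowFound'.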

Invoking services from IwTest

On the Record page you can execute any service that is available on the IntegrationServer. However, only JSON is offered as a serialization format, so there are some restrictions with respect to the data types you can use.

Manually creating a test suite

You can also create a test suite manually: create a regular flow service that only invokes iw.test.execution.pub:execute and set the inputs of this service (although it is highly recommended that you Record & Generate test suites instead).

For the simplest test case you define:

  1. Test suite name
  2. Test case name
  3. Service to test
  4. One assertion, either inline or in a file

In addition you can define:

  1. Input data
  2. One or more stubs
  3. One or more callbacks

Note: Be sure to define a base directory.

File paths are relative either to the containing package or to the working directory of the IntegrationServer. Usually a service takes an input, but it doesn't have to. You cannot define inputs inline; IwTest only allows inputs that are saved in a file.
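
A minimal sketch of such a flow service (all names below are hypothetical; check the signature of iw.test.execution.pub:execute in Designer for the exact parameter names):

  my_Test.suites:orderProcessing_TestSuite      (a regular flow service)
    INVOKE iw.test.execution.pub:execute
      -- test suite name:  my.app.orders.processOrder
      -- test case name:   order is accepted
      -- service to test:  my.app.orders:processOrder
      -- assertion:        a file with the expected output, relative to the base directory
      -- optionally:       an input file, stubs and callbacks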

If you don't define any assertion, then the test case will be marked as failed when executed.

You can execute this test suite manually from Designer (after all, it's a flow service). The results can also be checked on the Test Results page.

Running & Results

There is a special service that will execute all tests it can find: iw.test.execution.pub:run. The results of the test runs are shown in the Test Results page. Optional input parameters let you execute a subset of the test suites.

The results can also be exported in JUnit format for consumption and display by e.g. Jenkins (be sure to use the JUnit plug-in):

#Execute the test suites contained in the packages whose name end on '_Test':
curl http://[host]:[port]/invoke/iw.test.execution.pub:run?packageFilter=.*_Test
        
#Download the results in junit format:
curl http://[host]:[port]/invoke/iw.test.results.pub:export?format=junit > test-results.xml

Note: IwTest does not maintain a history of test runs. The Jenkins JUnit plug-in already does a marvellous job of that.

Recording service invocations

The IwTestAgent package contains functionality to record test data as services are executed. You can record service invocations that you run manually from within Designer, but you can also do the recording in another environment, for example in an integration test or QA environment.

Note: Recording obviously has a performance impact. Be careful in high volume environments.

To start recording go to the record page:

  1. Click the icon in the table header to activate recording on that IntegrationServer. It should change into a blinking icon, which you can click again to stop recording.
  2. Click the icon to select a service. You can directly start typing the service name if you know it, or you can select a package first.
  3. If the service you selected calls sub-services, then you can optionally select one or more of those for recording as well in the box that appears below.
  4. Execute your service one or more times, either from Designer, SOAP-UI or the external application. Notice the increasing counter.
  5. When done, stop recording and generate a test suite by clicking the icon. More on this in the next section.

Recording configuration

In the settings page you can control the recording behaviour:

Generating test suites

Based on the data you recorded in the previous section you can now generate a test suite by clicking on the icon. This automatically stops recording.

A dialog appears with pre-filled parameters. Accept the defaults or change them to your liking.

  1. Target package: the suggested name is based on the generate.test.package.suffix parameter
  2. Test Suite Service Name: the suggested name is based on the generate.test.suite.service.infix and generate.test.suite.service.suffix parameters.
  3. Test Suite Name: the suggested name is the service name with the colon replaced by a dot. This convention allows for a hierarchical treatment of the test suite by e.g. Jenkins.
  4. Generate Mode: merge or overwrite. Appears if the Test Suite Service already exists.

When you click 'Generate' the following will happen:

  1. A new package to hold the test suite will be created (if it did not already exist)
  2. The recorded data for the service is downloaded from the IwTestAgent and saved to a folder in the target test package (by default [generate.test.package.folder] = 'testfiles'); underneath this folder another folder, [service.name.with.dots], is created (see the sketch below)
  3. A test suite flow service is created in the target package. This service simply calls iw.test.execution.pub:execute with fully configured inputs.
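
A sketch of the resulting test package layout (the package name is hypothetical; 'testfiles' and the service-name folder follow the defaults mentioned above):

  MyApp_Test/
    ns/                                  <- holds the generated test suite flow service
    testfiles/
      my.app.orders.processOrder/        <- [service.name.with.dots]
        ...                              <- the recorded inputs and expected outputs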

To run the test suite, execute this service. It has two optional parameters:

IwTest names the test cases with an increasing integer, starting with [00001]. It's a good idea to change these names and add a meaningful description.

You can freely edit this flow service, rename it or move it around. If you move it to a different package then be sure to also move the corresponding test files.

You can always record additional test cases and add them to the existing test suite flow service. Be sure to set the 'mode' parameter to 'merge'.

Stubs

A stub is a replacement for a call to a service. This is handy when you want to make your test cases independent of external applications. There are four different types of stubs:

  1. Pipeline: Instead of calling a service, a previously saved pipeline is returned.
  2. Service: Instead of calling a service, another service is called. This can be a service in your test package. Note that you can optionally define a static input for this service.
  3. Exception: Instead of calling a service, you can throw an exception. This is handy for testing error scenarios. Specify the fully qualified exception name and message.
  4. Dynamic: Based on the input a matching output is returned. You can consider this as a conditional stub.

A Dynamic stub combines the capabilities of the other three.

Note: The recording facility will only create dynamic stubs.

Dynamic stubbing solves the problem that arises if a stubbed service is called multiple times but each time with different inputs by the service under test. Depending on the input, you'd like to return a different output.

Dynamic stubbing only takes one parameter as an input: a 'stub-base-dir'. Place combinations of input/output/error pipelines in arbitrarily named sub-directories. The files themselves must be named like this:
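
A sketch of such a layout (the sub-directory names are arbitrary and hypothetical; the file names are the ones IwTest looks for, as described below):

  [stub-base-dir]/
    case-001/
      test-input.xml       <- matched against the actual input
      test-output.xml      <- returned on a match
    case-002/
      test-input.xml
      test-error.xml       <- an exception is thrown instead (see below)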

During the execution of a dynamic stub the actual input is matched against the test-input.xml files.

Note: Regular expressions in those files are supported.
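
For example, a test-input.xml like this (the field name is hypothetical) matches any UUID-like value for that field:

<?xml version="1.0" encoding="UTF-8"?>
<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="orderId">/[0-9a-fA-F-]{36}/</value>  <!-- matches a generated uuid -->
  </record>
</IDataXMLCoder>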

Note: A (functionally) empty test-input.xml file matches everything:

<?xml version="1.0" encoding="UTF-8"?>

<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
  </record>
</IDataXMLCoder>
    

If a match is found, then either the corresponding 'test-output.xml' or 'test-error.xml' is used (don't specify both). The latter only specifies the exception class name and message (not a pipeline), which are used to create the exception to throw.

Note: The fields must be named message and (optionally) class-name:
<?xml version="1.0" encoding="UTF-8"?>

<IDataXMLCoder version="1.0">
  <record javaclass="com.wm.data.ISMemDataImpl">
    <value name="class-name">com.wm.lang.flow.FlowException</value>
    <value name="message">java.net.ConnectException: database unavailable</value>
  </record>
</IDataXMLCoder>
    

A note on dynamic stubbing

There is an inherent problem with recording data for sub-services that you want to stub later in your test case. When a service calls another service in a regular invoke step, webMethods passes the complete pipeline to the called service, although that service might only need a small subset of it to do its work. Only when a service is called as a transformer in a MAP step does webMethods pass just the mapped parameters. Likewise, a service may return parameters that are not part of its advertised output.

A generated test case uses so-called 'dynamic stubbing'. This means that a service call is replaced by a general stub service that tries to select the output based on the input: the actual input is matched (yes, you can use regular expressions) against the saved inputs, and on a match the corresponding saved output is returned. It therefore clearly makes sense to only consider (and record) the advertised inputs and outputs of a stubbed service.

Now, there may be cases when this doesn't work. It usually means that the developer has been sloppy and either relies on unspecified inputs, or returns more than the advertised outputs. This is not a problem when the test cases are created in a 'development' stream, but it might be unwanted when you create test cases from live data.

This global parameter lets you control whether only the advertised inputs are recorded or not:

record.stubs.use.service.signature

It defaults to 'true'.

Callbacks

This is an advanced feature that allows you to capture the input or output of any service asynchronously and compare the data with what you expected. In this way you can greatly extend the scope of your test case. Suppose e.g. that your top-level service publishes a message which another service subscribes to. Normally what goes on there is beyond the reach of a test case that directly calls a service and compares its outputs. With a callback you can get hold of that data and do assertions against it in your test case. This not only works for services executed asynchronously on the same IS, but also on a remote IS!

The Test Suite page supports configuring Callbacks:

  1. Open a test suite for editing in the Test Suites page
  2. Add a Callback to a test case by clicking the icon next to Callbacks
  3. Select a service by clicking the icon
  4. Select when the callback on the selected service should fire, either on:
    • Start
    • End
    • Success
    • Error
  5. Supply expected data. Note that you can get hold of this data by recording invocations of this service when you execute this test case!

Instead of supplying expected data explicitly, you can leverage the Repair functionality. Initially you leave the expected data empty, but you uncheck the box for lax verification. Then you execute the test case, which will obviously fail, because there is now unexpected data in the pipeline. Now repair the test case: IwTest will capture all data and add it as expected data to the callback. Run the test case again with the new assertions. You might want to fine-tune the expected data by adding regular expressions or removing uninteresting assertions (after which you need to check the lax box again).

Note that although IwTest does not support subscribing to (JMS) messages directly, you can still test published messages: simply create a trigger in your test package, define an empty flow service and subscribe to the JMS topic or Broker document type you want to test.
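
A sketch of that setup (all names are hypothetical):

  MyApp_Test package
    MyApp_Test.triggers:orderPublished       <- JMS or Broker trigger subscribing to the topic / document type
    MyApp_Test.callbacks:onOrderPublished    <- empty flow service invoked by the trigger

You then define the callback on the empty flow service and supply the expected message data there.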

Repairing test cases

In the Test Suites page you can edit a test suite and execute individual test cases. If one of them fails, an error icon appears next to the test suite name and the repair icon next to the test case becomes enabled. If you click it, IwTest will attempt to repair the test case: the expected results are replaced with the actual results, keeping any regular expressions you might have defined already.

Note: The repair function does not work on inline assertions.

Note: The repair function does not overwrite fields with a regular expression. First remove the regex, and then try repairing again.

Note: The repair function cannot help you if it cannot find matching stub data for a dynamic stub.

Dynamic Code Coverage Analysis

This feature allows you to get insight into which paths have been executed by the test cases. It is disabled by default, but can be enabled by setting the following parameter to true in the Settings page.

execute.coverage.enabled

You start a code coverage session normally in the UI, on the Test Suites page, when executing a single test suite or when running multiple suites.

Programmatically you can trigger a session by passing an extra query parameter 'dcc=true' in the URL:

#Run a code coverage session for the test suites contained in the packages whose name end on '_Test':
curl "http://[host]:[port]/invoke/iw.test.execution.pub:run?dcc=true&packageFilter=.*_Test"
    

There is a summary available in the response. The full report is available here:

#Download the code coverage results (for the current session);
#the format respects the 'Accept' header (application/json, text/html, text/xml):
curl http://[host]:[port]/invoke/iw.test.ui.coverage/getReport
#Download the latest code coverage results (pinned):
curl http://[host]:[port]/invoke/iw.test.ui.coverage/getReport?which=latest-global

In order to download the coverage report in HTML format, you can use the following link:

curl http://[host]:[port]/invoke/iw.test.ui.coverage:exportReport?which=latest-global
  

This will produce a zip file, which you need to unpack first.
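
For example (assuming a Unix-like shell; the file and directory names are arbitrary):

curl "http://[host]:[port]/invoke/iw.test.ui.coverage:exportReport?which=latest-global" > coverage.zip
unzip coverage.zip -d coverage-report
#the report (coverage.html, see the Jenkins note below) is among the extracted files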

Note: Dynamic Code Coverage Analysis only works when executing test cases against the local IS.

Note: The Dynamic Code Coverage Analysis agent is only active in the current session. Service executions done in another session will not be detected. This means that test cases that have callbacks defined will execute paths that will not get recorded.

Note: If you are using Jenkins and want to save (archive) the code coverage report with every build and view the report in Jenkins, then be aware that Jenkins does not allow the loading and execution of JavaScript by default.

If you want to be able to access the coverage report in Jenkins (coverage.html), then make sure that the Content-Security-Policy HTTP header that Jenkins uses allows loading and executing JavaScript. In order to accomplish this, add this parameter to the start command of Jenkins:

-Dhudson.model.DirectoryBrowserSupport.CSP="default-src 'self';script-src 'unsafe-inline' 'unsafe-eval' 'self';"
  

This allows your browser to load and execute JavaScript code served by Jenkins, but not from other sources. For more information on this see the Jenkins documentation.

Migrating WmTestSuites

You can migrate WmTestSuite files to IwTest. This only works if:

The following conversion logic is applied:

Conversion will fail if either a Pipeline Filter or a MockFactory is encountered. Inline assertions, however, will convert just fine, although only AND logic will apply.

At the moment there is no UI support for this, but you can use the following services to migrate WmTestSuite files:

Note: The conversion logic will not touch, move, rename or rearrange any of the input or output files. They remain where they are.

Samples

IwTest provides advanced functionality for constructing, executing and reporting on test cases for services created on Software AG's webMethods™ platform. Its main feature is the ability to record test data and generate test cases. IwTest is implemented in two IS packages. A test suite is just another flow service.

Examples are available in the IwTestSamples package.

Tutorial

A tutorial is available here.

Features

The ability to record service invocations allows you to quickly build a set of regression tests when you record in e.g. a QA environment.

Supported webMethods versions

IwTest supports wM 9.12 and higher (the sources have been compiled targeting Java 8).

Concepts

You can set the test type in a test suite. There is no functionality behind it, other than that it lets you execute test suites of a certain type.

Architecture

The IwTest framework consists of two regular webMethods packages:

The standard setup is to have the two packages on the same IS, be it your local development IS or the central CI (Continuous Integration) IS.

Distributed setup

IwTest supports the following execution scenarios:

  1. Local execution: everything resides on one IS:
    • Application packages
    • Test packages
    • IwTest
    • IwTestAgent
  2. Remote execution on one IS
  3. Distributed execution

Note: Asynchronous test cases with callbacks are supported in all three setups!

Local Recording/Execution

This is the most basic setup. In this case you do not have to set up Remote Agents or define Environments.

Remote Recording/Execution

In this scenario all application logic is executed on one remote IS:

In this case you need to:

When you enable UI support for distributed recording and execution, drop-down menus will appear at the appropriate places, letting you select a remote agent.

Important: The packages with your test suites should be located on the Central IS, i.e. the one that IwTest resides on.

Distributed execution

In a distributed environment you have multiple (physical) ISs that are needed to execute your application logic:

In this case you need to setup Environments, grouping two or more Remote Agents:

When you execute test cases or run a collection of test suites you can now define, in addition to a remote alias, the Environment. During test case execution the following rules now apply for the stubs and callbacks:

  1. Callbacks are setup on each Remote Agent of the specified Environment
  2. Stubs with scope server are defined on each Remote Agent

In the 'Run Test Suites' dialog you will now always see the drop-down menu for selecting an environment.

Note: A Remote Agent may be defined in more than one Environment.

Common Conventions

Although you can define test cases in the same package that contains your integration logic, it's a common pattern to define test cases in a separate package. You could start out with these guidelines:

Checking for updates

In the About page you can manually check whether a newer version of IwTest is available. The URL that is accessed is:

https://integrationwise.biz/iwtest/latest.url

Note: Should you get an error message indicating that the TLS Certificate Chain could not be verified, then add a Truststore to your IS containing the certificate chain of integrationwise.biz.
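
A minimal sketch using the JDK keytool (the file and alias names are arbitrary; you first need to obtain the CA certificate of integrationwise.biz, e.g. via your browser or openssl):

keytool -importcert -trustcacerts -alias integrationwise-ca -file integrationwise-ca.pem -keystore iwtest-truststore.jks
#then register iwtest-truststore.jks as a truststore alias in the IS Administrator UI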

Automatically checking for updates/auto-install

The following setting controls the update behaviour of IwTest:

ui.update.check

It has the following possible values:

  1. update: Check daily for updates and install the latest version automatically
  2. check: Check daily for updates, but do not install
  3. off: Do not check for updates

Note: If enabled, then IwTest checks daily for updates

Usage Notes

Troubleshooting

  1. 'Repair test case' does not work!

    A test case fails when you execute it. The 'repair' icon appears, you press it, but afterwards, the test case still fails.

    Usually this means that the service under test produces different output data every time you execute it. In this case you need to manually fix the test case. Have a close look at the fields for which the validation failed. You generally have two options:

    • Remove one or more lines from the expected output and tick the checkbox 'lax'
    • Replace the value with a regular expression

  2. My callback times out!

    You defined a test case with a callback. When you execute the test case, you get the message 'Callback timed out'

    This usually means that the service actually never executed during the life time of the test case. The first step is to verify that it did. Go to Service Usage and verify.

  3. Test case fails with 'Could not find matching stub data'!

    You generated a fresh test case with a stub, but as soon as you execute it, you get this error message

    This usually means that the service for which you created a stub, is called with dynamic data, for example a generated uuid or a timestamp. Dynamic stubs provide a mechanism to return data depending on the input. If the input varies each time, you need to change the matching criteria. Usually there are two options:

    1. Go to the definition of the test case, locate the dynamic stub and edit the file labeled with 'match'. Either replace the field that contains the dynamic value with a regular expression, or remove the field altogether. Bear in mind that an empty pipeline matches everything!
    2. Replace the dynamic stub with a static one.

© IntegrationWise, 2024