xLM for ServiceNow

Continuous Validation for Continuous Compliance!
Cloud-based service providing documented evidence that the ServiceNow Cloud App has met – and continues to meet – pre-established criteria.

What is xLM for ServiceNow?

xLM for ServiceNow provides an effective mechanism for validating ServiceNow Apps.  xLM is revolutionizing the mundane task of validation with a tool set purpose-built for the Cloud.

xLM has introduced "eXtreme", "agile" and "automated" validation life cycle management that is rigorous enough to meet FDA and EMA regulations while remaining cost-effective.


xLM for ServiceNow - How does it work?

The above diagram depicts the key elements of the xLM Subscription.  Bear in mind that ServiceNow itself is continuously changing; the Continuous Validation Framework is therefore designed to ensure compliance continuously.

  • Requirements Definition: This step provides the foundation.  Functional, non-functional, regulatory, performance, security, interface, and other requirements are clearly specified.
     
  • Risk Assessment: We perform a thorough risk assessment for all requirements.  The output of the risk assessment is applied to the testing strategy to determine: Which features should be tested?  What should be the extent of negative testing?  Which testing techniques should be used (e.g., which data sets, N-pair testing)?
     
  • Specification Definition: Configuration, workflow, interface, security, and other specifications are defined that meet the requirements.
     
  • Test Automation Models:    

Test automation models include both abstract and executable tests.  In short, MBT models replace a traditional UAT (User Acceptance Testing) or PQ (Performance Qualification) protocol.  These models are highly sophisticated, and each model can run hundreds of tests in a few minutes.  Models are highly efficient, fast, compliant, and can achieve coverage levels not possible with manual testing.

The output (evidence, or an "executed protocol" in the traditional sense) is very detailed, and as a result the model is "self-validating".  Each model clearly shows the steps taken, the sequence followed, the coverage achieved, the data used, and the test results (screenshots, etc.).  This set of unique features makes these models highly suitable for validation.

In addition, each model goes through extensive quality checks before it is deployed in a "production" environment.  Once a model passes all the internal quality checks, it is promoted into production, and these details (the model's "license plate") are clearly shown in each execution report.

  • Validation Reporting: A robust ALM (Application Lifecycle Management) tool forms the heart of the xLM Platform.  Our ALM tool provides real-time dashboards, KPIs, summary reports, test deviation reports and more.
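The N-pair testing mentioned under Risk Assessment can be sketched in a few lines. The greedy all-pairs generator below is an illustrative assumption, not xLM's actual algorithm: it keeps adding the full parameter combination that covers the most still-uncovered 2-way interactions, which typically needs far fewer cases than the full cartesian product.

```python
from itertools import combinations, product

def pairs_of(case, names):
    """All 2-way (parameter, value) interactions exercised by one test case."""
    return {frozenset([(a, case[a]), (b, case[b])])
            for a, b in combinations(names, 2)}

def pairwise_suite(parameters):
    """Greedy all-pairs selection: add the case covering the most
    uncovered 2-way interactions until every pair is covered."""
    names = list(parameters)
    uncovered = {frozenset([(a, va), (b, vb)])
                 for a, b in combinations(names, 2)
                 for va in parameters[a]
                 for vb in parameters[b]}
    all_cases = [dict(zip(names, values))
                 for values in product(*(parameters[n] for n in names))]
    suite = []
    while uncovered:
        best = max(all_cases, key=lambda c: len(pairs_of(c, names) & uncovered))
        uncovered -= pairs_of(best, names)
        suite.append(best)
    return suite

# Example: 3 browsers x 2 operating systems x 2 roles = 12 combinations,
# but every 2-way interaction fits into a much smaller suite.
suite = pairwise_suite({
    "browser": ["Chrome", "Firefox", "Edge"],
    "os": ["Windows", "Linux"],
    "role": ["admin", "itil"],
})
```

The loop always terminates: while any pair is uncovered, some full combination still gains at least one new pair, so every iteration shrinks the uncovered set.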

xLM Validation Infrastructure

The above diagram illustrates the key elements of the xLM Platform. 

  1. CORE: ALM (Application Lifecycle Management) Server
    1. Requirements Management
    2. Specification Management
    3. Risk Management
    4. Traceability
    5. Test Case Management
    6. Test Campaign Management
    7. Test Deviation Management
    8. Reporting
    9. Annex 11 & Part 11 compliant
  2. The ALM Server controls the Test Automation Server, where test automation scripts are deployed.  The ALM Server manages test schedules and configuration, and also determines the test environment: for example, the OS and browser type required on the client VM that will run the test automation scripts.
  3. Based on the OS and browser type, the Test Automation Server automatically instantiates a Test VM, which is used to validate the SUT (System Under Test).
  4. All test results are passed back to the ALM Server.
  5. Test Automation Logs and Cloud App Logs are fed into a Big Data log analysis app.  Detailed log analysis is performed on an ongoing basis to ensure compliance, review trends, and more.
  6. The ALM Server provides the single source of truth for Cloud App validation: GxP-compliant summary reports are available for every release of the Cloud App.
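The hand-off in steps 2 and 3, where the ALM Server's environment choice drives Test VM instantiation, can be illustrated with a toy resolver. Every name here (TestEnvironment, VM_IMAGES, provision_test_vm) is a hypothetical stand-in, not xLM's actual interface:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TestEnvironment:
    """Environment spec the ALM Server would hand to the automation server."""
    os: str       # e.g. "Windows 11"
    browser: str  # e.g. "Chrome"

# Illustrative image catalog the Test Automation Server might consult
# when it instantiates a Test VM for the System Under Test (SUT).
VM_IMAGES = {
    TestEnvironment("Windows 11", "Chrome"): "win11-chrome-base",
    TestEnvironment("Windows 11", "Edge"): "win11-edge-base",
    TestEnvironment("Ubuntu 22.04", "Firefox"): "ubuntu2204-firefox-base",
}

def provision_test_vm(env: TestEnvironment) -> str:
    """Resolve the ALM-specified environment to a concrete VM image name."""
    try:
        return VM_IMAGES[env]
    except KeyError:
        raise ValueError(f"No VM image for {env.os} / {env.browser}")
```

Keying the catalog on a frozen dataclass keeps the OS/browser pair a single, hashable unit, so an unsupported combination fails loudly instead of running tests in the wrong environment.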

xLM Subscription Details and Process Flow


xLM - Test Automation Framework

What is Model Based Testing (MBT)

Model-based testing is an application of model-based design for designing and optionally also executing artifacts to perform software testing or system testing. Models can be used to represent the desired behavior of a System Under Test (SUT), or to represent testing strategies and a test environment.

A model describing a SUT is usually an abstract, partial presentation of the SUT's desired behavior. Test cases derived from such a model are functional tests on the same level of abstraction as the model. These test cases are collectively known as an abstract test suite. An abstract test suite cannot be directly executed against an SUT because the suite is on the wrong level of abstraction. An executable test suite needs to be derived from a corresponding abstract test suite. The executable test suite can communicate directly with the system under test. This is achieved by mapping the abstract test cases to concrete test cases suitable for execution.  Because test suites are derived from models and not from source code, model-based testing is usually seen as one form of black-box testing.

Source: wikipedia.org
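The abstract-to-concrete mapping the excerpt describes can be made tangible with a toy example. The state model (a simplified incident lifecycle), its actions, and the adapter below are illustrative assumptions, not xLM's models: the model yields abstract action sequences, and the adapter maps each abstract action to a concrete, executable step.

```python
# A finite-state model of the desired SUT behavior: state -> {action: next_state}.
MODEL = {
    "New": {"assign": "In Progress"},
    "In Progress": {"resolve": "Resolved", "reassign": "In Progress"},
    "Resolved": {"close": "Closed", "reopen": "In Progress"},
    "Closed": {},
}

def abstract_tests(start="New", end="Closed", max_len=5):
    """Enumerate action sequences (the abstract test suite) from start to end."""
    def walk(state, path):
        if state == end:
            yield tuple(path)
            return
        if len(path) >= max_len:
            return
        for action, nxt in MODEL[state].items():
            yield from walk(nxt, path + [action])
    return list(walk(start, []))

# Adapter: maps each abstract action to a concrete executable step.
# Here the "SUT" is just a log; in practice this would drive a UI or API.
CONCRETE = {
    "assign":   lambda sut: sut.append("set assigned_to; state -> In Progress"),
    "reassign": lambda sut: sut.append("change assigned_to"),
    "resolve":  lambda sut: sut.append("set state -> Resolved with notes"),
    "reopen":   lambda sut: sut.append("set state -> In Progress again"),
    "close":    lambda sut: sut.append("set state -> Closed"),
}

def execute(abstract_test):
    """Turn one abstract test into concrete steps and run them."""
    log = []
    for action in abstract_test:
        CONCRETE[action](log)
    return log
```

Note how the abstract suite never mentions screens or fields: that detail lives entirely in the adapter, which is what makes the approach a form of black-box testing.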

xLM Model Based Testing

xLM's test automation models include both abstract and executable tests, as described under "Test Automation Models" above: each MBT model replaces a traditional UAT or PQ protocol, produces detailed, self-validating evidence, and is promoted into production (with its "license plate" shown in every execution report) only after passing extensive internal quality checks.
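The execution evidence described above (steps, sequence, data, results, and the model's license plate) might be structured roughly as follows. The schema and field names are purely illustrative assumptions, not xLM's actual report format:

```python
from datetime import datetime, timezone

def execution_report(model_id, version, steps):
    """Assemble a hypothetical self-validating evidence record for one run."""
    return {
        # Identity of the promoted model ("license plate").
        "model_license_plate": f"{model_id}/v{version}",
        "executed_at": datetime.now(timezone.utc).isoformat(),
        "steps": [
            {
                "sequence": i,                    # order the steps were taken
                "action": s["action"],            # what the step did
                "test_data": s.get("data", {}),   # data used by the step
                "result": s["result"],            # e.g. "pass" / "fail"
                "evidence": s.get("screenshot"),  # path to captured screenshot
            }
            for i, s in enumerate(steps, start=1)
        ],
        "summary": {
            "total": len(steps),
            "passed": sum(1 for s in steps if s["result"] == "pass"),
        },
    }
```

Because the record captures the sequence, data, and per-step evidence alongside the model's identity, a reviewer can reconstruct exactly what was tested and by which promoted model version.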