Olga Sheremeta - Testomat.io Author & Specialist https://testomat.io/author/editor/ AI Test Management System For Automated Tests Wed, 13 Aug 2025 09:06:51 +0000
Playwright MCP: Modern Test Automation from Zero to Hero https://testomat.io/blog/playwright-mcp-modern-test-automation-from-zero-to-hero/ Wed, 06 Aug 2025 10:54:06 +0000

The post Playwright MCP: Modern Test Automation from Zero to Hero appeared first on testomat.io.

Automated testing is now key to making sure web applications work correctly across different browsers. But it is about more than just writing and running scripts automatically: it's about using smart, AI-based systems that understand what you want to test and always give fast feedback.

The Playwright Model Context Protocol fits right in with modern AI automation testing and helps development and QA teams write, manage, and execute automated tests more efficiently while improving test coverage and stability.

In our Playwright MCP tutorial, you will discover more details about the Playwright MCP server, learn how it works, and find out how to set up Playwright MCP and benefit from integration with a test case management system.

What is Model Context Protocol?

Model Context Protocol (MCP) is an open protocol, developed by Anthropic, that standardizes how applications provide context to large language models (LLMs). MCP is like a USB-C port for AI applications: just as USB-C provides a uniform way to connect your devices to various peripherals and accessories, MCP provides a standardized two-way connection through which AI models integrate different data sources, services, and external tools, without requiring a custom integration for each one.

Model Context Protocol (MCP) Architecture

MCP enables you to build agents and complex workflows on top of LLMs, connecting your models with the world, while addressing challenges such as security, scalability, and efficiency in AI-powered workflows.

MCP Architecture: How It Works

MCP follows a client-server architecture in which an MCP host (an AI application) establishes connections to one or more MCP servers. The host integrates AI models with external data sources and tools, including Google Drive, databases, APIs, and more, which in turn make their data accessible via Model Context Protocol servers. Each MCP client in the AI application maintains a dedicated connection with its corresponding MCP server, and every request to the server can provide context to LLMs in real time, allowing them to maintain context across multiple systems.

Components of MCP

Let's look at the architecture in more detail:

  • Hosts – applications the user interacts with – Claude Desktop, an IDE like Cursor, and custom agents.
  • Clients – components that are responsible for requesting and consuming external context from compliant servers. BeeAI, Microsoft Copilot Studio, Claude.ai, Windsurf Editor, and Postman are some of the popular examples of Model Context Protocol clients.
  • Servers – these external programs can make their tools, resources, and prompts available to an AI model through a standard API (Application Programming Interface) and convert user requests to server actions.
  • Local data sources – the computer’s files, databases, and services to which Model Context Protocol servers have secure access.
  • Remote services – external systems that can be accessed over the internet (e.g., through APIs) and connected to MCP servers.

Most developers will likely find the data layer protocol section to be the most useful. It discusses how MCP servers can provide context to an AI application, regardless of where it runs. MCP servers can execute locally or remotely.

How MCP Client Interacts With MCP Server

  1. The MCP client, typically embedded in AI applications, creates a request for specific data or actions.
  2. The MCP client sends requests to the Model Context Protocol server when the AI model needs to access exposed data, tools, resources, or prompts.
  3. The MCP server gets these requests and sends them to the right external program or data source. Then, it handles the processing to make sure that the right data is retrieved.
  4. The MCP server gets the results from the external program once it’s finished. It then safely sends the response back to the Model Context Protocol client for consumption by the AI app.
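On the wire, these requests and responses are plain JSON-RPC 2.0 messages. Below is a sketch of one such exchange; the `request`/`response` wrapper keys are just labels for this illustration, and the `searchProducts` tool and its payload are hypothetical:

```json
{
  "request": {
    "jsonrpc": "2.0",
    "id": 7,
    "method": "tools/call",
    "params": {
      "name": "searchProducts",
      "arguments": { "query": "running shoes" }
    }
  },
  "response": {
    "jsonrpc": "2.0",
    "id": 7,
    "result": {
      "content": [
        { "type": "text", "text": "3 products found: ..." }
      ],
      "isError": false
    }
  }
}
```

The client matches the response to its request via the `id` field, and the `content` array in the result is what the AI application feeds back to the model as context.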

What is MCP Playwright for Automation Testing?

Playwright MCP refers to the Playwright Model Context Protocol server. When Model Context Protocol is combined with the Playwright cross-browser testing tool, it provides browser automation capabilities and utilizes Playwright’s locators to let LLMs or AI agents interact with web pages through structured accessibility snapshots instead of screenshots.

Model Context Protocol in a Playwright implementation

To put it simply, Playwright, known as one of the most popular JS testing frameworks, is exposed through an MCP server that makes its browser-automation tools, services, and data available to AI clients. This setup helps QA teams and developers develop smart test scenarios that can react to dynamic, live data, orchestrate more complex manual and automation workflows, and simulate real-world interactions with the system under test while keeping automation comprehensive and realistic.

 👉 Here is a Playwright MCP example in action:

In e-commerce, the Model-Context-Protocol server could provide a searchProducts(query) function. When Playwright sends a prompt to check how the product search bar on a website works, the MCP server would return relevant product details as if from a live database.

In this situation, Playwright’s test automation script sends a search request, which is the prompt, to the MCP server, and the Model-Context-Protocol server then runs its searchProducts function, retrieves product information, and sends this data back to Playwright, simulating the search results that a user would see in real time. 

Key Features of MCP Playwright

Playwright MCP comes packed with a variety of powerful features that make it a must-have for today’s automation testing. Get to know these features, and you can make the most out of it:

  • Modular communication. Playwright's modular MCP architecture lets you plug in a set of tools, such as test runners, data generators, and smart validators.
  • Tool interoperability. Connecting Playwright to more than one MCP server, each offering specialized tools (for example, visual tools, accessibility checkers, or API fuzzers), lets you create complex Playwright-based test scenarios without bloating your code.
  • Remote execution. Running tests on remote MCP servers in parallel speeds things up and improves scalability.
  • Dynamic tool discovery. At runtime, an AI client can ask a Model Context Protocol server what tools and services are available, so users can build test suites that change and adapt.
  • Structured communication. MCP clients and servers communicate using a standardized format (typically JSON), so data and commands are exchanged reliably.

Playwright MCP AI Workflow

A typical workflow can be divided into the following phases, each with specific objectives:

  1. Setup and Initialization. Firstly, you need the server to be installed and configured so that it can receive commands and translate them into browser actions. Only by establishing the necessary connection can you prepare the environment for the AI agent or LLM to interact with web browsers.
  2. Capability Discovery. At this step, an AI client (e.g., an LLM or an AI agent) queries the Model-Context-Protocol server at runtime to discover what tools and services are available. Whether it is navigating pages, clicking, typing, or taking snapshots, AI needs to understand the full range of actions it can perform on a web page.
  3. Command Generation. Guided by a pre-defined testing scenario, the AI model generates specific commands for the Model-Context-Protocol server in JSON. Then, it translates test requirements into concrete instructions for browser automation, explaining what the browser needs to do.
  4. Browser Execution. At this step, the MCP server receives the commands provided by the AI and uses Playwright to execute them in a real web browser (Chromium, Firefox, WebKit). It interacts with the web page by performing actions like navigating to URLs, interacting with UI elements, and capturing page states.
  5. Contextual Feedback and Iteration. Once a command has been executed, the Model-Context-Protocol server provides rich contextual feedback to the AI (in the form of accessibility tree snapshots of the page). After that, AI analyzes this feedback to refine its next steps, generate further commands, or validate results to reach the desired goal.
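To make the command-generation and feedback phases concrete, here is an illustrative pair of messages between an AI client and the Playwright MCP server. The `browser_navigate` tool name matches recent versions of the Playwright MCP server, but the snapshot text in the result is invented for this sketch:

```json
{
  "command": {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
      "name": "browser_navigate",
      "arguments": { "url": "https://demo.playwright.dev/todomvc" }
    }
  },
  "feedback": {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
      "content": [
        {
          "type": "text",
          "text": "- textbox \"What needs to be done?\" [ref=e3]\n- list [ref=e4]"
        }
      ]
    }
  }
}
```

The AI reads element references like `ref=e3` from the accessibility snapshot and uses them in follow-up commands such as clicking or typing.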

Pros and Cons of Playwright MCP

While Playwright MCP presents several benefits, there are also potential drawbacks to consider. Understanding both can help you make informed decisions about using it.

✅ Pros

  • It allows AI models to identify and interact with web elements based on their context and accessibility, reducing test flakiness caused by minor UI changes and improving the self-healing capabilities of tests.
  • Playwright MCP AI agents can uncover edge cases and unexpected behaviors that static tests might miss.
  • Since the AI can “see” and understand the page, teams spend less time manually updating element locators when the UI changes.
  • AI clients use detailed MCP information (such as page accessibility or network requests) to build smarter, more relevant test flows.
  • It uses modules for AI datatype conversions to process complex transformations between formats efficiently and convert spatial data for web visualization, easing integration into test environments.
  • It provides a standardized communication protocol that integrates with various AI models and platforms, enabling a more flexible approach to AI automation testing.

🚫 Cons

  • You need powerful infrastructure to run an AI client and a Playwright MCP server, especially with a visible browser or many connections.
  • If the AI fails to accurately interpret the web page context and generate appropriate actions, tests can be wrong or fail.
  • Managing complex, multi-step tasks or several AI agents through Playwright MCP can be tricky and takes careful design and debugging to keep all actions coordinated throughout a long user journey.
  • Dev and QA teams must be well-versed in Playwright, test automation, and working with AI models and the Model Context Protocol.
  • If AI models can access live browsers directly through MCP servers without any security measures in place, there is a risk of data theft.
  • Limited support for legacy systems may hinder integration with older web applications, requiring additional adaptation effort.

Why Teams Need to Use Playwright with MCP

  • ✅ Smart test generation. Teams can create test cases automatically from the latest data of the application, which is based on Playwright’s MCP ability to utilize Large Language Models (LLMs) and develop more tests to increase test coverage.
  • ✅ Remote debugging. Software engineers can attach to the same Playwright instance for debugging to identify and resolve issues in real time, without the need to replicate the testing environment.
  • ✅ Shared testing environments. QA engineers can execute tests on a shared Playwright instance without the need to set up separate environments for each team member to accelerate the process.
  • ✅ Live monitoring. Dev teams can monitor ongoing tests in real time, get rapid feedback, and resolve bugs quickly during testing sessions.
  • ✅ Load testing & performance analysis. Teams can measure key performance metrics, such as page loading speed during high traffic, server responsiveness under load, and memory and CPU usage during smoke testing, and optimize accordingly.
  • ✅ Distributed and parallel testing. Instead of running separate Playwright instances for each test suite, teams can launch multiple browser instances in order to improve overall testing efficiency and reduce test execution time.
  • ✅ Adaptive testing based on live data. Teams can get more accurate test results and fine-tune tests using A/B testing and real-time user data.
  • ✅ Self-maintaining test suites. Teams can forget about ongoing script maintenance due to Playwright’s Model-Context-Protocol ability to adapt the test suite to changes in the application and automatically adjust test flows.
  • ✅ Integration and scalability. Teams can automate test creation and maintenance and speed up test cycles thanks to Playwright MCP's integration capabilities with CI/CD pipelines (e.g., GitHub Actions, Jenkins) and tools like Claude Desktop or Cursor IDE.

How to Set Up MCP Server for Playwright

Setting up the Playwright MCP Server requires a few dependencies and configurations to ensure smooth operation. To get started, you need to make sure that certain prerequisites are met:

Prerequisites for MCP Server

  • Node.js. As Playwright and the MCP server rely on Node.js to execute automation scripts, install the long-term-support Node.js version 18 or later for stability and verify that npm works.
  • A compatible browser driver. Make sure the appropriate browser engines are installed, because the MCP server supports Chromium, Firefox, and WebKit.
  • The VS Code Insiders build. This is important, as the Playwright MCP server requires the GitHub Copilot AI agent and certain other extensions to operate, and this full functionality is available only in the Insiders build; the stable VS Code release has not yet rolled out support for it (this was my case as a Mac user).
  • Network configuration. Configure firewall settings and port access to prevent connection issues, and make sure your network lets multiple clients talk to the MCP server.

Explore a couple of expert video guides to assist you through the installation process:

Installing Playwright and MCP Server

Playwright. You need to install it via npm or yarn to interact with web browsers.

npm init playwright@latest

Once installed, you need to verify the installation. It can be done by running:

npx playwright --version

The Playwright MCP server builds on Playwright but ships as its own package, so next you need to install it and enable the Playwright MCP server functionality. You can do it in a few ways:

→ Follow the Playwright GitHub Repo link and trigger the Playwright Server installation:

→ Similarly, on an official Microsoft Visual Studio Extension Page, trigger the Playwright Server:

Run the following command to install the package as a dev dependency:

npm install --save-dev @playwright/mcp@latest

Check the Playwright MCP installation in your IDE 😊 Additionally, in the settings, you can verify that the MCP Playwright functionality you need is enabled.
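For reference, one common way to register the server is a JSON configuration entry like the one below. This is a sketch of the `.vscode/mcp.json` workspace format; the exact file name and schema may differ across VS Code versions:

```json
{
  "servers": {
    "playwright": {
      "command": "npx",
      "args": ["@playwright/mcp@latest"]
    }
  }
}
```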

Copilot Agent Playwright MCP settings in VS Code

Running MCP Server

Once Playwright is installed, you can start the MCP server using the Playwright CLI. You can also configure the launch settings and register this command as a script in the package.json file. Run the following command to start the server:

npx @playwright/mcp@latest

This command initializes a Playwright instance that multiple clients can connect to.

Running Playwright MCP from the VS Code client

It is important to verify that the Model-Context-Protocol Server is running. After launching the server, check the logs to confirm that it’s running successfully. The logs should display connection details, including the WebSocket URL that clients will use to connect.

Connecting Clients to MCP Server

Once the MCP server is running, multiple clients (such as test scripts, automation, or monitoring services) can connect to the shared Playwright session using a basic connection script. You can connect various clients to the MCP server, leveraging its shared Playwright session for efficient automation and testing workflows.

Practical Understanding

  • Shared Session: All clients interact with the same browser instance, so actions (e.g., navigating or clicking) affect all connected clients unless isolated contexts are created.
  • Use Cases: This is useful for distributed testing (e.g., running tests across machines), real-time monitoring, or AI-driven automation (e.g., with GitHub Copilot).
  • Troubleshooting: If connections fail, verify the server is running, the endpoint is correct, and there are no firewall blocks on the port.

Running tests

Once your server is configured, you can run smart test prompts. You can put your scenario in a .txt file and let MCP read the prompt file, interpret the request, generate relevant Playwright test code, and insert it directly into your project, or you can type the prompt yourself in the Copilot Agent window.

Copilot-generated test plan for the Playwright project

I asked the AI agent to generate a plan for the standard ToDo Playwright demo application, and here is what happened: Copilot generated and structured 70 test cases. After that, I asked it to execute this test plan, and the agent provided me with a command and a proposal to run it.

Playwright MCP project example
Test result of the Playwright MCP project

Ultimately, I got this result by executing my AI-generated test plan, with the Playwright MCP server managing the entire process autonomously based on my prompts. It is pure vibe testing!

Challenges and Solutions in MCP Server Playwright

Below, we are going to explore the biggest challenges you’ll face when working with the server and provide practical solutions to overcome these common issues. These are:

🚨 Issue 🔍 Possible Cause 🛠 How to Fix
MCP Server Not Starting
  • Playwright not installed or outdated
  • Port already in use
  • You need to make sure that Playwright is properly installed and up to date. Also, you can check its version and install updates.
  • You need to check if the default port is being used by another application and either stop the conflicting process or change the port to launch the server.
Clients Can’t Connect
  • MCP server not running
  • Firewall or network blocking WebSocket
  • You need to verify the Model-Context-Protocol Server Status to make sure that it is running and accepting connections.
  • If you’re running the server on a network with firewalls or restrictive security settings, you need to check the settings to make sure they don’t block the WebSocket connection between the client and the server.
Debugging is Complex
  • AI logic issue
  • MCP misinterpretation
  • App under test issue
  • Look at the detailed logs from both the AI client and the MCP server, which include snapshots of the page structure and network activity for each step.
  • Apply Playwright’s built-in debugging tools, like the Trace Viewer, together with the logs created by the AI.

Playwright MCP Best Practices 

To get the most out of Playwright MCP Server, here are some best practices to take into account:

  • When you’re running many clients at once, consider connection pooling to cut down on overhead by reusing existing connections rather than constantly creating new ones.
  • The Model-Context-Protocol Server can handle many clients, but too many connections at once can overload it, which could cause slowdowns or failures. Knowing that, you should track resource use, like memory and CPU, to stay within your system’s limits, or set appropriate limits if necessary.
  • You need to check for possible errors, like connection timeouts, pages that fail to load, or network problems, and fix them to prevent crashes or inconsistent results.
  • To stop different tests or clients from clashing, you need to run each set in its own isolated space and use separate browser contexts or tabs to keep tests from interfering with one another.
  • As the server can use a lot of your system’s memory and CPU, you should watch these resources while tests run to keep the server smooth. For big testing efforts, it is essential to consider upgrading your hardware or splitting the work across several machines.

Playwright MCP Integration with Test Management 

When integrating Playwright MCP with an AI-powered Test Management System (TMS) like Testomat.io, you can improve test planning, execution, and review, and make your testing efforts smarter and more automated.

  • With Testomat.io, you can group and link the tests to the requirements. If tests fail, you can create an issue and fix them. Also, you are able to see the percentage of automated test coverage.
  • Testomat.io allows for comprehensive and well-detailed Playwright’s test reports. Artifacts like screenshots, videos, and logs can be automatically uploaded to an S3 bucket and linked to test cases in the Testomat.io dashboard.
  • Testomat.io offers direct integration with Playwright’s Trace Viewer, which can be utilized and linked in the run artifacts to examine snapshots and actions.
  • When integrating Playwright MCP, you can view the history of automated Playwright’s test runs. However, it is important to mention that you need to set up the correct system configurations to make full use of this option.  

Bottom Line 

The Playwright MCP server is a strong add-on for Playwright that makes complex testing easier: multiple users or scripts can work in the same session, boosting teamwork and saving resources. Whether you’re debugging remotely, running tests in parallel, or carrying out load testing, the MCP server helps make your automated testing process smoother.

Contact us if you aim to add Playwright MCP Server to your testing so that your teams can manage tests well, watch progress, and create detailed reports. In addition to that, you can integrate it with a comprehensive testomat.io test case management system, which will guarantee effortless coordination among teams and make your overall testing process more efficient.

TestNG Annotations Tutorial https://testomat.io/blog/testng-annotations-tutorial/ Wed, 30 Jul 2025 09:26:12 +0000

The post TestNG Annotations Tutorial appeared first on testomat.io.

Thanks to rapid digital transformation, software products are created at scale and require testing, and as their complexity grows, they must be kept under strict control. With the help of the TestNG automated testing framework, development and testing teams can automate the testing process, even for legacy code, quickly and hassle-free.

It is a powerful framework offering a variety of features, such as annotations, that enable running test suites in an accurate, organized, and efficient manner. Let's find out more about TestNG annotations, their types, lifecycle, and hierarchy, and review their advantages and disadvantages in the article below:

What is the TestNG Framework?

Developed along the same lines as JUnit and NUnit, TestNG is an open-source test automation framework for Java, suitable for unit testing, integration testing, and end-to-end testing. "NG" stands for Next Generation.

However, in 2025, JUnit 5 is generally regarded as the more modern and future-ready tool compared to TestNG, especially for starting new projects. JUnit 5 continues active development with regular updates and improvements.

TestNG framework was created to make automated testing simpler and more effective thanks to diverse features and capabilities, which include grouping, assertions, simultaneous test execution, parameterized testing, test dependencies, annotations, and reporting.

Applying its useful functionality, especially annotations, during QA testing enables testers to easily organize, schedule, and execute tests. Thanks to its stability, it remains in demand for complex enterprise testing scenarios.

To start, let’s first clarify some essential terminology which relates to TestNG’s annotations.

→ Suite. A suite consists of one or more tests.
→ Test. A test consists of one or more classes.
→ Class. A class consists of one or more methods.

What are TestNG Annotations?

In TestNG, you can use annotations to identify tests, set priorities, and configure other aspects of how the tests should be run. Serving different purposes, annotations are lines of source code included in the program or business logic to control the flow of methods in the test script. They are preceded by an @ symbol and allow certain Java logic to be executed before and after specific points.

TestNG Annotations Hierarchy and Lifecycle

In this framework, there is a lifecycle of annotations that helps teams organize and execute test methods in a logical order. These lifecycle annotations are mainly the before and after annotations that are used to execute a certain set of code before and after the execution of actual tests.

These lifecycle methods are basically used to set up test infrastructure before test execution starts and to clean it up after execution completes. In the picture below, you can see that the method annotated with @BeforeSuite is executed first, whereas the method annotated with @AfterSuite is executed last. Below, you can see the lifecycle and hierarchy of TestNG annotations.

Different Types of TestNG Annotations

Here you can find the TestNG annotations list along with TestNG annotations with examples:

  1. @Test This tells TestNG to execute the method as a standalone, separate test case. Using this annotation, extra attributes can be specified, and tests can be turned on or off.
  2. @BeforeSuite This method will execute before the entire suite of tests. For example, it can be useful for one-time setup tasks, such as initializing a database connection or setting up a global test environment.
  3. @AfterSuite This method will execute after the entire suite of tests. It makes it ideal for global cleanup operations, like closing database connections, tearing down test environments, or generating final reports.
  4. @BeforeTest This method will run before the execution of all the @Test-annotated methods inside a TestNG suite. It is suitable for setting up configurations specific to a particular test run, for example, initializing a browser instance.
  5. @AfterTest This method will run after the execution of all the @Test-annotated methods inside a TestNG suite. It is a good fit for test-specific cleanup tasks, such as closing the group's browser instance or clearing test run data.
  6. @BeforeClass This method will execute before each test class. It is a good fit for setup tasks common to all tests in that class. For example, loading configuration files or initializing a reusable class object.
  7. @AfterClass This method will run after each test class. It is suitable for class-level cleanup tasks, such as releasing resources or shutting down objects created for the entire class.
  8. @BeforeMethod This method will execute before each test method. It is suitable for initializing a fresh browser session, logging in a user, or resetting test data before each @Test method runs.
  9. @AfterMethod This method will execute after each test method. It can be used for logging out a user, closing a browser session, or cleaning up test data.
  10. @BeforeGroups This method will execute right before the first test method of a specific group or set of groups (e.g., smoke or regression). It is perfect when you need to perform setup tasks common to a collection of related tests.
  11. @AfterGroups This method will execute once, after all test methods belonging to specific groups have finished running. It can be applied as an ideal option for performing shared cleanup tasks for a collection of related tests.
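The lifecycle above can be sketched in a single class. This is a minimal illustration, assuming the TestNG dependency is on the classpath; the class and method names are made up, and the numbered comments show the execution order TestNG applies:

```java
import org.testng.annotations.*;

public class LifecycleDemoTest {

    @BeforeSuite
    public void globalSetup()   { System.out.println("1. @BeforeSuite  - one-time suite setup"); }

    @BeforeClass
    public void classSetup()    { System.out.println("2. @BeforeClass  - once before this class"); }

    @BeforeMethod
    public void methodSetup()   { System.out.println("3. @BeforeMethod - before EACH @Test"); }

    @Test
    public void firstTest()     { System.out.println("4. @Test         - firstTest"); }

    @Test
    public void secondTest()    { System.out.println("5. @Test         - secondTest (steps 3 and 6 repeat)"); }

    @AfterMethod
    public void methodCleanup() { System.out.println("6. @AfterMethod  - after EACH @Test"); }

    @AfterClass
    public void classCleanup()  { System.out.println("7. @AfterClass   - once after this class"); }

    @AfterSuite
    public void globalCleanup() { System.out.println("8. @AfterSuite   - one-time suite cleanup"); }
}
```

Running this class with TestNG prints the messages in the numbered order, with the @BeforeMethod and @AfterMethod steps wrapping every test method.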

Test annotations in TestNG have multiple attributes, which can be used for the test method and help define tests and provide clarity in terms of TestNG annotations order of execution used in the TestNG class. These attributes are:

  • alwaysRun: always executes, even if its dependencies or preceding methods fail.
  • dataProvider: specifies the name of a method that provides data for the test method.
  • dependsOnGroups: specifies group names whose methods must run and succeed before this test method (or class) executes.
  • dependsOnMethods: ensures the test executes only if its specified dependent methods run successfully; otherwise it is skipped.
  • description: describes the test method briefly.
  • enabled: the test method or class’s tests are skipped if false. The default is true.
  • expectedExceptions: indicates that a test method is required to throw the specified exception.
  • groups: groups test methods focusing on a single functionality.
  • invocationCount: defines the number of times a test method should be executed.
  • priority: defines the order of execution of test cases.
  • successPercentage: sets the percentage of successful invocations expected for a test that runs multiple times.
  • timeOut: defines the maximum time (in milliseconds) a particular test should take to execute before being failed.
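Several of these attributes can be combined on one method. The following sketch uses hypothetical class and method names and assumes TestNG is on the classpath:

```java
import org.testng.annotations.Test;

public class CheckoutTest {

    @Test(priority = 1, groups = {"smoke"}, description = "User can log in")
    public void login() { /* ... */ }

    // Runs only if login() passes; otherwise TestNG marks it as skipped
    @Test(priority = 2, groups = {"smoke"}, dependsOnMethods = {"login"}, timeOut = 5000)
    public void addItemToCart() { /* ... */ }

    // Invoked 3 times; the method passes overall if at least 80% of invocations succeed
    @Test(invocationCount = 3, successPercentage = 80)
    public void searchIsStable() { /* ... */ }
}
```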

Why Teams Use Annotations in TestNG

  • Thanks to TestNG annotations execution order, teams are in the know about a clear lifecycle of tests, having specific steps, which have been clearly defined, for everything that happens before, during, and after each test or group of tests is executed. 
  • Teams can categorize tests into logical groups in order to run only smoke or regression tests. 
  • With parameterization, teams can execute the same testing logic multiple times with different sets of data to improve test code reusability and eliminate the need to write separate test methods for each data variation.
  • Strongly typed annotations allow teams to get immediate feedback on incorrect configurations and fix problems with their setup before the tests even have a chance to run.
  • When marking methods with specific annotations, every team member has a clear understanding of what the purpose is and can maintain and upgrade tests over time.
  • When teams use annotations, there is no need to extend a base test class, as older versions of JUnit required.
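Grouping ties directly into the testng.xml configuration. As an illustrative sketch (the suite, test, and class names are made up), a suite file that runs only smoke-tagged methods could look like this:

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd">
<suite name="DemoSuite" parallel="tests" thread-count="2">
  <test name="SmokeOnly">
    <groups>
      <run>
        <include name="smoke"/>
      </run>
    </groups>
    <classes>
      <class name="com.example.demo.CheckoutTest"/>
    </classes>
  </test>
</suite>
```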

How To Work With TestNG Annotations: Key Steps

Before using TestNG’s annotations, you need to take into account the following prerequisites:

  • You need to use an IDE like Eclipse or IntelliJ IDEA for easier development of tests; that said, our example TestNG project is implemented with Visual Studio Code.
  • JDK version should be compatible with TestNG and configured.
  • You need to create or launch the Java project where you’ll be developing and running tests.
  • You need to include a Maven/Gradle equivalent dependency or TestNG’s JAR file in your project’s build path to make the annotations available.
  • You need to create and configure testng.xml to fully utilize features like suites, tests, groups, and parallel execution that interact with various annotations.
  • If necessary, you can prepare to include TestNG tests in your CI/CD workflow.

# Step 1: Set up environment

Before building the test framework, check the Java and Maven versions used to run TestNG tests, to ensure compatibility and stable builds:

java -version
mvn -v

You can review the installation details in the official documentation by following these links: Java, Maven

So, my IDE is VS Code, and I have to install the official Microsoft Extension for Java:

Java Pack VSCode TestNG project screen
Official Microsoft Extension for Java Pack VSCode

Click the Install button to download the set of plugins, after which you can freely code in Java within the Visual Studio Code editor.

# Step 2: Create & configure your TestNG framework project

There are two options to create a Maven project in VSCode: using the IDE UI by choosing Maven in the New Project wizard or, as in my case, through the CMD command:

mvn archetype:generate -DgroupId=com.example.demo \
-DartifactId=Demo-Java-TestNG-framework \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false

A brief parameter explanation:

  • -DgroupId: Package name base (like com.yourcompany)
  • -DartifactId: Folder/project name
  • -DarchetypeArtifactId: Type of project scaffold (quickstart)
  • -DinteractiveMode=false: Prevents Maven from asking prompts

Once the project is created, in the editor, you will see an auto-generated basic pom.xml and project structure:

Java TestNG framework screen
Successfully installed Java TestNG framework

Pay attention to the Maven build notification in the bottom right-hand corner. Accept it each time after saving changes in the project, or trigger the build manually with the command:

mvn clean install

# Step 3: Setting up Configuration

First, add the required dependencies via Maven in the pom.xml file:

<dependencies>
  <!-- TestNG -->
  <dependency>
    <groupId>org.testng</groupId>
    <artifactId>testng</artifactId>
    <version>7.9.0</version>
    <scope>test</scope>
  </dependency>

  <!-- Selenium -->
  <dependency>
    <groupId>org.seleniumhq.selenium</groupId>
    <artifactId>selenium-java</artifactId>
    <version>4.20.0</version>
  </dependency>
</dependencies>

# Step 4: Organizing TestNG framework structure

At this step, teams need to organize test suites based on their testing needs, so it is suitable to define the tree structure of our project now.

Demo-TestNG-Login-Project/
├── pom.xml
├── testng.xml
└── src
    └── test
        └── java
            └── com
                └── example
                    └── tests
                        ├── BaseTest.java
                        └── LoginTest.java

The testng.xml file in TestNG serves as an entry point for executing TestNG tests in a controlled and flexible manner. Instead of running all tests in classes, configure it to include or exclude specific groups.

Sample testng.xml Suite File:
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd" >

<suite name="Login Suite">
  <test name="Login Tests">
    <classes>
      <class name="com.example.demo.LoginTest" />
    </classes>
  </test>
</suite>
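If you tag tests with groups, the suite file can include or exclude them selectively. A sketch of such a configuration (the group names "smoke" and "regression" are assumptions for the example):

```xml
<!DOCTYPE suite SYSTEM "https://testng.org/testng-1.0.dtd" >
<suite name="Login Suite">
  <test name="Smoke Only">
    <groups>
      <run>
        <include name="smoke" />
        <exclude name="regression" />
      </run>
    </groups>
    <classes>
      <class name="com.example.demo.LoginTest" />
    </classes>
  </test>
</suite>
```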

Explanation:

  • <suite>: Defines the whole suite of tests. You can give it a name.
  • <test>: It is a logical container for a group of test classes.
  • <classes>: Contains all the test classes to be executed.
  • <class>: Specifies the fully qualified name of the test class.

# Step 5: Writing Tests

At this step, teams can write tests by marking methods of Java test classes with the @Test annotation.

The LoginTest.java file showcases the test automation logic; the example verifies logging in to the system:

package com.example.demo;

import org.openqa.selenium.By;
import org.openqa.selenium.support.ui.WebDriverWait;
import org.openqa.selenium.support.ui.ExpectedConditions;
import org.testng.Assert;
import org.testng.annotations.Test;

import java.time.Duration;
import java.util.List;

public class LoginTest extends BaseTest {

    @Test
    public void loginWithValidCredentials() {
        driver.get("https://www.saucedemo.com/");

        // Wait for the username field to be visible
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("user-name")))
                .sendKeys("standard_user");
        driver.findElement(By.id("password")).sendKeys("secret_sauce");
        driver.findElement(By.id("login-button")).click();


        // Wait for product titles to be visible and verify
        wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector("[data-test='title']")));
        List<String> productTitles = driver.findElements(By.cssSelector("[data-test='title']"))
                .stream().map(element -> element.getText()).toList();
        Assert.assertFalse(productTitles.isEmpty(), "No product titles with data-test='title' are visible on the page.");

    }

    @Test
    public void loginWithInvalidCredentials() {
        driver.get("https://www.saucedemo.com/");

        // Wait for the username field to be visible
        WebDriverWait wait = new WebDriverWait(driver, Duration.ofSeconds(10));
        wait.until(ExpectedConditions.visibilityOfElementLocated(By.id("user-name")))
                .sendKeys("invalid_user"); // Intentionally invalid username
        driver.findElement(By.id("password")).sendKeys("secret_sauce");
        driver.findElement(By.id("login-button")).click();

        // Wait for the error message and verify that the login was rejected
        String errorText = wait.until(ExpectedConditions.visibilityOfElementLocated(By.cssSelector("[data-test='error']")))
                .getText();
        Assert.assertTrue(errorText.contains("do not match"),
                "Expected a login error message, but got: " + errorText);
    }
}

The BaseTest.java class serves as a foundational setup and teardown class for your Selenium TestNG tests. In particular, LoginTest extends it to inherit the common browser setup and cleanup behavior.

package com.example.demo;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.*;

public class BaseTest {
    protected WebDriver driver;

    @BeforeClass
    public void setUp() {
        driver = new ChromeDriver();
        driver.manage().window().maximize();
    }

    @AfterClass
    public void tearDown() {
        if (driver != null) {
            driver.quit();
        }
    }
}
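A note on granularity: @BeforeClass shares one browser across all tests in the class. If you prefer a fresh browser per test, a per-method variant looks like this (a sketch, trading speed for isolation; the class name is illustrative):

```java
package com.example.demo;

import org.openqa.selenium.WebDriver;
import org.openqa.selenium.chrome.ChromeDriver;
import org.testng.annotations.*;

public class PerTestBase {
    protected WebDriver driver;

    @BeforeMethod
    public void setUp() {
        driver = new ChromeDriver();      // fresh browser for every @Test method
        driver.manage().window().maximize();
    }

    @AfterMethod
    public void tearDown() {
        if (driver != null) {
            driver.quit();                // no state leaks between tests
        }
    }
}
```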

# Step 6: Running TestNG framework tests

At this step, teams can execute the tests directly from the IDE or via the build tool, by launching the runner from the command line:

mvn test
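Beyond running everything, the Maven Surefire plugin lets you narrow the run from the command line, for example (assuming a "smoke" group and the testng.xml shown earlier):

```shell
# Run only tests tagged with the "smoke" group
mvn test -Dgroups=smoke

# Run a specific suite file instead of the default discovery
mvn test -Dsurefire.suiteXmlFiles=testng.xml

# Run a single test class
mvn test -Dtest=LoginTest
```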

# Step 7: Reporting

After the TestNG tests are executed, it would be good to know their results 🤔 The test management system testomat.io generates comprehensive reports (HTML and XML) which provide details on results, including passed, failed, and skipped tests, as well as the execution time – how long the tests took to run. Based on such a report, teams know about bugs and are ready to fix them. You can find a Java reporting example with the project details on GitHub here. It takes only a few steps to get integrated reporting quickly:

  1. Add the dependency to pom.xml with a classifier matching your test framework:
    <dependency>
        <groupId>io.testomat</groupId>
        <artifactId>java-reporter-distribution</artifactId>
        <version>0.6.1</version>
        <classifier>junit</classifier>
    </dependency>
  2. Get your API key from Testomat.io (starts with tstmt_)
  3. Pass your API key to the reporter, e.g. as a JVM system property:
    mvn test -Dtestomatio.api.key=tstmt_your_key_here
  4. Run your tests – that’s it! 🎉
TestNG Annotation Run Report screenshot
Rich TestNG Run Report

We can see the error details to find out the reason the test failed.

Stack trace and exception TestNG test screenshot
Stack trace and exception of the failed TestNG test

A smart AI-generated Summary Report appears after each test run, giving you an instant overview without digging into details, which saves testers, developers, and managers time. It also provides valuable insights and suggestions.

AI Testing Assistant
AI Testing Assistant Java Test Automation Reporting

# Step 8: Integrating CI/CD

At this step, teams can configure their CI/CD pipeline to automate TestNG execution on code commits. The pipeline uses the generated TestNG reports to determine whether the build succeeded, provides immediate feedback on code quality, and blocks faulty code from deployment.

CI/CD execution of TestNG tests
CI/CD Test Management integrations

Integration into a CI/CD pipeline with testomat.io adds another powerful layer of test orchestration and traceability. Tags and labels allow smart runs of subsets (e.g., smoke, regression, feature-specific tests) via the CI. When TestNG tests fail in CI, the test management software sends notifications to Slack, Jira, email, and Microsoft Teams.

Advantages and Disadvantages of TestNG

Focusing specifically on annotations, TestNG is more flexible than the JUnit framework. Here is a comparison of its pros and cons:

✅ Advantages of TestNG:

  • Annotations are easy to understand.
  • Easy to group test cases and set timeouts.
  • Parallel testing and cross-browser testing are possible with TestNG.
  • It supports parameterized and dependency tests.
  • Generates HTML reports by default.
  • Organizes tests into suites, groups, and dependents through a hierarchical test structure.
  • TestNG’s listener interface enables the addition of customized setup, teardown, reporting, and cleanup procedures.

❌ Disadvantages of TestNG:

  • It takes time to set up this framework.
  • It is not a good fit for projects that have no need to prioritize test cases.
  • Compared to JUnit, its limited adoption has resulted in a smaller pool of experienced specialists.
  • Requires additional effort to manage complex test dependencies.
  • Needs some effort to customize reports for specific project needs.
  • The hierarchical structure can become hard to maintain with large test sets.
  • Listener implementation can introduce performance overhead.

TestNG Annotations Best Practices

Here are some tips to follow to effectively use TestNG Annotations:

  • You should know the specific TestNG annotation sequence for running tests – @BeforeSuite –> @BeforeTest –> @BeforeClass –> @BeforeMethod –> @Test –> @AfterMethod –> @AfterClass –> @AfterTest –> @AfterSuite
  • You should correctly choose @Before and @After annotations, which define how often your setup/cleanup processes are going to be performed.
  • You should give your @Test methods relevant group names (for example, sanity, regression, integration) to quickly run selected tests without altering your code.
  • You should keep your @Before and @After methods free of complex app logic to maintain the speed and reliability of tests.
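The execution order above can be made visible with a small sketch class; running it under TestNG prints the annotations in lifecycle order (class and method names are illustrative):

```java
import org.testng.annotations.*;

public class LifecycleOrderTest {

    @BeforeSuite  public void beforeSuite()  { System.out.println("1. @BeforeSuite"); }
    @BeforeTest   public void beforeTest()   { System.out.println("2. @BeforeTest"); }
    @BeforeClass  public void beforeClass()  { System.out.println("3. @BeforeClass"); }
    @BeforeMethod public void beforeMethod() { System.out.println("4. @BeforeMethod"); }

    @Test
    public void actualTest() { System.out.println("5. @Test"); }

    @AfterMethod  public void afterMethod()  { System.out.println("6. @AfterMethod"); }
    @AfterClass   public void afterClass()   { System.out.println("7. @AfterClass"); }
    @AfterTest    public void afterTest()    { System.out.println("8. @AfterTest"); }
    @AfterSuite   public void afterSuite()   { System.out.println("9. @AfterSuite"); }
}
```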


Bottom Line: What about using TestNG annotations?

With TestNG, teams can make automated tests more organized, readable, and maintainable. Its annotations allow them to organize and control the flow of test cases. When it comes to scaling and executing cross-browser testing across varied web environments, using TestNG annotation in Selenium is a perfect option.

– Do you have any questions about annotations? 👉 Do not hesitate to contact our specialists.

The post TestNG Annotations Tutorial appeared first on testomat.io.

Playwright Java BDD Framework Tutorial https://testomat.io/blog/playwright-java-bdd-framework-tutorial/ Mon, 28 Jul 2025 08:32:04 +0000 https://testomat.io/?p=20422

The post Playwright Java BDD Framework Tutorial appeared first on testomat.io.

As software complexity grows, teams should react and prevent costly failures. With the Behavior-Driven Development (BDD) framework, product owners, programmers, and testers can cooperate using basic text language – simple Gherkin steps to link scenarios to automated tests and make sure they build the right features and functionalities, which meet the needs of the end users.

Based on a recent report, 76% of managers and employees noted that a lack of effective collaboration and clear communication largely contributes to workplace failure. This makes BDD crucial for organizations: it guarantees that every member of the team is on the same page and has a clear understanding of the desired software behavior. Let’s find out how the BDD framework can transform the way teams build and test today’s modern software products 😃

What is BDD Framework?

Behavior-driven development (BDD) is a software development methodology focused on collaboration between technical and non-technical people – developers, testers, and stakeholders – throughout the project’s lifecycle. Using simple, natural language, teams design apps around the behavior a user expects. They write descriptions in Given-When-Then format based on user stories, before any code is written, and these become the basis for automated test scenarios.

This BDD approach helps developers and business stakeholders establish a clear, common understanding of the product. The idea is to structure business requirements and turn them into acceptance tests. With tests written in plain English, all stakeholders understand and agree on the software’s expected behavior and make sure the right product is being developed. In BDD, teams use the Gherkin language to write scenarios with simple keywords like Given, When, and Then, which describe the behavior of the software.

For example, a Gherkin BDD framework scenario:
Feature: Product Search

  Scenario: Display search results when a user searches for a product
    Given a user is on the website
    When they perform a product search
    Then they should see search results

This test script is then turned into automated tests that check if the software behaves as expected and how it is described.

Key principles of BDD Test Framework

  • Collaboration. The scenarios are written in a way that all team members – developers, testers, and key stakeholders are in the know how the system should behave regardless of their technical expertise.
  • Focus on Behavior. The focus is on the users who are interacting with the product instead of how the software should be built technically.
  • Common Language. Simple shared language is used across the business and technical teams so that anyone can understand business requirements and technical implementation.
  • Living Documentation. BDD scenarios function as a living documentation. Since these scenarios are automated tests, they provide an up-to-date record of how the system behaves.
  • Test Automation. Automating the scenarios allows teams to validate the application behavior once code changes are made. This helps catch regressions early and ensures the system behaves as expected over time.

BDD Framework Life Cycle

BDD life cycle typically includes a series of steps, which make certain that stakeholder communication and the direction of business goals or objectives are unified. Below you can find the key stages:

  1. Discover. At this stage, teams collaborate with stakeholders to gain a comprehensive understanding of the project’s scope, objectives, and requirements and establish a roadmap for the project’s execution.
  2. Write Scenarios in Gherkin. Teams write scenarios in Given-When-Then format to describe the product’s behavior from the users’ perspectives. These scenarios make it easier for the development teams to understand the requirements and for the QA teams to test them properly.
  3. Automate Scenarios. Once scenarios are written, teams convert these plain language scenarios into automated tests using BDD test automation frameworks and tools. These tools parse the Gherkin syntax and map it to test code that interacts with the application.
  4. Test. These automated tests are executed frequently to make sure that the system behavior matches the desired behavior after new code is added or existing code is modified.
  5. Refactor. Teams improve existing code while maintaining behavior without changing the product’s functionality.
  6. Refine and Iterate. Teams update and refine the scenarios to reflect new requirements or changes in the system’s behavior. This creates a feedback loop where the behavior is constantly validated and documented.

Why Use the Playwright Java BDD Automation Framework?

Behavior Driven Development (BDD) with Playwright Java allows you to write tests in a more natural language, which simplifies understanding them and maintaining code quality. Playwright is known as a powerful automation library which enables reliable end-to-end testing across key browser platforms (Chrome/Edge, Firefox, Safari).

Aiming to create robust, understandable, and maintainable automated tests which align with the intended functionality of your application, you can combine BDD principles with Playwright’s automation capabilities in a Java environment.

This approach will work for teams that include non-developers, such as product managers or QA engineers, who need to understand the test cases.

Playwright Cucumber Java Framework: Steps to Follow

So, let’s get started with automation! The typical technology stack for modern Java BDD projects combines Java + Playwright + Cucumber; it is a popular and well-supported choice. Here’s what each part does ⬇

  • Cucumber – handles BDD-style .feature files written in Gherkin (Given-When-Then) syntax
  • Playwright for Java – performs the actual browser automation (clicks, navigation, input, etc.)

Java test automation framework stack includes:

  • Maven – the build automation and dependency management tool for Java; it is the project’s heartbeat.
  • JUnit (usually JUnit5) – test runner; executes the Cucumber tests.

How to set up our BDD Framework?

Initially, ensure that the Java development environment is installed: JDK 17 or higher, and Maven 3.9.3+.

java -version
mvn -v

Node.js (Playwright dependency)

node -v
npm -v

If something is not installed, follow the official links to get started: Java, Playwright, Maven

So, my IDE is VS Code, and I have to install the official Microsoft Extension for Java:

Java Pack Visual Studio Code
Official Microsoft Extension for Java Pack VSCode

Click the Install button to download the set of plugins, after which you can freely code in Java within the Visual Studio Code editor.

#1 Step: Create & configure your BDD framework project

There are two options to create a Maven project in VSCode: using the IDE UI by choosing Maven in the New Project wizard or, as in my case, through the CMD command:

mvn archetype:generate -DgroupId=com.example.demo \
-DartifactId=Demo-Java-BDD-framework \
-DarchetypeArtifactId=maven-archetype-quickstart \
-DinteractiveMode=false

A brief explanation of this command’s parameters:

  • -DgroupId: package name base (like com.yourcompany)
  • -DartifactId: Folder/project name
  • -DarchetypeArtifactId: Type of project scaffold (quickstart)
  • -DinteractiveMode=false: Prevents Maven from asking prompts

Once the project is created, in the editor, you will see an auto-generated basic pom.xml and project structure:

Basic Maven Project

Pay attention to the Maven build notification in the bottom right-hand corner. Accept it each time after saving changes in the BDD framework project, or trigger the build manually with the command:

mvn clean install

#2 Step: Configure Dependencies

The following action is adding dependencies via Maven in the pom.xml file:

  • playwright
  • cucumber-java
  • cucumber-junit

You can check the latest versions of the Playwright dependencies on the Playwright Java page

Playwright dependencies for BDD framework screen
Playwright dependencies for BDD framework

To avoid errors, you can also install the Playwright browsers in one step:

mvn exec:java -e -Dexec.mainClass=com.microsoft.playwright.CLI -Dexec.args="install"

Similarly, you can find the required dependencies in the official Cucumber documentation at the following link, as well as on the JUnit (usually JUnit 5) dependency page. These configurations enable automation of step definitions and browser interactions.

Eventually, this is a minimal BDD framework example of dependencies for Playwright, Cucumber and JUnit:

<?xml version="1.0" encoding="UTF-8"?>
<project xmlns="http://maven.apache.org/POM/4.0.0"
         xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.saucedemo</groupId>
    <artifactId>playwright-tests</artifactId>
    <version>1.0-SNAPSHOT</version>

    <properties>
        <maven.compiler.source>23</maven.compiler.source>
        <maven.compiler.target>23</maven.compiler.target>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>com.microsoft.playwright</groupId>
            <artifactId>playwright</artifactId>
            <version>1.52.0</version>
        </dependency>

          <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-java</artifactId>
            <version>7.23.0</version>
        </dependency>

       <dependency>
            <groupId>io.cucumber</groupId>
            <artifactId>cucumber-junit</artifactId>
            <version>7.23.0</version>
            <scope>test</scope>
        </dependency>
    </dependencies>

  </project>

After saving pom.xml, compile the Maven build again.

#3 Step: Create a Cucumber Runner

Create a TestRunner.java class file. The TestRunner.java class is like the engine that wires everything together. It tells Cucumber how to find and run .feature files.

Example TestRunner Java:

package runner;

import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    glue = "steps",
    plugin = {"pretty", "html:target/cucumber-report.html"},
    monochrome = true
)
public class TestRunner {
}

The BDD framework structure of the project, as you can see in this picture, keeps logic separate and testable:

Structure the BDD framework on Java and TestRunner file
src
└── test
    ├── java
    │   ├── runner
    │   │   └── TestRunner.java
    │   └── steps
    │       └── LoginSteps.java
    └── resources
        └── features
            └── Login.feature

#4 Step: Write feature files with scenarios in Gherkin

Create the .feature files; look at the top of the project tree above for where they are placed 👀

Feature: Login to Sauce Demo

  Scenario: Successful login with valid credentials
    Given I open the login page
    When I enter username "standard_user" and password "secret_sauce"
    And I click the login button
    Then I should see the products page
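The same feature can be extended with a Scenario Outline to run the identical steps against multiple credential combinations (the example values below are illustrative):

```gherkin
  Scenario Outline: Successful login with different valid users
    Given I open the login page
    When I enter username "<username>" and password "<password>"
    And I click the login button
    Then I should see the products page

    Examples:
      | username                | password     |
      | standard_user           | secret_sauce |
      | performance_glitch_user | secret_sauce |
```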

#5 Step: Map steps in Java using Cucumber step definitions

package steps;

import com.microsoft.playwright.*;
import io.cucumber.java.After;
import io.cucumber.java.Before;
import io.cucumber.java.en.*;

import static org.junit.Assert.assertTrue;

public class LoginSteps {
    Playwright playwright; // Variable playwright type of Playwright object
    Browser browser; // Represents a specific browser instance (e.g.Chromium)
    Page page; // Represents a single tab or page within the browser.

    @Before //Hook - runs before each scenario
    public void setUp() {
        playwright = Playwright.create(); //Initializes Playwright engine
        browser = playwright.chromium().launch(new BrowserType.LaunchOptions().setHeadless(false)); //Launches a visible Chromium Browser
        page = browser.newPage(); //opens a new Browser Tab
    }

    @Given("I open the login page")
    public void openLoginPage() {
        page.navigate("https://www.saucedemo.com/"); //Navigates to Log In page
    }

    @When("I enter username {string} and password {string}")
    public void enterCredentials(String username, String password) {
        page.fill("#user-name", username); //Fills in username
        page.fill("#password", password); //Fills password
    }

    @When("I click the login button")
    public void clickLogin() {
        page.click("#login-button"); //Clicks login button
    }

    @Then("I should see the products page")
    public void verifyProductsPage() {
        assertTrue(page.isVisible(".inventory_list")); //Checks if the inventory list is visible
    }

    @After
    public void tearDown() {
        browser.close(); //closes browser page
        playwright.close(); // shuts down playwright engine
    }
}
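Under the hood, Cucumber turns the {string} placeholders into regular-expression capture groups and passes the captured values as the method arguments. This standalone sketch mimics that matching to show where the arguments come from:

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

public class StepParameterDemo {
    public static void main(String[] args) {
        // Roughly how Cucumber matches the step:
        //   When I enter username "standard_user" and password "secret_sauce"
        // against the expression:
        //   I enter username {string} and password {string}
        Pattern step = Pattern.compile("I enter username \"([^\"]*)\" and password \"([^\"]*)\"");
        Matcher m = step.matcher("I enter username \"standard_user\" and password \"secret_sauce\"");
        if (m.matches()) {
            System.out.println(m.group(1)); // standard_user
            System.out.println(m.group(2)); // secret_sauce
        }
    }
}
```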

#6 Step: Run tests via the JUnit runner

mvn clean test
🎉 Output

The run opens a browser using Playwright, navigates to the login page, completes the login, verifies the products page is displayed, and generates a basic Cucumber HTML report.

– So, where can we find this BDD framework’s Cucumber HTML report?

Enter the target folder, scroll down, and open the Cucumber HTML report. It is automatically generated when Cucumber tests run with the proper configuration, namely:

@CucumberOptions(
    plugin = {"pretty", "html:target/cucumber-report.html"}
)
Cucumber HTML Report screenshot
Location Cucumber HTML Report in the BDD project

The Cucumber HTML Report is a simple and quite user-friendly visual representation of the test results of your BDD (Behavior-Driven Development) framework. It shows the feature and scenario breakdown, their steps in detail with results, pass/fail status, error messages and stack traces, and execution time. All in all, it is not very informative, but it is not too bad either.
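If the HTML output is not enough for CI tooling, Cucumber can also emit a machine-readable JSON report. A sketch of the runner with an extra plugin entry (the output paths are assumptions):

```java
import org.junit.runner.RunWith;
import io.cucumber.junit.Cucumber;
import io.cucumber.junit.CucumberOptions;

@RunWith(Cucumber.class)
@CucumberOptions(
    features = "src/test/resources/features",
    glue = "steps",
    plugin = {
        "pretty",
        "html:target/cucumber-report.html",   // human-readable report
        "json:target/cucumber-report.json"    // machine-readable, handy for CI dashboards
    },
    monochrome = true
)
public class TestRunner {
}
```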

What is Playwright Test Report?

Generally, a Playwright test report works as an extensive summary compiled after running a set of automated tests with the Playwright testing framework, indicating which scenarios passed, failed, or were skipped.

With detailed reports, developers and test engineers can quickly identify the root cause of test failures and debug issues in the application code or the test automation itself. They can analyze reports to highlight areas of the application’s behavior that are not yet adequately covered by automated tests and create more scenarios.

The Testomatio Playwright Test Report Key Components

If a simple, basic Playwright or Cucumber HTML report is not enough, our solution is the perfect fit. The test management system testomat.io offers powerful Reporting and Analytics across different testing types.

In this test reporting, you can find the following information:

  • Manual testing, as well as automation testing, in one place.
  • Customizable test plans, selective test case execution. Easy to share it with stakeholders.
  • Information on test status – which tests have passed, failed, or been skipped.
  • Descriptions of any errors, mentioning the type of error and the location.
  • How long the test runs in order to identify slow tests and areas which cause performance delays.
  • Information about test coverage.
  • Screenshots or video recordings of the test execution to better understand the test results.
  • Full test run history and clear progress tracking.
  • Detailed logs that can help developers debug issues and offer visibility into browser actions, network requests, and responses.
  • Moreover, actionable analytics with a wide range of metrics and insights to support decision-making.

Start by adding the Maven Surefire plugin, which produces JUnit XML result files, to the pom.xml file:

  <build>
    <plugins>
      <plugin>
        <groupId>org.apache.maven.plugins</groupId>
        <artifactId>maven-surefire-plugin</artifactId>
        <version>3.2.5</version> <!-- or the latest -->
      </plugin>
    </plugins>
  </build>

Sign up or log in to the TMS, create your test project by following the system’s guidance, and then import your BDD tests. Just follow the smart tips the test management UI offers.

Test Management reporter screen
How to install a custom Reporter for the BDD Framework

This is the result of synchronizing manual and auto tests in a common repository.

test management for BDD testing screen
Example sync manual and auto BDD test cases in one repo
Gherkin Editor screen
BDD scenario visualization in Gherkin Editor

In addition, testomat.io offers a unique feature that automatically converts classical manual test cases into BDD (Behavior-Driven Development) format and imports detected steps into the Reusable Steps Database. This capability is especially useful for teams transitioning from traditional manual QA workflows to modern, executable BDD-style automation.

Example of Playwright Report screen
Example of Playwright Report

It seems this report offers a more polished presentation than the standard Cucumber report, doesn’t it?

Advantages of BDD Playwright Java framework

  • Step Definition Mapping. Teams can use Given/When/Then annotations with accurate regular expressions to link Gherkin steps to Java methods.
  • Playwright API Interaction. Teams can apply Playwright’s Page, Locator, and browser management APIs within step definitions in order to automate browser actions and assertions.
  • Test Reporting. Teams can specify the paths to feature files (features) and step definition packages (glue) containing the automation code, set up how test results are reported, and get meaningful feedback once tests are executed.
  • Parameter Passing in Steps. Teams can use capture groups in regular expressions within step annotations to pass data from Gherkin scenarios to Java methods. This allows them to write more reusable step definitions that handle various data inputs from the feature files and cuts down code duplication.
  • Assertions. With assertion methods within step definitions, teams can verify that the actual application’s behavior matches the expected outcomes defined in the Gherkin scenarios, which makes the tests reliable and verifies the software works as designed.
  • Selector Strategies. With Playwright’s selector types (CSS, XPath, text-based) and reliable web element identification, teams can write automation code that accurately targets and interacts with specific UI elements, even after changes in the application’s structure or styling.
  • Handling Asynchronous Operations. By understanding how Playwright waits for the browser’s asynchronous behavior and handling it properly, teams can prevent their automation code from proceeding before UI elements are fully loaded or actions are completed, which contributes to more reliable, less flaky tests that accurately reflect user interactions.
  • Integration with CI/CD. Teams can configure the build process to execute tests and generate reports in a continuous integration environment.


Disadvantages of BDD test framework with Playwright Java

  • Steeper Learning Curve. If teams are new to both BDD principles and Playwright Java, mastering them requires significant effort.
  • Complex Project Setup. Setting up the necessary dependencies (Cucumber, Playwright, Test Runner, reporting libraries) in a Java project can be more involved than setting up simpler testing frameworks.
  • Too UI-centric Scenarios. Teams might fall into the trap of writing excessively detailed Gherkin scenarios that become difficult to maintain and understand. Scenarios should focus on business value, not low-level UI interactions.
  • Not a Replacement for All Testing. While BDD with Playwright Java focuses on end-to-end or integration software testing from a user’s perspective, it doesn’t serve the purpose of unit tests or API tests.
  • Slower Performance. Running end-to-end tests driven by Cucumber and Playwright can be slower than unit or integration tests. While Playwright is generally fast, the overhead of interpreting Gherkin and orchestrating browser actions can add to execution time, especially for large test suites.
  • Maintenance Challenges. Playwright tests are susceptible to changes in the application’s user interface, meaning even minor UI modifications can break a significant number of scenarios and require frequent updates to reflect changes in UI elements, workflows, or data.
  • Synchronization Issues. Web applications can be asynchronous, and handling synchronization (waiting for elements to load, animations to complete) in Playwright Step Definitions requires careful implementation to avoid flaky tests.
  • Cooperation Problems. If business stakeholders are not involved in writing and reviewing feature files, the scenarios might not accurately reflect business needs.

Bottom Line: Ready To Develop The Right Product The Right Way with BDD Playwright Java?

When it comes to incorporating the Behavior-Driven Development (BDD) testing framework, organizations need to remember that it is not just a methodology; it’s a mindset. Its adoption becomes a crucial strategy for organizations that want to revolutionize how they approach the software development process.

With BDD practice in place, you can improve communication, catch bugs early, enhance documentation, and increase test coverage. Contact our specialists if you aim to navigate software development complexities and need a working approach like BDD to develop features that are well understood by both technical and business stakeholders. Only by using the right BDD tools and frameworks can you unlock BDD’s full potential and achieve success in your projects.

The post Playwright Java BDD Framework Tutorial appeared first on testomat.io.

]]>
AI Agent Testing: Level Up Your QA Process https://testomat.io/blog/ai-agent-testing/ Sun, 27 Jul 2025 08:49:52 +0000 https://testomat.io/?p=21473 Unluckily, QA and testing teams have to cope with challenges in meeting the demands of modern software delivery and fail when balancing high-quality releases with effective bug findings along the way in the traditional testing process. Thus, poorly developed software products can ruin the user experience and the company’s name and reputation. The new trend […]

The post AI Agent Testing: Level Up Your QA Process appeared first on testomat.io.

]]>
Unfortunately, QA and testing teams struggle to meet the demands of modern software delivery and often fail to balance high-quality releases with effective bug finding in the traditional testing process.

Thus, poorly developed software products can ruin the user experience and the company’s name and reputation.

The new trend of applying AI agents in testing can minimize human errors, make the QA teams more productive, and dramatically increase test coverage. In the article below, you can find out what AI-based agent testing is, discover types of AI agents in software testing and their core components, learn how to test with AI agents, and much more.

What is an AI Agent?

An artificial intelligence agent, or AI agent for short, is a program or system that uses artificial intelligence technology to complete tasks and meet user needs. Thanks to its ability to reason logically and understand context, an AI-backed assistant can make decisions, learn, and respond to changes on the fly. In most cases, AI agents or AI-powered assistants are characterized by the following:

  • They can carry out repetitive or expert tasks and can even replace a whole QA department.
  • They can function autonomously when there is a need to attain defined goals (often without constant human intervention).
  • They can be fully integrated into organizational workflows.

What is AI Agent Testing?

When we talk about AI testing agents or assistants, we mean smart systems that apply artificial intelligence (AI) and machine learning (ML) to perform or assist in software testing tasks.

They replicate the work of human testers, including test creation, execution, and maintenance, with limited manual involvement from specialists, operating under parameters those specialists define. It is helpful to have AI-powered assistants in the following situations:

  • With their help, anyone in the team, even without technical expertise, can create and maintain stable test scripts in plain English.
  • They can automatically adjust, fix, and update tests in terms of the system changes, requiring less effort from human testers.
  • They can suggest how to make your tests better.
  • They can automatically run manually created test cases with minimal direct oversight from QA specialists.

Types of AI Agents in Software Testing

Below, you can find information about widely known categories of AI agents in software testing, based on their roles and capabilities:

✨ Simple Reflex QA Agent

Reflex Agents use if-then rules or pattern recognition. They follow specified instructions or fixed parameters with rule-based logic and base their decisions on current information. As the most basic type, these AI agents perform tasks based on direct responses to environmental conditions, which include diverse OSs and web browsers, network connections, structured and unstructured data formats, poorly documented APIs, and user traffic. For example, a task for a simple reflex agent can be to detect basic failures (e.g., 404s, missing elements), log errors, and take screenshots when it detects an error message on the screen.

✨ Model-based Reflex Test Agent

These agents are intelligent enough to act on new data not just directly, but with an understanding of the context in which it is presented. They consider the broader context before responding to new inputs and can simulate user flows or business logic. Model-based reflex agents are used to perform more complex testing tasks because their decisions are based on what they remember and know about the situation around them.

For example, this type of agent remembers past login attempts. If a user fails to log in multiple times, it will attempt the Forgot Password flow or flag a potential account lockout instead of just logging an error.

✨ Goal-based Agents

Agents of this type are also called rule-based agents because they follow a set of rules to achieve concrete goals. They choose the best tactics or strategy and can use search and planning algorithms to reach what they want. For example, if an agent’s goal is to find every unique error or warning, it can create test scripts that identify only unforeseen errors and decrease re-testing effort.

✨ Utility-based AI testing Agent

These agents are capable of making informed decisions. They can analyze complex issues and select the most effective options for the actions. When predicting what might happen for each action, they rate how good or useful each possible outcome is and finally choose the action most likely to give the best result.

For example, instead of just running all tests like goal-based agents or reacting to immediate errors like simple reflex agents, these agents include a utility function that lets them rate situations and skip less critical checks in order to find high-severity bugs.

✨ Learning Agents

These assistants can use past experiences and learn from past mistakes or code updates. Based on feedback and data, they can adapt and improve themselves over time. For example, QA teams can use learning agents when there is a need to optimize regression testing. AI-powered assistants self-learn over time from previous bugs and codebase changes, prioritizing areas with frequent failures and focusing the team’s attention on what matters most.

The core components of AI agent for QA Testing

The essential parts of an AI agent for QA testing allow it to intelligently analyse, learn, adapt, and act during the software testing process. Let’s explore them more:

  • Perception (Input Layer). Collects data from the environment. It might be code changes, test results, test execution logs, test diff analysis, API responses or some patterns in the test project.
  • Knowledge Base. Stores the historical information the agent learns from. Its contents typically include past bugs and their root causes, test coverage data, and frequently failing components. This helps the testing agent make informed decisions based on context and experience.
  • Reasoning Engine (AI agent brain). Makes decisions based on current input and the knowledge scope. AI agents use techniques such as rule-based logic, impact analysis, risk-based prioritization, and dependency graph evaluation to process information. For example, the agent can decide which tests to execute based on recent code changes.
  • Learning Module. AI agents learn from user stories and test cases by training, adapting to patterns in them over time. This continuous learning helps reduce false positives and minimize test noise, improving overall test reliability. By leveraging natural language processing (NLP), they can predict potential test failures, detect flakiness and intelligently optimize the sequence in which tests are executed.
  • Action (Execution Engine). Performs tasks based on decisions, for example: generating new test cases, selecting and proposing the most relevant tests for execution, reporting defects, or opening issues.
  • Feedback Loop. Analyzes test engineers’ feedback on false positives to improve future answers. It enables continuous learning and self-improvement of the AI agent.
  • Integration Layer. This layer connects the agent to external tools (test frameworks, bug trackers, project management tools like Jira, documentation extensions such as Confluence, and CI/CD pipelines), allowing it to both send and receive data. The incoming information supports the agent’s perception and learning modules, enhancing its ability to respond in a reasoned and intelligent manner throughout the testing process.

So, these components work together to support automated, efficient, and smarter testing.
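To make the reasoning-engine step concrete, here is a toy sketch (in TypeScript, with entirely hypothetical test and file names) of one rule such an engine might apply: select only the tests whose covered source files were just changed.

```typescript
// Toy knowledge base: which source files each test covers (hypothetical names).
type DependencyMap = Record<string, string[]>;

const knowledgeBase: DependencyMap = {
  "login.spec": ["src/auth.ts", "src/session.ts"],
  "checkout.spec": ["src/cart.ts", "src/payment.ts"],
  "profile.spec": ["src/auth.ts", "src/profile.ts"],
};

// Reasoning-engine rule: run a test if any file it covers was changed.
function selectTests(changedFiles: string[], deps: DependencyMap): string[] {
  return Object.entries(deps)
    .filter(([, files]) => files.some((file) => changedFiles.includes(file)))
    .map(([testName]) => testName);
}
```

With `changedFiles = ["src/auth.ts"]`, this rule picks `login.spec` and `profile.spec`. A production agent would derive the dependency map from coverage data in its knowledge base rather than hard-coding it.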

AI Testing Agent Components scheme
Basic interaction between AI Testing Agent Components

Considering this flow… 🤔 What do you think: where should test engineers focus the most? Right! On their prompting and on filling in artifacts and documentation. Moving on to our next section:

What kind of input does an AI agent depend on?

First, focus on the efficiency of your inputs — specifically, the prompts — since the accuracy and usefulness of the results you get directly depend on how your instructions are formulated. Today, prompting has become a key skill for modern QA engineers working alongside AI-driven tools. Thus,

What is prompt engineering in software testing?

It refers to the specific questions, instructions, and inputs given to AI agents to guide their responses; with well-crafted prompts, they can generate and improve tests, highlight potential faults, and assist with analysis and decision-making.

5 basic rules for prompting in software testing

1. Be clear, avoid vague instructions. Define exactly what you want: test type, tool, scenario, outcome.

Example of right prompt to generate test cases:

✅ Generate test cases for the login form with email and password fields and a Sign In button to verify they work.
❌ Instead of just: Write a test for the form.

2. Provide Context. The more context, the better the result. Include the user story, functionality, code snippet, or bug description.

Example of a prompt based on a user story:

✅ Based on this user story <User Story RM-1523>: As a user, I want to reset my password via email, suggest edge cases for testing.
❌ Generate tests for password.

3. Include expected behavior or test goals. This helps the AI understand the validation points or test criteria. Also, use terms like edge cases, negative testing, or happy path to guide the logic.

Example:

✅ Write a test case that ensures users are redirected to the dashboard after logging in.

4. Avoid overloading the prompt. Do not cram too many instructions into one sentence; it may confuse the model and reduce output quality. It is better to break large tasks into smaller steps, using step-by-step or follow-up prompts.

Example of step-by-step prompt

✅ Start with: “What should I test in…” then ask for detailed test cases or code.

5. Mention the tool, framework, and format. If you are using a specific stack (like Playwright, Cypress, Selenium), or need the output in a particular form (a checklist, code, a Gherkin test case, etc.), just say so in the prompt.

Examples:

✅ Write Gherkin-style scenarios for login functionality in Playwright with invalid credentials.
✅ Return the test cases in a markdown table format.

Reasonable artifacts in input data equal efficient AI prompting

Secondly, always remember to keep your artifacts well-structured. Avoid inflating your test project with excessive, unused test cases — that’s a red flag. Prioritize only what’s necessary. This discipline is especially important for AI agents, as they rely on clear, relevant artifacts and use them in the following ways:

  • Requirements & Specifications. AI agents should have access to a detailed description of the intended purpose and environment of the system under test. Knowing functional and non-functional requirements allows assistants to better understand what the system should do and how well it should perform in terms of speed, usability, security, etc.
  • Existing user stories. AI agents need access to user stories to investigate the desired features from a user’s perspective, which allows them to simulate realistic user journeys and test the end-to-end experience.
  • Test Cases. AI-based agents need access to test cases because these give them an understanding of what steps to take and what situations to test when checking software. With this information, assistants can make sure that the software works correctly and can find any bugs.
  • Bug Reports. You can find them in Jira, Bugzilla, Linear, or internal analytics dashboards, as well as external tools like testomat.io Defect Tracking Metrics. AI agents link to them to reproduce bugs and identify the cause of failures in runs. AI can also summarize bug trends to inform QA decision-making in bug prevention.
  • Reporting and Analytics metrics. AI agents collect data from tools like Allure reports, CI/CD pipelines, and test dashboards. The AI evaluates test duration, failure trends, and pass/fail consistency. Frequent or critical test failures are flagged for priority investigation. AI agent provides suggestions for fixing unstable tests. Also learns which tests are most valuable for regression based on historical value. Based on these insights, it recommends test automation optimization.
  • Documentation. With access to the testing documents, AI agents know what the software should do and what its goals are. These documents also tell them exactly what to test and give clear rules and expected results (passed or failed tests). Also, AI-based agents can run existing tests and learn from past results in reports to carry out smarter and more effective testing.

Choose the best Artificial Intelligence testing agent

The test management system testomat.io is a modern AI-powered test management tool that helps you easily develop test artifacts and organize test projects with maximum clarity.
It is more than just a data repository for storing your test artifacts; it offers powerful, AI-driven functionality to accelerate your QA process. AI orchestration is integrated across your entire test lifecycle — from requirements to execution and defect tracking — while synchronizing automation and manual testing efforts, supported by numerous integrations and AI enhancements.

Other strengths are Collaboration and Scalability. QAs in the team can easily share their test result reviews, flexibly select tests for test plans and runs, adapting AI suggestions to fit their specific needs.

The AI Assistant works at the following levels:

Generative AI and Chat with test modes provide direct interaction with the test case suites using natural language — it looks like chatting with a QA teammate. You can generate new test cases or refactor existing ones by automating these repetitive test tasks; manage suggestions, map tests to requirements, identify test gaps and flaky scenarios, and gain a clearer understanding of your test coverage to improve it continuously.

AI Test Management testomatio functionality screen
Chat with test as a part of AI Test Management functionality
For instance, sample prompt questions:

→ What does this test case do?
→ Write edge tests for password reset
→ Rewrite this test to be more readable
→ Which parts of the app are under-tested?
→ Find duplicates
→ Map these test cases to requirements
→ To get better feedback, teach the MCP AI model with gradual clarifying follow-ups.

AI agent is an intelligent automation component that serves as a bridge between the test management system and the test automation framework ecosystem. This AI agent actively learns, analyzes, and optimizes the testing process. Namely, it can suggest clear, easy-to-understand test descriptions for automated tests — making them accessible even to non-technical stakeholders (Manual QAs, Business Analysts) — and can automatically transform your project into Behavior-Driven Development (BDD) format. Additionally, it detects flaky or failing tests based on execution history.

AI-powered AI Agent screen capabilities
Test Management AI Agent

AI Reporting and Analytics is also our strength. Insights from AI Assistant are not hidden — they are delivered through suggestions in the UI of the test Report. Now the development team has implemented two kinds of AI extensions inside the Report — Project Status Report and Project Run Status Report. These reports are available automatically based on recent test run history. They deliver instant visibility into the health of the test project without delving into individual Test Archive logs.

AI Agent Testing report screen
Run Status Report generated by AI Assistant in TMS Report

The AI testing agent provided by testomat.io is an intelligent testing co-pilot. It empowers testers to move faster, test smarter, and reduce risk — all with less manual effort. Below, we break down its capabilities in action and show how its workflow operates.

The Use of AI Agents for Software Testing

AI-driven agents are changing the way software QA engineering teams do their work, enabling them to make the testing process faster, more reliable, and more efficient. Here are the key areas where AI assistants are considered the most reliable helping hand:

✨ Test Case Generation

To speed up the process of creating different test suites, QA and testing teams can use artificial intelligence assistants, which take into account the software requirements and are able to turn simple instructions into test scripts on the fly. Based on Natural Language Processing (NLP) and Generative AI, this process happens much faster and covers a wide range of situations, which would have taken much longer if done manually by human QAs and testers.

✨ Test Case Prioritization

AI-powered assistants analyze previous test results, code changes, and defect patterns, which help them decide the most effective sequence of test runs. Instead of relying on fixed or random order, these models use data from prior test executions to prioritize tests and optimize the selection of test cases.
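One simple way to sketch such data-driven ordering (the fields and numbers below are hypothetical; a real agent would learn the weighting): rank each test by its historical failure rate plus a bonus for recent failures.

```typescript
// Execution history per test (hypothetical fields and values).
interface TestHistory {
  name: string;
  runs: number;
  failures: number;
  lastFailedDaysAgo: number;
}

// Score = historical failure rate + recency bonus; higher scores run first.
function prioritize(history: TestHistory[]): string[] {
  const score = (t: TestHistory) =>
    t.failures / Math.max(t.runs, 1) + 1 / (1 + t.lastFailedDaysAgo);
  return [...history].sort((a, b) => score(b) - score(a)).map((t) => t.name);
}
```

A test that failed 30 times out of 100 runs and failed again yesterday would be scheduled ahead of one that failed once two months ago, so the riskiest areas get feedback first.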

✨ Automated Test Execution

AI-based assistants/agents are capable of running tests 24/7 without QA specialists’ involvement. When the source code changes, test suites are automatically triggered, enabling continuous testing and fast feedback. In addition, integrations with test case management systems allow bugs to be reported and all updates to be automatically shared with relevant teams and stakeholders.

✨ Shift-Left Testing

In shift-left testing, AI-based agents deliver faster execution and identify bugs quickly, which enables developers to resolve issues earlier. AI-powered agents can also adapt to evolving project requirements to suggest relevant tests to run based on code changes.

✨ Test Adaptation

Thanks to self-healing capabilities, AI agents can respond to changes in the application’s interface and adjust their actions based on what has changed. They can handle UI, API, or backend modifications while maintaining automated tests whenever the codebase changes.

✨ Self-Learning

Thanks to AI agents’ ability to learn from previous findings from tests, they can analyze trends and patterns from past testing cycles, which helps them predict future test results. When learning and adapting, assistants are getting better at identifying potential bugs and making decisions in a jiffy to proactively address them.

✨ Visual Testing

Backed by computer vision, agents can detect UI mismatches across various devices and screen sizes. They verify the aesthetic accuracy of the visible parts that users interact with. AI-based agents aim to find visual ‘bugs’ – misaligned buttons, overlaid visuals (images, texts), partially visible elements – which might be missed during traditional functional testing.

✨ Test Result Analysis

AI agents can review test results on their own in order to find failures and group similar defects. They also point out patterns in the data, which helps them detect the root cause faster and focus on what matters most – issues that may lead to vulnerabilities in the system.

Overview: Pros and Cons of AI agents for software testing

This overview compares the advantages and disadvantages of AI agent testing (agentic testing), reminds us about common AI hallucination troubles, and provides a balanced view of AI’s role in testing processes.

✅ Pros of agentic testing:

  • Generating test cases that humans might miss, improving test coverage.
  • Updating test cases automatically in response to changes in the code.
  • Running test suites faster, accelerating the release cycle, and reducing manual effort.
  • Predicting potential bugs based on previous test data.
  • Identifying and fixing broken tests.
  • Self-learning capabilities and adapting testing strategies or techniques based on feedback.

❌ Cons of AI agent testing:

  • Inability to understand the wider context – user intent, business logic, and non-functional requirements that a human tester would comprehend.
  • Generating test cases that trigger false positives or false negatives and require careful review before implementation.
  • Requiring ongoing maintenance and updates to adapt to evolving testing needs.
  • Having blind spots or producing inaccurate predictions when training data is poor and does not cover a broad range of test scenarios and edge cases.
  • Lack of human intuition in complex scenarios.
  • Over-reliance on AI can decrease critical human oversight among test engineers, especially for risks that would previously have been caught by senior and QA manager roles.

How to Test with AI Agents: Basic AI Workflow

When it comes to the testing process for software products, it is essential to note that AI agentic workflows follow an Agile process and go beyond simply handling repetitive tasks. QA teams should define roles, decide what to test, and choose which AI agent tool to use.

AI agent testing workflow schema
Basic AI Workflow within Test Management

*This is a simple AI agent testing workflow example; a more sophisticated one was published in a LinkedIn post. Follow the link to check that AI agent testing workflow within testomat.io test management software.

  1. Data Gathering. To get started, AI test agents gather data from many different sources (APIs, user commands, requirements, past bugs, usage logs, external tools, environmental feedback, and so on) to be trained, if needed. Our test management solution supports native integration with many of them.
  2. Collection & Coordination. This is the role of the test management system. Once all the relevant datasets have been collected, the AI-based agents can create relevant test cases to achieve good test coverage, including edge cases, while human testers approve whether the generated test cases are relevant. AI-powered assistants also generate large volumes of unique test data (emails, names, contact numbers, addresses, etc.) that mirror real-world data. And when you integrate large language models (LLMs) and generative AI (GenAI), QA agents can rapidly simulate diverse real-world conditions and evaluate applications with greater intelligence and adaptability.
  3. Test Execution. AI-powered agents are deployed to autonomously run tests and simulate user interactions to test UI components, assessing functionality, usability, and application performance.
  4. Real-time Bug Detection & Reporting. AI-based assistants detect anomalies and frequent error points within the system, and can predict bugs and automatically report defects to stakeholders. In addition, they can recognize repetitive flows and high-priority areas for testing.
  5. Test Analysis & Continuous Learning. As the software scales, the AI-powered assistants analyse data from user interactions and system updates to keep tests aligned with the application’s current state.
  6. Feedback and Improvement. QA team members need to regularly review AI-generated results to maintain software quality. Despite the power of artificial intelligence, it’s important to mention that continuous monitoring and periodic checks of their work guarantee accurate and reliable testing results.

Challenges in AI agent for testing

  • When the software product becomes more complex, the amount of computing resources needed for AI testing increases exponentially.
  • The absence of representative data makes testing ineffective: AI-based assistants can develop biases and fail to meet ethical standards.
  • Using outdated APIs and poor documentation presents a huge challenge for the adoption of AI testing.
  • AI-generated test cases and results require careful review before implementation.
  • Because of the AI black-box problem, it is difficult for QA teams to understand the logic behind test case failures.

Best Practices AI agent testing implementation

When choosing a test assistant, you need to find the tool that best fits your testing needs within the software development lifecycle. Take into account customization, integration, and user-friendliness. However, one reminder:

Do not forget about combining AI and human efforts to balance efficiency and creativity!

Here you can reveal some other tips to help you find the right AI agent testing tool:

  1. You need to investigate how your organization is structured, what systems and tools you already use, and what testing tasks you have.
  2. You need to define what areas the AI agent testing framework will help you automate before scaling.
  3. You need to make sure that your team understands why they need a QA agent in test automation and knows how to use it effectively.
  4. You need to discover if an AI bot can be integrated with the platforms you already use.
  5. When planning your tool budget, you should consider free, subscription, or enterprise pricing.
  6. You need to consider its customization capabilities so it can be tailored to your unique testing requirements.

Boost your capabilities with AI Agent Testing right now

Whether you’re a QA lead or a startup founder, applying AI testing agents will change the way you carry out testing. They are becoming essential tools for modern QA teams: they can learn from past data and predict failure points. They can also generate different tests, self-heal, and adapt to changes to achieve a higher level of excellence for software delivery in record time.

Are you ready? 👉 Contact us to learn more information on how to use the power of Artificial Intelligence agents to create precise test cases and improve the quality and coverage of your software testing efforts.

The post AI Agent Testing: Level Up Your QA Process appeared first on testomat.io.

]]>
Playwright Reporting Generation: All You Need to Know https://testomat.io/blog/playwright-reporting-generation/ Wed, 16 Jul 2025 12:33:36 +0000 https://testomat.io/?p=21599 Undoubtedly, test reporting is considered a crucial element in software testing and helps QA and development teams make well-informed decisions. Since there is a wide range of reports aimed at meeting any testing needs, with Playwright Reports, dev and QA teams can get a detailed summary of test performance, making Playwright debugging more efficient and […]

The post Playwright Reporting Generation: All You Need to Know appeared first on testomat.io.

]]>
Undoubtedly, test reporting is considered a crucial element in software testing and helps QA and development teams make well-informed decisions. Since there is a wide range of reports aimed at meeting any testing needs, with Playwright Reports, dev and QA teams can get a detailed summary of test performance, making Playwright debugging more efficient and its test management smoother.

In the article below, you can find information about the importance of test automation reports, reveal various types of reports, and learn how to choose the most suitable ones. Also, you can discover what tips to follow to succeed in Playwright reporting, as we consider it the most popular testing framework today.

What is Playwright?

Developed by Microsoft, Playwright is an open-source framework which is used for browser automation and testing web applications. Thanks to its ability to test Chromium, Firefox, and WebKit with a single API, teams can apply it as an all-in-one solution when conducting real-time functional, API and performance testing. Also, teams can carry out end-to-end testing by simulating user interactions such as clicking, filling out forms, and navigation.

Explore more here:

Playwright API Testing: Detailed guide with examples

Being compatible with Windows, Linux, and macOS, Playwright can be integrated with major CI/CD servers such as Jenkins, CircleCI, Azure Pipelines, TravisCI, GitHub Actions, etc.

In addition to that, Playwright has broad language compatibility – supports TypeScript, JavaScript, Python, .NET, and Java – to provide QAs with more options for writing tests. In total, there is a list of key Playwright’s features:

  • Cross-browser support – Chrome, Firefox, WebKit.
  • Automatic waiting for elements to be ready.
  • Parallel execution of tests to deliver high performance.
  • Mobile device emulation and geolocation simulation.
  • Easy integration with CI/CD tools.

Find more information about Playwright’s capabilities for automation testing:

Playwright Test Automation: Key Benefits and Features

What is a Test Report in Playwright?

A Playwright test report is a detailed document generated after running a set of automated tests with the Playwright testing framework. It displays test results showing which tests passed, failed, or were skipped, and helps uncover how well the application functions and performs.

In the Playwright test report, you can find the following components:

  • Status of Tests. This component shows information about passed/failed/flaky/skipped tests.
  • Error Details. This component outlines the types of errors (for example, assertion failed, timeout, network errors) and their positions within the application.
  • Execution Time. Here you can discover how much time it took to run each test to uncover slow tests and performance issues.
  • Screenshots. For a failed test, Playwright will automatically take a screenshot at the point of failure and provide crucial visual context.
  • Videos. Playwright can record a video of the entire test execution for failed or all tests, providing dynamic information and showing what has led to a full-scale failure.
  • Logs & Debug Information. Detailed logs that can help developers debug issues by providing insights into browser actions, network requests, and responses.
  • Test Coverage. This component is valuable because it provides visibility into the number of tests within the coverage scope.

Indeed, Playwright reports are designed to be interactive: you can expand/collapse sections, filter tests, and easily navigate through detailed failure information such as stack traces, screenshots, and videos, giving QA and dev teams a clear understanding of test performance in context.

Why teams need Test Automation Reports

  • They see visual representations of the results of tests and can prioritize bug fixes and enhancements depending on how they affect the user experience.
  • Teams are in the know about the full picture of how all the tests have been executed: they see the number of passed, failed, or skipped tests to understand how good and stable the application is.
  • Thanks to reporting options, teams can get clear details of what went wrong to find and fix the main problem quickly.
  • Teams can see how much of the app is being tested and which parts still need testing.
  • Teams no longer need to check the results of all tests manually to identify common problem areas in the app.
  • With regular and detailed test reports, teams can monitor how well the app is doing in different tests to decide how to make their test automation better.

Different Types of Test Reporters in Playwright

When you run Playwright tests without specifying a reporter, the list reporter is used by default in the terminal (newly scaffolded Playwright projects set the HTML reporter in their config). For more control, it is good practice to specify your preferred reporter in the playwright.config file.

Playwright configuration file
How to Set Up Playwright Report 👀
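For instance, a reporter can be set in the playwright.config.ts file. This is a minimal sketch using Playwright's defineConfig, with the HTML reporter selected:

```typescript
// playwright.config.ts — choose the reporter for all test runs
import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: 'html', // or 'list', 'line', 'dot', 'json', 'junit'
});
```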

Additionally, the easiest way to select a reporter is to pass the --reporter flag on the command line. Example with the Playwright line reporter:

npx playwright test --reporter=line

You can find your test result reports in the output folder in the project root (for the HTML reporter, playwright-report by default), or in another folder if you configure one.

So, let’s review the Playwright reporting types and reporter methods you can utilize to meet your testing needs.

Built-In Playwright Reporters

List Reporter

Playwright’s List Reporter provides a compact, text-based summary of the tests run. For every failing test, it prints the error message and a call stack right next to it, which helps in figuring out what went wrong. While it doesn’t offer interactive features like the HTML report, its simplicity makes it an excellent choice for rapid debugging during development.

Simple Playwright List Report

Furthermore, the List Report offers valuable information on test execution status without the need for a browser or a complex UI. This reporter is useful for CI/CD pipelines where a simple, sequential output is preferred for logging and immediate feedback.

Line Reporter

Highly compact, the line reporter uses a single line to display test execution results and dynamically updates it as tests complete.

Playwright Line reporter

Line reporter is useful for large test suites, where it shows the progress but does not spam the output by listing all the tests.

🔴 It is important to mention: the Line Reporter only outputs detailed information, such as error messages and stack traces, when a test fails. This makes it very useful for developers who need quick feedback during local development or in CI environments where log verbosity should be controlled. Overall, it prioritizes a clean console while still delivering immediate alerts for any issues.

Dot Reporter

When you run your tests in the console, Playwright’s Dot Reporter provides a highly compact visual representation. It prints a single dot (.) for every test that passes, so you can instantly see how things stand as tests run. If a test fails, it emits an ‘F’ (for ‘Failure’) or a similar character as a warning.

Playwright Dot reporter example
Playwright Dot reporter

Dot reporter is a good fit if you need to quickly gauge overall test suite results without detailed output, which makes it just right for large projects or CI/CD dashboards. Its main advantage is that it offers real-time, intuitive visual progress for your test suite.

HTML Reporter

The HTML Reporter is an invaluable tool used by teams to visualize test results in an intuitive, interactive web interface. After a test run, it generates a comprehensive HTML report that can be opened directly in a web browser. In our case, the index.html file:

Playwright HTML Report

After we open the HTML file, we can see a report visualizing Passed, Skipped, and Failed tests:

Playwright HTML Report in browser screen
Playwright HTML Report in browser

The Playwright HTML report gives a detailed overview of all tests, clearly displays which areas of the application were tested, and highlights the status of each test and its coverage.

Screenshot of Playwright Trace Viewer
Location Playwright trace.zip file

For any failures, it offers detailed accounts of failures, notes error types and locations, supplemented by screenshots, videos, and powerful trace files.

🛠 What is Playwright Trace Viewer?

Playwright can record a trace of your test execution—essentially a detailed log that includes:

  • Screenshots and DOM snapshots
  • Network requests/responses
  • Console logs
  • Actions performed (clicks, inputs, navigations, etc.)

The Trace Viewer then lets you open these trace files in a visual UI for step-by-step playback. In the Trace Viewer, you can easily understand what exactly went wrong: maybe a timing issue, a missing element, or a slow response.
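Recording and viewing traces is done with Playwright's CLI:

```shell
# Record traces for the run (also configurable via trace: 'on' in the config)
npx playwright test --trace on

# Open the Trace Viewer UI for a recorded trace file
npx playwright show-trace trace.zip
```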

JUnit Reporter

Playwright’s JUnit Report is built to output test results in the standardized JUnit XML format, which is crucial for Continuous Integration/Continuous Delivery (CI/CD) systems.

The JUnit reporter produces a JUnit-style XML report.

Most likely, you will want to write the report to an XML file (you can see it in the bottom-left corner of our screenshot). When running with --reporter=junit, use the environment variable:

PLAYWRIGHT_JUNIT_OUTPUT_NAME=results.xml npx playwright test --reporter=junit

In the configuration file, pass options directly:

import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [['junit', { outputFile: 'results.xml' }]],
});
Playwright JUnit XML Report

The generated XML file includes all the information about the test suites and cases – names, durations, and results. For failed tests, it provides essential details like error messages and stack traces, enabling automated parsing by CI tools. Its biggest advantage is that it can be used in any CI pipeline: build servers can readily understand the test results and gate deployments on them. Although it does not provide the rich interactivity of the HTML Reporter, its machine-readable format is essential for automatic quality gates and continuous feedback. You can also download the JUnit XML report file and upload it to various analytics tools to view the data in a more refined presentation.
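For illustration, the generated file follows the common JUnit XML shape; the test names and values below are invented for the example:

```xml
<testsuites>
  <testsuite name="example.spec.ts" tests="2" failures="1" time="3.2">
    <testcase name="loads the home page" time="1.1"/>
    <testcase name="submits the form" time="2.1">
      <failure message="Timed out waiting for selector">stack trace...</failure>
    </testcase>
  </testsuite>
</testsuites>
```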

Multiple Reports in the Configuration File

With Playwright, you’re not restricted to a single report format, so you can meet a variety of requirements. Thanks to this adaptability, you can assign multiple reporters at once in the configuration file or specify them on the command line. In the configuration file, write:

  reporter: [
    ['html'],
    ['json', { outputFile: 'test-results.json' }],
    ['junit', { outputFile: 'results.xml' }]
  ],

For instance, you can generate an HTML report and a JSON report at the same time, receiving a JSON file along with the results once you specify it in the command line or configuration file.

Custom Report

With a Playwright Custom Reporter, you can tailor test result output to the project’s unique needs. You can develop your custom reporter in JavaScript/TypeScript to transform raw test data into any format or to integrate Playwright tests into existing workflows or proprietary systems that don’t support standard report formats. A custom reporter allows you to filter, aggregate, or visualize data and present test results in a view that all relevant stakeholders can review.

To use a custom reporter, study the Reporter API and point the Playwright configuration file at your reporter:

import { defineConfig } from '@playwright/test';

export default defineConfig({
  reporter: [['./my-awesome-reporter.ts', { customOption: 'some value' }]],
});
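To make it concrete, here is a minimal sketch of what my-awesome-reporter.ts could contain. The file name and customOption match the config above; onBegin, onTestEnd, and onEnd are hooks from Playwright's Reporter API, and the tallying logic is just an illustration:

```typescript
// my-awesome-reporter.ts — a minimal custom reporter sketch.
// Playwright instantiates the class (passing the options from the
// config) and calls these hooks during a test run.
class MyAwesomeReporter {
  counts: Record<string, number> = {};
  customOption: string;

  constructor(options: { customOption?: string } = {}) {
    this.customOption = options.customOption ?? '';
  }

  onBegin(config: unknown, suite: { allTests(): unknown[] }) {
    console.log(`Starting the run with ${suite.allTests().length} tests`);
  }

  onTestEnd(test: { title: string }, result: { status: string }) {
    // Tally results by status: passed, failed, timedOut, skipped, ...
    this.counts[result.status] = (this.counts[result.status] ?? 0) + 1;
  }

  onEnd(result: { status: string }) {
    console.log(`Run finished: ${result.status}`, this.counts);
  }
}

export default MyAwesomeReporter;
```

From here, the same hooks could write to a file, post to a chat webhook, or push results to an external dashboard.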

Third-Party Reporters in Playwright

Playwright also allows you to integrate third-party reporters to extend its built-in reporting capabilities. Thanks to external tools such as Allure, Monocart, Tesults, ReportPortal, Currents, and Serenity/JS, teams can improve the reporting process with features like detailed HTML reports, real-time monitoring, and interactive dashboards. These tools also help teams view and visualize test results in different formats and simplify the monitoring of test performance, failures, and trends.

Max Schmitt, an open-source enthusiast and Playwright full-stack web developer, gathered all such third-party solutions for Playwright in a single GitHub repo, Awesome Playwright.

In this repo, we are also represented 😃

Playwright’s integration with testomat.io enables teams to see live status before the test run has finished executing. A full report link is also created, which can be shared among all parties involved as necessary.

Playwright Report with Test Management System screen
Playwright Report with Test Management System

If something fails, the execution trace, test case, and attachments can be analysed to find out what went wrong.

Playwright Trace viewer in test management software screen
Playwright Trace viewer in test management UI

These reports are good for analyzing whether build compilation, automated test execution, or deployment steps passed or failed.

Comprehensive Analytics Dashboard with Test Management
Comprehensive Analytics Dashboard: Flaky tests, slowest test, Tags, custom labels, automation coverage, Jira statistics and many more

You can detect Playwright flakiness in two ways: with the Analytics Dashboard widget and with the AI Testing Agent. Flakiness detection helps ensure Playwright tests in the framework are dependable enough to be run automatically and frequently.

Playwright Flaky tests
AI Analysis of Flakiness in Playwright
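As a rough illustration of the idea behind flakiness detection (a simple heuristic sketch, not testomat.io's actual algorithm): a test that both passed and failed on the same code revision is a flakiness candidate.

```typescript
// One run record per test execution, keyed by test name and commit.
interface RunRecord {
  test: string;
  commit: string;
  status: 'passed' | 'failed';
}

// A test is flagged as flaky if, for the same commit, its history
// contains more than one distinct status.
function findFlakyTests(history: RunRecord[]): string[] {
  const byKey = new Map<string, Set<string>>();
  for (const r of history) {
    const key = `${r.test}@${r.commit}`;
    if (!byKey.has(key)) byKey.set(key, new Set());
    byKey.get(key)!.add(r.status);
  }
  const flaky = new Set<string>();
  for (const [key, statuses] of byKey) {
    if (statuses.size > 1) flaky.add(key.split('@')[0]);
  }
  return [...flaky];
}
```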

With reports generated for CI/CD pipelines, teams can automatically create quality and deployment readiness reports from their continuous integration and delivery processes. This can be achieved through integration with the testomat.io tool.

How to Choose the Right Type of Playwright Reporter

Before selecting the type of reports, it is essential to define the needs of your team, your project scale, and the level of detail you require, and then adapt Playwright reporting to those needs. Here are a few considerations to make when deciding which type of Playwright reporting you need:

  • Purpose of the Report. Your report should be driven by the main testing goals and determine the need for either quick developer feedback or comprehensive stakeholder updates.
  • Size of the Test Suite. If your test suite is small, a concise console reporter (List or Dot) may be enough for fast feedback. But when the suite reaches hundreds or thousands of tests, deeper reports like the HTML Reporter or specialized dashboards are invaluable for exploring and interpreting the results effectively.
  • Environment. The testing environment heavily influences reporter choice: for local development, an interactive HTML report is ideal for immediate debugging, while for CI/CD pipelines, third-party Playwright reporting solutions are a good fit for automated parsing and quality gate integration.
  • Level of Detail. The depth of insight you need into test results matters. For detailed debugging and root cause analysis, the HTML Reporter (with its Traces, Screenshots, and Videos) provides every detail of any failure, down to the kind of failure and the place in the application where it occurred. If only minimal detail is needed, you can select the Line or Dot Reporter for at-a-glance feedback.
  • Data Storage Needs. If you require historical analysis, there are reporters that generate HTML and JUnit XML files for later review. For long-term trend analysis or integration with test management systems, you can select a reporter that sends data to an external database or service, often through a Custom Reporter. Our test management software testomat.io also supports this option.
  • Customization Options. If none of the standard reporters generates the data exactly the way you need, aggregates it the way you want, or submits it to an external system, the Custom Reporter is a good option for matching specific reporting workflows.
  • Test Management Systems (TMS) Integration. Some reports (JUnit XML, for instance) can be readily integrated with a variety of TMS to collect data in one place. So, if you need real-time monitoring of test runs, failures, and trends, consider whether the reporter needs to push results directly to a TMS for better visibility.
  • Team Cooperation. When selecting a reporter, make sure the report format can be shared among team members to build a common understanding and support decision-making. If the team uses certain tools (such as Jira or Slack) to communicate, then test management software might be suitable to facilitate your test result display.

Benefits of Reporting in Playwright

  • Thanks to Playwright reporting, teams can get comprehensive details about test runs and quickly pinpoint the root cause of issues.
  • Teams can assess the suite of results in real-time, speed up the feedback process, and maintain a continuous development flow.
  • With shareable reports, teams can quickly discuss test results with business stakeholders, even those without technical backgrounds, to accelerate understanding across development, QA, and product teams.
  • Teams can prevent faulty code from being deployed and ensure continuous quality thanks to automated quality gates in CI/CD.
  • Teams benefit from customizable Playwright reporting options to tailor their reports to unique requirements.
  • Teams can generate report files (like HTML or JUnit XML), archive results, and analyze performance, failure rates, and trends over time.

Challenges in Playwright Reporting

There are some challenges in Playwright Reporting that teams should be aware of:

  • While Playwright offers custom reporters, creating interactive reports beyond the built-in options can demand significant development effort.
  • Teams may find it difficult to identify key issues when test reports include too much information about the test suites.
  • The use of multiple environments, including various browsers and devices, can contribute to generating unpredictable results.
  • Flaky tests are prone to producing false positives or negatives, which might result in inaccuracy in the reports.
  • Slow page loads may cause an increase in reported execution times and impact accuracy.
  • Complicated user flows and dynamic content can overwhelm reports with redundant information.

Tips for Effective Playwright Reporting

Here are some tips to follow to enhance the Playwright test reporting:

  • Before executing the tests, it is essential to clearly define what you’re testing and focus on metrics which will help you determine what success looks like.
  • You need to create a reporting format that is easy to interpret and helps teams resolve issues quickly. For example, the content can be presented in HTML format or offered as a downloadable PDF.
  • For better understanding, you need to add screenshots/videos to your test reports to provide visual context and make sure they are shareable.
  • You need to use CI tools to automatically trigger the report creation and distribution after each test run.

Want to Reap the Benefits from Playwright Reporting?

With good test reporting, you can turn testing data into actionable insights. Using Playwright’s reporting tools allows teams to get useful information about their results, uncover problems early, and make testing better. Thanks to diverse types of reporters in Playwright and integration capabilities, teams can integrate multiple reporters and even create custom ones to meet their different needs in testing.

If you are interested in simplifying Playwright reporting and integrating it with testomat.io for better management, do not hesitate to drop us a line and learn more about the services we provide.

The post Playwright Reporting Generation: All You Need to Know appeared first on testomat.io.

AI Unit Testing: A Detailed Guide https://testomat.io/blog/ai-unit-testing-a-detailed-guide/ Wed, 25 Jun 2025 12:37:18 +0000 https://testomat.io/?p=20420

Many testing teams may find it challenging to cope with the increasing complexity and fast changes in software systems when performing traditional testing. With manual creation and selection of test cases, their testing efforts are frequently inefficient and fail to adapt to codebase changes and rising requirements. As a result, they should think of implementing a modern approach for testing. Using AI for unit testing and software development is essential to avoid falling behind and enhance the efficiency and effectiveness of unit testing processes.

What is AI Unit testing?

AI unit testing means using artificial intelligence to automate the generation of unit test cases and the preparation of test data. It reduces manual effort and verifies the behavior of each unit in isolation. If a unit does not do what it should, the software program will not work efficiently, or will not work at all.

How can artificial intelligence be applied in unit testing?

Here are six ways artificial intelligence can help you carry out unit testing:

#1: Test Case Automation

With AI tools, QA teams can save time and resources by letting machine learning algorithms analyze the lines of code and quickly generate automated test cases. By analyzing both your code and the code segment context, AI can automatically select high-risk areas for testing, generate unit tests for code segments or recommend tests that will provide insights into your code’s behavior. It will reduce manual workload and speed up the testing process.

#2: Test Case Generation

By using AI, teams can automatically generate a variety of test cases that cover a wide range of scenarios and conditions. Generative AI algorithms for unit testing analyze the code to identify critical points and generate effective test cases that cover every possible execution scenario, enabling team members to identify potential issues at early stages or before implementation.

#3: Test Case Selection

With AI-based tools, teams can choose a subset of test cases from the entire test suite to be executed in a particular testing cycle. Without running the entire suite, the aim is to select the test cases that are most likely to uncover defects.

#4: Test Case Prioritization

By using AI-backed tools, teams can have tests prioritized based on code complexity, bug history, and code changes. By arranging test cases in a sequence that maximizes certain criteria, these tools help detect critical defects early, improving the efficiency and effectiveness of the testing process.
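As a simplified illustration of the idea (a naive heuristic, not any specific vendor's algorithm), prioritization can be modeled as ranking tests by a risk score derived from historical failures and recent code changes:

```typescript
// Per-test risk signals an AI/analytics layer might produce.
interface TestInfo {
  name: string;
  failureRate: number;        // 0..1, from historical runs
  coversChangedCode: boolean; // touched by the current change set?
}

// Rank tests by a naive risk score and keep the top `budget` of them.
function prioritizeTests(tests: TestInfo[], budget: number): string[] {
  return tests
    .map(t => ({
      name: t.name,
      score: t.failureRate + (t.coversChangedCode ? 0.5 : 0),
    }))
    .sort((a, b) => b.score - a.score)
    .slice(0, budget)
    .map(t => t.name);
}
```

Real tools learn such scores from much richer signals, but the output is the same: an ordered subset of tests most likely to surface defects first.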

#5: Test Suite Optimization

AI can identify redundant or less effective tests, helping to reduce the overall test execution time. It detects error-prone areas of code and focuses testing efforts on critical flows. Furthermore, it is effective when giving recommendations on the tests that should be performed for greater test coverage.

#6: Automated Test Maintenance

Thanks to AI, test failure logs can be analyzed to identify the root cause of failures. It can also suggest potential fixes to the code or automatically update and repair existing tests to maintain their relevance and effectiveness.

Benefits of using AI to create unit tests

AI-assisted unit test creation comes with several benefits for the QA and development teams:

  • Artificial intelligence tools are effective when generating a large number of tests.
  • Artificial intelligence tools provide high code coverage across the project by applying the same level of thoroughness to every piece of code.
  • Artificial intelligence systems can learn from feedback and improve their unit test generation efficiency over time.
  • Artificial intelligence tools identify and test edge cases that eliminate human errors of overlooking.
  • Artificial intelligence tools cut down the time developers spend on writing, maintaining, and running tests.
  • Artificial intelligence tools update existing test suites in response to changes in the codebase.

Challenges of unit testing with AI

While AI Unit Testing offers numerous benefits, teams may face some challenges. Let’s reveal what they are:

  • When it comes to unit testing with AI, teams lack standardized testing frameworks and cannot establish consistent testing procedures across projects and teams.
  • Teams may face difficulties when dealing with large datasets, and they require more efficient methods to manage and process vast amounts of data during the test execution process.
  • When analyzing code syntax and logic, AI lacks deep contextual understanding and might miss the broader context and business logic that dictate correct functionality. This results in tests that do not fully cover the necessary edge cases or that misinterpret the intended functionality of the code.
  • It may become harder for developers to rely on test automation to catch real issues, because AI can sometimes generate tests that falsely pass or fail.

To avoid mistakes, you need to write your tests before you write the actual code so that each part of your application is tested as it’s developed. Also, you need to make sure that you use realistic synthetic data that mimics real-world scenarios before generating tests.

More importantly, you need to integrate unit testing into your CI/CD pipelines to ensure tests are automatically run with every code change, catching bugs early. This helps maintain code quality throughout development.

Popular AI Tools for Unit Testing

Here are the top tools on the market today for writing unit tests. These tools use various AI techniques to automate and optimize different aspects of code review, test generation, and quality assurance.

  • CaseIt – a specialized testing tool that automatically generates test cases for diverse testing scenarios.
  • Bito – used for Behavior-Driven Development (BDD), this tool offers AI code reviews for Git workflows, AI code generation, and plan-to-production developer agents for the IDE or CLI.
  • Unit-test.dev – this AI tool helps teams create unit test cases and supports multiple languages (Python, JavaScript/TypeScript, Java, C#) and IDEs to produce more accurate results when used on specific parts of the code.
  • Virtuoso QA – using natural language processing, it simplifies test creation and execution and provides low-code/no-code testing, self-healing test scripts, etc.
  • Checksum.ai – this tool applies AI to test creation and maintenance.
  • Carbonate – integrated into your existing testing framework, it helps teams write tests in plain English, offers code coverage analysis, and detects areas lacking proper unit testing.
  • Google Cloud’s Duet – offers AI-based code completion and generation for developers.
  • Diffblue Cover – with this AI-powered tool, teams can automatically generate JUnit tests for Java applications.
  • Keploy – an AI tool used as a test case generator for end-to-end test cases based on real user interactions.
  • GitHub Copilot – used to generate unit and integration tests as well as to help improve code quality.

One of the coolest tools on this list is Copilot. So let’s take a look at AI unit testing in action with an example of how Copilot works. We will show you how to start using AI Copilot by demonstrating the ins and outs of generating test automation. After that, we’ll discuss Copilot’s strengths and weaknesses. Although many tools listed here use similar LLM concepts, we will not compare them in this context.

Why Copilot?

GitHub Copilot is a reasonable choice for QA engineers and developers; it boosts their productivity, improves code quality, and helps release faster.

GitHub Copilot for AI unit testing helps reduce the tediousness of writing unit tests. Integration into an IDE is advantageous, as it exposes the code to the AI Copilot Chat, making it easy to tell the IDE to generate tests for a function, method, class, etc. Even a junior coder can easily write unit tests to ensure quality development. Copilot has wide support in VS Code, Visual Studio, IntelliJ IDEA, Vim, and other IDEs, and works with multiple programming languages.

Github Copilot

Microsoft provides the basic Copilot service to users at no cost and charges a premium for its advanced features.

Utilizing Copilot for Writing Unit Tests in VS Code

Copilot offers several ways to generate tests. We are focusing on the Copilot integration with Visual Studio Code, which is fairly representative. To use Copilot in VS Code, we must first install it. An important prerequisite: you must have a GitHub account to use Copilot.

Copilot Extension in VS Code screenshot
Copilot Extension in VS Code

After installation, GitHub Copilot displays the chat screen as shown below.

AI-generated Unit Tests with VSCode

In VS Code, there are two primary ways to generate tests: you can enter commands in Copilot chat, or you can right-click in a code file and select the option to generate tests. Copilot offers AI-based code suggestions and auto-completions. To generate tests in the Copilot chat, enter a prompt asking Copilot to generate tests for the method or function. In response to our request, Copilot suggests unit test cases.

AI Unit testing with Copilot
Codepilot AI Unit Testing

Occasionally, Copilot responses might introduce errors because it lacks full context and a natural sense of user intent, so be sure to double-check its suggestions. Look at an example of the Jest unit tests Copilot provides:

Examples Generated Jest AI Generated tests by Copilot
Examples Generated Jest AI generated unit tests by Copilot

This Jest unit test example from Copilot does not handle setTimeout(), which would be better than jest.runAllTimers() in our use case, and it might cause runtime issues. Moreover, numerous users have found that Copilot attempts to predict your application’s logic but lacks a true understanding of its underlying structure or embedded details. It operates within the confines of a specific code snippet and ultimately functions in a highly intuitive manner.
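For context, here is a generic sketch of the pattern being discussed: a setTimeout-based unit and a Jest fake-timers test for it (delayedGreet is a made-up function, not Copilot's output; the jest.* calls are shown as comments to keep the sketch framework-free):

```typescript
// Hypothetical unit under test: calls back after a one-second delay.
function delayedGreet(name: string, cb: (msg: string) => void): void {
  setTimeout(() => cb(`Hello, ${name}!`), 1000);
}

// A Jest test would control the clock with fake timers instead of
// really waiting:
//
//   jest.useFakeTimers();
//   test('greets after a delay', () => {
//     const cb = jest.fn();
//     delayedGreet('QA', cb);
//     jest.runAllTimers();   // flush the pending setTimeout
//     expect(cb).toHaveBeenCalledWith('Hello, QA!');
//   });
```

Whether a generated test drives the clock correctly is exactly the kind of detail worth reviewing in AI output.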

Test coverage is always lacking in one way or another, if it exists at all. Leveraging AI unit testing in development is a good way to add value and reduce the significant risk of low-quality code.

You might also find this topic valuable:

Automated Code Review: How Smart Teams Scale Code Quality

Asking an AI to generate test automation for your code has the added advantage of providing an extra pair of eyes 👀 on your code. To an extent, the quality of the generated test code correlates with the quality of the code being tested. When AI Copilot struggles to generate tests or produces poor ones, it can be an indication that the code is not easily testable, or that the application code is complex or incomplete. Conversely, this offers a valuable hint about refactoring: if Copilot struggles to suggest tests, it may indicate that your code is overly complex and could benefit from simplification.

GitHub Copilot Agent VS Copilot Chat

You should pay attention to the GitHub Copilot Agent. It is not only a code suggester; it is an advanced AI-powered extension that provides multi-step assistance to teams across the entire software development lifecycle, not just code completion. Learn more with the Execute Automation YouTube video, How GitHub Copilot Agent Writes Perfect Code & Tests 🤯

Best Practices for Implementing AI Unit Testing in General

We hope that following these best practices will help you implement AI unit testing successfully:

  • At the very start, you need to define the goals you aim to achieve with AI unit testing and make your testing data clean and well-prepared. Removing inconsistencies and errors from your data enhances reliability and improves the validity of your unit tests.
  • You need to test individual units in isolation to identify specific issues within each unit and make debugging easier and more effective.
  • With artificial intelligence tools for writing unit tests, you can generate tests and data automatically, adapt to changes in the codebase, and continually improve the tests. However, don’t forget to keep your test cases and data up to date, regularly track test coverage, and analyze performance metrics to optimize your testing strategy.
  • By updating tests regularly, you make sure that test cases remain relevant and effective in catching new bugs.
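To make the isolation point concrete, here is a minimal framework-agnostic sketch (orderTotal and the stub are invented for the example): the unit's dependency is injected so a test can replace it with a fixed stub.

```typescript
// Unit under test: computes an order total via an injected tax-rate
// provider, so the dependency can be stubbed in tests.
type TaxProvider = (country: string) => number;

function orderTotal(
  subtotal: number,
  country: string,
  getTaxRate: TaxProvider,
): number {
  return subtotal * (1 + getTaxRate(country));
}

// In a unit test, the real provider is replaced by a deterministic stub,
// so the test exercises only orderTotal's own logic:
const stubTax: TaxProvider = () => 0.2; // always 20% for the test
```

With the stub in place, the test's outcome depends only on the unit itself, which is exactly what makes failures easy to localize.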

Bottom Line: Ready to use AI-based Tools For Unit Testing?

With AI-driven tools for unit testing, you can make sure that your software testing is both efficient and highly effective. You can also ensure that web applications and mobile applications are functional and reliable. By implementing effective testing strategies and utilizing the right AI tools, you can improve code quality, reduce bugs, and avoid delays and bottlenecks in the development cycle.

If you’re hesitant to apply AI directly to a production codebase, that hesitation is well-grounded. Still, AI is a powerful accelerator: it significantly speeds up work, allowing us to deliver quality products and add more value to our clients in a timely manner. However, we should always be cautious, keep our eyes open, and make sure we understand what we’re doing and what the AI tools are doing for us.

👉 Drop us a line today to learn how we can help you enhance your testing processes and deliver high-quality software that meets the highest standards.

The post AI Unit Testing: A Detailed Guide appeared first on testomat.io.

]]>
XPath in Selenium https://testomat.io/blog/xpath-in-selenium/ Mon, 23 Jun 2025 09:07:23 +0000 https://testomat.io/?p=21086 In automated testing with Selenium WebDriver for browser automation, locating web elements remains challenging, especially when dealing with dynamic content or complex HTML page structures. Without the ability to accurately pinpoint buttons, text fields, links, and other interactive components, even the most well-designed test script may be ineffective. XPath and CSS Selector commonly used methods […]

The post XPath in Selenium appeared first on testomat.io.

]]>
In automated testing with Selenium WebDriver for browser automation, locating web elements remains challenging, especially when dealing with dynamic content or complex HTML page structures. Without the ability to accurately pinpoint buttons, text fields, links, and other interactive components, even the most well-designed test script may be ineffective. The most commonly used methods for identifying elements and interacting with web applications are XPath and CSS Selectors.

To address this challenge, you can use Selenium WebDriver’s locators to find and interact with web elements. While basic element locators like ID, Name, Class Name, and CSS Selectors often work well, they are insufficient when elements lack unique attributes or their properties change frequently. That’s when you can use XPath to navigate a web page’s complex structure and find specific elements. In this article, we will discover what XPath in Selenium is, explore the different types of XPath, reveal basic and advanced techniques, and learn how to write XPath in Selenium.

What is Selenium?

Selenium is an open-source suite of tools and libraries that enables teams to automate the testing of website functionality. With its cross-browser, cross-language, and cross-platform capabilities, they can test across different environments.

Selenium supports the Java, JavaScript, C#, PHP, Python, and Ruby programming languages, which allows teams to integrate it with existing development workflows.

Furthermore, it offers extensive compatibility with all major web browsers, including Chrome, Firefox, Safari, Edge, and Opera, while remaining flexible enough to work with different test automation frameworks such as TestNG, JUnit, MSTest, Pytest, and WebdriverIO.

Selenium Primary Components

  • Selenium WebDriver. It is a programming interface used to create and run test cases across all the major programming languages, browsers, and operating systems. Regarding the cons, it has neither built-in test reporting nor a centralized way to maintain objects or elements.
  • Selenium Grid. It is a smart proxy server which allows automation testers to run tests on different machines against different browsers.
  • Selenium IDE. It is an easy-to-use browser extension which records your interactions with websites and helps you generate and maintain site automation tests.

What is XPath in Selenium?

XPath, an acronym for XML Path Language, is a query language used to uniquely identify or address parts of an XML or HTML document. Generally, you can use it to do the following:

  • To query or transform XML documents
  • To navigate through elements, attributes, and text in an XML document
  • To look for certain elements or attributes with matching patterns
  • To uniquely identify or address parts of an XML document
  • To extract information from any part of an XML document
  • To test the addressed nodes within a document to determine whether they match a pattern

When to use XPath

  • When elements do not have unique IDs, names, or class names
  • When elements are dynamic or change quickly
  • When there is a need to locate elements based on their text content or position, which is relative to other elements

Overview of Basic XPath syntax in Selenium

XPath structure scheme
XPath structure
  • // – it selects matching nodes anywhere in the document, regardless of their location
  • tagname (e.g., div, input, a) – it indicates the tag name of the node
  • @attribute (e.g., @id, @name, @class) – it indicates the attribute of the node
  • value (e.g., //input[@id='username']) – it indicates the value of the chosen attribute
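These building blocks can be tried outside the browser. The sketch below evaluates the same expression against a tiny hand-written form using Java's built-in javax.xml.xpath engine (the markup and attribute values are invented for illustration; in a real Selenium test the same expression would be passed to By.xpath against a live page):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class BasicXPathSyntax {
    public static void main(String[] args) throws Exception {
        // A minimal page fragment standing in for a real login form.
        String page = "<html><body><form>"
                + "<input id='username' name='log'/>"
                + "<input id='password' name='pwd'/>"
                + "</form></body></html>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));
        XPath xpath = XPathFactory.newInstance().newXPath();

        // //tagname[@attribute='value']: match the tag anywhere in the
        // document whose attribute has the given value.
        Element e = (Element) xpath.evaluate("//input[@id='username']",
                doc, XPathConstants.NODE);
        System.out.println(e.getAttribute("name")); // prints: log
    }
}
```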

The Difference Between Static | Dynamic XPath in Selenium

Before we start considering XPath types, it is essential to define “static” and “dynamic” XPath in the context of web elements. This distinction determines the choice and robustness of your XPath and, ultimately, how effective your test automation will be:

Static XPath. It is a direct and absolute path, specified from the root of the webpage, that points to an element’s location in the Document Object Model (DOM) hierarchy. Any change in the UI can break this path. Here is an XPath in Selenium example:

/html/body/div[1]/div[2]/input

This path starts from the root and traverses down to the desired element.

Dynamic XPath. It is a relative path that uses flexible criteria to locate dynamic web elements whose attributes or positions change frequently on a webpage. In contrast to static XPath, dynamic XPath expressions are more resilient to changes in the UI. To create dynamic XPaths, you can use the following:

  • contains(), text(), starts-with(), and dynamic element indexes
  • logical operators OR & AND, separately or together
  • axes methods

Here is an XPath example in Selenium:

//input[contains(@id, 'user')]

This expression selects any <input> element with an id attribute containing the substring ‘user’.
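The difference is easy to demonstrate. In the sketch below (both markup snippets are invented), the same element is looked up before and after a simulated redesign, once with an absolute path and once with contains():

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class DynamicXPathDemo {
    // Parse a page snippet and return the first node matching the expression.
    static Node find(String html, String expr) throws Exception {
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(html.getBytes("UTF-8")));
        return (Node) XPathFactory.newInstance().newXPath()
                .evaluate(expr, doc, XPathConstants.NODE);
    }

    public static void main(String[] args) throws Exception {
        String before = "<html><body><div><div>"
                + "<input id='user-field'/></div></div></body></html>";
        // After a redesign: an extra wrapper element and a regenerated id suffix.
        String after = "<html><body><section><div><div>"
                + "<input id='user-42'/></div></div></section></body></html>";

        String absolute = "/html/body/div[1]/div[1]/input";
        String dynamic = "//input[contains(@id,'user')]";

        System.out.println(find(before, absolute) != null); // true
        System.out.println(find(after, absolute) != null);  // false: absolute path broke
        System.out.println(find(before, dynamic) != null);  // true
        System.out.println(find(after, dynamic) != null);   // true: contains() still matches
    }
}
```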

Sum up: Static VS Dynamic XPath

Static XPath (typically absolute XPath) provides a full path from the HTML root to an element, making it prone to breaking with any minor change in the page’s HTML structure.

Dynamic XPath locates elements whose properties or positions change frequently, so test scripts are less prone to failure in the face of UI updates or dynamic content. With dynamic XPath techniques, you can create locators that remain stable despite UI changes and drastically cut down on test automation maintenance, whereas relying on static XPaths in dynamic web applications often leads to frequent test failures.

What is an XPath locator?

An XPath locator in Selenium WebDriver is a technique used in automation testing to identify web elements and help automation tools like Selenium interact with them, even in complex or dynamic DOM structures. XPath locators support both absolute and relative paths, providing adaptable element identification via relationships, attributes, or text.

Types of XPath in Selenium

You can use two ways to locate an element in XPath – Absolute XPath and Relative XPath. Let’s review them with some XPath examples in Selenium below:

Absolute XPath

It contains the location of all elements from the root node (HTML), where the path starts, and specifies every node in the hierarchy. However, the whole XPath will fail to find the element if there is any change/adjustment of any node or tag along the defined XPath expression. The syntax begins with a single slash, “/”, and looks like this:

/html/body/div[1]/div[2]/form/input[2]

We see that if any new element is added before the target element, or if the structure of the divs, form, or inputs changes, this XPath will fail and break your test automation script.

Relative XPath

As the most commonly used and recommended type, it tells XPath to search for the element anywhere in the document. Starting with a double forward slash “//”, it begins from the middle of the HTML DOM structure without the need to initiate the path from the root element (node). The syntax looks like this:

//input[@id='username'] or //button[text()='Submit']

How To Create XPath in Selenium

When writing XPath in Selenium, you can do it by applying various types of XPath locators. Let’s consider them:

  • Using Basic Attributes
  • Using Functions
  • Using Axes

Using Basic Attributes

XPath’s locators | Description | Example
By Id | It allows you to identify an element by its id attribute. | driver.findElement(By.xpath("//*[@id='username']"))
By Class Name | It allows you to locate an element by its class name. | driver.findElement(By.xpath("//*[@class='login-button']"))
By Name | It allows you to locate elements by their name attribute. | driver.findElement(By.xpath("//*[@name='password']"))
By Tag Name | It allows you to detect elements by their HTML tag name. | driver.findElement(By.xpath("//p"))

Using XPath Functions in Selenium

XPath’s functions are used to determine elements by their attributes, positions, and other factors.

XPath’s locators | Description | Example
By Text | It allows you to detect elements based on their inner text. | driver.findElement(By.xpath("//*[text()='Submit']"))
Using Contains | It defines elements based on a substring of one of their attribute values. | driver.findElement(By.xpath("//*[contains(@href,'testomat.io')]"))
Using Starts-With | It allows you to find elements based on an attribute’s prefix. | driver.findElement(By.xpath("//*[starts-with(@id,'user')]"))
Using Ends-With | It allows you to find elements whose attribute values end with a specific string. Note that ends-with() is an XPath 2.0 function and is not supported by the XPath 1.0 engines in most browsers; use substring() as a workaround. | driver.findElement(By.xpath("//*[ends-with(@id,'name')]"))
Using Logical Operations | It uses logical operations to find elements that satisfy all specified criteria. | //button[@class='command-button' and @disabled='true']
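The functions above can be exercised without a browser. The sketch below (invented markup, evaluated with Java's built-in XPath 1.0 engine) shows text() and starts-with(), plus a substring() workaround for suffix matching, since ends-with() is only available in XPath 2.0:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XPathFunctionsDemo {
    public static void main(String[] args) throws Exception {
        String page = "<html><body>"
                + "<button id='submit-btn'>Submit</button>"
                + "<input id='user_name'/>"
                + "<input id='user_pass'/>"
                + "</body></html>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));
        XPath xp = XPathFactory.newInstance().newXPath();

        // By text: the element whose inner text is exactly 'Submit'
        Element byText = (Element) xp.evaluate("//*[text()='Submit']",
                doc, XPathConstants.NODE);
        System.out.println(byText.getTagName()); // button

        // starts-with: first element whose id begins with 'user'
        Element byPrefix = (Element) xp.evaluate("//*[starts-with(@id,'user')]",
                doc, XPathConstants.NODE);
        System.out.println(byPrefix.getAttribute("id")); // user_name

        // XPath 1.0 has no ends-with(); substring() arithmetic emulates it
        Element bySuffix = (Element) xp.evaluate(
                "//*[substring(@id, string-length(@id) - string-length('pass') + 1) = 'pass']",
                doc, XPathConstants.NODE);
        System.out.println(bySuffix.getAttribute("id")); // user_pass
    }
}
```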

Using XPath axes in Selenium

With axes, you can express the relationship to the current node and locate relative nodes with respect to the tree’s current node. In other words, an XPath axis uses the relationship between several nodes to find those nodes in the DOM structure:

DOM Elements Structure scheme
DOM Elements Structure

Below you can find commonly used XPath axes:

XPath’s locators | Description | Example
parent | It selects the immediate parent. | //input[@id='username']/parent::div
child | It selects direct children. | //div[@class='form-group']/child::input
ancestor | It selects all ancestors (parent, grandparent, and so on). | //input[@id='username']/ancestor::form
descendant | It selects all descendants (children, grandchildren, and so on). | //div[@id='container']/descendant::a
following-sibling | It selects all siblings after the current node. | //input[@id='firstName']/following-sibling::input
preceding | It selects everything in the document before the current node’s opening tag. | //p/preceding::h1
preceding-sibling | It selects all siblings before the current node. | //input[@id='lastName']/preceding-sibling::input
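The sibling and parent axes from the table can also be verified outside the browser. A minimal sketch (invented markup) using Java's built-in XPath engine:

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Element;

public class XPathAxesDemo {
    public static void main(String[] args) throws Exception {
        String page = "<html><body><div class='form-group'>"
                + "<input id='firstName'/>"
                + "<input id='lastName'/>"
                + "</div></body></html>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));
        XPath xp = XPathFactory.newInstance().newXPath();

        // parent:: climbs from a known input up to its container
        Element container = (Element) xp.evaluate(
                "//input[@id='firstName']/parent::div", doc, XPathConstants.NODE);
        System.out.println(container.getAttribute("class")); // form-group

        // following-sibling:: selects the input that comes after firstName
        Element next = (Element) xp.evaluate(
                "//input[@id='firstName']/following-sibling::input", doc, XPathConstants.NODE);
        System.out.println(next.getAttribute("id")); // lastName

        // preceding-sibling:: selects the input that comes before lastName
        Element prev = (Element) xp.evaluate(
                "//input[@id='lastName']/preceding-sibling::input", doc, XPathConstants.NODE);
        System.out.println(prev.getAttribute("id")); // firstName
    }
}
```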

We would like to mention that you can also apply the chained XPath concept in Selenium, combining multiple XPath expressions to locate an element that might not be uniquely identifiable by a single one. In other words, instead of writing one absolute XPath, you can split it into multiple relative XPaths. Chaining XPaths improves the accuracy and robustness of the element-location strategy, thus making automation scripts more stable.

How to Use XPath in Selenium: Practical Examples

Example 1: Locating an Element by ID

The simplest way to locate elements using XPath is by their unique identifier, which is, as a rule, the id attribute. It looks like this:

WebElement element = driver.findElement(By.xpath("//input[@id='username']"));

In this example, the locator targets the <input> element whose id is “username.” The findElement method returns the element for further interaction, such as checking its presence or entering data.

Example 2: Traversing Using Axes

In this example, we consider an advanced technique to traverse the DOM’s structure based on how elements relate to each other.

WebElement parentElement = driver.findElement(By.xpath("//span[@class='label']/parent::div"));

We can see that the parent axis is applied to find the parent <div> element of a <span> with the class “label”. When the element you’re aiming to locate has no unique identifying attributes but can be found by its relationship to parent or sibling elements, XPath axes are useful for achieving this goal.

Example XPath Selenium Developers console

html
└── body
    └── div#form
        ├── label        (Username or Email)
        ├── input        (name="log")
        ├── label        (Password)
        ├── div          (class="wp-pwd")
        ├── input        (name="rememberme")
        ├── label        (Remember Me)   
        └── button       (Log in)

Example XPath Selenium Dev tools
The results of copying an XPath with the browser’s developer tools are the following:

Example 3: Copy XPath
//*[@id="user_pass"]
Example 4: Copy full XPath
/html/body/div[1]/form/div/div/input
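Both copied locators point at the same node, which a short sketch can confirm (the markup below is a hand-reduced skeleton of the login form, invented for illustration):

```java
import java.io.ByteArrayInputStream;
import javax.xml.parsers.DocumentBuilderFactory;
import javax.xml.xpath.XPath;
import javax.xml.xpath.XPathConstants;
import javax.xml.xpath.XPathFactory;
import org.w3c.dom.Document;
import org.w3c.dom.Node;

public class CopyXPathDemo {
    public static void main(String[] args) throws Exception {
        // A skeleton of the login page, reduced to the nodes on the path.
        String page = "<html><body><div><form><div><div>"
                + "<input id='user_pass'/>"
                + "</div></div></form></div></body></html>";
        Document doc = DocumentBuilderFactory.newInstance().newDocumentBuilder()
                .parse(new ByteArrayInputStream(page.getBytes("UTF-8")));
        XPath xp = XPathFactory.newInstance().newXPath();

        // "Copy XPath": short, id-based, survives layout changes
        Node byId = (Node) xp.evaluate("//*[@id='user_pass']",
                doc, XPathConstants.NODE);
        // "Copy full XPath": absolute, breaks if any ancestor changes
        Node byFullPath = (Node) xp.evaluate(
                "/html/body/div[1]/form/div/div/input", doc, XPathConstants.NODE);

        // Both expressions resolve to the very same element
        System.out.println(byId.isSameNode(byFullPath)); // true
    }
}
```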

What Are the Advantages of XPath Locators?

  • XPath makes complex searches more flexible, allowing you to locate items using a wide range of parameters.
  • When you work with web pages with dynamic content, XPath can easily adapt to changes in page structure.
  • It can traverse the DOM in both directions, which means moving from parents to children or from children to parents and siblings, to target elements that are structurally related to a known and stable element.

What Are the Disadvantages of XPath Locators?

  • Complex XPath queries may run more slowly than simpler locators like CSS selectors.
  • When relying on specific structures or attributes, XPath expressions may fail if the page structure changes.

Best Practices for Using XPath in Selenium

Here are some of the tips to follow when using XPath in Selenium:

  • You need to use relative XPath to write more adaptable and maintainable locators compared to absolute XPath, which is based on the complete path from the root node.
  • You need to keep XPaths as short and specific as possible to make them easier to maintain and improve.
  • You need to apply functions like contains(), starts-with(), and text() if there is a need to create XPath expressions for processing dynamic elements with changing attributes. The contains() function is suitable when attributes such as id or class have variable values.
  • When direct attributes aren’t enough, you can opt for XPath axes to locate elements through their relative position to a stable and identified element.
  • Before incorporating an XPath into your code, you should test it directly in the browser’s console to make certain it works correctly.


Bottom Line: Ready to use XPath in Selenium?

Applying XPath in Selenium in the automated testing process is useful and effective for your teams. Whether they use a simple XPath or a more complex one, choosing the right XPath is crucial for test case stability. As a powerful tool, it provides a flexible way to build robust Selenium test scripts that can handle a variety of web page structures with dynamic content and keep working even if parts of the page change later.

👉 Drop us a line if you want to learn more about XPath in Selenium; the testomat.io team is glad to provide software test automation services.

The post XPath in Selenium appeared first on testomat.io.

]]>
Continuous Testing: AI support in Software Testing https://testomat.io/blog/continuous-testing-ai/ Mon, 09 Jun 2025 11:27:30 +0000 https://testomat.io/?p=21025 We see the software development process evolving and moving from classic waterfall methodologies to agile and DevOps-based approaches. Knowing that, software QA teams should respond to quicker release cycles and growing complexity. With continuous testing, they can make the software testing process automatic and get it done quickly within the DevOps process. However, they face […]

The post Continuous Testing: AI support in Software Testing appeared first on testomat.io.

]]>
We see the software development process evolving and moving from classic waterfall methodologies to agile and DevOps-based approaches. Knowing that, software QA teams should respond to quicker release cycles and growing complexity.

With continuous testing, they can make the software testing process automatic and get it done quickly within the DevOps process.

However, they face difficulties when dealing with complicated CI/CD pipelines, strict security requirements, and dynamic cloud infrastructure. Thanks to artificial intelligence, that process is getting less labor-intensive and time-consuming. AI helps create systems that detect problems and heal themselves while optimizing performance and delivering software products that remain reliable, functional, and secure throughout the software development lifecycle.

What is Continuous Testing with AI?

Continuous testing is the practice of testing software continuously throughout the development cycle, typically as part of a continuous integration (CI) or continuous delivery and continuous deployment (CD) pipeline. By integrating testing into every phase of development, teams can catch defects early, improve product quality, and speed up delivery cycles.

When talking about Artificial Intelligence in the context of Continuous Testing, we mean embedding intelligent algorithms which can learn, adapt, and optimize the test cycles. By integrating AI algorithms in continuous testing, artificial intelligence technology helps minimize human involvement in executing tests, improve accuracy, optimize QA activities, and even start a self-healing process.

AI testing structure
Key components of AI testing

Generally, ML, NLP, robotics, computer vision, and other technologies come under the umbrella of AI in DevOps. Let’s reveal how they can power the continuous testing process:

  • Machine Learning (ML). ML-based algorithms are useful in analyzing historical test data. They are applied to identify patterns, make test case selection more effective, predict software defects, and learn from past test executions.
  • Natural Language Processing (NLP). NLP can be used to turn test cases, which have been written in everyday language, into executable scripts and to avoid the need for complex scripting.
  • Robotic Process Automation (RPA). With RPA, you can model human actions across various systems and environments to confirm that all the pieces of the app work together correctly.
  • Computer Vision. Thanks to AI computer vision, UI elements can be recognized based on their visual characteristics rather than fixed positions, making tests more robust against layout changes and increasing the correctness of UI checks.
  • Deep Learning. With deep learning, you can tackle highly specific challenges – identifying sophisticated vulnerabilities in your code repositories, scaling real-time anomaly detection in dynamic systems, and using automated root cause analysis to pinpoint the root cause of complex incidents.
  • Predictive Analytics. Thanks to predictive analytics, you can learn information about future events and minimize risks in terms of selecting optimal deployment windows, adjusting resources when scaling.
  • Chatbots and Virtual Assistants. These AI-based tools are useful when there is a need to automate interactions with the development team and provide real-time assistance during the development cycle.

We would like to mention that all these types of artificial intelligence require you to learn how to use them for your team’s needs. It’s worth remembering that they are just tools, which will only work if handled the right way in continuous integration automated testing.

Ways To Use AI For Effective Continuous Testing in DevOps

How to use AI testing in software development?

Here are some key ways artificial intelligence is changing continuous testing in DevOps:

✅ Test Case Generation

Teams can apply AI-based algorithms when they need to save time and minimize their effort in creating and maintaining test cases. In this case, artificial intelligence is used to automatically generate test cases in accordance with requirements, user stories, past defect patterns, and code analysis.

✅ Test Execution Optimization

When teams need to optimize the execution flow of tests based on real-time data and new changes in the software application, they can use AI algorithms. They are effective at assessing new code modifications and previous test outcomes to guarantee that the most critical tests are executed first. In addition to that, artificial intelligence can be used to provide parallel execution of tests across multiple environments. In the long run, it enhances test coverage and reduces execution time with faster feedback cycles.
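As a rough illustration of this prioritization idea, the toy sketch below orders a suite so that tests covering changed code, and tests with a history of failures, run first. The test names, signals, and scoring rule are all invented; a real AI system would learn such weights from historical execution data rather than hard-code them:

```java
import java.util.ArrayList;
import java.util.Comparator;
import java.util.List;

public class TestPrioritizer {
    // Hypothetical per-test signals a CI system could collect.
    static class TestCase {
        final String name;
        final double historicalFailureRate; // share of recent runs that failed
        final boolean touchesChangedCode;   // does it cover files in this commit?

        TestCase(String name, double historicalFailureRate, boolean touchesChangedCode) {
            this.name = name;
            this.historicalFailureRate = historicalFailureRate;
            this.touchesChangedCode = touchesChangedCode;
        }
    }

    // Toy scoring rule: change coverage dominates, past flakiness breaks ties.
    static double score(TestCase t) {
        return (t.touchesChangedCode ? 1.0 : 0.0) + t.historicalFailureRate;
    }

    public static void main(String[] args) {
        List<TestCase> suite = new ArrayList<>();
        suite.add(new TestCase("checkout_flow", 0.30, true));
        suite.add(new TestCase("profile_page", 0.05, false));
        suite.add(new TestCase("login", 0.10, true));

        // Highest-score tests run first for the fastest feedback.
        suite.sort(Comparator.comparingDouble(TestPrioritizer::score).reversed());
        for (TestCase t : suite) {
            System.out.println(t.name);
        }
    }
}
```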

✅ AI-based Defect Prediction

When teams need to predict potential defects and resolve them before they escalate, artificial intelligence algorithms help them do it by analyzing test results and historical data, and identifying correlations between code changes and failures. In addition to that, AI can be applied to monitor application behavior and identify anomalies or unusual patterns in test executions. It can even detect slight visual changes in the user interface that might negatively impact user experience.

✅ Self-Healing Test Suites

Teams have been forced to deal with broken test cases in terms of changes in UI elements, APIs, or system behaviors. Thanks to AI, the process of adapting to changes in the application under test is being done automatically while providing continuous and stable test execution in DevOps environments.

✅ Regression Testing

With AI, the process of identifying and managing relevant regression tests, following unit tests, based on code changes and the risk assessment assigned to each selection, is highly effective and enables more comprehensive regression testing. In addition, artificial intelligence highlights areas that need more test cycles, making sure they are managed and handled in the right way.

✅ Proactive Continuous Security Testing

Artificial intelligence usage in DevOps can be applied as proactive security measures in order to identify security vulnerabilities through code modifications and code dependencies analysis. It detects security risks and discovers abnormal API call patterns in microservices, which allows teams to move security testing into the initial stages of development and minimize production threats. Also, artificial intelligence helps get real-time visibility and keep an organization’s digital presence under control through continuous attack surface testing.

Why Teams Need AI for Continuous Testing?

Now let’s talk about the main reasons your teams need an AI solution to streamline the testing process.

  • Teams can optimize test execution and deal with the highest-priority areas first to prevent things that could really go wrong.
  • Teams can win from AI’s capability to predict potential problems, even unusual situations that human quality assurance (QA) might miss.
  • With AI, there is no need for the teams to perform time-consuming testing tasks, such as regression testing, exploratory testing, integration testing, performance testing, UI validation, and data entry.
  • Based on deep artificial intelligence analysis, teams know the root causes of bugs, which means less time spent debugging and fixing.
  • Thanks to artificial intelligence in test data management, teams can deal with faster data provisioning, automated generation, retrieval, and preparation of diverse types of tests instead of doing it manually.
  • With artificial intelligence, teams can simulate complex, multi-stage attack scenarios and react by integrating continuous penetration testing directly into the CI/CD pipeline.

Key Benefits of AI in CI/CD

General Benefits of implementing AI testing

There were times when teams handled a lot of tasks manually. With modern technology, however, you can successfully use AI in CI/CD and reap the benefits:

  • Pipeline Optimization. With AI, you can understand historical data of test executions and performance patterns. Knowing this information, artificial intelligence can change pipeline settings automatically, which helps find problems and anomalies, suggest fixes, and change resource usage in a jiffy. In the long run, it will lead to quicker build times and more stable deployments.
  • Better Monitoring Capabilities. When using AI tools for CI/CD optimization, you can get real-time information about the QA process, alerts, logs, and error detection. Thanks to the ability to consider past logs and errors, artificial intelligence can quickly find the cause of pipeline issues and respond without manual work.
  • Efficient resource usage. When applying AI, there is no need to plan resources manually. It automatically scales resources up or down based on what is required.
  • Automated Tasks and Improved Security. When it comes to regular pipeline tasks like building, validation, and deploying code, artificial intelligence can automate them and reduce manual testing effort. In addition to that, there is an option of blocking or flagging risky code before it goes into the production environment.
  • Code Quality Checks. AI-powered solutions for CI/CD look at code for bugs, style problems, and performance issues. It gives quick feedback to developers, helping them fix mistakes early and keeping the code clean and effective. If there are any issues, they are listed in the logs for manual review.

Challenges in Implementing AI in DevOps Testing

While we can see more benefits of artificial intelligence in DevOps testing, there are also challenges to its adoption. These include:

  • You should take into account data quality and its availability. In order to work well, AI requires large volumes of high-quality data, which helps eliminate inaccurate predictions and inefficient QA processes. Furthermore, artificial intelligence should be continuously trained and fine-tuned to improve its accuracy, which requires expertise and resources.
  • You may face integration complexity when trying to incorporate AI continuous testing tools with existing DevOps processes. To do this, you need access to quality data from various sources. However, data can be scattered across different departments or systems, making access and integration cumbersome.
  • Many existing CI/CD pipelines and test frameworks lack built-in artificial intelligence capabilities and require additional setup.
  • When you decide to integrate AI, you need to remember that it requires investment in infrastructure and whole team training. With increasing AI complexity, you need more computational power to run it.

Best Practices for Implementing Continuous Testing

Before implementing AI in continuous testing, you need to plan strategically and utilize the right continuous testing tools in DevOps. Here are some best practices for artificial intelligence in continuous testing:

  • You need to remember that it is not effective to use AI to fix everything and everyone’s issues. It would be a good idea to define a specific testing challenge, which AI can solve. For example, improving test coverage in complex areas or identifying flaky tests.
  • To work correctly, AI relies on data. You need to make sure the data is clean and representative to avoid unfair or discriminatory test results. Also, you should choose AI tools and techniques that align with your specific needs and infrastructure.
  • You need to decide what you aim to achieve with AI and start doing it step-by-step. You can try the pilot projects without implementing a radical change, and apply AI where it can provide the most immediate value and help you attain established business goals.
  • You should combine AI with existing QA methods. There is no need to replace human testers and traditional automation. The task is to automate time-consuming and repetitive tasks which require analyzing large datasets.
  • You need to educate your team about how to use AI tools. Also, it is important to maintain documentation where all the information about the AI implementation process, training data, and integration processes is stored.
  • You need to train AI models with fresh data to adapt them to your needs when scaling.


Let’s sum up 😀

Is Your Infrastructure Ready For AI DevOps Continuous Testing Services?

AI continuous testing is becoming a crucial part of modern CI/CD workflows. Using AI for continuous integration and testing brings automation and intelligent decision-making, improving software quality and reliability.

If utilized correctly, an AI-assisted CI/CD workflow will lead to more efficient, accurate, and reliable software lifecycles. However, you need to remember that AI workloads demand specialized compute resources, flexibility, and technically prepared team members.

The key lies in a well-prepared continuous testing strategy, continuous learning, and balancing AI-driven test automation frameworks with human oversight and the principles of Continuous Integration (CI) and Continuous Delivery (CD) in mind.

👉 Contact us if you aim to adopt this approach and benefit from faster release processes, improved software quality, and reduced risk of defects.

The post Continuous Testing: AI support in Software Testing appeared first on testomat.io.

]]>
Risk-Based Testing: Strategy, Approach & Real-World Examples https://testomat.io/blog/risk-based-testing/ Fri, 06 Jun 2025 11:23:06 +0000 https://testomat.io/?p=20697 Many modern applications are highly complex and require a lot of testing. However, QA and development teams often lack the time and resources to test every program component. Thanks to risk-based testing, they can solve this potential problem and focus on the software’s most critical areas that require attention. By prioritizing tests based on risk, […]

The post Risk-Based Testing: Strategy, Approach & Real-World Examples appeared first on testomat.io.

]]>
Many modern applications are highly complex and require a lot of testing. However, QA and development teams often lack the time and resources to test every program component. Thanks to risk-based testing, they can solve this potential problem and focus on the software’s most critical areas that require attention. By prioritizing tests based on risk, organizations can optimize their testing processes, reduce the likelihood of critical failures, and deliver software that meets both technical and business requirements. According to a 2023 study, organizations that adopted this approach achieved 35% higher ROI on their test investments.

What is Risk Based Testing in Software Testing?

As a software testing type, risk-based testing is an approach in which the QA team prioritizes test execution based on risk, so that critical and vulnerable areas receive proper attention.

In the context of risk-based testing, Paul Gerrard and Neil Thompson state in the book Risk-Based E-Business Testing:

A risk threatens one or more of a project’s cardinal objectives and has an uncertain probability. Risk is simply a possible mode of failure. When we consider the potential risks of failure in our system, we are making only a speculative bug prediction.

The evidence we gather from risk-based testing reduces some uncertainty, and although we can never eliminate the possibility of failure — which is the case — what testing does is it decreases the uncertainty surrounding a mode of failure.

Thus, by identifying major risk factors, teams identify bugs’ impact and pinpoint which ones will likely cause defects. Let’s consider a real-world use case to know how to formulate a risk-based testing approach 👀

Risk Based Testing Example

Alright, we are moving on! In a banking platform, user authentication and authorization, the payment gateway, and credit assessment are high-risk areas because failures can lead to financial loss, data breaches, and reputational damage.

Conversely, showing an incorrect ATM location or failing to load the map presents a low risk. While it may be inconvenient for customers, it doesn’t create a security or financial vulnerability.

Risk-based testing example scheme
Risk-based example: user authentication, authorization, and payment

Discover how teams will act in this situation:

  1. They identify functionalities (user authentication and authorization, payment gateway, credit assessment) as high risk due to the severe impact of potential failures, while incorrect ATM location and map loading failures are considered low risk.
  2. They plan and design test cases that cover every possible scenario and assign tasks.
  3. They start test execution with high-risk test cases to address critical issues first, while monitoring and carrying out standard test execution for low-risk test cases.
  4. They continuously keep tracking risks to quickly detect and respond to any new issues.
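The prioritization in steps 1 and 3 can be sketched as a simple scoring pass. Feature names and the 1-5 likelihood/impact scales below are hypothetical illustrations, not a prescribed scheme:

```python
# Hypothetical risk scoring for the banking example: likelihood and impact on a 1-5 scale.
features = [
    {"name": "user authentication", "likelihood": 4, "impact": 5},
    {"name": "payment gateway", "likelihood": 3, "impact": 5},
    {"name": "credit assessment", "likelihood": 3, "impact": 4},
    {"name": "ATM locator map", "likelihood": 2, "impact": 1},
]

def risk_score(feature):
    """Risk score = likelihood x impact; higher scores are tested first."""
    return feature["likelihood"] * feature["impact"]

# High-risk test cases are executed first (step 3 of the workflow above).
test_order = sorted(features, key=risk_score, reverse=True)
for f in test_order:
    print(f'{f["name"]}: {risk_score(f)}')
```

With these sample numbers, authentication (score 20) is tested first and the ATM locator map (score 2) last.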

What is the Purpose of Risk Based Testing?

The aim is to focus QA work on the areas that are most likely to fail or whose failure would cause the most harm. The goals are the following:

  • To deliver a stable and reliable product for the most important functionalities.
  • To minimize negative consequences: prevent financial losses, avoid data breaches, and protect the company's reputation.
  • To increase confidence in the product's reliability, performance, and security.

What are Key Risk Categories in Software Testing?

When implementing risk-based testing, organizations typically consider several risk dimensions:

  • Business risks: features and functionality closely related to revenue generation, customer satisfaction, or competitive advantage.
  • Operational risks: system reliability and stability, performance under load, or security weaknesses.
  • Technical risks: complex code, architectural flaws, new technologies, security vulnerabilities.
  • Compliance risks: features that require regulatory or legal adherence.
  • Project risks: time constraints, resource limitations, third-party dependencies, insufficient skills.

Knowing that, teams should systematically assess each app feature against these risk types to better understand where potential issues might arise and what their impact would be. Here you can find more information about risk management.

Who Performs Risk Based Testing?

When performing risk-based testing, specialists collaborate throughout the software development lifecycle. While QA engineers or testers often carry out the execution, multiple specialists take part in identifying, assessing, and prioritizing risks. They are the following:

  • QA Engineers (Testers), QA Managers. They are the key specialists responsible for identifying, assessing, and prioritizing risks from a testing perspective. They also design, execute, and maintain test cases focused on high-risk areas while reporting bugs and providing feedback on risk mitigation.
  • Software Engineers. They fix identified bugs and point out technically weak parts of the system.
  • Project Managers. They are responsible for allocating resources among teams or specialists and managing timelines based on risk priorities. They also take part in keeping the team aligned and informed about risks.

Principles of Risk-Based Testing

If you aim to prevent risks that impact app functionality and launch high-quality applications, you should apply risk-based testing. To use it effectively, you must understand its key principles. Here are some of them:

Risk-based testing flow scheme
How risk-based testing proceeds
  • Risk identification. Risks can stem from a variety of sources (technical complexities, integration points, user requirements, and security vulnerabilities) and should be identified. To test effectively, the level of effort should correspond to the software's risk level.
  • Risk assessment. Once risks are identified, they must be assessed to determine their likelihood and impact so that testing activities focus where they'll do the most good, because not every part of the application requires equal attention.
  • Test prioritization. Prioritize tests for the riskiest areas, either because they're likely to fail or because failure would cause the most harm.
  • Continuous risk monitoring. Make the process ongoing and iterative by identifying risks and splitting them into smaller units for better management.
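The risk assessment principle can be sketched as a classic likelihood-by-impact matrix. The three levels and the "critical" bucket below are illustrative assumptions, not a fixed standard:

```python
# Hypothetical 3x3 assessment matrix: map (likelihood, impact) levels to a priority bucket.
LEVELS = ("low", "medium", "high")

def priority(likelihood: str, impact: str) -> str:
    """The more severe of the two axes wins; a high/high pair is flagged critical."""
    if likelihood == "high" and impact == "high":
        return "critical"
    return max(likelihood, impact, key=LEVELS.index)

print(priority("high", "high"))   # critical
print(priority("low", "high"))    # high
print(priority("medium", "low"))  # medium
```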

How Risk-Based Testing Differs from Other Approaches

Understanding how risk-based testing compares to other methodologies is crucial for selecting the most effective strategy for your project.

Goal
  • Risk-based: optimizes and focuses testing efforts where risks are highest.
  • Waterfall: makes sure each phase of the SDLC is completed and fully tested before moving to the next.
  • Agile Testing: delivers working software through continuous feedback and collaboration.
  • BDD Testing: focuses on teamwork through clear and executable software specifications (Gherkin).

Approach
  • Risk-based: tests the riskiest areas with optimal use of resources.
  • Waterfall: performs testing activities only after all development is done.
  • Agile Testing: carries out continuous testing using quick cycles and feedback for fast software releases.
  • BDD Testing: uses structured, readable descriptions of requirements, scenarios, and user interactions.

Risk Focus
  • Risk-based: identifies and prioritizes risks to minimize their impact.
  • Waterfall: controls defect risk by completing each phase strictly before moving on.
  • Agile Testing: handles risks continuously in short cycles through constant feedback and early testing of small features.
  • BDD Testing: lowers the risk of late bugs because executable specs (Gherkin) serve as automated checks at an early stage.

Risk Management
  • Risk-based: identifies and mitigates high-risk areas.
  • Waterfall: relies on comprehensive upfront planning and extensive documentation to prevent issues.
  • Agile Testing: mitigates risks within short development cycles.
  • BDD Testing: reduces communication and implementation errors through executable specifications (Gherkin).

Good Fit
  • Risk-based: projects with limited resources and strategically prioritized testing efforts.
  • Waterfall: projects with fixed requirements and minimal scope changes.
  • Agile Testing: projects that require rapid iteration, continuous delivery, and adaptability to changing requirements.
  • BDD Testing: projects that deal with complex requirements and require early and continuous validation.

Integration & Compatibility
  • Risk-based: incorporates both manual and automated testing methods.
  • Waterfall: relies mainly on manual testing.
  • Agile Testing: combines continuous feedback loops with extensive test automation and CI/CD.
  • BDD Testing: provides automated acceptance tests.

Why Teams Need to Perform Risk-Based Testing?

  • Teams can make certain that potential weak points and critical functionalities receive the most attention during the QA process.
  • Teams can prioritize risks in accordance with available resources to make informed decisions about where to allocate their efforts.
  • With risk based testing, teams can achieve higher-quality and quicker software releases and minimize the potential for software-related critical issues and the impact of the risk.
  • Teams can increase test coverage through risk-based assessment activities such as test case prioritization, resource allocation, and schedule estimation.
  • Teams can reduce the manual effort and incorporate test automation to run regression tests on high-risk functionalities regularly.

When to use Risk-based Testing?

Below, you can find information about projects or situations in which implementing risk based testing will be suitable:

  • Projects with limitations — time, resources, and cost limitations.
  • New projects that require attention in terms of high-risk internal or external factors, like a lack of technological experience or limited domain understanding.
  • Projects with a focus on frequent software releases, an incremental and iterative model.
  • Projects that lack clear requirements, have poor design, inadequate time planning, or insufficient resources.
  • Projects that carry out security testing within cloud environments.
  • Projects that focus on security, where risk-based analysis helps identify vulnerabilities.
  • Complex projects with multiple integrations or numerous interdependencies which can be challenging to test.

Common Mistakes with Risk-Based Testing

Here is an overview of common mistakes that can not only delay the Software Testing Lifecycle (STLC) but also impact user experience, business reputation, and customer satisfaction. Let's discover which mistakes should be avoided:

  • Teams may delay risk analysis until later phases, rather than beginning it during planning and development.
  • Teams may wrongly determine the acceptable level of risk and focus on high-risk areas only.
  • Teams may inaccurately identify and resolve risks, which will affect future performance.
  • Teams involved in risk assessment lack the experience or knowledge to fully understand the impact of the test results.
  • Teams do not pay enough attention when selecting resources to successfully address weak and vulnerable areas.

Benefits Of Risk Based Software Testing

By identifying and analyzing system-related risks, it becomes feasible to enhance the efficiency and effectiveness of test execution.

Risk-based testing scheme
Benefits of Risk-based testing

Let’s reveal the benefits in detail:

💠 Focuses testing efforts

With risk-based testing, you can make sure that teams direct resources and effort toward the most critical areas of the software. Utilizing risk-based testing in Agile allows them to concentrate their time and energy on the high-priority application areas that could greatly affect users.

💠 Identifies defects earlier

When teams concentrate on risk based testing, they are able to detect issues and vulnerabilities in the early phases of the project. Such a preventative risk based testing strategy will help them avoid critical difficulties and decrease the likelihood of mistakes which may arise later in the cycle, thereby resulting in higher software quality.

💠 Faster time to market

Using risk-based testing often results in quicker cycles. Teams can stop testing everything equally and prioritize the essential features for test execution. When they focus on features with higher risks, those features can be validated and verified more quickly. Teams can discover critical defects earlier in the development lifecycle and deliver a functional and reliable product to the market faster.

💠 Enhances stakeholder trust

When teams align testing priorities with business risks, stakeholders can trust the QA process more. They understand that testing efforts are focused on the features and functionalities that are critical to the project's success.

💠 Reduces costs

With a risk-based testing approach in Agile, teams can achieve the most risk coverage with optimal resources for QA work. By concentrating test efforts on high-risk areas, they can increase bug-finding rates in high-priority business features and functionality without growing the test budget, as well as reduce the total cost of development.

Limitations Of Risk Based Testing

While RBT can help maximize the efficiency of the QA process, it also comes with its own set of challenges. Here are some of them:

❌ Poor planning

Improper planning presents major challenges that may result in a chain reaction of problems across the project’s lifespan. So, there is a need to have a well-structured plan to run critical and priority tests first to prevent the final product from failing.

❌ Potential Overlook of Lower-Risk Areas

When focusing on high-risk areas, lower-risk functionalities might receive less attention. Therefore, it might lead to non-critical bugs being overlooked. In the long run, it can reduce the overall software quality and compromise user experience, even if the high-risk areas function correctly.

❌ Initial Time Investment

Applying a risk-based approach effectively requires a significant time investment in its initial phase. To set up RBT, teams should organize a phase for risk identification and prioritization, which involves significant time and effort, especially in projects with tight timelines or where immediate results are required.

❌ Requires Skillset Development

The lack of knowledge of risk assessment methodologies can lead to serious consequences in QA work. Teams should be well-versed in how to effectively identify, evaluate, and prioritize risks. Also, they should know about tools or platforms that are helpful in risk assessment. However, skill development can be seen as a barrier, especially for smaller teams or organizations with limited resources.

❌ Incomplete coverage

When applying a risk-based approach, teams test only critical components of software applications, which means other important components are not yet fully tested. As a result, teams may deal with incomplete test coverage, which may lead to a higher risk of system failures.

Steps in a Risk-Based Testing Approach

The best way to implement a risk-based approach within software testing is the following:

Risk-based Test Strategy visualization
Risk-based Test Strategy

#1 Step: Risk Identification

At this step, stakeholders, developers, business analysts, and testers come together to discuss areas which are prone to errors. Thanks to their knowledge of the project, historical data and information for similar projects, defect reports, user stories, and industry expertise, they can identify potential issues.

#2 Step: Risk Analysis

Once the risks have been identified, they should be analyzed to determine their likelihood and potential consequences. A risk-based approach focuses on the highest risks, so you need to define and rank them properly. While ranking, take into account the complexity of each area and calculate which risks present a significant danger to the organization and which only cause minor inconveniences.

#3 Step: Risk Prioritization

Once you have identified and analyzed the risks, you need to prioritize them. It's important to mention that not all risks are equally critical. That's why high-impact and high-likelihood risks should be addressed first, while medium- and low-priority risks can be handled later.

#4 Step: Test Planning & Test Design

At this step, you need to clearly define the test objectives, scope of testing, test cases, and testing strategy to respond to the identified risks. It’s also imperative to define the approaches, select relevant tools and frameworks, and create tests for high-priority risks. Additionally, you need to allocate the required resources, including test environments, teams, and time, according to the risk prioritization to make certain the test plan is implemented effectively.

#5 Step: Test Execution

Once the test cases are designed and budgets have been approved, you can start the QA process. You need to run tests according to the testing plan. This phase involves conducting tests and actively monitoring and responding to emerging risk information. After this step is completed, the process can begin again as new code, features, and functionality are added to the app.

#6 Step: Risk Control and Monitoring

At this step, you should control and monitor the risk-based test execution. When adding new test cases, updating or removing less relevant tests, you need to take into account new risks and prioritize them. Also, you can add extra test cases and allocate resources for them.

📊 Risk-Based Testing Results Reporting and Metrics

Below you can find some important reporting and analyzing metrics of risk-based testing. Let’s take a closer look below:

Number of Risks Identified

With this metric, testing teams count the total number of unique risks identified through various activities (brainstorming sessions, requirements analysis, design reviews) and document them clearly with test management software for further risk assessment and analysis. Use this metric to evaluate the effectiveness of your risk-identification efforts over time.

Number of Risks Identified metric

🧮 Example Calculation of Total Risks

If during a test planning phase, your team identified:

• 5 risks in a brainstorming session
• 3 risks during requirements review
• 2 risks from defect history
• And 1 duplicate across these was later merged

➡ Total unique risks = 5 + 3 + 2 − 1 duplicate = 9 risks identified

🔺 Notes: Count unique risks, not duplicates.

Risk Priority Number

This metric can be used for quantitative or qualitative assessment and to determine which areas of the software or which functionalities should receive the most attention and effort. It helps test teams to rank identified risks based on their consequence and probability of risk. Risk Priority is usually calculated using a formula based on the Risk Priority Number (RPN), which follows the common method of risk-based testing and Failure Mode and Effects Analysis (FMEA).

Risk Priority Number metric formula

In this formula:

  • Severity – mark of how serious the impact would be if the risk occurred (e.g., scale 1–10)
  • Likelihood – likelihood the risk is to occur (e.g., scale 1–10)
  • Detectability – estimation of the issue detection before reaching the user (e.g., scale 1–10, where 1 = easily detected, 10 = hard to detect)

🧮 Example Calculation of Risk Priority

Suppose for a specific risk:

• Severity = 8 (high impact)
• Likelihood = 6 (moderately likely)
• Detectability = 7 (not easily detected)

➡ Risk priority = 8 × 6 × 7 = 336

🔺 Notes: The higher the RPN, the higher the priority the risk should receive.
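The RPN calculation described in this section can be sketched in a few lines; the 1-10 scale bounds follow the factor definitions listed earlier:

```python
def rpn(severity: int, likelihood: int, detectability: int) -> int:
    """Risk Priority Number per FMEA: each factor on a 1-10 scale,
    where detectability 10 means hardest to detect."""
    for value in (severity, likelihood, detectability):
        if not 1 <= value <= 10:
            raise ValueError("each factor must be on a 1-10 scale")
    return severity * likelihood * detectability

print(rpn(8, 6, 7))  # 336, matching the example calculation
```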

Number of test cases planned VS executed

When applying this metric, teams measure the proportion of executed test cases compared to the total number planned. With that information, they can understand whether test execution is going according to plan, without delays or issues. This metric helps QA teams track progress during test cycles, identify bottlenecks in test execution, report status to stakeholders, and communicate the current state of quality and test activities.

test execution rate metric

🧮 Example of execution rate

• Planned test cases: 120
• Executed test cases: 90

➡ Execution Rate = (90 / 120) × 100 = 75%

Test cases executed

With this metric, teams get an overview of how many tests are succeeding versus how many are failing. A high number of failures indicates instability or bugs in features that are not yet complete or working correctly. At the same time, a high failure rate means the QA work is effective at finding issues.

percentage of failed test cases metric
🧮 Example metric of Executed Test Cases

• Passed 80
• Failed 10
• Skipped 5
• Blocked 3
• Not Run 2

➡ Test Cases Executed = 80 + 10 + 5 = 95

🔺 Notes: Test Cases Executed = Total Test Cases with Status ≠ Not Run or Blocked

Defect density

This metric shows teams how many bugs are in a specific piece of code (defect density) and how critical that piece of code is to the business or system (risk identified). It helps them spend less time on low-risk, low-defect-density areas and use the budget and schedule efficiently. Most teams calculate defect density as the number of defects per thousand lines of code (KLOC), but some count it per module or per function point.

Defect density testing metric

🧮 Example Defect Density Metric

• 20 defects in a module
• 10,000 lines of code

➡ Defect Density = (20 × 1000) / 10,000 = 2 defects per KLOC
➡ Alternatively, if you find 20 bugs in 10 modules, the defect density is 20 ÷ 10 = 2 defects per module

Number of test cases of high severity still open

Thanks to this metric, teams understand that there are still issues blocking the software from release that could have severe consequences: system crashes, data loss, major functionality breakdowns, or security vulnerabilities. It helps measure unresolved risk before release, drives prioritization during regression or release-readiness checks, and informs stakeholders about potential blockers.

Number of test cases of high severity still open metric
🧮 Example of Number of test cases of high severity still open

You have 50 test cases marked as high severity:

• 20 passed
• 10 failed
• 5 blocked
• 5 skipped

➡ Remaining high-severity test cases still open: 50 – (20+10+5+5) = 10

Test effectiveness (risks identified and mitigated)

With this metric, teams can validate whether risk assessment and identification are accurate. They can also understand whether the risk-based approach works or whether their strategy and coverage need to be re-evaluated. It guides continuous improvement of test planning and execution and helps stakeholders assess test quality and coverage.

Test effectiveness (risks identified and mitigated) metric

🧮 Example of Test Effectiveness

• Total risks identified: 20
• Risks successfully mitigated during testing: 15

➡ Test Effectiveness = (15 / 20) × 100 = 75%

🔺 75% test effectiveness means a strong alignment between risk identification and resolution.

Test coverage report

This metric determines the extent to which your efforts have covered the codebase. Teams can assess the completeness of their effort and know how thoroughly the software has been validated.

Test coverage testing metric

🧮 Example Automation Code Coverage Report

• Total test cases: 1,200
• Automated Test cases: 960

➡ Automation Code Coverage = (960 / 1,200) × 100 = 80%

Risk coverage

When considering the risk coverage, teams can find out how much of the company’s risk is covered by test cases. They will have a clear understanding of how effectively the test strategy has been implemented and see which identified risks have been addressed or covered by test activities.

Risk coverage risk based testing metric

🧮 Example Risk Coverage report:

• Total Identified Risks: 20
• Risks Covered by Test Cases: 15

➡ Risk Coverage = (15 / 20) × 100 = 75%

Applying this metric allows testers to see whether they tested the right issues effectively and focused on the most important risks. They can find out which identified risks are now fixed or handled and why their efforts were spent on specific crucial areas. They can also discover whether any big risks remain and whether the software is ready to launch.

By tracking the metrics mentioned above, you can make certain that your efforts are thorough and consistent. While there are many other metrics you can track, it is essential to use those that bring you closer to your business goals and help launch high-quality software.

Comprehensive Summary Risk-Based Reports

A risk-based testing matrix gives us clear visibility, making it easier to understand the current level of risk exposure. It supports more efficient use of testing resources by reducing time spent on low-risk areas and, vice versa, directs attention to higher-severity failure areas that have a greater impact on risk. At the end of the day, it gives us a clearer objective to aim for from a testing perspective: we're not just guessing what's important; we know what's important now.

Summary Risk-Based Report
Use a risk-based traceability matrix to ensure each risk has one or more test cases linked. If we diagnose the severity of a risk as high, we usually retest such test cases using regression testing along with risk-based testing to reduce the risk of failure in the future.

risk based traceability matrix
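A traceability check like this can be sketched in a few lines. The risk and test case IDs below are hypothetical:

```python
# Hypothetical traceability matrix: each risk ID maps to its linked test cases.
traceability = {
    "R-01 payment gateway outage": ["TC-101", "TC-102"],
    "R-02 auth token expiry": ["TC-201"],
    "R-03 map fails to load": [],
}

# Every risk should have at least one linked test case.
uncovered = [risk for risk, cases in traceability.items() if not cases]
coverage = (len(traceability) - len(uncovered)) / len(traceability) * 100

print(f"Risk coverage: {coverage:.0f}%")
for risk in uncovered:
    print("No test case linked to:", risk)
```

With this sample data, the check flags R-03 as uncovered, which is exactly the gap a traceability matrix is meant to surface before release.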

Summary risk-based testing reports help QA managers present information clearly to business stakeholders as well. They can reassess risks with greater confidence and decide, on the basis of better knowledge and information, whether to release, not release, or continue testing.

Best Practices For Implementing Risk-Based Testing

Below we highlight some tips and tactics to follow to lead to a successful RBT implementation:

  1. Before evaluating risks, define clear and consistent criteria for them. You can also use a risk prioritization matrix and engage all stakeholders in the risk prioritization process.
  2. You can apply a risk breakdown structure to help all engaged parties identify risk-prone areas and categorize the many sources from which project risks may arise.
  3. You can use risk-based metrics for tests, which can help you visualize and show a lot of relevant information about test efforts, like the risk closure and status.
  4. You can utilize test automation frameworks and AI-based tools to speed up the RBT process and improve quality.
  5. You can use test cases that cover essential functionalities and consistently update and maintain them to keep them relevant.
  6. You can apply a test case management tool like testomat.io to streamline the implementation of RBT in agile for better risk identification.

Bottom Line: What About Using RBT?

The risk-based testing approach allows teams to prioritize the critical functionality of the software or system. When using this approach, you can optimize the QA process and target your testing efforts toward the areas of your application that are most likely to cause problems. By identifying, assessing, analyzing, and mitigating risks based on their prioritization, teams can eliminate over-testing and focus on checking the most critical areas.

However, before establishing a risk-based testing approach, you need to make sure that communication between the stakeholders, software engineers, and testers engaged in the software project is open and clear. They should do their best to identify and address potentially critical risks in the software products being developed.

🙂 Drop us a line if you are interested in adopting RBT (Risk Based Testing) in your QA process.

The post Risk-Based Testing: Strategy, Approach & Real-World Examples appeared first on testomat.io.

Behavior-Driven Development: Python with Pytest BDD https://testomat.io/blog/pytest-bdd/ Tue, 03 Jun 2025 10:53:09 +0000 https://testomat.io/?p=17735

If you want your IT projects to grow, your technical teams and non-technical stakeholders should not suffer from misunderstandings during the software development process. You can use a BDD framework to get everyone on the same page and keep the team aligned.

In the article, you can discover more information about the Pytest BDD framework, learn how to write BDD tests with Pytest, and reveal some considerations to help you make the most of the Pytest BDD test framework.

Why teams need the Pytest BDD framework

If your team works on Python projects, pytest-BDD will give them a sizable boost in project clarity.

  • Tech teams and non-technical business executives can take part in writing test scenarios with Gherkin syntax, describing the intended behavior of software in a readable format to make sure it meets business requirements.
  • Teams can verify user stories and system behavior by directly linking them to feature requirements.
  • Teams can make the test automation process more scalable with pytest’s features like fixtures and plugins.
  • Teams can create a solid base of step definitions for test cases and reuse code in other tests by turning scenarios into automated tests.
  • Teams can easily update BDD scenarios as the product changes.
  • Teams can get detailed test reports with relevant information about the testing efforts.

Fixtures & Tags: Why use them?

With pytest-bdd, teams can use the power of the entire Pytest ecosystem, such as fixtures and tags.

Fixtures

Marked with the @pytest.fixture decorator, fixtures are special functions that provide a way to set up and tear down resources required by your tests. They are very flexible and have multiple use cases: they can be applied to individual tests, entire test classes, or even across a whole test session to optimize resource usage. There are various reasons for using Pytest fixtures:

  • Fixtures are implemented in a modular manner and are easy to use.
  • Fixtures have a scope (function, class, module, session) and lifetime that define how often they are created and destroyed, which is crucial for efficient and reliable testing.
  • Function-scoped fixtures improve the test code's readability and consistency, simplifying maintenance.
  • Pytest fixtures allow testing complex scenarios while keeping the setup code simple.
  • Fixtures use dependency injection (configuration settings, database connections, external services) to improve test readability and maintainability by encapsulating setup and teardown logic.

While fixtures are great for extracting data or objects that you use across multiple tests, you may not use them for tests that require slight variations in the data.
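As an illustration of the setup/teardown pattern, here is a minimal sketch with a hypothetical resource (requires pytest to be installed; the "connection" is a stand-in dictionary, not a real database client):

```python
import pytest

def make_connection():
    """Stand-in for an expensive resource, e.g. a database connection."""
    return {"status": "open"}

@pytest.fixture(scope="function")
def connection():
    conn = make_connection()   # setup runs before the test
    yield conn                 # the test body runs here
    conn["status"] = "closed"  # teardown runs after the test finishes

def test_connection_is_open(connection):
    # pytest injects the fixture by matching the argument name.
    assert connection["status"] == "open"
```

Because the scope is "function", a fresh connection is created for every test; switching to scope="session" would share one connection across the whole run.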

Tags

Tags are a powerful feature that helps selectively run certain tests based on their labels. They allow teams to assign tags to scenarios in feature files and use pytest to execute only those tests, which is especially useful when dealing with large test suites. Tags can indicate test priority, skip certain tests under specific conditions, or group tests by categories like performance, integration, or acceptance. Let's consider the reasons for using tags:

  • You need to run suites of tests relevant to your current needs, like testing a particular feature.
  • You need to group tests based on functionality, priority, or other relevant criteria to easily understand the test suite structure and find specific tests later.
  • You need to execute tests that match multiple tags by combining them with logical operators (AND, OR, NOT) to precisely target the tests you want to run.
  • You need to automate the execution of specific test subsets and get customized reports based on test tags.

While tags help categorize the tests based on any criteria, their overuse can lead to a cluttered test suite and make it hard for developers to understand or maintain the code.
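For example, tags are placed above a feature or scenario in the feature file (the tag names here are hypothetical):

```gherkin
@smoke
Feature: Checkout

  @payment @high_risk
  Scenario: Pay with a saved card
    Given the user has a saved card
    When the user confirms the payment
    Then the order is marked as paid
```

pytest-bdd converts Gherkin tags into pytest markers, so a command like `pytest -m payment` runs only the tagged scenarios; registering custom markers in pytest.ini avoids unknown-marker warnings.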

In fact, Pytest has limitations, but it comes with many plugins that extend its capabilities, among them the pytest-bdd plugin, which is what interests us at this point in the article. This plugin brings all the advantages of Python to BDD, which is why many automation engineers love it ❤

Getting Started with Pytest BDD

Prerequisites: Setting up the environment

If you are ready to utilize pytest-BDD, you need to make sure that all the required tools and libraries are installed. Below you can find out the steps to follow to set up the environment and start writing BDD tests:

    1. Install Python. You need to download the latest version from Python’s official website to get Python installed on your system. Then you need to verify the installation by running the command:
      python --version
    2. Optionally, alias python to python3 (macOS/Linux only). I added this alias because I saw command not found messages, as python3 was installed instead of python.
      alias python=python3
      
    3. I installed the package manager pip.
    4. Indeed, some test automation engineers prefer to use the Poetry library over Virtualenv. Poetry is more modern and enables management of dependencies in the global project directory without manually activating environments.
    5. Set up a Virtual Environment. At this step, you can create a virtual environment for your project to isolate it from other environments, give you full control of your project, and make it easily reproducible. First, install the virtualenv package with pip if you haven't already. Once installed, create the environment, specifying the Python version and the desired name. It is good practice to replace <version> with your Python version and <virtual-environment-name> with the name you want to give the environment.
      pip install virtualenv
      virtualenv -p python<version> <virtual-environment-name>
    6. Install pytest and pytest-BDD. At this step, you can use pip to install both the pytest framework and the Pytest-BDD plugin
      pip install pytest pytest-bdd
    7. Install Additional Dependencies. If you need additional libraries like Selenium or Playwright, you can install them as well. We need them to operate on a browser. For instance Playwright
      playwright install
      
    8. Activate virtualenv based on your OS
      source venv/bin/activate
      
    9. Create Feature Files and Steps File. The last step before writing the BDD tests is creating a structured project directory where you will keep your feature files and test scripts. Typical project structure looks like:
      pytest_bdd_selenium/
      ├── features/
      │   └── login.feature
      ├── steps/
      │   └── test_login_steps.py
      ├── tests/
      │   └── test_login.py
      ├── conftest.py
      ├── requirements.txt
      └── pytest.ini
      
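The project tree above lists a requirements.txt file. A minimal version for this setup might look like the following sketch (the version pins are illustrative, not prescriptive):

```text
pytest>=8.0
pytest-bdd>=7.0
playwright>=1.40
```

Installing from this file with pip install -r requirements.txt keeps the environment reproducible across machines.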

How to write BDD Tests with Pytest

To write a BDD Test with Pytest, as mentioned above, you need to create a feature file and define step functions that match the scenarios in the feature file.

#1: Writing Feature File

To write feature files, you need to understand the Gherkin syntax used to describe the behavior of the application in plain English. The “given/when/then” vocabulary is clear to all team members – analysts, developers, testers, and other specialists without a technical background. Generally, feature files work as living documentation of the system’s expected behavior. More information about Gherkin-based feature files can be found here.

Here is a basic example of a successful login functionality on this site https://practicetestautomation.com/practice-test-login/ 

Feature: Login functionality

  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters valid username and password
    Then the user should see the secure area

#2: Creating Step Definitions

Step Definitions map the Gherkin steps in your feature files to Python functions. Pytest-bdd matches the steps in feature files with corresponding step definitions. Here is an example code for user login:

from pytest_bdd import scenarios, given, when, then

scenarios('../features/login.feature')

LOGIN_URL = "https://practicetestautomation.com/practice-test-login/"
USERNAME = "student"
PASSWORD = "Password123"

@given("the user is on the login page")
def open_login_page(browser_context):
    browser_context.goto(LOGIN_URL)

@when("the user enters valid username and password")
def login_user(browser_context):
    browser_context.fill("#username", USERNAME)
    browser_context.fill("#password", PASSWORD)
    browser_context.click("#submit")

@then("the user should see the secure area")
def check_login(browser_context):
    header = browser_context.locator("h1")
    assert "Logged In Successfully" in header.text_content()

* The file test_login.py may be empty if all scenarios are loaded from the step definitions file via the scenarios() call.
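Conceptually, the matching described above works like a decorator registry: the step text is the key, and the Python function is the value. The stdlib-only sketch below illustrates the idea; it is a simplified model, not pytest-bdd’s actual implementation, and the context dictionary stands in for real browser state:

```python
# Simplified model of BDD step matching: a decorator registers each
# function under its literal Gherkin step text.
STEP_REGISTRY = {}

def step(text):
    """Register the decorated function under the given step text."""
    def decorator(func):
        STEP_REGISTRY[text] = func
        return func
    return decorator

@step("the user is on the login page")
def open_login_page(ctx):
    # A real step would call page.goto(); here we just record the URL.
    ctx["url"] = "https://practicetestautomation.com/practice-test-login/"

@step("the user enters valid username and password")
def login_user(ctx):
    # Stand-in for filling the form and submitting it.
    ctx["logged_in"] = ctx["url"].endswith("/practice-test-login/")

def run_scenario(steps):
    """Execute scenario steps in order, sharing one context dict."""
    ctx = {}
    for text in steps:
        STEP_REGISTRY[text](ctx)  # look up and run the matching function
    return ctx

result = run_scenario([
    "the user is on the login page",
    "the user enters valid username and password",
])
print(result["logged_in"])  # → True
```

pytest-bdd does the equivalent lookup for you: the @given/@when/@then decorators populate its registry, and each scenario is executed as a regular pytest test.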

#3: Create Conftest file

The pytest-playwright plugin ships built-in fixtures such as page, so in many cases you do not need a conftest.py at all: the plugin provides everything.

You only need it if you want to:

  • Add custom fixtures (e.g., for login tokens, DB, API)
  • Change browser settings (e.g., headless, slow motion)
  • Set up project-wide hooks
  • Configure Playwright launch options

Our example application is a login page, so let’s create a conftest.py with a custom browser fixture:

import pytest
from playwright.sync_api import sync_playwright

@pytest.fixture
def browser_context():
    with sync_playwright() as p:
        browser = p.chromium.launch(headless=False)  # set True for headless
        context = browser.new_context()
        page = context.new_page()
        yield page
        browser.close()

Using a Page fixture, you can update your steps/test_login_steps.py file. Note that the built-in page fixture is provided by the pytest-playwright plugin; if you stay with the custom conftest.py above instead, rename its fixture to page.

from pytest_bdd import scenarios, given, when, then
from playwright.sync_api import Page

scenarios('../features/login.feature')

LOGIN_URL = "https://practicetestautomation.com/practice-test-login/"
USERNAME = "student"
PASSWORD = "Password123"

@given("the user is on the login page")
def open_login_page(page: Page):
    page.goto(LOGIN_URL)

@when("the user enters valid username and password")
def login_user(page: Page):
    page.fill("#username", USERNAME)
    page.fill("#password", PASSWORD)
    page.click("#submit")

@then("the user should see the secure area")
def check_login(page: Page):
    assert "Logged In Successfully" in page.text_content("h1")

#4: Executing PyTest BDD

Once the feature file and step definitions have been created, you can start test execution. It can be done with the pytest command:

pytest -v

#5: Analyzing Results

After PyTest BDD test execution, you can analyze, measure, and review your testing efforts to identify weaknesses and formulate solutions that improve the process in the future.

Playwright BDD PyTest Reporting (screenshot)

If you integrate pytest BDD with a test case management system such as testomat.io, you can generate test reports, analyze them, and get the picture of how your tested software performs. 

Playwright Trace Viewer (screenshot)

You can debug your Playwright tests right inside the test management system for faster troubleshooting and smoother test development.

Advantages of Pytest BDD

  • Pytest BDD works flawlessly with Pytest and all major Pytest plugins.
  • With the fixtures feature, you can manage context between steps.
  • With conftest.py, you can share step definitions and hooks.
  • You can execute filtered tests alongside other Pytest tests.
  • When dealing with functions that accept multiple input parameters, you can use tabular data to run the same test function with different sets of input data and make tests maintainable.
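The tabular-data advantage above is typically expressed with a Gherkin Scenario Outline. The scenario below is a hypothetical extension of the login feature; the step wording and the data rows are illustrative:

```gherkin
Scenario Outline: Login attempts with different credentials
  Given the user is on the login page
  When the user enters username "<username>" and password "<password>"
  Then the user should see "<result>"

  Examples:
    | username | password    | result                    |
    | student  | Password123 | Logged In Successfully    |
    | student  | wrong       | Your password is invalid! |
```

On the Python side, such parameterized steps are bound with pytest-bdd’s parsers module, e.g. @when(parsers.parse('the user enters username "{username}" and password "{password}"')), and each Examples row runs as a separate test.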

Disadvantages of Pytest BDD

  • Step definition modules must have explicit declarations for feature files (via @scenario or the “scenarios” function).
  • Scenario outline steps must be parsed differently.
  • Sharing steps between feature files can be a bit of a challenge.

Rules to follow when using Pytest BDD for Test Automation

Below you can find some important considerations when using Pytest-bdd:

  • You need to utilize Gherkin syntax with Given/When/Then (GWT) statements.
  • You need to write steps as Python functions so that pytest-bdd can match the steps in feature files with their corresponding step definitions; steps can be parameterized or defined as regular Python functions.
  • You need to utilize pytest-bdd and pytest fixtures together to set up and tear down the environment for testing.
  • Each scenario works as an individual test case. You need to run the BDD test using the standard pytest command.
  • You can use pytest-bdd hooks to generate code before or after events in the BDD test lifecycle.
  • You can use tags to run specific groups of tests, prioritize them, or group them by functionality.
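As an illustration of the tag rule above, tags annotate features and scenarios directly in the feature file (the tag names here are hypothetical):

```gherkin
@login
Feature: Login functionality

  @smoke
  Scenario: Successful login with valid credentials
    Given the user is on the login page
    When the user enters valid username and password
    Then the user should see the secure area
```

pytest-bdd converts Gherkin tags into pytest markers, so tagged scenarios can be selected with pytest -m smoke; registering the marker names in pytest.ini avoids unknown-marker warnings.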

Bottom Line: Ready to use Pytest BDD for Python project?

With pytest-BDD, your teams get a powerful framework to implement BDD in Python projects. When tests are written in a clear, Gherkin-readable format, teams with different backgrounds can better collaborate, understand business requirements, and build what the business really needs. Contact us if you need more information about improving your Pytest BDD workflow, integrating it with the testomat.io test case management system, and increasing test coverage.

The post Behavior-Driven Development: Python with Pytest BDD appeared first on testomat.io.
