test Archives - SD Times
https://sdtimes.com/tag/test/

Parasoft’s latest release offers several new automated features for testing Java, C#, .NET apps
https://sdtimes.com/test/parasofts-latest-release-offers-several-new-automated-features-for-testing-java-c-net-apps/
Wed, 03 Jul 2024

The post Parasoft’s latest release offers several new automated features for testing Java, C#, .NET apps appeared first on SD Times.

Parasoft recently released the 2024.1 versions of several of its products, including the Java testing tool Jtest, the C# and .NET testing tool dotTEST, and the testing analytics solution DTP.

Jtest now includes test templates in Unit Test Assistant, which is a feature that uses AI to generate a suite of tests. With the new Jtest release, testers get more control over the structure of their test classes and can specify common configurations that their tests require.

Jtest can also now run test impact analysis directly within the IDE. Whenever a code change is made, Jtest identifies and executes the affected tests and gives the developer immediate feedback on the impact of their modifications.
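Conceptually, test impact analysis maps changed source files to the tests that exercise them, so only affected tests need to run. The sketch below illustrates that general idea in Python; the function name, data format, and file names are invented for illustration and do not reflect Jtest's actual implementation.

```python
# Hypothetical sketch of test impact analysis: given a coverage map
# (which source files each test exercises), select only the tests
# affected by a change set. Not Parasoft Jtest's actual API.

def impacted_tests(coverage_map, changed_files):
    """Return the sorted subset of tests touching any changed file."""
    changed = set(changed_files)
    return sorted(
        test for test, files in coverage_map.items()
        if changed & set(files)
    )

coverage_map = {
    "OrderServiceTest": ["OrderService.java", "PricingRules.java"],
    "InvoiceTest": ["Invoice.java"],
    "PricingTest": ["PricingRules.java"],
}
print(impacted_tests(coverage_map, ["PricingRules.java"]))
# → ['OrderServiceTest', 'PricingTest']
```

In a real tool the coverage map comes from instrumented test runs rather than being hand-written, but the selection step is the same set intersection.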

“With the new Jtest release, developers get real-time insights into which tests are impacted by their code changes,” Igor Kirilenko, chief product officer at Parasoft, told SD Times. “While you are still modifying your code, Jtest automatically runs the relevant tests and delivers instant feedback. This groundbreaking feature not only saves time but also ensures that potential bugs are caught and fixed before they ever reach the build pipeline.”

In Jtest and dotTEST, an integration with OpenAI/Azure OpenAI Service provides AI-generated fixes for flow analysis violations. 

Jtest and dotTEST also now support the latest version of the Common Weakness Enumeration (CWE) list, 4.14. Additionally, both have improved out-of-the-box static analysis test configurations.

And finally, DTP’s integration with OpenAI/Azure OpenAI Service speeds up remediation of security vulnerabilities by matching security rule violations to known vulnerabilities, and then assigning each a probability score indicating how likely it is to be a real vulnerability rather than a false positive.

“Developers often face significant cognitive load when triaging static analysis violations, particularly those related to security,” Jeehong Min, technical product manager at Parasoft, told SD Times. “Each security rule comes with its own learning curve, requiring time to understand its nuances. To assist developers, Parasoft DTP offers recommendations powered by pre-trained machine learning models and models that learn from the development team’s triage behavior. The ultimate goal is to help developers make informed decisions when triaging and remediating static analysis violations.”
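As a rough illustration of the kind of scoring such models produce, the toy example below assigns each violation a probability of being a true positive from a handful of weighted features passed through a logistic function. The features and weights are invented for this sketch; the actual models described above are trained on real triage data.

```python
import math

# Invented feature weights, for illustration only.
WEIGHTS = {"severity": 0.8, "matches_known_cwe": 1.5, "in_test_code": -1.2}
BIAS = -1.0

def true_positive_probability(violation):
    """Logistic score: higher means more likely a real vulnerability."""
    score = BIAS + sum(w * violation.get(k, 0) for k, w in WEIGHTS.items())
    return 1 / (1 + math.exp(-score))  # squash into (0, 1)

likely = {"severity": 1, "matches_known_cwe": 1, "in_test_code": 0}
unlikely = {"severity": 0, "matches_known_cwe": 0, "in_test_code": 1}
print(true_positive_probability(likely) > true_positive_probability(unlikely))
# → True
```

Ranking violations by such a score lets developers triage the most probable real issues first instead of wading through findings in arbitrary order.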


Parasoft offers new capabilities for API, microservices, and accessibility testing in latest release
https://sdtimes.com/test/parasoft-offers-new-capabilities-to-api-microservices-and-accessibility-testing-in-latest-release/
Wed, 22 May 2024

The post Parasoft offers new capabilities for API, microservices, and accessibility testing in latest release appeared first on SD Times.

The software testing company Parasoft has announced new updates for API, microservices, and accessibility testing.

For API testing, the company is using AI to offer auto-parameterization of API scenario tests generated by the OpenAI integration. 

According to Parasoft, this update will streamline the process of developing test scenarios that validate data flow. 

In the realm of microservices testing, the platform now offers a single test environment for collecting code coverage metrics from multiple parallel test executions for Java and .NET microservices.  

Additionally, code coverage can now be published under a single project in Parasoft DTP, which gives testers an aggregated view of their microservices coverage, Parasoft explained.
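Aggregating coverage from parallel runs amounts to taking the union of covered lines per file across runs. The snippet below is a minimal sketch of that merge step; the data format and file names are invented and unrelated to DTP's actual report format.

```python
from collections import defaultdict

def merge_coverage(runs):
    """Union the covered line numbers per file across multiple test runs."""
    merged = defaultdict(set)
    for run in runs:
        for filename, lines in run.items():
            merged[filename] |= set(lines)
    return {f: sorted(lines) for f, lines in merged.items()}

run_a = {"checkout/Handler.java": [1, 2, 3]}  # e.g. service A's test run
run_b = {"checkout/Handler.java": [3, 4], "cart/Cart.java": [10]}
print(merge_coverage([run_a, run_b]))
# → {'checkout/Handler.java': [1, 2, 3, 4], 'cart/Cart.java': [10]}
```

The point of publishing under a single project is exactly this: a line counts as covered if any of the parallel executions reached it.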

And finally, for web accessibility, the company has added support for WCAG 2.2 as well as new reporting capabilities in Parasoft SOAtest and DTP. 

Mabl now offers automated mobile testing
https://sdtimes.com/test/mabl-now-offers-automated-mobile-testing/
Tue, 23 Apr 2024

The post Mabl now offers automated mobile testing appeared first on SD Times.

The testing company mabl has announced that it now offers automated mobile testing capabilities in its platform, which already offered testing for web and APIs. 

The offering was designed to give full coverage of the unique functionality of different mobile devices and their operating systems.

With this new offering, tests are created through a low-code interface, meaning that developers can create tests for their mobile apps in a matter of minutes and non-developers can also utilize the platform. According to the company, this makes testing more accessible and will help to build a culture of quality throughout the organization.

Tests can be executed in parallel across multiple devices, ensuring that testing teams are able to get results faster.  

The platform also offers capabilities designed to increase trust in test results by minimizing the occurrence of flaky tests, which are tests that return both passing and failing results. These include things like auto-healing, which is when tests rewrite themselves to adapt to code changes, and Intelligent Wait, which tailors testing wait times to the normal pace of the application. 
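Techniques like Intelligent Wait generalize a familiar pattern: instead of sleeping for a fixed duration, poll until the application reaches the expected state or a timeout expires. A bare-bones version of that pattern, with invented names and not mabl's actual implementation, might look like this:

```python
import time

def wait_until(condition, timeout=5.0, interval=0.1):
    """Poll `condition` until it returns True or the timeout expires."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if condition():
            return True
        time.sleep(interval)
    return False

# e.g. wait for an element to appear instead of sleeping a fixed time:
# wait_until(lambda: page.find("#checkout-button") is not None)
```

Fixed sleeps either waste time or flake when the app runs slower than expected; tuning the wait to the application's observed pace is what removes that class of flakiness.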

In addition to offering a low-code interface, mabl’s mobile testing capabilities also utilize AI to help make testers even more productive by reducing manual selector tests, speeding up test creation, automatically discovering gaps in test coverage, and identifying performance degradation issues. 

“Ensuring the highest quality software across the entire user experience is critical for organizations today. End user transactions globally occur primarily on smartphones, yet the mobile app testing and deployment process has failed to catch up to the pace of change, and largely continued to be arcane, time-intensive, and highly piecemeal in its focus on the testing experience,” said Dan Belcher, cofounder at mabl. “In this climate, organizations that don’t put mobile quality front and center will fail to attract and maintain their user base. At mabl, we’ve seen firsthand that organizations that embrace AI-powered, automated testing solutions have a competitive advantage, by democratizing mobile app testing and accelerating time to market.”

Tricentis announces series of AI Copilots for its testing portfolio, starting with Testim Copilot
https://sdtimes.com/test/tricentis-announces-series-of-ai-copilots-for-its-testing-portfolio-starting-with-testim-copilot/
Thu, 18 Apr 2024

The post Tricentis announces series of AI Copilots for its testing portfolio, starting with Testim Copilot appeared first on SD Times.

The testing company Tricentis has just announced the first in a series of AI copilots for its testing portfolio. The first solution is Testim Copilot, which adds AI capabilities to the automated testing platform Testim.

With Testim Copilot, users can input a text description of what they want to test and receive the JavaScript code for that test. In addition to providing test code, it also provides an explanation of that code, which makes it easier to understand and potentially reuse that code for future tests.

By incorporating generative AI into the testing process, Tricentis hopes to make testing accessible to testers with less technical expertise. For users of all experience levels, it has the potential to save time and reduce errors.

According to Tricentis, customers who have already been using its AI tools have lowered their test failure rate by 16%. 

“Testim Copilot puts AI directly into the hands of the user, automatically suggesting test cases and fixes, meaning more time spent on workflows to boost productivity and improve time to market for new applications,” said Mav Turner, chief product and strategy officer at Tricentis. “This is only the beginning – we expect future Tricentis Copilot releases to have even greater benefits.”

The next Tricentis products to get AI Copilots will be Tricentis Tosca (currently in beta) and Tricentis qTest. All of the Tricentis Copilots will be available as add-ons to each of the products. 

The company will also be hosting a webinar on May 7 to talk about its AI strategy and its upcoming AI Copilot releases. 

The power of automation and AI in testing environments
https://sdtimes.com/test/the-power-of-automation-and-ai-in-testing-environments/
Mon, 25 Mar 2024

The post The power of automation and AI in testing environments appeared first on SD Times.

Software testing is a critical aspect of the SDLC, but constraints on time and resources can cause software companies to treat testing as an afterthought, rather than a linchpin in product quality.

The primary challenge in the field of testing is the scarcity of talent and expertise, particularly in automation testing, according to Nilesh Patel, Senior Director of Software Services at KMS Technology. Many organizations struggle due to a lack of skilled testers capable of implementing and managing automated testing frameworks. As a result, companies often seek external assistance to fill this gap and are increasingly turning to AI/ML. 

Many organizations possess some level of automation but fail to leverage it fully, resorting to manual testing, which limits their efficiency and effectiveness in identifying and addressing software issues, Patel added. 

Another significant issue is the instability of testing environments and inadequate test data. Organizations frequently encounter difficulties with unstable cloud setups or lack the necessary devices for comprehensive testing, which hampers their ability to conduct efficient and effective tests. The challenge of securing realistic and sufficient test data further complicates the testing process. 

The potential solution for this, KMS’s Patel said, lies in leveraging advanced technologies, such as AI and machine learning, to predict and generate relevant test data, improving test coverage and the reliability of testing outcomes. 
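As a toy illustration of the idea, and not how any particular AI/ML product actually works, one could synthesize new test records by sampling from the value distributions observed in realistic data:

```python
import random

observed = [  # invented production-like records
    {"country": "US", "cart_items": 2},
    {"country": "DE", "cart_items": 1},
    {"country": "US", "cart_items": 5},
]

def synthesize(records, n, seed=0):
    """Sample n new test records from the observed per-field values."""
    rng = random.Random(seed)  # seeded for reproducible test data
    countries = [r["country"] for r in records]
    items = [r["cart_items"] for r in records]
    return [{"country": rng.choice(countries),
             "cart_items": rng.choice(items)} for _ in range(n)]

samples = synthesize(observed, 4)
```

Real ML-driven approaches go much further, learning correlations between fields and edge-case frequencies, but the premise is the same: generated test data mirrors how the application is actually used.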

Patel emphasized that applications are becoming more intricate than ever before, so AI/ML technologies are not only essential for managing that complexity but also play a crucial role in enhancing testing coverage by identifying gaps that could have been previously overlooked. 

“If you have GenAI or LLM models, they have algorithms that are actually looking at user actions and how the customers or end users are using the application itself, and they can predict what data sets you need,” Patel told SD Times. “So it helps increase test coverage as well. The AI can find gaps in your testing that you didn’t know about before.”

In an environment characterized by heightened complexity, rapid release expectations, and intense competition, with thousands of applications offering similar functionalities, Patel emphasizes the critical importance of launching high-quality software to ensure user retention despite these challenges. 

This challenge is particularly pronounced in the context of highly regulated industries like banking and health care, where AI and ML technologies can offer significant advantages, not only by streamlining the development process but also by facilitating the extensive documentation requirements inherent to these sectors.

“The level of detail is through the roof and you have to plan a lot more. It’s not as easy as just saying ‘I’m testing it, it works, I’ll take your word for it.’ No, you have to show evidence and have the buy-ins and it’s those [applications] that will probably have longer release cycles,” Patel said. “But that’s where you can use AI and GenAI again because those technologies will help figure out patterns that your business can use.”

The system or tool can monitor and analyze user actions and interactions, and predict potential defects. Patel emphasized the vast amount of data available in compliance-driven industries, which can be leveraged to improve product testing and coverage. By learning from every possible data point, including the outcomes of test cases, the algorithm improves its ability to ensure more comprehensive coverage for subsequent releases.

Testing is becoming all hands on deck

More people in the organization are actively engaged in testing to make sure that the application works for their part of the organization, Patel explained. 

“I would say everyone is involved now. In the old days, it used to be just the quality team or the testing team or maybe some of the software developers involved in testing, but I see it from everyone now. Everyone has to have high-quality products. Even the sales team, they’re doing demos right to their clients, and it has to work, so they have opinions on quality and in that case even serve as your end users,” Patel said.

“Then when they’re selling, they’re getting actual feedback on how the app works. When you see how it works, or how they’re using it, the testers can take that information and generate test cases based on that. So it’s hand in hand. It’s everyone’s responsibility,” he added. 

In the realm of quality assurance, the emphasis is placed on ensuring that business workflows are thoroughly tested and aligned with the end users’ actual experiences. This approach underscores the importance of moving beyond isolated or siloed tests to embrace a comprehensive testing strategy that mirrors real-world usage. Such a strategy highlights potential gaps in functionality that might not be apparent when testing components in isolation. 

To achieve this, according to Patel, it’s crucial to incorporate feedback and observations from all stakeholders, including sales teams, end users, and customers, into the testing process. This feedback should inform the creation of scenarios and test cases that accurately reflect the users’ experiences and challenges. 

By doing so, quality assurance can validate the effectiveness and efficiency of business workflows, ensuring that the product not only meets but exceeds the high standards expected by its users. This holistic approach to testing is essential for identifying and addressing issues before they affect the customer experience, ultimately leading to a more robust and reliable product.

Report: How mobile testing strategies are embracing AI
https://sdtimes.com/test/report-how-mobile-testing-strategies-are-embracing-ai/
Tue, 19 Mar 2024

The post Report: How mobile testing strategies are embracing AI appeared first on SD Times.

AI has seeped into every corner of the tech space over the last couple of years, and mobile testing is no exception. 

Tricentis just published its State of Mobile Application Quality Report 2024, in which it found that 48% of testing professionals said AI is already part of their mobile testing strategy. A further 21% plan to implement AI testing tools over the next six months.

The company estimates that AI can save testers an average of 40 hours per month and save 76-100% of a company’s budget per year. 

According to Tricentis, companies that don’t incorporate AI into their mobile testing strategy may face challenges like lack of talent, resources, and upskilling. 

In addition to helping testers work faster, AI can help them get more done. For example, the company found that those who use AI as part of their strategy have more of their company’s information and services accessible on mobile than those that don’t. 

“For organizations looking to implement artificial intelligence to boost their business objectives, testing is a fantastic place to start,” said David Colwell, vice president of AI and ML at Tricentis. “Mobile application testing is a great use case for AI because not only does it have multiple benefits – including significant time and cost savings, as well as quality improvement and risk reduction – but also its impact can be accurately measured.” 

Despite the adoption of AI, the report found that about half of respondents are still using manual testing, though 38% believe that they would save 51-75% of their company budget by fully automating their testing practice.

Another finding was that only 27% of respondents believe that their company’s test strategy exceeds expectations. 

And finally, a majority of respondents (90%) believe that they lose up to $2.49 million in revenue every year due to mobile quality issues.

For the report, Tricentis surveyed 1,028 senior IT leaders from small and medium-sized businesses and enterprises in December 2023.

SmartBear adds distributed tracing, developer API portal, and more in latest update
https://sdtimes.com/softwaredev/smartbear-adds-distributed-tracing-developer-api-portal-and-more-in-latest-update/
Thu, 02 Nov 2023

The post SmartBear adds distributed tracing, developer API portal, and more in latest update appeared first on SD Times.

The testing company SmartBear has announced updates to three of its products, with the goal of improving visibility into the software development life cycle.

“We continue to put our customers at the center of our strategies and deliver on their needs by expanding our product portfolio through innovative enhancements to our popular solutions used by millions of developers, testers, and software engineers worldwide,” said Dan Faulkner, chief product officer at SmartBear.

The first update is in BugSnag, a developer-focused monitoring platform. Earlier this year, the company acquired Aspecto, an OpenTelemetry-based company, and SmartBear has now integrated its distributed tracing capabilities into the BugSnag platform.

By adding distributed tracing to its platform, SmartBear is providing its customers with the ability to monitor errors and correlate them across traces, logs, and metrics to determine the root cause.  

Next up, it launched a developer portal for SmartBear APIs within SwaggerHub Portal, the API marketplace the company launched in August. According to the company, the new portal will help customers get started with SmartBear products more quickly.

SmartBear also released updates to the test management solution Zephyr Squad Cloud. New features include a test case library, n-level folder structure for test cycles, reordering of test executions, enhanced test cycle details, and advanced reporting of test execution results. 

“Whether you are a seasoned QA professional, a developer, or a project manager, this update will make your testing process faster, more efficient, and more user-friendly than ever before,” SmartBear wrote in a blog post.

DevOps success starts with quality engineering
https://sdtimes.com/devops/devops-success-starts-with-quality-engineering/
Fri, 06 Oct 2023

The post DevOps success starts with quality engineering appeared first on SD Times.

Studies show that DevOps adoption is still a moving target for the vast majority of software development teams, with just 11% reporting full DevOps maturity in 2022. Navigating this transition requires organization-wide metrics that help everyone understand their role. To that end, Google developed the DORA (DevOps Research and Assessment) metrics to give development teams a straightforward way to measure DevOps maturity. 

Fernando Mattos, Director of Product Marketing at low-code test automation platform mabl, put it this way: “DORA metrics capture the productivity and the stability of development pipelines, which can impact a business’ ability to innovate and keep customers happy. If an organization is struggling to balance higher deployment frequency with lowering change failure rates, quality engineering is critical for bridging that gap.”

DORA metrics were created in 2014 to help development organizations understand what strategies make teams elite and, in turn, help more companies mature their DevOps practices. This cements the correlation between engineering efficiency and hitting business goals: by delivering new features faster, fixing defects faster, and providing a better customer experience, organizations win more business, higher conversion rates, and lower customer churn.

Mattos went on to explain that mabl sees test automation as a critical piece in the delivery chain. “Lead time for change and change failure rate are two key metrics we see impacted there,” he said. “Change lead time is the time it takes from committing a piece of code to when it’s released to production. It’s a straightforward metric that captures a complex process.” He gave the example of a team that has streamlined its code review process, automated its entire pipeline, but still needs to do testing before a feature can be released to production. “A thorough software testing strategy can include unit testing, UI testing, API testing, end-to-end testing, and even non-functional tests like accessibility and performance. And these are essential for reducing change failure rates. But if it takes too long, then it extends the lead time for change, which negatively impacts the business. So, all the improvements that they did in other parts of the process just go down the drain.

“So, by integrating quality engineering and test automation specifically,” he continued, “development teams can shorten the time needed for comprehensive testing and really optimize their outcomes.”
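Change lead time itself is simple to compute once commit and deploy timestamps are tracked; the sketch below, with invented timestamps, shows the arithmetic teams aggregate (typically as a median across changes):

```python
from datetime import datetime

def lead_time_hours(commit_ts, deploy_ts):
    """Hours elapsed from code commit to production deploy."""
    fmt = "%Y-%m-%dT%H:%M:%S"
    delta = datetime.strptime(deploy_ts, fmt) - datetime.strptime(commit_ts, fmt)
    return delta.total_seconds() / 3600

print(lead_time_hours("2024-05-01T09:00:00", "2024-05-02T15:00:00"))
# → 30.0
```

The metric is easy to compute but, as Mattos notes, it captures a complex process: every review, build, and testing stage between commit and release contributes to that number.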

Mattos went on to stress that it’s critically important to ensure that test coverage is focused on the customer experience, which will lower the change failure rate. “Lots of customers we talk to have high test coverage, but it’s removed from the customer experience. So they feel like they’re testing everything, but when they release (the software) to production, defects still emerge, especially if there’s an integration with third-party tools, which is very difficult to pass using traditional test automation tools.”

Mabl is trying to help teams build end-to-end continuous testing that’s focused on the customer, according to Mattos. “That’s what customers care about when making purchasing decisions, the experience that they go through – functional and non-functional. Connecting to usage metrics tools, understanding what user journeys are most important to customers… those flows must have high coverage.” Mabl helps development organizations create and scale a quality engineering practice that supports DORA improvements and high-quality customer experiences, so businesses see a positive impact on their overall goals. “When your team has an automated testing practice that reflects the customer experience, deployments can happen more often without introducing defects. DORA metrics improve and customers are happier.”

Content created by SD Times and Mabl

Buyers Guide: AI and the evolution of test automation
https://sdtimes.com/test/buyers-guide-the-evolution-of-test-automation/
Fri, 22 Sep 2023

The post Buyers Guide: AI and the evolution of test automation appeared first on SD Times.

Test automation has undergone quite an evolution in the decades since it first became possible. 

Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations “can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,” he said. 


The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, “It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.”

How do you know when you’ve tested enough? “If your experience is anything like mine,” Parker said, “the first bugs that get reported when we put a new release out there, are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?”

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. “The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,” Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

“I think if you tried to ask anyone, ‘are you doing DevOps? Are you doing Agile?’ Everyone will say yes,” said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. “And everyone we speak to says, ‘yes, we’re already doing automation.’ And then you dig a little bit deeper, they say, ‘well, we’re running some Selenium, running some RPM, running some Postman script.’ So I think, yes, they are doing something.”

Wright said most enterprises that are having success with test automation have invested heavily in it, and have established automation as its own discipline. “They’ve got hundreds of people involved to keep this to a point where they can run thousands of scripts,” he said. But in the same breath, he noted that the conversation around test case optimization and risk-based testing still needs to be had. “Is over-testing a problem?” he posited. “There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feel-good feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.”


Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. “That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,” he said. “The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great. But it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.”

He suggested the industry create a bill of materials that includes testing. “This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.”
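Hicken's “bill of materials that includes testing” does not exist as a standard, so the following is purely an illustrative sketch: a component entry that carries a test attestation alongside its version, plus the kind of policy check a consumer might run before trusting it. The field names are invented, not drawn from any real SBOM format.

```python
# Hypothetical component entry in a testing-aware bill of materials.
# Field names are illustrative only; no real SBOM standard defines them.
import json

component = {
    "name": "payments-lib",
    "version": "2.4.1",
    "built_to_standards": ["MISRA C:2012"],
    "testing": {"suite": "unit+api", "passed": True, "coverage_pct": 87},
}

def meets_policy(entry, min_coverage=80):
    # A consumer-side gate: the component's tests must have passed and
    # its reported coverage must clear a minimum bar.
    t = entry.get("testing", {})
    return bool(t.get("passed")) and t.get("coverage_pct", 0) >= min_coverage

print(json.dumps(component, indent=2))
print(meets_policy(component))  # True
```

The point is the trust assumption Hicken names: a consumer could gate on the attestation instead of hoping the component's authors “did a good job.”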

Appvance’s Parker suggests doing testing as close to code delivery as possible. “If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,” he said. “But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately, and of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to, we have to get to that place where dev and test can work in parallel.”
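Parker's “same source” idea can be sketched in miniature: if requirements live in one machine-readable table, the test harness can be written against that table before the implementation exists, so dev and test proceed in parallel and finish together. This is a hypothetical illustration; the requirement IDs and cases are invented.

```python
# Hypothetical shared requirements source: both the developer and the test
# harness read this table, so the tests exist before the code does and run
# against requirements rather than against the delivered code.
REQUIREMENTS = {
    "REQ-101": {"input": 5,  "expected": 25},  # must square positive numbers
    "REQ-102": {"input": -3, "expected": 9},   # must handle negatives
}

def square(n):
    # The implementation dev delivers later; tests above predate it.
    return n * n

def run_requirement_tests(fn):
    # Returns the IDs of every requirement the implementation fails.
    return [rid for rid, case in REQUIREMENTS.items()
            if fn(case["input"]) != case["expected"]]

print(run_requirement_tests(square))  # []
```

An empty failure list means every requirement passes the moment dev finishes, which is the parallel workflow Parker describes.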

As Parker noted earlier, there are hundreds of types of testing tools on the market – for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations have between using specialized, discrete tools or tools that work well together. “In an old school traditional environment, you might have an IT department where developers write some tests. And then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other, certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario. And they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.”
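The virtuous circle Hicken describes, where the performance team picks up a functional scenario rather than rewriting it, can be sketched as follows. The scenario function and client are hypothetical stand-ins; the point is that one scenario definition serves both a correctness run and a timed load loop.

```python
# Hypothetical shared scenario: written once by the functional team,
# reused verbatim by the performance team.
import time

def checkout_scenario(client):
    # One user journey with its correctness assertions built in.
    assert client("add_to_cart") == "ok"
    assert client("pay") == "ok"

def fake_client(action):
    # Stand-in for a real HTTP client; always succeeds in this sketch.
    return "ok"

# Functional run: execute once, correctness only.
checkout_scenario(fake_client)

# Performance run: the exact same scenario, many iterations, timed.
start = time.perf_counter()
for _ in range(1000):
    checkout_scenario(fake_client)
elapsed = time.perf_counter() - start
print(f"1000 runs in {elapsed:.3f}s")
```

Any improvement the performance team makes to the scenario flows straight back to the functional team on the next run, which is the collaboration Hicken argues a unified platform enables.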

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for flows that are very important should come as close to 100% as possible. But determining what those flows are is the hard part, he said. “We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application. Here’s the complexity of each of those. And here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.”

It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing, the quality you’re shooting for, and the risk that a bug would introduce.

“If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly… and maybe that’s fine,” he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. “The problem there,” he said, “is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.”
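Parker's “every potential possible path” is, at small scale, just exhaustive graph traversal. The sketch below models an app as a toy state graph of screens and enumerates every journey from the home screen up to a depth limit, including off-script paths no human test plan would list. The app model is invented for illustration.

```python
# Toy app model: screen -> screens reachable from it.
APP = {
    "home":     ["search", "cart"],
    "product":  ["cart", "search"],
    "search":   ["product", "home"],
    "cart":     ["checkout", "home"],
    "checkout": [],
}

def all_paths(graph, start, max_depth):
    # Enumerate every path from `start`, stopping at dead ends or the
    # depth limit; humans script a handful of these, a machine walks all.
    paths = []
    def walk(node, path):
        if len(path) > max_depth or not graph[node]:
            paths.append(path)
            return
        for nxt in graph[node]:
            walk(nxt, path + [nxt])
    walk(start, [start])
    return paths

paths = all_paths(APP, "home", max_depth=4)
print(len(paths), "paths, e.g.", paths[0])
```

Even this five-screen toy yields over a dozen distinct journeys at depth four; a real application's path count grows far faster than any manually conceived test plan.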

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful. “If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal. But the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.”
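Hicken's distinction between quantity of tests and quality of coverage can be demonstrated with the standard-library trace hook: ten tests that all exercise the same branch cover fewer lines than three well-chosen ones. The grading function below is a toy stand-in for real application code.

```python
# Demonstrating "quantity is not coverage" with a stdlib trace hook.
import sys

def grade(score):
    if score >= 90:
        return "A"
    if score >= 60:
        return "pass"
    return "fail"

def lines_covered(fn, inputs):
    # Record which line numbers of `fn` actually execute for these inputs.
    hit = set()
    def tracer(frame, event, arg):
        if event == "line" and frame.f_code is fn.__code__:
            hit.add(frame.f_lineno)
        return tracer
    sys.settrace(tracer)
    try:
        for x in inputs:
            fn(x)
    finally:
        sys.settrace(None)
    return hit

many_same = lines_covered(grade, [95] * 10)   # ten tests, one branch
varied    = lines_covered(grade, [95, 70, 10])  # three tests, all branches
print(len(varied) > len(many_same))  # True
```

Ten identical tests leave most of the function untouched, which is exactly the feel-good-but-empty quantity Hicken warns against.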

The all-important user experience

It’s important to have someone creating tests who is very close to the customer and understands the customer journey, but not necessarily anything about writing code, according to mabl’s Mattos. “Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical…customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that – or even understand what was tested. So we think low code can bridge this gap. That’s what we do.”

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. “We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app that actually generates revenue?”

“We want to combine that with the machine,” he continued. “So the human understands the customer, and the machine can replicate and create several different scenarios that traverse those. But of course, lots of companies are investing in allowing the machine to just navigate through your website and find the different corners, but they aren’t able to prioritize. We don’t believe that they’re going to be able to prioritize which ones are the most important for your company.”

Keysight’s Wright said the company is seeing value in generative AI capabilities. “Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements, engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test (my application) with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing, it’s not really anything that spectacular and revolutionary.”
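The boundary value testing Wright refers to is a classic technique: for a field that accepts a range, the interesting inputs sit just inside, on, and just outside each edge. A minimal sketch, with a hypothetical quantity validator:

```python
# Classic boundary value analysis for a numeric input range.
def boundary_values(lo, hi):
    # Just outside, on, and just inside each edge of the valid range.
    return [lo - 1, lo, lo + 1, hi - 1, hi, hi + 1]

def accepts_quantity(q):
    # Hypothetical validator under test: quantities 1..100 are valid.
    return 1 <= q <= 100

results = {q: accepts_quantity(q) for q in boundary_values(1, 100)}
print(results)
# {0: False, 1: True, 2: True, 99: True, 100: True, 101: False}
```

Six inputs probe every edge of the range, which is why Wright calls an AI suggestion to try every language and country unspectacular: the machine is rediscovering a decades-old technique, not inventing one.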

Wright said organizations that have dabbled with automation over the years and have had some levels of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. “We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.” 

“As easy as it should be to get your test,” he continued, “you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.”

The post Buyers Guide: AI and the evolution of test automation appeared first on SD Times.

A guide to automated testing tools (SD Times, Sept. 22, 2023)
https://sdtimes.com/test/a-guide-to-automated-testing-tools-5/

The following is a listing of automated testing tool providers, along with a brief description of their offerings.

FEATURED PROVIDERS

APPVANCE is the leader in generative AI for Software Quality.  Its premier product AIQ is an AI-native, unified software quality platform that delivers unprecedented levels of productivity to accelerate digital transformation in the enterprise.   Leveraging generative AI and machine learning,  AIQ robots autonomously validate all the possible user flows to achieve complete application coverage.

KEYSIGHT is a leader in test automation, where our AI-driven, digital twin-based solutions help innovators push the boundaries of test case design, scheduling, and execution. Whether you’re looking to secure the best experience for application users, analyze high-fidelity models of complex systems, or take proactive control of network security and performance, easy-to-use solutions including Eggplant and our broad array of network, security, traffic emulation, and application test software help you conquer the complexities of continuous integration, deployment, and test.

MABL is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Mabl’s platform for easily creating, executing, and maintaining reliable browser, API and mobile web tests helps teams quickly deliver high-quality applications with confidence. That’s why brands like Charles Schwab, jetBlue, Dollar Shave Club, Stack Overflow, and more rely on mabl to create the digital experiences their customers demand.

PARASOFT helps organizations continuously deliver high-quality software with its AI-powered software testing platform and automated test solutions. Supporting embedded and enterprise markets, Parasoft’s proven technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. 

OTHER PROVIDERS

Applitools is built to test all the elements that appear on a screen with just one line of code, across all devices, browsers and all screen sizes. We support all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

RELATED CONTENT: The evolution of test automation

IBM: Quality is essential and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more. 

Micro Focus enables customers to accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

Kobiton offers GigaFox on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production.  

Progress Software’s Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environment, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium.

SmartBear offers tools for software development teams worldwide, ensuring visibility and end-to-end quality through test management, automation, API development, and application stability. Popular tools include SwaggerHub, TestComplete, BugSnag, ReadyAPI, Zephyr, and others. 

testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and dramatically improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user’s perspective. People creating tests on their system build 2,000+ tests per year per person. On top of that, testRigor helps teams deploy their analytics library in production that will make systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

The post A guide to automated testing tools appeared first on SD Times.
