QA Archives - SD Times (https://sdtimes.com/tag/qa/)

Q&A: Lessons NOT learned from CrowdStrike and other incidents
https://sdtimes.com/test/qa-lessons-not-learned-from-crowdstrike-and-other-incidents/ (Wed, 31 Jul 2024)

When an event like the CrowdStrike failure literally brings the world to its knees, there’s a lot to unpack there. Why did it happen? How did it happen? Could it have been prevented? 

On the most recent episode of our weekly podcast, What the Dev?, we spoke with Arthur Hicken, chief evangelist at the testing company Parasoft, about all of that and whether we’ll learn from the incident. 

Here’s an edited and abridged version of that conversation:

AH: I think that is the key topic right now: lessons not learned — not that it’s been long enough for us to prove that we haven’t learned anything. But sometimes I think, “Oh, this is going to be the one or we’re going to get better, we’re going to do things better.” And then other times, I look back at statements from Dijkstra in the 70s and go, maybe we’re not gonna learn now. My favorite Dijkstra quote is “if debugging is the act of removing bugs from software, then programming is the act of putting them in.” And it’s a good, funny statement, but I think it’s also key to one of the important things that went wrong with CrowdStrike. 

We have this mentality now, and there are a lot of different names for it — fail fast, run fast, break fast — that certainly makes sense in a prototyping era, or in a place where nothing much is at stake when failure happens. Obviously, here it matters. Even with a video game, you can lose a ton of money, right? But you generally don't kill people when a video game breaks because it did a bad update.

David Rubinstein, editor-in-chief of SD Times: You talk about how we keep having these catastrophic failures, and we keep not learning from them. But aren't they all a little different in certain ways? You had Log4j, which you thought would be the thing that finally got people to pay more attention. And then we get CrowdStrike. They're not all the same type of problem, are they?

AH: Yeah, that is true. I would say Log4j was kind of insidious, partly because we didn't recognize how many people use this thing. Logging is one of those topics people worry less about. I think there is a similarity between Log4j and CrowdStrike, and that is we have become complacent: software is built without an understanding of what the rigors are for quality, right? With Log4j, we didn't know who built it, for what purpose, and what it was suitable for. And with CrowdStrike, perhaps they hadn't really thought about what happens if your antivirus software makes your computer go belly up on you. And what if that computer is doing scheduling for hospitals or 911 services or things like that?

And so, what we've seen is that safety-critical systems are being impacted by software that was never built with that in mind. And one of the things to think about is, can we learn something from how we build safety-critical software, or what I like to call good software? Software meant to be reliable and robust, meant to operate under bad conditions.

I think that’s a really interesting point. Would it have hurt CrowdStrike to have built their software to better standards? And the answer is it wouldn’t. And I posit that if they were building better software, speed would not be impacted negatively and they’d spend less time testing and finding things.

DR: You're talking about safety critical, and back in the day that seemed to be the purview of what they were calling embedded systems, which really couldn't fail. They were running planes and medical devices and things that really were life and death. So is it possible that some of those principles could be carried over into today's software development? Or did you need those specific RTOSs (real-time operating systems) to ensure that kind of thing?

AH: There's certainly something to be said for a proper hardware and software stack. But even in the absence of that, you can take a standard laptop with your OS of choice on it and still build software that is robust. I have a little slide up on my other monitor from a joint webinar with CERT a couple of years ago, and one of the studies we used there found that 64% of the vulnerabilities in NIST's database are programming errors, and 51% of those are what they like to call classic errors. I look at what we just saw in CrowdStrike as a classic error. Buffer overflows, null pointer dereferences, reads of uninitialized data, integer overflows: these are what they call classic errors.

And they obviously had an effect. We don't have full visibility into what went wrong, right? We get what they tell us. But it appears that there was a buffer overflow caused by reading a config file, and one can argue about the effort and performance impact of protecting against buffer overflows, like paying attention to every piece of data. On the other hand, how long has that buffer overflow been sitting in that code? To me, a piece of code that's responding to an arbitrary configuration file is something you have to check. You just have to check it.
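
To make that concrete, here is a minimal sketch in Python of the kind of check being described. The record layout, delimiter, and field count are hypothetical (the details of CrowdStrike's channel files are not public); the point is only that externally supplied configuration gets validated before anything indexes into it.

```python
def parse_config_record(raw: bytes, expected_fields: int = 21) -> list[bytes]:
    """Parse one delimited config record, refusing to read past what is actually there."""
    fields = raw.split(b"\x00")
    # The "classic error" is assuming every record carries `expected_fields` entries
    # and indexing the last one unconditionally. Check the length first.
    if len(fields) < expected_fields:
        raise ValueError(
            f"config record has {len(fields)} fields, expected {expected_fields}"
        )
    return fields[:expected_fields]


if __name__ == "__main__":
    good = b"\x00".join(str(i).encode() for i in range(21))
    print(len(parse_config_record(good)))        # 21
    try:
        parse_config_record(b"only\x00three\x00fields")
    except ValueError as err:
        print(err)                               # rejected instead of crashing
```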

The question that keeps me up at night, if I were on the team at CrowdStrike, is: okay, we find it, we fix it, then where else is this exact problem? Are we going to go and look and find six, or 60, or 600 other potential bugs sitting in the code, exposed only by some external input?

DR: How much of this comes down to technical debt, where you have these things that linger in the code that never get cleaned up, and things are just kind of built on top of them? And now we’re in an environment where if a developer is actually looking to eliminate that and not writing new code, they’re seen as not being productive. How much of that is feeding into these problems that we’re having?

AH: That's a problem with our current common belief about what technical debt is, right? I mean, the original metaphor is solid: the idea that stupid things you're doing, or things that you failed to do, will come back to haunt you in the future. But simply running some kind of static analyzer and calling every unaddressed issue technical debt is not helpful. And not every tool can find buffer overflows that don't yet exist. There are certainly static analyzers that can look for design patterns that would allow buffer overflows, or enforce design patterns that disallow them; in other words, looking for the existence of a size check. And those are the kinds of findings that, when people are dealing with technical debt, they tend to call false positives. Good design patterns are almost always viewed as false positives by developers.

So again, we have to change the way we think; we have to build better software. Dodge said back in, I think it was the 1920s, that you can't test quality into a product. And the mentality in the software industry is that if we just test it a little more, we can somehow find the bugs. There are some things that are genuinely difficult to protect against, but buffer overflows, integer overflows, uninitialized memory, null pointer dereferences? These are not rocket science.


You may also like…

Lessons learned from CrowdStrike outages on releasing software updates

Software testing’s chaotic conundrum: Navigating the Three-Body Problem of speed, quality, and cost

Q&A: Solving the issue of stale feature flags

Q&A: Solving the issue of stale feature flags
https://sdtimes.com/test/qa-solving-the-issue-of-stale-feature-flags/ (Thu, 25 Jul 2024)

As we saw last week with what happened as a result of a bad update from CrowdStrike, it’s more clear than ever that companies releasing software need a way to roll back updates if things go wrong. 

In the most recent episode of our podcast, What the Dev?, we spoke with Konrad Niemiec, founder and CEO of the feature flagging tool, Lekko, to talk about the importance of adding feature flags to your code, but also what can go wrong if flags aren’t properly maintained.

Here is an edited and abridged version of that conversation:

David Rubinstein, editor-in-chief of SD Times: For years we’ve been talking about feature flagging in the context of code experimentation, where you can release to a small cohort of people. And if they like it, you can spread it out to more people, or you can roll it back without really doing any damage if it doesn’t work the way you thought it would. What’s your take on the whole feature flag situation?

Konrad Niemiec, founder and CEO of Lekko: Feature flagging is now considered the mainstream way of releasing software features. So it’s definitely a practice that we want people to continue doing and continue evangelizing.  

When I was at Uber we used a dynamic configuration tool called Flipper, and I left Uber for a smaller startup called Sisu, where we used one of the leading feature flagging tools on the market. Although that tool let us feature flag and did solve a bunch of problems for us, we encountered issues that added risk and complexity to our system.

So we ended up with a bunch of stale flags littered around our codebase, things we needed to keep around because the business needed them. We ended up in a situation where the code became very difficult to maintain, it was very hard to keep things clean, and we just ended up causing issues left and right.

DR: What do you mean by a stale flag?

KN: An implementation of a feature flag often looks like an if statement in the code. It'll say: if the feature flag is enabled, do one thing; otherwise, do the old version of the code. That's what it looks like when you're actually adding it as an engineer. A stale flag means the flag is all the way on. You've fully rolled it out, but you're leaving that 'else' code path in there. So you basically have some code that's pretty much never going to run, but it's still sitting in your binaries. It almost turns into a zombie. We like to call them zombie flags: they pop up when you least expect them. You think they're dead, but they come back to life.
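
In code, that shape looks something like the following minimal Python sketch (the flag name and checkout functions are invented for illustration, not Lekko's actual SDK):

```python
def render_checkout(user: str, flags: dict[str, bool]) -> str:
    # "new-checkout-flow" was fully rolled out long ago, so the else branch never
    # runs in production. It still ships in the binary, though, and can come back
    # to life (a "zombie flag") if someone flips the flag off by mistake.
    if flags.get("new-checkout-flow", False):
        return f"new checkout for {user}"
    else:
        return f"legacy checkout for {user}"   # stale path, effectively untested


print(render_checkout("alice", {"new-checkout-flow": True}))
```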

And this often happens at startups that are trying to move fast. You want to get features out as soon as possible, so you don't have time to do a flag cleanup and go through and categorize whether you should remove all this stuff from the code. The flags end up accumulating and potentially causing issues because of these stale code paths.

DR: What kind of issues?

KN: An easy example is that you have some untested code path that depends on a combination of feature flags. Let's say you have two feature flags in a similar part of the codebase, so there are now four different paths. If one of them hasn't been executed in a while, odds are there's a bug. One thing that happened at Sisu was that one of our largest customers hit an issue when we mistakenly turned off the wrong flag. We thought we were rolling back a new feature for them, but we jumped into a stale code path, and we ended up causing a big issue for that customer.
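
One blunt way to surface that kind of dead path is to exercise every flag combination in tests. A minimal pytest sketch (the flag names and the toy function are invented; one combination is deliberately broken to show the test catching it):

```python
import itertools

import pytest


def render_checkout(user: str, flags: dict[str, bool]) -> str:
    # Stand-in for application code guarded by two independent flags.
    if flags.get("new-pricing", False) and not flags.get("new-checkout-flow", False):
        return ""  # the stale combination nobody has run in months
    return f"checkout for {user}"


@pytest.mark.parametrize(
    "enabled", list(itertools.product([True, False], repeat=2))
)
def test_every_flag_combination(enabled):
    # Two flags mean four paths. Running all of them makes the broken one
    # show up as a failing test instead of a customer incident.
    flags = dict(zip(["new-pricing", "new-checkout-flow"], enabled))
    assert render_checkout("alice", flags), f"broken path for {flags}"
```

Run under pytest, the stale combination fails loudly, which is exactly the signal that was missing in the incident described above.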

DR: Is that something that artificial intelligence could take on as a way to go through the code and suggest removing these zombie flags?

KN: With current tools, it is a very manual process. You’re expected to just go through and clean things up yourself. And this is exactly what we’re seeing. We think that generative AI has a big role to play here. Right now we’re starting off with simple heuristic approaches as well as some generative AI approaches to figure out hey, what are some really complicated code paths here? Can we flag these and potentially bring these stale code paths down significantly? Can we define allowable configurations? 

Something we see as a big difference between dynamic configuration and feature flagging is that you can combine different flags, or different pieces of dynamic behavior in the code, into one defined configuration. That way, you reduce the number of possible options out there and the number of different code paths you have to worry about. And we think AI has a huge place in improving safety and reducing the risk of using this kind of tooling.
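
The underlying idea can be sketched in a few lines of Python (a generic illustration, not Lekko's actual API): rather than letting every flag vary independently, only a handful of named configurations are treated as valid.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class CheckoutConfig:
    new_pricing: bool
    new_checkout_flow: bool


# Two independent flags would normally mean 2**2 = 4 combinations (and N flags, 2**N).
# Naming the allowed configurations shrinks that to the few states you actually support.
ALLOWED_CONFIGS = {
    "legacy":  CheckoutConfig(new_pricing=False, new_checkout_flow=False),
    "rollout": CheckoutConfig(new_pricing=True, new_checkout_flow=True),
}


def resolve_config(name: str) -> CheckoutConfig:
    try:
        return ALLOWED_CONFIGS[name]
    except KeyError:
        raise ValueError(
            f"unknown configuration {name!r}; allowed: {sorted(ALLOWED_CONFIGS)}"
        )


print(resolve_config("rollout"))
```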

DR: How widely adopted is the use of feature flags at this point?

KN: We think that, especially among mid-market to large tech companies, a majority are currently using feature flagging in some capacity. You do find a significant portion of companies building their own. Often engineers will take it into their own hands and build a system, but when you grow to some level of complexity, you quickly realize there's a lot involved in making the system both scalable and able to handle a variety of use cases, and a lot of problems end up coming up as a result. So we think it's a good portion of companies, but they may not all be using third-party feature flagging tools. Some companies even go through the whole lifecycle: they start off with a feature flagging tool, they rip it out, and then they spend significant effort building tooling similar to the dynamic configuration tools Google, Uber, and Facebook have.


You may also like…

Lessons learned from CrowdStrike outages on releasing software updates

Q&A on the Rust Foundation’s new Safety-Critical Rust Consortium

SonarCloud integrates with Amazon CodeCatalyst to promote Clean Code practices
https://sdtimes.com/qa/sonarcloud-integrates-with-amazon-codecatalyst-to-promote-clean-code-practices/ (Mon, 10 Jun 2024)

Sonar has announced a new integration of its code review tool, SonarCloud, with Amazon CodeCatalyst to help improve the development process for cloud-based applications. 

Amazon CodeCatalyst is a platform that provides blueprints for setting up software development projects in AWS, including setting up project tools, managing CI/CD pipelines, provisioning and configuring development environments, and more.

The goal of this integration is to improve the overall quality of application code by enforcing quality or “Clean Code” best practices throughout the software development life cycle for applications built using CodeCatalyst. 

“An increasing number of developers are writing code and building apps in the cloud. We created SonarCloud to provide these developers an easy way to achieve a state of Clean Code, designing a tool that seamlessly integrates with DevOps platforms,” said Fabrice Bellingard, VP of products at Sonar. “Our growing collaboration with AWS will help more cloud-based development teams create high-quality code with our unique Clean as You Code methodology.” 

Nicolas Pujol, ISV Partner Management Leader in EMEA at AWS, added: “Customers building apps need to release code early and often, and with solid DevOps practices and tools. Sonar has done a great job giving developers and DevOps leaders solutions to be more productive, and we’re excited to collaborate to make these tools available to AWS customers.” 

Sonar has also announced that it received an AWS Foundational Technical Review (FTR) certification, which indicates it follows specific guidelines around reducing risk in security, reliability, and operations. 


Google updates Search algorithm to help reduce spam and low-quality content
https://sdtimes.com/google/google-updates-search-algorithm-to-help-reduce-spam-and-low-quality-content/ (Fri, 08 Mar 2024)

Google has unveiled updates aimed at enhancing the quality and relevance of its search results. Among these updates are algorithmic improvements to its core ranking systems, designed to prioritize the surfacing of the most useful information available online while concurrently minimizing the presence of unoriginal content. 

Additionally, Google is revising its spam policies to more effectively exclude low-quality content from its search results. The updated policies target specific types of undesirable content, including websites that have expired and been repurposed for spam, as well as the proliferation of obituary spam. 

These measures are part of Google’s broader strategy to maintain the integrity of its search results and protect users from irrelevant or malicious content, thereby enhancing the overall user experience on the platform.

“This update involves refining some of our core ranking systems to help us better understand if webpages are unhelpful, have a poor user experience or feel like they were created for search engines instead of people. This could include sites created primarily to match very specific search queries,” Elizabeth Tucker, director of product management for Google, wrote in a blog post. “We believe these updates will reduce the amount of low-quality content on Search and send more traffic to helpful and high-quality sites. Based on our evaluations, we expect that the combination of this update and our previous efforts will collectively reduce low-quality, unoriginal content in search results by 40%.”

Google is enhancing its policy to tackle abusive content creation practices aimed at manipulating search rankings through scaled content production, regardless of whether it is generated by automation, humans, or a combination of both. 

This update aims to target and mitigate the impact of low-value content created en masse, such as webpages that appear to provide answers to common searches but ultimately fail to offer useful information. This initiative reflects Google’s commitment to improving the quality of content surfaced by its search engine, ensuring users receive relevant and valuable information, according to Google.

Report: Slow mobile app releases cost over $100,000 in lost revenue per year for 75% of companies
https://sdtimes.com/test/report-slow-mobile-app-releases-cost-over-100000-in-lost-revenue-per-year-for-75-of-companies/ (Thu, 14 Dec 2023)

It’s no surprise that slow development processes are costing companies greatly, but by how much? According to a new report by the mobile testing company Kobiton, 75% of respondents said slow mobile app releases cost their company at least $100,000 each year, and 13% said it costs them between $1 million and $10 million every year. 

Additionally, 75% said that mobile apps represent at least a quarter of their companies’ revenue, which highlights the fact that slow releases may threaten the viability of their business, not just their bottom line. 

When asked how frequently they release mobile app updates, 38% said weekly, 27% said monthly, 20% said daily, 14% said quarterly, and 1% said less than once per quarter. 

In terms of what is causing delays, limited financial resources are the culprit for 50% of organizations. Forty-seven percent also cited inefficient development and QA processes and 40% cited a lack of skilled development and QA labor. 

To combat some of these challenges, many companies are turning to test automation and other AI-based technologies. Manual tests take at least three days for 61% of companies, while automated tests can be completed in a matter of hours.

Twenty-eight percent of respondents claimed it took 1-3 hours to run automated tests, 32% said it takes 3-6 hours, 21% said it takes 6-9 hours, and 8% said it takes more than 10 hours. Eleven percent of respondents can complete their automated tests in under an hour. 

For those that have moved from manual to automated tests, time to market decreased by 25-50% for 37% of respondents and more than 50% for 18% of respondents.  

At the time of the survey, 48% of the respondents were automating 10 to 24% of their tests and 22% were automating 25 to 49% of their tests. About half of respondents reported that, ideally, they would like to automate more than 50% of their tests.

Some of the top strategies companies are following to increase their test automation coverage include providing training to enhance automation skills, hiring more automation engineers, using low-code/no-code automation tools, and building automation scripts using iOS and Android Native frameworks.

When asked how generative AI is playing into their testing strategy, 47% said they are using it to generate test scripts, 60% are using it to update scripts or code, and 55% are using it to analyze test results. Only 8% of respondents said they have not used generative AI at all. 

Looking ahead to the future, respondents said the most useful AI capabilities would be the ability to predict potential defects (51%), using generative AI to create test cases and data (45%), natural language processing for better test case documentation (44%), image recognition for UI testing (36%), and self-healing test strategies (36%).

“Witnessing firsthand the transformative power of AI tools in the realm of mobile app development and testing for our customers has been a remarkable journey,” said Frank Moyer, CTO of Kobiton. “By enhancing productivity, reducing costs, and enabling professionals to focus on more strategic tasks, AI is fundamentally reshaping the industry’s landscape. As these tools continue to evolve, I anticipate a profound and accelerated embrace of AI-driven methodologies.”

SD Times Open-Source Project of the Week: Storybook
https://sdtimes.com/softwaredev/sd-times-open-source-project-of-the-week-storybook-2/ (Fri, 25 Aug 2023)

Storybook is a frontend workshop for building UI components and pages in isolation, designed for UI development, testing, and documentation.

Storybook comes with a lot of addons for component design, documentation, testing, interactivity, and so on. Storybook’s API makes it possible to configure and extend in various ways. It has even been extended to support React Native, Android, iOS, and Flutter development for mobile.

The tool offers a platform for constructing UIs independently from the main application. It enables developers to work on challenging scenarios and uncommon situations without requiring the entire application to be executed. This isolation of UI development helps streamline the process and makes it easier to handle complex states and edge cases.

Users can create components and pages without having to deal with data, APIs, or business logic complexities.

Users can also render components in key states that are tricky to reproduce in an app. Then save those states as stories to revisit during development, testing, and QA. After building UI components in isolation, users can integrate them into their app with the assurance that they are well-tested for all potential edge cases.

Additional details are available here.

Report: Test automation coverage has rebounded after a dip last year
https://sdtimes.com/test/report-test-automation-coverage-has-rebounded-after-a-dip-last-year/ (Wed, 09 Nov 2022)

Test automation coverage has rebounded after a dip last year, according to SmartBear’s State of Quality Testing 2022 report. 

SmartBear conducted a global online survey over the course of five weeks earlier this year. The findings are based upon aggregated responses from more than 1,500 software developers, testers, IT/operations professionals, and business leaders across many different industries.

Last year, 11% of respondents said their tests were performed completely manually; that number dwindled to 7% this year, nearly returning to the pre-pandemic level of 5%.

This year also saw slightly higher numbers than ever before for respondents who said 50-99% of their tests are automated. The biggest jump happened in the 76-99% group, which jumped more than 10 points to 16% over the last year. The share of respondents who said their tests are all automated regained some ground, returning to the pre-pandemic level of 4%.

When looking at the different types of tests and how they are performed, over half of respondents reported using manual testing for usability and user acceptance tests. Unit tests, performance tests, and BDD framework tests were highest among all automated testing. 

Another finding is that the time spent testing increased for traditional testers but decreased for developers. However, the average percentage of time spent testing remained the same as last year, at 63% across the organization.

QA engineers/automation engineers spend the most time testing, averaging 76% of their week on testing, up from 72% last year. While the trend for developer testing inched up between 2018 and 2021, reaching 47%, it sank to 40% this year. Testing done by architects plummeted from 49% to 30% over the last year.

This year, the most time-consuming activity was performing manual and exploratory tests, cited by 26% of respondents, up from 18% last year. Over the same period, the share citing learning how to use test tools as their most time-consuming challenge fell from 22% to just 8%.

The biggest challenges that organizations reported for test automation varied by company size. Companies with 1-25 employees cited "not having the correct tools" as their biggest challenge, while companies with 501-1,000 employees cited "not having the right testing environments available" as theirs. Both differ from the most-cited problem last year, "not enough time to test," at 37%.

The importance of tool integration for QA teams
https://sdtimes.com/test/the-importance-of-tool-integration-for-qa-teams/ (Thu, 06 Oct 2022)

Everybody cares about software quality (or they ought to, at least), but it’s easier said than done. Lots of factors can cause software to fail, from tools and systems not integrating well to people not communicating well.

According to ConnectALL, improving value stream flow can help with these communication breakdowns, tool integration can improve the quality assurance function, and integrating test management tools with other tools can help provide higher-quality test coverage.

In a recent SD Times Live! event, Lance Knight, president and COO at ConnectALL, and Johnathan McGowan, principal solutions architect at ConnectALL, shared six ways that tool integration can improve test management processes and QA. 

“It’s a very complex area, right? There’s a lot going on here in the testing realm, and different teams are doing different kinds of tests. Your developers are doing those unit tests, your QA team is doing manual automated and regression, and then your security folks are doing something else. And they’ve all each got their own little places that they’re doing all of that in,” said McGowan.

This article first appeared on VSM Times. To read the full article, visit the original post here.

Automated testing still lags
https://sdtimes.com/test/automated-testing-still-lags/ (Tue, 02 Aug 2022)

Automated testing initiatives still lag behind in many organizations as increasingly complex testing environments are met with a lack of skilled personnel to set up tests. 

Recent research conducted by Forrester and commissioned by Keysight found that while only 11% of respondents had fully automated testing, 84% of respondents said that the majority of testing involves complex environments.

For the study, Forrester conducted an online survey in December 2021 that involved 406 test operations decision-makers at organizations in North America, EMEA, and APAC to evaluate current testing capabilities for electronic design and development and to hear their thoughts on investing in automation.

The complexity of testing has increased the number of tests, according to 75% of the respondents. Sixty-seven percent of respondents said the time to complete tests has risen too.

Challenges with automated testing 

Those that do utilize automated testing often have difficulty making the tests stable in these complex environments, according to Paulina Gatkowska, head of quality assurance at STX Next, a Python software house. 

One such area where developers often run into challenges is UI testing, in which the tests work like a user: they use the browser, click through the application, fill in fields, and more. These tests are quite heavy, Gatkowska continued, and when a developer finishes a test in a local environment, sometimes it fails in another environment, or only works 50% of the time, or works the first week and then starts to be flaky.

“What’s the point of writing and running the tests, if sometimes they fail even though there is no bug? To avoid this problem, it’s important to have a good architecture of the tests and good quality of the code. The tests should be independent, so they don’t interfere with each other, and you should have methods for repetitive code to change it only in one place when something changes in the application,” Gatkowska said. “You should also attach great importance to ‘waits’ – the conditions that must be met before the test proceeds. Having this in mind, you’ll be able to avoid the horror of maintaining flaky tests.”
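
Gatkowska's points about shared helpers and explicit waits translate directly into code. A minimal sketch with Selenium's Python bindings (the URL and element IDs are placeholders): the login steps live in one helper, so a page change is fixed in one place, and each step waits on a condition rather than an arbitrary sleep.

```python
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support import expected_conditions as EC
from selenium.webdriver.support.ui import WebDriverWait


def submit_login(driver, base_url: str, user: str, password: str) -> None:
    """Shared helper: one place to update when the login page changes."""
    driver.get(f"{base_url}/login")
    wait = WebDriverWait(driver, timeout=10)
    # Explicit waits tie each step to a condition instead of a sleep,
    # which is what keeps the test from going flaky on a slow environment.
    wait.until(EC.visibility_of_element_located((By.ID, "username"))).send_keys(user)
    driver.find_element(By.ID, "password").send_keys(password)
    wait.until(EC.element_to_be_clickable((By.ID, "submit"))).click()


if __name__ == "__main__":
    driver = webdriver.Chrome()
    try:
        # Replace with a reachable application URL and real test credentials.
        submit_login(driver, "https://example.test", "qa-user", "not-a-real-password")
    finally:
        driver.quit()
```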

Then there are issues with the network that can impede automated tests, according to Kavin Patel, founder and CEO of Convrrt, a landing page builder. A common difficulty for QA teams is network disconnection: shaky connections make it difficult to reach databases, VPNs, third-party services, APIs, and certain testing environments, adding needless time to the testing process. The inability to access the virtual environments that testers typically use to test programs is also a worry.

Because some teams lack the expertise to implement automated testing, manual testing is still used to cover the automation gaps. This creates a disconnect with the R&D team, which is usually two steps ahead, according to Kenny Kline, president of Barbend, an online platform for strength sports training and nutrition.

“To keep up with them, testers must finish their cycles within four to six hours, but manual testing cannot keep up with the rate of development. Then, it is moved to the conclusion of the cycle,” Kline said. “Consequently, teams must include a manual regression, sometimes known as a stabilization phase, at the end of each sprint. They extend the release cadence rather than lowering it.”

Companies are shifting towards full test automation 

Forrester’s research also found that 45% of companies say that they’re willing to move to a fully automated testing environment within the next three years to increase productivity, gain the ability to simulate product function and performance, and shorten the time to market. 

The companies that have implemented automated testing right have reaped many rewards, according to Michael Urbanovich, head of the testing department at a1qa, an international quality assurance company. The ones relying on robotic process automation (RPA), AI, ML, natural language processing (NLP), and computer vision for automated testing have attained greater efficiency, sped up time to market, and freed up more resources to focus on strategic business initiatives. RPA alone can lower the time required for repetitive tasks by up to 25%, according to research by Automation Alley.

For those looking to gain even more from their automation initiatives, a1qa’s Urbanovich suggests looking into continuous test execution, implementing self-healing capabilities, RPA, API automation, regression testing, and UAT automation. 

Urbanovich emphasized that the decision to introduce automated QA workflows must be conscious. Rather than running with the crowd to follow the hype, organizations must calculate ROI based on their individual business needs and wisely choose the scope for automation and a fit-for-purpose strategy. 

“To meet quality gates, companies need to decide which automated tests to run and how to run them in the first place, especially considering that the majority of Agile-driven sprints last for up to only several weeks,” Urbanovich said. 

Although some may hope it were this easy, testers can’t just spawn automated tests and sit back like Paley’s watchmaker gods. The tests need to be guided and nurtured. 

“The number one challenge with automated testing is making sure you have a test for all possibilities. Covering all possibilities is an ongoing process, but executives especially hear that you have automated testing now and forget that it only covers what you actually are testing and not all possibilities,” said David Garthe, founder of Gravyware, a social media management tool. “As your application is a living thing, so are the tests that are for it. You need to factor in maintenance costs and expectations within your budget.” 

Also, just because a test worked last sprint, doesn’t mean it will work as expected this sprint, Garthe added. As applications change, testers have to make sure that the automated tests cover the new process correctly as well. 

Garthe said that he has had a great experience using Selenium, referring to it as the “gold standard” with regard to automated testing. It has the largest group of developers that can step in and work on a new project. 

“We’ve used other applications for testing, and they work fine for a small application, but if there’s a learning curve, they all fall short somewhere,” Garthe said. “Selenium will allow your team to jump right in and there are so many examples already written that you can shortcut the test creation time.”

And, there are many other choices to weave through to start the automated testing process.

"When you think about test automation, first of all you have to choose the framework. What language should it be? Do you want to have frontend or backend tests, or both? Do you want to use Gherkin in your tests?" STX Next's Gatkowska said. "Then of course you need to have your favorite code editor, and it would be annoying to run the tests only on your local machine, so it's important to configure jobs in the CI/CD tool. In the end, it's good to see valuable output in a reporting tool."
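
As a small sketch of how those choices fit together (pytest is just one possible framework; the markers and file layout are illustrative), tagging frontend and backend tests lets separate CI/CD jobs run `pytest -m backend` and `pytest -m frontend` as independent stages and report on each.

```python
# conftest.py
import pytest


def pytest_configure(config):
    # Register the markers so `pytest --strict-markers` keeps the suite honest.
    config.addinivalue_line("markers", "frontend: browser-level UI tests")
    config.addinivalue_line("markers", "backend: API and service-level tests")


# test_health.py (same project; shown in one block here for brevity)
@pytest.mark.backend
def test_api_health_contract():
    # Stand-in for a real API check; a CI job would run this stage on every push.
    assert {"status": "ok"}["status"] == "ok"
```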

Choosing the right tool and automated testing framework, though, can pose a challenge, because different tools excel under different conditions, according to Robert Warner, head of marketing at VirtualValley, a UK-based virtual assistant company.

“Testing product vendors overstate their goods’ abilities. Many vendors believe they have a secret sauce for automation, but this produces misunderstandings and confusion. Many of us don’t conduct enough study before buying commercial tools, that’s why we buy them without proper evaluation,” Warner said. “Choosing a test tool is like marrying, in my opinion. Incompatible marriages tend to fail. Without a good test tool, test automation will fail.”

AI is augmenting the automated testing experience

Fifty-two percent of the companies that responded to the Forrester survey said they would consider using AI for integrating complex test suites within the next three years.

The use of AI for integrated testing provides both better (not necessarily more) testing coverage and the ability to support agile product development and release, according to the Forrester report.

Companies are also looking to add AI for integrating complex test suites, an area of test automation that is severely lacking, with only 16% of companies using it today. 

a1qa’s Urbanovich explained that one of the best ways to cope with boosted software complexity and tight deadlines is to apply a risk-based approach. For that, AI is indispensable. Apart from removing redundant test cases, generating self-healing scripts, and predicting defects, it streamlines priority-setting. 

“In comparison with the previous year, the number of IT leaders leveraging AI for test prioritization has risen to 43%. Why so?” Urbanovich continued, alluding to the World Quality Report 2021-2022. “When you prioritize automated tests, you put customer needs FIRST because you care about the features that end users apply the most. Another vivid gain is that software teams can organize a more structured and thoughtful QA strategy. Identifying risks makes it easier to define the scope and execution sequence.”

Most of the time, companies are looking to implement AI in testing to leverage the speed improvements and increased scope of testing, according to Kevin Surace, CTO at Appvance, an AI-driven software testing provider.

“You can’t write a script in 10 minutes, maybe one if you’re a Selenium master. Okay, the machine can write 5,000 in 10 minutes. And yes, they’re valid. And yes, they cover your use cases that you care about. And yes, they have 1,000s of validations, whatever you want to do. And all you did was spend one time teaching it your application, no different than walking into a room of 100 manual testers that you just hired, and you’re teaching them the application: do this, don’t do this, this is the outcome, these are the outcomes we want,” Surace said. “That’s what I’ve done, I got 100 little robots or however many we need that need to be taught what to do and what not to do, but mostly what not to do.”

QA has difficulty grasping how to handle AI in testing 

Appvance's Surace said that where testing ultimately needs to go is completely hands-off from humans.

“If you just step back and say what’s going on in this industry, I need a 4,000 times productivity improvement in order to find essentially all the bugs that the CEO wants me to find, which is find all the bugs before users do,” Surace said. “Well, if you’ve got to increase productivity 4,000 times you cannot have people involved in the creation of very many use cases, or certainly not the maintenance of them. That has to come off the table just like you can’t put people in a spaceship and tell them to drive it, there’s too much that has to be done to control it.”  

Humans are still good at prioritizing which bugs to tackle based on what the business goals are, because only humans can really look at something and say, well, we'll just leave it, it's okay, we're not going to deal with it, or say this is really critical and push it to the developers to fix before release, Surace continued.

“A number of people are all excited about using AI and machine learning to prioritize which tests you should run, and that entire concept is wrong. The entire concept should be, I don’t care what you change in application, and I don’t understand your source code enough to know the impacts and on every particular outcome. Instead, I should be able to create 10,000 scripts and run them in the next hour, and give you the results across the entire application,” Surace said. “Job one, two, and three of QA is to make sure that you found the bugs before your users do. That’s it, then you can decide what to do with them. Every time a user finds a bug, I can guarantee you it’s in something you didn’t test or you chose to let the bug out. So when you think about it, that way users find bugs and the things we didn’t test. So what do we need to do? We need to test a lot more, not less.”

A challenge with AI is that it is a foreign concept to QA people so teaching them how to train AI is a whole different field, according to Surace. 

First off, many people on the QA team are scared of AI, Surace continued, because they see themselves as QA people but really have the skillset of a Selenium tester who writes Selenium scripts and tests them. Now that has been taken away, similar to how RPA disrupted industries such as customer support and insurance claims processing.

The second challenge is that they’re not trained in it.

"So one problem that we see is, how do you explain how the algorithms work?" Surace said. "In AI, one of the challenges we have in QA and across the AI industry is how do we make people comfortable with a machine that they may never be able to understand. It's beyond their skillset to actually understand the algorithms at work here and why they work and how neural networks work, so they now have to trust that the machine will get them from point A to point B, just like we trust the car to get from point A to point B."

However, there are some areas of testing in which AI is not as applicable: for example, a form-based application, such as a financial services application, where there is nothing for the application to do other than guide you through the form.

"There's nothing else to do with an AI that can add much value, because one script that's data-driven already handles the one use case that you care about. There are no more use cases. So AI is used to augment your use cases, but if you only have one, you should write it. But that's few and far between, and most applications have perhaps hundreds of thousands of use cases, or thousands of possible combinatorial use cases," Surace said.

According to Eli Lopian, CEO at Typemock, a provider of unit testing tools to developers worldwide, QA teams are still very effective at handling UI testing because the UI can often change without the behavior changing behind the scenes. 

"The QA teams are really good at doing that because they have a feel for the UI and how easy it is for the end user to use, and they can see things more from a product point of view and less from a 'does it work or does it not work' point of view, which is really essential if you want an application to really succeed," Lopian said.

Dan Belcher, co-founder of mabl, said that there is still plenty of room for a human in the loop when it comes to AI-driven testing.

"So far, what we're doing is supercharging quality engineers, so the human is certainly in the loop. It's eliminating repetitive tasks where their intellect isn't adding as much value, and doing things that require high speed, because when you're deploying every few minutes, you can't really rely on a human to be involved in that loop of executing tests. And so what we're empowering them to do is to focus on higher-level concerns, like: do I have the right test coverage? Are the things that we're seeing good or bad for the users?" Belcher said.

AI/ML excels at writing tests from unit to end-to-end scale

One area where AI/ML in testing excels is unit testing of legacy code, according to Typemock's Lopian.

"Software groups often have this legacy code, which could be a piece of code that maybe they didn't write unit tests for beforehand, or there was some kind of crisis and they had to do it quickly, and they didn't do the tests. So you have this little piece of code that doesn't have any unit tests. And that grows," Lopian said. "Even though it's a difficult piece of code that wasn't built with testability in mind, we have the technology to both write those tests for those kinds of code and to generate them in an automatic manner using ML."

The AI/ML can then make sure that the code runs in a clean and modernized way, and the generated tests make it safe to refactor the code to work in a secure manner, Lopian added.
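
Typemock's tooling targets .NET and C++, but the underlying idea, pinning a legacy routine's current behavior with a test before touching it, translates to any stack. A rough Python sketch (the legacy function and its awkward dependency are invented for illustration):

```python
import unittest
from unittest import mock


def fetch_rate_from_db() -> float:
    # The hard dependency that made the legacy code "untestable" in the first place.
    raise RuntimeError("real database not available in tests")


def legacy_discount(order_total: float) -> float:
    # Imagine this shipped years ago without any unit tests.
    rate = fetch_rate_from_db()
    return round(order_total * (1 - rate), 2)


class CharacterizationTest(unittest.TestCase):
    def test_pins_current_behavior(self):
        # Mock out the awkward dependency so the legacy logic can be exercised
        # as-is, before anyone attempts a refactor.
        with mock.patch(f"{__name__}.fetch_rate_from_db", return_value=0.1):
            self.assertEqual(legacy_discount(100), 90.0)


if __name__ == "__main__":
    unittest.main()
```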

AI-driven testing is also beneficial for UI testing because testers don't have to explicitly define how elements are referenced in the UI; the AI can figure that out, according to mabl's Belcher. And when the UI changes, typical test automation produces a lot of failures, whereas the AI can learn and improve the tests automatically, resulting in an 85-90% reduction in the amount of time engineers spend creating and maintaining tests.

In the UI testing space, AI can be used for auto-healing, intelligent timing, automatically detecting visual changes in the UI, and detecting anomalies in performance.

According to Belcher, AI can be the vital component in creating a more holistic approach to end-to-end testing. 

"We've all known that the answer to improving quality was to bring together the insights that you get when you think about all facets of quality, whether that's functional or performance, or accessibility, or UX, and to think about that holistically, whether it's API or web or mobile. And so the area that will see the most innovation is when you can start to answer questions like: based on my UI tests, what API tests should I have, and how do they relate? So when the UI test fails, was it an API issue? And then, when a functional test fails, did anything change from the user experience that could be related to that?" Belcher said. "And so the key to doing this is we have to bring all of the end-to-end testing together, and all the data that's produced, and then you can really layer in some incredibly innovative intelligence, once you have all of that data and you can correlate it and make predictions based on it."

6 types of Automated Testing Frameworks 
  1. Linear Automation Framework – also known as a record-and-playback framework, in which testers don't need to write code to create functions and the steps are written in sequential order. Testers record steps such as navigation, user input, or checkpoints, and then play the script back automatically to run the test.
  2. Modular-Based Testing Framework – testers divide the application under test into separate units, functions, or sections, each of which can be tested in isolation. Test scripts are created for each part and then combined to build larger tests.
  3. Library Architecture Testing Framework – similar tasks within the scripts are identified and grouped by function, so the application is ultimately broken down by common objectives.
  4. Data-Driven Framework – test data is separated from script logic and stored externally. The test scripts connect to the external data source and read and populate the necessary data when needed (see the pytest sketch after this list).
  5. Keyword-Driven Framework – each function of the application is laid out in a table with instructions in consecutive order for each test that needs to be run.
  6. Hybrid Testing Framework – a combination of any of the previously mentioned frameworks, set up to leverage the advantages of some and mitigate the weaknesses of others.

Source: https://smartbear.com/learn/automated-testing/test-automation-frameworks/
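
As one concrete illustration of the data-driven style, the sketch below uses pytest (one of many frameworks that support it) with an in-memory CSV standing in for the external data source, so the same test logic runs once per data row.

```python
import csv
import io

import pytest

# In a real suite this data would live in an external CSV, spreadsheet, or database;
# an in-memory CSV keeps the sketch self-contained.
TEST_DATA = io.StringIO(
    "username,password,expect_ok\n"
    "alice,correct-horse,True\n"
    "bob,,False\n"
)


def login(username: str, password: str) -> bool:
    # Stand-in for the system under test.
    return bool(username and password)


@pytest.mark.parametrize("row", list(csv.DictReader(TEST_DATA)))
def test_login_data_driven(row):
    assert login(row["username"], row["password"]) == (row["expect_ok"] == "True")
```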

SAST, SCA & QA are the best tools to combat hackers' smaller, more sophisticated attacks
https://sdtimes.com/security/sast-sca-qa-are-the-best-tools-to-combat-hackers-smaller-more-sophisticated-attacks/ (Thu, 21 Jul 2022)

As many organizations bolster their security measures, hackers have shifted their focus to smaller and more concentrated attacks, according to Daniel Fonseca, senior solutions engineer at Kiuwan, in the webinar "Preventing common vulnerabilities with Kiuwan's SAST, SCA, and QA tools."

The National Vulnerability Database (NVD) recorded more than 20,000 security vulnerabilities (CVEs) published in 2021 – a 15% increase from 2020. The top five vulnerability categories in the 2021 OWASP Top 10 were broken access control, cryptographic failures, injection, insecure design, and security misconfiguration.

“In order to prevent such vulnerabilities, companies need to shift their priorities and infuse best practices within each organization. For starters, a product roadmap is a high-level summary that visualizes product direction over time, and it’s great for implementing best practices within development,” Fonseca said. 

Another great way to tackle these problems is by employing a Static Application Security Testing (SAST) tool or other tools that can identify risks early in the CI pipeline or within the IDE. 

As security moves later in the cycle, coverage becomes increasingly challenging. Implementing security earlier in the development cycle with SAST, SCA, and QA tools automatically reduces the remediation work that can arise later in the cycle, according to Fonseca. Because half of web application vulnerabilities are critical or high-risk, this poses a significant challenge for developers: time to remediation for vulnerabilities is over 60 days, with significant cost accrued during the remediation process.

Watch this on-demand webinar to find out more about how Kiuwan’s two-part platform can be used to prevent security breaches. Kiuwan is composed of the cloud component, which serves as the web console, and the Local Analyzer, which performs scans and looks for patterns within source code in applications. 

“Basically, the reason why we have the Local Analyzer is because we want to make sure that you don’t need to ship or upload the source code of your applications anywhere in the cloud. But that intellectual property is going to remain on your premises, and the local analyzer will run the scans locally and upload only the results to the cloud,” Fonseca said. 
