Buyers Guide Archives - SD Times
https://sdtimes.com/category/buyers-guide/

Companies still need to work on security fundamentals to win in the supply chain security fight
https://sdtimes.com/security/companies-still-need-to-work-on-security-fundamentals-to-win-in-the-supply-chain-security-fight/
Mon, 08 Jul 2024 18:00:00 +0000

The post Companies still need to work on security fundamentals to win in the supply chain security fight appeared first on SD Times.

Though this is technically a “Buyer’s Guide” by SD Times terminology, let’s preface this article by remembering that buying a piece of software isn’t the key to fixing all security issues. If there was some magical security solution that could be installed to instantly fix all security problems, we wouldn’t be seeing a year-over-year increase in supply chain attacks, and you probably wouldn’t be reading this article.

Yes, tooling is important: you can't secure the software supply chain with secure coding practices alone. Those best practices need to be combined with things like software bills of materials (SBOMs), software composition analysis, exploit prediction scoring systems (EPSS), and more.

Before we can begin to think about what tooling can help, step one in this fight is to get the fundamentals down, explained Rob Cuddy, global application security evangelist at HCLSoftware. “There’s a lot of places now that are wanting to do security better, but they want to jump to steps four, five, and six, and they forget about steps one, two, and three,” he said. 

See also: A guide to supply chain security tools

He explained that even with new types of threats and vulnerabilities that are emerging, it’s still important to take a step back and make sure your security foundation is strong before you start getting into advanced tooling. 

“Having the basics done really, really well gets you a long way towards being safe in that space,” he said. 

According to Janet Worthington, senior analyst at Forrester, the first step is to ask if you’re following secure development practices when actually writing software.

“Are we secure by design when we’re building these applications? Are we doing threat modeling? Are we thinking about where this is going to be installed? About how people are going to use it? What are some of the attack vectors that we have to worry about?” 

These are some of the basics that companies need to get down before they even start looking at where tooling can help. But of course, tooling does still play a crucial role in the fight, once those pieces are in place, and Cuddy believes it is crucial that any tool you use supports the fundamentals.

The bare minimum for software supply chain security is to have an SBOM, which is a list of all of the components in an application. But an SBOM is just an ingredient list, and doesn’t provide information about those ingredients or where they came from, Worthington explained. 
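To make the "ingredient list" point concrete, here is a minimal sketch of all an SBOM gives you by itself. The component data is invented, but the shape follows the CycloneDX JSON format:

```python
import json

# A minimal CycloneDX-style SBOM: little more than an ingredient list.
sbom_json = """
{
  "bomFormat": "CycloneDX",
  "specVersion": "1.5",
  "components": [
    {"type": "library", "name": "log4j-core", "version": "2.14.1",
     "purl": "pkg:maven/org.apache.logging.log4j/log4j-core@2.14.1"},
    {"type": "library", "name": "jackson-databind", "version": "2.13.0",
     "purl": "pkg:maven/com.fasterxml.jackson.core/jackson-databind@2.13.0"}
  ]
}
"""

def list_ingredients(sbom: dict) -> list:
    """Return name@version for each component -- the 'ingredients', nothing more."""
    return [f"{c['name']}@{c['version']}" for c in sbom.get("components", [])]

components = list_ingredients(json.loads(sbom_json))
print(components)  # ['log4j-core@2.14.1', 'jackson-databind@2.13.0']
```

Notice that nothing in the document says where these components were built, who maintains them, or whether they are safe, which is exactly the gap the tools below try to fill.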

Kristofer Duer, software architect team lead at HCLSoftware, added, “you need to know what goes into it, but you also need to know where it’s built and who has access to the code and a whole list of things.”

According to Worthington, this is where things like software composition analysis tools come in, which can analyze SBOMs for security risks, license compliance issues, and the operational risk of using a component. 

“An example of an operational risk would be this component is only maintained by one person, and that single contributor might just abandon the software or they might go do something else and no longer be maintaining that application,” she said. 
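An operational-risk check like that is easy to picture as a simple rule over component metadata. The fields and thresholds below are hypothetical, not any particular SCA product's logic:

```python
from datetime import date

def operational_risks(component: dict, today: date) -> list:
    """Flag maintenance red flags of the kind an SCA tool might report."""
    risks = []
    # A single maintainer means the project can be abandoned overnight.
    if component.get("maintainers", 0) <= 1:
        risks.append("single maintainer (bus factor of one)")
    # A long gap since the last release suggests the project is stagnant.
    last = component.get("last_release")
    if last and (today - last).days > 365:
        risks.append("no release in over a year")
    return risks

pkg = {"name": "left-padder", "maintainers": 1, "last_release": date(2022, 3, 1)}
print(operational_risks(pkg, today=date(2024, 7, 8)))
# ['single maintainer (bus factor of one)', 'no release in over a year']
```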

According to Colin Bell, AppScan CTO at HCLSoftware, EPSS — a measure of the likelihood that a vulnerability actually gets exploited — is another emerging tool to improve supply chain security by smartly prioritizing remediation efforts.

“Just because you have something in your supply chain doesn’t necessarily mean that it’s being used,” he explained. 

Bell said that he believes a lot of organizations struggle with the fact that they perceive every vulnerability to be a risk. But in reality, some vulnerabilities might never be exploited and he thinks companies are starting to recognize that, especially some of the larger ones. 

By focusing first on fixing the vulnerabilities that are most at risk of getting exploited, developers and security teams can effectively prioritize their remediation strategy. 
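In practice, that prioritization is just a re-sort of the finding queue by exploit likelihood rather than raw severity. A sketch with invented scores (real EPSS values, published by FIRST, are probabilities between 0 and 1):

```python
# Hypothetical scan findings: CVSS severity alone vs. EPSS exploit likelihood.
findings = [
    {"cve": "CVE-2023-0001", "cvss": 9.8, "epss": 0.005},
    {"cve": "CVE-2023-0002", "cvss": 7.5, "epss": 0.92},
    {"cve": "CVE-2023-0003", "cvss": 8.1, "epss": 0.31},
]

# Work the queue in order of exploit likelihood, not severity: the 9.8-CVSS
# issue that is almost never exploited drops to the bottom of the list.
by_epss = sorted(findings, key=lambda f: f["epss"], reverse=True)
print([f["cve"] for f in by_epss])
# ['CVE-2023-0002', 'CVE-2023-0003', 'CVE-2023-0001']
```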

Worthington added that integrating secure-by-design foundations with some of these tools can also cut down on release delays, which happen when scanning tools find security issues at the last moment, right before deployment, and hold the release back until the issues are resolved. That matters as companies come under more and more pressure to release software faster than ever.

“Organizations that release frequently with high confidence do so by embedding security early in the Software Development Life Cycle (SDLC),” said Worthington. “Automating security testing, such as Software Composition Analysis and Static Application Security Testing, provides feedback to developers while they are writing code in the IDE or when they receive code review comments on a pull request. This approach gives developers the opportunity to review and respond to security findings in the flow of work.”

She also said that identifying issues before they are added to the codebase can actually save time in the long run by preventing things from needing to be reworked. “Security testing tools that automate the remediation process improve product velocity by allowing developers to focus on writing business logic without having to become security experts,” she said. 
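A pre-merge gate of the kind Worthington describes can be as simple as a severity threshold applied to scanner output. The findings and threshold here are illustrative only, not any vendor's actual API:

```python
# Hypothetical findings, as a CI step might receive them from an SCA/SAST scan.
SEVERITY_RANK = {"low": 1, "medium": 2, "high": 3, "critical": 4}

def gate(findings: list, fail_at: str = "high") -> bool:
    """Return True if the merge may proceed, False if any finding meets the threshold."""
    threshold = SEVERITY_RANK[fail_at]
    return all(SEVERITY_RANK[f["severity"]] < threshold for f in findings)

pr_findings = [
    {"id": "FINDING-1", "severity": "medium"},
    {"id": "FINDING-2", "severity": "critical"},
]
print(gate(pr_findings))  # False -- the critical finding blocks the merge
```

The point of running this on every pull request, rather than just before deployment, is that the developer sees the failure while the change is still in front of them.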

XZ Utils backdoor highlights importance of people in protecting the software supply chain

However, as mentioned at the top, tools are only one component in the fight; secure practices are also needed to deal with more advanced threats. A recent example of where the above-mentioned tools wouldn't have done much to help on their own came in March, when it was announced that a backdoor had been introduced into the open-source Linux compression tool XZ Utils.

The person who had placed the backdoor had been contributing to the project for three years while gaining the trust of the maintainers and ultimately was able to rise to a level at which they could sign off on releases and introduce the backdoor in an official release. If it hadn’t been detected when it was and had been adopted by more people, attackers could have gained access to SSH sessions around the world and really caused some damage. 

According to Duer, the vulnerability didn’t even show up in code changes because the attacker put the backdoor in a .gitignore file. “When you downloaded the source to do a build locally, that’s when the attack actually got realized,” he said.

He went on to explain that this goes to show that developers can no longer just “get the source and run a build and call it a day. You have to do so much more than that … They have the SHA-256 hash mark on the bins, but how many people run those commands to see if the thing that they downloaded is that hash? Does anybody look in the CVE for this particular package to see if there’s a problem? Where do you rely on scanners to do that work for you? It’s interesting because a lot of the problems could be avoided with another couple of extra steps. It doesn’t even take that much time. You just have to do them,” Duer said. 
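The checksum step Duer mentions really is just a few lines in any language. Here is a sketch using Python's standard hashlib; the payload and digest are stand-ins, not real XZ Utils artifacts:

```python
import hashlib

def verify_sha256(data: bytes, published_digest: str) -> bool:
    """Compare a download against the project's published SHA-256 checksum."""
    return hashlib.sha256(data).hexdigest() == published_digest.lower()

# Stand-in for a downloaded release tarball and its published checksum.
payload = b"release tarball contents would go here"
published = hashlib.sha256(payload).hexdigest()

print(verify_sha256(payload, published))                  # True
print(verify_sha256(payload + b"tampered", published))    # False -- reject the download
```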

Worthington added that it’s really important that the people actually pulling components into their applications are able to assess quality before bringing something into their system or application. Is this something maintained by the Linux Foundation with a vibrant community behind it or is it a simple piece of code where nobody is maintaining it and it might reach end of life? 

“A very sophisticated attacker played the long game with a maintainer and basically wore that poor maintainer down through social engineering to get their updates into XZ Utils. I think we’re finding that you need to have a really robust community. And so I think SBOM is only going to get you so far,” said Worthington.

While this may seem like an extreme example, the Open Source Security Foundation (OpenSSF) and the OpenJS Foundation put out an alert following the incident and implied that it might not be an isolated incident, citing similar suspicious patterns in two other popular JavaScript projects. 

In the post, they gave tips for recognizing social engineering attacks in open source projects, such as:

  • Aggressive, but friendly, pursuit of maintainers by unknown community members
  • Requests from new community members to be elevated to maintainer status
  • Endorsement of new community members coming from other unknown members
  • PRs containing blobs as artifacts
  • Intentionally difficult to understand source code
  • Gradually escalating security issues
  • Deviation from typical project compile, build, and deployment practices
  • A false sense of urgency to get a maintainer to bypass reviews or controls
AI will make things worse and better

AI will also exacerbate the number of threats that people have to deal with because as much as AI can add useful features to security tools to help security teams be more effective, AI also helps the attackers. 

Having AI in applications complicates the software supply chain, Worthington explained. “There’s a whole ecosystem around it,” she said. “What about all the APIs that are calling the LLMs? Now you have to worry about API security. And there’s gonna be a bunch of new types of development tools in order to build these applications and in order to deploy these applications.”

Worthington says that attackers are going to recognize that this is an area that people haven’t really wrapped their heads around in terms of how to secure it, and they’re going to exploit that, and that’s what worries her most about the advances in AI as it relates to supply chain security. 

However, it’s not all bad; in many ways, supply chain security can benefit from AI assistance. For instance, there are now software composition analysis tools that are using generative AI to explain vulnerabilities to developers and offer recommendations on how to fix it, Worthington explained. 

“I think AI will help the attackers but I think the first wave is actually helping defenders at this point,” she said. 

Bell was in agreement, adding “if you’re defending, it’s going to improve the threat detection, it’s going to help with incident response, and it’s going to help with detecting whether vulnerabilities are real.”

The government is starting to play a role in securing supply chains

In 2021, President Biden signed an executive order addressing the need to have stronger software supply chain security in government. In it, Biden explained that bold change is needed over incremental improvements, and stated that this would be a top priority for the administration. 

The executive order requires that any company selling software to the government provide an SBOM, and it called for a pilot program to create an "energy star" type label for software so that the government can easily see whether software was developed securely.

“Too much of our software, including critical software, is shipped with significant vulnerabilities that our adversaries exploit,” the White House explained. “This is a long-standing, well-known problem, but for too long we have kicked the can down the road. We need to use the purchasing power of the Federal Government to drive the market to build security into all software from the ground up.” 

Worthington said: “I think the Biden administration has done a really good job of trying to help software suppliers understand sort of like what the minimum requirements they’re going to be held to are, and I think those are probably the best place to start.”

Cuddy agreed and added that the industry is starting to catch up to the requirements. “Not only do you need to generate a bill of materials, but you have to be able to validate across it, you have to prove that you’ve been testing against it, that you’ve authorized those components … So much of it started with the executive order that was issued a few years ago from President Biden, and you’ve now seen the commercial side starting to catch up with some of those things, and really demanding it more,” he said.

A guide to supply chain security tools
https://sdtimes.com/security/a-guide-to-supply-chain-security-tools/
Mon, 08 Jul 2024 17:59:18 +0000

The post A guide to supply chain security tools appeared first on SD Times.

The following is a listing of vendors that offer tools to help secure software supply chains, along with a brief description of their offerings.


Featured Provider

HCLSoftware: HCL AppScan empowers developers, DevOps, and security teams with a suite of technologies to pinpoint application vulnerabilities for quick remediation in every phase of the software development lifecycle. HCL AppScan SCA (Software Composition Analysis) detects open-source packages, versions, licenses, and vulnerabilities, and provides an inventory of all of this data for comprehensive reporting.

See also: Companies still need to work on security fundamentals to win in the supply chain security fight

Other Providers

Anchore offers an enterprise version of its Syft open-source software bill of materials (SBOM) project, used to generate and track SBOMs across the development lifecycle. It also can continuously identify known and new vulnerabilities and security issues.

Aqua Security can help organizations protect all the links in their software supply chains to maintain code integrity and minimize attack surfaces. With Aqua, customers can secure the systems and processes used to build and deliver applications to production, while monitoring the security posture of DevOps tools to ensure that security controls put in place have not been circumvented.

ArmorCode‘s Application Security Posture Management (ASPM) Platform helps organizations unify visibility into their CI/CD posture and components from all of their SBOMs, prioritize supply chain vulnerabilities based on their impact in the environment, and find out if vulnerability advisories really affect the system.

Contrast Security: Contrast SCA focuses on real threats from open-source security risks and vulnerabilities in third-party components during runtime. Operating at runtime effectively reduces the occurrence of false positives often found with static SCA tools and prioritizes the remediation of vulnerabilities that present actual risks. The software can flag software supply chain risks by identifying potential instances of dependency confusion.

FOSSA provides an accurate and precise report of all code dependencies up to an unlimited depth; and can generate an SBOM for any prior version of software, not just the current one. The platform utilizes multiple techniques — beyond just analyzing manifest files — to produce an audit-grade component inventory.

GitLab helps secure the end-to-end software supply chain (including source, build, dependencies, and released artifacts), create an inventory of software used (software bill of materials), and apply necessary controls. GitLab can help track changes, implement necessary controls to protect what goes into production, and ensure adherence to license compliance and regulatory frameworks.

Mend.io: Mend’s SCA automatically generates an accurate and deeply comprehensive SBOM of all open source dependencies to help ensure software is secure and compliant. Mend SCA generates a call graph to determine if code reaches vulnerable functions, so developers can prioritize remediation based on actual risk.

Revenera provides ongoing risk assessment for license compliance issues and security threats. The solution can continuously assess risk across a portfolio of software applications and the supply chain. SBOM Insights supports the aggregation, ingestion, and reconciliation of SBOM data from various internal and external data sources, providing the needed insights to manage legal and security risk, deliver compliance artifacts, and secure the software supply chain.

Snyk can help developers understand and manage supply chain security, from enabling secure design to tracking dependencies to fixing vulnerabilities. Snyk provides the visibility, context, and control needed to work alongside developers on reducing application risk.

Sonatype can generate both CycloneDX and SPDX SBOM formats, import them from third-party software, and analyze them to pinpoint components, vulnerabilities, malware, and policy violations. Companies can prove their software’s security status easily with SBOM Manager, and share SBOMs and customized reports with customers, regulators, and certification bodies via the vendor portal.

Synopsys creates SBOMs automatically with Synopsys SCA. With the platform, users can import third-party SBOMs and evaluate for component risk, and generate SPDX and CycloneDX SBOMs containing open source, proprietary, and commercial dependencies.

Veracode Software Composition Analysis can continuously monitor software and its ecosystem to automate finding and remediating open-source vulnerabilities and license compliance risk. Veracode Container Security can prevent exploits to containers before runtime and provide actionable results that help developers remediate effectively.

Open Source Solutions

CycloneDX: The OWASP Foundation’s CycloneDX is a full-stack Bill of Materials (BOM) standard that provides advanced supply chain capabilities for cyber risk reduction. Strategic direction of the specification is managed by the CycloneDX Core Working Group. CycloneDX is also backed by the Ecma International Technical Committee 54 (Software & System Transparency).

SPDX is a Linux Foundation open standard for sharing SBOMs and other important AI, data, and security references. It supports a range of risk management use cases and is a freely available international open standard (ISO/IEC 5962:2021).

Syft is a powerful and easy-to-use CLI tool and library for generating SBOMs for container images and filesystems. It supports CycloneDX, SPDX, and its own JSON output formats. Syft can be installed and run directly on the developer machine to generate SBOMs against software being developed locally, or it can be pointed at a filesystem.

premium The importance of security testing
https://sdtimes.com/test/the-importance-of-security-testing/
Thu, 28 Mar 2024 19:07:47 +0000

The post <span class="sdt-premium">premium</span> The importance of security testing appeared first on SD Times.

With more development teams today using open-source and third-party components to build out their applications, the biggest area of concern for security teams has become the API. This is where vulnerabilities are likely to arise, as keeping on top of updating those interfaces has lagged.

In a recent survey, the research firm Forrester asked security decision-makers in which phase of the application lifecycle they planned to adopt the following technologies. Static application security testing (SAST) was at 34%, software composition analysis (SCA) at 37%, dynamic application security testing (DAST) at 50%, and interactive application security testing (IAST) at 40%. Janet Worthington, a senior analyst at Forrester advising security and risk professionals, said the number of people planning to adopt SAST was low because it's already well known and people have already implemented the practice and tools.

One of the drivers for that adoption was the awakening created by the log4j vulnerability, where, she said, developers using open source understand direct dependencies but might not consider dependencies of dependencies.

Open source and SCA

According to Forrester research, 53% of breaches from external attacks are attributed to the application and the application layer. Worthington explained that while organizations are implementing SAST, DAST and SCA, they are not implementing it for all of their applications. “When we look at the different tools like SAST and SCA, for example, we’re seeing more people actually running software composition analysis on their customer-facing applications,” she said. “And SAST is getting there as well, but almost 75% of the respondents who we asked are running SCA on all of their external-facing applications, and that, if you can believe it, is much larger than web application firewalls, and WAFs are actually there to protect all your customer-facing applications. Less than 40% of the respondents will say they cover all their applications.”

Worthington went on to say that more organizations are seeing the need for software composition analysis because of those breaches, but added that a problem with security testing today is that some of the older tools make it harder to integrate early on in the development life cycle. That is when developers are writing their code, committing code in the CI/CD pipeline, and on merge requests. “The reason we’re seeing more SCA and SAST tools there is because developers get that immediate feedback of, hey, there’s something up with the code that you just checked in. It’s still going to be in the context of what they’re thinking about before they move on to the next sprint. And it’s the best place to kind of give them that feedback.”

RELATED CONTENT: A guide to security testing tools

The best tools, she said, are not only doing that, but they’re providing very good remediation guidance. “What I mean by that is, they’re providing code examples, to say, ‘Hey, somebody found something similar to what you’re trying to do. Want to fix it this way?'”

Rob Cuddy, customer experience executive at HCL Software, said the company is seeing an uptick in remediation. Engineers, he said, say, “’I can find stuff really well, but I don’t know how to fix it. So help me do that.’ Auto remediation, I think, is going to be something that continues to grow.”

Securing APIs

When asked what the respondents were planning to use during the development phase, Worthington said, 50% said they are planning to implement DAST in development. “Five years ago you wouldn’t have seen that, and what this really calls attention to is API security,” Worthington said. “[That is] something everyone is trying to get a handle on in terms of what APIs they have, the inventory, what APIs are governed, and what APIs are secured in production.”

And now, she added, people are putting more emphasis on trying to understand what APIs they have, and what vulnerabilities may exist in them, during the pre-release phase or prior to production. DAST in development signals an API security approach, she said, because “as you’re developing, you develop the APIs first before you develop your web application.” Forrester, she said, is seeing that as an indicator of companies embracing DevSecOps, and that they are looking to test those APIs early in the development cycle.

API security also has a part in software supply chain security, with IAST playing a growing role, and encompassing parts of SCA as well, according to Colin Bell, AppScan CTO at HCL Software. “Supply chain is more a process than it is necessarily any feature of a product,” Bell said. “Products feed into that. So SAST and DAST and IAST all feed into the software supply chain, but bringing that together is something that we’re working on, and maybe even looking at partners to help.”

Forrester’s Worthington explained that DAST really is black box testing, meaning it doesn’t have any insights into the application. “You typically have to have a running version of your web application up, and it’s sending HTTP requests to try and simulate an attacker,” she said. “Now we’re seeing more developer-focused test tools that don’t actually need to hit the web application, they can hit the APIs. And that’s now where you’re going to secure things – at the API level.”
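One concrete example of the kind of check a DAST tool automates is flagging responses that lack standard security headers. This toy version inspects a hypothetical set of response headers rather than probing a live application; the header names themselves are standard:

```python
# Security headers a scanner commonly expects on a web response.
EXPECTED = {
    "Strict-Transport-Security",
    "Content-Security-Policy",
    "X-Content-Type-Options",
}

def missing_security_headers(headers: dict) -> set:
    """Return expected security headers absent from a response (case-insensitive)."""
    present = {name.title() for name in headers}
    return EXPECTED - present

response_headers = {"content-type": "application/json",
                    "strict-transport-security": "max-age=31536000"}
print(sorted(missing_security_headers(response_headers)))
# ['Content-Security-Policy', 'X-Content-Type-Options']
```

A real DAST scanner goes much further, of course, sending crafted requests to probe for injection and authentication flaws, but the black-box principle is the same: judge the application only by what it sends back.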

The way this works, she said, is you use your own functional tests that you use for QA, like smoke tests and automated functional tests. And what IAST does is it watches everything that the application is doing and tries to figure out if there are any vulnerable code paths.

Introducing AI into security

Cuddy and Bell both said they are seeing more organizations building AI and machine learning into their offerings, particularly in the areas of cloud security, governance and risk management.

Historically, organizations have operated with a sense of what is acceptable risk and what is not, and have understood their threshold. Yet cybersecurity has changed that dramatically: when a zero-day event occurs, organizations have had no way to assess that risk beforehand.

“The best example we’ve had recently of this is what happened with the log4j scenario, where all of a sudden, something that people had been using for a decade, that was completely benign, we found one use case that suddenly means we can get remote code execution and take over,” Cuddy said. “So how do you assess that kind of risk? If you’re primarily basing risk on an insurance threshold or a cost metric, you may be in a little bit of trouble, because things that today are under that threshold that you think are not a problem could suddenly turn into one a year later.”

That, he said, is where machine learning and AI come in, with the ability to run thousands – if not millions – of scenarios to see if something within the application can be exploited in a particular fashion. And Cuddy pointed out that as most organizations are using AI to prevent attacks, there are unethical people using AI to find vulnerabilities to exploit. 
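Classic randomized fuzzing, a simpler relative of what Cuddy describes, already works this way: generate many inputs and record the ones that break the target. The buggy handler below is invented purely for illustration:

```python
import random

def parse_discount(percent: int) -> float:
    """A deliberately buggy handler: crashes on an input the author never expected."""
    return 100 / (100 - percent)   # blows up when percent == 100

def fuzz(fn, trials: int = 10_000, seed: int = 42) -> list:
    """Throw many generated inputs at fn and record the distinct values that crash it."""
    rng = random.Random(seed)
    crashes = []
    for _ in range(trials):
        value = rng.randint(0, 200)
        try:
            fn(value)
        except Exception:
            crashes.append(value)
    return sorted(set(crashes))

print(fuzz(parse_discount))  # [100]
```

ML-driven approaches aim to pick those trial inputs far more intelligently than a uniform random generator can, steering toward the paths most likely to expose a flaw.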

He predicted that five or 10 years down the road, you will ask AI to generate an application according to the data input and prompts it is given. The AI will write code, but it will be the most efficient, machine-to-machine code that humans might not even understand, he noted.

That will turn around the need for developers. But it comes back to the question of how far out is that going to happen. “Then,” Bell said, “it becomes much more important to worry about, and testing now becomes more important. And we’ll probably move more towards the traditional testing of the finished product and black box testing, as opposed to testing the code, because what’s the point of testing the code when we can’t read the code? It becomes a very different approach.”

Governance, risk and compliance

Cuddy said HCL is seeing the roles of governance, risk and compliance coming together, where in a lot of organizations, those tend to be three different disciplines. And there’s a push for having them work together and connect seamlessly. “And we see that showing up in the regulations themselves,” he said. 

“Things like NYDFS [New York Department of Financial Services] regulation is one of my favorite examples of this,” he continued. “Years ago, they would say things like you have to have a robust application security program, and we’d all scratch our heads trying to figure out what robust meant. Now, when you go and look, you have a very detailed listing of all of the different aspects that you now have to comply with. And those are audited every year. And you have to have people dedicated to that responsibility. So we’re seeing the regulations are now catching up with that, and making the specificity drive the conversation forward.”

The cost of cybersecurity

The cost of cybersecurity attacks continues to climb as organizations fail to implement safeguards necessary to defend against ransomware attacks. Cuddy discussed the costs of implementing security versus the cost of paying a ransom.

“A year ago, there were probably a lot more of the hey, you know, look at the level, pay the ransom, it’s easier,” he said. But, even if organizations pay the ransom, Cuddy said “there’s no guarantee that if we pay the ransom, we’re going to get a key that actually works, that’s going to decrypt everything.”

But cyber insurance companies have been paying out huge sums and are now requiring organizations to do their own due diligence, and are raising the bar on what you need to do to remain insured. “They have gotten smart and they’ve realized ‘Hey, we’re paying out an awful lot in these ransomware things. So you better have some due diligence.’ And so what’s happening now is they are raising the bar on what’s going to happen to you to stay insured.”

“MGM could tell you their horror stories of being down and literally having everything down – every slot machine, every ATM machine, every cash register,” Cuddy said. And again, there’s no guarantee that if you pay off the ransom, that you’re going to be fine. “In fact,” he added, “I would argue you’re likely to be attacked again, by the same group. Because now they’ll just go somewhere else and ransom something else. So I think the cost of not doing it is worse than the cost of implementing good security practices and good measures to be able to deal with that.” 

When applications are used in unexpected ways

Software testers repeatedly say it's impossible to test for all the unintended ways people might use an application. How can you defend against something you haven't even thought of?

Rob Cuddy, customer experience executive at HCL Software, tells of how he learned of the log4j vulnerability.

“Honestly, I found out about it through Minecraft, that my son was playing Minecraft that day. And I immediately ran up into his room, and I’m like, ‘Hey, are you seeing any bizarre things coming through in the chat here that look like weird textures that don’t make any sense?’ So who would have anticipated that?”

Cuddy also related a story from earlier in his career about unintended use and how it was dealt with and how organizations harden against that.

“There is always going to be that edge case that your average developer didn’t think about,” he began. “Earlier in my career, doing finite element modeling, I was using a three-dimensional tool, and I was playing around in it one day, and you could make a join of two planes together with a fillet. And I had asked for a radius on that. Well, I didn’t know any better. So I started using just typical numbers, right? 0, 180, 90, whatever. One of them, I believe it was 90 degrees, caused the software to crash, the window just completely disappeared, everything died.

“So I filed a ticket on it, thinking our software shouldn’t do that. Couple of days later, I get a much more senior gentleman running into my office going, ‘Did you file this? What the heck is wrong with you? Like this is a mathematical impossibility. There’s no such thing as a 90-degree fillet radius.’ But my argument to him was it shouldn’t crash. Long story short, I talk with his manager, and it’s basically yes, software shouldn’t crash, we need to go fix this. So that senior guy never thought that a young, inexperienced, just fresh out of college guy would come in and misuse the software in a way that was mathematically impossible. So he never accounted for it. So there was nothing to fix. But one day, it happened, right. That’s what’s going on in security, somebody’s going to attack in a way that we have no idea of, and it’s going to happen. And can we respond at that point?”  
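The fillet story is, at bottom, a lesson in input validation. The function below is invented for illustration (it is not the actual CAD code, and the geometry is simplified): guard degenerate inputs with a clear error rather than letting the computation blow up.

```python
import math

def fillet_radius(angle_deg: float, offset: float) -> float:
    """Hypothetical fillet computation; geometry simplified for illustration.

    The guard turns geometrically impossible inputs into a readable error
    instead of an unexplained crash.
    """
    if not 0 < angle_deg < 180:
        raise ValueError(
            f"fillet angle must be strictly between 0 and 180 degrees, got {angle_deg}"
        )
    half_angle = math.radians(angle_deg) / 2.0
    # Division is safe once the degenerate 0/180-degree cases are rejected above.
    return offset / math.tan(half_angle)
```

The exact formula doesn’t matter; the point is that inputs the author never anticipated produce a readable error, not a vanished window.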

The post The importance of security testing appeared first on SD Times.

]]>
A guide to security testing tools https://sdtimes.com/test/a-guide-to-security-testing-tools/ Thu, 04 Jan 2024 22:45:39 +0000 https://sdtimes.com/?p=53448 The following is a listing of security testing tool providers, along with a brief description of their offerings. FEATURED PROVIDER HCL AppScan helps organizations pinpoint and remediate vulnerabilities throughout the software development lifecycle (SDLC) with a suite of application security testing platforms available as a cloud-based service (SaaS), self-managed, or cloud-native. Powerful static, dynamic, interactive, … continue reading

The post A guide to security testing tools appeared first on SD Times.

]]>
The following is a listing of security testing tool providers, along with a brief description of their offerings.


FEATURED PROVIDER

HCL AppScan helps organizations pinpoint and remediate vulnerabilities throughout the software development lifecycle (SDLC) with a suite of application security testing platforms available as a cloud-based service (SaaS), self-managed, or cloud-native. Powerful static, dynamic, interactive, and open-source scanning engines (DAST, SAST, IAST, SCA, API) quickly and accurately test code, web applications, APIs, mobile applications, containers, and open-source components with the help of broad language support, seamless integrations and automations, and proven AI capabilities. Centralized dashboards provide visibility, oversight, compliance policies, and reporting to enable developers, DevOps, and security teams to collaborate in a comprehensive and continuous security model.

RELATED CONTENT: The importance of security testing

OTHERS

Checkmarx: The Checkmarx One cloud-native platform combines the full suite of application security testing (AST) solutions to help you secure your digital transformation across every phase of modern application development and bring your apps to market faster. The company enables large-scale enterprises to secure every phase of development for every application while balancing the dynamic needs of CISOs, security, and development teams.

Contrast Security: With its Scan (SAST), Software Composition Analysis (SCA) and Assess (IAST) solutions, Contrast’s Secure Code platform helps organizations make code security testing as routine as a code commit while focusing on the most imperative vulnerabilities to deliver fast, accurate and actionable results.

GitLab provides all of the essential DevSecOps tools in one platform. From idea to production, GitLab helps teams improve cycle time from weeks to minutes, reduce development costs, speed time to market, and deliver more secure and compliant applications.

JFrog: Its Enhanced SCA tool helps organizations manage the risk of open-source software with a database that aggregates malicious package information from global sources. The Code Security Scanning tool enables development teams to write and commit trusted code with fast and accurate security-focused engines that deliver scans that minimize false positives and won’t slow down development.

Mend.io: The company’s Mend SCA enables you to quickly and easily generate SBOMs that identify all open-source libraries, track and document each component, including direct and transitive dependencies, and update automatically when components change. Its SAST offering offers automated remediation that writes the exact code changes needed to fix code flaws, based on approvals done through pull requests.

Parasoft: Its AST tools extend automated application security testing across the SDLC to help uncover security and quality issues that could expose security risks in your software applications. This increases collaboration in DevSecOps and provides an effective way for you to identify and manage security risks more confidently. This includes static application security testing (SAST), penetration testing, and more, using different tools for each type.

Perforce offers a full range of security testing tools, from its Klocwork static analysis and BlazeMeter continuous testing to its Perfecto web and mobile testing solution. Perforce identifies software security, quality, and reliability issues, helping to enforce compliance with standards.

Snyk enables developers to build securely from the start, while giving security teams complete visibility and comprehensive controls. Snyk helps you secure critical components of your software supply chain, including first-party code, open-source libraries, container images, and cloud infrastructure, right in the tools your developers use every day.

SonarSource: SonarLint empowers organizations to find and fix issues in real time, while SonarQube provides development teams with a self-hosted code quality and security solution that integrates into their enterprise environment. SonarCloud is a code review tool that easily integrates into cloud DevOps platforms and extends your CI/CD workflow.

Sonatype supports 50+ languages and integrations across leading IDEs, source repositories, CI pipelines, and ticketing systems, enabling organizations to ensure their open-source components are secure throughout the entire software development life cycle by spotting vulnerabilities early on in the development process.

Veracode offers a full suite of security testing tools, including SAST, DAST, and SCA, and can integrate container security into the development pipeline. This makes security simpler for developers. The company also offers security training for developers to help them spot issues before they make it into production.

 


]]>
Buyers Guide: AI and the evolution of test automation https://sdtimes.com/test/buyers-guide-the-evolution-of-test-automation/ Fri, 22 Sep 2023 14:35:53 +0000 https://sdtimes.com/?p=52402 Test automation has undergone quite an evolution in the decades since it first became possible.  Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges. It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how … continue reading

The post Buyers Guide: AI and the evolution of test automation appeared first on SD Times.

]]>
Test automation has undergone quite an evolution in the decades since it first became possible. 

Yet despite the obvious benefits, the digitalization of the software development industry has created some new challenges.

It comes down to three big things, according to Kevin Parker, vice president of product at Appvance. The first is velocity and how organizations “can keep pace with the rate at which developers are moving fast and improving things, so that when they deliver new code, we can test it and make sure it’s good enough to go on to the next phase in whatever your life cycle is,” he said. 

RELATED CONTENT:
A guide to automated testing tools
Take advantage of AI-augmented software testing

The second area is coverage. Parker said it’s important to understand that enough testing is being done, and being done in the right places, to the right depth. And, he added, “It’s got to be the right kind of testing. If you Google test types, it comes back with several hundred kinds of testing.”

How do you know when you’ve tested enough? “If your experience is anything like mine,” Parker said, “the first bugs that get reported when we put a new release out there, are from when the user goes off the script and does something unexpected, something we didn’t test for. So how do we get ahead of that?”
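One way to get ahead of those off-script users is randomized (fuzz-style) input generation. A minimal sketch follows; `handle_input` is an invented stand-in for whatever entry point is under test, and real teams would more likely reach for property-based tools such as Hypothesis.

```python
import random
import string

def handle_input(text: str) -> str:
    """Invented stand-in for whatever entry point is under test."""
    return text.strip().lower()

def fuzz(handler, runs=1000, seed=1234):
    """Throw randomized off-script input at a handler and collect crashes.

    A lightweight stand-in for property-based testing tools; each failure
    pair (input, exception) is a reproducible bug report.
    """
    rng = random.Random(seed)
    failures = []
    for _ in range(runs):
        candidate = "".join(
            rng.choice(string.printable) for _ in range(rng.randint(0, 40))
        )
        try:
            handler(candidate)
        except Exception as exc:
            failures.append((candidate, exc))
    return failures
```

An empty failure list doesn’t prove correctness, but every captured pair is exactly the kind of “something we didn’t test for” Parker describes.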

And the final, and perhaps most important, area is the user interface, as this is where the rubber meets the road for customers and users of the applications. “The user interfaces are becoming so exciting, so revolutionary, and the amount of psychology in the design of user interfaces is breathtaking. But that presents even more challenges now for the automation engineer,” Parker said.

Adoption and challenges

According to a report by Research Nester, the test automation market is expected to grow to more than $108 billion by 2031, up from about $17 billion in 2021. Yet as for uptake, it’s difficult to measure the extent to which organizations are successfully using automated testing.

“I think if you tried to ask anyone, ‘are you doing DevOps? Are you doing Agile?’ Everyone will say yes,” said Jonathan Wright, chief technologist at Keysight, which owns the Eggplant testing software. “And everyone we speak to says, ‘yes, we’re already doing automation.’ And then you dig a little bit deeper, they say, ‘well, we’re running some Selenium, running some RPM, running some Postman script.’ So I think, yes, they are doing something.”

Wright said most enterprises that are having success with test automation have invested heavily in it, and have established automation as its own discipline. “They’ve got hundreds of people involved to keep this to a point where they can run thousands of scripts,” he said. But in the same breath, he noted that the conversation around test case optimization and risk-based testing still needs to be had. “Is over-testing a problem?” he posited. “There’s a continuous view that we’re in a bit of a tech crunch at the moment. We’re expected to do more with less, and testing, as always, is one of those areas that have been put under pressure. And now, just saying I’ve got 5,000 scripts kind of means nothing. Why don’t you have 6,000 or 10,000? You have to understand that you’re not just adding a whole stack of tech debt into a regression folder that’s giving you this feel-good feeling that I’m running 5,000 scripts a day, but they’re not actually adding any value because they’re not covering new features.”

RELATED CONTENT:
How Cox Automotive found value in automated testing
Accessibility testing
Training the model for testing

Testing at the speed of DevOps

One effect of the need to release software faster is the ever-increasing reliance on open-source software, which may or may not have been tested fully before being let out into the wild.

Arthur Hicken, chief evangelist at Parasoft, said he believes it’s a little forward thinking to assume that developers aren’t writing code anymore, that they’re simply gluing things together and standing them up. “That’s as forward thinking as the people who presume that AI can generate all your code and all your tests now,” he said. “The interesting thing about this is that your cloud native world is relying on a massive amount of component reuse. The promises are really great. But it’s also a trust assumption that the people who built those pieces did a good job. We don’t yet have certification standards for components that help us understand what the quality of this component is.”

He suggested the industry create a bill of materials that includes testing. “This thing was built according to these standards, whatever they are, and tested and passed. And the more we move toward a world where lots of code is built by people assembling components, the more important it will be that those components are well built, well tested and well understood.”
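No standard today defines such a test-aware bill of materials, so the shape below is purely hypothetical: invented field names sketching what a component entry carrying test evidence, plus a policy gate over it, might look like.

```python
# Invented field names sketching a component entry that carries test
# evidence alongside the usual license/version metadata. No real SBOM
# standard (SPDX, CycloneDX) defines this schema today.
component_entry = {
    "name": "left-pad-ng",   # hypothetical component
    "version": "2.1.3",
    "license": "MIT",
    "tested": {
        "standards": ["OWASP ASVS 4.0"],
        "statement_coverage": 0.87,
        "result": "pass",
    },
}

def meets_policy(entry, min_coverage=0.8):
    """Gate component intake on its recorded test evidence."""
    evidence = entry.get("tested") or {}
    return (
        evidence.get("result") == "pass"
        and evidence.get("statement_coverage", 0.0) >= min_coverage
    )
```

A consuming organization could then gate component intake on a check like `meets_policy` the way license policies are enforced today.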

Appvance’s Parker suggests doing testing as close to code delivery as possible. “If you remember when you went to test automation school, we were always taught that we don’t test the code, we test against the requirements,” he said. “But the modern technologies that we use for test automation require us to have the code handy. Until we actually see the code, we can’t find those [selectors]. So we’ve got to find ways where we can do just that, that is bring our test automation technology as far left in the development lifecycle as possible. It would be ideal if we had the ability to use the same source that the developers use to be able to write our tests, so that as dev finishes, test finishes, and we’re able to test immediately, and of course, if we use the same source that dev is using, then we will find that Holy Grail and be testing against requirements. So for me, that’s where we have to get to, we have to get to that place where dev and test can work in parallel.”

As Parker noted earlier, there are hundreds of types of testing tools on the market – for functional testing, performance testing, UI testing, security testing, and more. And Parasoft’s Hicken pointed out the tension organizations have between using specialized, discrete tools or tools that work well together. “In an old school traditional environment, you might have an IT department where developers write some tests. And then testers write some tests, even though the developers already wrote tests, and then the performance engineers write some tests, and it’s extremely inefficient. So having performance tools, end-to-end tools, functional tools and unit test tools that understand each other and can talk to each other, certainly is going to improve not just the speed at which you can do things and the amount of effort, but also the collaboration that goes on between the teams, because now the performance team picks up a functional scenario. And they’re just going to enhance it, which means the next time, the functional team gets a better test, and it’s a virtuous circle rather than a vicious one. So I think that having a good platform that does a lot of this can help you.”

Coverage: How much is enough?

Fernando Mattos, director of product marketing at test company mabl, believes that test coverage for flows that are very important should come as close to 100% as possible. But determining what those flows are is the hard part, he said. “We have reports within mabl that we try to make easy for our customers to understand. Here are all the different pages that I have on my application. Here’s the complexity of each of those. And here are the tests that have touched on those, the elements on those pages. So at least you can see where you have gaps.”

It is common practice today for organizations to emphasize thorough testing of the critical pieces of an application, but Mattos said it comes down to balancing the time you have for testing and the quality that you’re shooting for, and the risk that a bug would introduce.

“If the risk is low, you don’t have time, and it’s better for your business to be introducing new features faster than necessarily having a bug go out that can be fixed relatively quickly… and maybe that’s fine,” he said.

Parker said AI can help with coverage when it comes to testing every conceivable user experience. “The problem there,” he said, “is this word conceivable, because it’s humans conceiving, and our imagination is limited. Whereas with AI, it’s essentially an unlimited resource to follow every potential possible path through the application. And that’s what I was saying earlier about those first bugs that get reported after a new release, when the end user goes off the script. We need to bring AI so that we can not only autonomously generate tests based on what we read in the test cases, but that we can also test things that nobody even thought about testing, so that the delivery of software is as close to being bug free as is technically possible.”

Parasoft’s Hicken holds the view that testing without coverage isn’t meaningful.  “If I turn a tool loose and it creates a whole bunch of new tests, is it improving the quality of my testing or just the quantity? We need to have a qualitative analysis and at the moment, coverage gives us one of the better ones. In and of itself, coverage is not a great goal. But the lack of coverage is certainly indicative of insufficient testing. So my pet peeve is that some people say, it’s not how much you test, it’s what you test. No. You need to have as broad code coverage as you can have.”
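Hicken’s point, coverage as a qualitative floor rather than a goal, can be made concrete with a toy branch tracker. Real projects would use a tool like coverage.py; this hand-rolled sketch exists only to show what the metric means.

```python
# A toy branch-coverage tracker. Real projects would use coverage.py;
# this exists only to illustrate the metric Hicken describes.
hits = set()

def classify(n):
    """Function under test, with three branches we want exercised."""
    if n < 0:
        hits.add("negative")
        return "negative"
    if n == 0:
        hits.add("zero")
        return "zero"
    hits.add("positive")
    return "positive"

BRANCHES = {"negative", "zero", "positive"}

def run_suite(cases):
    """Run a 'suite' and report the cumulative fraction of branches hit."""
    for n in cases:
        classify(n)
    return len(hits) / len(BRANCHES)
```

A suite that only ever feeds in positive numbers exercises a third of the branches no matter how many test cases it contains, which is exactly the quantity-versus-quality gap Hicken warns about.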

The all-important user experience

It’s important to have someone creating tests who is very close to the customer and understands the customer journey, but who doesn’t necessarily know anything about writing code, according to mabl’s Mattos. “Unless it’s manual testing, it tends to be technical, requiring writing code and updating test scripts. That’s why we think low code can really be powerful, because it can allow somebody who’s close to the customer but not technical…customer support, customer success. They are not typically the ones who can understand GitHub and code and how to write it and update that – or even understand what was tested. So we think low code can bridge this gap. That’s what we do.”

Where is this all going?

The use of generative AI to write tests is the evolution everyone wants to see, Mattos said. “We’ll get better results by combining human insights. We’re specifically working on AI technology that will allow implementing and creating test scripts, but still using human intellect to understand what is actually important for the user. What’s important for the business? What are those flows, for example, that go to my application on my website, or my mobile app that actually generates revenue?”

“We want to combine that with the machine,” he continued. “So the human understands the customer, the machine can replicate and create several different scenarios that traverse those. But of course, lots of companies are investing in allowing the machine to just navigate through your website and find the different corners, but they aren’t able to prioritize for you. We don’t believe that they’re going to be able to prioritize which ones are the most important for your company.”

Keysight’s Wright said the company is seeing value in generative AI capabilities. “Is it game changing? Yes. Is it going to get rid of manual testers? Absolutely not. It still requires human intelligence around requirements, engineering, feeding in requirements, and then humans identifying that what it’s giving you is trustworthy and is valid. If it suggests that I should test (my application) with every single language and every single country, is it really going to find anything I might do? But in essence, it’s just boundary value testing, it’s not really anything that spectacular and revolutionary.”

Wright said organizations that have dabbled with automation over the years and have had some levels of success are now just trying to get that extra 10% to 20% of value from automation, and get wider adoption across the organization. “We’ve seen a shift toward not tools but how do we bring a platform together to help organizations get to that point where they can really leverage all the benefits of automation. And I think a lot of that has been driven by open testing.” 

“As easy as it should be to get your test,” he continued, “you should also be able to move that into what’s referred to in some industries as an automation framework, something that’s in a standardized format for reporting purposes. That way, when you start shifting up, and shifting the quality conversation, you can look at metrics. And the shift has gone from how many tests am I running, to what are the business-oriented metrics? What’s the confidence rating? Are we going to hit the deadlines? So we’re seeing a move toward risk-based testing, and really more agility within large-scale enterprises.”

 


]]>
A guide to automated testing tools https://sdtimes.com/test/a-guide-to-automated-testing-tools-5/ Fri, 22 Sep 2023 14:15:18 +0000 https://sdtimes.com/?p=52398 The following is a listing of automated testing tool providers, along with a brief description of their offerings. FEATURED PROVIDERS APPVANCE is the leader in generative AI for Software Quality.  Its premier product AIQ is an AI-native, unified software quality platform that delivers unprecedented levels of productivity to accelerate digital transformation in the enterprise.   Leveraging generative … continue reading

The post A guide to automated testing tools appeared first on SD Times.

]]>
The following is a listing of automated testing tool providers, along with a brief description of their offerings.

FEATURED PROVIDERS

APPVANCE is the leader in generative AI for Software Quality. Its premier product AIQ is an AI-native, unified software quality platform that delivers unprecedented levels of productivity to accelerate digital transformation in the enterprise. Leveraging generative AI and machine learning, AIQ robots autonomously validate all the possible user flows to achieve complete application coverage.

KEYSIGHT is a leader in test automation, where our AI-driven, digital twin-based solutions help innovators push the boundaries of test case design, scheduling, and execution. Whether you’re looking to secure the best experience for application users, analyze high-fidelity models of complex systems, or take proactive control of network security and performance, easy-to-use solutions including Eggplant and our broad array of network, security, traffic emulation, and application test software help you conquer the complexities of continuous integration, deployment, and test.

MABL is the enterprise SaaS leader of intelligent, low-code test automation that empowers high-velocity software teams to embed automated end-to-end tests into the entire development lifecycle. Mabl’s platform for easily creating, executing, and maintaining reliable browser, API and mobile web tests helps teams quickly deliver high-quality applications with confidence. That’s why brands like Charles Schwab, jetBlue, Dollar Shave Club, Stack Overflow, and more rely on mabl to create the digital experiences their customers demand.

PARASOFT helps organizations continuously deliver high-quality software with its AI-powered software testing platform and automated test solutions. Supporting embedded and enterprise markets, Parasoft’s proven technologies reduce the time, effort, and cost of delivering secure, reliable, and compliant software by integrating everything from deep code analysis and unit testing to UI and API testing, plus service virtualization and complete code coverage, into the delivery pipeline. 

OTHER PROVIDERS

Applitools is built to test all the elements that appear on a screen with just one line of code, across all devices, browsers and all screen sizes. We support all major test automation frameworks and programming languages covering web, mobile, and desktop apps.

Digital.ai Continuous Testing provides expansive test coverage across 2,000+ real mobile devices and web browsers, and seamlessly integrates with best-in-class tools throughout the DevOps/DevSecOps pipeline.

RELATED CONTENT: The evolution of test automation

IBM: Quality is essential, and the combination of automated testing and service virtualization from IBM Rational Test Workbench allows teams to assess their software throughout their delivery life cycle. IBM has a market-leading solution for the continuous testing of end-to-end scenarios covering mobile, cloud, cognitive, mainframe and more.

Micro Focus enables customers to accelerate test automation with one intelligent functional testing tool for web, mobile, API and enterprise apps. Users can test both the front-end functionality and back-end service parts of an application to increase test coverage across the UI and API.

Kobiton offers GigaFox on-premises or hosted, and solves mobile device sharing and management challenges during development, debugging, manual testing, and automated testing. A pre-installed and pre-configured Appium server provides “instant on” Appium test automation.

Orasi is a leading provider of software testing services, utilizing test management, test automation, enterprise testing, Continuous Delivery, monitoring, and mobile testing technology. 

ProdPerfect is an autonomous, end-to-end (E2E) regression testing solution that continuously identifies, builds and evolves E2E test suites via data-driven, machine-led analysis of live user behavior data. It addresses critical test coverage gaps, eliminates long test suite runtimes and costly bugs in production.  

Progress Software’s Telerik Test Studio is a test automation solution that helps teams be more efficient in functional, performance and load testing, improving test coverage and reducing the number of bugs that slip into production. 

Sauce Labs provides a cloud-based platform for automated testing of web and mobile applications. Optimized for use in CI and CD environment, and built with an emphasis on security, reliability and scalability, users can run tests written in any language or framework using Selenium or Appium.

SmartBear offers tools for software development teams worldwide, ensuring visibility and end-to-end quality through test management, automation, API development, and application stability. Popular tools include SwaggerHub, TestComplete, BugSnag, ReadyAPI, Zephyr, and others. 

testRigor helps organizations dramatically reduce time spent on test maintenance, improve test stability, and improve the speed of test creation. This is achieved through its support of “plain English” language that allows users to describe how to find elements on the screen and what to do with those elements from the end-user’s perspective. People creating tests on its system build 2,000+ tests per year per person. On top of that, testRigor helps teams deploy their analytics library in production, which makes systems automatically produce tests reflecting the most frequently used end-to-end flows from production.

 


]]>
Take advantage of AI-augmented software testing https://sdtimes.com/test/take-advantage-of-ai-augmented-software-testing/ Thu, 21 Sep 2023 21:13:05 +0000 https://sdtimes.com/?p=52393 The artificial intelligence-augmented software-testing market continues to rapidly evolve. As applications become increasingly complex, AI-augmented testing plays a critical role in helping teams deliver high-quality applications at speed.  By 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain, which is a significant increase from 10% in 2022, according to … continue reading

The post Take advantage of AI-augmented software testing appeared first on SD Times.

]]>
The artificial intelligence-augmented software-testing market continues to rapidly evolve. As applications become increasingly complex, AI-augmented testing plays a critical role in helping teams deliver high-quality applications at speed. 

By 2027, 80% of enterprises will have integrated AI-augmented testing tools into their software engineering toolchain, which is a significant increase from 10% in 2022, according to Gartner. AI-augmented software-testing tools assist humans in their testing efforts and reduce the need for human intervention. Overall, these tools streamline, accelerate and improve the test workflow. 

The future of the AI-augmented testing market

Many organizations continue to rely heavily on manual testing and aging technology, but market conditions demand a shift to automation, as well as more intelligent testing that is context-aware. AI-augmented software-testing tools will amplify testing capacity and help to eliminate steps that can be performed more efficiently by intelligent technologies. 

Over the next few years, there will be several trends that drive the adoption of AI-augmented software-testing tools, including increasing complexity of applications, increased adoption of agile and DevOps, shortage of skilled automation engineers and the need for maintainability. All of these factors will continue to drive an increasing need for AI and machine learning (ML) to increase the effectiveness of test creation, reduce the cost of maintenance and drive efficient test loops. Additionally, investment in AI-augmented testing will help software engineering leaders to delight their customers beyond their expectations and ensure production incidents are resolved quickly. 

AI augmentation is the next step in the evolution of software testing and is a crucial element for a strategy to reduce significant business continuity risks when critical applications and services are severely compromised or stop working. 

How generative AI can improve software quality and testing 

AI is transforming software testing by enabling improved test efficacy and faster delivery cycle times. AI-augmented software-testing tools use algorithmic approaches to enhance the productivity of testers and offer a wide range of capabilities across different areas of the test workflow.

There are currently several ways in which generative AI tools can assist software engineering leaders and their teams when it comes to software quality and testing:

  • Authoring test automation code is possible across unit, application programming interface (API) and user interface (UI) for both functional and nonfunctional checks and evaluation. 
  • Generative AI can help with general impact analysis, such as comparing different versions of user stories, code files and test results for potential risks and causes, as well as helping to triage flaky tests and defects. 
  • Test data can be generated for populating a database or driving test cases. This could be common sales data, customer relationship management (CRM) and customer contact information, inventory information, or location data with realistic addresses. 
  • Generative AI offers testers a pairing opportunity for training, evaluating and experimenting in new methods and technologies. This will be of less value than that of human peers who actively suggest improved alternatives during pairing exercises. 
  • Converting existing automated test cases from one framework to another is possible, but will require more human engineering effort, and is currently best used as a pairing and learning activity rather than an autonomous one. 
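The test-data bullet above is the easiest to sketch. A minimal generator of realistic-looking but entirely synthetic CRM contacts follows; the names, streets, domain, and seed are all arbitrary illustration, and real generative tools produce far richer variety.

```python
import random

# Synthetic CRM-style contact records of the kind the bullet describes.
# All names, streets, and domains here are invented test fixtures.
FIRST = ["Ada", "Grace", "Alan", "Edsger"]
LAST = ["Lovelace", "Hopper", "Turing", "Dijkstra"]
STREETS = ["Main St", "Oak Ave", "Elm Dr"]

def make_contacts(n, seed=7):
    """Generate n deterministic, realistic-looking contact records."""
    rng = random.Random(seed)
    return [
        {
            "name": f"{rng.choice(FIRST)} {rng.choice(LAST)}",
            "email": f"user{i}@example.com",
            "address": f"{rng.randint(1, 999)} {rng.choice(STREETS)}",
        }
        for i in range(n)
    ]
```

Seeding the generator keeps the data reproducible across test runs, which matters when a generated record surfaces a bug you need to replay.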

While testers can leverage generative AI technology to assist in their roles, they should also expect a wave of mobile testing applications that are using generative capabilities. 

Software engineering leaders and their teams can exploit the positive impact of AI implementations that use large language models (LLMs), as long as humans remain involved and integration with the broad landscape of development and testing tools keeps improving. However, avoid creating prompts to feed into LLM-based systems if they have the potential to contravene intellectual property laws, or expose a system’s design or its vulnerabilities. 

Software engineering leaders can maximize the value of AI by identifying areas of software testing in their organizations where AI will be most applicable and impactful. Modernize teams’ testing capabilities by establishing a community of practice to share information and lessons and budgeting for training. 


]]>
How Cox Automotive found value in automated testing https://sdtimes.com/test/how-cox-automotive-found-value-in-automated-testing/ Fri, 01 Sep 2023 21:07:56 +0000 https://sdtimes.com/?p=52390

The post How Cox Automotive found value in automated testing appeared first on SD Times.

]]>
How does a quality organization run? And how does it deliver a quality product for consumers?

According to Roya Montazeri, senior director of test and quality at Cox Automotive, no one tool or approach can solve the quality problem. Cox Automotive, she said, is a specialized software company that addresses buying, selling, trading and everything else in the car life cycle, with a broad portfolio of products that includes Dealertrack, Kelley Blue Book, AutoTrader, Manheim and more.

“Whatever we create from software automation and software delivery … needs to make sure that all clients are getting the best deal,” Montazeri said. “They can, and our dealers can, trust our software and at the end, the consumers can get the car they want. And this is about digitalization of the entire process.”

When Montazeri joined Cox Automotive, her area – Dealertrack – was mature about testing, with automation in place. But, she said, the focus on automation and the need to strengthen it started from two aspects: the quality of what was being delivered, and the impact of that on trust within the division. “Basically, when you have an increased defect rate, and when you have more [calls into] customer support, these are indications of a quality problem,” she said. “That was the realization of investment … into more tools or more ability for automation.”

To improve quality, Dealertrack began to shift testing left, and invested in automating its CI/CD pipeline. “You can’t have a CI/CD pipeline without automation,” she said. “It’s just a broken pipeline.” And to have a fully automated pipeline, she said, training is critical.

Another factor that led to the need for automation at Dealertrack was the complexity of how its products work. “Any product these days is not a standalone on its own; there is a lot of integration,” Montazeri said. “So how do you test those integrations? And that led us to look at where most of our problems were: Is it at the component-level testing? Or is it the complexity of the integration testing?”

That, she said, led Dealertrack to use service virtualization software from Parasoft, so teams could mimic the same interactions and find problems before they actually moved the software to production and made the integration happen.
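Service virtualization of this kind can be sketched in miniature with a stubbed HTTP dependency. The endpoint and payload below are invented for illustration and are not Parasoft’s actual product or API; the point is only that integration logic can be exercised against a fake downstream service before anything reaches production.

```python
import json
import threading
import urllib.request
from http.server import BaseHTTPRequestHandler, HTTPServer

class FakeInventoryHandler(BaseHTTPRequestHandler):
    """A minimal virtual service that mimics a downstream dependency's
    response contract so integration paths can be tested against it."""
    def do_GET(self):
        body = json.dumps({"sku": "ABC-123", "in_stock": 7}).encode()
        self.send_response(200)
        self.send_header("Content-Type", "application/json")
        self.end_headers()
        self.wfile.write(body)

    def log_message(self, *args):
        pass  # keep test output quiet

# Port 0 lets the OS pick any free port.
server = HTTPServer(("127.0.0.1", 0), FakeInventoryHandler)
threading.Thread(target=server.serve_forever, daemon=True).start()

# The code under test would normally call the real service here.
url = f"http://127.0.0.1:{server.server_port}/inventory/ABC-123"
with urllib.request.urlopen(url) as resp:
    payload = json.load(resp)

server.shutdown()
assert payload["in_stock"] == 7
```

Commercial tools add record-and-replay, latency simulation and stateful behavior on top of this basic idea.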

When they first adopted virtualization, Montazeri said, they originally thought they could measure success by how many defects they found. “But that wasn’t the right KPI at the time for just virtualization. We needed to mature enough to say, ‘It’s not just that we found that defect, it’s about exercising the path so we know what’s not even working.’ So that’s how the investment came about for us.”


]]>
Accessibility testing https://sdtimes.com/test/accessibility-testing/ Fri, 01 Sep 2023 20:58:07 +0000 https://sdtimes.com/?p=52384

The post Accessibility testing appeared first on SD Times.

]]>
One area in which test automation can deliver big value to organizations is in accessibility.

Accessibility is all about the user experience, and is especially important for users with disabilities. Automated end-to-end testing helps answer the question of how easy or difficult it is for users to engage with the software.

“If the software is crummy, if it’s not responding, you’re going to have a bad experience,” noted Arthur Hicken, technical evangelist at Parasoft. “But let’s say the software has passed the steps of being well-designed and well-constructed.” After that, Hicken said, come the accessibility tests, which ask: Is this really usable and well-suited for humans? And which tasks do humans use most?

There is nothing innate about test automation that can raise a flag on such issues unless the model is trained to identify and report them; for instance, it might flag any task that takes more than four steps to complete so it can be looked at.
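A toy version of such a trained rule, here “flag any task that takes more than four steps,” might look like the following. The recorded-flow data is invented for illustration; a real tool would capture these flows from instrumented sessions.

```python
# Recorded user flows: task name -> ordered list of UI steps.
# Invented example data standing in for captured session recordings.
flows = {
    "login": ["open page", "enter user", "enter password", "submit"],
    "checkout": ["open cart", "enter address", "enter card",
                 "confirm card", "review order", "submit"],
}

MAX_STEPS = 4  # the usability budget from the rule above

def flag_long_tasks(flows: dict, max_steps: int = MAX_STEPS) -> list:
    """Return the names of tasks whose flows exceed the step budget."""
    return [task for task, steps in flows.items() if len(steps) > max_steps]

flagged = flag_long_tasks(flows)  # "checkout" has six steps, so it is flagged
```

The rule itself is trivial; the value comes from encoding many such usability heuristics and running them against every recorded flow automatically.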

According to Jonathan Wright of Keysight, it’s equally important to be sure the application is usable and accessible in various regional deployments involving different language sets and cultural variations. “I had a call with a large-scale organization and they wanted to know how we could support their multiple different localization deployments, which includes help and documentation. So it’s really the ability to support global rollouts that follow the sun.”

Wright said in large organizations, centers of enablement are being marginalized as self-service takes hold. “I’m in a large organization, what tools and technology do I need? And you know, it’s usually a people problem, not a technology problem. It’s kind of giving them the right tools to be able to help them do the job.”

For accessibility testing, mabl’s Fernando Mattos said companies often will have a different team to do that type of testing. “Many times, it’s a third-party company performing that, along with legal advice. What we’re trying to do is to shift that left and allow the reusability of you having already tested the whole UI. Why recreate all those tests in a separate tool, and why have a different team do it much later after deployment?”

The impact of a poor user experience on digital businesses can include lost customers and revenue, as users today expect a seamless experience. “In e-commerce, in B-to-C commerce, they’re seeing hypercompetitiveness in the market and customer switching because the page takes a little too long to load,” he said. “And that talks a little bit more about what end-to-end testing is.”

Mattos added that making sure things work properly has traditionally been seen as functional quality, but it’s also important for organizations to make sure the application performs well: that it responds quickly and that the UI shows up quickly. He added that organizations can reuse their functional test cases to check for accessibility, so if a development team is pushing new features and one introduces a critical accessibility issue, it gets caught right at the commit or pull request phase and can be fixed right away. Mabl, and the industry as a whole, is moving to shift this testing left, rather than performing it just prior to release.
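Reusing a functional check as a lightweight performance check can be as simple as wrapping it in a timing budget. In this sketch both the budget and the function under test are placeholders, not any vendor’s API.

```python
import time

def assert_within_budget(action, budget_seconds: float):
    """Run an existing functional check and additionally enforce a
    wall-clock time budget on it."""
    start = time.perf_counter()
    result = action()
    elapsed = time.perf_counter() - start
    assert elapsed <= budget_seconds, (
        f"took {elapsed:.3f}s, budget was {budget_seconds}s")
    return result

# Placeholder for an existing functional test, e.g. rendering a page.
def render_page():
    return "<html>ok</html>"

page = assert_within_budget(render_page, budget_seconds=1.0)
```

The same functional assertion now fails the build either when the output is wrong or when it arrives too slowly, which is the reuse Mattos describes.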

Mattos noted that there are libraries for automated accessibility testing that can catch 55% to 60% of issues, while the remaining 40% to 45% have to be found by people with disabilities or by experts who know how to test for them. But for that 55-60%, mabl pushes those checks into development and introduces accessibility testing there, instead of waiting for a third-party company or team to duplicate the test and catch an error a week later.
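The automatable slice of accessibility testing is typically rule-based. The sketch below is a tiny stand-in for what real accessibility engines (such as the open-source axe-core library) do: it checks a single rule, missing image alt text, using only the standard library.

```python
from html.parser import HTMLParser

class AltTextChecker(HTMLParser):
    """Collect <img> tags that are missing alt text, or whose alt
    attribute is empty."""
    def __init__(self):
        super().__init__()
        self.violations = []

    def handle_starttag(self, tag, attrs):
        if tag == "img":
            attr_map = dict(attrs)
            if not attr_map.get("alt"):
                self.violations.append(attr_map.get("src", "<unknown>"))

def check_alt_text(html: str) -> list:
    """Return the src of every image violating the alt-text rule."""
    checker = AltTextChecker()
    checker.feed(html)
    return checker.violations

page = '<img src="logo.png" alt="Company logo"><img src="hero.jpg">'
missing = check_alt_text(page)  # flags hero.jpg, which has no alt text
```

Real engines run dozens of such rules (contrast, labels, ARIA roles and so on), which is what makes the 55-60% figure plausible for the mechanically checkable portion.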


]]>
Training the models for testing https://sdtimes.com/test/training-the-models-for-testing/ Fri, 01 Sep 2023 16:40:05 +0000 https://sdtimes.com/?p=52396

The post Training the models for testing appeared first on SD Times.

]]>
Code coverage and end-to-end testing – sometimes called path testing – are particularly well-suited for automation, but they’re only as good as the training and implementation.

Since AI doesn’t have an imagination, it is up to the model and whoever is feeding in the data to cover as many paths as possible in an end-to-end test. So how would the AI discover something that the person creating the model couldn’t think of?

“If you hire a manual tester tomorrow, you say to them on Day One, ‘I want you to test my application. Here’s the URL, here are 10 things you need to know, use this user ID and password to log in as an end user. And make sure that whenever you see quantity, price and cost, that quantity times price equals cost,’” said Kevin Parker, Appvance’s VP of product. “You’re going to give that manual tester the basic rules.”

Parker said AI can be trained in the same way. “What you teach is that the manual tests involve three things: how to behave, what to test, and what data to use. You teach the AI, ‘That’s an annoying popup box, just dismiss it. Don’t click the admin button, don’t go into the marketing content.’ You communicate those things to the AI, which then has the ability to autonomously go to each page, identify if any of the rules you’ve trained it on apply – how to behave, how to interact, what data to enter – and then exercise them.”
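The “quantity times price equals cost” rule Parker describes is exactly the kind of invariant a trained bot, or a plain script, can apply to every page it visits. The line-item data below is invented for illustration, standing in for values the bot would scrape from each page.

```python
# Invented line items standing in for values scraped from pages.
line_items = [
    {"page": "cart", "quantity": 3, "price": 19.99, "cost": 59.97},
    {"page": "invoice", "quantity": 2, "price": 10.00, "cost": 25.00},
]

def check_cost_invariant(items: list) -> list:
    """Return the pages where quantity * price != cost, allowing a
    half-cent tolerance for floating-point rounding."""
    return [i["page"] for i in items
            if abs(i["quantity"] * i["price"] - i["cost"]) > 0.005]

failures = check_cost_invariant(line_items)  # the invoice page fails
```

Once such rules are taught, they run on every page the bot reaches, which is how a handful of business rules fan out into large volumes of generated checks.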

It turns out that it’s far easier to spend your time training the AI, which will yield huge numbers of generated tests, than it is to sit down and manually write those tests, he explained. And that means testing can now keep pace with development, because you’re able to generate tests at scale and at volume, Parker said.

Model trainers have the mechanism to teach the AI the ability to adapt, so when it sees something it hasn’t seen before, it can extrapolate an answer from the business rules it was trained on, Parker said. “We can then assume that what it’s about to do is the right thing,” he said. Most important, he added, is that it needs to know when to ask questions. “It needs to know when to phone home to its human and say, ‘Hey, this is something I’m not confident I know what to do with. Teach me,’” Parker said.

And in that way, AI bots can be trained to autonomously interact with the application, and test it independently of humans.


]]>