Sponsored Archives - SD Times
https://sdtimes.com/category/sponsored/

HCL DevOps streamlines development processes with its platform-centric approach
https://sdtimes.com/devops/hcl-devops-streamlines-development-processes-with-its-platform-centric-approach/ | Thu, 24 Oct 2024

Platform engineering has been gaining quite a lot of traction lately — and for good reason. The benefits to development teams are many, and it could be argued that platform engineering is a natural evolution of DevOps, so it’s not a huge cultural change to adapt to. 

According to Jonathan Harding, Senior Product Manager of Value Stream Management at HCLSoftware, in an era where organizations have become so focused on how to be more productive, this discipline has gained popularity because “it gets new employees productive quickly, and it gets existing employees able to deliver quickly and in a way that is relatively self-sufficient.”

Platform engineering teams work to build an internal developer portal (IDP), which is a self-service platform that developers can use to make certain parts of their job easier. For example, rather than a developer needing to contact IT and waiting for them to provision infrastructure, that developer would interact with the IDP to get that infrastructure provisioned.

Essentially, an IDP is a technical implementation of a DevOps objective, explained Chris Haggan, Head of HCL DevOps at HCLSoftware.

“DevOps is about collaboration and agility of thinking, and platform engineering is the implementation of products like HCL DevOps that enable that technical delivery aspect,” Haggan said.

Haggan looks at platform engineering from the perspective of having a general strategy and then bringing in elements of DevOps to provide a holistic view of that objective. 

“I want to get this idea that a customer has given me out of the ideas bucket and into production as quickly as I can. And how do I do that? Well, some of that is going to be about the process, the methodology, and the ways of working to get that idea quickly through the delivery lifecycle, and some of that is going to be about having a technical platform that underpins that,” said Haggan. 

IDPs typically include several different functionalities and toolchains, acting as a one-stop shop for everything a developer might need. From a single platform, they might be able to create infrastructure, handle observability, or set up new development environments. HCL DevOps offers similar capabilities, but because it comes as a ready-to-use, customizable package, development teams don’t have to go through the process of building an IDP themselves and can skip right to the benefits.

Haggan explained that the costs of building and maintaining a platform engineering system are not inconsequential. For instance, they need to integrate multiple software delivery systems and figure out where to store metrics, SDLC events, and other data, which often requires setup and administration of a new database. 

Plus, teams sometimes design a software delivery system that incorporates their own cultural nuances, which can be helpful, but other times “they reflect unnecessary cultural debt that has accumulated within an organization for years,” said Haggan.

HCL DevOps consists of multifaceted solutions, with the three most popular being:

  • HCL DevOps Test: An automated testing platform that covers UI, API, and performance testing, and provides testing capabilities like virtual services and test data creation.
  • HCL DevOps Deploy: A fully automated CI/CD solution that supports a variety of architectures, including distributed multi-tier, mobile, mainframe, and microservices. 
  • HCL DevOps Velocity: The company’s value stream management offering that pulls in data from across the SDLC to provide development teams with useful insights.

Haggan admitted that he’s fully aware that organizations will want to customize and add new capabilities, so it’s never going to be just their platform that’s in play. But the benefit they can provide is that customers can use HCL DevOps as a starting point and then build from there. 

“We’re trying to be incredibly open as an offering and allow customers to take advantage of the tools that they have,” Haggan said. “We’re not saying you have to work only with us. We’re fully aware that organizations have their own existing workflows, and we’re going to work with that.”

To that end, HCL offers plugins that connect with other software. For instance, HCL DevOps Deploy currently has about 200 different plugins that could be used, and customers can also create their own, Harding explained. 

The plugin catalog is curated by the HCL DevOps technical team, but also has contributions from the community submitted through GitHub. 

Making context switching less disruptive

Another key benefit of IDPs is that they can cut down on context switching, which is when a developer needs to switch to different apps for different tasks, ultimately taking them out of their productive flow state.  

“Distraction for any knowledge worker in a large enterprise is incredibly costly for the enterprise,” said Harding. “So, focus is important. I think for us, platform engineering — and our platform in general — allows a developer to stay focused on what they’re doing.”

“Context switching will always be needed to some degree,” Haggan went on to say. A developer is never going to be able to sit down for the day and not ever have to change what they’re thinking about or doing. 

“It’s about making it easy to make those transitions and making it simple, so that when I move from planning the work that I’m going to be doing to deploying something or testing something or seeing where it is in the value stream, that feels natural and logical,” Haggan said. 

Harding added that they’ve worked hard to make it easy to navigate between the different parts of the platform so that the user feels like it’s all part of the same overall solution. That aspect ultimately keeps them in that same mental state as best as possible.

The HCL DevOps team has designed the solution with personas in mind, thinking through the different tasks that a particular role might need to switch between throughout the day.

For instance, a quality engineer using a test-driven development approach might start with writing encoded acceptance criteria in a work-item management platform, then move to a CI/CD system to view the results of an automated test, and then move to a test management system to incorporate their test script into a regression suite. 

These tasks span multiple systems, and each system often has its own role-based access control (RBAC), tracking numbers, and user interfaces, which can make the process confusing and time-consuming, Haggan explained. 

“We try to make that more seamless, and tighten that integration across the platform,” said Harding. “I think that’s been a focus area, really looking from the end user’s perspective, how do we tighten the integration based on what they’re trying to accomplish?”

To learn more about how HCL DevOps can help achieve your platform goals and improve development team productivity, visit the website to book a demo and learn about the many capabilities the platform has to offer. 

How Melissa’s Global Phone service cuts down on data errors and saves companies money
https://sdtimes.com/data/how-melissas-global-phone-service-cuts-down-on-data-errors-and-saves-companies-money/ | Mon, 07 Oct 2024

Having the correct customer information in your databases is necessary for a number of reasons, but especially when it comes to active contact information like email addresses or phone numbers.

“Data errors cost users time, effort, and money to resolve, so validating phone numbers allows users to spend those valuable resources elsewhere,” explained John DeMatteo, solutions engineer I at Melissa, a company that provides various data verification services, including one called Global Phone that validates phone number data.

For instance, call center employees often ask callers what a good number to call them back would be in case they get disconnected. Validating that number can eliminate user error and thus prevent the frustration of a caller who can’t be reached again. Or, if you’re doing a mobile campaign, you don’t want to be texting landlines or dead numbers because “it costs money every time you send out a text message,” DeMatteo said during a recent SD Times microwebinar.

RELATED: Validating names in databases with the help of Melissa’s global name verification service

It’s also helpful when cleansing databases or migrating data because you can confirm that the numbers in an existing database are actually valid.

There are a number of common errors in phone number data that validation can sort out, including inconsistent formatting, data type mismatches, disconnected or fake phone numbers, and manual entry errors.

“Global Phone allows customers the ability to standardize and validate phone numbers, to correct and detect any issues that may be present,” said DeMatteo.

The service takes in either a REST request for a single phone number or up to 100 records in a JSON request. All that’s needed is a single phone number, and optionally a country name — Global Phone can detect the country, but supplying it can speed up processing.

Then, Global Phone outputs a JSON file that contains validated, enriched, and standardized phone numbers, as well as result codes that identify information tied to the record, such as whether the number belongs to a cell phone or is a disposable number. It may also be able to return CallerID and carrier information.

“Probably the most important thing is the result code,” DeMatteo explained. “We’re going to be returning information about what the data quality looks like, if there’s any problems with it.”

During the microwebinar, DeMatteo walked through an example of a poorly formatted phone number going through Global Phone.

In his example, the original phone number was ((858)[481]_8931. While it is the correct number of digits for a phone number, it is clearly poorly formatted and contains extra punctuation characters that shouldn’t be there.

Running it through Global Phone put the number into the correct format and also returned specific validation codes: PS01 (valid phone number), PS08 (landline), and PS18 (on the Do Not Call list).
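
To make that flow concrete, here is a minimal sketch of what such a lookup could look like in Python. The endpoint URL, parameter names, and response fields are illustrative assumptions rather than Melissa’s documented schema; the input number and result codes come from the example above.

```python
# Hypothetical single-record Global Phone lookup; the endpoint, parameter
# names, and response fields are assumptions for illustration only.
import requests

resp = requests.get(
    "https://globalphone.melissadata.net/v4/WEB/GlobalPhone/doGlobalPhone",  # assumed URL
    params={
        "id": "YOUR_LICENSE_KEY",      # placeholder credential
        "phone": "((858)[481]_8931",   # the poorly formatted number from the example
        "ctry": "US",                  # optional, but supplying it speeds up processing
        "format": "json",
    },
    timeout=10,
)
record = resp.json()["Records"][0]
print(record["PhoneNumber"])  # the standardized number
print(record["Results"])      # e.g. "PS01,PS08,PS18"
```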

According to DeMatteo, there are a number of best practices when working with phone data. First, always verify the phone type and active status before sending SMS. Another tip is to use the RecordID and TransmissionReference output fields to better keep track of data.

And for better efficiency, some recommendations are to supply the country information if it’s known and send multiple records at once using JSON batch calls, as that’s going to “give you the best bang for your buck.”
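
Putting those efficiency tips together, a batch call might look something like the sketch below. RecordID and TransmissionReference are the tracking fields named above; the endpoint and the rest of the request schema are assumptions for illustration.

```python
# Hypothetical JSON batch request; RecordID and TransmissionReference are the
# tracking fields described in the article, the rest of the schema is assumed.
import requests

payload = {
    "TransmissionReference": "nightly-cleanse-001",  # echoed back to identify the batch
    "Records": [
        {"RecordID": "1", "PhoneNumber": "858-481-8931", "Country": "US"},
        {"RecordID": "2", "PhoneNumber": "+44 20 7946 0958", "Country": "GB"},
    ],
}
resp = requests.post(
    "https://globalphone.melissadata.net/v4/WEB/GlobalPhone/doGlobalPhone",  # assumed URL
    json=payload,
    timeout=10,
)
for rec in resp.json()["Records"]:
    print(rec["RecordID"], rec.get("Results"))  # match results back by RecordID
```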

Accelerating innovation: How the Lucid visual collaboration suite boosts Agile team efficiency
https://sdtimes.com/softwaredev/accelerating-innovation-how-the-lucid-visual-collaboration-suite-boosts-agile-team-efficiency/ | Tue, 01 Oct 2024

Fostering a positive developer experience and aligning it with business goals may seem like an obvious focus for organizational stakeholders. When developers feel empowered to innovate, they deliver customer experiences that positively impact the bottom line. Yet key organizational stakeholders still struggle to get visibility into how products are advancing, from ideation to delivery.

To help those teams gain insights into how products are advancing, Lucid Software is announcing enhancements to its visual collaboration platform that are designed to help elevate agile workflows by cultivating greater alignment, creating clarity and improving decision-making. 

“Visual collaboration is about seeing an entire workflow from the very beginning, enabling teams to align, make informed decisions and guide the initiative all the way to market delivery,” said Jessica Guistolise, an evangelist, Agile coach and consultant at Lucid. “Lucid excels at bringing all necessary information into one platform, supporting teams regardless of whether they follow Agile or simply need to iterate faster.”

Visuals, Guistolise said, are important for getting all stakeholders on the same page and improving the overall developer experience. “Prior to the pandemic, agile teams would gather in one room surrounded by visuals and sticky notes that displayed their work, vision, mission and tracked dependencies. Then, we all went home. Now where does all that information live?” Lucid, Guistolise explained, became a centralized hub for teams that have everything they need to do their work, day in and day out. 

Lucid’s latest release includes an emphasis on team-level coordination and program-level planning. On the team level, there are features for creating dedicated virtual team spaces for organizing such critical artifacts as charters, working agreements and more. Lucid’s platform replicates the benefits of physical team rooms and serves as a central hub for collaboration, where all needed documents are stored and can be shared.

On the program level, real-time dependency mapping enables visualization and management of those dependencies directly from Jira and ADO. Other new features are structured big room planning templates to coordinate cross-functional work and the ability to sync project data between Lucid, Jira and ADO to have the most current information reflected across all platforms.

When it comes to team-level coordination, team spaces are customizable, allowing for a more personalized and engaging work experience. “When working with distributed teams, fostering a sense of team connection can be a challenge,” Guistolise said. “This brings some of that humanity and team experience. ‘What did you do this weekend? Can I see a picture of your dog?’ All of that can be done visually and it cultivates a shared understanding of one another, and not just of the work that we’re doing.” 

Speaking to how these features enhance the developer experience, Guistolise said she came to embrace agility because “when we bring humanity back into the workplace and elevate the overall team experience, we not only boost collaboration and efficiency but also foster connection that makes those moments more enjoyable.”

Customizable Agile templates are also available to help guide teams through daily standups, sprint planning, retrospectives and other Agile events by offering integrated tools such as timers, laser pointers and the ability to import Jira issues. 

Lucid also offers a private mode to allow for anonymous contributions of ideas and feedback. Guistolise explained that private mode offers psychological safety “to allow for those voices who may not feel comfortable speaking up or even dissenting in a meeting.” Private mode, she added, still allows teams to surface that information anonymously, which means better decisions will be made in the long run. The release also includes new estimation capabilities for streamlining sprint planning using a poker-style approach, and those estimates can be synced with Jira or ADO to align planning and execution.

Further, two-way integrations with Jira and Azure DevOps mean that “no one has to take pictures of the sticky notes on the walls and then type it into a back-end system so there’s a record of what is going on,” she said. Instead, because of the integrations, everything moves automatically back and forth between systems, providing updated, real-time information upon which to make those business and development decisions.

These latest innovations from Lucid Software empower developer teams to have a more positive working experience by providing the tools they need to navigate the complexities of Agile workflows, from daily coordination to large-scale program planning. By enhancing both team-level and program-level collaboration, Lucid continues to lead the way in providing the most intelligent and comprehensive visual collaboration platform to support modern teams.

Podcast: The importance of buildpacks in developing cloud native applications
https://sdtimes.com/containers/podcast-the-importance-of-buildpacks-in-developing-cloud-native-applications/ | Thu, 26 Sep 2024

Buildpacks help ease the burden on developers by taking source code and turning it into fully functional apps.

To learn more about this technology, we interviewed Ram Iyengar, chief evangelist of the Cloud Foundry Foundation, on the most recent episode of our podcast, What the Dev?

Here is an edited and abridged version of that conversation:

How do buildpacks — and the Paketo Buildpacks in particular — help developers deploy cloud native applications?

I think buildpacks have been very important in making a lot of applications get pushed to production and get containerized without having to deal with a lot of overhead that usually comes with the process of containerization. What can I say that we haven’t said already in the webinar and in the article and things like that? Well, there’s a community angle to this. Buildpacks is somewhat headed towards graduation within the CNCF, and we expect that it will graduate in the next six to 12 months. If there’s any show of support that you can do as a community, I highly welcome people giving it a star, opening important issues, trying the project out, and seeing how you can consume it, and giving us feedback about how the project can be improved.

One thing that I wanted to get into a little bit is Korifi, which is your platform for creating and deploying Kubernetes applications. Can you talk a little bit about Korifi and how it ties in with buildpacks?

Absolutely, one of the main areas where we see a lot of buildpacks being consumed is when people are getting into the job of building platforms on Kubernetes. Now, any sort of talk you see about Kubernetes these days, whether it’s at KubeCon or one of the other events, is that it’s extremely complex. It’s been said so many times over and over again; there’s memes, there’s opinion pieces, there’s all kinds of internet subculture about how complex Kubernetes can be. 

The consequence of this complexity is that some teams and companies have started to come up with a platform where they say you want to make use of Kubernetes? Well, install another substrate over Kubernetes and abstract a lot of the Kubernetes internals from interacting with your developers. So that resonates perfectly with what the Cloud Foundry messaging has been all these years. People want a first-class, self-service, multi-tenant experience over VMs, and they want that same kind of experience on Kubernetes today for somewhat slightly different reasons, but the ultimate aim being that developers need to be able to get to that velocity that they’re most optimal at. They need to be able to build fast and deploy faster and keep pushing applications out into production while folding down a lot of the other areas of importance, like, how do we scale this, and how do we maintain load balances on this? How do we configure networking and ingress?

All of these things should fall down into a platform. And so Korifi is what has emerged from the community for actually implementing that kind of behavior, and buildpacks fits perfectly well into this world. So by using buildpacks — and I think Korifi is like the numero uno consumer of buildpacks — they’ve actually built an experience to be able to deploy applications onto Kubernetes, irrespective of the language and family, and taking advantage of all of those buildpacks features.

I’m hearing a lot of conversation about the Cloud Foundry Foundation in general, that it’s kind of old, and perhaps Kubernetes is looking to displace what you guys are doing. So how would you respond to that? And what is the Cloud Foundry Foundation offering in the Kubernetes world? 

It’s a two part or a two pronged answer that I have. On the one hand, there is the technology side of things. On the other, there’s a community and a human angle to things. Engineers want new tools and new things and new infrastructure and new kinds and ways to work. And so what has happened in the larger technology community is that a sufficiently adequate technology like Cloud Foundry suddenly found itself being relegated to as legacy technology and the old way to do things and not modern enough in some cases. That’s the human angle to it. So when people started to look at Kubernetes, when the entire software development community learned of Kubernetes, what they did was to somehow pick up on this new trend, and they wanted to sort of ride the hype train, so to say. And Kubernetes started to occupy a lot of the mind space, and now, as Gartner puts it quite well, you’re past that elevated expectations phase. And you’re now getting into productivity, and the entire community is yearning for a way to consume Kubernetes minus the complexity. And they want a very convenient way in which to deploy applications on Kubernetes while not worrying about networking and load balancing and auto scalars and all of these other peripheral things that you have to attach to an application.

I think it’s not really about developers just wanting new things. I think they want better tools and more efficient ways of doing their jobs, which frees them up to do more of the innovation that they like and not get bogged down with all of those infrastructure issues and things that, you know, can now be taken care of. So I think what you’re saying is very important in terms of positioning Cloud Foundry as being useful and helpful for developers in terms of gaining efficiency and being able to work the way they want to work.

Well, yes, I agree in principle, which is why I’m saying Cloud Foundry and some others like Heroku, they all perfected this experience of here’s what a developer’s workflow should be. Now, developers are happy to adopt new ways to work, but the problem is, when you’re on the path to gain that kind of efficiency and velocity, you often unintentionally build a lot of opinionated workflows around yourself. So, all developers will have a very specific way in which they’ll actually create deployments and create these immutable artifacts, and they’re going to build themselves a fort from where they’d like to be kings of the castle, lords of the manor; adopting something new means assailing a lot of that mental image, and apprehensions come from deviating from it. And at the moment, Kubernetes seems to offer one of the best ways to build and package and deploy an app, given that it can accomplish so many different things. 

Now, if you take a point by point comparison between what Cloud Foundry was capable of in, let’s say, 2017 versus what Kubernetes is capable of right now, it will be almost the same. So in terms of feature parity, we are now at a point, and this might be very controversial to say on a public podcast, but in terms of feature parity, Cloud Foundry has always offered the kind of features that are available in the Kubernetes community right now. 

Now, of course, Kubernetes imagines applications to be built and deployed in a slightly different way, but the parity holds in terms of getting everything into containers, shipping them into a container orchestrator, providing the kind of reliability that applications need, and allowing sidecars and services and multi-tenancy. 

I strongly believe that the Cloud Foundry offering was quite compelling even four or five years ago, while Kubernetes is still sort of navigating some fairly choppy waters in terms of multi-tenancy and services and things like that. But hey, as a community, they’re doing wonderful innovation. And yeah, I agree with you when I say engineers are always after the best way in which to, you know, gain that efficiency.

Data privacy and security in AI-driven testing
https://sdtimes.com/data/data-privacy-and-security-in-ai-driven-testing/ | Wed, 04 Sep 2024

As AI-driven testing (ADT) becomes increasingly integral to software development, the importance of data privacy and security cannot be overstated. While AI brings numerous benefits, it also introduces new risks, particularly concerning intellectual property (IP) leakage, data permanence in AI models, and the need to protect the underlying structure of code. 

The Shift in Perception: A Story from Typemock

In the early days of AI-driven unit testing, Typemock encountered significant skepticism. When we first introduced the idea that our tools could automate unit tests using AI, many people didn’t believe us. The concept seemed too futuristic, too advanced to be real.

Back then, the focus was primarily on whether AI could truly understand and generate meaningful tests. The idea that AI could autonomously create and execute unit tests was met with doubt and curiosity. But as AI technology advanced and Typemock continued to innovate, the conversation started to change.

Fast forward to today, and the questions we receive are vastly different. Instead of asking whether AI-driven unit tests are possible, the first question on everyone’s mind is: “Is the code sent to the cloud?” This shift in perception highlights a significant change in priorities. Security and data privacy have become the primary concerns, reflecting the growing awareness of the risks associated with cloud-based AI solutions.

RELATED: Addressing AI bias in AI-driven software testing

This story underscores the evolving landscape of AI-driven testing. As the technology has become more accepted and widespread, the focus has shifted from disbelief in its capabilities to a deep concern for how it handles sensitive data. At Typemock, we’ve adapted to this shift by ensuring that our AI-driven tools not only deliver powerful testing capabilities but also prioritize data security at every level.

The Risk of Intellectual Property (IP) Leakage
  1. Exposure to Hackers: Proprietary data, if not adequately secured, can become a target for hackers. This could lead to severe consequences, such as financial losses, reputational damage, and even security vulnerabilities in the software being developed.
  2. Cloud Vulnerabilities: AI-driven tools that operate in cloud environments are particularly susceptible to security breaches. While cloud services offer scalability and convenience, they also increase the risk of unauthorized access to sensitive IP, making robust security measures essential.
  3. Data Sharing Risks: In environments where data is shared across multiple teams or external partners, there is an increased risk of IP leakage. Ensuring that IP is adequately protected in these scenarios is critical to maintaining the integrity of proprietary information.
The Permanence of Data in AI Models
  1. Inability to Unlearn: Once AI models are trained with specific data, they retain that information indefinitely. This creates challenges in situations where sensitive data needs to be removed, as the model’s decisions continue to be influenced by the now “forgotten” data.
  2. Data Persistence: Even after data is deleted from storage, its influence remains embedded in the AI model’s learned behaviors. This makes it difficult to comply with privacy regulations like the GDPR’s “right to be forgotten,” as the data’s impact is still present in the AI’s functionality.
  3. Risk of Unintentional Data Exposure: Because AI models integrate learned data into their decision-making processes, there is a risk that the model could inadvertently expose or reflect sensitive information through its outputs. This could lead to unintended disclosure of proprietary or personal data.
Best Practices for Ensuring Data Privacy and Security in AI-Driven Testing
Protecting Intellectual Property

To mitigate the risks of IP leakage in AI-driven testing, organizations must adopt stringent security measures:

  • On-Premises AI Processing: Implement AI-driven testing tools that can be run on-premises rather than in the cloud. This approach keeps sensitive data and proprietary code within the organization’s secure environment, reducing the risk of external breaches.
  • Encryption and Access Control: Ensure that all data, especially proprietary code, is encrypted both in transit and at rest. Additionally, implement strict access controls to ensure that only authorized personnel can access sensitive information.
  • Regular Security Audits: Conduct frequent security audits to identify and address potential vulnerabilities in the system. These audits should focus on both the AI tools themselves and the environments in which they operate.
Protecting Code Structure with Identifier Obfuscation
  1. Code Obfuscation: By systematically altering variable names, function names, and other identifiers to generic or randomized labels, organizations can protect sensitive IP while allowing AI to analyze the code’s structure. This ensures that the logic and architecture of the code remain intact without exposing critical details. (A minimal sketch of this idea follows this list.)
  2. Balancing Security and Functionality: It’s essential to maintain a balance between security and the AI’s ability to perform its tasks. Obfuscation should be implemented in a way that protects sensitive information while still enabling the AI to effectively conduct its analysis and testing.
  3. Preventing Reverse Engineering: Obfuscation techniques help prevent reverse engineering of code by making it more difficult for malicious actors to decipher the original structure and intent of the code. This adds an additional layer of security, safeguarding intellectual property from potential threats.
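
To illustrate the first point above, here is a minimal, hypothetical sketch of identifier obfuscation using Python’s standard ast module. It is not Typemock’s implementation, just one way to rename identifiers while keeping the code’s structure analyzable; a production tool would also need to skip builtins and imported names.

```python
# Minimal identifier-obfuscation sketch: rename identifiers to generic labels
# while leaving the code's structure and logic intact (illustrative only).
import ast

class IdentifierObfuscator(ast.NodeTransformer):
    def __init__(self):
        self.aliases = {}  # original name -> generic label

    def _alias(self, name):
        if name not in self.aliases:
            self.aliases[name] = f"id_{len(self.aliases)}"
        return self.aliases[name]

    def visit_FunctionDef(self, node):
        node.name = self._alias(node.name)
        self.generic_visit(node)  # also rewrite arguments and body
        return node

    def visit_arg(self, node):
        node.arg = self._alias(node.arg)
        return node

    def visit_Name(self, node):
        node.id = self._alias(node.id)
        return node

source = """
def compute_risk_score(customer_balance, overdue_days):
    penalty = overdue_days * 0.1
    return customer_balance * penalty
"""

tree = IdentifierObfuscator().visit(ast.parse(source))
print(ast.unparse(tree))
# def id_0(id_1, id_2):
#     id_3 = id_2 * 0.1
#     return id_1 * id_3
```
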
The Future of Data Privacy and Security in AI-Driven Testing
Shifting Perspectives on Data Sharing

While concerns about IP leakage and data permanence are significant today, there is a growing shift in how people perceive data sharing. Just as people now share everything online, often too loosely in my opinion, there is a gradual acceptance of data sharing in AI-driven contexts, provided it is done securely and transparently.

  • Greater Awareness and Education: In the future, as people become more educated about the risks and benefits of AI, the fear surrounding data privacy may diminish. However, this will also require continued advancements in AI security measures to maintain trust.
  • Innovative Security Solutions: The evolution of AI technology will likely bring new security solutions that can better address concerns about data permanence and IP leakage. These solutions will help balance the benefits of AI-driven testing with the need for robust data protection.
Typemock’s Commitment to Data Privacy and Security

At Typemock, data privacy and security are top priorities. Typemock’s AI-driven testing tools are designed with robust security features to protect sensitive data at every stage of the testing process:

  • On-Premises Processing: Typemock offers AI-driven testing solutions that can be deployed on-premises, ensuring that your sensitive data remains within your secure environment.
  • Advanced Encryption and Control: Our tools utilize advanced encryption methods and strict access controls to safeguard your data at all times.
  • Code Obfuscation: Typemock supports techniques like code obfuscation to ensure that AI tools can analyze code structures without exposing sensitive IP.
  • Ongoing Innovation: We are continuously innovating to address the emerging challenges of AI-driven testing, including the development of new techniques for managing data permanence and preventing IP leakage.

Data privacy and security are paramount in AI-driven testing, where the risks of IP leakage, data permanence, and code exposure present significant challenges. By adopting best practices, leveraging on-premises AI processing, and using techniques like code obfuscation, organizations can effectively manage these risks. Typemock’s dedication to these principles ensures that their AI tools deliver both powerful testing capabilities and peace of mind.

Transition application code to images with Cloud Native Buildpacks
https://sdtimes.com/cloud/transition-application-code-to-images-with-cloud-native-buildpacks/ | Mon, 26 Aug 2024

Much of the conversation in the software industry is around developer experience. From new ways to measure productivity to reducing important but drudge work, organizations are looking to make life more joyful for developers.

One area that’s gaining more attention is the use of buildpacks to create apps for cloud-native environments. Though not a new concept – buildpacks have been around for about 15 years – they can ease the burden on developers by simply taking source code and turning it into fully functional apps.

A quick history, according to Ram Iyengar, chief evangelist at Cloud Foundry: Heroku brought up the concept of creating immutable objects from source code, regardless of programming language or platform, in 2010. Cloud Foundry (the open source project) was working to do much the same thing, but as open source. Pivotal was an early backer and developer of the Cloud Foundry project as a commercial tool, and both projects released a v2 in 2015. But when Pivotal was acquired by VMware in 2019, the Cloud Foundry Foundation was formed to shepherd the project, and that is now under the auspices of the Cloud Native Computing Foundation.

Pivotal’s path was to make containers out of the source code provided, while Heroku’s vision did not include containers. In the cloud native vs. non-cloud native debate, there exists a divide in which everything runs in containers, and where not everything runs in containers. So, Heroku and Pivotal/Cloud Foundry came together to create Cloud Native Buildpacks that would be compatible with the cloud native ecosystem, which, Iyengar said, meant that “it had to be open source, it had to adhere to the OCI specification, and it has to be ready to deploy on Kubernetes and make use of cloud native constructs.” 

The non-Kubernetes version 2 of buildpacks, Iyengar said, will continue to exist for the foreseeable future, while the “newer, shinier version of buildpacks” is the one for containers and Kubernetes.

Heroku went ahead with its closed source commercial implementation – which has since been open-sourced –  while Cloud Foundry Foundation in 2020 created Paketo buildpacks, which is open source and production-ready, Iyengar said.

All about the developer experience

Among the benefits of buildpacks, as we bring the narrative back around, is improving the developer experience. There are six or seven ways JavaScript developers can get this experience of having tooling produce a functional app from source code, but if you’re not using JavaScript, those tools are basically useless, Iyengar said. Paketo buildpacks enable developers to get the same build experience regardless of the source code language. 

“The kind of homogeneity that’s possible with buildpacks is phenomenal, and that’s really what I mean when I say developer experience,” Iyengar said. “It’s about allowing developers to bring any language or framework and providing them with the homogeneous and complete user interface in order to give them the best-in-class developer experience that is possible.”

Iyengar also pointed out that buildpacks can overcome automation hurdles that exist when using technologies such as Docker. “For a developer or software engineering team to maintain Docker files for local development and production, it can quickly become a big sort of development hell in creating these Docker files and maintaining them,” he said. “Buildpacks relieve users of having to write these meta files and maintain them.” He explained that with a Docker-based build process, writing one Dockerfile for your GitHub Actions and a different one for your pre-production machines means juggling different requirements: “It’s not the most optimal.” Buildpacks, he said, make the process uniform irrespective of the infrastructure you’re running on. 

The same is true for SBOMs – software bills of materials – and going forward, you’ll be able to choose between x86 images and ARM images and dictate in the build process what kind of image you want and make them all available, Iyengar said. “The focus on automation within the buildpacks community is huge.” Further, he noted, the project makes available production-ready buildpacks that are also compatible with CI/CD integrations such as CircleCI, GitLab, Tekton, and others.

Because buildpacks provide transparency into what’s in an image, and what images can and cannot contain, this is where buildpacks and AI cross. “Any AI that is able to read and parse buildpacks metadata can very conveniently look at what policies need to be set, and you can create rules like do not create or push containers to production if they contain a particular version of, say, Go that’s outdated or has a vulnerability,” Iyengar said. “And, if a new vulnerability gets detected, there can be an AI engine that basically turns through all of the buildpack layers and says, ‘these are the layers that are affected, let’s replace them immediately.’” Mitigation, he added, becomes a very trivial operation.
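
As a rough sketch of such a policy rule (illustrative only; it assumes SBOM data has already been parsed into package records and is not tied to any particular buildpacks or AI tooling):

```python
# Illustrative policy gate over an image's SBOM; package records are assumed
# to be parsed already, and the blocklist versions are hypothetical.
BLOCKED = {("go", "1.19.0")}  # e.g. an outdated or vulnerable toolchain version

def policy_violations(packages):
    """Return every package that matches the blocklist."""
    return [p for p in packages if (p["name"], p["version"]) in BLOCKED]

sbom = [
    {"name": "go", "version": "1.19.0"},
    {"name": "ca-certificates", "version": "2024.1"},
]

offenders = policy_violations(sbom)
if offenders:
    raise SystemExit(f"Refusing to push image; blocked packages: {offenders}")
```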

Iyengar stated that the focus within the buildpacks community has been to “plug a lot of gaps that the Docker-based ecosystem has left, but it’s really about knowing what’s inside an image when you’re deploying it.” Buildpacks, he said, make it easy to attest images and create the provenance they need in our modern, security-first cloud native landscape. Going forward, built-in SBOMs won’t just be a convenience; they’ll be a compliance requirement.

Prioritizing your developer experience roadmap
https://sdtimes.com/softwaredev/prioritizing-your-developer-experience-roadmap/ | Thu, 22 Aug 2024

If there’s one thing a platform engineering team doesn’t lack, it’s ideas. When your customers are your colleagues and friends, you have an ever-expanding wishlist to improve developer experience — you only have to ask! 

But as with any product team, you have limited resources and the need to balance both business and engineering objectives. So many stakeholders inform your developer experience roadmap that it can be difficult to prioritize.

Yes, you need a roadmap 

The biggest thing that distinguishes platform engineering from the top-down platforms of tech days of yore? Nobody has to use it. 

When you’re building any developer experience tooling — whether it’s an internal developer platform or portal or just a directory or better documentation — you have to build something that your engineers actually want to use. Your platform strategy — sometimes called a developer experience or DevEx strategy — should make developers’ lives so much easier that they need a really good reason to go off that golden path. 

Platform engineering requires a Platform-as-a-Product mindset, packed with user-centric design, prototypes and demo days. Your colleagues become your customers.

You not only need an internal product roadmap, you need to actively publish it within your organization. That way you are not only making commitments to solve your developer-customers’ problems, you are also closing the feedback loop, so your platform team learns early and often whether it’s building something developers actually want or need.

Know your stakeholders

Perhaps even more than when you are working with external users, a platform team, as stewards of the developer experience, is beholden to many stakeholders. 

As Sergiu Petean from Allianz Direct pointed out, a common anti-pattern for platform teams is only addressing the single stakeholder of the software engineer. The larger the enterprise, the more regulated your industry, the more stakeholders you have to consider from Day One. 

At the insurance giant, his team initially highlighted eight different stakeholders that all bring different demands:

  • End users
  • Quality
  • Security 
  • Software delivery 
  • Data
  • Sustainability
  • Incident management
  • Compliance 

Later they realized the platform has the capacity to interact with even more teams. 

Work to build a relationship with each of your technical and business stakeholders. Learn what part of the software development lifecycle matters most to them. And then bring them into your feedback loops that impact your platform engineering product roadmap.

Learn to prioritize

The more stakeholders you identify, the more feature requests you’ll receive. Yet, according to research by DX, the average team focused on developer experience is a fraction of the whole engineering org. That can seem overwhelming, but a platform engineering strategy is all about centralizing and solving frustrations at scale.

How can you possibly balance so many conflicting demands? HashiCorp’s platform engineering lead Michael Galloway recommends starting by looking for the pebble in developers’ shoes and removing it.

Removing the biggest points of friction will be an ongoing process, but, as he said, “A lot of times, engineers have been at a place for long enough where they’ve developed workarounds or become used to problems. It’s become a known experience. So we have to look at their workflow to see what the pebbles are and then remove them.”

Successful platform teams pair program with their customers regularly. It’s an effective way to build empathy.

Another question to ask when prioritizing: Is this affecting just one or two really vocal teams, or is it something systemic across the organization? You’re never going to please everyone, but your job in platform engineering is to build solutions that about 80% of your developers would be happy to adopt. 

Go for the low-hanging fruit

Another way that platform engineering differs from the behemoth legacy platforms is that it’s not a giant one-off implementation. In fact, Team Topologies has the concept of Thinnest Viable Platform. You start with something small but sturdy that you can build your platform strategy on top of.

For most companies, the biggest time-waster is finding things. Your first TVP is often either a directory of who owns what or better documentation. 

But don’t trust that instinct — ask first. Running a developer productivity survey will let you know what the biggest frustrations are for your developers. Ask targeted questions, not open-ended ones. You can get started inquiring about the 25 drivers of developer productivity — which socio-technically range from incident response and on-call experience through to requirements gathering and realistic deadlines. 

Mix this with informal conversations and pair programming with your devs to uncover big and small problems that need solutions.

As startup advisor Lenny Rachitsky suggests, you can rate each idea from 1 to 5 along an X axis of how impactful solving the problem would be and a Y axis of how much effort it would take. Just make sure anything that shows up on that “guesstimation graph” meets the requirement that it solves a problem for a majority of your developers — because a platform team should never work for just one dev team.
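
A minimal sketch of that guesstimation in code, with made-up ideas and scores:

```python
# Illustrative impact/effort triage; the ideas, scores, and majority flag
# are made up for the sketch.
ideas = [
    {"name": "golden-path service templates", "impact": 5, "effort": 3, "helps_majority": True},
    {"name": "one team's custom CLI wrapper", "impact": 4, "effort": 2, "helps_majority": False},
    {"name": "better service directory", "impact": 4, "effort": 1, "helps_majority": True},
]

# Keep only ideas that help most developers, then favor high impact per effort.
roadmap = sorted(
    (i for i in ideas if i["helps_majority"]),
    key=lambda i: i["impact"] / i["effort"],
    reverse=True,
)
print([i["name"] for i in roadmap])  # the directory wins: high impact, low effort
```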

Don’t forget to value quick fixes to help ease some pain. Following the agile practice of “walking the board,” prioritize features closest to Done. This allows for early wins to foster platform advocates, which can go a long way to increase adoption. 

Be open to changes

As Carta CTO Will Larson put it, “If something dire is happening at your company, then that’s the place to be engaged. Nothing else will matter if it doesn’t get addressed.” 

Your roadmap is just that, a map — there’s always more than one way to go. You need to be ready to deviate and change your priorities. This could be a global pandemic or an urgent vulnerability patch. It could be the need to adopt a new developer technology because it will help you work with a big-name integration partner. 

Especially in a well-regulated industry, your cybersecurity and compliance stakeholders can influence a lot of change. Just because platform engineering is opt-in doesn’t mean it can’t facilitate some mandatory changes too.

No matter what the reason, it’s important that you communicate any fluctuations to your internal customers, explaining why the roadmap priorities have changed.

Continuously measure

Engineering is a science, so we know you can’t improve what you don’t measure. This “metrics-backed intuition” as Diogo Correia, developer experience product manager at Pipedrive, calls it, fosters continuous improvement, not just for your platform strategy but for your developers too.

His team uses DX for quarterly developer surveys. It also developed and open sourced a one-hour developer experience workshop to help dev teams not only surface their own struggles but also set individual team focus areas for the next quarter. 

“It has an immediate impact in terms of the sentiment and priorities that they report in the next quarter,” he said. For example, a lot of developers complain about technical debt, but almost no devs want to spend time fixing it. This knowledge has fed into Pipedrive’s rotation of teams focusing on paying down that debt versus releasing new features.

“The workshops help by identifying the concrete services or libraries that any given team owns that most developers in the team are feeling pain with,” Correia continued. This helps the team prioritize and plan to refactor, “instead of suffering through it for years on end, as before.”

In the end, the most important measurement of any developer experience strategy is if your internal dev customers are adopting and using it. Work to tighten that internal feedback loop to make sure you are building what they want. Only then will you achieve measurable, long-term success.

Addressing AI bias in AI-driven software testing
https://sdtimes.com/test/addressing-ai-bias-in-ai-driven-software-testing/ | Wed, 21 Aug 2024

Artificial Intelligence (AI) has become a powerful tool in software testing, automating complex tasks, improving efficiency, and uncovering defects that might have been missed by traditional methods. However, despite its potential, AI is not without its challenges. One of the most significant concerns is AI bias, which can lead to false results and undermine the accuracy and reliability of software testing. 

AI bias occurs when an AI system produces skewed or prejudiced results due to erroneous assumptions or imbalances in the machine learning process. This bias can arise from various sources, including the quality of the data used for training, the design of the algorithms, or the way the AI system is integrated into the testing environment. When left unchecked, AI bias can lead to unfair and inaccurate testing outcomes, posing a significant concern in software development.

For instance, if an AI-driven testing tool is trained on a dataset that lacks diversity in test scenarios or over-represents certain conditions, the resulting model may perform well in those scenarios but fail to detect issues in others. This can result in a testing process that is not only incomplete but also misleading, as critical bugs or vulnerabilities might be missed because the AI wasn’t trained to recognize them.

RELATED: The evolution and future of AI-driven testing: Ensuring quality and addressing bias

To prevent AI bias from compromising the integrity of software testing, it’s crucial to detect and mitigate bias at every stage of the AI lifecycle. This includes using the right tools, validating the tests generated by AI, and managing the review process effectively.

Detecting and Mitigating Bias: Preventing the Creation of Wrong Tests

To ensure that AI-driven testing tools generate accurate and relevant tests, it’s essential to utilize tools that can detect and mitigate bias.

  • Code Coverage Analysis: Code coverage tools are critical for verifying that AI-generated tests cover all necessary parts of the codebase. This helps identify any areas that may be under-tested or over-tested due to bias in the AI’s training data. By ensuring comprehensive code coverage, these tools help mitigate the risk of AI bias leading to incomplete or skewed testing results.
  • Bias Detection Tools: Implementing specialized tools designed to detect bias in AI models is essential. These tools can analyze the patterns in test generation and identify any biases that could lead to the creation of incorrect tests. By flagging these biases early, organizations can adjust the AI’s training process to produce more balanced and accurate tests.
  • Feedback and Monitoring Systems: Continuous monitoring and feedback systems are vital for tracking the AI’s performance in generating tests. These systems allow testers to detect biased behavior as it occurs, providing an opportunity to correct course before the bias leads to significant issues. Regular feedback loops also enable AI models to learn from their mistakes and improve over time.
How to Test the Tests

Ensuring that the tests generated by AI are both effective and accurate is crucial for maintaining the integrity of the testing process. Here are methods to validate AI-generated tests.

  • Test Validation Frameworks: Using frameworks that can automatically validate AI-generated tests against known correct outcomes is essential. These frameworks help ensure that the tests are not only syntactically correct but also logically valid, preventing the AI from generating tests that pass formal checks but fail to identify real issues.
  • Error Injection Testing: Introducing controlled errors into the system and verifying that the AI-generated tests can detect these errors is an effective way to ensure robustness. If the AI misses injected errors, it may indicate a bias or flaw in the test generation process, prompting further investigation and correction. (A minimal sketch of this idea follows this list.)
  • Manual Spot Checks: Conducting random spot checks on a subset of AI-generated tests allows human testers to manually verify their accuracy and relevance. This step is crucial for catching potential issues that automated tools might miss, particularly in cases where AI bias could lead to subtle or context-specific errors.
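
Here is a minimal sketch of that error-injection idea; the “AI-generated suite” is a stand-in function with hard-coded cases, not a real tool:

```python
# Minimal sketch of error-injection testing: deliberately break a function
# and confirm the (here, stand-in) AI-generated test suite catches the fault.
def apply_discount(price, rate):
    return price * (1 - rate)

def apply_discount_faulty(price, rate):
    return price * (1 + rate)  # injected fault: sign flipped

def generated_suite_passes(fn):
    """Stand-in for running an AI-generated test suite against fn."""
    cases = [((100.0, 0.2), 80.0), ((50.0, 0.0), 50.0)]
    return all(abs(fn(*args) - want) < 1e-9 for args, want in cases)

assert generated_suite_passes(apply_discount)             # healthy code passes
assert not generated_suite_passes(apply_discount_faulty)  # fault must be caught
```
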
How Can Humans Review Thousands of Tests They Didn’t Write?

Reviewing a large number of AI-generated tests can be daunting for human testers, especially since they didn’t write these tests themselves. This process can feel similar to working with legacy code, where understanding the intent behind the tests is challenging. Here are strategies to manage this process effectively.

  • Clustering and Prioritization: AI tools can be used to cluster similar tests together and prioritize them based on risk or importance. This helps testers focus on the most critical tests first, making the review process more manageable. By tackling high-priority tests early, testers can ensure that major issues are addressed without getting bogged down in less critical tasks. (See the sketch after this list.)
  • Automated Review Tools: Leveraging automated review tools that can scan AI-generated tests for common errors or anomalies is another effective strategy. These tools can flag potential issues for human review, significantly reducing the workload on testers and allowing them to focus on areas that require more in-depth analysis.
  • Collaborative Review Platforms: Implementing collaborative platforms where multiple testers can work together to review and validate AI-generated tests is beneficial. This distributed approach makes the task more manageable and ensures thorough coverage, as different testers can bring diverse perspectives and expertise to the process.
  • Interactive Dashboards: Using interactive dashboards that provide insights and summaries of the AI-generated tests is a valuable strategy. These dashboards can highlight areas that require attention, allow testers to quickly navigate through the tests, and provide an overview of the AI’s performance. This visual approach helps testers identify patterns of bias or error that might not be immediately apparent in individual tests.
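As one way to picture the clustering step, the sketch below groups generated test files by textual similarity using TF-IDF and k-means from scikit-learn, so reviewers can begin with one exemplar per cluster. The file locations and cluster count are illustrative assumptions.

```python
# A minimal clustering sketch, assuming scikit-learn is installed and the
# generated tests live under tests/generated (a placeholder location).
from pathlib import Path
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

test_files = sorted(Path("tests/generated").glob("test_*.py"))
sources = [p.read_text() for p in test_files]

# Vectorize the test source on identifier-like tokens, then cluster.
vectors = TfidfVectorizer(token_pattern=r"[A-Za-z_]\w+").fit_transform(sources)
labels = KMeans(n_clusters=8, n_init=10, random_state=0).fit_predict(vectors)

# Review one exemplar per cluster first, then spot-check the rest.
seen = set()
for path, label in zip(test_files, labels):
    if label not in seen:
        seen.add(label)
        print(f"cluster {label}: start review with {path}")
```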

By employing these tools and strategies, your team can ensure that AI-driven test generation remains accurate and relevant while making the review process manageable for human testers. This approach helps maintain high standards of quality and efficiency in the testing process.

Ensuring Quality in AI-Driven Tests

To maintain the quality and integrity of AI-driven tests, it is crucial to adopt best practices that address both the technological and human aspects of the testing process.

  • Use Advanced Tools: Leverage tools like code coverage analysis and AI to identify and eliminate duplicate or unnecessary tests. This helps create a more efficient and effective testing process by focusing resources on the most critical and impactful tests.
  • Human-AI Collaboration: Foster an environment where human testers and AI tools work together, leveraging each other’s strengths. While AI excels at handling repetitive tasks and analyzing large datasets, human testers bring context, intuition, and judgment to the process. This collaboration ensures that the testing process is both thorough and nuanced.
  • Robust Security Measures: Implement strict security protocols to protect sensitive data, especially when using AI tools. Ensuring that the AI models and the data they process are secure is vital for maintaining trust in the AI-driven testing process.
  • Bias Monitoring and Mitigation: Regularly check for and address any biases in AI outputs to ensure fair and accurate testing results. This ongoing monitoring is essential for adapting to changes in the software or its environment and for maintaining the integrity of the AI-driven testing process over time. (A small distribution check is sketched after this list.)
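One concrete bias check is to compare where the generated tests land against where the code lives. The sketch below flags modules whose share of generated tests falls far below their share of the codebase; the directory layout, test-file naming convention, and 0.5 ratio are illustrative assumptions.

```python
# A minimal distribution check using only the standard library. It assumes
# source modules live in myapp/ and generated tests follow the
# test_<module>.py naming pattern (both placeholder conventions).
from collections import Counter
from pathlib import Path

src_lines = {p.stem: len(p.read_text().splitlines())
             for p in Path("myapp").glob("*.py")}
test_counts = Counter(p.stem.replace("test_", "", 1)
                      for p in Path("tests/generated").glob("test_*.py"))

total_lines = sum(src_lines.values()) or 1
total_tests = sum(test_counts.values()) or 1

for module, lines in src_lines.items():
    code_share = lines / total_lines
    test_share = test_counts.get(module, 0) / total_tests
    if code_share > 0 and test_share / code_share < 0.5:  # illustrative ratio
        print(f"possible bias: {module} is {code_share:.0%} of the code "
              f"but only {test_share:.0%} of the generated tests")
```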

Addressing AI bias in software testing is essential for ensuring that AI-driven tools produce accurate, fair, and reliable results. By understanding the sources of bias, recognizing the risks it poses, and implementing strategies to mitigate it, organizations can harness the full potential of AI in testing while maintaining the quality and integrity of their software. Ensuring the quality of data, conducting regular audits, and maintaining human oversight are key steps in this ongoing effort to create unbiased AI systems that enhance, rather than undermine, the testing process.

Learn more about transforming your testing with AI here

The post Addressing AI bias in AI-driven software testing appeared first on SD Times.

]]>
Validating names in databases with the help of Melissa’s global name verification service https://sdtimes.com/data/validating-names-in-databases-with-the-help-of-melissas-global-name-verification-service/ Mon, 19 Aug 2024 19:00:59 +0000 https://sdtimes.com/?p=55480 Companies that are collecting data need to ensure that the data is valid in order to actually make good use of it. And making sure they have the correct names in their database can help establish a good customer relationship by supporting a customer’s sense of identity.  Think back to times when you’ve signed up … continue reading

The post Validating names in databases with the help of Melissa’s global name verification service appeared first on SD Times.

]]>
Companies that are collecting data need to ensure that the data is valid in order to actually make good use of it. And making sure they have the correct names in their database can help establish a good customer relationship by supporting a customer’s sense of identity. 

Think back to times when you’ve signed up for a service and then received an automated email that says “dear x” instead of your name, or that lists your last name instead of your first. It’s easy to fill out a form incorrectly and thus have your information listed incorrectly in a company’s database. 

When situations like this happen and a company reaches out using the incorrect information, it can damage the brand’s reputation, which makes validating database names worthwhile. Validating names, however, isn’t an easy process. Unlike email validation, where an address has to follow a specific format, or address verification, where there is a finite set of valid addresses, the possibilities for names are seemingly endless. 

RELATED: Cleansing email lists will help preserve your sender reputation score

Just within the United States, names draw on a number of different cultures, the same name can be spelled in multiple ways, and unique characters can show up, such as the hyphen in hyphenated last names. The possibilities grow even further across countries and languages. 

To help companies validate the names in their lists, the data company Melissa offers a name verification service backed by a large, continuously maintained list of known names against which the names in your database can be checked, according to the company’s data quality analyst Tim Sidor.

“Names are not a distinct knowledge base set, where it’s static, and each country or organization has a distinct set of names or rules,” he said. “So names are very fluid, and the valid names are changing all the time.” 

The name verification service works globally too, by validating names in other countries using keywords or characters that are associated with specific regions. For example, in the U.S., Roy is probably a first name, but in France, it’s probably a surname, said Sidor. 

“We know that different languages have certain keywords that represent certain things,” he explained. “And certain extended characters are a kind of a hint to say, ‘Oh, it’s this language. So we had better parse that name that way.’ So all those things, taken in tandem, allow us to parse global names a little bit differently, depending on where they come from, where they originate.”

According to Sidor, because of the nature of names and the endless possibilities, Melissa’s list can’t possibly include every name that will ever come into existence. “Due to lack of standard naming laws and practices, as well as private companies’ willingness to accept endless name variations, there’s no error proof way of preventing new and never heard before names from entering modern lexicon,” he said. “Therefore, there are always going to be new valid names that are not recognized as such – being new or totally unique.”

However, just because a name isn’t on the list doesn’t mean it will be recognized as invalid. In those instances, the entry might not receive a “known name found” flag, but also won’t necessarily receive an “invalid” flag.

“Invalid” flags typically get raised when a name contains a vulgarity or is otherwise suspicious. The company maintains a list of known vulgarities in multiple languages so that it can flag them when they show up in the name field. 

Some privacy-minded individuals may not want to give their real name when filling out a web form, so they enter a fake one, like Mickey Mouse, Harry Potter, or Taylor Swift. Melissa maintains a list of these names as well, and a match might get an entry flagged as invalid, or at least as something to check. 

“Mickey Mouse is probably not valid, and that’s easy,” he said. “But Taylor Swift, there could be more than one Taylor Swift or the actual Taylor Swift, so you want to flag it as being suspicious and then maybe verify it with the address or take some other action to determine whether it’s real or not.”

And finally, the name verification service can filter out company names that have ended up in a name field. For instance, Melissa is the name of the company, but it’s also a very popular girl’s name. In that case, Sidor said, having the word “data” in the field as well would flag it. There are also many other indicators that an entry might be a company, such as keywords like “corporation” and “company,” for which Melissa maintains yet another list. 

“Companies, they’re valid contacts,” he said. “A lot of times they’re meant to be in your database, maybe in a separate table, but they’re valid contacts. You don’t necessarily want to get rid of them. You just want to use those keywords to identify them as such.”
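To make that flagging logic concrete, here is a deliberately simplified Python sketch of the kinds of checks described above. It is an illustration only, not Melissa’s actual service or API, and the tiny word lists stand in for the large, continuously updated lists Sidor describes.

```python
# A toy illustration of name flagging. Every list here is a tiny placeholder
# for the large, multi-language lists the article describes.
KNOWN_NAMES = {"melissa", "roy", "taylor"}
VULGARITIES = {"badword"}  # placeholder entries
CELEBRITY_NAMES = {"mickey mouse", "harry potter", "taylor swift"}
COMPANY_KEYWORDS = {"corporation", "company", "inc", "llc", "data"}

def flag_name(full_name: str) -> str:
    tokens = full_name.lower().split()
    if any(t in VULGARITIES for t in tokens):
        return "invalid"            # vulgarity detected
    if " ".join(tokens) in CELEBRITY_NAMES:
        return "suspicious"         # verify against address or other data
    if any(t in COMPANY_KEYWORDS for t in tokens):
        return "company"            # a valid contact, routed to its own table
    if any(t in KNOWN_NAMES for t in tokens):
        return "known name found"
    return "unrecognized"           # new or unique, but not flagged invalid

print(flag_name("Taylor Swift"))    # -> suspicious
print(flag_name("Melissa Data"))    # -> company
```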

To hear Sidor talk more about Melissa’s name verification service, watch Episode 2 of our Microwebinar series on Data Verification, and tune in on September 18th to learn about phone number verification, October 18th to learn about address verification, and November 14th to learn about electronic identity verification. And if you missed Episode 1 on email verification, watch it now at the same link.

The post Validating names in databases with the help of Melissa’s global name verification service appeared first on SD Times.

]]>
The evolution and future of AI-driven testing: Ensuring quality and addressing bias https://sdtimes.com/test/the-evolution-and-future-of-ai-driven-testing-ensuring-quality-and-addressing-bias/ Mon, 29 Jul 2024 14:33:39 +0000 https://sdtimes.com/?p=55282 Automated testing began as a way to alleviate the repetitive and time-consuming tasks associated with manual testing. Early tools focused on running predefined scripts to check for expected outcomes, significantly reducing human error and increasing test coverage. With advancements in AI, particularly in machine learning and natural language processing, testing tools have become more sophisticated. … continue reading

The post The evolution and future of AI-driven testing: Ensuring quality and addressing bias appeared first on SD Times.

]]>
Automated testing began as a way to alleviate the repetitive and time-consuming tasks associated with manual testing. Early tools focused on running predefined scripts to check for expected outcomes, significantly reducing human error and increasing test coverage.

With advancements in AI, particularly in machine learning and natural language processing, testing tools have become more sophisticated. AI-driven tools can now learn from previous tests, predict potential defects, and adapt to new testing environments with minimal human intervention. Typemock has been at the forefront of this evolution, continuously innovating to incorporate AI into its testing solutions.

RELATED: Addressing AI bias in AI-driven software testing

Typemock’s AI Enhancements

Typemock has developed AI-driven tools that significantly enhance efficiency, accuracy, and test coverage. By leveraging machine learning algorithms, these tools can automatically generate test cases, optimize testing processes, and identify potential issues before they become critical problems. This not only saves time but also ensures a higher level of software quality.

I believe AI in testing is not just about automation; it’s about intelligent automation. We harness the power of AI to enhance, not replace, the expertise of unit testers. 

Difference Between Automated Testing and AI-Driven Testing

Automated testing involves tools that execute pre-written test scripts automatically without human intervention during the test execution phase. These tools are designed to perform repetitive tasks, check for expected outcomes, and report any deviations. Automated testing improves efficiency but relies on pre-written tests.

AI-driven testing, on the other hand, involves the use of AI technologies to both create and execute tests. AI can analyze code, learn from previous test cases, generate new test scenarios, and adapt to changes in the application. This approach not only automates the execution but also the creation and optimization of tests, making the process more dynamic and intelligent.

While AI has the capability to generate numerous tests, many of these can be duplicates or unnecessary. With the right tooling, AI-driven testing tools can create only the essential tests and execute only those that need to be run. The danger of indiscriminately generating and running tests lies in the potential to create many redundant tests, which can waste time and resources. Typemock’s AI tools are designed to optimize test generation, ensuring efficiency and relevance in the testing process.
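One way such pruning can work is to hash a normalized form of each generated test so that structurally identical tests collapse to a single entry. The sketch below illustrates the idea in Python; it is not Typemock’s implementation, and the test directory is a placeholder.

```python
# A minimal duplicate-test detector. Two tests collapse to the same hash when
# they are identical up to formatting, comments, and the test's own name.
import ast
import copy
import hashlib
from pathlib import Path

def structure_hash(func: ast.FunctionDef) -> str:
    clone = copy.deepcopy(func)
    clone.name = "_"  # ignore the test's name; ast.dump already drops formatting
    return hashlib.sha256(ast.dump(clone).encode()).hexdigest()

seen: dict[str, str] = {}
for path in Path("tests/generated").glob("test_*.py"):  # placeholder location
    tree = ast.parse(path.read_text())
    for node in ast.walk(tree):
        if isinstance(node, ast.FunctionDef) and node.name.startswith("test"):
            key = structure_hash(node)
            if key in seen:
                print(f"duplicate: {path}::{node.name} matches {seen[key]}")
            else:
                seen[key] = f"{path}::{node.name}"
```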

While traditional automated testing tools run predefined tests, AI-driven testing tools go a step further by authoring those tests, continuously learning and adapting to provide more comprehensive and effective testing.

Addressing AI Bias in Testing

AI bias occurs when an AI system produces prejudiced results due to erroneous assumptions in the machine learning process. This can lead to unfair and inaccurate testing outcomes, which is a significant concern in software development. 

To ensure that AI-driven testing tools generate accurate and relevant tests, it is essential to utilize the right tools that can detect and mitigate bias:

  • Code Coverage Analysis: Use code coverage tools to verify that AI-generated tests cover all necessary parts of the codebase. This helps identify any areas that may be under-tested or over-tested due to bias.
  • Bias Detection Tools: Implement specialized tools designed to detect bias in AI models. These tools can analyze the patterns in test generation and identify any biases that could lead to the creation of incorrect tests.
  • Feedback and Monitoring Systems: Establish systems that allow continuous monitoring and feedback on the AI’s performance in generating tests. This helps in early detection of any biased behavior. (A minimal feedback-loop sketch follows this list.)
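As a small illustration of such a feedback loop, the sketch below replays a log of per-run verdicts for each generated test and flags tests whose verdicts keep flipping, which are natural candidates for regeneration. The JSONL log format and the flip threshold are illustrative assumptions.

```python
# A minimal feedback-loop sketch over a hypothetical JSONL log where each
# line looks like {"test": "tests/test_x.py::test_y", "passed": true}.
import json
from collections import defaultdict
from pathlib import Path

LOG = Path("ai_test_outcomes.jsonl")  # placeholder log written by CI runs

history = defaultdict(list)
for line in LOG.read_text().splitlines():
    record = json.loads(line)
    history[record["test"]].append(record["passed"])

for test, outcomes in history.items():
    flips = sum(a != b for a, b in zip(outcomes, outcomes[1:]))
    if flips >= 2:  # illustrative threshold for "unstable"
        print(f"review: {test} changed verdict {flips} times across runs")
```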

Ensuring that the tests generated by AI are effective and accurate is crucial. Here are methods to validate the AI-generated tests:

  • Test Validation Frameworks: Use frameworks that can automatically validate the AI-generated tests against known correct outcomes. These frameworks help ensure that the tests are not only syntactically correct but also logically valid.
  • Error Injection Testing: Introduce controlled errors into the system and verify that the AI-generated tests can detect these errors. This helps ensure the robustness and accuracy of the tests.
  • Manual Spot Checks: Conduct random spot checks on a subset of the AI-generated tests to manually verify their accuracy and relevance. This helps catch any potential issues that automated tools might miss.
How Can Humans Review Thousands of Tests They Didn’t Write?

Reviewing a large number of AI-generated tests can be daunting for human testers, making it feel similar to working with legacy code. Here are strategies to manage this process:

  • Clustering and Prioritization: Use AI tools to cluster similar tests together and prioritize them based on risk or importance. This helps testers focus on the most critical tests first, making the review process more manageable.
  • Automated Review Tools: Leverage automated review tools that can scan AI-generated tests for common errors or anomalies. These tools can flag potential issues for human review, reducing the workload on testers.
  • Collaborative Review Platforms: Implement collaborative platforms where multiple testers can work together to review and validate AI-generated tests. This distributed approach can make the task more manageable and ensure thorough coverage.
  • Interactive Dashboards: Use interactive dashboards that provide insights and summaries of the AI-generated tests. These dashboards can highlight areas that require attention and allow testers to quickly navigate through the tests.

By employing these tools and strategies, your team can ensure that AI-driven test generation remains accurate and relevant, while also making the review process manageable for human testers. This approach helps maintain high standards of quality and efficiency in the testing process.

Ensuring Quality in AI-Driven Tests

Some best practices for high-quality AI testing include:

  • Use Advanced Tools: Leverage tools like code coverage analysis and AI to identify and eliminate duplicate or unnecessary tests. This helps create a more efficient and effective testing process.
  • Human-AI Collaboration: Foster an environment where human testers and AI tools work together, leveraging each other’s strengths.
  • Robust Security Measures: Implement strict security protocols to protect sensitive data, especially when using AI tools.
  • Bias Monitoring and Mitigation: Regularly check for and address any biases in AI outputs to ensure fair testing results.

The key to high-quality AI-driven testing is not just in the technology, but in how we integrate it with human expertise and ethical practices.

The technology behind AI-driven testing is designed to shorten the time from idea to reality. This rapid development cycle allows for quicker innovation and deployment of software solutions.

The future will see self-healing tests and self-healing code. Self-healing tests can automatically detect and correct issues in test scripts, ensuring continuous and uninterrupted testing. Similarly, self-healing code can identify and fix bugs in real-time, reducing downtime and improving software reliability.

Increasing Complexity of Software

As we simplify the process of creating code, we paradoxically enable the development of more complex software. This increasing complexity requires new paradigms and tools, as current ones will not be sufficient. For example, the algorithms used in new software, particularly AI algorithms, might not be fully understood even by their developers, demanding innovative approaches to testing and fixing software.

This growing complexity will necessitate the development of new tools and methodologies to test and understand AI-driven applications. Ensuring these complex systems run as expected will be a significant focus of future testing innovations.

To address security and privacy concerns, future AI testing tools will increasingly run locally rather than relying on cloud-based solutions. This approach ensures that sensitive data and proprietary code remain secure and within the control of the organization, while still leveraging the powerful capabilities of AI.


You may also like…

Software testing’s chaotic conundrum: Navigating the Three-Body Problem of speed, quality, and cost

Report: How mobile testing strategies are embracing AI

The post The evolution and future of AI-driven testing: Ensuring quality and addressing bias appeared first on SD Times.

]]>