ML Archives - SD Times
https://sdtimes.com/tag/ml/

Pros and cons of 5 AI/ML workflow tools for data scientists today
https://sdtimes.com/ai/pros-and-cons-of-5-ai-ml-workflow-tools-for-data-scientists-today/ (Fri, 09 Aug 2024)

With businesses uncovering more and more use cases for artificial intelligence and machine learning, data scientists find themselves looking closely at their workflow. There are a myriad of moving pieces in AI and ML development, and they all must be managed with an eye on efficiency and flexible, strong functionality. The challenge now is to evaluate what tools provide which functionalities, and how various tools can be augmented with other solutions to support an end-to-end workflow. So let’s see what some of these leading tools can do.

DVC

DVC offers the capability to manage text, image, audio, and video files across the ML modeling workflow.

The pros: It’s open source, and it has solid data management capabilities. It offers custom dataset enrichment and bias removal. It also logs changes in the data quickly, at natural points during the workflow. Working from the command line feels fast, and DVC’s pipeline capabilities are language-agnostic.

The cons: DVC’s AI workflow capabilities are limited – there’s no deployment functionality or orchestration. While the pipeline design looks good in theory, it tends to break in practice. There’s no ability to set credentials for object storage as a configuration file, and there’s no UI – everything must be done through code.
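
For a feel of the versioned-data workflow described above, here is a minimal sketch of DVC’s Python API; the repository URL, file path, and tag are assumptions for illustration, and in practice the data would first have been tracked and pushed with the dvc command line (dvc add, dvc push).

    import dvc.api

    # Read a DVC-tracked file as it existed at a specific Git revision.
    # The repo, path, and rev below are hypothetical.
    data = dvc.api.read(
        "data/train.csv",
        repo="https://github.com/example/project",
        rev="v1.2",   # any Git ref: branch, tag, or commit
    )
    print(len(data), "characters of training data loaded")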

MLflow

MLflow is an open-source tool, built on an MLOps platform. 

The pros: Because it’s open source, it’s easy to set up, and requires only one install. It supports all ML libraries, languages, and code, including R. The platform is designed for end-to-end workflow support for modeling and generative AI tools. And its UI feels intuitive, as well as easy to understand and navigate. 

The cons: MLflow’s AI workflow capabilities are limited overall. There’s no orchestration functionality, limited data management, and limited deployment functionality. The user has to exercise diligence while organizing work and naming projects – the tool doesn’t support subfolders. It can track parameters, but doesn’t track all code changes – although Git commits can provide the means for workarounds. Users will often combine MLflow and DVC to force data change logging.
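
As a rough sketch of that tracking workflow (the experiment name, parameter, and metric values are purely illustrative), logging a run with MLflow’s Python API looks like this:

    import mlflow

    mlflow.set_experiment("churn-model")             # creates or reuses the experiment

    with mlflow.start_run():
        mlflow.log_param("learning_rate", 0.01)      # hyperparameters for this run
        mlflow.log_metric("val_accuracy", 0.92)      # metrics, loggable per step
        mlflow.log_artifact("model_card.md")         # any file (assumed to exist)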

Weights & Biases

Weights & Biases is a solution primarily used for MLOPs. The company recently added a solution for developing generative AI tools. 

The pros: Weights & Biases offers automated tracking, versioning, and visualization with minimal code. As an experiment management tool, it does excellent work. Its interactive visualizations make experiment analysis easy. Collaboration functions allow teams to efficiently share experiments and collect feedback for improving future experiments. And it offers strong model registry management, with dashboards for model monitoring and the ability to reproduce any model checkpoint. 

The cons: Weights & Biases is not open source. There are no pipeline capabilities within its own platform – users will need to turn to PyTorch and Kubernetes for that. Its AI workflow capabilities, including orchestration and scheduling functions, are quite limited. While Weights & Biases can log all code and code changes, that function can simultaneously create unnecessary security risks and drive up the cost of storage. Weights & Biases lacks the abilities to manage compute resources at a granular level. For granular tasks, users need to augment it with other tools or systems.

Slurm

Slurm promises workflow management and optimization at scale. 

The pros: Slurm is an open source solution, with a robust and highly scalable scheduling tool for large computing clusters and high-performance computing (HPC) environments. It’s designed to optimize compute resources for resource-intensive AI, HPC, and HTC (High Throughput Computing) tasks. And it delivers real-time reports on job profiling, budgets, and power consumption for resources needed by multiple users. It also comes with customer support for guidance and troubleshooting. 

The cons: Scheduling is the only piece of AI workflow that Slurm solves. It requires a significant amount of Bash scripting to build automations or pipelines. It can’t boot up different environments for each job, and can’t verify all data connections and drivers are valid. There’s no visibility into Slurm clusters in progress. Furthermore, its scalability comes at the cost of user control over resource allocation. Jobs that exceed memory quotas or simply take too long are killed with no advance warning.  
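
Because Slurm only schedules work, teams typically glue it together with scripts. A hedged sketch of that pattern, with hypothetical resource values and training command (sbatch and the #SBATCH directives are standard Slurm, the rest is illustrative):

    import subprocess
    from pathlib import Path

    # Write a batch script with Slurm resource directives, then submit it.
    job_script = """#!/bin/bash
    #SBATCH --job-name=train-model
    #SBATCH --time=02:00:00
    #SBATCH --mem=16G
    #SBATCH --gres=gpu:1
    python train.py
    """

    Path("train_job.sh").write_text(job_script)
    result = subprocess.run(["sbatch", "train_job.sh"], capture_output=True, text=True)
    print(result.stdout.strip())   # e.g. "Submitted batch job 12345"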

ClearML  

ClearML offers scalability and efficiency across the entire AI workflow, on a single open source platform. 

The pros: ClearML’s platform is built to provide end-to-end workflow solutions for GenAI, LLMOps and MLOps at scale. For a solution to truly be called “end-to-end,” it must be built to support workflow for a wide range of businesses with different needs. It must be able to replace multiple stand-alone tools used for AI/ML, but still allow developers to customize its functionality by adding additional tools of their choice, which ClearML does.

ClearML also offers out-of-the-box orchestration to support scheduling, queues, and GPU management. To develop and optimize AI and ML models within ClearML, only two lines of code are required. Like some of the other leading workflow solutions, ClearML is open source. Unlike some of the others, ClearML creates an audit trail of changes, automatically tracking elements data scientists rarely think about – config, settings, etc. – and offering comparisons. Its dataset management functionality connects seamlessly with experiment management. The platform also enables organized, detailed data management, permissions and role-based access control, and sub-directories for sub-experiments, making oversight more efficient.
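
Those two lines of code are the import and Task.init call; a minimal sketch (project and parameter names are illustrative) looks like this:

    from clearml import Task

    # These two lines attach auto-logging to the rest of the script.
    task = Task.init(project_name="churn-model", task_name="baseline-experiment")

    # Optionally connect a parameter dict so changes are tracked and comparable.
    params = task.connect({"learning_rate": 0.01, "epochs": 10})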

One important advantage ClearML brings to data teams is its security measures, which are built into the platform. Security is no place to slack, especially while optimizing workflow to manage larger volumes of sensitive data. It’s crucial for developers to trust their data is private and secure, while accessible to those on the data team who need it.

The cons: While being designed by developers, for developers, has its advantages, ClearML’s model deployment is done through code rather than a UI. Naming conventions for tracking and updating data can be inconsistent across the platform. For instance, the user will “report” parameters and metrics, but “register” or “update” a model. And it does not support R, only Python.

In conclusion, the field of AI/ML workflow solutions is a crowded one, and it’s only going to grow from here. Data scientists should take the time today to learn about what’s available to them, given their teams’ specific needs and resources.


The evolution and future of AI-driven testing: Ensuring quality and addressing bias
https://sdtimes.com/test/the-evolution-and-future-of-ai-driven-testing-ensuring-quality-and-addressing-bias/ (Mon, 29 Jul 2024)

Automated testing began as a way to alleviate the repetitive and time-consuming tasks associated with manual testing. Early tools focused on running predefined scripts to check for expected outcomes, significantly reducing human error and increasing test coverage.

With advancements in AI, particularly in machine learning and natural language processing, testing tools have become more sophisticated. AI-driven tools can now learn from previous tests, predict potential defects, and adapt to new testing environments with minimal human intervention. Typemock has been at the forefront of this evolution, continuously innovating to incorporate AI into its testing solutions.

RELATED: Addressing AI bias in AI-driven software testing

Typemock’s AI Enhancements

Typemock has developed AI-driven tools that significantly enhance efficiency, accuracy, and test coverage. By leveraging machine learning algorithms, these tools can automatically generate test cases, optimize testing processes, and identify potential issues before they become critical problems. This not only saves time but also ensures a higher level of software quality.

I believe AI in testing is not just about automation; it’s about intelligent automation. We harness the power of AI to enhance, not replace, the expertise of unit testers. 

Difference Between Automated Testing and AI-Driven Testing

Automated testing involves tools that execute pre-written test scripts automatically without human intervention during the test execution phase. These tools are designed to perform repetitive tasks, check for expected outcomes, and report any deviations. Automated testing improves efficiency but relies on pre-written tests.

AI-driven testing, on the other hand, involves the use of AI technologies to both create and execute tests. AI can analyze code, learn from previous test cases, generate new test scenarios, and adapt to changes in the application. This approach not only automates the execution but also the creation and optimization of tests, making the process more dynamic and intelligent.

While AI has the capability to generate numerous tests, many of these can be duplicates or unnecessary. With the right tooling, AI-driven testing tools can create only the essential tests and execute only those that need to be run. The danger of indiscriminately generating and running tests lies in the potential to create many redundant tests, which can waste time and resources. Typemock’s AI tools are designed to optimize test generation, ensuring efficiency and relevance in the testing process.

While traditional automated testing tools run predefined tests, AI-driven testing tools go a step further by authoring those tests, continuously learning and adapting to provide more comprehensive and effective testing.

Addressing AI Bias in Testing

AI bias occurs when an AI system produces prejudiced results due to erroneous assumptions in the machine learning process. This can lead to unfair and inaccurate testing outcomes, which is a significant concern in software development. 

To ensure that AI-driven testing tools generate accurate and relevant tests, it is essential to utilize the right tools that can detect and mitigate bias:

  • Code Coverage Analysis: Use code coverage tools to verify that AI-generated tests cover all necessary parts of the codebase. This helps identify any areas that may be under-tested or over-tested due to bias.
  • Bias Detection Tools: Implement specialized tools designed to detect bias in AI models. These tools can analyze the patterns in test generation and identify any biases that could lead to the creation of incorrect tests.
  • Feedback and Monitoring Systems: Establish systems that allow continuous monitoring and feedback on the AI’s performance in generating tests. This helps in early detection of any biased behavior.

Ensuring that the tests generated by AI are effective and accurate is crucial. Here are methods to validate the AI-generated tests:

  • Test Validation Frameworks: Use frameworks that can automatically validate the AI-generated tests against known correct outcomes. These frameworks help ensure that the tests are not only syntactically correct but also logically valid.
  • Error Injection Testing: Introduce controlled errors into the system and verify that the AI-generated tests can detect these errors. This helps ensure the robustness and accuracy of the tests (a minimal sketch of this idea follows this list).
  • Manual Spot Checks: Conduct random spot checks on a subset of the AI-generated tests to manually verify their accuracy and relevance. This helps catch any potential issues that automated tools might miss.
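
To make the error-injection idea concrete, here is a minimal, hypothetical sketch: it temporarily flips an operator in a module under test (calculator.py is assumed for illustration) and checks that the generated suite, run with pytest, catches the injected fault.

    import pathlib
    import shutil
    import subprocess

    SOURCE = pathlib.Path("calculator.py")     # module under test (hypothetical)
    BACKUP = pathlib.Path("calculator.py.bak")

    def suite_passes() -> bool:
        """Run the (AI-generated) test suite and report whether it passes."""
        result = subprocess.run(["pytest", "-q"], capture_output=True, text=True)
        return result.returncode == 0

    shutil.copy(SOURCE, BACKUP)
    try:
        # Inject a controlled error: flip an operator in the code under test.
        code = SOURCE.read_text()
        SOURCE.write_text(code.replace("a + b", "a - b", 1))
        assert not suite_passes(), "AI-generated tests did not detect the injected error"
    finally:
        shutil.copy(BACKUP, SOURCE)            # always restore the original code
        BACKUP.unlink()
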
How Can Humans Review Thousands of Tests They Didn’t Write?

Reviewing a large number of AI-generated tests can be daunting for human testers, making it feel similar to working with legacy code. Here are strategies to manage this process:

  • Clustering and Prioritization: Use AI tools to cluster similar tests together and prioritize them based on risk or importance. This helps testers focus on the most critical tests first, making the review process more manageable.
  • Automated Review Tools: Leverage automated review tools that can scan AI-generated tests for common errors or anomalies. These tools can flag potential issues for human review, reducing the workload on testers.
  • Collaborative Review Platforms: Implement collaborative platforms where multiple testers can work together to review and validate AI-generated tests. This distributed approach can make the task more manageable and ensure thorough coverage.
  • Interactive Dashboards: Use interactive dashboards that provide insights and summaries of the AI-generated tests. These dashboards can highlight areas that require attention and allow testers to quickly navigate through the tests.

By employing these tools and strategies, your team can ensure that AI-driven test generation remains accurate and relevant, while also making the review process manageable for human testers. This approach helps maintain high standards of quality and efficiency in the testing process.

Ensuring Quality in AI-Driven Tests

Some best practices for high-quality AI testing include:

  • Use Advanced Tools: Leverage tools like code coverage analysis and AI to identify and eliminate duplicate or unnecessary tests. This helps create a more efficient and effective testing process.
  • Human-AI Collaboration: Foster an environment where human testers and AI tools work together, leveraging each other’s strengths.
  • Robust Security Measures: Implement strict security protocols to protect sensitive data, especially when using AI tools.
  • Bias Monitoring and Mitigation: Regularly check for and address any biases in AI outputs to ensure fair testing results.

The key to high-quality AI-driven testing is not just in the technology, but in how we integrate it with human expertise and ethical practices.

The technology behind AI-driven testing is designed to shorten the time from idea to reality. This rapid development cycle allows for quicker innovation and deployment of software solutions.

The future will see self-healing tests and self-healing code. Self-healing tests can automatically detect and correct issues in test scripts, ensuring continuous and uninterrupted testing. Similarly, self-healing code can identify and fix bugs in real-time, reducing downtime and improving software reliability.

Increasing Complexity of Software

As we manage to simplify the process of creating code, it paradoxically leads to the development of more complex software. This increasing complexity requires new paradigms and tools, as current ones will not be sufficient. For example, the algorithms used in new software, particularly AI algorithms, might not be fully understood even by their developers. This will necessitate innovative approaches to testing and fixing software.

This growing complexity will necessitate the development of new tools and methodologies to test and understand AI-driven applications. Ensuring these complex systems run as expected will be a significant focus of future testing innovations.

To address security and privacy concerns, future AI testing tools will increasingly run locally rather than relying on cloud-based solutions. This approach ensures that sensitive data and proprietary code remain secure and within the control of the organization, while still leveraging the powerful capabilities of AI.


JFrog announces partnership with AWS to streamline secure ML model deployment
https://sdtimes.com/jfrog/jfrog-announces-partnership-with-aws-to-streamline-secure-ml-model-deployment/ (Wed, 17 Jan 2024)

JFrog introduced a new integration between JFrog Artifactory and Amazon SageMaker to streamline the process of building, training, and deploying machine learning (ML) models. This integration will allow companies to manage their ML models with the same efficiency and security as other software components in a DevSecOps workflow. 

In the new integration, ML models are immutable, traceable, secure, and validated. Additionally, JFrog has enhanced its ML Model management solution with new versioning capabilities, ensuring that compliance and security are integral parts of the ML model development process.

“As more companies begin managing big data in the cloud, DevOps team leaders are asking how they can scale data science and ML capabilities to accelerate software delivery without introducing risk and complexity,” said Kelly Hartman, SVP of global channels and alliances at JFrog. “The combination of Artifactory and Amazon SageMaker creates a single source of truth that indoctrinates DevSecOps best practices to ML model development in the cloud – delivering flexibility, speed, security, and peace of mind – breaking into a new frontier of MLSecOps.”

A Forrester survey found that half of the data decision-makers see the application of governance policies within AI/ML as a major challenge for its widespread use, and 45% view data and model security as a key issue. 

JFrog’s integration with Amazon SageMaker addresses these concerns by applying DevSecOps best practices to ML model management. This allows developers and data scientists to enhance and speed up the development of ML projects while ensuring enterprise-grade security and compliance with regulatory and organizational standards, JFrog explained.

JFrog has also introduced new versioning capabilities in its ML Model Management solution, complementing its Amazon SageMaker integration. These capabilities integrate model development more seamlessly into an organization’s existing DevSecOps workflow. According to JFrog, this enhancement significantly increases transparency regarding each version of the model.

Mendix Adds New AI and Machine Learning Capabilities to its Market and Technology-Leading Enterprise Low-Code Platform
https://sdtimes.com/low-code/51511/ (Thu, 22 Jun 2023)

Mendix, a Siemens business and global leader in modern enterprise application development, today outlined powerful new and robust AI and machine learning capabilities, including innovative context-aware AI developer tools, which will all be available upon the release of Mendix 10, to be announced during a live streaming event on June 27th.

The new AI and Machine Learning enhancements reinforce the status of Mendix’s low-code platform as the de facto standard for building smart business applications and solutions. Mendix 10 features greatly expanded AI capabilities in two major areas. First, Mendix 10 empowers the enterprise to seamlessly integrate AI use cases with low-code applications using Mendix’s new Machine Learning Kit. Secondly, the platform greatly expands the scope and functionality of AI-enabled application development.

Infusing low-code applications with AI enablement via Mendix’s Machine Learning Kit 

Addressing an urgent market need to integrate and deploy AI into applications, Mendix 10 features a new Machine Learning (ML) Kit that empowers enterprises to build solutions incorporating custom AI models within applications using the developer’s desired AI framework and language. These include pretrained models built with PyTorch, Caffe2, Cognitive Toolkit, and other common AI frameworks that have adopted the Open Neural Network Exchange (ONNX) standard. ONNX-based models can be easily imported into Mendix’s integrated development environment (IDE) and woven into a Mendix application. In doing so, this offers support for various inference patterns, and pre- and post-processing logic. The Mendix runtime has been enhanced to support seamless execution of ONNX-based models, enabling the ML model to run in the same environment as the application.

The ease of packaging and deploying these pre-trained models — whether open sourced or developed internally — within Mendix applications brings the low-code experience of speed, scalability, superior UI, and accelerated time-to-market to enterprises seeking to harness AI technology for business value and ROI.

By eliminating time-consuming tasks of manual integration, the ML Kit can reduce AI deployments from weeks to hours. The lower latency of the built-in integration of embedded AI models versus API-based integration drives the superior performance of AI-enhanced applications as the ML model runs in the same container as the application. Also, embedded AI model deployment enables the robust continuity of AI services when used offline, on-edge, or in IoT uses. Finally, in-application deployment of ML models eliminates the need to upload enterprise data or IP to third-party systems outside of the Mendix application landscape, thus providing another layer of security.

The Mendix ML Kit is based on Open Neural Network Exchange (ONNX), an open-source framework created in 2017 to enable framework interoperability. ML Kit provides access to dozens of pre-trained, out-of-the-box machine learning models from the ONNX Model Zoo that are fully customizable.
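
Outside a low-code platform, the same ONNX standard can be exercised directly. A minimal sketch with the onnxruntime Python package follows; the model file and input shape are assumptions for illustration (any ONNX Model Zoo export would do).

    import numpy as np
    import onnxruntime as ort

    session = ort.InferenceSession("model.onnx")       # hypothetical exported model
    input_name = session.get_inputs()[0].name
    sample = np.random.rand(1, 3, 224, 224).astype(np.float32)   # assumed input shape

    outputs = session.run(None, {input_name: sample})  # run inference locally
    print(outputs[0].shape)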

“Enterprises with sophisticated machine learning capabilities can easily incorporate their models into Mendix applications using the ML Kit,” said Amir Piltan, Mendix’s senior product manager for AI. “But for those earlier in the adoption curve, it is not necessary for enterprises to build models from scratch. They can start with the ONNX Model Zoo, fine-tune the model for specific use cases, and keep their data and AI model secure, as it never leaves their Mendix ecosystem. This makes AI deployment easier from an operational, commercial, and governance standpoint.”

Empowering the full spectrum of developers with real-time AI assistance 

A second area of enhancements targeting developers features the new Mendix Assist Best Practice bot that provides a virtual AI-enabled “co-developer” that inspects applications in real-time to implement Mendix software development best practices. The Data Validation bot helps developers build validation logic in an automated way using pre-built expressions. These platform upgrades to the Mendix Assist bot family put the power of software development into the hands of a broad spectrum of developers, e.g. enabling business technologists to create solutions with AI assistance to ensure the highest level of quality.

Mendix’s new bots also serve as valuable resources for skilled developers, helping to ensure that their applications conform to Mendix development best practices by identifying development anti-patterns, providing their location, and guiding developers on how to address and resolve them.

At their core, the new AI-driven bots are designed to boost the productivity and efficiency of Mendix developers across a range of skill sets while optimizing the performance and quality of Mendix applications.

Uniting low-code and AI for turbocharged solutions

“We believe AI tools and low-code development are a natural fit to build better software faster,” said Hans de Visser, Mendix’s chief product officer. “Enterprises using low-code will be able to extract more value from AI in an efficient way using the new features of the Mendix 10 platform.”

De Visser added, “Our next step will be the introduction of “Mendix Chat,” a chat bot in the Mendix IDE that will guide developers on how to apply certain concepts or patterns. We are currently training a large language model based on sources drawn from Mendix Forum, Mendix documentation, and our support system. Next, we will bring generative AI into our DSLs and generate models and model elements based on natural language input. This means app developers and business domain experts will be able to use free text — a user story — and from that, generate application models.”

Clearing the path for in-platform AI integration

Despite the current popularity of a new generation of smart applications, companies and analysts are finding significant barriers that prevent enterprises from leveraging the promised ROI of AI-enhanced business solutions. According to Gartner, more than 50% of CIOs have shelved successful AI-focused pilot programs due to production-oriented obstacles, including cost, complexity, time constraints, and talent shortages.

“We have applied the core principles of low-code abstraction and automation for customers seeking a connected landscape to embed their machine learning models into an application,” said Amir Piltan, Mendix’s senior product manager for AI. “Mendix is the first platform that enables developers to easily drag and drop ML models into the application’s logic and deploy it without the need to use an outside service.”

Piltan adds, “The combined use of Mendix Assist bots and the ML Kit will boost developer productivity across the entire lifecycle of software development, enabling them to build smart apps in a smart way. With Mendix 10, enterprises are empowered to meet ever-changing market demands and deliver innovation quickly.”

About Mendix

In a digital-first world, customers want their every need anticipated, employees want better tools to do their jobs, and enterprises know that sweeping digital transformation is the key to survival and success. Mendix, the low-code engine of the Siemens Xcelerator platform, is quickly becoming the application development platform of choice to drive the enterprise digital landscape. Mendix’s industry-leading low-code platform, dedicated partner network, and extensive marketplace support advanced technology solutions that boost engagement, streamline operations, and relieve IT logjams. Built on the pillars of abstraction, automation, cloud, and collaboration, Mendix dramatically increases developer productivity and engages business technologists to create apps guided by their particular domain expertise. Mendix empowers enterprises to build apps faster than ever; catalyzes meaningful collaboration between IT and business experts; and maintains IT control of the entire application landscape. Consistently recognized as a leader and visionary by leading industry analysts, the platform is cloud-native, open, extensible, agile, and proven. From artificial intelligence and augmented reality to intelligent automation and native mobile, Mendix and Siemens Xcelerator are the backbone of digital-first enterprises. The Mendix low-code platform is used by more than 4,000 enterprises in 46 countries and has an active community of more than 300,000 developers who have created over 200,000 applications.

Innovation will transform the software engineering life cycle
https://sdtimes.com/softwaredev/innovation-will-transform-the-software-engineering-life-cycle/ (Tue, 13 Jun 2023)

Innovation is essential for software engineering leaders to circumvent competition and create an attractive technology landscape for users and developers. Innovation keeps processes, tools and outcomes fresh and productive. 

However, software engineering teams often experience burnout due to the demand for innovation and have little energy to innovate their own processes and practices. Software engineering leaders can introduce innovation with new ways of working. 

Use AutoML to Reduce External Dependencies and Increase Innovation

Data science skills are not abundant within software engineering teams. Software engineering leaders are pressed to implement innovative machine learning (ML) algorithms into their applications for intelligent and predictive purposes. AutoML services allow developers without significant data science skills to build purpose-specific ML. Gartner predicts that by 2027, up to 75% of enterprise software engineering teams will use autoML techniques.

AutoML simplifies the current challenges of software engineering leaders and their teams from the creation of models to model life cycle management. As software engineering leaders solve their data science talent constraint by using autoML services, they must also ensure applications are using artificial intelligence (AI) responsibly. Responsible AI accounts for concepts such as bias mitigation, explainability and transparency.
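
For a feel of what that abstraction looks like in practice, here is a hedged sketch using FLAML, one of several open-source autoML libraries (the dataset and time budget are illustrative):

    from flaml import AutoML
    from sklearn.datasets import load_iris
    from sklearn.model_selection import train_test_split

    X, y = load_iris(return_X_y=True)
    X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

    automl = AutoML()
    # The library searches over models and hyperparameters within the time budget.
    automl.fit(X_train, y_train, task="classification", time_budget=60)
    print(automl.predict(X_test)[:5])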

Software engineering leaders must budget time and resources to train their developers in areas of model life cycle management, such as model validation, deployment, operations and monitoring. Establish a community to educate on responsible AI and governance, and to monitor deployed models for ethical behavior. 

Pilot ML-Powered Coding Assistants

Code generation products based on foundation models, such as large language models, are able to generate complex and longer suggestions, resulting in a significant increase in developer productivity. 

Code completion tools have become essential for developers to handle code complexity, especially in modern integrated development environments. By 2027, 50% of developers will use ML-powered coding tools, up from less than 5% today.

It is important to note that rule-based engines are not able to keep pace with the rapid growth of enterprise code and open-source code dependencies. New challenges around productivity, quality of the generated code, intellectual property attribution and bias in generated snippets are emerging. Software engineering leaders should define a strategy for these powerful tools and develop a plan to mitigate challenges as they arise. Foster a community of practice to master the new skill of crafting prompts using a combination of natural language and coding practices to figure out how to optimize code generation with minimal effort.

Evaluate How AI-Generated Design Improves User Outcomes

Generative design uses AI, ML and natural language processing (NLP) technologies to automatically generate user flows, screen designs and content for digital products. AI-generated design gives designers the opportunity to focus on solving problems for users, while AI tools produce intuitive, accessible software designs. This approach also allows software engineering leaders to move quickly and deliver innovative features. 

Generative design AI reduces the human effort needed for design exploration and final product design, allowing team members to focus on user research, product strategy and solution evaluation. By 2027, generative design will automate 70% of the design effort for new web and mobile apps. 

As early-stage products powered by generative design AI are growing, software engineering leaders should be building products that are ready to leverage this design sooner rather than later. Products based on popular design systems, such as platform-based and open-source design systems, will be able to use generative design AI sooner than custom product designs. 

Create a Vision for Digital Immunity Across the Software Delivery Life Cycle

Software engineering leaders struggle to plan for all eventualities of how modern, highly distributed software systems may fail, resulting in an inability to quickly remediate software defects and avoid impact on users. A digital immune system combines practices and technologies from observability, AI-augmented testing, chaos engineering, autoremediation, site reliability engineering and software supply chain security to increase the resilience of products, services and systems. 

By 2027, organizations that invest in building digital immunity will increase customer satisfaction by decreasing downtime by 80%. Prioritizing digital immunity activities will not only prepare organizations to mitigate potential risks, but also help them use failures as learning opportunities. 

Software engineering leaders need to provide clear guidance to teams defining how to prioritize digital immunity efforts and investments as part of value stream delivery. Accelerate response to critical business needs by improving developer experience and modernizing inefficient development, testing and security practices.  

These ways of working will help organizations improve the productivity and experience of users and engineers alike. Remember, innovation is a key part of keeping processes productive. Use these ways of working to improve the software life cycle from design, coding and testing, to the actual product-led experiences themselves.

AI in API and UI software test automation
https://sdtimes.com/test/ai-in-api-and-ui-software-test-automation/ (Mon, 06 Mar 2023)

Artificial intelligence is one of the digital marketplace’s most overused buzzwords. The term “AI” conjures up images of Alexa or Siri, computer chess opponents, and self-driving cars. 

AI can help humans in a variety of ways, including reducing errors and automating repetitive tasks. Software test automation tools are maturing and have incorporated AI and machine learning (ML) technology. The key point that separates the hype of AI from reality is that AI is not magic, nor the silver bullet promised with every new generation of tools. However, AI and ML do offer impressive enhancements to software testing tools.

More Software, More Releases

Software test automation is increasing in demand just as the worldwide appetite for software continues to surge and the need for developers grows. A recent report by Statista corroborates this expectation, projecting that the global developer population will grow from 24.5 million in 2020 to 28.7 million by 2024.

Since testing and development resources are finite, there’s a need to make testing more efficient while increasing coverage – doing more with the same resources. Focusing testing on exactly what needs to be validated after each code change is critical to accelerating testing, enabling continuous testing, and meeting delivery goals.

AI and ML play a key role in providing the data needed by test automation tools to focus testing while removing many of the tedious, error-prone, and mundane tasks. Applied across the workflow, they help teams:

  • Improve static analysis adoption.
  • Improve unit test creation.
  • Reduce test maintenance.
  • Reduce test execution.
  • Increase API test automation.
  • Improve UI test automation.

Real Examples

Let’s look at some real-life examples of what happens when you apply AI and ML technology to software testing.

Improve Unit Testing Coverage and Efficiency

Creating unit tests is a difficult task since it can be time-consuming to create unique tests that fully exercise a unit. One way to alleviate this is by making it easier to create stubs and mocks with assisted test creation for better isolation of the code under test. AI can assist by analyzing the unit under test to determine its dependencies on other classes, then suggesting mocks for them to create more isolated tests.
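
A minimal sketch of the kind of isolation such assistance produces, using Python’s unittest.mock; the OrderService class and its payment gateway dependency are hypothetical examples:

    from unittest.mock import MagicMock

    class OrderService:                      # unit under test (hypothetical)
        def __init__(self, gateway):
            self.gateway = gateway

        def place_order(self, amount):
            return "ok" if self.gateway.charge(amount) else "declined"

    def test_place_order_charges_gateway():
        gateway = MagicMock()
        gateway.charge.return_value = True   # stub the dependency's behavior
        service = OrderService(gateway)
        assert service.place_order(42) == "ok"
        gateway.charge.assert_called_once_with(42)   # auto-generated-style assertion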

The capabilities of AI in producing tests from code are impressive. However, it’s up to the developers to continuously invest in and build their own tests. Again, using AI test creation assistance, developers can:

  • Extend code coverage through clones and mutations.
  • Create the mocks.
  • Auto-generate assertions.

Improve API Testing

Improving API testing has traditionally relied on the expertise and motivation of the development team because APIs are often outside the realm of QA. Moreover, APIs are sometimes poorly documented, which makes creating tests for them difficult and time-consuming.

When it comes to API testing, AI and ML aim to accomplish the following:

  • Increase functional coverage with API and service layer testing.
  • Make it easier to automate and quicker to execute.
  • Reuse the results for load and performance testing.

This technology creates API tests by analyzing the traffic observed and recorded during manual UI tests. It then creates a series of API calls that are collected into scenarios and represent the underlying interface calls made during the UI flow. An ML algorithm is used to study interactions between different API resources and store those interactions as templates in a proprietary data structure. The goal of AI here is to create more advanced parameterized tests, not just repeat what the user was doing, as you get with simple record-and-playback testing.
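
The output of such a tool is essentially a set of parameterized API tests. A hand-written equivalent, sketched here with pytest and requests against a hypothetical endpoint, gives a sense of what gets generated:

    import pytest
    import requests

    BASE_URL = "https://api.example.com"   # assumed service under test

    @pytest.mark.parametrize("order_id, expected_status", [
        ("1001", 200),   # scenario captured from recorded UI traffic
        ("9999", 404),   # parameterized variation of the recorded call
    ])
    def test_get_order(order_id, expected_status):
        response = requests.get(f"{BASE_URL}/orders/{order_id}", timeout=10)
        assert response.status_code == expected_status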

Automate UI Testing Efficiently

Validating the application’s functionality with UI testing is another critical component of your testing strategy. The Selenium UI test automation framework is widely adopted for UI testing, but users still struggle with the common Selenium testing challenges of maintainability and stability.

AI helps by providing self-healing capabilities during runtime execution to address the common maintainability problems associated with UI testing. AI can learn about internal data structures during the regular execution of Selenium tests by monitoring each test run and capturing detailed information about the web UI content of the application under test. This opens the possibility of self-healing of tests, which is a critical time-saver in cases when UI elements of web pages are moved or modified, causing tests to fail.
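
To see why self-healing matters, consider a typical Selenium test in Python: a locator tied to page structure breaks as soon as the layout changes, which is exactly the failure mode self-healing repairs at runtime (the URL and element IDs below are hypothetical).

    from selenium import webdriver
    from selenium.webdriver.common.by import By

    driver = webdriver.Chrome()
    driver.get("https://example.com/login")      # assumed application under test

    # Fragile: any change to the surrounding markup invalidates this locator.
    # driver.find_element(By.XPATH, "/html/body/div[2]/form/div[3]/button").click()

    # More resilient locator; self-healing tools go further, repairing lookups at
    # runtime from attributes captured during earlier passing runs.
    driver.find_element(By.ID, "login-button").click()
    driver.quit()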

Remove Redundant Work With Smart Test Execution

Test impact analysis (TIA) assesses the impact of changes made to production code. The analysis and test selection are available to optimize the execution of unit tests, API tests, and Selenium web UI tests.

Prioritizing test activities requires a correlation from tests to business requirements. But that alone isn’t enough, since it’s unclear how recent changes have impacted the code. To optimize test execution, it’s necessary to understand the code that each test covers and then determine which code has changed. Test impact analysis allows testers to focus only on the tests that validate the changes.
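
Stripped to its essence, the selection step is a set intersection between per-test coverage data and the files touched by a change; a toy sketch (the coverage map and changed files are made up) shows the idea:

    # Hypothetical per-test coverage data, e.g. exported from a coverage tool.
    coverage_map = {
        "test_login": {"auth.py", "session.py"},
        "test_checkout": {"cart.py", "payment.py"},
        "test_profile": {"auth.py", "profile.py"},
    }
    changed_files = {"auth.py"}   # e.g. parsed from `git diff --name-only`

    # Run only the tests whose covered files intersect the changed files.
    impacted = [test for test, files in coverage_map.items() if files & changed_files]
    print(impacted)               # ['test_login', 'test_profile']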

Benefits of AI/ML in Software Testing

AI and ML provide benefits throughout the SDLC and among the various tools that assist at each of these levels. Most importantly, these new technologies amplify the effectiveness of tools by first and foremost delivering better quality software and helping testing be more efficient and productive while reducing cost and risk.

For development managers, achieving production schedules becomes a reality with no late-cycle defects crippling release timetables. For developers, integrating test automation into their workflow is seamless with automated test creation, assisted test modification, and self-healing application testing. Testers and QA get quick feedback on test execution, so they can be more strategic about where to prioritize testing resources.

How to build trust in AI for software testing
https://sdtimes.com/testing/how-to-build-trust-in-ai-for-software-testing/ (Fri, 03 Feb 2023)

The application of artificial intelligence (AI) and machine learning (ML) in software testing is both lauded and maligned, depending on who you ask. It’s an eventuality that strikes balanced notes of fear and optimism in its target users. But one thing’s for sure: the AI revolution is coming our way. And, when you thoughtfully consider the benefits of speed and efficiency, it turns out that it is a good thing. So, how can we embrace AI with positivity and prepare to integrate it into our workflow while addressing the concerns of those who are inclined to distrust it?

Speed bumps on the road to trustville

Much of the resistance toward implementing AI in software testing comes down to two factors: a rational fear for personal job security and a healthy skepticism in the ability of AI to perform tasks contextually as well as humans. This skepticism is primarily based on limitations observed in early applications of the technology. 

To further promote the adoption of AI in our industry, we must assuage the fears and disarm the skeptics by setting reasonable expectations and emphasizing the benefits. Fortunately, as AI becomes more mainstream — a direct result of improvements in its abilities — a clearer picture has emerged of what AI and ML can do for software testers; one that is more realistic and less encumbered by marketing hype.

First things first: Don’t panic

Here’s the good news: the AI bots are not coming for our jobs. For as long as there have been AI and automation testing tools, there have been dystopian nightmares about humans losing their place in the world. Equally prevalent are the naysayers who scoff at such doomsday scenarios as being little more than the whims of science fiction writers.

The sooner we consider AI to be just another useful tool, the sooner we can start reaping its benefits. Just as the invention of the electric screwdriver has not eliminated the need for workers to fasten screws, AI will not eliminate the need for engineers to author, edit, schedule and monitor test scripts. But it can help them perform these tasks faster, more efficiently, and with fewer distractions.

Autonomous software testing is simply more realistic – and more practical – when viewed in the context of AI working in tandem with humans. People will remain central to software development since they are the ones who define the boundaries and potential of their software. The nature of software testing dictates that the “goal posts” are always shifting as business requirements are often unclear and constantly changing. This variable nature of the testing process demands continued human oversight.

The early standards and methodologies for software testing (including the term “quality assurance”) come from the world of manufacturing product testing. Within that context, products were well-defined and testing was far more mechanistic than it is for software, whose traits are malleable and often changing. In reality, such uniform, robotic methods of assuring quality do not apply to software testing.

In modern software development, there are many things that can’t be known by developers. There are too many changing variables in the development of software that require a higher level of decision-making than AI can provide. And yet, while fully autonomous AI is unrealistic for the foreseeable future, AI that supports and extends human efforts at software quality is still a very worthwhile pursuit. Keeping human testers in the mix to consistently monitor, correct, and teach the AI will result in an increasingly improved software product.

The three stages of AI in software testing

AI for software testing essentially has three stages of maturity:

  • Operational Testing AI
  • Process Testing AI
  • Systemic Testing AI

Most AI-enabled software testing is currently performed at the operational stage. Operational testing involves creating scripts that mimic the routines human testers perform hundreds of times. Process AI is a more mature version of Operational AI with testers using Process AI for test generation. Other uses may include test coverage analysis and recommendations, defect root cause analysis and effort estimations, and test environment optimization. Process AI can also facilitate synthetic data creation based on patterns and usages. 

The third stage, Systemic AI, is the least tenable of the three owing to the enormous volume of training it would require. Testers can be reasonably confident that Process AI will suggest a single feature or function test to adequately assure software quality. With Systemic AI, however, testers cannot know with high confidence that the software will meet all requirements in all situations. AI at this level would test for all conceivable requirements – even those that have not been imagined by humans. This would make the work of reviewing the autonomous AI’s assumptions and conclusions such an enormous task that it would defeat the purpose of working toward full autonomy in the first place.

Set realistic expectations

After clarifying what AI can and cannot do, it is best to define what we expect from those who use it. Setting clear goals early on will prepare your team for success. When AI tools are introduced to a testing program, the rollout should be presented as a software project that has the full support of management, with well-defined goals and milestones. Offering an automated platform as an optional tool for testers to explore at their leisure is a setup for failure. Without a clear directive from management and a finite timeline, it is all too easy for the project to never get off the ground. Give the project a mandate and you’ll be well on your way to successful implementation. Be clear about who is on the team, what their roles are, and how they are expected to collaborate, and specify what outcomes are expected and from whom.

Accentuate the positive

Particularly in agile development environments, where software development is a team sport, AI is a technology that benefits not only testers but also everyone on the development team. Give testers a stake in the project and allow them to analyze the functionality and benefits for themselves. Having agency will build confidence in their use of the tools, and convince them that AI is a tool for augmenting their abilities and preparing them for the future.

Remind your team that as software evolves, it requires more scripts and new approaches for testing added features, for additional use patterns and for platform integrations. Automated testing is not a one-time occurrence. Even with machine learning assisting in the repairing of scripts, there will always be opportunities for further developing the test program in pursuit of greater test coverage, and higher levels of security and quality. Even with test scripts that approach 100 percent code execution, there will be new releases, new bug fixes, and new features to test. The role of the test engineer is not going anywhere, it is just evolving.

Freedom from the mundane

It is no secret that software test engineers are often burdened with a litany of tasks that are mundane. To be effective, testing programs are designed to audit software functionality, performance, security, look and feel, etc. in incrementally differing variations and at volume. Writing these variations is repetitive, painstaking, and—to many—even boring. By starting with this low-hanging fruit, the mundane, resource-intensive aspects of testing, you can score some early wins and gradually convince the skeptics of the value of using AI testing tools. 

Converting skeptics won’t happen overnight. If you overwhelm your team by imposing sweeping changes, you may be setting yourself up for failure. Adding AI-assisted automation into your test program greatly reduces the load of such repetitive tasks, and allows test engineers to focus on new interests and skills.

For example, one of the areas where automated tests frequently fail is in the identification of objects within a user interface (UI). AI tools can identify these objects quickly and accurately to bring clear benefit to the test script. By focusing on such operational efficiencies, you can make a strong case for embracing AI. When test engineers spend less time performing routine debugging tasks and more time focusing on strategy and coverage, they naturally become better at their jobs. When they are better at their jobs, they will be more inclined to embrace technology. 

In the end, AI is only as useful as the way in which it is applied. It is not an instantaneous solution to all our problems. We need to acknowledge what it does right, and what it does better. Then we need to let it help us be better at our jobs. With that mindset, test engineers can find a very powerful partner in AI and will no doubt be much more likely to accept it into their workflow.

Web3 and Web 3.0: Two different ideas that can coexist
https://sdtimes.com/data/web3-and-web-3-0-two-different-ideas-that-can-coexist/ (Tue, 04 Oct 2022)

It seems that every day in the tech world we hear about the salvation that the new era of the web will bring by taking away mega corporations’ hold on user data and giving control back to the people (at least some of it).

But it isn’t until we read into the matter further that we see the terms Web3 and Web 3.0 thrown around, seemingly synonymous yet quite different. 

Web3 is the more commonly referred to aspect of the new web world and it incorporates concepts such as decentralization, blockchain technologies, and token-based economics.

On the other hand, Web 3.0 is otherwise known as the Semantic Web, championed by the father of the web, Sir Tim Berners-Lee, in an effort to correct his brainchild, which has been led astray. His Solid project stores private information in decentralized data stores called pods that can be hosted anywhere the user wants. The project also relies on existing W3C standards and protocols as much as possible, according to Solid’s MIT website.

When asked about whether he aligns with Web3’s version of the future at TNW’s 2022 Conference, he said, “nope” adding that “when you try to build those things on the blockchain, it just doesn’t work” – referring to the aspects of the web that would give power over data and identity back to the people. 

Web 3.0 holds promise in linking data together

The goal behind Web 3.0 has been to make data as machine-readable as possible.

The rules laid out for Web 3.0 for linking data are like the rules for writing an article: they describe how links should be used so that machines can read the information, understand the connections between different topics, and let crawlers learn effectively from them, according to Reed McGinley-Stempel, co-founder and CEO of Stytch, a developer platform for authentication.
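
A toy illustration of what machine-readable links mean in the Semantic Web sense, using the rdflib Python library (the resources and names are invented for the example):

    from rdflib import Graph, Literal, URIRef
    from rdflib.namespace import FOAF, RDF

    g = Graph()
    article = URIRef("https://example.org/articles/web3-vs-web30")
    author = URIRef("https://example.org/people/alice")

    g.add((article, RDF.type, FOAF.Document))
    g.add((article, FOAF.maker, author))            # link the article to its author
    g.add((author, FOAF.name, Literal("Alice")))    # link the author to a name

    # Serialize as Turtle: triples a crawler can parse and reason over.
    print(g.serialize(format="turtle"))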

“I feel like when I interpret that today, as someone that has been trying to go really deep on a lot of the stuff that OpenAI has been doing, like GPT-3 and DALL-E 2, it feels like Tim Berners-Lee was way ahead of his time in terms of predicting that as you build smarter ML and AI, it would be really valuable if you had the context in a machine-readable form of what articles or content related to each other on the web,” Reed said. 

The two ideas for the new web differ in this regard, because the Semantic Web focuses mostly on how to actually present information at the machine-readable level on a website. The blockchain-based Web3, on the other hand, is much more focused on the back-end data structure that makes this data readable.

However, this idea of data discoverability can be possible in some regards in Web3, according to Reed. 

“If you go to the heart of a blockchain, which is open data by default, obviously, there is some overlap here. Data discoverability mattered a lot to Tim Berners-Lee and his concept, and that can exist on the blockchain, because anything you do with your Ethereum wallet, or any smart contract that you interact with, is naturally searchable and discoverable. Though I think the intent for that data discoverability is different than that of Tim Berners-Lee,” Reed said. 

Similar goal, but a different way to get there

Bruno Woltzenlogel Paleo, STEM Lead at Dtravel, a native Web3 travel ecosystem that provides property hosts and hospitality entrepreneurs with the infrastructure to accept on-chain bookings, said that there are many articles that present Web3 and Web 3.0 as opposites, whereas they’re both just actually addressing different aspects of what people want to have from whatever follows Web 2.0.

“I think it’s perfectly possible for these ideas to coexist,” he explained, adding that they can even be complementary. “The Web3 notion coming from blockchain and cryptocurrency can contribute a lot to the economic incentives aspect, whereas the Web 3.0 idea from the Solid project can contribute a lot to the data storage and data ownership aspect.”

What people want from the new web is more participation in and ownership of their data, more privacy over that data, and less dependency on third parties and intermediaries. The selling of user data for advertising has eroded the trust that people have in Web2. 

“The current technical solution from Web3, which in practice is Web2 plus blockchains, cryptocurrencies and smart contracts, doesn’t deliver the latter aspect yet,” Paleo said. “Tim Berners-Lee’s notion of Web 3.0 is very interesting and I think it addresses this need for data privacy and data ownership better than the approaches that currently exist in the blockchain space.”

Any kind of data can be stored in a Solid Pod: from structured data to regular files that you might store in Google Drive or Dropbox folders, and people can grant or revoke access to any piece of their data as needed.

All data in a Solid Pod is stored and accessed using standard, open, and interoperable data formats and protocols. Solid uses a common, shared way of describing things that different applications can understand. This gives Solid the unique ability to allow different applications to work with the same data, according to the Solid project. 
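In practice, a Solid Pod is a web server that speaks standard HTTP and serves interoperable RDF formats such as Turtle. The sketch below is a minimal illustration under those assumptions; the Pod URL is hypothetical, and the Solid-OIDC authentication a private resource would require is omitted.

```typescript
// A minimal sketch of reading and writing a resource in a Solid Pod over plain HTTP.
// The Pod URL is hypothetical, and authentication (Solid-OIDC) is omitted for brevity;
// a real private resource would reject these requests without an access token.
const resourceUrl = "https://alice.example-pod.net/notes/todo.ttl";

async function readNote(): Promise<string> {
  // Solid servers speak standard HTTP and serve RDF formats such as Turtle.
  const res = await fetch(resourceUrl, { headers: { Accept: "text/turtle" } });
  if (!res.ok) throw new Error(`GET failed: ${res.status}`);
  return res.text();
}

async function writeNote(turtle: string): Promise<void> {
  // PUT replaces the resource; the Pod's owner controls who is allowed to do this
  // and can revoke that access at any time.
  const res = await fetch(resourceUrl, {
    method: "PUT",
    headers: { "Content-Type": "text/turtle" },
    body: turtle,
  });
  if (!res.ok) throw new Error(`PUT failed: ${res.status}`);
}

writeNote(`@prefix schema: <https://schema.org/> .\n<#task> a schema:Action ; schema:name "Buy groceries" .`)
  .then(readNote)
  .then(console.log)
  .catch(console.error);
```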

There’s a challenge in monetizing Web 3.0

However, Paleo said that he doesn’t see anything in Web 3.0 to address the economic incentives.

“It’s not only a matter of finding a solution that allows people to easily own their data and migrate the data,” Paleo said. “There’s also an economic problem that people don’t want to store their own data and then, for somebody else to store their data, let’s say for Facebook or Google to store the data, there has to be some economic incentive and in Web 2.0 the incentive is the monetization of that data. But in the Web 3.0 idea, I just don’t see how he’s proposing any alternative to that monetization of data.”

On the other hand, Web3 has the profit motive because Web3 companies can provide services or tokenize their business model. 

Challenges for developers in Web3

While Web3 is poised to disrupt the web as we know it, it’s important for developers to understand that they’re not moving away from Web 2.0 but rather will continue to use the usual software development tools and add some extra components from Web3, according to Paleo.

“This is not something that’s going to happen over the next five years, or probably even 10 years, but maybe even longer as infrastructure develops and becomes easier for people to store their own data or to hold on to it,” said Cynthia Huang, head of growth of Dtravel. 

A big thing that developers have to watch out for is that some types of data are best not stored on the blockchain. Because transparency is key to blockchains, and to Web3, they don’t work well for data that you don’t want to be public. Medical records, for example, don’t make sense to store on the blockchain, Huang explained. 
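One common way to respect that constraint is to keep the sensitive record itself off-chain and publish only a cryptographic fingerprint of it on-chain. The sketch below is a minimal illustration of that pattern, not Dtravel’s implementation; it assumes ethers.js v6, and the RPC endpoint, signer key, contract address, and storeHash() function are all hypothetical placeholders.

```typescript
import { ethers } from "ethers";

// A minimal sketch of the "hash on-chain, data off-chain" pattern.
// The RPC URL, signer key, contract address, and storeHash() function are all
// hypothetical placeholders; this is an illustration, not a production setup.
const provider = new ethers.JsonRpcProvider("https://rpc.example.org");
const signer = new ethers.Wallet(process.env.PRIVATE_KEY as string, provider);

const abi = ["function storeHash(bytes32 recordHash) external"];
const registry = new ethers.Contract("0x0000000000000000000000000000000000000000", abi, signer);

async function anchorRecord(medicalRecordJson: string): Promise<void> {
  // The record itself stays in private, off-chain storage (a database, a Solid Pod, etc.).
  // Only its keccak256 fingerprint goes onto the public ledger, which lets anyone verify
  // the record's integrity later without the record's contents ever being exposed.
  const recordHash = ethers.keccak256(ethers.toUtf8Bytes(medicalRecordJson));
  const storeHash = registry.getFunction("storeHash");
  const tx = await storeHash(recordHash);
  await tx.wait();
  console.log(`Anchored ${recordHash} in transaction ${tx.hash}`);
}

anchorRecord(JSON.stringify({ patientId: "local-123", note: "kept off-chain" })).catch(console.error);
```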

Another challenge is that developers have to consider not only the front and back end of an application but also the smart contract layer and its communication with the blockchain. 

“It’s challenging to decide what parts of an application logic should go into the smart contract, and what parts should be handled by the back end, for instance,” Paleo said. “And just because you’re using smart contracts, it doesn’t necessarily mean that magically you will gain the benefits from blockchains.”

Developers have to design in very specific ways to gain benefits from blockchain. 

“When people use blockchain, they typically talk about less reliance on trust and more independence from third parties and intermediaries, but if you implement a smart contract in such a way that you have absolute power to modify the smart contract anytime you want, then your users are still dependent on you as a third party and intermediary,” Paleo said. “So you must implement smart contracts in ways that really deliver those goals of immutability and reduction of the need for trust.”

Many people are still not familiar with cryptowallets

Also, many people are still unaccustomed to noncustodial crypto wallets like MetaMask and are used to the Web2 way of paying for services with credit cards. 

“If you want to make a project that is crypto-native that is purely Web3, then to pay for things on your website, users would have to connect their MetaMask wallet and they would have to fund that MetaMask wallet with the base currency of some blockchain to pay for gas fees,” Paleo said. “So this creates entrance barriers for the users and friction for users who are new to blockchains and cryptocurrencies, which is a big challenge for developers in Web3.”
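That friction begins at the very first step: asking the browser wallet for an account before anything can be paid for. Below is a minimal sketch of that step, assuming a MetaMask-style wallet that injects the standard EIP-1193 provider at window.ethereum.

```typescript
// A minimal sketch of the wallet-connection step, assuming a browser wallet
// (such as MetaMask) that injects an EIP-1193 provider at window.ethereum.
declare global {
  interface Window {
    ethereum?: { request(args: { method: string; params?: unknown[] }): Promise<unknown> };
  }
}

async function connectWallet(): Promise<string> {
  if (!window.ethereum) {
    // This is exactly the onboarding hurdle: users without a wallet can go no further.
    throw new Error("No wallet detected; the user must install one before paying on-chain.");
  }
  const accounts = (await window.ethereum.request({ method: "eth_requestAccounts" })) as string[];
  const chainId = (await window.ethereum.request({ method: "eth_chainId" })) as string;
  console.log(`Connected ${accounts[0]} on chain ${chainId}`);
  // Even once connected, the account still needs the chain's base currency to cover gas fees.
  return accounts[0];
}

connectWallet().catch(console.error);

export {}; // makes this file a module so the global augmentation above is allowed
```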

Because tokenomics might open up new revenue streams that don’t involve selling user data, holding users’ data may become a liability or a risk that is best avoided, so it’s in companies’ interest to stop holding onto that data. 

Paleo said that there are some interesting approaches, such as IPFS (the InterPlanetary File System), Filecoin, and Tim Berners-Lee’s Web 3.0 idea, that can help solve this problem.
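For a flavor of the first of those, here is a minimal sketch of storing and retrieving content-addressed data, assuming the ipfs-http-client package and an IPFS daemon listening on its default local API port.

```typescript
import { create } from "ipfs-http-client";

// A minimal sketch of content-addressed storage with IPFS, assuming the
// ipfs-http-client package and an IPFS daemon running at the default local API port.
const ipfs = create({ url: "http://127.0.0.1:5001/api/v0" });

async function storeAndFetch(): Promise<void> {
  // add() returns a CID, a hash-based address derived from the content itself,
  // so the same bytes always resolve to the same identifier, wherever they are hosted.
  const { cid } = await ipfs.add(JSON.stringify({ listing: "example booking data" }));
  console.log(`Stored at ${cid.toString()}`);

  const chunks: Uint8Array[] = [];
  for await (const chunk of ipfs.cat(cid)) {
    chunks.push(chunk);
  }
  console.log(Buffer.concat(chunks).toString("utf8"));
}

storeAndFetch().catch(console.error);
```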

Web3 adoption in practice

Currently, a lot of Web3 adoption is driven by Web2 companies wanting to add Web3-native features into their products, according to Reed. For example, Twitter allows users to link their NFT to their Twitter profile. 

“The most traction we’re seeing with Web3 use cases are offerings within Web2 use cases that already have distribution. I think a lot of Web3 apps are still trying to prove why should you use this app over Twitter, Uber, Lyft, Facebook, or Google, because I think there are real UX questions about whether it’s worth the tradeoff at this point, which is why it seems to be that the hybrid approaches are gaining more traction from our vantage point,” Reed said. 

Also, not everyone wants the tradeoffs that Web3 would bring if it means sacrificing UX. 

The origin story of the Web3 idea is that people didn’t want to be locked into the walled gardens of large Web2 platforms that have immense control over everyone’s digital lives. But a lot of users don’t want to live purely in a world where they have complete control of their data at the cost of bad UX.

“A lot of companies think there are interesting technical pieces and cultural trends coming up with Web3, and they’re interested in adopting that. They’re not immediately running everything on the blockchain. They see tons of value in their core Web2 platform and products. And they see value in also being able to appeal to the users that are very interested in Web3 and NFTs. And so they just see it as another feature they can offer,” Reed said. 

The post Web3 and Web 3.0: Two different ideas that can coexist appeared first on SD Times.

SD Times Open-Source Project of the Week: BigCode https://sdtimes.com/software-development/sd-times-open-source-project-of-the-week-bigcode/ Fri, 30 Sep 2022 13:00:22 +0000

The BigCode initiative’s aim is to build state-of-the-art large language models (LLMs) for code in an open and responsible way.

Code LLMs enable the completion and synthesis of code from other code and from natural language descriptions, and they let users work across a wide range of domains, tasks, and programming languages. 
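For a sense of what that enables, here is a minimal, illustrative sketch of prompting a hosted code LLM, assuming the @huggingface/inference package; the model id shown (bigcode/santacoder) was released by the BigCode community after this announcement, so treat it and the token as placeholders.

```typescript
import { HfInference } from "@huggingface/inference";

// A minimal sketch of code completion with a hosted code LLM.
// The token and model id are placeholders; bigcode/santacoder was published
// by the BigCode community after this article's announcement.
const hf = new HfInference(process.env.HF_TOKEN);

async function completeCode(): Promise<void> {
  const result = await hf.textGeneration({
    model: "bigcode/santacoder",
    inputs: "def fibonacci(n):",          // the model continues this snippet
    parameters: { max_new_tokens: 48 },
  });
  console.log(result.generated_text);
}

completeCode().catch(console.error);
```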

The initiative is led by ServiceNow Research, which does research to futureproof AI-powered experiences, and Hugging Face, a community and data platform that provides tools to enable users to build, train, and deploy ML models based on open-source code and technologies. 

BigCode is inviting AI researchers to collaborate on a representative evaluation suite for code LLMs covering a diverse set of tasks and programming languages, responsible development and governance of data sets for code LLMs, and faster training and inference methods for LLMs.

“The first goal of BigCode is to develop and release a data set large enough to train a state-of-the-art language model for code. We’ll ensure that only files from repositories with permissive licenses go into the data set,” ServiceNow Research wrote in a blog post. 

“With that data set, we’ll train a 15-billion-parameter language model for code using ServiceNow’s in-house GPU cluster. With an adapted version of Megatron-LM, we’ll train the LLM on the distributed infrastructure.”

Additional details about the project are available on the project’s website. 

The post SD Times Open-Source Project of the Week: BigCode appeared first on SD Times.

DALL-E now available without waitlist https://sdtimes.com/software-development/dall-e-now-available-without-waitlist/ Thu, 29 Sep 2022 18:49:01 +0000

OpenAI has removed the waitlist for the DALL-E beta so that users can get started right away. DALL-E lets users type endless combinations of prompts, each of which yields a unique set of ML/AI-generated images.

Whether the prompts are as simple as “an armchair in the shape of an avocado” or as elaborate and abstract as “a futuristic cyborg poster hanging in a neon lit subway station,” the model generates digital images from natural language descriptions. 
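For readers who prefer to script image generation rather than use the web app, here is a minimal sketch assuming the openai Node SDK’s images.generate helper and an OPENAI_API_KEY in the environment; note that API access to DALL-E was rolled out separately from the web beta described here.

```typescript
import OpenAI from "openai";

// A minimal sketch of generating an image from a natural language prompt,
// assuming the openai Node SDK (v4-style images.generate helper) and an
// OPENAI_API_KEY environment variable.
const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

async function generateImage(): Promise<void> {
  const result = await openai.images.generate({
    prompt: "an armchair in the shape of an avocado",
    n: 1,
    size: "1024x1024",
  });
  // The response contains a URL for each generated image.
  console.log(result.data?.[0]?.url);
}

generateImage().catch(console.error);
```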

DALL-E was originally revealed by OpenAI in a blog post in January 2021, and it uses a version of GPT-3, another project by OpenAI that can generate original writing, modified to generate images.

Since the model was first previewed this April, users (especially artists) have helped researchers discover new use cases for the tool. 

The feedback has resulted in features such as Outpainting, which lets users continue an image beyond its original borders and create bigger images of any size, and collections, which help users expedite their creative processes.

The real-world research has also led researchers to create a safer image-generating model. 

“Learning from real-world use has allowed us to improve our safety systems, making wider availability possible today. In the past months, we’ve made our filters more robust at rejecting attempts to generate sexual, violent and other content that violates our content policy and built new detection and response techniques to stop misuse,” OpenAI wrote on the project’s website, which contains additional details.

More than 1.5 million users are now actively creating over 2 million images a day with DALL-E, according to OpenAI.

The post DALL-E now available without waitlist appeared first on SD Times.
