Jenna Barron, Author at SD Times
https://sdtimes.com/author/jennifer-sargent/
Software Development News

IBM releases open AI agents for resolving GitHub issues
https://sdtimes.com/softwaredev/ibm-releases-open-ai-agents-for-resolving-github-issues/
Fri, 01 Nov 2024 15:23:47 +0000

IBM is releasing a family of AI agents (IBM SWE-Agent 1.0) that are powered by open LLMs and can resolve GitHub issues automatically, freeing up developers to work on other things rather than getting bogged down by their backlog of bugs that need fixing. 

“For most software developers, every day starts with where the last one left off. Trawling through the backlog of issues on GitHub you didn’t deal with the day before, you’re triaging which ones you can fix quickly, which will take more time, and which ones you really don’t know what to do with yet. You might have 30 issues in your backlog and know you only have time to tackle 10,” IBM wrote in a blog post. This new family of agents aims to alleviate this burden and shorten the time developers are spending on these tasks. 

One of the agents is a localization agent that can find the file and line of code that is causing an error. According to IBM, finding the correct line of code related to a bug report can be a time-consuming process for developers. Now, they’ll be able to tag the bug report they’re working on in GitHub with “ibm-swe-agent-1.0” and the agent will work to find the code.
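Under the hood, applying a label to a GitHub issue is a standard REST operation. As a rough illustration only (the endpoint below is GitHub’s public labels API; the repository names are placeholders, and this is not IBM’s tooling), the invocation request could be constructed like this:

```python
import json

# Hypothetical sketch: build the GitHub REST request that would add the
# "ibm-swe-agent-1.0" label to an issue, which is how the agent is invoked.
# Endpoint: POST /repos/{owner}/{repo}/issues/{issue_number}/labels
def label_request(owner: str, repo: str, issue: int, label: str) -> tuple[str, str]:
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{issue}/labels"
    body = json.dumps({"labels": [label]})
    return url, body

url, body = label_request("my-org", "my-repo", 42, "ibm-swe-agent-1.0")
```

Sending that payload (with an authenticated POST) would attach the label and, per IBM’s description, kick off the localization agent.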

Once found, the agent suggests a fix that the developer could implement. At that point, the developer could either fix the issue themselves or enlist the help of other SWE agents for further assistance.

Other agents in the SWE family include one that edits lines of code based on developer requests and one that can be used to develop and execute tests. All of the SWE agents can be invoked directly from within GitHub.

According to IBM’s early testing, these agents can localize and fix problems in less than five minutes and have a 23.7% success rate on SWE-bench tests, a benchmark that tests an AI system’s ability to solve GitHub issues. 

IBM explained that it set out to create SWE agents as an alternative to other competitors who use large frontier models, which tend to cost more. “Our goal was to build IBM SWE-Agent for enterprises who want a cost efficient SWE agent to run wherever their code resides — even behind your firewall — while still being performant,” said Ruchir Puri, chief scientist at IBM Research.

The post IBM releases open AI agents for resolving GitHub issues appeared first on SD Times.
ChatGPT can now include web sources in responses
https://sdtimes.com/ai/chatgpt-can-now-include-web-sources-in-responses/
Thu, 31 Oct 2024 19:26:15 +0000

OpenAI is updating ChatGPT so that its responses include results from the web, bringing the power of the search engine directly into the chat interface.

“This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more,” OpenAI wrote in a post.

According to OpenAI, ChatGPT will automatically decide whether a web search is warranted based on the prompt. Users can also directly tell it to search the web by selecting the web search icon under the prompt field.  

Chats will include a link to the web source so that the user can visit that site for more information. A new Sources panel will display on the right-hand side of the chat with a list of all sources.

OpenAI partnered with specific news and data providers to get up-to-date information and visual designs for weather, stocks, sports, news, and maps. For instance, asking about the weather will result in a graphic that shows the five-day forecast, and stock questions will include a chart of that stock’s performance.

Some partners OpenAI worked with include Associated Press, Axel Springer, Condé Nast, Dotdash Meredith, Financial Times, GEDI, Hearst, Le Monde, News Corp, Prisa (El País), Reuters, The Atlantic, Time, and Vox Media.

“ChatGPT search connects people with original, high-quality content from the web and makes it part of their conversation. By integrating search with a chat interface, users can engage with information in a new way, while content owners gain new opportunities to reach a broader audience,” OpenAI wrote. 

This feature is available on chatgpt.com, the desktop app, and the mobile app. It is available today to ChatGPT Plus and Team subscribers and people on the SearchGPT waitlist. In the next few weeks it should be available to Enterprise and Edu users, and in the next few months, all Free users will get access as well.

Gemini responses can now be grounded with Google Search results
https://sdtimes.com/ai/gemini-responses-can-now-be-grounded-with-google-search-results/
Thu, 31 Oct 2024 17:45:00 +0000

Google is announcing that the Gemini API and Google AI Studio now both offer the ability to ground models using Google Search, which will improve the accuracy and reliability of Gemini’s responses. 

By grounding the responses with Google Search results, responses can have fewer hallucinations, more up-to-date information, and richer information. Grounded responses also include links to the sources they are using. 

“By providing supporting links, grounding brings transparency to AI applications, making them more trustworthy and encouraging users to click on the underlying sources to find out more,” Google wrote in a blog post.

This new capability supports dynamic retrieval, meaning that Gemini will assess whether grounding is necessary, as not all queries need the extra assistance, and grounding does add extra cost and latency. It generates a prediction score for every prompt, which is a measure of how beneficial grounding would be, and developers can adjust the prediction score threshold to what works best for their application.
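As a sketch of how this looks in practice (field names follow the Gemini REST API’s grounding configuration at the time of writing; verify against Google’s current documentation before relying on them), a request enabling dynamic retrieval with a custom threshold might be assembled like this:

```python
import json

# Sketch of a Gemini REST request body enabling grounding with Google
# Search. Grounding only runs when the model's predicted benefit score
# for the prompt exceeds the dynamic threshold.
def build_grounded_request(prompt: str, threshold: float = 0.7) -> str:
    body = {
        "contents": [{"parts": [{"text": prompt}]}],
        "tools": [{
            "google_search_retrieval": {
                "dynamic_retrieval_config": {
                    "mode": "MODE_DYNAMIC",
                    # Raise the threshold to ground fewer queries (lower
                    # cost/latency); lower it to ground more of them.
                    "dynamic_threshold": threshold,
                }
            }
        }],
    }
    return json.dumps(body)

request_json = build_grounded_request("Who won the most recent World Series?")
```

The resulting JSON would be POSTed to the model’s `generateContent` endpoint with an API key.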

Currently, grounding only supports text prompts and does not support multimodal prompts, like text-and-image or text-and-audio. It is available in all of the languages Gemini currently supports. 

Google’s documentation on grounding provides instructions on how to configure Gemini models to use this new capability. 

Google open sources Java-based differential privacy library
https://sdtimes.com/data/google-open-sources-java-based-differential-privacy-library/
Thu, 31 Oct 2024 15:33:10 +0000

Google has announced that it is open sourcing a new Java-based differential privacy library called PipelineDP4j.

Differential privacy, according to Google, is a privacy-enhancing technology (PET) that “allows for analysis of datasets in a privacy-preserving way to help ensure individual information is never revealed.” This enables researchers or analysts to study a dataset without accessing personal data. 
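The core idea can be sketched with the classic Laplace mechanism (a toy illustration of differential privacy in general, not PipelineDP4j’s API): noise calibrated to the query’s sensitivity and a privacy parameter epsilon is added to the true answer, so the output reveals almost nothing about any single record.

```python
import math
import random

def laplace_noise(scale: float) -> float:
    # Sample Laplace(0, scale) via inverse-CDF sampling.
    u = random.random() - 0.5  # uniform on [-0.5, 0.5)
    return -scale * math.copysign(1.0, u) * math.log(1 - 2 * abs(u))

def private_count(records, epsilon: float = 1.0) -> float:
    # A count query has sensitivity 1 (adding or removing one person
    # changes the count by at most 1), so Laplace noise with scale
    # 1/epsilon yields epsilon-differential privacy.
    return len(records) + laplace_noise(1.0 / epsilon)

noisy = private_count(range(1000), epsilon=1.0)  # close to, but not exactly, 1000
```

Libraries like PipelineDP4j automate this calibration (plus subtleties such as contribution bounding) across large, parallel pipelines.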

Google claims that its implementation of differential privacy is the largest in the world, spanning nearly three billion devices. As such, Google has invested heavily in providing access to its differential privacy technologies over the last several years. For instance, in 2019, it open sourced its first differential privacy library, and in 2021, it open sourced its Fully Homomorphic Encryption transpiler.

In the years since, the company has also worked to expand the languages its libraries are available in, which is the basis for today’s news. 

The new library, PipelineDP4j, enables developers to execute highly parallelizable computations in Java, which reduces the barrier to differential privacy for Java developers, Google explained.

“With the addition of this JVM release, we now cover some of the most popular developer languages – Python, Java, Go, and C++ – potentially reaching more than half of all developers worldwide,” Miguel Guevara, product manager on the privacy team at Google, wrote in a blog post.

The company also announced that it is releasing another library, DP-Auditorium, that can audit differential privacy algorithms. 

According to Google, two key steps are needed to effectively test differential privacy: evaluating the privacy guarantee over a fixed dataset and finding the “worst-case” privacy guarantee in a dataset. DP-Auditorium provides tools for both of those steps in a flexible interface. 

It uses samples from the differential privacy mechanism itself and doesn’t need access to the application’s internal properties, Google explained. 

“We’ll continue to build on our long-standing investment in PETs and commitment to helping developers and researchers securely process and protect user data and privacy,” Guevara concluded. 

Tabnine’s new Code Review Agent validates code based on a dev team’s unique best practices and standards
https://sdtimes.com/ai/tabnines-new-code-review-agent-validates-code-based-on-a-dev-teams-unique-best-practices-and-standards/
Wed, 30 Oct 2024 15:24:58 +0000

The AI coding assistant provider Tabnine is releasing a private preview for its Code Review Agent, a new AI-based tool that validates software based on the development team’s unique best practices and standards for software development. 

According to Tabnine, using AI to review code is nothing new, but many of the tools currently available check code against general standards. However, software development teams often develop their own unique ways of creating software. “What one team sees as their irrefutable standard, another team might reject outright. For AI to add meaningful value in improving software quality for most teams, it must have the same level of understanding as a fully onboarded, senior member of the team,” Tabnine explained in a blog post.

Code Review Agent allows teams to create rules based on their own standards, best practices, and company policies. These rules are then applied during code review at the pull request or in the IDE.

Development teams can provide the parameters their code should comply with in natural language, and Tabnine works behind the scenes to convert that into a set of rules. Tabnine also offers a set of predefined rules that can be incorporated into the ruleset as well. 

For example, one of Tabnine’s predefined rules is “Only use SHA256 to securely hash data” and a customer-specific rule is “Only use library acme_secure_api_access for accessing external APIs, do not use standard http libraries.”
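To make the second rule concrete, here is a hypothetical sketch (not Tabnine’s implementation) of the kind of check such a rule implies: scanning the added lines of a diff for imports of standard HTTP libraries.

```python
import re

# Hypothetical rule check (not Tabnine's implementation): flag added
# diff lines that import standard HTTP libraries instead of the
# mandated acme_secure_api_access wrapper.
DISALLOWED = re.compile(r"^\+\s*import (requests|urllib|http\.client)\b")

def flag_violations(diff: str) -> list[str]:
    return [line for line in diff.splitlines() if DISALLOWED.match(line)]

diff = """\
+import requests
+import acme_secure_api_access
"""
violations = flag_violations(diff)  # only the requests import is flagged
```

The point of Tabnine’s natural-language approach is that teams describe the policy in plain English and the agent derives checks like this, rather than writing pattern rules by hand.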

When a developer creates a pull request that doesn’t meet the established rules, Code Review Agent flags the issue in the code review and also offers suggestions on how to fix the problem.

“By comprehensively reading through code and ensuring that it matches each team’s unique expectations, Tabnine saves engineering teams significant time and effort while applying a level of rigor in code review that was never possible with static code analysis. Just like AI code generation automates away simpler coding tasks so developers can focus on more valuable tasks, Tabnine’s AI Code Review agent automates common review tasks, freeing up code reviewers to focus on higher-order analysis instead of adherence to best practices,” Tabnine wrote. 

This tool is currently available as a private preview to Tabnine Enterprise customers. An example video of Code Review Agent in action can be viewed here.

GitHub Copilot now offers access to new Anthropic, Google, and OpenAI models
https://sdtimes.com/ai/github-copilot-now-offers-access-to-anthropic-google-and-openai-models/
Tue, 29 Oct 2024 16:33:22 +0000

GitHub is hosting its annual user conference, GitHub Universe, today and tomorrow, and has announced a number of new AI capabilities that will enable developers to build applications more quickly, securely, and efficiently. 

Many of the updates were across GitHub Copilot. First up, GitHub announced that users now have access to more model choices thanks to partnerships with Anthropic, Google, and OpenAI. Newly added model options include Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s GPT-4o, o1-preview, and o1-mini. 

By offering developers more choices, GitHub is enabling them to choose the model that works best for their specific use case, the company explained.

“In 2024, we experienced a boom in high-quality large and small language models that each individually excel at different programming tasks. There is no one model to rule every scenario, and developers expect the agency to build with the models that work best for them,” said Thomas Dohmke, CEO of GitHub. “It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice. Today, we deliver just that.”

Copilot Workspace has a number of new features as well, like a build and repair agent, brainstorming mode, integrations with VS Code, and iterative feedback loops. 

GitHub Models, which enables developers to experiment with different AI models, has a number of features now in public preview, including side-by-side model comparison, support for multi-modal models, the ability to save and share prompts and parameters, and additional cookbooks and SDK support in GitHub Codespaces.

Copilot Autofix, which analyzes and provides suggestions about code vulnerabilities, added security campaigns, enabling developers to triage up to 1,000 alerts at once and filter them by type, severity, repository, and team. The company also added integrations with ESLint, JFrog SAST, and Black Duck Polaris. Both security campaigns and these partner integrations are available in public preview. 

Other new features in GitHub Copilot include code completion in Copilot for Xcode (in public preview), a code review capability, and the ability to customize Copilot Chat responses based on a developer’s preferred tools, organizational knowledge, and coding conventions.

In terms of what’s coming next, starting November 1, developers will be able to edit multiple files at once using Copilot Chat in VS Code. Then, in early 2025, Copilot Extensions will be generally available, enabling developers to integrate their other developer tools into GitHub Copilot, like Atlassian Rovo, Docker, Sentry, and Stack Overflow.

The company also announced a technical preview for GitHub Spark, an AI tool for building fully functional micro apps (called “sparks”) solely using text prompts. Each spark can integrate external data sources without requiring the creator to manage cloud resources. 

While developers can make changes to sparks by diving into the code, any user can iterate and make changes entirely using natural language, reducing the barrier to application development. 

Finished sparks can be run immediately on the user’s desktop, tablet, or mobile device, or shared with others, who can use them or even build upon them.

“With Spark, we will enable over one billion personal computer and mobile phone users to build and share their own micro apps directly on GitHub—the creator network for the Age of AI,” said Dohmke.

And finally, the company revealed the results of its Octoverse report, which provides insights into the world of open source development by studying public activity on GitHub. 

Some key findings were that Python is now the most used language on the platform, AI usage is up 98% since last year, and the number of global developers continues increasing, particularly across Africa, Latin America, and Asia. 

OpenSSF updates its Developing Secure Software course with new interactive labs
https://sdtimes.com/security/openssf-updates-its-developing-secure-software-course-with-new-interactive-labs/
Tue, 29 Oct 2024 14:32:44 +0000

The Open Source Security Foundation (OpenSSF) is updating its Developing Secure Software (LFD121) course with new interactive learning labs that provide developers with more hands-on learning opportunities. 

LFD121 is a free course offered by OpenSSF that takes about 14-18 hours to complete. Any student who passes the final exam gets a certificate that is valid for two years.  

The course is broken down into three parts. The first part covers the basics of secure software development, like how to implement secure design principles and how to secure the software supply chain. Part two covers implementation of those basics and then part three finishes up with security testing and also covers more specialized topics like threat modeling, fielding, and formal methods for verifying that software is secure. 

The new interactive labs are not required for completing the course, but do enhance the experience, OpenSSF explained. The labs launch directly in the web browser, meaning no additional software needs downloading. 

Each lab involves working through a specific task, such as validating input of a simple data type. “Learning how to do input validation is important,” said David Wheeler, director of open source supply chain security at OpenSSF. “Attackers are *continuously* attacking programs, so developers need to learn to validate (check) inputs from potential attackers so that it’s much harder for attackers to [get] malicious inputs into a program.”

Each lab includes a general goal, background on the issue, and information about the specific tasks. Students will work through a pre-written program that has some areas that will need to be filled in by the student. 

According to Wheeler, the goal of all of the labs isn’t to learn specific technologies, but to learn core concepts about writing secure software. For example, in the input validation lab, the student only needs to fix one line of code, but that line of code is the one that does the validation, and therefore, is critically important. 

“In fact, without the input validation line to be crafted by the user, the code has a vulnerability (specifically a ‘cross-site scripting vulnerability’),” said Wheeler.
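As a generic illustration of the lesson the lab teaches (this is not the lab’s actual code), allowlist validation on input, paired with escaping on output, is what closes that kind of cross-site scripting hole:

```python
import html
import re

# Accept only what the expected simple data type allows (an allowlist),
# rather than trying to blocklist dangerous characters.
VALID_NAME = re.compile(r"^[A-Za-z0-9_-]{1,32}$")

def render_greeting(user_input: str) -> str:
    if not VALID_NAME.match(user_input):
        raise ValueError("invalid input")
    # Escaping on output is a second layer of defense.
    return f"<p>Hello, {html.escape(user_input)}!</p>"

page = render_greeting("alice")
# render_greeting("<script>alert(1)</script>") would raise ValueError
```

Note that the allowlist line does the heavy lifting, mirroring Wheeler’s point that one validation line is the critical fix.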

Students can also get help throughout the lab by requesting context-specific hints that take into account where they are stuck. Wheeler explained that the hints help students progress through the labs even if they’re not familiar with the particular programming language used in the lab. 

Tech companies are turning to nuclear energy to meet growing power demands caused by AI
https://sdtimes.com/ai/tech-companies-are-turning-to-nuclear-energy-to-meet-growing-power-demands-caused-by-ai/
Fri, 25 Oct 2024 16:57:14 +0000

The explosion in interest in AI, particularly generative AI, has had many positive benefits: increased productivity, easier and faster access to information, and often a better user experience in applications that have embedded AI chatbots. 

But for all its positives, there is one huge problem that still needs solving: how do we power it all? 

As of August of this year, ChatGPT had more than 200 million weekly active users, according to a report by Axios.  And it’s not just OpenAI; Google, Amazon, Apple, IBM, Meta, and many other players in tech have created their own AI models to better serve their customers and are investing heavily in AI strategies.

While people may generally be able to access these services for free, they’re not free in terms of the power they require. Research from Goldman Sachs indicates that a single ChatGPT query uses almost 10 times as much power as a Google search. 

Its research also revealed that by 2030, data center power demand will grow 160%. Relative to other energy demand categories, data centers will go from using 1-2% of total power to 3-4% by that same time, and by 2028, AI will represent 19% of total data center power demand.

Overall, the U.S. will see a 2.4% increase in energy demands every year through 2030, and will need to invest approximately $50 billion just to support its data centers. 

“Energy consumption in the United States has been pretty flat, really over the course of the last two decades,” Jason Carolan, chief innovation officer at Flexential, explained in a recent episode of ITOps Times’ podcast, Get With IT. “Part of that was that perhaps COVID sort of slowed things down. But now we’re at this point, whether it’s AI or whether it’s just electrification in general, that we’re really running out of capacity. In fact, there are states where projects of large scale, electrification builds, as well as data center builds, basically have stopped because there isn’t power capacity available.” 

To meet these growing demands, tech companies are turning to nuclear energy, and in the past month or so, Google, Microsoft, and Amazon have all announced investments in nuclear energy plants. 

On September 20, Microsoft announced that it had signed a 20-year deal with Constellation Energy to restart Three Mile Island Unit 1. This is a different reactor from the one (Unit 2) that caused the infamous Three Mile Island disaster in 1979; Unit 1 was actually restarted after the accident, in 1985, and ran until 2019, when it shut down due to cost.

Constellation and Microsoft say that the reactor should be back in operation by 2028 after improvements are made to the turbine, generator, main power transformer, and cooling and control systems. Constellation claims the reactor will generate around 835 megawatts of energy. 

“Powering industries critical to our nation’s global economic and technological competitiveness, including data centers, requires an abundance of energy that is carbon-free and reliable every hour of every day, and nuclear plants are the only energy sources that can consistently deliver on that promise,” said Joe Dominguez, president and CEO of Constellation.

Google and Amazon followed suit in October, both with news that they are investing in small modular reactors (SMRs). SMRs generate less power than traditional reactors, typically around 100 to 300 megawatts compared to 1,000 megawatts from a large-scale reactor, according to Carolan. Even though they generate less power, they include more safety features, have a smaller footprint so they can be installed in places where a large reactor couldn’t, and cost less to build, according to the Office of Nuclear Energy.

“There’s been a lot of money and innovation put into small scale nuclear reactors over the course of the last four or five years, and there are several projects underway,” said Carolan. “There continues to be almost open-source-level innovation in the space because people are starting to share data points and share operational models.”

Google announced it had signed a deal with Kairos Power to purchase nuclear energy generated by its small modular reactors, revealing that Kairos’ first SMR should be online by 2030, with more SMRs deployed through 2035. Amazon also announced it is partnering with energy companies in Washington and Virginia to develop SMRs there, and has invested in X-energy, a company developing SMR reactors and fuel.

“The grid needs new electricity sources to support AI technologies that are powering major scientific advances, improving services for businesses and customers, and driving national competitiveness and economic growth. This agreement helps accelerate a new technology to meet energy needs cleanly and reliably, and unlock the full potential of AI for everyone,” Michael Terrell, senior director of energy and climate at Google, wrote in the announcement. 

Carolan did note that SMRs are still a relatively new technology, and many of the designs have not yet been approved by the Nuclear Regulatory Commission. 

“I think we’re going to be in a little bit of a power gap here, in the course of the next two to three years as we continue to scale up nuclear,” he explained. As of April 2024, the U.S. had only 54 operating nuclear power plants, and in 2023, just 18.6% of its total power generation came from nuclear power.

Google expands Responsible Generative AI Toolkit with support for SynthID, a new Model Alignment library, and more
https://sdtimes.com/ai/google-expands-responsible-generative-ai-toolkit-with-support-for-synthid-a-new-model-alignment-library-and-more/
Thu, 24 Oct 2024 16:17:54 +0000

Google is making it easier for companies to build generative AI responsibly by adding new tools and libraries to its Responsible Generative AI Toolkit.

The Toolkit provides tools for responsible application design, safety alignment, model evaluation, and safeguards, all of which work together to improve the ability to responsibly and safely develop generative AI. 

Google is adding the ability to watermark and detect text that is generated by an AI product using Google DeepMind’s SynthID technology. The watermarks aren’t visible to humans viewing the content, but can be seen by detection models to determine if content was generated by a particular AI tool. 

“Being able to identify AI-generated content is critical to promoting trust in information. While not a silver bullet for addressing problems such as misinformation or misattribution, SynthID is a suite of promising technical solutions to this pressing AI safety issue,” SynthID’s website states. 

The next addition to the Toolkit is the Model Alignment library, which allows the LLM to refine a user’s prompts based on specific criteria and feedback.  

“Provide feedback about how you want your model’s outputs to change as a holistic critique or a set of guidelines. Use Gemini or your preferred LLM to transform your feedback into a prompt that aligns your model’s behavior with your application’s needs and content policies,” Ryan Mullins, research engineer and RAI Toolkit tech lead at Google, wrote in a blog post.

And finally, the last update is an improved developer experience in the Learning Interpretability Tool (LIT) on Google Cloud, which is a tool that provides insights into “how user, model, and system content influence generation behavior.”

It now includes a model server container, allowing developers to deploy Hugging Face or Keras LLMs on Google Cloud Run GPUs with support for generation, tokenization, and salience scoring. Users can also now connect to self-hosted models or Gemini models using the Vertex API. 

“Building AI responsibly is crucial. That’s why we created the Responsible GenAI Toolkit, providing resources to design, build, and evaluate open AI models. And we’re not stopping there! We’re now expanding the toolkit with new features designed to work with any LLMs, whether it’s Gemma, Gemini, or any other model. This set of tools and features empower everyone to build AI responsibly, regardless of the model they choose,” Mullins wrote. 

The post Google expands Responsible Generative AI Toolkit with support for SynthID, a new Model Alignment library, and more appeared first on SD Times.

HCL DevOps streamlines development processes with its platform-centric approach https://sdtimes.com/devops/hcl-devops-streamlines-development-processes-with-its-platform-centric-approach/ Thu, 24 Oct 2024 13:00:55 +0000

Platform engineering has been gaining quite a lot of traction lately — and for good reason. The benefits to development teams are many, and it could be argued that platform engineering is a natural evolution of DevOps, so it’s not a huge cultural change to adapt to. 

According to Jonathan Harding, Senior Product Manager of Value Stream Management at HCLSoftware, in an era where organizations have become so focused on how to be more productive, this discipline has gained popularity because “it gets new employees productive quickly, and it gets existing employees able to deliver quickly and in a way that is relatively self-sufficient.”

Platform engineering teams work to build an internal developer portal (IDP), which is a self-service platform that developers can use to make certain parts of their job easier. For example, rather than a developer needing to contact IT and waiting for them to provision infrastructure, that developer would interact with the IDP to get that infrastructure provisioned.
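That self-service flow can be pictured with a small toy model, not any real IDP product's API: the portal exposes a catalog of vetted templates, and a developer provisions from it directly instead of filing a ticket with IT. The class and template names are invented for illustration.

```python
from dataclasses import dataclass, field
import itertools

@dataclass
class InternalDeveloperPortal:
    """Toy IDP: a curated template catalog plus a self-service provisioning endpoint."""
    templates: dict = field(default_factory=lambda: {
        "postgres-dev": {"cpu": 2, "memory_gb": 4},
        "k8s-namespace": {"cpu": 4, "memory_gb": 8},
    })
    _ids: itertools.count = field(default_factory=itertools.count, repr=False)
    provisioned: list = field(default_factory=list)

    def provision(self, template: str, owner: str) -> dict:
        """Provision a resource from a catalog template, no manual IT hand-off."""
        if template not in self.templates:
            raise ValueError(f"unknown template {template!r}; see the catalog")
        resource = {"id": next(self._ids), "template": template,
                    "owner": owner, **self.templates[template]}
        self.provisioned.append(resource)  # a real portal would call Terraform/cloud APIs here
        return resource

idp = InternalDeveloperPortal()
print(idp.provision("postgres-dev", owner="dev-team-a"))
```

Because the catalog encodes the platform team's standards, developers get infrastructure on demand while the organization keeps guardrails on what can be created.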

Essentially, an IDP is a technical implementation of a DevOps objective, explained Chris Haggan, Head of HCL DevOps at HCLSoftware.

“DevOps is about collaboration and agility of thinking, and platform engineering is the implementation of products like HCL DevOps that enable that technical delivery aspect,” Haggan said.

Haggan looks at platform engineering from the perspective of having a general strategy and then bringing in elements of DevOps to provide a holistic view of that objective. 

“I want to get this idea that a customer has given me out of the ideas bucket and into production as quickly as I can. And how do I do that? Well, some of that is going to be about the process, the methodology, and the ways of working to get that idea quickly through the delivery lifecycle, and some of that is going to be about having a technical platform that underpins that,” said Haggan. 

IDPs typically include several different functionalities and toolchains, acting as a one-stop shop for everything a developer might need. From a single platform, they might be able to create infrastructure, handle observability, or set up new development environments. HCL DevOps offers similar capabilities, but because it comes as a ready-to-use, customizable package, development teams can skip building an IDP from scratch and go straight to the benefits.

Haggan explained that the costs of building and maintaining a platform engineering system are not inconsequential. For instance, they need to integrate multiple software delivery systems and figure out where to store metrics, SDLC events, and other data, which often requires setup and administration of a new database. 

Plus, teams sometimes design a software delivery system that incorporates their own cultural nuances, which can be helpful; other times, “they reflect unnecessary cultural debt that has accumulated within an organization for years,” said Haggan.

HCL DevOps consists of multifaceted solutions, with the three most popular being:

  • HCL DevOps Test: An automated testing platform that covers UI, API, and performance testing, and provides testing capabilities like virtual services and test data creation.
  • HCL DevOps Deploy: A fully automated CI/CD solution that supports a variety of architectures, including distributed multi-tier, mobile, mainframe, and microservices. 
  • HCL DevOps Velocity: The company’s value stream management offering that pulls in data from across the SDLC to provide development teams with useful insights.

Haggan admitted that he’s fully aware that organizations will want to customize and add new capabilities, so it’s never going to be just their platform that’s in play. But the benefit they can provide is that customers can use HCL DevOps as a starting point and then build from there. 

“We’re trying to be incredibly open as an offering and allow customers to take advantage of the tools that they have,” Haggan said. “We’re not saying you have to work only with us. We’re fully aware that organizations have their own existing workflows, and we’re going to work with that.”

To that end, HCL offers plugins that connect with other software. For instance, HCL DevOps Deploy currently has about 200 different plugins that could be used, and customers can also create their own, Harding explained. 

The plugin catalog is curated by the HCL DevOps technical team, but also has contributions from the community submitted through GitHub. 
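HCL DevOps Deploy's actual plugin SDK is not shown here, but the general shape of such an extension point can be sketched with a minimal registry: integrations register themselves under a name, and the pipeline invokes them by that name. All identifiers below are hypothetical.

```python
# Hypothetical plugin registry illustrating how a deploy tool might expose
# integration points; it does not reflect HCL's real plugin interface.
PLUGINS = {}

def plugin(name: str):
    """Decorator registering a named integration step in the catalog."""
    def register(fn):
        PLUGINS[name] = fn
        return fn
    return register

@plugin("slack-notify")
def slack_notify(context: dict) -> str:
    """Example community plugin: announce a finished deployment."""
    return f"notify #{context['channel']}: deploy {context['version']} done"

def run_step(name: str, context: dict) -> str:
    """Invoke an installed plugin by name, failing clearly if it is missing."""
    if name not in PLUGINS:
        raise KeyError(f"no plugin {name!r} installed")
    return PLUGINS[name](context)

print(run_step("slack-notify", {"channel": "releases", "version": "1.4.2"}))
# → notify #releases: deploy 1.4.2 done
```

A registry like this is what lets a curated catalog and community-contributed plugins coexist: both register through the same interface, so the pipeline does not care who wrote the step.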

Making context switching less disruptive

Another key benefit of IDPs is that they can cut down on context switching, which is when a developer needs to switch to different apps for different tasks, ultimately taking them out of their productive flow state.  

“Distraction for any knowledge worker in a large enterprise is incredibly costly for the enterprise,” said Harding. “So, focus is important. I think for us, platform engineering — and our platform in general — allows a developer to stay focused on what they’re doing.”

“Context switching will always be needed to some degree,” Haggan went on to say. A developer is never going to be able to sit down for the day and not ever have to change what they’re thinking about or doing. 

“It’s about making it easy to make those transitions and making it simple, so that when I move from planning the work that I’m going to be doing to deploying something or testing something or seeing where it is in the value stream, that feels natural and logical,” Haggan said. 

Harding added that they’ve worked hard to make it easy to navigate between the different parts of the platform so that the user feels like it’s all part of the same overall solution. That ultimately keeps them in the same mental state as much as possible.

The HCL DevOps team has designed the solution with personas in mind; in other words, they considered the different tasks that a particular role might need to switch between throughout the day.

For instance, a quality engineer using a test-driven development approach might start with writing encoded acceptance criteria in a work-item management platform, then move to a CI/CD system to view the results of an automated test, and then move to a test management system to incorporate their test script into a regression suite. 

These tasks span multiple systems, and each system often has its own role-based access control (RBAC), tracking numbers, and user interfaces, which can make the process confusing and time-consuming, Haggan explained. 

“We try to make that more seamless, and tighten that integration across the platform,” said Harding. “I think that’s been a focus area, really looking from the end user’s perspective, how do we tighten the integration based on what they’re trying to accomplish?”

To learn more about how HCL DevOps can help achieve your platform goals and improve development team productivity, visit the website to book a demo and learn about the many capabilities the platform has to offer. 

The post HCL DevOps streamlines development processes with its platform-centric approach appeared first on SD Times.
