SD Times: Software Development News
https://sdtimes.com/

3 common missteps of product-led growth
https://sdtimes.com/softwaredev/3-common-missteps-of-product-led-growth/
Fri, 01 Nov 2024 18:51:26 +0000

The post 3 common missteps of product-led growth appeared first on SD Times.

Product-led growth (PLG) has become the gold standard for SaaS companies aiming to scale rapidly and efficiently. In fact, a 2024 survey from ProductLed.com found that 91% of respondents plan to invest more resources in PLG initiatives this year. As an advocate of this approach, I’ve witnessed firsthand the transformative power of putting the product at the center of customer acquisition and retention strategies.

Admittedly, the path to successful PLG implementation has challenges that can derail even the most promising companies. Organizations transitioning from more traditional enterprise growth models, in particular, may struggle to navigate the shift in dynamics, whether in technology or in leadership. As such, I’d like to explain three common missteps that organizations often encounter when adopting a PLG strategy and discuss how to overcome them. By understanding these pitfalls, organizations can better position themselves to harness the full potential of PLG and drive sustainable growth.

Before I dig in, it’s important to note that it’s a misconception that organizations must choose between a PLG and a sales-led approach. In reality, some companies have succeeded with both. The right mix depends on who the customer is and what level of hybrid motion works for each company. For example, a product-led approach may not be well suited to organizations that rely heavily on an outbound sales motion. For organizations with a strong inbound sales motion, however, PLG can add real value.

With that, I’ll dive into the missteps: 

1. Failing to Maintain a Product-Centric Culture

One of the most critical aspects of PLG is fostering a product-centric culture throughout the organization. This means aligning every department – from engineering and design, to marketing and sales – around the product’s value proposition and user experience. Many companies stumble by treating PLG as merely a go-to-market strategy rather than a holistic approach that permeates the entire organization. This misalignment can lead to inconsistent messaging, disjointed user experiences, and ultimately, a failure to deliver on the promise of PLG.

To succeed, companies should:

  • Prioritize cross-functional collaboration and communication;
  • Invest in continuous product education for all employees; and
  • Empower teams to make data-driven decisions that enhance the product experience.

By fostering a genuine product-centric culture, organizations can ensure that every team member contributes to the overall PLG strategy, creating a cohesive and compelling user journey.

2. Getting Distracted by Individual Customer Requests

In the pursuit of customer satisfaction, it’s easy to fall into the trap of catering to individual customer requests at the expense of the broader product vision. While customer feedback is invaluable, allowing it to dictate product direction entirely can lead to feature bloat and a diluted value proposition.

Successful PLG requires a delicate balance between addressing user needs and maintaining a focused product roadmap. To strike this balance:

  • Develop a process for prioritizing feature requests based on their potential impact on the overall user base;
  • Communicate transparently with customers about product decisions, features, and timelines; and
  • Use data and user research to validate assumptions and guide product development.

By maintaining a clear product vision while remaining responsive to user feedback, companies can create a product that resonates with a broader audience and drives organic growth.

3. Struggling to Balance Stakeholder Needs with Product Vision

PLG doesn’t exist in a vacuum. While the product is the primary growth driver, other stakeholders – including investors, partners, and internal teams – often have their own goals and expectations. Balancing these diverse needs with the overarching product vision can be challenging.

Companies may falter by prioritizing short-term gains over long-term product health or by compromising on user experience to meet arbitrary growth targets. To navigate this challenge:

  • Establish clear, measurable metrics that align with both product and business goals;
  • Educate stakeholders on the principles and benefits of PLG to gain buy-in and support; and
  • Regularly review and adjust the product roadmap to ensure it aligns with both user needs and business objectives.

By fostering alignment between stakeholder expectations and product vision, organizations can create a sustainable PLG strategy that drives both user satisfaction and business growth.

Beyond the Basics: Additional Considerations for PLG Success

While addressing these three common missteps is crucial, there are additional factors that can make or break a PLG strategy:

  • Hiring for PLG expertise: Many organizations underestimate the importance of bringing in specialized talent with PLG experience. Look for individuals with a growth mindset and a track record of success in product-led environments, especially in SaaS.
  • Investing in robust instrumentation: PLG demands a data-driven approach. Ensure you have the right tools and processes in place to collect, analyze, and act on user data effectively.
  • Continuous optimization: Both your product and your acquisition funnel should be subject to ongoing refinement. Establish a culture of experimentation and iteration to drive continuous improvement. A touch of customer obsession cannot hurt, either: by truly understanding the user journey through your product, you can see exactly where customers encounter friction and proactively enhance those touchpoints, informing experiments and leading to a smoother, more satisfying experience.
  • Empowering marketing: While the product leads the way, marketing plays a crucial role in amplifying its reach. Equip your marketing team with the resources and autonomy they need to effectively drive the pipeline.

Product-led growth offers immense potential for SaaS companies looking to scale efficiently and deliver exceptional user experiences. By avoiding these common missteps and focusing on building a truly product-centric organization, companies can unlock the full power of PLG.

Successful PLG is not about perfection from day one. It’s about creating a culture of continuous learning, experimentation, and improvement. By staying true to the core principles of PLG while remaining flexible in its implementation, organizations can build products that not only meet user needs but also drive sustainable business growth.

IBM releases open AI agents for resolving GitHub issues
https://sdtimes.com/softwaredev/ibm-releases-open-ai-agents-for-resolving-github-issues/
Fri, 01 Nov 2024 15:23:47 +0000

The post IBM releases open AI agents for resolving GitHub issues appeared first on SD Times.

IBM is releasing a family of AI agents (IBM SWE-Agent 1.0) that are powered by open LLMs and can resolve GitHub issues automatically, freeing up developers to work on other things rather than getting bogged down by their backlog of bugs that need fixing. 

“For most software developers, every day starts with where the last one left off. Trawling through the backlog of issues on GitHub you didn’t deal with the day before, you’re triaging which ones you can fix quickly, which will take more time, and which ones you really don’t know what to do with yet. You might have 30 issues in your backlog and know you only have time to tackle 10,” IBM wrote in a blog post. This new family of agents aims to alleviate this burden and shorten the time developers are spending on these tasks. 

One of the agents is a localization agent that can find the file and line of code causing an error. According to IBM, finding the correct line of code related to a bug report can be time-consuming for developers; now they’ll be able to tag the bug report they’re working on in GitHub with “ibm-swe-agent-1.0” and the agent will work to find the code.

Once found, the agent suggests a fix that the developer could implement. At that point, the developer can either fix the issue themselves or enlist the help of other SWE agents for further assistance.
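The tagging step described above can also be scripted against the GitHub REST API. A hedged sketch that builds (but does not send) the request applying the label from the article; the repo, issue number, and token are hypothetical:

```python
import json
import urllib.request

def build_label_request(owner: str, repo: str, issue: int,
                        labels: list[str], token: str) -> urllib.request.Request:
    """Build (but do not send) the GitHub REST call that adds labels to an issue."""
    url = f"https://api.github.com/repos/{owner}/{repo}/issues/{issue}/labels"
    body = json.dumps({"labels": labels}).encode()
    return urllib.request.Request(
        url,
        data=body,
        method="POST",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.github+json",
        },
    )

# Tag a (hypothetical) issue so the localization agent can pick it up.
req = build_label_request("acme", "webapp", 42, ["ibm-swe-agent-1.0"], "YOUR_TOKEN")
```

Passing the request to `urllib.request.urlopen` with a valid token would apply the label; everything except the label name is an assumption here.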

Other agents in the SWE family include one that edits lines of code based on developer requests and one that can be used to develop and execute tests. All of the SWE agents can be invoked directly from within GitHub.

According to IBM’s early testing, these agents can localize and fix problems in less than five minutes and have a 23.7% success rate on SWE-bench, a benchmark that measures an AI system’s ability to solve GitHub issues.

IBM explained that it set out to create SWE agents as an alternative to other competitors who use large frontier models, which tend to cost more. “Our goal was to build IBM SWE-Agent for enterprises who want a cost efficient SWE agent to run wherever their code resides — even behind your firewall — while still being performant,” said Ruchir Puri, chief scientist at IBM Research.

ChatGPT can now include web sources in responses
https://sdtimes.com/ai/chatgpt-can-now-include-web-sources-in-responses/
Thu, 31 Oct 2024 19:26:15 +0000

The post ChatGPT can now include web sources in responses appeared first on SD Times.

OpenAI is updating ChatGPT so that its responses include results from the web, bringing the power of the search engine directly into the chat interface.

“This blends the benefits of a natural language interface with the value of up-to-date sports scores, news, stock quotes, and more,” OpenAI wrote in a post.

According to OpenAI, ChatGPT will automatically decide whether a web search is warranted based on the prompt. Users can also directly tell it to search the web by selecting the web search icon under the prompt field.  

Chats will include a link to the web source so that the user can visit that site for more information. A new Sources panel on the right-hand side of the chat will list all sources.

OpenAI partnered with specific news and data providers to get up-to-date information and visual designs for weather, stocks, sports, news, and maps. For instance, asking about the weather will produce a graphic showing the five-day forecast, and stock questions will include a chart of that stock’s performance.

Some partners OpenAI worked with include Associated Press, Axel Springer, Condé Nast, Dotdash Meredith, Financial Times, GEDI, Hearst, Le Monde, News Corp, Prisa (El País), Reuters, The Atlantic, Time, and Vox Media.

“ChatGPT search connects people with original, high-quality content from the web and makes it part of their conversation. By integrating search with a chat interface, users can engage with information in a new way, while content owners gain new opportunities to reach a broader audience,” OpenAI wrote. 

This feature is available on chatgpt.com, the desktop app, and the mobile app. It is available today to ChatGPT Plus and Team subscribers and people on the SearchGPT waitlist. In the next few weeks it should be available to Enterprise and Edu users, and in the next few months, all Free users will get access as well.

Gemini responses can now be grounded with Google Search results
https://sdtimes.com/ai/gemini-responses-can-now-be-grounded-with-google-search-results/
Thu, 31 Oct 2024 17:45:00 +0000

The post Gemini responses can now be grounded with Google Search results appeared first on SD Times.

Google is announcing that the Gemini API and Google AI Studio now both offer the ability to ground models using Google Search, which will improve the accuracy and reliability of Gemini’s responses. 

By grounding the responses with Google Search results, responses can have fewer hallucinations, more up-to-date information, and richer information. Grounded responses also include links to the sources they are using. 

“By providing supporting links, grounding brings transparency to AI applications, making them more trustworthy and encouraging users to click on the underlying sources to find out more,” Google wrote in a blog post.

This new capability supports dynamic retrieval, meaning that Gemini will assess whether grounding is necessary, as not all queries need the extra assistance and it does add cost and latency. For every prompt, Gemini generates a prediction score, a measure of how beneficial grounding would be, and developers can adjust the prediction score threshold to what works best for their application.
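The dynamic-retrieval decision reduces to a score-versus-threshold comparison. A hypothetical sketch of that logic (the real scoring happens on Google's side; the 0.3 default is an assumption here):

```python
def should_ground(prediction_score: float, threshold: float = 0.3) -> bool:
    # Ground with Google Search only when the model's predicted benefit
    # of searching meets the application's threshold. Raising the
    # threshold trades freshness for lower cost and latency.
    return prediction_score >= threshold

should_ground(0.1)  # likely a static-knowledge prompt: skip the search
should_ground(0.9)  # likely needs fresh information: ground the response
```

An application that mostly answers evergreen questions might raise the threshold; a news-heavy one might lower it.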

Currently, grounding only supports text prompts and does not support multimodal prompts, like text-and-image or text-and-audio. It is available in all of the languages Gemini currently supports. 

Google’s documentation on grounding provides instructions on how to configure Gemini models to use this new capability. 

Google open sources Java-based differential privacy library
https://sdtimes.com/data/google-open-sources-java-based-differential-privacy-library/
Thu, 31 Oct 2024 15:33:10 +0000

The post Google open sources Java-based differential privacy library appeared first on SD Times.

Google has announced that it is open sourcing a new Java-based differential privacy library called PipelineDP4j.

Differential privacy, according to Google, is a privacy-enhancing technology (PET) that “allows for analysis of datasets in a privacy-preserving way to help ensure individual information is never revealed.” This enables researchers or analysts to study a dataset without accessing personal data. 
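PipelineDP4j's internals aren't described in the post, but the core idea behind differential privacy can be sketched in a few lines. A minimal, illustrative (not production-grade) Laplace-mechanism count in Python, where the noise hides any single individual's contribution:

```python
import random

def dp_count(records, predicate, epsilon: float) -> float:
    """Release a count with Laplace noise of scale 1/epsilon; the
    sensitivity of a counting query is 1, so this is epsilon-DP."""
    true_count = sum(1 for r in records if predicate(r))
    scale = 1.0 / epsilon
    # Laplace(0, b) noise sampled as the difference of two exponentials.
    noise = scale * (random.expovariate(1.0) - random.expovariate(1.0))
    return true_count + noise

ages = [17, 22, 35, 64, 15]
noisy_adults = dp_count(ages, lambda a: a >= 18, epsilon=0.5)
```

Smaller epsilon means more noise and stronger privacy; real libraries like Google's also handle contribution bounding and privacy-budget accounting, which this sketch omits.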

Google claims that its implementation of differential privacy is the largest in the world, spanning nearly three billion devices. As such, Google has invested heavily in providing access to its differential privacy technologies over the last several years. For instance, in 2019, it open sourced its first differential privacy library, and in 2021, it open sourced its Fully Homomorphic Encryption transpiler.

In the years since, the company has also worked to expand the languages its libraries are available in, which is the basis for today’s news. 

The new library, PipelineDP4j, enables developers to execute highly parallelizable computations in Java, which reduces the barrier to differential privacy for Java developers, Google explained.

“With the addition of this JVM release, we now cover some of the most popular developer languages – Python, Java, Go, and C++ – potentially reaching more than half of all developers worldwide,” Miguel Guevara, product manager on the privacy team at Google, wrote in a blog post.

The company also announced that it is releasing another library, DP-Auditorium, that can audit differential privacy algorithms. 

According to Google, two key steps are needed to effectively test differential privacy: evaluating the privacy guarantee over a fixed dataset and finding the “worst-case” privacy guarantee in a dataset. DP-Auditorium provides tools for both of those steps in a flexible interface. 

It uses samples from the differential privacy mechanism itself and doesn’t need access to the application’s internal properties, Google explained. 

“We’ll continue to build on our long-standing investment in PETs and commitment to helping developers and researchers securely process and protect user data and privacy,” Guevara concluded. 

Opsera and Databricks partner to automate data orchestration
https://sdtimes.com/data/opsera-and-databricks-partner-to-automate-data-orchestration/
Wed, 30 Oct 2024 19:38:27 +0000

The post Opsera and Databricks partner to automate data orchestration appeared first on SD Times.

Opsera, the Unified DevOps platform powered by Hummingbird AI trusted by top Fortune 500 companies, today announced that it has partnered with Databricks, the Data and AI company, to empower software and DevOps engineers to deliver software faster, safer and smarter through AI/ML model deployments and schema rollback capabilities.

Opsera leverages its DevOps platform and integrations and builds AI agents and frameworks to revolutionize the software delivery management process with a unique approach to automating data orchestration.

Opsera is now part of Databricks’ Built on Partner Program and Technology Partner Program.

The partnership enables:

  • AI/ML Model Deployments with Security and Compliance Guardrails: Opsera ensures that model training and deployment using Databricks infrastructure meets security and quality guardrails and thresholds before deployment. Proper model training allows customers to optimize Databricks Mosaic AI usage and reduce deployment risks.

  • Schema Deployments with Rollback Capabilities: Opsera facilitates controlled schema deployments in Databricks with built-in rollback features for enhanced flexibility and confidence. Customers gain better change management and compliance tracking and reduce unfettered production deployments, leading to increased adoption of Databricks and enhanced value of automation pipelines.

“The development of advanced LLM models and Enterprise AI solutions continues to fuel an insatiable demand for data,” said Torsten Volk, Principal Analyst at Enterprise Strategy Group. “Partnerships between data management and data orchestration vendors to simplify the ingestion and ongoing management of these vast flows of data are necessary responses to these complex and extremely valuable AI efforts.”

Additional benefits of the Opsera and Databricks partnership include:

  • Powerful ETL (Extract, Transform, Load) Capabilities: Databricks’ Spark-based engine enables efficient ETL from various sources into a centralized data lake. This empowers Opsera to collect and orchestrate vast amounts of data, increasing developer efficiency and accelerating data processing.

  • Scalable and Flexible Data Intelligence Platform: Databricks’ Delta UniForm and Unity Catalog provide a scalable, governed, interoperable, and reliable Data Lakehouse solution, enabling Opsera to orchestrate large volumes of structured and unstructured data efficiently.

  • Advanced Analytics and ML: Databricks Mosaic AI’s integrated machine learning capabilities allow Opsera to efficiently build and deploy AI/ML models for predictive analytics, anomaly detection and other advanced use cases.

  • Seamless Integration: Databricks integrates seamlessly with Opsera’s existing technology stack, facilitating smooth data flow and enabling end-to-end visibility of the DevOps platform.
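The ETL pattern named above can be sketched generically. Plain Python stands in for Databricks' Spark engine here, and all names and data are illustrative:

```python
import csv, io, json

def extract(csv_text: str) -> list[dict]:
    """Extract: parse raw CSV pulled from a source system."""
    return list(csv.DictReader(io.StringIO(csv_text)))

def transform(rows: list[dict]) -> list[dict]:
    """Transform: normalize types and drop malformed records."""
    return [{"service": r["service"], "build_secs": int(r["build_secs"])}
            for r in rows if r["build_secs"].isdigit()]

def load(rows: list[dict]) -> str:
    """Load: serialize to the lake's storage format (JSON Lines here)."""
    return "\n".join(json.dumps(r) for r in rows)

raw = "service,build_secs\napi,142\nweb,n/a\nworker,97\n"
lake = load(transform(extract(raw)))
```

In a Spark-based pipeline the same three stages would be distributed DataFrame operations rather than list comprehensions, but the shape of the flow is the same.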

Tabnine’s new Code Review Agent validates code based on a dev team’s unique best practices and standards
https://sdtimes.com/ai/tabnines-new-code-review-agent-validates-code-based-on-a-dev-teams-unique-best-practices-and-standards/
Wed, 30 Oct 2024 15:24:58 +0000

The post Tabnine’s new Code Review Agent validates code based on a dev team’s unique best practices and standards appeared first on SD Times.

The AI coding assistant provider Tabnine is releasing a private preview for its Code Review Agent, a new AI-based tool that validates software based on the development team’s unique best practices and standards for software development. 

According to Tabnine, using AI to review code is nothing new, but many of the tools currently available check code against general standards. However, software development teams often have their own unique ways of creating software. “What one team sees as their irrefutable standard, another team might reject outright. For AI to add meaningful value in improving software quality for most teams, it must have the same level of understanding as a fully onboarded, senior member of the team,” Tabnine explained in a blog post.

Code Review Agent allows teams to create rules based on their own standards, best practices, and company policies. These rules are then applied during code review at the pull request or in the IDE.

Development teams can provide the parameters their code should comply with in natural language, and Tabnine works behind the scenes to convert that into a set of rules. Tabnine also offers a set of predefined rules that can be incorporated into the ruleset as well. 

For example, one of Tabnine’s predefined rules is “Only use SHA256 to securely hash data” and a customer-specific rule is “Only use library acme_secure_api_access for accessing external APIs, do not use standard http libraries.”
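A crude way to picture what such rules check for is a pattern scan over a proposed change. This is only a naive regex stand-in for the two example rules quoted above; Tabnine's actual agent interprets natural-language rules with an LLM, not patterns:

```python
import re

# Regex approximations of the two example rules; illustrative only.
RULES = [
    (re.compile(r"hashlib\.(md5|sha1)\b"),
     "Only use SHA256 to securely hash data"),
    (re.compile(r"\b(requests|urllib|http\.client)\b"),
     "Only use library acme_secure_api_access for accessing external APIs"),
]

def review(diff: str) -> list[str]:
    """Return the rules violated by any line of a proposed change."""
    return [rule for line in diff.splitlines()
            for pattern, rule in RULES if pattern.search(line)]

findings = review("import requests\ndigest = hashlib.md5(payload)")
```

The point of an LLM-based reviewer is precisely that rules like these can stay in natural language instead of being hand-translated into brittle patterns.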

When a developer creates a pull request that doesn’t meet the established rules, Code Review Agent flags the issue to the code reviewer and offers suggestions on how to fix the problem.

“By comprehensively reading through code and ensuring that it matches each team’s unique expectations, Tabnine saves engineering teams significant time and effort while applying a level of rigor in code review that was never possible with static code analysis. Just like AI code generation automates away simpler coding tasks so developers can focus on more valuable tasks, Tabnine’s AI Code Review agent automates common review tasks, freeing up code reviewers to focus on higher-order analysis instead of adherence to best practices,” Tabnine wrote. 

This tool is currently available as a private preview to Tabnine Enterprise customers. Tabnine has also shared an example video of Code Review Agent in action.

Creatio Unveils “Energy” Release, Marking a New Era of Business Automation
https://sdtimes.com/creatio-unveils-energy-release-marking-a-new-era-of-business-automation-2/
Wed, 30 Oct 2024 15:06:07 +0000

The post Creatio Unveils “Energy” Release, Marking a New Era of Business Automation appeared first on SD Times.

Creatio, a global vendor of a no-code platform to automate workflows and CRM with a maximum degree of freedom, today unveiled its most innovative release yet – Creatio Energy 8.2. This launch marks a new era of automation, where AI and no-code together set a modern market standard, delivering unprecedented speed, agility, autonomy, and a remarkable increase in productivity.

Businesses are often overwhelmed by the cost and complexity of traditional SaaS applications, which are slow to implement, overengineered, and have low adoption rates. In contrast, Creatio “Energy” heralds a new era of enterprise software – built on no-code and AI – that delivers greater economic value, provides engaging user experiences, and replaces static forms and data with conversational prompts that drive deep insights. This new approach enables businesses to realize productivity savings of up to 80% for key knowledge worker roles, unlocking new levels of efficiency.

“With the launch of ‘Energy,’ Creatio continues to disrupt the business automation landscape with no-code and AI,” said Katherine Kostereva, CEO at Creatio. “The release combines agentic, generative, and prescriptive AI with our no-code tools, empowering business technologists to innovate and optimize operations like never before. Creatio Copilot delivers a unified AI architecture with a robust set of the latest AI capabilities, all easily configured with no-code tools and ready to use from day one.”

“With Creatio modern technology, we’ve been able to realize value extremely fast, boost front-office productivity, increase average order size, and streamline our commercial processes – all without the need for IT and development resources,” adds Jim Slomka, Chief Revenue Officer at BSN Sports and a Creatio customer.

According to the September 2024 Forrester report, The Four Agreements of Modern Business Apps, “AI is the force that most clearly marks the upcoming new era of business apps. To survive — and even thrive — in this new era, vendors must reimagine business apps to offer greater streams of value. They must become truly intelligent, dynamic, adaptable, and composable, be powered by cloud platforms, and offer AI, low-code, and marketplaces.”

Key features and enhancements of Creatio Energy 8.2:

No-Code AI Skill Development: Creatio Copilot now supports AI Skills, which are the building blocks that enable Copilot to execute specific intelligent tasks. With Creatio’s no-code tools, users can effortlessly create new AI Skills using natural language, with no coding required. This makes AI accessible to all employees, regardless of technical expertise. Energy also adds over 80 new no-code feature enhancements to further improve no-code productivity.

Unified Agentic, Generative and Prescriptive AI: Creatio Copilot also introduces a new AI Command Center that integrates all three AI types — prescriptive, generative, and agentic — into a single platform. This unified approach provides organizations with the ability to design, deploy, and refine AI Skills without specialized technical expertise.

Modern CRM with Pre-Built AI Skills: The new release seamlessly embeds over 20 pre-configured AI Skills into sales, marketing, and customer service processes, enabling intelligent automation that reduces friction, increases efficiency, and enhances customer engagement. This list will evolve rapidly as new AI Skills are published by Creatio and its ecosystem of partners. Energy also adds over 100 new enhancements to CRM processes to improve user experience and drive greater automation.

Accelerated Adoption: Unlike traditional AI platforms that impose hidden fees and complex user licensing or usage costs, Creatio “Energy” accelerates user adoption by including cutting-edge AI as part of its base software license, providing a clear and predictable cost structure for organizations scaling their AI investments. AI Command Center provides tools that give administrators full visibility into AI Skills adoption, including the ability to track consumption and users of each AI Skill.

Taken together, Creatio Energy represents a paradigm shift for no-code, moving beyond being a mere toolset for building applications to becoming an intelligent co-creator that actively collaborates with users. By integrating AI into every stage of app development, Creatio enables organizations to accelerate their innovation cycles, enhance customer experiences, and shorten time-to-value. This transformation allows companies to reimagine what’s possible, positioning them to stay ahead of the curve.

Crowdbotics unveils extension for GitHub Copilot to improve acceptance rate of suggestions
https://sdtimes.com/ai/crowdbotics-unveils-extension-for-github-copilot-to-improve-acceptance-rate-of-suggestions/
Tue, 29 Oct 2024 18:18:22 +0000

The post Crowdbotics unveils extension for GitHub Copilot to improve acceptance rate of suggestions appeared first on SD Times.

Crowdbotics today released an extension for GitHub Copilot, available now through the GitHub and Azure Marketplaces. The Crowdbotics platform uses AI to help business stakeholders and IT collaborate and generate high-quality requirements definitions for application development projects. The platform further uses AI to turn these business requirements into technical requirements and implementation recommendations.

The new Crowdbotics extension for GitHub Copilot takes advantage of all the requirements and context in the Crowdbotics platform to help developers generate more accurate code with Copilot. Integrated with GitHub Copilot Chat, the extension enables developers to benefit from this accuracy improvement without ever having to leave their development environment.

A recent joint research study conducted by Crowdbotics, GitHub, and Microsoft using a subset of the Crowdbotics extension features found that injecting business requirements from Crowdbotics PRD AI into GitHub Copilot’s neighboring-tab context model improved Copilot’s code suggestion acceptance rate by 14 percentage points, a 51% relative improvement. Additionally, the study found that developers using this multi-model configuration were 25% more likely to succeed at feature development than non-AI-assisted developers. The now-publicly-available Crowdbotics extension has this feature built in, along with several other features to help developers stay “in flow” longer.
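The two reported figures are mutually consistent: a 14-percentage-point gain that amounts to a 51% relative improvement implies a baseline acceptance rate of roughly 27.5%. A quick back-of-the-envelope check makes the relationship explicit (the baseline here is inferred from the two published numbers, not a figure reported by the study):

```python
# Reported figures from the Crowdbotics/GitHub/Microsoft study
delta_pp = 0.14       # absolute gain: 14 percentage points
relative_gain = 0.51  # the same gain expressed relative to the baseline: 51%

# The baseline implied by the two figures: delta / baseline = relative gain
implied_baseline = delta_pp / relative_gain
print(f"implied baseline acceptance rate: {implied_baseline:.1%}")   # ~27.5%

# The acceptance rate after adding PRD AI context
new_rate = implied_baseline + delta_pp
print(f"implied post-integration rate: {new_rate:.1%}")              # ~41.5%
```

The distinction matters when reading benchmark claims: "improved by 14%" could mean either a 14-point absolute gain or a 14% relative one, and here the 51% figure disambiguates it as the former.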

“The Crowdbotics extension for GitHub Copilot achieves what both GitHub and Crowdbotics aim to do: improve developers’ lives by making their code smarter and more accurate,” said Anand Kulkarni, CEO at Crowdbotics. “Product requirements are the holy grail when it comes to making coding more efficient, so harnessing the power of this extension is a no-brainer for any developer looking to speed up their workflows without compromising context or accuracy.”

Benefits of the Crowdbotics extension include:

  • Decomposition of features into layers such as front end, back end, business logic, data schema, and third-party integrations.

  • Technical recommendations for the integrations best suited to the app and development team.

  • Seamless connections between developers and PRDs, without disrupting workflows or switching between windows.

The Crowdbotics extension is available now, with a free 30-day trial available through GitHub Marketplace.

The post Crowdbotics unveils extension for GitHub Copilot to improve acceptance rate of suggestions appeared first on SD Times.

]]>
GitHub Copilot now offers access to new Anthropic, Google, and OpenAI models https://sdtimes.com/ai/github-copilot-now-offers-access-to-anthropic-google-and-openai-models/ Tue, 29 Oct 2024 16:33:22 +0000 https://sdtimes.com/?p=55931 GitHub is hosting its annual user conference, GitHub Universe, today and tomorrow, and has announced a number of new AI capabilities that will enable developers to build applications more quickly, securely, and efficiently.  Many of the updates were across GitHub Copilot. First up, GitHub announced that users now have access to more model choices thanks … continue reading

The post GitHub Copilot now offers access to new Anthropic, Google, and OpenAI models appeared first on SD Times.

]]>
GitHub is hosting its annual user conference, GitHub Universe, today and tomorrow, and has announced a number of new AI capabilities that will enable developers to build applications more quickly, securely, and efficiently. 

Many of the updates were across GitHub Copilot. First up, GitHub announced that users now have access to more model choices thanks to partnerships with Anthropic, Google, and OpenAI. Newly added model options include Anthropic’s Claude 3.5 Sonnet, Google’s Gemini 1.5 Pro, and OpenAI’s GPT-4o, o1-preview, and o1-mini. 

By offering developers more choices, GitHub is enabling them to choose the model that works best for their specific use case, the company explained.

“In 2024, we experienced a boom in high-quality large and small language models that each individually excel at different programming tasks. There is no one model to rule every scenario, and developers expect the agency to build with the models that work best for them,” said Thomas Dohmke, CEO of GitHub. “It is clear the next phase of AI code generation will not only be defined by multi-model functionality, but by multi-model choice. Today, we deliver just that.”

Copilot Workspace has a number of new features as well, like a build and repair agent, brainstorming mode, integrations with VS Code, and iterative feedback loops. 

GitHub Models, which enables developers to experiment with different AI models, has a number of features now in public preview, including side-by-side model comparison, support for multi-modal models, the ability to save and share prompts and parameters, and additional cookbooks and SDK support in GitHub Codespaces.

Copilot Autofix, which analyzes code vulnerabilities and suggests fixes, has added security campaigns, enabling developers to triage up to 1,000 alerts at once and filter them by type, severity, repository, and team. The company also added integrations with ESLint, JFrog SAST, and Black Duck Polaris. Both security campaigns and these partner integrations are available in public preview. 

Other new features in GitHub Copilot include code completion in Copilot for Xcode (in public preview), a code review capability, and the ability to customize Copilot Chat responses based on a developer’s preferred tools, organizational knowledge, and coding conventions.

In terms of what’s coming next, starting November 1, developers will be able to edit multiple files at once using Copilot Chat in VS Code. Then, in early 2025, Copilot Extensions will be generally available, enabling developers to integrate their other developer tools into GitHub Copilot, like Atlassian Rovo, Docker, Sentry, and Stack Overflow.

The company also announced a technical preview for GitHub Spark, an AI tool for building fully functional micro apps (called “sparks”) solely using text prompts. Each spark can integrate external data sources without requiring the creator to manage cloud resources. 

While developers can make changes to sparks by diving into the code, any user can iterate and make changes entirely using natural language, reducing the barrier to application development. 

Finished sparks can be run immediately on the user’s desktop, tablet, or mobile device, or shared with others, who can use them or even build upon them. 

“With Spark, we will enable over one billion personal computer and mobile phone users to build and share their own micro apps directly on GitHub—the creator network for the Age of AI,” said Dohmke.

And finally, the company revealed the results of its Octoverse report, which provides insights into the world of open source development by studying public activity on GitHub. 

Some key findings were that Python is now the most used language on the platform, AI usage is up 98% since last year, and the number of developers worldwide continues to grow, particularly across Africa, Latin America, and Asia. 

The post GitHub Copilot now offers access to new Anthropic, Google, and OpenAI models appeared first on SD Times.

]]>