HCL DevOps streamlines development processes with its platform-centric approach
SD Times, Thu, 24 Oct 2024 13:00:55 +0000
https://sdtimes.com/devops/hcl-devops-streamlines-development-processes-with-its-platform-centric-approach/

Platform engineering has been gaining quite a lot of traction lately — and for good reason. The benefits to development teams are many, and it could be argued that platform engineering is a natural evolution of DevOps, so it’s not a huge cultural change to adapt to. 

According to Jonathan Harding, Senior Product Manager of Value Stream Management at HCLSoftware, in an era where organizations have become so focused on how to be more productive, this discipline has gained popularity because “it gets new employees productive quickly, and it gets existing employees able to deliver quickly and in a way that is relatively self-sufficient.”

Platform engineering teams work to build an internal developer portal (IDP), which is a self-service platform that developers can use to make certain parts of their job easier. For example, rather than a developer needing to contact IT and waiting for them to provision infrastructure, that developer would interact with the IDP to get that infrastructure provisioned.
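As a rough sketch of what that self-service interaction looks like, consider the minimal request handler below. The field names, environment names, and the `provision` helper are hypothetical illustrations, not part of any specific IDP product:

```python
# Minimal sketch of an IDP self-service provisioning request.
# All field names and environments here are hypothetical examples.

ALLOWED_ENVIRONMENTS = {"dev", "staging", "prod"}

def provision(request: dict) -> dict:
    """Validate a developer's self-service request and simulate fulfillment.

    In a real IDP this would call an infrastructure-as-code backend;
    here it only returns a ticket-like record so the flow is visible.
    """
    missing = {"team", "environment", "size"} - request.keys()
    if missing:
        raise ValueError(f"missing fields: {sorted(missing)}")
    if request["environment"] not in ALLOWED_ENVIRONMENTS:
        raise ValueError(f"unknown environment: {request['environment']}")
    return {
        "status": "provisioned",
        "resource": f"{request['team']}-{request['environment']}-{request['size']}",
    }

result = provision({"team": "payments", "environment": "dev", "size": "small"})
print(result["resource"])  # payments-dev-small
```

The point of the sketch is the shape of the interaction: the developer fills in a few fields and gets infrastructure back without filing a ticket with IT.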

Essentially, an IDP is a technical implementation of a DevOps objective, explained Chris Haggan, Head of HCL DevOps at HCLSoftware.

“DevOps is about collaboration and agility of thinking, and platform engineering is the implementation of products like HCL DevOps that enable that technical delivery aspect,” Haggan said.

Haggan looks at platform engineering from the perspective of having a general strategy and then bringing in elements of DevOps to provide a holistic view of that objective. 

“I want to get this idea that a customer has given me out of the ideas bucket and into production as quickly as I can. And how do I do that? Well, some of that is going to be about the process, the methodology, and the ways of working to get that idea quickly through the delivery lifecycle, and some of that is going to be about having a technical platform that underpins that,” said Haggan. 

IDPs typically include several different functionalities and toolchains, acting as a one-stop shop for everything a developer might need. From a single platform, they might be able to create infrastructure, handle observability, or set up new development environments. The benefits are similar in HCL DevOps, but by coming in a ready-to-use, customizable package, development teams don’t have to go through the process of developing the IDP and can skip right to the benefits. 

Haggan explained that the costs of building and maintaining a platform engineering system are not inconsequential. For instance, they need to integrate multiple software delivery systems and figure out where to store metrics, SDLC events, and other data, which often requires setup and administration of a new database. 
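To give a sense of the setup involved, storing SDLC events from multiple delivery systems typically means standing up a schema along these lines. This is a generic sketch using an in-memory database; the table and field names are illustrative, not HCL's data model:

```python
import sqlite3

# Generic sketch of a store for SDLC events pulled from multiple
# delivery systems; the schema and fields are illustrative only.
conn = sqlite3.connect(":memory:")
conn.execute("""
    CREATE TABLE sdlc_events (
        id INTEGER PRIMARY KEY,
        source TEXT NOT NULL,      -- e.g. a CI system or test platform
        event_type TEXT NOT NULL,  -- e.g. build, deploy, test_run
        payload TEXT,
        occurred_at TEXT NOT NULL
    )
""")
conn.execute(
    "INSERT INTO sdlc_events (source, event_type, payload, occurred_at) "
    "VALUES (?, ?, ?, ?)",
    ("ci", "build", '{"result": "pass"}', "2024-10-24T13:00:00Z"),
)
count = conn.execute("SELECT COUNT(*) FROM sdlc_events").fetchone()[0]
print(count)  # 1
```

Even this toy version hints at the ongoing cost Haggan describes: someone has to own the schema, the integrations feeding it, and the database itself.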

Plus, teams sometimes design a software delivery system that incorporates their own cultural nuances, which can be helpful, but other times “they reflect unnecessary cultural debt that has accumulated within an organization for years,” said Haggan.

HCL DevOps consists of multifaceted solutions, with the three most popular being:

  • HCL DevOps Test: An automated testing platform that covers UI, API, and performance testing, and provides testing capabilities like virtual services and test data creation.
  • HCL DevOps Deploy: A fully automated CI/CD solution that supports a variety of architectures, including distributed multi-tier, mobile, mainframe, and microservices. 
  • HCL DevOps Velocity: The company’s value stream management offering that pulls in data from across the SDLC to provide development teams with useful insights.

Haggan admitted that he’s fully aware that organizations will want to customize and add new capabilities, so it’s never going to be just their platform that’s in play. But the benefit they can provide is that customers can use HCL DevOps as a starting point and then build from there. 

“We’re trying to be incredibly open as an offering and allow customers to take advantage of the tools that they have,” Haggan said. “We’re not saying you have to work only with us. We’re fully aware that organizations have their own existing workflows, and we’re going to work with that.”

To that end, HCL offers plugins that connect with other software. For instance, HCL DevOps Deploy currently has about 200 different plugins that could be used, and customers can also create their own, Harding explained. 

The plugin catalog is curated by the HCL DevOps technical team, but also has contributions from the community submitted through GitHub. 

Making context switching less disruptive

Another key benefit of IDPs is that they can cut down on context switching, which is when a developer needs to switch to different apps for different tasks, ultimately taking them out of their productive flow state.  

“Distraction for any knowledge worker in a large enterprise is incredibly costly for the enterprise,” said Harding. “So, focus is important. I think for us, platform engineering — and our platform in general — allows a developer to stay focused on what they’re doing.”

“Context switching will always be needed to some degree,” Haggan went on to say. A developer is never going to be able to sit down for the day and not ever have to change what they’re thinking about or doing. 

“It’s about making it easy to make those transitions and making it simple, so that when I move from planning the work that I’m going to be doing to deploying something or testing something or seeing where it is in the value stream, that feels natural and logical,” Haggan said. 

Harding added that they’ve worked hard to make it easy to navigate between the different parts of the platform so that the user feels like it’s all part of the same overall solution. That aspect ultimately keeps them in that same mental state as best as possible.

The HCL DevOps team has designed the solution with personas in mind, thinking about the different tasks that a particular role might need to switch between throughout the day.

For instance, a quality engineer using a test-driven development approach might start with writing encoded acceptance criteria in a work-item management platform, then move to a CI/CD system to view the results of an automated test, and then move to a test management system to incorporate their test script into a regression suite. 

These tasks span multiple systems, and each system often has its own role-based access control (RBAC), tracking numbers, and user interfaces, which can make the process confusing and time-consuming, Haggan explained. 

“We try to make that more seamless, and tighten that integration across the platform,” said Harding. “I think that’s been a focus area, really looking from the end user’s perspective, how do we tighten the integration based on what they’re trying to accomplish?”

To learn more about how HCL DevOps can help achieve your platform goals and improve development team productivity, visit the website to book a demo and learn about the many capabilities the platform has to offer. 

JFrog helps developers improve DevSecOps with new solutions and integrations
SD Times, Tue, 10 Sep 2024 16:48:15 +0000
https://sdtimes.com/devops/jfrog-helps-developers-improve-devsecops-with-new-solutions-and-integrations/

At its annual user conference, swampUp, the DevOps company JFrog announced new solutions and integrations with companies like GitHub and NVIDIA to enable developers to improve their DevSecOps capabilities and bring LLMs to production quickly and safely. 

JFrog Runtime is a new security solution that enables developers to discover vulnerabilities in runtime environments. It monitors Kubernetes clusters in real time to identify, prioritize, and remediate security incidents based on their risk.

It provides developers with a method to track and manage packages, organize repositories by environment types, and activate JFrog Xray policies. Other benefits include centralized incident awareness, comprehensive analytics for workloads and containers, and continuous monitoring of post-deployment threats like malware or privilege escalation.
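Risk-based prioritization of runtime findings can be sketched as a simple scoring pass. The severity weights and the `running` / `internet_exposed` fields below are invented for illustration; they are not JFrog Runtime's actual scoring model:

```python
# Sketch of ranking runtime security findings by risk.
# Severity weights and finding fields are illustrative assumptions,
# not JFrog Runtime's actual scoring model.

SEVERITY_WEIGHT = {"critical": 4, "high": 3, "medium": 2, "low": 1}

def risk_score(finding: dict) -> int:
    """Weight raw severity by runtime context: a vulnerability in a
    running, internet-exposed workload outranks one in an idle image."""
    score = SEVERITY_WEIGHT[finding["severity"]]
    if finding.get("running"):
        score *= 2
    if finding.get("internet_exposed"):
        score *= 2
    return score

findings = [
    {"id": "CVE-A", "severity": "critical", "running": False},
    {"id": "CVE-B", "severity": "medium", "running": True, "internet_exposed": True},
]
ranked = sorted(findings, key=risk_score, reverse=True)
print(ranked[0]["id"])  # CVE-B
```

Note how runtime context flips the ordering: the medium-severity issue in a live, exposed workload outranks the critical one sitting in an idle image, which is the argument for monitoring at runtime rather than relying on scan-time severity alone.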

“By empowering DevOps, Data Scientists, and Platform engineers with an integrated solution that spans from secure model scanning and curation on the left to JFrog Runtime on the right, organizations can significantly enhance the delivery of trusted software at scale,” said Asaf Karas, CTO of JFrog Security.

Next, the company announced an expansion to its partnership with GitHub. New integrations will provide developers with better visibility into project status and security posture, allowing them to address potential issues more rapidly. 

JFrog customers now get access to GitHub’s Copilot chat extension, which can help them select software packages that have already been updated, approved by the organization, and safe for use. 

It also provides a unified view of security scan results from GitHub Advanced Security and JFrog Advanced Security, a job summary page that shows the health and security status of GitHub Actions Workflows, and dynamic project mapping and authentication. 

Finally, the company announced a partnership with NVIDIA, integrating NVIDIA NIM microservices with the JFrog Platform and JFrog Artifactory model registry. 

According to JFrog, this integration will “combine GPU-optimized, pre-approved AI models with centralized DevSecOps processes in an end-to-end software supply chain workflow.” The end result will be that developers can bring LLMs to production quickly while also maintaining transparency, traceability, and trust. 

Benefits include unified management of NIM containers alongside other assets, continuous scanning, accelerated computing through NVIDIA’s infrastructure, and flexible deployment options with JFrog Artifactory. 

“As enterprises scale their generative AI deployments, a central repository can help them rapidly select and deploy models that are approved for development,” said Pat Lee, vice president of enterprise strategic partnerships at NVIDIA. “The integration of NVIDIA NIM microservices into the JFrog Platform can help developers quickly get fully compliant, performance-optimized models running in production.”

Data scientists and developers need a better working relationship for AI
SD Times, Tue, 06 Aug 2024 20:02:41 +0000
https://sdtimes.com/data/data-scientists-and-developers-need-a-better-working-relationship-for-ai/

Good teamwork is key to any successful AI project, but combining data scientists and software engineers into an effective force is no easy task.

According to Gartner, 30 percent of AI projects will be abandoned by the end of 2025 owing to factors such as poor data quality, escalating costs, and a lack of business value. Data scientists are pessimistic, too, expecting just 22 percent of their projects to make it through to deployment.

Much of the debate on turning these poor figures around by delivering better AI has focused on technology, but little attention has been paid to improving the relationship between the scientists and engineers responsible for producing AI in the first place.

This is surprising because although both are crucial to AI, their working practices don’t exactly align — in fact they can be downright incompatible. Failing to resolve these differences can scupper project delivery, jeopardize data security and threaten to break machine learning models in production.

Data scientists and software engineers need a better working relationship – but what does that look like and how do we achieve it?

DevOps forgot the data science people

As cloud has burgeoned, much of the industry’s attention has been devoted to bringing together developers and operations to make software delivery and lifecycle management more predictable and improve build quality. 

Data scientists, during this time, have flown under the radar. Drafted into enterprise IT to work on AI projects, they are joining an environment that’s not quite ready for them.

What do I mean? Data scientists have a broad remit, taking a research-driven approach to solving business- and domain-level challenges through data manipulation and analysis. They operate outside the software delivery lifecycle using special tools and test platforms to build models using a subset of languages employed by developers.

Software engineering, while a creative and problem-solving discipline, takes a different approach. Engineers are delivery-focused and tackle jobs in priority order with results delivered in sprints to hit specific goals. Tool chains built on shared workflows are integrated and automated for team-based collaboration and communication.

These differences have bred friction in four notable areas:

  1. Process. Data scientists’ longer cycles don’t fit neatly into the process- and priority-driven flow of Agile. Accomplish five tasks in two days or deliver a new release every few hours? Such targets run counter to the nature of data science, and failure to accommodate this will soon see the data science and software engineering wheels on an AI project running out of sync.
  2. Deployment. Automated delivery is a key tenet of Agile that’s eliminated the problems of manual delivery in large and complex cloud-based environments and helps ensure uptime. But a deployment target of, say, 15-30 minutes cannot work for today’s large and data-heavy LLMs. Deployment of one to two hours is more like it — but this is an unacceptable length of time for a service to go offline. Push that and you will break the model.
  3. Lifecycle. Data scientists using their own tools and build processes breed machine learning model code that lives outside the shared repo where it would be inspected and understood by the engineering team. It can fly under the radar of Quality Assurance. This is a fast-track to black-box AI, where engineers cannot explain the code to identify and fix problems, nor undertake meaningful updates and lifecycle management downstream.
  4. Data Security. There’s a strong chance data scientists in any team will train their models on data that’s commercially sensitive or that identifies individuals, such as customers or patients. If that’s not treated before it hits the DevOps pipeline or production environment, there’s a real chance that information will leak.
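The “treat it before it hits the pipeline” step in point 4 can start as a redaction pass over training records. The patterns below are a deliberately minimal sketch covering only email addresses and US-style phone numbers; a real pipeline needs far broader PII coverage:

```python
import re

# Minimal sketch of redacting obvious PII from training records
# before they enter a DevOps pipeline. Real pipelines need far
# broader coverage (names, addresses, tokenization, etc.).
EMAIL = re.compile(r"[\w.+-]+@[\w-]+\.[\w.-]+")
PHONE = re.compile(r"\b\d{3}[-.\s]\d{3}[-.\s]\d{4}\b")

def redact(text: str) -> str:
    text = EMAIL.sub("[EMAIL]", text)
    return PHONE.sub("[PHONE]", text)

record = "Contact jane.doe@example.com or 555-123-4567 about the claim."
print(redact(record))  # Contact [EMAIL] or [PHONE] about the claim.
```

Running a step like this before data reaches the shared repo or production environment is what keeps the leak risk described above out of the DevOps pipeline entirely.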

No right or wrong answer

We need to find a collaborative path — and we can achieve that by fostering a good working environment that bridges the two disciplines to deliver products. That means data scientists internalizing the pace of software engineering and the latter adopting flexible ways to accommodate the scientists. 

Here are my top three recommendations for putting this into practice:

  1. Establish shared goals. This will help the teams to sync. For example, is the project goal to deliver a finished product such as a chatbot? Or is the goal a feature update, where all users receive the update at the same time? With shared goals in place it’s possible to set and align project and team priorities. For data scientists that will mean finding ways to accelerate aspects of their work to hit engineering sprints, for example by adopting best practices in coding. This is a soft way for data scientists to adopt a more product-oriented mindset to delivery but it also means software engineers can begin to factor research backlogs into the delivery timelines.
  2. Create a shared workflow to deliver transparent code and robust AI. Join the different pieces of the AI project team puzzle: make sure the data scientists working on the model are connected to both the back-end production system and front-end while software engineers focus on making sure everything works. That means working through shared tools according to established best practices, following procedures such as common source control, versioning and QA.
  3. Appoint a project leader who can step in when needed on product engineering and delivery management. This person should have experience in building a product and understand the basics of the product life cycle so they can identify problems and offer answers for the team. They should have the skills and experience to make tactical decisions such as squaring the circle of software sprints. Ultimately they should be a project polyglot — capable of understanding both scientists and engineers, acting as translator and leading both.

Data scientists and software developers operate differently but they share a common interest in project success — exploiting that is the trick. If data scientists can align with Agile-driven delivery in software engineering and software engineers can accommodate the pace of their data-diving colleagues it will be a win for all concerned. A refined system of collaboration between teams will improve the quality of code, mean faster releases and — ultimately — deliver AI systems that make it through deployment and start delivering on the needs of business.



Working toward AIOps maturity? It’s never too early (or late) for platform engineering
SD Times, Tue, 09 Jul 2024 15:16:57 +0000
https://sdtimes.com/softwaredev/working-toward-aiops-maturity-its-never-too-early-or-late-for-platform-engineering/

Until about two years ago, many enterprises were experimenting with isolated proofs of concept or managing limited AI projects, with results that often had little impact on the company’s overall financial or operational performance. Few companies were making big bets on AI, and even fewer executive leaders lost their jobs when AI initiatives didn’t pan out.

Then came the GPUs and LLMs.

All of a sudden, enterprises in all industries found themselves in an all-out effort to position AI – both traditional and generative – at the core of as many business processes as possible, with as many employee- and customer-facing AI applications in as many geographies as they can manage concurrently. They’re all trying to get to market ahead of their competitors. Still, most are finding that the informal operational approaches they had been taking to their modest AI initiatives are ill-equipped to support distributed AI at scale.

They need a different approach.

Platform Engineering Must Move Beyond the Application Development Realm

Meanwhile, in DevOps, platform engineering is reaching critical mass. Gartner predicts that 80% of large software engineering organizations will establish platform engineering teams by 2026 – up from 45% in 2022. As organizations scale, platform engineering becomes essential to creating a more efficient, consistent, and scalable process for software development and deployment. It also helps improve overall productivity and creates a better employee experience.

The rise of platform engineering for application development, coinciding with the rise of AI at scale, presents a massive opportunity. A helpful paradigm has already been established: Developers appreciate platform engineering for the simplicity these solutions bring to their jobs, abstracting away the peripheral complexities of provisioning infrastructure, tools, and frameworks they need to assemble their ideal dev environments; operations teams love the automation and efficiencies platform engineering introduces on the ops side of the DevOps equation; and the executive suite is sold on the return the broader organization is seeing on its platform engineering investment.

Potential for similar outcomes exists within the organization’s AI operations (AIOps). Enterprises with mature AIOps can have hundreds of AI models in development and production at any time. In fact, according to a new study of 1,000 IT leaders and practitioners conducted by S&P Global and commissioned by Vultr, each enterprise employing these survey respondents has, on average, 158 AI models in development or production concurrently, and the vast majority of these organizations expect that number to grow very soon.

When bringing AIOps to a global scale, enterprises need an operating model that can provide the agility and resiliency to support operations at that magnitude. Without a tailored approach to AIOps, enterprises risk a perfect storm of inefficiency, delays, and ultimately the potential loss of revenue, first-to-market advantages, and even crucial talent due to the impact on the machine learning (ML) engineer experience.

Fortunately, platform engineering can do for AIOps what it already does for traditional DevOps.

The time is now for platform engineering purpose-built for AIOps

Even though platform engineering for DevOps is an established paradigm, a platform engineering solution for AIOps must be purpose-built; enterprises can’t take a platform engineering solution designed for DevOps workflows and retrofit it for AI operations. The requirements of AIOps at scale are vastly different, so the platform engineering solution must be built from the ground up to address those particular needs.

Platform engineering for AIOps must support mature AIOps workflows, which can vary slightly between companies. However, distributed enterprises should deploy a hub-and-spoke operating model that generally comprises the following steps:

  • Initial AI model development and training on proprietary company data by a centralized data science team working in an established AI Center of Excellence

  • Containerization of proprietary models and storage in private model registries to make all models accessible across the enterprise

  • Distribution of models to regional data center locations where local data science teams fine-tune models on local data

  • Deployment and monitoring of models to deliver inference in edge environments

In addition to enabling the self-serve provisioning of the infrastructure and tooling preferred by each ML engineer in the AI Center of Excellence and the regional data center locations, platform engineering solutions built for distributed AIOps automate and simplify the workflows of this hub-and-spoke operating model.
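The hub-and-spoke steps above amount to a model record moving through a fixed sequence of stages. The stage names and registry structure below are hypothetical, for illustration only, not any particular product's API:

```python
# Sketch of a model record moving through a hub-and-spoke lifecycle.
# Stage names and fields are hypothetical, not a specific product's API.

STAGES = ["trained_central", "registered", "fine_tuned_regional", "deployed_edge"]

class ModelRecord:
    def __init__(self, name: str):
        self.name = name
        self.stage = STAGES[0]

    def advance(self) -> str:
        """Move to the next lifecycle stage, refusing to skip steps."""
        i = STAGES.index(self.stage)
        if i + 1 >= len(STAGES):
            raise ValueError("already deployed to edge")
        self.stage = STAGES[i + 1]
        return self.stage

m = ModelRecord("churn-predictor")
m.advance()  # containerized and stored in the private model registry
m.advance()  # fine-tuned by a regional data science team
print(m.advance())  # deployed_edge
```

The value of a platform engineering layer here is that each `advance` step, from central training through regional fine-tuning to edge deployment, is automated rather than hand-managed per model.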


Mature AI involves more than just operational and business efficiencies. It must also include responsible end-to-end AI practices. The ethics of AI underpin public trust. As with any new technological innovation, improper management of privacy controls, data, or biases can harm adoption (user and business growth) and generate increased governmental scrutiny.

The EU AI Act, passed in March 2024, is the most notable legislation to date to govern the commercial use of AI, and it is likely only the start of new regulations addressing short- and long-term risks. Staying ahead of regulatory requirements is essential not only to remain in compliance; those who fall out of compliance may see their business dealings impacted around the globe. As part of the right platform engineering strategy, responsible AI can identify and mitigate risks through:

  • Automating workflow checks to look for bias and ethical AI practices

  • Creating a responsible AI “red” team to test and validate models

  • Deploying observability tooling and infrastructure to provide real-time monitoring
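The first bullet, automated bias checks, has a concrete form in fairness metrics such as demographic parity difference. The data and the 0.1 threshold below are arbitrary illustrations, not a regulatory figure:

```python
# Sketch of an automated bias check: demographic parity difference,
# i.e. the gap in positive-outcome rates between groups.
# The 0.1 threshold is an arbitrary illustrative choice.

def parity_difference(outcomes) -> float:
    """outcomes: iterable of (group, outcome) pairs, outcome in {0, 1}."""
    groups = {}
    for group, outcome in outcomes:
        groups.setdefault(group, []).append(outcome)
    rates = [sum(v) / len(v) for v in groups.values()]
    return max(rates) - min(rates)

data = [("a", 1), ("a", 1), ("a", 0), ("b", 1), ("b", 0), ("b", 0)]
gap = parity_difference(data)
print(round(gap, 3))  # group a is positive 2/3 of the time, group b 1/3
assert gap > 0.1  # this sample would fail the sketch's fairness gate
```

Wired into a workflow check, a gate like this can block a model from advancing through the pipeline until the gap is investigated, which is exactly the kind of automation the bullet describes.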

Platform engineering also future-proofs enterprise AI operations

As AI growth and the resulting demands on enterprise resources compound, IT leaders must align their global IT architecture with an operating model designed to accommodate distributed AI at scale. Doing so is the only way to prepare data science and AIOps teams for success.

Purpose-built platform engineering solutions enable IT teams to meet business needs and operational requirements while providing companies with a strategic advantage. These solutions also help organizations scale their operations and governance, ensuring compliance and alignment with responsible AI practices.

There is no better approach to scaling AI operations. It’s never too early (or late) to build platform engineering solutions to pave your company’s path to AI maturity.



Are developers and DevOps converging?
SD Times, Fri, 14 Jun 2024 14:56:49 +0000
https://sdtimes.com/devops/are-developers-and-devops-converging/

Are your developers on PagerDuty? That’s the core question, and for most teams the answer is emphatically “yes.” This is a huge change from a few years ago when, unless you had no DevOps or SRE teams at all, the answer was a resounding “no.”

So, what’s changed?

A long-term trend is happening across large and small companies: the convergence of developers, those who code apps, and DevOps, those who maintain the systems on which apps run and on which developers code. There are three core reasons for this shift: (1) transformation to the cloud, (2) a shift to a single store of observability data, and (3) a focus of technical work efforts on business KPIs.

The impending impact on DevOps in terms of role, workflow, and alignment to the business will be profound. Before diving into those three reasons, first: why should business leaders care?

The role of DevOps and team dynamics – The lines are blurring between traditionally separate teams as developers, DevOps, and SREs increasingly collide. The best organizations will adjust team roles and skills, and they will change workflows to more cohesive approaches. One key way is via communicating around commingled data sets as opposed to distinct and separate vendors built and isolated around roles. While every technical role will be impacted, the largest change will be felt by DevOps as companies redefine its role and the mentalities that are required by its team members going forward.

Cost efficiency – As organizations adjust to the new paradigm, their team makeup must adjust accordingly. Different skills will be needed, different vendors will be used, and costs will consolidate.

Culture and expectations adaptation – Who will be on call on PagerDuty? How will the roles of DevOps and SREs change when developers can directly monitor, alert, and resolve their own questions? What will the expectation of triage be when teams are working closer together and focused on business outcomes rather than uptime? DevOps will not just be setting up vendors, maintaining developer tools, and monitoring cloud costs.

Transformation to the cloud

This is a well-trodden topic, so the short story is… Vendors would love to eliminate roles on your teams entirely, especially DevOps and SREs. Transformation to the cloud means everything is virtual. While the cloud is arguably more immense in complexity, teams no longer deal with physical equipment that literally requires someone onsite or in an office. With virtual environments, cloud and cloud-related vendors manage your infrastructure, vendor setup, developer tooling, and cost measures… all of which have the goals of less setup and zero ongoing maintenance.

The role of DevOps won’t be eliminated… at least not any time soon, but it must flex and align. As cloud vendors make it so easy for developers to run and maintain their applications, DevOps in its current incarnation is not needed. Vendors and developers themselves can support the infrastructure and applications respectively.

Instead, DevOps will need to justify their work efforts according to business KPIs such as revenue and churn. A small subset of the current DevOps team will have KPIs around developer efficiency, becoming the internal gatekeeper to enforce standardization across your developers and the entire software lifecycle, including how apps are built, tested, deployed, and monitored. Developers can then be accountable for the effectiveness and efficiency of their apps (and underlying infrastructure) from end-to-end. This means developers – not DevOps – are on PagerDuty, monitor issues across the full stack, and respond to incidents. 

Single store of observability data

Vendors and tools are converging on a single set of data types. Looking at the actions of different engineering teams, efforts can easily be bucketed into analytics (e.g., product, experience, engineering), monitoring (e.g., user, application, infrastructure), and security. What’s interesting is that these buckets currently use different vendors built for specific roles, but the underlying datasets are quickly becoming the same. This was not true just a few years ago. 

The definition of observability data is to collect *all* the unstructured data that’s created within applications (whether server-side or client-side) and the surrounding infrastructure. While the structure of this data varies by discipline, it is always transformed into four forms – metrics, logs, traces, and, more recently, events. 

Current vendors generally think of these four types separately, with one used for logs, another for traces, a third for metrics, and yet another for analytics. However, when you combine these four types, you create the underpinnings of a common data store. The use cases of these common data types become immense because analytics, monitoring, and security all use the same underlying data types and thus should leverage the same store. The question is then less about how to collect and store the data (which is often the source of vendor lock-in), and more about how to use the combined data to create analysis that best informs and protects the business.

The convergence between developers and DevOps teams – and in this case eventually product as well – is that the same data is needed for all their use cases. With the same data, teams can increasingly speak the same language. Workflows that were painful before now become possible. (There’s no more finger-pointing between DevOps and developers.) The work efforts become more aligned around what drives the business and less about what each separate vendor tells you is most important. The roles then become blurred instead of having previously clean dividing lines. 

Focus of work efforts on business KPIs

Teams are increasingly driven by business goals and the top line. For DevOps, the focus is shifting from the current low bar of uptime and SLAs to those KPIs that correlate to revenue, churn, and user experience. And with business alignment, developers and DevOps are being asked to report differently and to justify their work efforts and prioritization. 

For example, one large Fortune 500 retailer has monthly meetings across their engineering groups (no product managers included). They review the KPIs on which business leaders are focused, especially top-line revenue loss. The developers (not DevOps) select specific metrics and errors as leading indicators of revenue loss and break them down by type (e.g., crashes, error logs, ANRs), user impact (e.g., abandonment rate), and area of the app affected (e.g., startup, purchase flow). 
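A rough sketch of that kind of breakdown, assuming hypothetical error telemetry with illustrative field names:

```python
from collections import Counter

# Hypothetical error events; field names are illustrative only.
errors = [
    {"type": "crash", "area": "purchase_flow", "abandoned": True},
    {"type": "error_log", "area": "startup", "abandoned": False},
    {"type": "crash", "area": "purchase_flow", "abandoned": True},
    {"type": "anr", "area": "startup", "abandoned": True},
]

# Break errors down the way the review meeting does: by type, by area
# of the app affected, and by user impact (abandonment rate).
by_type = Counter(e["type"] for e in errors)
by_area = Counter(e["area"] for e in errors)
abandonment_rate = sum(e["abandoned"] for e in errors) / len(errors)

print(by_type.most_common(1))    # [('crash', 2)]
print(by_area["purchase_flow"])  # 2
print(abandonment_rate)          # 0.75
```

Crashes concentrated in the purchase flow with a high abandonment rate point directly at revenue loss, which is what makes this cut of the data actionable.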

Notice there’s no mention of DevOps metrics. The group does not review the historically used metrics around uptime and SLAs because those are assumed… and are not actionable to prioritize work and better grow the business.

The goal is to prioritize developer and DevOps efforts to push business goals. This means engineering teams must now justify work, which requires total team investment into this new approach. In many ways, this is easier than the previous methodology of separately driving technical KPIs. 

DevOps must flex and align

DevOps is not disappearing altogether, but it must evolve alongside the changing technology and business landscapes of today’s business KPI-driven world. Those in DevOps adapted to the rapid adoption of the cloud, and they must adapt again as technological advances and the consolidation of data sources reshape their role.

As cloud infrastructures become more modular and easier to maintain, vendors will further force a shift in the roles and responsibilities of DevOps. And as observability, analytics, and security data consolidates, a set of vendors will emerge – think Databricks, Confluent, and Snowflake – to manage this complexity. Thus, the data will become more accessible and easier to leverage, allowing developers and business leaders to connect the data to the true value: aligning work efforts to business impact.

DevOps must follow suit, aligning their efforts to goals that have the greatest impact on the business. 

The post Are developers and DevOps converging? appeared first on SD Times.

GitLab 17 introduces GitLab Duo Enterprise and new CI/CD catalog https://sdtimes.com/devops/gitlab-17-introduces-gitlab-duo-enterprise-and-new-ci-cd-catalog/ Fri, 17 May 2024 13:44:32 +0000 https://sdtimes.com/?p=54608 GitLab has announced the latest version of its platform. GitLab 17 introduces new features such as GitLab Duo Enterprise, a new CI/CD catalog, and Native Secrets Manager. GitLab Duo Enterprise is a new AI add-on that builds on the capabilities of GitLab Duo Pro. It can be used to detect and fix security issues, summarize … continue reading

GitLab has announced the latest version of its platform. GitLab 17 introduces new features such as GitLab Duo Enterprise, a new CI/CD catalog, and Native Secrets Manager.

GitLab Duo Enterprise is a new AI add-on that builds on the capabilities of GitLab Duo Pro. It can be used to detect and fix security issues, summarize issue discussions and merge requests, resolve CI/CD bottlenecks, and improve team collaboration. 

It also includes an AI impact dashboard that provides insights into the impact of AI on the software development life cycle so that teams can assess whether their AI usage is delivering actual value. 

GitLab expects that this offering will be available in the next couple of months to GitLab Ultimate customers.

Another new feature in GitLab 17 is a new CI/CD catalog that allows developers to discover, reuse, and contribute CI/CD components. Organizations can also create their own private catalog that can only be accessed internally. 
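For illustration, a pipeline could pull a reusable component from such a catalog in its `.gitlab-ci.yml` using GitLab's `include:component` syntax; the project path, version, and inputs below are hypothetical:

```yaml
include:
  - component: gitlab.example.com/devops/catalog/sast@1.0.0
    inputs:
      stage: test
```

The consuming team pins a version and supplies inputs, while the component's maintainers own its implementation.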

The company also added a Native Secrets Manager, enabling customers to store sensitive credentials directly in GitLab.

Other new additions in GitLab 17 include the availability of GitLab Dedicated on Google Cloud, new SAST integrations, product analytics features, observability functionality, agile planning capabilities, and Model Registry for developing AI/MLs within GitLab. 

“GitLab continues to revolutionize the way organizations develop, build, secure, and deploy software faster leveraging a comprehensive DevSecOps platform,” said David DeSanto, chief product officer of GitLab. “GitLab 17 ushers in the future of AI-driven software innovation by removing silos across every team involved in delivering software value, automating tasks and complex workflows, and ensuring security and compliance is built-in from the beginning.”

The post GitLab 17 introduces GitLab Duo Enterprise and new CI/CD catalog appeared first on SD Times.

Copado releases new AI assistant for creating Salesforce tests https://sdtimes.com/test/copado-releases-new-ai-assistant-for-creating-salesforce-tests/ Fri, 26 Apr 2024 15:41:11 +0000 https://sdtimes.com/?p=54410 The DevOps company Copado has announced a new AI assistant for Salesforce test creation called Test Copilot. This follows the company’s recent announcement of Copado Explorer, which is an automated testing solution designed for Salesforce users, as well as the launch of its AI assistant CopadoGPT, which Test Copilot is built on.  Users provide a … continue reading

The DevOps company Copado has announced a new AI assistant for Salesforce test creation called Test Copilot.

This follows the company’s recent announcement of Copado Explorer, which is an automated testing solution designed for Salesforce users, as well as the launch of its AI assistant CopadoGPT, which Test Copilot is built on. 

Users provide a text prompt of what needs to be tested and Test Copilot creates a test that fits those requirements.

It can convert existing tests, Selenium tests, or Copado Explorer results into a new test, create tests from scratch, or turn recorded user sessions into test scripts. 

“Copado is in the business of giving people their time back,” said Esko Hannula, senior vice president of product management at Copado. “By eliminating repeated tasks and using AI to automate the test creation process, Copado is helping release teams work faster than ever before while improving release quality. With our AI-powered testing solutions, Copado customers are not only accelerating software testing, but simplifying it.”

The post Copado releases new AI assistant for creating Salesforce tests appeared first on SD Times.

GitLab Duo Chat released as part of GitLab 16.11 https://sdtimes.com/softwaredev/gitlab-duo-chat-released-as-part-of-gitlab-16-11/ Fri, 19 Apr 2024 17:07:01 +0000 https://sdtimes.com/?p=54339 GitLab has announced that its AI assistant GitLab Duo Chat is now generally available as part of the GitLab 16.11 release.  GitLab Duo Chat can answer questions about issues, epics, code, errors, CI/CD configurations, or the GitLab platform itself. It can also refactor existing code and generate tests. For instance, a developer onboarding onto a … continue reading

GitLab has announced that its AI assistant GitLab Duo Chat is now generally available as part of the GitLab 16.11 release. 

GitLab Duo Chat can answer questions about issues, epics, code, errors, CI/CD configurations, or the GitLab platform itself. It can also refactor existing code and generate tests.

For instance, a developer onboarding onto a project could ask for general knowledge like understanding the CI/CD setup, learning the difference between an issue and an epic, resetting their GitLab password, and getting started with specific development frameworks. 

“With Chat, you have an assistant ready to answer all of your onboarding questions, and soon you’re ready to dig into your first project,” GitLab wrote in a blog post.

Organizations can control which data the AI gets read access to at the project, sub-group, and group levels. GitLab also said that customer data isn’t used to train its AI models. 

GitLab Duo Chat is available within GitLab and in popular IDEs like VS Code and JetBrains’ IDEs. It is offered as an add-on to GitLab Duo Pro, which costs $19/user/month.

“Whether you’re a developer or you’re managing the entire team, GitLab Duo Chat can empower you to take advantage of AI exactly where you need it throughout the software development lifecycle — all while helping you maintain code quality and security guardrails,” the company wrote. 

Other new features in GitLab 16.11 include:

  • Policy scoping, which allows compliance teams to set policy enforcement to a specific group of projects 
  • Product Analytics, including key usage and adoption data on users
  • The ability for Enterprise Users to disable personal access tokens
  • Autocompletion when inserting links to wiki pages
  • A sidebar containing project information

In total, over 40 new features were added to GitLab 16.11. A full list of updates can be found in GitLab’s blog post.

The post GitLab Duo Chat released as part of GitLab 16.11 appeared first on SD Times.

Report: As DevOps adoption nears 100%, these factors determine maturity https://sdtimes.com/devops/report-as-devops-adoption-nears-100-these-factors-determine-maturity/ Tue, 16 Apr 2024 15:00:40 +0000 https://sdtimes.com/?p=54299 Most developers at this point in time have adopted DevOps in some form or another, whether they are a full-blown DevOps engineer or a developer utilizing parts of the DevOps practice.  According to a new report from the Continuous Delivery Foundation (CDF), 83% of developers were “involved in DevOps-related activities” in the first quarter of … continue reading

Most developers at this point in time have adopted DevOps in some form or another, whether they are a full-blown DevOps engineer or a developer utilizing parts of the DevOps practice. 

According to a new report from the Continuous Delivery Foundation (CDF), 83% of developers were “involved in DevOps-related activities” in the first quarter of 2024. The report was based on data from SlashData collected over the past three and a half years. Because of the wide time period being examined, the organization was able to compare this to 77% involvement in DevOps in early 2022, an increase of six percentage points.

Even though the total number of developers involved in DevOps in some way has risen, there has at the same time been a small decrease in the number of developers who involve themselves in all DevOps-related activities. In other words, developers are specializing in specific DevOps tasks rather than trying to do it all. CDF sees this as an indicator of DevOps maturity.

The most common DevOps task developers take on is monitoring software or infrastructure performance, which was done by 33% of developers in the first quarter of the year. Other popular activities include approving code deployments to production (29%), testing applications for security vulnerabilities (29%), and using continuous integration to automatically build and test code changes (29%).

The report also pointed out that there is a strong correlation between the number of tools in use and maturity level. However, there is also a decrease in deployment performance when developers use multiple CI/CD tools of the same type, because it introduces interoperability challenges. 

Another indicator of maturity is simply the experience level of the developer. Developers with more than 11 years of experience are twice as likely to be top performers in lead time for code changes, compared to less experienced colleagues. Only 10% of those with five or fewer years of experience are considered to be top performers.

When measuring time to restore services, only 5% of developers with two or fewer years of experience are top performers.

In addition, more experienced developers are more likely to be using more tools. Developers with two or fewer years of experience use an average of 2.3 tools, while those with 16 or more years of experience use an average of 5.2 tools.

“The CD Foundation has been promoting standards in CD, securing the software supply chain, and advocating for better interoperability,” said Dadisi Sanyika, governing board chair at CDF. “The report findings reflect our community’s ongoing efforts and provide a framework for organizations to compare their practices with those of their industry peers, offering insights into where they stand and highlighting areas that require attention to enhance organizational efficiency.”

The post Report: As DevOps adoption nears 100%, these factors determine maturity appeared first on SD Times.

Analyst View: What’s new, what’s now, and what’s next in platform engineering https://sdtimes.com/softwaredev/analyst-view-whats-new-whats-now-and-whats-next-in-platform-engineering/ Mon, 04 Mar 2024 15:44:12 +0000 https://sdtimes.com/?p=53919 The problem is not new: Modern software architectures are complex distributed systems made up of many independent services, many of which are built by other teams or cloud providers. Kubernetes wrangles this herd of services—but adds yet more complexity that must be tamed. This creates hard problems at the intersection of development and operations. Developers … continue reading

The problem is not new: Modern software architectures are complex distributed systems made up of many independent services, many of which are built by other teams or cloud providers. Kubernetes wrangles this herd of services—but adds yet more complexity that must be tamed. This creates hard problems at the intersection of development and operations. Developers are frustrated when they need to operate an array of complex, arcane services and tools, in which they aren’t experts. Operators are frustrated when non-expert developers build subpar infrastructure. Devs complain that Ops slows them down. Ops complains that Devs push code that is not resilient, compliant, or secure.

What is new is the solution: platform engineering. Operating platforms sit between the end user and the backing services on which they rely. The platform is an internal software product, built by a dedicated team, that provides a curated collection of reusable components, tools, services, and knowledge, packaged for easy consumption. As illustrated below, the platform becomes a layer of abstraction between the developer and the messy complexities of operations.

The specific components and capabilities of each platform vary widely. Ultimately, the platform is whatever a development team needs it to be. The platform team’s job is to build that product. Each platform may be unique, but all platforms are:

  • Productized: The platform is a product. User feedback directs product strategy.
  • User-centric: Platforms solve users’ problems, not operators’. For example, developers probably don’t want a fast way to build a VM; they don’t want to think about infrastructure at all.
  • Self-service: Developers can access everything they need from a single source, without opening tickets, sending emails, or filing requests.
  • Consistent and compliant: Standards are built in, so that users cannot deliver code that is out-of-spec or insecure.

The platform is a “paved road” including both guidelines (recommended ways of travelling) and guardrails (hard boundaries the user can’t cross). A platform might enforce compliance guardrails: “You must run these automated tests of your security posture before deploying.” However, it might only suggest certain workflows: “We recommend the following tool for these use cases.”
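A toy sketch of that distinction, with illustrative check names: guardrails refuse the deploy outright, while guidelines only warn.

```python
# "Paved road" checks: guardrails block, guidelines merely warn.
# All check and tool names here are illustrative.

def security_scan_passed(artifact: dict) -> bool:
    return artifact.get("sast_findings", 0) == 0

def uses_recommended_build_tool(artifact: dict) -> bool:
    return artifact.get("build_tool") == "recommended-builder"

GUARDRAILS = [security_scan_passed]          # hard boundaries the user can't cross
GUIDELINES = [uses_recommended_build_tool]   # recommended ways of travelling

def deploy(artifact: dict) -> str:
    if not all(check(artifact) for check in GUARDRAILS):
        return "blocked"
    if not all(check(artifact) for check in GUIDELINES):
        return "deployed-with-warnings"
    return "deployed"

print(deploy({"sast_findings": 2}))                                       # blocked
print(deploy({"sast_findings": 0, "build_tool": "homebrew-scripts"}))     # deployed-with-warnings
print(deploy({"sast_findings": 0, "build_tool": "recommended-builder"}))  # deployed
```

The platform team decides which checks land in which list, and that choice is exactly the guideline-versus-guardrail judgment described above.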

What’s New: Platform Engineering Fulfills the Promises of DevOps

It’s important to distinguish platform engineering from what has come before. Automation is nothing new; nor are calls for better collaboration between developers and operators. A platform goes beyond existing techniques and tools. It is a new software product, with its own customers, lifecycle, user contracts, and lofty expectations.

Platform engineering represents the state of the art in DevOps. Not surprisingly, therefore, platform engineering has quickly become the hottest topic of conversation in that world, spawning its own user community and conferences. Gartner named platform engineering a Top Strategic Technology Trend in both 2023 and 2024 and predicts that, by 2026, 80% of large software engineering organizations will establish platform engineering teams.

What’s Now: The Platform Revolution Has Begun

IT shops have already begun to implement platform teams. Most start by building an internal developer portal, often using the open-source project Backstage. The portal is a central service catalog and document repository. It can also be a graphical user interface for automations and delivery pipelines. The user experience should be markedly better than whatever homebrew solutions developers have built for themselves. The goal is not to force developers onboard; they should want to use it.
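In a Backstage-based portal, for example, a service registers itself in the catalog with a `catalog-info.yaml` descriptor; the names and annotation below are hypothetical:

```yaml
apiVersion: backstage.io/v1alpha1
kind: Component
metadata:
  name: payments-service
  description: Handles checkout payments
  annotations:
    github.com/project-slug: example-org/payments-service
spec:
  type: service
  lifecycle: production
  owner: team-payments
```

Once registered, the service shows up in the portal's catalog with its docs, owner, and pipelines attached, so other developers can find it without opening a ticket.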

Platforms make developers more productive. They free developers from the burden of building out their own operating environments. This allows developers to avoid unnecessary “glue” work and focus on writing code that creates value. When measuring the value of the platform—or making a business case to build one—focus on its positive impact on productivity. Deployment rates should increase; error rates, incidents, exceptions, rework, and time-to-value should all decrease. 

What’s Next: Platforms Expand and Evangelize

Platforms start small, often with only documentation and a service catalog. But even this is valuable. It saves developers from having to open tickets or send emails. Over time, the platform grows more capable. In the maximal vision of platform engineering, the platform has its own set of APIs. Developers write to the platform, and its abstractions become load-bearing parts of the application.
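A hypothetical sketch of what "writing to the platform" might look like: the developer declares intent through the platform API, and the platform team's implementation handles the operational details behind it. Every name here is invented for illustration.

```python
# Hypothetical platform client: the abstraction is the contract the
# developer codes against; provisioning happens behind it.

class Platform:
    def __init__(self) -> None:
        self.deployments: list[dict] = []

    def deploy_service(self, name: str, image: str, replicas: int = 2) -> dict:
        # Behind this call, a real platform would provision infrastructure,
        # apply compliance guardrails, and wire up monitoring.
        record = {"name": name, "image": image,
                  "replicas": replicas, "status": "running"}
        self.deployments.append(record)
        return record


platform = Platform()
result = platform.deploy_service("checkout", "registry.example.com/checkout:1.4")
print(result["status"])  # running
```

When abstractions like `deploy_service` become load-bearing, the platform team versions and supports them like any other product API.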

The platform can also expand to other users—data scientists, for example, or even business units looking to automate their work. They too will find value in a platform that meets users where they are. A platform team that can provide building blocks that are immediately useful, at an appropriate cognitive load, without unduly constraining users or forcing them into foreign ways of working, provides value now and in the future.


The post Analyst View: What’s new, what’s now, and what’s next in platform engineering appeared first on SD Times.
