observability Archives - SD Times
https://sdtimes.com/tag/observability/

Honeycomb brings its observability capabilities to the frontend with new offering
https://sdtimes.com/monitoring/honeycomb-brings-its-observability-capabilities-to-the-frontend-with-new-offering/
Tue, 24 Sep 2024

The observability platform Honeycomb has just launched Honeycomb for Frontend Observability to provide developers with access to more debugging and observability capabilities.

“The frontend is critical – it’s where customers spend their time and where revenue is generated. Unfortunately, many frontend observability tools are outdated, offering only aggregated metrics and limited insights,” said Christine Yen, CEO of Honeycomb. “With Honeycomb’s Frontend Observability solution, we’re bridging the gap between backend and frontend observability, giving engineering teams a holistic view of their entire application – from server to browser – enabling them to deliver exceptional user experiences.”

This new platform includes an OpenTelemetry wrapper that can collect performance data on hundreds of custom attributes and trace an application from end to end so that issues can be resolved more quickly. According to Honeycomb, this enables Real User Monitoring (RUM) in a way that traditional frontend monitoring tools can’t, because they aren’t able to collect the number of signals needed for today’s complex applications.
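
For illustration only, here is a minimal sketch of what browser-side tracing with the vanilla OpenTelemetry JavaScript SDK looks like; Honeycomb’s wrapper packages this kind of setup for you. The package layout and `addSpanProcessor` call follow the pre-2.0 SDK and vary by version, and the attribute names are invented examples:

```ts
// Sketch: minimal browser tracing with OpenTelemetry JS (pre-2.0 API).
import { WebTracerProvider } from '@opentelemetry/sdk-trace-web';
import { BatchSpanProcessor, ConsoleSpanExporter } from '@opentelemetry/sdk-trace-base';
import { trace } from '@opentelemetry/api';

// Register a tracer provider that prints spans to the console.
// A real setup would export OTLP to a backend instead.
const provider = new WebTracerProvider();
provider.addSpanProcessor(new BatchSpanProcessor(new ConsoleSpanExporter()));
provider.register();

// Record a user interaction as a span carrying custom attributes.
const tracer = trace.getTracer('checkout-ui');
const span = tracer.startSpan('add-to-cart');
span.setAttribute('cart.item_count', 3);        // custom attribute (example name)
span.setAttribute('app.release', '2024.09.1');  // custom attribute (example name)
span.end();
```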

It also includes user interaction tracking and context capture, which let teams query high-cardinality data. This is another way Honeycomb for Frontend Observability improves cross-team collaboration and speeds up issue resolution. 

Additionally, it collects Core Web Vitals data, which can help developers improve their SEO and site performance. 
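
As a sketch of how Core Web Vitals are typically captured in the browser, the snippet below uses the open-source web-vitals library rather than Honeycomb’s specific wrapper; the `/vitals` endpoint is a made-up example, and export names follow recent library versions:

```ts
// Sketch: collect Core Web Vitals with the open-source `web-vitals` library
// and beacon them to a hypothetical collection endpoint.
import { onLCP, onCLS, onINP, onTTFB, type Metric } from 'web-vitals';

function report(metric: Metric) {
  // sendBeacon survives page unloads, so late metrics (LCP, CLS) still arrive.
  navigator.sendBeacon('/vitals', JSON.stringify({
    name: metric.name,     // e.g. 'LCP'
    value: metric.value,   // milliseconds, or unitless for CLS
    rating: metric.rating, // 'good' | 'needs-improvement' | 'poor'
  }));
}

onLCP(report);
onCLS(report);
onINP(report);
onTTFB(report);
```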

“E-commerce is critical to Fender,” said Michael J Garski, director of software engineering at Fender Instruments. “Honeycomb for Frontend Observability removes the guesswork from diagnosing our site’s performance issues by tracking precise page speeds, filtering sessions, and identifying the cause of speed spikes, enabling targeted site optimization for maximizing conversions and delivering better customer experiences.”

Elastic’s donation of Universal Profiling agent to OpenTelemetry further solidifies profiling as core telemetry signal
https://sdtimes.com/softwaredev/elastics-donation-of-universal-profiling-agent-to-opentelemetry-further-solidifies-profiling-as-core-telemetry-signal/
Fri, 07 Jun 2024

Elastic has announced that it would be donating its Universal Profiling agent to the OpenTelemetry project, setting the stage for profiling to become a fourth core telemetry signal in addition to logs, metrics, and tracing. 

This follows OpenTelemetry’s announcement in March that it would be supporting profiling and was working towards having a stable spec and implementation sometime this year.

Elastic’s agent profiles every line of code running on a company’s machines, including application code, kernels, and third-party libraries. It is always running in the background and can collect data about an application over time. 

It measures code efficiency across three categories: CPU utilization, CO2, and cloud cost. According to Elastic, this helps companies identify areas where waste can be reduced or eliminated so that they can optimize their systems. 

Universal Profiling currently supports a number of runtimes and languages, including C/C++, Rust, Zig, Go, Java, Python, Ruby, PHP, Node.js, V8, Perl, and .NET. 

“This contribution not only boosts the standardization of continuous profiling for observability but also accelerates the practical adoption of profiling as the fourth key signal in OTel. Customers get a vendor-agnostic way of collecting profiling data and enabling correlation with existing signals, like tracing, metrics, and logs, opening new potential for observability insights and a more efficient troubleshooting experience,” Elastic wrote in a blog post.

OpenTelemetry echoed those sentiments, saying: “This marks a significant milestone in establishing profiling as a core telemetry signal in OpenTelemetry. Elastic’s eBPF-based profiling agent observes code across different programming languages and runtimes, third-party libraries, kernel operations, and system resources with low CPU and memory overhead in production. Both SREs and developers can now benefit from these capabilities: quickly identifying performance bottlenecks, maximizing resource utilization, reducing carbon footprint, and optimizing cloud spend.”


What AI can and can’t do for your observability practice
https://sdtimes.com/ai/what-ai-can-and-cant-do-for-your-observability-practice/
Fri, 22 Dec 2023

Artificial intelligence (AI) and large language models (LLMs) have dominated the tech scene over the past year. As a byproduct, vendors in nearly every tech sector are adding AI capabilities and scrambling to promote how their products and services use it. 

This trend has also made its way to the observability and monitoring space. However, the AI solutions coming to market often feel like putting a square peg in a round hole. While AI can significantly impact certain areas of observability, it is not a fit for others. In this article, I’ll share my views on how AI can and cannot support an observability practice – at least right now.

The Long Tail of Errors

The very nature of observability makes ‘prediction’ in the traditional sense unfeasible. In life, certain ‘act of God’ types of events can impact business and are impossible to predict – weather-related events, geopolitical conflicts, pandemics, and more. These events are so rare and capricious that it’s implausible to train an AI model to predict when one is imminent.

The long tail of potential errors in application development mirrors this. In observability, many errors may happen only once and never recur in your lifetime, while other types of errors occur daily. So if you’re looking to train a model that will completely understand and predict all the ways things could go wrong in an application development context, you’re likely to be disappointed.

Poor Quality Data

Another area where AI falls short in observability is its inability to distinguish irrelevant details from relevant ones. In other words, AI can pick up on small, inconsequential aberrations that end up having a big impact on your results.

For example, previously, I worked with a customer training an AI model with hours of basketball footage to predict successful versus unsuccessful baskets. There was one big issue: all footage of an unsuccessful basket included a timestamp on the video. So, the model determined timestamps have an impact on the success of a shot (not the result we were looking for).

Observability practices often work with imperfect data – unneeded log contents, noisy data, and so on. When you introduce AI without cleaning up this data, you create the possibility of false positives – as the saying goes, “garbage in, garbage out.” Ultimately, this can leave organizations more vulnerable to alert fatigue.

Where AI Does Fit in Observability

So, where should we be using AI in observability? One area where AI can add a lot of value is in baselining datasets and detecting anomalies. In fact, many teams have been using AI for anomaly detection for quite some time. In this use case, AI systems can, for example, understand what “normal” activity is across different seasonalities and flag when it detects an outlier. In this way, AI can give teams a proactive heads-up when something may be going awry.
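
As a rough illustration of the idea (not any vendor’s implementation), a seasonal baseline can be as simple as per-hour-of-week statistics with a z-score threshold; the data shapes below are invented:

```ts
// Sketch: baseline a metric per hour-of-week, then flag outliers by z-score.
type Sample = { timestamp: Date; value: number };
type Stats = { mean: number; std: number };

// 0..167: one bucket per hour of the week, a crude way to capture seasonality.
function hourOfWeek(d: Date): number {
  return d.getUTCDay() * 24 + d.getUTCHours();
}

function buildBaseline(history: Sample[]): Map<number, Stats> {
  const buckets = new Map<number, number[]>();
  for (const s of history) {
    const k = hourOfWeek(s.timestamp);
    if (!buckets.has(k)) buckets.set(k, []);
    buckets.get(k)!.push(s.value);
  }
  const baseline = new Map<number, Stats>();
  for (const [k, vals] of buckets) {
    const mean = vals.reduce((a, b) => a + b, 0) / vals.length;
    const variance = vals.reduce((a, v) => a + (v - mean) ** 2, 0) / vals.length;
    baseline.set(k, { mean, std: Math.sqrt(variance) });
  }
  return baseline;
}

// Flag a sample that sits more than `z` standard deviations from the mean
// observed at the same hour of the week.
function isAnomaly(baseline: Map<number, Stats>, s: Sample, z = 3): boolean {
  const b = baseline.get(hourOfWeek(s.timestamp));
  if (!b || b.std === 0) return false; // no baseline yet: stay quiet rather than alert
  return Math.abs(s.value - b.mean) / b.std > z;
}
```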

Another area where AI can be helpful is by shortening the learning curve when adopting a new query language. Several vendors are currently working on natural language query translators driven by AI. A natural language translator is an excellent way to lower the entry barriers when using a new tool. It frees up practitioners to focus on the flow and the practice itself rather than the pipes, semicolons, and all other nuances that come with learning a new syntax.
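
A hand-wavy sketch of the pattern follows; the endpoint, payload, and target query language are all made up, and real products differ. The point is that the translator is essentially a prompt pairing the query language with the user’s question:

```ts
// Sketch: translate a natural-language question into a query-language string
// by calling a hypothetical LLM completion endpoint.
async function translateToQuery(question: string): Promise<string> {
  const prompt =
    'Translate the user question into a LogQL query. ' + // target language is an example
    'Reply with the query only, no explanation.\n' +
    `Question: ${question}`;
  const res = await fetch('https://llm.example.com/v1/complete', { // hypothetical endpoint
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ prompt }),
  });
  const { completion } = await res.json();
  return completion.trim(); // e.g. '{app="checkout"} |= "error"'
}
```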

What to Focus on Instead

Whether beginning a journey with AI or making any other improvement, understanding usage trends is essential to optimizing the value of an observability practice. Improving a system without understanding its usage is akin to throwing darts in a pitch-black room. If no one uses the observability system, it’s pointless to have it. Many different analytics can help you know who’s using the system and, conversely, who isn’t using the system that should be.

Practitioners should focus on usage related to the following:

  • User-generated content – are users creating alerts or dashboards? How often are they being viewed? How delayed is the data getting to these dashboards, and can this be improved?
  • Queries – how often are you running the queries powering dashboards and alerts? Are queries fast or slow, and could they be optimized for performance? Understanding and improving query speed can improve development velocity for core functions.
  • Data – what volume is stored, and from what sources? How much of the stored data is actually queried? What are the hotspots/dead zones, and can storage be tiered to optimize cloud storage costs? (A sketch of this kind of usage rollup follows this list.)
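
For concreteness, here is a toy rollup over a hypothetical audit log (the event names and shapes are invented) that answers two of the questions above: who is running queries, and which stored datasets are never touched:

```ts
// Sketch: roll up a hypothetical audit log into usage stats.
type AuditEvent =
  | { kind: 'query'; user: string; dataset: string }
  | { kind: 'dashboard_view'; user: string; dashboard: string };

function usageReport(events: AuditEvent[], allDatasets: string[]) {
  const queriesPerUser = new Map<string, number>();
  const queriedDatasets = new Set<string>();
  for (const e of events) {
    if (e.kind === 'query') {
      queriesPerUser.set(e.user, (queriesPerUser.get(e.user) ?? 0) + 1);
      queriedDatasets.add(e.dataset);
    }
  }
  // Dead zones: data you pay to store but never query.
  const deadDatasets = allDatasets.filter((d) => !queriedDatasets.has(d));
  return { queriesPerUser, deadDatasets };
}
```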

Closing Thoughts

I believe that AI is currently at the peak of the hype curve. In an application development setting, pretending AI does what it doesn’t do – i.e., predict root causes and recommend specific remediations – will not propel us past the hype to the point where the technology actually becomes useful. There are very real ways that AI can turn the gears on observability improvements today – and this is where we should be focused. 

3 Myths About Observability — And Why They’re Holding Back Your Teams
https://sdtimes.com/monitor/3-myths-about-observability-and-why-theyre-holding-back-your-teams/
Tue, 14 Nov 2023

The past few years have seen intense interest in observability tools, which collect data about the performance of systems and applications to help companies identify and address performance issues and outages. The category seems to be nearing the top of its hype cycle, as seen in Cisco’s recent $28 billion cash offer to acquire Splunk.

The concept of observability is a valuable one, but the way the term has been used is misleading and leaves some teams worse off because of limitations in what observability tools actually provide. Enterprises need to rethink what observability means and regard it as a practice rather than a catch-all product category that can serve every team member’s needs equally. 

There are several teams that can benefit from observability, and they each have needs specific to their roles and responsibilities. For example, key constituents include:

  • SRE and infrastructure specialists
  • Data engineers
  • Developers
  • Security specialists

What enterprises really need from observability

Each of these teams needs actionable information to help them address the specific issues they confront in their roles. Crucially, this information must not just alert them that a problem exists but also provide the specific details and context needed to address the problem quickly.

For example, in the context of security, observability tools must help security practitioners quickly detect and mitigate threats and vulnerabilities. The tools should provide metrics that help explain why incidents occurred and suggest proactive measures to mitigate threats in the future.

For data engineering teams, observability tools should provide visibility across data pipelines and data products. Data practitioners need to know when the source of data in a pipeline changes, for example, and what action they need to take to maintain the integrity of their data applications.

For developers, observability tools must not only report that an error or a performance issue occurred, but also direct them to the specific coding issue that caused it. Developers also need information to help them prioritize errors and know which to address first, delivered in the context of their workflows, not in a separate tool.

Providing teams with actionable detail that’s specific to their roles creates greater ownership and accountability for each practice area, because specialists get the information they need delivered directly to them. In contrast, treating observability as a catch-all product area leads to multiple teams chasing after the latest problem, no matter the source of that issue. Observability has skewed too far towards solving cloud infrastructure challenges, and this approach doesn’t do enough for specialists in other areas.

Here are three myths of observability that have emerged as a result of this wrong-headed thinking:

Myth 1. Observability is a product. We need to stop thinking about observability as a product and start to regard it for what it actually is: a practice. When we view observability as a practice, we quickly realize that each practice area has its own needs that cannot be addressed with a single pane of glass. 

Myth 2. Observability is the same for every persona. Each practice area has its own distinct needs, and that means they need information that addresses their specific roles and objectives, delivered in the context of their usual workflow. What’s useful for an SRE will not be the same as what’s useful for a developer or a security specialist. 

Myth 3. More data solves everything. Data alone is not a silver bullet, and too much data can become a liability when it needs to be securely stored and managed at scale. One financial services firm was recently hit with an observability bill of $65 million, reportedly due to unpredictable spikes in the data collected.

An approach that targets specialists with just the data they need to solve the problem at hand is far more efficient than collecting all logs, metrics and trace data and trying to analyze it after the fact.

Technology specialists in areas like security, data and software development are far more effective — and happier — when they get the information they need to solve problems quickly and take ownership of their work. Observability is an important area, but treating it as a product rather than a practice can lead to higher costs and poorer outcomes for the business.

Apica Acquires Data Fabric Innovator LOGIQ.AI and Raises $10M in New Funding to Modernize Data Management
https://sdtimes.com/apica/apica-acquires-data-fabric-innovator-logiq-ai-and-raises-10m-in-new-funding-to-modernize-data-management/
Wed, 16 Aug 2023

STOCKHOLM and EL SEGUNDO, Calif. – Aug. 16, 2023 – Apica, the leader in synthetic monitoring and observability, today announced its agreement to acquire observability data fabric start-up LOGIQ.AI. Apica also announced it has raised $10M in funding from existing investors Industrifonden, SEB Foundation, and Oxx. With the acquisition and the new financing, Apica plans to continue delivering affordable and flexible observability innovations and develop new capabilities in the coming months for enterprise customers.

Today, IT teams must balance the need for standardization and cost reduction with the difficult task of consolidating monitoring tools, services, and network cost centers. With the acquisition of LOGIQ.AI and the funding announced today, Apica will deliver active observability, automated root cause analysis, and advanced data management to bridge real-world gaps in analysis.

“We are determined to address the need for low-cost infinite storage and observability to support businesses with relevant, actionable data,” said Mathias Thomsen, CEO, Apica. “With the acquisition of LOGIQ.AI and the additional funding, we will deliver ‘Active Observability’ that combines observability and synthetic monitoring into a proactive platform to plug data gaps and put business data in context.”

“Joining Apica with LOGIQ’s data fabric platform creates an innovative and intelligent approach to data management,” said Ranjan Parthasarathy, CEO of LOGIQ. “Together, we are empowering businesses to thrive where data-driven insights meet flawless performance, shaping the future of our customers’ digital success.”

Active Observability with Synthetics and User Control
The Apica Ascent platform with LOGIQ.AI gives users complete data pipeline control, a unified view of all information, and infinite high-quality storage at the lowest cost on the market. The platform acts as a superior indexing tool that aggregates data such as logs, traces, and network packets from multiple sources and improves data quality by trimming off excess data and performing enrichments. The data can be shifted from the platform to a lake environment – either Apica’s or another data lake. The result is a unified view of all data for faster root cause analysis while slashing costs and eliminating vendor lock-in.

“We are offering active observability on your terms,” said Jason Haworth, CPO, Apica. “We can give you observability at a low cost that scales to exabytes and gives you your data in context when and how you need it. We’re also stacking all this functionality into our data lakes and indexers while embracing open standards such as OpenTelemetry. This allows us to be application, device, service, and vendor agnostic. Having these pieces in place lets us be that decoder ring that other vendors in the space just can’t do.”

The LOGIQ.AI capabilities will be added to the Apica Ascent platform and deployed to current customers in Q3 of this year.
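
To make “trimming off excess data and performing enrichments” concrete, here is a generic pipeline stage sketch; this is not Apica’s actual API, and the field names and rules are invented:

```ts
// Sketch: a generic pipeline stage that drops low-value log lines
// and enriches the rest before they are shipped to storage.
type LogRecord = { level: string; message: string; service: string; [k: string]: unknown };

function pipelineStage(records: LogRecord[]): LogRecord[] {
  return records
    // Trim: drop noisy debug chatter before paying to store it.
    .filter((r) => r.level !== 'debug')
    // Enrich: attach deployment metadata so queries can group by it later.
    .map((r) => ({
      ...r,
      region: process.env.REGION ?? 'unknown',   // example enrichment
      ingested_at: new Date().toISOString(),
    }));
}
```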


About Apica
Apica keeps enterprises operating. The Ascent platform delivers active observability, automated root cause analysis, and advanced data management to quickly find and resolve complex digital performance issues before they negatively impact the bottom line. Today, business operations depend on understanding the health of multi-cloud, hybrid, and on-premises environments to keep business-critical applications and systems online while providing an optimal user experience. Apica delivers a unified view of all information for the entire technology stack helping reduce, prevent and resolve outages and lost revenue. For more information, visit www.apica.io.

Grafana Labs launches new observability solution for monitoring the end user experience
https://sdtimes.com/api-monitoring/grafana-labs-launches-new-observability-solution-for-monitoring-the-end-user-experience/
Thu, 20 Jul 2023

Grafana Labs is hoping to make it easier for companies to manage the end user experience for their applications. To achieve this, the company launched Grafana Cloud Frontend Observability, which enables companies to monitor frontend health, investigate frontend issues, resolve errors, and query, correlate, and visualize frontend telemetry in Grafana. 

According to Grafana, today it’s common for frontend applications to run more of their code on the end users’ device. Since users can be accessing the app from various devices, browsers, and operating systems, or even while on different internet bandwidths, ensuring compatibility is a challenge. 

The new solution will measure and report on Web Vitals like Time to First Byte, Largest Contentful Paint, First Input Delay, and Cumulative Layout Shift.

This data can be “sliced and diced” across any dimension, depending on what matters most to the business. It can also be useful in determining how different users interact with the app.

To make it easier to track down errors, it groups errors based on similarities and also ranks them by volume and frequency so that developers can see what URLs or browsers have the most errors and investigate accordingly. 
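
A toy version of that grouping idea looks like the sketch below; Grafana’s real implementation is more sophisticated, and the normalization rules here are invented. Each error is fingerprinted by its message with variable parts stripped, then the fingerprints are ranked by count:

```ts
// Sketch: group raw error messages by a normalized fingerprint,
// then rank the groups by occurrence count.
function fingerprint(message: string): string {
  return message
    .replace(/\d+/g, 'N')              // "timeout after 502ms" -> "timeout after Nms"
    .replace(/https?:\/\/\S+/g, 'URL') // collapse URLs
    .toLowerCase();
}

function rankErrors(messages: string[]): Array<{ group: string; count: number }> {
  const counts = new Map<string, number>();
  for (const m of messages) {
    const f = fingerprint(m);
    counts.set(f, (counts.get(f) ?? 0) + 1);
  }
  return [...counts.entries()]
    .map(([group, count]) => ({ group, count }))
    .sort((a, b) => b.count - a.count); // highest-volume groups first
}
```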

It also allows drilling down into specific user sessions based on things like application names, browser type, and timeframe, which can also be helpful in investigating issues. 

All observability data is stored in Grafana Cloud Logs, which allows teams to turn data into custom Grafana dashboards that can then be shared with team members and stakeholders. 

“The frontend of a web application is the part that users directly interact with. It’s the last mile of the digital service you deliver to your customers and it’s directly associated with customer satisfaction and business objectives. Knowing performance metrics such as CPU or memory is helpful, but at the end of the day, what you care most about is if the user experience is affected,” Grafana Labs wrote in a blog post.

Harness announces new feature to proactively identify errors
https://sdtimes.com/monitor/harness-announces-new-feature-to-proactively-identify-errors/
Wed, 10 May 2023

The new Harness Continuous Error Tracking (CET) release is designed to provide developer-first observability for modern applications to proactively identify and solve errors across the SDLC. 

Harness CET provides several advantages to developers, such as minimizing defects that go undetected, removing the need for manual troubleshooting, and enabling quicker resolution of customer problems. This lets teams identify and resolve issues within minutes instead of weeks, improving satisfaction for both developers and end users.

“Our goal is to empower developers by providing a solution that addresses the pain points unmet by traditional error tracking and observability tools,” said Jyoti Bansal, CEO and co-founder of Harness. “Harness Continuous Error Tracking offers unparalleled visibility and context, enabling teams to quickly identify, diagnose, and resolve issues, ultimately ensuring a better experience for both developers and customers.”

The tool includes runtime code analysis that provides complete visibility into every exception’s source code, variables, and environment state. These issues are routed directly to the right developer for faster resolution. CET also provides the full context of errors including code variables and objects up to ten levels deep into the heap.

CET creates guardrails to ensure that only high-quality code advances, preventing unreliable releases from being promoted to staging and production environments.
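
As an illustration of what such a guardrail amounts to, here is a generic promotion-gate sketch; this is not Harness’s actual API, and the thresholds and error-summary shape are invented:

```ts
// Sketch: a promotion gate that blocks a release when error tracking
// reports new or critical errors above agreed thresholds.
type ErrorSummary = { newErrors: number; criticalErrors: number; resurfacedErrors: number };

function canPromote(summary: ErrorSummary): boolean {
  const gate = { maxNew: 0, maxCritical: 0, maxResurfaced: 2 }; // example policy
  return (
    summary.newErrors <= gate.maxNew &&
    summary.criticalErrors <= gate.maxCritical &&
    summary.resurfacedErrors <= gate.maxResurfaced
  );
}

// In a pipeline step: fail the deployment if the gate does not pass.
const summary: ErrorSummary = { newErrors: 1, criticalErrors: 0, resurfacedErrors: 0 };
if (!canPromote(summary)) {
  console.error('Gate failed: unresolved errors in this release');
  process.exit(1); // non-zero exit marks the pipeline stage as failed
}
```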

In addition, release stability allows developers to compare current or past releases to understand trends in new, critical, and resurfaced errors.

The tool integrates with monitoring solutions such as AppDynamics, Dynatrace, Datadog, New Relic, and Splunk. It also natively integrates into Harness build and deployment pipelines or it can be used as a standalone solution.

vFunction enables continuous monitoring, detection, and drift issues with latest release
https://sdtimes.com/monitor/vfunction-enables-continuous-monitoring-detection-and-drift-issues-with-latest-release/
Tue, 04 Apr 2023

The vFunction Continuous Modernization Manager (CMM) platform is now available, enabling software architects to shift left and find and fix application architecture anomalies. vFunction also announced a new version of vFunction Modernization Hub and updates to vFunction Assessment Hub.

CMM observes Java and .NET applications and services to set baselines and monitor for any architectural drift and erosion. It can help companies detect critical architectural anomalies such as new dead code in the application or the emergence of unnecessary code.

“Application architects today lack the architectural observability, visibility, and tooling to understand, track, and manage architectural technical debt as it develops and grows over time,” said Moti Rafalin, the founder and CEO at vFunction. “vFunction Continuous Modernization Manager allows architects to shift left into the ongoing software development lifecycle from an architectural perspective to manage, monitor, and fix application architecture anomalies on an iterative, continuous basis before they erupt into bigger problems.”

The platform also identifies the introduction of a new service or domain and newly identified common classes that can be added to a common library to prevent further technical debt. 

Finally, it monitors and alerts when new dependencies are introduced that expand architectural technical debt, and identifies the highest technical debt classes that contribute to application complexity. Users are notified of changes through Slack, email, and the vFunction Notifications Center, allowing architects to then configure schedules for learning, analysis, and baseline measurements through the vFunction Continuous Modernization Manager.
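
To picture what alerting on new dependencies means mechanically, here is a toy baseline comparison; the data shapes are invented, and vFunction’s analysis goes far deeper than this:

```ts
// Sketch: detect architectural drift by diffing a baseline dependency
// graph against the current one and flagging new edges.
type Edge = `${string}->${string}`; // e.g. "Orders->Billing"

function newDependencies(baseline: Set<Edge>, current: Set<Edge>): Edge[] {
  return [...current].filter((e) => !baseline.has(e));
}

const baseline = new Set<Edge>(['Orders->Billing', 'Orders->Inventory']);
const current = new Set<Edge>(['Orders->Billing', 'Orders->Inventory', 'Billing->Inventory']);

for (const edge of newDependencies(baseline, current)) {
  // A real system would notify Slack/email instead of logging.
  console.warn(`Architectural drift: new dependency ${edge}`);
}
```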

The latest release of vFunction Modernization Hub 3.0 allows modernization teams to collaborate more effectively by working on different measurements in parallel and later merging them into one measurement. Additionally, the vFunction Assessment Hub now includes a Multi-Application Assessment Dashboard that allows users to track and compare different parameters for hundreds of applications, such as technical debt, aging frameworks, complexity, and state, among others. 

All three products are available in the company’s Application Modernization Platform. 

Qt launches Qt Insight to provide developers with better customer insights
https://sdtimes.com/software-development/qt-launches-qt-insight-to-provide-developers-with-better-customer-insights/
Mon, 20 Mar 2023

The new Qt Insight platform provides real customer insights into the usage of applications or devices.

The platform reveals how users navigate devices, identifies customer pain points, analyzes performance, and creates concrete, evidence-based development plans to optimize product development. It can also lower running costs by eliminating redundant, unused features, based on session activity and metrics such as button clicks and time on screen.
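
As a generic illustration of the kind of telemetry behind such metrics (this is not Qt Insight’s API; the event names and shapes are invented):

```ts
// Sketch: emit anonymized UI usage events of the kind a product-analytics
// backend aggregates into "button clicks" and "time on screen".
type UsageEvent = {
  session: string;   // random per-session id, no personal data
  screen: string;
  action: 'click' | 'screen_enter' | 'screen_exit';
  target?: string;   // e.g. the button that was clicked
  at: number;        // epoch millis
};

const session = crypto.randomUUID(); // anonymous session identifier

function track(event: Omit<UsageEvent, 'session' | 'at'>): void {
  const payload: UsageEvent = { session, at: Date.now(), ...event };
  // A real client would batch and POST these; logging stands in here.
  console.log(JSON.stringify(payload));
}

track({ screen: 'settings', action: 'screen_enter' });
track({ screen: 'settings', action: 'click', target: 'save-button' });
track({ screen: 'settings', action: 'screen_exit' }); // exit minus enter = time on screen
```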

“Understanding customer behaviour, needs, and pain points is essential to delivering an outstanding customer experience,” says Marko Kaasila, the senior vice president of product management at Qt Group. “We are delighted to see such a high level of interest in Qt Insight from a wide range of industries, including industrial automation, consumer electronics, medical and automotive. With the launch of Qt Insight, we are providing businesses with the information they need to truly understand their users, making it possible to develop evidence-based UX strategies that are truly tailored to customers.”

The platform is part of Qt’s portfolio of integrated software development solutions, which includes Qt Design Studio, Qt Creator, Qt Quality Assurance, and Qt Digital Advertising, and it will be available as a SaaS product. The solution also supports desktop applications.

Companies can ensure compliance with GDPR and address modern data privacy requirements by using Qt Insight, as it anonymizes their application data as standard. The tool is especially useful for developers, designers, marketers, and product owners. 

New Relic announces JFrog integration to provide a single point of access for monitoring
https://sdtimes.com/monitoring/new-relic-announces-jfrog-integration-to-provide-a-single-point-of-access-for-monitoring/
Wed, 15 Mar 2023

Observability company New Relic and DevOps company JFrog today announced an integration to give engineering teams a single point of access to monitor software development operations.

With this integration, users are able to access real-time visibility into CI/CD pipelines, APIs, and web application development workflows so that DevOps and security leaders can solve software supply chain performance and security issues.

Additionally, the integration enables site reliability engineers and security and operations teams to consistently monitor health, security, and usage trends through each stage of the software development lifecycle.

The integration allows engineering teams to track key metrics and generate alerts in New Relic to identify performance degradation so that administrators can manage performance, mitigate risks, and remediate any issues in a single view. 
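
For a sense of what pushing such a metric looks like, here is a sketch against New Relic’s public Metric API; the endpoint and payload shape follow New Relic’s published docs as best recalled (verify before use), and the metric name and attributes are invented examples:

```ts
// Sketch: push a custom CI/CD metric to New Relic's Metric API.
// Endpoint and payload shape per New Relic's public docs; verify before use.
async function recordBuildDuration(seconds: number): Promise<void> {
  const body = [{
    metrics: [{
      name: 'ci.build.duration',        // example metric name
      type: 'gauge',
      value: seconds,
      timestamp: Date.now(),
      attributes: { pipeline: 'frontend-app', source: 'jfrog' }, // example attributes
    }],
  }];
  await fetch('https://metric-api.newrelic.com/metric/v1', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
      'Api-Key': process.env.NEW_RELIC_LICENSE_KEY ?? '', // ingest license key
    },
    body: JSON.stringify(body),
  });
}
```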

“Today’s developers need a 360-degree view of applications to monitor and remediate both performance and security, no matter if they’re running on-premises, in the cloud, or at the edge,” said Omer Cohen, executive vice president of strategy at JFrog. “Our integration with New Relic gives DevOps, security, and operations teams the real-time insights needed to optimize their software supply chain environment and accelerate time to market.”

Preconfigured New Relic dashboards also bring a complete view of performance data, artifact usage, and security metrics from JFrog Artifactory and JFrog Xray environments alongside their telemetry data.
