OpenTelemetry Archives - SD Times (https://sdtimes.com/tag/opentelemetry/)

Elastic’s donation of Universal Profiling agent to OpenTelemetry further solidifies profiling as core telemetry signal
https://sdtimes.com/softwaredev/elastics-donation-of-universal-profiling-agent-to-opentelemetry-further-solidifies-profiling-as-core-telemetry-signal/
Fri, 07 Jun 2024 16:55:31 +0000

Elastic has announced that it is donating its Universal Profiling agent to the OpenTelemetry project, setting the stage for profiling to become a fourth core telemetry signal alongside logs, metrics, and tracing.

This follows OpenTelemetry’s announcement in March that it would be supporting profiling and was working towards having a stable spec and implementation sometime this year.

Elastic’s agent profiles every line of code running on a company’s machines, including application code, kernels, and third-party libraries. It is always running in the background and can collect data about an application over time. 

It measures code efficiency across three categories: CPU utilization, CO2 emissions, and cloud cost. According to Elastic, this helps companies identify areas where waste can be reduced or eliminated so that they can optimize their systems.
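Continuous profilers of this kind generally work by sampling call stacks at a fixed interval and aggregating the samples; hot code is simply where most samples land. A stdlib-only Python sketch of the aggregation step (the function and frame names are illustrative, not Elastic's actual implementation):

```python
from collections import Counter

def aggregate_samples(stack_samples):
    """Fold raw stack samples into per-stack counts, the data behind a
    flame graph. Each sample is a tuple of frames, outermost first."""
    frame_counts = Counter()
    for stack in stack_samples:
        frame_counts[stack] += 1
    return frame_counts

def hottest(frame_counts, n=1):
    """The stacks where the most samples landed are where CPU time goes."""
    return frame_counts.most_common(n)

# Three periodic samples of a hypothetical service:
samples = [
    ("main", "handle_request", "parse_json"),
    ("main", "handle_request", "parse_json"),
    ("main", "handle_request", "query_db"),
]
top = hottest(aggregate_samples(samples))
```

A real agent (Elastic's uses eBPF) captures these samples in-kernel with very low overhead; the aggregation principle is the same.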

Universal Profiling currently supports a number of runtimes and languages, including C/C++, Rust, Zig, Go, Java, Python, Ruby, PHP, Node.js, V8, Perl, and .NET. 

“This contribution not only boosts the standardization of continuous profiling for observability but also accelerates the practical adoption of profiling as the fourth key signal in OTel. Customers get a vendor-agnostic way of collecting profiling data and enabling correlation with existing signals, like tracing, metrics, and logs, opening new potential for observability insights and a more efficient troubleshooting experience,” Elastic wrote in a blog post.

OpenTelemetry echoed those sentiments, saying: “This marks a significant milestone in establishing profiling as a core telemetry signal in OpenTelemetry. Elastic’s eBPF based profiling agent observes code across different programming languages and runtimes, third-party libraries, kernel operations, and system resources with low CPU and memory overhead in production. Both, SREs and developers can now benefit from these capabilities: quickly identifying performance bottlenecks, maximizing resource utilization, reducing carbon footprint, and optimizing cloud spend.”



Why the world needs OpenTelemetry
https://sdtimes.com/monitor/why-the-world-needs-opentelemetry/
Thu, 02 Mar 2023 18:19:09 +0000

Observability has taken off in the past few years, and while the term has become something of a marketing buzzword, one of the main ways companies are implementing it is not with any particular vendor's solution, but with an open-source project: OpenTelemetry.

Since 2019, it has been incubating at the Cloud Native Computing Foundation, but the project has its origins in two earlier open-source projects, OpenCensus and OpenTracing, which merged to form OpenTelemetry.

“It has become now the de facto in terms of how companies are willing to instrument their applications and collect data because it gives them flexibility back and there’s nothing proprietary, so it helps them move away from data silos, and also helps connect the data end to end to offer more effective observability,” said Spiros Xanthos, SVP and general manager of observability at Splunk.

OpenTelemetry is one of the most successful open-source projects by several measures. According to Austin Parker, head of DevRel at Lightstep and a maintainer of OpenTelemetry, it is the second-highest-velocity project within the CNCF in terms of contributions and improvements, behind only Kubernetes.

According to Parker, one of the reasons why OpenTelemetry has just exploded in use is that cloud native development and distributed systems have “eaten the world.” This in turn leads to increased complexity. And what do you need when complexity increases? Observability, visibility, a way to understand what is actually going on in your systems. 


Parker feels that for the past few decades, a real struggle companies have run into is that everyone has a different tool for each part of observability. They have a tool for tracing, something for handling logs, something to track metrics, etc. 

“There’s scaling issues, lack of data portability, lack of vendor agnosticism, and a lack of ability to easily correlate these things across different dimensions and across different signal types,” said Parker. “OpenTelemetry is a project whose time has come in terms of providing a single, well-supported, vendor-agnostic solution for making telemetry a built-in part of cloud native systems.” 
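The correlation problem Parker describes is concrete: each signal type lives in its own store unless the records share an identifier. A simplified Python sketch of joining signals on a common trace ID (the record shapes here are invented for illustration, not a real wire format):

```python
import uuid

# One shared identifier is what lets a backend join otherwise
# separate signal types (spans, logs, metrics) for one request.
trace_id = uuid.uuid4().hex

spans = [{"trace_id": trace_id, "name": "GET /checkout", "duration_ms": 182}]
logs = [
    {"trace_id": trace_id, "level": "ERROR", "message": "payment timeout"},
    {"trace_id": "someothertrace", "level": "INFO", "message": "healthy"},
]

def correlate(trace_id, *signal_streams):
    """Gather every record, from any signal type, belonging to one trace."""
    return [rec for stream in signal_streams for rec in stream
            if rec.get("trace_id") == trace_id]

related = correlate(trace_id, spans, logs)
```

Without a shared convention for that identifier across tools, this join is impossible; that convention is part of what OpenTelemetry standardizes.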

Morgan McLean, director of product management at Splunk and co-founder of OpenTelemetry, has seen first-hand how the project has exploded in use as it becomes more mature. He explained that a year ago, he was having conversations with prospective users who at the time felt like OpenTelemetry didn’t meet all of their needs. Now with a more complete feature set, “it’s become a thing that organizations are now much more comfortable and confident using,” McLean explained.

Today when he meets with someone to tell them about OpenTelemetry, often they will say they’re already using it. 

“OpenTelemetry is maybe the best starting point in that it has universal support from all vendors,” said Xanthos. “It’s a very robust set of, let’s say, standards and open source implementation. So first of all, I know that it will be something that will be around for a while. It is, let’s say, the state of the art on how to instrument applications and collect data. And it’s supported universally. So essentially, I’m betting on something that is a standard accepted across the industry, that is probably going to be around for a while, and gives me control over the data.”

It’s not just the enterprise that has jumped on board with OpenTelemetry; the open-source community as a whole has also embraced it. 

Now there are a number of web frameworks, programming languages, and libraries stating their support for OpenTelemetry. For example, OpenTelemetry is now integrated into .NET, Parker explained. 

Having a healthy open-source ecosystem is crucial to success

There are a lot of vendors in the observability space, and OpenTelemetry “threatens the moat around most of the existing vendors in the space,” said Parker. It has taken a lot of work to build a community that brings in people that work for those companies and have them say “hey, here’s what we’re going to do together to make this a better experience for our end users, regardless of which commercial solution they might pick, or which open-source project they’re using,” said Parker. 

According to Xanthos, the reason an open-source standard has become the de facto and not something from a vendor is because of demand from end users. 

“End users essentially are asking vendors to have open-source standards-based data collection, so that they can have more effective observability tools, and they can have control over the data,” said Xanthos. “So because of this demand from end users, essentially all vendors either decided or were forced to support OpenTelemetry. So essentially, there is no major vendor in observability that doesn’t support it today.”

OpenTelemetry’s governance committee seats are tied to people, not companies, which is the case for some other open-source projects as well. 

“We try to be cognizant of the fact that we all work for people that have commercial interests here, but at the end of the day, we’re people and we are not avatars of our corporate overlords,” said Parker. 

For example, McLean and Parker work for two companies that compete directly with each other, but in the OpenTelemetry space they come together to do things for the project like forming end-user working groups and running events.

“It doesn’t matter who signs the paycheck,” Parker said. “We are all in this space for a reason. It’s because we believe that by enabling observability for our end users through OpenTelemetry, we are going to make their professional lives better, we’re going to help them work better, and make that world of work better.”

What’s next?

OpenTelemetry has a lot planned for the future, and recently published an official project roadmap.

The original promise of OpenTelemetry back when it was first announced was to deliver capabilities to allow people to capture distributed traces and metrics from applications and infrastructure, then send that data to a backend analytics system for processing. 

The project has largely achieved that, which presents the opportunity to sit down and ask what comes next. 

For example, logging is important to a large portion of the community, so that is one focus. “We want to be able to capture logs as an adjacent signal type to distributed traces and to metrics,” said McLean.

Another long-term focus will be capturing profiles from applications so that developers can delve into the performance of their code.

The maintainers are also working on client instrumentation. They want OpenTelemetry to be able to extract data from web, mobile, and desktop applications. 

“OpenTelemetry is very focused on back end infrastructure, back end services, the stuff that people run inside of AWS or Azure or GCP,” McLean explained. “There’s also a need to monitor the performance and get crash reports from their client applications, like front end websites or mobile applications or desktop applications, so they can judge the true end to end performance of everything that they’ve built, not just the parts that are running in various data centers.”

The promise of unified telemetry

At the end of the day, it’s important to remember the main goal of the project, which is to unify telemetry. Developers and operators are dealing with increasing amounts of data, and OpenTelemetry’s purpose is to unify those streams of data and be able to do something with it. 

Parker noted the importance of using this data to deliver great user experiences. Customers don’t care whether you’re using Kubernetes or OpenTelemetry, he said. 

“Am I able to buy this PS5? Am I able to really easily put my shopping list into this app and order my groceries for the week?” According to Parker this is what really matters to customers, not what technology is making this happen. 

“OpenTelemetry is a foundational component of tying together application and system performance with end user experiences,” said Parker. “That is going to be the next generation of performance monitoring for everyone. This isn’t focused on just the enterprise; this isn’t a particular vertical. This, to me, is going to be a 30 year project almost, in terms of the horizon, where you can definitely see OpenTelemetry being part of how we think about these questions for many years to come.” 

Why OpenTelemetry is driving a new wave of innovation on top of observability data
https://sdtimes.com/monitor/why-opentelemetry-is-driving-a-new-wave-of-innovation-on-top-of-observability-data/
Wed, 29 Sep 2021 16:10:05 +0000

The last decade has brought a progressive transition from monolithic applications that run on static infrastructure to microservices that run on highly dynamic cloud-native infrastructure. This shift has led to the rapid emergence of lots of new technologies, frameworks, and architectures and a new set of monitoring and observability tools that give engineers full visibility into the health and performance of these new systems. 

Visibility is essential to ensure that a system and its dependencies behave as expected and to identify and speed resolution of any issues that may arise. To that end, teams need to gather complete health and performance telemetry data (metrics, logs, and traces) from all those components. This is accomplished through instrumentation.
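Instrumentation, at its simplest, means wrapping code so that it emits telemetry as a side effect of running. A minimal stdlib-only Python sketch of the idea (a real agent would export the metric to a backend rather than append it to a list):

```python
import functools
import time

metrics = []  # stand-in for a real metrics exporter

def instrumented(func):
    """Record a latency metric for every call -- the essence of
    instrumenting application code."""
    @functools.wraps(func)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return func(*args, **kwargs)
        finally:
            metrics.append({
                "name": f"{func.__name__}.duration_ms",
                "value": (time.perf_counter() - start) * 1000,
            })
    return wrapper

@instrumented
def handle_request():
    return "ok"

handle_request()
```

Agents and auto-instrumentation libraries apply this same wrapping automatically to frameworks, database drivers, and HTTP clients, which is why the catalog of integrations matters so much.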

Why do we need OpenTelemetry?

For many years there have been a wide variety of open-source and proprietary instrumentation tools, such as StatsD, Nagios plugins, Prometheus exporters, Datadog integrations, and New Relic agents. Unfortunately, despite the abundance of open-source tools, the developer community and vendors have never aligned on a specific instrumentation standard such as StatsD. This makes interoperability a challenge.

The lack of instrumentation standards and interoperability has required every monitoring and observability tool to build its own collection of integrations to instrument the technologies developers use and need visibility into. For example, many monitoring tools have built integrations to instrument widely used databases like MySQL, including the Prometheus MySQL Exporter, Datadog MySQL integration, and New Relic MySQL integration.

This is also true for application code instrumentation, where New Relic, Dynatrace, Datadog and other vendors have built complex agents that automatically instrument popular application frameworks and libraries. Developers spend years building instrumentation, and it requires a sizable investment to build a large enough catalog of integrations and maintain it as new versions of the technologies monitored are released. Not only is this a very inefficient use of global developer resources, it also creates vendor lock-in since you need to re-instrument your systems if you want to change your observability tool. 

Finally, the innovation that customers most benefit from is not innovation on the instrumentation itself. It's improvements and advancements in what you can do with the data that gets collected. The requirement for new tools to make a large investment in instrumentation (i.e., the area that delivers little benefit to end users) before entering the market has created a big barrier to entry and has severely limited innovation in the space.

This is all about to dramatically change, thanks to OpenTelemetry: an emerging open-source standard that is democratizing instrumentation. 

OpenTelemetry has already gained a lot of momentum, with support from all major observability vendors and cloud providers, and many end users contributing to the project. It has become the second most active CNCF project in terms of contributions, behind only Kubernetes. (It has also recently been accepted as a CNCF incubating project, which reiterates its importance to engineering communities.)

Why is OpenTelemetry so popular?

OpenTelemetry approaches the instrumentation “problem” in a different way. Like other (usually proprietary) attempts, it provides a lot of out-of-the-box instrumentation for application frameworks and infrastructure components, as well as SDKs for developers to add their own instrumentation.

Unlike other instrumentation frameworks, OpenTelemetry covers metrics, traces, and logs, and it defines an API, semantic conventions, and a standard communication protocol (the OpenTelemetry Protocol, or OTLP). Moreover, it is completely vendor agnostic, with a plugin architecture to export data to any backend.
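The plugin-exporter idea can be sketched in a few lines: instrumented code hands finished spans to whatever exporters are configured, so swapping backends never touches the instrumentation. A simplified Python illustration, loosely modeled on (but much smaller than) the real OpenTelemetry exporter interface:

```python
from abc import ABC, abstractmethod

class SpanExporter(ABC):
    """Backend-neutral export interface. Concrete exporters are the
    'plugins'; the instrumentation never sees them directly."""
    @abstractmethod
    def export(self, spans): ...

class ConsoleExporter(SpanExporter):
    def export(self, spans):
        for s in spans:
            print(f"{s['name']}: {s['duration_ms']}ms")

class InMemoryExporter(SpanExporter):
    def __init__(self):
        self.received = []
    def export(self, spans):
        self.received.extend(spans)

def flush(spans, exporters):
    # The instrumented code never changes; only the exporter list does.
    for exp in exporters:
        exp.export(spans)

mem = InMemoryExporter()
flush([{"name": "GET /users", "duration_ms": 12}], [mem])
```

Replacing `InMemoryExporter` with, say, an OTLP exporter pointed at a different vendor is a configuration change, not a re-instrumentation, which is the point of the plugin architecture.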

Even more, OpenTelemetry’s goal is for developers who build technologies for others to use (e.g., application frameworks, databases, web servers, and service meshes) to bake instrumentation directly into the code they produce. This will make instrumentation readily available to anyone who uses the code in the future and avoid the need for another developer to learn the technology and figure out how to write instrumentation for it (which in some cases requires the use of complex techniques like bytecode injection.)

OpenTelemetry unlocks a lot of new value to all developers:

  1. Interoperability. Analyze the entire flow of requests to your application as they go through your microservices, cloud services, and third party SaaS in your observability tool of choice. Effortlessly send your observability data to a data warehouse to be analyzed alongside your business data. OpenTelemetry’s common API, data semantics, and protocol make all of the above – and more – possible, out-of-the-box.
  2. Ubiquitous instrumentation. Thanks to a much larger community working together vs. siloed duplicative efforts, everyone benefits from the broadest, deepest, and highest quality instrumentation available.
  3. Future-proof. You can instrument your code once and use it anywhere since the vendor-agnostic approach enables you to send data to and run analysis in your backend of choice. Before OpenTelemetry, changing observability backends typically required a time-consuming reinstrumentation of your system.
  4. Lower resource footprint. More and more instrumentation is directly baked into frameworks and technologies instead of injected, resulting in reduced CPU and memory utilization. 
  5. Improved uptime. With OpenTelemetry’s shared metadata, observability tools deliver better correlation between metrics, traces, and logs, so you troubleshoot and resolve production problems faster. 
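The interoperability in item 1 rests on shared wire formats. The W3C `traceparent` header, which OpenTelemetry uses for context propagation, is a good example: a two-digit version, a 32-hex-character trace ID, a 16-hex-character parent span ID, and two hex trace flags. A minimal Python sketch of building and parsing it (simplified; the real spec has additional validation rules):

```python
import re
import secrets

def make_traceparent(trace_id=None, span_id=None, sampled=True):
    """Build a W3C traceparent header: version-traceid-parentid-flags."""
    trace_id = trace_id or secrets.token_hex(16)  # 32 lowercase hex chars
    span_id = span_id or secrets.token_hex(8)     # 16 lowercase hex chars
    return f"00-{trace_id}-{span_id}-{'01' if sampled else '00'}"

def parse_traceparent(header):
    """Extract the trace context a downstream service needs."""
    m = re.fullmatch(
        r"(\d{2})-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})", header)
    if not m:
        raise ValueError("malformed traceparent")
    _version, trace_id, span_id, flags = m.groups()
    return {"trace_id": trace_id, "span_id": span_id,
            "sampled": flags == "01"}

header = make_traceparent()
ctx = parse_traceparent(header)
```

Because every participating service and vendor agrees on this format, a request's spans can be stitched into one trace no matter which backends are involved.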

More importantly, companies no longer have to devote time, people, and money to developing their own product-specific instrumentation, and can focus instead on improving the developer experience. With access to a broad, deep, and high-quality observability data set of metrics, traces, and logs, and no multi-million-dollar investment in instrumentation required, a new wave of solutions that leverage observability data is about to arrive.

Let’s look at some examples to demonstrate what OpenTelemetry will – and is already – enabling developers to do:

  • AWS is embedding OpenTelemetry instrumentation across their services. For example, they have released automatic trace instrumentation for Java Lambda functions with no code changes. This gives developers immediate visibility into the performance of their Java code and enables them to send any collected data to their backend of choice. As a result, they’re not tied to a specific vendor and can send the data to multiple backends to solve for different use cases.
  • Kubernetes and the popular GraphQL Apollo Server have added initial OpenTelemetry tracing instrumentation to their code. This provides efficient out-of-the-box instrumentation that’s directly embedded in the code through the Go and JavaScript OpenTelemetry libraries, and the instrumentation is written by the experts that have built those technologies. 
  • Jenkins, the open-source CI/CD server, offers an OpenTelemetry plugin to monitor and troubleshoot jobs using distributed tracing. This gives developers visibility into where time in jobs is spent and where errors are occurring to help troubleshoot and improve those jobs.
  • Rookout, a debugger for cloud-native applications, has integrated OpenTelemetry traces to provide additional context within the debugger itself. This helps developers understand the entire flow of the request traversing the code they are troubleshooting, with additional context from tags in the OpenTelemetry data.
  • Promscale lets developers store their OpenTelemetry trace data inside Postgres via OTLP. Then, they can use powerful SQL queries to analyze their traces and correlate them with other business data that’s stored in Postgres. For example, if you develop a SaaS service that uses a database, you could analyze database query response time by customer ARR band to ensure your most valuable customers – who are most likely to suffer from bad query performance, since they store more data in your application – are seeing the best possible performance with your product. 
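The SQL-over-traces idea in the last example can be illustrated end to end with SQLite, which ships with Python: put spans in a SQL table, then analyze them with plain SQL. The schema and data below are invented for illustration and are not Promscale's actual layout:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE spans (
    trace_id TEXT, service TEXT, name TEXT, duration_ms REAL)""")
conn.executemany(
    "INSERT INTO spans VALUES (?, ?, ?, ?)",
    [("t1", "checkout", "POST /pay", 182.0),
     ("t1", "db", "SELECT orders", 41.0),
     ("t2", "checkout", "POST /pay", 95.0)])

# Which service spends the most time per span on average?
rows = conn.execute("""
    SELECT service, AVG(duration_ms) AS avg_ms
    FROM spans
    GROUP BY service
    ORDER BY avg_ms DESC""").fetchall()
```

Once trace data sits in a relational store, joining it against business tables (customers, plans, revenue) is just another SQL join, which is exactly the correlation the article describes.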

OpenTelemetry is still being (very!) actively developed, so this is just the beginning. While many of the above products and projects will improve the lives of engineers who operate production environments, there is a greenfield of possibilities. With interoperability and ubiquitous instrumentation, there’s massive potential for existing companies to improve their existing products or develop new tools – and for new upstarts and entrepreneurs to leverage OpenTelemetry instrumentation to solve new problems or existing problems with new innovative approaches.

Learn more about OpenTelemetry at KubeCon + CloudNativeCon Oct. 11-15.

OpenTelemetry .NET 1.0 released
https://sdtimes.com/msft/opentelemetry-net-1-0-released/
Tue, 23 Mar 2021 01:59:16 +0000

Microsoft has announced the 1.0 release of OpenTelemetry .NET, the canonical distribution of the OpenTelemetry SDK implementation in .NET.

The 1.0 release includes OpenTelemetry .NET APIs: Tracing API, Baggage API, Context API and Propagators API. Developers will also have access to an SDK that provides controls for sampling, processing and exporting as well as documentation, which includes samples and guides for plugin authors. The release also includes exporters to Jaeger, Zipkin and the OpenTelemetry Protocol (OTLP). 
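Of those APIs, Baggage is perhaps the least self-explanatory: it carries application-defined key/value pairs along with a request's context so downstream services can read them. A simplified, dict-based Python sketch of the concept (the real OpenTelemetry APIs differ in detail):

```python
# Contexts are treated as immutable, as in OpenTelemetry: setting a
# baggage entry returns a new context rather than mutating the old one.

def set_baggage(context, key, value):
    baggage = dict(context.get("baggage", {}))
    baggage[key] = value
    return {**context, "baggage": baggage}

def get_baggage(context, key):
    return context.get("baggage", {}).get(key)

ctx = set_baggage({}, "customer.tier", "premium")
# In practice the context crosses service boundaries via headers;
# here we just hand the dict to the "downstream" side.
downstream_ctx = dict(ctx)
tier = get_baggage(downstream_ctx, "customer.tier")
```

A downstream service can then, for example, tag its own spans with `customer.tier` without ever talking to the service that set it, which is what makes baggage useful for cross-service attribution.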

The specification for .NET follows February’s announcement that the OpenTelemetry specification reached v1.0, which offered stability guarantees for distributed tracing. 


OpenTelemetry was the result of OpenTracing and OpenCensus merging in 2019. 

“As modern application environments are polyglot, distributed, and increasingly complex, observing your application to identify and react to failures has become challenging,” Sourabh Shirhatti, a senior program manager at Microsoft, wrote in a blog post. “By standardizing how different applications and frameworks collect and emit observability telemetry, OpenTelemetry aims to solve some of the challenges posed by these environments.”

Shirhatti went on to explain the main benefits of the specification: it is interoperable, allowing users to monitor their distributed application with complete interoperability; it’s vendor neutral so that as users choose their telemetry backend, they don’t have to change their instrumentation code; and OpenTelemetry is future proof so that when newer libraries and frameworks emerge, users can easily monitor them using shared instrumentation libraries.

“We’re super excited to continue to improve the observability of all applications built on .NET and OpenTelemetry is a giant stride for us in that direction,” Shirhatti added. 
