
First, we look at the observability capabilities deployed, how many tools were used for those capabilities, how respondents detect software and system interruptions, how unified or siloed their telemetry data was, the observability practice characteristics employed, and the annual observability spend at the time of the survey.

Current deployment highlights:

  • 85% had 5+ capabilities currently deployed
  • 63% toggled between 4+ observability tools
  • 45% spent $500K+ per year on observability
  • 33% had achieved full-stack observability
  • 25% learned about interruptions through less efficient methods

Observability capabilities deployed

Capabilities, not to be confused with characteristics or tools, are specific components of observability. We asked survey respondents to tell us which of 17 different observability capabilities they deployed. Below we review the results by capability, number of capabilities, and how many have achieved full-stack observability.

By capability

The survey respondents indicated that their organizations’ deployment of individual observability capabilities ranged from as high as 75% (security monitoring) to as low as 23% (synthetic monitoring). We found that:

  • About three-quarters had deployed security monitoring and network monitoring, which were once again neck and neck for the top position, with security narrowly overtaking network for the lead. Both increased by more than 30% year-over-year (YoY).
  • Dashboard deployment saw the biggest change, jumping from ninth to fourth place with a 40% increase YoY.
  • Deployment for most of the more established capabilities increased YoY, while it decreased YoY for most of the newer or emerging capabilities (serverless, ML model performance, and Kubernetes monitoring).

View highlights for each capability.

Regional insight
North America had notably higher current deployment for security, network, database, and infrastructure monitoring, plus alerts, dashboards, and log management.

Organization size insight
Large organizations had the highest deployment rates for all capabilities, while small organizations had the lowest.

Deployed capabilities in 2023 compared to 2022

By number of capabilities

When we looked at how many capabilities the survey respondents said their organizations deploy, we found:

  • Organizations deployed more observability capabilities in 2023 than in 2022: the counts followed a normal distribution centered around 6 capabilities in 2022, compared to 9–10 in 2023.
  • More than half (56%) had 6–11 deployed (12% had 1–4, 85% had 5+, and 42% had 10+).
  • Only 1.5% indicated that their organization had all 17 observability capabilities deployed (down from 3% in 2022).

These results show that while most organizations still do not monitor their full tech stacks, this is changing, with more capabilities deployed YoY and more planned for the future.

View future deployment plans.

Number of deployed capabilities in 2023 compared to 2022
85% had 5+ capabilities deployed

Regional insight
Europe as a region had a normal distribution centered lower than other regions (8 current capabilities, compared to 9–10 for Asia Pacific and 10 for North America).

Organization size insight
Large organizations were the most likely to have 10+ capabilities deployed (48% compared to 29% for small and 35% for midsize).

Industry insight
IT/telco organizations tended to have the highest deployment rates, while nonprofits tended to have the lowest.

Full-stack observability prevalence

Based on our definition of full-stack observability, a third (33%) of survey respondents’ organizations had achieved it, which is 58% more than last year.

While these results indicate that organizations are still not monitoring or fully observing large parts of their tech stacks, they’re making progress.
Notably, organizations that had achieved full-stack observability had fewer outages, a faster mean time to detection (MTTD), a faster mean time to resolution (MTTR), lower outage costs, and a higher median annual return on investment (ROI) than those that had not.

View additional findings about the advantages of achieving full-stack observability.

Percentage of organizations that do and don’t have full-stack observability in 2023 compared to 2022
67% had NOT achieved full-stack observability

Regional insight
European organizations were less likely to have achieved full-stack observability (28%), while Asia Pacific and North American organizations were more likely (35% and 34% respectively).

Organization size insight
Large organizations were the most likely to have achieved full-stack observability (38%), compared to 27% of midsize and 22% of small organizations.

Industry insight
IT/telco organizations were the most likely to have achieved full-stack observability (43%), followed by financial services/insurance (38%) and industrials/materials/manufacturing (36%). Nonprofits were the least likely (4%), followed by education (19%).

Number of monitoring tools

When asked how many tools (not to be confused with capabilities or characteristics) they use to monitor the health of their systems, survey respondents overwhelmingly reported using more than one.

  • Most (86%) used two or more tools (down 9% from 2022), 63% used four or more (down 23% from 2022), and about one in five (19%) used eight or more (down 14% from 2022).
  • The most common numbers of tools reported (the modes) were two and five (13% each), down from seven tools in 2022, and the mean was 5.1 tools, down from 5.9 in 2022, a 14% decrease (see the arithmetic after this list).
  • Only 5% used just one tool to satisfy their observability needs (a 171% increase from 2022).
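
As a quick check, the stated 14% decrease in the mean follows directly from the two reported figures:

(5.9 − 5.1) / 5.9 ≈ 0.136, or roughly 14%
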
The state of observability today is primarily multi-tool, and therefore fragmented and likely inherently complex to manage. In fact, 25% of survey respondents cited too many monitoring tools as a primary challenge preventing them from achieving full-stack observability.

However, these results suggest organizations are using fewer tools than they did last year: the proportion of respondents using a single tool has more than doubled, and the average number of tools has gone down by almost one. This shift toward fewer tools, together with the fact that 54% said they prefer a single, consolidated platform, reinforces a move toward tool consolidation.

Number of tools used for observability capabilities in 2023 compared to 2022
63% still toggled between 4+ observability tools

Regional insight
Asia Pacific as a whole tended to use more tools, with 24% using 8+ tools compared to 19% for Europe and 12% for North America. Those from North America were the most likely to use a single tool (7%, compared to 3% for Asia Pacific and 5% for Europe).

Organization size insight
In general, large organizations were more likely to use more tools than small and midsize organizations, possibly because they tend to have more business units that operate autonomously.

Industry insight
Respondents from the IT/telco, energy/utilities, and retail/consumer industries were generally more likely to use more tools. Those from government, healthcare/pharma, education, and nonprofits were more likely to use a single tool.

Detection of software and system interruptions

We asked respondents how their organization primarily learns about software and system interruptions. The survey results showed that:

  • Multiple monitoring tools was still the top answer (58%) by a wide margin, jumping nearly 12 percentage points YoY (a 25% relative increase; see the arithmetic after this list) as respondents shifted to it from every other answer choice, including a 25% YoY decrease for one observability platform.
  • Almost three-quarters (73%) said they primarily learn about interruptions through one or more monitoring tools (up from 67% in 2022).
  • Conversely, a quarter (25%) said they still learn about interruptions with less efficient methods, including manual checks, complaints, or incident tickets (down from 33% in 2022).
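
The difference between percentage points and percent in the first item is easy to check: if 58% chose multiple monitoring tools in 2023 after a jump of nearly 12 percentage points, the 2022 figure was about 46%, and

12 / 46 ≈ 0.26

which is consistent with the stated 25% relative increase, since the jump was slightly under 12 points.
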
Organizations are relying less on inefficient methods like manual checks, complaints, and incident tickets to learn about interruptions, and more on automated monitoring tools.

That a higher percentage primarily learn about interruptions through multiple monitoring tools makes sense given the large number of monitoring tools respondents deployed for observability purposes. These results suggest that containing tool sprawl is an ongoing challenge, and that organizations need a solid strategy for how many different tools make sense in light of the desire to control spending.

What’s more, as last year, there was a clear connection between how respondents primarily learned about interruptions and how unified their telemetry data was. Generally, the more unified the telemetry data, the more likely notice of interruptions was to come through one observability platform.

How respondents learned about software and system interruptions in 2023 compared to 2022
25% still learned about interruptions through less efficient methods

Summary of how respondents learned about interruptions

Regional insight
Respondents surveyed in Europe were the most likely to say they learn about interruptions with one observability platform, those in North America with multiple monitoring tools, and those in Asia Pacific with manual checks, complaints, and incident tickets.

Organization size insight
Those from midsize and large organizations were the most likely to learn about interruptions with multiple monitoring tools, while those from small organizations were the most likely to use manual checks, complaints, or incident tickets.

Unified or siloed telemetry data

When we asked survey respondents about how unified or siloed their organizations’ telemetry data is, we found:

  • Collectively, 40% had more siloed telemetry data (up 22% from 2022), compared to 37% with more unified telemetry data (down 25% from 2022), a roughly even split (see the arithmetic after this list).
  • Somewhat siloed was the top choice (27%), and 13% said they had mostly siloed telemetry data (they silo telemetry data in discrete data stores).
  • Only 14% said they had mostly unified telemetry data (they unify telemetry data in one place).
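
The collective figures follow from the individual answer choices. On the siloed side, 13% (mostly siloed) + 27% (somewhat siloed) = 40%. On the unified side, 37% total minus 14% mostly unified leaves roughly 23% for the remaining unified option (presumably “somewhat unified”, assuming the unified total mirrors the siloed side with a “mostly” and a “somewhat” choice).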

Those with eight or more tools were more likely to say they have more siloed telemetry data (46%) compared to those with a single tool (42%).

Respondents with more unified telemetry data were more likely to have fewer high-business-impact outages, a faster MTTD, and a faster MTTR than those with more siloed telemetry data:

  • Two-thirds (66%) said they experience high-business-impact outages 2–3 times per month or less often, compared to 55% of those with more siloed telemetry data.
  • More than half (51%) said they detect such outages in 30 minutes or less, compared to 47% of those with more siloed telemetry data.
  • Almost a third (32%) said they resolve such outages in 30 minutes or less, compared to 30% of those with more siloed telemetry data.

Notably, among the 40% who had more siloed data, 68% indicated that they strongly prefer a single, consolidated platform.

Telemetry data is more siloed than unified this year. Because siloed and fragmented data make for a painful user experience (expensive, lacking context, slow to troubleshoot), the more silos an organization has, the stronger its preference to consolidate. Perhaps the respondents who feel the most pain from juggling data across silos long for more simplicity in their observability solutions. The data also shows that more unified data leads to more desirable service-level metrics.

Summary of unified or siloed telemetry data

Regional insight
Respondents surveyed in Europe and North America indicated more siloed data (both 43%) than unified (both about a third). Conversely, those in Asia Pacific indicated more unified data (41%) than siloed (36%).

Organization size insight
Large organizations were the most likely to have more unified data (38%) compared to small (34%) and midsize (35%). Small organizations were the most likely to have more siloed data (43%) compared to midsize (38%) and large (40%).

Industry insight
Healthcare/pharma and services/consulting were the most likely to have siloed data (both 48%), while government (50%) and nonprofits (46%) were the most likely to have unified data.

Observability practice characteristics employed

We asked survey respondents which of 15 observability practice characteristics (not to be confused with capabilities or tools) they had employed. Below we review the results by characteristic and by the number of characteristics, as well as how many have achieved a mature observability practice.

By characteristic

The survey respondents indicated that their organizations’ employment of individual observability practice characteristics ranged from as high as 46% (improved collaboration) to as low as 21% (ingestion of high-cardinality data). We found that:

  • In general, fewer respondents said they were employing most observability practice characteristics (10 decreased and five increased YoY).
  • The top three answers were the same as last year, with nearly half (46%) citing improved collaboration across teams to make decisions related to the software stack.

Notably, the share of respondents who said their telemetry data includes business context to quantify the business impact of events and incidents decreased by 10 percentage points, or 27%, YoY. And the share who said their organization captures telemetry across their full tech stack decreased by 13% YoY. These decreases are concerning, as both characteristics are important for achieving business observability.

These results imply that because most observability tools lack business context, many organizations struggle to quantify the business impact of technology and often analyze technology separately from the business. So it’s not a given that they will naturally progress to adding business context; it requires intention.

Observability platforms can support business metrics; it just requires a different mindset. There’s an opportunity for organizations to leverage observability as a business enabler rather than just an insurance policy for resolving problems. This requires top-down thinking (products, services, customers, business processes) rather than just a bottom-up (technology, speeds and feeds) approach.

46% said observability improves collaboration across teams

Industry insight
Those from industries where outages are a bigger problem were more likely to say their telemetry data includes business context to quantify the business impact of events and incidents, including energy/utilities (33%), retail/consumer (32%), and IT/telco (31%). This is likely because business context helps prioritize where to focus.

Observability practice characteristics employed in 2023 compared to 2022

By number of characteristics

When we looked at how many observability practice characteristics the survey respondents said their organizations employ, we found:

  • Just 1% of respondents said they had all 15 characteristics employed, about the same as last year.
  • Only 4% of respondents said they had none employed (up from 1% last year).
  • Nearly half (49%) had 3–5 employed, slightly less than last year; overall, 50% had 1–4 employed, 46% had 5+, and 9% had 10+ (together with the 4% who had none, the first two sum to 100%).

Number of observability practice characteristics employed in 2023 compared to 2022

Role insight
Notably, executives were much more likely to employ 5+ characteristics (61%) than non-executive managers (45%) and practitioners (40%).

Regional insight
The Asia-Pacific region as a whole was more likely to employ 5+ characteristics (49%) than Europe (46%) or North America (41%).

Organization size insight
Those from large organizations were much more likely to employ 5+ characteristics (51%) than those from small (26%) and midsize (42%) organizations.

Industry insight
Those from IT/telco were the most likely to employ 5+ characteristics (58%), followed by those from retail/consumer (46%) and financial services/insurance (45%). Those from education were the least likely (29%).

Mature observability practice prevalence

Based on our definition of a mature observability practice, only 5% of survey respondents had a mature observability practice (same as last year). Those with mature observability practices also tended to have more observability practice characteristics employed: 77% had 10+, including 25% that had all 15.

Notably, all 85 respondents (100%) who had mature observability practices indicated that observability improves revenue retention by deepening their understanding of customer behaviors, compared to 31% of those whose practices were less mature (a similar pattern to 2022).
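
Incidentally, these two numbers imply the overall sample size: if the 85 respondents with mature observability practices represent 5% of the total, the survey covered roughly 85 / 0.05 = 1,700 respondents.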

Only 3% of respondents had both a mature observability practice and full-stack observability. Nearly two-thirds (65%) had neither a mature observability practice nor full-stack observability.
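
These two figures are consistent with the headline rates by inclusion-exclusion: with 33% achieving full-stack observability, 5% having a mature practice, and 3% having both, the share with neither is 100% − (33% + 5% − 3%) = 65%.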

Mature observability practice characteristics employed in 2023 compared to 2022
5% had a mature observability practice

Regional insight
The European region as a whole was the least likely to have a mature observability practice (3%) compared to Asia Pacific and North America (both 5%).

Organization size insight
Large organizations were more likely to have a mature observability practice (6%) than small (1%) and midsize (3%) organizations.

Industry insight
Respondents from the retail/consumer industry were the most likely to have a mature observability practice (8%), followed by those from services/consulting (7%). Those from energy/utilities and education were the least likely (both 1%).

“We’re not running a recreational website for the high school volleyball team. We’re running a very large enterprise with a ton of moving parts, so that’s why we need to have the correct resources and correlated telemetry to analyze this. And we use all the best practices to do this. To have a good handle on over a thousand servers, you need to have the correct implementation.”

Annual observability spend

When we asked survey-takers how much their organization currently spends on observability per year, we found that:

  • Only 14% spend less than $100,000, while 77% spend $100,000 or more.
  • Nearly two-thirds (64%) spend $100,000–$2.5 million.
  • Almost half (45%) spend $500,000 or more.
  • About three in ten (29%) spend $1 million or more.
  • Just 13% spend $2.5 million or more.

Organizations with more mature observability practices (by our definition) tended to spend more on observability: 59% of those with mature practices spend $500,000 or more per year, compared to 45% of those with less mature practices.

The more organizations spend on observability, the more likely they were to say observability is more for achieving core business goals, and that their MTTR has improved to some extent since adopting observability.

Notably, those who said their organization uses just one observability tool were the most likely to say they spend less than $100,000 per year on observability (44% compared to only 13% who use two or more tools). Conversely, those who said they use eight or more tools were the most likely to say they spend $1 million or more per year (49% compared to just 5% with one tool). And those who said they use 10 or more tools were the most likely to say they spend $5 million or more per year (14% compared to 0% for those with one tool).

In addition, those who said their organization spends at least $100,000 per year on observability were the most likely to cite too many monitoring tools and cost (too expensive) as primary challenges preventing them from achieving full-stack observability (about a quarter each).

These results imply that investing in observability leads to better business outcomes, and that using a single tool for observability is more cost-effective than using multiple tools. If organizations consolidate some or all of their tools, the extra spend on multiple tools could become savings instead.

Learn about their plans to get the most value out of their observability spend for next year and their median annual ROI.

45% spend $500K+ per year on observability

Role insight
Practitioners were more likely than IT decision makers (ITDMs) to say they aren’t sure how much their organization spends on observability per year (11% vs. 4%).

Regional insight
Those in Asia Pacific tend to spend more on observability per year—52% spend $500K+ per year compared to 47% in Europe and 32% in North America.

Organization size insight
Observability spend is correlated with annual revenue. Generally, the higher the annual revenue for an organization, the more it spends on observability per year. For example, 53% of those from large organizations spend $500K+ per year compared to 41% for midsize and 23% for small.

Industry insight
Those most likely to spend $500K+ per year were from energy/utilities (68%), financial services/insurance (49%), retail/consumer (49%), and IT/telco (45%). The most likely to spend $100K or less were from services/consulting (21%), education (20%), and healthcare/pharma (20%).

Annual observability spend proportion by annual revenue