
# Latency Debugging Flowchart: From Slow Response to Root Cause

Your AI application just slowed to a crawl. Users are complaining, and the on-call engineer is staring at a dashboard full of metrics but no answers. This systematic debugging flowchart will guide you from “it’s slow” to “here’s the exact bottleneck” in minutes, not hours.

Latency directly impacts user experience, operational costs, and system reliability. According to Azure OpenAI documentation, latency of a completion request varies based on four primary factors: the model, the number of tokens in the prompt, the number of tokens generated, and the overall load on the deployment and system. A one-second delay in response time can reduce user satisfaction by up to 16% and increase abandonment rates significantly.

The financial impact is equally severe. OpenAI’s guidance confirms that latency is mostly influenced by two factors: the model and the number of tokens generated. This means inefficient prompts or misconfigured parameters can multiply your costs while degrading performance. A production system generating 1M tokens daily with unnecessary overhead could waste thousands of dollars monthly while delivering poor user experiences.
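As a rough mental model (an approximation, not a vendor formula), end-to-end generation latency splits into time-to-first-token plus per-output-token generation time. A tiny sketch, with illustrative numbers:

```python
def estimate_latency_ms(ttft_ms: float, output_tokens: int, tokens_per_second: float) -> float:
    """Rough estimate: time to first token plus time spent generating output tokens.
    All three inputs are assumptions you should measure for your own deployment."""
    generation_ms = (output_tokens / tokens_per_second) * 1000
    return ttft_ms + generation_ms

# Example (illustrative values): 300 ms TTFT, 150 output tokens at 60 tokens/s ~= 2800 ms
print(estimate_latency_ms(300, 150, 60))
```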

This flowchart provides a decision tree for diagnosing latency issues. Follow each phase sequentially, measuring and eliminating variables as you go.
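Since the flowchart itself is visual, here is the same decision tree as nested conditionals, a minimal sketch whose thresholds mirror the Quick Reference table later in this post (treat them as starting points, not hard rules):

```python
def next_step(tokenization_ms: float, ttft_ms: float, total_ms: float) -> str:
    """Walk the decision tree: check the earliest phase first, then move downstream."""
    if tokenization_ms > 50:          # critical per the thresholds table
        return "Client side: shrink or cache the prompt, review tokenizer usage"
    if ttft_ms > 800:                 # streaming time-to-first-token is critical
        return "Deployment side: check load and capacity, consider a faster model or provisioned throughput"
    if total_ms > 3000:               # total request time is critical
        return "Generation side: lower max_tokens, add stop sequences, enable streaming"
    return "No single phase is critical: profile network, parsing, and downstream code"
```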

Before diving into diagnostics, establish baseline metrics:

```python
import time

import tiktoken
from openai import OpenAI


def measure_latency_phases(prompt, model="gpt-4o-mini"):
    client = OpenAI()

    # Phase 1: Tokenization
    start = time.time()
    encoder = tiktoken.encoding_for_model(model)
    prompt_tokens = len(encoder.encode(prompt))
    tokenization_time = time.time() - start

    # Phase 2: API Request
    start = time.time()
    response = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150,
    )
    api_time = time.time() - start

    # Phase 3: Response Parsing
    start = time.time()
    content = response.choices[0].message.content
    output_tokens = len(encoder.encode(content))
    parsing_time = time.time() - start

    return {
        "tokenization_ms": tokenization_time * 1000,
        "api_ms": api_time * 1000,
        "parsing_ms": parsing_time * 1000,
        "total_ms": (tokenization_time + api_time + parsing_time) * 1000,
        "prompt_tokens": prompt_tokens,
        "output_tokens": output_tokens,
    }
```
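A quick usage sketch (the prompt string is just an illustration; requires `OPENAI_API_KEY` in your environment):

```python
if __name__ == "__main__":
    metrics = measure_latency_phases("Summarize the benefits of response streaming in two sentences.")
    for key in ("tokenization_ms", "api_ms", "parsing_ms", "total_ms"):
        print(f"{key}: {metrics[key]:.1f}")
    print(f"prompt_tokens={metrics['prompt_tokens']}, output_tokens={metrics['output_tokens']}")
```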

To operationalize this flowchart, implement a three-tier monitoring strategy: local debugging, integration testing, and production observability. The key is to instrument your LLM calls with phase-level timing without significantly impacting performance.

For Azure OpenAI deployments, leverage the platform’s built-in metrics: the `AzureOpenAIRequests` metric tracks call volume, while `TimeToResponse` provides streaming latency measurements (see learn.microsoft.com). For non-streaming requests, measure end-to-end latency at your API gateway.

When debugging locally, wrap your client calls with timing decorators. In production, implement middleware that captures these metrics and forwards them to your observability platform. This creates a continuous feedback loop for identifying degradation.
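For the local tier, a decorator is enough to get call-level timings without restructuring call sites. A minimal sketch, assuming you route the numbers through standard logging (swap in your own sink in production):

```python
import functools
import logging
import time

logger = logging.getLogger("llm.latency")


def timed_llm_call(fn):
    """Wrap any function that performs an LLM request and log its wall-clock latency."""
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        start = time.perf_counter()
        try:
            return fn(*args, **kwargs)
        finally:
            elapsed_ms = (time.perf_counter() - start) * 1000
            logger.info("%s took %.1f ms", fn.__name__, elapsed_ms)
    return wrapper


@timed_llm_call
def ask(client, prompt: str):
    # Illustrative call; any LLM request works the same way
    return client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
        max_tokens=150,
    )
```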

The following production-ready Python implementation demonstrates a complete latency debugging workflow. It integrates tokenization analysis, API timing, and bottleneck detection with actionable recommendations.

```python
import time
from typing import Dict

import tiktoken
from openai import OpenAI


class LatencyDebugger:
    def __init__(self, model: str = "gpt-4o-mini"):
        self.model = model
        self.client = OpenAI()
        self.encoder = tiktoken.encoding_for_model(model)

    def diagnose(self, prompt: str, max_tokens: int = 150) -> Dict:
        """Comprehensive latency diagnosis across all phases."""
        results = {}

        # Phase 1: Tokenization
        start = time.time()
        prompt_tokens = len(self.encoder.encode(prompt))
        tokenization_time = time.time() - start
        results['tokenization'] = {
            'time_ms': tokenization_time * 1000,
            'tokens': prompt_tokens,
            'rate': prompt_tokens / tokenization_time if tokenization_time > 0 else 0,
        }

        # Phase 2: API Request
        start = time.time()
        response = self.client.chat.completions.create(
            model=self.model,
            messages=[{"role": "user", "content": prompt}],
            max_tokens=max_tokens,
            stream=True,
        )

        # Stream to measure time-to-first-token
        first_token_time = None
        output_content = []
        for chunk in response:
            # Guard against chunks without choices or with empty deltas
            if chunk.choices and chunk.choices[0].delta.content:
                if first_token_time is None:
                    first_token_time = time.time()
                output_content.append(chunk.choices[0].delta.content)
        api_time = time.time() - start

        output_text = ''.join(output_content)
        output_tokens = len(self.encoder.encode(output_text))
        results['api'] = {
            'total_time_ms': api_time * 1000,
            'ttft_ms': (first_token_time - start) * 1000 if first_token_time else None,
            'output_tokens': output_tokens,
            'tokens_per_second': output_tokens / api_time if api_time > 0 else 0,
        }

        # Phase 3: Bottleneck Detection
        results['bottleneck'] = self._identify_bottleneck(results)
        return results

    def _identify_bottleneck(self, results: Dict) -> str:
        """Identify the primary bottleneck based on thresholds."""
        token_time = results['tokenization']['time_ms']
        api_time = results['api']['total_time_ms']
        if token_time > 50:
            return "tokenization"
        elif api_time > 2000:
            return "api"
        else:
            return "none"
```
## Common Pitfalls
<Aside type="danger" title="Avoid These Production Mistakes">
Based on verified production data, these are the most common causes of latency degradation:
</Aside>
<Tabs>
<TabItem label="Workload Mixing">
**Problem:** Running multiple workload types on the same endpoint creates unpredictable latency.
**Why it happens:** Short calls wait for longer completions during batching, and competing workloads reduce cache hit rates [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/latency).
**Fix:** Deploy separate endpoints for different workload patterns (e.g., chat vs. batch processing).
</TabItem>
<TabItem label="Over-provisioning max_tokens">
**Problem:** Setting `max_tokens` excessively high increases latency even when generation is shorter.
**Why it happens:** The model reserves compute time for the full `max_tokens` value upfront, then releases unused quota after completion [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/latency).
**Fix:** Set `max_tokens` as low as practical and use stop sequences to prevent over-generation (see the sketch just after these tabs).
</TabItem>
<TabItem label="Ignoring Prompt Size">
**Problem:** Treating prompt tokens as negligible.
**Why it happens:** While each prompt token adds less time than each output token, large prompts (1000+ tokens) still significantly impact total latency [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/latency).
**Fix:** Implement prompt compression, use caching, or consider context-aware summarization.
</TabItem>
<TabItem label="Missing Content Filter Overhead">
**Problem:** Latency budgets don't account for Azure OpenAI's content filtering.
**Why it happens:** Content filtering runs classification models on both prompt and completion, adding measurable latency [learn.microsoft.com](https://learn.microsoft.com/en-us/azure/ai-services/openai/how-to/latency).
**Fix:** For low-risk use cases, request modified content filtering policies. Measure baseline with/without filtering.
</TabItem>
</Tabs>
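As referenced in the `max_tokens` tab above, a minimal sketch of that fix: cap `max_tokens` near your real output budget and add a stop sequence. The specific values here are illustrative, not recommendations:

```python
from openai import OpenAI

client = OpenAI()

response = client.chat.completions.create(
    model="gpt-4o-mini",
    messages=[
        {"role": "system", "content": "Answer in at most three bullet points."},
        {"role": "user", "content": "Why is my chat endpoint slower at peak hours?"},
    ],
    max_tokens=120,      # sized to the expected answer, not the model maximum
    stop=["\n\n\n"],     # stop early instead of generating trailing filler
)
print(response.choices[0].message.content)
```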
## Quick Reference
### Latency Thresholds by Phase
| Phase | Acceptable | Warning | Critical | Action |
|-------|------------|---------|----------|--------|
| Tokenization | < 10ms | 10-50ms | > 50ms | Optimize prompt, use faster tokenizer |
| API Request | < 500ms | 500-2000ms | > 2000ms | Check deployment capacity, enable streaming |
| Response Parsing | < 5ms | 5-20ms | > 20ms | Stream responses, optimize parser |
| **Total (Non-Streaming)** | < 1s | 1-3s | > 3s | Review all phases |
| **Total (Streaming TTFT)** | < 300ms | 300-800ms | > 800ms | Model upgrade, provisioned throughput |
### Model Selection Cheat Sheet
<Aside type="note" title="Latency vs Cost Trade-off">
Lower latency models may cost more per token but reduce overall costs through faster processing and better user engagement.
</Aside>
| Model | Input Cost | Output Cost | Context | Best For |
|-------|------------|-------------|---------|----------|
| **gpt-4o-mini** | $0.15/M | $0.60/M | 128K | Fast responses, cost-sensitive apps |
| **gpt-4o** | $5.00/M | $15.00/M | 128K | Balanced performance |
| **haiku-3.5** | $1.25/M | $5.00/M | 200K | Fast reasoning, moderate cost |
| **claude-3-5-sonnet** | $3.00/M | $15.00/M | 200K | Complex tasks, high quality |
*Pricing verified from [OpenAI](https://openai.com/pricing) and [Anthropic](https://docs.anthropic.com/en/docs/about-claude/models) documentation.*
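To turn the table into numbers for your own traffic, here is a small per-request cost helper. The rates are hard-coded from the table above; update them if your provider's pricing page differs:

```python
# Prices in USD per million tokens, copied from the table above
PRICING = {
    "gpt-4o-mini":       {"input": 0.15, "output": 0.60},
    "gpt-4o":            {"input": 5.00, "output": 15.00},
    "haiku-3.5":         {"input": 1.25, "output": 5.00},
    "claude-3-5-sonnet": {"input": 3.00, "output": 15.00},
}


def request_cost_usd(model: str, prompt_tokens: int, output_tokens: int) -> float:
    """Cost of a single request at the listed per-million-token rates."""
    p = PRICING[model]
    return (prompt_tokens * p["input"] + output_tokens * p["output"]) / 1_000_000


# Example: 500 prompt tokens and 150 output tokens per request
for model in PRICING:
    print(f"{model}: ${request_cost_usd(model, 500, 150):.6f} per request")
```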
### Diagnostic Commands
```bash
# Check Azure OpenAI deployment utilization
az monitor metrics list \
  --resource <deployment-id> \
  --metric "Provisioned-managed Utilization V2" \
  --interval PT1M

# Measure streaming time-to-response
curl -w "@curl-format.txt" \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $OPENAI_API_KEY" \
  -d '{"model":"gpt-4o-mini","messages":[{"role":"user","content":"test"}],"stream":true}' \
  https://api.openai.com/v1/chat/completions
```
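The curl command assumes a `curl-format.txt` timing template on disk. A minimal version using curl's standard `-w` variables could be created like this:

```bash
# Create a timing template for curl's -w flag
cat > curl-format.txt <<'EOF'
time_namelookup:    %{time_namelookup}s\n
time_connect:       %{time_connect}s\n
time_appconnect:    %{time_appconnect}s\n
time_starttransfer: %{time_starttransfer}s\n
time_total:         %{time_total}s\n
EOF
```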
<div id="latency-widget" style="font-family: system-ui; max-width: 600px; margin: 2rem auto; padding: 1.5rem; border: 1px solid #e2e8f0; border-radius: 8px; background: #f8fafc;">
<h3 style="margin-top: 0;">Latency Debugger</h3>
<div style="margin-bottom: 1rem;">
<label style="display: block; font-weight: 600; margin-bottom: 0.5rem;">Prompt Tokens:</label>
<input type="number" id="promptTokens" value="500" min="1" style="width: 100%; padding: 0.5rem; border: 1px solid #cbd5e1; border-radius: 4px;">
</div>
<div style="margin-bottom: 1rem;">
<label style="display: block; font-weight: 600; margin-bottom: 0.5rem;">Output Tokens:</label>
<input type="number" id="outputTokens" value="150" min="1" style="width: 100%; padding: 0.5rem; border: 1px solid #cbd5e1; border-radius: 4px;">
</div>
<div style="margin-bottom: 1rem;">
<label style="display: block; font-weight: 600; margin-bottom: 0.5rem;">Requests Per Minute:</label>
<input type="number" id="rpm" value="30" min="1" style="width: 100%; padding: 0.5rem; border: 1px solid #cbd5e1; border-radius: 4px;">
</div>
<div style="margin-bottom: 1rem;">
<label style="display: block; font-weight: 600; margin-bottom: 0.5rem;">Model:</label>
<select id="modelSelect" style="width: 100%; padding: 0.5rem; border: 1px solid #cbd5e1; border-radius: 4px;">
<option value="gpt-4o-mini">gpt-4o-mini</option>
<option value="gpt-4o">gpt-4o</option>
<option value="haiku-3.5">haiku-3.5</option>
<option value="claude-3-5-sonnet">claude-3-5-sonnet</option>
</select>
</div>
<button onclick="calculateLatency()" style="width: 100%; padding: 0.75rem; background: #0ea5e9; color: white; border: none; border-radius: 4px; font-weight: 600; cursor: pointer;">Calculate Estimated Latency</button>
<div id="results" style="margin-top: 1.5rem; display: none; padding: 1rem; background: white; border-radius: 4px; border-left: 4px solid #0ea5e9;">
<h4 style="margin: 0 0 0.5rem 0;">Estimated Metrics</h4>
<div id="metricsOutput" style="font-size: 0.9rem; line-height: 1.6;"></div>
</div>
<script>
const pricing = {
'gpt-4o-mini': { input: 0.15, output: 0.60, latencyFactor: 0.8 },
'gpt-4o': { input: 5.00, output: 15.00, latencyFactor: 1.0 },
'haiku-3.5': { input: 1.25, output: 5.00, latencyFactor: 0.9 },
'claude-3-5-sonnet': { input: 3.00, output: 15.00, latencyFactor: 1.1 }
};
function calculateLatency() {
const promptTokens = parseInt(document.getElementById('promptTokens').value);
const outputTokens = parseInt(document.getElementById('outputTokens').value);
const rpm = parseInt(document.getElementById('rpm').value);
const model = document.getElementById('modelSelect').value;
const pricingData = pricing[model];
// Azure OpenAI latency model (simplified)
const baseLatency = 200; // ms
const tokenLatency = (promptTokens + outputTokens) * 0.5 * pricingData.latencyFactor;
const totalLatency = baseLatency + tokenLatency;
// Cost calculation: price prompt and output tokens at their own rates
const monthlyPromptTokens = promptTokens * rpm * 60 * 24 * 30;
const monthlyOutputTokens = outputTokens * rpm * 60 * 24 * 30;
const monthlyTokens = monthlyPromptTokens + monthlyOutputTokens;
const monthlyCost = (monthlyPromptTokens / 1000000) * pricingData.input
  + (monthlyOutputTokens / 1000000) * pricingData.output;
// Display results
const resultsDiv = document.getElementById('results');
const metricsDiv = document.getElementById('metricsOutput');
metricsDiv.innerHTML = `
<strong>Estimated Latency:</strong> ${totalLatency.toFixed(0)}ms<br>
<strong>Monthly Cost:</strong> $${monthlyCost.toFixed(2)}<br>
<strong>Monthly Tokens:</strong> ${(monthlyTokens / 1000000).toFixed(2)}M<br>
<strong>Throughput:</strong> ${(rpm * 60).toLocaleString()} req/hour
`;
resultsDiv.style.display = 'block';
}
</script>
</div>
