MQTT Explorer includes comprehensive debugging capabilities to help developers troubleshoot issues with MQTT connections, message decoding, and AI Assistant interactions.

AI Assistant Debug View

The AI Assistant includes a built-in debug panel that shows complete request/response information.

Enabling Debug Mode

1. Open AI Assistant
   Click the AI Assistant icon in the sidebar or press the keyboard shortcut.
2. Click the debug icon
   Look for the bug icon (🐛) in the AI Assistant header and click it to toggle the debug view.
3. View debug information
   The debug panel displays system messages, API requests, responses, and timing information.

Debug mode persists across sessions; toggle it off when you’re done debugging to reduce UI clutter.

Debug Output Structure

The debug view displays a comprehensive JSON structure:
{
  "systemMessage": {
    "role": "system",
    "content": "You are an expert AI assistant specializing in MQTT...",
    "note": "This is the system prompt that provides context to the LLM"
  },
  "messages": [
    {
      "index": 0,
      "role": "user",
      "content": "What does this topic do?",
      "fullContent": "Context:\nTopic: home/livingroom/light\n...",
      "timestamp": "2026-01-30T13:20:15.123Z",
      "proposals": 0,
      "questionProposals": 0,
      "apiDebug": { /* ... */ }
    }
  ],
  "summary": {
    "totalMessages": 2,
    "messagesWithDebugInfo": 1,
    "lastApiCall": "2026-01-30T13:20:15.123Z"
  }
}
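A structure like this can be summarized programmatically, for example when attaching debug output to a bug report. A minimal sketch, assuming the `debugView` object follows the JSON shape above (the `summarizeDebugView` helper is illustrative, not part of MQTT Explorer):

```javascript
// Summarize an AI Assistant debug view: count messages, find which
// ones carry apiDebug data, and report the most recent API call.
function summarizeDebugView(debugView) {
  const withApi = debugView.messages.filter((m) => m.apiDebug != null)
  return {
    totalMessages: debugView.messages.length,
    messagesWithDebugInfo: withApi.length,
    lastApiCall: withApi.length ? withApi[withApi.length - 1].timestamp : null,
  }
}

// Example input mirroring the structure shown above
const example = {
  systemMessage: { role: 'system', content: 'You are an expert AI assistant...' },
  messages: [
    {
      index: 0,
      role: 'user',
      content: 'What does this topic do?',
      timestamp: '2026-01-30T13:20:15.123Z',
      apiDebug: {},
    },
    {
      index: 1,
      role: 'assistant',
      content: 'This topic represents...',
      timestamp: '2026-01-30T13:20:16.001Z',
    },
  ],
}

console.log(summarizeDebugView(example))
// { totalMessages: 2, messagesWithDebugInfo: 1, lastApiCall: '2026-01-30T13:20:15.123Z' }
```

Only user messages carry `apiDebug`, so `messagesWithDebugInfo` counts those; assistant turns are excluded.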

System Message

The system message contains the AI Assistant’s core instructions:
{
  "role": "system",
  "content": "You are an expert AI assistant specializing in MQTT...",
  "note": "This is the system prompt that provides context to the LLM"
}
Purpose:
  • Defines the AI’s expertise and behavior
  • Sets communication style (concise, technical, etc.)
  • Specifies response format rules
  • Lists supported MQTT ecosystems
Debugging Use: If the AI gives incorrect or off-topic responses, review the system message to ensure the instructions are clear.

Message Array

Each conversation turn is logged with complete metadata:
Field              Type    Description
index              number  Message position in conversation
role               string  "user" or "assistant"
content            string  Display text (may be truncated)
fullContent        string  Complete message with context
timestamp          string  ISO 8601 timestamp
proposals          number  Count of action proposals in the response
questionProposals  number  Count of suggested follow-up questions
apiDebug           object  API request/response details (user messages only)

API Debug Information

User messages include detailed API debugging data:
{
  "apiDebug": {
    "provider": "openai",
    "model": "gpt-5-mini",
    "timing": {
      "duration_ms": 1234,
      "timestamp": "2026-01-30T13:20:15.123Z"
    },
    "request": {
      "url": "https://api.openai.com/v1/chat/completions",
      "body": {
        "model": "gpt-5-mini",
        "messages": [ /* ... */ ],
        "max_completion_tokens": 500
      }
    },
    "response": {
      "id": "chatcmpl-AbCdEfGh123456",
      "model": "gpt-5-mini",
      "choices": [ /* ... */ ],
      "usage": {
        "prompt_tokens": 156,
        "completion_tokens": 98,
        "total_tokens": 254
      }
    }
  }
}

  • Request Details: the full API request, including URL, headers, and body
  • Response Data: the complete API response with usage statistics
  • Timing Info: request duration and timestamp
  • Token Usage: prompt, completion, and total token counts

Server Console Output

The backend server logs detailed debugging information to the console.

Request Logging

When an AI request is sent:
================================================================================
LLM REQUEST (OpenAI)
================================================================================
Provider: openai
Model: gpt-5-mini
Messages Count: 2

Full Request Body:
{
  model: 'gpt-5-mini',
  messages: [
    {
      role: 'system',
      content: 'You are an expert AI assistant specializing in MQTT...'
    },
    {
      role: 'user',
      content: 'Context:\nTopic: home/livingroom/light\n...'
    }
  ],
  max_completion_tokens: 500
}

System Message:
{
  role: 'system',
  content: 'You are an expert AI assistant specializing in MQTT...'
}
================================================================================
No Truncation: The server logs show complete objects with depth: null and maxArrayLength: null for full visibility.

Response Logging

When the AI responds:
================================================================================
LLM RESPONSE (OpenAI)
================================================================================
Duration: 1234 ms

Full Response:
{
  id: 'chatcmpl-AbCdEfGh123456',
  object: 'chat.completion',
  created: 1738247815,
  model: 'gpt-5-mini',
  choices: [
    {
      index: 0,
      message: {
        role: 'assistant',
        content: 'This topic represents a smart light in your living room...'
      },
      finish_reason: 'stop'
    }
  ],
  usage: {
    prompt_tokens: 156,
    completion_tokens: 98,
    total_tokens: 254
  },
  system_fingerprint: 'fp_abc123def456'
}
================================================================================

================================================================================
LLM RPC HANDLER - Returning response
================================================================================
Response length: 456
Has debugInfo: true
================================================================================

Error Logging

When errors occur:
================================================================================
LLM RPC ERROR
================================================================================
Error message: Invalid API key configuration
Error stack: Error: Invalid API key configuration
    at /home/runner/work/MQTT-Explorer/MQTT-Explorer/dist/src/server.js:642:15
    at process.processTicksAndRejections (node:internal/process/task_queues:95:5)
Full error: Error: Invalid API key configuration {
  status: 401,
  type: 'invalid_request_error',
  code: 'invalid_api_key'
}
================================================================================
Production Considerations

These verbose logs are designed for development. In production:
  • Use log levels (DEBUG, INFO, ERROR)
  • Sample requests (log 1% for monitoring)
  • Disable ANSI colors for log aggregation
  • Filter PII and API keys from logs
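One way to tame these logs in production is a thin level filter. A hedged sketch; the `LOG_LEVEL` variable and `log` helper are illustrative, not part of MQTT Explorer:

```javascript
// Minimal log-level filter: only messages at or above the configured
// level are emitted. Defaults to INFO when LOG_LEVEL is unset.
const LEVELS = { DEBUG: 0, INFO: 1, ERROR: 2 }
const threshold = LEVELS[process.env.LOG_LEVEL || 'INFO']

function log(level, ...args) {
  if (LEVELS[level] >= threshold) {
    console.log(`[${level}]`, ...args)
  }
}

log('DEBUG', 'Full request body:', { model: 'gpt-5-mini' }) // suppressed at INFO
log('INFO', 'LLM request sent')
log('ERROR', 'Invalid API key configuration')
```

Sampling and PII filtering would layer on top of the same `log` chokepoint rather than on scattered `console.log` calls.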

Browser Console Output

The frontend logs debug information to the browser console.

Normal Flow

LLM Service: Received result from backend: {
  response: "This topic represents a smart light...",
  debugInfo: {
    provider: "openai",
    model: "gpt-5-mini",
    timing: { duration_ms: 1234, timestamp: "2026-01-30T13:20:15.123Z" },
    request: { url: "...", body: {...} },
    response: { id: "chatcmpl-...", usage: {...} }
  }
}
LLM Service: Has response: true
LLM Service: Has debugInfo: true
LLM Service: Assistant message length: 456
LLM Service: Debug info: { provider: "openai", model: "gpt-5-mini", ... }

Error Flow

LLM Service: Received result from backend: undefined
LLM Service: Has response: false
LLM Service: Has debugInfo: false
LLM Service: Invalid result from backend: undefined
AI Assistant error: Error: No response from AI assistant
    at LLMService.sendMessage (llmService.ts:440)
Error details: { message: "No response from AI assistant" }

Debugging Decoder Issues

When messages don’t decode correctly:
1. Check the topic pattern
   Verify the topic matches the decoder’s canDecodeTopic pattern:
// Sparkplug decoder expects this pattern:
/^spBv1\.0\/[^/]+\/[ND](DATA|CMD|DEATH|BIRTH)\/[^/]+(\/[^/]+)?$/u
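The pattern above can be checked against a suspect topic directly in the browser console; a quick sketch:

```javascript
// The Sparkplug B topic pattern from the decoder's canDecodeTopic check.
const sparkplugTopic = /^spBv1\.0\/[^/]+\/[ND](DATA|CMD|DEATH|BIRTH)\/[^/]+(\/[^/]+)?$/u

// Node-level and device-level topics match:
sparkplugTopic.test('spBv1.0/Group1/NDATA/edgeNode1')        // true
sparkplugTopic.test('spBv1.0/Group1/DDATA/edgeNode1/device') // true

// Topics outside the namespace, or with a malformed message type, do not:
sparkplugTopic.test('home/livingroom/light')                 // false
sparkplugTopic.test('spBv1.0/Group1/XDATA/edgeNode1')        // false
```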
2. Inspect the raw payload
   Switch to “hex” view to see the raw bytes:
0x73 0x70 0x42 0x76 0x31 0x2E 0x30 ...
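A small helper can reproduce this view from a raw buffer; the ASCII column often reveals whether a payload is text or binary. A sketch (the `hexDump` helper is illustrative, not MQTT Explorer's actual implementation):

```javascript
// Render a Buffer the way a hex view does: hex bytes plus an ASCII
// column, with non-printable bytes shown as '.'.
function hexDump(buf) {
  const hex = [...buf]
    .map((b) => b.toString(16).padStart(2, '0').toUpperCase())
    .join(' ')
  const ascii = [...buf]
    .map((b) => (b >= 0x20 && b <= 0x7e ? String.fromCharCode(b) : '.'))
    .join('')
  return `${hex}  |${ascii}|`
}

// The example bytes above are printable ASCII:
console.log(hexDump(Buffer.from([0x73, 0x70, 0x42, 0x76, 0x31, 0x2e, 0x30])))
// 73 70 42 76 31 2E 30  |spBv1.0|
```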
3. Try different formats
   Use the format dropdown to test different decoders:
  • String
  • Sparkplug
  • int8, uint32, float, etc.
4. Check for error messages
   Look for warning icons (⚠️) next to format options in the dropdown. Hover over an icon to see the error message.
5. Review decoder implementation
   Check the decoder code in app/src/decoders/ for logic errors.

Common Decoder Errors

Payload length mismatch

Cause: Binary payload length is not evenly divisible by the data type’s byte size.
Example: Trying to decode 5 bytes as uint32, which requires 4-byte alignment.
Solution:
  • Try uint8 to see individual bytes
  • Check if the payload is actually a different type
  • Verify the device is sending the expected format
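You can verify the alignment yourself before blaming the decoder; a sketch, assuming the standard fixed byte widths for each numeric type:

```javascript
// Byte widths for fixed-size numeric formats.
const TYPE_SIZES = {
  int8: 1, uint8: 1,
  int16: 2, uint16: 2,
  int32: 4, uint32: 4,
  float: 4, double: 8,
}

// A payload only decodes as an array of a numeric type if its length
// is a whole multiple of that type's byte size.
function canDecodeAs(payload, type) {
  return payload.length % TYPE_SIZES[type] === 0
}

const payload = Buffer.alloc(5) // 5 bytes
canDecodeAs(payload, 'uint32') // false: 5 is not a multiple of 4
canDecodeAs(payload, 'uint8')  // true: single bytes always align
```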
Sparkplug decode failure

Cause: The payload is not valid Sparkplug B binary data.
Solution:
  • Verify the topic matches the Sparkplug pattern
  • Check that the sender is using the sparkplug-payload encoder
  • Try decoding with the Sparkplug test client to isolate issues
Invalid JSON

Cause: The string payload is not valid JSON.
Solution:
  • View in “string” format instead of “json”
  • Check for trailing commas, unquoted keys, or single quotes
  • Verify the payload isn’t binary data misinterpreted as text
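This triage can be automated: attempt a JSON parse and fall back to the string view, keeping the parser's error for the warning tooltip. A sketch (the `decodePayload` helper is illustrative, not MQTT Explorer's actual decoder):

```javascript
// Try JSON first; on failure fall back to a plain string view and
// retain the parse error so it can be shown next to the format option.
function decodePayload(text) {
  try {
    return { format: 'json', value: JSON.parse(text), error: null }
  } catch (err) {
    return { format: 'string', value: text, error: err.message }
  }
}

decodePayload('{"state": "on"}')  // { format: 'json', ... }
decodePayload("{'state': 'on'}")  // single quotes: falls back to the string view
```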

Debugging Performance Issues

Use timing information to identify bottlenecks:

AI Assistant Response Time

// Check duration_ms in apiDebug
const { timing } = message.apiDebug
console.log(`Response took ${timing.duration_ms}ms`)

// Slow responses (>5s) may indicate:
// - Large context windows (many related topics)
// - Complex prompts
// - API rate limiting
// - Network latency

Token Usage Analysis

const { usage } = response
const efficiency = usage.completion_tokens / usage.prompt_tokens

// High prompt tokens indicate:
// - Too much context being sent
// - Need to reduce related topics count
// - Consider shorter system prompt

// High completion tokens indicate:
// - Verbose responses
// - Multiple proposals/questions
// - May need to tune max_completion_tokens

Log Levels and Filtering

MQTT Explorer logs can be filtered by level. To see the server’s console output, run the development server:
npm run dev:server

Visual Separators

The console uses clear visual boundaries:
================================================================================
SECTION HEADER
================================================================================
Content...
================================================================================
Color Coding (in terminal):
  • Green: Strings
  • Yellow: Numbers and booleans
  • Gray: Null/undefined
  • Cyan: Object keys

Network Debugging

For MQTT connection issues:
1. Enable MQTT debug logs
   Set the DEBUG environment variable:
DEBUG=mqtt* npm run dev
2. Check connection statistics
   Open Settings → Broker Statistics to view:
  • Connection state
  • Message counts
  • Subscription list
  • Error messages
3. Monitor network traffic
   Use Wireshark or tcpdump to capture MQTT packets:
tcpdump -i any -n port 1883 -w mqtt.pcap

Memory Debugging

If MQTT Explorer becomes slow or unresponsive:
// Check how many messages a topic's tree node is retaining
const historySize = treeNode.messageHistory.toArray().length
if (historySize > 1000) {
  console.warn(`Large message history: ${historySize} messages`)
}
Memory Leaks

Common causes:
  • Retained message history (grows unbounded)
  • Subscriptions to high-frequency topics
  • Decoder caching without cleanup
  • Event listeners not removed
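A common fix for unbounded history is a fixed-capacity buffer that evicts the oldest entry. A hedged sketch of the idea; MQTT Explorer's actual `messageHistory` implementation may differ:

```javascript
// Fixed-capacity message history: once full, the oldest entry is
// dropped so memory stays bounded regardless of message rate.
class BoundedHistory {
  constructor(capacity) {
    this.capacity = capacity
    this.items = []
  }

  add(message) {
    this.items.push(message)
    if (this.items.length > this.capacity) {
      this.items.shift() // evict the oldest message
    }
  }

  toArray() {
    return this.items.slice()
  }
}

const history = new BoundedHistory(3)
for (let i = 0; i < 5; i++) history.add(`msg-${i}`)
history.toArray() // ['msg-2', 'msg-3', 'msg-4']
```

For very high-frequency topics a ring buffer with a head index avoids the O(n) `shift`, but the bounded-capacity principle is the same.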

Troubleshooting Checklist

AI Not Responding

  1. Check browser console for errors
  2. Verify API key in settings
  3. Review server logs for API errors
  4. Test with a simple question

Decoder Not Working

  1. Verify topic pattern matches
  2. Check payload in hex view
  3. Look for warning icons
  4. Test with known-good data

Performance Issues

  1. Check message history size
  2. Monitor token usage
  3. Look for high-frequency topics
  4. Profile with DevTools

Connection Problems

  1. Enable MQTT debug logs
  2. Check broker statistics
  3. Verify broker is running
  4. Test with mosquitto_sub

Debug Best Practices

1. Enable debug mode early
   Turn on the debug view before encountering issues to capture the full sequence of events.
2. Check multiple log sources
   Issues may appear in the browser console, the server logs, or the debug UI; check all three.
3. Use incremental testing
   Test each component (connection, decoder, AI) separately to isolate issues.
4. Save debug output
   Copy the debug JSON or save logs to files for later analysis or bug reports.
5. Compare with working examples
   Use the test suite’s mock clients to verify expected behavior.
