This app can send traces of all LLM interactions to Langfuse for debugging and usage analytics.

Prerequisites

1. Create a Langfuse project

Create a Langfuse project, either in Langfuse Cloud or on a self-hosted Langfuse instance.

2. Get your API keys

Copy the public key and secret key from the project’s settings in the Langfuse dashboard.

Configuration

Set the following environment variables for the Rails app:
LANGFUSE_PUBLIC_KEY=your_public_key
LANGFUSE_SECRET_KEY=your_secret_key
# Optional if self-hosting or using a non-default domain
LANGFUSE_HOST=https://your-langfuse-domain.com
In Docker setups, add the variables to compose.yml and the accompanying .env file.
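A minimal sketch of what this could look like in compose.yml; the service name and file layout are illustrative, not the app's actual configuration:

```yaml
# compose.yml (service name "app" is assumed)
services:
  app:
    env_file:
      - .env   # holds LANGFUSE_PUBLIC_KEY and LANGFUSE_SECRET_KEY
    environment:
      # Optional: only needed for self-hosted or non-default domains
      LANGFUSE_HOST: ${LANGFUSE_HOST:-https://cloud.langfuse.com}
```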
The initializer reads these values on boot and automatically enables tracing. If the keys are absent, the app runs normally without Langfuse.
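The enable-on-boot behavior boils down to a presence check on both keys. This is a sketch in plain Ruby; the helper name and the commented-out client call are assumptions, not the app's actual initializer code:

```ruby
# Illustrative guard mirroring the initializer's behavior:
# tracing is enabled only when both keys are present and non-empty.
def langfuse_enabled?(env = ENV)
  env["LANGFUSE_PUBLIC_KEY"].to_s.strip != "" &&
    env["LANGFUSE_SECRET_KEY"].to_s.strip != ""
end

# In config/initializers/langfuse.rb one might then branch on it:
# Langfuse.configure(...) if langfuse_enabled?   # hypothetical client call
```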

What Gets Tracked

Langfuse automatically tracks the following LLM operations:
  • chat_response - AI assistant chat interactions
  • auto_categorize - Transaction categorization
  • auto_detect_merchants - Merchant name detection
Each call records:
  • Prompt content
  • Model used
  • Response text
  • Token usage (when available)
  • Latency and timing
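The recorded fields can be pictured as a single trace record. This hash only illustrates the shape implied by the list above; the field names and values are made up:

```ruby
require "time"

# Illustrative shape of one recorded call (fields follow the list above;
# the values, including the model name, are invented for this example).
trace = {
  name: "auto_categorize",
  prompt: "Categorize this transaction: 'COFFEE SHOP #42 $4.50'",
  model: "gpt-4o-mini",
  response: "Dining Out",
  usage: { prompt_tokens: 31, completion_tokens: 4 },   # when available
  started_at: Time.parse("2024-01-01T12:00:00Z"),
  latency_ms: 420
}
```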

Viewing Traces

After starting the app with the variables set, visit your Langfuse dashboard to see traces and generations grouped under traces named openai.*.
All traces are automatically organized by operation type, making it easy to analyze performance and costs for different AI features.

Privacy Considerations

What’s sent to Langfuse:
  • Prompts and responses
  • Model names
  • Token counts
  • Timestamps
  • Session IDs
  • Hashed user IDs (not actual user data)
What’s NOT sent:
  • User email addresses
  • User names
  • Unhashed user IDs
  • Account credentials
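The hashed-user-ID point can be illustrated with Ruby's standard Digest library; the app's exact hashing scheme is an assumption (it may, for example, add a salt):

```ruby
require "digest"

# Illustrative only: derive a stable, non-reversible identifier from a
# user ID so analytics can group by user without exposing the raw ID.
def hashed_user_id(user_id)
  Digest::SHA256.hexdigest(user_id.to_s)
end
```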
For maximum privacy, self-host Langfuse on your own infrastructure rather than using the cloud offering.