Langfuse Integration
Connect Calmo to your Langfuse instance to enable LLM observability and prompt management through AI assistance. This integration provides access to 9 specialized tools across 3 categories for complete LLM monitoring and optimization workflows.

Overview
The Langfuse integration transforms how your team handles LLM operations and observability by providing:

- Intelligent Prompt Management - AI-powered prompt optimization and version control
- Advanced Trace Analysis - Comprehensive LLM execution tracing with performance insights
- Session Analytics - User activity analysis and conversation pattern recognition
- Cost Optimization - Token usage analysis and cost monitoring across LLM providers
- Quality Assurance - Automated evaluation and quality scoring of LLM outputs
- Safe Operations - All tools are read-only for secure observability without data modification
Key Capabilities
When connected, Calmo gains access to 9 Langfuse tools across 3 categories:

| Category | Tools | Capability |
|---|---|---|
| Prompt Management | 2 tools | Manage and retrieve prompts with version control |
| Trace & Observations | 5 tools | Analyze LLM execution traces and performance |
| Session Analytics | 2 tools | Session analysis and user activity monitoring |
Prerequisites
- Langfuse instance (cloud or self-hosted) with API access
- Project access with appropriate permissions
- Calmo account with team or personal workspace
Setup Process
Step 1: Access Your Langfuse Instance
Locate Your Langfuse Configuration:

- Navigate to your Langfuse dashboard
- Note your instance URL (e.g., https://cloud.langfuse.com or your self-hosted URL)
- Ensure you have project access and API key generation permissions
Step 2: Generate Langfuse API Keys
Create Project API Keys:

- Log in to your Langfuse instance
- Navigate to your target project
- Go to Settings → API Keys
- Click Create new API key pair
- Configure key settings:
- Description: “Calmo Integration”
- Permissions: Read-only (recommended for security)
- Copy both the Public Key and Secret Key immediately
Key reference:

- Public Key: Used for client-side operations and identification
- Secret Key: Used for server-side operations and authentication
- Base URL: Your Langfuse instance URL (cloud or self-hosted)

With read-only permissions, the key pair grants:

- Read access to prompts and prompt versions
- Read access to traces and observations
- Read access to sessions and user activity data
- Read access to project statistics and metadata
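Langfuse's public API authenticates requests with HTTP Basic auth, sending the public key as the username and the secret key as the password. A minimal sketch of how a client builds that header (key values are placeholders, not real credentials):

```python
import base64

def basic_auth_header(public_key: str, secret_key: str) -> dict:
    """Build the HTTP Basic auth header a Langfuse API client sends:
    base64-encoded "public_key:secret_key"."""
    token = base64.b64encode(f"{public_key}:{secret_key}".encode()).decode()
    return {"Authorization": f"Basic {token}"}

# Placeholder keys for illustration only.
headers = basic_auth_header("pk-lf-example", "sk-lf-example")
print(headers["Authorization"])
```

Calmo constructs this header for you from the keys entered during setup; the sketch is only to show why both keys are required together.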
Step 3: Connect to Calmo
- Navigate to Integrations in your Calmo dashboard
- Click Langfuse integration
- Enter your Public Key
- Enter your Secret Key
- Enter your Base URL (defaults to https://cloud.langfuse.com)
- Configure tool permissions:
- ✅ All operations are read-only for security
- ✅ All tools enabled by default for comprehensive observability
- Test the connection using the built-in connection test
- Complete the integration setup
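If the built-in connection test fails, a quick sanity check of the three values often finds the problem before you look elsewhere. This sketch encodes common conventions (Langfuse cloud keys typically carry `pk-lf-` / `sk-lf-` prefixes; the base URL should use HTTPS) - the prefix checks are heuristics, not hard requirements for every self-hosted setup:

```python
def validate_config(public_key: str, secret_key: str, base_url: str) -> list:
    """Return a list of likely misconfigurations (empty list = looks OK)."""
    problems = []
    if not public_key.startswith("pk-lf-"):
        problems.append("public key does not start with 'pk-lf-'")
    if not secret_key.startswith("sk-lf-"):
        problems.append("secret key does not start with 'sk-lf-'")
    if not base_url.startswith("https://"):
        problems.append("base URL should use HTTPS")
    return problems

print(validate_config("pk-lf-123", "sk-lf-456", "https://cloud.langfuse.com"))
```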
Tool Categories & Configuration
Prompt Management (Safe)
Default: Enabled - Essential for prompt optimization and version control

- get-prompts - Get all prompts with metadata and version information
- get-prompt - Get a specific prompt with full version history and configuration
Trace & Observations (Safe)
Default: Enabled - Core LLM execution monitoring and analysis

- get_trace - Get comprehensive trace information with all spans and metadata
- get-trace-details - Get detailed trace information including costs and performance
- fetch-traces - Fetch multiple traces with filtering and pagination
- fetch-observations - Fetch observations with advanced filtering capabilities
- fetch-observation - Fetch a single observation with complete details
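Tools like fetch-traces map onto Langfuse's public trace-listing endpoint, which supports pagination and filtering via query parameters. A sketch of how such a request URL is composed (the `/api/public/traces` path and parameter names mirror Langfuse's public REST API, but verify them against your instance version):

```python
from typing import Optional
from urllib.parse import urlencode

def build_traces_url(base_url: str, page: int = 1, limit: int = 50,
                     user_id: Optional[str] = None) -> str:
    """Compose a paginated trace-listing URL (path and params assumed)."""
    params = {"page": page, "limit": limit}
    if user_id:
        params["userId"] = user_id  # optional filter: traces for one user
    return f"{base_url.rstrip('/')}/api/public/traces?{urlencode(params)}"

print(build_traces_url("https://cloud.langfuse.com", limit=10))
```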
Session Analytics (Safe)
Default: Enabled - User behavior and conversation analysis

- fetch-sessions - Fetch user sessions with activity and conversation data
- analyze-user-activity - Analyze user activity patterns and engagement metrics
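The kind of engagement analysis analyze-user-activity performs reduces to aggregating session records per user. A minimal sketch over hypothetical session dictionaries (the `user_id` / `duration_seconds` field names are illustrative, not the actual Langfuse schema):

```python
from collections import defaultdict

def engagement_by_user(sessions):
    """Aggregate session records into per-user session counts and
    total duration (field names are illustrative)."""
    stats = defaultdict(lambda: {"sessions": 0, "seconds": 0})
    for s in sessions:
        user = stats[s["user_id"]]
        user["sessions"] += 1
        user["seconds"] += s["duration_seconds"]
    return dict(stats)

sessions = [
    {"user_id": "u1", "duration_seconds": 120},
    {"user_id": "u1", "duration_seconds": 60},
    {"user_id": "u2", "duration_seconds": 30},
]
print(engagement_by_user(sessions))
```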
Team vs Personal Configuration
Team/Organization Setup
- Shared Langfuse project access across team members
- Organization-level LLM monitoring and governance policies
- Centralized prompt management and optimization workflows
- Team-wide cost monitoring and optimization insights
Personal Setup
- Individual Langfuse project connections
- Personal prompt experimentation and analysis
- Private LLM performance monitoring and debugging
- Individual cost tracking and optimization
Security & Best Practices
⚠️ Safety Recommendations
- Read-Only Access - All tools are read-only by design for maximum security
- Key Security - Use dedicated API keys with minimal required permissions
- Instance Security - Ensure Langfuse instance uses HTTPS and proper authentication
- Data Privacy - Review data retention and privacy policies for your Langfuse instance
- Access Monitoring - Monitor API usage and access patterns regularly
🔒 Permission Levels
| Risk Level | Operations | Recommendation |
|---|---|---|
| Low | All Langfuse operations (read-only by design) | ✅ Safe to enable |
Configuration Management
Updating Langfuse Connection
- Navigate to Integrations → Langfuse
- Click Edit Configuration
- Update public/secret keys or base URL as needed
- Test connection to verify changes
- Save configuration updates
Managing Multiple Projects
- Connect separate Langfuse projects for different applications
- Use different API keys for production vs development environments
- Configure project-specific monitoring and analysis workflows
- Maintain separate cost tracking per project
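One way to keep production and development keys separate is a small per-environment config map. A sketch (key values and the self-hosted URL are placeholders):

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class LangfuseProject:
    """One Calmo-to-Langfuse connection (all values are placeholders)."""
    public_key: str
    secret_key: str
    base_url: str = "https://cloud.langfuse.com"

PROJECTS = {
    "production": LangfuseProject("pk-lf-prod", "sk-lf-prod"),
    "development": LangfuseProject(
        "pk-lf-dev", "sk-lf-dev",
        base_url="https://langfuse.internal.example.com"),
}

print(PROJECTS["production"].base_url)
```

Keeping the mapping explicit makes it hard to accidentally point a development workflow at production data.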
Advanced Features
Prompt Intelligence
- Version Analysis - Compare prompt versions and performance metrics
- Optimization Insights - AI-powered prompt improvement suggestions
- A/B Testing - Analyze prompt performance across different versions
- Template Management - Manage prompt templates and variables
Trace Analytics
- Performance Monitoring - Real-time LLM execution performance tracking
- Cost Analysis - Detailed token usage and cost breakdown by provider
- Error Detection - Automatic identification of failed or problematic traces
- Quality Scoring - Automated quality assessment of LLM outputs
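Cost breakdowns of this kind reduce to token counts multiplied by per-token prices. A minimal sketch (the price table uses made-up illustrative numbers, not real provider rates):

```python
# Hypothetical per-1K-token prices in USD; real rates vary by provider/model.
PRICES = {"example-model": {"input": 0.0025, "output": 0.01}}

def trace_cost(model: str, input_tokens: int, output_tokens: int) -> float:
    """Compute the cost of one trace from its token usage."""
    p = PRICES[model]
    return (input_tokens / 1000) * p["input"] + (output_tokens / 1000) * p["output"]

print(round(trace_cost("example-model", 2000, 500), 4))
```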
Session Intelligence
- Conversation Flow - Analyze conversation patterns and user journeys
- Engagement Metrics - Track user engagement and session quality
- Behavioral Patterns - Identify common user behavior patterns
- Optimization Opportunities - Discover areas for conversation improvement
LLM Observability Workflows
Performance Monitoring
- Real-Time Tracking - Monitor LLM performance and latency in real-time
- Cost Optimization - Track and optimize token usage across different models
- Quality Assurance - Continuous monitoring of output quality and relevance
- Error Analysis - Identify and analyze LLM execution errors and failures
Prompt Development
- Iterative Optimization - Track prompt performance improvements over time
- Version Management - Manage prompt versions and deployment strategies
- A/B Testing - Compare different prompt approaches and configurations
- Best Practice Discovery - Identify high-performing prompt patterns
User Experience Analysis
- Conversation Quality - Analyze conversation effectiveness and user satisfaction
- Journey Mapping - Understand user interaction patterns and preferences
- Engagement Optimization - Optimize conversation flows for better engagement
- Personalization Insights - Discover opportunities for conversation personalization
Troubleshooting
Common Issues
Authentication Failed

- Verify public and secret keys are correct and haven't been revoked
- Check API key permissions in Langfuse project settings
- Ensure base URL is correct for your Langfuse instance
- Verify network connectivity to Langfuse instance

Project Access Issues

- Confirm API keys have access to the target project
- Check project permissions and user access levels
- Verify project exists and is accessible
- Review Langfuse logs for detailed error information

No Data Returned

- Verify data exists in the specified time range
- Check trace IDs and observation IDs are correct
- Ensure data collection is active for your application
- Review data retention policies for your Langfuse instance

Connection Timeouts

- Check network connectivity to Langfuse instance
- Verify Langfuse instance is responsive and healthy
- Consider increasing timeout settings if using self-hosted instance
- Review firewall and proxy settings
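For self-hosted instances behind slower networks, an explicit timeout plus a simple retry with exponential backoff (1s, 2s, 4s) often resolves transient failures. A standard-library sketch (the URL and headers are placeholders you would supply):

```python
import time
import urllib.error
import urllib.request

def backoff_schedule(retries: int) -> list:
    """Exponential backoff delays in seconds: 1, 2, 4, ..."""
    return [2 ** i for i in range(retries)]

def get_with_retry(url: str, headers: dict, timeout: float = 30.0,
                   retries: int = 3) -> bytes:
    """GET with an explicit timeout, retrying transient failures."""
    for attempt, delay in enumerate(backoff_schedule(retries)):
        try:
            req = urllib.request.Request(url, headers=headers)
            with urllib.request.urlopen(req, timeout=timeout) as resp:
                return resp.read()
        except (urllib.error.URLError, TimeoutError):
            if attempt == retries - 1:
                raise  # out of retries; surface the error
            time.sleep(delay)

print(backoff_schedule(3))
```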
Getting Help
- Test Connection - Use the built-in connection test feature
- Update Credentials - Regenerate API keys if authentication issues persist
- Check Documentation - Refer to Langfuse official documentation for API setup
- Contact Support - Reach out to support@getcalmo.com for integration assistance
Data Types & Analysis
Prompt Data
- Prompt Configurations - Prompt templates, variables, and version information
- Performance Metrics - Prompt execution success rates and quality scores
- Version History - Complete audit trail of prompt changes and deployments
- Usage Analytics - Prompt usage patterns and frequency analysis
Trace Data
- Execution Traces - Complete LLM execution flows with timing and performance data
- Span Information - Individual operation details within traces
- Cost Metrics - Token usage and cost breakdown by model and provider
- Quality Scores - Automated and manual quality assessments
Observation Data
- LLM Outputs - Generated text and response quality metrics
- Input Analysis - Input processing and preparation statistics
- Model Performance - Model-specific performance and accuracy metrics
- Error Tracking - Failed operations and error classification
Session Data
- User Sessions - Complete user interaction sessions with conversation data
- Activity Patterns - User behavior patterns and engagement metrics
- Conversation Flows - Conversation structure and progression analysis
- Engagement Metrics - Session duration, interaction frequency, and satisfaction
Analytics Data
- Usage Statistics - Application usage patterns and adoption metrics
- Performance Trends - Long-term performance and quality trends
- Cost Optimization - Cost analysis and optimization recommendations
- Quality Insights - Quality improvement opportunities and best practices
For additional help with Langfuse integration, contact our support team at support@getcalmo.com.