PromptLens is a comprehensive AI-powered analytics and chat platform that helps organizations understand, analyze, and optimize their AI interactions. It's essentially a "business intelligence tool for AI conversations" that provides deep insights into how people interact with AI systems.
- Conversation Tracking: Records all AI prompts and responses with full metadata
- Vector Similarity Search: Uses embeddings to find similar conversations and responses
- Keyword Selection: Algorithmically selects relevant keywords for fast context retrieval and accurate embeddings
- Smart Caching: Reuses similar responses to reduce API costs and improve response times
- Multi-LLM Support: Works with OpenAI, Anthropic Claude, and xAI (Grok) models
- Reduce Repeated Requests: Automatically generates markdown files for commonly asked questions to reduce costs and environmental impact
- Query in Plain English: Ask questions like "Show me daily prompt volume trends" or "What are the most common user questions?"
- Novel NLP Approach: Iteratively expands the context given to the LLM until it reaches an accurate categorization (see the first sketch below)
- Automatic Chart Generation: Converts natural language queries into interactive charts (line, bar, pie charts)
- Time Series Analysis: Supports multiple granularities (daily, hourly, 30-minute, and 15-minute intervals; see the bucketing sketch below)
- Real-time Insights: Provides instant visualizations of your AI usage patterns
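The iterative-context idea is easiest to see in code. The sketch below is a simplified illustration rather than the shipped implementation: `classify` stands in for whatever LLM call PromptLens makes, and the confidence threshold is an assumed value.

```python
# Simplified sketch: `classify` and the threshold are illustrative assumptions,
# not the actual PromptLens implementation.
from dataclasses import dataclass
from typing import Awaitable, Callable

@dataclass
class Categorization:
    label: str
    confidence: float  # 0.0-1.0, as reported by the model

async def categorize_with_expanding_context(
    prompt: str,
    history: list[str],  # earlier turns, most recent first
    classify: Callable[[str, list[str]], Awaitable[Categorization]],
    threshold: float = 0.8,
) -> Categorization:
    context: list[str] = []
    result = await classify(prompt, context)
    # Feed in one more prior turn at a time until the model is confident
    # or there is no more history left to add.
    while result.confidence < threshold and len(context) < len(history):
        context.append(history[len(context)])
        result = await classify(prompt, context)
    return result
```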
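For the finer granularities, timestamps just need to be floored to fixed-width buckets before aggregation. A minimal helper, illustrative and not taken from the codebase:

```python
from datetime import datetime, timedelta, timezone

def floor_to_bucket(ts: datetime, minutes: int) -> datetime:
    """Floor a timestamp to the start of its bucket (e.g. 15, 30, or 60 minutes)."""
    bucket = timedelta(minutes=minutes)
    epoch = datetime(1970, 1, 1, tzinfo=timezone.utc)
    return ts - (ts - epoch) % bucket

# 14:37 falls into the 14:30 bucket at 15-minute granularity
print(floor_to_bucket(datetime(2024, 5, 1, 14, 37, tzinfo=timezone.utc), 15))
```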
- Usage Metrics: Track prompts, responses, users, and engagement over time
- Performance Monitoring: Response times, token usage, and model performance
- User Behavior Analysis: Understand how different users interact with AI
- Cost Optimization: Reduce AI API costs by 30-50% by identifying opportunities for intelligent response caching
- Quality Assurance: Monitor AI response quality and consistency, helping to improve system prompts, documentation, etc.
- Usage Insights: Understand which AI features are most valuable
- Performance Monitoring: Track response times and identify bottlenecks
- AI Analytics: Get detailed insights into AI usage patterns
- Custom Visualizations: Create charts and dashboards from natural language queries
- Data Export: Export conversation data for further analysis
- A/B Testing: Compare different AI models and prompts
- Better AI Experience: Faster responses through caching
- Conversation History: Never lose important AI conversations
- Multi-Model Access: Use the best AI model for each task
- Intuitive Interface: Natural language queries for complex analytics
PromptLens uses a modern, scalable architecture with three main components:
```
┌──────────────────┐     ┌──────────────────┐     ┌──────────────────┐
│     Frontend     │     │   Backend API    │     │     Database     │
│    (Next.js)     │────►│    (FastAPI)     │────►│    (Supabase)    │
│                  │     │                  │     │                  │
│  • Dashboard     │     │  • LLM Services  │     │  • PostgreSQL    │
│  • Chat UI       │     │  • Vector Search │     │  • Vector DB     │
│  • Analytics     │     │  • Caching       │     │  • Auth          │
└──────────────────┘     └──────────────────┘     └──────────────────┘
```
- Framework: Next.js with App Router
- Styling: Tailwind CSS + shadcn/ui components
- Charts: Recharts for data visualization
- State Management: React hooks and context
- Authentication: Supabase Auth
- Framework: FastAPI with async/await
- LLM Integration: OpenAI, Anthropic, and xAI APIs
- Vector Search: OpenAI embeddings + cosine similarity
- Caching: Intelligent response caching system
- Authentication: JWT + API key management
- Primary DB: PostgreSQL with vector extensions
- Vector Storage: pgvector for similarity search (see the sketch below)
- Authentication: Built-in user management
- Real-time: WebSocket subscriptions for live updates
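To tie the backend's vector search to the pgvector storage, here is a minimal sketch of a cosine-similarity lookup. The `prompts` table and its columns are assumptions made for illustration; the real schema may differ.

```python
# Illustrative only: assumes a `prompts` table with an `embedding vector(1536)`
# column; real table and column names may differ.
import asyncpg
import numpy as np
from pgvector.asyncpg import register_vector

async def find_similar_prompts(dsn: str, query_vec: list[float], limit: int = 5):
    conn = await asyncpg.connect(dsn)
    await register_vector(conn)  # lets asyncpg encode/decode the pgvector type
    try:
        # <=> is pgvector's cosine-distance operator, so 1 - distance = similarity
        return await conn.fetch(
            """
            SELECT id, prompt_text, 1 - (embedding <=> $1) AS similarity
            FROM prompts
            ORDER BY embedding <=> $1
            LIMIT $2
            """,
            np.array(query_vec),
            limit,
        )
    finally:
        await conn.close()
```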
```python
# Uses OpenAI embeddings to find similar conversations
from openai import AsyncOpenAI

client = AsyncOpenAI()

response = await client.embeddings.create(
    model="text-embedding-3-small",
    input=prompt,
)
embedding = response.data[0].embedding
```
- Semantic Matching: Finds similar prompts using vector similarity (see the cache-path sketch below)
- Cost Reduction: Reuses responses for similar queries
- Quality Control: Only caches high-quality responses
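Putting the pieces together, the cache path can be sketched roughly as follows; the threshold, the quality gate, and the helper callables are illustrative assumptions rather than the production code.

```python
# Rough sketch of the cache path; the threshold, quality gate, and helper
# callables are assumptions for illustration.
SIMILARITY_THRESHOLD = 0.92  # assumed cutoff for treating prompts as "the same"

async def answer_with_cache(prompt, embed, nearest_cached, call_llm, store, is_high_quality):
    query_vec = await embed(prompt)

    # 1. Semantic matching: look for a sufficiently similar cached prompt.
    hit = await nearest_cached(query_vec)  # -> (similarity, cached_response) or None
    if hit and hit[0] >= SIMILARITY_THRESHOLD:
        return hit[1]  # 2. Cost reduction: reuse the cached response

    # 3. Cache miss: call the configured LLM and only cache good answers.
    response = await call_llm(prompt)
    if is_high_quality(response):  # quality-control gate
        await store(query_vec, prompt, response)
    return response
```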
```javascript
// Converts "Show me daily trends" into structured data
const result = await agent.run("Show me daily trends");
```
- Unified Interface: Same API for all AI providers
- Model Selection: Choose the best model for each task
- Fallback Handling: Graceful degradation if a provider fails (see the router sketch below)
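A provider-agnostic router with fallback might look like the sketch below. The SDK calls are the standard OpenAI and Anthropic async clients; the model names and routing order are assumptions.

```python
# Sketch of a unified interface with fallback; model names and provider
# ordering are assumptions for illustration.
from anthropic import AsyncAnthropic
from openai import AsyncOpenAI

openai_client = AsyncOpenAI()
anthropic_client = AsyncAnthropic()

async def ask_openai(prompt: str) -> str:
    resp = await openai_client.chat.completions.create(
        model="gpt-4o-mini",
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

async def ask_anthropic(prompt: str) -> str:
    resp = await anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",
        max_tokens=1024,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.content[0].text

async def ask(prompt: str) -> str:
    # Try providers in preference order; degrade gracefully if one fails.
    for provider in (ask_openai, ask_anthropic):
        try:
            return await provider(prompt)
        except Exception:
            continue
    raise RuntimeError("All providers failed")
```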
- Horizontal Scaling: Stateless FastAPI backend
- Database Optimization: Indexed vector searches
- Caching Strategy: Multi-layer caching (memory + database; sketched below)
- CDN Integration: Static assets served via CDN
- Real-time Updates: WebSocket connections for live data
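The memory + database layering can be illustrated with a small wrapper; the class and parameter names are invented for the sketch, and the `db` object is assumed to expose async `get`/`set`.

```python
# Illustrative two-layer cache: an in-process LRU in front of the shared
# database cache. Names and sizes are assumptions for the sketch.
from collections import OrderedDict

class TwoLayerCache:
    def __init__(self, db, max_items: int = 1024):
        self.db = db  # assumed to expose async get(key) / set(key, value)
        self.memory: OrderedDict[str, str] = OrderedDict()
        self.max_items = max_items

    async def get(self, key: str):
        if key in self.memory:  # L1: process memory (fast, per instance)
            self.memory.move_to_end(key)
            return self.memory[key]
        value = await self.db.get(key)  # L2: shared database cache
        if value is not None:
            self._remember(key, value)
        return value

    async def set(self, key: str, value: str):
        await self.db.set(key, value)
        self._remember(key, value)

    def _remember(self, key: str, value: str):
        self.memory[key] = value
        self.memory.move_to_end(key)
        if len(self.memory) > self.max_items:
            self.memory.popitem(last=False)  # evict least recently used
```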
- API Key Management: Secure storage of LLM API keys
- User Authentication: Supabase Auth with JWT tokens (verification sketch below)
- Data Encryption: All data encrypted in transit and at rest
- Access Control: Role-based permissions for different user types
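On the backend, token checking can live in a FastAPI dependency roughly like the sketch below; the environment-variable name and claim handling are assumptions for illustration.

```python
# Sketch of verifying a Supabase-issued JWT in a FastAPI dependency.
# The env var name and claim handling are assumptions for illustration.
import os

import jwt  # PyJWT
from fastapi import Depends, HTTPException
from fastapi.security import HTTPAuthorizationCredentials, HTTPBearer

bearer = HTTPBearer()

def current_user(creds: HTTPAuthorizationCredentials = Depends(bearer)) -> dict:
    try:
        claims = jwt.decode(
            creds.credentials,
            os.environ["SUPABASE_JWT_SECRET"],  # assumed env var name
            algorithms=["HS256"],
            audience="authenticated",  # Supabase's default audience
        )
    except jwt.PyJWTError:
        raise HTTPException(status_code=401, detail="Invalid or expired token")
    return claims  # e.g. claims["sub"] is the authenticated user's id
```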
- Frontend: Deployed on Vercel (Next.js)
- Backend: Deployed on Heroku (FastAPI)
- Database: Supabase (managed PostgreSQL)
- Monitoring: Built-in logging and error tracking
This architecture makes PromptLens a powerful, scalable platform for AI analytics that can grow with organizations while providing immediate value through cost savings and insights.