A Java Spring Boot service that provides AI-powered research capabilities using LangChain4j and various LLM providers (Ollama, Groq, OpenAI, etc.). It autonomously generates research queries, performs web searches, and creates comprehensive summaries.
- AI-Powered Research: Uses LangChain4j with multiple LLM providers
- Multiple Search Backends: DuckDuckGo, Tavily, Perplexity support
- Beautiful Web UI: Professional markdown rendering with syntax highlighting
- Real-time Feedback: Loading indicators and status updates
- Flexible Configuration: Environment variable support
- Comprehensive Debug Logging: Detailed logging for troubleshooting
- Responsive Design: Works on desktop and mobile
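The research flow described above (generate queries, search, summarize, repeat) can be sketched in plain Java. This is an illustrative outline only; the `Llm`, `SearchBackend`, and `ResearchLoop` names are hypothetical and not classes from this project:

```java
import java.util.ArrayList;
import java.util.List;

// Illustrative sketch of the autonomous research loop; these interfaces
// and names are NOT the service's actual classes.
interface Llm { String generate(String prompt); }
interface SearchBackend { List<String> search(String query); }

class ResearchLoop {
    private final Llm llm;
    private final SearchBackend search;
    private final int maxLoops; // corresponds to MAX_RESEARCH_LOOPS

    ResearchLoop(Llm llm, SearchBackend search, int maxLoops) {
        this.llm = llm;
        this.search = search;
        this.maxLoops = maxLoops;
    }

    String run(String topic) {
        List<String> notes = new ArrayList<>();
        for (int i = 0; i < maxLoops; i++) {
            // Each iteration: ask the LLM for a query, then gather results.
            String query = llm.generate("Write a search query about: " + topic);
            notes.addAll(search.search(query));
        }
        // Final pass: summarize everything that was collected.
        return llm.generate("Summarize these findings:\n" + String.join("\n", notes));
    }
}
```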
- A sample report generated by this tool on the topic "Cybersecurity industry deep dive" is available at research_report_1.md
- Another, on the topic "Explain sorting algorithms with examples in java", is available at research_report_2.md
- "Latest advances in quantum computing"
- "2024 FDA approved cancer drugs"
- "Climate change mitigation strategies"
- "Machine learning applications in healthcare"
The application generates comprehensive markdown reports with:
- Executive summary
- Key findings
- Sources and references
- Formatted tables and lists
- Code examples (if applicable)
- Java 21 or higher
- Maven 3.6+
- An LLM provider (OpenAI, Inception, Anthropic, Ollama, Groq, etc.)
git clone <repository-url>
cd DeepResearch
mvn clean package
# API key is the only thing needed, since the others are defaults
export RESEARCH_API_KEY=your_inception_api_key
# These are not needed but merely mentioned here for completeness
export RESEARCH_LLM_PROVIDER=inception
export RESEARCH_MODEL_NAME=mercury-coder
export RESEARCH_BASE_URL=https://api.inceptionlabs.ai/v1
# Install Ollama
curl -fsSL https://ollama.ai/install.sh | sh
# Pull a model
ollama pull llama2
# Set environment variables
export RESEARCH_LLM_PROVIDER=ollama
export RESEARCH_MODEL_NAME=llama2
export RESEARCH_BASE_URL=http://localhost:11434
export RESEARCH_LLM_PROVIDER=groq
export RESEARCH_MODEL_NAME=mixtral-8x7b-32768
export RESEARCH_BASE_URL=https://api.groq.com/openai/v1
export RESEARCH_API_KEY=your_groq_api_key
export RESEARCH_LLM_PROVIDER=openai
export RESEARCH_MODEL_NAME=gpt-4
export RESEARCH_BASE_URL=https://api.openai.com/v1
export RESEARCH_API_KEY=your_openai_api_key
mvn spring-boot:run
Open your browser to: http://localhost:8080
The application includes a beautiful, responsive web interface with:
- Markdown rendering with syntax highlighting
- Real-time loading indicators
- Error handling and feedback
- Copy buttons for code blocks
- Mobile-responsive design
| Variable | Description | Default |
|---|---|---|
| RESEARCH_LLM_PROVIDER | LLM provider (ollama, groq, openai) | ollama |
| RESEARCH_MODEL_NAME | Model name to use | llama2 |
| RESEARCH_BASE_URL | Base URL for LLM API | http://localhost:11434 |
| RESEARCH_API_KEY | API key for cloud providers | |
| MAX_RESEARCH_LOOPS | Number of research iterations | 3 |
| SEARCH_API | Search backend (duckduckgo, tavily, perplexity) | duckduckgo |
| MAX_TOKENS | Max tokens per source | 1000 |
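The defaults in the table can be mirrored in plain Java when reading the environment. This is a sketch only; the `ResearchConfig` helper is illustrative and not part of the service:

```java
import java.util.Map;

// Sketch: applying the documented defaults when reading RESEARCH_* settings.
// ResearchConfig is an illustrative helper, not a class from this project.
public class ResearchConfig {
    static String resolve(Map<String, String> env, String key, String fallback) {
        String value = env.get(key);
        return (value == null || value.isBlank()) ? fallback : value;
    }

    public static void main(String[] args) {
        Map<String, String> env = System.getenv();
        System.out.println("provider = " + resolve(env, "RESEARCH_LLM_PROVIDER", "ollama"));
        System.out.println("model    = " + resolve(env, "RESEARCH_MODEL_NAME", "llama2"));
        System.out.println("baseUrl  = " + resolve(env, "RESEARCH_BASE_URL", "http://localhost:11434"));
    }
}
```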
The application uses Spring Boot configuration. You can override settings in application.yml.
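For example, an override in application.yml might look like the fragment below. Note that these property names are an assumption based on Spring Boot's relaxed binding of the RESEARCH_* environment variables, not confirmed keys from this project:

```yaml
# Illustrative only - assumes the service binds RESEARCH_* variables
# to a "research" prefix via Spring Boot's relaxed binding.
research:
  llm-provider: ollama
  model-name: llama2
  base-url: http://localhost:11434
```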
- POST /api/research/conduct - Start research
- GET /api/research/health - Health check
- GET /api/research/debug - Debug information
# Start research
curl -X POST http://localhost:8080/api/research/conduct \
-H "Content-Type: application/json" \
-d '{"topic":"Latest advances in quantum computing"}'
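The same request can be issued from Java using the JDK's built-in HttpClient. This sketch assumes the service is running locally on port 8080, as in the curl command above:

```java
import java.net.URI;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;

// Sketch: calling the conduct endpoint from Java.
// Assumes the service is running on localhost:8080.
public class ResearchClient {
    public static void main(String[] args) throws Exception {
        HttpRequest request = HttpRequest.newBuilder()
                .uri(URI.create("http://localhost:8080/api/research/conduct"))
                .header("Content-Type", "application/json")
                .POST(HttpRequest.BodyPublishers.ofString(
                        "{\"topic\":\"Latest advances in quantum computing\"}"))
                .build();
        HttpResponse<String> response = HttpClient.newHttpClient()
                .send(request, HttpResponse.BodyHandlers.ofString());
        System.out.println(response.body());
    }
}
```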
# Health check
curl http://localhost:8080/api/research/health
# Debug info
curl http://localhost:8080/api/research/debug
Enable debug logging by setting:
export LOGGING_LEVEL_COM_EXAMPLE=DEBUG
- Start the application
- Open http://localhost:8080
- Enter a research topic
- Click "Run Research"
- Watch the loading indicator and results
# Test with a simple topic
curl -X POST http://localhost:8080/api/research/conduct \
-H "Content-Type: application/json" \
-d '{"topic":"What is artificial intelligence?"}'
- Check your LLM provider connection
- Reduce MAX_RESEARCH_LOOPS if needed
- Ensure your API key is valid
- Check browser console for errors
- Verify environment variables
- Test with a simple topic first
# Check configuration
curl http://localhost:8080/api/research/debug
# Test connection
curl http://localhost:8080/api/research/health
- Fork the repository
- Create a feature branch
- Add comprehensive logging
- Test with multiple LLM providers
- Submit a pull request
This project is licensed under the MIT License - see the LICENSE file for details.
- LangChain4j for AI integration
- Marked.js for markdown rendering
- Prism.js for syntax highlighting
- Spring Boot for the framework