- Decompose and disambiguate
- Analyze narrow, well-defined sub-questions
- Synthesize and present comprehensive context-rich answers
💡 Example: When comparing two products (e.g. Car A vs. Car B), Agent Search will independently explore both, then compare them to form a rich, contextual answer.
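The decompose → parallel analysis → synthesize flow can be sketched with a few lines of async Python. The `decompose`, `search`, and `synthesize` helpers below are hypothetical stand-ins for the real pipeline stages, not Hymalaia's API:

```python
import asyncio

# Hypothetical helpers standing in for the pipeline stages.
def decompose(question: str) -> list[str]:
    # Break the question into narrow sub-questions (stubbed here).
    return [f"{question} -- aspect {i}" for i in range(1, 4)]

async def search(sub_question: str) -> str:
    # Each sub-question is analyzed independently.
    await asyncio.sleep(0)  # placeholder for an actual search call
    return f"findings for: {sub_question}"

def synthesize(question: str, findings: list[str]) -> str:
    # Combine the per-sub-question findings into one answer.
    return f"{question}\n" + "\n".join(findings)

async def agent_search(question: str) -> str:
    subs = decompose(question)
    # asyncio.gather runs all sub-question searches concurrently.
    findings = await asyncio.gather(*(search(s) for s in subs))
    return synthesize(question, list(findings))

answer = asyncio.run(agent_search("Car A vs. Car B"))
print(answer)
```

The key point is the fan-out: sub-questions are explored concurrently rather than one after another, then merged into a single contextual answer.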
## Key Features
- **Intelligent Query Decomposition**: Breaks complex questions into precise sub-questions
- **Parallel Search Processing**: Executes multiple analysis threads simultaneously
- **Answer Validation**: Refines and validates responses for accuracy and completeness
## Configuration

### Basic Setup

To enable Agent Search in your Hymalaia deployment:

- Update to the latest version of Hymalaia
- Configure knowledge source connections
- Set up LLM provider credentials
- Enable the Agent toggle in the chat interface (with a search-capable assistant)
### Advanced Configuration
## Best Practices & Suggestions
- Don’t hesitate to ask complex or multi-layered questions.
- Try comparative queries like *"What's the difference between Solution A and B?"* Agent Search will analyze A and B separately before comparing them.
- Ask ambiguous questions such as *"What are the guiding principles for X?"* The system will use context to clarify what "guiding principles" refers to.
- Even simple questions may benefit from deeper, contextualized answers.
- Click on sub-question analyses — they may provide interesting insights individually.
⚠️ We recommend assigning a faster, cheaper LLM as your Fast Model, since Agent Search performs many parallel queries.
## Common Issues and Solutions
| Issue | Solution | 
|---|---|
| Langgraph/Langchain errors | Ensure server uses Python 3.11 and installs libraries from backend/requirements.txt. | 
| Rate limits | Agent Search may hit rate limits due to parallel queries. Use a provider with higher limits. | 
| Timeouts | Timeout thresholds are enforced to avoid blocking. Contact support if these are too strict. | 
| High token usage | Expect significantly more input/output tokens than with Basic Search. | 
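Because Agent Search fans out many LLM calls at once, a client-side retry with exponential backoff can absorb transient rate-limit responses. A minimal sketch, where `RateLimitError` and `flaky_llm_call` are hypothetical stand-ins for your provider's error type and client call:

```python
import random
import time

class RateLimitError(Exception):
    """Stand-in for a provider's 429 rate-limit error."""

def with_backoff(fn, max_retries=5, base_delay=1.0):
    # Retry fn() with exponential backoff plus jitter on rate-limit errors.
    for attempt in range(max_retries):
        try:
            return fn()
        except RateLimitError:
            if attempt == max_retries - 1:
                raise  # give up after the final attempt
            delay = base_delay * (2 ** attempt) + random.uniform(0, 0.1)
            time.sleep(delay)

# Example: a call that rate-limits twice before succeeding.
calls = {"n": 0}
def flaky_llm_call():
    calls["n"] += 1
    if calls["n"] < 3:
        raise RateLimitError
    return "ok"

result = with_backoff(flaky_llm_call, base_delay=0.01)
```

Wrapping each parallel sub-query this way trades a little latency for far fewer hard failures; raising your provider's rate limits remains the more robust fix.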
## Summary

Agent Search offers a powerful way to surface deeper insights, especially when working with ambiguous or multi-faceted questions. For best performance:

- Use optimized LLM configurations
- Expect and account for higher token usage
- Experiment with your queries to see how well the system synthesizes knowledge
💬 Reach out to us on Slack or Discord if you’re experiencing issues or want help fine-tuning your setup.
