📄 Read Our Research Paper
We've published our findings on optimizing the interface between Knowledge Graphs and LLMs for complex reasoning. In the paper, we present systematic hyperparameter optimization results using cognee's modular framework across multiple QA benchmarks.
AI Memory Benchmark Results
Understanding how well different AI memory systems retain and utilize context across interactions is crucial for improving LLM performance. We have updated our benchmark to include a comprehensive evaluation of the cognee AI memory system against other leading tools, including LightRAG, Mem0, and Graphiti (previous result). This analysis provides a detailed comparison of performance metrics, helping developers select the best AI memory solution for their applications. The evaluation is based on the following metrics.

Key Performance Metrics
Results for cognee:
Human-like Correctness: 0.93
DeepEval Correctness: 0.85
DeepEval F1: 0.84
DeepEval EM: 0.69
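For readers unfamiliar with the last two metrics, exact match (EM) and token-level F1 are the conventional QA evaluation scores: EM requires the normalized prediction to equal the reference exactly, while F1 credits partial token overlap. The sketch below shows the standard definitions; it is a generic illustration, not the DeepEval implementation used in our harness, and the function names are ours.

```python
import re
import string
from collections import Counter


def normalize_answer(text: str) -> str:
    """Lowercase, drop punctuation and articles, collapse whitespace (SQuAD-style)."""
    text = text.lower()
    text = "".join(ch for ch in text if ch not in string.punctuation)
    text = re.sub(r"\b(a|an|the)\b", " ", text)
    return " ".join(text.split())


def exact_match(prediction: str, ground_truth: str) -> float:
    """EM: 1.0 if the normalized strings are identical, else 0.0."""
    return float(normalize_answer(prediction) == normalize_answer(ground_truth))


def f1_score(prediction: str, ground_truth: str) -> float:
    """Token-level F1 between the normalized prediction and ground truth."""
    pred_tokens = normalize_answer(prediction).split()
    gold_tokens = normalize_answer(ground_truth).split()
    if not pred_tokens or not gold_tokens:
        return float(pred_tokens == gold_tokens)
    common = Counter(pred_tokens) & Counter(gold_tokens)
    num_same = sum(common.values())
    if num_same == 0:
        return 0.0
    precision = num_same / len(pred_tokens)
    recall = num_same / len(gold_tokens)
    return 2 * precision * recall / (precision + recall)


# EM is strict; F1 rewards partial overlap.
print(exact_match("The Eiffel Tower", "Eiffel Tower"))  # 1.0 after normalization
print(f1_score("Paris, France", "Paris"))               # ≈ 0.67
```

Correctness metrics (both the human-graded and DeepEval variants) are judged on semantic equivalence rather than string overlap, which is why they can exceed EM on the same predictions.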
Benchmark Comparison
Optimized Cognee Configurations
Cognee Graph Completion with Chain-of-Thought (CoT) shows significant performance improvements over the previous, non-optimized configuration; a usage sketch follows below.
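If you want to try graph completion yourself, the sketch below follows cognee's basic add → cognify → search pipeline. It is a minimal example under stated assumptions: the SearchType import path, the GRAPH_COMPLETION value, and the search parameter names are taken from cognee's public examples and may differ between versions, and the CoT-optimized configuration used in the benchmark is not reproduced here.

```python
import asyncio

import cognee
# Assumption: SearchType lives at this path; it has moved between cognee versions.
from cognee.api.v1.search import SearchType


async def main():
    # Ingest raw text and build the knowledge graph that serves as memory.
    await cognee.add("Berlin is the capital of Germany and its largest city.")
    await cognee.cognify()

    # Answer a question by combining graph retrieval with an LLM completion.
    results = await cognee.search(
        query_type=SearchType.GRAPH_COMPLETION,  # assumption: enum value name
        query_text="What is the capital of Germany?",
    )
    for result in results:
        print(result)


if __name__ == "__main__":
    asyncio.run(main())
```

In the optimized setup, the completion step additionally prompts the model to reason step by step over the retrieved graph context before answering, which is where the CoT gains come from.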
What is Next?
Continuous improvement is key. We are actively enhancing our benchmarks, integrating new metrics, and evaluating additional AI memory solutions. Stay tuned for updates and more detailed analysis.
Have questions or want help optimizing your AI system?