Large Language Model (LLM) agents exhibit remarkable conversational and reasoning capabilities but remain constrained by limited context windows and the lack of persistent memory. Recent efforts address these limitations with external memory architectures, often graph-based, yet most adopt flat, entangled structures that intertwine semantics with topology, leading to redundant representations, unstructured retrieval, and degraded efficiency and accuracy. To resolve these issues, we propose LiCoMemory, an end-to-end agentic memory framework for real-time updating and retrieval. LiCoMemory introduces CogniGraph, a lightweight hierarchical graph that uses entities and relations as semantic indexing layers, and employs temporal and hierarchy-aware search with integrated reranking for adaptive and coherent knowledge retrieval. Experiments on the long-term dialogue benchmarks LoCoMo and LongMemEval show that LiCoMemory not only outperforms established baselines in temporal reasoning, multi-session consistency, and retrieval efficiency, but also markedly reduces update latency. Our official code and data are available at https://github.com/EverM0re/LiCoMemory.
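To make the idea of entities and relations acting as a semantic indexing layer concrete, the following is a minimal, self-contained Python sketch of a hierarchical memory index with temporal and hierarchy-aware retrieval plus a simple rerank. It is an illustrative assumption only: the class and function names, the lexical-overlap scoring, and the exponential recency decay are stand-ins, not the CogniGraph implementation released in the repository.

```python
# Minimal sketch (standard library only) of a hierarchical memory index:
# entity nodes index relation nodes, which index raw dialogue facts.
# Retrieval walks the hierarchy (entities -> relations -> facts), weights
# candidates by recency, and reranks them. All names and scoring choices
# are illustrative assumptions, not the paper's implementation.
from dataclasses import dataclass, field
import math
import time


@dataclass
class Fact:
    text: str
    timestamp: float          # when the fact was observed (unix seconds)
    session_id: str


@dataclass
class RelationNode:
    label: str                                    # e.g. "works_at"
    fact_ids: list[int] = field(default_factory=list)


@dataclass
class EntityNode:
    name: str                                     # e.g. "alice"
    relations: dict[str, RelationNode] = field(default_factory=dict)


class HierarchicalMemory:
    """Entities index relations; relations index facts (the semantic layers)."""

    def __init__(self) -> None:
        self.entities: dict[str, EntityNode] = {}
        self.facts: list[Fact] = []

    def add(self, entity: str, relation: str, fact: Fact) -> None:
        """Insert a fact and register it under its entity/relation index keys."""
        self.facts.append(fact)
        fid = len(self.facts) - 1
        ent = self.entities.setdefault(entity, EntityNode(entity))
        rel = ent.relations.setdefault(relation, RelationNode(relation))
        rel.fact_ids.append(fid)

    def retrieve(self, query: str, top_k: int = 3, half_life_days: float = 30.0) -> list[Fact]:
        """Hierarchy-aware search: prune by entity mention, then score each
        candidate fact by lexical overlap * temporal decay, and rerank."""
        q_tokens = set(query.lower().split())
        now = time.time()
        scored: list[tuple[float, Fact]] = []
        for ent in self.entities.values():
            if ent.name.lower() not in q_tokens:
                continue  # skip whole subtrees the query never mentions
            for rel in ent.relations.values():
                for fid in rel.fact_ids:
                    fact = self.facts[fid]
                    overlap = len(q_tokens & set(fact.text.lower().split()))
                    age_days = (now - fact.timestamp) / 86400.0
                    decay = math.exp(-math.log(2) * age_days / half_life_days)
                    scored.append((overlap * decay, fact))
        # Rerank: highest combined semantic/temporal score first.
        scored.sort(key=lambda pair: pair[0], reverse=True)
        return [fact for _, fact in scored[:top_k]]


if __name__ == "__main__":
    mem = HierarchicalMemory()
    mem.add("alice", "works_at",
            Fact("alice works at a robotics startup", time.time() - 5 * 86400, "s1"))
    mem.add("alice", "moved_to",
            Fact("alice moved to Berlin last month", time.time() - 40 * 86400, "s2"))
    for fact in mem.retrieve("where does alice work"):
        print(fact.text)
```

The design point this sketch is meant to convey is the separation of semantics from topology: the entity and relation layers are cheap index keys used to prune the search space, while the fact layer holds the actual content, and temporal decay plus reranking decide which surviving facts are returned.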