[BUG] Cannot delete memories when not using OpenAi #2023

Open
heckfy88 opened this issue Feb 2, 2025 · 5 comments · May be fixed by #2034
Labels
bug Something isn't working

Comments

@heckfy88

heckfy88 commented Feb 2, 2025

Description

I'm trying to remove all memories, but when I run "crewai reset-memories -a" I receive an error saying I haven't configured an OpenAI API key, which shouldn't be needed because I'm using Ollama's local LLM and embeddings.

Steps to Reproduce

  1. Configure the project with Ollama's llama3.1 LLM and the nomic-embed-text embedding model.
  2. Create a knowledge base.
  3. Run the project so that a knowledge document is saved.
  4. Run the "crewai reset-memories -a" command in the console.
  5. Observe the error.

Expected behavior

No error, memories purged

Screenshots/Code snippets

import os

from crewai import Agent, Crew, Process, Task, LLM
from crewai.knowledge.source.text_file_knowledge_source import TextFileKnowledgeSource
from crewai.project import CrewBase, agent, crew, task, after_kickoff

# Uncomment the following line to use an example of a custom tool
# from knowledge_example.tools.custom_tool import MyCustomTool

# Check our tools documentation for more information on how to use them
# from crewai_tools import SerperDevTool

os.environ["OPENAI_API_KEY"] = "NA"
os.environ["OTEL_SDK_DISABLED"] = "true"

# Create a text file knowledge source
text_source = TextFileKnowledgeSource(
    file_paths=["1.java"]
)


@CrewBase
class KnowledgeExample():
    """KnowledgeExample crew"""

    agents_config = 'config/agents.yaml'
    tasks_config = 'config/tasks.yaml'

    @after_kickoff  # Optional hook to be executed after the crew has finished
    def log_results(self, output):
        # Example of logging results, dynamically changing the output
        print(f"Results: {output}")
        return output

    @agent
    def researcher(self) -> Agent:
        return Agent(
            config=self.agents_config['researcher'],
            verbose=True,
            memory=True,
            llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
            knowledge_sources=[text_source],
            embedder={
                "provider": "ollama",
                "config": {
                    "model": "nomic-embed-text"
                }
            }
        )

    @task
    def research_task(self) -> Task:
        return Task(
            config=self.tasks_config['research_task'],
        )

    @crew
    def crew(self) -> Crew:
        """Creates the KnowledgeExample crew"""
        return Crew(
            agents=self.agents,  # Automatically created by the @agent decorator
            tasks=self.tasks,  # Automatically created by the @task decorator
            process=Process.sequential,
            verbose=True,
            memory=True,
            llm=LLM(model="ollama/llama3.1", base_url="http://localhost:11434"),
            knowledge_sources=[text_source],
            embedder={
                "provider": "ollama",
                "config": {
                    "model": "mxbai-embed-large",
                    "dimensions": "768",
                }
            },
        )

Operating System

Other (specify in additional context)

Python Version

3.12

crewAI Version

0.100.1

crewAI Tools Version

0.33

Virtual Environment

Venv

Evidence

[Screenshot of the error output attached in the original issue]

Possible Solution

This command should also work with LLM/embedding providers other than OpenAI.
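A purely hypothetical sketch of what that could look like, e.g. the reset path picking up a default embedding provider from the environment instead of assuming OpenAI (the helper and variable names below are illustrative assumptions, not an actual crewAI API):

import os

def get_default_embedder_config() -> dict:
    # Hypothetical helper, not a real crewAI API: read the default embedding
    # provider/model from environment variables so memory resets don't assume
    # OpenAI. The variable names are illustrative assumptions only.
    provider = os.environ.get("CREWAI_EMBEDDING_PROVIDER", "openai")
    model = os.environ.get(
        "CREWAI_EMBEDDING_MODEL",
        "nomic-embed-text" if provider == "ollama" else "text-embedding-3-small",
    )
    return {"provider": provider, "config": {"model": model}}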

Additional context

I also encounter problems when I include the embedder config in the crew, although the same block works in the agent:

[2025-02-02 14:11:13][ERROR]: Failed to upsert documents: APIStatusError.__init__() missing 2 required keyword-only arguments: 'response' and 'body'

[2025-02-02 14:11:13][WARNING]: Failed to init knowledge: APIStatusError.__init__() missing 2 required keyword-only arguments: 'response' and 'body'
ERROR:root:Error during entities search: Embedding dimension 1024 does not match collection dimensionality 768
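
For what it's worth, the dimension error looks consistent with the snippet above: the agent embeds with nomic-embed-text (768-dimensional vectors) while the crew embeds with mxbai-embed-large (1024-dimensional by default), so the two configs write into stores of different dimensionality. A minimal sketch, assuming the mismatch really is just the two models, would be to share one embedder dict:

# Sketch: reuse the same Ollama embedding model for the agent and the crew so
# the collection dimensionality stays consistent (assumption: the 1024-vs-768
# error above comes from mixing mxbai-embed-large and nomic-embed-text).
ollama_embedder = {
    "provider": "ollama",
    "config": {"model": "nomic-embed-text"},  # 768-dimensional embeddings
}
# ...then pass embedder=ollama_embedder to both the Agent(...) and the Crew(...).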

heckfy88 added the bug label Feb 2, 2025
@androw
Contributor

androw commented Feb 4, 2025

I'm having the same problem as the Additional context part.
The embedder + knowledge_sources works in the agent block but not in the crew block.

@heckfy88
Author

heckfy88 commented Feb 4, 2025

@androw Same! As for the reset, though, I found a workaround; you can use the following code:

    @after_kickoff
    def reset_knowledge_base(self, output):
        text_source.storage.reset()
        return output

Where text_source is:

text_source = TextFileKnowledgeSource(
    file_paths=["1.txt", "2.txt", "3.txt", "4.txt"]
)
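
In context, the hook slots straight into the CrewBase class from the snippet in the issue description; a sketch of how it might look there (assuming, as in the workaround above, that the source's .storage is populated once the crew has run):

@CrewBase
class KnowledgeExample():
    """KnowledgeExample crew"""

    @after_kickoff
    def reset_knowledge_base(self, output):
        # Clear the embedded knowledge store after each run, so a separate
        # "crewai reset-memories -a" (which currently assumes OpenAI) isn't needed.
        text_source.storage.reset()
        return output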

@jackyin5918

jackyin5918 commented Feb 5, 2025

I'm having the same problem as the Additional context part. The embedder + knowledge_sources works in the agent block but not in the crew block.

I may be encountering the same issue/bug; see #2033 for a possible workaround, FYI.

devin-ai-integration bot added a commit that referenced this issue Feb 5, 2025
… LLMs

- Add environment variables for default embedding provider
- Support Ollama as default embedding provider
- Add tests for memory reset with different providers
- Update documentation

Fixes #2023

Co-Authored-By: Joe Moura <[email protected]>
@Vidit-Ostwal
Contributor

I think the problem lies in how reset_memory_command, which is executed from the CLI, initialises short_term_memory and entity_memory.

This is different from how they are initialised inside crew.py.

The reset_memory_command should work fine for long_term_memory and knowledge_memory, as they don't require any arguments for initialisation.

The direct CLI command won't work if anyone does a custom initialisation of any of the memories.

[Two screenshots comparing the CLI initialisation with the one in crew.py]
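
A hypothetical sketch of the difference (the import path and constructor parameters are my assumptions from reading the code, not a verified API):

from crewai.memory import EntityMemory, ShortTermMemory

# Roughly what the CLI reset does, per the analysis above: build the memories
# with default settings, which fall back to an OpenAI embedder and therefore
# require OPENAI_API_KEY even in an all-Ollama project.
ShortTermMemory().reset()
EntityMemory().reset()

# Roughly what crew.py does instead: build them from the crew so they inherit
# its embedder config (e.g. the Ollama embedder from the snippet above).
# Parameter names below are assumptions for illustration only.
# ShortTermMemory(crew=crew, embedder_config=crew.embedder_config).reset()
# EntityMemory(crew=crew, embedder_config=crew.embedder_config).reset()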

@Vidit-Ostwal
Contributor

@heckfy88, this should be resolved by the PR above; kindly close the issue if it works.
