Ollama memory embeddings #2050

Open

jmrichardson opened this issue Feb 6, 2025 · 1 comment
Labels
bug Something isn't working

Comments

@jmrichardson

jmrichardson commented Feb 6, 2025

Description

When using Ollama memory embeddings, I am getting the following errors:

Give Feedback / Get Help: https://github.com/BerriAI/litellm/issues/new
LiteLLM.Info: If you need to debug this error, use `litellm.set_verbose=True'.
Provider List: https://docs.litellm.ai/docs/providers
(the three lines above repeat several more times)
Failed to add to long term memory: Failed to convert text into a Pydantic model due to the following error: litellm.APIConnectionError: OllamaException - [WinError 10061] No connection could be made because the target machine actively refused it

Steps to Reproduce

Here is my config:

from crewai import Crew, Process

crew = Crew(
    agents=[agent],
    tasks=[task],
    process=Process.sequential,
    embedder={
        "provider": "ollama",
        "config": {
            "model": "nomic-embed-text",
            "url": "http://10.23.50.101:11434/api/embeddings",
        },
    },
    memory=True,  # enable memory for the crew: short-term, long-term, entity, contextual
    verbose=True,
    llm=crew_llm,  # explicitly set the crew's LLM to the Ollama model
)
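
To isolate the failure, here is a minimal sketch that calls the same Ollama server directly through litellm, bypassing crewAI; the host, port, and model mirror the config above, and the "ollama/" prefix is how litellm routes the request to its Ollama provider:

import litellm

# Call the embedding endpoint directly; the "ollama/" prefix tells litellm
# to use its Ollama provider instead of defaulting to OpenAI.
response = litellm.embedding(
    model="ollama/nomic-embed-text",
    input=["Llamas are members of the camelid family"],
    api_base="http://10.23.50.101:11434",
)
print(len(response.data[0]["embedding"]))  # embedding dimensionality

If this call succeeds, the server is reachable and the connection error is happening inside crewAI's provider resolution rather than in Ollama itself.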

Expected behavior

I am able to get embeddings by testing with curl:

curl http://10.23.50.101:11434/api/embed -d "{ \"model\": \"nomic-embed-text\", \"input\": \"Llamas are members of the camelid family\" }"
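
For completeness, a sketch of the same connectivity check from Python using requests; note that this curl command targets /api/embed while the embedder config above points at /api/embeddings:

import requests

# Same request as the curl command above; /api/embed is Ollama's newer
# embedding endpoint, while /api/embeddings is the legacy one.
resp = requests.post(
    "http://10.23.50.101:11434/api/embed",
    json={
        "model": "nomic-embed-text",
        "input": "Llamas are members of the camelid family",
    },
    timeout=30,
)
resp.raise_for_status()
print(resp.json()["embeddings"][0][:5])  # first few values of the embedding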

Operating System

Ubuntu 20.04

Python Version

3.10

crewAI Version

crewai 0.100.1

crewAI Tools Version

crewai-tools 0.33.0

Virtual Environment

Conda

Evidence

None

Possible Solution

I have tried many different configs, but none of them work.

Additional context

None

@jmrichardson jmrichardson added the bug Something isn't working label Feb 6, 2025
@jmrichardson jmrichardson changed the title from [BUG] Ollama memory embeddings to Ollama memory embeddings Feb 6, 2025
@jmrichardson
Author

I was able to resolve the problem by updating this line to:

# Pass the crew LLM's model and base URL through to litellm so it does
# not fall back to the default OpenAI provider.
self._client = instructor.from_litellm(
    completion,
    model=self.llm.model,
    api_base=self.llm.base_url,
)

litellm was defaulting to the OpenAI provider, which I don't use. Hopefully someone can make this edit in main if it looks OK.
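
For anyone who would rather not patch the installed package, a possible workaround sketch, assuming litellm's Ollama provider honors the OLLAMA_API_BASE environment variable; this only helps if the model name already carries the "ollama/" prefix, since the root cause above is the missing model/api_base arguments:

import os

# Point litellm's Ollama provider at the remote server before the crew
# (and crewAI's internal instructor client) is constructed.
os.environ["OLLAMA_API_BASE"] = "http://10.23.50.101:11434"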
