[LLM Object Generation][1/2] Leverage AI Lib's Generate Object instead of parsing strings #309

Open
wants to merge 6 commits into base: main

Conversation

monilpat

Relates to:

Issue #148: JSON Parsing Reliability

Risks

Low. The changes are not yet used in production code, so the risk is contained to internal testing and development environments. The new modular approach will eventually replace generateObject and generateObjectArray but has no immediate impact on existing functionality.

Background

What does this PR do?

This PR refactors the generateObject function by introducing a modular approach to handle AI model generation and parsing. The update addresses issue #148, which details inconsistencies in JSON response formatting from the LLM. For instance, action values like NONE are inconsistently quoted, causing JSON parsing errors that disrupt program logic.
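To make the failure mode concrete, here is a minimal sketch of the old parsing problem; the raw string and field names are illustrative, not taken from the codebase:

```ts
// An LLM reply that leaves the enum unquoted is not valid JSON, so the old
// string-parsing path throws before any program logic runs.
const raw = '{ "user": "agent", "action": NONE }';

try {
  JSON.parse(raw);
} catch (err) {
  // SyntaxError: Unexpected token 'N' ... — previously handled by custom
  // cleanup and retry code scattered across callers.
  console.error("Failed to parse model output:", err);
}
```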

This PR introduces provider-specific handlers within generateObject, allowing standardized JSON parsing for each model type and reducing the need for custom error-handling code. This is the first part of a two-stage update, with a follow-up PR to fully deprecate generateObject and generateObjectArray in favor of this approach.
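As a rough sketch of the intended shape: the handler names handleProvider, handleOpenAI, and handleAnthropic come from this PR, but the option fields, signatures, and model IDs below are assumptions for illustration; the real implementation is in the diff. The sketch builds on the AI SDK's generateObject and a zod schema:

```ts
import { generateObject as aiGenerateObject } from "ai";
import { openai } from "@ai-sdk/openai";
import { anthropic } from "@ai-sdk/anthropic";
import { z } from "zod";

// Hypothetical option shape; field names are assumptions for illustration.
interface ProviderOptions {
  provider: "openai" | "anthropic";
  modelName: string;
  prompt: string;
  schema: z.ZodTypeAny;
}

// Provider-specific handlers: each one configures schema-validated generation
// for its model family instead of hand-parsing a raw string.
async function handleOpenAI({ modelName, prompt, schema }: ProviderOptions) {
  return aiGenerateObject({ model: openai(modelName), schema, prompt });
}

async function handleAnthropic({ modelName, prompt, schema }: ProviderOptions) {
  return aiGenerateObject({ model: anthropic(modelName), schema, prompt });
}

// Central router: generateObject delegates here, so parsing concerns live in
// one place per provider.
export async function handleProvider(options: ProviderOptions) {
  switch (options.provider) {
    case "openai":
      return handleOpenAI(options);
    case "anthropic":
      return handleAnthropic(options);
    default:
      throw new Error(`Unsupported provider: ${options.provider}`);
  }
}

// Hypothetical refactored entry point.
export async function generateObject(options: ProviderOptions) {
  return handleProvider(options);
}
```

The key design point is that each handler hands the AI SDK a schema, so the returned object is already validated and no free-form JSON.parse is needed on the caller side.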

What kind of change is this?

  • Improvements: Increases reliability by introducing structured, provider-specific handling.
  • Refactor: Simplifies generateObject and centralizes model-specific parsing logic.

Documentation changes needed?

  • Yes: Documentation should reflect the updated generateObject design and the role of provider-specific handlers.

Testing

Where should a reviewer start?

Start with generateObject, which now routes requests through handleProvider and dispatches to provider-specific handlers (e.g., handleOpenAI, handleAnthropic) that configure JSON response handling for each model type. This centralizes parsing and prepares for structured output where the provider supports it.
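A hypothetical call site, following the option shape sketched above (the field names and schema are assumptions), shows the path a reviewer can trace from generateObject through handleProvider to a provider handler:

```ts
import { z } from "zod";

// Assumed response schema for the action decision described in issue #148.
const actionSchema = z.object({
  action: z.string(),
  reasoning: z.string().optional(),
});

const { object } = await generateObject({
  provider: "openai",
  modelName: "gpt-4o",
  prompt: "Decide the agent's next action and explain why.",
  schema: actionSchema,
});

// The result is already schema-validated; no manual JSON.parse or cleanup.
console.log(object.action);
```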

Detailed testing steps

  • Run unit tests on generateObject to confirm JSON parsing accuracy with the new handlers (see the test sketch after this list).
  • Test across different AI model providers, especially OpenAI, to ensure JSON responses adhere to expected formats.
  • Check logs for consistent error handling and retry logic.
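As a starting point for the first bullet above, a demonstrative test sketch (assuming a Vitest-style runner and zod; the schema and strings are hypothetical) contrasts the old raw-string path with the schema-validated one:

```ts
import { describe, expect, it } from "vitest";
import { z } from "zod";

// Hypothetical schema mirroring the action payload from issue #148.
const actionSchema = z.object({ action: z.string() });

describe("structured object generation vs. raw string parsing", () => {
  it("old path: unquoted enum values break JSON.parse", () => {
    const raw = '{ "action": NONE }';
    expect(() => JSON.parse(raw)).toThrow();
  });

  it("new path: schema validation yields a typed object", () => {
    // With schema-based generation, the provider returns structured data,
    // so we only validate it instead of re-parsing free-form text.
    const fromProvider = { action: "NONE" };
    const parsed = actionSchema.parse(fromProvider);
    expect(parsed.action).toBe("NONE");
  });
});
```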

@monilpat monilpat changed the title [LLM Content Generation] Leverage AI Lib's Generate Object instead of parsing strings [LLM Object Generation][1/2] Leverage AI Lib's Generate Object instead of parsing strings Nov 14, 2024