Meet JASON, the AI that can explore concepts in JSON structure! #3983
mrmemo started this conversation in Show and tell
Replies: 1 comment
I like this. Very, very great for downstream processing. There's a way to specify a schema, right?
JASON, the GPT-4 AI that can explore concepts in JSON structure
Problem Summary:
Out of the box, GPT-4 doesn't explore "JSON" structures in great detail. It generally favors breadth over depth, frequently duplicates sub-structures within the object, and fills keys with example values instead of exploring the structure further. In practice, if you ask GPT-4 to generate a hierarchy of key-value pairs like JSON, it "skims over" the top layer of complexity rather than "diving into" the object, and it wastes tokens on guessed values and duplicated branches along the way. This limits its ability to explore complex concepts as hierarchies.
Here's an example:
PROMPT:
Turn the U.S. Government into JSON format.
BASE GPT-4 RESPONSE:
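(The original output isn't reproduced here; the sketch below only illustrates the kind of shallow, value-filled result described, not the actual response.)

```json
{
  "name": "United States Government",
  "branches": {
    "executive": { "head": "President" },
    "legislative": { "head": "Congress" },
    "judicial": { "head": "Supreme Court" }
  },
  "capital": "Washington, D.C."
}
```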
We can see that the model natively returns a very cursory review of the subject matter, and sometimes includes what it thinks are correct values for the keys. But this behavior limits its ability to consider complexity and depth.
"JASON'S" Functionality:
JASON iteratively considers the complexity at each level of detail in the JSON structure. This iterative process ensures that JASON returns consistent, predictable results and maximizes the detail with which it considers each sub-structure.
In contrast to base GPT-4, JASON explores conceptual complexity in significantly greater detail, which enables much more robust hypothesis generation within these potentially abstract structures.
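Roughly, the loop behind this could be sketched like the following (a hypothetical illustration, not JASON's actual prompts or code; `ask_model` stands in for whatever chat session is driving it):

```python
# Hypothetical sketch of the iterative expansion loop: keep a queue of key
# paths, ask the model to expand one sub-structure per call, and splice each
# expansion back into the growing object.
import json
from collections import deque

def expand_concept(ask_model, root_concept, max_depth=3):
    """Iteratively deepen a JSON structure one sub-object at a time.

    ask_model(prompt) -> str is any function that returns model text.
    """
    tree = {root_concept: {}}
    queue = deque([(root_concept,)])  # key paths still waiting to be expanded

    while queue:
        path = queue.popleft()
        if len(path) > max_depth:
            continue
        prompt = (
            f"Expand the concept at {' > '.join(path)} into a JSON object of "
            "sub-concepts only. Use empty objects as values; no example data."
        )
        sub = json.loads(ask_model(prompt))  # assumes the reply is valid JSON

        # walk to the node for this path, splice in the expansion,
        # and queue its children for the next round of iteration
        node = tree
        for key in path:
            node = node[key]
        node.update(sub)
        for child in sub:
            queue.append(path + (child,))

    return tree
```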
JASON'S RESPONSE (V1):
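(As above, the actual output isn't reproduced here; the abridged sketch below only illustrates the deeper, value-free style of structure, with hypothetical keys and only one branch expanded.)

```json
{
  "united_states_government": {
    "executive_branch": {
      "office_of_the_president": {
        "executive_office_agencies": {},
        "cabinet": {
          "departments": {
            "department_of_state": {
              "bureaus": {},
              "foreign_missions": {}
            },
            "department_of_the_treasury": {}
          }
        }
      }
    },
    "legislative_branch": {},
    "judicial_branch": {}
  }
}
```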
In my example, JASON does a few things better than base GPT-4: it explores each branch to much greater depth, it doesn't duplicate sub-structures, and it doesn't burn tokens on guessed example values.
Where do we go next?
I'm currently working to compress the intermediary output that JASON generates while iterating. That output is necessary both to keep the model following the process correctly and to give it something to refer back to when structuring the final output as proper JSON. The intermediary process, while robust and exhaustive, burns a fair number of tokens.
However, it's possible that some (or even all) of this could be offloaded to GPT-3.5, with enough prompt engineering. For now, I'm confident that I can get this functionality with few-shot priming in GPT-4. At this stage, I believe it could readily be incorporated into the AutoGPT codebase as an "AI Companion Method".
Deployment
Deployment would be as simple as starting a new Session with the desired model, priming it with the necessary initial prompts, and querying it conversationally.
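With the openai Python client, that could look roughly like this (a minimal sketch; the system prompt, few-shot slots, and model name below are placeholders, not the actual JASON priming):

```python
# Rough deployment sketch: open a chat session, prime it with the JASON
# instructions and few-shot examples, then query it conversationally.
from openai import OpenAI

client = OpenAI()
history = [
    {"role": "system", "content": "You are JASON. Expand concepts into deeply nested JSON ..."},
    # few-shot priming examples (user/assistant pairs) would be appended here
]

def ask_jason(user_prompt: str) -> str:
    history.append({"role": "user", "content": user_prompt})
    reply = client.chat.completions.create(model="gpt-4", messages=history)
    text = reply.choices[0].message.content
    history.append({"role": "assistant", "content": text})
    return text

print(ask_jason("Turn the U.S. Government into JSON format."))
```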
Code Exposure
I don't want to push this without demand, so I'm interested to know: is this functionality worth expanding?