Superagentx - OpenSource Agent AI with Bedrock LLMs initial examples #369

Open · wants to merge 9 commits into base: main
{
"cells": [
{
"cell_type": "markdown",
"id": "9f3de924-5afd-4df3-b81b-1d2950390a29",
"metadata": {},
"source": [
"# Simplified AWS Bedrock LLM Function Calling Using SuperAgentX\n",
"\n",
"### Open-source GitHub repository: https://github.com/superagentxai/superagentx\n",
"\n",
"## What is SuperAgentX?\n",
"\n",
"SuperAgentX is an advanced agentic AI framework designed to accelerate the development of Artificial General Intelligence (AGI). It provides a powerful, modular, and flexible platform for building autonomous AI agents capable of executing complex tasks with minimal human intervention.\n",
"\n",
"Using the SuperAgentX agentic AI framework, function calling with AWS Bedrock LLMs is simplified.\n",
"\n",
"#### The example below shows how to invoke the Bedrock Converse API using the SuperAgentX framework.\n",
"\n",
"It defines a mock weather handler (tool), which the SuperAgentX `Agent` module invokes automatically.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "36e84a3e-07c5-4bfe-a00c-1e95ac06be23",
"metadata": {},
"outputs": [],
"source": [
"!pip install superagentx"
]
},
{
"cell_type": "code",
"execution_count": 11,
"id": "49046b9a-5b97-4820-854c-2fd47da4c5d0",
"metadata": {},
"outputs": [],
"source": [
"from superagentx.agent import Agent\n",
"from superagentx.engine import Engine\n",
"from superagentx.llm import LLMClient\n",
"from superagentx.handler.base import BaseHandler\n",
"from superagentx.prompt import PromptTemplate\n",
"from superagentx.agentxpipe import AgentXPipe"
]
},
{
"cell_type": "code",
"execution_count": 18,
"id": "526ddbe1",
"metadata": {},
"outputs": [
{
"data": {
"text/html": [
"<style>\n",
"table {float:left}\n",
"</style>\n"
],
"text/plain": [
"<IPython.core.display.HTML object>"
]
},
"metadata": {},
"output_type": "display_data"
}
],
"source": [
"%%html\n",
"<style>\n",
"table {float:left}\n",
"</style>"
]
},
{
"cell_type": "markdown",
"id": "e3973645-f92f-44b6-82be-e76e994a5386",
"metadata": {},
"source": [
"Although this example uses Claude 3.5 Sonnet, Bedrock supports many other models. The full list of models and supported features can be found [here](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html). The models are invoked via `bedrock-runtime`.\n",
"\n",
"\n",
"**Best Practice**: Always set your AWS credentials as environment variables.\n",
"\n",
"```bash\n",
"export AWS_ACCESS_KEY=<YOUR_ACCESS_KEY>\n",
"export AWS_SECRET_KEY=<YOUR_SECRET_KEY>\n",
"export AWS_REGION=<AWS_REGION>\n",
"```\n",
"\n",
"#### LLM Config in SuperAgentX\n",
"\n",
"Specify the LLM configuration as `llm_config`:\n",
"\n",
"| Name | Description | Data Type | Required | Example |\n",
"|------|-------------|-----------|----------|---------|\n",
"|`model`| AWS Bedrock supported [models](https://docs.aws.amazon.com/bedrock/latest/userguide/conversation-inference.html)|str| Yes| 'model': 'anthropic.claude-3-5-sonnet-20240620-v1:0' |\n",
"|`llm_type`| LLM type, here `bedrock` |str| Yes| 'llm_type': 'bedrock' |\n"
]
},
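{
"cell_type": "markdown",
"id": "b7c1a2d3-envv-4chk-8mdx-9f3de9240001",
"metadata": {},
"source": [
"Before creating the client, it can help to confirm the credentials are visible to the kernel. Below is a minimal sketch using only the standard library; it assumes the variable names shown above (`AWS_ACCESS_KEY`, `AWS_SECRET_KEY`, `AWS_REGION`), which may differ in your setup.\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "c8d2b3e4-envv-4chk-8pyx-9f3de9240002",
"metadata": {},
"outputs": [],
"source": [
"import os\n",
"\n",
"# Sanity check: report which credential variables are set.\n",
"# The variable names here mirror the export block above and are an assumption.\n",
"for var in ('AWS_ACCESS_KEY', 'AWS_SECRET_KEY', 'AWS_REGION'):\n",
"    status = 'set' if os.environ.get(var) else 'MISSING'\n",
"    print(f'{var}: {status}')"
]
},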
{
"cell_type": "code",
"execution_count": 12,
"id": "64768dc6-5f0f-4d26-b0ca-f40f63834fb1",
"metadata": {},
"outputs": [],
"source": [
"llm_config = {'model': 'anthropic.claude-3-5-sonnet-20240620-v1:0', 'llm_type':'bedrock'}\n",
"\n",
"llm_client: LLMClient = LLMClient(llm_config=llm_config)"
]
},
{
"cell_type": "markdown",
"id": "c3d3ed52-5ff9-4c2c-ad7a-42fd8f453fa3",
"metadata": {},
"source": [
"### Handler (Tool) definition \n",
"\n",
"We define `WeatherHandler` as a class whose individual tools are defined as methods decorated with `@tool`. Note that nothing in this definition is specific to Bedrock or the model used."
]
},
{
"cell_type": "code",
"execution_count": 13,
"id": "26ebfd29-1069-48a5-854a-9f6af856f9a5",
"metadata": {},
"outputs": [],
"source": [
"from superagentx.handler.base import BaseHandler\n",
"from superagentx.handler.decorators import tool\n",
"\n",
"class WeatherHandler(BaseHandler):\n",
" \n",
" @tool\n",
" async def get_weather(self, city: str, state: str) -> str:\n",
" \"\"\"\n",
" Return a mock weather report for the given city and state, in Fahrenheit.\n",
"\n",
" Args:\n",
" city (str): The city name\n",
" state (str): The state name\n",
"\n",
" Returns:\n",
" str: A fake weather report for the given city and state\n",
" \"\"\"\n",
" result = f'Weather in {city}, {state} is 70F and clear skies.'\n",
" print(f'Tool result: {result}')\n",
" return result"
]
},
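{
"cell_type": "markdown",
"id": "d9e3c4f5-tool-4try-8mdx-9f3de9240003",
"metadata": {},
"source": [
"The handler can be sanity-checked on its own before wiring it into an engine. A minimal sketch; it assumes the `@tool` decorator leaves the coroutine directly awaitable (notebooks support top-level `await`).\n"
]
},
{
"cell_type": "code",
"execution_count": null,
"id": "e0f4d5a6-tool-4try-8pyx-9f3de9240004",
"metadata": {},
"outputs": [],
"source": [
"# Direct call to the mock tool, with no LLM involved. This assumes\n",
"# @tool does not change the method's call signature.\n",
"await WeatherHandler().get_weather('Austin', 'Texas')"
]
},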
{
"cell_type": "markdown",
"id": "79a5dd9f-d43e-4f8d-a144-a186f1af30d3",
"metadata": {},
"source": [
"### Create Prompt Template Object"
]
},
{
"cell_type": "code",
"execution_count": 14,
"id": "3a732b3a-e562-4703-854e-f1ab3a0595a9",
"metadata": {},
"outputs": [],
"source": [
"prompt_template = PromptTemplate()"
]
},
{
"cell_type": "code",
"execution_count": 15,
"id": "adf0cc3f-50c2-4ddf-b97d-967ba5222731",
"metadata": {},
"outputs": [],
"source": [
"weather_forecast_engine = Engine(\n",
" handler=WeatherHandler(),\n",
" llm=llm_client,\n",
" prompt_template=prompt_template\n",
")"
]
},
{
"cell_type": "markdown",
"id": "bd7a5bae",
"metadata": {},
"source": [
"## Agent attributes\n",
"\n",
"| Attribute | Parameter | Description |\n",
"| :--- | :--- | :--- |\n",
"| **Role** | `role` | Defines the agent's function within SuperAgentX. It determines the kind of activity the agent is best suited for. |\n",
"| **Goal** | `goal` | The individual objective that the agent aims to achieve. It guides the agent's decision-making process. |\n",
"| **LLM** | `llm` | The language model that runs the agent. |\n",
"| **Retry Max** *(optional)* | `max_retry` | The maximum number of iterations an agent can complete before it must provide its best possible answer. Default `5`. |\n",
"| **Prompt Template** | `prompt_template` | Specifies the prompt format for the agent. |\n",
"| **Engines** | `engines` | A list of engines (or lists of engines) that the agent can utilize in `parallel` or `sequential` mode. This allows for flexibility in processing and task execution based on different capabilities or configurations. |"
]
},
{
"cell_type": "code",
"execution_count": 16,
"id": "31b003a7-b8de-44c7-bd73-5900a4f84a03",
"metadata": {},
"outputs": [],
"source": [
"weather_man_agent = Agent(\n",
" goal=\"Verify the weather results for the given city and state \",\n",
" role=\"Weather Man\",\n",
" llm=llm_client,\n",
" max_retry=2, #Default Max Retry is 5\n",
" prompt_template=prompt_template,\n",
" engines=[weather_forecast_engine],\n",
" )"
]
},
{
"cell_type": "markdown",
"id": "04315d06",
"metadata": {},
"source": [
"### Pipe Attributes\n",
"\n",
"| Attribute | Parameter | Description |\n",
"| :--- | :--- | :--- |\n",
"| **Name** | `name` | An optional name for the `AgentXPipe`, providing a friendlier reference for display or logging purposes. |\n",
"| **Agents** | `agents` | A list of `Agent` instances (or lists of `Agent` instances) that are part of this pipe. These agents perform tasks and contribute to achieving the defined goal. |\n"
]
},
{
"cell_type": "code",
"execution_count": 17,
"id": "daa61f68",
"metadata": {},
"outputs": [],
"source": [
"pipe = AgentXPipe(\n",
" name='Fake Weather Man',\n",
" agents=[weather_man_agent]\n",
")"
]
},
{
"cell_type": "code",
"execution_count": 21,
"id": "961a636e-c8ef-4142-bc83-5a8ea086374e",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"Tool result: Weather in New York, New York is 70F and clear skies.\n"
]
}
],
"source": [
"result = await pipe.flow(query_instruction=\"What is the weather like in New York, USA?\")"
]
},
{
"cell_type": "code",
"execution_count": 22,
"id": "82b00c28-ea7c-4817-a1b9-0babf15c2fd0",
"metadata": {},
"outputs": [
{
"name": "stdout",
"output_type": "stream",
"text": [
"[GoalResult(name='Agent-9c51452a31fd417eb6492a01961e7011', agent_id='9c51452a31fd417eb6492a01961e7011', reason='The output context provides weather information for New York, USA, which matches the query instruction.', result='Weather in New York, New York is 70F and clear skies.', content=None, error=None, is_goal_satisfied=True)]\n"
]
}
],
"source": [
"print(result)"
]
}
],
"metadata": {
"kernelspec": {
"display_name": "venv",
"language": "python",
"name": "venv"
},
"language_info": {
"codemirror_mode": {
"name": "ipython",
"version": 3
},
"file_extension": ".py",
"mimetype": "text/x-python",
"name": "python",
"nbconvert_exporter": "python",
"pygments_lexer": "ipython3",
"version": "3.12.5"
}
},
"nbformat": 4,
"nbformat_minor": 5
}