# Quick Start
This guide walks you through creating a simple agent that answers questions using a local LLM.
## Prerequisites

- Ollama installed and running
- A model pulled (e.g., `ollama pull qwen3:8b`)
- Node.js 20+ or Python 3.11+
## Installation

```bash
# Coming soon — the framework is under active development
npm install @ai-gent/core
```
## Your First Agent

```ts
import { Agent, OllamaProvider } from '@ai-gent/core';

const agent = new Agent({
  provider: new OllamaProvider({
    model: 'qwen3:8b',
    baseUrl: 'http://localhost:11434',
  }),
  tools: [],
  systemPrompt: 'You are a helpful assistant.',
});

const result = await agent.run('What is the capital of France?');
console.log(result);
```
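For orientation, the `OllamaProvider` options above correspond to Ollama's standard chat endpoint (`POST /api/chat`). The sketch below builds the equivalent request payload by hand; it is illustrative, not the framework's internal code:

```ts
// Illustrative: the kind of request an Ollama-backed provider sends.
// Ollama's chat endpoint lives at {baseUrl}/api/chat.
const baseUrl = 'http://localhost:11434';
const endpoint = `${baseUrl}/api/chat`;

const payload = {
  model: 'qwen3:8b',
  stream: false,
  messages: [
    { role: 'system', content: 'You are a helpful assistant.' },
    { role: 'user', content: 'What is the capital of France?' },
  ],
};

// Sending it directly (requires Ollama running locally):
// const res = await fetch(endpoint, {
//   method: 'POST',
//   body: JSON.stringify(payload),
// });
// const data = await res.json(); // data.message.content holds the answer
```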
## Adding Tools

Tools give your agent superpowers. Here’s a simple web search tool:
```ts
import { Agent, OllamaProvider, defineTool } from '@ai-gent/core';

const searchTool = defineTool({
  name: 'web_search',
  description: 'Search the web for information',
  parameters: {
    query: { type: 'string', description: 'Search query' },
  },
  execute: async ({ query }) => {
    // Your search implementation here
    return `Results for: ${query}`;
  },
});

const agent = new Agent({
  provider: new OllamaProvider({ model: 'qwen3:8b' }),
  tools: [searchTool],
});
```
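To make the tool contract concrete without installing anything, here is a standalone sketch (plain TypeScript, no framework import) of how an agent loop typically dispatches a model's tool call to `execute`. The `ToolDef` type and the dispatch step are illustrative assumptions, not the framework's API:

```ts
// Illustrative tool shape mirroring the defineTool example above.
type ToolDef = {
  name: string;
  description: string;
  execute: (args: { query: string }) => Promise<string>;
};

const searchTool: ToolDef = {
  name: 'web_search',
  description: 'Search the web for information',
  execute: async ({ query }) => `Results for: ${query}`,
};

// A model that decides to use a tool emits a structured call like this; the
// agent loop looks the tool up by name, invokes execute with the parsed
// arguments, and feeds the result back to the model as an observation.
const toolCall = { name: 'web_search', arguments: { query: 'capital of France' } };

const tools = [searchTool];
const tool = tools.find((t) => t.name === toolCall.name);
const observation = tool ? await tool.execute(toolCall.arguments) : 'unknown tool';
console.log(observation); // "Results for: capital of France"
```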
## Using LiteLLM Router

For hybrid local/cloud routing:
```ts
import { Agent, LiteLLMProvider } from '@ai-gent/core';

const agent = new Agent({
  provider: new LiteLLMProvider({
    baseUrl: 'http://localhost:4000/v1',
    apiKey: 'your-litellm-key',
    model: 'local-general', // Routes via LiteLLM
  }),
});
```
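On the LiteLLM side, the `local-general` alias needs to be defined in the proxy's config file. A minimal sketch, assuming the alias routes to the same local Ollama model used above (the underlying model choice is an assumption):

```yaml
# config.yaml for the LiteLLM proxy — maps the 'local-general' alias
# to a local Ollama model
model_list:
  - model_name: local-general
    litellm_params:
      model: ollama/qwen3:8b            # assumed local target
      api_base: http://localhost:11434
```

Start the proxy with `litellm --config config.yaml --port 4000` so it serves the OpenAI-compatible endpoint at `http://localhost:4000/v1` that the provider above points to.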
## Next Steps

- Read about the Architecture to understand how agents work internally
- Check the API Reference for all available options