Zero to AI Agent in 10 Minutes: The Low-Code Path to Building Intelligent Systems
The Promise (and Problem) of AI Agents
AI agents are supposed to make our lives easier. But if you've tried building one recently, you know the reality:
- Day 1: "I'll just use this framework..."
- Day 3: Wrestling with configuration files
- Day 5: Debugging message queue connections
- Day 7: Still haven't written actual agent logic
Sound familiar?
We built Soorma because we were tired of spending 90% of our time on infrastructure and only 10% on the intelligent behavior we actually wanted to create.
The Firebase Moment for AI Agents
Remember when Firebase launched? Before Firebase, building a real-time app meant:
- Setting up a database server
- Configuring WebSocket connections
- Writing synchronization logic
- Managing authentication
- Scaling infrastructure
Firebase said: "What if you just... wrote your app logic?"
That's what Soorma does for AI agents. We give you the Control Plane (Registry, Event Bus, Memory, State Management) so you can focus on the Cognitive Logic (what your agents actually do).
Show Me: 10 Minutes to Your First Agent
Let's build a real agent system. Not a toy example: a production-ready setup with service discovery, event choreography, and persistent memory.
Step 1: Clone and Build (2 minutes)
Note: Docker images are not yet published. You need to build them from source first.
```bash
# Clone the repository
git clone https://github.com/soorma-ai/soorma-core.git
cd soorma-core

# Create and activate virtual environment
python -m venv .venv
source .venv/bin/activate  # On Windows: .venv\Scripts\activate

# Install the SDK
pip install soorma-core

# Set your OpenAI API key (required for Memory Service embeddings)
export OPENAI_API_KEY=your_key_here

# Build and start infrastructure containers (one-time setup)
soorma dev --build
```
This builds and starts your entire development environment:
- Registry Service for agent discovery
- Event Service for async communication
- Memory Service with vector search (uses OpenAI text-embedding-3-small)
- NATS JetStream for reliable messaging
- PostgreSQL with pgvector for semantic memory
All running in Docker. All talking to each other. All production-ready patterns.
💡 Pro Tip: Your code runs outside Docker on your host machine. Change a file, save, and see results instantly. No rebuilding containers, no waiting. Full debugger support in VS Code or PyCharm.
📝 Note: The Memory Service requires `OPENAI_API_KEY` to generate embeddings. If you don't need memory features right now, you can skip this; the service will still start, but embedding operations will fail. Support for local embedding models is on the roadmap.
Step 2: Write Your Agent (5 minutes)
```python
from soorma import Worker
from soorma.context import PlatformContext

# Create a worker that processes customer inquiries
worker = Worker(
    name="support-agent",
    description="Handles customer support questions",
    capabilities=["customer-support"]
)

@worker.on_event("customer.inquiry.received")
async def handle_inquiry(event: dict, context: PlatformContext):
    """
    This function automatically triggers when an inquiry event arrives.
    No polling. No checking. Pure event-driven magic.
    """
    inquiry = event["data"]["message"]
    customer_id = event["data"]["customer_id"]

    # Your agent logic here - could be LLM, rules, or hybrid
    # (a sketch of generate_response follows below)
    response = await generate_response(inquiry)

    # Publish result - other agents can react to this
    await context.bus.publish(
        event_type="customer.inquiry.resolved",
        topic="support-results",
        data={
            "customer_id": customer_id,
            "response": response
        }
    )

# Start listening for events
worker.run()
```
Save this as `support_agent.py` and run it:
```bash
python support_agent.py
```
That's it. Your agent is now:
- ✅ Registered with the platform
- ✅ Listening for events via SSE (Server-Sent Events)
- ✅ Ready to collaborate with other agents
- ✅ Automatically reconnecting if it crashes
- ✅ Discoverable by other agents through the Registry
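One loose end in the code above: `generate_response` is left entirely to you. Here's a minimal sketch, assuming the official OpenAI Python client; the model and prompt are illustrative, not part of the Soorma SDK:

```python
# A minimal generate_response sketch, assuming the official OpenAI
# Python client. The model and prompt are illustrative choices.
from openai import AsyncOpenAI

client = AsyncOpenAI()  # reads OPENAI_API_KEY from the environment

async def generate_response(inquiry: str) -> str:
    completion = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        messages=[
            {"role": "system", "content": "You are a helpful support agent."},
            {"role": "user", "content": inquiry},
        ],
    )
    return completion.choices[0].message.content
```

Swap in Anthropic, a local model, or plain rules; the worker pattern doesn't care.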
The Secret Sauce: "Infra in Docker, Code on Host"
Here's why the developer experience feels so smooth:
Figure: Infrastructure runs in Docker, your agent code runs natively on your host machine
Why this matters:
| Traditional Containerized Dev | Soorma's Hybrid Approach |
|---|---|
| Change code → Rebuild image → Restart container → Test | Change code → Save → See results |
| Debug with logs and print statements | Debug with VS Code breakpoints |
| Services on random ports or complex networking | Services on localhost with standard ports |
| Heavy, slow iteration | Instant feedback loop |
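Because everything listens on localhost, you can sanity-check the stack with nothing but the standard library. A minimal sketch, assuming `soorma dev` maps NATS and PostgreSQL to their usual defaults (4222 and 5432); adjust the ports if your mapping differs:

```python
# check_infra.py - quick connectivity probe for the local infrastructure.
# Assumes the containers expose NATS and Postgres on their standard
# default ports on localhost; adjust if your setup differs.
import asyncio

SERVICES = {
    "NATS JetStream": ("localhost", 4222),  # NATS default port (assumed mapping)
    "PostgreSQL": ("localhost", 5432),      # Postgres default port (assumed mapping)
}

async def probe(name: str, host: str, port: int) -> None:
    try:
        _, writer = await asyncio.wait_for(
            asyncio.open_connection(host, port), timeout=2
        )
        writer.close()
        await writer.wait_closed()
        print(f"[up]   {name} on {host}:{port}")
    except (OSError, asyncio.TimeoutError):
        print(f"[down] {name} on {host}:{port}")

async def main() -> None:
    await asyncio.gather(*(probe(n, h, p) for n, (h, p) in SERVICES.items()))

asyncio.run(main())
```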
Level Up: Add a Planner (The Brain)
Workers are great for specific tasks. But who decides which worker to call? That's where a Planner comes in.
```python
from soorma import Planner
from soorma.ai import EventToolkit
from soorma.context import PlatformContext

planner = Planner(
    name="orchestrator",
    description="Coordinates multi-step workflows"
)

@planner.on_event("workflow.start")
async def orchestrate(event: dict, context: PlatformContext):
    goal = event["data"]["goal"]

    # Discover what agents are available RIGHT NOW
    async with EventToolkit(context.registry_url) as toolkit:
        available_events = await toolkit.discover_actionable_events()

    # Let the LLM decide what to do next (see the sketch below)
    next_action = await llm_decide(goal, available_events)

    # Publish the decision - triggers the right worker
    await context.bus.publish(
        event_type=next_action["event_type"],
        topic=next_action["topic"],
        data=next_action["payload"]
    )

planner.run()
```
This is Autonomous Choreography:
- Planner asks Registry: "What can I do?"
- LLM reasons: "Given these options, do X"
- Event triggers the right worker
- Worker does its job and publishes result
- Planner receives result and decides next step
No hardcoded workflows. Add a new worker? The planner discovers it automatically. The system adapts.
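As with `generate_response`, the `llm_decide` helper is yours to implement. A minimal sketch, assuming the OpenAI client's JSON mode; the prompt shape, and the expectation that the reply carries `event_type`, `topic`, and `payload` keys, simply mirror how `orchestrate` consumes the result:

```python
# A sketch of llm_decide using the OpenAI client's JSON mode.
# The prompt format and model are assumptions; swap in your own stack.
import json
from openai import AsyncOpenAI

client = AsyncOpenAI()

async def llm_decide(goal: str, available_events: list) -> dict:
    completion = await client.chat.completions.create(
        model="gpt-4o-mini",  # illustrative model choice
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Pick the next action toward the goal. Respond as JSON "
                    'with keys "event_type", "topic", and "payload".'
                ),
            },
            {
                "role": "user",
                "content": f"Goal: {goal}\n"
                           f"Available events: {json.dumps(available_events, default=str)}",
            },
        ],
    )
    return json.loads(completion.choices[0].message.content)
```

JSON mode keeps the decision machine-parseable, so the planner can publish it without fragile string parsing.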
Use Your Favorite AI Coding Tool
Whether you're using Cursor, GitHub Copilot, Windsurf, or Codeium, Soorma works seamlessly:
1. Context-Aware Assistance
Tell your AI tool:
"I'm building a Soorma agent that analyzes customer sentiment. Use the Worker pattern and publish to 'analytics.sentiment' topic."
The simple, clear patterns make it easy for AI assistants to generate correct code.
2. Architecture as Documentation
Our ARCHITECTURE.md file is designed for AI consumption. Add it to your AI tool's context:
```text
# In Cursor/Windsurf
@ARCHITECTURE.md help me build a new agent for...
```
The AI will understand:
- The DisCo pattern
- Event registration
- Service discovery
- Memory integration
3. Example-Driven Development
Check out our examples:
- Hello World: The basics in 50 lines
- Research Advisor: Advanced patterns with dynamic choreography
Copy, modify, ship. That's the workflow.
Pro Tips for AI-Assisted Development:
- Start with the event definition:

```python
# Ask your AI: "Create an event definition for X"
EVENT = EventDefinition(
    event_name="...",
    description="...",  # AI fills this in
    purpose="...",      # AI explains the "why"
    topic="...",
    payload_schema={...},
)
```

- Use type hints everywhere - AI tools love them:

```python
async def handle(event: dict, context: PlatformContext) -> None:
    # AI knows what's available in 'context'
    ...
```

- Leverage the SDK's auto-discovery:

```python
# AI can query Registry structure
agents = await context.registry.discover_agents()
```
Real-World Use Cases (That Actually Work)
1. Research Assistant (5-Minute Setup)
```bash
cd examples/research-advisor
pip install -e .
export OPENAI_API_KEY=your_key

# Start all agents and the interactive client
bash start.sh
# This starts the planner, workers, and client in one command
# Then just type your question when prompted!
```
Agents autonomously:
- Search the web
- Validate facts
- Draft a report
- Check for hallucinations
All coordinated through events. No hardcoded logic.
2. Customer Support System
```python
# Email Monitor Agent
@worker.on_event("email.received")
async def triage_email(event, ctx):
    await ctx.bus.publish("support.triage.complete", ...)

# Sentiment Analyzer Agent
@worker.on_event("support.triage.complete")
async def analyze_sentiment(event, ctx):
    await ctx.bus.publish("support.sentiment.analyzed", ...)

# Response Generator Agent
@worker.on_event("support.sentiment.analyzed")
async def draft_response(event, ctx):
    await ctx.bus.publish("support.response.ready", ...)
```
Each agent is 20-30 lines. They don't know about each other. They just react to events. That's the power of choreography.
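To make one link concrete, here's `analyze_sentiment` filled in using the publish signature from the first worker example. The keyword heuristic, the `support-pipeline` topic name, and the assumption that triage forwards the email body under a `message` key are all stand-ins, not prescribed by the SDK:

```python
from soorma.context import PlatformContext

@worker.on_event("support.triage.complete")
async def analyze_sentiment(event: dict, ctx: PlatformContext):
    message = event["data"]["message"]  # assumes triage forwards the email body

    # Stand-in heuristic - swap in an LLM or classifier here
    negative_words = {"angry", "broken", "refund", "terrible"}
    sentiment = "negative" if negative_words & set(message.lower().split()) else "neutral"

    await ctx.bus.publish(
        event_type="support.sentiment.analyzed",
        topic="support-pipeline",  # illustrative topic name
        data={**event["data"], "sentiment": sentiment},
    )
```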
3. Code Review Bot
```python
@worker.on_event("github.pull_request.opened")
async def review_code(event, ctx):
    pr_id = event["data"]["pr_id"]
    repo = event["data"]["repo"]
    # Fetch PR diff
    # Run it through an LLM to produce `analysis`
    # Store findings in memory
    await ctx.memory.store_semantic(
        content=analysis,
        metadata={"pr_id": pr_id, "repo": repo}
    )
    await ctx.bus.publish("code.review.complete", ...)
```
Uses memory service for context across PRs. Event-driven, so it scales to 1000s of repos.
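Reading that memory back on the next PR might look roughly like this. Note that `search_semantic` and its parameters are hypothetical names here (only `store_semantic` appears above); check the SDK for the actual query API:

```python
# Hypothetical sketch: pull prior findings for the same repo before
# reviewing. search_semantic and its parameters are assumed names;
# only store_semantic is shown in the example above.
@worker.on_event("github.pull_request.opened")
async def review_with_history(event, ctx):
    repo = event["data"]["repo"]
    past_findings = await ctx.memory.search_semantic(  # assumed API
        query=f"recurring review findings in {repo}",
        metadata_filter={"repo": repo},                # assumed parameter
    )
    # Feed past_findings into the LLM prompt alongside the new diff
```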
Deploy to Production (Coming Soon)
Currently, Soorma is optimized for local development to gather feedback from early adopters. We're focused on perfecting the developer experience first.
What's available now:
- ✅ Full-featured local development environment (`soorma dev`)
- ✅ Production-ready service architecture
- ✅ Docker Compose configurations
Coming soon:
- 🚧 Production Docker Compose templates with health checks and monitoring
- 🚧 Kubernetes Helm charts for cloud deployment
- 🚧 One-command deployment to cloud providers
- 🚧 Managed Soorma Cloud offering
For now, you can manually deploy services to your production environment using the same Docker containers. We'd love your feedback on what deployment patterns matter most to you!
We Want Your Feedback
Soorma is in early preview and we're actively shaping the developer experience based on real feedback.
What we'd love to know:
- Does the hybrid development pattern (infra in Docker, code on host) work for you?
- What deployment patterns do you need? (Docker Compose? Kubernetes? Serverless?)
- What examples would help you most? (Your specific use case?)
- What pain points are we missing?
Join the conversation:
- 💬 GitHub Discussions - Chat with the team and other developers
- ⭐ Star us on GitHub - Follow development
- 🐛 Open an Issue - Request features or report bugs
- 🎯 Join the Waitlist - Get updates on new releases
We're building this in the open, and your input directly shapes what we build next.
The Philosophy: Low-Code, Not No-Code
We're not building a drag-and-drop agent builder. Those tools are great for demos but terrible for production.
Low-code means:
- ✅ Minimal boilerplate
- ✅ Clear abstractions
- ✅ Escape hatches everywhere
- ✅ Standard Python, no DSLs
You're still in control. We just removed the boring parts:
- ❌ No message queue configuration
- ❌ No service discovery implementation
- ❌ No state management code
- ❌ No deployment scripts
Get Started in 3 Minutes
```bash
# Install
pip install soorma-core

# Initialize a project
soorma init my-agent --type worker
cd my-agent

# Start infrastructure
soorma dev

# Run your agent
python -m my_agent.agent
```
That's all it takes to join the DisCo revolution.
Next Steps
- Read the Architecture Guide - Understand the patterns
- Try the Hello World Example - 5-minute quickstart
- Build the Research Advisor - Advanced patterns
- Join the Community - We're building in the open
The Bottom Line
Building AI agents should feel like building a web app in 2025: fast, intuitive, and focused on your logic, not the plumbing.
With Soorma, you get:
- 🚀 10-minute setup (not 10 days)
- 🎯 Event-driven by default (scalable from day 1)
- 🔌 Works with your tools (Cursor, Copilot, Windsurf)
- 🔓 Open source, self-hostable (your data, your control)
- 🧩 Framework agnostic (bring your own LLM library)
Stop fighting infrastructure. Start building intelligence.
Star us on GitHub • Join the Waitlist
Built by developers who were tired of YAML hell and wanted to write code again.