17 points | by gwillen85 16 hours ago
  <agentml xmlns="github.com/agentflare-ai/agentml" datamodel="ecmascript">
    <state id="respond">
      <openai:generate model="grok-4-fast-reasoning"
                       promptexpr="`continue: ${conversationHistory(10)}`"
                       location="_event"/>
      <transition event="done" target="send"/>
    </state>
    <state id="send">
      <send event="output" data="_event.data"/>
      <transition target="respond"/>
    </state>
  </agentml>
The problem: LLM agents are flaky, locked to specific frameworks, and nearly impossible to debug or audit.
The fix: Declare agent behavior in XML using state machines. State transitions are explicit, outputs are schema-bound, execution is traceable.
Key features:
* No hallucinated tool calls (structured outputs only)
* Built-in memory (SQLite + graph storage)
* 80% fewer tokens via runtime snapshots
* CLI: amlx validate, amlx run
* Swap models freely (OpenAI, Grok, Ollama)
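Swapping providers should then come down to changing the generate element in the state machine, since the rest of the definition (states, transitions, sends) stays the same. A hypothetical sketch based on the snippet above, targeting Ollama instead of OpenAI; the `ollama:generate` element name and the `llama3` model value are my assumptions, not confirmed AgentML syntax:

```xml
<agentml xmlns="github.com/agentflare-ai/agentml" datamodel="ecmascript">
  <state id="respond">
    <!-- assumed: provider-specific generate element, analogous to openai:generate -->
    <ollama:generate model="llama3"
                     promptexpr="`continue: ${conversationHistory(10)}`"
                     location="_event"/>
    <transition event="done" target="send"/>
  </state>
  <state id="send">
    <send event="output" data="_event.data"/>
    <transition target="respond"/>
  </state>
</agentml>
```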
Install: curl -fsSL sh.agentml.dev | sh
Run: amlx run chat.aml
Runtime: Go/WASM (agentmlx). Coming soon: LangGraph export, Python SDK.
GitHub: https://github.com/agentflare-ai/agentml
Docs + Demo: https://www.agentml.dev/
What's your biggest agent pain point: framework lock-in, debugging, or compliance?