rlm-workflow: Recursive Language Models for coding agents
Installation
npx skills add https://github.com/doubleuuser/rlm-workflow --skill rlm-workflow
skills.sh
https://skills.sh/doubleuuser/rlm-workflow/rlm-workflow
GitHub
https://github.com/doubleuuser/rlm-workflow
Since the Recursive Language Models[0] paper demonstrated a method of increasing effective context length to 10M tokens by using sub-agents to move information from the context window to an information store outside the chat, there have been a number of takes on how to put this into practice in development workflows. Some even go as far as storing entire session contexts in a database for later retrieval, to preserve the reasoning behind changes that were made.
rlm-workflow is yet another take, this time with a slightly different angle:
Important information like requirements, codebase analysis and implementation plans should not be passed through the chat in the first place. Chat is effectively a CLI and should be used for invocations and commands, not for passing information.
rlm-workflow is modelled after a regular kanban workflow, from requirement to implementation plan to testing and manual QA. The workflow is sequential and phased: each phase outputs one markdown doc and takes the previous phases' docs as input. Each phase is gated on fulfilling criteria defined in the previous phase, and at the end of a phase its output docs are locked.
The user first creates the 00-requirements.md doc in an RLM folder, then invokes the workflow in chat. The workflow runs until the manual QA stage, where it waits for user approval before continuing. After finishing an RLM run, the agent updates DECISIONS.md, a ledger of previously implemented requirements, their whys and whats, and links to the respective RLM docs. It also updates STATE.md, an overview of the app's current state.
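Kicking off a run might look like the following sketch. The folder and file names follow the layout shown below; the requirement contents and the exact chat invocation phrase are illustrative assumptions, not part of the skill.

```shell
# Create the RLM folder for a new requirement (naming per the rlm-workflow layout)
mkdir -p rlm/00-my-first-requirements

# Write the user-authored requirements doc that seeds the workflow
cat > rlm/00-my-first-requirements/00-requirements.md <<'EOF'
# Requirements: dark mode toggle
- Add a dark mode toggle to the settings page
- Persist the user's choice across sessions
EOF

# Then invoke the workflow in chat, e.g.:
#   "Run rlm-workflow on rlm/00-my-first-requirements"
```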
Concretely, this is what your repo will look like:
rlm/00-my-first-requirements/
- 00-requirements.md (user-created)
- 01-as-is.md
- 02-to-be.md
- 03-implementation-summary.md
- 04-manual-qa.md (test cases are pre-defined; the user enters pass/fail and notes in the doc)
+ /addenda/ if needed
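The sequential gating over this layout can be sketched as a simple check: a phase's doc may only be produced once every earlier phase's doc exists. The script below is an illustration of the idea, not part of the skill itself.

```shell
#!/bin/sh
# Sketch of the sequential phase gate: the next phase to run is the
# first one whose output doc does not yet exist in the run folder.
RUN_DIR="rlm/00-my-first-requirements"
PHASES="00-requirements.md 01-as-is.md 02-to-be.md 03-implementation-summary.md 04-manual-qa.md"

next_phase() {
  for doc in $PHASES; do
    if [ ! -f "$RUN_DIR/$doc" ]; then
      echo "$doc"   # first missing doc = the phase to run next
      return
    fi
  done
  echo "done"       # all phase docs present; update STATE.md / DECISIONS.md
}

next_phase
```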
To summarize:
1. Specs are never passed through the chat, so they do not suffer from context rot
2. Work is always done from locked docs, so it cannot suffer from degradation
3. The workflow is self-documenting and easily human-readable; it can also be used to generate information for non-technical stakeholders
4. There is no need to index the codebase into a database. The RLM docs provide progressive disclosure and point the model in the right direction, which should significantly reduce token usage
5. In my simple test, the workflow improves both quality and time to success for complex requirements
Yes, it is essentially a waterfall workflow, but the agent iterates within each phase before passing it. Iteration with the user happens in the QA phase, where you will normally discover edge cases and the like. You can add new requirements as addenda docs, or ask the agent to do so, and it will implement them according to the workflow.
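Recording an edge case discovered during QA as an addendum might look like this (the file name and contents are illustrative):

```shell
# Record an edge case found during manual QA as an addendum;
# the agent then implements it through the same phased workflow.
mkdir -p rlm/00-my-first-requirements/addenda
cat > rlm/00-my-first-requirements/addenda/01-empty-state.md <<'EOF'
# Addendum: empty-state handling
- The settings page should render a sensible default when no
  preference has been saved yet
EOF
```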
What about the chat sessions? Forget about them. Instructions don't matter, only outcomes.
rlm-workflow simulates a standard kanban workflow with distinct phases: requirements, codebase analysis, implementation plan, implementation summary, verification, manual QA of the implementation, and finally updating the global repo artifacts (STATE.md and DECISIONS.md) that document the codebase.
The benefits of using rlm-workflow for assisted engineering include improved traceability through workflow and global docs, reduced token usage, reduced context rot, improved accuracy and code quality, and improved speed.
[0]