# After 12 Years of Maintaining SRS, I Let AI Run the Project
⚠️ Note: This is a transcript from a specific date. Information may be outdated. Do not treat as authoritative — verify against current codebase and documentation.
After several months of experimentation, I found that AI — powered by Augment — can effectively manage the SRS open-source project end to end, from issues and code to testing, bug fixes, and community support.
However, this does not work automatically — AI still needs proper context, including clear guidelines, correctly configured ignore files, and a repository that gathers all relevant documentation while excluding unrelated content like third-party libraries.
Moreover, AI has clear limitations and side effects — it cannot handle everything, and achieving the expected results still requires following specific patterns and workflows.
Within several months, AI helped reduce open issues from ~200 to ~10, increased test coverage from ~50% to ~88%, and enabled a largely end-to-end AI-assisted contribution workflow, from issue analysis to feature delivery and community communication. We achieved measurable improvements across communication, quality, maintenance, feature delivery, and workflow efficiency.
Global Communication and Documentation at Scale: AI was first introduced one to two years ago to handle translation. All issue titles, descriptions, discussions, pull requests, and documentation were automatically translated into English.
This removed language barriers and allowed contributors and users worldwide to participate more effectively. Over time, English became the consistent working language of the project without increasing the burden on maintainers.
Test Coverage and Testability Improvements: With AI assistance, test coverage improved from about 50% to 88%. Most new unit tests were generated by AI.
More importantly, AI helped refactor code to be more testable — improving modularity, enabling mocking, and isolating components. This allowed AI to write meaningful unit tests at the class and function level, rather than superficial tests.
Issue Analysis and Backlog Reduction: Historically, SRS accumulated around 1,600 total issues, with roughly 200 open issues at any time.
AI was used to analyze issue reports — especially crash dumps and failure logs — identify root causes, write reproduction tests, or provide precise reproduction steps when tests were not feasible. AI could also run servers, benchmarks, and black-box tests to verify fixes. As a result, open issues dropped from ~200 to ~10, a significant improvement in project stability.
Feature Development and Protocol Enhancements: AI also contributed to real feature development, not just maintenance.
A notable example is IPv6 support. Previously, IPv6 was only partially supported (mainly RTMP). With AI assistance, IPv6 support was completed across WebRTC, HTTP API, HTTP streaming, SRT, RTSP, and GB protocols. AI also helped deliver other features, demonstrating its ability to handle cross-protocol and non-trivial development work.
AI as a Knowledge Base and Support Engineer: Earlier AI bots that indexed only documentation were limited because they lacked awareness of the code and recent changes.
By indexing code, documents, tests, and changelogs, AI now has comprehensive and up-to-date knowledge of SRS. It can reliably answer questions about feature support, usage, and version history, effectively acting as both a maintainer assistant and a user support engineer.
A Closed-Loop, AI-Assisted Contribution Workflow: AI fundamentally improved the contribution workflow. For both bug fixes and new features, the process now follows a test-driven, end-to-end loop.
AI helps clarify requirements, writes unit tests first, assists with implementation, runs verification tests, drafts pull request titles and descriptions, updates documentation, and even prepares community announcements. Human review remains essential, but the overall cycle is faster, more consistent, and easier to sustain.
Before settling on Augment, I experimented with several AI tools and agents to manage the SRS project, including GitHub Copilot, Cursor, and Claude Code. Each of these tools has its own design philosophy, workflow, and strengths, but in practice, only Augment proved suitable for managing a large, long-lived open-source project like SRS.
GitHub Copilot and Claude Code: My experience with GitHub Copilot and Claude Code was similar. While both tools were helpful for writing small pieces of code, they often failed to respect the latest SRS codebase and documentation. Instead, they sometimes relied on common or generic knowledge, or even outdated assumptions about the project. As a result, they occasionally generated code that did not match the actual design or current state of SRS, and in some cases exhibited hallucinations.
Cursor: Cursor exposed a different but equally critical problem. SRS is not a greenfield project created by AI from scratch; it is a mature system maintained by human engineers for more than a decade. Over time, it has accumulated a large number of third-party libraries and tools — such as OpenSSL, LibSRT, SRS Bench, and numerous Go dependencies. Without an effective way to exclude this third-party code from its context, Cursor's view of the actual SRS codebase was diluted by millions of lines of irrelevant source.
Augment: Augment stood out because it solved several fundamental problems that are critical for non-greenfield open-source projects.
First, Augment allowed me to explicitly define the SRS codebase by ignoring irrelevant files and directories, such as OpenSSL and other bundled libraries. This ability to limit and curate the AI’s knowledge base is essential. While SRS itself consists of roughly 150k lines of code plus hundreds of documents, its third-party dependencies can reach millions of lines. Without exclusion, meaningful reasoning becomes impossible.
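For illustration, the exclusions might look something like the fragment below. The exact filename and syntax depend on Augment's configuration mechanism, and these paths are hypothetical approximations of a typical SRS-style layout, so treat this as a sketch rather than a working config:

```
# Exclude bundled third-party code and build output from AI indexing
# (hypothetical paths; verify against the actual repository layout)
trunk/3rdparty/
trunk/objs/
**/openssl*/
**/srt*/
vendor/
```

The effect is that the index covers only the roughly 150k lines of SRS code plus its documentation, not the millions of lines of dependencies.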
Second, Augment provides a context indexing and selection engine. It indexes both code and documentation and clearly shows what has been indexed. For each query, it dynamically selects the most relevant context depending on the task — whether it is answering usage questions, developing new features, fixing bugs, or re-investigating reported issues. In my experience, this context engine works extremely well, with very few hallucinations.
Third, Augment follows project guidelines very strictly. For SRS, this is crucial. The project uses C++98 only, does not support newer C++ standards, and relies on a custom smart pointer implementation rather than the C++11 standard library. These constraints are non-negotiable, and Augment consistently respected them. While guideline support is not unique to Augment, strict adherence is a key factor that made AI-driven management viable in practice.
For these reasons, after trying multiple tools, Augment was the only solution that worked reliably for managing SRS at the project level — not just writing code, but understanding context, respecting constraints, and operating within a complex, real-world open-source environment.
While Augment proved to be the only practical solution for managing the SRS open-source project at scale, it also exposed several important limitations and risks. These weaknesses are not flaws of implementation, but rather structural trade-offs that anyone adopting AI-driven project management must understand.
Context Engine vs. Code Privacy: Augment relies on indexing the full codebase and documentation to ensure it always works with the latest and most accurate context. This is ideal for open-source projects, where code is already public.
However, this approach does not work well for commercial or private projects. Uploading proprietary code to a third-party server for indexing introduces security and confidentiality risks that many organizations cannot accept. As a result, Augment is far better suited for open-source projects than for closed-source or sensitive systems.
Cost, Token Usage, and Scalability: Cost is another practical concern. At around $200 per month, Augment may be acceptable for a company but is already expensive for individual maintainers.
More importantly, cost scales with project size, number of repositories, and model choice. Newer models can be significantly more expensive, and because the context engine dynamically decides what to load, users have limited control over token consumption. This makes costs harder to predict and, in some cases, difficult to control.
Hallucination in Issue Handling: Hallucinations are rare in practice, but they become risky when handling issues. Issue reports often contain misleading descriptions, false assumptions, or outdated information.
By default, AI tends to trust the input it receives. When an issue itself is wrong, the AI may produce confident but incorrect analysis. This problem cannot be fully eliminated, because it originates from human input. Therefore, verification and review must always be part of the workflow, especially for bug reports and investigations.
The Human Growth Problem: The most subtle risk is its impact on human development. A good open-source project should not only deliver features, but also improve the design, architecture, and reasoning skills of its maintainers.
Because Augment’s context engine can handle poorly organized or inconsistent code, it may reduce the incentive to maintain clean structure, strong abstractions, and clear design patterns. Over time, this can lead to code that is optimized for AI understanding rather than human understanding.
If maintainers fully trust AI output without careful review and intentional design, the project may become increasingly dependent on AI and harder for humans to maintain. From a long-term perspective, this is a serious risk that requires discipline and conscious control.
While Augment works well today, my long-term goal is not to depend on a single AI product. Instead, I want to explore general patterns and approaches that allow software engineers to use AI tools effectively — across both open-source and commercial projects.
Avoiding Tool Lock-In: Relying on one AI product inevitably limits flexibility. Even if a tool works well now, its constraints become your constraints.
My goal is to separate methodology from tooling — so that workflows remain valid even as AI models and products change. Ideally, switching tools should not require redesigning the entire development process.
Reclaiming the Context Engine: Inspired by The New Calculus of AI-based Coding, one idea I want to explore is manually managing context, instead of fully relying on an AI product’s built-in context engine.
This means designing code structure, documentation, and project layout in a way that is easier for AI to search, load, and reason about — while still being understandable and maintainable for humans. Context should be an explicit design artifact, not an opaque byproduct.
Designing for AI First, but Not AI Only: Before AI, good software design was optimized primarily for human understanding. In the AI era, this assumption may need to evolve.
I want to explore AI-friendly system design, while still preserving human readability, architectural clarity, and long-term maintainability. AI should benefit from structure — but humans must never lose the ability to understand or control the system.
Re-evaluating Other AI Tools: With better context design, I plan to revisit other AI tools such as GitHub Copilot, Claude Code, Amazon Q, and others.
If these tools start to work well under improved structure, it would suggest that the key problem is how context is presented. This would make the approach portable across projects and environments.
An Ongoing Experiment: I fully acknowledge that I may be wrong. Future AI tools or models may prove that manual context design is unnecessary.
For now, this is an experiment: discovering how software engineers can stay in the loop, grow their skills, and use AI as a force multiplier. I plan to continue documenting what I learn and sharing the takeaways with the community.