vvinhnx · about 5 hours ago · 1 comment


Hi HN, I built VT Code, a semantic coding agent. It supports the SOTA providers (Anthropic, OpenAI, Gemini, Codex) as well as open-source models, with local inference via LM Studio and Ollama (experimental). It is Agent Skills, Model Context Protocol (MCP), and Agent Client Protocol (ACP) ready. Semantic context understanding is powered by ast-grep for structural code search and ripgrep for fast text search.
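
To give a flavor of how the two tools divide the work, here is a minimal sketch of an agent shelling out to both (illustrative only, not the actual VT Code integration; the pattern and path are made up, and the flags are the standard CLI ones):

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Structural search: ast-grep matches an AST pattern, so it finds
        // every `.unwrap()` call regardless of formatting or receiver.
        let structural = Command::new("ast-grep")
            .args(["run", "--pattern", "$X.unwrap()", "--lang", "rust", "--json", "src/"])
            .output()?;
        println!("{}", String::from_utf8_lossy(&structural.stdout));

        // Text search: ripgrep does a fast lexical scan of the same tree.
        let textual = Command::new("rg")
            .args(["--json", "unwrap", "src/"])
            .output()?;
        println!("{}", String::from_utf8_lossy(&textual.stdout));
        Ok(())
    }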

I built VT Code in Rust on Ratatui. The architecture and agent loop are documented in the README and on DeepWiki.
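
As a rough sketch of the loop's general shape (generic, with made-up stand-ins for the provider call and tool execution; see the docs for the real implementation):

    // Minimal agent loop: the model either answers in text or requests a
    // tool; tool results are appended to the transcript and the model is
    // called again. `call_model` and `run_tool` are hypothetical stubs.
    enum ModelReply {
        Text(String),
        ToolCall { name: String, args: String },
    }

    // Hypothetical stand-in for a real provider call.
    fn call_model(history: &[String]) -> ModelReply {
        if history.iter().any(|m| m.starts_with("tool ")) {
            ModelReply::Text("done".into())
        } else {
            ModelReply::ToolCall { name: "rg".into(), args: "TODO src/".into() }
        }
    }

    // Hypothetical stand-in for tool execution (e.g. spawning ripgrep).
    fn run_tool(name: &str, args: &str) -> String {
        format!("ran {name} {args}")
    }

    fn agent_loop(mut history: Vec<String>) -> String {
        loop {
            match call_model(&history) {
                ModelReply::Text(answer) => return answer,
                ModelReply::ToolCall { name, args } => {
                    let result = run_tool(&name, &args);
                    history.push(format!("tool {name}: {result}"));
                }
            }
        }
    }

    fn main() {
        println!("{}", agent_loop(vec!["user: find TODOs".into()]));
    }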

Repo: https://github.com/vinhnx/VTCode

DeepWiki: https://deepwiki.com/vinhnx/VTCode

Happy to answer questions!

I believe coding harnesses should be open, and everyone should be able to choose how they prefer to work in this era of agentic engineering.

Discussion (1 comment)

dnaranjo · about 4 hours ago
This is a thoughtful stack. A few observations and questions from someone who's been building with similar tooling.

The ast-grep + ripgrep combination for semantic context is the right architectural choice. Pure embedding-based retrieval tends to fail on codebases with non-trivial inheritance hierarchies or polymorphism, where structural search beats semantic similarity. I'd be curious how you're balancing the two: does ast-grep run first as a structural filter, with ripgrep for content matching, or are they used independently depending on the query type?
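
To make the question concrete, here is the prefilter ordering I have in mind, sketched with a made-up pattern (ripgrep narrows the file set cheaply with -l, then ast-grep confirms structure only in those candidates; VT Code may well do the reverse or keep them independent):

    use std::process::Command;

    fn main() -> std::io::Result<()> {
        // Step 1: cheap lexical prefilter. `-l` prints only the names of
        // files containing a match.
        let rg = Command::new("rg")
            .args(["-l", "unwrap", "src/"])
            .output()?;
        let files: Vec<String> = String::from_utf8_lossy(&rg.stdout)
            .lines()
            .map(|l| l.to_string())
            .collect();

        // Step 2: structural confirmation, restricted to the candidates.
        for file in &files {
            let sg = Command::new("ast-grep")
                .args(["run", "--pattern", "$X.unwrap()", "--lang", "rust", file])
                .output()?;
            print!("{}", String::from_utf8_lossy(&sg.stdout));
        }
        Ok(())
    }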

On the multi-provider abstraction: Anthropic, OpenAI, and Gemini have meaningfully different tool-calling schemas, and Codex (the CLI tool) adds another layer because it wraps OpenAI's API but with its own conventions. How are you handling the schema translation? Most "multi-provider" implementations I've seen end up with provider-specific code paths that defeat the abstraction.
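
For concreteness, the shape I usually end up with is one neutral tool definition rendered per provider. The field names below match the public API docs as I know them, though I'm not claiming this is how VT Code does it (requires serde_json):

    use serde_json::{json, Value};

    // A provider-neutral tool definition; all three providers accept a
    // JSON Schema for the parameters, they just nest it differently.
    struct ToolDef {
        name: &'static str,
        description: &'static str,
        parameters: Value,
    }

    fn to_openai(t: &ToolDef) -> Value {
        json!({ "type": "function", "function": {
            "name": t.name, "description": t.description,
            "parameters": t.parameters.clone() }})
    }

    fn to_anthropic(t: &ToolDef) -> Value {
        json!({ "name": t.name, "description": t.description,
            "input_schema": t.parameters.clone() })
    }

    fn to_gemini(t: &ToolDef) -> Value {
        json!({ "function_declarations": [{
            "name": t.name, "description": t.description,
            "parameters": t.parameters.clone() }]})
    }

    fn main() {
        let tool = ToolDef {
            name: "read_file",
            description: "Read a file from the workspace",
            parameters: json!({ "type": "object",
                "properties": { "path": { "type": "string" } },
                "required": ["path"] }),
        };
        for v in [to_openai(&tool), to_anthropic(&tool), to_gemini(&tool)] {
            println!("{v}");
        }
    }

The harder half of the problem is the reverse direction: normalizing each provider's tool-call responses back into one internal representation.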

ACP support is interesting. I haven't seen many agents implement it yet, mostly MCP. Is your read that ACP is going to gain adoption, or is including both more about hedging?

The local inference angle (LM Studio, Ollama) matters for use cases where source code can't leave the network. Have you benchmarked which open models hold up reasonably for tool-calling-heavy workflows? In my experience most local models below 70B struggle with multi-turn tool use even when their raw code generation is decent.
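
For anyone wanting to check a local model themselves, here's the kind of smoke test I run against an OpenAI-compatible endpoint. The port and model name are assumptions about a default LM Studio setup (Ollama's compatible endpoint is typically on 11434), not anything VT Code-specific; it needs reqwest with the "blocking" and "json" features plus serde_json:

    use serde_json::json;

    fn main() -> Result<(), Box<dyn std::error::Error>> {
        let body = json!({
            "model": "qwen2.5-coder",  // hypothetical local model name
            "messages": [{ "role": "user",
                           "content": "List the Rust files in src/" }],
            "tools": [{ "type": "function", "function": {
                "name": "list_files",
                "description": "List files under a directory",
                "parameters": { "type": "object",
                    "properties": { "dir": { "type": "string" } },
                    "required": ["dir"] }
            }}]
        });

        // LM Studio's default local server; for Ollama, try
        // http://localhost:11434/v1/chat/completions instead.
        let resp = reqwest::blocking::Client::new()
            .post("http://localhost:1234/v1/chat/completions")
            .json(&body)
            .send()?
            .text()?;

        // A model that handles tool calling should come back with a
        // `tool_calls` entry here rather than free-form prose.
        println!("{resp}");
        Ok(())
    }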

Rust + Ratatui is a strong DX choice. Will check out the DeepWiki.