Discussion (50 Comments)
Claude Code's rendering UI was the first place where I realized this kind of TUI is more like a DOS or Borland UI system than a command-line interface.
I was poking around the CLAUDE_CODE_NO_FLICKER=1 setting when I realized what exactly this TUI is: layers of content drawn on top of each other with terminal escape codes.
Ended up reading Ink, the React renderer for the terminal:
https://github.com/vadimdemedes/ink
Fascinating how it ends up looking like WordPerfect or WordStar from the past instead of pixel-based graphics.
The usability for a vision-impaired user is about the same, though I remember braille pads for DOS tools (80x25) that worked better than all the screen readers that came later.
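As a rough illustration of how Ink produces that effect, here is a minimal sketch in TypeScript/TSX. The Status component and its contents are made up for illustration; render, Box, and Text are Ink's exported primitives. Ink lays components out with a flexbox-like engine and then erases and repaints its output region with terminal escape codes, rather than scrolling new lines like a plain CLI.

    import React from 'react';
    import { render, Box, Text } from 'ink';

    // Hypothetical status panel: Ink computes the layout, then redraws
    // its region of the terminal with escape sequences on each update,
    // which is what gives these TUIs their full-screen, DOS-era look.
    const Status = () => (
      <Box borderStyle="round" flexDirection="column" paddingX={1}>
        <Text color="green">build passed</Text>
        <Text dimColor>press q to quit</Text>
      </Box>
    );

    render(<Status />);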
They’re attempts at pretending to have Windows (etc.) GUIs in a terminal.
Same stuff people made for DOS when Windows wasn’t common or good enough yet.
I'm not surprised they're a disaster, or that they're built without understanding the capabilities of the terminal they're running on.
Actually, I think that is close to a good name for them: Terminal-based GUIs.
Some are pretty useful; for instance, I like lazygit as a simple dashboard/panel for a small repo (or when making small changes to a larger one). Some make me wonder what those who made them were smoking!
The less silly ones are handy when you are tinkering with a far-away machine and want something a little more interactive than CLI commands and stuff connected by pipes and scripts, but don't want to deal with the latency of GUI remoting. Some, though, are so badly thought out that they are slower than using a browser over long-distance X…
I get that maybe the constraints of terminals force the design of TUIs to be more focused on the purpose of the tool than on polish, but that's not a very compelling point to me.
But the terminal is just fundamentally the wrong basic abstraction on which to build a structured GUI; it just happens to require few enough bits over the wire that it actually works reasonably well over SSH, as opposed to pushing graphics.
If you're going to "run command, edit command, run command", performing the edits from the terminal you're running the commands in seems reasonable/intuitive. (In contrast, for tools like VSCode, I think it's more common for terminals to take up a fraction of the screen space rather than being switched to full screen. And then developers will say they need a huge monitor.)
It also seems that keyboard-driven programs are more commonly TUIs than GUIs, e.g. magit or lazygit. Or lazydocker. Or k9s.
- you can use them over SSH
- they're typically made with keyboard usage in mind, which is often an afterthought in a typical browser-based UI
- other GUI options are the browser (sandboxed, obvi; not good for lil personal tools), native (not dead simple, compared to TUI/browser/Electron), or something like Electron (no way lmao)
I don't seek out TUIs instead of other solutions. But it's so dang easy to pop open a new pane and run lazygit. And it makes you look really cool when people walk behind you.
If developers want to experiment with various UI configs, then let them, but keep a CUA in the background that can be called upon by machines and humans alike. (Unfortunately, ergonomics has never been a strong point for developers.)
They're a long way from web apps, far worse on most axes.
Clearly no one put thought into any of that. It was just “make terminal, but pretty”.
Back in the 90s, when most SAP systems switched from AS/400 terminals to Windows NT, people reported massive losses in productivity.
I've never worked on SAP, but my mother did. Basically, she went from a fully tabular, function-key-driven workflow to holding a mouse, moving around, and clicking a lot (tabbing and F-keys were lost for many functions).
She showed me how she could go ESC ESC F4 F3 TAB TAB and be across the whole system at super speed. And this was a terminal, not the actual system!
The short of the story is this:
Windows-based applications work best for discoverability and new users.
Terminal-based applications work best for faster, memory-based navigation and power users.
In fact, all successful applications for professionals/power users are built with fast paths in mind. Even Microsoft's ribbon, which gets a lot of hate for some reason, is an example of that: it's keyboard-driven, customizable, and discoverable at the same time.
I am surprised, though, that something like "turning off the cursor" enhances accessibility.
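For reference, "turning the cursor off" in a terminal maps to the standard DECTCEM escape sequences; a minimal TypeScript sketch, purely illustrative, with the redraw step left as a placeholder:

    // DECTCEM cursor visibility: TUIs often hide the cursor while
    // repainting so it doesn't dart around the screen, then show it
    // again at a meaningful position once the frame is complete.
    const HIDE_CURSOR = "\x1b[?25l";
    const SHOW_CURSOR = "\x1b[?25h";

    process.stdout.write(HIDE_CURSOR);
    // ... redraw the interface here ...
    process.stdout.write(SHOW_CURSOR);

Screen readers typically follow the terminal cursor, so whether a TUI hides it and where it finally parks it ends up mattering for accessibility.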
> It isn't fair to blame TUIs.
> The real problem is that pretty much the whole stack has a terrible AX story.
> First, most GPU-rendered terminal emulators don't engage with system-provided accessibility APIs AT ALL. Because text is GPU-rendered, AX tooling can't "read" it; it just shows up as an image. This applies to Kitty, Alacritty, and WezTerm. My own terminal, Ghostty, is AX-readable (on macOS), and so are others like iTerm2 and Terminal.app (which admittedly do it better than me; we have gaps to fill).
> Second, there are no terminal sequences or initiatives at all for TUIs to communicate AX information to the emulator, so the emulator itself can't do much more than display a blob of text to AX tooling. We need the equivalent of ARIA-style annotations but for terminal cells, runs, and regions. No such initiative exists. Even if TUIs do great things with the cursor, this is going to bite a lot of use cases.
> As an example of combining the above, I've been working on something with Ghostty where we integrate semantic prompt (OSC133) and AX APIs so that we can present each shell prompt, input, and command as structurally significant to AX tooling (rather than simply a text box where the cursor is somewhere else). This shows the importance of the relationship between terminal specs (OSC133), TUIs (which must emit OSC133), and terminal emulators (which must both understand OSC133 AND communicate it to AX APIs).
> The whole stack is rotten. And no one is earnestly trying to fix it (including me, I have limited time and I do my best but this is a WHOLE TOPIC that requires a huge amount of time and politicking the ecosystem and I don't have it, sorry).
Bonus: a simultaneously awesome and horrible reality is that AI is really helping to improve AX here. A lot of AI tooling uses/abuses AX APIs to make things happen. How is OpenAI reading your list of windows, typing into them, etc.? Accessibility frameworks! So a lot more apps are taking AX integration a lot more seriously since it's table stakes for AI using it. Sad that it requires that, but the glass-half-full view is that more software is doing it.
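For readers unfamiliar with OSC 133, here is a minimal sketch of the semantic-prompt markers described above, written in TypeScript purely for illustration; the echoed command and output are made up, and in practice these markers are emitted by a shell's prompt hooks rather than by an application.

    // OSC 133 semantic prompt markers (illustrative only).
    const mark = (code: string) =>
      process.stdout.write(`\x1b]133;${code}\x07`);

    mark("A");                          // prompt starts
    process.stdout.write("$ ");         // visible prompt text
    mark("B");                          // prompt ends, user input begins
    process.stdout.write("ls\n");       // echoed command (made up)
    mark("C");                          // command output begins
    process.stdout.write("file.txt\n"); // command output (made up)
    mark("D;0");                        // command finished, exit code 0

An emulator that recognizes these markers can expose each prompt, input, and output run to AX tooling as a distinct element rather than one flat grid of text, which is the kind of integration Ghostty's author describes above.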
The language of accessibility – Staffnet | ETH Zurich https://ethz.ch/staffnet/en/service/communication/digital-ac...