Discussion (62 Comments)
I used to believe this, but after working at a successful SaaS I have come to believe that correctness and unambiguity are not entirely necessary for successful software products.
It was a very sad realization that systems can be flaky as long as there are enough support people to solve edge-case problems, and that features can be delivered while breaking other features as long as enough users don't run into the middle of that Venn diagram, etc.
Fact is, it always comes down to economics: your software can afford to be exactly as broken and unpredictable as your users will tolerate while still paying money for it.
The overhead will get absurd: you'll end up with a 10x or more increase in engineers working on the system, all of them making slow progress while spending 90% of their time debugging, researching, or writing docs and holding meetings to work out this week's shared understanding of the underlying domain semantics - but progress they will make, the system will be kept running, and new features will be added.
If the system is valuable enough for the company, the economic math actually adds up. I've seen at least one such case personally - a system that survived decades and multiple attempts at redoing it from scratch, and keeps going strong, fueled by a massive amount of people-hours spent on meetings.
Adding AI to the mix today mostly just shifts the individual time balance towards more meetings (Amdahl's Law meets Parkinson's Law). But ironically, the existence of such systems, and the points made in the article, actually reinforce the case for AI being key to, if not improving this, then at least keeping it going: it'll help shorten the time to re-establish consensus on current global semantics, and update code at scale to stay consistent.
[Infinite screaming]
In the end, most challenges holding a business back from better code quality are organizational, not technical.
This is true. And I get sad every time it is used as an argument not to improve tooling. It feels like sort of a self-fulfilling prophecy: an organizational problem that prevents us from investing into technical improvements... is indeed an organizational problem.
In your example, even though the interface for those products is unstable (a UI that changes all the time, a slightly broken API), those products are coded in a language like C++ or Java, which benefits from compiler error checking. The seams where they connect with other systems are where they're unstable. That's the point of this blog post.
Example: Gambling is wrong but you can win big money.
Management and sales may not appreciate good software design and good code, but the next developer that has to work on the system will.
Obviously a lot of this you can piece together today; in fact, Snowflake itself does a lot of it. But the other part of the article makes me think they understand the even harder part of the problem in modern enterprises, which is that nobody has a clear view of the model they're operating under and how it interacts with parts of the business. It takes insane foresight and discipline to keep these things coherent, and the moment you are trying to integrate new acquisitions with different models you're in a world of pain. If you can create a layer to make all of this explicit - the models, the responsibilities, the interactions, and the incompatibilities that may already exist - then mediate the chaos with some sort of AI handholding layer (because domain experts and disciplined engineers aren't always going to be around to resolve ambiguities), then you can solve not just a huge technical problem but a much more complicated ecological one.
Anyway, whatever they're working on, I think this is the exact area you should focus on if you want to transform modern enterprise data stacks. Throwing AI at existing heterogeneous systems and complex tech stacks might work, but building from scratch on a system that enforces cohesion while maintaining agility feels like it's going to win out in the end. Excited to see what they come up with!
If this is the right framing, then the two systems aren't really competitors despite solving the same problem--they're going to appeal to fundamentally different developer sensibilities. Rama is for people who want to think like Jay Kreps or Martin Kleppmann: the event log is sacred, physical data layout is a first-class design decision, and the programmer earns the performance benefits by understanding the system deeply. Cambra (if these assumptions hold) will be for people who want to think like database users: describe what you want, let the optimizer figure out how, intervene only when necessary. These are both defensible positions and both have historical track records of working. SQL's history shows the declarative camp has ecosystem advantages once the optimizer is good enough; Kafka/Rama's history shows the log-centric camp has correctness and observability advantages for event-heavy domains.
In my opinion, a system that has been stable for years isn't 'mature' in a good sense. An exceptional system is one that can still change after many years in production.
I believe this is almost impossible to achieve for enterprise software, because nobody has an incentive to make the (huge) investment in long-term maintainability and changeability.
For me, consistent systematic naming and prefixes/suffixes to make names unique are a hint that a person is thinking about this or has experience with maintaining old systems. This has a huge effect on how well you can search, analyze, find usages, understand, replace, change.
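A tiny TypeScript sketch of what that buys you (all names here are hypothetical, just to illustrate the convention):

```typescript
// Hypothetical names following a systematic prefix convention: every
// identifier is unique across the codebase, so a plain-text search for
// "invoiceLineDiscountPct" finds every read, write, and test.
interface InvoiceLine {
  invoiceLineId: string;
  invoiceLineQty: number;
  invoiceLineUnitPriceCents: number;
  invoiceLineDiscountPct: number; // 0..100
}

// Contrast: names like `id`, `qty`, `price` collide with every other domain,
// so "find usages" degenerates into reading call sites one by one.
function invoiceLineTotalCents(line: InvoiceLine): number {
  const gross = line.invoiceLineQty * line.invoiceLineUnitPriceCents;
  return Math.round(gross * (1 - line.invoiceLineDiscountPct / 100));
}
```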
> Not sure if tools and technologies can solve accidental complexity.
... and then say
> For me, consistent systematic naming and prefixes/suffixes to make names unique are a hint that a person is thinking about this or has experience with maintaining old systems. This has a huge effect on how well you can search, analyze, find usages, understand, replace, change.
I have battle scars from refactoring legacy systems where my predecessors did _not_ consistently or uniquely name things and I would not have seen it through without my sidekick, the type checker!
Uh...
> Implementing it is more than I can do alone, which is why my cofounders, Daniel Mills and Skylar Cook, and I are starting Cambra. We are developing a new kind of programming system that rethinks the traditional internet software stack on the basis of a new model.
For example, say I develop some object (scene) in Godot Engine. It interacts with the environment using physics simulation, renders 3D graphics to the screen with some shaders and textures, and plays back audio from its 3D location.
I can send this scene to some other user of Godot, and it will naturally just work in their project, including colliding with their objects, rendering in their viewport (including lighting and effects), and the player will hear the location of the object spatially.
Of course there is much more you can do in Godot, too: network requests, event-driven updates, localization, cross-platform inputs, the list goes on. And all of these compose and scale in a manageable way as projects grow.
Plus the engine provides a common data and behavior backbone that makes it possible for a single project to have code in C++, C#, GDScript, and even other languages simultaneously (all of these languages talk with the engine, and the engine exposes state and behaviors to each language's APIs).
In fact, I've been thinking about making a Godot-inspired (or perhaps even powered) business application framework because it's just such a productive way of building complex logic and behavior in a way that is still easy to change and maintain.
So I imagine if Cambra can bring a similar level of composability for web & data software, it could dramatically improve the development speed and quality of complex applications.
It is kind of broken now, much thanks to using web applications (and applications that are basically just wrappers for web applications), but I don't know if I want to go back.
On one hand it was much easier when I could hack together a program that was good enough (since everything was the same bland grey).
On the other hand some programs certainly look nicer today.
And it has become easier to compose logic with solutions like Maven, NuGet and the various frontend package managers.
But yes, we lost drag and drop UI development, we lost consistency and we lost a lot of UX (at least temporarily).
Especially if it can be easy for non-technical people to build efficient UIs and databases (so they don't have to resort to spreadsheet contraptions), I think there's an opportunity here...
And while web pages can masquerade as desktop and mobile apps, why wouldn't games be allowed to do the same? Godot, for example, can do desktop multi-window, which something like Flutter (which is amazing in its own right) can't.
But yeah, someone needs to spend time and build out UI toolkits for Godot and sadly that's not really a long weekend undertaking.
Still! It's nice to dream from time to time and imagine a reality where we can either do some generic cookie-cutter UI because it's meant to get things done without much ceremony, or we can pull out all the stops and plop down a 3D scene to walk around the file system and shoot files to delete them. And yeah, I'm aware someone did a thing like that in VS Code with Three.js (I think?)[0], and for Flutter you can do something similar in a webview inside the app proper.
Yet somehow I would rather do those things inside Godot for reasons unknown to me.
[0] Found it: https://marketplace.visualstudio.com/items?itemName=brian-nj...
You're just comparing the wrong things. Yes, when you're locked into one environment, everything works together well. The moment you interact with outside systems, all hell breaks loose. If anything, what you're saying is just that platforms should have a much larger stdlib, or abstract platform differences properly (hint: this is only doable if you're a game engine and can afford to absolutely ignore _everything_ the OS does and just concern yourself with reinventing every wheel).
Not to say there's nothing good in the games side of things: a bunch of software could benefit from accepting that some systems like a big fat central message bus and singletons can be good when handled well.
In practice, most of the complexity comes exactly from what’s described here: every system has a rich internal model, but the moment data crosses a boundary, everything degrades into strings, schemas, and implicit contracts.
You end up rebuilding semantics over and over again (validation, mapping, enrichment), and a lot of failures only show up at runtime.
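A minimal TypeScript sketch of that per-boundary tax, with a hypothetical order payload and hand-rolled checks standing in for whatever schema library a real system would use:

```typescript
// At the boundary, everything is strings: re-validate, re-map, re-enrich.
interface Order {
  orderId: string;
  amountCents: number;
  placedAt: Date;
}

// Hand-rolled parser standing in for a schema library; every service that
// touches this payload re-implements some version of this.
function parseOrder(raw: unknown): Order {
  if (typeof raw !== "object" || raw === null) throw new Error("not an object");
  const r = raw as Record<string, unknown>;
  if (typeof r.orderId !== "string") throw new Error("orderId must be a string");
  // The wire format sends cents as a string; the internal model wants a number.
  const amountCents = Number(r.amountCents);
  if (!Number.isInteger(amountCents)) throw new Error("amountCents must be an integer");
  const placedAt = new Date(String(r.placedAt));
  if (Number.isNaN(placedAt.getTime())) throw new Error("placedAt must be a date");
  return { orderId: r.orderId, amountCents, placedAt };
}

// The failure mode described above: this only blows up at runtime, on
// whichever payload first violates the implicit contract.
const order = parseOrder(
  JSON.parse('{"orderId":"o-1","amountCents":"1250","placedAt":"2024-01-01"}')
);
```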
I’m skeptical about “one model to rule them all”, but I strongly agree that losing semantics at system boundaries is the core problem.
I think die-hard fans of static typing mostly fail to acknowledge this objective reality and its implications. They encounter this problem again and again, and each time they approach it as if nobody had thought of it before and developed reliable abstractions for working productively in these environments.
[0] Distributed Systems Programming Has Stalled: https://news.ycombinator.com/item?id=43195702
[1] Choreographic Programming: https://en.wikipedia.org/wiki/Choreographic_programming
I'm looking forward to whatever these people come up with, because I believe they do understand the problem, which is the best starting position you can have.
It was a very productive way to produce most software. But as soon as you want to do something off-piste, you pay the entire productivity penalty.
You see even on this thread people begging for one single standard.
What actually happens with that one single standard?
- Behind it, you have a shitload of people implicitly optimizing for the general use case and hiding all of that complexity from you
- No need to worry about semantic conflict (https://www.sigbus.info/worse-is-better)
Once you have centralization, "composition" is not so hard. You get to define all your edge cases and define how you see the real world. Everybody doesn't have their own way of doing things; there is only one way of doing things.
Of course, then comes the extension of the software. People will see the world differently. And we have not algorithmically figured out how domains themselves evolve. The centralization abstraction breaks because people disagree and have different use cases.
I don't see how you get around this fundamental limitation. Are you going to impose yet another secret standard on everybody to get the interoperability you want? If you had full control over the world, yes, things are easy.
I'm not saying this as a diss. I truly do believe centralization works. AWS? Palantir? Building the largest centralized platforms in history and having everybody go through your tooling, when executed carefully, is a dummy effective strategy. In the past, monopolies were effectively this too (though I'd say buying steel is much different than "buying" arbitrary turing-complete services to help deal with a wide variety of semantic issues, and that's precisely what makes the 'monopoly' model break in the 21st century). And hey, at least AWS is a pretty good service, insofar as it makes certain things braindead easy. Is it a "good" service, intrinsically or whatever? I don't know.
I work at a company that thinks extremely deeply about interoperability issues and everybody is on the opposite side: it can be said that we were made as a response to xkcd 927, to try and solve the issue.
I think the company is right in that semantic decentralization with interoperability would be a good end goal, but I think just plain Darwinism explains the necessity of the opposite.
Not a great example of a single centralised system. The errors came from trying to write custom reconciliation code between two systems, the ERP and the bank - perfect example of the problems OP raises.
We lucked into filesystems that have open structures (even if the data is opaque). Perhaps we should be pushing for "in-memory filesystems" as a default way of storing runtime data, for example.
I believe there are several ways to achieve that analogy today, even though the technology we have access to (and our own demands on it) has grown exponentially in complexity. I am happy to see more people thinking about it.
[Side track: I am personally not a fan of "break it up into many tiny systems" (microservices, etc) since it removes that agility of logic/state moving around the system. I just see an attempt to codify the analog of a very large human organization.]
Now that AI lets a single person (and in some cases, no person at all!) write several orders of magnitude more code than they would possibly have been able to, the requirements of our systems will change too, and our old ways of working are cracking at the seams. In a way we're perhaps building up a whole new foundation, sending our AIs to run 50-year-old terminal commands. Maybe that's all we needed all along, but I do find it strange that AI is forced to work within a highly fragmented system, where 95%, if not 99%, of all startups that write code with AI while hiding it from the user are essentially following the recipe of: (1) launch VM (2) tell AI to install Next.js and good luck.
I too have a horse in this race and have come to similar conclusions as the article: there is a way to create primitives on top of bare metal that work really well for small and large applications alike, and let you express what you really wanted across compute/memory/network. And I believe that with AI we can go back to first principles and rethink how we do things, because this time the technology is not just for groups of humans. I find this really exciting!
[1] https://redplanetlabs.com/programming-model
> What is Rama? Rama is a platform for building distributed backends as single programs. Instead of stitching together databases, queues, caches, and stream processors, you write one application that handles event ingestion, processing, and storage.
I think anything that can change this has to be simple enough that it'd be more effective to just explain the system and implement it, than wax about the general outline of part of the problem. Especially since the real target audience for an initial release by necessity needs to understand it.
There are some big leaps we could make with having code be more flat. Things like having the frontend and backend handler in the same file under the same compiler/type checker. But somebody will want to interact with a system outside of the 'known-world' and then you're writing bindings and https://xkcd.com/927/
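Something like this minimal TypeScript sketch (route name and types hypothetical), where a change to the shared type surfaces on both sides of the wire in one compile:

```typescript
// One file, one type checker, for both sides of the wire. fetch stands in
// for whatever transport a real stack would use.
interface GetUserRequest { userId: string; }
interface GetUserResponse { userId: string; displayName: string; }

// "Backend": any change to GetUserResponse is checked here at compile time...
async function handleGetUser(req: GetUserRequest): Promise<GetUserResponse> {
  return { userId: req.userId, displayName: `user-${req.userId}` };
}

// ...and simultaneously here, in the "frontend" caller, in the same compile.
async function fetchUser(userId: string): Promise<GetUserResponse> {
  const body: GetUserRequest = { userId };
  const res = await fetch("/api/get-user", {
    method: "POST",
    headers: { "content-type": "application/json" },
    body: JSON.stringify(body),
  });
  // The cast is the remaining seam: the wire itself is still untyped.
  return (await res.json()) as GetUserResponse;
}
```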
At the end of the day I think the core tension is that once the speed of light is noticeable to your usecase things become distributed, which creates the desire for separate rate-of-change. I'm not sure what would 'solve' that.
AI will be a plus, in that a single team can be in charge of more of the parts, leading to a more coherent whole.
Hope OP builds some nice tools, but I've seen too many of these attempts fail to get excited about "i think we found it".
I believe the real problem is that software is symbolic and the problems it solves usually aren't. Writing an application means committing to a certain set of symbolic axioms and derivation schemas, and these are never going to encapsulate the complexity of the real world. This relates to Greenspun's 10th rule:
> Any sufficiently complicated C or Fortran program contains an ad hoc, informally-specified, bug-ridden, slow implementation of half of Common Lisp.
Or in a modern context, C++/C# and managing a huge amount of configuration data with a janky JSON/XML parser, often gussied up as an "entity component system" in game development, or a "DSL" in enterprise. The entirely equivalent alternative is a huge amount of (deterministic!) compile-time code generation. Any specific symbolic system small enough to be useful to humans is eventually going to go "out of sync" with the real world.
The authors hint at this with the discrepancy between SQL's type system and that of most programming languages, but this is a historical artifact. The real problem is that language designers make different tradeoffs when designing their type systems, and I believe this tradeoff is essentially fundamental. Lisp is a dynamically-typed s-expression parser, and Lisp programs benefit from being able to quickly and easily deal with an arbitrary tree of whatever objects. In C#/C++ you would either have to do some painful generics boilerplate (likely codegen with C#) or box everything as System.Object / void pointer and actually lose some of the type safety that Lisp provides. OTOH Idris and Lean can do heterogeneous lists and trees a little more easily, but that cost is badly paid for in compilation times, and AFAICT it'll still demand irritating "mother may I?" boilerplate to please the typechecker.
There is a fundamental tradeoff that seems innate to the idea of communicating with relatively short strings of relatively few symbols. This sounds like Gödel incompleteness, and it's a related idea, but this has more to do with cognition and linguistics. I wish I was able to write a little more coherently about this... I guess I should collect some references and put together a blog at some point.
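A rough sketch of that tradeoff, transplanted into TypeScript rather than Lisp or C# (names hypothetical):

```typescript
// Lisp-style "arbitrary tree of whatever": cheap to build, nothing checked.
type Sexpr = string | number | Sexpr[];

// The statically typed alternative: commit up front to a closed set of
// symbolic axioms (a discriminated union)...
type Expr =
  | { kind: "num"; value: number }
  | { kind: "sym"; name: string }
  | { kind: "list"; items: Expr[] };

function lift(e: Sexpr): Expr {
  if (typeof e === "number") return { kind: "num", value: e };
  if (typeof e === "string") return { kind: "sym", name: e };
  return { kind: "list", items: e.map(lift) };
}

// ...and pay for it whenever the world grows: adding a fourth kind ("out of
// sync" with reality) means revisiting every exhaustive match like this one.
function show(n: Expr): string {
  switch (n.kind) {
    case "num": return String(n.value);
    case "sym": return n.name;
    case "list": return `(${n.items.map(show).join(" ")})`;
  }
}
```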
I'm not sure what point you're trying to make here. The list you're referring to is definitely a bit hand-wavy, but it also makes sense to me to read it as, for example, "today's relational databases (software) are almost perfectly aligned to the domain of relational databases (concept)". As in, MariaDB running on my Mac wraps an insane amount of complexity and smarts in a very coherent system that only exposes a handful of general concepts.
The concepts don't match what I'd like to work with in my Rails app, which makes the combination of both a "fragmented system", as the article calls it, but the database itself, the columns, tables, rows and SQL above it all, that's coherent and very powerful.
- Tables are not relations. Tables are multisets, allowing duplicate rows, whereas relations always have a de facto primary key. SQL is fundamentally a table language, not a relational language.
- NULL values are not allowed in relations, but they are in SQL. In particular, there's nothing relational about an outer join.
In both cases they are basically unscientific kludges imposed by the demands of real databases solving real problems. "NULL" points to the absence of a coherent answer to a symbolic rule, requiring ad hoc domain-specific handling. So this isn't a pedantic point: most people wouldn't want to use a database that didn't allow duplicate rows (the SQL standard committee mentioned a cash register receipt with multiple entries that don't need to be distinguished, just counted). Nullable operations are obviously practical even if they're obviously messy. Sometimes you just want the vague structure of a table, a theory that's entirely structural and has no semantics whatsoever. But doing so severely complicates the nice symbolic theory of relational algebra.
That's the point I'm getting at: there isn't really a "domain" limitation for relational algebra, it's more that there's a fundamental tradeoff between "formal symbolic completeness" and "practical ability to deal with real problems." Eventually when you're dealing with real problems, practicality demands kludges.
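A small TypeScript sketch of the multiset/relation gap, using the committee's receipt example (rows are hypothetical):

```typescript
// A SQL table is a multiset: the cash-register receipt can hold two
// identical coffee lines that only need to be counted, not distinguished.
type ReceiptRow = { item: string; priceCents: number };
const receiptTable: ReceiptRow[] = [
  { item: "coffee", priceCents: 300 },
  { item: "coffee", priceCents: 300 }, // a duplicate row, and that's the point
];

// A relation is a set: keying on the whole tuple silently collapses the
// duplicates, and the second coffee disappears from the bill.
const relation = new Map(
  receiptTable.map((r) => [JSON.stringify(r), r] as const)
);
console.log(receiptTable.length, relation.size); // 2 1

// NULL is the other kludge: in SQL, NULL = NULL evaluates to UNKNOWN, a
// third truth value that no host language's boolean faithfully reproduces.
const price: number | null = null;
console.log(price === null); // true here; UNKNOWN in SQL's three-valued logic
```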
Which is tantamount to waving one's hands about and saying there's "New magic!(tm)"
... while standing next to a pile of discarded old magic that didn't work out.
This blog post says nothing about what makes Cambra's approach unique and likely to succeed; it is just a list of (valid) complaints about the status quo.
I'm guessing they want to build a "cathedral" instead of the current "bazaar" of components, perhaps like Heroku or Terraform, but "better"? I wish them luck! They're going to need it...