
Discussion (10 Comments)
Seemingly seconds on every remote-touching command, even on a very small repo.
Why isn't that the default? I would guess that for at least 90% of the repos I clone, I just want to install something. Even for the rest, I might hack on the code but seldom look into the history. If I do, I could do a `git fetch` at that point and save the bandwidth and disk space the rest of the time.
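A minimal sketch of what this workflow looks like, assuming the elided flag is `--depth=1` (a shallow clone); a partial clone via `--filter=blob:none`, as in the linked GitHub post, is the other candidate. The URL is a placeholder.

```shell
# Hypothetical repo URL; --depth=1 fetches only the latest commit
# and skips the rest of the history.
git clone --depth=1 https://example.com/project.git
cd project
# ...configure / make / make install as usual...

# Later, if you do want the full history after all:
git fetch --unshallow
```

`git fetch --unshallow` converts the shallow clone back into a complete one, so deferring the history download costs nothing but a second fetch.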
https://github.blog/open-source/git/get-up-to-speed-with-par...
https://gitperf.com/chapter-11.html
Downloading a tarball, running ./configure or make, editing a config file here or there, then running `make install` is the most common flow. Nowadays I find myself frequently editing the Dockerfile to make it to my liking. With a git repo, the owners of the repo have excluded all the local files, build caches, etc., and you can keep pulling to get updates, stashing and reapplying your local changes. With tarballs, you have to figure it all out over again: lose your build cache (language-dependent, maybe), lose a change you made here or there, etc.
A) You can update them, because you can git pull to fetch changes.
B) If you want to apply patches on top, it's better to have version control so you can keep track of what you changed; that's especially useful if you want to rebase.
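Point B can be sketched like this (the branch and remote names are illustrative):

```shell
# Keep your patches as commits on a local branch:
git switch -c my-patches
# ...edit files...
git commit -am "my local patch"

# When upstream moves, replay your patches on top of it:
git fetch origin
git rebase origin/main
```

Because each patch is a commit, `git rebase` re-applies them one by one and tells you exactly which patch conflicts, which is what you lose with a hand-edited tarball.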
And also git, which makes more sense, I guess.