Discussion (3 Comments)

SilverElfin · about 3 hours ago
Is this actually innovative? I respect that there's a lot of work in making it a reality, and in doing it specifically for AI training by modifying the algorithms. But doing portions of work in clusters that are far apart and combining the results has been done many times before for non-AI things, right? Or so I would think.
philipkglass · about 3 hours ago
Generically speaking, yes, this has been done before. But it can take a lot of work to transform software that works with shared memory or other low-latency interprocess communication mechanisms so that it's practical to run across wide area networks. Sometimes that's not possible at all. Certain problems still require "high performance computing" architectures with all of their compute nodes in the same building, connected by high-bandwidth, low-latency links.
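
To put rough numbers on that (mine, not the article's): with fully synchronous training, every step pays the network round trip, so wide-area latency dominates unless you synchronize less often. A minimal back-of-envelope sketch, assuming 50 ms of compute per step:

    # Back-of-envelope: how link latency dominates synchronous training.
    # All numbers are illustrative assumptions, not measurements.

    def step_time(compute_s, rtt_s, sync_every=1):
        """Average wall-clock time per training step when workers
        synchronize gradients every `sync_every` steps."""
        return compute_s + rtt_s / sync_every

    compute = 0.050   # 50 ms of compute per step (assumed)
    lan_rtt = 0.0002  # ~0.2 ms round trip inside one building
    wan_rtt = 0.080   # ~80 ms round trip across continents

    print(step_time(compute, lan_rtt))      # ~0.0502 s: latency is noise
    print(step_time(compute, wan_rtt))      # ~0.130 s: >2.5x slowdown
    print(step_time(compute, wan_rtt, 50))  # ~0.0516 s: infrequent sync amortizes latency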
Centigonal · about 2 hours ago
You're right: the MapReduce pattern is very old, and it is well known that applying it to AI training to enable geographically distributed training runs would be very beneficial. We haven't done it yet because model training workloads are more difficult to parallelize under high inter-node latency than a lot of traditional workloads.

This paper proposes a work partitioning scheme that removes a constraint that makes parallelizing AI training inefficient. The idea of a work partitioning scheme isn't novel, but the scheme itself is.
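
For illustration only, here's the generic "sync rarely" idea in the style of local SGD (my sketch, not necessarily the paper's scheme): each site runs many optimizer steps on its own data, and the sites exchange and average parameters only occasionally, so the wide-area round trip is paid once per sync round rather than once per step.

    # Illustrative local-SGD-style sketch (not the paper's scheme):
    # K sites train independently for H local steps, then average
    # parameters, so cross-site communication happens once per H steps.
    import numpy as np

    rng = np.random.default_rng(0)
    true_w = np.array([2.0, -3.0])

    def local_steps(w, H=50, lr=0.01):
        """Run H plain SGD steps on locally sampled data for y = x @ true_w."""
        for _ in range(H):
            x = rng.normal(size=(32, 2))
            y = x @ true_w
            grad = 2 * x.T @ (x @ w - y) / len(x)  # gradient of mean squared error
            w = w - lr * grad
        return w

    K = 4                      # number of distant sites (assumed)
    w = np.zeros(2)            # shared starting point
    for _ in range(10):        # each round = one wide-area sync
        replicas = [local_steps(w.copy()) for _ in range(K)]
        w = np.mean(replicas, axis=0)  # the only cross-site exchange

    print(w)  # converges toward [2, -3]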