Discussion (3 Comments)

maxaravind•3 days ago
Author here.

I spent the last weekend thinking about continual learning. A lot of people assume we can solve long-term memory and learning in LLMs simply by extending the context length to infinity. I analyse a different perspective that challenges this assumption.

Let me know what you think.