lennxa

Bear Case for LLM scaling


Thane Ruthenis (LessWrong):

I expect that none of the currently known avenues of capability advancement are sufficient to get us to AGI.

...I expect AGI labs' AGI timelines have ~nothing to do with what will actually happen. On average, we likely have more time than the AGI labs say. Pretty likely that we have until 2030, maybe well into the 2030s.

By default, we likely don't have much longer than that. Incremental scaling of known LLM-based stuff won't get us there, but I don't think many qualitative insights remain to be found. 5-15 years, at a rough guess.

#ai #forecasting #lesswrong #links