Large Language Models are at a pivotal point. On one hand, we still don’t understand how they work or why they work so well - could human thought be nothing more than a pattern that these models have somehow uncovered? On the other hand, there’s growing concern that LLMs are hitting a wall[1], that current approaches (i.e. throwing ever more money and computation at training) won’t fix their biggest shortcomings (e.g. hallucinations), and that the alarmingly high valuations and expectations may come crashing down soon.
But whatever the future holds, as of early 2025, LLMs have demonstrated their usefulness in various ways. One specific use case - this blog’s hobby horse - is using language models as a learning companion.
Continue: https://engineeringknowledge.blog/p/where-the-bug-is-a-feature