My dad was one of those old-school techno-utopian nerds. He had been programming since the 70s. Sometime near the start of my career in software, in the 2010s, I recall chatting with him about some problem I was working on.
I was planning to take a lazy & more resource-intensive approach. My dad said something like,
“Back in the day, we didn’t have enough memory to write bad code.”
I think about that from time to time. I have been thinking about it in the context of LLM development. At this moment in time, we’re restricted in how much memory we can use. We can’t dump entire documents or knowledge bases into a chat and expect most LLMs to be able to handle them. As such, folks are coming up with all sorts of clever ways to retrieve small but relevant pieces of data…
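To make that concrete, here’s a rough sketch of the pattern: chunk your documents, score each chunk against the question, and only send the most relevant few to the model. The function names and the keyword-overlap scoring are just illustrative assumptions on my part; real systems typically use embeddings and a vector store, but the shape of the idea is the same.

```python
# A minimal sketch of the "retrieve only what's relevant" pattern.
# Names (chunk_text, score, retrieve_context) are hypothetical; production
# systems usually rely on embeddings rather than keyword overlap.

def chunk_text(text: str, max_words: int = 200) -> list[str]:
    """Split a long document into roughly fixed-size chunks."""
    words = text.split()
    return [" ".join(words[i:i + max_words]) for i in range(0, len(words), max_words)]

def score(question: str, chunk: str) -> int:
    """Crude relevance score: count how many question words appear in the chunk."""
    q_words = set(question.lower().split())
    return sum(1 for w in chunk.lower().split() if w in q_words)

def retrieve_context(question: str, documents: list[str], top_k: int = 3) -> str:
    """Pick the top_k most relevant chunks so the prompt stays small."""
    chunks = [c for doc in documents for c in chunk_text(doc)]
    best = sorted(chunks, key=lambda c: score(question, c), reverse=True)[:top_k]
    return "\n\n".join(best)

# The retrieved snippet, not the whole knowledge base, goes into the prompt:
# prompt = f"Context:\n{retrieve_context(question, docs)}\n\nQuestion: {question}"
```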
It’s the constraint turned on its head from my conversation with my dad, but the same pattern is at play. Because of space constraints on LLM inputs, we’re forced to be clever.
I guess necessity really is the mother of invention.