The Memory AI Needs Depends on What You're Using It For

July 8, 2025

I recently read this post by Pedro Franceschi, and it stuck with me.

"The actual bottlenecks were, and still are, code reviews, knowledge transfer through mentoring and pairing, testing, debugging, and the human overhead of coordination and communication. All of this wrapped inside the labyrinth of tickets, planning meetings, and agile rituals."

He argues that despite how powerful AI coding tools have become, they haven’t really made us significantly faster at shipping software. And I think he’s absolutely right.

It’s not that the models aren’t good enough. It’s that something deeper is missing.


The analogy I keep coming back to is:

Using an AI coding assistant is like onboarding a senior+ engineer. They're smart and capable, but they are always on day one.

Which means they don't know:

  • Where the technical debt is buried, and more importantly, why
  • What tradeoffs you accepted, and what you gave up in return
  • What your API abstractions actually mean

So even though the model is capable, you spend a lot of time giving context, correcting it, or just doing things yourself.

It’s a big reason why the productivity gains often fall short.

As Phil Schmid writes in this post, long-term memory is one of the promising ways to make AI assistants more useful.

But what kind of memory is useful depends entirely on what you’re using the AI for.

For example, if you're using AI to help plan a trip, the ideal memory includes things like:

  • Where you live
  • Your typical budget
  • That you prefer “decent but not fancy” hotels
  • That you have three kids and their ages
  • What kinds of activities your family enjoys

The more personal context the AI remembers, the better its recommendations get.

But when you're using AI to help write or reason about code, personal details are irrelevant. In fact, you actively don't want the AI bringing any of them in.

In this case, the AI needs memory not of you, but of your codebase — and ideally, the tribal knowledge of your team.

The mistake would be to think “AI needs memory” is a single problem with a one-size-fits-all solution.

We should be asking: what should this AI remember so that it's more useful at this task tomorrow than it was today?
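One way to picture this question is as separate, role-scoped memory stores: the trip planner remembers you, the coding assistant remembers your codebase. A minimal sketch in Python; all the names here (RoleMemory, remember, recall) are hypothetical illustrations, not any real library's API:

```python
class RoleMemory:
    """Keeps a separate memory store for each role the AI plays.

    This is a toy illustration of the idea that what an assistant
    should remember depends on the task, not a real implementation.
    """

    def __init__(self):
        self._stores = {}  # role -> {key: remembered fact}

    def remember(self, role, key, fact):
        # Facts are filed under the role they serve.
        self._stores.setdefault(role, {})[key] = fact

    def recall(self, role):
        # Only memory relevant to the current role is surfaced;
        # travel preferences never leak into a coding session.
        return dict(self._stores.get(role, {}))


memory = RoleMemory()
# Trip planning cares about personal context...
memory.remember("travel", "hotel_preference", "decent but not fancy")
# ...while coding cares about the codebase and the team's tribal knowledge.
memory.remember("coding", "auth_module", "legacy; debt accepted to ship v1 faster")

assert "hotel_preference" not in memory.recall("coding")
```

The point of the separation is the thesis itself: the recall boundary is drawn by the role, so each assistant accumulates only the context that makes it better at its own job.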

And the more we can tailor AI memory to fit these differences, the closer we get to truly useful assistants — not just fancy autocomplete.

Making AI more useful won't come just from bigger models or longer context windows. It will come from smarter memory, designed to evolve with the role the AI plays.