There's a version of AI adoption that sounds impressive in a boardroom and falls completely apart the moment a real customer needs something. I ran into both versions of it recently, within a few weeks of each other, and the contrast was jarring enough that it's been sitting in the back of my mind ever since.
I have whole life policies with two insurers: Northwestern Mutual and Lincoln National. Both are very solid companies. As a business owner, I periodically tap into the cash value of those policies to help fund growth. It's a straightforward transaction, one I've done before, and one both companies have systems built to handle. What happened next with each of them tells you almost everything you need to know about the difference between AI that serves customers and AI that simply performs the appearance of serving them.
Northwestern Mutual: The Quiet Excellence of a System That Works
With Northwestern Mutual, there isn't much to say, and that's the point. I made the request. I reached a human quickly. The only snag was that my usual contact was on vacation, but I was immediately connected to another person who handled everything. The money was in my account via ACH within 24 hours. Done.
No friction. No explaining myself to a system that didn't know me. No time wasted performing actions into a void and waiting to see if anything came back. A person took ownership of my request, and the process did what it was supposed to do. They sat down and asked, "What does our customer need this to feel like?" and then built backward from that answer.
It's the kind of service that doesn't generate a story because it doesn't need to. It just works.
Lincoln National: A Good Company with a Bad Process
Lincoln National is a different story. Unfortunately, it's not a good one.
I submitted my loan request and waited. Fourteen days later, I received a piece of mail, physical mail, informing me I had submitted the wrong form. Not an email or a text. A letter. By the time it arrived, two weeks had already passed.
When I called to sort it out, the representative confirmed I had used an outdated form, one that had been replaced two years prior. I asked the obvious question: why didn't you email me? The answer was that mailing a letter is simply their process.
So I filled out the correct form, submitted it via email (not a secure method for applying for a policy loan, by the way, but that's another story altogether), and received an automated email acknowledgment. Then silence. Ten days passed. I logged into my account and discovered the loan had been approved seven days earlier. The interest clock had been running since the day of approval. The money was still not in my account.
I called again. This time, I had to work through an IVR (interactive voice response) system that clearly has some elements of AI or ML behind it. If it were well-architected artificial intelligence, it would have detected my tone, assessed my level of frustration, and routed me to the right person faster. Instead, the voice recognition failed repeatedly. The call routing failed repeatedly. The "intelligent" system kept me in a loop of technology until the fourth failure, then routed me to a human. That person asked me the same questions all over again and put me into yet another queue. After twenty minutes of on-hold music, I finally reached a person.
That person told me my ACH had been approved that morning and the funds might arrive within 1-3 business days. When I asked who I could speak with about improving the process, the response was direct, if not exactly encouraging: "I can give you some numbers, but nothing is going to change."
So let's total this up. Three weeks from request to funds received, assuming the ACH arrives on the shorter end. Ten of those days charging me interest on money I hadn't yet received. And at no point, across the entire chain of events, did a human take ownership of anything.
Customer Satisfaction Window Dressing
Interestingly enough, when I started this phone call, Lincoln National's IVR asked me if I would take part in a customer satisfaction survey. I said yes. After I hung up with the representative, Lincoln's automated attendant dutifully dialed me back and asked me all of two questions:
- Was I satisfied with the service?
- Would I want the same customer service rep if I called back again?
For each one, I had to press a key for my response, so there was no attempt to capture anything a service team could actually use: words, stories, sentiment.
All of that could have been fed to AI to refine the process and to inform management in far more detail. Two yes-or-no questions do not constitute a customer service survey. They ask, "Did our customer service rep do okay?" They don't ask, "Did we, as a company, provide you with the best possible service, and how can we do better?" That kind of lame attempt to gather customer satisfaction data only adds fuel to a smoldering fire.
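To make that concrete, here is a toy sketch of the signal free-text answers carry that keypress answers can't. The word lists are invented for illustration; a real system would use a trained sentiment model, not a hand-made lexicon.

```python
import re

# Toy lexicons, invented for illustration only. A production system
# would use a trained sentiment model, not a hand-picked word list.
NEGATIVE = {"frustrated", "slow", "loop", "waited", "broken", "wrong"}
POSITIVE = {"fast", "helpful", "easy", "resolved", "quick"}

def score_response(text: str) -> int:
    """Crude sentiment score: positive word hits minus negative word hits."""
    tokens = set(re.findall(r"[a-z']+", text.lower()))
    return len(tokens & POSITIVE) - len(tokens & NEGATIVE)

responses = [
    "waited twenty minutes in a loop, very frustrated",
    "the rep was helpful once I finally reached a person",
]
for r in responses:
    print(score_response(r))  # -3, then 1
```

Even a tally this crude tells management where the pain is concentrated. Two keypresses tell them nothing.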
The Real Problem Isn't Technology
It would be easy to frame this as a technology problem. Lincoln National has outdated systems and is behind the times. But that's not the real issue, and accepting that framing would mean missing what's actually happening here.
Lincoln National has AI, and they have systems. They have voice recognition, decision-tree routing, an automated acknowledgment system, and an automated "customer satisfaction" dial-back survey. The technology exists. What doesn't exist is a coherent answer to the two questions that should drive every customer-facing system they build: "What does the customer actually need from this interaction?" and "How can we give it to them in a way that turns them into raving fans of our company and service?"
If someone had asked those questions seriously, a few things would have become obvious. Customers who submit wrong forms need to know immediately, not two weeks later, and they need to know via the fastest available channel. Those who have been approved for a loan need to know that too, before the interest starts running, not seven days after. And customers who have spent 20 minutes failing to reach a human through an automated system aren't having their frustration detected. They're experiencing it being manufactured and accelerated.
The AI Lincoln National deployed didn't improve anything for customers. It created a layer of friction between people with real needs and the humans who might actually be able to help them, and it did so while performing competence it didn't have.
That's not AI. That's theater.
The Human Problem
Here's what gets overlooked in most conversations about AI and customer experience. The goal of good AI isn't to replace human contact. It's to make sure the right humans are available at the right moments, freed from the repetitive work that machines can handle, and empowered to actually own the outcome for the customer.
Northwestern Mutual got this right. When I needed a person, I got one fast. The automated side of the transaction, the ACH processing, the account verification, all of that ran quietly in the background without requiring me to babysit it. The human was there for the part that required judgment, ownership, and accountability. The rest was handled by systems that actually worked.
Lincoln National inverted this entirely. They used AI to gatekeep access to humans, routing me through a broken automated layer that couldn't do what it was likely designed to do: handle an issue without human intervention. I certainly understand the economic efficiency of minimizing person-to-person interaction, so I'm not faulting the attempt. It's just that their IVR was clearly built on antiquated technology. Had it been built on today's AI capabilities, semantic inference would have made a significant difference in the customer experience.
Legacy IVRs typically rely on rigid, exact-match keywords or touch-tone menus (e.g., "Press 1 for billing"). Modern AI uses Natural Language Understanding (NLU) to deduce intent and context (semantics) from a user's natural phrasing, allowing it to infer what the user actually needs.
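As a toy sketch of that difference, here is exact-match routing next to intent-based routing. The menus, intent vocabularies, and queue names are invented, and real NLU would use a trained model rather than simple token overlap.

```python
# Invented menus and intents for illustration; not any vendor's actual IVR.

def legacy_route(utterance: str) -> str:
    """Legacy IVR: only exact menu keywords are recognized."""
    menu = {"billing": "billing_queue", "loans": "loan_queue"}
    return menu.get(utterance.strip().lower(), "main_menu_loop")

# An NLU system learns these associations from data; here we fake them
# with hand-picked vocabularies and pick the intent with the most overlap.
INTENTS = {
    "loan_status": {"loan", "approved", "money", "funds", "ach", "account"},
    "billing": {"bill", "payment", "charge", "invoice"},
}

def intent_route(utterance: str) -> str:
    """NLU-style routing (sketched): score token overlap per intent."""
    tokens = set(utterance.lower().split())
    best, best_score = "main_menu_loop", 0
    for intent, vocab in INTENTS.items():
        overlap = len(tokens & vocab)
        if overlap > best_score:
            best, best_score = intent, overlap
    return best

caller = "my loan was approved but the money never hit my account"
print(legacy_route(caller))  # main_menu_loop  (caller keeps looping)
print(intent_route(caller))  # loan_status     (routed by inferred intent)
```

The legacy system hears a sentence it has no menu entry for and loops; the intent-based system infers what the caller actually needs from natural phrasing.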
When I finally reached a human, that person had no authority to change the process into one that could deliver a reasonable outcome. The human was trapped inside the same broken system as the customer. The AI hadn't freed anyone. It had just added another obstacle. When I asked why they didn't communicate with me via email or SMS instead of snail mail, he simply said, "That's our process."
When you put AI in front of a broken process without fixing the process, you don't reduce the human burden. You just push it downstream, onto the customers who now have to work harder to get to anyone, and onto the employees who have to apologize for a system they didn't build and can't change.
Bad Processes Don't Get Better With Automation
This is the part that applies well beyond the insurance industry.
When a broken process gets automated, it doesn't become a fixed process. It becomes a broken process that runs faster, scales wider, and has fewer humans available to absorb the damage it causes. The organizational instinct to layer AI onto existing workflows is understandable.
The pressure to show progress on AI adoption is real, and it's everywhere right now. But that instinct, left unchecked, produces exactly what Lincoln National built: a system that sounds modern, performs poorly, and leaves customers more frustrated than they would have been with a slower, more human process.
The two-week mail delay was the original sin here. That's a process designed around what was convenient for the organization, not what was useful for the customer. Adding voice recognition and decision trees on top of that problem didn't fix it. It just ensured that customers had to clear one more hurdle before reaching someone who might acknowledge the problem. And even then, the human on the other end had no path to a good outcome.
Northwestern Mutual didn't achieve 24-hour ACH delivery by automating their old process. They built a process around the outcome the customer needed, put a human in the role that required a human, and let the technology handle the rest. The difference between those two companies isn't how much AI they're using. It's whether the AI is genuinely serving the customer or just performing the appearance of it.
What This Means for Your Organization
Every business leader right now is making decisions about where AI fits into their operations. Those decisions are going to land somewhere on the spectrum between what Northwestern Mutual built and what Lincoln National built. The gap between those two outcomes isn't primarily a budget gap or a technology gap; it's a thinking, discernment, judgment, and values gap.
The question shouldn't be: "How do we add AI to what we're already doing?" That question almost always produces Lincoln National output: results that reduce operating expenses while alienating clients and cutting into sales and revenue.
The question should be: "What do our customers actually need from this process, where do humans add irreplaceable value, and how do we build a system that delivers on both?"
If you can't answer those questions clearly and specifically before you start automating, you're not making your service better. You're making your existing problems faster, wider, and harder to fix, while quietly removing the humans who were the last line of defense between your customers and a genuinely bad experience.
One more thing worth sitting with: Lincoln National is charging interest on loan funds from the moment they approve the loan, regardless of when the customer actually receives the money. In this case, that gap was 10 days. Whether that practice is legal is a question for their compliance team. Whether it's the kind of thing a company does when it has designed its processes around its own interests rather than its customers', that question has already been answered.
Lincoln National's rep told me that I could contact anybody on a list he could provide me, but nothing was going to change. He may have been right about his company. He doesn't have to be right about yours.

