Should AI Be Your Business's IT Support?
Two AI behaviors that business leaders should understand before putting AI at the front of their IT service, plus where AI actually helps.
Over lunch this week, our Head of Cybersecurity, Andy, told me about a soda recipe that almost went sideways.
He's been making ginger bug soda at home, which is a naturally fermented drink built around a ginger-and-sugar starter culture. A few months back, he asked ChatGPT for the standard juice-to-water ratio. It told him three cups of juice to one cup of water. He made a batch, it came out fine, and he moved on.
A few weeks later he was prepping a new batch with a different juice, so he asked ChatGPT the same question. This time it confidently came back with one cup of juice to three cups of water. The opposite of what it had said before.
Andy paused. He was pretty sure the original answer was the right one. So he pushed back, told the AI the ratio looked wrong, and asked it to verify. ChatGPT immediately folded: "Yes, you're right. Three to one is the standard. Apologies for the confusion."
If Andy hadn't remembered the earlier answer, he would have made his soda with a third of the juice it actually needed. Watery, vinegary, and faintly disappointing, but not the worst outcome in a kitchen.
It's a different story when the same dynamic plays out in a business system, where the stakes aren't a bad batch of soda.
TL;DR: AI tends to agree with whoever pushed last, and it fills in missing context with assumptions rather than asking what's different. Used as a replacement for experienced judgment, it's the wrong tool for IT support. Used to make experienced practitioners faster, with humans owning every decision, it earns its keep. The decision in front of you is how much human judgment you're willing to replace with AI, both inside your own team and inside the provider you hire.
Key takeaways
- Sycophancy is real. When you push back on an AI's answer, it tends to agree with you, even if the first answer was correct.
- Incomplete questions yield confident-but-wrong answers. AI fills in missing context with assumptions instead of asking what's different about the situation.
- AI's best fit in IT is volume work. Security alert triage, ticket summarization, and script drafting are real wins when a human reviews the output.
- The decision belongs to the buyer. How aggressively you let AI replace human judgment, either inside your own team or through a low-cost provider leaning heavily on automation, is the actual call you're making.
The agreement problem
When Andy pushed back, the AI didn't stand by its answer. It reversed. That pattern is a well-documented behavior in consumer AI tools. Anthropic's research on sycophancy found that five leading AI assistants consistently tilt toward agreement when a user challenges an answer, even when the original answer was correct. The Nielsen Norman Group has documented the same behavior in mainstream tools like ChatGPT: a simple "are you sure?" is often enough to make a model abandon a correct answer. If you happen to be right, great. If you happen to be wrong, the AI has just validated your mistake with confidence.
In a kitchen, the worst outcome is a bad batch. In a business system, the worst outcome is a team running the wrong configuration for six months because the AI agreed with the first plausible-sounding theory someone brought to it.
What AI does with incomplete questions
When the AI gave Andy the reversed ratio, it didn't know this was a "new juice, same recipe" batch. He didn't tell it. The model filled in the missing context with assumptions and produced an answer that sounded right and wasn't.
This is the modern version of garbage in, garbage out: the old version was about bad data, the new one is about incomplete questions. AI usually won't stop you and ask, "What's different about this situation?" It will produce an answer with whatever it has, because that's what you asked for.
That matters more in business than people realize. Imagine a security analyst pasting an alert from a monitoring tool into an AI assistant and asking, "Is this a real threat or a false alarm?" The AI doesn't ask back, "What's normal behavior for this user? Was there a known software rollout that day? Have you seen this pattern before in this environment?" It gives an answer. If the analyst treats that answer as a conclusion instead of a starting point, the team makes the wrong decision with the AI's confidence attached to the mistake.
When this happened to one of our techs
One of our technicians was troubleshooting an application that kept failing to install correctly. The AI assistant he was using to help triage the issue kept pointing at licensing, and it produced five or six custom technical workarounds, each more elaborate than the last.
The real cause was simple. The vendor had released a new installer file, hadn't documented its new location, and stored it somewhere unexpected. Once the technician tracked down the documentation and the installer himself, the problem was resolved in minutes. By that point, the AI assistant had dragged him through a 15-minute detour, inventing solutions to a problem that didn't exist.
What kept the delay to 15 minutes was the technician's experience and the forethought to keep investigating on his own rather than relying solely on the AI. Had the human not been in the loop, I suspect the AI would still be vibe-coding a solution at this very moment.
Where does AI actually help in IT?
None of this is an argument against AI in IT. Used in the right places, it makes our team noticeably faster. One of the clearest examples is security incident triage. A serious incident can generate thousands of individual data points across logs, alerts, and activity from the devices on a company's network, and the volume problem is well documented: an industry survey of 2,000 SOC analysts found teams receive an average of 4,484 alerts daily, with 67% going unaddressed simply because there isn't time. AI changes the math. It can pull in and assess those thousands of points in minutes, find the actual problems, and let the analyst spend their time on judgment and action instead of volume.
AI also summarizes ticket histories, drafts communications, helps build automation scripts that a human reviews before deployment, and routes work faster than any manual process could. In every one of those cases, AI is making an experienced human faster. The human still owns the decision.
Who's asking the questions?
AI won't push back. It won't ask for more context unless you train or prompt it to. Experienced people know which questions to ask and which answers deserve a second look. Models don't.
Where the line should be
AI works in IT when three rules don't bend:
- AI is never in front of a client. No chatbot taking the front line of support. No AI-generated email going out unread.
- Every person on our team has decades of industry experience. AI supports them. It doesn't replace their judgment.
- No AI output, whether a script, a recommendation, or a diagnosis, gets deployed without a human reviewing and owning it.
That human validation step is what separates an IT operation that uses AI well from one that's letting it make calls it shouldn't.
What you're actually deciding
Should AI be your business's IT support? No, not on its own. The stakes are too high, the environments are too specific, and AI's tendency to sound right, to agree with whoever pushed last, and to fill in missing context with assumptions is exactly the failure mode you cannot afford in the systems your business runs on.
Should your IT support be powered by AI used responsibly? Yes. The best IT teams over the next few years will be the ones that pair experienced practitioners with AI tooling that makes them faster and sharper. The weakest will be the ones that trade judgment for velocity and learn a version of the soda lesson on a production system instead of a pot of syrup.
You'll have to decide how much you're willing to let AI replace human judgment, both inside your own team and inside the provider you hire. Internally, that looks like deciding how aggressively to let your own staff use AI to do work that experienced people used to do. Externally, the same question shows up when you're picking a managed services provider: when an MSP comes in well below market, are you hiring an experienced team that uses AI as a multiplier, or a thin operation leaning on automation to keep headcount down, with no real accountability layer between the AI and your business?
The risk in both directions is the same: AI's confidence behind decisions that needed human judgment.
Frequently asked questions
Can AI replace an IT help desk?
No, not without significant risk. AI tends to agree with whoever is asking and fills in missing context with assumptions rather than asking clarifying questions. Those behaviors are manageable when a human is reviewing the output, and dangerous when they aren't.
What's the biggest risk of using AI for technical decisions?
Two risks working together. AI tends to agree with whoever is asking, which means a confident user with a wrong theory can get that theory validated. AI also fills in missing context with assumptions instead of asking clarifying questions, which can send a team chasing the wrong problem with the AI's confidence attached to the mistake.
Where does AI actually help in IT?
Security incident triage is one of the clearest wins, with AI able to sift thousands of data points in minutes. AI is also effective at summarizing ticket context, drafting user communications, accelerating research, routing work, and helping build automation scripts that humans review before deployment.
Talk to a real human about your IT
If you're weighing how to bring AI into your own business, or trying to decide between an IT provider that uses AI thoughtfully and one that's letting it make calls it shouldn't, we'd be glad to walk you through how we draw that line.
Visit AnnealTech.com or call 512-593-8001 to start a conversation.