A Few Thoughts about Artificial Intelligence in Technology Support

“Statistics allow you to do exactly the wrong thing, but with 99 percent confidence.” Not sure if that’s how the saying goes, but perhaps it should. 

There is a trend in our industry to deliver support through AI, which is, of course, a more glamorous term for the application of statistics. It sounds like a good idea: Develop recipes that solve most of the common issues, plus a fully automated way to match a recipe to a specific case. As you accumulate data, you can improve both the recipes and the matching accuracy. Great! 

Why does mac-tech do support differently? A few reasons, starting with our actual client, the human being. The first step is listening, learning, and parsing information that can be expressed in myriad ways. Technical vocabulary, previous experience, and to some degree the speaker’s interpretation always inform the expression, and therefore the accuracy with which a system could ingest and process that information. To normalize this data set, i.e., work around the inherent shortcomings of semantic searches, we could offer a forms-based input system; however, that immediately introduces bias by presenting a fixed set of scenarios with which the user must identify their situation. 

That user becomes part of a system which, in order to find any useful answer, needs to do exactly what a support engineer should stay away from: pattern matching. The scenario that, in the interpretation of the user needing an answer, is the most likely match corresponds to a recipe, which will then, hopefully, produce a solution. 

If an initial match doesn’t produce the desired result, the second most likely scenario and corresponding playbook can be applied, and so forth. 
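The matching-and-fallback loop described above can be sketched in a few lines. This is a deliberately naive illustration, not any real product’s logic: the scenario descriptions, recipes, and the similarity measure (Python’s standard-library `SequenceMatcher`) are all invented for the example, and the user bears the cost of each failed attempt.

```python
# Hypothetical sketch of recipe-based support: rank known scenarios by
# textual similarity to the user's report, then try recipes in
# descending order of confidence until one works.
from difflib import SequenceMatcher
from typing import Callable, List, Optional, Tuple

# Toy knowledge base (scenario -> recipe); entries are invented.
RECIPES = {
    "wifi keeps disconnecting": "reset network settings",
    "disk is full": "clear caches and empty trash",
    "app crashes on launch": "reinstall the application",
}

def ranked_matches(report: str) -> List[Tuple[float, str, str]]:
    """Return (score, scenario, recipe) tuples, best match first."""
    scored = [
        (SequenceMatcher(None, report.lower(), scenario).ratio(), scenario, recipe)
        for scenario, recipe in RECIPES.items()
    ]
    return sorted(scored, reverse=True)

def run_support(report: str, recipe_works: Callable[[str], bool]) -> Optional[str]:
    """Apply recipes most-likely-first; each failed attempt is an
    experiment whose cost falls on the user, not the system."""
    for score, scenario, recipe in ranked_matches(report):
        if recipe_works(recipe):  # the user tries it and reports back
            return recipe
    return None  # nothing matched: a new branch of the decision tree
```

Note what the sketch cannot represent: whether a recipe is safe to try, whether applying it changes the situation irreversibly, or whether the report itself was ambiguous.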

This works very well as long as the cost of trying one or several solutions is not prohibitive. It favors the provider because the cost of presenting solutions is minimal, yet the cost of experimentation is entirely borne by that user or their employer. This burden becomes greater and more apparent as solutions are applied and discarded, increasing the complexity of a given problem, and placing greater demand on the decision maker’s skill (which of the described scenarios applies to me?).  

The truth is that with every action that cannot be reverted, we introduce a new situation and a new branch of the decision tree, sometimes a whole new tree. Yet again, the situation has to be analyzed and a new set of possible matches produced and presented. This creates a moving target, and the loss of time, work, or data quickly escalates. 

Gauging the impact of any action requires awareness of context—something that does not lend itself to an automated forms-based information exchange. An example of a context-aware decision: whether or not to recreate a situation to test a working theory before applying it.  

Although technology support can be systematized and automated, we humans seem to have very little tolerance for the dreary repetitiveness of establishing a picture through a series of binary choices, and still less tolerance for failure, even if it ultimately brings us closer to a solution. Both context awareness and tolerance for failure are extremely non-binary in nature. Human dialogue has proven superior to an automated system in understanding and communicating them. 

Once we have a possible solution, and it appears to suit both the situation and the circumstances, the AI-based system hands off the recipe either to the user, who has to walk through the implementation themselves, or to an engineer who is relatively unaware of the processes, possible assumptions, and less-than-certain statements that led to the solution.  

By contrast, the process of developing a solution with a holistic understanding will almost always produce several alternative plans as well. Our knowledge of and comfort with these solutions can positively inform our decision-making and create the opportunity to pivot on the spot, should plan A turn out to be a dead end.  

With sufficient experimentation, any human or artificial system can accumulate enough data to offer “likely successful” solutions without any true understanding of the causality or science behind an engineering problem. It can be infuriating to watch a scenario play out where a systematized process goes off the rails and stubbornly leads the user down the wrong path without any awareness of the potential for ambiguity in the information provided. 

We do utilize systematized processes for repeatability and accuracy at mac-tech. Automation is a logical next step, and a viable option when upfront cost is the primary driver. However, we believe that good guidance and decision-making require real understanding, and therefore mac-tech puts real people to work—particularly when the touch point is a fellow human.