When I look at AI efforts from companies like Microsoft, the focus is on productivity, which has been the primary benefit of most technological advances over the years. That is because productivity gains are far easier to quantify financially than any other metric, including quality. The result has been a lack of critical attention to quality, and to the quality problems of AI platforms, as highlighted by the recent WSJ head-to-head AI comparison that ranked Microsoft’s Copilot last.
This is particularly problematic for Copilot because it is used for coding. Errors introduced into code have broad implications for both quality and security, because those problems arrive at machine speed, potentially faster than they can be found and corrected.
In addition, AI is being aimed at the things users want to do themselves, while still leaving users to perform chores like checking and commenting code. It brings to mind the meme that argued, “What I wanted AI to do was clean my house and do my laundry so I have more time to do the things I like, such as drawing, writing creatively, and making music. Instead, AI is being created to draw, write creatively, and create music, leaving me to do the things I hate doing.”
Where AI Needs to Be Focused
We do have labor shortages that need addressing, and AI offerings like Devin are being spun up to address them. But while productivity is important, productivity without a focus on direction is problematic. Let me explain what I mean.
Back when I was at IBM and moving from Internal Audit to Competitive Intelligence, I took a class that has stuck with me over the years. The instructor used an X/Y chart to highlight that when it comes to executing a strategy, most companies focus nearly immediately on accomplishing the stated goal as rapidly as possible.
The instructor argued that the first step should not be speed; it should be ensuring you are going in the right direction. Otherwise, you are moving ever faster away from where you should be going because you never validated the goal.
I have seen this play out over the years at every company I have worked for. Ironically, it was often my job to validate direction, but decisions were usually made before my work was submitted, or the decision-maker viewed me and my team as a threat: if we were right and they were wrong, it would reflect badly on their reputation. I initially attributed this to confirmation bias, our tendency to accept information that validates a prior position and reject anything that does not. But I later learned about Argumentative Theory, which holds that we are hardwired, going back to our days as cave dwellers, to fight to appear right regardless of whether we are right, because those seen to be right got the best mates and the most senior positions in the tribe.
I think the reason we do not focus AI on helping us make better decisions is largely Argumentative Theory: executives reason that if AI can make better decisions than they can, they become redundant. So why take that risk?
But bad decisions, as I have personally seen repeatedly, are company killers. Sam Altman’s apparent appropriation of Scarlett Johansson’s voice, the way OpenAI fired Altman, and the prioritization of speed over AI quality are all potentially catastrophic decisions. Yet OpenAI seems uninterested in using AI to fix the problem of bad decisions, particularly strategic ones, even though we are plagued by them.
Wrapping Up
We are not thinking about a hierarchy of where AI needs to be focused first. That hierarchy should start with decision support, move to enhancing employees before replacing them with Devin-like offerings, and only then move to speed, so that we avoid going in the wrong direction at machine speed.
Using Tesla as an example, the push to get Autopilot to market before it could actually do the job of an autopilot has cost a significant number of avoidable lives. Personally and professionally, we are plagued by bad decisions that cost jobs, reduce our quality of life (consider global warming), and damage our relationships.
Our lack of focus on, and resistance to, AI that helps us make better decisions is likely to result in catastrophic outcomes that could otherwise be avoided. We should be focusing far more on ensuring these mistakes are not made than on speeding up the rate at which we make them, which is, unfortunately, the path we are on.
About the author: As President and Principal Analyst of the Enderle Group, Rob Enderle provides regional and global companies with guidance on how to create credible dialogue with the market, target customer needs, create new business opportunities, anticipate technology changes, select vendors and products, and practice zero-dollar marketing. For over 20 years, Rob has worked for and with companies like Microsoft, HP, IBM, Dell, Toshiba, Gateway, Sony, USAA, Texas Instruments, AMD, Intel, Credit Suisse First Boston, ROLM, and Siemens.
Related Items:
The Best Strategy for AI Deployment
How HP Was Able to Leapfrog Other PC/Workstation OEMs to Launch its AI Solution
Why Digital Transformations Failed and AI Implementations Are Likely To