What A Research Firm Learned From Hundreds Of AI Project Failures
Kathleen Walch, Contributor, COGNITIVE WORLD Contributor Group. Opinions expressed by Forbes Contributors are their own.
Aug 29, 2022, 01:30am EDT
AI projects aren't dying because of big problems; it's the small things that cause them to fail. It's very easy to get excited at the beginning of an AI project, when the future holds all sorts of possibilities.
But then the day-to-day realities of making AI projects work interfere with those fantasy visions, and AI projects are left to languish, eventually failing. One of the biggest causes of that "death by a thousand cuts" is that organizations fail to factor in the long-term costs of the project. Leadership budgets only for the initial costs, forgetting about maintenance and upkeep.
Research firm Cognilytica concluded, in an analysis of hundreds of AI project failures, that organizations do not understand that AI project lifecycles are continuous. Organizations often budget for the first few iterations of a project, with all the attendant data preparation, cleansing, model training, data labeling, model evaluation, and iteration, but fail to keep up that budgeting for ongoing iterations. Organizations also have to continually monitor models and data for decay, retrain the model as needed, and take future expansions and iterations into consideration.
This results in organizations having a misaligned or unjustified ROI for the AI project over time.
Considering the cost of continuous model iteration
But how exactly do you go about doing this in a thoughtful way? The challenge for most organizations is that they treat AI projects as one-time, proof-of-concept or pilot applications, failing to set aside funds, resources, and people for continuous model evaluation and retraining. As a data-driven project, AI is not a one-time investment.
People often don't realize they'll need to allocate money, people, and resources to the continued iteration and development of a model once it's in production. If, at the beginning of the project, organizations only take into consideration the cost of building the first version of the model, they often run into issues. In considering AI project cost and ROI, AI project owners need to ask how much it will cost to maintain the model and how much they're willing to invest in ongoing data preparation and model iteration.
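To make the budgeting gap concrete, here is a minimal back-of-the-envelope sketch in Python. All figures are hypothetical assumptions, not benchmarks from the research; the point is only that ongoing iteration costs can rival or exceed the initial build cost:

```python
# Hypothetical lifetime-cost sketch for an AI project.
# Every figure below is an illustrative assumption.

initial_build = 250_000        # first model version: data prep, labeling, training
retrains_per_year = 4          # model retrained quarterly as data drifts
cost_per_retrain = 30_000      # data refresh, labeling, evaluation per iteration
monitoring_per_year = 40_000   # drift monitoring, infrastructure, staff time
years = 3

ongoing = years * (retrains_per_year * cost_per_retrain + monitoring_per_year)
total = initial_build + ongoing

print(f"Initial build:     ${initial_build:,}")
print(f"Ongoing (3 years): ${ongoing:,}")
print(f"Lifetime total:    ${total:,}")
# With these assumed numbers, the ongoing cost ($480,000) is nearly
# twice the initial build -- the part leadership often fails to budget.
```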
One best practice learned from successful AI project owners is that AI projects aren't a one-time delivery of functionality. Rather, AI projects should be seen as continuous, iterative cycles with no fixed finish line. In much the same way that cybersecurity projects aren't one-time things, neither are data-driven projects, of which AI projects are a subtype.
The real world continually changes, and so does the data. Even a model that works very well at the beginning will degrade over time. Data drift and model drift are inevitable.
Furthermore, as organizations develop expertise and knowledge in applying AI, the use cases, models, and data change along with those updated applications. In addition, the global economy and world patterns continue to change in unanticipated ways. This makes long-term planning for any project, let alone complex AI projects, very difficult.
In the past two years, retailers likely didn't anticipate supply chain and labor shocks, and organizations didn't anticipate the shift to a home-based workforce. As the real world changes and user behavior quickly morphs, the data also changes, and models need to change with it. This is why continuous monitoring and iteration of your model is of paramount importance, and why you must account for data drift and model drift.
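The article doesn't prescribe any particular tooling, but as one hedged illustration of what "monitoring for data drift" can look like in practice, here is a minimal sketch using the Population Stability Index (PSI), a common drift statistic, implemented with numpy. The feature distributions, drift amounts, and thresholds are assumptions made up for this example:

```python
import numpy as np

def psi(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Population Stability Index between a training-time (expected)
    feature distribution and a production-time (actual) one."""
    # Bin edges come from the training data so both samples share bins.
    edges = np.histogram_bin_edges(expected, bins=bins)
    edges[0], edges[-1] = -np.inf, np.inf  # catch out-of-range production values
    e_pct = np.histogram(expected, bins=edges)[0] / len(expected)
    a_pct = np.histogram(actual, bins=edges)[0] / len(actual)
    # Clip to avoid division by zero / log(0) on empty bins.
    e_pct = np.clip(e_pct, 1e-6, None)
    a_pct = np.clip(a_pct, 1e-6, None)
    return float(np.sum((a_pct - e_pct) * np.log(a_pct / e_pct)))

# Hypothetical example: a feature whose distribution shifts in production.
rng = np.random.default_rng(0)
train_feature = rng.normal(loc=0.0, scale=1.0, size=10_000)
prod_feature = rng.normal(loc=0.5, scale=1.3, size=10_000)  # drifted

score = psi(train_feature, prod_feature)
# A common rule of thumb: PSI < 0.1 is stable, 0.1-0.25 is moderate drift,
# and > 0.25 is significant drift worth a retraining review.
print(f"PSI = {score:.3f} -> {'retrain review' if score > 0.25 else 'ok'}")
```

Running a check like this on each feature as new production data arrives is one simple way to turn "continuous monitoring" from a slogan into a scheduled job.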
Take iteration into consideration: methodology and MLOps
It's also important to consider continuous model iteration as organizations plan to expand or enhance their models. Organizations with models that anticipate buying patterns in North America, for example, are looking to expand those models to other geographies. As such, there is a need to continuously iterate on the model and data to accommodate the new data requirements.
These factors all mean that organizations need to secure additional funding for iterations, as well as ensure they properly identify data sources and other critical factors. Successful organizations are also realizing that they need to follow a proven iterative and agile approach for AI project expansion to succeed. Iterative approaches borrowing from Agile and from data-centric project management approaches such as the Cross-Industry Standard Process for Data Mining (CRISP-DM), enhanced with AI capabilities, are helping ensure that key steps aren't skipped in iterative project plans.
As the markets for AI continue to evolve, Machine Learning Model Operationalization Management, referred to as "MLOps", has started to take hold. MLOps focuses on the lifecycle of model development, operationalization, and deployment. MLOps approaches and solutions aim to help organizations manage and monitor their models in a continuously evolving environment.
MLOps builds on related ideas such as DevOps, which focuses primarily on continuous iteration and delivery for development-centric projects, and DataOps, which focuses mainly on moving and manipulating large, continuously changing data sets. MLOps aims to assist with the iteration of AI projects by giving organizations needed visibility into model drift, model governance, and versioning.
While there are many vendors selling MLOps tools, MLOps, like DevOps, is primarily something you "do" and not necessarily just something you "buy". MLOps best practices cover a range of concerns, from model governance, versioning, discovery, monitoring, and transparency to model security and iteration. MLOps solutions also allow multiple simultaneous versions of a model to exist so that their behavior can be tailored to specific needs.
These solutions also keep track of and monitor who has access to which models, along with other areas related to governance and security. Given the need for a process for constant AI iteration, MLOps is being implemented as part of the overall environment for building and managing models. We're likely to see these capabilities increasingly implemented as part of larger AI and ML toolsets, cloud solutions, open source offerings, and machine learning platform solutions.
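The article names no specific products, but as one hedged illustration of the visibility MLOps tooling provides, here is a minimal sketch using the open source MLflow tracking API. The model, data, run name, and registered model name ("buying-patterns-na", echoing the article's North America example) are all placeholders, and registering the model assumes a registry-capable tracking backend:

```python
# Minimal MLOps-style tracking sketch using MLflow (open source).
# Model and data are illustrative; any estimator would do.
import mlflow
import mlflow.sklearn
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

with mlflow.start_run(run_name="quarterly-retrain"):
    model = LogisticRegression(max_iter=500).fit(X_tr, y_tr)
    # Log the knobs and the outcome so each retraining iteration is
    # comparable with the last -- the "visibility" MLOps is after.
    mlflow.log_param("max_iter", 500)
    mlflow.log_metric("test_accuracy", model.score(X_te, y_te))
    # Register the artifact so multiple versions can coexist, be audited,
    # and be promoted or rolled back independently.
    mlflow.sklearn.log_model(model, artifact_path="model",
                             registered_model_name="buying-patterns-na")
```

Each retraining run then appends a new version under the same registered name, which is what makes side-by-side comparison, governance review, and rollback practical.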
With Lots of Failures Comes Lots of Opportunities to Learn
Best practices are extremely important when it comes to MLOps and AI projects as a whole. The problem isn't AI project failure itself, but rather making a mistake so big that the failure is fatal. Rather than killing off all their AI projects, organizations need to approach AI projects iteratively, step by step, using methodologies such as the Cognitive Project Management for AI (CPMAI) methodology and evolving MLOps tools to adopt a best-practice "think big, start small, and iterate often" philosophy.
The one common theme across those hundreds of AI project failures is not that the projects ended, but rather that they didn't leave enough room for the organization to pick itself up and try again.
From: forbes
URL: https://www.forbes.com/sites/cognitiveworld/2022/08/29/what-a-research-firm-learned-from-hundreds-of-ai-project-failures/