Is your organization's leadership trying to find the business value in AI and ML? Here's how you can help.
There’s no doubt that, when applied effectively, machine learning (ML) and artificial intelligence (AI) have proven potential to deliver significant value and cutting-edge technological innovation.
But many organizations are struggling with the “effectively” part.
Even though businesses are increasingly undertaking initiatives to leverage ML and AI, many tools and projects lack appropriate resources, are far less productive than they should be, lag in deployment, and fail or are abandoned.
In short, the business value is rarely captured — and very often falls short of expectations — because significant time, resources, and budgets are being wasted, according to a 2021 survey of ML practitioners, “Too Much Friction, Too Little ML.”
“Building AI is hard,” said Gideon Mendels, CEO and co-founder of Comet, the enterprise ML development platform company that commissioned the survey. “ML is often a slow, iterative process with many potential pitfalls and moving parts. Adding to that challenge, the tools and processes for ML development are still being developed. Most companies are still trying to figure out their processes and stack.”
More than 500 enterprise ML practitioners across the U.S. took part in the online survey, which Comet conducted with research company Censuswide. Its look at ML development experiences, and at the factors that affect teams' ability to deliver expected business value, revealed that many tools and processes are often "nascent, disconnected, and complex," according to Mendels.
Meeting the potential of ML and AI
“There has been so much enthusiasm around AI, and ML specifically, based on its potential over the past several years, but the realities of generating experiments and deploying models have often fallen well short of expectations,” said Mendels. “We wanted to look deeper into where the friction lies so that issues can be addressed.”
Notably, 68% of respondents said they scrapped anywhere from 40 to 80% of experiments.
As such, there is a severe lag in model deployment:
- Just 6% of surveyed teams reported being able to make a model live in less than 30 days.
- 43% said they required up to three months to deploy a single ML project.
- 47% said they required four to six months to deploy a single ML project.
This was due to the breakdown and mismanagement of data science lifecycles beyond the typical iterative process of experimentation. Reported impediments included lack of infrastructure, API integration errors, reproducibility failures, and debugging failures.
It's true that running, adjusting, and re-running experiments is integral to the model development process, Mendels said; this can involve changing the model itself, tweaking its hyperparameters, using different datasets, or changing code to evaluate how each change affects the algorithm.
“All these changes repeatedly happen, sometimes with only minute differences each time,” he said. Yet, this necessary process can make it challenging to determine which experiments and parameters produce which results, whether that has to do with runtime environments, configuration files, data versions, or many other factors.
Poor experiment management can exacerbate this because results can’t be reproduced accurately or consistently. “It can throw an entire project off the rails, wasting countless hours of a team’s work,” Mendels said.
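The bookkeeping Mendels describes (which parameters, data version, and code produced which result) is exactly what experiment tracking captures. Below is a minimal, tool-agnostic sketch of what one tracked experiment record might contain; it is not Comet's API, and the `run_record` helper and field names are hypothetical illustrations:

```python
import hashlib
import json

def run_record(params, data_version, metrics):
    """Bundle everything needed to identify and reproduce one experiment run."""
    record = {
        "params": params,              # e.g. learning rate, batch size
        "data_version": data_version,  # which dataset snapshot was used
        "metrics": metrics,            # e.g. validation accuracy
    }
    # A content hash of the configuration makes it easy to tell two
    # nearly identical runs apart, or to spot exact duplicates.
    record["config_hash"] = hashlib.sha256(
        json.dumps({"params": params, "data_version": data_version},
                   sort_keys=True).encode()
    ).hexdigest()[:12]
    return record

# Two runs that differ only in one hyperparameter get distinct hashes:
run_a = run_record({"lr": 0.01, "batch_size": 32}, "v3", {"val_acc": 0.81})
run_b = run_record({"lr": 0.001, "batch_size": 32}, "v3", {"val_acc": 0.84})
```

Even this simple structure addresses the failure mode in the survey: when runs differ only minutely, a recorded configuration plus its hash lets a team trace which settings produced which result instead of reconstructing it from memory.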
Meanwhile, once models were deployed, more than half (56.5%) of the companies surveyed saw nearly one-quarter of them fail in the real world.
One reason for all this is that budgets are “woefully inadequate”: 88% of respondents have an annual budget of less than $75,000 for ML tools and infrastructure.
Manual and ML don't mix
Without that financial support, ML teams must track experiments manually, and 58% of respondents reported doing so. This, in turn, places enormous strain on workers, hampers team collaboration and model lineage tracking, causes projects to take far longer to complete, hinders model suitability, and leads to mistakes, Mendels pointed out.
Companies are not intentionally withholding budgets or misallocating ML resources: 63% of respondents said their organizations would increase ML budgets in 2022. Yet many still “don’t know what to do” with that funding.
"ML is a fairly new paradigm, and as such, companies are still learning what is required to realize ROI," said Mendels. Many companies' primary focus is on recruiting talent and preparing suitable datasets. Yet substantial investment in the right infrastructure is critical, he said.
Before companies allocate more money and resources to ML programs, they must first address core operational issues (the only way they will see positive ROI, Mendels said) and consider extensibility and the ability to customize. If teams are maxed out and struggling with visibility, reproducibility, and cost-efficiency, they will find it difficult to add models, experiments, and deployments.
“If an organization uses ML, they will achieve more value — faster — by looking at their tools and processes and budgeting appropriately for ML development,” Mendels said. “The best way for businesses to be productive with their AI initiatives is to apply people, processes, and tools strategically across the ML lifecycle.”
Data science teams can improve efficiency and build models faster with platforms such as Comet's, Mendels said. The New York City-headquartered company's platform manages and optimizes the entire ML development workflow, from early experimentation to production. It offers standalone experiment tracking and model production monitoring, and it can run on any infrastructure and within existing software and data stacks.
The company supports a community of tens of thousands of users and academic teams who use its platform for free, and some of its high-profile enterprise customers include Ancestry, Cepsa, Etsy, Uber, and Zappos.
Ultimately, Mendels emphasized that tools for building ML have evolved dramatically in recent years. The field continues to expand and improve to help solve the problems identified in the survey.
“Leading-edge companies that have implemented modern AI development platforms realize the benefits, full potential, and value from their machine learning initiatives,” he said, “which is quite exciting.”