
New AI readiness report reveals insights into ML lifecycle

Machine learning teams are drowning in data quality issues, according to a recent survey of more than 1,300 ML practitioners.

Data quality is the biggest challenge faced by machine learning (ML) teams when acquiring training data, according to a recent survey of more than 1,300 practitioners in the field.

A third of respondents said they encounter data quality problems, followed by issues with collection, analysis, storage and versioning, according to the Zeitgeist: AI Readiness Report by Scale AI.

These problems must be addressed because they have a “significant downstream impact” on ML efforts, and teams often cannot model effectively without quality data, the report said.

In the report, ML teams said they find it difficult to sort through data volume, complexity and scarcity. Unstructured data poses a particular challenge. Practitioners find that curating data for their models affects how quickly they can deploy their ML projects. Without high-quality data, teams cannot create robust models.

Variety, volume and noise

Factors contributing to data quality include variety, volume and noise.

In the survey, 37% of respondents said they find it difficult to obtain the variety of data they need to improve model performance, a challenge felt most acutely by those working with unstructured data.

Since most data today is unstructured, ML teams must have a strategy for managing this data to enhance data quality.

ML teams working with unstructured data are more likely than those working with semi-structured or structured data to have too little data.

Most respondents reported problems with their training data, with data noise the largest headache (67%), followed by data bias (47%) and domain gaps (47%). Only 9% reported no such issues.

The report offers these five tips for data-centric AI development from Andrew Ng, co-founder of Google Brain:

  1. Make labels consistent
  2. Use consensus labeling to spot inconsistencies
  3. Clarify labeling instructions
  4. Toss out noisy examples (because more data is not always better)
  5. Use error analysis to focus on a subset of data to improve
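Tip 2, consensus labeling, can be sketched in a few lines of Python: collect labels for each example from several independent annotators, accept the majority label when agreement is high enough, and flag the rest for review. The function name, data shape and two-thirds threshold below are illustrative assumptions, not details from the report.

```python
from collections import Counter

def flag_inconsistent_labels(annotations, min_agreement=2/3):
    """Split examples into consensus labels and items needing review.

    annotations: dict mapping example_id -> list of labels from
                 independent annotators (threshold is an assumption).
    """
    consensus, flagged = {}, []
    for example_id, labels in annotations.items():
        # Most common label and how many annotators chose it
        top_label, count = Counter(labels).most_common(1)[0]
        if count / len(labels) >= min_agreement:
            consensus[example_id] = top_label
        else:
            flagged.append(example_id)
    return consensus, flagged

# Three annotators label the same images; "img3" lacks a clear majority.
votes = {
    "img1": ["cat", "cat", "cat"],
    "img2": ["dog", "dog", "cat"],
    "img3": ["cat", "dog", "bird"],
}
consensus, flagged = flag_inconsistent_labels(votes)
print(consensus)  # {'img1': 'cat', 'img2': 'dog'}
print(flagged)    # ['img3']
```

Disagreement rates surfaced this way also feed tip 3: examples that annotators split on usually point to ambiguous labeling instructions rather than careless work.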

To read the rest of this story, visit our sister site AI Business.
