It’s widely understood that companies in nearly every industry will need to implement some level of machine learning in order to remain competitive.
With the vast amounts of data companies accumulate, they need to make that data work for them: predicting the likelihood of a loan defaulting, the fashion trends customers will want next summer, or how many buses are operating in specific regions and how they affect traffic.
Before machine learning, these were questions that were nearly impossible to answer with any accuracy.
Data is the lifeblood of machine learning
Creating a machine-learning algorithm that enables software to conduct this type of predictive analysis doesn’t happen overnight. It requires sorting through vast amounts of data (yes, big data), then labeling and cleaning it to build, train and retrain an algorithm until it can identify precisely what you are hoping to find.
Accomplishing this can certainly be a tedious process: poring over tons of data simply to mark specific things. To complete the process as quickly as possible, most AI teams have relied on the power of many “bodies”: anyone with eyes to identify objects or text and label them can be put on the job.
They’ve also used the power of the masses to gather this data, an approach often called crowdsourcing, or the use of public crowds.
The problem with machine learning software: It takes things too literally
While putting many bodies on the job to prepare data has often worked well, this strategy sometimes doesn’t provide an accurate enough picture to train an algorithm. It can produce bad data.
For example, if a public crowd is asked to identify all images of tigers, it will select each and every tiger and label it simply as a tiger. But machine learning tools are really only as smart as the information they are fed. If an algorithm is trained to see an image of a tiger sitting in grassland and to identify it as a tiger, what happens when an image of a domestic cat is displayed, which just so happens to be sitting in grassland? The machine learning tool thinks that since it is a furry animal sitting in grassland, it must be a tiger.
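The tiger-versus-house-cat failure above is a classic case of spurious correlation: when every training image is labeled only “tiger,” the learner has no way to know which features (stripes, fur) actually belong to the animal and which (grassland) are just background. A minimal sketch, using hypothetical toy data and a deliberately naive learner, shows how the mistake arises:

```python
# Toy training set (hypothetical): each image is reduced to a set of
# visible features, and the public crowd has labeled the whole image
# simply "tiger" -- including the grassland background.
training = [
    ({"stripes", "furry", "grassland"}, "tiger"),
    ({"stripes", "furry", "grassland"}, "tiger"),
]

# A naive learner treats every feature that co-occurs with the label
# "tiger" as evidence of a tiger.
tiger_features = set()
for features, label in training:
    if label == "tiger":
        tiger_features |= features

def classify(features):
    # Predict "tiger" if the image shares any learned "tiger" feature.
    return "tiger" if features & tiger_features else "not a tiger"

# A house cat sitting in grassland shares "furry" and "grassland" with
# the training images, so the naive model misfires.
print(classify({"furry", "grassland"}))  # → tiger
```

Nothing here is specific to any real library; the point is simply that coarse, whole-image labels let the background leak into the model’s notion of “tiger.”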
Enter the private crowd
As this example shows, it’s important to use data specialists to sort through, clean and label data for algorithm training, not just random, anonymous crowds.
Data specialists who are part of a private crowd understand the nuances of how algorithms are taught and how smart apps learn. They can get into the minds of the smart apps, so to speak, label and tag every single thing in an image, and see the image as a machine-learning tool would.
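What “label and tag every single thing” might look like in practice is an annotation record where each object and region gets its own entry, rather than one coarse label for the whole image. The schema below is a hypothetical illustration, not any particular tool’s format:

```python
# Hypothetical private-crowd annotation: every object in the image is
# tagged separately, so the background can be distinguished from the
# animal during training.
annotation = {
    "image_id": "img_0001",
    "objects": [
        {"label": "tiger", "attributes": ["striped", "furry"]},
        {"label": "grassland", "attributes": ["background"]},
    ],
}

# With the background labeled explicitly, a training pipeline can
# exclude it as evidence when learning what a "tiger" looks like.
animal_labels = [
    obj["label"]
    for obj in annotation["objects"]
    if "background" not in obj["attributes"]
]
print(animal_labels)  # → ['tiger']
```

Production annotation formats (bounding boxes, segmentation masks, and so on) carry far more detail, but the design choice is the same: per-object labels give the algorithm the context that a single whole-image tag throws away.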
Private crowds can provide extremely accurate and precise identification of objects, images and data sets to ensure the highest quality algorithms.
So the term “smart app” may actually be a bit misleading. Machine-learning tools don’t start out achieving Mensa status. But with good data (gathered, labeled, tagged and cleaned by proficient private crowds) and proper training, they can learn to become A students.
As the role of private crowds grows to keep pace with the need for ever-smarter machine learning algorithms, stay tuned for more updates, observations and advice, as we explore this new terrain together.
This article is published as part of the IDG Contributor Network.
Originally posted on www.infoworld.com, 2017-08-10.