Engineers at MIT are trying to help you figure out what recipes create the food in your Instagram-worthy photos.
A newly released deep-learning algorithm from the Computer Science and Artificial Intelligence Laboratory (CSAIL) aims to predict possible recipes from a picture of a finished dish. A study released today outlines the new method, called Pic2Recipe, which identifies a dish’s ingredients with a 65 percent success rate.
The system uses neural networks to help determine what goes into a dish, attempting to gain “valuable insight into health habits and dietary preferences.”
Pic2Recipe was built using the Swiss-made Food-101 Data Set, a dataset that already contains over 100,000 food images. This information is cross-referenced with another database, created by the engineers themselves, of over one million images drawn from popular food blogging and recipe sites. This allows Pic2Recipe to learn from a vast number of pictures and dishes and make an educated guess.
“Guess” is still the right word for this new system—it needs some work before seeing any kind of practical use. It identifies ingredients only 65 percent of the time, and the biggest problem is the images the algorithm is drawing from.
“Work continues on improving the system,” a video accompanying the study reads, “including inferring how the food is prepared.”
Foods with ambiguous recipes, such as smoothies and sushi rolls, tend to confuse the system. Dishes with many variations, such as specific types of curries, can also throw it off.
You don’t have to wait to try Pic2Recipe: an online demo where you can upload your own pictures is live right now.