A team of researchers from Tel-Aviv University developed a neural network capable of reading a recipe and generating an image of what the finished, cooked product would look like.

The Tel-Aviv team, consisting of researchers Ori Bar El, Ori Licht, and Netanel Yosephian, created their AI using a modified version of a generative adversarial network (GAN) called StackGAN V2, trained on 52K image/recipe pairs from the gigantic Recipe1M dataset.

In short, the team developed an AI that can take almost any list of ingredients and instructions and figure out what the finished dish looks like.
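To make the idea concrete, here is a minimal, untrained sketch of text-conditioned image generation in the spirit of StackGAN-style models: a recipe is encoded into a vector, concatenated with random noise, and mapped to pixels. Everything here (the toy hash-based encoder, the single linear generator, the dimensions) is an illustrative assumption, not the researchers' actual code, which learns these mappings adversarially across multiple resolution stages.

```python
import numpy as np

rng = np.random.default_rng(0)

EMBED_DIM = 128   # assumed recipe-embedding size (illustrative)
NOISE_DIM = 100   # latent noise size (illustrative)
IMG_SIZE = 64     # coarse-stage output; StackGAN generates coarse-to-fine

def embed_recipe(recipe_text: str) -> np.ndarray:
    """Toy stand-in for a learned text encoder: hash words into a vector."""
    vec = np.zeros(EMBED_DIM)
    for word in recipe_text.lower().split():
        vec[hash(word) % EMBED_DIM] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm else vec

def generator(noise: np.ndarray, cond: np.ndarray, W: np.ndarray) -> np.ndarray:
    """Toy generator: one linear map from [noise; condition] to pixels."""
    z = np.concatenate([noise, cond])
    img = np.tanh(W @ z)  # pixels squashed to [-1, 1], as GAN outputs usually are
    return img.reshape(IMG_SIZE, IMG_SIZE, 3)

# Random, untrained weights -- a real model learns these against a discriminator.
W = rng.normal(scale=0.02, size=(IMG_SIZE * IMG_SIZE * 3, NOISE_DIM + EMBED_DIM))

cond = embed_recipe("2 eggs, flour, sugar; whisk and bake for 20 minutes")
img = generator(rng.normal(size=NOISE_DIM), cond, W)
print(img.shape)
```

The key design point this illustrates is the conditioning: because the recipe embedding is part of the generator's input, different recipes steer the same noise toward different images, which is what lets one network cover many dishes.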

One of the researchers explained how the team settled on this direction. Their initial idea, recovering a full recipe from a photo, seemed out of reach: "After thinking about this task for a while I concluded that it is too hard for a system to get an exact recipe with real quantities and with 'hidden' ingredients such as salt, pepper, butter, flour, etc."

So the team took on the inverse task: generating food images based on the recipes. "We believe that this task is very challenging to be accomplished by humans, all the more so for computers."
