Interpretable Relational Representations for Food Ingredient Recommendation Systems
Supporting chefs with ingredient recommender systems to create new recipes is challenging, as good ingredient combinations depend on many factors: taste, smell, cuisine style, texture, the chef's preferences, and more. Useful machine learning models need to be accurate, but, especially for food professionals, they must also be interpretable and customizable for ideation. To address these issues, we propose the Interpretable Relational Representation Model (IRRM). The core component of the model is a key-value memory network that represents the relationships between ingredients. The IRRM learns relational representations over a memory network that integrates an external knowledge base; this allows chefs to inspect why certain ingredient pairings are suggested. Our training procedure can also incorporate chefs' ideas as scoring rules into the IRRM. We analyze the trained model by comparing it against rule-based pairing algorithms. The results demonstrate the IRRM's potential for supporting creative new recipe ideation.
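The key-value memory read described in the abstract can be sketched as standard attention over memory slots: a query embedding is matched against key vectors, and the resulting weights blend the value vectors. This is a minimal illustrative sketch only; the function names, dimensions, and use of NumPy are assumptions, not the paper's actual implementation.

```python
# Hedged sketch of a key-value memory read (illustrative, not the IRRM code).
import numpy as np

def softmax(x):
    # Numerically stable softmax over a 1-D score vector.
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_read(query, keys, values):
    """Match the query against keys, then return the attention-weighted
    sum of the value vectors (one read over the memory)."""
    scores = keys @ query          # (num_slots,) similarity scores
    weights = softmax(scores)      # attention distribution over slots
    return weights @ values        # blended value vector, shape (d,)

rng = np.random.default_rng(0)
d = 8                              # embedding dimension (assumed)
keys = rng.normal(size=(16, d))    # 16 memory slots (assumed size)
values = rng.normal(size=(16, d))
query = rng.normal(size=d)         # e.g. an ingredient-pair embedding

read = memory_read(query, keys, values)
print(read.shape)  # (8,)
```

In an interpretable setting, the intermediate `weights` vector is the useful artifact: inspecting which memory slots receive high attention is one way a system could explain why a pairing was suggested.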
Improving Artificial Intelligence with Games
Games continue to drive progress in the development of artificial intelligence.
MocoSFL: enabling cross-client collaborative self-supervised learning
Existing collaborative self-supervised learning (SSL) schemes are not suitable for cross-client applications because of their expensive computation and large local data requirements. To address these issues, we propose MocoSFL, a collaborative SSL framework based on Split Fe…
MECTA: Memory-Economic Continual Test-Time Model Adaptation
Continual Test-time Adaptation (CTA) is a promising approach to securing accuracy gains in continually changing environments. State-of-the-art adaptations improve out-of-distribution model accuracy via computation-efficient online test-time gradient descents, but meanwhile cost …