Mise-en-scène Project

Short Description

Recommendations are traditionally generated on the basis of a user’s implicit and explicit preferences on movie attributes, such as genre, director, and actors. However, a user’s preferences can often be better described by the mise-en-scène characteristics of movies, i.e., the design aspects of a movie production that characterize its aesthetics and style. Lighting, colors, background, and movement in a movie are all examples of mise-en-scène features. Although viewers may not consciously notice movie style, it still affects their experience of the movie. Mise-en-scène also highlights similarities in the narratives, as movie makers typically adapt the overall style to reflect the story, and it can be used to categorize movies at a finer level than traditional movie features.

The Mise-en-scène Project proposes the exploitation of automatically extracted visual design features of movies, based on mise-en-scène characteristics, in the context of recommender systems. It proposes a novel content-based recommender system that automatically analyzes movie content and extracts a set of representative stylistic visual features, grounded on existing approaches of Applied Media Aesthetics, i.e., the theory concerned with the relation between aesthetic media attributes and the perceptual reactions they evoke in consumers of media communication, mainly movies.
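As an illustration only, the sketch below shows how a few low-level stylistic features of this kind (a lighting-key proxy, color saturation, motion, and average shot length) could be computed from a trailer with OpenCV. The specific feature set, function name, and shot-cut threshold are assumptions made for this example and do not reproduce the project’s actual extraction pipeline.

```python
# Illustrative sketch (not the project's exact pipeline): extracting a few
# low-level mise-en-scène style features from a trailer with OpenCV + NumPy.
import cv2
import numpy as np

def extract_visual_features(video_path, shot_cut_threshold=30.0):
    """Return a small dict of stylistic features for one trailer.

    The feature names and the shot-cut threshold are illustrative
    assumptions; the published datasets use their own extraction pipeline.
    """
    cap = cv2.VideoCapture(video_path)
    fps = cap.get(cv2.CAP_PROP_FPS) or 25.0  # fall back if FPS is unknown

    brightness, saturation, motion = [], [], []
    shot_cuts, prev_gray = 0, None

    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        brightness.append(hsv[..., 2].mean())    # lighting key proxy (V channel)
        saturation.append(hsv[..., 1].mean())    # color intensity proxy (S channel)
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        if prev_gray is not None:
            diff = cv2.absdiff(gray, prev_gray).mean()
            motion.append(diff)                  # camera/object motion proxy
            if diff > shot_cut_threshold:        # crude shot-boundary detector
                shot_cuts += 1
        prev_gray = gray
    cap.release()

    n_frames = len(brightness)
    return {
        "avg_shot_length_sec": (n_frames / fps) / (shot_cuts + 1),
        "lighting_key": float(np.mean(brightness)) if brightness else 0.0,
        "color_saturation": float(np.mean(saturation)) if saturation else 0.0,
        "motion": float(np.mean(motion)) if motion else 0.0,
    }
```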

This is a novel, multidisciplinary approach to video recommender systems, from both design and engineering perspectives. It has the potential to influence the research area and the industry behind social video sharing.

Dataset of Visual Features

We have created two datasets:

  • Mise-en-scène dataset: 7 low-level visual features extracted from 13,373 movie trailers. A detailed description of the dataset and the download link are provided on the Mise-en-Scène Visual Dataset page.
  • Mise-en-scène MPEG-7 dataset: low-level MPEG-7 visual features extracted from 3,964 movie trailers. A detailed description of the dataset can be found here and the download link here.
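
As an illustration of how per-movie feature vectors such as those in the datasets above can drive a content-based recommender, here is a minimal sketch that ranks unseen movies by cosine similarity to a user profile built from liked movies. The function name and the expected data layout (a dict of movie id to normalized feature vector) are assumptions for this example, not the dataset’s actual format.

```python
# Minimal content-based recommendation sketch over per-movie visual feature
# vectors. Assumes `features` maps movie_id -> 1-D NumPy vector (z-scored).
import numpy as np

def recommend(features, liked_ids, top_n=10):
    """Rank movies not yet seen by the user by cosine similarity between
    each movie's feature vector and the mean vector of the liked movies."""
    profile = np.mean([features[m] for m in liked_ids], axis=0)

    def cosine(a, b):
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    scores = {m: cosine(v, profile)
              for m, v in features.items() if m not in liked_ids}
    return sorted(scores, key=scores.get, reverse=True)[:top_n]
```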

 

More Details

Springer Journal on Data Semantics article 2016: Content-Based Video Recommendation System Based on Stylistic Visual Features
@Springer @ResearchGate  @Slideshare @Academia

ACM CHI 2016 article (San Jose, CA, USA): Recommending Movies Based on Mise-en-Scene Design
@ACM @ResearchGate @Academia @Mirror

ArXiv TR 2017: Using Mise-En-Scene Visual Features based on MPEG-7 and Deep Learning for Movie Recommendation
@ArXiv @ResearchGate @Academia

CBRecSys 2016 article (Boston, MA, USA): Using Visual Features and Latent Factors for Movie Recommendation
@CEUR-WS @ResearchGate @Academia

EC-WEB 2016 (Porto, Portugal): How to Combine Visual Features with Tags to Improve Movie Recommendation Accuracy?
@Springer @ResearchGate @Academia @Mirror

 

Contacts

YASHAR.DELDJOO at POLIMI dot IT

MEHDI.ELAHI at POLIMI dot IT

PAOLO.CREMONESI at POLIMI dot IT