Deep fusion of visual signatures for client-server facial analysis

Binod Bhattarai, Gaurav Sharma, Frédéric Jurie

Research output: Chapter in Book/Report/Conference proceeding › Published conference contribution


Facial analysis is a key technology for enabling human-machine interaction. In this context, we present a client-server framework in which a client transmits the signature of a face to be analyzed to the server and, in return, the server sends back various information describing the face, e.g. whether the person is male or female, is bald, has a mustache, etc. We assume that a client can compute one (or a combination) of visual features, from very simple and efficient ones, like Local Binary Patterns, to more complex and computationally heavy ones, like Fisher Vectors and CNN-based features, depending on the computing resources available. The challenge addressed in this paper is to design a common universal representation such that a single merged signature is transmitted to the server, whatever the type and number of features computed by the client, while nonetheless ensuring optimal performance. Our solution is based on learning a common optimal subspace for aligning the different face features and merging them into a universal signature. We have validated the proposed method on the challenging CelebA dataset, on which our method outperforms existing state-of-the-art methods when a rich representation is available at test time, while giving competitive performance when only simple signatures (like LBP) are available at test time due to resource constraints on the client.
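The core idea — projecting heterogeneous client-side features into one shared subspace and merging them into a fixed-size signature — can be illustrated with a minimal sketch. This is not the authors' exact deep-fusion method: the dimensions, the synthetic data, the least-squares alignment to a random anchor space, and the `merge_signature` helper are all illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy dimensions: LBP (256-d), CNN (512-d), shared subspace (64-d).
n, d_lbp, d_cnn, d_common = 200, 256, 512, 64

# Synthetic stand-ins for per-image LBP and CNN features of the same faces.
X_lbp = rng.standard_normal((n, d_lbp))
X_cnn = rng.standard_normal((n, d_cnn))

# A shared target embedding; here a random projection of the CNN features
# serves as the anchor space (an assumption for illustration only).
anchor = X_cnn @ rng.standard_normal((d_cnn, d_common))

# Learn one linear map per feature type via least squares so that every
# feature type aligns to the same common subspace.
W_lbp, *_ = np.linalg.lstsq(X_lbp, anchor, rcond=None)
W_cnn, *_ = np.linalg.lstsq(X_cnn, anchor, rcond=None)

def merge_signature(feats):
    """Project whatever features the client computed and average them
    into a single fixed-size signature to transmit to the server."""
    projected = [x @ W for x, W in feats]
    return np.mean(projected, axis=0)

# A client with both features sends the same-size signature as a
# resource-constrained client with LBP only.
sig_rich = merge_signature([(X_lbp[0], W_lbp), (X_cnn[0], W_cnn)])
sig_lbp = merge_signature([(X_lbp[0], W_lbp)])
assert sig_rich.shape == sig_lbp.shape == (d_common,)
```

The key property this preserves is the one the paper targets: the server receives a signature of identical dimensionality regardless of which features the client could afford to compute.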
Original language: English
Title of host publication: Tenth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP 2016)
Publication status: Published - 2016

Bibliographical note

ACM ICVGIP (Best Paper Runner Up)
