I tried the face attribnet model, which saves a JSON file for each face detection.
Then I noticed that demo.py doesn't do any post-processing; it simply dumps whatever outputs the model produces to JSON.
Could you provide a solution for processing the model output?
We have a collection of outputs for this model, including a face recognition embedding vector (length 512), a liveness embedding vector (length 32), and probabilities for detecting eye closeness, (sun)glasses, and a mask. Users can check a specific result given the key values in the JSON file.
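For reference, pulling a specific result out of the dumped JSON might look like the sketch below. The file name and every key name (`id_embedding`, `liveness_embedding`, `eye_closeness`, `glasses`, `mask`) are placeholders, not the repo's actual schema, so substitute the keys you see in your own demo.py output:

```python
import json

# Minimal sketch of reading one per-face JSON dump.
# All key names below are hypothetical -- check the actual keys
# emitted by demo.py and substitute them here.
with open("face_0.json") as f:
    result = json.load(f)

id_embedding = result["id_embedding"]              # recognition vector, length 512
liveness_embedding = result["liveness_embedding"]  # liveness vector, length 32
eye_closeness = result["eye_closeness"]            # two values (left/right eye)
glasses_prob = result["glasses"]
mask_prob = result["mask"]
```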
Eye Closeness, Sunglasses, and Mask Probabilities:
Are the probabilities for detecting eye closeness, sunglasses, and masks raw logits or already in the range of [0, 1]?
If they are raw logits, should we apply the sigmoid function to map them to probabilities?
For eye closeness specifically, the output has two values. Does this correspond to the probabilities for the left and right eyes, respectively?
Liveness Embedding Vector:
The liveness embedding vector has a length of 32. How should we interpret or use these embeddings in downstream tasks?
In the demo, the liveness result is shown as a boolean (true/false). How is this boolean value derived from the 32-dimensional embedding vector? Are there specific thresholds or logic applied?
mestrona-3 added the question label on Dec 13, 2024
For eye closeness, glasses, and masks, we output raw logits; you need to apply softmax to get the final probabilities. However, for sunglasses the output is already a probability.
For question 3: yes, the two values in the eye-closeness output correspond to the left and right eyes, respectively.
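As a rough sketch of that post-processing, assuming the logits arrive as plain numeric arrays read from the JSON dump (the example values below are made up):

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    shifted = logits - logits.max(axis=-1, keepdims=True)
    exp = np.exp(shifted)
    return exp / exp.sum(axis=-1, keepdims=True)

# Hypothetical raw logits; replace with the real model outputs.
# Sunglasses is omitted because, per the answer above, that output
# is already a probability.
eye_logits = np.array([1.7, -0.4])   # eye-closeness head
mask_logits = np.array([-2.1, 2.1])  # mask head

print(softmax(eye_logits))   # e.g. [0.891 0.109]
print(softmax(mask_logits))  # e.g. [0.015 0.985]
```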
Regarding the liveness embedding vectors, they are subject-specific. This means that to determine the liveness of a specific person, you first need to obtain the embedding of a real face photo of that person. Then, compare any later embedding against this reference using L2 distance or cosine similarity. Alternatively, you may train a classifier of your own on these embeddings.
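A minimal sketch of that comparison, assuming both embeddings are available as 32-dimensional NumPy arrays; the threshold below is illustrative, not something the repo specifies, so calibrate it on your own data:

```python
import numpy as np

def cosine_similarity(a: np.ndarray, b: np.ndarray) -> float:
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

def l2_distance(a: np.ndarray, b: np.ndarray) -> float:
    """Euclidean (L2) distance between two embedding vectors."""
    return float(np.linalg.norm(a - b))

# reference: 32-d liveness embedding from a known-real photo of the subject
# probe:     32-d liveness embedding from the frame being checked
reference = np.random.rand(32)  # placeholder -- use the real embedding
probe = np.random.rand(32)      # placeholder -- use the real embedding

SIM_THRESHOLD = 0.8  # illustrative only; tune on your own data
is_live = cosine_similarity(reference, probe) >= SIM_THRESHOLD
print(is_live, l2_distance(reference, probe))
```

A boolean like the one shown in the demo would fall out of a comparison like this, though the demo's exact threshold and logic would need to be confirmed in its source.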