
Introduction

The Mercury Cloud platform is an AI computer vision service platform. It provides high-quality CV PaaS services, such as image and video ingestion and processing, and face detection, search, clustering, and recognition. Its advantages are flexible deployment, a high hardware cost-benefit ratio, high service availability, and high-level security measures. The cloud architecture also provides an agile and powerful service framework to support the applications and services of each business side and ensure stable, continuous business growth.

User Manual

This manual provides instructions on how to use the Mercury Cloud management console.

This page is under construction. The management console is not ready yet.

In this manual, terms and figures are described on the assumption that the language setting is English. If you use other languages, the terms and figures may differ. In that case, please read and use them as appropriate.

4 Feature & Image

This page describes how images, including faces, are processed in Mercury Cloud.

4.1 Feature

When we compare faces, add faces to the database, or search for a face in the database, the algorithm does not use the uploaded raw images directly. Instead, features are extracted from the faces within the Mercury Cloud platform when these APIs are used. A feature is a multi-dimensional vector extracted from a face in the image; each face in the image generates a unique feature. A similarity score indicates the distance between the feature vectors of two faces.

Therefore, whenever the Mercury Cloud OpenAPI documents mention face comparison or face searching, they refer to comparing or searching features. Mercury Cloud OpenAPI does NOT store any image binaries or files within the service.
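To make the idea concrete, here is a minimal sketch of how a similarity between two feature vectors can be computed. This is an illustrative cosine similarity with tiny made-up vectors, not Mercury Cloud's actual metric or feature format:

```python
import math

def cosine_similarity(a, b):
    # Similarity between two feature vectors: 1.0 means identical direction.
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(x * x for x in b))
    return dot / (norm_a * norm_b)

# Two hypothetical 4-dimensional features; real features have many more dimensions.
feature_1 = [0.12, 0.80, 0.35, 0.44]
feature_2 = [0.10, 0.82, 0.33, 0.45]
score = cosine_similarity(feature_1, feature_2)  # close to 1.0 for similar faces
```

The higher the score, the closer the two vectors, which is why a similarity near 1 indicates the same person.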

4.2 Image encoding

Mercury Cloud OpenAPIs use base64-encoded image binaries in HTTP requests to transmit image data. It is easy to convert an image to a base64 string in Linux via a bash command. Refer to the following script and command to convert your images.

4.3 Image standards

There are five essential APIs in the service that require base64 encoded image data as input, namely, the Face Detect API (/{app_id}/detect), the Face Compare API (/{app_id}/compare), the Quality Check API (/{app_id}/quality), the Add Feature API (/{app_id}/databases/{db_id}/features), and the Face Search API (/{app_id}/databases/search). The requirements of input images are common among those APIs and are as follows.

  • The image format should be JPG, PNG, BMP, TIFF, or GIF (Only the first frame is accepted).

  • The file size should be smaller than 8MB.

  • The minimum detectable face area should be more than 32x32 pixels.

  • In the Face Detect API (/{app_id}/detect) and the Add Feature API (/{app_id}/databases/{db_id}/features), where batch upload is supported, the number of images in a single API call should be no more than 16.

Higher face image quality means better precision, while a larger image file means a longer API response time. As a best practice, we highly recommend using high-quality, frontal, clear images with a face area over 200x200 pixels, trimmed and compressed to less than 200KB before calling the APIs.
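The standards above can be pre-checked on the client side before an API call. The following sketch validates the 8MB size limit and detects the format from magic bytes; the helper name and exact checks are our own, not part of any Mercury Cloud SDK:

```python
import os

MAX_FILE_SIZE = 8 * 1024 * 1024  # 8MB upper limit from the image standards

MAGIC_BYTES = {
    b"\xff\xd8\xff": "JPG",
    b"\x89PNG\r\n\x1a\n": "PNG",
    b"BM": "BMP",
    b"II*\x00": "TIFF",
    b"MM\x00*": "TIFF",
    b"GIF87a": "GIF",
    b"GIF89a": "GIF",
}

def precheck_image(path):
    """Return the detected format, or raise ValueError if the file
    violates the size or format requirements above."""
    if os.path.getsize(path) >= MAX_FILE_SIZE:
        raise ValueError("file must be smaller than 8MB")
    with open(path, "rb") as f:
        header = f.read(8)
    for magic, fmt in MAGIC_BYTES.items():
        if header.startswith(magic):
            return fmt
    raise ValueError("unsupported image format")
```

Rejecting oversized or unsupported files locally avoids wasted round trips to the API.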

Revision History

Date
Revised content

2022/05/11

  • Updated the API references to V1.5.0.

2021/12/08

  • Updated the API references to V1.4.3.

2021/10/25

  • Started service in Bahrain.

2021/09/29

  • Updated the API references to V1.4.0.

2021/08/11

  • Updated the API references to V1.3.0.

2021/07/21

  • Updated the API references to V1.2.5.

  • Added guides to API.

  • Started service in the USA.

2021/06/15

  • Updated the API references to V1.1.0.

2021/05/17

  • The first edition released.

OpenAPI Manual

This manual provides instructions on how to use the OpenAPI in Mercury Cloud.

This manual is intended for customers who belong to the IT department and who have experience in developing and operating applications that use HTTP/HTTPS-based and JSON-formatted Web APIs.

$ base64 file_path
import base64

def base64_encode_file(file_path):
    # Read the file as raw bytes and return its base64 string.
    with open(file_path, "rb") as handle:
        return base64.b64encode(handle.read()).decode("utf-8")

Overview

This page is under construction.

Revision History

This page is under construction.

Date
Revised content

2021/05/17

  • The first edition released

Release Notes

This page provides the release notes of Mercury Cloud.

The update schedule will be announced on the status dashboard.

Date
Version
Release notes

Known Issues

  • When uploading an image with multiple faces to the Face Detection API, not all faces may be detected. We suggest using images with only one face, or trimming the image to contain only one face, before using this API.

  • When using the quality check API to detect the optimal rotation angle, some images may return the wrong rotation angle. We are currently working on modifying the algorithm to improve the accuracy.

  • If the same image is added and deleted more than 5 times in a feature database with a max_size larger than 100K, the cross-database search (1:N) API may return an empty set even if similar features are registered. To prevent this issue, create a feature database with a smaller size or avoid adding and deleting the same image multiple times.

1 Overview

This page describes the overview of the OpenAPI in Mercury Cloud.

Mercury Cloud OpenAPI offers AI algorithms that detect, recognize and analyze faces in images with high service availability and high-level security measures to ensure the stable and continuous growth of your online business. This service provides several different facial analysis functions.

By utilizing this RESTful API platform, your systems can retrieve and integrate information on Face Detection, Face Quality, Face Verification, Face Identification, as well as feature database management and face feature management. Some quickstart guides to these functions will be introduced in the later chapters.

Face Detection

The API detects faces in images and returns rectangle coordinates representing the locations of the faces. The API also extracts several face-related attributes, such as face angle, gender, age, emotion, etc. All attributes are predicted by AI algorithms and are estimates, not ground-truth classifications.

Refer to Chapter 5 for a quickstart guide to this function.

Face Verification (Face Comparison)

The API detects the largest face in two images and verifies whether these two faces are from the same person. Face verification is also called "one-to-one" or "1:1" matching. Verification can be used in identity verification that matches a snapshot with a previously registered image, like a photo on the driver's license.

Refer to Chapter 6 for a quickstart guide to this function.

Face Identification (Face Searching)

The API searches a detected face among all registered face features in the feature databases and returns the closest results. Face identification is also called "one-to-many" or "1:N" matching. Candidate results are returned based on the similarity with the detected face. After creating a feature database and adding some registration photos to the database, you can perform the face identification with a newly uploaded image.

Refer to Chapter 7 for a quickstart guide to this function.

2022/05/11

V1.5.0

  • Added the update extra_info API

2021/12/08

V1.4.3

  • Fixed several API bugs and documentation errors

2021/09/29

V1.4.0

  • Added the auto-rotation function of input images.

2021/08/11

V1.3.0

  • Added optimal image rotation angle as an output to the Quality Check API.

2021/07/21

V1.2.5

  • Added the Face Quality Check API.

  • Added trace_id to the Get System Info API.

2021/06/15

V1.1.0

  • Added attributes detection to feature-related APIs.

2021/05/17

V1.0.1

  • Changed OpenAPI’s credential headers from “date” to “x-date” to enable more clients to access.

2021/05/17

V1.0.0

  • Beta released for customer trial only.

  • Applied OpenAPIs.

  • Supported multiple tenants to access with AKSK authentication.


6 Face Verification (1:1) Quickstart Guide

This page provides a comprehensive guide to how you can quickly use the face verification function in Mercury Cloud.

The Face Verification API detects the largest face in two images and verifies whether these two faces are from the same person. Face verification is also called "one-to-one" or "1:1" matching. Verification can be used in identity verification that matches a snapshot with a previously registered image, like a photo on the driver's license.

The following steps upload two images, detect the largest face within each image, and compare the likelihood that the two faces are the same person. When detection succeeds, the system returns the comparison result and detected face information.

6.1 Preparation

To start, make sure you have a Python environment installed.

Download and copy the following Python files to your Python path folder.

1KB
auth_headers.py
API Auth Headers Generator
611B
base64_encode.py
Image File to Base64 Converter
1KB
compare_images.py
Compare Faces
605B
api_parameters.py
API Parameters

Open api_parameters.py with a text editor and replace the following parameters with your own info. Refer to Section 3.2 for more details.

# Common parameters. Used for all API calls.
# Base URL for Mercury Open API.
api_url = "https://mercury.japancv.co.jp/openapi/face/v1"
# Provision App Id for API calls.
app_id = "aabbccdd-eeff-0011-2233-445566778899"
# Provision access key for authentication.
access_key = '00112233-4455-6677-8899-aabbccddeeff'
# Provision secret key for authentication.
secret_key = '13579acegijmoqsuwyACEGIJMOPSUWY'

6.2 Send a face comparison request

Try the following command to send a Face Comparison API call that compares the largest faces in two images. Replace the paths with your Python library path and target image file paths, respectively.

python {python_path}\compare_images.py "{image_path}\image1.jpg" "{image_path}\image2.jpg"

The result will be shown as follows. It includes the comparison score field, which shows the similarity of the two detected faces, and the one_face and another_face fields, which include the detection results.

Compare image: {image_path}\image1.jpg {image_path}\image2.jpg
https://mercury.japancv.co.jp/openapi/face/v1/aabbccdd-eeff-0011-2233-445566778899/detect
Http status code: 200
Similarity: 0.9915448
Detect face. rectangle: {'top': 625, 'left': 350, 'width': 793, 'height': 818} angle: {'yaw': -0.42474133, 'pitch': 9.596367, 'roll': 0.07245465}
Predicted attributes:
        Age: 29 ~ 39
        Gender: MALE
        Cap: HAT_STYLE_TYPE_NONE
        Glasses: TRANSPARENT_GLASSES
        Mask: COLOR_TYPE_NONE
Detect face. rectangle: {'top': 637, 'left': 276, 'width': 844, 'height': 834} angle: {'yaw': 4.691873, 'pitch': 10.485169, 'roll': 1.0859865}
Predicted attributes:
        Age: 28 ~ 38
        Gender: MALE
        Cap: HAT_STYLE_TYPE_NONE
        Glasses: TRANSPARENT_GLASSES
        Mask: COLOR_TYPE_NONE

The Similarity stands for the confidence level that the two faces belong to the same person. In this example, we are 99.15% confident that they are the same person.

6.3 Threshold and accuracy

You should decide your own acceptance level, usually called the "threshold," to compare against the similarity score and judge the final result of face verification. This logic should be built into your system; the Mercury Cloud service cannot decide it for you.

Depending on the threshold, the result of face verification may differ. For example, suppose the threshold is set to a strict value of 0.995. The hypothesis that the two faces are the same person is rejected since 0.9915448 < 0.995, even though the comparison score is relatively high. On the contrary, if the threshold is set to a more reasonable value of 0.95, we can accept the hypothesis since 0.9915448 > 0.95.

The threshold setting is a trade-off between the false acceptance rate (FAR) and the false rejection rate (FRR). The higher the threshold, the more likely a false rejection would happen and less likely that a false acceptance would happen.

Different businesses have different use cases and different demands on face recognition accuracy. Common threshold values are set from 0.6 to 0.7 to avoid FAR as much as possible. But please adjust and configure the threshold based on your business requirements and test results.
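The verification decision described in this chapter reduces to a single comparison against your threshold. A minimal sketch of the logic your system would implement (threshold values are illustrative):

```python
def is_same_person(similarity, threshold):
    # Accept the "same person" hypothesis only when the score
    # reaches your chosen threshold.
    return similarity >= threshold

score = 0.9915448  # similarity from the example in Section 6.2

# A strict threshold rejects this pair; a more reasonable one accepts it.
strict = is_same_person(score, 0.995)     # False: 0.9915448 < 0.995
reasonable = is_same_person(score, 0.95)  # True:  0.9915448 > 0.95
```

Raising the threshold trades false acceptances for false rejections, exactly as described above.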

8 Guidance to Advanced Users

This page provides advice and techniques for advanced usage of Mercury Cloud OpenAPI.

8.1 Advanced use of images

Typically, only one face from a single image is used in image-related APIs. But Mercury Cloud offers more powerful functions to fulfill the needs of different user scenarios.

8.1.1 Specify a region in the image

In some scenarios, you might not want to detect faces within the whole image but a specific region, such as a specific area of the ID card. A rectangle area can be specified in some APIs. With the rectangle area specified, the API will only scan face(s) in that region, and only the faces that overlap with that region will be returned. If no rectangle is specified, the whole image will be scanned. Incidentally, specifying a rectangle area is faster than processing the whole image, especially for HD images.

The rectangle field behaves the same in the Face Detection API, the Face Compare API, the Batch Add Faces API, and the Face Searching API.

Request Example

{
  "images": [
    {
      "data": "/9j/4AAQSkZJRgA...Tpi3Q1lZTCn//Z",
      "rectangle": {
        "top": 50,
        "left": 100,
        "width": 500,
        "height": 800
      }
    }
  ]
}

8.1.2 Multiple Faces Detection

The Face Detection function is used in several APIs, namely the Face Detection API, the Face Comparison API, the Quality Check API, the Batch Add Face API, and the Face Searching API. However, the detection behaviors are slightly different. The Face Detection API scans all detectable faces within the image. If an image contains multiple faces, the Face Detection API can detect all faces within the image. The batches.faces in the API response is a list containing all detected faces in the image.

But the Quality Check API, the Face Comparison API, the Batch Add Face API, and the Face Searching API only use the largest face in the image.

Response Example

{
  "results": [ { "code": 0, "message": "Success.", "internal_code": 0 } ],
  "batches": [
    {
      "faces": [
        {
          "rectangle": { "top": 771, "left": 403, "width": 695, "height": 718 },
          ...
        },
        {
          "rectangle": { "top": 92, "left": 274, "width": 132, "height": 130 },
          ...
        }
      ]
    }
  ]
}

8.1.3 Batch Image Detection

In the Face Detection API, the images field in the request and the results and batches fields in the response are all list types, and these lists can contain multiple images. The order of images in the images field of the request body strictly matches the order of items in the results and batches fields of the response, so you can use the same index value to access an image, its detection result, and its detected faces.

Besides the Face Detection API, the Batch Add Face API also supports batch mode. Due to the performance considerations, the maximum number of images within a single request is restricted to 16, and each image should meet the image standards.
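The index alignment between a batch request and its response can be sketched as follows. The dictionaries mirror the simplified request/response examples in this chapter, not the full schemas:

```python
# Simplified batch request: two images, base64 payloads elided.
request = {
    "images": [
        {"data": "<base64 of image 0>"},
        {"data": "<base64 of image 1>"},
    ]
}

# Simplified response: results[i] and batches[i] describe images[i].
response = {
    "results": [
        {"code": 0, "message": "Success.", "internal_code": 0},
        {"code": 0, "message": "Success.", "internal_code": 0},
    ],
    "batches": [
        {"faces": [{"rectangle": {"top": 771, "left": 403, "width": 695, "height": 718}}]},
        {"faces": []},
    ],
}

# The same index i accesses the image, its status, and its faces.
for i, image in enumerate(request["images"]):
    status = response["results"][i]["code"]
    faces = response["batches"][i]["faces"]
```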

8.2 Advanced use of Key and Extra Info

When adding features to the feature database, it is recommended to add a key or extra_info value along with the image so that the added feature can be managed and accessed by methods other than the feature_id. If you add the same face to the feature database several times, you will receive several feature_ids, and duplicate features may exist in the database. Without a key or extra_info value, such features are hard to manage if the feature_id is lost.

8.2.1 Key

The key of a feature is a user-defined string composed of letters, digits, and hyphens ("-"), up to 48 half-width characters long. If a feature represents a person in the database, the key is the index of that person. You can access a feature by its feature_id to retrieve its key and extra_info as well. The value of key does not have to be unique across feature databases, so you can set an identical key value on a set of features.

In most user systems, the system assigns a unique user_id to each user, for example an employee number or a membership number. This unique user_id can be set as the key value in the Batch Add Feature API to map the user_id in the user system to the feature in Mercury Cloud. The user system, not Mercury Cloud, is responsible for maintaining the uniqueness of the key across feature databases.

In the face identification (1:N) case, given a user image, the API response contains the top results with the highest similarity to that face feature. The key in each result helps you quickly identify which user the most similar feature belongs to, so you can promptly apply further business logic for that user.
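As a sketch of this pattern, the payload below reuses an employee number as the key, and a small helper maps a 1:N search hit back to a user. The payload and result shapes are hypothetical and simplified; refer to the API References for the actual Batch Add Feature request schema:

```python
# Hypothetical Batch Add Feature payload: the user system's employee
# number is reused as the feature key.
payload = {
    "images": [
        {"data": "<base64-encoded image>", "key": "EMP-00123"},
    ]
}

def user_id_from_top_hit(search_results):
    # search_results: hits sorted by similarity, each carrying the
    # registered key (hypothetical simplified shape).
    return search_results[0]["key"] if search_results else None
```

With this mapping, a search hit resolves to a user without any lookup by feature_id.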

8.2.2 Extra Info

Mercury Cloud does not offer dedicated columns to store user information, but it provides a more flexible and advanced solution. The extra_info field is a user-defined string that can store up to 1024 half-width characters. It can hold any string, including but not limited to a serialized JSON object, a base64-encoded binary, or a URL of the user's avatar. An example for a membership service can be structured as follows.

{
  "customer_info": {
    "id": 1000000,
    "first_name": "foo",
    "last_name": "bar",
    "create_at": "2021-01-01 15:00:00",
    "avatar": "base_url/2021/1000000_foo_bar.jpg"
  },
  "point_info": {
    "normal_points": {
      "card_no": "000011112222",
      "point": 100.00,
      "enabled": true,
      "expiration": "2022-01-01 00:00:00"
    },
    "2021_bonus_points": {
      "card_no": "000033334444",
      "point": 50.00,
      "enabled": false,
      "expiration": "2021-01-31 23:59:59"
    }
  }
}

In this example, since the name and avatar URL are included in extra_info, your system can rapidly retrieve the registered user's avatar or name without querying your own database.

It is highly recommended to store only frequently used data in extra_info, and in Mercury Cloud in general. As a best practice, avoid saving sensitive personal data, or do so at your own risk, even though Mercury Cloud provides high-level security measures to ensure tenant isolation and data security.

Starting from V1.5.0, you can update the extra_info by using the Update Feature Extra Info API.
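Because extra_info is just a string, a serialized JSON object round-trips cleanly between your system and Mercury Cloud. A minimal sketch:

```python
import json

# Store: serialize structured user data into the extra_info string
# (must fit within 1024 half-width characters).
customer = {
    "customer_info": {
        "id": 1000000,
        "first_name": "foo",
        "last_name": "bar",
        "avatar": "base_url/2021/1000000_foo_bar.jpg",
    }
}
extra_info = json.dumps(customer)
assert len(extra_info) <= 1024

# Retrieve: parse extra_info from a search result without querying
# your own database.
restored = json.loads(extra_info)
```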

8.3 Advanced use of Face Searching

The Face Searching API is the most important API in the 1:N identification scenario. Snapshots captured by cameras are compared with registered features in the feature database to find matching results.

8.3.1 About the search result score

A feature is a multi-dimensional vector. In the face identification scenario, the API compares a given feature with the existing features in the database and calculates how close each pair is. The score in the API response, ranging from 0 to 1, indicates the similarity between them. Different images, angles, brightness conditions, dates, etc. will affect the verification score. Even if you compare two nearly identical images, there will be small differences in their similarity, though the value will be very close to 1.

8.3.2 Min score and Top K

When sending a Face Searching API request, the min_score field can be used as a threshold to limit the lowest score in the response. Only results with a score equal to or higher than min_score will be returned. You should decide your own min_score. Common threshold values are set from 0.94 to 0.96 to avoid FAR as much as possible, but please adjust and configure it based on your business requirements and test results.

Another field, top_k, limits the number of results returned by the API. In most cases only the top result is needed, but getting more top results may be helpful in other cases. Notice that the min_score setting has a higher priority than top_k: if any of the top k features does not fulfill the min_score requirement, it will not be included in the API response.
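The interaction of min_score and top_k can be re-implemented for illustration as follows. This mirrors the documented behavior; it is not the service's actual code:

```python
def select_results(candidates, min_score, top_k):
    # min_score is applied first, so fewer than top_k hits may remain.
    hits = [c for c in candidates if c["score"] >= min_score]
    hits.sort(key=lambda c: c["score"], reverse=True)
    return hits[:top_k]

candidates = [
    {"key": "A", "score": 0.97},
    {"key": "B", "score": 0.95},
    {"key": "C", "score": 0.90},
]
top = select_results(candidates, min_score=0.94, top_k=3)  # only A and B survive
```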

8.3.3 Search across multiple feature databases

In some scenarios, your system may want to store features in multiple feature databases, such as separated feature databases for different regions.

Generally, the Face Searching API runs in a single feature database (region). But in special cases it is necessary to search across all feature databases (regions). The Face Searching API in Mercury Cloud provides this capability by supporting multiple db_id values. When several db_id values are specified, the API searches features across those feature databases and returns the top results for each database separately.

Notice that the cross-database face search increases round-trip network latency, since it performs N database searches when the number of db_id values in the API request is N.

5 Face Detection Quickstart Guide

This page provides a comprehensive guide to how you can quickly use the face detection function in Mercury Cloud.

The Face Detection API detects faces in images and returns rectangle coordinates representing the locations of the faces. The API also extracts several face-related attributes, such as face angle, gender, age, emotion, etc. All attributes are predicted by AI algorithms and are estimates, not ground-truth classifications.

The following steps upload a single image and detect faces within the image. When detection succeeds, the system returns the detected face information.

5.1 Preparation

To start, make sure you have a Python environment installed.

Download and copy the following Python files to your Python path folder.

1KB
auth_headers.py
API Auth Headers Generator
611B
base64_encode.py
Image File to Base64 Converter
2KB
detect_faces.py
Face Detection
605B
api_parameters.py
API Parameters

Open api_parameters.py with a text editor and replace the following parameters with your own info. Refer to Section 3.2 for more details.

# Common parameters. Used for all API calls.
# Base URL for Mercury Open API.
api_url = "https://mercury.japancv.co.jp/openapi/face/v1"
# Provision App Id for API calls.
app_id = "aabbccdd-eeff-0011-2233-445566778899"
# Provision access key for authentication.
access_key = '00112233-4455-6677-8899-aabbccddeeff'
# Provision secret key for authentication.
secret_key = '13579acegijmoqsuwyACEGIJMOPSUWY'

5.2 Send a face detection request

Try the following command to send a Face Detection API call that detects faces in an image. Replace the paths with your Python library path and target image file path, respectively.

python {python_path}\detect_faces.py "{image_path}\image.jpg"

The result will be shown as follows. It includes the detection result fields, showing whether a face has been detected, and the detected face details.

Detect image: {image_path}\image.jpg
Http status code: 200
Detect face. rectangle: {'top': 625, 'left': 350, 'width': 793, 'height': 818} angle: {'yaw': -0.42474133, 'pitch': 9.596367, 'roll': 0.07245465}
Predicted attributes:
        Age: 29 ~ 39
        Gender: MALE
        Cap: HAT_STYLE_TYPE_NONE
        Glasses: TRANSPARENT_GLASSES
        Mask: COLOR_TYPE_NONE

Here, HAT_STYLE_TYPE_NONE means the detected face is not wearing a hat or cap. TRANSPARENT_GLASSES means the detected face is wearing a pair of normal glasses (not sunglasses). COLOR_TYPE_NONE means the detected face is not wearing a mask.

For more details on attributes information, refer to the latest version YAML file or online API manual provided in Chapter 2.

Table: Attributes

Item
Description
Meaning of values

age_lower_limit

The estimated lower limit of age

-

age_up_limit

The estimated upper limit of age

-

st_age

The estimated classification of age

  • ST_CHILD: Child

  • ST_ADULT: Adult

  • ST_OLD: Elderly

gender_code

The estimated classification of gender

  • MALE: Male

  • FEMALE: Female

mustache_style

The estimated classification of mustache

  • MUSTACHE_STYLE_TYPE_NONE: No mustache

  • WHISKERS: Has mustache

respirator_color

The estimated status of mask wearing

  • COLOR_TYPE_NONE: Not wearing a mask

  • COLOR_TYPE_OTHER: Wearing a mask

glass_style

The estimated status of glasses wearing

  • GLASSES_STYLE_TYPE_NONE: Not wearing glasses

  • TRANSPARENT_GLASSES: Wearing normal glasses

  • SUNGLASSES: Wearing sunglasses

cap_style

The estimated status of hat wearing

  • HAT_STYLE_TYPE_NONE: Not wearing a cap

  • CAP: Wearing a cap

st_helmet_style

The estimated status of helmet wearing

  • ST_HELMET_STYLE_TYPE_NONE: Not wearing a helmet

  • ST_HELMET: Wearing a helmet

st_expression

The estimated classification of emotions

  • ST_CALM: calm

  • ST_HAPPY: happy

  • ST_ANGRY: angry

  • ST_SURPRISED: surprised

  • ST_SORROW: sorrow

st_respirator

Reserved and not used

-

3 How to call

This page describes how to call Mercury Cloud OpenAPI.

3.1 Base URL

The base service endpoint is at:

https://domain.com/openapi/face/v1

Mercury Cloud API is served only over HTTPS to ensure your data privacy.

The domain.com might differ according to your service region. Please find this information in your service starting email.

3.2 Common parameters

Each API call requires a few common parameters: the HTTP method, App ID, Access Key, and Secret Key.

GET, POST, and DELETE are the HTTP methods used in Mercury Cloud. Refer to the API References for details on the method of each API. The method is also used in API authentication.

The App ID, the Access Key, and the Secret Key are included in your service starting email. Please keep them in a safe place and do not disclose them to others. They will be used in setting the API URL and calculating the API auth token.

3.3 API authentication

All Mercury APIs require auth tokens to verify valid clients. Two additional headers (x-date and Authorization) are required in each API call. If they are not included, you will get a 401 HTTP error code.

The x-date header

The x-date header uses an RFC-7231 formatted UTC date-time string.

For example:

x-date: Fri, 09 Jul 2021 01:51:02 GMT

This stands for 2021-07-09T01:51:02Z. Please note that x-date is in GMT, not your local time.
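If you build the x-date header yourself, Python's standard library can emit the RFC-7231 form directly; a minimal sketch:

```python
import datetime
from email.utils import format_datetime

# format_datetime(..., usegmt=True) emits the RFC-7231 IMF-fixdate
# form, e.g. "Fri, 09 Jul 2021 01:51:02 GMT".
now = datetime.datetime.now(datetime.timezone.utc)
x_date = format_datetime(now, usegmt=True)
```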

The Authorization header

The Authorization header is generated based on a given URL path, HTTP method, App ID, Access Key, and Secret Key. For some APIs related to Features DBs, the DB ID is also required. The Authorization follows the following format.

Authorization: hmac username="{Access_key}", algorithm="hmac-sha256", headers="x-date request-line", signature="{Signature}"

A common Authorization header example is as follows.

Authorization: hmac username="005c5acf-5ea9-499c-8d3e-690413f9b5b9", algorithm="hmac-sha256", headers="x-date request-line", signature="kUJ6OHiMMBZnxgSEa2ARxVAlgjC2kzjedZgxOz07i+Y="

The hmac username is your Access Key. Replace 005c5acf-5ea9-499c-8d3e-690413f9b5b9 with your own access key.

The signature is a base64 encoded string encrypted by HMAC-SHA256.

signature = encode_base64(hmac_sha256(key={secret_key}, message=concat("x-date: ", {x-date}, "\n", {method}, " ", {URL path}, " HTTP/1.1")))

Let us do it step by step. First, assemble the message before encryption.

A common message example before encryption is as follows.

x-date: Fri, 09 Jul 2021 01:51:02 GMT
POST /openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/detect HTTP/1.1

x-date is the same as the x-date header.

POST is the HTTP method. Make it consistent with the API you are going to use.

/openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/detect is the URL path, where abc1a8a7-038f-4f9a-b98a-5b602978b135 is the App ID. Replace it with your own App ID, and substitute the path with the URL path of your designated API. Some APIs also require a DB ID. For example,

x-date: Fri, 09 Jul 2021 01:51:02 GMT
GET /openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/databases/aed37153-16b6-4f19-a479-302049e44000 HTTP/1.1

Here, aed37153-16b6-4f19-a479-302049e44000 is the DB ID.

Use the Secret key blFWSvhp9pRz2JnRHnfvkFeAuApClhKg to encrypt the first message we created.

x-date: Fri, 09 Jul 2021 01:37:00 GMT
POST /openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/detect HTTP/1.1

Then we will get

91427a38788c301667c604846b6011c550258230b69338de7598313b3d3b8be6

Encode the raw digest bytes (not the hex string) with base64. The signature is finalized as follows.

kUJ6OHiMMBZnxgSEa2ARxVAlgjC2kzjedZgxOz07i+Y=
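You can verify these last two steps yourself: base64 is applied to the 32 raw digest bytes, not to the 64-character hex string. A short check in Python:

```python
import base64
import binascii

hex_digest = "91427a38788c301667c604846b6011c550258230b69338de7598313b3d3b8be6"
# Convert the hex string back to raw bytes, then base64-encode.
signature = base64.b64encode(binascii.unhexlify(hex_digest)).decode("ascii")
# signature == "kUJ6OHiMMBZnxgSEa2ARxVAlgjC2kzjedZgxOz07i+Y="
```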

Using this signature completes the Authorization header composition we saw above.

Authorization: hmac username="005c5acf-5ea9-499c-8d3e-690413f9b5b9", algorithm="hmac-sha256", headers="x-date request-line", signature="kUJ6OHiMMBZnxgSEa2ARxVAlgjC2kzjedZgxOz07i+Y="

You can now use the x-date Header and the Authorization Header to make the API call of /openapi/face/v1/{app_id}/detect.

curl -X POST -d {} 'https://domain.com/openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/detect' \
-H 'x-date: Fri, 09 Jul 2021 01:37:00 GMT' \
-H 'Authorization: hmac username="005c5acf-5ea9-499c-8d3e-690413f9b5b9", algorithm="hmac-sha256", headers="x-date request-line", signature="kUJ6OHiMMBZnxgSEa2ARxVAlgjC2kzjedZgxOz07i+Y="'

3.4 Samples for making headers

import base64
import datetime
import hashlib
import hmac
import urllib.parse

def generate_authorization_headers(access_key, secret_key, url, http_method):
    RFC7231_FORMAT = '%a, %d %b %Y %H:%M:%S GMT'
    xdate = datetime.datetime.utcnow().strftime(RFC7231_FORMAT)
    url_path = urllib.parse.urlparse(url).path
    signature = 'x-date: %s\n%s %s HTTP/1.1' % (xdate, http_method.upper(), url_path)
    crypto = hmac.new(secret_key.encode('utf-8'), signature.encode('utf-8'), hashlib.sha256)
    hmac_signature = base64.b64encode(crypto.digest()).decode('utf-8')
    authorization = '''hmac username="%s", algorithm="hmac-sha256", headers="x-date request-line", signature="%s"''' % (access_key, hmac_signature)
    return {
        "x-date": xdate,
        "Authorization": authorization
    }
//Requires crypto-js. See https://www.npmjs.com/package/crypto-js for more details.

function generate_authorization_headers(access_key, secret_key, url, http_method) {
    var today = new Date();
    var xdate = today.toGMTString();
    var reg = /.+?\:\/\/.+?(\/.+?)(?:#|\?|$)/;
    var url_path = reg.exec(url)[1];
    var signature = "x-date: " + xdate + "\n" + http_method + " " + url_path + " HTTP/1.1";
    var hmac_signature = CryptoJS.HmacSHA256(signature, secret_key).toString(CryptoJS.enc.Base64);
    var authorization = "hmac username=\"" + access_key + "\", algorithm=\"hmac-sha256\", headers=\"x-date request-line\", signature=\"" + hmac_signature +  "\"";
    return "-H 'x-date: " + xdate + "' -H 'Authorization: " + authorization + "'";
}
import java.text.SimpleDateFormat
import java.util.*
import javax.crypto.Mac
import javax.crypto.spec.SecretKeySpec
import java.net.URL

fun generate_authorization_headers(
     access_key: String,
     secret_key: String,
     url: String,
     http_method: String): Pair<String, String> {
    val rfc7231 = SimpleDateFormat("EEE, dd MMM yyyy HH:mm:ss 'GMT'", Locale.US).apply {
        isLenient = false
        timeZone = TimeZone.getTimeZone("UTC")
    }
    val xDate = rfc7231.format(Date())
    val url_path = URL(url).getPath()
    val encodeData = "x-date: $xDate\n$http_method $url_path HTTP/1.1"
    val sha256Hmac = Mac.getInstance("HmacSHA256")
    val secretKey = SecretKeySpec(secret_key.toByteArray(), "HmacSHA256")
    sha256Hmac.init(secretKey)
    val signature = Base64.getEncoder().encodeToString(sha256Hmac.doFinal(encodeData.toByteArray()))
    val authorization = "hmac username=\"${access_key}\", algorithm=\"hmac-sha256\", headers=\"x-date request-line\", signature=\"${signature}\""
    return Pair(xDate,authorization)
}

3.5 "Try it out" in SwaggerHub

SwaggerHub offers an interactive way to test API calls directly from the browser using the "Try it out" button. The Mercury Cloud Interactive API Documentation requires authentication before you can use this function. Here is a guide to using the "Try it out" function in the Mercury Cloud Interactive API Documentation.

3.5.1 Prepare headers

Prepare the x-date header and the Authorization header, respectively, using the code samples.
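As a sketch, the two headers can be generated and printed for pasting into SwaggerHub with a condensed, self-contained version of the Python code sample; the URL and credentials below are placeholders, not real values:

```python
import base64
import datetime
import hashlib
import hmac
import urllib.parse

def generate_authorization_headers(access_key, secret_key, url, http_method):
    # RFC 7231 HTTP-date, e.g. "Thu, 15 Jul 2021 04:11:04 GMT"
    xdate = datetime.datetime.utcnow().strftime('%a, %d %b %Y %H:%M:%S GMT')
    url_path = urllib.parse.urlparse(url).path
    # The signed string covers the x-date header and the HTTP request line.
    signature = 'x-date: %s\n%s %s HTTP/1.1' % (xdate, http_method.upper(), url_path)
    digest = hmac.new(secret_key.encode('utf-8'), signature.encode('utf-8'),
                      hashlib.sha256).digest()
    hmac_signature = base64.b64encode(digest).decode('utf-8')
    authorization = ('hmac username="%s", algorithm="hmac-sha256", '
                     'headers="x-date request-line", signature="%s"'
                     % (access_key, hmac_signature))
    return {"x-date": xdate, "Authorization": authorization}

# Placeholder credentials and URL; substitute your own values.
headers = generate_authorization_headers(
    "00112233-4455-6677-8899-aabbccddeeff",
    "13579acegijmoqsuwyACEGIJMOPSUWY",
    "https://mercury.japancv.co.jp/openapi/face/v1/your-app-id/databases",
    "GET")
print(headers["x-date"])         # paste into the x-date parameter
print(headers["Authorization"])  # paste into the "Value" textbox when authorizing
```

Remember that the headers are only valid for a short time, so generate them immediately before authorizing.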

3.5.2 Select the server

Click the Servers dropdown list and choose the server according to your service area.

Select the server

3.5.3 Authorize with the Authorization header

Click the "Authorize" button to the right of the Servers list.

Authorize

The following window will pop up. Paste the Authorization header into the "Value" textbox and click the "Authorize" button.

Available authorizations

The Authorization header is now set. To re-authorize, click the "Logout" button and repeat the steps above.

The Authorization header is time-sensitive, so you need to re-authorize it every time before you call the API.

Cancel the authorization

3.5.4 Enter the Parameters and have a try

Now we can try an API call, taking the List Feature Databases API as an example.

Try it out button

Click the "Try it out" button; the "Execute" button will appear.

Fill in the x-date and app_id parameters, then click the "Execute" button.

Execute

The API response will be displayed along with other related information.

Execute result

7 Face Identification (1:N) Quickstart Guide

This page provides a comprehensive guide to how you can quickly build your face database and use the Face Identification function.

The Face Identification API searches for a detected face among all registered face features in the feature databases and returns the closest results. Face identification is also called "one-to-many" or "1:N" matching. Candidate results are returned based on their similarity with the detected face. After creating a feature database and adding some registration photos to it, you can perform face identification with a newly uploaded image.

The following steps create a feature database, add a few features to it, upload an image, detect the largest face in the image, and then search for similar faces within the feature database. When a face is detected successfully, the system returns the search results and the detected face information. Finally, the feature database is deleted.

For every API that operates on a feature database, the db_id is the unique key identifying that database. Please note the db_id in the response of the Create Feature Database API (POST /{app_id}/databases); you can also find it later with the List Feature Databases API (GET /{app_id}/databases).

7.1 Preparation

To start, make sure you have a Python environment installed.

Download and copy the following Python files to your Python path folder.

  • auth_headers.py (1KB): API Auth Headers Generator

  • base64_encode.py (611B): Image File to Base64 Converter

  • create_feature_db.py (1KB): Create a Feature DB

  • list_feature_db.py (945B): List Feature DBs

  • check_quality.py (1KB): Check Image Quality

  • add_faces.py (2KB): Add Features

  • search_faces.py (2KB): Face Searching

  • delete_feature_db.py (1KB): Delete a Feature DB

  • api_parameters.py (605B): API Parameters

  • detect_faces.py (2KB): Face Detection

Open api_parameters.py with a text editor and replace the following parameters with your own information. Refer to Section 3.2 for more details.

# Common parameters. Used for all API calls.
# Base URL for Mercury Open API.
api_url = "https://mercury.japancv.co.jp/openapi/face/v1"
# Provision App Id for API calls.
app_id = "aabbccdd-eeff-0011-2233-445566778899"
# Provision access key to authentication.
access_key = '00112233-4455-6677-8899-aabbccddeeff'
# Provision secret key to authentication.
secret_key = '13579acegijmoqsuwyACEGIJMOPSUWY'

7.2 Create a feature database

Use the following command to send an API call that creates a feature database named "foo" with a DB size of 1000. Replace the path with your Python library path. The DB size limits the maximum number of features that can be stored in a single feature database. Note that this number must not exceed the number of IDs purchased in your subscription.

python {python_path}\create_feature_db.py foo 1000

The Create Feature Database API sends a request with the database name and the maximum size. After successful creation, the response contains a unique db_id. The result would be shown as follows.

Create feature database: name: foo max size: 1000
Http status code: 200
response: {
 "trace_id": "de2557f6b435c836dd12447413bc9966",
 "name": "foo",
 "db_id": "4caf27d2-f621-4cbf-acec-a7553f226000",
 "object_type": "OBJECT_FACE",
 "feature_version": 24902,
 "description": "Description for database foo",
 "created_at": "2021-07-15T04:11:04.744807758Z",
 "max_size": 1000,
 "size": 0
}

Please note the db_id in the response body. It will be used when calling other APIs in the next steps.
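For reference, the request that create_feature_db.py sends can be sketched as follows. The body field names name and max_size are inferred from the sample response and command-line arguments above; treat them as assumptions and consult the API reference before relying on them:

```python
import json

def build_create_db_request(api_url, app_id, name, max_size):
    # Create Feature Database endpoint: POST /{app_id}/databases
    url = "%s/%s/databases" % (api_url, app_id)
    # Request body fields inferred from the sample response (assumption).
    body = json.dumps({"name": name, "max_size": max_size})
    return url, body

url, body = build_create_db_request(
    "https://mercury.japancv.co.jp/openapi/face/v1",
    "aabbccdd-eeff-0011-2233-445566778899", "foo", 1000)
print(url)
```

The request would then be sent with the x-date and Authorization headers generated as shown in the authentication code samples.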

7.3 List feature databases

Use the following command to send an API call to confirm the existence of the feature database you have just created.

python {python_path}\list_feature_db.py

The result would be shown as follows.

List feature databases:
Http status code: 200
db_ids: [
 "4caf27d2-f621-4cbf-acec-a7553f226000"
]

You can confirm that the feature database with the same db_id is returned.
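In your own code, confirming that a database exists could look like the sketch below; the "db_ids" field name mirrors the sample output above and is an assumption worth verifying against the API reference:

```python
def database_exists(list_response, db_id):
    # list_response: parsed JSON body of GET /{app_id}/databases;
    # the "db_ids" field name mirrors the sample output (assumption).
    return db_id in list_response.get("db_ids", [])

resp = {"db_ids": ["4caf27d2-f621-4cbf-acec-a7553f226000"]}
print(database_exists(resp, "4caf27d2-f621-4cbf-acec-a7553f226000"))  # True
```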

7.4 Check Image Quality

The Quality Check API analyzes face size, angle, brightness, sharpness, occlusion, etc., and returns values for all factors. It is highly recommended to check image quality before adding faces to the feature database, since higher image quality leads to higher recognition precision.

Use the following command to send an API call to perform a quality check.

python {python_path}\check_quality.py "{image_path}\image.jpg"

The result would be shown as follows.

Check quality: {image_path}\image.jpg
Http status code: 200
Quality results: {
 "angle": {
  "yaw": 0.0027652662,
  "pitch": 7.864833,
  "roll": 0.049663484
 },
 "quality": {
  "distance2center": 0.08779055,
  "size": 0.36317098,
  "brightness": 0.17242007,
  "sharpness": 1,
  "mouth_open": 0.018774271,
  "missing": 1,
  "align_score": 9.999992
 },
 "occlusion": {
  "eye": 0,
  "nose": 0,
  "mouth": 0,
  "eyebrow": 0,
  "face_line": 0,
  "occlusion_total": 0
 },
 "rectangle": {
  "top": 610,
  "left": 332,
  "width": 777,
  "height": 774
 }
}

You should decide your own acceptance level or threshold for each factor returned by the API, and build this logic into your system. Different businesses have different use cases and different demands on image quality.

We provide some reference values that approximate the image quality required for passport photos. Please adjust and configure the thresholds based on your business requirements and test results.

Passport level image quality (value ranges are for reference only):

  • angle.yaw (-10.0~10.0): Yaw in angle.

  • angle.pitch (-15.0~15.0): Pitch in angle.

  • angle.roll (-10.0~10.0): Roll in angle.

  • quality.distance2center (0.2~1.0): Distance between the center of the face and the center of the image, far to near.

  • occlusion.occlusion_total (0.0~0.02): Face occlusion, low to high.

  • occlusion.eye (0.0): Eye occlusion, low to high.

  • occlusion.nose (0.0): Nose occlusion, low to high.

  • occlusion.mouth (0.0~0.4): Mouth occlusion, low to high.

  • occlusion.eyebrow (0.0): Eyebrow occlusion, low to high.

  • occlusion.face_line (0.0~0.1): Face contour occlusion, low to high.

  • quality.align_score (1.0~): Face landmarks score, low to high.

  • quality.brightness (-0.5~0.5): Face brightness, dark to bright.

  • quality.sharpness (0.8~1.0): Face sharpness, bad to good.

  • quality.mouth_open (0.0~0.4): Mouth opened size, closed to open.

  • quality.missing (0.9~1.0): Effective face proportion, high to low.

  • quality.size (0~0.85): The face proportion in the image, small to large.
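The acceptance logic described above can be sketched by encoding the reference ranges as a lookup table and checking each returned factor against it. Treating every range as a plain min/max check is our assumption here, and the ranges themselves are only the passport-level reference values:

```python
# Passport-level reference ranges (min, max); None means unbounded.
PASSPORT_RANGES = {
    "angle.yaw": (-10.0, 10.0),
    "angle.pitch": (-15.0, 15.0),
    "angle.roll": (-10.0, 10.0),
    "quality.distance2center": (0.2, 1.0),
    "quality.align_score": (1.0, None),
    "quality.brightness": (-0.5, 0.5),
    "quality.sharpness": (0.8, 1.0),
    "quality.mouth_open": (0.0, 0.4),
    "quality.missing": (0.9, 1.0),
    "quality.size": (0.0, 0.85),
    "occlusion.occlusion_total": (0.0, 0.02),
    "occlusion.eye": (0.0, 0.0),
    "occlusion.nose": (0.0, 0.0),
    "occlusion.mouth": (0.0, 0.4),
    "occlusion.eyebrow": (0.0, 0.0),
    "occlusion.face_line": (0.0, 0.1),
}

def evaluate_quality(response):
    """Map each factor to True/False depending on whether it falls inside
    the passport-level reference range. `response` is the parsed JSON body
    returned by the Quality Check API."""
    results = {}
    for key, (lo, hi) in PASSPORT_RANGES.items():
        group, field = key.split(".")
        value = response[group][field]
        results[key] = ((lo is None or value >= lo) and
                        (hi is None or value <= hi))
    return results
```

Applied to the sample response above, most factors fall inside the reference ranges, but quality.distance2center (0.0878) sits below the 0.2~1.0 reference range, so that image would need review under strict passport-level thresholds.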

7.5 Add some images to the feature database

The Batch Add Feature API sends several images in a single request to add multiple features to a feature database. Each feature added to a feature database gets a unique feature_id. You may add identical images, or several images belonging to the same person, to a feature database multiple times and receive several different feature_ids. In other words, a feature_id does not correspond to a single person, but to a single feature added at that time.

Use the following command to send an API call to add all images in the designated folder to the feature database we have created.

python {python_path}\add_faces.py "4caf27d2-f621-4cbf-acec-a7553f226000" {image_path}

The result would be shown as follows. There are two images in the folder.

Add to database: 4caf27d2-f621-4cbf-acec-a7553f226000 image: {image_path}\image1.jpg
Http status code: 200
Face feature_id: 4caf27d2f6214cbfaceca7553f226001000000000000008e
Detect face. rectangle: {'top': 625, 'left': 350, 'width': 793, 'height': 818} angle: {'yaw': -0.42474133, 'pitch': 9.596367, 'roll': 0.07245465}
Predicted attributes:
        Age: 29 ~ 39
        Gender: MALE
        Cap: HAT_STYLE_TYPE_NONE
        Glasses: TRANSPARENT_GLASSES
        Mask: COLOR_TYPE_NONE
Add to database: 4caf27d2-f621-4cbf-acec-a7553f226000 image: {image_path}\image2.jpg
Http status code: 200
Face feature_id: 4caf27d2f6214cbfaceca7553f226001000000000000008f
Detect face. rectangle: {'top': 637, 'left': 276, 'width': 844, 'height': 834} angle: {'yaw': 4.691873, 'pitch': 10.485169, 'roll': 1.0859865}
Predicted attributes:
        Age: 28 ~ 38
        Gender: MALE
        Cap: HAT_STYLE_TYPE_NONE
        Glasses: TRANSPARENT_GLASSES
        Mask: COLOR_TYPE_NONE

The features from the two images are added to the feature database.

7.6 Search in Feature Database

The Face Searching API uploads an image and finds the top K most similar faces within the feature database, ranked by verification score. The example code provided here returns at most the top two features with verification scores above 0.8.
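Client-side, that restriction amounts to a simple filter over the returned candidates. A sketch, where the "score" field name is taken from the sample output in this section and should be treated as an assumption:

```python
def filter_candidates(candidates, top_k=2, min_score=0.8):
    # Keep only candidates at or above min_score, highest scores
    # first, capped at top_k entries.
    passed = [c for c in candidates if c["score"] >= min_score]
    passed.sort(key=lambda c: c["score"], reverse=True)
    return passed[:top_k]
```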

Use the following command to send an API call to perform a search.

python {python_path}\search_faces.py "4caf27d2-f621-4cbf-acec-a7553f226000" "{image_path}\image.jpg"

The result would be shown as follows.

Search face: {image_path}\image.jpg in:  4caf27d2-f621-4cbf-acec-a7553f226000
Http status code: 200
Detect face. rectangle: {'top': 342, 'left': 171, 'width': 384, 'height': 392} angle: {'yaw': -1.557804, 'pitch': 10.313386, 'roll': 1.3425148}
Predicted attributes:
        Age: 28 ~ 38
        Gender: MALE
        Cap: HAT_STYLE_TYPE_NONE
        Glasses: TRANSPARENT_GLASSES
        Mask: COLOR_TYPE_NONE
        db_id: 4caf27d2-f621-4cbf-acec-a7553f226000
top 1 score: 0.9813498
        feature: {'feature_id': '4caf27d2f6214cbfaceca7553f226001000000000000008e', 'key': '', 'extra_info': 'image1.jpg'}
top 2 score: 0.9803276
        feature: {'feature_id': '4caf27d2f6214cbfaceca7553f226001000000000000008f', 'key': '', 'extra_info': 'image2.jpg'}

The API response includes the top K results if matches are found; otherwise, no results are returned. The two images added in the previous steps and the search image belong to the same person, so their feature_ids and scores are returned.

7.7 Delete the feature database

Finally, if the feature database is no longer needed, you can simply delete it. All features registered in the feature database will be deleted along with it. Use the following command to send an API call to perform the deletion.

python {python_path}\delete_feature_db.py "4caf27d2-f621-4cbf-acec-a7553f226000"

The result would be shown as follows.

Delete feature database: db_id: 4caf27d2-f621-4cbf-acec-a7553f226000
Http status code: 200
response: {
 "trace_id": "81c28dcd71c48befdcd2d92afb34c278"
}

The feature database and all features added to it are now completely removed from your Mercury Cloud environment.

2 API References

This page contains the Mercury Cloud OpenAPI definition file exported from Swagger and the link to the SwaggerHub-hosted API documentation.

Product versions may vary depending on your service region. If the latest version is not available in your service region, please refer to the previous version of the Online API Document.

As of May 2022, the version information for each region is as follows.

  • US: V1.5.0

  • Japan: V1.5.0

  • Bahrain: V1.5.0

2.1 Mercury Cloud OpenAPI file

Download the YAML file of the latest Mercury Cloud OpenAPI here.

  • mercury-face-open-api-1.5.0.yaml

2.2 Online API document

Click the link below to access the latest Mercury Cloud Online API document.

https://app.swaggerhub.com/apis/japancv/MercuryCloudAPI/1.5.0