This page provides the release notes of Mercury Cloud.
The update schedule will be announced on the status dashboard.
2022/05/11
V1.5.0
Added the update extra_info API
2021/12/08
V1.4.3
Fixed several API bugs and documentation errors
2021/09/29
V1.4.0
Added the auto-rotation function of input images.
2021/08/11
V1.3.0
Added optimal image rotation angle as an output to the Quality Check API.
2021/07/21
V1.2.5
Added the Face Quality Check API.
Added trace_id to the Get System Info API.
2021/06/15
V1.1.0
Added attributes detection to feature-related APIs.
2021/05/17
V1.0.1
Changed OpenAPI’s credential headers from “date” to “x-date” to enable more clients to access the service.
2021/05/17
V1.0.0
Beta version released for customer trials only.
Applied OpenAPIs.
Supported access from multiple tenants with AK/SK authentication.
When uploading an image with multiple faces to the Face Detection API, not all faces may be detected. We suggest using images that contain only one face, or trimming the image to a single face before using this API.
When using the Quality Check API to detect the optimal rotation angle, some images may return an incorrect rotation angle. We are working on improving the accuracy of the algorithm.
If the same image is added and deleted more than 5 times in a feature database whose max_size is larger than 100K, the cross-database search (1:N) API may return an empty set even if similar features are registered. To prevent this issue, create a feature database with a smaller size or avoid repeatedly adding and deleting the same image.
This page is under construction.
The Mercury Cloud platform is an AI computer vision service platform. It provides high-quality CV PaaS services, such as image and video access and processing, and face detection, search, clustering, and recognition. Its advantages are flexible deployment, a high hardware cost-benefit ratio, high service availability, and high-level security measures. The cloud architecture also provides an agile and powerful service framework to support applications and services on each business side, ensuring stable and continuous business growth.
This manual provides instructions on how to use the OpenAPI in Mercury Cloud.
This manual is intended for customers in IT departments who have experience developing and operating applications that use HTTP/HTTPS-based, JSON-formatted Web APIs.
This manual provides instructions on how to use the Mercury Cloud management console.
This page is under construction. The management console is not ready yet.
In this manual, terms and figures are described on the assumption that the language setting is English. If you use other languages, the terms and figures may differ. In that case, please read and use them as appropriate.
2022/05/11
Updated the API references to V1.5.0.
2021/12/08
Updated the API references to V1.4.3.
2021/10/25
Started service in Bahrain.
2021/09/29
Updated the API references to V1.4.0.
2021/08/11
Updated the API references to V1.3.0.
2021/07/21
Updated the API references to V1.2.5.
Added guides to API.
Started service in the USA.
2021/06/15
Updated the API references to V1.1.0.
2021/05/17
The first edition released.
This page contains the file of Mercury Cloud OpenAPI extracted from Swagger and the link to the SwaggerHub hosted API documentation.
Product versions may vary depending on your service region. If the latest version is not available in your service region, please refer to the previous version of the Online API Document.
As of May 2022, the version information for each region is as follows.
US: V1.5.0
Japan: V1.5.0
Bahrain: V1.5.0
Download the YAML file of the latest Mercury Cloud here.
Click the link below to access the latest Mercury Cloud Online API document.
https://app.swaggerhub.com/apis/japancv/MercuryCloudAPI/1.5.0
This page is under construction.
2021/05/17
The first edition released.
This page provides a comprehensive guide to how you can quickly use the face verification function in Mercury Cloud.
The Face Verification API detects the largest face in two images and verifies whether these two faces are from the same person. Face verification is also called "one-to-one" or "1:1" matching. Verification can be used in identity verification that matches a snapshot with a previously registered image, like a photo on the driver's license.
The following steps upload two images, detect the largest face within each image, and compare the likelihood that the two faces are the same person. When detected successfully, the system returns the comparison result and detected face information.
To start, make sure you have a Python environment installed.
Download and copy the following Python files to your Python path folder.
Open api_parameters.py with a text editor and replace the following parameters with your info. Refer to Section 3.2 for more details.
Try the following command to send an API call of Face Comparison to compare the largest face in 2 images. Replace the path with your Python library path and target image file path, respectively.
The result would be shown as follows. It includes the comparison score field, which shows the similarity of the two detected faces, and the one_face and another_face fields, which include the detection results.
The similarity stands for the confidence that the two faces belong to the same person. In this example, we can say we are 99.15% confident that they are the same person.
You should decide your own acceptance level, usually called a "threshold," to compare against the similarity score and judge the final result of the face verification. This logic should be built into your system; the Mercury Cloud service cannot decide it for you.
Depending on the threshold, the result of face verification might differ. For example, suppose the threshold is set to a strict value of 0.995. The hypothesis that the two faces are the same person is rejected, since 0.9915448 < 0.995, even though the comparison score is relatively high. Conversely, if the threshold is set to a more reasonable value of 0.95, we accept the hypothesis, since 0.9915448 > 0.95.
The threshold setting is a trade-off between the false acceptance rate (FAR) and the false rejection rate (FRR). The higher the threshold, the more likely a false rejection would happen and less likely that a false acceptance would happen.
Different businesses have different use cases and different demands on face recognition accuracy. Some common threshold values are set from 0.6 to 0.7 to avoid FAR as much as possible. But please adjust and configure the threshold based on your business requirements and test results.
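The acceptance logic described above can be sketched in a few lines of Python. The function name and threshold values are illustrative, not part of the Mercury Cloud API; your system supplies its own threshold.

```python
def verify(comparison_score: float, threshold: float) -> bool:
    """Accept the 1:1 match only when the similarity score
    meets or exceeds your system's threshold."""
    return comparison_score >= threshold

score = 0.9915448  # comparison score returned by the Face Comparison API

# A strict threshold rejects the match...
print(verify(score, 0.995))  # False: 0.9915448 < 0.995
# ...while a more lenient threshold accepts it.
print(verify(score, 0.95))   # True: 0.9915448 >= 0.95
```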
This page provides a comprehensive guide to how you can quickly use the face detection function in Mercury Cloud.
The Face Detection API detects faces in images and returns rectangle coordinates representing the locations of the faces. The API also extracts several face-related attributes, such as face angle, gender, age, emotion, etc. All attributes are predicted by AI algorithms, not actual classification.
The following steps upload a single image and detect faces within the images. When detected successfully, the system returns the detected face information.
To start, make sure you have a Python environment installed.
Download and copy the following Python files to your Python path folder.
Try the following command to send an API call of Face Detection to detect faces in that image. Replace the path with your Python library path and target image file path, respectively.
The result would be shown as follows. It includes a detection results field that shows whether a face has been detected, together with the detected face details.
Here, HAT_STYLE_TYPE_NONE means the detected face is not wearing a hat or cap, TRANSPARENT_GLASSES means the detected face is wearing a pair of normal glasses (not sunglasses), and COLOR_TYPE_NONE means the detected face is not wearing a mask.
This page describes the overview of the OpenAPI in Mercury Cloud.
Mercury Cloud OpenAPI offers AI algorithms that detect, recognize and analyze faces in images with high service availability and high-level security measures to ensure the stable and continuous growth of your online business. This service provides several different facial analysis functions.
By utilizing this RESTful API platform, your systems can retrieve and integrate information on Face Detection, Face Quality, Face Verification, Face Identification, as well as feature database management and face feature management. Some quickstart guides to these functions will be introduced in the later chapters.
The API detects faces in images and returns rectangle coordinates representing the locations of the faces. The API also extracts several face-related attributes, such as face angle, gender, age, emotion, etc. All attributes are predicted by AI algorithms, not actual classification.
A quickstart guide to this function is provided in a later chapter.
The API detects the largest face in two images and verifies whether these two faces are from the same person. Face verification is also called "one-to-one" or "1:1" matching. Verification can be used in identity verification that matches a snapshot with a previously registered image, like a photo on the driver's license.
A quickstart guide to this function is provided in a later chapter.
The API searches a detected face among all registered face features in the feature databases and returns the closest results. Face identification is also called "one-to-many" or "1:N" matching. Candidate results are returned based on the similarity with the detected face. After creating a feature database and adding some registration photos to the database, you can perform the face identification with a newly uploaded image.
A quickstart guide to this function is provided in a later chapter.
This page provides a comprehensive guide to how you can quickly build your face database and use the Face Identification function.
The Face Identification API searches a detected face among all registered face features in the feature databases and returns the closest results. Face identification is also called "one-to-many" or "1:N" matching. Candidate results are returned based on the similarity with the detected face. After creating a feature database and adding some registration photos to the database, you can perform the face identification with a newly uploaded image.
The following steps create a feature database, add a few features to the database, upload an image, detect the largest face in the image, then search similar faces within one feature database. When detected successfully, the system returns the search result and detected face information. Finally, the feature database is deleted.
In the APIs that operate on a feature database, the db_id is a unique key identifying a single feature database. Please memorize the db_id in the response of the Create Feature Database API (POST /{app_id}/databases), or find it with the List Feature Databases API (GET /{app_id}/databases).
To start, make sure you have a Python environment installed.
Download and copy the following Python files to your Python path folder.
Use the following command to send an API call to create a feature database with the name "foo" and the DB size 1000. Replace the path with your Python library path. The DB size restricts the maximum number of features which can be stored in a single feature database. Note that this number should be no more than the number of IDs you purchased in your subscription.
The Create Feature Database API sends a request with the database name and the maximum size. After the database is created successfully, the response contains a unique db_id. The result would be shown as follows.
Please memorize the db_id in the response_body. It will be used when calling other APIs in the next steps.
Use the following command to send an API call to confirm the existence of the feature database you have just created.
The result would be shown as follows.
You can see that the feature database with the same db_id is returned.
The Quality Check API analyzes face size, angle, brightness, sharpness, occlusion, etc., and returns values for all factors. It is highly recommended to check image quality before adding faces to the feature database, since higher image quality leads to higher recognition precision.
Use the following command to send an API call to perform a quality check.
The result would be shown as follows.
You should decide your own acceptance level, or threshold, for each factor returned by the API. This logic should be built into your system. Different businesses have different use cases and different demands on image quality.
We provide some reference values that approximate the image quality of passport photos. But please adjust and configure the thresholds based on your business requirements and test results.
The Batch Add Feature API sends several images in a single request to add multiple features to a feature database. Each feature added to a feature database gets a unique feature_id. You may also add identical images, or several images of the same person, to a feature database multiple times and receive several different feature_id values. That is to say, a feature_id does not necessarily correspond to a single person, but rather to a feature added at a particular time.
Use the following command to send an API call to add all images in the designated folder to the feature database we have created.
The result would be shown as follows. There are two images in the folder.
The features from the two images are added to the feature database.
The Face Searching API uploads an image and finds the top K similar faces within the feature database based on verification scores. The example code we provide here returns at most the top two features with verification scores above 0.8.
Use the following command to send an API call to perform a search.
The result would be shown as follows.
The API response includes top K results if matches are found. Otherwise, no results will be returned. The two images added in the previous steps and the searching image belong to the same person, thus the feature_ids and their scores are returned.
Finally, if the feature database is no longer needed, you can simply delete it. All features registered in the feature database will be deleted simultaneously. Use the following command to send an API call to perform the deletion.
The result would be shown as follows.
The feature database and all features added to it are now completely removed from your Mercury Cloud environment.
This page describes how to call Mercury Cloud OpenAPI.
The base service endpoint is at:
Mercury Cloud API is served only over HTTPS to ensure your data privacy.
The domain.com part might differ according to your service region. Please find this information in your service starting email.
Each API call requires a few common parameters, HTTP method, App ID, Access Key, and Secret Key.
GET, POST, and DELETE are the HTTP methods used in Mercury Cloud. Refer to the API reference for details on the method of each API. The method is also used when calculating the auth token.
The App ID, the Access Key, and the Secret Key are included in your service starting email. Please keep them in a safe place and do not disclose them to others. They will be used in setting the API URL and calculating the API auth token.
All Mercury APIs require auth tokens to verify valid clients. There are two types of additional headers (x-date and Authorization) needed in each API call. If they are not included, you will get a 401 HTTP error code.
The x-date header uses an RFC-7231 formatted UTC date-time string.
For example: Fri, 09 Jul 2021 01:51:02 GMT
This stands for 2021-07-09T01:51:02Z. Please note that x-date is the time in GMT, not your local time.
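As a sketch, an RFC-7231 date string can be produced with the Python standard library; email.utils.format_datetime with usegmt=True emits exactly this format.

```python
from datetime import datetime, timezone
from email.utils import format_datetime

def make_x_date(now=None):
    """Return a UTC time as an RFC-7231 date string,
    e.g. 'Fri, 09 Jul 2021 01:51:02 GMT'."""
    now = now or datetime.now(timezone.utc)
    return format_datetime(now, usegmt=True)

# The timestamp 2021-07-09T01:51:02Z renders as:
print(make_x_date(datetime(2021, 7, 9, 1, 51, 2, tzinfo=timezone.utc)))
# Fri, 09 Jul 2021 01:51:02 GMT
```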
The Authorization header is generated based on a given URL path, HTTP method, App ID, Access Key, and Secret Key. For some APIs related to Features DBs, the DB ID is also required. The Authorization follows the following format.
A common Authorization header example is as follows.
The hmac username is your Access Key. Replace 005c5acf-5ea9-499c-8d3e-690413f9b5b9 with your own Access Key.
The signature is a base64-encoded string encrypted with HMAC-SHA256.
Let us do it step by step. First, assemble the message before encryption. A common example of the message before encryption is as follows.
POST is the HTTP method; make it consistent with the API you are going to use.
/openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/detect is the URL path, in which the abc1a8a7-038f-4f9a-b98a-5b602978b135 part is the App ID. Replace it with your own App ID, and substitute the path with the URL path of your designated API. In some APIs, a DB ID is also needed. For example,
where aed37153-16b6-4f19-a479-302049e44000 is the DB ID.
Use the Secret Key blFWSvhp9pRz2JnRHnfvkFeAuApClhKg to encrypt the message we assembled. Then encode the result with base64; the signature is finalized as follows.
Using this signature completes the Authorization header composition we saw above.
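The steps above can be sketched end to end in Python with the standard library. Note two assumptions, since the exact message layout and header format were elided from this page: the layout of the signed message and the exact Authorization header fields are illustrative here, so verify both against the message format and the Authorization example shown in this chapter before use. The sample keys are the ones quoted on this page.

```python
import base64
import hashlib
import hmac

def sign(secret_key: str, message: str) -> str:
    """HMAC-SHA256 the message with the Secret Key, then base64-encode it."""
    digest = hmac.new(secret_key.encode(), message.encode(), hashlib.sha256).digest()
    return base64.b64encode(digest).decode()

def build_authorization(access_key: str, secret_key: str,
                        method: str, path: str, x_date: str) -> str:
    # ASSUMPTION: the signed message concatenates the x-date header, the
    # HTTP method, and the URL path, newline-separated. Verify the exact
    # layout against the message example shown in this chapter.
    message = f"x-date: {x_date}\n{method} {path}"
    signature = sign(secret_key, message)
    # ASSUMPTION: header field layout; confirm against the Authorization
    # header example shown in this chapter.
    return (f'hmac username="{access_key}", algorithm="hmac-sha256", '
            f'headers="x-date request-line", signature="{signature}"')

header = build_authorization(
    access_key="005c5acf-5ea9-499c-8d3e-690413f9b5b9",  # sample Access Key from this page
    secret_key="blFWSvhp9pRz2JnRHnfvkFeAuApClhKg",      # sample Secret Key from this page
    method="POST",
    path="/openapi/face/v1/abc1a8a7-038f-4f9a-b98a-5b602978b135/detect",
    x_date="Fri, 09 Jul 2021 01:51:02 GMT",
)
print(header)
```

Because the header embeds the x-date, it must be regenerated for every call, which matches the re-authorization note in the next section.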
SwaggerHub offers an interactive way to test API calls directly from the browser using the "Try it out" button. The Mercury Cloud Interactive API Documentation requires special authentication before you can use this function. Here is the guide to using the "Try it out" function in the Mercury Cloud Interactive API Documentation.
Click the Servers dropdown list and choose the server according to your service area.
Click the "Authorize" button to the right of the Servers list.
The following window will pop up. Paste the Authorization header into the "Value" textbox and click the "Authorize" button.
The Authorization header has now been set. To re-authorize, click the "Logout" button and redo the steps above.
The Authorization header is time-sensitive, so you need to re-authorize every time before you call the API.
Now we can try the API call. Take the List Feature Database API for example.
Click the "Try it out" button; the "Execute" button will show up.
Fill in the x-date and app_id parameters, then click the "Execute" button.
The API response will be displayed with some other related information.
This page provides some advice and techniques on advanced usage of Mercury Cloud OpenAPI.
Typically, only one face from a single image is used in image-related APIs. But Mercury Cloud offers more powerful functions to fulfill the needs of different user scenarios.
In some scenarios, you might not want to detect faces within the whole image but a specific region, such as a specific area of the ID card. A rectangle area can be specified in some APIs. With the rectangle area specified, the API will only scan face(s) in that region, and only the faces that overlap with that region will be returned. If no rectangle is specified, the whole image will be scanned. Incidentally, specifying a rectangle area is faster than processing the whole image, especially for HD images.
The rectangle field behaves the same in the Face Detection API, the Face Compare API, the Batch Add Faces API, and the Face Searching API.
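The overlap rule described above can be illustrated with a small sketch. The rectangle layout assumed here (left, top, width, height keys) is for illustration only; check the API reference for the actual field names of the rectangle field.

```python
def overlaps(face: dict, region: dict) -> bool:
    """Return True when a detected face rectangle overlaps the search region.
    Rectangles use assumed keys: left, top, width, height."""
    return not (face["left"] + face["width"] <= region["left"] or
                region["left"] + region["width"] <= face["left"] or
                face["top"] + face["height"] <= region["top"] or
                region["top"] + region["height"] <= face["top"])

region = {"left": 100, "top": 50, "width": 200, "height": 200}   # e.g. the photo area of an ID card
inside = {"left": 150, "top": 80, "width": 64, "height": 64}
outside = {"left": 400, "top": 300, "width": 64, "height": 64}
print(overlaps(inside, region), overlaps(outside, region))  # True False
```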
The Face Detection function is used in several APIs, namely the Face Detection API, the Face Comparison API, the Quality Check API, the Batch Add Face API, and the Face Searching API. However, the detection behaviors are slightly different. The Face Detection API scans all detectable faces within the image: if an image contains multiple faces, it detects all of them. The batches.faces field in the API response is a list containing all detected faces in the image.
But the Quality Check API, the Face Comparison API, the Batch Add Face API, and the Face Searching API only use the largest face in the image.
In the Face Detection API, the images field in the request and the results and batches fields in the response are all list types. These lists can contain multiple images. The order of images in the images field of the request body and the order of items in the results and batches fields of the response are strictly matched. You can use the same index value to access images, detection results, and detected faces.
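The shared index can be used as follows. The response shape here is a minimal illustrative stand-in built from the field names above, not a verbatim API response.

```python
# Hypothetical request and trimmed response illustrating the shared index.
request = {"images": ["<base64 image A>", "<base64 image B>"]}
response = {
    "results": [{"status": "OK"}, {"status": "OK"}],
    "batches": [{"faces": [{"face_id": "a-1"}, {"face_id": "a-2"}]},
                {"faces": [{"face_id": "b-1"}]}],
}

# The same index i addresses the image, its result, and its detected faces.
for i, image in enumerate(request["images"]):
    result = response["results"][i]
    faces = response["batches"][i]["faces"]
    print(f"image {i}: status={result['status']}, faces detected={len(faces)}")
```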
While adding features to the feature database, it is recommended to add a key or extra_info value along with the image, so that the added feature can be managed and accessed by more comprehensive means than the feature_id alone. If you add a face to the feature database several times, you will receive several feature_ids, and there may be duplicated features in the database. Without a key or an extra_info value, such features are hard to manage if the feature_id is lost.
The key of a feature is a user-defined string composed of letters, digits, and hyphens ("-"), up to 48 half-width characters long. If a feature represents a person in the database, the key is the index of that person. You can access a feature by its feature_id to retrieve its key and extra_info as well. The value of key does not need to be unique across feature databases, so you can set an identical key value on a set of features.
In most user systems, the system distributes a unique user_id to each user, for example, an employee number or a membership number. This unique user_id can be set as the key value in the Batch Add Feature API to map the user_id of the user system to the feature in Mercury Cloud. The user system, not Mercury Cloud, is responsible for maintaining the uniqueness of the key across feature databases.
In the face identification (1:N) case, with a given user image, the API response contains the top results with the highest similarity to that face feature. The key in the result can help you quickly identify which user the most similar feature belongs to. You can then perform further business logic for that user.
Mercury Cloud does not offer dedicated columns to store user information; instead, it provides a more flexible and advanced solution. The extra_info field is a user-defined string that can store up to 1024 half-width characters. It can hold any string, including but not limited to a serialized JSON object, a Base64-encoded binary, or the URL of a user avatar. An example for a membership service can be structured as follows.
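As a sketch of such a structure (the keys inside extra_info are entirely up to your system; the ones below are illustrative), a membership record carrying a name and an avatar URL might be serialized like this:

```python
import json

# User-defined payload for a membership service; all keys are illustrative.
member = {
    "user_id": "M-102934",
    "name": "Jane Doe",
    "avatar_url": "https://example.com/avatars/M-102934.jpg",
}

extra_info = json.dumps(member)  # the serialized JSON string stored in extra_info
assert len(extra_info) <= 1024   # must fit the 1024 half-width character limit
print(extra_info)
```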
In this example, since the name and avatar URL are included in extra_info, your system can rapidly extract the registered user's avatar or name without querying your own database.
It is highly recommended to store only frequently used data in extra_info, and in Mercury Cloud in general. As a best practice, avoid saving sensitive personal data, or do so at your own risk, even though Mercury Cloud provides high-level security measures to ensure tenant isolation and data security.
Starting from V1.5.0, you can update extra_info by using the Update Feature Extra Info API.
The Face Searching API is the most important API in the 1:N identification scenario. Snapshots captured by cameras are compared with registered features in the feature database to find matching results.
A feature is a multi-dimensional vector. In the face identification scenario, the API compares a given feature with existing features in the database and calculates how close each pair is. The score in the API response, ranging from 0 to 1, indicates the similarity between them. Different images, angles, brightness conditions, dates, etc., will affect the verification score. Even if you compare two identical images, there will be small differences in similarity, though the value will be very close to 1.
When sending a Face Searching API request, the min_score field can be used as a threshold to limit the lowest score in the response: only results with a score equal to or higher than min_score will be returned. You should decide your own min_score. Some common threshold values are set from 0.94 to 0.96 to avoid FAR as much as possible. But please adjust and configure it based on your business requirements and test results.
Another field, top_k, limits the number of results returned by the API. In most cases, only the top result is needed, but getting more results may be helpful in other cases. Notice that the min_score setting has a higher priority than top_k: if any of the top k features does not fulfill the min_score requirement, it will not be included in the API response.
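A Face Searching request using these fields might be sketched as follows. Only min_score, top_k, and the sample DB ID come from this page; the other field names (image, db_ids) are assumptions to be confirmed against the API reference.

```python
# Illustrative Face Searching request body; verify exact field names in the API reference.
search_request = {
    "db_ids": ["aed37153-16b6-4f19-a479-302049e44000"],  # sample DB ID from this chapter
    "image": "<base64-encoded image>",                   # assumed field name
    "min_score": 0.95,  # common range on this page: 0.94 to 0.96
    "top_k": 1,         # most cases need only the best match
}

# Client-side restatement of the server rule: min_score outranks top_k.
candidates = [0.97, 0.93]  # hypothetical similarity scores
top = sorted(candidates, reverse=True)[:search_request["top_k"]]
returned = [s for s in top if s >= search_request["min_score"]]
print(returned)  # [0.97]
```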
In some scenarios, your system may want to store features in multiple feature databases, such as separated feature databases for different regions.
Generally, the Face Searching API runs against a single feature database (region). But in special cases, it is necessary to search across all feature databases (regions). The Face Searching API in Mercury Cloud provides this capability by supporting multiple db_id values. When multiple db_id values are specified, the API searches features across those feature databases and responds with top results for each.
Notice that the cross-database face search increases round-trip network latency, since it performs N database searches when the number of db_id values in the API request is N.
Open api_parameters.py with a text editor and replace the following parameters with your info. Refer to Section 3.2 for more details.
For more details on the attribute information, refer to the latest version of the YAML file or the online API document provided in the API reference chapter.
Open api_parameters.py with a text editor and replace the following parameters with your info. Refer to Section 3.2 for more details.
The x-date header is the same as described in the API call instructions.
You can now use the x-date header and the Authorization header to make the API call of /openapi/face/v1/{app_id}/detect.
Prepare the x-date header and the Authorization header, respectively, using the steps described in the API call instructions.
Besides the Face Detection API, the Batch Add Face API also supports batch mode. Due to performance considerations, the maximum number of images within a single request is restricted to 16, and each image should meet the input image requirements.
| Field | Description | Values |
| --- | --- | --- |
| age_lower_limit | The estimated lower limit of age | - |
| age_up_limit | The estimated upper limit of age | - |
| st_age | The estimated classification of age | ST_CHILD: child / ST_ADULT: adult / ST_OLD: elderly |
| gender_code | The estimated classification of gender | MALE: male / FEMALE: female |
| mustache_style | The estimated classification of mustache | MUSTACHE_STYLE_TYPE_NONE: no mustache / WHISKERS: has mustache |
| respirator_color | The estimated status of mask wearing | COLOR_TYPE_NONE: not wearing a mask / COLOR_TYPE_OTHER: wearing a mask |
| glass_style | The estimated status of glasses wearing | GLASSES_STYLE_TYPE_NONE: not wearing glasses / TRANSPARENT_GLASSES: wearing normal glasses / SUNGLASSES: wearing sunglasses |
| cap_style | The estimated status of hat wearing | HAT_STYLE_TYPE_NONE: not wearing a cap / CAP: wearing a cap |
| st_helmet_style | The estimated status of helmet wearing | ST_HELMET_STYLE_TYPE_NONE: not wearing a helmet / ST_HELMET: wearing a helmet |
| st_expression | The estimated classification of emotions | ST_CALM: calm / ST_HAPPY: happy / ST_ANGRY: angry / ST_SURPRISED: surprised / ST_SORROW: sorrow |
| st_respirator | Reserved and not used | - |
| Field | Reference range | Description |
| --- | --- | --- |
| angle.yaw | -10.0 ~ 10.0 | Yaw angle. |
| angle.pitch | -15.0 ~ 15.0 | Pitch angle. |
| angle.roll | -10.0 ~ 10.0 | Roll angle. |
| quality.distance2center | 0.2 ~ 1.0 | Distance between the center of the face and the center of the image, far to near. |
| occlusion.occlusion_total | 0.0 ~ 0.02 | Total face occlusion, low to high. |
| occlusion.eye | 0.0 | Eye occlusion, low to high. |
| occlusion.nose | 0.0 | Nose occlusion, low to high. |
| occlusion.mouth | 0.0 ~ 0.4 | Mouth occlusion, low to high. |
| occlusion.eyebrow | 0.0 | Eyebrow occlusion, low to high. |
| occlusion.face_line | 0.0 ~ 0.1 | Face contour occlusion, low to high. |
| quality.align_score | 1.0 ~ | Face landmark score, low to high. |
| quality.brightness | -0.5 ~ 0.5 | Face brightness, dark to bright. |
| quality.sharpness | 0.8 ~ 1.0 | Face sharpness, bad to good. |
| quality.mouth_open | 0.0 ~ 0.4 | Mouth open size, closed to open. |
| quality.missing | 0.9 ~ 1.0 | Effective face proportion, high to low. |
| quality.size | 0 ~ 0.85 | Face proportion in the image, small to large. |
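The reference ranges above can be enforced client-side. This sketch hard-codes a subset of them and checks a Quality Check response flattened to dotted keys; the flattened-dict shape is an illustrative convenience, not the actual response structure.

```python
# A subset of the reference ranges from the table above: (min, max); None = unbounded.
REFERENCE_RANGES = {
    "angle.yaw": (-10.0, 10.0),
    "angle.pitch": (-15.0, 15.0),
    "quality.sharpness": (0.8, 1.0),
    "quality.brightness": (-0.5, 0.5),
    "quality.align_score": (1.0, None),
}

def failed_factors(values: dict) -> list:
    """Return the factor names whose values fall outside the reference ranges."""
    bad = []
    for name, (lo, hi) in REFERENCE_RANGES.items():
        v = values.get(name)
        if v is None:
            continue  # factor not present in this response
        if (lo is not None and v < lo) or (hi is not None and v > hi):
            bad.append(name)
    return bad

sample = {"angle.yaw": 3.2, "angle.pitch": -20.0, "quality.sharpness": 0.95}
print(failed_factors(sample))  # ['angle.pitch']
```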
This page describes how images, including faces, are processed in Mercury Cloud.
When we compare faces, add faces to the database, or search for a face in the database, the algorithm does not directly use the uploaded raw images. Instead, features are extracted from the faces within the Mercury Cloud platform when using these APIs. A feature is a multi-dimensional vector extracted from a face in an image; each face in an image generates a unique feature. What a similarity score indicates is the distance between the feature vectors of two faces.
Therefore, when the Mercury Cloud OpenAPI documents mention face comparison or face searching, they refer to comparing or searching features. Mercury Cloud OpenAPI does NOT store any image binaries or files within the service.
Mercury Cloud OpenAPIs use base64 encoded image binaries in HTTP requests to allow image data transmissions. It is much easier to convert the image to the base64 string in Linux via bash command. Refer to the following script and command to convert your images.
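On Linux, the coreutils base64 tool does this in one line (for example, `base64 -w 0 face.jpg` emits a single-line encoding). When building requests programmatically, the equivalent in Python is a short sketch; the file path below is illustrative.

```python
import base64

def image_to_base64(path: str) -> str:
    """Read an image file and return its base64-encoded string
    for use in a Mercury Cloud request body."""
    with open(path, "rb") as f:
        return base64.b64encode(f.read()).decode("ascii")

# Round-trip sanity check on arbitrary bytes.
encoded = base64.b64encode(b"\x89PNG...").decode("ascii")
assert base64.b64decode(encoded) == b"\x89PNG..."
```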
There are five essential APIs in the service that require base64-encoded image data as input, namely the Face Detect API (/{app_id}/detect), the Face Compare API (/{app_id}/compare), the Quality Check API (/{app_id}/quality), the Add Feature API (/{app_id}/databases/{db_id}/features), and the Face Search API (/{app_id}/databases/search). The requirements for input images are common among those APIs and are as follows.
The image format should be JPG, PNG, BMP, TIFF, or GIF (Only the first frame is accepted).
The file size should be smaller than 8MB.
The minimum detectable face area should be more than 32x32 pixels.
In the Face Detect API (/{app_id}/detect) and the Add Feature API (/{app_id}/databases/{db_id}/features), where batch upload is supported, the number of images in a single API call should be no more than 16.
Higher face image quality means better precision, while a larger image file means a longer API response time. As a best practice, we highly recommend using high-quality, frontal, clear images with a face area over 200x200 pixels, trimmed and compressed to less than 200KB before calling the APIs.
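The hard limits and the best-practice sizes above can be pre-checked client-side before spending an API call. This is an illustrative sketch using only the file name and size; verifying the face area would require an image library.

```python
import os

ALLOWED_EXTENSIONS = {".jpg", ".jpeg", ".png", ".bmp", ".tiff", ".gif"}
MAX_FILE_SIZE = 8 * 1024 * 1024   # hard limit: 8MB
RECOMMENDED_SIZE = 200 * 1024     # best practice: compress to less than 200KB

def precheck(path: str) -> list:
    """Return a list of problems found before uploading; empty means OK."""
    problems = []
    ext = os.path.splitext(path)[1].lower()
    if ext not in ALLOWED_EXTENSIONS:
        problems.append(f"unsupported format: {ext or '(none)'}")
    size = os.path.getsize(path)
    if size >= MAX_FILE_SIZE:
        problems.append(f"file too large: {size} bytes")
    elif size > RECOMMENDED_SIZE:
        problems.append("consider compressing below 200KB for faster calls")
    return problems
```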