Face authentication with iOS SDK

You can use face biometrics to reliably verify your customers' identity by biometrically matching a selfie to a known photo of the user. For example, face authentication can act as a step-up safeguard when a logged-in user wants to perform a money transfer or change their profile data. This guide describes how to quickly integrate face authentication into your iOS application using our iOS SDK, including both the client-side and backend integration.

How it works

Below is an example of a basic flow. Mosaic APIs are shown in pink, along with the relevant integration step.

After initializing the SDK (Step 3), your app starts a verification flow by creating a session in the backend that establishes a secure context and provides a reference image (Step 7 and Step 8) and then starting the session (Step 9). The SDK executes the verification process with the user using the Mosaic experience. Once the user's selfie is submitted, Mosaic starts processing the verification while the SDK polls for its status. Once processing is completed, the SDK notifies the app (Step 4) so it can obtain the verification result (Step 10) and proceed accordingly (Step 11).

Requirements

  • iOS 13+
  • Xcode 11+

Step 1: Configure your app

Admin Portal

To integrate with Mosaic, you'll need to configure an application. From the Applications page, create a new application or use an existing one. From the application settings:

  • For Client type, select native.
  • For Redirect URI, enter your website URL. This field is mandatory, but it isn't used for this flow.
  • Obtain your client ID and secret, which are autogenerated upon app creation.

Step 2: Add SDK to project

client

Add the SDK to your Xcode project so your application can access all the Mosaic functionality.

  • To use Swift Package Manager: install the SDK as a dependency in your Package.swift.
  • To use CocoaPods: specify the SDK in your Podfile.

Swift Package Manager:
dependencies: [
    .package(url: "https://github.com/TransmitSecurity/identityVerification-ios-sdk.git", .upToNextMajor(from: "1.0.0"))
]
CocoaPods:
pod 'IdentityVerification', '~> 1.0.1'

Step 3: Initialize the SDK

client
Initialize using PLIST configuration (recommended)

To do this, create a plist file named TransmitSecurity.plist in your application with the following content, replacing [CLIENT_ID] with your client ID from Step 1:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
	<key>credentials</key>
	<dict>
		<key>baseUrl</key>
		<string>https://api.transmitsecurity.io/</string>
		<key>clientId</key>
		<string>[CLIENT_ID]</string>
	</dict>
</dict>
</plist>

Add the code below to your application class:

  • To use UIKit: add the code to your AppDelegate or your SceneDelegate.
  • To use SwiftUI: add the code to your main app file.

UIKit AppDelegate:
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        TSIdentityVerification.initializeSDK()
        TSIdentityVerification.faceAuthDelegate = self
        return true
    }
}
UIKit SceneDelegate:
class SceneDelegate: UIResponder, UIWindowSceneDelegate {

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        guard let _ = (scene as? UIWindowScene) else { return }

        TSIdentityVerification.initializeSDK()
        TSIdentityVerification.faceAuthDelegate = self
    }
}
SwiftUI:
struct ExampleApp: App {
    private let idvObserver = IDVStatusObserver()

    init() {
        TSIdentityVerification.initializeSDK()
        TSIdentityVerification.faceAuthDelegate = idvObserver
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

private class IDVStatusObserver: TSIdentityFaceAuthenticationDelegate {

    func faceAuthenticationDidStartCapturing() {
        ...
    }

    func faceAuthenticationDidStartProcessing() {
        ...
    }

    func faceAuthenticationDidComplete() {
        ...
    }

    func faceAuthenticationDidCancel() {
        ...
    }

    func faceAuthenticationDidFail(with error: TSIdentityVerificationError) {
        ...
    }
}
Note
  • Make sure to add import IdentityVerification at the top of the implementation class.
  • The SDK can be configured to work with a different cluster by setting the baseUrl parameter within the TransmitSecurity.plist to https://api.eu.transmitsecurity.io/ (for EU) or https://api.ca.transmitsecurity.io/ (for Canada).
Initialize using SDK parameters

Configure the SDK using one of the snippets below, where CLIENT_ID is your client ID (obtained in Step 1):

  • To use UIKit: add the code to your AppDelegate or your SceneDelegate.
  • To use SwiftUI: add the code to your main app file.

UIKit AppDelegate:
class AppDelegate: UIResponder, UIApplicationDelegate {

    func application(_ application: UIApplication, didFinishLaunchingWithOptions launchOptions: [UIApplication.LaunchOptionsKey: Any]?) -> Bool {
        TSIdentityVerification.initialize(clientId: [CLIENT_ID])
        TSIdentityVerification.faceAuthDelegate = self
        return true
    }
}
UIKit SceneDelegate:
class SceneDelegate: UIResponder, UIWindowSceneDelegate {

    func scene(_ scene: UIScene, willConnectTo session: UISceneSession, options connectionOptions: UIScene.ConnectionOptions) {
        guard let _ = (scene as? UIWindowScene) else { return }

        TSIdentityVerification.initialize(clientId: [CLIENT_ID])
        TSIdentityVerification.faceAuthDelegate = self
    }
}
SwiftUI:
struct ExampleApp: App {
    private let idvObserver = IDVStatusObserver()

    init() {
        TSIdentityVerification.initialize(clientId: [CLIENT_ID])
        TSIdentityVerification.faceAuthDelegate = idvObserver
    }

    var body: some Scene {
        WindowGroup {
            ContentView()
        }
    }
}

private class IDVStatusObserver: TSIdentityFaceAuthenticationDelegate {

    func faceAuthenticationDidStartCapturing() {
        ...
    }

    func faceAuthenticationDidStartProcessing() {
        ...
    }

    func faceAuthenticationDidComplete() {
        ...
    }

    func faceAuthenticationDidCancel() {
        ...
    }

    func faceAuthenticationDidFail(with error: TSIdentityVerificationError) {
        ...
    }
}
Note
  • Make sure to add import IdentityVerification at the top of the implementation class.
  • The SDK can be configured to work with a different cluster by setting the baseUrl initialization parameter to https://api.eu.transmitsecurity.io/ (for EU) or https://api.ca.transmitsecurity.io/ (for Canada).

Step 4: Observe status updates

client

Once a face authentication process has been initiated (as described in Step 9), the process moves through different statuses. For example, the status will indicate if the process was completed successfully so the app can fetch the result.

Here are examples of adding the extension to your AppDelegate or SceneDelegate class:

AppDelegate:
/** TODO: Add `import IdentityVerification` at the top of the implementation class */

extension AppDelegate: TSIdentityFaceAuthenticationDelegate {

    /** Notifies when the user has started to capture images. */
    func faceAuthenticationDidStartCapturing() {
        ...
    }

    /** Notifies when the user has finished uploading images and the face authentication is being processed. */
    func faceAuthenticationDidStartProcessing() {
        ...
    }

    /** Notifies when the face authentication process has completed and the result can be obtained (via a backend request). */
    func faceAuthenticationDidComplete() {
        ...
    }

    /** Notifies when the face authentication process is canceled by the user. */
    func faceAuthenticationDidCancel() {
        ...
    }

    /** Notifies when the face authentication process fails.
     - Parameters:
       - error: The verification error
     */
    func faceAuthenticationDidFail(with error: TSIdentityVerificationError) {
        print("Verification error: \(error.rawValue)")
    }
}
SceneDelegate:
/** TODO: Add `import IdentityVerification` at the top of the implementation class */

extension SceneDelegate: TSIdentityFaceAuthenticationDelegate {

    /** Notifies when the user has started to capture images. */
    func faceAuthenticationDidStartCapturing() {
        ...
    }

    /** Notifies when the user has finished uploading images and the face authentication is being processed. */
    func faceAuthenticationDidStartProcessing() {
        ...
    }

    /** Notifies when the face authentication process has completed and the result can be obtained (via a backend request). */
    func faceAuthenticationDidComplete() {
        ...
    }

    /** Notifies when the face authentication process is canceled by the user. */
    func faceAuthenticationDidCancel() {
        ...
    }

    /** Notifies when the face authentication process fails.
     - Parameters:
       - error: The verification error
     */
    func faceAuthenticationDidFail(with error: TSIdentityVerificationError) {
        print("Verification error: \(error.rawValue)")
    }
}

Step 5: Add camera permission

client
Your app requires camera permissions in order to capture the images required for the face authentication process.
  1. Open the Info.plist file as a Property List and add the following key: Privacy - Camera Usage Description. The key value contains an explanation of why the permission is needed, which will be displayed for the user to approve. For example: This is needed to capture images for the verification process.
  2. To handle camera permissions, your client app should implement code like the snippet below.
// Requires: import AVFoundation

switch AVCaptureDevice.authorizationStatus(for: .video) {
case .authorized: // User has already authorized camera access
    // Start face auth here
    break
case .notDetermined: // User hasn't yet authorized camera access
    AVCaptureDevice.requestAccess(for: .video) { granted in
        if granted { // User has granted camera access
            // Start face auth here
        } else {
            // Handle unauthorized state
        }
    }
case .denied, .restricted:
    // Handle unauthorized state
    break
@unknown default:
    // Handle unauthorized state
    break
}

Step 6: Collect reference image

client

In the face authentication flow, a reference image is required to prove the user's identity, as Mosaic will compare the user's selfie against the reference. Your app should collect a reference image before starting face authentication. This could be achieved by downloading the selfie created during the document verification flow, by fetching user ID photos from government databases, or by using any other method of your choice.

For example, after a document verification flow, the selfie image can be retrieved by fetching all image IDs for the verification session (see Get all session images) and then fetching the selfie image itself by its ID (see Get image by ID).

Note

Mosaic stores document verification images for 90 days. Contact Mosaic to extend the retention period.
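As an illustrative sketch of the document-verification approach, the selection logic could look like the Node.js helper below. The endpoint paths and image object shape are placeholders, not the actual API (use the paths from the "Get all session images" and "Get image by ID" references); `getJson` is injected so the logic is independent of any HTTP client.

```javascript
// Sketch: collect a reference selfie from a prior document verification
// session. Paths and the `type`/`id` fields are assumed placeholders.
async function collectReferenceSelfie(sessionId, getJson) {
  const images = await getJson(`/sessions/${sessionId}/images`); // placeholder path
  const selfie = images.find((img) => img.type === 'selfie');    // assumed shape
  if (!selfie) throw new Error('No selfie found for session');
  return getJson(`/images/${selfie.id}`);                        // placeholder path
}
```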

Step 7: Get access tokens

backend

Since an access token is required to authorize backend API calls, such as creating a verification session (Step 8) and obtaining the result (Step 10), your backend should be able to obtain these tokens from Mosaic.

import fetch from 'node-fetch';

async function run() {
  const formData = {
    client_id: '[CLIENT_ID]', // Client ID obtained in Step 1
    client_secret: '[CLIENT_SECRET]', // Client secret obtained in Step 1
    grant_type: 'client_credentials',
    resource: 'https://verify.identity.security' // Targets IDV resource (required)
  };

  const resp = await fetch(
    `https://api.transmitsecurity.io/oidc/token`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/x-www-form-urlencoded'
      },
      body: new URLSearchParams(formData).toString()
    }
  );

  const data = await resp.text();
  console.log(data);
}

run();
Notes
  • The token must be requested for the https://verify.identity.security resource, which will appear in the audience claim of the generated token (in the future we’ll block access to tokens without this audience).
  • The token must remain secure on your server, and must only be used for backend requests.
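Since client access tokens are valid for a limited time, your backend can cache a token and refresh it shortly before expiry rather than minting one per request. The helper below is our own sketch, not part of the Mosaic API; `fetchToken` stands for a function that performs the token request above and returns the parsed JSON (with `access_token` and `expires_in` fields).

```javascript
// Sketch: cache the client access token and refresh it shortly before
// it expires. `fetchToken` is an assumed function wrapping the request
// shown above; `skewSeconds` refreshes early to avoid near-expired tokens.
function createTokenCache(fetchToken, skewSeconds = 60) {
  let cached = null; // { accessToken, expiresAt }

  return async function getToken(now = Date.now()) {
    if (cached && now < cached.expiresAt) return cached.accessToken;
    const { access_token, expires_in } = await fetchToken();
    cached = {
      accessToken: access_token,
      expiresAt: now + (expires_in - skewSeconds) * 1000,
    };
    return cached.accessToken;
  };
}
```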

Step 8: Create session

backend

Before your mobile app can initiate the face auth process, your backend must create a session in order to provide a secure context for the flow and submit a reference image. To do this, send a request like the one below (see API reference):

Note

For optimal results, the image resolution should be HD to FHD (~1900x1000).

import fetch from 'node-fetch';

async function run() {
  const resp = await fetch(
    `https://api.transmitsecurity.io/verify/api/v1/face-auth`,
    {
      method: 'POST',
      headers: {
        'Content-Type': 'application/json',
        Authorization: 'Bearer [CLIENT_ACCESS_TOKEN]' // Client access token generated in Step 7
      },
      body: JSON.stringify({
        reference: {
          type: 'raw',
          content: '[IMAGE_CONTENT]',  // Content of the image, as a base64 string
          format: 'jpg' // Currently only jpeg is supported
        }
      })
    }
  );

  const data = await resp.json();
  console.log(data);
}

run();

The response contains a device_session_id, which will be used to start the face authentication session on the client side (in Step 9) and to obtain the result (in Step 10). For example:

{
  "device_session_id": "ca766ed78c8c0b7824dfea356ed30b72",
  "session_id": "H1I12oskjzsdhskj4"
}
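The reference.content field expects the raw JPEG bytes encoded as a base64 string. As a minimal sketch (the helper name is ours, not part of the API), the reference object for the request body above could be built in Node.js like this:

```javascript
// Sketch: build the `reference` object for the create-session request
// body from raw JPEG bytes (e.g. read from disk or object storage).
function buildReference(jpegBytes) {
  return {
    type: 'raw',
    content: Buffer.from(jpegBytes).toString('base64'), // base64 string, as the API expects
    format: 'jpg', // currently only JPEG is supported
  };
}
```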

Step 9: Start session

client

Once a session is created, initiate the authentication process using the startFaceAuth() SDK method. Add the code below to your mobile app, passing the device_session_id value returned in the previous step. If successful, the SDK will start a face authentication process for the user and guide them through the flow using the Mosaic experience.

/** TODO: Add `import IdentityVerification` at the top of the implementation class*/

TSIdentityVerification.startFaceAuth(deviceSessionId: [device_session_id])

Step 10: Get results

backend

Once the face authentication process starts, your mobile app can track its status using the extension added in Step 4. When the selfie is successfully submitted, Mosaic starts processing the verification and the SDK starts polling for the status. If the status is completed, your backend should send the request below to obtain the verification result (see API reference):

import fetch from 'node-fetch';

async function run() {
  const dsid = 'YOUR_dsid_PARAMETER'; // Device session ID returned in Step 8
  const resp = await fetch(
    `https://api.transmitsecurity.io/verify/api/v1/face-auth/${dsid}/result`,
    {
      method: 'GET',
      headers: {
        Authorization: 'Bearer [CLIENT_ACCESS_TOKEN]' // Client access token generated in Step 7
      }
    }
  );

  const data = await resp.text();
  console.log(data);
}

run();

Step 11: Handle results

client

Your app should decide how to proceed based on the face authentication result returned in the previous step, which is indicated by the recommendation field:

  • If ALLOW: the face authentication process completed successfully, and the user's identity is confirmed.
  • If CHALLENGE: the face authentication process didn't succeed, since at least one verification check didn't pass. Depending on your use case, proceed with other checks.
  • If DENY: the face authentication indicates a high likelihood of attempted fraud. You should block the user or initiate an in-depth review to prevent fraudulent actions.

Here's an example response for a successful face authentication:

{
  "status": "complete",
  "recommendation": "ALLOW"
}
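As an illustrative sketch (the function name and return values are ours, not part of the API), mapping the result to an application decision could look like:

```javascript
// Sketch: map the Step 10 result to an app decision. The returned
// strings are illustrative labels for whatever your app does next.
function handleFaceAuthResult(result) {
  if (result.status !== 'complete') return 'pending';
  switch (result.recommendation) {
    case 'ALLOW':     return 'proceed'; // identity confirmed
    case 'CHALLENGE': return 'step-up'; // run additional checks
    case 'DENY':      return 'block';   // likely fraud: block or review
    default:          return 'review';  // unexpected value: manual review
  }
}
```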