Data In Connectors

Bring data into Visier directly from your source system.

Overview

Visier's data connectors simplify the data loading process. Using an automated workflow, data connectors connect to your source systems, extract the raw data directly, and then transform and process it within the Visier platform.

Data connectors are an alternative to generating flat files and transferring them to Visier via SFTP.

Supported connectors and requirements

Visier seamlessly connects to multiple data sources. Click the source name to learn more about the requirements to get started.

Note: If you're interested in a connector that is in beta, please contact your Customer Success Manager (CSM). If you are unsure who your CSM is, please create a technical support case through the Visier Service Portal.

Data in connector workflow

  1. Create a service account in your source system that allows Visier to connect to your data. For more information, see Set up a service account.
  2. Create connector credentials to authenticate Visier with your source system. For more information, see Provide connector credentials in Visier.

Data connector architecture

Data connectors utilize a service account in the source system to authenticate an HTTPS connection to read the data from the source system. The data access and security permissions granted to the service account are controlled by the end user in the source system.

Depending on the source system, the required permissions may be different. For source requirements, click the source name in Supported connectors and requirements.

Tip: After service account credentials have been connected to Visier, the credentials are placed in an encrypted secret store that doesn't allow individuals to read the contents. If you need to update an existing credential, you must input each field again because the fields are protected from view.
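To illustrate the pattern, a service-account credential is typically exchanged for an authentication header on each HTTPS request. The following is a minimal, hypothetical sketch in Python using HTTP Basic authentication; the endpoint and credential values are placeholders, and real connectors may use other schemes such as OAuth tokens:

```python
import base64
import urllib.request

def build_authenticated_request(url: str, username: str, password: str) -> urllib.request.Request:
    """Build an HTTPS request that authenticates with service-account
    credentials via HTTP Basic auth (illustrative only)."""
    token = base64.b64encode(f"{username}:{password}".encode()).decode()
    request = urllib.request.Request(url)
    request.add_header("Authorization", f"Basic {token}")
    return request

# Hypothetical source-system endpoint and service account.
req = build_authenticated_request(
    "https://example-hcm.invalid/api/workers",
    "visier-service-account",
    "secret",
)
```

The source system checks these credentials on every request, so the data access granted to Visier is always bounded by the permissions assigned to the service account.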

Visier's data connectors retrieve data in the following ways:

  • REST. Visier uses your source system's API to securely retrieve data over HTTPS and generate a stream of data.
  • Java Database Connectivity (JDBC). Visier uses a standardized API to connect directly to your source system's database, securely retrieve data, and generate a stream of data.
  • Simple Object Access Protocol (SOAP). Visier uses this XML-based protocol to retrieve information from your source system.
  • GraphQL API. Visier uses this API to retrieve exactly the data required from your source system.
  • Other. Visier may use alternative retrieval methods in special cases. For more information, contact Visier Customer Support.

Visier reads the data stream and generates a set of records that are stored within Visier's data store. These records are then loaded into the solution via the traditional data flow.
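Conceptually, the stream of data from a REST source can be modeled as a generator that pages through an API and yields one record at a time. The sketch below is a hypothetical illustration of that pattern, not Visier's connector code; `fetch_page` stands in for an authenticated HTTPS call:

```python
from typing import Callable, Iterator

def stream_records(fetch_page: Callable[[int], list[dict]]) -> Iterator[dict]:
    """Yield records one at a time, requesting pages until the source
    returns an empty page. `fetch_page` stands in for an HTTPS call."""
    page = 0
    while True:
        records = fetch_page(page)
        if not records:
            return
        yield from records
        page += 1

# Hypothetical in-memory source with two pages of records.
pages = [[{"id": 1}, {"id": 2}], [{"id": 3}], []]
records = list(stream_records(lambda n: pages[n]))
```

Streaming records rather than buffering whole responses keeps memory use flat regardless of how large the extraction is.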

Supported data sources by retrieval method:
REST API

  • Dayforce
  • iCIMS
  • Oracle Fusion
  • Qualtrics
  • SAP SuccessFactors
  • UKG Pro

JDBC API

  • Amazon Redshift
  • Google BigQuery
  • Greenhouse
  • Microsoft SQL Server
  • Snowflake

SOAP

  • Workday

GraphQL API

  • Medallia

Other

  • Amazon S3

Note:  

  • Please see Visier's Trust Assurance Overview for a detailed diagram of the data flow. If you don't have Visier's Trust Assurance Package, contact Visier Customer Support.
  • For JDBC connectors, we can connect to database objects including tables, views, materialized views, stored procedures, and more. We cannot read parameterized stored procedures.

Methodology

On the first extraction, data connectors retrieve the full history of each subject to generate an initial history of events.

For each subsequent extraction, the connector identifies subjects whose data has changed and restates the full history for each of those subjects.

The full history is retrieved per subject to capture any corrections made to the subject's records at any point in its history. By limiting retrieval to changed subjects, Visier reduces the volume of data retrieved (a semi-delta load) while still allowing self-healing correction workflows in the source system.

We recommend running daily extractions and data loads to keep load times to a minimum.
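The semi-delta approach described above can be sketched as a comparison of each subject's current history against the last extraction, restating the full history only for subjects that changed. This is an illustrative Python sketch under assumed data shapes, not Visier's implementation:

```python
def semi_delta_load(
    previous: dict[str, list[dict]],
    current: dict[str, list[dict]],
) -> dict[str, list[dict]]:
    """Return the full history of every subject whose history changed
    since the last extraction. Unchanged subjects are skipped, reducing
    data volume while still capturing retroactive corrections."""
    return {
        subject: history
        for subject, history in current.items()
        if previous.get(subject) != history
    }

# Hypothetical extraction: employee E2 gained an event, E1 is unchanged.
last_run = {"E1": [{"event": "Hire"}], "E2": [{"event": "Hire"}]}
this_run = {"E1": [{"event": "Hire"}],
            "E2": [{"event": "Hire"}, {"event": "Promotion"}]}
delta = semi_delta_load(last_run, this_run)
```

Because the changed subject's entire history is restated, a correction made in the source system to an old record is picked up automatically on the next extraction.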

In this section