Vee Architecture, Privacy, and Security

Learn more about the data flow architecture and information security for Vee for Enterprise.

Overview

Vee doesn’t speak the same language that a person speaks, so it needs some assistance to understand natural language input.

Architecture and data flow

Vee uses large language models hosted in Microsoft Azure for the advanced tasks of translating natural language into query language and generating summaries of text. Customer data is not used for training purposes, and no prompt submissions are retained.

Asking a question

Vee partners with third-party large language models (LLMs) to help translate and understand the question being asked. The LLM translates the question into a query that Vee uses to generate a response. The following illustration summarizes the architecture and data flow for Vee when a user asks a question.

  1. A user enters a question on the client side in Visier People or Microsoft Teams.
  2. The client makes an authentication request to the Visier server, which contains the core set of technologies underlying Visier People. The Visier server checks the user credentials to verify that the user is valid.
  3. If the user is valid, the server sends an access token to the client. This allows the client to make additional requests to the Visier server.
  4. The client sends the question and any existing chat history to the Visier server. Chat history provides context for the user's intent and includes only the user's previous questions, excluding responses from Vee. Both the question and chat history move from the customer environment to Visier’s environment. Chat history is retained in the client when the user interacts through a web browser, and in a server-side database when the user interacts through the Microsoft Teams integration. The data stored in the database is encrypted and has a short Time To Live (TTL) of approximately 8 hours.
  5. The Visier server sends the question and chat history to the AI server. This is where the communication between Vee and large language models in Microsoft Azure occurs.
  6. In a series of back-and-forth interactions, facilitated by the Azure REST API, the AI server sends the question, chat history, and metadata to the large language models to translate the question into a query that Vee understands. Metadata is any relevant information that can be used to produce the best translation of the natural language question, such as information about Visier's analytic model, analytic objects that exist in the tenant, past inferences, and query examples.
  7. The LLM translates the question into a query function and returns it to the AI server.
  8. The AI server sends the query function to the Visier server.
  9. The Visier server executes the query function and generates a response (narrative and charts) based on the user's data access. The Visier server then sends the response data to the client.
  10. The user receives an answer from Vee in natural language. (This request flow is sketched below.)
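
To make the sequence concrete, the following sketch traces the client side of this flow. It is illustrative only: the endpoint paths, payload fields, and function names are hypothetical and do not represent Visier's actual API.

    import requests  # third-party HTTP client

    VISIER_SERVER = "https://example-tenant.visier.example"  # hypothetical host

    def ask_vee(question, chat_history, username, password):
        # Steps 2-3: authenticate and receive an access token.
        auth = requests.post(
            f"{VISIER_SERVER}/auth/token",
            json={"username": username, "password": password},
        )
        auth.raise_for_status()
        token = auth.json()["access_token"]

        # Step 4: send the question and prior questions (chat history) to the
        # Visier server. Chat history provides intent context; Vee's earlier
        # responses are not included.
        answer = requests.post(
            f"{VISIER_SERVER}/vee/ask",
            headers={"Authorization": f"Bearer {token}"},
            json={"question": question, "chatHistory": chat_history},
        )
        answer.raise_for_status()

        # Steps 5-9 happen server side: the AI server prompts the Azure-hosted
        # LLM with the question, chat history, and metadata; the LLM returns a
        # query function; the Visier server executes it under the user's data
        # access and builds the narrative and charts.
        return answer.json()  # step 10: e.g. {"narrative": "...", "charts": [...]}

In the Microsoft Teams integration, the same exchange occurs, with chat history held in the encrypted, short-lived server-side store described in step 4.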

Requesting explanations and conversation summaries

Vee partners with third-party large language models (LLMs) to summarize text. The following illustration summarizes the architecture and data flow for Vee when a user requests an AI explanation in a Vee Board or a summary of their conversation with Vee.

Note: The conversation summary feature (Vee Summarization) is in limited availability. If you are interested, please contact your Customer Success Manager.

  1. A user requests an AI explanation in a Vee Board (analysis) or a summary of their conversation with Vee on the client side in Visier People.
  2. The client makes an authentication request to the Visier server, which contains the core set of technologies underlying Visier People. The Visier server checks the user credentials to verify that the user is valid.
  3. If the user is valid, the server sends an access token to the client. This allows the client to make additional requests to the Visier server.
  4. The client sends the analysis or chat history to the Visier server. Both the analysis and chat history move from the customer environment to Visier’s environment. Chat history is retained in the client when the user interacts through a web browser, and in a server-side database when the user interacts through the Microsoft Teams integration. The data stored in the database is encrypted and has a short Time To Live (TTL) of approximately 8 hours.
  5. The Visier server sends the analysis or chat history to the AI server. This is where the communication between Vee and large language models in Microsoft Azure occurs.
    • To provide an explanation, the analysis ID, its content, metadata describing the content's structure, and the data within the analysis are sent.
    • To provide a chat summary, the complete conversation thread for the current user session (user questions and Vee's responses) is sent. Responses may include narrative content and organizational data.
  6. The AI server sends the analysis or chat history, along with a prompt, to the LLM. The prompt contains instructions that guide the LLM in generating an explanation or summary in natural language. (Prompt assembly is sketched below.)
  7. The LLM generates an explanation or summary and returns it to the AI server.
  8. The AI server sends the explanation or summary to the Visier server.
  9. The Visier server sends the explanation or summary to the client.
  10. The user receives an AI explanation in a Vee Board or a summary of their conversation with Vee.
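
The prompt in step 6 can be pictured as follows. This is a conceptual sketch: the prompt wording and data shapes are invented for illustration and are not Visier's actual prompts or interfaces.

    def build_summary_prompt(chat_thread):
        # chat_thread is a list of {"role": ..., "text": ...} turns covering the
        # current user session (user questions and Vee's responses).
        transcript = "\n".join(f"{turn['role']}: {turn['text']}" for turn in chat_thread)
        return (
            "Summarize the following conversation between a user and Vee in plain "
            "language. Highlight the questions asked and the key findings.\n\n"
            + transcript
        )

    # The AI server sends the assembled prompt to the Azure-hosted LLM and relays
    # the generated summary back through the Visier server to the client (steps 7-9).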

How Vee is trained

Vee is trained on Visier's analytic model and Blueprint, the complete set of Visier content. Generative AI is only as good as the underlying data it’s trained on. Visier not only brings together the disparate data across your HR tech stack, but also has the content, questions, answers, metrics, and supporting knowledge to understand what the data means and how to explain it to end users. Generative AI makes asking questions easier than ever, and only Visier has the scope of information needed to draw accurate conclusions about nearly any question.

Visier does not train Vee on customer data.

How sensitive data and information is protected

Vee is governed by Visier's trusted data security model. Vee's responses are based on the user's permissions and data access. This means that two users can ask the same question and receive different answers, each accurate for the data that user is allowed to see.
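
As a simple illustration of permission-scoped answers, consider two users asking the same headcount question. The departments, access rules, and numbers below are invented for illustration; they show the principle, not Visier's security implementation.

    # Invented example data: headcount by department and each user's data access.
    HEADCOUNT = {"Sales": 120, "Engineering": 310, "Finance": 45}
    USER_ACCESS = {
        "regional_manager": {"Sales"},                                # limited access
        "hr_business_partner": {"Sales", "Engineering", "Finance"},  # full access
    }

    def answer_headcount(user):
        # Answer "What is our headcount?" using only the data the user can see.
        visible = USER_ACCESS[user]
        return sum(count for dept, count in HEADCOUNT.items() if dept in visible)

    print(answer_headcount("regional_manager"))     # 120
    print(answer_headcount("hr_business_partner"))  # 475

Both answers are accurate; they simply reflect each user's data access.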

Approach to responsible AI

Visier builds its AI programs, including Vee, in accordance with the following guiding principles:

  • We respect the evolving guidance of legislative authorities globally, including without limitation the Blueprint for an AI Bill of Rights (US), Responsible use of artificial intelligence (AI) (Canada), and the European Commission's proposed EU Regulatory framework for AI (EU).
  • We believe in responsible, measured development, over innovation at all costs.
  • We hold ourselves to high standards of transparency, accountability, and explainability.
  • We value continued human oversight with appropriate checks and balances on AI autonomy.
  • We prioritize data security and limit the sharing and persistence of data.
  • We recognize, understand, and address inherent flaws in AI, including the potential for bias, discrimination, and hallucinations.
  • We are committed to continuing to learn, to evolve, and to reevaluate with each new development.

Opt out of AI features

Your organization can turn off all AI features that send data to third-party LLMs for natural language processing. This includes features like AI explanations in Vee Boards. Administrators can opt out in Studio by clicking Settings > AI Features in the global workspace.