Vee Architecture, Privacy, and Security

Learn more about the data flow architecture and information security for Vee for Enterprise.

Overview

Vee doesn’t natively understand natural language, so it needs assistance to interpret what a person is asking. Vee partners with third-party large language models (LLMs) to translate and understand the question being asked. The LLM translates the question into a query that Vee uses to generate a response.

Architecture and data flow

Note: Vee uses large language models hosted in Microsoft Azure for the advanced translation of natural language into queries. Customer data is not used for training purposes and no prompt submissions are retained.

The following illustration summarizes the architecture and data flow for Vee.

  1. A user enters a question on the client side in Visier People or Microsoft Teams.
  2. The client makes an authentication request to the Visier server, which contains the core set of technologies underlying Visier People. The Visier server checks the user credentials to verify that the user is valid.
  3. If the user is valid, the server sends an access token to the client. This allows the client to make additional requests to the Visier server.
  4. The client sends the question and any existing chat history to the Visier server. Both the question and chat history move from the customer environment to Visier’s environment. Chat history is retained in the client when the user interacts through a web browser and in a server-side database for the Microsoft Teams integration. The data stored in the database is encrypted and has a short Time To Live (TTL) of approximately 8 hours.
  5. The Visier server sends the question and chat history to the AI server. This is where the communication between Vee and large language models in Microsoft Azure occurs.
  6. In a series of back-and-forth interactions, facilitated by the Azure REST API, the AI server sends the question, chat history, and metadata to the large language models to translate the question into a query that Vee understands. Metadata is any relevant information that can be used to produce the best translation of the natural language question, such as information about Visier's analytic model, analytic objects that exist in the tenant, past inferences, and query examples.
  7. The LLM translates the question into a query function and returns it to the AI server.
  8. The AI server sends the query function to the Visier server.
  9. The Visier server performs the query function and generates a response (narrative and charts) based on the user's data access. The Visier server sends the response data to the client.
  10. The user receives an answer back, in natural language, from Vee.

How Vee is trained

Vee is trained on Visier's analytic model and Blueprint, the complete set of Visier content. Generative AI is only as good as the underlying data it’s trained on. Visier not only brings together all of the disparate data across your HR tech stack, but we also have the content, questions, answers, metrics, and supporting knowledge to understand what the data means and how to explain it to end users. Generative AI makes asking questions easier than ever, and only Visier has the scope of information needed to draw accurate conclusions to nearly any question.

Visier does not train Vee on customer data.

How sensitive data and information is protected

Vee is governed by Visier's trusted data security model. Vee's responses are based on the user's permissions and data access. This means that Vee's answer to the same question may differ from one user to another, while remaining accurate for each.
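As a minimal sketch of this idea (the data, scopes, and function below are invented for illustration and are not Visier's implementation), the same question can produce different answers because each user's data-access scope is applied before the answer is computed:

```python
# Illustrative only: a toy dataset and permission scope, not Visier's model.
DATA = [
    {"employee": "A", "dept": "Sales"},
    {"employee": "B", "dept": "Engineering"},
]

def answer_headcount(visible_depts: set) -> int:
    """Answer 'how many employees?' restricted to the departments
    the requesting user is permitted to see."""
    return sum(1 for row in DATA if row["dept"] in visible_depts)

print(answer_headcount({"Sales"}))                 # a manager scoped to Sales sees 1
print(answer_headcount({"Sales", "Engineering"}))  # an HR admin with full access sees 2
```

Both answers are accurate; they simply reflect different data-access scopes.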

Approach to responsible AI

Visier builds its AI programs, including Vee, in accordance with the following guiding principles:

  • We respect the evolving guidance of legislative authorities globally, including without limitation the Blueprint for an AI Bill of Rights (US), Responsible use of artificial intelligence (AI) (Canada), and the European Commission's proposed EU Regulatory framework for AI (EU).
  • We believe in responsible, measured development, over innovation at all costs.
  • We hold ourselves to high standards of transparency, accountability, and explainability.
  • We value continued human oversight with appropriate checks and balances on AI autonomy.
  • We prioritize data security and limit the sharing and persisting of data.
  • We recognize, understand, and address inherent flaws in AI, including the potential for bias, discrimination, and hallucinations.
  • We are committed to continuing to learn, to evolve, and to reevaluate with each new development.