Tools and templates to create validated and maintainable data charts and dashboards.
Author: @mckinsey


Vizro-MCP

Vizro-MCP is a Model Context Protocol (MCP) server that works alongside an LLM to help you create Vizro dashboards and charts.

Vizro-MCP Demo

Set up Vizro-MCP

Vizro-MCP is best used with Claude Desktop, Cursor, or VS Code. However, it can be used with most LLM products that let you configure MCP servers.

> 💡 Tip: For best performance, we recommend using the claude-4-sonnet model, or another high-performing model of your choice. The auto setting that many hosts offer may lead to inconsistent or unexpected results.

Our documentation offers separate, detailed steps for Claude Desktop, Cursor and VS Code.

Basic configuration

The following instructions are for users who are familiar with MCP server setup and comfortable editing basic configuration files.

Prerequisites

  • You must have downloaded and installed the LLM app you want to configure and use as an MCP host.

  • You must also install either uv or Docker by following the linked instructions.

Set up Vizro-MCP using uv

If you've installed uv, open a terminal window and type uv to confirm that it is available. To get the path to uvx, type the following:

which uvx

Copy the path returned, and add the following to the JSON file used to configure MCP servers for your LLM app. Be sure to substitute your path to uvx, as returned above, for the placeholder given:

{
  "mcpServers": {
    "vizro-mcp": {
      "command": "/placeholder-path/uvx",
      "args": [
        "vizro-mcp"
      ]
    }
  }
}
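
Before connecting it to your LLM app, you can optionally sanity-check the server from a terminal by running the same command the configuration above invokes. This is a quick check that assumes uvx is on your PATH and can fetch the vizro-mcp package:

# Run the command the config above invokes; the server starts and
# waits for MCP requests on stdin. Stop it with Ctrl+C.
uvx vizro-mcp

If this command starts without errors, the JSON configuration should work as well.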

Quick install

| Host | Prerequisite | Link |
| --- | --- | --- |
| Cursor | uv | Install with UVX in Cursor |
| VS Code | uv | Install with UVX in VS Code |

Set up Vizro-MCP using Docker

If you are using Docker, add the following to the JSON file used to configure MCP servers for your LLM app.

{
  "mcpServers": {
    "vizro-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "mcp/vizro"
      ]
    }
  }
}
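
To check that the container runs before configuring your LLM app, you can launch it manually with the same arguments the configuration passes to Docker. This assumes the mcp/vizro image named above is available to your Docker installation; it is pulled on first use:

# Start the same container the config above launches.
# Docker pulls the mcp/vizro image on first use; stop it with Ctrl+C.
docker run -i --rm mcp/vizro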

To use local data with Docker

Mount your data directory or directories into the container with the following extended configuration. Replace the empty src and dst values with the absolute path to your data directory or data file on your machine. For consistency, we recommend that the dst path match the src path.

{
  "mcpServers": {
    "vizro-mcp": {
      "command": "docker",
      "args": [
        "run",
        "-i",
        "--rm",
        "--mount",
        "type=bind,src=,dst=",
        "--mount",
        "type=bind,src=,dst=",
        "mcp/vizro"
      ]
    }
  }
}
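
As an illustration only, a filled-in mount configuration corresponds to a docker run command like the one below, where /Users/alex/vizro-data is a hypothetical data folder; substitute the absolute path to your own data:

# Hypothetical example: expose the local folder /Users/alex/vizro-data
# inside the container, with dst matching src as recommended above.
docker run -i --rm \
  --mount "type=bind,src=/Users/alex/vizro-data,dst=/Users/alex/vizro-data" \
  mcp/vizro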

Quick install

| Host | Prerequisite | Link | Notes |
| --- | --- | --- | --- |
| Cursor | Docker | Install with Docker in Cursor | For local data access, mount your data directory |
| VS Code | Docker | Install with Docker in VS Code | For local data access, mount your data directory |

Disclaimers

Transparency and trust

MCP servers are a relatively new concept, and it is important to be transparent about what the tools are capable of so you can make an informed choice as a user. Overall, the Vizro-MCP server only reads data; it never writes, deletes, or modifies any data on your machine.

Third party API

Users are responsible for anything done via their host LLM application.

Users are responsible for procuring any and all rights necessary to access any third-party generative AI tools and for complying with any applicable terms or conditions thereof.

Users are wholly responsible for the use and security of the third-party generative AI tools and of Vizro.

User acknowledgments

Users acknowledge and agree that:

Any results, options, data, recommendations, analyses, code, or other information ("Outputs") generated by any third-party generative AI tools ("GenAI Tools") may contain inaccuracies, biases, or illegitimate, potentially infringing, or otherwise inappropriate content that may be mistaken, discriminatory, or misleading.

McKinsey & Company:

(i) expressly disclaims the accuracy, adequacy, timeliness, reliability, merchantability, fitness for a particular purpose, non-infringement, safety or completeness of any Outputs,

(ii) shall not be liable for any errors, omissions, or other defects in, delays or interruptions in such Outputs, or for any actions taken in reliance thereon, and

(iii) shall not be liable for any alleged violation or infringement of any right of any third party resulting from the usersโ€™ use of the GenAI Tools and the Outputs.

The Outputs shall be verified and validated by the users and shall not be used without human oversight or as the sole basis for making decisions impacting individuals.

Users remain solely responsible for the use of the Outputs. In particular, users will need to determine the level of human oversight needed given the context and use case, and to inform their personnel and other affected users about the nature of the GenAI Output. Users are also fully responsible for their decisions, actions, use of Vizro and Vizro-MCP, and compliance with applicable laws, rules, and regulations, including but not limited to confirming that the Outputs do not infringe any third-party rights.

Warning and safety usage for generative AI models

Vizro-MCP is designed to be used with generative AI models. Large language models (LLMs) represent significant advancements in the AI field, but as with any powerful tool, there are potential risks associated with connecting to a generative AI model.

We recommend that users research and understand the selected model before using Vizro-MCP. We also recommend that users check the MCP server code before using it.

Users are encouraged to treat AI-generated content as supplementary, always apply human judgment, approach with caution, review the relevant disclaimer page, and consider the following:

  1. Hallucination and misrepresentation: Generative models can produce information that appears factual but is entirely fictitious or misleading. Vendor models may also lack real-time knowledge of events beyond their last update. Vizro-MCP output may vary, and you should always verify critical information. It is the user's responsibility to discern the accuracy, consistency, and reliability of the generated content.

  2. Unintended and sensitive output: The outputs from these models can be unexpected, inappropriate, or even harmful. A human in the loop is essential: users must check and interpret the final output and approach the generated content with caution, especially when it is shared or applied in other contexts.

  3. Data privacy: Your data is sent to model vendors if you connect to LLMs via their APIs. For example, if you connect to a model from OpenAI, your data will be sent to OpenAI via their API. Users should be cautious about sharing or inputting any personal or sensitive information.

  4. Bias and fairness: Generative AI models can exhibit biases present in their training data. Users need to be aware of potential biases in generated outputs and be cautious when interpreting the generated content.

  5. Malicious use: These models can be exploited for various malicious activities. Users should be cautious about how and where they deploy and access such models.

It is crucial for users to remain informed, cautious, and ethical in their applications.