docs/migrations/v8_NEW_NETWORKING_SETUP.md
This project has 3 components:
The MCP clients (e.g., Cursor, VS Code, Windsurf, Claude Code) are how users interact with our systems. They communicate with the MCP server by sending commands. The MCP server communicates with our Unity plugin, which reports on the actions it completed (for function tools) or returns data (for resources).
The MCP protocol defines how clients and servers can communicate, but we have to get creative when communicating with Unity. Let's learn more.
MCP servers support communicating over stdio or via Streamable HTTP.
MCP for Unity communicates via stdio. Specifically, the MCP server and the MCP client use stdio to communicate. The MCP server and the Unity plugin communicate via a locally opened port, typically 6400, though users can change it to any port.
Why can't the Unity plugin communicate with the server over stdio like the MCP client does? When we create MCP configs that use uvx, MCP clients run the command in an internal subprocess and communicate with the MCP server via stdio (think pipes in the OS, e.g. ls -l | grep "example.txt").
Unity can't reach that internal subprocess, so instead we listen on port 6400, which the MCP server can open to send and receive data.
Note: Previously we used uv, and we installed the server locally in the plugin. Now we use uvx, which installs the server for us directly from our GitHub repo.
When the user sends a prompt:
In this new version of MCP for Unity, our MCP server supports both the stdio and HTTP protocols.
We create MCP configs that reference a URL, by default http://localhost:8080, though users can change it to any address. MCP clients connect to the running HTTP server and communicate with the MCP server via HTTP POST requests carrying JSON. Unlike with stdio, the MCP server is not a subprocess of the client; it runs independently of all other components.
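For illustration, an HTTP-mode MCP config could look roughly like this. The exact key names vary by client, the server name here is a placeholder, and some clients expect a path appended to the base URL:

```json
{
  "mcpServers": {
    "MCPForUnity": {
      "url": "http://localhost:8080"
    }
  }
}
```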
What about the MCP server and Unity? We could maintain the communication channel that's used in stdio i.e. communicating via port 6400. However, this would limit the HTTP server to only being run locally. A remote HTTP server would not have access to a user's port 6400 (unless users open their ports to the internet, which may be hard for some and is an unnecessary security risk).
To work with both locally and remotely hosted HTTP servers, we set up a WebSocket connection between Unity and the MCP server. This allows real-time communication between the two components.
When the user sends a prompt:
MCP for Unity previously only supported local connections with MCP clients via stdio. Now, we continue to support local stdio connections, but additionally support local HTTP connections and remote HTTP connections.
Let's discuss both technical and political reasons:
- Want to keep using uv? You can. But users no longer need uv installed to use MCP for Unity.
- HTTP connections do not require uv to be installed. uv has benefits for non-asset-store users, but to avoid maintaining this server, we'll explore running the MCP server inside the Unity plugin as a background process using the official MCP C# SDK.

Significant changes were made to the server and Unity plugin to support the HTTP protocol, as well as the new WebSocket connection, with the right amount of abstraction to support both stdio and HTTP.
server.py is still the main entrypoint for the backend, but it has been modified to set up both HTTP and stdio connections. It processes command-line arguments or environment variables for HTTP mode; CLI args take precedence over environment variables. The following code runs the server:

```python
mcp.run(transport=transport, host=host, port=port)
```
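The precedence logic can be sketched like this. Flag and environment variable names below are illustrative, not the server's actual ones:

```python
import argparse

def resolve_http_settings(argv, env):
    """Resolve transport settings; CLI args win over environment variables."""
    parser = argparse.ArgumentParser()
    parser.add_argument("--transport", choices=["stdio", "http"], default=None)
    parser.add_argument("--host", default=None)
    parser.add_argument("--port", type=int, default=None)
    args = parser.parse_args(argv)

    # CLI value first, then environment variable, then a hard-coded default.
    transport = args.transport or env.get("UNITY_MCP_TRANSPORT", "stdio")
    host = args.host or env.get("UNITY_MCP_HOST", "127.0.0.1")
    port = args.port or int(env.get("UNITY_MCP_PORT", "8080"))
    return transport, host, port

# CLI overrides the environment:
print(resolve_http_settings(["--port", "9000"], {"UNITY_MCP_PORT": "8080"}))
# → ('stdio', '127.0.0.1', 9000)
```

In the real server, the resolved values then feed the mcp.run call above.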
And that's pretty much it in terms of HTTP support between the MCP server and client. Things get more interesting for the connection to the Unity plugin.
Backward compatibility with stdio connections was maintained, but we did make some small performance optimisations. Namely, we keep an in-memory cache of Unity instances using the StdioPortRegistry class.
It still calls PortDiscovery.discover_all_unity_instances(), but we add a lock around the call, so multiple simultaneous attempts to retrieve the instances don't trigger multiple file scans at once.
The UnityConnection class uses the cached ports to retrieve the open port for a specific instance when creating a new connection and when sending a command.
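The shape of that caching can be sketched as follows. This is a minimal illustration of the idea, not the real StdioPortRegistry, and the discovery function here is a cheap stand-in for the file scan:

```python
import threading

def discover_all_unity_instances():
    # Stand-in for PortDiscovery.discover_all_unity_instances(), which scans
    # status files on disk; assume it's expensive.
    return {"MyProject": 6400}

class StdioPortRegistry:
    """Cache discovery results and serialise the scan behind a lock."""

    def __init__(self):
        self._lock = threading.Lock()
        self._instances = None

    def get_instances(self, refresh=False):
        # Concurrent callers wait on the lock instead of each re-scanning.
        with self._lock:
            if self._instances is None or refresh:
                self._instances = discover_all_unity_instances()
            return self._instances

registry = StdioPortRegistry()
print(registry.get_instances()["MyProject"])  # → 6400
```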
For WebSocket connections, we need to understand the PluginHub and the PluginRegistry classes. The plugin hub is what manages the WebSocket connections with the MCP server in-memory. It also has the send_command_for_instance function, which actually sends the command to the Unity plugin.
The in-memory mapping of sessions to WebSockets connections in the plugin hub is done via the PluginRegistry class.
Are you wondering whether every function tool needs to call send_command_for_instance directly and choose between WebSockets and stdio on every invocation? No. To keep tool calls as simple as possible, all a tool has to do is call send_with_unity_instance, which delegates the actual sending of data to either send_command_for_instance or send_fn, a function passed in the arguments of send_with_unity_instance (typically async_send_command_with_retry).
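The delegation can be sketched like this. Signatures and the transport-selection flag are illustrative; the real code decides based on the connection state rather than a boolean argument:

```python
import asyncio

async def send_command_for_instance(instance_id, command):
    # Stand-in for the PluginHub method that pushes a command over WebSocket.
    return {"status": "ok", "via": "websocket", "instance": instance_id}

async def async_send_command_with_retry(instance_id, command):
    # Stand-in for the stdio/TCP path.
    return {"status": "ok", "via": "stdio", "instance": instance_id}

async def send_with_unity_instance(instance_id, command, use_websocket,
                                   send_fn=async_send_command_with_retry):
    """Single entry point for tools: picks the transport so they don't have to."""
    if use_websocket:
        return await send_command_for_instance(instance_id, command)
    return await send_fn(instance_id, command)

result = asyncio.run(send_with_unity_instance("abc123", {"tool": "ping"},
                                              use_websocket=True))
print(result["via"])  # → websocket
```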
Let's start with how things worked before this change. The MCPForUnityBridge was a monolith of all networking logic. As we began to develop a service architecture, we created the BridgeControlService to wrap the MCPForUnityBridge class, to make the migration to the new architecture easier.
The BridgeControlService called functions in the MCPForUnityBridge, which managed the state and processing for TCP communication.
In this version, BridgeControlService wraps the TransportManager and no longer has hardcoded logic specific to stdio. The TransportManager object manages the network state and delegates the actual networking logic to the appropriate transport client, either WebSocket or stdio. The TransportManager interacts with objects that implement the IMcpTransportClient interface.
The legacy McpForUnityBridge was renamed and moved to StdioBridgeHost. The StdioTransportClient class is a thin wrapper over the StdioBridgeHost class, that implements the IMcpTransportClient interface. All the logic for the WebSocket connection is in the WebSocketTransportClient class.
Since we support both HTTP and stdio connections, we had to do some work around the MCP config builders. The major change was reworking how stdio connections are constructed: they now use uvx with the remote package instead of the locally bundled server and uv. HTTP configs are much simpler.
The remote git URL we use to get the package is versioned, which added some complications. We frequently make changes to the main branch of this repo, some are breaking (the last version before this was v7, which was a major breaking change as well). We don't control how users update their MCP for Unity package. So if we point to the main branch, their plugin could be talking to an incompatible version of the server.
To address this, we have a process to auto-update stdio configurations. The StdIoVersionMigration class runs when the plugin is loaded. It checks a new editor pref that stores the last version we upgraded clients to. If the plugin was updated, the package version will no longer match the editor pref's version, and we'll cycle through the user's configured MCP clients and re-configure them.
This way, whenever a user updates the plugin, they will automatically point to the correct version of the MCP server for their MCP clients to use.
Relevant commits:
The new HTTP config and the new stdio config using uvx are a departure from the previous MCP configs, which used uv and a path to server.py. No matter the protocol, all users would have to update their MCP configs. Not all users are on Discord where we can reach them, and not all our Discord users read our messages in any case. Forcing users to update their configs after updating is something they can easily ignore or forget.
So we added the LegacyServerSrcMigration class. It looks for the MCPForUnity.UseEmbeddedServer editor pref, which was used in earlier versions. If this pref exists, we will reconfigure all of a user's MCP clients (defaulting to HTTP) at startup. The editor pref is then deleted, so this config update only happens once.
Relevant commits:
This version contains numerous other updates, including:
uvx instead of uv

Previously, the MCP server was bundled with the plugin in the UnityMcpServer~/src folder. I don't have the context as to why this was done, but I imagine uv support for running remote packages was not available or popular at the time this repo was created.
By using uvx and remote packages, we can safely offload all aspects of server file management from our plugin.
Relevant commits:
Previously we had async_send_with_unity_instance and send_with_unity_instance functions to communicate with Unity. Now we only have send_with_unity_instance, and it's asynchronous.
This was required for the HTTP server, because we cannot block the event loop with synchronous commands. This change does not affect how the server works when using stdio.
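To see why synchronous sends were a problem in HTTP mode, here is a minimal sketch (not the actual server code) of offloading a blocking send so the event loop stays responsive while other requests are served:

```python
import asyncio
import time

def blocking_send(command):
    # Stand-in for a synchronous socket send/receive round trip.
    time.sleep(0.1)
    return {"echo": command}

async def handle_request(command):
    # Offload the blocking call to a worker thread; awaiting it directly
    # on the event loop would stall every other in-flight HTTP request.
    return await asyncio.to_thread(blocking_send, command)

async def main():
    # Two requests overlap instead of running back to back.
    return await asyncio.gather(handle_request("a"), handle_request("b"))

print(asyncio.run(main()))  # → [{'echo': 'a'}, {'echo': 'b'}]
```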
Relevant commits:
Custom tools were revamped once more; this time they've reached the simplest version we wanted them to have: custom tools are written entirely in C#, no Python required. How does it work?
Like before, we use reflection on the McpForUnityToolAttribute. However, the attribute now accepts a name, description, and AutoRegister. The AutoRegister boolean is true by default, but for our core tools it's false, as their tool details and parameters aren't defined in C# yet.
Parameters are defined using the ToolParameterAttribute, which contains Name, Description, Required, and DefaultValue properties.
The ToolDiscoveryService class uses reflection to find all classes with McpForUnityToolAttribute. It does the same for ToolParameterAttribute. With that data, it constructs a ToolMetadata object. These tools are stored in an in-memory dictionary that maps tool names to their metadata.
When we initiate a WebSocket connection, after successfully registering and retrieving a session ID, we call the SendRegisterToolsAsync function. This function sends a JSON payload to the server with all the tools that were found by the ToolDiscoveryService.
In the plugin_hub's on_receive handler, we look out for the register_tools message type and map the tools to the session ID. This is important: we only want custom tools to be available for the project they've been added to.
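The session mapping can be sketched like this; function and field names are illustrative, not the actual plugin_hub implementation:

```python
import json

# Tools are scoped per session so one project's custom tools never
# leak into another connected project.
tools_by_session = {}

def on_receive(session_id, raw_message):
    message = json.loads(raw_message)
    if message.get("type") == "register_tools":
        # Index the registered tools by name, keyed by the session that sent them.
        tools_by_session[session_id] = {t["name"]: t for t in message["tools"]}

payload = json.dumps({
    "type": "register_tools",
    "tools": [{"name": "spawn_cube", "description": "Creates a cube"}],
})
on_receive("session-1", payload)
print(list(tools_by_session["session-1"]))  # → ['spawn_cube']
```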
That requirement of keeping tools local to the project made this implementation a bit trickier. We have the requirement because multiple Unity instances can run at the same time, so it doesn't make sense to make every tool globally available to all connected projects.
To make tools local to the project, we add a mcpforunity://custom-tools resource, which lists all tools mapped to a session (retrieved from FastMCP's context). We then add an execute_custom_tool function tool, which can call the tools the user added. This worked surprisingly well, but required some tweaks:
- set_active_instance must be called so the mapping between session IDs and Unity instances is correct.
- A read_resources tool simply did not work: LLMs would go in circles for a long time before actually reading the resource directly. The current approach only works because MCP resources carry up-to-date information and give the MCP clients the right context to call the tools.

Note: FastMCP can register and deregister tools while the server is running; however, not all MCP clients can process the updates in real time. We recommend that users refresh/reconfigure the MCP servers in their clients so they can see the new custom tools.
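The per-session lookup behind the resource and the execute_custom_tool tool can be sketched like this. Names are ours for illustration, not the server's exact API; the real code resolves the session ID from FastMCP's context:

```python
# Each session (one per connected Unity project) sees only its own tools.
session_tools = {
    "session-1": {"spawn_cube": {"description": "Creates a cube"}},
    "session-2": {"bake_lights": {"description": "Bakes lighting"}},
}

def list_custom_tools(session_id):
    # Backs the mcpforunity://custom-tools resource: only this project's tools.
    return sorted(session_tools.get(session_id, {}))

def execute_custom_tool(session_id, tool_name, params):
    if tool_name not in session_tools.get(session_id, {}):
        return {"error": "unknown tool for this project: " + tool_name}
    # Real code would forward the call to the matching Unity instance.
    return {"status": "ok", "tool": tool_name, "params": params}

print(list_custom_tools("session-1"))  # → ['spawn_cube']
```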
Relevant commits:
The main MCPForUnityEditorWindow.cs class, and the related uxml and uss files, were getting quite long. We had a similar problem with the previous immediate-mode UI version of it. To keep it maintainable, we split the logic into 3 separate view classes: Settings, Connection and ClientConfig. They correspond to the 3 visual sections the window has.
Each section has its own C#, uxml and uss files, but we use a common uss file for shared styles.
Relevant commits:
The Setup Wizard also got revamped. For starters, it's no longer a wizard, just a single window with a status and instructions. We check whether Python and uv are installed by calling them in a process and reading their versions; that's the most reliable indicator that they're correctly installed. Otherwise, we have buttons that open the webpages for users to download them as needed.
Relevant commits:
Previously, the Response class had helper functions that returned generic objects. It's not the worst option, but we've improved it with strongly typed classes. Instead of Response.Success() returning a success message, we now return SuccessResponse objects. Similarly, Response.Error() now returns ErrorResponse objects.
JSON serialization is exactly the same, but it's clearer in the code what's being transmitted between the client and server.
Relevant commits:
- The McpLog object now has a Debug function, which only logs when the user selects "Show debug logs" in the settings.
- EditorPrefs are defined in MCPForUnity/Editor/Constants/EditorPrefKeys.cs. At a glance, we can see all the settings we have available.

This was a big change, and it touches the whole repo, so a lot of inefficiencies and room for improvement were exposed while working on it. Here are some items to address:
- Several files define their own _coerce_int function. Why? Why are we redefining a function that's the same across files? Can we use a shared function, or maybe use it as middleware?
- A DummyMCP class is defined in 10 server tests; we could set this up in conftest.py. These tests were originally independent of the Server project, but in v7 they became integration tests we run with pytest. With pytest being the default test runner, we can relook at how the tests are structured and optimise their setup.
- server_version.txt is used in one place, but the server can now read its own pyproject.toml to get the version, so we can remove it.
- The tools, resources and registry folders make sense, but everything else just forms part of the high-level repo. It's growing, so some thought about how we create modules will help with scalability.