Anthropic has released a third-party platform configuration guide for Claude Cowork, aimed at helping IT administrators quickly complete enterprise deployments on mainstream cloud platforms such as Amazon Bedrock, Google Cloud Vertex AI, and Azure AI Foundry.

Multi-platform Compatibility and Flexible Deployment

This update gives enterprises considerable deployment flexibility. According to the official documentation, Claude Cowork fully supports macOS 13.0 (Ventura) and later, as well as Windows 10 and 11. Notably, Windows users must enable the "Virtual Machine Platform" feature before installation. For IT departments seeking centralized management, the new version supports configuring macOS through MDM tools such as Jamf and Kandji, and bulk deployment on Windows via Intune and Group Policy.
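For a local sanity check before pushing a profile through MDM, the managed-preferences payload can be sketched as a plist. Note that the preference domain is omitted here and both key names below are hypothetical placeholders, not taken from the guide; real deployments should use the key names in Anthropic's MDM configuration reference table.

```shell
# Hedged sketch of a managed-preferences payload of the kind Jamf,
# Kandji, or Intune would push. The key names are HYPOTHETICAL
# placeholders -- take the real keys from the MDM reference table
# in the official guide.
PAYLOAD="$(mktemp)"
cat > "$PAYLOAD" <<'EOF'
<?xml version="1.0" encoding="UTF-8"?>
<plist version="1.0">
<dict>
  <!-- hypothetical keys, for illustration only -->
  <key>AllowDeveloperMode</key>
  <true/>
  <key>TelemetryEnabled</key>
  <false/>
</dict>
</plist>
EOF
grep -c '<key>' "$PAYLOAD"   # count of managed keys in the payload
```

An MDM would deliver the equivalent settings as a signed configuration profile; writing the plist by hand is only useful for testing on a single machine.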


Graphical Interface Simplifies Configuration Process

To lower the technical barrier, the Claude desktop app introduces a new "Developer Mode" settings UI. Administrators can enable this mode from the "Troubleshooting" option in the "Help" menu without logging in, and then configure third-party inference endpoints directly. This graphical approach makes setting key parameters such as API keys, service-account JSON files, and gateway base URLs more intuitive and efficient.
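For scripted deployments, the same parameters the Developer Mode UI exposes could instead be supplied as environment variables. The variable names below follow the pattern documented for Claude Code (e.g. `CLAUDE_CODE_USE_BEDROCK`); whether Cowork reads the same names is an assumption to verify against the guide.

```shell
# Backend selection via environment variables -- a sketch, assuming
# Cowork honors the variables documented for Claude Code.

# Amazon Bedrock: route inference through Bedrock with AWS credentials.
export CLAUDE_CODE_USE_BEDROCK=1
export AWS_REGION=us-east-1

# Google Vertex AI (alternative): service-account JSON plus project ID.
# export CLAUDE_CODE_USE_VERTEX=1
# export ANTHROPIC_VERTEX_PROJECT_ID=my-gcp-project        # placeholder
# export GOOGLE_APPLICATION_CREDENTIALS=/etc/claude/sa.json

# Corporate gateway (alternative): override the base URL; the API key
# should come from a secret store rather than be hard-coded.
# export ANTHROPIC_BASE_URL=https://llm-gateway.example.com  # placeholder
# export ANTHROPIC_API_KEY="$GATEWAY_API_KEY"

echo "backend: ${CLAUDE_CODE_USE_BEDROCK:+bedrock}"
```

Only one backend should be active at a time; the commented-out alternatives show the shape of the Vertex AI and gateway configurations.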

Enhanced Plugin Expansion and Collaboration Features

On the application side, Claude Cowork significantly enhances collaboration capabilities. The system distributes plugins by mounting local directories, allowing enterprises to customize skills and MCP servers for specific roles. After a successful deployment, the user interface gains "Cowork" and "Code" tabs, with the latter supporting terminal-based coding tasks directly in the host environment.
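The mounted-directory distribution model might look like the sketch below. Every path, file name, and the MCP registration schema here are illustrative assumptions, not taken from the guide.

```shell
# Illustrative layout for a locally mounted plugin directory bundling
# role-specific skills and MCP server registrations. All names and the
# JSON schema are assumptions, not from the official guide.
PLUGIN_DIR="$(mktemp -d)/cowork-plugins"
mkdir -p "$PLUGIN_DIR/skills/code-review" "$PLUGIN_DIR/mcp"

# A role-specific skill definition (front matter only, for brevity).
cat > "$PLUGIN_DIR/skills/code-review/SKILL.md" <<'EOF'
---
name: code-review
description: Review changes against the team style guide.
---
EOF

# A hypothetical MCP server registration for an internal tool.
cat > "$PLUGIN_DIR/mcp/servers.json" <<'EOF'
{ "tickets": { "command": "npx", "args": ["-y", "internal-mcp-tickets"] } }
EOF

find "$PLUGIN_DIR" -type f | sort
```

An IT team could version such a directory in Git and mount it read-only, so every role receives the same vetted skills and MCP servers.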

Strict Security and Management Mechanisms

To meet compliance requirements in enterprise settings, the configuration guide details multiple management controls. Administrators can customize token usage limits (token caps) and their statistical windows, and can disable unnecessary telemetry uploads through MDM policies. In addition, the system supports OTLP integration, enabling real-time monitoring of prompts, tool calls, and token consumption through OpenTelemetry, ensuring transparency and security in the use of AI assets.
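OTLP export is typically wired up through the standard OpenTelemetry SDK environment variables. The `CLAUDE_CODE_ENABLE_TELEMETRY` switch below is the one documented for Claude Code; treating it as valid for Cowork is an assumption, and the collector endpoint is a placeholder.

```shell
# Sketch: exporting metrics and logs over OTLP to an in-house
# OpenTelemetry Collector. The OTEL_* variables are standard
# OpenTelemetry SDK configuration; the Claude-specific switch follows
# Claude Code's documentation and is an assumption for Cowork.
export CLAUDE_CODE_ENABLE_TELEMETRY=1
export OTEL_METRICS_EXPORTER=otlp
export OTEL_LOGS_EXPORTER=otlp
export OTEL_EXPORTER_OTLP_PROTOCOL=grpc
export OTEL_EXPORTER_OTLP_ENDPOINT=http://otel-collector.internal:4317  # placeholder host
```

With a collector receiving on port 4317, prompt, tool-call, and token-consumption metrics can then be forwarded to whatever backend the organization already monitors.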

Currently, the MDM configuration reference table has been updated to the April 2026 version. This series of deployment optimizations marks a significant improvement in the efficiency of running advanced AI models in private clouds and managed environments.

Official Documentation:

https://support.claude.com/en/articles/14680741-install-and-configure-claude-cowork-with-third-party-platforms#h_c00b8c02e0