Introduction: Beyond the Chat Window
Most demonstrations of AI workflows treat tools like Claude as mere text generators: open a chat, paste instructions, copy the output, and call it done. This approach works fine for personal tasks or simple coding aids, but it quickly breaks down when building for businesses that operate under strict confidentiality, compliance requirements, and document-heavy processes.

For the past year, I've been developing Claude-powered solutions for law firms—a highly regulated vertical where every output must meet exacting standards. Yet the lessons I've learned extend far beyond the legal sector. Replace a "motion to compel" with an "engineering change order" or a "policy underwriting memo," and the underlying architecture remains remarkably similar. Whether you're serving accountants, doctors, financial advisors, contractors, or any other regulated professional services niche, the same principles apply.
This article breaks down the stack I've refined after numerous dead ends—a framework that treats Claude not as a chat interface but as a layered runtime platform. Here's what actually works.
Rethinking Claude: From Chat Tool to Runtime Platform
The single biggest insight isn't a clever prompt tweak—it's recognizing that Anthropic has built a comprehensive platform, and the chat window is merely the entry point. The real power lies in understanding the full set of primitives available:
- Projects — persistent workspaces that hold files and custom instructions
- Skills — reusable instruction packs that encode your firm's style and processes
- MCP connectors — integrations with Drive, Gmail, Calendar, Slack, and custom servers
- Cowork — agentic, multi-step task execution
- Artifacts — interactive UI elements generated inline
- Memory — context that persists across conversations (on Max plan)
- Claude Code — terminal-native development workflows
- API — for building production-grade products
Once you stop viewing these as individual features and instead as building blocks in a stack, the architecture for any regulated vertical starts to write itself.
The Proven Stack for Vertical Use Cases
After extensive experimentation and many failed approaches, here's the layering that has consistently worked for regulated professional services.
Layer 1: Projects as Workspace per Matter
In a law firm, the fundamental unit of work is a case. In your vertical, it might be an account, a project, a client engagement, or a deal. Whatever the long-running work unit is, map it to a single Claude Project.
Inside each Project, load three critical elements:
- Reference documents — the foundational materials that define the engagement (contracts, regulations, guidelines)
- Project-level instructions — the rules of the road: jurisdiction, preferred style, default output format, absolute no-gos
- Output history — every previous generation becomes context, allowing Claude to build upon past work
Skip this step and every conversation starts from zero. Use it, and Claude walks in pre-loaded with all the relevant context—like an associate who's already read the file.
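As a sketch, project-level instructions for a litigation matter might look like the snippet below. The matter name, jurisdiction, and rules are all illustrative placeholders, not a real firm's configuration:

```markdown
# Project instructions — Acme v. Birch (hypothetical matter)

## Jurisdiction and style
- All filings follow California state court rules.
- Cite per the California Style Manual, not the Bluebook.

## Default output format
- Drafts as structured memos: header, issue, analysis, recommendation.

## Absolute no-gos
- Never speculate about settlement value.
- Never reference documents outside this Project's files.
```

The point is that these rules are written once, at the Project level, so every conversation inside the matter inherits them automatically.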
Layer 2: Skills as Your Firm's Playbook in Code
Skills are arguably the most underutilized feature Anthropic has shipped. They are essentially a SKILL.md file plus optional supporting assets, and they tell Claude exactly how to perform a specific type of work the way your team does it.
For one law firm, I built a Skill for drafting demand letters. It encodes:

- Required structure and sections
- Citation format and style rules
- Desired tone (formal, persuasive, but not aggressive)
- Preferred phrasings from the senior partner
- Things to never include (e.g., speculation about settlement amounts)
The result: any associate at the firm can invoke that Skill and produce an output that reads as if the senior partner drafted it themselves. This pattern generalizes to any repeatable workflow in your organization.
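A minimal sketch of what such a SKILL.md might contain — the structure follows Anthropic's documented format (YAML frontmatter with `name` and `description`, then markdown instructions), but the sections and phrasing here are illustrative, not the firm's actual file:

```markdown
---
name: demand-letter
description: Draft demand letters in the firm's house style.
---

# Demand letter drafting

## Structure
1. Introduction identifying the parties and the claim
2. Factual background with record citations
3. The demand and a response deadline
4. Consequences of non-compliance

## Tone
Formal and persuasive; never aggressive or threatening.

## Never include
- Speculation about settlement amounts
- Admissions on behalf of the client
```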
Here's an example structure for a firm's skill library:

    skills/
        demand-letter/SKILL.md
        client-intake-summary/SKILL.md
        discovery-response/SKILL.md
        weekly-status-update/SKILL.md
Layer 3: MCP Connectors for Live Data
Static context isn't enough. In regulated verticals, decisions often rely on real-time information—calendars, emails, shared documents, or internal databases. MCP (Model Context Protocol) connectors allow Claude to pull in data from external sources without manual copy-paste. For a legal use case, connecting to a firm's document management system means Claude can reference the latest version of a contract or check precedents automatically. This layer eliminates copy-paste transcription errors and keeps outputs grounded in the current state of the matter rather than in a stale snapshot.
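To make the shape of this concrete, here is a hypothetical tool handler of the kind an MCP server would expose. Everything here—the in-memory document store, the matter IDs, the `latest_document` function—is invented for illustration; a real connector would be built on the MCP SDK and query your actual document management system:

```python
# Hypothetical stand-in for an MCP tool that fetches the latest
# version of a document from a firm's document management system.
from dataclasses import dataclass


@dataclass
class DocumentVersion:
    matter_id: str
    name: str
    version: int
    text: str


# Toy in-memory "DMS" keyed by (matter_id, document name).
_DMS: dict[tuple[str, str], list[DocumentVersion]] = {
    ("M-1042", "services-agreement"): [
        DocumentVersion("M-1042", "services-agreement", 1, "Draft v1 ..."),
        DocumentVersion("M-1042", "services-agreement", 2, "Draft v2 ..."),
    ],
}


def latest_document(matter_id: str, name: str) -> DocumentVersion:
    """Return the highest-numbered version, as the MCP tool would."""
    versions = _DMS[(matter_id, name)]
    return max(versions, key=lambda v: v.version)
```

The value of the pattern is that version selection happens at the source of truth, so Claude never reasons over an outdated draft someone pasted into the chat.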
Layer 4: Cowork and Artifacts for Interactive Workflows
When a task requires multiple steps—like reviewing a document, generating a summary, and then producing a client-ready report—Cowork handles the orchestration. It can chain together Skills, call MCP connectors, and even generate Artifacts (like a draft email or a formatted memo) as intermediate outputs. This turns Claude from a single-answer machine into a full-fledged collaborator that can manage complex, conditional workflows.
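The review → summarize → report chain can be pictured as a simple pipeline. The step names and functions below are purely illustrative—in practice Cowork plans and executes the steps itself; this sketch only shows the shape of the orchestration:

```python
# Illustrative sketch of a multi-step workflow: each step takes the
# running context dict and returns an updated copy.
from typing import Callable

Step = Callable[[dict], dict]


def review(ctx: dict) -> dict:
    # e.g. flag issues found while reading the source document
    return {**ctx, "issues": ["late delivery clause is ambiguous"]}


def summarize(ctx: dict) -> dict:
    return {**ctx, "summary": f"{len(ctx['issues'])} issue(s) found"}


def report(ctx: dict) -> dict:
    # final client-ready artifact built from intermediate outputs
    return {**ctx, "artifact": f"Client memo: {ctx['summary']}"}


def run_workflow(steps: list[Step], ctx: dict) -> dict:
    for step in steps:  # conditional branching would slot in here
        ctx = step(ctx)
    return ctx


result = run_workflow([review, summarize, report], {"document": "contract.pdf"})
```

Each step sees everything the previous steps produced, which is exactly what makes the agentic version more useful than three disconnected prompts.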
Key Takeaways for Building in Regulated Verticals
The chat-window approach fails in regulated environments because it ignores the need for context, consistency, and compliance. By adopting a platform mindset and leveraging Claude's layered primitives, you can build systems that:
- Respect confidentiality boundaries (through Project isolation and access controls)
- Enforce firm-specific style and process (via Skills)
- Maintain coherence across long-running engagements (via Project memory)
- Integrate with existing tools and data sources (via MCP connectors)
- Scale from simple tasks to complex, multi-step workflows (via Cowork)
The vertical doesn't matter as much as the architecture. Whether you're in law, accounting, finance, healthcare, or engineering, the stack described here provides a robust foundation. The crucial step is to stop treating Claude as a fancy text box and start using it as the runtime platform it was designed to be.