
I Built a Zero-Framework MCP Server for Targetprocess in Java 21


Every morning I open Targetprocess, navigate to my board, filter by assignee, check sprint progress, and look for blockers. Then I open my AI assistant to plan the day — and immediately have to describe everything I just saw, manually, in a prompt.

That's the gap I wanted to close.

What is MCP and why does it matter?

Model Context Protocol (MCP) is an open standard that lets AI assistants call external tools directly. Instead of you describing your data to the AI, the AI fetches it itself. It's the difference between pasting "I have 14 open bugs, here's a summary..." into a prompt and simply asking "What are my open bugs?" and getting the answer.
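Under the hood, MCP is JSON-RPC 2.0 over a transport such as stdio. A tool invocation from the AI client looks roughly like this (the tool name and arguments here are illustrative, not necessarily the names zdtp-mcp uses):

```json
{
  "jsonrpc": "2.0",
  "id": 7,
  "method": "tools/call",
  "params": {
    "name": "search_bugs",
    "arguments": { "assignee": "me", "state": "Open" }
  }
}
```

The server replies with a result payload the assistant can reason over, so the round trip replaces the manual copy-paste entirely.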

If your team uses Targetprocess, you interact with it dozens of times a day — checking user stories, creating tasks, linking blockers, updating statuses. All of that is now something your AI assistant can do for you.

The result: zdtp-mcp

zdtp-mcp is an MCP server that exposes 52 tools across 15 Targetprocess domains:

| Domain | What you can do |
| --- | --- |
| User Stories | search, create, update, get, delete |
| Tasks | search, create, update, get, delete |
| Bugs | search, create, update, get, delete |
| Test Plans & Cases | full CRUD + inline step creation |
| Epics, Features, Releases | full CRUD |
| Relations | search, link, delete |
| Teams, Sprints, Projects, Users | search & get |
| Comments | add to any entity |

One Docker command to set it up:

# Claude Code
claude mcp add zdtp -- docker run -i --rm \
  -e TP_URL="https://youraccount.tpondemand.com" \
  -e TP_TOKEN="your_token" \
  ghcr.io/aldo-lushkja/zdtp-mcp:latest

# Gemini CLI
gemini mcp add zdtp docker run -i --rm \
  -e TP_URL="https://youraccount.tpondemand.com" \
  -e TP_TOKEN="your_token" \
  ghcr.io/aldo-lushkja/zdtp-mcp:latest

Then you can just ask:

"Find all open bugs assigned to me in Project Alpha"

"Link US-123 as a blocker of US-456"

"Create a test case with steps for the payment flow"

"Show me all releases due this month"

Why Java 21 and zero dependencies?

Most MCP servers I've seen are built with Python or TypeScript, often with a framework doing heavy lifting. I chose Java 21 with a deliberate zero-framework policy: no Spring, no Quarkus, no HTTP framework of any kind.

The reasons:

Startup time. MCP servers over stdio are spawned on demand — every time the AI client needs them. A Spring Boot service takes a few seconds to start. A plain Java process with a shadow JAR is up in under 100ms.

Size. The final fat JAR is small. No framework means no classpath scanning, no annotation processing, no auto-configuration. The Docker image stays lean.

Predictability. When something breaks, you read the code — not a framework's internals. For a small, focused tool this matters a lot.

The only dependencies are Jackson (JSON) and Commonmark (Markdown-to-HTML for description fields). Everything else — HTTP client, stdio loop, JSON-RPC parsing — is standard Java 21.
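As a sketch of what "standard Java 21" buys you here: building a request against the Targetprocess REST API needs nothing beyond `java.net.http`. The endpoint path follows Targetprocess's v1 REST conventions, but this is an illustrative snippet, not the project's actual client code:

```java
import java.net.URI;
import java.net.http.HttpRequest;
import java.time.Duration;

public class TpRequestSketch {
    // Builds a GET request for a single user story. The access_token
    // query parameter is one of the auth styles Targetprocess accepts;
    // baseUrl and token would come from TP_URL / TP_TOKEN in practice.
    static HttpRequest userStoryRequest(String baseUrl, String token, int id) {
        return HttpRequest.newBuilder()
                .uri(URI.create(baseUrl + "/api/v1/UserStories/" + id
                        + "?format=json&access_token=" + token))
                .timeout(Duration.ofSeconds(10))
                .GET()
                .build();
    }

    public static void main(String[] args) {
        HttpRequest req = userStoryRequest("https://acme.tpondemand.com", "tok", 123);
        // prints https://acme.tpondemand.com/api/v1/UserStories/123?format=json&access_token=tok
        System.out.println(req.uri());
    }
}
```

Sending the request with `HttpClient` and feeding the body to Jackson's `ObjectMapper` is the rest of the story; no framework is involved at any step.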

Architecture: BCE

The project follows the Boundary-Control-Entity (BCE) pattern:

McpServer (stdio JSON-RPC loop)
    └── *McpTools        ← Boundary: registers tools, formats output
        └── *Service     ← Control: business logic, calls QueryEngine
            └── QueryEngine → Targetprocess REST API

Each Targetprocess domain (UserStory, Bug, Task, etc.) is a vertical slice with its own boundary, control, and entity classes. Adding a new domain is a matter of copying the pattern — no framework wiring, no configuration files.
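A new slice, in sketch form. The class names mirror the naming convention above, but the bodies are illustrative placeholders, not the repository's actual code:

```java
// Entity: plain data carrier for the Targetprocess resource.
record Epic(int id, String name, String state) {}

// Control: business logic; in the real project this calls QueryEngine.
class EpicService {
    Epic get(int id) {
        // QueryEngine call elided in this sketch.
        return new Epic(id, "Example epic", "Open");
    }
}

// Boundary: registers the MCP tool and formats output for the client.
class EpicMcpTools {
    private final EpicService service = new EpicService();

    String handleGet(int id) {
        Epic epic = service.get(id);
        return "#%d %s (%s)".formatted(epic.id(), epic.name(), epic.state());
    }
}
```

Because every slice has the same three layers, a reviewer can open any domain and know exactly where to look.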

The McpServer class itself is ~140 lines. It reads JSON-RPC from stdin, dispatches to registered tool handlers, and writes responses to stdout. That's the entire transport layer.
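The transport is simple enough to sketch. Assuming newline-delimited JSON-RPC on stdin/stdout and a map of method handlers (a heavy simplification of the real class, with proper Jackson parsing omitted):

```java
import java.io.BufferedReader;
import java.io.InputStreamReader;
import java.util.Map;
import java.util.function.Function;

public class StdioLoopSketch {
    // method name -> handler producing a JSON-RPC response string
    private final Map<String, Function<String, String>> handlers;

    StdioLoopSketch(Map<String, Function<String, String>> handlers) {
        this.handlers = handlers;
    }

    // Reads one JSON-RPC message per line, dispatches, writes the response.
    void run() throws Exception {
        var in = new BufferedReader(new InputStreamReader(System.in));
        String line;
        while ((line = in.readLine()) != null) {
            System.out.println(dispatch(line));
            System.out.flush();
        }
    }

    // Real code would parse the message with Jackson; here we just
    // route on the method token to show the shape of the dispatch.
    String dispatch(String request) {
        for (var entry : handlers.entrySet()) {
            if (request.contains("\"method\":\"" + entry.getKey() + "\"")) {
                return entry.getValue().apply(request);
            }
        }
        return "{\"jsonrpc\":\"2.0\",\"error\":"
                + "{\"code\":-32601,\"message\":\"Method not found\"}}";
    }
}
```

Everything else in the server is tool registration on top of this loop.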

Multi-platform Docker image

The published image targets both linux/amd64 and linux/arm64, so it runs natively on Apple Silicon without Rosetta emulation:

ghcr.io/aldo-lushkja/zdtp-mcp:latest   # stable
ghcr.io/aldo-lushkja/zdtp-mcp:develop  # latest dev build

The CI/CD pipeline is a full Git Flow setup on GitHub Actions: feature branches merge into develop, release branches are cut from develop and merged into main, and Docker images are tagged automatically per branch.

What's next

The server currently supports stdio transport only. HTTP/SSE transport would allow deploying it as a shared team service — one running instance, multiple AI clients pointing at it. That's the next milestone.

If your team uses Targetprocess and AI assistants, give it a try. PRs and issues welcome.
