The Linear integration allows Nectr to pull issue and task data from Linear during PR reviews. When a PR references a Linear issue (e.g., “Fixes ENG-123”), Nectr fetches the full issue context — title, description, status, assignee — and includes it in the AI review.
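The issue-ID extraction step can be sketched with a simple regex. The function name `extract_linear_issue_ids` matches the one used in the review flow later in this doc, but the regex itself is an assumption about Linear's `TEAM-123` identifier format:

```python
import re

# Matches Linear-style identifiers such as ENG-123 or INFRA-456.
# The pattern is an assumption; adjust if your team keys differ.
_ISSUE_ID_RE = re.compile(r"\b([A-Z][A-Z0-9]+-\d+)\b")

def extract_linear_issue_ids(*texts: str) -> list[str]:
    """Collect unique Linear issue IDs from PR title/body text, in order."""
    seen: list[str] = []
    for text in texts:
        for match in _ISSUE_ID_RE.findall(text or ""):
            if match not in seen:
                seen.append(match)
    return seen

ids = extract_linear_issue_ids("Fixes ENG-123", "Related: ENG-123, INFRA-45")
```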
This is an inbound MCP integration — Nectr acts as an MCP client connecting to a Linear MCP server.
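Concretely, an MCP tool call over HTTP is a JSON-RPC 2.0 request. A sketch of the payload Nectr would POST to the Linear MCP server (the tool name and arguments correspond to the `search_issues` tool described below; the request `id` is arbitrary):

```python
import json

# JSON-RPC 2.0 envelope for an MCP "tools/call" request.
payload = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_issues",
        "arguments": {"team_id": "ENG", "query": "ENG-123"},
    },
}

# Serialized form used as the HTTP POST body.
body = json.dumps(payload)
```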
You need a Linear MCP server running separately from Nectr. There are two options.

**Option A: Use the official Linear MCP server**
```bash
# Install Linear MCP server
npm install -g @linear/mcp-server

# Run as HTTP server (not stdio)
LINEAR_API_KEY=lin_api_... linear-mcp-server --http --port 8001
```
**Option B: Build your own Linear MCP server**
```python
import os

import httpx
from fastapi import FastAPI
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("Linear")

@mcp.tool()
async def search_issues(team_id: str, query: str) -> list[dict]:
    """Search Linear issues for a team."""
    async with httpx.AsyncClient() as client:
        resp = await client.post(
            "https://api.linear.app/graphql",
            headers={
                "Authorization": f"Bearer {os.getenv('LINEAR_API_KEY')}",
                "Content-Type": "application/json",
            },
            json={
                "query": """
                    query SearchIssues($teamId: String!, $query: String!) {
                      issues(filter: {team: {key: {eq: $teamId}}, title: {contains: $query}}) {
                        nodes {
                          id
                          identifier
                          title
                          description
                          state { name }
                          assignee { name }
                          url
                        }
                      }
                    }
                """,
                "variables": {"teamId": team_id, "query": query},
            },
        )
        data = resp.json()
        return data.get("data", {}).get("issues", {}).get("nodes", [])

app = FastAPI()
app.mount("/mcp", mcp.sse_app())

if __name__ == "__main__":
    import uvicorn

    uvicorn.run(app, host="0.0.0.0", port=8001)
```
```bash
# Linear MCP server base URL
LINEAR_MCP_URL=https://your-linear-mcp-server.railway.app

# Linear personal API key (passed to MCP server via Authorization header)
LINEAR_API_KEY=lin_api_...
```
Both LINEAR_MCP_URL and LINEAR_API_KEY must be set for the integration to work. If either is missing, Linear context will be silently skipped.
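A guard along these lines would implement that silent skip (the function name is an assumption, not Nectr's actual API):

```python
import os

def linear_enabled() -> bool:
    """Linear context is fetched only when both settings are present."""
    return bool(os.getenv("LINEAR_MCP_URL")) and bool(os.getenv("LINEAR_API_KEY"))

# Simulate a partially configured environment: URL set, key missing.
os.environ["LINEAR_MCP_URL"] = "https://linear-mcp.example.com"
os.environ.pop("LINEAR_API_KEY", None)
enabled = linear_enabled()
```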
Tool results come back wrapped in a JSON-RPC 2.0 response envelope; Nectr’s MCPClientManager unwraps this format automatically:
```python
# JSON-RPC 2.0 result shape: {"result": {"content": [...]}}
result = data.get("result", data)
if isinstance(result, dict):
    content = result.get("content", result)
    if isinstance(content, list):
        # Each content item may be {type: "text", text: "<json>"}
        for item in content:
            if isinstance(item, dict) and item.get("type") == "text":
                parsed = json.loads(item["text"])
                # parsed is now the list of issues
```
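Wrapped as a helper and run against a sample response, the unwrapping behaves like this (the sample payload is illustrative, not a captured Linear response):

```python
import json

def unwrap_tool_result(data: dict) -> list:
    """Unwrap a JSON-RPC 2.0 MCP tool result down to the parsed issue list."""
    result = data.get("result", data)
    if isinstance(result, dict):
        content = result.get("content", result)
        if isinstance(content, list):
            for item in content:
                if isinstance(item, dict) and item.get("type") == "text":
                    return json.loads(item["text"])
    return []

sample = {
    "jsonrpc": "2.0",
    "id": 1,
    "result": {
        "content": [
            {
                "type": "text",
                "text": json.dumps([{"identifier": "ENG-123", "title": "Add auth"}]),
            }
        ]
    },
}
issues = unwrap_tool_result(sample)
```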
The Linear MCP client has a 10-second timeout to prevent slow external services from blocking PR reviews.
```python
_MCP_TIMEOUT = 10.0  # seconds

try:
    async with httpx.AsyncClient(timeout=_MCP_TIMEOUT) as client:
        response = await client.post(...)
        response.raise_for_status()
except httpx.TimeoutException:
    logger.warning(
        f"MCP call timed out: {settings.LINEAR_MCP_URL} tool=search_issues"
    )
    return []  # Empty list - review continues without Linear context
except httpx.HTTPStatusError as exc:
    logger.warning(f"Linear MCP returned HTTP {exc.response.status_code}")
    return []
except Exception as exc:
    logger.warning(f"Linear MCP query failed: {exc}")
    return []
```
Linear integration is best-effort. If the MCP server is down or slow, PR reviews continue without Linear context. This is by design — external context should enhance reviews, not block them.
Here’s how Linear context is pulled during a review:
```python
# 1. Extract Linear issue IDs from PR metadata
issue_ids = extract_linear_issue_ids(pr_data["title"], pr_data["body"])
if pr_data.get("head", {}).get("ref"):
    branch_issue = extract_issue_from_branch(pr_data["head"]["ref"])
    if branch_issue:
        issue_ids.append(branch_issue)

# 2. Pull issue details from Linear MCP server
linear_issues = []
if issue_ids:
    linear_issues = await mcp_client.get_linear_issues(
        team_id="ENG",  # TODO: Make this configurable per repo
        query=" ".join(issue_ids),
    )

# 3. Format Linear context for AI review prompt
linear_context = ""
if linear_issues:
    linear_context = "\n\nLINEAR CONTEXT:\n"
    for issue in linear_issues:
        # "or" guards against a null description from the API, which
        # .get()'s default would not catch.
        linear_context += f"""- Issue {issue['identifier']}: "{issue['title']}"
  Status: {issue['state']}
  URL: {issue['url']}
  Description: {(issue.get('description') or 'N/A')[:300]}
"""

# 4. Include in AI review
review_prompt = f"""Review this pull request:

PR Title: {pr_data['title']}
PR Description: {pr_data['body']}
Changed Files: {changed_files}
{linear_context}

Diff:
{pr_diff}

Provide a structured review covering:
1. Does this PR fully address the linked Linear issues?
2. Are there gaps between issue requirements and implementation?
...
"""
```
Here’s how Linear context appears in an AI-generated review:
```markdown
# PR Review: Add authentication flow (ENG-123)

## Linked Issues

✅ **ENG-123**: Add user authentication
  - Status: In Progress
  - Assignee: Alice
  - [View in Linear](https://linear.app/team/issue/ENG-123)

## Summary

This PR implements the OAuth flow described in ENG-123. The implementation
aligns well with the issue requirements:

- ✅ GitHub OAuth integration
- ✅ Token encryption at rest
- ⚠️ Issue mentions rate limiting, but I don't see it implemented

## Suggestions

1. **Missing requirement**: ENG-123 specifies rate limiting for failed auth
   attempts. Consider adding a simple in-memory rate limiter.
2. **Test coverage**: Issue states this is a critical security feature.
   Add E2E tests for the OAuth callback flow.

## Verdict

APPROVE_WITH_SUGGESTIONS - Strong implementation, but address rate limiting
before merging per issue requirements.
```
A helper can derive the team ID from the issue identifiers themselves, removing the hard-coded `team_id="ENG"` in the flow above:

```python
def extract_team_from_issue_ids(issue_ids: list[str]) -> str | None:
    """Extract team ID from issue identifiers.

    Examples:
        ["ENG-123"] -> "ENG"
        ["INFRA-456"] -> "INFRA"
    """
    if not issue_ids:
        return None
    # All issue IDs should have the same team prefix
    team_id = issue_ids[0].split("-")[0]
    return team_id
```