Compare commits: ba9201dfa6...main (13 commits)

Commits:
- 7073d37c26
- a0fc5c2bb2
- dc2fe4c425
- 3aa1d90b89
- 53bee3340f
- bc67c9f502
- a0bd71e438
- 09dcc8e507
- defd03b13a
- 896ef83dc5
- 4205a48c73
- 5d79edff49
- 4ed9da5b72
.env.ssl (new file, 9 lines)
@@ -0,0 +1,9 @@
# SSL Configuration for Mei Sheng Group Internal Services
export SSL_CERT_FILE="$(pwd)/certs/complete_ca_bundle.pem"
export REQUESTS_CA_BUNDLE="$(pwd)/certs/complete_ca_bundle.pem"
export CURL_CA_BUNDLE="$(pwd)/certs/complete_ca_bundle.pem"
export GIT_SSL_CAINFO="$(pwd)/certs/complete_ca_bundle.pem"

# Usage: source .env.ssl
echo "SSL configuration loaded. CA bundle: $(pwd)/certs/complete_ca_bundle.pem"
EOF < /dev/null
.gitignore (vendored, new file, 28 lines)
@@ -0,0 +1,28 @@
# Python cache
__pycache__/
*.py[cod]
*$py.class
*.so

# Virtual environments
.venv/
venv/
env/

# IDE
.vscode/
.idea/

# OS
.DS_Store
Thumbs.db

# Environment variables
.env

# Logs
*.log

# Test files
test_*.json
*_test.json
CLAUDE.md (110 lines changed)
@@ -24,4 +24,112 @@ Nomad MCP is a service that enables management of HashiCorp Nomad jobs via REST
- `/static`: Frontend assets
- `/tests`: Test files

Always maintain backward compatibility with existing API endpoints. Follow REST principles.

## Enhanced Log Analysis Features

The logs API has been enhanced with advanced filtering and analysis capabilities:

### REST API Endpoints:
- `/api/logs/job/{job_id}` - Enhanced with time, level, and search filtering
- `/api/logs/errors/{job_id}` - Get only error/warning logs
- `/api/logs/search/{job_id}` - Search logs for specific terms
- `/api/logs/repository/{repository}` - Get logs by repository name

### New Query Parameters:
- `start_time` & `end_time` - Filter by time range (HH:MM format)
- `log_level` - Filter by levels (ERROR, WARNING, INFO, etc.)
- `search` - Search for specific terms
- `lines` - Limit number of lines returned
- `formatted` - Proper line breaks (default: true)

### MCP Tools Available:
- `get_job_logs` - Enhanced with all filtering options
- `get_error_logs` - Convenience tool for troubleshooting
- `search_job_logs` - Search logs for patterns

### Example Usage:
```bash
# Get errors between 8pm-6am for plant-manager in production
curl "https://nomad_mcp.dev.meisheng.group/api/logs/errors/plant-manager?namespace=production&start_time=20:00&end_time=06:00"

# Search for pump issues
curl "https://nomad_mcp.dev.meisheng.group/api/logs/search/plant-manager?q=pump&namespace=production"

# Get last 50 lines with proper formatting
curl "https://nomad_mcp.dev.meisheng.group/api/logs/job/plant-manager?namespace=production&lines=50&formatted=true"
```

Always maintain backward compatibility with existing API endpoints. Follow REST principles.
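The same queries can be composed programmatically. A minimal sketch using only the Python standard library to build one of the documented request URLs (the base URL and parameters are taken from the curl examples above):

```python
from urllib.parse import urlencode

# Build the error-log query from the documented parameters.
base = "https://nomad_mcp.dev.meisheng.group/api/logs/errors/plant-manager"
params = {"namespace": "production", "start_time": "20:00", "end_time": "06:00"}
url = f"{base}?{urlencode(params)}"
print(url)
```

Note that `urlencode` percent-encodes the colons in the time values (`20%3A00`), which the server decodes transparently.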
## SSL Certificate Management for Internal Services

When working with internal/corporate services that use custom Certificate Authorities (CAs):

### Problem
- Internal services use SSL certificates signed by custom/corporate CAs
- System trust stores don't recognize these CAs
- Results in `SSL: CERTIFICATE_VERIFY_FAILED` errors

### Solution: Extract and Configure CA Bundle

1. **Extract CA Certificate Chain**:
```bash
# Find the CA issuer from certificate details
openssl s_client -connect your-service.internal:443 -showcerts

# Download CA certificate (adjust URL for your PKI)
curl -k "https://vault.internal:8200/v1/pki/ca" -o certs/ca_bundle.pem
```

2. **Test CA Bundle**:
```bash
# Test with curl
curl --cacert certs/ca_bundle.pem https://your-service.internal

# Test with Python
python -c "import requests; print(requests.get('https://your-service.internal', verify='certs/ca_bundle.pem').status_code)"
```

3. **Create Environment Configuration**:
```bash
# .env.ssl
export SSL_CERT_FILE="$(pwd)/certs/ca_bundle.pem"
export REQUESTS_CA_BUNDLE="$(pwd)/certs/ca_bundle.pem"
export CURL_CA_BUNDLE="$(pwd)/certs/ca_bundle.pem"
export GIT_SSL_CAINFO="$(pwd)/certs/ca_bundle.pem"
```

4. **Usage**:
```bash
# Load SSL configuration
source .env.ssl

# Now all tools use the CA bundle automatically
curl https://your-service.internal
git clone https://git.internal/repo.git
pip install -i https://pypi.internal/simple/ package
```
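After sourcing the environment file, it can be worth verifying that all four variables are set and actually point at an existing bundle. A minimal sketch (`missing_ssl_vars` is a hypothetical helper, not part of the repository):

```python
import os

SSL_VARS = ("SSL_CERT_FILE", "REQUESTS_CA_BUNDLE", "CURL_CA_BUNDLE", "GIT_SSL_CAINFO")

def missing_ssl_vars(env=os.environ):
    # Variables that are unset, or that point at a bundle file that does not exist.
    return [v for v in SSL_VARS if not env.get(v) or not os.path.exists(env[v])]

print(missing_ssl_vars())  # anything listed here still needs attention
```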
### For Different Tools

- **Curl**: `curl --cacert path/to/ca_bundle.pem`
- **Python requests**: `requests.get(url, verify='path/to/ca_bundle.pem')`
- **Git**: `git config http.sslCAInfo path/to/ca_bundle.pem`
- **Node.js**: `NODE_EXTRA_CA_CERTS=path/to/ca_bundle.pem`
- **Docker**: Mount certs and set `SSL_CERT_FILE` environment variable

### Environment Variables Priority
1. `SSL_CERT_FILE` - Used by most SSL libraries
2. `REQUESTS_CA_BUNDLE` - Python requests library
3. `CURL_CA_BUNDLE` - curl command
4. Tool-specific variables (e.g., `GIT_SSL_CAINFO`)

### Best Practices
- **Keep CA bundle in version control** (it's public key material)
- **Test SSL connections** with a script to verify setup
- **Document certificate renewal process** in project README
- **Use environment variables** for consistent configuration across tools
- **Never disable SSL verification** in production code

This approach provides proper SSL security while working with internal services.
@@ -82,11 +82,96 @@ Here are some examples of how an AI agent might use the MCP tools:
}
```

## MCP Integration Options

Nomad MCP provides two integration approaches:

### 1. FastAPI MCP Integration (Zero-Config)

Automatically exposes all REST API endpoints as MCP tools via SSE:

```
http://your-server:8000/mcp/sse
```

### 2. Standalone MCP Server (Claude Desktop)

A dedicated MCP server optimized for Claude Desktop with enhanced capabilities.

## Setting Up Claude Desktop with Standalone MCP Server

### Prerequisites

1. **Install Dependencies**:
```bash
uv venv
uv pip install -r requirements.txt
```

2. **Set Environment Variables**:
```bash
export NOMAD_ADDR="http://your-nomad-server:4646"
export NOMAD_NAMESPACE="development"  # optional
```

### Local Setup

1. **Configure Claude Desktop** (`~/Library/Application Support/Claude/claude_desktop_config.json`):
```json
{
  "mcpServers": {
    "nomad-mcp": {
      "command": "/path/to/nomad_mcp/run_mcp_server.sh",
      "env": {
        "NOMAD_ADDR": "http://your-nomad-server:4646"
      }
    }
  }
}
```

2. **Restart Claude Desktop** to load the configuration

### Available MCP Tools

The standalone MCP server provides these tools:

- **`list_nomad_jobs`** - List all jobs in a namespace
- **`get_job_status`** - Get detailed job status and health
- **`stop_job`** - Stop jobs with optional purge
- **`restart_job`** - Restart jobs
- **`create_job`** - Create jobs from specifications
- **`submit_job_file`** ⭐ - Submit Nomad job files (JSON/HCL)
- **`get_job_logs`** - Retrieve stdout/stderr logs
- **`get_allocation_status`** ⭐ - Detailed allocation monitoring
- **`get_job_evaluations`** ⭐ - Placement failure analysis
- **`force_evaluate_job`** ⭐ - Retry failed placements

### Example Workflow

1. **Submit a job file**:
```
Please submit this job file: [paste JSON job spec]
```

2. **Monitor deployment**:
```
Check the status and allocations for my-service
```

3. **Debug issues**:
```
Get evaluations for my-service to see why it failed
```

4. **Force retry**:
```
Force evaluate my-service to retry placement
```

### Claude Code Integration

Claude Code can directly connect to the FastAPI MCP endpoint:

```bash
claude-code --mcp-url http://your-server:8000/mcp/sse
```
@@ -98,6 +183,118 @@ For integration with the Claude API, you can use the MCP toolchain configuration

See the [Claude API Integration Documentation](CLAUDE_API_INTEGRATION.md) for more detailed instructions.

## Network Deployment

### Running MCP Server on Nomad Cluster

You can deploy the MCP server itself on your Nomad cluster for centralized access.

#### Option 1: FastAPI MCP Server (HTTP/SSE)

Deploy the full FastAPI application with MCP endpoint:

```bash
# Start the FastAPI server with MCP endpoint
uvicorn app.main:app --host 0.0.0.0 --port 8000
```

**Access via**: `http://your-nomad-server:8000/mcp/sse`

#### Option 2: Standalone MCP Server (TCP/Network)

For network access to the standalone MCP server, you'll need to modify it to use TCP transport instead of stdio.

**Current limitation**: The standalone MCP server (`mcp_server.py`) uses stdio transport, which is designed for local process communication.

**Network solution**: Create a TCP-based version or use the FastAPI MCP endpoint instead.

### Claude Desktop Network Configuration

To connect Claude Desktop to a network MCP server:

#### For FastAPI MCP (Recommended)

Create a wrapper script that uses the HTTP/SSE endpoint:

```json
{
  "mcpServers": {
    "nomad-mcp-network": {
      "command": "npx",
      "args": [
        "@modelcontextprotocol/server-everything",
        "--url", "http://your-nomad-server:8000/mcp/sse"
      ]
    }
  }
}
```

#### For Custom Network MCP Server

If you need a network-accessible standalone MCP server, you would need to:

1. **Modify the transport** in `mcp_server.py` from stdio to TCP
2. **Add network security** (authentication, TLS)
3. **Configure Claude Desktop** to connect via TCP

**Example network MCP server** (requires modification; the `mcp.server.tcp` module name is illustrative):
```python
# In mcp_server.py - replace stdio with TCP transport
import mcp.server.tcp

async def main():
    async with mcp.server.tcp.tcp_server("0.0.0.0", 8001) as server:
        await server.run(...)
```

**Claude Desktop config for network TCP**:
```json
{
  "mcpServers": {
    "nomad-mcp-tcp": {
      "command": "mcp-client",
      "args": ["tcp://your-nomad-server:8001"]
    }
  }
}
```

### Security Considerations for Network Deployment

When deploying MCP servers on the network:

1. **Use HTTPS/TLS** for HTTP-based MCP servers
2. **Implement authentication** (API keys, OAuth, etc.)
3. **Network isolation** (VPN, private networks)
4. **Firewall rules** to restrict access
5. **Rate limiting** to prevent abuse
6. **Audit logging** for all MCP operations

### SSL Certificate Trust

For Mei Sheng Group internal services:

1. **Use the provided CA bundle** in `/certs/meisheng_ca_bundle.pem`
2. **Automatic certificate renewal** - Server certificates renew every 24 hours
3. **Stable CA chain** - The certificate authority chain can be trusted long-term
4. **Environment configuration** - Source `.env.ssl` for proper SSL verification

```bash
# Configure SSL trust for development
source .env.ssl

# Test SSL connections
uv run python certs/test_ssl.py
```

### Recommended Network Architecture

```
Claude Desktop → HTTPS/WSS → Load Balancer → FastAPI MCP Server → Nomad API
   (secure)      (optional)    (on cluster)      (internal)
```

## Debugging MCP Connections

If you're having issues with MCP connections:
Binary files not shown (7 files).
@@ -1,6 +1,8 @@
from fastapi import APIRouter, HTTPException, Query
from typing import List, Dict, Any, Optional
import logging
import re
from datetime import datetime, time as datetime_time

from app.services.nomad_client import NomadService
from app.services.config_service import ConfigService
@@ -12,6 +14,133 @@ router = APIRouter()
nomad_service = NomadService()
config_service = ConfigService()

def format_logs_with_line_breaks(logs: str) -> str:
    """Format logs with proper line breaks."""
    if not logs:
        return logs

    # Convert escaped "\n" sequences into real line breaks
    formatted = logs.replace('\\n', '\n')

    # Collapse runs of three or more newlines into a single blank line
    formatted = re.sub(r'\n{3,}', '\n\n', formatted)

    return formatted.strip()
def filter_logs_by_time(logs: str, start_time: str = None, end_time: str = None) -> str:
    """Filter logs by time range. Supports overnight ranges such as 20:00-06:00."""
    if not logs or (not start_time and not end_time):
        return logs

    def parse_hhmm(value: str):
        # Accept HH:MM; return None for anything else
        if value and ':' in value and len(value.split(':')) == 2:
            try:
                return datetime.strptime(value, '%H:%M').time()
            except ValueError:
                return None
        return None

    start_t = parse_hhmm(start_time)
    end_t = parse_hhmm(end_time)

    lines = logs.split('\n')
    filtered_lines = []

    for line in lines:
        # Extract timestamp from log line (assumes format: YYYY-MM-DD HH:MM:SS)
        timestamp_match = re.search(r'(\d{4}-\d{2}-\d{2}\s+\d{2}:\d{2}:\d{2})', line)
        if not timestamp_match:
            # If no timestamp, include the line (might be continuation)
            filtered_lines.append(line)
            continue

        try:
            log_time = datetime.strptime(timestamp_match.group(1), '%Y-%m-%d %H:%M:%S')
            log_time_only = log_time.time()

            include_line = True
            if start_t and end_t:
                if start_t <= end_t:
                    include_line = start_t <= log_time_only <= end_t
                else:
                    # Overnight range (e.g. 20:00-06:00) wraps past midnight
                    include_line = log_time_only >= start_t or log_time_only <= end_t
            elif start_t:
                include_line = log_time_only >= start_t
            elif end_t:
                include_line = log_time_only <= end_t

            if include_line:
                filtered_lines.append(line)

        except ValueError:
            # If time parsing fails, include the line
            filtered_lines.append(line)

    return '\n'.join(filtered_lines)
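The documented example queries errors from 20:00 to 06:00, a window that wraps past midnight, so the range check cannot be a plain `start <= t <= end`. The core comparison can be sketched in isolation (a standalone illustration, not part of the module):

```python
from datetime import time

def in_window(t, start, end):
    # A window whose start is later than its end (e.g. 20:00-06:00) wraps past midnight.
    if start <= end:
        return start <= t <= end
    return t >= start or t <= end

print(in_window(time(22, 0), time(20, 0), time(6, 0)))  # inside the overnight window
print(in_window(time(12, 0), time(20, 0), time(6, 0)))  # midday is outside it
```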
def filter_logs_by_level(logs: str, log_level: str) -> str:
    """Filter logs by log level. Supports multiple levels separated by |"""
    if not logs or not log_level:
        return logs

    lines = logs.split('\n')
    filtered_lines = []

    # Handle multiple log levels separated by |
    levels = [level.strip().upper() for level in log_level.split('|')]
    level_patterns = [re.compile(rf'\b{level}\b', re.IGNORECASE) for level in levels]

    for line in lines:
        if any(pattern.search(line) for pattern in level_patterns):
            filtered_lines.append(line)

    return '\n'.join(filtered_lines)
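The whole-word level matching above is what lets `get_error_logs` pass `log_level="WARNING|ERROR|EMERGENCY|CRITICAL"` in one string. A small self-contained illustration of the same pattern:

```python
import re

sample = [
    "2025-06-03 20:15:01 ERROR pump pressure low",
    "2025-06-03 20:16:44 INFO cycle complete",
    "2025-06-03 23:02:10 WARNING pump vibration high",
]

# Split the requested levels and match each as a whole word, case-insensitively.
levels = [level.strip().upper() for level in "WARNING|ERROR".split('|')]
patterns = [re.compile(rf'\b{level}\b', re.IGNORECASE) for level in levels]
matched = [line for line in sample if any(p.search(line) for p in patterns)]
print(len(matched))  # 2
```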
def search_logs(logs: str, search_term: str) -> str:
    """Search logs for specific term."""
    if not logs or not search_term:
        return logs

    lines = logs.split('\n')
    filtered_lines = []
    search_pattern = re.compile(re.escape(search_term), re.IGNORECASE)

    for line in lines:
        if search_pattern.search(line):
            filtered_lines.append(line)

    return '\n'.join(filtered_lines)

def limit_log_lines(logs: str, lines_limit: int) -> str:
    """Limit number of log lines returned."""
    if not logs or not lines_limit:
        return logs

    lines = logs.split('\n')
    return '\n'.join(lines[-lines_limit:])  # Return most recent lines
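The `lines[-lines_limit:]` slice degrades gracefully: when the limit exceeds the number of lines, Python's negative slicing returns the whole list rather than raising. A quick check:

```python
lines = ["a", "b", "c"]
print(lines[-2:])   # the two most recent lines
print(lines[-10:])  # limit larger than the list: all lines, no error
```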
def process_logs(logs: str, start_time: str = None, end_time: str = None,
                 log_level: str = None, search: str = None, lines: int = None,
                 formatted: bool = True) -> str:
    """Process logs with all filters and formatting."""
    if not logs:
        return logs

    # Apply formatting first
    if formatted:
        logs = format_logs_with_line_breaks(logs)

    # Apply time filtering
    if start_time or end_time:
        logs = filter_logs_by_time(logs, start_time, end_time)

    # Apply log level filtering
    if log_level:
        logs = filter_logs_by_level(logs, log_level)

    # Apply search filtering
    if search:
        logs = search_logs(logs, search)

    # Limit lines if specified
    if lines:
        logs = limit_log_lines(logs, lines)

    return logs
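The pipeline order matters: formatting first (so later filters see real line breaks), then time, level, and search filters, and the line limit last (so it counts only surviving lines). A self-contained sketch of the same staging with inline logic, for illustration only:

```python
import re

# Raw payload with escaped "\n" separators, as some log sources emit.
raw = ("2025-06-03 05:10:00 ERROR boiler offline\\n"
       "2025-06-03 09:00:00 INFO shift start\\n"
       "2025-06-03 21:30:00 WARNING pump vibration high")

# 1. formatting: turn escaped "\n" into real line breaks
logs = raw.replace('\\n', '\n')
# 2. level filter, then 3. limit to the most recent matching line
level_pat = re.compile(r'\b(ERROR|WARNING)\b')
lines = [l for l in logs.split('\n') if level_pat.search(l)]
print('\n'.join(lines[-1:]))
```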
# More specific routes first
@router.get("/repository/{repository}")
async def get_repository_logs(

@@ -132,7 +261,14 @@ async def get_job_logs(
    namespace: str = Query(None, description="Nomad namespace"),
    log_type: str = Query("stderr", description="Log type: stdout or stderr"),
    limit: int = Query(1, description="Number of allocations to return logs for"),
    plain_text: bool = Query(False, description="Return plain text logs instead of JSON"),
    # New filtering parameters
    start_time: str = Query(None, description="Start time filter (YYYY-MM-DD HH:MM or HH:MM)"),
    end_time: str = Query(None, description="End time filter (YYYY-MM-DD HH:MM or HH:MM)"),
    log_level: str = Query(None, description="Filter by log level: ERROR, WARNING, INFO, DEBUG"),
    search: str = Query(None, description="Search term to filter logs"),
    lines: int = Query(None, description="Number of log lines to return (most recent)"),
    formatted: bool = Query(True, description="Return formatted logs with proper line breaks")
):
    """Get logs for the most recent allocations of a job."""
    # Create a custom service with the specific namespace if provided

@@ -179,12 +315,23 @@ async def get_job_logs(
            logs = custom_nomad.get_allocation_logs(alloc_id, task_name, log_type)
            # Only add if we got some logs and not an error message
            if logs and not logs.startswith("No") and not logs.startswith("Error"):
                # Process logs with filters
                processed_logs = process_logs(
                    logs,
                    start_time=start_time,
                    end_time=end_time,
                    log_level=log_level,
                    search=search,
                    lines=lines,
                    formatted=formatted
                )

                result.append({
                    "alloc_id": alloc_id,
                    "task": task_name,
                    "type": log_type,
                    "create_time": alloc.get("CreateTime"),
                    "logs": processed_logs
                })
                logger.info(f"Successfully retrieved logs for {task_name}")
            else:

@@ -197,7 +344,15 @@ async def get_job_logs(
    if plain_text:
        if not result:
            return "No logs found for this job"

        # Combine all logs with task separators
        combined_logs = []
        for r in result:
            task_logs = r.get('logs', '')
            if task_logs:
                combined_logs.append(f"=== {r.get('task')} ===\n{task_logs}")

        return "\n\n".join(combined_logs)

    # Otherwise return as JSON
    return {

@@ -269,6 +424,49 @@ async def get_build_logs(job_id: str, plain_text: bool = Query(False)):
    # This is a convenience endpoint that returns stderr logs from the latest allocation
    return await get_latest_allocation_logs(job_id, "stderr", plain_text)

@router.get("/errors/{job_id}")
async def get_error_logs(
    job_id: str,
    namespace: str = Query(None, description="Nomad namespace"),
    start_time: str = Query(None, description="Start time filter (HH:MM format, e.g., 20:00)"),
    end_time: str = Query(None, description="End time filter (HH:MM format, e.g., 06:00)"),
    plain_text: bool = Query(True, description="Return plain text logs instead of JSON")
):
    """Get error and warning logs for a job, with optional time filtering."""
    return await get_job_logs(
        job_id=job_id,
        namespace=namespace,
        log_type="stderr",
        limit=5,  # Check more allocations for errors
        plain_text=plain_text,
        start_time=start_time,
        end_time=end_time,
        log_level="WARNING|ERROR|EMERGENCY|CRITICAL",  # Multiple levels
        formatted=True
    )

@router.get("/search/{job_id}")
async def search_job_logs(
    job_id: str,
    q: str = Query(..., description="Search term"),
    namespace: str = Query(None, description="Nomad namespace"),
    log_type: str = Query("stderr", description="Log type: stdout or stderr"),
    limit: int = Query(3, description="Number of allocations to search"),
    lines: int = Query(100, description="Number of matching lines to return"),
    plain_text: bool = Query(True, description="Return plain text logs instead of JSON")
):
    """Search job logs for specific terms."""
    return await get_job_logs(
        job_id=job_id,
        namespace=namespace,
        log_type=log_type,
        limit=limit,
        plain_text=plain_text,
        search=q,
        lines=lines,
        formatted=True
    )

# Generic allocation logs route last
@router.get("/allocation/{alloc_id}/{task}")
async def get_allocation_logs(
Binary files not shown (8 files).
@@ -16,7 +16,23 @@ class GiteaClient:
        self.api_base_url = os.getenv("GITEA_API_URL", "").rstrip("/")
        self.token = os.getenv("GITEA_API_TOKEN")
        self.username = os.getenv("GITEA_USERNAME")

        # Configure SSL verification with certificate bundle
        ssl_cert_file = os.getenv("SSL_CERT_FILE")
        requests_ca_bundle = os.getenv("REQUESTS_CA_BUNDLE")

        # Use certificate bundle if available, otherwise fall back to boolean verification
        if ssl_cert_file and os.path.exists(ssl_cert_file):
            self.verify_ssl = ssl_cert_file
        elif requests_ca_bundle and os.path.exists(requests_ca_bundle):
            self.verify_ssl = requests_ca_bundle
        else:
            # Check for project-local certificate bundle
            project_ca_bundle = os.path.join(os.path.dirname(os.path.dirname(os.path.dirname(__file__))), "certs", "mei_sheng_ca_bundle.pem")
            if os.path.exists(project_ca_bundle):
                self.verify_ssl = project_ca_bundle
            else:
                self.verify_ssl = os.getenv("GITEA_VERIFY_SSL", "true").lower() == "true"

        if not self.api_base_url:
            logger.warning("GITEA_API_URL is not configured. Gitea integration will not work.")
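The precedence implemented above (explicit env-var bundles first, then a project-local bundle, then the boolean flag) can be factored into a standalone helper for easier testing. A sketch under those assumptions, not code from the repository:

```python
import os

def resolve_verify(env, project_bundle_exists=False,
                   project_bundle="certs/mei_sheng_ca_bundle.pem"):
    # Explicit bundle paths win over the boolean GITEA_VERIFY_SSL flag.
    for var in ("SSL_CERT_FILE", "REQUESTS_CA_BUNDLE"):
        path = env.get(var)
        if path and os.path.exists(path):
            return path
    if project_bundle_exists:
        return project_bundle
    return env.get("GITEA_VERIFY_SSL", "true").lower() == "true"

print(resolve_verify({}))  # defaults to standard boolean verification
```

Returning either a path or a boolean matches how `requests` interprets its `verify=` argument.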
certs/README.md (new file, 80 lines)
@@ -0,0 +1,80 @@
# Mei Sheng Group SSL Certificates

This folder contains the SSL certificate chain for Mei Sheng Group internal services.

🔄 **Auto-Renewal**: Server certificates are automatically renewed every 24 hours, but the CA chain remains stable and trustworthy for long-term use.

## Certificate Chain

1. **Intermediate CA**: `Mei_Sheng_Group_Intermediate_CA_02`
   - Files: `intermediate_ca.pem`, `meisheng_ca_bundle.pem`
   - Valid: Sep 14, 2020 - Sep 13, 2025
   - Issuer: Mei_Sheng_Group_RootCA

2. **Server Certificate**: `*.dev.meisheng.group`
   - File: `server_cert.pem`
   - Valid: May 30, 2025 - May 31, 2025 (auto-renewed daily)
   - Covers: gitea.dev.meisheng.group, nomad_mcp.dev.meisheng.group

## Usage

### For Python Applications

Use the CA bundle to verify SSL connections:

```python
import requests

# Use the CA bundle for requests
response = requests.get(
    'https://gitea.dev.meisheng.group',
    verify='/path/to/certs/meisheng_ca_bundle.pem'
)
```

### For curl

```bash
curl --cacert certs/meisheng_ca_bundle.pem https://gitea.dev.meisheng.group
```

### For Git

```bash
# Configure git to use the CA bundle
git config http.sslCAInfo /path/to/certs/meisheng_ca_bundle.pem
```

### For MCP/Claude Code

Add to environment variables:

```bash
export REQUESTS_CA_BUNDLE=/path/to/certs/meisheng_ca_bundle.pem
export SSL_CERT_FILE=/path/to/certs/meisheng_ca_bundle.pem
```
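Before pointing tools at a bundle, a quick sanity check is to count the certificates it contains (a full parse with `openssl x509` is stricter, but this catches empty or obviously malformed files). A minimal sketch:

```python
def cert_count(pem_text: str) -> int:
    # Rough sanity check: number of certificates in a PEM bundle.
    return pem_text.count("-----BEGIN CERTIFICATE-----")

bundle = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n"
print(cert_count(bundle))  # 1
```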
## Certificate Renewal

✅ **Automatic Renewal**: Server certificates are automatically renewed every 24 hours by the Mei Sheng Group certificate management system.

📋 **Certificate Details**:
- **CA Chain**: Stable and can be trusted long-term
- **Server Certificates**: Auto-renewed daily (expires every ~24h)
- **Intermediate CA**: Valid until Sep 13, 2025
- **Root CA**: Managed by Mei Sheng Group PKI infrastructure

## System Trust Store (Optional)

To install the CA in the system trust store:

### macOS
```bash
sudo security add-trusted-cert -d -r trustRoot -k /Library/Keychains/System.keychain certs/meisheng_ca_bundle.pem
```

### Linux
```bash
sudo cp certs/meisheng_ca_bundle.pem /usr/local/share/ca-certificates/meisheng-ca.crt
sudo update-ca-certificates
```
certs/ca_chain.pem (new file, 20 lines)
@@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDXDCCAuKgAwIBAgIUHChdZkXlA0s5wEy9qjYCkrwc58UwCgYIKoZIzj0EAwMw
gYcxCzAJBgNVBAYTAlZOMQ4wDAYDVQQIEwVWTi00MzESMBAGA1UEBxMJTmdhaSBH
aWFvMRgwFgYDVQQKEw9NZWkgU2hlbmcgR3JvdXAxGTAXBgNVBAsTEFRlY2hub2xv
Z3kgR3JvdXAxHzAdBgNVBAMMFk1laV9TaGVuZ19Hcm91cF9Sb290Q0EwHhcNMjAw
OTE0MDQwNzAwWhcNMjUwOTEzMDQwNzAwWjAtMSswKQYDVQQDDCJNZWlfU2hlbmdf
R3JvdXBfSW50ZXJtZWRpYXRlX0NBXzAyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAyEoQIfXC9wX9lqq9nGMpf437M70FUeTExY915wNsMhOXrJflT66p
f2A+uA3hq8wHGq+wOGFTEhteQhIDoRADLes5ywa5qXCQbi3HeB5WtbT3ayFfh2xY
MdGsJVg0aqjPPuF1UVnNFSTvsJm0unLgNNrw1lzwB3qvg28G/j3MDkRYhB+pNmOH
yHZQbDIJhZ+OCOxf78fdNfSVUJNmVZM2tVDbN/Dz2jiFIkEyX7FgRm26uTdmAMTG
m/RbSa4k7C+9/bZSm2k22R0weKodnCVMVJvqeh3VB40ETeebaIi3oBi4AzyN8d8q
yhqle+Bj78qtghaPHrRY4Hbt51wh8fjdjwIDAQABo4G5MIG2MA4GA1UdDwEB/wQE
AwIBpjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBTLduok3uInrMWi6mZe
Lt9v6weoyTAfBgNVHSMEGDAWgBRFZFsAQFhk5efyrI3BepXfPi+DgjBQBgNVHR8E
STBHMEWgQ6BBhj9odHRwOi8vY3JsLmRzLm1laXNoZW5nLmdyb3VwL3BraS9NZWkt
U2hlbmctR3JvdXAtVmF1bHQtSU1DQS5jcmwwCgYIKoZIzj0EAwMDaAAwZQIwKWCU
8udFsZc1hH5IGMSo/PJjAs/q4PbsddwFp0s+P64PFxun+DTkFDmw4GYwUjv5AjEA
i+TpLy8j4LmvTq9tgJ/6UlFHAuHmnho8qoBURNrve7dJiRPYJfRYoqJ3IY3J7CdK
-----END CERTIFICATE-----
certs/complete_ca_bundle.pem (new file, 21 lines)
@@ -0,0 +1,21 @@
{"errors":["missing client token"]}
-----BEGIN CERTIFICATE-----
MIIDXDCCAuKgAwIBAgIUHChdZkXlA0s5wEy9qjYCkrwc58UwCgYIKoZIzj0EAwMw
gYcxCzAJBgNVBAYTAlZOMQ4wDAYDVQQIEwVWTi00MzESMBAGA1UEBxMJTmdhaSBH
aWFvMRgwFgYDVQQKEw9NZWkgU2hlbmcgR3JvdXAxGTAXBgNVBAsTEFRlY2hub2xv
Z3kgR3JvdXAxHzAdBgNVBAMMFk1laV9TaGVuZ19Hcm91cF9Sb290Q0EwHhcNMjAw
OTE0MDQwNzAwWhcNMjUwOTEzMDQwNzAwWjAtMSswKQYDVQQDDCJNZWlfU2hlbmdf
R3JvdXBfSW50ZXJtZWRpYXRlX0NBXzAyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAyEoQIfXC9wX9lqq9nGMpf437M70FUeTExY915wNsMhOXrJflT66p
f2A+uA3hq8wHGq+wOGFTEhteQhIDoRADLes5ywa5qXCQbi3HeB5WtbT3ayFfh2xY
MdGsJVg0aqjPPuF1UVnNFSTvsJm0unLgNNrw1lzwB3qvg28G/j3MDkRYhB+pNmOH
yHZQbDIJhZ+OCOxf78fdNfSVUJNmVZM2tVDbN/Dz2jiFIkEyX7FgRm26uTdmAMTG
m/RbSa4k7C+9/bZSm2k22R0weKodnCVMVJvqeh3VB40ETeebaIi3oBi4AzyN8d8q
yhqle+Bj78qtghaPHrRY4Hbt51wh8fjdjwIDAQABo4G5MIG2MA4GA1UdDwEB/wQE
AwIBpjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBTLduok3uInrMWi6mZe
Lt9v6weoyTAfBgNVHSMEGDAWgBRFZFsAQFhk5efyrI3BepXfPi+DgjBQBgNVHR8E
STBHMEWgQ6BBhj9odHRwOi8vY3JsLmRzLm1laXNoZW5nLmdyb3VwL3BraS9NZWkt
U2hlbmctR3JvdXAtVmF1bHQtSU1DQS5jcmwwCgYIKoZIzj0EAwMDaAAwZQIwKWCU
8udFsZc1hH5IGMSo/PJjAs/q4PbsddwFp0s+P64PFxun+DTkFDmw4GYwUjv5AjEA
i+TpLy8j4LmvTq9tgJ/6UlFHAuHmnho8qoBURNrve7dJiRPYJfRYoqJ3IY3J7CdK
-----END CERTIFICATE-----
certs/full_chain_raw.pem (new file, 33 lines)
@@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFsjCCBJqgAwIBAgIUILZlhb2ckYpVea2ie8YePywKDNswDQYJKoZIhvcNAQEL
BQAwLTErMCkGA1UEAwwiTWVpX1NoZW5nX0dyb3VwX0ludGVybWVkaWF0ZV9DQV8w
MjAeFw0yNTA1MzAxNTIwNDJaFw0yNTA1MzExNTIxMDlaMIHSMQswCQYDVQQGEwJW
TjERMA8GA1UECBMIRG9uZyBOYWkxEzARBgNVBAcTCkxvbmcgVGhhbmgxLTArBgNV
BAkTJFN0cmVldCAxLCBMb25nIFRoYW5oIEluZHVzdHJpYWwgWm9uZTEPMA0GA1UE
ERMGNzYwMDAwMSEwHwYDVQQKExhNZWkgU2hlbmcgVGV4dGlsZXMgR3JvdXAxGTAX
BgNVBAsTEFRlY2hub2xvZ3kgR3JvdXAxHTAbBgNVBAMMFCouZGV2Lm1laXNoZW5n
Lmdyb3VwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEArnfHjOSjdy8p
bkV0+Xq+9VCJHwNnaakOUJKSkW/Iw8/KbnNzT0Y9V3zFhKJMUaVsucTNneEbWOc3
wdoe0C75PjYY9Bw3VSnyaXHF84QNy7LxM3E8X0R3rqETfJilzFA4nBtI5bx1WxNp
tSOOYSgcoD7W38mKPpNO3yKdEmrkl5YiunWQBChD+K7tlDewcHnWuJsBPHO+cRrz
rcfv6oozD2zlX5yBzF1lOVWV7TDnCjvyCYuUR9LvwswOkEi8gxnCZxlF9psHvb+a
5CoMleVct6Hgzo2lPe3t7f/eszdbkMIxN/CyIsqG+G5Ljr9M4dTTWhy6nnkF9MkA
i/wZGdKdmSIabXq2/nwlebSJA4sDUBzX+/8Wm2izHN9WqM0bmOdhrwF9uCfAI3d0
iMeRzxGfJHVU6yml3PSyejc5SpHG4htnWbrZwJB0kxNCPVHYssajqyG41n9xS5dp
bdlP6nl0x1BLvESPKE0oksoDdEkZ1nudSW6uVnNA2idyAwplFD4H4Ww82zxdTwBY
i9nHtMAoizSyd1RxC6SRGaw5jgCaoBw95YbTftOQqH3meu3SWYGhFtpVMu2ZL4nz
7ZwAz/XMJXNdni/+O6hI9ajRSYkrYW5qU3sWXhpehHNGD+Z5MYse/Gl7qGB4P4G6
3aWx0iFmlpi7EzNe7mG85+6oqsfXBYsCAwEAAaOCASIwggEeMA4GA1UdDwEB/wQE
AwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFOAu
oM+WwWMbylfcJQlkYkPMCi3dMB8GA1UdIwQYMBaAFMt26iTe4iesxaLqZl4u32/r
B6jJMEoGCCsGAQUFBwEBBD4wPDA6BggrBgEFBQcwAoYuaHR0cHM6Ly92YXVsdC5k
cy5tZWlzaGVuZy5ncm91cDo4MjAwL3YxL3BraS9jYTAfBgNVHREEGDAWghQqLmRl
di5tZWlzaGVuZy5ncm91cDBABgNVHR8EOTA3MDWgM6Axhi9odHRwczovL3ZhdWx0
LmRzLm1laXNoZW5nLmdyb3VwOjgyMDAvdjEvcGtpL2NybDANBgkqhkiG9w0BAQsF
AAOCAQEAs0lsR2dVZNqe/4Rt4nB/YOz/GjnX2HU3EY9LNK0ItCpZNcoVGDiAPWn/
tWzAVdSp9DfDl4SO26pKCTknGUUrBr08WtkR6CqUmZ7rf5cYl0gtG6LM3/Qn2wt6
p14HYwJsgt3z3uJ8NGdp9SeamQuhMERz6uQ/t9ueeR806vJEZxJGb/bpHMYiEYyh
6FFwSnwSBLVUwR0aUqVCVg5yYnrjI/WVbLaXQLf1WBpbNl72sSBPnxxWzfb2ddvN
DkPD/w983xFNSys98E+N9XeSSOuzOocLvgqZkFlCU9J60sgS8Zyaxrt4H29WMvip
8nHYJG7vL61dt80BZioEuChMpRKKsw==
-----END CERTIFICATE-----
BIN
certs/intermediate_ca.pem
Normal file
Binary file not shown.
33
certs/mcp_full_chain.pem
Normal file
@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFsjCCBJqgAwIBAgIURoqUihkDo+A8XSErmcZq4C+r7fowDQYJKoZIhvcNAQEL
BQAwLTErMCkGA1UEAwwiTWVpX1NoZW5nX0dyb3VwX0ludGVybWVkaWF0ZV9DQV8w
MjAeFw0yNTA2MDMwNjAxNTVaFw0yNTA2MDQwNjAyMjFaMIHSMQswCQYDVQQGEwJW
TjERMA8GA1UECBMIRG9uZyBOYWkxEzARBgNVBAcTCkxvbmcgVGhhbmgxLTArBgNV
BAkTJFN0cmVldCAxLCBMb25nIFRoYW5oIEluZHVzdHJpYWwgWm9uZTEPMA0GA1UE
ERMGNzYwMDAwMSEwHwYDVQQKExhNZWkgU2hlbmcgVGV4dGlsZXMgR3JvdXAxGTAX
BgNVBAsTEFRlY2hub2xvZ3kgR3JvdXAxHTAbBgNVBAMMFCouZGV2Lm1laXNoZW5n
Lmdyb3VwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEA9O1Q/5KVlz3J
00ay0ExO0cOUCdvk/vDiDw4sk3IIgTi22lQbV44eMRdcKkXg54BJu4FhUSglnstl
ESgEcsmcuy4pw0E8AGQmsaBkGRQO8Qm5Fo3Ifly28x+4nkZyyNry7YJjmqkSDmt8
JEXmTNz+rApgN3f7IsZLzpfKsihNVDOj0fxn0Csf7JlscFBGv1SzuNlpUbAFkn1K
nMsnwzlp8mJOCOh1RPP41r1GBBjrwqh67urAv57aIUHxyFF+yqNYpmhKKZMkKdJu
QPKHTHeKBGs2xwkGDusv3vlqfKcPPQU3CmFI/MBiVkCx9t9MztWcY7bZ4iVG0NeH
tOxAosn0jNqUy5Lk3LAIlIxC29DEAeuzUhx5dNHMolWqcy9KzID0jawcqL1/AdSH
pituAKmkemzbM5YNHLGBaIu5scKbzi5oidJTPtBYjyB3anTV2hgxaWPbqKEetFL/
pg0GOFarQiG2KcztZtGrSrOBD/o8h2hsBSaqmtrA22am8ubOaE61rhpiDJ0e7kpb
lGD3fGt5tzpoVdZYPccZvu2QXPwQ9BKUO0ZFa40vziQ9GXC3YqMGfyH4mxkghvp7
3TTgWlpiATj92nRmpptcoeIDoOV4rXPVRwOFZNRoZ5Ce1VK37ZdRsgqgmRLZk0EJ
76LjBD/79ncxWXAXYDadkkKp9BIV9yMCAwEAAaOCASIwggEeMA4GA1UdDwEB/wQE
AwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFOY9
qwv5bdsS55ErMBXHfLkdOmZTMB8GA1UdIwQYMBaAFMt26iTe4iesxaLqZl4u32/r
B6jJMEoGCCsGAQUFBwEBBD4wPDA6BggrBgEFBQcwAoYuaHR0cHM6Ly92YXVsdC5k
cy5tZWlzaGVuZy5ncm91cDo4MjAwL3YxL3BraS9jYTAfBgNVHREEGDAWghQqLmRl
di5tZWlzaGVuZy5ncm91cDBABgNVHR8EOTA3MDWgM6Axhi9odHRwczovL3ZhdWx0
LmRzLm1laXNoZW5nLmdyb3VwOjgyMDAvdjEvcGtpL2NybDANBgkqhkiG9w0BAQsF
AAOCAQEAurpEPi//nEtECNmYP3rCH67CX8P4SXH8VS+/y7luaPU+YQpHeJD/6+6n
E2iK4XVElyKgISobm4wVY8G600St4U7TGsPB+lR4q7yKsi271BHhP2GRcsK1+WYY
STCr5Z0hznrgli7xHySIlWOx1k8qtEE1D9Z/zJDgF6FcgtS2TWkPVhaGEo++PQE7
OyrYCZ+JgCGO0pRUIagu7ZlATdpsnuTvalzdV7vPTSBMB7GI/gtcT95GKb0G8vVi
CvANvUTKIag0rIlRNHoqwz+9wa9fzVgIr9ZnxlXLfB4PYHuOtxpyIAUc7ZsuYf5P
MEboDMck/g5mE+VBMywOVYb9+1N+VA==
-----END CERTIFICATE-----
20
certs/mei_sheng_ca_bundle.pem
Normal file
@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDXDCCAuKgAwIBAgIUHChdZkXlA0s5wEy9qjYCkrwc58UwCgYIKoZIzj0EAwMw
gYcxCzAJBgNVBAYTAlZOMQ4wDAYDVQQIEwVWTi00MzESMBAGA1UEBxMJTmdhaSBH
aWFvMRgwFgYDVQQKEw9NZWkgU2hlbmcgR3JvdXAxGTAXBgNVBAsTEFRlY2hub2xv
Z3kgR3JvdXAxHzAdBgNVBAMMFk1laV9TaGVuZ19Hcm91cF9Sb290Q0EwHhcNMjAw
OTE0MDQwNzAwWhcNMjUwOTEzMDQwNzAwWjAtMSswKQYDVQQDDCJNZWlfU2hlbmdf
R3JvdXBfSW50ZXJtZWRpYXRlX0NBXzAyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAyEoQIfXC9wX9lqq9nGMpf437M70FUeTExY915wNsMhOXrJflT66p
f2A+uA3hq8wHGq+wOGFTEhteQhIDoRADLes5ywa5qXCQbi3HeB5WtbT3ayFfh2xY
MdGsJVg0aqjPPuF1UVnNFSTvsJm0unLgNNrw1lzwB3qvg28G/j3MDkRYhB+pNmOH
yHZQbDIJhZ+OCOxf78fdNfSVUJNmVZM2tVDbN/Dz2jiFIkEyX7FgRm26uTdmAMTG
m/RbSa4k7C+9/bZSm2k22R0weKodnCVMVJvqeh3VB40ETeebaIi3oBi4AzyN8d8q
yhqle+Bj78qtghaPHrRY4Hbt51wh8fjdjwIDAQABo4G5MIG2MA4GA1UdDwEB/wQE
AwIBpjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBTLduok3uInrMWi6mZe
Lt9v6weoyTAfBgNVHSMEGDAWgBRFZFsAQFhk5efyrI3BepXfPi+DgjBQBgNVHR8E
STBHMEWgQ6BBhj9odHRwOi8vY3JsLmRzLm1laXNoZW5nLmdyb3VwL3BraS9NZWkt
U2hlbmctR3JvdXAtVmF1bHQtSU1DQS5jcmwwCgYIKoZIzj0EAwMDaAAwZQIwKWCU
8udFsZc1hH5IGMSo/PJjAs/q4PbsddwFp0s+P64PFxun+DTkFDmw4GYwUjv5AjEA
i+TpLy8j4LmvTq9tgJ/6UlFHAuHmnho8qoBURNrve7dJiRPYJfRYoqJ3IY3J7CdK
-----END CERTIFICATE-----
20
certs/meisheng_ca_bundle.pem
Normal file
@ -0,0 +1,20 @@
-----BEGIN CERTIFICATE-----
MIIDXDCCAuKgAwIBAgIUHChdZkXlA0s5wEy9qjYCkrwc58UwCgYIKoZIzj0EAwMw
gYcxCzAJBgNVBAYTAlZOMQ4wDAYDVQQIEwVWTi00MzESMBAGA1UEBxMJTmdhaSBH
aWFvMRgwFgYDVQQKEw9NZWkgU2hlbmcgR3JvdXAxGTAXBgNVBAsTEFRlY2hub2xv
Z3kgR3JvdXAxHzAdBgNVBAMMFk1laV9TaGVuZ19Hcm91cF9Sb290Q0EwHhcNMjAw
OTE0MDQwNzAwWhcNMjUwOTEzMDQwNzAwWjAtMSswKQYDVQQDDCJNZWlfU2hlbmdf
R3JvdXBfSW50ZXJtZWRpYXRlX0NBXzAyMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8A
MIIBCgKCAQEAyEoQIfXC9wX9lqq9nGMpf437M70FUeTExY915wNsMhOXrJflT66p
f2A+uA3hq8wHGq+wOGFTEhteQhIDoRADLes5ywa5qXCQbi3HeB5WtbT3ayFfh2xY
MdGsJVg0aqjPPuF1UVnNFSTvsJm0unLgNNrw1lzwB3qvg28G/j3MDkRYhB+pNmOH
yHZQbDIJhZ+OCOxf78fdNfSVUJNmVZM2tVDbN/Dz2jiFIkEyX7FgRm26uTdmAMTG
m/RbSa4k7C+9/bZSm2k22R0weKodnCVMVJvqeh3VB40ETeebaIi3oBi4AzyN8d8q
yhqle+Bj78qtghaPHrRY4Hbt51wh8fjdjwIDAQABo4G5MIG2MA4GA1UdDwEB/wQE
AwIBpjASBgNVHRMBAf8ECDAGAQH/AgEAMB0GA1UdDgQWBBTLduok3uInrMWi6mZe
Lt9v6weoyTAfBgNVHSMEGDAWgBRFZFsAQFhk5efyrI3BepXfPi+DgjBQBgNVHR8E
STBHMEWgQ6BBhj9odHRwOi8vY3JsLmRzLm1laXNoZW5nLmdyb3VwL3BraS9NZWkt
U2hlbmctR3JvdXAtVmF1bHQtSU1DQS5jcmwwCgYIKoZIzj0EAwMDaAAwZQIwKWCU
8udFsZc1hH5IGMSo/PJjAs/q4PbsddwFp0s+P64PFxun+DTkFDmw4GYwUjv5AjEA
i+TpLy8j4LmvTq9tgJ/6UlFHAuHmnho8qoBURNrve7dJiRPYJfRYoqJ3IY3J7CdK
-----END CERTIFICATE-----
1
certs/root_ca.pem
Normal file
@ -0,0 +1 @@
{"errors":["missing client token"]}
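Note that `certs/root_ca.pem` was committed containing a Vault API error body rather than a certificate: the CA was evidently fetched from an endpoint that required authentication, and without an `X-Vault-Token` header Vault answers `{"errors":["missing client token"]}`. As a hedged sketch (the `pki` mount path and the `VAULT_TOKEN` environment variable are assumptions, not confirmed by this repo), a fetch helper might look like:

```python
import os
import urllib.request
from typing import Optional

def vault_ca_request(vault_addr: str, token: Optional[str] = None) -> urllib.request.Request:
    """Build a request for Vault's PKI CA endpoint; the /pem variant returns raw PEM."""
    req = urllib.request.Request(f"{vault_addr.rstrip('/')}/v1/pki/ca/pem")
    if token:
        # Without this header an authenticated Vault path answers
        # {"errors":["missing client token"]} -- which is what landed in root_ca.pem.
        req.add_header("X-Vault-Token", token)
    return req

req = vault_ca_request("https://vault.ds.meisheng.group:8200", os.getenv("VAULT_TOKEN"))
print(req.full_url)
```

Checking that the downloaded file starts with `-----BEGIN CERTIFICATE-----` before committing it would have caught this.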
33
certs/server_cert.pem
Normal file
@ -0,0 +1,33 @@
-----BEGIN CERTIFICATE-----
MIIFsjCCBJqgAwIBAgIUILZlhb2ckYpVea2ie8YePywKDNswDQYJKoZIhvcNAQEL
BQAwLTErMCkGA1UEAwwiTWVpX1NoZW5nX0dyb3VwX0ludGVybWVkaWF0ZV9DQV8w
MjAeFw0yNTA1MzAxNTIwNDJaFw0yNTA1MzExNTIxMDlaMIHSMQswCQYDVQQGEwJW
TjERMA8GA1UECBMIRG9uZyBOYWkxEzARBgNVBAcTCkxvbmcgVGhhbmgxLTArBgNV
BAkTJFN0cmVldCAxLCBMb25nIFRoYW5oIEluZHVzdHJpYWwgWm9uZTEPMA0GA1UE
ERMGNzYwMDAwMSEwHwYDVQQKExhNZWkgU2hlbmcgVGV4dGlsZXMgR3JvdXAxGTAX
BgNVBAsTEFRlY2hub2xvZ3kgR3JvdXAxHTAbBgNVBAMMFCouZGV2Lm1laXNoZW5n
Lmdyb3VwMIICIjANBgkqhkiG9w0BAQEFAAOCAg8AMIICCgKCAgEArnfHjOSjdy8p
bkV0+Xq+9VCJHwNnaakOUJKSkW/Iw8/KbnNzT0Y9V3zFhKJMUaVsucTNneEbWOc3
wdoe0C75PjYY9Bw3VSnyaXHF84QNy7LxM3E8X0R3rqETfJilzFA4nBtI5bx1WxNp
tSOOYSgcoD7W38mKPpNO3yKdEmrkl5YiunWQBChD+K7tlDewcHnWuJsBPHO+cRrz
rcfv6oozD2zlX5yBzF1lOVWV7TDnCjvyCYuUR9LvwswOkEi8gxnCZxlF9psHvb+a
5CoMleVct6Hgzo2lPe3t7f/eszdbkMIxN/CyIsqG+G5Ljr9M4dTTWhy6nnkF9MkA
i/wZGdKdmSIabXq2/nwlebSJA4sDUBzX+/8Wm2izHN9WqM0bmOdhrwF9uCfAI3d0
iMeRzxGfJHVU6yml3PSyejc5SpHG4htnWbrZwJB0kxNCPVHYssajqyG41n9xS5dp
bdlP6nl0x1BLvESPKE0oksoDdEkZ1nudSW6uVnNA2idyAwplFD4H4Ww82zxdTwBY
i9nHtMAoizSyd1RxC6SRGaw5jgCaoBw95YbTftOQqH3meu3SWYGhFtpVMu2ZL4nz
7ZwAz/XMJXNdni/+O6hI9ajRSYkrYW5qU3sWXhpehHNGD+Z5MYse/Gl7qGB4P4G6
3aWx0iFmlpi7EzNe7mG85+6oqsfXBYsCAwEAAaOCASIwggEeMA4GA1UdDwEB/wQE
AwIDqDAdBgNVHSUEFjAUBggrBgEFBQcDAQYIKwYBBQUHAwIwHQYDVR0OBBYEFOAu
oM+WwWMbylfcJQlkYkPMCi3dMB8GA1UdIwQYMBaAFMt26iTe4iesxaLqZl4u32/r
B6jJMEoGCCsGAQUFBwEBBD4wPDA6BggrBgEFBQcwAoYuaHR0cHM6Ly92YXVsdC5k
cy5tZWlzaGVuZy5ncm91cDo4MjAwL3YxL3BraS9jYTAfBgNVHREEGDAWghQqLmRl
di5tZWlzaGVuZy5ncm91cDBABgNVHR8EOTA3MDWgM6Axhi9odHRwczovL3ZhdWx0
LmRzLm1laXNoZW5nLmdyb3VwOjgyMDAvdjEvcGtpL2NybDANBgkqhkiG9w0BAQsF
AAOCAQEAs0lsR2dVZNqe/4Rt4nB/YOz/GjnX2HU3EY9LNK0ItCpZNcoVGDiAPWn/
tWzAVdSp9DfDl4SO26pKCTknGUUrBr08WtkR6CqUmZ7rf5cYl0gtG6LM3/Qn2wt6
p14HYwJsgt3z3uJ8NGdp9SeamQuhMERz6uQ/t9ueeR806vJEZxJGb/bpHMYiEYyh
6FFwSnwSBLVUwR0aUqVCVg5yYnrjI/WVbLaXQLf1WBpbNl72sSBPnxxWzfb2ddvN
DkPD/w983xFNSys98E+N9XeSSOuzOocLvgqZkFlCU9J60sgS8Zyaxrt4H29WMvip
8nHYJG7vL61dt80BZioEuChMpRKKsw==
-----END CERTIFICATE-----
58
certs/test_ssl.py
Executable file
@ -0,0 +1,58 @@
#!/usr/bin/env python3
"""
Test SSL connections to Mei Sheng Group services with proper certificate verification.
"""
import requests
import urllib3
import os
import sys

# Disable only the specific warning for unverified HTTPS requests
urllib3.disable_warnings(urllib3.exceptions.InsecureRequestWarning)

def test_with_ca_bundle():
    """Test connections using the CA bundle"""
    ca_bundle = os.path.join(os.path.dirname(__file__), 'meisheng_ca_bundle.pem')

    print("🔒 Testing with CA Bundle...")
    print(f"📁 CA Bundle: {ca_bundle}")

    services = [
        ("Gitea", "https://gitea.dev.meisheng.group/api/v1/version"),
        ("Nomad MCP", "https://nomad_mcp.dev.meisheng.group/api/health"),
    ]

    for name, url in services:
        try:
            response = requests.get(url, verify=ca_bundle, timeout=5)
            print(f"✅ {name}: {response.status_code} - {response.text[:100]}")
        except requests.exceptions.SSLError as e:
            print(f"🔓 {name}: SSL Error - {e}")
            # Try with verification disabled to check if it's just a cert issue
            try:
                response = requests.get(url, verify=False, timeout=5)
                print(f"⚠️  {name}: Works without SSL verification - {response.status_code}")
            except Exception as e2:
                print(f"❌ {name}: Complete failure - {e2}")
        except Exception as e:
            print(f"❌ {name}: Error - {e}")

def test_with_system_certs():
    """Test connections using system certificates"""
    print("\n🔒 Testing with System Certificates...")

    services = [
        ("Gitea", "https://gitea.dev.meisheng.group/api/v1/version"),
        ("Nomad MCP", "https://nomad_mcp.dev.meisheng.group/api/health"),
    ]

    for name, url in services:
        try:
            response = requests.get(url, timeout=5)
            print(f"✅ {name}: {response.status_code}")
        except Exception as e:
            print(f"❌ {name}: {e}")

if __name__ == "__main__":
    test_with_ca_bundle()
    test_with_system_certs()
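The script above only exercises live endpoints. A complementary offline sanity check, not part of this commit, is to count the certificate blocks in an assembled bundle before pointing `REQUESTS_CA_BUNDLE` at it; a concatenation mistake (e.g. the Vault error body in `root_ca.pem`) yields fewer blocks than expected:

```python
def count_pem_certs(bundle_text: str) -> int:
    """Count certificate blocks in a PEM bundle (quick sanity check after concatenation)."""
    return bundle_text.count("-----BEGIN CERTIFICATE-----")

# A fake two-certificate bundle for illustration
sample = "-----BEGIN CERTIFICATE-----\nMIIB...\n-----END CERTIFICATE-----\n" * 2
print(count_pem_certs(sample))  # → 2
```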
@ -4,7 +4,7 @@
   "description": "Nomad MCP service for Claude Code using SSE",
   "transport": {
     "type": "sse",
-    "url": "http://localhost:8000/api/claude/mcp/stream"
+    "url": "https://nomad_mcp.dev.meisheng.group/api/claude/mcp/stream"
   },
   "authentication": {
     "type": "none"
10
claude_desktop_config.json
Normal file
@ -0,0 +1,10 @@
{
  "mcpServers": {
    "nomad-mcp": {
      "command": "/Users/nkohl/Documents/Code/nomad_mcp/run_mcp_server.sh",
      "env": {
        "NOMAD_ADDR": "http://pjmldk01.ds.meisheng.group:4646"
      }
    }
  }
}
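Claude Desktop reads each `mcpServers` entry and launches `command` with the given `env`; a missing `command` fails only at launch time. A small shape check before installing the file (a hypothetical helper, not part of this repo) can catch that earlier:

```python
import json

REQUIRED_KEYS = {"command"}  # each server entry must at least name an executable

def validate_mcp_config(text: str) -> list:
    """Return the names of mcpServers entries missing required keys."""
    cfg = json.loads(text)
    bad = []
    for name, entry in cfg.get("mcpServers", {}).items():
        if not REQUIRED_KEYS.issubset(entry):
            bad.append(name)
    return bad

config = '{"mcpServers": {"nomad-mcp": {"command": "./run_mcp_server.sh", "env": {"NOMAD_ADDR": "http://pjmldk01.ds.meisheng.group:4646"}}}}'
print(validate_mcp_config(config))  # → []
```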
925
mcp_server.py
Normal file
@ -0,0 +1,925 @@
#!/usr/bin/env python3
"""
Nomad MCP Server for Claude Desktop App
Provides MCP tools for managing HashiCorp Nomad jobs
"""

import asyncio
import json
import logging
import os
import sys
import requests
from typing import Any, Dict, List, Optional

from mcp.server import NotificationOptions, Server
from mcp.server.models import InitializationOptions
import mcp.server.stdio
import mcp.types as types

from app.services.nomad_client import NomadService
from app.schemas.claude_api import ClaudeJobSpecification

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger("nomad-mcp")

# Create the server instance
server = Server("nomad-mcp")

@server.list_tools()
async def handle_list_tools() -> List[types.Tool]:
    """List available tools for Nomad management."""
    return [
        types.Tool(
            name="list_nomad_jobs",
            description="List all Nomad jobs in a namespace",
            inputSchema={
                "type": "object",
                "properties": {
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    }
                }
            }
        ),
        types.Tool(
            name="get_job_status",
            description="Get the status of a specific Nomad job",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to check"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="stop_job",
            description="Stop a running Nomad job",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to stop"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    },
                    "purge": {
                        "type": "boolean",
                        "description": "Whether to purge the job",
                        "default": False
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="restart_job",
            description="Restart a Nomad job",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to restart"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="create_job",
            description="Create a new Nomad job",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "Unique ID for the job"
                    },
                    "name": {
                        "type": "string",
                        "description": "Display name for the job"
                    },
                    "type": {
                        "type": "string",
                        "description": "Job type (service, batch, etc.)",
                        "default": "service"
                    },
                    "datacenters": {
                        "type": "array",
                        "description": "List of datacenters to run the job in",
                        "items": {"type": "string"},
                        "default": ["jm"]
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    },
                    "docker_image": {
                        "type": "string",
                        "description": "Docker image to run"
                    },
                    "count": {
                        "type": "integer",
                        "description": "Number of instances to run",
                        "default": 1
                    },
                    "cpu": {
                        "type": "integer",
                        "description": "CPU allocation in MHz",
                        "default": 100
                    },
                    "memory": {
                        "type": "integer",
                        "description": "Memory allocation in MB",
                        "default": 128
                    },
                    "ports": {
                        "type": "array",
                        "description": "Port mappings",
                        "items": {"type": "object"},
                        "default": []
                    },
                    "env_vars": {
                        "type": "object",
                        "description": "Environment variables for the container",
                        "default": {}
                    }
                },
                "required": ["job_id", "name", "docker_image"]
            }
        ),
        types.Tool(
            name="get_job_logs",
            description="Get logs for a Nomad job with advanced filtering options",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to get logs for"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    },
                    "log_type": {
                        "type": "string",
                        "description": "Type of logs: stdout or stderr",
                        "enum": ["stdout", "stderr"],
                        "default": "stderr"
                    },
                    "start_time": {
                        "type": "string",
                        "description": "Start time filter (HH:MM format, e.g., '20:00' for 8 PM)"
                    },
                    "end_time": {
                        "type": "string",
                        "description": "End time filter (HH:MM format, e.g., '06:00' for 6 AM)"
                    },
                    "log_level": {
                        "type": "string",
                        "description": "Filter by log level: ERROR, WARNING, INFO, DEBUG, EMERGENCY, CRITICAL"
                    },
                    "search": {
                        "type": "string",
                        "description": "Search term to filter logs"
                    },
                    "lines": {
                        "type": "integer",
                        "description": "Number of recent log lines to return"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Number of allocations to check",
                        "default": 1
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="get_error_logs",
            description="Get only error and warning logs for a Nomad job, useful for troubleshooting",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to get error logs for"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    },
                    "start_time": {
                        "type": "string",
                        "description": "Start time filter (HH:MM format, e.g., '20:00' for 8 PM)"
                    },
                    "end_time": {
                        "type": "string",
                        "description": "End time filter (HH:MM format, e.g., '06:00' for 6 AM)"
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="search_job_logs",
            description="Search Nomad job logs for specific terms or patterns",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to search logs for"
                    },
                    "search_term": {
                        "type": "string",
                        "description": "Term or pattern to search for in logs"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    },
                    "log_type": {
                        "type": "string",
                        "description": "Type of logs: stdout or stderr",
                        "enum": ["stdout", "stderr"],
                        "default": "stderr"
                    },
                    "lines": {
                        "type": "integer",
                        "description": "Number of matching lines to return",
                        "default": 50
                    }
                },
                "required": ["job_id", "search_term"]
            }
        ),
        types.Tool(
            name="submit_job_file",
            description="Submit a Nomad job from HCL or JSON file content",
            inputSchema={
                "type": "object",
                "properties": {
                    "file_content": {
                        "type": "string",
                        "description": "Content of the Nomad job file (HCL or JSON format)"
                    },
                    "file_type": {
                        "type": "string",
                        "description": "Type of file content: 'hcl' or 'json'",
                        "enum": ["hcl", "json"],
                        "default": "json"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace to submit the job to",
                        "default": "development"
                    }
                },
                "required": ["file_content"]
            }
        ),
        types.Tool(
            name="get_allocation_status",
            description="Get detailed status of job allocations",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to check allocations for"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="get_job_evaluations",
            description="Get evaluations for a job to understand placement and failures",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to get evaluations for"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    }
                },
                "required": ["job_id"]
            }
        ),
        types.Tool(
            name="force_evaluate_job",
            description="Force a new evaluation for a job (retry failed placements)",
            inputSchema={
                "type": "object",
                "properties": {
                    "job_id": {
                        "type": "string",
                        "description": "ID of the job to force evaluate"
                    },
                    "namespace": {
                        "type": "string",
                        "description": "Nomad namespace",
                        "default": "development"
                    }
                },
                "required": ["job_id"]
            }
        )
    ]

@server.call_tool()
async def handle_call_tool(name: str, arguments: Dict[str, Any]) -> List[types.TextContent]:
    """Handle tool calls from Claude."""
    try:
        # Create Nomad service instance
        nomad_service = NomadService()
        namespace = arguments.get("namespace", "development")
        nomad_service.namespace = namespace

        if name == "list_nomad_jobs":
            jobs = nomad_service.list_jobs()
            simplified_jobs = []
            for job in jobs:
                simplified_jobs.append({
                    "id": job.get("ID"),
                    "name": job.get("Name"),
                    "status": job.get("Status"),
                    "type": job.get("Type"),
                    "namespace": namespace
                })

            return [types.TextContent(
                type="text",
                text=json.dumps(simplified_jobs, indent=2)
            )]

        elif name == "get_job_status":
            job_id = arguments.get("job_id")
            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            job = nomad_service.get_job(job_id)
            allocations = nomad_service.get_allocations(job_id)

            latest_alloc = None
            if allocations:
                sorted_allocations = sorted(
                    allocations,
                    key=lambda a: a.get("CreateTime", 0),
                    reverse=True
                )
                latest_alloc = sorted_allocations[0]

            result = {
                "job_id": job_id,
                "status": job.get("Status", "unknown"),
                "message": f"Job {job_id} is {job.get('Status', 'unknown')}",
                "details": {
                    "job": job,
                    "latest_allocation": latest_alloc
                }
            }

            return [types.TextContent(
                type="text",
                text=json.dumps(result, indent=2)
            )]

        elif name == "stop_job":
            job_id = arguments.get("job_id")
            purge = arguments.get("purge", False)

            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            result = nomad_service.stop_job(job_id, purge=purge)

            response = {
                "success": True,
                "job_id": job_id,
                "status": "stopped",
                "message": f"Job {job_id} has been stopped" + (" and purged" if purge else ""),
                "details": result
            }

            return [types.TextContent(
                type="text",
                text=json.dumps(response, indent=2)
            )]

        elif name == "restart_job":
            job_id = arguments.get("job_id")

            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            # Get current job spec
            job_spec = nomad_service.get_job(job_id)

            # Stop and restart
            nomad_service.stop_job(job_id)
            result = nomad_service.start_job(job_spec)

            response = {
                "success": True,
                "job_id": job_id,
                "status": "restarted",
                "message": f"Job {job_id} has been restarted",
                "details": result
            }

            return [types.TextContent(
                type="text",
                text=json.dumps(response, indent=2)
            )]

        elif name == "create_job":
            # Validate required arguments
            required_args = ["job_id", "name", "docker_image"]
            for arg in required_args:
                if not arguments.get(arg):
                    return [types.TextContent(
                        type="text",
                        text=f"Error: {arg} is required"
                    )]

            # Create job specification
            job_spec = ClaudeJobSpecification(**arguments)

            # Set namespace
            if job_spec.namespace:
                nomad_service.namespace = job_spec.namespace

            # Convert to Nomad format and start
            nomad_job_spec = job_spec.to_nomad_job_spec()
            result = nomad_service.start_job(nomad_job_spec)

            response = {
                "success": True,
                "job_id": job_spec.job_id,
                "status": "started",
                "message": f"Job {job_spec.job_id} has been created and started",
                "details": result
            }

            return [types.TextContent(
                type="text",
                text=json.dumps(response, indent=2)
            )]

        elif name == "get_job_logs":
            job_id = arguments.get("job_id")

            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            # Use the enhanced REST API endpoint
            import requests
            base_url = os.getenv("BASE_URL", "http://localhost:8000")

            # Build query parameters
            params = {
                "namespace": arguments.get("namespace", namespace),
                "log_type": arguments.get("log_type", "stderr"),
                "limit": arguments.get("limit", 1),
                "plain_text": True,
                "formatted": True
            }

            # Add optional filters
            if arguments.get("start_time"):
                params["start_time"] = arguments["start_time"]
            if arguments.get("end_time"):
                params["end_time"] = arguments["end_time"]
            if arguments.get("log_level"):
                params["log_level"] = arguments["log_level"]
            if arguments.get("search"):
                params["search"] = arguments["search"]
            if arguments.get("lines"):
                params["lines"] = arguments["lines"]

            try:
                response = requests.get(
                    f"{base_url}/api/logs/job/{job_id}",
                    params=params,
                    timeout=30
                )

                if response.status_code == 200:
                    logs_text = response.text

                    result = {
                        "success": True,
                        "job_id": job_id,
                        "namespace": params["namespace"],
                        "message": f"Retrieved logs for job {job_id}",
                        "logs": logs_text,
                        "filters_applied": {k: v for k, v in params.items() if k not in ["namespace", "plain_text", "formatted"]}
                    }
                else:
                    result = {
                        "success": False,
                        "job_id": job_id,
                        "message": f"Failed to get logs: {response.status_code} - {response.text}",
                        "logs": None
                    }

            except Exception as e:
                result = {
                    "success": False,
                    "job_id": job_id,
                    "message": f"Error getting logs: {str(e)}",
                    "logs": None
                }

            return [types.TextContent(
                type="text",
                text=json.dumps(result, indent=2) if not result.get("success") else result["logs"]
            )]

elif name == "get_error_logs":
|
||||
job_id = arguments.get("job_id")
|
||||
|
||||
if not job_id:
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text="Error: job_id is required"
|
||||
)]
|
||||
|
||||
# Use the error logs endpoint
|
||||
import requests
|
||||
base_url = os.getenv("BASE_URL", "http://localhost:8000")
|
||||
|
||||
params = {
|
||||
"namespace": arguments.get("namespace", namespace),
|
||||
"plain_text": True
|
||||
}
|
||||
|
||||
if arguments.get("start_time"):
|
||||
params["start_time"] = arguments["start_time"]
|
||||
if arguments.get("end_time"):
|
||||
params["end_time"] = arguments["end_time"]
|
||||
|
||||
try:
|
||||
response = requests.get(
|
||||
f"{base_url}/api/logs/errors/{job_id}",
|
||||
params=params,
|
||||
timeout=30
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
logs_text = response.text
|
||||
|
||||
result = {
|
||||
"success": True,
|
||||
"job_id": job_id,
|
||||
"message": f"Retrieved error logs for job {job_id}",
|
||||
"logs": logs_text
|
||||
}
|
||||
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text=logs_text
|
||||
)]
|
||||
else:
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text=f"Error: Failed to get error logs: {response.status_code} - {response.text}"
|
||||
)]
|
||||
|
||||
except Exception as e:
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text=f"Error getting error logs: {str(e)}"
|
||||
)]
|
||||
|
||||
elif name == "search_job_logs":
|
||||
job_id = arguments.get("job_id")
|
||||
search_term = arguments.get("search_term")
|
||||
|
||||
if not job_id or not search_term:
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text="Error: job_id and search_term are required"
|
||||
)]
|
||||
|
||||
# Use the search logs endpoint
|
||||
import requests
|
||||
base_url = os.getenv("BASE_URL", "http://localhost:8000")
|
||||
|
||||
params = {
|
||||
"q": search_term,
|
||||
"namespace": arguments.get("namespace", namespace),
|
||||
"log_type": arguments.get("log_type", "stderr"),
|
||||
"lines": arguments.get("lines", 50),
|
||||
"plain_text": True
|
||||
}
|
||||
|
||||
try:
|
||||
response = requests.get(
|
||||
f"{base_url}/api/logs/search/{job_id}",
|
||||
params=params,
|
||||
timeout=30
|
||||
)
|
||||
|
||||
if response.status_code == 200:
|
||||
logs_text = response.text
|
||||
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text=logs_text
|
||||
)]
|
||||
else:
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text=f"Error: Failed to search logs: {response.status_code} - {response.text}"
|
||||
)]
|
||||
|
||||
except Exception as e:
|
||||
return [types.TextContent(
|
||||
type="text",
|
||||
text=f"Error searching logs: {str(e)}"
|
||||
)]
|
||||
|
||||
        elif name == "submit_job_file":
            file_content = arguments.get("file_content")
            file_type = arguments.get("file_type", "json")

            if not file_content:
                return [types.TextContent(
                    type="text",
                    text="Error: file_content is required"
                )]

            try:
                # Parse the job specification based on file type
                if file_type.lower() == "json":
                    import json as json_parser
                    job_spec = json_parser.loads(file_content)
                elif file_type.lower() == "hcl":
                    return [types.TextContent(
                        type="text",
                        text="Error: HCL parsing not yet implemented. Please provide JSON format."
                    )]
                else:
                    return [types.TextContent(
                        type="text",
                        text=f"Error: Unsupported file type '{file_type}'. Use 'json' or 'hcl'."
                    )]

                # Submit the job
                result = nomad_service.start_job(job_spec)

                response = {
                    "success": True,
                    "job_id": result.get("job_id"),
                    "status": "submitted",
                    "message": f"Job {result.get('job_id')} has been submitted from {file_type} file",
                    "details": result
                }

                return [types.TextContent(
                    type="text",
                    text=json.dumps(response, indent=2)
                )]

            except json.JSONDecodeError as e:
                return [types.TextContent(
                    type="text",
                    text=f"Error: Invalid JSON format - {str(e)}"
                )]
            except Exception as e:
                return [types.TextContent(
                    type="text",
                    text=f"Error submitting job: {str(e)}"
                )]
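The HCL branch above currently returns a "not yet implemented" error. A minimal sketch of one way to close that gap: delegate HCL parsing to a running Nomad agent's `/v1/jobs/parse` endpoint, which converts an HCL job file into the JSON job spec this handler already accepts. The helper names (`build_parse_request`, `parse_hcl_job`) and the `NOMAD_ADDR` fallback are assumptions for illustration, not code from this repo.

```python
import os


def build_parse_request(file_content, nomad_addr="http://localhost:4646"):
    """Build the URL and JSON payload for Nomad's HCL-to-JSON parse endpoint."""
    url = f"{nomad_addr}/v1/jobs/parse"
    payload = {"JobHCL": file_content, "Canonicalize": True}
    return url, payload


def parse_hcl_job(file_content, nomad_addr=None):
    """Convert an HCL job file to a JSON job spec via a running Nomad agent.

    Hypothetical sketch: requires network access to a Nomad agent.
    """
    import requests  # imported lazily, as in the handlers above

    addr = nomad_addr or os.getenv("NOMAD_ADDR", "http://localhost:4646")
    url, payload = build_parse_request(file_content, addr)
    resp = requests.post(url, json=payload, timeout=30)
    resp.raise_for_status()
    return resp.json()
```

The parsed dict could then flow into the existing `nomad_service.start_job(job_spec)` path unchanged.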
        elif name == "get_allocation_status":
            job_id = arguments.get("job_id")

            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            # Get allocations for the job
            allocations = nomad_service.get_allocations(job_id)

            # Get detailed status for each allocation
            detailed_allocations = []
            for alloc in allocations:
                alloc_id = alloc.get("ID")
                detailed_allocations.append({
                    "allocation_id": alloc_id,
                    "name": alloc.get("Name"),
                    "client_status": alloc.get("ClientStatus"),
                    "desired_status": alloc.get("DesiredStatus"),
                    "job_id": alloc.get("JobID"),
                    "task_group": alloc.get("TaskGroup"),
                    "node_id": alloc.get("NodeID"),
                    "create_time": alloc.get("CreateTime"),
                    "modify_time": alloc.get("ModifyTime"),
                    "task_states": alloc.get("TaskStates", {}),
                    "failed": alloc.get("Failed", False),
                    "deployment_status": alloc.get("DeploymentStatus", {})
                })

            result = {
                "job_id": job_id,
                "total_allocations": len(allocations),
                "allocations": detailed_allocations,
                "message": f"Found {len(allocations)} allocations for job {job_id}"
            }

            return [types.TextContent(
                type="text",
                text=json.dumps(result, indent=2)
            )]
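Callers of this tool often only want a quick health read rather than the full allocation dump. An illustrative helper (hypothetical, not in the repo) that collapses the `detailed_allocations` list built above into counts per `client_status`:

```python
from collections import Counter


def summarize_allocations(detailed_allocations):
    """Count allocations by client_status, e.g. {'running': 2, 'failed': 1}."""
    return dict(
        Counter(a.get("client_status") or "unknown" for a in detailed_allocations)
    )
```

Such a summary could be added as one more key in the `result` dict without breaking existing consumers of the response shape.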
        elif name == "get_job_evaluations":
            job_id = arguments.get("job_id")

            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            try:
                evaluations = nomad_service.get_job_evaluations(job_id)

                simplified_evals = []
                for eval_item in evaluations:
                    simplified_evals.append({
                        "eval_id": eval_item.get("ID"),
                        "status": eval_item.get("Status"),
                        "type": eval_item.get("Type"),
                        "triggered_by": eval_item.get("TriggeredBy"),
                        "job_id": eval_item.get("JobID"),
                        "create_time": eval_item.get("CreateTime"),
                        "modify_time": eval_item.get("ModifyTime"),
                        "wait_until": eval_item.get("WaitUntil"),
                        "blocked_eval": eval_item.get("BlockedEval"),
                        "failed_tg_allocs": eval_item.get("FailedTGAllocs", {}),
                        "class_eligibility": eval_item.get("ClassEligibility", {}),
                        "quota_limit_reached": eval_item.get("QuotaLimitReached")
                    })

                result = {
                    "job_id": job_id,
                    "total_evaluations": len(evaluations),
                    "evaluations": simplified_evals,
                    "message": f"Found {len(evaluations)} evaluations for job {job_id}"
                }

                return [types.TextContent(
                    type="text",
                    text=json.dumps(result, indent=2)
                )]

            except Exception as e:
                return [types.TextContent(
                    type="text",
                    text=f"Error getting evaluations: {str(e)}"
                )]
        elif name == "force_evaluate_job":
            job_id = arguments.get("job_id")

            if not job_id:
                return [types.TextContent(
                    type="text",
                    text="Error: job_id is required"
                )]

            try:
                # Force evaluation by making a direct API call
                import requests
                nomad_addr = f"http://{nomad_service.client.host}:{nomad_service.client.port}"
                url = f"{nomad_addr}/v1/job/{job_id}/evaluate"

                headers = {}
                if hasattr(nomad_service.client, 'token') and nomad_service.client.token:
                    headers["X-Nomad-Token"] = nomad_service.client.token

                params = {"namespace": nomad_service.namespace}

                response = requests.post(
                    url=url,
                    headers=headers,
                    params=params,
                    verify=False if os.getenv("NOMAD_SKIP_VERIFY", "false").lower() == "true" else True
                )

                if response.status_code == 200:
                    response_data = response.json()

                    result = {
                        "success": True,
                        "job_id": job_id,
                        "eval_id": response_data.get("EvalID"),
                        "status": "evaluation_forced",
                        "message": f"Forced evaluation for job {job_id}",
                        "details": response_data
                    }

                    return [types.TextContent(
                        type="text",
                        text=json.dumps(result, indent=2)
                    )]
                else:
                    return [types.TextContent(
                        type="text",
                        text=f"Error: Failed to force evaluation - {response.text}"
                    )]

            except Exception as e:
                return [types.TextContent(
                    type="text",
                    text=f"Error forcing evaluation: {str(e)}"
                )]
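The `verify=False` opt-out above disables TLS verification entirely when `NOMAD_SKIP_VERIFY` is set. Since this repo ships a `.env.ssl` that exports `REQUESTS_CA_BUNDLE` pointing at the internal CA bundle, an alternative is to keep verification on and hand requests that bundle path. A sketch under that assumption (`tls_verify` is a hypothetical helper name):

```python
import os


def tls_verify():
    """Return a value suitable for the requests library's verify= parameter.

    False only on explicit opt-out; otherwise the internal CA bundle path
    from .env.ssl if exported, else default certificate verification.
    """
    if os.getenv("NOMAD_SKIP_VERIFY", "false").lower() == "true":
        return False  # explicit opt-out, matching the handler above
    # requests accepts either True or a path to a CA bundle file
    return os.getenv("REQUESTS_CA_BUNDLE") or True
```

The `requests.post(...)` call above could then pass `verify=tls_verify()` in place of the inline conditional.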
        else:
            return [types.TextContent(
                type="text",
                text=f"Error: Unknown tool '{name}'"
            )]

    except Exception as e:
        logger.error(f"Error in tool '{name}': {str(e)}")
        return [types.TextContent(
            type="text",
            text=f"Error: {str(e)}"
        )]
async def main():
    """Main entry point for the MCP server."""
    # Run the server using stdio transport
    async with mcp.server.stdio.stdio_server() as (read_stream, write_stream):
        await server.run(
            read_stream,
            write_stream,
            InitializationOptions(
                server_name="nomad-mcp",
                server_version="1.0.0",
                capabilities=server.get_capabilities(
                    notification_options=NotificationOptions(),
                    experimental_capabilities={},
                ),
            ),
        )

if __name__ == "__main__":
    asyncio.run(main())
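As a design note on the tool dispatch shown above: the long if/elif chain grows by one branch per tool. A hypothetical refactor (names are illustrative, not this repo's code) replaces it with a decorator-based handler registry, so each tool lives in its own function and the chain disappears:

```python
# Registry mapping tool name -> handler function
TOOL_HANDLERS = {}


def tool(name):
    """Decorator that registers a handler under a tool name."""
    def register(fn):
        TOOL_HANDLERS[name] = fn
        return fn
    return register


@tool("get_allocation_status")
def handle_allocation_status(arguments):
    # Stand-in body; the real handler would query nomad_service
    return f"allocations for {arguments.get('job_id')}"


def dispatch(name, arguments):
    """Look up and invoke a registered handler, or report an unknown tool."""
    handler = TOOL_HANDLERS.get(name)
    if handler is None:
        return f"Error: Unknown tool '{name}'"
    return handler(arguments)
```

The `call_tool` entry point would then reduce to a single `dispatch(name, arguments)` call wrapped in the existing try/except.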
@@ -7,4 +7,5 @@ httpx
python-multipart
pyyaml
requests
fastapi_mcp
mcp
run_mcp_server.sh (Executable file, 3 lines)

@@ -0,0 +1,3 @@
#!/bin/bash
cd /Users/nkohl/Documents/Code/nomad_mcp
/Users/nkohl/.local/bin/uv run python mcp_server.py