feat: Implement CLI tool, Celery workers, and VMware collector
Some checks failed
CI/CD Pipeline / Generate Documentation (push) Successful in 4m57s
CI/CD Pipeline / Lint Code (push) Successful in 5m33s
CI/CD Pipeline / Run Tests (push) Successful in 4m20s
CI/CD Pipeline / Security Scanning (push) Successful in 4m32s
CI/CD Pipeline / Build and Push Docker Images (chat) (push) Failing after 49s
CI/CD Pipeline / Build and Push Docker Images (frontend) (push) Failing after 48s
CI/CD Pipeline / Build and Push Docker Images (worker) (push) Failing after 46s
CI/CD Pipeline / Build and Push Docker Images (api) (push) Failing after 40s
CI/CD Pipeline / Deploy to Staging (push) Has been skipped
CI/CD Pipeline / Deploy to Production (push) Has been skipped
Complete implementation of core MVP components:

CLI Tool (src/datacenter_docs/cli.py):
- 11 commands for system management (serve, worker, init-db, generate, etc.)
- Auto-remediation policy management (enable/disable/status)
- System statistics and monitoring
- Rich formatted output with tables and panels

Celery Workers (src/datacenter_docs/workers/):
- celery_app.py with 4 specialized queues (documentation, auto_remediation, data_collection, maintenance)
- tasks.py with 8 async tasks integrated with MongoDB/Beanie
- Celery Beat scheduling (docs every 6h, data collection every 1h, metrics every 15m, cleanup at 2am); see the sketch after this message
- Rate limiting (10 auto-remediation tasks/h) and timeout configuration
- Task lifecycle signals and comprehensive logging

VMware Collector (src/datacenter_docs/collectors/):
- BaseCollector abstract class with full workflow (connect/collect/validate/store/disconnect)
- VMwareCollector for vSphere infrastructure data collection
- Collects VMs, ESXi hosts, clusters, datastores, and networks with statistics
- MCP client integration with mock-data fallback for development
- MongoDB storage via AuditLog and data validation

Documentation & Configuration:
- Updated README.md with CLI commands and Workers sections
- Updated TODO.md with project status (55% completion)
- Added CLAUDE.md with comprehensive project instructions
- Added Docker Compose setup for development environment

Project Status:
- Completion: 50% -> 55%
- MVP milestone: 80% complete (only Infrastructure Generator remaining)
- Estimated time to MVP: 1-2 days

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
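The queues, Beat intervals, and rate limit listed above map onto standard Celery configuration roughly as in the minimal sketch below. This is for orientation only: the broker URL, task names, and module paths are illustrative placeholders, not the actual contents of celery_app.py or tasks.py.

from celery import Celery
from celery.schedules import crontab

# Broker URL is a placeholder; the real value comes from project settings.
app = Celery("datacenter_docs", broker="redis://localhost:6379/0")

# Route each task family to one of the four dedicated queues.
app.conf.task_routes = {
    "datacenter_docs.workers.tasks.generate_documentation": {"queue": "documentation"},
    "datacenter_docs.workers.tasks.run_auto_remediation": {"queue": "auto_remediation"},
    "datacenter_docs.workers.tasks.collect_infrastructure_data": {"queue": "data_collection"},
    "datacenter_docs.workers.tasks.cleanup_old_data": {"queue": "maintenance"},
}

# Beat schedule: docs every 6h, data collection hourly, metrics every 15m, cleanup at 02:00.
app.conf.beat_schedule = {
    "regenerate-documentation": {
        "task": "datacenter_docs.workers.tasks.generate_documentation",
        "schedule": 6 * 60 * 60,
    },
    "collect-infrastructure-data": {
        "task": "datacenter_docs.workers.tasks.collect_infrastructure_data",
        "schedule": 60 * 60,
    },
    "collect-metrics": {
        "task": "datacenter_docs.workers.tasks.collect_metrics",
        "schedule": 15 * 60,
    },
    "nightly-cleanup": {
        "task": "datacenter_docs.workers.tasks.cleanup_old_data",
        "schedule": crontab(hour=2, minute=0),
    },
}

# Cap auto-remediation at 10 task executions per hour.
app.conf.task_annotations = {
    "datacenter_docs.workers.tasks.run_auto_remediation": {"rate_limit": "10/h"},
}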
src/datacenter_docs/utils/llm_client.py (new file, 296 lines added)
@@ -0,0 +1,296 @@
"""
Generic LLM Client using OpenAI-compatible API

This client works with:
- OpenAI
- Anthropic (via OpenAI-compatible endpoint)
- LLMStudio
- Open-WebUI
- Ollama
- LocalAI
- Any other OpenAI-compatible provider
"""

import logging
from typing import Any, Dict, List, Optional

from openai import AsyncOpenAI

from .config import get_settings

logger = logging.getLogger(__name__)


class LLMClient:
    """
    Generic LLM client using OpenAI-compatible API standard.

    This allows switching between different LLM providers without code changes,
    just by updating configuration (base_url, api_key, model).

    Examples:
        # OpenAI
        LLM_BASE_URL=https://api.openai.com/v1
        LLM_MODEL=gpt-4-turbo-preview

        # Anthropic (via OpenAI-compatible endpoint)
        LLM_BASE_URL=https://api.anthropic.com/v1
        LLM_MODEL=claude-sonnet-4-20250514

        # LLMStudio
        LLM_BASE_URL=http://localhost:1234/v1
        LLM_MODEL=local-model

        # Open-WebUI
        LLM_BASE_URL=http://localhost:8080/v1
        LLM_MODEL=llama3
    """

    def __init__(
        self,
        base_url: Optional[str] = None,
        api_key: Optional[str] = None,
        model: Optional[str] = None,
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
    ):
        """
        Initialize LLM client with OpenAI-compatible API.

        Args:
            base_url: Base URL of the API endpoint (e.g., https://api.openai.com/v1)
            api_key: API key for authentication
            model: Model name to use (e.g., gpt-4, claude-sonnet-4, llama3)
            temperature: Sampling temperature (0.0-1.0)
            max_tokens: Maximum tokens to generate
        """
        settings = get_settings()

        # Use provided values or fall back to settings
        self.base_url = base_url or settings.LLM_BASE_URL
        self.api_key = api_key or settings.LLM_API_KEY
        self.model = model or settings.LLM_MODEL
        self.temperature = temperature if temperature is not None else settings.LLM_TEMPERATURE
        self.max_tokens = max_tokens or settings.LLM_MAX_TOKENS

        # Initialize AsyncOpenAI client
        self.client = AsyncOpenAI(base_url=self.base_url, api_key=self.api_key)

        logger.info(
            f"Initialized LLM client: base_url={self.base_url}, model={self.model}"
        )

    async def chat_completion(
        self,
        messages: List[Dict[str, str]],
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
        stream: bool = False,
        **kwargs: Any,
    ) -> Dict[str, Any]:
        """
        Generate chat completion using OpenAI-compatible API.

        Args:
            messages: List of messages [{"role": "user", "content": "..."}]
            temperature: Override default temperature
            max_tokens: Override default max_tokens
            stream: Enable streaming response
            **kwargs: Additional parameters for the API

        Returns:
            Response with generated text and metadata
        """
        try:
            response = await self.client.chat.completions.create(
                model=self.model,
                messages=messages,  # type: ignore[arg-type]
                temperature=temperature or self.temperature,
                max_tokens=max_tokens or self.max_tokens,
                stream=stream,
                **kwargs,
            )

            if stream:
                # Return generator for streaming
                return {"stream": response}  # type: ignore[dict-item]

            # Extract text from first choice
            message = response.choices[0].message
            content = message.content or ""

            return {
                "content": content,
                "model": response.model,
                "usage": {
                    "prompt_tokens": response.usage.prompt_tokens if response.usage else 0,
                    "completion_tokens": (
                        response.usage.completion_tokens if response.usage else 0
                    ),
                    "total_tokens": response.usage.total_tokens if response.usage else 0,
                },
                "finish_reason": response.choices[0].finish_reason,
            }

        except Exception as e:
            logger.error(f"LLM API call failed: {e}")
            raise

    async def generate_with_system(
        self,
        system_prompt: str,
        user_prompt: str,
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
        **kwargs: Any,
    ) -> str:
        """
        Generate completion with system and user prompts.

        Args:
            system_prompt: System instruction
            user_prompt: User message
            temperature: Override default temperature
            max_tokens: Override default max_tokens
            **kwargs: Additional API parameters

        Returns:
            Generated text content
        """
        messages = [
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": user_prompt},
        ]

        response = await self.chat_completion(
            messages=messages, temperature=temperature, max_tokens=max_tokens, **kwargs
        )

        return response["content"]

    async def generate_json(
        self,
        messages: List[Dict[str, str]],
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
    ) -> Dict[str, Any]:
        """
        Generate JSON response (if provider supports response_format).

        Args:
            messages: List of messages
            temperature: Override default temperature
            max_tokens: Override default max_tokens

        Returns:
            Parsed JSON response
        """
        import json

        try:
            # Try with response_format if supported
            response = await self.chat_completion(
                messages=messages,
                temperature=temperature or 0.3,  # Lower temp for structured output
                max_tokens=max_tokens,
                response_format={"type": "json_object"},
            )
        except Exception as e:
            logger.warning(f"response_format not supported, using plain completion: {e}")
            # Fallback to plain completion
            response = await self.chat_completion(
                messages=messages,
                temperature=temperature or 0.3,
                max_tokens=max_tokens,
            )

        # Parse JSON from content
        content = response["content"]
        try:
            return json.loads(content)
        except json.JSONDecodeError as e:
            logger.error(f"Failed to parse JSON response: {e}")
            logger.debug(f"Raw content: {content}")
            raise ValueError(f"LLM did not return valid JSON: {content[:200]}...")

    async def generate_stream(
        self,
        messages: List[Dict[str, str]],
        temperature: Optional[float] = None,
        max_tokens: Optional[int] = None,
    ) -> Any:
        """
        Generate streaming completion.

        Args:
            messages: List of messages
            temperature: Override default temperature
            max_tokens: Override default max_tokens

        Yields:
            Text chunks as they arrive
        """
        response = await self.chat_completion(
            messages=messages,
            temperature=temperature,
            max_tokens=max_tokens,
            stream=True,
        )

        async for chunk in response["stream"]:  # type: ignore[union-attr]
            if chunk.choices and chunk.choices[0].delta.content:
                yield chunk.choices[0].delta.content


# Singleton instance
_llm_client: Optional[LLMClient] = None


def get_llm_client() -> LLMClient:
    """Get or create singleton LLM client instance."""
    global _llm_client
    if _llm_client is None:
        _llm_client = LLMClient()
    return _llm_client


# Example usage
async def example_usage() -> None:
    """Example of using the LLM client"""

    client = get_llm_client()

    # Simple completion
    messages = [
        {"role": "system", "content": "You are a helpful datacenter expert."},
        {"role": "user", "content": "Explain what a VLAN is in 2 sentences."},
    ]

    response = await client.chat_completion(messages)
    print(f"Response: {response['content']}")
    print(f"Tokens used: {response['usage']['total_tokens']}")

    # JSON response
    json_messages = [
        {
            "role": "user",
            "content": "List 3 common datacenter problems in JSON: {\"problems\": [...]}",
        }
    ]

    json_response = await client.generate_json(json_messages)
    print(f"JSON: {json_response}")

    # Streaming
    stream_messages = [{"role": "user", "content": "Count from 1 to 5"}]

    print("Streaming: ", end="")
    async for chunk in client.generate_stream(stream_messages):
        print(chunk, end="", flush=True)
    print()


if __name__ == "__main__":
    import asyncio

    asyncio.run(example_usage())
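The module's own example_usage() exercises chat_completion, generate_json, and generate_stream, but not generate_with_system. A minimal sketch of that helper in use follows; it assumes the package is importable as datacenter_docs (matching the src/ layout), and the prompt text and hostname are purely illustrative.

import asyncio

from datacenter_docs.utils.llm_client import get_llm_client


async def summarize_host(hostname: str) -> str:
    """Sketch: drive generate_with_system(), which example_usage() does not cover."""
    client = get_llm_client()
    # The system prompt sets the role; the user prompt carries the actual request.
    return await client.generate_with_system(
        system_prompt="You are a datacenter documentation assistant.",
        user_prompt=f"Write a one-paragraph summary of the ESXi host {hostname}.",
        temperature=0.2,
    )


if __name__ == "__main__":
    print(asyncio.run(summarize_host("esx01.example.local")))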