Add Helm chart, Docs, and Config conversion script
Some checks failed:
- Build / Code Quality Checks (push): successful in 15m11s
- Build / Build & Push Docker Images (worker) (push): successful in 13m44s
- Build / Build & Push Docker Images (frontend) (push): successful in 5m8s
- Build / Build & Push Docker Images (chat) (push): failing after 30m7s
- Build / Build & Push Docker Images (api) (push): failing after 21m39s

Commit 2719cfff59 (parent ba9900bd57), 2025-10-22 14:35:21 +02:00
31 changed files with 4436 additions and 0 deletions

deploy/helm/README.md Normal file
# Helm Deployment
This directory contains Helm charts for deploying the Datacenter Docs & Remediation Engine on Kubernetes.
## Contents
- `datacenter-docs/` - Main Helm chart for the application
- `test-chart.sh` - Automated testing script for chart validation
## Quick Start
### Prerequisites
- Kubernetes cluster (1.19+)
- Helm 3.0+
- kubectl configured to access your cluster
### Development/Testing Installation
```bash
# Install with development settings (minimal resources, local testing)
helm install dev ./datacenter-docs -f ./datacenter-docs/values-development.yaml
# Access the application
kubectl port-forward svc/dev-datacenter-docs-api 8000:8000
kubectl port-forward svc/dev-datacenter-docs-frontend 8080:80
# View API docs: http://localhost:8000/api/docs
# View frontend: http://localhost:8080
```
### Production Installation
```bash
# Copy and customize production values
cp datacenter-docs/values-production.yaml my-production-values.yaml
# Edit my-production-values.yaml:
# - Change all secrets (llmApiKey, apiSecretKey, mongodbPassword)
# - Update ingress hosts
# - Adjust resource limits
# - Configure LLM provider
# - Review auto-remediation settings
# Install
helm install prod ./datacenter-docs -f my-production-values.yaml
# Verify deployment
helm list
kubectl get pods
kubectl get ingress
```
## Chart Structure
```
datacenter-docs/
├── Chart.yaml                      # Chart metadata
├── values.yaml                     # Default configuration
├── values-development.yaml         # Development settings
├── values-production.yaml          # Production example
├── README.md                       # Detailed chart documentation
├── .helmignore                     # Files to exclude from package
└── templates/
    ├── NOTES.txt                   # Post-install instructions
    ├── _helpers.tpl                # Template helpers
    ├── configmap.yaml              # Application configuration
    ├── secrets.yaml                # Sensitive data
    ├── serviceaccount.yaml         # Service account
    ├── mongodb-statefulset.yaml    # MongoDB StatefulSet
    ├── mongodb-service.yaml        # MongoDB Service
    ├── redis-deployment.yaml       # Redis Deployment
    ├── redis-service.yaml          # Redis Service
    ├── api-deployment.yaml         # API Deployment
    ├── api-service.yaml            # API Service
    ├── api-hpa.yaml                # API autoscaling
    ├── chat-deployment.yaml        # Chat Deployment
    ├── chat-service.yaml           # Chat Service
    ├── worker-deployment.yaml      # Worker Deployment
    ├── worker-hpa.yaml             # Worker autoscaling
    ├── frontend-deployment.yaml    # Frontend Deployment
    ├── frontend-service.yaml       # Frontend Service
    └── ingress.yaml                # Ingress configuration
```
## Testing the Chart
Run the automated test script:
```bash
cd deploy/helm
./test-chart.sh
```
This will:
1. Lint the chart
2. Render templates with different value files
3. Perform dry-run installation
4. Validate Kubernetes manifests
5. Package the chart
## Common Operations
### Upgrade Release
```bash
# Upgrade with new values
helm upgrade prod ./datacenter-docs -f my-production-values.yaml
# Upgrade with specific parameter changes
helm upgrade prod ./datacenter-docs --set api.replicaCount=10 --reuse-values
```
### Check Status
```bash
# List releases
helm list
# Get release status
helm status prod
# Get current values
helm get values prod
# Get all manifests
helm get manifest prod
```
### Rollback
```bash
# View revision history
helm history prod
# Rollback to previous version
helm rollback prod
# Rollback to specific revision
helm rollback prod 2
```
### Uninstall
```bash
# Uninstall release
helm uninstall prod
# Also delete PVCs (if using persistent storage)
kubectl delete pvc -l app.kubernetes.io/instance=prod
```
## Configuration Files
### values.yaml
Default configuration with reasonable settings for development/testing.
### values-development.yaml
Optimized for local development:
- Minimal resource requests/limits
- Single replicas
- Persistence disabled
- Dry-run mode for auto-remediation
- Debug logging
- Ingress disabled (use port-forward)
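
The bullets above translate roughly into an overlay like the following (illustrative only — the real `values-development.yaml` ships with the chart and is authoritative):

```yaml
api:
  replicaCount: 1
  autoscaling:
    enabled: false
  resources:
    requests:
      cpu: 100m
      memory: 256Mi
mongodb:
  persistence:
    enabled: false
config:
  logLevel: DEBUG
  autoRemediation:
    dryRun: true
ingress:
  enabled: false
```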
### values-production.yaml
Example production configuration:
- Higher resource limits
- Multiple replicas
- Autoscaling enabled
- Persistence enabled with larger volumes
- TLS/SSL enabled
- Production-grade security settings
- All components enabled
**Important**: Copy and customize this file for your environment. Never use default secrets!
## Available Components
| Component | Purpose | Default Enabled |
|-----------|---------|-----------------|
| MongoDB | Document database | Yes |
| Redis | Cache & task queue | Yes |
| API | REST API service | Yes |
| Chat | WebSocket server | No (not implemented) |
| Worker | Celery background tasks | No (not implemented) |
| Frontend | Web UI | Yes |
Enable/disable components in your values file:
```yaml
mongodb:
  enabled: true
redis:
  enabled: true
api:
  enabled: true
chat:
  enabled: false  # Set to true when implemented
worker:
  enabled: false  # Set to true when implemented
frontend:
  enabled: true
```
## Architecture
The chart deploys a complete microservices architecture:
```
                 ┌─────────────┐
                 │   Ingress   │
                 └──────┬──────┘
          ┌─────────────┼─────────────┐
          │             │             │
     ┌────▼────┐   ┌────▼────┐   ┌────▼────┐
     │Frontend │   │   API   │   │  Chat   │
     └─────────┘   └────┬────┘   └────┬────┘
                        │             │
          ┌─────────────┼─────────────┘
          │             │
     ┌────▼────┐   ┌────▼────┐
     │  Redis  │   │ MongoDB │
     └─────────┘   └─────────┘
            ┌────┴────┐
            │ Worker  │
            └─────────┘
```
## LLM Provider Configuration
The chart supports multiple LLM providers. Configure in your values file:
### OpenAI
```yaml
config:
  llm:
    baseUrl: "https://api.openai.com/v1"
    model: "gpt-4-turbo-preview"
secrets:
  llmApiKey: "sk-your-openai-key"
```
### Anthropic Claude
```yaml
config:
  llm:
    baseUrl: "https://api.anthropic.com/v1"
    model: "claude-3-opus-20240229"
secrets:
  llmApiKey: "sk-ant-your-anthropic-key"
```
### Local (Ollama)
```yaml
config:
  llm:
    baseUrl: "http://ollama-service:11434/v1"
    model: "llama2"
secrets:
  llmApiKey: "not-needed"
```
### Azure OpenAI
```yaml
config:
  llm:
    baseUrl: "https://your-resource.openai.azure.com"
    model: "gpt-4"
secrets:
  llmApiKey: "your-azure-key"
```
## Security Best Practices
For production deployments:
1. **Change all default secrets**
```bash
helm install prod ./datacenter-docs \
--set secrets.llmApiKey="your-actual-key" \
--set secrets.apiSecretKey="$(openssl rand -base64 32)" \
--set secrets.mongodbPassword="$(openssl rand -base64 32)"
```
2. **Use external secret management**
- HashiCorp Vault
- AWS Secrets Manager
- Azure Key Vault
- Kubernetes External Secrets Operator
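
With the External Secrets Operator, for example, the LLM key could be synced from an external store into the Kubernetes Secret the workloads consume. A sketch (the store name `vault-backend`, the Secret name, and the remote key path are placeholders, not part of this chart):

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: datacenter-docs-llm-key
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: vault-backend           # placeholder SecretStore / ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: datacenter-docs-secrets # Secret the workloads read
  data:
    - secretKey: llm-api-key
      remoteRef:
        key: datacenter-docs/llm  # path in the external store
```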
3. **Enable TLS/SSL**
```yaml
ingress:
  annotations:
    cert-manager.io/cluster-issuer: "letsencrypt-prod"
  tls:
    - secretName: datacenter-docs-tls
      hosts:
        - datacenter-docs.yourdomain.com
```
4. **Review auto-remediation settings**
```yaml
config:
  autoRemediation:
    enabled: true
    minReliabilityScore: 95.0  # High threshold for production
    dryRun: true               # Test first, then set to false
```
5. **Implement network policies**
6. **Enable resource quotas**
7. **Regular security scanning**
## Monitoring and Observability
The chart is designed to integrate with:
- **Prometheus**: Metrics collection
- **Grafana**: Visualization
- **Jaeger**: Distributed tracing
- **ELK/Loki**: Log aggregation
Add annotations to enable monitoring:
```yaml
podAnnotations:
  prometheus.io/scrape: "true"
  prometheus.io/port: "8000"
  prometheus.io/path: "/metrics"
```
## Troubleshooting
### Pods not starting
```bash
# Check pod status
kubectl get pods -l app.kubernetes.io/instance=prod
# Describe pod for events
kubectl describe pod <pod-name>
# View logs
kubectl logs <pod-name> -f
```
### Storage issues
```bash
# Check PVC status
kubectl get pvc
# Check storage class
kubectl get storageclass
# Manually create PVC if needed
kubectl apply -f - <<EOF
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: mongodb-data
spec:
  accessModes:
    - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi
EOF
```
### Ingress not working
```bash
# Check ingress status
kubectl get ingress
kubectl describe ingress prod-datacenter-docs
# Check ingress controller logs
kubectl logs -n ingress-nginx -l app.kubernetes.io/component=controller -f
```
## Support
For detailed documentation, see:
- Chart README: `datacenter-docs/README.md`
- Main project: `../../README.md`
- Issues: https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine/issues
## License
See the main repository for license information.

deploy/helm/datacenter-docs/.helmignore Normal file
# Patterns to ignore when building packages.
# This supports shell glob matching, relative path matching, and
# negation (prefixed with !). Only one pattern per line.
.DS_Store
# Common VCS dirs
.git/
.gitignore
.bzr/
.bzrignore
.hg/
.hgignore
.svn/
# Common backup files
*.swp
*.bak
*.tmp
*.orig
*~
# Various IDEs
.project
.idea/
*.tmproj
.vscode/
# CI/CD
.github/
.gitlab-ci.yml
.gitea/
# Documentation
README.md
NOTES.md
# Development files
*.log

deploy/helm/datacenter-docs/Chart.yaml Normal file
apiVersion: v2
name: datacenter-docs
description: A Helm chart for LLM Automation - Docs & Remediation Engine
type: application
version: 0.1.0
appVersion: "0.1.0"
keywords:
- datacenter
- documentation
- ai
- automation
- remediation
- llm
maintainers:
- name: Datacenter Docs Team
home: https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine
sources:
- https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine
dependencies: []

deploy/helm/datacenter-docs/README.md Normal file
# Datacenter Docs & Remediation Engine - Helm Chart
Helm chart for deploying the LLM Automation - Docs & Remediation Engine on Kubernetes.
## Overview
This chart deploys a complete stack including:
- **MongoDB**: Document database for storing tickets, documentation, and metadata
- **Redis**: Cache and task queue backend
- **API Service**: FastAPI REST API with auto-remediation capabilities
- **Chat Service**: WebSocket server for real-time documentation queries (optional, not yet implemented)
- **Worker Service**: Celery workers for background tasks (optional, not yet implemented)
- **Frontend**: React-based web interface
## Prerequisites
- Kubernetes 1.19+
- Helm 3.0+
- PersistentVolume provisioner support in the underlying infrastructure (for MongoDB persistence)
- Ingress controller (optional, for external access)
## Installation
### Quick Start
```bash
# Add the chart repository (if published)
helm repo add datacenter-docs https://your-repo-url
helm repo update
# Install with default values
helm install my-datacenter-docs datacenter-docs/datacenter-docs
# Or install from local directory
helm install my-datacenter-docs ./datacenter-docs
```
### Production Installation
For production, create a custom `values.yaml`:
```bash
# Copy and edit the values file
cp values.yaml my-values.yaml
# Edit my-values.yaml with your configuration
# At minimum, change:
# - secrets.llmApiKey
# - secrets.apiSecretKey
# - ingress.hosts
# Install with custom values
helm install my-datacenter-docs ./datacenter-docs -f my-values.yaml
```
### Install with Specific Configuration
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set secrets.llmApiKey="sk-your-openai-api-key" \
--set secrets.apiSecretKey="your-strong-secret-key" \
--set ingress.hosts[0].host="datacenter-docs.yourdomain.com" \
--set mongodb.persistence.size="50Gi"
```
## Configuration
### Key Configuration Parameters
#### Global Settings
| Parameter | Description | Default |
|-----------|-------------|---------|
| `global.imagePullPolicy` | Image pull policy | `IfNotPresent` |
| `global.storageClass` | Storage class for PVCs | `""` |
#### MongoDB
| Parameter | Description | Default |
|-----------|-------------|---------|
| `mongodb.enabled` | Enable MongoDB | `true` |
| `mongodb.image.repository` | MongoDB image | `mongo` |
| `mongodb.image.tag` | MongoDB version | `7` |
| `mongodb.auth.rootUsername` | Root username | `admin` |
| `mongodb.auth.rootPassword` | Root password | `admin123` |
| `mongodb.persistence.enabled` | Enable persistence | `true` |
| `mongodb.persistence.size` | Volume size | `10Gi` |
| `mongodb.resources.requests.memory` | Memory request | `512Mi` |
| `mongodb.resources.limits.memory` | Memory limit | `2Gi` |
#### Redis
| Parameter | Description | Default |
|-----------|-------------|---------|
| `redis.enabled` | Enable Redis | `true` |
| `redis.image.repository` | Redis image | `redis` |
| `redis.image.tag` | Redis version | `7-alpine` |
| `redis.resources.requests.memory` | Memory request | `128Mi` |
| `redis.resources.limits.memory` | Memory limit | `512Mi` |
#### API Service
| Parameter | Description | Default |
|-----------|-------------|---------|
| `api.enabled` | Enable API service | `true` |
| `api.replicaCount` | Number of replicas | `2` |
| `api.image.repository` | API image repository | `datacenter-docs-api` |
| `api.image.tag` | API image tag | `latest` |
| `api.service.port` | Service port | `8000` |
| `api.autoscaling.enabled` | Enable HPA | `true` |
| `api.autoscaling.minReplicas` | Min replicas | `2` |
| `api.autoscaling.maxReplicas` | Max replicas | `10` |
| `api.resources.requests.memory` | Memory request | `512Mi` |
| `api.resources.limits.memory` | Memory limit | `2Gi` |
#### Worker Service
| Parameter | Description | Default |
|-----------|-------------|---------|
| `worker.enabled` | Enable worker service | `false` |
| `worker.replicaCount` | Number of replicas | `3` |
| `worker.autoscaling.enabled` | Enable HPA | `true` |
| `worker.autoscaling.minReplicas` | Min replicas | `1` |
| `worker.autoscaling.maxReplicas` | Max replicas | `10` |
#### Chat Service
| Parameter | Description | Default |
|-----------|-------------|---------|
| `chat.enabled` | Enable chat service | `false` |
| `chat.replicaCount` | Number of replicas | `1` |
| `chat.service.port` | Service port | `8001` |
#### Frontend
| Parameter | Description | Default |
|-----------|-------------|---------|
| `frontend.enabled` | Enable frontend | `true` |
| `frontend.replicaCount` | Number of replicas | `2` |
| `frontend.service.port` | Service port | `80` |
#### Ingress
| Parameter | Description | Default |
|-----------|-------------|---------|
| `ingress.enabled` | Enable ingress | `true` |
| `ingress.className` | Ingress class | `nginx` |
| `ingress.hosts[0].host` | Hostname | `datacenter-docs.example.com` |
| `ingress.tls[0].secretName` | TLS secret name | `datacenter-docs-tls` |
#### Application Configuration
| Parameter | Description | Default |
|-----------|-------------|---------|
| `config.llm.baseUrl` | LLM provider URL | `https://api.openai.com/v1` |
| `config.llm.model` | LLM model | `gpt-4-turbo-preview` |
| `config.autoRemediation.enabled` | Enable auto-remediation | `true` |
| `config.autoRemediation.minReliabilityScore` | Min reliability score | `85.0` |
| `config.autoRemediation.dryRun` | Dry run mode | `false` |
| `config.logLevel` | Log level | `INFO` |
#### Secrets
| Parameter | Description | Default |
|-----------|-------------|---------|
| `secrets.llmApiKey` | LLM API key | `sk-your-openai-api-key-here` |
| `secrets.apiSecretKey` | API secret key | `your-secret-key-here-change-in-production` |
**IMPORTANT**: Change these secrets in production!
## Usage Examples
### Enable All Services (including chat and worker)
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set chat.enabled=true \
--set worker.enabled=true
```
### Disable Auto-Remediation
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set config.autoRemediation.enabled=false
```
### Use Different LLM Provider (e.g., Anthropic Claude)
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set config.llm.baseUrl="https://api.anthropic.com/v1" \
--set config.llm.model="claude-3-opus-20240229" \
--set secrets.llmApiKey="sk-ant-your-anthropic-key"
```
### Use Local LLM (e.g., Ollama)
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set config.llm.baseUrl="http://ollama-service:11434/v1" \
--set config.llm.model="llama2" \
--set secrets.llmApiKey="not-needed"
```
### Scale MongoDB Storage
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set mongodb.persistence.size="100Gi"
```
### Disable Ingress (use port-forward instead)
```bash
helm install my-datacenter-docs ./datacenter-docs \
--set ingress.enabled=false
```
### Production Configuration with External MongoDB
```yaml
# production-values.yaml
mongodb:
  enabled: false
config:
  mongodbUrl: "mongodb://user:pass@external-mongodb:27017/datacenter_docs?authSource=admin"
api:
  replicaCount: 5
  autoscaling:
    maxReplicas: 20
secrets:
  llmApiKey: "sk-your-production-api-key"
  apiSecretKey: "your-production-secret-key"
ingress:
  hosts:
    - host: "datacenter-docs.prod.yourdomain.com"
      paths:
        - path: /
          pathType: Prefix
          service: frontend
        - path: /api
          pathType: Prefix
          service: api
```
```bash
helm install prod-datacenter-docs ./datacenter-docs -f production-values.yaml
```
## Upgrading
```bash
# Upgrade with new values
helm upgrade my-datacenter-docs ./datacenter-docs -f my-values.yaml
# Upgrade specific parameters
helm upgrade my-datacenter-docs ./datacenter-docs \
--set api.image.tag="v1.2.0" \
--reuse-values
```
## Uninstallation
```bash
helm uninstall my-datacenter-docs
```
**Note**: This will delete all resources except PersistentVolumeClaims (PVCs) for MongoDB. To also delete PVCs:
```bash
kubectl delete pvc -l app.kubernetes.io/instance=my-datacenter-docs
```
## Monitoring and Troubleshooting
### Check Pod Status
```bash
kubectl get pods -l app.kubernetes.io/instance=my-datacenter-docs
```
### View Logs
```bash
# API logs
kubectl logs -l app.kubernetes.io/component=api -f
# Worker logs
kubectl logs -l app.kubernetes.io/component=worker -f
# MongoDB logs
kubectl logs -l app.kubernetes.io/component=database -f
```
### Access Services Locally
```bash
# API
kubectl port-forward svc/my-datacenter-docs-api 8000:8000
# Frontend
kubectl port-forward svc/my-datacenter-docs-frontend 8080:80
# MongoDB (for debugging)
kubectl port-forward svc/my-datacenter-docs-mongodb 27017:27017
```
### Common Issues
#### Pods Stuck in Pending
Check if PVCs are bound:
```bash
kubectl get pvc
```
If storage class is missing, set it:
```bash
helm upgrade my-datacenter-docs ./datacenter-docs \
--set mongodb.persistence.storageClass="standard" \
--reuse-values
```
#### API Pods Crash Loop
Check logs:
```bash
kubectl logs -l app.kubernetes.io/component=api --tail=100
```
Common causes:
- MongoDB not ready (wait for init containers)
- Invalid LLM API key
- Missing environment variables
#### Cannot Access via Ingress
Check ingress status:
```bash
kubectl get ingress
kubectl describe ingress my-datacenter-docs
```
Ensure:
- Ingress controller is installed
- DNS points to ingress IP
- TLS certificate is valid (if using HTTPS)
## Security Considerations
### Production Checklist
- [ ] Change `secrets.llmApiKey` to a valid API key
- [ ] Change `secrets.apiSecretKey` to a strong random key
- [ ] Change MongoDB credentials (`mongodb.auth.rootPassword`)
- [ ] Enable TLS/SSL on ingress
- [ ] Review RBAC policies
- [ ] Use external secret management (e.g., HashiCorp Vault, AWS Secrets Manager)
- [ ] Enable network policies
- [ ] Set resource limits on all pods
- [ ] Enable pod security policies
- [ ] Review auto-remediation settings
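
For the resource-limits item, a values-file sketch (the numbers below are illustrative placeholders, not sizing recommendations):

```yaml
api:
  resources:
    requests:
      cpu: 250m
      memory: 512Mi
    limits:
      cpu: "1"
      memory: 2Gi
```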
### Using External Secrets
Instead of storing secrets in values.yaml, use Kubernetes secrets:
```bash
# Create secret
kubectl create secret generic datacenter-docs-secrets \
--from-literal=llm-api-key="sk-your-key" \
--from-literal=api-secret-key="your-secret"
# Modify templates to use existing secret
# (requires chart customization)
```
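
The customization mentioned above would replace the chart's inline secret rendering with a `secretKeyRef` pointing at the pre-created Secret, roughly like this (a sketch — the environment variable name and the chart's actual env wiring may differ):

```yaml
env:
  - name: LLM_API_KEY
    valueFrom:
      secretKeyRef:
        name: datacenter-docs-secrets  # the Secret created above
        key: llm-api-key
```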
## Development
### Validating the Chart
```bash
# Lint the chart
helm lint ./datacenter-docs
# Dry run
helm install my-test ./datacenter-docs --dry-run --debug
# Template rendering
helm template my-test ./datacenter-docs > rendered.yaml
```
### Testing Locally
```bash
# Create kind cluster
kind create cluster
# Install chart
helm install test ./datacenter-docs \
--set ingress.enabled=false \
--set api.autoscaling.enabled=false \
--set mongodb.persistence.enabled=false
# Test
kubectl port-forward svc/test-datacenter-docs-api 8000:8000
curl http://localhost:8000/health
```
## Support
For issues and questions:
- Issues: https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine/issues
- Documentation: https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine
## License
See the main repository for license information.

deploy/helm/datacenter-docs/templates/NOTES.txt Normal file
█████████████████████████████████████████████████████████████████████████████
█ █
█ Datacenter Docs & Remediation Engine - Successfully Deployed! █
█ █
█████████████████████████████████████████████████████████████████████████████
Thank you for installing {{ .Chart.Name }}.
Your release is named {{ .Release.Name }}.
Release namespace: {{ .Release.Namespace }}
==============================================================================
📦 INSTALLED COMPONENTS:
==============================================================================
{{- if .Values.mongodb.enabled }}
✓ MongoDB (Database)
{{- end }}
{{- if .Values.redis.enabled }}
✓ Redis (Cache & Task Queue)
{{- end }}
{{- if .Values.api.enabled }}
✓ API Service
{{- end }}
{{- if .Values.chat.enabled }}
✓ Chat Service (WebSocket)
{{- end }}
{{- if .Values.worker.enabled }}
✓ Celery Worker (Background Tasks)
{{- end }}
{{- if .Values.frontend.enabled }}
✓ Frontend (Web UI)
{{- end }}
==============================================================================
🔍 CHECK DEPLOYMENT STATUS:
==============================================================================
kubectl get pods -n {{ .Release.Namespace }} -l app.kubernetes.io/instance={{ .Release.Name }}
kubectl get services -n {{ .Release.Namespace }} -l app.kubernetes.io/instance={{ .Release.Name }}
==============================================================================
🌐 ACCESS YOUR APPLICATION:
==============================================================================
{{- if .Values.ingress.enabled }}
{{- range $host := .Values.ingress.hosts }}
{{ if $.Values.ingress.tls }}https{{ else }}http{{ end }}://{{ $host.host }}
{{- end }}
{{- else if .Values.frontend.enabled }}
To access the frontend, run:
kubectl port-forward -n {{ .Release.Namespace }} svc/{{ include "datacenter-docs.frontend.fullname" . }} 8080:{{ .Values.frontend.service.port }}
Then visit: http://localhost:8080
{{- end }}
{{- if .Values.api.enabled }}
To access the API directly, run:
kubectl port-forward -n {{ .Release.Namespace }} svc/{{ include "datacenter-docs.api.fullname" . }} 8000:{{ .Values.api.service.port }}
Then visit: http://localhost:8000/api/docs (OpenAPI documentation)
{{- end }}
==============================================================================
📊 VIEW LOGS:
==============================================================================
API logs:
kubectl logs -n {{ .Release.Namespace }} -l app.kubernetes.io/component=api -f
{{- if .Values.worker.enabled }}
Worker logs:
kubectl logs -n {{ .Release.Namespace }} -l app.kubernetes.io/component=worker -f
{{- end }}
{{- if .Values.chat.enabled }}
Chat logs:
kubectl logs -n {{ .Release.Namespace }} -l app.kubernetes.io/component=chat -f
{{- end }}
==============================================================================
🔐 SECURITY NOTICE:
==============================================================================
{{ if eq .Values.secrets.llmApiKey "sk-your-openai-api-key-here" }}
⚠️ WARNING: You are using the default LLM API key!
Update this immediately in production:
helm upgrade {{ .Release.Name }} datacenter-docs \
--set secrets.llmApiKey="your-actual-api-key" \
--reuse-values
{{ end }}
{{ if eq .Values.secrets.apiSecretKey "your-secret-key-here-change-in-production" }}
⚠️ WARNING: You are using the default API secret key!
Update this immediately in production:
helm upgrade {{ .Release.Name }} datacenter-docs \
--set secrets.apiSecretKey="your-actual-secret-key" \
--reuse-values
{{ end }}
For production deployments:
- Use strong, unique secrets
- Enable TLS/SSL for all services
- Review security context and RBAC policies
- Consider using external secret management (e.g., HashiCorp Vault)
==============================================================================
📖 USEFUL COMMANDS:
==============================================================================
Upgrade release:
helm upgrade {{ .Release.Name }} datacenter-docs --values custom-values.yaml
Get values:
helm get values {{ .Release.Name }}
View all resources:
helm get manifest {{ .Release.Name }}
Uninstall:
helm uninstall {{ .Release.Name }}
==============================================================================
🛠️ CONFIGURATION:
==============================================================================
{{- if .Values.config.autoRemediation.enabled }}
✓ Auto-remediation: ENABLED
- Minimum reliability score: {{ .Values.config.autoRemediation.minReliabilityScore }}%
- Approval threshold: {{ .Values.config.autoRemediation.requireApprovalThreshold }}%
{{- if .Values.config.autoRemediation.dryRun }}
- Mode: DRY RUN (no actual changes will be made)
{{- else }}
- Mode: ACTIVE (changes will be applied)
{{- end }}
{{- else }}
⚠️ Auto-remediation: DISABLED
{{- end }}
LLM Provider: {{ .Values.config.llm.baseUrl }}
Model: {{ .Values.config.llm.model }}
==============================================================================
📚 DOCUMENTATION & SUPPORT:
==============================================================================
For more information, visit:
https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine
Report issues:
https://git.commandware.com/it-ops/llm-automation-docs-and-remediation-engine/issues
==============================================================================
Happy automating! 🚀

deploy/helm/datacenter-docs/templates/_helpers.tpl Normal file
{{/*
Expand the name of the chart.
*/}}
{{- define "datacenter-docs.name" -}}
{{- default .Chart.Name .Values.nameOverride | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Create a default fully qualified app name.
*/}}
{{- define "datacenter-docs.fullname" -}}
{{- if .Values.fullnameOverride }}
{{- .Values.fullnameOverride | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- $name := default .Chart.Name .Values.nameOverride }}
{{- if contains $name .Release.Name }}
{{- .Release.Name | trunc 63 | trimSuffix "-" }}
{{- else }}
{{- printf "%s-%s" .Release.Name $name | trunc 63 | trimSuffix "-" }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Create chart name and version as used by the chart label.
*/}}
{{- define "datacenter-docs.chart" -}}
{{- printf "%s-%s" .Chart.Name .Chart.Version | replace "+" "_" | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Common labels
*/}}
{{- define "datacenter-docs.labels" -}}
helm.sh/chart: {{ include "datacenter-docs.chart" . }}
{{ include "datacenter-docs.selectorLabels" . }}
{{- if .Chart.AppVersion }}
app.kubernetes.io/version: {{ .Chart.AppVersion | quote }}
{{- end }}
app.kubernetes.io/managed-by: {{ .Release.Service }}
{{- end }}
{{/*
Selector labels
*/}}
{{- define "datacenter-docs.selectorLabels" -}}
app.kubernetes.io/name: {{ include "datacenter-docs.name" . }}
app.kubernetes.io/instance: {{ .Release.Name }}
{{- end }}
{{/*
Create the name of the service account to use
*/}}
{{- define "datacenter-docs.serviceAccountName" -}}
{{- if .Values.serviceAccount.create }}
{{- default (include "datacenter-docs.fullname" .) .Values.serviceAccount.name }}
{{- else }}
{{- default "default" .Values.serviceAccount.name }}
{{- end }}
{{- end }}
{{/*
MongoDB fullname
*/}}
{{- define "datacenter-docs.mongodb.fullname" -}}
{{- printf "%s-mongodb" (include "datacenter-docs.fullname" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Redis fullname
*/}}
{{- define "datacenter-docs.redis.fullname" -}}
{{- printf "%s-redis" (include "datacenter-docs.fullname" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
API fullname
*/}}
{{- define "datacenter-docs.api.fullname" -}}
{{- printf "%s-api" (include "datacenter-docs.fullname" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Chat fullname
*/}}
{{- define "datacenter-docs.chat.fullname" -}}
{{- printf "%s-chat" (include "datacenter-docs.fullname" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Worker fullname
*/}}
{{- define "datacenter-docs.worker.fullname" -}}
{{- printf "%s-worker" (include "datacenter-docs.fullname" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Frontend fullname
*/}}
{{- define "datacenter-docs.frontend.fullname" -}}
{{- printf "%s-frontend" (include "datacenter-docs.fullname" .) | trunc 63 | trimSuffix "-" }}
{{- end }}
{{/*
Component labels for MongoDB
*/}}
{{- define "datacenter-docs.mongodb.labels" -}}
{{ include "datacenter-docs.labels" . }}
app.kubernetes.io/component: database
{{- end }}
{{/*
Component labels for Redis
*/}}
{{- define "datacenter-docs.redis.labels" -}}
{{ include "datacenter-docs.labels" . }}
app.kubernetes.io/component: cache
{{- end }}
{{/*
Component labels for API
*/}}
{{- define "datacenter-docs.api.labels" -}}
{{ include "datacenter-docs.labels" . }}
app.kubernetes.io/component: api
{{- end }}
{{/*
Component labels for Chat
*/}}
{{- define "datacenter-docs.chat.labels" -}}
{{ include "datacenter-docs.labels" . }}
app.kubernetes.io/component: chat
{{- end }}
{{/*
Component labels for Worker
*/}}
{{- define "datacenter-docs.worker.labels" -}}
{{ include "datacenter-docs.labels" . }}
app.kubernetes.io/component: worker
{{- end }}
{{/*
Component labels for Frontend
*/}}
{{- define "datacenter-docs.frontend.labels" -}}
{{ include "datacenter-docs.labels" . }}
app.kubernetes.io/component: frontend
{{- end }}
{{/*
Selector labels for MongoDB
*/}}
{{- define "datacenter-docs.mongodb.selectorLabels" -}}
{{ include "datacenter-docs.selectorLabels" . }}
app.kubernetes.io/component: database
{{- end }}
{{/*
Selector labels for Redis
*/}}
{{- define "datacenter-docs.redis.selectorLabels" -}}
{{ include "datacenter-docs.selectorLabels" . }}
app.kubernetes.io/component: cache
{{- end }}
{{/*
Selector labels for API
*/}}
{{- define "datacenter-docs.api.selectorLabels" -}}
{{ include "datacenter-docs.selectorLabels" . }}
app.kubernetes.io/component: api
{{- end }}
{{/*
Selector labels for Chat
*/}}
{{- define "datacenter-docs.chat.selectorLabels" -}}
{{ include "datacenter-docs.selectorLabels" . }}
app.kubernetes.io/component: chat
{{- end }}
{{/*
Selector labels for Worker
*/}}
{{- define "datacenter-docs.worker.selectorLabels" -}}
{{ include "datacenter-docs.selectorLabels" . }}
app.kubernetes.io/component: worker
{{- end }}
{{/*
Selector labels for Frontend
*/}}
{{- define "datacenter-docs.frontend.selectorLabels" -}}
{{ include "datacenter-docs.selectorLabels" . }}
app.kubernetes.io/component: frontend
{{- end }}
{{/*
Return the proper image name
*/}}
{{- define "datacenter-docs.image" -}}
{{- $registryName := .registry -}}
{{- $repositoryName := .repository -}}
{{- $tag := .tag | toString -}}
{{- if $registryName }}
{{- printf "%s/%s:%s" $registryName $repositoryName $tag -}}
{{- else }}
{{- printf "%s:%s" $repositoryName $tag -}}
{{- end }}
{{- end }}
{{/*
Return the proper Docker Image Registry Secret Names
*/}}
{{- define "datacenter-docs.imagePullSecrets" -}}
{{- if .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- range .Values.global.imagePullSecrets }}
- name: {{ . }}
{{- end }}
{{- end }}
{{- end }}
{{/*
Return the appropriate apiVersion for HPA
*/}}
{{- define "datacenter-docs.hpa.apiVersion" -}}
{{- if semverCompare ">=1.23-0" .Capabilities.KubeVersion.GitVersion -}}
{{- print "autoscaling/v2" -}}
{{- else -}}
{{- print "autoscaling/v2beta2" -}}
{{- end -}}
{{- end -}}


@@ -0,0 +1,120 @@
{{- if .Values.api.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "datacenter-docs.api.fullname" . }}
labels:
{{- include "datacenter-docs.api.labels" . | nindent 4 }}
spec:
{{- if not .Values.api.autoscaling.enabled }}
replicas: {{ .Values.api.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "datacenter-docs.api.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "datacenter-docs.api.selectorLabels" . | nindent 8 }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "datacenter-docs.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: wait-for-mongodb
image: busybox:1.36
command:
- sh
- -c
- |
until nc -z {{ include "datacenter-docs.mongodb.fullname" . }} {{ .Values.mongodb.service.port }}; do
echo "Waiting for MongoDB..."
sleep 2
done
- name: wait-for-redis
image: busybox:1.36
command:
- sh
- -c
- |
until nc -z {{ include "datacenter-docs.redis.fullname" . }} {{ .Values.redis.service.port }}; do
echo "Waiting for Redis..."
sleep 2
done
containers:
- name: api
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.api.image.repository }}:{{ .Values.api.image.tag }}"
imagePullPolicy: {{ .Values.api.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.api.service.targetPort }}
protocol: TCP
env:
- name: MONGODB_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: mongodb-url
- name: REDIS_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: redis-url
- name: LLM_BASE_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: llm-base-url
- name: LLM_MODEL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: llm-model
- name: LLM_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-secrets
key: llm-api-key
- name: API_SECRET_KEY
valueFrom:
secretKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-secrets
key: api-secret-key
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: log-level
- name: PYTHONPATH
value: "/app/src"
livenessProbe:
{{- toYaml .Values.api.livenessProbe | nindent 12 }}
readinessProbe:
{{- toYaml .Values.api.readinessProbe | nindent 12 }}
resources:
{{- toYaml .Values.api.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}
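The `checksum/config` and `checksum/secret` annotations in the deployment above exist so that editing the ConfigMap or Secret changes the pod template and triggers a rolling restart. The mechanism is just a content hash, as this sketch shows:

```python
import hashlib

def config_checksum(rendered_manifest: str) -> str:
    """Hash the rendered manifest the same way the template's
    sha256sum function does."""
    return hashlib.sha256(rendered_manifest.encode()).hexdigest()

before = config_checksum("log-level: INFO")
after = config_checksum("log-level: DEBUG")
# A different hash means a different pod template, so the
# Deployment controller rolls the pods.
assert before != after
```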


@@ -0,0 +1,32 @@
{{- if and .Values.api.enabled .Values.api.autoscaling.enabled }}
apiVersion: {{ include "datacenter-docs.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "datacenter-docs.api.fullname" . }}
labels:
{{- include "datacenter-docs.api.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "datacenter-docs.api.fullname" . }}
minReplicas: {{ .Values.api.autoscaling.minReplicas }}
maxReplicas: {{ .Values.api.autoscaling.maxReplicas }}
metrics:
{{- if .Values.api.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.api.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- if .Values.api.autoscaling.targetMemoryUtilizationPercentage }}
- type: Resource
resource:
name: memory
target:
type: Utilization
averageUtilization: {{ .Values.api.autoscaling.targetMemoryUtilizationPercentage }}
{{- end }}
{{- end }}
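For reference, the HPA's core scaling rule (per the standard Kubernetes algorithm) is `desired = ceil(current * metric / target)`, clamped to the replica bounds; with a CPU target of 80% it behaves roughly like:

```python
import math

def desired_replicas(current: int, current_util: float,
                     target_util: float, min_r: int, max_r: int) -> int:
    """Approximate HPA scaling: ceil(current * actual / target),
    clamped to the min/max replica bounds."""
    desired = math.ceil(current * current_util / target_util)
    return max(min_r, min(max_r, desired))

# 2 replicas at 160% of an 80% target scale to 4:
assert desired_replicas(2, 160, 80, 2, 10) == 4
```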


@@ -0,0 +1,17 @@
{{- if .Values.api.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "datacenter-docs.api.fullname" . }}
labels:
{{- include "datacenter-docs.api.labels" . | nindent 4 }}
spec:
type: {{ .Values.api.service.type }}
ports:
- port: {{ .Values.api.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "datacenter-docs.api.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,94 @@
{{- if .Values.chat.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "datacenter-docs.chat.fullname" . }}
labels:
{{- include "datacenter-docs.chat.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.chat.replicaCount }}
selector:
matchLabels:
{{- include "datacenter-docs.chat.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "datacenter-docs.chat.selectorLabels" . | nindent 8 }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "datacenter-docs.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: wait-for-mongodb
image: busybox:1.36
command:
- sh
- -c
- |
until nc -z {{ include "datacenter-docs.mongodb.fullname" . }} {{ .Values.mongodb.service.port }}; do
echo "Waiting for MongoDB..."
sleep 2
done
containers:
- name: chat
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.chat.image.repository }}:{{ .Values.chat.image.tag }}"
imagePullPolicy: {{ .Values.chat.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.chat.service.targetPort }}
protocol: TCP
env:
- name: MONGODB_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: mongodb-url
- name: LLM_BASE_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: llm-base-url
- name: LLM_MODEL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: llm-model
- name: LLM_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-secrets
key: llm-api-key
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: log-level
- name: PYTHONPATH
value: "/app/src"
resources:
{{- toYaml .Values.chat.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,17 @@
{{- if .Values.chat.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "datacenter-docs.chat.fullname" . }}
labels:
{{- include "datacenter-docs.chat.labels" . | nindent 4 }}
spec:
type: {{ .Values.chat.service.type }}
ports:
- port: {{ .Values.chat.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "datacenter-docs.chat.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,37 @@
apiVersion: v1
kind: ConfigMap
metadata:
name: {{ include "datacenter-docs.fullname" . }}-config
labels:
{{- include "datacenter-docs.labels" . | nindent 4 }}
data:
# MongoDB connection
mongodb-url: {{ tpl .Values.config.mongodbUrl . | quote }}
# Redis connection
redis-url: {{ tpl .Values.config.redisUrl . | quote }}
# LLM configuration
llm-base-url: {{ .Values.config.llm.baseUrl | quote }}
llm-model: {{ .Values.config.llm.model | quote }}
llm-max-tokens: {{ .Values.config.llm.maxTokens | quote }}
llm-temperature: {{ .Values.config.llm.temperature | quote }}
# MCP configuration
mcp-base-url: {{ .Values.config.mcp.baseUrl | quote }}
mcp-timeout: {{ .Values.config.mcp.timeout | quote }}
# Auto-remediation configuration
auto-remediation-enabled: {{ .Values.config.autoRemediation.enabled | quote }}
auto-remediation-min-reliability: {{ .Values.config.autoRemediation.minReliabilityScore | quote }}
auto-remediation-approval-threshold: {{ .Values.config.autoRemediation.requireApprovalThreshold | quote }}
auto-remediation-max-actions-per-hour: {{ .Values.config.autoRemediation.maxActionsPerHour | quote }}
auto-remediation-dry-run: {{ .Values.config.autoRemediation.dryRun | quote }}
# Security configuration
api-key-enabled: {{ .Values.config.apiKeyEnabled | quote }}
cors-origins: {{ join "," .Values.config.corsOrigins | quote }}
# Logging configuration
log-level: {{ .Values.config.logLevel | quote }}
log-format: {{ .Values.config.logFormat | quote }}
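Note that `corsOrigins` is a YAML list in values but is flattened into a single comma-separated string by `join ","` above, so the application is expected to split it back. A sketch of the round trip (the list contents are illustrative):

```python
cors_origins = ["https://docs.example.com", "https://admin.example.com"]

# What the ConfigMap template stores:
cors_value = ",".join(cors_origins)

# What a consumer would do to recover the list:
parsed = [o.strip() for o in cors_value.split(",") if o.strip()]
assert parsed == cors_origins
```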


@@ -0,0 +1,69 @@
{{- if .Values.frontend.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "datacenter-docs.frontend.fullname" . }}
labels:
{{- include "datacenter-docs.frontend.labels" . | nindent 4 }}
spec:
replicas: {{ .Values.frontend.replicaCount }}
selector:
matchLabels:
{{- include "datacenter-docs.frontend.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "datacenter-docs.frontend.selectorLabels" . | nindent 8 }}
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "datacenter-docs.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
containers:
- name: frontend
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.frontend.image.repository }}:{{ .Values.frontend.image.tag }}"
imagePullPolicy: {{ .Values.frontend.image.pullPolicy }}
ports:
- name: http
containerPort: {{ .Values.frontend.service.targetPort }}
protocol: TCP
livenessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /
port: http
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
resources:
{{- toYaml .Values.frontend.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,17 @@
{{- if .Values.frontend.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "datacenter-docs.frontend.fullname" . }}
labels:
{{- include "datacenter-docs.frontend.labels" . | nindent 4 }}
spec:
type: {{ .Values.frontend.service.type }}
ports:
- port: {{ .Values.frontend.service.port }}
targetPort: http
protocol: TCP
name: http
selector:
{{- include "datacenter-docs.frontend.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,57 @@
{{- if .Values.ingress.enabled -}}
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
name: {{ include "datacenter-docs.fullname" . }}
labels:
{{- include "datacenter-docs.labels" . | nindent 4 }}
{{- with .Values.ingress.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
spec:
{{- if .Values.ingress.className }}
ingressClassName: {{ .Values.ingress.className }}
{{- end }}
{{- if .Values.ingress.tls }}
tls:
{{- range .Values.ingress.tls }}
- hosts:
{{- range .hosts }}
- {{ . | quote }}
{{- end }}
secretName: {{ .secretName }}
{{- end }}
{{- end }}
rules:
{{- range .Values.ingress.hosts }}
- host: {{ .host | quote }}
http:
paths:
{{- range .paths }}
- path: {{ .path }}
pathType: {{ .pathType }}
backend:
service:
{{- if eq .service "frontend" }}
name: {{ include "datacenter-docs.frontend.fullname" $ }}
{{- else if eq .service "api" }}
name: {{ include "datacenter-docs.api.fullname" $ }}
{{- else if eq .service "chat" }}
name: {{ include "datacenter-docs.chat.fullname" $ }}
{{- else }}
name: {{ .service }}
{{- end }}
port:
{{- if eq .service "frontend" }}
number: {{ $.Values.frontend.service.port }}
{{- else if eq .service "api" }}
number: {{ $.Values.api.service.port }}
{{- else if eq .service "chat" }}
number: {{ $.Values.chat.service.port }}
{{- else }}
number: 80
{{- end }}
{{- end }}
{{- end }}
{{- end }}
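The if/else chain above is a small routing table from the `service` key in values to a chart-managed Service and its port; anything unrecognized falls through to the literal service name and port 80. The same dispatch, sketched with the chart's default ports:

```python
# Default ports from values.yaml; a real deployment may override them.
KNOWN_PORTS = {"frontend": 80, "api": 8000, "chat": 8001}

def backend_port(service: str) -> int:
    """Mirror the template's fallback: unknown services get port 80."""
    return KNOWN_PORTS.get(service, 80)

assert backend_port("api") == 8000
assert backend_port("custom-svc") == 80
```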


@@ -0,0 +1,17 @@
{{- if .Values.mongodb.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "datacenter-docs.mongodb.fullname" . }}
labels:
{{- include "datacenter-docs.mongodb.labels" . | nindent 4 }}
spec:
type: {{ .Values.mongodb.service.type }}
ports:
- port: {{ .Values.mongodb.service.port }}
targetPort: mongodb
protocol: TCP
name: mongodb
selector:
{{- include "datacenter-docs.mongodb.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,113 @@
{{- if .Values.mongodb.enabled }}
apiVersion: apps/v1
kind: StatefulSet
metadata:
name: {{ include "datacenter-docs.mongodb.fullname" . }}
labels:
{{- include "datacenter-docs.mongodb.labels" . | nindent 4 }}
spec:
serviceName: {{ include "datacenter-docs.mongodb.fullname" . }}
replicas: 1
selector:
matchLabels:
{{- include "datacenter-docs.mongodb.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "datacenter-docs.mongodb.selectorLabels" . | nindent 8 }}
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "datacenter-docs.serviceAccountName" . }}
securityContext:
fsGroup: 999
runAsUser: 999
containers:
- name: mongodb
image: "{{ .Values.mongodb.image.repository }}:{{ .Values.mongodb.image.tag }}"
imagePullPolicy: {{ .Values.mongodb.image.pullPolicy }}
ports:
- name: mongodb
containerPort: 27017
protocol: TCP
env:
- name: MONGO_INITDB_ROOT_USERNAME
valueFrom:
secretKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-secrets
key: mongodb-username
- name: MONGO_INITDB_ROOT_PASSWORD
valueFrom:
secretKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-secrets
key: mongodb-password
- name: MONGO_INITDB_DATABASE
value: {{ .Values.mongodb.auth.database | quote }}
livenessProbe:
exec:
command:
- mongosh
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
exec:
command:
- mongosh
- --eval
- "db.adminCommand('ping')"
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
resources:
{{- toYaml .Values.mongodb.resources | nindent 12 }}
volumeMounts:
- name: data
mountPath: /data/db
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
      {{- if not .Values.mongodb.persistence.enabled }}
      volumes:
        - name: data
          emptyDir: {}
      {{- end }}
  {{- if .Values.mongodb.persistence.enabled }}
  volumeClaimTemplates:
    - metadata:
        name: data
        labels:
          {{- include "datacenter-docs.mongodb.labels" . | nindent 10 }}
      spec:
        accessModes:
          - ReadWriteOnce
        {{- if .Values.mongodb.persistence.storageClass }}
        {{- if (eq "-" .Values.mongodb.persistence.storageClass) }}
        storageClassName: ""
        {{- else }}
        storageClassName: {{ .Values.mongodb.persistence.storageClass | quote }}
        {{- end }}
        {{- end }}
        resources:
          requests:
            storage: {{ .Values.mongodb.persistence.size | quote }}
  {{- end }}
{{- end }}
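The `wait-for-mongodb` init containers elsewhere in the chart poll this StatefulSet's Service with `nc -z` in a loop. The equivalent retry logic, sketched in Python with the probe injected so it is testable (a simplified model, not the actual init-container code):

```python
import time

def wait_for(probe, attempts: int = 5, delay: float = 0.0) -> bool:
    """Retry a TCP-style probe until it succeeds or attempts run out,
    mirroring the busybox `until nc -z ...; do sleep 2; done` loop."""
    for _ in range(attempts):
        if probe():
            return True
        time.sleep(delay)
    return False

# A probe that succeeds on the third try:
state = {"calls": 0}
def flaky_probe():
    state["calls"] += 1
    return state["calls"] >= 3

assert wait_for(flaky_probe)
```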


@@ -0,0 +1,70 @@
{{- if .Values.redis.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "datacenter-docs.redis.fullname" . }}
labels:
{{- include "datacenter-docs.redis.labels" . | nindent 4 }}
spec:
replicas: 1
selector:
matchLabels:
{{- include "datacenter-docs.redis.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "datacenter-docs.redis.selectorLabels" . | nindent 8 }}
{{- with .Values.podAnnotations }}
annotations:
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "datacenter-docs.serviceAccountName" . }}
securityContext:
fsGroup: 999
runAsUser: 999
containers:
- name: redis
image: "{{ .Values.redis.image.repository }}:{{ .Values.redis.image.tag }}"
imagePullPolicy: {{ .Values.redis.image.pullPolicy }}
ports:
- name: redis
containerPort: 6379
protocol: TCP
livenessProbe:
exec:
command:
- redis-cli
- ping
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
exec:
command:
- redis-cli
- ping
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
resources:
{{- toYaml .Values.redis.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,17 @@
{{- if .Values.redis.enabled }}
apiVersion: v1
kind: Service
metadata:
name: {{ include "datacenter-docs.redis.fullname" . }}
labels:
{{- include "datacenter-docs.redis.labels" . | nindent 4 }}
spec:
type: {{ .Values.redis.service.type }}
ports:
- port: {{ .Values.redis.service.port }}
targetPort: redis
protocol: TCP
name: redis
selector:
{{- include "datacenter-docs.redis.selectorLabels" . | nindent 4 }}
{{- end }}


@@ -0,0 +1,17 @@
apiVersion: v1
kind: Secret
metadata:
name: {{ include "datacenter-docs.fullname" . }}-secrets
labels:
{{- include "datacenter-docs.labels" . | nindent 4 }}
type: Opaque
stringData:
# LLM API Key
llm-api-key: {{ .Values.secrets.llmApiKey | quote }}
# API Secret Key
api-secret-key: {{ .Values.secrets.apiSecretKey | quote }}
# MongoDB credentials
mongodb-username: {{ .Values.secrets.mongodbUsername | quote }}
mongodb-password: {{ .Values.secrets.mongodbPassword | quote }}
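The chart renders these values straight into a Secret, so the weak defaults must be overridden at install time. One way to generate a suitably random `apiSecretKey` (any high-entropy string works; this is just one option):

```python
import secrets

def generate_api_secret_key(num_bytes: int = 32) -> str:
    """URL-safe random key; 32 bytes gives ~256 bits of entropy."""
    return secrets.token_urlsafe(num_bytes)

key = generate_api_secret_key()
# Pass it at install time, e.g.:
#   helm install prod ./datacenter-docs --set secrets.apiSecretKey=<key>
assert len(key) >= 32
```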


@@ -0,0 +1,13 @@
{{- if .Values.serviceAccount.create -}}
apiVersion: v1
kind: ServiceAccount
metadata:
name: {{ include "datacenter-docs.serviceAccountName" . }}
labels:
{{- include "datacenter-docs.labels" . | nindent 4 }}
{{- with .Values.serviceAccount.annotations }}
annotations:
{{- toYaml . | nindent 4 }}
{{- end }}
automountServiceAccountToken: true
{{- end }}


@@ -0,0 +1,107 @@
{{- if .Values.worker.enabled }}
apiVersion: apps/v1
kind: Deployment
metadata:
name: {{ include "datacenter-docs.worker.fullname" . }}
labels:
{{- include "datacenter-docs.worker.labels" . | nindent 4 }}
spec:
{{- if not .Values.worker.autoscaling.enabled }}
replicas: {{ .Values.worker.replicaCount }}
{{- end }}
selector:
matchLabels:
{{- include "datacenter-docs.worker.selectorLabels" . | nindent 6 }}
template:
metadata:
labels:
{{- include "datacenter-docs.worker.selectorLabels" . | nindent 8 }}
annotations:
checksum/config: {{ include (print $.Template.BasePath "/configmap.yaml") . | sha256sum }}
checksum/secret: {{ include (print $.Template.BasePath "/secrets.yaml") . | sha256sum }}
{{- with .Values.podAnnotations }}
{{- toYaml . | nindent 8 }}
{{- end }}
spec:
{{- with .Values.global.imagePullSecrets }}
imagePullSecrets:
{{- toYaml . | nindent 8 }}
{{- end }}
serviceAccountName: {{ include "datacenter-docs.serviceAccountName" . }}
securityContext:
{{- toYaml .Values.podSecurityContext | nindent 8 }}
initContainers:
- name: wait-for-mongodb
image: busybox:1.36
command:
- sh
- -c
- |
until nc -z {{ include "datacenter-docs.mongodb.fullname" . }} {{ .Values.mongodb.service.port }}; do
echo "Waiting for MongoDB..."
sleep 2
done
- name: wait-for-redis
image: busybox:1.36
command:
- sh
- -c
- |
until nc -z {{ include "datacenter-docs.redis.fullname" . }} {{ .Values.redis.service.port }}; do
echo "Waiting for Redis..."
sleep 2
done
containers:
- name: worker
securityContext:
{{- toYaml .Values.securityContext | nindent 12 }}
image: "{{ .Values.worker.image.repository }}:{{ .Values.worker.image.tag }}"
imagePullPolicy: {{ .Values.worker.image.pullPolicy }}
env:
- name: MONGODB_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: mongodb-url
- name: REDIS_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: redis-url
- name: LLM_BASE_URL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: llm-base-url
- name: LLM_MODEL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: llm-model
- name: LLM_API_KEY
valueFrom:
secretKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-secrets
key: llm-api-key
- name: LOG_LEVEL
valueFrom:
configMapKeyRef:
name: {{ include "datacenter-docs.fullname" . }}-config
key: log-level
- name: PYTHONPATH
value: "/app/src"
resources:
{{- toYaml .Values.worker.resources | nindent 12 }}
{{- with .Values.nodeSelector }}
nodeSelector:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.affinity }}
affinity:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- with .Values.tolerations }}
tolerations:
{{- toYaml . | nindent 8 }}
{{- end }}
{{- end }}


@@ -0,0 +1,24 @@
{{- if and .Values.worker.enabled .Values.worker.autoscaling.enabled }}
apiVersion: {{ include "datacenter-docs.hpa.apiVersion" . }}
kind: HorizontalPodAutoscaler
metadata:
name: {{ include "datacenter-docs.worker.fullname" . }}
labels:
{{- include "datacenter-docs.worker.labels" . | nindent 4 }}
spec:
scaleTargetRef:
apiVersion: apps/v1
kind: Deployment
name: {{ include "datacenter-docs.worker.fullname" . }}
minReplicas: {{ .Values.worker.autoscaling.minReplicas }}
maxReplicas: {{ .Values.worker.autoscaling.maxReplicas }}
metrics:
{{- if .Values.worker.autoscaling.targetCPUUtilizationPercentage }}
- type: Resource
resource:
name: cpu
target:
type: Utilization
averageUtilization: {{ .Values.worker.autoscaling.targetCPUUtilizationPercentage }}
{{- end }}
{{- end }}


@@ -0,0 +1,181 @@
# Development values for datacenter-docs
# This configuration is optimized for local development and testing
# Use with: helm install dev ./datacenter-docs -f values-development.yaml
global:
imagePullPolicy: IfNotPresent
storageClass: ""
# MongoDB - minimal resources for development
mongodb:
enabled: true
image:
repository: mongo
tag: "7"
pullPolicy: IfNotPresent
auth:
rootUsername: admin
rootPassword: admin123
database: datacenter_docs
persistence:
enabled: false # Use emptyDir for faster testing
size: 1Gi
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
# Redis - minimal resources
redis:
enabled: true
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "256Mi"
cpu: "200m"
# API service - single replica for development
api:
enabled: true
replicaCount: 1
image:
repository: datacenter-docs-api
tag: "latest"
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 8000
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1Gi"
cpu: "500m"
autoscaling:
enabled: false # Disable for development
# Chat service - disabled by default (not implemented)
chat:
enabled: false
# Worker service - disabled by default (not implemented)
worker:
enabled: false
# Frontend - single replica
frontend:
enabled: true
replicaCount: 1
image:
repository: datacenter-docs-frontend
tag: "latest"
pullPolicy: IfNotPresent
resources:
requests:
memory: "64Mi"
cpu: "50m"
limits:
memory: "128Mi"
cpu: "100m"
# Ingress - disabled for development (use port-forward)
ingress:
enabled: false
# Application configuration for development
config:
mongodbUrl: "mongodb://admin:admin123@{{ include \"datacenter-docs.mongodb.fullname\" . }}:27017/datacenter_docs?authSource=admin"
redisUrl: "redis://{{ include \"datacenter-docs.redis.fullname\" . }}:6379/0"
llm:
# Use local LLM for development (no API costs)
    baseUrl: "http://localhost:11434/v1"  # Ollama; note pods cannot reach a host-local Ollama via localhost, use a Service or host address when running in-cluster
model: "llama2"
# Or use OpenAI with a test key
# baseUrl: "https://api.openai.com/v1"
# model: "gpt-3.5-turbo"
maxTokens: 2048
temperature: 0.7
mcp:
baseUrl: "http://mcp-server:8080"
timeout: 30
# Auto-remediation in dry-run mode for safety
autoRemediation:
enabled: true
minReliabilityScore: 85.0
requireApprovalThreshold: 90.0
maxActionsPerHour: 100
dryRun: true # ALWAYS dry-run in development
apiKeyEnabled: false # Disable for easier testing
corsOrigins:
- "http://localhost:3000"
- "http://localhost:8080"
- "http://localhost:8000"
logLevel: "DEBUG" # Verbose logging for development
logFormat: "text" # Human-readable logs
# Secrets - safe defaults for development only
secrets:
llmApiKey: "not-needed-for-local-llm"
apiSecretKey: "dev-secret-key-not-for-production"
mongodbUsername: "admin"
mongodbPassword: "admin123"
# ServiceAccount
serviceAccount:
create: true
annotations: {}
name: ""
# Relaxed security for development
podSecurityContext:
fsGroup: 1000
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
# No node selectors or tolerations
nodeSelector: {}
tolerations: []
affinity: {}
# No priority class
priorityClassName: ""
# Development tips:
#
# 1. Port-forward to access services:
# kubectl port-forward svc/dev-datacenter-docs-api 8000:8000
# kubectl port-forward svc/dev-datacenter-docs-frontend 8080:80
#
# 2. View logs:
# kubectl logs -l app.kubernetes.io/component=api -f
#
# 3. Access MongoDB directly:
# kubectl port-forward svc/dev-datacenter-docs-mongodb 27017:27017
# mongosh mongodb://admin:admin123@localhost:27017
#
# 4. Quick iteration:
# # Make code changes
# docker build -t datacenter-docs-api:latest -f deploy/docker/Dockerfile.api .
# kubectl rollout restart deployment/dev-datacenter-docs-api
#
# 5. Clean slate:
# helm uninstall dev
# kubectl delete pvc --all
# helm install dev ./datacenter-docs -f values-development.yaml
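The `dryRun: true` flag only protects development if the remediation code actually honors it. A minimal guard, assuming a hypothetical `execute_action` entry point (the real engine's API may differ):

```python
def execute_action(action: str, dry_run: bool = True) -> str:
    """In dry-run mode, report what would happen instead of acting.
    (Hypothetical function; shown only to illustrate the guard.)"""
    if dry_run:
        return f"[DRY-RUN] would execute: {action}"
    return f"executed: {action}"

assert execute_action("restart-pod").startswith("[DRY-RUN]")
assert execute_action("restart-pod", dry_run=False) == "executed: restart-pod"
```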


@@ -0,0 +1,304 @@
# Production values for datacenter-docs
# This is an example configuration for production deployment
# Copy this file and customize it for your environment
global:
imagePullPolicy: Always
storageClass: "standard" # Use your storage class
# MongoDB configuration for production
mongodb:
enabled: true
auth:
rootUsername: admin
rootPassword: "CHANGE-THIS-IN-PRODUCTION" # Use strong password
database: datacenter_docs
persistence:
enabled: true
size: 50Gi # Adjust based on expected data volume
storageClass: "fast-ssd" # Use SSD storage class for better performance
resources:
requests:
memory: "2Gi"
cpu: "1000m"
limits:
memory: "4Gi"
cpu: "2000m"
# Redis configuration for production
redis:
enabled: true
resources:
requests:
memory: "256Mi"
cpu: "200m"
limits:
memory: "1Gi"
cpu: "1000m"
# API service - production scale
api:
enabled: true
replicaCount: 5
image:
repository: your-registry.io/datacenter-docs-api
tag: "v1.0.0" # Use specific version, not latest
pullPolicy: Always
service:
type: ClusterIP
port: 8000
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "4Gi"
cpu: "2000m"
autoscaling:
enabled: true
minReplicas: 5
maxReplicas: 20
targetCPUUtilizationPercentage: 70
targetMemoryUtilizationPercentage: 80
# Chat service - enable in production
chat:
enabled: true
replicaCount: 3
image:
repository: your-registry.io/datacenter-docs-chat
tag: "v1.0.0"
pullPolicy: Always
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
# Worker service - enable in production
worker:
enabled: true
replicaCount: 5
image:
repository: your-registry.io/datacenter-docs-worker
tag: "v1.0.0"
pullPolicy: Always
resources:
requests:
memory: "1Gi"
cpu: "500m"
limits:
memory: "4Gi"
cpu: "2000m"
autoscaling:
enabled: true
minReplicas: 3
maxReplicas: 20
targetCPUUtilizationPercentage: 75
# Frontend - production scale
frontend:
enabled: true
replicaCount: 3
image:
repository: your-registry.io/datacenter-docs-frontend
tag: "v1.0.0"
pullPolicy: Always
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
# Ingress - production configuration
ingress:
enabled: true
className: "nginx"
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/force-ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
    nginx.ingress.kubernetes.io/limit-rps: "50"
    nginx.ingress.kubernetes.io/limit-connections: "100"
hosts:
- host: datacenter-docs.yourdomain.com
paths:
- path: /
pathType: Prefix
service: frontend
- path: /api
pathType: Prefix
service: api
- path: /ws
pathType: Prefix
service: chat
tls:
- secretName: datacenter-docs-tls
hosts:
- datacenter-docs.yourdomain.com
# Application configuration for production
config:
# MongoDB connection (if using external MongoDB, change this)
mongodbUrl: "mongodb://admin:CHANGE-THIS-IN-PRODUCTION@{{ include \"datacenter-docs.mongodb.fullname\" . }}:27017/datacenter_docs?authSource=admin"
# Redis connection
redisUrl: "redis://{{ include \"datacenter-docs.redis.fullname\" . }}:6379/0"
# LLM Provider configuration
llm:
# For OpenAI
baseUrl: "https://api.openai.com/v1"
model: "gpt-4-turbo-preview"
# For Anthropic Claude (alternative)
# baseUrl: "https://api.anthropic.com/v1"
# model: "claude-3-opus-20240229"
# For Azure OpenAI (alternative)
# baseUrl: "https://your-resource.openai.azure.com"
# model: "gpt-4"
maxTokens: 4096
temperature: 0.7
# MCP configuration
mcp:
baseUrl: "http://mcp-server:8080"
timeout: 30
# Auto-remediation configuration
autoRemediation:
enabled: true
minReliabilityScore: 90.0 # Higher threshold for production
requireApprovalThreshold: 95.0
maxActionsPerHour: 50 # Conservative limit
    dryRun: false  # Consider setting true for the initial rollout, then disable once validated
# Security
apiKeyEnabled: true
corsOrigins:
- "https://datacenter-docs.yourdomain.com"
- "https://admin.yourdomain.com"
# Logging
logLevel: "INFO" # Use "DEBUG" for troubleshooting
logFormat: "json"
# Secrets - MUST BE CHANGED IN PRODUCTION
secrets:
# LLM API Key
llmApiKey: "CHANGE-THIS-TO-YOUR-ACTUAL-API-KEY"
# API authentication secret key
apiSecretKey: "CHANGE-THIS-TO-A-STRONG-RANDOM-KEY"
# MongoDB credentials
mongodbUsername: "admin"
mongodbPassword: "CHANGE-THIS-IN-PRODUCTION"
# ServiceAccount
serviceAccount:
create: true
annotations:
# Add cloud provider annotations if needed
# eks.amazonaws.com/role-arn: arn:aws:iam::ACCOUNT-ID:role/IAM-ROLE-NAME
name: ""
# Pod security context
podSecurityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
seccompProfile:
type: RuntimeDefault
# Container security context
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false
runAsNonRoot: true
runAsUser: 1000
# Node selector - place workloads on specific nodes
nodeSelector:
workload-type: "application"
# kubernetes.io/arch: amd64
# Tolerations - allow scheduling on tainted nodes
tolerations:
- key: "workload-type"
operator: "Equal"
value: "application"
effect: "NoSchedule"
# Affinity rules - spread pods across zones and nodes
affinity:
podAntiAffinity:
preferredDuringSchedulingIgnoredDuringExecution:
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/name
operator: In
values:
- datacenter-docs
topologyKey: kubernetes.io/hostname
- weight: 100
podAffinityTerm:
labelSelector:
matchExpressions:
- key: app.kubernetes.io/component
operator: In
values:
- api
topologyKey: topology.kubernetes.io/zone
# Priority class - ensure critical pods are scheduled first
priorityClassName: "high-priority"
# Additional production recommendations:
#
# 1. Use external secret management:
# - HashiCorp Vault
# - AWS Secrets Manager
# - Azure Key Vault
# - Google Secret Manager
#
# 2. Enable monitoring:
# - Prometheus metrics
# - Grafana dashboards
# - AlertManager alerts
#
# 3. Enable logging:
# - ELK Stack
# - Loki
# - CloudWatch
#
# 4. Enable tracing:
# - Jaeger
# - OpenTelemetry
#
# 5. Backup strategy:
# - MongoDB backups (Velero, native tools)
# - Disaster recovery plan
#
# 6. Network policies:
# - Restrict pod-to-pod communication
# - Isolate database access
#
# 7. Pod disruption budgets:
# - Ensure high availability during updates
#
# 8. Regular security scans:
# - Container image scanning
# - Dependency vulnerability scanning
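Recommendations 6 and 7 in the list above (network policies and pod disruption budgets) are not rendered by the chart; they can be sketched as standalone manifests. The resource names below are illustrative, and the selectors assume the chart's standard `app.kubernetes.io/*` labels:

```yaml
# PodDisruptionBudget: keep at least one API pod up during voluntary disruptions
apiVersion: policy/v1
kind: PodDisruptionBudget
metadata:
  name: datacenter-docs-api-pdb
spec:
  minAvailable: 1
  selector:
    matchLabels:
      app.kubernetes.io/name: datacenter-docs
      app.kubernetes.io/component: api
---
# NetworkPolicy: only API and worker pods may reach MongoDB on 27017
apiVersion: networking.k8s.io/v1
kind: NetworkPolicy
metadata:
  name: datacenter-docs-mongodb-access
spec:
  podSelector:
    matchLabels:
      app.kubernetes.io/component: mongodb
  policyTypes:
    - Ingress
  ingress:
    - from:
        - podSelector:
            matchExpressions:
              - key: app.kubernetes.io/component
                operator: In
                values: [api, worker]
      ports:
        - protocol: TCP
          port: 27017
```

Note that NetworkPolicy objects only take effect if the cluster's CNI plugin enforces them.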

datacenter-docs/values.yaml (new file, 265 lines):

# Default values for datacenter-docs
# This is a YAML-formatted file.
# Declare variables to be passed into your templates.
global:
imagePullPolicy: IfNotPresent
storageClass: ""
# MongoDB configuration
mongodb:
enabled: true
image:
repository: mongo
tag: "7"
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 27017
auth:
enabled: true
rootUsername: admin
rootPassword: admin123
database: datacenter_docs
persistence:
enabled: true
size: 10Gi
storageClass: ""
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
# Redis configuration
redis:
enabled: true
image:
repository: redis
tag: "7-alpine"
pullPolicy: IfNotPresent
service:
type: ClusterIP
port: 6379
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "512Mi"
cpu: "500m"
# API service configuration
api:
enabled: true
replicaCount: 2
image:
repository: datacenter-docs-api
tag: "latest"
pullPolicy: Always
service:
type: ClusterIP
port: 8000
targetPort: 8000
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
autoscaling:
enabled: true
minReplicas: 2
maxReplicas: 10
targetCPUUtilizationPercentage: 80
targetMemoryUtilizationPercentage: 80
livenessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 30
periodSeconds: 10
timeoutSeconds: 5
failureThreshold: 3
readinessProbe:
httpGet:
path: /health
port: 8000
initialDelaySeconds: 10
periodSeconds: 5
timeoutSeconds: 3
failureThreshold: 3
# Chat service configuration
chat:
enabled: false # Not yet implemented
replicaCount: 1
image:
repository: datacenter-docs-chat
tag: "latest"
pullPolicy: Always
service:
type: ClusterIP
port: 8001
targetPort: 8001
resources:
requests:
memory: "256Mi"
cpu: "100m"
limits:
memory: "1Gi"
cpu: "500m"
# Worker service configuration
worker:
enabled: false # Not yet implemented
replicaCount: 3
image:
repository: datacenter-docs-worker
tag: "latest"
pullPolicy: Always
resources:
requests:
memory: "512Mi"
cpu: "250m"
limits:
memory: "2Gi"
cpu: "1000m"
autoscaling:
enabled: true
minReplicas: 1
maxReplicas: 10
targetCPUUtilizationPercentage: 80
# Frontend service configuration
frontend:
enabled: true
replicaCount: 2
image:
repository: datacenter-docs-frontend
tag: "latest"
pullPolicy: Always
service:
type: ClusterIP
port: 80
targetPort: 80
resources:
requests:
memory: "128Mi"
cpu: "100m"
limits:
memory: "256Mi"
cpu: "200m"
# Ingress configuration
ingress:
enabled: true
className: "nginx"
annotations:
cert-manager.io/cluster-issuer: "letsencrypt-prod"
nginx.ingress.kubernetes.io/ssl-redirect: "true"
nginx.ingress.kubernetes.io/proxy-body-size: "50m"
hosts:
- host: datacenter-docs.example.com
paths:
- path: /
pathType: Prefix
service: frontend
- path: /api
pathType: Prefix
service: api
- path: /ws
pathType: Prefix
service: chat
tls:
- secretName: datacenter-docs-tls
hosts:
- datacenter-docs.example.com
# Application configuration
config:
# MongoDB connection
mongodbUrl: "mongodb://admin:admin123@{{ include \"datacenter-docs.mongodb.fullname\" . }}:27017/datacenter_docs?authSource=admin"
# Redis connection
redisUrl: "redis://{{ include \"datacenter-docs.redis.fullname\" . }}:6379/0"
# LLM Provider configuration
llm:
baseUrl: "https://api.openai.com/v1"
model: "gpt-4-turbo-preview"
maxTokens: 4096
temperature: 0.7
# MCP configuration
mcp:
baseUrl: "http://mcp-server:8080"
timeout: 30
# Auto-remediation configuration
autoRemediation:
enabled: true
minReliabilityScore: 85.0
requireApprovalThreshold: 90.0
maxActionsPerHour: 100
dryRun: false
# Security
apiKeyEnabled: true
corsOrigins:
- "http://localhost:3000"
- "https://datacenter-docs.example.com"
# Logging
logLevel: "INFO"
logFormat: "json"
# Secrets (should be overridden in production)
secrets:
# LLM API Key
llmApiKey: "sk-your-openai-api-key-here"
# API authentication
apiSecretKey: "your-secret-key-here-change-in-production"
# MongoDB credentials (override mongodb.auth if using external DB)
mongodbUsername: "admin"
mongodbPassword: "admin123"
# ServiceAccount configuration
serviceAccount:
create: true
annotations: {}
name: ""
# Pod annotations
podAnnotations: {}
# Pod security context
podSecurityContext:
fsGroup: 1000
runAsNonRoot: true
runAsUser: 1000
# Container security context
securityContext:
allowPrivilegeEscalation: false
capabilities:
drop:
- ALL
readOnlyRootFilesystem: false
# Node selector
nodeSelector: {}
# Tolerations
tolerations: []
# Affinity rules
affinity: {}
# Priority class
priorityClassName: ""

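The placeholder secrets above must never reach a cluster as-is. A minimal sketch of generating strong overrides outside version control (the file name `my-secrets.yaml` is an assumption, and `openssl` is assumed to be available):

```shell
#!/bin/sh
# Generate random replacement secrets (values are examples, not chart defaults)
API_SECRET_KEY=$(openssl rand -hex 32)
MONGODB_PASSWORD=$(openssl rand -base64 24)

# Write an override file; keep it out of version control
cat > my-secrets.yaml <<EOF
secrets:
  apiSecretKey: "${API_SECRET_KEY}"
  mongodbPassword: "${MONGODB_PASSWORD}"
mongodb:
  auth:
    rootPassword: "${MONGODB_PASSWORD}"
EOF

# Then install with:
#   helm install prod ./datacenter-docs -f values-production.yaml -f my-secrets.yaml
```

For production, prefer one of the external secret managers listed in values-production.yaml over plain values files.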

deploy/helm/test-chart.sh (new executable file, 143 lines):

#!/bin/bash
# Test script for Helm chart validation
# Usage: ./test-chart.sh
set -e
CHART_DIR="datacenter-docs"
RELEASE_NAME="test-datacenter-docs"
echo "=========================================="
echo "Helm Chart Testing Script"
echo "=========================================="
echo ""
# Check if helm is installed
if ! command -v helm &> /dev/null; then
echo "ERROR: helm is not installed. Please install Helm first."
exit 1
fi
echo "✓ Helm version: $(helm version --short)"
echo ""
# Lint the chart
echo "=========================================="
echo "Step 1: Linting Chart"
echo "=========================================="
helm lint ${CHART_DIR}
echo "✓ Lint passed"
echo ""
# Template rendering with default values
echo "=========================================="
echo "Step 2: Template Rendering (default values)"
echo "=========================================="
helm template ${RELEASE_NAME} ${CHART_DIR} > /tmp/rendered-default.yaml
echo "✓ Template rendering successful"
echo " Output: /tmp/rendered-default.yaml"
echo ""
# Template rendering with development values
echo "=========================================="
echo "Step 3: Template Rendering (development values)"
echo "=========================================="
helm template ${RELEASE_NAME} ${CHART_DIR} -f ${CHART_DIR}/values-development.yaml > /tmp/rendered-dev.yaml
echo "✓ Template rendering successful"
echo " Output: /tmp/rendered-dev.yaml"
echo ""
# Template rendering with production values
echo "=========================================="
echo "Step 4: Template Rendering (production values)"
echo "=========================================="
helm template ${RELEASE_NAME} ${CHART_DIR} -f ${CHART_DIR}/values-production.yaml > /tmp/rendered-prod.yaml
echo "✓ Template rendering successful"
echo " Output: /tmp/rendered-prod.yaml"
echo ""
# Dry run installation
echo "=========================================="
echo "Step 5: Dry Run Installation"
echo "=========================================="
helm install ${RELEASE_NAME} ${CHART_DIR} --dry-run --debug > /tmp/dry-run.log 2>&1
echo "✓ Dry run successful"
echo " Output: /tmp/dry-run.log"
echo ""
# Test with disabled components
echo "=========================================="
echo "Step 6: Template with Disabled Components"
echo "=========================================="
helm template ${RELEASE_NAME} ${CHART_DIR} \
--set mongodb.enabled=false \
--set redis.enabled=false \
--set api.enabled=false \
--set frontend.enabled=false \
> /tmp/rendered-minimal.yaml
echo "✓ Minimal template rendering successful"
echo " Output: /tmp/rendered-minimal.yaml"
echo ""
# Test with all components enabled
echo "=========================================="
echo "Step 7: Template with All Components"
echo "=========================================="
helm template ${RELEASE_NAME} ${CHART_DIR} \
--set chat.enabled=true \
--set worker.enabled=true \
> /tmp/rendered-full.yaml
echo "✓ Full template rendering successful"
echo " Output: /tmp/rendered-full.yaml"
echo ""
# Validate Kubernetes manifests (if kubectl is available)
if command -v kubectl &> /dev/null; then
echo "=========================================="
echo "Step 8: Kubernetes Manifest Validation"
echo "=========================================="
    if kubectl apply --dry-run=client -f /tmp/rendered-default.yaml > /dev/null 2>&1; then
        echo "✓ Kubernetes manifest validation passed"
    else
        echo "⚠ Manifest validation failed or no cluster reachable, skipping"
    fi
echo ""
else
echo "⚠ kubectl not found, skipping Kubernetes validation"
echo ""
fi
# Package the chart
echo "=========================================="
echo "Step 9: Packaging Chart"
echo "=========================================="
helm package ${CHART_DIR} -d /tmp/
echo "✓ Chart packaged successfully"
echo " Output: /tmp/datacenter-docs-*.tgz"
echo ""
# Summary
echo "=========================================="
echo "All Tests Passed! ✓"
echo "=========================================="
echo ""
echo "Generated files:"
echo " - /tmp/rendered-default.yaml (default values)"
echo " - /tmp/rendered-dev.yaml (development values)"
echo " - /tmp/rendered-prod.yaml (production values)"
echo " - /tmp/rendered-minimal.yaml (minimal components)"
echo " - /tmp/rendered-full.yaml (all components)"
echo " - /tmp/dry-run.log (dry run output)"
echo " - /tmp/datacenter-docs-*.tgz (packaged chart)"
echo ""
echo "To install the chart locally:"
echo " helm install my-release ${CHART_DIR}"
echo ""
echo "To install with development values:"
echo " helm install dev ${CHART_DIR} -f ${CHART_DIR}/values-development.yaml"
echo ""
echo "To install with production values (customize first!):"
echo " helm install prod ${CHART_DIR} -f ${CHART_DIR}/values-production.yaml"
echo ""