llm-automation-docs-and-rem…/deploy/docker/Dockerfile.worker
Commit d6d44270ee by dnviti
refactor: reorganize CI/CD pipelines into separate workflow files
BREAKING CHANGE: Monolithic ci.yml split into focused pipeline files

## Pipeline Reorganization

Split the single CI/CD pipeline into 7 specialized workflows (a minimal sketch of one follows the list):

1. **lint.yml** - Code quality checks (Black, Ruff, MyPy)
2. **test.yml** - Test suite with coverage reporting
3. **security.yml** - Security scanning (Bandit)
4. **build.yml** - Docker image builds and registry push
5. **deploy-staging.yml** - Staging environment deployment
6. **deploy-production.yml** - Production deployment (tags only)
7. **docs-generation.yml** - Scheduled documentation generation
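
As a concrete illustration of the split, a minimal `lint.yml` might look like the sketch below. This is an assumption based on the tools named above (Black, Ruff, MyPy), not the actual file contents; Gitea Actions uses GitHub Actions-compatible syntax.

```yaml
# .gitea/workflows/lint.yml — illustrative sketch only; the real workflow may differ.
name: Lint
on: [push, pull_request]

jobs:
  lint:
    name: Lint Code
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - uses: actions/setup-python@v5
        with:
          python-version: "3.12"
      - name: Install linters            # assumed install method
        run: pip install black ruff mypy
      - name: Black (format check)
        run: black --check src/
      - name: Ruff (lint)
        run: ruff check src/
      - name: MyPy (type check)
        run: mypy src/
```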

## Benefits

- **Modularity**: Each pipeline has a single responsibility
- **Performance**: Workflows run independently for faster feedback
- **Clarity**: Easier to understand and maintain
- **Flexibility**: Pipelines can be triggered independently
- **Debugging**: Isolated failures are easier to diagnose

## Dockerfile Improvements

- Fix FROM AS casing (was 'as', now 'AS') in all Dockerfiles
- Resolves Docker build warnings
- Improves consistency across build files

## Documentation

- Added .gitea/workflows/README.md with:
  - Workflow descriptions and triggers
  - Dependency diagram
  - Environment variables reference
  - Troubleshooting guide
  - Best practices

## Migration Notes

- Old monolithic pipeline backed up as ci.yml.old
- All triggers preserved from original pipeline
- No changes to build or deploy logic
- Same environment variables and secrets required

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
2025-10-21 13:37:21 +02:00


# Dockerfile for Celery Worker Service
FROM python:3.12-slim AS builder
WORKDIR /build
# Install Poetry
RUN pip install --no-cache-dir poetry==1.8.0
# Copy dependency files
COPY pyproject.toml poetry.lock ./
# Export dependencies
RUN poetry config virtualenvs.create false \
    && poetry export -f requirements.txt --output requirements.txt --without-hashes
# Runtime stage
FROM python:3.12-slim
LABEL maintainer="automation-team@company.com"
LABEL description="Datacenter Documentation Background Worker"
# Install system dependencies for network automation
RUN apt-get update && apt-get install -y \
    gcc \
    libpq-dev \
    openssh-client \
    snmp \
    curl \
    && rm -rf /var/lib/apt/lists/*
WORKDIR /app
# Copy requirements from builder
COPY --from=builder /build/requirements.txt .
# Install Python dependencies
RUN pip install --no-cache-dir -r requirements.txt
# Copy application code and package definition
COPY src/ /app/src/
COPY config/ /app/config/
COPY templates/ /app/templates/
COPY pyproject.toml README.md /app/
# Install poetry-core (required for install with pyproject.toml)
RUN pip install --no-cache-dir poetry-core
# Install the package
RUN pip install --no-cache-dir /app
# Set PYTHONPATH to ensure module can be imported
ENV PYTHONPATH=/app/src:$PYTHONPATH
# Create necessary directories
RUN mkdir -p /app/logs /app/output
# Create non-root user
RUN useradd -m -u 1000 appuser && \
    chown -R appuser:appuser /app
USER appuser
# Run the Celery worker with specific queues
CMD ["celery", "-A", "datacenter_docs.workers.celery_app", "worker", "--loglevel=info", "--concurrency=4", "-Q", "documentation,auto_remediation,data_collection,maintenance,celery"]