Enhance Swagger documentation and web UI navigation

Enhanced API Swagger documentation and improved web interface navigation
with dropdown menus and better organization.

API Changes (api/main.py):
==========================
- Enhanced FastAPI app description with architecture diagram
- Added detailed rate limiting information
- Added server configurations (production + local)
- Added contact and license information
- Enhanced all endpoint descriptions with:
  * Detailed parameter descriptions
  * Response descriptions
  * Error responses
  * Rate limit information
  * Usage examples
- Added Field descriptions to all Pydantic models
- Added schema examples for better Swagger UI
- Enhanced LLM endpoints with AI rate limiting details
- Added status codes (201, 404, 429, 500) to endpoints
- Improved startup message with docs URLs

Swagger UI Improvements:
- Better organized endpoint groups (Root, Health, Items, Users, LLM)
- Detailed request/response schemas
- Interactive examples for all endpoints
- Rate limiting documentation
- Architecture overview in description
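
The AI-based limit documented here (100 total tokens per 60 seconds, strategy `total_tokens`) is enforced by the API7 Gateway's ai-rate-limiting plugin, not by application code; a stdlib sketch of the sliding-window accounting it implies:

```python
import time
from collections import deque

class TotalTokenWindow:
    """Illustrative sliding-window accounting for a total-token limit.
    In this demo the real enforcement happens in API7 Gateway's
    ai-rate-limiting plugin; this class only models the arithmetic."""

    def __init__(self, limit=100, window_seconds=60, clock=time.monotonic):
        self.limit = limit
        self.window = window_seconds
        self.clock = clock
        self.events = deque()  # (timestamp, tokens) pairs

    def allow(self, tokens):
        """Report whether a request using `tokens` (prompt + completion)
        fits in the current window; an over-limit request maps to HTTP 429."""
        now = self.clock()
        # Drop usage records that have aged out of the window.
        while self.events and now - self.events[0][0] >= self.window:
            self.events.popleft()
        used = sum(t for _, t in self.events)
        if used + tokens > self.limit:
            return False
        self.events.append((now, tokens))
        return True

limiter = TotalTokenWindow(limit=100, window_seconds=60)
print(limiter.allow(60))  # True: 60 of 100 tokens used
print(limiter.allow(50))  # False: 60 + 50 would exceed the limit
print(limiter.allow(40))  # True: 60 + 40 exactly reaches the limit
```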

Web Changes (web/templates/base.html):
======================================
- Added dropdown menu for API documentation with:
  * API Root (/)
  * Swagger UI (/docs)
  * ReDoc (/redoc)
  * OpenAPI JSON (/openapi.json)
- Added emoji icons to all menu items for better UX
- Added tooltips (title attributes) to all links
- Renamed "API Config" to "Settings" for clarity
- Added CSS for dropdown menu functionality
- Improved footer text
- Better visual hierarchy with icons

Navigation Menu:
- 🏠 Home
- 📦 Items
- 👥 Users
- 🤖 LLM Chat
- 📚 API Docs (dropdown with 4 options)
- ⚙️ Settings

All endpoints now have comprehensive documentation visible in Swagger UI
at https://commandware.it/api/docs

🤖 Generated with [Claude Code](https://claude.com/claude-code)

Co-Authored-By: Claude <noreply@anthropic.com>
Author: d.viti
Date: 2025-10-09 17:20:34 +02:00
Parent: a0dee1d499
Commit: 146f657bea
2 changed files with 404 additions and 67 deletions

File: api/main.py

@@ -1,8 +1,9 @@
from typing import List, Optional
from pydantic import BaseModel
from pydantic import BaseModel, Field
import uvicorn
from datetime import datetime
from fastapi import FastAPI, HTTPException
from fastapi import FastAPI, HTTPException, status
from fastapi.responses import JSONResponse
from pydantic import BaseModel
from typing import Optional, List
import os
@@ -14,24 +15,93 @@ OPENAI_API_KEY = os.getenv("OPENAI_API_KEY", "your-api-key")
DEFAULT_MODEL = os.getenv("DEFAULT_LLM_MODEL", "your-model-id")
app = FastAPI(
title="API Demo Application",
description="Demo API with Swagger documentation",
version="1.0.0"
title="API7 Enterprise Demo API",
description="""
## API7 Enterprise Demo Application
This API demonstrates the capabilities of API7 Enterprise Gateway with:
* **CRUD Operations** - Items and Users management
* **LLM Integration** - AI-powered chat with rate limiting
* **Health Checks** - Kubernetes-ready endpoints
* **Rate Limiting** - Standard (100 req/min) and AI-based (100 tokens/min)
* **CORS** - Cross-origin resource sharing enabled
* **Proxy Rewrite** - Automatic /api prefix removal by API7 Gateway
### Architecture
```
Client → Ingress (NGINX) → API7 Gateway → Backend API
• Rate Limiting
• CORS
• Proxy Rewrite (/api → /)
• Service Discovery
```
### Rate Limiting
- **Standard API** (`/items`, `/users`): 100 requests per 60 seconds per IP
- **LLM API** (`/llm/*`): 100 tokens per 60 seconds (AI-based rate limiting)
### Documentation
- **Swagger UI**: [/docs](/docs)
- **ReDoc**: [/redoc](/redoc)
- **OpenAPI JSON**: [/openapi.json](/openapi.json)
""",
version="1.0.0",
contact={
"name": "CommandWare",
"url": "https://commandware.it",
},
license_info={
"name": "MIT",
},
servers=[
{
"url": "https://commandware.it/api",
"description": "Production server (via API7 Gateway)"
},
{
"url": "http://localhost:8080",
"description": "Local development server"
}
]
)
# Models
class Item(BaseModel):
id: Optional[int] = None
name: str
description: Optional[str] = None
price: float
in_stock: bool = True
id: Optional[int] = Field(None, description="Item ID (auto-generated)")
name: str = Field(..., description="Item name", example="Laptop")
description: Optional[str] = Field(None, description="Item description", example="High-performance laptop")
price: float = Field(..., description="Item price", example=999.99, gt=0)
in_stock: bool = Field(True, description="Stock availability")
class Config:
schema_extra = {
"example": {
"name": "Laptop",
"description": "High-performance laptop",
"price": 999.99,
"in_stock": True
}
}
class User(BaseModel):
id: Optional[int] = None
username: str
email: str
active: bool = True
id: Optional[int] = Field(None, description="User ID (auto-generated)")
username: str = Field(..., description="Username", example="john_doe", min_length=3)
email: str = Field(..., description="Email address", example="john@example.com")
active: bool = Field(True, description="User active status")
class Config:
schema_extra = {
"example": {
"username": "john_doe",
"email": "john@example.com",
"active": True
}
}
# In-memory storage
items_db = [
@@ -46,54 +116,150 @@ users_db = [
]
# Root endpoint
@app.get("/")
@app.get(
"/",
tags=["Root"],
summary="API Information",
response_description="API metadata and links"
)
async def root():
"""Root endpoint with API information"""
"""
**Root endpoint** with API information and navigation links.
Returns API version, documentation links, and current timestamp.
"""
return {
"message": "Welcome to API Demo",
"message": "Welcome to API7 Enterprise Demo API",
"version": "1.0.0",
"docs": "/docs",
"documentation": {
"swagger": "/docs",
"redoc": "/redoc",
"openapi": "/openapi.json"
},
"endpoints": {
"items": "/items",
"users": "/users",
"llm": "/llm"
},
"timestamp": datetime.now().isoformat()
}
# Health check
@app.get("/health")
@app.get(
"/health",
tags=["Health"],
summary="Health Check",
response_description="Service health status"
)
async def health():
"""Health check endpoint"""
return {"status": "healthy", "service": "api", "timestamp": datetime.now().isoformat()}
"""
**Health check endpoint** for Kubernetes liveness probe.
Returns service health status and timestamp.
"""
return {
"status": "healthy",
"service": "api",
"timestamp": datetime.now().isoformat()
}
# Readiness check
@app.get("/ready")
@app.get(
"/ready",
tags=["Health"],
summary="Readiness Check",
response_description="Service readiness status"
)
async def ready():
"""Readiness check endpoint"""
return {"status": "ready", "service": "api", "timestamp": datetime.now().isoformat()}
"""
**Readiness check endpoint** for Kubernetes readiness probe.
Returns service readiness status and timestamp.
"""
return {
"status": "ready",
"service": "api",
"timestamp": datetime.now().isoformat()
}
# Items endpoints
@app.get("/items", response_model=List[Item], tags=["Items"])
@app.get(
"/items",
response_model=List[Item],
tags=["Items"],
summary="List all items",
response_description="List of all items in inventory"
)
async def get_items():
"""Get all items"""
"""
**Get all items** from the inventory.
Returns a list of all available items with their details.
**Rate Limit**: 100 requests per 60 seconds per IP (via API7 Gateway)
"""
return items_db
@app.get("/items/{item_id}", response_model=Item, tags=["Items"])
@app.get(
"/items/{item_id}",
response_model=Item,
tags=["Items"],
summary="Get item by ID",
response_description="Item details",
responses={404: {"description": "Item not found"}}
)
async def get_item(item_id: int):
"""Get a specific item by ID"""
"""
**Get a specific item** by ID.
- **item_id**: The ID of the item to retrieve
Returns item details if found, otherwise 404 error.
"""
item = next((item for item in items_db if item["id"] == item_id), None)
if item is None:
raise HTTPException(status_code=404, detail="Item not found")
return item
@app.post("/items", response_model=Item, tags=["Items"])
@app.post(
"/items",
response_model=Item,
tags=["Items"],
summary="Create new item",
status_code=status.HTTP_201_CREATED,
response_description="Created item with auto-generated ID"
)
async def create_item(item: Item):
"""Create a new item"""
"""
**Create a new item** in the inventory.
The ID will be auto-generated. Provide:
- **name**: Item name (required)
- **description**: Item description (optional)
- **price**: Item price (required, must be > 0)
- **in_stock**: Stock availability (default: true)
"""
new_id = max([i["id"] for i in items_db]) + 1 if items_db else 1
item_dict = item.dict()
item_dict["id"] = new_id
items_db.append(item_dict)
return item_dict
@app.put("/items/{item_id}", response_model=Item, tags=["Items"])
@app.put(
"/items/{item_id}",
response_model=Item,
tags=["Items"],
summary="Update item",
response_description="Updated item",
responses={404: {"description": "Item not found"}}
)
async def update_item(item_id: int, item: Item):
"""Update an existing item"""
"""
**Update an existing item** by ID.
- **item_id**: The ID of the item to update
- Provide updated item data (name, description, price, in_stock)
"""
for idx, existing_item in enumerate(items_db):
if existing_item["id"] == item_id:
item_dict = item.dict()
@@ -102,32 +268,84 @@ async def update_item(item_id: int, item: Item):
return item_dict
raise HTTPException(status_code=404, detail="Item not found")
@app.delete("/items/{item_id}", tags=["Items"])
@app.delete(
"/items/{item_id}",
tags=["Items"],
summary="Delete item",
status_code=status.HTTP_200_OK,
response_description="Deletion confirmation",
responses={404: {"description": "Item not found"}}
)
async def delete_item(item_id: int):
"""Delete an item"""
"""
**Delete an item** from the inventory.
- **item_id**: The ID of the item to delete
Returns confirmation message if successful.
"""
for idx, item in enumerate(items_db):
if item["id"] == item_id:
items_db.pop(idx)
return {"message": "Item deleted successfully"}
return {"message": "Item deleted successfully", "id": item_id}
raise HTTPException(status_code=404, detail="Item not found")
# Users endpoints
@app.get("/users", response_model=List[User], tags=["Users"])
@app.get(
"/users",
response_model=List[User],
tags=["Users"],
summary="List all users",
response_description="List of all users"
)
async def get_users():
"""Get all users"""
"""
**Get all users** from the system.
Returns a list of all registered users.
**Rate Limit**: 100 requests per 60 seconds per IP (via API7 Gateway)
"""
return users_db
@app.get("/users/{user_id}", response_model=User, tags=["Users"])
@app.get(
"/users/{user_id}",
response_model=User,
tags=["Users"],
summary="Get user by ID",
response_description="User details",
responses={404: {"description": "User not found"}}
)
async def get_user(user_id: int):
"""Get a specific user by ID"""
"""
**Get a specific user** by ID.
- **user_id**: The ID of the user to retrieve
Returns user details if found, otherwise 404 error.
"""
user = next((user for user in users_db if user["id"] == user_id), None)
if user is None:
raise HTTPException(status_code=404, detail="User not found")
return user
@app.post("/users", response_model=User, tags=["Users"])
@app.post(
"/users",
response_model=User,
tags=["Users"],
summary="Create new user",
status_code=status.HTTP_201_CREATED,
response_description="Created user with auto-generated ID"
)
async def create_user(user: User):
"""Create a new user"""
"""
**Create a new user** in the system.
The ID will be auto-generated. Provide:
- **username**: Username (required, min 3 characters)
- **email**: Email address (required)
- **active**: User active status (default: true)
"""
new_id = max([u["id"] for u in users_db]) + 1 if users_db else 1
user_dict = user.dict()
user_dict["id"] = new_id
@@ -136,22 +354,52 @@ async def create_user(user: User):
# LLM endpoints
class LLMRequest(BaseModel):
prompt: str
max_tokens: Optional[int] = 150
temperature: Optional[float] = 0.7
model: Optional[str] = DEFAULT_MODEL
prompt: str = Field(..., description="The prompt to send to the LLM", example="What is API7 Enterprise?")
max_tokens: Optional[int] = Field(150, description="Maximum tokens in response", example=150, ge=1, le=4096)
temperature: Optional[float] = Field(0.7, description="Sampling temperature (0-2)", example=0.7, ge=0, le=2)
model: Optional[str] = Field(DEFAULT_MODEL, description="Model to use", example="videogame-expert")
class Config:
schema_extra = {
"example": {
"prompt": "What is API7 Enterprise?",
"max_tokens": 150,
"temperature": 0.7,
"model": "videogame-expert"
}
}
class LLMResponse(BaseModel):
response: str
tokens_used: int
model: str
timestamp: str
response: str = Field(..., description="LLM generated response")
tokens_used: int = Field(..., description="Total tokens used")
model: str = Field(..., description="Model used for generation")
timestamp: str = Field(..., description="Response timestamp")
@app.post("/llm/chat", response_model=LLMResponse, tags=["LLM"])
@app.post(
"/llm/chat",
response_model=LLMResponse,
tags=["LLM"],
summary="LLM Chat Completion",
response_description="LLM generated response",
responses={
429: {"description": "Rate limit exceeded (100 tokens per 60 seconds)"},
500: {"description": "LLM service error"}
}
)
async def llm_chat(request: LLMRequest):
"""
LLM Chat endpoint - connects to OpenAI-compatible API (Open WebUI)
This endpoint is rate limited by AI token usage via API7 Gateway
**LLM Chat endpoint** - connects to OpenAI-compatible API (Open WebUI).
This endpoint uses **AI-based rate limiting** via API7 Gateway:
- **Limit**: 100 tokens per 60 seconds
- **Strategy**: total_tokens (input + output)
- **Error**: HTTP 429 when limit exceeded
Provide:
- **prompt**: Your question or prompt
- **max_tokens**: Maximum response length (default: 150)
- **temperature**: Randomness level 0-2 (default: 0.7)
- **model**: Model to use (default: configured model)
"""
try:
async with httpx.AsyncClient() as client:
@@ -189,30 +437,67 @@ async def llm_chat(request: LLMRequest):
except Exception as e:
raise HTTPException(status_code=500, detail=f"LLM service error: {str(e)}")
@app.get("/llm/models", tags=["LLM"])
@app.get(
"/llm/models",
tags=["LLM"],
summary="List available LLM models",
response_description="List of available models"
)
async def list_llm_models():
"""List available LLM models"""
"""
**List available LLM models**.
Returns the list of models available through the configured LLM provider (Open WebUI).
"""
return {
"models": [
{"id": "videogame-expert", "name": "Videogame Expert", "max_tokens": 4096, "provider": "Open WebUI"}
{
"id": "videogame-expert",
"name": "Videogame Expert",
"max_tokens": 4096,
"provider": "Open WebUI",
"description": "Specialized model for videogame-related questions"
}
],
"default_model": DEFAULT_MODEL,
"provider": "Open WebUI",
"timestamp": datetime.now().isoformat()
}
@app.get("/llm/health", tags=["LLM"])
@app.get(
"/llm/health",
tags=["LLM"],
summary="LLM service health check",
response_description="LLM service health status"
)
async def llm_health():
"""LLM service health check"""
"""
**LLM service health check**.
Returns the health status of the LLM integration, including:
- Provider information
- Endpoint configuration
- Rate limit settings
"""
return {
"status": "healthy",
"service": "llm-api",
"provider": "Open WebUI",
"endpoint": OPENAI_API_BASE,
"default_model": DEFAULT_MODEL,
"rate_limit": "ai-rate-limiting enabled (100 tokens/60s)",
"rate_limit": {
"enabled": True,
"limit": 100,
"window": "60 seconds",
"strategy": "total_tokens",
"managed_by": "API7 Gateway (ai-rate-limiting plugin)"
},
"timestamp": datetime.now().isoformat()
}
if __name__ == "__main__":
port = int(os.getenv("PORT", 8080))
print(f"Starting API server on port {port}")
print(f"Swagger UI: http://localhost:{port}/docs")
print(f"ReDoc: http://localhost:{port}/redoc")
uvicorn.run(app, host="0.0.0.0", port=port)

File: web/templates/base.html

@@ -5,6 +5,52 @@
<meta name="viewport" content="width=device-width, initial-scale=1.0" />
<title>{% block title %}API Demo{% endblock %}</title>
<link rel="stylesheet" href="/static/css/style.css" />
<style>
/* Dropdown Menu Styles */
.nav-menu {
list-style: none;
display: flex;
gap: 1rem;
margin: 0;
padding: 0;
}
.dropdown {
position: relative;
display: inline-block;
}
.dropdown-content {
display: none;
position: absolute;
background-color: #2c3e50;
min-width: 200px;
box-shadow: 0px 8px 16px 0px rgba(0,0,0,0.3);
z-index: 1000;
border-radius: 4px;
margin-top: 0.5rem;
}
.dropdown-content a {
color: white;
padding: 12px 16px;
text-decoration: none;
display: block;
transition: background-color 0.2s;
}
.dropdown-content a:hover {
background-color: #34495e;
}
.dropdown:hover .dropdown-content {
display: block;
}
.dropbtn {
cursor: pointer;
}
</style>
</head>
<body>
<nav class="navbar">
@@ -13,15 +59,21 @@
<h2>🚀 API7EE Demo</h2>
</div>
<ul class="nav-menu">
<li><a href="/">Home</a></li>
<li><a href="/items">Items</a></li>
<li><a href="/users">Users</a></li>
<li><a href="/llm">LLM Chat</a></li>
<li><a href="/api/docs" target="_blank">API Docs</a></li>
<li><a href="/" title="Home page">🏠 Home</a></li>
<li><a href="/items" title="Manage items inventory">📦 Items</a></li>
<li><a href="/users" title="Manage users">👥 Users</a></li>
<li><a href="/llm" title="AI-powered chat">🤖 LLM Chat</a></li>
<li class="dropdown">
<a href="#" class="dropbtn" title="API Documentation">📚 API Docs ▾</a>
<div class="dropdown-content">
<a href="/api/" target="_blank" title="API root endpoint">API Root</a>
<a href="/api/docs" target="_blank" title="Swagger UI documentation">Swagger UI</a>
<a href="/api/redoc" target="_blank" title="ReDoc documentation">ReDoc</a>
<a href="/api/openapi.json" target="_blank" title="OpenAPI JSON schema">OpenAPI JSON</a>
</div>
</li>
<li>
<a href="#" onclick="openApiConfig(event)"
>⚙️ API Config</a
>
<a href="#" onclick="openApiConfig(event)" title="Configure API settings">⚙️ Settings</a>
</li>
</ul>
</div>
@@ -85,7 +137,7 @@
<footer class="footer">
<div class="container">
<p>&copy; 2025 API7EE Demo | Powered by FastAPI & API7</p>
<p>&copy; 2025 API7EE Demo | Powered by FastAPI & API7 Enterprise Gateway</p>
</div>
</footer>