This project implements an e-commerce system using a microservices architecture. Each service is built with Flask, and the project is designed to be cloud-native, leveraging Kubernetes for orchestration and Prometheus and Grafana for observability.
As per the requirements, the following have been achieved; all work is cited within the relevant manifest files:
- Debug and deploy the services using the provided manifests: each service and third-party app has its own manifest files in a folder named `k8s`.
- Address issues in the CI/CD pipeline: an automated CI/CD pipeline has been set up and is passing; click the badge for more information.
- Use logs and metrics to identify and fix issues: Prometheus and integrated Grafana are used, along with local logs mounted on the containers (git-excluded for security).
- Provide solutions for security and cost optimization: Istio provides the service mesh for secure intra-service communication (mTLS, RBAC, and Envoy sidecar injection enabled). The project uses the host ecommerce.local and routes traffic appropriately; for cost, it uses lightweight containers and a lightweight Kubernetes-in-Docker (Kind) setup.
```
.
├── LICENSE
├── README.md
├── app
│   ├── catalog
│   │   ├── Dockerfile
│   │   ├── app.py
│   │   ├── data
│   │   │   └── catalogue_data.json
│   │   ├── gunicorn-config.py
│   │   ├── k8s
│   │   │   ├── deployment.yaml
│   │   │   ├── hpa.yaml
│   │   │   └── service.yaml
│   │   ├── requirements.txt
│   │   └── utils
│   │       ├── __init__.py
│   │       └── logger.py
│   ├── frontend
│   │   ├── Dockerfile
│   │   ├── app.py
│   │   ├── gunicorn-config.py
│   │   ├── k8s
│   │   │   ├── deployment.yaml
│   │   │   ├── hpa.yaml
│   │   │   └── service.yaml
│   │   ├── requirements.txt
│   │   └── utils
│   │       ├── __init__.py
│   │       └── logger.py
│   ├── order
│   │   ├── Dockerfile
│   │   ├── app.py
│   │   ├── gunicorn-config.py
│   │   ├── k8s
│   │   │   ├── deployment.yaml
│   │   │   ├── hpa.yaml
│   │   │   └── service.yaml
│   │   ├── requirements.txt
│   │   └── utils
│   │       ├── __init__.py
│   │       └── logger.py
│   └── search
│       ├── Dockerfile
│       ├── app.py
│       ├── data
│       │   └── search_data.json
│       ├── gunicorn-config.py
│       ├── k8s
│       │   ├── deployment.yaml
│       │   ├── hpa.yaml
│       │   └── service.yaml
│       ├── requirements.txt
│       └── utils
│           ├── __init__.py
│           └── logger.py
├── ci_cd
│   └── README.md
├── elasticsearch
│   └── k8s
│       ├── deployment.yaml
│       └── service.yaml
├── grafana
│   ├── dashboards
│   │   └── flask-services.json
│   └── k8s
│       ├── dashboard-provisioning.yaml
│       ├── datasource.yaml
│       ├── deployment.yaml
│       └── service.yaml
├── istio
│   └── k8s
│       ├── auth-policy.yaml
│       ├── deployment.yaml
│       ├── mesh-config.yaml
│       └── service.yaml
├── kind
│   └── k8s
│       ├── kind-config.yaml
│       └── storage-class.yaml
├── logs_and_metrics
│   ├── catalog
│   ├── frontend
│   ├── order
│   └── search
├── manifests
│   └── README.md
├── nginx
│   └── k8s
│       ├── deployment.yaml
│       └── service.yaml
├── postgres
│   └── k8s
│       ├── deployment.yaml
│       └── service.yaml
├── prometheus
│   ├── k8s
│   │   ├── config
│   │   │   └── prometheus.yml
│   │   ├── deployment.yaml
│   │   └── service.yaml
│   └── prometheus-configmap.yaml
├── rabbitmq
│   └── k8s
│       ├── deployment.yaml
│       └── service.yaml
├── scripts
│   ├── deploy-helm.sh
│   ├── deploy-kubectl.sh
│   └── test-local.sh
└── secrets.yaml
```
**Catalog Service**
- Purpose: Manages product catalog data.
- Endpoints:
  - `/catalog`: Fetch catalog data.
  - `/metrics`: Metrics for Prometheus.
  - `/health`: Health check endpoint.
- Integrations:
  - PostgreSQL for storage.
  - Prometheus for metrics collection.
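As a minimal sketch of how such a service might be laid out (route names and the port come from this README; the data path, metric name, and counter logic are illustrative assumptions — the real services would more likely expose `/metrics` via `prometheus_client`):

```python
# Hypothetical sketch of a catalog-style Flask app; only the route names
# and port are taken from this README, everything else is illustrative.
import json
from pathlib import Path

from flask import Flask, Response, jsonify

app = Flask(__name__)
DATA_FILE = Path("data") / "catalogue_data.json"  # assumed location

# Hand-rolled request counter for illustration; the real service would
# likely use prometheus_client's Counter and generate_latest() instead.
request_count = {"catalog": 0}

@app.route("/catalog")
def catalog():
    """Fetch catalog data from the bundled JSON file (empty if missing)."""
    request_count["catalog"] += 1
    items = json.loads(DATA_FILE.read_text()) if DATA_FILE.exists() else []
    return jsonify(items)

@app.route("/metrics")
def metrics():
    """Expose metrics in the Prometheus text exposition format."""
    body = (
        "# HELP catalog_requests_total Requests to /catalog\n"
        "# TYPE catalog_requests_total counter\n"
        f"catalog_requests_total {request_count['catalog']}\n"
    )
    return Response(body, mimetype="text/plain")

@app.route("/health")
def health():
    """Liveness/readiness probe target for the k8s deployment."""
    return jsonify({"status": "ok"})

if __name__ == "__main__":
    app.run(host="0.0.0.0", port=5001)  # catalog's documented port
```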
**Frontend Service**
- Purpose: Acts as a gateway for the user-facing application.
- Endpoints:
  - `/`: Home route.
  - `/health`: Health check.
**Order Service**
- Purpose: Manages customer orders.
- Endpoints:
  - `/create-order`: Handles new orders.
  - `/metrics`: Metrics for Prometheus.
  - `/health`: Health check endpoint.
- Integrations:
  - RabbitMQ for message queueing.
  - PostgreSQL for order persistence.
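A sketch of how the order flow above might fit together — the helper below validates an order and serialises it for the queue. The queue name and field names are assumptions, and the actual RabbitMQ publish (e.g. via `pika`) is left as a comment so the sketch stays self-contained:

```python
# Illustrative sketch of the order service's queueing logic; the queue
# name, field names, and validation rules are assumptions, not taken
# from the repo's code.
import json
import uuid

ORDER_QUEUE = "orders"  # hypothetical queue name

def build_order_message(customer_id: str, items: list) -> bytes:
    """Validate an incoming /create-order payload and serialise it into
    the message body that would be published to RabbitMQ."""
    if not customer_id or not items:
        raise ValueError("customer_id and at least one item are required")
    order = {
        "order_id": str(uuid.uuid4()),
        "customer_id": customer_id,
        "items": items,
    }
    return json.dumps(order).encode("utf-8")

# In the real service, /create-order would then publish the message, e.g.:
#   channel.basic_publish(exchange="", routing_key=ORDER_QUEUE, body=body)
# before persisting the order row to PostgreSQL.
```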
**Search Service**
- Purpose: Provides search functionality over the catalog data.
- Endpoints:
  - `/search`: Query products.
  - `/metrics`: Metrics for Prometheus.
  - `/health`: Health check.
- Integrations:
  - Elasticsearch for search indexing.
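A sketch of how the search service might translate a `/search` query into an Elasticsearch request body — the index and field names are assumptions, and the actual call through the Elasticsearch client is shown only as a comment:

```python
# Illustrative only: the index and field names are assumptions, not
# taken from the repo.
def build_search_query(term: str, size: int = 10) -> dict:
    """Build an Elasticsearch query body for a product search."""
    return {
        "size": size,
        "query": {
            "multi_match": {
                "query": term,
                "fields": ["name", "description"],  # assumed fields
            }
        },
    }

# The real /search handler would then run something like:
#   es = Elasticsearch("http://elasticsearch.logging.svc.cluster.local:9200")
#   hits = es.search(index="products", body=build_search_query(term))
```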
- Replaces Nginx Ingress with Istio Service Mesh
- Provides mTLS encryption between services
- Implements fine-grained RBAC
- Manages traffic routing and load balancing
- mTLS Authentication
  - Automatic encryption between services
  - Certificate management handled by Istio
  - STRICT mode enforced across the namespace
- Authorization Policies
  - Frontend Service: Public access to `/` and `/health`
  - Catalog Service: Only accessible by Frontend
  - Order Service: Protected endpoints with method restrictions
  - Search Service: Controlled access from Frontend
- Traffic Management
  - Route definitions via Virtual Services
  - Load balancing across service instances
  - Circuit breaking and fault injection capabilities
- All external traffic routes through Istio Ingress Gateway
- Internal service-to-service communication secured by mTLS
- Original ports and endpoints remain unchanged
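The mTLS and RBAC behaviour described above would typically be expressed in manifests like the following. This is a hedged sketch only — the project's actual resources live in `istio/k8s/auth-policy.yaml` and `mesh-config.yaml`, and the names, labels, and service account here are assumptions:

```yaml
# Sketch only -- see istio/k8s/ for the project's actual policies.
apiVersion: security.istio.io/v1beta1
kind: PeerAuthentication
metadata:
  name: default
  namespace: ecommerce
spec:
  mtls:
    mode: STRICT            # enforce mTLS across the namespace
---
apiVersion: security.istio.io/v1beta1
kind: AuthorizationPolicy
metadata:
  name: catalog-from-frontend   # hypothetical name
  namespace: ecommerce
spec:
  selector:
    matchLabels:
      app: catalog              # assumed workload label
  rules:
    - from:
        - source:
            principals:
              - cluster.local/ns/ecommerce/sa/frontend  # assumed service account
```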
- Kubernetes cluster (local or cloud-based).
- `kubectl` installed and configured.
- Helm installed for package management.
- Deploy Secrets:

  ```bash
  kubectl apply -f secrets.yaml
  ```
- Deploy Services:

  ```bash
  ./scripts/deploy-kubectl.sh
  ```
- Verify Resources:

  ```bash
  kubectl get pods -n ecommerce
  kubectl get services -n ecommerce
  ```
- Deploy Helm Charts:

  ```bash
  ./scripts/deploy-helm.sh
  ```
- Access Services:
  - Frontend: <Node_IP>:
  - Metrics: Access Prometheus and Grafana for system observability.
- Prometheus: Scrapes metrics from the microservices and system components; configured with `prometheus.yml`.
- Grafana: Visualizes the metrics collected by Prometheus; dashboards are defined in `flask-services.json`.
- Logging: Local file logging under the `logs_and_metrics` folder; each service has a volume mount for its logs, configured with `utils/logger.py`.
To run this project locally you can use the following script:
Requirements:
- Kind
- Kubectl
- Istioctl
- Helm
Run Locally:
```bash
./scripts/test-local.sh
```
To better understand how the test script works, it is worth looking at `deploy-helm.sh`, which consolidates all of our Helm installations, and `deploy-kubectl.sh`, which consolidates all of our deployment, service, HPA, and configuration manifests and applies them to deploy our apps and third-party dependencies.
**Catalog Service**
- Port: 5001
- Endpoints:
  - `/catalog`: Fetch catalog data
  - `/metrics`: Prometheus metrics
  - `/health`: Health check endpoint
- Internal Service Name: catalog-service.ecommerce.svc.cluster.local
**Search Service**
- Port: 5002
- Endpoints:
  - `/search`: Query products
  - `/metrics`: Prometheus metrics
  - `/health`: Health check endpoint
- Internal Service Name: search-service.ecommerce.svc.cluster.local
**Order Service**
- Port: 5003
- Endpoints:
  - `/`: Home route
  - `/create-order`: Create new orders
  - `/metrics`: Prometheus metrics
  - `/health`: Health check endpoint
- Internal Service Name: order-service.ecommerce.svc.cluster.local
**Frontend Service**
- Port: 5004
- Endpoints:
  - `/`: Home route
  - `/metrics`: Prometheus metrics
  - `/health`: Health check endpoint
- Internal Service Name: frontend-service.ecommerce.svc.cluster.local
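Service-to-service calls inside the cluster use the internal DNS names listed here; a small helper like this (illustrative, not from the repo) shows how the frontend might address the backend services:

```python
# Illustrative helper -- not from the repo. The names and ports match
# the service descriptions in this README.
SERVICES = {
    "catalog": ("catalog-service", 5001),
    "search": ("search-service", 5002),
    "order": ("order-service", 5003),
}

def service_url(service: str, path: str = "/", namespace: str = "ecommerce") -> str:
    """Build a cluster-internal URL for one of the backend services."""
    name, port = SERVICES[service]
    if not path.startswith("/"):
        path = "/" + path
    return f"http://{name}.{namespace}.svc.cluster.local:{port}{path}"

# e.g. the frontend could then fetch catalog data with:
#   urllib.request.urlopen(service_url("catalog", "/catalog"))
```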
**Prometheus** (Monitoring): Scrapes the order, frontend, search & catalogue services for metrics to report.
- Port: 9090
- Internal Service Name: prometheus.monitoring.svc.cluster.local
- Access: http://localhost:9090
**Grafana** (Observability): Integrates with Prometheus; pre-configured dashboards highlight app-level info.
- Port: 3000
- Internal Service Name: grafana.monitoring.svc.cluster.local
- Access: http://localhost:3000
- Default Credentials: admin/admin
**Elasticsearch**: Provides a convenient way to search; the data folders contain some rows for simulating searches.
- Port: 9200
- Internal Service Name: elasticsearch.logging.svc.cluster.local
- Access: http://localhost:9200
**RabbitMQ**: Message brokering; orders are placed in a queue, can be polled for processing, and can be viewed on the management dashboard.
- Ports:
- 5672 (AMQP)
- 15672 (Management Interface)
- Internal Service Name: rabbitmq.messaging.svc.cluster.local
- Access: http://localhost:15672
- Default Credentials: admin/adminpassword
**PostgreSQL**: ACID-compliant persistent storage for orders, using a slim version 15 image.
- Port: 5432
- Internal Service Name: postgres-postgresql.database.svc.cluster.local
- Access: localhost:5432
- Default Credentials: postgres/postgrespass
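For illustration, the order persistence described above might look like the sketch below. The table schema and column names are assumptions, and `sqlite3` stands in for PostgreSQL so the sketch runs without a database server — the real service would connect with a PostgreSQL driver such as `psycopg2` (which uses `%s` placeholders instead of `?`):

```python
# Schema and column names are assumptions; sqlite3 is a stand-in for
# PostgreSQL here. The real service would connect to
# postgres-postgresql.database.svc.cluster.local:5432 instead.
import sqlite3

ORDERS_DDL = """
CREATE TABLE IF NOT EXISTS orders (
    order_id TEXT PRIMARY KEY,
    customer_id TEXT NOT NULL,
    total_cents INTEGER NOT NULL,
    created_at TEXT DEFAULT CURRENT_TIMESTAMP
)
"""

def save_order(conn, order_id: str, customer_id: str, total_cents: int) -> None:
    """Persist one order row inside a transaction (the ACID guarantee
    noted above)."""
    with conn:  # commits on success, rolls back on error
        conn.execute(
            "INSERT INTO orders (order_id, customer_id, total_cents) VALUES (?, ?, ?)",
            (order_id, customer_id, total_cents),
        )

if __name__ == "__main__":
    conn = sqlite3.connect(":memory:")
    conn.execute(ORDERS_DDL)
    save_order(conn, "o-1", "cust-1", 4999)
```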
**Nginx** (deprecated in favor of Istio): The folders and the deployment and service configs are still available.
- Port: 80
- Internal Service Name: nginx-ingress.ecommerce.svc.cluster.local
- Access: http://localhost:80
- Configuration: Managed via ConfigMap nginx-config
Steps 1, 2 & 3 run as a single stage; if it succeeds, the pipeline proceeds to steps 4, 5 & 6.
1. Install: Installs Helm charts for third parties and installs the apps' requirements.
2. Deploy: Specifies the deploy parameters and deploys the services into our Kind cluster.
3. Test: Runs a local test with Kind, ensuring we can create a cluster and that the services interoperate.
4. Build: Builds Docker images for each service locally and loads them into our Kind cluster.
5. Scan: Vulnerability scanning using Trivy (optional) to discover any CVEs present.
6. Publish: Automated publish using GitHub Actions to my public Docker Hub repo.
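A hedged sketch of how such a pipeline might be laid out in GitHub Actions — the job names, action versions, image names, and secrets are all assumptions, not the repository's actual workflow:

```yaml
# Sketch only; job names, action versions, and image names are assumptions.
name: ci-cd
on: [push]
jobs:
  install-deploy-test:          # steps 1-3 run as one stage
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Create Kind cluster
        uses: helm/kind-action@v1
      - name: Install charts and deploy services
        run: |
          ./scripts/deploy-helm.sh
          ./scripts/deploy-kubectl.sh
      - name: Local test against the cluster
        run: ./scripts/test-local.sh
  build-scan-publish:           # steps 4-6 run if the first stage succeeds
    needs: install-deploy-test
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Build image (one per service in practice)
        run: docker build -t "$DOCKERHUB_USER/catalog:latest" app/catalog
      - name: Scan with Trivy (optional)
        uses: aquasecurity/trivy-action@master
        with:
          image-ref: "$DOCKERHUB_USER/catalog:latest"
      - name: Publish to Docker Hub
        run: docker push "$DOCKERHUB_USER/catalog:latest"
```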
This project is licensed under the MIT License. See LICENSE for details. Credits to Bineyame (bineyame.afework@engie.com).