AYS | AWS Monitoring Infrastructure



This document describes how to install and configure the ELK (Elasticsearch, Logstash, Kibana) Stack, Prometheus, and Grafana with Docker Compose. In line with the AYS Naming Standards, the installation was performed on a t3.large virtual machine named production-ec2-ubuntu-monitoring-001. ELK is attached to an Application Load Balancer (production-monitoring-alb), and no Security Group opens more ports than necessary.

 

ELK Stack Installation (Docker Compose)


To avoid losing data from the ELK containers, the installation was carried out in two stages:

  1. Performing a clean installation with the files obtained from the official website

  2. Copying the required files out of the created containers and mounting the copies as volumes in a new docker-compose.yml file.

Requirements

  • A server with Docker and Docker Compose installed

  • A virtual machine with at least 8 GB of RAM and 2 CPUs (t3.large recommended)

  • A Security Group configured appropriately for secure access
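As an illustration of the Security Group point, the ports this document ends up exposing (9200, 5601, 5044, 9090, 9100, 3000) can be opened one by one with the AWS CLI. The sketch below only prints the commands for review instead of executing them; the group ID and source CIDR are placeholders, not values from the real environment:

```shell
# Placeholder values - substitute the real Security Group ID and a trusted CIDR.
SG_ID="sg-0123456789abcdef0"
TRUSTED_CIDR="10.0.0.0/16"

# Print (rather than run) one authorize-security-group-ingress call per port,
# so the rules can be reviewed before applying them.
for port in 9200 5601 5044 9090 9100 3000; do
  echo aws ec2 authorize-security-group-ingress \
    --group-id "$SG_ID" --protocol tcp \
    --port "$port" --cidr "$TRUSTED_CIDR"
done
```

Removing the `echo` applies the rules for real; ports that only need to be reachable from the load balancer can instead reference the ALB's own Security Group as the source.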

Installation Steps

Preparing the Docker Compose File

Create the following docker-compose.yml file:

volumes:
  certs:
    driver: local
  esdata01:
    driver: local
  kibanadata:
    driver: local
  logstashdata01:
    driver: local

networks:
  default:
    name: elastic
    external: false

services:
  setup:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: elk-setup-container
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
    user: "0"
    command: >
      bash -c '
        if [ x${ELASTIC_PASSWORD} == x ]; then
          echo "Set the ELASTIC_PASSWORD environment variable in the .env file";
          exit 1;
        elif [ x${KIBANA_PASSWORD} == x ]; then
          echo "Set the KIBANA_PASSWORD environment variable in the .env file";
          exit 1;
        fi;
        if [ ! -f config/certs/ca.zip ]; then
          echo "Creating CA";
          bin/elasticsearch-certutil ca --silent --pem -out config/certs/ca.zip;
          unzip config/certs/ca.zip -d config/certs;
        fi;
        if [ ! -f config/certs/certs.zip ]; then
          echo "Creating certs";
          echo -ne \
          "instances:\n"\
          "  - name: es01\n"\
          "    dns:\n"\
          "      - es01\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          "  - name: kibana\n"\
          "    dns:\n"\
          "      - kibana\n"\
          "      - localhost\n"\
          "    ip:\n"\
          "      - 127.0.0.1\n"\
          > config/certs/instances.yml;
          bin/elasticsearch-certutil cert --silent --pem -out config/certs/certs.zip --in config/certs/instances.yml --ca-cert config/certs/ca/ca.crt --ca-key config/certs/ca/ca.key;
          unzip config/certs/certs.zip -d config/certs;
        fi;
        echo "Setting file permissions"
        chown -R root:root config/certs;
        find . -type d -exec chmod 750 \{\} \;;
        find . -type f -exec chmod 640 \{\} \;;
        echo "Waiting for Elasticsearch availability";
        until curl -s --cacert config/certs/ca/ca.crt https://es01:9200 | grep -q "missing authentication credentials"; do sleep 30; done;
        echo "Setting kibana_system password";
        until curl -s -X POST --cacert config/certs/ca/ca.crt -u "elastic:${ELASTIC_PASSWORD}" -H "Content-Type: application/json" https://es01:9200/_security/user/kibana_system/_password -d "{\"password\":\"${KIBANA_PASSWORD}\"}" | grep -q "^{}"; do sleep 10; done;
        echo "All done!";
      '
    healthcheck:
      test: ["CMD-SHELL", "[ -f config/certs/es01/es01.crt ]"]
      interval: 1s
      timeout: 5s
      retries: 120

  es01:
    depends_on:
      setup:
        condition: service_healthy
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    container_name: elasticsearch
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - certs:/usr/share/elasticsearch/config/certs
      - esdata01:/usr/share/elasticsearch/data
    ports:
      - 9200:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    container_name: kibana
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - certs:/usr/share/kibana/config/certs
      - kibanadata:/usr/share/kibana/data
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  logstash:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    container_name: logstash
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - logstashdata01:/usr/share/logstash/data
      - certs:/usr/share/logstash/certs
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    environment:
      - NODE_NAME="logstash"
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"
    mem_limit: ${LS_MEM_LIMIT}

Environment Variables (.env File)

Create the following .env file and adjust its contents as needed:

# Password for the 'elastic' user (at least 6 characters)
ELASTIC_PASSWORD=<changeme>

# Password for the 'kibana_system' user (at least 6 characters)
KIBANA_PASSWORD=<changeme>

# Version of Elastic products
STACK_VERSION=<version>

# Set the cluster name
CLUSTER_NAME=<cluster_name>

# Set to 'basic' or 'trial' to automatically start the 30-day trial
LICENSE=basic
#LICENSE=trial

# Port to expose Elasticsearch HTTP API to the host
ES_PORT=9200

# Port to expose Kibana to the host
KIBANA_PORT=5601

# Memory Limits
ES_MEM_LIMIT=4294967296
KB_MEM_LIMIT=1073741824
LS_MEM_LIMIT=1073741824

# Encryption Key
ENCRYPTION_KEY=<encryption_key>

The following command can be used to generate an ENCRYPTION_KEY:

openssl rand -base64 32
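Since the stack refuses to start properly with unset passwords, a small helper can catch leftover placeholders early. The function below is an assumption of ours, not part of the original setup:

```shell
# Hypothetical helper: refuse to start the stack while the env file
# still contains the placeholder values shown above.
check_env_file() {
  file="$1"
  if [ ! -f "$file" ]; then
    echo "missing: $file"
    return 1
  fi
  # Any of the placeholder markers from the sample .env means it is unfinished.
  if grep -qE '<changeme>|<version>|<cluster_name>|<encryption_key>' "$file"; then
    echo "placeholders remain in $file"
    return 1
  fi
  echo "ok: $file"
}

# Usage, before starting the stack:
#   check_env_file .env && docker compose up -d
```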

Preparing the Logstash Files

logstash.conf:

input {
  tcp {
    port => 5044
    codec => json_lines {
      ecs_compatibility => disabled
    }
    mode => "server"
  }
}

output {
  elasticsearch {
    index => "ays-production-logs-%{+YYYY.MM.dd}"
    hosts => ["https://es01:9200"]
    user => "elastic"
    password => "<changeme>"
    ssl_enabled => true
    ssl_certificate_authorities => ["/usr/share/logstash/certs/ca/ca.crt"]
    timeout => 86400
  }
}

logstash.yml:

api.http.host: 0.0.0.0
node.name: ${NODE_NAME}
pipeline.buffer.type: heap
queue.max_bytes: 5gb
queue.type: persisted
xpack.monitoring.elasticsearch.hosts:
  - http://es01:9200
xpack.monitoring.enabled: ${xpack.monitoring.enabled}
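With the pipeline above listening on TCP 5044 with a json_lines codec, a quick smoke test can be sent from the host once the stack is running (assuming `nc` is available; the message fields are arbitrary examples):

```shell
# Send one JSON line to the Logstash TCP input; -w 1 closes the
# connection after a one-second idle timeout.
printf '%s\n' '{"message":"logstash smoke test","level":"INFO"}' | nc -w 1 localhost 5044

# The event should then show up in the daily index, e.g.:
#   curl --cacert <path-to-copied-ca.crt> -u "elastic:${ELASTIC_PASSWORD}" \
#     "https://localhost:9200/ays-production-logs-*/_count"
```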

Starting the Services and Copying the Files
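The first-stage stack (assuming docker-compose.yml, .env, and logstash.conf sit in the same directory) is brought up in the usual way:

```shell
# Start all services in the background.
docker compose up -d

# The one-shot setup container generates the CA/certificates and sets the
# kibana_system password; its progress can be followed with:
docker logs -f elk-setup-container
```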

After starting the ELK Stack:

  1. For Elasticsearch: /usr/share/elasticsearch/data and /usr/share/elasticsearch/config

  2. For Kibana: /usr/share/kibana/data and /usr/share/kibana/config

  3. For Logstash: /usr/share/logstash/data, /usr/share/logstash/config, and /usr/share/logstash/certs

these directories must be copied out of the containers.
Example copy command:

docker cp logstash:/usr/share/logstash/certs <path-to-copy>/
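For completeness, copying everything listed above might look like this; the target directory names follow the bind mounts used by the second compose file, and `<path-to-copy>` remains a placeholder for the destination on the host:

```shell
# Copy data, config, and certs out of the running containers
# (<path-to-copy> is a placeholder for the target directory).
docker cp elasticsearch:/usr/share/elasticsearch/data   <path-to-copy>/elasticsearch-volume/data
docker cp elasticsearch:/usr/share/elasticsearch/config <path-to-copy>/elasticsearch-volume/config
docker cp kibana:/usr/share/kibana/data                 <path-to-copy>/kibana-volume/data
docker cp kibana:/usr/share/kibana/config               <path-to-copy>/kibana-volume/config
docker cp logstash:/usr/share/logstash/data             <path-to-copy>/logstash-volume/data
docker cp logstash:/usr/share/logstash/config           <path-to-copy>/logstash-volume/config
docker cp logstash:/usr/share/logstash/certs            <path-to-copy>/logstash-volume/certs
```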

 

Preparing the New Docker Compose File

Once the files listed above have been copied, stop the containers and either create a new docker-compose.yml or modify the previous one as follows.

docker-compose.yml:

volumes:
  elasticsearch-volume:
  kibana-volume:
  logstash-volume:

networks:
  default:
    name: monitoring-network
    driver: bridge

services:
  es01:
    image: docker.elastic.co/elasticsearch/elasticsearch:${STACK_VERSION}
    restart: always
    container_name: elasticsearch
    labels:
      co.elastic.logs/module: elasticsearch
    volumes:
      - ./elasticsearch-volume/data:/usr/share/elasticsearch/data
      - ./elasticsearch-volume/config:/usr/share/elasticsearch/config
    ports:
      - 9200:9200
    environment:
      - node.name=es01
      - cluster.name=${CLUSTER_NAME}
      - cluster.initial_master_nodes=es01
      - discovery.seed_hosts=es01
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - bootstrap.memory_lock=true
      - xpack.security.enabled=true
      - xpack.security.http.ssl.enabled=true
      - xpack.security.http.ssl.key=certs/es01/es01.key
      - xpack.security.http.ssl.certificate=certs/es01/es01.crt
      - xpack.security.http.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.enabled=true
      - xpack.security.transport.ssl.key=certs/es01/es01.key
      - xpack.security.transport.ssl.certificate=certs/es01/es01.crt
      - xpack.security.transport.ssl.certificate_authorities=certs/ca/ca.crt
      - xpack.security.transport.ssl.verification_mode=certificate
      - xpack.license.self_generated.type=${LICENSE}
    mem_limit: ${ES_MEM_LIMIT}
    ulimits:
      memlock:
        soft: -1
        hard: -1
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s --cacert config/certs/ca/ca.crt https://localhost:9200 | grep -q 'missing authentication credentials'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  kibana:
    depends_on:
      es01:
        condition: service_healthy
    image: docker.elastic.co/kibana/kibana:${STACK_VERSION}
    restart: always
    container_name: kibana
    labels:
      co.elastic.logs/module: kibana
    volumes:
      - ./kibana-volume/data:/usr/share/kibana/data
      - ./kibana-volume/config:/usr/share/kibana/config
    ports:
      - ${KIBANA_PORT}:5601
    environment:
      - SERVERNAME=kibana
      - ELASTICSEARCH_HOSTS=https://es01:9200
      - ELASTICSEARCH_USERNAME=kibana_system
      - ELASTICSEARCH_PASSWORD=${KIBANA_PASSWORD}
      - ELASTICSEARCH_SSL_CERTIFICATEAUTHORITIES=config/certs/ca/ca.crt
      - XPACK_SECURITY_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_ENCRYPTEDSAVEDOBJECTS_ENCRYPTIONKEY=${ENCRYPTION_KEY}
      - XPACK_REPORTING_ENCRYPTIONKEY=${ENCRYPTION_KEY}
    mem_limit: ${KB_MEM_LIMIT}
    healthcheck:
      test:
        [
          "CMD-SHELL",
          "curl -s -I http://localhost:5601 | grep -q 'HTTP/1.1 302 Found'",
        ]
      interval: 10s
      timeout: 10s
      retries: 120

  logstash:
    depends_on:
      es01:
        condition: service_healthy
      kibana:
        condition: service_healthy
    image: docker.elastic.co/logstash/logstash:${STACK_VERSION}
    restart: always
    container_name: logstash
    labels:
      co.elastic.logs/module: logstash
    user: root
    volumes:
      - ./logstash-volume/data:/usr/share/logstash/data
      - ./logstash-volume/config:/usr/share/logstash/config
      - ./logstash-volume/certs:/usr/share/logstash/certs
      - ./logstash.conf:/usr/share/logstash/pipeline/logstash.conf:ro
    environment:
      - NODE_NAME="logstash"
      - xpack.monitoring.enabled=false
      - ELASTIC_USER=elastic
      - ELASTIC_PASSWORD=${ELASTIC_PASSWORD}
      - ELASTIC_HOSTS=https://es01:9200
    command: logstash -f /usr/share/logstash/pipeline/logstash.conf
    ports:
      - "5044:5044"
    mem_limit: ${LS_MEM_LIMIT}

As long as the file paths are correct, the containers will run without problems.
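A couple of quick checks can confirm this, assuming the commands are run from the compose directory so that the copied CA certificate sits under elasticsearch-volume/config/certs (a path that follows from the bind mounts above):

```shell
# Elasticsearch: cluster health over TLS, authenticated as 'elastic'.
curl --cacert elasticsearch-volume/config/certs/ca/ca.crt \
     -u "elastic:${ELASTIC_PASSWORD}" \
     "https://localhost:9200/_cluster/health?pretty"

# Kibana: a ready instance answers the root URL with an HTTP redirect.
curl -s -I http://localhost:5601 | head -n 1
```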

Prometheus and Grafana Installation (Docker Compose)

Preparing the Docker Compose File

docker-compose.yml:

volumes:
  grafana-volume:
  prometheus-volume:

networks:
  monitoring-network:
    driver: bridge

services:
  prometheus:
    container_name: prometheus
    image: prom/prometheus:${PROMETHEUS_IMAGE_VERSION}
    restart: always
    ports:
      - "9090:9090"
    environment:
      - PROMETHEUS_CONFIG_PATH=${PROMETHEUS_CONFIG_PATH}
    volumes:
      - ./prometheus-volume:/prometheus
      - ./prometheus.yml:/etc/prometheus/prometheus.yml
    networks:
      - monitoring-network

  node-exporter:
    container_name: node-exporter
    image: prom/node-exporter:${NODE_EXPORTER_IMAGE_VERSION}
    restart: always
    ports:
      - "9100:9100"
    networks:
      - monitoring-network

  grafana:
    container_name: grafana
    image: grafana/grafana:${GRAFANA_IMAGE_VERSION}
    restart: always
    environment:
      - GF_SECURITY_ADMIN_USER=${GF_SECURITY_ADMIN_USER}
      - GF_SECURITY_ADMIN_PASSWORD=${GF_SECURITY_ADMIN_PASSWORD}
    ports:
      - "3000:3000"
    depends_on:
      - prometheus
    volumes:
      - ./grafana-volume:/var/lib/grafana
      - ./grafana-volume/logs:/var/log/grafana
    networks:
      - monitoring-network

Prometheus Configuration

prometheus.yml:

global:
  scrape_interval: 15s

scrape_configs:
  - job_name: 'ays-production-backend-ecs'
    metrics_path: '/public/actuator/prometheus'
    static_configs:
      - targets: ['servis.afetyonetimsistemi.org']
  - job_name: 'node-exporter'
    static_configs:
      - targets: ['node-exporter:9100']
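Two optional checks for this configuration (promtool ships inside the prom/prometheus image, and the targets endpoint is part of the Prometheus HTTP API):

```shell
# Validate prometheus.yml before (re)starting the container.
docker run --rm --entrypoint promtool \
  -v "$PWD/prometheus.yml:/etc/prometheus/prometheus.yml:ro" \
  prom/prometheus check config /etc/prometheus/prometheus.yml

# Once running, confirm that both scrape jobs report as healthy.
curl -s http://localhost:9090/api/v1/targets | grep -o '"health":"[a-z]*"'
```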

Environment Variables (.env File)

Create the following .env file and adjust its contents as needed:

.env :

################################# Prometheus and Grafana #################################

# Prometheus config
PROMETHEUS_CONFIG_PATH=/etc/prometheus/prometheus.yml

# Grafana credentials
GF_SECURITY_ADMIN_USER=admin
GF_SECURITY_ADMIN_PASSWORD=<changeme>

# Image versions
PROMETHEUS_IMAGE_VERSION=<version>
NODE_EXPORTER_IMAGE_VERSION=<version>
GRAFANA_IMAGE_VERSION=<version>

Optionally, the two docker-compose.yml files and the two .env files can each be merged into a single file, with the relevant directories moved into place, so that all containers can be managed with a single command.
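A sketch of that option, keeping the stacks as separate compose files but driving them as one project (the file names here are assumptions; in the original setup both files are simply called docker-compose.yml):

```shell
# Run both stacks as a single Docker Compose project from one directory,
# using a single merged .env file.
docker compose \
  -f docker-compose.elk.yml \
  -f docker-compose.monitoring.yml \
  --env-file .env \
  up -d
```

Note that merging this way puts all services into one Compose project, so the network definitions of the two files would also need to be reconciled.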

 

REFERENCES