Installation environment: AWS EC2, Ubuntu 20.04
1. Installing the ELK stack
https://github.com/deviantony/docker-elk
1) Set vm.max_map_count on the host kernel
First, so that Elasticsearch can run without problems in a production environment, set vm.max_map_count as follows.
$ sysctl vm.max_map_count
vm.max_map_count = 65530
$ sysctl -w vm.max_map_count=262144
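Note that `sysctl -w` only changes the value until the next reboot. To make the setting persistent, it can also be written to `/etc/sysctl.conf` (standard sysctl behavior):

```shell
# Persist the setting across reboots (requires root)
echo 'vm.max_map_count=262144' | sudo tee -a /etc/sysctl.conf
# Reload /etc/sysctl.conf and apply the value immediately
sudo sysctl -p
```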
2) Installing the ELK stack with Docker
We will clone and use the GitHub repository below.
git clone https://github.com/deviantony/docker-elk.git
cd docker-elk
Since the stack is used for real-time log collection, it is a good idea to run Elasticsearch as a cluster so it can handle a large volume of logs.
vi docker-stack.yml
Configure Elasticsearch for Swarm mode.
version: '3.3'

services:

  elasticsearch:
    image: docker.elastic.co/elasticsearch/elasticsearch:7.15.0
    ports:
      - "9200:9200"
      - "9300:9300"
    configs:
      - source: elastic_config
        target: /usr/share/elasticsearch/config/elasticsearch.yml
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: fasoo12!
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      # ES Swarm Mode
      # ##############
      node.name: elk_elasticsearch.{{.Task.Slot}}
      # Clear single-node discovery so the nodes can form a cluster.
      discovery.type: ''
      # Force publishing on the 'elk' overlay.
      discovery.seed_hosts: tasks.elasticsearch
      cluster.initial_master_nodes: elk_elasticsearch.1,elk_elasticsearch.2,elk_elasticsearch.3
    networks:
      - elk
    deploy:
      mode: replicated
      # Three replicas, matching the nodes listed in cluster.initial_master_nodes.
      replicas: 3

  logstash:
    image: docker.elastic.co/logstash/logstash:7.15.0
    ports:
      - "5044:5044"
      - "5000:5000"
      - "9600:9600"
    configs:
      - source: logstash_config
        target: /usr/share/logstash/config/logstash.yml
      - source: logstash_pipeline
        target: /usr/share/logstash/pipeline/logstash.conf
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

  kibana:
    image: docker.elastic.co/kibana/kibana:7.15.0
    ports:
      - "5601:5601"
    configs:
      - source: kibana_config
        target: /usr/share/kibana/config/kibana.yml
    networks:
      - elk
    deploy:
      mode: replicated
      replicas: 1

configs:
  elastic_config:
    file: ./elasticsearch/config/elasticsearch.yml
  logstash_config:
    file: ./logstash/config/logstash.yml
  logstash_pipeline:
    file: ./logstash/pipeline/logstash.conf
  kibana_config:
    file: ./kibana/config/kibana.yml

networks:
  elk:
    driver: overlay
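The Swarm stack file above can then be deployed. This is a sketch, assuming Swarm mode has not yet been initialized on this host and that the stack is named `elk`, so that the service name expands to the `elk_elasticsearch.N` node names used in `cluster.initial_master_nodes`:

```shell
# One-time: turn this host into a single-node Swarm manager
docker swarm init

# Deploy the stack; the stack name 'elk' must match the node names
# (elk_elasticsearch.1, ...) referenced in cluster.initial_master_nodes
docker stack deploy -c docker-stack.yml elk

# Check that the services came up
docker stack services elk
```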
vi docker-compose.yml
Map the ports and specify the disk volumes.
version: '3.2'

services:

  elasticsearch:
    build:
      context: elasticsearch/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./elasticsearch/config/elasticsearch.yml
        target: /usr/share/elasticsearch/config/elasticsearch.yml
        read_only: true
      - type: volume
        source: elasticsearch
        target: /usr/share/elasticsearch/data
    ports:
      # No fixed host ports here, so the service can be scaled to several containers.
      - "9200"
      - "9300"
    environment:
      ES_JAVA_OPTS: "-Xmx256m -Xms256m"
      ELASTIC_PASSWORD: fasoo12!
      # Use single node discovery in order to disable production mode and avoid bootstrap checks.
      # see: https://www.elastic.co/guide/en/elasticsearch/reference/current/bootstrap-checks.html
      discovery.type: single-node
      node.max_local_storage_nodes: '3'
    networks:
      - elk

  logstash:
    build:
      context: logstash/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./logstash/config/logstash.yml
        target: /usr/share/logstash/config/logstash.yml
        read_only: true
      - type: bind
        source: ./logstash/pipeline
        target: /usr/share/logstash/pipeline
        read_only: true
    ports:
      - "5044:5044"
      - "5000:5000/tcp"
      - "5000:5000/udp"
      - "9600:9600"
    environment:
      LS_JAVA_OPTS: "-Xmx256m -Xms256m"
    networks:
      - elk
    depends_on:
      - elasticsearch

  kibana:
    build:
      context: kibana/
      args:
        ELK_VERSION: $ELK_VERSION
    volumes:
      - type: bind
        source: ./kibana/config/kibana.yml
        target: /usr/share/kibana/config/kibana.yml
        read_only: true
    ports:
      - "5601:5601"
    networks:
      - elk
    depends_on:
      - elasticsearch

networks:
  elk:
    driver: bridge

volumes:
  elasticsearch:
vi elasticsearch/Dockerfile
Additionally, install nori, the Korean morphological analyzer plugin, in Elasticsearch.
ARG ELK_VERSION
# https://www.docker.elastic.co/
FROM docker.elastic.co/elasticsearch/elasticsearch:${ELK_VERSION}
# Add your elasticsearch plugins setup here
# Example: RUN elasticsearch-plugin install analysis-icu
RUN elasticsearch-plugin install analysis-nori
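Once the stack is up, the plugin can be checked with Elasticsearch's `_analyze` API. A sketch, assuming the default `elastic` user and the `ELASTIC_PASSWORD` value from the compose file above:

```shell
# Ask Elasticsearch to tokenize a Korean sentence with the nori tokenizer;
# a JSON list of tokens in the response means the plugin is installed
curl -s -u 'elastic:fasoo12!' -H 'Content-Type: application/json' \
  -X POST 'http://localhost:9200/_analyze' \
  -d '{"tokenizer": "nori_tokenizer", "text": "동해물과 백두산이"}'
```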
vi kibana/config/kibana.yml
## Default Kibana configuration from kibana-docker.
## https://github.com/elastic/kibana-docker/blob/master/.tedi/template/kibana.yml.j2
#
server.name: kibana
server.host: "0"
elasticsearch.hosts: [ "http://elasticsearch:9200" ]
xpack.monitoring.ui.container.elasticsearch.enabled: true
Now build and start the containers, scaling Elasticsearch to three instances.
docker-compose build && docker-compose up -d --scale elasticsearch=3
After starting, checking the containers shows that everything is running.
The ports for each ELK component are:
Elasticsearch : 9200 / 9300
Logstash : 5044 / 5000 / 9600
Kibana : 5601
To verify the installation, open Kibana in a web browser at http://<ip-address>:5601.
Clicking "Add data" shows that Kibana is connected properly.
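Cluster health can also be checked directly against Elasticsearch. A sketch, using the `ELASTIC_PASSWORD` value set in the compose file:

```shell
# Query cluster health; "status" should be green or yellow once the nodes are up
curl -s -u 'elastic:fasoo12!' 'http://localhost:9200/_cluster/health?pretty'
```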
To shut the stack down (the -v flag also removes the named volumes):
docker-compose down -v
2. Installing Filebeat
# 1. Install filebeat
$ wget https://artifacts.elastic.co/downloads/beats/filebeat/filebeat-7.15.0-linux-x86_64.tar.gz
$ tar -xvzf filebeat-7.15.0-linux-x86_64.tar.gz
$ cd filebeat-7.15.0-linux-x86_64/
# 2. Edit the following settings in filebeat.yml
$ vi filebeat.yml
output.elasticsearch:
  hosts: ["localhost:9200"]
  username: "elastic"
  password: "<password>"

setup.kibana:
  host: "localhost:5601"
# 3. Enable the suricata module
$ ./filebeat modules enable suricata
# 4. Load the index template and dashboards, then start filebeat
$ ./filebeat setup -e
$ ./filebeat -e
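After Filebeat has been running for a while, the Suricata events should appear in daily `filebeat-*` indices. A sketch, using the same `elastic` credentials as above:

```shell
# List the filebeat-* indices Filebeat has created, with doc counts
curl -s -u 'elastic:fasoo12!' 'http://localhost:9200/_cat/indices/filebeat-*?v'
```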
Reference: https://velog.io/@seunghyeon/Suricata-ELK-%EC%97%B0%EB%8F%99