
Table of Contents
Using Prometheus for application monitoring
Using Elasticsearch and Logstash for log management

How to use Docker for application monitoring and log management

Nov 07, 2023, 4:58 PM

Docker has become an essential technology in modern applications, but monitoring applications and managing their logs in Docker remains a challenge. As Docker's networking features such as service discovery and load balancing keep improving, a complete, stable, and efficient application monitoring system becomes increasingly necessary.

In this article, we will briefly introduce the use of Docker for application monitoring and log management and give specific code examples.

Using Prometheus for application monitoring

Prometheus is an open-source, pull-based monitoring and alerting tool originally developed at SoundCloud. It is written in Go and widely used in microservice and cloud environments. As a monitoring tool, it can track the CPU, memory, network, and disk usage of Docker workloads, and it provides a multi-dimensional data model, flexible queries, alerting, and visualization, so you can react and make decisions quickly.

Note that Prometheus collects data in pull mode: it periodically scrapes the /metrics endpoint of the monitored application. The monitored application therefore has to expose a /metrics endpoint on an IP address and port that Prometheus can reach, which you configure when starting the application image. Below is a simple Node.js application.

const express = require('express')
const app = express()

app.get('/', (req, res) => {
  res.send('Hello World!')
})

// Expose metrics in the Prometheus text exposition format.
// The response is built without leading indentation and served as plain text.
app.get('/metrics', (req, res) => {
  res.set('Content-Type', 'text/plain; version=0.0.4')
  res.send(
    '# HELP api_calls_total Total API calls\n' +
    '# TYPE api_calls_total counter\n' +
    'api_calls_total 100\n'
  )
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})

In this code, the /metrics endpoint returns a single metric, api_calls_total, in the Prometheus text exposition format.
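Hard-coding the counter value keeps the example short; in a real service you would normally let a metrics library track it. The following sketch uses the prom-client npm package, which is not part of the original example, so treat the extra dependency and the exact calls as an assumption rather than the article's method.

// Variant of the application above using prom-client (assumed dependency: npm install prom-client)
const express = require('express')
const client = require('prom-client')

const app = express()

// A real counter, incremented on every request to the root route
const apiCalls = new client.Counter({
  name: 'api_calls_total',
  help: 'Total API calls'
})

app.get('/', (req, res) => {
  apiCalls.inc() // count this call
  res.send('Hello World!')
})

app.get('/metrics', async (req, res) => {
  res.set('Content-Type', client.register.contentType)
  res.end(await client.register.metrics())
})

app.listen(3000, () => {
  console.log('Example app listening on port 3000!')
})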

Next, pull the official Prometheus Docker image and create a docker-compose.yml file in which we run the Node.js application and let Prometheus collect its data.

version: '3'
services:
  node:
    image: node:lts
    # Mount the application source into the container so index.js (and its
    # node_modules, installed beforehand with npm install express) are available
    working_dir: /usr/src/app
    volumes:
      - ./:/usr/src/app
    command: node index.js
    ports:
      - 3000:3000

  prometheus:
    image: prom/prometheus:v2.25.2
    volumes:
      - ./prometheus:/etc/prometheus
    command:
      - '--config.file=/etc/prometheus/prometheus.yml'
      - '--storage.tsdb.retention.time=15d'
    ports:
      - 9090:9090

In the docker-compose.yml file, we define two services: node, which runs the Node.js application, and prometheus, which runs the monitoring service. The node service publishes port 3000, so the application's /metrics endpoint is reachable on the host through that port, and inside the Compose network Prometheus can scrape it at node:3000. Prometheus itself exposes its UI and API on port 9090.

Next, in the prometheus.yml file (placed in the local ./prometheus directory that is mounted into the container), we define the targets Prometheus should scrape.

global:
  scrape_interval:     15s
  evaluation_interval: 15s

scrape_configs:
  - job_name: 'node-exporter'
    static_configs:
    - targets: ['node:9100']

  - job_name: 'node-js-app'
    static_configs:
    - targets: ['node:3000']

In this file, we define two scrape jobs. The node-exporter job would collect host-level metrics from a node_exporter instance on port 9100; no such container is defined in the docker-compose.yml above, so this job can be removed or backed by an additional service. The node-js-app job scrapes the Node.js application: its targets entry combines the Compose service name node with port 3000, the port the application listens on.

Finally, run docker-compose up to start the application together with its monitoring service, then open the Prometheus UI on port 9090 and query the collected metrics (for example, api_calls_total) in the expression browser.
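As a quick sanity check, you can also query Prometheus's HTTP API directly. The small script below is only a sketch: it assumes Node.js 18 or newer (for the built-in fetch) and that port 9090 is published as in the docker-compose.yml above.

// check-metrics.js: ask Prometheus for the current value of api_calls_total
async function checkMetric() {
  const url = 'http://localhost:9090/api/v1/query?query=' + encodeURIComponent('api_calls_total')
  const res = await fetch(url)
  const body = await res.json()
  // body.data.result contains the current samples for the queried metric
  console.log(JSON.stringify(body.data.result, null, 2))
}

checkMetric().catch(console.error)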

Using Elasticsearch and Logstash for log management

In Docker, application log data is spread across different containers. To manage these logs in one central place, you can use Elasticsearch and Logstash from the ELK stack, which makes the logs much easier to monitor and analyze.

Before starting, pull the Docker images of Logstash and Elasticsearch and create a docker-compose.yml file.

In this file, we define three services. bls is an Nginx-based service used to simulate business logs: each request it serves is written to the Nginx access log. The logstash service is built from the official Logstash Docker image and is used to collect, filter, and forward the logs. The elasticsearch service stores the logs and makes them searchable.

version: '3'
services:
  bls:
    image: nginx:alpine
    volumes:
      - ./log:/var/log/nginx
      - ./public:/usr/share/nginx/html:ro
    ports:
      - "8000:80"
    logging:
      driver: "json-file"
      options:
        max-size: "10m"
        max-file: "10"

  logstash:
    image: logstash:7.10.1
    volumes:
      - ./logstash/pipeline:/usr/share/logstash/pipeline
      # Share the Nginx log directory so the pipeline below can read access.log
      - ./log:/var/log/nginx:ro
    environment:
      - "ES_HOST=elasticsearch"
    depends_on:
      - elasticsearch

  elasticsearch:
    image: elasticsearch:7.10.1
    environment:
      - "http.host=0.0.0.0"
      - "discovery.type=single-node"
    volumes:
      - ./elasticsearch:/usr/share/elasticsearch/data

In this configuration, the Nginx log directory inside the bls container is mapped to the ./log directory on the host, where the logstash service can also read it. In addition, the logging option configures Docker's json-file driver with a maximum log file size and file count, limiting the disk space that container logs can occupy.

For the logstash service, we define a pipeline file named nginx_pipeline.conf (placed in the local ./logstash/pipeline directory mounted above). This file handles the collection, filtering, and forwarding of the Nginx logs: Logstash processes each received log line and sends the result to the Elasticsearch instance defined earlier. The pipeline contains the following processing logic:

input {
  file {
    path => "/var/log/nginx/access.log"
  }
}

filter {
  grok {
    match => { "message" => "%{COMBINEDAPACHELOG}" }
  }
}

output {
  elasticsearch {
    hosts => [ "${ES_HOST}:9200" ]
    index => "nginx_log_index"
  }
}

In this configuration, we define a file input that reads the Nginx access log from the local filesystem. Next, a grok filter parses each line that matches the COMBINEDAPACHELOG pattern into structured fields. Finally, the elasticsearch output ships the parsed events to the Elasticsearch cluster, whose address is passed into the container through the ES_HOST environment variable, and indexes them under nginx_log_index.
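To make the filter step concrete: a combined-format access log line is split by the COMBINEDAPACHELOG pattern into named fields before being indexed. The document below is only an illustration of what such an event might look like in Elasticsearch; the values are invented, and the exact fields depend on the grok pattern and any further filters.

{
  "message": "<the original access log line>",
  "clientip": "172.18.0.1",
  "timestamp": "07/Nov/2023:16:58:00 +0000",
  "verb": "GET",
  "request": "/index.html",
  "httpversion": "1.1",
  "response": "200",
  "bytes": "612",
  "agent": "\"curl/8.0.1\""
}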

With the configuration above in place, we have an efficient log management system: every log is shipped to a central store where it can easily be searched, filtered, and visualized.
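For example, once the bls service has handled some requests, you can search the index from a small Node.js script. This is only a sketch: it assumes Node.js 18 or newer (for the built-in fetch) and that Elasticsearch's port 9200 has been made reachable from the host, for instance by adding a ports mapping such as 9200:9200 to the elasticsearch service, which the compose file above does not include by default.

// search-logs.js: fetch a few parsed Nginx log entries from the nginx_log_index index
async function searchLogs() {
  const res = await fetch('http://localhost:9200/nginx_log_index/_search', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({
      size: 5,
      query: { match: { response: '200' } } // 'response' is a field produced by the COMBINEDAPACHELOG pattern
    })
  })
  const data = await res.json()
  for (const hit of data.hits.hits) {
    console.log(hit._source.message)
  }
}

searchLogs().catch(console.error)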

The above is the detailed content of How to use Docker for application monitoring and log management. For more information, please follow other related articles on the PHP Chinese website!
