Monitoring Redis Droplets Using Redis Exporter Service
Jan 06, 2025, 10:19 AM

Method 1: Manual Configuration
Let’s proceed with the manual configuration method in this section.
Create Prometheus System User and Group
Create a system user and group named “prometheus” to manage the exporter service.
sudo groupadd --system prometheus
sudo useradd -s /sbin/nologin --system -g prometheus prometheus
Download and Install Redis Exporter
Download the latest release of Redis Exporter from GitHub, extract the downloaded files, and move the binary to the /usr/local/bin/ directory.
curl -s https://api.github.com/repos/oliver006/redis_exporter/releases/latest | grep browser_download_url | grep linux-amd64 | cut -d '"' -f 4 | wget -qi -
tar xvf redis_exporter-*.linux-amd64.tar.gz
sudo mv redis_exporter-*.linux-amd64/redis_exporter /usr/local/bin/
Verify Redis Exporter Installation
redis_exporter --version
The command should print the installed Redis Exporter version and build details.
Configure systemd Service for Redis Exporter
Create a systemd service unit file to manage the Redis Exporter service.
sudo vim /etc/systemd/system/redis_exporter.service
Add the following content to the file:
[Unit]
Description=Prometheus Redis Exporter
Documentation=https://github.com/oliver006/redis_exporter
Wants=network-online.target
After=network-online.target

[Service]
Type=simple
User=prometheus
Group=prometheus
ExecReload=/bin/kill -HUP $MAINPID
ExecStart=/usr/local/bin/redis_exporter \
  --log-format=txt \
  --namespace=redis \
  --web.listen-address=:9121 \
  --web.telemetry-path=/metrics
SyslogIdentifier=redis_exporter
Restart=always

[Install]
WantedBy=multi-user.target
Reload systemd and Start Redis Exporter Service
sudo systemctl daemon-reload
sudo systemctl enable redis_exporter
sudo systemctl start redis_exporter
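Once the service is running, the exporter serves metrics on port 9121 in the Prometheus text exposition format (for example, via `curl localhost:9121/metrics`). As a rough illustration of what that output contains, here is a minimal Python sketch that parses a few sample exposition lines; the sample values are made up, not captured from a real droplet. `redis_up` is 1 when the exporter can reach Redis:

```python
# Minimal sketch: parse Prometheus exposition-format lines like those
# served by Redis Exporter at http://<droplet-ip>:9121/metrics.
# The sample text below is illustrative, not real exporter output.

def parse_metrics(text: str) -> dict:
    """Map metric names (labels stripped) to their float values."""
    metrics = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or line.startswith("#"):   # skip blank and HELP/TYPE lines
            continue
        name_part, _, value = line.rpartition(" ")
        name = name_part.split("{", 1)[0]      # drop any {label="..."} block
        metrics[name] = float(value)
    return metrics

sample = """\
# HELP redis_up Information about the Redis instance
redis_up 1
redis_connected_clients 4
redis_memory_used_bytes{instance="db1"} 1048576
"""

parsed = parse_metrics(sample)
print(parsed["redis_up"])  # -> 1.0, meaning the exporter can reach Redis
```

Note this sketch assumes label values contain no spaces; it is a reading aid, not a replacement for a real Prometheus client.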
Configuring the Prometheus Droplet (Manual Method)
Let’s configure the Prometheus droplet for the manual method.
Take a backup of the prometheus.yml file
cp /etc/prometheus/prometheus.yml /etc/prometheus/prometheus.yml-$(date +'%d%b%Y-%H:%M')
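For illustration only, the timestamp suffix that `date +'%d%b%Y-%H:%M'` appends to the backup filename can be reproduced with Python's `strftime`, which uses the same format codes:

```python
# Sketch: reproduce the backup-filename suffix from date +'%d%b%Y-%H:%M'.
from datetime import datetime

def backup_name(base: str, when: datetime) -> str:
    """Append a day-month-year-time suffix matching the shell date format."""
    return f"{base}-{when.strftime('%d%b%Y-%H:%M')}"

print(backup_name("/etc/prometheus/prometheus.yml", datetime(2025, 1, 6, 10, 19)))
# -> /etc/prometheus/prometheus.yml-06Jan2025-10:19
```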
Add the Redis Exporter endpoints to be scraped
Log in to your Prometheus server and add the Redis Exporter endpoints to be scraped.
Replace the IP addresses and ports with your Redis Exporter endpoints (9121 is the default port for Redis Exporter Service).
vi /etc/prometheus/prometheus.yml
scrape_configs:
  - job_name: server1_db
    static_configs:
      - targets: ['10.10.1.10:9121']
        labels:
          alias: db1
  - job_name: server2_db
    static_configs:
      - targets: ['10.10.1.11:9121']
        labels:
          alias: db2
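The configuration above repeats one stanza per droplet. If you have many droplets and prefer to generate the fragment rather than hand-edit it, a minimal Python sketch could look like this (job names, aliases, and IPs are placeholders):

```python
# Sketch: generate the repeating scrape_configs stanza for each Redis droplet.
# Inputs are placeholders; adapt to your own droplet inventory.

def scrape_configs(droplets):
    """droplets: list of (job_name, alias, ip) tuples -> prometheus.yml fragment."""
    lines = ["scrape_configs:"]
    for job, alias, ip in droplets:
        lines += [
            f"  - job_name: {job}",
            "    static_configs:",
            f"      - targets: ['{ip}:9121']",   # 9121 = Redis Exporter default port
            "        labels:",
            f"          alias: {alias}",
        ]
    return "\n".join(lines)

fragment = scrape_configs([("server1_db", "db1", "10.10.1.10"),
                           ("server2_db", "db2", "10.10.1.11")])
print(fragment)
```

Writing the result to `/etc/prometheus/prometheus.yml` and reloading Prometheus would have the same effect as editing the file by hand.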
After saving the file, restart Prometheus (for example, sudo systemctl restart prometheus) so the new scrape targets take effect. This completes the manual configuration. Now, let’s proceed with the script-based configuration.
Method 2: Configuring Using Scripts
You can also achieve this by running two scripts - one for the target droplets and the other for the Prometheus droplet.
Let’s start by configuring the Target Droplets.
SSH into the Target Droplet.
Download the Target Configuration script by using the following command:
wget https://solutions-files.ams3.digitaloceanspaces.com/Redis-Monitoring/DO_Redis_Target_Config.sh
Once the script is downloaded, ensure it has executable permissions by running:
chmod +x DO_Redis_Target_Config.sh
Execute the script by running:
./DO_Redis_Target_Config.sh
The configuration is complete.
Note: If the redis_exporter.service file already exists, the script will not run.
Configuring the Prometheus Droplet (Script Method)
SSH into the Prometheus Droplet and download the script by using the following command:
wget https://solutions-files.ams3.digitaloceanspaces.com/Redis-Monitoring/DO_Redis_Prometheus_Config.sh
Once the script is downloaded, ensure it has executable permissions by running:
chmod +x DO_Redis_Prometheus_Config.sh
Execute the script by running:
./DO_Redis_Prometheus_Config.sh
Enter the number of Droplets to add to monitoring.
Enter the hostnames and IP addresses.
The configuration is complete.
Once added, check whether the targets are updated by visiting http://<prometheus-hostname>:9090/targets in a browser.
Note: If you enter an IP address that is already being monitored, you will be asked to enter the details again. If you have no more servers to add, enter 0 to exit the script.
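Besides the targets page, Prometheus exposes the same target health as JSON at /api/v1/targets. The sketch below shows how that response could be checked programmatically; the response here is a trimmed, hand-written sample, not output from a real server:

```python
# Sketch: find unhealthy targets in the JSON served by Prometheus at
# http://<prometheus-hostname>:9090/api/v1/targets.
# The sample response below is hand-written for illustration.
import json

sample_response = json.loads("""
{
  "status": "success",
  "data": {
    "activeTargets": [
      {"labels": {"alias": "db1"}, "scrapeUrl": "http://10.10.1.10:9121/metrics", "health": "up"},
      {"labels": {"alias": "db2"}, "scrapeUrl": "http://10.10.1.11:9121/metrics", "health": "down"}
    ]
  }
}
""")

def down_targets(resp: dict) -> list:
    """Return the scrape URLs of any targets Prometheus reports as not 'up'."""
    return [t["scrapeUrl"]
            for t in resp["data"]["activeTargets"]
            if t["health"] != "up"]

print(down_targets(sample_response))  # -> ['http://10.10.1.11:9121/metrics']
```

In practice you would fetch the JSON from the running Prometheus server instead of a hard-coded sample.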
Configuring Grafana
Log in to the Grafana dashboard by visiting http://<grafana-ip>:3000 in a browser.
Go to Configuration > Data Sources.
Click on Add data source.
Search and Select Prometheus.
Enter Name as Prometheus and the URL (http://<prometheus-hostname>:9090), then click “Save & Test”. If you see “Data source is working”, the data source has been added successfully. Once done, go to Create > Import.
You can configure the dashboard manually or import one by uploading a JSON file. A JSON template for Redis monitoring is available at the following link:
https://solutions-files.ams3.digitaloceanspaces.com/Redis-Monitoring/DO_Grafana-Redis_Monitoring.json
Fill in the fields and Import.
The Grafana dashboard is ready. Select the host and check if the metrics are visible. Please feel free to modify and edit the dashboard as needed.