🎯 Introduction
Redis Sentinel provides high availability and monitoring for Redis deployments. It’s a distributed system that monitors Redis master and replica instances, performs automatic failover, and acts as a configuration provider for clients. This comprehensive guide covers Redis Sentinel architecture, setup procedures, Java integration, and production best practices.
Redis Sentinel addresses critical production needs such as automatic failover, service discovery, and configuration management, making it essential for mission-critical applications that require high availability and minimal downtime.
🏗️ Redis Deployment Modes Comparison
📊 Deployment Modes Overview
graph TD
A[Redis Deployment Options] --> B[Standalone Redis]
A --> C[Master-Replica]
A --> D[Redis Sentinel]
A --> E[Redis Cluster]
B --> F[Single Point of Failure<br/>Simple Setup<br/>No High Availability]
C --> G[Manual Failover<br/>Read Scaling<br/>Limited Availability]
D --> H[Automatic Failover<br/>Service Discovery<br/>High Availability]
E --> I[Horizontal Scaling<br/>Data Sharding<br/>Complex Setup]
style B fill:#ff6b6b
style C fill:#feca57
style D fill:#4ecdc4
style E fill:#45b7d1
🔍 Detailed Comparison
Feature | Standalone | Master-Replica | Redis Sentinel | Redis Cluster |
---|---|---|---|---|
High Availability | ❌ No | ⚠️ Manual | ✅ Automatic | ✅ Automatic |
Automatic Failover | ❌ No | ❌ No | ✅ Yes | ✅ Yes |
Data Sharding | ❌ No | ❌ No | ❌ No | ✅ Yes |
Read Scaling | ❌ No | ✅ Yes | ✅ Yes | ✅ Yes |
Write Scaling | ❌ No | ❌ No | ❌ No | ✅ Yes |
Setup Complexity | Simple | Medium | Medium | Complex |
Operational Overhead | Low | Medium | Medium | High |
Data Consistency | Strong | Eventual | Eventual | Eventual |
Network Partitions | N/A | Poor | Good | Excellent |
🏛️ Redis Sentinel Architecture
🔧 Core Components
graph TD
subgraph "Redis Sentinel Cluster"
S1[Sentinel 1<br/>Port: 26379]
S2[Sentinel 2<br/>Port: 26380]
S3[Sentinel 3<br/>Port: 26381]
end
subgraph "Redis Data Nodes"
M[Master Redis<br/>Port: 6379]
R1[Replica 1<br/>Port: 6380]
R2[Replica 2<br/>Port: 6381]
end
subgraph "Client Applications"
A1[Java App 1]
A2[Java App 2]
A3[Java App 3]
end
S1 -.-> M
S1 -.-> R1
S1 -.-> R2
S2 -.-> M
S2 -.-> R1
S2 -.-> R2
S3 -.-> M
S3 -.-> R1
S3 -.-> R2
M --> R1
M --> R2
A1 --> S1
A2 --> S2
A3 --> S3
style M fill:#ff6b6b
style R1 fill:#4ecdc4
style R2 fill:#4ecdc4
style S1 fill:#feca57
style S2 fill:#feca57
style S3 fill:#feca57
🎯 Sentinel Responsibilities
- Monitoring: Continuously checks Redis master and replica instances
- Notification: Alerts administrators about Redis instance issues
- Automatic Failover: Promotes replicas to master when failures occur
- Configuration Provider: Provides the current master address to clients (see the sketch below)
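The configuration-provider role is the one clients interact with most directly: instead of hard-coding the master address, they ask any Sentinel for it. A minimal sketch using the Lettuce client (the same client used in the Java sections later in this guide); the localhost address, port 26379, and the master name mymaster are assumptions matching the setup described below:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

import java.net.SocketAddress;

public class SentinelMasterLookup {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create();

        // Connect to a Sentinel node (port 26379), not to a Redis data node
        StatefulRedisSentinelConnection<String, String> sentinel = client.connectSentinel(
                RedisURI.Builder.sentinel("localhost", 26379)
                        .withSentinelMasterId("mymaster")
                        .build());

        try {
            // Equivalent to: SENTINEL get-master-addr-by-name mymaster
            SocketAddress master = sentinel.sync().getMasterAddrByName("mymaster");
            System.out.println("Current master: " + master);
        } finally {
            sentinel.close();
            client.shutdown();
        }
    }
}
```

If the Sentinels themselves require a password (as they do later in this guide), credentials also have to be added to the RedisURI.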
🛠️ Redis Sentinel Setup
1. Basic Redis Configuration
Master Redis Configuration (redis-master.conf):
1# Redis Master Configuration
2port 6379
3bind 0.0.0.0
4
5# Persistence
6save 900 1
7save 300 10
8save 60 10000
9
10# Replication
11replica-serve-stale-data yes
12replica-read-only yes
13
14# Security
15requirepass "redis_master_password"
16masterauth "redis_master_password"
17
18# Memory management
19maxmemory 2gb
20maxmemory-policy allkeys-lru
21
22# Logging
23loglevel notice
24logfile "/var/log/redis/redis-master.log"
25
26# Network
27timeout 0
28tcp-keepalive 300
29
30# Persistence tuning
31rdbcompression yes
32rdbchecksum yes
33dbfilename "dump-master.rdb"
34dir "/data/redis"
35
36# AOF
37appendonly yes
38appendfilename "appendonly-master.aof"
39appendfsync everysec
40no-appendfsync-on-rewrite no
41auto-aof-rewrite-percentage 100
42auto-aof-rewrite-min-size 64mb
Replica Redis Configuration (redis-replica-1.conf):
1# Redis Replica Configuration
2port 6380
3bind 0.0.0.0
4
5# Replication
6replicaof redis-master 6379
7replica-serve-stale-data yes
8replica-read-only yes
9
10# Security (use the same password on every node so any replica can be promoted to master)
11requirepass "redis_master_password"
12masterauth "redis_master_password"
13
14# Memory management
15maxmemory 1gb
16maxmemory-policy allkeys-lru
17
18# Logging
19loglevel notice
20logfile "/var/log/redis/redis-replica-1.log"
21
22# Persistence
23save 900 1
24save 300 10
25save 60 10000
26
27dbfilename "dump-replica-1.rdb"
28dir "/data/redis"
29
30# AOF
31appendonly yes
32appendfilename "appendonly-replica-1.aof"
33appendfsync everysec
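Before adding Sentinel, it is worth confirming that replication itself works. A quick check, sketched here with Lettuce (redis-cli INFO replication shows the same output); host, port, and password are taken from the master configuration above and may differ in your environment:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.api.StatefulRedisConnection;

public class ReplicationCheck {

    public static void main(String[] args) {
        // Connect directly to the master using the settings from redis-master.conf
        RedisURI uri = RedisURI.builder()
                .withHost("localhost")
                .withPort(6379)
                .withPassword("redis_master_password".toCharArray())
                .build();

        RedisClient client = RedisClient.create(uri);
        try (StatefulRedisConnection<String, String> connection = client.connect()) {
            // INFO replication reports role:master, connected_slaves and one line per attached replica
            System.out.println(connection.sync().info("replication"));
        } finally {
            client.shutdown();
        }
    }
}
```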
2. Sentinel Configuration
Sentinel Configuration (sentinel-1.conf):
1# Redis Sentinel Configuration
2port 26379
3bind 0.0.0.0
4
5# Sentinel specific settings
6sentinel deny-scripts-reconfig yes
7
8# Monitor master - requires at least 2 sentinels to agree for failover
9sentinel monitor mymaster redis-master 6379 2
10
11# Master password
12sentinel auth-pass mymaster redis_master_password
13
14# Time without a valid reply before the master is flagged subjectively down (30 seconds)
15sentinel down-after-milliseconds mymaster 30000
16
17# How many replicas can be reconfigured simultaneously during failover
18sentinel parallel-syncs mymaster 1
19
20# Failover timeout (3 minutes)
21sentinel failover-timeout mymaster 180000
22
23# Notification scripts
24sentinel notification-script mymaster /usr/local/bin/redis-notify.sh
25
26# Client reconfig script
27sentinel client-reconfig-script mymaster /usr/local/bin/redis-reconfig.sh
28
29# Logging
30logfile "/var/log/redis/sentinel-1.log"
31loglevel notice
32
33# Security (Redis 6.0+)
34requirepass "sentinel_password"
35
36# Hostname support (Redis 6.2+)
37sentinel resolve-hostnames yes
38sentinel announce-hostnames yes
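With the quorum set to 2, any two of the three Sentinels must agree that the master is down before a failover starts (and a majority is still required to authorize it). One way to verify that a Sentinel has picked up this configuration is to query it directly; a hedged sketch with Lettuce, where the address is an assumption and the field names follow the SENTINEL MASTER reply:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.sentinel.api.StatefulRedisSentinelConnection;

import java.util.Map;

public class SentinelConfigCheck {

    public static void main(String[] args) {
        RedisClient client = RedisClient.create();
        StatefulRedisSentinelConnection<String, String> sentinel = client.connectSentinel(
                RedisURI.Builder.sentinel("localhost", 26379)
                        .withSentinelMasterId("mymaster")
                        .build());

        try {
            // SENTINEL MASTER mymaster, returned as a field/value map
            Map<String, String> master = sentinel.sync().master("mymaster");

            System.out.println("address        : " + master.get("ip") + ":" + master.get("port"));
            System.out.println("flags          : " + master.get("flags"));               // expect "master"
            System.out.println("known replicas : " + master.get("num-slaves"));
            System.out.println("other sentinels: " + master.get("num-other-sentinels")); // expect 2 in a 3-node group
            System.out.println("quorum         : " + master.get("quorum"));              // expect 2, as configured above
        } finally {
            sentinel.close();
            client.shutdown();
        }
    }
}
```

As before, add the Sentinel password to the RedisURI if requirepass is enabled on the Sentinels.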
3. Docker Compose Setup
Complete Docker Compose Configuration:
1version: '3.8'
2
3services:
4 redis-master:
5 image: redis:7-alpine
6 container_name: redis-master
7 ports:
8 - "6379:6379"
9 command: redis-server /usr/local/etc/redis/redis.conf
10 volumes:
11 - ./config/redis-master.conf:/usr/local/etc/redis/redis.conf
12 - redis-master-data:/data
13 - ./logs:/var/log/redis
14 networks:
15 - redis-network
16 healthcheck:
17 test: ["CMD", "redis-cli", "-p", "6379", "ping"]
18 interval: 30s
19 timeout: 10s
20 retries: 3
21
22 redis-replica-1:
23 image: redis:7-alpine
24 container_name: redis-replica-1
25 ports:
26 - "6380:6380"
27 command: redis-server /usr/local/etc/redis/redis.conf
28 volumes:
29 - ./config/redis-replica-1.conf:/usr/local/etc/redis/redis.conf
30 - redis-replica-1-data:/data
31 - ./logs:/var/log/redis
32 depends_on:
33 - redis-master
34 networks:
35 - redis-network
36 healthcheck:
37 test: ["CMD", "redis-cli", "-p", "6380", "ping"]
38 interval: 30s
39 timeout: 10s
40 retries: 3
41
42 redis-replica-2:
43 image: redis:7-alpine
44 container_name: redis-replica-2
45 ports:
46 - "6381:6381"
47 command: redis-server /usr/local/etc/redis/redis.conf
48 volumes:
49 - ./config/redis-replica-2.conf:/usr/local/etc/redis/redis.conf
50 - redis-replica-2-data:/data
51 - ./logs:/var/log/redis
52 depends_on:
53 - redis-master
54 networks:
55 - redis-network
56 healthcheck:
57 test: ["CMD", "redis-cli", "-p", "6381", "ping"]
58 interval: 30s
59 timeout: 10s
60 retries: 3
61
62 redis-sentinel-1:
63 image: redis:7-alpine
64 container_name: redis-sentinel-1
65 ports:
66 - "26379:26379"
67 command: redis-sentinel /usr/local/etc/redis/sentinel.conf
68 volumes:
69 - ./config/sentinel-1.conf:/usr/local/etc/redis/sentinel.conf
70 - ./logs:/var/log/redis
71 - ./scripts:/usr/local/bin
72 depends_on:
73 - redis-master
74 - redis-replica-1
75 - redis-replica-2
76 networks:
77 - redis-network
78 healthcheck:
79 test: ["CMD", "redis-cli", "-p", "26379", "ping"]
80 interval: 30s
81 timeout: 10s
82 retries: 3
83
84 redis-sentinel-2:
85 image: redis:7-alpine
86 container_name: redis-sentinel-2
87 ports:
88 - "26380:26379"
89 command: redis-sentinel /usr/local/etc/redis/sentinel.conf
90 volumes:
91 - ./config/sentinel-2.conf:/usr/local/etc/redis/sentinel.conf
92 - ./logs:/var/log/redis
93 - ./scripts:/usr/local/bin
94 depends_on:
95 - redis-master
96 - redis-replica-1
97 - redis-replica-2
98 networks:
99 - redis-network
100 healthcheck:
101 test: ["CMD", "redis-cli", "-p", "26379", "ping"]
102 interval: 30s
103 timeout: 10s
104 retries: 3
105
106 redis-sentinel-3:
107 image: redis:7-alpine
108 container_name: redis-sentinel-3
109 ports:
110 - "26381:26379"
111 command: redis-sentinel /usr/local/etc/redis/sentinel.conf
112 volumes:
113 - ./config/sentinel-3.conf:/usr/local/etc/redis/sentinel.conf
114 - ./logs:/var/log/redis
115 - ./scripts:/usr/local/bin
116 depends_on:
117 - redis-master
118 - redis-replica-1
119 - redis-replica-2
120 networks:
121 - redis-network
122 healthcheck:
123 test: ["CMD", "redis-cli", "-p", "26379", "ping"]
124 interval: 30s
125 timeout: 10s
126 retries: 3
127
128 # Monitoring and management
129 redis-insight:
130 image: redislabs/redisinsight:latest
131 container_name: redis-insight
132 ports:
133 - "8001:8001"
134 volumes:
135 - redis-insight-data:/db
136 networks:
137 - redis-network
138 depends_on:
139 - redis-master
140
141volumes:
142 redis-master-data:
143 redis-replica-1-data:
144 redis-replica-2-data:
145 redis-insight-data:
146
147networks:
148 redis-network:
149 driver: bridge
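After docker compose up, a quick end-to-end check can be run from inside the Docker network (for example from the application container), since the Sentinels announce the master by its container address. A hedged sketch using Lettuce's master/replica support; the service names and password are the ones from the files above, and Sentinel authentication would additionally need to be configured if requirepass is enabled on the Sentinels:

```java
import io.lettuce.core.ReadFrom;
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.codec.StringCodec;
import io.lettuce.core.masterreplica.MasterReplica;
import io.lettuce.core.masterreplica.StatefulRedisMasterReplicaConnection;

public class SentinelSmokeTest {

    public static void main(String[] args) {
        // Discovery goes through the Sentinels; data commands go to the current master and replicas
        RedisURI uri = RedisURI.Builder.sentinel("redis-sentinel-1", 26379)
                .withSentinel("redis-sentinel-2", 26379)
                .withSentinel("redis-sentinel-3", 26379)
                .withSentinelMasterId("mymaster")
                .withPassword("redis_master_password".toCharArray())
                .build();

        RedisClient client = RedisClient.create();
        StatefulRedisMasterReplicaConnection<String, String> connection =
                MasterReplica.connect(client, StringCodec.UTF8, uri);
        connection.setReadFrom(ReadFrom.REPLICA_PREFERRED); // writes hit the master, reads prefer replicas

        try {
            connection.sync().set("smoke:test", "ok");
            System.out.println("read back: " + connection.sync().get("smoke:test"));
        } finally {
            connection.close();
            client.shutdown();
        }
    }
}
```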
4. Notification Scripts
Redis Notification Script (redis-notify.sh):
1#!/bin/bash
2
3# Redis Sentinel Notification Script
4MASTER_NAME="$1"
5EVENT_TYPE="$2"
6EVENT_STATE="$3"
7FROM_IP="$4"
8FROM_PORT="$5"
9TO_IP="$6"
10TO_PORT="$7"
11
12TIMESTAMP=$(date '+%Y-%m-%d %H:%M:%S')
13LOG_FILE="/var/log/redis/sentinel-notifications.log"
14
15# Create log entry
16LOG_MESSAGE="[$TIMESTAMP] SENTINEL EVENT: Master=$MASTER_NAME, Type=$EVENT_TYPE, State=$EVENT_STATE, From=$FROM_IP:$FROM_PORT, To=$TO_IP:$TO_PORT"
17echo "$LOG_MESSAGE" >> "$LOG_FILE"
18
19# Send notification based on event type
20case "$EVENT_TYPE" in
21 "+switch-master")
22 echo "CRITICAL: Redis master switched from $FROM_IP:$FROM_PORT to $TO_IP:$TO_PORT" >> "$LOG_FILE"
23 # Send alert to monitoring system
24 curl -X POST "http://monitoring-system/alerts" \
25 -H "Content-Type: application/json" \
26 -d "{\"level\":\"critical\",\"message\":\"Redis master failover: $FROM_IP:$FROM_PORT -> $TO_IP:$TO_PORT\",\"timestamp\":\"$TIMESTAMP\"}"
27 ;;
28 "+sdown")
29 echo "WARNING: Redis instance $FROM_IP:$FROM_PORT is subjectively down" >> "$LOG_FILE"
30 ;;
31 "+odown")
32 echo "CRITICAL: Redis instance $FROM_IP:$FROM_PORT is objectively down" >> "$LOG_FILE"
33 ;;
34 "+reboot")
35 echo "INFO: Redis instance $FROM_IP:$FROM_PORT is rebooting" >> "$LOG_FILE"
36 ;;
37esac
38
39exit 0
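Shell scripts are not the only way to react to Sentinel events: every Sentinel publishes its events (+sdown, +odown, +switch-master, and others) over ordinary Redis pub/sub on its own port, so an application can subscribe to them directly. A minimal sketch with Lettuce; the address is an assumption, and credentials would be needed if the Sentinel requires a password:

```java
import io.lettuce.core.RedisClient;
import io.lettuce.core.RedisURI;
import io.lettuce.core.pubsub.RedisPubSubAdapter;
import io.lettuce.core.pubsub.StatefulRedisPubSubConnection;

public class SentinelEventListener {

    public static void main(String[] args) throws InterruptedException {
        // Sentinel speaks the normal Redis protocol, so a plain pub/sub connection to port 26379 works.
        // Add credentials to the URI if the Sentinel has requirepass enabled.
        RedisClient client = RedisClient.create(RedisURI.create("redis://localhost:26379"));
        StatefulRedisPubSubConnection<String, String> pubSub = client.connectPubSub();

        pubSub.addListener(new RedisPubSubAdapter<String, String>() {
            @Override
            public void message(String pattern, String channel, String message) {
                // e.g. channel=+switch-master, message="mymaster 172.18.0.2 6379 172.18.0.3 6380"
                System.out.printf("[sentinel event] %s -> %s%n", channel, message);
            }
        });

        // Subscribe to all Sentinel events; "+switch-master" alone is enough for failover alerts
        pubSub.sync().psubscribe("*");

        Thread.sleep(Long.MAX_VALUE); // keep the demo alive
    }
}
```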
☕ Java Integration with Spring Boot
1. Dependencies and Configuration
Maven Dependencies (pom.xml):
1<dependencies>
2 <!-- Spring Boot Starter -->
3 <dependency>
4 <groupId>org.springframework.boot</groupId>
5 <artifactId>spring-boot-starter-data-redis</artifactId>
6 </dependency>
7
8 <!-- Lettuce Redis Client (supports Sentinel) -->
9 <dependency>
10 <groupId>io.lettuce</groupId>
11 <artifactId>lettuce-core</artifactId>
12 </dependency>
13
14 <!-- Connection pooling -->
15 <dependency>
16 <groupId>org.apache.commons</groupId>
17 <artifactId>commons-pool2</artifactId>
18 </dependency>
19
20 <!-- Monitoring -->
21 <dependency>
22 <groupId>org.springframework.boot</groupId>
23 <artifactId>spring-boot-starter-actuator</artifactId>
24 </dependency>
25
26 <!-- Metrics -->
27 <dependency>
28 <groupId>io.micrometer</groupId>
29 <artifactId>micrometer-registry-prometheus</artifactId>
30 </dependency>
31</dependencies>
Application Configuration (application.yml):
```yaml
spring:
  redis:
    sentinel:
      master: mymaster
      # All Sentinels listen on 26379 inside the Docker network; the 26380/26381
      # ports in docker-compose are host mappings only. The list is kept
      # comma-separated so the @Value List<String> binding below also works.
      nodes: redis-sentinel-1:26379,redis-sentinel-2:26379,redis-sentinel-3:26379
      password: sentinel_password      # password for the Sentinel nodes
    password: redis_master_password    # password for the Redis data nodes
    timeout: 2000ms
    lettuce:
      pool:
        max-active: 20
        max-idle: 8
        min-idle: 2
        max-wait: 2000ms
      shutdown-timeout: 200ms

  # Application settings
  application:
    name: redis-sentinel-demo

# Actuator for health checks and metrics
management:
  endpoints:
    web:
      exposure:
        include: health,metrics,info
  endpoint:
    health:
      show-details: always

# Logging
logging:
  level:
    io.lettuce: DEBUG
    org.springframework.data.redis: DEBUG
    com.example.redis: DEBUG
  pattern:
    console: "%d{HH:mm:ss.SSS} [%thread] %-5level %logger{36} - %msg%n"
```
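For reference, the spring.redis.sentinel.* properties above are enough for Spring Boot's auto-configuration to build a Sentinel-aware connection factory on its own; the explicit configuration class in the next section is only needed for custom pooling, serializers, and cache settings. A minimal sketch of relying on the auto-configured beans (the class and key names are illustrative):

```java
import org.springframework.data.redis.core.StringRedisTemplate;
import org.springframework.stereotype.Component;

@Component
public class StartupRedisCheck {

    private final StringRedisTemplate redis;

    // Auto-configured from spring.redis.sentinel.*; no custom @Configuration required
    public StartupRedisCheck(StringRedisTemplate redis) {
        this.redis = redis;
    }

    public boolean ping() {
        redis.opsForValue().set("startup:check", "ok");
        return "ok".equals(redis.opsForValue().get("startup:check"));
    }
}
```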
2. Redis Configuration Classes
Redis Sentinel Configuration:
1@Configuration
2@EnableCaching
3@EnableRedisRepositories
4public class RedisSentinelConfig {
5
6 @Value("${spring.redis.sentinel.master}")
7 private String masterName;
8
9 @Value("${spring.redis.sentinel.nodes}")
10 private List<String> sentinelNodes;
11
12 @Value("${spring.redis.password}")
13 private String redisPassword;
14
15 @Value("${spring.redis.sentinel.password:}")
16 private String sentinelPassword;
17
18 @Bean
19 public LettuceConnectionFactory redisConnectionFactory() {
20 // Parse sentinel nodes
21 Set<RedisNode> sentinelNodeSet = sentinelNodes.stream()
22 .map(this::parseRedisNode)
23 .collect(Collectors.toSet());
24
25 // Sentinel configuration
26 RedisSentinelConfiguration sentinelConfiguration = new RedisSentinelConfiguration()
27 .master(masterName)
28 .sentinels(sentinelNodeSet);
29
30 if (StringUtils.hasText(redisPassword)) {
31 sentinelConfiguration.setPassword(redisPassword);
32 }
33
34 if (StringUtils.hasText(sentinelPassword)) {
35 sentinelConfiguration.setSentinelPassword(sentinelPassword);
36 }
37
38 // Connection pool configuration
39 GenericObjectPoolConfig<RedisConnection> poolConfig = new GenericObjectPoolConfig<>();
40 poolConfig.setMaxTotal(20);
41 poolConfig.setMaxIdle(8);
42 poolConfig.setMinIdle(2);
43 poolConfig.setMaxWaitMillis(2000);
44 poolConfig.setTestOnBorrow(true);
45 poolConfig.setTestOnReturn(true);
46 poolConfig.setTestWhileIdle(true);
47
48 LettucePoolingClientConfiguration clientConfig = LettucePoolingClientConfiguration.builder()
49 .poolConfig(poolConfig)
50 .commandTimeout(Duration.ofMillis(2000))
51 .shutdownTimeout(Duration.ofMillis(200))
52 .build();
53
54 LettuceConnectionFactory factory = new LettuceConnectionFactory(sentinelConfiguration, clientConfig);
55 factory.setValidateConnection(true);
56
57 return factory;
58 }
59
60 @Bean
61 public RedisTemplate<String, Object> redisTemplate(LettuceConnectionFactory connectionFactory) {
62 RedisTemplate<String, Object> template = new RedisTemplate<>();
63 template.setConnectionFactory(connectionFactory);
64
65 // JSON serialization
66 Jackson2JsonRedisSerializer<Object> jsonSerializer = new Jackson2JsonRedisSerializer<>(Object.class);
67 ObjectMapper objectMapper = new ObjectMapper();
68 objectMapper.setVisibility(PropertyAccessor.ALL, JsonAutoDetect.Visibility.ANY);
69 objectMapper.activateDefaultTyping(LaissezFaireSubTypeValidator.instance, ObjectMapper.DefaultTyping.NON_FINAL);
70 jsonSerializer.setObjectMapper(objectMapper);
71
72 // Key serialization
73 template.setKeySerializer(new StringRedisSerializer());
74 template.setHashKeySerializer(new StringRedisSerializer());
75
76 // Value serialization
77 template.setValueSerializer(jsonSerializer);
78 template.setHashValueSerializer(jsonSerializer);
79
80 template.afterPropertiesSet();
81 return template;
82 }
83
84 @Bean
85 public StringRedisTemplate stringRedisTemplate(LettuceConnectionFactory connectionFactory) {
86 return new StringRedisTemplate(connectionFactory);
87 }
88
89 @Bean
90 public CacheManager cacheManager(LettuceConnectionFactory connectionFactory) {
91 RedisCacheConfiguration config = RedisCacheConfiguration.defaultCacheConfig()
92 .entryTtl(Duration.ofMinutes(30))
93 .serializeKeysWith(RedisSerializationContext.SerializationPair.fromSerializer(new StringRedisSerializer()))
94 .serializeValuesWith(RedisSerializationContext.SerializationPair.fromSerializer(new GenericJackson2JsonRedisSerializer()))
95 .disableCachingNullValues();
96
97 return RedisCacheManager.builder(connectionFactory)
98 .cacheDefaults(config)
99 .build();
100 }
101
102 private RedisNode parseRedisNode(String node) {
103 String[] parts = node.split(":");
104 if (parts.length != 2) {
105 throw new IllegalArgumentException("Invalid Redis node format: " + node);
106 }
107 return new RedisNode(parts[0], Integer.parseInt(parts[1]));
108 }
109}
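One caveat in the class above: @Value cannot bind an indexed YAML list (nodes[0], nodes[1], ...) into a List<String>, which is why the example configuration keeps spring.redis.sentinel.nodes comma-separated. If you prefer a YAML list, a type-safe @ConfigurationProperties holder is the more robust option; a hedged sketch (the class name is illustrative):

```java
import java.util.ArrayList;
import java.util.List;

import org.springframework.boot.context.properties.ConfigurationProperties;
import org.springframework.stereotype.Component;

// Binds spring.redis.sentinel.master, .nodes (list or comma-separated) and .password
// without relying on @Value field injection.
@Component
@ConfigurationProperties(prefix = "spring.redis.sentinel")
public class SentinelSettings {

    private String master;
    private List<String> nodes = new ArrayList<>();
    private String password;

    public String getMaster() { return master; }
    public void setMaster(String master) { this.master = master; }

    public List<String> getNodes() { return nodes; }
    public void setNodes(List<String> nodes) { this.nodes = nodes; }

    public String getPassword() { return password; }
    public void setPassword(String password) { this.password = password; }
}
```

The holder can then be injected into RedisSentinelConfig in place of the three @Value fields.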
3. Redis Service Implementation
Redis Service with Failover Handling:
1@Service
2@Slf4j
3public class RedisService {
4
5 private final RedisTemplate<String, Object> redisTemplate;
6 private final StringRedisTemplate stringRedisTemplate;
7 private final MeterRegistry meterRegistry;
8
9 private final Counter redisOperationCounter;
10 private final Timer redisOperationTimer;
11 private final Gauge redisConnectionGauge;
12
13 public RedisService(RedisTemplate<String, Object> redisTemplate,
14 StringRedisTemplate stringRedisTemplate,
15 MeterRegistry meterRegistry) {
16 this.redisTemplate = redisTemplate;
17 this.stringRedisTemplate = stringRedisTemplate;
18 this.meterRegistry = meterRegistry;
19
20 // Initialize metrics
21 this.redisOperationCounter = Counter.builder("redis.operations.total")
22 .description("Total Redis operations")
23 .register(meterRegistry);
24
25 this.redisOperationTimer = Timer.builder("redis.operations.duration")
26 .description("Redis operation duration")
27 .register(meterRegistry);
28
29 this.redisConnectionGauge = Gauge.builder("redis.connections.active")
30 .description("Active Redis connections")
31 .register(meterRegistry, this, RedisService::getActiveConnectionCount);
32 }
33
34 // Basic operations with metrics and error handling
35 public void setValue(String key, Object value) {
36 Timer.Sample sample = Timer.start(meterRegistry);
37 try {
38 redisTemplate.opsForValue().set(key, value);
39 redisOperationCounter.increment(Tags.of("operation", "set", "status", "success"));
40 log.debug("Successfully set value for key: {}", key);
41 } catch (Exception e) {
42 redisOperationCounter.increment(Tags.of("operation", "set", "status", "error"));
43 log.error("Failed to set value for key: {}", key, e);
44 throw new RedisOperationException("Failed to set value for key: " + key, e);
45 } finally {
46 sample.stop(redisOperationTimer.builder().tag("operation", "set").register(meterRegistry));
47 }
48 }
49
50 public void setValue(String key, Object value, Duration timeout) {
51 Timer.Sample sample = Timer.start(meterRegistry);
52 try {
53 redisTemplate.opsForValue().set(key, value, timeout);
54 redisOperationCounter.increment(Tags.of("operation", "setex", "status", "success"));
55 log.debug("Successfully set value with TTL for key: {}, TTL: {}", key, timeout);
56 } catch (Exception e) {
57 redisOperationCounter.increment(Tags.of("operation", "setex", "status", "error"));
58 log.error("Failed to set value with TTL for key: {}", key, e);
59 throw new RedisOperationException("Failed to set value with TTL for key: " + key, e);
60 } finally {
61 sample.stop(redisOperationTimer.builder().tag("operation", "setex").register(meterRegistry));
62 }
63 }
64
65 @SuppressWarnings("unchecked")
66 public <T> T getValue(String key, Class<T> type) {
67 Timer.Sample sample = Timer.start(meterRegistry);
68 try {
69 Object value = redisTemplate.opsForValue().get(key);
70 redisOperationCounter.increment(Tags.of("operation", "get", "status", "success"));
71 log.debug("Successfully retrieved value for key: {}", key);
72 return value != null ? (T) value : null;
73 } catch (Exception e) {
74 redisOperationCounter.increment(Tags.of("operation", "get", "status", "error"));
75 log.error("Failed to get value for key: {}", key, e);
76 throw new RedisOperationException("Failed to get value for key: " + key, e);
77 } finally {
78 sample.stop(redisOperationTimer.builder().tag("operation", "get").register(meterRegistry));
79 }
80 }
81
82 public Boolean deleteKey(String key) {
83 Timer.Sample sample = Timer.start(meterRegistry);
84 try {
85 Boolean result = redisTemplate.delete(key);
86 redisOperationCounter.increment(Tags.of("operation", "delete", "status", "success"));
87 log.debug("Successfully deleted key: {}, result: {}", key, result);
88 return result;
89 } catch (Exception e) {
90 redisOperationCounter.increment(Tags.of("operation", "delete", "status", "error"));
91 log.error("Failed to delete key: {}", key, e);
92 throw new RedisOperationException("Failed to delete key: " + key, e);
93 } finally {
94 sample.stop(redisOperationTimer.builder().tag("operation", "delete").register(meterRegistry));
95 }
96 }
97
98 public Boolean exists(String key) {
99 Timer.Sample sample = Timer.start(meterRegistry);
100 try {
101 Boolean result = redisTemplate.hasKey(key);
102 redisOperationCounter.increment(Tags.of("operation", "exists", "status", "success"));
103 return result;
104 } catch (Exception e) {
105 redisOperationCounter.increment(Tags.of("operation", "exists", "status", "error"));
106 log.error("Failed to check existence of key: {}", key, e);
107 throw new RedisOperationException("Failed to check existence of key: " + key, e);
108 } finally {
109 sample.stop(redisOperationTimer.builder().tag("operation", "exists").register(meterRegistry));
110 }
111 }
112
113 // Hash operations
114 public void setHashValue(String key, String field, Object value) {
115 Timer.Sample sample = Timer.start(meterRegistry);
116 try {
117 redisTemplate.opsForHash().put(key, field, value);
118 redisOperationCounter.increment(Tags.of("operation", "hset", "status", "success"));
119 log.debug("Successfully set hash value for key: {}, field: {}", key, field);
120 } catch (Exception e) {
121 redisOperationCounter.increment(Tags.of("operation", "hset", "status", "error"));
122 log.error("Failed to set hash value for key: {}, field: {}", key, field, e);
123 throw new RedisOperationException("Failed to set hash value", e);
124 } finally {
125 sample.stop(redisOperationTimer.builder().tag("operation", "hset").register(meterRegistry));
126 }
127 }
128
129 @SuppressWarnings("unchecked")
130 public <T> T getHashValue(String key, String field, Class<T> type) {
131 Timer.Sample sample = Timer.start(meterRegistry);
132 try {
133 Object value = redisTemplate.opsForHash().get(key, field);
134 redisOperationCounter.increment(Tags.of("operation", "hget", "status", "success"));
135 return value != null ? (T) value : null;
136 } catch (Exception e) {
137 redisOperationCounter.increment(Tags.of("operation", "hget", "status", "error"));
138 log.error("Failed to get hash value for key: {}, field: {}", key, field, e);
139 throw new RedisOperationException("Failed to get hash value", e);
140 } finally {
141 sample.stop(redisOperationTimer.builder().tag("operation", "hget").register(meterRegistry));
142 }
143 }
144
145 public Map<Object, Object> getAllHashValues(String key) {
146 Timer.Sample sample = Timer.start(meterRegistry);
147 try {
148 Map<Object, Object> result = redisTemplate.opsForHash().entries(key);
149 redisOperationCounter.increment(Tags.of("operation", "hgetall", "status", "success"));
150 return result;
151 } catch (Exception e) {
152 redisOperationCounter.increment(Tags.of("operation", "hgetall", "status", "error"));
153 log.error("Failed to get all hash values for key: {}", key, e);
154 throw new RedisOperationException("Failed to get all hash values", e);
155 } finally {
156 sample.stop(redisOperationTimer.builder().tag("operation", "hgetall").register(meterRegistry));
157 }
158 }
159
160 // List operations
161 public void pushToList(String key, Object... values) {
162 Timer.Sample sample = Timer.start(meterRegistry);
163 try {
164 redisTemplate.opsForList().rightPushAll(key, values);
165 redisOperationCounter.increment(Tags.of("operation", "lpush", "status", "success"));
166 log.debug("Successfully pushed {} values to list: {}", values.length, key);
167 } catch (Exception e) {
168 redisOperationCounter.increment(Tags.of("operation", "lpush", "status", "error"));
169 log.error("Failed to push values to list: {}", key, e);
170 throw new RedisOperationException("Failed to push values to list", e);
171 } finally {
172 sample.stop(redisOperationTimer.builder().tag("operation", "lpush").register(meterRegistry));
173 }
174 }
175
176 public Object popFromList(String key) {
177 Timer.Sample sample = Timer.start(meterRegistry);
178 try {
179 Object result = redisTemplate.opsForList().rightPop(key);
180 redisOperationCounter.increment(Tags.of("operation", "rpop", "status", "success"));
181 return result;
182 } catch (Exception e) {
183 redisOperationCounter.increment(Tags.of("operation", "rpop", "status", "error"));
184 log.error("Failed to pop value from list: {}", key, e);
185 throw new RedisOperationException("Failed to pop value from list", e);
186 } finally {
187 sample.stop(redisOperationTimer.builder().tag("operation", "rpop").register(meterRegistry));
188 }
189 }
190
191 public List<Object> getListRange(String key, long start, long end) {
192 Timer.Sample sample = Timer.start(meterRegistry);
193 try {
194 List<Object> result = redisTemplate.opsForList().range(key, start, end);
195 redisOperationCounter.increment(Tags.of("operation", "lrange", "status", "success"));
196 return result != null ? result : Collections.emptyList();
197 } catch (Exception e) {
198 redisOperationCounter.increment(Tags.of("operation", "lrange", "status", "error"));
199 log.error("Failed to get list range for key: {}", key, e);
200 throw new RedisOperationException("Failed to get list range", e);
201 } finally {
202 sample.stop(redisOperationTimer.builder().tag("operation", "lrange").register(meterRegistry));
203 }
204 }
205
206 // Set operations
207 public void addToSet(String key, Object... values) {
208 Timer.Sample sample = Timer.start(meterRegistry);
209 try {
210 redisTemplate.opsForSet().add(key, values);
211 redisOperationCounter.increment(Tags.of("operation", "sadd", "status", "success"));
212 log.debug("Successfully added {} values to set: {}", values.length, key);
213 } catch (Exception e) {
214 redisOperationCounter.increment(Tags.of("operation", "sadd", "status", "error"));
215 log.error("Failed to add values to set: {}", key, e);
216 throw new RedisOperationException("Failed to add values to set", e);
217 } finally {
218 sample.stop(redisOperationTimer.builder().tag("operation", "sadd").register(meterRegistry));
219 }
220 }
221
222 public Set<Object> getSetMembers(String key) {
223 Timer.Sample sample = Timer.start(meterRegistry);
224 try {
225 Set<Object> result = redisTemplate.opsForSet().members(key);
226 redisOperationCounter.increment(Tags.of("operation", "smembers", "status", "success"));
227 return result != null ? result : Collections.emptySet();
228 } catch (Exception e) {
229 redisOperationCounter.increment(Tags.of("operation", "smembers", "status", "error"));
230 log.error("Failed to get set members for key: {}", key, e);
231 throw new RedisOperationException("Failed to get set members", e);
232 } finally {
233 sample.stop(redisOperationTimer.builder().tag("operation", "smembers").register(meterRegistry));
234 }
235 }
236
237 // Sorted Set operations
238 public void addToSortedSet(String key, Object value, double score) {
239 Timer.Sample sample = Timer.start(meterRegistry);
240 try {
241 redisTemplate.opsForZSet().add(key, value, score);
242 redisOperationCounter.increment(Tags.of("operation", "zadd", "status", "success"));
243 log.debug("Successfully added value to sorted set: {}, score: {}", key, score);
244 } catch (Exception e) {
245 redisOperationCounter.increment(Tags.of("operation", "zadd", "status", "error"));
246 log.error("Failed to add value to sorted set: {}", key, e);
247 throw new RedisOperationException("Failed to add value to sorted set", e);
248 } finally {
249 sample.stop(redisOperationTimer.builder().tag("operation", "zadd").register(meterRegistry));
250 }
251 }
252
253 public Set<Object> getSortedSetRange(String key, long start, long end) {
254 Timer.Sample sample = Timer.start(meterRegistry);
255 try {
256 Set<Object> result = redisTemplate.opsForZSet().range(key, start, end);
257 redisOperationCounter.increment(Tags.of("operation", "zrange", "status", "success"));
258 return result != null ? result : Collections.emptySet();
259 } catch (Exception e) {
260 redisOperationCounter.increment(Tags.of("operation", "zrange", "status", "error"));
261 log.error("Failed to get sorted set range for key: {}", key, e);
262 throw new RedisOperationException("Failed to get sorted set range", e);
263 } finally {
264 sample.stop(redisOperationTimer.builder().tag("operation", "zrange").register(meterRegistry));
265 }
266 }
267
268 // Utility methods
269 public void expire(String key, Duration timeout) {
270 Timer.Sample sample = Timer.start(meterRegistry);
271 try {
272 redisTemplate.expire(key, timeout);
273 redisOperationCounter.increment(Tags.of("operation", "expire", "status", "success"));
274 log.debug("Successfully set expiration for key: {}, timeout: {}", key, timeout);
275 } catch (Exception e) {
276 redisOperationCounter.increment(Tags.of("operation", "expire", "status", "error"));
277 log.error("Failed to set expiration for key: {}", key, e);
278 throw new RedisOperationException("Failed to set expiration", e);
279 } finally {
280 sample.stop(redisOperationTimer.builder().tag("operation", "expire").register(meterRegistry));
281 }
282 }
283
284 public Long getTimeToLive(String key) {
285 Timer.Sample sample = Timer.start(meterRegistry);
286 try {
287 Long result = redisTemplate.getExpire(key);
288 redisOperationCounter.increment(Tags.of("operation", "ttl", "status", "success"));
289 return result;
290 } catch (Exception e) {
291 redisOperationCounter.increment(Tags.of("operation", "ttl", "status", "error"));
292 log.error("Failed to get TTL for key: {}", key, e);
293 throw new RedisOperationException("Failed to get TTL", e);
294 } finally {
295 sample.stop(redisOperationTimer.builder().tag("operation", "ttl").register(meterRegistry));
296 }
297 }
298
299 // Connection health check
300 public boolean isHealthy() {
301 try {
302 stringRedisTemplate.opsForValue().get("health-check");
303 return true;
304 } catch (Exception e) {
305 log.error("Redis health check failed", e);
306 return false;
307 }
308 }
309
310 // Get active connection count for monitoring
311 private double getActiveConnectionCount() {
312 try {
313 LettuceConnectionFactory factory = (LettuceConnectionFactory) redisTemplate.getConnectionFactory();
314 if (factory != null) {
315 return factory.getConnection().isOpen() ? 1.0 : 0.0;
316 }
317 } catch (Exception e) {
318 log.debug("Failed to get connection count", e);
319 }
320 return 0.0;
321 }
322
323 // Batch operations for better performance
324 public void executePipeline(List<RedisOperations> operations) {
325 redisTemplate.executePipelined(new RedisCallback<Object>() {
326 @Override
327 public Object doInRedis(RedisConnection connection) throws DataAccessException {
328 for (RedisOperations operation : operations) {
329 operation.execute(connection);
330 }
331 return null;
332 }
333 });
334 }
335
336 @FunctionalInterface
337 public interface RedisOperations {
338 void execute(RedisConnection connection);
339 }
340
341 // Custom exception for Redis operations
342 public static class RedisOperationException extends RuntimeException {
343 public RedisOperationException(String message) {
344 super(message);
345 }
346
347 public RedisOperationException(String message, Throwable cause) {
348 super(message, cause);
349 }
350 }
351}
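During a failover there is a short window (bounded roughly by down-after-milliseconds plus the election itself) in which writes fail because no master is reachable. Rather than letting those exceptions bubble up, callers can wrap operations in a small retry helper; a hedged sketch in plain Java (the class is hypothetical and not part of the service above):

```java
import java.time.Duration;
import java.util.function.Supplier;

public final class FailoverRetry {

    private FailoverRetry() {
    }

    public static <T> T withRetry(Supplier<T> operation, int maxAttempts, Duration backoff) {
        RuntimeException lastError = null;
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                return operation.get();
            } catch (RuntimeException e) { // in real code, narrow this to connection-related exceptions
                lastError = e;
                try {
                    Thread.sleep(backoff.toMillis() * attempt); // simple linear backoff
                } catch (InterruptedException interrupted) {
                    Thread.currentThread().interrupt();
                    throw e;
                }
            }
        }
        throw lastError;
    }
}
```

Example usage: FailoverRetry.withRetry(() -> redisService.getValue("user:42", User.class), 3, Duration.ofMillis(200)).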
4. Caching Service with Sentinel Support
Advanced Caching Service:
1@Service
2@Slf4j
3public class CacheService {
4
5 private final RedisService redisService;
6 private final ObjectMapper objectMapper;
7 private final MeterRegistry meterRegistry;
8
9 // Cache configurations
10 private static final Duration DEFAULT_TTL = Duration.ofMinutes(30);
11 private static final Duration SHORT_TTL = Duration.ofMinutes(5);
12 private static final Duration LONG_TTL = Duration.ofHours(2);
13
14 // Cache prefixes for different data types
15 private static final String USER_CACHE_PREFIX = "user:";
16 private static final String SESSION_CACHE_PREFIX = "session:";
17 private static final String PRODUCT_CACHE_PREFIX = "product:";
18 private static final String ANALYTICS_CACHE_PREFIX = "analytics:";
19
20 public CacheService(RedisService redisService, ObjectMapper objectMapper, MeterRegistry meterRegistry) {
21 this.redisService = redisService;
22 this.objectMapper = objectMapper;
23 this.meterRegistry = meterRegistry;
24 }
25
26 // Generic cache operations with automatic serialization
27 public <T> void cacheObject(String key, T object, Duration ttl) {
28 try {
29 redisService.setValue(key, object, ttl);
30 meterRegistry.counter("cache.operations", "operation", "set", "type", "object").increment();
31 log.debug("Cached object with key: {}, TTL: {}", key, ttl);
32 } catch (Exception e) {
33 meterRegistry.counter("cache.errors", "operation", "set", "type", "object").increment();
34 log.error("Failed to cache object with key: {}", key, e);
35 }
36 }
37
38 public <T> Optional<T> getCachedObject(String key, Class<T> type) {
39 try {
40 T cachedObject = redisService.getValue(key, type);
41 boolean hit = cachedObject != null;
42
43 meterRegistry.counter("cache.operations",
44 "operation", "get",
45 "type", "object",
46 "result", hit ? "hit" : "miss").increment();
47
48 if (hit) {
49 log.debug("Cache hit for key: {}", key);
50 return Optional.of(cachedObject);
51 } else {
52 log.debug("Cache miss for key: {}", key);
53 return Optional.empty();
54 }
55 } catch (Exception e) {
56 meterRegistry.counter("cache.errors", "operation", "get", "type", "object").increment();
57 log.error("Failed to get cached object with key: {}", key, e);
58 return Optional.empty();
59 }
60 }
61
62 // User caching operations
63 public void cacheUser(String userId, User user) {
64 cacheObject(USER_CACHE_PREFIX + userId, user, DEFAULT_TTL);
65 }
66
67 public Optional<User> getCachedUser(String userId) {
68 return getCachedObject(USER_CACHE_PREFIX + userId, User.class);
69 }
70
71 public void evictUser(String userId) {
72 redisService.deleteKey(USER_CACHE_PREFIX + userId);
73 meterRegistry.counter("cache.operations", "operation", "evict", "type", "user").increment();
74 }
75
76 // Session caching operations
77 public void cacheSession(String sessionId, UserSession session) {
78 cacheObject(SESSION_CACHE_PREFIX + sessionId, session, Duration.ofHours(8));
79 }
80
81 public Optional<UserSession> getCachedSession(String sessionId) {
82 return getCachedObject(SESSION_CACHE_PREFIX + sessionId, UserSession.class);
83 }
84
85 public void extendSession(String sessionId, Duration additionalTime) {
86 String key = SESSION_CACHE_PREFIX + sessionId;
87 if (redisService.exists(key)) {
88 redisService.expire(key, additionalTime);
89 meterRegistry.counter("cache.operations", "operation", "extend", "type", "session").increment();
90 }
91 }
92
93 // Product caching operations
94 public void cacheProduct(String productId, Product product) {
95 cacheObject(PRODUCT_CACHE_PREFIX + productId, product, LONG_TTL);
96 }
97
98 public Optional<Product> getCachedProduct(String productId) {
99 return getCachedObject(PRODUCT_CACHE_PREFIX + productId, Product.class);
100 }
101
102 // Batch operations for better performance
103 public void cacheProducts(Map<String, Product> products) {
104 List<RedisService.RedisOperations> operations = products.entrySet().stream()
105 .map(entry -> (RedisService.RedisOperations) connection -> {
106 try {
107 String key = PRODUCT_CACHE_PREFIX + entry.getKey();
108 String value = objectMapper.writeValueAsString(entry.getValue());
109 connection.setEx(key.getBytes(), LONG_TTL.getSeconds(), value.getBytes());
110 } catch (Exception e) {
111 log.error("Failed to cache product: {}", entry.getKey(), e);
112 }
113 })
114 .collect(Collectors.toList());
115
116 redisService.executePipeline(operations);
117 meterRegistry.counter("cache.operations", "operation", "batch_set", "type", "product")
118 .increment(operations.size());
119 }
120
121 // Analytics caching with sorted sets for rankings
122 public void cacheProductRanking(String category, Map<String, Double> productScores) {
123 String key = ANALYTICS_CACHE_PREFIX + "ranking:" + category;
124
125 // Clear existing ranking
126 redisService.deleteKey(key);
127
128 // Add products with scores
129 productScores.forEach((productId, score) ->
130 redisService.addToSortedSet(key, productId, score));
131
132 // Set expiration
133 redisService.expire(key, SHORT_TTL);
134
135 meterRegistry.counter("cache.operations", "operation", "ranking_update", "type", "analytics").increment();
136 log.debug("Updated product ranking for category: {}", category);
137 }
138
139 public List<String> getTopProducts(String category, int limit) {
140 String key = ANALYTICS_CACHE_PREFIX + "ranking:" + category;
141 Set<Object> topProducts = redisService.getSortedSetRange(key, -limit, -1); // Get top scores (reversed)
142
143 List<String> result = topProducts.stream()
144 .map(Object::toString)
145 .collect(Collectors.toList());
146
147 Collections.reverse(result); // Reverse to get highest scores first
148
149 meterRegistry.counter("cache.operations", "operation", "ranking_get", "type", "analytics").increment();
150 return result;
151 }
152
153 // Multi-level caching with fallback
154 public <T> T getOrCompute(String key, Class<T> type, Supplier<T> computeFunction, Duration ttl) {
155 // Try cache first
156 Optional<T> cached = getCachedObject(key, type);
157 if (cached.isPresent()) {
158 return cached.get();
159 }
160
161 // Compute value
162 T computed = computeFunction.get();
163 if (computed != null) {
164 cacheObject(key, computed, ttl);
165 }
166
167 meterRegistry.counter("cache.operations", "operation", "compute", "type", "generic").increment();
168 return computed;
169 }
170
171 // Cache warming operations
172 @EventListener(ApplicationReadyEvent.class)
173 public void warmUpCache() {
174 log.info("Starting cache warm-up...");
175
176 CompletableFuture.runAsync(() -> {
177 try {
178 // Warm up frequently accessed data
179 warmUpUserCache();
180 warmUpProductCache();
181
182 log.info("Cache warm-up completed successfully");
183 meterRegistry.counter("cache.operations", "operation", "warmup", "result", "success").increment();
184
185 } catch (Exception e) {
186 log.error("Cache warm-up failed", e);
187 meterRegistry.counter("cache.operations", "operation", "warmup", "result", "error").increment();
188 }
189 });
190 }
191
192 private void warmUpUserCache() {
193 // Implementation would load frequently accessed users
194 log.debug("Warming up user cache...");
195 }
196
197 private void warmUpProductCache() {
198 // Implementation would load popular products
199 log.debug("Warming up product cache...");
200 }
201
202 // Cache statistics and monitoring
203 public CacheStatistics getCacheStatistics() {
204 // This would typically gather statistics from Redis INFO command
205 return new CacheStatistics(
206 redisService.isHealthy(),
207 getCurrentCacheSize(),
208 getHitRate(),
209 getMissRate()
210 );
211 }
212
213 private long getCurrentCacheSize() {
214 // Implementation would count keys with different prefixes
215 return 0L;
216 }
217
218 private double getHitRate() {
219 // Implementation would calculate from metrics
220 return meterRegistry.find("cache.operations")
221 .tag("result", "hit")
222 .counter()
223 .map(Counter::count)
224 .orElse(0.0);
225 }
226
227 private double getMissRate() {
228 // Implementation would calculate from metrics
229 return meterRegistry.find("cache.operations")
230 .tag("result", "miss")
231 .counter()
232 .map(Counter::count)
233 .orElse(0.0);
234 }
235
236 // Supporting classes
237 public static class User {
238 private String id;
239 private String username;
240 private String email;
241 private LocalDateTime createdAt;
242
243 // Constructors, getters, setters...
244 public User() {}
245
246 public User(String id, String username, String email) {
247 this.id = id;
248 this.username = username;
249 this.email = email;
250 this.createdAt = LocalDateTime.now();
251 }
252
253 // Getters and setters
254 public String getId() { return id; }
255 public void setId(String id) { this.id = id; }
256 public String getUsername() { return username; }
257 public void setUsername(String username) { this.username = username; }
258 public String getEmail() { return email; }
259 public void setEmail(String email) { this.email = email; }
260 public LocalDateTime getCreatedAt() { return createdAt; }
261 public void setCreatedAt(LocalDateTime createdAt) { this.createdAt = createdAt; }
262 }
263
264 public static class UserSession {
265 private String sessionId;
266 private String userId;
267 private LocalDateTime createdAt;
268 private LocalDateTime lastAccessedAt;
269 private Map<String, Object> attributes;
270
271 // Constructors, getters, setters...
272 public UserSession() {
273 this.attributes = new HashMap<>();
274 }
275
276 public UserSession(String sessionId, String userId) {
277 this();
278 this.sessionId = sessionId;
279 this.userId = userId;
280 this.createdAt = LocalDateTime.now();
281 this.lastAccessedAt = LocalDateTime.now();
282 }
283
284 // Getters and setters
285 public String getSessionId() { return sessionId; }
286 public void setSessionId(String sessionId) { this.sessionId = sessionId; }
287 public String getUserId() { return userId; }
288 public void setUserId(String userId) { this.userId = userId; }
289 public LocalDateTime getCreatedAt() { return createdAt; }
290 public void setCreatedAt(LocalDateTime createdAt) { this.createdAt = createdAt; }
291 public LocalDateTime getLastAccessedAt() { return lastAccessedAt; }
292 public void setLastAccessedAt(LocalDateTime lastAccessedAt) { this.lastAccessedAt = lastAccessedAt; }
293 public Map<String, Object> getAttributes() { return attributes; }
294 public void setAttributes(Map<String, Object> attributes) { this.attributes = attributes; }
295 }
296
297 public static class Product {
298 private String id;
299 private String name;
300 private String description;
301 private BigDecimal price;
302 private String category;
303 private Integer stockQuantity;
304
305 // Constructors, getters, setters...
306 public Product() {}
307
308 public Product(String id, String name, BigDecimal price, String category) {
309 this.id = id;
310 this.name = name;
311 this.price = price;
312 this.category = category;
313 }
314
315 // Getters and setters
316 public String getId() { return id; }
317 public void setId(String id) { this.id = id; }
318 public String getName() { return name; }
319 public void setName(String name) { this.name = name; }
320 public String getDescription() { return description; }
321 public void setDescription(String description) { this.description = description; }
322 public BigDecimal getPrice() { return price; }
323 public void setPrice(BigDecimal price) { this.price = price; }
324 public String getCategory() { return category; }
325 public void setCategory(String category) { this.category = category; }
326 public Integer getStockQuantity() { return stockQuantity; }
327 public void setStockQuantity(Integer stockQuantity) { this.stockQuantity = stockQuantity; }
328 }
329
330 public static class CacheStatistics {
331 private final boolean healthy;
332 private final long totalKeys;
333 private final double hitRate;
334 private final double missRate;
335
336 public CacheStatistics(boolean healthy, long totalKeys, double hitRate, double missRate) {
337 this.healthy = healthy;
338 this.totalKeys = totalKeys;
339 this.hitRate = hitRate;
340 this.missRate = missRate;
341 }
342
343 // Getters
344 public boolean isHealthy() { return healthy; }
345 public long getTotalKeys() { return totalKeys; }
346 public double getHitRate() { return hitRate; }
347 public double getMissRate() { return missRate; }
348
349 @Override
350 public String toString() {
351 return String.format("CacheStatistics{healthy=%s, totalKeys=%d, hitRate=%.2f%%, missRate=%.2f%%}",
352 healthy, totalKeys, hitRate * 100, missRate * 100);
353 }
354 }
355}
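Because RedisSentinelConfig already enables caching and defines a RedisCacheManager, the same data can also be cached declaratively, and the cache keeps working across a Sentinel failover exactly like the programmatic service above. A short, hypothetical example (service and cache names are illustrative):

```java
import java.math.BigDecimal;

import org.springframework.cache.annotation.CacheEvict;
import org.springframework.cache.annotation.Cacheable;
import org.springframework.stereotype.Service;

@Service
public class ProductLookupService {

    @Cacheable(cacheNames = "products", key = "#productId")
    public CacheService.Product findProduct(String productId) {
        // Placeholder for a database or remote call; executed only on a cache miss
        return new CacheService.Product(productId, "Sample product", BigDecimal.TEN, "demo");
    }

    @CacheEvict(cacheNames = "products", key = "#productId")
    public void invalidate(String productId) {
        // Evicts the cached entry after an update or delete
    }
}
```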
📊 Monitoring and Health Checks
1. Health Check Implementation
1@Component
2public class RedisSentinelHealthIndicator implements HealthIndicator {
3
4 private final RedisService redisService;
5 private final LettuceConnectionFactory connectionFactory;
6
7 public RedisSentinelHealthIndicator(RedisService redisService, LettuceConnectionFactory connectionFactory) {
8 this.redisService = redisService;
9 this.connectionFactory = connectionFactory;
10 }
11
12 @Override
13 public Health health() {
14 try {
15 // Test basic connectivity
16 boolean isHealthy = redisService.isHealthy();
17
18 if (!isHealthy) {
19 return Health.down()
20 .withDetail("error", "Redis connectivity check failed")
21 .build();
22 }
23
24 // Get additional Redis info
25 RedisConnection connection = connectionFactory.getConnection();
26 Properties info = connection.info();
27 String role = info.getProperty("role");
28 String masterHost = info.getProperty("master_host");
29 String masterPort = info.getProperty("master_port");
30
31 Health.Builder healthBuilder = Health.up()
32 .withDetail("role", role)
33 .withDetail("connection", "available");
34
35 if ("slave".equals(role) && masterHost != null) {
36 healthBuilder
37 .withDetail("master_host", masterHost)
38 .withDetail("master_port", masterPort);
39 }
40
41 return healthBuilder.build();
42
43 } catch (Exception e) {
44 return Health.down()
45 .withDetail("error", e.getMessage())
46 .withException(e)
47 .build();
48 }
49 }
50}
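The health indicator above talks to the data nodes; it can be complemented by asking the Sentinels what topology they currently see, which catches cases where Redis answers but Sentinel has lost track of replicas. A hedged sketch using Spring Data Redis' sentinel connection (the component is hypothetical):

```java
import java.util.Collection;

import org.springframework.data.redis.connection.RedisSentinelConnection;
import org.springframework.data.redis.connection.RedisServer;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;
import org.springframework.stereotype.Component;

@Component
public class SentinelTopologyInspector {

    private final LettuceConnectionFactory connectionFactory;

    public SentinelTopologyInspector(LettuceConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    public String describeTopology() {
        // Asks a Sentinel (not a data node) for the masters and replicas it is tracking
        try (RedisSentinelConnection sentinel = connectionFactory.getSentinelConnection()) {
            StringBuilder sb = new StringBuilder();
            Collection<RedisServer> masters = sentinel.masters();
            for (RedisServer master : masters) {
                sb.append("master ").append(master.getName())
                  .append(" at ").append(master.getHost()).append(':').append(master.getPort())
                  .append(", replicas: ").append(sentinel.slaves(master).size())
                  .append('\n');
            }
            return sb.toString();
        } catch (Exception e) {
            return "sentinel topology unavailable: " + e.getMessage();
        }
    }
}
```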
2. Metrics and Monitoring
1@Component
2@Slf4j
3public class RedisSentinelMonitor {
4
5 private final MeterRegistry meterRegistry;
6 private final LettuceConnectionFactory connectionFactory;
7 private final RedisService redisService;
8
9 @Scheduled(fixedRate = 30000) // Every 30 seconds
10 public void collectRedisMetrics() {
11 try {
12 RedisConnection connection = connectionFactory.getConnection();
13 Properties info = connection.info();
14
15 // Connection metrics
16 String connectedClients = info.getProperty("connected_clients");
17 if (connectedClients != null) {
18 Gauge.builder("redis.connected_clients")
19 .description("Number of connected clients")
20 .register(meterRegistry, () -> Double.parseDouble(connectedClients));
21 }
22
23 // Memory metrics
24 String usedMemory = info.getProperty("used_memory");
25 if (usedMemory != null) {
26 Gauge.builder("redis.memory.used")
27 .description("Used memory in bytes")
28 .register(meterRegistry, () -> Double.parseDouble(usedMemory));
29 }
30
31 String maxMemory = info.getProperty("maxmemory");
32 if (maxMemory != null && !maxMemory.equals("0")) {
33 Gauge.builder("redis.memory.max")
34 .description("Max memory in bytes")
35 .register(meterRegistry, () -> Double.parseDouble(maxMemory));
36 }
37
38 // Operation metrics
39 String totalCommandsProcessed = info.getProperty("total_commands_processed");
40 if (totalCommandsProcessed != null) {
41 Counter.builder("redis.commands.processed.total")
42 .description("Total commands processed")
43 .register(meterRegistry)
44 .increment(Double.parseDouble(totalCommandsProcessed));
45 }
46
47 // Keyspace metrics
48 collectKeyspaceMetrics(info);
49
50 connection.close();
51
52 } catch (Exception e) {
53 log.error("Failed to collect Redis metrics", e);
54 meterRegistry.counter("redis.metrics.collection.errors").increment();
55 }
56 }
57
58 private void collectKeyspaceMetrics(Properties info) {
59 info.stringPropertyNames().stream()
60 .filter(key -> key.startsWith("db"))
61 .forEach(dbKey -> {
62 String dbInfo = info.getProperty(dbKey);
63 if (dbInfo != null) {
64 // Parse "keys=X,expires=Y,avg_ttl=Z"
65 Map<String, String> dbMetrics = Arrays.stream(dbInfo.split(","))
66 .map(pair -> pair.split("="))
67 .filter(parts -> parts.length == 2)
68 .collect(Collectors.toMap(parts -> parts[0], parts -> parts[1]));
69
70 String keys = dbMetrics.get("keys");
71 if (keys != null) {
72 Gauge.builder("redis.keyspace.keys")
73 .tag("database", dbKey)
74 .description("Number of keys in database")
75 .register(meterRegistry, () -> Double.parseDouble(keys));
76 }
77
78 String expires = dbMetrics.get("expires");
79 if (expires != null) {
80 Gauge.builder("redis.keyspace.expires")
81 .tag("database", dbKey)
82 .description("Number of keys with expiration")
83 .register(meterRegistry, () -> Double.parseDouble(expires));
84 }
85 }
86 });
87 }
88
89 @EventListener
90 public void handleRedisConnectionFailure(RedisConnectionFailureEvent event) {
91 log.error("Redis connection failure detected: {}", event.getCause().getMessage());
92 meterRegistry.counter("redis.connection.failures",
93 "cause", event.getCause().getClass().getSimpleName()).increment();
94
95 // Could trigger alerts here
96 sendAlert("Redis Connection Failure", event.getCause().getMessage());
97 }
98
99 private void sendAlert(String title, String message) {
100 // Implementation would send alerts to monitoring system
101 log.warn("ALERT: {} - {}", title, message);
102 }
103}
🔧 Testing Redis Sentinel
1. Integration Tests
1@SpringBootTest
2@Testcontainers
3class RedisSentinelIntegrationTest {
4
5 @Container
6 static RedisContainer redis = new RedisContainer(DockerImageName.parse("redis:7-alpine"))
7 .withExposedPorts(6379);
8
9 @Autowired
10 private RedisService redisService;
11
12 @Autowired
13 private CacheService cacheService;
14
15 @Test
16 void testBasicRedisOperations() {
17 // Test string operations
18 String key = "test:key";
19 String value = "test-value";
20
21 redisService.setValue(key, value);
22 String retrieved = redisService.getValue(key, String.class);
23
24 assertThat(retrieved).isEqualTo(value);
25 }
26
27 @Test
28 void testCachingOperations() {
29 // Test user caching
30 CacheService.User user = new CacheService.User("123", "testuser", "test@example.com");
31
32 cacheService.cacheUser(user.getId(), user);
33 Optional<CacheService.User> cachedUser = cacheService.getCachedUser(user.getId());
34
35 assertThat(cachedUser).isPresent();
36 assertThat(cachedUser.get().getUsername()).isEqualTo("testuser");
37 }
38
39 @Test
40 void testFailoverBehavior() {
41 // This test would require a more complex setup with actual Sentinel
42 // In a real scenario, you would:
43 // 1. Set up master and replicas
44 // 2. Configure sentinels
45 // 3. Simulate master failure
46 // 4. Verify automatic failover
47 // 5. Ensure application continues working
48
49 assertThat(redisService.isHealthy()).isTrue();
50 }
51}
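To go beyond the placeholder failover test, the failover can also be triggered from test code instead of stopping a container: SENTINEL FAILOVER forces a promotion even without a detected failure. A hedged sketch of a test helper (the class is hypothetical and assumes a Sentinel is reachable from the test):

```java
import org.springframework.data.redis.connection.RedisSentinelConnection;
import org.springframework.data.redis.connection.lettuce.LettuceConnectionFactory;

public class FailoverTestSupport {

    private final LettuceConnectionFactory connectionFactory;

    public FailoverTestSupport(LettuceConnectionFactory connectionFactory) {
        this.connectionFactory = connectionFactory;
    }

    // Equivalent to: redis-cli -p 26379 SENTINEL FAILOVER mymaster
    public void triggerFailover(String masterName) {
        try (RedisSentinelConnection sentinel = connectionFactory.getSentinelConnection()) {
            sentinel.failover(() -> masterName); // NamedNode is a single-method interface
        } catch (Exception e) {
            throw new IllegalStateException("Could not trigger Sentinel failover", e);
        }
    }
}
```

After calling triggerFailover("mymaster"), the test can poll redisService.isHealthy() or retry a write until the application has reconnected to the new master.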
2. Failover Testing Script
Bash Script for Failover Testing:
1#!/bin/bash
2
3# Redis Sentinel Failover Test Script
4# NOTE: add '-a <password> --no-auth-warning' to the redis-cli calls below when requirepass is enabled on Redis or Sentinel
5set -e
6
7REDIS_MASTER_PORT=6379
8SENTINEL_PORT=26379
9TEST_KEY="failover:test"
10TEST_VALUE="test-data-$(date +%s)"
11
12echo "=== Redis Sentinel Failover Test ==="
13
14# Function to test Redis connectivity
15test_redis() {
16 local host=$1
17 local port=$2
18
19 if redis-cli -h "$host" -p "$port" ping > /dev/null 2>&1; then
20 echo "✓ Redis at $host:$port is responsive"
21 return 0
22 else
23 echo "✗ Redis at $host:$port is not responsive"
24 return 1
25 fi
26}
27
28# Function to get current master info from sentinel
29get_master_info() {
30 # SENTINEL get-master-addr-by-name prints the master IP and port on two lines
31 redis-cli -h localhost -p $SENTINEL_PORT \
32 sentinel get-master-addr-by-name mymaster | head -n 1
33}
34
35# Initial setup
36echo "1. Testing initial setup..."
37INITIAL_MASTER=$(get_master_info)
38echo "Current master: $INITIAL_MASTER:$REDIS_MASTER_PORT"
39
40test_redis "$INITIAL_MASTER" "$REDIS_MASTER_PORT"
41
42# Set test data
43echo "2. Setting test data..."
44redis-cli -h "$INITIAL_MASTER" -p "$REDIS_MASTER_PORT" set "$TEST_KEY" "$TEST_VALUE"
45echo "Set $TEST_KEY = $TEST_VALUE"
46
47# Verify data on replicas
48echo "3. Verifying replication..."
49sleep 2
50REPLICA_VALUE=$(redis-cli -h redis-replica-1 -p 6380 get "$TEST_KEY")
51if [ "$REPLICA_VALUE" = "$TEST_VALUE" ]; then
52 echo "✓ Data replicated successfully"
53else
54 echo "✗ Data replication failed"
55 exit 1
56fi
57
58# Simulate master failure
59echo "4. Simulating master failure..."
60docker stop redis-master
61echo "Master stopped"
62
63# Wait for failover
64echo "5. Waiting for failover (up to 60 seconds)..."
65FAILOVER_TIMEOUT=60
66ELAPSED=0
67
68while [ $ELAPSED -lt $FAILOVER_TIMEOUT ]; do
69 sleep 2
70 ELAPSED=$((ELAPSED + 2))
71
72 NEW_MASTER=$(get_master_info)
73 if [ "$NEW_MASTER" != "$INITIAL_MASTER" ]; then
74 echo "✓ Failover completed! New master: $NEW_MASTER"
75 break
76 fi
77
78 echo "Waiting for failover... ($ELAPSED/$FAILOVER_TIMEOUT seconds)"
79done
80
81if [ $ELAPSED -ge $FAILOVER_TIMEOUT ]; then
82 echo "✗ Failover timed out"
83 exit 1
84fi
85
86# Test new master
87echo "6. Testing new master..."
88NEW_MASTER=$(get_master_info)
89test_redis "$NEW_MASTER" "$REDIS_MASTER_PORT"
90
91# Verify data integrity
92RECOVERED_VALUE=$(redis-cli -h "$NEW_MASTER" -p "$REDIS_MASTER_PORT" get "$TEST_KEY")
93if [ "$RECOVERED_VALUE" = "$TEST_VALUE" ]; then
94 echo "✓ Data integrity maintained after failover"
95else
96 echo "✗ Data integrity compromised: expected '$TEST_VALUE', got '$RECOVERED_VALUE'"
97 exit 1
98fi
99
100# Test write operations on new master
101NEW_TEST_VALUE="post-failover-$(date +%s)"
102redis-cli -h "$NEW_MASTER" -p "$REDIS_MASTER_PORT" set "${TEST_KEY}:new" "$NEW_TEST_VALUE"
103WRITTEN_VALUE=$(redis-cli -h "$NEW_MASTER" -p "$REDIS_MASTER_PORT" get "${TEST_KEY}:new")
104
105if [ "$WRITTEN_VALUE" = "$NEW_TEST_VALUE" ]; then
106 echo "✓ Write operations working on new master"
107else
108 echo "✗ Write operations failed on new master"
109 exit 1
110fi
111
112echo "7. Cleanup..."
113redis-cli -h "$NEW_MASTER" -p "$REDIS_MASTER_PORT" del "$TEST_KEY" "${TEST_KEY}:new"
114
115echo ""
116echo "=== Failover Test Completed Successfully ==="
117echo "Initial master: $INITIAL_MASTER:$REDIS_MASTER_PORT"
118echo "New master: $NEW_MASTER:$REDIS_MASTER_PORT"
119echo "Failover time: $ELAPSED seconds"
🎯 Conclusion
Redis Sentinel provides robust high availability for Redis deployments through automatic failover, service discovery, and monitoring capabilities. Key takeaways:
🔑 Key Benefits:
- Automatic Failover - No manual intervention required during failures
- Service Discovery - Clients automatically discover current master
- Monitoring - Continuous health checks and alerting
- Configuration Management - Dynamic configuration updates
📋 Best Practices:
- Odd Number of Sentinels - Deploy 3, 5, or 7 sentinels for proper quorum
- Failure Isolation - Run sentinels on separate hosts or availability zones so a single outage cannot break the quorum
- Proper Monitoring - Implement comprehensive monitoring and alerting
- Regular Testing - Test failover scenarios regularly
- Performance Tuning - Optimize timeout and pool configurations
🔄 Comparison Summary:
Aspect | Standalone | Master-Replica | Redis Sentinel | Redis Cluster |
---|---|---|---|---|
Best For | Development | Read scaling | High availability | Horizontal scaling |
Complexity | Low | Medium | Medium | High |
Failover | Manual | Manual | Automatic | Automatic |
Data Sharding | No | No | No | Yes |
Redis Sentinel strikes an excellent balance between high availability and operational complexity, making it ideal for production applications requiring automatic failover without the complexity of full clustering.