
Contents
1. Understanding the Core Components
2. Setting Up Apache Kafka Locally
3. Adding Kafka Dependencies in Java (Maven)
4. Creating a Kafka Producer in Java
5. Creating a Kafka Consumer in Java
6. Best Practices for EDA with Kafka & Java
7. Extending with Spring Boot (Optional but Recommended)
Final Thoughts

Implementing Event-Driven Architecture with Java and Apache Kafka

Jul 23, 2025, 03:51 AM
java kafka

1. Understand the core components: Producers publish events to Topics, Consumers subscribe to and process events, and the Kafka Broker manages message storage and delivery. 2. Set up Kafka locally: use Docker to quickly start ZooKeeper and Kafka services, exposing port 9092. 3. Integrate Kafka in Java: add the kafka-clients dependency, or use Spring Kafka for better productivity. 4. Write a Producer: configure a KafkaProducer to send JSON-formatted order events to the orders topic. 5. Write a Consumer: subscribe to the orders topic via KafkaConsumer, process messages asynchronously, and implement the business logic. 6. Follow best practices: standardize event structure, make consumers idempotent, name topics sensibly, monitor consumer lag, and handle errors gracefully. 7. Optionally extend with Spring Boot: use @KafkaListener and KafkaTemplate to simplify development and improve maintainability. This architecture decouples services through events, delivering the scalability and responsiveness that modern distributed systems need.

Implementing Event-Driven Architecture with Java and Apache Kafka

Event-Driven Architecture (EDA) is a powerful design pattern that enables systems to communicate through events—changes in state that are published, routed, and consumed asynchronously. When combined with Java and Apache Kafka, it becomes a scalable, resilient, and high-performance solution for modern distributed applications.


Here's how you can implement Event-Driven Architecture using Java and Apache Kafka effectively.


1. Understanding the Core Components

Before jumping into code, understand the key pieces:

  • Producers: Java applications that publish events (messages) to Kafka topics.
  • Topics: Named streams of records; categories or feeds to which events are written.
  • Consumers: Java applications that subscribe to topics and process events.
  • Kafka Broker: The server that manages topics and stores messages.
  • ZooKeeper / Kafka Raft (KRaft): Coordinates brokers (ZooKeeper in older versions; KRaft in newer ones).

Events typically represent business actions like OrderCreated, PaymentProcessed, or UserRegistered.
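As an illustrative sketch (the class and field names are our own, not part of any Kafka API), such an event can be modeled as a small immutable Java class that knows its JSON form and its partitioning key:

```java
import java.time.Instant;
import java.util.Locale;

// Illustrative event envelope; the shape is an assumption, not a Kafka requirement.
public class OrderCreatedEvent {
    private final String orderId;
    private final double amount;
    private final Instant timestamp;

    public OrderCreatedEvent(String orderId, double amount, Instant timestamp) {
        this.orderId = orderId;
        this.amount = amount;
        this.timestamp = timestamp;
    }

    // Hand-rolled JSON for brevity; a real service would use Jackson or Gson.
    public String toJson() {
        return String.format(Locale.ROOT,
            "{\"eventType\": \"OrderCreated\", \"orderId\": \"%s\", \"amount\": %.2f, \"timestamp\": \"%s\"}",
            orderId, amount, timestamp);
    }

    // Using the orderId as the record key sends all events for one order
    // to the same partition, preserving their relative order.
    public String key() {
        return orderId;
    }
}
```

Keying events by the entity they concern is what gives you per-entity ordering in Kafka, since ordering is only guaranteed within a partition.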


2. Setting Up Apache Kafka Locally

You'll need Kafka running. Use Docker for quick setup:

 # docker-compose.yml
version: '3'
services:
  zookeeper:
    image: confluentinc/cp-zookeeper:latest
    environment:
      ZOOKEEPER_CLIENT_PORT: 2181

  kafka:
    image: confluentinc/cp-kafka:latest
    depends_on:
      - zookeeper
    ports:
      - "9092:9092"
    environment:
      KAFKA_BROKER_ID: 1
      KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
      KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT
      KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://localhost:9092
      KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1

Run with:

 docker-compose up -d

Now your Kafka broker is accessible at localhost:9092.


3. Adding Kafka Dependencies in Java (Maven)

Use the official Kafka clients:

 <dependency>
    <groupId>org.apache.kafka</groupId>
    <artifactId>kafka-clients</artifactId>
    <version>3.7.0</version>
</dependency>

For better productivity, consider Spring Boot with Spring Kafka:

 <dependency>
    <groupId>org.springframework.kafka</groupId>
    <artifactId>spring-kafka</artifactId>
</dependency>

But here we'll focus on raw Kafka clients to show core concepts.


4. Creating a Kafka Producer in Java

A producer sends events to a topic:

 import org.apache.kafka.clients.producer.*;
import java.util.Properties;

public class OrderProducer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
        props.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

        Producer<String, String> producer = new KafkaProducer<>(props);

        String topic = "orders";
        String key = "order-123";
        String value = "{\"orderId\": \"123\", \"status\": \"CREATED\", \"amount\": 99.9}";

        ProducerRecord<String, String> record = new ProducerRecord<>(topic, key, value);

        producer.send(record, (metadata, exception) -> {
            if (exception != null) {
                System.err.println("Send failed: " + exception.getMessage());
            } else {
                System.out.printf("Sent to %s partition %d offset %d%n",
                        metadata.topic(), metadata.partition(), metadata.offset());
            }
        });

        producer.flush();
        producer.close();
    }
}

This sends an OrderCreated event as JSON to the orders topic.

Tip: Use Avro, Protobuf, or JSON Schema for structured, versioned events in production.


5. Creating a Kafka Consumer in Java

The consumer reads events from the same topic:

 import org.apache.kafka.clients.consumer.*;
import java.time.Duration;
import java.util.Collections;
import java.util.Properties;

public class OrderConsumer {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-processing-group");
        props.put("auto.offset.reset", "earliest");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");

        Consumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("orders"));

        try {
            while (true) {
                ConsumerRecords<String, String> records = consumer.poll(Duration.ofMillis(100));
                for (ConsumerRecord<String, String> record : records) {
                    System.out.printf("Received: key=%s, value=%s, topic=%s, partition=%d, offset=%d%n",
                            record.key(), record.value(), record.topic(), record.partition(), record.offset());

                    // Process business logic here
                    handleOrderEvent(record.value());
                }
            }
        } finally {
            consumer.close();
        }
    }

    private static void handleOrderEvent(String value) {
        // Parse JSON and react accordingly
        System.out.println("Processing order event: " + value);
        // e.g., update DB, trigger payment service, send email
    }
}

Note:

  • group.id: Enables consumer groups for scalability and failover.
  • auto.offset.reset=earliest: Starts reading from the beginning if no committed offset exists.
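If you want tighter control over exactly when offsets are recorded, disabling auto-commit is a common pattern. Below is a minimal configuration sketch (enable.auto.commit is a real consumer setting and commitSync() a real client call, but the helper class name is ours; the commitSync() call itself would go at the end of each poll iteration):

```java
import java.util.Properties;

// Consumer configuration for manual offset commits; mirrors the consumer above.
public class ManualCommitConfig {
    public static Properties build() {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");
        props.put("group.id", "order-processing-group");
        props.put("auto.offset.reset", "earliest");
        // Commit offsets only after the batch is fully processed:
        // call consumer.commitSync() once each poll()'s records are handled.
        props.put("enable.auto.commit", "false");
        props.put("key.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer", "org.apache.kafka.common.serialization.StringDeserializer");
        return props;
    }
}
```

With auto-commit off, a crash mid-batch means those records are redelivered rather than silently skipped, which is usually the behavior you want in an order pipeline.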

6. Best Practices for EDA with Kafka & Java

Use meaningful topic names
e.g., user-signups, payments-failed, not topic1.
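If you want to enforce such a convention mechanically, a tiny validator works. The kebab-case pattern below is our own assumption, not a Kafka rule (Kafka itself only restricts topic names to alphanumerics, '.', '_' and '-'):

```java
import java.util.regex.Pattern;

// Illustrative naming check: lowercase words joined by hyphens, e.g. "user-signups".
public class TopicNames {
    private static final Pattern KEBAB_CASE = Pattern.compile("^[a-z]+(-[a-z0-9]+)+$");

    public static boolean isValid(String topic) {
        return KEBAB_CASE.matcher(topic).matches();
    }
}
```

A check like this can run in CI or in a small admin tool so that vague names like topic1 never reach production.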

Structure event data consistently
Include metadata like event type, timestamp, version:

 {
  "eventId": "abc-123",
  "eventType": "OrderCreated",
  "version": "1.0",
  "timestamp": "2025-04-05T10:00:00Z",
  "data": { "orderId": "123", "amount": 99.9 }
}

Make consumers idempotent
Since Kafka guarantees at-least-once delivery, duplicates may occur.
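A minimal sketch of one way to get idempotency, assuming every event carries a unique eventId as in the envelope above (in production, the seen-ID store would be a database table or a cache with a TTL, not an in-memory set):

```java
import java.util.HashSet;
import java.util.Set;

// Illustrative idempotent handler: duplicate deliveries are detected by eventId.
public class IdempotentHandler {
    private final Set<String> processedEventIds = new HashSet<>();
    private int applied = 0;

    // Returns true only the first time a given eventId is seen.
    public boolean handle(String eventId, String payload) {
        if (!processedEventIds.add(eventId)) {
            return false; // duplicate delivery: skip side effects
        }
        applied++; // stand-in for the real business logic (DB update, email, ...)
        return true;
    }

    public int appliedCount() {
        return applied;
    }
}
```

With this in place, Kafka redelivering a message after a rebalance or retry changes nothing: the side effect runs exactly once per eventId.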

Scale horizontally
Add more consumers in the same group—they'll split partitions automatically.

Monitor lag and throughput
Use tools like Kafka Manager, Confluent Control Center, or Prometheus with Grafana.

Handle errors gracefully
Don't crash on bad messages. Log, retry (with backoff), or send to a dead-letter topic.
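The retry-with-backoff part can be sketched as below; the dead-letter hook here is a stand-in for publishing the failed record to a dead-letter topic such as a hypothetical orders-dlt:

```java
import java.util.function.Consumer;

// Illustrative retry wrapper: runs a unit of work with exponential backoff,
// routing the message to a dead-letter handler when all attempts fail.
public class RetryingProcessor {
    private final int maxAttempts;
    private final Consumer<String> deadLetter; // e.g. a producer.send to a DLQ topic

    public RetryingProcessor(int maxAttempts, Consumer<String> deadLetter) {
        this.maxAttempts = maxAttempts;
        this.deadLetter = deadLetter;
    }

    public boolean process(String message, Consumer<String> work) {
        long backoffMs = 50; // doubles after each failed attempt
        for (int attempt = 1; attempt <= maxAttempts; attempt++) {
            try {
                work.accept(message);
                return true;
            } catch (RuntimeException e) {
                if (attempt == maxAttempts) {
                    deadLetter.accept(message);
                    return false;
                }
                try {
                    Thread.sleep(backoffMs);
                } catch (InterruptedException ie) {
                    Thread.currentThread().interrupt();
                    deadLetter.accept(message);
                    return false;
                }
                backoffMs *= 2;
            }
        }
        return false; // unreachable
    }
}
```

Keeping the poison message out of the main loop means one malformed record can never stall the whole partition; operators can inspect and replay the dead-letter topic later.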


7. Extending with Spring Boot (Optional but Recommended)

With Spring Kafka, things get simpler:

 @Component
public class OrderEventListener {

    @KafkaListener(topics = "orders", groupId = "order-processing-group")
    public void listen(String message) {
        System.out.println("Received via Spring: " + message);
        // Business logic
    }
}

And auto-configured producer:

 @Autowired
private KafkaTemplate<String, String> kafkaTemplate;

public void sendOrderEvent(String key, String payload) {
    kafkaTemplate.send("orders", key, payload);
}

Much cleaner and integrates well with DI, logging, metrics, etc.


Final Thoughts

Implementing Event-Driven Architecture with Java and Kafka lets you build loosely coupled, scalable, and responsive systems. Start small—produce one event, consume it—and gradually expand to pipelines involving multiple services.

Key takeaways:

  • Kafka acts as the central nervous system.
  • Producers emit facts; consumers react.
  • Design events around business semantics.
  • Use async processing to decouple components.

It's not just about tech—it's about changing how your services think and talk to each other.

Basically, once you get the flow down, it scales beautifully.
