From ada8e9c856d7dc937ce08b5ffcf2e0457b6b5e34 Mon Sep 17 00:00:00 2001
From: Guilherme Biff Zarelli
Date: Sun, 12 Apr 2026 18:38:26 -0300
Subject: [PATCH 1/4] Add retroactive design documentation

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 ...planning-the-java-architecture-template.md | 210 ++++++++++++++++++
 docs/design/README.md                         |  47 ++++
 docs/design/template.md                       |  80 +++++++
 3 files changed, 337 insertions(+)
 create mode 100644 docs/design/2025-01-05-planning-the-java-architecture-template.md
 create mode 100644 docs/design/README.md
 create mode 100644 docs/design/template.md

diff --git a/docs/design/2025-01-05-planning-the-java-architecture-template.md b/docs/design/2025-01-05-planning-the-java-architecture-template.md
new file mode 100644
index 0000000..e347131
--- /dev/null
+++ b/docs/design/2025-01-05-planning-the-java-architecture-template.md
@@ -0,0 +1,210 @@
+# Retroactive Design Doc: Planning the Java Architecture Template
+
+- **Status:** Accepted
+- **Date:** 2025-01-05
+- **Owners:** Repository maintainers
+- **Related artifacts:** [Repository README](../../README.md), [ADR 0001 - Adopt REST for API Communication](../adr/0001-adopt-rest-for-api-communication.md), [ADR 0002 - Use MySQL as the Primary Database](../adr/0002-use-mysql.md), [ADR 0003 - Use Kafka for Event Streaming](../adr/0003-use-kafka-for-event-streaming.md), [Acceptance Test README](../../acceptance-test/README.md)
+
+## 1. Summary
+
+This document reconstructs the intended design for the repository as if it had been written before implementation. The goal is to turn the current codebase into a reusable planning reference for future adopters of the template, not just a sample project with implicit conventions.
+
+The template combines a Spring Boot 3 and Java 21 application, a hexagonal internal structure, synchronous REST and asynchronous Kafka flows, isolated database migration assets, generated runtime contracts, observability defaults, and Docker-based black-box acceptance tests. Together, these pieces define the baseline shape of a maintainable service starter.
+
+## 2. Problem statement
+
+Starting a new backend service usually involves repeating the same architectural decisions: where business logic belongs, how adapters are isolated, how contracts are documented, how local infrastructure is reproduced, and how confidence is built beyond unit tests. Without a deliberate template, teams often get only a folder scaffold and must rediscover guardrails while shipping features.
+
+This repository needs a single design document that explains the complete plan behind the template: what problems it solves, how its pieces fit together, what trade-offs it makes, and how adopters should evolve it. Existing ADRs already capture isolated technology decisions, but they do not explain the full template shape or the intended adoption path.
+
+## 3. Goals
+
+- Provide a reference implementation of hexagonal architecture on Spring Boot 3 and Java 21.
+- Keep the core domain and use cases framework-agnostic through explicit input and output ports.
+- Demonstrate both synchronous request-response and asynchronous event-driven integration in the same template.
+- Make architecture rules visible and enforceable through ArchUnit tests and package-private implementations.
+- Provide a reproducible local stack for application runtime, migrations, messaging, persistence, and observability.
+- Generate REST and event contracts from source code through OpenAPI and AsyncAPI tooling.
+- Validate the template from the outside with Docker-based acceptance tests instead of relying only on in-process integration tests.
+
+## 4. Non-goals
+
+- Provide a complete production-ready business system beyond a minimal sample domain.
+- Solve authentication, authorization, tenancy, or deployment automation end to end.
+- Be technology-neutral out of the box; this template starts with opinionated defaults and expects future services to replace parts intentionally. +- Replace ADRs as the place for individual long-lived decisions such as choosing REST, MySQL, or Kafka. + +## 5. Current state + +### 5.1 Repository layout + +| Area | Purpose | +| --- | --- | +| `application/` | Spring Boot application module containing adapters, core, config, tests, and runtime resources. | +| `acceptance-test/` | Black-box acceptance module that builds the app image and validates behavior with Testcontainers, WireMock, and RestAssured. | +| `resources/flyway/` | Standalone Flyway assets used for decoupled database migration execution. | +| `.docker-compose-local/` | Local runtime stack for application, infrastructure, and observability services. | +| `docs/adr/` | ADRs for isolated architectural decisions. | + +### 5.2 Sample behavior encoded by the template + +The sample domain is intentionally small, but it exercises the full template: + +1. A client creates a user through `POST /user`. +2. The application validates domain constraints, persists the user in MySQL, and publishes a `user-events` Kafka event. +3. A Kafka listener reacts to the creation event, calls an external random address API, persists the address, and publishes an update event. +4. A client can later query the user through `GET /user/{uuid}` and observe the eventual address enrichment. + +This flow is valuable because it demonstrates core concerns that a real service starter must support: validation, persistence, messaging, retries, eventual consistency, contract documentation, and acceptance coverage. + +### 5.3 Existing guardrails + +- The root and module `pom.xml` files establish a multi-module Maven build with `application` and `acceptance-test`. +- `ArchitectureTest` encodes the main structural rules: ports must be interfaces ending with `Port`, use cases must implement input ports, and implementations should stay package-private. +- Runtime contracts are exposed through Springdoc OpenAPI and Springwolf AsyncAPI. +- Database migrations are packaged separately under `resources/flyway/` and can run before the application starts. +- Acceptance tests run the built application image with real MySQL and Kafka containers plus a mocked external HTTP service. + +### 5.4 ADR baseline + +The current template already formalizes three key decisions: + +- [ADR 0001](../adr/0001-adopt-rest-for-api-communication.md): REST is the synchronous API style. +- [ADR 0002](../adr/0002-use-mysql.md): MySQL is the primary relational store. +- [ADR 0003](../adr/0003-use-kafka-for-event-streaming.md): Kafka is the event streaming backbone. + +This design doc assembles those separate decisions into a single planning narrative. + +## 6. Proposed design + +### 6.1 Module decomposition + +| Module | Responsibilities | Main technologies | +| --- | --- | --- | +| `application` | Hosts the runnable service, the domain core, adapters, configuration, and unit/architecture tests. | Spring Boot, Spring MVC, Spring Data JPA, Spring Kafka, OpenFeign, Springdoc, Springwolf, Micrometer OTLP | +| `acceptance-test` | Builds the application image and runs black-box acceptance scenarios against containerized dependencies. | Maven Failsafe, Testcontainers, RestAssured, WireMock | +| `resources/flyway` | Keeps migration execution separate from app startup and packaging concerns. | Flyway | + +The application module is the product template. 
The acceptance-test module proves that the template works as a deployable artifact. The Flyway assets keep schema evolution explicit and runnable in isolation. + +### 6.2 Internal application structure + +| Package area | Responsibility | +| --- | --- | +| `adapters.input.rest` | REST controllers, DTOs, request validation, and HTTP error mapping. | +| `adapters.input.kafka` | Kafka listeners that translate events into input port calls. | +| `adapters.output.db` | Persistence adapters and JPA mappings for domain entities. | +| `adapters.output.feign` | HTTP client adapters for external service integration. | +| `adapters.output.kafka` | Event publication adapters for outbound domain events. | +| `core.domain` | Domain entities, value objects, and domain exceptions. | +| `core.ports.input` | Application entry points exposed to adapters. | +| `core.ports.output` | Interfaces required by the core to reach external systems. | +| `core.usecases` | Package-private application services that orchestrate domain workflows. | +| `config` | Spring configuration and bootstrap wiring when explicit wiring is needed. | + +The design intent is that adapters depend on ports, and the core depends only on domain concepts plus port interfaces. Package-private implementations keep the public surface small and make internals harder to couple to accidentally. + +### 6.3 Main flows + +#### User creation flow + +`UserController` receives `POST /user`, maps the request DTO to a domain `User`, and calls `UserCreatorPort`. `UserCreatorUseCase` enforces the age-of-majority rule, stores the user through `UserRepositoryPort`, then publishes a user-created event through `UserEventDispatcherPort`. + +#### User enrichment flow + +`UserEventListener` listens on the `user-events` topic. For creation events it calls `UserEnricherPort`, which loads the user, fetches an address through `AddressClientPort`, persists it through `AddressRepositoryPort`, and emits a user-updated event through `UserEventDispatcherPort`. + +The enrichment flow is intentionally asynchronous. It demonstrates eventual consistency, decouples API latency from external HTTP latency, and gives the template a realistic event-driven example instead of a purely CRUD-oriented design. + +#### User query flow + +`UserController` receives `GET /user/{uuid}` and delegates to `UserGetterPort`. `UserGetterUseCase` retrieves the user from persistence or fails with a domain-specific not-found exception that is translated to an HTTP `404 ProblemDetail`. + +### 6.4 Cross-cutting concerns + +- **Runtime contracts:** REST endpoints are documented by Springdoc and asynchronous channels by Springwolf. +- **Error handling:** `ControllerErrorHandler` maps domain and validation failures to Spring `ProblemDetail`. +- **Observability:** Actuator health endpoints plus OTLP metrics and tracing defaults integrate the application with Prometheus, Jaeger, and Grafana through the local stack. +- **Migration control:** Flyway runs independently from the application process, reducing coupling between schema changes and runtime startup. +- **Retry behavior:** The external address client retries HTTP calls, and the Kafka listener uses a retryable topic, making transient integration failures visible in the sample. + +## 7. Dependencies and interfaces + +### 7.1 Core ports and primary implementations + +| Port | Direction | Main implementation | Purpose | +| --- | --- | --- | --- | +| `UserCreatorPort` | Input | `UserCreatorUseCase` | Create users from incoming application requests. 
| +| `UserGetterPort` | Input | `UserGetterUseCase` | Retrieve users by UUID. | +| `UserEnricherPort` | Input | `UserEnricherUseCase` | Execute asynchronous enrichment after a creation event. | +| `UserRepositoryPort` | Output | `UserRepository` | Persist and query users in MySQL via JPA. | +| `AddressRepositoryPort` | Output | `AddressRepository` | Persist enriched addresses. | +| `AddressClientPort` | Output | `AddressClient` | Fetch address data from an external HTTP service. | +| `UserEventDispatcherPort` | Output | `UserEventDispatcher` | Publish domain events to Kafka. | + +### 7.2 External interfaces and runtime surfaces + +| Surface | Type | Purpose | Notes | +| --- | --- | --- | --- | +| `POST /user` | REST | Create a user and return a resource location. | Implemented in `UserController`; documented in OpenAPI. | +| `GET /user/{uuid}` | REST | Read user state, including eventual address enrichment. | Implemented in `UserController`; documented in OpenAPI. | +| `user-events` | Kafka topic | Carry user lifecycle events. | Published and consumed by the application; documented in AsyncAPI. | +| MySQL `sampledb` | Database | Persist users and addresses. | Chosen by ADR 0002; migrations managed through Flyway assets. | +| Random Data API | External HTTP dependency | Provide address data for enrichment. | Accessed through OpenFeign with retry. | +| `/actuator/health` | Runtime health | Expose readiness and liveness-oriented health data. | Used by local runtime and acceptance startup checks. | +| `/v3/api-docs` and `/swagger-ui.html` | Runtime documentation | Expose REST contract artifacts. | Generated from source annotations and Spring configuration. | +| `/springwolf/docs` and `/springwolf/asyncapi-ui.html` | Runtime documentation | Expose event contract artifacts. | Generated from Kafka listener and publisher annotations. | + +## 8. Alternatives considered + +- **Layered Spring application without explicit ports:** simpler at first glance, but easier to couple controllers, persistence, and framework concerns directly into business logic. +- **Synchronous enrichment during `POST /user`:** would return a fully enriched payload immediately, but would also make API latency and availability depend on the external address provider and would remove the template's asynchronous example. +- **Running migrations inside application startup:** simpler to wire locally, but weaker for deployment control and more prone to startup-time contention in orchestrated environments. +- **Keeping all integration tests inside the application module:** lower module count, but less representative of a real deployable artifact and easier to blur the line between unit and black-box tests. + +## 9. Risks and mitigations + +| Risk | Why it matters | Mitigation | +| --- | --- | --- | +| Architecture drift in future template forks | Teams can keep the package structure while bypassing the intended dependency rules. | Keep ArchUnit tests, package-private implementations, and this design doc aligned. | +| Hidden contract drift between code and consumers | Template adopters may change endpoints or events without updating external expectations. | Generate OpenAPI and AsyncAPI from source and keep those runtime artifacts discoverable. | +| Local-stack complexity for new adopters | The template includes several moving parts and can feel heavy for simple services. | Provide Make targets, Docker Compose stacks, and a small sample domain that demonstrates why each part exists. 
| +| Retry amplification in the enrichment path | The template combines Feign retries with Kafka retries, which can multiply external requests. | Keep retry behavior explicit in the design and acceptance tests; adopters should evaluate idempotency and backoff when changing the flow. | +| External dependency instability | The address enrichment path depends on a third-party HTTP endpoint. | Isolate it behind `AddressClientPort`, use retry semantics, and validate failure behavior in acceptance tests with WireMock. | +| Partial enforcement of adapter sublayer rules | The detailed ArchUnit layered test currently uses `adapter` package matchers while source packages are under `adapters`, so sublayer checks are weaker than intended. | Treat broader architecture tests as the current guardrail and review the detailed rule when strengthening template governance. | + +## 10. Migration or rollout notes + +No migration is required for the current repository because this document is retroactive. It becomes the baseline planning artifact for future template evolution. + +For teams adopting the template: + +1. Rename Maven coordinates, base packages, and service metadata first. +2. Replace the sample `User` domain while preserving the `core`, `ports`, `usecases`, and `adapters` separation. +3. Review the existing ADRs and either keep them or supersede them before replacing MySQL, Kafka, or REST. +4. Adapt migration scripts, external clients, and local stack definitions to the new service. +5. Extend acceptance tests before adding substantial new behavior so the black-box contract remains the main confidence layer. +6. Update OpenAPI, AsyncAPI, and observability metadata so generated runtime docs still reflect the service accurately. + +## 11. Validation strategy + +- **Unit and architecture tests:** keep fast feedback in `application/`, especially for domain rules, use cases, and architectural boundaries. +- **Acceptance tests:** use `acceptance-test/` to build the real app image and validate HTTP, Kafka, persistence, and external integration behavior against containers. +- **Runtime contract checks:** expose and inspect OpenAPI and AsyncAPI artifacts after startup. +- **Operational smoke checks:** verify the local stack through Actuator health plus observability tooling endpoints. +- **Common commands:** `make run-unit-tests`, `make run-acceptance-tests`, `make run-all-tests`, `make run-all`, and `make stop-all` are the baseline entry points for template users. + +## 12. Open questions + +- Should the detailed ArchUnit layer rules be tightened so the adapter sublayer package names are fully enforced? +- Should future template changes require a design doc whenever they affect more than one module or more than one runtime contract? +- When adopters replace the default technologies, should the repository provide migration checklists or artifact templates beyond ADRs and this design-doc template? + +## 13. 
References
+
+- [Repository README](../../README.md)
+- [Acceptance Test README](../../acceptance-test/README.md)
+- [ADR 0001 - Adopt REST for API Communication](../adr/0001-adopt-rest-for-api-communication.md)
+- [ADR 0002 - Use MySQL as the Primary Database](../adr/0002-use-mysql.md)
+- [ADR 0003 - Use Kafka for Event Streaming](../adr/0003-use-kafka-for-event-streaming.md)
+- [PicPay - Design Docs: a importancia da documentacao tecnica no desenvolvimento de software](https://medium.com/inside-picpay/design-docs-a-import%C3%A2ncia-da-documenta%C3%A7%C3%A3o-t%C3%A9cnica-no-desenvolvimento-de-software-30f75686ab7e)

diff --git a/docs/design/README.md b/docs/design/README.md
new file mode 100644
index 0000000..5cd57c5
--- /dev/null
+++ b/docs/design/README.md
@@ -0,0 +1,47 @@
+# Design Docs
+
+This folder contains design documents for changes or capabilities that need more context than a small ADR, issue, or pull request description. In this repository, a design doc complements ADRs by capturing the full shape of a solution: the problem, scope, structure, interfaces, risks, rollout guidance, and validation strategy.
+
+## When to create a design doc
+
+Create a design doc when the work has non-trivial structure, integration, migration, or technical risk. Typical examples:
+
+- a new feature slice that changes multiple adapters, ports, or use cases
+- a migration that affects runtime contracts, persistence, or infrastructure
+- a cross-cutting concern such as observability, security, or deployment shape
+- a template or platform change that other teams will copy and extend
+
+Use a different artifact when it fits better:
+
+- use an **ADR** for a single long-lived architectural decision
+- use a **spec** for scope, scenarios, and acceptance criteria
+- use a **runbook** for operational procedures
+
+## Process
+
+1. Clarify the problem and scope first.
+2. Copy [template.md](template.md).
+3. Link any relevant ADRs, specs, or runbooks instead of duplicating them.
+4. Keep the document specific enough to guide implementation, but lean enough to stay maintained.
+5. Add the new document to the index below.
+
+## Naming convention
+
+Design docs in this repository follow this pattern:
+
+`YYYY-MM-DD-<short-kebab-case-title>.md`
+
+## Design doc index
+
+| Date | Title | Status | Notes |
+| --- | --- | --- | --- |
+| 2025-01-05 | [Planning the Java Architecture Template](2025-01-05-planning-the-java-architecture-template.md) | Accepted | Retroactive baseline design doc for the current template. |
+
+## Template
+
+Use [template.md](template.md) to create new design docs.
+
+## References
+
+- [PicPay - Design Docs: a importancia da documentacao tecnica no desenvolvimento de software](https://medium.com/inside-picpay/design-docs-a-import%C3%A2ncia-da-documenta%C3%A7%C3%A3o-t%C3%A9cnica-no-desenvolvimento-de-software-30f75686ab7e)
+- [ADR README](../adr/README.md)

diff --git a/docs/design/template.md b/docs/design/template.md
new file mode 100644
index 0000000..c30a1d8
--- /dev/null
+++ b/docs/design/template.md
@@ -0,0 +1,80 @@
+# <Design doc title>
+
+- **Status:** Draft | Review | Accepted | Superseded
+- **Date:** YYYY-MM-DD
+- **Owners:**
+- **Related artifacts:**
+
+## 1. Summary
+
+Describe the proposal in a few sentences. Keep the focus on why the work matters and what shape of solution is being proposed.
+
+## 2. Problem statement
+
+Explain the problem, opportunity, or constraint that makes this work necessary. Capture the context that would be expensive to rediscover later.
+
+## 3. Goals
+
+-
+-
+
+## 4. Non-goals
+
+-
+-
+
+## 5. Current state
+
+Describe the relevant baseline: current architecture, contracts, operational model, or implementation constraints.
+
+## 6. Proposed design
+
+### 6.1 Structure and responsibilities
+
+Explain the main building blocks and their responsibilities.
+
+### 6.2 Main flows
+
+Describe the important request, event, data, or control flows.
+
+### 6.3 Cross-cutting concerns
+
+Capture security, observability, performance, documentation, or reliability choices when they materially affect the design.
+
+## 7. Dependencies and interfaces
+
+| Dependency or interface | Type | Purpose | Notes |
+| --- | --- | --- | --- |
+| | | | |
+
+## 8. Alternatives considered
+
+List the main alternatives and why they were not chosen.
+
+## 9. Risks and mitigations
+
+| Risk | Why it matters | Mitigation |
+| --- | --- | --- |
+| | | |
+
+## 10. Migration or rollout notes
+
+Explain how the design should be introduced, adopted, or phased in.
+
+## 11. Validation strategy
+
+Describe how the team will know the design is working: tests, contract checks, rollout checks, observability, or manual verification.
+
+## 12. Open questions
+
+-
+-
+
+## 13. References
+
+-
+-
+
+---
+
+Keep the document lean. Record the why, the structure, and the main trade-offs; leave transient execution detail to tickets, tasks, or pull requests.

From 05bed21df8fe615a4331bdf7a140b958f03f6765 Mon Sep 17 00:00:00 2001
From: Guilherme Biff Zarelli
Date: Sun, 12 Apr 2026 18:52:10 -0300
Subject: [PATCH 2/4] Implement CloudEvents specification

Adopt CloudEvents structured JSON for Kafka user events, update the event
adapters and tests, and document the contract decision.

Closes #3

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 README.md | 8 +-
 README.pt.md | 6 +-
 application/pom.xml | 13 +++
 .../input/kafka/UserEventListener.java | 33 +++++--
 .../input/kafka/dto/UserEventDataDto.java | 8 ++
 .../input/kafka/dto/UserEventDto.java | 13 ++-
 .../adapters/output/kafka/UserEvent.java | 29 ++----
 .../adapters/output/kafka/UserEventData.java | 8 ++
 .../output/kafka/UserEventDispatcher.java | 81 +++++++++++++----
 .../input/kafka/UserEventListenerTest.java | 79 ++++++++++++----
 .../output/kafka/UserEventDispatcherTest.java | 91 +++++++++++++++++--
 ...cloudevents-as-the-kafka-event-contract.md | 51 +++++++++++
 docs/adr/README.md | 1 +
 docs/adr/README.pt.md | 2 +-
 14 files changed, 340 insertions(+), 83 deletions(-)
 create mode 100644 application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDataDto.java
 create mode 100644 application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventData.java
 create mode 100644 docs/adr/0004-use-cloudevents-as-the-kafka-event-contract.md

diff --git a/README.md b/README.md
index d80b889..061de1d 100644
--- a/README.md
+++ b/README.md
@@ -6,9 +6,10 @@
 
 Here you will describe this project, what it does, and its goals, making it clear to everyone. Example:
 
-The **Java Architecture Template** is a project designed to serve as a template for creating applications, aiming for development with exceptional
-technical quality to ensure long-term maintainability.
-In this template, we provide a user registration endpoint that triggers an event in the broker when a user is registered. A listener will receive these creation events and enrich them with address data.
+The **Java Architecture Template** is a project designed to serve as a template for creating applications, aiming for development with exceptional +technical quality to ensure long-term maintainability. +In this template, we provide a user registration endpoint that publishes a **CloudEvent** to Kafka when a user is registered. A listener consumes +CloudEvents of type `br.com.helpdev.sample.user.created` and enriches the user with address data. 📖 Read this in: - 🇧🇷 [Português](README.pt.md) @@ -171,6 +172,7 @@ After starting the application, access: ### **AsyncAPI** This project uses **Springwolf** to document asynchronous events (Kafka, RabbitMQ, etc.) with **AsyncAPI**. +Kafka messages on the `user-events` topic follow the **CloudEvents structured JSON** format (`application/cloudevents+json`). 🔗 [Official AsyncAPI site](https://www.asyncapi.com/) diff --git a/README.pt.md b/README.pt.md index beb5eb9..0be271a 100644 --- a/README.pt.md +++ b/README.pt.md @@ -6,8 +6,9 @@ Aqui você deve descrever seu projeto, seu funcionamento e seus objetivos, tornando-o claro para todos. Exemplo: -O **Java Architecture Template** é um projeto criado para servir como modelo na criação de aplicações, visando um desenvolvimento com **qualidade técnica excepcional** para garantir **manutenção a longo prazo**. -Neste template, fornecemos um **endpoint de cadastro de usuário**, que **dispara um evento no broker** quando um usuário é registrado. Um **listener recebe esses eventos** de criação e os enriquece com dados de endereço. +O **Java Architecture Template** é um projeto criado para servir como modelo na criação de aplicações, visando um desenvolvimento com **qualidade técnica excepcional** para garantir **manutenção a longo prazo**. +Neste template, fornecemos um **endpoint de cadastro de usuário** que publica um **CloudEvent** no Kafka quando um usuário é registrado. Um +**listener consome CloudEvents** do tipo `br.com.helpdev.sample.user.created` e enriquece o usuário com dados de endereço. 📚 Leia em: - 🇬🇧 [English](README.md) @@ -159,6 +160,7 @@ Após iniciar a aplicação, acesse: ### **AsyncAPI** Este projeto utiliza o **Springwolf** para documentar eventos assíncronos (Kafka, RabbitMQ, etc.) com **AsyncAPI**. +As mensagens Kafka no tópico `user-events` seguem o formato **CloudEvents structured JSON** (`application/cloudevents+json`). 
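+
+A título de ilustração, uma mensagem publicada no tópico tende a seguir o formato abaixo (valores de `id`, `time` e `userUuid` meramente ilustrativos):
+
+```json
+{
+  "specversion": "1.0",
+  "id": "ef5f318c-4b7c-4fd7-a661-56293d8b8a91",
+  "source": "urn:helpdev:sample:user",
+  "type": "br.com.helpdev.sample.user.created",
+  "time": "2026-04-12T21:33:04Z",
+  "datacontenttype": "application/json",
+  "data": {
+    "userUuid": "f0f8cf3e-e856-4d61-a613-44f5df7742ca"
+  }
+}
+```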
🔗 [Site oficial da AsyncAPI](https://www.asyncapi.com/) diff --git a/application/pom.xml b/application/pom.xml index fc3268c..095f721 100644 --- a/application/pom.xml +++ b/application/pom.xml @@ -17,6 +17,7 @@ 2024.0.0 3.4.1 1.3.0 + 4.0.2 8.4.0 2.8.1 1.9.0 @@ -52,6 +53,18 @@ ${springwolf.version} + + + io.cloudevents + cloudevents-core + ${cloudevents.version} + + + io.cloudevents + cloudevents-json-jackson + ${cloudevents.version} + + org.springframework.cloud diff --git a/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListener.java b/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListener.java index b6513d7..8cd6a48 100644 --- a/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListener.java +++ b/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListener.java @@ -1,11 +1,17 @@ package br.com.helpdev.sample.adapters.input.kafka; +import java.io.IOException; +import java.nio.charset.StandardCharsets; +import java.util.Objects; import java.util.UUID; import io.github.springwolf.bindings.kafka.annotations.KafkaAsyncOperationBinding; import io.github.springwolf.core.asyncapi.annotations.AsyncListener; import io.github.springwolf.core.asyncapi.annotations.AsyncMessage; import io.github.springwolf.core.asyncapi.annotations.AsyncOperation; +import io.cloudevents.core.format.EventFormat; +import io.cloudevents.core.provider.EventFormatProvider; +import io.cloudevents.jackson.JsonFormat; import org.slf4j.Logger; import org.slf4j.LoggerFactory; @@ -14,9 +20,9 @@ import org.springframework.retry.annotation.Backoff; import org.springframework.stereotype.Controller; -import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.ObjectMapper; +import br.com.helpdev.sample.adapters.input.kafka.dto.UserEventDataDto; import br.com.helpdev.sample.adapters.input.kafka.dto.UserEventDto; import br.com.helpdev.sample.core.ports.input.UserEnricherPort; @@ -25,6 +31,9 @@ class UserEventListener { private static final String TOPIC_NAME = "user-events"; + private static final EventFormat EVENT_FORMAT = Objects.requireNonNull( + EventFormatProvider.getInstance().resolveFormat(JsonFormat.CONTENT_TYPE)); + private final Logger logger = LoggerFactory.getLogger(UserEventListener.class); private final ObjectMapper objectMapper; @@ -41,8 +50,8 @@ class UserEventListener { description = "Listen for user events", message = @AsyncMessage( name = "UserEventDto", - contentType = "application/json", - messageId = "uuid" + contentType = JsonFormat.CONTENT_TYPE, + messageId = "id" ), headers = @AsyncOperation.Headers( notUsed = true @@ -52,16 +61,22 @@ class UserEventListener { @KafkaAsyncOperationBinding(bindingVersion = "1.0.0") @KafkaListener(topics = TOPIC_NAME) @RetryableTopic(attempts = "3", backoff = @Backoff(delay = 1000, maxDelay = 10000, multiplier = 2), autoCreateTopics = "true") - public void listen(final String message) throws JsonProcessingException { - final var userEventDto = objectMapper.readValue(message, UserEventDto.class); + public void listen(final String message) throws IOException { + final var cloudEvent = EVENT_FORMAT.deserialize(message.getBytes(StandardCharsets.UTF_8)); + + if (UserEventDto.EVENT_TYPE_CREATED.equals(cloudEvent.getType())) { + final var eventData = cloudEvent.getData(); + if (eventData == null) { + throw new IllegalArgumentException("CloudEvent data is required for user events"); + } + final var userEventData = 
objectMapper.readValue(eventData.toBytes(), UserEventDataDto.class); - if (UserEventDto.EVENT_CREATED.equals(userEventDto.event())) { - userEnricherPort.enrichUser(UUID.fromString(userEventDto.uuid())); - logger.info("User enriched: {}", userEventDto.uuid()); + userEnricherPort.enrichUser(UUID.fromString(userEventData.userUuid())); + logger.info("User enriched: {}", userEventData.userUuid()); return; } - logger.info("User event ignored to enrich process: {}", userEventDto.event()); + logger.info("User event ignored to enrich process: {}", cloudEvent.getType()); } } diff --git a/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDataDto.java b/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDataDto.java new file mode 100644 index 0000000..aabcfe4 --- /dev/null +++ b/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDataDto.java @@ -0,0 +1,8 @@ +package br.com.helpdev.sample.adapters.input.kafka.dto; + +import io.swagger.v3.oas.annotations.media.Schema; + +public record UserEventDataDto( + @Schema(title = "User UUID", example = "f0f8cf3e-e856-4d61-a613-44f5df7742ca") String userUuid) +{ +} diff --git a/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDto.java b/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDto.java index ce01235..eace819 100644 --- a/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDto.java +++ b/application/src/main/java/br/com/helpdev/sample/adapters/input/kafka/dto/UserEventDto.java @@ -1,12 +1,19 @@ package br.com.helpdev.sample.adapters.input.kafka.dto; +import java.time.OffsetDateTime; + import io.swagger.v3.oas.annotations.media.Schema; public record UserEventDto( - @Schema(title = "Event", example = "CREATED|UPDATED") String event, - @Schema(title = "UUID", example = "uuid") String uuid) + @Schema(title = "CloudEvent Spec Version", example = "1.0") String specversion, + @Schema(title = "CloudEvent Identifier", example = "ef5f318c-4b7c-4fd7-a661-56293d8b8a91") String id, + @Schema(title = "CloudEvent Source", example = "urn:helpdev:sample:user") String source, + @Schema(title = "CloudEvent Type", example = "br.com.helpdev.sample.user.created") String type, + @Schema(title = "CloudEvent Time", example = "2026-04-12T21:33:04Z") OffsetDateTime time, + @Schema(title = "CloudEvent Data Content Type", example = "application/json") String datacontenttype, + @Schema(title = "CloudEvent Data") UserEventDataDto data) { - public static final String EVENT_CREATED = "CREATED"; + public static final String EVENT_TYPE_CREATED = "br.com.helpdev.sample.user.created"; } diff --git a/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEvent.java b/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEvent.java index 72dfb73..5a071ed 100644 --- a/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEvent.java +++ b/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEvent.java @@ -1,27 +1,16 @@ package br.com.helpdev.sample.adapters.output.kafka; -import java.util.UUID; +import java.time.OffsetDateTime; import io.swagger.v3.oas.annotations.media.Schema; public record UserEvent( - @Schema(title = "Event", example = "CREATED|UPDATED") String event, - @Schema(title = "UUID", example = "uuid") String uuid) { - - public static final String EVENT_CREATED = "CREATED"; - - public static final String EVENT_UPDATED = 
"UPDATED"; - - public static UserEvent ofCreated(UUID uuid) { - return new UserEvent(EVENT_CREATED, uuid.toString()); - } - - public static UserEvent ofUpdated(UUID uuid) { - return new UserEvent(EVENT_UPDATED, uuid.toString()); - } - - public String toJson() { - return String.format("{\"event\":\"%s\",\"uuid\":\"%s\"}", event, uuid); - } - + @Schema(title = "CloudEvent Spec Version", example = "1.0") String specversion, + @Schema(title = "CloudEvent Identifier", example = "ef5f318c-4b7c-4fd7-a661-56293d8b8a91") String id, + @Schema(title = "CloudEvent Source", example = "urn:helpdev:sample:user") String source, + @Schema(title = "CloudEvent Type", example = "br.com.helpdev.sample.user.created") String type, + @Schema(title = "CloudEvent Time", example = "2026-04-12T21:33:04Z") OffsetDateTime time, + @Schema(title = "CloudEvent Data Content Type", example = "application/json") String datacontenttype, + @Schema(title = "CloudEvent Data") UserEventData data) +{ } diff --git a/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventData.java b/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventData.java new file mode 100644 index 0000000..f0524d4 --- /dev/null +++ b/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventData.java @@ -0,0 +1,8 @@ +package br.com.helpdev.sample.adapters.output.kafka; + +import io.swagger.v3.oas.annotations.media.Schema; + +public record UserEventData( + @Schema(title = "User UUID", example = "f0f8cf3e-e856-4d61-a613-44f5df7742ca") String userUuid) +{ +} diff --git a/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcher.java b/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcher.java index d32820d..c37f9bf 100644 --- a/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcher.java +++ b/application/src/main/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcher.java @@ -1,13 +1,28 @@ package br.com.helpdev.sample.adapters.output.kafka; +import java.net.URI; +import java.nio.charset.StandardCharsets; +import java.time.OffsetDateTime; +import java.time.ZoneOffset; +import java.util.Objects; +import java.util.UUID; + import io.github.springwolf.bindings.kafka.annotations.KafkaAsyncOperationBinding; import io.github.springwolf.core.asyncapi.annotations.AsyncMessage; import io.github.springwolf.core.asyncapi.annotations.AsyncOperation; import io.github.springwolf.core.asyncapi.annotations.AsyncPublisher; +import io.cloudevents.core.builder.CloudEventBuilder; +import io.cloudevents.core.format.EventFormat; +import io.cloudevents.core.provider.EventFormatProvider; +import io.cloudevents.jackson.JsonFormat; +import org.springframework.kafka.KafkaException; import org.springframework.kafka.core.KafkaTemplate; import org.springframework.stereotype.Service; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; + import br.com.helpdev.sample.core.domain.entities.User; import br.com.helpdev.sample.core.ports.output.UserEventDispatcherPort; @@ -16,37 +31,67 @@ class UserEventDispatcher implements UserEventDispatcherPort { private static final String USER_EVENTS_TOPIC = "user-events"; + private static final String CLOUD_EVENT_SOURCE = "urn:helpdev:sample:user"; + + private static final String USER_CREATED_EVENT_TYPE = "br.com.helpdev.sample.user.created"; + + private static final String USER_ADDRESS_UPDATED_EVENT_TYPE = 
"br.com.helpdev.sample.user.address.updated"; + + private static final EventFormat EVENT_FORMAT = Objects.requireNonNull( + EventFormatProvider.getInstance().resolveFormat(JsonFormat.CONTENT_TYPE)); + private final KafkaTemplate kafkaProducer; - UserEventDispatcher(final KafkaTemplate kafkaProducer) { + private final ObjectMapper objectMapper; + + UserEventDispatcher(final KafkaTemplate kafkaProducer, final ObjectMapper objectMapper) { this.kafkaProducer = kafkaProducer; + this.objectMapper = objectMapper; } @Override public void sendUserCreatedEvent(final User user) { - publish(user, UserEvent.ofCreated(user.uuid())); + publish(user, USER_CREATED_EVENT_TYPE); } @Override public void sendUserAddressUpdatedEvent(final User user) { - publish(user, UserEvent.ofUpdated(user.uuid())); + publish(user, USER_ADDRESS_UPDATED_EVENT_TYPE); } - @AsyncPublisher( - operation = @AsyncOperation( - channelName = USER_EVENTS_TOPIC, - description = "Publish user events", - message = @AsyncMessage( - name = "UserEvent", - contentType = "application/json", - messageId = "uuid" - ), - payloadType = UserEvent.class - ) - ) - @KafkaAsyncOperationBinding(bindingVersion = "1.0.0") - void publish(User user, UserEvent userEvent) { - kafkaProducer.send(USER_EVENTS_TOPIC, user.uuid().toString(), userEvent.toJson()); + @AsyncPublisher( + operation = @AsyncOperation( + channelName = USER_EVENTS_TOPIC, + description = "Publish user events", + message = @AsyncMessage( + name = "UserEvent", + contentType = JsonFormat.CONTENT_TYPE, + messageId = "id" + ), + payloadType = UserEvent.class + ) + ) + @KafkaAsyncOperationBinding(bindingVersion = "1.0.0") + void publish(final User user, final String eventType) { + final var cloudEvent = CloudEventBuilder + .v1() + .withId(UUID.randomUUID().toString()) + .withType(eventType) + .withSource(URI.create(CLOUD_EVENT_SOURCE)) + .withTime(OffsetDateTime.now(ZoneOffset.UTC)) + .withData("application/json", serializeData(new UserEventData(user.uuid().toString()))) + .build(); + + kafkaProducer.send(USER_EVENTS_TOPIC, user.uuid().toString(), + new String(EVENT_FORMAT.serialize(cloudEvent), StandardCharsets.UTF_8)); + } + + private byte[] serializeData(final UserEventData userEventData) { + try { + return objectMapper.writeValueAsBytes(userEventData); + } catch (JsonProcessingException exception) { + throw new KafkaException("Cannot serialize CloudEvent user payload", exception); + } } } diff --git a/application/src/test/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListenerTest.java b/application/src/test/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListenerTest.java index 5e7d49c..b243ba5 100644 --- a/application/src/test/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListenerTest.java +++ b/application/src/test/java/br/com/helpdev/sample/adapters/input/kafka/UserEventListenerTest.java @@ -1,58 +1,99 @@ package br.com.helpdev.sample.adapters.input.kafka; +import java.net.URI; +import java.nio.charset.StandardCharsets; +import java.time.OffsetDateTime; +import java.util.Objects; import java.util.UUID; +import static org.junit.jupiter.api.Assertions.assertThrows; import static org.mockito.Mockito.verify; import static org.mockito.Mockito.verifyNoInteractions; import static org.mockito.Mockito.verifyNoMoreInteractions; -import static org.mockito.Mockito.when; +import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.extension.ExtendWith; -import org.mockito.InjectMocks; import org.mockito.Mock; import 
org.mockito.junit.jupiter.MockitoExtension; -import com.fasterxml.jackson.core.JsonProcessingException; import com.fasterxml.jackson.databind.ObjectMapper; -import br.com.helpdev.sample.adapters.input.kafka.dto.UserEventDto; +import io.cloudevents.core.builder.CloudEventBuilder; +import io.cloudevents.core.format.EventFormat; +import io.cloudevents.core.provider.EventFormatProvider; +import io.cloudevents.jackson.JsonFormat; + +import br.com.helpdev.sample.adapters.input.kafka.dto.UserEventDataDto; import br.com.helpdev.sample.core.ports.input.UserEnricherPort; @ExtendWith(MockitoExtension.class) class UserEventListenerTest { - @Mock - private ObjectMapper objectMapper; + private static final String CLOUD_EVENT_SOURCE = "urn:helpdev:sample:user"; + + private static final EventFormat EVENT_FORMAT = Objects.requireNonNull( + EventFormatProvider.getInstance().resolveFormat(JsonFormat.CONTENT_TYPE)); @Mock private UserEnricherPort userEnricherPort; - @InjectMocks private UserEventListener userEventListener; - @Test - void testListen_UserCreatedEvent() throws JsonProcessingException { - UserEventDto userEventDto = new UserEventDto(UserEventDto.EVENT_CREATED, UUID.randomUUID().toString()); - String message = "{\"event\":\"" + UserEventDto.EVENT_CREATED + "\",\"uuid\":\"" + userEventDto.uuid() + "\"}"; + private ObjectMapper objectMapper; + + @BeforeEach + void setUp() { + objectMapper = new ObjectMapper(); + userEventListener = new UserEventListener(objectMapper, userEnricherPort); + } - when(objectMapper.readValue(message, UserEventDto.class)).thenReturn(userEventDto); + @Test + void testListen_UserCreatedEvent() throws Exception { + final var userUuid = UUID.randomUUID(); + final var message = cloudEventMessage("br.com.helpdev.sample.user.created", userUuid.toString()); userEventListener.listen(message); - verify(userEnricherPort).enrichUser(UUID.fromString(userEventDto.uuid())); + verify(userEnricherPort).enrichUser(userUuid); verifyNoMoreInteractions(userEnricherPort); } @Test - void testListen_UserEventIgnored() throws JsonProcessingException { - UserEventDto userEventDto = new UserEventDto("EVENT_UPDATED", UUID.randomUUID().toString()); - String message = "{\"event\":\"OTHER_EVENT\",\"uuid\":\"" + userEventDto.uuid() + "\"}"; - - when(objectMapper.readValue(message, UserEventDto.class)).thenReturn(userEventDto); + void testListen_UserEventIgnored() throws Exception { + final var message = cloudEventMessage("br.com.helpdev.sample.user.address.updated", UUID.randomUUID().toString()); userEventListener.listen(message); verifyNoInteractions(userEnricherPort); } -} \ No newline at end of file + + @Test + void testListen_UserCreatedEventWithoutDataShouldThrowException() { + final var cloudEvent = CloudEventBuilder + .v1() + .withId(UUID.randomUUID().toString()) + .withType("br.com.helpdev.sample.user.created") + .withSource(URI.create(CLOUD_EVENT_SOURCE)) + .withTime(OffsetDateTime.now()) + .build(); + + final var message = new String(EVENT_FORMAT.serialize(cloudEvent), StandardCharsets.UTF_8); + + assertThrows(IllegalArgumentException.class, () -> userEventListener.listen(message)); + verifyNoInteractions(userEnricherPort); + } + + private String cloudEventMessage(final String eventType, final String userUuid) throws Exception { + final var cloudEvent = CloudEventBuilder + .v1() + .withId(UUID.randomUUID().toString()) + .withType(eventType) + .withSource(URI.create(CLOUD_EVENT_SOURCE)) + .withTime(OffsetDateTime.now()) + .withData("application/json", objectMapper.writeValueAsBytes(new 
UserEventDataDto(userUuid))) + .build(); + + return new String(EVENT_FORMAT.serialize(cloudEvent), StandardCharsets.UTF_8); + } +} diff --git a/application/src/test/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcherTest.java b/application/src/test/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcherTest.java index d48d9d3..e49d317 100644 --- a/application/src/test/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcherTest.java +++ b/application/src/test/java/br/com/helpdev/sample/adapters/output/kafka/UserEventDispatcherTest.java @@ -1,52 +1,127 @@ package br.com.helpdev.sample.adapters.output.kafka; +import java.nio.charset.StandardCharsets; import java.time.LocalDate; +import java.time.OffsetDateTime; +import java.util.Objects; import java.util.UUID; +import static org.junit.jupiter.api.Assertions.assertEquals; +import static org.junit.jupiter.api.Assertions.assertNotNull; +import static org.junit.jupiter.api.Assertions.assertThrows; +import static org.mockito.ArgumentMatchers.any; +import static org.mockito.Mockito.eq; +import static org.mockito.Mockito.mock; import static org.mockito.Mockito.verify; +import static org.mockito.Mockito.verifyNoInteractions; import static org.mockito.Mockito.verifyNoMoreInteractions; +import static org.mockito.Mockito.when; import org.junit.jupiter.api.BeforeEach; import org.junit.jupiter.api.Test; import org.junit.jupiter.api.extension.ExtendWith; -import org.mockito.InjectMocks; +import org.mockito.ArgumentCaptor; import org.mockito.Mock; import org.mockito.junit.jupiter.MockitoExtension; +import org.springframework.kafka.KafkaException; import org.springframework.kafka.core.KafkaTemplate; +import com.fasterxml.jackson.core.JsonProcessingException; +import com.fasterxml.jackson.databind.ObjectMapper; + +import io.cloudevents.SpecVersion; +import io.cloudevents.core.format.EventFormat; +import io.cloudevents.core.provider.EventFormatProvider; +import io.cloudevents.jackson.JsonFormat; + import br.com.helpdev.sample.core.domain.entities.User; import br.com.helpdev.sample.core.domain.vo.Email; @ExtendWith(MockitoExtension.class) class UserEventDispatcherTest { + private static final String CLOUD_EVENT_SOURCE = "urn:helpdev:sample:user"; + + private static final EventFormat EVENT_FORMAT = Objects.requireNonNull( + EventFormatProvider.getInstance().resolveFormat(JsonFormat.CONTENT_TYPE)); + @Mock private KafkaTemplate kafkaProducer; - @InjectMocks private UserEventDispatcher userEventDispatcher; + private ObjectMapper objectMapper; + private User user; @BeforeEach void setUp() { final var userUuid = UUID.randomUUID(); + objectMapper = new ObjectMapper(); + userEventDispatcher = new UserEventDispatcher(kafkaProducer, objectMapper); user = new User(1L, userUuid, "John Doe", Email.of("john.doe@example.com"), LocalDate.of(2000, 1, 1), null); } @Test - void testSendUserCreatedEvent() { + void testSendUserCreatedEvent() throws Exception { userEventDispatcher.sendUserCreatedEvent(user); - verify(kafkaProducer).send("user-events", user.uuid().toString(), UserEvent.ofCreated(user.uuid()).toJson()); - verifyNoMoreInteractions(kafkaProducer); + assertPublishedEvent("br.com.helpdev.sample.user.created"); } @Test - void testSendUserAddressUpdatedEvent() { + void testSendUserAddressUpdatedEvent() throws Exception { userEventDispatcher.sendUserAddressUpdatedEvent(user); - verify(kafkaProducer).send("user-events", user.uuid().toString(), UserEvent.ofUpdated(user.uuid()).toJson()); + 
assertPublishedEvent("br.com.helpdev.sample.user.address.updated"); + } + + @Test + void testSendUserCreatedEvent_WhenSerializationFailsShouldThrowKafkaException() throws Exception { + final var failingObjectMapper = mock(ObjectMapper.class); + userEventDispatcher = new UserEventDispatcher(kafkaProducer, failingObjectMapper); + + when(failingObjectMapper.writeValueAsBytes(any(UserEventData.class))).thenThrow(new JsonProcessingException("boom") { + private static final long serialVersionUID = 1L; + }); + + assertThrows(KafkaException.class, () -> userEventDispatcher.sendUserCreatedEvent(user)); + verifyNoInteractions(kafkaProducer); + } + + @Test + void testUserEventRecordShouldExposeCloudEventFields() { + final var time = OffsetDateTime.now(); + final var userEventData = new UserEventData(user.uuid().toString()); + final var userEvent = new UserEvent("1.0", "event-id", CLOUD_EVENT_SOURCE, "br.com.helpdev.sample.user.created", time, + "application/json", userEventData); + + assertEquals("1.0", userEvent.specversion()); + assertEquals("event-id", userEvent.id()); + assertEquals(CLOUD_EVENT_SOURCE, userEvent.source()); + assertEquals("br.com.helpdev.sample.user.created", userEvent.type()); + assertEquals(time, userEvent.time()); + assertEquals("application/json", userEvent.datacontenttype()); + assertEquals(user.uuid().toString(), userEvent.data().userUuid()); + } + + private void assertPublishedEvent(final String expectedType) throws Exception { + final var payloadCaptor = ArgumentCaptor.forClass(String.class); + + verify(kafkaProducer).send(eq("user-events"), eq(user.uuid().toString()), payloadCaptor.capture()); verifyNoMoreInteractions(kafkaProducer); + + final var cloudEvent = EVENT_FORMAT.deserialize(payloadCaptor.getValue().getBytes(StandardCharsets.UTF_8)); + + assertEquals(SpecVersion.V1, cloudEvent.getSpecVersion()); + assertNotNull(cloudEvent.getId()); + assertEquals(expectedType, cloudEvent.getType()); + assertEquals(CLOUD_EVENT_SOURCE, cloudEvent.getSource().toString()); + assertEquals("application/json", cloudEvent.getDataContentType()); + assertNotNull(cloudEvent.getTime()); + assertNotNull(cloudEvent.getData()); + + final var data = objectMapper.readValue(cloudEvent.getData().toBytes(), UserEventData.class); + assertEquals(user.uuid().toString(), data.userUuid()); } -} \ No newline at end of file +} diff --git a/docs/adr/0004-use-cloudevents-as-the-kafka-event-contract.md b/docs/adr/0004-use-cloudevents-as-the-kafka-event-contract.md new file mode 100644 index 0000000..fce28f7 --- /dev/null +++ b/docs/adr/0004-use-cloudevents-as-the-kafka-event-contract.md @@ -0,0 +1,51 @@ +# 4. Use CloudEvents as the Kafka Event Contract + +## Context + +The application already uses Kafka to publish domain events on the `user-events` topic, but the current payload is a custom JSON shape that mixes transport metadata and business data. That makes the contract ad hoc, limits interoperability with external consumers, and leaves event metadata such as source, type, identifier, and content type without a standard representation. + +Issue #3 asks the project to implement the CloudEvents specification using the Java SDK, so the team needs a durable decision for how Kafka events should be represented from now on. + +## Decision + +We will publish and consume Kafka events as **CloudEvents v1.0 in structured JSON format**. 
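+
+For illustration, a user-created message in structured mode is expected to look like the sketch below; the `id`, `time`, and `userUuid` values are illustrative:
+
+```json
+{
+  "specversion": "1.0",
+  "id": "ef5f318c-4b7c-4fd7-a661-56293d8b8a91",
+  "source": "urn:helpdev:sample:user",
+  "type": "br.com.helpdev.sample.user.created",
+  "time": "2026-04-12T21:33:04Z",
+  "datacontenttype": "application/json",
+  "data": {
+    "userUuid": "f0f8cf3e-e856-4d61-a613-44f5df7742ca"
+  }
+}
+```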
+ +The event contract will follow these rules: + +- use the Java CloudEvents SDK to build and parse events +- serialize messages as `application/cloudevents+json` +- use the CloudEvent `type` attribute to represent the domain event kind +- use the CloudEvent `data` attribute for the business payload +- keep the Kafka record key tied to the user identifier so ordering semantics stay unchanged for a given user + +## Consequences + +- Kafka events become self-describing and interoperable with CloudEvents-aware tooling and consumers. +- Event metadata is normalized through standard attributes instead of custom JSON conventions. +- The async contract becomes easier to document consistently in AsyncAPI because the envelope is explicit. +- Publishers and listeners must map between domain payloads and the CloudEvents envelope. +- Consumers of the existing custom JSON payload need to understand the new CloudEvents envelope. + +## Alternatives Considered + +- **Keep the current custom JSON payload**: simpler short term, but it keeps the contract proprietary and pushes metadata conventions into custom code. +- **Use CloudEvents binary mode over Kafka headers**: valid and efficient, but it spreads the contract across headers and payload, which makes the sample harder to inspect, test, and document than a self-contained structured JSON message. + +## Rationale + +Structured JSON CloudEvents gives the template a standards-based event contract without changing the existing topic topology or ordering behavior. It keeps each Kafka message self-contained, which fits the template's goals of clarity, learnability, and generated documentation while still aligning with the CloudEvents specification and SDK. + +## Date + +2026-04-12 + +## Status + +Accepted + +## Links + +- [ADR 0003: Use Kafka for Event Streaming](0003-use-kafka-for-event-streaming.md) +- [Issue #3](https://github.com/helpdeveloper/java-architecture-template/issues/3) +- [CloudEvents Specification](https://cloudevents.io/) +- [CloudEvents Java SDK](https://github.com/cloudevents/sdk-java) diff --git a/docs/adr/README.md b/docs/adr/README.md index 8701be6..d30f912 100644 --- a/docs/adr/README.md +++ b/docs/adr/README.md @@ -42,6 +42,7 @@ Follow these steps when creating a new ADR: | 0001 | [Adopt REST for API Communication](0001-adopt-rest-for-api-communication.md) | Accepted | 2024-12-31 | | 0002 | [Use MySQL as the Primary Database](0002-use-mysql.md) | Accepted | 2024-12-31 | | 0003 | [Use Kafka for Event Streaming](0003-use-kafka-for-event-streaming.md) | Accepted | 2024-12-31 | +| 0004 | [Use CloudEvents as the Kafka Event Contract](0004-use-cloudevents-as-the-kafka-event-contract.md) | Accepted | 2026-04-12 | --- diff --git a/docs/adr/README.pt.md b/docs/adr/README.pt.md index 0d0a028..057beec 100644 --- a/docs/adr/README.pt.md +++ b/docs/adr/README.pt.md @@ -40,6 +40,7 @@ Siga estes passos ao criar um novo ADR: | 0001 | [Adotar REST para Comunicação de API](0001-adopt-rest-for-api-communication.md) | Aceito | 2024-12-31 | | 0002 | [Usar MySQL como Banco de Dados Primário](0002-use-mysql.md) | Aceito | 2024-12-31 | | 0003 | [Usar Kafka para Streaming de Eventos](0003-use-kafka-for-event-streaming.md) | Aceito | 2024-12-31 | +| 0004 | [Usar CloudEvents como Contrato de Eventos do Kafka](0004-use-cloudevents-as-the-kafka-event-contract.md) | Aceito | 2026-04-12 | --- @@ -53,4 +54,3 @@ Consulte o arquivo [Modelo de ADR](template.md) para criar novos ADRs. 
 - [O que é um ADR?](https://adr.github.io/)
 - [Modelos e Exemplos de ADR](https://github.com/joelparkerhenderson/architecture_decision_record)
-

From 9228eae569bf43c44a1e996d33b78647f95a7799 Mon Sep 17 00:00:00 2001
From: Guilherme Biff Zarelli
Date: Mon, 13 Apr 2026 10:06:55 -0300
Subject: [PATCH 3/4] Update README skill references and add repo skills

Add skill references across the main README sections, replace the
ADR-only section with the repository docs flow, and introduce dedicated
skills for Flyway decoupled migrations and observability.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 .agents/skills/flyway-decoupled/SKILL.md | 54 ++++++++++++++++++++++++
 .agents/skills/observability/SKILL.md    | 54 ++++++++++++++++++++++++
 AGENTS.md                                |  2 +
 README.md                                | 38 +++++++++++++++--
 README.pt.md                             | 37 ++++++++++++++--
 5 files changed, 177 insertions(+), 8 deletions(-)
 create mode 100644 .agents/skills/flyway-decoupled/SKILL.md
 create mode 100644 .agents/skills/observability/SKILL.md

diff --git a/.agents/skills/flyway-decoupled/SKILL.md b/.agents/skills/flyway-decoupled/SKILL.md
new file mode 100644
index 0000000..582366a
--- /dev/null
+++ b/.agents/skills/flyway-decoupled/SKILL.md
@@ -0,0 +1,54 @@
+---
+name: flyway-decoupled
+description: Use when changing schema migrations, Flyway container wiring, or startup ordering for the template's decoupled migration flow.
+---
+
+# Flyway Decoupled
+
+Use when a schema or startup change must preserve this repository's pattern of running Flyway outside the Spring Boot process.
+
+## Destination
+
+- `resources/flyway/db/migration/*.sql`
+- `resources/flyway/pom.xml`
+- `resources/flyway/run-migration.sh`
+- `Dockerfile`
+- `.docker-compose-local/application.yaml`
+- `.docker-compose-local/infrastructure.yaml`
+- `README.md` only when developer-facing execution guidance changes
+
+## Use For
+
+- adding new versioned SQL migrations
+- changing Flyway runner commands, locations, or connection variables
+- adjusting startup ordering between the database, migration step, and application
+- keeping local and deployment-like migration flows reproducible
+
+## Inputs
+
+1. Schema change or migration behavior to implement
+2. Expected database URL, credentials, and migration location
+3. Whether the change affects local startup, deployment startup, or both
+4. Compatibility, rollback, or rollout expectations
+
+## Workflow
+
+1. Add or update versioned SQL files under `resources/flyway/db/migration/` using Flyway naming.
+2. Keep migration execution decoupled from the Spring Boot startup path; prefer the dedicated runner script or migration service over in-app hooks.
+3. Align `resources/flyway/pom.xml`, `run-migration.sh`, and compose environment variables when connection settings or migration locations change.
+4. Preserve explicit startup ordering so the application waits for the migration step in `.docker-compose-local/application.yaml`.
+5. Keep `.docker-compose-local/infrastructure.yaml` and the application startup flow coherent when the local stack also needs Flyway changes.
+6. If the work changes operational recovery or rollout procedure, route to `docs-runbook`; if it changes long-lived migration strategy, route to `docs-design-doc` or `docs-adr`.
+
+## Output Shape
+
+1. Versioned migration files
+2. Minimal wiring changes for Dockerfile, compose, or Flyway runner files
+3. README or runbook follow-up only when operator behavior changed
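+
+For orientation only — the repository drives Flyway through its Maven runner script, but the equivalent decoupled run can be sketched with Flyway's Java API (the environment variable names are assumptions, not taken from the compose files):
+
+```java
+import org.flywaydb.core.Flyway;
+
+public final class DecoupledMigrationSketch {
+
+    public static void main(final String[] args) {
+        // Connection settings arrive from the environment, mirroring how the
+        // compose files and run-migration.sh would pass them in.
+        final Flyway flyway = Flyway.configure()
+                .dataSource(
+                        System.getenv("DB_URL"),      // e.g. jdbc:mysql://db:3306/sample (assumed)
+                        System.getenv("DB_USER"),
+                        System.getenv("DB_PASSWORD"))
+                .locations("filesystem:db/migration") // versioned V*__*.sql scripts
+                .load();
+
+        // Runs outside the Spring Boot process; the application only starts
+        // after this step finishes, preserving the decoupled ordering.
+        flyway.migrate();
+    }
+}
+```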
+
+## Rules
+
+- Do not move schema migration responsibility back into the application startup unless the architecture intentionally changes.
+- Keep migrations append-only and versioned; prefer new scripts over rewriting applied ones.
+- Keep connection settings and migration locations aligned across compose files and runner scripts.
+- Use `make run-all` as the default validation target when startup wiring changes.
diff --git a/.agents/skills/observability/SKILL.md b/.agents/skills/observability/SKILL.md
new file mode 100644
index 0000000..a8ad0c5
--- /dev/null
+++ b/.agents/skills/observability/SKILL.md
@@ -0,0 +1,54 @@
+---
+name: observability
+description: Use when changing the local OpenTelemetry, Prometheus, Grafana, or Jaeger stack and its wiring to the application.
+---
+
+# Observability
+
+Use when telemetry collection, export, or local observability services change and the template's metrics and traces setup must stay coherent.
+
+## Destination
+
+- `.docker-compose-local/observability.yaml`
+- `.docker-compose-local/config/otel-collector.yaml`
+- `.docker-compose-local/config/prometheus.yaml`
+- `.docker-compose-local/config/ds-prometheus.yaml`
+- `.docker-compose-local/application.yaml`
+- `README.md` only when access URLs or startup guidance changes
+
+## Use For
+
+- changing OTLP endpoints or application telemetry environment variables
+- adjusting OpenTelemetry Collector pipelines, exporters, or processors
+- changing Prometheus scraping or remote-write configuration
+- updating Grafana datasource wiring or Jaeger exposure
+- troubleshooting missing local metrics, traces, or dashboards
+
+## Inputs
+
+1. The affected signal or user-visible behavior: metrics, traces, dashboards, or collector flow
+2. The config or service being changed
+3. Expected URLs, ports, networks, and dependencies
+4. Any application environment changes required for the new telemetry path
+
+## Workflow
+
+1. Start from `.docker-compose-local/application.yaml` and confirm how the app points to OTLP endpoints.
+2. Follow the signal path through `.docker-compose-local/config/otel-collector.yaml`.
+3. Keep downstream exporters and services aligned in `.docker-compose-local/observability.yaml`.
+4. Preserve Grafana `:3000`, Prometheus `:9090`, Jaeger `:16686`, and OTLP `:4317/:4318` access paths unless a deliberate change is required.
+5. Keep Prometheus and Grafana configuration in sync with the collector and service ports.
+6. If the change affects operational response or support workflows, route to `docs-runbook`; if it changes cross-cutting platform design, route to `docs-design-doc`.
+
+## Output Shape
+
+1. Minimal stack or config changes across application wiring, collector, Prometheus, Grafana, or Jaeger
+2. A clear telemetry path that remains coherent end to end
+3. README or runbook updates only when developer-visible behavior changed
+
+## Rules
+
+- Keep the telemetry path coherent end to end: app -> OTEL collector -> Jaeger or Prometheus -> Grafana.
+- Prefer config-as-code under `.docker-compose-local/` over manual container tweaks.
+- Do not change developer-facing ports casually; update docs when you do.
+- Use `make run-observability` as the default validation target, plus `make run-app` when application wiring changes.
diff --git a/AGENTS.md b/AGENTS.md
index 226ec23..943841b 100644
--- a/AGENTS.md
+++ b/AGENTS.md
@@ -62,6 +62,8 @@ Use these repository-native skills for common delivery workflows. For new produc
 - `.agents/skills/archunit-guard/SKILL.md` for diagnosing or preserving architecture rules enforced by ArchUnit
 - `.agents/skills/acceptance-scenario-scaffold/SKILL.md` for black-box acceptance coverage in the `acceptance-test/` module
 - `.agents/skills/api-doc-auditor/SKILL.md` for keeping OpenAPI and AsyncAPI output aligned with source code
+- `.agents/skills/flyway-decoupled/SKILL.md` for decoupled database migration scripts, runner wiring, and startup ordering
+- `.agents/skills/observability/SKILL.md` for local OpenTelemetry, Prometheus, Grafana, and Jaeger stack wiring
 
 ## Documentation Conventions
 
diff --git a/README.md b/README.md
index 061de1d..4844a14 100644
--- a/README.md
+++ b/README.md
@@ -17,6 +17,8 @@ CloudEvents of type `br.com.helpdev.sample.user.created` and enriches the user w
 
 ## **Architecture**
 
+**Related skill:** [`hexagon-scaffold`](.agents/skills/hexagon-scaffold/SKILL.md) for adding feature slices that follow the template's hexagonal structure.
+
 This project follows the **Hexagonal Architecture**, as proposed by Alistair Cockburn, which focuses on **decoupling the application’s core business logic from its input and output mechanisms**.
 This design principle promotes adaptability, testability, and sustainability by encapsulating the application layer (business core) and exposing defined ports for interactions with external systems.
 
@@ -65,6 +67,8 @@ acceptance-test
 
 ### **Architecture Tests**
 
+**Related skill:** [`archunit-guard`](.agents/skills/archunit-guard/SKILL.md) for preserving and evolving the repository's architecture rules safely.
+
 This architecture is enforced by [**ArchUnit tests**](application/src/test/java/br/com/helpdev/sample/ArchitectureTest.java) to ensure the project's compliance with the defined structure.
 These tests validate the project's adherence to the Hexagonal Architecture principles,
@@ -76,6 +80,8 @@ _Read more about: [Garantindo a arquitetura de uma aplicação sem complexidade]
 
 ### **Acceptance Tests**
 
+**Related skill:** [`acceptance-scenario-scaffold`](.agents/skills/acceptance-scenario-scaffold/SKILL.md) for Docker-based black-box scenarios in the `acceptance-test/` module.
+
 To ensure robust testing, the **acceptance-test** module encapsulates the application within a Docker image and executes integration tests in an environment that closely mimics the real-world behavior of the application.
 This approach guarantees homogeneity in the application modules by restricting unit tests to the main application module, while handling integration tests separately within the acceptance-test module.
@@ -145,6 +151,8 @@ environment.
 
 ### **The Flyway Database Migration Tool**
 
+**Related skill:** [`flyway-decoupled`](.agents/skills/flyway-decoupled/SKILL.md) for versioned migrations, Flyway runner wiring, and startup ordering.
+
 To ensure better startup performance and avoid concurrency issues in Kubernetes environments, **Flyway** has been implemented as a decoupled database migration tool.
 This design enables the migration process to run independently of the application.
@@ -160,6 +168,9 @@ This approach enhances deployment reliability and maintains a clean separation o
 
 You can see a sample of how to execute it in: [application docker-compose file](.docker-compose-local/application.yaml).
 
 ### **OpenAPI**
+
+**Related skill:** [`api-doc-auditor`](.agents/skills/api-doc-auditor/SKILL.md) for keeping generated API documentation aligned with the source code.
+
 This project uses **Springdoc OpenAPI** to automatically document REST endpoints.
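+
+As a hedged illustration (this endpoint and DTO are assumptions, not the repository's actual controller), Springdoc derives the OpenAPI model from plain Spring MVC annotations, and `@Operation`/`@ApiResponse` only enrich what is already scanned:
+
+```java
+import io.swagger.v3.oas.annotations.Operation;
+import io.swagger.v3.oas.annotations.responses.ApiResponse;
+import org.springframework.http.ResponseEntity;
+import org.springframework.web.bind.annotation.PostMapping;
+import org.springframework.web.bind.annotation.RequestBody;
+import org.springframework.web.bind.annotation.RestController;
+
+@RestController
+class UserControllerSketch {
+
+    record UserRequest(String name) {} // hypothetical request body
+
+    // Springdoc picks this endpoint up automatically; the annotations below
+    // only add human-readable metadata to the generated specification.
+    @Operation(summary = "Create a user", description = "Persists the user and publishes a user.created CloudEvent.")
+    @ApiResponse(responseCode = "201", description = "User created")
+    @PostMapping("/users")
+    ResponseEntity<Void> createUser(@RequestBody final UserRequest request) {
+        // delegate to the input port here
+        return ResponseEntity.status(201).build();
+    }
+}
+```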
 
 🔗 [Official OpenAPI site](https://swagger.io/specification/)
 
@@ -171,6 +182,9 @@ After starting the application, access:
 - **OpenAPI specification in JSON**: [http://localhost:8080/v3/api-docs](http://localhost:8080/v3/api-docs)
 
 ### **AsyncAPI**
+
+**Related skill:** [`api-doc-auditor`](.agents/skills/api-doc-auditor/SKILL.md) for keeping async contracts and generated docs aligned with the source code.
+
 This project uses **Springwolf** to document asynchronous events (Kafka, RabbitMQ, etc.) with **AsyncAPI**.
 Kafka messages on the `user-events` topic follow the **CloudEvents structured JSON** format (`application/cloudevents+json`).
@@ -189,6 +203,8 @@ functionalities:
 
 #### Observability Services
 
+**Related skill:** [`observability`](.agents/skills/observability/SKILL.md) for the local OpenTelemetry, Prometheus, Grafana, and Jaeger stack.
+
 See the stack: [docker-compose-observability.yaml](.docker-compose-local/observability.yaml)
 
 - **Grafana**: Visualization and monitoring dashboard, available at [http://localhost:3000](http://localhost:3000).
@@ -206,12 +222,26 @@ These services are orchestrated using Docker Compose to ensure seamless setup an
 
-## **Architectural Decision Records (ADR)**
+## **Docs**
+
+**Related skills:** [`docs`](.agents/skills/docs/SKILL.md) as the entrypoint, then [`docs-spec`](.agents/skills/docs-spec/SKILL.md), [`docs-adr`](.agents/skills/docs-adr/SKILL.md), [`docs-design-doc`](.agents/skills/docs-design-doc/SKILL.md), [`docs-runbook`](.agents/skills/docs-runbook/SKILL.md), and [`docs-selective-persistence`](.agents/skills/docs-selective-persistence/SKILL.md).
+
+This template treats documentation as an active part of delivery, guided by repository skills instead of ad-hoc files. The entry point is
+`.agents/skills/docs/SKILL.md`, which decides whether the change needs durable documentation, whether an existing document should be updated, or whether
+explicitly no new document is needed.
+
+The flow works like this:
 
-The project includes a dedicated folder for **Architectural Decision Records (ADR)**, located in the `docs/adr` directory. This folder documents key
-architectural decisions made throughout the project, providing context, rationale, and implications for these choices.
+1. Clarify scope first through definition, spec, or plan mode.
+2. Route the work to the right artifact:
+   - **Spec** for scope, scenarios, constraints, and acceptance criteria (`docs/specs/` when created).
+   - **ADR** for long-lived architectural decisions and guardrails in [`docs/adr/`](./docs/adr/README.md).
+   - **Design Doc** for non-trivial structure, integrations, migrations, or risks in [`docs/design/`](./docs/design/README.md).
+   - **Runbook** for operations, rollout, rollback, support, and incident handling (`docs/runbooks/` when created).
+3. When required, ADRs and design docs are written and aligned before implementation starts.
+4. After planning or execution, `docs-selective-persistence` decides what remains durable and what should stay transient.
 
-To learn more about the ADRs and explore the documented decisions, refer to the [ADR README](./docs/adr/README.md).
+This keeps documentation lean, decision-oriented, and connected to execution instead of turning every discussion into a permanent artifact.
 
 ## **Contribute**
 
diff --git a/README.pt.md b/README.pt.md
index 0be271a..c99c477 100644
--- a/README.pt.md
+++ b/README.pt.md
@@ -16,6 +16,8 @@ Neste template, fornecemos um **endpoint de cadastro de usuário** que publica u
 
 ## **Architecture**
 
+**Skill relacionada:** [`hexagon-scaffold`](.agents/skills/hexagon-scaffold/SKILL.md) para adicionar novas fatias de funcionalidade seguindo a estrutura hexagonal do template.
+
 Este projeto segue a **Arquitetura Hexagonal**, conforme proposta por **Alistair Cockburn**, focando em **desacoplar a lógica de negócio principal da aplicação de seus
 mecanismos de entrada e saída**. Esse princípio de design promove **adaptabilidade, testabilidade e sustentabilidade**, encapsulando a camada de aplicação (núcleo de
 negócio) e expondo portas definidas para interação com sistemas externos.
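+
+A minimal sketch (hypothetical names, not extracted from the repository) of how input ports, output ports, and a package-private use case fit together in this style:
+
+```java
+// Input port: the contract that driving adapters (e.g. a REST controller) call.
+interface CreateUserUseCase {
+    User create(User user);
+}
+
+// Output port: what the core requires from driven adapters (e.g. persistence).
+interface UserRepositoryPort {
+    User save(User user);
+}
+
+// Package-private implementation: reachable from outside only through its port.
+class CreateUserService implements CreateUserUseCase {
+
+    private final UserRepositoryPort repository;
+
+    CreateUserService(final UserRepositoryPort repository) {
+        this.repository = repository;
+    }
+
+    @Override
+    public User create(final User user) {
+        // Business rules live here, free of framework types.
+        return repository.save(user);
+    }
+}
+
+record User(String name) {}
+```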

@@ -62,6 +64,8 @@ acceptance-test
 
 ### **Architecture Tests**
 
+**Skill relacionada:** [`archunit-guard`](.agents/skills/archunit-guard/SKILL.md) para preservar e evoluir com segurança as regras de arquitetura do repositório.
+
 Esta arquitetura é garantida por meio de testes **ArchUnit**, que validam a conformidade do projeto com os princípios da Arquitetura Hexagonal, assegurando a separação de responsabilidades e a independência da lógica de negócio central em relação aos sistemas externos.
 
 _Read more about: [Garantindo a arquitetura de uma aplicação sem complexidade](https://medium.com/luizalabs/garantindo-a-arquitetura-de-uma-aplica%C3%A7%C3%A3o-sem-complexidade-6f675653799c)_
@@ -70,6 +74,8 @@ _Read more about: [Garantindo a arquitetura de uma aplicação sem complexidade]
 
 ### **Acceptance Tests**
 
+**Skill relacionada:** [`acceptance-scenario-scaffold`](.agents/skills/acceptance-scenario-scaffold/SKILL.md) para cenários black-box baseados em Docker no módulo `acceptance-test/`.
+
 Para garantir testes robustos, o módulo **acceptance-test** encapsula a aplicação dentro de uma imagem Docker e executa testes de integração em um ambiente que imita de perto o comportamento real da aplicação.
 Essa abordagem garante a homogeneidade nos módulos da aplicação ao restringir os testes unitários ao módulo principal, enquanto lida com testes de integração separadamente no módulo acceptance-test. Esta separação garante:
@@ -135,6 +141,8 @@ Esta configuração garante uma experiência de desenvolvimento eficiente e cons
 
 ### **The Flyway Database Migration Tool**
 
+**Skill relacionada:** [`flyway-decoupled`](.agents/skills/flyway-decoupled/SKILL.md) para migrações versionadas, wiring do Flyway e ordenação de startup.
+
 Para garantir um melhor desempenho de inicialização e evitar problemas de concorrência em ambientes Kubernetes, o **Flyway** foi implementado como uma ferramenta de migração de banco de dados desacoplada. Este design permite que o processo de migração seja executado de forma independente da aplicação.
 
 Principais Características:
@@ -148,6 +156,8 @@ Essa abordagem melhora a confiabilidade da implantação e mantém uma separaç
 
 Você pode ver um exemplo de como executar em: [arquivo docker-compose da aplicação](.docker-compose-local/application.yaml).
 
 ### **OpenAPI**
+**Skill relacionada:** [`api-doc-auditor`](.agents/skills/api-doc-auditor/SKILL.md) para manter a documentação gerada da API alinhada com o código-fonte.
+
 Este projeto utiliza o **Springdoc OpenAPI** para documentar automaticamente os endpoints REST.
 
 🔗 [Site oficial da OpenAPI](https://swagger.io/specification/)
@@ -159,6 +169,8 @@ Após iniciar a aplicação, acesse:
 - **Especificação OpenAPI em JSON**: [http://localhost:8080/v3/api-docs](http://localhost:8080/v3/api-docs)
 
 ### **AsyncAPI**
+**Skill relacionada:** [`api-doc-auditor`](.agents/skills/api-doc-auditor/SKILL.md) para manter contratos assíncronos e documentação gerada alinhados com o código-fonte.
+
 Este projeto utiliza o **Springwolf** para documentar eventos assíncronos (Kafka, RabbitMQ, etc.) com **AsyncAPI**.
 As mensagens Kafka no tópico `user-events` seguem o formato **CloudEvents structured JSON** (`application/cloudevents+json`).
@@ -177,6 +189,8 @@ essenciais:
 
 #### Observability Services
 
+**Skill relacionada:** [`observability`](.agents/skills/observability/SKILL.md) para a stack local de OpenTelemetry, Prometheus, Grafana e Jaeger.
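+
+A tiny hypothetical wiring sketch (the template itself configures this through environment variables in the compose files; the endpoint below matches the stack's default OTLP gRPC port, the rest is assumed) of an application exporting traces to the local collector:
+
+```java
+import io.opentelemetry.exporter.otlp.trace.OtlpGrpcSpanExporter;
+import io.opentelemetry.sdk.OpenTelemetrySdk;
+import io.opentelemetry.sdk.trace.SdkTracerProvider;
+import io.opentelemetry.sdk.trace.export.BatchSpanProcessor;
+
+final class OtlpWiringSketch {
+
+    static OpenTelemetrySdk tracing() {
+        // Export spans over OTLP/gRPC to the local collector (port 4317).
+        final var exporter = OtlpGrpcSpanExporter.builder()
+                .setEndpoint("http://localhost:4317")
+                .build();
+
+        return OpenTelemetrySdk.builder()
+                .setTracerProvider(SdkTracerProvider.builder()
+                        .addSpanProcessor(BatchSpanProcessor.builder(exporter).build())
+                        .build())
+                .build();
+    }
+}
+```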
+
 Veja a stack: [docker-compose-observability.yaml](.docker-compose-local/observability.yaml)
 
 - **Grafana**: Visualization and monitoring dashboard, available at [http://localhost:3000](http://localhost:3000).
@@ -193,11 +207,26 @@ Veja a stack: [docker-compose-infrastructure.yaml](.docker-compose-local/infrast
 
 Esses serviços são orquestrados usando o Docker Compose para garantir configuração e operação perfeitas em um ambiente de desenvolvimento local.
 
-## **Architectural Decision Records (ADR)**
-O projeto inclui uma pasta dedicada para **Registros de Decisões Arquiteturais (ADR)**, localizada no diretório `docs/adr`. Esta pasta documenta as principais
-decisões arquiteturais tomadas ao longo do projeto, fornecendo contexto, justificativa e implicações para essas escolhas.
+## **Docs**
+
+**Skills relacionadas:** [`docs`](.agents/skills/docs/SKILL.md) como entrada, depois [`docs-spec`](.agents/skills/docs-spec/SKILL.md), [`docs-adr`](.agents/skills/docs-adr/SKILL.md), [`docs-design-doc`](.agents/skills/docs-design-doc/SKILL.md), [`docs-runbook`](.agents/skills/docs-runbook/SKILL.md) e [`docs-selective-persistence`](.agents/skills/docs-selective-persistence/SKILL.md).
+
+Este template trata a documentação como uma parte ativa da entrega, guiada por skills do repositório em vez de arquivos criados de forma ad hoc. O ponto
+de entrada é `.agents/skills/docs/SKILL.md`, que decide se a mudança precisa de documentação durável, se um documento existente deve ser atualizado, ou se
+explicitamente nenhum novo documento é necessário.
+
+O fluxo funciona assim:
+
+1. Primeiro, o escopo é esclarecido por definição, spec ou plan mode.
+2. Depois, a necessidade é roteada para o artefato correto:
+   - **Spec** para escopo, cenários, restrições e critérios de aceite (`docs/specs/`, quando existir).
+   - **ADR** para decisões arquiteturais duráveis e guardrails em [`docs/adr/`](./docs/adr/README.md).
+   - **Design Doc** para estrutura não trivial, integrações, migrações ou riscos em [`docs/design/`](./docs/design/README.md).
+   - **Runbook** para operação, rollout, rollback, suporte e incidentes (`docs/runbooks/`, quando existir).
+3. Quando necessários, ADRs e design docs são escritos e alinhados antes do início da implementação.
+4. Após o planejamento ou a execução, `docs-selective-persistence` decide o que permanece durável e o que deve continuar transitório.
 
-Para saber mais sobre os ADRs e explorar as decisões documentadas, consulte o [README do ADR](./docs/adr/README.md).
+Com isso, a documentação fica enxuta, orientada a decisão e conectada à execução, em vez de transformar toda conversa em artefato permanente.
 
 ## **Contribua**
 

From f1aaf9b67b63747a940945592448400f5aedae44 Mon Sep 17 00:00:00 2001
From: Guilherme Biff Zarelli
Date: Mon, 13 Apr 2026 11:00:15 -0300
Subject: [PATCH 4/4] Mention docs article in README

Reference the article about documentation as execution context in the
Docs section of both README variants.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
---
 README.md    | 2 ++
 README.pt.md | 2 ++
 2 files changed, 4 insertions(+)

diff --git a/README.md b/README.md
index 4844a14..68dbcba 100644
--- a/README.md
+++ b/README.md
@@ -243,6 +243,8 @@ The flow works like this:
 
 This keeps documentation lean, decision-oriented, and connected to execution instead of turning every discussion into a permanent artifact.
+
+Read more about this perspective in [Documentação na era da IA: quando a documentação vira contexto de execução](https://medium.com/@guilherme.zarelli/documenta%C3%A7%C3%A3o-na-era-da-ia-quando-a-documenta%C3%A7%C3%A3o-vira-contexto-de-execu%C3%A7%C3%A3o-cb8d6fdf84ed) _(Portuguese)_.
+
 ## **Contribute**
 
diff --git a/README.pt.md b/README.pt.md
index c99c477..3fde0ea 100644
--- a/README.pt.md
+++ b/README.pt.md
@@ -228,6 +228,8 @@ O fluxo funciona assim:
 
 Com isso, a documentação fica enxuta, orientada a decisão e conectada à execução, em vez de transformar toda conversa em artefato permanente.
 
+Leia mais em [Documentação na era da IA: quando a documentação vira contexto de execução](https://medium.com/@guilherme.zarelli/documenta%C3%A7%C3%A3o-na-era-da-ia-quando-a-documenta%C3%A7%C3%A3o-vira-contexto-de-execu%C3%A7%C3%A3o-cb8d6fdf84ed).
+
 ## **Contribua**