Init chapter 4 #75

Merged 8 commits on Jan 29, 2024
63 changes: 63 additions & 0 deletions .github/workflows/chapter-4-contracts-workflow.yml
@@ -0,0 +1,63 @@
name: Contracts package workflow

on:
push:
branches: [ "main" ]
paths:
- 'Chapter-4-applying-tactical-domain-driven-design/Fitnet.Contracts/Src/Fitnet.Contracts.IntegrationEvents/**'
pull_request:
branches: [ "main" ]
paths:
- 'Chapter-4-applying-tactical-domain-driven-design/Fitnet.Contracts/Src/Fitnet.Contracts.IntegrationEvents/**'

env:
CHAPTER_DIR: 'Chapter-4-applying-tactical-domain-driven-design/Fitnet.Contracts/Src'

jobs:
build:
defaults:
run:
working-directory: ${{ env.CHAPTER_DIR }}
runs-on: ubuntu-latest
name: Build
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v3
with:
dotnet-version: 7.0.x
- name: Add GitHub NuGet Source
run: |
dotnet nuget add source --username $OWNER --password $GITHUB_TOKEN --store-password-in-clear-text --name github "https://nuget.pkg.github.com/$OWNER/index.json"
dotnet nuget list source
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
OWNER: ${{ github.repository_owner }}
- name: Restore dependencies
run: dotnet restore
- name: Build
run: dotnet build --no-restore

pack:
defaults:
run:
working-directory: ${{ env.CHAPTER_DIR }}
runs-on: ubuntu-latest
needs: build
if: github.ref == 'refs/heads/main'
name: Pack and Publish
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v3
with:
dotnet-version: 7.0.x
- name: Prepare Packages
run: dotnet nuget add source --username $OWNER --password $GITHUB_TOKEN --store-password-in-clear-text --name github "https://nuget.pkg.github.com/$OWNER/index.json"
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
OWNER: ${{ github.repository_owner }}
- name: Pack Project
run: dotnet pack Fitnet.Contracts.IntegrationEvents/Fitnet.Contracts.IntegrationEvents.csproj -c Release
- name: Publish Packages
run: dotnet nuget push "Fitnet.Contracts.IntegrationEvents/bin/Release/EvolutionaryArchitecture.Fitnet.Contracts.IntegrationEvents.*.nupkg" --source "github" --api-key ${{ secrets.GITHUB_TOKEN }}
1 change: 1 addition & 0 deletions .github/workflows/chapter-4-package-workflow.yml
@@ -0,0 +1 @@

65 changes: 65 additions & 0 deletions .github/workflows/chapter-4-workflow.yml
@@ -0,0 +1,65 @@
name: Chapter 4 Modular Monolith workflow

on:
push:
branches: [ "main" ]
paths:
- 'Chapter-4-applying-tactical-domain-driven-design/Fitnet/Src/**'
pull_request:
branches: [ "main" ]
paths:
- 'Chapter-4-applying-tactical-domain-driven-design/Fitnet/Src/**'

env:
CHAPTER_DIR: 'Chapter-4-applying-tactical-domain-driven-design/Fitnet/Src'

jobs:
build:
defaults:
run:
working-directory: ${{ env.CHAPTER_DIR }}
runs-on: ubuntu-latest

name: Build
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v3
with:
dotnet-version: 7.0.x
- name: Add GitHub NuGet Source
run: |
dotnet nuget add source --username $OWNER --password $GITHUB_TOKEN --store-password-in-clear-text --name github "https://nuget.pkg.github.com/$OWNER/index.json"
dotnet nuget list source
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
OWNER: ${{ github.repository_owner }}
- name: Restore dependencies
run: dotnet restore
- name: Build
run: dotnet build --no-restore

test:
defaults:
run:
working-directory: ${{ env.CHAPTER_DIR }}
runs-on: ubuntu-latest
name: Test
needs: build
steps:
- uses: actions/checkout@v3
- name: Setup .NET
uses: actions/setup-dotnet@v3
with:
dotnet-version: 7.0.x
- name: Add GitHub NuGet Source
run: |
dotnet nuget add source --username $OWNER --password $GITHUB_TOKEN --store-password-in-clear-text --name github "https://nuget.pkg.github.com/$OWNER/index.json"
dotnet nuget list source
env:
GITHUB_TOKEN: ${{ secrets.GITHUB_TOKEN }}
OWNER: ${{ github.repository_owner }}
- name: Restore dependencies
run: dotnet restore
- name: Test
run: dotnet test
@@ -0,0 +1,18 @@
= 1. Record architecture decisions

Date: 2023-04-15

== Problem

We need to record the architectural decisions made on this project and store them as close to the code as possible.

== Decision

We will use the concept of Architecture Decision Records, as http://thinkrelevance.com/blog/2011/11/15/documenting-architecture-decisions[described by Michael Nygard].

== Consequences

- Every time a new architecture decision is made, a new architecture decision record must be added
- Existing architecture decision records are immutable. Whenever an old record needs updating, a new record is created instead

See Michael Nygard's article, linked above. For a lightweight ADR toolset, see Nat Pryce's https://github.com/npryce/adr-tools[adr-tools].
@@ -0,0 +1,19 @@
= 2. Use one project

Date: 2023-04-15

== Problem

We need to determine the most suitable solution structure for the initial phase of our project.

== Decision

We will employ a single project structure for production code, along with two separate test projects for integration and unit tests.

== Consequences

- The straightforward organization enables team members to quickly familiarize themselves with the codebase
- By simplifying the solution architecture at the outset, the team can prioritize delivering new features over addressing architectural concerns
- This approach allows us to make architectural pattern decisions for each module as needed, responding to real requirements instead of preemptively committing to specific patterns
- If not managed properly, the single project structure could become cluttered and disorganized, necessitating timely extraction into separate projects
- As the project grows, managing and maintaining a single project may become increasingly complex, potentially requiring a transition to a more modular structure
@@ -0,0 +1,27 @@
= 3. Use modules

Date: 2023-04-15

== Problem

We need to determine an effective approach for implementing the discovered subdomains in our solution while maintaining modularity and scalability.

== Decision

We decided that, in the initial phase of the project, we will assume that each subdomain corresponds to a single bounded context. Each bounded context will be represented by a module with its own namespace, such as:

- Passes
- Contracts
- Offers
- Reports

Modules cannot access other modules by reference, ensuring that each module remains independent and self-contained.

== Consequences

- Each module encapsulates its processes within its namespace, promoting separation of concerns
- This approach enhances modularity within our monolithic application, making it easier to manage and maintain
- If needed, extracting modules into separate projects or microservices becomes straightforward, as each module is represented by a single namespace
- Developers may need time to learn how to effectively modularize a monolithic application, particularly if they have no prior experience with this approach
- Managing multiple independent modules may increase complexity, particularly when it comes to communication and data sharing between modules
- If significant changes need to be made across the entire application, such as implementing a new API or database technology, this approach may result in slower development and deployment times, as changes will need to be made to each module individually
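As a sketch, the module boundaries might look like this in code (the root namespace and type names are assumptions, not taken from the repository):

```csharp
namespace EvolutionaryArchitecture.Fitnet.Passes;

// Everything the Passes module needs lives under its own namespace.
// It must not reference types from the Contracts, Offers or Reports
// namespaces directly; cross-module interaction happens via events.
internal sealed class Pass
{
    public Guid Id { get; init; }
    public DateTimeOffset ValidTo { get; init; }
}
```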
@@ -0,0 +1,19 @@
= 4. Use separate database schemas

Date: 2023-04-15

== Problem

We need to determine an effective approach for storing and retrieving data in our application while maintaining modularity and scalability.

== Decision

We decided to use a single relational database, divided into separate schemas for each module. Each module's data will be fully encapsulated within its own schema, preventing any shared data between modules.

== Consequences

- By isolating data within each module's schema, we prevent potential data entanglement between modules, reducing the risk of future complications
- As each schema can be easily extracted into a separate database if needed, we can scale individual schemas based on their specific requirements without affecting other modules
- By dividing the database into smaller schemas, developers can focus on a more manageable number of tables within each module (e.g., 25 tables per schema instead of 100 tables in a single schema)
- Managing separate schemas for each module may result in additional maintenance efforts, such as updating and migrating schemas individually
- Accessing or aggregating data across multiple modules may become more complex, requiring careful consideration of inter-module communication and data sharing strategies
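Assuming EF Core is used for data access (a sketch, not the repository's actual code; the context and schema names are illustrative), pinning a module to its own schema is a one-liner:

```csharp
using Microsoft.EntityFrameworkCore;

// Hypothetical DbContext for the Passes module: every table it maps is
// created under the 'passes' schema, so nothing is shared with other modules.
internal sealed class PassesDbContext : DbContext
{
    public PassesDbContext(DbContextOptions<PassesDbContext> options)
        : base(options) { }

    protected override void OnModelCreating(ModelBuilder modelBuilder) =>
        modelBuilder.HasDefaultSchema("passes");
}
```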
@@ -0,0 +1,18 @@
= 5. Use vertical slices

Date: 2023-04-15

== Problem

We need to determine an efficient way to structure and organize the application logic in our codebase.

== Decision

We decided to adopt a vertical slices architecture, wherein each business process is represented entirely within its own namespace. This approach eliminates the need for traditional technical folders like Controllers, Entities, Commands, and Queries. Instead, all related code for a specific process is grouped together within the corresponding namespace.

== Consequences

- With all related code organized in a single namespace, it becomes easier to extract or refactor functionalities as needed
- Understanding the codebase becomes more straightforward when related code is grouped together, as opposed to being spread across multiple namespaces
- The vertical slices architecture emphasizes business processes rather than technical aspects, ensuring that the codebase remains aligned with the business domain
- Grouping code by process may result in some code duplication when there is shared functionality across multiple processes
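A hypothetical directory layout for one module under this convention (process and file names are illustrative):

```text
Passes/
  RegisterPass/
    RegisterPassEndpoint.cs   // HTTP endpoint for the process
    RegisterPassRequest.cs    // request model
    RegisterPassHandler.cs    // application logic
  ExpirePass/
    ExpirePassEndpoint.cs
    ExpirePassHandler.cs
```

Everything a process needs sits in one folder and namespace, rather than being split across Controllers, Commands, and Entities folders.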
@@ -0,0 +1,20 @@
= 6. Use Docker

Date: 2023-04-15

== Problem

We need a consistent way to manage dependencies, simplify application deployment, and ensure that our application runs consistently across different environments.

== Decision

We decided to use Docker, a containerization platform that enables us to package our application and its dependencies into lightweight, portable containers. Docker containers are self-contained, easily shareable, and provide a consistent environment for our application regardless of the underlying infrastructure.

== Consequences

- Containers encapsulate the application and its dependencies, making it easier to manage and maintain
- Docker ensures that our application runs consistently across different environments (development, testing, and production) by packaging it with its runtime environment
- Containers can be easily deployed to any environment that supports Docker, streamlining the deployment process and reducing potential deployment issues
- Sharing Docker images or Dockerfiles allows team members to quickly set up their development environment with the correct dependencies and configurations
- Developers may need time to learn Docker and its best practices
- Introducing Docker adds another layer of complexity to our application infrastructure, which could potentially result in new challenges and issues
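An illustrative multi-stage Dockerfile for a .NET 7 API (project and file names are assumptions, not the repository's actual Dockerfile):

```dockerfile
# Build stage: restore, compile and publish the app with the full SDK.
FROM mcr.microsoft.com/dotnet/sdk:7.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish Fitnet/Fitnet.csproj -c Release -o /app

# Runtime stage: only the ASP.NET runtime and the published output.
FROM mcr.microsoft.com/dotnet/aspnet:7.0
WORKDIR /app
COPY --from=build /app .
ENTRYPOINT ["dotnet", "Fitnet.dll"]
```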
@@ -0,0 +1,19 @@
= 7. Use in-memory event bus

Date: 2023-04-15

== Problem

Our project is in its early stages and is currently deployed as a single deployment unit. We need a way for components to communicate and interact with each other without tight coupling, while also ensuring high maintainability. We want to focus on the business processes and interactions rather than spending time choosing an out-of-process message broker; that decision can be deferred.

== Decision

After evaluating our options, we have decided to implement an in-memory event bus. This will allow us to manage interactions between components in a flexible way.

We have chosen to use the MediatR library to build our in-memory event bus as it is a quick and easy implementation. In addition, we will create an abstraction layer over the event bus to provide flexibility to switch to an external event bus in the future if needed.

== Consequences

- By implementing an in-memory event bus, our components will be loosely coupled and better able to communicate and interact with each other. This will enable us to develop and deploy our system in a more scalable and flexible way.
- One consequence of using an in-memory event bus instead of an external message broker is that we cannot retain messages and persist them for later retrieval. In an external message broker, messages can be persisted and stored for a specified period of time, enabling them to be replayed or reprocessed if necessary. In an in-memory event bus, messages exist only in memory and are lost when the event bus is shut down or restarted.
- Overall, we believe that the benefits of implementing an in-memory event bus during the early stages of our project outweigh the potential risks and challenges.
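The abstraction layer over MediatR mentioned above might be sketched as follows (the `IEventBus` interface is an assumption; `IPublisher` and `INotification` are real MediatR types):

```csharp
using MediatR;

// Hypothetical abstraction over the in-memory bus: switching to an external
// broker later only requires a new IEventBus implementation, not changes
// to the modules that publish events.
public interface IEventBus
{
    Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
        where TEvent : INotification;
}

internal sealed class InMemoryEventBus : IEventBus
{
    private readonly IPublisher _publisher; // MediatR's in-process publisher

    public InMemoryEventBus(IPublisher publisher) => _publisher = publisher;

    public Task PublishAsync<TEvent>(TEvent @event, CancellationToken ct = default)
        where TEvent : INotification =>
        _publisher.Publish(@event, ct);
}
```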
@@ -0,0 +1,23 @@
= 8. Use internal sealed as default

Date: 2023-04-15

== Problem

We need to establish a default access level and inheritance strategy for our classes to improve code maintainability, reduce the risk of unintended inheritance, and encourage proper encapsulation.

== Decision

We decided to use the `internal sealed` modifiers as the default for our classes. By doing this, we ensure that:

- Classes are only accessible within the same assembly, providing better encapsulation and reducing the risk of unintended external usage
- Classes are sealed, preventing unintended inheritance and potential issues arising from subclassing

When creating a new class, developers should use the `internal sealed` modifiers by default, unless there is a specific reason to allow inheritance or external access.

== Consequences

- Improved encapsulation - limiting class access to the same assembly helps prevent unintended external usage and reduces the potential for unexpected side effects
- Sealing classes by default ensures that developers explicitly decide when to allow inheritance, reducing the risk of unintended inheritance and its associated problems
- By preventing inheritance by default, developers are encouraged to use composition over inheritance, which can lead to more flexible and maintainable code
- Developers may need time to adjust to this new convention and understand when and how to override the default access level and inheritance behavior
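A minimal sketch of the convention (the class is illustrative, not from the repository):

```csharp
// Visible only inside its own assembly (internal) and not inheritable (sealed):
// the default shape for any new class under this convention.
internal sealed class PassExpirationPolicy
{
    public bool IsExpired(DateTimeOffset validTo, DateTimeOffset now) =>
        validTo < now;
}
```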
@@ -0,0 +1,24 @@
= 9. Select testing strategy

Date: 2023-04-15

== Problem

We need to decide on an effective testing strategy for our backend API to ensure the quality, reliability, and maintainability of our application.

== Decision

We decided to adopt a testing strategy that combines both unit and integration tests for our backend API. This approach will allow us to validate individual components and their interactions within the system.

- We will write unit tests to validate the behavior of individual components, such as classes and methods. These tests will be focused on the functionality of a single component, isolating it from external dependencies using techniques like mocking or stubbing

- We will write integration tests to validate the interactions between different components and subsystems, such as APIs, databases, and external services. These tests will verify that the various parts of our system work together correctly

== Consequences

- A robust testing strategy ensures that our code meets the desired functionality and adheres to established coding standards
- Regular testing helps identify and fix issues early in the development process, reducing the likelihood of defects making it to production
- A comprehensive test suite makes it easier to refactor code and add new features with confidence, ensuring that existing functionality is not inadvertently affected
- With a reliable test suite, developers can quickly identify and resolve issues, enabling more efficient development cycles and reducing the risk of introducing regressions
- Developers may need time to learn best practices for writing effective unit and integration tests
- As the codebase evolves, tests may need to be updated or refactored to remain relevant and effective
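A unit test following this strategy might look like the sketch below (xUnit is an assumption, as is the class under test):

```csharp
using Xunit;

// Hypothetical unit test: validates a single component in isolation,
// with no database, network, or other external dependency.
public sealed class PassExpirationPolicyTests
{
    [Fact]
    public void Pass_is_expired_after_its_end_date()
    {
        var policy = new PassExpirationPolicy(); // hypothetical class under test

        var expired = policy.IsExpired(
            validTo: DateTimeOffset.Parse("2023-01-01T00:00:00Z"),
            now: DateTimeOffset.Parse("2023-02-01T00:00:00Z"));

        Assert.True(expired);
    }
}
```

Integration tests, by contrast, would spin up the API and its database (for example in containers) and exercise the endpoints end to end.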
@@ -0,0 +1,22 @@
= 10. Select integration style between passes and offers modules

Date: 2023-05-13

== Problem

In a certain business process, there is a requirement for communication between passes and offers.
For example, when a pass expires, an offer is proposed to the pass owner.
There are several integration styles available for implementing this requirement, including synchronous and asynchronous calls.

== Decision

After careful consideration, messaging was chosen as the communication method for implementing this requirement, because the business process did not require simultaneous state changes for pass expiration and proposing a new offer.

Introducing a brief delay between pass expiration and offer processing was deemed acceptable, and beneficial for achieving high maintainability and modularity of the system.

== Consequences

- By using messaging, the coupling between the passes and offers modules can be reduced, resulting in a more modular system that is easier to maintain.
- Messaging can improve the maintainability of the system by allowing for flexible and independent updates to each module.
- However, messaging can introduce a delay between the pass expiry event and the generation of the new offer, due to the time required for message processing.
- Messaging also introduces additional points of failure, such as message processing failures, which need to be closely managed and monitored to ensure the system functions correctly.
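With the in-memory event bus from ADR 7, the integration can be sketched as follows (event and handler names are assumptions; `INotification` and `INotificationHandler` come from MediatR):

```csharp
using MediatR;

// Published by the Passes module when a pass expires.
public sealed record PassExpiredEvent(Guid PassId) : INotification;

// Lives in the Offers module: reacts to the event asynchronously.
internal sealed class ProposeOfferWhenPassExpired
    : INotificationHandler<PassExpiredEvent>
{
    public Task Handle(PassExpiredEvent @event, CancellationToken ct)
    {
        // A brief delay between pass expiration and offer creation
        // is acceptable by design; the modules stay loosely coupled.
        return Task.CompletedTask;
    }
}
```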
@@ -0,0 +1,21 @@
= 11. Turn on static code analysis

Date: 2023-09-04

== Problem

We have been experiencing an increasing number of bugs and code quality issues in our project. Manual code reviews are time-consuming and error-prone, and we need a more systematic approach to catch these issues early in the development process.

== Decision

After careful consideration, we have decided to turn on static code analysis for all our software projects. We will leverage automated tools and integrate them into our continuous integration (CI) pipeline to analyze code for potential issues, adherence to coding standards, and security vulnerabilities.

== Consequences

- Static code analysis will help us catch code quality issues, bugs, and security vulnerabilities at an early stage of development, reducing the likelihood of these issues reaching production
- The tools will enforce coding standards and best practices across all projects, leading to more consistent and maintainable code
- Developers can focus more on writing code, as the analysis tools will automatically identify issues, reducing the need for manual code reviews
- Over time, the quality of our codebase is expected to improve, leading to fewer defects and maintenance challenges
- Identifying and addressing security vulnerabilities early in the development process will enhance the security of our software
- There may be a learning curve for some team members as they become accustomed to using static code analysis tools. We will provide training and resources to help them adapt
- Configuring and integrating static code analysis tools into our CI pipeline may require some initial effort, but this investment is expected to pay off in the long run
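In a .NET solution, much of this can be switched on with MSBuild properties shared across projects; an illustrative `Directory.Build.props` (the exact settings are assumptions, not the repository's configuration):

```xml
<!-- Illustrative Directory.Build.props: applies to every project in the repo. -->
<Project>
  <PropertyGroup>
    <!-- Fail the build (and the CI pipeline) on any warning. -->
    <TreatWarningsAsErrors>true</TreatWarningsAsErrors>
    <!-- Enable the recommended set of built-in .NET analyzers. -->
    <AnalysisLevel>latest-recommended</AnalysisLevel>
    <!-- Enforce .editorconfig code-style rules at build time. -->
    <EnforceCodeStyleInBuild>true</EnforceCodeStyleInBuild>
  </PropertyGroup>
</Project>
```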