CryptoCoinBlock is a scalable cryptocurrency data management system designed to import and manage blockchain data for multiple coins, including Bitcoin, Dogecoin, Dashcoin, and others. This solution utilizes clean architectural principles, MediatR for CQRS, and a domain-driven approach to encapsulate business logic effectively.
- Multi-Coin Support: Import and query blocks for Bitcoin, Dogecoin, Dashcoin, and other cryptocurrencies.
- Command and Query Separation: Uses MediatR for clean separation of business logic.
- Extensible Architecture: Built for easy extension and maintainability.
- Generic Methods: Efficient and reusable methods are implemented to handle operations across multiple coin types.
- Async Import Endpoint: A single unified endpoint allows importing of blocks asynchronously for all supported coins.
- Rate Limiting Middleware: Middleware is included to enforce rate limits on API requests, ensuring fair usage and protecting the service from abuse.
- Data Transfer Objects (DTOs) and AutoMapper: Simplifies data mapping between layers using DTOs for clean data structures and AutoMapper to streamline transformations.
- API Gateway Support: CM.APIGateway enables unified access to different cryptocurrency services through a single entry point, providing routing and transformation capabilities.
- API Layer: Handles incoming HTTP requests and routes them to application services.
- Application Layer: Contains commands, queries, and handlers using MediatR.
- Domain Layer: Manages core business logic and domain rules.
- Infrastructure Layer: Provides integration with data storage and third-party services.
- DTO Project: A dedicated project to define clean data structures used to transfer information between layers, ensuring separation of concerns and consistent data models.
- API Gateway (CM.APIGateway): Provides centralized routing for all API requests to backend services, simplifying access and management.
- CQRS (Command Query Responsibility Segregation): Separates commands (state changes) from queries (data retrieval).
- Dependency Injection (DI): Ensures loose coupling of components.
- Mediator Pattern: Manages communication between handlers using MediatR.
- Domain-Driven Design (DDD): Encapsulates business logic in domain models.
- Data Mapping with AutoMapper: Streamlines object-to-object mapping to avoid boilerplate code.
- API Gateway Pattern: Centralized entry point for managing requests to backend services.
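The CQRS and mediator patterns above can be sketched with MediatR roughly as follows. This is an illustrative sketch only; the type, property, and handler names are hypothetical and not taken from the CryptoCoinBlock source:

```csharp
using System;
using System.Collections.Generic;
using System.Threading;
using System.Threading.Tasks;
using MediatR;

// Command: a state change (importing a block for a given coin).
public record ImportBlockCommand(string Coin, string BlockHash) : IRequest<Guid>;

public class ImportBlockHandler : IRequestHandler<ImportBlockCommand, Guid>
{
    public Task<Guid> Handle(ImportBlockCommand request, CancellationToken ct)
    {
        // In the real system, domain logic would validate the block and
        // persist it through the infrastructure layer.
        return Task.FromResult(Guid.NewGuid());
    }
}

// Query: data retrieval, kept strictly separate from commands.
public record GetHistoryQuery(string Coin) : IRequest<IReadOnlyList<string>>;
```

Controllers then only dispatch `ImportBlockCommand` or `GetHistoryQuery` through `IMediator.Send`, so the API layer never touches business logic directly.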
- .NET 8.0 SDK
- Visual Studio 2022 or VS Code
- Postman (optional) for API testing
- .NET 8
- PostgreSQL
- MediatR
- Swagger/OpenAPI
- Grafana (for monitoring logs and metrics)
- Serilog (for logging by APIGateway)
- Clone the repository:
git clone <repository-url>
cd CryptoCoinBlock
- Restore NuGet packages:
dotnet restore
- Build the solution:
dotnet build
- Run the project:
dotnet run --project CryptoCoinBlock
- The API will be available at http://localhost:5000.
To execute unit tests:
dotnet test
CM.APIGateway serves as a centralized entry point to interact with multiple cryptocurrency APIs, simplifying requests and ensuring efficient routing.
- Routing:
  - POST /api/{coin}/Import -> maps to {coin}/Import
  - GET /api/{coin}/Get -> maps to {coin}/GetHistory
- Host and Ports: Requests are forwarded to localhost:....
- Header Transformations: Ensures proper content-type settings for JSON-based requests.
- Global Configuration: RequestIdKey tracks API requests; BaseUrl is http://localhost:....
- Ocelot Configuration: The API Gateway is built using Ocelot, with settings stored in ocelot.json. The configuration includes automatic reloading ("AutoReload": true), allowing real-time updates without restarting the gateway service.

A Postman collection (CM.APIGateway.postman_collection.json) is provided to facilitate API testing through the gateway.
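An illustrative ocelot.json fragment matching the routes above might look like the following. The port numbers and the RequestIdKey value are placeholders, not the project's actual settings (which are elided in this document as `localhost:...`):

```json
{
  "Routes": [
    {
      "UpstreamPathTemplate": "/api/{coin}/Import",
      "UpstreamHttpMethod": [ "POST" ],
      "DownstreamPathTemplate": "/{coin}/Import",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5000 } ]
    },
    {
      "UpstreamPathTemplate": "/api/{coin}/Get",
      "UpstreamHttpMethod": [ "GET" ],
      "DownstreamPathTemplate": "/{coin}/GetHistory",
      "DownstreamScheme": "http",
      "DownstreamHostAndPorts": [ { "Host": "localhost", "Port": 5000 } ]
    }
  ],
  "GlobalConfiguration": {
    "RequestIdKey": "OcRequestId",
    "BaseUrl": "http://localhost:5001"
  }
}
```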
- Navigate to the project directory containing the Dockerfile:
cd CryptoCoinBlock
- Build the Docker image:
docker build -t cryptocoinblock-image .
- During the build process, the following steps are performed:
- The entire solution is copied, and NuGet packages are restored.
- Entity Framework Core tools are installed for handling migrations.
- Unit tests for both CM.Domain.Tests and CM.API.Tests are executed. If any test fails, the build process will stop.
- The solution is built in Release mode, and the output is published.
- Run the Docker container:
docker run -p 5000:5000 cryptocoinblock-image
- The application will be available at http://localhost:5000.
- Create and run the services defined in the docker-compose.yml file:
docker-compose up
- The db-migrator service, defined as part of the Docker Compose configuration, applies the necessary database migrations using the CM.Data.Migrations project before the API container starts. It ensures schema consistency by updating the PostgreSQL database defined in the ccb-postgresql service.
- POST /Bitcoin/Import: Imports a new block.
- GET /Bitcoin/GetHistory: Retrieves historical block data.
- POST /Dogecoin/Import: Imports a new block.
- GET /Dogecoin/GetHistory: Retrieves historical block data.
- POST /AllCoins/Import: Imports blocks asynchronously for all supported cryptocurrencies in a single request.
POST /AllCoins/Import
{
"isTest": false
}
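The unified import endpoint can be exercised with curl, for example (host and port match the local setup described above; adjust them for your environment):

```shell
# Invoke the unified asynchronous import on the locally running API.
curl -X POST http://localhost:5000/AllCoins/Import \
  -H "Content-Type: application/json" \
  -d '{"isTest": false}'
```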
The project includes a custom middleware to enforce API rate limits, protecting the service from excessive traffic and ensuring fair resource usage. Configuration allows adjusting request thresholds and time windows.
The RateLimitingMiddleware monitors incoming API requests and applies restrictions based on configurable thresholds. It helps prevent abuse by limiting the number of requests a client can make within a specified time frame.
Key Features:
- Tracks API usage per client.
- Sends 429 Too Many Requests responses when limits are exceeded.
- Fully configurable time window and request thresholds.

How to Enable: The middleware is registered in Program.cs using:
app.UseMiddleware<RateLimitingMiddleware>();
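A minimal sketch of what such middleware can look like is shown below. This is not the project's actual implementation; the fixed-window algorithm, the per-IP key, and the threshold values are assumptions for demonstration:

```csharp
using System;
using System.Collections.Concurrent;
using System.Threading.Tasks;
using Microsoft.AspNetCore.Http;

public class RateLimitingMiddleware
{
    // Per-client fixed window: (window start, request count). Illustrative only.
    private static readonly ConcurrentDictionary<string, (DateTime WindowStart, int Count)> _clients = new();
    private readonly RequestDelegate _next;
    private const int Limit = 100;                               // max requests per window (assumed)
    private static readonly TimeSpan Window = TimeSpan.FromMinutes(1); // window size (assumed)

    public RateLimitingMiddleware(RequestDelegate next) => _next = next;

    public async Task InvokeAsync(HttpContext context)
    {
        var client = context.Connection.RemoteIpAddress?.ToString() ?? "unknown";

        // Start a new window if the old one expired; otherwise increment the count.
        var entry = _clients.AddOrUpdate(client,
            _ => (DateTime.UtcNow, 1),
            (_, e) => DateTime.UtcNow - e.WindowStart > Window
                ? (DateTime.UtcNow, 1)
                : (e.WindowStart, e.Count + 1));

        if (entry.Count > Limit)
        {
            context.Response.StatusCode = StatusCodes.Status429TooManyRequests;
            return;                                              // short-circuit the pipeline
        }
        await _next(context);
    }
}
```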
CM.APIGateway # Centralized entry point
CryptoCoinBlock
├── CM.API # API Layer
├── CM.Application # Application Layer (Commands, Queries)
├── CM.Domain # Domain Models and Business Logic
├── CM.Infrastructure # Data Persistence and Integration
├── CM.DTO # Data Transfer Object Definitions
├── CM.API.Tests # Unit tests for the API layer
├── CM.Domain.Tests # Unit tests for the Domain layer
└── CM.Data.Migrations # Project for handling database migrations
This project contains unit tests for the API layer to ensure that controllers, middleware, and endpoint behavior work as expected. Tests cover the correct handling of requests and responses, including validations, success, and error scenarios.
This project focuses on testing the domain logic to ensure that core business rules are enforced correctly. It includes tests for domain models, value objects, and business rule validations.
This project is responsible for managing database schema changes and versioning. It uses Entity Framework Core to handle migrations and updates to the database schema. Developers can apply migrations using the following commands:
- Add a new migration:
dotnet ef migrations add <MigrationName> --project CM.Data.Migrations
- Update the database:
dotnet ef database update --project CM.Data.Migrations
- The application uses LoggerFactory to set up logging with console and debug outputs.
- Developers can extend the logging configuration by modifying the service registration in Program.cs.
- A health check endpoint is available at /health, providing a simple way to verify the application's status.
- This can be extended to include more detailed health reporting if required.
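In ASP.NET Core, such an endpoint is typically wired up as follows (illustrative; the project's actual registration in Program.cs may differ):

```csharp
// Typical ASP.NET Core health-check wiring.
builder.Services.AddHealthChecks();   // register health-check services
// ...
app.MapHealthChecks("/health");       // expose the /health endpoint
```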
- The project uses a basic CORS policy allowing all origins but restricting methods to GET and POST.
- For production, it is recommended to tighten the CORS policy to allow only trusted domains and specific HTTP methods.
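A tightened production policy can be sketched like this; the policy name and origin URL are placeholders, not the project's actual values:

```csharp
// Illustrative production-style CORS policy.
builder.Services.AddCors(options =>
{
    options.AddPolicy("Restricted", policy =>
        policy.WithOrigins("https://app.example.com")  // trusted domain only (placeholder)
              .WithMethods("GET", "POST"));            // keep the methods the API actually uses
});
// ...
app.UseCors("Restricted");
```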
The project uses Serilog for logging, providing powerful logging capabilities while maintaining performance. The configuration allows for flexible log routing and indexing:
- Console Logging: Enabled only in development to avoid performance degradation in production.
- OpenSearch: Logs are sent to OpenSearch for centralized log storage, ensuring efficient querying and visualization in tools like Grafana.
While logging can be resource-intensive, Serilog ensures high performance by:
- Asynchronous Operations: Logs are written asynchronously, reducing the impact on the application's throughput.
- Batching: Logs are sent in batches to minimize the frequency of requests to OpenSearch.
- Efficient Indexing: Time-based indexing (ccb-logs-{0:yyyy.MM.dd}) lets logs be stored and queried efficiently in OpenSearch.

Overall, the Serilog configuration is designed to ensure that logging does not significantly impact application performance while still providing detailed insights into application behavior.
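The combination of asynchronous writes, batching, and time-based indexing can be sketched as follows. This assumes the Serilog.Sinks.Async and Serilog.Sinks.OpenSearch packages; the node URI, batch size, and option names shown are illustrative, not the project's actual configuration:

```csharp
using System;
using Serilog;
using Serilog.Sinks.OpenSearch;

var loggerConfig = new LoggerConfiguration()
    // Asynchronous writes keep log I/O off the request path.
    .WriteTo.Async(a => a.OpenSearch(new OpenSearchSinkOptions(new Uri("http://localhost:9200"))
    {
        // Time-based index pattern from the project configuration.
        IndexFormat = "ccb-logs-{0:yyyy.MM.dd}",
        // Ship logs in batches to reduce request frequency to OpenSearch.
        BatchPostingLimit = 50
    }));

// Console logging only in development, to avoid production overhead.
if (builder.Environment.IsDevelopment())
    loggerConfig.WriteTo.Console();

Log.Logger = loggerConfig.CreateLogger();
```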
Grafana is integrated for real-time log monitoring. It connects to Elasticsearch and displays logs collected by Serilog.
- Visualizes Logs: View logs from Elasticsearch in an interactive Grafana dashboard.
- Real-Time Monitoring: Continuously monitor application logs.
- Pre-configured Dashboard: Import grafana.json to visualize data quickly.
Grafana is included in the docker-compose.yml to automate deployment.
- grafana.json: Import this pre-configured dashboard to visualize logs from Elasticsearch.
- Real-time Updates: Monitor application performance and logs in real time.
This project is licensed under the MIT License.