Microservices architecture has become the standard approach for building scalable, maintainable backend systems. Combined with Docker for containerization and Kubernetes for orchestration, .NET provides a robust platform for building production-grade microservices that can handle real-world workloads. This guide walks through the full lifecycle: designing a .NET microservice, packaging it in a Docker container, deploying it to Kubernetes, and configuring the essential infrastructure around it.
Whether you are decomposing a monolith or starting a new system from scratch, the patterns and configurations covered here will give you a practical foundation for running .NET microservices in production.
Microservices Architecture Principles
Before diving into implementation, it is worth grounding the discussion in the core principles that make microservices successful. Getting the architecture right matters more than any specific technology choice.
Single Responsibility. Each microservice should own one bounded context -- a well-defined piece of business functionality. An order service manages orders. An inventory service manages stock levels. A notification service sends emails and push notifications. When a service starts doing too many things, it becomes a distributed monolith with all the complexity of microservices and none of the benefits.
Loose Coupling. Services communicate through well-defined interfaces (REST APIs, gRPC, or message queues) and never share databases. If two services need the same data, they communicate through APIs or events. Sharing a database creates hidden coupling that makes independent deployment impossible.
Independent Deployment. Each service can be built, tested, and deployed independently without coordinating with other teams. This is the primary operational benefit of microservices. If your services must be deployed together, you have a distributed monolith.
Resilience. Services must handle failures in their dependencies gracefully. Network calls fail. Downstream services go down. Your service should degrade gracefully using patterns like circuit breakers, retries with exponential backoff, and timeout policies.
Observability. In a distributed system, you need structured logging, distributed tracing, and metrics collection to understand what is happening across services. Without observability, debugging production issues in a microservices architecture is nearly impossible.
With these principles in mind, let us build a .NET microservice.
Creating a .NET Microservice
We will build a simple product catalog service using ASP.NET Core Minimal APIs. This service exposes a REST API for managing products, includes health checks, and follows patterns suitable for containerized deployment.
Start by creating the project:
dotnet new webapi -n ProductCatalog   # minimal APIs are the default template since .NET 8
cd ProductCatalog
dotnet add package Npgsql.EntityFrameworkCore.PostgreSQL
dotnet add package AspNetCore.HealthChecks.NpgSql
Here is a streamlined Program.cs that sets up the service with health checks, structured logging, and a clean API:
using Microsoft.EntityFrameworkCore;

var builder = WebApplication.CreateBuilder(args);

// Database context
builder.Services.AddDbContext<CatalogDbContext>(options =>
    options.UseNpgsql(builder.Configuration.GetConnectionString("CatalogDb")));

// Health checks
builder.Services.AddHealthChecks()
    .AddNpgSql(builder.Configuration.GetConnectionString("CatalogDb")!,
        name: "database",
        tags: new[] { "ready" });

// OpenAPI
builder.Services.AddEndpointsApiExplorer();
builder.Services.AddSwaggerGen();

var app = builder.Build();

if (app.Environment.IsDevelopment())
{
    app.UseSwagger();
    app.UseSwaggerUI();
}

// Health check endpoints
app.MapHealthChecks("/healthz/live", new()
{
    Predicate = _ => false // Liveness: always returns healthy if the process is running
});

app.MapHealthChecks("/healthz/ready", new()
{
    Predicate = check => check.Tags.Contains("ready") // Readiness: checks dependencies
});

// API endpoints
app.MapGet("/api/products", async (CatalogDbContext db) =>
    await db.Products.OrderByDescending(p => p.CreatedAt).Take(50).ToListAsync());

app.MapGet("/api/products/{id:guid}", async (Guid id, CatalogDbContext db) =>
    await db.Products.FindAsync(id) is { } product
        ? Results.Ok(product)
        : Results.NotFound());

app.MapPost("/api/products", async (CreateProductRequest request, CatalogDbContext db) =>
{
    var product = new Product
    {
        Id = Guid.NewGuid(),
        Name = request.Name,
        Description = request.Description,
        Price = request.Price,
        CreatedAt = DateTime.UtcNow
    };
    db.Products.Add(product);
    await db.SaveChangesAsync();
    return Results.Created($"/api/products/{product.Id}", product);
});

app.Run();

// Models
public record CreateProductRequest(string Name, string Description, decimal Price);

public class Product
{
    public Guid Id { get; set; }
    public string Name { get; set; } = string.Empty;
    public string Description { get; set; } = string.Empty;
    public decimal Price { get; set; }
    public DateTime CreatedAt { get; set; }
}

public class CatalogDbContext : DbContext
{
    public CatalogDbContext(DbContextOptions<CatalogDbContext> options) : base(options) { }
    public DbSet<Product> Products => Set<Product>();
}
Notice the two health check endpoints: /healthz/live for liveness (is the process running?) and /healthz/ready for readiness (can the service handle requests, including database connectivity?). This distinction is critical for Kubernetes, as we will see in the deployment section.
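Once the service is running locally, you can exercise the distinction with two quick requests (assuming the service listens on port 8080; responses depend on whether the database is reachable):

```shell
curl -i http://localhost:8080/healthz/live    # 200 "Healthy" whenever the process is up
curl -i http://localhost:8080/healthz/ready   # 503 "Unhealthy" if the database check fails
```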
The configuration is externalized through appsettings.json and environment variables, which is essential for containerized deployments where configuration changes between environments:
{
  "ConnectionStrings": {
    "CatalogDb": "Host=localhost;Database=catalog;Username=postgres;Password=postgres"
  },
  "Logging": {
    "LogLevel": {
      "Default": "Information"
    }
  }
}
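The .NET configuration system maps environment variables onto these keys, with a double underscore (__) standing in for the : section separator, since colons are not valid in environment variable names on Linux. For example, the connection string and log level above can be overridden per environment without touching appsettings.json (values here are placeholders):

```shell
export ConnectionStrings__CatalogDb="Host=catalog-db;Database=catalog;Username=app;Password=changeme"
export Logging__LogLevel__Default="Warning"
dotnet ProductCatalog.dll
```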
Dockerizing the .NET Service
Docker containers provide a consistent, reproducible environment for your microservice. A well-crafted Dockerfile makes the difference between a container that is efficient and secure versus one that is bloated and vulnerable.
Here is a production-grade multi-stage Dockerfile for our .NET service:
# Stage 1: Build
FROM mcr.microsoft.com/dotnet/sdk:9.0 AS build
WORKDIR /src

# Copy project file and restore dependencies (layer caching)
COPY ProductCatalog.csproj .
RUN dotnet restore

# Copy remaining source and publish
COPY . .
RUN dotnet publish -c Release -o /app --no-restore

# Stage 2: Runtime
FROM mcr.microsoft.com/dotnet/aspnet:9.0 AS runtime
WORKDIR /app

# curl is not included in the aspnet base image; install it for the HEALTHCHECK below
RUN apt-get update \
    && apt-get install -y --no-install-recommends curl \
    && rm -rf /var/lib/apt/lists/*

# Create non-root user for security
RUN groupadd -r appuser && useradd -r -g appuser -s /bin/false appuser

# Copy published output from build stage
COPY --from=build /app .

# Set non-root user
USER appuser

# Expose port and set environment
EXPOSE 8080
ENV ASPNETCORE_URLS=http://+:8080
ENV ASPNETCORE_ENVIRONMENT=Production

# Health check at container level
HEALTHCHECK --interval=30s --timeout=5s --start-period=10s --retries=3 \
    CMD curl -f http://localhost:8080/healthz/live || exit 1

ENTRYPOINT ["dotnet", "ProductCatalog.dll"]
Key practices in this Dockerfile:
Multi-stage builds separate the build environment (SDK, 900+ MB) from the runtime environment (ASP.NET runtime, ~200 MB). The final image contains only what is needed to run the application.
Layer caching optimization. By copying the .csproj file and running dotnet restore before copying the rest of the source code, Docker caches the dependency restoration layer. Subsequent builds only re-restore if the project file changes, dramatically speeding up build times.
Non-root user. Running as root inside a container is a security risk. Creating and switching to a non-root user follows the principle of least privilege.
Container-level health check. The HEALTHCHECK instruction lets Docker monitor the container's health independently of Kubernetes. This is useful during local development and in Docker Compose environments.
Build and test the image locally:
docker build -t product-catalog:1.0 .
docker run -p 8080:8080 -e ConnectionStrings__CatalogDb="Host=host.docker.internal;Database=catalog;Username=postgres;Password=postgres" product-catalog:1.0
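To keep the build context small and avoid invalidating the cached restore layer, add a .dockerignore file alongside the Dockerfile. A minimal sketch:

```shell
# .dockerignore -- exclude build output and local-only files from the context
bin/
obj/
.git/
.vs/
Dockerfile
```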
Kubernetes Deployment Manifests
With our service containerized, we can deploy it to Kubernetes. A production deployment requires several Kubernetes resources: a Deployment, a Service, a ConfigMap, and optionally a HorizontalPodAutoscaler.
Deployment manifest (k8s/deployment.yaml):
apiVersion: apps/v1
kind: Deployment
metadata:
  name: product-catalog
  labels:
    app: product-catalog
spec:
  replicas: 3
  selector:
    matchLabels:
      app: product-catalog
  template:
    metadata:
      labels:
        app: product-catalog
        version: "1.0"
    spec:
      containers:
        - name: product-catalog
          image: registry.example.com/product-catalog:1.0
          ports:
            - containerPort: 8080
              protocol: TCP
          env:
            - name: ConnectionStrings__CatalogDb
              valueFrom:
                secretKeyRef:
                  name: catalog-db-secret
                  key: connection-string
          resources:
            requests:
              cpu: "100m"
              memory: "128Mi"
            limits:
              cpu: "500m"
              memory: "512Mi"
          livenessProbe:
            httpGet:
              path: /healthz/live
              port: 8080
            initialDelaySeconds: 10
            periodSeconds: 15
            failureThreshold: 3
          readinessProbe:
            httpGet:
              path: /healthz/ready
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 10
            failureThreshold: 3
          startupProbe:
            httpGet:
              path: /healthz/live
              port: 8080
            initialDelaySeconds: 5
            periodSeconds: 5
            failureThreshold: 12
      restartPolicy: Always
This manifest configures three replicas, resource limits to prevent a single service from consuming excessive cluster resources, and three types of probes:
- Liveness probe checks if the process is healthy. If it fails, Kubernetes restarts the container.
- Readiness probe checks if the service can handle traffic. If it fails, Kubernetes removes the pod from the Service's endpoint list (no traffic routed to it) but does not restart it.
- Startup probe gives the application time to initialize before liveness checks begin. This prevents slow-starting containers from being killed during startup.
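The deployment pulls its connection string from a Secret named catalog-db-secret. A minimal sketch of that Secret, with placeholder credentials (for real systems, prefer a dedicated secret management tool):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: catalog-db-secret
type: Opaque
stringData:
  connection-string: "Host=catalog-db;Database=catalog;Username=app;Password=changeme"
```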
Service manifest (k8s/service.yaml):
apiVersion: v1
kind: Service
metadata:
  name: product-catalog
  labels:
    app: product-catalog
spec:
  type: ClusterIP
  selector:
    app: product-catalog
  ports:
    - port: 80
      targetPort: 8080
      protocol: TCP
The ClusterIP Service provides internal service discovery. Other microservices in the cluster can reach our product catalog at http://product-catalog (or http://product-catalog.default.svc.cluster.local for the fully qualified name).
HorizontalPodAutoscaler (k8s/hpa.yaml):
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: product-catalog
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: product-catalog
  minReplicas: 3
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
    - type: Resource
      resource:
        name: memory
        target:
          type: Utilization
          averageUtilization: 80
The HPA automatically scales the deployment between 3 and 10 replicas based on CPU and memory utilization, measured against the resource requests defined in the deployment. This ensures your service can handle traffic spikes without manual intervention and scales back down during quiet periods to save resources. Note that resource-based autoscaling requires the metrics-server (or another Metrics API implementation) to be running in the cluster.
Deploy everything:
kubectl apply -f k8s/deployment.yaml
kubectl apply -f k8s/service.yaml
kubectl apply -f k8s/hpa.yaml
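After applying the manifests, a few kubectl commands confirm that the rollout succeeded and the probes are passing:

```shell
kubectl rollout status deployment/product-catalog   # blocks until all replicas are available
kubectl get pods -l app=product-catalog             # READY should show 1/1 for each pod
kubectl get hpa product-catalog                     # current vs. target utilization
```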
Service Discovery and Inter-Service Communication
In a microservices architecture, services need to find and communicate with each other. Kubernetes provides built-in service discovery through DNS. When you create a Kubernetes Service, it gets a DNS entry that other pods in the cluster can resolve.
For our .NET services, configuring inter-service communication is straightforward using HttpClient with Kubernetes DNS:
// In another service that needs to call the product catalog
builder.Services.AddHttpClient("ProductCatalog", client =>
{
    client.BaseAddress = new Uri("http://product-catalog"); // Kubernetes service name
    client.Timeout = TimeSpan.FromSeconds(10);
});
For production systems, add resilience with retry and circuit breaker policies. The Microsoft.Extensions.Http.Resilience package (built on Polly) provides a standard handler with sensible defaults:

// Requires the Microsoft.Extensions.Http.Resilience NuGet package
using Microsoft.Extensions.Http.Resilience;

builder.Services.AddHttpClient("ProductCatalog", client =>
{
    client.BaseAddress = new Uri("http://product-catalog");
})
.AddStandardResilienceHandler(options =>
{
    options.Retry.MaxRetryAttempts = 3;
    options.Retry.Delay = TimeSpan.FromMilliseconds(500);
    options.CircuitBreaker.SamplingDuration = TimeSpan.FromSeconds(30);
    options.CircuitBreaker.FailureRatio = 0.5;
    options.AttemptTimeout.Timeout = TimeSpan.FromSeconds(5);
    options.TotalRequestTimeout.Timeout = TimeSpan.FromSeconds(30);
});
This configuration retries failed requests up to three times with exponential backoff. If more than 50% of requests fail within a 30-second window, the circuit breaker opens and subsequent calls fail fast without hitting the downstream service, giving it time to recover.
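To consume the named client, resolve it through IHttpClientFactory. A minimal sketch of a typed consumer; the CatalogClient class and GetProductAsync helper are illustrative, not part of the catalog service itself:

```csharp
using System.Net;
using System.Net.Http.Json;

// Illustrative consumer of the named "ProductCatalog" client;
// register it with builder.Services.AddScoped<CatalogClient>()
public class CatalogClient
{
    private readonly IHttpClientFactory _httpClientFactory;

    public CatalogClient(IHttpClientFactory httpClientFactory) =>
        _httpClientFactory = httpClientFactory;

    public async Task<Product?> GetProductAsync(Guid id, CancellationToken ct = default)
    {
        var client = _httpClientFactory.CreateClient("ProductCatalog");
        using var response = await client.GetAsync($"/api/products/{id}", ct);
        if (response.StatusCode == HttpStatusCode.NotFound)
            return null; // treat 404 as "no such product" rather than an error
        response.EnsureSuccessStatusCode();
        return await response.Content.ReadFromJsonAsync<Product>(cancellationToken: ct);
    }
}
```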
For asynchronous communication between services, consider a message broker like RabbitMQ or Apache Kafka deployed within the cluster. Event-driven patterns decouple services temporally -- the producer does not wait for the consumer to process the message. This is especially important for operations that do not need an immediate response, such as sending notification emails after an order is placed.
Health Checks and Observability
We already implemented health check endpoints in our .NET service. Let us expand on observability, which is critical for operating microservices in production.
Structured logging with Serilog or the built-in .NET logging ensures that log entries are machine-parseable and can be aggregated in a centralized logging system like the ELK stack, Grafana Loki, or Datadog:
builder.Logging.AddJsonConsole(options =>
{
    options.IncludeScopes = true;
    options.TimestampFormat = "yyyy-MM-ddTHH:mm:ssZ";
    options.JsonWriterOptions = new()
    {
        Indented = false // Compact JSON for log aggregation
    };
});
With JSON-formatted logs written to stdout, Kubernetes can capture them and forward them to your logging infrastructure using a DaemonSet log collector like Fluent Bit.
Distributed tracing with OpenTelemetry allows you to trace a request as it flows across multiple services:
// Requires: OpenTelemetry.Extensions.Hosting, OpenTelemetry.Instrumentation.AspNetCore,
// OpenTelemetry.Instrumentation.Http, OpenTelemetry.Instrumentation.EntityFrameworkCore,
// and OpenTelemetry.Exporter.OpenTelemetryProtocol
builder.Services.AddOpenTelemetry()
    .WithTracing(tracing =>
    {
        tracing
            .AddAspNetCoreInstrumentation()
            .AddHttpClientInstrumentation()
            .AddEntityFrameworkCoreInstrumentation()
            .AddOtlpExporter(options =>
            {
                options.Endpoint = new Uri("http://otel-collector:4317");
            });
    })
    .WithMetrics(metrics =>
    {
        metrics
            .AddAspNetCoreInstrumentation()
            .AddHttpClientInstrumentation()
            .AddOtlpExporter(options =>
            {
                options.Endpoint = new Uri("http://otel-collector:4317");
            });
    });
OpenTelemetry instrumentation automatically captures trace context propagation between services, HTTP request/response metrics, database query durations, and custom business metrics. The data flows to an OpenTelemetry Collector, which can export to Jaeger, Zipkin, Grafana Tempo, or any compatible backend.
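Custom business metrics are exposed through the System.Diagnostics.Metrics API and flow through the same OTLP pipeline once the meter is registered. A sketch, where the meter name and counter are illustrative choices for this service:

```csharp
using System.Diagnostics.Metrics;

// Illustrative: a counter for created products. For it to be exported,
// register the meter name in the pipeline above with metrics.AddMeter("ProductCatalog").
public static class CatalogMetrics
{
    private static readonly Meter Meter = new("ProductCatalog", "1.0");

    public static readonly Counter<long> ProductsCreated =
        Meter.CreateCounter<long>("catalog.products.created",
            description: "Products created via the API");
}

// In the POST /api/products handler, after SaveChangesAsync:
// CatalogMetrics.ProductsCreated.Add(1);
```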
Combining health checks, structured logging, distributed tracing, and metrics gives you full visibility into your microservices architecture. When an issue arises, you can trace a request from the API gateway through every service it touches, see exactly where latency spiked or an error occurred, and correlate it with resource utilization metrics.
Moving to Production
The patterns and configurations in this guide form a solid foundation for running .NET microservices with Docker and Kubernetes. In a real production environment, you would also want to consider:
- Secret management with tools like HashiCorp Vault or Kubernetes Sealed Secrets.
- A CI/CD pipeline that builds images, runs tests, and deploys to your cluster automatically.
- An ingress controller or API gateway for external traffic routing.
- Network policies to restrict inter-service communication to only the paths you explicitly allow.
Microservices architecture is powerful but adds operational complexity. The investment pays off when your system needs to scale individual components independently, when multiple teams need to deploy at different cadences, or when different parts of your system have fundamentally different resource requirements.
At Maranatha Technologies, we help organizations design, build, and operate microservices architectures on .NET, Docker, and Kubernetes. From greenfield development to monolith decomposition, our team brings production experience across the full stack. Explore our custom software development services to see how we can help you build scalable, resilient backend systems.