Dockerizing an Asp.Net Core Microservice Behind Nginx

Published: Sun May 12 2019

In this post I will show how to dockerize an Asp.Net Core microservice and host it behind an nginx reverse proxy.

This project came out of a need to revamp the way I manage comments on my blog. To avoid spam I have an approval step for comments, so I figured this would be a good opportunity to play with Asp.Net Core and Docker. The architecture I will show is a dockerized Asp.Net Core microservice hosted behind an nginx reverse proxy.

Docker

I won’t spend time on the Asp.Net code itself since there is nothing non-standard about it. Instead, let’s start by looking at the Docker part.

In my setup I am creating containers for the microservice as well as the nginx reverse proxy. Let’s start by looking at the Dockerfile for the microservice.

# Build stage: restore and publish with the full SDK image
FROM mcr.microsoft.com/dotnet/core/sdk:2.2 AS build-env

# Copy only the project files first so restored packages are cached
COPY ./comments/*.csproj comments/
COPY ./model/*.csproj model/
WORKDIR comments
RUN dotnet restore

# Copy the rest of the source and publish a Release build
COPY . ./
RUN dotnet publish comments/comments.csproj -c Release -o out

# Runtime stage: only the published output ships in the final image
FROM mcr.microsoft.com/dotnet/core/aspnet:2.2
COPY --from=build-env /comments/comments/out .
ENTRYPOINT ["dotnet", "comments.dll"]

In the Dockerfile I publish a Release build in the SDK stage and then run the generated dll in a Kestrel process on the smaller runtime image.
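Before wiring everything up with compose, the image can be built and run on its own as a sanity check. The image tag below is illustrative, and I am assuming Kestrel listens on port 8000 inside the container (the mapping used later in the compose file):

```shell
# Build the microservice image using the Dockerfile above
docker build -f Dockerfile-blog-comments -t blog-comments .

# Run a single instance, exposing Kestrel's HTTP port on the host
docker run --rm -p 8000:8000 blog-comments
```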

Kestrel, the default Asp.Net web server, is fast and production ready, but it’s not recommended to expose Kestrel directly to the internet. Instead the recommendation is to put a reverse proxy in front of it to proxy traffic to the underlying Kestrel instance(s). In my case I am configuring nginx to do just that. The nginx Dockerfile can be found below:

FROM nginx
EXPOSE 443
COPY ./localhost.cert /etc/nginx
COPY ./localhost.key /etc/nginx
COPY ./nginx.conf /etc/nginx
CMD ["nginx", "-g", "daemon off;"]

For local development I am using Docker Compose to wire up the containers. See my docker-compose.yml below:

version: '3.5'
networks:
  corp:
    driver: bridge
services:
  blog_comments_api_01:
    build:
      context: .
      dockerfile: Dockerfile-blog-comments
    container_name: blog_comments_api_01
    networks:
      - corp
    ports:
      - '8001:8001'
      - '8000:8000'
  blog_comments_api_02:
    build:
      context: .
      dockerfile: Dockerfile-blog-comments
    container_name: blog_comments_api_02
    networks:
      - corp
    ports:
      - '9001:8001'
      - '9000:8000'
  api:
    container_name: api
    networks:
      - corp
    build:
      context: .
      dockerfile: Dockerfile-nginx
    ports:
      - '443:443'
    depends_on:
      - 'blog_comments_api_01'
      - 'blog_comments_api_02'

Notice there are two instances of the blog comments microservice. This is overkill for my purposes, but I do it to show that you can do load balancing with nginx as well. Locally the containers are bridged together on a common network, which allows me to use the container names as host names.
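With the compose file in place, the whole stack can be brought up and exercised from the project root. The curl call assumes the /comments/ location that the nginx config exposes:

```shell
# Build the images and start all three containers in the background
docker-compose up --build -d

# Requests hit nginx on 443; -k accepts the self-signed certificate.
# Repeated calls are round-robined across the two API instances.
curl -k https://localhost/comments/

# Watch both instances' logs to see the load balancing in action
docker-compose logs -f blog_comments_api_01 blog_comments_api_02
```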

Nginx

Next we have to configure nginx as the reverse proxy in front of my two instances of the microservice. In addition to the Docker instance of nginx we need a config file (nginx.conf) to define the behavior of the proxy. Let’s take a look at nginx.conf below:

events {
    worker_connections 1024;
}

http {
    sendfile on;

    upstream comments {
        server blog_comments_api_01:8000;
        server blog_comments_api_02:8000;
    }

    server {
        listen 443 ssl;
        ssl_certificate localhost.cert;
        ssl_certificate_key localhost.key;
        ssl_protocols TLSv1 TLSv1.1 TLSv1.2;
        ssl_prefer_server_ciphers on;
        ssl_ciphers ECDH+AESGCM:ECDH+AES256:ECDH+AES128:DH+3DES:!ADH:!AECDH:!MD5;
        ssl_session_cache shared:SSL:40m;
        ssl_session_timeout 4h;
        add_header Strict-Transport-Security "max-age=31536000" always;

        location /comments/ {
            proxy_pass http://comments/api/Comments;
        }
    }
}

At a high level the config file sets up proxying in front of the two instances of my microservice. Note that nginx addresses both instances on port 8000: on the shared Docker network it connects by container name directly to the container port, so the 9000 host mapping for the second instance does not apply here.

The config also terminates HTTPS in front of the microservices. In this example I am using a self-signed certificate, but this is just for local testing. I am not a security expert, so I would appreciate any feedback on the security settings in this setup.
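For completeness, the localhost.cert/localhost.key pair that the nginx Dockerfile copies in can be generated with openssl. This sketch produces a self-signed certificate suitable only for local testing:

```shell
# Generate a self-signed certificate and key, valid for one year,
# with names matching those referenced in the nginx Dockerfile
openssl req -x509 -newkey rsa:2048 -nodes \
  -keyout localhost.key -out localhost.cert \
  -days 365 -subj "/CN=localhost"
```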

I invite you to follow me on Twitter.