No-nonsense gRPC guide for C# developers, Part Two: Secure service


source code

In the previous part, we created a Calculator micro-service which happily performs uncomplicated integer arithmetic. We were able to call that service locally and, hopefully, from a remote computer. The problem, though, is that our network exchanges are completely unprotected, so anyone with the appropriate knowledge and tools can see what numbers we are trying to multiply. Moreover, the client cannot even be sure that it receives responses from the legitimate service rather than from one controlled by hackers. Let’s mitigate that.

Public/private key cryptography refresher

So there is this public/private key security mechanism.

We won’t get deep into the details, but the main idea is that there are two keys involved with every secure message. The math behind the key pair is that one key encrypts the message, but to decrypt it you need the second key. That is, if I encrypt something with the first key, I cannot decrypt it with the same key! No way, you need the second key. The keys are also designed so that, having one key, it is impossible to derive the other, and vice versa.

If you think about it for a moment, it opens pretty powerful opportunities.

Say Lech and Klaus want to exchange some information in a secure manner. They each generate a pair of keys, one called private and the other called public (so there are 4 keys in total: Lech’s private and public and Klaus’ private and public). Lech hides his private key so only he has access to it and makes his public key accessible to everyone: he publishes it on a website, prints it in the newspaper, whatever. Klaus does the same with his pair. So the whole world can access Lech’s and Klaus’ public keys, but their respective private keys are well hidden. Consider two cases:

Lech wants to send a message to Klaus so nobody but Klaus can read it.

Easy: Lech takes Klaus’ public key from the newspaper (remember, everyone can access it!), encrypts the message with it and transfers it to Klaus. If anyone but Klaus intercepts the message, they won’t be able to do anything with it: to decrypt it, they need Klaus’ private key, the only key which can decrypt the message. That key is in Klaus’ possession and, hopefully, well guarded.
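
To make the first case concrete, here is a minimal sketch using the RSA class from System.Security.Cryptography; the variable names are purely illustrative, and real TLS adds a lot of machinery on top, but the core idea is exactly this:

using System;
using System.Security.Cryptography;
using System.Text;

class PublicKeyEncryptionDemo
{
    static void Main()
    {
        // Klaus generates his key pair; only the public half is ever shared.
        using (RSA klaus = RSA.Create(2048))
        using (RSA lechsCopyOfKlausPublicKey = RSA.Create())
        {
            // Lech gets only the public parameters (the "newspaper" copy).
            lechsCopyOfKlausPublicKey.ImportParameters(
                klaus.ExportParameters(includePrivateParameters: false));

            // Lech encrypts with Klaus' public key...
            byte[] ciphertext = lechsCopyOfKlausPublicKey.Encrypt(
                Encoding.UTF8.GetBytes("2 * 3 = 6, keep it to yourself"),
                RSAEncryptionPadding.OaepSHA256);

            // ...and only Klaus, holding the private key, can decrypt it.
            byte[] plaintext = klaus.Decrypt(ciphertext, RSAEncryptionPadding.OaepSHA256);
            Console.WriteLine(Encoding.UTF8.GetString(plaintext));
        }
    }
}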

Klaus wants to send a message to Lech, and he wants Lech to know that the message is indeed from him and nobody else.

Klaus takes the message, encrypts it with his private key and sends it to Lech. Lech can grab Klaus’ public key and decrypt the message. If that works, he can be sure it was sent by Klaus, as only Klaus’ private key could have encrypted the message this way. In essence, the message is “signed” by Klaus.
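
A sketch of the second case with the same RSA class is below. Note that real signature APIs sign a hash of the message rather than literally encrypting the whole message with the private key, but the effect is the one described above: only the private key can produce the signature, and anyone with the public key can check it.

using System;
using System.Security.Cryptography;
using System.Text;

class SignatureDemo
{
    static void Main()
    {
        byte[] message = Encoding.UTF8.GetBytes("Greetings from Klaus");

        using (RSA klaus = RSA.Create(2048))
        using (RSA anyoneWithKlausPublicKey = RSA.Create())
        {
            // Klaus produces the signature with his private key.
            byte[] signature = klaus.SignData(
                message, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            // Anyone holding only the public key can verify it.
            anyoneWithKlausPublicKey.ImportParameters(
                klaus.ExportParameters(includePrivateParameters: false));
            bool valid = anyoneWithKlausPublicKey.VerifyData(
                message, signature, HashAlgorithmName.SHA256, RSASignaturePadding.Pkcs1);

            Console.WriteLine(valid ? "Signed by Klaus" : "Not from Klaus!");
        }
    }
}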

Pretty cool stuff.

Obviously encrypting and signing can be combined, so Klaus can use his private key to sign and Lech’s public key to encrypt. This is fundamentally how it works when you do online banking from your browser (the reality is a little bit more complicated but the concept holds).

So what is this certificate my browser is talking about?

Fundamentally, the bank’s certificate is the bank’s public key. Yeah, you say, but how do we know that it is the bank’s public key and not some hacker’s public key pretending to be our bank’s? The thing is that the certificate is signed (in the sense we talked about above) by some “Certificate Authority”, and the list of trusted Certificate Authorities is pretty much baked into the browser.

gRPC supports TLS, which is a mechanism that uses private/public keys to secure the traffic. So we will need a private/public key pair, at least for our service. Instead of a bare public key we will use a certificate, which is fundamentally a public key signed by some Certificate Authority.

So, who is going to be that Certificate Authority?

Well, there are public ones which issue certificate/private key pairs on demand, but for most of them you have to pay money. Since our gRPC service is not going to be open for public access (gRPC services rarely are, as opposed to HTTPS web sites), it is perfectly valid to operate our own Certificate Authority (CA). The plan is this:

  • create a CA certificate/private key pair so we can issue and sign certificates.
  • create a service certificate/private key pair for the service and ask CA to sign it.
  • deploy the CA’s certificate to the client (perfectly valid, as it is, in essence, a public key). This way, the client will be able to ensure that it is connected to the right service, and the network traffic will be encrypted.

CloudFlare SSL toolkit

There are many ways to generate certificate/private key pairs; one particularly easy-to-use option is the widely used cfssl toolkit created by CloudFlare. You can install the toolkit on your local computer, but that might require jumping through several hoops, so to keep things relatively easy, we will use the docker image provided by CloudFlare. Make sure you have docker installed. Now, create the file docker-compose.yml with the following:

version: '3.8'
services:
  cfssl:
    image: cfssl/cfssl
    entrypoint: bash
    working_dir: /cert
    volumes:
      - ./cert:/cert

The first step, as we discussed, is to create the CA certificate and private key. Create the directory cert with a file ca-csr.json, which is a JSON file specifying the “request” to create the certificate for the CA.

mkdir cert
touch cert/ca-csr.json

Make that JSON file look like this:

{
   "CN": "My CA",
   "key": {
       "algo": "rsa",
       "size": 2048
   },
   "names": [
       {
           "C": "US",
           "S": "WA",
           "ST": "Seattle",
           "O": "grpc_csharp",
           "OU": "CA Services"
       }
   ]
}

The details are not terribly important for our purposes; it is just a bunch of attributes baked into the certificate, plus specifics for the private key to be generated. Now launch the docker image:

docker-compose run cfssl

You should end up inside the cfssl container, in the cert directory, and your ca-csr.json file should be there. Run the following:

cfssl gencert -initca ca-csr.json | cfssljson -bare ca

This will look at the settings in ca-csr.json and generate the corresponding certificate and private key. You may exit the docker container for now:

exit

If everything went well, you should see ca.pem and ca-key.pem in the cert directory. The former is the CA’s certificate, the latter its private key. Now create the configuration for the CA, so it knows how to issue certificates. Create the cert/ca-config.json file with the following:

{
    "signing": {
        "profiles": {
            "service": {
                "expiry": "8760h",
                "usages": [
                    "signing",
                    "key encipherment",
                    "server auth"
                ]
            }
        }
    }
}

Ok, the CA is set up; let’s create a certificate for the service:

touch cert/service-csr.json

Make this file look like this:

{
    "CN": "127.0.0.1",
    "hosts": [
        "localhost",
        "127.0.0.1"
    ],
    "key": {
        "algo": "rsa",
        "size": 2048
    },
    "names": [
        {
            "C": "US",
            "S": "WA",
            "ST": "Seattle",
            "O": "grpc_csharp",
            "OU": "CA Services"
        }
    ]
}

The hosts part is important; it lists the hostnames/IP addresses the certificate is valid for. When you run the service on another box, make sure it contains the hostname/IP address through which you want to reach that service from the client.
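
Jumping ahead a little: when we wire up the client below, the name it dials has to match one of these hosts entries, otherwise the TLS handshake will fail. For quick experiments only, Grpc.Core lets you override the name used for that check via a channel option. This is just a sketch; creds, host and port are the client variables we set up later in this post:

// Testing-only workaround: validate the certificate against "127.0.0.1"
// even though we are dialing the service under a different name.
var channel = new Channel(
    host,
    port,
    creds,
    new[] { new ChannelOption(ChannelOptions.SslTargetNameOverride, "127.0.0.1") });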

Launch the cfssl container again:

docker-compose run cfssl

Generate the certificate:

cfssl gencert -ca=ca.pem -ca-key=ca-key.pem -config=ca-config.json -profile=service service-csr.json | cfssljson -bare service

Assuming everything went well, exit the container:

exit

Now the cert directory should contain two extra files, service.pem and service-key.pem: the service certificate and the service private key, respectively.
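
Optionally, you can sanity-check the result from C# as well: the standard X509Certificate2 class reads the PEM files, and on reasonably recent .NET Core/.NET runtimes X509Chain can confirm that service.pem really was signed by our CA. This little console program is just a check, not part of the service:

using System;
using System.Security.Cryptography.X509Certificates;

class VerifyServiceCert
{
    static void Main()
    {
        var ca = new X509Certificate2("cert/ca.pem");
        var service = new X509Certificate2("cert/service.pem");
        Console.WriteLine($"CA:      {ca.Subject}");
        Console.WriteLine($"Service: {service.Subject}, expires {service.NotAfter}");

        var chain = new X509Chain();
        // Trust only our own CA, not the system root store.
        chain.ChainPolicy.TrustMode = X509ChainTrustMode.CustomRootTrust;
        chain.ChainPolicy.CustomTrustStore.Add(ca);
        // Our toy CA does not publish revocation lists.
        chain.ChainPolicy.RevocationMode = X509RevocationMode.NoCheck;

        Console.WriteLine(chain.Build(service)
            ? "service.pem chains to our CA"
            : "service.pem is NOT signed by our CA");
    }
}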

Enable the certificate on the service

Hop into Service/Service.cs and, just after parsing the port, read the certificate and the private key file:

// ...
int port = int.Parse(args[0]);
// new
var pair = new KeyCertificatePair(
   File.ReadAllText("cert/service.pem"),
   File.ReadAllText("cert/service-key.pem")
);

You will need to make sure you have using System.IO at the top of your file. Immediately after reading the certificate, create the credentials and pass them to the server:

var pair = new KeyCertificatePair(
    File.ReadAllText("cert/service.pem"),
    File.ReadAllText("cert/service-key.pem")
);
// new
var creds = new SslServerCredentials(new[] { pair });
var server = new Server
{
    Services = { Svc.BindService(new MyService()) },
    // changed
    Ports = { new ServerPort("0.0.0.0", port, creds) }
};

Ok, that covers the service.

Set up the client with the CA certificate

As you may recall, we want the client to be aware of the CA certificate so it can verify the signature on the certificate coming from the service. Make these changes in Client/Client.cs:

// new
var creds = new SslCredentials(
    File.ReadAllText("cert/ca.pem")
);
// changed
var channel = new Channel(
    host,
    port,
    creds
);

Don’t forget to add using System.IO. Give it a shot. Start the service:

dotnet run -p Service 9000

In a different terminal run the client:

dotnet run -p Client localhost 9000 17 + 25

Everything should work as before, but this time the traffic between the client and the service is protected by TLS, based on the certificates we generated.

Notice that the discussion above does not cover how the service authenticates and authorizes the client. There are multiple options for that. One popular approach is so-called mutual authentication, where the client is also issued a certificate and a private key, so the service can validate that certificate, retrieve the identity of the client and perform various authorization checks (a rough sketch of the wiring follows below). There are other options too, but they are beyond the scope of this series. In the next part we are going to discuss gRPC streaming, a highly efficient pattern for certain scenarios.
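
For the curious, here is what that mutual TLS wiring could look like with the same Grpc.Core types we used above. This is only a sketch: it assumes you have issued cert/client.pem and cert/client-key.pem from the same CA, which we did not do in this part.

// Service side: present our certificate and require a client certificate
// signed by our CA.
var serverCreds = new SslServerCredentials(
    new[]
    {
        new KeyCertificatePair(
            File.ReadAllText("cert/service.pem"),
            File.ReadAllText("cert/service-key.pem"))
    },
    File.ReadAllText("cert/ca.pem"),  // roots used to validate client certificates
    true);                            // force client authentication

// Client side: keep trusting the CA and additionally present our own certificate.
var clientCreds = new SslCredentials(
    File.ReadAllText("cert/ca.pem"),
    new KeyCertificatePair(
        File.ReadAllText("cert/client.pem"),       // hypothetical, not generated above
        File.ReadAllText("cert/client-key.pem"))); // hypothetical, not generated above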

