Category: Servers

  • sshc: a simple command-line SSH manager

    I regularly have to SSH into a lot of servers, both personal and work-related, and remembering their hostnames or IPs has always been a chore. I have tried apps like Termius, but they often come with their own set of drawbacks. Many of these solutions are paid, which can be a significant investment if you’re just looking for a simple way to manage your connections. They also often require extensive setup and configuration, which is time-consuming when you just want to quickly connect to your servers.

    What I really needed was a lightweight, free solution that I could set up quickly and start using right away. I wanted something that would help me organize my SSH connections without the overhead of a full-featured (and often overpriced) application.

    That’s why I decided to create my own solution: a simple npm package that addresses these exact pain points. My goal was to develop a tool that’s easy to install, requires minimal setup, and gets you connected to your servers with minimal fuss.

    In this post, I’ll introduce you to this package and show you how it can simplify your SSH workflow without breaking the bank or requiring a considerable effort to set up.

    Installing simple-sshc

    simple-sshc requires Node.js version 14.0.0 or above. If you have not already, you can install Node.js and npm from nodejs.org.

    Once you have Node.js and npm set up, run this command to install simple-sshc globally:

    $ npm install -g simple-sshc

    You can verify the installation using:

    $ sshc version
    sshc version 1.0.1

    Connecting to a server

    You can SSH into your saved hosts by simply invoking the sshc command:
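    The labels shown come from your saved connections; a typical session (the exact prompt wording here is illustrative) looks like this:

    $ sshc
    ? Select a connection: myserver
    Connecting to user@192.168.1.100...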

    Features

    Adding connections

    Easily add new SSH connections to your list with a simple interactive prompt:

    $ sshc add
    Enter the label: myserver 
    Username: user
    Hostname (IP address): 192.168.1.100

    The CLI guides you through the process, ensuring you don’t miss any crucial details. Once added, your connection is saved and ready for quick access.

    Listing all connections

    View all your saved connections at a glance:

    $ sshc list
    The sshc list command and its output.

    Modifying existing connections

    Need to update a connection? You can use sshc modify to do that.

    $ sshc modify
    ? Select the connection to modify: myserver
    ? New username: newuser
    ? New hostname (IP address): 192.168.1.101

    Removing connections

    Cleaning up is just as easy:

    $ sshc remove 
    ? Select the connection you wish to remove: oldserver 
    ? Are you sure you want to remove this connection? Yes

    GitHub

    You can download the source code from GitHub: https://github.com/danish17/sshc/

  • How to Cache POST Requests in Nginx

    Caching can substantially reduce load times and bandwidth usage, thereby enhancing the overall user experience. It allows the application to store the results of expensive database queries or API calls, enabling instant serving of cached data instead of re-computing or fetching it from the source each time. In this tutorial, we will explore why and how to cache POST requests in Nginx.

    There are only two hard things in Computer Science: cache invalidation and naming things.

    — Phil Karlton

    Caching POST requests: potential hazards

    By default, responses to POST requests are not cached. POST is neither safe nor, in general, idempotent, so caching it can lead to undesired and unexpected consequences. Sensitive data these requests may contain, such as passwords, risks exposure to other users and potential threats when cached. Additionally, POST requests often carry large payloads, such as file uploads, which can consume significant memory or storage when stored. These potential hazards are why caching POST requests is generally not advised.

    Source: https://restfulapi.net/idempotent-rest-apis/

    Although it may not be a good idea to cache POST requests, RFC 2616 allows POST responses to be cached provided they include appropriate Cache-Control or Expires header fields.
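    For example, you can check whether a response carries explicit freshness information by inspecting its headers; the URL and output below are illustrative:

    $ curl -s -D - -o /dev/null https://example.com/api/resource
    HTTP/1.1 200 OK
    Cache-Control: public, max-age=3600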

    The question: why would you want to cache a POST request?

    The decision to cache a POST request typically depends on the impact of the POST request on the server. If the POST request can trigger side effects on the server beyond just resource creation, it should not be cached. However, a POST request can also be idempotent/safe in nature. In such instances, caching is considered safe.

    Why and how to cache POST requests

    Recently, while working on a project, I found myself designing a simple fallback mechanism to ensure responses to requests even when the backend was offline. The request itself had no side effects, though the returned data might change infrequently. Thus, using caching made sense.

    I did not want to use Redis for two reasons:

    1. I wanted to keep the approach simple, without involving ‘too many’ moving parts.
    2. Redis invalidates on expiry: it does not automatically serve stale cache data once a key expires or is evicted.

    As we were using Nginx, I decided to go ahead with this approach (see figure).

    The frontend makes a POST request to the server, which runs Nginx as a reverse proxy. While the backend services are up, Nginx caches their responses for a set time; if the services go down, Nginx serves the cached response (even if it is stale) from its store.

    http {
        ...
        # Define cache zone
        proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my-cache:20m max_size=1g inactive=3h use_temp_path=off;
        ...
    }
    
    location /cache/me {
        proxy_set_header X-Real-IP  $remote_addr;
        proxy_set_header Host $host;
        proxy_set_header X-Forwarded-For $remote_addr;
        proxy_pass http://service:3000;
    
        # Use cache zone defined in http
        proxy_cache my-cache;
        proxy_cache_lock on;
        
        # Cache for 3h if the status code is 200/201/302
        proxy_cache_valid 200 201 302 3h;
        
        # Serve stale cached responses when the backend errors out or times out
        proxy_cache_use_stale error timeout updating http_500 http_502 http_503 http_504;
        # Cache POST responses too (GET and HEAD are always included implicitly)
        proxy_cache_methods POST;
    
        # ! This is important
        proxy_cache_key "$request_uri|$request_body";
    
        proxy_ignore_headers "Cache-Control" "Expires" "Set-Cookie";
    
        # Add header to the response
        add_header X-Cached $upstream_cache_status;
    }

    Things to consider

    In proxy_cache_key "$request_uri|$request_body", we use the request URI as well as the request body as the identifier for the cached response. This was important in my case because the request payload and response contained sensitive information, and we needed the response to be cached on a per-user basis (a quick verification follows the list below). This, however, comes with a few implications:

    1. Hashing a large request body may degrade performance.
    2. Increased memory/storage usage.
    3. Even a slightly different request body will cause Nginx to cache a new response, which may cause redundancy and data mismatch.
    4. $request_body is only populated when Nginx has buffered the body in memory; very large bodies that spill to a temporary file leave it empty, so tune client_body_buffer_size accordingly.
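    A quick way to verify the behavior is to send the same POST twice and inspect the X-Cached header we added in the config (the URL and payload here are illustrative):

    # The first request should populate the cache (expect X-Cached: MISS)
    $ curl -s -D - -o /dev/null -X POST -d '{"user": 1}' https://example.com/cache/me
    # Repeating the identical request should then return X-Cached: HIT
    $ curl -s -D - -o /dev/null -X POST -d '{"user": 1}' https://example.com/cache/me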

    Conclusion

    Caching POST requests in Nginx can be a viable way to enhance application performance. Despite the inherent risks associated with caching such requests, careful implementation can make this approach both safe and effective. In this tutorial, we discussed how to implement POST request caching wisely.

    Want to know how we can monitor server logs like a pro, using Grafana Loki?

    Suggested Readings

    1. Idempotent and Safe APIs
    2. Nginx Proxy Module
    3. Caching POST Requests with Varnish
  • New Relic with WordPress using Event API for better monitoring

    New Relic is a leading application performance monitoring (APM) platform that offers developers invaluable insights into the performance of their applications. APM tools provide real-time monitoring, diagnostics, and analytics capabilities that enable us to gain deep visibility into our applications, track down performance issues, and make informed decisions to improve the overall user experience. If you wish to add monitoring to your server, here is how you can use Grafana Loki to monitor your server logs. WordPress is the most popular CMS in the world, and integrating New Relic with it can help developers optimize their code, identify bottlenecks, and ensure that WordPress applications perform well under different loads and usage scenarios.

    Recently, we shipped a WordPress solution that relies heavily on a third-party API. To keep the keys and other confidential data off the client, we used the WP REST API as a relay that acts as a proxy of sorts. On the client side, we hit the WP REST API endpoints, which call the third-party API. We use Transients to cache the response on the server side, and a caching layer on the client side (using Axios and LocalStorage). The transients (server-side cache) mitigate redundant requests from different users to the third-party API, whereas the client-side cache reduces redundant requests from the same user to the site’s backend.

    In this post, we will learn how to integrate New Relic with WordPress using the Event API.

    Overview

    We could not install and configure New Relic’s PHP Agent to instrument external API calls (because the hosting platform did not allow that). Therefore, we decided to use the Event API. It is a powerful tool that allows us to send custom event data to New Relic Insights, their real-time analytics platform. Using Event API, we can capture and analyze specific events or metrics that are important to our application’s performance and operations.

    Flowchart of how Event API is triggered.

    Event API

    Using the Event API, we can programmatically send structured JSON data to New Relic Insights. The data can then be visualized and queried to gain deeper insights into our application’s behavior. This can include information such as user interactions, system events, errors, custom metrics, or any other relevant data points.

    To use Event API, we need to follow these steps:

    1. Obtain your New Relic Ingest - License API key.
    2. Obtain your Account ID.

    We have to use the following endpoint to POST to the Event API: https://insights-collector.newrelic.com/v1/accounts/{{ ACCOUNT_ID }}/events.

    The API Key needs to be set in the Headers. The JSON payload looks like this:

    {
      "eventType": "myEvent",
      "timestamp": 1652455543000,
      "applicationId": "myApp",
      "data": {
        "key1": "value1",
        "key2": "value2"
      }
    }

    The eventType is what we will use to query the data.
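    Before wiring this into WordPress, you can sanity-check the endpoint with curl (a quick test; replace the placeholders with your own Account ID and Ingest license key):

    $ curl -X POST "https://insights-collector.newrelic.com/v1/accounts/<ACCOUNT_ID>/events" \
        -H "Content-Type: application/json" \
        -H "Api-Key: <INGEST_LICENSE_KEY>" \
        -d '[{"eventType": "myEvent", "key1": "value1"}]'

    In our WordPress code, we call the external API and, when the call fails, report an incident to New Relic: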

    // External API key ($endpoint is assembled elsewhere using this key).
    $access_id = EXTERNAL_API_KEY;
    $response  = wp_safe_remote_get( $endpoint );

    // Returns the HTTP status code, or '' when $response is a WP_Error.
    $response_code = wp_remote_retrieve_response_code( $response );

    if ( is_wp_error( $response ) || 200 !== $response_code ) {
    	// Report the incident to New Relic, then surface the failure.
    	$incident = send_external_api_incident( $endpoint, (string) $response_code, $response );
    	return new WP_Error(
    		'external_api_error',
    		'External API request failed.',
    		array(
    			'actual_response'    => $response,
    			'new_relic_incident' => $incident,
    		)
    	);
    }

    The send_external_api_incident() logic:

    /**
     * Sends API Incident event to New Relic.
     *
     * @param  string          $endpoint
     * @param  string          $response_code
     * @param  array|\WP_Error $response
     * @return array|\WP_Error
     */
    function send_external_api_incident( string $endpoint, string $response_code, array|\WP_Error $response ) {
    	$base_url = 'https://insights-collector.newrelic.com/v1/accounts/' . NEW_RELIC_ACCOUNT_ID . '/events';
    	$body     = array(
    		array(
    			'eventType'    => 'ExternalApiIncident',
    			'endpoint'     => $endpoint,
    			'responseCode' => $response_code,
    			'response'     => wp_json_encode( $response ),
    		),
    	);
    	$response = wp_safe_remote_post(
    		$base_url,
    		array(
    			'headers' => array(
    				'Content-Type' => 'application/json',
    				'Api-Key'      => NEW_RELIC_INGEST_KEY,
    			),
    			'body'    => wp_json_encode( $body ),
    		)
    	);
    
    	return $response;
    }

    Checking the results

    Head over to your New Relic account and click on your application. You can use NRQL queries to query the data.

    SELECT * FROM `ExternalApiIncident` since 1 day ago


    In conclusion, integrating New Relic with a WordPress application offers a robust solution for monitoring and optimizing performance. This approach improves visibility into your application’s internal workings and helps ensure a seamless user experience by tracking and analyzing critical data points. By following the steps outlined above, you can implement it even in environments with certain restrictions, such as hosts that do not allow the PHP agent. The ability to customize event tracking and gain insights through real-time analytics is invaluable for maintaining high performance standards. Continuous monitoring and improvement are key; use these insights to keep your application running smoothly and keep refining your system’s performance.

  • Grafana Loki – How to Monitor Server Logs Like a Pro!

    If you love working with servers, you must have wanted a beautiful and efficient way to monitor your server logs. Grafana Loki does just that! Loki is a horizontally scalable, highly available log aggregation system. Inspired by Prometheus, Loki integrates well with Grafana and is an amazing tool for monitoring your server logs. It is fast because it does not index the content of the logs; instead, it attaches a set of labels to each log stream. In this tutorial, we will discuss how to set up Grafana Loki and integrate it with the Grafana Dashboard. We will also learn how to add Nginx logs to the explorer.

    The ‘Grafana’ Plan

    I want to monitor my personal server’s (let us call it the ‘source’) logs. My personal server is a Hetzner VPS running Ubuntu Server 20.04 LTS. I plan to use an Amazon EC2 free-tier t2.micro instance to serve the Grafana Dashboard over HTTP. The source will run Grafana Loki inside a Docker container on port 3100. The dashboard instance will also run Ubuntu Server 20.04 LTS.

    Grafana Loki – Flow of data

    Setting up the Grafana Dashboard

    To serve our Grafana Dashboard, we will use an Amazon EC2 free-tier t2.micro instance running Ubuntu Server 20.04 LTS. The choice of cloud provider is completely up to you; you can also set up the dashboard locally or on a Raspberry Pi, if you have one. If you do not know how to expose your Raspberry Pi to the public without a public IP, here is a guide for you. We do not need to do anything special, but make sure you allow access to port 80 (HTTP). Once that is done, connect to your instance via SSH.

    Installing Grafana

    Now, we need to install Grafana. We will install the Grafana Enterprise edition, but if you wish to go for the OSS release, you can follow this guide.

    sudo apt-get install -y apt-transport-https
    sudo apt-get install -y software-properties-common wget
    wget -q -O - https://packages.grafana.com/gpg.key | sudo apt-key add -

    To install the stable release, we need to add the repository using the following command:

    echo "deb https://packages.grafana.com/enterprise/deb stable main" | sudo tee -a /etc/apt/sources.list.d/grafana.list

    If everything goes well, we should be ready to install the grafana-enterprise package.

    sudo apt-get update
    sudo apt-get install grafana-enterprise

    Starting Grafana

    Now that Grafana is successfully installed, we need to start the service. To check if the Grafana service is loaded, we need to use the following command:

    systemctl status grafana-server.service

    We should see that the service is loaded. Let’s enable the service now.

    sudo systemctl enable grafana-server.service

    Finally, let’s start the service.

    sudo systemctl start grafana-server.service

    Let’s check if everything works well. In my case, I will navigate to 34.201.129.128:3000.

    Grafana Login Page

    The default username is admin and password is admin.

    Redirecting Port 80 to Grafana Port

    We want to access the Grafana Dashboard over HTTP, but binding Grafana to ports below 1024 would require running it as root. Upon installation, Grafana creates a grafana user and runs the service under that user, so instead we will redirect port 80 to 3000 using iptables to make Grafana accessible over HTTP.

    sudo iptables -t nat -A PREROUTING -p tcp --dport 80 -j REDIRECT --to-port 3000
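    Note that iptables rules added this way do not survive a reboot by default. One optional way to persist them on Ubuntu is the iptables-persistent package:

    sudo apt-get install iptables-persistent
    sudo netfilter-persistent save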

    Now, I will create an A record pointing to the IP so that I can access Grafana over http://dummydash.danishshakeel.me.

    The next step is to add a data source (Loki) to Grafana.

    Setting up Grafana Loki

    In order for Grafana to be able to fetch our server logs, we need to install Grafana Loki on our source. We will use Docker for this, so make sure Docker and Docker Compose are installed and working. First, let us clone the repository.

    git clone https://github.com/grafana/loki.git

    Now, let us cd into the loki/production/ directory and pull the required images.

    docker-compose pull

    This will pull three images – loki, promtail, and grafana. We are ready to spin up our containers.

    docker-compose up

    This will make Loki available via port 3100.
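    To confirm Loki is up, we can hit its readiness endpoint from the source machine:

    curl http://localhost:3100/ready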

    Adding Loki as a Data Source

    We are ready with Grafana Dashboard and Loki. Now, we need to integrate them. Head to your Grafana Dashboard > Gear Icon ⚙ > Data Source

    Grafana Dashboard – Data Source

    Click on ‘Add Data Source’ and choose Loki under Logging and Document Databases. Now, we will configure the Data Source.

    Data Source Settings

    We are all set! We should now be able to explore our logs from the dashboard.

    Exploring Logs

    To explore the logs, click on the Explore option in the sidebar. Click on the Data Source and you should see the list of log files; select one to see the logs. You can also type in your query, for example: {filename="/var/log/syslog"} will yield the logs from syslog.
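    LogQL also supports filter expressions; for example, the following query (illustrative) shows only the syslog lines that contain the word ‘error’:

    {filename="/var/log/syslog"} |= "error"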

    Exploring Data Source

    Exploring Nginx Logs

    Loki will not pick up Nginx logs out of the box; we need to configure our deployment to do that. The default configuration for promtail is located at /etc/promtail/config.yml. To inspect it, we first need to start a shell session in our promtail container. We can get a list of running containers with docker ps. Copy the promtail container’s ID and run:

    docker exec -it <container_id> bash

    We will create our own configuration to access the Nginx logs. cd into /opt/loki/ on your host machine and create a new file named promtail-config.yml.

    Add the following configuration to the file:

    server:
      http_listen_port: 9080
      grpc_listen_port: 0
    
    positions:
      filename: /tmp/positions.yaml
    
    clients:
      - url: http://loki:3100/loki/api/v1/push
    
    scrape_configs:
      - job_name: system
        static_configs:
        - targets:
            - localhost
          labels:
            job: varlogs
            __path__: /var/log/*log
      - job_name: nginx
        static_configs:
        - targets:
            - localhost
          labels:
            job: nginx
            __path__: /var/log/nginx/*log
    

    We are simply adding another job and specifying the path to our nginx logs.

    Once our configuration file has been added, we need to edit our compose file and map the configuration file from our host to the promtail container.

    ...
    ...
        promtail:
            image: grafana/promtail:master
            volumes:
              - /opt/loki/promtail-config.yml:/etc/promtail/new-config.yaml
              - /var/log:/var/log
            command: -config.file=/etc/promtail/new-config.yaml
            networks:
              - loki
    ...
    ...
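    After saving the compose file, recreate the containers so promtail starts with the new configuration. One way to do it, assuming the same compose setup:

    docker-compose up -d --force-recreate promtail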

    You should now be able to see access.log and error.log in your Grafana Explorer.

    Grafana Explorer – Nginx Logs

    There you have it! We have successfully configured Grafana and Grafana Loki to monitor our server logs. We have also learnt to configure Nginx with Promtail to serve logs to Grafana.

    Grafana Dashboard

    What’s next?

    Naturally, we want to create a centralized dashboard for all our logs as well as system metrics like disk and CPU usage. In the future, I will discuss how we can add Prometheus to Grafana and monitor our system metrics.

  • Creating an SSH Tunnel using Cloudflare Argo and Access

    I had always wanted to access my home server, running on a Raspberry Pi 4, from outside the local network. The most straightforward answer seemed to be getting a static IP from the ISP; however, neither of my ISPs would help me with that. I forgot about it for a while, but when I flashed my Pi a couple of days ago I knew I had to do it. Being able to SSH and rsync into my Pi on the fly is pretty cool! Today we will learn how to create an SSH tunnel using Cloudflare’s Argo and Access.

    I tried this script to update the Cloudflare DNS records with my public IP, along with a cron job to handle updates automatically every minute, but it did not work. It turns out that my ISPs use CGNAT, and I would have to create port-forwarding rules in the ISP’s router for this method to work, which will never be allowed. I then came across Cloudflare Argo, which lets you tunnel locally running services to Cloudflare.

    Installing Cloudflared

    Cloudflared (pronounced: cloudflare-dee) is a lightweight server-side daemon that lets you connect your infrastructure to Cloudflare. Using cloudflared, we will create an SSH tunnel. The installation is straightforward, and you can find the compatible package here. We will install the ARM cloudflared .deb package on our Raspberry Pi.

    Once downloaded, we will use dpkg to install the package.

    $ sudo dpkg -i <path_to_the_deb_package>

    We can verify the installation using this command:

    $ cloudflared -V
    cloudflared version 2021.9.2 (built 2021-09-28-1343 UTC)

    Setting up Cloudflare Access

    Next, we will create a subdomain and secure it with Cloudflare Access. Access secures SSH connections and other protocols with Cloudflare’s global network, with a Zero-Trust Approach.

    Login to your Cloudflare account and choose your domain. On the Dashboard, click on ‘Access‘.

    Cloudflare Account Dashboard

    Next, we need to create an ‘Access Policy’. Click on the ‘Create Access Policy’ button in the ‘Access Policies’ section.

    Creating a Cloudflare Access Policy

    The users will be able to attempt to gain access to ‘Raspberry Pi Server’ on pi.danishshakeel.me and each session will expire after 24 hours.

    Creating SSH Tunnel

    Before we can create a tunnel, we need to login to cloudflared.

    $ cloudflared tunnel login

    This command will output a link that you can use to authorize the Argo tunnel. Select the domain on which you wish to authorize Argo. Now, we can create an Argo SSH tunnel using the following command:

    $ cloudflared tunnel --hostname <subdomain> --url <url_to_service>

    We want to tunnel SSH on localhost to pi.danishshakeel.me. The command will look like this:

    $ cloudflared tunnel --hostname pi.danishshakeel.me --url ssh://localhost:22

    To verify, we can check our DNS records in Cloudflare. There should be an AAAA record for our subdomain.

    Cloudflare DNS Records

    Connecting to the SSH Tunnel

    In order to connect to the tunnel, we need to install cloudflared on the client. After installing, we need to run:

    $ cloudflared access ssh-config --hostname pi.danishshakeel.me

    This will output the configuration that we need to add to our SSH config file (typically ~/.ssh/config). The configuration will look like this:

    Host pi.danishshakeel.me
      ProxyCommand /opt/homebrew/bin/cloudflared access ssh --hostname %h

    Now, to connect over SSH, we run ssh username@subdomain. For my Raspberry Pi, the username is pi and the hostname is pi.danishshakeel.me. This command will also output a link that we need to use to authorize the connection.
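    For my setup, with the username and hostname above, the command is:

    $ ssh pi@pi.danishshakeel.me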

    Cloudflare Access Login

    You should be able to successfully ssh into your server. Remember that you need to start the tunnel before trying to access it.

    SSH Access to Raspberry Pi

  • Wildcard SSL Certificate on Linode using Certbot

    I recently migrated to Linode for my personal portfolio and project (proof of concept) websites. I am running Ubuntu Server 20.04 LTS on a 1GB Nanode. Most of my websites use WordPress, and I use Nginx, MariaDB, and PHP (LEMP) as my stack. I use a Multisite Network since it lets me manage all my websites from a single dashboard.

    Initially, I was using a single site, so I used Certbot to install a Let’s Encrypt SSL certificate. If you plan to host only one site on your server, you should be good to go with a single Certbot command; however, if you have, or plan to run, more than one site on your server, the process is different. Let’s learn how we can install wildcard SSL certificates on Linode.

    Generating a Token

    To let Certbot manage your DNS Records, we first need to generate an API token or Personal Access Token (PAT). To generate an API token:

    1. Log in to your Linode account
    2. Click on your Profile & Account settings
    3. Choose API Tokens
    API Tokens – Profile & Accounts – Linode

    Once you’re in, click on the ‘Create a Personal Access Token’ option.

    Create a new token that can read/write your Domain Records. Since you’ll most likely be using this token just for Certbot, you can disable all the other privileges.

    Adding a Personal Access Token – Linode

    Click on ‘Create Token’, copy the generated token, and save it somewhere safe. The token cannot be viewed again, so if you lose it, you’ll have to regenerate it.

    Now, create an .ini file to store your token. Since the file contains a secret, restrict its permissions (for example, with chmod 600). Your .ini file should look like this:

    # Linode API Credentials .ini file
    dns_linode_key = <YOUR_API_KEY>
    dns_linode_version = 4

    Installing Certbot

    Certbot is a free, open-source software tool for automatically using Let’s Encrypt certificates on manually administered websites to enable HTTPS. We’ll use the certbot package and the python3-certbot-dns-linode plugin.

    Now, we can install the Certbot.

    sudo apt install certbot python3-certbot-dns-linode
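    We can verify the installation:

    certbot --version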

    Generating Certificate

    We will not use Certbot’s automatic Nginx configuration; instead, we’ll use Certbot to generate a certificate and then manually edit our Nginx files.

    To generate a certificate:

    certbot certonly --dns-linode --dns-linode-propagation-seconds <TIME_IN_SEC> -d <YOUR_DOMAIN> -d "*.<YOUR_DOMAIN>"

    For my website, the command will look like this:

    certbot certonly --dns-linode --dns-linode-propagation-seconds 180 -d danishshakeel.me -d "*.danishshakeel.me"

    We are using ‘*’ to tell Certbot that all subdomains, such as blog.danishshakeel.me, hire.danishshakeel.me, or www.danishshakeel.me, should be able to use the certificate. --dns-linode-propagation-seconds is the time (in seconds) for which we wait for the DNS changes to propagate before asking the ACME servers to verify.

    Certbot will ask you to input the path of the .ini file which we created.

    Input the path to your Linode credentials INI file (Enter 'c' to cancel): <PATH_TO_INI_FILE>
    Waiting 180 seconds for DNS changes to propagate
    Waiting for verification...
    Cleaning up challenges

    Congratulations, we have successfully generated our certificate and chain. Note down the paths to fullchain.pem and privkey.pem (typically under /etc/letsencrypt/live/<YOUR_DOMAIN>/).

    Configuring Nginx

    Now, we can configure Nginx to use our certificate.

    options-ssl-nginx.conf

    Before we can edit our Nginx configuration, we need to ensure that options-ssl-nginx.conf exists in the /etc/letsencrypt directory. In case it does not, we can simply create it and copy the content below into it.

    # This file contains important security parameters. If you modify this file
    # manually, Certbot will be unable to automatically provide future security
    # updates. Instead, Certbot will print and log an error message with a path to
    # the up-to-date file that you will need to refer to when manually updating
    # this file.
    
    ssl_session_cache shared:le_nginx_SSL:10m;
    ssl_session_timeout 1440m;
    ssl_session_tickets off;
    
    ssl_protocols TLSv1.2 TLSv1.3;
    ssl_prefer_server_ciphers off;
    
    ssl_ciphers "ECDHE-ECDSA-AES128-GCM-SHA256:ECDHE-RSA-AES128-GCM-SHA256:ECDHE-ECDSA-AES256-GCM-SHA384:ECDHE-RSA-AES256-GCM-SHA384:ECDHE-ECDSA-CHACHA20-POLY1305:ECDHE-RSA-CHACHA20-POLY1305:DHE-RSA-AES128-GCM-SHA256:DHE-RSA-AES256-GCM-SHA384";

    Configuring Nginx Server

    Now, let’s cd into our Nginx sites-available directory:

    cd /etc/nginx/sites-available

    Now, we need to open our configuration file. I am using the default server block as my configuration.

    sudo vi /etc/nginx/sites-available/default

    Inside the server block, we need to add a few lines:

    server {
        ...

        listen [::]:443 ssl ipv6only=on;
        listen 443 ssl;
        ssl_certificate <FULLCHAIN_PEM_PATH>;
        ssl_certificate_key <PRIVKEY_PEM_PATH>;
        include /etc/letsencrypt/options-ssl-nginx.conf;
    }
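    Finally, test the configuration and reload Nginx for the changes to take effect:

    sudo nginx -t
    sudo systemctl reload nginx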

    Voila! You have successfully configured Let’s Encrypt Wildcard SSL Certificate on Nginx using Certbot.
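    Because the DNS plugin stores your credentials, renewals should work unattended; you can confirm with a dry run:

    sudo certbot renew --dry-run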

    Footnotes:

    The process is similar for other providers, provided the provider is supported by Certbot. Here is the list of supported providers.